Columns: url (stringlengths 15–1.13k) | text (stringlengths 100–1.04M) | metadata (stringlengths 1.06k–1.1k)
http://www.proofwiki.org/wiki/Baire-Osgood_Theorem
# Baire-Osgood Theorem ## Theorem Let $X$ be a Baire space. Let $Y$ be a metrizable topological space. Let $f: X \to Y$ be a mapping which is the pointwise limit of a sequence $\left \langle{f_n}\right\rangle$ in $C \left({X, Y}\right)$. Let $D \left({f}\right)$ be the set of points where $f$ is discontinuous. Then $D \left({f}\right)$ is a meager subset of $X$. ## Proof Let $d$ be a metric on $Y$ generating its topology. Using the oscillation we have, following the convention that we omit the metric when writing the oscillation, that: $\displaystyle D \left({f}\right) = \bigcup_{n \mathop = 1}^\infty \left\{{x \in X: \omega_f \left({x}\right) \ge \frac 1 n}\right\}$ which is a countable union of closed sets. Since we have this expression for $D \left({f}\right)$, the claim follows if we can prove that for all $\epsilon > 0$ the closed set: $F_\epsilon = \left\{{x \in X : \omega_f \left({x}\right) \ge 5 \epsilon}\right\}$ has empty interior, and is therefore nowhere dense: every set in the union above is of this form (take $\epsilon = \frac 1 {5 n}$), and a countable union of nowhere dense sets is meager. Let $\epsilon > 0$ be given and consider the sets: $\displaystyle A_n = \bigcap_{i, j \mathop \ge n} \left\{{x \in X: d \left({f_i \left({x}\right), f_j \left({x}\right)}\right) \le \epsilon}\right\}$ which are closed because $d$ and the $f_i$ are continuous. Because $\left \langle{f_n}\right\rangle$ is pointwise convergent, it is pointwise Cauchy with respect to any metric generating the topology on $Y$, so $\displaystyle \bigcup_{n \mathop = 1}^\infty A_n = X$. Given a nonempty open $U \subseteq X$ we wish to show that $U \nsubseteq F_\epsilon$. Consider the sequence $\left \langle{A_n \cap U}\right\rangle$ of closed subsets of $U$. Since the union of these is all of $U$ and $U$, being an open subspace of a Baire space, is itself a Baire space, one of them, say $A_k \cap U$, must have nonempty interior, so there is a nonempty open $V \subseteq A_k \cap U$. Because $U$ is open in $X$, $V$ is open in $X$ as well. We will show that: $\displaystyle V \subseteq F_\epsilon^c = \left\{{x \in X: \omega_f \left({x}\right) < 5\epsilon}\right\}$ Since $V$ is nonempty, this will show that $V \nsubseteq F_\epsilon$, and since $V \subseteq U$, that $U \nsubseteq F_\epsilon$. Since $V \subseteq A_k$: $d \left({f_i \left({x}\right), f_j \left({x}\right)}\right) \le \epsilon$ for all $x \in V$ and all $i, j \ge k$. Pointwise convergence of $\left \langle{f_n}\right\rangle$ gives, on letting $i \to \infty$ with $j = k$, that: $d \left({f \left({x}\right), f_k \left({x}\right)}\right) \le \epsilon$ for all $x \in V$. By continuity of $f_k$ we have for every $x_0 \in V$ an open $V_{x_0} \subseteq V$ such that: $d \left({f_k \left({x}\right), f_k \left({x_0}\right)}\right) \le \epsilon$ for all $x \in V_{x_0}$. By the triangle inequality: $d \left({f \left({x}\right), f_k \left({x_0}\right)}\right) \le 2 \epsilon$ for all $x \in V_{x_0}$. Applying the triangle inequality again: $d \left({f \left({x}\right), f \left({y}\right)}\right) \leq 4 \epsilon$ for all $x, y \in V_{x_0}$. Thus we have the bound: $\omega_f \left({x_0}\right) \leq \omega_f \left({V_{x_0}}\right) \le 4 \epsilon < 5 \epsilon$ showing that $x_0 \notin F_\epsilon$ as desired. $\blacksquare$ ## Source of Name This entry was named for René-Louis Baire and William Fogg Osgood. ## Sources • N. L. Carothers: Real Analysis (2000)
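For reference, a short note on the oscillation facts the proof uses (standard definitions added here; they are implicit rather than stated on the page above): the oscillation of $f$ at $x$ is $$\omega_f \left({x}\right) = \inf \left\{{\operatorname{diam} f \left({U}\right): U \text{ open}, \ x \in U}\right\}$$ with diameters taken with respect to $d$. A map $f$ is continuous at $x$ if and only if $\omega_f \left({x}\right) = 0$, which gives the displayed decomposition of $D \left({f}\right)$, and each set $\left\{{x \in X: \omega_f \left({x}\right) \ge c}\right\}$ is closed because $\omega_f$ is upper semicontinuous: if $\omega_f \left({x_0}\right) < c$, there is an open $U \ni x_0$ with $\operatorname{diam} f \left({U}\right) < c$, and then $\omega_f \left({y}\right) < c$ for every $y \in U$.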
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9851214289665222, "perplexity": 165.03772938633534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705502703/warc/CC-MAIN-20130516115822-00000-ip-10-60-113-184.ec2.internal.warc.gz"}
http://basicresearchpress.com/
# The Mechanical Theory of Everything Dr. Joseph M. Brown's latest book, The Mechanical Theory of Everything, is available now from Basic Research Press and Amazon. This volume represents a lifetime of research into the problems and foundations of biology, physics, mathematics, and language. ### The Neutrino: A Counter Example to the Second Law of Thermodynamics by Joseph M. Brown The kinetic particle model of the neutrino was first discovered in 1968-9 and published in Brown and Harmon [1]. All that was known at that time was that the neutrino had to be the result of a complete condensation of the ether gas which pervades the universe. Shortly after that time it was discovered that the Maxwell-Boltzmann parameters vr and vm arranged in the form [(vr - vm)/vm]^2 had the value 1/137.1. Since vr and vm characterize the gas that makes up the ether, and the magnitude of the parameters so arranged was close to the fine structure constant [2], the researchers were encouraged that the kinetic particle approach to physical theory must have merit. A little over ten years later it was discovered that if background particles were condensed, as required by the neutrino model, aligned to all move in the same direction without changing their individual speeds, and then squeezed together so that they all touched each other without changing their energy, the condensed assembly would translate at the speed vr - vm (see [3] and [4]). Thus, it was known that the speed of light is vr - vm. Further, the condensation and acceleration process described above provided a means for extracting background particles, which were forming the condensed state. However, it was not known at that time (1982) how the background particles could come in from the background and result in a complete condensation. It was not known how this condensation could be possible until 2012 [5]. The following paragraphs outline the rigorous analysis of the neutrino. This is a proof that a stable inhomogeneous state of Newtonian particles can exist. This analysis shows that the second law of thermodynamics is not universally true. In this analysis the ether gas is made of brutino particles and is called the brutino gas.
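The quoted value of 1/137.1 can be reproduced numerically if one assumes that vr and vm denote the root-mean-square speed and the mean speed of a Maxwell-Boltzmann distribution; the page does not define them, so this identification is my assumption. A minimal sketch:

```python
import math

# Assumption: vr = root-mean-square speed, vm = mean speed of a
# Maxwell-Boltzmann gas.  The common factor sqrt(kT/m) cancels in the
# ratio, so it is set to 1 here.
v_r = math.sqrt(3.0)             # rms speed ~ sqrt(3kT/m)
v_m = math.sqrt(8.0 / math.pi)   # mean speed ~ sqrt(8kT/(pi*m))

x = ((v_r - v_m) / v_m) ** 2
print(x, 1.0 / x)                # ~0.00729 and ~137.1
```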
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9243638515472412, "perplexity": 763.4463718588553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701164268.69/warc/CC-MAIN-20160205193924-00328-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/geometry/elementary-geometry-for-college-students-5th-edition/chapter-10-section-10-1-the-rectangular-coordinate-system-exercises-page-456/10c
## Elementary Geometry for College Students (5th Edition) (-a,-b) and (a,b). Using Theorem 10.1.2 (the Midpoint Formula): $M = \left(x_{m}, y_{m}\right) = \left(\frac{x_{1}+x_{2}}{2}, \frac{y_{1}+y_{2}}{2}\right)$, so $M = \left(\frac{-a+a}{2}, \frac{-b+b}{2}\right) = (0,0)$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9774511456489563, "perplexity": 3053.548267059496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827639.67/warc/CC-MAIN-20181216095437-20181216121437-00381.warc.gz"}
https://www.physicsforums.com/threads/quick-question-on-laurent-series-proof-uniqueness.947678/
# Quick question on Laurent series proof uniqueness 1. May 18, 2018 ### binbagsss 1. The problem statement, all variables and given/known data I am looking at the Wikipedia proof of uniqueness of Laurent series: https://en.wikipedia.org/wiki/Laurent_series 2. Relevant equations look above or below 3. The attempt at a solution I just don't know what the identity used before the bottom line is; I've never seen it before. Would someone kindly explain this to me or point me to a link? Many thanks 2. May 18, 2018 ### stevendaryl Staff Emeritus Let's integrate the function $1/z^n$ in the complex plane around a loop enclosing the origin, $z=0$. In this loop, let $z = R e^{i\theta}$ so $dz = iR e^{i\theta} d\theta$. So ($\theta$ goes from 0 to $2\pi$): $\int \frac{1}{z^n} dz = \int \frac{1}{R^n e^{i n \theta}} iR e^{i\theta} d\theta$ $= \frac{i}{R^{n-1}} \int e^{(1-n) i \theta} d\theta$ If $n$ is an integer and $n \neq 1$, then we have $\int \frac{1}{z^n} dz = \frac{1}{R^{n-1}} \frac{1}{1-n} [e^{2\pi (1-n) i} - 1] = 0$ (since $e^{2(1-n)\pi i} = 1$) If $n=1$, then $\int \frac{1}{z^n} dz = \int i d\theta = 2\pi i$ So we can summarize this: $\int \frac{1}{z^n} dz = 2\pi i \delta_{n,1}$ (where $\delta_{n,1}$ is the Kronecker delta, equal to $1$ when $n = 1$ and $0$ otherwise)
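For readers who want to check the identity numerically, here is a small script (my addition, not part of the thread) that approximates the loop integral by a Riemann sum over $\theta$:

```python
import cmath

def loop_integral(n, R=2.0, steps=100_000):
    """Approximate the integral of 1/z**n around the circle z = R*exp(i*theta)."""
    total = 0j
    dtheta = 2 * cmath.pi / steps
    for k in range(steps):
        theta = k * dtheta
        z = R * cmath.exp(1j * theta)
        dz = 1j * R * cmath.exp(1j * theta) * dtheta
        total += dz / z ** n
    return total

for n in range(-2, 4):
    print(n, loop_integral(n))   # ~2*pi*i = 6.283...j for n == 1, ~0 otherwise
```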
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9685758352279663, "perplexity": 494.97165141728806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202530.49/warc/CC-MAIN-20190321172751-20190321194751-00082.warc.gz"}
https://mathoverflow.net/questions/376187/looking-for-a-reference-on-conformal-mapping-on-bbb-rn
# Looking for a reference on conformal mapping on $\Bbb R^n$ A mapping $$T: \Bbb R^n\to \Bbb R^n$$ is said to be conformal if it is bijective and preserves angles, i.e., if $$x, y: [0,1]\to \Bbb R^n$$ are curves with $$x(t_0)=y(t_0)$$ then the angle between the image curves at that point equals the angle between the original curves: $$\cos \angle\left((T\circ x)'(t_0),(T\circ y)'(t_0)\right)= \cos \angle\left(x'(t_0),y'(t_0)\right)= \frac{x'(t_0)\cdot y'(t_0)}{|x'(t_0)|| y'(t_0)|}.$$ A typical example of a conformal mapping is the inversion $$I:\Bbb R^n \to \Bbb R^n$$, $$I(x)= \frac{x}{|x|^2}$$, with the convention that $$I(0)=\infty$$ and $$I(\infty)=0$$. Trivial examples are rigid motions, i.e., combinations of orthogonal transformations, scalings (homotheties) and/or translations. I am simply looking for a proof of, or a reference for, the following theorem: Theorem: Every conformal mapping is the composite of finitely many rigid motions and the inversion mapping. There are several books on complex analysis dealing with the case $$n=2$$ on the complex plane. But I haven't seen any for the higher dimensional situation. My two cents: a proof for $$n=3$$ is given explicitly by Dubrovin, Fomenko and Novikov in [1], §15.2 pp. 138-142. The authors explain also how to extend their proof to the case $$n>3$$ and leave the details as an exercise. The answer by @Piotr Hajlasz triggered my curiosity and pushed me to go a little bit beyond reference [1], which requires a $$C^4$$ regularity on the conformal map considered ([1], §15.2 p. 138). According to Caraman ([2] section 3, chapter 2, p. 358), the proof of Liouville's theorem requiring a minimal regularity on the mapping was given first by Reshetnyak in [3]. Reshetnyak assumes the mapping to be of class $$W^1_n$$: while the paper is short, the offered proof is highly non-trivial. Reference [1] Boris A. Dubrovin, Anatoly T. Fomenko and Sergey P. Novikov, Modern geometry - methods and applications. Part I. The geometry of surfaces, transformation groups, and fields, translated by Robert G. Burns. 2nd ed. (English) Graduate Texts in Mathematics, 93, Berlin-Heidelberg-New York: Springer-Verlag, pp. xv+468 (1992), MR1138462, Zbl 0751.53001 [2] Petru Caraman, $$n$$-Dimensional Quasiconformal (QCF) Mappings, revised, enlarged and translated from the Romanian by the Author (English), Tunbridge Wells, Kent: Abacus Press, pp. 551 (1974), ISBN 0-85626-005-3, MR0357782, Zbl 0342.30015. [3] Yuriĭ G. Reshetnyak, "Liouville's theorem on conformal mappings for minimal regularity assumptions", (English, translated from the Russian), Siberian Mathematical Journal 8 (1967), pp. 631-634 (1968), MR0218544, Zbl 0167.36102. Theorem (Liouville). If $$\Omega\subset\mathbb{R}^n$$, $$n\geq 3$$ is open and $$f:\Omega\to\mathbb{R}^n$$ is conformal, then $$f$$ is a Möbius transformation. While the theorem is true for $$f\in C^1$$, there is no easy proof in that case. Standard proofs assume that $$f\in C^3$$ or even $$f\in C^4$$. The classical and well-known proof due to Nevanlinna can be found here (see page 265):
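As a quick sanity check on the definition (an illustration added here, not part of either post): the Jacobian of the inversion $I(x) = x/|x|^2$ works out to $\frac{1}{|x|^2}\left(\mathrm{Id} - 2\hat{x}\hat{x}^{\mathsf T}\right)$, a positive multiple of a reflection matrix, so it rescales tangent vectors uniformly and preserves the angles between them. A small numerical confirmation:

```python
import numpy as np

def inversion_jacobian(x):
    """Jacobian of I(x) = x / |x|^2 at a point x in R^n (x != 0)."""
    r2 = np.dot(x, x)
    return np.eye(len(x)) / r2 - 2.0 * np.outer(x, x) / r2 ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=4)               # a random point of R^4
J = inversion_jacobian(x)

# Conformality: J^T J is a positive multiple of the identity, so J acts as a
# similarity on tangent vectors and angles are preserved.
JTJ = J.T @ J
print(np.allclose(JTJ, JTJ[0, 0] * np.eye(4)))   # True
```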
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9306442141532898, "perplexity": 681.0902608451098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103917192.48/warc/CC-MAIN-20220701004112-20220701034112-00273.warc.gz"}
https://crypto.stackexchange.com/questions/61328/the-equivalence-of-exponentiation-and-multiplication-over-elliptic-curves
# The equivalence of exponentiation and multiplication over elliptic curves I am trying to convert $(g^a)^{i^2}\mod p$ to an equivalent elliptic-curve expression. Let's assume a base point $G$ on an appropriate elliptic curve. Will $(g^a)^{i^2}\mod p$ be equivalent to $G\cdot a\cdot i^2$ (where the dot operation is scalar multiplication on the elliptic curve)? • Yes it will be. – SEJPM Aug 6 '18 at 11:17 • Ok, thanks. Now, is there a source which explains why this is true? – Shak Aug 6 '18 at 11:24 $$\left(g^{a}\right)^{i^2}\bmod p = g^{a\cdot i^2}\bmod p$$ means the following: Consider the group $\mathbb Z _p^*$, that is, the set of positive integers smaller than $p$, with the neutral element $1$ and the group operation of multiplication modulo $p$, and write this group multiplicatively, with $g\in\mathbb Z_p^*$ and $a,i$ being ordinary integers. Now if, instead, we wrote the above as an additive group (which is merely a change of notation!), we would replace $\cdot$ with $+$ and exponentiation $g^x$ with multiplication by a scalar, $x\cdot G$. So the above expression immediately becomes $(a\cdot i^2)\cdot G$ (up to commutativity and associativity). And the commutativity is a given on elliptic curves (because otherwise DH key exchanges wouldn't work).
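A small numerical illustration of the exponent law behind this answer (my sketch; the modulus, generator and exponents are made-up toy values):

```python
# Multiplicative notation in Z_p^*: (g^a)^(i^2) == g^(a*i^2)  (mod p)
p = 1009                             # a small prime, for illustration only
g, a, i = 11, 37, 52

lhs = pow(pow(g, a, p), i * i, p)    # (g^a)^(i^2) mod p
rhs = pow(g, a * i * i, p)           # g^(a*i^2)   mod p
print(lhs == rhs)                    # True

# In additive (elliptic-curve) notation the same bookkeeping reads
# (a * i**2) * G, with the scalar reduced modulo the order of the point G.
```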
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9609966278076172, "perplexity": 279.0065386376817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300574.19/warc/CC-MAIN-20220117151834-20220117181834-00365.warc.gz"}
http://mathhelpforum.com/differential-geometry/127768-locally-exact-differential.html
1. ## Locally Exact Differential.... A differential $pdx+qdy$ (in $\mathbb {C}$) is said to be a locally exact differential in a region $\Omega$ if it is an exact differential in some neighbourhood of each $x\in\Omega$. Show that every rectangle with sides parallel to the axes $R\subset\Omega$ satisfies $\int_{\partial R}pdx+qdy=0$ if it is a locally exact differential. 2. Originally Posted by ynj A differential $pdx+qdy$ (in $\mathbb {C}$) is said to be a locally exact differential in a region $\Omega$ if it is an exact differential in some neighbourhood of each $x\in\Omega$. Show that every rectangle with sides parallel to the axes $R\subset\Omega$ satisfies $\int_{\partial R}pdx+qdy=0$ if it is a locally exact differential. If pdx+ qdy is locally exact, then it is, as you say, an exact differential. That means that there exists some function F(x,y) such that dF= pdx+ qdy, at least inside $\Omega$. But then $\oint pdx+ qdy= \oint dF$. Let t be any parameter for the curve and we have $\oint pdx+ qdy= \int_a^b \frac{dF}{dt}dt= F(x(b),y(b))- F(x(a),y(a))$ where a and b are the beginning and ending values for t. But since this is a closed curve, t= a and t= b give the same point so x(b)= x(a), y(b)= y(a) and F(x(b),y(b))- F(x(a),y(a))= 0. 3. Originally Posted by HallsofIvy If pdx+ qdy is locally exact, then it is, as you say, an exact differential. That means that there exists some function F(x,y) such that dF= pdx+ qdy, at least inside $\Omega$. But then $\oint pdx+ qdy= \oint dF$. Let t be any parameter for the curve and we have $\oint pdx+ qdy= \int_a^b \frac{dF}{dt}dt= F(x(b),y(b))- F(x(a),y(a))$ where a and b are the beginning and ending values for t. But since this is a closed curve, t= a and t= b give the same point so x(b)= x(a), y(b)= y(a) and F(x(b),y(b))- F(x(a),y(a))= 0. Hmm... you misunderstand the problem. Locally exact means that $\forall x\in\Omega,\exists\delta>0,pdx+qdy$ is exact in $B(x,\delta)$. It is weaker than 'exact differential'.
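A standard example (added here for illustration; it is not part of the thread) shows why "locally exact" is genuinely weaker and why the rectangle statement is still consistent with it: on $\Omega=\mathbb{C}\setminus\{0\}$ consider $$\omega = \frac{-y\,dx+x\,dy}{x^2+y^2}.$$ Near any point of $\Omega$, $\omega = d\theta$ for a local branch of the argument, so $\omega$ is locally exact; yet $\int_{|z|=1}\omega = 2\pi\neq 0$, so $\omega$ is not exact on all of $\Omega$. A closed rectangle $R\subset\Omega$ with sides parallel to the axes cannot contain $0$, so a single-valued branch of $\theta$ exists on a neighbourhood of $R$ and $\int_{\partial R}\omega = 0$, exactly as the exercise asserts.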
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9883362650871277, "perplexity": 854.6061473157014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982939917.96/warc/CC-MAIN-20160823200859-00133-ip-10-153-172-175.ec2.internal.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/1-volume-ideal-gas-directly-proportional-number-moles-gas-constant-temperature-pressure-st-q4770946
1."The volume of an ideal gas is directly proportional to the number of moles of the gas at constant temperature and pressure" is a statement of _____________ Law. 2.Hydrogen gas exerts a pressure of 466 torr in a container. What is this pressure in atmospheres (1 atm = 101,325 Pa = 760 torr)? 3.If equal masses of O2(g) and HBr(g) are in separate containers of equal volume and temperature, which one of the following statements is true? 4.What is the temperature in Celsius at 4K, which is the temperature of liquid helium? 5.A sample of a gas occupies 1.40 × 103 mL at 25°C and 760 mmHg. What volume will it occupy at the same temperature and 380 mmHg? 6.What is the temperature in Celsius at 77K, which is the temperature of liquid nitrogen? 7.If 2.3 mol of a gas occupies 50.5 mL how many moles of the gas will occupy 85.5 mL at the same temperature and pressure? 8.A 250.0-mL sample of ammonia, NH3(g), exerts a pressure of 833 torr at 42.4°C. What mass of ammonia is in the container (R = 0.08206 L atm/ mol K, 9.A gas cylinder containing 1.50 mol compressed methane has a volume of 3.30 L. What pressure does the methane exert on the walls of the cylinder if its temperature is 25°C (R = 0.08206 L atm/ mol K)? 10.Calculate the density of CO2(g) at 100°C and 10.0 atm pressure (R = 0.08206 L atm/ mol K).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8535730242729187, "perplexity": 1298.512619915284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898751.26/warc/CC-MAIN-20141030025818-00222-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/space-time.111067/
# Space Time 1. Feb 17, 2006 ### wolram I have read that, on the scale of clusters (million light years) the effect of the expansion of the universe is 10 million times smaller than the gravity binding the cluster. So space time (flows) through the cluster (like a super fluid), or akin to a group of logs staying in a fixed position in a flowing stream; all poor analogies, but the best I can come up with. This suggests that the (coupling) between matter and space time is extremely weak. My question: how can space time, if viewed as a (super fluid or something that gravitationally bound bodies can ignore), carry massive bodies, i.e. galaxies, with it? 2. Feb 17, 2006 ### Garth In GR the coupling of the general expansion of space to objects within it is generally understood to be dependent on the average density of the universe compared with that of the object concerned. If the average cosmological density is ~ $10^{-29}$ gms/cc and the average density of the cluster is ~ $10^{-22}$ gms/cc then the cosmological expansion effect (Hubble flow) is 10 million times smaller than the gravitational binding of the cluster. However, the centre of mass of the cluster itself, or the super-cluster it belongs to, is carried along with the Hubble flow. There is a question about how the gravitational field of the cluster/galaxy/stellar system within the cosmological field is treated. The local field is treated as Newtonian, which is the weak field limit of the Schwarzschild solution that itself is embedded in flat non-expanding Minkowskian space-time (at $r \rightarrow \infty$). If it is embedded in a cosmological metric as $r \rightarrow \infty$ then bound orbits might themselves be expanding with the universe. This is generally not thought to be the case, but with the present non-identification of DM and the non-explained Pioneer anomaly (~ equal to the Hubble acceleration cH) the standard understanding may be in need of revision. Garth Last edited: Feb 17, 2006 3. Feb 17, 2006 ### wolram Thanks Garth, all very confusing to me anyway; I wonder how the system (space time and mass) stayed together when dark energy started an accelerated expansion. 4. Feb 17, 2006 ### Garth DE just affects the cosmological expansion. If gravitationally bound objects are not affected by that expansion (as is normally thought to be the case) then they will not be affected by an accelerated expansion caused by DE. Garth 5. Feb 17, 2006 ### wolram 6. Feb 27, 2006 ### Garth
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9281439185142517, "perplexity": 1435.8108715479125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190236.99/warc/CC-MAIN-20170322212950-00447-ip-10-233-31-227.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1416929/applying-cauchy-theorem-to-the-integral-of-overlinez2-over-two-different-c
# Applying Cauchy theorem to the integral of $\overline{z}^2$ over two different curves. I have solved the following exercise, in which I had to compute $$\int _\gamma \! \overline z ^2 \, \mathrm{d}z,$$ where $\gamma$ is 1. The circumference $\left| z\right| =1$ 2. The circumference $\left|z-1 \right|=1$ Let's call each curve $\gamma_1$ and $\gamma_2$ respectively. Then I have parametrized them as $\gamma_1 \left( t \right) = e^{it}$ and $\gamma_2 \left( t \right) = 1+ e^{it}$ for $t\in\left[0,2\pi \right)$. After using the following formula for complex integration around a curve $$\int _\gamma \! f\left( z \right) \, \mathrm{d} z = \int _0 ^{2\pi} f \left( \gamma \left( t \right) \right) \gamma ' \left( t \right) \, \mathrm{d}t$$ I have found that $$\int_{\gamma_1} \! \overline z ^2 \,\mathrm d z = 0$$ and $$\int_{\gamma_2} \! \overline z ^2 \,\mathrm d z = 4\pi i.$$ Then here are the actual questions: 1. Can we say by Cauchy theorem that $\overline z ^2$ is holomorphic in $\mathrm{supp}\left(\gamma_1\right)$ but not in $\mathrm{supp}\left(\gamma_2\right)$? 2. If it is holomorphic in $\mathrm{supp}\left(\gamma_1\right)$, then there exists $F\left( z \right)$ such that $F'\left(z \right)=\overline z ^2$ defined in $\mathrm{supp}\left(\gamma_1\right)$? Can we compute it explicitly? I have trouble seeing how this function has different behaviour in two overlapping sets. Cauchy's integral theorem says that if $f$ is holomorphic (on a simply connected domain $D$) then $\int_\gamma f(z)\,dz = 0$ for every closed curve in $D$. Not the other way around. Just because the integral of a particular function along a particular curve happens to be $0$, it doesn't tell us that $f$ is holomorphic. On the other hand, if we find a closed curve $\gamma$ such that $\int_\gamma f(z)\,dz \neq 0$, we can be sure that $f$ is not holomorphic on (a neighbourhood of) the interior of $\gamma$. If you want some kind of converse, Morera's theorem says that if $f$ is continuous and $\int_\gamma f(z)\,dz = 0$ for every closed curve $\gamma$ in $D$, then $f$ is holomorphic on $D$.
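A direct numerical check of the two parametrized integrals (added for illustration; it is not part of the question or the answer):

```python
import cmath

def integral_conj_sq(center, steps=100_000):
    """Approximate the integral of conj(z)**2 dz around the circle |z - center| = 1."""
    total = 0j
    dt = 2 * cmath.pi / steps
    for k in range(steps):
        t = k * dt
        z = center + cmath.exp(1j * t)
        dz = 1j * cmath.exp(1j * t) * dt
        total += z.conjugate() ** 2 * dz
    return total

print(integral_conj_sq(0))   # ~0
print(integral_conj_sq(1))   # ~4*pi*i, i.e. about 12.566j
```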
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9905473589897156, "perplexity": 52.927230962811144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315695.36/warc/CC-MAIN-20190821001802-20190821023802-00551.warc.gz"}
http://mathoverflow.net/questions/79712/associated-sheaf-functor-doesnt-preserve-arbitrary-products/79845
# associated sheaf functor doesn't preserve arbitrary products I need example that associated sheaf functor doesn't preserve arbitrary products. I think that one can provide an example for sheaves over topological space. Thanks for your help. - Homework? And what associated sheaf functor? Do you mean the sheafification functor from presheaves to sheaves? Or something else? Are you allowing sheaves over sites (it seems you must be because you talk about providing an example of sheaves over a topological space)? Finally, if you think you have an example - actually give your example! –  Anon Nov 1 '11 at 14:16 Sorry for my bad english, which leads to some misunderstanding. It's not a homework. Yes I meant sheafification functor. I strongly believe there is example over site of open subsets of a topological space. –  Anonymous Nov 1 '11 at 15:35 Why do you need this example? –  S. Carnahan Nov 1 '11 at 18:59 I don't know why Anon would think this is homework, or why someone is voting to close this. It is a perfectly reasonable question for someone who is not expert in topos theory. –  Todd Trimble Nov 2 '11 at 15:40 Let $X$ be a space. An abelian group (or a set, if you prefer) $G$ determines a constant presheaf, call it $C_G$, and the associated sheaf is the sheaf of locally constant maps into $G$. Given a family $G_i$ with product $G$, the product of the presheaves $C_{G_i}$ is the presheaf $C_G$, but the product of the associated sheaves is not in general the sheaf of locally constant maps to $G$. (If $X$ is not locally connected then one can easily have a map to $G$ which projects to a locally constant map to $G_i$ for each $i$ but which is not locally constant itself. ) This is presumably a down-to-earth special case of what Todd Trimble is saying. - Not only down-to-earth, but also easy to understand: 1+ –  Martin Brandenburg Nov 2 '11 at 19:18 It could be more down-to-earther if $G$, $X$ and $G_i$'s were specified... –  Andrej Bauer Oct 8 '12 at 15:46 An example where sheafification does not preserve arbitrary products is where we take sheaves over a (sober) space $X$ that is not locally connected, for example the space of irrationals or Cantor space. Recall that a Grothendieck topos $E$ is locally connected if the (essentially unique) geometric morphism $\Gamma = f_\ast: E \to Set$ has a left adjoint $f^\ast$ that in turn has a left adjoint. More generally, a geometric morphism $f_\ast: E \to F$ between toposes is an essential geometric morphism if its left adjoint $f^\ast$ has a left adjoint. We have the following facts: • A presheaf topos $Set^{C^{op}}$ is locally connected. Here the left adjoint to the global sections functor $\Gamma: Set^{C^{op}} \to Set$ is the diagonal functor $\Delta: Set \to Set^{C^{op}}$, which of course has a left adjoint. • A geometric morphism $f_\ast: E \to F$ is essential if and only if $f^\ast$ preserves arbitrary products. (Of course $f^\ast$ is already left exact and so preserves equalizers, if it preserves small products, then it preserves small limits. Using the fact that Grothendieck toposes are cototal, this is enough to ensure that $f^\ast$ has a left adjoint.) • Then in particular for a small site $(C, J)$, the inclusion functor $i: Sh(C, J) \to Set^{C^{op}}$ is an essential geometric morphism if and only if sheafification $a: Set^{C^{op}} \to Sh(C, J)$ preserves small products. 
Thus, putting the last two facts together, the composite geometric morphism $$Sh(C, J) \stackrel{i}{\to} Set^{C^{op}} \stackrel{\Gamma}{\to} Set$$ is essential, i.e., $Sh(C, J)$ is locally connected, if $a$ preserves products. • In the case where the site is $(\text{Open}(X), J)$ where $J$ is the canonical Grothendieck topology given by covering families, $Sh(X) = Sh(C, J)$ is locally connected if and only if $X$ is a locally connected space. Thus $a: Set^{\text{Open}(X)^{op}} \to Sh(X)$ preserves small products only if $X$ is locally connected. -
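To make the parenthetical remark in the first answer concrete, here is one explicit instance (my illustration; it is not spelled out in either answer). Take $X = \{0\} \cup \{1/n : n \ge 1\} \subseteq \mathbb{R}$, which is not locally connected at $0$, and let $G_i = \mathbb{Z}$ for $i \in \mathbb{N}$, with $G = \prod_i G_i$. Define $f: X \to G$ by $f(0) = 0$ and $f(1/n) = e_n$, the $n$-th standard basis element. Every coordinate $\pi_i \circ f$ is locally constant: it vanishes except at the isolated point $1/i$, and it is identically $0$ on a small enough neighbourhood of $0$. But $f$ itself is not locally constant at $0$, since every neighbourhood of $0$ contains points $1/n$ with $f(1/n) \neq f(0)$. So $f$ is a global section of $\prod_i a C_{G_i}$ (the product of the associated sheaves) which does not come from $a\left(\prod_i C_{G_i}\right) = a C_G$ (the sheaf of locally constant $G$-valued maps), and the canonical map $a\left(\prod_i C_{G_i}\right) \to \prod_i a C_{G_i}$ fails to be an isomorphism.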
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9786950349807739, "perplexity": 225.32360239706807}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463453.54/warc/CC-MAIN-20150226074103-00053-ip-10-28-5-156.ec2.internal.warc.gz"}
http://science.sciencemag.org/content/169/3941/181
Reports The Hydrogen Atom and Its Reactions in Solution Science 10 Jul 1970: Vol. 169, Issue 3941, pp. 181-183 DOI: 10.1126/science.169.3941.181 Abstract Hydrogen atoms have been generated in solution by photolysis of thiols in solutions of organic compounds, and the relative rate constants, k_H, have been measured for the reaction H• + QH → H₂ + Q•, where QH is any organic compound which contains hydrogen. This represents the first kinetic study of the hydrogen atom in which it is generated in solution by a technique not involving ionizing radiation. The relative values of k_H are in agreement with the values from radiolysis for most of the substances studied; however, for some compounds significantly different results have been obtained.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8487359881401062, "perplexity": 2142.037055079312}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159938.71/warc/CC-MAIN-20180923232129-20180924012529-00062.warc.gz"}
https://opencardiovascularmedicinejournal.com/VOLUME/3/PAGE/110/
# Deterministic Chaos and Fractal Complexity in the Dynamics of Cardiovascular Behavior: Perspectives on a New Frontier Vijay Sharma* Division of Pharmacology and Toxicology, Faculty of Pharmaceutical Sciences, The University of British Columbia, 2146 East Mall, Vancouver, Canada open-access license: This is an open access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted, non-commercial use, distribution and reproduction in any medium, provided the work is properly cited. * Address correspondence to this author at the Division of Pharmacology and Toxicology, Faculty of Pharmaceutical Sciences, 2146 East Mall, University of British Columbia, Vancouver, Canada; Tel: (604) 822 6159; Fax: (604) 822 8001; E-mail: [email protected] ## Abstract Physiological systems such as the cardiovascular system are capable of five kinds of behavior: equilibrium, periodicity, quasi-periodicity, deterministic chaos and random behavior. Systems adopt one or more of these behaviors depending on the function they have evolved to perform. The emerging mathematical concepts of fractal mathematics and chaos theory are extending our ability to study physiological behavior. Fractal geometry is observed in the physical structure of pathways, networks and macroscopic structures such as the vasculature and the His-Purkinje network of the heart. Fractal structure is also observed in processes in time, such as heart rate variability. Chaos theory describes the underlying dynamics of the system, and chaotic behavior is also observed at many levels, from effector molecules in the cell to heart function and blood pressure. This review discusses the role of fractal structure and chaos in the cardiovascular system at the level of the heart and blood vessels, and at the cellular level. Key functional consequences of these phenomena are highlighted, and a perspective provided on the possible evolutionary origins of chaotic behavior and fractal structure. The discussion is non-mathematical with an emphasis on the key underlying concepts. Key Words: Vascular function, heart function, blood pressure, metabolism, cardiac conduction, vasomotion, temperospatial organization, fractal mathematics, chaos theory, emergence, systems biology, network analysis, complexity, self-organization, physiological time, circadian rhythms, ultradian rhythms, evolution. ## INTRODUCTION Investigations into the behavior of physiological systems have traditionally placed great emphasis on the concept of homeostatic equilibrium, and how disturbances in homeostatic control can produce disease. The underlying assumption of this concept is that physiological systems exist at a steady state which can be switched to other states by a particular stimulus. However, biological systems are capable of more complex behaviors; there are patterns, rhythms and tempos to almost all biological processes from the level of the cell to that of the whole body. It is therefore important to take a step back and ask what the behavior of the system really is. Five kinds of physiological behavior are possible, and systems have the ability to move between them. Each has a precise mathematical definition, but can be understood intuitively as follows:
1. Equilibrium describes a system at steady state, the classic view of physiological systems.
2. Periodicity describes a system with a simple rhythm in which a cycle is repeated at a set frequency. This is seen in many circadian rhythms, such as the central and peripheral body clocks which control many of the other circadian rhythms in the body.
3. Quasi-periodicity describes a more complex behavior produced because the system cycles with at least two frequencies whose ratio is not a rational number. This is a common behavior in physiological systems; it is seen in the electrophysiological behavior of the heart and the nervous system.
4. Deterministic chaos describes a system which is no longer confined to repeating a particular rhythm, and is free to respond and adapt. As is explained in more detail below, the system is constrained only by ‘boundary conditions’ imposed to prevent it from collapsing. Chaos is seen in a wide range of physiological systems, including the cardiovascular system.
5. Random behavior represents a breakdown of order in the system. Its behavior is now uncoordinated.
The control of homeostatic equilibrium is the easiest behavior to study using our existing technological and mathematical tools, and this is why it was the first behavior to be systematically investigated by the scientific method. The study of circadian rhythms and temperospatial organization has led us into the periodic and quasi-periodic behaviors. Now the ability to study chaotic behavior is within reach, and we are standing on the edge of a new frontier in which we can begin to ask unbiased questions about the behavior of a system. Why is this important? The first reason is that the type of behavior determines the approach which can be used to study it. As discussed below, equilibrium, periodicity and quasi-periodicity can be studied using the reductionist approach, whereas deterministic chaos cannot. The second reason is that it allows us to ask deeper questions about biology. What advantages do the different behaviors confer, how are they created and what are the consequences of moving from one kind of behavior to another? The aim of this review is to discuss how the emerging mathematical concepts of chaos theory and fractal mathematics are allowing us to approach these questions. The discussion will be non-mathematical, with an emphasis on the concepts, and will focus on the cardiovascular system. The cardiovascular system is a rich area for the study of dynamics, and it is a system in which clinical applications for chaos theory have already been found. ## THE APPLICATION OF DYNAMICS TO THE STUDY OF PHYSIOLOGICAL BEHAVIOR Dynamics is the branch of physics concerning the study of how motion is produced by the action of forces. When applied to biology, it is used to study how the behavior of a system is created and controlled. A system is considered dynamic if it is deterministic and obeys the causality principle. Determinism is a philosophical proposition, which states that every event is determined by an unbroken chain of prior occurrences. The causality principle, derived from Descartes’ Third Meditation, states that every effect has an antecedent and proximate cause. Random behavior is not a dynamic because it does not meet these two stipulations. A dynamic system can be linear or non-linear. In a linear system, the behavior of the entire system can be deduced by adding the behaviors of each of its components together. Small causes produce small effects and large causes, large effects. Equilibrium, periodicity and quasi-periodicity are examples of linear dynamics.
The conventional reductionist approach excels in the study of linear systems, because it is possible to derive a complete description of the system by breaking it down into its component parts. In a non-linear system, the parts of the system are not simply added together, but participate in a cascade of amplification. Because a non-linear system is more than the sum of its parts, a complete explanation of its behavior cannot be obtained by the reductionist approach. In the physical world, non-linear dynamics usually relate to discontinuous ‘sudden’ phenomena such as tornadoes, explosions or breakages. The non-linear dynamic most frequently encountered in biology is deterministic chaos. In a chaotic system, small changes in the initial conditions are amplified and have profound effects on the final state so that the behavior of such a system cannot be predicted over the long term. It is perhaps unfortunate that the term ‘chaos’ is used to describe this because mathematical chaos does not refer to complete disorder. It refers to an orderly system whose behavior is so complex that it appears to be random; the human mind cannot see the patterns in the raw data because it lacks the computational power to do so. However, the beauty of mathematics is that it gives us the power to transcend the limits of our intuitive understanding. The mathematics of chaos theory applies transformations to the raw data which ‘force’ the underlying patterns to be revealed. To understand how this is done, and what deterministic chaos is, it is worth considering the history of how chaotic behavior was first discovered. Mathematical chaos was observed independently by a number of scientists and mathematicians in different fields before taking shape as a theory in the second half of the 20th century [1]. It was officially (and accidentally) discovered by Edward Lorenz in 1963 [2]. Lorenz was a meteorologist who was running a series of weather simulations, and wanted to see a particular simulation again. To save time, he entered data from a previous computer readout and started the simulation from its halfway point, expecting that this would make no difference to the final results. To his surprise, he found that the results of the new simulation were markedly different from the previous one, and traced the fault to the computer printout. The printout had approximated the 6 figure readout of the computer to 3 figures. This small difference in initial conditions (using a 3 rather than a 6 digit input) was enough to substantially alter the outcome of the simulation. Indeed, it is now known that, in non-linear systems, these differences are amplified by iteration in an exponential manner. This is the ‘butterfly effect’: a creature as meek as a butterfly can trigger a storm thousands of miles away simply by beating its wings. It does so because the tiny initial displacement of the air is amplified in a cascade. This phenomenon is called the ‘sensitivity to initial conditions’. Lorenz concluded that, because of this phenomenon, the behavior of a chaotic system such as the weather can never be accurately predicted in the long term. In 1901, Willard Gibbs pioneered the use of phase space to represent the state of a system. However, it was the Belgian physicist Ruelle who first used this approach to study the behavior of chaotic systems, and this resulted in the discovery of the attractors of a chaotic system [3]. 
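A minimal numerical illustration of this sensitivity (a sketch added here; the logistic map is a standard textbook example and is not discussed in the article itself):

```python
# Sensitivity to initial conditions in the chaotic logistic map x -> 4x(1 - x).
x, y = 0.2, 0.2 + 1e-6           # two almost identical starting states
for k in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if (k + 1) % 10 == 0:
        print(k + 1, abs(x - y))
# The separation grows roughly exponentially until it reaches order 1,
# after which the two trajectories are effectively unrelated.
```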
Phase space is an abstract two or three-dimensional space in which the x, y and z-axes are used to represent key parameters which describe the state of the system. The state of the system at any given moment can then be represented as a point in phase space; the process by which data are mathematically converted into a point in phase space is called embedding. The state of a dynamic system is continuously changing, and by plotting the different states of the system which arise over time on a phase space diagram, one obtains a graphical representation of all possible states of the system, and the behavior of the system is revealed (Fig. 1).
Fig. (1). Possible states of a system represented in phase space. Equilibrium is represented as a point and periodic behavior as a cycle. Quasiperiodic behavior arises because the system is cycling with at least two frequencies whose ratio is not rational. A strange attractor is a sign of chaotic behavior, and has a low fractal dimension. The attractor shown is the Lorenz attractor. Random behavior produces a random scatter of points in phase space, with a high fractal dimension.
Fig. (2). (A) The first four iterations of the Koch Snowflake. (B) A visual representation of the Mandelbrot set, one of the best-known fractals.
Fig. (3). Increasing complexity in biological systems. Unique characteristics are conveyed by the individual components, which are then organized by universal principles. The components form pathways, then scale-free networks containing functional modules, then hierarchical networks. Emergent properties are seen with each increasing level of complexity (modified from [98]).
Fig. (4). An overview of the relationship between dynamics and physiological function.
The motion of a dynamical system is driven by an underlying force, and most dynamical systems would come to rest in the absence of this force. Initial transient states of the system are therefore killed off with time and, under the influence of the driving force, the system evolves towards a particular state or behavior. If these events are followed in phase space, one sees that, given enough time, the state of a dynamical system evolves towards a particular set of points in phase space. This set is referred to as the attractor of the system, and this attractor is a property of a deterministic system [3]. A random system will never have an attractor. The attractor of a deterministic non-chaotic system can be a fixed point, a limit cycle (a periodic system) or a limit torus (a quasiperiodic system) (Fig. 1). In this case, the system is predictable, and will either loop continuously retracing the limit cycle or torus, or, in the case of a point attractor, settle at equilibrium. The attractor of a chaotic system is more complex and is referred to as a strange attractor. The points representing the state of the system loop endlessly within the boundaries set by the attractor, always towards a central point, but the trajectory traced by the point never repeats itself. The strange attractor contains an infinite number of possible trajectories through which the state of the system can cycle. Strange attractors are always fractals (discussed below), taking on beautiful shapes in phase space, which resemble such forms as butterfly wings, swirling clouds, spider’s webs or leaves [1]. Physiological systems can change their dynamics and move between periodicity, quasiperiodicity and chaos. 
In phase space, the attractor will change, ‘exploding’ into a chaotic attractor or ‘imploding’ into a limit torus or limit cycle. This phenomenon is termed ‘bifurcation’. If the system is moving toward chaos, the bifurcations cascade. If the system is moving toward quasiperiodic or periodic behavior, the bifurcations are creating negative feedback loops to stabilize it. This phenomenon gives the system a remarkable flexibility (see [1, 4-7] for a more complete discussion). ## FRACTAL MATHEMATICS This section provides a brief introduction to fractals and their properties. The reader is referred to a number of excellent books for a more complete discussion [4, 8-11]. Fractal mathematics is a fundamentally new kind of geometry, and one which remains unfamiliar to much of the scientific, medical and general communities. It has three applications in biology: the study of physical structure, the study of the structure of processes in time and the study of the dynamics underlying behavior (the attractor of a chaotic system is always a fractal). Our understanding of the form of natural objects has been dominated by the concepts of Euclidian geometry and calculus. However, Euclidian geometry is unable to describe many of the exquisite patterns with which nature is replete, such as the shape of a cloud, a tree or a snowflake. One of the reasons for this is that objects in the natural world have features which extend over many scales, and examination of an object at finer scales reveals new previously unseen detail. Euclidian geometry fails to describe this because it can only deal with one scale at a time; it describes figures whose edges are flat or curved and misses the detail. In the 1970s, Benoit Mandelbrot reported a new kind of mathematics, which he called ‘fractal mathematics’ [10]. The term fractal was derived from the Latin word fractus, meaning broken, in order to reflect its defining features of self-similarity and scaling. In the words of Mandelbrot, “a fractal is a rough or fragmented geometric shape which can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole” [10]. Fractals can be observed throughout nature, from the small scale of atoms to the large scale of galaxies. The natural world is replete with examples: crystals, snowflakes, river networks, mountains, lightning, trees, webs, the list is long. The self-similarity of a fractal can be defined as perfect (geometrical) or statistical. Exact self-similarity represents the geometrically perfect fractal. A simple mathematical example of perfect self-similarity is given by the Koch snowflake (Fig. 2A). Starting with a straight line, a Koch snowflake is generated by substituting the middle third of each segment with the two sides of an outward-pointing equilateral triangle and repeating the process many times. The iteration vastly increases the length of the figure’s perimeter. After about 90 iterations, a Koch snowflake generated from a 1-metre segment has a length which, if unwound, would stretch from the earth to the sun (after 40 iterations it already measures roughly 100 km). Another example is the Mandelbrot set (Fig. 2B). Nature never conforms to geometric ideals, and just as one would be hard-pressed to find a perfect sphere or cube in nature, one would be equally hard-pressed to find a fractal with perfect self-similarity. Most fractals in nature exhibit statistical self-similarity. 
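Referring back to the Koch-curve length estimate above, a quick check of the arithmetic (my addition, not part of the paper): each iteration multiplies the length by 4/3.

```python
import math

AU_METRES = 1.496e11                     # Earth-Sun distance

def koch_length(n, start=1.0):
    """Length of the Koch curve after n iterations of a segment of length `start` (metres)."""
    return start * (4 / 3) ** n

print(koch_length(40))                   # ~1.0e5 m, i.e. roughly 100 km
n_needed = math.ceil(math.log(AU_METRES) / math.log(4 / 3))
print(n_needed, koch_length(n_needed))   # about 90 iterations to exceed 1 AU
```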
Statistical self-similarity refers to a situation in which the fractal is approximately self-similar at different scales; the statistical properties of the part are proportional to the statistical properties of the whole. Examples of such self similarity in the human body include self-similar invaginations of alveolae and the intestinal tract which increase the surface area for absorption, or the self-similar branching pattern of the dendritic, bronchial and vascular trees [5, 12-15]. An important extension of the fractal concept is that fractals are not only observed in physical structures; biological processes in time also exhibit fractal properties in that fluctuations at a given timescale resemble the fluctuations of the same process observed at a smaller timescale. A classic example of this is seen in recordings of ion channel currents; self-similar patterns of ion channel opening are observed over several timescales [16]. The other extension is that the attractor of a chaotic system is always a fractal, and fractal geometry therefore has a role to play in describing dynamics. For a fractal system, the measurement of any parameter depends on the resolution at which the measurement is taken. If one is measuring length, for example, the value of the measured length would increase as the finer details are revealed. There is therefore no one ‘true’ value of the measurement, but rather a relationship between the measurement and the resolution. This is referred to as a scaling relationship, and it poses two fundamental challenges to our existing analytical tools. The first is modest. If a fractal physiological system is studied at different resolutions, the measurements obtained would be expected to differ, and this may explain some of the measurement discrepancies found in the literature [17]. Biological fractals, however, are usually only self-similar over a few orders of magnitude, so this problem is overcome by studying the structure at the finest level of detail. The second challenge is profound, and it arises when studying fractal processes in time: fractals can have no mean and infinite variance! How is this possible? We are used to using means and variances to describe datasets, and the sample mean we obtain by experiment is meant to reflect the population mean of the parameter. As the sample size is increased, the sample mean should approach the value of the population mean. This does not happen for fractals: as the sample size increases, the value of the sample mean continues to change and diverge to either zero or infinity, never approaching a ‘true mean’. This situation arises because the value of the parameter depends on the scale at which it is measured, and can therefore never have a single true value. Self-similarity at multiple scales can also affect variance, because smaller fluctuations in the data are amplified as the resolution is increased. The measured variance increases as the sample size or sample time increases, and tends towards infinity. This poses fundamental problems to the approach of statistical hypothesis testing (see [4] for review). Firstly, we always rely on the mean and variance of our measurements to define the parameters of the system being studied. However, if there is no mean and infinite variance, there is no way to determine what the parameters of the system are, so we have no way of detecting a change in those parameters. 
Furthermore, because there is no true mean of the system, the value of the calculated mean will be seen to change even if the underlying process remains unchanged. This means that, if we wish to look for changes in the state of a fractal system, we cannot use our existing analytical tools, because they are essentially blind. It is clear that we need new analytical tools to study fractals, but our ability to describe the properties of fractals is still rudimentary.

The major parameter which has been described is the fractal dimension, a measure of a fractal’s space-filling properties. Put another way, fractal dimension measures the extent to which the irregularities of a fractal embedded in a curve spread out into the second dimension, or the extent to which the irregularities of a fractal embedded in a surface spread out into the third dimension. Fractals always have fractional dimensions (e.g. 1.7, 2.3, etc.), which is a counter-intuitive idea when first encountered, but it can be understood by considering the familiar Euclidean shapes. Euclidean geometric shapes completely ‘break into’ the next dimension (e.g. line to square to cube) without creating detailed structure along their edges. The fractional part (the 0.7 or the 0.3) represents the detail that Euclidean geometry misses; it describes the existence of a detailed structure along the edge of the shape. The dimension of the curve, surface or solid in which the fractal is embedded is referred to as the topological dimension, and is always an integer (1, 2 or 3). A fractal is defined mathematically as a set of points for which the fractal dimension is greater than the topological dimension [10].

## THE APPLICATION OF FRACTAL MATHEMATICS TO CHAOS THEORY

As discussed above, the attractors of a chaotic system are always fractals. The fractal dimension of the attractor provides an indication of how deterministic the system is. If the fractal dimension is low, the data are likely to have been generated by a deterministic system; if the fractal dimension is high, the data are likely to have been generated by a random system. The reason for this can be understood intuitively as follows. As the fractal dimension increases, the form of the fractal is seen to disperse and increasingly resemble a random scatter of points. A random scatter of points in phase space is indicative of a truly random system.

Significant technological and analytical advances are needed before the dynamics of circadian clocks can be studied in detail. However, it is clear from existing evidence what the dynamics of the system are likely to be. The healthy state of this multi-oscillator system appears to be one of orderly, periodic, synchronous oscillations. Since the periodicity of the clocks is essential to their function, any bifurcation away from this behavior would be self-defeating. The system needs to be far enough away from the border between order and chaos to prevent chaotic behavior, but also far enough from equilibrium to maintain periodic dynamics. This reliable order is bought at the price of a lack of flexibility. This disadvantage can be seen in the way the system responds to challenges. The central and peripheral clocks are slow to respond to permanent changes in the timing of the sleep-wake cycle (2 days for the central clock, 8 days for the peripheral clocks) [27]. This rigidity helps to prevent unnecessary adjustments of the clocks in response to, for example, an afternoon nap.
However, in the presence of more permanent behavioral changes such as shift work, it may be an important cause of disease [20, 28].

## BLOOD FLOW AND VASCULAR FUNCTION

Although mean blood pressure exhibits a simple diurnal variation which is essentially periodic, the cardiovascular system is capable of more complex behaviors. These are generated both in the blood vessels and in the heart. Part of the complexity arises from the basic structure of the cardiovascular system. The heart and the vasculature contain structures which have a fractal-like appearance. Examples of structural fractals include the venous and arterial vascular trees, the organization of muscle fibres (bundles, fibres, fibrils, myofilaments), the arrangement of the His-Purkinje network in the heart and the structure of cardiac connective tissues (e.g. the chordae tendineae and aortic valve leaflets) [4, 5, 29, 30].

Living organisms possess complex, spatially distributed systems which depend on a vascular supply for their survival. The function of fractal networks such as the vascular tree is to achieve fast and efficient transport across these systems, and there is good evidence that the use of a fractal branching system minimizes the work of transport [31-33]. Although the scale-range of self-similarity is infinite for geometric fractals, in biology there is only a finite range over which self-similarity is observed. For the vascular tree, there are at least three orders of magnitude which are self-similar, from the larger feeder vessels (c. 5 mm in small mammals) to the smallest arterioles (c. 10–20 μm in small mammals). The small capillary networks into which the arterioles feed have a different topology from the rest of the vascular tree and are not part of the fractal network.

The fractal structure of the vasculature has profound consequences for animals, and it is worth pausing to reflect on this. One such consequence is physiological time. The classical Newtonian view of time is chronological time, in which time is viewed as being universal and above nature. This perspective of time is imposed by evolution in the form of ultradian and circadian rhythms, and we regulate our daily lives by it. However, Meyen proposed an alternative view in which time is related to variability and change, so that each self-contained system has its own time defined by specific events that occur within that system. Psychological time is an example of Meyen’s concept, in which perceptions of the flow of time are dependent on the situation; time flies when we are busy, and grinds to a halt when we are waiting. The concept of physiological time arises from Meyen’s perspective and is defined by Boxenbaum as “a species-dependent unit of chronological time required to complete a species-independent physiological event” [34]. In order to maintain body temperature in the face of an impaired ability to lose heat, all the physiological processes of a larger animal are slower than those of a smaller animal (heart rate, respiration rate, movement, etc.); the heart rate of an elephant is 30 beats per minute whereas that of a shrew is 1000 beats per minute. However, a shrew and an elephant get through the same number of heartbeats and respirations over the course of their lives (approximately 200 million breaths and 800 million heartbeats), and therefore live an equivalent amount of physiological time [35]. In chronological time, the elephant takes longer to use up its allotted beats and breaths, and therefore lives longer by living slower.
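The arithmetic behind this trade-off can be sketched in a few lines of code. The snippet below is purely illustrative: it assumes the quarter-power allometric exponents commonly quoted in this literature (heart rate scaling roughly as body mass to the power −1/4 and lifespan as mass to the power +1/4), and the proportionality constants are arbitrary values chosen only to give plausible numbers; none of these figures are taken from this review or its references.

```python
# Illustrative sketch of quarter-power allometric scaling (assumed exponents,
# arbitrary constants): heart rate ~ M^(-1/4), lifespan ~ M^(+1/4), so the
# number of heartbeats in a lifetime is independent of body mass.

def heart_rate_bpm(mass_kg, k_rate=240.0):
    # k_rate is an arbitrary illustrative constant, not a measured value.
    return k_rate * mass_kg ** -0.25

def lifespan_years(mass_kg, k_life=6.0):
    # k_life is an arbitrary illustrative constant, not a measured value.
    return k_life * mass_kg ** 0.25

def lifetime_beats(mass_kg):
    minutes_per_year = 365.25 * 24 * 60
    return heart_rate_bpm(mass_kg) * lifespan_years(mass_kg) * minutes_per_year

for name, mass in [("shrew", 0.005), ("human", 70.0), ("elephant", 5000.0)]:
    print(f"{name:9s} {heart_rate_bpm(mass):6.0f} bpm  "
          f"{lifespan_years(mass):5.1f} y  {lifetime_beats(mass):.2e} beats")
```

Because the two exponents cancel, the printed lifetime heartbeat count is the same for every body mass, which is the quantitative content of the claim that a shrew and an elephant live an equivalent amount of physiological time.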
It is well known that the relationship between the pace of physiological processes and animal size obeys a power-law function, so does it have a fractal origin? Metabolic rate is limited by the rate at which nutrients and oxygen can be supplied to the metabolic tissues, and this supply is constrained as the transport distances covered by the transport network are increased. The transport network, as discussed above, is fractal over three orders of magnitude. In the model proposed by Brown, Enquist and West, the fractal nature of the transport networks explains the power law relationship between metabolic rate and animal size [36, 37]. The implication of this theory is that the fractal structure of the vasculature has, over the course of evolution, exerted an influence on species’ physiological time and ultimately on their lifespan. It is remarkable that a fractal, which can often be represented by very simple equations, can influence a species’ own experience of time and ultimately set constraints on how long it will live. The basic pattern of blood distribution is fractal, and this is imposed both by the anatomy of the vascular tree and by the local regulation of vascular tone [38-46]. Superimposed upon this basic pattern is a complex rhythm of vascular flow. It is well-established that blood vessels exhibit rhythmic changes in diameter which beget rhythmic changes in resistance [47, 48]. This behavior has been found to bifurcate between quasi-periodicity and chaos. The origin of the chaotic behavior lies not in the heart rate or neuronal control of the blood vessels, but in chaotic dynamics of the vasomotor response [49-51]. The chaotic behavior is produced by the interaction of two calcium oscillators in the cytoplasm, one of which is a fast oscillator maintained by voltage-dependent calcium uptake, and the other of which is a slower oscillator maintained by calcium-induced calcium release [52]. Chaotic behavior is prevented by nitric oxide and various purines, and inhibition of either of these is associated with the induction of chaotic vasomotion [6, 53]. Chaotic behavior can be induced by physically decreasing the perfusion pressure. These data indicate that blood pressure can act as a toggle between order and chaos in the control of vasomotion [49]. Vasodilator substances which lower the local pressure by increasing vessel diameter maintain the system in periodic dynamics. Vascular conductance is improved by periodic sine-wave vasomotion, so the appearance of chaotic behavior in the blood vessel is detrimental to blood flow [53, 54]. Chaotic behavior has been observed in systemic blood pressure, and has been suggested to be partly attributable to chaotic fluctuations in peripheral vascular resistance produced by chaotic vasomotion [55-57]. However, if blood pressure variability is examined in the whole animal, further information about the origin of chaotic behavior is revealed. Nitric oxide inhibitors decrease the chaotic behavior of blood pressure variability, which is consistent with their influence on chaotic vasomotion [58, 59]. Intriguingly, blockade or stimulation of adrenergic α-receptors has the same effect [58, 59]; the decrease in complexity produced by the agonist could be related to a decrease in sympathetic tone mediated by the baroreceptor reflex. Denervation of the baroreceptors also decreases chaotic behavior of blood pressure variability [60]. 
Blockade of β-adrenoceptors or of the parasympathetic nervous system with atropine has little or no effect on overall complexity [58, 59], suggesting that chaotic behavior is regulated by the sympathetic nervous system at the level of the resistance vessels. Overall, these results suggest that chaotic behavior of the blood pressure is likely to reflect the regulation of tone in the resistance vessels; periodic vasomotion is induced by any factor which decreases the local pressure, and the influence of the sympathetic nervous system on blood pressure variability may reflect a local influence on vasomotion mediated by changes in local pressure. However, blood pressure is regulated by a wide range of factors and it is likely that there are other contributors to the chaotic behavior evident in blood pressure variability.

## FRACTAL COMPLEXITY AND CHAOTIC BEHAVIOR IN THE HEART

It is well established that beat-to-beat variability exists in the heart rate; ‘regular sinus rhythm’ is, in fact, anything but. This variability was classically regarded as random, but is now known to exhibit fractal properties in time, and to exhibit chaotic behavior, indicating that the variability is in fact deterministic. The analytical methods employed to study this determinism have been reviewed recently [9], so this discussion will focus primarily on the concepts.

It is presently unclear what the underlying mechanism is for cardiovascular chaos, although it appears to be related to the function of the autonomic nervous system. Heart rate variability and the sensitivity to initial conditions are attenuated when the parasympathetic nervous system is blocked by atropine, and enhanced by blockade of the sympathetic nervous system using the β-blockers propranolol or atenolol [58, 59, 61, 62]. Chaotic behavior is not influenced by classic pharmacological modulators of the blood vessels such as nitric oxide synthase inhibitors or agonists and antagonists of α-adrenergic receptors, suggesting that it is the direct action of the autonomic nervous system on the heart which is important [58, 59, 61, 62]. It is clear that chaotic behavior is increased when parasympathetic activity exceeds sympathetic activity; this effect is demonstrable when the sympathetic and parasympathetic nervous systems are inhibited. However, the reflex cardiac parasympathetic activity induced by nitric oxide synthase inhibitors does not produce an increase in cardiac chaotic behavior. This may simply reflect the fact that the parasympathetic nervous system is already exerting its maximum effect on the heart rate dynamics; removal of sympathetic drive may further increase chaos in part because it is a physiological antagonist of the parasympathetic drive, acting at a distinct site. Differences in the degree of chaos seen in the heart are likely to reflect differences in the ‘drive’ which maintains the dynamics of the system; the bifurcation of the system from order to chaos should be a threshold effect.

The dynamics of the heart rate can also be examined using fractal analysis, and heart rate variability has been found to exhibit self-similarity [63, 64]. The importance of the fractal structure is that the self-similarity extends over many timescales; this confers the effect of memory on the system. The heart is able to repeat beating patterns it has previously used. However, it is not ‘remembering’ the rhythms per se; by assuming a fractal structure in time, the system is using a basic law of mathematics to achieve the effect of memory.
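The standard way of quantifying this long-range, fractal correlation in a heart rate series is detrended fluctuation analysis (DFA). The sketch below is a minimal, illustrative implementation written for this discussion (it is not the code used in the cited studies): it estimates the scaling exponent α from the slope of log F(s) against log s, using uncorrelated noise and a random walk as reference signals.

```python
import numpy as np

def dfa(signal, scales):
    """Minimal detrended fluctuation analysis: returns F(s) for each window size s."""
    profile = np.cumsum(signal - np.mean(signal))   # integrate the centred signal
    fluctuations = []
    for s in scales:
        n_windows = len(profile) // s
        rms = []
        for w in range(n_windows):
            segment = profile[w * s:(w + 1) * s]
            x = np.arange(s)
            trend = np.polyval(np.polyfit(x, segment, 1), x)   # local linear detrend
            rms.append(np.sqrt(np.mean((segment - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    return np.array(fluctuations)

rng = np.random.default_rng(0)
white = rng.normal(size=20000)             # uncorrelated: expect alpha ~ 0.5
walk = np.cumsum(rng.normal(size=20000))   # strongly correlated: expect alpha ~ 1.5
scales = np.unique(np.logspace(1, 3, 15).astype(int))

for name, series in [("white noise", white), ("random walk", walk)]:
    F = dfa(series, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print(f"{name:12s} alpha = {alpha:.2f}")
```

Healthy heart rate series are typically reported to give α close to 1, between these two extremes, which is the 1/f-like, fractal scaling referred to above.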
Fractal structure appears to be related, at least in part, to the presence of chaotic behavior and can be lost by bifurcations towards either order or randomness. If one studies the effects of pharmacological interventions on fractal complexity of the time series, the patterns are not the same as those seen for chaotic behavior. Denervation of the heart increases the fractal complexity of the heart rate, indicating that the fractal complexity has its origin in the heart itself, and is modulated by the autonomic nervous system [65]. Intriguingly, both the sympathetic and the parasympathetic nervous systems appear to decrease the fractal complexity of the heart rate variability [66-69]. Chaotic behavior and fractal complexity of the time-series therefore reflect different properties of the heart which respond differently to pharmacological interventions; atropine, for example, decreases chaotic behavior but increases fractal complexity. What are the differences between chaotic behavior and fractal complexity? Fractal analysis essentially reveals the structure of the signal, whereas chaos theory examines the underlying dynamics of the system that generates the signal. Fractal structure is therefore a more superficial measure of the state of the system, but it is easier to apply and has found clinical applications. Fractal complexity and chaos are related to pathology in two clinical scenarios. In the setting of acute cardiovascular events such as myocardial infarction or arrhythmias, increased fractal complexity is associated with increased mortality, and is a superior predictor of mortality compared with more conventional measures of heart rate variability [65, 70-76]. The reason for this correlation is unknown. If one considers the dynamics of the heart in this setting, arrhythmias appear to represent a state in which the system is driven away from the border between order and chaos and towards true randomness; in this context, the increased chaos may be regarded as harmful. There has been much discussion over whether increases or decreases in chaotic behavior are beneficial or harmful in the context of acute cardiac events. There are two aspects of chaos to consider. The first is where the chaos occurs. Helpful chaotic behavior is probably due to chaotic behavior in the sinoatrial node.[77] However, if chaotic behavior appears independently in the rest of the conduction system, this may interfere with the cardiac cycle and be harmful. The second is whether the system is being maintained close to the border between order and chaos; any movement away from this border could be harmful, be it towards randomness or periodicity. There are therapeutic implications for this. It has been found that, with the use of properly timed stimulation delivered through pacemakers, the heart can be stimulated to bifurcate from chaos to periodic behavior (chaos control) or from periodic behavior to chaos (chaos anti-control). This approach has been used as a novel approach to treat arrhythmias, and both chaos control and chaos anti-control have been advocated as strategies [78-80]. By contrast, in the setting of chronic heart failure, fractal complexity is decreased, and in this case it is the decrease in complexity which is associated with increased mortality [81-84]. When fractal complexity breaks down, the breakdown can reflect a cascading of the system into true randomness or a reversion into periodic order. 
Heart failure may cascade the system in either direction, producing random or periodic behavior; in both cases the fractal complexity is seen to decrease, and in both cases the effect is associated with increased mortality [85]. Fractal complexity and chaotic behavior of the heart have both been found to decrease with ageing, and this loss of complexity is also believed to be detrimental (see [7, 86] for recent reviews). Why is the loss of chaotic behavior harmful? Loss of chaotic behavior clearly creates a loss of flexibility in the system. However, it also leads to a loss of information storage and generation [5, 87, 88]. The ability to store and transmit information is lost because random behavior is meaningless, and periodic behavior simply repeats the same information over and over again. This is reflected in the loss of fractal complexity in the heart rate signal; the ‘memory effect’ conferred by the fractal structure is lost. Chaotic behavior is unpredictable behavior, and unpredictable behavior allows for a physiological ‘freedom of expression’; the key to generating useful information is the ability to change. In a recent review, Goldberger [89] pointed out that many disease states could be regarded as producing a breakdown in complexity, leading to ordered periodic behaviors: “To a large extent, it is these periodicities and highly-structured patterns – the breakdown of multi-scale complexity under pathological conditions – that allow clinicians to identify and classify many pathologic features of their patients.” [89]. Where does chaotic behavior in the heart come from? Chicken embryo heart cells provide one of the best examples of physiological chaos ever observed [90, 91]. Pacing is a basic property of cardiomyocytes, and chicken embryonic heart cells are seen to spontaneously beat in a regular fashion. When an external electrical stimulus is applied, the timing of the next endogenous beat is altered; it can occur sooner or later. The system is therefore driven by two bio-oscillators; the internal oscillator of the cell’s own pacing rhythm, and the external electrical impulses [90, 91]. The cells will beat with regular periodicity in response to some external rhythms, but will revert to their own endogenous rhythms in response to others. This is chaotic behavior, with bifurcations induced by the external applied rhythm. The effect is to introduce beat to beat variability. In the intact heart, the property of spontaneous self-excitation is a property of all the excitable tissue found therein, but the pacemaker of the heart is the sino-atrial node. The path of excitation from the sinoatrial node, through the atria to the atrioventricular node and then to the His-Purkinje network generates the classic familiar pattern of the electrocardiogram (ECG). Analysis of the ECG reveals that there are irregularities in the record, the same beat-to-beat variability observed when heart rate data are analyzed, and analysis of the ECG has revealed evidence of chaotic behavior [4, 92, 93]. In the normal setting, this chaos is good. Studies of the spatial evolution of the cardiac electrical activity have revealed that the electrical behavior of the heart can be understood, in part, in terms of reaction-diffusion processes. These processes produce spiral waves, a known precursor to chaotic behavior, which appear when the heart goes from normal rhythm to a tachycardia and then break up as the heart transitions to fibrillation and the system bifurcates towards harmful chaos [77]. 
This spatial evolution model suggests a mechanistic difference between the harmful chaos and the beneficial chaos. One could speculate that bifurcations in cardiac rhythm are harmful because these produce chaos in the conduction system. However, the chaotic variability of normal sinus rhythm is beneficial, because this represents an interplay between the conduction system and the heart muscle which confers efficiency and flexibility. How can chaotic behavior in the heart be permanently lost? An explanation is most likely to be found in the structural changes in the heart produced by pathology and by aging. The structural properties of the heart determine its ability to respond to stimulation; if the heart is less compliant due to remodeling, or there is a loss of cardiomyocytes, the mechanical function of the heart could be constrained and forced toward more periodic behavior. Chaos in the heart may not just arise in the conduction system. Metabolic pathways can bifurcate towards chaos, and metabolism is intricately related to function. Calcium handling is a key regulator of contractile force, and chaotic behavior within these pathways could beget chaotic behavior in the generation of contractile force. Where does fractal complexity in the heart rate signal come from? To a certain extent, fractal complexity may reflect underlying chaos, and it is lost by a bifurcation away from chaos in either direction (towards order or randomness). However, some of the fractal complexity of the heart may have its own unique origins. If one considers the endogenous rhythms of the cardiomyocytes, a fractal structure may exist in the multioscillator system of the cardiac syncytium. If one considers the conduction system in the heart, there is a fractal structure to the His-Purkinje system, which could beget fractal organization of the heart rate signal. The relationship between fractal complexity and underlying chaos may not be as straightforward as the discussion in this review has so far assumed. This is clearly illustrated by the effects of the parasympathetic nervous system, which increases chaos in the heart rate, but decreases fractal complexity. The major influence of the parasympathetic nervous system is on the conduction system; the ventricles receive sparse parasympathetic innervation. It is possible that, by decreasing conduction through the heart, the parasympathetic nervous system releases some of the cardiomyocytes from the unifying influence of the His-Purkinje system and the dual oscillator system is more prone to chaotic behavior. At the same time, a decrease in the influence of the fractal His-Purkinje system produces a decrease in the fractal complexity of the signal. In this model, the conduction system is the origin of the fractal complexity, whereas the interplay between the conduction system and the heart muscle is the origin of the chaotic behavior. The sympathetic nervous system innervates the entire heart and increases the rate and force of contraction; it can impose order on the conduction system and the heart muscle, and may therefore drive the dual oscillator towards more periodic behavior. If the heart becomes more periodic in its behavior, fractal complexity of the heart rate will also decrease. This may explain why denervation of the heart increases fractal complexity; it would be the loss of the sympathetic drive which allows fractal complexity to increase. 
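The interplay between an intrinsic pacemaker and a periodic input, of the kind described for the chicken embryo cells and the dual-oscillator picture above, is often illustrated with the sine circle map. The sketch below is a generic textbook example, not the model used in the cited studies: depending on the drive frequency Ω and coupling strength K, the phase of the paced oscillator can lock into a periodic rhythm, drift quasiperiodically, or become chaotic, which the code diagnoses with an estimate of the Lyapunov exponent (negative for locking, near zero for quasiperiodicity, positive for chaos).

```python
import numpy as np

def lyapunov_circle_map(omega, k, n_steps=20000, n_discard=1000, theta0=0.2):
    """Iterate theta_{n+1} = theta_n + omega - (k/2pi) sin(2pi theta_n) (mod 1)
    and estimate the Lyapunov exponent from the average log-derivative."""
    theta = theta0
    log_derivs = []
    for i in range(n_steps):
        deriv = abs(1.0 - k * np.cos(2 * np.pi * theta))
        theta = (theta + omega - (k / (2 * np.pi)) * np.sin(2 * np.pi * theta)) % 1.0
        if i >= n_discard:
            log_derivs.append(np.log(max(deriv, 1e-300)))  # guard against log(0)
    return np.mean(log_derivs)

golden = (np.sqrt(5) - 1) / 2   # a standard irrational drive ratio
cases = [("mode-locked (K<1, rational drive)", 0.5, 0.9),
         ("quasiperiodic (K<1, irrational drive)", golden, 0.9),
         ("strongly coupled (K>1, chaos possible)", golden, 3.0)]

for label, omega, k in cases:
    lam = lyapunov_circle_map(omega, k)
    print(f"{label:40s} Lyapunov exponent ~ {lam:+.3f}")
```

Shifting the parameters moves the system across these regimes, which is exactly the kind of bifurcation behaviour between periodic, quasiperiodic and chaotic beating described above.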
## CELLULAR DYNAMICS

The example of cardiovascular chaos illustrates that chaotic behavior at the level of the system is conferred by behavior at the cellular level. It is therefore important to consider the cellular level in more detail if the mechanistic bases of chaos and fractal complexity are to be understood. The central dogma of molecular biology implies the existence of a hierarchy within the cell in which each organizational level is given a particular task: DNA stores information, RNA processes information, proteins execute the programs and metabolites fuel and fine-tune the programs. However, the true distribution of cellular functions within the cell is more complex than this. The proteome is the repository of short-term information storage, the metabolome is an important controller of gene expression, and RNA can execute cellular programs by influencing gene expression or regulating the subcellular targeting of the proteome [94-98]. It is therefore clear that, instead of being distinct levels in a linear chain, the genome, transcriptome, proteome and metabolome fulfil their functions by forming complex networks.

Surprisingly, gene, protein and metabolic networks are all organized according to the same principles and form a type of network which is referred to as ‘scale-free’ [99]. In a scale-free network, the number of interconnections formed by a node, referred to as its degree, obeys a power-law distribution, i.e. there are groups of nodes which form large numbers of interactions, and others which form only a few. Power-law functions are characteristic of fractals, and the cellular networks can sometimes assume a fractal structure. Oltvai and Barabasi described this new paradigm in terms of a complexity pyramid in which the uniqueness conferred at the level of individual cellular components is integrated using common organizing principles [98]; this is summarized in Fig. (3).

Metabolic pathways represent an archetypal network in which to consider network dynamics. The topology of the metabolic network (substrate supply, energy production in the mitochondria, energy utilization) is scale-free. Within the network, evidence is emerging of compartmentalization. ATP generated by glycolysis is used to fuel membrane-bound transporters and ion channels in the sarcolemma and sarcoplasmic reticulum, whereas ATP generated by mitochondrial oxidative phosphorylation is targeted to cytoplasmic processes such as the generation of contractile force (see [100] for review). Chaotic behavior has been frequently related to the interaction that occurs between oscillators, and metabolic pathways are replete with them. There are ultradian rhythms in oxygen consumption. Within the metabolic machinery, there are more irregular oscillators. A well-studied example is the matrix membrane potential, whose energy is harnessed to generate ATP. The matrix membrane potential undergoes asynchronous oscillations which are triggered by reactive oxygen species (ROS) in a feed-forward cycle of ROS-induced ROS release. The function of these oscillations is unclear, but they may represent a mechanism for clearing ROS. Weiss et al. proposed that if these fluctuations are asynchronous, the induction of oscillations could be used to ‘neutralize’ unneeded mitochondria at times of low energy demand. However, if the fluctuations become synchronous, all the mitochondria would synthesize and consume ATP according to a cyclic pattern in unison, and energy supply would be uncoupled from energy demand [100].
The switch towards synchronous fluctuations in membrane potential represents a loss of complexity in the system, a harmful bifurcation toward periodic behavior. Another well-studied metabolic pathway is glycolysis. Glycolytic flux can be constant or periodic when glucose is provided at a single constant level. When the input of glucose is periodic, glycolytic flux can be either periodic or chaotic depending on the amplitude and frequency of the glucose input [101, 102]. It is likely that most biochemical pathways can exist in a number of oscillatory states and can bifurcate from periodic to chaotic behavior. The study of such phenomena in biochemical pathways is limited by technology: it is not easy to track and record metabolites in real time as can be done with, for example, calcium.

The next level of organization to consider is communication between cells. Such communication can arise through the exchange of autocrine or paracrine factors, or occur through direct contact. Ultradian rhythms display fractal properties, and the oscillators present within cell populations have the intriguing ability to self-synchronize (see [103] for review). In the model of cell synchronization proposed by Brodsky, the coordinating signal which synchronizes protein synthesis oscillations is a loss of gangliosides throughout the cell population. The resulting increase in intracellular calcium activates protein kinases which modulate the periods of their target oscillators and produce a synchronous rhythm in the entire population [103]. However, the fractal structure of the multi-oscillator system is preserved. Cell-cell communication is of importance when considering phenomena such as the conducted vasomotor response which, as discussed above, can exhibit both periodic and chaotic behavior.

## THE EVOLUTIONARY ORIGINS OF CELLULAR DYNAMICS

The evolutionary origins of the cellular dynamics described above can be understood using the concept of self-organization. Self-organization is a phenomenon first described by Alan Turing, in which repeated cycles of repulsion and attraction between units increase the internal complexity of a system, creating new levels of complexity without the direction of an external force or program [104]. Self-organization as a mechanism of evolution can be understood as follows. The initial behavior of the system is random; there are no meaningful interactions. However, because of their own properties, the components of the system will gradually start to interact with some of their neighbours and repel others. As the interactions become stronger, the system can bifurcate to become chaotic and then finally bifurcate again to become orderly. This process is exemplified by the spontaneity with which fish come together to form shoals or birds come together to form flocks. At the moment of self-organization, the dynamics of the system may bifurcate to create a new, more complex behavior. In order for the new network to stabilize, the interactions of its components need to be strong so that repeated cycles of feedback can reinforce the new structure. However, if the interactions are too strong, the network will become rigid and lose its ability to adapt. When the principle of self-organization is applied to evolution, it is seen that self-organization can generate complex systems which are then moulded by natural selection until they exist at the boundary between order and chaos.
The fully evolved system exists at the very edge of stability, and an intricate system of feedback, developed by evolution, keeps it there. When the system is pushed into chaotic behavior, the system quickly reverts back to order; this enables the system to maintain its flexibility without losing its structure [105, 106]. Little is known about how the first cellular networks evolved, but the principle of self-organization suggests a possible mechanism. Network dynamics in biological systems can be understood, at least in part, in terms of wave propagation through an excitable medium. It has been known for more than half a century that chemical reactions can spread in an oscillating manner akin to wave propagation. According to a theory first developed by Alan Turing, oscillations and chemical waves can self organize into a cellular network [104]. In this case, a chemical (e.g. a morphogen or second messenger) is synthesized rapidly at a particular location but diffuses slowly. At the moment of synthesis, a localized peak concentration is achieved. The chemical then diffuses into the surrounding medium and the concentration at the site of synthesis begins to fall. However, the next burst of synthesis gives rise to a second peak, and the process repeats itself. Repeated intermittent ‘bursts’ of chemical synthesis result in a chemical oscillator which gives out chemical diffusion waves. Chemical oscillators can exhibit constant, periodic and chaotic behavior, behaving as a chaotic system with bifurcations [107]. The chemical diffuses out to interact with neighbouring processes and targets, and can elicit responses not only by its unique identity, but by its behavior in time. In addition, the site at which the chemical is synthesized enables site-specific responses to be elicited, which can lead to spatial organization of signaling, or contribute to the formation of architecture. As similar processes occur in parallel, the collective behavior of the system can suddenly bifurcate to produce a more complex behavior, and a functional module will have formed. Additional cytoplasmic structures can refine the function of the module. Aon et al. recently proposed a model based on the phenomenon of percolation, in which the cytoskeleton controls the paths that effector molecules take through the cell [108]; their model implies that the chemical diffusion waves can be directed and distributed, or blocked by the cytoskeleton. An example of the effects of cytoskeletal organization on a network is provided by the study of Aon and Cortassa, who found that increasing the extent of microtubular protein polymerization increased glycolytic flux in yeast [109]. Self-organization can create complexity independently, and provides an explanation of how function can gradually emerge from a sea of random behavior. However, if this is applied to evolution there is a major problem: how is the complexity passed on? Self-organisation, by definition, occurs without genetic influence, yet, in order for the complexity to survive in the species and be moulded by evolution, there must be traits to the new complexity which are heritable. It is unclear how complexity generated by self-organization can be propagated to an entire species, but knowledge of evolutionary processes permits some speculations. One would expect that the first time a new complex behavior appears, it will probably be lost. 
However, if a particular genetic background favors the generation of complex behaviors which improve survival, the favorable traits will be passed on and allow for the continued appearance of beneficial complex behaviors, which will eventually survive in the species. The genome may gradually evolve to encode more of the behavior, essentially ‘recording’ it and leaving less to self-organization. The first step to testing this idea is to quantify genetic influences on traits believed to have arisen by self-organization. No studies have attempted to quantify genetic influences on chaotic dynamics, as our technological and mathematical tools are not yet up to this challenge. However, structural traits can be more easily studied. Fractal structures are believed to form largely by self-organization with limited genetic input. Recent studies have attempted to quantify genetic influence on fractal structure, but the data are conflicting. Glenny and coworkers recently quantified the genetic influence on flow distribution in the monozygotic offspring of armadillos, and concluded that two-thirds of fractal vascular geometry was determined by genetic influences [110]. The structure of the fractal retinal vasculature has been studied in monozygotic and dizygotic twins, and the genetic influence in this case was small – only vessel tortuosity was found to be genetically determined [111]. These initial studies do not allow any definitive conclusions to be drawn. However, there is probably a balance and a division of labor between the ability of a biological system to self-organize and the imposition of instructions from the genome. This is an important and virtually untouched area of research.

## THE INFLUENCE OF DYNAMICS ON FUNCTION

The choice of dynamics has important consequences for the function of physiological systems. The biochemical machinery of the cell has evolved to function under clearly defined conditions of pH, temperature, osmolarity, and at well-defined levels of a host of electrolytes, nutrients and proteins. This is why the maintenance of equilibrium is important: the basic machinery which drives all functions is vulnerable to changes in these conditions. Periodic behavior is seen in the ultradian and circadian rhythms of the body. These evolved from the need to develop a sleep-wake cycle that is synchronised with the cycles of light and dark experienced on the earth’s surface. The clocks which drive circadian rhythms must maintain periodic behavior or they will lose their ability to keep time. Periodic behavior is also seen in the sine-wave behavior of vasomotion; if this is lost, the movement of blood becomes less efficient. Quasiperiodic behavior is the most complicated behavior of linear dynamics and it defines the limits of order. It is seen in multi-oscillator systems such as the cardiac pacemaker. Many of these systems evolved to exist at the border between order and chaos, and their function requires them to take advantage of both as the need arises. Chaotic behavior confers flexibility and efficiency on the system; it is usually beneficial, unless periodic behavior is integral to the function of the system, as it is in the vasomotor response. Random behavior represents a complete breakdown of the system – fatal arrhythmias represent such a breakdown. Random behavior is also the initial behavior of unrelated components from which cellular networks first emerged.
This paradigm is summarized in Fig. (4). Fractal complexity in time is likely to be due, at least in part, to underlying chaotic behavior. Chaos gives the system the ability to generate and store information. Fractal complexity in time, such as that seen in the heart rhythm, is one example of this; it allows the system to repeat behaviors it has previously used. However, the system does not need to remember the behavior. Instead of developing a separate repository into which the information is saved, evolution has made the intriguing choice of using a basic law of mathematics to achieve the effect of memory. The loss of chaotic behavior in either direction (towards order or random behavior) leads to a loss of fractal complexity and a loss of this memory effect.

## LIMITATIONS OF CHAOS THEORY AND FRACTAL MATHEMATICS

The use of fractal mathematics and chaos theory presents significant difficulties both at the level of the theory and at the level of application. At the present time, the properties of fractals are incompletely described, and further work is needed to discover new mathematical descriptors which can be applied to fractal analysis. However, the major drawback of fractal analysis is the lack of statistical tools which can determine whether differences in the properties of fractal objects or processes are significant. Because fractals have such bizarre statistical properties, this may require the development of a fundamentally new kind of statistics. Until these tools are developed, we are limited to asking if a structure or process has a fractal structure, and to what extent, which does not do justice to the true potential of the fractal concept. It also does not permit detailed mechanistic studies to determine the origin of fractal structure.

Chaos theory in its current form is also limited. At its present stage of development, it can be used to ask if experimental data were generated by a random or deterministic process, but it is a difficult and frustrating analytical approach to use. It is not clear how much data are required in order to construct the phase space set and determine its fractal dimension; the amounts of data are likely to be extremely large, and biological systems may not remain in a single state long enough to gather the required amounts of data. A low dimensional attractor is used as evidence of a deterministic system, but should be interpreted with caution; it is possible to produce a low dimensional attractor by constraining the choices available to a random process. A low dimensional attractor therefore does not definitively establish that a process is deterministic. There are also problems with artifacts introduced because of the sampling interval used to collect the data or inappropriate assumptions in the actual equations used to transform the raw data. Finally, there is also the problem of statistical hypothesis testing. Bifurcations are easy to detect and do not require statistical analysis. However, to detect more subtle changes in the attractor, a new statistical approach will be required.

## CONCLUSION

To obtain an integrated understanding of physiology, we require an understanding of the complex dynamics of physiological systems. The full promise and potential of fractal mathematics and chaos theory remain to be realized, and await further development of the theories. Nevertheless, these concepts have already provided revolutionary insights into the nature of living things. A new frontier has been opened.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8032507300376892, "perplexity": 813.1369755454144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487653461.74/warc/CC-MAIN-20210619233720-20210620023720-00064.warc.gz"}
https://brilliant.org/discussions/thread/wiki-weekend-party-15-number-theory/
# Wiki Weekend Party 15 - Number Theory

Welcome to the Wiki Weekend Party! Every week, we try to encourage all of you by providing a list of interesting and/or challenging wikis that need writeups or improvement. Thanks to your help, we have over 800 wiki pages on Brilliant now, and it’s getting harder and harder to find a topic or technique that doesn’t have at least a cursory explanation here. But math is big! There’s lots more to write about. Please look through the Number Theory pages below and see if there’s anything you can contribute to. We've also made it obvious which wiki pages are empty / lacking content, by hiding the icon display. We would appreciate if you added some information to those pages, so as to help out others!

Note by Calvin Lin 5 years, 2 months ago

Hello sir. I have edited the Place Value wiki. - 5 years, 2 months ago

Thanks! Staff - 5 years, 2 months ago

@Calvin Lin Sir, I have added the GCD/LCM Wiki. I will soon complete it. - 5 years, 2 months ago

Thanks! It looks great! Staff - 5 years, 2 months ago

Thank you sir! ^_^ - 5 years, 2 months ago

I added some problems to the LCM wiki. There were already several people who edited it so my face isn't showing. - 5 years, 2 months ago

Sir, can I add Wikis on any senior high school level topic in Mathematics? - 5 years, 2 months ago

Certainly!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.953137993812561, "perplexity": 4365.856810478759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191160.14/warc/CC-MAIN-20200919075646-20200919105646-00628.warc.gz"}
http://www.uaf.edu/anlc/tanacross/writing/
## Writing Tanacross Without Special Fonts

### Gary Holton

#### Alaska Native Language Center

### Introduction

One of the greatest barriers to writing Native language actually has nothing to do with the language itself. Rather, it has to do with the difficulty of using computers to write the special characters used in the writing system. Most generally available fonts are designed for writing English and other European languages. The characters necessary for writing Tanacross are not available in these fonts. Even the new "Unicode" font, which claims to contain all of the characters necessary for all of the world's writing systems, does not always adequately represent some Tanacross characters (for example, the nasal hook).

There are fonts available for writing Tanacross language. One commonly-used font is the Tanana font produced by LinguistSoftware. This is a commercial product which sells for approximately \$100. While using the LaserTanana font will enable you to write on a computer and print out the necessary characters, it is important to note that using this font to compose electronic documents will not guarantee that others will be able to read your electronic document. That is, a file composed using the LaserTanana font and emailed to another person may not be readable by that person. There are several reasons for this. First, the person receiving the file must have the LaserTanana font installed on their computer in order to be able to read the file. Second, the version of the LaserTanana font on the receiving computer must be the same as that on the sending computer. The Macintosh and Windows versions of the LaserTanana font are not compatible! And the Tanana font is not fully compatible with other LinguistSoftware fonts such as LaserYukon or LaserGwichin!

### Tanacross special characters

Given all these complexities it would be very useful if we could figure out a way to write Tanacross language using a standard font. In the next section I will describe one such way which has been used successfully by linguists and language workers in Alaska. But first, let's begin by reviewing the Tanacross characters which are difficult to represent using the standard English font. These fall into four categories:

- barred-l (ł)
- tone
- nasal hook (ą)
- underscored consonants (ł, th, s, sh, x)

Let's look more closely at each of these categories. First, the barred-l. There's really no way to get around this. The barred-l and the plain-l are distinct sounds in Tanacross, and they require different letters. So whatever we do to make Tanacross easier to write, we will have to have a special symbol for barred-l.

Next, let's consider tone. This is a tough one. Tone is an important part of Tanacross language. Knowing the correct pitch to place on various syllables of a word is crucial to achieving a Native-speaker pronunciation. Thus, shos nek-'ęh 'I see a bear' has the pitch pattern low-low-low, while ts' nk-'ęh 'I see a beaver' has the pitch pattern high-high-low. However, in spite of the importance of pitch to pronunciation, tone does not play much of a role in distinguishing word meanings. In this sense Tanacross is very different from tone languages such as Chinese, where a single syllable such as ma could represent five different words depending on what tone it is pronounced with. So, even if we omit the tone markings altogether, we will rarely encounter a situation of ambiguity, where we are unable to tell what was meant by a certain word.
The worst that will happen is that we may not know exactly how to pronounce the word. Of course, this is only true for unfamiliar words; for words we already know we can easily supply the correct tone pattern.

What about the nasal hook? In the Mentasta dialect of Ahtna, nasalized vowels are represented by a following n. This is more difficult to do in Tanacross because Tanacross contrasts nasalized vowels with a sequence of vowel followed by n. Thus, ąą 'yes' is different from aan 'come!'. One way to get around this problem is to use a double n to indicate a 'real' n, leaving the single n to represent nasalization.

| | Current Tanacross spelling | Ahtna-type spelling |
|---|---|---|
| 'yes' | ąą | aan |
| 'come!' | aan | aann |

However, this approach is somewhat awkward, as it requires us to write double n's all over the place. Even a common word like 'people' ends up as denndeey iinn. The other problem with this approach is that it is difficult to know whether or not a writer is following it in a given word. Does a single n mean n, or does it mean nasalization? We can't tell unless the writer tells us what system they are using. So we need some way to represent nasalization. We can't really get around it.

Finally, let's consider the underscored consonants. Underlining is actually quite easy to achieve using most word-processors. However, formatting such as underlining is often lost as computer files are passed back and forth; hence, it is best to avoid relying on formatting to represent characters. How important is the underscore? There are some analogies to be drawn with tone. The underscore indicates that a sound is "semi-voiced", that is, pronounced somewhere in between a voiceless sound and a voiced one. Thus, the semi-voiced s is pronounced somewhere between the voiceless sound s and the voiced sound z. In a way the semi-voiced sound s actually shares more in common with z than with s. If the underscore is omitted, then the semi-voiced sound will look just like the voiceless sound. This can cause some confusion regarding the pronunciation of the sound, but there is rarely confusion as to meaning. For example, compare łii 'dog' and wuł_iig' 'his dog'. The first word has the voiceless barred-l, while the second word has the semi-voiced underscored barred-l. If we were instead to write the second word as wułiig' there would be no confusion as to the meaning of the word – it would still mean 'his dog' – but there could be some confusion as to pronunciation, if you didn't already know that this word had a semi-voiced sound in it.

### Using a standard font

Given what we know about the sounds represented by the special characters in the Tanacross alphabet, it is possible to use a standard font and keyboard to represent those sounds. In this system the barred-l is represented by a backslash ("\"). The nasal hook is represented by a tilde ("~"). Tone marking is omitted. Underscore can be represented by an underscore following the letter or can be omitted altogether. The following table compares this system of standard font symbols with the writing system which makes use of special font symbols.

| Character | Special font symbol | Standard font symbol |
|---|---|---|
| barred-l | ł | \ |
| nasal hook | ̨ | ~ |
| tone | ´ | (omit) |
| underscore | s (with underscore) | s_ (or omit) |

Let's take a look at some examples which compare the use of the two systems.

| Standard font | Special font | Gloss |
|---|---|---|
| tsa' nek-'e~h | ts' nk-'ęh | 'I see a beaver' |
| wu\iig' | wułiig' | 'his dog' |
| she~e~' | shęę' | 'only' |

It may take some time to grow accustomed to the use of the backslash and tilde characters.
Of course, there are many words which will look identical or almost identical in the two systems. Examples include shi' 'meat', shos 'bear', tsets 'firewood', and kon' 'fire'. This system of standard font symbols can be used on any computer with any program. It can be used in email and on the web. There is never a need to install a font. The characters never show up garbled in a file sent to another person. If a prettier font is needed for publication or distribution, it is extremely easy to convert to a special font like LaserTanana. Simply use a word processor to globally replace the backslash with barred-l and the tilde with nasal hook.
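On a modern Unicode-capable system, that final conversion step can also be done programmatically. The following short Python sketch is an illustration added here (it is not from the original article, and it maps the backslash to ł and the tilde to the combining ogonek U+0328 rather than to the LaserTanana font's own code points):

```python
import unicodedata

def to_special(text):
    """Convert standard-font Tanacross spelling to Unicode:
    backslash -> barred-l (ł), tilde after a vowel -> nasal hook (combining ogonek)."""
    converted = text.replace("\\", "\u0142").replace("~", "\u0328")
    # Normalize so that vowel + ogonek becomes a single precomposed character
    # where one exists (e.g. e + ogonek -> ę).
    return unicodedata.normalize("NFC", converted)

print(to_special(r"wu\iig'"))   # -> wułiig'
print(to_special("she~e~'"))    # -> shęę'
```

Tone marks and underscores would still have to be supplied by hand, exactly as in the word-processor workflow described above.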
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.852569043636322, "perplexity": 1645.4681624174443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462555.21/warc/CC-MAIN-20150226074102-00017-ip-10-28-5-156.ec2.internal.warc.gz"}
https://eventuallyalmosteverywhere.wordpress.com/tag/chains/
# Antichains in the grid

In the previous post on this topic, we discussed Dilworth’s theorem on chains and antichains in a general partially ordered set. In particular, whatever the size of the largest antichain in a poset, it is possible to partition the poset into exactly that many chains. So for various specific posets, or the directed acyclic graphs associated to them, we are interested in the size of this largest antichain. The following example turned out to be more interesting than I’d expected.

At a conventional modern maths olympiad, there are typically three questions on each paper, and for reasons lost in the mists of time, each student receives an integer score between 0 and 7 per question. A natural question to ask is “how many students need to sit a paper before it’s guaranteed that one will score at least as highly as another on every question?” (I’m posing this as a straight combinatorial problem – the correlation between scores on different questions will be non-zero and presumably positive, but that is not relevant here.) The set of outcomes is clearly $\{0,1,\ldots,7\}^3$, with the usual weak domination partial order inherited from $\mathbb{R}^3$. Then an antichain corresponds to a set of triples of scores such that no triple dominates another triple. So the answer to the question posed is: “the size of the largest antichain in this poset, plus one.” In general, we might ask about $\{1,2,\ldots,n\}^d$, again with the weak domination ordering. This directed graph, which generalises the hypercube as well as our example, is called the grid.

Heuristics for the largest antichain

Retaining the language of test scores on multiple questions is helpful. In the previous post, we constructed a partition of the poset into antichains, indexed by the elements of some maximal chain, by starting with the sources, then looking at everything descended only from sources, and so on. (Recall that the statement that this is possible was referred to as the dual of Dilworth’s theorem.) In the grid, there’s a lot of symmetry (in particular under the mapping $x\mapsto n+1-x$ in every coordinate), and so you end up with the same family of antichains whether you work upwards from the sources or downwards from the sinks. (Or vice versa depending on how you’ve oriented your diagram…) The layers of antichains also have a natural interpretation – each layer corresponds to a given total score. It’s clear a priori why each of these is an antichain. If A scores the same as B overall, but strictly more on the first question, this must be counterbalanced by a strictly lower score on another question.

So a natural guess for the largest antichain is the largest antichain corresponding to some fixed total score. Which total score should this be? It ought to be the middle layer, that is, total score $\frac{(n+1)d}{2}$, or the two values directly on either side if this isn’t an integer. My intuition was probabilistic. The uniform distribution on the grid is achieved by IID uniform distributions in each coordinate, which you can think of as a random walk, especially if you subtract off the mean first. It feels like any symmetric random walk should have mode zero or next-to-zero. Certainly this works asymptotically in a rescaled sense by CLT, and in a slightly stronger sense by local CLT, but we don’t really want asymptotics here.
When I started writing the previous paragraph, I assumed there would be a simple justification for the claim that the middle layer(s) was largest, whether by straight enumeration, or some combinatorial argument, or even generating functions. Perhaps there is, and I didn't spot it. Induction on d definitely works though, with a slightly stronger hypothesis that the layer sizes are symmetric around the median, and monotone on either side of the median. The details are simple and not especially interesting, so I won't go into them.

From now on, the hypothesis is that this middle layer of the grid is the largest antichain. Why shouldn't it, for example, be some mixture of middle-ish layers? (*) Well, heuristically, any score sequence in one layer removes several possibilities from a directly adjacent layer, and it seems unlikely that this effect is going to cancel out if you take some intermediate number of score sequences in the first layer. Also, the layers get smaller as you go away from the middle, so because of the large amount of symmetry (coordinates are exchangeable etc), it feels reasonable that there should be surjections between layers in the outward direction from the middle. The union of all these surjections gives a decomposition into chains.

This result is in fact true, and its proof by Bollobas and Leader, using shadows and compression, can be found in the very readable Sections 0 and 1 of [1]. Most of the key ideas to a compression argument are present in the case n=2, for which some notes by Leader can be found here, starting with Proof 1 of Theorem 3, the approach of which is developed over subsequent sections. We treat the case n=2, but focusing on a particularly slick approach that does not generalise as successfully. We also return to the original case d=3 without using anything especially exotic.

Largest antichain in the hypercube – Sperner's Theorem

The hypercube $\{0,1\}^d$ is the classical example. There is a natural correspondence between the vertices of the hypercube, and subsets of $[d]$. The ordering on the hypercube corresponds to the ordering given by containment on $\mathcal{P}([d])$. Almost by definition, the k-th layer corresponds to subsets of size k, and thus includes $\binom{d}{k}$ subsets. The claim is that the size of the largest antichain is $\binom{d}{\lfloor d/2 \rfloor}$, corresponding to the middle layer if d is even, and one of the two middle layers if d is odd. This result is true, and is called Sperner's theorem.

I know a few proofs of this from the Combinatorics course I attended in my final year at Cambridge. As explained, I'm mostly going to ignore the arguments using compression and shadows, even though these generalise better. As in the previous post, one approach is to exhibit a covering family of exactly this number of disjoint chains. Indeed, this can be done layer by layer, working outwards from the middle layer(s). The tool here is Hall's Marriage Theorem, and we verify the relevant condition by double-counting. Probably the hardest case is demonstrating the existence of a matching between the middle pair of layers when d is odd.

Take d odd, and let $d':= \lfloor d/2\rfloor$. Now consider any subset S of the d'-th layer $\binom{[d]}{d'}$. We now let the upper shadow of S be $\partial^+(S):= \{A\in \binom{[d]}{d'+1}\,:\, \exists B\in S, B\subset A\},$ the sets in the (d'+1)-th layer which lie above some set in S. To apply Hall's Marriage theorem, we have to show that $|\partial^+(S)|\ge |S|$ for all choices of S.
We double-count the number of edges in the hypercube from $S$ to $\partial^+(S)$. Firstly, for every element $B\in S$, there are exactly $d'+1$ relevant edges, since $B$ has $d-d'=d'+1$ neighbours in the layer above, all of which lie in $\partial^+(S)$. Secondly, for every element $A\in\partial^+(S)$, there are exactly $d'+1$ edges down to elements of $\binom{[d]}{d'}$, and so in particular there are at most $d'+1$ edges to elements of S. Thus $(d'+1) |S|=|\text{edges }S\leftrightarrow\partial^+(S)| \le (d'+1) |\partial^+(S)|,$ which is exactly what we require for Hall's MT. The argument for the matching between other layers is the same, with a bit more notation, but also more flexibility, since it isn't a perfect matching.

The second proof looks at maximal chains. Recall, in this context, a maximal chain is a sequence $\mathcal{C}=B_0\subset B_1\subset\ldots\subset B_d$ where each $B_k\in \binom{[d]}{k}$, that is, $|B_k|=k$. We now consider some largest-possible antichain $\mathcal{A}$, and count how many maximal chains include an element $A\in\mathcal{A}$. If $|A|=k$, it's easy to convince yourself that there are $k!\,(d-k)!$ such maximal chains. However, given $A\ne A'\in\mathcal{A}$, the set of maximal chains containing A and the set of maximal chains containing A' are disjoint, since $\mathcal{A}$ is an antichain. Since there are $d!$ maximal chains in total, we obtain $\sum_{A\in\mathcal{A}} |A|!\,(d-|A|)! \le d!,$ or equivalently $\sum_{A\in\mathcal{A}} \binom{d}{|A|}^{-1} \le 1.$ (**) Normally after a change of notation, so that we are counting the size of the intersection of the antichain with each layer, this is called the LYM inequality after Lubell, Yamamoto and Meshalkin. The heuristic is that the sum of the proportions of layers taken up by the antichain is at most one. This is essentially the same as earlier at (*). This argument can also be phrased probabilistically, by choosing a *random* maximal chain, and considering the probability that it intersects the proposed largest antichain, which is, naturally, at most one. Of course, the content is the same as this deterministic combinatorial argument. Either way, from (**), the statement of Sperner's theorem follows rapidly, since we know that $\binom{d}{|A|}\le \binom{d}{\lfloor d/2\rfloor}$ for all A.

Largest antichain in the general grid

Instead of attempting a proof or even a digest of the argument in the general case, I'll give a brief outline of why the previous arguments don't transfer immediately. It's pretty much the same reason for both approaches. In the hypercube, there is a lot of symmetry within each layer. Indeed, almost by definition, any vertex in the k-th layer can be obtained from any other vertex in the k-th layer just by permuting the labels (or permuting the coordinates if thinking as a vector). The hypercube 'looks the same' from every vertex, but that is not true of the grid. Consider for clarity the n=8, d=3 case we discussed right at the beginning, and compare the scores (7,0,0) and (2,2,3). The number of maximal chains through (7,0,0) is $\binom{14}{7}$, while the number of maximal chains through (2,2,3) is $\binom{7}{2, 2,3}\binom{14}{4,5,5}$, and the latter is a lot larger, which means any attempt to use the second argument is going to be tricky, or at least require an extra layer of detail. Indeed, exactly the same problem arises when we try and use Hall's condition to construct the optimal chain covering directly. In the double-counting section, it's a lot more complicated than just multiplying by $d'+1$, as was the case in the middle of the hypercube.

Largest antichain in the d=3 grid

We can, however, do the d=3 case.
As we will see, the main reason we can do the d=3 case is that the d=2 case is very tractable, and we have lots of choices for the chain coverings, and can choose one which is well-suited to the move to d=3. Indeed, when I set this problem to some students, an explicit listing of a maximal chain covering was the approach some of them went for, and the construction wasn't too horrible to state. [Another factor is that it is computationally feasible to calculate the size of the middle layer, which is much more annoying in d>3.] [I'm redefining the grid here as $\{0,1,\ldots,n-1\}^d$ rather than $\{1,2,\ldots,n\}^d$.]

The case distinction between n even and n odd is going to make both the calculation and the argument annoying, so I'm only going to treat the even case, since n=8 was the original problem posed. I should be honest and confess that I haven't checked the n odd case, but I assume it's similar.

So when n is even, there are two middle layers, namely $\frac{3n}{2}-2, \frac{3n}{2}-1$ (corresponding to total scores 10 and 11 in the original problem). I calculated the number of elements in the $\frac{3n}{2}-1$ layer by splitting based on the value of the first coordinate. I found it helpful to decompose the resulting sum as $\sum_{k=0}^{n-1} = \sum_{k=0}^{\frac{n}{2}-1} + \sum_{k=\frac{n}{2}}^{n-1},$ based on whether there is an upper bound, or a lower bound on the value taken by the second coordinate. This is not very interesting, and I obtained the answer $\frac{3n^2}{4}$, and of course this is an integer, since n is even.

Now to show that any antichain has size at most $\frac{3n^2}{4}$. Here we use our good control on the chain coverings in the case d=2. We note that there is a chain covering of the (n,d=2) grid where the chains have 2n-1, 2n-3,…, 3, 1 elements (%). We get this by starting with a maximal chain, then taking a maximal chain on what remains etc. It's pretty much the first thing you're likely to try.

Consider an antichain with size A in the (n,d=3) grid, and project into the second and third coordinates. The image points are distinct, because otherwise a non-trivial pre-image would be a chain. So we have A points in the (n,d=2) grid. How many can be in each chain in the decomposition (%)? Well, if there are more than n in any chain in (%), then two must have been mapped from elements of the (n,d=3) grid with the same first coordinate, and so satisfy a containment relation. So in fact there are at most n image points in any of the chains of (%). So we now have a bound of $n^2$. But of course, some of the chains in (%) have length less than n, so we are throwing away information. Indeed, the number of image points in a given chain is at most $\min(n,\text{length of chain}),$ and so the number of image points in total is bounded by $n+\ldots+n+ (n-1)+(n-3)+\ldots+1,$ where there are n/2 copies of n in the first half of the sum. Evaluating this sum gives $\frac{3n^2}{4}$, exactly as we wanted.

References

[1] – Bollobas, Leader (1991) – Compressions and Isoperimetric Inequalities. Available open-access here.

# Chains and antichains

I've recently been at the UK-Hungary winter olympiad camp in Tata, for what is now my sixth time. As well as doing some of my own work, I have enjoyed the rare diversion of some deterministic combinatorics.
It seems to be a local variant of the pigeonhole principle that given six days at a mathematical event in Hungary, at least one element from {Ramsey theory, Erdos-Szekeres, antichains in the hypercube} will be discussed, with probability one. On this occasion, all were discussed, so I thought I'd write something about at least one of them.

Posets and directed acyclic graphs

This came up on the problem set constructed by the Hungarian leaders. The original formulation asked students to show that among any 17 positive integers, there are either five such that no one divides any other, or five such that among any pair, one divides the other.

It is fairly clear why number theory plays little role. We assign the given integers to the vertices of a graph, and whenever a divides b, we add a directed edge from the vertex corresponding to a to the vertex corresponding to b. Having translated the given situation into a purely combinatorial statement, fortunately we can translate the goal into the same language. If we can find a chain of four directed edges (hence five vertices – beware confusing use of the word 'length' here) then we have found the second possible option. Similarly, if we can find an antichain, a set of five vertices with no directed edges between them, then we have found the first possible option.

It's worth noting that the directed graph we are working with is transitive. That is, whenever there is an edge a->b and b->c, there will also be an edge a->c. This follows immediately from the divisibility condition. There are also no directed cycles in the graph, since otherwise there would be a cycle of integers where each divided its successor. But of course, when a divides b and these are distinct positive integers, this means that b is strictly larger than a, and so this relation cannot cycle. In fact, among a set of positive integers, divisibility defines a partial order, which we might choose to define as any ordering where the associated directed graph is transitive and acyclic, although obviously we could use language more naturally associated with orderings. Either way, from now on we consider posets and the associated DAGs (directed acyclic graphs) interchangeably.

Dilworth's theorem

In the original problem, we are looking for either a large chain, or a large antichain. We are trying to prove that it's not possible to have largest chain size at most four, and largest antichain size at most four when there are 17 vertices, so we suspect there may be some underlying structure: in some sense perhaps the vertex set is the 'product' of a chain and an antichain, or at least a method of producing antichains from a single vertex. Anyway, one statement of Dilworth's theorem is as follows:

Statement 1: in a poset with nm+1 elements, there is either a chain of size n+1, or an antichain of size m+1.

Taking n=m=4 immediately finishes the original problem about families of divisors. While this is the most useful statement here, it's probably not the original, which says the following:

Statement 2: in a poset, there exists $\mathcal{C}$ a decomposition into chains, and an antichain $A$ such that $|\mathcal{C}|=|A|$.

Remark 1: Note that for any decomposition into chains and any antichain, we have $|\mathcal{C}|\ge |A|$, since you can't have more than one representative from any chain in the antichain. So Statement 2 is saying that equality does actually hold.

Remark 2: Statement 1 follows immediately from Statement 2.
If all antichains had size at most m, then by Statement 2 there's a decomposition into at most m chains. But each chain has size at most n, so the total size of the graph is at most mn. Contradiction.

Unsuccessful proof strategies for Dilworth

Since various smart young people who didn't know the statement or proof of Dilworth's theorem attempted to find it (in the form of Statement 1, and in a special case) in finite time conditions, it's easy to talk about what doesn't work, and try to gain intellectual value by qualifying why.

• Forgetting directions: in general one might well attack a problem by asking whether we have more information than we need. But ignoring the directions of the edges is throwing away too much information. After doing this, antichains are fine, but maybe you need to exhibit some undirected 'chains'. Unless these undirected chains are much longer than you are aiming for, you will struggle to reconstruct directed chains out of them.

• Where can the final vertex go?: in a classic trope, one might exhibit a directed graph on nm vertices with neither a chain of size n+1 nor an antichain of size m+1. We attempt to argue that this construction is essentially unique, and that it goes wrong when we add an extra vertex. As a general point, it seems unlikely to be easier to prove that exactly one class of configurations has a given property in the nm case, than to prove no configuration has the same property in the nm+1 case. A standalone proof of uniqueness is likely to be hard, or a disguised rehash of an actual proof of the original statement.

• Removing a chain: If you remove a chain of maximal length, then, for contradiction, what you have left is at least n(m-1)+1 vertices. If you have a long chain left, then you're done, although maximality has gone wrong somewhere. So you have an antichain of size m in what remains. But it's totally unclear why it should be possible to extend the antichain with one of the vertices you've just removed.

An actual proof of Dilworth (Statement 1), and two consequences

This isn't really a proof, instead a way of classifying the vertices in the directed graph from which this version of Dilworth's theorem follows. As we said earlier, we imagine there may be some product structure. In particular, we expect to be able to find a maximal chain, and a nice antichain associated to each element of the maximal chain.

We start by letting $V_0$ consist of all the vertices which are sources, that is, have zero indegree. These are minima in the partial ordering setting. Now let $V_1$ consist of all vertices whose in-neighbourhood is entirely contained in $V_0$, that is they are descendants only of $V_0$. Then let $V_2$ consist of all remaining vertices whose in-neighbourhood is entirely contained in $V_0\cup V_1$ (but not entirely in $V_0$, otherwise it would have already been treated), and so on. We end up with what one might call an onion decomposition of the vertices based on how far they are from the sources. We end up with $V_0,V_1,\ldots,V_k$, and then we can find a chain of size k+1 by starting with any vertex in $V_k$ and constructing backwards towards the source. However, this is also the largest possible size of a chain, because every time we move up a level in the chain, we must move from $V_i$ to $V_j$ where j>i.

It's easy to check that each $V_i$ is an antichain, and thus we can read off Statement 1. A little more care, and probably an inductive argument is required to settle Statement 2.
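[An illustrative aside, not part of the original post.] The level decomposition just described is easy to automate. The sketch below applies it to the divisibility order on one concrete choice of 17 positive integers, namely {1, …, 17}; the function name and the example are mine.

```python
def divisibility_levels(nums):
    """Compute the 'onion decomposition' V_0, V_1, ... for the divisibility
    order: V_0 holds the minimal elements, and each later level holds the
    elements all of whose proper divisors in the set sit in earlier levels."""
    remaining = set(nums)
    levels = []
    while remaining:
        level = {x for x in remaining
                 if not any(y != x and x % y == 0 for y in remaining)}
        levels.append(sorted(level))
        remaining -= level
    return levels

levels = divisibility_levels(range(1, 18))
print([len(v) for v in levels])   # [1, 7, 6, 2, 1] for {1,...,17}
# Five levels means a chain of five (e.g. 1 | 2 | 4 | 8 | 16); had there been
# at most four levels, one of them would be an antichain of size at least 5.
```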
We have however proved what is often called the dual of Dilworth's theorem, namely that in a poset there exists a chain C, and a decomposition into a collection $\mathcal{A}$ of antichains, for which $|C|=|\mathcal{A}|$.

Finally, as promised, we return to Erdos-Szekeres, if not to positive integers. We apply Dilworth Statement 1 to a sequence of $m^2+1$ real numbers $a_0,a_1,\ldots,a_{m^2}$, with the ordering $a_i\rightarrow a_j$ if $i\le j$ and $a_i\le a_j$. Chains correspond to increasing subsequences, and antichains to decreasing subsequences, so we have shown that there is a monotone subsequence of length m+1.
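[An illustrative aside, not part of the original post.] As a quick sanity check on this corollary: for random sequences of length m²+1, the longer of the longest increasing and longest decreasing subsequences always has length at least m+1. A short sketch (the helper function is mine):

```python
import random

def longest_monotone(seq, ok):
    # O(n^2) dynamic programme: best[i] is the length of the longest
    # subsequence ending at position i whose consecutive terms satisfy ok.
    best = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if ok(seq[j], seq[i]):
                best[i] = max(best[i], best[j] + 1)
    return max(best)

m = 4
for _ in range(1000):
    seq = [random.random() for _ in range(m * m + 1)]
    longest_inc = longest_monotone(seq, lambda a, b: a <= b)   # chains
    longest_dec = longest_monotone(seq, lambda a, b: a >= b)   # antichains
    assert max(longest_inc, longest_dec) >= m + 1              # Erdos-Szekeres
print("no counterexample found")
```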
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 66, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8352463841438293, "perplexity": 307.3919030136437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689661.35/warc/CC-MAIN-20170923123156-20170923143156-00180.warc.gz"}
https://www.math.stonybrook.edu/cgi-bin/preprint.pl?ims92-4
## Preprint ims92-4

E. Cawley
The Teichm\"uller Space of the Standard Action of $SL(2,Z)$ on $T^2$ is Trivial.

Abstract: The group $SL(n,{\bf Z})$ acts linearly on ${\bf R}^n$, preserving the integer lattice ${\bf Z}^{n} \subset {\bf R}^{n}$. The induced (left) action on the n-torus ${\bf T}^{n} = {\bf R}^{n}/{\bf Z}^{n}$ will be referred to as the "standard action". It has recently been shown that the standard action of $SL(n,{\bf Z})$ on ${\bf T}^n$, for $n \geq 3$, is both topologically and smoothly rigid. That is, nearby actions in the space of representations of $SL(n,{\bf Z})$ into ${\rm Diff}^{+}({\bf T}^{n})$ are smoothly conjugate to the standard action. In fact, this rigidity persists for the standard action of a subgroup of finite index. On the other hand, while the ${\bf Z}$ action on ${\bf T}^{n}$ defined by a single hyperbolic element of $SL(n,{\bf Z})$ is topologically rigid, an infinite dimensional space of smooth conjugacy classes occurs in a neighborhood of the linear action. The standard action of $SL(2, {\bf Z})$ on ${\bf T}^2$ forms an intermediate case, with different rigidity properties from either extreme. One can construct continuous deformations of the standard action to obtain an (arbitrarily near) action to which it is not topologically conjugate. The purpose of the present paper is to show that if a nearby action, or more generally, an action with some mild Anosov properties, is conjugate to the standard action of $SL(2, {\bf Z})$ on ${\bf T}^2$ by a homeomorphism $h$, then $h$ is smooth. In fact, it will be shown that this rigidity holds for any non-cyclic subgroup of $SL(2, {\bf Z})$.

View ims92-4 (PDF format)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9836041331291199, "perplexity": 157.02220173070148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649683.30/warc/CC-MAIN-20180324034649-20180324054649-00570.warc.gz"}
https://www.physicsforums.com/threads/change-in-internal-energy.179454/
# Change in Internal Energy 1. Aug 5, 2007 ### alexithymia916 1. The problem statement, all variables and given/known data One mole of hydrogen gas is heated from 278 K to 390 K at constant pressure. Hydrogen has a specific heat of 43.6 J/mol*K. The universal gas constant is 8.31451 J/K mol. Calculate the change in the internal energy of the gas. Answer in units of J 2. Relevant equations Change in Internal energy = (1.5)nRT n= number of moles R= universal gas constant T= change in temperature 3. The attempt at a solution (1.5)(1 mol)(8.31451 J/k mol)(112 K) = 1396.83768 what am i doing wrong?? the website i have to submit my hw to says it's incorrect... 2. Aug 6, 2007 ### chaoseverlasting I think thats right... You may want to look at the significant figures though. 3. Aug 6, 2007 ### alexithymia916 noo def not my teacher HATES significant digits, so we're just supposed to carry it out about 5 decimal places grr i can't figure this one out :(
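[Editorial aside, not part of the thread.] The likely issue is that ΔU = (3/2)nRΔT applies only to a monatomic ideal gas, while hydrogen is diatomic. One common reading of the problem — an assumption, since the wording only says "specific heat" — is that the quoted 43.6 J/(mol·K) is the molar heat capacity at constant pressure, in which case the first law gives ΔU = Q − W = n(Cp − R)ΔT. A minimal sketch of that arithmetic:

```python
# Illustrative sketch, not a verified answer: assumes the quoted
# 43.6 J/(mol K) is the molar heat capacity at constant pressure and that
# hydrogen behaves as an ideal gas over this temperature range.
n = 1.0                 # mol
Cp = 43.6               # J/(mol K), as given in the problem
R = 8.31451             # J/(mol K)
dT = 390.0 - 278.0      # K

Q = n * Cp * dT         # heat absorbed at constant pressure
W = n * R * dT          # work done by the gas: P*dV = n*R*dT
dU = Q - W              # first law of thermodynamics
print(Q, W, dU)         # dU comes out near 3.95e3 J under these assumptions
```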
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8095881938934326, "perplexity": 2319.733887581814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00086-ip-10-171-10-70.ec2.internal.warc.gz"}
https://link.springer.com/chapter/10.1007/978-1-4615-3600-0_10
# Variational Inequalities and Related Projections

• Sjur D. Flåm

## Abstract

The variational inequality problem is to find a point $x^*$ in $X$ such that
$$\langle f(x^*),\, x - x^*\rangle \geqslant 0 \quad \text{for all } x \in X. \qquad (10.1)$$
Here $X$ is a nonempty closed convex subset of $\mathbb{R}^n$, and $f$ maps $\mathbb{R}^n$ into itself, this space being equipped with the ordinary inner product $\langle\cdot,\cdot\rangle$. Such problems emerge as necessary optimality conditions in mathematical programming or in noncooperative game theory [2,3,5,9,10]. Most solution methods proceed by linearizing $f$ at the current estimate $x^k$ produced during iteration $k$. That is to say, in the next iterative step $k+1$, the function $f(x)$ is replaced by the approximation
$$f(x^{k}) + \frac{1}{\tau_{k}}\,G(x - x^{k}) \qquad (10.2)$$
where $\tau_k$ is a positive scaling parameter and $G$ is a positive-definite symmetric matrix. This local representation of $f$ has the advantage of leading us to recognize the solution $x^{k+1}$, to be furnished at stage $k+1$, as the unique point in $X$ that is closest (in the norm induced by $G$) to $x^{k} - \tau_{k}G^{-1}f(x^{k})$. Thus,
$$x^{k+1} = P_{X}\bigl(x^{k} - \tau_{k}G^{-1}f(x^{k})\bigr) \qquad (10.3)$$
where $P_X$ denotes the projection operator onto $X$. Indeed, using expression (10.2) in place of $f$ and writing $x^{k+1}= x^*$, inequality (10.1) takes on the form
$$\langle f(x^{k}) + \frac{1}{\tau_{k}}G(x^{k+1} - x^{k}),\, x - x^{k+1}\rangle \geqslant 0 \quad \text{for all } x \in X,$$
which can equivalently be written as
$$\langle x^{k+1} - (x^{k} - \tau_{k}G^{-1}f(x^{k})),\, G(x - x^{k+1})\rangle \geqslant 0 \quad \text{for all } x \in X,$$
and the latter variational inequality amounts to equation (10.3).

## Keywords

Variational inequality, variational inequality problem, nonempty closed convex subset, interior point algorithm, convex feasibility problem.

## References

1. [1] Aharoni, R., A. Berman, and Y. Censor, “An Interior Points Algorithm for the Convex Feasibility Problem,” Advances in Applied Mathematics 4 (1983), 479–489.
2. [2] Auslender, A., Optimisation, Methodes Numériques, Masson, Paris, 1976.
3. [3] Ben-Tal, A., J. Barzilai, and A. Charnes, “A Duality Theory for a Class of Problems with Essentially Unconstrained Duals” in P. Stähly (ed.) Methods of Operations Research Vol. 45, 1983, pp. 23–33.
4. [4] Charnes, A. and W.W. Cooper, Management Models and Industrial Applications of Linear Programming, Vol. 1–2, J. Wiley, New York, 1961.
5. [5] Dafermos, S., “Traffic Equilibrium and Variational Inequalities,” Transportation Science 14 (1980), 42–54.
6. [6] Dafermos, S., “An Iterative Scheme for Variational Inequalities,” Mathematical Programming 26 (1983), 40–47.
7. [7] Fukushima, M., “An Outer Approximation Algorithm for Solving General Convex Programs,” Operations Research 31 (1983), 101–113.
8. [8] Fukushima, M., “A Relaxed Projection Method for Variational Inequalities,” Mathematical Programming 35 (1986), 58–70.
9. [9] Kinderlehrer, D. and G. Stampacchia, An Introduction to Variational Inequalities and their Applications, Academic Press, New York, 1980.
10. [10] Pang, J.-S., “Asymmetric Variational Inequalities over Product Sets: Applications and Iterative Methods,” Mathematical Programming 31 (1985), 206–219.
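[Editorial aside, not from the chapter.] A minimal numerical illustration of iteration (10.3), under simplifying assumptions of my own choosing: G = I, a constant step size τ, X a box in ℝ², and an affine strongly monotone f. The Euclidean projection onto a box is just a componentwise clamp.

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n is a componentwise clamp.
    return np.clip(x, lo, hi)

def solve_vi(f, lo, hi, x0, tau=0.1, iters=500):
    # Projection iteration x^{k+1} = P_X(x^k - tau * f(x^k)),
    # i.e. (10.3) with G = I and a constant scaling parameter tau.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project_box(x - tau * f(x), lo, hi)
    return x

# Example: f(x) = A x + b with A positive definite, so f is strongly monotone
# and the iteration contracts for small enough tau.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
f = lambda x: A @ x + b

x_star = solve_vi(f, lo=0.0, hi=1.0, x0=np.zeros(2))
print(x_star)   # approaches the solution of A x + b = 0 inside the box
```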
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9025660753250122, "perplexity": 2777.8753381450947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592579.77/warc/CC-MAIN-20180721125703-20180721145703-00218.warc.gz"}
https://math.stackexchange.com/questions/3161487/looking-for-a-conceptual-proof-of-the-pythagorean-theorem-from-first-principles
# Looking for a conceptual proof of the pythagorean theorem from first principles I was considering asking this question on math stack exchange but decided not to because "first principles" seems like more of a physics thing. I'm looking for a conceptual proof of the pythagorean theorem from first principles. Actually, it might be better to say the distance formula rather than the pythagorean theorem because I'm thinking about distances in three dimensions. I don't find any of the usual proofs of the distance formula satisfying. There are a number of proofs from first principles in euclidean geometry but then I feel like I have to move triangles and squares around or break out proportions every time I use the distance formula. On the other hand there are a lot of conceptual proofs of the pythagorean theorem, e.g. using the dot product or the law of cosines, but each of these just pushes the question around. If I use the dot product to "prove" the pythagorean theorem I need to know why we use the dot product from first principles. If I use the law of cosines then I need to know why vectors that are $$90^{\circ}$$ to each other are perpendicular. I'm sure this sounds perverse so let me try to make what I'm looking for a bit more precise. For me "understanding" a thing is always understanding it within a particular conceptual system. Within that system we have admissible intuitions and understanding is when those intuitions are matched with rigorous math. Those intuitions could be of the formal relations between various quantities and how we interpret that physically. So there are all these different ways to understand this stuff, but what I'm lacking is a way to understand them simultaneously. (--ok, I really tried here, and I know it is super confused but its not getting any better than this) The distance formula is such an elementary and fundamental fact about our everyday experience that I feel like there must be some better way to understand it than "we can prove it and its mutually compatible with other concepts like inner products and angles and the math that relates them". I've been wondering about this on and off ever since I learned about the dot product proof of the pythagorean theorem. At first I was more wondering why the the dot product lets us compute angles via $$u \cdot v = \|u\| \|v\| \cos(\theta)$$, and this still seems extraordinary to me, so I would want a way to understand the relation between inner products and angles from first principles too. Right now my thinking is that from first principles (whatever first principles is) distance is a thing, rigid motions is a thing, distance is preserved by rigid motions, rigid motions includes translations, rotations, and reflections, so distance is translation invariant and respects scaling, so distance comes from a norm, then because the parallelogram law is so obviously true (not obvious to me at all), this norm comes from an inner product given by the polarization identity. If I could understand why the parallelogram law is true from first principles then by the rest of the above I would have one of these nice conceptual systems from first principles through inner products, and that would include the pythagorean theorem. But understanding why the parallelogram law is true seems at least as hard as the pythagorean theorem itself, since its essentially a more general form, so I'm going to settle with understanding how it can fail. I see two avenues for this. 
First there are various examples of norms where the parallelogram fails to hold, for example all of the $$L^p$$ spaces. Second, the pythagorean theorem, and presumably the parallelogram law, fail to hold in geometries with nonzero curvature. I think this clarifies what I mean by "conceptual proof of the pythagorean theorem from first principles" so I think its reasonable to ask for approaches found in textbooks or expository papers to the pythagorean theorem / inner products / angles following a similar outline to what I gave above. • As a side question, could anybody explain what am I trying to say in my fourth paragraph? Its seems like other people would have written about it. And I could try to haphazardly explain it in the comments more. Mar 25, 2019 at 0:14 • The Pythagorean Theorem is a mathematical concept, so the first principles proof you're looking proof will necessarily come from mathematical axioms. However, if what you want is a more physical interpretation, this video could help you: youtube.com/watch?v=CAkMUdeB06o Mar 25, 2019 at 0:16 • This doesn’t seem like a good question for Physics SE given that we know that physical space isn’t Euclidean. Mar 25, 2019 at 0:25 • Without you telling us what you would accept as first principles it will likely not be possible for anyone to give you an acceptable answer. You are clearly already aware of several proofs, but don’t like their “givens”. So you will need to identify what you are willing to accept as given. – Dale Mar 25, 2019 at 1:25 • Math.SE could still work, and this question would be better suited for there, as it's purely about theoretical mathematics. "First principles" in mathematics do exist. They're called "axioms". That said, there is freedom in this case in what you will choose as your axioms - there are many equivalent sets that will describe the same thing, i.e. Euclidean plane geometry. The nature of the proof will then depend on which axiom set you choose to use. Mar 25, 2019 at 6:42 I'm not certain it will actually help clear up anything at all for you, but it's worth knowing that other folks before you have asked much the same thing, some of them with considerable profit from the exercise. The classic example of this is Riemann's "Inaugural Lecture," of which there is a translation near the start of Volume 3 of Spivak's Differential Geometry. A somewhat less understandable (!) translation is available here: https://www.maths.tcd.ie/pub/HistMath/People/Riemann/Geom/WKCGeom.html. One question to ask yourself is "What are my first principles, and why?" Another is "Under which sets of first principles is the theorem true?" For the second, you've already observed that for curved spaces, it doesn't necessarily hold. So what axioms are you willing to use to define "not curved spaces"? Hilbert's version of Euclid's Axioms is a pretty good start. Of course, they spend a lot of time talking about lines and intersections and angle measures, and the resulting proofs of the Pythagorean Theorem use things that seem apparently irrelevant, like squares constructed on the triangle sides, and translations/rotations in the plane, etc. But there's a reason for that: the Euclidean axioms are about as simple as you can make them, which necessarily means that it's quite some distance from the axioms to the more interesting theorems. 
There's a related reason that has to do with philosophy and status and a bunch of other things, and I sadly can't recall where I first read this: Euclid and friends were not much interested in numbers and measurements --- those were things that concerned tradesmen and shopkeepers rather than philosophers. As philosophers, they thought of ratios as the far more central concept. Things like algebra were ... well, they were irrelevant to some degree. So to the Greeks, the claim is not that $$a^2 + b^2 = c^2$$, but instead that if you erect a square on each leg of the right triangle, and another square on the hypotenuse, then the "sum" of the first two squares is the third square, where "sum" here literally meant some kind of addition of areas (done via cutting and pasting, without too much concern about whether this was indeed well-defined) rather than about sums of numbers. A modern view says that there's an area-measurement function, and that the whole greek argument boils down to (1) it being additive on disjoint sets or something, (2) it being invariant under congruence transformations, and to one key lemma: the area-function, applied to a triangle of base $$b$$ and height $$h$$, yields $$(1/2)b h$$ [or an equally simple lemma about rectangles]. But to the greek mathematicians, "measurement" wasn't the kind of central idea it is now. At least some historians of mathematics believe that tradespeople had known Pythagoras's theorem long before it was proved as a geometric theorem --- it was the sort of thing that made it easy to lay out the foundations of a house in a rectangle rather than a somewhat skewed parallelogram! I'm rambling here, and I apologize. At the heart of things, I suggest you think hard about what you believe are your "first principles", and why. Only then can you get a really satisfactory answer to your question.

Here you go: things that are independent add in quadrature. That is the concept. It works for error propagation, random noise, I/Q modulated signals, the real and imaginary parts of an S-matrix, and the magnitudes of vectors. The last case being the Pythagorean theorem. Of course, vectors that are independent are perpendicular, or: $$\vec a \cdot \vec b\equiv \frac 1 4[(\vec a + \vec b)^2 -(\vec a -\vec b)^2] = ab\cos{\theta} = 0$$ which implies: $$\cos{\theta} = 0$$ $$\theta = 90^{\circ}$$ (Note: I used the coordinate free definition of the dot product to provide food for thought).

• Does your rule apply to e.g., probabilities? Mar 25, 2019 at 3:10

If you are working in a Euclidean space with the standard inner product $$\vec a \ \cdot \vec b = \sum a_i b_i$$ and the related metric $$|\vec x| = \sqrt{\vec x \cdot \vec x}$$ then the Pythagorean theorem is a simple consequence of the axioms of the space. If $$\vec a \cdot \vec b = 0$$ then $$|\vec a + \vec b|^2 = (\vec a + \vec b) \cdot (\vec a + \vec b) = \vec a \cdot \vec a + 2\,\vec a \cdot \vec b + \vec b \cdot \vec b = |\vec a|^2 + |\vec b|^2$$ If you are working in a non-Euclidean space then the Pythagorean theorem may not be true. If you want to physically validate the Pythagorean theorem (or, more precisely, validate that the spatial dimensions in an inertial frame act like a Euclidean space) then you will have to carry out a physical experiment using, for example, light beams as your straight lines, accurate clocks to determine distance, and working in free fall.
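[Editorial aside, not part of the thread.] A tiny numerical illustration of the inner-product argument above, using the standard inner product on ℝ³ (the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=3)
b = rng.normal(size=3)
b -= (a @ b) / (a @ a) * a    # remove a's component so that b is orthogonal to a

print(np.isclose(a @ b, 0.0))                              # orthogonality
print(np.isclose((a + b) @ (a + b), a @ a + b @ b))        # Pythagoras

# The angle formula u.v = |u||v|cos(theta) for a generic pair of vectors:
u, v = rng.normal(size=3), rng.normal(size=3)
cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(-1.0 <= cos_theta <= 1.0)   # Cauchy-Schwarz keeps this a valid cosine
```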
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8284531235694885, "perplexity": 270.74388371501936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00249.warc.gz"}
https://cms.math.ca/10.4153/CJM-2012-060-7
# Generalized Frobenius Algebras and Hopf Algebras

Published: 2013-02-06
Printed: Feb 2014

• Miodrag Cristian Iovanov, University of Southern California, Department of Mathematics, 3620 South Vermont Ave. KAP 108, Los Angeles, California 90089-2532

## Abstract

"Co-Frobenius" coalgebras were introduced as dualizations of Frobenius algebras. We previously showed that they admit left-right symmetric characterizations analogous to those of Frobenius algebras. We consider the more general quasi-co-Frobenius (QcF) coalgebras; the first main result in this paper is that these also admit symmetric characterizations: a coalgebra is QcF if it is weakly isomorphic to its (left, or right) rational dual $Rat(C^*)$, in the sense that certain coproduct or product powers of these objects are isomorphic. Fundamental results of Hopf algebras, such as the equivalent characterizations of Hopf algebras with nonzero integrals as left (or right) co-Frobenius, QcF, semiperfect or with nonzero rational dual, as well as the uniqueness of integrals and a short proof of the bijectivity of the antipode for such Hopf algebras all follow as a consequence of these results. This gives a purely representation theoretic approach to many of the basic fundamental results in the theory of Hopf algebras. Furthermore, we introduce a general concept of Frobenius algebra, which makes sense for infinite dimensional and for topological algebras, and specializes to the classical notion in the finite case. This will be a topological algebra $A$ that is isomorphic to its complete topological dual $A^\vee$. We show that $A$ is a (quasi)Frobenius algebra if and only if $A$ is the dual $C^*$ of a (quasi)co-Frobenius coalgebra $C$. We give many examples of co-Frobenius coalgebras and Hopf algebras connected to category theory, homological algebra and the newer q-homological algebra, topology or graph theory, showing the importance of the concept.

Keywords: coalgebra, Hopf algebra, integral, Frobenius, QcF, co-Frobenius

MSC Classifications:
16T15 - Coalgebras and comodules; corings
18G35 - Chain complexes [See also 18E30, 55U15]
16T05 - Hopf algebras and their applications [See also 16S40, 57T05]
20N99 - None of the above, but in this section
18D10 - Monoidal categories (= multiplicative categories), symmetric monoidal categories, braided categories [See also 19D23]
05E10 - Combinatorial aspects of representation theory [See also 20C30]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8604914546012878, "perplexity": 971.9450078075084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258948913.96/warc/CC-MAIN-20160723072908-00275-ip-10-185-27-174.ec2.internal.warc.gz"}
http://link.springer.com/article/10.1007/BF00377050
Archive for History of Exact Sciences , Volume 44, Issue 3, pp 265–286 The ascendancy of the Laplace transform and how it came about • Michael A. B. Deakin Article DOI: 10.1007/BF00377050 Deakin, M.A.B. Arch. Hist. Exact Sci. (1992) 44: 265. doi:10.1007/BF00377050 Abstract The modern Laplace transform is relatively recent. It was first used by Bateman in 1910, explored and codified by Doetsch in the 1920s and was first the subject of a textbook as late as 1937. In the 1920s and 1930s it was seen as a topic of front-line research; the applications that call upon it today were then treated by an older technique — the Heaviside operational calculus. This, however, was rapidly displaced by the Laplace transform and by 1950 the exchange was virtually complete. No other recent development in mathematics has achieved such ready popularisation and acceptance among the users of mathematics and the designers of undergraduate curricula.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8168120980262756, "perplexity": 1373.2605900445858}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123635.74/warc/CC-MAIN-20170423031203-00194-ip-10-145-167-34.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/208136/another-multivariable-limit/208140
Another Multivariable limit

How can I show that the limit of the following function at $(0,0)$ is 7?
$$f(x,y)= \dfrac{x^3 y^2}{2x^2+y^2} +\dfrac{\tan(7xy)}{\sin(xy)}$$
Thanks!

- Hint: Separate it by summands, and for the 2nd one you can also introduce $h:=xy$; if $x,y\to 0$ then of course $h\to 0$, then consider $$7\cdot\frac{\tan(7h)}{7h}\cdot\frac{h}{\sin h}$$ For the first one, you can pull out $x^2y$, for example, and prove that the rest is bounded around $(0,0)$.

$$\lim_{y \rightarrow 0} \left( \lim_{x \rightarrow 0} \left( \dfrac{x^3 y^2}{2x^2+y^2} +\dfrac{\tan(7xy)}{\sin(xy)} \right) \right)= \lim_{y \rightarrow 0} \left( \lim_{x \rightarrow 0} \left( \dfrac{\tan(7xy)}{7xy} \dfrac{7xy}{xy}\dfrac{xy}{\sin (xy)} \right) \right) = 7$$
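[Editorial aside, not part of the thread.] A quick numerical sanity check of the claimed value: evaluating f along a few paths into the origin (avoiding the axes, where the second term is undefined) gives values approaching 7.

```python
from math import tan, sin

def f(x, y):
    return x**3 * y**2 / (2 * x**2 + y**2) + tan(7 * x * y) / sin(x * y)

# Approach (0, 0) along several paths; the values tend to 7 in each case.
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f(t, t), f(t, 2 * t), f(t, t**2))
```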
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9931908845901489, "perplexity": 209.0082156700678}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
https://socratic.org/questions/what-happens-to-a-neutron-in-beta-decay
# What happens to a neutron in beta decay?

Aug 6, 2017

Atomic number increases by 1

#### Explanation:

During beta-minus decay, a neutron transmutes into a proton, emitting a beta particle (an electron) and an antineutrino. The beta particle is not present in the nucleus beforehand, yet it is emitted from the nucleus because a neutron changes into a proton, increasing the atomic number by one. (If the daughter nucleus is left in an excited state, it may also release gamma rays.) As a result of the decay, an isobar of the original element is formed.

neutron → proton + beta particle (electron) + antineutrino
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475639462471008, "perplexity": 2173.6258131262257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313889.29/warc/CC-MAIN-20190818124516-20190818150516-00231.warc.gz"}
https://www.solumaths.com/en/calculator/calculate/pythagorean
# Pythagorean theorem calculator

The calculator uses the Pythagorean theorem to verify that a triangle is right-angled or to find the length of one side of a right-angled triangle.

The calculator, by means of the pythagorean function, makes it possible to know if lengths satisfy the Pythagorean theorem. If the lengths contain variables, the calculator will seek to find the values of the variables which allow to verify the Pythagorean theorem. The Pythagorean theorem is expressed as follows: in a right triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides. If we consider the triangle ABC right-angled at A, and if we put BC = a, AC = b, AB = c, then the Pythagorean theorem is written BC^2=AB^2+AC^2, or a^2=b^2+c^2. The Pythagorean theorem admits a converse which states: if in a triangle the square of one side is equal to the sum of the squares of the other two sides, then the triangle is a right triangle.

## Verify that a triangle is a right triangle knowing the length of its sides

The calculator makes it possible to verify that a triangle is a right triangle knowing the length of the hypotenuse and the lengths of the other two sides. If it is desired, for example, to verify that there exists a right-angled triangle whose hypotenuse has length 5 and whose other sides have lengths 3 and 4, enter pythagorean(3;4;5). The calculator returns 1 if the values passed in parameter make it possible to deduce that the triangle is a right triangle, 0 otherwise. The calculator returns the details of the calculations used to apply the Pythagorean theorem.

## Find the length of one side of a right triangle from the length of the other two

The calculator allows you to find the length of one side knowing the two others thanks to the Pythagorean theorem. It is thus possible to calculate the length of the hypotenuse or the length of one of the sides adjacent to the right angle.

### Find the length of the hypotenuse

The calculator allows to find the length of the hypotenuse if we know the lengths of the sides adjacent to the right angle. For example, if you are looking for the hypotenuse of a right-angled triangle whose adjacent sides are 3 and 4, you need to enter pythagorean(3;4;x), the value of the hypotenuse is then calculated.

### Find the length of a side adjacent to the right angle

The calculator allows to find the length of a side adjacent to the right angle if we know the length of the hypotenuse and the length of the other adjacent side. For example, if you are looking for the length of the side of a right-angled triangle whose hypotenuse is 5 and the length of the other side is 3, you need to enter pythagorean(x;3;5), the value of the side adjacent to the right angle is then calculated. It is also possible to find the length of the sides of an isosceles right triangle from the length of the hypotenuse. For example, if you are looking for the length of the sides adjacent to the right angle of an isosceles right triangle whose hypotenuse is 4, you must enter pythagorean(x;x;4).

## Quiz on the Pythagorean theorem

In order to practice using the Pythagorean theorem, the site offers an application quiz.
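[Editorial aside, not from the site.] To make the behaviour concrete, here is a rough Python re-implementation of such a helper; the function name mirrors the site's syntax pythagorean(side; side; hypotenuse), but the code is an illustrative sketch of my own, not the site's actual implementation. An unknown side is passed as None rather than as a symbolic x.

```python
from math import sqrt, isclose

def pythagorean(b, c, a):
    """Right-triangle helper: b and c are the sides adjacent to the right
    angle, a is the hypotenuse; use None for an unknown length."""
    if None not in (a, b, c):
        return isclose(a * a, b * b + c * c)   # check a^2 = b^2 + c^2
    if a is None:
        return sqrt(b * b + c * c)             # hypotenuse from the two legs
    if b is None and c is None:
        return a / sqrt(2)                     # isosceles case: both legs equal
    known = c if b is None else b
    return sqrt(a * a - known * known)         # remaining leg from the hypotenuse

print(pythagorean(3, 4, 5))        # True: 3-4-5 is a right triangle
print(pythagorean(3, 4, None))     # 5.0
print(pythagorean(None, 3, 5))     # 4.0
print(pythagorean(None, None, 4))  # 2.828..., the x in pythagorean(x;x;4)
```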
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.881264328956604, "perplexity": 126.89491794771548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711200.6/warc/CC-MAIN-20221207153419-20221207183419-00143.warc.gz"}
https://www.examveda.com/round-robin-scheduling-11415/
# Round-robin scheduling

B. is quite complex to implement
C. gives each task the same chance at the processor
D. allows processor-bound tasks more time in the processor
E. None of the above

### Solution(By Examveda Team)

Round-robin scheduling gives each task the same chance at the processor. The simplest preemptive scheduling algorithm is round-robin, in which the processes are given turns at running, one after the other in a repeating sequence, and each one is preempted when it has used up its time slice.
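[Editorial aside, not from the quiz source.] The mechanics are easy to see in a few lines of Python; this sketch (function name, task names and quantum are illustrative choices of mine) runs each task for at most one time quantum per turn and sends it to the back of the queue until it finishes.

```python
from collections import deque

def round_robin(burst_times, quantum=2):
    """Simulate round-robin scheduling: each task runs for at most `quantum`
    time units per turn, then is preempted and re-queued at the back.
    Returns each task's completion time."""
    queue = deque(burst_times.items())
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining - run > 0:
            queue.append((name, remaining - run))   # preempted: back of the line
        else:
            finish[name] = clock                    # task done
    return finish

print(round_robin({"A": 5, "B": 3, "C": 8}))
# Every task gets the same chance at the processor, one quantum per cycle.
```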
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.934456467628479, "perplexity": 4085.9711464195443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00708.warc.gz"}
https://www.physicsforums.com/threads/accelerating-light-clock-simulation.988992/
# Accelerating light-clock simulation • B • Start date • #1 1,420 112 ## Summary: I'm trying to simulate a super-sized accelerating light-clock in Python. ## Main Question or Discussion Point So this what I found out: One cycle takes less coordinate time, and less proper time according to the lower mirror when the clock accelerates. Proper time per one cycle does not seem to get larger with more cycles, although coordinate time per one cycle does get larger. So what do you think, is this a real phenomenon? Output of low acceleration run: Code: accelerations are: 0.001 0.001 proper time and coordinate time are: 0.799840042654 0.799840127936 Output of higher acceleration run: Code: accelerations are: 1.0 0.7143 proper time and coordinate time are: 0.672944473242 0.724897959184 Python: from math import sqrt ,asinh c=1.0 def gamma(a,t): return sqrt( 1 + (a*t/c)*(a*t/c) ) def d(a,t): return (c**2 / a) * (gamma(a,t) -1) def T(a,t): return (c/a)*asinh(a*t/c) def v(a,t): return a*t / gamma(a,t) def properaccelerationatheight(a,h): return a / (1 + ( (a*h) / c**2 ) ) #tstep1 and tstep2 return a varying timestep def tstep1(): global a2,t,h,p dist = abs( d(a2,t) + h - p) return dist / (10*c) def tstep2(): global a1,t,p dist = abs( d(a1,t) - p) return dist / (20*c) # h is the height of the light-clock, a1 and a2 are proper accelerations of the mirrors h = 0.4 * c for a1 in [0.001, 1.0]: a2 = properaccelerationatheight(a1,h) print '\naccelerations are:',round(a1,4),round(a2,4) # t is coordinate time, p is the position of a light pulse t=0.0 p=0.0 for i in xrange(44444): dt=tstep1() p += dt*c t+=dt for i in xrange(44444): dt=tstep2() p -= dt*c t+=dt print 'proper time and coordinate time are:' print T(a1,t), t Related Special and General Relativity News on Phys.org • #2 Nugatory Mentor 12,761 5,367 Proper time per one cycle does not seem to get larger with more cycles, although coordinate time per one cycle does get larger. You can calculate the proper time for any cycle using the inertial frame in which the bottom of the clock is momentarily at rest at the start of the cycle. So yes, the proper time per cycle ought to be independent of the duration of the acceleration. (You can use any other frame as well, as in your simulation - it’s just more work and harder to see at a glance why you should expect the proper time to be the same). The coordinate time does change if you choose to use a frame (as in your simulation) in which the speed of the bottom end of the clock is different for each cycle. That shouldn’t be surprising either - you’d expect different results for frame-dependent quantities when the clock is moving at different speeds relative to you. (I haven’t looked at your code or checked the results for quantitative accuracy, just saying that the qualitative results you’re reporting are to be expected) • #3 Ibix 6,421 5,072 I'm presuming that the proper length of the light clock doesn't change? You can just view this as gravitational time dilation, I think. Then yes, the lower clock ticks slower. The ratio of tick rates is constant from symmetry: nothing actually changes from the perspective of either mirror. The tick rate slowing with coordinate time is more or less standard kinetic time dilation. • #4 Dale Mentor 29,527 5,860 Summary:: I'm trying to simulate a super-sized accelerating light-clock in Python. So what do you think, is this a real phenomenon? Yes, what you describe is correct. This effect was one of the first lab experiments confirming GR. 
It was called the Pound Rebka experiment, although no attempt was made to measure the results in a free falling frame. • #5 PeterDonis Mentor 2019 Award 29,591 8,883 It was called the Pound Rebka experiment, although no attempt was made to measure the results in a free falling frame. It probably would have been tough to get a research grant if they had had to say in their grant application that the apparatus--carefully designed to send its data by radio as it fell--would be destroyed by impact with the ground at the end of the experiment. Likes etotheipi, vanhees71 and Dale • #6 1,420 112 I'm presuming that the proper length of the light clock doesn't change? You can just view this as gravitational time dilation, I think. Then yes, the lower clock ticks slower. The ratio of tick rates is constant from symmetry: nothing actually changes from the perspective of either mirror. The tick rate slowing with coordinate time is more or less standard kinetic time dilation. Just to make sure I understand correctly: Do you mean the lower clock clock ticks slower than the large light-clock, or do you mean the lower clock ticks slower then some clock higher up, or do you mean both of those things? --------- Yes, my clock's proper length stays the same. It length-contracts correctly in an inertial frame. • #7 PeterDonis Mentor 2019 Award 29,591 8,883 Do you mean the lower clock clock ticks slower than the large light-clock, or do you mean the lower clock ticks slower then some clock higher up, or do you mean both of those things? Since by definition you are making the large light clock large enough that time dilation is non-negligible from the bottom to the top, the large light clock does not have a well-defined "tick rate" at all. • #8 1,420 112 Since by definition you are making the large light clock large enough that time dilation is non-negligible from the bottom to the top, the large light clock does not have a well-defined "tick rate" at all. How about cycles per minute rate, which is what I meant by tick rate? And by cycle I mean that light travels from lower mirror to upper mirror and then back to lower mirror. • #9 PeterDonis Mentor 2019 Award 29,591 8,883 How about cycles per minute rate, which is what I meant by tick rate? Still not well-defined, since by construction a "minute" is not the same at the lower and upper ends of the big clock. by cycle I mean that light travels from lower mirror to upper mirror and then back to lower mirror. The time by a clock at the lower mirror for a round trip of light is well-defined. So is the time by a clock at the upper mirror. But if either of those is what you mean by "tick rate", then (a) you need to say so explicitly, and (b) you've made having the big clock pointless, since you are depending on some other clock to tell you what its "tick rate" is. • #10 Ibix 6,421 5,072 How about cycles per minute rate, which is what I meant by tick rate? Measured by who, is Peter's point. For a small light clock you can compare to a local clock. For a large one you need to compare to a small clock either at the top or bottom mirror - which you already did - and they will give different answers. Edit: beaten to it, I see. • #11 1,420 112 Measured by who, is Peter's point. For a small light clock you can compare to a local clock. For a large one you need to compare to a small clock either at the top or bottom mirror - which you already did - and they will give different answers. Edit: beaten to it, I see. 
Does a big (tall) accelerating light-clock tick (cycle) extra fast because of the acceleration, according to an observer at the lower mirror? I could use an answer to that question. Many answers from different people would be extra nice. • #12 PeterDonis Mentor 2019 Award 29,591 8,883 Does a big (tall) accelerating light-clock tick (cycle) extra fast because of the acceleration, according to an observer at the lower mirror? Extra fast compared to what? • #13 Ibix 6,421 5,072 Does a big (tall) accelerating light-clock tick (cycle) extra fast because of the acceleration, according to an observer at the lower mirror? Ticks fast compared to what, as Peter asks? Compared to when it's not accelerating? Yes - trivially the tick time is 0.8s for your 0.4 light second clock in that case. But that's directly attributable to the velocity change of the lower mirror in an inertial frame - you can get it by writing down ##\gamma(t)## and calculating ##\int dt/\gamma## between reflection events. Which is what you did to get your function T() in fact. I must say I don't understand your program, now I look at it. It seems to be using two loops to estimate the intercept time of light pulses with the mirrors by some iterative process. Yet to be able to have written the program you must have written down the location of the mirror as a function of coordinate time (I think that's what your function d() does). Why not solve this intercept calculation algebraically and skip the program altogether? • #14 PeterDonis Mentor 2019 Award 29,591 8,883 Compared to when it's not accelerating? Yes - trivially the tick time is 0.8s for your 0.4 light second clock in that case. I don't think it's as simple as this. How are you making this comparison? • #15 Ibix 6,421 5,072 I don't think it's as simple as this. How are you making this comparison? I'm considering a clock whose length measured in its (possibly instantaneous) inertial rest frame is 0.4 (lifted from jartsa's program), and comparing the proper time between reflection events as measured by a small clock colocated with the lower mirror. That is, I'm comparing the arc length of the lower mirror's worldline between reflection events for different proper accelerations of the lower mirror. • #16 PeterDonis Mentor 2019 Award 29,591 8,883 I'm comparing the arc length of the lower mirror's worldline between reflection events for different proper accelerations of the lower mirror. Ok, so in an inertial frame in which the lower clock is momentarily at rest at the instant the light pulse reflects off the upper mirror (this is the simplest way to take advantage of the symmetries involved), we have (in units where ##c = 1##, so we are measuring time in seconds and distance in light-seconds): Coordinate time ##t_0## at the instant of the upper mirror reflection. A lower mirror at ##x_0 = 1 / a## at coordinate time ##t_0##, where ##a## is the proper acceleration of the lower mirror. An upper mirror at ##x_0 + L = L + 1 / a## at coordinate time ##t_0##, where ##L## is the mirror spacing (i.e., ##0.4## in the OP's formulation). The lower mirror's worldline is then given by ##x = (1 / a) \sqrt{1 + a^2 t^2}##. The proper time along the worldline from ##t = 0## to coordinate time ##t## is given by ##\tau = (1 / a) \sinh^{-1} \left( a t \right) = (1 / a) \ln \left( a t + \sqrt{1 + a^2 t^2} \right)##. 
We then simply calculate the point of intersection of the reflected light signal with the lower mirror's worldline; that gives us the value of ##t## we need to plug into the formula for ##\tau##. (The total round-trip time will then be just twice this.) The reflected light signal's worldline is simply ##x = (1 / a) + L - t##, so we have $$\frac{1}{a} + L - t = \frac{1}{a} \sqrt{1 + a^2 t^2}$$ which gives, after some algebra, $$t = \frac{L}{2} \frac{2 + a L}{1 + a L}$$ However, we can also rearrange the above equation for where the reflected signal meets the lower mirror to get this interesting equation: $$1 + a L = a t + \sqrt{1 + a^2 t^2}$$ The expression on the LHS can then be substituted into the equation for ##\tau## to obtain $$\tau = \frac{1}{a} \ln \left( 1 + a L \right)$$ We can then take the ratio ##\tau / t##: $$\frac{\tau}{t} = \frac{1}{a} \ln \left( 1 + a L \right) \frac{2}{L} \frac{1 + a L}{2 + a L}$$ We can see the following general features from the above: (1) For ##a = 0##, ##t = L##, as expected. (2) As ##a## increases, ##t## decreases (because the factor multiplying ##L / 2## is ##2## at ##a = 0## and is strictly decreasing with increasing ##a##, as can be seen by taking its derivative with respect to ##a##). This is to be expected since as ##a## increases, the lower mirror accelerates more towards the reflected light signal, so it will take less coordinate time to meet it. (3) As ##a## increases, ##\tau## decreases (because ##\tau## is a ratio of a term logarithmic in ##a## to a term linear in ##a##; taking the derivative gives a more complicated formula from which it's not easy to see that it must be strictly decreasing with increasing ##a##, but it can be shown). (4) As ##a## increases, ##\tau / t## decreases, at least for small ##a##. This is simplest to see by taking the power series expansion of ##\ln \left( 1 + a L \right)## and plugging that into the above formula; this gives, keeping only terms up to those quadratic in ##a##, $$\frac{\tau}{t} = \frac{2 + a L - a^2 L^2}{2 + a L}$$ Which is decreasing with ##a##. I think that remains true for ##a## not small, but I have not done a detailed computation to check. Likes Ibix • #17 Ibix 6,421 5,072 Which is decreasing with ##a##. I think that remains true for ##a## not small, but I have not done a detailed computation to check. In the limit ##a\rightarrow\infty## the worldline of the lower mirror hugs the lightcone of its instantaneous rest event which, in the symmetric frame we are using, is simultaneous with the upper mirror reflection event. By inspection ##\tau\rightarrow 0## and ##t\rightarrow L/2##, so ##\tau/t\rightarrow 0##. Last edited: Likes Nugatory • #18 PeterDonis Mentor 2019 Award 29,591 8,883 In the limit ##a\rightarrow\infty## the worldline of the lower mirror hugs the lightcone of its instantaneous rest event which, in the symmetric frame we are using, is simultaneous with the upper mirror reflection event. By inspection ##\tau\rightarrow 0## and ##t\rightarrow L/2##, so ##\tau/t\rightarrow 0##. Ah, yes, I figured there was a simpler way of looking at that case. The other interesting thing to do is a similar calculation for the upper mirror, where we pick a frame in which the lower mirror reflection event is at ##t = 0## and the upper mirror is momentarily at rest in the frame at ##t = 0##. This calculation shows that for the upper mirror, ##t## and ##\tau## both increase as ##a## increases, although their ratio still decreases. 
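A quick numerical cross-check of these closed-form results for the return leg (a sketch, not from the thread; the bisection approach, function names and sample accelerations are my own choices, units with ##c = 1##): it finds where the reflected signal meets the lower mirror's worldline and compares with the formulas above.

Python:
# Numerical check of the closed-form results in post #16 (c = 1).
# Assumed setup: lower mirror at x = 1/a at t = 0, upper mirror at x = 1/a + L,
# reflection off the upper mirror at t = 0, return signal x = 1/a + L - t.
from math import sqrt, asinh, log

def check(a, L):
    # closed-form coordinate time for the return leg
    t_formula = (L / 2) * (2 + a * L) / (1 + a * L)
    # closed-form proper time along the lower mirror's worldline
    tau_formula = (1 / a) * log(1 + a * L)
    # brute force: bisection on f(t) = mirror position - signal position,
    # mirror: x(t) = (1/a) * sqrt(1 + (a t)^2), signal: x = 1/a + L - t
    lo, hi = 0.0, L
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f = (1 / a) * sqrt(1 + (a * mid) ** 2) - (1 / a + L - mid)
        if f < 0:
            lo = mid
        else:
            hi = mid
    t_num = 0.5 * (lo + hi)
    tau_num = (1 / a) * asinh(a * t_num)   # proper time from t = 0 to t_num
    return t_formula, t_num, tau_formula, tau_num

for a in (0.001, 1.0, 10.0):
    print(a, check(a, 0.4))

The numerical root and proper time agree with the closed forms to the bisection tolerance, as expected.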
• #19 Ibix 6,421 5,072 The other interesting thing to do is a similar calculation for the upper mirror, where we pick a frame in which the lower mirror reflection event is at ##t = 0## and the upper mirror is momentarily at rest in the frame at ##t = 0##. This calculation shows that for the upper mirror, ##t## and ##\tau## both increase as ##a## increases, although their ratio still decreases. Agreed. If you're feeling lazy, you can just reuse your maths from #16, but with ##-1/L<a<0## (for more extreme negative values the other mirror is below the Rindler horizon and the clock doesn't work any more). Likes Nugatory • #20 PeterDonis Mentor 2019 Award 29,591 8,883 If you're feeling lazy, you can just reuse your maths from #16 Yes, the quick summary is: $$t = \frac{L}{2} \frac{2 - a L}{1 - a L}$$ which is increasing in ##a## (and requires the limit ##a L < 1##), $$\tau = \frac{1}{a} \ln \left( \frac{1}{1 - a L} \right)$$ which is increasing in ##a##, $$\frac{\tau}{t} = \frac{1}{a} \ln \left( \frac{1}{1 - a L} \right) \frac{2}{L} \frac{1 - a L}{2 - a L}$$ And for small ##a## $$\frac{\tau}{t} = \frac{2 - a L - a^2 L^2}{2 - a L}$$ which is decreasing in ##a##. • #21 1,420 112 (2) As ##a## increases, ##t## decreases (because the factor multiplying ##L / 2## is ##2## at ##a = 0## and is strictly decreasing with increasing ##a##, as can be seen by taking its derivative with respect to ##a##). This is to be expected since as ##a## increases, the lower mirror accelerates more towards the reflected light signal, so it will take less coordinate time to meet it. I think coordinate time should start increasing when at some large accelerations the upper mirror's velocity becomes large, and because of that the first leg takes a long time. For example, if the first leg takes time 3*L, and if we pretend that the second leg takes zero time, then the two legs take time 3*L. Oh yes, aL < 1 . I guess that means that the upper mirror does not reach very high velocity during the first leg. Never mind then. Last edited: • #22 PeterDonis Mentor 2019 Award 29,591 8,883 I think coordinate time should start increasing Don't wave your hands. Do the math. You have coordinate time as a function of ##a##. If there is a range of ##a## for which ##t## increases, it should be easy to show mathematically. Oh yes, aL < 1 . Not for the case where the reflection at ##t = 0## is off the upper mirror, i.e., where the return light signal whose path I was analyzing is from the upper to the lower mirror. There is no limit on the acceleration for that case. For the case where the reflection at ##t = 0## is off the lower mirror, so the return light signal is from the lower to the upper mirror, there is a restriction ##aL < 1##, yes, because if ##a L \ge 1##, the lower mirror is behind the upper mirror's Rindler horizon and the return light signal will never catch up to the upper mirror. • #23 Ibix 6,421 5,072 I think coordinate time should start increasing when at some large accelerations the upper mirror's velocity becomes large, and because of that the first leg takes a long time. The maths Peter wrote down in #16 has the focus of the hyperbolae followed by the mirrors be at the origin.
Since such hyperbolae are invariant under Lorentz transform (they're analogous to Euclidean circles centred at the origin, which are invariant under rotation about the origin) you can take the ##(x,t)## coordinates of the three reflection events he discusses (##t=0##, ##t=\pm L(2+aL)/2(1+aL)##) and boost them by ##-v## to get the coordinates of three reflection events where the upper mirror is travelling at ##+v## at its reflection event. Then you can see if your intuition is correct or not. • #24 PeterDonis Mentor 2019 Award 29,591 8,883 Since such hyperbolae are invariant under Lorentz transform (they're analogous to Euclidean circles centred at the origin, which are invariant under rotation about the origin) you can take the ##(x,t)## coordinates of the three reflection events he discusses (##t=0##, ##t=\pm L(2+aL)/2(1+aL)##) and boost them by ##-v## to get the coordinates of three reflection events where the upper mirror is travelling at ##+v## at its reflection event. Then you can see if your intuition is correct or not. While this is a valid process, I'm not sure it addresses the point @jartsa was trying to make. I think he was talking about a case where the upper reflection is still at ##t = 0##, but the acceleration is very large so the velocity of either mirror goes from zero to close to the speed of light in the coordinate time it takes for a light signal to go between the mirrors. I think he forgot that for the case where the reflection is at the upper mirror, the light signal and the mirror are traveling towards each other on both legs, so increasing the acceleration just increases their effective approach speed, making ##t## smaller (approaching the limit of ##L / 2## that you gave earlier using a simple argument that I'm not sure @jartsa has read). • #25 Ibix 6,421 5,072 I think he was talking about a case where the upper reflection is still at ##t = 0##, but the acceleration is very large so the velocity of either mirror goes from zero to close to the speed of light in the coordinate time it takes for a light signal to go between the mirrors. Possibly. I'm reading his second paragraph, which has asymmetric times for two legs, as referring to a reflection that isn't symmetric around the instantaneous rest. So if you were discussing the first tick of the light clock in #16 then I was aiming to discuss the ##n##th tick (or the ##x##th tick, I suppose, since I didn't impose any discrete condition on ##v##). I think the process I offered is appropriate on that basis - we'll see what @jartsa says he meant.
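For anyone rerunning the simulation from post #1 today: that code is Python 2 (print statements, xrange). Below is a Python 3 sketch of the same idea - the same variable-step ray tracing between the two mirrors - offered as a rough translation rather than a verified replacement.

Python:
# Python 3 sketch of the simulation from post #1 (c = 1 units).
# March a light pulse up to the top mirror and back down, with a step
# proportional to the remaining gap, then convert the total coordinate
# time to the lower mirror's proper time.
from math import sqrt, asinh

def gamma(a, t):
    return sqrt(1.0 + (a * t) ** 2)

def d(a, t):                       # distance travelled by a mirror with proper acceleration a
    return (gamma(a, t) - 1.0) / a

def T(a, t):                       # proper time of that mirror at coordinate time t
    return asinh(a * t) / a

def proper_acceleration_at_height(a, h):
    return a / (1.0 + a * h)

h = 0.4                            # proper height of the clock (light-seconds)
for a1 in (0.001, 1.0):
    a2 = proper_acceleration_at_height(a1, h)
    t = 0.0                        # coordinate time
    p = 0.0                        # position of the light pulse above the lower mirror's start
    for _ in range(44444):         # upward leg: chase the upper mirror
        dt = abs(d(a2, t) + h - p) / 10.0
        p += dt
        t += dt
    for _ in range(44444):         # downward leg: meet the lower mirror
        dt = abs(d(a1, t) - p) / 20.0
        p -= dt
        t += dt
    print('accelerations:', round(a1, 4), round(a2, 4))
    print('proper time and coordinate time:', T(a1, t), t)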
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8823264241218567, "perplexity": 1167.3842301838615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655880243.25/warc/CC-MAIN-20200702205206-20200702235206-00160.warc.gz"}
https://www.physicsforums.com/threads/pauli-lubanski-pseudo-vector-in-spin-representation.342801/
# Pauli lubanski pseudo vector in spin representation

1. Oct 4, 2009

### rntsai

I'm trying to calculate the pauli-lubanski pseudo vector for different representations of the poincare group. The first rep is the infinite dimensional "angular momentum" rep where the generators of the lorentz part take the form :

M_ab = x_a*d_b - x_b*d_a (for 3 rotations)
M_ab = x_a*d_b + x_b*d_a (for 3 boosts)

(here d_a is partial differentiation with respect to x_a, the indices...should be obvious). The momentum part of the generators are :

P_a = d_a (4 translations)

The pauli-lubanski pseudo vector is defined :

W_a = e_abcd * M_bc * P_d

(e_abcd is the antisymmetric levi-civita symbol). A bit of a surprise (to me) is that W_a = 0 for this rep! (check it if you like). I moved to calculating W_a in a "spin" rep of say dimension N; so now :

M_ab -> M_ab*I_N + S_ab
P_a -> P_a * I_N

where the S_ab are NxN matrices (6 constant matrices satisfying the lorentz algebra multiplication). I_N is the NxN identity matrix (S_ab and P_c commute : [S_ab, P_c] = 0) and the pauli-lubanski pseudo vector becomes :

W_a = e_abcd * S_bc * P_d

So it seems like each of the four components is an NxN matrix. Even the invariant W^a*W_a is an NxN matrix...I assume with eigenvalues equal to some multiple of spin(spin+1)...although looking at this matrix that doesn't look obvious.

Anyway, my question is this : does the above look right? Where can I find an explicit example where the above calculations are carried out in detail? Also please let me know if there's a better place to post this if this is outside the forum's area.

2. Oct 6, 2009

### Wriju

Couple of things I'd like to point out (unless you've figured it out by yourself)

1) The P-L (pseudo) tensor is constructed in a manner so that it receives NO contribution from orbital ang. mom. since that can take any arbitrarily large/small integer multiple of h-bar while spin ang. mom. is a CHARACTERISTIC of the particle (i.e. representation) like mass and hence provides the only non-vanishing contribution. No wonder you found a vanishing answer with M_ab.

2) The S_ab matrix for a spin-j particle (i.e. Lorentz group representation) is (2j+1) dimensional. S_0i = 0, S_ij = e_ijk*J^k, where the J^k's are the usual spin-j matrices, e.g. J^3 = diag(-j, -j+1, ..., j-1, j).

Hope that helps.
Wriju
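The recipe in the second post is easy to test numerically. Below is a rough Python sketch (my own construction, with conventions that have to be assumed: metric diag(+1,-1,-1,-1), e_0123 = +1, rest-frame momentum p = (m,0,0,0), S_0i = 0, S_ij = e_ijk J_k, and W_a = e_abcd S_bc P_d exactly as written above, i.e. without the conventional factor 1/2 and summing over both orderings of b, c). With these choices the eigenvalues of W^a W_a come out as -4 m^2 j(j+1) - a multiple of spin(spin+1), as hoped; the textbook definition with the extra 1/2 gives -m^2 j(j+1).

Python:
# Check that W^a W_a is proportional to j(j+1) for the recipe above.
import numpy as np
from itertools import permutations

def spin_matrices(j):
    dim = int(round(2 * j)) + 1
    m = np.array([j - k for k in range(dim)])          # m = j, j-1, ..., -j
    Jz = np.diag(m)
    Jp = np.zeros((dim, dim))
    for k in range(1, dim):                            # J+ |j,m> = sqrt(j(j+1)-m(m+1)) |j,m+1>
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    Jm = Jp.T
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), Jz     # Jx, Jy, Jz

def levi_civita4():
    eps = np.zeros((4, 4, 4, 4))
    for p in permutations(range(4)):
        inv = sum(1 for i in range(4) for k in range(i + 1, 4) if p[i] > p[k])
        eps[p] = -1.0 if inv % 2 else 1.0              # eps_{0123} = +1
    return eps

def pauli_lubanski_squared(j, mass=1.0):
    Jx, Jy, Jz = spin_matrices(j)
    J = [Jx, Jy, Jz]
    dim = Jx.shape[0]
    S = np.zeros((4, 4, dim, dim), dtype=complex)
    for a in range(3):                                  # S_{ij} = eps_{ijk} J_k, S_{0i} = 0
        for b in range(3):
            for c in range(3):
                e = ((a - b) * (b - c) * (c - a)) / 2   # 3d Levi-Civita for indices 0,1,2
                if e != 0:
                    S[a + 1, b + 1] += e * J[c]
    p_lower = np.array([mass, 0.0, 0.0, 0.0])           # rest-frame momentum, metric (+,-,-,-)
    eps4 = levi_civita4()
    eta = np.diag([1.0, -1.0, -1.0, -1.0])
    W = [sum(eps4[a, b, c, d] * S[b, c] * p_lower[d]
             for b in range(4) for c in range(4) for d in range(4))
         for a in range(4)]
    W2 = sum(eta[a, a] * W[a] @ W[a] for a in range(4))  # W^a W_a
    return np.linalg.eigvals(W2)

for j in (0.5, 1.0, 1.5):
    print(j, np.round(pauli_lubanski_squared(j).real, 6))
    # expect all eigenvalues equal to -4 * j * (j + 1) with these conventions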
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9668771624565125, "perplexity": 3937.017573910235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809419.96/warc/CC-MAIN-20171125051513-20171125071513-00362.warc.gz"}
http://mathoverflow.net/questions/103353/maximal-number-of-binary-strings-given-constraints
# Maximal number of binary strings given constraints Let $k, N, m \in \mathbb{N}$ such that $k \leq N$. What is the maximal number $e$ of strings $\sigma_1, \sigma_i, \dots, \sigma_e$, each of length $N$ such that $$\forall j < k, \left(\sum_{i=1}^e \sigma_i[j]\right) \leq 2^{N-k}(m-1)$$ For example if $m=3$, $k=4$, $N=5$, we have $e = 14$. An example of such a set is $$\begin{array}{cc} \sigma_1 & \underbrace{\overbrace{0000}^{k}0}_{N}\\\\ \sigma_2 & 00001\\\\ \sigma_3 & 10000\\\\ \sigma_4 & 10001\\\\ \sigma_5 & 01000\\\\ \sigma_6 & 01001\\\\ \sigma_7 & 00100\\\\ \end{array} \hspace{20pt} \begin{array}{cc} \sigma_8 & 00101\\\\ \sigma_9 & 00010\\\\ \sigma_{10} & 00011\\\\ \sigma_{11} & 11000\\\\ \sigma_{12} & 11001\\\\ \sigma_{13} & 00110\\\\ \sigma_{14} & 00111\\\\ \end{array}$$ By a few trials, it seems that the following holds • If $k \leq m$ then $e = 2^{N-k} + 2^{N-m}k$ • If $k \geq m$ then $e = 2^{N-k} + 2^{N-m}m$ Has this problem been already studied ? My intuition is that the following exchange lemma holds: Let $S$ be a set of strings verifying previous properties. Then there exists a set $S'$ of same cardinality such that if $\sigma \in S'$ has $n$ bits at 1 at the $k$'s first positions, then all strings having strictly less than $n$ bits at 1 at the $k$'s first positions are in $S'$. But I don't know how to prove this. - Is there no constraint on the last $N-k$ values of the strings? – Douglas Zare Jul 28 '12 at 2:32 No, there is no constraint on those values. But this parameter is necessary. You can see $N-k$ as the possibility to add a ponderation to the $k$-prefix. – Turingoid Jul 28 '12 at 3:01 So, your strategy is to include all strings from $C(k,l)$, the $k$-strings with $l$ 1's, until you reach a point where the bound forces you to select a proper subset from $C(k,l)$, while varying the last $N-k$ digits, right? If so, a relevant fact is that for $l > 0$, the full set $C(k,l)$ contributes $\binom{k}{l}/(k/l) = \binom{k-1}{l-1}$ 1's to any given $j$-column per suffix string of $2^{N-k}$. Thus, if your exchange lemma holds, you can get 22 strings for $N = 6, k = 5, m = 4$, instead of 18. – Hugh Denoncourt Jul 29 '12 at 19:47 I do not understand why you use N=6 when he used N=5 in the example, and so on. So I don't know if the fact that you found 22 instead of 18 has a real meaning here, or if you just did a typo with the numbers. – Arthur MILCHIOR Jul 31 '12 at 2:15 @Hugo My exchange lemma says that before including strings with $l$ 1's I should first include strings with $l−1$ 1's. However, it doesn't says that the last $l$ is totally filled, so I don't see how you can conclude that it would give another bound. Maybe my "less" was unclear and I should say "strictly less". – Turingoid Jul 31 '12 at 2:17 This is a counterexample to the formula $e = 2^{N-k} + 2^{N-m}m$ for the maximum number of strings satisfying the given constraints. The parameters of the example are $N = 6$, $k = 5$, and $m = 4$. The constraint is that the columns sum to no more than $6$. The conjectured formula predicts that 18 is the maximum number of strings. 
$$\begin{array}{cc} \sigma_1 & 000000\\\\ \sigma_2 & 000001\\\\ \sigma_3 & 100000\\\\ \sigma_4 & 100001\\\\ \sigma_5 & 010000\\\\ \sigma_6 & 010001\\\\ \sigma_7 & 001000\\\\ \sigma_8 & 001001\\\\ \end{array} \hspace{20pt} \begin{array}{cc} \sigma_9 & 000100\\\\ \sigma_{10} & 000101\\\\ \sigma_{11} & 000010\\\\ \sigma_{12} & 000011\\\\ \sigma_{13} & 110000\\\\ \sigma_{14} & 110001\\\\ \sigma_{15} & 001100\\\\ \sigma_{16} & 001101\\\\ \end{array} \hspace{20pt} \begin{array}{cc} \sigma_{17} & 100010\\\\ \sigma_{18} & 100011\\\\ \sigma_{19} & 011000\\\\ \sigma_{20} & 011001\\\\ \sigma_{21} & 000110\\\\ \sigma_{22} & 000111\\\\ \end{array}$$ If $C(k, \ell)$ denotes the number of $k$-strings with $\ell$ 1's, and all such strings are included with all possible $N-k$ suffixes, then the total contribution to the column sum from $C(k, \ell)$ (when $\ell > 0$) is $2^{N-k}\binom{k-1}{\ell - 1}$. In this example, all strings from $C(5,0)$ and $C(5,1)$ were included, but, not all strings from $C(5,2)$ could be included. If the exchange lemma is correct, this idea can be used to at least predict lower and upper bounds for $e$. @Hugh Your interpretation is good. You could as well have given a counter example with N=5 as you duplicate all $k$ prefixes and hence don't "use" the extra N-k bits. I'll try to fix my bound. In fact I don't need to have a tight bound, but just an upper bound in function of $k$ and $m$ so that I can take $m$ and $k$ big enough to make $e$ arbitrarily small. (And sorry for having mistyped your name in my comments) – Turingoid Aug 1 '12 at 23:20 @Hugh In fact, using my previous comment, I realized that I needed to give an upper bound only to an infinity of $m$. So it suffices to take $m_i$ so that we can put exactly all strings with at most $i$ 1's in the $k$ prefix and all $N-k$ suffixes. I guess it isn't too difficult to prove that the created set is an optimal solution for $m_i$. – Turingoid Aug 2 '12 at 3:53 @Turingoid: True! And, the number of such strings is a nice simple sum of binomial coefficients times a power of 2. For nice choices of $m$, you get a nice result. I wonder if the optimal number of strings can always be obtained by repeating the strings over all $N-k$ suffixes. I meant to ask: Was there a motivation behind the question, perhaps as part of a larger question, or did it arise out of recreation? – Hugh Denoncourt Aug 2 '12 at 18:50
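The counterexample can also be verified mechanically. The following short Python check (not from the thread) hard-codes the 22 strings above and confirms that each of the first $k=5$ column sums is at most $2^{N-k}(m-1)=6$, so the conjectured value of 18 is indeed exceeded.

```python
# Verify the N=6, k=5, m=4 counterexample: 22 strings, first-5-column sums <= 6.
strings = [
    "000000", "000001", "100000", "100001", "010000", "010001",
    "001000", "001001", "000100", "000101", "000010", "000011",
    "110000", "110001", "001100", "001101", "100010", "100011",
    "011000", "011001", "000110", "000111",
]
N, k, m = 6, 5, 4
bound = 2 ** (N - k) * (m - 1)          # = 6
col_sums = [sum(int(s[j]) for s in strings) for j in range(k)]
print(len(strings), col_sums, all(c <= bound for c in col_sums))
# -> 22 [6, 6, 6, 6, 6] True, which exceeds the conjectured maximum
#    2**(N-k) + 2**(N-m)*m = 2 + 16 = 18
```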
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9009747505187988, "perplexity": 206.61236753951576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824757.8/warc/CC-MAIN-20160723071024-00174-ip-10-185-27-174.ec2.internal.warc.gz"}
https://discuss.dgl.ai/t/gatv2-implementation/2172
# GATv2 implementation

Hi all,

I was wondering if anyone has already implemented - or looked into - GATv2? It's proposed in this paper: [2105.14491] How Attentive are Graph Attention Networks?.

This is basically the only difference to normal GATConv: GATv2 applies the LeakyReLU before the shared attention vector a, i.e. the attention logits are a^T LeakyReLU(W [h_i || h_j]) instead of GAT's LeakyReLU(a^T [W h_i || W h_j]).

I have some difficulties with implementing it in a handy way as the DGL version of GATConv decomposes the weight vector into a_l and a_r. Any suggestions?

Kind regards, Erik

Hi Erik,

The way of implementing GATv2 will be similar to GAT.

1. Decompose W \cdot [h_i||h_j] into W_l \cdot h_i + W_r \cdot h_j. This is very similar to the a_l/a_r trick in the original GAT.
2. After step one, the result should be edge data. You can then directly apply LeakyReLU and a^\intercal \cdot () on it. The latter is another linear transformation with output dim being one (a scalar).
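For what it's worth, here is a minimal single-head sketch of those two steps (my own, assuming a reasonably recent DGL version where `fn.u_add_v` and `dgl.nn.functional.edge_softmax` are available; it is not the official GATv2Conv):

```python
import torch.nn as nn
import dgl.function as fn
from dgl.nn.functional import edge_softmax

class GATv2ConvSketch(nn.Module):
    """Single-head GATv2 layer, following the two steps described above."""
    def __init__(self, in_feats, out_feats, negative_slope=0.2):
        super().__init__()
        self.fc_l = nn.Linear(in_feats, out_feats, bias=False)   # W_l (source side, h_i)
        self.fc_r = nn.Linear(in_feats, out_feats, bias=False)   # W_r (destination side, h_j)
        self.attn = nn.Linear(out_feats, 1, bias=False)          # a^T, scalar score per edge
        self.leaky_relu = nn.LeakyReLU(negative_slope)

    def forward(self, g, h):
        with g.local_scope():
            g.srcdata['el'] = self.fc_l(h)                       # W_l h_i
            g.dstdata['er'] = self.fc_r(h)                       # W_r h_j
            # Step 1: W_l h_i + W_r h_j on every edge (replaces W [h_i || h_j])
            g.apply_edges(fn.u_add_v('el', 'er', 'e'))
            # Step 2: LeakyReLU first, then the shared vector a (the GATv2 change)
            e = self.attn(self.leaky_relu(g.edata.pop('e')))
            g.edata['a'] = edge_softmax(g, e)
            # aggregate source features weighted by attention
            g.update_all(fn.u_mul_e('el', 'a', 'm'), fn.sum('m', 'h'))
            return g.dstdata['h']
```

For multiple heads you would give the two linear maps and `attn` an extra head dimension, much as the built-in GATConv does.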
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9789408445358276, "perplexity": 3267.617464879281}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150307.84/warc/CC-MAIN-20210724160723-20210724190723-00193.warc.gz"}
http://math.stackexchange.com/users/15140/dair?tab=activity&sort=reviews
Dair Reputation 709 Top tag Next privilege 1,000 Rep. Create new tags Feb 13 reviewed Approve Prove $\mathbb{Z} \times \mathbb{Z} / \left\langle (6,9)\right\rangle$ has an element of order 3 Apr 8 reviewed Approve Getting different answer when evaluating an integral from a released exam.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8608196377754211, "perplexity": 4984.321306069123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398456975.30/warc/CC-MAIN-20151124205416-00248-ip-10-71-132-137.ec2.internal.warc.gz"}
http://tex.stackexchange.com/tags/notes/hot
# Tag Info 37 I am using todonotes with enlarged margins. You just enlarge the size of your document and one margin, but you keep the textwidth the same. You need to use the geometry package for that. I actually wrote a blogpost about it. A nice point about todonotes is that they are configurable (you can change the colour & background colour of your notes), that ... 29 You can have a look at the todonotes package. It has some very simple yet quite customizable commands to add notes and stuff to your document. There is also a nice feature called listoftodos that lists all the todo fields that you set in the document. Compare the MWE below: \documentclass{article} \usepackage{todonotes} \begin{document} Some text... ... 21 I use the changes package, which allows authors to mark their changes and makes them colourful and so easier to spot. There's also the latexdiff program which is a bit like running diff except that you can process the output via LaTeX and get a more useful way of displaying the differences, (see also ldiff). One tip if you're using non-LaTeX-specific tools ... 20 One basic possibility using TikZ (process using xelatex): \documentclass{article} \usepackage{xparse} \usepackage{fontspec} \setmainfont{Humor Sans} \usepackage{tikz} \usetikzlibrary{shadows} \usepackage{lipsum} \definecolor{myyellow}{RGB}{242,226,149} \NewDocumentCommand\StickyNote{O{6cm}mO{6cm}}{% \begin{tikzpicture} \node[ drop shadow={ shadow ... 16 Another option is to use the \caption* command from the caption package: \documentclass{article} \usepackage{caption} \begin{document} \begin{figure} \centering \caption{A figure} \rule{1cm}{1cm}% placeholder for \includegraphics \caption*{A note} \end{figure} \end{document} If the notes should use separate different formatting than the one used ... 15 My 2 cents (from experience with my thesis + several papers): even if your coauthors don't use version control, use it alone for your own work. typical workflow: give the .tex to your advisor, tell him to modify the source file directly, and forget about macros to mark differences manually. Whenever you get a revised version back from him, check it in. ... 15 As a demonstration, here is an implementation using threeparttable: \documentclass{article} \usepackage{booktabs,caption,fixltx2e} \usepackage[flushleft]{threeparttable} \begin{document} \begin{table} \begin{threeparttable} \caption{Sample ANOVA table} \begin{tabular}{lllll} \toprule Stubhead & $$df$$ & $$f$$ & \( ... 15 If you use Adobe Reader, then there is also pdfcomment. You can run texdoc pdfcomment in your terminal to read the manual. Like todnotes package, this package tries to emulate the commenting functionality found in some word processors. This sample code is taken from the example.tex file provided with the package and some of the things that it can do. ... 12 If you just want to write between the lines of text, you probably want to use combination of \raisebox and \rlap. \raisebox basically typesets the argument in a box raised, or lowered over the baseline by its given arguments. You however want TeX to make the box of virtually zero width and zero height. This is done using \rlap and arguments to \raisebox ... 12 Yes chemifg is a great tool. But as well as almost every code to picture system the syntax is not trival. Please consider the following example. You can easily see, that chemfig syntax follows a logical and human readable syntax, but will become extremely complex for larger structures. 
And so far as i can see chemfig is the easiest system for chemical ... 11 Without an actual example of what your table looks like it is difficult to say whether this solution will work for you. But, you could incorporate each portion within a minipage to obtain the tablenotes besides the table. Here is a before and after comparison: Before: After (with minipages): Notes: I have not been able to figure out how to get the ... 11 Git has nifty options to do diffs like you want =) http://idnotfound.wordpress.com/2009/05/09/word-by-word-diffs-in-git/ Also consider this tip to store preamble as a git submodule: http://markelikalderon.com/2008/07/31/keeping-your-latex-preamble-in-a-git-submodule/ 10 The easy answer here is, of course, yes - there is always room for improvement. One main observation I can make is the following: Mathmode in LaTeX is not limited to symbols and operators; you are allowed to use letters in math mode as well. For example, consider the difference in style when writing Let A = \{ x $\in$ $\mathbb{Z}$ $\|$ x $\le$ 5 \} ... 10 I'm sure the final answer will be using tikzmarks ;-) For what it's worth, here's a simplistic pure TeX solution based on shuffling around some boxes. Edit: Now with pagebreak enabled. \documentclass{article} \usepackage{color} \newdimen\charwd \charwd=1pt \makeatletter \newbox\@tempboxb \newbox\@tempboxc \newbox\@tempboxd \newskip\@tempskipc ... 9 Main file \documentclass{article} \def\noteref#1#2{\csname noteref#2\endcsname{#1}} \def\noterefSOLVED#1#2#3{} \def\noterefPENDING#1#2#3{% \expandafter\def\csname noteref-#1\endcsname{\marginpar{#3}}} \let\oldlabel\label \def\label#1{% \oldlabel{#1}% \csname noteref-#1\endcsname} \input{\jobname-notes} \begin{document} \section{intro\label{aa}} ... 8 One option using \tikzmark (since some calculations are performed for the bar placement, the code needs three runs to stabilize). The bar admits (multiple) page breaks: \documentclass{article} \usepackage[a5paper,rmargin=4cm]{geometry} \usepackage{atbegshi} \usepackage{refcount} \usepackage{setspace} \usepackage{tikzpagenodes} \usetikzlibrary{calc} ... 8 Another option is the rather new fixmetodonotes package. It is much more lightweight than \todonotes by using \marginpar instead of tikz, but contains many of its features, plus some more: Inline and margin placement Listing all notes through \listofnote Flexible customization Automatic placement of a DRAFT watermark on any page that contains notes (can be ... 8 this differs only slightly from the answer by Sigur, but i think it's worth some small adjustments. \documentclass{article} \usepackage{mathtools} \begin{document} \begingroup \small \[ \displaystyle \underbrace{\left(-\frac{1}{2}\right)^0}_{\substack{\text{1st term,}\\ j = 0}} + \underbrace{\left(-\frac{1}{2}\right)^1}_{\substack{\text{2nd term,}\\ j = ... 8 The floatrow package offers the \floatfoot macro for notes in addition to a float's \caption. \documentclass{article} \usepackage[capposition=top]{floatrow} \begin{document} \begin{figure} % \centering% default with floatrow \rule{1cm}{1cm}% placeholder for \includegraphics \caption{A figure} \floatfoot{A note} \end{figure} \end{document} 8 You can use the package enumitem and define a list that you can use for notes such as the following: \documentclass{article} \usepackage{enumitem} \newlist{notes}{enumerate}{1} \setlist[notes]{label=Note: ,leftmargin=*} \begin{document} Duis porttitor nisi et orci pellentesque feugiat. Aenean id turpis vel purus tincidunt sodales. Class aptent taciti ... 
8 I always have the following three on my slides: %\documentclass[notes]{beamer} %\documentclass[notes=hide]{beamer} \documentclass[notes=only]{beamer} Then I comment or uncomment them according to my needs. As should be clear, the last one prints only notes, whereas the others print everything and no notes respectively. 7 You could use \setstretch and \parbox inside \colorbox. Here's an example, using even smaller stretch value to make the effect more visible: \documentclass{article} \usepackage[english]{babel} \usepackage{blindtext} \usepackage[svgnames]{xcolor} \usepackage[doublespacing]{setspace} \newcommand{\mymarginnote}[1]{% ... 7 I do not know if I understand you well. Do you have the LaTeX source of the papers you read? If, as it is often my case, you can only have the PDF versions, then what you would need is a tool to annotate those. I use Jarnal and I am quite satisfied with it. I also use it to grade my student's homework, which they usually turn in in PDF. 7 This should get you started. I extended the size of the margin and used \marginpar to place the text in the margin: \documentclass{article} \usepackage{lipsum} \usepackage{xcolor} \usepackage{mdframed} \setlength{\textwidth}{4.0in}% \setlength{\marginparwidth}{2.0in}% \mdfdefinestyle{MyMarginNoteStyle}{ topline=false, bottomline=false, ... 7 You could use the help of mdframed and expecially its options singleextra, firstextra, secondextra and middleextra. The different options allow you to have different styles for a frame that's on a single page and a frame that is broken over two or more pages. When mdframed is used with framemethod=tikz you have access to the corners of a frame. The node on ... 6 ChemFig is a great package but I don't believe that you'll gain much in time if you create your schemes with it rather than with ChemDraw. Once you know you're way around ChemFig you're just as fast or slow with it than with ChemDraw (supposing you know your way around that, too), at least that's my experience. There are other points you should consider: ... 6 I don't know how to do it with threeparttablex, but it seems that the old threeparttable should be sufficient: the former is for longtable and you don't seem to be using longtabu (I'm afraid that one should rewrite threeparttablex for using the longtabu environment). \documentclass{article} \usepackage{booktabs} \usepackage{tabu} \usepackage{threeparttable} ... 6 I think the free web-based writeLaTeX is worth mention. With this tool, all authors can share documents, review, comment and edit. Since it is web-based all documents will be accessible across platforms. 6 Try using a more casual specification of document structure like Markdown; personally, I like Emacs' org-mode for taking notes. If you want to convert to Latex later, there are plenty of converters. Pandoc supports many casual document structuring formats: besides Markdown, it supports Restructured Text, which is liked by many, and Textile; it supports ... 6 What about something more similar to the one you are trying to copy, made with mdframed? Code: \documentclass{article} \usepackage[framemethod=TikZ]{mdframed} \usepackage{lipsum} % just for the example \newmdenv[% rightmargin = -10pt, skipabove = 1.2\topskip, rightline = false, topline = false, bottomline = false, ... Only top voted, non community-wiki answers of a minimum length are eligible
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8523053526878357, "perplexity": 4477.027568568555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011294162/warc/CC-MAIN-20140305092134-00066-ip-10-183-142-35.ec2.internal.warc.gz"}
https://mathtube.org/lecture/video/elliptic-fibrations-and-singularities-anomalies-and-spectra-4-4
Elliptic Fibrations and Singularities to Anomalies and Spectra 4 of 4

Speaker: Monica Jinwoo Kang
Date: Thu, Aug 26, 2021
Class: Scientific

Abstract: Throughout my lectures I will explain the geometry of elliptic fibrations, which can give rise to an understanding of the spectra and anomalies in lower-dimensional theories arising from Calabi-Yau compactifications of F-theory. I will first explain what an elliptic fibration is and describe the Kodaira types, which give rise to an ADE classification. Utilizing the Weierstrass model of elliptic fibrations, I will discuss Tate's algorithm and the Mordell-Weil group. By considering codimension one and two singularities and studying the geometry of crepant resolutions, we can define G-models, which are geometrically-engineered models from F-theory. I will discuss the dictionary between the gauge theory and the elliptic fibration, and how to use it to learn about topological invariants of the compactified Calabi-Yau, which is one of the ingredients for understanding spectra in the compactified theories. I will explain the more refined connection to the Coulomb branch of 5d N=1 theories and 6d (1,0) theories and their anomalies from this perspective.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8779948949813843, "perplexity": 990.3187707416319}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057083.64/warc/CC-MAIN-20210920161518-20210920191518-00387.warc.gz"}
https://stacks.math.columbia.edu/tag/0DDG
Lemma 59.92.1. Let $K/k$ be an extension of separably closed fields. Let $X$ be a proper scheme over $k$. Let $\mathcal{F}$ be a torsion abelian sheaf on $X_{\acute{e}tale}$. Then the map $H^q_{\acute{e}tale}(X, \mathcal{F}) \to H^q_{\acute{e}tale}(X_K, \mathcal{F}|_{X_K})$ is an isomorphism for $q \geq 0$.

Proof. Looking at stalks we see that this is a special case of Theorem 59.91.11. $\square$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9963199496269226, "perplexity": 173.02798532386194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572581.94/warc/CC-MAIN-20220816211628-20220817001628-00350.warc.gz"}
https://www.ocean-sci.net/6/161/2010/os-6-161-2010.html
Ocean Science – an interactive open-access journal of the European Geosciences Union

Ocean Sci., 6, 161-178, 2010
https://doi.org/10.5194/os-6-161-2010
Published: 04 Feb 2010

# Ensemble perturbation smoother for optimizing tidal boundary conditions by assimilation of High-Frequency radar surface currents – application to the German Bight

A. Barth1,2, A. Alvera-Azcárate1,2, K.-W. Gurgel3, J. Staneva4, A. Port5, J.-M. Beckers1,2, and E. V. Stanev4

• 1GeoHydrodynamics and Environment Research (GHER), MARE, AGO, University of Liège, Liège, Belgium
• 2National Fund for Scientific Research, Belgium
• 3Institute of Oceanography, University of Hamburg, Germany
• 4Institute for Coastal Research, GKSS Research Center, Geesthacht, Germany
• 5Institute for Chemistry and Biology of the Marine Environment (ICBM), University of Oldenburg, Germany

Abstract. High-Frequency (HF) radars measure the ocean surface currents at various spatial and temporal scales. These include tidal currents, wind-driven circulation, density-driven circulation and Stokes drift. Sequential assimilation methods updating the model state have been proven successful to correct the density-driven currents by assimilation of observations such as sea surface height, sea surface temperature and in-situ profiles. However, the situation is different for tides in coastal models since these are not generated within the domain, but are rather propagated inside the domain through the boundary conditions. For improving the modeled tidal variability it is therefore not sufficient to update the model state via data assimilation without updating the boundary conditions. The optimization of boundary conditions to match observations inside the domain is traditionally achieved through variational assimilation methods. In this work we present an ensemble smoother to improve the tidal boundary values so that the model represents more closely the observed currents. To create an ensemble of dynamically realistic boundary conditions, a cost function is formulated which is directly related to the probability of each boundary condition perturbation. This cost function ensures that the boundary condition perturbations are spatially smooth and that the structure of the perturbations satisfies approximately the harmonic linearized shallow water equations. Based on those perturbations an ensemble simulation is carried out using the full three-dimensional General Estuarine Ocean Model (GETM). Optimized boundary values are obtained by assimilating all observations using the covariances of the ensemble simulation.
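The final step described in the abstract - obtaining optimized boundary values from the ensemble covariances - is, at its core, an ensemble (Kalman-type) analysis. The sketch below is a generic stochastic ensemble update written in numpy for illustration only; it is not the paper's GETM/boundary-perturbation code, and all variable names are invented.

```python
import numpy as np

def ensemble_update(X, Y, y_obs, R):
    """Generic ensemble analysis step.

    X : (n_state, n_ens)  ensemble of control variables (e.g. boundary-condition
                          perturbation coefficients), one column per member
    Y : (n_obs, n_ens)    model-predicted observations (e.g. HF-radar currents)
                          for each member
    y_obs : (n_obs,)      observed values
    R : (n_obs, n_obs)    observation-error covariance
    """
    n_ens = X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)          # state anomalies
    Yp = Y - Y.mean(axis=1, keepdims=True)          # predicted-observation anomalies
    Pxy = Xp @ Yp.T / (n_ens - 1)                   # cross-covariance
    Pyy = Yp @ Yp.T / (n_ens - 1) + R               # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                    # Kalman gain
    # perturbed-observation update (stochastic EnKF variant)
    y_pert = y_obs[:, None] + np.random.multivariate_normal(
        np.zeros(len(y_obs)), R, size=n_ens).T
    return X + K @ (y_pert - Y)
```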
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8471447825431824, "perplexity": 4797.505342802978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514571360.41/warc/CC-MAIN-20190915114318-20190915140318-00324.warc.gz"}
http://math.stackexchange.com/questions/221104/how-to-show-that-xya-geq-xa-ya-given-that-a-geq-1-and-x-y
# How to show that $(x+y)^{a} \geq x^{a} + y^{a}$ given that $a \geq 1$ and $x,y \geq 0$?

How to show that $(x+y)^{a} \geq x^{a} + y^{a}$ given that $a \geq 1$ and $x,y \geq 0$? Also, how to prove that the reverse inequality holds when $0 \leq a \leq 1$?

We deal with $a\ge 1$. If $a=1$ there is nothing to do. So let $a\gt 1$. Fix $y$, and let $f(x)=(x+y)^a-x^a-y^a$. We have $f(0)=0$, and $f'(x)=a((x+y)^{a-1}-x^{a-1})$. So $f'(x)\ge 0$ if $x\ge 0$. It follows that for fixed $y$, $f(x)$ is non-decreasing, so $f(x)\ge f(0)=0$ for all $x\ge 0$, which is exactly the inequality $(x+y)^a \geq x^a + y^a$.
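The answer above only treats $a \ge 1$. For the remaining case one can argue the same way (this completion is not part of the original answer): for $0 < a < 1$ and $x > 0$ we have $(x+y)^{a-1} \le x^{a-1}$, because $a-1 < 0$ makes $t \mapsto t^{a-1}$ decreasing on $(0,\infty)$. Hence $f'(x) = a\left((x+y)^{a-1} - x^{a-1}\right) \le 0$, so $f$ is non-increasing and $f(x) \le f(0) = 0$, i.e. $(x+y)^a \le x^a + y^a$. The endpoint $a = 0$ and the cases $x = 0$ or $y = 0$ can be checked directly.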
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9986354112625122, "perplexity": 37.38721861748229}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535919886.18/warc/CC-MAIN-20140909055331-00494-ip-10-180-136-8.ec2.internal.warc.gz"}
https://eatelier.nl/?e_component=fast-recovery-diode
A fast recovery diode works the same way a normal diode does: i.e. the forward voltage drop is about 0.7 volts for a typical silicon device. The difference is in the speed at which it ‘recovers’ when it goes from the forward to the reverse direction (i.e. from conducting to non-conducting). In other words: in non-fast-recovery diodes, the path between the anode and cathode is still somewhat conductive for a short time after the current that placed them in that state is gone. A fast recovery diode is usually used in high-frequency or fast-switching (MOSFET) applications, e.g. flyback diodes are often fast recovery types.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8770638108253479, "perplexity": 1851.8249164630047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315618.73/warc/CC-MAIN-20190820200701-20190820222701-00165.warc.gz"}
https://math.stackexchange.com/questions/2134044/help-showing-a-composition-of-functions-is-surjective
# Help showing a composition of functions is surjective

This question is related to one I asked here. I am trying, overall, to show that $|A - \{x\}| =n-1$ using a bijection given that $A$ is a finite set with $|A|\geq 1$.

For the case when $n > 1$, suppose that, since $|A|=n$, $\exists$ a bijection $f: A \to \mathbb{N}_{n}$. With some help, I came up with the mapping $f^{-1}\circ \tau \circ I: \mathbb{N}_{n-1} \to A - \{x\}$. Here, $I: \mathbb{N}_{n-1} \to \mathbb{N}_{n}$ where for each $j \in \mathbb{N}_{n-1}$, $I(j) = j$. It is not difficult to show that $I$ is injective, but it is not surjective, because $n$ has no preimage. $\tau: \mathbb{N}_{n} \to \mathbb{N}_{n}$, where $\forall k \in \mathbb{N}_{n}$, $\tau(k) = \begin{cases} n & \text{if} \, k=f(x) \\ f(x) & \text{if} \, k = n \\ k & \text{otherwise} \end{cases}$ $\tau$ is bijective, whether we restrict it to $\mathbb{N}_{n} - \{f(x)\}$ or not.

Now, I need to show that the composition $f^{-1} \circ \tau \circ I$ is a well-defined bijection in order to complete my cardinality proof. Since $I$, $\tau$, and $f^{-1}$ (since $f$ is, its inverse is) are all injective, their composition is injective. Showing surjectivity and that the map is well-defined are not so easy for me, however.

To show that $f^{-1} \circ \tau \circ I$ is surjective, I know that I need to show that every $y \in A - \{ x \}$ has a preimage in $\mathbb{N}_{n-1}$, but I'm not specifically sure how to do that. Could someone please help me with this part?

Also, to show that it's well-defined, I need to show that two different $i$'s in $\mathbb{N}_{n-1}$ do not map to the same $y \in A - \{ x \}$. Suppose they did. I.e., suppose that $i \neq j$ and $f^{-1}(\tau(I(i))) = f^{-1}(\tau(I(j)))$. Since the map is injective, it seems like this shouldn't work, but I can't use that to help me prove well-definedness, so I could use some help with this part as well. Thank you.

• Another part of well-definedness is verifying that your function actually maps into $A-\{x\}$: this doesn't come free by composition because $f^{-1}$ maps into $A$. For the first bolded question, suppose $y \in A-\{x\}$. What can you say about the value of $f(y)$? This tells you something about $\tau^{-1}(f(y))$. From there you can deduce that the latter has a preimage under $I$, which is what you need for surjectivity. Feb 10 '17 at 2:47

I will furnish a (maybe) different proof. I think for you $\mathbb{N}_n:=\{1,2,...,n\}$. For me it is. Suppose that $|A|=n$. So there is a bijection $f:\mathbb{N}_n\to A$. Since $x\in A$ and $f$ is surjective, there is $i\in\mathbb{N}_n$ such that $f(i)=x$. Define $\tau: \mathbb{N}_{n} \to \mathbb{N}_{n}$, $\forall k \in \mathbb{N}_{n}$, by $$\tau(k) = \begin{cases} k & \text{if} ~~ 1\leq k < i \\ k+1 & \text{if} ~~ i \leq k \leq n-1 \\ i & \text{if} ~~ k = n, \end{cases}$$ So $\tau$ relabels the indices so that the point $i$ (the one with $f(i)=x$) is reached last, at $k=n$. It is an easy exercise to show that $\tau$ is a bijection. So the function $f\circ \tau$ is still a bijection, as a composition of two bijections. Furthermore, $(f\circ \tau)(n)=f(i)=x$. So if you define $\bar{f}=f\circ \tau|_{\mathbb{N}_{n-1}}$ (the restriction of $f\circ \tau$ to the set $\mathbb{N}_{n-1}$) you will get your desired bijection.

• thank you! This is much better than what I had originally! – user100463 Feb 12 '17 at 2:05
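As a sanity check of this construction (not part of the original answer), one can test it on a small example in Python and confirm that the restriction really is a bijection onto $A - \{x\}$; the set and names below are arbitrary.

```python
# Check the construction on A = {'p','q','r','s'}, n = 4, for every choice of x.
def tau(k, i, n):
    if k < i:
        return k
    if k == n:
        return i
    return k + 1                                       # i <= k <= n-1

A = ['p', 'q', 'r', 's']
n = len(A)
f = {k: A[k - 1] for k in range(1, n + 1)}             # a bijection {1,...,n} -> A
for x in A:
    i = next(k for k in f if f[k] == x)                # the index with f(i) = x
    g = {k: f[tau(k, i, n)] for k in range(1, n)}      # f o tau restricted to {1,...,n-1}
    assert x not in g.values()
    assert sorted(g.values()) == sorted(a for a in A if a != x)   # bijection onto A \ {x}
print("ok")
```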
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9710947871208191, "perplexity": 72.84158359892739}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301488.71/warc/CC-MAIN-20220119185232-20220119215232-00402.warc.gz"}
http://tex.stackexchange.com/questions/191855/fillable-fields-vanish-from-pdf-form-when-included-in-another-pdf
# Fillable fields vanish from PDF form when include'd in another PDF I have created, using hyperref, a set of fillable PDF forms. When I open the PDFs in Adobe Acrobat Reader, the forms work as expected (these are simple no-frills forms without any JavaScript - one of those fill-it-in-and-print-it-out varieties). However, when I try to include these forms in a larger LaTeX document (where these will be present in the Appendices), using (pdfpages): ``````\includepdf[pages={-}]{Form1.pdf} `````` and pdflatex that larger document, I just get the static text fields in the appendix - the fillable part of the form is gone! I wish to retain the PDF form as is (which I thought was how pdfpages worked). What are my options (if I am not missing something rather simple)? Is there an alternative to pdfpages here? - Unfortunately, I don't think that is how `pdfpages` works. See the last paragraph before section two. I believe it relies on `graphicx`, which suggests to me that it is simply (to use an unfair adverb) inserting an "image" in a clever way. – jon Jul 17 '14 at 3:58 I think, that `pdfpages` includes the files as `flat`, i.e. without any further PDF feature, just like a graphics file. – Christian Hupfer Jul 17 '14 at 8:37 Any non-pdfpages solution? – user2751530 Jul 17 '14 at 19:44 @ChristianHupfer : I think you are right. I just made a quick test, and when embedding with pdfpages a pdf with links, they disappear. I was'nt aware of this lack. – Clément Jul 17 '14 at 20:06 @user2751530 : if this form was generated by LaTeX, why could'nt you include its source code in your larger document? Did you consider the `standalone` package? – Clément Jul 17 '14 at 20:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9637194871902466, "perplexity": 1647.7903025858466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398466178.24/warc/CC-MAIN-20151124205426-00333-ip-10-71-132-137.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/51878-order.html
Hint: Prove that if $f: G_1 \to G_2$ is a homomorphism then $|f(x)|$ divides $|x|$ (if $x^n = e$ then $f(x)^n = f(x^n) = e$). Now if $f: G_1 \to G_2$ is an isomorphism then both $f: G_1 \to G_2$ and $f^{-1} : G_2 \to G_1$ are homomorphisms, so $|f(x)|$ divides $|x|$ and $|x| = |f^{-1}(f(x))|$ divides $|f(x)|$, hence $|x|=|f(x)|$.
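As a concrete illustration of the statement (a made-up example, not from the original post), the following Python sketch checks that an explicit automorphism of Z/12Z preserves the order of every element.

```python
# Sketch: element orders are preserved under an isomorphism.
# f(k) = 5*k mod 12 is an automorphism of Z/12Z because gcd(5, 12) = 1.
n = 12
f = lambda k: (5 * k) % n

def order(k):
    # order of k in the additive group Z/nZ
    m, s = 1, k % n
    while s != 0:
        s = (s + k) % n
        m += 1
    return m

assert sorted(f(k) for k in range(n)) == list(range(n))  # f is a bijection
for k in range(n):
    assert order(k) == order(f(k))
print("orders preserved for every element of Z/12Z")
```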
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987455010414124, "perplexity": 28.81346658454884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718284.75/warc/CC-MAIN-20161020183838-00444-ip-10-171-6-4.ec2.internal.warc.gz"}
http://slideplayer.com/slide/1671156/
# Goal: To understand momentum

## Presentation on theme: "Goal: To understand momentum"— Presentation transcript:

Goal: To understand momentum. Objectives: to learn what momentum is, to learn how to calculate momentum in 2 dimensions, to understand how momentum is changed, to understand the conservation of momentum, and to learn why momentum is useful to understand. Tomorrow: to learn about applications of the conservation of momentum.

What is momentum? In reality momentum is quite simply a measure of your ability to create change. Momentum = p = mv. Let's do a quick sample: 1) A car with mass of 500 kg moves at a velocity of 20 m/s. What is the car's momentum?

Another example: Two cars are headed towards one another. The first car has 700 kg of mass and moves at a velocity of 20 m/s North. The 2nd car has 1400 kg of mass and moves at a velocity of 10 m/s South. What is the combined momentum of the cars (yes, momentum has direction)?

Momentum in 2 dimensions: Each dimension has momentum, so you have to find the total momentum for each dimension separately. Then at the end you can get a magnitude if you want, but usually it is more useful to keep them separate, much like you keep a checking account separate from a savings account.

Straightforward 2D question: A car is heading North with a mass of 1000 kg and a velocity of 12 m/s. A 2nd car is heading East with a mass of 750 kg and a velocity of 20 m/s. Which car has a greater magnitude of momentum? What is the combined magnitude of momentum for both cars?

Changing momentum: How do you change momentum? You use what is called an "impulse". Impulse = change in momentum. Impulse = mass * change in velocity. Impulse = F * t. Note that F = ma, so Impulse = m * (a * t). What does acceleration * time equal?

Example: A car runs into a mailbox. The mass of the mailbox is 10 kg and the mass of the car is 800 kg. If the car imparts a 2000 N force to the mailbox for 0.4 seconds, find: A) The impulse on the mailbox. B) The new velocity of the mailbox (set impulse equal to mass * change in velocity). C) The impulse the mailbox imparts on the car (what, you have forgotten about Newton's 3rd law already?). D) How much does the car's momentum change? E) The net change in momentum (i.e. if you add the changes in momentum of the car and mailbox, what do you get)?

Conservation of momentum! Momentum is almost always conserved in a collision. In fact it is conserved for each dimension. Total p before = total p after. Quick question – will kinetic energy be conserved?

Energy? Sometimes kinetic energy is also conserved. Collisions that conserve kinetic energy are called elastic collisions. Collisions where energy is not conserved are called inelastic collisions.

"Oooh, oooh, fender bender" (the Pips, from that car commercial). In many collisions energy is transferred: to sound energy, to heat energy, and into crumpling the car. These collisions are always inelastic collisions. So, if you get hit by a car, you want it to be an elastic collision! You will fly faster and further, but the initial impact won't use energy to bend and break things.

Rear end crash: A speeding car of mass 800 kg attempting to elude the police crashes into a 600 kg car sitting parked at the intersection. Ignoring brakes and friction, if the initial velocity of the speeding car is 50 m/s forward and the final velocity of the speeding car is 10 m/s forward, then what will the final velocity of the other car be?
There are 2 ways to do this problem.

Head-on collision: Car 1: 25 m/s East and a mass of 800 kg. Car 2: 30 m/s West and a mass of 900 kg. A) What is the net momentum of the two cars combined before the collision? C) After the crash car 1 moves West at a velocity of 5 m/s. What will the final velocity of car 2 be? (Hint: total momentum is conserved.)

T-bone! Car 1: mass of 650 kg and headed North at 10 m/s. Car 2: mass of 750 kg and headed West at 5 m/s. Car 1 T-bones car 2 and car 1 comes to a complete stop. A) Before the crash, what are the momenta in the north and west directions? B) After the crash, how much momentum will car 1 have? C) After the crash, what are the north and west velocities of car 2 (hint: will the west velocity change?) D) What is the magnitude of the final velocity for car 2?

If time: Ball off a wall. You bounce a 0.15 kg ball off of the wall. The ball hits the wall at 20 m/s forward and when it bounces it returns (backward) at 80% of the SPEED it had when it hit the wall. A) What is the change in velocity for the ball (remember direction)? B) What is the change in momentum? C) If the ball is in contact with the wall for 0.6 seconds, what is the average force that the wall imparts to the ball? D) What is the acceleration the wall gives the ball?

Conclusion: Momentum = mass * velocity. Momentum is conserved! Momentum is conserved in every direction! If you run into something – or it runs into you – at high velocity – don't bounce!
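As a quick numerical illustration of the conservation-of-momentum bookkeeping used throughout these slides, here is a short Python sketch that works the rear-end crash problem from above (the numbers are the ones given in the slide; "forward" is taken as the positive direction).

```python
# Rear-end crash: an 800 kg car at 50 m/s hits a parked 600 kg car.
# Momentum conservation: m1*v1i + m2*v2i = m1*v1f + m2*v2f
m1, v1i, v1f = 800.0, 50.0, 10.0   # speeding car (kg, m/s, m/s)
m2, v2i = 600.0, 0.0               # parked car

p_before = m1 * v1i + m2 * v2i
v2f = (p_before - m1 * v1f) / m2   # solve for the parked car's final velocity

print(f"total momentum before: {p_before:.0f} kg*m/s")
print(f"final velocity of the parked car: {v2f:.1f} m/s forward")
# -> about 53.3 m/s; kinetic energy is NOT conserved here (inelastic collision).
```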
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8291680812835693, "perplexity": 668.4463625919242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689975.36/warc/CC-MAIN-20170924100541-20170924120541-00517.warc.gz"}
https://jp.maplesoft.com/support/help/view.aspx?path=StudyGuides/MultivariateCalculus/Chapter4/Examples/Section4-10/Example4-10-6&L=J
Example 4-10-6 - Maple Help Chapter 4: Partial Differentiation Section 4.10: Optimization on Closed Domains Example 4.10.6 Find the extreme values of the function  on the domain $R$ consisting of the interior and boundary of the ellipse whose equation is $2{x}^{2}-4xy+3{y}^{2}+4x-5y-1=0$.
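The objective function did not survive extraction above, so it is left blank. Purely to illustrate the usual recipe for extrema on a closed elliptical region (compare interior critical points against boundary values), here is a small numerical sketch in which a made-up objective f(x, y) = x^2 + y^2 stands in for the missing one; only the constraint is taken from the example.

```python
# Sketch only: the true objective from Example 4.10.6 is missing above,
# so f below is a hypothetical stand-in.
import numpy as np
from scipy.optimize import minimize

f = lambda p: p[0]**2 + p[1]**2                                        # hypothetical objective
g = lambda p: 2*p[0]**2 - 4*p[0]*p[1] + 3*p[1]**2 + 4*p[0] - 5*p[1] - 1

# The closed region R is {g <= 0}; SLSQP 'ineq' constraints require fun >= 0.
cons = [{"type": "ineq", "fun": lambda p: -g(p)}]
x0 = np.array([-0.5, 0.5])          # the center of the ellipse, a feasible start

lo = minimize(f, x0, method="SLSQP", constraints=cons)
hi = minimize(lambda p: -f(p), x0, method="SLSQP", constraints=cons)

print("minimum of f on R:", lo.fun, "at", lo.x)
print("maximum of f on R:", -hi.fun, "at", hi.x)
```

With the real objective from the example restored, only the lambda for f would change; the constraint and the workflow stay the same.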
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 147, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8783198595046997, "perplexity": 1333.84574244279}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711150.61/warc/CC-MAIN-20221207053157-20221207083157-00018.warc.gz"}
http://mathhelpforum.com/calculus/7801-convergent-sequence-problem.html
# Thread: A convergent sequence problem

1. ## A convergent sequence problem

Q: Let the sequence {A_n} converge to a with a > 0. Prove there exists an N in the set of natural numbers such that n >= N implies A_n > 0. Thank you!

2. The convergence of the sequence (A(n)) to a tells you that for all e > 0, there exists an N such that for all n > N, |A(n)-a| < e. This last statement is equivalent to saying that a-e < A(n) < a+e, but you know that a > 0. Now just take e > 0 small enough that a > e, which implies a-e > 0 and thus A(n) > 0.

3. Using what TD has shown you, I would suggest $e = \frac{a}{2}.$

4. I was hoping tttcomrader would figure that out himself
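For completeness (this step is implicit in the thread, spelled out here): with $e = \frac{a}{2}$, convergence gives an $N$ such that for all $n \ge N$,
$$|A_n - a| < \frac{a}{2} \implies A_n > a - \frac{a}{2} = \frac{a}{2} > 0,$$
which is exactly the required conclusion.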
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9809691309928894, "perplexity": 882.5998288986525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00289-ip-10-171-10-108.ec2.internal.warc.gz"}
http://forge.cbp.ens-lyon.fr/redmine/projects/txm/issues/gantt?month=6&months=6&query_id=34&year=2019&zoom=3
## Known bugs

Support #873: Under Windows, BFM import module fails on corpora composed of densely XML tagged texts
Support #874: Under Windows, CQL queries using '%d' modifier fail and kill CQP search engine
Support #876: Under Mac OS X 10.9 Mavericks, R Graphics are not displayed, users need to install XQuartz
Bug #947: Under Windows, the RConsole does not work
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9182217717170715, "perplexity": 1494.4646621821357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879362.3/warc/CC-MAIN-20201022082653-20201022112653-00455.warc.gz"}
https://www.physicsforums.com/threads/good-problem.39178/
# Good Problem 1. Aug 12, 2004 ### ambuj123 Here is this problem if someone could help it will be highly appreciated. 8sqrt( ((l-a)lmd)/(eAE^2(K-1))) #### Attached Files: • ###### problem.ZIP File size: 6.7 KB Views: 86 2. Aug 12, 2004 ### TenaliRaman What have u tried so far? Usual approach is to give a little push and show that F directly proportional to displacement -- AI 3. Aug 12, 2004 ### ambuj123 Well i Tried to do like this but didnt work suppose capacitor has length x of dielectric filled then what is potential energy. then Force acting is = -dU/dx but it gives me constant force which acts on the dielectric but i should have got variable force dependnt on x to show it executes SHM. THe other way itried like this suppose dielectric moves inside by a distance dx and we apply an external force F so that it moves inside without acceleration the workdone by external force be -Fdx and done by battery is dW= dCV^2 the total work done is equal to change in potential energy of capacitor but this also give constant force acting on dielectric due to the electric field. HELP i have to get it done by monday. Last edited: Aug 13, 2004 4. Aug 13, 2004 ### Locrian I don't know how to do this one either. Dielectrics sounded far too practical to me and I slept through that entire part of the course. The dielectric block is released from rest, so there must be some force on it. I know that there is +Q on one side of the capacitor and -Q on the other. In the dielectric, there is -q on one side and +q on the other, but only in the space that is between the plates of the capacitor. This would mean there would be a net force to the left on the dielectric. However, this wouldn't explain a simple harmonic oscilator, since the amount of force would be inversley proportional to the distance the dielectric is away from the center. If this is the correct way to get started please let me know and I'll post some calculations as well. 5. Aug 13, 2004 ### ambuj123 Well actually i thought about it but i could'nt get it. have to think some alternative. 6. Aug 13, 2004 ### TenaliRaman I am not prolly among the ones that are good at physics around here. I will just add my two cents tho, Note that when u are pushing a di-electric material, is there a change in capacitance? (Note that what we have is capacitances in parallel) .. More directly, if we say that a material is pushed to x distance inside the plates, what is the capacitance of the system? How does it affect the force? -- AI (P.S -> if anyone finds any errors to what i said , they are welcomed to point it out and correct it) Last edited: Aug 13, 2004 7. Aug 13, 2004 ### Locrian I'm sure there is a change in capacitance, but don't see how that reflects on the force. Educate me. 8. Aug 14, 2004 ### TenaliRaman The key here is the battery... Calculate the work done by the battery (which will be [deltaC*V^2]) Calculate the field energy of the capacitor If my physics is still in good shape, Work done by battery must be > field energy of the capacitor The rest of the energy is spent in resisting the force pushing the block in. See if u can carry on from here ..... -- AI P.S-><same as my earlier post> 9. Aug 14, 2004 ### ambuj123 Well this is what i have done and it goves constant force not force dependent on distance which is required for SHM. You can check it. 10. Aug 15, 2004 ### Gokul43201 Staff Emeritus Yes ambuj, I think you're right, U seems to go linearly in x, which is not what we want. 
In fact, I think the force opposes you - tries to push the slab out of the capacitor. Anyway, I think the difficulty is because of the poor wording of the problem. I think what really happens is that the capacitor is charged to a potential of E, in the position shown, and then the battery is removed. If the slab is then let go, it will oscillate harmonically. The trick is that V also changes so that Q = CV remains constant. Clearly then there will be an x^2 dependence coming from the V^2 term. Really, if you find U(x) = (1/2)C(x)V(x)^2 = (1/2)kx^2, that gives you k and hence, T. You don't even have to find F = -dU/dx = kx...it's just redundant. 11. Aug 15, 2004 ### ehild The task in the original formulation of the problem is "show that the slab will execute periodic motion and find its time period". And it does not say that the battery is removed. A periodic motion is not necessarily harmonic. Consider a bouncing ball for example. I think the slab will move with a constant acceleration till it reaches the end of the capacitor. After that, it will decelerate at the same rate till it stops at a position, which is mirror image of the original, (if that little push at the beginning meant infinitesimal starting speed.) If it turns back the motion will be the same as before only at opposite direction, and the slab returns its original position. ehild 12. Aug 15, 2004 ### Gokul43201 Staff Emeritus No, if the battery remains in place the force pushes the slab out of the plates if you displace it slightly - there is no question of periodic motion. Remember, U = (1/2)CV^2 is maximum with the slab in (as C increases) and minumum with the slab out. So clearly, the slab will get pushed out. I know the question does not say that the battery is removed, and that's why it is WRONG ! But clearly there could have been an error of omission in the formulation of the question. I would go ahead and solve it this way if only I knew what ambuj means by 8sqrt( ((l-a)lmd)/(eAE^2(K-1))). Ambuj, can you please confirm what these variables refer to (the answer doesn't seem to be dimensionally correct) - and in any case, it is surely suggestive of NON-HARMONIC behavior. So, it is quite perplexing ! 13. Aug 16, 2004 ### ehild Well, the energy of a capacitor at voltage V is higher with a dielectric than without it. And also, the energy is higher if the capacitor is charged compared to the uncharged state. And in spite of that, the capacitor gets charged when connected to a battery instead of refusing the charges as they increase its energy... Why? :) The original problem shows a capacitor connected to a battery, and no indication of the battery disconnected. ehild 14. Aug 16, 2004 ### Gokul43201 Staff Emeritus Okay, so YOU prove that the slab will undergo periodic motion if released, and calculate what the period will be. As I explained before, I claim there will be no periodic motion because the slab will slide out of the plates and fall on the floor and live there happily ever after. Please tell me how this is wrong. Last edited: Aug 16, 2004 15. Aug 16, 2004 ### ehild Prove that your statement is true....:) The energy stored in the capacitor is higher if it is filled with a dielectric than without it. This does not mean that the energy of the system battery-capacitor is also higher with the dielectric in. I will prove that there is an inward force acting on the slab if it was pushed to move inward at the beginning. And it will continue to move inward with constant acceleration till it reaches the end of the capacitor. 
Problem: A plane capacitor is connected to a battery that ensures constant voltage V across its plates.There are no losses, no resistance, no friction. The plates of the capacitor are of length l, width w, and they are d distance apart. The area of one plate is A = l*w. The electric field intensity inside the capacitor is E=V/d. A dielectric slab of dimensions l x w x d and relative dielectric constant $$\kappa$$ is placed so that its front face is inside the capacitor, at distance "a" from the edge. It is pushed a little inward. What happens? Consider a small displacement $$\delta x$$ of the slab inward. The intensity of the electric field, just as the voltage remains unchanged. The capacitance will increase, and so will the energy of the capacitor. At the same time, some free surface charge will disappear because of the dipole-chains built up in the dielectric. To maintain constant voltage, the battery will supply extra charges, but it has to exert work to do this. Moreover, the KE of the slab can change. Work of the battery= change of the energy of the capacitor + change of the KE of the slab. $$\delta W_B=\delta W_C + \delta KE$$. The work of the battery is: $$\delta W_B=\delta Q*V$$ The change of the surface charge on the planes of the capacitor is equal to the change of the electric displacement, D, multiplied by the increament of the surface. When the dielectric replaces the vacuum and the electric field, E, stays the same D changes by $$(\kappa - 1)\epsilon_0*E$$. So $$\delta W_B=\delta Q*V=(\kappa-1)\epsilon_0*E^2*w*d*\delta x$$ The capacitance changes by $$\delta C=(\kappa-1)\epsilon_0*\delta x*w/d$$, and the energy stored in the capacitor would change by $$\delta W_C=0.5*\delta C*V^2=0.5*(\kappa-1)\epsilon_0*\delta x*w*d*E^2$$. From the condion of work-energy balance we get: $$(\kappa-1)\epsilon_0*E^2*w*d*\delta x =0.5*(\kappa-1)\epsilon_0*E^2*w*d*\delta x+\delta KE$$ Rearranging the equation: $$\delta KE=0.5*(\kappa-1)\epsilon_0*E^2*w*d*\delta x$$. The time rate of change for the kinetic energy is $$dKE/dt=m*v*\dot v =0.5*(\kappa-1)\epsilon_0*E^2*w*d*v$$ According to the formulation of the problem, the slab is given a little push inward. So v is small at the beginning, but positive. The diection of the acceleration is also positive. The speed will increase, there is an inward force acting on the slab. This little push is crucial, as we could not say if the slab starts to move or stays in its original position using the argument above. But there is really an inward force on it, coming from the "fringe effect" at the edges of the capacitor plates. The field near the edges is inhomogeneous and produces an inward force onto the slab. But to prove this is beyond the "college level". Anyway, the slab will not "slide out of the plates and fall on the floor and live there happily ever after move out of the capacitor". If it moves out, we have $$1-\kappa$$ in the formula for dKE/dt, so it would be negative. The slab would slow down, that means an inward force again. The slab was assumed moving inward at the beginning. We have deduced that it will continue to move inward with constant aceleration , $$\dot v=0.5*(\kappa-1)\epsilon_0*E^2*w*d/m$$ till its front face reaches the edge of the capacitor. From there on, the situation is reversed, new free charges appear on the plates as the slab moves outward, and the capacitor feeds back charges to the battery. Its energy decreases but to feed back the charges, additional energy is needed on the account of the KE of the slab. 
At the end the slab will stop. Now it is the fringe effect that will start the slab to move inward again. Now, the time. The displacement is l-a, the time needed is $$t=\sqrt{\frac{2*(l-a)}{\dot v}} =2*\sqrt{\frac{(l-a)*m}{(\kappa-1)*\epsilon_0*E^2*w*d}}$$. The time period is four times longer: $$T=4*t=8*\sqrt{\frac{(l-a)*m}{(\kappa-1)*\epsilon_0*E^2*w*d}}$$ The solution quoted by ambuj was 8sqrt( ((l-a)lmd)/(eAE^2(K-1))) which is very similar to my result, but dimensionally not correct. ehild

16. Aug 16, 2004

### Gokul43201

Staff Emeritus
ehild, you are absolutely right. I was wrong, and foolish. I take back what I said.

17. Aug 17, 2004

### ehild

It is all right Gokul.

ehild

18. Aug 18, 2004

### TenaliRaman

wow! Good Job ehild!
(Note: I am really sorry at misleading the peeps when i said "the battery does a resistive act" ... should have seen that coming !! :( )

Anyways ehild, i am not going to push u through the edges on this one since u specifically mentioned that "proving the *fringe_effect* was beyond college level" and i am not particularly the physics_guy. Though i would like to know one thing that bothers me ..... Any system i believe tries to achieve a low field potential, so how come this bizarre event of the capacitor actually pulling the dielectric block in ?? (assuming that my knowledge base, *that the field potential increases as the block moves in*, is correct?? or is it that we relate the field potential to the actual potential difference across the plates, which is constant here, and hence the effect is not actually violating any rules as such) (Pardon me if this doubt is hideous .. i lose my mind in the gutters of Diagon Alley at times)

-- AI

19. Aug 18, 2004

### ehild

Hi AI,

That event -I mean the capacitor pulling in the dielectric- is not that bizarre. You see a similar effect with a coil connected to a battery and a piece of iron rod. The coil will suck the piece of iron in, just like the capacitor does with the dielectric slab. And the energy density inside the coil 0.5*B*H will increase if the magnetic permeability increases and the current stays the same. Try and see yourself.

As for your statement that "the field potential increases as the block moves in" - I do not really understand what you mean by "field potential". There is a potential at a point in an electric field. There is a potential difference between the two plates of the capacitor. But it does not tend to minimize itself. It is a positive charge which tends to occupy a place with lower potential. The field inside the capacitor has got an energy density, and this multiplied by the volume is the same as the "energy of the capacitor" 0.5*V^2*C (well, if we don't bother ourselves with the field outside the capacitor. Maybe we do it wrong.) Yes, this would increase if the slab moves in, but the capacitor is connected to the battery, and it maintains a constant potential difference across the capacitor. A system tends to occupy the lowest energy if it is let alone, but this capacitor is not "let alone".

I am not an expert on electric field calculations and Thermodynamics and so on, but I try to imagine that I am inside that battery and watch what happens. There is an electrolyte and two electrodes, say a zinc and a carbon one, and the zinc ions would like to go into the electrolyte, they like to be there, because of some crazy chemical desire, I never understood Chemistry, but they can not go any more, as the electrons left behind the electrode are pulling them back...
then the electrodes of the battery are connected to the capacitor. The electrons happily run there to occupy the empty space, more zinc ions can dissolve and at the end the process stops again when the capacitor is charged to the voltage of the battery. Poor zinc ions, left on the electrode, should stay there longing for the cool electrolyte in vain. But then a dielectric slab is pushed between the plates of the capacitor. The electric field would polarize its molecules or atoms, and align them into dipole chains, and the ends of the chains neutralize some charges on the capacitor plates. The voltages would fall, but new charges rush over from the battery, so the result is that the voltage stays constant. This goes on till the dielectric totally fills the place between the capacitor plates. It looks as if the battery would prefer the dielectric in, as it has more place for the charges and more zinc can dissolve. A battery likes to spread out its charges, that means it loses more energy then the capacitor gaines. (I know this was not a physical explanation, so Gokul, if you read this, please do not force me again to sweat out a more exact derivation. :) It is not so easy for me.) ehild 20. Aug 18, 2004 ### Locrian Thanks to everyone who worked on a solution for this one, I really wanted to see how it was done. Similar Discussions: Good Problem
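To make ehild's closed-form result concrete, here is a small Python sketch that evaluates the constant acceleration and the period for one arbitrary set of numbers. The values below are made up purely for illustration; they are not from the original problem.

```python
# Sketch: evaluate T = 8*sqrt((l-a)*m / ((kappa-1)*eps0*E**2*w*d)) for made-up
# numbers and cross-check it against the constant-acceleration kinematics
# used in the derivation above.
import math

eps0 = 8.854e-12      # F/m
kappa = 3.0           # relative dielectric constant (assumed)
E = 1.0e6             # V/m, field between the plates (assumed)
w, d = 0.10, 1.0e-3   # plate width and separation in m (assumed)
l, a = 0.20, 0.05     # plate length and initial front-face offset in m (assumed)
m = 0.05              # slab mass in kg (assumed)

accel = 0.5 * (kappa - 1) * eps0 * E**2 * w * d / m   # constant acceleration
t_quarter = math.sqrt(2 * (l - a) / accel)            # time to traverse l - a
T_formula = 8 * math.sqrt((l - a) * m / ((kappa - 1) * eps0 * E**2 * w * d))

print(f"acceleration      : {accel:.4e} m/s^2")
print(f"4 * one-way time  : {4 * t_quarter:.3f} s")
print(f"T from formula    : {T_formula:.3f} s")     # the two agree
```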
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8205897808074951, "perplexity": 601.2783533067817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426169.17/warc/CC-MAIN-20170726142211-20170726162211-00626.warc.gz"}
https://plainmath.net/7996/evaluate-the-integral-int-x-sqrt-5x-1-dx
# Evaluate the integral $\displaystyle\int x\sqrt{5x-1}\,dx$

The key here is to do a change of variables. Specifically, any time you see a square root, you either want to turn what is under it into a perfect square or just set u equal to whatever is under it. In this case, we'll do the latter.

Let $u = 5x-1$. Then, because $du = \frac{du}{dx}\,dx$, we see that $du = 5\,dx$, which is equivalent to $dx = \frac{1}{5}\,du$. Substituting these into the integral, we get

$$\int x\sqrt{u}\cdot\frac{1}{5}\,du.$$

This is part of what we want, but it still has that $x$ there, and we want to convert entirely to $u$'s. Well, remember that we set $u = 5x-1$. Let's solve that for $x$ in terms of $u$:

$$5x = u+1 \Rightarrow x = \frac{u+1}{5}.$$

Substituting this in, we get

$$\int\frac{u+1}{5}\sqrt{u}\cdot\frac{1}{5}\,du = \frac{1}{25}\int(u+1)\sqrt{u}\,du = \frac{1}{25}\int\left(u^{3/2}+u^{1/2}\right)du.$$

Integrating term by term and substituting $u = 5x-1$ back gives

$$\frac{1}{25}\left(\frac{2}{5}u^{5/2}+\frac{2}{3}u^{3/2}\right)+C = \frac{2}{125}(5x-1)^{5/2}+\frac{2}{75}(5x-1)^{3/2}+C.$$
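As a quick cross-check (not part of the original answer), differentiating the claimed antiderivative in SymPy recovers the integrand.

```python
# Verify the antiderivative of x*sqrt(5x - 1) found above.
import sympy as sp

x = sp.symbols("x")
F = sp.Rational(2, 125) * (5*x - 1)**sp.Rational(5, 2) \
    + sp.Rational(2, 75) * (5*x - 1)**sp.Rational(3, 2)

# Spot-check F'(x) - x*sqrt(5x - 1) at a point inside the domain 5x - 1 > 0.
err = (sp.diff(F, x) - x * sp.sqrt(5*x - 1)).subs(x, 1).evalf()
assert abs(err) < 1e-12

print(sp.integrate(x * sp.sqrt(5*x - 1), x))  # SymPy's own antiderivative
```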
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9965964555740356, "perplexity": 719.5307683124047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610841.7/warc/CC-MAIN-20210613192529-20210613222529-00355.warc.gz"}
https://www.coursehero.com/file/8971390/3-J7061B303-00145/
Physics Book Solutions

[...] (1.50 × 10^-3 m) = 4.5 × 10^3 V
(d) Energy = (1/2)QV = (1/2)(0.0180 × 10^-6 C)(200 V) = 1.80 × 10^-6 J = 1.80 μJ
EVALUATE: We could also calculate the stored energy as Q^2/(2C) = (0.0180 × 10^-6 C)^2 / (2(9.00 × 10^-11 F)) = 1.80 μJ.

24.27. IDENTIFY: The energy stored in a charged capacitor is (1/2)CV^2.
SET UP: 1 μF = 10^-6 F
EXECUTE: (1/2)CV^2 = (1/2)(450 × 10^-6 F)(295 V)^2 = 19.6 J
EVALUATE: Thermal energy is generated in the wire at the rate I^2 R, where I is the current in the wire. When the capacitor discharges there is a flow of charge that corresponds to current in the wire.

24.28. IDENTIFY: After the two capacitors are connected they must have equal potential difference, and their combined charge must add up to the original charge.
SET UP: C = Q/V. The stored energy is U = Q^2/(2C) = (1/2)CV^2.
EXECUTE: (a) Q = C V_0.
(b) V = Q_1/C_1 = Q_2/C_2 and also Q_1 + Q_2 = Q = C V_0. With C_1 = C and C_2 = C/2 this gives Q_2 = Q_1/2, so Q_1 = (2/3)Q, Q_2 = (1/3)Q, and V = Q_1/C = 2Q/(3C) = (2/3)V_0.
(c) U = Q_1^2/(2C_1) + Q_2^2/(2C_2) = (1/(2C))[((2/3)Q)^2 + 2((1/3)Q)^2] = Q^2/(3C) = (1/3)C V_0^2
(d) The original U was (1/2)C V_0^2, so ΔU = -(1/6)C V_0^2.
(e) Thermal energy of capacitor, wires, etc., and electromagnetic radiation.
EVALUATE: The original charge of the charged capacitor must distribute between the two capacitors to make the potential the same across each capacitor. The voltage V for each after they are connected is less than the original voltage V_0 of the charged capacitor.

24.29. IDENTIFY and SET UP: Combine Eqs. (24.9) and (24.2) to write the stored energy in terms of the separation between the plates.
EXECUTE: (a) U = Q^2/(2C); C = ε_0 A/x so U = x Q^2/(2 ε_0 A)
(b) Increasing the separation from x to x + dx gives U = (x + dx) Q^2/(2 ε_0 A), so dU = [Q^2/(2 ε_0 A)] dx
(c) F = Q^2/(2 ε_0 A)
(d) EVALUATE: The capacitor plates and the field between the plates are shown in Figure 24.29a. E = σ/ε_0 = Q/(ε_0 A). F = [...]
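The charge-sharing algebra in 24.28 is easy to verify symbolically. The sketch below (an added check, not part of the original solution set) re-derives parts (b) to (d) for arbitrary C and V_0, with total charge conserved as the IDENTIFY step states and the second capacitor taken as C/2 as in the reconstructed solution.

```python
# Check 24.28: capacitor C charged to V0, then connected to an uncharged
# capacitor C/2. Total charge is conserved and the final voltages are equal.
import sympy as sp

C, V0 = sp.symbols("C V_0", positive=True)
Q = C * V0                              # (a) original charge
Q1 = sp.symbols("Q_1", positive=True)
Q2 = Q - Q1                             # charge conservation
# equal final voltages: Q1/C = Q2/(C/2)
Q1_val = sp.solve(sp.Eq(Q1 / C, Q2 / (C / 2)), Q1)[0]

V = sp.simplify(Q1_val / C)                                       # (b) final voltage
U_after = sp.simplify(Q1_val**2 / (2 * C) + (Q - Q1_val)**2 / (2 * (C / 2)))
dU = sp.simplify(U_after - sp.Rational(1, 2) * C * V0**2)

print("Q1 =", Q1_val)    # 2*C*V_0/3
print("V  =", V)         # 2*V_0/3
print("U  =", U_after)   # C*V_0**2/3
print("dU =", dU)        # -C*V_0**2/6
```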
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8617129921913147, "perplexity": 3509.1031929206843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542002.53/warc/CC-MAIN-20161202170902-00005-ip-10-31-129-80.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/232428/yonedas-lemma-and-limits?answertab=active
# Yoneda's Lemma and Limits The context of this question is section 12 of the chapter on Categories of the Stacks Project. Let $M:I \rightarrow C$ be a diagram and suppose that $\lim_i M_i$ exists. Then $Mor_C(W,\lim_i M_i) = \lim_i Mor_C(W,M_i)$. It is mentioned that "By the Yoneda lemma this formula completely determines the limit". What does the above phrase mean? We supposed that $\lim_i M_i$ exists and it is also unique up to isomorphism by its universal property. Where does Yoneda's lemma come into play? - Yoneda's lemma tells you that if an object of $C$ represents a functor $F:C^{opp}\rightarrow\mathbf{Set}$, then the object (more correctly the pair consisting of the object and a natural isomorphism between its $\mathrm{Hom}$ functor $h_X$ and $F$) is unique up to unique isomorphism. In pretty much every situation I can think of, definitions given in terms of some universal mapping property (like a limit) can also be phrased in terms of representability of a functor. The formula $\mathrm{Hom}_C(W,\lim_iM_i)=\lim_i\mathrm{Hom}_C(W,M_i)$ means precisely that for all objects $W$, the arrow $\mathrm{Hom}_C(W,\lim_iM_i)\rightarrow\lim_i\mathrm{Hom}_C(W,M_i)$ induced by the canonical maps $\lim_iM_i\rightarrow M_j$ together with the universal mapping property of the limit in the category of sets, is bijective. It can also be interpreted as saying that the object $\lim_iM_i$ represents the functor $F:W\rightsquigarrow\lim_i\mathrm{Hom}_C(W,M_i)$ on $C^{opp}$. When interpreted this way, we see that, by Yoneda's lemma, the formula determines $\lim_iM_i$ (together with the relevant data, namely the maps $\lim_iM_i\rightarrow M_j$ which can be recovered from the natural bijections between $\mathrm{Hom}$ sets) up to canonical isomorphism. In this case the data of the natural isomorphism between $h_{\lim_iM_i}$ and $F$ is given by the morphisms $\lim_iM_i\rightarrow M_j$. EDIT: This is an answer to the question asked by the OP in the comments. Yoneda says that the functor $X\rightsquigarrow h_X$ from $C$ to $\mathrm{Fun}(C^{opp},\mathbf{Set})$ is fully faithful. So, in particular, if $M=\lim_iM_i$, then for any $j\in I$, the map $\mathrm{Hom}_C(M,M_j)\rightarrow\mathrm{Hom}_{\mathrm{Fun}(C^{opp},\mathbf{Set})}(h_M,h_{M_j})$ is bijective. If $M$ represents the functor $F$ defined above, then by composing the natural isomorphism $h_M\cong F$ with the natural transformation from $F=\lim_i\mathrm{Hom}_C(-,M_i)$ to $\mathrm{Hom}_C(-,M_j)$ (which exists by definition of the limit in the category of set valued functors on $C^{opp}$), we get a morphism $h_M\rightarrow h_{M_j}$. By Yoneda this correponds to a unique morphism $M\rightarrow M_j$. This is the projection map in the definition of the universal property for $M$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9929910898208618, "perplexity": 55.03045380815884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776434475.94/warc/CC-MAIN-20140707234034-00054-ip-10-180-212-248.ec2.internal.warc.gz"}
http://www.gutenberg.cc/articles/eng/Condensed_matter
# Condensed matter

Condensed matter physics is a branch of physics that deals with the physical properties of condensed phases of matter.[1] Condensed matter physicists seek to understand the behavior of these phases by using physical laws. In particular, these include the laws of quantum mechanics, electromagnetism and statistical mechanics. The most familiar condensed phases are solids and liquids, while more exotic condensed phases include the superconducting phase exhibited by certain materials at low temperature, the ferromagnetic and antiferromagnetic phases of spins on atomic lattices, and the Bose–Einstein condensate found in cold atomic systems. The study of condensed matter physics involves measuring various material properties via experimental probes along with using techniques of theoretical physics to develop mathematical models that help in understanding physical behavior.

The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists identify themselves as condensed matter physicists,[2] and the Division of Condensed Matter Physics (DCMP) is the largest division of the American Physical Society.[3] The field overlaps with chemistry, materials science, and nanotechnology, and relates closely to atomic physics and biophysics. Theoretical condensed matter physics shares important concepts and techniques with theoretical particle and nuclear physics.[4]

A variety of topics in physics such as crystallography, metallurgy, elasticity, and magnetism were treated as distinct areas until the 1940s, when they were grouped together as solid state physics. Around the 1960s, the study of physical properties of liquids was added to this list, and it came to be known as condensed matter physics.[5] According to physicist Phil Anderson, the term was coined by himself and Volker Heine when they changed the name of their group at the Cavendish Laboratories, Cambridge, from "Solid state theory" to "Theory of Condensed Matter",[6] as they felt it did not exclude their interests in the study of liquids, nuclear matter and so on.[7] The Bell Labs (then known as the Bell Telephone Laboratories) was one of the first institutes to conduct a research program in condensed matter physics.[5]

References to the "condensed" state can be traced to earlier sources. For example, in the introduction to his 1947 "Kinetic Theory of Liquids" book,[8] Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies. As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'."

## History

### Classical physics

One of the first studies of condensed states of matter was by English chemist Humphry Davy, when he observed that of the 40 chemical elements known at the time, 26 had metallic properties such as lustre, ductility and high electrical and thermal conductivity.[9] This indicated that the atoms in Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure.
Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen could be liquified under the right conditions and would then behave as metals.[10][notes 1] In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquified chlorine and went on to liquify all known gaseous elements, with the exception of nitrogen, hydrogen and oxygen.[9] Shortly after, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases,[12] and Dutch physicist Johannes van der Waals supplied the theoretical framework which allowed the prediction of critical behavior based on measurements at much higher temperatures.[13] By 1908, James Dewar and H. Kamerlingh Onnes were successfully able to liquify hydrogen and then newly discovered helium, respectively.[9] Paul Drude proposed the first theoretical model for a classical electron moving through a metallic solid.[4] Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law.[14][15] However, despite the success of Drude's free electron model, it had one notable problem, in that it was unable to correctly explain the electronic contribution to the specific heat of metals, as well as the temperature dependence of resistivity at low temperatures.[16] In 1911, just three years after helium was first liquified, Onnes working at University of Leiden discovered superconductivity in mercury, when he observed the electrical resistivity in mercury to vanish when the temperature was lowered below a certain value.[17] The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades.[18] Albert Einstein, in 1922, said regarding contemporary theories of superconductivity that “with our far-reaching ignorance of the quantum mechanics of composite systems we are very far from being able to compose a theory out of these vague ideas”.[19] Drude's classical model was augmented by Felix Bloch, Arnold Sommerfeld, and independently by Wolfgang Pauli, who used quantum mechanics to describe the motion of a quantum electron in a periodic lattice. 
In particular, Sommerfeld's theory accounted for the Fermi–Dirac statistics satisfied by electrons and was better able to explain the heat capacity and resistivity.[16] The structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals, and concluded that crystals get their structure from periodic lattices of atoms.[20] The mathematics of crystal structures developed by Auguste Bravais, Yevgraf Fyodorov and others was used to classify crystals by their symmetry group, and tables of crystal structures were the basis for the series International Tables of Crystallography, first published in 1935.[21] Band structure calculations was first used in 1930 to predict the properties of new materials, and in 1947 John Bardeen, Walter Brattain and William Shockley developed the first semiconductor-based transistor, heralding a revolution in electronics.[4] In 1879, Edwin Herbert Hall working at the Johns Hopkins University discovered the development of a voltage across conductors transverse to an electric current in the conductor and magnetic field perpendicular to the current.[22] This phenomenon arising due to the nature of charge carriers in the conductor came to be known as the Hall effect, but it was not properly explained at the time, since the electron was experimentally discovered 18 years later. After the advent of quantum mechanics, Lev Landau in 1930 predicted the quantization of the Hall conductance for electrons confined to two dimensions.[23] Magnetism as a property of matter has been known since pre-historic times.[24] However, the first modern studies of magnetism only started with the development of electrodynamics by Faraday, Maxwell and others in the nineteenth century, which included the classification of materials as ferromagnetic, paramagnetic and diamagnetic based on their response to magnetization.[25] Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials.[24] In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets.[26] The first attempt at a microscopic description of magnetism was by Wilhelm Lenz and Ernst Ising through the Ising model that described magnetic materials as consisting of a periodic lattice of quantum spins that collectively acquired magnetization.[24] The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension but is possible in higher dimensional lattices. Further research such as by Bloch on spin waves and Néel on antiferromagnetism led to the development of new magnetic materials with applications to magnetic storage devices.[24] ### Modern many body physics The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect.[27] After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included recognition of collective modes of excitation of solids and the important notion of a quasiparticle. 
Russian physicist Lev Landau used the idea for the Fermi liquid theory wherein low energy properties of interacting fermion systems were given in terms of what are now known as Landau-quasiparticles.[27] Landau also developed a mean field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases.[28] Eventually in 1965, John Bardeen, Leon Cooper and John Schrieffer developed the so-called BCS theory of superconductivity, based on the discovery that arbitrarily small attraction between two electrons can give rise to a bound state called a Cooper pair.[29] The study of phase transition and the critical behavior of observables, known as critical phenomena, was a major field of interest in the 1960s.[30] Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and scaling. These ideas were unified by Kenneth Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory.[30] The quantum Hall effect was discovered by Klaus von Klitzing in 1980 when he observed the Hall conductivity to be integer multiples of a fundamental constant.[31] (see figure) The effect was observed to be independent of parameters such as the system size and impurities, and in 1981, theorist Robert Laughlin proposed a theory describing the integer states in terms of a topological invariant called the Chern number.[32] Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect where the conductivity was now a rational multiple of a constant. Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational solution, known as the Laughlin wavefunction.[33] The study of topological properties of the fractional Hall effect remains an active field of research. In 1987, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, a material which was superconducting at temperatures as high as 50 Kelvin. It was realized that the high temperature superconductors are examples of strongly correlated materials where the electron–electron interactions play an important role.[34] A satisfactory theoretical description of high-temperature superconductors is still not known and the field of strongly correlated materials continues to be an active research topic. In 2009, David Field and researchers at Aarhus University discovered spontaneous electric fields when creating prosaic films of various gases. This has more recently expanded to form the research area of spontelectrics.[35] ## Theoretical Theoretical condensed matter physics involves the use of theoretical models to understand properties of states of matter. These include models to study the electronic properties of solids, such as the Drude model, the Band structure and the density functional theory. Theoretical models have also been developed to study the physics of phase transitions, such as the Ginzburg–Landau theory, critical exponents and the use of mathematical techniques of quantum field theory and the renormalization group. Modern theoretical studies involve the use of numerical computation of electronic structure and mathematical tools to understand phenomena such as high-temperature superconductivity, topological phases and gauge symmetries. 
### Emergence Main article: Emergence Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents.[29] For example, a range of phenomena related to high temperature superconductivity are not well understood, although the microscopic physics of individual electrons and lattices is well known.[36] Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon.[37] Emergent properties can also occur at the interface between materials: one example is the lanthanum-aluminate-strontium-titanate interface, where two non-magnetic insulators are joined to create conductivity, superconductivity, and ferromagnetism. ### Electronic theory of solids The metallic state has historically been an important building block for studying properties of solids.[38] The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of then-newly discovered electrons. This classical model was then improved by Arnold Sommerfeld who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals in the Wiedemann–Franz law.[38] In 1913, X-ray diffraction experiments revealed that metals possess periodic lattice structure. Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, called the Bloch wave.[39] Calculating electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence, approximation techniques are necessary to obtain meaningful predictions.[40] The Thomas–Fermi theory, developed in the 1920s, was used to estimate electronic energy levels by treating the local electron density as a variational parameter. Later in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single particle electron wavefunctions, but not for their Coulomb interaction. Finally in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed the density functional theory which gave realistic descriptions for bulk and surface properties of metals. The density functional theory (DFT) has been widely used since the 1970s for band structure calculations of variety of solids.[40] ### Symmetry breaking Main article: Symmetry breaking Certain states of matter exhibit symmetry breaking, where the relevant laws of physics possess some symmetry that is broken. A common example is crystalline solids, which break continuous translational symmetry. Other examples include magnetized ferromagnets, which break rotational symmetry, and more exotic states such as the ground state of a BCS superconductor, that breaks U(1) rotational symmetry.[41] Goldstone's theorem in quantum field theory states that in a system with broken continuous symmetry, there may exist excitations with arbitrarily low energy, called the Goldstone bosons. 
For example, in crystalline solids, these correspond to phonons, which are quantized versions of lattice vibrations.[42] ### Phase transition Main article: Phase transition The study of critical phenomena and phase transitions is an important part of modern condensed matter physics.[43] Phase transition refers to the change of phase of a system, which is brought about by change in an external parameter such as temperature. In particular, quantum phase transitions refer to transitions where the temperature is set to zero, and the phases of the system refer to distinct ground states of the Hamiltonian. Systems undergoing phase transition display critical behavior, wherein several of their properties such as correlation length, specific heat and susceptibility diverge. Continuous phase transitions are described by the Ginzburg–Landau theory, which works in the so-called mean field approximation. However, several important phase transitions, such as the Mott insulatorsuperfluid transition, are known that do not follow the Ginzburg–Landau paradigm.[44] The study of phase transitions in strongly correlated systems is an active area of research.[45] ## Experimental Experimental condensed matter physics involves the use of experimental probes to try to discover new properties of materials. Experimental probes include effects of electric and magnetic fields, measurement of response functions, transport properties and thermometry.[8] Commonly used experimental techniques include spectroscopy, with probes such as X-rays, infrared light and inelastic neutron scattering; study of thermal response, such as specific heat and measurement of transport via thermal and heat conduction. ### Scattering Main article: Scattering Several condensed matter experiments involve scattering of an experimental probe, such as X-ray, optical photons, neutrons, etc., on constituents of a material. The choice of scattering probe depends on the observation energy scale of interest.[46] Visible light has energy on the scale of 1 eV and is used as a scattering probe to measure variations in material properties such as dielectric constant and refractive index. X-rays have energies of the order of 10 keV and hence are able to probe atomic length scales, and are used to measure variations in electron charge density. Neutrons can also probe atomic length scales and are used to study scattering off nuclei and electron spins and magnetization (as neutrons themselves have spin but no charge).[46] Coulomb and Mott scattering measurements can be made by using electron beams as scattering probes,[47] and similarly, positron annihilation can be used as an indirect measurement of local electron density.[48] Laser spectroscopy is used as a tool for studying phenomena with energy in the range of visible light, for example, to study non-linear optics and forbidden transitions in media.[49] ### External magnetic fields In experimental condensed matter physics, external magnetic fields act as thermodynamic variables that control the state, phase transitions and properties of material systems.[50] Nuclear magnetic resonance (NMR) is a technique by which external magnetic fields can be used to find resonance modes of individual electrons, thus giving information about the atomic, molecular and bond structure of their neighborhood. 
NMR experiments can be made in magnetic fields with strengths up to 65 Tesla.[51] Quantum oscillation measurements are another experimental technique where high magnetic fields are used to study material properties such as the geometry of the Fermi surface.[52] The quantum Hall effect is another example of measurements with high magnetic fields, where topological properties such as the Chern–Simons angle can be measured experimentally.[49]

### Cold atomic gases

Main article: Optical lattice

Cold ion trapping in optical lattices is an experimental tool commonly used in condensed matter as well as atomic, molecular, and optical physics.[53] The technique involves using optical lasers to create an interference pattern, which acts as a "lattice", in which ions or atoms can be placed at very low temperatures.[54] Cold atoms in optical lattices are used as "quantum simulators", that is, they act as controllable systems that can model behavior of more complicated systems, such as frustrated magnets.[55] In particular, they are used to engineer one, two and three dimensional lattices for a Hubbard model with pre-specified parameters,[56] and to study phase transitions for Néel and spin liquid ordering.[53] In 1995, a gas of rubidium atoms cooled down to a temperature of 170 nK was used to experimentally realize the Bose–Einstein condensate, a novel state of matter originally predicted by S. N. Bose and Albert Einstein, wherein a large number of atoms occupy a single quantum state.[57]

## Applications

Research in condensed matter physics has given rise to several device applications, such as the development of the semiconductor transistor,[4] and laser technology.[49] Several phenomena studied in the context of nanotechnology come under the purview of condensed matter physics.[58] Techniques such as scanning-tunneling microscopy can be used to control processes at the nanometer scale, and have given rise to the study of nanofabrication.[59] Several condensed matter systems are being studied with potential applications to quantum computation,[60] including experimental systems like quantum dots, SQUIDs, and theoretical models like the toric code and the quantum dimer model.[61] Condensed matter systems can be tuned to provide the conditions of coherence and phase-sensitivity that are essential ingredients for quantum information storage.[59] Spintronics is a new area of technology that can be used for information processing and transmission, and is based on spin, rather than electron transport.[59] Condensed matter physics also has important applications to biophysics, for example, the experimental technique of magnetic resonance imaging, which is widely used in medical diagnosis.[59]

## References

• P. M. Chaikin and T. C. Lubensky (2000). Principles of Condensed Matter Physics, Cambridge University Press; 1st edition, ISBN 0-521-79450-1
• Alexander Altland and Ben Simons (2006). Condensed Matter Field Theory, Cambridge University Press, ISBN 0-521-84508-4
• Michael P. Marder (2010). Condensed Matter Physics, second edition, John Wiley and Sons, ISBN 0-470-61798-5
• Lillian Hoddeson, Ernest Braun, Jürgen Teichmann and Spencer Weart, eds. (1992). Out of the Crystal Maze: Chapters from the History of Solid State Physics, Oxford University Press, ISBN 0-195-05329-X
https://studysoup.com/tsg/statistics/75/elementary-statistics-a-step-by-step-approach/chapter/1531/4-1
# Solutions for Chapter 4.1: Sample Spaces and Probability

## Full solutions for Elementary Statistics: A Step By Step Approach | 9th Edition

ISBN: 9780073534985

Summary of Chapter 4.1: Sample Spaces and Probability

A sample space is the set of all possible outcomes of a probability experiment. This textbook survival guide was created for the textbook: Elementary Statistics: A Step By Step Approach, edition: 9. Elementary Statistics: A Step By Step Approach is associated to the ISBN: 9780073534985. Chapter 4.1: Sample Spaces and Probability includes 33 full step-by-step solutions. This expansive textbook survival guide covers the following chapters and their solutions. Since 33 problems in chapter 4.1: Sample Spaces and Probability have been answered, more than 666780 students have viewed full step-by-step solutions from this chapter.

Key Statistics Terms and definitions covered in this textbook

• α-error (or α-risk) In hypothesis testing, an error incurred by rejecting a null hypothesis when it is actually true (also called a type I error).
• Analytic study A study in which a sample from a population is used to make inference to a future population. Stability needs to be assumed. See Enumerative study.
• Average See Arithmetic mean.
• Bernoulli trials Sequences of independent trials with only two outcomes, generally called "success" and "failure," in which the probability of success remains constant.
• Bivariate distribution The joint probability distribution of two random variables.
• Categorical data Data consisting of counts or observations that can be classified into categories. The categories may be descriptive.
• Central limit theorem The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. It is a necessary and sufficient condition that none of the variances of the individual random variables are large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
• Chance cause The portion of the variability in a set of observations that is due to only random forces and which cannot be traced to specific sources, such as operators, materials, or equipment. Also called a common cause.
• Conditional mean The mean of the conditional probability distribution of a random variable.
• Conditional probability density function The probability density function of the conditional probability distribution of a continuous random variable.
• Confidence level Another term for the confidence coefficient.
• Counting techniques Formulas used to determine the number of elements in sample spaces and events.
• Critical region In hypothesis testing, this is the portion of the sample space of a test statistic that will lead to rejection of the null hypothesis.
• Defects-per-unit control chart See U chart.
• Distribution free method(s) Any method of inference (hypothesis testing or confidence interval construction) that does not depend on the form of the underlying distribution of the observations. Sometimes called nonparametric method(s).
• False alarm A signal from a control chart when no assignable causes are present.
• Fraction defective In statistical quality control, that portion of a number of units or the output of a process that is defective.
• Gaussian distribution Another name for the normal distribution, based on the strong connection of Karl F. Gauss to the normal distribution; often used in physics and electrical engineering applications.
• Geometric mean The geometric mean of a set of n positive data values is the nth root of the product of the data values; that is, $\bar{x}_g = \left(\prod_{i=1}^{n} x_i\right)^{1/n}$.
• Goodness of fit In general, the agreement of a set of observed values and a set of theoretical values that depend on some hypothesis. The term is often used in fitting a theoretical distribution to a set of observations.
https://www.physicsforums.com/threads/minimum-diameter-fluid-dynamics.405290/
# Homework Help: Minimum diameter fluid dynamics

1. May 24, 2010

### pat666

1. The problem statement, all variables and given/known data

A fire-hose must be able to shoot water to the top of a building 35.0 m tall when aimed straight up. Water enters this hose at a steady rate of 0.500 m³·s⁻¹ and shoots out of a round nozzle. (a) What is the maximum diameter that this nozzle can have? (4 marks) (b) If the only nozzle available has a diameter twice as great, what is the highest point that the water can reach?

2. Relevant equations

V_cylinder = πr²h, gh = ½v²

3. The attempt at a solution

First I found the initial velocity required, which was 26.21 m/s. Then 0.5 m³/s = π·r²·(h/s), so h/s is the velocity; I solved that and got 15.6 cm. For (b) I did basically the reverse and got 2.2 m. Can someone please check this?

2. May 24, 2010

### rock.freak667

A = cross-sectional area, so what is the cross-sectional area of a cylinder? Hence 0.5 = Av, so what is v? When you get that, your second equation will come into play.

3. May 24, 2010

### pat666

OK, so I've assumed the hose is a cylinder... the given flow rate is 0.5 m³ per second, so 0.5 m³/s is equal to (π·r²·h)/s ... h is in meters so h/s is the velocity (m/s) as it exits the hose, therefore 0.5 = π·r²·v. I'm trying to solve this by logic and am unsure if it's right. I'm guessing that you do not agree with me?

4. May 24, 2010

### rock.freak667

Oh sorry, you solved it the other way. I didn't read your post through, sorry.

5. May 24, 2010

### pat666

OK, cool, thanks for that. Just out of interest, how do you solve these sorts of problems?

6. May 24, 2010

### rock.freak667

Exactly how you solved it, except I would have done it in reverse to how you did it, just because I could get the diameter in terms of variables whose values I know, so that rounding errors would be reduced.
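For anyone who wants to verify the thread's numbers, here is a short Python sketch of the same continuity-plus-energy argument; it assumes g = 9.81 m/s² and neglects the water's speed inside the wider hose body:

```python
import math

g, h, Q = 9.81, 35.0, 0.500      # gravity (m/s^2), building height (m), flow rate (m^3/s)

# (a) Speed needed at the nozzle so that (1/2) v^2 = g h, then Q = A v fixes the area.
v = math.sqrt(2 * g * h)         # ~26.2 m/s, matching post #1
A = Q / v
d = 2 * math.sqrt(A / math.pi)   # ~0.156 m, i.e. ~15.6 cm

# (b) Doubling the diameter quadruples the area, so the exit speed drops by a factor of 4.
v2 = Q / (4 * A)                 # ~6.6 m/s
h2 = v2 ** 2 / (2 * g)           # ~2.2 m

print(f"(a) max nozzle diameter ≈ {100 * d:.1f} cm")
print(f"(b) max height with doubled diameter ≈ {h2:.2f} m")
```

Both results agree with pat666's 15.6 cm and 2.2 m.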
https://jlibovicky.github.io/2019/10/10/MT-Weekly-Modeling-Confidence.html
Neural machine translation is based on machine learning—we collect training data, pairs of parallel sentences which we hope represent how language is used in the two languages, and train models using the data. When the model is trained, the more the input resembles the sentences the model was trained on, the better we can expect the translation to be. If we were able to compute this similarity, we might be able to guess, how well the model can translate the sentence. Current best systems are trained on tens or even hundreds of millions of sentence pairs. Obviously, comparing every single sentence you want to translate to so many sentences is not computationally feasible. This problem (and an elegant solution of it) is a topic of a recent paper from the Karlsruhe Institute of Technology called Modeling Confidence in Sequence-to-Sequence Models. The paper is based on a simple and clever trick that allows estimating this similarity. But before we get to the trick, let us follow (what I think was) the thinking of the authors. If the input sentence is similar to sentences that were in the training data, the representations in the encoder and in the decoder should also be similar. This itself is not really helpful: computing the similarity still requires to go through millions of training sentences which is computationally expensive. The authors actually did that, burned a lot of electric power (warmed the planet a little bit) and proved that this really gives a good estimate of how good the translation will be. The question is now how to do it efficiently, i.e., how to compute the similarity with the training sentences without actually using them? And indeed, there is a clever trick to do that. In the paper, they train an autoencoder for the hidden states. An autoencoder is a neural network that projects its inputs into a vector of a smaller dimension and tries to reconstruct the input from this intermediate representation. The intermediate representation is an information bottleneck because it simply does not have enough capacity to memorize the input. The network needs to learn how to compress its input to be able to reconstruct it. Because the network is trained to work well on average, frequent inputs get reconstructed better than inputs that appear only rarely. And voilà, this is exactly what we want: the reconstruction error can serve as an indirect estimate of how similar are the encoder and decoder states for the input sentence and for the training data. The paper goes even further with this idea. When we have the hopefully well-reconstructed decoder states, we can plug them into the model instead of the original ones. We thus do not have to measure cosine or whatever similarity of the hidden states for which we do not have a straightforward interpretation anyway. We can simply try how the model output would change if we used the reconstructed states instead of the original ones. The outputs are words which got assigned the highest probability by the model. If the actual model output gets a low probability given the reconstructed hidden states, it means the original prediction was based on a hidden state that was not typical for training data and the translation is not likely to be of good quality. Moreover, if we do word alignment (which they did in a quite clever way in the paper) between the source sentence and the model output, we can say what source words make the model more likely to fail. This feature can find use both in user interfaces and analysis of what the models do. 
Isn’t it amazing? Instead of complaining that the hidden states of neural machine translation models are totally uninterpretable and that we cannot say anything about the expected translation quality, this paper shows that with a little wit and simple statistics, this can be nearly turned into an advantage. BibTeX Reference @misc{niehues2019modeling, title={Modeling Confidence in Sequence-to-Sequence Models}, author={Jan Niehues and Ngoc-Quan Pham}, year={2019}, eprint={1910.01859}, archivePrefix={arXiv}, primaryClass={cs.CL} }
https://www.physicsforums.com/threads/general-energy-question.802872/
# General Energy question

1. Mar 13, 2015

### jared bernstein

Hey guys, I am in intro physics in college and took physics in high school. I have a general question about work/energy. In high school we used work = change in total energy, and total energy = PE + KE + Q (internal), which = work. In class there was a problem that read: a 2.00 kg block is attached to a spring of force constant 500 N/m. The block is pulled 4.50 cm to the right of equilibrium and released from rest. (a) Find the speed of the block as it passes through equilibrium if the horizontal surface is frictionless. (b) Find the speed of the block as it passes through equilibrium (for the first time) if the coefficient of friction between block and surface is 0.350. What I don't understand is that my teacher said KE + PE + PEs = Wf (which is work of friction), but in high school (the equation above) the friction or Q is on the other side, and she says that the friction is equal to W. Can anyone explain!! Or simply do the problem out for me, I am getting mixed messages! Thanks, first post btw.

2. Mar 13, 2015

### Suraj M

Give it a try yourself, first! What is this? Please rephrase, what do you mean by friction = W? Hint: Just equate the potential energy stored in the spring to the sum of kinetic energy gained by the object and work done by friction.

3. Mar 13, 2015

### haruspex

You could write it as $W_f = - \Delta(KE+PE)$. That is, the work done against friction is equal to the loss in mechanical energy. (You have a PEs, which I take to be another form of PE, e.g. you may have gravitational PE as well as spring PE. I'm lumping all the PEs together.) However, if your teacher takes Wf to be the work done by friction then that reverses the sign. So it could be just a question of standpoint.
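To make the hint and haruspex's sign convention concrete, here is a small Python sketch of the energy bookkeeping for the quoted problem, assuming g = 9.81 m/s² and that friction acts over the whole 4.50 cm of travel to the equilibrium point:

```python
import math

m, k, x = 2.00, 500.0, 0.0450        # mass (kg), spring constant (N/m), initial stretch (m)
mu, g = 0.350, 9.81

pe_spring = 0.5 * k * x ** 2         # ~0.506 J stored in the spring

# (a) No friction: all of the spring PE becomes kinetic energy at equilibrium.
v_a = math.sqrt(2 * pe_spring / m)   # ~0.71 m/s

# (b) With friction: subtract the work done against friction over the distance x.
w_friction = mu * m * g * x          # ~0.31 J
v_b = math.sqrt(2 * (pe_spring - w_friction) / m)   # ~0.44 m/s

print(v_a, v_b)
```

Written this way, spring PE = KE + (positive) work done against friction, which is the same statement as $W_f = -\Delta(KE+PE)$ with $W_f$ taken as the work done against friction.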
http://mathhelpforum.com/advanced-statistics/44292-statistic-homework-help.html
# Math Help - Statistic Homework Help

1. ## Statistic Homework Help

I would really appreciate it if anyone could help me with the following math problem from a homework assignment I have. The principal at Jonesbury High School has claimed that the mean IQ of all students at the school is 125. The superintendent of schools in Jonesbury wants to test this claim. She checks the files of 36 Jonesbury High students at random, and finds that the mean IQ among these students is 124.8, with a standard deviation of 0.6. What is the smallest significance level at which the null hypothesis will be rejected? Hint: Calculate the p-value. Thank you in advance to anyone that can help.

2. Which is the standard deviation of the sample mean? 0.6 or 0.6/6? It's a good thing to know in order to get started.

3. H_0 : u = 125
H_A : u ≠ 125
x̄ = 124.8, u = 125, s = 0.6, n = 36
z = (x̄ - u) / (s / sqrt(n)) = (124.8 - 125) / (0.6/6) = -0.2 / 0.1 = -2, so |z| = 2.
The area beyond |z| = 2 in one tail is 0.02275, so the two-tailed p-value = 2 × 0.02275 ≈ 0.0455.
Now, if the p-value is lower than alpha, the null hypothesis is rejected. This means that alpha must be greater than (or equal to) about 0.0455 for the null hypothesis to be rejected. So, the smallest significance level (alpha) such that H_0 gets rejected is about 0.0455. Round this to as many digits as you wish.
-Andy
post script: LaTeX is not working, so I could not use it. Sorry.

4. I dare you to find a high school with an average IQ that high.

5. ## thank you

Thank you so much.
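A quick way to reproduce the numbers (SciPy is assumed to be available) and to see how the one-tailed area and the two-tailed p-value differ:

```python
from math import sqrt
from scipy.stats import norm

x_bar, mu0, s, n = 124.8, 125.0, 0.6, 36

z = (x_bar - mu0) / (s / sqrt(n))    # -2.0
one_tail = norm.cdf(-abs(z))         # ~0.02275, area in a single tail
two_tail = 2 * one_tail              # ~0.0455, the p-value for H_A: mu != 125

print(z, one_tail, two_tail)
```

Since the alternative hypothesis is two-sided, the smallest significance level at which H_0 is rejected is the two-tailed value, about 0.0455; the single-tail area 0.0228 would only be the answer for a one-sided alternative.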
http://heattransfer.asmedigitalcollection.asme.org/article.aspx?articleid=2562347
# Modeling of Spectral Properties and the Scattering Phase Function for Lightweight Heat Protection Spacecraft Materials

Author and Article Information

Valery V. Cherepanov, Mem. ASME, Department of Physics, Moscow Aviation Institute (National Research University), Volokolamskoe Highway, 4, A-80, GSP-3, Moscow 124993, Russia, e-mail: [email protected]

Oleg M. Alifanov, Department of Space Systems Engineering, Moscow Aviation Institute (National Research University), Volokolamskoe Highway, 4, A-80, GSP-3, Moscow 124993, Russia, e-mail: [email protected]

1Corresponding author.

Contributed by the Heat Transfer Division of ASME for publication in the JOURNAL OF HEAT TRANSFER. Manuscript received March 28, 2016; final manuscript received September 20, 2016; published online November 16, 2016. Editor: Dr. Portonovo S. Ayyaswamy.

J. Heat Transfer 139(3), 032701 (Nov 16, 2016) (9 pages) Paper No: HT-16-1153; doi: 10.1115/1.4034814 History: Received March 28, 2016; Revised September 20, 2016

## Abstract

This work gives a brief description of the statistical model that takes into account when calculating the physical, in particular, the optical properties of some ultraporous nonmetallic high-temperature materials, the real regularities of the material structure, and the physical properties of substances constituting the material. For the spectral part of the model, some tests are presented, confirming its adequacy. The simulation of the spectra and the scattering of monochromatic radiation pattern by using the representative elements of the model and the material as a whole are carried out. It is found that despite the fact that the scattering pattern based on the use of representative elements of a material can be approximated by the classical distributions, this is not true for the material as a whole. Calculations of the angular scattering probability density of the materials are carried out, and the approximations of obtained distributions that extend the class of modeling scattering phase functions (SPF) are proposed.
## Figures
Fig. 1 The extinction efficiency of an infinitely long cylinder versus the diffraction parameter x = kR and the incident angle α. The relative refractive index m = 1.6 (ε = 2.56 and μ = 1). (a) Dependencies QexE(x) for different values of the incident angle α. (b) Dependencies QexE(α) and QexH(α) for different values of the diffraction parameter x.

Fig. 2 Microstructure of the fibrous material TZMK-10 (bar = 10 μm)

Fig. 3 Unnormalized scattering pattern for one of the representative elementary volumes of fibrous material TZMK-10 in spherical (a) and polar (b) coordinates. Direction of illumination θi = 60 deg and φi = 0 deg. Wavelength λ = 1.15 μm. Sizes of fragments (μm): dx = 3.1422, lx = 301.79, Lx = 29.284; dy = 4.0878, ly = 304.13, Ly = 29.284; and dz = 3.2798, lz = 296.37, Lz = 16.269.

Fig. 4 The spectral weighting function f and the coefficients for the same representative element as in Fig. 3: (a) the absorption α and scattering β and (b) the transport radiation diffusion coefficient D and the mean free path of photons l. T = 700 K, lighting direction θi = 60 deg, and φi = 15 deg.

Fig. 5 Influence of the parameter g of the radiation model on the structure of the non-normalized SPF of TZMK-10 in polar coordinates. λ = 0.63 μm and T = 600 K. (a) g = 0.3, (b) g = 0.4, (c) g = 0.5, (d) g = 0.6, and (e) g = 0.8.

Fig. 6 The spectral scattering pattern for TZMK-10 in spherical and polar coordinates for different wavelengths. g = 0.3 and T = 600 K. (a) λ = 0.63 μm, (b) λ = 1.15 μm, and (c) λ = 3.9 μm.

Fig. 7 Unnormalized probability density of the TZMK-10 distribution in the scattering angle in polar coordinates and its approximation, λ = 0.63 μm: calculated curve of Fig. 6(a) and approximation (2).
http://physics.stackexchange.com/questions/70718/what-is-the-meaning-of-spin-two/70723
# What is the meaning of spin two? As the title suggests, what is the meaning of spin two? I kind of understand spin half for electrons. I can kind of understand spin one for other particles. However I'm not sure how something could have spin 2. - Related: Graviton – Keep these mind Jul 11 '13 at 13:00 "I'm not sure how something could have spin 2" I think the phrase is quite vague. It's like asking "I'm not sure how apples can fall down" - Nature doesn't care what you think..! – Waffle's Crazy Peanut Jul 11 '13 at 13:04 Possible duplicates: physics.stackexchange.com/q/1/2451 and 10 links therein. Related: physics.stackexchange.com/q/14932/2451 – Qmechanic Jul 11 '13 at 13:07 Spin-2 means that the spin is equal to 2 in the same sense in which spin-1 means that the spin is equal to 1 or spin-1/2 means that the spin is equal to 1/2. So it's hard to believe that you could understand the words spin-1/2 and spin-1 but not spin-2. It's like knowing how to drink half a liter of water, one liter of water, but be unable to drink 2 liters of water. Well, in this water example, it's actually more plausible. The spin $\vec J$ is the intrinsic angular momentum. The intrinsic means "innate", the part of the angular momentum that exists even when the particle is at rest. The angular momentum is the quantity that is conserved whenever the laws of physics obey the rotational symmetry. The classical rotating object has $\vec J = \sum_i \vec r_i \times \vec p_i$ summed over the mass points. In quantum mechanics, the total angular momentum of a particle is linked to the eigenvalue of $\vec J\cdot \vec J$ and it may be shown that the eigenvalues have the form $j(j+1)\hbar^2$ where $2j=0,1,2,3,\dots$ So the spin is either integer or half-integer. Equivalently, any projection of the spin, most commonly talked about as $j_z$, has eigenvalues that go from $j_z=-j$ to $j_z=+j$ with the spacing one. The multiplication of all the (half)integer values by $\hbar$ is understood everywhere. The maximum allowed $j_z$ for a particle is also equal to the spin $j$. Massless particles move by the speed of light so they don't have any rest frame. In their case, we usually talk about the spin with respect to one particular axis only, the axis of the direction of motion $\vec p$, because the rotations around this axis are unbroken. If it is so, the rotation symmetry is just $U(1)\sim SO(2)$ and the individual values of $j_z$ may exist in isolation from all the other values filling the interval between $-j_z$ and $+j_z$. The maximum positive value of $j_z$ is still referred to as the spin. The electron is massive and has $j=1/2$, with the allowed $j_z=\pm 1/2$. The photon is massless and has $j=1$ with $j_z=\pm 1$. Roughly speaking, this "one" comes from the one index of the potential $A_\mu$. Similarly, the graviton has spin 2, $j=2$, roughly speaking because its field $g_{\mu\nu}$ has two indices. The allowed projections are just $j_z=\pm 2$. The photon's $j_z=0$ and the graviton's $j_z=-1,0,1$ are rendered unphysical by the gauge symmetries and diffeomorphisms, respectively. There are also many massive particles such as nuclei and atoms that have $j=2$. These massive particles allow $j_z=-2,-1,0,1,2$. - Does this mean that the Higgs has no intrinsic angular momentum? – Jitter Jul 11 '13 at 23:41 Yes, the Higgs is a spinless particle - equivalently, an excitation of a scalar field. – Luboš Motl Jul 12 '13 at 4:52 Does -1 then mean that I owe a glass of water and have to pay it back? 
– Jitter Jul 14 '13 at 11:28 For a more intuitive, less rigorous, understanding of "spin", fall back to primitive geometric objects and, in particular, how they behave under a coordinate rotation. A scalar, a number, is unchanged (invariant) under a coordinate rotation so think of this as a "spin 0" object. Indeed, when we quantize a scalar field, the quanta have zero angular momentum; the quanta are spin 0 particles. A vector, however, is covariant under a coordinate rotation. Importantly, if the coordinate system is rotated "once around", the vector is unchanged so think of this as a "spin 1" object; the vector rotates at the same rate as the coordinate system. More technically, to transform a vector, apply the transformation once. As you might anticipate, when a vector field is quantized, the quanta have 1 unit of angular momentum; the quanta are spin 1 particles. Now, consider a rank 2 tensor (think of an outer product of two vectors as an example). To transform this object, the coordinate transformation must be applied twice (both vectors get the transformation). When the coordinate system is rotated through half a rotation, the rank 2 tensor is unchanged so think of this object as a "spin 2" object; the rank 2 tensor rotates twice the rate of the coordinate system. Now, you probably already see where this is going. When this tensor field is quantized, the quanta have 2 units of angular momentum; the quanta are spin 2 particles. - Hi you have discussed spin-0 particle as the quanta of scalar field, spin-1 particle as the quanta of vector field and spin-2 particle as the quanta of tensor field. But what about spin-1/2 particles? What do we call the field of spin-1/2 particles? – user22180 Oct 23 '14 at 7:44 @user22180, spinor fields: mathworld.wolfram.com/SpinorField.html – Alfred Centauri Oct 23 '14 at 11:02 Wow, never thought that a rank $n$ tensor is invariant under rotation by $\frac{2\pi}n$! – Ruslan Feb 19 '15 at 11:09
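The "rotates twice as fast" picture from the last answer is easy to check numerically. The NumPy sketch below (not part of the original thread; the component values are arbitrary) rotates a symmetric traceless rank-2 tensor in two dimensions and verifies that its independent components turn through 2θ, so a half-turn maps the tensor back onto itself:

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# A symmetric, traceless rank-2 tensor in 2D has two independent components (a, b).
a, b = 1.3, -0.4
T = np.array([[a, b], [b, -a]])

for theta in (np.pi / 6, np.pi / 3, np.pi / 2, np.pi):
    R = rotation(theta)
    T_rot = R @ T @ R.T                 # two copies of R act: a "spin 2" object
    a_rot, b_rot = T_rot[0, 0], T_rot[0, 1]
    # The components rotate through 2*theta rather than theta:
    assert np.isclose(a_rot, a * np.cos(2 * theta) - b * np.sin(2 * theta))
    assert np.isclose(b_rot, a * np.sin(2 * theta) + b * np.cos(2 * theta))

# In particular a half-turn (theta = pi) leaves the tensor unchanged.
assert np.allclose(rotation(np.pi) @ T @ rotation(np.pi).T, T)
```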
http://crypto.stackexchange.com/questions/3505/what-is-the-length-of-an-rsa-signature?answertab=votes
# What is the length of an RSA signature? Is it the same as the bits of the key (So a 2048 bit system will yield a 2048 bit signature)? At most as the key? Or something else entirely? - As a sidenote: While the signature size will correspond to the key size, that doesn't mean that the size of the signed message is the size of the message plus the signature size. It's possible to embed part of the message into the signature itself, making the combined size a bit smaller. This is called message recovery. –  CodesInChaos Aug 10 '12 at 19:13 If $d,N$ is the private key, signing a message $m$ is computed as $m^d\bmod N$. The $\bmod N$ makes it so that the signed message is between $0$ and $N$. So, it is no larger than $N$. In most applications, however, there is usually some encoding or protocol fields that will make it larger. - Thanks. And when we say 2048 bit encryption - Do we mean that N itself is 2048 bit? –  ispiro Aug 10 '12 at 15:06 OK, I found the answer to that one here: stackoverflow.com/a/2922135/939213 - Yes, the modulus itself is 2048. –  ispiro Aug 10 '12 at 15:26 PKCS#1, "the" RSA standard, describes how a signature should be encoded, and it is a sequence of bytes with big-endian unsigned encoding, always of the size of the modulus. This means that for a 2048-bit modulus, all signatures have length exactly 256 bytes, never more, never less. PKCS#1 is the most widely used standard, but there are other standards in some areas which may decide otherwise. Mathematically, a RSA signature is an integer between $1$ and $N-1$, where $N$ is the modulus. In some protocols, there can be some wrapping around the signature, e.g. to indicate which algorithm was used, or to embed certificates. For instance, in CMS, a "signature" contains the RSA value itself, but also quite a lot of additional data, for a virtually unbounded total size (I have seen signatures of a size of several megabytes, due to inclusion of huge CRL). - In some RSA variants, including ISO/IEC 9796-2 (with total message recovery, and the "Signature production function" rather than the more common "Alternative signature production function"), the signature is a bit string of $\lceil\log_2N\rceil-1$ bits, corresponding to (the big-endian encoding of) an integer between $1$ and $(N−3)/2$: that is $\min\big(J^s\bmod N,N-(J^s\bmod N)\big)$ where $J$ is a function of the message to sign and parameters. –  fgrieu Sep 18 '14 at 6:29 That trick can be extended; e.g. you can skip a complete byte (8 bits) if the verifier is ready to compute 256 RSA verifications, while it is trying to guess the missing bits. Maybe more importantly, in ISO/IEC 9796-2, part of the signed data can be embedded "for free" in the signature, so while the signature value has size $n$ bits (or $n-1$ with the trick you describe), the overhead implied by the presence of the signature can be substantially smaller, depending on the situation. –  Thomas Pornin Sep 18 '14 at 11:25 The RSA signature size is dependent on the key size, the RSA signature size is equal to the length of the modulus in bytes. This means that for a "n bit key", the resulting signature will be exactly n bits long. Although the computed signature value is not necessarily n bits, the result will be padded to match exactly n bits. Now here is how this works: The RSA algorithm is based on modular exponentiation. For such a calculation the final result is the remainder of the "normal" result divided by the modulus. Modular arithmetic plays a large role in Number Theory. 
There the definition for congruence is (I'll use 'congruent' since I don't know how to get those three-line equal signs) $m \equiv n \mod k$ if $k$ divides $m - n$ Simple example: Let $n = 2$ and $k = 7$, then $2 \equiv 2 \mod 7$ ($7$ divides $2 - 2$) $9 \equiv 2 \mod 7$ ($7$ divides $9 - 2$) $16 \equiv 2 \mod 7$ ($7$ divides $16 - 2$) ... $7$ actually does divide $0$, the definition for division is An integer $a$ divides an integer $b$ if there is an integer $n$ with the property that $b = n·a$. For $a = 7$ and $b = 0$ choose $n = 0$. This implies that every integer divides $0$, but it also implies that congruence can be expanded to negative numbers (won't go into details here, it's not important for RSA). So the gist is that the congruence principle expands our naive understanding of remainders, the modulus is the "number after mod", in our example it would be $7$. As there are an infinite amount of numbers that are congruent given a modulus, we speak of this as the congruence classes and usually pick one representative (the smallest congruent integer $\geq 0$) for our calculations, just as we intuitively do when talking about the "remainder" of a calculation. In RSA, signing a message $m$ means exponentiation with the "private exponent" $d$, the result $r$ is the smallest integer with $0 \leq r < n$ so that $$m^d \equiv r \bmod n.$$ This implies two things: 1. The length of $r$ (in bits) is bounded by the length of $n$ (in bits). 2. The length of $m$ (in bits) must $\leq$ length($n$) (in bits, too). To make the signature exactly $n$ bits long, some form of padding is applied. Cf. PKCS#1 for valid options. The second fact implies that messages larger than n would either have to be signed by breaking $m$ in several chunks $< n$, but this is not done in practice since it would be way too slow (modular exponentiation is computationally expensive), so we need another way to "compress" our messages to be smaller than $n$. For this purpose we use cryptographically secure hash functions such as SHA-1 that you mentioned. Applying SHA-1 to an arbitrary-length message m will produce a "hash" that is 20 bytes long, smaller than the typical size of a RSA modulus, common sizes are 1024 bits or 2048 bits, i.e. 128 or 256 bytes, so the signature calculation can be applied for any arbitrary message. The cryptographic properties of such a hash function ensures (in theory - signature forgery is a huge topic in the research community) that it is not possible to forge a signature other than by brute force. - But since the signature is $m^d \bmod n$ - It seems that the length can even be $0$. Or are you referring to a system that will pad it as an extra feature? –  ispiro Aug 10 '12 at 15:05 yes, I'm referring it as an extra feature –  K Kiran Aug 10 '12 at 15:09 Actually, modular exponentiation works just fine for numbers greater than $n$. It's just that this would trivially give us signature collisions (as $m + n \equiv m \mod n$, we also have $(m + n)^d \equiv m^d \mod n$), which is one of the reasons to use a hash first. –  Paŭlo Ebermann Aug 11 '12 at 10:31
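To see the size bookkeeping concretely, here is a toy Python sketch built on the classic textbook key (p = 61, q = 53). It only illustrates "signature = m^d mod N, then pad to the byte length of the modulus"; real code should use a vetted PKCS#1 implementation rather than hand-rolled exponentiation, and would sign a hash of the message rather than the message itself.

```python
p, q = 61, 53
n = p * q                       # modulus N = 3233
e, d = 17, 2753                 # public and private exponents, e*d = 1 (mod 3120)

mod_len = (n.bit_length() + 7) // 8    # signature length in bytes (2 for this toy key)

def sign(m):
    assert 0 <= m < n                  # in practice m is a padded hash, always < n
    s = pow(m, d, n)                   # raw signature: an integer in [0, n)
    return s.to_bytes(mod_len, "big")  # big-endian, left-padded to the modulus length

def verify(m, sig):
    return pow(int.from_bytes(sig, "big"), e, n) == m

sig = sign(65)
print(len(sig), verify(65, sig))       # always mod_len bytes, True
```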
https://demo7.dspace.org/items/4ea98091-9cbc-4328-98d8-f9d855eabc35
## On using the WMAP distance priors in constraining the time evolving equation of state of dark energy

Li, Hong; Xia, Jun-Qing; Zhao, Gong-Bo; Fan, Zu-Hui; Zhang, Xinmin

##### Description

Recently, the WMAP group has published their five-year data and considered the constraints on the time evolving equation of state of dark energy for the first time from the WMAP distance information. In this paper, we study the effectiveness of the usage of these distance information and find that these compressed CMB information can give similar constraints on dark energy parameters compared with the full CMB power spectrum if dark energy perturbations are included, however, once incorrectly neglecting the dark energy perturbations, the difference of the results are sizable.

Comment: 4 pages, 3 figures, 2 tables

Astrophysics
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9213285446166992, "perplexity": 1459.8063928112729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711151.22/warc/CC-MAIN-20221207085208-20221207115208-00114.warc.gz"}
http://mathoverflow.net/questions/74592/well-balanced-covering-of-transpositions-in-n-elements/74631
# Well-balanced covering of transpositions in $n$ elements

Let me denote by $X_n$ the set of transpositions in $n$ elements. Equivalently, $X_n$ is the set of doubletons in $[1,n]\times[1,n]$. The cardinality of $X_n$ is $N=\frac{n(n-1)}{2}$. If $f:{\mathbb Z}/N{\mathbb Z}\rightarrow X_n$ is a bijection, let us denote $$r(f):=\min\{|\ell-m|;\ell\ne m\quad\hbox{and}\quad f(\ell)\cap f(m)\ne\emptyset\}.$$ Finally, let us define $$R_n:=\max\{r(f);\hbox{bijections}\quad f:{\mathbb Z}/N{\mathbb Z}\rightarrow X_n\}.$$ What is the asymptotics of $R_n$ as $n\rightarrow+\infty$? Is it $R_n\sim cn$ for some $c\in(0,\frac12)$? Or do we have $R_n=o(n)$?

My motivation comes from a numerical algorithm due to Jacobi for the calculation of the spectrum of Hermitian matrices. Each step operates on a pair of rows/columns, with the effect of setting the entry $a_{ij}$ to zero. Once one has acted on a row, it seems better to avoid coming back to it too soon. On the other hand, one needs to visit every pair $(i,j)$ every $N$ steps.

-

Close to $n/2$ is possible. I'll do odd $n$ and leave even $n$ for someone else's pleasure. Let $m=(n-1)/2$. For $i=0,\ldots,n-1$ and $j=1,\ldots,m$, let $M(i,j)$ be the pair $\{i-j,i+j\}$ (all values taken mod $n$, of course). The solution is $$M(0,1),\ldots,M(0,m),M(1,1),\ldots,M(1,m),\ldots,M(n-1,1),\ldots,M(n-1,m).$$ Graph theorists will note that this is a standard 1-factorization of $K_n$ listed one factor at a time.

$M(i,j_1)$ and $M(i,j_2)$ are disjoint for $j_1\ne j_2$, so the only chance of two overlapping pairs being closer than $m$ positions is two pairs of the form $M(i,j_1)$ and $M(i+1,j_2)$. A little thought shows that $M(i,j)$ overlaps $M(i+1,j-1)$ and $M(i+1,j+1)$ and no other pairs $M(i+1,j')$. Thus the minimum separation is $m-1=(n-3)/2$.

There are $n-1$ pairs $\{0,j\}$, so two of them must be at most distance $\lfloor N/(n-1)\rfloor = (n-1)/2$, still assuming $n$ is odd. This shows that the solution above is at most 1 worse than the optimum.

EDIT: For even $n$, $(n-2)/2$ is achievable and is optimal. The remaining loose end is whether $(n-1)/2$ is possible for odd $n$.

- That's very neat. I tried for something like that but couldn't do better than $n/4$. –  Noam D. Elkies Sep 6 '11 at 4:25
- Thanks. In the case of $n$ odd and separation $m=(n−1)/2$, any $m$ pairs in a row must be a matching, and any $2m$ in a row must be a hamiltonian path. I didn't find such a solution for $n=5$ but I didn't yet try systematically. –  Brendan McKay Sep 6 '11 at 5:22
- Unless my program has a bug (for the first time ever ;), there are no solutions for $n$ odd and separation $m=(n−1)/2$ with $5\le n\le 23$. So I conjecture that $(n-3)/2$ is optimal. –  Brendan McKay Sep 6 '11 at 9:40
- And of course, $n$ odd implies that the map $(i,j)\mapsto(i-j,i+j)$ is a bijection. –  Denis Serre Sep 6 '11 at 12:50

$R_n \geq n/16$ can be obtained by starting from an arbitrary $f$ and then switching pairs of transpositions to get rid of any overlapping pairs whose images are too close to each other. Suppose $r(f) < k$, and suppose $f(l)$ overlaps some $f(m)$ with $0 < |l-m| < k$. We want to find some $l'\in{\bf Z}/N{\bf Z}$ such that: $f(l')$ does not overlap $f(m)$ for any $m \neq l$ with $|l-m| < k$, and $f(l)$ does not overlap $f(m')$ for any $m' \neq l'$ with $|l'-m'| < k$. Now any transposition overlaps with $2n-4$ others. Thus each of our two conditions excludes at most $(2n-4) (2k-2)$ choices of $l'$. We must also exclude the $2n-4$ choices of $l'$ such that $f(l')$ itself overlaps with $f(l)$.
As long as $N-1 > (2n-4)(4k-3)$, we can find such $l'$. Since $N = (n^2-n)/2$, this condition is satisfied as long as $k < n/16 - O(1)$; indeed since $N-1 = (n-2)(n+1)/2$ our condition simplifies to $k \leq (n+13)/16$. So if $k \leq (n+13)/16$ we can switch two transpositions so as to increase by at least $1$ the number of transpositions without an offending overlap. Doing this at most $N$ times yields a bijection $f: {\bf Z}/N{\bf Z} \rightarrow X_n$ with $r(f) \geq k$, as claimed. -
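The explicit construction in the first answer is easy to verify by computer. Here is a small Python sketch (the helper names are mine) that builds the ordering $M(i,j)=\{i-j,\,i+j\} \bmod n$ for odd $n$ and measures the minimum cyclic separation $r(f)$; it reproduces the claimed value $(n-3)/2$.

```python
# Build the 1-factorization ordering M(i, j) = {i - j, i + j} (mod n), odd n,
# and measure the minimum cyclic separation r(f) between overlapping pairs.
def ordering(n):
    assert n % 2 == 1
    m = (n - 1) // 2
    return [frozenset({(i - j) % n, (i + j) % n})
            for i in range(n) for j in range(1, m + 1)]

def min_separation(seq):
    N = len(seq)
    best = N
    for a in range(N):
        for b in range(a + 1, N):
            if seq[a] & seq[b]:                              # the two transpositions overlap
                best = min(best, min(b - a, N - (b - a)))    # distance on Z/NZ
    return best

for n in (5, 7, 9, 11, 13):
    seq = ordering(n)
    assert len(set(seq)) == n * (n - 1) // 2     # every transposition occurs exactly once
    print(n, min_separation(seq), (n - 3) // 2)  # the last two columns agree
```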
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9627168774604797, "perplexity": 151.65061044843213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375093974.67/warc/CC-MAIN-20150627031813-00196-ip-10-179-60-89.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/116179/bayesian-modeling-using-multivariate-normal-with-covariate
# Bayesian modeling using multivariate normal with covariate Suppose you have an explanatory variable ${\bf{X}} = \left(X(s_{1}),\ldots,X(s_{n})\right)$ where $s$ represents a given coordinate. You also have a response variable ${\bf{Y}} = \left(Y(s_{1}),\ldots,Y(s_{n})\right)$. Now, we can combine both variables as: $${\bf{W}}({\bf{s}}) = \left( \begin{array}{ccc}X(s) \\ Y(s) \end{array} \right) \sim N(\boldsymbol{\mu}(s), T)$$ In this case, we simply choose $\boldsymbol{\mu}(s) = \left( \mu_{1} \; \; \mu_{2}\right)^{T}$ and $T$ is a covariance matrix that describes the relation between $X$ and $Y$. This only describes the value of $X$ and $Y$ at $s$. Since we have more points from other locations for $X$ and $Y$, we can describe more values of ${\bf{W}}(s)$ in the following way: $$\left( \begin{array}{ccc} {\bf{X}} \\ {\bf{Y}} \end{array}\right) = N\left(\left(\begin{array}{ccc}\mu_{1}\boldsymbol{1}\\ \mu_{2}\boldsymbol{1}\end{array}\right), T\otimes H(\phi)\right)$$ You will notice that we rearranged the components of $\bf{X}$ and $\bf{Y}$ to get all $X(s_i)$ in a column and after that, concatenate all $Y(s_i)$ together. Each component $H(\phi)_{ij}$ is a correlation function $\rho(s_i, s_j)$ and $T$ is as above. The reason we have the covariance $T\otimes H(\phi)$ is because we assume it is possible to separate the covariance matrix as $C(s, s')=\rho(s, s') T$. Question 1: When I calculate the conditional ${\bf{Y}}\mid{\bf{X}}$, what I'm actually doing is generating a set of values of $\bf{Y}$ based on $\bf{X}$, correct? I already have $\bf{Y}$ so I would be more interested in predicting a new point $y(s_{0})$. In this case, I should have a matrix $H^{*}(\phi)$ defined as $$H^{*}(\phi) = \left(\begin{array}{ccc}H(\phi) & \boldsymbol{h} \\ \boldsymbol{h}& \rho(0,\phi) \end{array}\right)$$ in which $\boldsymbol{h}(\phi)$ is a vector $\rho(s_{0} - s_{j};\phi)$. Therefore, we can construct a vector (without rearrangement): $${\bf{W^{*}}} = \left({\bf{W}}(s_{1}), \ldots, {\bf{W}}(s_{n}), {\bf{W}}(s_{0})\right)^{T} \sim N\left(\begin{array}{ccc}\boldsymbol{1}_{n+1} \otimes \left( \begin{array}{ccc} \mu_{1} \\ \mu_{2} \end{array} \right)\end{array}, H(\phi)^{*}\otimes T\right)$$ And now I just rearrange to get a joint distribution $\left(\begin{array}{ccc} {\bf{X}} \\ x(s_0) \\{\bf{Y}} \\ y(s_0)\end{array} \right)$ and obtain the conditional $p(y(s_0)\mid x_0, {\bf{X}}, {\bf{Y}})$. Is this correct? Question 2: For predicting, the paper I'm reading indicates that I must use this conditional distribution $p(y(s_0)\mid x_0, {\bf{X}}, {\bf{Y}})$ and obtain a posterior distribution $p(\mu, T, \phi\mid x(s_0), {\bf{Y}}, {\bf{X}})$, but I'm not sure how to obtain the posterior distribution for the parameters. Maybe I could use the distribution $\left(\begin{array}{ccc}{\bf{X}} \\ x(s_0)\\ {\bf{Y}}\end{array}\right)$ that I think is exactly the same as $p({\bf{X}}, x(s_0), {\bf{Y}}\mid\mu, T, \phi)$ and then simply use Bayes' theorem to obtain $p(\mu, T, \phi\mid {\bf{X}}, x(s_0), {\bf{Y}}) \propto p({\bf{X}}, x(s_0), {\bf{Y}}\mid\mu, T, \phi)p(\mu, T, \phi)$ Question 3: At the end of the subchapter, the author says this: For prediction, we do not have ${\bf{X}}(s_0)$. This does not create any new problems as it may be treated as a latent variable and incorporated into $\bf{x}'$ This only results in an additional draw within each Gibbs iteration and is a trivial addition to the computational task. What does that paragraph mean? 
By the way, this procedure can be found in this paper (page 8), but as you can see, I need a bit more detail. Thanks!

• I would say correct to both your answers to questions 1 and 2. Question 3 means that the unobserved $X(s_0)$ is treated as an additional parameter, on top of $\mu,T,\phi$, using the full conditional $$p(x(s_0)\mid{\bf{X}}, {\bf{Y}},\mu, T, \phi)$$ as prior on $X(s_0)$. – Xi'an Dec 28 '14 at 20:58
• Thank you! Do you have a recommendation on where I can get more information on this kind of model? By the way, you can add your comment as an answer. – Robert Smith Dec 29 '14 at 5:50

Question 1: Given your joint probability model $$\left( \begin{array}{ccc} {\bf{X}} \\ {\bf{Y}} \end{array}\right) \sim N\left(\left(\begin{array}{ccc}\mu_{1}\boldsymbol{1}\\ \mu_{2}\boldsymbol{1}\end{array}\right), \begin{bmatrix} \boldsymbol\Sigma_{11} & \boldsymbol\Sigma_{12} \\ \boldsymbol\Sigma_{21} & \boldsymbol\Sigma_{22} \end{bmatrix} \right)=N\left(\left(\begin{array}{ccc}\mu_{1}\boldsymbol{1}\\ \mu_{2}\boldsymbol{1}\end{array}\right), T\otimes H(\phi)\right)$$ the conditional distribution of $\bf{Y}$ given $\bf{X}$ is also Normal, with mean $$\boldsymbol\mu_2 + \boldsymbol\Sigma_{21} \boldsymbol\Sigma_{11}^{-1} \left( \mathbf{X} - \boldsymbol\mu_1\right)$$ and variance-covariance matrix $$\boldsymbol\Sigma_{22} - \boldsymbol\Sigma_{21} \boldsymbol\Sigma_{11}^{-1} \boldsymbol\Sigma_{12}.$$ (Those formulas are copied verbatim from the Wikipedia page on multivariate normals.) The same applies to $p(y(s_0)\mid x(s_0), {\bf{X}}, {\bf{Y}})$ since $(y(s_0), x(s_0), {\bf{X}}, {\bf{Y}})$ is another Normal vector.

Question 2: The predictive $p(y(s_0)\mid x(s_0), {\bf{X}}, {\bf{Y}})$ is defined as $$p(y(s_0) | x(s_0), {\bf{X}}, {\bf{Y}})=\int p(y(s_0)| x(s_0), {\bf{X}}, {\bf{Y}},\mu,T,\phi)\,p(\mu,T,\phi| x(s_0), {\bf{X}}, {\bf{Y}})\,\text{d}\mu\,\text{d} T\,\text{d}\phi\,,$$ i.e., by integrating out the parameters using the posterior distribution of those parameters, given the current data $({\bf{X}}, {\bf{Y}},x(s_0))$. So there is a little bit more to the full answer. Obviously, if you only need to simulate from the predictive, your notion of simulating jointly from $p(\mu, T, \phi\mid {\bf{X}}, x(s_0), {\bf{Y}})$ and then from $p(y(s_0)\mid x(s_0), {\bf{X}}, {\bf{Y}},\mu,T,\phi)$ is valid.

Question 3: In the event that $x(s_0)$ is not observed, the pair $(x(s_0),y(s_0))$ can be predicted from another predictive $$p(x(s_0),y(s_0)\mid {\bf{X}}, {\bf{Y}})=\int p(x(s_0),y(s_0)\mid {\bf{X}}, {\bf{Y}},\mu,T,\phi)\,p(\mu,T,\phi\mid {\bf{X}}, {\bf{Y}})\,\text{d}\mu\,\text{d} T\,\text{d}\phi\,.$$ When simulating from this predictive, because it is not available in a manageable form, a Gibbs sampler can be run that iteratively simulates

1. $\mu\mid {\bf{X}}, {\bf{Y}},x(s_0),y(s_0),T,\phi$
2. $T\mid {\bf{X}}, {\bf{Y}},x(s_0),y(s_0),\mu,\phi$
3. $\phi\mid {\bf{X}}, {\bf{Y}},x(s_0),y(s_0),T,\mu$
4. $x(s_0)\mid {\bf{X}}, {\bf{Y}},y(s_0),\phi,T,\mu$
5. $y(s_0)\mid {\bf{X}}, {\bf{Y}},x(s_0),\phi,T,\mu$

or else merge steps 4 and 5 into a single step

• $x(s_0),y(s_0)\mid {\bf{X}}, {\bf{Y}},\phi,T,\mu$
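For completeness, the conditioning formulas quoted from Wikipedia in the answer are easy to code. The NumPy sketch below (with an invented toy covariance, not the kriging model $T\otimes H(\phi)$ itself) returns the conditional mean and covariance of $\bf Y$ given $\bf X$:

```python
import numpy as np

def conditional_normal(mu1, mu2, S11, S12, S21, S22, x_obs):
    """Mean and covariance of Y | X = x_obs for a jointly Normal (X, Y)."""
    S11_inv = np.linalg.inv(S11)
    mean = mu2 + S21 @ S11_inv @ (x_obs - mu1)
    cov = S22 - S21 @ S11_inv @ S12
    return mean, cov

# Toy example with two X-sites and two Y-sites (numbers invented for illustration).
mu1, mu2 = np.zeros(2), np.ones(2)
S11 = np.array([[1.0, 0.3], [0.3, 1.0]])
S22 = np.array([[1.0, 0.2], [0.2, 1.0]])
S12 = np.array([[0.5, 0.1], [0.1, 0.5]])
S21 = S12.T

mean, cov = conditional_normal(mu1, mu2, S11, S12, S21, S22, np.array([0.4, -0.2]))
print(mean)
print(cov)
```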
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.862368106842041, "perplexity": 165.7734870252487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319915.98/warc/CC-MAIN-20190824063359-20190824085359-00059.warc.gz"}
https://cs.stackexchange.com/questions/75674/reducing-randomness-needed-by-turing-machine
# Reducing randomness needed by a Turing machine

I am reading an article related to streaming algorithms named "Turnstile streaming algorithms might as well be linear sketches" by Yi Li, Huy Nguyen and David Woodruff. At some point they have a randomized algorithm (it uses a tape of random bits) that solves a problem over $\mathbb Z_{|m|}^{n} = \{-m,\ldots,m\}^n$ and succeeds with probability $1-\delta$. They want to reduce the number of random bits needed by that algorithm via the following statement:

> Theorem 9: Let A be a randomized automaton solving problem P on $\mathbb Z_{|m|}^{n}$ with failure probability at most $\delta$. There is a randomized automaton B that only needs to pick uniformly at random one of $O(\delta^{-2} n \log m)$ deterministic instances of A and solves P on $\mathbb Z_{|m|}^{n}$ with failure probability at most $2\delta$.
>
> Proof: Let $A_1, A_2, \ldots, A_{O(n\delta^{-2}\log m)}$ be independent draws of deterministic automata picked by B. Fix an input $x\in \mathbb Z_{|m|}^{n}$. Let $p_A(x)$ be the fraction of automata among $A_1, A_2, \ldots, A_{O(n\delta^{-2}\log m)}$ that solve problem P correctly on x and $p_B(x)$ be the probability that B solves P on x correctly. By a Chernoff bound, we have that $\Pr\{|p_A(x)-p_B(x)|\geq \delta\} \leq \exp(-O(n\log m)) < (2m+1)^{-2n}$. Taking a union bound over all choices of $x\in \mathbb Z_{|m|}^{n}$, we have $\Pr\{|p_A(x)-p_B(x)| < \delta \text{ for all } x\} > 0$. Therefore, there exists a set of $A_1, A_2, \ldots, A_{O(n\delta^{-2}\log m)}$ such that $|p_A(x)-p_B(x)|\leq \delta$ for all $x\in \mathbb Z_{|m|}^{n}$. The automaton B simply samples uniformly at random from this set of deterministic automata.

I am having trouble understanding some parts of the proof: the random variable $p_A(x)$ should be the number of $A_i$'s that succeed divided by the number of them that B has sampled? If so, then by the law of total probability it is easy to see that $$p_B(x) = \sum_{i}\Pr[B \text{ chooses } A_i]\, \Pr[B \text{ succeeds on } x \mid B \text{ chooses } A_i] = \sum_{i}\frac{1}{O(\delta^{-2}n\log m)} \Pr[A_i \text{ succeeds on } x] = p_A(x),$$ where the last step comes from the fact that each instance $A_i$ is deterministic, so each term is either 1 (success) or 0 (failure). So the whole $|p_A(x)-p_B(x)|$ they use is actually 0.

I understand from the end of the proof that $|p_A(x)-p_B(x)|$ should somehow represent the distance between A's success probability and B's, but I could not see how this happens. Also, I did not understand the use of the Chernoff bound the way they did it.

Here is another way of looking at $p_A$ and $p_B$. We can think of $B$ as a deterministic algorithm which accepts an additional input $r$ representing the randomness. This input is drawn from some distribution $R$. Also, denote by $P(x)$ the correct answer. Then $$p_B(x) = \Pr_{r \sim R} [B(x,r) = P(x)].$$ Let $M = O(n\delta^{-2}\log m)$, and sample $r_1,\ldots,r_M \sim R$. The algorithm $A_i$ simply runs $B$ with randomness $r_i$, that is $A_i(x) = B(x,r_i)$. We can define another algorithm $A$ which first chooses $i \in \{1,\ldots,M\}$ uniformly at random, and then runs $A_i$. The success probability of $A$ is \begin{align*} p_A(x) &= \Pr_{i \in \{1,\ldots,M\}} [B(x,r_i) = P(x)] \\ &= \Pr_{r \in \{r_1,\ldots,r_M\}} [B(x,r) = P(x)]. \end{align*} Thus in $B$, the randomness is chosen according to $R$, whereas in $A$, the randomness is chosen uniformly from a set $\{r_1,\ldots,r_M\}$.
If we choose $r_1,\ldots,r_M \sim R$, then we expect the performance of $A$ to be very similar to that of $B$, and this can be quantified using Chernoff's inequality. The probability $p_A(x)$ itself depends on the choice of $r_1,\ldots,r_M$. What may confuse you is how we conceive of this two-step random process. We first choose $r_1,\ldots,r_M$, and this gives us the function $p_A(x)$, which is a random function depending on the choice of $r_1,\ldots,r_M$. Note that we don't take probability over the choice of $r_1,\ldots,r_M$ – if we did, we would just get $p_B(x)$.
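A tiny simulation may help internalize this two-step random process. The sketch below uses a toy success predicate invented purely for illustration (it is not the streaming algorithm from the paper): it samples $M$ seeds playing the role of the deterministic instances $A_i$, and checks that the empirical success rate over those seeds ($p_A$) stays within $\delta$ of the true success rate over all randomness ($p_B$) for every tested input.

```python
import random

# Toy "randomized algorithm": on input x with seed r it succeeds iff the
# predicate below holds; over a uniformly random seed it succeeds w.p. 0.9.
def succeeds(x, r):
    return (x * 1315423911 + r) % 10 != 0

delta = 0.05
M = 2000                                            # plays the role of O(delta^-2 n log m)
seeds = [random.randrange(10**9) for _ in range(M)] # the sampled deterministic instances A_i

for x in range(5):
    p_A = sum(succeeds(x, r) for r in seeds) / M        # success rate over the sampled seeds
    p_B = sum(succeeds(x, r) for r in range(10)) / 10    # exact rate (depends only on r mod 10)
    assert abs(p_A - p_B) < delta, (x, p_A, p_B)
print("p_A is within delta of p_B for every tested input x")
```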
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9937427639961243, "perplexity": 250.08326250363405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301341.12/warc/CC-MAIN-20220119125003-20220119155003-00093.warc.gz"}
https://www.statisticshomeworkhelper.com/probability-theory/
# Probability theory

Probability theory is the statistical study of an uncertain or random event to determine its likelihood of happening. The chance of an event occurring is stated as a number between zero and one, whereby one indicates certainty (the event is sure to happen) and zero indicates impossibility. The higher the probability number of an event, the higher the likelihood that the event will happen. A good example for explaining probability is tossing a fair coin. Since the coin is unbiased, there is an equal possibility of the coin landing on heads or tails. In other words, there is a 50% chance of heads and a 50% chance of tails. And since no other outcome is expected from this experiment, the probability of each outcome is 50%, which can be denoted as 0.5 or ½.

## Understanding probability theory and its terminology

To understand probability, let's consider an experiment that can be repeated and that can give us different results on different attempts under similar conditions. All the possible outcomes in an experiment are statistically referred to as a sample space. In an experiment of tossing a coin, for instance, the sample space will have two possible outcomes: heads and tails. If we consider an experiment of tossing two dice, on the other hand, our sample space will have 36 possible outcomes.

In the aforementioned experiments, the outcomes can be described by either a discrete or a continuous random variable. And what is a random variable, you may ask? A random variable is observed when the outcome has numerical values. They are of two types:

• Discrete random variable: A discrete random variable takes on a countable number of distinct values like 0, 1, 2, 3, 4, 5…
• Continuous random variable: This one can take an uncountable number of distinct values. Measurements such as length often fall under continuous random variables.

Also, events in a probability experiment can be either independent or dependent. An event is considered dependent if its likelihood of occurring relies on the likelihood of another event occurring. If an event does not rely on the probability of another event occurring, then it is said to be independent. Let's consider our two experiments of tossing a coin and tossing two dice, for instance. The probability of the coin landing on heads or tails is in no way affected by the probability of the numbers displayed by the faces of the two dice. Hence these two events can be said to be independent of each other. To understand probability theory and the terminology involved in depth, consider taking professional academic support from our probability theory online tutors.

## Real-life applications of probability theory

We use probability in our daily lives to determine what the likelihood of an event occurring will be, especially for those events whose results we are not sure about. Sure, most of the time we will not apply complex mathematical formulas or solve actual probability problems, but we will use subjective probability to judge the situation and decide the best course of action. Below are some of the events in which probability theory is applied in real life.

• Weather forecasting: The meteorology department can't forecast exactly what the atmospheric conditions will be without probability. They have to use instruments and tools to determine the chances that there will be sunshine, rain, or snow. If 40 out of 100 days experience rain, for instance, then there is a 40% likelihood of rain (a small simulation of this relative-frequency idea is sketched after this list). Meteorologists also use past data to estimate how high or low the temperatures will be in the future and other probable weather patterns.
• Insurance options: Probability theory plays a significant role in determining the best insurance policies for you and your family. For example, when selecting a motor insurance policy, you can use probability to analyze the likelihood that you will need to file a claim. For instance, if 20 out of 100 drivers (20% of drivers) in your area have hit a person over the past few months, you will likely consider a comprehensive cover for your car, not just liability.
• Gaming: If you are playing a game that involves luck or chance, you will use probability to decide the most appropriate move. For instance, if you are placing a bet on a football team, you may want to consider how many times the team you want to bet on has won a match over the past year. For more information on the applications of probability theory, connect with our probability theory homework help experts.
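As promised above, here is a tiny Python sketch of the relative-frequency idea behind these examples (the data are synthetic, not real weather records): the fraction of rainy days in a simulated history approximates the underlying chance of rain, and the fraction of heads approaches 0.5 as a fair coin is tossed more often.

```python
import random

random.seed(1)

# "40 out of 100 days were rainy" style estimate; the true chance is set to 0.4.
rainy_days = [random.random() < 0.4 for _ in range(100)]
print("estimated chance of rain:", sum(rainy_days) / len(rainy_days))

# Fair coin: the relative frequency of heads approaches 0.5 as tosses grow.
for tosses in (10, 1_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(tosses))
    print(tosses, "tosses ->", heads / tosses)
```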
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8319316506385803, "perplexity": 292.0512932571954}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949331.26/warc/CC-MAIN-20230330132508-20230330162508-00606.warc.gz"}
https://quantumcomputing.stackexchange.com/questions/5794/what-is-a-complementary-map?noredirect=1
# What is a complementary map? I have a quantum map described by the following Kraus operators $$A_0 = c_0 \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}, \qquad A_1 = c_1 \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix},$$ such that $$c_0^2 + c_1^2 = 1$$. I want to know what is a complementary map and how to construct the same for the above-mentioned channel? Edit 1: Checked for some literature. Here is the definition of the complementary map equations 37 and 38. • Please do not drastically edit your question after you've already received and accepted an answer based on a previous version. If you need to add something mention it in "Edit 3". – Sanchayan Dutta Apr 10 '19 at 12:13 • The question is too broad and independent of the form of c's. Moreover, it is better to use map rather than channel. – Tobias Fritzn Apr 10 '19 at 13:05 • Ideally, that should have been taken care of before this was answered, not after. – Sanchayan Dutta Apr 10 '19 at 13:10 • Yeah, but it was due to lack of information at that time. I am sure these edits will make it easy for people to search. – Tobias Fritzn Apr 10 '19 at 13:12 Let's start by finding a complementary channel for any channel given by a Kraus representation $$\Phi(X) = \sum_{k=1}^n A_k X A_k^{\dagger}.$$ To make the necessary equations clear, let us assume that the channel has the form $$\Phi:\mathrm{L}(\mathcal{X})\rightarrow \mathrm{L}(\mathcal{Y})$$ for finite-dimensional Hilbert spaces $$\mathcal{X}$$ and $$\mathcal{Y}$$. Let us also define $$\mathcal{Z} = \mathbb{C}^n$$; the complementary channel we will define will take the form $$\Psi:\mathrm{L}(\mathcal{X})\rightarrow \mathrm{L}(\mathcal{Z})$$. (For the channel in the question itself, we will have $$\mathcal{X}$$, $$\mathcal{Y}$$, and $$\mathcal{Z}$$ all equal to $$\mathbb{C}^2$$, but it helps nevertheless to assign different names to these spaces.) Define an operator $$A = \sum_{k=1}^n A_k \otimes | k\rangle,$$ which is a linear operator mapping $$\mathcal{X}$$ to $$\mathcal{Y}\otimes\mathcal{Z}$$. This gives us a Stinespring representation $$\Phi(X) = \operatorname{Tr}_{\mathcal{Z}} \bigl( A X A^{\dagger}\bigr).$$ The channel $$\Psi(X) = \operatorname{Tr}_{\mathcal{Y}} \bigl( A X A^{\dagger}\bigr)$$ is therefore complementary to $$\Phi$$. We can simplify this expression by observing that $$A X A^{\dagger} = \sum_{j=1}^n \sum_{k=1}^n A_j X A_k^{\dagger} \otimes | j \rangle \langle k |,$$ so that $$\Psi(X) = \sum_{j=1}^n \sum_{k=1}^n \operatorname{Tr}\bigl(A_j X A_k^{\dagger}\bigr) | j \rangle \langle k |.$$ There's not too much more we can do with this, except perhaps to use the cyclic property of the trace to obtain the expression $$\Psi(X) = \sum_{j=1}^n \sum_{k=1}^n \operatorname{Tr}\bigl(A_k^{\dagger} A_j X\bigr) | j \rangle \langle k |.$$ Now let's plug in the specific operators from the question to obtain $$\Psi(X) = c_0^2 \operatorname{Tr}(X) | 0 \rangle \langle 0 | + c_1^2 \operatorname{Tr}(X) | 1 \rangle \langle 1 | + c_0 c_1 \operatorname{Tr}(\sigma_z X) | 0 \rangle \langle 1 | + c_0 c_1 \operatorname{Tr}(\sigma_z X) | 1 \rangle \langle 0 |.$$ Here $$\sigma_z$$ denotes the Pauli-Z operator, which we get because $$A_0^{\dagger} A_1 = A_1^{\dagger}A_0 = c_0 c_1 \sigma_z$$. (I am assuming $$c_0$$ and $$c_1$$ are real numbers.) 
The expression may look a bit nicer in matrix form: $$\Psi\begin{pmatrix} \alpha & \beta\\ \gamma & \delta\end{pmatrix} = \begin{pmatrix} c_0^2(\alpha + \delta) & c_0 c_1 (\alpha - \delta)\\ c_0 c_1 (\alpha - \delta) & c_1^2 (\alpha + \delta) \end{pmatrix}.$$ Finally, the question asks for Kraus operators of $$\Psi$$, which we can get by computing the Choi operator of $$\Psi$$. In general, this is the operator $$J(\Psi) = \sum_{j=1}^n\sum_{k=1}^n \Psi(|j\rangle\langle k|) \otimes |j\rangle\langle k|,$$ and in this particular case we obtain $$J(\Psi) = \begin{pmatrix} c_0^2 & 0 & c_0 c_1 & 0\\ 0 & c_0^2 & 0 & -c_0 c_1 \\ c_0 c_1 & 0 & c_1^2 & 0\\ 0 & -c_0 c_1 & 0 & c_1^2 \end{pmatrix}.$$ This operator has rank 2, which means just 2 Kraus operators suffice. We can get them through a spectral decomposition of $$J(\Psi)$$. Specifically, we have $$J(\Psi) = \begin{pmatrix} c_0\\ 0\\ c_1\\ 0 \end{pmatrix} \begin{pmatrix} c_0 & 0 & c_1 & 0 \end{pmatrix} + \begin{pmatrix} 0\\ c_0\\ 0\\ -c_1 \end{pmatrix} \begin{pmatrix} 0 & c_0 & 0 & -c_1 \end{pmatrix},$$ and by "folding up" these vectors we get Kraus operators: $$\Psi(X) = B_0 X B_0^{\dagger} + B_1 X B_1^{\dagger}$$ where $$B_0 = \begin{pmatrix} c_0 & 0\\ c_1 & 0 \end{pmatrix} \quad\text{and}\quad B_1 = \begin{pmatrix} 0 & c_0 \\ 0 & -c_1 \end{pmatrix}.$$ • Thank you so much, @John Watrous. Just a small request. How to obtain the Choi operator? – Tobias Fritzn Mar 28 '19 at 18:13 • The answer now defines the Choi operator. – John Watrous Mar 28 '19 at 18:54 • Hello, @John Watrous. Before Choi operator, everything was nice 2-dimensional. What does the Choi operator actually ("physically") do? – Tobias Fritzn Mar 29 '19 at 5:38 • To be precise, if $\Psi$ maps a 2D state to a 2D state, why is it that in order to get the Kraus operators, one has to perform a "bipartite" kind of operation? – Tobias Fritzn Mar 29 '19 at 7:02 • The Choi operator of a channel is useful in multiple ways, including the fact that it provides a mechanical way to compute Kraus operators, which is how it was used in this answer. If you would like to know more about Choi operators, let me suggest that you ask that as a separate question, and I am sure you will get some informative answers. – John Watrous Mar 29 '19 at 12:40
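The derivation above is easy to sanity-check numerically. The NumPy sketch below (with $c_0, c_1$ set to arbitrary values satisfying $c_0^2+c_1^2=1$) builds the complementary map from the formula $\Psi(X)_{jk}=\operatorname{Tr}(A_k^{\dagger} A_j X)$, forms its Choi operator, and extracts Kraus operators from the spectral decomposition; the recovered Kraus operators may differ from $B_0, B_1$ by a unitary mixing, since Kraus representations are not unique.

```python
import numpy as np

c0, c1 = 0.6, 0.8                                  # example values with c0^2 + c1^2 = 1
A = [c0 * np.eye(2), c1 * np.diag([1.0, -1.0])]    # Kraus operators of the original map

def comp_map(X):
    # Psi(X)_{jk} = Tr(A_k^dagger A_j X)
    return np.array([[np.trace(A[k].conj().T @ A[j] @ X) for k in range(2)]
                     for j in range(2)])

def choi(channel, dim=2):
    J = np.zeros((dim * dim, dim * dim), dtype=complex)
    for j in range(dim):
        for k in range(dim):
            E = np.zeros((dim, dim)); E[j, k] = 1.0
            J += np.kron(channel(E), E)            # sum_{j,k} Psi(|j><k|) (x) |j><k|
    return J

J = choi(comp_map)
vals, vecs = np.linalg.eigh(J)
kraus = [np.sqrt(v) * vecs[:, i].reshape(2, 2)     # "fold up" each weighted eigenvector
         for i, v in enumerate(vals) if v > 1e-12]

# Check: the recovered Kraus operators reproduce the complementary map on a random input.
X = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
rebuilt = sum(K @ X @ K.conj().T for K in kraus)
print(len(kraus), np.allclose(rebuilt, comp_map(X)))   # 2, True
```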
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 35, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8929538130760193, "perplexity": 307.008133320638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401624636.80/warc/CC-MAIN-20200929025239-20200929055239-00543.warc.gz"}
http://www.physicsforums.com/showthread.php?p=3813286
# Conserved quantities in the Doran Metric?

by m4r35n357 Tags: conserved, doran, metric, quantities

P: 82 I've been doing some amateur simulations of particle trajectories in a few well-known metrics (using GNU Octave and Maxima), and in the case of the Schwarzschild and Gullstrand-Painleve metrics I have the ability to check my results using freely available equations for conservation of energy and angular momentum. In the case of the Doran metric however, the geodesic equations are far messier(!), but I believe I now have a "correct" simulation. I would like to check this also, but have not seen any reference to equations for conserved quantities for this metric. Has anyone here looked into this, or does anyone know any useful links, or are there any computer algebra wizards that can help?

P: 95 What's the Doran metric? If it doesn't have (m)any symmetries, don't hold your breath for conserved quantities. If it does, work them out yourself: the dot product of a Killing vector with the four-velocity is conserved along a geodesic.

Physics Sci Advisor PF Gold P: 6,039 By the "Doran metric", do you mean the Doran chart on Kerr spacetime? This chart is described, for example, by Matt Visser in this article (Section 7): http://arxiv.org/abs/0706.0622 If that is what you mean, then the conserved quantities are dictated, as Sam Gralla notes, by the symmetries of the spacetime. These can be read off, roughly speaking, by looking at the metric and seeing which coordinates it doesn't depend on. (I say "roughly speaking" because you need to be using the right set of coordinates for this to work. Strictly speaking, the symmetries are captured by the Killing vectors of the spacetime; the "right" set of coordinates is one where as many coordinates as possible match up with Killing vectors.)

Emeritus Sci Advisor P: 7,596 Some remarks: If a metric has explicit time symmetry, i.e. $t$ does not appear in the metric, then the vector with components $\langle 1,0,0,0\rangle$, which we will call $\xi^a$, is a Killing vector of that metric. And we can write the dot product $E = g_{ab} \, \xi^a \, u^b = \text{constant}$ along any geodesic, $u^b$ being the tangent vector of that geodesic at any point, the same $u^b$ that appears in the geodesic equation.

To confirm this (including to prevent any errors from confusing vectors with one-forms, or other mistakes in interpreting the notation, or typos in my post) you should be able to confirm that one of the geodesic equations is equivalent to $dE/d\tau = 0$, where you expand the total derivative $dE/d\tau$, taken along the geodesic curve parameterized by $\tau$ as usual, using the chain rule.

Example: for the Schwarzschild metric we can write, for some geodesic parameterized by $\tau$ so that we have the geodesic curve $t(\tau), r(\tau), \theta(\tau), \phi(\tau)$, and the tangent vector $u^a$ of this curve with components $u^t = dt/d\tau, u^{r} = dr/d \tau, u^{\theta} = d \theta / d \tau, u^{\phi} = d \phi / d \tau$,
$$E(\tau) = -(1-2m/r) u^t = -\left(1 - \frac{2m} {r(\tau)} \right) \left( \frac{ d \, t(\tau) } { d \, \tau } \right)$$ and we can expand using the chain rule $$dE / d\tau = -(1-2m/r) \frac{d^2 t}{d \tau^2}- \frac{2m}{r^2} \left( \frac{d \, r } {d \tau} \right) \left( \frac{d \, t} {d \tau} \right) = 0,$$ which with some algebra can be seen to be identical to one of the standard geodesic equations for the Schwarzschild metric.
Thus we see that one of the geodesic equations is equivalent to a statement that E is constant along a geodesic curve. Furthermore, if you convert to a coordinate chart that doesn't have explicit time symmetry, the tensor equations above will still be true, but the Killing vector will have different components. And because the Killing vector transforms like a vector, you can use the standard vector transformation laws to find the components in the new coordinate chart - so you can convert the Kerr Killing vectors to the Doran chart, for instance. P: 82 OK thanks for the replies so far, for information my main link for the metric is arxiv.org/pdf/gr-qc/0411060 (sorry but the whole link is too messy). You might be able to imagine that the Christoffel symbols (34 non-unique) are hideous to work with (at least the output from Maxima, which is very poorly factorized) so I don't fancy my chances at much of the maths on my own, hence my question here. I don't have GRTensor and I just hoped that it would do a better job . . . . Pervect: I will have to take my time to absorb your comments fully, I think you have given me some good information, thank you. Physics PF Gold P: 6,039 Quote by m4r35n357 OK thanks for the replies so far, for information my main link for the metric is arxiv.org/pdf/gr-qc/0411060 This is the same Doran metric that is treated in the paper I linked to; the two treatments give pretty much the same information. P: 608 Quote by m4r35n357 I've been doing some amateur simulations of particle trajectories in a few well-known metrics (using GNU Octave and Maxima), and in the case of the Schwarzschild and Gullstrand-Painleve metrics I have the ability to check my results using freely available equations for conservation of energy and angular momentum. In the case of the Doran metric however, the geodesic equations are far messier(!), but I believe I now have a "correct" simulation. I would like to check this also, but have not seen any reference to equations for conserved quantities for this metric. Has anyone here looked into this, or does anyone know any useful links, or are there any computer algebra wizards that can help? You might find the following links useful- 'A new form of the Kerr solution' Chris Doran http://lanl.arxiv.org/abs/gr-qc/9910099 'Painleve-Gullstrand Coordinates for the Kerr Solution' Jose Natario http://arxiv.org/abs/0805.0206v2 P: 82 Quote by stevebd1 You might find the following links useful- 'A new form of the Kerr solution' Chris Doran http://lanl.arxiv.org/abs/gr-qc/9910099 'Painleve-Gullstrand Coordinates for the Kerr Solution' Jose Natario http://arxiv.org/abs/0805.0206v2 Cheers for those, think I need to take a break from doing and go back to reading for a short while . . . P: 82 Quote by pervect and we can expand using the chain rule erm, sorry to sound stupid, but I'm sort of seeing this example as the product rule, do you really mean the chain rule? [EDIT] OK I see both now, np ;) P: 82 Quote by pervect Furthermore, if you convert to a coordinate chart that doesn't have explicit time symmetry, the tensor equations above will still be true, but the Killing vector will have different components. And because the Killing vector transforms like a vector, you can use the standard vector transformation laws to find the components in the new coordinate chart - so you can convert the Kerr Killing vectors to the Doran chart, for instance. Firstly, thanks for the clearest description I have heard regarding Killing vectors and conserved quantities in GR. 
With your help I now have two conserved quantities in my Doran metric simulations. It turned out that my initial guess for the expressions was right, I just had an error in the implementation, duh. But from your comments I now know why my guess was the right thing to do . . . I am also now intrigued at the possibility of replacing two huge geodesic equations with much simpler expressions, Maxima permitting ;) Still working my way through the last part of your post (quoted). As it happens I need to implement some control over coordinates in order to set initial conditions in a consistent way.
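For anyone following this thread later: pervect's check (dot the Killing vector into the four-velocity and watch it stay constant along a geodesic) is easy to try numerically in the Schwarzschild case before moving to the Doran chart. The Python/SciPy sketch below is my own illustration, not the OP's Octave/Maxima code: it integrates the equatorial Schwarzschild geodesic equations with hand-picked, roughly circular initial data (geometrized units, $m = 1$) and monitors $E = (1-2m/r)\,dt/d\tau$ and $L = r^2\,d\phi/d\tau$.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 1.0

def geodesic(tau, y):
    # Equatorial Schwarzschild geodesic equations (theta = pi/2).
    t, r, phi, td, rd, phid = y
    f = 1.0 - 2.0 * m / r
    tdd = -(2.0 * m / (r * r * f)) * rd * td
    rdd = (-(m * f / r**2) * td**2
           + (m / (r * r * f)) * rd**2
           + (r - 2.0 * m) * phid**2)
    phidd = -(2.0 / r) * rd * phid
    return [td, rd, phid, tdd, rdd, phidd]

# Hand-picked initial data: roughly circular orbit at r = 10 m.
y0 = [0.0, 10.0, 0.0, 1.1952, 0.0, 0.0378]
sol = solve_ivp(geodesic, (0.0, 500.0), y0, rtol=1e-10, atol=1e-12)

r, td, phid = sol.y[1], sol.y[3], sol.y[5]
E = (1.0 - 2.0 * m / r) * td     # Killing energy: should stay constant
L = r**2 * phid                  # Killing angular momentum: should stay constant
print("E drift:", E.max() - E.min())
print("L drift:", L.max() - L.min())
```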
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8842285871505737, "perplexity": 439.9809986700783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997895170.20/warc/CC-MAIN-20140722025815-00165-ip-10-33-131-23.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/knematics-finding-velocity.268656/
# Kinematics: finding velocity

1. Nov 1, 2008

### davo

1. The problem statement, all variables and given/known data

A person throws a rock horizontally from the roof of a 50 m building. The rock lands 70 m from the base of the building. How long was the rock in the air in s? I got 3.194. The online homework thingy (WebAssign) said I was right. With what velocity was the rock thrown in m/s?

2. Relevant equations

I used the Xf equation and solved for time. Don't I just plug the time I got into the last equation and actually solve it? Well I tried that but I just keep getting it wrong.

3. The attempt at a solution

70 = -4.9(3.194)^2 + Vo(3.194) Is this right?

2. Nov 1, 2008

### djeitnstine

The way you need to set it up is to imagine the rock is simply dropping, because the x velocity doesn't matter. So s = vt + 1/2 at^2, with s = 50 m, v = 0, a = 9.8 m/s^2.

3. Nov 1, 2008

### djeitnstine

50 = 0t + 1/2 9.8 t^2

4. Nov 1, 2008

### davo

what does s represent in this?

5. Nov 1, 2008

### davo

I think I did my algebra wrong because it said it was wrong. (49.88+50)/3.194 = 31.305

6. Nov 1, 2008

### djeitnstine

sorry s is displacement

7. Nov 1, 2008

### djeitnstine

50 = 0t + 1/2 9.8 t^2 this is the equation; the height 50 is equal to zero plus 1/2 9.8*t^2 (the rock is modeled as only dropping from 50 meters above the ground; the x velocity has nothing to do with it)
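For readers landing here later, the two-step calculation the replies are pointing at can be written out in a few lines (Python, with g = 9.8 m/s² as in the thread):

```python
import math

h, x, g = 50.0, 70.0, 9.8       # drop height (m), horizontal range (m), gravity (m/s^2)

t = math.sqrt(2 * h / g)        # the vertical drop alone fixes the time of flight
v0 = x / t                      # horizontal velocity is constant, so v0 = range / time

print(f"time of flight = {t:.3f} s")     # ~3.194 s, matching the accepted time
print(f"throw speed    = {v0:.2f} m/s")  # ~21.9 m/s
```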
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9231993556022644, "perplexity": 2367.8829691716787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123549.87/warc/CC-MAIN-20170423031203-00017-ip-10-145-167-34.ec2.internal.warc.gz"}
https://community.teradata.com/t5/Database/What-is-spool-space-in-teradata-explain-with-examples/m-p/61293/highlight/true
Enthusiast

## What is spool space in Teradata? Explain with examples.

Ambassador

## Re: What is spool space in Teradata? Explain with examples.

Short question, short answer: RTFM. Dieter

Enthusiast

## Re: What is spool space in Teradata? Explain with examples.

Unused perm space is used as spool space in the system. Spool is used to hold the intermediate results of queries and volatile tables.

Enthusiast

## Re: What is spool space in Teradata? Explain with examples.

We have a rule that if a query takes more than one terabyte of spool we are supposed to abort it. My question is: let's say the total spool is used up by a query; what is the expected behavior of the system? Will the system restart, or what can happen? The next question is related to the first line: if we have around 10 terabytes of spool, is it logical to abort a query that has just crossed 1 TB of spool? I think we should allow it more spool, up to 9 TB or so, if there are no other sessions. Please provide your analysis of the above cases. Thanks
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8321002721786499, "perplexity": 2736.592692269759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528290.72/warc/CC-MAIN-20190722221756-20190723003756-00136.warc.gz"}
http://mathhelpforum.com/algebra/38342-two-problems-solve.html
# Math Help - Two Problems to Solve

1. ## Two Problems to Solve

Hello, I am sorry that I do not have a better title for these two problems. I am an adult taking an Algebra I class equivalent to ninth grade algebra. If I could learn to solve these two, they would be helpful as examples to solve the rest. Thank you so much for your willingness to help me!

1. A box has length (l) 3 inches less than the height (h) and width (w) 9 inches less than h. The volume is 324 cubic inches. a. Write an equation you can use to solve for the dimensions. b. What are the dimensions of the box?

2. Write $\frac{a}{b} = \frac{c}{d}$ in 5 different ways.

2. Hello,

Originally Posted by VAP
Hello, I am sorry that I do not have a better title for these two problems. I am an adult taking an Algebra I class equivalent to ninth grade algebra. If I could learn to solve these two, they would be helpful as examples to solve the rest. Thank you so much for your willingness to help me! 1. A box has length (l) 3 inches less than the height (h) and width (w) 9 inches less than h. The volume is 324 cubic inches. a. Write an equation you can use to solve for the dimensions. b. What are the dimensions of the box?

Translate the text: l is 3 inches less than h. --> $\boxed{l=h-3}$ w is 9 inches less than h. --> $\boxed{w=h-9}$ The volume V is: $V=l \cdot w \cdot h$ Now replace such that you get V with respect to h. Then, solve for h.

3. Thank you so much! Can you help with the other?

4. Originally Posted by VAP
... 1. A box has length (l) 3 inches less than the height (h) and width (w) 9 inches less than h. The volume is 324 cubic inches. a. Write an equation you can use to solve for the dimensions. b. What are the dimensions of the box? 2. Write $\frac{a}{b} = \frac{c}{d}$ in 5 different ways.

to #1: $l = h-3$ $w = h-9$ Since $V = l\cdot w \cdot h$ the volume can be calculated by: $V= 324 = (h-3)(h-9)\cdot h = h^3-12h^2+27h$ Solve for h: $h^3-12h^2+27h -324=0$ $h^2(h-12) + 27(h-12) = 0$ $(h^2+27)(h-12)=0$ A product of 2 factors is zero if one factor equals zero: $h^2+27>0$ Therefore $h-12 = 0~\implies~ h = 12$ The dimensions of the box are: $l=9; w=3; h=12$

to #2: I assume that you mean: \begin{aligned}\frac ab=\frac cd & \implies & ad=bc \\ & \implies & \frac ac=\frac db \\ & \implies & \frac ca=\frac bd \\ & \implies & \frac{ad}b=c \\ & \implies & \frac{ad}c=b \end{aligned}

5. Thank you for expanding on problem #1. Yes, that's what I meant for problem #2. I really appreciate it.
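As a quick check of the factorisation in reply #4, the cubic can be handed to a computer algebra system; this short sympy sketch is not part of the original thread:

```python
from sympy import symbols, solve, Eq

h = symbols('h')
roots = solve(Eq((h - 3) * (h - 9) * h, 324), h)   # V = l*w*h with l = h-3, w = h-9
print([r for r in roots if r.is_real])             # [12]  ->  l = 9, w = 3, h = 12
```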
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9892581105232239, "perplexity": 871.6713967549157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929012.53/warc/CC-MAIN-20150521113209-00105-ip-10-180-206-219.ec2.internal.warc.gz"}
http://www.maths.usyd.edu.au/u/AlgebraSeminar/17abstracts/rostam17.html
# Salim Rostam (Université de Versailles Saint-Quentin-en-Yvelines) ## A KLR-like presentation for the Hecke algebra of $$G(r,p,n)$$ The Hecke algebra of $$G(r,p,n)$$ can be seen as the fixed point subalgebra of the Hecke algebra of $$G(r,1,n)$$ (also known as the Ariki-Koike algebra) for a certain automorphism $$\sigma$$. Using an isomorphism of Brundan and Kleshchev with a KLR algebra, we find an analogue of $$\sigma$$ defined from the KLR presentation. Moreover, it turns out that we can give a KLR-like presentation of the fixed point subalgebra.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9655250310897827, "perplexity": 475.69727164017377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106984.52/warc/CC-MAIN-20170820185216-20170820205216-00114.warc.gz"}
https://csr.ufmg.br/dinamica/dokuwiki/doku.php?id=agent_based_model:gini_calculation
## Gini Coefficient Calculation

The Gini coefficient (also known as the Gini index or Gini ratio) is a measure of statistical dispersion developed by the Italian statistician and sociologist Corrado Gini and published in 1912. The Gini coefficient measures the inequality among values of a frequency distribution (for example, levels of income). A Gini coefficient of zero expresses perfect equality, where all values are the same (for example, where everyone has an exactly equal income).

The Gini coefficient is usually defined mathematically based on the Lorenz curve, which plots the proportion of the total income of the population (y axis) that is cumulatively earned by the bottom x% of the population (see diagram). The line at 45 degrees thus represents perfect equality of incomes. The Gini coefficient can then be thought of as the ratio of the area that lies between the line of equality and the Lorenz curve to the total area under the line of equality. The area of concentration can be found approximately by computing the trapezoidal area under the Lorenz curve and subtracting it from the area under the line of equality.

In this work we calculate the inequality of the distribution of calories among rabbits at the end of each step. The input to this submodel is a map in which the pixel value is the amount of calories of each rabbit. First, the model divides the population into groups (k) and then calculates the inequality of the calorie distribution among these groups through the cumulative relative frequency of the population (X) and calories (Y).
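A minimal Python sketch of the calculation described above; the per-rabbit calorie values are synthetic, and the number of groups k is a parameter:

```python
import numpy as np

def gini(values, k=10):
    """Gini coefficient via the trapezoidal approximation of the Lorenz curve."""
    v = np.sort(np.asarray(values, dtype=float))
    groups = np.array_split(v, k)                       # divide the population into k groups
    X = np.concatenate(([0.0], np.cumsum([len(g) for g in groups]) / len(v)))    # cumulative population share
    Y = np.concatenate(([0.0], np.cumsum([g.sum() for g in groups]) / v.sum()))  # cumulative calorie share
    area_under_lorenz = np.trapz(Y, X)                  # trapezoidal area under the Lorenz curve
    return 1.0 - 2.0 * area_under_lorenz                # G = A / (A + B), with A + B = 1/2

calories = np.random.gamma(shape=2.0, scale=50.0, size=500)   # synthetic rabbit calories
print(gini(calories))   # larger for more unequal distributions; 0.0 if all values are equal
```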
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9924867749214172, "perplexity": 719.167403454827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00549.warc.gz"}
http://thecockrells.com/nifa-rnfa-qhzvcvs/difference-between-homogeneous-and-non-homogeneous-partial-differential-equation-14ea91
Differential equations (DEs) come in many varieties, and different varieties of DEs can be solved using different methods. Differential equations are equations involving a function and one or more of its derivatives. For example, the differential equation below involves the function $y$ and its first derivative $\dfrac{dy}{dx}$. The function is often thought of as an "unknown" to be solved for, similarly to how $x$ is thought of as an unknown number, to be solved for, in an algebraic equation like $x^2 - 3x + 2 = 0$. In mathematics, a partial differential equation (PDE) is an equation which imposes relations between the various partial derivatives of a multivariable function. If all the terms of a PDE contain the dependent variable or its partial derivatives, then such a PDE is called a non-homogeneous partial differential equation, and homogeneous otherwise. (**) Note that the two equations have the same left-hand side; (**) is just the homogeneous version of (*), with g(t) = 0. Each such nonhomogeneous equation has a corresponding homogeneous equation: y″ + p(t) y′ + q(t) y = 0. Homogeneous Differential Equations Introduction. The solutions of a homogeneous system with 1 and 2 free variables. Here are some examples: Solving a differential equation means finding the value of the dependent […] Definition 1: A linear system of equations Ax = b is called homogeneous if b = 0, and non-homogeneous if b ≠ 0. What is the difference between homogeneous and inhomogeneous differential equations and how are they used to help solve questions (or how do you solve questions with these)? Answer: Homogeneous differential equations involve only derivatives of y and terms involving y, and they're set to 0, as in this equation: […] In addition to this distinction they can be further distinguished by their order. Ordinary Differential Equations (ODE): an ordinary differential equation is a differential equation that depends on only one independent variable. Linear Differential Equation; Non-linear Differential Equation; Homogeneous Differential Equation; Non-homogeneous Differential Equation; a detailed description of each type of differential equation is given below: 1. Ordinary Differential Equation. In quaternionic differential calculus at least two homogeneous second order partial differential equations exist. Nonhomogeneous differential equations are the same as homogeneous differential equations, except they can have terms involving only x (and constants) on the right side, as in this equation: […] In the above six examples eqn 6.1.6 is non-homogeneous whereas the first five equations are homogeneous. You can classify DEs as ordinary and partial DEs. In the above four examples, Example (4) is non-homogeneous whereas the first three equations are homogeneous. What is the difference between homogeneous and non-homogeneous differential equations? Notice that x = 0 is always a solution of the homogeneous equation.
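To make the homogeneous / non-homogeneous distinction concrete, here is a small sympy sketch; the example equations are my own choice and are not taken from the page above:

```python
from sympy import Function, Eq, dsolve, symbols, sin

x = symbols('x')
y = Function('y')

homogeneous = Eq(y(x).diff(x, 2) + y(x), 0)               # right-hand side is zero
non_homogeneous = Eq(y(x).diff(x, 2) + y(x), sin(2 * x))  # non-zero forcing term

print(dsolve(homogeneous, y(x)))        # y(x) = C1*sin(x) + C2*cos(x)
print(dsolve(non_homogeneous, y(x)))    # same homogeneous part plus the particular term -sin(2*x)/3
```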
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.866587221622467, "perplexity": 338.8474335235935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381230.99/warc/CC-MAIN-20210307231028-20210308021028-00456.warc.gz"}
https://stacks.math.columbia.edu/tag/00HB
The Stacks Project — Tag 00HB

Definition 10.38.1. Let $R$ be a ring.

1. An $R$-module $M$ is called flat if whenever $N_1 \to N_2 \to N_3$ is an exact sequence of $R$-modules the sequence $M \otimes_R N_1 \to M \otimes_R N_2 \to M \otimes_R N_3$ is exact as well.
2. An $R$-module $M$ is called faithfully flat if the complex of $R$-modules $N_1 \to N_2 \to N_3$ is exact if and only if the sequence $M \otimes_R N_1 \to M \otimes_R N_2 \to M \otimes_R N_3$ is exact.
3. A ring map $R \to S$ is called flat if $S$ is flat as an $R$-module.
4. A ring map $R \to S$ is called faithfully flat if $S$ is faithfully flat as an $R$-module.

The code snippet corresponding to this tag is part of the file algebra.tex and is located in lines 8299–8317 (see updates for more information).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9877675771713257, "perplexity": 463.0756295244969}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890928.82/warc/CC-MAIN-20180121234728-20180122014728-00449.warc.gz"}
http://www.ck12.org/book/CK-12-Middle-School-Math-Concepts-Grade-7/r4/section/5.11/
# 5.11: Proportions Using Cross Products (CK-12 Middle School Math Concepts - Grade 7)

Remember Manuel who was reading all of the medieval books on knights in the Proportions Concept? Well after he finished reading the series, he loaned it to his friend Rafael. Rafael is enjoying the series as much as Manuel did. In five weeks, Rafael had already finished 8 of the 12 books. It took Manuel 7.5 weeks to read all 12 books. Will Rafael and Manuel finish the series in the same amount of time? Are they reading at the same rate? How can you figure this out? To figure this out, you will need to know how to determine if two ratios form a proportion. If the reading rate of the boys is the same, then the ratios will form a proportion. Use this Concept to learn how to solve proportions using cross products. Then you will know how to figure out this dilemma.

### Guidance

Previously we learned that a proportion states that two ratios are equivalent. Here are two proportions. $\frac{a}{b} = \frac{c}{d} \qquad \text{or} \qquad a : b = c : d$ In a proportion, the means are the two terms that are closest together when the proportion is written with colons. So, in $a : b = c : d$, the means are $b$ and $c$. The extremes are the terms in the proportion that are furthest apart when the proportion is written with colons. So, in $a : b = c : d$, the extremes are $a$ and $d$. In the last lesson, you learned how to solve proportions by using proportional reasoning. We can also solve a proportion for a variable in another way. This is where the cross products property of proportions comes in. What is the Cross Products Property of Proportions? The Cross Products Property of Proportions states that the product of the means is equal to the product of the extremes. You can find these cross products by cross multiplying, as shown below. $\frac{a}{b} = \frac{c}{d} \quad\Rightarrow\quad b \cdot c = a \cdot d$ For example, consider $\frac{a}{4} = \frac{6}{8}$. To solve this, we can multiply the means and the extremes: $a \cdot 8 = 4 \cdot 6$, so $8a = 24$. Next, we solve the equation for the missing variable. To do this, we use the inverse operation. Multiplication is in the problem, so we use division to solve it. We divide both sides by 8: $\frac{8a}{8} = \frac{24}{8}$, so $a = 3$. Solve for each variable in the numerator by using cross products.

#### Example A
$\frac{x}{5} = \frac{6}{10}$ Solution: $x = 3$

#### Example B
$\frac{a}{9} = \frac{15}{27}$ Solution: $a = 5$

#### Example C
$\frac{b}{4} = \frac{12}{16}$ Solution: $b = 3$

Here is the original problem once again. Remember Manuel who was reading all of the medieval books on knights? Well after he finished reading the series, he loaned it to his friend Rafael. Rafael is enjoying the series as much as Manuel did. In five weeks, Rafael had already finished 8 of the 12 books. It took Manuel 7.5 weeks to read all 12 books. Will Rafael and Manuel finish the series in the same amount of time? Are they reading at the same rate? How can you figure this out? Let's write a proportion to solve this problem.
$\frac{8 \ books}{5 \ weeks} = \frac{12 \ books}{7.5 \ weeks}$ Next, we can use cross products to see if the two ratios form a proportion. If they do, then the two boys will finish the series in the same amount of time. $8 \times 7.5 = 60$ and $5 \times 12 = 60$. The two cross products are equal, so the two ratios form a proportion. The two boys will finish the series of books in the same amount of time.

### Guided Practice

Here is one for you to try on your own. The ratio of boys to girls in the school chorus is 4 to 5. There are a total of 20 boys in the chorus. How many total students are in the chorus? The ratio given, 4 to 5, compares boys to girls. However, the question asks for the total number of students in the chorus. One way to set up a proportion for this problem would be to write two equivalent ratios, each comparing boys to total students. The ratio of boys to girls is 4 to 5. We can use this ratio to find the ratio of boys to total students. $\frac{boys}{total} = \frac{boys}{boys + girls} = \frac{4}{4 + 5} = \frac{4}{9}$ You know that there are 20 boys in the chorus. The total number of students is unknown, so represent that as $x$: $\frac{boys}{total} = \frac{20}{x}$ Set those ratios equal to form a proportion. Then cross multiply to solve for $x$: $\frac{4}{9} = \frac{20}{x}$, so $9 \cdot 20 = 4 \cdot x$, giving $180 = 4x$, $\frac{180}{4} = \frac{4x}{4}$, and $x = 45$. So, there are a total of 45 students in the school chorus.

### Explore More

Directions: Use cross products to find the value of the variable in each proportion.

1. $\frac{6}{10} = \frac{x}{5}$
2. $\frac{2}{3} = \frac{x}{9}$
3. $\frac{4}{9} = \frac{a}{45}$
4. $\frac{7}{8} = \frac{a}{4}$
5. $\frac{b}{8} = \frac{5}{16}$
6. $\frac{6}{3} = \frac{x}{9}$
7. $\frac{4}{x} = \frac{8}{10}$
8. $\frac{1.5}{y} = \frac{3}{9}$
9. $\frac{4}{11} = \frac{c}{33}$
10. $\frac{2}{6} = \frac{5}{y}$
11. $\frac{2}{10} = \frac{5}{x}$
12. $\frac{4}{12} = \frac{6}{n}$
13. $\frac{5}{r} = \frac{70}{126}$
14. $\frac{4}{14} = \frac{14}{k}$
15. $\frac{8}{w} = \frac{6}{3}$
16. $\frac{2}{5} = \frac{17}{a}$
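As an added illustration (not part of the CK-12 lesson), the cross-products rule is easy to mechanize; the helper name `solve_proportion` below is an invented example, and the test values come from the worked examples above.

```python
# Minimal sketch: solve x/b = c/d for x via the cross-products property x*d = b*c.
from fractions import Fraction

def solve_proportion(b: int, c: int, d: int) -> Fraction:
    """Return x such that x/b = c/d, using cross multiplication."""
    return Fraction(b * c, d)

print(solve_proportion(5, 6, 10))    # x/5 = 6/10  ->  3
print(solve_proportion(9, 15, 27))   # a/9 = 15/27 ->  5
# Chorus example: 4/9 = 20/x  ->  x = 9*20/4 = 45
print(Fraction(9 * 20, 4))           # 45
```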
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 40, "texerror": 0, "math_score": 0.8062732815742493, "perplexity": 706.2671832266265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131301015.31/warc/CC-MAIN-20150323172141-00040-ip-10-168-14-71.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/vectors-help.398409/
# Vectors help 1. Apr 25, 2010 ### somebodyelse5 I won't post the actual problem with numbers, I just need some direction. My teacher never went over this part of the webwork in class, and we haven't touched on it in physics either. "a" and "b" are both 3D vectors. 1.) I am supposed to find the component of "b" along "a" 2.) I am supposed to find the projection of "b" onto "a" 3.) I am supposed to find the projection of "b" orthogonal to "a" If somebody could shed some light on what I'm actually doing, and maybe give me some direction, it would be greatly appreciated. 2. Apr 25, 2010 ### lanedance do you know about dot products? they could be pretty useful here... 3. Apr 25, 2010 ### somebodyelse5 yes, i know how to find dot and cross products. But it's not as simple as just finding the dot product, is it? 4. Apr 25, 2010 ### LCKurtz Draw a and b with their tails together. Drop a perpendicular from the head of a to the line of vector b, forming a right triangle. The component on b of a is the "shadow" of a on b, which is the b leg of that triangle, and you can see from the picture its length is |a|cos(θ) where θ is the angle between a and b. Notice that if you make a unit vector out of b, call it bhat, then $$|a|\cos\theta = |a||\hat b|\cos\theta = a\cdot \hat b$$ If you multiply that by the unit vector bhat, that makes a vector out of the "shadow", and that gives the projection of a on b. Subtracting that projection from a gives the vector forming the other leg of the triangle, and that is the orthogonal projection.
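As a concrete illustration of the recipe in the last post (added here, not part of the thread), a short numpy sketch; the vectors `a` and `b` are arbitrary example values.

```python
# Component of b along a, projection of b onto a, and the part of b orthogonal to a.
import numpy as np

a = np.array([1.0, 2.0, 2.0])   # example vectors, not from the thread
b = np.array([3.0, 0.0, 4.0])

comp_b_along_a = np.dot(b, a) / np.linalg.norm(a)     # scalar component: |b| cos(theta)
proj_b_onto_a = (np.dot(b, a) / np.dot(a, a)) * a     # vector projection of b onto a
orth_part = b - proj_b_onto_a                          # projection of b orthogonal to a

print(comp_b_along_a)           # 11/3 ≈ 3.667
print(proj_b_onto_a)            # [11/9, 22/9, 22/9]
print(np.dot(orth_part, a))     # ≈ 0, confirming orthogonality
```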
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8552391529083252, "perplexity": 461.0467686846553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648226.72/warc/CC-MAIN-20180323122312-20180323142312-00644.warc.gz"}
http://dharmath.blogspot.com/2012_05_01_archive.html
## Thursday, May 3, 2012

### Sum of randomly selected numbers

In a class with 10 students, each student is allowed to choose a number from the following list: 100, 101, 102, 300, 301, 302. The teacher then collects all chosen numbers and calculates the sum. How many ways are there for the sum to be 2012? Two "ways" are considered distinct if there is at least one student who chooses different numbers.

Variant: In a class with $2n+1$ students, each student is allowed to choose a number from the following list: $n, n+1, -n, -n-1$. The teacher then collects all chosen numbers and calculates the sum. How many ways are there for the sum to be zero?

Solution

Suppose the numbers of students who chose 100, 101, 102, 300, 301, 302 are $a,b,c,d,e,f$ respectively, so we have: $$a+b+c+d+e+f = 10$$ and $$100a + 101b + 102c + 300d + 301e + 302f = 2012$$ Note that we can write 100 as 201-100-1, 101 as 201-100, 102 as 201-100+1, 300 as 201+100-1, 301 as 201+100, 302 as 201+100+1, so the second equation becomes: $$201(a+b+c+d+e+f) + 100(d+e+f-a-b-c) + (c+f-a-d) = 2012$$ Since $a+b+c+d+e+f = 10$ then: $$100(d+e+f-a-b-c) + (c+f-a-d) = 2$$ Now, since $|c+f-a-d| \leq 10$, it must be the case that: $$d+e+f-a-b-c = 0$$ and $$c+f-a-d = 2$$ We also note that when the students choose one of the numbers, they're essentially making two choices: first, they choose if they want to choose small or big numbers (be included in $a+b+c$ or $d+e+f$), and second, they choose if they want their number to end with 0, 1, or 2 (be included in $a+d$, $b+e$ or $c+f$). Thus, the number of ways to satisfy the first constraint $a+b+c = d+e+f = 5$ is $\binom{10}{5}$, which means 5 students choose the small numbers and the rest choose the big numbers. In order to satisfy $c+f-a-d = 2$, we make the following substitutions: $x = a+d, y = b+e, z = c+f$. Then $x+y+z = 10, z-x = 2$. We divide the cases by the values of $z$. Note that once a student chooses to be included in $x$ or $y$ or $z$, he only needs to choose if he wants to choose a big or small number, since each of $x,y,z$ contains exactly one big and one small number. If $z > 6$ then $x > 4$, forcing $y < 0$, impossible. If $z = 6$, then $x = 4, y = 0$. There are $\binom{10}{6} = 210$ ways to choose this. If $z = 5$, then $x = 3, y = 2$, there are $10!/(2!\,5!\,3!) = 2520$ ways. If $z = 4$, then $x = 2, y = 4$, there are $10!/(4!\,4!\,2!) = 3150$ ways. If $z = 3$, then $x = 1, y = 6$, there are $10!/(3!\,6!\,1!) = 840$ ways. If $z = 2$, then $x = 0, y = 8$, there are $\binom{10}{8} = 45$ ways. If $z < 2$ then $x < 0$, impossible. So the total number of ways is $$\binom{10}{5}(210+2520+3150+840+45) = 252 \cdot 6765 = 1\,704\,780$$

Solution to variant

Suppose the numbers of students who chose $-n-1, n+1, -n, n$ are $a,b,c,d$ respectively, so we have: $$a+b+c+d = 2n+1$$ $$(n+1)(b-a) + n(d-c) = 0$$ Note that because $n$ and $n+1$ are relatively prime, we must have $n+1 \mid d-c$. But $|d-c|$ can take any value from $0$ to $2n+1$, so $d-c = 0, n+1, -(n+1)$. If $d-c=0$, then $b-a=0$, which means $a+b+c+d$ would be even, impossible. If $d-c=n+1$, then $b-a = -n$. Combined with $a+b+c+d = 2n+1$ we have $(a,b,c,d) = (n,0,0,n+1)$. (Remember that $a,b,c,d$ have to be non-negative.) This means that $n$ of the students choose $-n-1$ and the other $n+1$ choose $n$. There are $\binom{2n+1}{n}$ ways to do that. The case of $d-c = -n-1$ is similar, where $(a,b,c,d) = (0,n,n+1,0)$, and there are $\binom{2n+1}{n}$ ways. Note that all these ways are disjoint, so the total number of ways is $2\binom{2n+1}{n}$
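A quick way to double-check the count 1,704,780 above (added here, not part of the blog post) is a small dynamic-programming pass over the 10 students:

```python
# Count the ways 10 students choosing from {100,101,102,300,301,302} sum to 2012,
# by dynamic programming over students (sum -> number of ways).
from collections import defaultdict

choices = [100, 101, 102, 300, 301, 302]
ways = defaultdict(int)
ways[0] = 1
for _ in range(10):                    # one DP pass per student
    nxt = defaultdict(int)
    for total, count in ways.items():
        for c in choices:
            nxt[total + c] += count
    ways = nxt

print(ways[2012])                      # 1704780, matching the counting argument
```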
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.985384464263916, "perplexity": 153.46209147548495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647576.75/warc/CC-MAIN-20180321043531-20180321063531-00675.warc.gz"}
https://hal.inria.fr/hal-00755842
# Approximate convex hull of affine iterated function system attractors

ALICE - Geometry and Lighting, Inria Nancy - Grand Est, LORIA - ALGO - Department of Algorithms, Computation, Image and Geometry

Abstract: In this paper, we present an algorithm to construct an approximate convex hull of the attractors of an affine iterated function system (IFS). We construct a sequence of convex hull approximations for any required precision using the self-similarity property of the attractor in order to optimize calculations. Due to the affine properties of IFS transformations, the number of points considered in the construction is reduced. The time complexity of our algorithm is a linear function of the number of iterations and the number of points in the output convex hull. The number of iterations and the execution time increase logarithmically with increasing accuracy. In addition, we introduce a method to simplify the approximation of the convex hull without loss of accuracy.

Document type: Journal article. Chaos, Solitons and Fractals, Elsevier, 2012, 45 (11), pp. 1444-1451. doi:10.1016/j.chaos.2012.07.015 https://hal.inria.fr/hal-00755842 Contributor: Nicolas Ray. Submitted: Thursday 22 November 2012. Last modified: Thursday 22 September 2016. Long-term archiving: Saturday 23 February 2013.

File: ConvexHull.pdf (files produced by the author(s))

Citation: Anton Mishkinis, Christian Gentil, Sandrine Lanquetin, Dmitry Sokolov. Approximate convex hull of affine iterated function system attractors. Chaos, Solitons and Fractals, Elsevier, 2012, 45 (11), pp. 1444-1451. doi:10.1016/j.chaos.2012.07.015. hal-00755842
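For context only, here is a rough sketch of the naive baseline the paper improves on: iterate the affine maps on a point set and take the hull of the generated points. This is NOT the article's algorithm (which exploits self-similarity to reduce the point count); the three Sierpinski-like maps are an arbitrary example IFS chosen for illustration.

```python
# Naive approximation of the convex hull of an affine IFS attractor:
# apply the Hutchinson operator a few times, then take the hull of all points.
import numpy as np
from scipy.spatial import ConvexHull

maps = [
    (0.5 * np.eye(2), np.array([0.0, 0.0])),    # example affine maps (A, b), x -> A x + b
    (0.5 * np.eye(2), np.array([0.5, 0.0])),
    (0.5 * np.eye(2), np.array([0.25, 0.5])),
]

pts = np.array([[0.0, 0.0]])
for _ in range(8):                              # 3^8 = 6561 points after 8 iterations
    pts = np.vstack([pts @ A.T + b for A, b in maps])

hull = ConvexHull(pts)
print(len(hull.vertices), "hull vertices approximate the attractor's convex hull")
```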
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8288909792900085, "perplexity": 2465.938936106079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00138-ip-10-171-10-70.ec2.internal.warc.gz"}
http://claesjohnson.blogspot.com/2017/10/ligo-vs-neo-newtonian-cosmology.html
## Thursday 5 October 2017

### LIGO vs Neo-Newtonian Cosmology

The Earth is pulled in the direction of the present position of the Sun at each instant, right? Let us continue our reflections on LIGO recalling my earlier post on Neo-Newtonian Cosmology as an alternative to Einstein's Cosmology based on Einstein's equations, with related posts here and the discussion about The Hen and the Egg. In Neo-Newtonian Cosmology a gravity potential $\phi (x,t)$ (depending on an Euclidean space coordinate $x$ and time $t$) is connected to mass distribution $\rho (x,t)$ through Poisson's equation

• $\Delta\phi =\rho$

which is interpreted as creation of mass $\rho$ by the action of $\Delta$ upon the gravitational potential $\phi$ through a local instant operation of differentiation. This is different from the standard interpretation in Newtonian cosmology with instead the gravitational potential $\phi$ created from mass $\rho$ by instant action at distance corresponding to solving or integrating Poisson's equation, as if gravitational force/potential is propagated at infinite speed. We thus have:

• Neo-Newtonian Cosmology: matter created from gravitational potential by instant local action.
• Newtonian Cosmology: gravitational potential created from mass by instant action at distance.

Now, the big mystery of Newtonian Gravitation/Cosmology is the instant action at distance, which with the optics of Neo-Newtonian Cosmology is a fictional problem, which thus can be replaced by local instant action, with the mystery of instant action at distance eliminated. The beauty with both Newtonian and Neo-Newtonian Cosmology, as compared with Einstein's, is that the motion of the Earth around the Sun gets a theoretical explanation agreeing with the following observation:

• The Earth accelerates at each instant of time in the direction of the present position of the Sun, as if the action of the Sun is instant at distance.

In particular, the Earth does not accelerate in the direction where the Sun is seen, since that position, through the finite speed of light, has a delay of 8 minutes. The difference comes out in the thought experiment that the Sun suddenly disappears into nothing: in Neo-Newtonian Cosmology that would mean the instant disappearance of the gravitational field and thus leave the Earth instantly continuing in the tangent direction, while with finite speed of propagation of gravitational force created at distance by the mass of the Sun, that would take 8 minutes and the path would then be different. In Einstein's Cosmology gravitation is propagated with the finite speed of light, but that does not seem to be the case for the Sun-Earth system. Right? Recall that Einstein was obsessed with choice of coordinates in both the special and general theory, as if this choice has anything to do with physics, as if physics carries around coordinate systems imprinted in the "fabric of space-time" as the name of the game. But coordinate systems are inventions/conventions made by humans and not the Creator of the World, who had no need of such things when putting things together and letting it go...

#### 2 comments:

1. Nice post. What do you think about theories where gravity is included as thermodynamic work? In particular, work done as thermal resistance in units Nm²? Then the work of gravity equals surface emission in 4g², and TSI/(4/3)=4/3*8g². It seems to fit better than "curved space". And it invalidates the greenhouse theory.

2.
Would you mind taking a look at this: https://lifeisthermal.wordpress.com/2017/04/13/just-numbers-no-blankets-2/ And this: https://lifeisthermal.wordpress.com/2017/06/21/electric-field-equations-gaussian-surface-gaussian-gravity/ It seems like thermodynamic relationships give more answers than "curved space". Gravity as work in units of thermal resistance fits perfectly.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8597011566162109, "perplexity": 897.9326905274617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743184.39/warc/CC-MAIN-20181116194306-20181116220306-00435.warc.gz"}
https://blog.gian-sass.com/trailing-zeros-factorials/
Trailing Zeros of Factorials

Since I'm bored to hell in programming class, my teacher, who is also my physics teacher, gave me the task to find out the trailing zeros of some large factorials. I then began searching for patterns in factorials starting from $1!$ up to $10!$. Interestingly, $5! = 120$ has one trailing zero and $10! = 3628800$ has two trailing zeros. By induction you could determine the trailing zeros of $n!$ to be expressed by $\lfloor n/5 \rfloor$, where $\lfloor \cdot \rfloor$ is the floor function. But if you go down the pattern a bit further you find $25!$, which has six zeros! The heck? This contradicts our idea, since $\lfloor 25/5 \rfloor = 5$. Let's investigate a bit further. In a factorial, each factor of five that gets multiplied in adds a zero, because there are always enough factors of two to pair with it. For example, in $10! = 1 \cdot 2 \cdots 5 \cdots 10$ we see that there are two 5s hidden here, $5$ and $10 = 2 \cdot 5$. But $25!$ contains the factor $25$, which is $5 \cdot 5$! So in that case we find that there are six fives, because $\lfloor 25/5 \rfloor + \lfloor 25/25 \rfloor = 5 + 1 = 6$. With this additional information we can determine the number of trailing zeros of $25!$ to be $6$. But we are not finished yet; there is a pattern here and we should be able to define it for any $n$. Given a number $n$, the number of trailing zeros of $n!$ is the sum of $n$ divided by all powers of $5$: $Z(n) = \sum_{i=1}^{k} \left\lfloor \frac{n}{5^i} \right\rfloor$, where $k$ has to be chosen such that $5^{k+1} > n$. For example, for $25!$ we find $\lfloor 25/5 \rfloor + \lfloor 25/25 \rfloor = 5 + 1$, leaving us with $6$.
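A small Python sketch of this counting rule (added here, not part of the blog post), with a brute-force check against the actual factorial for small inputs:

```python
# Count trailing zeros of n! by summing floor(n/5) + floor(n/25) + floor(n/125) + ...
from math import factorial

def trailing_zeros(n: int) -> int:
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

# Verify against the number of zeros at the end of the actual factorial.
for n in (5, 10, 25, 100):
    s = str(factorial(n))
    exact = len(s) - len(s.rstrip('0'))
    print(n, trailing_zeros(n), exact)   # e.g. 25 -> 6 6, 100 -> 24 24
```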
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8495110869407654, "perplexity": 585.8983374140768}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146647.82/warc/CC-MAIN-20200227033058-20200227063058-00250.warc.gz"}
https://research.iitj.ac.in/publication/reconstructive-phase-transformations-in-body-centered-cubic
Reconstructive Phase Transformations in Body-Centered Cubic Titanium J. Zhu Published in Wiley-VCH Verlag 2020 Volume: 257 Issue: 12 Abstract Herein, the symmetry of the experimentally observed soft phonons in the body-centered cubic β-phase (Im$\bar{3}$m) of titanium is analyzed. The harmonic phonon dispersion relations are calculated using first-principles calculations. Using group-theoretical methods, the symmetry of the calculated unstable phonons is determined. The symmetries of the unstable phonons observed at wave vectors (Formula presented.) (N) and (Formula presented.) ((Formula presented.)) are the same as the symmetries of the (Formula presented.) and (Formula presented.) irreducible representations, respectively. Transformations of the β-phase due to the atomic motion of unstable phonons and the subsequent structure relaxation are discussed. One possible way to explain the transformation of the β-phase to the hexagonal close-packed α-phase (P6$_3$/mmc) is through an orthorhombic structure (either Cmcm or Pnnm). The atomic motion of an unstable (Formula presented.) phonon results in the orthorhombic structure and the following structure relaxation transforms the orthorhombic structure to the α-phase. Similarly, the transformation of the β-phase to another hexagonal close-packed ω-phase (P6/mmm) can be considered to be happening through a trigonal structure (either P$\bar{3}$m1 or P3m1). The atomic motion of an unstable (Formula presented.) phonon forms the trigonal structure and subsequent structure relaxation transforms the trigonal structure to the ω-phase. The space group of the intermediate phase is a common subgroup of the space groups of the initial β-phase and the final α/ω-phase. Therefore, the β–α/ω transformation can be described as an unstable phonon-induced reconstructive transformation of the second type. There is no activation energy barrier along each of the four energy-minimizing paths, and the transformation strains are accommodated. © 2020 Wiley-VCH GmbH
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8767874240875244, "perplexity": 4206.260026989254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499890.39/warc/CC-MAIN-20230131190543-20230131220543-00172.warc.gz"}
http://claesjohnson.blogspot.com/2014/12/prediction-of-global-temperature-may.html
## Tuesday 2 December 2014

### Prediction of Global Temperature May Well Be Possible

The recent "hiatus" of global warming, with slightly falling global temperature over what is now two decades under rising CO2 levels, in total contradiction to the steadily rising temperature predicted by all of the complex climate models underlying the CO2 alarmism propagated by IPCC, has given support to a populistic view that "mathematical modeling of climate is impossible because the evolution of climate is chaotic". Both skeptics and alarmists have shown enthusiasm for such a scientific defeatism. But it is not at all necessary to draw this conclusion, since chaos can sometimes be very predictable, for example as a null result of small stochastic perturbations. For example, a simple climate model stating a balance between incoming radiation from the Sun, which is observed to be nearly constant, and outgoing radiation from the Earth system, which is observed to be nearly constant, can give the prediction that global temperature will stay nearly constant over foreseeable time, say a couple of hundred years. Such a model would be in excellent agreement with observations over the last two decades, and would also be within measurement accuracy since the start of recorded observations (with maybe half a degree Celsius nominal increase). Climate as long-time-average of weather may thus be predictable, by the same mathematical reasons that mean-value aspects of turbulent flow like total drag and lift of an airplane are predictable (as shown in Computational Turbulent Incompressible Flow). What may be impossible is a precise prediction of a very small effect of a small perturbation of atmospheric radiation from a change of concentration of a trace gas as CO2. But a precise prediction of something so small that it has no observable effect is of course meaningless, and thus the perceived impossibility is not real. It is only if you, like IPCC, want to send an alarm of an effect of a vanishingly small cause, that you need a precise climate model supporting your case. The fact that such a model is basically unthinkable is then something to hide, together with the fact that a prediction of no-change is certainly thinkable and may well be correct.

#### 10 comments:

1. What do you think about a "model" which assumes the gravito-thermal 33C greenhouse effect is constant, since atmospheric mass/gravity are constant, but solar activity changes slightly? If we change the solar constant in the equation http://3.bp.blogspot.com/-xXJOurldG_E/VHjjbD6XinI/AAAAAAAAGx8/8yXlYh8Lcr4/s1600/The%2BGreenhouse%2BEquation%2B-%2BSymbolic%2Bsolution%2BP.png by 1 W/m2 from 1367 to 1368 W/m2, the calculated surface temperature increases from T=288.433K to 288.486K, an increase of 0.056C (this includes the division by 4 to convert solar insolation from a flat disk to a sphere), and if we change the albedo 1% in the same equation we get a 1C temperature change at the surface. Since albedo/clouds could be one of many solar amplification mechanisms described in the literature, is the chaotic climate system in effect bounded by what the sun does?

2. Any model predicting no change can show up to be pretty correct, I think.

3. Hi Claes, I'm now able to reproduce the atmospheric temperature profile in a physically-derived 1-D vertical profile from the surface all the way to the edge of space by computing the "ERL" height from the well-known barometric formula & center of mass.
The only assumption made about temperature is the equilibrium temp with the Sun, 255K, but otherwise all temperatures are calculated by the model, including the surface temperature and the gravito-thermal temperature gradient in the troposphere. Above the troposphere, I can then adjoin the physical 1976 US Standard Atmosphere 1-D model up to the edge of space. What do you think? Any interest in collaborating on this? http://hockeyschtick.blogspot.com/2014/12/why-us-standard-atmosphere-model.html Regards 4. Sounds good. Do you compute the lapse rate as well? Or do you put in the observed value? 5. Yes, the lapse rate is calculated in the symbolic "greenhouse equation" by g/Cp. The 1976 US Standard Atmosphere 1-D model was physically derived assuming the dry adiabatic lapse rate from measured Cp of dry air. After the dry atmosphere physical model was complete, they changed the Cp of air to the average observed at each atmospheric level (and if I recall they also calculated it from physical chemistry - but I'll have to read the document again to verify the latter) to determine the lapse rates at each level and connect them together with the preceding lapse rates at lower levels, as summarized here: http://en.wikipedia.org/wiki/U.S._Standard_Atmosphere 6. My point is that the effective (observed) lapse rate is set by an interaction of convection, phase change, gravitation and radiation, and that this determines the Earth surface temperature. To determine the effect of a small change of radiative properties of the atmosphere, that is the effect on the lapse rate, may be difficult, but there is nothing that indicates that the effect would be observable. What does your model say? 7. I've worked out a much better model of the entire atmosphere, and it says no warming effect of greenhouse gases whatsoever, and rather the opposite of cooling as passive IR radiators. Derived from the 1st law to ensure conservation of energy, thus I firmly believe it demonstrates no warming from GHGs nor effects on lapse rates, etc. I haven't posted on it yet while refining it, but please give me your thoughts, criticisms, or suggestions when I post it soon. Best regards 8. Yes, I will.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9366690516471863, "perplexity": 1047.9761334676257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00495-ip-10-171-10-108.ec2.internal.warc.gz"}
https://socratic.org/questions/5879036e11ef6b72c8c54d96
# How do you graph y^2(x^2-1)=1 ?

##### 1 Answer Jan 13, 2017

See explanation...

#### Explanation:

Interesting! Let's start by considering: $y(x^2-1) = 1$ Dividing both sides by $x^2-1$ we get: $y = \frac{1}{x^2-1} = \frac{1}{(x-1)(x+1)}$ This has vertical asymptotes at $x = \pm 1$ and a local maximum at $x = 0$, where $y = -1$. It is only positive when $|x| > 1$. It has a horizontal asymptote $y = 0$ since $\frac{1}{x^2-1} \to 0$ as $x \to \pm\infty$. So it looks like this... graph{1/(x^2-1) [-10, 10, -5, 5]} Going back to our original equation, we can divide both sides by $x^2-1$ to get: $y^2 = \frac{1}{x^2-1}$ and hence: $y = \pm\sqrt{\frac{1}{x^2-1}}$ This does not describe a function, but has a graph which will be symmetrical about the $x$ axis. Considering just the positive values, note that square roots are monotonically increasing, approaching vertical for small values of the radicand... graph{(y^2-x) = 0 [-10, 10, -5, 5]} So combined with the function $\frac{1}{x^2-1}$ this results in a graph which is less 'steep' near the vertical asymptotes, undefined for $|x| \le 1$ and slower to approach the asymptote $y = 0$, with a reflected copy... graph{y^2(x^2-1)=1 [-10, 10, -5, 5]}
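The same picture can be reproduced outside Socratic's graph widget; here is a minimal matplotlib sketch (added illustration, not part of the answer) that plots the two branches $y = \pm 1/\sqrt{x^2-1}$ where they are defined.

```python
# Plot y^2 (x^2 - 1) = 1 as two branches y = +/- 1/sqrt(x^2 - 1), defined for |x| > 1.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 2000)
with np.errstate(divide='ignore', invalid='ignore'):
    y = 1.0 / np.sqrt(x**2 - 1)          # NaN for |x| <= 1; matplotlib leaves a gap there

plt.plot(x, y, 'b', x, -y, 'b')
plt.axvline(-1, ls='--')                 # vertical asymptotes at x = -1 and x = 1
plt.axvline(1, ls='--')
plt.ylim(-5, 5)
plt.show()
```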
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 17, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9395202398300171, "perplexity": 910.3118214216555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141486017.50/warc/CC-MAIN-20201130192020-20201130222020-00566.warc.gz"}
https://gbas2010.wordpress.com/2010/05/13/
# Inequality 47 (Christos Patilas)

Problem: If $\displaystyle x_i$ for $\displaystyle i=1,2,...,n$ are positive real numbers then prove that $\displaystyle \sum^{n}_{i=1}\left(5\sqrt[5]{x^{3}_{i}}-3\sqrt[3]{\left(\frac{3x_i+2}{5}\right)^5}\right)\leq 2n$

Solution (an idea by Vo Quoc Ba Can): We only need to prove that $\displaystyle 5\sqrt[5]{a^3}-3\sqrt[3]{\left(\frac{3a+2}{5}\right)^5}\leq 2$ for all $\displaystyle a>0$. So, using the AM-GM Inequality we have that $\displaystyle a+a+a+1+1\geq 5\sqrt[5]{a^3}$, i.e. $\displaystyle \frac{3a+2}{5}\geq\sqrt[5]{a^3}$. It follows that $\displaystyle \sqrt[3]{\left(\frac{3a+2}{5}\right)^5}\geq \sqrt[3]{\left(\sqrt[5]{a^3}\right)^5}=a$. Therefore it suffices to prove that $\displaystyle 5\sqrt[5]{a^3}-2\leq 3a$, which is obviously true from the AM-GM Inequality (it is the same inequality $a+a+a+1+1\geq 5\sqrt[5]{a^3}$ again), Q.E.D.
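A quick numeric sanity check of the single-variable bound (added here, not part of the solution):

```python
# Check numerically that 5*a^(3/5) - 3*((3a+2)/5)^(5/3) <= 2 for a > 0.
import numpy as np

a = np.linspace(1e-6, 50, 200001)
lhs = 5 * a**0.6 - 3 * ((3 * a + 2) / 5)**(5 / 3)
print(lhs.max())     # just below 2 on this grid; the bound 2 is attained at a = 1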
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9768463969230652, "perplexity": 590.0755131997745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589022.38/warc/CC-MAIN-20180715222830-20180716002830-00190.warc.gz"}
http://mathhelpforum.com/calculus/170115-trig-identity-sqrt-2-a.html
# Math Help - Trig identity for sqrt(2)

1. ## Trig identity for sqrt(2) Hi. Upon checking an integral (that I had worked out by hand) using Mathcad (some math software I have), I noticed a simplification of part of my integral the computer had performed and couldn't work out how it was made. It is -sec(6pi/8) = sqrt(2). I was wondering if anyone knows how one gets from one to the other? I have even checked with my tutor and he isn't sure either! Is there some identity I am not aware of?

2. Originally Posted by skippets: Hi. Upon checking an integral (that I had worked out by hand) using Mathcad, I noticed a simplification of part of my integral the computer had performed and couldn't work out how it was made. It is -sec(6pi/8) = sqrt(2). I was wondering if anyone knows how one gets from one to the other? $\displaystyle -\sec\left(\frac{6\pi}{8}\right)=-\frac{1}{\cos\left(\frac{6\pi}{8}\right)}=\frac{1}{\cos\left(\frac{2\pi}{8}\right)}$ since $\cos(\pi-A)=-\cos A$, giving $\displaystyle\frac{1}{\cos\left(\frac{\pi}{4}\right)}=\frac{1}{\left(\frac{1}{\sqrt{2}}\right)}=\sqrt{2}$

3. Originally Posted by skippets: ... -sec(6pi/8) = sqrt(2) ... $\dfrac{6\pi}{8} = \dfrac{3\pi}{4}$ ... a unit circle angle. Since $\cos\left(\dfrac{3\pi}{4}\right) = -\dfrac{\sqrt{2}}{2}$, $\sec\left(\dfrac{3\pi}{4}\right) = -\dfrac{2}{\sqrt{2}} = -\sqrt{2}$, so $-\sec\left(\dfrac{3\pi}{4}\right) = \sqrt{2}$

4. Thanks very much to both of you!!
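A one-line numeric confirmation of the identity (added illustration, not from the thread):

```python
# Confirm numerically that -sec(6*pi/8) equals sqrt(2).
import math
print(-1 / math.cos(6 * math.pi / 8), math.sqrt(2))   # both ≈ 1.4142135623730951
```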
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8823083639144897, "perplexity": 1097.591859328478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097546.28/warc/CC-MAIN-20150627031817-00192-ip-10-179-60-89.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/17080-polar-coordinate-02-a-print.html
# Polar Coordinate 02

• Jul 21st 2007, 04:29 PM camherokid Polar Coordinate 02 Find the area enclosed by one loop of the curve r = 2 Cos[x] - Sec[x]

• Jul 21st 2007, 05:30 PM Jhevon 1 Attachment(s) Quote: Originally Posted by camherokid: Find the area enclosed by one loop of the curve r = 2 Cos[x] - Sec[x]. We need the limits of integration first. Since the loop goes through the origin, we need to find the values of x for which the curve passes through the origin, so set r = 0. We have $2 \cos x - \frac {1}{ \cos x} = 0$ $\Rightarrow 2 \cos^2 x - 1 = 0$ $\Rightarrow \cos x = \pm \frac {1}{ \sqrt { 2 }}$ $\Rightarrow x = \frac {\pi}{4}, \frac {3 \pi}{4}, \frac { 5\pi}{4}, \frac {7 \pi}{4}$ Looking at the graph, we see we want the region between $\frac {\pi}{4}$ and $\frac {7 \pi}{4}$. We want to go anticlockwise from $\frac {7 \pi}{4}$ to $\frac {\pi}{4}$ to get the desired area, but we must go from a smaller to a larger angle, so rewrite $\frac {7 \pi}{4}$ as $- \frac { \pi}{4}$ and we are in business. So our desired area is given by $\int_{- \pi / 4}^{ \pi / 4}\frac {1}{2} r^2 ~dx$ $= \int_{- \pi / 4}^{ \pi / 4}\frac {1}{2} \left( 2 \cos x - \sec x \right)^2 ~dx$

• Jul 21st 2007, 06:06 PM galactus That looks good, Jhevon. You could also just write it as: $\int_{0}^{\frac{\pi}{4}}(2\cos\theta-\sec\theta)^{2}d{\theta}$ Same thing.

• Jul 21st 2007, 06:07 PM Jhevon Quote: Originally Posted by galactus: That looks good, Jhevon. You could also just write it as: $\int_{0}^{\frac{\pi}{4}}(2\cos\theta-\sec\theta)^{2}d{\theta}$ Same thing. yes, that is true. thanks
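A numeric check (added here, not from the thread) that the two forms of the loop-area integral agree:

```python
# Compare (1/2) * integral of r^2 over [-pi/4, pi/4] with the symmetric form
# integral of r^2 over [0, pi/4], for r = 2*cos(t) - sec(t).
from scipy.integrate import quad
import numpy as np

r = lambda t: 2 * np.cos(t) - 1 / np.cos(t)

half = quad(lambda t: 0.5 * r(t)**2, -np.pi / 4, np.pi / 4)[0]
sym = quad(lambda t: r(t)**2, 0, np.pi / 4)[0]
print(half, sym)   # both ≈ (4 - pi)/2 ≈ 0.4292
```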
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9487013816833496, "perplexity": 1637.6593786269248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103316.46/warc/CC-MAIN-20170817131910-20170817151910-00152.warc.gz"}
http://www.oalib.com/search?kw=E.%20Hungerford&searchField=authors
Search Results: 1 - 10 of 167585 matches for "E. Hungerford". All listed articles are free for downloading (OA Articles).

Physics, 2014, DOI: 10.1088/1475-7516/2014/08/064 Abstract: Neutrons produced by cosmic muon interactions are important contributors to backgrounds in underground detectors when searching for rare events. Typically such neutrons can dominate the background, as they are particularly difficult to shield and detect. Since actual data is sparse and not well documented, simulation studies must be used to design shields and predict background rates. Thus validation of any simulation code is necessary to assure reliable results. This work compares in detail the predictions of the FLUKA simulation code to existing data, and uses this code to report a simulation of cosmogenic backgrounds for typical detectors embedded in a water tank with liquid scintillator shielding.

Physics, 2012, Abstract: Cosmic muon interactions are important contributors to backgrounds in underground detectors when searching for rare events. Typically neutrons dominate this background as they are particularly difficult to shield and detect in a veto system. Since actual background data is sparse and not well documented, simulation studies must be used to design shields and predict background rates. This means that validation of any simulation code is necessary to assure reliable results. This work studies the validation of the FLUKA simulation code, and reports the results of a simulation of cosmogenic background for a liquid argon two-phase detector embedded within a water tank and liquid scintillator shielding.

H. B. Hungerford Psyche, 1917, DOI: 10.1155/1917/57095 Abstract: H. B. Hungerford Psyche, 1926, DOI: 10.1155/1926/90626 Abstract: H. B. Hungerford Psyche, 1925, DOI: 10.1155/1925/24273 Abstract:

Physics, 2009, Abstract: The Spallation Neutron Source in Oak Ridge, Tennessee, is designed to produce intense pulsed neutrons for various science and engineering applications. Copious neutrinos are a free by-product. When it reaches full power, the SNS will be the world's brightest source of neutrinos in the few tens of MeV range. The proposed CLEAR (Coherent Low Energy A (Nuclear) Recoils) experiment will measure coherent elastic neutral current neutrino-nucleus scattering at the SNS. The physics reach includes tests of the Standard Model.

Physics, 2014, Abstract: In the published cosmogenic background study for a ton-sized DarkSide dark matter search, only prompt neutron backgrounds coincident with cosmogenic muons or muon induced showers were considered, although observation of the initiating particle(s) was not required. The present paper now reports an initial investigation of the magnitude of cosmogenic background from $\beta$-delayed neutron emission produced by cosmogenic activity in DarkSide. The study finds a background rate for $\beta$-delayed neutrons in the fiducial volume of the detector on the order of < 0.1 event/year. However, detailed studies are required to obtain more precise estimates. The result should be compared to a radiogenic background event rate from the PMTs inside the DarkSide liquid scintillator veto of 0.2 events/year.
Physics , 2010, DOI: 10.1088/1742-6596/202/1/012023 Abstract: The slow neutron capture process in massive stars (the weak s-process) produces most of the s-only isotopes in the mass region 60 < A < 90. The nuclear reaction rates used in simulations of this process have a profound effect on the final s-process yields. We generated 1D stellar models of a 25 solar mass star varying the 12C + 12C rate by a factor of 10 and calculated full nucleosynthesis using the post-processing code PPN. Increasing or decreasing the rate by a factor of 10 affects the convective history and nucleosynthesis, and consequently the final yields. Physics , 2010, Abstract: The contribution by massive stars (M > 9 solar masses) to the weak s-process component of the solar system abundances is primarily due to the 22Ne neutron source, which is activated near the end of helium-core burning. The residual 22Ne left over from helium-core burning is then reignited during carbon burning, initiating further s-processing that modifies the isotopic distribution. This modification is sensitive to the stellar structure and the carbon burning reaction rate. Recent work on the 12C + 12C reaction suggests that resonances located within the Gamow peak may exist, causing a strong increase in the astrophysical S-factor and consequently the reaction rate. To investigate the effect of an increased rate, 25 solar mass stellar models with three different carbon burning rates, at solar metallicity, were generated using the Geneva Stellar Evolution Code (GENEC) with nucleosynthesis post-processing calculated using the NuGrid Multi-zone Post-Processing Network code (MPPNP). The strongest rate caused carbon burning to occur in a large convective core rather than a radiative one. The presence of this large convective core leads to an overlap with the subsequent convective carbon-shell, significantly altering the initial composition of the carbon-shell. In addition, an enhanced rate causes carbon-shell burning episodes to ignite earlier in the evolution of the star, igniting the 22Ne source at lower temperatures and reducing the neutron density. Physics , 2012, DOI: 10.1111/j.1365-2966.2012.20193.x Abstract: [Shortened] The 12C + 12C fusion reaction has been the subject of considerable experimental efforts to constrain uncertainties at temperatures relevant for stellar nucleosynthesis. In order to investigate the effect of an enhanced carbon burning rate on massive star structure and nucleosynthesis, new stellar evolution models and their yields are presented exploring the impact of three different 12C + 12C reaction rates. Non-rotating stellar models were generated using the Geneva Stellar Evolution Code and were later post-processed with the NuGrid Multi-zone Post-Processing Network tool. The enhanced rate causes core carbon burning to be ignited more promptly and at lower temperature. This reduces the neutrino losses, which increases the core carbon burning lifetime. An increased carbon burning rate also increases the upper initial mass limit for which a star exhibits a convective carbon core. Carbon shell burning is also affected, with fewer convective-shell episodes and convection zones that tend to be larger in mass. Consequently, the chance of an overlap between the ashes of carbon core burning and the following carbon shell convection zones is increased, which can cause a portion of the ashes of carbon core burning to be included in the carbon shell. 
Therefore, during the supernova explosion, the ejecta will be enriched by s-process nuclides synthesized from the carbon core s process. The yields were used to estimate the weak s-process component in order to compare with the solar system abundance distribution. The enhanced rate models were found to produce a significant proportion of Kr, Sr, Y, Zr, Mo, Ru, Pd and Cd in the weak component, which is primarily the signature of the carbon-core s process. Consequently, it is shown that the production of isotopes in the Kr-Sr region can be used to constrain the 12C + 12C rate using the current branching ratio for the alpha- and p-exit channels.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.957055926322937, "perplexity": 2300.596995539067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576122.89/warc/CC-MAIN-20190923064347-20190923090347-00228.warc.gz"}
http://mathoverflow.net/questions/112400/outer-automorphisms-of-an-infinite-simple-group
# Outer automorphisms of an infinite simple group Let $r(m)$ denote the residue class $r+m\mathbb{Z}$, where $0 \leq r < m$. Given disjoint residue classes $r_1(m_1)$ and $r_2(m_2)$, let the class transposition $\tau_{r_1(m_1),r_2(m_2)}$ be the permutation of $\mathbb{Z}$ which interchanges $r_1+km_1$ and $r_2+km_2$ for every $k \in \mathbb{Z}$ and which fixes everything else. The group ${\rm CT}(\mathbb{Z})$ generated by all class transpositions of $\mathbb{Z}$ is simple (cf. Math. Z. 264 (2010), no. 4, 927-938, http://dx.doi.org/10.1007/s00209-009-0497-8). Find the outer automorphism group of ${\rm CT}(\mathbb{Z})$. - The description should be split into two parts: first, determine whether every automorphism is spatial, i.e., is the automorphism group equal to the normalizer in the group of permutations of $\mathbf{Z}$. Then it remains to determine this normalizer. Note that since you restrict to $m>0$, you at least have $n\mapsto -n$ in this normalizer, which is a non-inner automorphism. But it might be the only one, I don't know. –  YCor Nov 14 '12 at 18:33 No, $n \mapsto -n$ is not an automorphism. The group ${\rm CT}(\mathbb{Z})$ stabilizes $\mathbb{N}_0$ setwise, but for example the conjugate of its element $\tau_{0(2),1(2)}$ under the mapping $n \mapsto -n$ doesn't (it moves 0 to -1). –  Stefan Kohl Nov 14 '12 at 20:00 But a little modification turns it into one: $n \mapsto -n-1$ is indeed an automorphism. –  Stefan Kohl Nov 14 '12 at 20:26 OK thanks. I hadn't noticed that the requirement $r\ge 0$ indeed forces $\mathbf{N}_0$ (why does the English convention exclude 0 from $\mathbf{N}$?) to be stable. So in my previous comment, the definition of <i>spatial</i> should rather be the normalizer in the group of permutations of $\mathbf{N}_0$. –  YCor Nov 15 '12 at 9:48 Thanks. - If the answer to Question <mathoverflow.net/questions/112469/>; is negative, this would yield further spatial outer automorphisms –  Stefan Kohl Nov 15 '12 at 10:34
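As a concrete illustration of the definition above, here is a small C++ sketch (function names and the brute-force check are mine, not from the cited paper) of a single class transposition $\tau_{r_1(m_1),r_2(m_2)}$ acting on an integer:

```cpp
#include <cassert>

// Hypothetical sketch: apply the class transposition tau_{r1(m1), r2(m2)} to n.
// It interchanges r1 + k*m1 and r2 + k*m2 for every integer k and fixes everything else.
// Assumes 0 <= r1 < m1, 0 <= r2 < m2 and that the two residue classes are disjoint.
long long classTransposition(long long n, long long r1, long long m1, long long r2, long long m2) {
    // n = r1 + k*m1 for some integer k exactly when m1 divides n - r1 (k may be negative).
    if ((n - r1) % m1 == 0) return r2 + ((n - r1) / m1) * m2;
    if ((n - r2) % m2 == 0) return r1 + ((n - r2) / m2) * m1;
    return n;
}

int main() {
    // tau_{0(2),1(2)} swaps 2k and 2k+1; it is an involution and maps 0 -> 1, 5 -> 4.
    for (long long n = -20; n <= 20; ++n)
        assert(classTransposition(classTransposition(n, 0, 2, 1, 2), 0, 2, 1, 2) == n);
    assert(classTransposition(0, 0, 2, 1, 2) == 1);
    assert(classTransposition(5, 0, 2, 1, 2) == 4);
    return 0;
}
```

Note how, with $0 \le r < m$, nonnegative inputs always map to nonnegative outputs, which is the setwise stabilization of $\mathbb{N}_0$ mentioned in the comments.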
https://www.physicsforums.com/threads/mechanics-pulley-platform-problem.793926/
# Mechanics Pulley Platform Problem Tags: 1. Jan 23, 2015 1. The problem statement, all variables and given/known data Two construction workers each of mass m raise themselves on a hanging platform using pulleys as shown above. If the platform has a mass of 1.2⋅m, the initial distance between the pulleys and the platform is d, and the workers each pull with a force f on the ropes, what is the acceleration a of the workers? Assume the pulleys and ropes are massless. 2. Relevant equations Tension = m(a + g) (possibly). F = ma 3. The attempt at a solution To be wholly honest I have next to no idea how to proceed. My only real thought was to use F = ma. Thinking first that I'd need my total mass, I added (m + m + 1.2m) = 3.2m total (one mass for each worker plus the platform). Next I tried to consider the force, thinking we have two workers each exerting a tension force f, thus 2f being the total force upwards, then subtracting out the downward forces (namely gravity) to achieve (2f - 3.2mg). Plugging into F = ma I got (2f - 3.2mg) = a(3.2m) => a = (2f - 3.2mg)/(3.2m) which is marked incorrect. At this point I have hit a wall. My thanks for any help. (Attached is the image accompanying the problem). #### Attached Files: • ###### Men_and_pulleys.png File size: 1.6 KB Views: 121 Last edited: Jan 23, 2015 2. Jan 23, 2015 ### arpon Why 3.4mg ? 3. Jan 23, 2015 Sorry, that was a typo which persisted through my idiocy. I corrected it. 4. Jan 23, 2015 ### arpon I suggest you to think about the free body diagram. At first, think about the free body diagram of one of the men. He is exerting force f each on the rope, and so according to Newton's third law, the rope is also exerting force f (upward) on him. Let's assume, he is exerting a force R on the platform, and the platform is exerting the same force R on him in the upward direction. The other force on him is gravitational force which is equal to mg. So the net force on him is, $F_{net} = R + f - mg$; Again, $F_{net} = ma$. So $R + f - mg = ma$ In this same way, you may find out, equations for the other man and the platform, and solving these equations, you may find out accelaration. 5. Jan 23, 2015 Wouldn't the force R simply be equal the normal force the man is exerting on the platform and in turn then be equal to $mg$, giving $f = ma$ out again since R would cancel with $mg$. 6. Jan 23, 2015 ### arpon No. Suppose, you hang a ball with a rope. The rope is straight. Then you just touch the ball on a weighing machine keeping the rope straight. Will there be any weight found on the machine? 7. Jan 23, 2015 ### arpon Another example. Suppose, a heavy stone hangs from a crane. And you touch your hand under the stone. Will you feel any force on your hand? 8. Jan 23, 2015 Alright, I think I understand what you mean. So would the equation for the other man just be $R+f−mg=ma$ since he is identical to the first man and the equation for platform would be $2f -1.2mg = ma$ Two f for the tensions forces acting on the block, minus the downward component of force? I'll work on it more tomorrow as well, if I can't reply any more tonight. 9. Jan 23, 2015 ### arpon You are missing the normal forces which are exerted on the platform by the men, and look, the mass of the platform is not m. When you are calculating with the forces on the platform, the net force equals to the mass of 'the platform' times the accelaration of 'the platform'. 10. 
Jan 24, 2015 So it would then be: $2f - 1.2mg - 2R = 1.2mg$ (because, since you earlier defined the force R as the force the platform exerts on the men upwards, the normal force on the platform from the men should be twice the negative of R). 11. Jan 24, 2015 ### arpon "$= 1.2mg$" ? Probably, a typo. 12. Jan 24, 2015 Should have been $=1.2ma$ I think. 13. Jan 24, 2015 My eternal thanks, good sir or madame. Your advice was instrumental in my finding the solution. 14. Jan 25, 2015 ### arpon Exactly! You're welcome. I also thank you for sharing this interesting problem.
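To tie the thread's free-body equations together numerically, here is a small sketch of my own (the sample values of m and f are illustrative only, not from the thread). It simply adds the two "man" equations R + f - mg = ma to the corrected platform equation 2f - 2R - 1.2mg = 1.2ma, which eliminates R:

```cpp
#include <cstdio>

int main() {
    // Illustrative numbers only (assumed, not given in the thread): m in kg, f in N.
    const double g = 9.81, m = 70.0, f = 600.0;

    // Adding 2*(R + f - m g = m a) to (2f - 2R - 1.2 m g = 1.2 m a) gives
    //   4f - 3.2 m g = 3.2 m a   =>   a = (4f - 3.2 m g) / (3.2 m)
    const double a = (4.0 * f - 3.2 * m * g) / (3.2 * m);

    // The internal normal force R then follows from one man's equation.
    const double R = m * (a + g) - f;

    std::printf("a = %.3f m/s^2, R = %.1f N\n", a, R);
    return 0;
}
```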
https://math.stackexchange.com/questions/1413537/field-and-field-axioms
# Field and Field Axioms.

I wanted to ask: what are fields and field axioms? I have tried looking on Wikipedia and Wolfram, but they are too advanced and I can't understand one bit. So please, any help would be much appreciated. Also, there is a question in my book: "What is the difference between the Real and the Complex fields?" As I don't know anything about fields, I don't know the answer and can't think of one either (as I repeat, I don't know anything about fields). So, again, any help with understanding fields and the field axioms would be much appreciated. THANKS VERY MUCH

• Fields informally speaking are places where you can add, subtract, multiply and divide. So $\Bbb Q$, $\Bbb R$, $\Bbb C$ are all fields, while $\Bbb Z$ is not because you cannot divide two things in $\Bbb Z$ and get something still in $\Bbb Z$. – Gregory Grant Aug 29 '15 at 11:55

A field is a set of numbers which satisfy some calculation rules. For a field the following rules hold:

1. Addition has the neutral element 0
2. Addition has an inverse (adding the negative of a number)

Multiplication rules

1. Multiplication has the neutral element 1
2. Multiplication is invertible (by division) for every element other than 0
3. Multiplication is associative
4. And another important fact is that the distributive law holds!

Real fields are fields of real numbers while complex fields consist of complex numbers.

• You may not have answered the question. One could argue to answer it properly you need to show $\not\exists$ an isomorphism $\Bbb R\to\Bbb C$ as fields. – Gregory Grant Aug 29 '15 at 12:13
• So how is the real field different from the complex field – Batwayne Aug 31 '15 at 7:56

In addition to the axioms given by kryomaxim, a field also must satisfy (5) addition and multiplication are commutative, and (6) $0 \ne 1$. If all of the requirements are met except commutative multiplication, it is called a Division Ring With Unit. (The "unit" is 1.) There is no real number x that satisfies $x^2+1=0$, but there certainly is a complex number that does. The usual terminology for a field F which is a subset of a field G is that F is a sub-field of G.

(0) x + y ∈ F and x · y ∈ F for any x, y ∈ F (closure under addition and multiplication).
(1) x + (y + z) = (x + y) + z and x · (y · z) = (x · y) · z for any x, y, z ∈ F (associativity of addition and multiplication).
(2) x + y = y + x and x · y = y · x for any x, y ∈ F (commutativity of addition and multiplication).
(3) x + 0 = x and x · 1 = x for all x ∈ F (0 and 1 are called the additive identity and the multiplicative identity, respectively).
(4) For any x ∈ F, there is a w ∈ F such that x + w = 0 (existence of negatives). Moreover, if x ≠ 0, then there is also an r ∈ F such that x · r = 1 (existence of reciprocals). We denote w = −x and r = x^{−1}.
(5) x · (y + z) = x · y + x · z for any x, y, z ∈ F (distributivity of multiplication over addition).
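As a small illustration of the difference pointed out in the answer above (a sketch of my own, not from the thread), the equation $x^2+1=0$ has no solution in the field $\mathbb{R}$ but does in $\mathbb{C}$, which C++'s std::complex can show directly:

```cpp
#include <complex>
#include <cstdio>

int main() {
    // In the real field there is no x with x*x + 1 == 0, since x*x + 1 >= 1 for real x.
    // In the complex field, x = i works.
    std::complex<double> i(0.0, 1.0);
    std::complex<double> value = i * i + 1.0;
    std::printf("i*i + 1 = %g + %gi\n", value.real(), value.imag());  // prints 0 + 0i

    // Both R and C satisfy the field axioms listed above; the difference the book asks
    // about is behaviour like this, not the axioms themselves.
    return 0;
}
```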
https://www.physicsforums.com/threads/ring-oscillation-formula-derivation.394284/
# Homework Help: Ring oscillation formula derivation 1. Apr 11, 2010 ### xzibition8612 1. The problem statement, all variables and given/known data Derive and show that the period for a thick ring would be T=2π√[d/g+(ΔR)^2/4Rg] 2. Relevant equations I'm not sure.... 3. The attempt at a solution Obviously, delta R means the difference between Ri and Ro would be considerable. So ΔR=Ro-Ri. Then Ro=R+ΔR/2 and Ri=R-ΔR/2 I also know that T=2π√(I/mgr) and Itotal=Icenter of mass + MR^2........ Can you guys help me? I'm lost here. Thanks a lot.
http://www.chegg.com/homework-help/questions-and-answers/white-light-illuminates-oil-film-water-viewing-directly-looks-red-assume-reflected-red-lig-q2781636
## Thin-film equations White light illuminates an oil film on water. Viewing it directly from above, it looks red. Assume that the reflected red light has a wavelength of 615 nm in air, and that the oil has a thickness of 2.640e-7 m. What is the refractive index of the oil? Assume that the refractive index of water is greater than that of the oil.
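A hedged worked sketch of my own (not part of the problem source): with $n_{\text{air}} < n_{\text{oil}} < n_{\text{water}}$, both reflections pick up a half-wave phase shift, so the shifts cancel and strong reflection requires $2 n_{\text{oil}} t = m\lambda$. Solving for $n_{\text{oil}}$ and keeping the physically sensible order $m$ gives the estimate below.

```cpp
#include <cstdio>

int main() {
    const double lambda  = 615e-9;   // wavelength in air, m
    const double t       = 2.640e-7; // film thickness, m
    const double n_water = 1.33;     // assumed value, used only for the sanity check

    // Both interfaces reflect from lower to higher index, so the two pi phase
    // shifts cancel and constructive interference requires 2*n*t = m*lambda.
    for (int m = 1; m <= 3; ++m) {
        const double n = m * lambda / (2.0 * t);
        std::printf("m = %d : n_oil = %.3f %s\n", m, n,
                    (n > 1.0 && n < n_water) ? "(consistent with n_oil < n_water)" : "(rejected)");
    }
    return 0;
}
```

Only $m = 1$ gives an index between 1 and the assumed water value, so the estimate is $n_{\text{oil}} \approx 1.16$.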
http://talkstats.com/threads/difference-between-sample-variance-and-total-variance-sst.73356/
Difference between Sample Variance and Total Variance (SST) TripleMMM New Member Hi all, Was reviewing my notes the other day and suddenly I noticed something which has been on my mind. The sample variance of a variable X is given as $\sum_{i=1}^{n}(x_i - \bar{x})^2$, where we then adjust for the degrees of freedom by dividing by $n-1$. However, in linear regression the total variance (SST, the total sum of squares) in the dependent variable is given as $SST = \sum_{i=1}^{n}(y_i - \bar{y})^2$. While both are calculations of the variability of a variable, why is the former divided by $n-1$ while the latter is not, given that for the latter we also have a sample that allows us to calculate the total variance? Am I missing something here? Thanks
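A small illustrative sketch (mine, not from the thread): for the same data, SST is just the numerator of the sample variance, so $SST = (n-1)\,s^2$; the $n-1$ divisor only matters when SST is turned into an unbiased variance estimate.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const std::vector<double> y = {2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0};
    const double n = static_cast<double>(y.size());

    double mean = 0.0;
    for (double v : y) mean += v;
    mean /= n;

    double sst = 0.0;                           // total sum of squares
    for (double v : y) sst += (v - mean) * (v - mean);

    const double sample_var = sst / (n - 1.0);  // unbiased sample variance

    std::printf("SST         = %.3f\n", sst);
    std::printf("sample var  = %.3f\n", sample_var);
    std::printf("(n-1)*var   = %.3f  (equals SST)\n", (n - 1.0) * sample_var);
    return 0;
}
```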
http://www.computer.org/csdl/proceedings/focs/1994/6580/00/0365700-abs.html
Santa Fe, NM, USA Nov. 20, 1994 to Nov. 22, 1994 ISBN: 0-8186-6580-7 pp: 124-134 P.W. Shor, AT&T Bell Labs., Murray Hill, NJ, USA ABSTRACT A computer is generally considered to be a universal computational device; i.e., it is believed able to simulate any physical computational device with a cost in computation time of at most a polynomial factor: It is not clear whether this is still true when quantum mechanics is taken into consideration. Several researchers, starting with David Deutsch, have developed models for quantum mechanical computers and have investigated their computational properties. This paper gives Las Vegas algorithms for finding discrete logarithms and factoring integers on a quantum computer that take a number of steps which is polynomial in the input size, e.g., the number of digits of the integer to be factored. These two problems are generally considered hard on a classical computer and have been used as the basis of several proposed cryptosystems. We thus give the first examples of quantum cryptanalysis. INDEX TERMS cryptosystems, quantum computation algorithms, discrete logarithms, factoring, physical computational device, polynomial factor, Las Vegas algorithms, quantum computer CITATION P.W. Shor, "Algorithms for quantum computation: discrete logarithms and factoring", FOCS, 1994 IEEE 35th Annual Symposium on Foundations of Computer Science, pp. 124-134, doi:10.1109/SFCS.1994.365700
http://www.inf.ethz.ch/personal/fukudak/polyfaq/node18.html
## How do we measure the complexity of a convex hull algorithm?

To answer this question, we assume the unit cost RAM model, where the computational time is essentially the number of elementary arithmetic operations and the storage for any integer number takes a unit space. See Section 2.14.

There are two approaches to evaluate the complexity of a given convex hull algorithm. Let $\mathcal{A}$ be an algorithm which computes a minimal inequality description of a full-dimensional convex polytope $\operatorname{conv}(S)$ for a given point set $S$ in $\mathbb{R}^d$ with $n = |S|$. Let $m$ denote the number of inequalities in the output. (One can interpret the discussion here in the dual setting: consider $\mathcal{A}$ as an algorithm to compute all vertices of a convex polytope given by $n$ inequalities, with $m$ vertices.)

First of all, most people agree that the efficiency of computing the convex hull should be measured at least by the critical input parameters $n$ and $d$. Some people like to see the complexity by fixing $d$ to a constant, but it is always better to evaluate in terms of $d$ as well, and fix it later.

The first measure, often employed by computational geometers, is to bound the worst case running time of an algorithm for any input with $n$ points in $\mathbb{R}^d$. For example, if $\mathcal{A}$ is of $O(f(n, d))$, then it means $\mathcal{A}$ terminates in time $O(f(n, d))$ for ANY input of $n$ points in dimension $d$. Also, when one sets $d$ to be fixed (constant), such an algorithm is said to have time complexity $O(f(n))$, since the dependence on $d$ is simply a constant. We may call this the worst-case-input measure. For fixed dimension, there is an optimum algorithm [Cha93] for the convex hull in terms of the worst-case-input measure, which runs in time $O(n^{\lfloor d/2 \rfloor})$ for $d \ge 4$. It cannot be better because the largest output is of the same order by the upper bound theorem (Theorem 2).

The worst-case-input measure is quite popular, but it might be a little misleading. For example, suppose algorithms $\mathcal{A}$ and $\mathcal{B}$ have worst-case time complexities $T_{\mathcal{A}}(n, d)$ and $T_{\mathcal{B}}(n, d)$ with $T_{\mathcal{A}}$ asymptotically smaller, respectively. Then by this measurement, the algorithm $\mathcal{A}$ is superior to $\mathcal{B}$. Here is a potentially serious problem with this worst-case-input measure. Above, it is still possible that $\mathcal{A}$ takes the worst-case time for ALL inputs of $n$ points in $\mathbb{R}^d$, and $\mathcal{B}$ takes time proportional to some polynomial function of $n$, $d$ and the output size $m$. Note that the number $m$ of inequalities varies wildly, from $d+1$ up to order $n^{\lfloor d/2 \rfloor}$, even for fixed $d$ (by the upper bound theorem, Theorem 2, and (1)). This diversity is just too big to be ignored if $d \ge 4$. Furthermore, the input data leading to the worst-case output hardly occurs in practice. In fact, for the random spherical polytope, the expected size of $m$ is linear in $n$, see Section 2.16. While the worst-case-input optimal algorithm [Cha93] is a remarkable theoretical achievement, we are still very far from knowing the best ways to compute the convex hull for general dimensions.

In order to circumvent this pitfall, one can use a measure using all key variables $n$, $d$ and $m$. Or more generally, one can measure the time complexity in terms of both the size of the input and the size of the output. We say an algorithm is polynomial if it runs in time bounded by a polynomial in $n$, $d$ and $m$. This polynomiality coincides with the usual polynomiality when the output size is polynomially bounded by the size of the input.

Under the nondegeneracy assumption (see 2.12), there is a polynomial algorithm for the convex hull problem. A few of the earlier polynomial algorithms are pivot-based algorithms [CCH53,Dye83] solving the problem in dual form (the vertex enumeration problem) and a wrapping algorithm [CK70].
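To make the output-sensitive viewpoint concrete, here is a small 2D gift-wrapping (Jarvis march) sketch of my own, the planar analogue of the wrapping idea cited above [CK70]; its running time $O(nh)$ depends on the output size $h$ (number of hull vertices) as well as on the input size $n$.

```cpp
#include <cstdio>
#include <vector>

struct Pt { double x, y; };

// Cross product of (b - a) and (c - a); its sign tells which side of a->b the point c is on.
double cross(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// 2D gift wrapping (Jarvis march): O(n*h), where h is the number of hull vertices,
// i.e. the output size, an output-sensitive bound of the kind discussed above.
std::vector<Pt> convexHull(const std::vector<Pt>& pts) {
    const int n = static_cast<int>(pts.size());
    if (n < 3) return pts;

    int start = 0;                            // the leftmost point is certainly on the hull
    for (int i = 1; i < n; ++i)
        if (pts[i].x < pts[start].x) start = i;

    std::vector<Pt> hull;
    int cur = start;
    do {
        hull.push_back(pts[cur]);
        int next = (cur + 1) % n;             // candidate for the next hull vertex
        for (int i = 0; i < n; ++i)           // swing to any point lying to the right of cur->next
            if (cross(pts[cur], pts[next], pts[i]) < 0) next = i;
        cur = next;
    } while (cur != start);
    return hull;
}

int main() {
    std::vector<Pt> pts = {{0,0}, {2,0}, {2,2}, {0,2}, {1,1}, {1,0.5}};
    for (const Pt& p : convexHull(pts)) std::printf("(%g, %g)\n", p.x, p.y);
    return 0;
}
```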
A more recent algorithm [AF92] based on the reverse search technique [AF96] is not only polynomial but compact at the same time. Here, we say an algorithm is compact if its space complexity is polynomial in the input size only. In the general case, there is no known polynomial algorithm. The paper [ABS97] is an excellent article presenting how various algorithms fail to be polynomial, through ingenious constructions of "nasty" polytopes. Komei Fukuda 2004-08-26
https://pdglive.lbl.gov/Particle.action?node=S071&init=0
Extra Dimensions INSPIRE search

For explanation of terms used and discussion of significant model dependence of following limits, see the "Extra Dimensions" review. Footnotes describe originally quoted limit. ${{\mathit \delta}}$ indicates the number of extra dimensions. Limits not encoded here are summarized in the "Extra Dimensions" review, where the latest unpublished results are also described. See related review: Extra Dimensions

- Limits on $\mathit R$ from Deviations in Gravitational Force Law: $<30$ $\mu {\mathrm {m}}$, CL=95.0%
- Limits on $\mathit R$ from On-Shell Production of Gravitons, $\delta$ = 2: $<4.8$ $\mu {\mathrm {m}}$, CL=95.0%
- Mass Limits on $\mathit M_{\mathit TT}$: $> 9.02$ TeV, CL=95.0%
- Limits on 1/$\mathit R$ = $\mathit M_{{{\mathit c}}}$: $>4.16$ TeV, CL=95.0%
- Limits on Kaluza-Klein Gravitons in Warped Extra Dimensions: $> 4.25$ TeV, CL=95.0%
- Limits on Kaluza-Klein Gluons in Warped Extra Dimensions: $> 3.8$ TeV, CL=95.0%
- Black Hole Production Limits: Semiclassical Black Holes, Quantum Black Holes
https://mathoverflow.net/questions/186214/minimal-polynomial-for-a-graph
# minimal polynomial for a graph I wonder if there is any result relating the degree $d$ of the minimal polynomial of a directed finite graph to any of its topological features - such as its diameter, or any other similar 'natural' quantity. Ideally, this should help me, given knowledge of such a topological feature, to give an upper bound on $d$. How does the situation change if considering a weighted graph, such as the transition relation of a Markov chain? Context: I have a nonnegative matrix $Q$ (representing the transition relation of a generic Markov chain restricted to transient states) and a column vector $v$ (representing a generic initial sub-distribution on states). It is known that the least integer $m$ such that for each vector $v$ the Krylov space $K_m(Q,v)=\mathrm{span}\{v, Qv, Q^2 v,...,Q^{m-1}v\}$ is $Q$-invariant, is $m=d$, where $d$ is the degree of the minimal polynomial of $Q$. I'm trying to understand the relationship between this $d$ and the 'topology' of $Q$ seen as a graph, in the above sense. Assume there is only one connected component in the graph of $Q$. Any reference to this problem would be greatly appreciated. Best, Michele - Is the matrix $Q$ symmetric, and assuming that it's not, can it be reformulated to be symmetric? Spectral theory for undirected graphs is significantly more mature than that for directed graphs, which are inherently harder to analyze. –  Richard Zhang Nov 4 '14 at 22:42 Your statement on Krylov subspaces seems false; counterexample: take $v$ equal to an eigenvector of $Q$. The claim holds if you reverse the order of quantifiers (the least $m$ such that for each $v$ $K_m(Q,v)$ is $Q$-invariant is $d$). –  Federico Poloni Nov 4 '14 at 22:57 Let $G$ denote the graph associated with $Q$. Note that $G$ is strongly connected under your assumptions. The following results hold in general: $$D \le d \le n,$$ where $D$ and $n$ are the diameter and size of $G$, respectively. The upper bound follows from the Cayley-Hamilton theorem. The lower bound can be shown by contradiction; in particular, assume $D \ge d+1$, so there exist vertices $i$, $j$ whose (directed) distance is exactly $D$, i.e. the shortest path $i\to j$ has length $D$. Consider the vector $v = [0,\ldots, 1_i,0,\ldots,0]^T \in \mathbb{R}^n$. Because $Q$ is nonnegative and there is no walk of length less than $D$ connecting $i$ and $j$, the $j$-th entry of every vector in $K_d(Q,v)$ is zero. On the other hand $(Q^Dv)_j \neq 0$, yet $Q^Dv$ must lie in $K_d(Q,v)$ because the minimal polynomial of $Q$ has degree $d$; this is a contradiction. You can easily find graphs for which the bounds above are attained.
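A small illustrative check of the key fact in the argument above (my own sketch, not from the answer): for the directed 4-cycle, the walk-counting entry of $Q^k$ linking the two vertices at distance $D = 3$ stays zero for every $k < D$ and becomes positive exactly at $k = D$, which is why a minimal polynomial of degree at most $D$ would be contradictory.

```cpp
#include <array>
#include <cstdio>

// Adjacency matrix of the directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0.
// Its diameter is D = 3 (the shortest walk 0 -> 3 has length 3), its minimal
// polynomial is x^4 - 1, so d = 4, and indeed D <= d <= n here.
using Mat = std::array<std::array<double, 4>, 4>;

Mat mul(const Mat& A, const Mat& B) {
    Mat C{};
    for (int i = 0; i < 4; ++i)
        for (int k = 0; k < 4; ++k)
            for (int j = 0; j < 4; ++j)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

int main() {
    Mat Q{};
    Q[0][1] = Q[1][2] = Q[2][3] = Q[3][0] = 1.0;    // edges of the cycle

    Mat P{};                                        // P = Q^0 = I
    for (int i = 0; i < 4; ++i) P[i][i] = 1.0;

    // The (0,3) entry of Q^k counts walks of length k from 0 to 3: it is zero for
    // k < 3 and positive at k = 3, the fact the contradiction argument relies on.
    for (int k = 0; k <= 4; ++k) {
        std::printf("k = %d : (Q^k)[0][3] = %g\n", k, P[0][3]);
        P = mul(P, Q);
    }
    return 0;
}
```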
https://codeforces.com/blog/entry/72488
### sidhant's blog By sidhant, 2 years ago, Pre-requisite: Go through the Zeta/SOS DP/Yate's DP blog here Source: This blog post is an aggregation of the explanation done by arjunarul in this video, a comment by rajat1603 here and the paper on Fast Subset Convolution Notation: 1. set $s$ and mask $s$ are used interchangeably meaning the same thing. 2. $a \setminus b$ would mean set subtraction, i.e subtracting set $b$ from set $a$. 3. $|s|$ refers to the cardinality, i.e the size of the set $s$. 4. $\sum_{s' \subseteq s} f(s')$ refers to summing function $f$ over all possible subsets (aka submasks) $s'$ of $s$. Aim: Given functions $f$ and $g$ both from $[0, 2^n)$ to integers. Can be represented as arrays $f[]$ and $g[]$ respectively in code. We want to compute the following transformations fast: 1. Zeta Transform (aka SOS DP/Yate's DP): $z(f(s)) = \sum_{s' \subseteq s} f(s')$, $\forall s \in [0, 2^n)$ in $O(N*2^N)$ 2. Mobius Transform (i.e inclusion exclusion sum over subset): $\mu(f(s)) = \sum_{s' \subseteq s} (-1)^{|s \setminus s'|} f(s')$, $\forall s \in [0, 2^n)$ in $O(N*2^N)$. Here the term $(-1)^{|s \setminus s'|}$ just looks intimidating but simply means whether we should add the term or subtract the term in Inclusion-Exclusion Logic. 3. Subset Sum Convolution: $f \circ g(s) = \sum_{s' \subseteq s} f(s')g(s \setminus s')$, $\forall s \in [0, 2^n)$ in $O(N^2*2^N)$. In simpler words, take all possible ways to partition set $s$ into two disjoint partitions and sum over product of $f$ on one partition and $g$ on the complement of that partition. Firstly, Zeta Transform is explained already here already. Just go through it first and then come back here. Motivation: As an exercise and motivation for this blog post, you can try to come up with fast algorithms for Mobius transform and Subset Sum convolution yourself. Maybe deriving Mobius transform yourself is possible, but for Subset Sum convolution it seems highly unlikely, on the contrary if you can then you had the potential to publish the research paper mentioned in the source. Now, let us define a relatively easy transform, i.e Odd-Negation Transform: $\sigma(f(s)) = (-1)^{|s|}*f(s)$. This can be computed trivially in $O(2^{N})$ (assuming __builtin_popcount is $O(1)$). The name "Odd-Negation" gives the meaning of the transform, i.e if the subset $s$ is of odd cardinality then this tranform negates it, otherwise it does nothing. Now we will show 3 important theorems (each with proof, implementation and intuition), which are as follows: Theorem 1: $\sigma z \sigma(f(s)) = \mu(f(s))$, $\forall s \in [0, 2^n)$ Proof: For any given $s$ \begin{align} \sigma z \sigma(f(s)) &= (-1)^{|s|} * \sum_{s' \subseteq s} \sigma f(s')\\ &= (-1)^{|s|} * \sum_{s' \subseteq s} (-1)^{|s'|} f(s') \end{align} Now, notice two cases for parity of $|s|$ Case 1: $|s|$ is even. Then $(-1)^{|s|} = 1$. And $(-1)^{|s'|} = (-1)^{|s \setminus s'|}$ because $|s'| \bmod 2 = |s \setminus s'| \bmod 2$, when $|s|$ is even. So, \begin{align} \sigma z \sigma(f(s)) &= \sum_{s' \subseteq s} (-1)^{|s \setminus s'|} f(s')\\ &= \mu(f(s)) \end{align} Case 2: $|s|$ is odd. Then $(-1)^{|s|} = -1$. And $(-1)^{|s'|} = -(-1)^{|s \setminus s'|}$ because $|s'| \bmod 2 \neq |s \setminus s'| \bmod 2$ when $|s|$ is odd. So, \begin{align} \sigma z \sigma(f(s)) &= (-1) * \sum_{s' \subseteq s} -(-1)^{|s \setminus s'|} f(s')\\ &= \sum_{s' \subseteq s} (-1)^{|s \setminus s'|} f(s')\\ &= \mu(f(s)) \end{align} QED. 
Theorem 1 implies that the Mobius transform can be done by simply applying the 3 transforms one after the other like the following, and it will be $O(N*2^N)$:

Implementation

```cpp
// Apply odd-negation transform
for(int mask = 0; mask < (1 << N); mask++) {
    if((__builtin_popcount(mask) % 2) == 1) {
        f[mask] = -f[mask];
    }
}
// Apply zeta transform
for(int i = 0; i < N; i++) {
    for(int mask = 0; mask < (1 << N); mask++) {
        if((mask & (1 << i)) != 0) {
            f[mask] += f[mask ^ (1 << i)];
        }
    }
}
// Apply odd-negation transform
for(int mask = 0; mask < (1 << N); mask++) {
    if((__builtin_popcount(mask) % 2) == 1) {
        f[mask] = -f[mask];
    }
}
```

Intuition: The first $\sigma$ transform just negates the odd masks. Then we do zeta over it, so each element stores the sum of even submasks minus the sum of odd submasks. Now if the $s$ being evaluated is even, then this is correct; otherwise this is inverted, since the odds should be the ones being added in and the evens being subtracted. Therefore, we applied the $\sigma$ transform again.

Now, a somewhat not-so-important theorem:

Theorem 2: $z^{-1}(f(s)) = \mu(f(s))$, $\forall s \in [0, 2^n)$, i.e. Inverse SOS DP/Inverse Zeta transform is equivalent to the Mobius transform; i.e. the Zeta Transform and the Mobius Transform are inverses of each other: $z(\mu(f(s))) = f(s) = \mu(z(f(s)))$.

This is not immediately obvious. But once someone thinks more about how to do Inverse SOS, i.e. given $z(f)$, how to obtain $f$ fast, we realise we need to do an inclusion-exclusion logic on the subsets, i.e. a Mobius transform. We will skip the proof for this, although it can be viewed from here if anyone is interested. (No Proof and Intuition section)

The interesting thing that comes out of this is that for mobius/inverse zeta/inverse SOS we have a shorter implementation which works out as follows:

Implementation

```cpp
for(int i = 0; i < N; i++) {
    for(int mask = 0; mask < (1 << N); mask++) {
        if((mask & (1 << i)) != 0) {
            f[mask] -= f[mask ^ (1 << i)];
        }
    }
}
```

Here in this implementation, after the $i^{th}$ iteration, $f[s]$ will denote $\sum_{s' \in F(i, s)} (-1)^{|s \setminus s'|} f(s')$, where $F(i, s)$ denotes the set of all subsets of $s$ which only differ in the least significant $i$ bits from $s$. That is, for a given $s'$, $s' \in F(i, s)$ IFF $s' \subseteq s$ AND ($s'$ & $s$) >> $i$ $=$ $s$ >> $i$ (i.e. all bits excluding the least significant $i$ bits match in $s$ and $s'$). When $i = N$, we observe $F(N, s) =$ the set of all subsets of $s$, and thus we have arrived at the Mobius Transform.

Another interesting observation here is that: if we generalise the statement f[mask] (+/-)= f[mask ^ (1 << i)] to f[mask] = operation(f[mask], f[mask ^ (1 << i)]), then if the operation here applied multiple times yields the same thing (Ex. Add, Max, Gcd) then it is equivalent to SOS style combine, i.e. $f(s) = \text{operation}_{s' \subseteq s} f(s')$; otherwise, it may NOT behave as SOS style combine (Ex. Subtraction).

Now the next is subset sum convolution. Before that, let us define the following:

$\hat{f}(i, s) = \begin{cases} f(s),& \text{if } |s| = i\\ 0, & \text{otherwise} \end{cases}$

$\hat{g}(i, s) = \begin{cases} g(s),& \text{if } |s| = i\\ 0, & \text{otherwise} \end{cases}$

These just mean that $\hat{f}(k, \dots)$ and $\hat{g}(k, \dots)$ will be concentrating only on those sets/masks which have cardinality (number of bits on) $= k$.
Theorem 3: $f \circ g(s) = z^{-1}(\sum_{i = 0}^{|s|} z(\hat{f}(i, s)) * z(\hat{g}(|s| - i, s)))$, $\forall s \in [0, 2^n)$

Proof: Let $p(k, s)$ be defined as follows:
\begin{align} p(k, s) = \sum_{i = 0}^{k} \sum_{\substack{a \subseteq s \\ |a| = i}} \sum_{\substack{b \subseteq s \\ |b| = k - i \\ a \cup b = s}} f(a)g(b) \end{align}

Then $p(|s|, s) = f \circ g(s)$. Proof:
\begin{align} p(|s|, s) &= \sum_{i = 0}^{|s|} \sum_{\substack{a \subseteq s \\ |a| = i}} \sum_{\substack{b \subseteq s \\ |b| = |s| - i \\ a \cup b = s}} f(a)g(b)\\ &= \sum_{i = 0}^{|s|} \sum_{\substack{a \subseteq s \\ |a| = i}} f(a)g(s \setminus a) \text{ [Because only }b = s \setminus a\text{ is a valid entity of the inner summation]} \\ &= \sum_{a \subseteq s} f(a)g(s \setminus a)\\ &= f \circ g(s) \end{align}

Let $h(k, s)$ denote the following summation, i.e.
\begin{align} h(k, s) &= \sum_{i = 0}^{k} z(\hat{f}(i, s)) * z(\hat{g}(k - i, s))\\ &= \sum_{i = 0}^{k} (\sum_{\substack{a \subseteq s \\ |a| = i}} f(a)) * (\sum_{\substack{b \subseteq s \\ |b| = k - i}} g(b))\\ &= \sum_{i = 0}^{k} \sum_{\substack{a \subseteq s \\ |a| = i}} \sum_{\substack{b \subseteq s \\ |b| = k - i}} f(a)g(b)\\ &= \sum_{i = 0}^{k} \sum_{s' \subseteq s} \sum_{\substack{a \subseteq s' \\ |a| = i}} \sum_{\substack{b \subseteq s' \\ |b| = k - i \\ a \cup b = s'}} f(a)g(b)\\ &= \sum_{s' \subseteq s} \sum_{i = 0}^{k} \sum_{\substack{a \subseteq s' \\ |a| = i}} \sum_{\substack{b \subseteq s' \\ |b| = k - i \\ a \cup b = s'}} f(a)g(b)\\ &= \sum_{s' \subseteq s} p(k, s') \end{align}

So, the RHS of Theorem 3 looks like this:
\begin{align} &= z^{-1}(h(|s|, s))\\ &= z^{-1}(\sum_{s' \subseteq s} p(|s|, s'))\\ &= p(|s|, s)\\ &= f \circ g(s) \end{align}
QED.

Theorem 3 implies that subset sum convolution can be done by applying SOS and Inverse SOS DP $N$ times, once for each cardinality (therefore the complexity is $O(N^2*2^N)$), as follows:

Implementation

```cpp
// Make fhat[][] = {0} and ghat[][] = {0}, then fill in the cardinality-split copies
for(int mask = 0; mask < (1 << N); mask++) {
    fhat[__builtin_popcount(mask)][mask] = f[mask];
    ghat[__builtin_popcount(mask)][mask] = g[mask];
}
// Apply zeta transform on fhat[][] and ghat[][] (cardinality index i runs 0..N inclusive)
for(int i = 0; i <= N; i++) {
    for(int j = 0; j < N; j++) {
        for(int mask = 0; mask < (1 << N); mask++) {
            if((mask & (1 << j)) != 0) {
                fhat[i][mask] += fhat[i][mask ^ (1 << j)];
                ghat[i][mask] += ghat[i][mask ^ (1 << j)];
            }
        }
    }
}
// Do the convolution and store into h[][] = {0}
for(int mask = 0; mask < (1 << N); mask++) {
    for(int i = 0; i <= N; i++) {
        for(int j = 0; j <= i; j++) {
            h[i][mask] += fhat[j][mask] * ghat[i - j][mask];
        }
    }
}
// Apply inverse SOS dp on h[][]
for(int i = 0; i <= N; i++) {
    for(int j = 0; j < N; j++) {
        for(int mask = 0; mask < (1 << N); mask++) {
            if((mask & (1 << j)) != 0) {
                h[i][mask] -= h[i][mask ^ (1 << j)];
            }
        }
    }
}
// The answer is fog[mask] = h[__builtin_popcount(mask)][mask]
```

Intuition: The expression
\begin{align} &= \sum_{i = 0}^{|s|} z(\hat{f}(i, s)) * z(\hat{g}(|s| - i, s))\\ &= \sum_{i = 0}^{|s|} (\sum_{\substack{a \subseteq s \\ |a| = i}} f(a)) (\sum_{\substack{b \subseteq s \\ |b| = |s| - i}} g(b))\\ \end{align}
stores the sum of $f(a)g(b)$ where $a$ is a subset of $s$, $b$ is a subset of $s$ and $|a| + |b| = |s|$. If we reframe this summation in terms of the union of $a$ and $b$ being equal to $s'$, where $s'$ is a subset of $s$, we can restate the summation as
\begin{align} &= \sum_{s' \subseteq s} \sum_{\substack{a, b \subseteq s' \\ a \cup b = s' \\ |a| + |b| = |s|}} f(a)g(b) \end{align}
If we look at this closely, it is Inverse SOSable (which can be seen from the summation over all possible subsets of $s$). Once we do Inverse SOS, i.e. apply $z^{-1}$, we get
\begin{align} &= \sum_{\substack{a, b \subseteq s \\ a \cup b = s \\ |a| + |b| = |s|}} f(a)g(b) = f \circ g(s) \end{align}

Problems to Practice: Problems mentioned in the SOS blog here, and for Subset Sum Convolution I only know of this one as of now: 914G - Sum the Fibonacci. But the technique seems super nice and I hope new problems do come up in the near future.
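For anyone wiring the above into a solution, a simple self-check of my own (not part of the blog) is to compare against the $O(3^N)$ brute force that follows the definition directly:

```cpp
#include <cstdio>
#include <vector>

// Brute-force subset sum convolution: f o g(s) = sum over submasks s' of s of f(s') * g(s \ s').
// Useful as a reference to validate the O(N^2 * 2^N) routine on small N.
int main() {
    const int N = 4;
    std::vector<long long> f(1 << N), g(1 << N), fog(1 << N, 0);
    for (int mask = 0; mask < (1 << N); ++mask) { f[mask] = mask + 1; g[mask] = 2 * mask + 3; }

    for (int s = 0; s < (1 << N); ++s) {
        // enumerate all submasks s' of s, including 0 and s itself
        for (int sub = s; ; sub = (sub - 1) & s) {
            fog[s] += f[sub] * g[s ^ sub];
            if (sub == 0) break;
        }
    }
    std::printf("f o g(full set) = %lld\n", fog[(1 << N) - 1]);
    return 0;
}
```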
• +168 » 2 years ago, # |   +3 Auto comment: topic has been updated by sidhant (previous revision, new revision, compare). » 2 years ago, # |   +9 Very well done, this is immediately useful to a wide variety of problems, is relatively easy to understand (compared to dirichlet stuff), and provides a simple no-nonsense implementation. Appreciate it~ » 11 months ago, # |   +14 Thanks for the blog. I know it's necroposting but this is the only blog I have seen on subset convolution on cf.Code at the end of the blog is buggy.Three for loops should be for(int i = 0; i <= N; i++) instead of for(int i = 0; i < N; i++).`i » 5 weeks ago, # | ← Rev. 2 →   -24 Sorry about the necropost, but I don't quite get the reduction between the third-last and second-last step in proving the third theorem, ie. \begin{align*} &= z^{-1}\left\{\sum_{s' \subseteq s} p(|s|, s) \right\} \\ &= p(|s|, s) \end{align*}Could someone explain it to me?Thank you for the blog btw, it's the only readable resource that I could find on subset convolution.
https://www.aps.org/programs/honors/prizes/prizerecipient.cfm?last_nm=Baxter&first_nm=Rodney&year=1987
# Prize Recipient ## Rodney Baxter Citation: "For his novel use of mathematical analysis to solve in exact analytical form problems of fundamental importance in statistical mechanics relating directly to cooperative phenomena, phase transitions and quantum field theory."
http://www.ck12.org/book/CK-12-Basic-Algebra-Concepts/r12/section/3.12/
# 3.12: Formulas for Problem Solving

Difficulty Level: Basic. Created by: CK-12.

Suppose you were allowed to enter formulas into your graphing calculator before taking a quiz. Would you know how to enter the formula for the surface area of a cube? How about the formula for the volume of a cone? What about the formula for the area of a rectangle? In this Concept, you'll learn how to use all these formulas, so you may not need your calculator after all!

### Guidance

Some problems are easily solved by applying a formula, such as the Percent Equation or the area of a circle. Formulas are especially common in geometry, and we will see a few of them in this lesson. In many real-world problems, you have to write your own equation, and then solve for the unknown.

#### Example A

The surface area of a cube is given by the formula $\text{Surface Area}=6x^2$, where $x=$ side of the cube. Determine the surface area of a die with a 2-inch side length.

Solution: Since $x=2$ in this problem, we substitute that into the formula for surface area.

$\text{Surface Area}=6x^2=6(2)^2=6\cdot 4=24$

The surface area is 24 square inches.

#### Example B

A 500-sheet stack of copy paper is 1.75 inches high. The paper tray on a commercial copy machine holds a two-foot-high stack of paper. Approximately how many sheets is this?

Solution: In this situation, we will write an equation using a proportion.

$\frac{\text{number of sheets}}{\text{height}}=\frac{\text{number of sheets}}{\text{height}}$

We need our heights to have the same units, so we will figure out how many inches are in 2 feet. Since 1 foot is 12 inches, 2 feet are equivalent to 24 inches.

$\frac{500}{1.75}=\frac{\text{number of sheets}}{24}$

$\frac{500}{1.75}\times 24=\text{number of sheets}$

$\text{number of sheets} \approx 6857$

A two-foot-high stack of paper will be approximately 6,857 sheets of paper.

#### Example C

The volume of a cone is given by the formula $\text{Volume} = \frac{\pi r^2 (h)}{3}$, where $r=$ the radius, and $h=$ the height of the cone. Determine the amount of liquid a paper cone can hold with a 1.5-inch diameter and a 5-inch height.

Solution: We want to substitute in values for the variables. But first, we are given the diameter, and we need to find the radius. The radius is half of the diameter, so $1.5 \div 2=1.5 \times \frac{1}{2}=0.75.$

$\text{Volume} = \frac{\pi r^2 (h)}{3} = \frac{\pi (0.75)^2 (5)}{3}$

We evaluate the expression in our calculator, using $\pi \approx 3.14.$

$\text{Volume} \approx 2.94$

The volume of the cone is approximately 2.94 inches cubed. It is approximate because we used an approximation of $\pi.$

### Guided Practice

An architect is designing a room that is going to be twice as long as it is wide. The total square footage of the room is going to be 722 square feet. What are the dimensions in feet of the room?

A formula applies very well to this situation. The formula for the area of a rectangle is $A=l(w)$, where $l=$ length and $w=$ width. From the situation, we know the length is twice as long as the width.
Translating this into an algebraic equation, we get: $A=(2w)w$ Simplifying the equation: $A=2w^2$ Substituting the known value for $A$ : $722=2w^2$ $2w^2 & = 722 && \text{Divide both sides by} \ 2. \\w^2 & = 361 && \text{Take the square root of both sides}. \\ w & = \sqrt{361} = 19 \\2w & = 2 \times 19 = 38 \\w & = 19$ The width is 19 feet and the length is 38 feet. ### Practice Sample explanations for some of the practice exercises below are available by viewing the following video. Note that there is not always a match between the number of the practice exercise in the video and the number of the practice exercise listed in the following exercise set. However, the practice exercise is the same in both. CK-12 Basic Algebra: Word Problem Solving 3 (11:06) 1. Patricia is building a sandbox for her daughter. It's going to be five feet wide and eight feet long. She wants the height of the sandbox to be four inches above the height of the sand. She has 30 cubic feet of sand. How high should the sandbox be? 2. It was sale day at Macy’s and everything was 20% less than the regular price. Peter bought a pair of shoes, and using a coupon, got an additional 10% off the discounted price. The price he paid for the shoes was $36. How much did the shoes cost originally? 3. Peter is planning to show a video file to the school at graduation, but he's worried that the distance the audience sits from the speakers will cause the sound and the picture to be out of sync. If the audience sits 20 meters from the speakers, what is the delay between the picture and the sound? (The speed of sound in air is 340 meters per second.) 4. Rosa has saved all year and wishes to spend the money she has on new clothes and a vacation. She will spend 30% more on the vacation than on clothes. If she saved$1000 in total, how much money (to the nearest whole dollar) can she spend on the vacation? 5. On a DVD, data is stored between a radius of 2.3 cm and 5.7 cm. Calculate the total area available for data storage in square cm. 6. If a Blu-ray $^{TM}$ DVD stores 25 gigabytes (GB), what is the storage density in GB per square cm ? 7. The volume of a cone is given by the formula $Volume = \frac{\pi r^2 (h)}{3}$ , where $r=$ the radius , and $h=$ the height of the cone . Determine the amount of liquid a paper cone can hold with a 1.5-inch diameter and a 5-inch height. 8. Consider the conversion $1 \ meter = 39.37 \ inches$ . How many inches are in a kilometer? (Hint: A kilometer is equal to 1,000 meters.) 9. Yanni’s motorcycle travels $108 \ miles/hour$ . $1 \ mph = 0.44704 \ meters/second$ . How many meters did Yanni travel in 45 seconds? 10. The area of a rectangle is given by the formula $A=l(w)$ . A rectangle has an area of 132 square centimeters and a length of 11 centimeters. What is the perimeter of the rectangle? Mixed Review 1. Write the following ratio in simplest form: 14:21. 2. Write the following ratio in simplest form: 55:33. 3. Solve for $a:\ \frac{15a}{36} = \frac{45}{12}$ . 4. Solve for $x:\ \frac{4x+5}{5} = \frac{2x+7}{7}$ . 5. Solve for $y:\ 4(x-7)+x = 2$ . 6. What is 24% of 96? 7. Find the sum: $4 \frac{2}{5}- \left (- \frac{7}{3} \right )$ . Basic 8 , 9 Feb 24, 2012 Jul 22, 2014
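A small illustrative sketch (mine, not part of the CK-12 text) that evaluates the three formulas used in the examples above, cube surface area, the paper-stack proportion, and cone volume, so the arithmetic can be checked quickly:

```cpp
#include <cstdio>

int main() {
    const double pi = 3.14159265358979;

    // Example A: surface area of a cube with side x (in inches).
    const double x = 2.0;
    std::printf("cube surface area = %.1f in^2\n", 6.0 * x * x);

    // Example B: sheets in a 24-inch stack if 500 sheets are 1.75 inches high.
    const double sheets = 500.0 / 1.75 * 24.0;
    std::printf("sheets in a 2-ft stack ~ %.0f\n", sheets);

    // Example C: volume of a cone with diameter 1.5 in and height 5 in.
    const double r = 1.5 / 2.0, h = 5.0;
    std::printf("cone volume ~ %.2f in^3\n", pi * r * r * h / 3.0);
    return 0;
}
```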
https://brilliant.org/discussions/thread/unfinished-bussiness/
2 days and 20 hrs ago Calvin Lin posted a pic here which started to get a lot of heat soon... and everyone who was involved shared their view about it there. Anyone interested in what it was about can visit the note, have a look at it, and also go through the comments. This note is to extend my view further. This photo is meant to show that there are extremely sophisticated cameras which can very nicely capture an extremely difficult moment. These are pics which show the inversions a raindrop can produce. Some of them were taken on a rainy day, so the inverted image won't be as sharp as expected because the outside (the object being inverted) is itself hazy due to the weather. And some of them are really sharp. So let's have a look at them. Whether the photo Calvin posted was real or not is a different question. What I want to say is that it is very probable that anyone can capture such an effect on a camera. :) So what do you think? Note by Soumo Mukherjee 4 years, 9 months ago
Staff - 4 years, 9 months ago $\dfrac{1}{f}=\dfrac{1}{v}-\dfrac{1}{u}$ - 4 years, 9 months ago An equation says a thousand words ! - 4 years, 9 months ago This is just like the optical experiments we do in our school's physics lab. - 4 years, 8 months ago But I didn't get the second image. Is it inverted?? - 4 years, 8 months ago Awesome!!☆ - 4 years, 8 months ago Ossum! - 4 years, 8 months ago
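As a rough numerical companion to the lens-equation comment above (my own sketch; sign conventions vary, and here I use the Cartesian convention in which the formula reads $\frac{1}{f}=\frac{1}{v}-\frac{1}{u}$ with real objects at negative $u$): an object outside the focal length produces a real, inverted image, which is the raindrop-as-lens effect being discussed.

```cpp
#include <cstdio>

int main() {
    const double f = 10.0;                            // focal length of the "drop", arbitrary units
    const double objects[] = {-15.0, -30.0, -100.0};  // real objects: u < 0 in this convention

    for (double u : objects) {
        const double v = 1.0 / (1.0 / f + 1.0 / u);   // from 1/f = 1/v - 1/u
        const double m = v / u;                       // magnification; m < 0 means inverted
        std::printf("u = %6.1f  ->  v = %6.1f, m = %+.2f (%s image)\n",
                    u, v, m, m < 0 ? "inverted" : "upright");
    }
    return 0;
}
```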
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9994756579399109, "perplexity": 2475.0596963885437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00067-ip-10-60-113-184.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/866790/simple-high-school-inequality-question
Simple high school inequality question

I'm reviewing some basic high school algebra and I would like to know when this function:

$$2x-1-\frac{1}{x}$$

is positive.

Solution

Suppose $x\gt0$; then $2x-1-\frac{1}{x}\ge0\Leftrightarrow 2x^2-x-1\ge 0\Leftrightarrow x\ge 1$.

Suppose $x\lt 0$; then $2x-1-\frac{1}{x}\ge0\Leftrightarrow 2x^2-x-1\le 0\Leftrightarrow -1/2\le x\lt 0$.

Then $2x-1-\frac{1}{x}\ge 0 \Leftrightarrow x\ge 1$ and $-1/2\le x\lt 0$. Am I right? Thanks

Comments:

Since $x\neq0$, you can also try multiplying by $x^2$ instead of $x$ – Thanos Darkadakis Jul 14 '14 at 10:56

@ThanosDarkadakis good remark, this simplifies everything; can you post this as an answer so that I can upvote it? – user85493 Jul 14 '14 at 11:01

Answer. Since $x\neq0$, instead of taking two cases for $x\gt0$ and $x\lt0$, note that your function $f(x)$ has the same sign as $x^2f(x)$:

$x^2f(x)=x^2\left(2x-1-\frac{1}{x}\right)=2x^3-x^2-x=2x(x-1)\left(x+\tfrac12\right)$

Answer. It should be $-1/2<x<0$ or $x>1$. Think about it: how can $x$ satisfy both of these inequalities at the same time? Also note the strict inequalities: it should be $<$ rather than $\leq$, and $>$ rather than $\geq$.

How do you solve this inequality without the assumptions $x\gt 0$ and $x\lt 0$? – user85493 Jul 14 '14 at 10:55

Answer. $2x-1-\frac{1}{x}=\frac{2x^2-x-1}{x}=\frac{2(x-1)\left(x+\frac{1}{2}\right)}{x}$
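For anyone who wants to double-check the sign analysis mechanically, here is a small sketch using SymPy. SymPy is not part of the original thread; it is assumed to be installed, and the expected outputs in the comments are what the factorisation above predicts.

```python
# Quick check of the sign analysis (assumes SymPy is available).
from sympy import symbols, factor, solve_univariate_inequality

x = symbols('x', real=True)
f = 2*x - 1 - 1/x

# Multiplying by x**2 > 0 (for x != 0) preserves the sign of f:
print(factor(x**2 * f))
# expected: x*(x - 1)*(2*x + 1)

# Solve the strict inequality directly:
print(solve_univariate_inequality(f > 0, x))
# expected: ((-1/2 < x) & (x < 0)) | (1 < x), i.e. -1/2 < x < 0 or x > 1
```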
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9523807168006897, "perplexity": 381.4767554663511}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927863.72/warc/CC-MAIN-20150521113207-00210-ip-10-180-206-219.ec2.internal.warc.gz"}