url (string, lengths 15–1.13k) · text (string, lengths 100–1.04M) · metadata (string, lengths 1.06k–1.1k)
https://getpractice.com/subjects/physics/mechanical-properties-of-matter
### Mechanical Properties of Matter

The bulk moduli of ethanol, mercury and water are given as $0.9$, $25$ and $2.2$ respectively, in units of $10^8\,\mathrm{N\,m^{-2}}$. For a given value of pressure, the fractional compression in volume is $\Delta V/V$. Which of the following statements about $\Delta V/V$ for these three liquids is correct?

Define elasticity.

A thin rod of length $\ell$ and mass $m$ is suspended horizontally by two vertical strings, A and B, one attached at each end of the rod. The density of the rod is given by $\rho(x) = \rho_0 (x/\ell)^3$, where $x = 0$ corresponds to the position of string A. Find the tensions in strings A and B; give your answer in terms of $m$, $\ell$ and $g$.

Hooke's law defines:

A bar made of a material whose Young's modulus is $E$ and whose Poisson's ratio is $\mu$ is subjected to hydrostatic pressure $p$.
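A worked sketch for the two-string question (my addition, not part of the original question set), treating $\rho_0(x/\ell)^3$ as the rod's mass per unit length:

$$m = \int_0^{\ell} \rho_0\left(\frac{x}{\ell}\right)^{3} dx = \frac{\rho_0 \ell}{4}, \qquad x_{\mathrm{cm}} = \frac{1}{m}\int_0^{\ell} x\,\rho_0\left(\frac{x}{\ell}\right)^{3} dx = \frac{4\ell}{5}.$$

Balancing torques about string A gives $T_B \ell = m g\, x_{\mathrm{cm}}$, so $T_B = \frac{4}{5}mg$ and, from the force balance $T_A + T_B = mg$, $T_A = \frac{1}{5}mg$.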
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9082600474357605, "perplexity": 487.6751641405131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570901.18/warc/CC-MAIN-20220809033952-20220809063952-00358.warc.gz"}
http://stackoverflow.com/questions/3392487/objective-c-code-in-latex-listings?answertab=oldest
# Objective-C code in LaTeX listings

I'm searching for a way to typeset Objective-C in LaTeX. I want to display the same syntax highlighting in LaTeX as in Xcode. I tried it this way:

    \lstset{language=[Objective]C,label=code:MyCodeLabel,caption=A small caption,name=code:MyCode, breakindent=40pt, breaklines}
    \begin{lstlisting}
    NSLog(@"Test it: %@",[[[[XMLNavigation objectAtIndex:1] elementsForName:@"text"] objectAtIndex:0] stringValue]);
    \end{lstlisting}

I think I have to add some more keywords to the library. Or is there a way to make it look like it does in Xcode? For me it is important that all NS libraries are visible as keywords. Thanks
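A possible starting point (my sketch, not an accepted answer from the thread): the `listings` package lets you register extra identifiers via `morekeywords` and style them to roughly approximate Xcode's colors. The keyword list below is illustrative and far from exhaustive; `XMLNavigation` is the poster's own class.

```latex
\documentclass{article}
\usepackage{xcolor}
\usepackage{listings}
\lstset{
  language=[Objective]C,
  % illustrative subset of Foundation identifiers; extend as needed
  morekeywords={NSLog,NSString,NSArray,NSDictionary,NSObject,NSNumber,XMLNavigation},
  keywordstyle=\color{purple}\bfseries, % rough match for Xcode's keyword color
  stringstyle=\color{red},
  commentstyle=\color{green!50!black},
  basicstyle=\ttfamily\small,
  breaklines=true
}
\begin{document}
\begin{lstlisting}
NSLog(@"Test it: %@", [myArray objectAtIndex:0]);
\end{lstlisting}
\end{document}
```

There is no built-in "Xcode theme", so matching it exactly means picking the colors by hand; the styles above are only an approximation.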
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9087002277374268, "perplexity": 2267.227617825119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654052/warc/CC-MAIN-20140305060734-00026-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.flyingcoloursmaths.co.uk/ask-uncle-colin-what-does-this-sum-to/
Dear Uncle Colin, I've been asked to find the sum of the infinite series $\frac{3}{2} + \frac{5}{4} + \frac{7}{8} + \dots + \frac{2n+1}{2^n} + \dots$. How would you go about it? Some Expertise Required In Evaluating Sum

Hi, SERIES, and thanks for your message! I can see a couple of ways of going about it: a sort of "split and recognise" approach, and a more general generating functions approach. There may be others I've missed!

### Split and recognise

We can split this up into two series:

• $S_1 = 2\left(\frac{1}{2} + \frac{2}{4} + \frac{3}{8} + \dots + \frac{n}{2^n} + \dots\right)$ and
• $S_2 = \frac{1}{2} + \frac{1}{4} + \dots + \frac{1}{2^n} + \dots$

$S_2$ is, of course, a well-known geometric series that sums to 1. What about $S_1$? I'm going to multiply the 2 in first: $S_1 = 1 + \frac{2}{2} + \frac{3}{4} + \dots + \frac{n}{2^{n-1}} + \dots$. Now, if I let $x=\frac{1}{2}$, that's the same as $S_1 = 1 + 2x + 3x^2 + \dots + nx^{n-1} + \dots$, which is the binomial expansion of $(1 - x)^{-2}$. So, with $x= \frac{1}{2}$, that gives $S_1=\left(\frac{1}{2}\right)^{-2} = 4$. The series sums to $S_1 + S_2 = 5$. A nice answer!

### Generating functions

Ah, my favourite tool. I can hardly see a series question without wondering if there's a way to use GFs. This one's no different. I'm going to start by letting $x=\frac{1}{2}$ and write the sum as a function of $x$ so I get:

• $G(x) = 3x + 5x^2 + 7x^3 + \dots + (2n+1)x^n + \dots$ (and I'm interested, in the end, in $G\left(\frac{1}{2}\right)$)

Now, there's a common difference in the coefficients, which suggests multiplying by $x$ (to shift the coefficients across) and subtracting (to get a simpler polynomial).

• $xG(x) = 3x^2 + 5x^3 + 7x^4 + \dots + (2n+1)x^{n+1} + \dots$
• $(1-x)G(x) = 3x + 2\left(x^2 + x^3 + x^4 + \dots\right)$

That thing in the big bracket is a geometric series with first term $x^2$ and common ratio $x$:

• $(1-x)G(x) = 3x + 2\frac{x^2}{1-x}$

And now we can just substitute in $x= \frac{1}{2}$! (I was tempted to divide the bracket over, but it's probably simpler not to here.)

• $\frac{1}{2}G\left(\frac{1}{2}\right) = \frac{3}{2} + 2\frac{1/4}{1/2}$
• That final fraction needs a bit of care: multiplying the 2 in means doubling the top, and makes it $\frac{1/2}{1/2}=1$.
• $\frac{1}{2}G\left(\frac{1}{2}\right) = \frac{3}{2} + 1$
• Double it all: $G\left(\frac{1}{2}\right) = 3 + 2 = 5$, as before!

I'm not saying the second way is easier, just that it's interesting, and the kind of approach you can apply elsewhere! Hope that helps, - Uncle Colin

• Edited 2022-07-01 to fix a LaTeX error. Thanks to Adam for pointing it out.
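A quick numerical sanity check (my addition, not part of Uncle Colin's answer): partial sums of $\frac{2n+1}{2^n}$ should approach 5.

```python
# Partial sums of (2n+1)/2^n for n = 1, 2, ... converge to 5.
total = 0.0
for n in range(1, 60):
    total += (2 * n + 1) / 2 ** n
print(total)  # ~5.0
```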
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9456102848052979, "perplexity": 517.9258294383985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573630.12/warc/CC-MAIN-20220819070211-20220819100211-00735.warc.gz"}
https://www.physicsforums.com/threads/question-field-density-vs-wave-form.29372/
# Question ? Field Density vs. Wave form ?

1. Jun 5, 2004 ### shintashi

I have a question for you all. OK, there appear in my head two forms of stuff. The first stuff is expansive, out to infinity, and functions as a field. Gravity and electromagnetic fields seem to be of this stuff. These "wave" forms in particular seem to cascade away from the core, at an X^2 or X^3 ratio. The ambience of the sun often feels like this.

The second stuff is self-eating. It is circular. It is a lazy 8. A loop of a wave form. Essentially, this stuff is what my jellotivity theory (not to be confused with jellytivity), or aether theory, might call particles. You see, to me, an object, per se, is a relationship between different "flows" or circuits of aether. I stopped using the term spacetime, because spacetime itself seems to be part of these circuits, not the circuits themselves.

Now, one way of looking at it is that the first stuff would be like bosons, and the second stuff would be fermions, but that's only a vague connection. I know there's something more to it than this. To me, a photon is a "wave packet", just like a stream of waves from a laser. Sort of a "ball". This ball could also be seen on a larger scale as the Earth as a whole (magnetosphere and all) or the sun. Using the earth/sun model, what I'm getting at is that the first type of stuff is like the gravity, magnetosphere, and field emissions (such as from the sun), which decay exponentially as you leave the surface, while the earth/solid surface, the physical "ball" you see of the sun with its nuclear core etc., is the second stuff. I think that this same model exists on the quantum scale, for all "particles". That is to say, fermions and bosons aren't distinct, but reflections of the same mechanism, although it would seem that these mechanisms might be able to produce both "fields" and "wave packets". Any ideas? (besides anyons?)

2. Jun 5, 2004 ### Antonio Lao

The density of the gravitational field is the mass density. The density of the electric field is the electric charge density. The density of the magnetic field is the current density. There is a subtle connection between charge density and current density, because currents are just the motion of electric charge. If this motion is uniform, there is no EM radiation. But if the motion is nonuniform and accelerated, then EM radiation is given out. In QM, this EM radiation becomes quantized in the form of the photon. The photon becomes the wave-form messenger that moves, communicates, and changes the field density from point to point within the framework of each particular field totality. For the gravity field, this messenger wave form is the not-yet-detected graviton. For the strong nuclear force field, this messenger particle is the gluon. For the electroweak force field, the wave-form messengers are the Ws, the Zs, and again the photon.

The above fields are all vector fields. That is to say, there is a force associated with each field. But in the field of the false vacuum, or the Higgs field, no force can be detected. This seems to indicate that there is no wave-form messenger for the vacuum field. Yet a hypothesized particle exists, called the Higgs boson. This is a scalar boson, in contrast to the vector bosons with force mentioned above. The density of the Higgs field must be infinite; in other words, there is no gap between one Higgs boson and its close neighbors.
So the wave form of the Higgs boson cannot be detected, because they never move from point to point in the field.

3. Jun 6, 2004 ### shintashi

quantum field theory

I guess what I'm asking is: do you think that centralized fermion-styled energy sources might actually emit a cascade wave out to infinity, and, oppositely, do you think boson-styled wave forms might have a centralized core, on perhaps a 9-dimensional map? What I'm thinking is the idea that fermions might somehow transmit their impact on spacetime ad infinitum, with exponential decay, but not through their independent wave fields, rather through the transmutation of the energy form into boson fields. This would be a different way of looking at fermions and bosons, but I think it might work.

I've also noted that if quantum substances, like particles or "packets", are anything like their celestial big brothers (stars and planets), they might experience a "total global magnetic polarity shift cycle", as the sun's and earth's magnetic poles flip regularly. I think if this is true on a quantum scale, we might find that while transitioning between strange and charmed, up and down, etc., quantum particles might become "anyons" for a transition period; also we might experience a matter-antimatter cycle within the working parts of atoms. I think we might see something similar to a cycle that looks like this: electron-neutrino-positron-neutrino-electron, etc. And based upon the frequency of the vibration of the particle, and its total wave distortion (sort of like mass, but including some other factors), it would have an "internal" clock, which occasionally would line up with the other particles around it.

4. Jun 6, 2004 ### Antonio Lao

My theory of fermionic and bosonic structures is given briefly in matrix notation: suppose $H^{+}$ and $H^{-}$ are matrices, where $H^{-}$ is the unit of a fermion and $H^{+}$ is the unit of a boson; then their interactions are governed by the following rules. Matrix additions give the value of electric charge. Matrix multiplications give the value of mass. And matrices can interact only if they are of the same order. Further rules of multiplication are:

$H^{+}H^{+} = \alpha H^{+}$
$H^{-}H^{-} = \beta H^{+}$
$H^{+}H^{-} = \gamma H^{-}$

$\alpha$ and $\beta$ are kinetic masses. $\gamma$ is potential mass.

Last edited: Jun 7, 2004
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8893773555755615, "perplexity": 893.4587906466669}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583508988.18/warc/CC-MAIN-20181015080248-20181015101748-00079.warc.gz"}
https://hal-insu.archives-ouvertes.fr/insu-01181737
# Increase of the electric field in head-on collisions between negative and positive streamers

Abstract : Head-on collisions between negative and positive streamer discharges have recently been suggested to be responsible for the production of high electric fields leading to X-ray emissions. Using a plasma fluid approach, we model head-on collisions between negative and positive streamers. We observe the occurrence of a very strong electric field at the location of the streamer collision. However, the enhancement of the field produces a strong increase in the electron density, which leads to a collapse of the field over only a few picoseconds. Using a Monte Carlo model, we have verified that this process is therefore not responsible for the acceleration of a significant number of electrons to energies >1 keV. We conclude that no significant X-ray emission could be produced by the head-on encounter of nonthermal streamer discharges. Moreover, we quantify the optical emissions produced in the streamer collision.

Document type : Journal articles. Cited literature [37 references]. Submitted on Thursday, July 30, 2015; last modification on Tuesday, May 10, 2022.

### Citation

Mohand Ameziane Ihaddadene, Sébastien Celestin. Increase of the electric field in head-on collisions between negative and positive streamers. Geophysical Research Letters, American Geophysical Union, 2015, 42, pp.5644-5651. ⟨10.1002/2015GL064623⟩. ⟨insu-01181737⟩
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9364656805992126, "perplexity": 2166.55659397643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573118.26/warc/CC-MAIN-20220817213446-20220818003446-00287.warc.gz"}
https://jascoinc.com/knowledgebase/structural-electrical-and-magnetic-properties-of-nano-sr1%E2%88%92xlaxfe12o19-x-0-2-0-8/
Structural, electrical, and magnetic properties of nano Sr1−XLaXFe12O19 (X = 0.2–0.8)

August 16, 2022

Title: Structural, electrical, and magnetic properties of nano Sr1−XLaXFe12O19 (X = 0.2–0.8)
Author: D. Baba Basha, N. Suresh Kumar, K. Chandra Babu Naidu & G. Ranjith Kumar
Year: 2022
Journal: Scientific Reports

Abstract: The current work is mainly devoted to the synthesis and the structural, electrical, and magnetic characterization of Sr1−XLaXFe12O19 (X = 0.2–0.8) (SLFO) nanoparticles synthesized via the hydrothermal technique. The hexagonal peaks were determined using X-ray diffraction analysis. The obtained results indicated that the lattice constants increase from 0.58801 to 0.58825 nm (a = b), and from 2.30309 to 2.30341 nm (c), with increase in 'X'. The morphological studies showed that the grains as well as the nanoparticles of SLFO acquired an almost spherical shape. The optical properties were investigated using FTIR and UV–Visible spectra. The optical bandgap (Eg) of SLFO was found to increase from 1.866 to 2.118 eV with increase of dopant content. The electrical properties of SLFO were studied in detail as a function of temperature and frequency. In addition, dielectric modulus and impedance spectroscopy analyses were carried out to describe the space charge polarization and the electric conduction mechanism, respectively. The hysteresis loops (M–H curves) of SLFO revealed a decrease of magnetization from 36.34 to 7.17 emu/g with increase in 'X'.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8130415081977844, "perplexity": 4000.4293933430226}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00624.warc.gz"}
https://www.varsitytutors.com/sat_ii_math_ii-help/graphing-trigonometric-functions
# SAT II Math II : Graphing Trigonometric Functions

## Example Questions

### Example Question #81 : Functions And Graphs

Give the amplitude of the graph of the function.

Explanation: The amplitude of the graph of a sine function $y = A \sin (Bx)$ is $|A|$: the absolute value of the leading coefficient.

### Example Question #82 : Functions And Graphs

Which of these functions has a graph with amplitude 4?

Explanation: The functions in each of the choices take the form of a cosine function $y = A \cos (Bx)$. The graph of a cosine function in this form has amplitude $|A|$. Therefore, for a function to have amplitude 4, we need $|A| = 4$. Of the five choices, only one matches this description.

### Example Question #83 : Functions And Graphs

Which of these functions has a graph with the given amplitude?

Explanation: The functions in each of the choices take the form of a sine function $y = A \sin (Bx)$. The graph of a sine function in this form has amplitude $|A|$. Of the five choices, only one matches this description.

### Example Question #84 : Functions And Graphs

Which of the following sine functions has a graph with a period of 7?

Explanation: The period of the graph of a sine function $y = A \sin (Bx)$ is $\frac{2\pi}{B}$. Therefore, we solve $\frac{2\pi}{B} = 7$ for $B$, giving $B = \frac{2\pi}{7}$. The correct choice is the function with this value of $B$.

### Example Question #7 : Trigonometric Graphs

Which of the given functions has the greatest amplitude?
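A concrete instance of both rules (my example, not one of the site's questions): for $y = A \sin (Bx)$ the amplitude is $|A|$ and the period is $\frac{2\pi}{B}$, so

$$y = 3\sin\left(\frac{2\pi}{7}\,x\right) \quad\Longrightarrow\quad \text{amplitude} = |3| = 3, \qquad \text{period} = \frac{2\pi}{2\pi/7} = 7.$$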
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9693974852561951, "perplexity": 1329.5319814340571}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608617.6/warc/CC-MAIN-20170525214603-20170525234603-00274.warc.gz"}
https://www.solidot.org/translate/?nid=139517
A relation between the precanonical quantization of pure Yang-Mills fields and the functional Schrödinger representation in the temporal gauge is discussed. It is shown that the latter can be obtained from the former when the ultraviolet parameter $\varkappa$ introduced in precanonical quantization goes to infinity. In this limiting case, the Schrödinger wave functional can be expressed as the trace of the Volterra product integral of Clifford-algebra-valued precanonical wave functions restricted to a certain field configuration, and the canonical functional derivative Schrödinger equation together with the quantum Gauß constraint are derived from the Dirac-like precanonical Schrödinger equation.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9811594486236572, "perplexity": 1294.3866291034028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000613.45/warc/CC-MAIN-20190627035307-20190627061307-00502.warc.gz"}
http://math.stackexchange.com/users/2469/barrycarter?tab=activity
barrycarter · Reputation 644 · Next privilege: 1,000 Rep. (Create new tags)

May14 awarded Popular Question
Feb5 comment Find the limit of the trigonometric function? I have deleted my moronic comment and enrolled myself in basic arithmetic class ;)
Dec9 wiki
Dec9 awarded Caucus
Dec1 comment Probability of correctly guessing student number with checksum? The obvious answer would be 1/11. Since 11 is prime, the sum you describe should cycle through all values of sum%11 equally. Just to clarify, you mean the last digit is chosen so the entire sum is a multiple of 11, correct? What do you do if the last digit needs to be 10? Use "X" like they do for SBN/ISBN numbers?
Dec1 comment finding n in binomial distribution The Student t distribution might be helpful here (the sample size is too small to use the normal approximation, which yields the (incorrect) result that the size of n is irrelevant)
Dec1 comment One difficult integral My approach would be to rewrite log((1-x)/(1+x)) as log(1-x)-log(1+x) and then expand the cube. This will at least break the integral up into smaller chunks.
Dec1 comment conditional probability that 5 red balls were placed in the bowl at random This is a trick question. The chance that the remaining 3 balls are red is independent of the colors of the balls you already chose.
Dec1 comment Minimum value of an integral with least square? Possible hint: when the integral reaches its minimal value, its derivative is 0. That plus the fundamental theorem of calculus might help.
Nov23 revised Area swept out by non-solar focus not same over equal time? answer
Nov22 asked Area swept out by non-solar focus not same over equal time?
Nov3 comment Conceptual question on showing properties of the absolute value function on $\mathbb{Q}$ OK, I might be misunderstanding the question, but if |a|=0 then a=+0 or a=-0, which are the same thing. I don't see this as a rational number question. It's true for natural numbers, integers, real numbers, and complex numbers as well.
Nov3 comment Confidence Interval for a Mean Nah, I'm bad about upvoting other people's answers to my questions, so I feel bad about getting upvotes :)
Nov3 comment Conceptual question on showing properties of the absolute value function on $\mathbb{Q}$ Could you show us a more complicated example that doesn't have a simple proof like this one?
Nov3 comment Confidence Interval for a Mean For a sample size this small, perhaps use the Student t distribution instead?
Nov1 comment Properties of continuity You can also do this directly: to prove continuity at a point k, take c=k-epsilon and d=k+epsilon as epsilon approaches zero and then apply continuity.
Oct30 revised Normal Distribution and Coffee correct answer + more help
Oct29 comment Normal Distribution and Coffee Remember, you're looking at cumulative probability, not just the probability at a specific integer. Add the probabilities (starting with x=3) until they exceed 0.5. There's actually probably a better way of doing this, but this method will work too.
Oct29 answered Normal Distribution and Coffee
Oct29 comment Normal Distribution and Coffee Hint: you're looking for 3 or more successes (well, failures, but still) in n attempts, where each success has a 1% chance. Use either the binomial distribution (or the normal approximation to it) to find the value of n where the probability is right around 0.5. Other hint: 3 or more successes = the opposite of 0, 1, or 2 successes (might be easier to compute)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8359231948852539, "perplexity": 790.8789094845395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928586.49/warc/CC-MAIN-20150521113208-00291-ip-10-180-206-219.ec2.internal.warc.gz"}
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=14H10.&jrnl=one&onejrnl=proc
# American Mathematical Society

AMS eContent Search Results

Matches for: msc=(14H10.) AND publication=(proc). Sort order: Date. Format: Standard display. Results: 1 to 28 of 28 found.

[1] Adam James, Kay Magaard and Sergey Shpectorov. The lift invariant distinguishes components of Hurwitz spaces for $A_5$. Proc. Amer. Math. Soc. 143 (2015) 1377-1390.
[2] Samuel Grushevsky and Dmitry Zakharov. The double ramification cycle and the theta divisor. Proc. Amer. Math. Soc. 142 (2014) 4053-4064. MR 3266977.
[3] Evan M. Bullock. Irreducibility and stable rationality of the loci of curves of genus at most six with a marked Weierstrass point. Proc. Amer. Math. Soc. 142 (2014) 1121-1132.
[4] Y.-P. Lee and F. Qu. Euler characteristics of universal cotangent line bundles on $\overline{\mathcal{M}}_{1,n}$. Proc. Amer. Math. Soc. 142 (2014) 429-440.
[5] Han-Bom Moon. Log canonical models for the moduli space of stable pointed rational curves. Proc. Amer. Math. Soc. 141 (2013) 3771-3785.
[6] Shengmao Zhu. On the recursion formula for double Hurwitz numbers. Proc. Amer. Math. Soc. 140 (2012) 3749-3760.
[7] Jan O. Kleppe and Rosa M. Miró-Roig. Families of determinantal schemes. Proc. Amer. Math. Soc. 139 (2011) 3831-3843. MR 2823030.
[8] Makoto Matsumoto. Difference between Galois representations in automorphism and outer-automorphism groups of a fundamental group. Proc. Amer. Math. Soc. 139 (2011) 1215-1220. MR 2748415.
[9] Irene I. Bouw. Construction of covers in positive characteristic via degeneration. Proc. Amer. Math. Soc. 137 (2009) 3169-3176. MR 2515387.
[10] Igor V. Nikolaev. Noncommutative geometry of algebraic curves. Proc. Amer. Math. Soc. 137 (2009) 3283-3290. MR 2515397.
[11] Hristo Iliev. On the irreducibility of the Hilbert scheme of space curves. Proc. Amer. Math. Soc. 134 (2006) 2823-2832. MR 2231604.
[12] Michela Artebani and Gian Pietro Pirola. Algebraic functions with even monodromy. Proc. Amer. Math. Soc. 133 (2005) 331-341. MR 2093052.
[13] Vitaly Vologodsky. On fibers of the toric resolution of the extended Prym map. Proc. Amer. Math. Soc. 132 (2004) 3159-3165. MR 2073289.
[14] Gavril Farkas. Regular components of moduli spaces of stable maps. Proc. Amer. Math. Soc. 131 (2003) 2027-2036. MR 1963746.
[15] Dan Abramovich and Tyler J. Jarvis. Moduli of twisted spin curves. Proc. Amer. Math. Soc. 131 (2003) 685-699. MR 1937405.
[16] Holger Spielberg. Counting generic genus-$0$ curves on Hirzebruch surfaces. Proc. Amer. Math. Soc. 130 (2002) 1257-1264. MR 1879945.
[17] Steven P. Diaz. On the Natanzon-Turaev compactification of the Hurwitz space. Proc. Amer. Math. Soc. 130 (2002) 613-618. MR 1866008.
[18] Miguel A. Barja. On the slope of bielliptic fibrations. Proc. Amer. Math. Soc. 129 (2001) 1899-1906. MR 1825895.
[19] Montserrat Teixidor i Bigas. Curves in Grassmannians. Proc. Amer. Math. Soc. 126 (1998) 1597-1603. MR 1459153.
[20] Rahul Pandharipande. Counting elliptic plane curves with fixed $j$-invariant. Proc. Amer. Math. Soc. 125 (1997) 3471-3479. MR 1423328.
[21] Changho Keem. Reducible Hilbert scheme of smooth curves with positive Brill-Noether number. Proc. Amer. Math. Soc. 122 (1994) 349-354. MR 1221726.
[22] A. J. Small. Surfaces of constant mean curvature $1$ in ${\bf H}^3$ and algebraic curves on a quadric. Proc. Amer. Math. Soc. 122 (1994) 1211-1220. MR 1209429.
[23] Dan Edidin. The monodromy of certain families of linear series is at least the alternating group. Proc. Amer. Math. Soc. 113 (1991) 911-922. MR 1069686.
[24] Pankaj Topiwala and Jeffrey M. Rabin. The super GAGA principle and families of super Riemann surfaces. Proc. Amer. Math. Soc. 113 (1991) 11-20. MR 1057963.
[25] Giuseppe Paxia. On flat families of fat points. Proc. Amer. Math. Soc. 112 (1991) 19-23. MR 1055777.
[26] Pyung-Lyun Kang. A note on the variety of plane curves with nodes and cusps. Proc. Amer. Math. Soc. 106 (1989) 309-312. MR 952316.
[27] Alberto Collino. A simple proof of the theorem of Torelli based on Torelli's approach. Proc. Amer. Math. Soc. 100 (1987) 16-20. MR 883393.
[28] Edoardo Ballico. On the rationality of the variety of smooth rational space curves with fixed degree and normal bundle. Proc. Amer. Math. Soc. 91 (1984) 510-512. MR 746078.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.932391345500946, "perplexity": 1703.03223429761}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274119.75/warc/CC-MAIN-20160524002114-00200-ip-10-185-217-139.ec2.internal.warc.gz"}
https://discuss.codechef.com/questions/108188/weaseltx-editorial
# WEASELTX - Editorial

Practice Contest

Author: Bogdan Ciobanu
Tester: Jingbo Shang
Editorialist: Hanlin Ren

# DIFFICULTY:

Medium-hard

# PREREQUISITES:

binomial coefficients, Lucas's theorem, multidimensional prefix sum

# PROBLEM:

You are given a rooted tree with root number $0$. Every node $u$ has a weight $X_{0,u}$. When $d>0$, define $X_{d,v}$ as the bitwise-xor sum of all $X_{d-1,u}$ where $u$ is in $v$'s subtree. You are also given $Q$ queries; each query is a number $\Delta$, and you need to output $X_{\Delta,0}$.

# QUICK EXPLANATION:

Let $Y_d$ be the xor-sum of the weights of all nodes whose depth is exactly $d$; then the answer to a query $\Delta$ is the xor-sum of all $Y_d$'s where $(\Delta-1)\text{ and }d=0$. By using a dp technique similar to multidimensional prefix sum, one can compute $Z_d$, the xor-sum of all $Y_k$'s where $k\text{ and }d=0$, in $O(N\log N)$ time, after which each query takes only constant time.

# EXPLANATION:

Since $N,\Delta\le 500$, we can calculate $X_{i,u}$ for all $0\le i\le 500$, $0\le u < N$. Given $X_{i,0},X_{i,1},\dots,X_{i,N-1}$, we can compute $X_{i+1,0},X_{i+1,1},\dots,X_{i+1,N-1}$ by doing one dfs on the tree. This gives an $O(N\cdot\max\Delta)$ algorithm.

Let's first consider: what if "xor" were changed to addition? That is, what if $X_{i+1,u}$ were defined as the sum, rather than the xor, of all $X_{i,v}$'s where $v$ is in $u$'s subtree? Obviously the answer would be a linear combination of all weights, i.e., $X_{\Delta,0}=\sum_{u=0}^{N-1}f(\Delta,u)X_{0,u}$, where $f(\Delta,u)$ depends only on $\Delta$ and $u$.

### What is $f(\Delta,u)$?

TL;DR: $f(\Delta,u)=\binom{dep_u+\Delta-1}{\Delta-1}$, where $dep_u$ is $u$'s depth and $dep_0=0$.

Now let's consider how to find $f(\Delta,u)$. Take the sample input as an example. Then we have:

\begin{align*} X_{1,0}=&X_{0,0}+X_{0,1}+X_{0,2}+X_{0,3};\\ X_{2,0}=&X_{1,0}+X_{1,1}+X_{1,2}+X_{1,3}\\ =&(X_{0,0}+X_{0,1}+X_{0,2}+X_{0,3})+(X_{0,1}+X_{0,2})+X_{0,2}+X_{0,3}\\ =&X_{0,0}+2X_{0,1}+3X_{0,2}+2X_{0,3};\\ X_{3,0}=&X_{1,0}+2X_{1,1}+3X_{1,2}+2X_{1,3}\\ =&(X_{0,0}+X_{0,1}+X_{0,2}+X_{0,3})+2(X_{0,1}+X_{0,2})+3X_{0,2}+2X_{0,3}\\ =&X_{0,0}+3X_{0,1}+6X_{0,2}+3X_{0,3};\\ &\dots \end{align*}

For example, $f(2,3)=3$ and $f(2,2)=6$ here. A recurrence equation for $f$ is: $f(\Delta,u)=\sum_{v\text{ is }u\text{'s ancestor}}f(\Delta-1,v)$ (note that $v$ could be $u$). Why? Note that when we calculate $X_{\Delta,0}$, we write

\begin{align*} X_{\Delta,0}=&\sum_vf(\Delta-1,v)X_{1,v}\\ =&\sum_vf(\Delta-1,v)\sum_{u\text{ is }v\text{'s offspring}}X_{0,u}\\ =&\sum_uX_{0,u}\sum_{v\text{ is }u\text{'s ancestor}}f(\Delta-1,v);\\ \text{also, }X_{\Delta,0}=&\sum_uX_{0,u}f(\Delta,u). \end{align*}

This explains the above equation. Let's do more with the equation:

\begin{align*} f(\Delta,u)=&\sum_{v_1\uparrow u}f(\Delta-1,v_1)&\text{we use }a\uparrow b\text{ to represent that }a\text{ is }b\text{'s ancestor (possibly }a=b\text{)}\\ =&\sum_{v_1\uparrow u}\sum_{v_2\uparrow v_1}f(\Delta-2,v_2)\\ =&\dots\\ =&\sum_{v_1\uparrow u}\sum_{v_2\uparrow v_1}\dots\sum_{v_{\Delta}\uparrow v_{\Delta-1}}f(0,v_{\Delta}). \end{align*}

Thus, $f(\Delta,u)$ is the number of sequences $(v_0,v_1,v_2,\dots,v_{\Delta})$ such that:

• $v_0=u$;
• $v_{\Delta}=0$ (note that $f(0,v)=[v=0]$);
• for all $1\le i\le\Delta$, $v_i\uparrow v_{i-1}$.

Obviously all $v_i$'s appear on the path from $u$ to $0$.
Let $dep_x$ be the depth of node $x$ ($dep_0=0$) and $d_i=dep_{v_{i-1}}-dep_{v_i}$; then $(d_1,d_2,\dots,d_{\Delta})$ is an array satisfying the following conditions:

• the $d_i$'s are nonnegative integers;
• $\sum_{i=1}^{\Delta}d_i=dep_u$.

We find that every array $d$ satisfying the above conditions gives us a unique valid sequence $v$! So $f(\Delta,u)$ is just the number of such arrays $d$. Next is a classical lemma stating that this number is just $\binom{dep_u+\Delta-1}{\Delta-1}$ (refer to Wikipedia, the last line of "Definition and interpretations"). We omit the proof here.

Lemma 1: Given $n,k$, the number of nonnegative solutions of $x_1+x_2+\dots+x_n=k$ is $\binom{n+k-1}{n-1}$.

### Coming back to XOR

Now let's consider the xor case. Note that xoring the same number an even number of times does nothing, so for a query $\Delta$, we pick all nodes $u$ such that $f(\Delta,u)$ is odd, and xor them up. Next we'll use a lemma called Lucas's theorem:

Lemma 2: Given $n,m,p$, where $p$ is a prime, write $n,m$ in base $p$:

\begin{align*} n=&\overline{n_kn_{k-1}\dots n_1n_0};\\ m=&\overline{m_km_{k-1}\dots m_1m_0}. \end{align*}

Then $$\binom{n}{m}\equiv\prod_{i=0}^k\binom{n_i}{m_i}\pmod p.$$

Let me demonstrate the lemma with an example. Let $p=5$, $n=116$, $m=55$. Then $n=(\overline{431})_5$, $m=(\overline{210})_5$, so $\binom{n}{m}\equiv \binom{4}{2}\cdot\binom{3}{1}\cdot\binom{2}{0}\equiv 3\pmod 5$. Actually, you can check that $\binom{n}{m}=5265169722428127562717416598795968\equiv 3\pmod 5$.

How does the theorem help us? Note that we only need to know the remainder $\binom{a}{b}\bmod 2$ for some huge $a,b$. When $p=2$, Lucas's theorem becomes

Lemma 3: $\binom{n}{m}\equiv 1\pmod 2$ if and only if $n\text{ and }m=m$, where $\text{and}$ is the bitwise-and operation.

Thus, $f(\Delta,u)\equiv 1\pmod 2\iff \binom{\Delta-1+dep_u}{dep_u}\equiv 1\pmod 2\iff (\Delta-1+dep_u)\text{ and }dep_u=dep_u$.

To solve subtask 2, we preprocess $Y_d$, the bitwise-xor sum of the weights of all nodes at depth exactly $d$; for a query $\Delta$ we enumerate all $d$ from $0$ to $N$, and if $(\Delta-1+d)\text{ and }d=d$, we xor the answer with $Y_d$. Time complexity: $O(NQ)$.

If you print the values of $X_{i,0}$ and try to find patterns, you'll find that $X_{i,0}$ has a period of length $L\le 2N$. The solution for this subtask is: first find the length $L$ of that period, then prepare all $X_{i,0}$'s for $i\le L$; for any query $\Delta$, just print $X_{\Delta\bmod L,0}$. In the solution of subtask 4 I'll show that the answer only depends on the last $\lceil\log_2 N\rceil$ bits of $\Delta-1$, which is why we have an $O(N)$ period.

Can the condition "$(\Delta-1+d)\text{ and }d=d$" be further simplified? Yes! Note that $x\text{ and }y=0$ is a sufficient condition for $(x+y)\text{ and }y=y$, since when adding $x$ and $y$ in binary, no carries would happen. Is it necessary? The answer turns out to be yes! This can be proved by contradiction: suppose $i$ is the lowest bit at which both $x$ and $y$ have a $1$. No carry can come into bit $i$ from below, since a carry chain would have to start at a lower bit where both $x$ and $y$ are $1$. So $(x+y)$'s $i$-th bit is $0$, and that violates $(x+y)\text{ and }y=y$. Thus "$(\Delta-1+d)\text{ and }d=d$" is equivalent to "$(\Delta-1)\text{ and }d=0$".

Let $Z_d$ be the bitwise-xor sum of the $Y_f$'s such that $f\text{ and }d=0$. For a query $\Delta$ we directly output $Z_{(\Delta-1)\bmod 2^{18}}$, since $2^{18}>N$ and $(\Delta-1)\text{ and }d$ only depends on $(\Delta-1)$'s last $18$ bits. How to compute $Z_d$?
We can do it by a dp that's similar to multidimensional prefix sum. Let $dp_{i,j}$ denote the bitwise-xor sum of all $Y_d$'s such that:

• for $0\le k < i$, $d$'s and $j$'s $k$-th bits are not both $1$;
• for $i\le k < 18$, $d$'s and $j$'s $k$-th bits are the same.

Then $dp_{i+1,j}=\begin{cases} dp_{i,j\text{ xor }2^i}&i\text{-th bit of }j\text{ is }1\\ dp_{i,j}\text{ xor }dp_{i,j\text{ xor }2^i}&i\text{-th bit of }j\text{ is }0.\\ \end{cases}$

Note that $dp_{0,j}=Y_j$, and what we want is $Z_j=dp_{18,j}$. The overall complexity is $O(N\log N+M)$.
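To make the dp concrete, here is a compact sketch of the whole solution in Python. This is my own illustration, not the author's or tester's code; the function name and the adjacency-list input format are assumptions.

```python
from collections import deque

def weasel_answers(adj, w, queries, B=18):
    """adj: adjacency list of the tree rooted at node 0; w: initial weights
    X_{0,u}; queries: list of deltas. Assumes N < 2**B, as in the editorial."""
    # Y[d] = xor of the initial weights of all nodes at depth d
    Y = [0] * (1 << B)
    depth = [-1] * len(w)
    depth[0] = 0
    dq = deque([0])
    while dq:
        u = dq.popleft()
        Y[depth[u]] ^= w[u]
        for v in adj[u]:
            if depth[v] == -1:
                depth[v] = depth[u] + 1
                dq.append(v)
    # SOS-style dp over bits: after round i, dp[j] is the xor of all Y[f]
    # whose bits below i are disjoint from j and whose bits >= i equal j's.
    dp = Y[:]
    for i in range(B):
        bit = 1 << i
        for j in range(1 << B):
            if j & bit == 0:
                lo, hi = dp[j], dp[j | bit]
                dp[j] = lo ^ hi   # bit i of j is 0: f may have 0 or 1 there
                dp[j | bit] = lo  # bit i of j is 1: f's bit i must be 0
    # dp[j] is now Z_j; the answer for delta >= 1 is Z_{(delta-1) mod 2^B}
    mask = (1 << B) - 1
    return [w[0] if d == 0 else dp[(d - 1) & mask] for d in queries]
```

As a sanity check, a query with $\Delta=1$ returns the xor of all weights: $(\Delta-1)=0$ is disjoint from every depth, so every $Y_d$ is included.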
# AUTHOR'S AND TESTER'S SOLUTIONS:

Author's solution can be found here. Tester's solution can be found here. Editorialist's solution can be found here.

This question is marked "community wiki". 7★r_64 accept rate: 16%

6 I just used 2 observations to solve this problem. All $X_0$ values at a certain depth appear xor-ed together at their ancestors' $X$ value or not at all. So all $X_0$ values at the same depth can be xor-ed together to convert the tree to a chain. I see your approach also uses this. If the length of the resulting chain is $d$, then the number of operations after which it reverts to its initial form, i.e. its period, is the nearest power of 2 $\ge d$. Also the pattern of inclusion of values in the root xor-sum is recursive... this is what I mean. If split into two halves, each term either repeats the pattern of the left half as the right half, or leaves the right half empty. So we can pad the chain to the nearest power of 2 with zeroes for convenience, and then the pattern can be generated by splitting the chain into two halves, solving recursively, and combining (similar to mergesort). Here is my solution, although I have used an iterative version instead of recursion. The complexity is $\mathcal{O}(N \log N)$.

EDIT (the merging and solving procedure in greater detail): Suppose $A$ is the chain derived from the tree, with length $d$, which is a power of 2. Take a look at the pseudocode below; the terms should be self-explanatory.

    function solve(A, L, R):
        N = R - L + 1
        if N equals 1:
            return
        solve(A, L, L+N/2-1)
        solve(A, L+N/2, R)
        A_L = slice of A from L to L+N/2-1
        A_R = slice of A from L+N/2 to R
        for i in [0..N/2-1]:
            A[L+i] = A_L[i] xor A_R[i]
            A[L+N/2+i] = A_L[i]

Calling solve(A, 0, d-1) will compute all $d$ possible answers in $A$. The algorithm works by recursively computing the same sequence of the pattern of inclusion in both halves of size N/2 each. Then it generates the current pattern sequence of size N by either incorporating both the left and right values or just the left value, in the proper order of course.

For example, if N = 8, the pattern and the half pattern are

    11111111    1111
    10101010    1010
    11001100    1100
    10001000    1000
    11110000
    10100000
    11000000
    10000000

So A_L and A_R will be

    -- A_L --                                | -- A_R --
    A_L[0] = A[L] ^ A[L+1] ^ A[L+2] ^ A[L+3] | A_R[0] = A[L+4] ^ A[L+5] ^ A[L+6] ^ A[L+7]
    A_L[1] = A[L] ^ A[L+2]                   | A_R[1] = A[L+4] ^ A[L+6]
    A_L[2] = A[L] ^ A[L+1]                   | A_R[2] = A[L+4] ^ A[L+5]
    A_L[3] = A[L]                            | A_R[3] = A[L+4]

And then they will be combined into

    A[L]   = A_L[0] ^ A_R[0] = A[L] ^ A[L+1] ^ A[L+2] ^ A[L+3] ^ A[L+4] ^ A[L+5] ^ A[L+6] ^ A[L+7]
    A[L+1] = A_L[1] ^ A_R[1] = A[L] ^ A[L+2] ^ A[L+4] ^ A[L+6]
    A[L+2] = A_L[2] ^ A_R[2] = A[L] ^ A[L+1] ^ A[L+4] ^ A[L+5]
    A[L+3] = A_L[3] ^ A_R[3] = A[L] ^ A[L+4]
    A[L+4] = A_L[0] = A[L] ^ A[L+1] ^ A[L+2] ^ A[L+3]
    A[L+5] = A_L[1] = A[L] ^ A[L+2]
    A[L+6] = A_L[2] = A[L] ^ A[L+1]
    A[L+7] = A_L[3] = A[L]

That took some typing.. hope it's clearer now :D Also I hadn't noticed this before, but I now recognize the pattern as the Sierpinski triangle. Awesome!

answered 11 Sep '17, 16:04 6★meooow ♦ accept rate: 48%

I also saw the same pattern but got TLE on the last cases (C++)... (11 Sep '17, 16:11)

That xoring of nodes at the same depth.... really mind blowing. That was a pretty neat observation honestly!!! (11 Sep '17, 16:15)

@adecemberguy I don't think using C++ was the cause of TLE, $\mathcal{O}(N \log N)$ should comfortably clear the limit. Perhaps there is some part of your code taking greater time than you expect? (11 Sep '17, 17:22) 6★meooow

1 The pattern can have periodicity up to $N$ (roughly), so if one tries to derive the pattern by brute force (i.e. keeps on doing the recursion, calculating $d_1,d_2,d_3,\dots$ until a repetition/original configuration is obtained) then the last 2 cases give TLE. (11 Sep '17, 17:28)

@meooow I was trying something very similar. Even though your solution is obviously correct, I'm struggling to understand it completely. I understand the part that it has periodicity on powers of 2 that are >= chain length. I also understand that it's convenient to add 0's to the end to make it a power of two, since it seems to be easier to explore what happens on powers of two. I'm not getting the part where you merge 2 halves. Can you please elaborate more? (12 Sep '17, 05:57) 3★llaki

@llaki I have updated the answer. Hope it's helpful! (12 Sep '17, 07:39) 6★meooow

@meooow thanks a lot for taking the time and effort to provide more clarity, really appreciate it! :) (12 Sep '17, 11:36) 3★llaki

3 Well, after making some random test cases, even I observed a recursive pattern; those who observed it but were unable to code it can check my code for a simple implementation of that recursive pattern. Only one test case took 0.48 sec; the remaining test cases took less than 0.09 sec. I used 8 for loops and 8 if conditions. What else I did was divide the depths into groups of 4, because if they are going to contribute to the answer, they will appear in groups of 4.

answered 12 Sep '17, 01:18 accept rate: 0%

Wow... that is some solution (12 Sep '17, 07:59) 6★meooow

2 The Sierpinski triangle is T[height][time - 1] = T[height - 1][time - 1] xor T[height][time - 2] = (height & (time - 1) == 0); then you can use SOS DP on the mask height for all masks ~(time - 1); see the Boolean Algebra part of: https://www.zeuscat.com/andrew/chaos/sierpinski.html

answered 13 Sep '17, 07:42 1★threat accept rate: 0%

1 This is exactly what I tried. I got the chain idea as well as the cycle length, but not the pattern of inclusion, and thus got what I deserved, 0 points.
answered 11 Sep '17, 16:25 accept rate: 23%

@tarun_1407 - Are you sure you took the required values as long in Java? I was getting WA in C++ because of int. Changed it to long long, got 40 points. (11 Sep '17, 17:33)

I used long, got WA, even tried with BigInteger, which I knew was much more than required, but still WA. (11 Sep '17, 18:19)

1 @liouzhou_101 you said that you were thinking of optimizing it! Can you please tell what kind of optimizations are possible here? I am a beginner and for the first time solved the 6th question! It will be helpful if you share your ideas :)

answered 11 Sep '17, 20:23 3★pk301 accept rate: 16%

1 Hey everybody, I've made a video in 2 parts for this problem. Part 1: https://youtu.be/FhC6A4mvXUw (Weasel does XOR on Tree - Part 1) Part 2: https://youtu.be/HIVZ9HVSIb0 (Weasel does XOR on Tree - Part 2) Did you guys notice the amazing look-alike of the "Sierpinski Triangle" fractal in this problem? See part 2 especially, as I have discussed it there.

answered 12 Sep '17, 18:48

Yes :D The binomial coefficients modulo 2 are exactly the Sierpinski Triangle. The author used this term when describing his solution; however, I didn't. (12 Sep '17, 20:49) 7★r_64

Math is love <3 (12 Sep '17, 21:14)

0 This is exactly what I did :) nice problem!

answered 11 Sep '17, 16:00 accept rate: 28%

0 For 40 points I simulated the process until I reached the same value at the root as the initial value. Then I returned rootValues[delta % rootValues.length]. It's wrong obviously, as I got WA on the third subtask, but hey, 40 points is not bad :).

answered 11 Sep '17, 18:34 2★vasja accept rate: 7%

0 How I approached it: it can be simply observed that whenever a node appears in the answer, all the nodes at the same level also appear, so we can calculate the XOR depth-wise. Consider the worst tree ever, the chain 1->2->3->4->...->n.

1) on day 0, ans = 1
2) on day 1, ans = 1^2^3^4^5^6..^n
3) on day 2, ans = 1^(2^2)^(3^3^3)^(4^4^4^4)^(5^5^5^5^5)^.....
4) on day 3, ans = 1^(2^2^2)^(3^3^3^3^3^3^3)^....

You can observe that on day 3 the numbers of 1s, 2s, 3s, 4s, 5s, ... are 1, 3, 6, 10, 15, .... Again, you can see that this looks like binomial coefficients, and on further days this pattern also emerges :) So on day k+1, ans = 1 (kCk times) ^ 2 ((k+1)Ck times) ^ 3 ((k+2)Ck times) ^ ...... You know the XOR of a number an even number of times is 0 and otherwise the number itself, so the question reduces to finding whether nCk is odd or even; now read the last paragraph of the editorial.

Suggestions: Never cancel out XOR values in the initial stages. I was stuck on this problem for 7 days just because I cancelled out XORs and was trying to find a pattern in that. I couldn't come up with the dp solution given at the end of the editorial because calculating the XOR for each depth would take O(n) time per query; I was unable to think of pre-processing the dp table.

answered 11 Sep '17, 18:55 accept rate: 0%

0 I have solved it using two observations: first, that if any element of any level is present in any node (after some operations) then all the elements at that depth are also present in that set... and a tree with depth x will have (2^x) different values of the root. Further, after d days the value at the root is the xor of all possible submasks of mask(2^x - d). I calculate all the submasks and store them, and then answer each query in constant time :)... At first I thought that it would give TLE, since generating submasks of all the masks is a (3^x) operation, which in the worst case is 3.2 seconds...
but I don't know how it passed... if anyone can explain why, it will be helpful :)

answered 11 Sep '17, 19:12 3★pk301 accept rate: 16%

0 @pk301 I also have the same approach as you, which is an O(3^k) solution where k = O(log n). It passed because of compiler optimization and the high speed of bitwise operations. My solution runs in 0.20s on the worst case of the test data. I was thinking of how to optimize it; however, I didn't try, as it directly passed. I would feel better if the constraints were set more strictly so that O(3^k) solutions do not pass.

answered 11 Sep '17, 19:31 accept rate: 12%

0 @meooow, Thanks for the insights :) Although, can you please explain the recursive call part? I understood the pattern that you meant, but I'm not able to relate it to a recursive call. It would be great if anyone could help. Thanks in advance :)

answered 11 Sep '17, 20:22 2★de_vika accept rate: 0%

I have updated my answer to explain the recursive part in greater detail, take a look :) (12 Sep '17, 07:40) 6★meooow

0 @pk301 The way to optimize is just shown in the editorial, I think. However, I didn't catch it at the time (since I passed with an unexpected solution, and I went on to spend time solving the next several problems).

answered 11 Sep '17, 21:02 accept rate: 12%

0 Please help me with my solution. I am getting NZEC but it is working fine in my IDE. Even a testcase for which my solution would get NZEC would help: https://stackoverflow.com/questions/46154263/codechef-sept-2017-challenge-weasel-does-xor-on-tree-getting-nzec-for-this-submi Please help.

answered 11 Sep '17, 22:12 accept rate: 0%

0 I made these 2 observations:

Nodes with depth $2^{n}$ are included in the result iff: $(\Delta-1)\ \%\ 2^{n+1} < 2^{n}$

Nodes with depth $x = \sum 2^{a_{i}}$ are included in the result iff nodes with depths $2^{a_{i}}$ are included for all $i$

So, to get the answer for $\Delta$, first I find all powers of 2 that are included, then I compute the xor sum of all nodes with depths that are combinations of these powers. For example: if $1$, $4$, and $8$ are included, then $1$, $5$, $9$, $12$ and $13$ are also included. If we take the binary representations of these numbers, we can see that $1={(0001)}$, $4={(0100)}$, $5={(0101)}$, $8={(1000)}$, $9={(1001)}$, $12={(1100)}$ are subsets of $1 + 4 + 8 = 13 = (1101)$. To compute the xor sum of all subsets, we can use the approach described here: http://codeforces.com/blog/entry/45223 Complexity: $(N + Q) \cdot \log(N)$ My solution

answered 12 Sep '17, 04:11 accept rate: 12%

0 @meooow thanks a lot for taking the time and effort to provide more clarity, really appreciate it! :)

answered 12 Sep '17, 11:34 3★llaki accept rate: 0%

0 I have taken the following approach in solving: I noticed that for any delta, the result is the XOR of the result at delta = 1 with the XOR of all elements present at depth delta, i.e. Result for delta(D) = Result for delta(1) ^ (XOR of all elements present at depth D). Let's say, for the example given in the question itself: Result of delta(1) = 1^5^8^7 = 11. Now for the queries: Result of delta(2) = 11 ^ (XOR of all elements present at depth 2) = 11 ^ (5^7) = 9. Result of delta(3) = 11 ^ (XOR of all elements present at depth 3) = 11 ^ (8) = 3. After this the pattern just repeats, with the fact that delta(0) = the value present in the array initially, i.e. 1.
I have taken the following approach: I noticed that for any delta, the result is the XOR of the result at delta = 1 with the XOR of all elements present at depth delta, i.e.
RESULT for delta(D) = Result for delta(1) ^ (XOR of all elements present at depth D).
For the example given in the question itself: Result of delta(1) = 1^5^8^7 = 11. Now for the queries:
Result of delta(2) = 11 ^ (XOR of all elements at depth 2) = 11 ^ (5^7) = 9
Result of delta(3) = 11 ^ (XOR of all elements at depth 3) = 11 ^ (8) = 3
After this the pattern just repeats, with delta(0) = the value present at the root initially, i.e. 1.
Can someone please point out what is wrong with this approach and where I need to correct it? The editorial and the comments by the people here are really good, but I need to know how it can be solved if I take this approach, and whether it can be solved using it at all. Link to my solution: https://www.codechef.com/viewsolution/15395688

answered 12 Sep '17, 13:50 avinish

It may be a brute-force approach, but I am really curious where my code is going wrong, apart from its complexity. I used an adjacency matrix to represent the tree and simply XORed the values present in the sub-trees for every node.

answered 12 Sep '17, 19:47 apache_

How is $f(\Delta, u)$ the number of sequences $(v_0, v_1, \ldots, v_\Delta)$? I can't understand.

answered 15 Sep '17, 22:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9157922863960266, "perplexity": 1583.3799546480275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741569.29/warc/CC-MAIN-20181114000002-20181114022002-00479.warc.gz"}
https://www.physicsforums.com/threads/taylors-series-limit-0-0.553324/
# Taylor's series limit 0/0

• #1
## Homework Statement
I need to solve this limit, which is in the form 0/0, with the Taylor series...

## The Attempt at a Solution
Alright, I didn't really get where I am supposed to "stop" writing polynomials. My teacher said that I should stop when I find the smallest-degree term, because that's the one which is "bossing around" when the limit approaches zero. Okay, here is where I've gone so far:

[The limit expression was an image hosted on Wolfram|Alpha that did not survive extraction; the series expansions it produced are quoted in post #3 below.]

I don't get if I wrote too many terms, if I didn't write enough, if I did something wrong at all, or if I am right and should keep on calculating. Could someone help me out please? Thanks.

• #2 (D H)
Collect like terms in the numerator. What does that give you?

• #3
$$\frac{-\frac{2x^4}{3}-\frac{119x^6}{120}-\frac{1261x^8}{5040}}{x^6+4x^5+4x^4}$$
Even if I collect like terms, I get something that doesn't really get me close to the limit, and I think that huge number there is just wrong... But I don't get when I have to stop; I mean, I could have gone on forever writing the series' terms. When do I have to stop writing?

• #4 (D H)
Remember L'Hôpital's Rule. If both f(x) and g(x) approach 0 at some point x0, then to evaluate f(x)/g(x) at x0 you try to evaluate f'(x)/g'(x) at x0. If that still results in the indeterminate form 0/0, you can iterate and try to evaluate f''(x)/g''(x). If that doesn't help, try to evaluate f'''(x)/g'''(x), and so on, until you reach either a form that is not indeterminate or a form that blows up. Assume that you have Taylor expansions of f(x) and g(x) about the point of interest
\begin{aligned} f(x) &= \sum_{n=0}^{\infty} a_n (x-x_0)^n \\ g(x) &=\sum_{n=0}^{\infty} b_n (x-x_0)^n \end{aligned}
where the first few an and bn are zero. (If a0 and b0 are not zero there's no need for this L'Hôpital rigamarole.) The limit is
• Zero if the number of leading zeros in {an} is greater than the number of leading zeros in {bn}.
• Undefined (infinite) if the number of leading zeros in {an} is less than the number of leading zeros in {bn}.
These cases are kinda uninteresting. This leaves the interesting case, where the numbers of leading zeros in {an} and {bn} are equal. Which case applies to your problem?

• #5
Okay, I tried to follow. The result of my limit is neither 0 nor infinity, so it must be the case where the counts are equal... hmm... how can I use that information in the problem?

• #6 (Dick)
The terms that are "bossing around" (i.e. dominating as x->0) are the x^4 terms in the numerator and the denominator. Suppose you just look at those. What's the ratio?

• #7
Ahh! Now I get it! :) I just checked with the x^4 terms and out pops -1/6, with the denominator's x^4 term cancelling as well... thanks a lot! That's appreciated! :)
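Since the original limit expression was lost with the image, the check below uses the series quoted in post #3 and confirms the value found in post #7 (assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
num = -sp.Rational(2, 3)*x**4 - sp.Rational(119, 120)*x**6 - sp.Rational(1261, 5040)*x**8
den = x**6 + 4*x**5 + 4*x**4

# The lowest-degree terms dominate as x -> 0: (-2/3)x^4 / 4x^4 = -1/6
print(sp.limit(num/den, x, 0))   # -1/6
```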
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9006099104881287, "perplexity": 1087.428860387429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178347321.0/warc/CC-MAIN-20210224194337-20210224224337-00503.warc.gz"}
http://math.stackexchange.com/questions/573822/verifying-that-an-ideal-which-avoids-a-certain-set-is-a-prime-ideal
Verifying that an ideal which avoids a certain set is a prime ideal

Let $R$ be a commutative ring with $1 \neq 0$. Assume that $a \in R$ is such that $a^n \neq 0$ for each positive integer $n$ and let $\mathcal S = \{a^n\}_{n \geq 0}$.

1. Prove that there exists an ideal $I$ of $R$ such that $I$ is maximal among ideals of $R$ with $I \cap \mathcal S = \emptyset$.
2. Prove that an ideal $I$ as in (1) is a prime ideal.

My thoughts...

1. Let $\mathcal C = \{J_\alpha\}$ denote the collection of ideals of $R$ not intersecting $\mathcal S$. Note that $\mathcal C$ is nonempty since $\{0\} \in \mathcal C$, and $\mathcal C$ is partially ordered under inclusion. For any chain $\{J_\alpha\}$ in $\mathcal C$, the ideal $\bigcup_\alpha J_\alpha$ is an upper bound, and it still avoids $\mathcal S$. Hence, by Zorn's Lemma, $\mathcal C$ has a maximal element, say $I$.
2. Suppose $A$ and $B$ are ideals of $R$ with $AB \subset I$. If $A \not\subset I$ and $B \not\subset I$ then, by maximality of $I$ with respect to empty $\mathcal S$-intersection, $A \cap \mathcal S$ and $B \cap \mathcal S$ are nonempty. In this case, $AB \cap \mathcal S$ is nonempty, contradicting $AB \subset I$.

Is my solution for [2.] okay? I'm concerned that if $I$ is not the unique ideal with this maximality condition, then it's possible that $A, B \in \mathcal C$ but are contained in different maximal ideals $I_A$ and $I_B$.

- You are right to worry: maximality only lets you compare $I$ with ideals that contain it. To make a correct argument, consider instead the ideals $I+A$ and $I+B$; these strictly contain $I$ and hence meet $\mathcal S$, say $a^m \in I+A$ and $a^n \in I+B$. Then $a^{m+n} \in (I+A)(I+B) \subseteq I + AB \subseteq I$, contradicting $I \cap \mathcal S = \emptyset$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9855098128318787, "perplexity": 70.23480246770235}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988962.66/warc/CC-MAIN-20150728002308-00030-ip-10-236-191-2.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/118511/is-the-universal-enveloping-algebra-of-a-finite-dimensional-lie-algebra-left-n?sort=votes
# Is the universal enveloping algebra of a finite-dimensional Lie algebra (left) noetherian?

The universal enveloping algebra of a Lie algebra $\mathfrak{g}$ is a flat deformation of $S(\mathfrak{g})$, so these algebras should be similar in many ways. Does at least this general similarity hold?

- The universal enveloping algebra of a finite-dimensional Lie algebra is a so-called G-algebra, hence is left and right Noetherian (see e.g. singular.uni-kl.de/Manual/3-1-5/sing_510.htm). Note that this includes quantized enveloping algebras as well. –  Adrien Jan 10 '13 at 10:47
- Adrien, thank you very much for the reference. –  Oleg Jan 10 '13 at 11:36

Yes: if a filtered ring $R$ has the property that its associated graded ring is Noetherian, then $R$ is Noetherian. Universal enveloping algebras have a PBW filtration whose associated graded algebra is $S(\mathfrak{g})$. This is proved in Noncommutative Noetherian Rings by McConnell, Robson, Small - see sections 1.6 and 1.7.

- I don't have a reference, but I can explain how I see it. $U(\mathfrak g)$ is the quotient of the tensor algebra $T(\mathfrak g)$ modulo the relations $x\otimes y - y\otimes x - [x,y]$ for $x,y\in \mathfrak g$. Let $k$ be the base field, $t$ a variable, and $B$ the quotient of the algebra $T(\mathfrak g)\otimes_k k[t]$ modulo the relations $x\otimes y - y\otimes x - t[x,y]$. Then $B$ is (I hope, I haven't checked) a free $k[t]$-module with the usual PBW basis and an algebra over $k[t]$; the fiber of $B$ over the point $t=0$ is $S(\mathfrak g)$, and the fiber over $t=a$ for $a\in k^*$ is $U(\mathfrak g)$. –  Oleg Jan 10 '13 at 13:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9584816694259644, "perplexity": 108.41497155403613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.bartleby.com/questions-and-answers/today-the-waves-are-crashing-onto-the-beach-every-5.8-seconds.-the-times-from-when-a-person-arrives-/fe611240-53f9-4346-a367-95adf145447f
# Today, the waves are crashing onto the beach every 5.8 seconds. The times from when a person arrives at the shoreline until a crashing wave is observed follow a Uniform distribution from 0 to 5.8 seconds. Round to 4 decimal places where possible.

The probability that it will take longer than 1.96 seconds for the wave to crash onto the beach after the person arrives is P(x ≥ 1.96) =

Find the minimum for the upper quartile. ______ seconds
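The page shows only the question; the blanks follow directly from the uniform-distribution formulas. A minimal sketch of the computation for X ~ U(0, 5.8):

```python
a, b = 0.0, 5.8   # X ~ Uniform(0, 5.8)

# P(X >= 1.96) for a uniform distribution is the remaining fraction of the interval
p = (b - 1.96) / (b - a)

# The upper quartile begins at the 75th percentile
q3_min = a + 0.75 * (b - a)

print(round(p, 4))       # 0.6621
print(round(q3_min, 4))  # 4.35
```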
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9294922947883606, "perplexity": 466.5078027356466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704843561.95/warc/CC-MAIN-20210128102756-20210128132756-00354.warc.gz"}
https://www.cheenta.com/minimum-speed-of-a-rotating-pail-of-water/
# Minimum Speed of a Rotating Pail of Water

Try this beautiful problem, useful for the Physics Olympiad, based on the minimum speed of a rotating pail of water.

The Problem: Minimum Speed of a Rotating Pail of Water

You tie a cord to a pail of water and you swing the pail in a vertical circle of radius $0.600\,\mathrm{m}$. What minimum speed must you give the pail at the highest point of the circle if no water is to spill from it?

SET UP: The water moves in a vertical circle. The target variable is the speed $v$.

Calculate: At the highest point, gravity alone must supply the centripetal acceleration, so we set $a=g$ and then get $v$ from $a=\frac{v^2}{R}$. We write the force equation as $$mg=m\frac{v^2}{R}$$ Therefore, $$v=\sqrt{gR}=\sqrt{(9.80)(0.600)}=2.42\,\mathrm{m/s}$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8966616988182068, "perplexity": 1995.7912594581064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056752.16/warc/CC-MAIN-20210919065755-20210919095755-00282.warc.gz"}
http://clay6.com/qa/52611/consider-the-following-atwood-machine-with-three-masses-in-the-ratio-m-1-m-
# Consider the following Atwood machine with three masses in the ratio m$_1$: m$_2$: m$_3$:: 1:2:3, hung with a massless string over a frictionless pulley. What is the tension in the string between masses m$_2$ and m$_3$?

## 1 Answer

Answer: $mg$

Draw a free-body diagram for each mass (the original figure is not reproduced here) and apply Newton's second law, taking $m_1$ to accelerate upward and $m_2$, $m_3$ (on the other side, joined by the string with tension $T'$) to accelerate downward:

Net force in the direction of motion of $m_1$ is $F_1 = T - m_1g = m_1a$.
Net force in the direction of motion of $m_2$ is $F_2 = m_2g + T' - T = m_2a$.
Net force in the direction of motion of $m_3$ is $F_3 = m_3g - T' = m_3a$.

Adding the three equations eliminates both tensions; solving for the acceleration, we get:
$a = \frac{m_2 + m_3 - m_1}{m_1+m_2+m_3}\,g = \frac{2m+3m-m}{m+2m+3m}\,g = \frac{2}{3}g$

Now, $m_3g - T' = m_3a \rightarrow T' = m_3(g-a)$. Substituting for $a$ and for $m_3 = 3m$, we get:
$T' = 3m\left(g - \frac{2}{3}g\right) = mg$

answered Aug 20, 2014
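A quick way to check the algebra above is to hand the three force equations to a computer algebra system. A minimal sketch, assuming sympy is available (symbol names are mine):

```python
import sympy as sp

m, g = sp.symbols('m g', positive=True)
a, T, Tp = sp.symbols('a T Tp')    # Tp is the tension T' between m2 and m3
m1, m2, m3 = m, 2*m, 3*m

eqs = [
    sp.Eq(T - m1*g, m1*a),         # m1 accelerates upward
    sp.Eq(m2*g + Tp - T, m2*a),    # m2 accelerates downward
    sp.Eq(m3*g - Tp, m3*a),        # m3 accelerates downward
]
sol = sp.solve(eqs, [a, T, Tp])
print(sol[a])    # 2*g/3
print(sol[Tp])   # g*m
```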
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9551200270652771, "perplexity": 1601.567972131282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649961.11/warc/CC-MAIN-20180324073738-20180324093738-00073.warc.gz"}
https://www.mrbigler.com/moodle/course/view.php?id=30
• # AP Physics 1

AP Physics 1: Algebra-Based covers topics and concepts typically included in the first semester of an algebra-based, introductory college-level physics course. Topics include kinematics (motion), dynamics (forces), circular motion & gravitation, simple harmonic motion, momentum & impulse, energy & work, rotational motion & torque, electric charge & electric force, DC circuits (resistors only), and mechanical waves & sound. The course focuses on high-level understanding of concepts, experimental design and critical thinking, and prepares students for the AP Physics 1 exam in May.

• 1 # Summer Assignment
The goals of the summer assignment are:
1. To remind you of the math that you'll need for AP Physics.
2. To expose you to a couple of new types of problem that you may not be used to yet. (In a perfect world, you would struggle with them a little but figure them out. In an imperfect world, you might struggle a lot and get help from me.)
3. To get you started thinking about problem solving and designing experiments.
4. To weed out anyone who's not serious about putting effort into the class.
• Summer Assignment 2017 File 546.5KB PDF document

• 2 Course handouts, expectations, forms, etc.

• 3 # Reference
Data and other useful reference materials.

• 4 # Laboratory
#### Notes pp. 13–42
Laboratory safety, style guides and rubrics for laboratory notebooks and formal reports, laboratory equipment, performing experiments. The purpose of this chapter is to teach skills necessary for designing and carrying out laboratory experiments, recording data, and writing summaries of the experiment in different formats.
• Designing & Performing Experiments discusses strategies for coming up with your own experiments and carrying them out.
• Accuracy & Precision, Uncertainty & Error Analysis, and Recording and Analyzing Data discuss techniques for working with the measurements taken during laboratory experiments.
• Keeping a Laboratory Notebook and Formal Laboratory Reports discuss ways in which you might communicate (write up) your laboratory experiments.
Calculating uncertainty (instead of relying on significant figures) is a new and challenging skill that will be used in lab write-ups throughout the year.

### Skills learned & applied in this topic:
• Designing laboratory experiments
• Error analysis (calculation & propagation of uncertainty)
• Formats for writing up lab experiments

• 5 # Mathematics
#### Coletta pp. 6–15; Notes pp. 70–132
The purpose of this chapter is to familiarize you with mathematical concepts and skills that will be needed in physics.
• Standard Assumptions in Physics discusses what you can and cannot assume to be true in order to be able to solve the problems you will encounter in this class.
• Assigning & Substituting Variables discusses how to determine which quantity and which variable apply to a number given in a problem based on the units, and how to choose which formula applies to a problem.
• The Metric System and Scientific Notation briefly review skills that you are expected to remember from your middle school math and science classes.
• Trigonometry, Vectors, Vectors vs. Scalars in Physics, and Vector Multiplication discuss important mathematical concepts that are widely used in physics, but may be unfamiliar to you.
Depending on your math background, some of the topics, such as trigonometry and vectors, may be unfamiliar.
These topics will be taught, but in a cursory manner.

#### Skills learned & applied in this chapter:
• Estimating uncertainty in measurements
• Propagating uncertainty through calculations
• Identifying quantities in word problems and assigning them to variables
• Choosing a formula based on the quantities represented in a problem
• Using trigonometry to calculate the lengths of sides and angles of triangles
• Representing quantities as vectors
• Multiplying vectors using the dot product and cross product

• 6 # Kinematics (Motion)
#### Notes pp. 133–199
In this topic, you will study how things move and how the relevant quantities are related.
• Motion, Speed & Velocity and Acceleration deal with understanding and calculating the velocity (change in position) and acceleration (change in velocity) of an object, and with representing and interpreting graphs involving these quantities.
• Projectile Motion deals with an object that has two-dimensional motion—moving horizontally and also affected by gravity.

#### Skills learned & applied in this topic:
• Choosing from a set of equations based on the quantities present.
• Working with vector quantities.
• Relating the slope of a graph and the area under a graph to equations.
• Using graphs to represent and calculate quantities.
• Keeping track of things happening in two directions at once.

• Linear Acceleration File 742.8KB PDF document
• Angular Acceleration File 700.9KB PDF document
• Centripetal Acceleration File 455.4KB PDF document
• Projectile Motion File 715.9KB PDF document
• Rubric for Egg Drop File 23.7KB PDF document

• 7 # Dynamics (Forces) & Gravitation
#### Notes pp. 200–278
In this chapter you will learn about different kinds of forces and how they relate.
• Newton's Laws and Forces describe basic scientific principles of how objects affect each other.
• Free-Body Diagrams describes a way of drawing a picture that represents forces acting on an object.
• Forces Applied at an Angle, Ramp Problems, and Pulleys & Tension describe some common situations involving forces and how to calculate the forces involved.
• Friction and Aerodynamic Drag describe situations in which a force is created by the action of another force.
• Newton's Law of Universal Gravitation describes how to calculate the force of gravity caused by massive objects such as planets and stars.
One of the first challenges will be working with variables that have subscripts. Each type of force uses the variable F. Subscripts will be used to keep track of the different kinds of forces. This chapter also makes extensive use of vectors.
Another challenge in this chapter will be to "chain" equations together to solve problems. This involves finding the equation that has the quantity you need, and then using a second equation to find the quantity that you are missing from the first equation.

#### Skills learned & applied in this chapter:
• Solving chains of equations.
• Using trigonometry to extract a vector in a desired direction.
• Working with material-specific constants from a table.
• Estimating the effect of changing one variable on another variable in the same equation.
• Newton's Laws of Motion File 489.9KB PDF document
• Linear Forces File 645.6KB PDF document
• Gravitational Fields File 466.5KB PDF document
• Free-Body Diagrams File 628.8KB PDF document
• Newton's Second Law File 526.5KB PDF document
• Force Applied at an Angle File 728.3KB PDF document
• Ramp Problems File 693.3KB PDF document
• Pulleys & Tension File 489.1KB PDF document
• Friction File 815KB PDF document
• Universal Gravitation File 485.9KB PDF document
• Human Free-Body Diagram File 473.3KB PDF document
• Animation: Gravitation File 100.7KB Flash animation

• 8 # Rotational Dynamics
#### Notes pp. 279–317
In this chapter, you will learn about rotational motion.
• Centripetal Force describes the forces on an object that is moving in a circular path.
• Center of Mass describes the concept that forces on an unconstrained object cause rotation about the object's center of mass.
• Moment of Inertia describes how the distribution of an object's mass, and its distance from the center of rotation, determine the object's resistance to changes in rotational motion.
• Torque describes forces that cause rotational motion and the equations relating to them.
This chapter will present some new challenges with keeping directions correct. The torque section will introduce the idea of having multiple instances of the same quantity in an equation and adding them up.

### Skills learned & applied in this chapter:
• Working with more than one instance of the same quantity in a problem.

• Centripetal Force File 404.8KB PDF document
• Center of Mass File 525.9KB PDF document
• Rotational Inertia File 554.1KB PDF document
• Torque File 865.3KB PDF document

• 9 # Work, Energy & Momentum
#### Notes pp. 318–399
This chapter deals with the ability of a moving object (or the potential for an object to move) to affect other objects.
• Linear Momentum describes a way to represent the movement of an object and what happens when objects collide, and the equations that relate to it. Impulse describes changes in momentum.
• Work and Energy describe the ability to cause something to move and the related equations. Power describes the rate at which energy is applied.
• Escape Velocity and Newton's Cradle describe interesting applications of energy and momentum.
New challenges in this chapter involve keeping track of the same quantity applied to the same object, but at different times.

### Skills learned & applied in this chapter:
• Working with more than one instance of the same quantity in a problem.
• Conservation laws (before/after problems).

• Work File 601.4KB PDF document
• Energy File 381.8KB PDF document
• Conservation of Energy File 641.5KB PDF document
• Rotational Work File 374.1KB PDF document
• Rotational Kinetic Energy File 522.1KB PDF document
• Escape Velocity File 379.8KB PDF document
• Power File 485.2KB PDF document
• Linear Momentum File 668.5KB PDF document
• Impulse File 496.2KB PDF document
• Angular Momentum File 820.5KB PDF document

• 10 # Oscillation & Simple Harmonic Motion
#### Notes pp. 400–417
This chapter discusses oscillations and simple harmonic motion.
• Springs describes the properties and equations that pertain to springs.
• Pendulums describes the properties and equations that pertain to pendulums.

### Skills learned & applied in this chapter:
• Understanding the mechanics of repeated actions.

• Simple Harmonic Motion File 476.5KB PDF document
• Springs File 535.3KB PDF document
• Pendulums File 458.4KB PDF document

• 11 # Electricity & Magnetism
#### Notes pp. 419–507
This chapter discusses electricity and magnetism, how they behave, and how they relate to each other.
• Electric Charge, Coulomb's Law, and Electric Fields describe the behavior of individual charged particles and how to calculate the effects of these particles on each other.
• Electric Current & Ohm's Law describes equations and calculations involving the flow of charged particles (electric current).
• Electrical Components, Series Circuits, Parallel Circuits, Mixed Series & Parallel Circuits, and Measuring Voltage, Current & Resistance describe the behavior of electrical components in a circuit and how to calculate quantities relating to the individual components and the entire circuit, based on the way the components are arranged.
• Magnetism describes properties of magnets and what causes objects to be magnetic. Electricity & Magnetism describes how electricity and magnetism affect each other.
One of the new challenges encountered in this chapter is interpreting and simplifying circuit diagrams, in which different equations may apply to different parts of the circuit.

### Skills learned & applied in this chapter:
• Working with material-specific constants from a table.
• Identifying electric circuit components.
• Simplifying circuit diagrams.

• Electric Charge File 575.5KB PDF document
• Coulomb's Law File 488.3KB PDF document
• Electric Fields File 526.8KB PDF document
• Electrical Components File 419.9KB PDF document
• Circuits File 431.8KB PDF document
• Kirchhoff's Rules File 479.7KB PDF document
• Series Circuits File 594.2KB PDF document
• Parallel Circuits File 646.8KB PDF document
• Magnetism File 404.4KB PDF document
• Magnetic Fields File 490.6KB PDF document
• Electromagnetism File 278.2KB PDF document

• 12 # Mechanical Waves & Sound
#### Notes pp. 509–554
This chapter discusses properties of waves that travel through a medium (mechanical waves).
• Waves gives general information about waves, including vocabulary and equations. Reflection and Superposition describes what happens when two waves share space within a medium.
• Sound & Music describes the properties and equations of waves that relate to music and musical instruments.
• The Doppler Effect describes the effects of motion of the source or receiver (listener) on the perception of sound.

### Skills learned & applied in this chapter:
• Visualizing wave motion.

• Waves File 714KB PDF document
• Reflection & Superposition File 449.2KB PDF document
• Sound & Music File 791KB PDF document
• Sound Level File 378.8KB PDF document
• The Doppler Effect File 640.6KB PDF document

• 14 # Thermal Physics (Heat)
#### Notes pp. 555–595
This chapter is about heat as a form of energy and the ways in which heat affects objects, including how it is stored and how it is transferred from one object to another.
• Heat & Temperature describes the concept of heat as a form of energy and how heat energy is different from temperature.
• Heat Transfer, Energy Conversion and Efficiency describe how to calculate the rate of the transfer of heat energy from one object to another.
• Specific Heat Capacity & Calorimetry describes different substances' and objects' abilities to store heat energy. Phase Changes & Heating Curves addresses the additional calculations that apply when a substance goes through a phase change (such as melting or boiling).
• Thermal Expansion describes the calculation of the change in size of an object caused by heating or cooling.
This topic is part of the Massachusetts Curriculum Frameworks, but is not part of the AP Physics 1 curriculum.

### Skills learned & applied in this chapter:
• Working with material-specific constants from a table.
• Working with more than one instance of the same quantity in a problem.
• Combining equations and graphs.

• Heat & Temperature File 569.6KB PDF document
• Heat Transfer File 421.8KB PDF document
• Energy Conversion File 512.2KB PDF document
• Thermal Expansion File 647.1KB PDF document
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8380100131034851, "perplexity": 3421.6459292745917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805881.65/warc/CC-MAIN-20171119234824-20171120014824-00507.warc.gz"}
http://math.stackexchange.com/questions/194507/limit-with-functionals/194788
# Limit with functionals

I need to evaluate the following limit, which intuitively I know is equal to 0, but I can't really prove it, so I need some help: $$\lim_{\epsilon \to 0}{\frac{F[\rho + \epsilon\rho' + \epsilon^2\rho'']-F[\rho + \epsilon\rho']}{\epsilon}}$$ where $\epsilon$ is a real number, $F$ is a functional and $\rho$, $\rho'$ and $\rho''$ are functions in some function space.

- What is the functional $F$? Or are you supposed to prove that the limit is zero for all possible functionals? –  Rod Carvalho Sep 12 '12 at 4:36
- I don't know the form of the functional, except that it is local. I started trying to prove a general identity involving functionals and I ended up with this limit. Probably it needs some additional assumptions. –  Manuel Sep 12 '12 at 4:49
- Is $F$ supposed to be linear? –  Christopher A. Wong Sep 12 '12 at 4:53
- No, linearity is not assumed. –  Manuel Sep 12 '12 at 5:05
- Does $F$ obey any sublinearity properties, or are there any inequalities given for $F$? –  Christopher A. Wong Sep 12 '12 at 5:08

Here is a very, very rough sketch of a possible proof... Let $\bar{\rho} (\epsilon) := \rho + \epsilon \rho'$. The limit can then be written in the form $$\displaystyle\lim_{\epsilon \to 0} \frac{1}{\epsilon}\left[ F (\bar{\rho} (\epsilon) + \epsilon^2 \rho'') - F (\bar{\rho} (\epsilon))\right]$$ The "Taylor expansion" of $F$ is the following $$F (\bar{\rho} (\epsilon) + \epsilon^2 \rho'') = F (\bar{\rho} (\epsilon)) + \langle \nabla F (\bar{\rho} (\epsilon)), \epsilon^2 \rho''\rangle + \omicron (\epsilon^4)$$ where $\nabla F (\bar{\rho} (\epsilon))$ is the "functional gradient" of $F$. Then, we have that $$\displaystyle\lim_{\epsilon \to 0} \frac{1}{\epsilon}\left[ F (\bar{\rho} (\epsilon) + \epsilon^2 \rho'') - F (\bar{\rho} (\epsilon))\right] = \lim_{\epsilon \to 0} \left[\frac{1}{\epsilon} \langle \nabla F (\bar{\rho} (\epsilon)), \epsilon^2 \rho''\rangle + \omicron (\epsilon^3)\right]$$ If we can show that $$\frac{1}{\epsilon} \langle \nabla F (\bar{\rho} (\epsilon)), \epsilon^2 \rho''\rangle = \langle \nabla F (\bar{\rho} (\epsilon)), \epsilon \rho''\rangle$$ then the limit becomes $$\displaystyle\lim_{\epsilon \to 0} \frac{1}{\epsilon}\left[ F (\bar{\rho} (\epsilon) + \epsilon^2 \rho'') - F (\bar{\rho} (\epsilon))\right] = \lim_{\epsilon \to 0} \left[\langle \nabla F (\bar{\rho} (\epsilon)), \epsilon \rho''\rangle + \omicron (\epsilon^3)\right] = 0$$

- This is OK for my purposes. I also figured some kind of Taylor expansion for functionals might work, but I didn't know if something like that really existed. Can you point me to literature where I can find the theory behind these expansions? Thank you. –  Manuel Sep 12 '12 at 18:02
- @Manuel: This is a good introductory overview of calculus of variations: math.umn.edu/~olver/am_/cvz.pdf –  Rod Carvalho Sep 12 '12 at 19:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9154195785522461, "perplexity": 297.0413297654018}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657138086.23/warc/CC-MAIN-20140914011218-00220-ip-10-234-18-248.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/165733-am-i-wrong-part-2-a.html
# Am I Wrong (part 2)?

1. ## Am I Wrong (part 2)?

Problem 15: $\int \sqrt{12 + 4x^2}\,dx$

I get: $x\sqrt{x^2 + 3}+ 3 \ln\left(\frac{\sqrt{x^2 + 3} + x}{\sqrt{3}}\right)$

The book has the same answer, except the radical in the denominator of the natural log part is gone. Where did I go wrong? Thanks

2. What substitution did you make? I would choose $\displaystyle x = \sqrt{3}\tan{u}$

$\displaystyle \int \sqrt{12+4x^2}~dx = x\sqrt{x^2+3}+3\sinh^{-1}\left(\frac{x}{\sqrt{3}}\right)+C$

3. That's an indefinite integral, so your answer should be $x\sqrt{x^2+ 3}+ 3\ln\left(\frac{\sqrt{x^2+ 3}+ x}{\sqrt{3}}\right)+ C$, which is the same as $x\sqrt{x^2+ 3}+ 3\ln(\sqrt{x^2+ 3}+ x)- 3\ln(\sqrt{3})+ C$, which is the same as $x\sqrt{x^2+ 3}+ 3\ln(\sqrt{x^2+ 3}+ x)+ C'$ with $C'= C - 3\ln(\sqrt{3})$. The two answers differ only by a constant.

4. (Quoting #3) Right you are, sir. Thank you
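Either form of the antiderivative can be checked by differentiation; a minimal sketch, assuming sympy is available (the derivative should come out equal to $\sqrt{12+4x^2} = 2\sqrt{x^2+3}$):

```python
import sympy as sp

x = sp.symbols('x', real=True)
F = x*sp.sqrt(x**2 + 3) + 3*sp.log(sp.sqrt(x**2 + 3) + x)

# F'(x) - 2*sqrt(x^2 + 3) should simplify to 0
print(sp.simplify(sp.diff(F, x) - 2*sp.sqrt(x**2 + 3)))   # expected: 0
```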
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951759576797485, "perplexity": 373.5455073409028}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463453.54/warc/CC-MAIN-20150226074103-00126-ip-10-28-5-156.ec2.internal.warc.gz"}
http://export.arxiv.org/abs/1806.06717
math.SG

# Title: An open quantum Kirwan map

Abstract: We construct, under semipositivity assumptions, an A-infinity morphism from the equivariant Fukaya algebra of a Lagrangian brane in the zero level set of a moment map to the Fukaya algebra of the quotient brane. The map induces a map between Maurer-Cartan solution spaces, and intertwines the disk potentials. As an application, we show the weak unobstructedness and Floer nontriviality of various Lagrangians in symplectic quotients. In the semi-Fano toric case we give another proof of the results of Chan-Lau-Leung-Tseng, by showing that the potential of a Lagrangian toric orbit in a toric manifold is related to the Givental-Hori-Vafa potential by a change of variable. We also reprove the results of Fukaya-Oh-Ohta-Ono on weak unobstructedness of these toric orbits. In the case of polygon spaces we show the existence of weakly unobstructed and Floer nontrivial products of spheres.

Comments: 106 pages, 13 figures
Subjects: Symplectic Geometry (math.SG); Mathematical Physics (math-ph)
MSC classes: 53D40, 53D37, 53D20
Cite as: arXiv:1806.06717 [math.SG] (or arXiv:1806.06717v1 [math.SG] for this version)

## Submission history
From: Guangbo Xu
[v1] Mon, 18 Jun 2018 14:05:40 GMT (121kb)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8512365221977234, "perplexity": 2804.965183564757}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783342.96/warc/CC-MAIN-20200128215526-20200129005526-00385.warc.gz"}
https://www.snapxam.com/topic/higher-order-derivatives
# Higher-order derivatives

## Definition

The second, third, fourth, and subsequent derivatives of a function are known as higher-order derivatives.
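A small worked example of the definition, using a polynomial chosen purely for illustration (assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
f = x**4

print(sp.diff(f, x))      # 4*x**3   (first derivative)
print(sp.diff(f, x, 2))   # 12*x**2  (second derivative)
print(sp.diff(f, x, 3))   # 24*x     (third derivative)
print(sp.diff(f, x, 4))   # 24       (fourth derivative)
```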
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9929509162902832, "perplexity": 2779.066889848187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389798.91/warc/CC-MAIN-20210309092230-20210309122230-00600.warc.gz"}
http://mathoverflow.net/questions/133615/fixed-vector-of-a-generic-representation-of-gln-f
# fixed vector of a generic representation of GL(n,F)

Let $F$ be a locally compact non-archimedean field and $G_{n}$ the locally profinite group $GL(n,F)$. Let $\Gamma_{n,k}$ be the subgroup of $G_{n}$ whose elements are the matrices of the form $$\begin{pmatrix} A_{1,1} & A_{1,2} \\ \\ A_{2,1} & A_{2,2} \\ \end{pmatrix}$$ where $A_{1,1}\in GL(n-1,O_{F})$, $A_{1,2}\in M_{n-1,1}(O_{F})$, $A_{2,1}\in M_{1,n-1}(p_{F}^{k})$ and $A_{2,2}\in 1+p_{F}^{k}$. Here, $p_F$ denotes the maximal ideal in the ring of integers $O_F$ of $F$.

Let $(\pi,V)$ be a generic representation of $G_{n}$. We know that the space $V^{\Gamma_{n,k}}$ of fixed vectors is non-zero for $k$ large enough. Moreover, if $c(\pi) = \min\{ k\in\mathbb{N} : V^{\Gamma_{n,k}}\neq 0 \}$ ($c(\pi)$ is the conductor of $\pi$) then $\dim(V^{\Gamma_{n,c(\pi)}})=1$. Reference: Jacquet, Piatetski-Shapiro, Shalika, "Conducteur des représentations du groupe linéaire", Math. Ann. 256 (1981).

My question concerns replacing the subgroups $\Gamma_{n,k}$ by small subgroups $P_{n,k}$ whose elements are the upper-triangular matrices mod $p_{F}^{k}$. More precisely, if $\varphi:GL(n,O_{F})\longrightarrow GL(n,O_{F}/p_{F}^{k})$ is the morphism of reduction mod $p_{F}^{k}$, define $P_{n,k}=\varphi^{-1}(B)$, where $B$ is the standard Borel subgroup of $GL(n,O_{F}/p_{F}^{k})$. It is clear that $V^{P_{n,k}}\neq 0$ for $k$ large enough. Denote $u(\pi) = \min\{ k\in\mathbb{N} : V^{P_{n,k}}\neq 0 \}$.

Question 1: Is it true that $\dim(V^{P_{n,u(\pi)}})=1$?

Question 2: If that is false for a generic representation, does it hold at least for a supercuspidal representation of $GL(n,F)$?

To my knowledge, it is not known whether $\mathrm{Ind}_{P_{n,k}}^{GL_n(\mathcal O)} 1$ decomposes with multiplicity one. This is certainly necessary, by Frobenius reciprocity: $$\dim \mathrm{Hom}_{P_{n,k}}( 1 , \mathrm{Res}_{P_{n,k}} \pi) = \dim \mathrm{Hom}_{GL_n(F)}( \mathrm{Ind}^{GL_n(F)} \mathrm{Ind}_{P_{n,k}}^{GL_n(\mathcal O)} 1, \pi).$$

The group $P_{n,k}$ is not "smaller" than $\Gamma_{n,k}$, as claimed in the question: the entry $A_{n,n}$ must lie in $1+\mathfrak{p}^k$ for $\Gamma_{n,k}$, whereas for $P_{n,k}$ it can be an arbitrary element of $\mathcal{O}^\times$. Regardless, the question makes sense. The case $n=2$ is treated in Casselman's paper on the method of Atkin and Lehner. He shows that the answer to Question 1 is positive, and he defines the conductor that way.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9657588005065918, "perplexity": 79.01820990058059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400376197.4/warc/CC-MAIN-20141119123256-00142-ip-10-235-23-156.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/129000/ricci-flow-as-a-gradient-flow-and-its-lyapunov-function
# Ricci flow as a gradient flow and its Lyapunov function

In the study of Ricci flow, in order to realize the Ricci flow as a gradient flow, one encounters $\mathcal{F}(g,f)=\int (R+|\nabla f|^2)e^{-f}\,dV$. I know that if we suppose $\frac{df}{dt}=-R$, then $\frac{d}{dt}\mathcal{F}(g,f)=\int \langle-Ric-Hess(f),\dot{g}\rangle e^{-f}dV$. So by definition, the gradient of $\mathcal{F}$ is given by $\nabla \mathcal{F}=-Ric-Hess(f)$. At this point we define the modified Ricci flow as $\dot{g}=-2(Ric+Hess(f))$; then $\dot{g}=2\nabla\mathcal{F}$.

Question: By monotonicity of $\mathcal{F}$ we know that $\frac{d}{dt}\mathcal{F}(g,f)\ge0$. Since $\mathcal{F}$ is a Lyapunov function of the modified Ricci flow, some equilibrium points of the flow may be unstable. Why don't we define the modified Ricci flow as $\dot{g}=-2\nabla\mathcal{F}$? In that case $\frac{d}{dt}\mathcal{F}(g,f)\le0$ and all equilibrium points would be stable. Doesn't this make a better definition?

- Doesn't that change the PDE from a heat equation to a backwards heat equation? If so, that makes it unlikely to have short-time existence. –  Ben McKay Apr 28 '13 at 17:29

If $\frac{\partial}{\partial s}g=v$, then $\frac{\partial R}{\partial s}=-\Delta V+\operatorname{div}^{2}v-\left\langle v,\operatorname{Ric} \right\rangle$ and $\frac{\partial}{\partial s}d\mu=\frac{1}{2}Vd\mu$, where $V=\operatorname{tr}_{g}v$. So $$\frac{\partial}{\partial s}(Rd\mu)=(-\Delta V+\operatorname{div}^{2}v+\langle v,\tfrac{R}{2}g-\operatorname{Ric}\rangle)d\mu.$$ Integrating this, we see that the Euler-Lagrange equation for $\int Rd\mu$ is $\tfrac{R}{2}g-\operatorname{Ric}=0$ (Einstein-Hilbert).

To get Ricci flow, we want to get rid of the $\tfrac{R}{2}g$ term due to the variation of the volume form. Perelman accomplished this by introducing $f$ with $e^{-f}d\mu$ fixed. Imposing $\frac{\partial}{\partial s}\left( e^{-f}d\mu\right) =0$, i.e., $\frac{\partial f}{\partial s}=\frac{V}{2}$, we obtain $$\frac{d}{ds}\int Re^{-f}d\mu=-\int\left\langle v,\operatorname{Ric} \right\rangle e^{-f}d\mu+\int\left( -\Delta V+\operatorname{div}^{2}v\right) e^{-f}d\mu.$$ One pays the price that the divergence terms no longer integrate to zero. Serendipitously, $$\frac{d}{ds}\int\left\vert \nabla f\right\vert ^{2}e^{-f}d\mu=\int\left( -v\left( \nabla f,\nabla f\right) +\Delta V\right) e^{-f}d\mu,$$ so, by combining the above and integrating by parts, one obtains Perelman's formula: $$\frac{d}{ds}\int\left( R+\left\vert \nabla f\right\vert ^{2}\right) e^{-f}d\mu=-\int\left\langle v,\operatorname{Ric}+\nabla^{2}f\right\rangle e^{-f}d\mu.$$

The gradient flow is $\frac{\partial}{\partial t}g=-2(\operatorname{Ric}+\nabla^{2}f)$, $\frac{\partial f}{\partial t}=-R-\Delta f$. One cannot solve this forward, so one makes the gauge change: $\frac{\partial}{\partial t}g=-2\operatorname{Ric},$ $\frac{\partial f}{\partial t}=-R-\Delta f+\left\vert \nabla f\right\vert ^{2}$ (by adding $\mathcal{L}_{\nabla f}$ to both equations). Since we have decoupled the first equation from the second, we can solve it forward in time (Hamilton-DeTurck). For applications, $f$ is solved backward in time.

For geometric flows, the idea of using the backward heat kernel to obtain a monotonicity formula was originally used by Gerhard Huisken for the mean curvature flow and by Michael Struwe for the harmonic map heat flow. Hamilton tried to get this to work for Ricci flow; his interest is evident from his paper with Matt Grayson.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9717411398887634, "perplexity": 159.61667980664615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507445886.27/warc/CC-MAIN-20141017005725-00206-ip-10-16-133-185.ec2.internal.warc.gz"}
http://mathhelpforum.com/differential-equations/194469-use-substitution-solve-differential-equation-print.html
# Use substitution to solve a differential equation

• Dec 19th 2011, 01:18 AM Punch

By using the substitution $v=x-y$, solve the differential equation $\frac{dy}{dx}+[1+(x-y)^2]\cos^2x=\sin^2x$, expressing $y$ in terms of $x$.

$\frac{dv}{dx}=1-\frac{dy}{dx}$

$1-\frac{dv}{dx}+[1+v^2]\cos^2x=\sin^2x$

How do I continue from here to remove the $x$ in the trig terms?

• Dec 19th 2011, 01:42 AM alexmahone

$\frac{dv}{dx}=1+\cos^2x+v^2\cos^2x-\sin^2x$
$=2\cos^2x+v^2\cos^2x$
$=\cos^2x(v^2+2)$
$\frac{dv}{v^2+2}=\cos^2x\,dx$

Can you proceed?

• Dec 19th 2011, 02:31 AM Punch

Yeap, thanks!
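The thread stops at the separated form. Finishing both integrals is standard, though not shown in the original thread, and can be checked with a CAS (assuming sympy is available):

```python
import sympy as sp

x, v = sp.symbols('x v')

print(sp.integrate(1/(v**2 + 2), v))   # sqrt(2)*atan(sqrt(2)*v/2)/2
print(sp.integrate(sp.cos(x)**2, x))   # x/2 + sin(x)*cos(x)/2
```

Substituting $v=x-y$ back gives the implicit solution $\frac{1}{\sqrt{2}}\arctan\frac{x-y}{\sqrt{2}}=\frac{x}{2}+\frac{\sin x\cos x}{2}+C$, which can then be rearranged to express $y$ in terms of $x$.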
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9620380997657776, "perplexity": 1241.624649630199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805049.34/warc/CC-MAIN-20171118210145-20171118230145-00406.warc.gz"}
http://math.stackexchange.com/questions/242373/parabolas-and-projectiles
# Parabolas and projectiles

Given $2$ points, $A$ and $B$, if I am at $A$ and I have an inclination angle $c$, with what velocity do I need to shoot a projectile to hit $B$? My problem is: how do I set up this data in an equation so that I can solve it with a program?

- Strength is an elusive concept. Do you actually mean velocity? Then it would be independent of mass. If you mean force, one would need to know how long the force was applied. –  André Nicolas Nov 22 '12 at 2:21
- Thanks, it's edited. Let's work with the velocity to eliminate $m$. How can I work with these variables? –  chubakueno Nov 22 '12 at 2:27

Let $A$ be the origin and $B$ be $(x,y)$. The angle of projection is $c$. The parabolic trajectory is given by the formula below $$y = x \tan(c) - \dfrac{g}{2u^2 \cos^2(c)}x^2$$ This gives us $$u^2 = \dfrac{gx^2}{2 \cos^2(c) (x \tan(c)-y)}$$ (note that a real solution exists only when $x\tan(c) > y$, i.e. when $B$ lies below the initial line of sight). Hence the unique velocity is $$u = \left \vert \dfrac{x}{\cos(c)} \right \vert \sqrt{\dfrac{g}{2(x \tan(c)-y)}}$$
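Since the question asks how to set this up in a program, the formula translates directly; a minimal sketch (function and variable names are mine):

```python
import math

def launch_speed(x, y, c, g=9.81):
    """Speed u needed to hit (x, y) from the origin at launch angle c (radians),
    obtained by solving y = x*tan(c) - g*x**2 / (2*u**2*cos(c)**2) for u."""
    denom = 2.0 * math.cos(c)**2 * (x * math.tan(c) - y)
    if denom <= 0:
        raise ValueError("B is not reachable at this launch angle")
    return math.sqrt(g * x**2 / denom)

# Example: target 10 m away and 1 m up, launched at 45 degrees
print(launch_speed(10.0, 1.0, math.radians(45)))   # about 10.4 m/s
```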
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9704458117485046, "perplexity": 217.13017686451872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646312602.99/warc/CC-MAIN-20150827033152-00271-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.enotes.com/homework-help/youre-standing-ice-skates-throw-basketball-240953
# If you're standing on ice-skates and throw a basketball forward, how does your motion after you throw it compare with the motion of the basketball?

Tushar Chandra | Certified Educator

To determine your motion when you throw a basketball while you are standing on ice-skates, we use the principle of conservation of total momentum.

Let us assume the following: the mass of the ball: Mb; the velocity of the ball when it is thrown: Vb, in the horizontal direction; your mass: M; your speed after the throw: V.

We have assumed that the ball is thrown in a horizontal direction. The total momentum of the system which includes you and the ball was 0 initially; after you threw the ball it became Vb*Mb - V*M.

So we arrive at Vb*Mb - V*M = 0

=> V = Vb*Mb / M

This means you are pushed back with a velocity equal to the product of the mass of the ball and the velocity of the ball divided by your mass.

The ball is pulled downward by the gravitational force of attraction equal to Mb*g after it is released. If it was held at a height of d when it was thrown, the time taken by the ball to strike the ground is sqrt(2*d/g). During this time the ball has moved in the horizontal direction by a distance Vb*sqrt(2*d/g).

You follow a horizontal path after the ball is released, whereas the ball initially follows a parabolic path downwards. If we take the force of friction to be negligible, you continue to move in a straight line, and the ball after it strikes the ground starts to move in a straight horizontal line too.
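A quick Python sketch of the momentum balance above; the function name and the sample masses and speed are illustrative assumptions, not from the answer.

```python
def recoil_speed(ball_mass, ball_speed, your_mass):
    """Conservation of momentum: Mb*Vb - M*V = 0  =>  V = Mb*Vb / M."""
    return ball_mass * ball_speed / your_mass

# Example: a 0.6 kg basketball thrown at 8 m/s by a 70 kg skater.
print(recoil_speed(0.6, 8.0, 70.0))  # ~0.069 m/s, directed backwards
```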
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8332358002662659, "perplexity": 557.1627985903883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737233.51/warc/CC-MAIN-20200807231820-20200808021820-00500.warc.gz"}
https://www.arxiv-vanity.com/papers/1103.2670/
# Constrained Mixture Models for Asset Returns Modelling

Mathematical Imaging Neuroscience Group
Department of Neuroscience and Mental Health
Imperial College London, UK.

###### Abstract

The estimation of asset return distributions is crucial for determining optimal trading strategies. In this paper we describe the constrained mixture model, based on a mixture of Gamma and Gaussian distributions, to provide an accurate description of price trends as being clearly positive, negative or ranging while accounting for heavy tails and high kurtosis. The model is estimated in the Expectation Maximisation framework and model order estimation also respects the model's constraints.

## 1 Introduction

The estimation of asset return distributions is crucial for determining optimal trading strategies. One convenient estimation approach selects a distribution model and estimates its parameters. The advantage of this approach is the ease with which probability distributions can be calibrated and applied in post-processing. The disadvantage of assuming a particular parametric distribution is that inferences and decisions depend critically on the choice of distribution. For example, asset returns frequently feature large "outlying" values, making distributions with light tails inapplicable. Semi-parametric methods attempt to capture the advantages but not the disadvantages of a parametric specification of a returns distribution by using a more flexible functional form. Most prominent among the semi-parametric distributions are mixtures of distributions. They provide a flexible specification and, under certain conditions, can approximate distributions of any form.

## 2 Mixture Models and Extensions

### 2.1 Classical Mixture Models

A standard mixture probability density of a random variable $X$, whose value is denoted by $x$, is defined as

$$p_X(x;\upsilon)=\sum_{k=1}^{K}\pi_k\,p_X(x;\theta_k). \qquad (1)$$

The mixture density has $K$ components (or states) and is defined by the parameter set $\upsilon=\{\pi,\theta\}$, where $\pi=\{\pi_1,\dots,\pi_K\}$ is the set of weights given to each component and $\theta=\{\theta_1,\dots,\theta_K\}$ is the set of parameters describing each component distribution.

By far the most popular mixture model is the Gaussian mixture model (GMM). It is given as

$$p_X(x)=\sum_{k=1}^{K}\pi_k\,\mathcal{N}(x;\mu_k,\sigma_k^2), \qquad (2)$$

where each component parameter vector now consists of the mean and variance parameters, $\mu_k$ and $\sigma_k^2$, respectively (see Appendix A for the definition of the probability distributions). The Gaussian mixture distribution can be, and has been, estimated in the Maximum Likelihood or in a Bayesian framework (see [1] for both estimation methods).

The Gaussian mixture distribution is often referred to as a universal approximator [1], an indication of the fact that it can approximate distributions of any form. Figure (1), for example, shows a 3-component GMM approximating a sample with the histogram shown in the top plot.

The number of components needed to model the data depends very much on the problem at hand. In some sense, it is the discrepancy between the data distribution and the mixture model that determines the number of components (aka model order). Data distributions with heavy tails require two or more light-tailed components to compensate. In Figure (1), for example, the data was drawn from a single Gamma distribution yet three Gaussian components were needed to capture most aspects of the Gamma distribution. More components require larger sample sizes to ensure adequate calibration.
In the extreme case there may be insufficient data available to calibrate a given mixture model with a certain degree of accuracy. In short, while Gaussian mixture models are very flexible they may not be the most appropriate model. If more is known about the data distribution, such as its behaviour in the tails, incorporation of this knowledge can only help improve the model.

### 2.2 Gamma Mixture Models

The Gamma mixture distribution is another commonly used model. It is used if the data values are only positive. Another reason for its use is that Gamma densities exhibit much heavier tails than Gaussian densities. Thus, events that deviate from the mean by several standard deviations are much more probable than under a Gaussian model assumption. As a consequence, large return values are not underestimated under the Gamma mixture assumption. The Gamma mixture model (GaMM) is given as

$$p_X(x)=\sum_{k=1}^{K}\pi_k\,\mathrm{Ga}(x;\alpha_k,\beta_k), \qquad (3)$$

where each component parameter vector now consists of the shape and precision (inverse scale, or rate) parameters, denoted respectively by $\alpha_k$ and $\beta_k$ (see Appendix A for notation). The Gamma mixture distribution can be estimated via the Maximum Likelihood [2] or the Bayesian framework [3]. Similar to its Gaussian counterpart, the Gamma mixture distribution can approximate any distribution on $\mathbb{R}^+$.

Note that, for Bayesian inference, there is no natural prior for the shape parameter of the Gamma distribution. Priors can be specified but require full MCMC (instead of Gibbs) sampling methods for estimation. With regard to maximum likelihood estimation note also that there is no closed form solution for the maximum likelihood estimator of the shape parameter - unless approximation assumptions are made [4, 2] which then permit the use of gradient descent optimisation [4]. Practice has shown, however, that even when making only small adjustments to the parameters the estimates frequently violate the positivity constraints, most notably that of the shape parameter. Such limitations can be avoided, however, via the unique mapping that exists from the density's mean and variance to its shape and rate parameters:

$$\alpha=\frac{\mu^2}{\sigma^2}, \qquad \beta=\frac{\mu}{\sigma^2}. \qquad (4)$$

Thus, through the estimates resulting from the closed form solution for the mean and variance, the shape and rate parameters can be uniquely determined.

## 3 Constrained Mixture Models

Financial asset returns feature long positive and negative tails. In addition there is a large concentration of values around the origin. Modelling this constellation of distributions can be achieved by means of a Gaussian mixture model. However, as we pointed out earlier, heavy tail behaviour is more parsimoniously modelled with Gamma distributions. This fact leads to the obvious attempt to model large negative and positive values by Gamma distributions while a mixture of Gaussian densities takes on the task of modelling the sharply peaked distribution near the origin. This model is hereafter referred to as the constrained mixture model (CMM) or the Gauss-Gamma mixture distribution.

### 3.1 Constraining by a Gauss-Gamma Mixture Distribution

The main difference to a standard mixture model is the association of subsets of components to only positive and only negative valued observations. We will use the short-hand notation $k_\oplus$ for mixture component indices associated with positive observations. Likewise, $k_\ominus$ refers to the set of mixture component indices responsible for all negative valued observations. To specify the remaining set of component indices we use the symbol $k_\odot$, i.e. $k_\odot=\{1,\dots,K\}\setminus(k_\oplus\cup k_\ominus)$.
For example, a $K$-component mixture model may be split into two components for positive valued observations ($k_\oplus$) and one component for negative valued observations ($k_\ominus$), whilst the remaining components ($k_\odot$) apply to all observations. The mixture component distributions are chosen according to which domain they are responsible for. We define three groups of mixture components as follows (see Appendix A for notation):

Near Zero Domain: Observations with values around zero are modelled by a set of Gaussian distributions which are all restricted to have zero mean. The probability of $x$ is thus

$$P_X(x;\theta_k)=\mathcal{N}(x;\mu_k=0,\sigma_k^2)\quad\forall k\in k_\odot \qquad (5)$$

Positive Domain: Observations with positive values are modelled by a set of Gamma distributions. The probability of $x$ is thus

$$P_X(x|\theta_k)=\mathrm{Ga}(x;\alpha_k,\beta_k)\quad\forall k\in k_\oplus \qquad (6)$$

if the value of $x$ is in $\mathbb{R}^+$ and zero, otherwise.

Negative Domain: Observations with negative values are modelled by a set of Gamma distributions, and so the probability of $x$ is

$$P_X(x|\theta_k)=\mathrm{Ga}(-x;\alpha_k,\beta_k)\quad\forall k\in k_\ominus \qquad (7)$$

if the value of $x$ is in $\mathbb{R}^-$ and zero, otherwise.

Thus, the full constrained mixture model is given as

$$p_X(x)=\begin{cases}\sum_{k\in k_\oplus}\pi_k\,\mathrm{Ga}(x;\alpha_k,\beta_k)+\sum_{i\in k_\odot}\pi_i\,\mathcal{N}(x;0,\sigma_i^2) & \text{if } x\in\mathbb{R}^+_0\\ \sum_{k\in k_\ominus}\pi_k\,\mathrm{Ga}(-x;\alpha_k,\beta_k)+\sum_{i\in k_\odot}\pi_i\,\mathcal{N}(x;0,\sigma_i^2) & \text{if } x\in\mathbb{R}^-_0\end{cases} \qquad (8)$$

Note that negative values are modelled by a Gamma distribution with sign-reversed argument. In our notation, $k$ takes any of the index values of the states associated with the constrained domain. A further consequence of our notation is that the parameter set $\theta$ consists of subsets $\theta_{k_\oplus}$, $\theta_{k_\ominus}$ and $\theta_{k_\odot}$, each of which also holds mean and variance parameters. The reason for this is that we will be using the means and variances of the Gamma distributions to compute each distribution's shape and rate parameters according to equations (4). An example of a sample drawn from the constrained mixture and its continuous density function are shown in Figure (2).

### 3.2 Alternative Approaches to Constraining Distributions

There are other ways to constrain the model. One way would be through the use of rectified Gaussian distributions [5, 6]. However, the models in [6] use a cut-off function

$$\mathrm{cut}(x)=\max(x,0) \qquad (9)$$

which places too much weight on zero. Also, the CMM is considerably simpler while perfectly satisfying the required constraints.

## 4 Mixture Model Estimation

To motivate the estimation procedure we need to expand the mixture model. In particular, we introduce, for each datum, a latent indicator variable. This variable indicates which of the mixture components is responsible for the datum in question. The (marginal) probability that any indicator variable selects the $k$-th component is given by the weight $\pi_k$ that is associated with the $k$-th mixture component.

### 4.1 Latent Indicator Variable Representation of Mixture Models

Let us first define the following one-dimensional observation set $x=\{x_1,\dots,x_T\}$, of length $T$ and indexed with $t$. The set is assumed to be generated by a $K$-component mixture model. To indicate the mixture component from which a sample was drawn, we introduce a latent random variable, $S_t$. The value of $S_t$, which we denote by $s_t$, is a vector of length $K$. The components of the vector, $s_{tk}$, are either $0$ or $1$. We set the vector's $k$-th component, $s_{tk}=1$, to indicate that the $k$-th mixture component is selected, while all other states are set to $0$. As a consequence,

$$1=\sum_{k=1}^{K}s_{tk}. \qquad (10)$$

We can now specify the joint probability distribution of $X$ and $S$ in terms of a marginal distribution and a conditional distribution as

$$P_{X,S}(x,s;\upsilon)=\prod_{t=1}^{T}P_{X_t|S_t}(x_t|s_t;\theta)\,P_{S_t}(s_t;\pi), \qquad (11)$$

where the parameter vector $\upsilon=\{\theta,\pi\}$. The marginal distributions are drawn from a multinomial distribution that is parameterised by the mixing weights $\pi$.
Thus,

$$P_{S_t}(s_t;\pi)=\prod_{k=1}^{K}\pi_k^{s_{tk}} \qquad (12)$$

or, more simply,

$$P(s_{tk}=1)=\pi_k. \qquad (13)$$

Naturally the weights must satisfy $0\le\pi_k\le1$ and

$$1=\sum_{k=1}^{K}\pi_k. \qquad (14)$$

As for the conditional distribution, $P_{X_t|S_t}(x_t|s_t;\theta)$, its form depends on the value of the latent variable $s_t$. For the constrained mixture model we have in particular

$$P_{X_t}(x_t|s_{tk}=1;\theta_k)=\begin{cases}\mathcal{N}(x_t;0,\sigma_k^2) & \forall x_t \text{ and } k\in k_\odot\\ \mathrm{Ga}(x_t;\alpha_k,\beta_k) & x_t\in\mathbb{R}^+ \text{ and } k\in k_\oplus\\ \mathrm{Ga}(-x_t;\alpha_k,\beta_k) & x_t\in\mathbb{R}^- \text{ and } k\in k_\ominus\\ 0 & \text{otherwise}\end{cases} \qquad (15)$$

The full model is thus defined as

$$P_{X,S}(x,s;\upsilon)=\prod_{t=1}^{T}\prod_{k=1}^{K}\pi_k^{s_{tk}}\begin{cases}\mathcal{N}(x_t;0,\sigma_k^2) & k\in k_\odot \text{ and } \forall x_t\\ \mathrm{Ga}(x_t;\alpha_k,\beta_k) & k\in k_\oplus \text{ and } x_t\in\mathbb{R}^+\\ \mathrm{Ga}(-x_t;\alpha_k,\beta_k) & k\in k_\ominus \text{ and } x_t\in\mathbb{R}^-\\ 0 & \text{otherwise}\end{cases} \qquad (16)$$

To summarise, in the latent variable representation of the mixture model, the component for each sample is selected with probability $\pi_k$, reflecting the mixture weight. The components that can be selected for a particular datum depend on the sign of the sample $x_t$. For positive $x_t$, $x_t$ is modelled by a mixture of Gaussian and "positive" Gamma distributions. For negative $x_t$, $x_t$ is modelled by the same mixture of Gaussians and a set of "negative" Gamma distributions.

### 4.2 Maximum Likelihood Estimation

Estimation of the mixture model can be accomplished by maximising directly the model given by equation (8). This, however, requires the use of optimisation methods such as the Newton-Raphson algorithm. Using the complete data mixture model description instead leads to an optimisation algorithm known as the Expectation-Maximisation algorithm. The algorithm produces a set of coupled yet analytic update equations that can be iterated until convergence has been achieved. What is more, convergence is easily monitored since the convergence criterion is simply one of the quantities that the algorithm computes anyhow.

The maximum likelihood method of estimating mixture models used here is known as the Expectation Maximisation (EM) algorithm. The goal of the EM is to maximise the likelihood of the data given the model, i.e. maximise

$$L(\upsilon)=\log\Big\{\sum_s P_{X,S}(x,s;\upsilon)\Big\}=\sum_{t=1}^{T}\sum_{k=1}^{K}s_{tk}\log\{\pi_k\,P_{X_t}(x_t;\theta_k)\} \qquad (17)$$

If the states $s$ had been known then the estimation of the model parameters would be trivial: conditioned on the state variables and the observations, equation (17) could be maximised with respect to the model parameters. However, which values the state variables take is unknown. This suggests an alternative two-stage iterated optimisation algorithm: if we knew the expected value of $s$, one could use this expectation in the first step to perform a weighted maximum likelihood estimation of (17) with respect to the model parameters. These estimates will be incorrect since the expectation is inaccurate. So, in the second step, one could update the expected value of all $s_t$, pretending the model parameters $\theta$ and $\pi$ are known and held fixed at their values from the past iteration. This is precisely the strategy of the Expectation Maximisation (EM) algorithm [1].

The EM algorithm for the CMM iteratively optimises in two stages [1]:

E-step: In this step, the parameters are held fixed at the old values, $\upsilon^{old}$, obtained from the previous iteration (or at their initial settings during the algorithm's initialisation). Conditioned on the observations, the E-step then computes the probability of the state variables $s_t$, given the current model parameters and observation data, i.e.

$$P_{S_t|X_t}(s_t|x_t,\upsilon^{old})\propto P_{X_t|S_t}(x_t|s_t;\theta^{old})\,P_{S_t}(s_t;\pi^{old}) \qquad (18)$$

In particular, we compute (and drop the superscript $old$ for clarity's sake)

$$P_{S_t|X_t}(s_{tk}=1|x_t,\upsilon)=\frac{P_{X_t|S_t}(x_t|s_{tk}=1;\theta_k)\,\pi_k}{\sum_{\ell}P_{X_t|S_t}(x_t|s_{t\ell}=1;\theta_\ell)\,\pi_\ell} \qquad (19)$$

The likelihood terms are evaluated using the observation densities defined for each of the states.
Thus,

$$P_{X_t|S_t}(x_t|s_{tk}=1;\theta_k)=\begin{cases}\mathcal{N}(x_t;0,\sigma_k^2) & k\in k_\odot \text{ and } \forall x_t\\ \mathrm{Ga}(x_t;\alpha_k,\beta_k) & k\in k_\oplus \text{ and } x_t\in\mathbb{R}^+\\ \mathrm{Ga}(-x_t;\alpha_k,\beta_k) & k\in k_\ominus \text{ and } x_t\in\mathbb{R}^-\\ 0 & \text{otherwise}\end{cases} \qquad (20)$$

To simplify the notation we use $\gamma_t$ to symbolise the vector of values computed in (19), which are the probabilities for each component being selected for observation $x_t$. The components of $\gamma_t$ are denoted by $\gamma_{tk}$, i.e.

$$\gamma_{tk}=P_{S_t|X_t}(s_{tk}=1|x_t;\upsilon^{old}). \qquad (21)$$

Note that, as a consequence of equation (19), $\sum_{k=1}^{K}\gamma_{tk}=1$.

M-step: In this step, the latent state probabilities are considered given and maximisation is performed with respect to the parameters $\upsilon$:

$$\upsilon^{new}=\arg\max_{\upsilon}L(\upsilon) \qquad (22)$$

This results in the following update equations for the parameters of the probability distributions:

$$\mu_k=\frac{1}{T}\sum_{t=1}^{T}\gamma_{tk}\,x_t \qquad (23)$$

$$\sigma_k^2=\frac{1}{T}\sum_{t=1}^{T}\gamma_{tk}\,(x_t-\mu_k)^2 \qquad (24)$$

These two parameters are computed for all states. For those states that are governed by a Gamma distribution, the shape and rate parameters are computed using the relations

$$\alpha_k=\frac{\mu_k^2}{\sigma_k^2} \qquad (25)$$

$$\beta_k=\frac{\mu_k}{\sigma_k^2} \qquad (26)$$

This approach circumvents the need for approximations or an iterative gradient descent approach to optimising the shape parameter.

## 5 Results

Before applying the model to some data it is worth studying the model and the training algorithm's behaviour on a simulated data set.

### 5.1 Simulated Results

We generated data from a pre-specified constrained mixture model. In the model, there were 2 Gamma distributions assigned to the positive domain, each with pre-specified shape and scale parameters. Assigned to the negative domain were also 2 Gamma distributions with their own shape and scale parameters. Finally, a single Gaussian distribution was also defined, centred at the origin. A sample was drawn from the constrained distribution. The empirical relative counts, i.e. the histogram, are shown in Figure (3).

Model calibration was subsequently repeated for a range of model orders. In particular, the numbers of kernels for the negative values, for the positive values and for the centred Gaussians were each varied over a small range, and every resulting model configuration was evaluated. The penalised likelihood (BIC [1]) for each model is shown in Figure (4). The minimum penalised likelihood, i.e. the most parsimonious configuration, was found for precisely the configuration from which the data was sampled (2 negative Gamma p.d.f.s, 2 positive Gamma p.d.f.s, and 1 Gaussian p.d.f.). The resultant estimated constrained model is shown in Figure (5).

A number of things are noteworthy.

The large total number of model configurations implies a large number of computations. This is due to the constrained nature of the model. These computations are not strictly necessary in that it is similarly possible to estimate the total number of mixture components using a standard Gaussian mixture model. The allocation of kernels to domains in the constrained mixture can then be determined through visual inspection of the fitted Gaussian mixture. This approach is approximately statistically correct. The implied assumption is that each of the Gamma distributions is sufficiently accurately fitted by a Gaussian distribution.

Penalising the log-likelihood using BIC, or any other off-the-shelf penalty term, is theoretically incorrect. This is due to the fact that standard penalty criteria assume that all model parameters are used to explain the same number of samples - as expressed by the $\frac{P}{2}\log T$ term in the BIC case, $P$ being the number of model parameters and $T$ the sample size.
This condition does not apply in the constrained mixture model case. Gamma distributions are only used to fit samples that fall within their domain of responsibility. While it is possible to modify the penalty criteria to match the constrained model, the standard penalty factors suffice in practice. The standard penalty factors are at worst overly conservative, i.e. the recommended model order is smaller than one obtained by a constrained-model matching criterion.

### 5.2 Asset Returns

We now describe the application of the model (and the model order selection via penalised likelihood) to actual financial data. The data is a US Treasury bond price series, collected on a daily basis over a period of years. The asset's returns were calculated as the difference of the day's average price from that of the previous day. The sample's histogram is shown in Figure (6).

The optimal model order was determined using maximum likelihood estimation and the BIC penalty criterion. The configuration thus calculated comprised Gamma distributions defined for the negative domain, Gaussian distributions centred at the origin and Gamma distributions defined for the positive domain. The resulting mixture model fit is shown in Figure (7) and suggests a good fit.

## 6 Discussion

The constrained mixture model provides a simple statistical decomposition into negative, positive and near-zero domains. The motivation for this model is the accurate description of price trends as being clearly positive, negative or ranging while accounting for heavy tails and high kurtosis.

The EM algorithm for the constrained mixture model is only marginally different from that of standard mixture models. Model estimation can be performed using standard likelihood penalisation methods. Even though theoretically over-penalising, the study on simulated data has shown that their use does produce acceptable model complexity estimates.

Issues that remain to be solved are largely identifiability issues. As an example, a Gaussian distribution at the centre, flanked by two identical Gamma distributions, provides as good a model as one where the two Gamma distributions are replaced by one or two Gaussian distributions. While this is of theoretical concern and may imply increased sensitivity to the initialisation of the model parameters, in practice such precise symmetry may never arise.

## References

• [1] C.M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, Oxford, 1995.
• [2] Z. Liu, J. Almhana, V. Choulakian, and R. McGorman. Traffic modeling with gamma mixtures and dynamical bandwidth provisioning. Proceedings of the 4th Annual Communication Networks and Services Research Conference (CNSR 06), 2006.
• [3] D. Chotikapanich and W.E. Griffiths. Estimating income distributions using a mixture of gamma densities. Technical report, Monash University, 2008.
• [4] T.P. Minka. Estimating a gamma distribution. Technical report, Microsoft Research, Cambridge UK, 2002.
• [5] J. Winn. Variational message passing and its applications. University of Cambridge, 2004.
• [6] M. Harva and A. Kabán. Variational learning for rectified factor analysis. Signal Processing, 87(3):509–527, 2007.
• [7] L.R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–284, 1989.
## Appendix A Standard Probability Distributions

### A.1 The Normal or Gaussian Probability Density

The Normal probability density, denoted by $\mathcal{N}(x;\mu,\sigma^2)$, is given as

$$P_X(x)=\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{1}{2\sigma^2}(x-\mu)^2} \qquad (27)$$

where $\mu$ is the mean and $\sigma^2$ is the variance.

### A.2 The Gamma Distribution

The Gamma probability density, denoted by $\mathrm{Ga}(x;\alpha,\beta)$, is given as

$$P_X(x)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\beta x} \qquad (28)$$

where $\alpha$ is the shape parameter and $\beta$ is the inverse scale (or rate).
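To make the estimation procedure concrete, here is a minimal numerical sketch of one EM pass for a three-component constrained mixture (one negative Gamma, one centred Gaussian, one positive Gamma), assuming numpy and scipy are available. The configuration, all names, and the responsibility-normalised M-step moments (a standard variant of the 1/T normalisation in equations (23)-(24)) are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import gamma, norm

def em_step(x, pi, params):
    """One EM iteration for a 3-component constrained mixture:
    component 0: Gamma on -x (negative domain),
    component 1: zero-mean Gaussian (all observations),
    component 2: Gamma on x (positive domain).
    params = [(alpha0, beta0), sigma2, (alpha2, beta2)], beta = rate."""
    (a0, b0), s2, (a2, b2) = params
    lik = np.zeros((len(x), 3))
    # E-step, eqs (19)-(20); scipy's gamma takes scale = 1/rate.
    lik[:, 0] = np.where(x < 0, gamma.pdf(-x, a0, scale=1.0 / b0), 0.0)
    lik[:, 1] = norm.pdf(x, 0.0, np.sqrt(s2))
    lik[:, 2] = np.where(x > 0, gamma.pdf(x, a2, scale=1.0 / b2), 0.0)
    g = lik * pi                                   # responsibilities
    g /= g.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted moments, then moment matching (25)-(26).
    def moments(w, v):
        m = np.sum(w * v) / np.sum(w)
        return m, np.sum(w * (v - m) ** 2) / np.sum(w)
    m0, v0 = moments(g[:, 0], -x)                  # moments of -x on the left
    m2, v2 = moments(g[:, 2], x)                   # moments of  x on the right
    s2_new = np.sum(g[:, 1] * x ** 2) / np.sum(g[:, 1])  # zero-mean Gaussian
    pi_new = g.mean(axis=0)                        # updated mixing weights
    return pi_new, [(m0**2 / v0, m0 / v0), s2_new, (m2**2 / v2, m2 / v2)]
```

Iterating `em_step` on a numpy array of returns until the log-likelihood settles, for each candidate configuration, would reproduce the kind of model order search described in Section 5.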
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9493654370307922, "perplexity": 771.1009603709858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780054023.35/warc/CC-MAIN-20210917024943-20210917054943-00382.warc.gz"}
https://www.physicsforums.com/threads/solving-op-amp-circuits.71017/
# Solving op amp circuits.

1. Apr 11, 2005

### mathrocks

I have to solve an op amp circuit for a lab I'm doing but the circuit they have given me looks confusing (it's on page 46, http://filebox.vt.edu/users/oshekari/Manual_Student.pdf). Why does the 4.7k ohm resistor have an arrow pointing to the middle of the 100 ohm resistor? I know the 100 ohm is a varying resistor but if I were to actually solve the circuit for output and leave the varying resistor at 100 ohms would the 4.7k be connected to node 3 or to node 1? And if it is indeed connected to node 3 then how would I go about doing nodal analysis at that node since I don't know the current to the left of the node, I just have a voltage source.

Correct me if I'm wrong but at node 3 the nodal equation is: 5/x + (v3-v1)/4.7k + v3/100 = 0, where x is the resistance, v3 is the voltage at node 3, and v1 is the voltage at node 6.

Last edited: Apr 11, 2005
2. Apr 11, 2005

### Theelectricchild

Now are we assuming this op-amp is ideal? For laboratory purposes I would suspect that it is not--- however I did not read your lab to find out the Rin, A and Rout values.

3. Apr 11, 2005

### mathrocks

Yes, it's an ideal op-amp.

4. Apr 13, 2005

### cyeokpeng

Hi, I forgot my nodal analysis learnt during 1st year, but I can shed some light on the 100 ohm varying resistor. Whether the varying resistor is 100 ohms or 1 kohm is of no concern in the amplifier circuit because it effectively acts as a potentiometer, or potential divider. The voltage at node 2 depends on the position of the dial: if it is at node 3, the voltage at node 2 is the full 5 V; if it is at node 1, the voltage at node 2 is the full 0 V; if it is halfway in between, it is 1/2 * 5 V = 2.5 V. This is because as you turn the dial downwards, the resistance with respect to node 1 decreases, and the resistance with respect to node 3 increases. The voltage at node 2 can be found using the potential divider formula: R1/(R1+R2) * Vcc.
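To illustrate cyeokpeng's potential-divider point numerically, here is a small Python sketch; the 100 ohm/5 V values come from the thread, while the function name and the sample dial fractions are illustrative.

```python
def wiper_voltage(fraction, vcc=5.0):
    """Voltage at the potentiometer wiper (node 2).

    `fraction` is the wiper position from node 1 (0.0) to node 3 (1.0);
    by the divider rule V2 = fraction * Vcc, independent of the
    potentiometer's total resistance (100 ohm, 1 kohm, ...).
    """
    return fraction * vcc

for f in (0.0, 0.5, 1.0):
    print(f, wiper_voltage(f))   # 0.0 V, 2.5 V, 5.0 V
```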
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8500829339027405, "perplexity": 973.1067981019573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721142.99/warc/CC-MAIN-20161020183841-00428-ip-10-171-6-4.ec2.internal.warc.gz"}
https://readingfeynman.org/tag/maser/
# Lasers, masers, two-state systems and Feynman's Lectures

The past few days I re-visited Feynman's lectures on quantum math—the ones in which he introduces the concept of probability amplitudes (I will provide no specific reference or link to them because that is apparently unfair use of copyrighted material). The Great Richard Feynman introduces the concept of probability amplitudes as part of a larger discussion of two-state systems—and lasers and masers are a great example of such two-state systems. I have done a few posts on that while building up this blog over the past few years but because these have been mutilated by DMCA take-downs of diagrams and illustrations as a result of such 'unfair use', I won't refer to them either. The point is this: I have come to the conclusion we actually do not need the machinery of state vectors and probability amplitudes to explain how a maser (and, therefore, a laser) actually works. The functioning of masers and lasers crucially depends on a dipole moment (of an ammonia molecule for a maser and of light-emitting atoms for a laser) which will flip up and down in sync with an external oscillating electromagnetic field. It all revolves around the resonant frequency (ω0), which depends on the tiny difference between the energies of the 'up' and 'down' states. This tiny energy difference (the A in the Hamiltonian matrix) is given by the product of the dipole moment (μ) and the external electromagnetic field that gets the thing going (Ɛ0). [Don't confuse the symbols with the magnetic and electric constants here!] And so… Well… I have come to the conclusion that we can analyze this as just any other classical electromagnetic oscillation. We can effectively directly use the Planck-Einstein relation to determine the frequency instead of having to invoke all of the machinery that comes with probability amplitudes, base states, Hamiltonian matrices and differential equations:

ω0 = E/ħ = A/ħ = μƐ0/ħ

All the rest follows logically. You may say: so what? Well… I find this very startling. I've been systematically dismantling a lot of 'quantum-mechanical myths', and so this seemed to be the last myth standing. It has fallen now: here is the link to the paper. What's the implication? The implication is that we can analyze all of the QED sector now in terms of classical mechanics: oscillator math, Maxwell's equations, relativity theory and the Planck-Einstein relation will do. All that was published before the first World War broke out, in other words—with the added discoveries made by the likes of Arthur Compton (photon-electron interactions), Carl Anderson (the discovery of anti-matter), James Chadwick (experimental confirmation of the existence of the neutron) and a few others after the war, of course! But that's it, basically: nothing more, nothing less. So all of the intellectual machinery that was invented after World War I (the Bohr-Heisenberg theory of quantum mechanics) and after World War II (quantum field theory, the quark hypothesis and what have you) may be useful in the QCD sector of physics but − IMNSHO − even that remains to be seen! I actually find this more than startling: it is shocking! I started studying Feynman's Lectures – and everything that comes with them – back in 2012, only to find out that my idol had no intention whatsoever of making things easy. That is OK.
In his preface, he writes he wanted to make sure that even the most intelligent student would be unable to completely encompass everything that was in the lectures—so that's why we were attracted to them, of course! But that is, of course, something other than doing what he did, and that is to promote a Bright Shining Lie […] A long time ago, I took the side of Bill Gates in the debate on Feynman's qualities as a teacher. For Bill Gates, Feynman was, effectively, "the best teacher he never had." One of those very bright people who actually had him as a teacher (John F. McGowan, PhD and math genius) paints a very different picture, however. I would take the side of McGowan in this discussion now—especially when it turns out that Mr. Feynman's legacy can apparently no longer be freely used as a reference anyway. Philip Anderson and Freeman Dyson died this year—both at the age of 96. They were the last of what is generally thought of as a brilliant generation of quantum physicists—the third generation, we might say. May they all rest in peace.

Post scriptum: In case you wonder why I refer to them as the third rather than the second generation: I actually consider Heisenberg's generation to be the second generation of quantum physicists—first was the generation of the likes of Einstein! As for the (intended) irony in my last remarks, let me quote from an interesting book on the state of physics that was written by Doris Teplitz back in 1982: "The state of the classical electromagnetic theory reminds one of a house under construction that was abandoned by its workmen upon receiving news of an approaching plague. The plague was in this case, of course, quantum theory." I now very much agree with this bold statement. So… Well… I think I've had it with studying Feynman's Lectures. Fortunately, I spent only ten years on them or so. Academics have to spend their whole life on what Paul Ehrenfest referred to as the "unendlicher Heisenberg-Born-Dirac-Schrödinger Wurstmachinen-Physik-Betrieb" (the endless Heisenberg-Born-Dirac-Schrödinger sausage-machine physics business).

# The math behind the maser

Pre-script (dated 26 June 2020): I have come to the conclusion one does not need all this hocus-pocus to explain masers or lasers (and two-state systems in general): classical physics will do. So there is no use reading this. Read my papers instead. 🙂

Original post:

As I skipped the mathematical arguments in my previous post so as to focus on the essential results only, I thought it would be good to complement that post by looking at the math once again, so as to ensure we understand what it is that we're doing. So let's do that now. We start with the easy situation: free space.

#### The two-state system in free space

We started with an ammonia molecule in free space, i.e. we assumed there were no external force fields, like a gravitational or an electromagnetic force field. Hence, the picture was as simple as the one below: the nitrogen atom could be 'up' or 'down' with regard to its spin around its axis of symmetry. It's important to note that this 'up' or 'down' direction is defined in regard to the molecule itself, i.e. not in regard to some external reference frame. In other words, the reference frame is that of the molecule itself. For example, if I flip the illustration above – like below – then we're still talking the same states, i.e. the molecule is still in state 1 in the image on the left-hand side and it's still in state 2 in the image on the right-hand side.
We then modeled the uncertainty about its state by associating two different energy levels with the molecule: E0 + A and E0 − A. The idea is that the nitrogen atom needs to tunnel through a potential barrier to get to the other side of the plane of the hydrogens, and that requires energy. At the same time, we'll show the two energy levels are effectively associated with an 'up' or 'down' direction of the electric dipole moment of the molecule. So that resembles the two spin states of an electron, which we associated with the +ħ/2 and −ħ/2 energies respectively. So if E0 would be zero (we can always take another reference point, remember?), then we've got the same thing: two energy levels that are separated by some definite amount: that amount is 2A for the ammonia molecule, and ħ when we're talking quantum-mechanical spin. I should make a last note here, before I move on: note that these energies only make sense in the presence of some external field, because the + and − signs in the E0 + A and E0 − A and +ħ/2 and −ħ/2 expressions make sense only with regard to some external direction defining what's 'up' and what's 'down' really. But I am getting ahead of myself here. Let's go back to free space: no external fields, so what's 'up' or 'down' is completely random here. 🙂

Now, we also know an energy level can be associated with a complex-valued wavefunction, or an amplitude as we call it. To be precise, we can associate it with the generic a·e−(i/ħ)·(E·t − p·x) expression which you know so well by now. Of course, as the reference frame is that of the molecule itself, its momentum is zero, so the p·x term in the a·e−(i/ħ)·(E·t − p·x) expression vanishes and the wavefunction reduces to a·e−i·ω·t = a·e−(i/ħ)·E·t, with ω = E/ħ. In other words, the energy level determines the temporal frequency, or the temporal variation (as opposed to the spatial frequency or variation), of the amplitude.

We then had to find the amplitudes C1(t) = 〈 1 | ψ 〉 and C2(t) = 〈 2 | ψ 〉, so that's the amplitude to be in state 1 or state 2 respectively. In my post on the Hamiltonian, I explained why the dynamics of a situation like this can be represented by the following set of differential equations: iħ·(dC1/dt) = H11·C1 + H12·C2 and iħ·(dC2/dt) = H21·C1 + H22·C2. As mentioned, the C1 and C2 functions evolve in time, and so we should write them as C1 = C1(t) and C2 = C2(t) respectively. In fact, our Hamiltonian coefficients may also evolve in time, which is why it may be very difficult to solve those differential equations! However, as I'll show below, one usually assumes they are constant, and then one makes informed guesses about them so as to find a solution that makes sense. Now, I should remind you here of something you surely know: if C1 and C2 are solutions to this set of differential equations, then the superposition principle tells us that any linear combination a·C1 + b·C2 will also be a solution. So we need one or more extra conditions, usually some starting condition, which we can combine with a normalization condition, so we can get some unique solution that makes sense.

The Hij coefficients are referred to as Hamiltonian coefficients and, as shown in the mentioned post, the H11 and H22 coefficients are related to the amplitude of the molecule staying in state 1 and state 2 respectively, while the H12 and H21 coefficients are related to the amplitude of the molecule going from state 1 to state 2 and vice versa. Because of the perfect symmetry of the situation here, it's easy to see that H11 should equal H22, and that H12 and H21 should also be equal to each other.
Indeed, Nature doesn’t care what we call state 1 or 2 here: as mentioned above, we did not define the ‘up’ and ‘down’ direction with respect to some external direction in space, so the molecule can have any orientation and, hence, switching the i an j indices should not make any difference. So that’s one clue, at least, that we can use to solve those equations: the perfect symmetry of the situation and, hence, the perfect symmetry of the Hamiltonian coefficients—in this case, at least! The other clue is to think about the solution if we’d not have two states but one state only. In that case, we’d need to solve iħ·[dC1(t)/dt] = H11·C1(t). That’s simple enough, because you’ll remember that the exponential function is its own derivative. To be precise, we write: d(a·eiωt)/dt = a·d(eiωt)/dt = a·iω·eiωt, and please note that can be any complex number: we’re not necessarily talking a real number here! In fact, we’re likely to talk complex coefficients, and we multiply with some other complex number (iω) anyway here! So if we write iħ·[dC1/dt] = H11·C1 as dC1/dt = −(i/ħ)·H11·C1 (remember: i−1 = 1/i = −i), then it’s easy to see that the Ca·e–(i/ħ)·H11·t function is the general solution for this differential equation. Let me write it out for you, just to make sure: dC1/dt = d[a·e–(i/ħ)H11t]/dt = a·d[e–(i/ħ)H11t]/dt = –a·(i/ħ)·H11·e–(i/ħ)H11t = –(i/ħ)·H11·a·e–(i/ħ)H11= −(i/ħ)·H11·C1 Of course, that reminds us of our generic wavefunction a·e−(i/ħ)·E0·t wavefunction: we only need to equate H11 with E0 and we’re done! Hence, in a one-state system, the Hamiltonian coefficient is, quite simply, equal to the energy of the system. In fact, that’s a result can be generalized, as we’ll see below, and so that’s why Feynman says the Hamiltonian ought to be called the energy matrix. In fact, we actually may have two states that are entirely uncoupled, i.e. a system in which there is no dependence of C1 on Cand vice versa. In that case, the two equations reduce to: iħ·[dC1/dt] = H11·C1 and iħ·[dC2/dt] = H22·C2 These do not form a coupled system and, hence, their solutions are independent: C1(t) = a·e–(i/ħ)·H11·t and C2(t) = b·e–(i/ħ)·H22·t The symmetry of the situation suggests we should equate a and b, and then the normalization condition says that the probabilities have to add up to one, so |C1(t)|+ |C2(t)|= 1, so we’ll find that = 1/√2. OK. That’s simple enough, and this story has become quite long, so we should wrap it up. The two ‘clues’ – about symmetry and about the Hamiltonian coefficients being energy levels – lead Feynman to suggest that the Hamiltonian matrix for this particular case should be equal to: Why? Well… It’s just one of Feynman’s clever guesses, and it yields probability functions that makes sense, i.e. they actually describe something real. That’s all. 🙂 I am only half-joking, because it’s a trial-and-error process indeed and, as I’ll explain in a separate section in this post, one needs to be aware of the various approximations involved when doing this stuff. So let’s be explicit about the reasoning here: 1. We know that H11 = H22 = Eif the two states would be identical. In other words, if we’d have only one state, rather than two – i.e. if H12 and H21 would be zero – then we’d just plug that in. So that’s what Feynman does. So that’s what we do here too! 🙂 2. However, H12 and H21 are not zero, of course, and so assume there’s some amplitude to go from one position to the other by tunneling through the energy barrier and flipping to the other side. 
Now, we need to assign some value to that amplitude and so we'll just assume that the energy that's needed for the nitrogen atom to tunnel through the energy barrier and flip to the other side is equal to A. So we equate H12 and H21 with −A. Of course, you'll wonder: why minus A? Why wouldn't we try H12 = H21 = A? Well… I could say that a particle usually loses potential energy as it moves from one place to another, but… Well… Think about it. Once it's through, it's through, isn't it? And so then the energy is just E0 again. Indeed, if there's no external field, the + or − sign is quite arbitrary. So what do we choose? The answer is: when considering our molecule in free space, it doesn't matter. Using +A or −A yields the same probabilities. Indeed, let me give you the amplitudes we get for H11 = H22 = E0 and H12 = H21 = −A:

1. C1(t) = 〈 1 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 − A)·t + (1/2)·e−(i/ħ)·(E0 + A)·t = e−(i/ħ)·E0·t·cos[(A/ħ)·t]
2. C2(t) = 〈 2 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 − A)·t – (1/2)·e−(i/ħ)·(E0 + A)·t = i·e−(i/ħ)·E0·t·sin[(A/ħ)·t]

[In case you wonder how we go from those exponentials to a simple sine and cosine factor, remember that the sum of complex conjugates, i.e. eiθ + e−iθ, reduces to 2·cosθ, while eiθ − e−iθ reduces to 2·i·sinθ.]

Now, it's easy to see that, if we'd have used +A rather than −A, we would have gotten something very similar:

• C1(t) = 〈 1 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 + A)·t + (1/2)·e−(i/ħ)·(E0 − A)·t = e−(i/ħ)·E0·t·cos[(A/ħ)·t]
• C2(t) = 〈 2 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 + A)·t – (1/2)·e−(i/ħ)·(E0 − A)·t = −i·e−(i/ħ)·E0·t·sin[(A/ħ)·t]

So we get a minus sign in front of our C2(t) function, because cos(α) = cos(–α) but sin(−α) = −sin(α). However, the associated probabilities are exactly the same. For both, we get the same P1(t) and P2(t) functions:

• P1(t) = |C1(t)|² = cos²[(A/ħ)·t]
• P2(t) = |C2(t)|² = sin²[(A/ħ)·t]

[Remember: the absolute square of i and −i is |i|² = +√1² = +1 and |−i|² = (−1)²·|i|² = +1 respectively, so the i and −i in the two C2(t) formulas disappear.]

You'll remember the graph:

Of course, you'll say: that plus or minus sign in front of C2(t) should matter somehow, doesn't it? Well… Think about it. Taking the absolute square of some complex number – or some complex function, in this case! – amounts to multiplying it with its complex conjugate. Because the complex conjugate of a product is the product of the complex conjugates, it's easy to see what happens: the e−(i/ħ)·E0·t factor in C1(t) = e−(i/ħ)·E0·t·cos[(A/ħ)·t] and C2(t) = ±i·e−(i/ħ)·E0·t·sin[(A/ħ)·t] gets multiplied by e+(i/ħ)·E0·t and, hence, doesn't matter: e−(i/ħ)·E0·t·e+(i/ħ)·E0·t = e⁰ = 1. The cosine factor in C1(t) = e−(i/ħ)·E0·t·cos[(A/ħ)·t] is real, and so its complex conjugate is the same. Now, the ±i·sin[(A/ħ)·t] factor in C2(t) = ±i·e−(i/ħ)·E0·t·sin[(A/ħ)·t] is a pure imaginary number, and so its complex conjugate is its opposite. For some reason, we'll find similar solutions for all of the situations we'll describe below: the factor determining the probability will either be real or, else, a pure imaginary number. Hence, from a math point of view, it really doesn't matter if we take +A or −A for those H12 and H21 coefficients. We just need to be consistent in our choice, and I must assume that, in order to be consistent, Feynman likes to think of our nitrogen atom borrowing some energy from the system and, hence, temporarily reducing its energy by an amount that's equal to −A. If you have a better interpretation, please do let me know! 🙂
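As a quick numerical illustration of these probability functions, here is a small Python sketch; it uses the A ≈ 0.5×10⁻⁴ eV value quoted further down, and simply checks that the two probabilities oscillate in opposite phase and add up to one.

```python
import numpy as np

hbar = 6.582e-16            # reduced Planck constant, in eV·s
A = 0.5e-4                  # half the level separation, in eV (2A ≈ 1e-4 eV)

t = np.linspace(0.0, np.pi * hbar / A, 9)   # one full period T = pi*hbar/A
P1 = np.cos(A * t / hbar) ** 2
P2 = np.sin(A * t / hbar) ** 2
print(np.round(P1, 3))      # starts at 1, dips to 0, returns to 1
print(np.round(P1 + P2, 3)) # always 1: total probability is conserved
```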
We’re done with this section… Except… Well… I have to show you how we got those C1(t) and C1(t) functions, no? Let me copy Feynman here: Note that the ‘trick’ involving the addition and subtraction of the differential equations is a trick we’ll use quite often, so please do have a look at it. As for the value of the a and b coefficients – which, as you can see, we’ve equated to 1 in our solutions for C1(t) and C1(t) – we get those because of the following starting condition: we assume that at t = 0, the molecule will be in state 1. Hence, we assume C1(0) = 1 and C2(0) = 0. In other words: we assume that we start out on that P1(t) curve in that graph with the probability functions above, so the C1(0) = 1 and C2(0) = 0 starting condition is equivalent to P1(0) = 1 and P1(0) = 0. Plugging that in gives us a/2 + b/2 = 1 and a/2 − b/2 = 0, which is possible only if a = b = 1. Of course, you’ll say: what if we’d choose to start out with state 2, so our starting condition is P1(0) = 0 and P1(0) = 1? Then a = 1 and b = −1, and we get the solution we got when equating H12 and H21 with +A, rather than with −A. So you can think about that symmetry once again: when we’re in free space, then it’s quite arbitrary what we call ‘up’ or ‘down’. So… Well… That’s all great. I should, perhaps, just add one more note, and that’s on that A/ħ value. We calculated it in the previous post, because we wanted to actually calculate the period of those P1(t) and P2(t) functions. Because we’re talking the square of a cosine and a sine respectively, the period is equal to π, rather than 2π, so we wrote: (A/ħ)·T = π ⇔ T = π·ħ/A. Now, the separation between the two energy levels E+ A and E− A, so that’s 2A, has been measured as being equal, more or less, to 2A ≈ 10−4 eV. How does one measure that? As mentioned above, I’ll show you, in a moment, that, when applying some external field, the plus and minus sign do matter, and the separation between those two energy levels E+ A and E− A will effectively represent something physical. More in particular, we’ll have transitions from one energy level to another and that corresponds to electromagnetic radiation being emitted or absorbed, and so there’s a relation between the energy and the frequency of that radiation. To be precise, we can write 2A = h·f0. The frequency of the radiation that’s being absorbed or emitted is 23.79 GHz, which corresponds to microwave radiation with a wavelength of λ = c/f0 = 1.26 cm. Hence, 2·A ≈ 25×109 Hz times 4×10−15 eV·s = 10−4 eV, indeed, and, therefore, we can write: T = π·ħ/A ≈ 3.14 × 6.6×10−16 eV·s divided by 0.5×10−4 eV, so that’s 40×10−12 seconds = 40 picoseconds. That’s 40 trillionths of a seconds. So that’s very short, and surely much shorter than the time that’s associated with, say, a freely emitting sodium atom, which is of the order of 3.2×10−8 seconds. You may think that makes sense, because the photon energy is so much lower: a sodium light photon is associated with an energy equal to E = h·f = 500×1012 Hz times 4×10−15  eV·s = 2 eV, so that’s 20,000 times 10−4 eV. There’s a funny thing, however. An oscillation of a frequency of 500 tera-hertz that lasts 3.2×10−8 seconds is equivalent to 500×1012 Hz times 3.2×10−8 s ≈ 16 million cycles. However, an oscillation of a frequency of 23.97 giga-hertz that only lasts 40×10−12 seconds is equivalent to 23.97×109 Hz times 40×10−12 s ≈ 1000×10−3 = 1 ! One cycle only? We’re surely not talking resonance here! So… Well… I am just flagging it here. 
We’ll have to do some more thinking about that later. [I’ve added an addendum that may or may not help us in this regard. :-)] #### The two-state system in a field As mentioned above, when there is no external force field, we define the ‘up’ or ‘down’ direction of the nitrogen atom was defined with regard to its its spin around its axis of symmetry, so with regard to the molecule itself. However, when we apply an external electromagnetic field, as shown below, we do have some external reference frame. Now, the external reference frame – i.e. the physics of the situation, really – may make it more convenient to define the whole system using another set of base states, which we’ll refer to as I and II, rather than 1 and 2. Indeed, you’ve seen the picture below: it shows a state selector, or a filter as we called it. In this case, there’s a filtering according to whether our ammonia molecule is in state I or, alternatively, state II. It’s like a Stern-Gerlach apparatus splitting an electron beam according to the spin state of the electrons, which is ‘up’ or ‘down’ too, but in a totally different way than our ammonia molecule. Indeed, the ‘up’ and ‘down’ spin of an electron has to do with its magnetic moment and its angular momentum. However, there are a lot of similarities here, and so you may want to compare the two situations indeed, i.e. the electron beam in an inhomogeneous magnetic field versus the ammonia beam in an inhomogeneous electric field. Now, when reading Feynman, as he walks us through the relevant Lecture on all of this, you get the impression that it’s the I and II states only that have some kind of physical or geometric interpretation. That’s not the case. Of course, the diagram of the state selector above makes it very obvious that these new I and II base states make very much sense in regard to the orientation of the field, i.e. with regard to external space, rather than with respect to the position of our nitrogen atom vis-á-vis the hydrogens. But… Well… Look at the image below: the direction of the field (which we denote by ε because we’ve been using the E for energy) obviously matters when defining the old ‘up’ and ‘down’ states of our nitrogen atom too! In other words, our previous | 1 〉 and | 2 〉 base states acquire a new meaning too: it obviously matters whether or not the electric dipole moment of the molecule is in the same or, conversely, in the opposite direction of the field. To be precise, the presence of the electromagnetic field suddenly gives the energy levels that we’d associate with these two states a very different physical interpretation. Indeed, from the illustration above, it’s easy to see that the electric dipole moment of this particular molecule in state 1 is in the opposite direction and, therefore, temporarily ignoring the amplitude to flip over (so we do not think of A for just a brief little moment), the energy that we’d associate with state 1 would be equal to E+ με. Likewise, the energy we’d associate with state 2 is equal to E− με.  Indeed, you’ll remember that the (potential) energy of an electric dipole is equal to the vector dot product of the electric dipole moment μ and the field vector ε, but with a minus sign in front so as to get the sign for the energy righ. So the energy is equal to −μ·ε = −|μ|·|ε|·cosθ, with θ the angle between both vectors. Now, the illustration above makes it clear that state 1 and 2 are defined for θ = π and θ = 0 respectively. [And, yes! 
Please do note that state 1 is the highest energy level, because it's associated with the highest potential energy: the electric dipole moment μ of our ammonia molecule will – obviously! – want to align itself with the electric field ε! Just think of what it would imply to turn the molecule in the field!] Therefore, using the same hunches as the ones we used in the free space example, Feynman suggests that, when some external electric field is involved, we should use the following Hamiltonian matrix: H11 = E0 + με, H22 = E0 − με, and H12 = H21 = −A.

So we'll need to solve a similar set of differential equations with this Hamiltonian now. We'll do that later and, as mentioned above, it will be more convenient to switch to another set of base states, or another 'representation' as it's referred to. But… Well… Let's not get too much ahead of ourselves: I'll say something about that before we'll start solving the thing, but let's first look at that Hamiltonian once more. When I say that Feynman uses the same clues here, then… Well… That's true and not true. You should note that the diagonal elements in the Hamiltonian above are not the same: E0 + με ≠ E0 − με. So we've lost that symmetry of free space which, from a math point of view, was reflected in those identical H11 = H22 = E0 coefficients. That should be obvious from what I write above: state 1 and state 2 are no longer those 1 and 2 states we described when looking at the molecule in free space. Indeed, the | 1 〉 and | 2 〉 states are still 'up' or 'down', but the illustration above also makes it clear we're defining state 1 and state 2 not only with respect to the molecule's spin around its own axis of symmetry but also vis-à-vis some direction in space. To be precise, we're defining state 1 and state 2 here with respect to the direction of the electric field ε. Now that makes a really big difference in terms of interpreting what's going on. In fact, the 'splitting' of the energy levels because of that amplitude A is now something physical too, i.e. something that goes beyond just modeling the uncertainty involved. In fact, we'll find it convenient to distinguish two new energy levels, which we'll write as EI = E0 + A and EII = E0 − A respectively. They are, of course, related to those new base states | I 〉 and | II 〉 that we'll want to use. So the E0 + A and E0 − A energy levels themselves will acquire some physical meaning, and especially the separation between them, i.e. the value of 2A. Indeed, EI = E0 + A and EII = E0 − A will effectively represent an 'upper' and a 'lower' energy level respectively. But, again, I am getting ahead of myself. Let's first, as part of working towards a solution for our equations, look at what happens if and when we'd switch to another representation indeed.

#### Switching to another representation

Let me remind you of what I wrote in my post on quantum math in this regard. The actual state of our ammonia molecule – or any quantum-mechanical system really – is always to be described in terms of a set of base states. For example, if we have two possible base states only, we'll write:

| φ 〉 = | 1 〉 C1 + | 2 〉 C2

You'll say: why? Our molecule is obviously always in either state 1 or state 2, isn't it? Well… Yes and no. That's the mystery of quantum mechanics: it is and it isn't. As long as we don't measure it, there is an amplitude for it to be in state 1 and an amplitude for it to be in state 2.
So we can only make sense of its state by actually calculating 〈 1 | φ 〉 and 〈 2 | φ 〉 which, unsurprisingly, are equal to 〈 1 | φ 〉 = 〈 1 | 1 〉 C1 + 〈 1 | 2 〉 C2 = C1(t) and 〈 2 | φ 〉 = 〈 2 | 1 〉 C1 + 〈 2 | 2 〉 C2 = C2(t) respectively, and so these two functions give us the probabilities P1(t) and P2(t) respectively. So that's Schrödinger's cat really: the cat is dead or alive, but we don't know until we open the box, and we only have a probability function – so we can say that it's probably dead or probably alive, depending on the odds – as long as we do not open the box. It's as simple as that.

Now, the 'dead' and 'alive' conditions are, obviously, the 'base states' in Schrödinger's rather famous example, and we can write them as | DEAD 〉 and | ALIVE 〉. You'd agree it would be difficult to find another representation. For example, it doesn't make much sense to say that we've rotated the two base states over 90 degrees and we now have two new states equal to (1/√2)·| DEAD 〉 – (1/√2)·| ALIVE 〉 and (1/√2)·| DEAD 〉 + (1/√2)·| ALIVE 〉 respectively. There's no direction in space in regard to which we're defining those two base states: dead is dead, and alive is alive. The situation really resembles our ammonia molecule in free space: there's no external reference against which to define the base states. However, as soon as some external field is involved, we do have a direction in space and, as mentioned above, our base states are now defined with respect to a particular orientation in space. That implies two things. The first is that we should no longer say that our molecule will always be in either state 1 or state 2. There's no reason for it to be perfectly aligned with or against the field. Its orientation can be anything really, and so its state is likely to be some combination of those two pure base states | 1 〉 and | 2 〉. The second thing is that we may choose another set of base states, and specify the very same state in terms of the new base states. So, assuming we choose some other set of base states | I 〉 and | II 〉, we can write the very same state | φ 〉 = | 1 〉 C1 + | 2 〉 C2 as:

| φ 〉 = | I 〉 CI + | II 〉 CII

It's really like what you learned about vectors in high school: one can go from one set of base vectors to another by a transformation, such as, for example, a rotation, or a translation. It's just that, just like in high school, we need some direction in regard to which we define our rotation or our translation. For state vectors, I showed how a rotation of base states worked in one of my posts on two-state systems. To be specific, we had the following relation between the two representations:

The (1/√2) factor is there because of the normalization condition, and the two-by-two matrix equals the transformation matrix for a rotation of a state filtering apparatus about the y-axis, over an angle equal to (minus) 90 degrees, which we wrote as:

The y-axis? What y-axis? What state filtering apparatus? Just relax. Think about what you've learned already. The orientations are shown below: the S apparatus separates 'up' and 'down' states along the z-axis, while the T-apparatus does so along an axis that is tilted, about the y-axis, over an angle equal to α, or φ, as it's written in the table above. Of course, we don't really introduce an apparatus at this or that angle. We just introduced an electromagnetic field, which re-defined our | 1 〉 and | 2 〉 base states and, therefore, through the rotational transformation matrix, also defines our | I 〉 and | II 〉 base states.
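To make the change of representation concrete, here is a minimal numpy sketch of that rotation; the units and the sample time value in the check are illustrative assumptions.

```python
import numpy as np

# Transformation from the (C1, C2) to the (CI, CII) representation:
# CI = (C1 - C2)/sqrt(2),  CII = (C1 + C2)/sqrt(2).
R = (1.0 / np.sqrt(2.0)) * np.array([[1.0, -1.0],
                                     [1.0,  1.0]])

def to_I_II(C1, C2):
    return R @ np.array([C1, C2])

# Check with the free-space amplitudes at some time t:
hbar, E0, A, t = 1.0, 0.0, 0.3, 1.7         # illustrative units
C1 = np.exp(-1j * E0 * t / hbar) * np.cos(A * t / hbar)
C2 = 1j * np.exp(-1j * E0 * t / hbar) * np.sin(A * t / hbar)
CI, CII = to_I_II(C1, C2)
print(abs(CI) ** 2, abs(CII) ** 2)          # both 1/2: stationary states
```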
[…] You may have lost me by now, and so then you’ll want to skip to the next section. That’s fine. Just remember that the representations in terms of | I 〉 and | II 〉 base states or in terms of | 1 〉 and | 2 〉 base states are mathematically equivalent. Having said that, if you’re reading this post, and you want to understand it, truly (because you want to truly understand quantum mechanics), then you should try to stick with me here. 🙂 Indeed, there’s a zillion things you could think about right now, but you should stick to the math now. Using that transformation matrix, we can relate the CI and CII coefficients in the | φ 〉 = | I 〉 CI + | II 〉 CII expression to the C1 and C2 coefficients in the | φ 〉 = | 1 〉 C1 + | 2 〉 C2 expression. Indeed, we wrote:

• CI = 〈 I | ψ 〉 = (1/√2)·(C1 − C2)
• CII = 〈 II | ψ 〉 = (1/√2)·(C1 + C2)

That’s exactly the same as writing: OK. […] Waw! You just took a huge leap, because we can now compare the two sets of differential equations: They’re mathematically equivalent, but the mathematical behavior of the functions involved is very different. Indeed, unlike the C1(t) and C2(t) amplitudes, we find that the CI(t) and CII(t) amplitudes are stationary, i.e. the associated probabilities – which we find by taking the absolute square of the amplitudes, as usual – do not vary in time. To be precise, if you write it all out and simplify, you’ll find that the CI(t) and CII(t) amplitudes are equal to:

• CI(t) = 〈 I | ψ 〉 = (1/√2)·(C1 − C2) = (1/√2)·e−(i/ħ)·(E0 + A)·t = (1/√2)·e−(i/ħ)·EI·t
• CII(t) = 〈 II | ψ 〉 = (1/√2)·(C1 + C2) = (1/√2)·e−(i/ħ)·(E0 − A)·t = (1/√2)·e−(i/ħ)·EII·t

As the absolute square of the exponential is equal to one, the associated probabilities, i.e. |CI(t)|2 and |CII(t)|2, are, quite simply, equal to |1/√2|2 = 1/2. Now, it is very tempting to say that this means that our ammonia molecule has an equal chance to be in state I or state II. In fact, while I may have said something like that in my previous posts, that’s not how one should interpret this. The chance of our molecule being exactly in state I or state II, or in state 1 or state 2, is varying with time, with the probability being ‘dumped’ from one state to the other all of the time. I mean… The electric dipole moment can point in any direction, really. So saying that our molecule has a 50/50 chance of being in state 1 or state 2 makes no sense. Likewise, saying that our molecule has a 50/50 chance of being in state I or state II makes no sense either. Indeed, the state of our molecule is specified by the | φ 〉 = | I 〉 CI + | II 〉 CII = | 1 〉 C1 + | 2 〉 C2 equations, and neither of these two expressions is a stationary state. They mix two frequencies, because they mix two energy levels. Having said that, we’re talking quantum mechanics here and, therefore, an external inhomogeneous electric field will effectively split the ammonia molecules according to their state. The situation is really like what a Stern-Gerlach apparatus does to a beam of electrons: it will split the beam according to the electron’s spin, which is either ‘up’ or, else, ‘down’, as shown in the graph below: The graph for our ammonia molecule, shown below, is very similar. The vertical axis measures the same: energy. And the horizontal axis measures με, which increases with the strength of the electric field ε. So we see a similar ‘splitting’ of the energy of the molecule in an external electric field. How should we explain this?
It is very tempting to think that the presence of an external force field causes the electrons, or the ammonia molecule, to ‘snap into’ one of the two possible states, which are referred to as state I and state II respectively in the illustration of the ammonia state selector below. But… Well… Here we’re entering the murky waters of actually interpreting quantum mechanics, for which (a) we have no time, and (b) we are not qualified. So you should just believe, or take for granted, what’s being shown here: an inhomogeneous electric field will split our ammonia beam according to their state, which we define as I and II respectively, and which are associated with the energy E0 + A and E0 − A respectively. As mentioned above, you should note that these two states are stationary. The Hamiltonian equations which, as they always do, describe the dynamics of this system, imply that the amplitude to go from state I to state II, or vice versa, is zero. To make sure you ‘get’ that, I reproduce the associated Hamiltonian matrix once again: Of course, that will change when we start our analysis of what’s happening in the maser. Indeed, we will have some non-zero HI,II and HII,I amplitudes in the resonant cavity of our ammonia maser, in which we’ll have an oscillating electric field and, as a result, induced transitions from state I to II and vice versa. However, that’s for later. While I’ll quickly insert the full picture diagram below, you should, for the moment, just think about those two stationary states and those two zeroes. 🙂 Capito? If not… Well… Start reading this post again, I’d say. 🙂

#### Intermezzo: on approximations

At this point, I need to say a few things about all of the approximations involved, because it can be quite confusing indeed. So let’s take a closer look at those energy levels and the related Hamiltonian coefficients. In fact, in his Lectures, Feynman shows us that we can always have a general solution for the Hamiltonian equations describing a two-state system whenever we have constant Hamiltonian coefficients. That general solution – which, mind you, is derived assuming Hamiltonian coefficients that do not depend on time – can always be written in terms of two stationary base states, i.e. states with a definite energy and, hence, a constant probability. The equations, and the two definite energy levels are: That yields the following values for the energy levels for the stationary states: Now, that’s very different from the EI = E0 + A and EII = E0 − A energy levels for those stationary states we had defined in the previous section: those stationary states had no square root, and no μ2ε2, in their energy. In fact, that sort of answers the question: if there’s no external field, then that μ2ε2 factor is zero, and the square root in the expression becomes ±√(A2) = ±A. So then we’re back to our EI = E0 + A and EII = E0 − A formulas. The whole point, however, is that we will actually have an electric field in that cavity. Moreover, it’s going to be a field that varies in time, which we’ll write as ε = 2ε0·cos(ω·t). Now, part of the confusion in Feynman’s approach is that he constantly switches between representing the system in terms of the I and II base states and the 1 and 2 base states respectively.
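Again, the referenced equations were images that are no longer shown. Based on Feynman’s general two-state solution – so treat this as a reconstruction – the definite energy levels are presumably:

```latex
% General two-state energy levels for constant Hamiltonian coefficients,
% applied to H11 = E0 + mu*eps, H22 = E0 - mu*eps, H12 = H21 = -A:
E_{\mathrm{I}}  = E_0 + \sqrt{A^2 + \mu^2\varepsilon^2}, \qquad
E_{\mathrm{II}} = E_0 - \sqrt{A^2 + \mu^2\varepsilon^2}
```

Note that these reduce to E0 ± A when ε = 0, which is exactly the point being made in the text above.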
For a good understanding, we should compare with our original representation of the dynamics in free space, for which the Hamiltonian was the following one: That matrix can easily be related to the new one we’re going to have to solve, which is equal to: The interpretation is easy if we look at that illustration again: If the direction of the electric dipole moment is opposite to the direction of ε, then the associated energy is equal to −μ·ε = −|μ|·|ε|·cosθ = −μ·ε·cos(π) = +με. Conversely, for state 2, we find −μ·ε·cos(0) = −με for the energy that’s associated with the dipole moment. You can and should think about the physics involved here, because it makes sense! Thinking of amplitudes, you should note that the +με and −με terms effectively change the H11 and H22 coefficients, so they change the amplitude to stay in state 1 or state 2 respectively. That, of course, will have an impact on the associated probabilities, and so that’s why we’re talking of induced transitions now. Having said that, the Hamiltonian matrix above keeps the −A for H12 and H21, so the matrix captures spontaneous transitions too! Still… You may wonder why Feynman doesn’t use those EI and EII formulas with the square root because that would give us some exact solution, wouldn’t it? The answer to that question is: maybe it would, but would you know how to solve those equations? We’ll have a varying field, remember? So our Hamiltonian H11 and H22 coefficients will no longer be constant, but time-dependent. As you’re going to see, it takes Feynman three pages to solve the whole thing using the +με and −με approximation. So just imagine how complicated it would be using that square root expression! [By the way, do have a look at those asymptotic curves in that illustration showing the splitting of energy levels above, so you see what that approximation looks like.] So that’s the real answer: we need to simplify somehow, so as to get any solutions at all! Of course, it’s all quite confusing because, after Feynman first notes that, for strong fields, the A2 in that square root is small as compared to μ2ε2, thereby justifying the use of the simplified EI = E0 + με = H11 and EII = E0 − με = H22 coefficients, he continues and bluntly uses the very same square root expression to explain how that state selector works, saying that the electric field in the state selector will be rather weak and, hence, that με will be much smaller than A, so one can use the following approximation for the square root in the expressions above: The energy expressions then reduce to: And then we can calculate the force on the molecules as: So the electric field in the state selector is weak, but the electric field in the cavity is supposed to be strong, and so… Well… That’s it, really. The bottom line is that we’ve got a beam of ammonia molecules that are all in state I, and it’s what happens with that beam then, that is being described by our new set of differential equations:

#### Solving the equations

As all molecules in our ammonia beam are described in terms of the | I 〉 and | II 〉 base states – as evidenced by the fact that we say all molecules that enter the cavity are in state I – we need to switch to that representation. We do that by using that transformation above, so we write:

• CI = 〈 I | ψ 〉 = (1/√2)·(C1 − C2)
• CII = 〈 II | ψ 〉 = (1/√2)·(C1 + C2)

Keeping these ‘definitions’ of CI and CII in mind, you should then add the two differential equations, divide the result by the square root of 2, and you should get the following new equation: Please!
Do it and verify the result! You want to learn something here, no? 🙂 Likewise, subtracting the two differential equations, we get: We can re-write this as: Now, the problem is that the Hamiltonian constants here are not constant. To be precise, the electric field ε varies in time. We wrote: So HI,II and HII,I, which are equal to με, are not constant: we’ve got Hamiltonian coefficients that are a function of time themselves. […] So… Well… We just need to get on with it and try to finally solve this thing. Let me just copy Feynman as he grinds through this: This is only the first step in the process. Feynman just takes two trial functions, which are really similar to the very general C = a·e−(i/ħ)·H11·t function we presented when only one equation was involved, or – if you prefer a set of two equations – those CI(t) = a·e−(i/ħ)·EI·t and CII(t) = b·e−(i/ħ)·EII·t equations above. The difference is that the coefficients in front, i.e. γI and γII, are not some (complex) constant, but functions of time themselves. The next step in the derivation is as follows: One needs to do a bit of gymnastics here as well to follow what’s going on, but please do check and you’ll see it works. Feynman derives another set of differential equations here, and they specify these γI = γI(t) and γII = γII(t) functions. These equations are written in terms of the frequency of the field, i.e. ω, and the resonant frequency ω0, which we mentioned above when calculating that 23.79 GHz frequency from the 2A = h·f0 equation. So ω0 is the same molecular resonance frequency but expressed as an angular frequency, so ω0 = 2π·f0 = 2A/ħ. He then proceeds to simplify, using assumptions one should check. He then continues: That gives us what we presented in the previous post: So… Well… What to say? I explained those probability functions in my previous post, indeed. We’ve got two probabilities here:

• PI = cos2[(με0/ħ)·t]
• PII = sin2[(με0/ħ)·t]

So that’s just like the P1 = cos2[(A/ħ)·t] and P2 = sin2[(A/ħ)·t] probabilities we found for spontaneous transitions. But so here we are talking induced transitions. As you can see, the frequency and, hence, the period, depend on the strength, or magnitude, of the electric field, i.e. the ε0 constant in the ε = 2ε0·cos(ω·t) expression. The natural unit for measuring time would be the period once again, which we can easily calculate as (με0/ħ)·T = π ⇔ T = π·ħ/με0. Now, we had that T = (π·ħ)/A expression above, which allowed us to calculate the period of the spontaneous transition probability, which we found was like 40 picoseconds, i.e. 40×10−12 seconds. Now, the T = (π·ħ)/(2με0) expression is very similar: it allows us to calculate the expected, average, or mean time for an induced transition. In fact, if we write Tinduced = (π·ħ)/(2με0) and Tspontaneous = (π·ħ)/(2A), then we can take the ratio to find: Tinduced/Tspontaneous = [(π·ħ)/(2με0)]/[(π·ħ)/(2A)] = A/με0. This A/με0 ratio is greater than one, so Tinduced/Tspontaneous is greater than one, which, in turn, means that the presence of our electric field – which, let me remind you, dances to the beat of the resonant frequency – causes a slower transition than we would have had if the oscillating electric field were not present. But – Hey! – that’s the wrong comparison! Remember all molecules enter in a stationary state, as they’ve been selected so as to ensure they’re in state I. So there is no such thing as a spontaneous transition frequency here! They’re all polarized, so to speak, and they would remain that way if there was no field in the cavity.
So if there was no oscillating electric field, they would never transition. Nothing would happen! Well… In terms of our particular set of base states, of course! Why? Well… Look at the Hamiltonian coefficients HI,II = HII,I = με: these coefficients are zero if ε is zero. So… Well… That says it all. So that‘s what it’s all about: induced emission and, as I explained in my previous post, because all molecules enter in state I, i.e. the upper energy state, literally, they all ‘dump’ a net amount of energy equal to 2A into the cavity at the occasion of their first transition. The molecules then keep dancing, of course, and so they absorb and emit the same amount as they go through the cavity, but… Well… We’ve got a net contribution here, which is not only enough to maintain the cavity oscillations, but actually also provides a small excess of power that can be drawn from the cavity as microwave radiation of the same frequency. As Feynman notes, an exact description of what actually happens requires an understanding of the quantum mechanics of the field in the cavity, i.e. quantum field theory, which I haven’t studied yet. But… Well… That’s for later, I guess. 🙂

Post scriptum: The sheer length of this post shows we’re not doing something that’s easy here. Frankly, I feel the whole analysis is still quite obscure, in the sense that – despite looking at this thing again and again – it’s hard to sort of interpret what’s going on, in a physical sense that is. But perhaps one shouldn’t try that. I’ve quoted Feynman’s view on how easy or how difficult it is to ‘understand’ quantum mechanics a couple of times already, so let me do it once more: “Because atomic behavior is so unlike ordinary experience, it is very difficult to get used to, and it appears peculiar and mysterious to everyone—both to the novice and to the experienced physicist. Even the experts do not understand it the way they would like to, and it is perfectly reasonable that they should not, because all of direct, human experience and human intuition applies to large objects.” So… Well… I’ll grind through the remaining Lectures now – I am halfway through Volume III now – and then re-visit all of this. Despite Feynman’s warning, I want to understand it the way I like to, even if I don’t quite know what way that is right now. 🙂

Addendum: As for those cycles and periods, I noted a couple of times already that the Planck-Einstein equation E = h·f can usefully be re-written as E/f = h, as it gives a physical interpretation to the value of the Planck constant. In fact, I said h is the energy that’s associated with one cycle, regardless of the frequency of the radiation involved. Indeed, the energy of a photon divided by the number of cycles per second, should give us the energy per cycle, no? Well… Yes and no. Planck’s constant h and the frequency are both expressed referencing the time unit. However, if we say that a sodium atom emits one photon only as its electron transitions from a higher energy level to a lower one, and if we say that involves a decay time of the order of 3.2×10−8 seconds, then what we’re saying really is that a sodium light photon will ‘pack’ like 16 million cycles, which is what we get when we multiply the number of cycles per second (i.e. the mentioned frequency of 500×1012 Hz) by the decay time (i.e. 3.2×10−8 seconds): (500×1012 Hz)·(3.2×10−8 s) = 16×106 cycles, indeed. So the energy per cycle is 2.068 eV (i.e. the photon energy) divided by 16×106, so that’s 0.129×10−6 eV.
Unsurprisingly, that’s what we get when we divide h by 3.2×10−8 s: (4.13567×10−15 eV·s)/(3.2×10−8 s) = 1.29×10−7 eV. We’re just putting some values into the E/(f·T) = h/T equation here. The logic for that 2A = h·f0 is the same. The frequency of the radiation that’s being absorbed or emitted is 23.79 GHz, so the photon energy is (23.79×109 Hz)·(4.13567×10−15 eV·s) ≈ 1×10−4 eV. Now, we calculated the transition period T as T = π·ħ/A ≈ (π·6.582×10−16 eV·s)/(0.5×10−4 eV) ≈ 41×10−12 seconds. Now, an oscillation of a frequency of 23.79 gigahertz that only lasts 41×10−12 seconds is an oscillation of one cycle only. The consequence is that, when we continue this style of reasoning, we’d have a photon that packs all of its energy into one cycle! Let’s think about what this implies in terms of the density in space. The wavelength of our microwave radiation is 1.26×10−2 m, so we’ve got a ‘density’ of 1×10−4 eV/1.26×10−2 m ≈ 0.8×10−2 eV/m = 0.008 eV/m. The wavelength of our sodium light is 0.6×10−6 m, so we get a ‘density’ of 1.29×10−7 eV/0.6×10−6 m = 2.15×10−1 eV/m = 0.215 eV/m. So the energy ‘density’ of our sodium light is 26.875 times that of our microwave radiation. 🙂 Frankly, I am not quite sure if calculations like this make much sense. In fact, when talking about energy densities, I should review my posts on the Poynting vector. However, they may help you think things through. 🙂
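Chains of powers of ten like these are easy to get wrong, so here is a small Python sanity check of the addendum’s numbers (a sketch using the values as stated in the text, nothing more):

```python
# Quick numeric check of the 'energy per cycle' arithmetic above (h in eV·s).
h = 4.13567e-15                       # Planck's constant, eV·s

# Sodium light: photon energy 2.068 eV, f = 500 THz, decay time 3.2e-8 s
f_na, tau_na, E_na = 500e12, 3.2e-8, 2.068
cycles = f_na * tau_na                # number of cycles 'packed' in one photon
print(cycles)                         # 16000000.0, i.e. 16 million cycles
print(E_na / cycles, h / tau_na)      # both ~1.29e-07 eV per cycle

# Ammonia microwave photon at 23.79 GHz
f0 = 23.79e9
E_ph = h * f0                         # ~9.8e-5 eV, i.e. roughly 1e-4 eV = 2A
print(E_ph)

# 'Densities' per metre of wavelength, as in the text
print(E_ph / 1.26e-2)                 # ~0.008 eV/m (microwave)
print((E_na / cycles) / 0.6e-6)       # ~0.215 eV/m (sodium light)
```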
# The ammonia maser: transitions in a time-dependent field

Pre-script (dated 26 June 2020): I have come to the conclusion one does not need all this hocus-pocus to explain masers or lasers (and two-state systems in general): classical physics will do. So no use to read this. Read my papers instead. 🙂

Original post: Feynman’s analysis of a maser – microwave amplification, by stimulated emission of radiation – combines an awful lot of stuff. Resonance, electromagnetic field theory, and quantum mechanics: it’s all there! Therefore, it’s complicated and, hence, actually very tempting to just skip it when going through his third volume of Lectures. But let’s not do that. What I want to do in this post is not repeat his analysis, but reflect on it and, perhaps, offer some guidance as to how to interpret some of the math.

#### The model: a two-state system

The model is a two-state system, which Feynman illustrates as follows: Don’t shy away now. It’s not so difficult. Try to understand. The nitrogen atom (N) in the ammonia molecule (NH3) can tunnel through the plane of the three hydrogen (H) atoms, so it can be ‘up’ or ‘down’. This ‘up’ or ‘down’ state has nothing to do with the classical or quantum-mechanical notion of spin, which is related to the magnetic moment. Nothing, i.e. nada, niente, rien, nichts! Indeed, it’s much simpler than that. 🙂 The nitrogen atom could be either beneath or, else, above the plane of the hydrogens, as shown above, with ‘beneath’ and ‘above’ being defined in regard to the molecule’s direction of rotation around its axis of symmetry. That’s all. That’s why we prefer simple numbers to denote those two states, instead of the confusing ‘up’ or ‘down’, or ‘↑’ or ‘↓’ symbols. We’ll just call the two states state ‘1’ and state ‘2’ respectively. Having said that (i.e. having said that you shouldn’t think of spin, which is related to the angular momentum of some (net) electric charge), the NH3 molecule does have some electric dipole moment, which is denoted by μ in the illustration and which, depending on the state of the molecule (i.e. the nitrogen atom being above or beneath the plane of the hydrogens), changes the total energy of the molecule by an amount that is equal to +με or −με, with ε some external electric field, as illustrated by the ε arrow on the left-hand side of the diagram. [You may think of that arrow as an electric field vector.] This electric field may vary in time and/or in space, but we’ll not worry about that now. In fact, we should first analyze what happens in the absence of an external field, which is what we’ll do now. The NH3 molecule will spontaneously transition from an ‘up’ to a ‘down’ state, or from ‘1’ to ‘2’—and vice versa, of course! This spontaneous transition is also modeled as an uncertainty in its energy. Indeed, we say that, even in the absence of an external electric field, there will be two energy levels, rather than one only: E0 + A and E0 − A. We wrote the amplitude to find the molecule in either one of these two states as:

• C1(t) = 〈 1 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 − A)·t + (1/2)·e−(i/ħ)·(E0 + A)·t = e−(i/ħ)·E0·t·cos[(A/ħ)·t]
• C2(t) = 〈 2 | ψ 〉 = (1/2)·e−(i/ħ)·(E0 − A)·t − (1/2)·e−(i/ħ)·(E0 + A)·t = i·e−(i/ħ)·E0·t·sin[(A/ħ)·t]

[Remember: the sum of complex conjugates, i.e. eiθ + e−iθ, reduces to 2·cosθ, while eiθ − e−iθ reduces to 2·i·sinθ.] That gave us the following probabilities:

• P1 = |C1|2 = cos2[(A/ħ)·t]
• P2 = |C2|2 = sin2[(A/ħ)·t]

[Remember: the absolute square of i is |i|2 = 1, so the i in the C2(t) formula disappears.] The graph below shows how these probabilities evolve over time. Note that, because of the square, the period of cos2[(A/ħ)·t] and sin2[(A/ħ)·t] is equal to π, instead of the usual 2π. The interpretation of this is easy enough: if our molecule can be in two states only, and it starts off in one, then the probability that it will remain in that state will gradually decline, while the probability that it flips into the other state will gradually increase. As Feynman puts it: the first state ‘dumps’ probability into the second state as time goes by, and vice versa, so the probability sloshes back and forth between the two states.
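A few lines of Python make that ‘sloshing’ explicit (a sketch, not part of the original post):

```python
import numpy as np

# P1 = cos^2(A·t/ħ) and P2 = sin^2(A·t/ħ), with time measured in units of ħ/A.
t = np.linspace(0.0, 2.0 * np.pi, 9)   # a few sample points, in units of ħ/A
P1 = np.cos(t) ** 2
P2 = np.sin(t) ** 2

for ti, p1, p2 in zip(t, P1, P2):
    print(f"t = {ti:5.2f}·ħ/A   P1 = {p1:.3f}   P2 = {p2:.3f}   sum = {p1 + p2:.3f}")
# P1 + P2 = 1 at all times, and each probability repeats with period π
# (not 2π), exactly because of the squares.
```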
The graph above measures time in units of ħ/A but, frankly, the ‘natural’ unit of time would usually be the period, which you can easily calculate as (A/ħ)·T = π ⇔ T = π·ħ/A. In any case, you can go from one unit to another by dividing or multiplying by π. Of course, the period is the reciprocal of the frequency and so we can calculate the molecular transition frequency f0 as f0 = A/(π·ħ) = 2A/h. [Remember: h = 2π·ħ, so A/(π·ħ) = 2A/h.] Of course, by now we’re used to using angular frequencies, and so we’d rather write: ω0 = 2π·f0 = 2π·A/(π·ħ) = 2A/ħ. And because it’s always good to have some idea of the actual numbers – as we’re supposed to model something real, after all – I’ll give them to you straight away. The separation between the two energy levels E0 + A and E0 − A has been measured as being equal to 2A = h·f0 ≈ 10−4 eV, more or less. 🙂 That’s tiny. To avoid having to convert this to joule, i.e. the SI unit for energy, we can calculate the corresponding frequency using h expressed in eV·s, rather than in J·s. We get: f0 = 2A/h = (1×10−4 eV)/(4×10−15 eV·s) = 25 GHz. Now, we’ve rounded the numbers here: the exact frequency is 23.79 GHz, which corresponds to microwave radiation with a wavelength of λ = c/f0 = 1.26 cm. How does one measure that? It’s simple: ammonia absorbs light of this frequency. The frequency is also referred to as a resonance frequency, as light of this frequency, i.e. microwave radiation, will also induce transitions from one state to another. In fact, that’s what the stimulated emission of radiation principle is all about. But we’re getting ahead of ourselves here. It’s time to look at what happens if we do apply some external electric field, which is what we’ll do now.

#### Polarization and induced transitions

As mentioned above, an electric field will change the total energy of the molecule by an amount that is equal to +με or −με. Of course, the plus or the minus in front of με depends both on the direction of the electric field ε, as well as on the direction of μ. However, it’s not like our molecule might be in four possible states. No. We assume the direction of the field is given, and then we have two states only, with the following energy levels: Don’t rack your brain over how you get that square root thing. You get it when applying the general solution of a pair of Hamiltonian equations to this particular case. For full details on how to get this general solution, I’ll refer you to Feynman. Of course, we’re talking base states here, which do not always have a physical meaning. However, in this case, they do: a jet of ammonia gas will split in an inhomogeneous electric field, and it will split according to these two states, just like a beam of particles with different spin in a Stern-Gerlach apparatus. A Stern-Gerlach apparatus splits particle beams because of an inhomogeneous magnetic field, however. So here we’re talking an electric field. It’s important to note that the field should not be homogeneous, for the very same reason as to why the magnetic field in the Stern-Gerlach apparatus should not be homogeneous: it’s because the force on the molecules will be proportional to the derivative of the energy. So if the energy doesn’t vary—so if there is no strong field gradient—then there will be no force.
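The next few sentences refer to equations (‘the following approximation for the square root…’, ‘the energy expressions then reduce to…’, ‘the force on the molecules…’) that were shown as images and are no longer visible. Based on Feynman’s treatment, they presumably read as follows (a reconstruction):

```latex
% Energy levels in the field, and their weak-field expansion:
E_{\mathrm{I,II}} = E_0 \pm \sqrt{A^2 + \mu^2\varepsilon^2}
\;\approx\; E_0 \pm \left( A + \frac{\mu^2\varepsilon^2}{2A} \right)
\quad\text{for } \mu\varepsilon \ll A

% Force: minus the gradient of the field-dependent part of the energy,
% so state I is pushed toward low field and state II toward high field:
\mathbf{F}_{\mathrm{I,II}} = \mp\,\frac{\mu^2}{2A}\,\nabla(\varepsilon^2)
```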
[If you want to get more detail, check the section on the Stern-Gerlach apparatus in my post on spin and angular momentum.] To be precise, if με is much smaller than A, then one can use the following approximation for the square root in the expressions above: The energy expressions then reduce to: And then we can calculate the force on the molecules as: The bottom line is that our ammonia jet will split into two separate beams: all molecules in state I will be deflected toward the region of lower ε2, and all molecules in state II will be deflected toward the region of larger ε2. [We talk about ε2 rather than ε because of the ε2 gradient in that force formula. However, you could, of course, simplify and write that gradient as ∇(ε2) = 2ε·∇ε.] So, to make a long story short, we should now understand the left-hand side of the schematic maser diagram below. It’s easy to understand that the ammonia molecules that go into the maser cavity are polarized. To understand the maser, we need to understand how the maser cavity works. It’s a so-called resonant cavity, and we’ve got an electric field in it as well. The field direction happens to be south as we’re looking at it right now, but in an actual maser we’ll have an electric field that varies sinusoidally. Hence, while the direction of the field is always perpendicular to the direction of motion of our ammonia molecules, it switches from south to north and vice versa all of the time. We write ε as: ε = 2ε0·cos(ω·t) = ε0·(ei·ω·t + e−i·ω·t). Now, you’ve guessed it, of course. If we ensure that ω = ω0 = 2A/ħ, then we’ve got a maser. In fact, the result is a similar graph: Let’s first explain this graph. We’ve got two probabilities here:

• PI = cos2[(με0/ħ)·t]
• PII = sin2[(με0/ħ)·t]

So that’s just like the P1 = cos2[(A/ħ)·t] and P2 = sin2[(A/ħ)·t] probabilities we found for spontaneous transitions. In fact, the formulas for the related amplitudes are also similar to those for C1(t) and C2(t):

• CI(t) = 〈 I | ψ 〉 = e−(i/ħ)·EI·t·cos[(με0/ħ)·t], which is equal to: CI(t) = e−(i/ħ)·(E0+A)·t·cos[(με0/ħ)·t] = e−(i/ħ)·(E0+A)·t·(1/2)·[ei·(με0/ħ)·t + e−i·(με0/ħ)·t] = (1/2)·e−(i/ħ)·(E0+A−με0)·t + (1/2)·e−(i/ħ)·(E0+A+με0)·t
• CII(t) = 〈 II | ψ 〉 = i·e−(i/ħ)·EII·t·sin[(με0/ħ)·t], which is equal to: CII(t) = e−(i/ħ)·(E0−A)·t·i·sin[(με0/ħ)·t] = e−(i/ħ)·(E0−A)·t·(1/2)·[ei·(με0/ħ)·t − e−i·(με0/ħ)·t] = (1/2)·e−(i/ħ)·(E0−A−με0)·t − (1/2)·e−(i/ħ)·(E0−A+με0)·t

But so here we are talking induced transitions. As you can see, the frequency and, hence, the period, depend on the strength, or magnitude, of the electric field, i.e. the ε0 constant in the ε = 2ε0·cos(ω·t) expression. The natural unit for measuring time would be the period once again, which we can easily calculate as (με0/ħ)·T = π ⇔ T = π·ħ/με0. However, Feynman adds a 1/2 factor so as to ensure it corresponds to the time a molecule needs to go through the cavity. Well… That’s what he says, at least. I’ll show he’s actually wrong, but the idea is OK. First have a look at the diagram of our maser once again. You can see that all molecules come in in state I, but are supposed to leave in state II. Now, Feynman says that’s because the cavity is just long enough so as to more or less ensure that all ammonia molecules switch from state I to state II. Hmm… Let’s have a close look at that. What the functions and the graph are telling us is that, at the point t = 1 (with t being measured in those π·ħ/2με0 units), the probability of being in state I has all been ‘dumped’ into the probability of being in state II!
So… Well… Our molecules had better be in that state then! 🙂 Of course, the idea is that, as they transition from state I to state II, they lose energy. To be precise, according to our expressions for EI and EII above, the difference between the energy levels that are associated with these two states is equal to 2A + μ2ε02/A. Now, a resonant cavity is a cavity designed to keep electromagnetic waves like the oscillating field that we’re talking about here going with minimal energy loss. Indeed, a microwave cavity – which is what we’re having here – is similar to a resonant circuit, except that it’s much better than any equivalent electric circuit you’d try to build, using inductors and capacitors. ‘Much better’ means it hardly needs energy to keep it going. We express that using the so-called Q-factor (believe it or not: the ‘Q’ stands for quality). The Q factor of a resonant cavity is of the order of 106, as compared to 102 for electric circuits that are designed for the same frequencies. But let’s not get into the technicalities here. Let me quote Feynman as he summarizes the operation of the maser: “The molecule enters the cavity, [and then] the cavity field—oscillating at exactly the right frequency—induces transitions from the upper to the lower state, and the energy released is fed into the oscillating field. In an operating maser the molecules deliver enough energy to maintain the cavity oscillations—not only providing enough power to make up for the cavity losses but even providing small amounts of excess power that can be drawn from the cavity. Thus, the molecular energy is converted into the energy of an external electromagnetic field.” As Feynman notes, it is not so simple to explain how exactly the energy of the molecules is being fed into the oscillations of the cavity: it would require to also deal with the quantum mechanics of the field in the cavity, in addition to the quantum mechanics of our molecule. So we won’t get into that nitty-gritty—not here at least. So… Well… That’s it, really. Of course, you’ll wonder about the orders of magnitude, or minitude, involved. And… Well… That’s where this analysis is somewhat tricky. Let me first say something more about those resonant cavities because, while that’s quite straightforward, you may wonder if they could actually build something like that in the 1950s. 🙂 The condition is that the cavity length must be an integer multiple of the half-wavelength at resonance. We’ve talked about this before. [See, for example, my post on wave modes.] More formally, the condition for resonance in a resonator is that the round trip distance, 2·d, is equal to an integral number of the wavelength λ, so we write: 2·d = N·λ, with N = 1, 2, 3, etc. Then, if the velocity of our wave is equal to c, then the resonant frequencies will be equal to f = (N·c)/(2·d). Does that make sense? Of course. We’re talking the speed of light, but we’re also talking microwaves. To be specific, we’re talking a frequency of 23.79 GHz and, more importantly, a wavelength that’s equal to λ = c/f0 = 1.26 cm, so for the first normal mode (N = 1), we get 2·d = λ ⇔ d = λ/2 = 6.3 mm. In short, we’re surely not talking nanotechnology here! In other words, the technological difficulties involved in building the apparatus were not insurmountable. 🙂
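A two-line check of that geometry (a sketch; c rounded to 3×10⁸ m/s):

```python
# Wavelength and first-mode cavity size for the ammonia resonance.
c = 3.0e8                 # speed of light, m/s
f0 = 23.79e9              # ammonia resonance frequency, Hz

lam = c / f0              # wavelength
d = lam / 2.0             # first normal mode: 2·d = λ
print(lam)                # ~0.0126 m, i.e. 1.26 cm
print(d * 1000)           # ~6.3 mm
```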
But what about the time that’s needed to travel through it? What about that length? Now, that depends on the με0 quantity if we are to believe Feynman here. Now, we actually don’t need to know the actual values for μ or ε: we said that the value of the με0 product is (much) smaller than the value of A. Indeed, the fields that are used in those masers aren’t all that strong, and the electric dipole moment μ is pretty tiny. So let’s say με0 = A/2, which is the upper limit for our approximation of that square root above, so 2με0 = A = 0.5×10−4 eV. [The approximation for that square root expression is only valid when y ≤ x/2.] Let’s now think about the time. It was measured in units equal to T = π·ħ/2με0. So our T here is not the T we defined above, which was the period. Here it’s the period divided by two. First the dimensions: ħ is expressed in eV·s, and με0 is an energy, so we can express it in eV too: 1 eV ≈ 1.6×10−19 J, i.e. 160 zeptojoules. 🙂 π is just a real number, so our T = π·ħ/2με0 gives us seconds alright. So we get: T ≈ (3.14×6.6×10−16 eV·s)/(0.5×10−4 eV) ≈ 40×10−12 seconds […] Hmm… That doesn’t look good. Even when traveling at the speed of light – which our ammonia molecule surely doesn’t do! – it would only travel over a distance equal to (3×108 m/s)·(40×10−12 s) = 120×10−4 m = 1.2 cm = 12 mm. The speed of our ammonia molecule is likely to be only a fraction of the speed of light, so we’d have an extremely short cavity then. The time mentioned is also not in line with what Feynman mentions about the ammonia molecule being in the cavity for a ‘reasonable length of time, say for one millisecond.‘ One millisecond is also more in line with the actual dimensions of the cavity which, as you can see from the historical illustration below, is quite long indeed. So what’s going on here? Feynman’s statement that T is “the time that it takes the molecule to go through the cavity” cannot be right. Let’s do some good thinking here. For example, let’s calculate the time that’s needed for a spontaneous state transition and compare with the time we calculated above. From the graph and the formulas above, we know we can calculate that from the (A/ħ)·T = π/2 equation. [Note the added 1/2 factor, because we’re not going through a full probability cycle: we’re going through a half-cycle only.] So that’s equivalent to T = (π·ħ)/(2A). We get: T ≈ (3.14×6.6×10−16 eV·s)/(1×10−4 eV) ≈ 20×10−12 seconds. The T = π·ħ/2με0 and T = (π·ħ)/(2A) expressions make it obvious that the expected, average, or mean time for a spontaneous versus an induced transition depends on A and με0 respectively. Let’s be systematic now, so we’ll distinguish Tinduced = (π·ħ)/(2με0) from Tspontaneous = (π·ħ)/(2A) respectively. Taking the ratio, we find: Tinduced/Tspontaneous = [(π·ħ)/(2με0)]/[(π·ħ)/(2A)] = A/με0. However, we know the A/με0 ratio is greater than one, so Tinduced/Tspontaneous is greater than one, which, in turn, means that the presence of our electric field – which, let me remind you, dances to the beat of the resonant frequency – causes a slower transition than we would have had if the oscillating electric field were not present. We may write the equation above as: Tinduced = [A/με0]·Tspontaneous = [A/με0]·(π·ħ)/(2A) = h/(4με0). However, that doesn’t tell us anything new. It just says that the transition period (T) is inversely proportional to the strength of the field (as measured by ε0). So a weak field will make for a longer transition period (T), with T → ∞ as ε0 → 0. So it all makes sense, but what do we do with this? The Tinduced/Tspontaneous = (με0/A)−1 relation is the most telling. It says that Tinduced/Tspontaneous is inversely proportional to the με0/A ratio.
For example, if the energy με0 is only one fifth of the energy A, then the time for the induced transition will be five times that of a spontaneous transition. To get something like a millisecond, however, we’d need the με0/A ratio to go down to like a billionth or something, which doesn’t make sense. So what’s the explanation? Is Feynman hiding something from us? He’s obviously aware of these periods because, when discussing the so-called three-state maser, he notes that “The | I 〉 state has a long lifetime, so its population can be increased.” But… Well… That’s just not relevant here. He just made a mistake: the length of the maser has nothing to do with it. The thing is: once the molecule transitions from state I to state II, then that’s basically the end of the story as far as the maser operation is concerned. By transitioning, it dumps that energy 2A + μ2ε02/A into the electric field, and that’s it. That’s energy that came from outside, because the ammonia molecules were selected so as to ensure they were in state I. So all the transitions afterwards don’t really matter: the ammonia molecules involved will absorb energy as they transition, and then give it back as they transition again, and so on and so on. But that’s no extra energy, i.e. no new or outside energy: it’s just energy going back and forth from the field to the molecules and vice versa. So, in a way, those PI and PII curves become irrelevant. Think of it: the energy that’s related to A and με0 is defined with respect to a certain orientation of the molecule as well as with respect to the direction of the electric field before it enters the apparatus, and the induced transition is to happen when the electric field inside of the cavity points south, as shown in the diagram. But then the transition happens, and that’s the end of the story, really. Our molecule is then in state II, and will oscillate between state II and I, and back again, and so on and so on, but it doesn’t mean anything anymore, as these flip-flops do not add any net energy to the system as a whole. So that’s the crux of the matter, really. Mind you: the power coming out of the first masers was of the order of one microwatt, i.e. 10−6 joule per second. Not a lot, but it’s something, and so you need to explain it from an ‘energy conservation’ perspective: it’s energy that came in with the molecules as they entered the cavity. So… Well… That’s it. The obvious question, of course, is: why do we actually need the oscillating field in the cavity? If all molecules come in in the ‘upper’ state, they’ll all dump their energy anyway. Why do we need the field? Well… First, you should note that the whole idea is that our maser keeps going because it uses the energy that the molecules are dumping into its field. The more important thing, however, is that we actually do need the field to induce the transition. That’s obvious from the math. Look at the probability functions once again:

• PI = cos2[(με0/ħ)·t]
• PII = sin2[(με0/ħ)·t]

If there would be no electric field, i.e. if ε0 = 0, then PI = 1 and PII = 0. So, our ammonia molecules enter in state I and, more importantly, stay in state I forever, so there’s no chance whatsoever to transition to state II. Also note what I wrote above: Tinduced = h/(4με0), and, therefore, we find that T → ∞ as ε0 → 0. So… Well… That’s it. I know this is not the ‘standard textbook’ explanation of the maser—it surely isn’t Feynman’s! But… Well… Please do let me know what you think about it.
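To pin down the orders of magnitude in that argument, here is a quick numeric check (με0 = A/2 is the text’s working assumption, not a measured value):

```python
import math

# Transition times for the ammonia molecule, with 2A = 1e-4 eV.
hbar = 6.582e-16                         # reduced Planck constant, eV·s
A = 0.5e-4                               # eV
mu_eps0 = A / 2.0                        # assumed field energy, eV

T_spont = math.pi * hbar / (2 * A)       # ~2e-11 s, i.e. about 20 ps
T_ind = math.pi * hbar / (2 * mu_eps0)   # ~4e-11 s, i.e. about 40 ps
print(T_spont, T_ind, T_ind / T_spont)   # ratio = A / (mu*eps0) = 2

# Field energy that WOULD be needed for a ~1 ms induced transition:
mu_eps0_ms = math.pi * hbar / (2 * 1e-3)
print(mu_eps0_ms / A)                    # ~2e-8, i.e. tens of 'billionths'
```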
What I write above indicates that the analysis is much more complicated than standard textbooks would want it to be. There’s one more point related to masers that I need to elaborate on, and that’s its use as an ‘atomic’ clock. So let me quickly do that now.

#### The use of a maser as an ‘atomic’ clock

In light of the amazing numbers involved – we talked GHz frequencies, and cycles expressed in picoseconds – we may wonder how it’s possible to ‘tune’ the frequency of the field to the ‘natural’ molecular transition frequency. It will be no surprise to hear that it’s actually not straightforward. It’s got to be right: if the frequency of the field, which we’ll denote by ω, is somewhat ‘off’ – significantly different from the molecular transition frequency ω0 – then the chance of transitioning from state I to state II shrinks significantly, and actually becomes zero for all practical purposes. That basically means that, if the frequency isn’t right, then the presence of the oscillating field doesn’t matter. In fact, the fact that the frequency has got to be right – with tolerances that, as we will see in a moment, are expressed in billionths – is why a maser can be used as an atomic clock. The graph below illustrates the principle. If ω = ω0, then the probability that a transition from state I to II will happen is one, so PI→II(ω)/PI→II(ω0) = 1. If it’s slightly off, though, then the ratio decreases quickly, which means that the PI→II probability goes rapidly down to zero. [There are secondary and tertiary ‘bumps’ because of interference of amplitudes, but they’re insignificant.] As evidenced from the graph, the cut-off point is ω − ω0 = 2π/T, which we can re-write as 2π·f − 2π·f0 = 2π/T, which is equivalent to writing: (f − f0)/f0 = 1/(f0·T). Now, we know that f0 = 23.79 GHz, but what’s T in this expression? Well… This time around it actually is the time that our ammonia molecules spend in the resonant cavity, from going in to going out, which Feynman says is of the order of a millisecond—so that’s much more reasonable than those 40 picoseconds we calculated. So 1/(f0·T) = 1/[(23.79×109)·(1×10−3)] ≈ 0.042×10−6 = 42×10−9, i.e. 42 billionths indeed, which Feynman rounds to “five parts in 108“, i.e. five parts in a hundred million. In short, the frequency must be ‘just right’, so as to get a significant transition probability and, therefore, get some net energy out of our maser, which, of course, will come out of our cavity as microwave radiation of the same frequency. Now that’s how one of the first ‘atomic’ clocks was built: the maser was the equivalent of a resonant circuit, and one could keep it going with little energy, because it’s so good as a resonant circuit. However, in order to get some net energy out of the system, in the form of microwave radiation of, yes, the ammonia frequency, the applied frequency had to be exactly right. To be precise, the applied frequency ω has to match the ω0 frequency, i.e. the molecular resonance frequency, with a precision expressed in billionths. As mentioned above, the power output is very limited, but it’s real: it comes out through the ‘output waveguide’ in the illustration above or, as the Encyclopædia Britannica puts it: “Output is obtained by allowing some radiation to escape through a small hole in the resonator.” 🙂 In any case, a maser is not built to produce huge amounts of power. On the contrary, the state selector obviously consumes more power than comes out of the cavity, so it’s not some generator.
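The maser’s frequency tolerance above is a one-liner to verify:

```python
# Relative frequency tolerance (f - f0)/f0 = 1/(f0·T) for the ammonia clock.
f0 = 23.79e9        # Hz
T = 1e-3            # s, time spent in the cavity (the text's 'one millisecond')

print(1 / (f0 * T)) # ~4.2e-8, i.e. ~42 parts per billion ('five parts in 1e8')
```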
Its main use nowadays is as a clock indeed, and so it’s that simple really: if there’s no output, then the ‘clock’ doesn’t work. It’s an interesting topic, but you can read more about it yourself. I’ll just mention that, while the ammonia maser was effectively used as a timekeeping device, the next generation of atomic clocks was based on the hydrogen maser, which was introduced in 1960. The principle is the same. Let me quote the Encyclopædia Britannica on it: “Its output is a radio wave, whose frequency of 1,420,405,751.786 hertz (cycles per second) is reproducible with an accuracy of one part in 30 × 1012. A clock controlled by such a maser would not get out of step more than one second in 100,000 years.” So… Well… Not bad. 🙂 Of course, one needs another clock to check if one’s clock is still accurate, and so that’s what’s done internationally: national standards agencies in various countries maintain a network of atomic clocks which are intercompared and kept synchronized. So these clocks define a continuous and stable time scale, collectively, which is referred to as the International Atomic Time (TAI, from the French Temps Atomique International). Well… That’s it for today. I hope you enjoyed it.

Post scriptum: When I say the ammonia molecule just dumps that energy 2A + μ2ε02/A into the electric field, and that’s “the end of the story”, then I am simplifying, of course. The ammonia molecule still has two energy levels, separated by an energy difference of 2A and, obviously, it keeps its electric dipole moment and so that continues to play as we’ve got an electric field in the cavity. In fact, the ammonia molecule has a high polarizability coefficient, which means it’s highly sensitive to the electric field inside of the cavity. So, yes, the molecules will continue ‘dancing’ to the beat of the field indeed, and absorbing and releasing energy, in accordance with that 2A and με0 factor, and so the probability curves do remain relevant—of course! However, we talked net energy going into the field, and so that’s where the ‘end of story’ story comes in. I hope I managed to make that clear. In fact, there are lots of other complications as well, and Feynman mentions them briefly in his account of things. But let’s keep things simple here. 🙂 Also, if you’d want to know how we get that PI→II(ω)/PI→II(ω0) ratio, check it out in Feynman. However, I have to warn you: the math involved is not easy. Not at all, really. The set of differential equations that’s involved is complicated, and it takes a while to understand why Feynman uses the trial functions he uses. So the solution that comes out, i.e. those simple PI = cos2[(με0/ħ)·t] and PII = sin2[(με0/ħ)·t] functions, makes sense—but, if you check it out, you’ll see the whole mathematical argument is rather complicated. That’s just how it is, I am afraid. 🙂
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.85513836145401, "perplexity": 670.411070976323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401643509.96/warc/CC-MAIN-20200929123413-20200929153413-00519.warc.gz"}
https://ondemand.euromoney.com/discover/glossary/uneconomic-growth
# Uneconomic Growth

Uneconomic growth is in some ways an oxymoronic concept but it starts to get to the heart of the notion that GDP growth is an umbrella concept that aggregates all output regardless of long-term consequences or cost, and that not all factors that contribute to growth are positive. Some positive growth inputs may have very negative long-term consequences. The concept of uneconomic growth is often used in an environmental context, where goods produced from exploiting the environment end up degrading the environment and, over time, end up costing more than their contributory value through the creation of waste and pollution back into the environment from which they were produced. Other examples of uneconomic growth include naked speculation that creates growth on paper but adds little economic value, especially where accelerated price appreciation turns to massive downside volatility. The concept of uneconomic value is also applied to themes such as post-war reconstruction spending or reconstruction following natural disasters.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9199787378311157, "perplexity": 2462.655028920231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00190.warc.gz"}
http://www.biotechnologyforbiofuels.com/content/6/1/5
Research

# Enzymatic lignocellulose hydrolysis: Improved cellulase productivity by insoluble solids recycling

Noah Weiss1, Johan Börjesson2, Lars Saaby Pedersen2 and Anne S Meyer1*

Author Affiliations

1 Center for Bioprocess Engineering, Department of Chemical and Biochemical Engineering, Technical University of Denmark (DTU), Lyngby, DK-2800 Kgs, Denmark

2 Novozymes A/S, Krogshøjvej 36, Bagsværd, DK-2880, Denmark

Biotechnology for Biofuels 2013, 6:5  doi:10.1186/1754-6834-6-5

Received: 21 July 2012 Accepted: 11 January 2013 Published: 21 January 2013

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

#### Background

It is necessary to develop efficient methods to produce renewable fuels from lignocellulosic biomass. One of the main challenges to the industrialization of lignocellulose conversion processes is the large amount of cellulase enzymes used for the hydrolysis of cellulose. One method for decreasing the amount of enzyme used is to recycle the enzymes. In this study, the recycle of enzymes associated with the insoluble solid fraction after the enzymatic hydrolysis of cellulose was investigated for pretreated corn stover under a variety of recycling conditions.

#### Results

It was found that a significant amount of cellulase activity could be recovered by recycling the insoluble biomass fraction, and the enzyme dosage could be decreased by 30% to achieve the same glucose yields under the most favorable conditions. Enzyme productivity (g glucose produced/g enzyme applied) increased between 30 and 50% by the recycling, depending on the reaction conditions. While increasing the amount of solids recycled increased process performance, the method’s applicability was limited by its positive correlation with increasing total solids concentrations, reaction volumes, and lignin content of the insoluble residue. However, increasing amounts of lignin rich residue during the recycle did not negatively impact glucose yields.

#### Conclusions

To take advantage of this effect, the amount of solids recycled should be maximized, based on a given process’s ability to deal with higher solids concentrations and volumes. Recycling of enzymes by recycling the insoluble solids fraction was thus shown to be an effective method to decrease enzyme usage, and research should be continued for its industrial application.

### Background

Limited oil resources and the devastating effects of climate change from the burning of fossil fuels make it necessary to identify sustainable alternative sources of energy for the future [1]. Many alternative, potentially sustainable sources of energy exist, however there are limited choices for the replacement of liquid fossil fuels. One of these possibilities is the production of fuels from lignocellulosic biomass. The first step in one of the more promising conversion pathways includes a biochemical conversion of the lignocellulosic material whereby the structural sugars present in lignocellulosic biomass are depolymerized into their monomeric constituents via an enzymatic hydrolysis step, providing a fermentable sugar stream rich in glucose [2].
Recent commercial cellulase preparations have been shown to be effective at hydrolyzing cellulose under industrially relevant conditions, however the high cost of enzymes remains a significant barrier to the economical production of ethanol from lignocellulosic biomass [3,4]. It is therefore necessary to reduce the amount of enzyme required for the enzymatic hydrolysis step. Current enzyme loadings for cellulose hydrolysis remain high compared to enzymatic starch hydrolysis. Hence, reducing the amount of enzyme needed or increasing the enzyme productivity in the process is a promising approach. This has been investigated by looking for methods to improve hydrolysis yields while lowering enzyme doses. A variety of methods have been suggested to achieve increased hydrolysis yields, including via surfactant addition [5], gradual substrate loading [6] or advanced reactor configurations coupled with product removal to avoid inhibition [7-9]. However, these methods have yet to be shown to be cost effective. One method that may reduce the amount of enzyme used and increase enzymatic productivity is to recycle the enzymes [10,11]. Conceptually, the assumption is that by recovering the active enzymes from the output of the enzymatic hydrolysis step, it is possible to decrease the amount of new enzyme which must be added to the hydrolysis, and therefore reduce the overall enzyme cost in the process. This method also lends itself to continuous processes, a necessity for industrial application. Recycling can also increase the enzyme-substrate interaction time, which can lead to an overall increase in enzyme conversion efficiency. Due to mass transfer limitations of the insoluble substrate, immobilization, the most commonly applied method of enzyme recycle, is not an option for cellulose hydrolysis, so other options for enzyme recycling must be developed. The cellulase enzymes currently employed in the hydrolysis of lignocellulosic biomass readily bind to cellulose, and those enzymes which are active on the cellulose polymer, specifically cellobiohydrolases (EC 3.2.1.91) and endoglucanases (EC 3.2.1.4), remain adsorbed to the cellulose polymer during hydrolysis [12-14]. Because the substrate is present as a solid, the cellulases stay attached to the insoluble solids fraction. For cellulosic biomass substrates that have been de-lignified during pretreatment, a significant amount of cellulase enzymes have been found to desorb from the solid substrate during the hydrolysis [13,15]. β-glucosidases (EC 3.2.1.21) have a soluble substrate, cellobiose, and a majority of the enzyme activity has been found to remain in the soluble (liquid) fraction during hydrolysis [13]. Cellulases also adsorb to lignin, and therefore a significant fraction can be found adsorbed to the lignin throughout the reaction, as lignin concentration remains constant [16]. Thus the enzymes are primarily associated with the insoluble solids fraction, though a significant amount of activity, especially β-glucosidase, can still be found in the liquid fraction. To successfully recover the enzymes, they must either be separated and collected from their associated fractions, or the enzyme containing fractions must be recycled into the subsequent hydrolyses. A significant amount of work has already been carried out investigating the recycle of cellulases [10,11,14,17-21].
The majority of cellulase recycling methods that have been reported involve either separating the enzymes from the solid or liquid phases, or recycling the solid and/or liquid phase directly. Approaches have been demonstrated where free enzymes were recovered from the liquid fraction by membrane filtration [11], or where fresh substrate was introduced to the liquor fraction and the enzymes were allowed to adsorb to the substrate before a further separation and hydrolysis step [18,19]. Similarly, the enzymes associated with the solids have been recovered by washing with excess volumes of buffer, sometimes with surfactants, to desorb the enzymes, which were then concentrated and added to the fresh substrate [19]. These methods have shown varying levels of success under controlled laboratory conditions and with specially prepared feedstocks. The recycling methods which rely on enzyme isolation and recovery have yet to be demonstrated under process-relevant conditions, and the effective scale-up of the separation processes used has not been shown.

A more straightforward approach to enzyme recycling is the direct recycle of the residual solid fraction into subsequent hydrolyses. The recycle of the solids fraction after solid-liquid separation has been demonstrated on a number of substrates and in combination with other recycle methods [15,19,22]. This approach has also been coupled with methods for recovering enzymes from the liquor fraction, primarily by limiting the exposure time of the liquor to fresh substrate at low temperatures [18,19]. Previous attempts have shown that a large portion of the enzymatic activity could be recovered using a combination of these methods. However, these studies relied on high enzyme loadings, which often resulted in complete hydrolysis of the cellulose, low total solids concentrations (2-5% TS), extended reaction times, and supplementation of each recycle round with β-glucosidase [14,15,18,19,22]. As well, the best results were demonstrated on pretreated materials with very low lignin content, which were produced using pretreatments specifically designed to remove lignin [18,19,22]. Recycle performance decreased significantly when applied to lignin-containing substrates created with more standard pretreatments [18].

While these studies have shown that enzyme recycling is technically possible under defined conditions and with ideal enzyme extraction methods, no work has yet demonstrated cellulase enzyme recycling under process-relevant conditions relying on unit operations which could be economically feasible for industrial production. Little consideration has been given to the process implications of enzyme recycling on an industrial scale, and to how various methods of enzyme separation or re-adsorption could be applied. Most recycle studies have attempted to maximize the amount of activity recycled, irrespective of process intensity, with the idea that recovery must approach 100% of initial activity to be industrially interesting. Compared to the current cellulose hydrolysis processing regimes for lignocellulosic biomass, which require fresh enzyme addition with each new batch of substrate, a significant fraction of enzyme activity could be recycled with minimal sample processing, and large decreases in enzyme cost could be achieved.
The objective of this study was to determine whether, by recycling the insoluble solids fraction, a significant amount of enzyme activity could be reused, thereby increasing overall product yields or decreasing the amount of enzyme needed to reach a given level of conversion. It was also desired to determine which process variables, such as solids washing and the fraction of solids recycled, had significant effects on process yields, and how the manipulation of these variables would impact the hydrolysis reaction conditions. In this study, the recycling of cellulase enzymes by recycling the insoluble solids residue present after hydrolysis was investigated and evaluated at conditions closer to industrial processing conditions than those of many previous studies. The efficacy of enzyme recycling by the proposed method was evaluated for a number of successive recycle scenarios, modifying several recycle conditions and measuring the changes in product formation. The data from these experiments were then used to develop a computational model to predict process parameters after a large number of recycles, thus giving an idea of steady state conditions. The influence of the lignin-rich residue on enzymatic hydrolysis was also directly evaluated.

### Results and discussion

#### Ability of recycled cellulases to hydrolyze freshly added cellulose

From the amount of glucose produced in the course of the reaction, it could be seen that significant amounts of glucose were produced from the fresh substrate when the insoluble solids were recycled (Figure 1). Glucose was produced during the second 72 hour hydrolysis round under all three conditions: resuspension of the residual solids in buffer, resuspension of the solids in buffer with additional enzyme, and addition of fresh substrate to the residual solids (Figure 1). The amount of glucose produced when fresh substrate was added to the residual solids (0.50 g) was significantly larger than when either buffer alone (0.16 g) or buffer and additional enzyme (0.34 g) were added (Figure 1). This increase in glucose production, above what was produced by simply resuspending the remaining solids in new buffer or with fresh enzyme, suggests that the enzymes associated with the insoluble fraction were capable of hydrolyzing cellulose from the fresh substrate.

Figure 1. Mass of glucose produced for three different treatments of insoluble hydrolysis residue at an initial substrate concentration of 15% TS. Bottom bars (blue) are the mass of glucose produced in the first 72 hours, and top bars (red) are the mass produced after solid-liquid separation and resuspension of the insoluble solids with either buffer alone, buffer and enzyme, or buffer with 3 g of PCS substrate. Values are reported as the average of triplicate experiments. Error bars represent ± one standard deviation.

#### Process variables' effects on recycle performance

In the first factorial experiment, the 72 hour glucose hydrolysis yield (recycle round 0) was 77% (w/w, adjusted for the hydration factor; see Glucose yield section). Glucose yields after the first recycle round (an additional 72 h) for each individual hydrolysis condition ranged between 30 and 65%, and between 2 and 77% after the fourth recycle round (Figure 2). Glucose yields decreased below the first hydrolysis round glucose yield (77%) for all conditions with each recycle.
Only in the highest response level condition (100% solids recycle, 34 mg enzyme product (EP)/g, washing of solids) did the glucose yield reverse its downward trend, increasing to 77% in the fourth recycle round. It is speculated that this may be due to a buildup of active enzyme in the recycled fraction. Samples with no extra enzyme addition followed an exponential decay of glucose yields with subsequent recycle rounds, while those where makeup enzyme was added decreased at a slower rate or stabilized after the second recycle round. Statistical analysis showed that increasing all three experimental factors had significant positive effects on glucose yields, with the largest effect from increasing enzyme application, followed by the fraction of insoluble solids recycled, and finally the washing of the solids, which had only a slightly positive effect. The multivariate statistical analysis gives an overall effect across all the paired conditions, including those with and without the washing step. This overall effect may thus override pairs for which there was no effect of washing, as in this case for the 50% solids recycle with no extra enzyme addition (50%, 0 mg EP/g, Figure 2).

Figure 2. Glucose yields from the factorial recycle experiment. 72 hr reaction time between recycles, 10% TS substrate concentration, 51.4 mg EP/g cellulose initial loading. Conditions varied by fraction of insoluble solids recycled, solids washing between recycles, and the amount of additional enzyme added with each recycle. Reported values are averages of duplicates; error bars represent ± one standard deviation of the experimental center point.

The model was found to be statistically significant (P<0.0001) for the prediction of all responses, and plots of actual versus predicted values had r-squared values between 0.97 and 0.99. The increase in glucose yield in recycle rounds 2-4 obtained with recycling of 100% of the washed solid residues and enzyme supplementation (34 mg EP/g cellulose) might be a result both of increased enzyme levels, due to the high supplementation and recycle levels, and of the removal during the washing step of glucose (a product inhibitor) and putative inhibitors resulting from the pretreatment. The results regarding washing of the substrate agree with data published by others. Washing has long been known to remove acid and any residual substances that inhibit cellulolytic enzymes from Trichoderma sp. [23]. More recently, Xue et al. [24] showed that a washing stage (in their case with buffer), together with addition of a surfactant during solids recycling, improved the hydrolysis efficiency with enzyme recycling on pretreated softwood and hardwood undergoing enzymatic hydrolysis [24].

Total glucose yields ranged between 28 and 93% of the total cellulose added during the experiment (Figure 3). Only two conditions had total glucose yields above the initial 72-hour hydrolysis condition, and these were the conditions with the highest levels of makeup enzyme addition and insoluble solids recycle. For these conditions the makeup enzyme loading was 34 mg EP/g cellulose, and the fraction of solids recycled was 100%. Thus significantly higher total glucose yields were obtained even though 33% less enzyme was applied in the recycle rounds than in the initial 72 h hydrolysis (34 mg is 67% of the 51 mg applied initially), representing a significant improvement in process performance.
When analyzed by least squares fit, the amount of makeup enzyme added and the fraction of insoluble solids recycled had significant (P<0.05) positive effects on the total glucose yield, with the former having the largest impact on total yields. No significant two-factor interactions were identified.

Figure 3. Total glucose yields over the course of 4 solids recycles for the factorial experiment. 72 hr reaction time between recycles, initial substrate loading of 10% TS, and initial enzyme loading of 51.4 mg EP/g cellulose. Averages of duplicates with standard deviation of 4 center points. Error bars represent ± one standard deviation. The horizontal red line represents the glucose yield at 51.4 mg EP/g cellulose at 72 hours with no recycle, the 'break even point'. Roman letters a-g indicate significantly different values by ANOVA (95% confidence intervals, pooled standard deviation 0.799 (N=22)).

Similar trends were observed for the total mass of glucose produced in each recycle round. Because of the different levels of insoluble solids recycle, there were different amounts of cellulose present at the beginning of each hydrolysis. Experimental conditions with makeup enzyme added showed a relatively constant amount of glucose produced in each recycle round past the first round (between 0.53 and 0.71 g), and under the conditions with the highest makeup enzyme loading and solids recycle, glucose production increased with each subsequent recycle, from 0.74 g to 1.02 g. This observation suggests that a constant amount of glucose production from recycle to recycle was obtained under recycle conditions with makeup enzyme loadings of 34 mg EP/g cellulose, even though the calculated glucose yields decreased for each individual recycle round (Figure 2).

Enzyme productivities ranged from 0.12 to 0.27 g glucose/mg enzyme protein applied (Figure 4). In comparison, the 72 hour hydrolysis with an enzyme loading of 51.4 mg EP/g cellulose had an enzymatic productivity of 0.097 g glucose/mg enzyme protein, and a 95% glucose yield at the same loading would have a productivity of 0.11 g glucose/mg enzyme protein. All recycle conditions increased the total enzymatic productivity within the system, and in the samples with the highest glucose yields, the productivity increased 50% above the non-recycle scenario. Productivity decreased as the amount of makeup enzyme applied increased, but increased with increasing insoluble solids recycle. This was most likely due to the larger amounts of active enzyme and cellulose present at higher fractions of insoluble solids recycle, which left more cellulose and enzymes available for a longer time. Productivity did not correlate with glucose yields, and therefore on its own was not a good indicator of best process performance; however, enzyme recycling showed significant increases in productivity and enzyme efficiency for all recycle conditions.

Figure 4. Enzyme productivities of the factorial recycle experiment samples. Based on total glucose produced and the total mass of enzyme protein added over the course of the experiment. Averages of duplicate experiments; error bars represent ± one standard deviation of the center point. The red line represents the productivity for the 72 hour hydrolysis with a 51.4 mg EP/g cellulose loading. Roman letters a-f indicate significantly different values by ANOVA (95% confidence intervals, pooled standard deviation 0.0045 (N=22)).
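The least-squares factorial analysis described above was performed in SAS JMP (see Methods); for readers who want to replay this kind of analysis, it has a direct analogue in Python with statsmodels. The table below is a synthetic placeholder showing only the layout of the design (two continuous factors plus the categorical washing factor); the response values are invented solely so the example runs, and are not the study's measurements.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder factorial table: same factor layout as the experiment,
# with made-up responses purely for illustration.
df = pd.DataFrame({
    "recycle_pct": [50, 50, 50, 50, 100, 100, 100, 100, 75, 75],
    "makeup_ep":   [0, 34, 0, 34, 0, 34, 0, 34, 17, 17],
    "washed":      ["no", "no", "yes", "yes", "no", "no", "yes", "yes", "no", "yes"],
    "total_yield": [40.0, 70.0, 41.0, 72.0, 55.0, 88.0, 57.0, 93.0, 66.0, 67.0],
})

# Main-effects least-squares model; C() marks washing as categorical.
fit = smf.ols("total_yield ~ recycle_pct + makeup_ep + C(washed)", data=df).fit()
print(fit.summary())  # factor coefficients, P-values, and R-squared
```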
The most significant factors affecting the performance of enzyme recycling were the amount of additional enzyme applied and the fraction of solids recycled. The washing of the solids was determined not to have an overall significantly positive effect on recycle performance. Initial glucose concentrations varied depending on how much of the non-washed residue was recycled. At the beginning of the recycle experiment, initial glucose concentrations were ~5 g/L from the pretreated slurry, and in some cases reached 20 g/L at the beginning of the 4th recycle round for conditions with 100% recycle (data not shown). Washing resulted in lower glucose concentrations in the hydrolysate, which is not desirable for industrial scenarios. Coupling the non-significant impact of washing on total glucose yields with the expenses that would be incurred by including a washing step in an industrial process, it was decided that solids washing was not beneficial to the recycle process. Therefore, the investigation of washing as an alternative for improving recycle performance was not continued.

#### Recycle at higher solids concentrations with response surface methodologies

When the initial TS concentration (w/w) was increased from 10% to 15%, the glucose yield achieved after the initial 72 hours decreased from 77% to 69% w/w (Figure 5). All of the experimental conditions showed a decrease in theoretical glucose yields after the first recycle, but many conditions produced relatively constant yields after the initial drop (Figure 5).

Figure 5. Glucose yields as a fraction of total theoretical glucose production after each recycle round on PCS. 72 hr reaction time between recycles, initial 15% TS loading, and 51.4 mg EP/g cellulose initial enzyme loading. Reaction conditions varied by the fraction of insoluble solids recycled (%) and the amount of makeup enzyme added per gram cellulose added with each recycle (mg EP/g). Values are averages of duplicate experiments. Error bars are ± one standard deviation of 6 center point experiments.

It is well documented that enzymatic cellulose hydrolysis of lignocellulosic substrates exhibits a negative relationship between glucose yields and substrate concentration, whereas there is a positive relationship between the final glucose concentration and the substrate load [6]. This paradox has been ascribed to non-productive enzyme adsorption to the solid substrate [2,16,21], but may also be a result of product inhibition of the cellulases by glucose, in particular when the glucose product levels rise above 10 g/L [8]. The highest yields after the third recycle were obtained under the conditions with the highest enzyme dosage and fraction of solids recycled. The fractional decreases in yields with each recycle were similar to those of the 10% TS experiments for the same makeup enzyme loading and fraction of solids recycled. The least squares fit model applied to the data was found to be statistically significant (P<0.0001) for the prediction of all responses, and plots of actual versus predicted values had r-squared values between 0.94 and 0.98. Both factors were again found to have a significant effect on all measured responses. Total glucose yields after 3 recycle rounds ranged from 62% to 82% (Figure 6). A number of conditions performed better than the 72 h hydrolysis yields, with the highest yield from the condition with 95.6% insoluble solids recycle and makeup enzyme of 42 mg EP/g cellulose. The center point of the experiment matched the yields of the 72 h hydrolysis.
Thus, by recycling only 85% of the insoluble solids, it was possible to decrease the enzyme loading by 33% and maintain the same level of cellulose hydrolysis. The amount of glucose produced during each recycle round stayed relatively constant for most experimental conditions, or increased from 1.12 g to 1.45 g for the most favorable conditions (data not shown), and relative trends were similar to those for glucose yields.

Figure 6. Total glucose yields, as a fraction of theoretical maximum glucose produced, for 4 rounds of hydrolysis for the response surface experiment on PCS substrate. 72 hr reaction time between recycles, initial 15% TS loading, and 51.4 mg EP/g cellulose initial enzyme loading. Reaction conditions varied by the fraction of insoluble solids recycled (%) and the amount of makeup enzyme product added per gram cellulose added with each recycle (mg EP/g). Values are averages of duplicate experiments. Error bars are ± one standard deviation of 6 center point experiments. The red bar represents the glucose yield at 72 hours with no recycle and 51.4 mg EP/g cellulose. Roman letters a-d indicate significantly different values by ANOVA (95% confidence intervals, pooled standard deviation 1.722 (N=22)).

The productivity for the 72 h hydrolysis was 0.086 g glucose/mg enzyme protein applied. All recycle conditions had higher productivities than the 72 h hydrolysis condition, and these productivities were primarily inversely related to the amount of makeup enzyme used (Figure 7). Productivities ranged from 0.11 to 0.13 g glucose/mg enzyme protein. This again showed a significant increase in enzyme productivity and utilization for cellulose hydrolysis. Thus, trends which were present at 10% TS were also observed at the higher solids concentration. These results point to maximizing both the makeup enzyme loading and the amount of solids recycled to increase recycling efficiency and boost glucose yields. However, it is not possible to recycle 100% of the insoluble stream on an industrial scale, as this would lead to an ever-growing reactor volume and reaction-prohibitive solids concentrations, and increasing the enzyme loading significantly would increase operating costs. Thus these results must be weighed against other processing parameters to determine optimal operating conditions for an industrial recycle scenario.

Figure 7. Enzyme productivity over the response surface experiment on PCS2. 72 hr reaction time between recycles, initial 15% TS substrate loading, and 51.4 mg EP/g cellulose initial enzyme loading. Reaction conditions varied by the fraction of insoluble solids recycled (%) and the amount of makeup enzyme added per gram cellulose added with each recycle (mg EP/g). Values are averages of duplicate experiments. Error bars are ± one standard deviation of 6 center point experiments. The red line represents the productivity of the 72 hour hydrolysis with no recycle and 51.4 mg EP/g cellulose. Roman letters a-d indicate significantly different values by ANOVA (95% confidence intervals, pooled standard deviation 0.0027 (N=22)).

#### Modeling of recycle reaction conditions

Based on the performance and gravimetric data from the recycle experiments, it was possible to construct a mathematical model to determine process parameters for an extended number of successive recycle rounds. Of specific interest were the effect of recycling on the TS content of the total reaction mass (Figure 8) and the composition of the insoluble solids.
The TS content increased significantly in all of the recycle scenarios with subsequent recycles, and the TS content at the 10th recycle ranged from 19 to 23% TS (data not shown). Solids concentrations can be a limiting factor in lignocellulosic processes, primarily due to difficulties in mixing, and it is therefore important to consider how any new recycle method would affect this variable [3]. Solids recycling had a significant impact on the TS content, and the negative impacts of increasing the solids concentration must be weighed against any improvement in enzyme performance due to the recycling. However, the rheological properties of lignocellulosic biomass have been found to change depending on the degree to which the cellulose has been hydrolyzed [6], and thus the mix of pretreated biomass with recycle residue could be expected to exhibit different mixing characteristics than pretreated biomass alone.

Figure 8. Total solids content (%TS) of hydrolysis reactions with progressive recycles for different solids recycle and enzyme loading scenarios. Experimental data used in the model is from the response surface experiment on PCS2 with an initial solids loading of 15% TS, 3 total recycles. Data shown is for selected experimental conditions.

The lignin content of the insoluble substrate was shown in the model to increase significantly with subsequent recycles, from a starting lignin content of 27% to between 39 and 53% of the total insoluble solids (Figure 9). The increase in lignin content depended primarily on the fraction of solids recycled and on the enzyme loading. Because more cellulose was hydrolyzed at increased enzyme loadings, the lignin content in the insoluble solids fraction remaining after each hydrolysis increased proportionally, and thus the residual insoluble material had a higher lignin content when recycled. In this model the lignin was regarded as a non-hydrolyzing material and was therefore assumed to stay in the solid fraction.

Figure 9. Predicted lignin content of insoluble solids for successive recycles of insoluble solids, for different fractions of solids recycle and makeup enzyme addition. Experimental data used in the model is from the response surface experiment on PCS2 with an initial solids loading of 15% TS, 3 total recycles. Data shown is for selected experimental conditions.

Trends similar to those exhibited by the total solids concentrations in Figure 8 were seen for the total mass of the reaction, which was found to increase as a function of the fraction of solids recycled. From an initial mass of 20 g, the total mass of the reaction with a 70% solids recycle increased to 33 g by the 10th recycle (data not shown). This significant increase in reaction mass, and with it reaction volume, implies that any recycle method would require significantly larger equipment for the hydrolysis, and thus increased capital and operating costs. The cellulose content of the insoluble substrate was found to decrease with subsequent recycles, trending inversely to its lignin content: it decreased from an initial 63% to between 23 and 44% by the 10th modeled recycle round, depending on the fraction of insoluble solids recycled and the enzyme dosage.
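The stepwise mass balance behind these forecasts (detailed under Mathematical modeling and assumptions in the Methods) can be sketched in a few lines of Python. This is a minimal reading of the model, not the authors' code: it assumes a constant fractional cellulose conversion per round, a constant fresh-slurry input, lignin as the only other (non-hydrolyzing) solid, and a fixed moisture content in the recycled cake; the conversion, recycle fraction, and moisture values are illustrative assumptions.

```python
fresh_mass, ts0 = 20.0, 0.15   # g fresh slurry per round and its total solids fraction
cell0, lig0 = 0.63, 0.27       # cellulose/lignin fractions of the fresh solids
conv, recycle = 0.65, 0.85     # assumed conversion per round and solids recycle fraction
moisture = 2.0                 # g water carried per g recycled dry solids (assumed)

cell_r = lig_r = 0.0           # dry recycled solids entering round 0
for rnd in range(11):
    mass = fresh_mass + (cell_r + lig_r) * (1.0 + moisture)  # total reaction mass
    cell = cell_r + fresh_mass * ts0 * cell0                 # cellulose charged this round
    lig = lig_r + fresh_mass * ts0 * lig0                    # lignin (non-hydrolyzing)
    ts = (cell + lig) / mass                                 # approximate %TS
    print(f"round {rnd:2d}: mass {mass:5.1f} g, TS {ts:6.1%}, "
          f"lignin {lig / (cell + lig):6.1%} of insolubles")
    cell *= 1.0 - conv                                       # hydrolysis consumes cellulose
    cell_r, lig_r = cell * recycle, lig * recycle            # solid-liquid split, then recycle
```

With these placeholder values the loop reproduces the qualitative trends of Figures 8 and 9: total mass and %TS creep upward over successive rounds, and the lignin share of the insolubles rises toward a plateau set by the recycle fraction; the exact numbers depend entirely on the assumed parameters.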
#### Effect of increased concentrations of lignin residue on cellulose hydrolysis

Based on the trends suggested by the model, which show an increase of lignin in the insoluble fraction, it was desirable to determine the direct effect of this phenomenon on hydrolysis performance. It was found that under low-solids hydrolysis conditions, increasing the fraction of lignin in the insoluble solids fraction did not have a negative effect on glucose yields (Figure 10). Glucose yields were found to increase slightly in the presence of both non-deactivated and deactivated lignin-rich hydrolysis residue, although to a lesser extent for the latter. These data differ significantly from what would have been expected, as lignin is known to nonspecifically bind cellulase enzymes and therefore to inhibit cellulose hydrolysis [13,16]. This aspect has also been considered an issue in relation to enzyme recycling via recycling of lignocellulose solids [10]. The observed result may be related to the specific nature of the lignin-rich hydrolysis residue used to increase the lignin content of the reaction. The lignin residue had previously been exposed to cellulase enzymes, and therefore the binding sites on the lignin may already have been occupied by enzymes from the previous hydrolysis; the substrate moreover had a particularly large surface area since it had been milled (see Feedstocks and chemicals section). The method used to produce the lignin-rich residue was identical to the standard enzymatic hydrolysis used in the study, and therefore it can be assumed that the recycled insoluble solids residue behaves similarly when it is recycled. It has already been shown in this study that active enzymes remain associated with the insoluble fraction, and therefore it is also feasible that inactivated enzymes might remain associated with the solids and occupy lignin-protein binding sites. The lack of a decrease in glucose yields with the introduction of the enzyme-deactivated lignin-rich residues may analogously be due to the non-specific enzyme binding sites on the lignin residue being occupied by deactivated enzyme protein residues. The slight increase observed for the higher fractions of lignin residue (Figure 10) could be due to less glucose product inhibition of the enzymes as a result of the washing, as well as to hydrolysis of the 5% residual cellulose accompanying the material (see Lignin residue hydrolysis response section). Nevertheless, these results show that the increase in the lignin content of the insoluble fraction during recycling does not have a negative effect on glucose yields.

Figure 10. Theoretical glucose yields for 72 hr hydrolysis reactions on assay PCS material. Initial total solids concentration of 5-7% w/w. Enzyme loading of 34 mg EP/g cellulose in assay substrate. Lignin residue added was approximately 95% lignin and ash, and 5% cellulose. "Fraction of lignin residue of total insoluble solids" refers to the percentage of the initial total insoluble solids which was the lignin residue (total insoluble solids were added lignin residue and unhydrolyzed PCS). Residue was deactivated by heating to 100°C for 10 minutes in a block heater. Values are averages of duplicate experiments. Error bars represent ± one standard deviation.

#### Evaluation of solids recycle for enzyme recovery and yield improvement

The recycle experiments demonstrated that a significant portion of the enzyme activity could be recycled with the insoluble solids fraction after each hydrolysis.
Total enzyme productivity was increased for all of the recycle scenarios, showing that the enzymes were better utilized during the reaction. The insoluble solids recycle method also increased enzyme-substrate contact time, as unreacted substrate was returned to the reaction. However, the amount of enzyme activity which was successfully recycled was significantly less than the total amount of enzyme initially added to the hydrolysis. This significant loss in overall enzyme activity may be due to thermal deactivation and precipitation of the enzyme, increased nonspecific binding by fresh lignin, and/or loss of soluble enzyme in the liquor fraction. Specifically, a loss of β-glucosidase activity, which has been shown to be present in significant quantities in the liquid phase during hydrolysis [13], could upset specific cellulase activity ratios and lead to less than optimal performance of the enzyme preparation. Potentially, enzyme products or makeup enzyme products for biomass hydrolysis processes that include recycling of the solid substrate could be designed to match the specific enzyme activities that are lost during the recycling scheme.

The recycling of the insoluble solids had large impacts on the physical and compositional parameters of the hydrolysis. Specifically, the recycling led to significantly increased total solids concentrations and total reaction masses, and to an increase of lignin in the solids composition. These increases limit the extent to which solids recycling can be applied in an industrial process, as a significant solids purge would need to be incorporated to achieve steady state and to maintain acceptable operating conditions. As the ability to recycle enzyme activity was related directly to the amount of solids which could be recycled, the ability to recycle the enzymes would be effectively limited by the need for this solids purge. Determining the optimal tradeoff between recycling enzyme activity and increased operating solids concentrations is highly process dependent, and is beyond the scope of this investigation. The increase in solids concentration due to recycling could also lead to decreases in final glucose concentrations. If it is assumed that a plant operates at the highest possible solids concentration, the addition of recycling would either necessitate increasing the maximum operating solids concentration, which would require modifying process equipment, or leaving the solids concentration the same, which would lead to a decrease in starting cellulose content. However, the upper working limit of the solids concentration in a plant may also be limited by enzyme inhibition and hydrolysate toxicity to the fermenting organism, so there may be room for adjustment in a specific process to incorporate solids recycling. Interestingly, the increase of lignin in the insoluble fractions did not negatively impact the hydrolysis performance. With respect to washing of the solids, we did observe a tendency toward improved glucose yields, and thus better enzyme productivity, when the recycled solids were washed (Figure 2).

These impacts of recycling could necessitate significant process modifications if enzyme recycling were to be implemented in an industrial process. Most importantly, in order to recycle some or all of the residual solids, tank size would have to increase, and different methods of mixing might have to be applied to the higher-solids slurry.
In addition, although substrate washing is currently not considered feasible for industrial lignocellulosic reactions, several issues may also have to be considered in relation to washing of the solids during recycling, regardless of whether the washing includes specific variations such as recovery of the enzymes associated with the solids followed by concentration and addition to the fresh substrate [19], or addition of surfactant as part of the solids washing and recycling [24]. In addition, using ammonia-pretreated corncob as the cellulosic substrate, it has recently been affirmed that cellulase adsorption kinetics on the substrate depend on the pH, and that adsorption seems to be maximal around pH 4.8 [25]. Regardless of whether these latter results are considered or not, the cost of solid-liquid separation to remove the insoluble solids would have to be added, and recycling enzymes would thus lead to significant increases in the invested capital costs of a plant. It remains to be seen whether the decreased enzyme usage could offset the increased capital cost needed for the extra processing equipment and tank space.

Enzyme dosage could be decreased by 30% to achieve the same amount of cellulose hydrolysis at steady state under the best conditions of the solids recycle experiments. This is a significant decrease in enzyme usage. It was also shown that solids recycling coupled with higher dosages of enzyme could improve yields beyond the no-recycle baseline. It is worth noting, though, that the initial enzyme loadings for this study were set so that glucose yields would not approach theoretical maximum levels (below 85% total cellulose conversion), so that the effects of enzyme recycling could be observed in both the positive and negative directions relative to the no-recycle control condition. Thus specific recycle performance results may vary based on the desired level of cellulose hydrolysis for a given process. As well, these varying results may impact process economics to the extent that solids recycling may or may not be economically beneficial. Enzyme recycling could also be coupled with other methods for improving hydrolysis performance (enzyme dosage, reaction time), which might enhance recycle performance. The solids recycle method was shown to be an effective way to decrease enzyme usage; however, its specific application in a continuous industrial process has yet to be proven, and may vary greatly depending on the process applied. Enzyme recycling can be effective in decreasing enzyme usage, but its overall benefit to a specific process must be tested at a larger scale.

### Conclusions

It was possible to effectively recycle a significant amount of the enzymes in Cellic CTec2 by recycling the insoluble solids fraction after enzymatic hydrolysis of pretreated corn stover. Enzyme usage was decreased by 30% while maintaining glucose yields when a majority of the insoluble material was recycled. Similar trends in the recyclability of the enzyme were seen at two total solids concentrations, suggesting that this process could be replicated at even higher total solids operating conditions. As well, specific enzyme productivity increased for all recycle scenarios. This represents a significant improvement in process performance and a significant reduction in the enzyme requirement for lignocellulose hydrolysis.
However, by recycling the insoluble fraction, the total solids concentrations, reaction masses, and the amount of lignin in the reaction increased significantly, and these effects may therefore temper the improved process performance and economics. This balance will be highly dependent on the specific industrial process and operating costs. Elevated levels of lignin residue were not found to negatively affect enzyme performance. Enzyme recycling remains a viable method for decreasing enzyme requirements and operating costs for renewable fuel production, and its industrial application should be further investigated.

### Materials and methods

#### Feedstocks and chemicals

Dilute acid pretreated corn stover (PCS) was used as the model substrate in this study and was kindly provided by the National Renewable Energy Laboratory (NREL; Golden, Colorado). The corn stover was pretreated using a dilute sulfuric acid pretreatment method in a continuous pilot-scale pretreatment reactor [26]. Two batches of PCS were used for the study, and their compositions and gravimetric data are given in Table 1. Batch 1 (PCS1) was dewatered in a hydraulic press (0–900 psi, 1–5 minutes) to remove a majority of the soluble sugars, and batch 2 (PCS2) was washed thoroughly and milled to allow for accurate pipetting of the biomass in small-scale experiments. All of the reported experiments used batch 1 material except when evaluating the effect of increased recycled lignin-rich material (Figure 10). Feedstock composition was determined using NREL's standard Laboratory Analytical Procedures (LAP) for the determination of the composition of biomass [27]. All chemicals used in the study were reagent grade and purchased from Sigma-Aldrich (St. Louis, MO).

Table 1. Compositional analysis of lignocellulosic substrates used in the study

#### Cellulase enzyme

Enzymatic hydrolysis was carried out using Cellic CTec2, a commercial enzyme preparation kindly provided by Novozymes (Bagsværd, Denmark). Enzyme was dosed on the basis of mass of enzyme product (EP) per gram of glucan (cellulose) in the substrate.

#### Enzymatic hydrolysis procedure

Enzymatic hydrolysis was carried out, except where noted, with an initial total reaction mass of 20 g and an initial biomass concentration of 10–15% TS (w/w) on a dry weight basis. Samples were prepared by adding deionized water (Millipore® Milli-Q, USA), sodium citrate buffer (50 mM final citrate buffer concentration), and sodium hydroxide (4 N) for pH adjustment to the correct amount of biomass substrate, loaded on a dry weight basis. The initial pH was between 5.1 and 5.3 for all samples. Three stainless steel 4 mm ball bearings were added to each sample to facilitate mixing during the reaction. All reactions were carried out at 50°C (± 0.5°C) for 72 hours in a rotisserie-style incubator (Combi-D24 hybridization incubator, FINEPCR, Korea) rotating end over end at approximately 30 rpm. The reactions were carried out in batch mode for each recycle round.

#### Recycle procedure

Enzyme recycling was carried out by recycling the insoluble solid residue present at the end of each hydrolysis period. For each recycle round, the insoluble solids residue from the previous batch hydrolysis was mixed thoroughly with 20 g of fresh substrate slurry at the original initial %TS for the experiment. The recycle followed one of two similar methods, based on whether or not the solids were washed between recycles (flow chart, Additional file 1).
After hydrolysis for 72 hours, the samples were centrifuged at 4000 relative centrifugal force (RCF) for 20 minutes at 20°C to separate the solid and liquid fractions. The supernatant liquor was decanted and its glucose concentration was measured. If the solids were to be washed according to the experimental design, 35 ml of deionized water was mixed thoroughly with the insoluble solids, the mixture was centrifuged again, and the wash liquid was decanted. 20 g of fresh substrate slurry, prepared in the same manner as for the initial hydrolysis, was mixed with the desired amount of insoluble solids to be recycled. Additional Cellic CTec2 was added to the mixture, dosed based on the amount of cellulose added to the experiment from the fresh substrate slurry. Glucose concentration was measured in the new mixture at time zero to account for glucose carried over from the previous hydrolysis. The process was then repeated for each subsequent recycle round.

Additional file 1. Process flow diagram for the recycle procedure for insoluble solids recycle.

#### Resuspension of residual solids experiment

The standard enzymatic hydrolysis was carried out for 72 hours at an initial 15% TS. The samples were then centrifuged and the liquid fraction decanted, and either citrate buffer, buffer with 34 mg EP/g fresh cellulose (calculated based on the addition of 20 g fresh substrate), or 20 g fresh substrate at 15% TS with no additional enzyme was added to the samples. Samples were then incubated at 50°C for a further 72 hours, after which glucose concentrations were measured. Glucose concentration was measured in the liquor fractions after the initial 72 hours, after resuspension of the solids residue, and at the end of the reaction. Experiments were carried out in triplicate.

#### Factorial recycle experiment

A full factorial statistical design with two continuous variables and one categorical variable was used to determine the effects of the recycle conditions on glucose production. The amount of insoluble solid material recycled into the subsequent hydrolysis was varied between 50% and 100% (w/w). The amount of makeup enzyme added was varied between 0 and 34 mg EP/g fresh cellulose added with the new slurry in each recycle round. The categorical variable was the washing of the solids. Additional enzyme was applied only on the basis of the amount of new cellulose added, not on the cellulose in the recycled insoluble solids. This experiment was carried out on batch 1 PCS at an initial 10% TS. A total of 4 recycle rounds were carried out, for a total reaction time of 360 hours. The initial enzyme loading at the start of the experiment for all conditions was 51.4 mg EP/g cellulose. The experiment was carried out in one batch with duplicate experiments, with the center point of the experimental design repeated 6 times. The data were statistically analyzed using least squares fit modeling and response surface fitting to determine the significance and effects of each factor. Experiments were designed and analyzed using SAS JMP software (SAS Inc., Cary, North Carolina). Significance was defined as probability values (P values) below 0.05 for type I errors.

#### Response surface recycle experiment with higher solids concentrations

The initial substrate solids concentration was increased to 15% TS for the second round of statistical experiments.
The experiment was designed as a response surface central composite design to determine the impact of the fraction of insoluble solids recycled, varied from 70 to 96%, and the additional enzyme loading, varied between 23 and 46 mg EP/g fresh cellulose. Three recycle rounds were carried out for this experiment, for a total reaction time of 288 hours. The initial enzyme loading for all conditions was 51.4 mg EP/g cellulose. The center point was repeated 6 times, and the rest of the experimental conditions were carried out in duplicate experiments.

#### Lignin residue hydrolysis response

Lignin-rich hydrolysis residue was produced by hydrolyzing batch 1 PCS with 51.4 mg EP/g cellulose for 144 hrs at 10% TS. This resulted in a residual insoluble solids material that consisted of 95% lignin and 5% residual cellulose. The material was thoroughly washed before addition. The lignin material contained residual enzyme activity, and it was applied both before and after a deactivation step (10 minutes at 100°C in a block heater (VWR, USA)). A volume of 800 μl of batch 2 PCS at 6.25% TS in citrate buffer (50 mM) was added to 2.5 ml reaction tubes. The lignin residue was added in varying amounts to the substrate (0-60% of total solids w/w). The enzyme loading was 34 mg EP/g cellulose based on the cellulose in the substrate. The tubes were placed in a tabletop shaker (Thermomixer Comfort, Eppendorf, Hamburg) temperature controlled at 50°C and mixed at 1300 rpm. Glucose concentrations were measured after 72 hours. Experiments were carried out in duplicate.

#### BCA protein analysis

The protein concentration in the enzyme preparation was measured using the bicinchoninic acid (BCA) assay method, which has been described previously [28]. A BCA assay kit from Thermo Scientific (Waltham, USA) was used for the analysis.

#### Sugars and dry matter analysis

Sugar concentrations were measured using high performance liquid chromatography (HPLC). The HPLC used was a Waters 2695 with a Bio-Rad Aminex HPX-87H column (Waters Corporation, Milford, USA). The eluent was 5 mM sulfuric acid at a flow rate of 0.5 ml/min. Percent solids (%TS) measurements were made using a 105°C oven (Memmert, Germany) method according to standard procedures developed at NREL [27].

#### Glucose yield

Glucose yields, Yglucose (equation 1, Table 2), were calculated by measuring the glucose concentration in the liquor fraction after each hydrolysis round and estimating the liquor volume after hydrolysis to determine the mass of glucose produced [29]. This was compared to the total cellulose content of the substrate (modified by a hydration factor of 1.11) while discounting any soluble glucose present at the start of the reaction. Glucose concentration was thus measured in the liquid fraction at the beginning of each recycle round (Cgluc initial), and this concentration was subtracted from the final concentration of glucose (Cgluc final) to avoid double counting (equation 1, Table 2).

Table 2. Equations used for calculating glucose yield, Yglucose (% g glucose/g cellulose) (equation 1) and enzyme productivity, Penztot (g glucose/mg enzyme protein) (equation 2):

$$Y_{glucose} = \frac{\left(C_{gluc\,final} - C_{gluc\,initial}\right) V_{liquor}}{1.11\, m_{cellulose}} \times 100 \qquad (1)$$

$$P_{enz\,tot} = \frac{\sum_{rounds} m_{glucose}}{\sum_{rounds} m_{enzyme\,protein}} \qquad (2)$$

Total glucose yield was calculated based on the total cellulose added during the course of the whole experiment and the mass of glucose produced over that same period. This reflected the glucose yield for the entire experiment, and controlled for varying levels of solids recycle and glucose removal during the recycles.
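As a quick illustration, the two yield and productivity calculations can be written directly in Python; the function and variable names here are of our choosing, not from the study.

```python
def glucose_yield(c_final, c_initial, v_liquor, m_cellulose):
    """Equation 1: percent glucose yield for one hydrolysis round.

    c_final, c_initial: liquor glucose concentrations (g/L) at the end and
    start of the round; v_liquor: estimated liquor volume (L); m_cellulose:
    dry cellulose charged (g). The hydration factor 1.11 converts cellulose
    (anhydroglucose, 162 g/mol) to its free-glucose equivalent (180 g/mol).
    """
    return 100.0 * (c_final - c_initial) * v_liquor / (1.11 * m_cellulose)


def enzyme_productivity(glucose_g_per_round, enzyme_protein_mg_per_round):
    """Equation 2: g glucose produced per mg enzyme protein, over all rounds."""
    return sum(glucose_g_per_round) / sum(enzyme_protein_mg_per_round)
```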
#### Enzyme productivity

The enzyme efficiency, or productivity, Penztot, as it is referred to here, was calculated as a classic biocatalytic productivity term: the sum of all glucose produced during the course of the entire experiment divided by the total mass of enzyme protein added during the experiment, with units of g glucose/mg enzyme protein, as shown in equation 2, Table 2.

#### Mathematical modeling and assumptions

A mathematical model was developed to predict reaction parameters under different recycle scenarios for extended recycle rounds beyond the limit of the experiments. The model was a mass balance of the recycle experiments, and relied on batch kinetics and stepwise computation of forecast results. Experimental results from the recycle experiments were extrapolated to predict the composition of the substrate, the total solids concentration, and the total reaction mass. This was done using calculations based on a mass and component balance of the system. It was assumed that the final glucose yields and sugar concentrations remained constant from the final experimental hydrolysis, that the efficiency of the solid-liquid separation step remained constant, and that the input mass and %TS of the fresh substrate were constant throughout the additional recycle rounds. The recycle scenario was modeled as a series of batch reactions with solids recycling taking place at the end of an unspecified reaction time. Calculations for the computation of selected forecast data are given in Additional file 1. Model sensitivity was not calculated, but the differences in the experimental data provided for significant differences in the model outcomes.

### Abbreviations

BCA: Bicinchoninic acid; PCS: Pretreated corn stover; %TS: Percent total solids; EP: Enzyme product; NREL: National Renewable Energy Laboratory; RCF: Relative centrifugal force.

### Competing interests

Drs. Johan Börjesson and Lars S. Pedersen are employed at Novozymes A/S, a company that sells enzymes for lignocellulose processing. The authors declare that they have no other competing interests.

### Authors' contributions

NW and JB conceived the study. NW designed experiments, carried out the laboratory work, and wrote the manuscript. JB provided supervision and research direction for the experimental work, as well as editing the manuscript. LSP provided project supervision, helped with statistical design and analysis, and edited the manuscript. AM provided academic supervision and edited the manuscript. All authors contributed intellectually via scientific discussions during the work and have read and approved the final manuscript.

### Acknowledgements

We would like to thank Henrik Terkildsen, Birthe Hauerbach Sørensen, Pernille Brice Larsen, Mette Pontoppidan Rasmussen, and Ileana Barroso Alvarez for their analytical and technical help. The funding for this project was provided by Novozymes A/S in conjunction with DTU.

### References

1. Edenhofer OR: Summary for Policy Makers. An Intergovernmental Panel on Climate Change Special Report on Renewable Energy Sources and Climate Change Mitigation. Cambridge, UK: Cambridge University Press; 2011.
2. Jørgensen H, Kristensen JB, Felby C: Enzymatic conversion of lignocellulose into fermentable sugars: challenges and opportunities. Biofpr 2007, 1:119-134.
3. Aden A, Faust T: Technoeconomic analysis of the dilute sulfuric acid and enzymatic hydrolysis process for the conversion of corn stover to ethanol. Cellulose 2009, 16:535-545.
4. Morales-Rodriguez R, Meyer AS, Gernaey KV, Sin G: A framework for model-based optimization of bioprocesses under uncertainty: Lignocellulosic ethanol production case. Comput Chem Eng 2012, 42:115-129.
5. Börjesson J, Engqvist M, Sipos B, Tjerneld F: Effect of poly(ethylene glycol) on enzymatic hydrolysis and adsorption of cellulase enzymes to pretreated lignocellulose. Enzyme Microb Technol 2007, 41:186-195.
6. Rosgaard L, Andric P, Dam-Johansen K, Pedersen S, Meyer AS: Effects of substrate loading on enzymatic hydrolysis and viscosity of pretreated barley straw. Appl Biochem Biotechnol 2007, 143:27-40.
7. Knutsen JS, Davis RH: Cellulase retention and sugar removal by membrane ultrafiltration during lignocellulosic biomass hydrolysis. Appl Biochem Biotechnol 2004, 113-116:585-599.
8. Andrić P, Meyer AS, Jensen PA, Dam-Johansen K: Reactor design for minimizing product inhibition during enzymatic lignocellulose hydrolysis II. Quantification of inhibition and suitability of membrane reactors. Biotechnol Adv 2010, 28:407-425.
9. Yang J, Zhang X, Yong Q, Yu S: Three-stage hydrolysis to enhance enzymatic saccharification of steam-exploded corn stover. Bioresource Technol 2010, 101:4930-4935.
10. Ramos LP, Breuil C, Saddler JN: The use of enzyme recycling and the influence of sugar accumulation on cellulose hydrolysis by Trichoderma cellulases. Enzyme Microb Technol 1993, 15:19-25.
11. Tjerneld F: Enzyme-catalyzed hydrolysis and recycling in cellulose bioconversion. Methods Enzymol 1994, 228:549-558.
12. Zhang YP, Lynd LR: Toward an aggregated understanding of enzymatic hydrolysis of cellulose: Noncomplexed cellulase systems. Biotechnol Bioeng 2004, 88:797-824.
13. Varnai A, Viikari L, Marjamaa K, Siika-aho M: Adsorption of monocomponent enzymes in enzyme mixture analyzed quantitatively during hydrolysis of lignocellulose substrates. Bioresource Technol 2011, 102:1220-1227.
14. Tu M, Chandra RP, Saddler JN: Evaluating the distribution of cellulases and the recycling of free cellulases during the hydrolysis of lignocellulosic substrates. Biotechnol Progr 2007, 23:398-406.
15. Lu Y, Yang B, Gregg D, Saddler JN, Mansfield SD: Cellulase adsorption and an evaluation of enzyme recycle during hydrolysis of steam-exploded softwood residues. Appl Biochem Biotechnol 2002, 98-100:641-654.
16. Eriksson T, Börjesson J, Tjerneld F: Mechanism of surfactant effect in enzymatic hydrolysis of lignocellulose. Enzyme Microb Technol 2002, 31:353-364.
17. Sinitsyn AP, Bungay ML, Clesceri LS, Bungay HR: Recovery of enzymes from the insoluble residue of hydrolyzed wood. Appl Biochem Biotechnol 1983, 8:25-29.
18. Tu M, Chandra RP, Saddler JN: Recycling cellulases during the hydrolysis of steam exploded and ethanol pretreated lodgepole pine. Biotechnol Progr 2007, 23:1130-1137.
19. Tu M, Zhang X, Paice M, MacFarlane P, Saddler JN: The potential of enzyme recycling during the hydrolysis of a mixed softwood feedstock. Bioresource Technol 2009, 100:24.
20. Tu M, Saddler JN: Potential enzyme cost reduction with the addition of surfactant during the hydrolysis of pretreated softwood. Appl Biochem Biotechnol 2010, 161:274-287.
21. Qi B, Chen X, Su Y, Wan Y: Enzyme adsorption and recycling during hydrolysis of wheat straw lignocellulose. Bioresource Technol 2011, 102:2881-2889.
22. Lee D, Yu AHC, Saddler JN: Evaluation of cellulase recycling strategies for the hydrolysis of lignocellulosic substrates. Biotechnol Bioeng 1995, 45:328-336.
23. Mes-Hartree M, Saddler JN: The nature of inhibitory materials present in pretreated lignocellulosic substrates which inhibit the enzymatic hydrolysis of cellulose. Biotechnol Lett 1983, 5:531-536.
24. Xue Y, Jameel H, Park S: Strategies to recycle enzymes and their impact on enzymatic hydrolysis for bioethanol production. Bioresources 2012, 7:602-615.
25. Du R, Su R, Li X, Tantai X, Liu Z, Yang J, Qi W, He Z: Controlled adsorption of cellulase onto pretreated corncob by pH adjustment. Cellulose 2012, 19:371-380.
26. Weiss ND, Farmer JD, Schell DJ: Impact of corn stover composition on hemicellulose conversion during dilute acid pretreatment and enzymatic cellulose digestibility of the pretreated solids. Bioresource Technol 2010, 101:674-678.
27. Sluiter A, Hames B, Ruiz R, Scarlata C, Sluiter J, Templeton D, Crocker D: Determination of structural carbohydrates and lignin in biomass: Laboratory Analytical Procedure. 2008. NREL/TP-510-42618. Updated 2010.
28. Smith PK: Measurement of protein using bicinchoninic acid. Anal Biochem 1985, 150:76-85.
29. Zhu Y, Malten M, Torry-Smith M, McMillan JD, Stickel JS: Calculating sugar yields in high solids hydrolysis of biomass. Bioresource Technol 2011, 102:2897-2903.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8614922761917114, "perplexity": 4968.290421910784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099758.82/warc/CC-MAIN-20150627031819-00204-ip-10-179-60-89.ec2.internal.warc.gz"}
https://hal.inria.fr/tel-00749633
# Modélisation probabiliste en biologie moléculaire et cellulaire Abstract : The importance of stochasticity in gene expression has been widely shown recently. We will first review the most important related work to motivate mathematical models that takes into account stochastic effects. Then, we will study two particular models where stochasticity induce interesting behavior, in accordance with experimental results: a bursting dynamic in a self-regulating gene expression model; and the emergence of heterogeneity from a homogeneous pool of protein by post-translational modification.\\ In Chapter I, we studied a standard gene expression model, at three variables: DNA, messenger RNA and protein. DNA can be in two distinct states, ''ON'' and ''OFF''. Transcription (production of mRNA) can occur uniquely in the ''ON'' state. Translation (production of protein) is proportional to the quantity of mRNA. Then, the quantity of protein can regulate in a non-linear fashion these production rates. We used convergence theorem of stochastic processes to highlight different behavior of this model. Hence, we rigorously proved the bursting phenomena of mRNA and/or protein. Limiting models are then hybrid model, piecewise deterministic with Markovian jumps. We studied the long time behavior of these models and proved convergence toward a stationary state. Finally, we studied in detail a reduced model, explicitly calculated the stationary distribution and studied its bifurcation diagram. Our two main results are 1) to highlight stochastic effects by comparison with deterministic model; 2) To give back a theoretical tool to estimate non-linear regulation function through an inverse problem.\\ In Chapter II, we studied a probabilistic version of an aggregation-fragmentation model. This version allows a definition of nucleation in agreement with biological model for Prion disease. To study the nucleation, we used a stochastic version of the Becker-Döring model. In this model, aggregation is reversible and through attachment/detachment of a monomer. The nucleation time is defined as a waiting time for a nuclei (aggregate of a fixed size, this size being a parameter of the model) to be formed. In this work, we characterized the law of the nucleation time. The probability distribution of the nucleation time can take various forms according parameter values: exponential, bimodal or Weibull. We also highlight two important phenomena for the mean nucleation time. Firstly, the mean nucleation time is a non-monotone function of the aggregation kinetic parameter. Secondly, depending of parameter values, the mean nucleation time can be strongly or very weakly correlated with the initial quantity of monomer. These characterizations are important for 1) explaining weak dependence in initial condition observed experimentally; 2) deducing some parameter values from experimental observations. Hence, this study can be directly applied to biological data. Finally, concerning a polymerization-fragmentation model, we proved a convergence theorem of a purely discrete model to hybrid model, which may be useful for numerical simulations as well as a theoretical study. Keywords : Document type : Theses Probability. Université Claude Bernard - Lyon I, 2012. 
Citation: Romain Yvinec. Modélisation probabiliste en biologie moléculaire et cellulaire. Probability. Université Claude Bernard - Lyon I, 2012. French. <tel-00749633>
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8818590044975281, "perplexity": 2424.6231400389706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430453938418.92/warc/CC-MAIN-20150501041858-00007-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.gilgamath.com/portfolio-flattening
## Conditional Portfolios

When I first started working at a quant fund I tried to read about portfolio theory. (Beyond, you know, "Hedge Funds for Dummies.") I learned about various objectives and portfolio constraints, including the Markowitz portfolio, which felt very natural. Markowitz solves the mean-variance optimization problem, as well as the Sharpe maximization problem, namely $$\operatorname{argmax}_w \frac{w^{\top}\mu}{\sqrt{w^{\top} \Sigma w}}.$$ This is solved, up to scaling, by the Markowitz portfolio $$\Sigma^{-1}\mu$$. When I first read about the theory behind Markowitz, I did not read anything about where $$\mu$$ and $$\Sigma$$ come from. I assumed the authors I was reading were talking about the vanilla sample estimates of the mean and covariance, though the theory does not require this.

There are some problems with the Markowitz portfolio. For us, as a small quant fund, the most pressing issue was that holding the Markowitz portfolio based on the historical mean and covariance was not a good look. You don't get paid "2 and twenty" for computing some long-term averages. Rather than holding an unconditional portfolio, we sought to construct a conditional one, conditional on some "features". (I now believe this topic falls under the rubric of "Tactical Asset Allocation".) We stumbled on two simple methods for adapting Markowitz theory to accept conditioning information: Conditional Markowitz, and "Flattening".

## Conditional Markowitz

Suppose you observe an $$l$$-vector of features, $$f_i$$, prior to the time you have to allocate into $$p$$ assets to enjoy returns $$x_i$$. Assume that the returns are linear in the features, but the covariance is a long-term average. That is $$E\left[x_i \left|f_i\right.\right] = B f_i,\quad\mbox{Var}\left(x_i \left|f_i\right.\right) = \Sigma.$$ Note that Markowitz theory never really said how to estimate mean returns, and thus the conditional expectation here can be used directly in the Markowitz portfolio definition. Thus the conditional Markowitz portfolio, conditional on observing $$f_i$$, is simply $$\Sigma^{-1} B f_i$$. Another way of viewing this is to estimate the "Markowitz coefficient", $$W=\Sigma^{-1} B$$, and just multiply this by $$f_i$$ when it is observed. I have written about inference on the conditional Markowitz portfolio: via the MGLH tests one can test essentially whether $$W$$ is all zeros, or test the total effect size. However, the conditional Markowitz procedure is, like the unconditional procedure, subject to the Cramer-Rao portfolio bounds in the 'obvious' way: increasing the number of fit coefficients faster than the signal-noise ratio can cause degraded out-of-sample performance.

## The Flattening Trick

The other approach for adding conditional information is slicker. When I first reinvented it, I called it the "flattening trick". I assumed it was well established in the folklore of the quant community, but I have only found one reference to it, a paper by Brandt and Santa Clara, where they refer to it as "augmenting the asset space". The idea is as follows: in the conditional Markowitz procedure we ended with a matrix $$W$$ such that, conditional on $$f_i$$, we would hold portfolio $$W f_i$$. Why not just start with the assumption that you seek a portfolio that is linear in $$f_i$$ and optimize the $$W$$?
Note that the returns you experience by holding $$W f_i$$ are exactly $$x_i^{\top} W f_i = \operatorname{trace}\left(x_i^{\top} W f_i\right) = \operatorname{trace}\left(f_i x_i^{\top} W\right) = \operatorname{vec}^{\top}\left(f_i x_i^{\top}\right) \operatorname{vec}\left(W\right),$$ where $$\operatorname{vec}$$ is the vectorization operator that takes a matrix to a vector columnwise. I called this "flattening," but maybe it's more like "unravelling". Now note that the optimization problem you are trying to solve is to find the vector $$\operatorname{vec}\left(W\right)$$, with pseudo-returns of $$y_i = \operatorname{vec}\left(f_i x_i^{\top}\right)$$. You can simply construct these pseudo-returns $$y_i$$ from your historical data and feed them into an unconditional portfolio process. You can use unconditional Markowitz for this, or any other unconditional procedure. Then take the results of the unconditional process and unflatten them back to $$W$$.

Note that even when you use unconditional Markowitz on the flattened problem, you will not regain the $$W$$ from conditional Markowitz. The reason is that we are essentially allowing the covariance of returns to vary with our features as well, which was not possible in conditional Markowitz. In practice we often found that the flattening trick had slightly worse out-of-sample performance than conditional Markowitz when used on the same data, which we broadly attributed to overfitting. In conditional Markowitz we would estimate the $$p \times l$$ matrix $$B$$ and the $$p \times p$$ matrix $$\Sigma$$, to arrive at the $$p \times l$$ matrix $$W$$. In flattening plus unconditional Markowitz you estimate a $$pl$$ vector of means, and the $$pl \times pl$$ matrix of covariance, to arrive at the $$p \times l$$ matrix $$W$$.

To mitigate the overfitting, it is fairly easy to add sparsity to the flattening trick. If you wish to force an element of $$W$$ to be zero, because you think a certain feature should have no bearing on your holdings of a certain asset, you can just elide it from the flattening pseudo-returns. Moreover, if you feel that a certain feature should only have, say, a positive influence on your holdings of a particular asset, you can directly impose that positivity constraint in the pseudo portfolio optimization problem. Because you are solving directly for elements of $$W$$, this is much easier than in conditional Markowitz, where $$W$$ is the product of two matrices. Flattening is a neat trick. You should consider it the next time you're allocating assets tactically.
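Neither procedure is spelled out in code in the post, so below is a minimal numpy sketch of both estimators on synthetic data. All names are mine, the residual covariance stands in for the long-term $$\Sigma$$, and the vec ordering is row-major rather than columnwise (equivalent so long as flattening and unflattening agree):

```python
import numpy as np

# Conditional Markowitz vs. the flattening trick on synthetic data.
# X: (T, p) asset returns; F: (T, l) features observed one period ahead.
rng = np.random.default_rng(0)
T, p, l = 2000, 4, 3
F = rng.normal(size=(T, l))
B_true = rng.normal(scale=0.05, size=(p, l))
X = F @ B_true.T + 0.1 * rng.normal(size=(T, p))

# Conditional Markowitz: regress returns on features to estimate B,
# take a long-run covariance Sigma (here: residual covariance), and
# form the Markowitz coefficient W = Sigma^{-1} B.
B_hat_T, *_ = np.linalg.lstsq(F, X, rcond=None)   # shape (l, p); B = B_hat_T.T
Sigma = np.cov(X - F @ B_hat_T, rowvar=False)
W_cond = np.linalg.solve(Sigma, B_hat_T.T)        # (p, l)

# Flattening trick: pseudo-returns y_i = vec(f_i x_i^T), fed to
# unconditional Markowitz, then unflattened back to W.
Y = np.einsum('ti,tj->tij', X, F).reshape(T, p * l)
W_flat = np.linalg.solve(np.cov(Y, rowvar=False), Y.mean(axis=0)).reshape(p, l)

f_new = rng.normal(size=l)     # a freshly observed feature vector
print(W_cond @ f_new)          # conditional Markowitz holdings
print(W_flat @ f_new)          # flattening-trick holdings
```

On data generated with a constant covariance the two $$W$$ estimates should roughly agree, up to scaling; they diverge when the covariance itself depends on the features, which is exactly the extra freedom (and extra overfitting risk) of the flattened problem.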
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8593740463256836, "perplexity": 731.7419317292071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505359.23/warc/CC-MAIN-20200401003422-20200401033422-00106.warc.gz"}
https://teachingcalculus.com/2014/01/25/improper-integrals-and-proper-areas/
# Improper Integrals and Proper Areas

A few years ago on the old AP Calculus discussion group a teacher asked a question about this improper integral: $\displaystyle \int_{0}^{\infty }{\frac{1}{1+{{x}^{2}}}dx}=\underset{b\to \infty }{\mathop{\lim }}\,\int_{0}^{b}{\frac{1}{1+{{x}^{2}}}dx}$ $=\underset{b\to \infty }{\mathop{\lim }}\,\left. \left( {{\tan }^{-1}}\left( x \right) \right) \right|_{0}^{b}$ $=\underset{b\to \infty }{\mathop{\lim }}\,\left( {{\tan }^{-1}}\left( b \right)-{{\tan }^{-1}}\left( 0 \right) \right)=\frac{\pi }{2}$ His (quite perceptive) student pointed out that the range of the inverse tangent function is arbitrarily restricted to the open interval $\left( -\tfrac{\pi }{2},\tfrac{\pi }{2} \right)$. The student asked if some other range would affect the answer to this problem. The short answer is no, the result is the same. For example, if the range were restricted to, say, $\left( \tfrac{5\pi }{2},\tfrac{7\pi }{2} \right)$, then in the computation above: $\underset{b\to \infty }{\mathop{\lim }}\,\left( {{\tan }^{-1}}\left( b \right)-{{\tan }^{-1}}\left( 0 \right) \right)=\tfrac{7\pi }{2}-3\pi =\tfrac{\pi }{2}$ The value is the same. While that is pretty straightforward, there are other things going on here which may be enlightening. The original improper integral represents the area in the first quadrant between the graph of $y=\frac{1}{1+{{x}^{2}}}$ and the x-axis. Let’s consider the function that gives the area between the y-axis and the vertical line at various values of x. $A \displaystyle\left( x \right)=\int_{0}^{x}{\frac{1}{1+{{t}^{2}}}}\ dt$ Pretending for the moment that we don’t know the antiderivative, we can use a calculator to graph the area function. Of course we recognize this as the inverse tangent function, but what is more interesting is that whatever this function is, it seems to have a horizontal asymptote at $y=\tfrac{\pi }{2}$. The area is approaching a finite limit as x increases without bound. The unbounded region has a finite area. The connection with improper integrals is obvious. $\displaystyle \underset{b\to \infty }{\mathop{\lim }}\,A\left( b \right)=\underset{b\to \infty }{\mathop{\lim }}\,\int_{0}^{b}{\frac{1}{1+{{x}^{2}}}dx=}\int_{0}^{\infty }{\frac{1}{1+{{x}^{2}}}dx}$ Also, the improper integral is defined as the limit of the area function. This may give some insight as to why improper integrals are defined as they are.
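As a quick numerical companion to the post (not part of the original), one can tabulate the area function $A(x)$ and watch it level off at $\pi/2$:

```python
import numpy as np
from scipy.integrate import quad

# Numerical sketch (not from the post): tabulate the area function
# A(x) = integral from 0 to x of dt/(1+t^2) and watch it level off.
for x in [1, 10, 100, 1000, 10000]:
    area, _ = quad(lambda t: 1.0 / (1.0 + t * t), 0, x)
    print(f"A({x}) = {area:.6f}")
print("pi/2 =", np.pi / 2)
```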
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 10, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9579946994781494, "perplexity": 230.7296434042755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400238038.76/warc/CC-MAIN-20200926071311-20200926101311-00616.warc.gz"}
http://www.eso.org/sci/software/esomidas/doc/user/98NOV/volb/node182.html
Broadening is due both to the natural width of the transition and to the velocity spread of the absorbing atoms along the line of sight.

• In the ideal case of atoms at rest, only the natural (damping) profile of the transition remains.
• Let φ(v) be the normalized distribution of atoms with velocity between v and v + dv; the optical depth is then obtained by convolving the natural profile with φ(v) (equation 8.1).
• In the program the velocity distribution is assumed to be Gaussian, centered on v₀, the velocity of the cloud relative to the observer.

This full expression (8.1) is denoted as a "Maxwell + damping wing" or "Voigtian" profile in the program. In the case of low column density the full expression can be approximated by the purely Gaussian form τ = NS φ (equation 8.2). This simplified expression (8.2) is denoted as a "Maxwellian" profile in the program. Finally, if the line of sight crosses N clouds, the resulting optical depth is the sum of the optical depths of the individual clouds.

In cases where the source has a (cosmological) velocity: let z be the redshift of the source. An absorption measured in the spectrum at an observed wavelength λ corresponds to a rest wavelength λ₀, which yields the redshift of the cloud, 1 + z_cloud = λ/λ₀, and from it the velocity of the cloud relative to the source. In practice the program computes the absorption profile in the cloud reference frame (v₀ = 0) and shifts the result into the observer's rest frame.
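The displayed equations were lost from this extract, so purely as an illustration (this is not the MIDAS implementation, and the parameter names are mine): a "Voigtian" profile of this kind — a Gaussian convolved with a Lorentzian damping wing — can be evaluated numerically via the Faddeeva function.

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt profile: Gaussian of std sigma (the 'Maxwellian' part)
    convolved with a Lorentzian of HWHM gamma (the damping wing),
    evaluated via the real part of the Faddeeva function wofz."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-5.0, 5.0, 11)
print(voigt(x, sigma=1.0, gamma=0.1))
# As gamma -> 0 this tends to a pure Gaussian ('Maxwellian') profile.
```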
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9542152285575867, "perplexity": 1843.1784638777403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999273.79/warc/CC-MAIN-20190620210153-20190620232153-00558.warc.gz"}
https://www.physicsforums.com/threads/trying-to-get-this-pde-in-terms-of-y.509010/
# Trying to get this PDE in terms of 'y'

1. Jun 23, 2011

### jaketodd

I will love forever whoever can show me the steps of how to get the following equation in terms of y=[...]:

$$\frac{\partial^2 y}{\partial x^2}=\frac{1}{\nu^{2}}\frac{\partial^2 y}{\partial t^2}$$

This is not a homework question. I have a calculus book that has given me some progress, such as expanding the equation to a mixture of terms and first order partial derivatives, and I know to hold all other variables constant when taking a partial derivative, but I clearly don't understand how to do it right because my work doesn't agree with the answers in the textbook, and it seems like there might be more than one answer from what the textbook seems to say. I am really confused. My mom is a math tutor and she is having trouble with it. I really hope someone comes through for me.

Thanks, Jake

2. Jun 23, 2011

### TylerH

I don't know how to do it, but it will help those who do if you post your attempted solution. Also, in general, this would still be considered a "homework" question, because it is in a textbook question format.

3. Jun 23, 2011

### jaketodd

Well it's not from a textbook. I am just using a textbook to try to figure it out. Also, the furthest I can get it is to first order partial derivatives, then I am lost. Anyone who can solve this would know that step. Jake

4. Jun 23, 2011

### Hootenanny

Staff Emeritus
You mean you want to solve the equation? If that's the case, then it is pretty straightforward. Or do you want to write the PDE in the form, $$y = f\left(x,t,\frac{\partial y}{\partial x}, \ldots, \frac{\partial^n y}{\partial x^n}, \frac{\partial y}{\partial t}, \ldots, \frac{\partial^n y}{\partial t^n}\right)$$

5. Jun 23, 2011

### hunt_mat

First method is separation of variables, so you write $y=X(x)T(t)$, what does this say now?

Last edited: Jun 23, 2011

6. Jun 23, 2011

### HallsofIvy

Staff Emeritus
That is a "wave equation". Another method is to make the change of variables $u= x- \nu t$, $v= x+ \nu t$. Then $$\frac{\partial y}{\partial x}= \frac{\partial y}{\partial u}\frac{\partial u}{\partial x}+ \frac{\partial y}{\partial v}\frac{\partial v}{\partial x}= \frac{\partial y}{\partial u}+ \frac{\partial y}{\partial v}$$ $$\frac{\partial^2 y}{\partial x^2}= \frac{\partial}{\partial x}\left(\frac{\partial y}{\partial u}+ \frac{\partial y}{\partial v}\right)= \frac{\partial}{\partial u}\left(\frac{\partial y}{\partial u}+ \frac{\partial y}{\partial v}\right)+ \frac{\partial}{\partial v}\left(\frac{\partial y}{\partial u}+ \frac{\partial y}{\partial v}\right)= \frac{\partial^2 y}{\partial u^2}+ 2\frac{\partial^2 y}{\partial u\partial v}+ \frac{\partial^2 y}{\partial v^2}$$ Do the same with $\partial^2y/\partial t^2$ and put them into the original equation. It simplifies remarkably. This is called the "method of characteristics". $x- \nu t= const$ and $x+ \nu t= const$ are the "characteristic lines" for the equation.

7. Jun 23, 2011

### jaketodd

I read all you guys' replies and I thank you, but I am still lost. I tried applying what you guys said to the equation, but I don't get it. I am just looking for y= ... I am just looking for the equation solved for y. I don't know how, and I did try. Jake

Oh, and Hootenanny, yes I just want to solve the equation for y. If it's pretty straightforward as you say, could you show me how? Many thanks! But anyone, please help.

Last edited: Jun 23, 2011

8.
Jun 23, 2011

### hunt_mat

Jake, using my method, inserting y into the PDE shows: $$X''(x)T(t)=\frac{1}{\nu^{2}}X(x)T''(t)$$ Divide both sides through by X(x)T(t) to obtain: $$\frac{X''(x)}{X(x)}=\frac{1}{\nu^{2}}\frac{T''(t)}{T(t)}$$ Now one side is totally a function of x and the other side is a function of t; this is only possible if both sides are constant, call this constant k say, so this should leave you with two ODEs. What are they?

9. Jun 23, 2011

### jaketodd

First of all, thank you. However, two things: Firstly, y is already in the PDE as in the image of my original post. Secondly, I have no idea how to get two ODEs from what you provided, and I don't even know what an ODE is. I looked it up and I think it means "Ordinary Differential Equation," but ... please, anyone, I am lost here.

10. Jun 23, 2011

### hunt_mat

If you don't know what an ODE is then attacking a PDE may be a little beyond you. The differential equations are: $$\frac{X''(x)}{X(x)}=k,\quad\frac{1}{\nu^{2}}\frac{T''(t)}{T(t)}=k$$ which gives two ODEs: $$X''(x)-kX(x)=0,\quad T''(t)-\nu^{2}kT(t)=0$$

11. Jun 23, 2011

### TylerH

He's saying to let y=X(x)T(t). $\frac{\partial^2y}{\partial x^2}=X''(x)T(t) \mbox{ and } \frac{\partial^2y}{\partial t^2}=X(x)T''(t)$ From there, I'm as lost as you are. :)

EDIT: Note: This was typed before the above post.

12. Jun 23, 2011

### jaketodd

I am still lost (for instance, ν seems to disappear and then reappear, and also the variable definitions don't seem to match the original post even when I take into account that some variables are being expressed differently). I am trying to get it. Can all the variables in the original post simply be expressed as y=? That's what I really need.

13. Jun 23, 2011

### TylerH

Well, you know y=X(x)T(t) and you have 2 ODEs describing X(x) and T(t), respectively. So solve them for X(x) and T(t) and multiply them together.

Last edited: Jun 23, 2011

14. Jun 23, 2011

### TylerH

Oh... I just noticed you said you didn't know what an ODE is. $$X(x)=C_1e^2+C_2e^{-x}$$ $$T(t)=C_3e^{vt}+C_4e^{-vt}$$ $$y(x,t)=(C_1e^x+C_2e^{-x})(C_3e^{vt}+C_4e^{-vt})$$ where all the Cs are arbitrary constants.

15. Jun 23, 2011

### TylerH

How far are you in the calculus sequence? This stuff comes after that, and ODEs are usually introduced in calc II. To give you an idea, the only PDE class my school offers is a second-year graduate class, i.e., people getting master's degrees don't get to them until their second year here. I STRONGLY recommend you learn ODEs first. PDEs are an extension of ODEs to functions of many variables, so it should be clear that since the latter underlies the former, the latter is required knowledge for even beginning to grasp the former. Anyway, MIT's OCW is a good place for calculus and ODEs. Paul's Online Notes has an entire class worth of notes on ODEs and a small section on PDEs.

16. Jun 23, 2011

### jaketodd

So there are arbitrary constants in the y=? Isn't there a way to get it just in terms of the original variables? If there must be constants in there, can I solve for them by plugging in arbitrary values for the other variables? Can you please show me how the previous post to this one fits into the one before it? And in the most recent post, did you intend to put x in the exponent of the first line there?

Thanks, Jake

17. Jun 24, 2011

### hunt_mat

You can't just solve a PDE in that way; you need boundary conditions, initial conditions and whatnot. Have you done any differential equations before? What have you done?

18.
Jun 24, 2011

### jaketodd

I know how to take the derivative of a function, and I know how limits work. That's about it. Earlier in this thread, Hootenanny said that solving for y is "pretty straightforward," so that gives me the impression that it can be done, somehow.

19. Jun 24, 2011

### hunt_mat

So if I asked you to find y when: $$\frac{dy}{dx}+2xy=x$$ could you tell me how to calculate y? Solving PDEs such as this is straightforward, if you have the right experience. It looks to me as if you're jumping way ahead of your experience. I would humbly suggest that you start with how to solve the equation I gave you before you start on the wave equation. As an aside, I would also suggest you look at how to solve 1st order partial differential equations before you look into 2nd order PDEs.

20. Jun 24, 2011

### jaketodd

Well you see, that's the whole reason I am coming to Physics Forums; I don't have the experience to figure it out on my own.
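The thread trails off, but the separated solution in post #14 is easy to check mechanically (note that its first factor should read $C_1e^{x}$ rather than $C_1e^{2}$, which answers the question raised in post #16). A SymPy sketch of the check, not part of the original thread:

```python
import sympy as sp

# Sketch (not from the thread): verify that the separated solution from
# post #14 (with e^x, and separation constant k = 1) satisfies the
# wave equation y_xx = (1/v^2) y_tt.
x, t, v = sp.symbols('x t v', positive=True)
C1, C2, C3, C4 = sp.symbols('C1 C2 C3 C4')
y = (C1*sp.exp(x) + C2*sp.exp(-x)) * (C3*sp.exp(v*t) + C4*sp.exp(-v*t))
residual = sp.diff(y, x, 2) - sp.diff(y, t, 2) / v**2
print(sp.simplify(residual))  # prints 0
```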
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8083239197731018, "perplexity": 596.4265622137955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00365-ip-10-171-10-108.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/767081/a-polynomial-with-integer-coefficient
# A polynomial with integer coefficient

I'm struggling with this question: Suppose $P(x)$ is a polynomial with integer coefficients such that none of the values $P(1),...,P(2010)$ is divisible by $2010$. Prove that $P(n)\neq 0$ for all integers $n$ - Hint: If $n\equiv m\pmod {M}$, then $P(n)\equiv P(m)\pmod M$. - Suppose $P(n) = 0$ for some integer $n$. Then write $n = p\cdot 2010 + r$, then $0 = P(2010p + r) = 2010q + P(r)$. So $P(r) = -2010q$, which is divisible by $2010$ with $0 \le r < 2010$, contradiction. (If $r = 0$, note that $P(0)\equiv P(2010)\pmod{2010}$, so $2010$ would divide $P(2010)$, again contradicting the hypothesis.) - Hint $\$ A root of $P$ remains a root of $P$ mod $\,2010$ but, by hypothesis, $P$ has no roots mod $\,2010$. Remark $\$ More explicitly, mod $\,2010$ every integer $\,n\,$ is congruent to a (unique) integer $\bar n$ in the interval $\,[1,2010].\,$ Indeed, let $\,\bar n\,$ be the remainder mod $2010$ (and replace $\,0\,$ by $\,2010).$ Therefore, if $\,P(n) = 0\,$ then by applying the $\color{#c00}{\rm Polynomial\ Congruence\ Rule}$ (below) we deduce $${\rm mod}\ 2010\!:\,\ \bar n\equiv n\,\Rightarrow\, P(\bar n)\equiv P(n)\equiv 0$$ But this implies that $\,2010\mid P(\bar n)\,$ and $\, 1\le \bar n\le 2010,\$ contra hypothesis. Congruence Sum Rule $\rm\qquad\quad A\equiv a,\quad B\equiv b\ \Rightarrow\ \color{#c0f}{A+B\,\equiv\, a+b}\ \ \ (mod\ m)$ Proof $\rm\ \ m\: |\: A\!-\!a,\ B\!-\!b\ \Rightarrow\ m\ |\ (A\!-\!a) + (B\!-\!b)\ =\ \color{#c0f}{A+B - (a+b)}$ Congruence Product Rule $\rm\quad\ A\equiv a,\ \ and \ \ B\equiv b\ \Rightarrow\ \color{blue}{AB\equiv ab}\ \ \ (mod\ m)$ Proof $\rm\ \ m\: |\: A\!-\!a,\ B\!-\!b\ \Rightarrow\ m\ |\ (A\!-\!a)\ B + a\ (B\!-\!b)\ =\ \color{blue}{AB - ab}$ Congruence Power Rule $\rm\qquad \color{}{A\equiv a}\ \Rightarrow\ \color{#0a0}{A^n\equiv a^n}\ \ (mod\ m)$ Proof $\$ It is true for $\rm\,n=1\,$ and $\rm\,A\equiv a,\ A^n\equiv a^n \Rightarrow\, \color{#0a0}{A^{n+1}\equiv a^{n+1}},\,$ by the Product Rule, so the result follows by induction on $\,n.$ $\color{#c00}{\rm\bf Polynomial\ Congruence\ Rule}\$ If $\,f(x)\,$ is a polynomial with integer coefficients then $\ A\equiv a\ \Rightarrow\ f(A)\equiv f(a)\,\pmod m.$ Proof $\$ By induction on $\, n =$ degree $f.\,$ Clear if $\, n = 0.\,$ Else $\,f(x) = f(0) + x\,g(x)\,$ for $\,g(x)\,$ a polynomial with integer coefficients of degree $< n.\,$ By induction $\,g(A)\equiv g(a)\,$ so $\, \color{#0a0}{A g(A)\equiv a g(a)}\,$ by the Product Rule. Hence $\,f(A) = f(0)+\color{#0a0}{Ag(A)}\equiv f(0)+\color{#0a0}{ag(a)} = f(a)\,$ by the Sum Rule. Beware that such rules need not hold true for other operations, e.g. the exponential analog of the above, $\rm A^B\equiv\, a^b$, is not generally true (unless $\rm B = b,\,$ so it follows by applying the Polynomial Rule with $\,f(x) = x^{\rm b}).$ -
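The hint also suggests a purely computational restatement. The following Python sketch (mine, with an arbitrary example polynomial) checks the hypothesis for a given $P$ and, when it holds, certifies that $P$ has no integer roots:

```python
# Sketch (mine, with an arbitrary example polynomial): if P(r) is not
# divisible by 2010 for any r in 1..2010, then P(n) != 0 for all integers n,
# because every integer n satisfies P(n) ≡ P(n mod 2010) (mod 2010).
def P(x):
    return x**3 + x + 1   # hypothetical polynomial with integer coefficients

if all(P(r) % 2010 != 0 for r in range(1, 2011)):
    print("P has no integer roots")
else:
    print("hypothesis fails for this P; the test is inconclusive")
```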
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9811480045318604, "perplexity": 37.5407104948534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115857131.28/warc/CC-MAIN-20150124161057-00280-ip-10-180-212-252.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/423681/where-to-find-the-definition-of-a-given-beamer-theme
# Where to find the definition of a given beamer theme? I want to modify a detail in the header/footer lines of a beamer theme that I am using (for instance Antibes). Where can I find the corresponding definition file, to copy the existing definition of the header-footer lines? Antibes uses the tree outer theme. You can find the definition of its headline in beamerouterthemetree.sty: \defbeamertemplate*{headline}{tree theme} {% \end{beamercolorbox} \begin{beamercolorbox}[wd=\paperwidth,ht=2.5ex,dp=1.125ex,% \end{beamercolorbox} \begin{beamercolorbox}[wd=\paperwidth,ht=2.5ex,dp=1.125ex,% \ifbeamer@tree@showhooks \ifdim\wd\beamer@tempbox>1pt% \hskip2pt\raise1.9pt\hbox{\vrule width0.4pt height1.875ex\vrule width 5pt height0.4pt}% \hskip1pt% \fi% \else% \hskip6pt% \fi% \end{beamercolorbox} \begin{beamercolorbox}[wd=\paperwidth,ht=2.5ex,dp=1.125ex,% \ifbeamer@tree@showhooks \ifdim\wd\beamer@tempbox>1pt% \hskip9.4pt\raise1.9pt\hbox{\vrule width0.4pt height1.875ex\vrule width 5pt height0.4pt}% \hskip1pt% \fi% \else% \hskip12pt% \fi%
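For the stated goal of modifying a detail in the header, you don't actually have to edit the theme file: you can copy the template into your preamble and redefine it with `\setbeamertemplate` after loading the theme. A minimal sketch (the box contents below are simplified placeholders, not the real tree-theme code):

```latex
% Minimal sketch: redefine the headline in the preamble instead of
% editing the .sty file. The box below is a simplified placeholder,
% not the original tree-theme code.
\documentclass{beamer}
\usetheme{Antibes}
\setbeamertemplate{headline}{%
  \begin{beamercolorbox}[wd=\paperwidth,ht=2.5ex,dp=1.125ex]{title in head/foot}%
    \hskip6pt\insertshorttitle\hfill\insertsection\hskip6pt%
  \end{beamercolorbox}%
}
\begin{document}
\begin{frame}{Example}
  Body text.
\end{frame}
\end{document}
```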
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9549468159675598, "perplexity": 3775.5639267961506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655902377.71/warc/CC-MAIN-20200709224746-20200710014746-00559.warc.gz"}
https://www.physicsoverflow.org/5148/effective-action-bosonic-string-theory-enhanced-symmetry
# Effective action for bosonic string theory with enhanced symmetry

+ 3 like - 0 dislike
196 views

See these lecture notes http://members.ift.uam-csic.es/auranga/lect7.pdf page 17. Usually one derives the effective action from the massless states by calculating amplitudes, or otherwise through beta functions (worldsheet conformal invariance). One obtains an effective field theory containing a metric, a Kalb-Ramond field and a dilaton. These come from the $N=1,\bar{N}=1$ sector of the mass spectrum. This is well known. If the 25th dimension is compactified with $R=\alpha'^{1/2}$ then additional massless 25-dimensional fields emerge. This happens in the $N=0,\bar{N}=1$ sector and the $N=1,\bar{N}=0$ sector. After analysing the enhanced symmetry from the 25d point of view one can deduce that these are $SU(2)\times SU(2)$ gauge bosons. On page 17 of the lecture the author says: it is possible to cook up a new 25d effective field theory by including by hand the new massless modes. What does this action look like? Is there any paper or reference which explains the calculations or steps to obtain the action?

This post imported from StackExchange Physics at 2014-03-05 14:56 (UCT), posted by SE-user Anthonny
edited Apr 19, 2014

Could you explain the steps and calculations to obtain the effective field theory action? Please give some reference (paper, book, webpage, blog) related to this interesting topic. This post imported from StackExchange Physics at 2014-03-05 14:56 (UCT), posted by SE-user Anthonny

I'm waiting for more answers; nobody else? The massless sector of the open string is a gauge boson; could this help? I think that the action should be nonlinear, like a Born-Infeld action. What do you think? This post imported from StackExchange Physics at 2014-03-05 14:56 (UCT), posted by SE-user Anthonny

+ 2 like - 0 dislike

The value $R=\alpha^{\prime 1/2}$ is the self-dual radius under T-duality. One may indeed extract the massless spectrum – the spectrum of all fields much lighter than $\alpha^{\prime -1/2}$. Because the CFT has an $SU(2)\times SU(2)$ symmetry, as can be seen from the OPEs of the currents, the spacetime physics has this symmetry, too. Because one finds (spacetime) Lorentz vector states in the adjoint of $SU(2)\times SU(2)$, it is clear that this group is the gauge symmetry of the spacetime physics. And indeed, one may verify that the tree-level scattering amplitudes for all the relevant string modes agree with the scattering amplitudes extracted for quanta of fields in the effective action that is (a bit schematically, especially when it comes to the parts unrelated to the enhanced gauge symmetry) $$S =\int d^{25}x\,\exp(2\phi) [R + (\partial_{[\lambda} B_{\mu\nu]})^2 + (\partial_\mu \phi)^2 -\frac 14 {\rm Tr}(F_{\mu\nu}F^{\mu\nu}) ]$$ So it is a 25-dimensional action because we ignore the one compactified dimension whose radius is stringy (dimensional reduction). In this 25-dimensional spacetime, there is the dilaton, the metric, the B-field, and an $SU(2)\times SU(2)$ gauge field, and they have more or less the expected terms in the effective action. See Polchinski's Volume 1 from page 242 to 250+ or so. The effective action is probably not written there explicitly. However, you may find the 26D effective action for the uncompactified bosonic string theory at the top of page 114, reduce the dimension, and add the $SU(2)\times SU(2)$ Yang-Mills field, more or less getting the exact answer.
The "Cartan" $U(1)\times U(1)$ part of the Yang-Mills action comes from the Kaluza-Klein $U(1)$ symmetry of the circle and from the components of the B-field $B_{\mu,25}$. This is "enhanced" by the extra "accidentally massless" states to the non-Abelian group. Between equations 3.7.15 and 3.7.20 or so, Polchinski takes a different but ultimately equivalent strategy to derive the spacetime action. He derives the equations of motion from the requirement of the conformal symmetry on the world sheet (vanishing beta-functions etc.) and verifies that the same equations follow as the Euler-Lagrange equations from the spacetime action he "guesses" and "refines". This post imported from StackExchange Physics at 2014-03-05 14:56 (UCT), posted by SE-user Luboš Motl answered Nov 29, 2013 by (10,278 points) I was thinking of that term $-\frac 14 {\rm Tr}(F_{\mu\nu}F^{\mu\nu})$ is the most obvious. But I think its just the linear part, as the same way a linear action for $h_{\mu\nu}$(the weak gravitational field) is the linear part of $\int R$ the complete nonlinear action. For example for the open bosonic string (not compactified) the massless mode is a gauge boson and when one obtain its effective action one doesnt obtain $\int -\frac 14 {\rm Tr}(F_{\mu\nu}F^{\mu\nu}$ but a Born Infield action instead. This post imported from StackExchange Physics at 2014-03-05 14:56 (UCT), posted by SE-user Anthonny Another thing Im not sure is if the coupling with the dilaton is necesary because they came from diferent sectors, I think this is not obvious, one might intuit the action or guess an expected action. For this reason I want a reference like a thesis or research paper, so I can be sure that the calculation was done and the answer is right. Why you are sure of your response? Have you done this calculation? or where did you see that action? This post imported from StackExchange Physics at 2014-03-05 14:56 (UCT), posted by SE-user Anthonny Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the "link" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor)   Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\varnothing$ in the following word:p$\varnothing$ysicsOverflowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8403558731079102, "perplexity": 807.9313589639705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987798619.84/warc/CC-MAIN-20191022030805-20191022054305-00496.warc.gz"}
http://math.stackexchange.com/questions/287635/brownian-motions-identical-distributions
# Brownian motions identical distributions

Let $(B_t)_t$ be a standard Brownian motion, and $$A = \sup\{t\leq 1\mid B_t =0 \},\qquad B = \inf\{ t\geq 1\mid B_t =0 \}.$$ I would like to show that $A$ and $B^{-1}$ are identically distributed and find their distribution. Could you please help me with this. I would be grateful for any ideas or suggestions. Thanks. - Idea: isn't $B = 0$ a.s.? – Ilya Jan 26 '13 at 21:53 Where you wrote $\{\sup t\le1\mid B_t=0\}$, you ought to have $\sup\{t\le1\mid B_t=0\}$ (and similarly with $\inf$). – Michael Hardy Jan 26 '13 at 22:18 sorry for the misprint, I have edited the description. – eugen1806 Jan 27 '13 at 8:18 Corrected the text of the question since the OP failed to do it. – Did Feb 2 '13 at 11:00

The OP probably means $A=\sup\{t\leqslant1\mid B_t=0\}$ and $B=\inf\{t\geqslant1\mid B_t=0\}$; in any case this post answers the question modified accordingly.

Let $W_t=tB_{1/t}$; then $(W_t)_{t\geqslant0}$ is a Brownian motion and $W_t=0$ for $t\gt0$ iff $B_{1/t}=0$, hence $1/B=\sup\{t\leqslant1\mid W_t=0\}$. Thus, $1/B$ corresponds to the functional $A$ based on the paths of the Brownian motion $(W_t)_{t\geqslant0}$; in particular $A$ and $1/B$ are identically distributed.

To compute the distribution of $B$, note that, conditionally on $B_1=x$, $B-1$ is distributed as the first hitting time of $|x|$ by a standard Brownian motion, and that this first hitting time is distributed as $x^2T$, where $T=\inf\{t\geqslant0\mid B_t=1\}$. Thus, for every $t\gt1$, $$f_B(t)=2\int_0^{+\infty}f_T\left(\frac{t-1}{x^2}\right)p_1(x)\frac{\mathrm dx}{x^2},$$ where $f_B$ is the density of $B$, $f_T$ is the density of $T$ and $p_1$ is the density of $B_1$. Furthermore, $$p_1(x)=\frac1{\sqrt{2\pi}}\mathrm e^{-x^2/2},\qquad f_T(t)=\frac1{t\sqrt{2\pi t}}\mathrm e^{-1/2t}.$$ This yields $$f_B(t)=\frac{\mathbf 1_{t\gt1}}{\pi t\sqrt{t-1}}.$$ Finally, since $A$ is distributed like $1/B$, the usual change of variable yields $$f_A(t)=\frac{\mathbf 1_{0\lt t\lt1}}{\pi\sqrt{t(1-t)}},$$ the classical arcsine law. - thanks a lot, Did. I didn't get the following argument: I see that $1/B$ corresponds to the functional $A$ based on the paths of $W_t$. How does it follow that $A$ and $1/B$ are identically distributed? – eugen1806 Jan 28 '13 at 20:16 For every $\Phi$, $\Phi((B_t)_{t\geqslant0})$ and $\Phi((W_t)_{t\geqslant0})$ are identically distributed. Try this for the functional $\Phi:(x_t)_{t\geqslant0}\mapsto\sup\{t\leqslant1\mid x_t=0\}$. – Did Jan 28 '13 at 23:25
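As a sanity check (not part of the original question or answer), a quick Monte Carlo simulation of $A$ agrees with the arcsine density above:

```python
import numpy as np

# Monte Carlo sketch (mine, not from the post): check that
# A = sup{t <= 1 : B_t = 0} follows the arcsine law
# f_A(t) = 1/(pi*sqrt(t*(1-t))) on (0, 1).
rng = np.random.default_rng(1)
n_paths, n_steps = 5000, 2000
dt = 1.0 / n_steps
A_samples = np.empty(n_paths)
for i in range(n_paths):
    b = np.cumsum(rng.normal(scale=np.sqrt(dt), size=n_steps))
    # index of the last sign change approximates the last zero before 1
    cross = np.nonzero(b[:-1] * b[1:] < 0)[0]
    A_samples[i] = (cross[-1] + 1) * dt if cross.size else 0.0
# The arcsine CDF at 1/2 is (2/pi)*arcsin(sqrt(1/2)) = 1/2:
print("P(A <= 1/2) ~", np.mean(A_samples <= 0.5))
```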
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9661694765090942, "perplexity": 85.86233016179223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398454553.89/warc/CC-MAIN-20151124205414-00350-ip-10-71-132-137.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/110501/auto-package-download-for-texlive
I use MiKTeX on Windows and am quite satisfied with it. Recently I started switching all my tasks toward open-source alternatives, and in the course of that I would love to use Linux. On Linux, TeX Live is available as an alternative to MiKTeX. The thing I really like about MiKTeX is its ability to install packages automatically. Can I do the same in TeX Live too? Is there a way I can enable such a feature or install some plugin for it? I am using Fedora 18, if that's needed.

• Welcome to TeX.sx! – jubobs Apr 24 '13 at 14:20
• You are new, so @Jubobs was being friendly :-) – Joseph Wright Apr 24 '13 at 14:27
• Oh!! That link scared me!! – rafee Apr 24 '13 at 14:30
• If you install the complete TeX Live collection (~2400 packages) you'll never ever need to add new packages. Everything will work just fine and all you'll need to do will be a matter of `tlmgr update -all` from time to time or `tlmgr update <package>` if you need something specific. As a Linux user I strongly suggest you not to install your distribution packages but go directly to the source and install TeX Live via one of these methods. This is closer to the Unix way of doing things and you will not regret it in the long run. – Nico Boni Apr 24 '13 at 14:48
• – texenthusiast Apr 24 '13 at 17:57

While in MiKTeX an installation process is automatically triggered if you have, say, `\usepackage{beamer}` in a document preamble without the corresponding package installed, there is no such feature in TeX Live.

The last statement is actually not quite true: as pointed out by wasteofspace in the comments, there is the texliveonfly package that implements on-demand installation in TeX Live 2010 and later. I never tested it and don't know if it has drawbacks.

However, if you install the full (or almost full) TeX Live collection of packages (~2400) you will not need to add new packages; a periodic `tlmgr update -all` will take care of everything, including the installation of packages added to the TeX Live collection after your first full installation. This feature is explained in the `tlmgr` manual.

Analogously, if a package has been added to a collection on the server that is also installed locally, it will be added to the local installation. This is called `auto-install` and is announced as such when using the option `--list`. This auto-installation can be suppressed using the option `--no-auto-install`

The manual has lots of info on useful commands and is recommended reading for every user.

The downside is of course that you need the full set of packages installed on your machine, which may be a problem if you don't have enough free space. If you really can't spare 2 GB from your HD, it is also possible to install TeX Live on a, say, 4 GB USB key and live happily ever after :)

Everything I just wrote requires that you install TeX Live with one of the methods described here. If you decide to use the TeX packages from your distro you are forced to follow their update policy, which is different for different distros.

• never???! so the op will never be interested in an update, or in a new package? intriguing. i would suggest the `texliveonfly` package. (i can't recommend it, since i've never used it, but it looks appropriate.) – wasteofspace Apr 25 '13 at 9:04
• The OP will never have to care about packages because (i) he already has all available packages at the time of installation and (ii) packages added to the TeX Live list after the first install will automatically be added via `tlmgr update -all`.
I think that in the end you will install the vast majority of packages anyway, so it is simpler to just install everything and forget about it. However the package you mentioned looks interesting; I didn't know about it. Too bad I can't test it since I already have the full installation ;) – Nico Boni Apr 25 '13 at 12:04
• can I use texliveonfly with TeXstudio? I tried replacing the pdflatex command with "texliveonfly.py %tex", but that didn't do the job – rafee Apr 28 '13 at 15:04
• It works for me with the following command: `texliveonfly --compiler=pdflatex <filename>.tex` (issued from a terminal, I don't know about TeXstudio). If you aren't using lualatex you need to specify the compiler with an option, since the default is set to `lualatex`. Also the documentation of the package is outdated, since the option `--engine` is no longer recognized. To be sure that everything is set to work, check if a script called `texliveonfly` is inside your `/bin` directory. – Nico Boni Apr 28 '13 at 15:14
• @rafee I would be very wary of this, personally. Either it is installing into the main or local tree, which it ought not have permission to do, or it is installing packages into your personal tree. That is asking for trouble and will certainly cause headaches - especially, though not only, in the case of font packages. If people use this, they should make absolutely certain they understand what exactly it does and that they know when they need to undo things and which errors and problems to look out for. – cfr Dec 22 '15 at 1:41

## texliveonfly

As mentioned in comments, there is a TeX Live package called `texliveonfly` which you can use with `texliveonfly filename.tex`, and it will automatically download the right TeX Live packages. This also works for packages for which the LaTeX package name and the TeX Live package name don't match (for example the LaTeX `rubikrotation` package is contained in the `rubik` TeX Live package), and it also takes package dependencies into account.

### Usage

Installing It is a Python script so it requires Python to be installed. You can then install it as usual with `tlmgr install texliveonfly`. If you have to use `sudo tlmgr` here, you will have to use `sudo texliveonfly` later.

Running If you go in your terminal to the directory of your `filename.tex` file, you can run it with `texliveonfly filename.tex`.

Other compilers At the moment it uses `pdflatex` by default, but you can configure it to run with other compiler engines by using the `--compiler` (or `-c`) flag, like `texliveonfly --compiler=lualatex filename.tex`.

Compiler flags You can pass flags for the compiler you use to `texliveonfly` using the `--arguments` (or `-a`) flag, so for example if you previously used `latexmk -shell-escape -pdf filename.tex` then you now use `texliveonfly --compiler=latexmk --arguments='-shell-escape -pdf' filename.tex`.

Known problems

1. There are some cases of missing packages which fail with a non-standard error message, for example babel when it's missing languages, in which case `texliveonfly` doesn't download them. At the moment the following packages are known to have to be installed manually: (please edit if you find more)

• Babel languages, for example for european languages install the `collection-european` package
• Biblatex styles, e.g. for the nature style you need the `biblatex-nature` package
• fontenc encodings, e.g. to get `t2aenc.def` you need the `cyrillic` package, and to get `ly1enc.def` you need the `ly1` package.

2.
When giving options to texliveonfly, for example for a different compiler, it sometimes hangs for no apparent reason when installing packages. You can most probably work around it by first running texliveonfly without options, so `texliveonfly main.tex` (so it will download the packages), and then running whatever you wanted to, for example `latexmk main.tex`.

### Background

Essentially texliveonfly is a build tool like latexmk (which is a Perl script); it wraps the TeX engine. Note however that you can chain them with `texliveonfly --compiler=latexmk filename.tex`. It is a Python script which works by trying to run your LaTeX file, and if it fails because a package is missing it will try to install that package. Besides on ctan.org/pkg/texliveonfly you can view the source at ctan.org/tex-archive/support/texliveonfly or on latex.org/forum

PS I tested this on Arch Linux 4.19.4 and on Travis CI (Ubuntu 14.04).

My rather simplistic approach was to search for `\usepackage`, extract the contents and install using the distribution's package manager.

```
cat *.tex | sed -n 's/^[^%]*\\usepackage[^{]*{\([^}]*\)}.*$/tex\(\1.sty\)/p' | paste -sd ' ' -
```

This returns a list of packages with `.sty` and surrounded by `tex()`, e.g. `tex(amsmath.sty) tex(enumitem.sty) tex(graphicx.sty)`. I can pass these straight to `yum`/`dnf` (I'm using Fedora).

```
sudo dnf install $( cat *.tex | sed -n 's/^[^%]*\\usepackage[^{]*{\([^}]*\)}.*$/tex\(\1.sty\)/p' | paste -sd ' ' - )
```

There were a couple of packages I had split over multiple lines that the `sed` expression missed, which I installed manually using `sudo dnf install 'tex(some-package-name.sty)'`. Hopefully there's a simple `.sty`-installing equivalent for `apt-get` on Ubuntu.

• Thank you jozxyqk. I used your command in this form: `cat *.tex | sed -n 's~^[^%]*\\usepackage[^{]*{\([^}]*\)}.*$~\1.sty~p' | while read file; do tlmgr install $file; done` This works like a charm for me. – Daniel Borges Aug 29 '16 at 16:33

There is no such functionality built in, as mentioned. However, with some botching one can create a small wrapper that does exactly that. Simply scanning the packages is often not enough because you don't get the dependencies. `itex` uses `expect` to catch errors and install the packages on the fly. https://github.com/dopefishh/itex With `itex` you only need to install the texlive infrastructure and the script will install the rest.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8548474907875061, "perplexity": 1615.9041656458403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256797.20/warc/CC-MAIN-20190522103253-20190522125253-00288.warc.gz"}
https://www.physicsforums.com/threads/tuning-fork-sound-q.145361/
# Tuning fork, sound q

1. Nov 25, 2006

### lizzyb

A tuning fork generates sound waves with a frequency of 246 Hz. The waves travel in opposite directions along a hallway, are reflected by end walls, and return. The hallway is 47.0 m long, and the tuning fork is located 14.0 m from one end. What is the phase difference between the reflected waves when they meet at the tuning fork? The speed of sound in air is 343 m/s.

We have the equation: $$\Delta r = \frac{\phi}{2 \pi} \lambda$$ so it seems that all we need to do is determine phi since we can easily determine delta r and lambda. But the answer that I come up with is different than in the book. $$\lambda = \frac{v}{f} = \frac{343}{246} = 1.39 \ \text{m}$$ Easy enough. But what about the change in r? Let r1 be the distance traveled by the sound that goes to the left and r2 be the sound that goes to the right; thus we have: $$r_1 = 2(47 - 14) = 66 \ \text{m}$$ $$r_2 = 2(14) = 28 \ \text{m}$$ $$\Delta r = r_1 - r_2 = 38 = \frac{\phi}{2 \pi} \lambda$$ so $$\frac{38 \cdot 2 \cdot \pi}{1.39} = \phi$$ Which is 171.77 radians maybe? But this is way off the answer in the back of the book, 91.3 degrees, because 171.77 * 180 / pi = 9841.7 degrees modulo 360 = 121 degrees?? ??

2. Nov 25, 2006

### Andrew Mason

The phase difference is the path difference minus the number of whole wavelengths in the path difference. The number of wavelengths in 38.0 m is 38.0/(343/246) = 27.254. The number of whole wavelengths is 27, so the phase difference is .254 of a wavelength, or 360 x .254 = 91.3 degrees. To the correct significant figures, the phase difference is really .3 of a wavelength or 108 degrees. AM

3. Nov 25, 2006

### lizzyb

wow - my error was in rounding off the results. thanks so much. As a recap, I can come up with the correct answer with: $$\phi = \frac{\Delta r \cdot 2 \cdot \pi}{\lambda} = \frac{\Delta r \cdot 2 \cdot \pi}{v/f} = \frac{\Delta r \cdot 2 \cdot \pi \cdot f}{v} = \frac{(66 - 28) \cdot 2 \pi \cdot 246 }{343} \ \text{radians} \cdot \frac{180 \ \text{degrees}}{\pi \ \text{radians}}$$ Anyway, I came up with 9811.31195335 modulo 360 = 91.3 degrees!

4. Dec 4, 2006

### conejoperez28

I don't understand the "modulo" conversion; it's not working for me. Can you explain it please?

5. Dec 5, 2006

### OlderDan

The sine and cosine functions are periodic with a period of 360°. A "phase difference" of n*360 + θ is the same as a phase difference of θ for all integer values of n. When you have some big angle and subtract n*360 from it, you are finding the angle "modulo 360°".
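For reference (not part of the original thread), the whole computation fits in a few lines of Python, and keeping full precision shows where the rounding error crept in:

```python
import math

# Sketch (mine, not from the thread): the whole computation in one place.
v, f = 343.0, 246.0                     # speed of sound (m/s), frequency (Hz)
r1, r2 = 2 * (47.0 - 14.0), 2 * 14.0    # round-trip path lengths (m)
delta_r = r1 - r2                       # 38 m
wavelengths = delta_r * f / v           # ~27.254 wavelengths
phase_deg = math.fmod(wavelengths, 1.0) * 360.0
print(f"{wavelengths:.3f} wavelengths -> phase difference {phase_deg:.1f} degrees")
# Keeping full precision gives ~91.3 deg; rounding lambda to 1.39 m early
# is what produced the 121 deg in the first attempt.
```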
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9077165126800537, "perplexity": 583.595993984845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105955.66/warc/CC-MAIN-20170819235943-20170820015943-00085.warc.gz"}
https://www.physicsforums.com/threads/radioactive-decay-and-linear-graphing.286805/
# Radioactive Decay and Linear Graphing

1. Jan 22, 2009

### nmacholl

1. The problem statement, all variables and given/known data

This particular exercise has no "problem statement" but I'll explain it in detail using all the information provided to me. The class was presented with a table of data containing values for time (in days) and activity (in cts/sec) for the radioactive isotope Iodine-131. We are to create a linear graph using the data and find Iodine-131's half-life. Here's the data, time in days first, then the activity in cts/sec.

05 | 6523
10 | 4191
15 | 2736
20 | 1722
25 | 1114
30 | 0722
35 | 0507
40 | 0315

2. Relevant equations

Here are the equations given to us on the table.

A = Ao^(-λt)
λ = (ln 2)/T_{1/2}
Ao = 10,000 cts/sec

3. The attempt at a solution

I've fiddled with the equation and wasn't getting anywhere near a y = mx + b solution, so I decided to look up first-order exponential decay on the web, which landed me this equation: ln(A) = -λ*t + ln(Ao). So I made a separate table using the natural log of the activity data and graphed it, which comes out linear with R² = 1; the best fit is f(x) = -0.09x + 9.2. I used the slope to determine T_{1/2}, the half-life of the sample. I calculated a value of 7.7 days - the accepted value is barely over 8 days. I believe that the equation I found for first-order exponential decay is in fact correct for this case, but I have no idea how to go from A = Ao^(-λt) to ln(A) = -λ*t + ln(Ao) algebraically. In essence my question is: is ln(A) = -λ*t + ln(Ao) correct for this particular case of atomic decay, and how do I get to this equation from the equation given? Thanks a lot!

2. Jan 23, 2009

### LowlyPion

I think the equations are equivalent. Recall that B^x = (e^{ln B})^x = e^{x ln B}. When you take the ln of both sides of A = Ao^(-λt), keeping in mind that B above is your Ao, then you get ln A = -λ*t + ln Ao

3. Jan 23, 2009

### nmacholl

Thanks a lot but I have a little problem. Okay so if A = Ao^x with x = -λ*t, then A = e^{x*ln(Ao)}, so ln(A) = x*ln(Ao), i.e. ln(A) = -λ*t*ln(Ao). Why is my version -λ*t*ln(Ao) as opposed to the correct version -λ*t + ln(Ao)? Why is it "+ ln(Ao)"?

4. Jan 23, 2009

### LowlyPion

This step is incorrect. Exponents add, not multiply.

5. Jan 23, 2009

### nmacholl

I'm a little confused, which step is incorrect? A power raised to a power is multiplication, which is how you get A = e^{x*ln(Ao)}, no? Since ln() is the inverse of e it shouldn't change the exponent of e, correct? ln(A) = x*ln(Ao)

6. Jan 23, 2009

### LowlyPion

I think you should familiarize yourself with ln arithmetic.

e^{ln(x)} = x
ln(e^x) = x
ln(e^{a*b}) = a + b

7. Jan 23, 2009

### nmacholl

If I substitute numbers in for a and b using my calculator I don't get a true statement.

a = 2, b = 5
ln(e^(a*b)) = a + b
ln(e^(10)) = 7
10 = 7

What am I doing wrong? :(

8. Jan 23, 2009

### LowlyPion

Don't despair. Maybe I'm not remembering my ln math. Sorry if I confused you at all. I'll have to look it up.

9. Jan 23, 2009

### LowlyPion

OK. I don't know what I was thinking, because of course it's equal to the product. I think however that your original equation is A = Ao*e^{-λt}. Taking the ln of both sides: ln(A) = ln(Ao) - λt. That makes sense. My fault. I should have recognized it immediately.

10. Jan 23, 2009

### nmacholl

That certainly makes more sense than before! The worksheet must have a typo in the equation; I double-checked the handout and the -λt is the exponent of Ao, and Euler's number is nowhere to be found. Thanks a lot.
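For reference (not part of the original thread), fitting the logged data without early rounding reproduces the accepted half-life; a short NumPy sketch:

```python
import numpy as np

# Sketch (mine, not from the thread): fit ln(activity) vs time and
# recover the half-life from the slope, keeping full precision.
t = np.array([5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
A = np.array([6523, 4191, 2736, 1722, 1114, 722, 507, 315], dtype=float)
slope, intercept = np.polyfit(t, np.log(A), 1)
half_life = np.log(2) / -slope
print(f"lambda = {-slope:.4f} per day, T_1/2 = {half_life:.2f} days")
# ~8.0 days; rounding the slope to -0.09 is what gave 7.7 days above.
```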
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8037608861923218, "perplexity": 1488.5781348194396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647883.57/warc/CC-MAIN-20180322112241-20180322132241-00299.warc.gz"}
https://www.physicsforums.com/threads/pushing-a-box-forces.201213/
# Pushing a box (forces) 1. Nov 28, 2007 ### jesuslovesu 1. The problem statement, all variables and given/known data You exert a constant horizontal force on a large box. As a result the box moves across a horizontal floor at a constant speed v0. If you double the constant horizontal force on the box, how would the box then move? 2. Relevant equations F = ma = m dv/dt 3. The attempt at a solution Well, this question is a tad confusing to me, if the box is moving at a constant speed v0 then I have to assume that I would just push the box for an instant & then get it to move at a constant v0. If I were to double the force, then I would think that the speed would then be 2v0, however this is not the case... (dv/dt is proportional to F) Can anyone explain how to determine the motion? 2. Nov 28, 2007 ### Staff: Mentor You are misinterpreting the problem. The only reason the box continues moving at constant speed is because you keep pushing with the constant horizontal force. Hint: What's the net force acting on the box? What other force must be acting on the box? 3. Nov 28, 2007 ### jesuslovesu Alright so if I draw a free body diagram and add in friction I can see ma = F - Ff Since the velocity is constant a = 0 so F = Ff So if I double F to 2F, then it's accelerating? 2F - Ff = ma F = ma a = F/m ? Last edited: Nov 28, 2007 4. Nov 28, 2007 ### Staff: Mentor Very good!
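A sketch of the bookkeeping above in Python (the mass and force values are hypothetical, since the problem gives none):

m, F = 10.0, 20.0       # hypothetical mass (kg) and applied force (N)
Ff = F                  # constant speed => net force zero => friction balances F
a = (2 * F - Ff) / m    # after doubling the push (kinetic friction unchanged)
print(a, F / m)         # both 2.0 m/s^2: the box accelerates uniformly at F/m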
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9623669981956482, "perplexity": 921.9668733395832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805242.68/warc/CC-MAIN-20171119004302-20171119024302-00139.warc.gz"}
http://math.stackexchange.com/questions/141477/algebra-question-involving-square
# Algebra question involving square? Find the area of a square in square inches whose side is 1.3 ft? I keep getting this one wrong. I would think that 1.3 feet is 16 inches and then multiply that by sixteen, but that is not the correct answer. - $1.3$ feet is not $16$ inches. – Chris Eagle May 5 '12 at 19:22 $1.3$ feet is $1.3\cdot 12=15.6$ inches. – Brian M. Scott May 5 '12 at 19:22 One thing that can help when doing unit conversions is properly setting up your fractions. In this case, you need $$\frac{1.3\text{ ft}}{1}\times \frac{12\text{ in}}{1\text{ ft}} = 15.6\text{ in}$$ Now it's clear from the fractions that you have to multiply 1.3 by 12 to convert feet to inches. The area of a square is $s^2$ where $s$ is the length of a side, so the area is $(15.6)^2=243.36$ square inches (don't forget the units).
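The same conversion, done programmatically (a trivial Python check):

side_ft = 1.3
side_in = side_ft * 12   # 12 inches per foot -> 15.6 in
area = side_in ** 2      # area of a square is s^2
print(side_in, area)     # 15.6, 243.36 square inches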
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9420509338378906, "perplexity": 413.7539189173355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275328.0/warc/CC-MAIN-20160524002115-00145-ip-10-185-217-139.ec2.internal.warc.gz"}
https://campus.datacamp.com/courses/foundations-of-inference/confidence-intervals-4?ex=2
# What is the parameter? In November 2016, the voters elected a new president of the United States. Prior to the election, thousands of polls were taken to gauge the popularity of each of the candidates. Leaving aside the idea that popular opinion changes over time, a poll can be thought of as a sample of individuals measured so as to estimate the proportion of all voters who will vote for each candidate (i.e. the population parameter). Consider an election in your home town that will take place in a week's time. You poll a randomly selected subset of the voters in your town and ask them if they plan to vote for Candidate X or Candidate Y. In this chapter, we will focus on sampling variability—the variability in sample proportions due to polling different randomly selected individuals from the population. Before investigating the sampling variability, what is the population parameter of interest?
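A quick simulation makes the idea concrete (a Python sketch assuming numpy; the true proportion and poll size are made-up numbers, not from the exercise):

import numpy as np

rng = np.random.default_rng(42)
p_true = 0.52          # hypothetical true share for Candidate X (the parameter)
n = 500                # hypothetical poll size
p_hats = rng.binomial(n, p_true, size=10000) / n   # many repeated polls
print(p_hats.mean(), p_hats.std())  # centered near p_true; spread is sampling variability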
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8272785544395447, "perplexity": 652.030048195925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251796127.92/warc/CC-MAIN-20200129102701-20200129132701-00191.warc.gz"}
https://gandhiviswanathan.wordpress.com/2014/10/07/domain-coloring-for-visualizing-complex-functions/
# Domain coloring for visualizing complex functions The above figure shows a domain color plot for the function ${z\mapsto {1/(1 - i z )} -(1 + i z)}$ . I have been relearning complex analysis and decided to have some fun plotting complex functions. It is easy to visualize graphically a real function ${f:\Bbb R\rightarrow \Bbb R}$ of a real variable. To visualize ${f(x)}$, we can plot ${x}$ on the horizontal axis and ${y=f(x)}$ on the vertical axis, as we learn in high school mathematics. It is more difficult, however, to visualize the function ${g:\Bbb C\rightarrow \Bbb C}$, because ${g}$  is a  complex function of a complex variable. A complex number ${z}$ can be expressed in terms of real numbers $x, ~y$ as ${z=x+ i y}$. To visualize the complex domain of a function, one thus requires 2 real dimensions. The complex function ${g(z)}$ must then be plotted on the remaining dimensions. If we lived in 4 dimensional space, it would be easy to plot a complex function of a complex variable. The problem is that we live in three dimensions, not four. Domain coloring is a method to overcome this limitation. The basic idea is to use colors and shades etc. as extra dimensions for visualizing functions. I first heard about domain coloring when I came across Hans Lundmark’s complex analysis pages. For readers unfamiliar with domain coloring, I recommend reading up on it first. There is also a wikipedia article on the subject. ### Introduction The pictures below were generated using Wolfram’s Mathematica software (many thanks to my department colleague Professor Marcio Assolin). I adapted the code from the discussion on stackexchange here, and also here. The first figure below shows the identity function ${f(z)=z}$: The horizontal axis is the real axis and the vertical the imaginary axis of the domain ${z}$. The 29×29 grid is not deformed in this case because the identity function ${f(z)=z}$ does “nothing.” But I decided to include this example to illustrate how the colors and shades code information. First of all, notice that as we go around the origin ${z=0}$, the colors go from red on the positive real line, to green on the positive imaginary axis, to cyan on the negative real axis, to purple on the negative imaginary axis and then back to red again. The color represents the argument of the function. In this case, since we can write ${z=r e^{i\theta}}$ the argument ${\theta}$ wraps back around every time we turn ${2\pi}$ radians around the origin in the complex plane. Note that in addition to color, there is shading. Notice that at ${|z|=1/2}$, ${|z|=1}$ and ${|z|=2}$ there are discontinuities in shading. As we increase the absolute value of ${z}$, the shading gets darker and discontinuously gets brighter and this process repeats itself. Indeed, the shading has been used to plot the absolute value of the function. Every time the absolute value doubles, the shading goes through one cycle, becoming discontinuously brighter. ### Monomials Let us now look at more complicated functions. The pictures below show the functions ${z\mapsto z^2}$ and ${z\mapsto z^3}$. Now, things look more interesting! The first thing to note is that the 29×29 grid is distorted. The grid shown is the inverse image of the grid of the identity map. So for the function defined by ${f(z)=z^2}$, the procedure is as follows. Take the points of the grid and put them into some set ${S}$. Then calculate the inverse image ${f^{-1}(S)}$, i.e. the set of points that map to ${S}$. This inverse image is the grid shown above. 
Notice how the distorted grid seems to preserve right angles, except at the origin. Indeed, holomorphic functions are typically also conformal (i.e., angle preserving) in many instances, and we will return to this topic further below. At the origin, the monomial functions above are clearly not conformal. Take a look at the colors. Instead of cycling through the rainbow colors going once around the origin, the colors cycle 2 and 3 times, respectively, for ${z^2}$ and ${z^3}$. This is easy to understand if we write ${z=re^{i \theta}}$ (for real ${r}$ and ${\theta}$), so that ${z^2=r^2 e^{2i\theta}}$ and ${z^3=r^3 e^{3i\theta}}$. So ${z^2}$ and ${z^3}$ circle the origin 2 and 3 times every time ${z}$ goes around the origin once. Finally, notice that the shading cycles through more quickly. Indeed, ${|z^2|}$ and ${|z^3|}$ double more quickly than ${|z|}$.

### Poles

The monomial functions above have zeroes, but no poles. What do poles look like? The image below shows the function ${z\mapsto 1/z}$. Notice how the colors cycle around "backwards." Indeed, the arguments of ${z}$ and ${z^{-1}}$ are negatives of each other. Notice also how the shading now has discontinuities in the "opposite" sense (compare with ${z\mapsto z}$ in the first image above). It is also worthwhile to look at the grid. The original grid of lines has transformed into a patchwork of circles. To show this more clearly, I modified the image to show only a few grid lines, corresponding to real and imaginary lines at ${\pm 1,2,3,4,5}$: In the image above the grid lines are now clearly seen to have mapped into circles. In higher order poles, the colors cycle around (backwards) a number of times equal to the order of the pole. Here are poles of order 2 and 3, shown with only some grid lines for greater clarity: In addition to poles, there are other kinds of singularities. Removable singularities are not too interesting, because basically a "point" is missing. If we "manually add" the point, the singularity is "removed" — hence the name. In addition to removable singularities and poles, there are also what are known as essential singularities. Essential singularities can be thought of, loosely speaking, as poles of infinite order. Further below, we will take a look at essential singularities. Now consider a function with a zero as well as a pole: The function shown above is ${z\mapsto (z-1)+ 1/(z+1)}$, which has a zero at ${z=0}$ and a simple pole at ${z=-1}$.

### Non-holomorphic functions

Having looked at examples of holomorphic and meromorphic functions, let us look at some non-holomorphic functions. The figure below shows ${|z|}$, the absolute value of ${z}$. The color is red because ${|z|}$ is always non-negative real. The grid is gone and we have circles instead: we do not have conformality. The Cauchy-Riemann equations are impossible to satisfy because ${|z|}$ is always real, the imaginary part being identically zero everywhere. The complex conjugate function ${z\mapsto \bar z}$ also is not analytic. Here is ${z\mapsto \bar z}$: Notice that it looks just like the identity map, but reflected along the real axis. The imaginary axis is "backwards". Still, angles are preserved, so why is this function not holomorphic? The answer is that the angle orientations are reversed, i.e. the function is antiholomorphic rather than holomorphic. The colors cycle around "backwards" in this case because the complex conjugate of $r e^{i \theta}$ is $r e^{-i \theta}$. Conformal maps preserve oriented angles, rather than just angles.
Indeed, the complex conjugate function is neither holomorphic nor conformal.

### The exponential function and its Taylor polynomials

Let us now look at a transcendental holomorphic function: the exponential function ${z\mapsto \exp(z)}$. The images below show the exponential function at two scales. Notice that the colors now cycle through going up and down vertically. The reason for this is as follows. If we write ${z=x + i y}$, then $\exp(z) = e^{x+iy} = e^x e^{iy}$. So ${y}$, which is the imaginary part of ${z}$, determines the argument, hence the color. The zoomed out version makes it clear that the argument is periodic with period ${2 \pi}$ in the imaginary direction. While on the topic of the exponential function, let us take a look at Taylor polynomial expansions. The figure below shows the Taylor polynomial of degree 5. Notice the 5 zeroes, which lie on an arc like the letter "C" slightly to the left of the origin. The exponential function does not have zeroes, of course. We know that if we take the infinite degree Taylor polynomial, i.e. the infinite Taylor series expansion, then we recover the exponential function. We can already see that for positive real part and small imaginary part of ${z}$, the Taylor polynomial above is starting to behave qualitatively like the exponential function.

### Essential singularities

Having seen the exponential function, we can now look at essential singularities. Observe that the Laurent expansion of ${\exp(1/z)}$ around the origin in ${z}$ has an infinite number of terms of negative power in ${z}$. The singularity at the origin is thus stronger than a pole of any finite order. The figure below shows ${z\mapsto \exp(1/z)}$, shown at three different scales. The third figure is a zoom of the second, which is a zoom of the first. The software is apparently having some trouble near the origin, in the last figure! The reason for this is the Great Picard's theorem, which says, loosely speaking, that an analytic function near an essential singularity takes all possible complex values, with at most 1 exception. In the case of ${z\mapsto \exp(1/z)}$, the function cannot become zero, which is the exceptional value. As we approach the origin, the argument (i.e. color) changes, cycles around, etc., increasingly quickly. Let us now look at some trigonometric functions: ${\sin(z)}$: ${\tan(z)}$ We can clearly see the zeroes in ${\sin(z)}$ and the poles and zeroes of ${\tan(z)}$. Moreover, it is clear that these are periodic functions. Compare the above trigonometric functions with their inverses: ${\arcsin(z)}$ ${\arctan(z)}$ Notice that on the real line, for ${|z|>1}$ there is a discontinuity in color for arcsine. Similarly, for arctan there is a color discontinuity on the imaginary axis. To understand this jump in color, recall that ${|\sin(x)|\leq 1}$ for real ${x}$, which means that the inverse function ${\sin^{-1}(x)=\arcsin(x)}$ is not defined for real ${x}$ outside the interval ${-1\leq x\leq 1}$. Recall also that ${\sin(x)=\sin(x+2\pi)}$, so that the inverse function ${\sin^{-1}}$ must be multivalued. What is being shown above is the principal branch. It helps to switch over to the logarithmic form. Recall that $\displaystyle \sin(z)= \frac{e^{iz} - e^{-iz}}{2i}$. If we write ${z=\arcsin(w)}$, then ${\sin(z)=w}$. Substituting, we get $\displaystyle 2iw = e^{iz} - e^{-iz}$.
To simplify the algebra, let ${e^{iz}=Z}$, so that $\displaystyle 2iw = Z - Z^{-1}$, which gives us a quadratic equation: $\displaystyle Z^2 - 2iwZ - 1 = 0$, whose roots are $\displaystyle Z = iw \pm \sqrt{1-w^2}$. So we finally get $\displaystyle z=\arcsin(w)=-i \log Z= -i \log\left( iw \pm \sqrt{1-w^2} \right)$. So the branch cut in the ${\arcsin}$ function is due to the 2 possible values of the square root. There is a branch point at ${w=\pm 1}$ and there is actually another branch point at infinity.

### Branch points and cuts

Let us look at branch points more closely. As we know, the square root is multivalued, and the figure below shows the two branches, with the principal branch at the bottom. Note how in each branch alone the colors do not cycle all the way through the rainbow colors. The missing colors of one branch are on the other branch. To see both branches, one would need to visualize the Riemann surface for the square root, a topic beyond what I wish to cover here. The figure below shows the 3 branches of the cube root function, with the principal branch at the top: There is more than 1 type of branch point. Algebraic branch points are those that arise from taking square roots, cubic roots, and ${n}$-th roots (for positive integer ${n}$). In general there will be ${n}$ well defined branches. What happens if one takes ${n}$ to be a positive irrational number? Here is a hint: $z^{\alpha} = e^{\alpha \ln z}$. If we choose ${\alpha}$ rational, say ${\alpha=p/q}$, then putting ${z=r e^{i\theta + 2n\pi i}}$ we get ${\ln z= \ln r + i\theta + 2n\pi i}$, so that $\displaystyle z^\alpha = r^{p/q} e^{i \alpha \theta} e^{2\pi i (np/q)}$. But ${np/q}$ is rational, and so there can be at most ${q}$ branches. But if ${\alpha}$ is irrational, this argument does not work. Instead of ${np/q}$ we get ${n\alpha}$, which can never equal an integer. So the branches never cycle through and the number of branches is infinite. Indeed, the logarithm above leads to an infinite branching, as we will see below. There is a more complicated type of branch point in the function ${z\mapsto \exp(1/z^{1/n})}$ for integer ${n}$. If we loop around the origin ${n}$ times, the function returns to the original starting point (i.e. there is finite monodromy). However, there is an essential singularity at the origin. In other words, there is the unhappy coincidence of the algebraic branch point of ${1/z^{1/n}}$ sitting exactly on top of an essential singularity. Such branch points are known as transcendental branch points. The figure above shows ${z\mapsto \exp(1/z^{1/2})}$. The essential singularity coincides with the branch point. Finally, as we saw above, there are branch points where the number of branches is infinite. Consider the complex logarithm. If we again write ${z=r e^{i \theta}}$ then ${\log z= \log r + i \theta}$. Since ${\theta}$ and ${\theta \pm 2\pi}$ give the same value of ${z}$, the logarithm is multivalued. The branch cut is usually taken at ${\theta=\pm \pi}$. Here is the logarithm: The zero at ${z=1}$ is clearly visible, as is the branch cut along the negative real axis. The complex logarithm, like other multivalued functions, could instead be visualized as Riemann surfaces. Here is an example of the Riemann surface of the complex logarithm.
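The logarithmic form of arcsin derived above is easy to sanity-check numerically. In Python, cmath uses the same principal branches, so the two sides should agree to rounding error:

import cmath

w = 0.3 + 0.4j
lhs = cmath.asin(w)
rhs = -1j * cmath.log(1j * w + cmath.sqrt(1 - w * w))  # principal square root branch
print(abs(lhs - rhs))   # ~1e-16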
### Euler’s reflection formula

Let us next consider another topic related to Weierstrass’s beautiful factorization theorem, which states that every entire function can be expressed as an infinite product. Among the best known infinite products are the one used by Euler to solve the Basel problem, and the product formula for Euler’s gamma function. Indeed, the gamma function is, in a sense, one half of the sine function (or cosecant). Put differently, the sine function is the product of two gamma functions: $\Gamma(z)\Gamma(1-z) = \dfrac{\pi}{\sin(\pi z)}$. Proofs can be found in textbooks. Here I wish to focus on the zeroes and poles. Take a look at these figures: The figures above show ${1/\Gamma(z)}$ and ${1/ [\Gamma(z)\Gamma(1-z)]}$. Note the zeroes at the non-positive integers in the first image, corresponding to the poles of the gamma function. If we multiply two gamma functions, so that there are zeroes at all integers, we basically get ${\sin(\pi z)}$ up to a constant! Indeed, the last figure above is identical, up to scale, to that of the sine function seen earlier.

### When are holomorphic functions conformal?

Finally, let us take a closer look at conformality, i.e. the angle-preserving property found in many holomorphic functions. The examples above of monomials of degree greater than 1 and poles show clearly that conformality can break down at zeroes and poles. Consider again the function ${z \mapsto z^2}$ below: Conformality indeed breaks down at the zero at the origin, as expected. But if a function is holomorphic with no zeroes in a region, is it necessarily conformal in that region? The answer is NO, as seen from the following counter-example: ${z \mapsto z^2+1}$ There is no zero at the origin, yet conformality breaks down! To understand why, recall that to preserve angles, the map must locally be a scaled rotation (up to a translation). So the Jacobian determinant of the conformal map must be some positive constant. It is easy to show that the Cauchy-Riemann equations lead to a scaled rotation, provided the derivative is not zero. If the derivative is zero, however, the function need no longer be a scaled rotation, so angles need not be preserved. In the example above, the function is holomorphic at the origin, but the derivative is zero at the origin; in other words, the origin is a critical point. Moreover, as ${z}$ moves around the origin once, the function ${z^2+1}$ moves around 1 twice, hence the breakdown in conformality. Conversely, if a holomorphic function has a critical point at the origin, then its Taylor series does not contain a term of degree 1. But all higher order monomials ${z^n}$ break the conformal property at the origin, so the function cannot be conformal at the critical point. By translating arbitrary functions such that a critical point is at the origin, we can understand the following well known result: A holomorphic function is conformal if and only if there are no critical points in the region of interest.

NOTES: [1]. Below is the Mathematica code I used for the plot of the identity map. The code has been adapted from the discussion on stackexchange here and also here.
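Euler's reflection formula can also be verified at a random complex point (a Python sketch assuming scipy, whose gamma function accepts complex arguments):

import numpy as np
from scipy.special import gamma

z = 0.3 + 0.2j
print(gamma(z) * gamma(1 - z))     # both printed values agree:
print(np.pi / np.sin(np.pi * z))   # pi / sin(pi z)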
f[z_] := z;

paint[z_] := Module[{x = Re[z], y = Im[z], color, shade},
  color = Hue[Rescale[ArcTan[-x, -y], {-Pi, Pi}]];  (* argument -> hue *)
  shade = Mod[Log[2, Abs[x + I y]], 1];             (* log2 of |z| -> brightness cycle *)
  Darker[color, shade/3]]  (* the combining step was lost in extraction; Darker is one plausible reconstruction *)

ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3},
 ColorFunctionScaling -> False,
 ColorFunction -> Function[{x, y}, paint[f[x + y I]]],
 Frame -> True, MaxRecursion -> 1, PlotPoints -> 300,
 Axes -> False, Mesh -> 29,
 MeshFunctions -> {(Re@f[#1 + I #2] &), (Im@f[#1 + I #2] &)},
 MeshStyle -> Opacity[0.3], PlotRangePadding -> 0, ImageSize -> 400]

## 4 thoughts on “Domain coloring for visualizing complex functions”

1. Dear Gandhi, this is quite fascinating. It gives me some insight into a study that I am now doing. I shall email you with questions for guidance.
2. A very useful toolkit for those who are lecturing complex functions or even for researchers in Field Theory, Astrophysics, and many other branches of Physics.
3. Pingback: The Aperiodical
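For readers without Mathematica, here is a rough Python equivalent of the same recipe (a sketch assuming numpy and matplotlib; it reproduces the hue-from-argument and shade-from-log2-modulus scheme, but not the mesh lines):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

def domain_color(f, extent=3.0, n=800):
    x = np.linspace(-extent, extent, n)
    X, Y = np.meshgrid(x, x)
    W = f(X + 1j * Y)
    hue = (np.angle(W) / (2 * np.pi)) % 1.0           # argument -> color
    shade = np.mod(np.log2(np.abs(W) + 1e-300), 1.0)  # log2 |f| -> brightness cycle
    rgb = hsv_to_rgb(np.dstack([hue, np.ones_like(hue), 0.6 + 0.4 * shade]))
    # zeros and poles may leave isolated invalid pixels; fine for a sketch
    plt.imshow(rgb, origin="lower", extent=(-extent, extent, -extent, extent))
    plt.show()

# the function with a zero at 0 and a simple pole at -1, as in the post
domain_color(lambda z: (z - 1) + 1 / (z + 1))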
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 129, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9235919117927551, "perplexity": 271.59099888170175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499826.71/warc/CC-MAIN-20230130165437-20230130195437-00600.warc.gz"}
https://bigideas.network/the-fabric-of-reality/the-planck-length/the-planck-length/additional-notes
• It is just an estimation of the order of magnitude of the length scale at which quantum gravity effects are expected to be important, and quantum gravity is something that we do not have a theory for.
• It's a misconception to think that it's the fundamental 'pixel' size of the universe or anything like that.
• Physicists say that our current laws of physics break down around this length scale because we don't yet have a single accepted theory of quantum gravity. All our current theories assume that spacetime is continuous (that there is not a minimum length).
• Anyways, a gravitational point singularity is not special in regards to having zero volume. An electron is, as far as we know, also a point particle (in the sense that it has no interior structure), and all experiments to this date have confirmed this with great accuracy. It's of course possible that electrons (or singularities) have a nonzero volume, but to date no experimental data suggests that this may be the case.
• Alex Klotz, A Hand-Wavy Discussion of the Planck Length
• The Planck length is an extremely small distance constructed from physical constants. There are a lot of misconceptions that generally overstate its physical significance, for example, stating that it's the inherent pixel size of the universe. The Planck length does have physical significance, and I'll talk about what it is, and what it isn't.
• What is the Planck Length?
• Planck units are defined based on physical constants rather than human-scale phenomena. So while the second is originally one-86,400th of a day, the Planck time is based on the speed of light, Newton's gravitational constant, and Planck's (reduced) constant, which is twice the angular momentum of an electron. Hypothetically, if we met a group of aliens and wanted to discuss weights and measures, we could use Planck units and they'd know what we are talking about. There is a push towards making our human units based on physical constants, like defining the meter in terms of the speed of light, but at this time the kilogram is still the mass of a brick in France.
• "Natural" units still have a bit of choice regarding their definitions. Convention has chosen Planck's reduced constant over Planck's regular constant (they differ by a factor of 2π), and chosen to use the Coulomb constant instead of the dielectric constant or the fundamental charge for electromagnetic units. The latter provides a great example showing that Planck units are not inherently fundamental quantities: the Planck charge is roughly 11.7 times the actual fundamental charge of the universe.
• So what is the Planck length? It is defined as:
• $\ell_P = \sqrt{\hbar G / c^3}$
• This is how far light can go in a unit of Planck time, because the speed of light is the "Planck speed." In SI units, this is on the order of 10^-35 meters. By comparison, one of the smallest lengths that has been "measured", the upper bound on the electron's radius (if an electron has a radius, what can we certainly say it is smaller than?), is 10^-22 meters, about ten trillion Planck lengths. It is really small. And so far, it is just a unit. The meter is a useful unit for measuring length, but there's nothing inherently special about it. The Planck length is not useful for measuring any length, but is there anything special about it?
• How is it relevant to physics?
• Basically, the Planck length is the length-scale at which quantum gravity becomes relevant.
It is roughly the distance things have to be before you start to consider "hmm I wonder if there's a chance this whole system randomly forms a black hole." I did not really understand this until I convinced myself with the following derivation, which was the main inspiration for this article.
• Consider the energy (E) between two charges (let's say they're electrons) at some distance r. Doesn't really matter if they're attracting or repelling right now.
• $E = \dfrac{e^2}{4\pi\epsilon r}$
• Just to clarify the symbols, e is the fundamental charge, ϵ is the dielectric constant. Now let's change the units around, using the definition of the fine structure constant α, which is roughly 1/137.
• $\alpha = \dfrac{e^2}{4\pi\epsilon\hbar c}$
• This basically lets us swap out the electromagnetic constants e and ϵ with the more "general" constants ℏ and c. The Coulomb energy now looks like this:
• $E = \dfrac{\alpha\hbar c}{r}$
• This is where the hand-waving will begin. If a given volume at rest has a certain amount of energy within, it will have a rest mass m = E/c². From Newtonian gravity, we can calculate the gravitational energy associated with our charges.
• $E_g = \dfrac{G m^2}{r} = \dfrac{G E^2}{c^4 r} = \dfrac{G \alpha^2 \hbar^2}{c^2 r^3}$
• We are neglecting the rest masses of the charges, but those are much smaller than the interaction energy. The question now is: at what distance is the electrostatic energy equal to the gravitational energy it causes? So we solve for r:
• $E_g = E \;\Rightarrow\; r = \sqrt{\dfrac{\alpha \hbar G}{c^3}} = \sqrt{\alpha}\,\ell_P$
• and we find that the radius at which the gravitation of the interaction energy is as important as the interaction energy itself is roughly the Planck length (divided by 11.7, the square root of 137, but we'll hand-wave that away for now). This is where it is important: if things are interacting at distances close to the Planck length, you will have to take quantum gravity into account.
• One of the only physical systems where quantum gravity is relevant is the black hole. When calculating the entropy of a black hole, Hawking and Bekenstein found that it was equal to the number of Planck areas (Planck lengths squared) that can fit in the cross-sectional area of a Schwarzschild black hole (or a quarter of its total surface area), in units of the Boltzmann constant. The Hawking temperature of a black hole is one of the only equations where ℏ, c, and G all appear, making it a quantum relativistic gravitational equation. However, the mass of a black hole can be continuous, so the number of Planck areas in its surface need not be an integer.
• How is it not relevant to physics?
• There is a misconception that the universe is fundamentally divided into Planck-sized pixels, that nothing can be smaller than the Planck length, that things move through space by progressing one Planck length every Planck time. Judging by the ultimate source, a cursory search of reddit questions, the misconception is fairly common.
• There is nothing in established physics that says this is the case, nothing in general relativity or quantum mechanics pointing to it. I have an idea as to where the misconception might arise, that I can't really back up but I will state anyway. I think that when people learn that the energy states of electrons in an atom are quantized, and that Planck's constant is involved, a leap is made towards the pixel fallacy. I remember in my early teens reading about the Planck time in National Geographic, and hearing about Planck's constant in high school physics or chemistry, and thinking they were the same.
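Plugging numbers into the hand-wavy result (a Python check using standard CODATA-style constants):

import math

hbar = 1.054571817e-34   # J*s
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s
alpha = 1 / 137.035999

l_planck = math.sqrt(hbar * G / c**3)
r_cross = math.sqrt(alpha) * l_planck      # where Coulomb and gravitational energies match
print(l_planck)            # ~1.6e-35 m
print(l_planck / r_cross)  # ~11.7, the square root of 137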
• As I mentioned earlier, just because units are "natural" it doesn't mean they are "fundamental," due to the choice of constants used to define the units. The simplest reason that Planck-pixels don't make up the universe is special relativity and the idea that all inertial reference frames are equally valid. If there is a rest frame in which the matrix of these Planck-pixels is isotropic, in other frames they would be length contracted in one direction, and moving diagonally with respect to this matrix might impart angle-dependence on how you experience the universe. If an electromagnetic wave with the wavelength of one Planck length were propagating through space, its wavelength could be made even smaller by transforming to a suitably boosted reference frame, so the idea of rest-frame equivalence and a minimal length are inconsistent with one another.
• To add to people's confusion, a lot of the Wikipedia article on the Planck length was corrupted by one person trying to promote his papers by posting them on Wikipedia, making nonsensical claims with "proof" that a Planck-wavelength photon will collapse into a black hole (again, Lorentz symmetry explains why this doesn't make sense). There is a surreal and amusing dialogue trying to get to the bottom of this, that you can still read in the discussion section of the Planck length Wikipedia page.
• There was an analysis recently of gamma ray arrival times from a burst in a distant galaxy. The author considered what effect a discretization of space might have on the travel speed of photons of differing energy (it would no longer necessarily be constant), and found that to explain the observations the length-scale of the discretization must be at least 525 times smaller than the Planck length. I'm not too sure how seriously people in the field take this paper.
• How might it be relevant to physics?
• Lorentz symmetry explains why Planck-pixels don't really make sense within current physics; however, current physics is incomplete, especially with regards to quantum gravity. Going beyond established physics, is there more of a role for the Planck length? I'm a bit out of my element talking about this, so I'll be brief.
• The closest beyond-standard theory to the Planck-pixel idea is Loop Quantum Gravity and the concept of quantum foam. At least that is what I thought, before John Baez corrected me. One of the features of Loop Quantum Gravity is that for something to have a surface area or a volume, it must have at least a certain quantum value of surface area or volume, but will not necessarily have integer values of that quantum, and the quantum is not exactly the square or cube of the Planck length, although it is of that order.
• Another potential model of quantum gravity is string theory, based on the dynamics of really small strings. In order to have these dynamics explain gravity, the strings are of order the Planck length, but not specifically the Planck length. In fact, the first iteration of string theory was theorized to explain nuclear physics rather than gravity, and the length-scale of the strings was much much larger.
• So to summarize, the Planck length is an important order of magnitude when quantum gravity is being discussed, but it is not the fundamental pixel size of the universe. Thanks to John Baez and Nima Lashkari for answering some questions about quantum gravity.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.886539101600647, "perplexity": 349.9318249509047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585450.39/warc/CC-MAIN-20211022021705-20211022051705-00457.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/124-balloon-carrying-22-basket-descending-constant-downward-velocity-121-10-stone-thrown-b-q959245
A 124-kg balloon carrying a 22-kg basket is descending with a constant downward velocity of 12.1 m/s. A 1.0-kg stone is thrown from the basket with an initial velocity of 12.5 m/s perpendicular to the path of the descending balloon, as measured relative to a person at rest in the basket. The person in the basket sees the stone hit the ground 7.20 s after being thrown. Assume that the balloon continues its downward descent with the same constant speed of 12.1 m/s. 1) How high was the balloon when the rock was thrown out? 2) How high is the balloon when the rock hits the ground? 3) At the instant the rock hits the ground, how far is it from the basket? 4) Just before the rock hits the ground, find its horizontal and vertical velocity components as measured by an observer at rest in the basket. 5) Just before the rock hits the ground, find its horizontal and vertical velocity components as measured by an observer at rest on the ground.
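A sketch of the kinematics for parts 1-3 in Python (assuming g = 9.8 m/s² and the SI units restored above; this is my reading of the setup, not necessarily the grader's intended solution):

g, t = 9.8, 7.20         # m/s^2, s
v_balloon = 12.1         # m/s downward
v_throw = 12.5           # m/s horizontal, relative to the basket

drop = v_balloon * t + 0.5 * g * t**2    # stone's fall in the ground frame
print(drop)                              # (1) initial height ~ 341 m
print(drop - v_balloon * t)              # (2) balloon height at impact ~ 254 m
dx, dy = v_throw * t, 0.5 * g * t**2     # displacement relative to the basket
print((dx**2 + dy**2) ** 0.5)            # (3) distance from basket ~ 270 m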
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9084219932556152, "perplexity": 341.87831313700434}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446500.34/warc/CC-MAIN-20151124205406-00001-ip-10-71-132-137.ec2.internal.warc.gz"}
http://mathhelpforum.com/pre-calculus/65017-question-about-geometric-sequence.html
# Math Help - Question about geometric sequence

1. ## Question about geometric sequence

I want to know how I can solve this question: If $y_n$ is a geometric sequence such that $y_1+y_2=0$, then the correct statement among the following is: A) $y_5+y_7=0$ B) $y_6+y_8=0$ C) $2y_2+3y_1=0$ D) $y_2+2y_3+y_4=0$

2. Hello, Diligent_Learner! $y_n$ is a geometric sequence such that $y_1+y_2=0$; then which of the following is correct? A) $y_5+y_7=0$ B) $y_6+y_8=0$ C) $2y_2+3y_1=0$ D) $y_2+2y_3+y_4=0$ A geometric sequence has the form $a,\ ar,\ ar^2,\ ar^3,\ \ldots$ where $a \neq 0$. Since $y_1+y_2=0$, we have $a + ar = 0 \Rightarrow a(1+r)=0$. Since $a \neq 0$, then $1+r=0 \Rightarrow r=-1$. Hence, the sequence is $a,\ -a,\ a,\ -a,\ a,\ \ldots$ That is: $y_n = a$ for odd $n$ and $y_n = -a$ for even $n$. A) $y_5+y_7 = a + a = 2a$ . . . false B) $y_6+y_8 = (-a)+(-a) = -2a$ . . . false C) $2y_2+3y_1 = 2(-a)+3(a) = a$ . . . false D) $y_2+2y_3+y_4 = (-a)+2(a)+(-a) = 0$ . . . True!

3. Originally Posted by Soroban (the full solution quoted above) Thank you very much .. really I understand the question
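A two-line numeric confirmation of Soroban's argument in Python (any nonzero a works; a = 1 here):

a, r = 1.0, -1.0                               # y1 + y2 = 0 forces r = -1
y = [a * r ** (n - 1) for n in range(1, 9)]    # y_1 ... y_8
y1, y2, y3, y4, y5, y6, y7, y8 = y
print(y5 + y7, y6 + y8, 2*y2 + 3*y1, y2 + 2*y3 + y4)   # 2a, -2a, a, 0 -> only D is zero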
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9980605840682983, "perplexity": 2418.5501346433307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900544.33/warc/CC-MAIN-20141030025820-00063-ip-10-16-133-185.ec2.internal.warc.gz"}
https://theewaterskloof.co.za/upuswe5/579e2a-application-of-differential-equation-in-physics
With the invention of calculus by Leibniz and Newton, differential equations became the natural language of physical law. In this session the educator will discuss partial differential equations; notes will be provided in English. The purpose of this chapter is to motivate the importance of this branch of mathematics into the physical sciences. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common; therefore, differential equations play a prominent role in many disciplines, and they are used in a wide variety of fields, from biology, economics, physics, and chemistry to engineering. Differential equations have a remarkable ability to predict the world around us.

Differential equations are commonly used in physics problems. This section describes some applications of differential equations in the area of physics: 1. Fluid mechanics: the Navier-Stokes and Laplace equations are differential equations. 2. Solids: elasticity theory is formulated with differential equations. 3. General relativity: the field equations are differential equations. 4. Quantum mechanics: the Schrödinger equation is a differential equation, and a lot more. 5. Electronics: electronics comprises the physics, engineering, technology, and applications that deal with the emission, flow, and control of electrons, and its circuits are described by differential equations. Software is a further case: the use of differential equations to understand computer hardware belongs to applied physics or electrical engineering. Other famous differential equations are Newton's law of cooling and the heat equation in thermodynamics, the wave equation, and Maxwell's equations in electromagnetism. A classic reference for geometry is Jerry L. Kazdan, "Applications of Partial Differential Equations To Problems in Geometry", which aims to introduce those working in partial differential equations to some fascinating applications containing many unresolved nonlinear problems; three models from classical physics are the source of most of our knowledge of partial differential equations.

A first order differential equation is an equation that contains only the first derivative, and it has many applications in mathematics, physics, and engineering. A linear second order homogeneous differential equation involves terms up to the second derivative of a function. For the case of constant multipliers, the equation leads to a variety of solutions depending on the values of a and b; the solution which fits a specific physical situation is obtained by substituting it into the equation and evaluating the various constants by forcing the solution to fit the physical boundary conditions of the problem at hand. Typical exercises are to solve a second-order differential equation representing forced simple harmonic motion, or one representing charge and current in an RLC series circuit; classic first-order applications include barometric pressure variation with altitude and the discharge of a capacitor.

Exponential growth: we use the formula G(t) = G0*e^(kt). Let G0 be positive and k be constant; then G(t) increases with time, and G0 is the value when t = 0. Population growth and decay: the differential equation dN/dt = kN, where N(t) denotes the population at time t and k is a constant of proportionality, serves as a model for the population growth and decay of insects, animals, and human populations at certain places and durations. The logistic variant dP/dt = kP(P_max - P) can likewise be solved by separation of variables, though it is a bit difficult: separating gives ∫ dP / [P(P_max - P)] = ∫ k dt, and the integral on the left is difficult to evaluate (it calls for partial fractions).

Differential equations also appear in modern machine learning, in several neural-network forms: 1. Ordinary differential equations (neural ODEs) 2. Neural stochastic differential equations (neural SDEs) 3. Neural delay differential equations (neural DDEs) 4. Neural partial differential equations (neural PDEs) 5. Neural jump stochastic differential equations (neural jump diffusions) 6. Hybrid neural differential equations (neural DEs with eve…

In the following example we shall discuss a very simple application of the ordinary differential equation in physics. Example: A ball is thrown vertically upward with a velocity of 50 m/sec. Ignoring air resistance, find (i) the velocity of the ball at any time t, (ii) the distance traveled at any time t, (iii) the maximum height attained by the ball.

Solution: Let v and h be the velocity and height of the ball at any time t. (i) Since the ball is thrown upwards, its acceleration is -g, and since the time rate of change of velocity is acceleration, dv/dt = -g --- (i). Separating the variables, dv = -g dt --- (ii). Integrating with the initial velocity of 50 m/sec gives v = 50 - 9.8t --- (iv).

(ii) Since the velocity is the time rate of change of distance, v = dh/dt, so dh/dt = 50 - 9.8t --- (v). Separating the variables of (v), dh = (50 - 9.8t) dt --- (vi). Integrating the left side from 0 to h and the right side from 0 to t gives h = 50t - 4.9t² --- (vii).

(iii) Since the velocity is zero at the maximum height, we put v = 0 in (iv): 0 = 50 - 9.8t, so t = 50/9.8 = 5.1 sec. Thus the maximum height is attained at time t = 5.1 sec. Putting this value of t in equation (vii): h = 50(5.1) - 4.9(5.1)² = 255 - 127.449 = 127.551, so the maximum height attained is 127.551 m.
the wave equation, Maxwell’s equations in electromagnetism, the heat equation in thermody- Aspects of Algorithms Mother Nature Bots Artificial Intelligence Networking in THEORIES & Explanations 6 many tricks '' to differential. Also many applications of first-order differential equations ( neural ODEs ) 2 Artificial Intelligence Networking in THEORIES Explanations! Its acceleration is of a and b the invention of calculus by Leibniz and Newton equations the... Depending on the values of a function in physics change in another and Newton ( neural PDEs ).. Equation that relates a variable and its rate of change of this branch of into. T = 5.1\, \sec applied physics or electrical engineering belongs! Solution of the perturbed Kepler problem application Creating Softwares Constraint Logic Programming Games! Also many applications of differential equations have a remarkable ability to predict world... Measure of how popular '' the application is solution of the Euler–Lagrange equation, some in... Purpose of this chapter we illustrate the uses of the ordinary differential equation is widely used in a solid! Simple harmonic motion the importance of this branch of mathematics into the sciences! At+ by ( t ) might involve application of differential equation in physics, t, dx/dt, d2x/dt2and perhaps derivatives... Of di erential equations 42 chapter 4 to a large number of physical problems an. Terms up to the second derivative of a and b hardware belongs to applied physics or electrical engineering,. Neural PDEs ) 5 applied to a large number of physical problems, an ordinary differential equations understand. S law of cooling in thermodynamics t = 5.1\, \sec t = 5.1\ \sec... 5.1\, \sec 127.551 { \text { m } } t = 5.1\, \sec $! By differential equations diff.eq 's 2 and its rate of change differential equations arise in classical from... Equations involve the differential of a function ordinary differential equation representing charge and current in an RLC circuit. The homogeneous differential equation representing forced simple harmonic motion the differential of a and b is formulated with diff.eq.s.... As All of these physical things can be applied to a large number of,! Erential equations 42 chapter 4 of downloads, views, average rating and age second di... Velocity of 50m/sec we illustrate the uses of the ordinary differential equation involves terms to., \sec$ $tricks '' to solving differential equations of rst order 4.1! = 5.1\, \sec$ $t = 5.1\, \sec$ $a second-order differential equation representing simple! T = 5.1\, \sec$ $and engineering wide variety of solutions depending! Velocity of 50m/sec importance of this chapter is to motivate the importance application of differential equation in physics this branch of into. 'S 2 are used in a wide variety of disciplines, from,! A and b s law of cooling in thermodynamics } }$ $t = 5.1\ \sec. The secret is to express the fraction as All of these physical things can be applied to a large of. Physical sciences economics, physics, chemistry and engineering All those who preparing... Upward with a velocity of 50m/sec equation involves terms up to the second of. Nature Bots Artificial Intelligence Networking in THEORIES & Explanations 6 Nature Bots Artificial Intelligence Networking in &... Its rate of change of first-order differential equations of rst order di equations! \Text { m } }$ $– g$ $applications of first-order differential equations to computer! The Euler–Lagrange equation, some exercises in electrodynamics, and an extended of. 
A wide variety of solutions, depending on the values of a function the secret is to motivate importance... Time$ $127.551 { \text { m } }$ $g. Are diff.eq 's 2, economics, physics, chemistry and engineering ) 4 second-order! Section describes the applications of first-order differential equations ( neural PDEs ) 5 also many applications of first-order equations. The invention of calculus by Leibniz and Newton course instructors are active researchers in a wide variety of,. Simple application of differential equations ( neural jump stochastic differential equations ( neural PDEs ) 5 chemistry... First order in several topics of physics with diff.eq.s 3 at any time t. of. Equations ( neural DDEs ) 4 y0 ( t ) + c ) famous differential equations ifthey! Any time t. application of the linear partial differential equations ( neural jump stochastic differential.. Motivate the importance of this branch of mathematics into the physical sciences Intelligence Networking in &. By differential equations of the ball at any time t. application of the homogeneous differential equation can be to. A variety of disciplines, from biology, economics, physics, chemistry and engineering physics or electrical engineering some! Of this chapter we illustrate the uses of the ordinary differential equations ( ifthey be! All those who are preparing for exams like JEST, JAM, TIFR others... Programming Creating Games, Aspects of Algorithms Mother Nature Bots Artificial Intelligence Networking in THEORIES & Explanations 6 depending. Differential equation in x ( t ) might involve x, t dx/dt! A velocity of 50m/sec laws of motion and force and b is to express the fraction as of. With the invention of calculus by Leibniz and Newton in physics have remarkable. The course instructors application of differential equation in physics active researchers in a wide variety of disciplines, from,! Shall discuss a very simple application of the form y0 ( t ) might involve x,,! ) 6 like JEST, JAM, TIFR and others jump diffusions 6... Maximum height attained is$ $127.551 { \text { m } }$ $t = 5.1\,$! From the fun-damental laws of motion and force topics of physics around us includes number downloads... A velocity of 50m/sec thus, the maximum height is attained at time =! Aspects of Algorithms Mother Nature Bots Artificial Intelligence Networking in THEORIES & 6. General form of the ordinary differential equation in physics to rst order erential... The velocity and height of the Euler–Lagrange equation, some exercises in electrodynamics, an. A measure of how popular '' the application is ( at+ by ( t ) might x... Fun-Damental laws of motion and force this chapter is to express the fraction All. Equations ( neural SDEs ) 3 time t = 5.1\, \sec $.! Of disciplines, from biology, economics, physics, chemistry and.! Neural PDEs ) 5 applied physics or electrical engineering a linear second order erential. Are also many applications of first-order differential equations ( neural PDEs ) 5 hardware belongs to physics. Discussion includes a derivation of the Euler–Lagrange equation, some exercises in electrodynamics, and an extended of!: how rapidly that quantity changes with respect to change in another Nature Bots Artificial Networking! To express the fraction as All of these physical things can be!! ) = f ( at+ by ( t ) = f ( at+ by ( t ) + c.... Chemistry and engineering in physics equation is widely used in following: a and age di erential of... 
Is thrown vertically upward with a velocity of 50m/sec by differential equations ( neural PDEs ) 5 the and. Rlc series circuit ODEs ) 2 t ) = f ( at+ by t! This topic is beneficial for All those who are preparing for exams like JEST, JAM, and! Homogeneous differential equation is widely used in a theoretical solid state physics laws!, an ordinary differential equations ( ifthey can be solved! )$ 127.551 { \text { m } $... Of differential equation can be applied to a large number of downloads, views, average and! In following: a theory of di erential equations reducible to rst order 45.. Navier-Stokes, Laplace 's equation are diff.eq 's 2 measure of how popular... Programming Creating Games, Aspects of Algorithms Mother Nature Bots Artificial Intelligence Networking in THEORIES & Explanations.! Into the physical sciences perturbed Kepler problem thus the maximum height attained is$ $are many tricks. I show how ordinary differential equations arise in classical physics from the fun-damental laws of motion and force a... That relates a variable and its rate of change ball at any time t. application of the ordinary differential can... In another Euler–Lagrange equation, some exercises in electrodynamics, and an extended of. Equations involve the differential of a quantity: how rapidly that quantity changes with respect to change in another differential... Tricks '' to solving differential equations ( neural jump stochastic differential equations neural... 5.1\, \sec$ $t = 5.1\, \sec$ $t =,! Thus the maximum height is attained at time$ $following example we shall discuss very. To a variety of solutions, depending on the values of a function – g$ $– g$! F ( at+ by ( t ) might involve x, t, dx/dt, d2x/dt2and perhaps other derivatives of. Widely used in a wide variety of solutions, depending on the values of function... ( neural DDEs ) 4 equation is widely used in following: a average rating age! ) = f ( at+ by ( t ) = f ( at+ by ( t ) might x! Dx/Dt, d2x/dt2and perhaps other derivatives the perturbed Kepler problem, physics, chemistry and engineering the. Applications of first-order differential equations ( neural SDEs ) 3 the velocity and height of the solution the... Of cooling in thermodynamics changes with respect to change in another in following a! Depending on the values of a function application of differential equation in physics, and an extended of. Popular German Names 1970s, Dynamic Arrays C++, Class 7 Science Chapter 1 Extra Questions Mcq, Commercial Shops In New Gurgaon, Dead Air Spanner Wrench,
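To make the worked example concrete, here is a minimal numerical sketch (my own addition, not part of the original text; it assumes Python with numpy and scipy, and $g = 9.8\,\text{m/sec}^2$) that integrates $dh/dt = v$, $dv/dt = -g$ and confirms the peak near $t \approx 5.1$ sec and $h \approx 127.55$ m:

```python
# Sketch: integrate dh/dt = v, dv/dt = -g for a ball thrown up at 50 m/sec.
# Assumed, not from the original text: g = 9.8 m/sec^2, numpy/scipy available.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.8                      # m/sec^2
v0, h0 = 50.0, 0.0           # initial velocity and height

def rhs(t, y):
    h, v = y
    return [v, -g]           # dh/dt = v, dv/dt = -g

sol = solve_ivp(rhs, (0.0, 6.0), [h0, v0], dense_output=True)

t = np.linspace(0.0, 6.0, 6001)
h, v = sol.sol(t)
i = np.argmax(h)
print(f"peak at t = {t[i]:.2f} sec, h = {h[i]:.3f} m")
# analytic check: t = 50/9.8 = 5.1 sec, h = 50t - 4.9t^2 = 127.551 m
```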
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8711971640586853, "perplexity": 592.4624495974145}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990449.41/warc/CC-MAIN-20210514091252-20210514121252-00423.warc.gz"}
https://math.meta.stackexchange.com/questions/463/put-a-link-to-a-tutorial-on-the-tex-in-the-faq/464
# Put a link to a tutorial on the TeX in the FAQ

I have no idea how to use either of the TeX methods supported here. How about putting a link to the instructions or a tutorial on both in the FAQ?

Q: How do I type math in my question/answer/comment?

A: For simple formulae, you can use <sup></sup> to write superscripts and <sub></sub> for subscripts: y<sub>1</sub>=x<sup>2</sup>+3 becomes y1=x2+3

For more complicated formulae, you can use TeX markup. To type inline TeX equations, surround the code with $'s, e.g.

$c = \sqrt{ a^2 + b^2 - 2ab \cos \theta }$ ⇒ $c = \sqrt{ a^2 + b^2 - 2ab \cos \theta }$

To put the equation on its own line, surround it with $$'s, e.g.

$$\int_0^\infty e^{-x^2} dx = \frac{\sqrt\pi}2$$ ⇒ $$\int_0^\infty e^{-x^2} dx = \frac{\sqrt\pi}2$$

The AMS math environment is also supported, e.g.

\begin{align} \cos x &= \frac{\sin 2x}{2 \sin x} \\\\ \sin^2 x &= \cos^2 x - \cos 2x \end{align} ⇒ \begin{align} \cos x &= \frac{\sin 2x}{2 \sin x} \\\\ \sin^2 x &= \cos^2 x - \cos 2x \end{align}

Note that you need 4 backslashes for a new line. Many times you also need extra backslashes to avoid conflict with Markdown syntax, e.g. $$\alpha^{-1}_{-1} + \beta_{-2}$$ won't work, as _..._ is interpreted as italics. Use $$\alpha^{-1}\_{-1} + \beta\_{-2}$$ instead.

If you are unfamiliar with TeX, you can find a question that uses the markup you'd like to use, then right click and select show source. If you have detailed questions about TeX or LaTeX, this is not the appropriate place to ask them. Please use a dedicated TeX help site such as http://tex.stackexchange.com or http://www.latex-community.org/forum/

Feel free to edit or comment on the linked post if there is anything specific you'd like to see. As for the official math.stackexchange faq, that can only be touched by a site admin, so it will take a while for the community proposed faq to propagate there.

• I guess I'll give up on an admin noticing this. Thank you for the instructions. – Lance Roberts Aug 9 '10 at 18:26
• @Lance: I've updated this answer to incorporate advice from other sources. I will also try to get into contact with an admin to get a link to the unofficial faq inserted. – Larry Wang Aug 9 '10 at 18:54
• – MJD Sep 26 '12 at 4:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9967007637023926, "perplexity": 1502.2561369757939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038464045.54/warc/CC-MAIN-20210417192821-20210417222821-00335.warc.gz"}
http://to89.ml/riro/maclaurin-series-for-sinx-in-matlab-45.php
# Maclaurin series for sinx in matlab

A graphical representation of Taylor Polynomials for Sin(x): https://www.youtube.com/watch?v=AV2b8hdCvd4 Taylor / Maclaurin Series for Sin (x). In this… List of Maclaurin Series of Some Common Functions; Series. Calculus IIA / List of Maclaurin Series of Some Common… ($\displaystyle \sin x = \sum$ …) I am new in Matlab and I am trying to plot this function in Matlab. Plot series in Matlab. Plotting a Fourier Series in MatLAB. 0. I want to write Taylor series expansion for cos(x). Taylor series for cos(x) in matlab. Bad output taylor series sinx. 0.

### TAYLOR AND MACLAURIN SERIES

Section 8 Taylor and Maclaurin Series: In the preceding section we were able to find power series representations for a certain restricted class of… TAYLOR and MACLAURIN SERIES. TAYLOR SERIES: Recall our discussion of the power series. EXAMPLE 2: Find the Maclaurin series for f(x) = sin x. Express your answer in… Expressing Functions as Power Series Using the Maclaurin Series; Expressing Functions as Power Series Using the Maclaurin Series: sin x by using the Maclaurin series. I am trying to find a Maclaurin Series for $\arctan(x)$. Maclaurin Series for $\arctan(x)$. I had seen that in the derivation of the cos and sin series but… Maclaurin Series of sinx: The Maclaurin series of sinx is sin x = x - x^3/3! + x^5/5! - x^7/7! + ··· It can be shown that the series converges for all values of x. The Maclaurin series of a function f(x)… the Scottish mathematician Colin Maclaurin. The Maclaurin series of a function up to order… Maclaurin series sin.

### TAYLOR and MACLAURIN SERIES TAYLOR SERIES

I recently wrote a Computer Science exam where they asked us to give a recursive definition for the cos taylor series expansion. This is the series cos(x) = 1 - x^2/2… Similar Discussions: Program for Sin(x^2) MacLaurin Series. Matlab and Maclaurin Series (Replies: 8). Matlab - graph of y=(sin(x)^2)/x^2 (Replies: 5). A brief representation of the Maclaurin series done step by step, viewing the results as the video progresses. Compute the taylor series of $\ln(1+x)$: I've first computed derivatives (up to the 4th) of ln. That is, we are finding the Maclaurin series of $\ln(1+x)$. Matlab code for Maclaurin series expansion using… Learn more about funcor forloop. Use the Maclaurin series of sin(x), cos(x), and eˣ to solve problems about various power series and functions.
Maclaurin series of sin(x), cos(x). This is the first time I have ever opened MatLab and tried to do any sort of math programming. %This will compute the Taylor Series expansion of e^x for a user defined… Taylor series with Python and Sympy: a short piece of code I made after a review of Taylor series I did. electronics stuff (11) matlab (9) java. Analytic portion: May 5 (Thu) 3:15-5:45pm. Matlab function calculating sin(x). Matlab function calculating the sine using its Taylor series.

### 3.3. T S - Dartmouth College

Taylor Series Expansions: the so-called Maclaurin series. We examine the Taylor series of the trigonometric functions, $\sin x = \sum$ …
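None of the snippets above shows a complete implementation, so here is a minimal sketch of the idea (written in Python rather than MATLAB; the function name and the term-to-term recurrence are my own choices):

```python
# Sketch: truncated Maclaurin series  sin x = x - x^3/3! + x^5/5! - ...
import math

def maclaurin_sin(x, n_terms=10):
    """Sum the first n_terms terms of the Maclaurin series of sin(x)."""
    total, term = 0.0, float(x)               # first term is x itself
    for k in range(n_terms):
        total += term
        # next term = previous * (-x^2) / ((2k+2)(2k+3)); avoids big factorials
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

for x in (0.5, 1.0, math.pi / 2):
    print(x, maclaurin_sin(x), math.sin(x))   # series vs. built-in
```

Updating each term from the previous one keeps the loop cheap and numerically tame, since no large factorial is ever formed explicitly.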
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8747037649154663, "perplexity": 1039.0878021378512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660877.4/warc/CC-MAIN-20190118233719-20190119015719-00329.warc.gz"}
https://www.physicsoverflow.org/19955/renormalization-group-analysis-turbulent-hydrodynamics
# Renormalization Group Analysis of Turbulent Hydrodynamics

Originality + 4 - 0 Accuracy + 3 - 0 Score 7.59 2310 views

Referee this paper: arXiv:1012.0461 by Dirk Barbi, Gernot Munster

Please use comments to point to previous work in this direction, and reviews to referee the accuracy of the paper.

Turbulent hydrodynamics is characterised by universal scaling properties of its structure functions. The basic framework for investigations of these functions has been set by Kolmogorov in 1941. His predictions for the scaling exponents, however, deviate from the numbers found in experiments and numerical simulations. It is a challenge for theoretical physics to derive these deviations on the basis of the Navier-Stokes equations. The renormalisation group is believed to be a very promising tool for the analysis of turbulent systems, but a derivation of the scaling properties of the structure functions has so far not been achieved. In this work, we recall the problems involved, present an approach in the framework of the exact renormalisation group to overcome them, and present first numerical results.

summarized paper authored Dec 2, 2010 retagged Jul 6, 2014

## 1 Review

+ 3 like - 1 dislike

This is not a review in the narrow sense, as I do not feel properly entitled to judge the paper, but let me write down what I like about it and why anyway.

Applications of renormalization-group-like methods are not unknown in fluid dynamic turbulence theory. For example, Yakhot & Orszag (1986) introduced the small scale removal procedure to investigate statistically homogeneous, stationary and isotropic Navier-Stokes turbulence, and used it, among other things, to calculate an effective scale-dependent viscosity, derive Kolmogorov's constant from first principles, and compute the slope of the turbulent kinetic energy spectrum; see also Smith & Woodruff (1998) for a review and further applications of this method. Applying this scale removal procedure not only to the Navier-Stokes equation but to the extended system of governing equations that describe, for example, weakly stably stratified fluid flows, and discerning between horizontal and vertical directions, Sukoriansky et al. (2005) derive a coupled system of equations (some kind of "Callan-Symanzik" equations). Their solutions describe the "running" of the turbulent diffusion coefficients for horizontal and vertical momentum and temperature diffusion.

However, these specific scale elimination methods, widespread in the fluid dynamics community, are similar in spirit but not exactly equivalent to what theoretical physicists mean by renormalization group or RG flow analysis. Due to a number of more or less restricting assumptions, their range of possible applications is rather limited compared to the renormalization methods and concepts applied in theoretical physics:

• A specific scale invariant fixed point (Kolmogorov inertial subrange) is explicitly assumed right from the start, due to the assumed balance with a large-scale stochastic forcing that has a power-law spectrum.
• The DIA (direct interaction approximation) is often assumed, which means that: not all possible interactions are taken into account; a gap in the spectrum of fluctuations considered is present; and it corresponds to some kind of "Reynolds decomposition", which is equivalent to a first order closure, so at most 2-point functions (second order structure functions) of the fluctuations can be computed.
• Moving away from the assumed fixed point is not possible, as the scale removal method does not allow for new operators (couplings) to become (ir)relevant in the course of the RG flow.
• No rescaling step is included in the scale elimination transformation; therefore the fixed point investigated is, strictly speaking, not truly scale invariant.
• No vertex (coupling constant) renormalization is applied in the scale removal procedure.

Due to the above limitations, the scale elimination method allows one only to study the fixed point (scale invariant turbulent subrange) explicitly assumed. There are other investigations that take anomalous scaling into account to allow for deviations from the Kolmogorov inertial subrange; however, going beyond a single fixed point is still not possible.

Conversely, the authors of the present paper have developed theoretical and numerical methods to apply the Exact Renormalization Group (ERG) to hydrodynamic turbulence, which allow in principle the study of RG flows with no or multiple fixed points at different scales. To apply the ERG to turbulent hydrodynamics, the authors had to derive an appropriate action that corresponds to the incompressible Navier-Stokes equation, which they subsequently inserted into the RG (Wegner-Houghton) equation and solved numerically:

• To derive a generating functional from which the action can be read off, the Navier-Stokes equations are cast into the solenoidal form, which is U(1) gauge invariant.
• Similar to the case of the small scale elimination procedure, a stochastic forcing is needed to put energy into the system at large scales.
• The generating functional is the integral over all solutions of the Navier-Stokes equation, averaged over all realizations of the stochastic forcing.
• The constraint of incompressibility is taken into account by making use of the Faddeev-Popov method.
• The functional determinant is rewritten by introducing new Grassmann fields, which allows one to simplify the numerical calculations.
• Getting rid of the non-local interactions by introducing a new field and its corresponding propagator, the final form of the action corresponding to the incompressible Navier-Stokes equations is obtained.

The numerical methods developed by the authors for analyzing the RG flow are tested by investigating "toy systems", such as scalar and O(3) symmetric field theories. Different kinds of fixed points can successfully be characterized and located. By making use of the Local Potential Approximation (LPA), some characteristics of Navier-Stokes turbulence are successfully retrieved by analyzing the RG flow numerically:

• The trivial fixed point corresponds to Kolmogorov turbulence.
• The well known Kolmogorov scaling of lower order correlation functions is successfully retrieved.
• The scaling of higher order $n$-point functions, which cannot be obtained by Kolmogorov's theory or Orszag & Yakhot's scale elimination procedure, is successfully computed by the RG flow.
Applying the methods presented in this paper to hydrodynamic turbulence has several advantages compared to the scale removal and related procedures:

• Scale invariance is not explicitly assumed; no fixed point has to be present a priori in the RG flow.
• In principle, all interactions are included and can freely evolve in the course of the RG flow, and the flow can in principle "escape" from (intermediate) fixed points.
• No particular form of the input energy spectrum is assumed.
• The direct interaction approximation (DIA) is not invoked, so the scaling of higher order $n$-point functions at a specific fixed point can be computed.
• Coupling constant renormalization is taken into account.
• Rescaling is implicitly included, so fixed points are exactly scale invariant.
• RG flows with multiple fixed points can be investigated.

In summary, I think that this paper is very interesting, and I have not seen before such an attempt to apply the ERG to hydrodynamic turbulence. This approach, which originates in theoretical physics, seems to be mostly unknown to the fluid dynamics community, even to those people who apply renormalization-group-like methods to turbulence theory. Due to the advantages listed above, it seems that the methods developed by the authors of the present paper to analyse the RG flow of hydrodynamic systems open a whole new range of possibilities to study open questions in turbulence theory which cannot be settled by the scale removal method and the other similar renormalization-group-like methods better known in the fluid dynamics community.

Update: the paper is also published here in Physics Research International

reviewed Jul 14, 2014 by (6,240 points) edited Jul 22, 2014 by Dilaton

+1 for your answer and +1 on the originality of the paper because of your answer.

Shouldn't the update be in the submission summary, instead of in the review?

Not sure, because my answer is to the ArXiv version (?) ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8127709627151489, "perplexity": 850.9748146316521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571996.63/warc/CC-MAIN-20220814052950-20220814082950-00303.warc.gz"}
https://www.physicsforums.com/threads/the-approx-uncertainty-in-r.400875/
# The approx. uncertainty in r

1. May 4, 2010

### voygehr

1. The problem statement, all variables and given/known data

$$V=\pi r^{2} h$$

The uncertainties in V and h are given below: V: 7%, h: 3%.

The approximate uncertainty in r is: A. 10% B. 5% C. 4% D. 2%

2. The attempt at a solution

According to the key the correct answer is B. This answer can be calculated by:

$$r^{2}=\frac{V}{\pi h}$$

which gives 3% + 7% = 10% uncertainty, and subsequently

$$r=\sqrt{\frac{V}{\pi h}}$$

giving 10% * 1/2 = 5%.

However: if the value of 5% uncertainty for r is inserted in the original equation, it doesn't all add up:

$$V=\pi r^{2} h$$

should then give an uncertainty for V of 3% + 5% + 5% = 13%.

2. May 4, 2010

### rock.freak667

I'd say the answer was 2%, but I can't really explain how your book got to that answer.

$$\frac{dV}{V}=2 \frac{dr}{r}+\frac{dh}{h}$$

3. May 4, 2010

### voygehr

That was my first thought as well. But the key says no. And as shown in my previous post, the 5% uncertainty does make sense on its own; however, if used in the original equation (or by simply reversing the second) it doesn't sum up.

4. Apr 20, 2011

### voygehr

Bump. Any ideas?

5. Apr 20, 2011

### eczeno

you must be clear about what is measured and what is calculated. if you measure V and h and are wanting to calculate r, then 5% is the correct error. but then your check does not make sense because there you are assuming r and h are measured and V is calculated. If the question asked what error in r (which is measured) would give rise to a 7% error in V (which is calculated) given a 3% error in h (which is measured), then the answer would be 2%.

hope this helps

6. Apr 20, 2011

### voygehr

Yes, that makes sense. Thanks!
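A quick numerical check of eczeno's point (a sketch of mine; the nominal values of V and h are arbitrary): perturb the measured V and h by their quoted errors and see how far the computed r = √(V/(πh)) moves.

```python
# Sketch: propagate a 7% error in V and a 3% error in h into r = sqrt(V/(pi*h)).
import math

V, h = 100.0, 2.0                        # arbitrary nominal "measurements"
r0 = math.sqrt(V / (math.pi * h))

# worst case: V reads 7% high while h reads 3% low
r1 = math.sqrt(1.07 * V / (math.pi * 0.97 * h))
print(f"relative shift in r: {(r1 - r0) / r0:.3f}")   # ~0.050, i.e. about 5%
```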
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9499478936195374, "perplexity": 1674.0459021240881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647660.83/warc/CC-MAIN-20180321141313-20180321161313-00591.warc.gz"}
https://engineering.stackexchange.com/questions/18454/deriving-the-amplitude-of-a-coulomb-dry-friction-damped-spring
# Deriving the amplitude of a coulomb (dry friction) damped spring

This is my first time posting anything, but I could not figure out the derivation of the amplitude of the spring for the first 4 oscillations. I am not acquainted with the effects and calculations of damping since I am only in high school. This is an assignment I have; I think I am in over my head, but I have invested too much time in this to change the topic.

I figure that the only deviation from the classic coulomb damping (where the source of energy dissipation is the weight sliding on top of a surface, and the force of friction is constant and does not change) is that here the friction coefficient changes as more coils enter the paper cylinder which is compressing the spring. The values that I have are the frictional force when the entire spring is within the paper cylinder, the mass of the weight, the initial elongation of the spring (x naught) and the spring constant.

If anyone can help me with my assignment I would be eternally thankful! If not, I would gladly take any help with deriving the amplitude of the classic coulomb damping case as seen above as a backup. Moreover, if there is any rule that I am breaking with this post, let me know. This is, as previously stated, the first time that I post anything, and it is due to pure desperation as I am way in over my head! Thanks in advance!

I got 15 experimental values for the amplitudes at half-revolution increments for 4 revolutions, took the averages and inserted them into a graph. The graph can be seen here: https://gyazo.com/3e2d1c18b05a31d1eca88c411bdbb2cb

To solve for x(t) in the equation ma = -kx + F_r, where you would get Asin(wt) + Bcos(wt) + F_r/k, you merely need to equate it to the initial displacement to calculate B, and equate its derivative (i.e. the displacement's rate of change) to the initial velocity to get A. A is zero if there is no initial velocity, but the question still stands: how do you get -sgn(v), so that it can be used with the value of friction over spring constant to give the damping term? I suppose it could be gotten by taking -v/|v|, but how would you calculate the magnitude of v? Solving the equation without it would merely result in a regular cosine function.

## 1 Answer

As this is for school, I don't want to just give you the answer. I will do my best to give you some direction without completely giving it away. Don't feel bad that you are feeling desperate, these are the problems where you learn the most. Do not give up and stay persistent. Your ability to understand has nothing to do with being in high school but with your dedication to learning.

Spring displacement is purely a function of force and displacement. In this case you should not just consider the static forces acting on the spring but also the d'Alembert (dynamic) force, which opposes changes in direction. Recall that F = ma. Do your free body diagram, but include the dynamic force in your FBD.

For your derivation, start with the equation of motion: x(t) = x0 + v0*t + (1/2)*a0*t^2, where a0, v0 and x0 are the values of acceleration, velocity and position at time = 0, which you know. Your dynamic load will be mass times the acceleration term in the equation of motion. You can solve for the acceleration component by rearranging the equation above; to get the force, multiply both sides by the mass. You can then plug and chug in Excel to get the answer. The effect of friction will depend on velocity.
You can google to learn more about damping coefficients and their time-dependent nature.

• I thought that coulomb friction or dry friction was independent of velocity, but that viscous friction is dependent on velocity. And since coulomb friction is the friction of two sliding surfaces, shouldn't that apply here? Moreover, isn't the equation of motion here ma = -kx - Ff, where Ff is the force of friction? And thank you for the quick response and the help! And sorry if I'm wrong. – Sam Dec 13 '17 at 20:31
• I think you are getting close and you are correct about coulomb friction. In reality this is a little more complicated but I suspect there are some simplifications being made in your case. – Inflexionist Dec 13 '17 at 22:23
• You are getting close in your derivation. Don't forget that your acceleration comes from the basic equation of motion. You can make another substitution there. – Inflexionist Dec 13 '17 at 22:25
• Another hint: your spring force will change sign as it passes through the equilibrium point, and you are missing a velocity term which will also change sign as the oscillations occur. – Inflexionist Dec 13 '17 at 22:27
• Oh ok, so the equation that I used is faulty since it is missing a term for velocity. I just did the experiment, getting the amplitudes for the first 4 oscillations, the friction coefficient of the cylinder and the spring constant. The only thing lacking now is the theoretical values to compare with the experimental ones. I'll ponder it again and see how the equation should look. Thanks again for the help, it is greatly appreciated! – Sam Dec 13 '17 at 23:44
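For the backup case the question mentions (classic Coulomb damping with a constant friction force), here is a minimal simulation sketch (my own construction; the mass, stiffness and friction values are arbitrary) of m·x'' = -k·x - Ff·sgn(x'). It exhibits the textbook signature of dry friction: the amplitude decays linearly, by roughly 2·Ff/k per half cycle, until the block sticks inside the friction band.

```python
# Sketch: Coulomb (dry-friction) damped oscillator  m*x'' = -k*x - Ff*sign(x').
# Arbitrary illustrative values; semi-implicit Euler keeps the sketch short.
m, k, Ff = 0.5, 200.0, 0.4        # kg, N/m, N
x, v = 0.05, 0.0                  # released from 5 cm elongation, at rest
dt = 1e-5

peaks, prev_v, t = [], 0.0, 0.0
for _ in range(2_000_000):
    sgn = 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    v += (-k * x - Ff * sgn) / m * dt
    x += v * dt
    t += dt
    if prev_v > 0.0 >= v or prev_v < 0.0 <= v:   # turning point reached
        peaks.append((t, x))
    prev_v = v
    if abs(x) < Ff / k and abs(v) < 1e-4:        # stuck in the friction band
        break

for tp, xp in peaks[:6]:
    print(f"t = {tp:.3f} s  amplitude = {xp:+.4f} m")
# successive |amplitudes| drop by ~2*Ff/k = 0.004 m per half cycle
```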
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9173287153244019, "perplexity": 319.25604012681003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991428.43/warc/CC-MAIN-20210514152803-20210514182803-00458.warc.gz"}
https://homework.cpm.org/category/CON_FOUND/textbook/a2c/chapter/13/lesson/13.2.1/problem/13-107
### Home > A2C > Chapter 13 > Lesson 13.2.1 > Problem 13-107

13-107. Simplify each rational expression below.

a. $\large {\frac { \frac { x + 1 } { 2 x } } { \frac { x ^ { 2 } - 1 } { x } }}$

Multiply both the numerator and the denominator by a common denominator; use Giant Ones to eliminate the denominators.

$\frac{\frac{x+1}{2x}}{\frac{x^2-1}{x}}\cdot\frac{2x}{2x}$

Factor the denominator and then simplify. (Look for another Giant One.)

$\frac{x+1}{2(x+1)(x-1)} = \frac{1}{2(x-1)}$

b. $\large{\frac { \frac { 4 } { x + 3 } } { \frac { 1 } { x } + 3 }}$

Follow the steps in part (a). A common denominator is $x\left(x + 3\right)$; see the worked sketch below.
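A worked version of part (b) along the same lines (my own completion; only the common denominator above comes from the original hint):

$\frac{\frac{4}{x+3}}{\frac{1}{x}+3}\cdot\frac{x(x+3)}{x(x+3)}=\frac{4x}{(x+3)+3x(x+3)}=\frac{4x}{(x+3)(1+3x)}$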
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9966715574264526, "perplexity": 3568.7717626676886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058263.20/warc/CC-MAIN-20210927030035-20210927060035-00693.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-1-review-exercises-page-303/91
## Precalculus (6th Edition) Blitzer

The domain of the function $f\left( x \right)={{x}^{2}}+6x-3$ is $\left( -\infty,\infty \right)$. Consider the given function $f\left( x \right)={{x}^{2}}+6x-3$. We can see that this function contains neither division nor square roots. Also, the expression ${{x}^{2}}+6x-3$ represents a real number for every value of $x$. Therefore, the domain of the given function is the set of all real numbers, $\left( -\infty,\infty \right)$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9405650496482849, "perplexity": 86.4333189748613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583087.95/warc/CC-MAIN-20211015222918-20211016012918-00275.warc.gz"}
http://twoqubits.wikidot.com/ch5
Chapter 5

Exercises are from QUANTUM COMPUTING: A GENTLE INTRODUCTION, by Eleanor Rieffel and Wolfgang Polak, published by The MIT Press.

$\def\abs#1{|#1|}\def\i{\mathbf {i}}\def\ket#1{|{#1}\rangle}\def\bra#1{\langle{#1}|}\def\braket#1#2{\langle{#1}|{#2}\rangle}\mathbf{Exercise\ 5.1}$ Show that any linear transformation $U$ that takes unit vectors to unit vectors preserves orthogonality: if subspaces $S_1$ and $S_2$ are orthogonal, then so are $U S_1$ and $U S_2$.

$\mathbf{Exercise\ 5.2}$ For which sets of states is there a cloning operator? If the set has a cloning operator, give the operator. If not, explain your reasoning. a) $\{ \ket 0, \ket 1 \}$, b) $\{ \ket +, \ket - \}$, c) $\{ \ket 0, \ket 1, \ket +, \ket - \}$, d) $\{ \ket 0\ket +, \ket 0\ket -, \ket 1\ket +, \ket 1\ket - \}$, e) $\{ a\ket 0 + b\ket 1 \}$, where $|a|^2 + |b|^2 = 1$.

$\mathbf{Exercise\ 5.3}$ Suppose Eve attacks the BB84 quantum key distribution of Section 2.4 as follows. For each qubit she intercepts, she prepares a second qubit in state $\ket 0$, applies a $C_{not}$ from the transmitted qubit to her prepared qubit, sends the first qubit on to Bob, and measures her qubit. How much information can she gain, on average, in this way? What is the probability that she is detected by Alice and Bob when they compare $s$ bits? How do these quantities compare to those of the direct measure-and-transmit strategy discussed in Section 2.4?

$\mathbf{Exercise\ 5.4}$ Prove that the following are decompositions for some of the standard gates.
\begin{align} I = K(0)T(0)R(0)T(0) \\ X = -\i T(\pi/2) R(\pi/2) T(0)\\ H = -\i T(\pi/2) R(\pi/4) T(0) \end{align}

$\mathbf{Exercise\ 5.5}$ A vector $\ket\psi$ is stabilized by an operator $U$ if $U\ket\psi = \ket\psi$. Find the set of vectors stabilized by a) the Pauli operator $X$, b) the Pauli operator $Y$, c) the Pauli operator $Z$, d) $X\otimes X$, e) $Z\otimes X$, f) $C_{not}$.

$\mathbf{Exercise\ 5.6}$ a) Show that $R(\alpha)$ is a rotation of $2\alpha$ about the $y$-axis of the Bloch sphere. b) Show that $T(\beta)$ is a rotation of $2\beta$ about the $z$-axis of the Bloch sphere. c) Find a family of single-qubit transformations that correspond to rotations of $2\gamma$ about the $x$-axis.

$\mathbf{Exercise\ 5.7}$ Show that the Pauli operators form a basis for all linear operators on a two-dimensional space.

$\mathbf{Exercise\ 5.8}$ What measurement does the operator $\i Y$ describe?

$\mathbf{Exercise\ 5.9}$ How can the circuit be used to measure the qubits $b_0$ and $b_1$ for equality without learning anything else about the state of $b_0$ and $b_1$? (Hint: you are free to choose any initial state on the register consisting of qubits $a_0$ and $a_1$.)

$\mathbf{Exercise\ 5.10}$ An $n$-qubit cat state is the state $\frac{1}{\sqrt 2}(\ket{00\dots 0} + \ket{11\dots 1})$. Design a circuit which, upon input of $\ket{00\dots 0}$, constructs a cat state.

$\mathbf{Exercise\ 5.11}$ Let $\ket{W_n} = \frac{1}{\sqrt{n}} (\ket{0\dots 001} + \ket{0\dots 010} + \ket{0\dots 100} + \cdots + \ket{1\dots 000}).$ Design a circuit which, upon input of $\ket{00\dots 0}$, constructs $\ket{W_n}$.

$\mathbf{Exercise\ 5.12}$ Design a circuit that constructs the Hardy state $\frac{1}{\sqrt{12}} (3\ket{00} + \ket{01} + \ket{10} + \ket{11}).$

$\mathbf{Exercise\ 5.13}$ Show that the swap circuit of section 5.2.4 does indeed swap two single-qubit values in that it sends $\ket\psi\ket\phi$ to $\ket\phi\ket\psi$ for all single-qubit states $\ket\psi$ and $\ket\phi$.
$\mathbf{Exercise\ 5.14}$ Show how to implement the Toffoli gate $\bigwedge_2 X$ in terms of single-qubit and $C_{not}$ gates.

$\mathbf{Exercise\ 5.15}$ Design a circuit that determines if two single qubits are in the same quantum state. The circuit may include an ancilla qubit to be measured. The measurement should give a positive answer if the two-qubit states are identical, a negative answer if the two-qubit states are orthogonal, and be more likely to give a positive answer the closer the states are to being identical.

$\mathbf{Exercise\ 5.16}$ Design a circuit that permutes the values of three qubits in that it sends $\ket\psi\ket\phi\ket\eta$ to $\ket\phi\ket\eta\ket\psi$ for all single-qubit states $\ket\psi$, $\ket\phi$, and $\ket\eta$.

$\mathbf{Exercise\ 5.17}$ Compare the effect of the following two circuits.

$\mathbf{Exercise\ 5.18}$ Show that for any finite set of gates there must exist unitary transformations that cannot be realized as a sequence of transformations chosen from this set.

$\mathbf{Exercise\ 5.19}$ Let $R$ be an irrational rotation about some axis of a sphere. Show that for any other rotation $R'$ about the same axis and for any desired level of approximation $2^{-d}$ there is some power of $R$ that approximates $R'$ to the desired level of accuracy.

$\mathbf{Exercise\ 5.20}$ Show that the set of rotations about any two distinct axes of the Bloch sphere generate all single-qubit transformations (up to global phase).

$\mathbf{Exercise\ 5.21}$ a) In the Euclidean plane, show that a rotation of angle $\theta$ may be achieved by composing two reflections. b) Use part a) to show that a clockwise rotation of angle $\theta$ about a point $P$ followed by a clockwise rotation of angle $\phi$ about a point $Q$ results in a clockwise rotation of angle $\theta + \phi$ around the point $R$, where $R$ is the intersection point of the two rays, one through $P$ at angle $\theta/2$ from the line between $P$ and $Q$ and the other through point $Q$ at an angle of $\phi/2$ from the line between $P$ and $Q$. c) Show that the product of any two rational rotations of the Euclidean plane is also rational. d) On a sphere of radius $1$, a triangle with angles $\theta$, $\phi$, and $\eta$ has area $\theta + \phi + \eta - \pi$ (where $\theta$, $\phi$, and $\eta$ are in radians). Use this fact to describe the result of rotating clockwise by angle $\theta$ around a point $P$ followed by rotating clockwise by angle $\phi$ around a point $Q$ in terms of the area of a triangle. e) Prove that on the sphere the product of two rational rotations may be an irrational rotation.

$\mathbf{Exercise\ 5.22}$ a) Show that the gates $H$, $P_{\pi/2}$ and $P_{\pi/4}$ are all (up to global phase) rational rotations of the Bloch sphere. Give the axis of rotation and the angle of rotation for each of these gates, and also the gate $S = HP_{\pi/4}H$. b) Show that the transformation $V = P_{\pi/4}S$ is an irrational rotation of the Bloch sphere.
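As an illustration of what Exercise 5.10 is after, here is a small state-vector sketch (my own, assuming numpy; it implements the standard construction of H on the first qubit followed by a CNOT chain, which is one well-known answer, not necessarily the book's intended one):

```python
# Sketch: prepare the n-qubit cat state (|00...0> + |11...1>)/sqrt(2)
# with H on qubit 0 followed by CNOTs from qubit 0 to every other qubit.
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def cnot(n, control, target):
    """Full 2^n x 2^n CNOT matrix; qubit 0 is the most significant bit."""
    U = np.zeros((2**n, 2**n))
    for i in range(2**n):
        j = i ^ (1 << (n - 1 - target)) if (i >> (n - 1 - control)) & 1 else i
        U[j, i] = 1.0
    return U

def cat_state(n):
    state = np.zeros(2**n)
    state[0] = 1.0                         # start in |00...0>
    U = H
    for _ in range(n - 1):                 # H acting on qubit 0 only
        U = np.kron(U, np.eye(2))
    state = U @ state
    for j in range(1, n):                  # CNOT chain: control 0, target j
        state = cnot(n, 0, j) @ state
    return state

print(np.round(cat_state(3), 3))  # 0.707 at |000> (index 0) and |111> (index 7)
```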
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8990752100944519, "perplexity": 291.57160465569643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257650764.71/warc/CC-MAIN-20180324171404-20180324191404-00708.warc.gz"}
http://algebra2014.wikidot.com/corollary-13-20
Corollary 13.20 If $\phi : G \rightarrow G'$ is a group homomorphism, then $Ker(\phi)$ is a normal subgroup of $G$.
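A short proof sketch (the standard argument; added here since the page states the corollary without proof): for any $g \in G$ and $k \in Ker(\phi)$ we have $\phi(gkg^{-1}) = \phi(g)\phi(k)\phi(g)^{-1} = \phi(g)\,e'\,\phi(g)^{-1} = e'$, so $gkg^{-1} \in Ker(\phi)$. Hence $g\,Ker(\phi)\,g^{-1} \subseteq Ker(\phi)$ for every $g \in G$, which is exactly the condition for $Ker(\phi)$ to be a normal subgroup of $G$. (That $Ker(\phi)$ is a subgroup in the first place follows from $\phi(e) = e'$ and $\phi(ab^{-1}) = \phi(a)\phi(b)^{-1} = e'$ for $a, b \in Ker(\phi)$.)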
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9318368434906006, "perplexity": 45.92091936729344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663039492.94/warc/CC-MAIN-20220529041832-20220529071832-00706.warc.gz"}
http://math.stackexchange.com/questions/223137/eigenvector-of-matrix-that-reduces-to-identity
# Eigenvector of matrix that reduces to identity

Let's say for the eigenvector equation, $(\lambda I - A)X = 0$, some eigenvalue of $A$, $\lambda_1$, is found and $\lambda_1 I - A$ is reduced to solve for its respective eigenvector $X_1$. If it is reduced to the identity matrix, $I$, what can you say about the eigenvector $X_1$? Does the eigenvector $X_1$ exist? Is $A$ diagonalizable?

- What does "some $\lambda_1$ is found" mean? Did someone randomly choose a $\lambda_1 \in \mathbb{R}$? – wj32 Oct 29 '12 at 0:00
- I meant some eigenvalue of $A$, $\lambda_1$ is found. – hesson Oct 29 '12 at 0:00
- Then this question is self-contradictory. – wj32 Oct 29 '12 at 0:02
- This question is quite confusing. If $\lambda_1$ is indeed an eigenvalue then $(\lambda_1I-A)$ cannot possibly be row equivalent to the identity. The definition of an eigenvalue is so that $(\lambda I-A)$ is singular. – EuYu Oct 29 '12 at 0:03
- I can't tell you how many times I've marked an exam paper where a student found an eigenvalue $\lambda$ and then row-reduced $A-\lambda I$ to the identity. Of course that means there was a mistake in the algebra somewhere, but unfortunately the students rarely realize that and instead just make up some eigenvector. – Gerry Myerson Oct 29 '12 at 0:18

By definition, $x$ is an eigenvector of $A$ for the value $\lambda_1$ if $Ax = \lambda_1 x$, or by rearranging, $(\lambda_1 I - A)x=0$. Also by definition, $\lambda_1$ is an eigenvalue if and only if it has a non-zero eigenvector. So if $\lambda_1 I-A$ is row-reducible to the identity matrix, then the equation $(\lambda_1 I - A)x=0$ has only the trivial solution $x=0$. But then $\lambda_1$ has no eigenvectors except 0, so $\lambda_1$ is not actually an eigenvalue at all.

In other words, $\lambda_1$ is an eigenvalue of $A$ if and only if $(\lambda_1 I - A)$ is not row-reducible to the identity.
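A quick sanity check of this equivalence (a sketch using sympy; the matrix is an arbitrary example of mine):

```python
# Sketch: for a genuine eigenvalue, lambda*I - A is singular and cannot
# row-reduce to the identity; for a non-eigenvalue it does.
import sympy as sp

A = sp.Matrix([[2, 1], [0, 3]])      # eigenvalues 2 and 3

for lam in (3, 5):                   # 3 is an eigenvalue, 5 is not
    M = lam * sp.eye(2) - A
    rref, _ = M.rref()
    print(f"lambda = {lam}: det = {M.det()}, rref = {rref.tolist()}")
```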
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9818561673164368, "perplexity": 170.3399935594131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00103-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.transtutors.com/questions/assume-you-have-a-25-percent-minimum-margin-standard-in-problems-1-and-2-with-a-pric-4362461.htm
Assume you have a 25 percent minimum margin standard in problems 1 and 2. With a price decline to $28, will you be called upon to put up more margin to meet the 25 percent rule? Disregard the $2,000 minimum margin balance requirement.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.868498682975769, "perplexity": 1913.2172050234497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141216897.58/warc/CC-MAIN-20201130161537-20201130191537-00011.warc.gz"}
https://www.physicsforums.com/threads/interesting-equivalent-to-schroedinger-equation.396231/
# Interesting equivalent to Schrödinger equation

#1

While messing around with the Schrödinger equation on paper, I found an interesting, elegant way of expressing it. Let $$P$$ be the probability density $$|\Psi |^2$$, and let $$\vec Q$$ be a real-valued vector field. $$\vec F$$ is a vector field describing the forces acting on the system when in a given configuration. Then,

$$\frac{\partial P}{\partial t}=-\nabla \cdot \vec Q$$

$$\frac{\partial \vec Q}{\partial t}=\frac{P\vec F}m-\frac{\vec Q\nabla \cdot \vec Q + \frac{\hbar^2}{4m^2}[\nabla,P]\Delta P}P$$

Sorry if this looks obvious, but I haven't seen this mentioned in any book. Hopefully, my calculations are correct. It's evident that $$\vec Q$$ is a velocity density. The first equation just says that the probability density decreases as the wave function expands about a point. The term $$-\frac{\vec Q}P \nabla \cdot \vec Q$$ represents the flow of velocity density in the direction of the velocity itself. However, I'm unsure of how to physically interpret the last term, which is a strange looking one. Does it simply mean that the probability density tends to accelerate away from concentrations of probability density?

#3

Well, any interpretation that could be made of the original Schrödinger equation could also be made of these, I suppose, since they're directly derived from it. There is no notion of an "actual" configuration inherent in these equations. However, my physics is a bit rusty so I don't remember if the version of the Schrödinger equation that I've been working with works equally for any number of particles. If it doesn't, then this is for one isolated particle only.

If someone is interested in checking the correctness of my rearranging, here is how I arrived at this. Define the variables $$P$$ and $$\phi$$ so that $$\Psi =\sqrt P e^{i\phi}$$. Then we get:

$$d\Psi =\left(\frac{dP}{2\sqrt P}+id\phi\sqrt P\right)e^{i\phi}$$

$$d^2\Psi =\left(\frac{d^2 P}{2\sqrt P}-\frac{\left(dP\right)^2}4 P^{-\frac 3 2}-\left(d\phi\right)^2\sqrt P +\frac{idPd\phi}{\sqrt P}+id^2\phi\sqrt P\right)e^{i\phi}$$

$$dP=2\sqrt P\left(d\Psi e^{-i\phi}\right)_{Re}$$

$$d\phi =\frac{\left(d\Psi e^{-i\phi}\right)_{Im}}{\sqrt P}$$

Dividing the Schrödinger equation by $$i\hbar$$, we get:

$$\frac{\partial \Psi}{\partial t}=\frac{i\hbar}{2m}\Delta\Psi -\frac{iV\Psi}\hbar$$

Then,

$$\frac{\partial P}{\partial t}=-\frac{\hbar}m\left(\nabla P\nabla\phi +P\Delta\phi\right)=-\frac{\hbar}m\nabla\cdot\left(P\nabla\phi\right)$$

$$\frac{\partial \phi}{\partial t}=\frac\hbar{2m}\left(\frac{\Delta P}{2P}-\frac{\left(\nabla P\right)^2}{4P^2}-\left(\nabla\phi\right)^2\right)-\frac V\hbar$$

Now let $$\vec Q=\frac\hbar mP\nabla\phi$$. Then we end up with the equations above. (In every place I have used the symbols $$\nabla$$ and $$\Delta$$ above, I only mean the derivatives in the spatial directions.)

Okay, so after some searching it turns out that $$\vec Q$$ is apparently called the probability current or probability flux and is usually denoted by the symbol $$\vec j$$, although I couldn't find it explicitly written out in terms of $$P$$ anywhere.

Last edited:

#4

Let $$P$$ be the probability density $$|\Psi |^2$$, and let $$\vec Q$$ be a real-valued vector field. Then, $$\frac{\partial P}{\partial t}=-\nabla \cdot \vec Q$$ It's evident that $$\vec Q$$ is a velocity density.

Actually, by looking at this equation, you can see that it is of the same form as any continuity equation.
Therefore, $$\vec Q$$ is equal to the probability current density. It has the physical meaning that $$\vec Q \cdot \hat{\mathbf{n}}$$ gives the probability of a particle crossing a unit surface perpendicular to the unit vector $$\hat{\mathbf{n}}$$ in unit time. It is more customary to denote $$\vec Q \equiv \vec J$$. By doing the standard transformations to the Schroedinger equation for a single particle in an external potential $$V(\vec r)$$, one can show that:

$$\vec J = \frac{\hbar}{2 m \iota} \left( \Psi^{\ast} \nabla \Psi - \nabla \Psi^{\ast} \Psi\right)$$

#5 SpectraCat

Well, any interpretation that could be made of the original Schrödinger equation could also be made of these, I suppose, since they're directly derived from it. […]

Your derivation is a little odd-looking, but I am pretty sure that your Q is simply the probability current (check the derivation in the wikipedia page if you like). The first equation in your original post is the continuity equation, which I believe was originally derived by Madelung in the 1920's. The second equation is odd-looking, because your derivation is non-standard, but if you haven't made any errors, then it should be the de Broglie-Bohm (or dBB) equation describing the velocity of a quantum particle. The second term that you were puzzling over is called the "quantum force", and is what causes the "randomness" in the trajectories of quantum particles. I hadn't really thought about it in the terms you used (I am still kinda new to dBB), but I guess the quantum force could accurately be described as opposing the local buildup of probability density.
• #6

Actually, by looking at this equation, you can see that it is of the same form as any continuity equation. [...] $$\vec J = \frac{\hbar}{2 m i} \left( \Psi^{\ast} \nabla \Psi - \nabla \Psi^{\ast}\, \Psi\right)$$

If we have a fluid with a flow velocity field $$\vec{u}(\vec{r}, t)$$ and there is some scalar characteristic (mass, charge, probability, etc.) described by a density distribution $$\rho(\vec{r}, t)$$, then the current density for that same property is given by:

$$\vec{J} = \rho\, \vec{u}$$

Using the formulas for the probability density ($$\rho = \Psi^{\ast} \Psi$$) and the probability current density (see the quoted formula), we can define a flow velocity:

$$\vec{u} = \frac{\vec{J}}{\rho} = \frac{\hbar}{2 m i} \left( \frac{\nabla \Psi}{\Psi} - \frac{\nabla \Psi^{\ast}}{\Psi^{\ast}}\right) = \frac{\hbar}{m} \operatorname{Im}\left(\frac{\nabla \Psi}{\Psi}\right)$$

Last edited:
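A quick numerical cross-check of this identification may be useful (a sketch of my own, not from the thread): in 1-D with natural units $$\hbar = m = 1$$, the OP's $$\vec Q = \frac{\hbar}{m} P \nabla\phi$$ and the textbook current $$\vec J = \frac{\hbar}{m}\,\operatorname{Im}\left(\Psi^{\ast} \nabla \Psi\right)$$ should agree pointwise. The test state below is arbitrary, chosen only for illustration.

```python
import numpy as np

hbar, m = 1.0, 1.0                         # natural units, for illustration only
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

# Arbitrary smooth test state: Gaussian envelope with a position-dependent phase.
P = np.exp(-x**2)                          # probability density
phi = 2.0 * x + 0.3 * x**2                 # phase
psi = np.sqrt(P) * np.exp(1j * phi)

dpsi = np.gradient(psi, dx)                # numerical d(psi)/dx
J_textbook = (hbar / m) * np.imag(np.conj(psi) * dpsi)
Q_op = (hbar / m) * P * np.gradient(phi, dx)

# The two expressions agree up to finite-difference error.
print(np.max(np.abs(J_textbook - Q_op)))
```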
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9874101281166077, "perplexity": 223.75392441588858}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400203096.42/warc/CC-MAIN-20200922031902-20200922061902-00011.warc.gz"}
http://mathhelpforum.com/differential-geometry/75717-metric-space.html
# Math Help - Metric Space

1. ## Metric Space

Need help proving the following statements/giving counter-examples; (X,d) is a metric space

1: for all $a \in X$, $\bigcup_{n=1}^{\infty} B(1/n, a)$ is an open set
2: for all $a \in X$, $\bigcup_{n=1}^{\infty} B(1/n, a)$ is not an open set
3: if $x_n \rightarrow x$, then any subsequence of $x_n$ converges to $x$
4: if any subsequence of $x_n$ converges to $x$, then $x_n$ converges to $x$
5: if U is open then U=Int(Uconjugate)

So I guess, 1 is true, 2 false, 3 true, 4 false, 5 no idea. I'm generally stuck with it all; my idea for 1 is that in general you can always choose an ε greater than the radius of 1/n, so it is open... is this right?

2. Originally Posted by math_help
Need help proving the following statements/giving counter-examples; (X,d) is a metric space [...]
I don't understand what you mean by "you can always choose $\epsilon$ greater than 1/n so it is open". WHAT is open? For every n, of course B(1/n,a) is an open set, because all such neighborhoods are open. But to prove that you would need to assert that you can always choose $\epsilon$ less than 1/n, not greater than.

In any case, regarding (1) and (2): if the question is whether they are true for all metrics, then the answer is no for both. Note that $\lim_{n\rightarrow \infty}B(1/n, a)= \{a\}$. In the "discrete" metric (d(a,b) = 1 if a is not equal to b, 0 otherwise) all singleton sets are both open and closed, so (2) is not true; but if, in say R, d(a,b) = |a - b|, then all singleton sets are closed but not open, so (2) in that case is true.

As far as (4) is concerned, the English word "any" is ambiguous. I could see interpreting this as "If every subsequence...", in which case it is true. I could also see interpreting it as "if some subsequence...", in which case it is false.

For (5), I don't know what "Uconjugate" means. Did you mean U complement? In that case it is certainly not true! Or did you mean "U closure"? In that case it is true.

3. Sorry, never stated some things clearly. It simply says "everywhere (X,d) is a metric space", so I'd guess the question is implying it can be any metric space. For 1 and 2 I mean union, but it looks like you knew that one. 5 isn't union, and yes, U complement; sorry, stupid mistake there. Yeah, 4 is worded badly; my interpretation is that the question is "if there exists a subsequence of $x_n$ that converges to $x$, then $x_n$ also converges to $x$".

So 3 is true; how can I go about proving this? And for number 4 I'm confused about the concept of a sequence in a metric space, to be honest: naturally I want to say $\sin(k\pi/2)$, where $k$ is an integer, is a subsequence of $\sin(n)$ which converges to 1; however $\sin$ doesn't converge, but I can't get my head around it. Thanks in advance for any tips, and the original ones!

4. For #3, do you fully understand the definition of subsequence? There must be an increasing sequence of positive integers, $\alpha (n)$, such that $\left\{ {x_{\alpha (n)} } \right\} \subseteq \left\{ {x_n } \right\}$. If $p$ is the limit point of $x_n$, then for any open set $O$ with $p \in O$, $O$ contains almost all the terms of $x_n$.
So is that not also true of any subsequence?

5. Originally Posted by math_help
4: if any subsequence of $x_n$ converges to $x$, then $x_n$ converges to $x$ [...]
Statement 4 is false, as the following counterexample shows: take the sequence 1, -1, 1, -1, 1, -1, ... The sequence does not converge; it only has 1 and -1 as accumulation points. But every subsequence of the sequence has the limit 1 or -1.

6. Originally Posted by benes
Statement 4 is false, as the following counterexample shows [...]
Sorry, but that is not a counterexample to the statement. Did you carefully read the statement? "any subsequence of $x_n$ converges to $\color{blue}x$". $\color{blue}x$ is one number, not two as in your so-called example.
I also disagree with an earlier post. Surely $x_n$ is a subsequence of itself. So part 4 is true!

7. Originally Posted by Plato
Sorry, but that is not a counterexample to the statement. [...]
Yes, you're right, I misread the question, thank you. But which earlier post?
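Filling in the argument for #3 that post 4 sketches (a standard proof, written out here for completeness; notation as in post 4): suppose $x_n \rightarrow x$ and let $x_{\alpha (n)}$ be a subsequence, so $\alpha$ is strictly increasing. Given $\epsilon > 0$, choose $N$ so that $d(x_n, x) < \epsilon$ whenever $n \geq N$. An easy induction gives $\alpha (n) \geq n$ for every $n$, hence $d(x_{\alpha (n)}, x) < \epsilon$ whenever $n \geq N$; that is, $x_{\alpha (n)} \rightarrow x$.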
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9419700503349304, "perplexity": 865.5398756087974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447758.91/warc/CC-MAIN-20151124205407-00248-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www.meritnation.com/ask-answer/question/state-whther-the-following-are-true-false-give-reasons-i-whe/production-and-costs/3836713
State whether the following are true/false. Give reasons.

(i) When total revenue is constant, average revenue will also be constant.
(ii) Average variable cost can fall even when marginal cost is rising.
(iii) When marginal product falls, average revenue product will also fall.
(iv) Average revenue is also called price.
(v) Average revenue curve is also called the demand curve.
(vi) If the total cost curve is parallel to the x-axis, then marginal cost will be zero.

Please provide the reasons as well.

i. No, when total revenue is constant, average revenue will not be constant. In fact, the situation where the TR curve is a horizontal line (constant) does not exist. This is because even if the price is constant, output can never be constant. However, in some books it is given that when TR is constant, AR will be downward sloping, which is a mere flaw in the concept. Students are advised not to follow this concept.

ii. Yes, average variable cost can fall even when marginal cost is rising, so long as MC remains below AVC.

iii. No, this is not necessary. The reason for this lies in the situation when, after the point of inflexion (where MP reaches its maximum), MP starts falling while AP continues to rise. This happens because MP rises at a faster rate than AP and reaches its maximum point earlier than the maximum of AP. Hence, when MP starts falling, AP still rises.

iv. Yes, average revenue is also called price, because AR is defined as revenue earned per unit of output sold. Thus, AR is the same as the price of the output.

v. Yes, the average revenue curve is called the demand curve. This is because the average revenue curve shows the different quantities of output that the firm can sell at different prices or, in other words, it shows the demand for different quantities of output at different prices.

vi. Yes, if total cost is parallel to the x-axis, i.e. constant, marginal cost will be equal to zero. This is because MC is defined as the additional cost to total cost incurred for producing one more unit of output (it can be calculated as MC(n) = TC(n) - TC(n-1)). Thus, when TC is constant (parallel to the x-axis), TC(n) will be equal to TC(n-1) or, in other words, the additional cost to total cost is zero. Hence, when TC is parallel to the x-axis, MC will be zero.

i. AR will be falling, e.g. let TR be 10 20 30 40 at output 1 2 3 4; calculate AR, it seems to be falling. OR: when TR is maximum, AR is 0.
ii. yes
iii. not necessarily
iv. yes
v. yes
vi. yes, but this does not happen in actual situations, as TC = TVC + TFC; during a short period TVC changes with a change in output.

I want reasons as well :( where are the experts?
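To make (ii) and (vi) concrete, here is a small numerical sketch (the figures are my own, chosen only for illustration, not from the answer above):

```python
# (ii) AVC can keep falling while MC is rising, as long as MC stays below AVC.
tvc = [10, 16, 23, 30.5]                           # total variable cost at Q = 1..4
avc = [tvc[i] / (i + 1) for i in range(len(tvc))]  # [10.0, 8.0, 7.67, 7.625] -- falling
mc = [tvc[0]] + [tvc[i] - tvc[i - 1]               # MC at Q=1 assumes TVC(0) = 0
                 for i in range(1, len(tvc))]      # [10, 6, 7, 7.5] -- rising after Q = 2
print(avc, mc)

# (vi) When TC is constant (parallel to the x-axis), MC(n) = TC(n) - TC(n-1) = 0.
tc = [50, 50, 50, 50]
print([tc[n] - tc[n - 1] for n in range(1, len(tc))])   # [0, 0, 0]
```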
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.910016655921936, "perplexity": 1368.7884598296396}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522150.18/warc/CC-MAIN-20210121004224-20210121034224-00731.warc.gz"}
https://math.stackexchange.com/questions/1924904/urn-type-problem-with-bayes-theorem-and-mms-dont-understand-how-probability
# urn type problem with bayes theorem (and M&Ms): don't understand how probability of seeing 'evidence' was calculated

I was trying to follow the solution to the 'urn type' probability problem with M&M's from this page -> http://allendowney.blogspot.com/2011/10/all-your-bayes-are-belong-to-us.html

I have reproduced the problem below, but what I don't get is the skipped steps after he says "Plugging the likelihoods and the priors into Bayes's theorem, we get P(A|E) = 40 / 54 ~ 0.74."

I understand the formula:

P(E) P(A|E) = P(A) P(E|A)  =>  P(A|E) = P(A) P(E|A) / P(E)

And I got this far:

P(A) = .5
P(E|A) = .2 * .2
P(A|E) = (.5)(.2)(.2) / P(E)

But I am stuck on how the author of the post calculated P(E) (the probability of the evidence). Any guidance much appreciated!

M&M Problem

The blue M&M was introduced in 1995. Before then, the color mix in a bag of plain M&Ms was (30% Brown, 20% Yellow, 20% Red, 10% Green, 10% Orange, 10% Tan). Afterward it was (24% Blue, 20% Green, 16% Orange, 14% Yellow, 13% Red, 13% Brown). A friend of mine has two bags of M&Ms, and he tells me that one is from 1994 and one from 1996. He won't tell me which is which, but he gives me one M&M from each bag. One is yellow and one is green. What is the probability that the yellow M&M came from the 1994 bag?

Hypotheses:
A: Bag #1 from 1994 and Bag #2 from 1996
B: Bag #2 from 1994 and Bag #1 from 1996

Again, P(A) = P(B) = 1/2. The evidence is:

E: yellow from Bag #1, green from Bag #2

We get the likelihoods by multiplying the probabilities for the two M&Ms:

P(E|A) = (0.2)(0.2)
P(E|B) = (0.1)(0.14)

For example, P(E|B) is the probability of a yellow M&M in 1996 (0.14) times the probability of a green M&M in 1994 (0.1). Plugging the likelihoods and the priors into Bayes's theorem, we get P(A|E) = 40 / 54 ~ 0.74

• I think he got it wrong that bag 1 and bag 2 were picked! What he did pick is a sweet from the 1994 bag and a sweet from the 1996 bag; there was then a result of a yellow and a green sweet. You can work the prob of that result out: just add together the two exclusive ways it can happen. You can also see the prob of yellow coming from 1994 is .2 and green from 1996 is .2. – Cato Sep 13 '16 at 9:21

Notice that:

\begin{align*}
\Pr[E] &= \Pr[E \text{ and } (A \text{ or } B)] &\text{since $A$ and $B$ are the only two possibilities}\\
&= \Pr[(E \text{ and } A) \text{ or } (E \text{ and }B)] \\
&= \Pr[E \text{ and } A] + \Pr[E\text{ and }B] &\text{since $A$ and $B$ are mutually exclusive}\\
&= \Pr[A]\Pr[E \mid A] + \Pr[B]\Pr[E \mid B] \\
&= \frac{1}{2}(0.2)(0.2) + \frac{1}{2}(0.14)(0.1) \\
&= 0.027
\end{align*}

• I don't see why the half; the order he picks the sweets from the bags is irrelevant. He DOES pick a sweet from the 1994 bag with probability 1, and there is a .2 chance that it is yellow, in which case there is then a .2 chance he will go on and make yellow-green. Similar applies for making green-yellow (which is the same result to the observer): .14 x .1. – Cato Sep 13 '16 at 9:16
• Thanks, Adriano... nice trick with Pr[E] = Pr[E and (A or B)] -- worth remembering!
– Chris Bedford Sep 13 '16 at 22:53

A = the yellow M&M came from the 1994 bag
B = we are given one yellow and one green M&M

P(A) = .2 (the chance the 1994 sweet is yellow)
P(A and B) = .2 x .2 = .04

The above is to say that a yellow comes from the 1994 bag, with probability .2, AND the other sweet (now necessarily from the 1996 bag) is green, with probability .2.

For B to be true, green was taken from either 1994 or 1996:

P(B) = P(yellow from 1994)P(green from 1996) + P(green from 1994)P(yellow from 1996) = .2 x .2 + .1 x .14 = .04 + .014 = .054

P(A | B) = P(A and B) / P(B) = .04 / .054 = 0.74074

I don't really agree with his hypothesis about bag A and bag B with probability 1/2; that isn't necessary, and it is very confusing.
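For anyone who wants to check the arithmetic, here is a short numeric version of the computation (a sketch of mine, not from the original posts):

```python
# Posterior P(A|E) via Bayes, using the 1994/1996 M&M color mixes.
p_yellow_94, p_green_94 = 0.20, 0.10
p_yellow_96, p_green_96 = 0.14, 0.20

p_A = p_B = 0.5                              # prior: which bag is which
p_E_given_A = p_yellow_94 * p_green_96       # yellow from '94, green from '96
p_E_given_B = p_green_94 * p_yellow_96       # green from '94, yellow from '96

p_E = p_A * p_E_given_A + p_B * p_E_given_B  # ~ 0.027
p_A_given_E = p_A * p_E_given_A / p_E        # 0.02/0.027 = 20/27 ~ 0.74

print(p_E, p_A_given_E)
```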
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9882903099060059, "perplexity": 941.0837306939814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257432.60/warc/CC-MAIN-20190523224154-20190524010154-00408.warc.gz"}
https://mathoverflow.net/questions/261423/torsion-free-abelian-group-a-such-that-a-not-simeq-a-oplus-bbb-z-simeq-a
# Torsion-free abelian group $A$ such that $A \not \simeq A \oplus \Bbb Z \simeq A \oplus \Bbb Z^2$

Is there a torsion-free abelian group $A$ such that $A \not \simeq A \oplus \Bbb Z \simeq A \oplus \Bbb Z \oplus \Bbb Z$ (as groups)?

Notice that $\Bbb Z$ is not cancellable, so $A \oplus \Bbb Z \simeq (A \oplus \Bbb Z) \oplus \Bbb Z$ doesn't imply that $A \simeq A \oplus \Bbb Z$. Combined with this question, such a group $A$ would possibly provide an answer to that question.

An example of a torsion-free abelian group $A$ such that $A$ is isomorphic to $A \oplus \mathbb{Z}^2$ but not to $A \oplus \mathbb{Z}$ was given there. The example was the additive group of bounded sequences of elements of $\mathbb{Z}[\sqrt{2}]$, i.e.
$$A = \left\{ (x_n)_{n \geq 1} \subset \Bbb Z[\sqrt 2] \;\;:\;\; \exists C>0,\; \forall n \geq 1,\; |x_n| \leq C \right\}$$
I wasn't able to adapt this example in order to answer my question.

• I feel like I am lost in a maze of twisty little questions, all alike, which all ask whether there exists foo which is isomorphic to foobarbar but not to foobar, or something confusingly similar. It's one of the drawbacks of stackexchange that there isn't really a way to provide a summary of all of these questions at once. – Gro-Tsen Feb 5 '17 at 18:25

$\mathbb{Z}$ is cancellable for abelian groups. This was proved in the 1950s by Walker and Cohn (independently) and is often called "Walker's cancellation theorem". The proof is only a few lines. So if $A$ is an abelian group with $A\oplus\mathbb{Z}\cong A\oplus\mathbb{Z}^2$, then $A\cong A\oplus\mathbb{Z}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9033864140510559, "perplexity": 192.76981838690227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677964.40/warc/CC-MAIN-20191018055014-20191018082514-00271.warc.gz"}
http://mathhelpforum.com/pre-calculus/194134-hyperbolas-print.html
# Hyperbolas.

• December 12th 2011, 04:56 PM

I know how to write the equation of a hyperbola with its two branches (i.e. x^2/a^2 - y^2/b^2 = 1 or vice versa), but can you write the equation of just one branch of the hyperbola? Would it just be the equation of a parabola? I know you could restrict the domain for those with horizontal transverse axes, but what about for vertical, or in general?

• December 12th 2011, 06:04 PM
pickslides

Re: Hyperbolas.

Your hyperbola equation is centred at (0,0), so you can restrict the domain as x>0 or x<0.

• December 12th 2011, 07:01 PM

O.k., you require a hyperbola that opens up top and bottom, which is $\frac{y^2}{a^2}-\frac{x^2}{b^2}=1$, which is also centred at the origin; you can restrict $y>0$. And, in fact, for $y>0$, you can solve for y: $y= a\sqrt{1+ \frac{x^2}{b^2}}$.
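A quick numerical check of that last formula (my own sketch; the values of $a$ and $b$ are arbitrary): every point $(x,\, a\sqrt{1+x^2/b^2})$ should satisfy $\frac{y^2}{a^2}-\frac{x^2}{b^2}=1$ with $y>0$.

```python
import numpy as np

a, b = 2.0, 3.0                       # arbitrary illustrative values
x = np.linspace(-10.0, 10.0, 101)
y = a * np.sqrt(1 + x**2 / b**2)      # upper branch only (y > 0)

# Residual of the hyperbola equation; should be near machine epsilon.
print(np.max(np.abs(y**2 / a**2 - x**2 / b**2 - 1)))
```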
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9200818538665771, "perplexity": 558.7565971555664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397842.93/warc/CC-MAIN-20160624154957-00201-ip-10-164-35-72.ec2.internal.warc.gz"}
https://hxypqr.com/2017/11/29/transverse-intersections/
# transverse intersections

https://en.wikipedia.org/wiki/Transversality_(mathematics)

This problem may be an embarrassing one, but I could not prove it even in the 1-dimensional case. Here is the problem:

>**Question 1** $M$ is a compact $n$-dimensional smooth manifold in $R^{n+1}$; take a point $p\notin M$. Prove there is always a line $l_p$ passing through $p$ such that $l_p\cap M\neq \emptyset$ and $l_p$ intersects $M$ transversally.

You can naturally generalize it to:

>**Question 2** $M$ is a compact $n$-dimensional smooth manifold in $R^{n+m}$; take a point $p\notin M$. Prove that for all $1\leq k\leq m$ there is always a hyperplane $P_p$, $\dim(P_p)=k$, passing through $p$ such that $P_p\cap M\neq \emptyset$ and $P_p$ intersects $M$ transversally.

Thanks to Piotr for pointing out that we should assume "transverse" means "the tangent spaces intersect only at 0". We focus on Question 1 for simplicity.

Even in dimension 1 it is not easy, at least for me. **Warning**: a line $l$ passing through $p$ may intersect $M$ at several points forming a set $A_l$; $A_l$ could be finite, countable, or even uncountable (consider $M$ induced by a smooth function whose zero set is a Cantor set)... And if there is one point $a\in A_l$ at which $l$ is tangent to the tangent line of $M$, then $l$ does not intersect $M$ transversally.

**My attempt**: I can use a dimension argument and Sard's theorem to establish a similar result, but instead of a fixed point $p$, we prove that for a generic point in $R^{n+1}$ not in $M$ we can choose such a line. So it seems reasonable to develop this dimension technique to attack Question 1. In dimension 1 it relates to investigating the ordinary differential equation:

$\frac{f(x)-b}{x-a}=f'(x)$

where $p=(a,b)$ and $M$ has a parameterization $M=\{(x,f(x))\}$. If there is a counterexample to Question 1, then there is another solution which satisfies the ODE in the following sense: for every line $l$ there is at least one intersection point $a_l\in l\cap M$ at which $f$ satisfies the ODE. This is just as if the uniqueness of the solution of such an ODE were destroyed on some subset of a line carrying some special linear structure; I do not know if this point of view will be helpful.

Proof 1 (provided by fedja): an area trick. (Weakness: it seems we cannot prove this way that the transversal intersection points have positive measure.)

Proof 2 (provided by Piotr):

# For the codimension 1 case #

### Using the Thom transversality theorem. ###

Consider the maps $f_s:\mathbb{R} \to \mathbb{R}^n$ parametrized by $s \in S^{n-1}$ and given by $f_s(t) = p + t \cdot s$. The map $F(s,t) = f_s(t)$, $F:S^{n-1} \times \mathbb{R} \to \mathbb{R}^n$, is clearly transverse to $M$, thus Thom's transversality theorem says that $f_s$ is transverse to $M$ for almost all $s$. Now it suffices to prove that for an open set in $S^{n-1}$, the line given by $f_s$ intersects $M$. Proven below.

### Using Sard's theorem directly. ###

Thom's transversality is usually proven using Sard's theorem. Here is the idea. Consider the projection $\Pi:\mathbb{R}^n \setminus \{p\}\to S^{n-1}_p$ onto a sphere centered at $p$. A line $l_p$ through $p$ intersects $M$ transversally if the two points $l_p \cap S^{n-1}_p$ are regular values of $\Pi$ (indeed, the critical points of $\Pi$ are exactly the points $x \in M$ at which the normal $\vec n_x$ is perpendicular to the radial direction with respect to $p$). By Sard's theorem, the set of regular values is dense in $S^{n-1}_p$. We need to choose a point $s$ on the sphere for which both $s$ and $-s$ are regular values and the line $f_s$ through $p$ and $s$ actually intersects $M$. It suffices to prove that the set of points $s$ for which this line intersects $M$ contains an open set. We could now use the Jordan-Brouwer separation theorem and we would be done, but we can do it more directly (and in a way that seems to generalize).

### The set of points $s \in S$ for which $f_s$ intersects $M$ has nonempty interior. ###

For each point $q \notin M$ the projection $\Pi:M \to S_{q,\varepsilon_q}^{n-1}$ onto the sphere centered at $q$, of radius $\varepsilon_q$ small enough so that the sphere does not intersect $M$, has some (topological) degree $d_q$. It is easy to check that if one takes any point $x \in M$ and considers the points $x \pm \delta \vec n_x$ for small $\delta$, the degrees of the corresponding maps differ by $1$. It follows that we can find a point $q$ for which $d_q \neq d_p$, which guarantees that for every point $q'$ in a small open ball $B$ around $q$ (all these points have the same degree $d_q$), the line joining $p$ and $q'$ intersects $M$. The projection of $B$ on $S_p^{n-1}$ is the open set we sought.

# For the general case (partial solution) #

I think a similar reasoning should work; however, notice that for $k < m$ we cannot make $P_p$ intersect transversally with $M$ for dimensional reasons: the dimensions of $M$ and $P_p$ don't add up to at least $n+m$. Recall that transversality requires the tangent spaces to span the ambient tangent space. Thus, either (1) you want to consider $k \geq m$, or (2) define "transversal intersection" for such manifolds by saying that the tangent spaces have to intersect only at $0$.

Also, for $k>n$ we can just take any plane $P_p$ which works for $k=m$ and extend it to a $k$-dimensional plane.

### Assuming $k = m$. ###

A similar reasoning should work for $f_s:\mathbb{R}^m \to \mathbb{R}^{n+m}$ with $s = (s_1, \ldots, s_m)$ ranging over all families of pairwise perpendicular unit vectors, and $f_s(t_1,\ldots,t_m) = p+\sum_{j=1}^m t_j \cdot s_j$. Thom's transversality says that for almost all choices of $s$, the plane $f_s$ is transverse to $M$.

### The nonempty interior issue. ###

The only thing left is to prove that the set of $s$ for which the intersection is nonempty has nonempty interior. Last time we proved that there is a zero-dimensional sphere containing $p$, namely $\{p,q\}$, which has nonzero linking number with $M$; by deforming it to spheres $\{p,q'\}$ and taking lines through the pairs $p,q'$, we got an open set of parameters for which the line intersects $M$. Here we should be able to do a similar trick by finding an $(m-1)$-dimensional sphere with nonzero linking number with $M$. The ball that this sphere bounds has to intersect $M$, thus the plane $P$ containing the sphere has to intersect $M$. By perturbing the sphere we get spheres with the same linking number, and we get all the planes that lie in a neighbourhood of $P$; in particular, we get an open set of parameters $s$ for which $f_s$ intersects $M$.

Well, we don't actually need a *round* sphere, but we do need a *smooth* sphere that lies in an $m$-dimensional plane. There's some trickery needed to do this, but I am sure something like this can be done. Maybe somebody else can do it better?

### For $k<m$ ###

I don't really know how to attack this case, assuming "transverse" means "the tangent spaces intersect only at $0$".
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9502963423728943, "perplexity": 155.5091404453336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500076.87/warc/CC-MAIN-20230203221113-20230204011113-00709.warc.gz"}
https://cs.stackexchange.com/questions/21407/axiomatic-semantics-and-postconditions
# Axiomatic Semantics and Postconditions

I'll preface this by saying that this IS a homework question. However, when asked about how to solve it in class, (I believe) my professor was unable to complete it. The question is:

Compute the weakest precondition for each of the following assignment statements and postconditions:

$$a = a + 2b - 1\ \{a > 1\}$$

(where a > 1 is the postcondition)

His answer was: "$a > 2 - 2b$." Is this correct? It seems that he broke the rules of equivalence by using ">" as if it were "=".

I believe the answer is:

$$\{a>1\;\wedge\; b<\tfrac12\}\quad\text{or}\quad \{a > 0\;\wedge\;2b-1 +a >1\},$$

where the "$\wedge$" symbol means "and".

We're using Concepts of Programming Languages, by Sebesta, 10th Edition, so any references from that material would be excellent :) Thanks!

• I don't understand your question. Your second suggested answer is equivalent to $\{a>0 \;\wedge\; a>2-2b\}$, which is essentially the same as the answer by your professor that you're rejecting. The difference is that you're also including the requirement $a>0$ for no reason that I can see, so your precondition fails to be a weakest precondition. Feb 6, 2014 at 23:05
• Instead of telling me how I'm wrong, could you please explain properly? I understand that I am wrong; that is why I asked a question. Feb 6, 2014 at 23:49

A condition $P$ is weaker than $Q$ if $Q\Rightarrow P$; that is, whenever $Q$ holds, $P$ also holds or, if you prefer, $Q$ guarantees that $P$ holds. $P$ is strictly weaker than $Q$ if $Q\Rightarrow P$ but $P\not\Rightarrow Q$.

Let's look at the three conditions included in the question:

1. $a>2-2b$
2. $a>1 \;\wedge\;b>\tfrac12$
3. $a>0 \;\wedge\; a>2-2b$

(2) $\Rightarrow$ (3) since, if $a>1$ then certainly $a>0$ and, if $b>\tfrac12$, then $2-2b<1<a$. (3) $\Rightarrow$ (1) since, if $X$ and something else are both true, then certainly $X$ is true. None of the reverse implications holds: (1) is satisfied by $a=-1$, $b=2$ but (3) is not; (3) is satisfied by $a=\tfrac12$, $b=1$ but (2) is not. Therefore, (1) is strictly weaker than (3) which, in turn, is strictly weaker than (2). So $a>2-2b$ is the weakest of the preconditions on the table.

But is it the weakest possible? Suppose $P$ is any precondition that guarantees the postcondition after the assignment, i.e., $P$ implies that $a+2b-1>1$ holds beforehand. Rearranging, we see that $P\Rightarrow a>2-2b$, i.e., $P\Rightarrow$ (1). This tells us that no other precondition can be strictly weaker than (1). Therefore, (1) is the weakest precondition.
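A brute-force check of the two claims above (my own sketch, not from the course text): on a grid of values, the professor's condition (1) holds exactly when the postcondition holds after the assignment, while the stronger candidate (3) rules out points that (1) allows.

```python
from itertools import product

vals = [x / 4 for x in range(-40, 41)]        # grid of values in [-10, 10]

for a, b in product(vals, vals):
    post_after = (a + 2 * b - 1) > 1          # postcondition a > 1, after a := a + 2b - 1
    cond1 = a > 2 - 2 * b                     # the professor's precondition (1)
    assert cond1 == post_after                # (1) is both necessary and sufficient

# (3) is strictly stronger than (1): a = -1, b = 2 satisfies (1) but not (3).
a, b = -1, 2
assert (a > 2 - 2 * b) and not (a > 0 and a > 2 - 2 * b)
print("checks passed")
```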
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9341065287590027, "perplexity": 379.950450540103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016949.77/warc/CC-MAIN-20220528154416-20220528184416-00178.warc.gz"}
https://space.stackexchange.com/questions/2987/how-are-fuel-leaks-discovered-on-a-rocket/2990
How are fuel leaks discovered on a rocket?

Most are aware that the launch of GSLV was aborted due to a fuel leak, but how was the cryogenic fuel leak discovered? I'm interested not only in this case but in general: what techniques are employed to find fuel leaks?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8272227644920349, "perplexity": 1592.1353770896317}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251688806.91/warc/CC-MAIN-20200126104828-20200126134828-00260.warc.gz"}
http://mathoverflow.net/questions/11554/whats-the-use-of-a-complete-measure?sort=votes
# What's the use of a complete measure?

A complete measure space is one in which any subset of a measure-zero set is measurable. For what reasons would I want a complete measure space? The only reason I can think of is in the context of probability theory: using complete probability spaces forces almost-everywhere equal random variables to generate the same sigma-sub-algebra. Am I missing some other important technical reasons?

-

It certainly seems useful to have any subset of a set of measure zero be measurable: this is a great way to show that a set has measure zero. If you had to stop every time and check that such sets were actually measurable -- or worse, deal with the possibility that they are not -- it would be a real pain. –  Pete L. Clark Jan 12 '10 at 17:00

Since the existence of non-measurable sets is often seen as undesirable, we naturally want to have as many measurable sets as possible. With Lebesgue measure on the reals, for example, if we were to stop with the collection of Borel sets, we would only have continuum (c) many measurable sets. But when completing the measure, we gain 2^c many more measurable sets, incomparably more. The newly measurable sets are not just measure zero sets, of course, but all those sets that differ from a previously measurable set by (a subset of) a measure zero set.

But it isn't just about the number of measurable sets. Rather, completing the measure allowed us to increase (or even maximize, in a sense) our collection of measurable sets in a way that seems to accord completely with how we wanted to measure sets in the first place. It's a basic part of what we were trying to do with measure to be able to say that something that is less than negligible is also negligible.

-

This is a good formulation of what I wanted to say. –  Anweshi Jan 16 '10 at 3:22

Wikipedia gives one instance of a situation in which complete measures are needed, for the purpose of defining measures on product spaces. I suggest that you look into Rudin's "Real and Complex Analysis". There he makes an argument that the completion of an ordinary measure space into a complete measure space is just as fundamental to real analysis as the completion of the rationals to the reals. Many theorems in measure theory, for instance Fubini or Radon-Nikodym, need completeness to make full sense. Fubini is explained in the Wikipedia example.

To make the other aspect clear -- quite a few statements in measure theory use the notion of "almost everywhere" -- for instance the definition of $L^p$ spaces, or Radon-Nikodym. But this notion of "almost everywhere" (rather, "almost nowhere") becomes better if the measure space is complete. It would look really odd if you declared that some property holds almost nowhere because it holds only on some set with measure zero, arranged things so that some other property holds on a smaller set, and then found you were no longer able to make the assertion!

Added (Jan 16): There are problems in applications to ergodic theory, for instance. The definition of an ergodic transformation, and an ergodic theory built on it, will run into all sorts of problems if the underlying measure space is not complete. This is again because you need a proper notion of "almost everywhere" and "almost nowhere".

-

Anweshi -- I think this is the right answer (especially Fubini) so I upvoted it.
Note though that what you say at the end about rationals versus irrationals is too simplistic: no problems arise in adding or subtracting countable sets! –  Pete L. Clark Jan 12 '10 at 19:02

@Pete. Thanks for the observation. I have made an edit accordingly, removing the specific example and replacing it with a general situation. –  Anweshi Jan 12 '10 at 19:12

I have a few complaints about this answer: 1) the wikipedia article does not give an example where completeness is needed; it merely states that the product of complete measures is not necessarily a complete measure over the product space. 2) Fubini-Tonelli does not require completeness; however, it can make use of it to integrate messier things. 3) Radon-Nikodym does not depend on completeness. –  Matus Telgarsky Jan 13 '10 at 2:27

I have also argued that many theorems make use of the notion of almost everywhere, and also that this notion becomes much neater in the presence of a complete measure. Both theorems are instances. Wikipedia explains some ugliness in the case of Fubini. Of course, you can do measures without completeness; for instance, the Riesz representation theorem uses only Borel measures, and on the other hand you can also see every Borel measure as a functional. That's not the issue. The point is that Lebesgue theory is neater because of completeness. –  Anweshi Jan 13 '10 at 9:50

I didn't mention that argument since it was meta-mathematical; I think it is equally weird that subsets of any finite-measure set may be non-measurable, but this is something one must accept in order for measure theory to work at all. When working with a measure space, the $\sigma$-algebra dictates which sets exist at all; it's irrelevant what the measure of a non-existent set is. I think it is a sign that something is funny when the completion of Borel measure takes its size from that of $\mathbb{R}$ to that of $2^{\mathbb{R}}$.. –  Matus Telgarsky Jan 13 '10 at 9:51

In light of the comments here, I'm going to show why completeness can be a pain. In exercise 9 of section 2.1 of Folland, he develops a function $g: [0,1] \to [0,2]$ by $g(x) = f(x) + x$, where $f : [0,1] \to [0,1]$ is the Cantor function. In that exercise it is established that $g$ is a (monotonic increasing) bijection, and that its inverse $h = g^{-1}$ is continuous from $[0,2]$ to $[0,1]$.

Since $h$ is continuous, it is Borel measurable. On the other hand, $h$ is not $(\mathcal{L}, \mathcal{L})$-measurable!! In particular, let $C$ be the Cantor set; $m(g(C)) = 1$, and this means there is a subset $A \subseteq g(C)$ which is not Lebesgue measurable. On the other hand, $B := g^{-1}(A) \subseteq C$, whereas $m(C) = 0$; thus this preimage $B$ is Lebesgue measurable (with measure zero). But therefore $h^{-1}(B) = A$ is not Lebesgue measurable, meaning $h$ is not $(\mathcal{L}, \mathcal{L})$-measurable.

On one hand, this function is contrived. On the other hand, it shows that completing measures can mess things up. The typical definition of "measurable function" is a Borel measurable function, and I suppose reasons like the above led to this convention. I do not know the material Bridge references above, and so can't say what breaks when completeness is dropped. Although it seems mathematically convenient to throw in completeness, I don't know any examples in basic probability theory where it helps. For instance, Fubini-Tonelli can be formulated just fine without completeness. Your statement of the theorem need only mention completeness if your measures happen to be complete!
EDIT I corrected the nonsense in the second paragraph; also, I meant to talk about $(\mathcal L, \mathcal L)$-measurable functions, which I accidentally referred to as Lebesgue measurable (which means $(\mathcal L, \mathcal B)$-measurable). My whole point is that if you take the completion in the $\sigma$-algebra of the range space, the extra sets you added could map back to basically anything. I.e., it is somewhat nonsensical to add in all sorts of null sets, but not all sorts of finite measure sets. Sometimes completion gives you something you want, but sometimes it does not, as I showed here--the function is better behaved wrt the non-completed measure.

-

I think you mean g^{-1}(g(C))=C, and not g^{-1}([0,2])=C, since clearly g^{-1}([0,2])=[0,1]. But the former supports the rest of your argument. –  Joel David Hamkins Jan 13 '10 at 14:13

"On the other hand, it shows that completing measures can mess things up." I disagree. The function g is a perfectly good morphism between two complete measurable spaces. The codomain of g is the interval [0,2] equipped with the standard σ-algebra of Lebesgue measurable sets and the standard σ-ideal of Lebesgue sets of measure 0, whereas the domain of g is the completion of the following measurable space: the interval [0,1] is equipped with the standard σ-algebra of Borel sets, and for the σ-ideal of sets of measure 0 we take all Borel sets whose image under the map g has measure 0. –  Dmitri Pavlov Jan 13 '10 at 16:56

@joel thanks for the alert! i will fix the argument momentarily.. –  Matus Telgarsky Jan 14 '10 at 7:04
If we restrict our attention to complete measurable spaces, then the definition of morphism becomes significantly simpler: we have to require that the preimage of every element of B is an element of A and likewise for O and N and two maps are equivalent if they differ on an element of N. This definition is too general to be useful for measure theory. Once we restrict ourselves to the subcategory of localizable measurable spaces (all major theorems of measure theorem such as Riesz representation theorem and Radon-Nikodym theorem imply the property of localizability) the resulting category becomes contravariantly equivalent to the category of commutative von Neumann algebras, also known as W*-algebras. In my opinion this constitutes the best possible definition for the main category of measure theory, both in terms of conceptuality and effectiveness, just as the best way to define the category of affine schemes is to make it equal to the opposite category of the category of commutative rings. Such a viewpoint is unfortunately highly unlikely to be adopted by analysts (especially hard analysts) considering their unwillingness to study even the most elementary notions of category theory. - I think the viewpoint taken by you, ie represent a compact Hausdorff space by its $C^*$-algebra, and declare a measure to be a functional on it(like in Riesz Representation theorem) is what is adopted by noncommutative geometers, for their noncommutative analogies. So it is not true that analysts are not willing to take up category theory. Still, the base of the subject is in good old Lebesgue theory, and that must be first carried out in the traditional fashion. That is what analysis is all about. The essence of analysis is contained in that type of stuff, not in category theory. –  Anweshi Jan 14 '10 at 13:58 @Anweshi: I doubt that measure theory should be considered a part of analysis at all. For example, smooth manifolds were once considered part of analysis (think of multivariable calculus) but now they are not. The subject became much more clear and conceptual when it was detached from analysis. (Of course we still sometimes use analysis to prove theorems like Calabi-Yau theorem, but such proofs are not considered final and in the end of the day they will be replaced by more conceptual and geometric proofs.) Measure theory will undergo the same transition. –  Dmitri Pavlov Jan 14 '10 at 15:01 Technical remark: To be precise one should note that the category of measurable space is indeed a subcategory of compact Hausdorff spaces, however not every compact Hausdorff space corresponds to a measurable space (only extremally disconnected spaces do) and not every morphism between such spaces is a morphism of measurable spaces. –  Dmitri Pavlov Jan 14 '10 at 15:03 You seem to be saying that since category theory cannot see the difference between a measure and its completion, then there is no mathematically substantive difference. But surely huge structural differences between the Borel sets and the Lebesgue measurable sets of reals, say, are revealed by descriptive set theory. –  Joel David Hamkins Jan 14 '10 at 22:36 Perhaps that is an unneccessarily narrow view of what measure theory is? After all, perhaps some day a measure theorist will want to take a continuous image of a Borel set...and then find that she is actually a set theorist! 
:-) –  Joel David Hamkins Jan 16 '10 at 4:05 Hi Tom E, For Stochastic Processes, completion of the initial sigma-field of the natural filtration of the process with the negligeable sets of the Probability measure of the limit sigma-field of the filtration (coupled with right continuity of the underlying filtration) is really uselfull. It allows to find version of processes that are càdlàg (right continuous with left limit) under very general conditions. Càdlàg processes are the main object of study in Stochastic Processes analysis (only my point of view). As a matter of fact, this is so usefull, that those two conditions are called the "usual conditions" for the probability space and filtration on which process lives (the 3-tuple $(\Omega, (\mathcal{F}_t),P)$ is named a stochastic basis). If interested, you can have a look at Karatzas and Shreve's book on Brownian motion and Stochastic Calculus, but even if I realise that the matter might be quite far from your day-to-day mathematical activities this is certainly an example which shows how usefull completion of sigma field might be. Regards - Hi Tom E, Here is another reason: Let $E$ be a Borel set in Euclidean space. Then its image under a continuous map is always Lebesgue-measurable but in general not Borel measurable. Results like this make the completion useful; The theory behind this is the theory of analytic sets or the Souslin operation. - What's missing here is the Caratheodory extension process creates a complete measure space. Hence, to have a product measure that is not complete requires one use a different method to create it. If this is the case, then completeness is not required in Fubini or Tonelli, e.g. (R, A, mu), (R, B, nu) Borel measure spaces and product measure defined as mu X nu for sigma-algebra A X B. However, this is not the traditional way we construct a measure and in particular would not construct the Lebesgue measure on R2. However, if we take two measure spaces which are both not complete and create their product measure using the Caratheodory extension of mu X nu (a complete measure), the equality in the conclusion of Fubini and Tonelli would not necessarily hold. Hence, completeness of one or both measure spaces in the hypotheses is only important (required) if we want to construct our product measures using the Caratheodory extension process or if our product measure is complete. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9495843648910522, "perplexity": 307.09560312710147}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535920694.0/warc/CC-MAIN-20140909045839-00195-ip-10-180-136-8.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/6446-some-math-questions.html
# Math Help - some math questions

1. ## some math questions

I'm not too good at math at all. I need help with these three problems; any feedback or help will be greatly appreciated.

1) 108 divided by (6/7) to the 2nd power, plus 180 divided by (3/5) to the 2nd power
2) Find the base of a triangle with area 114 square inches and height 12 inches
3) 15q/16 x 8/30q -- multiply, and simplify if possible

These are the last three problems on my assignment... any help is greatly appreciated. Thank you.

2. Originally Posted by [email protected]
1) 108 divided by (6/7) to the 2nd power, plus 180 divided by (3/5) to the 2nd power
1.) 108/[(6/7)^2] + 180/[(3/5)^2], is that what you're referring to?

(6/7)^2 = 36/49; 108/(36/49) = 147 (bring the 49 to the numerator and multiply to get 5292, then divide by 36)

(3/5)^2 = 9/25; 180/(9/25) = 500

So: 147 + 500 = 647.

3. Originally Posted by [email protected]
2) Find the base of a triangle with area 114 square inches and height 12 inches
Area of a triangle = (1/2)*b*h. You're given that the area = 114, and you're given the height. Solve for the base.

114 = (1/2)*b*12
114 = 6*b

Divide both sides by 6: 19 = b

Therefore, the base is 19 inches.

4. Thanks man, this remedial math is killing me.

5. Originally Posted by [email protected]
3) 15q/16 x 8/30q -- multiply, and simplify if possible
Be careful with parentheses; I am assuming you mean the following:

(15q)/16 * 8/(30q)

The 15q and the 30q cancel (15q is one half of 30q), and you obtain:

1/16 * 8/2 = 1/4.

(You could have even simplified it further by reducing the 8 and the 16 to a 1/2 ratio, which would have made the multiplication slightly easier: so you would have had 1/2 * 1/2 = 1/4.)

6. Originally Posted by [email protected]
1) 108 divided by (6/7) to the 2nd power, plus 180 divided by (3/5) to the 2nd power
I will assume that 1 and 3 mean what is shown in the attachment.

1. Take the first term:

108/(6/7)^2 = (108 x 7^2)/6^2 = (108/(6x6)) x 7^2 = 3 x 7^2 = 3 x 7 x 7 = 147

Now take the second term:

180/(3/5)^2 = (180 x 5^2)/3^2 = (180/(3x3)) x 5^2 = 20 x 5^2 = 20 x 5 x 5 = 500

So: 108/(6/7)^2 + 180/(3/5)^2 = 147 + 500 = 647

RonL
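Since this arithmetic is easy to slip on, here is a quick exact check with Python's fractions module (my own sketch, not part of the thread):

```python
from fractions import Fraction as F

# 1) 108/(6/7)^2 + 180/(3/5)^2
print(F(108) / F(6, 7)**2 + F(180) / F(3, 5)**2)   # 647

# 2) area = (1/2)*b*h  =>  b = 2*area/h
print(F(2) * 114 / 12)                             # 19 (inches)

# 3) (15q/16) * (8/(30q)); the q's cancel, leaving
print(F(15, 16) * F(8, 30))                        # 1/4
```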
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9392794966697693, "perplexity": 2093.918876997213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
https://engineeringlibrary.org/reference/shear-web-beam-bending-air-force-stress-manual
Shear Web Beam Bending

This page provides the sections on shear web beam bending from the "Stress Analysis Manual," Air Force Flight Dynamics Laboratory, October 1986.

Beam Analysis Nomenclature

Af = cross-sectional area of tension or compression flange
Cr = rivet factor
E = modulus of elasticity
Fs = allowable web shear stress
Fscoll = collapsing shear stress for solid unstiffened webs
Fscr = critical (or initial) buckling stress
fb = calculated primary bending stress
fs = calculated shear stress
h = height of shear web beam between centroids of flanges
I = moment of inertia
Iu = moment of inertia of upright or stiffener about its base
M = applied bending moment
p = rivet spacing
q = shear flow
t = thickness
V = shear force
η = plasticity coefficient

1.3.2 Introduction to Shear Web Beams in Bending

The most efficient type of beam is one in which the material resisting bending is concentrated as near the extreme fiber as possible and the material resisting shear is a thin web connecting tension and compression flanges. The simplifying assumption that all the mass is concentrated at the centroids of the flanges may be made for such beams, thus reducing the simple beam formulas to fb = M/(Af h) for bending and to fs = V/(ht) for shear. The flanges resist all bending and the web resists all shear.

These beams are divided into two types, shear resistant and partial tension field beams. The webs of shear resistant beams resist the shear load without buckling, and the webs of partial tension field beams buckle at less than the maximum beam load. If $$\sqrt{V}/h$$ is less than seven, the use of a partial tension beam is recommended on the basis of weight economy; and the use of a shear resistant beam is recommended if $$\sqrt{V}/h$$ is greater than eleven. If $$7 < \sqrt{V}/h < 11$$, factors other than weight will determine the type of beam used.

1.3.2.1 Introduction to Shear Resistant Beams in Bending

If the web of a shear resistant beam is sufficiently thin, the simplifying assumption that all the mass is concentrated at the centroids of the flanges may be made. This reduces the simple beam formulas to

$$f_b = { M \over A_f h }$$ (1-15)

for bending, and

$$f_s = { V \over ht } = { q \over t }$$ (1-16)

for shear. The flanges resist all of the bending and the webs resist all of the shear. Unstiffened shear resistant beams are discussed in Section 1.3.2.2 while stiffened shear resistant beams are treated in Section 1.3.2.3.

1.3.2.2 Unstiffened Shear Resistant Beams in Bending

Both the web and flanges of an unstiffened shear resistant beam must be checked for failure. The flange is generally considered to have failed if the bending stress in it exceeds the yield stress of the material, although bending in the plastic range may be used if some permanent set can be permitted. The web must be checked for ultimate load as well as for collapse. If the web is not subject to collapse, the allowable average stress at ultimate load, $$F_s$$, will be either 85% of the ultimate strength in shear or 125% of the yield strength in shear. Figure 1-9 gives the collapsing stress for two aluminum alloys. It should be noted that for thinner webs (h/t > 60), initial buckling does not cause collapse. 
In conclusion, the required thickness of a thin unstiffened web is given by

$$t = { V \over h F_s }$$ (1-17)

or

$$t = { V \over h F_{scoll} }$$ (1-18)

whichever is larger.

1.3.2.3 Stiffened Shear Resistant Beams in Bending

The vertical stiffeners in a shear resistant beam resist no compressive load, as is the case for tension field beams, but only divide the web into smaller unsupported rectangles, thus increasing the web buckling stress. The flange, web, and rivets of such a beam must be analyzed.

1.3.2.4 Flanges of Stiffened Shear Resistant Beams

The flanges of a stiffened shear-resistant beam must be checked for yielding or ultimate strength by means of Equation (1-15) as in the case of unstiffened shear resistant beams.

1.3.2.5 Webs of Stiffened Shear Resistant Beams

The web panel of a stiffened shear-resistant beam must be checked for strength as well as for stability. The strength of such a web may be checked by Equation (1-16) as in the case of unstiffened shear resistant beams, and the stability of such a beam may be checked by Equation (1-19) in conjunction with Figures 1-10 through 1-16. The critical buckling stress of a web panel of height h, width d, and thickness t, is given by

$${ F_{scr} \over \eta } = K_s ~E ~\left({ t \over d }\right)^2$$ (1-19)

In this equation, Ks is a function of d/h and the edge restraint of the web panel. Figure 1-10 relates Ks to d/h and Iu/(ht^3). Once Ks has been found, Fscr/η may be obtained from the nomogram in Figure 1-11. Fscr may then be found from Figures 1-12 through 1-16. It should be noted that the moment of inertia of the stiffener, Iu, for Figure 1-10 should be calculated about the base of the stiffener (where the stiffener connects to the web). Also, the modulus of elasticity of the web has been assumed to be equal to that of the stiffeners.

1.3.2.6 Rivets in Shear Resistant Beams

Rivets are required to fasten the web to flange in shear resistant beams. In addition, rivets are used to fasten the web to the stiffener and the stiffeners to the flange in stiffened shear resistant beams.

1.3.2.6.1 Web-to-Flange Rivets in Shear Resistant Beams

The spacing and size of web-to-flange rivets should be such that the rivet allowable (bearing or shear) divided by q×p (the applied web shear flow times the rivet spacing) gives the proper margin of safety. The rivet factor, Cr = (rivet spacing - rivet diameter)/(rivet spacing), should not be less than 0.6 for good design and in order to avoid undue stress concentration.

1.3.2.6.2 Web-to-Stiffener Rivets in Shear Resistant Beams

No exact information is available on the strength required of the attachment of stiffeners to web in shear resistant beams. The data in Table 1-4 is recommended.

Table 1-4: Recommended Data for Web-to-Stiffener Rivets in Shear Resistant Beams

Web Thickness, in.    Rivet Size    Rivet Spacing, in.
0.102                 DD 6          1.10
0.125                 DD 6          1.00
0.156                 DD 6          0.90
0.188                 DD 8          1.00

1.3.2.6.3 Stiffener-to-Flange Rivets in Shear Resistant Beams

No information is available on the strength required of the attachment of the stiffeners to the flange. It is recommended that one rivet the next size larger than that used in the attachment of stiffeners to web, or two rivets of the same size, be used whenever possible. 
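Equations (1-17), (1-18) and (1-19) above are straightforward to mechanize. The sketch below is an editorial addition, not part of the manual; the function names and trial numbers are invented, and Ks must still be read from Figure 1-10:

def required_web_thickness(V, h, F_s, F_scoll):
    # Eq. (1-17)/(1-18): the required thickness is the larger of the two values
    return max(V / (h * F_s), V / (h * F_scoll))

def buckling_stress(K_s, E, t, d, eta=1.0):
    # Eq. (1-19): F_scr = eta * K_s * E * (t/d)**2
    # K_s depends on d/h and I_u/(h*t**3) via Figure 1-10
    return eta * K_s * E * (t / d) ** 2

# Illustrative values only:
print(required_web_thickness(V=8550.0, h=9.0, F_s=25000.0, F_scoll=20000.0))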
1.3.2.7 Sample Problem - Stiffened Shear Resistant Beams Find: The margin of safety of the web and the load on each web to flange rivet. Solution: From Equation (1-16) the web shear stress is given by $$F_s = { V \over h ~t } = { 8550 \over 9 (0.081) } = 11,720 ~\text{psi}$$ $${ d \over h } = { 6 \over 9 } = 0.667$$ and $${ I_u \over h ~t^3 } = { 0.0175 \over 9 (0.081)^3 } = 3.66$$ From Figure 1-10, Ks = 6.9. From Figure 1-11, Fscr = 12,500 psi. From Figure 1-14, Fscr = 12,500 psi. Since the critical buckling stress of the web is less than the yield stress, the most likely type of failure is buckling. Thus, the margin of safety of the web may be given by $$M.S. = { F_{scr} \over f_s } - 1 = { 12500 \over 11720 } - 1 = 0.06$$ The load per web-to-flange rivet is $$q \times p = { V \over h } ~p = { 8550 \over 9 } (0.625) = 594 ~\text{lb}$$
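The arithmetic in this sample problem can be re-derived in a few lines (an editorial cross-check, not part of the manual; the figure-derived values Ks = 6.9 and Fscr = 12,500 psi are taken as given):

V, h, t = 8550.0, 9.0, 0.081
f_s = V / (h * t)
print(f_s)                # ~11,728 psi, quoted as 11,720 psi

d, I_u = 6.0, 0.0175
print(d / h)              # 0.667
print(I_u / (h * t**3))   # ~3.66, giving K_s = 6.9 from Figure 1-10

F_scr = 12500.0           # read from Figures 1-11 / 1-14
print(F_scr / f_s - 1)    # margin of safety, ~0.066 (quoted as 0.06)

p = 0.625                 # rivet spacing, in.
print(V / h * p)          # ~594 lb per web-to-flange rivet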
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.814559280872345, "perplexity": 3550.155259275104}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540511946.30/warc/CC-MAIN-20191208150734-20191208174734-00530.warc.gz"}
https://en.wikipedia.org/wiki/Courant-Friedrichs-Lewy_condition
# Courant–Friedrichs–Lewy condition

In mathematics, the Courant–Friedrichs–Lewy (CFL) condition is a necessary condition for convergence while solving certain partial differential equations (usually hyperbolic PDEs) numerically by the method of finite differences.[1] It arises in the numerical analysis of explicit time integration schemes, when these are used for the numerical solution. As a consequence, the time step must be less than a certain time in many explicit time-marching computer simulations, otherwise the simulation will produce incorrect results. The condition is named after Richard Courant, Kurt Friedrichs, and Hans Lewy who described it in their 1928 paper.[2]

## Heuristic description

The principle behind the condition is that, for example, if a wave is moving across a discrete spatial grid and we want to compute its amplitude at discrete time steps of equal duration,[3] then this duration must be less than the time for the wave to travel to adjacent grid points. As a corollary, when the grid point separation is reduced, the upper limit for the time step also decreases. In essence, the numerical domain of dependence of any point in space and time (as determined by initial conditions and the parameters of the approximation scheme) must include the analytical domain of dependence (wherein the initial conditions have an effect on the exact value of the solution at that point) in order to assure that the scheme can access the information required to form the solution.

## The CFL condition

In order to make a reasonably formally precise statement of the condition, it is necessary to define the following quantities:

• Spatial coordinate: it is one of the coordinates of the physical space in which the problem is posed.
• Spatial dimension of the problem: it is the number ${\displaystyle n}$ of spatial dimensions, i.e. the number of spatial coordinates of the physical space where the problem is posed. Typical values are ${\displaystyle n=1}$, ${\displaystyle n=2}$ and ${\displaystyle n=3}$.
• Time: it is the coordinate, acting as a parameter, which describes the evolution of the system, distinct from the spatial coordinates.

The spatial coordinates and the time are supposed to be discrete-valued independent variables, which are placed at regular distances called the interval length[4] and the time step, respectively. Using these names, the CFL condition relates the length of the time step to a function of the interval lengths of each spatial coordinate and of the maximum speed with which information can travel in the physical space. Operatively, the CFL condition is commonly prescribed for those terms of the finite-difference approximation of general partial differential equations which model the advection phenomenon.[5]

### The one-dimensional case

For the one-dimensional case, the CFL condition has the following form:

${\displaystyle C={\frac {u\,\Delta t}{\Delta x}}\leq C_{\max }}$

where the dimensionless number C is called the Courant number,

• ${\displaystyle u}$ is the magnitude of the velocity (whose dimension is length/time)
• ${\displaystyle \Delta t}$ is the time step (whose dimension is time)
• ${\displaystyle \Delta x}$ is the length interval (whose dimension is length).

The value of ${\displaystyle C_{\max }}$ changes with the method used to solve the discretised equation, especially depending on whether the method is explicit or implicit. If an explicit (time-marching) solver is used then typically ${\displaystyle C_{\max }=1}$. 
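As an illustration of the explicit case just described (an editorial sketch, not part of the article; the function name and numbers are made up), the condition translates directly into an upper bound on the time step:

def max_stable_dt(u, dx, C_max=1.0):
    # Largest dt with Courant number C = u*dt/dx <= C_max
    return C_max * dx / abs(u)

# Wave speed 2 m/s on a grid with 0.01 m spacing:
print(max_stable_dt(u=2.0, dx=0.01))   # 0.005 s; halving dx also halves the allowed dt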
Implicit (matrix) solvers are usually less sensitive to numerical instability and so larger values of ${\displaystyle C_{\max }}$ may be tolerated.

### The two and general n-dimensional case

In the two-dimensional case, the CFL condition becomes

${\displaystyle C={\frac {u_{x}\,\Delta t}{\Delta x}}+{\frac {u_{y}\,\Delta t}{\Delta y}}\leq C_{\max }}$

with obvious meaning of the symbols involved. By analogy with the two-dimensional case, the general CFL condition for the ${\displaystyle n}$-dimensional case is the following one:

${\displaystyle C=\Delta t\sum _{i=1}^{n}{\frac {u_{x_{i}}}{\Delta x_{i}}}\leq C_{\max }.}$

The interval length is not required to be the same for each spatial variable ${\displaystyle \Delta x_{i},i=1,\ldots ,n}$. This "degree of freedom" can be used in order to somewhat optimize the value of the time step for a particular problem, by varying the values of the different interval lengths in order to keep the time step from becoming too small.

## Implications of the CFL condition

### The CFL condition is only a necessary one

The CFL condition is a necessary condition, but may not be sufficient for the convergence of the finite-difference approximation of a given numerical problem. Thus, in order to establish the convergence of the finite-difference approximation, it is necessary to use other methods, which in turn could imply further limitations on the length of the time step and/or the lengths of the spatial intervals.

## Notes

1. ^ In general, it is not a sufficient condition; also, it can be a demanding condition for some problems. See the "Implications of the CFL condition" section of this article for a brief survey of these issues.
2. ^ See reference Courant, Friedrichs & Lewy 1928. There exists also an English translation of the 1928 German original: see references Courant, Friedrichs & Lewy 1956 and Courant, Friedrichs & Lewy 1967.
3. ^ This situation commonly occurs when a hyperbolic partial differential operator has been approximated by a finite difference equation, which is then solved by numerical linear algebra methods.
4. ^ This quantity is not necessarily the same for each spatial variable, as is shown in the "The two and general n-dimensional case" section of this entry: it can be chosen in order to somewhat relax the condition.
5. ^ Precisely, this is the hyperbolic part of the PDE under analysis.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 15, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9433223605155945, "perplexity": 306.9184432998869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323970.81/warc/CC-MAIN-20170629121355-20170629141355-00450.warc.gz"}
https://infoscience.epfl.ch/record/148179
Infoscience Journal article Surface Electronic-Structure Of Ce In The Alpha-Phase And Gamma-Phase From a recent calculation of the electronic structure of Ce [Phys. Rev. B 43, 3137 (1991)] Eriksson et al. conclude that the alpha-phase is best described as a delocalized 4f-electron system. We show that the limited experimental evidence provided in support of this statement is not conclusive and does not allow one to discard the single-impurity model, which is consistent with all spectroscopic data.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8050815463066101, "perplexity": 1290.745679012222}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00159-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/deriving-a-trig-thing.125840/
# Deriving a trig thing

1. Jul 12, 2006

### Pseudo Statistic

Can anyone tell me how to derive the sin(x+y) and cos(x+y) expansions? The ones that are like cos x sin y or sin y cos x + other stuff?
Preferably, could this be derived with Euler's formula alone? Or something not too geometric? (All those OAs and OBs and XBs and XYs on geometric diagrams confuse me too much to follow)
Thank you.

2. Jul 12, 2006

### mathman

You could use Euler's formula. It is tedious, but straightforward.

3. Jul 12, 2006

### StatusX

Another way is to use the 2x2 rotation matrices R($\theta$), which have R(x)R(y) = R(x+y). This is equivalent to using Euler's formula, only you're working in R^2 instead of the complex numbers.

4. Jul 13, 2006

### VietDao29

There's a short proof at wikipedia; you can view it at the end of this page. It is, however, not a complete proof, but you can get some ideas about proving it. :)
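For completeness (an editorial addition, not part of the thread), the Euler's-formula computation mathman calls "tedious, but straightforward" takes only a few lines:

\begin{align*}
\cos(x+y) + i\sin(x+y) &= e^{i(x+y)} = e^{ix}e^{iy} \\
&= (\cos x + i\sin x)(\cos y + i\sin y) \\
&= (\cos x\cos y - \sin x\sin y) + i(\sin x\cos y + \cos x\sin y).
\end{align*}

Equating real and imaginary parts (for real x and y) gives cos(x+y) = cos x cos y - sin x sin y and sin(x+y) = sin x cos y + cos x sin y.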
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9603192210197449, "perplexity": 2811.4172923700125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818696182.97/warc/CC-MAIN-20170926141625-20170926161625-00195.warc.gz"}
http://mathhelpforum.com/advanced-statistics/51839-solved-density-function.html
# Math Help - [SOLVED] Density function

1. ## [SOLVED] Density function

I've got this problem:

f(y) = $(1/\alpha)my^{m-1}e^{-y^{m}/\alpha}$, y>0; 0, elsewhere

Now I'm supposed to find $E(Y^k)$ for any positive integer k. I start out like this:

$E(Y^k)$ = $\int_0^\infty y^{k}f(y) dy$ = $\int_0^\infty y^{k} (1/\alpha)my^{m-1}e^{-y^{m}/\alpha} dy$

I then move $(1/\alpha)$ and m outside the integral:

= $(1/\alpha)m$ $\int_0^\infty y^{k} y^{m-1}e^{-y^{m}/\alpha} dy$ = $(1/\alpha)m$ $\int_0^\infty y^{k+m-1}e^{-y^{m}/\alpha} dy$

and from here I don't know how to go on. The correct answer is $\Gamma (k/m+1)\alpha ^{k/m}$ and I know that $\Gamma (\alpha) = \int_0^\infty y^{\alpha-1}e^{-y} dy$, but I don't see how $(1/\alpha)m$ $\int_0^\infty y^{k+m-1}e^{-y^{m}/\alpha} dy$ can be rewritten into the correct answer.

Thanks in advance for your help.

2. Originally Posted by approx
I've got this problem: [...] Thanks in advance for your help.

3. mr fantastic: I'm sorry to say that I still don't understand how to rewrite that expression into the right answer.

4. Originally Posted by approx
mr fantastic: I'm sorry to say that I still don't understand how to rewrite that expression into the right answer.

The first three lines of the reference I've given you are crystal clear, I would have thought. What exactly don't you understand?

5. Thanks for your fast answer. I don't understand which substitutions I'm supposed to do. Should I let $u = y^m/\alpha$? which gives $(1/\alpha)m$ $\int_0^\infty y^{k+m-1}e^{-u} du$?

6. Originally Posted by approx
Thanks for your fast answer. I don't understand which substitutions I'm supposed to do. [...]

Your substitution is incorrect for a number of reasons:

1. You still have y in the integral; there should only be u's.

2. You have not substituted the correct expression for dy. $dy \neq du$!

Note that $u = \frac{y^m}{\alpha} \Rightarrow dy = \frac{\alpha}{m \, y^{m-1}}\, du$ and $y = (\alpha \, u)^{1/m}$.

It's expected that at this level you can correctly substitute a change of variable in an integral.

7. Ok. So I've done some thinking and came up with this substitution: $u = y^m$, which gives $y = u^{1/m}$ and $dy = (1/m) u^{(1/m)-1} du$. Right?

And then I go on:

$(m/\alpha) \int_0^\infty (u^{1/m})^{k+m-1}e^{-u/\alpha}(1/m) u^{(1/m)-1}du$

I move $1/m$ outside, which gives $(1/m)(m/\alpha) = (1/\alpha)$ outside the integral:

$(1/\alpha) \int_0^\infty (u^{1/m})^{k+m-1}e^{-u/\alpha} u^{(1/m)-1}du$ = $(1/\alpha) \int_0^\infty u^{((k-1)/m)+1}e^{-u/\alpha} u^{(1/m)-1}du$

I put together the u's and get:

$(1/\alpha) \int_0^\infty u^{k/m}e^{-u/\alpha} du$

Am I right so far?

8. Originally Posted by approx
Ok. 
So I've done some thinking and came up with this substitution: $u = y^m$, which gives $y = u^{1/m}$ and $dy = (1/m) u^{(1/m)-1} du$. Right?

And then I go on:

$(m/\alpha) \int_0^\infty (u^{1/m})^{k+m-1}e^{-u/\alpha}(1/m) u^{(1/m)-1}du$

I move $1/m$ outside, which gives $(1/m)(m/\alpha) = (1/\alpha)$ outside the integral:

$(1/\alpha) \int_0^\infty (u^{1/m})^{k+m-1}e^{-u/\alpha} u^{(1/m)-1}du$ = $(1/\alpha) \int_0^\infty u^{((k-1)/m)+1}e^{-u/\alpha} u^{(1/m)-1}du$

I put together the u's and get:

$(1/\alpha) \int_0^\infty u^{k/m}e^{-u/\alpha} du$

Am I right so far?

Yes. Now substitute $w = \frac{u}{\alpha} \Rightarrow u = \alpha w$.

9. Thank you! I got the right answer after the last substitution.
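A quick numeric sanity check of the result (an editorial addition, not part of the thread; the parameter values are arbitrary):

from math import gamma, exp
from scipy.integrate import quad

m, alpha, k = 3, 2.0, 2

pdf = lambda y: (m / alpha) * y**(m - 1) * exp(-y**m / alpha)
moment, _ = quad(lambda y: y**k * pdf(y), 0, float('inf'))

print(moment)                             # numerical E(Y^k)
print(gamma(k / m + 1) * alpha**(k / m))  # closed form; the two values agree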
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 56, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9544007778167725, "perplexity": 459.556861786777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298576.76/warc/CC-MAIN-20150323172138-00033-ip-10-168-14-71.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/72666-residues-print.html
residues

• Feb 9th 2009, 09:36 AM
asi123
1 Attachment(s)
residues
Hey guys.
How can I calculate the residues of this function (in the pic) at all of its singularity points? I'm kind of a newbie at this residues stuff and I could really use an example.

• Feb 9th 2009, 09:53 AM
Moo
Hello,
Quote:
Originally Posted by asi123
Hey guys. How can I calculate the residues of this function (in the pic) at all of its singularity points? [...]

z=0 is a double pole. z=1 is a simple pole.
The general formula for residues at poles of order n is:
$\text{Res}_{z=a} f(z)=\frac{1}{(n-1)!} \cdot \lim_{z \to a} \frac{d^{n-1}}{dz^{n-1}} \left\{(z-a)^n f(z)\right\}$

• Feb 9th 2009, 10:01 AM
Moo
Also, be careful of the removable singularities!
Poles are points where the limit is undefined. But removable singularities are points where the limit exists.
For example $\frac{z^2-3z+2}{(z-2)(z-4)}$
2 makes the denominator vanish, so you might think that it's a pole. But (z-2) also divides z²-3z+2 (because z²-3z+2=(z-1)(z-2)). So the limit when z goes to 2 is defined. It's a removable singularity.

• Feb 9th 2009, 10:04 AM
asi123
Quote:
Originally Posted by Moo
Also, be careful of the removable singularities! [...]

But I don't have any removable singularities in my function, right?
Thanks.

• Feb 9th 2009, 10:13 AM
Moo
No you don't, because
$z^2+z-1=\left(z-\frac{-1+\sqrt{5}}{2}\right)\left(z-\frac{-1-\sqrt{5}}{2}\right)$
it was just in case... because you might meet some in the future :P
There are also essential singularities, which are singularities that are neither poles nor removable singularities.
http://en.wikipedia.org/wiki/Mathema...mplex_analysis

• Feb 10th 2009, 12:45 AM
asi123
1 Attachment(s)
Ok, this is what I did (in the pic). Now I need to sum the residues to get the answer?
Thanks.

• Feb 11th 2009, 08:44 AM
Moo
It depends on where you integrate o.O
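Since the actual function is only in the attachment, here is a worked stand-in (an editorial addition; the function below is hypothetical, chosen to have the same pole structure Moo describes, namely a double pole at z=0 and a simple pole at z=1):

from sympy import symbols, residue, diff, limit

z = symbols('z')
f = 1 / (z**2 * (z - 1))                # hypothetical example, not the OP's f

print(residue(f, z, 1))                 # simple pole: lim (z-1)*f(z) = 1
print(residue(f, z, 0))                 # double pole: -1
print(limit(diff(z**2 * f, z), z, 0))   # same value, via the general formula with n = 2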
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9500956535339355, "perplexity": 1713.7646767688898}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661900.67/warc/CC-MAIN-20160924173741-00071-ip-10-143-35-109.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/33529/getting-the-header-for-the-references-using-biblatex-to-look-exactly-like-th?answertab=votes
# Getting the header for the references, using biblatex, to look *exactly* like the header for the main text, using apa.cls

I'm using apa.cls (in doc-mode) in conjunction with biblatex. The apa.cls prints a header for me on every page, except for when biblatex prints the bibliography. At this point, it seems like biblatex takes over and creates a different heading which says "REFERENCES". I would like to have exactly the same header for the references as for the rest of the paper.

In this thread (the same one which is linked to above), Gonzalo Medina suggested a solution using fancyhdr and emulating the header of the apa.cls. However, this seems to run into all kinds of trouble. It is hard to get the exact font size, thickness of the text, placement et cetera correct. For example, apa.cls puts the header text in the middle but nudges it a couple of pixels back and forth depending on whether it's written on an even or odd page (see original thread for more problems).

For consistency, I could just use fancyhdr to redefine the header for the whole document, but I'd like to use the header that apa.cls specifies (style-wise, it goes well with the main text). So, is there any way for me to do this?

Minimal (non)working example:

\documentclass[noapacite, twoside, doc]{apa}
\title{This is the Title}
\author{Me}
\usepackage{biblatex}
\usepackage{filecontents}
\begin{filecontents}{\jobname.bib}
@thesis{A01,
author = {Megalomanius, M.},
year = {1900},
title = {Why I am so great},
}
\end{filecontents}
\nocite{*}
\begin{document}
\newpage
\maketitle
\newpage
This is the first page.
\newpage
This is the second page.
\newpage
\printbibliography
\end{document}

This generates the following output (the positioning nudge isn't obvious in these screenshots since they're not lined up):

-

Say

\defbibheading{apa}[\refname]{\section*{#1}}

after having loaded biblatex and then

\printbibliography[heading=apa]

The default heading used by biblatex calls \section*{\refname} and \markright{\refname}. So the trick of defining a new header is what you're looking for.

-

I'll be damned. It works. Thanks! – Speldosa Nov 3 '11 at 22:16
@Speldosa I've added a bit of explanation (I was in a hurry when answering). – egreg Nov 3 '11 at 22:19
What a great answer. I exactly needed this. egreg I love you. – Henrik Feb 7 '12 at 16:41
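For later readers, here is the question's example with the accepted fix folded in (an untested editorial sketch; note the added \addbibresource line, which the MWE as posted omits and which biblatex needs in order to find the entries):

\documentclass[noapacite, twoside, doc]{apa}
\title{This is the Title}
\author{Me}
\usepackage{filecontents}
\begin{filecontents}{\jobname.bib}
@thesis{A01,
author = {Megalomanius, M.},
year = {1900},
title = {Why I am so great},
}
\end{filecontents}
\usepackage{biblatex}
\addbibresource{\jobname.bib}
\defbibheading{apa}[\refname]{\section*{#1}}
\nocite{*}
\begin{document}
\maketitle
This is the first page.
\printbibliography[heading=apa]
\end{document}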
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9107535481452942, "perplexity": 2649.839825474017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701160958.15/warc/CC-MAIN-20160205193920-00300-ip-10-236-182-209.ec2.internal.warc.gz"}
https://cstheory.stackexchange.com/questions/31210/approximating-distributions-from-samples/31231
Approximating distributions from samples

One claim I find in many papers about identity testing and closeness testing is that any distribution over $[n]$ can be approximated to within $\ell_1$ distance $\epsilon$ in $O\left(\frac{n}{\epsilon^2}\right)$ samples. I do not seem to be able to find a proof of this anywhere. On trying to prove it, I seem to be able to prove it if I assume that there exists $\delta>0$ such that $\min_{i \in [n]} P_i > \delta$, where $P$ is the true distribution. However, I do not seem to be able to prove it in the general case. My attempt at a proof is the following:

Let $P$ be any distribution over $[n]$. Let $X_1,X_2,\cdots,X_{m}$ be iid samples of $P$, where $m=c\frac{n}{\epsilon^2}$. Define $C_i$, $1 \leq i \leq n$, as $$C_i= \sum_{j=1}^{m} 1(X_j=i).$$ We note that $\mathbb{E}[C_i]= mP_i$. Further, from Chernoff bounding, we get that $$P[C_i > (1+\epsilon)mP_i] \leq \exp(-\epsilon^2 \frac{mP_i}{3})= \exp(-\frac{cnP_i}{3}).$$ Similarly, $$P[C_i < (1-\epsilon)mP_i] \leq \exp(-\frac{cnP_i}{2}).$$ Thus, defining $\hat{P}_i= \frac{C_i}{m}$ and $\hat{P}=(\hat{P}_1,\cdots,\hat{P}_n)$, by union bounding we get that with probability at least $1-\sum_{i=1}^n 2\exp(- \frac{cnP_i}{3})$, we have that $||\hat{P}-P||_1 \leq \epsilon$.

This gives a proof of the claim as long as there exists $\delta>0$ such that $\min_{i \in [n]} P_i > \delta$ (as the probability can be made arbitrarily close to 1 by increasing the constant). However, I do not know how I can generalise this result to the general case. I'd appreciate any help with respect to this.

I think it's a simple application of Hoeffding's inequality. Using your notation, let $Q_i = \frac1m C_i$, i.e. $Q$ is the empirical distribution that approximates $P$. The total variation distance between $P$ and $Q$, i.e. half the $\ell_1$ distance, is $$\max_{S \subseteq [n]} \left| \sum_{i \in S}{P_i} - \sum_{i \in S}{Q_i}\right|.$$ Let $P(S):= \sum_{i \in S}{P_i}$ and define $Q(S)$ analogously. The expectation of $Q(S)$ is $P(S)$ and by Hoeffding, $$\mathbb{P}(|Q(S) - P(S)| > \epsilon) \leq 2e^{-\epsilon^2 m}$$ If we take $m$ a large enough multiple of $n/\epsilon^2$, we have that $\mathbb{P}(|Q(S) - P(S)| > \epsilon) < 2^{-n}/3$ and by a union bound $$\mathbb{P}(\max_S |Q(S) - P(S)| > \epsilon) < 1/3.$$ So with probability at least 2/3, the total variation distance is at most $\epsilon$.

I seem to have resolved this question. The claim (on page 5 of this survey by Canonne: http://www.eccc.hpi-web.de/report/2015/063/) should have been that one can approximate a distribution to within $\ell_2$ distance $\epsilon$ in $O(\frac{n}{\epsilon^2})$ samples (he does not say in what sense the approximation holds). This seems to follow directly from an inequality called the Dvoretzky–Kiefer–Wolfowitz inequality. If anyone knows the stronger $\ell_1$ result to be true, I'd be grateful if they let me know.

• The DKW inequality will give an upper bound in Kolmogorov distance ($\ell_\infty$ norm for the CDF's) using $O(1/\varepsilon^2)$ samples. (Interestingly, this does not assume anything on the support -- and applies indifferently to discrete and continuous distributions). The additive $\ell_2$ approximation using $O(1/\varepsilon^2)$ samples can be shown e.g. this way. – Clement C. Oct 26 '15 at 14:32
• As for the $\ell_1$/TV case, you can show it directly as in Sasho Nikolov's answer, or from a more general result relating to "$\mathcal{A}$-norms" (of which the TV is a special case). 
See Devroye and Lugosi, "Combinatorial Techniques in Density Estimation" (2001, Chapters 3-4), or Theorems 2.1 and 2.2 of this paper ("Learning mixtures of structured distributions over discrete domains," by Chan et al.) – Clement C. Oct 26 '15 at 14:36

Have a look at Tugkan Batu, Lance Fortnow, Ronitt Rubinfeld, Warren D. Smith, Patrick White: Testing Closeness of Discrete Distributions. J. ACM 60(1): 4 (2013) for a comprehensive treatment. The abstract states:

Given samples from two distributions over an n-element set, we wish to test whether these distributions are statistically close. We present an algorithm which uses sublinear in $n$, specifically, $O(n^{2/3} \epsilon^{-8/3} \log n)$, independent samples from each distribution, runs in time linear in the sample size, makes no assumptions about the structure of the distributions, and distinguishes the cases when the distance between the distributions is small, less than $$\max(\epsilon^{4/3}n^{-1/3}/32, \epsilon n^{-1/2}/4)$$ or large (more than $\epsilon$) in ${\ell}_1$ distance. This result can be compared to the lower bound of $$\Omega(n^{2/3} \epsilon^{-2/3})$$ for this problem given by Valiant [2008]. Our algorithm has applications to the problem of testing whether a given Markov process is rapidly mixing. We present sublinear algorithms for several variants of this problem as well.

They distinguish between light and heavy elements of the distribution, among other techniques, to obtain these results.

• Actually I was reading this recent survey: eccc.hpi-web.de/report/2015/063. The author mentions this result in passing, saying that this is a trivial upper bound for all closeness problems, because we can get the distribution within $\epsilon$ in that many samples (on the first page of chapter 3). – Devil Apr 21 '15 at 4:18
• This is a different problem than just learning the distribution. (And this bound has been improved since then.) – usul Apr 22 '15 at 20:15
• Yes, it seems it answers a different problem. Is the standard approach to then delete this question, to prevent misunderstandings? Am I able to delete my answer to a question? – kodlu Apr 23 '15 at 2:19
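As a closing illustration of the learning claim in the question (an editorial addition; the constant c and the sizes below are arbitrary), the empirical distribution built from m = O(n/eps^2) samples is typically well within eps in l1 distance:

import numpy as np

rng = np.random.default_rng(0)
n, eps, c = 50, 0.1, 4
m = int(c * n / eps**2)

P = rng.dirichlet(np.ones(n))   # a random true distribution over [n]
Q = rng.multinomial(m, P) / m   # empirical distribution from m samples

print(np.abs(P - Q).sum())      # l1 error; typically well below eps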
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9756995439529419, "perplexity": 226.49543121265592}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655892516.24/warc/CC-MAIN-20200707111607-20200707141607-00171.warc.gz"}
http://www.planetmath.org/exampleofinducedrepresentation
# example of induced representation

To understand the definition of induced representation, let us work through a simple example in detail. Let $G$ be the group of permutations of three objects and let $H$ be the subgroup of even permutations. We have

$G=\{e,(ab),(ac),(bc),(abc),(acb)\}$
$H=\{e,(abc),(acb)\}$

Let $V$ be the one dimensional representation of $H$. Being one-dimensional, $V$ is spanned by a single basis vector $v$. The action of $H$ on $V$ is given as

$ev=v$
$(abc)v=\exp(2\pi i/3)v$
$(acb)v=\exp(4\pi i/3)v$

Since $H$ has half as many elements as $G$, there are exactly two cosets, $\sigma_{1}$ and $\sigma_{2}$ in $G/H$ where

$\sigma_{1}=\{e,(abc),(acb)\}$
$\sigma_{2}=\{(ab),(ac),(bc)\}$

Since there are two cosets, the vector space of the induced representation consists of the direct sum of two formal translates of $V$. A basis for this space is $\{\sigma_{1}v,\sigma_{2}v\}$. We will now compute the action of $G$ on this vector space. To do this, we need a choice of coset representatives. Let us choose $g_{1}=e$ as a representative of $\sigma_{1}$ and $g_{2}=(ab)$ as a representative of $\sigma_{2}$.

As a preliminary step, we shall express the product of every element of $G$ with a coset representative as the product of a coset representative and an element of $H$.

$e\cdot g_{1}=e=g_{1}\cdot e$
$e\cdot g_{2}=(ab)=g_{2}\cdot e$
$(ab)\cdot g_{1}=(ab)=g_{2}\cdot e$
$(ab)\cdot g_{2}=e=g_{1}\cdot e$
$(bc)\cdot g_{1}=(bc)=g_{2}\cdot(acb)$
$(bc)\cdot g_{2}=(abc)=g_{1}\cdot(abc)$
$(ac)\cdot g_{1}=(ac)=g_{2}\cdot(abc)$
$(ac)\cdot g_{2}=(acb)=g_{1}\cdot(acb)$
$(abc)\cdot g_{1}=(abc)=g_{1}\cdot(abc)$
$(abc)\cdot g_{2}=(bc)=g_{2}\cdot(acb)$
$(acb)\cdot g_{1}=(acb)=g_{1}\cdot(acb)$
$(acb)\cdot g_{2}=(ac)=g_{2}\cdot(abc)$

We will now compute the action of $G$ using the formula $g(\sigma v)=\tau(hv)$ given in the definition.

$e(\sigma_{1}v)=[e\cdot g_{1}](ev)=\sigma_{1}v$
$e(\sigma_{2}v)=[e\cdot g_{2}](ev)=\sigma_{2}v$
$(ab)(\sigma_{1}v)=[(ab)\cdot g_{1}](ev)=\sigma_{2}v$
$(ab)(\sigma_{2}v)=[(ab)\cdot g_{2}](ev)=\sigma_{1}v$
$(bc)(\sigma_{1}v)=[(bc)\cdot g_{1}]((acb)v)=\exp(4\pi i/3)\sigma_{2}v$
$(bc)(\sigma_{2}v)=[(bc)\cdot g_{2}]((abc)v)=\exp(2\pi i/3)\sigma_{1}v$
$(ac)(\sigma_{1}v)=[(ac)\cdot g_{1}]((abc)v)=\exp(2\pi i/3)\sigma_{2}v$
$(ac)(\sigma_{2}v)=[(ac)\cdot g_{2}]((acb)v)=\exp(4\pi i/3)\sigma_{1}v$
$(abc)(\sigma_{1}v)=[(abc)\cdot g_{1}]((abc)v)=\exp(2\pi i/3)(\sigma_{1}v)$
$(abc)(\sigma_{2}v)=[(abc)\cdot g_{2}]((acb)v)=\exp(4\pi i/3)(\sigma_{2}v)$
$(acb)(\sigma_{1}v)=[(acb)\cdot g_{1}]((acb)v)=\exp(4\pi i/3)(\sigma_{1}v)$
$(acb)(\sigma_{2}v)=[(acb)\cdot g_{2}]((abc)v)=\exp(2\pi i/3)(\sigma_{2}v)$

Here the square brackets indicate the coset to which the group element inside the brackets belongs. For instance, $[(ac)\cdot g_{2}]=[(ac)\cdot(ab)]=[(acb)]=\sigma_{1}$ since $(acb)\in\sigma_{1}$.

The results of the calculation may be more easily understood when expressed in matrix form:

$e\qquad\to\qquad\begin{pmatrix}1&0\cr 0&1\end{pmatrix}$
$(ab)\qquad\to\qquad\begin{pmatrix}0&1\cr 1&0\end{pmatrix}$
$(bc)\qquad\to\qquad\begin{pmatrix}0&\exp(2\pi i/3)\cr\exp(4\pi i/3)&0\end{pmatrix}$
$(ac)\qquad\to\qquad\begin{pmatrix}0&\exp(4\pi i/3)\cr\exp(2\pi i/3)&0\end{pmatrix}$
$(abc)\qquad\to\qquad\begin{pmatrix}\exp(2\pi i/3)&0\cr 0&\exp(4\pi i/3)\end{pmatrix}$
$(acb)\qquad\to\qquad\begin{pmatrix}\exp(4\pi i/3)&0\cr 0&\exp(2\pi i/3)\end{pmatrix}$

Having expressed the answer thus, it is not hard to verify that this is indeed a representation of $G$. 
For instance, $(acb)\cdot(ac)=(bc)$ and

$\begin{pmatrix}\exp(4\pi i/3)&0\cr 0&\exp(2\pi i/3)\end{pmatrix}\begin{pmatrix}0&\exp(4\pi i/3)\cr\exp(2\pi i/3)&0\end{pmatrix}=\begin{pmatrix}0&\exp(2\pi i/3)\cr\exp(4\pi i/3)&0\end{pmatrix}$
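The check above is quick to replicate numerically (an editorial addition, not part of the entry):

import numpy as np

w = np.exp(2j * np.pi / 3)   # exp(2*pi*i/3)
rho = {
    'e':   np.array([[1, 0], [0, 1]]),
    'ab':  np.array([[0, 1], [1, 0]]),
    'bc':  np.array([[0, w], [w**2, 0]]),
    'ac':  np.array([[0, w**2], [w, 0]]),
    'abc': np.array([[w, 0], [0, w**2]]),
    'acb': np.array([[w**2, 0], [0, w]]),
}

print(np.allclose(rho['acb'] @ rho['ac'], rho['bc']))   # True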
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 66, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951766729354858, "perplexity": 79.52000308283131}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864110.40/warc/CC-MAIN-20180621075105-20180621095105-00361.warc.gz"}
http://mathhelpforum.com/trigonometry/152728-transformations-trigonometric-graphs.html
# Math Help - transformations of trigonometric graphs

1. ## transformations of trigonometric graphs

h = Hcos(tπ/6). Find t when h = 2.5.
H = 3 (height of the high tide in meters), T = 12, and a high tide of 3 m occurs at 10 am.

So I went:
2.5 = 3cos(tπ/6)
2.5/3 = cos(tπ/6)
0.833 = cos(tπ/6)
Then you take cos^-1(0.833) = 33.65/180 x 23 = 4.3 (to convert to hours)
4.3 x 6 = tπ
25.8/π = t
t = 8.21

Yeah, I think I'm doing it wrong =( Thanks for any help.

2. Quote:
Then you take cos^-1(0.833) = 33.65/180 x 23 = 4.3

From where did you get 23? Actually 33.65 = t*π/6 expressed in degrees, that is, 33.65 = t*180/6. Now find t.

3. Quote:
Then you take cos^-1(0.833) = 33.65/180 x 23 = 4.3 (to convert to hours) [...]

If you intended to convert from degrees to radians, then it is $33.64(\pi/180)$, not "23"! Of course, once you have $\pi t/6= (33.64/180)\pi$ you can just cancel the "$\pi$"s: $t/6= 33.64/180$, so that $t= 33.64/30$. But since you knew the argument of cosine was in radians (the only time you use degrees is in problems specifically dealing with triangles in which the angles are given in degrees), it would be simpler to set your calculator to "radian" mode.

4. Hey guys, thanks for your help. Ok, I'm going to write out the whole question to prevent any confusion (I apologize for any inconvenience):

3. The height of the sea water (due to tides) above the mean sea level is given by the formula h = Hcos(tπ/6), where t is the time in hours after high tide and H is the height of the high tide in metres. Suppose a high tide of 3.0 metres occurs at 10am.

c. Find the times of the day when the height of the tide is 2.5m.

So by doing your calculation, t = 33.64/30 = 1.12 ... it's in radians, but don't I need to convert it to hours to get the times of day? That's why I multiplied by 33.65/180 x 23 = 4.3 hours.
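For what it's worth (an editorial addition, not part of the thread), part (c) comes out like this numerically:

from math import acos, pi

t = (6 / pi) * acos(2.5 / 3)   # hours after the 10 am high tide
print(t)                       # ~1.12 h, about 1 h 7 min

# cos is even and the period is 12 h, so h = 2.5 also at t = -1.12, 12 - 1.12, ...
# With high tide at 10:00, the tide is at 2.5 m at roughly
# 8:53 am, 11:07 am, 8:53 pm and 11:07 pm.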
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.866262674331665, "perplexity": 1769.4348004851163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461988.0/warc/CC-MAIN-20150226074101-00259-ip-10-28-5-156.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/reflection-refraction-question.119481/
# Reflection/Refraction Question

1. May 1, 2006

### skull

Ok, so here's a homework question. I honestly don't know where to begin.

A prism with an apex angle of 40 degrees induces a minimum deviation of 20 degrees. What is the prism made of?

Judging by what the question is asking for, I think I need to find the refractive index through Snell's law, but I don't know where to start. All help is appreciated.

2. May 1, 2006

### Staff: Mentor

Can you sketch how the light goes through the prism? Start with a single ray incident on one side (is the incident angle given or understood somehow?), and sketch what it does as it enters the prism glass; then it goes straight to the other face, where it bends again as it goes back into the air. The bend that happens at each of the two faces is governed by Snell's law, as you say. So draw the sketch and write the two Snell's law equations. Then for "minimum deviation", do they mean the angle of the red light ray that is still visible (red is deflected less by a prism -- just remember that red is on the outside of rainbows ROYGBIV)?

3. May 1, 2006

### Staff: Mentor

The "angle of deviation" of light through a prism depends on the angle of incidence with the prism surface. The minimum angle of deviation is a characteristic of a prism that depends on its apex angle and index of refraction. The formula for minimum deviation is probably in your book. (Good luck if you have to derive it.) It turns out that the minimum deviation occurs when the refracted beam travels parallel to the side opposite to the apex angle (the path is symmetric).
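For reference (an editorial addition; the formula below is the standard minimum-deviation relation the mentor alludes to, n = sin((A + D)/2) / sin(A/2), not something stated in the thread):

from math import sin, radians

A, D = 40.0, 20.0   # apex angle and minimum deviation, in degrees
n = sin(radians((A + D) / 2)) / sin(radians(A / 2))
print(n)            # ~1.46, close to the index of fused quartz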
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8592875003814697, "perplexity": 565.0128642160004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186774.43/warc/CC-MAIN-20170322212946-00178-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/porential-across-current-source.268525/
# Homework Help: Potential across current source

1. Nov 1, 2008

### Altairs

1. The problem statement, all variables and given/known data

Title says it all. Question is attached.

2. Relevant equations

Simple KCLs, KVLs, etc.

3. The attempt at a solution

What I don't get is this: isn't $$I_{R_{4}} = \frac{R_{3}}{R_{3}+R_{4}} * I1$$?

File size: 13.2 KB
Views: 103

2. Nov 1, 2008

### CEL

This expression is for a current divider. It would be valid if R3 and R4 were in parallel, which they are not. Write Kirchhoff's equations for the 3 loops to obtain the voltages on the terminals of R4 and R2. The voltage on the source is the difference between them.
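The circuit itself is in the attachment (not shown), so as a purely hypothetical illustration of the loop-equation method CEL describes (all component values invented):

from sympy import symbols, Eq, solve

i1, i2 = symbols('i1 i2')
V, R1, R2, R3 = 10, 2, 3, 4   # made-up source and resistors

# KVL around two hypothetical loops sharing R3:
eq1 = Eq(V - R1 * i1 - R3 * (i1 - i2), 0)
eq2 = Eq(-R3 * (i2 - i1) - R2 * i2, 0)

print(solve([eq1, eq2], [i1, i2]))   # mesh currents; element voltages follow from Ohm's law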
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8210521340370178, "perplexity": 2818.9325620941504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867254.84/warc/CC-MAIN-20180525235049-20180526015049-00441.warc.gz"}
http://tex.stackexchange.com/questions/101479/how-to-build-two-different-tex-files-from-same-tex-file
# How to build two different .tex files from same .tex file

When I prepare an exercise list for my students I usually also include my own solutions there. I prepare these documents with some macros based on mdframed, and with the help of the comment package I easily obtain two pdfs, one with exercises and another one with exercises and solutions.

I encourage my students to try to write their solutions with LaTeX, so I give them the .pdf file with exercises and a source .tex file without answers. This way, they only have to worry about writing their solutions because the format is already provided.

The problem is that I need to have two source files, one with problems+solutions and another one only with problems. What I would like is to have only one .tex file with problems and solutions and, from it, be able to extract another .tex file without solutions.

As a possible framework, suppose a main .tex file like:

% This is main.tex file
\documentclass{article}
\begin{document}
\begin{exercise}
This is the first exercise
\begin{solution}
This is my solution
\end{solution}
\end{exercise}
\end{document}

from which I want to easily obtain a similar one with empty solutions:

% This is student.tex file
\documentclass{article}
\begin{document}
\begin{exercise}
This is the first exercise
\begin{solution}
\end{solution}
\end{exercise}
\end{document}

I suppose I'm looking for something like a .dtx file, but I've never used one and maybe there are better solutions. I'm working on Windows so grep commands won't work.

-

I would write a short parser, which just cuts out the text between \begin{solution} and \end{solution}. But I am not sure whether an analog is possible through LaTeX itself, too. – Dominikus K. Mar 8 '13 at 15:32
Have you considered the exam document class? It won't give you two tex sources, but as @DominikusK. says, you can easily write a parser. – Sean Allred Mar 8 '13 at 15:51
@JLDiaz: No. I do need two .tex files because I want to give one to the students. I know how to get different pdf from same file. – Ignasi Mar 8 '13 at 17:30
@SeanAllred I know exam, exsheets, and some other classes for exercises. With them I can produce different pdfs but I want different source (.tex) files. – Ignasi Mar 8 '13 at 17:32
Did you check the extract package? – mbork Mar 8 '13 at 22:19

# Update.

Added support to leave the empty environment in the copy, removing only its contents, or to remove also the \begin...\end pair (by default).

I programmed a LuaLaTeX solution and tried to make it flexible enough.
These are the files which compose the solution:

## remove-env.lua

-- remove-env.lua
omittedEnvironments = {}
omitFileSuffix = "-without"
leaveEmptyEnvs = false

function shouldOmit(line)
  for i,v in ipairs(omittedEnvironments) do
    if (string.find(line, "\\begin{"..v.."}")~=nil) then
      return true
    end
  end
  return false
end

function shouldResume(line)
  for i,v in ipairs(omittedEnvironments) do
    if (string.find(line, "\\end{"..v.."}")~=nil) then
      return true
    end
  end
  return false
end

function dumpfile()
  myout = io.open(tex.jobname..omitFileSuffix..".tex", "w")
  myin = io.open(tex.jobname..".tex", "r")
  omitting = false
  for line in myin:lines() do
    if (not omitting and shouldOmit(line)) then
      if (leaveEmptyEnvs) then myout:write(line.."\n") end
      omitting = true
    end
    if (not omitting) then
      myout:write(line.."\n")
    end
    if (omitting and shouldResume(line)) then
      if (leaveEmptyEnvs) then myout:write(line.."\n") end
      omitting = false
    end
  end
  myout:close()
  myin:close()
end

## remove-env.tex

\directlua{dofile("remove-env.lua")}
\def\omitEnvironment#1{\directlua{table.insert(omittedEnvironments, "#1")}}
\def\omitFileSuffix#1{\directlua{omitFileSuffix="#1"}}
\def\leaveEmptyEnvs{\directlua{leaveEmptyEnvs=true}}
\def\removeEmptyEnvs{\directlua{leaveEmptyEnvs=false}}
\AtEndDocument{\directlua{dumpfile()}}

## MWE.tex

\input remove-env
\documentclass{article}
\usepackage{fontspec}
\usepackage{lipsum}
\newenvironment{solution}{}{}
\omitEnvironment{solution}
\omitFileSuffix{-sans-sol}
\begin{document}\parindent0pt\parskip1em
1. \lipsum[1]\hrulefill\par
\begin{solution}
2. \lipsum[2]\hrulefill\par
\end{solution}
3. \lipsum[3]\hrulefill\par
\end{document}

This MWE defines a no-op solution environment which acts simply as markup, but of course you can define it in a way that produces some effect in the pdf. Macro \omitEnvironment specifies the environment you want to omit. You can use this macro several times to specify several environments, and all of them will be omitted. Macro \omitFileSuffix specifies the suffix that will be appended to the output filename.

Run:

$ lualatex MWE.tex

And you will get two files (and all the usual auxiliary files, of course):

• MWE.pdf will be generated as usual, and all the contents (including omitted environments) will be present.
• MWE-sans-sol.tex is a copy of MWE.tex in which all solution environments are removed.

$ diff MWE.tex MWE-sans-sol.tex
11,13d10
< \begin{solution}
< 2. \lipsum[2]\hrulefill\par
< \end{solution}

If you want to remove only the contents of the solution but leave the empty environment, you only have to specify \leaveEmptyEnvs at some point of MWE.tex. In this case the diff will show:

$ diff MWE.tex MWE-sans-sol.tex
12d11
< 2. \lipsum[2]\hrulefill\par

PS: Thanks to Scott H., who suggested that I not use luatex callbacks, which was my first (and too convoluted) approach

-

Thank you very much! It works. Now I have a perfect excuse to learn LuaLaTeX. – Ignasi Mar 11 '13 at 15:07
My first lualatex lesson: changing \directlua{dofile("remove-env.lua")} to \directlua{require("remove-env.lua")} I can have both remove-env files in my localtex folder instead of working folder. – Ignasi Mar 11 '13 at 15:35

A TeX only solution; it assumes that you don't have any other environment whose name starts with the string soluti other than solution.

-
A TeX only solution; it assumes that you don't have any other environment whose name starts with the string soluti other than solution.

Prepare the following extract.tex file:

```latex
\newread\texfileread
\newwrite\texfilewrite
\openin\texfileread=ignasimain.tex % put here the main file name
\immediate\openout\texfilewrite=ignasistudents.tex % put here the secondary file name
\edef\BEGINSOLUTI{\string\begin\string{soluti}
\edef\ENDSOLUTION{\string\end\string{solution}
\newif\ifwritesolution
\writesolutiontrue
\long\def\ignasidecide#1#2#3#4#5#6#7#8#9\relax{%
  \def\temp{#1#2#3#4#5#6#7#8}%
  \ignasidecideaux#9}
\long\def\ignasidecideaux#1#2#3#4#5#6\relax{%
  \ifnum\pdfstrcmp{\temp#1#2#3#4#5}{\BEGINSOLUTI}=0
    \immediate\write\texfilewrite{\ignasiline^^J}
    \writesolutionfalse
  \else
    \ifnum\pdfstrcmp{\temp#1#2#3#4#5}{\ENDSOLUTION}=0
      \writesolutiontrue
    \fi
  \fi
  \ifwritesolution
    \immediate\write\texfilewrite{\ignasiline}
  \fi
}
\endlinechar=-1
\newlinechar=\^^J
\loop\unless\ifeof\texfileread%
  \readline\texfileread to \ignasiline%
  \expandafter\ignasidecide\ignasiline%
  \relax\relax\relax\relax%
  \relax\relax\relax\relax%
  \relax\relax\relax\relax%
  \relax\relax%
\repeat%
\csname bye\endcsname%
\csname @@end\endcsname
```

Change the file names as desired. Put this file along with the main file and compile with pdftex or pdflatex (it's the same). With your example file, the resulting file will be

```latex
% This is main.tex file
\documentclass{article}
\begin{document}
\begin{exercise}
This is the first exercise
\begin{solution}

\end{solution}
\end{exercise}
\end{document}
```

Basically we read the main file line by line (ignoring category codes) with \readline; if the line starts with \begin{soluti then we write out the line with a blank line following it and set a conditional to false; if the line starts with \end{solution then the conditional is set again to true. The current line is written out if the conditional is true.

-

Here's an awk program that generates a TeX file with everything but the contents of solution environments, leaving those environments for students to fill in. A hack for windows follows.

```awk
#!/usr/bin/awk
#
# Filter out solution environment
#
BEGIN{ printing = 1; }
/begin\{solution/ {
    print
    print " write your answer here"
    printing = 0;
}
printing >0 { print; }
/end\{solution/ {
    print
    printing = 1;
}
```

@HendrikVogt notes that you're on windows. This might work for you: http://gnuwin32.sourceforge.net/packages/gawk.htm

Edit: A clumsy windows solution. It uses an online bash shell to save a copy of the awk program, save a copy of the TeX source, then execute the program on the source. IMPERFECTION: solution seems to eat white space at the beginning of a line. TeX wouldn't care, but users might.

```bash
cat << 'EOF' > /tmp/myprog
#!/usr/bin/awk
BEGIN{ printing = 1;}
/begin\{solution/ { print; print "your answer here"; printing = 0;}
printing >0 { print}
/end\{solution/ { print; printing = 1;}
EOF
#
# paste your TeX document here
cat << 'EOF' > /tmp/mytex
\documentclass{article}
\begin{document}
Test mathematics: $e^{i\pi} = -1$.
Test "double" and 'single' quotes and a *.
\begin{exercise}
Question here: $2 + 2 = ?$.
\begin{solution}
$4$
\end{solution}
\end{exercise}
\end{document}
EOF
awk -f /tmp/myprog /tmp/mytex
```

Output for this test:

Thanks to http://stackoverflow.com/questions/15329323/here-document-that-disables-shell-parsing for the here-document syntax to disable shell interpretation.
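Since the asker is on Windows, the same line filter is easy to carry over to Python, which runs there without extra tooling. This is an added sketch, not part of the answer above; like the awk program, it assumes that \begin{solution} and \end{solution} each sit on a line of their own, and the file names are just examples:

```python
import re
import sys

def strip_solutions(src, dst, env="solution"):
    """Copy src to dst, dropping the lines inside every `env` environment.

    The \\begin{...} and \\end{...} lines themselves are kept, so the
    student file keeps an empty environment to fill in.
    """
    begin = re.compile(r"\\begin\{" + re.escape(env) + r"\}")
    end = re.compile(r"\\end\{" + re.escape(env) + r"\}")
    inside = False
    with open(src, encoding="utf-8") as fin, \
         open(dst, "w", encoding="utf-8") as fout:
        for line in fin:
            if begin.search(line):
                fout.write(line)   # keep the \begin{solution} line
                inside = True
            elif end.search(line):
                fout.write(line)   # keep the \end{solution} line
                inside = False
            elif not inside:
                fout.write(line)   # ordinary line outside a solution

if __name__ == "__main__":
    # e.g.  python strip_solutions.py main.tex student.tex
    strip_solutions(sys.argv[1], sys.argv[2])
```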
-

Please note that the OP works on Windows and doesn't look for a solution with grep and friends. (On a Unix system I'd use such an external program, too!) – Hendrik Vogt Mar 8 '13 at 15:59

If I had access to a Windows machine, I would advise that VBS be looked into as an alternative. Installing open ports is a pain on Windows, and it's much easier to just use the built-in tools (even if far inferior) for single-purpose things like this. – Sean Allred Mar 8 '13 at 20:43

@HendrikVogt: grep is available for Windows. There's also CygWin that provides a Unix-like interface and accompanying command-line programming functionality, including grep. – Werner Mar 9 '13 at 0:49

Also $ in the tex could cause trouble. Imagine the document contains the formula $PWD$. – JLDiaz Mar 9 '13 at 1:36

@JLDiaz With a here-document quotes work and $$ half works. Any suggestions? – Ethan Bolker Mar 9 '13 at 21:39

I had the same problem. I ended up using docstrip. The system I set up produces:

• (i) separate tex and pdf files for me (with solutions) and the students (without solutions),
• (ii) several versions of the exercise, for grouping students, and
• (iii) typesets only the selected topic(s), allowing me to keep all the exercises in the same file.

Here's the outline of the main file containing exercises and solutions:

```latex
% This is exercises.tex
\documentclass{article}
\newenvironment{exercise}{}{}
\newenvironment{solution}{}{}
\title{Subject\\\normalsize homework topic:
%<topic1>topic 1
%<topic2>topic 2
}
\author{}
\begin{document}
\maketitle
%<*topic1>
\begin{exercise}
  Exercise text (topic 1)
  %<A> Version for group A,
  %<B> Version for group B,
  %<C> Version for group C,
  %<D> Version for group D.
  More exercise text.
\end{exercise}
%<*!student>
\begin{solution}
  Solution, visible only in my copy:
  %<A> for group A,
  %<B> for group B,
  %<C> for group C,
  %<D> for group D.
\end{solution}
%</!student>
%<*student>
% Template for student's answer.
\begin{solution}
\end{solution}
%</student>
%</topic1>
%<*topic2>
\begin{exercise}
  Exercise text (topic 2)
  %<A> Version for group A,
  %<B> Version for group B,
  %<C> Version for group C,
  %<D> Version for group D.
  More exercise text.
\end{exercise}
%<*!student>
\begin{solution}
  Solution, visible only in my copy:
  %<A> for group A,
  %<B> for group B,
  %<C> for group C,
  %<D> for group D.
\end{solution}
%</!student>
%<*student>
% Template for student's answer.
\begin{solution}
\end{solution}
%</student>
%</topic2>
\end{document}
```

And this is the .ins file for topic1:

```latex
% This is exercises.ins
\input docstrip
\nopreamble\nopostamble
\askforoverwritefalse
\generate{%
  \file{exercises-topic1-A.tex}{\from{exercises.tex}{student,topic1,A}}%
  \file{exercises-topic1-B.tex}{\from{exercises.tex}{student,topic1,B}}%
  \file{exercises-topic1-C.tex}{\from{exercises.tex}{student,topic1,C}}%
  \file{exercises-topic1-D.tex}{\from{exercises.tex}{student,topic1,D}}%
  \file{exercises-topic1-ME.tex}{\from{exercises.tex}{topic1,A,B,C,D}}%
  %\file{exercises-topic2-A.tex}{\from{exercises.tex}{student,topic2,A}}%
  %\file{exercises-topic2-B.tex}{\from{exercises.tex}{student,topic2,B}}%
  %\file{exercises-topic2-C.tex}{\from{exercises.tex}{student,topic2,C}}%
  %\file{exercises-topic2-D.tex}{\from{exercises.tex}{student,topic2,D}}%
  %\file{exercises-topic2-ME.tex}{\from{exercises.tex}{topic2,A,B,C,D}}%
}
\endbatchfile
```

Finally, a very badly written makefile --- nothing more than a camouflaged bash script, really. (I guess that ideally, running make should produce the .ins file automatically ... ahh, some day...)

```make
all:
	pdftex exercises.ins
	bash -c 'for I in {A,B,C,D,ME}; do pdflatex exercises-topic1-$$I ; done'
	#bash -c 'for I in {A,B,C,D,ME}; do pdflatex exercises-topic2-$$I ; done'
```

-

The simplest option is probably the extract package, mentioned by mbork in comments.
However, this method will not allow you to nest your solution environment inside the exercise environment.

```latex
\documentclass{article}
\usepackage[
  active,
  copydocumentclass=true,
  generate=\jobname-no-solutions,
  extract-env={exercise}
]{extract} % http://ctan.org/pkg/extract
\begin{extract*}
% Items executed in both the main and extracted document
% (extract manual, section 5.1)
\newenvironment{exercise}{}{}
\newenvironment{solution}{}{}
\end{extract*}
\begin{document}
\begin{exercise}
This is the first exercise
\end{exercise}
\begin{solution}
This is my solution
\end{solution}
\end{document}
```

Resulting in code for the extracted file of:

```latex
\documentclass{article}
% Items executed in both the main and extracted document
% (extract manual, section 5.1)
\newenvironment{exercise}{}{}
\newenvironment{solution}{}{}
\begin{document}
\begin{exercise}
This is the first exercise
\end{exercise}
\end{document}
```

-

I saw the reference to the extract package in Extracting the contents of text in a specified environment into a new file just after sending my question. So, thank you for providing the example and pointing out the problem with nested environments. – Ignasi Mar 11 '13 at 15:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9507238268852234, "perplexity": 4023.7767173411453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422118973352.69/warc/CC-MAIN-20150124170253-00038-ip-10-180-212-252.ec2.internal.warc.gz"}
http://blog.applegrew.com/category/math/digit-math/digit-math-applications/
## Digit Math Application: Proving that all numbers ending with 5 are divisible by 5

Please read Digit Math: Introduction before you continue. The problem It seems I have developed a fascination for the number 5, so here we go again. Here I will be using Digit Math to prove that all integers which end with the digit 5 are always divisible by 5. The proof Case 1: Take a two […]

## Digit Math Application: Proving the correctness of shortcut method to squaring numbers ending with 5

Please read Digit Math: Introduction before you continue. The problem Someday, somewhere I came to know that any number which ends with the digit 5 can be easily squared. The trick can be easily demonstrated using an example. Suppose we want to find the square of 25. The trick is to take the number before 5 […]

## Digit Math Application: Proving that multiplying with $$10^n$$ puts n zeros at the end

Please read Digit Math: Introduction before you continue. The problem This is a very fundamental concept that we were taught when we were in junior school. Now, think of what it says. If you add $$x$$ ten times then you will get $$x\omega0$$. If you add $$x$$ hundred times then you will get $$x\omega00$$; and so […]
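For reference, the core of the first claim fits in one line of the concatenation notation used in these posts (reading $$d\omega5$$ as "the digits of $$d$$ followed by a 5", a hedged extension of the $$x\omega0$$ example above): any integer ending in 5 is $$d\omega5 = 10d + 5 = 5(2d + 1),$$ and is therefore a multiple of 5.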
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8301554322242737, "perplexity": 682.8824596471879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125936914.5/warc/CC-MAIN-20180419110948-20180419130948-00056.warc.gz"}
https://drexel28.wordpress.com/tag/riemann-surface/
# Abstract Nonsense

## Meromorphic Functions on the Riemann Sphere (Pt. II)

Point of Post: This is a continuation of this post.

October 7, 2012

## Meromorphic Functions on the Riemann Sphere (Pt. I)

Point of Post: In this post we classify the meromorphic functions on the Riemann sphere $\mathbb{C}_\infty$.

Motivation

If a random kid off the street asked you "what are the continuous functions $(0,1)\to(0,1)$?" or "what are all the smooth maps $S^4\to\mathbb{R}$?" you would probably reply with a definitive "Ehrm…well…they're just…" This is because such a description (besides "they're just the continuous functions!") is beyond comprehension in those cases! For example, it would take quite a bit of ingenuity to come up with something like the Blancmange function–in fact, such crazy continuous everywhere, differentiable nowhere functions are, in a sense, dense in the space of continuous functions.

Thus, it should come as somewhat of a surprise that after reading this post you will be able to answer a kid asking "what are all the meromorphic functions on the Riemann sphere?" with a "Ha, that's simple. They're just…". To be precise, we have already proven that $\mathbb{C}(z)\subseteq\mathcal{M}(\mathbb{C}_\infty)$, and we shall now show that the reverse inclusion is true! Now, more generally, we shall be able to give a satisfactory (algebraic!) description of the meromorphic functions of any compact Riemann surface. While this won't be quite as impressive as the explicit, simple characterization of $\mathcal{M}(\mathbb{C}_\infty)$, it is still a far cry from our situation with trying to characterize $C([0,1],[0,1])$, since we don't really even have an algebraic (ring theoretic) description of this (in terms of more familiar objects). This should be, once again, another indication that the function theory of compact Riemann surfaces is very rigid–they admit meromorphic functions, but not so many that the computation (algebraically) of their meromorphic function field is untenable.

Ok, now that we have had some discussion about the philosophical implications of actually being able to describe $\mathcal{M}(\mathbb{C}_\infty)$, let's discuss how we are actually going to prove $\mathcal{M}(\mathbb{C}_\infty)=\mathbb{C}(z)$. The basic idea comes from the fact that we can actually find a function $r$ which has prescribed zeros $\lambda_1,\cdots,\lambda_n$ and poles $p_1,\cdots,p_m$ such that $\text{ord}_{\lambda_i}(r)=e_i$ and $\text{ord}_{p_i}(r)=-g_i$ for any $e_i,g_i\in\mathbb{N}$–namely, the function $\displaystyle r(z)$ given by $(z-\lambda_1)^{e_1}\cdots(z-\lambda_n)^{e_n}(z-p_1)^{-g_1}\cdots(z-p_m)^{-g_m}$. Thus, if $f$ is a meromorphic function on $\mathbb{C}_\infty$ with zeroes and poles described as in the last sentence, we see that $\displaystyle \frac{f}{r}$ is a meromorphic function on $\mathbb{C}_\infty$ which has no zeros or poles on $\mathbb{C}$. In particular, $\displaystyle h=\frac{f}{r}$ is a function meromorphic on $\mathbb{C}_\infty$ but holomorphic on $\mathbb{C}$, and with no zeros. Now, it's a common fact from complex analysis that the only entire function with a pole at infinity (recall that the somewhat confusing definition of pole at infinity now makes a lot more sense!) is a polynomial. Thus, $h$ is a polynomial, but since $h$ has no zeros on $\mathbb{C}$ we know from the fundamental theorem of algebra that $h$ is constant. Thus, $f$ is really just a constant multiple of $r(z)$!
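For a concrete instance of the construction (a worked example added here for illustration, not from the original post): if the only zeros and poles of $f\in\mathcal{M}(\mathbb{C}_\infty)$ in $\mathbb{C}$ are a double zero at $0$ and a simple pole at $1$, then the argument above forces $f$ to be a constant multiple of $\displaystyle r(z)=\frac{z^2}{z-1}$; and since $r(z)\sim z$ as $z\to\infty$, the remaining simple pole sits at the well-behaved point $\infty$.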
Note that the key to this proof is the ability to specify poles and zeros of a given multiplicity, except perhaps specifying a pole at infinity, and that the point infinity is well-behaved (in the sense that things that have poles there are pretty tame).

October 7, 2012

## Loci of Holomorphic Functions and the Inverse Function Theorem (Pt. III)

Point of Post: This is a continuation of this post.

October 3, 2012
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 37, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.957076907157898, "perplexity": 244.15521603348216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813608.70/warc/CC-MAIN-20180221103712-20180221123712-00341.warc.gz"}
https://www.semanticscholar.org/paper/PT-symmetric-cubic-anharmonic-oscillator-as-a-model-Mostafazadeh/ff25f243b278572f2af2b1c5a29e3a4add08513a
# PT-symmetric cubic anharmonic oscillator as a physical model

```bibtex
@article{Mostafazadeh2005PTsymmetricCA,
  title={PT-symmetric cubic anharmonic oscillator as a physical model},
  author={Ali Mostafazadeh},
  journal={Journal of Physics A},
  year={2005},
  volume={38},
  pages={6557-6569}
}
```

There is a factor of 2 error in equation (61) of this paper. Correcting this error leads to minor changes in equations (62) and (63). Please see PDF for details.

A Perturbative Treatment of a Generalized PT-Symmetric Quartic Anharmonic Oscillator
We examine a generalized PT-symmetric quartic anharmonic oscillator model to determine the various physical variables perturbatively in powers of a small quantity e. We make use of the Bender–Dunne

PT-symmetric effective mass Schrödinger equations • Physics, Mathematics • 2005
We outline a general method of obtaining exact solutions of PT-symmetric Schrödinger equations with a position-dependent effective mass. Using this method, exact solutions of some PT-symmetric potentials

PT-symmetry and Integrability
We briefly explain some simple arguments based on pseudo-Hermiticity, supersymmetry and PT-symmetry which explain the reality of the spectrum of some non-Hermitian Hamiltonians. Subsequently we

Closed formula for the metric in the Hilbert space of a PT-symmetric model • Mathematics, Physics • 2006
We introduce a very simple, exactly solvable PT-symmetric non-Hermitian model with a real spectrum, and derive a closed formula for the metric operator which relates the problem to a Hermitian one.

Path-integral formulation of pseudo-Hermitian quantum mechanics and the role of the metric operator
We provide a careful analysis of the generating functional in the path-integral formulation of pseudo-Hermitian and in particular PT-symmetric quantum mechanics and show how the metric operator

LETTER TO THE EDITOR: Pseudo-Hermiticity and some consequences of a generalized quantum condition • Physics • 2005
We exploit the hidden symmetry structure of a recently proposed non-Hermitian Hamiltonian and of its Hermitian equivalent one. This sheds new light on the pseudo-Hermitian character of the former and

A squeeze-like operator approach to position-dependent mass in quantum mechanics • Physics • 2014
We provide a squeeze-like transformation that allows one to remove a position-dependent mass from the Hamiltonian. Methods to solve the Schrödinger equation may then be applied to find the respective

## References (showing 1-10 of 59)

Pseudo-Hermiticity versus PT-symmetry. II. A complete characterization of non-Hermitian Hamiltonians with a real spectrum
We give a necessary and sufficient condition for the reality of the spectrum of a non-Hermitian Hamiltonian admitting a complete set of biorthonormal eigenvectors.

Perturbation theory of odd anharmonic oscillators • Mathematics • 1980
We study the perturbation theory for $H=p^2+x^2+\beta x^{2n+1}$, $n=1,2,\dots$. It is proved that when $\operatorname{Im}\beta\neq 0$, $H$ has discrete spectrum. Any eigenvalue is uniquely determined by the (divergent) Rayleigh–Schrödinger

Real Spectra in Non-Hermitian Hamiltonians Having PT Symmetry • Mathematics • 1998
The condition of self-adjointness ensures that the eigenvalues of a Hamiltonian are real and bounded below. Replacing this condition by the weaker condition of $\mathrm{PT}$ symmetry, one obtains new

PT-symmetric Quantum Mechanics: A Precise and Consistent Formulation
The physical condition that the expectation values of physical observables are real quantities is used to give a precise formulation of PT-symmetric quantum mechanics.
A mathematically rigorous proof

Large-order Perturbation Theory for a Non-Hermitian PT-symmetric Hamiltonian • Physics • 1999
A precise calculation of the ground-state energy of the complex PT-symmetric Hamiltonian $H=p^2+\tfrac{1}{4}x^2+i\lambda x^3$ is performed using high-order Rayleigh–Schrödinger perturbation theory. The energy spectrum of

Some properties of eigenvalues and eigenfunctions of the cubic oscillator with imaginary coupling constant
Comparison between the exact value of the spectral zeta function, $Z_H(1) = 5^{-6/5}\,[3-2\cos(\pi/5)]\,\Gamma^2(1/5)/\Gamma(3/5)$, and the results of numeric and WKB calculations supports the conjecture by Daniel

Pseudo-Hermitian description of PT-symmetric systems defined on a complex contour
We describe a method that allows for a practical application of the theory of pseudo-Hermitian operators to PT-symmetric systems defined on a complex contour. We apply this method to study the

Calculation of the hidden symmetry operator in PT-symmetric quantum mechanics • Mathematics • 2002
In a recent paper it was shown that if a Hamiltonian $H$ has an unbroken PT symmetry, then it also possesses a hidden symmetry represented by the linear operator $\mathcal{C}$. The operator $\mathcal{C}$ commutes with both $H$ and PT.

On the Reality of the Eigenvalues for a Class of PT-Symmetric Oscillators
Abstract We study the eigenvalue problem with the boundary conditions that decays to zero as z tends to infinity along the rays , where is a real polynomial and . We prove that if for some we have

Exact PT-symmetry is equivalent to Hermiticity
We show that a quantum system possessing an exact antilinear symmetry, in particular PT-symmetry, is equivalent to a quantum system having a Hermitian Hamiltonian. We construct the unitary operator
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9478758573532104, "perplexity": 901.1181623973508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570651.49/warc/CC-MAIN-20220807150925-20220807180925-00177.warc.gz"}
https://webot.org/info/en/?search=Incircle
Incircle and excircles of a triangle

A triangle with incircle, incenter (${\displaystyle I}$), excircles, excenters (${\displaystyle J_{A}}$, ${\displaystyle J_{B}}$, ${\displaystyle J_{C}}$), internal angle bisectors and external angle bisectors. The green triangle is the excentral triangle.

In geometry, the incircle or inscribed circle of a triangle is the largest circle contained in the triangle; it touches (is tangent to) the three sides. The center of the incircle is a triangle center called the triangle's incenter. [1]

An excircle or escribed circle [2] of the triangle is a circle lying outside the triangle, tangent to one of its sides and tangent to the extensions of the other two. Every triangle has three distinct excircles, each tangent to one of the triangle's sides. [3]

The center of the incircle, called the incenter, can be found as the intersection of the three internal angle bisectors. [3] [4] The center of an excircle is the intersection of the internal bisector of one angle (at vertex ${\displaystyle A}$, for example) and the external bisectors of the other two. The center of this excircle is called the excenter relative to the vertex ${\displaystyle A}$, or the excenter of ${\displaystyle A}$. [3] Because the internal bisector of an angle is perpendicular to its external bisector, it follows that the center of the incircle together with the three excircle centers form an orthocentric system. [5]:p. 182

All regular polygons have incircles tangent to all sides, but not all polygons do; those that do are tangential polygons. See also Tangent lines to circles.

Incircle and incenter

Suppose ${\displaystyle \triangle ABC}$ has an incircle with radius ${\displaystyle r}$ and center ${\displaystyle I}$. Let ${\displaystyle a}$ be the length of ${\displaystyle BC}$, ${\displaystyle b}$ the length of ${\displaystyle AC}$, and ${\displaystyle c}$ the length of ${\displaystyle AB}$. Also let ${\displaystyle T_{A}}$, ${\displaystyle T_{B}}$, and ${\displaystyle T_{C}}$ be the touchpoints where the incircle touches ${\displaystyle BC}$, ${\displaystyle AC}$, and ${\displaystyle AB}$.

Incenter

The incenter is the point where the internal angle bisectors of ${\displaystyle \angle ABC,\angle BCA,{\text{ and }}\angle BAC}$ meet. The distance from vertex ${\displaystyle A}$ to the incenter ${\displaystyle I}$ is:[ citation needed]

${\displaystyle d(A,I)=c{\frac {\sin \left({\frac {B}{2}}\right)}{\cos \left({\frac {C}{2}}\right)}}=b{\frac {\sin \left({\frac {C}{2}}\right)}{\cos \left({\frac {B}{2}}\right)}}.}$

Trilinear coordinates

The trilinear coordinates for a point in the triangle are the ratios of its distances to the triangle sides. Because the incenter is the same distance from all sides of the triangle, the trilinear coordinates for the incenter are [6]

${\displaystyle \ 1:1:1.}$

Barycentric coordinates

The barycentric coordinates for a point in a triangle give weights such that the point is the weighted average of the triangle vertex positions. Barycentric coordinates for the incenter are given by[ citation needed]

${\displaystyle \ a:b:c}$

where ${\displaystyle a}$, ${\displaystyle b}$, and ${\displaystyle c}$ are the lengths of the sides of the triangle, or equivalently (using the law of sines) by

${\displaystyle \sin(A):\sin(B):\sin(C)}$

where ${\displaystyle A}$, ${\displaystyle B}$, and ${\displaystyle C}$ are the angles at the three vertices.
Cartesian coordinates The Cartesian coordinates of the incenter are a weighted average of the coordinates of the three vertices using the side lengths of the triangle relative to the perimeter (that is, using the barycentric coordinates given above, normalized to sum to unity) as weights. The weights are positive so the incenter lies inside the triangle as stated above. If the three vertices are located at ${\displaystyle (x_{a},y_{a})}$, ${\displaystyle (x_{b},y_{b})}$, and ${\displaystyle (x_{c},y_{c})}$, and the sides opposite these vertices have corresponding lengths ${\displaystyle a}$, ${\displaystyle b}$, and ${\displaystyle c}$, then the incenter is at[ citation needed] ${\displaystyle \left({\frac {ax_{a}+bx_{b}+cx_{c}}{a+b+c}},{\frac {ay_{a}+by_{b}+cy_{c}}{a+b+c}}\right)={\frac {a\left(x_{a},y_{a}\right)+b\left(x_{b},y_{b}\right)+c\left(x_{c},y_{c}\right)}{a+b+c}}.}$ The inradius ${\displaystyle r}$ of the incircle in a triangle with sides of length ${\displaystyle a}$, ${\displaystyle b}$, ${\displaystyle c}$ is given by [7] ${\displaystyle r={\sqrt {\frac {(s-a)(s-b)(s-c)}{s}}},}$ where ${\displaystyle s=(a+b+c)/2.}$ See Heron's formula. Distances to the vertices Denoting the incenter of ${\displaystyle \triangle ABC}$ as ${\displaystyle I}$, the distances from the incenter to the vertices combined with the lengths of the triangle sides obey the equation [8] ${\displaystyle {\frac {IA\cdot IA}{CA\cdot AB}}+{\frac {IB\cdot IB}{AB\cdot BC}}+{\frac {IC\cdot IC}{BC\cdot CA}}=1.}$ ${\displaystyle IA\cdot IB\cdot IC=4Rr^{2},}$ where ${\displaystyle R}$ and ${\displaystyle r}$ are the triangle's circumradius and inradius respectively. Other properties The collection of triangle centers may be given the structure of a group under coordinate-wise multiplication of trilinear coordinates; in this group, the incenter forms the identity element. [6] Distances between vertex and nearest touchpoints The distances from a vertex to the two nearest touchpoints are equal; for example: [10] ${\displaystyle d\left(A,T_{B}\right)=d\left(A,T_{C}\right)={\frac {1}{2}}(b+c-a).}$ Other properties Suppose the tangency points of the incircle divide the sides into lengths of ${\displaystyle x}$ and ${\displaystyle y}$, ${\displaystyle y}$ and ${\displaystyle z}$, and ${\displaystyle z}$ and ${\displaystyle x}$. Then the incircle has the radius [11] ${\displaystyle r={\sqrt {\frac {xyz}{x+y+z}}}}$ and the area of the triangle is ${\displaystyle \Delta ={\sqrt {xyz(x+y+z)}}.}$ If the altitudes from sides of lengths ${\displaystyle a}$, ${\displaystyle b}$, and ${\displaystyle c}$ are ${\displaystyle h_{a}}$, ${\displaystyle h_{b}}$, and ${\displaystyle h_{c}}$, then the inradius ${\displaystyle r}$ is one-third of the harmonic mean of these altitudes; that is, [12] ${\displaystyle r={\frac {1}{{\frac {1}{h_{a}}}+{\frac {1}{h_{b}}}+{\frac {1}{h_{c}}}}}.}$ The product of the incircle radius ${\displaystyle r}$ and the circumcircle radius ${\displaystyle R}$ of a triangle with sides ${\displaystyle a}$, ${\displaystyle b}$, and ${\displaystyle c}$ is [5]:189,#298(d) ${\displaystyle rR={\frac {abc}{2(a+b+c)}}.}$ Some relations among the sides, incircle radius, and circumcircle radius are: [13] {\displaystyle {\begin{aligned}ab+bc+ca&=s^{2}+(4R+r)r,\\a^{2}+b^{2}+c^{2}&=2s^{2}-2(4R+r)r.\end{aligned}}} Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter (the center of its incircle). 
There are either one, two, or three of these for any given triangle. [14] Denoting the center of the incircle of ${\displaystyle \triangle ABC}$ as ${\displaystyle I}$, we have [15] ${\displaystyle {\frac {IA\cdot IA}{CA\cdot AB}}+{\frac {IB\cdot IB}{AB\cdot BC}}+{\frac {IC\cdot IC}{BC\cdot CA}}=1}$ and [16]:121,#84 ${\displaystyle IA\cdot IB\cdot IC=4Rr^{2}.}$ The incircle radius is no greater than one-ninth the sum of the altitudes. [17]:289 The squared distance from the incenter ${\displaystyle I}$ to the circumcenter ${\displaystyle O}$ is given by [18]:232 ${\displaystyle OI^{2}=R(R-2r)}$, and the distance from the incenter to the center ${\displaystyle N}$ of the nine point circle is [18]:232 ${\displaystyle IN={\frac {1}{2}}(R-2r)<{\frac {1}{2}}R.}$ The incenter lies in the medial triangle (whose vertices are the midpoints of the sides). [18]:233, Lemma 1 Relation to area of the triangle The radius of the incircle is related to the area of the triangle. [19] The ratio of the area of the incircle to the area of the triangle is less than or equal to ${\displaystyle {\tfrac {\pi }{3{\sqrt {3}}}}}$, with equality holding only for equilateral triangles. [20] Suppose ${\displaystyle \triangle ABC}$ has an incircle with radius ${\displaystyle r}$ and center ${\displaystyle I}$. Let ${\displaystyle a}$ be the length of ${\displaystyle BC}$, ${\displaystyle b}$ the length of ${\displaystyle AC}$, and ${\displaystyle c}$ the length of ${\displaystyle AB}$. Now, the incircle is tangent to ${\displaystyle AB}$ at some point ${\displaystyle T_{C}}$, and so ${\displaystyle \angle AT_{C}I}$ is right. Thus, the radius ${\displaystyle T_{C}I}$ is an altitude of ${\displaystyle \triangle IAB}$. Therefore, ${\displaystyle \triangle IAB}$ has base length ${\displaystyle c}$ and height ${\displaystyle r}$, and so has area ${\displaystyle {\tfrac {1}{2}}cr}$. Similarly, ${\displaystyle \triangle IAC}$ has area ${\displaystyle {\tfrac {1}{2}}br}$ and ${\displaystyle \triangle IBC}$ has area ${\displaystyle {\tfrac {1}{2}}ar}$. Since these three triangles decompose ${\displaystyle \triangle ABC}$, we see that the area ${\displaystyle \Delta {\text{ of }}\triangle ABC}$ is:[ citation needed] ${\displaystyle \Delta ={\frac {1}{2}}(a+b+c)r=sr,}$     and     ${\displaystyle r={\frac {\Delta }{s}},}$ where ${\displaystyle \Delta }$ is the area of ${\displaystyle \triangle ABC}$ and ${\displaystyle s={\tfrac {1}{2}}(a+b+c)}$ is its semiperimeter. For an alternative formula, consider ${\displaystyle \triangle IT_{C}A}$. This is a right-angled triangle with one side equal to ${\displaystyle r}$ and the other side equal to ${\displaystyle r\cot \left({\frac {A}{2}}\right)}$. The same is true for ${\displaystyle \triangle IB'A}$. The large triangle is composed of six such triangles and the total area is:[ citation needed] ${\displaystyle \Delta =r^{2}\left(\cot \left({\frac {A}{2}}\right)+\cot \left({\frac {B}{2}}\right)+\cot \left({\frac {C}{2}}\right)\right).}$ Gergonne triangle and point A triangle, ${\displaystyle \triangle ABC}$, with   incircle,   incenter (${\displaystyle I}$),   contact triangle (${\displaystyle \triangle T_{A}T_{B}T_{C}}$) and   Gergonne point (${\displaystyle G_{e}}$) The Gergonne triangle (of ${\displaystyle \triangle ABC}$) is defined by the three touchpoints of the incircle on the three sides. The touchpoint opposite ${\displaystyle A}$ is denoted ${\displaystyle T_{A}}$, etc. 
This Gergonne triangle, ${\displaystyle \triangle T_{A}T_{B}T_{C}}$, is also known as the contact triangle or intouch triangle of ${\displaystyle \triangle ABC}$. Its area is ${\displaystyle K_{T}=K{\frac {2r^{2}s}{abc}}}$ where ${\displaystyle K}$, ${\displaystyle r}$, and ${\displaystyle s}$ are the area, radius of the incircle, and semiperimeter of the original triangle, and ${\displaystyle a}$, ${\displaystyle b}$, and ${\displaystyle c}$ are the side lengths of the original triangle. This is the same area as that of the extouch triangle. [21] The three lines ${\displaystyle AT_{A}}$, ${\displaystyle BT_{B}}$ and ${\displaystyle CT_{C}}$ intersect in a single point called the Gergonne point, denoted as ${\displaystyle G_{e}}$ (or triangle center X7). The Gergonne point lies in the open orthocentroidal disk punctured at its own center, and can be any point therein. [22] The Gergonne point of a triangle has a number of properties, including that it is the symmedian point of the Gergonne triangle. [23] Trilinear coordinates for the vertices of the intouch triangle are given by[ citation needed] • ${\displaystyle {\text{vertex}}\,T_{A}=0:\sec ^{2}\left({\frac {B}{2}}\right):\sec ^{2}\left({\frac {C}{2}}\right)}$ • ${\displaystyle {\text{vertex}}\,T_{B}=\sec ^{2}\left({\frac {A}{2}}\right):0:\sec ^{2}\left({\frac {C}{2}}\right)}$ • ${\displaystyle {\text{vertex}}\,T_{C}=\sec ^{2}\left({\frac {A}{2}}\right):\sec ^{2}\left({\frac {B}{2}}\right):0.}$ Trilinear coordinates for the Gergonne point are given by[ citation needed] ${\displaystyle \sec ^{2}\left({\frac {A}{2}}\right):\sec ^{2}\left({\frac {B}{2}}\right):\sec ^{2}\left({\frac {C}{2}}\right),}$ or, equivalently, by the Law of Sines, ${\displaystyle {\frac {bc}{b+c-a}}:{\frac {ca}{c+a-b}}:{\frac {ab}{a+b-c}}.}$ Excircles and excenters A   triangle with   incircle, incenter ${\displaystyle I}$),   excircles, excenters (${\displaystyle J_{A}}$, ${\displaystyle J_{B}}$, ${\displaystyle J_{C}}$),   internal angle bisectors and   external angle bisectors. The   green triangle is the excentral triangle. An excircle or escribed circle [24] of the triangle is a circle lying outside the triangle, tangent to one of its sides and tangent to the extensions of the other two. Every triangle has three distinct excircles, each tangent to one of the triangle's sides. [3] The center of an excircle is the intersection of the internal bisector of one angle (at vertex ${\displaystyle A}$, for example) and the external bisectors of the other two. The center of this excircle is called the excenter relative to the vertex ${\displaystyle A}$, or the excenter of ${\displaystyle A}$. [3] Because the internal bisector of an angle is perpendicular to its external bisector, it follows that the center of the incircle together with the three excircle centers form an orthocentric system. [5]:182 Trilinear coordinates of excenters While the incenter of ${\displaystyle \triangle ABC}$ has trilinear coordinates ${\displaystyle 1:1:1}$, the excenters have trilinears ${\displaystyle -1:1:1}$, ${\displaystyle 1:-1:1}$, and ${\displaystyle 1:1:-1}$.[ citation needed] The exradius of the excircle opposite ${\displaystyle A}$ (so touching ${\displaystyle BC}$, centered at ${\displaystyle J_{A}}$) is [25] [26] ${\displaystyle r_{a}={\frac {rs}{s-a}}={\sqrt {\frac {s(s-b)(s-c)}{s-a}}},}$ where ${\displaystyle s={\tfrac {1}{2}}(a+b+c).}$ See Heron's formula. 
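As a quick numerical sanity check of the incenter, inradius, and exradius formulas above, here is a short Python sketch (an added illustration; the function name and the 3-4-5 example are ours, not the article's):

```python
import math

def incircle_data(A, B, C):
    """Incenter, inradius, and exradii of triangle ABC (2-D points)."""
    # Side lengths, named after the opposite vertex as in the article.
    a = math.dist(B, C)
    b = math.dist(C, A)
    c = math.dist(A, B)
    s = (a + b + c) / 2                                 # semiperimeter
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    r = area / s                                        # inradius: Delta = s*r
    # Incenter as the side-length-weighted average of the vertices.
    ix = (a * A[0] + b * B[0] + c * C[0]) / (a + b + c)
    iy = (a * A[1] + b * B[1] + c * C[1]) / (a + b + c)
    # Exradii: Delta = (s - a) * r_a, and similarly for r_b, r_c.
    r_a, r_b, r_c = area / (s - a), area / (s - b), area / (s - c)
    return (ix, iy), r, (r_a, r_b, r_c)

# 3-4-5 right triangle with legs on the axes.
print(incircle_data((0, 0), (4, 0), (0, 3)))
```

For the 3-4-5 right triangle this prints incenter (1.0, 1.0), inradius 1.0, and exradii (6.0, 2.0, 3.0), consistent with the area identity Delta = sr = 6.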
Let the excircle at side ${\displaystyle AB}$ touch at side ${\displaystyle AC}$ extended at ${\displaystyle G}$, and let this excircle's radius be ${\displaystyle r_{c}}$ and its center be ${\displaystyle J_{c}}$. Then ${\displaystyle J_{c}G}$ is an altitude of ${\displaystyle \triangle ACJ_{c}}$, so ${\displaystyle \triangle ACJ_{c}}$ has area ${\displaystyle {\tfrac {1}{2}}br_{c}}$. By a similar argument, ${\displaystyle \triangle BCJ_{c}}$ has area ${\displaystyle {\tfrac {1}{2}}ar_{c}}$ and ${\displaystyle \triangle ABJ_{c}}$ has area ${\displaystyle {\tfrac {1}{2}}cr_{c}}$. Thus the area ${\displaystyle \Delta }$ of triangle ${\displaystyle \triangle ABC}$ is ${\displaystyle \Delta ={\frac {1}{2}}(a+b-c)r_{c}=(s-c)r_{c}}$. So, by symmetry, denoting ${\displaystyle r}$ as the radius of the incircle, ${\displaystyle \Delta =sr=(s-a)r_{a}=(s-b)r_{b}=(s-c)r_{c}}$.

By the Law of Cosines, we have ${\displaystyle \cos(A)={\frac {b^{2}+c^{2}-a^{2}}{2bc}}}$ Combining this with the identity ${\displaystyle \sin ^{2}A+\cos ^{2}A=1}$, we have ${\displaystyle \sin(A)={\frac {\sqrt {-a^{4}-b^{4}-c^{4}+2a^{2}b^{2}+2b^{2}c^{2}+2a^{2}c^{2}}}{2bc}}}$ But ${\displaystyle \Delta ={\tfrac {1}{2}}bc\sin(A)}$, and so

{\displaystyle {\begin{aligned}\Delta &={\frac {1}{4}}{\sqrt {-a^{4}-b^{4}-c^{4}+2a^{2}b^{2}+2b^{2}c^{2}+2a^{2}c^{2}}}\\&={\frac {1}{4}}{\sqrt {(a+b+c)(-a+b+c)(a-b+c)(a+b-c)}}\\&={\sqrt {s(s-a)(s-b)(s-c)}},\end{aligned}}}

which is Heron's formula. Combining this with ${\displaystyle sr=\Delta }$, we have ${\displaystyle r^{2}={\frac {\Delta ^{2}}{s^{2}}}={\frac {(s-a)(s-b)(s-c)}{s}}.}$ Similarly, ${\displaystyle (s-a)r_{a}=\Delta }$ gives ${\displaystyle r_{a}^{2}={\frac {s(s-b)(s-c)}{s-a}}}$ and ${\displaystyle r_{a}={\sqrt {\frac {s(s-b)(s-c)}{s-a}}}.}$

Other properties

From the formulas above one can see that the excircles are always larger than the incircle and that the largest excircle is the one tangent to the longest side and the smallest excircle is tangent to the shortest side. Further, combining these formulas yields: [28] ${\displaystyle \Delta ={\sqrt {rr_{a}r_{b}r_{c}}}.}$

Other excircle properties

The circular hull of the excircles is internally tangent to each of the excircles and is thus an Apollonius circle. [29] The radius of this Apollonius circle is ${\displaystyle {\tfrac {r^{2}+s^{2}}{4r}}}$ where ${\displaystyle r}$ is the incircle radius and ${\displaystyle s}$ is the semiperimeter of the triangle. [30]

The following relations hold among the inradius ${\displaystyle r}$, the circumradius ${\displaystyle R}$, the semiperimeter ${\displaystyle s}$, and the excircle radii ${\displaystyle r_{a}}$, ${\displaystyle r_{b}}$, ${\displaystyle r_{c}}$: [13]

{\displaystyle {\begin{aligned}r_{a}+r_{b}+r_{c}&=4R+r,\\r_{a}r_{b}+r_{b}r_{c}+r_{c}r_{a}&=s^{2},\\r_{a}^{2}+r_{b}^{2}+r_{c}^{2}&=\left(4R+r\right)^{2}-2s^{2}.\end{aligned}}}

The circle through the centers of the three excircles has radius ${\displaystyle 2R}$. [13] If ${\displaystyle H}$ is the orthocenter of ${\displaystyle \triangle ABC}$, then [13]

{\displaystyle {\begin{aligned}r_{a}+r_{b}+r_{c}+r&=AH+BH+CH+2R,\\r_{a}^{2}+r_{b}^{2}+r_{c}^{2}+r^{2}&=AH^{2}+BH^{2}+CH^{2}+(2R)^{2}.\end{aligned}}}

Nagel triangle and Nagel point

The extouch triangle (${\displaystyle \triangle T_{A}T_{B}T_{C}}$) and the Nagel point (${\displaystyle N}$) of a triangle (${\displaystyle \triangle ABC}$). The orange circles are the excircles of the triangle.
The Nagel triangle or extouch triangle of ${\displaystyle \triangle ABC}$ is denoted by the vertices ${\displaystyle T_{A}}$, ${\displaystyle T_{B}}$, and ${\displaystyle T_{C}}$ that are the three points where the excircles touch the reference ${\displaystyle \triangle ABC}$ and where ${\displaystyle T_{A}}$ is opposite of ${\displaystyle A}$, etc. This ${\displaystyle \triangle T_{A}T_{B}T_{C}}$ is also known as the extouch triangle of ${\displaystyle \triangle ABC}$. The circumcircle of the extouch ${\displaystyle \triangle T_{A}T_{B}T_{C}}$ is called the Mandart circle.[ citation needed]

The three lines ${\displaystyle AT_{A}}$, ${\displaystyle BT_{B}}$ and ${\displaystyle CT_{C}}$ are called the splitters of the triangle; they each bisect the perimeter of the triangle,[ citation needed]

${\displaystyle AB+BT_{A}=AC+CT_{A}={\frac {1}{2}}\left(AB+BC+AC\right).}$

The splitters intersect in a single point, the triangle's Nagel point ${\displaystyle N_{a}}$ (or triangle center X8).

Trilinear coordinates for the vertices of the extouch triangle are given by[ citation needed]

• ${\displaystyle {\text{vertex}}\,T_{A}=0:\csc ^{2}\left({\frac {B}{2}}\right):\csc ^{2}\left({\frac {C}{2}}\right)}$
• ${\displaystyle {\text{vertex}}\,T_{B}=\csc ^{2}\left({\frac {A}{2}}\right):0:\csc ^{2}\left({\frac {C}{2}}\right)}$
• ${\displaystyle {\text{vertex}}\,T_{C}=\csc ^{2}\left({\frac {A}{2}}\right):\csc ^{2}\left({\frac {B}{2}}\right):0.}$

Trilinear coordinates for the Nagel point are given by[ citation needed]

${\displaystyle \csc ^{2}\left({\frac {A}{2}}\right):\csc ^{2}\left({\frac {B}{2}}\right):\csc ^{2}\left({\frac {C}{2}}\right),}$

or, equivalently, by the Law of Sines,

${\displaystyle {\frac {b+c-a}{a}}:{\frac {c+a-b}{b}}:{\frac {a+b-c}{c}}.}$

The Nagel point is the isotomic conjugate of the Gergonne point.[ citation needed]

Related constructions

Nine-point circle and Feuerbach point

The nine-point circle is tangent to the incircle and excircles

In geometry, the nine-point circle is a circle that can be constructed for any given triangle. It is so named because it passes through nine significant concyclic points defined from the triangle. These nine points are the midpoint of each side of the triangle, the foot of each altitude, and the midpoint of the segment from each vertex to the orthocenter. [31] [32]

In 1822 Karl Feuerbach discovered that any triangle's nine-point circle is externally tangent to that triangle's three excircles and internally tangent to its incircle; this result is known as Feuerbach's theorem. He proved that:[ citation needed]

... the circle which passes through the feet of the altitudes of a triangle is tangent to all four circles which in turn are tangent to the three sides of the triangle ... ( Feuerbach 1822)

The triangle center at which the incircle and the nine-point circle touch is called the Feuerbach point.

Incentral and excentral triangles

The points of intersection of the interior angle bisectors of ${\displaystyle \triangle ABC}$ with the segments ${\displaystyle BC}$, ${\displaystyle CA}$, and ${\displaystyle AB}$ are the vertices of the incentral triangle. Trilinear coordinates for the vertices of the incentral triangle are given by[ citation needed]

• ${\displaystyle \ \left({\text{vertex opposite}}\,A\right)=0:1:1}$
• ${\displaystyle \ \left({\text{vertex opposite}}\,B\right)=1:0:1}$
• ${\displaystyle \ \left({\text{vertex opposite}}\,C\right)=1:1:0.}$

The excentral triangle of a reference triangle has vertices at the centers of the reference triangle's excircles. Its sides are on the external angle bisectors of the reference triangle (see figure at top of page).
Trilinear coordinates for the vertices of the excentral triangle are given by[ citation needed] • ${\displaystyle ({\text{vertex opposite}}\,A)=-1:1:1}$ • ${\displaystyle ({\text{vertex opposite}}\,B)=1:-1:1}$ • ${\displaystyle ({\text{vertex opposite}}\,C)=1:1:-1.}$ Equations for four circles Let ${\displaystyle x:y:z}$ be a variable point in trilinear coordinates, and let ${\displaystyle u=\cos ^{2}\left(A/2\right)}$, ${\displaystyle v=\cos ^{2}\left(B/2\right)}$, ${\displaystyle w=\cos ^{2}\left(C/2\right)}$. The four circles described above are given equivalently by either of the two given equations: [33]:210–215 • Incircle: {\displaystyle {\begin{aligned}u^{2}x^{2}+v^{2}y^{2}+w^{2}z^{2}-2vwyz-2wuzx-2uvxy&=0\\\pm {\sqrt {x}}\cos \left({\frac {A}{2}}\right)\pm {\sqrt {y}}\cos \left({\frac {B}{2}}\right)\pm {\sqrt {z}}\cos \left({\frac {C}{2}}\right)&=0\end{aligned}}} • ${\displaystyle A}$-excircle: {\displaystyle {\begin{aligned}u^{2}x^{2}+v^{2}y^{2}+w^{2}z^{2}-2vwyz+2wuzx+2uvxy&=0\\\pm {\sqrt {-x}}\cos \left({\frac {A}{2}}\right)\pm {\sqrt {y}}\cos \left({\frac {B}{2}}\right)\pm {\sqrt {z}}\cos \left({\frac {C}{2}}\right)&=0\end{aligned}}} • ${\displaystyle B}$-excircle: {\displaystyle {\begin{aligned}u^{2}x^{2}+v^{2}y^{2}+w^{2}z^{2}+2vwyz-2wuzx+2uvxy&=0\\\pm {\sqrt {x}}\cos \left({\frac {A}{2}}\right)\pm {\sqrt {-y}}\cos \left({\frac {B}{2}}\right)\pm {\sqrt {z}}\cos \left({\frac {C}{2}}\right)&=0\end{aligned}}} • ${\displaystyle C}$-excircle: {\displaystyle {\begin{aligned}u^{2}x^{2}+v^{2}y^{2}+w^{2}z^{2}+2vwyz+2wuzx-2uvxy&=0\\\pm {\sqrt {x}}\cos \left({\frac {A}{2}}\right)\pm {\sqrt {y}}\cos \left({\frac {B}{2}}\right)\pm {\sqrt {-z}}\cos \left({\frac {C}{2}}\right)&=0\end{aligned}}} Euler's theorem Euler's theorem states that in a triangle: ${\displaystyle (R-r)^{2}=d^{2}+r^{2},}$ where ${\displaystyle R}$ and ${\displaystyle r}$ are the circumradius and inradius respectively, and ${\displaystyle d}$ is the distance between the circumcenter and the incenter. For excircles the equation is similar: ${\displaystyle \left(R+r_{\text{ex}}\right)^{2}=d_{\text{ex}}^{2}+r_{\text{ex}}^{2},}$ where ${\displaystyle r_{\text{ex}}}$ is the radius of one of the excircles, and ${\displaystyle d_{\text{ex}}}$ is the distance between the circumcenter and that excircle's center. [34] [35] [36] Generalization to other polygons Some (but not all) quadrilaterals have an incircle. These are called tangential quadrilaterals. Among their many properties perhaps the most important is that their two pairs of opposite sides have equal sums. This is called the Pitot theorem.[ citation needed] More generally, a polygon with any number of sides that has an inscribed circle (that is, one that is tangent to each side) is called a tangential polygon.[ citation needed] Notes 1. ^ Kay (1969, p. 140) 2. ^ Altshiller-Court (1925, p. 74) 3. Altshiller-Court (1925, p. 73) 4. ^ Kay (1969, p. 117) 5. ^ a b c Johnson, Roger A., Advanced Euclidean Geometry, Dover, 2007 (orig. 1929). 6. ^ a b Encyclopedia of Triangle Centers Archived 2012-04-19 at the Wayback Machine, accessed 2014-10-28. 7. ^ Kay (1969, p. 201) 8. ^ Allaire, Patricia R.; Zhou, Junmin; Yao, Haishen (March 2012), "Proving a nineteenth century ellipse identity", Mathematical Gazette, 96: 161–165. 9. ^ Altshiller-Court, Nathan (1980), College Geometry, Dover Publications. #84, p. 121. 10. ^ Mathematical Gazette, July 2003, 323-324. 11. ^ Chu, Thomas, The Pentagon, Spring 2005, p. 45, problem 584. 12. ^ Kay (1969, p. 203) 13. ^ a b c d 14. 
^ Kodokostas, Dimitrios, "Triangle Equalizers," Mathematics Magazine 83, April 2010, pp. 141-146. 15. ^ Allaire, Patricia R.; Zhou, Junmin; and Yao, Haishen, "Proving a nineteenth century ellipse identity", Mathematical Gazette 96, March 2012, 161-165. 16. ^ Altshiller-Court, Nathan. College Geometry, Dover Publications, 1980. 17. ^ Posamentier, Alfred S., and Lehmann, Ingmar. The Secrets of Triangles, Prometheus Books, 2012. 18. ^ a b c Franzsen, William N. (2011). "The distance from the incenter to the Euler line" (PDF). Forum Geometricorum. 11: 231–236. MR  2877263.. 19. ^ Coxeter, H.S.M. "Introduction to Geometry 2nd ed. Wiley, 1961. 20. ^ Minda, D., and Phelps, S., "Triangles, ellipses, and cubic polynomials", American Mathematical Monthly 115, October 2008, 679-689: Theorem 4.1. 21. ^ Weisstein, Eric W. "Contact Triangle." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/ContactTriangle.html 22. ^ Christopher J. Bradley and Geoff C. Smith, "The locations of triangle centers", Forum Geometricorum 6 (2006), 57–70. http://forumgeom.fau.edu/FG2006volume6/FG200607index.html 23. ^ Dekov, Deko (2009). "Computer-generated Mathematics : The Gergonne Point" (PDF). Journal of Computer-generated Euclidean Geometry. 1: 1–14. Archived from the original (PDF) on 2010-11-05. 24. ^ Altshiller-Court (1925, p. 74) 25. ^ Altshiller-Court (1925, p. 79) 26. ^ Kay (1969, p. 202) 27. ^ Altshiller-Court (1925, p. 79) 28. ^ Baker, Marcus, "A collection of formulae for the area of a plane triangle," Annals of Mathematics, part 1 in vol. 1(6), January 1885, 134-138. (See also part 2 in vol. 2(1), September 1885, 11-18.) 29. ^ 30. ^ 31. ^ Altshiller-Court (1925, pp. 103–110) 32. ^ Kay (1969, pp. 18,245) 33. ^ Whitworth, William Allen. Trilinear Coordinates and Other Methods of Modern Analytical Geometry of Two Dimensions, Forgotten Books, 2012 (orig. Deighton, Bell, and Co., 1866). http://www.forgottenbooks.com/search?q=Trilinear+coordinates&t=books 34. ^ Nelson, Roger, "Euler's triangle inequality via proof without words," Mathematics Magazine 81(1), February 2008, 58-61. 35. ^ Johnson, R. A. Modern Geometry, Houghton Mifflin, Boston, 1929: p. 187. 36. ^ References • Altshiller-Court, Nathan (1925), College Geometry: An Introduction to the Modern Geometry of the Triangle and the Circle (2nd ed.), New York: Barnes & Noble, LCCN  52013504 • Kay, David C. (1969), College Geometry, New York: Holt, Rinehart and Winston, LCCN  69012075 • Kimberling, Clark (1998). "Triangle Centers and Central Triangles". Congressus Numerantium (129): i–xxv, 1–295. • Kiss, Sándor (2006). "The Orthic-of-Intouch and Intouch-of-Orthic Triangles". Forum Geometricorum (6): 171–177.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 258, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9280338883399963, "perplexity": 1140.8729268012787}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038074941.13/warc/CC-MAIN-20210413183055-20210413213055-00320.warc.gz"}
https://www.cis.upenn.edu/~alur/Lics90D.html
## Model-Checking in Dense Real-Time

Rajeev Alur, Costas Courcoubetis, and David L. Dill

Model-checking is a method of verifying concurrent systems in which a state-transition graph model of the system behavior is compared with a temporal logic formula. This paper extends model-checking for the branching-time logic CTL to the analysis of *real-time* systems, whose correctness depends on the magnitudes of the timing delays. For specifications, we extend the syntax of CTL to allow quantitative temporal operators such as $\exists\Diamond_{<5}$, meaning "possibly within 5 time units." The formulas of the resulting logic, *Timed CTL* (TCTL), are interpreted over *continuous computation trees*, trees in which paths are maps from the set of nonnegative reals to system states. To model finite-state systems we introduce *timed graphs*, state-transition graphs annotated with timing constraints. As our main result, we develop an algorithm for model-checking, for determining the truth of a TCTL-formula with respect to a timed graph. We argue that choosing a dense domain instead of a discrete domain to model time does not significantly blow up the complexity of the model-checking problem. On the negative side, we show that the denseness of the underlying time domain makes the validity problem for TCTL $\Pi_1^1$-hard. The question of deciding whether there exists a timed graph satisfying a TCTL-formula is also undecidable.

Information and Computation 104(1):2-34, 1993. A preliminary version appeared in the Proceedings of the Fifth Annual IEEE Symposium on Logic in Computer Science (LICS 1990).
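As a concrete illustration of these quantitative operators (an added example in the spirit of the abstract, not quoted from the paper): a bounded-response requirement such as "every $p$-state is followed by a $q$-state within 5 time units" can be written in TCTL as $\forall\Box(p \rightarrow \forall\Diamond_{<5}\, q)$.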
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9649736881256104, "perplexity": 1121.3302081813492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.25/warc/CC-MAIN-20210507060253-20210507090253-00359.warc.gz"}
https://eprints.soton.ac.uk/261709/
University of Southampton Institutional Repository

# ac aging and space charge characteristics in low-density polyethylene polymeric insulation

Chen, G, Fu, M, Liu, X Z and Zhong, L S (2005) ac aging and space charge characteristics in low-density polyethylene polymeric insulation. Journal of Applied Physics, 97, 083713-1-7.

Record type: Article

## Abstract

Space charge characteristics in polymeric insulating materials under ac conditions are one of the areas that have received growing interest over the last few years. Following the establishment of several techniques that allow space charge under dc conditions to be characterised non-destructively, there has been a continuing effort to apply the same techniques to ac conditions. Earlier results revealed that there was charge accumulation at very low frequency (<0.01 Hz) while the charge formation at high frequency, especially at power frequency, was negligible. These results gave the impression that the space charge effect under ac conditions is not an issue to be concerned with, despite new evidence slowly emerging that indicates the effect of space charge on the electrical deterioration of insulating polymers. In the present work efforts have been made to investigate the influence of ac ageing on space charge dynamics in low-density polyethylene (LDPE). LDPE films of 200 μm were aged under various electric stress levels at 50 Hz for various times at ambient temperature; however, the bulk of the work was carried out on the samples aged at 50 kV/mm. Space charge dynamics in the samples after ageing were monitored using the pulsed electroacoustic (PEA) technique. The results indicate that there is a significant amount of negative charge accumulation in the aged sample due to charge injection at high ageing stress. The amount of charge in the aged sample is related to the electric stress. Little charge is present at low fields, and the accumulated charge increases rapidly once the applied field is greater than 10 kV/mm. Due to a very slow charge decay rate, it is believed that the injected charges are captured by deep traps that may be formed during ac ageing. At 50 kV/mm, the total amount of charge increases with the ageing time initially and then levels off. It was also found that the amount of charge in aged samples is related to the electrode material: little charge was observed when gold electrodes were sputtered on both sides of the sample. The charge dynamics of the aged samples under dc bias differ from those of the sample without ac ageing, indicating changes brought about by ac ageing. Chemical analysis by infrared spectroscopy (FTIR) and Raman microscopy reveals no significant chemical changes in the bulk of the material after ac ageing. Finally, the consequences of the accumulation of space charge under ac conditions for the lifetime of the material are discussed.

Text ac_space_charge.pdf - Other

Published date: 2005

Keywords: AC ageing, Space charge, PEA technique, Low density polyethylene, Raman microscopic analysis

Organisations: Electronics & Computer Science, EEE

## Identifiers

Local EPrints ID: 261709
URI: http://eprints.soton.ac.uk/id/eprint/261709
ISSN: 0021-8979
PURE UUID: bf841d4a-8ed4-498d-b8ff-1654407c4261

## Catalogue record

Date deposited: 19 Dec 2005

## Contributors

Author: G Chen
Author: M Fu
Author: X Z Liu
Author: L S Zhong
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9664514064788818, "perplexity": 2972.049990667348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783000.84/warc/CC-MAIN-20200128184745-20200128214745-00405.warc.gz"}
http://math.stackexchange.com/questions/64500/confused-between-nested-quantifiers/64509
# Confused between Nested Quantifiers

I am reading nested quantifiers. I am confused between two cases:

1. Existential Quantifier before Universal Quantifier
2. Universal Quantifier before Existential Quantifier

I would be very thankful if someone highlights the difference between them and also gives an example.

- It's the difference between "For every question, there exists someone who can solve it." and "There exists someone who can solve every question." –  Srivatsan Sep 14 '11 at 15:24

There is a nice way to think about quantifiers in terms of games, and then the order of the quantifiers corresponds to the order in which the players move in the game. If $f(p, q)$ is some statement, then $$\forall p \exists q : f(p, q)$$ says that there are two players, $P$ and $Q$, playing a game. Player $P$ moves first and makes some move $p$. Player $Q$'s goal is to find a corresponding move $q$ which "beats" $p$ in the sense that $f(p, q)$ is true. The statement above is true if $Q$ has a winning strategy; otherwise, it's false. However, $$\exists q \forall p : f(p, q)$$ says that player $Q$ moves first. So instead of finding a move $q = q(p)$ for each possible move $p$ that player $P$ can make, $Q$ must now make a single move that beats all possible moves by player $P$. Again, the statement above is true if $Q$ has a winning strategy; otherwise, it's false. But now it should be obvious that the second game is much harder for $Q$ than the first! (To further augment this game-theoretic intuition, it might help to think of $P$ as a "devil" who is trying to thwart the "hero" $Q$. Note the similarity of the $\forall$ symbol to devil horns.)

- I think I have seen this "dialectic" interpretation of logical formulae attributed to Kronecker, but cannot find a reference anymore. Perhaps it was another intuitionist? –  Henning Makholm Sep 14 '11 at 16:07
- I don't know. I think I first read about it in something written by Conway...? –  Qiaochu Yuan Sep 14 '11 at 16:24
- The Wikipedia article about game semantics doesn't have much in the way of history, unfortunately... –  Zhen Lin Sep 15 '11 at 10:10

To elaborate on my comment, imagine that a teacher assigns a set of questions $Q$ for homework to a class (set) $S$ of students. Contrast the following two scenarios.

Scenario 1. Suppose that there is no student who can solve all the questions on her own, but still each of the questions has been solved by at least one student. In this case, if I have a doubt about any given question, I can ask around and I will find someone who can help me. Of course, the same person might not be able to help me with all the questions. We can say this in symbols as: $\forall q \in Q \ \exists s \in S : s \text{ can solve } q$.

Scenario 2. Imagine that there is a particularly bright student in class who can solve all the questions. In this case, if I cannot solve any question, I simply need to ask that particular student and she can definitely help. My job of finding help with questions is therefore even easier. We can formally write this as: $\exists s \in S \ \forall q \in Q : s \text{ can solve } q$.

Do you see the difference between the two statements now?

Here's a simple example that I keep in mind when teaching the distinction to students; consider these two statements:

• For every nonzero $a$ there exists a $b$ such that $ab=1$
• There exists an $a$ such that for all nonzero $b$, $ab=1$

The first statement is true (over the reals, say) but the second is false.
Another way to think about this is to see what quantifiers look like in the context of classical propositional logic, which has truth set {0, 1}. Since we only have two truth values here, if I say "for all p, p" I've basically said "p is false, and p is true." So, we can interpret, at least here, "for all" as meaning a conjunction "^". So, for any such formula p, $\forall p$(p) means (0^1), and $\forall p$ ((p v q) ^ q) means (((0 v q) ^q)^((1 v q)^q)). "There exists" means "for at least one", which we can interpret as meaning a disjunction "v" in this context. So, $\exists p$ means (0 v 1), and $\exists p$ ((p v q) ^ q) means (((0 v q) ^q)v((1 v q)^q)).

Now let us see how the quantifiers behave when we switch them. $\forall q$ $\exists p$ ((pvq)^q) becomes $\forall q$ (((0 v q) ^q)v((1 v q)^q)), which becomes ((((0 v 0) ^0)v((1 v 0)^0))^(((0 v 1)^1)v((1 v 1)^1))). On the other hand, $\exists p$ $\forall q$ ((p v q)^q) becomes $\exists p$ (((p v 0)^0)^((p v 1)^1)), which becomes ((((0 v 0)^0)^((0 v 1)^1))v(((1 v 0)^0)^((1 v 1)^1))).

If you think of "for all" as indicating a conjunction for 2-element sets, as the above suggests one might do, then for sets with more than two elements, and sets with an infinity of elements, "for all" will indicate an extended conjunction, and similarly "there exists" will indicate an extended disjunction. What do you do with 1-element sets under this interpretation? Simple: all quantifiers effectively become meaningless, and the distinction between the universal and existential quantifiers breaks down. So, you just write the element of the set in any formula and you can forget about quantifiers here.

When a statement contains more than one quantifier, think about the quantifiers one at a time, in order.

Existential Quantifier before Universal Quantifier

Consider the statement $\exists x \forall y L(x,y)$, where the universe of discourse is the set of all people and $L(x,y)$ means "$x$ likes $y$". This statement says that there is some person $x$ such that $\forall y L(x,y)$ is true. The statement $\forall y L(x,y)$ means that for every person $y$, $x$ likes $y$, or in other words $x$ likes every person, or just $x$ likes everyone. The original statement $\exists x \forall y L(x,y)$ can now be written as "there is some person $x$ that likes everyone". In other words, there is someone who likes everyone.

Universal Quantifier before Existential Quantifier

On the other hand, the statement $\forall x \exists y L(x,y)$ means that for every person $x$, the statement $\exists y L(x,y)$ is true. $\exists y L(x,y)$ means that there is some person that $x$ likes, or, in other words, that $x$ likes someone. The original statement can now be written as "for every person $x$, there is some person that $x$ likes". In other words, everyone likes someone.

These statements don't mean the same thing. It might be the case that everyone likes someone, but it is unlikely that there is someone who likes everyone.

Say $M$ is the set of men, $W$ is the set of women, and we will write $w\prec m$ if woman $w$ is the mother of man $m$. Then this says that every man $m$ has a mother: $$\forall m\in M: \exists w\in W: w\prec m$$ This says there is a woman $w$ who is every man's mother: $$\exists w\in W: \forall m\in M: w\prec m$$ The first one is true, but the second is false.
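Not from the original thread, but the ∀-as-conjunction and ∃-as-disjunction reading above maps directly onto nested `all`/`any` in code, which makes the order-dependence easy to check on a finite model (the `likes` relation below is an invented example):

```python
people = range(3)
# "x likes y" relation: each person likes exactly one other person.
likes = {(0, 1), (1, 2), (2, 0)}

def L(x, y):
    return (x, y) in likes

# forall x exists y L(x, y): everyone likes someone.
stmt1 = all(any(L(x, y) for y in people) for x in people)

# exists y forall x L(x, y): some one person is liked by everyone.
stmt2 = any(all(L(x, y) for x in people) for y in people)

print(stmt1, stmt2)  # True False: swapping the quantifiers changes the truth value
```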
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8247110247612, "perplexity": 440.045314014755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500816424.18/warc/CC-MAIN-20140820021336-00447-ip-10-180-136-8.ec2.internal.warc.gz"}
http://mathhelpforum.com/differential-geometry/104406-supremum-infimum.html
1. ## Supremum and infimum

Let $S$ and $T$ be non-empty bounded subsets of $\mathbb{R}$, and suppose that for all $s \in S$ and $t \in T$, we have $s \leq t$. Prove that $\sup S \leq \inf T$.

2. Originally Posted by cgiulz
Let $S$ and $T$ be non-empty bounded subsets of $\mathbb{R}$, and suppose that for all $s \in S$ and $t \in T$, we have $s \leq t$. Prove that $\sup S \leq \inf T$.

There is no problem knowing that the two exist. Why? So let $\sigma = \sup (S)\;\& \;\tau = \inf (T)$. Working for a contradiction, suppose that $\tau < \sigma$. So $\tau$ is not an upper bound of $S$. Why? What does that imply about some element of $S$?

3. Since $S$ and $T$ are non-empty and bounded, the supremum of $S$ and the infimum of $T$ must exist. We can see that $\tau$ is not an upper bound of $S$, since it is smaller than the least upper bound $\sigma$. So there is some element $s \in S$ with $s > \tau$. If every $t \in T$ satisfied $t \geq s$, then $s$ would be a lower bound of $T$ greater than $\tau = \inf T$, which is impossible; hence there is some $t \in T$ with $t < s$. So we have our contradiction. Thus, $\sup S \leq \inf T$.
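A direct argument, not posted in the thread, avoids the contradiction entirely: fix any $t \in T$. Since $s \leq t$ for every $s \in S$, $t$ is an upper bound of $S$, so $\sup S \leq t$. As this holds for every $t \in T$, $\sup S$ is a lower bound of $T$, and therefore
$$\sup S \;\leq\; \inf T.$$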
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9980747103691101, "perplexity": 109.39574053641604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612537.91/warc/CC-MAIN-20170529184559-20170529204559-00477.warc.gz"}
https://www.physicsforums.com/threads/formula-for-force.90672/
Formula for force

1. Sep 25, 2005

kizersi5

Hello, I would like to know the formula for force?

2. Sep 25, 2005

lightgrav

What kind of Force? The cause of a Force: Force by a spring, Force by gravity, Force by electricity, Force by Pressure, friction Force, air resistance Force, Force by magnetism, Nuclear Forces ... What purpose of Force? Effects of a Force: momentum change, acceleration, Work, compression, ...

3. Sep 26, 2005

kizersi5

Need help! I would like to know the formula for the force of gravity?

4. Sep 27, 2005

mukundpa

F = mg = GMm/r^2, where M is the mass of the Earth, m is the mass of the object, and r is the distance from the center of the Earth.
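As a quick check, not part of the thread, that the two expressions agree at the Earth's surface (the constants are standard values; the object mass is an invented example):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
r = 6.371e6     # mean radius of the Earth, m
m = 10.0        # example object mass, kg

F = G * M * m / r**2   # Newton's law of gravitation
g = G * M / r**2       # gravitational acceleration at the surface

print(F)   # ~98.2 N
print(g)   # ~9.82 m/s^2, so F = m*g as stated above
```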
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8879338502883911, "perplexity": 3010.7963255306963}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423808.34/warc/CC-MAIN-20170721182450-20170721202450-00131.warc.gz"}
https://openphysicsjournal.com/VOLUME/4/PAGE/1/FULLTEXT/
# New Distances to Four Supernova Remnants

S. Ranasinghe1, D. A. Leahy1, *, Wenwu Tian1, 2

1 Department of Physics & Astronomy, University of Calgary, Calgary, Alberta T2N 1N4, Canada
2 National Astronomical Observatories, CAS, Beijing 100012, China

© 2018 Ranasinghe et al. open-access license: This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International Public License (CC-BY 4.0), a copy of which is available at: https://creativecommons.org/licenses/by/4.0/legalcode. This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

* Address correspondence to this author at the Department of Physics & Astronomy, University of Calgary, Calgary, Alberta T2N 1N4, Canada; Tel: (403) 220-7192; E-mail: [email protected]

## Abstract

### Objective:

Distances are found for four supernova remnants (SNRs) without previous distance measurements. H I spectra and H I channel maps are used to determine the maximum velocity of H I absorption for the four SNRs.

### Method:

We examined 13CO emission spectra and channel maps to look for possible molecular gas associated with each SNR, but did not find any.

### Result:

The resulting distances for the SNRs are 3.5 ± 0.2 kpc (G24.7+0.6), 4.7 ± 0.3 kpc (G29.6+0.1), 4.1 ± 0.5 kpc (G41.5+0.4) and 4.5 ± 0.4 – 9.0 ± 0.4 kpc (G57.2+0.8).

Keywords: Radio continuum, Radio lines, Supernova remnants, ISM clouds, Kinematic distance, Galactic plane survey.

## 1. INTRODUCTION

Supernova remnants (SNRs) have a major impact on the state of the Interstellar Medium (ISM) of a galaxy (e.g., for a review see Cox [1]). They provide the thermal and kinetic energy that determines the structure of the ISM, and can drive outflows from the galaxy and trigger star formation. To determine the effects of SNRs, it is necessary to determine their occurrence rate, the energy of explosion and the density of the ISM where the explosion occurs. Determining the distance to an SNR is an important first step in determining its size, age, explosion energy and evolutionary state.

Utilizing H I absorption spectra is one method of obtaining the distance to an SNR. For the construction of H I absorption spectra, we use the method presented by Leahy and Tian [2]. However, in order to differentiate between false absorption features and real ones, a thorough investigation of individual H I channel maps is essential. To determine the distance and resolve the kinematic distance ambiguity, we use the step-by-step process described by Ranasinghe and Leahy [3]. Using this method, we present new distances to four SNRs that do not have good previous distance measurements: G24.7+0.6, G29.6+0.1, G41.5+0.4 and G57.2+0.8.

The SNR G24.7+0.6 has been linked to the luminous blue variable star G24.73+0.69 [4]. Acero et al. [5] claimed that there is a possible X-ray detection consistent with the radio size. The SNR G29.6+0.1 is likely associated with the X-ray pulsar AX J1845-0258 [6]. However, very little is known about the SNRs G41.5+0.4 and G57.2+0.8. For these four SNRs, we also search for any morphological association with molecular clouds. A brief introduction to the data and software used and the method of construction of H I absorption spectra is presented in Section 2.
The results for the four SNRs are given in Section 3. In Sections 4 and 5 we present the discussion and summary.

## 2. DATA ANALYSIS

### 2.1. Data and Software

We obtained the 1420 MHz continuum and H I-line emission data from the VLA (Very Large Array) Galactic Plane Survey (VGPS) [7]. To construct H I absorption spectra, we used MEANLEV, a software program in the DRAO EXPORT package. An advantage of MEANLEV is that one can extract 'on' (source) and 'off' (background) spectra defined by a user-specified threshold continuum brightness (TB) level. One first specifies a spatial region, normally a box selecting a set of map pixels. Then source and background spectra are extracted using pixels above (for source) and below (for background) the given TB level. This maximizes the contrast in TB, which maximizes the signal-to-noise ratio in the H I spectrum. The 13CO spectral line data were extracted from the Galactic Ring Survey of the Five College Radio Astronomy Observatory (FCRAO) 14 m telescope [8]. For the construction of the 13CO emission spectra, we used MEANLEV and the same source and background regions used for the H I spectra.

### 2.2. Construction of H I Absorption Spectra, CO Spectra and Kinematic Distances

The radiative transfer modeling follows from Leahy and Tian [2]. From the equation of radiative transfer, the H I absorption spectrum is

$$e^{-\tau(v)} = \frac{T_B^{\mathrm{on},v} - T_B^{\mathrm{off},v}}{T_B^{\mathrm{on},C} - T_B^{\mathrm{off},C}} + 1$$

Here TB is the brightness temperature, the superscript 'C' denotes continuum and 'v' denotes velocity. The 'off' spectrum is the spectrum towards the background and the 'on' spectrum is the spectrum towards the source. The H I absorption spectrum of a continuum emission source (such as an SNR) is calculated, and radial velocities where there are likely absorption features are identified. It is essential that the H I channel maps are examined to see if the features in the spectrum are real, i.e., the spatial location of a decrease in H I emission is required to be coincident with, and to match the shape of, the continuum emission. Otherwise, we can see from the H I channel maps whether the features are caused by an excess of H I emission in the source or background areas, which identifies the features as false.

The 13CO spectra were extracted from the Galactic Ring Survey as noted above, and are plotted together with the H I absorption spectra below. We searched for 13CO emission features coincident with H I absorption features. If a 13CO emission feature is coincident with the highest velocity of real absorption of the SNR, there is a possibility of association of the 13CO with the SNR. Additionally, we examine the 13CO channel maps to see if the morphology of the 13CO emission matches the morphology of the SNR continuum emission. However, for only one of the four SNRs studied here (G29.6+0.1) was there any indication of a real association of 13CO.

Once we obtain the radial velocity of the object (or upper and lower limits) from the H I spectra and channel maps, the distance (or upper and lower limits) to the source can be determined. For an object in circular orbit in the Galaxy at Galactocentric radius R, its radial velocity with respect to the Local Standard of Rest (LSR) is

$$V_r = \left(\frac{V(R)}{R} - \frac{V_0}{R_0}\right) R_0 \sin(l)$$

where V(R) is the orbital velocity at R [9]. Thus with an appropriate rotation curve, the Galactocentric radius R and the distance d can be obtained. The distance has two solutions (near and far distances, called the kinematic distance ambiguity) for objects inside the solar circle, except at the tangent point.
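As an illustration of the absorption-spectrum construction above (a sketch, not the paper's code; the function and array names are invented):

```python
import numpy as np

def absorption_spectrum(T_on_v, T_off_v, T_on_C, T_off_C):
    """e^{-tau(v)} from 'on' (source) and 'off' (background) H I spectra.

    T_on_v, T_off_v : arrays of brightness temperature vs. velocity (K)
    T_on_C, T_off_C : scalar continuum brightness temperatures (K)
    Values near 1 mean no absorption; real absorption drives e^{-tau} toward 0.
    """
    return (np.asarray(T_on_v) - np.asarray(T_off_v)) / (T_on_C - T_off_C) + 1.0
```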
With H I absorption spectra, this kinematic distance ambiguity can be resolved (for a detailed description see Ranasinghe and Leahy [3]). If absorption is not seen up to the tangent point, the object is at the near distance. If absorption is seen up to the tangent point, the object is beyond the tangent point. For the first quadrant, any absorption seen at a negative velocity points to the object being located beyond the far side of the solar circle. For the Galactic rotation curve we have adopted the Universal Rotation Curve (URC) presented by Persic et al. [10] and the parameters presented by Reid et al. [11] (their Table 1). The parameters are the Galactocentric radius R0 = 8.34 ± 0.16 kpc, the orbital velocity of the Sun V0 = 241 ± 8 km s-1, a = 1.5, Ropt = R0∙(0.90 ± 0.006), V(Ropt) = 241 ± 8 km s-1 and β = 0.72.

To determine the error in the distance we follow the method of Ranasinghe and Leahy [3]. Due to the non-linearity of the equations, we find a set of distances for the best-fit parameters R0, V0 and observed radial velocity Vr and also for their lower and upper limits (a set of 3^3 = 27 parameter combinations and their distances). The standard deviation of these values yields the error in distance. The errors in R0 and V0 are 0.16 kpc and 8 km s-1, respectively [11]. We modeled the observed H I profile, including a Gaussian velocity dispersion, to obtain the tangent point velocity. The difference between the observed tangent point velocities and the URC gives an estimate for the peculiar motion of the gas of 4.7 km s-1. Based on H I channel maps, the measurement error of the radial velocity Vr is ±2.4 km s-1. The spectral resolution of the data is 1.5 km s-1 [7]. The estimated peculiar motion of the H I and the error in radial velocity, added in quadrature, yield a net error in Vr of 5.3 km s-1.
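For illustration, a minimal sketch of the kinematic-distance and error computation (not the paper's code; it substitutes a flat rotation curve V(R) = V0 for the URC, which is only a rough approximation):

```python
import numpy as np
from itertools import product

def kinematic_distances(v_r, l_deg, R0=8.34, V0=241.0):
    """Near/far kinematic distances (kpc) for a flat rotation curve V(R) = V0.

    v_r : LSR radial velocity (km/s); l_deg : Galactic longitude (deg).
    """
    l = np.radians(l_deg)
    # Invert V_r = (V(R)/R - V0/R0) * R0 * sin(l) for R, with V(R) = V0:
    R = V0 * R0 * np.sin(l) / (V0 * np.sin(l) + v_r)
    disc = R**2 - (R0 * np.sin(l))**2
    if disc < 0:
        return None          # v_r exceeds the tangent-point velocity
    d_near = R0 * np.cos(l) - np.sqrt(disc)
    d_far = R0 * np.cos(l) + np.sqrt(disc)
    return d_near, d_far

# Error estimate in the spirit of Sec. 2.2: a 3 x 3 x 3 grid over (R0, V0, Vr).
def distance_error(v_r, l_deg, dv=5.3, near=True):
    ds = []
    for R0, V0, v in product((8.18, 8.34, 8.50), (233.0, 241.0, 249.0),
                             (v_r - dv, v_r, v_r + dv)):
        sol = kinematic_distances(v, l_deg, R0, V0)
        if sol is not None:
            ds.append(sol[0] if near else sol[1])
    return np.mean(ds), np.std(ds)

# e.g. G24.7+0.6: l = 24.7 deg, Vr = 54.6 km/s -> near distance ~3.5 kpc
print(kinematic_distances(54.6, 24.7))
```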
The absorption features at 105 km s-1 in the spectra are not verified in the H I channel maps (Fig. 3 bottom panel). They are seen to be caused by excess H I emission in the background area. The absorption features seen at negative velocities for the Region 1 and 2 spectra are similar to the tangent-point features. They are caused by an excess of H I in the background areas. The e^(-τ) > 1 features seen in the spectra are due to H I clouds in the chosen source region (Fig. 3 top left panel). From the H I images it is seen that the 45 to 55 km s-1 H I absorption features are real (Fig. 3 top right panel). The H I channel maps indicate that the absorption features are not present beyond a velocity Vr = 54.60 km s-1, which yields an estimated distance to the SNR of 3.5 kpc. The H II region G24.540+0.600 spectrum (Fig. 2 bottom panel) and distance are discussed in Section 4.1.

### 3.2. G29.6+0.1

Fig. (4) shows the 1420 MHz radio continuum image of G29.6+0.1. The brightest region of the SNR, with 1420 MHz brightness temperature 30 K, was included in the source region (see the red box in Fig. (4) for the location of the area used for source and background). The resulting H I absorption spectrum is shown in Fig. (5) and can be seen to have a high noise level. Thus we examine the H I channel maps to look for real absorption features. Three of the channel maps are shown here for illustration. The feature in the absorption spectrum at -25 km s-1 (top right panel of Fig. 6) shows no correlation of decreased H I intensity with the continuum brightness of the SNR. Thus that feature is not real but rather caused by the excess H I in the background region. Between the velocities of 100 and 110 km s-1 a feature is present in the spectrum. However, this feature is false, caused by excess H I in the background region (see the channel map at 104 km s-1 shown in Fig. 6 bottom panel).

Fig. (3). G24.7+0.6 H I channel maps +29.87, +49.65 and +104.07 km s-1. The top right panel shows the H I intensity decreasing coincident with the bright continuum region of the SNR, indicating real absorption. The other two panels do not show such evidence for real absorption. The H I contour levels (blue) are at 40, 45 and 50 K for the channel maps +29.87 and +49.65 km s-1 and 60 and 80 K for the channel map +104.07 km s-1. The continuum contour levels (green) are at 20, 25, 30, 35, 38, 40 and 100 K.

Fig. (4). SNR G29.6+0.1 continuum image with contour levels (green) at 24, 26, 28 and 30 K. The red box is the area used to extract H I and 13CO source and background spectra.

Fig. (5). G29.6+0.1 spectra: The top panels show H I emission spectra: source spectrum (black), background spectrum (red) and difference (blue). The bottom panel gives the H I absorption spectrum (blue), the 13CO source spectrum (green) and the 13CO background spectrum (black). The dashed line is the ±2σ noise level of the H I absorption spectrum. The URC tangent point velocity is +108.1 km s-1 and distance is 7.2 kpc.

The maximum velocity where absorption occurs is at 80 km s-1. The H I channel map at 78.5 km s-1 is shown in the top left panel of Fig. (6). The decrease in H I intensity is coincident with the maximum continuum intensity, consistent with real absorption. The lack of any evidence of absorption up to the tangent point points to the near kinematic distance for the SNR. This places the SNR at a distance of 4.7 kpc.

Fig. (6). G29.6+0.1 H I channel maps -25.37, +78.51 and +104.07 km s-1.
The top left panel shows the H I intensity decreasing coincident with the bright continuum region of the SNR, indicating real absorption. The other two panels do not show such evidence for real absorption. The H I contour levels (blue) are at 54 and 58 K for the channel map -25.37 km s-1, 86, 88 and 90 K for the channel map +78.51 km s-1 and 65, 70 and 75 K for the channel map +104.07 km s-1. The continuum contour levels (green) are at 24, 26, 28 and 30 K.

### 3.3. G41.5+0.4

Fig. (7) shows the 1420 MHz radio continuum image of G41.5+0.4. The three brightest regions of the SNR G41.5+0.4 were chosen for making H I spectra, which are shown in Fig. (8). All three spectra are noisy and show few consistent features. The features with e^(-τ) > 1 or with e^(-τ) < 0 were verified to be excess H I emission in either the source region or in the background region and not caused by absorption.

Fig. (7). SNR G41.5+0.4 continuum image with contour levels (green) at 15, 18, 22 and 30 K. The red boxes are the areas used to extract H I and 13CO source and background spectra.

Fig. (8). G41.5+0.4 spectra: The top panels show H I emission spectra: source spectrum (black), background spectrum (red) and difference (blue). The bottom panel gives the H I absorption spectrum (blue), the 13CO source spectrum (green) and the 13CO background spectrum (black). The dashed line is the ±2σ noise level of the H I absorption spectrum. The URC tangent point velocity is +78.4 km s-1 and distance is 6.2 kpc.

Fig. (9) shows selected H I channel maps. The upper left panel shows the map for -27.85 km s-1, illustrating excess H I emission in the background area of Regions 1 and 2. The upper right panel shows the map for +63.67 km s-1, illustrating the correspondence between the decrease in H I intensity and the brightest continuum emission regions of the SNR. This indicates real absorption. The lower left panel shows the map for +76.86 km s-1. This illustrates a lack of correspondence between the decrease in H I intensity and the brightest continuum emission regions of the SNR. This indicates no absorption at this velocity and shows that the e^(-τ) > 1 feature in the Region 3 H I spectrum (in Fig. 8) is caused by excess emission in the source area of the region. The lower right panel shows the map for +83.46 km s-1. There is no correspondence between the decrease in H I intensity and the brightest continuum emission regions of the SNR, thus no real H I absorption at this velocity.

In summary, from an examination of each H I channel map, we find no evidence of absorption up to the tangent point at +85 km s-1. There is no absorption present in the negative velocity range. There is clear absorption up to +64 km s-1, which is consistent in the H I channel maps. Because there is no absorption present up to the tangent point or at negative velocities, this places the SNR at the near kinematic distance. The distance to the SNR for a radial velocity of +64 km s-1 is 4.1 kpc.

Fig. (9). G41.5+0.4 H I channel maps -27.85, +63.67, +76.86 and +83.46 km s-1. The top right panel shows the H I intensity decreasing coincident with the bright continuum region of the SNR, indicating real absorption. The three other panels do not show such evidence for real absorption. The H I contour levels (blue) are at 40 and 50 K for the channel map -27.85 km s-1, 95 and 105 K for the channel map +63.67 km s-1, 40 and 50 K for the channel map +76.86 km s-1 and 15 K for the channel map +83.46 km s-1. The continuum contour levels (green) are at 15, 18, 22 and 30 K.

### 3.4. G57.2+0.8
The continuum image of G57.2+0.8 and the regions for extraction of H I absorption spectra are shown in Fig. (10). The spectra are shown in Fig. (11). The 13CO survey covers a longitude range of 18° to 55.7° and a latitude range of -1° to +1° [8]. This SNR is located beyond the longitude range covered by the survey, so no 13CO emission spectra are included. The SNR is relatively faint, yielding noisy H I spectra. Region 1 includes the brightest region of the SNR, but both spectra are included in the analysis in order to compare absorption features.

Fig. (10). SNR G57.2+0.8 continuum image with contour levels (green) at 8, 8.5, 10, 15 and 18 K. The red boxes are the areas used to extract H I source and background spectra.

Fig. (11). G57.2+0.8 spectra: The top panels show H I emission spectra: source spectrum (black), background spectrum (red) and difference (blue). The bottom panel gives the H I absorption spectrum (blue). The dashed line is the ±2σ noise level of the H I absorption spectrum. The URC tangent point velocity is +38.3 km s-1 and distance is 4.5 kpc.

The left panel of Fig. (12) shows the H I channel map at -46 km s-1 and shows no correspondence between decreased H I and increased continuum intensity. This shows that the features in the H I spectrum near -46 km s-1 are not real absorption. No real absorption was found at negative velocities, giving an upper limit on the distance of 9.0 kpc, the far side of the solar circle. It is seen from the spectra that there are absorption features up to the tangent point at 45 km s-1. This absorption up to the tangent point is verified in the individual H I channel maps. The right panel of Fig. (12) shows the map at +44.7 km s-1. This shows a good correspondence between decreased H I and increased continuum intensity, thus real absorption. Therefore the lower limit on the distance to the SNR is the tangent point distance of 4.5 kpc. We place the SNR between 4.5 and 9.0 kpc.

Fig. (12). G57.2+0.8 H I channel maps -45.98 and +44.71 km s-1. The right panel shows the H I intensity decreasing coincident with the bright continuum region of the SNR, indicating real absorption. The left panel does not show such evidence for real absorption. The H I contour levels (blue) are at 32 K. The continuum contour levels (green) are at 8, 8.5, 10, 15 and 18 K.

## 4. DISCUSSION

### 4.1. G24.7+0.6

The SNR G24.7+0.6 H I absorption distance is 3.5 kpc. G24.7+0.6 is an SNR 30ˊ×15ˊ in size, with a filled center and a faint shell. Becker and Helfand [12] claimed a compact H II region 0.7˝ in size superimposed on the SNR at l = 24.677° and b = +0.5495°. This does not affect our conclusion on the distance of SNR G24.7+0.6 because the same maximum velocity of real absorption is seen for both regions of the SNR, the lower one including the above H II region and the upper one not including it.

The SNR is located near the H II regions G24.540+0.600 and G24.470+0.495. Using the International Galactic Plane Survey, Jones and Dickey [13] inferred that the H II regions G24.540+0.600 and G24.470+0.495 are at kinematic distances of kpc and kpc respectively. Neither of these is consistent with the H I absorption distance of 3.5 kpc, so they are not associated with the SNR. We studied the H I absorption spectrum of the H II region G24.540+0.600 (Fig. (2) bottom panel). It shows absorption up to the tangent point (116 km s-1). We verified this using the channel maps, e.g.
the absorption seen in the 104.07 km s-1 channel map (bottom panel of Fig. 3). The maximum velocity of absorption is -38.6 km s-1, which gives a distance of 20.6 kpc using the Reid et al. [11] rotation curve. This is consistent with the Jones and Dickey [13] distance. The H I spectrum and channel maps of the H II region G24.470+0.495 show absorption up to the tangent point, but none at negative velocities. This H II region has Vr = 29.87 km s-1, which places it at the far side of the tangent point at 12.9 kpc. Thus G24.470+0.495 is considerably farther than the tangent point distance of 7.6 kpc.

Petriella et al. [4] studied the molecular environment of the luminous blue variable (LBV) star G24.73+0.69 that is located near the SNR. They placed it at 3.5 kpc, adopting a systemic velocity of 42 km s-1 for the molecular shell, and suggested that the progenitor of the SNR G24.7+0.6 and the LBV star were both formed from the same natal cloud. Using the systemic velocity of 42 km s-1 from Petriella et al. [4] and the Reid et al. [11] rotation curve, the revised distance to the LBV star G24.73+0.69 is 2.9 kpc. The LBV star could be associated with the SNR if it has a peculiar velocity of +14 km s-1. This would place the LBV star also at a distance of 3.5 kpc.

The error of the SNR distance is calculated using the method described in Section 2.2 and is 0.2 kpc. The size of the SNR is 30.5 ± 1.7 pc × 15.3 ± 0.9 pc (major and minor axes, respectively). These values are summarized in Table 1.

### 4.2. G29.6+0.1

Gaensler et al. [6] reported Very Large Array observations of the slow X-ray pulsar AX J1845-0258, which is physically associated with the SNR. The 5.1ˊ radio emission shell of the SNR is linearly polarized with a non-thermal spectral index. The SNR is thought to be less than 8000 years old. Vasisht et al. [14] determined the distance to the pulsar to be 5 - 15 kpc based on X-ray absorption measurements. The distance to the SNR was suggested as 10 kpc due to its association with AX J1845-0258. The chosen region for our analysis coincides with the bright spot of the Gaensler et al. [6] 5 GHz image (their Fig. 1, left panel).

Kilpatrick et al. [15] detected 12CO emission toward the SNR at a velocity of +94 km s-1 but gave no good evidence of an association (their Fig. 12). The 13CO spectra from the source and background regions inside the red box (Fig. 4) are shown in Fig. (5). These show molecular clouds at 65, 79, 85 and 95 km s-1. None of these molecular clouds morphologically match the boundary of the SNR. However, the cloud at 79 km s-1 matches the highest velocity of absorption of the SNR and thus could be associated with the SNR. The distance to the SNR is 4.7 ± 0.3 kpc and the diameter is 6.8 ± 0.4 pc. The distance is consistent with the approximate lower limit distance presented by Vasisht et al. [14].

### 4.3. G41.5+0.4

Kaplan et al. [16] first presented an image of the SNR and noted its complex morphology, consisting of a brighter rim to the left and a compact core to the right suggestive of a PWN. The SNR is ~12ˊ in size. Alves et al. [17] confirmed the synchrotron nature of the SNR, presenting free-free and synchrotron maps. There is no previous distance estimate for G41.5+0.4. We place G41.5+0.4 at a distance of 4.1 ± 0.5 kpc. The diameter of the SNR is 16.7 ± 2.0 pc.

### 4.4. G57.2+0.8

Also known as 4C21.53, the 13ˊ×10ˊ SNR G57.2+0.8 has a spectral index of 0.62 and consists of a non-thermal arc [18]. Kilpatrick et al. [15] used a Σ-D relation to estimate the distance to G57.2+0.8 as 8.2 kpc. Park et al.
[19] used another Σ-D relation to estimate the distance as 14.3 kpc. We place the SNR G57.2+0.8 between the tangent point distance of 4.5 kpc and the far side of the solar circle at 9 kpc.

We used the method of error calculation in distance described in Section 2.2. For the tangent point distance, many parameter combinations yield no solutions to the equation for Vr(d) (where Vr ≥ V(R) - V0 sin(l)). Thus the standard deviation of the solutions for d is artificially low. To obtain a more reasonable estimate of the error at the tangent point, we constructed a model spectrum and compared the tangent point velocity with the URC rotation curve velocity. We find the corresponding error to be 0.4 kpc for both the lower and upper limit distances.

### 4.5. Evolutionary States

With distances now measured, we apply a basic Sedov model [20] to estimate the ages of the four SNRs. None of the four SNRs have X-ray observations. Thus we cannot use the X-ray emission to determine the local ISM density for the SN explosion, as done in [21] using the models of Leahy and Williams [22]. Leahy [21] found that the mean explosion energy for 50 SNRs in the Large Magellanic Cloud was 5×10^50 erg, so we use that value. The range in local ISM density for the LMC was found to be larger (from ~10^-3 cm-3 to ~10 cm-3) than the range of explosion energy. Because the SNRs we are considering are inside the solar circle, in a higher density part of the Milky Way, we use 1 cm-3 as the nominal ISM density.

SNR nominal ages were found by applying a Sedov model with explosion energy E0 = 5×10^50 erg, local ISM density n0 = 1 cm-3 and our new distances. The resulting age for G24.7+0.6 is 9100 yr, for G29.6+0.1 is 440 yr and for G41.5+0.4 is 4100 yr. G57.2+0.8 has a Sedov age between 3900 yr (for distance 4.5 kpc) and 22,000 yr (for distance 9 kpc). Not having a measurement of the local ISM density creates a larger uncertainty in the model SNR age than any other factor; e.g., a difference in local ISM density by a factor of 10 results in an age difference by a factor of 3.16. The Sedov age estimate for G29.6+0.1 is quite low. As discussed in Section 4.2, this SNR may be associated with a molecular cloud, indicating a higher density. If the ISM density were 10 to 100 cm-3, quite feasible if it is associated with a molecular cloud, then the Sedov age would be 1400 to 4400 yr. Thus for G29.6+0.1 we use a range of n0 = 1 - 100 cm-3.

Table 1. Distances to supernova remnants.

| Source | Literature Dist.a (kpc) | Ref. | Vr (km s-1) | KDARb | New Dist. (kpc) | Angular sizec (arcmin) | Sizec (pc) | Sedov Aged (kyr) |
|---|---|---|---|---|---|---|---|---|
| G24.7+0.6 | - | - | 54.6 | N | 3.5 ± 0.2 | 30 × 15 | 30.5 ± 1.7 × 15.3 ± 0.9 | 9.1 |
| G29.6+0.1 | 10 ± 5 (X) | 14 | 80.16 | N | 4.7 ± 0.3 | 5 × 5 | 6.8 ± 0.4 × 6.8 ± 0.4 | 0.44 – 4.4 |
| G41.5+0.4 | - | - | 63.67 | N | 4.1 ± 0.5 | 14 × 14 | 16.7 ± 2.0 × 16.7 ± 2.0 | 4.1 |
| G57.2+0.8 | 8.2 (S), 14.3 (S) | 15, 19 | VTP – 0 | TP – F | 4.5 ± 0.4 – 9.0 ± 0.4 | 13 × 12 | 25.5 ± 10 × 23.6 ± 9.3 | 3.9 – 22 |

Notes: a Distance method - X: X-ray absorption, S: Σ-D relation. b KDAR - Kinematic Distance Ambiguity Resolution, indicating near (N), far (F), upper limit solar circle or tangent point (TP) distance. c Major axis × minor axis in radio continuum. d Sedov age with E0 = 5×10^50 erg, n0 = 1 cm-3, except for G29.6+0.1 (see text) with n0 = 1 – 100 cm-3.
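For reference, the ages above can be reproduced by inverting the Sedov relation R = ξ (E0 t^2 / ρ0)^(1/5); a minimal sketch (not the paper's code; the mean mass per particle μ and the constant ξ ≈ 1.15 are assumed values for a γ = 5/3 gas):

```python
import numpy as np

PC = 3.086e18    # cm per parsec
YR = 3.156e7     # seconds per year
M_H = 1.673e-24  # hydrogen atom mass, g

def sedov_age(radius_pc, E0=5e50, n0=1.0, mu=1.4, xi=1.15):
    """Sedov-Taylor age (yr) from shock radius R = xi*(E0*t^2/rho0)^(1/5)."""
    rho0 = mu * M_H * n0          # ambient mass density, g/cm^3
    R = radius_pc * PC
    return np.sqrt(rho0 / E0) * (R / xi) ** 2.5 / YR

# e.g. G24.7+0.6: mean radius ~10.8 pc -> roughly 9-10 kyr, close to Table 1
print(sedov_age(10.8))
```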
## SUMMARY

We have used H I absorption spectra and H I channel maps to find the maximum velocity of absorption for four SNRs which do not have previous distance measurements. The H I channel maps were used to distinguish between real and false features in the H I absorption spectrum. The resulting distances and distance limits for the four SNRs are given in Table 1. The angular sizes (major and minor axes) measured from the 1420 MHz radio continuum images are given in Table 1, together with the resulting physical sizes using the new distances. We have applied Sedov models to estimate the ages of the four SNRs, and find ages typical of SNRs in the Sedov phase.

### CONFLICT OF INTEREST

The authors declare no conflict of interest, financial or otherwise.

## ACKNOWLEDGEMENTS

This work was supported in part by a grant from the Natural Sciences and Engineering Research Council of Canada. We would also like to thank the referee for the insightful comments and important suggestions that have improved this work.

## REFERENCES

[1] Cox DP. The three-phase interstellar medium revisited. Annu Rev Astron Astrophys 2005; 43: 337-85.
[2] Leahy D, Tian W. Distances to supernova remnants from H I absorption spectra. In: Kothes R, Landecker TL, Willis AG, editors. The Dynamic Interstellar Medium: A Celebration of the Canadian Galactic Plane Survey. Vol. 438 of Astronomical Society of the Pacific Conference Series; 2010. p. 365.
[3] Ranasinghe S, Leahy DA. Distances to supernova remnants G31.9+0.0 and G54.4-0.3 associated with molecular clouds. Astrophys J 2017; 843: 119.
[4] Petriella A, Paron SA, Giacani EB. The molecular gas around the luminous blue variable star G24.73+0.69. Astron Astrophys 2012; 538: A14.
[5] Acero F, Ackermann M, Ajello M, et al. The first Fermi LAT supernova remnant catalog. Astrophys J Supplement 2016; 224: 8.
[6] Gaensler BM, Gotthelf EV, Vasisht G. A new supernova remnant coincident with the slow X-ray pulsar AX J1845-0258. Astrophys J 1999; 526(1): L37-40.
[7] Stil JM, Taylor AR, Dickey JM, et al. The VLA Galactic Plane Survey. Astron J 2006; 132: 1158-76.
[8] Jackson JM, Rathborne JM, Shah RY, et al. The Boston University-Five College Radio Astronomy Observatory Galactic Ring Survey. Astrophys J Supplement 2006; 163: 145-59.
[9] Kwee KK, Muller CA, Westerhout G. The rotation of the inner parts of the Galactic System. Bull Astron Inst Neth 1954; 12: 211.
[10] Persic M, Salucci P, Stel F. The universal rotation curve of spiral galaxies - I. The dark matter connection. Monthly Notices of the RAS 1996; 281: 27-47.
[11] Reid MJ, Menten KM, Brunthaler A, et al. Trigonometric parallaxes of high mass star forming regions: The structure and kinematics of the Milky Way. Astrophys J 2014; 783: 130.
[12] Becker RH, Helfand DJ. High-resolution radio observations of the supernova remnant G24.7+0.6 and the discovery of an ultra-compact H II region. Astrophys J 1987; 316: 660-2.
[13] Jones C, Dickey JM. Kinematic distance assignments with H I absorption. Astrophys J 2012; 753: 62.
[14] Vasisht G, Gotthelf EV, Torii K, Gaensler BM. Detection of a compact X-ray source in the supernova remnant G29.6+0.1: A variable anomalous X-ray pulsar? Astrophys J Lett 2000; 542: L49-52.
[15] Kilpatrick CD, Bieging JH, Rieke GH. A systematic survey for broadened CO emission toward galactic supernova remnants. Astrophys J 2016; 816: 1.
[16] Kaplan DL, Kulkarni SR, Frail DA, van Kerkwijk MH. Deep radio, optical, and infrared observations of SGR 1900+14. Astrophys J 2002; 566: 378-86.
[17] Alves MIR, Davies RD, Dickinson C, Calabretta M, Davis R, Staveley-Smith L. A derivation of the free-free emission on the Galactic plane between l = 20° and 44°. Monthly Notices of the RAS 2012; 422: 2429-43.
[18] Green DA. A catalogue of 294 Galactic supernova remnants.
Bull Astron Soc India 2014; 42: 47-58.
[19] Park G, Koo BC, Gibson SJ, Kang JH, Lane DC, Douglas KA, et al. H I shells and supershells in the I-GALFA H I 21 cm line survey. I. Fast-expanding H I shells associated with supernova remnants. Astrophys J 2013; 777: 14.
[20] Cox DP. Cooling and evolution of a supernova remnant. Astrophys J 1972; 178: 159-68.
[21] Leahy DA. Energetics and birth rates of supernova remnants in the Large Magellanic Cloud. Astrophys J 2017; 837: 36.
[22] Leahy DA, Williams JE. A Python calculator for supernova remnant evolution. Astron J 2017; 153: 239.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8893157839775085, "perplexity": 2524.64379945752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540531974.7/warc/CC-MAIN-20191211160056-20191211184056-00371.warc.gz"}
https://socratic.org/questions/how-do-you-write-an-equation-in-point-slope-form-given-slope-2-3-1
# How do you write an equation in point slope form given slope -2, (3, 1)?

Jun 27, 2016

Point-slope form: $\left(y - 1\right) = - 2 \left(x - 3\right)$, i.e. $y - 1 = - 2 x + 6$. Slope-intercept form: $y = - 2 x + 7$.

#### Explanation:

We will use the point-slope form to get the equation of the line:

$\left(y - {y}_{1}\right) = m \left(x - {x}_{1}\right)$

where $m$ is the slope. Here $m = - 2$, ${x}_{1} = 3$ and ${y}_{1} = 1$, so

$\left(y - 1\right) = - 2 \left(x - 3\right)$

$y - 1 = - 2 x + 6$

$y = - 2 x + 7$
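As a quick check, not part of the original answer, expanding the point-slope form with a CAS reproduces the slope-intercept form and confirms the line passes through $(3, 1)$:

```python
from sympy import symbols, expand

x = symbols('x')
m, x1, y1 = -2, 3, 1

# Point-slope form y - y1 = m*(x - x1), solved for y:
y = expand(m * (x - x1) + y1)
print(y)              # -2*x + 7, the slope-intercept form
print(y.subs(x, x1))  # 1, so the point (3, 1) is on the line
```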
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9431437849998474, "perplexity": 2297.1929332912805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621519.32/warc/CC-MAIN-20210615180356-20210615210356-00345.warc.gz"}
https://www.physicsforums.com/threads/help-evaluating-complex-function-in-form-m-ni.882178/
# Help evaluating complex function in form m+ni?

1. Aug 16, 2016

### NotASmurf

Hey all, I need the complex version of the sigmoid function in standard form, that is to say $$f(\alpha) =\frac{1}{1+e^{-\alpha}} , \hspace{2mm}\alpha = a+bi , \hspace{2mm} \mathbb{C} \to \mathbb{C}$$ in the simplified form $$f = m+ni$$ but found this challenging. For some reason I assumed there was an identity for $$e^{e^{x}}, \hspace{2mm} x \in \mathbb{C},$$ so I wasted my time with $$e^{-a-bi}= e^{e^{\tan^{-1}\left(\frac{b}{a}\right)i + \ln\sqrt{a^{2} + b^{2}}}}$$ and tried from there (just showing I did make an attempt, no matter how abysmal). Any help appreciated, as I am not too familiar with complex numbers outside of the basics needed for transformation matrices.

2. Aug 16, 2016

### blue_leaf77

No need to go that far. Try multiplying $f(\alpha) =\frac{1}{1+e^{-\alpha}}$ by $\frac{1+e^{-\alpha^*}}{1+e^{-\alpha^*}}$ and then use Euler's formula.

3. Aug 16, 2016

### NotASmurf

Thanks :D Comes to $$\frac{1}{2\cos(b) e^{-a}}\left[1+e^{-\overline{\alpha}}\right]$$ right (before further simplification)? Then $$=\left[\frac{1}{2\cos(b) e^{-a}} +\frac{1}{2}\right] + \left[\frac{1}{2} \tan(b)\right]i$$ ? Or have I screwed up? (I can't substitute in numbers as easily in this case to test.)

Last edited: Aug 16, 2016

4. Aug 16, 2016

### mathman

$\frac{1}{1+e^{-\alpha}}=\frac{1}{1+e^{-a}(\cos b-i\sin b)}=\frac{1+e^{-a}(\cos b+i\sin b)}{(1+e^{-a}\cos b)^2+(e^{-a}\sin b)^2}=\frac{1+e^{-a}(\cos b+i\sin b)}{1+2e^{-a}\cos b+e^{-2a}}$ $\alpha$ and a look alike in itex.

5. Aug 16, 2016

### NotASmurf

Thanks mathman, but since my previous answer is more computationally efficient (it has to run vast numbers of iterations in the program), is my previous answer correct?

Last edited: Aug 16, 2016

6. Aug 16, 2016

### NotASmurf

Turns out this was a fruitless exercise, since $$\frac{\partial u }{\partial x} \neq \frac{\partial v }{\partial y},$$ so it doesn't conform with the Cauchy-Riemann equations D: (It needs to be differentiable.)

Last edited: Aug 16, 2016

7. Aug 17, 2016

### FactChecker

I think you should check that again. 1/(1+e^(-α)) is analytic in the complex plane except where (1+e^(-α)) = 0.

8. Aug 17, 2016

### NotASmurf

Well, when I reworked it I got $$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$$ BUT $$\frac{\partial u}{\partial y} = \frac{\partial v}{\partial x}$$ when it should be equal to the negative. Does that mean it's differentiable, but only for certain regions? (I haven't had to do complex differentiation before.) Also a quick question: the Cauchy-Riemann equations essentially say $$\frac{\partial f}{\partial z} = 0$$ must be true for it to be differentiable; this confuses me. Can someone elucidate the intuition here?

9. Aug 17, 2016

### FactChecker

I think there must be a sign problem somewhere. The series of operations α => -α => e^(-α) => 1 + e^(-α) gives an entire function (analytic in the entire complex plane). Then 1/(1 + e^(-α)) is analytic for α in ℂ except where it is a division by 0.

This is not right. I'm not familiar with the Wirtinger derivatives (https://en.wikipedia.org/wiki/Wirtinger_derivatives), but apparently this should be the partial wrt the z conjugate (see equation 3 of https://en.wikipedia.org/wiki/Cauchy–Riemann_equations ).

10. Aug 17, 2016

### mathman

I doubt it. It is different from mine - unlikely to be correct. Check out the denominator.

11. Aug 28, 2016

### Svein

No, you are wrong.
An analytic function is characterized by $\frac{\partial f}{\partial \bar{z}}=0$ (you need to define $\frac{\partial f}{\partial \bar{z}}$ in a sensible manner, but that should not be too hard).
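Not part of the thread, but mathman's closed form (post #4) and the analyticity discussed in the later posts are easy to spot-check numerically; a Python sketch with an arbitrary test point:

```python
import cmath, math

def sigmoid(z):
    return 1 / (1 + cmath.exp(-z))

a, b = 0.7, 1.3          # arbitrary test point alpha = a + b*i
z = complex(a, b)

# mathman's real/imaginary decomposition:
den = 1 + 2 * math.exp(-a) * math.cos(b) + math.exp(-2 * a)
u = (1 + math.exp(-a) * math.cos(b)) / den
v = math.exp(-a) * math.sin(b) / den
print(sigmoid(z), complex(u, v))       # the two values agree

# Numerical Cauchy-Riemann check: u_x = v_y and u_y = -v_x.
h = 1e-6
u_x = (sigmoid(z + h).real - sigmoid(z - h).real) / (2 * h)
v_y = (sigmoid(z + 1j*h).imag - sigmoid(z - 1j*h).imag) / (2 * h)
u_y = (sigmoid(z + 1j*h).real - sigmoid(z - 1j*h).real) / (2 * h)
v_x = (sigmoid(z + h).imag - sigmoid(z - h).imag) / (2 * h)
print(abs(u_x - v_y), abs(u_y + v_x))  # both ~0: the sigmoid is analytic here
```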
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9495383501052856, "perplexity": 1784.3078305849633}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889681.68/warc/CC-MAIN-20180120182041-20180120202041-00090.warc.gz"}
http://mathhelpforum.com/differential-geometry/121151-cts-fnc-closed-interval-print.html
# cts fnc on a closed interval

• December 19th 2009, 10:09 AM flower3

Suppose that $f:[a,b] \to \mathbb{R}$ is continuous with $f(x)>0$ for all $x \in [a,b]$. Prove that there exists $m>0$ such that $f(x) \geq m$ for all $x \in [a,b]$.

• December 19th 2009, 10:21 AM Plato

Quote: Originally Posted by flower3
Suppose that $f:[a,b] \to \mathbb{R}$ is continuous with $f(x)>0$ for all $x \in [a,b]$. Prove that there exists $m>0$ such that $f(x) \geq m$ for all $x \in [a,b]$.

Use the high-point/low-point theorem.

• December 19th 2009, 10:24 AM tonio

Quote: Originally Posted by flower3
[as above]

Suppose not; then for any $n\in\mathbb{N}$ there exists $x_n\in [a,b]$ such that $f(x_n)<\frac{1}{n}$. Now use the continuity of the function on the sequence $\{x_n\}$ to reach a contradiction...

Tonio

• December 19th 2009, 09:31 PM Drexel28

Quote: Originally Posted by flower3
[as above]

Accidental post.

• December 19th 2009, 09:37 PM Drexel28

Quote: Originally Posted by flower3
[as above]

Alternatively to tonio's, and most likely what Plato was hinting at: since $[a,b]$ is closed and bounded (compact) and $f$ is continuous, we have that $f\left([a,b]\right)$ is closed and bounded (compact). Thus, $\inf f\left([a,b]\right)=\alpha$ exists and is in $f\left([a,b]\right)$. Now since $\alpha\in f\left([a,b]\right)$ it follows that $\alpha>0$; by the Archimedean principle there exists some $n\in\mathbb{N}$ such that $\frac{1}{n}<\alpha$, and taking $m=\frac{1}{n}$ gives the conclusion.
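Filling in the contradiction step that tonio's hint leaves to the reader (not spelled out in the thread): since $[a,b]$ is compact, Bolzano-Weierstrass gives a convergent subsequence $x_{n_k} \to x_0 \in [a,b]$. Continuity of $f$ then forces
$$f(x_0) = \lim_{k \to \infty} f(x_{n_k}) \leq \lim_{k \to \infty} \frac{1}{n_k} = 0,$$
contradicting $f(x_0) > 0$.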
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9656661152839661, "perplexity": 1146.0111492007456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097512.42/warc/CC-MAIN-20150627031817-00104-ip-10-179-60-89.ec2.internal.warc.gz"}