https://papers.nips.cc/paper/2020/file/d785bf9067f8af9e078b93cf26de2b54-MetaReview.html
NeurIPS 2020

### Meta Review

It is shown that running SGD on a non-convex surrogate for the 0-1 loss converges to a near-optimal halfspace under adversarial label noise. The resulting algorithm is much simpler than others available in the literature. It is also shown (by a novel lower bound) that using convex surrogates cannot really improve the performance. The reviewers are generally positive about the paper. In the revised version the authors should better highlight the difference compared to [DKTZ20].
https://www.physicsforums.com/threads/voltage-output-for-ideal-diode-solutions-wrong.668996/
# Homework Help: Voltage output for ideal diode (solutions wrong?)

### pyroknife (Feb 2, 2013)

I'm looking at some solutions for a problem I found: http://www.etcs.ipfw.edu/~lin/MET487/2011-SumII/Lectures/Hw3_Sols-MET487-Sum2011.PDF (page 4, problem 3.1). V_in is given to be 10cos(2πt), but their graph for V_in makes no sense to me, and consequently V_out seems wrong as well. Am I missing something here, or is their graph for V_in totally off? The magnitudes are right, but their period seems off by a factor of 2.

### ehild (Feb 3, 2013)

You are right, the period is wrong.

### pyroknife (Feb 3, 2013)

Thanks. Any clue what they were doing by finding the frequency? I don't see any point in doing that.

### ehild (Feb 3, 2013)

The time dependence of an AC voltage is U = A·cos(ωt). The input voltage is U = 10cos(2πt). So what is the angular frequency ω? How is it related to the frequency and to the period? The scaling in the picture is wrong.

### pyroknife (Feb 3, 2013)

Yeah, I know that angular frequency = 2π × frequency and that period = 1/frequency. I just don't understand why they went through the hassle of calculating the frequency to draw a simple sinusoidal graph. The frequency was never asked for, so I didn't see the point in calculating it, unless they somehow saw it as an easier way of drawing a simple cosine function.

### ehild (Feb 3, 2013)

They needed to graph the time dependence of the output voltage (but they did it wrong).
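The numbers in the thread can be checked in a few lines. This is a sketch, not from the linked solutions: it assumes the circuit is a plain half-wave rectifier, so an ideal diode simply blocks the negative half-cycle of the input.

```python
import numpy as np

# V_in = 10*cos(2*pi*t): comparing with A*cos(omega*t) gives omega = 2*pi rad/s.
omega = 2 * np.pi
f = omega / (2 * np.pi)   # frequency: 1 Hz
T = 1.0 / f               # period: 1 s -- the posted graph's period is off by a factor of 2

# Assumed half-wave rectifier: an ideal diode passes only the positive half-cycle.
t = np.linspace(0.0, 2.0, 2001)
v_in = 10 * np.cos(omega * t)
v_out = np.where(v_in > 0.0, v_in, 0.0)

print(T)            # 1.0
print(v_out.min())  # 0.0 -- the negative half-cycle is blocked
print(v_out.max())  # 10.0 -- the positive peak passes through unchanged
```

So ω = 2π rad/s gives f = 1 Hz and T = 1 s, which is presumably why the solutions computed the frequency at all: it fixes the time axis of both graphs.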
https://hal.inria.fr/hal-00931514
# Gathering and Exclusive Searching on Rings under Minimal Assumptions

COATI - Combinatorics, Optimization and Algorithms for Telecommunications; CRISAM - Inria Sophia Antipolis - Méditerranée; COMRED - COMmunications, Réseaux, systèmes Embarqués et Distribués

Abstract: Consider a set of mobile robots with minimal capabilities placed on distinct nodes of a discrete anonymous ring. Asynchronously, each robot takes a snapshot of the ring, determining which nodes are occupied by robots and which are empty. Based on the observed configuration, it decides whether or not to move to one of its adjacent nodes; in the first case, it eventually performs the computed move. The computation also depends on the required task. In this paper, we solve both the well-known Gathering and Exclusive Searching tasks. In the former problem, all robots must eventually occupy the same node simultaneously. In the latter problem, the aim is to clear all edges of the graph: an edge is cleared if it is traversed by a robot or if both its endpoints are occupied. We consider exclusive searching, where it must be ensured that two robots never occupy the same node. Moreover, since the robots are oblivious, the clearing is perpetual, i.e., the ring is cleared infinitely often. In the literature, most contributions are restricted to a subset of initial configurations. Here, we design two different algorithms and provide a characterization of the initial configurations that permit resolution of the problems under minimal assumptions.

Document type: Conference paper. In: Mainak Chatterjee, Jian-Nong Cao, Kishore Kothapalli, Sergio Rajsbaum (eds.), 15th International Conference on Distributed Computing and Networking (ICDCN), Jan 2014, Coimbatore, India.
Springer, vol. 8314, pp. 149-164, 2014, doi:10.1007/978-3-642-45249-9_10. https://hal.inria.fr/hal-00931514. Contributor: Nicolas Nisse. Submitted: Wednesday, January 15, 2014, 13:55:33. Last modified: Monday, October 5, 2015, 17:00:25. Archived: Tuesday, April 15, 2014, 22:44:22.

### Citation

Gianlorenzo D'Angelo, Alfredo Navarra, Nicolas Nisse. Gathering and Exclusive Searching on Rings under Minimal Assumptions. In: Mainak Chatterjee, Jian-Nong Cao, Kishore Kothapalli, Sergio Rajsbaum (eds.), 15th International Conference on Distributed Computing and Networking (ICDCN), Jan 2014, Coimbatore, India. Springer, vol. 8314, pp. 149-164, 2014. doi:10.1007/978-3-642-45249-9_10. hal-00931514.
https://quantumcomputing.stackexchange.com/users/55/gls?tab=summary
## glS

### Top answers

- (14) Given a decomposition for a unitary $U$, how do you decompose the corresponding controlled unitary gate $C(U)$?
- (13) Will deep learning neural networks run on quantum computers?
- (13) If all quantum gates must be unitary, what about measurement?
- (12) Is there any potential application of quantum computers in machine learning or AI?
- (10) Decomposition of a unitary two-dimensional matrix

### Reputation (6,162)

- +35 Prove that the partial trace is equivalent to measuring and discarding
- +20 How to split a 2-local unitary operator through singular value decomposition?
- +45 What's the difference between observing in a given direction and operating in that same direction?
- +10 Understanding Google's “Quantum supremacy using a programmable superconducting processor” (Part 3): sampling

### Questions (35)

- (27) Are there problems in which quantum computers are known to provide an exponential advantage?
- (26) How is the oracle in Grover's search algorithm implemented?
- (17) Why can the Discrete Fourier Transform be implemented efficiently as a quantum circuit?
- (16) What protocols have been proposed to implement quantum RAMs?
- (15) How should different quantum computing devices be compared?

### Tags (93)

- quantum-gate × 27 (score 91)
- quantum-state × 22 (score 66)
- algorithm × 24 (score 57)
- mathematics × 17 (score 41)
- measurement × 11 (score 34)
- quantum-information × 16 (score 28)
- unitarity × 4 (score 27)
- bloch-sphere × 10 (score 25)
- machine-learning × 2 (score 25)
- architecture × 8 (score 24)
https://home.cern/about/updates/2013/10/CERN-congratulates-Englert-and-Higgs-on-Nobel-in-physics
CERN congratulates Englert and Higgs on Nobel in physics

François Englert (left) and Peter Higgs at CERN on 4 July 2012, on the occasion of the announcement of the discovery of a Higgs boson by the ATLAS and CMS experiments (Image: Maximilien Brice/CERN)

CERN congratulates François Englert and Peter W. Higgs on the award of the Nobel prize in physics “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN’s Large Hadron Collider.” The announcement by the ATLAS and CMS experiments took place on 4 July last year.

“I’m thrilled that this year’s Nobel prize has gone to particle physics,” says CERN Director-General Rolf Heuer. “The discovery of the Higgs boson at CERN last year, which validates the Brout-Englert-Higgs mechanism, marks the culmination of decades of intellectual effort by many people around the world.”

Members of the ATLAS and CMS collaborations react with jubilation at CERN as the announcement is made (Image: Maximilien Brice/CERN)

The Brout-Englert-Higgs (BEH) mechanism was first proposed in 1964 in two papers published independently, the first by Belgian physicists Robert Brout and François Englert, and the second by British physicist Peter Higgs. It explains how the force responsible for beta decay is much weaker than electromagnetism, but is better known as the mechanism that endows fundamental particles with mass. A third paper, published by Americans Gerald Guralnik and Carl Hagen with their British colleague Tom Kibble, further contributed to the development of the new idea, which now forms an essential part of the Standard Model of particle physics. As was pointed out by Higgs, a key prediction of the idea is the existence of a massive boson of a new type, which was discovered by the ATLAS and CMS experiments at CERN in 2012.
The Standard Model describes the fundamental particles from which we, and all the visible matter in the universe, are made, along with the interactions that govern their behaviour. It is a remarkably successful theory that has been thoroughly tested by experiment over many years. Until last year, the BEH mechanism was the last remaining piece of the model to be experimentally verified. Now that it has been found, experiments at CERN are eagerly looking for physics beyond the Standard Model.

The Higgs particle was discovered by the ATLAS and CMS collaborations, each of which involves over 3000 people from all around the world. They have constructed sophisticated instruments – particle detectors – to study proton collisions at CERN’s Large Hadron Collider (LHC), itself a highly complex instrument involving many people and institutes in its construction.

CERN will be holding a press conference at 2pm CET today in the Globe of Science and Innovation. For those unable to attend, it will be webcast. Media questions can be submitted by Twitter using the hashtag #BosonNobel.
https://community.jmp.com/t5/Discussions/What-are-the-Profile-Formulas-I-get-from-the-Neural-Network/m-p/187397/highlight/true
## What are the Profile Formulas I get from the Neural Network model trained by K-fold cross-validation?

**Question (Community Trekker):** Dear community, I trained my neural network model (Analyze => Predictive Modeling => Neural) using K-fold cross-validation, so there were K sets of model parameters generated during training. When I choose Save Profile Formulas (as shown in the figure below), which set of parameters is reported to me? Is it one of those K sets, or some kind of average or combination of them? Please advise. Thanks very much!

**Accepted solution (Staff):** I selected Help > Predictive and Specialized Modeling > Neural Networks > Launch the Neural Platform > Validation Method. It says: "Divides the original data into K subsets. In turn, each of the K sets is used to validate the model fit on the rest of the data, fitting a total of K models. The model giving the best validation statistic is chosen as the final model." Learn it once, use it forever!

**Reply (Community Trekker):** Thank you so much, Mark! Sorry that I missed that statement when skimming the manual.
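The documented behaviour (fit K models, keep only the one with the best validation statistic) can be sketched outside JMP. This is an illustrative mock-up, not JMP's actual implementation: an ordinary least-squares fit stands in for the neural-network training, and the residual sum of squares on the held-out fold plays the role of the validation statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

def fit_model(X_train, y_train):
    # Least-squares fit as a stand-in for training JMP's neural network.
    w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return w

K = 5
folds = np.array_split(np.arange(len(y)), K)
models = []
for k in range(K):
    val = folds[k]                                   # fold k validates...
    tr = np.setdiff1d(np.arange(len(y)), val)        # ...the fit on the rest
    w = fit_model(X[tr], y[tr])
    rss = float(np.sum((X[val] @ w - y[val]) ** 2))  # validation statistic
    models.append((rss, w))

# JMP-style choice: the single model with the best (smallest) validation
# statistic becomes the final model; the saved profile formula corresponds
# to this one set of parameters, not to an average of the K sets.
best_rss, best_w = min(models, key=lambda m: m[0])
print(len(models))  # 5 models fitted, one per fold
```

The point of the sketch is the last step: exactly one of the K parameter sets survives, which matches the accepted answer.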
http://www.milefoot.com/math/calculus/limits/LimitStrategies11.htm
## Strategies for Evaluating Limits

There are several approaches used to find limits. Here, we summarize the different strategies, and their advantages and disadvantages.

### Substitution

Many limits may be evaluated by substitution. The necessary requirement for this approach to work is that the function is continuous at the point where the limit is being evaluated.

• $\lim\limits_{x\to 5} (x^2-3x+2)=5^2-3(5)+2=12$

Caution: When evaluating, if the expression $\sqrt{0}$ is encountered, it is also necessary to determine whether the result is valid for a two-sided limit, or for a particular one-sided limit, or possibly not valid at all.

• $\lim\limits_{x\to 6+}\sqrt{6-x}$, when evaluated by substitution, gives $\sqrt{6-6}=\sqrt{0}=0$. However, this limit actually does not exist, because the domain of the function $f(x)=\sqrt{6-x}$ is $(-\infty,6]$, and therefore $x$ cannot approach 6 from the right along this function. The only limit of this function that would exist as $x$ approaches 6 is a limit from the left.

### Factoring and Cancelling

When evaluation produces the form $\dfrac{0}{0}$, quite frequently there is a common factor in the numerator and denominator of the function which can be cancelled.

• $\lim\limits_{x\to 4}\dfrac{x^2-10x+24}{x^2-x-12}$, when evaluated by substitution, produces the form $\dfrac{0}{0}$. So we proceed as follows: $\lim\limits_{x\to 4} \dfrac{x^2-10x+24}{x^2-x-12}=\lim\limits_{x\to 4}\dfrac{(x-6)(x-4)}{(x-4)(x+3)}= \lim\limits_{x\to 4}\dfrac{x-6}{x+3}=\dfrac{4-6}{4+3}=-\dfrac{2}{7}$.

Sometimes when the form $\dfrac{0}{0}$ occurs, algebraic manipulation is required before the factors are apparent. With compound fractions, a common denominator should be obtained.
• $\lim\limits_{x\to 2}\dfrac{\frac{1}{x}-\frac12}{x-2}=\lim\limits_{x\to 2} \dfrac{\left(\frac{2-x}{2x}\right)}{\left(\frac{x-2}{1}\right)}=\lim\limits_{x\to 2} \left(\dfrac{x-2}{-2x}\right)\left(\dfrac{1}{x-2}\right)=\lim\limits_{x\to 2} \dfrac{-1}{2x}=-\dfrac14$

When the form $\dfrac{0}{0}$ occurs and square roots are present, the numerator and denominator should be multiplied by the conjugate of the expression containing the square root.

• $\lim\limits_{x\to 3}\dfrac{\sqrt{x^2+16}-5}{x^2-9}=\lim\limits_{x\to 3} \dfrac{\left(\sqrt{x^2+16}-5\right)\left(\sqrt{x^2+16}+5\right)} {(x^2-9)\left(\sqrt{x^2+16}+5\right)}=\lim\limits_{x\to 3}\dfrac{x^2+16-25} {(x^2-9)\left(\sqrt{x^2+16}+5\right)}$ $\qquad\qquad =\lim\limits_{x\to 3}\dfrac{x^2-9} {(x^2-9)\left(\sqrt{x^2+16}+5\right)}=\lim\limits_{x\to 3}\dfrac{1}{\left(\sqrt{x^2+16}+5\right)} =\dfrac{1}{10}$

When the form $\dfrac{0}{0}$ occurs and absolute values are present, it is possible that only a one-sided limit will exist. The absolute value will need to be rewritten. In this example, the argument of the absolute value is negative when approaching from the left, so a factor of negative one appears in the solution.

• $\lim\limits_{x\to 7-}\dfrac{(x-4)|x-7|}{x-7}=\lim\limits_{x\to 7-}\dfrac{(x-4)(-1)(x-7)} {x-7}=\lim\limits_{x\to 7-}(-1)(x-4)=(-1)(7-4)=-3$

### A Special Case Involving the Sine Function

If evaluation produces the form $\dfrac{0}{0}$ and the sine function is present, the special case $\lim\limits_{\theta\to 0}\dfrac{\sin\theta}{\theta}=1$ may be needed. Note that this approach is valid only when the argument of the sine function is approaching zero (not infinity).

• $\lim\limits_{x\to 0}\dfrac{4\sin 5x}{3x}=\lim\limits_{x\to 0} \dfrac{\frac53 (4\sin 5x)}{\frac53 (3x)}=\lim\limits_{x\to 0}\dfrac{20}{3} \left(\dfrac{\sin 5x}{5x}\right)=\dfrac{20}{3}(1)=\dfrac{20}{3}$

When the form $\dfrac{0}{0}$ appears with a cosine function, the same special case is needed, along with some trigonometric identities.
• $\lim\limits_{x\to 0}\dfrac{1-\cos 4x}{3x}=\lim\limits_{x\to 0}\dfrac{(1-\cos 4x)(1+\cos 4x)} {3x(1+\cos 4x)}=\lim\limits_{x\to 0}\dfrac{1-\cos^2 4x}{3x(1+\cos 4x)}=\lim\limits_{x\to 0} \dfrac{\frac43 \sin^2 4x}{\frac43 (3x)(1+\cos 4x)}$ $\qquad\qquad =\lim\limits_{x\to 0}\left(\dfrac43 \dfrac{\sin 4x}{4x}\dfrac{\sin 4x}{1+\cos 4x}\right) =\dfrac43 (1)\left(\dfrac02\right)=0$

### Multiplying Numerator and Denominator by the Reciprocal of the Highest Power

Sometimes evaluation produces the form $\dfrac{\infty}{\infty}$. When this occurs, the limit may be of a rational function as $x$ approaches infinity. This can be solved by multiplying the numerator and denominator by the reciprocal of the highest power of $x$ appearing in the function, together with the special case $\lim\limits_{x\to\infty}\dfrac{1}{x}=0$.

• $\lim\limits_{x\to\infty}\dfrac{5x^3-4x^2+11}{6x^3-9x+7}=\lim\limits_{x\to\infty}\dfrac {(5x^3-4x^2+11)\left(\frac{1}{x^3}\right)}{(6x^3-9x+7)\left(\frac{1}{x^3}\right)}= \lim\limits_{x\to\infty}\dfrac{5-\frac{4}{x}+\frac{11}{x^3}}{6-\frac{9}{x^2}+\frac{7}{x^3}} =\dfrac{5-0+0}{6-0+0}=\dfrac56$

Another case that produces the form $\dfrac{\infty}{\infty}$ involves exponential functions. These require the special case $\lim\limits_{x\to\infty}b^{-x}=0$, valid for bases $b>1$.

• $\lim\limits_{x\to\infty}\dfrac{5+e^{3x}}{4-2e^{3x}}=\lim\limits_{x\to\infty}\dfrac {(5+e^{3x})e^{-3x}}{(4-2e^{3x})e^{-3x}}=\lim\limits_{x\to\infty}\dfrac{5e^{-3x}+1}{4e^{-3x}-2} =\dfrac{0+1}{0-2}=-\dfrac12$

### Sandwiching

Some functions are quite similar to known functions, but need sandwiching to handle them properly.

• $\lim\limits_{x\to\infty}\dfrac{1}{x-4}$ is similar to $\lim\limits_{x\to\infty}\dfrac{1}{x}$, so we would expect a similar result. Substitution won't produce that result, but we can sandwich it. Unfortunately, whenever $x>4$, then $\dfrac{1}{x-4} > \dfrac{1}{x}$, so the function $\dfrac{1}{x}$ will not work as the upper layer of the sandwich.
But $\dfrac{2}{x}$ will work, since the inequality $\dfrac{2}{x} > \dfrac{1}{x-4}$ will be true whenever $x>8$ (which you can confirm by solving the inequality). So we have $0\le\dfrac{1}{x-4}\le\dfrac{2}{x}$, and taking limits gives $\lim\limits_{x\to\infty}0\le\lim\limits_{x\to\infty}\dfrac{1}{x-4} \le\lim\limits_{x\to\infty}\dfrac{2}{x}$. But both the leftmost and rightmost expressions have limit zero, so we have $0\le\lim\limits_{x\to\infty}\dfrac{1}{x-4}\le 0$, and thus $\lim\limits_{x\to\infty}\dfrac{1}{x-4}=0$.

Some functions cannot be evaluated at their limit, and algebraic manipulation will not simplify the expression. In some of these cases, the Sandwich Theorem may be usable.

• $\lim\limits_{x\to 0}\left(x^2\sin\dfrac{1}{x}\right)$ cannot be simplified into another form. But because the sine function has a limited range, this function can be sandwiched. We first note that $-1\le\sin\dfrac{1}{x}\le 1$ is true for every nonzero real number $x$. Multiplying both sides by $x^2$ produces $-x^2\le x^2\sin\dfrac{1}{x}\le x^2$, and this does not change the direction of the inequalities since $x^2$ is never negative. Then we take limits of each expression to obtain $\lim\limits_{x\to 0}(-x^2)\le \lim\limits_{x\to 0}\left(x^2\sin\dfrac{1}{x}\right)\le \lim\limits_{x\to 0}(x^2)$, and after evaluating the leftmost and rightmost limits we have $0\le \lim\limits_{x\to 0}\left(x^2\sin\dfrac{1}{x}\right)\le 0$. So our limit has been sandwiched between two functions which have the same limit, and we are forced to the conclusion that $\lim\limits_{x\to 0}\left(x^2\sin\dfrac{1}{x}\right)= 0$.

In the above example, note that as $x$ approaches zero, the argument of the sine function approaches infinity, so the special case $\lim\limits_{\theta\to 0}\dfrac{\sin\theta}{\theta}=1$ will not apply.

### Indeterminate Forms in General

Several forms are considered indeterminate forms, for which a limit may or may not exist.
Besides the most common situations of $\dfrac00$ and $\dfrac{\infty}{\infty}$, there are many other situations in which an algebraic approach should be pursued. These include $0\cdot\infty$, $\infty-\infty$, $0^0$, $1^\infty$, and $\infty^0$. Some of these will be studied later in a section on L'Hopital's Rule.

### Infinite Limits

If evaluation produces the form $\dfrac10$, then the limit involves an infinite discontinuity. The specific result can usually be determined by analyzing the signs of the factors involved.

• $\lim\limits_{x\to 2+}\dfrac{x-6}{x-2}$, when evaluated by substitution, produces $\dfrac{-4}{0}$, which is the same form as $\dfrac10$. The numerator is clearly negative near $x=2$ (as we learned when we evaluated). As $x$ approaches 2 from the right, we have $x>2$, which implies $x-2>0$, so the denominator is positive. Thus, we have the quotient of a negative and a positive, which is negative. Therefore, $\lim\limits_{x\to 2+}\dfrac{x-6}{x-2}=-\infty$.

It should be noted that functions will often (but not always) have different infinite limits from the right and from the left. When this occurs, the two-sided limit does not exist.

### Graphs

Graphs may be used to help you understand the situation, but they should generally not be used for justification of results. The technology (both hardware and software) that produces graphical results generally has resolution issues at some scale, and therefore cannot be considered conclusive by itself. But graphs can be very useful when needing to identify a certain type of discontinuity.

### A Table of Numerical Approximations

Numerical approximations can also be useful in understanding a situation, but they will almost never give conclusive results. Just as graphs do, tables of numerical values also suffer from the "resolution" problem. But when a function has a finite limit, and efforts at simplifying the function fail, a numerical approximation may be the best available result.
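The worked examples above can be verified symbolically. A quick sanity check with SymPy (not part of the original page):

```python
import sympy as sp

x = sp.symbols('x')

# Factoring and cancelling: (x^2 - 10x + 24)/(x^2 - x - 12) at x = 4 gives -2/7.
assert sp.limit((x**2 - 10*x + 24) / (x**2 - x - 12), x, 4) == sp.Rational(-2, 7)

# Special sine case: 4*sin(5x)/(3x) at x = 0 gives 20/3.
assert sp.limit(4 * sp.sin(5*x) / (3*x), x, 0) == sp.Rational(20, 3)

# Sandwich theorem example: x^2 * sin(1/x) at x = 0 gives 0.
assert sp.limit(x**2 * sp.sin(1/x), x, 0) == 0

# One-sided infinite limit: (x - 6)/(x - 2) as x -> 2+ gives -infinity.
assert sp.limit((x - 6) / (x - 2), x, 2, '+') == -sp.oo

print("all limits confirmed")
```

As the page notes, symbolic evaluation like this is a check on the algebra, not a substitute for knowing which strategy applies.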
http://mathoverflow.net/feeds/question/98634
# Geometric interpretation of the exact sequence for the cotangent bundle of the projective space (MathOverflow)

Question by auniket (2012-06-02):

Edit: As Dan Petersen pointed out, this question is a duplicate of a previous one (http://mathoverflow.net/questions/5211/). I would leave it to the moderators to decide whether this should be closed. On the other hand, maybe it should be left open on the merit of the excellent answers and comments (@Emerton: Thanks!).

I was trying to understand the following exact sequence (for $X := \mathbb{P}^n_k$, where $k$ is an algebraically closed field): $$0 \to \Omega_X \to \mathcal{O}_X(-1)^{n+1} \to \mathcal{O}_X \to 0$$ The explanation (as in the proof of Theorem II.8.13 of Hartshorne) is given by some algebraic formulae, which I am having trouble digesting. I was trying to see in more geometric terms what is going on, and was somewhat successful in the case of the surjection $\mathcal{O}_X(-1)^{n+1} \to \mathcal{O}_X$, namely: we can regard $\mathcal{O}_X(1)$ (respectively $\mathcal{O}_X(-1)$) as the normal bundle $\mathcal{N}$ (respectively conormal bundle) of $X$ in $Z := \mathbb{P}^{n+1}_k$. Any global section of $\mathcal{O}_X(1)$ therefore induces a map (via evaluation) from $\mathcal{O}_X(-1)$ to $\mathcal{O}_X$. The above surjection comes from taking $n+1$ linearly independent global sections of $\mathcal{O}_X(1)$.

But I do not understand how to interpret the injection $\Omega_X \to \mathcal{O}_X(-1)^{n+1}$. How would someone 'naturally' come up with the algebraic formula?

Answer by Martin Brandenburg (2012-06-02):

Here is another (unknown?) way of obtaining the Euler sequence (though not really geometric): Since $\Omega^1_{\mathbb{P}}$ is a coherent sheaf, by Serre it has a "twisted presentation". For that one has to find some $k > 0$ such that $\Omega^1_{\mathbb{P}}(k)$ is generated by global sections. You will find that $k=2$ suffices, namely there is an epimorphism $\bigoplus_{u<v} \mathcal{O}(-2) \twoheadrightarrow \Omega^1$, which is given on $D_+(x_i)$ by mapping

$$x_i^{-2} e_{uv} \mapsto \dfrac{x_u}{x_i} \cdot d\left(\dfrac{x_v}{x_i}\right) - \dfrac{x_v}{x_i} \cdot d\left(\dfrac{x_u}{x_i}\right).$$

You can also compute the relations between these elements and arrive at the exact sequence

$$\bigoplus_{u<v<w} \mathcal{O}(-3) \to \bigoplus_{u<v} \mathcal{O}(-2) \to \Omega^1 \to 0.$$

But now the (graded) Koszul resolution of $R[x_0,\dotsc,x_n]/(x_0,\dotsc,x_n)$ (here $R$ is an arbitrary base ring; it doesn't have to be an algebraically closed field) yields the long exact sequence

$$\dotsc \to \bigoplus_{u<v<w} \mathcal{O}(-3) \to \bigoplus_{u<v} \mathcal{O}(-2) \to \bigoplus_{u} \mathcal{O}(-1) \to \mathcal{O} \to 0.$$

These combine to the Euler sequence $0 \to \Omega^1 \to \bigoplus_{u} \mathcal{O}(-1) \to \mathcal{O} \to 0$.

Answer by Georges Elencwajg (2012-06-02):

By dualizing and twisting we obtain the equivalent exact sequence of vector bundles
$$0\to \tau\to \mathbb{P}^n_k\times k^{n+1} \to T_{\mathbb{P}^n}(-1)\to 0 \quad (*)$$
The first morphism is just the inclusion of the tautological vector bundle $\tau$ into the trivial bundle and is geometrically transparent.
To understand the second morphism geometrically, fix a point $p\in \mathbb{P}^n_k$ and the corresponding line $l\subset k^{n+1}$ (I forgot to say I'm using the pre-Grothendieck definition of projective space as a set of lines).
At $p$ the exact sequence $(*)$ becomes the exact sequence of vector spaces $$0\to l\to k^{n+1} \to T_{\mathbb{P}^n}[p]\otimes l\to 0$$
Exactness then translates into the canonical isomorphism $$T_{\mathbb{P}^n}[p] = \mathcal{L}(l,k^{n+1}/l) \quad (**)$$

So the whole problem boils down to understanding $(**)$, i.e. understanding in a canonical way the fiber of the tangent bundle to $\mathbb{P}^n$ at a point $p=(a_0:\dots:a_n)\in \mathbb{P}^n$.
Here is the idea, inspired by differential geometry.

The "curve" $\epsilon \mapsto (a_0+\epsilon t_0,\dots,a_n+\epsilon t_n)\; (\epsilon^2=0)$ [algebraic geometers consider very short curves!] gives rise to a tangent vector $t\in T_{\mathbb{P}^n}[p]$.
The canonically associated linear map $\lambda_t:l\to k^{n+1}/l$ is then characterized by the condition $$\lambda_t(a_0,\dots,a_n)=\overline{(t_0,\dots,t_n)}$$
[Be careful that if you change the vector $(a_0,\dots,a_n)$ representing $p$ to a collinear vector $(a_0',\dots,a_n')$, you also have to change $(t_0,\dots,t_n)$ to another $(t_0',\dots,t_n')$.]
The details are in Dolgachev's online notes (http://www.math.lsa.umich.edu/~idolga/631.pdf), Example 13.2.
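As a quick consistency check on the Euler sequence (a standard computation, not taken from the thread above): determinants of vector bundles are multiplicative across short exact sequences, so taking top exterior powers recovers the canonical bundle of projective space.

```latex
% det(B) = det(A) \otimes det(C) for a short exact sequence 0 -> A -> B -> C -> 0,
% applied to the Euler sequence 0 -> Omega_X -> O(-1)^{n+1} -> O_X -> 0 on X = P^n:
\[
\omega_{\mathbb{P}^n}
  \;=\; \det \Omega_{\mathbb{P}^n}
  \;\cong\; \det\bigl(\mathcal{O}(-1)^{\oplus(n+1)}\bigr) \otimes \bigl(\det \mathcal{O}\bigr)^{-1}
  \;\cong\; \mathcal{O}(-n-1).
\]
```

This matches the usual formula for the canonical bundle of $\mathbb{P}^n$, which is some reassurance that the twists in the sequence are the right ones.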
https://www.physicsforums.com/threads/the-kubo-formula-of-hall-conductivity.752797/
# The Kubo Formula of Hall Conductivity

1. ### cometzir

Several papers (e.g. Di Xiao et al., "Berry phase effects on electronic properties", Rev. Mod. Phys. 82, 2010) mention a formula to calculate the Hall conductivity (see the attached picture). This formula is used in a two-dimensional system: v1 and v2 are velocity operators in the x and y directions, and Phi0 and PhiN are the ground-state and excited-state vectors. The papers claim that this formula can be derived from the Kubo identity, but I am not sure how this can be done, since the form of the Kubo formula is quite different from this expression. Could anyone help me with the derivation?

Attached: Figure.PNG

2. ### DrDu

What are v1 and v2?

3. ### cometzir

Sorry for the unclear description. This formula is used in a two-dimensional system. v1 and v2 are velocity operators in the x and y directions.

4. ### t!m

Which part is unfamiliar to you? The general linear response formula involves the time correlation function of the observable and the applied field. In a Hall measurement, the current is measured in the direction orthogonal to the applied field, which is why vx and vy show up. Writing the field in terms of the current density then gives the time integral of a current-current (or velocity-velocity) correlation function, and performing the time evolution in the basis of eigenstates (Lehmann representation) should give the final result.
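For intuition, the structure of that eigenstate sum can be played with numerically. The sketch below is an assumption-laden toy, not the model or normalization from any particular paper: H, v1, and v2 are just random Hermitian matrices, and overall constants (e, ħ, volume factors) are omitted. It evaluates the sum over excited states of Im⟨0|v1|n⟩⟨n|v2|0⟩/(E_n − E_0)², the Lehmann-representation form described above:

```python
import numpy as np

rng = np.random.default_rng(42)

def rand_herm(n):
    """Random Hermitian matrix, standing in for H and the velocity operators."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

dim = 4
H = rand_herm(dim)    # toy "Hamiltonian"
v1 = rand_herm(dim)   # velocity operator, x direction
v2 = rand_herm(dim)   # velocity operator, y direction

E, U = np.linalg.eigh(H)  # eigenvalues ascending; U[:, 0] is the ground state
psi0 = U[:, 0]

sigma_xy = 0.0
for n in range(1, dim):
    psin = U[:, n]
    # <0|v1|n><n|v2|0>, divided by the squared excitation energy
    num = (psi0.conj() @ v1 @ psin) * (psin.conj() @ v2 @ psi0)
    sigma_xy += 2.0 * num.imag / (E[n] - E[0]) ** 2

print(sigma_xy)
```

Swapping v1 and v2 flips the sign of the result, reflecting the antisymmetry of the Hall conductivity tensor.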
http://math.stackexchange.com/questions/354384/proof-that-the-sum-of-the-cubes-of-any-three-consecutive-positive-integers-is-di
# Proof that the sum of the cubes of any three consecutive positive integers is divisible by three.

So this question has less to do with the proof itself and more to do with whether my chosen method of proof is evidence enough. It can actually be shown by the Principle of Mathematical Induction that the sum of the cubes of any three consecutive positive integers is divisible by 9, but this is not what I intend to show and not what the author is asking. I believe that the PMI is not the author's intended path for the reader, hence why they asked to prove divisibility by 3. So I did a proof without using the PMI. But is it enough? It's from Beachy-Blair's Abstract Algebra, Section 1.1, Problem 21. This is not for homework; I took Abstract Algebra as an undergraduate. I was just going through some problems that I have yet to solve from the textbook for pleasure.

Question: Prove that the sum of the cubes of any three consecutive positive integers is divisible by 3.

So here's my proof:

Let $a \in \mathbb{Z}^+$. Define $$S(x) = x^3 + (x+1)^3 + (x+2)^3$$ So, $$S(a) = a^3 + (a+1)^3 + (a+2)^3$$ $$S(a) = a^3 + (a^3 + 3a^2 + 3a + 1) + (a^3 + 6a^2 + 12a + 8)$$ $$S(a) = 3a^3 + 9a^2 + 15a + 9$$ $$S(a) = 3(a^3 + 3a^2 + 5a + 3)$$ Hence, $3 \mid S(a)$. QED

- It’s just fine, if you want to prove that $S(a)$ is divisible by $3$. – Brian M. Scott Apr 8 '13 at 0:09 Simpler: $\rm\ mod\ 3\!:\ 0^3\!+1^3\!+2^3\equiv 0 + 1 + 8\equiv 0\ \$ – Math Gems Apr 8 '13 at 0:10 Even simpler if you use $2\equiv -1$. – Berci Apr 8 '13 at 0:10 @MathGems: So does the modular arithmetic, if one hasn’t yet encountered it. – Brian M. Scott Apr 8 '13 at 0:12 @Derek For divisibility problems your first instinct should be modular arithmetic. It often simplifies such problems, because working with equations (congruences) is usually simpler than working with relations (divisibility). – Math Gems Apr 8 '13 at 0:51 Your solution is fine, provided you intended to prove that the sum is divisible by $3$.
If you intended to prove divisibility by $9$, then you've got more work to do! If you're familiar with working $\pmod 3$, note @Math Gems' comment/answer/alternative. (Though to be honest, I would have proceeded as you did, totally overlooking the value of Math Gems' approach.) - Yes, I intended to show divisibility by 3. My comment was just that it is possible to show by 9 as well, but that was not what I intended to show. – Derek W Apr 8 '13 at 0:26 Well, you did just fine, and as I mentioned, I would have proceeded in the same direction ;-) – amWhy Apr 8 '13 at 0:28 Nice Amy $_{+}^{+}..._{+}^{+}$ – B. S. Apr 8 '13 at 6:01 If you are familiar with modular arithmetic, there are even much faster proofs; see the comments. Or, instead of $x,x+1,x+2$, you could start out from $x-1,x,x+1$. But no more evidence is needed than yours. - You have actually done enough work to show that the sum of the $3$ cubes is divisible by $9$, not just by $3$, but you haven't explained that step: Note that (mod $9$), $S(a)$ is the same as $3(a^{3}+5a)$. But $a^{3}-a$ is divisible by $3$ for all integers $a$, as you can see by checking $(3b-1)^{3}$ and $(3b+1)^{3}$ for integers $b$. Hence $S(a) = 18a + 9c$ for some integer $c$. - Note $\rm\,\ x^3\!+\!(x\!+\!1)^3+(x\!+\!2)^3 = 3(x^3\!-\!x) + 9(x\!+\!1)^2\ \$ – Math Gems Apr 8 '13 at 1:15 I was taking where the proposer had got to as my starting point, and commenting on what he had achieved, not trying to point out the quickest proof. – Geoff Robinson Apr 8 '13 at 6:19 As was already mentioned, modular arithmetic is the most efficient way to solve this problem but, if you really want to avoid it, you can still get by with slightly easier computation (smaller coefficients). Introduce a name, say $y$, for the middle one of the three integers, rather than the smallest. So you'd add $(y-1)^3+y^3+(y+1)^3$, which leads to a fair amount of cancellation and no coefficients bigger than 6.
- Any 3 consecutive numbers will always be of the form $3k-1$, $3k$, $3k+1$. Here $a \equiv x \bmod y$ means that $x$ is the remainder obtained when $a$ is divided by $y$. So $3k-1 \equiv -1 \bmod 3$, and hence $(3k-1)^3 \equiv -1 \bmod 3$. Similarly, $(3k)^3 \equiv 0 \bmod 3$ and $(3k+1)^3 \equiv 1 \bmod 3$. Therefore $(3k-1)^3 + (3k)^3 + (3k+1)^3 \equiv -1 + 0 + 1 \equiv 0 \bmod 3$. Hence it is always divisible by 3. - $$(a+b+c)^3 = a^{3} + b^{3} + c^{3} + 3 a^{2} b + 3 a^{2} c + 3 a b^{2} + 3 a c^{2} + 3 b^{2} c + 3 b c^{2} + 6 a b c$$ $$= a^{3} + b^{3} + c^{3} + 3f(a,b,c)$$ Let $a=k-1$, $b=k$, $c=k+1$; then $a,b,c$ represent consecutive numbers. So we have $$(a+b+c)^3=(k-1+k+k+1)^3=(3k)^3=27k^3$$ Thus $$a^{3} + b^{3} + c^{3} = (a+b+c)^3 - 3\cdot f(a,b,c) = 27k^3-3\cdot f(a,b,c)$$ $$\Rightarrow a^{3} + b^{3} + c^{3}=3\cdot(9k^3-f(a,b,c))$$
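The algebra in these answers is easy to spot-check numerically with a brief brute-force sketch:

```python
# S(x) = x^3 + (x+1)^3 + (x+2)^3 = 3(x^3 + 3x^2 + 5x + 3), as derived above.
def S(x):
    return x**3 + (x + 1)**3 + (x + 2)**3

for x in range(1, 1000):
    assert S(x) % 3 == 0   # the claim proved in the question
    assert S(x) % 9 == 0   # the stronger divisibility mentioned by the poster

print(S(1))  # 1 + 8 + 27 = 36 = 9 * 4
```

The loop silently passing for the first thousand cases is of course no proof, but it is a cheap sanity check on both divisibility claims.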
http://www.real-statistics.com/matrices-and-iterative-procedures/determinants-and-simultaneous-linear-equations/
# Determinants and Simultaneous Linear Equations

Definition 1: The determinant, det A, also denoted |A|, of an n × n square matrix A is defined recursively as follows: If A is a 1 × 1 matrix [a] (i.e. a scalar) then det A = a. Otherwise, for any row i, det A = Σⱼ (−1)^(i+j) · aij · det Aij, where Aij is matrix A with row i and column j removed.

Note that if A = $\left[ \! \begin{array}{cc} 3 & 5 \\4 & 0 \end{array} \! \right]$, then we use the notation $\left| \!\begin{array}{cc} 3 & 5 \\4 & 0 \end{array} \!\right|$ for det A.

Excel Functions: Excel provides the following function for calculating the determinant of a square matrix:

MDETERM(A): If A is a square array, then MDETERM(A) = det A. This is not an array function. The supplemental function DET(A) provides equivalent functionality.

Property 1:

1. det A^T = det A
2. If A is a diagonal matrix, then det A = the product of the elements on the main diagonal of A

Proof: Both of these properties are a simple consequence of Definition 1.

Property 2: $\left| \!\begin{array}{cc} a & b \\c & d \end{array} \!\right|$ = ad – bc

Example 1: Calculate det A where

From Definition 1 and Property 2 it follows that

Of course we can get the same answer by using Excel’s function MDETERM(A).

Property 3: If A and B are square matrices of the same size then det AB = det A ∙ det B

Property 4: A square matrix A is invertible if and only if det A ≠ 0. If A is invertible then det A^(-1) = 1/det A.

The first assertion is equivalent to saying that a square matrix A is singular if and only if det A = 0.

Property 5: Rules for evaluating determinants:

1. The determinant of a triangular matrix is the product of the entries on the diagonal.
2. If we interchange two rows, the determinant of the new matrix is the negative of the old one.
3. If we multiply one row by a constant, the determinant of the new matrix is the determinant of the old one multiplied by the constant.
4. If we add one row to another one multiplied by a constant, the determinant of the new matrix is the same as the old one.
Observation: The rules in Property 5 are sufficient to calculate the determinant of any square matrix. The idea is to transform the original matrix into a triangular matrix and then use rule 1 to calculate the value of the determinant. We now present an algorithm based on Property 5 for calculating det A, where A = [aij] is an n × n matrix. Start by setting the value of the determinant to 1, and then perform steps 1 to n as follows.

Step k – part 1(a): If akk ≠ 0, multiply the current value of the determinant by akk and then divide all the entries in row k by akk (rule 3 of Property 5).

Step k – part 1(b): If akk = 0, exchange row k with any row m below it (i.e. k < m ≤ n) for which amk ≠ 0, multiply the current value of the determinant by -1 (rule 2) and then perform step 1(a) above. If no such row exists then terminate the algorithm and return the value of 0 for the determinant.

Step k – part 2: For every row m below row k, add –amk times row k to row m (rule 4). This guarantees that aij = 0 for all i > k and j ≤ k.

After the completion of step n, we will have a triangular matrix whose diagonal contains all 1s, and so by rule 1, the determinant is equal to the current value of the determinant.

Example 2: Using Property 5, find

We present the steps looking from left to right and then top to bottom in Figure 1. For each step the rule used is specified as well as the multiplier of the determinant calculated up to that point.

Figure 1 – Calculating the determinant in Example 2

This shows that the determinant is -5, the same answer given when using Excel’s MDETERM function.

Observation: In step k – part 1(b) of the above procedure we exchange two rows if akk = 0. Given that we need to deal with round-off errors, what happens if akk is small but not quite zero? In order to reduce the impact of round-off errors, we should modify step k – part 1 as follows:

Step k – part 1: Find m ≥ k such that the absolute value of amk is largest. If this amk ≈ 0 (i.e. |amk| < ϵ where ϵ is some predefined small value) then terminate the procedure. If m > k then exchange rows m and k.

Definition 2: A set of n linear equations in k unknowns xj can be viewed as the matrix equation AX = C where A is the n × k matrix [aij], X is the k × 1 column vector [xj] and C is the n × 1 column vector [cj].

Property 6: If A is a square matrix (i.e. the number of equations is equal to the number of unknowns), the equation AX = C has a unique solution if and only if A is invertible (i.e. det A ≠ 0), and in this case the unique solution is given by X = A^(-1) C.

Property 7 (Cramer’s Rule): If the square matrix A is invertible, the unique solution to AX = C is given by xj = det Aj / det A, where Aj is A with the jth column replaced by the entries of C.

Example 3: Solve the following linear system using Cramer’s Rule:

In Figure 2, we calculate det A and det Aj for each j.

Figure 2 – Calculating a determinant using Cramer’s rule

It now follows that x = -6/9 = -2/3, y = 3/9 = 1/3 and z = 0/9 = 0. Per Property 6, we can get the same result by calculating A^(-1) C, which can be carried out in Excel using the formula =MMULT(MINVERSE(A), C). For Example 3, this yields

Definition 3: When C from Definition 2 is not the null matrix, the linear equations are called heterogeneous. When C = Ο the linear equations are called homogeneous. In this case Ο is a solution of AX = Ο, called the trivial solution.

Property 8: If A is invertible then X = Ο is the only solution of AX = Ο.

Proof: This follows from Property 6.

Observation: When A is not invertible (i.e. det A = 0) any scalar multiple of a non-trivial solution to the homogeneous equation AX = Ο is also a solution. To find such a solution we can use the Gaussian Elimination method, a method similar to the one we used to calculate the determinant of a square matrix based on Property 5. This approach works for any A (whether square or not, and whether invertible or not).
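The row-reduction algorithm described above translates directly into code. Here is a minimal Python sketch (with the partial-pivoting modification from the Observation); this is an illustration, not the Real Statistics DET/MDETERM implementation itself:

```python
# Determinant via reduction to triangular form, following the rules of
# Property 5: track sign flips on row swaps, factor out each pivot.
def det(A, eps=1e-12):
    A = [row[:] for row in A]          # work on a copy
    n = len(A)
    d = 1.0
    for k in range(n):
        # partial pivoting: pick the row at or below k with the largest |entry|
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        if abs(A[p][k]) < eps:
            return 0.0                  # singular matrix, so det = 0
        if p != k:
            A[k], A[p] = A[p], A[k]     # rule 2: a row swap flips the sign
            d = -d
        d *= A[k][k]                    # rule 3: factor the pivot out of row k
        A[k] = [x / A[k][k] for x in A[k]]
        for r in range(k + 1, n):       # rule 4: clear the entries below the pivot
            m = A[r][k]
            A[r] = [a - m * b for a, b in zip(A[r], A[k])]
    return d

print(det([[3, 5], [4, 0]]))   # ad - bc = 3*0 - 5*4 = -20
```

For the 2 × 2 matrix from the Note under Definition 1 this returns -20, agreeing with the ad – bc formula of Property 2.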
Definition 4: If A is an n × k matrix and B is an n × m matrix then the augmented matrix A|B is an n × (k+m) matrix whose first k columns are identical to the columns in A and whose remaining m columns are identical to the columns in B.

Property 9: If A′ and C′ are derived from A and C based on any one of the following transformations, then the equations AX = C and A′X = C′ have the same solutions.

1. Interchange of any two rows
2. Multiplication of any row by a constant
3. Addition of any row multiplied by a constant to another row

Observation: Typically we apply the above transformations to the augmented matrix A|C.

Definition 5: The Gaussian Elimination Method is a way of solving linear equations and is based on the transformations described in Property 9. Suppose A and C are as described in Definition 2.

Step 0 – set i = 1 and j = 1.

We now apply the following series of transformations to the augmented matrix A|C (step 1 through step p, where p is the smaller of n and k):

Step i – part 1: Find r ≥ i such that the absolute value of arj is largest. If arj ≈ 0 (i.e. |arj| < ϵ where ϵ is some predefined small value), then if j = k terminate the procedure; otherwise replace j by j + 1 and repeat step i. If arj is not ≈ 0 and r > i then exchange rows r and i (rule 1).

Step i – part 2: Divide all the entries in row i by aij (rule 2).

Step i – part 3: For every row r below row i, add –arj times row i to row r (rule 3). This guarantees that arc = 0 for all r > i and c ≤ j.

Observation: For non-homogeneous equations (i.e. C ≠ Ο) there are three possibilities: there are an infinite number of solutions, there are no solutions, or there is a unique solution. For homogeneous equations (i.e. C = Ο) there are two possibilities: there are an infinite number of solutions or there is a unique solution, namely the trivial solution where X = Ο.

Example 4: Solve the following linear system using Gaussian Elimination:

Figure 3 displays the steps in the Gaussian Elimination process for Example 4.
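The procedure of Definition 5, for the square case with a unique solution, can be sketched in a few lines of Python. The system below is a made-up illustration (Example 4's actual equations appear only in the original figure):

```python
import numpy as np

# Gaussian elimination on the augmented matrix A|C with partial pivoting,
# followed by back-substitution. Mirrors the epsilon cutoff used by ELIM/LINEQU.
def solve(aug, eps=1e-4):
    M = np.array(aug, dtype=float)
    n = M.shape[0]
    for i in range(n):                       # forward elimination
        p = i + np.argmax(np.abs(M[i:, i]))  # row with the largest pivot candidate
        if abs(M[p, i]) < eps:
            raise ValueError("no unique solution")
        M[[i, p]] = M[[p, i]]                # rule 1: row interchange
        M[i] /= M[i, i]                      # rule 2: scale pivot row to 1
        for r in range(i + 1, n):
            M[r] -= M[r, i] * M[i]           # rule 3: clear entries below the pivot
    x = np.zeros(n)                          # back-substitution
    for i in range(n - 1, -1, -1):
        x[i] = M[i, -1] - M[i, i + 1:n] @ x[i + 1:n]
    return x

# 2x + y - z = 1,  x + 3y + 2z = 13,  x - y + z = 2
aug = [[2, 1, -1, 1], [1, 3, 2, 13], [1, -1, 1, 2]]
print(solve(aug))  # [1. 2. 3.]
```

The epsilon cutoff plays the same role as the 0.0001 default in the ELIM and LINEQU functions described below: entries smaller than it are treated as zero rather than used as pivots.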
Figure 3 – Solving linear equations using Gaussian elimination Since A transforms into the identity matrix we know that the transform of C is the unique solution to the system of linear equations, namely x = 0, y = 2 and z = -1. Note that we get the same result by calculating X = A-1 C. Example 5: Solve the following homogeneous linear system using Gaussian Elimination: By Gaussian Elimination from Figure 4 we see that the only solution is the trivial solution: Figure 4 – Solving homogeneous linear equations via Gaussian elimination Example 6: Solve the following homogeneous linear system using Gaussian Elimination: This time Gaussian Elimination produces a row with all zeros (see Figure 5), and the number of non-zero rows = 2 < 3 unknowns. Thus there are an infinite number of solutions. Figure 5 – Finding solutions to homogeneous linear equations The solutions can take the form x = -2.5t, y = .5t, z = t for any value of t. Observation: As we can see from the above examples, a homogeneous equation AX = O, where A is an m × n matrix, has a unique solution when there are n non-zero rows after performing Gaussian Elimination. Otherwise the equation has an infinite number of solutions. Real Statistics Excel Functions: We provide the following supplemental Excel array function for carrying out the Gaussian Elimination procedure. ELIM(R1, prec): an array function which outputs the results of Gaussian Elimination on the augmented matrix found in the array R1. The shape of the output is the same as the shape of R1. LINEQU(R1, prec): an array function which returns an n × 1 column vector with the unique solution to equations defined by the augmented m × (n+1) matrix found in array R1; returns a vector consisting of #N/A! if there is no solution and a vector consisting of #NUM! if there are an infinite number of solutions. By default, each of these functions assumes that an entry with absolute value less than 0.0001 is equivalent to zero. 
This is necessary since small values are not treated as zero in the Gaussian elimination algorithm described above. You can change this default value to something else by inserting a second parameter in either of these functions: e.g. ELIM(R1, prec) or LINEQU(R1, prec). Thus ELIM(R1) = ELIM(R1, 0.0001).

Real Statistics Data Analysis Tool: The Solve Set of Linear Equations data analysis tool contained in the Real Statistics Resource Pack provides equivalent functionality to LINEQU and ELIM. To use this tool, enter Ctrl-m and select Solve Set of Linear Equations from the menu. When a dialog box appears, fill in the Input Range (with the same range as R1 above). Selecting Show solution only is equivalent to LINEQU(R1), while not clicking on this option is equivalent to ELIM(R1).

Observation: Gaussian Elimination can also be used to invert a square n × n matrix A by applying the above procedure to A|I_n. If the procedure terminates before n steps have been completed then A is not invertible. If the procedure terminates after n steps (in which case A′ = I_n) then C′ = A⁻¹.

Example 7: Use Gaussian Elimination to invert the matrix

The result is shown in Figure 6.

Figure 6 – Inverting a matrix using Gaussian elimination

The fact that A transforms into the identity matrix indicates that A is invertible. The inverse is given by the transformation of the identity matrix, namely

This is the same result as using the Excel formula MINVERSE(A).

### 5 Responses to Determinants and Simultaneous Linear Equations

1. Ireej says:
Great blog! Do you have any tips for aspiring writers? I’m hoping to start my own website soon but I’m a little lost on everything. Would you advise starting with a free platform like WordPress or go for a paid option? There are so many options out there that I’m completely overwhelmed .. Any ideas? Thanks a lot!

• Charles says:
Ireej,
To gain some experience I started by putting the website up on the free WordPress platform.
It worked for a while until I found that I needed some of the tools that are not available on the free platform. Charles 2. Eugene says: As a side note – if we accept “Definition 1” then “Property 2” should be read as $\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad – bc$. Not “ac – bd”. • Charles says: Eugene, Thanks for catching this typo. I have now corrected the formula on the webpage. Charles 3. Allamah Ali Mele says: Really this site have help me a lot and i am really appreciate the efforts of all those who contributed to development of this site. I am looking forward to send me more problems and solutions on determinant and matrices to my email address to enable me study at home because, presently i am student.
https://www.careerstoday.in/physics/adiabatic-process-derivation
Adiabatic Process Derivation

An adiabatic process is a thermodynamic process in which there is no heat transfer into or out of the system; such a process is generally obtained by surrounding the entire system with a strong insulating material.

Adiabatic process examples

• The vertical flow of air in the atmosphere
• The expansion or contraction of an interstellar gas cloud
• A turbine, which uses heat from a source to produce work

Adiabatic process derivation

The adiabatic condition can be derived from the first law of thermodynamics, which relates the change in internal energy dU to the work dW done by the system and the heat dQ added to it:

dU = dQ - dW

dQ = 0 by definition. Therefore,

0 = dQ = dU + dW

The work done dW for a change in volume V by dV is given as PdV. The change in internal energy dU can be written in terms of the specific heat, defined as the heat added per unit temperature change per mole of a substance. Since the heat that is added increases the internal energy U, the specific heat at constant volume is given as:

$C_{v}=\frac{dU}{dT}\frac{1}{n}$

Where, n: number of moles

Therefore, $0=nC_{v}dT+PdV$ (eq. 1)

From the ideal gas law, we have

nRT = PV (eq. 2)

Differentiating both sides,

nRdT = PdV + VdP (eq. 3)

By combining equation 1 and equation 3, we get

$-PdV=nC_{v}dT=\frac{C_{v}}{R}(PdV+VdP)$

$0=(1+\frac{C_{v}}{R})PdV+\frac{C_{v}}{R}VdP$

$0=\frac{R+C_{v}}{C_{v}}(\frac{dV}{V})+\frac{dP}{P}$

Since the heat capacity at constant pressure satisfies $C_{p}=C_{v}+R$, this becomes

$0=\gamma (\frac{dV}{V})+\frac{dP}{P}$

Where the ratio of specific heats ɣ is given as:

$\gamma\equiv \frac{C_{p}}{C_{v}}$

From calculus we have, $d(lnx)=\frac{dx}{x}$, so

$0=\gamma d(lnV)+d(lnP)$

$0=d(\gamma lnV+lnP)=d(lnPV^{\gamma })$

$PV^{\gamma }=constant$

Hence, this equation holds for an adiabatic process in an ideal gas.

Adiabatic index

The adiabatic index is also known as the heat capacity ratio and is defined as the ratio of the heat capacity at constant pressure Cp to the heat capacity at constant volume Cv.
It is also known as the isentropic expansion factor and is denoted by ɣ.

$\gamma =\frac{C_{p}}{C_{v}}=\frac{c_{p}}{c_{v}}$

Where, C: heat capacity c: specific heat capacity

The adiabatic index finds application in reversible thermodynamic processes involving ideal gases; the speed of sound in a gas also depends on the adiabatic index.
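The result PV^γ = constant can be sanity-checked numerically: integrating 0 = γ dV/V + dP/P step by step should reproduce the closed form. This is a sketch of my own, not part of the source; the monatomic value γ = 5/3 and the starting state are illustrative.

```python
# Euler-integrate dP = -gamma * P * dV / V during a compression and
# compare the final pressure with P0 * (V0/V)**gamma from PV^gamma = const.
# Values are illustrative: 1 atm, 1 m^3, monatomic gas (gamma = 5/3).

gamma = 5.0 / 3.0
P, V = 101325.0, 1.0
V_end, steps = 0.5, 200_000       # compress to half the volume
dV = (V_end - V) / steps

for _ in range(steps):
    P += -gamma * P / V * dV      # from 0 = gamma*dV/V + dP/P
    V += dV

exact = 101325.0 * (1.0 / 0.5) ** gamma
print(P, exact)                   # the two values agree closely
```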
https://www.physicsforums.com/threads/finding-electric-field-on-rod-or-ring.103998/
Finding Electric Field on rod or ring? 1. Dec 13, 2005 Its from an example in the book, and it doesnt seem to make sense, A rod of length l has a uniform positive charge per unit length (lambda) and a total charge Q. Calculate the electric field at a point P that is located along the long axis of the rod and a distance a from one end. So then the example takes a small part of the rod, dx which has charge of dq, and is distance x from point P. dq = (lambda)dx and dE = ke dq/x^2 or ke lambda dx / x^2 fine so far. Now the example says we must sum up the contributions of all the segments. it becomes an integral E = (integral) from a to l+a of ke lambda dx/x^2 the example breaks the dq component out into ke lambda [ - 1/x ] from a to l+a Im somewhat confused now. then it goes on, ke lambda(1/a - 1/l+a) = keQ/a(l+a) ??? what??!! okay it kind of makes sense, the total charge divided by length, but the last part there is a divide by zero error to my thought process, multiplying an item with example values: Lambda(1/a - 1/b) or Q/l(1/a - 1/b) might give (Q/l * 1/a) - (Q/l * 1/b) if doing the same thing for the actual values, should give Q/la - Q/la + l^2 ?? no? reducing it down to Q/a(a+l) ? the book doesnt explain how it arrived at this. Can anyone give an example of calculating the Field from a charged rod? apparently its the same in a ring from the x axis but using vectors, but this concept seems tough, thanks for any explanations 2. Dec 13, 2005 Staff: Mentor I assume you understand and agree that that's the total field at the point in question. What's the anti-derivative of $1/x^2$? That's where the $1/x$ comes from. $$k\lambda (\frac{1}{a} - \frac{1}{(l + a)}) = k\lambda (\frac{l+a}{a(l+a)} - \frac{a}{a(l+a)}) = k\lambda l \frac{1}{a(l+a)} = \frac{k Q}{a(l+a)}$$ Last edited: Dec 13, 2005
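The integral Doc Al evaluates can also be checked numerically (a sketch of my own, not from the thread; the charge, length, and distance values are arbitrary): summing the contributions k·λ·dx/x² over small segments from x = a to x = a + l should reproduce kQ/(a(l+a)).

```python
# Midpoint-rule sum of dE = k*lam*dx / x^2 from x = a to x = a + l,
# compared against the closed form E = k*Q / (a*(a + l)).
# Numeric values are arbitrary (not from the thread).

k = 8.99e9            # Coulomb constant, N m^2 / C^2
Q, l, a = 2e-6, 0.5, 0.2
lam = Q / l           # uniform charge per unit length

n = 200_000
dx = l / n
E_numeric = sum(k * lam * dx / (a + (i + 0.5) * dx) ** 2 for i in range(n))
E_exact = k * Q / (a * (a + l))
print(E_numeric, E_exact)
```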
https://infoscience.epfl.ch/record/162729
Infoscience Journal article

# Experimental investigation into segregating granular flows down chutes

We experimentally investigated how a binary granular mixture made up of spherical glass beads (size ratio of 2) behaved when flowing down a chute. Initially, the mixture was normally graded, with all the small particles on top of the coarse grains. Segregation led to a grading inversion, in which the smallest particles percolated to the bottom of the flow, while the largest rose toward the top. Because of diffusive remixing, there was no sharp separation between the small-particle and large-particle layers, but a continuous transition. Processing images taken at the sidewall, we were able to measure the evolution of the concentration and velocity profiles. These experimental profiles were used to test a recent theory developed by Gray and Chugunov [J. Fluid Mech. 569, 365 (2006)], who derived a nonlinear advection-diffusion equation that describes segregation and remixing in dense granular flows of binary mixtures. We found that this theory was able to provide a consistent description of the segregation/remixing process under steady uniform flow conditions. To obtain the correct depth-averaged concentration far downstream, it was very important to use an accurate approximation to the downstream velocity profile through the avalanche depth. The S-shaped concentration profile in the far field provided a useful way of determining what we refer to as the Péclet number for segregation, a dimensionless number that quantifies how large the segregation is compared to diffusive remixing. While the theory was able to closely match the final fully developed concentration profile far downstream, there were some discrepancies in the inversion region (i.e., the region in which the mixing occurs). The reasons for this are not clear. The difficulty of setting up the experiment with both well-controlled initial conditions and a steady uniform bulk flow field is one of the most plausible explanations. Another interesting lead is that the flux of segregating particles, which was assumed to be a quadratic function of the concentration in small beads, takes a more complicated form.
https://getrevising.co.uk/revision-tests/asch_opinions_and_social_pressure
# ASCH- OPINIONS AND SOCIAL PRESSURE

• Created by: hafsur
• Created on: 29-12-14 16:50

G O T K M K P W J M P P E C J S M V F O D F Q V J R R K F G M O D W A A E A P W T P K T A I B K T O N U D B D U G T F X E K W P E D U V J I E I N F Q N G N A E E I U L N K I K S T V S K D H O V T X R P W V V Y Y E I D N R J N A A I W T N P E A V R B Y I M U L E U K R M N K A H E R D P U E O O M D D S V T Y I N E D C S M A E C D T X R U K V U I H S A O R A M E E V F L E N T D J I N O T F M C I E H C H G X N P C I U Q L P J U A U F A S A F K Q D C O C E P U V Q S O G T L R A I L G N T U R C J P A E C U H W I I P G R C I A N Q J I N E T K K G D I S B T A C A E S L H N L T C L I H I L F M T M N R Q L D M Y T N A I O Y O P U C U T Q A A T C N Y E L C O U C G Q N U I W B F H B U N L L R R D O N S A W Y R P C G H K V J Q E H E U E F A C I L E M Q R H O O V E M U R S M J I G S W V I T H E B K U K S J K F L I R W L H B N U L P P A X T E U Q E D A F Q U J M S C C M O V R Q C W F

### Clues

• A strength of this study is that both qualitative and ? data was collected (12)
• A variation that Asch carried out to test the effect of social support (8, 7)
• Asch did this to gain an insight as to why the Ps conformed and how they felt (9)
• It is a real world application of conformity (4, 8, 6)
• The main ethical issue with this study (9)
• The type of internal validity that states it isn't a realistic task to measure conformity (7, 7)
• What the Ps were told the study was about (6, 9)
• What were the 12/18 trials called? (8)
• What were the fake Ps called (12)
https://travis.giggy.com/miki/115.020.30.06.html
### GIGAMIND Folder: 115 CFA File: 115.020.30.06 Reading 8 - 6. Expected Value, Variance, and Standard Deviation of a Random Variable # 6. Expected Value, Variance, and Standard Deviation of a Random Variable ## l. calculate and interpret the expected value, variance, and standard deviation of a random variable and of returns on a portfolio; What is the definition of "expected value" and what is the mathematical equation? - Expected value is the probability weighted average of the possible outcomes. More probable outcomes will have more weight. - In finance, expected value is used a lot. E.g. "expected value of earnings per share", "dividend per share", "rate of return", etc - $$E(X) = P(x_1)x_1 + P(x_2)x_2 + ... + P(x_n)x_n$$ What is the variance of an expected value (probability weighted average)? - The variance of a random variable is the expected value (the probability-weighted average) of squared deviations from the random variable's expected value. - $$\sigma^2(X) = E([X - E(X)]^2)$$ What does a variance = 0 mean? Variance of zero means that the outcome is certain. Your scatterplot points all fall directly on the regression line, given the data you could exactly predict the outcome every time. What does a variance > 0 mean? Variance greater than zero means that there is dispersion in the outcomes. The higher the number the more dispersion. What are variance and standard deviation measures of? They are both measures of dispersion in the possible outcomes of the expected value of a random variable. What is the math formula for total probability rule for expected value? (Think of the concert example) - $$E(X) = E(X | S_1)P(S_1) + E(X | S_2)P(S_2) + ... + E(X | S_n)P(S_n)$$ - X = revenue - S_1 = favorable weather - S_2 = moderate weather - S_3 = bad weather - Reads: The expected outcome of X equals the expected outcome of X given that scenario S_1 happens, times the probability of scenario S_1 happening, plus ... (until S_n) • CFA
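The definitions above are easy to sketch in code. The weather scenario probabilities and revenue numbers below are made up for illustration, echoing the concert example.

```python
# Probability-weighted expected value and variance of a random variable.
# E(X) = sum of P(x_i) * x_i; sigma^2(X) = sum of P(x_i) * (x_i - E(X))^2.
# The scenario probabilities and revenues are invented for illustration.

outcomes = [  # (probability, revenue)
    (0.5, 100.0),   # favorable weather
    (0.3,  60.0),   # moderate weather
    (0.2,  20.0),   # bad weather
]

mean = sum(p * x for p, x in outcomes)                 # E(X)
var = sum(p * (x - mean) ** 2 for p, x in outcomes)    # sigma^2(X)
sd = var ** 0.5                                        # standard deviation

print(mean, var, sd)
```

Note that the probabilities must sum to 1 for the weighted average to be a valid expected value.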
http://math.stackexchange.com/questions/367291/number-of-solutions-of-px-1-x-n-0-in-mathbbf-qn
# Number of solutions of $P(x_1, …, x_n) = 0$ in $\mathbb{F}_q^n$

I have this exercise: $q = p^r$ (it is not clearly specified in the exercise that $r = n$), and $\mathbb{F}_q$ is the finite field of cardinality $q$. Let $P \in \mathbb{F}_q[X_1, ..., X_n]$ be of degree $d < n$. Show that $p$ divides the number of solutions in $\mathbb{F}_q^n$ of $P(x_1, ..., x_n) = 0$.

Hint: We can consider the polynomial $P^{q-1}$ and use the fact that if $r < q-1$, then $\sum_{x\in\mathbb{F_q}} x^r = 0$.

But I don't see how to proceed...

-

Hints: For all $x_1,x_2,\ldots,x_n\in \mathbb{F}_q$ we either have $P(x_1,x_2,\ldots,x_n)=0$ or $P(x_1,\ldots,x_n)^{q-1}=1$. This is because all non-zero elements of $\mathbb{F}_q$ are roots of the equation $x^{q-1}=1$. Consequently the number of zeros of $P$ in $\mathbb{F}_q^n$, when viewed as an element of $\mathbb{F}_p$, i.e. reduced modulo $p$, is $$\sum_{x_1,x_2,\ldots,x_n\in\mathbb{F}_q}P(x_1,x_2,\ldots,x_n)^{q-1}.$$ You are expected to find the value of this sum. I recommend doing it for each term in the multinomial expansion of $P(x_1,\ldots,x_n)^{q-1}$ separately.

-

It is $- \sum_{x_1,x_2,\ldots,x_n\in\mathbb{F}_q}P(x_1,x_2,\ldots,x_n)^{q-1}$, isn't it? But I don't manage to show that it equals $0$ mod $p$: the monomials are of degree $\leqslant (q-1)d$, which can be a multiple of $q-1$. –  Arnaud Apr 20 '13 at 14:00

@Arnaud: You are on the right track. Next use the assumption $d<n$, and do a partial sum over a single variable. –  Jyrki Lahtonen Apr 20 '13 at 16:57

I think that I've got the solution: if we expand $P(x_1,x_2,\ldots,x_n)^{q-1}$, each term looks like $x_1^{i_1}...x_n^{i_n}$, with $i_1 + ... + i_n \leqslant (q-1)d < (q-1)n$. So there is a $k$ such that $i_k < q-1$. So the partial sum over $i_k$ is zero mod $p$, and we have the conclusion. Is it right? –  Arnaud Apr 20 '13 at 17:09

That's it! Well done, Arnaud! –  Jyrki Lahtonen Apr 20 '13 at 17:11

I believe this trick is due to either Warning or Chevalley.
At least I learned about it in the context of Chevalley-Warning's theorem. –  Jyrki Lahtonen Apr 20 '13 at 17:14
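The statement is easy to sanity-check by brute force for small cases. This is my own illustration (not from the thread), with q = p = 5 and a degree-2 polynomial in 3 variables; any choice with d < n works.

```python
# Count the zeros of P(x, y, z) = x*y + 2*z^2 + x + 3 over F_5.
# Since deg P = 2 < 3 unknowns, Chevalley-Warning guarantees the
# number of zeros is divisible by p = 5.
from itertools import product

p = 5

def P(x, y, z):
    return (x * y + 2 * z * z + x + 3) % p

zeros = sum(1 for x, y, z in product(range(p), repeat=3) if P(x, y, z) == 0)
print(zeros, zeros % p)
```

Replacing P with any polynomial of degree at least n (e.g. x^3 + 1 in three variables over F_5 is fine, but a generic cubic need not be) can break the divisibility, which is why the hypothesis d < n matters.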
http://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages
# Pumping lemma for regular languages

In the theory of formal languages, the pumping lemma for regular languages describes an essential property of all regular languages. Informally, it says that all sufficiently long words in a regular language may be pumped — that is, have a middle section of the word repeated an arbitrary number of times — to produce a new word that also lies within the same language.

Specifically, the pumping lemma says that for any regular language L there exists a constant p such that any word w in L with length at least p can be split into three substrings, w = xyz, where the middle portion y must not be empty, such that the words xz, xyz, xyyz, xyyyz, … constructed by repeating y an arbitrary number of times (including zero times) are still in L. This process of repetition is known as "pumping". Moreover, the pumping lemma guarantees that the length of xy will be at most p, imposing a limit on the ways in which w may be split. Finite languages trivially satisfy the pumping lemma by having p equal to the maximum string length in L plus one.

The pumping lemma is useful for disproving the regularity of a specific language in question. It was first proved by Dana Scott and Michael Rabin in 1959,[1] and rediscovered shortly after by Yehoshua Bar-Hillel, Micha A. Perles, and Eli Shamir in 1961, as a simplification of their pumping lemma for context-free languages.[2][3]

## Formal statement

Let L be a regular language. Then there exists an integer p ≥ 1 depending only on L such that every string w in L of length at least p (p is called the "pumping length"[4]) can be written as w = xyz (i.e., w can be divided into three substrings), satisfying the following conditions:

1. |y| ≥ 1;
2. |xy| ≤ p;
3. for all i ≥ 0, xy^i z ∈ L.

y is the substring that can be pumped (removed or repeated any number of times, and the resulting string is always in L).
(1) means the loop y to be pumped must be of length at least one; (2) means the loop must occur within the first p characters. |x| must be smaller than p (a consequence of (1) and (2)); apart from that, there is no restriction on x and z.

In simple words, for any regular language L, any sufficiently long word w (in L) can be split into 3 parts, i.e. w = xyz, such that all the strings xy^k z for k ≥ 0 are also in L.

Below is a formal expression of the pumping lemma.

$\begin{array}{l} (\forall L\subseteq \Sigma^*) \\ \quad (\mbox{regular}(L) \Rightarrow \\ \quad ((\exists p\geq 1) ( (\forall w\in L) ((|w|\geq p) \Rightarrow \\ \quad ((\exists x,y,z \in \Sigma^*) (w=xyz \land (|y|\geq 1 \land |xy|\leq p \land (\forall i\geq 0)(xy^iz\in L)))))))) \end{array}$

## Use of lemma

The pumping lemma is often used to prove that a particular language is non-regular: a proof by contradiction (of the language's regularity) may consist of exhibiting a word (of the required length) in the language which lacks the property outlined in the pumping lemma.

For example the language L = {a^n b^n : n ≥ 0} over the alphabet Σ = {a, b} can be shown to be non-regular as follows. Let w, x, y, z, p, and i be as used in the formal statement for the pumping lemma above. Let w in L be given by w = a^p b^p. By the pumping lemma, there must be some decomposition w = xyz with |xy| ≤ p and |y| ≥ 1 such that xy^i z is in L for every i ≥ 0. Using |xy| ≤ p, we know y only consists of instances of a. Moreover, because |y| ≥ 1, it contains at least one instance of the letter a. We now pump y up: xy^2 z has more instances of the letter a than the letter b, since we have added some instances of a without adding instances of b. Therefore xy^2 z is not in L. We have reached a contradiction. Therefore, the assumption that L is regular must be incorrect. Hence L is not regular.

The proof that the language of balanced (i.e., properly nested) parentheses is not regular follows the same idea.
Given p, there is a string of balanced parentheses that begins with more than p left parentheses, so that y will consist entirely of left parentheses. By repeating y, we can produce a string that does not contain the same number of left and right parentheses, and so they cannot be balanced.

## Proof of the pumping lemma

Proof idea: Whenever a sufficiently long string xyz is recognized by a finite automaton, it must have reached some state (q_s = q_t) twice. Hence, after repeating ("pumping") the middle part y arbitrarily often (xyyz, xyyyz, ...) the word will still be recognized.

For every regular language there is a finite state automaton (FSA) that accepts the language. The number of states in such an FSA are counted and that count is used as the pumping length p. For a string of length at least p, let q_0 be the start state and let q_1, ..., q_p be the sequence of the next p states visited as the string is emitted. Because the FSA has only p states, within this sequence of p + 1 visited states there must be at least one state that is repeated. Write q_s for such a state. The transitions that take the machine from the first encounter of state q_s to the second encounter of state q_s match some string. This string is called y in the lemma, and since the machine will match a string without the y portion, or with the string y repeated any number of times, the conditions of the lemma are satisfied.

For example, the following image shows an FSA. The FSA accepts the string: abcd. Since this string has a length which is at least as large as the number of states, which is four, the pigeonhole principle indicates that there must be at least one repeated state among the start state and the next four visited states. In this example, only q_1 is a repeated state. Since the substring bc takes the machine through transitions that start at state q_1 and end at state q_1, that portion could be repeated and the FSA would still accept, giving the string abcbcd.
Alternatively, the bc portion could be removed and the FSA would still accept, giving the string ad. In terms of the pumping lemma, the string abcd is broken into an x portion a, a y portion bc and a z portion d.

## General version of pumping lemma for regular languages

If a language L is regular, then there exists a number p ≥ 1 (the pumping length) such that every string uwv in L with |w| ≥ p can be written in the form uwv = uxyzv with strings x, y and z such that |xy| ≤ p, |y| ≥ 1 and uxy^i zv is in L for every integer i ≥ 0.[5] This version can be used to prove many more languages are non-regular, since it imposes stricter requirements on the language.

## Converse of lemma not true

Note that while the pumping lemma states that all regular languages satisfy the conditions described above, the converse of this statement is not true: a language that satisfies these conditions may still be non-regular. In other words, both the original and the general version of the pumping lemma give a necessary but not sufficient condition for a language to be regular.

For example, consider the following language L:

\begin{align}L & = \{uvwxy : u,y \in \{0,1,2,3\}^*; v,w,x \in \{0,1,2,3\} \wedge (v=w \vee v=x \vee x=w)\} \\ & \cup \{w : w \in \{0,1,2,3\}^* \wedge \text{precisely 1/7 of the characters in } w \text{ are 3's}\}\end{align}

In other words, L contains all strings over the alphabet {0,1,2,3} with a substring of length 3 including a duplicate character, as well as all strings over this alphabet where precisely 1/7 of the string's characters are 3's. This language is not regular but can still be "pumped" with p = 5. Suppose some string s has length at least 5. Then, since the alphabet has only four characters, at least two of the five characters in the string must be duplicates. They are separated by at most three characters.
• If the duplicate characters are separated by 0 characters, or 1, pump one of the other two characters in the string, which will not affect the substring containing the duplicates. • If the duplicate characters are separated by 2 or 3 characters, pump 2 of the characters separating them. Pumping either down or up results in the creation of a substring of size 3 that contains 2 duplicate characters. • The second condition of L ensures that L is not regular: i.e., there are an infinite number of strings that are in L but cannot be obtained by pumping some smaller string in L. For a practical test that exactly characterizes regular languages, see the Myhill-Nerode theorem. The typical method for proving that a language is regular is to construct either a finite state machine or a regular expression for the language.
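The proof idea above is easy to act out in code. This is a sketch of my own (the two-state DFA for "even number of a's" is just an example): run the automaton, find a repeated state among the first p + 1 visited states, split w = xyz there, and check that every pumped word is still accepted.

```python
# DFA over {a, b} accepting strings with an even number of a's.
# Number of states p = 2, so any accepted word of length >= 2 can be pumped.

delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
start, accept = 0, {0}

def accepts(w):
    s = start
    for c in w:
        s = delta[(s, c)]
    return s in accept

w = 'abba'                       # in the language, |w| >= p
states = [start]
for c in w:                      # record every state the run visits
    states.append(delta[(states[-1], c)])

# Pigeonhole: among the first p + 1 = 3 visited states one repeats;
# the repeat gives the split w = xyz with |xy| <= p and |y| >= 1.
seen = {}
for idx, s in enumerate(states[:3]):
    if s in seen:
        i, j = seen[s], idx
        break
    seen[s] = idx
x, y, z = w[:i], w[i:j], w[j:]

assert accepts(w)
for k in range(4):               # xz, xyz, xyyz, xyyyz are all accepted
    assert accepts(x + y * k + z)
print(x, y, z)
```

Here the run visits states 0, 1, 1, 1, 0, so state 1 repeats immediately and the loop y = 'b' can be pumped without changing the parity of a's.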
http://mathhelpforum.com/trigonometry/191260-trig-proof-involving-secant-cosine-print.html
Trig proof involving secant and cosine

• Nov 5th 2011, 10:53 PM Don
Trig proof involving secant and cosine
The problem: Prove that sec^2 theta + cos^2 theta can never be less than 2. I can't figure out how to go about it. I have tried to take sec^2 theta as 1/cos^2 theta and cos^2 theta as 1 - sin^2 theta, but got nowhere.

• Nov 5th 2011, 11:32 PM sbhatnagar
Re: Trig proof involving secant and cosine
Quote: Originally Posted by Don
The problem: Prove that sec^2 theta + cos^2 theta can never be less than 2. I can't figure out how to go about it. I have tried to take sec^2 theta as 1/cos^2 theta and cos^2 theta as 1 - sin^2 theta, but got nowhere.
Let us take $f(\theta)=\sec^2{\theta} + \cos^2{\theta}$. Now, use your knowledge of calculus to find the minimum value of $f(\theta)$. You will find that the minimum value of $f(\theta)$ is 2. Therefore: $f(\theta) \geq 2$, i.e. $\sec^2{\theta} + \cos^2{\theta} \geq 2$.

• Nov 6th 2011, 04:26 AM Don
Re: Trig proof involving secant and cosine
I am still hazy about calculus, but I think that using algebraic identities like (a+b)^2 and (a-b)^2 this can be solved: sec^2 theta = 1/cos^2 theta. Therefore, cos theta can be taken as 'x' and the expression reduces to x^2 + 1/x^2. Then x^2 + 1/x^2 = x^2 + 1/x^2 - 2*x*(1/x) + 2*x*(1/x) = (x - 1/x)^2 + 2. This way, sec^2 theta + cos^2 theta can never be less than 2. Is this right?

• Nov 6th 2011, 04:44 AM veileen
Re: Trig proof involving secant and cosine
Yup, it's correct, even if you didn't write it ordered or... well, logically. $x^{2}+\frac{1}{x^{2}}\geq 2 \Leftrightarrow x^{2}+\frac{1}{x^{2}}-2\geq 0 \Leftrightarrow x^{2}+\frac{1}{x^{2}}-2\cdot x\cdot \frac{1}{x}\geq 0 \Leftrightarrow \left ( x-\frac{1}{x} \right )^{2}\geq 0$
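The claim can also be checked numerically. A minimal sketch (the sampling grid and tolerance are arbitrary choices, not from the thread):

```python
import math

# Check the claim: sec^2(t) + cos^2(t) >= 2 wherever sec is defined.
def f(t):
    c = math.cos(t)
    return 1.0 / c**2 + c**2   # sec^2 t + cos^2 t

# Sample many angles, avoiding cos(t) = 0 where sec is undefined.
samples = [t * 0.01 for t in range(-300, 301) if abs(math.cos(t * 0.01)) > 1e-6]
assert all(f(t) >= 2.0 - 1e-12 for t in samples)
print(min(f(t) for t in samples))   # 2.0, attained at t = 0
```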
https://byjus.com/arc-length-formula/
# Arc Length Formula

Arc length formula is used to calculate the measure of the distance along the curved line making up the arc (segment of a circle). In simple words, the distance that runs through the curved line of the circle making up the arc is known as the arc length. It should be noted that the arc length is longer than the straight line distance between its endpoints.

## Formulas for Arc Length

The formula to measure the length of the arc is -

Arc Length Formula (if θ is in degrees) s = 2πr (θ/360)

Arc Length Formula (if θ is in radians) s = θ × r

Arc Length Formula in Integral Form s = $\int_a^b\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx$

### Denotations in the Arc Length Formula

• s is the arc length
• r is the radius of the circle
• θ is the central angle of the arc

### Example Questions Using the Formula for Arc Length

Question 1: Calculate the length of an arc if the radius of the arc is 8 cm and the central angle is 40°.

Solution: Central angle, θ = 40°. Arc length = 2πr × (θ/360). So, s = 2 × π × 8 × (40/360) ≈ 5.585 cm

Question 2: What is the arc length for a function f(x) = 6 between x = 4 and x = 6?

Solution: Since the function is a constant, its derivative is 0. So, the arc length is s = $\int_4^6\sqrt{1+(0)^2}\,dx$. Hence, arc length (s) = (6 − 4) = 2.

### Practice Questions Based on Arc Length Formula

1. What would be the length of the arc formed by 75° of a circle having a diameter of 18 cm?
2. The length of an arc formed by 60° of a circle of radius "r" is 8.37 cm. Find the radius (r) of that circle.
3. Calculate the perimeter of a semicircle of radius 1 cm using the arc length formula.
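The two finite-angle formulas above can be checked against each other numerically. A minimal sketch (the function names are my own, not from the source):

```python
import math

def arc_length_degrees(r, theta_deg):
    """s = 2*pi*r * (theta/360) for a central angle in degrees."""
    return 2 * math.pi * r * (theta_deg / 360)

def arc_length_radians(r, theta_rad):
    """s = theta * r for a central angle in radians."""
    return theta_rad * r

# Question 1: r = 8 cm, theta = 40 degrees -> about 5.585 cm.
print(round(arc_length_degrees(8, 40), 3))

# Sanity check: 40 degrees is 2*pi/9 radians, so both formulas agree.
assert math.isclose(arc_length_degrees(8, 40), arc_length_radians(8, math.radians(40)))
```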
http://www.maa.org/publications/periodicals/loci/resources/mathematics-animations-with-svg-linear-approximation-in-several-variables
# Mathematics Animations with SVG - Linear Approximation in Several Variables Author(s): Samuel Dagan (Tel-Aviv University) This page discusses the linear approximation of a function in several variables. To support the discussion, the linked animation shows an elliptic paraboloid that can be rotated about the z-axis. Open the Linear Approximation in Several Variables page in a new window Samuel Dagan (Tel-Aviv University), "Mathematics Animations with SVG - Linear Approximation in Several Variables," Convergence (February 2010), DOI:10.4169/loci003318
http://matpitka.blogspot.com/2015/04/
## Wednesday, April 29, 2015

### What could be the origin of p-adic length scale hypothesis?

The argument would explain the existence of preferred p-adic primes. It does not yet explain the p-adic length scale hypothesis, stating that p-adic primes near powers of 2 are favored. A possible generalization of this hypothesis is that primes near powers of prime are favored. There indeed exists evidence for the realization of 3-adic time scale hierarchies in living matter (see this), and in music both 2-adicity and 3-adicity could be present; this is discussed in the TGD inspired theory of music harmony and genetic code (see this). The weak form of NMP might come to the rescue here.

1. Entanglement negentropy for a negentropic entanglement characterized by an n-dimensional projection operator is log(N_p(n)) for some p whose power divides n. The maximum negentropy is obtained if the power of p is the largest power-of-prime divisor of n, and this can be taken as the definition of number theoretic entanglement negentropy. If the largest such divisor is p^k, one has N = k × log(p). The entanglement negentropy per entangled state is N/n = k × log(p)/n and is maximal for n = p^k. Hence powers of prime are favored, which means that p-adic length scale hierarchies with scales coming as powers of p are negentropically favored and should be generated by NMP. Note that n = p^k would define a hierarchy of h_eff/h = p^k. During the first years of the h_eff hypothesis I believed that the preferred values obey h_eff = r^k, with r an integer not far from r = 2^11. It seems that this belief was not totally wrong.

2. If one accepts this argument, the remaining challenge is to explain why primes near powers of two (or more generally of p) are favored. n = 2^k gives a large entanglement negentropy for the final state. Why would primes p = 2^k − r near n = 2^k be favored? The reason could be the following.
n = 2^k corresponds to p = 2, which corresponds to the lowest level in p-adic evolution, since it is the simplest p-adic topology and farthest from the real topology, and therefore gives the poorest cognitive representation of a real preferred extremal as a p-adic preferred extremal (note that p = 1 formally makes sense, but for it the topology is discrete).

3. The weak form of NMP suggests a more convincing explanation. The density matrix of the state to be reduced is a direct sum over contributions proportional to projection operators. Suppose that the projection operator with the largest dimension has dimension n. The strong form of NMP would say that the final state is characterized by an n-dimensional projection operator. The weak form of NMP allows free will, so that all dimensions n − k, k = 0, 1, ..., n − 1, for the final-state projection operator are possible. The 1-dimensional case corresponds to vanishing entanglement negentropy and ordinary state function reduction isolating the measured system from the external world.

4. The negentropy of the final state per state depends on the value of k. It is maximal if n − k is a power of prime. For n = 2^k = M_k + 1, where M_k is a Mersenne prime, n − 1 gives the maximum negentropy and also the maximal p-adic prime available, so that this reduction is favored by NMP. Mersenne primes would indeed be special. Also the primes p = 2^k − r near 2^k produce large entanglement negentropy and would be favored by NMP.

5. This argument suggests a generalization of the p-adic length scale hypothesis so that p = 2 can be replaced by any prime.

This argument, together with the hypothesis that the preferred prime is ramified, would correlate the character of the irreducible extension and the character of super-conformal symmetry breaking. The integer n characterizing the super-symplectic conformal sub-algebra acting as gauge algebra would depend on the irreducible algebraic extension of rationals involved, so that the hierarchy of quantum criticalities would have a number theoretical characterization.
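The preference for primes just below powers of two (points 4 and 5), with Mersenne primes as the extreme case r = 1, can be tabulated directly. A hedged sketch - the scan range r < 16 is an arbitrary illustrative choice:

```python
# For each k, list primes of the form 2^k - r with small r.
# The case r = 1 gives the Mersenne primes M_k = 2^k - 1 singled out above.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for k in range(3, 14):
    hits = [(r, 2 ** k - r) for r in range(1, 16) if is_prime(2 ** k - r)]
    tag = " <- Mersenne" if hits and hits[0][0] == 1 else ""
    print(k, hits[:3], tag)
```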
Ramified primes could appear as divisors of n, and n would be essentially a characteristic of ramification known as the discriminant. An interesting question is whether only the ramified primes allow the continuation of string world sheets and partonic 2-surfaces to a 4-D space-time surface. If this is the case, the assumptions behind p-adic mass calculations would have a full first-principles justification.

For details see the article The Origin of Preferred p-Adic Primes?. For a summary of earlier postings see Links to the latest progress in TGD.

## Tuesday, April 28, 2015

### How preferred p-adic primes could be determined?

p-Adic mass calculations allow one to conclude that elementary particles correspond to one or possibly several preferred primes assigning p-adic effective topology to the real space-time sheets in discretization in some length scale range. TGD inspired theory of consciousness leads to the identification of p-adic physics as physics of cognition. The recent progress leads to the proposal that quantum TGD is adelic: all p-adic number fields are involved and each gives one particular view about physics. The adelic approach plus the view about evolution as emergence of increasingly complex extensions of rationals leads to a possible answer to the question of the title. The algebraic extensions of rationals are characterized by preferred rational primes, namely those which are ramified when expressed in terms of the primes of the extensions. These primes would be natural candidates for preferred p-adic primes.

1. Earlier attempts

How do the preferred primes emerge in this framework? I have made several attempts to answer this question.

1.
Classical non-determinism at the space-time level for real space-time sheets could, in some length scale range involving rational discretization for the space-time surface itself or for the parameters characterizing it as a preferred extremal, correspond to the non-determinism of p-adic differential equations due to the presence of pseudo constants, which have vanishing p-adic derivative. Pseudo-constants are functions that depend on a finite number of pinary digits of their arguments.

2. The quantum criticality of TGD is suggested to be realized in terms of infinite hierarchies of super-symplectic symmetry breakings, in the sense that only a sub-algebra with conformal weights which are n-multiples of those for the entire algebra acts as conformal gauge symmetries. This might be true for all conformal algebras involved. One has a fractal hierarchy, since the sub-algebras in question are isomorphic: only the scale of conformal gauge symmetry increases in the phase transition increasing n. The hierarchies correspond to sequences of integers n(i) such that n(i) divides n(i+1). These hierarchies would very naturally correspond to hierarchies of inclusions of hyper-finite factors, and m(i) = n(i+1)/n(i) could correspond to the integer n characterizing the index of inclusion, which has value n ≥ 3. A possible problem is that m(i) = 2 would not correspond to a Jones inclusion. Why would the scaling by a power of two be different? The natural question is whether the primes dividing n(i) or m(i) could define the preferred primes.

3. Negentropic entanglement corresponds to entanglement for which the density matrix is a projector. For an n-dimensional projector any prime p dividing n gives rise to negentropic entanglement, in the sense that the number theoretic entanglement entropy, defined by the Shannon formula by replacing p_i in log(p_i) = log(1/n) by its p-adic norm N_p(1/n), is negative if p divides n, and maximal for the prime for which the dividing power of the prime is the largest power-of-prime factor of n.
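Point 3 can be made concrete with a small computation: for an n-dimensional projector the number-theoretic negentropy is k·log(p), where p^k is the largest prime-power divisor of n, and the negentropy per state N/n peaks when n itself is a prime power. A minimal illustrative sketch (plain trial division; my own code, not from the source):

```python
import math

# Number-theoretic negentropy N(n) = k*log(p), where p^k is the largest
# prime-power divisor of n, and negentropy per state N(n)/n.
def largest_prime_power(n):
    best = (0.0, None, 0)          # (k*log p, p, k)
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            k = 0
            while m % p == 0:
                m //= p
                k += 1
            best = max(best, (k * math.log(p), p, k))
        p += 1
    if m > 1:
        best = max(best, (math.log(m), m, 1))
    return best

for n in [6, 8, 9, 12, 16, 30, 32]:
    N, p, k = largest_prime_power(n)
    print(n, p, k, round(N / n, 4))   # per-state negentropy peaks when n = p^k
```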
The identification of p-adic primes as factors of n is a highly attractive idea. The obvious question is whether n corresponds to the integer characterizing a level in the hierarchy of conformal symmetry breakings.

4. The adelic picture about TGD led to the question whether the notion of unitarity could be generalized. The S-matrix would be unitary in the adelic sense: P_m = (SS†)_mm = 1 would generalize to the adelic context, so that one would have a product of the real norm and the p-adic norms of P_m. In the intersection of the realities and p-adicities, P_m for reals would be rational, and if the real and p-adic P_m correspond to the same rational, the condition would be satisfied. The condition that P_m ≤ 1 seems however natural and forces separate unitarity in each sector, so that this option seems too tricky.

These are the basic ideas that I have discussed hitherto.

2. Could preferred primes characterize algebraic extensions of rationals?

The intuitive feeling is that the notion of preferred prime is something extremely deep, and the deepest thing I know is number theory. Does one end up with preferred primes in number theory? This question brought to my mind the notion of ramification of primes (see this) (more precisely, of prime ideals of a number field in its extension), which happens only for special primes in a given extension of a number field, say rationals. Could this be the mechanism assigning preferred prime(s) to a given elementary system, such as an elementary particle? I have not considered their role earlier, although their hierarchy is highly relevant in the number theoretical vision about TGD.

1. Stating it very roughly (I hope that mathematicians tolerate this language): as one goes from a number field K, say the rationals Q, to its algebraic extension L, the original prime ideals in the so-called integral closure (see this) over the integers of K decompose to products of prime ideals of L (prime ideal is the more rigorous manner to express primeness).
Integral closure for integers of a number field K is defined as the set of elements of K which are roots of some monic polynomial x^n + a_(n−1)x^(n−1) + ... + a_0 with coefficients that are integers of K. The integral closures of both K and L are considered. For instance, the integral closure of an algebraic extension of K over K is the extension itself. The integral closure of the complex numbers over the ordinary integers is the set of algebraic numbers.

2. There are two further basic notions related to ramification and characterizing it. The relative discriminant is the ideal of K divisible by all ramified prime ideals of K, and the relative different is the ideal of L divisible by all the ramified P_i:s. Note that the general ideal is the analog of an integer, and these ideals represent the analogs of products of preferred primes P of K and of the primes P_i of L dividing them.

3. A physical analogy is provided by the decomposition of hadrons to valence quarks. Elementary particles become composites of more elementary particles in the extension. The decomposition to these more elementary primes is of the form P = ∏ P_i^e(i), where e(i) is the ramification index - the physical analog would be the number of elementary particles of type i in the state (see this).

Could the ramified rational primes define the physically preferred primes for a given elementary system? In the TGD framework the extensions of rationals (see this) and of p-adic number fields (see this) are unavoidable and interpreted as an evolutionary hierarchy: physically, cosmological evolution would have gradually proceeded to more and more complex extensions. One can say that string world sheets and partonic 2-surfaces with the parameters of their defining functions in increasingly complex extensions of rationals emerge during evolution. Therefore ramifications and the preferred primes defined by them are unavoidable. For p-adic number fields the number of extensions is much smaller; for instance, for p > 2 there are only 3 quadratic extensions.

1.
In the p-adic context a proper definition of the counterparts of angle variables as phases, allowing a definition of the analogs of trigonometric functions, requires the introduction of an algebraic extension giving rise to some roots of unity. Their number depends on the angular resolution. These roots allow one to define the counterparts of ordinary trigonometric functions - the naive generalization based on Taylor series is not periodic - and also allow one to define the counterpart of the definite integral in these degrees of freedom as discrete Fourier analysis. The simplest algebraic extensions, defined by x^n − 1 and having abelian Galois group, are unramified, so that something else is needed. One has the decomposition P = ∏ P_i^e(i), e(i) = 1, analogous to an n-fermion state, so that the simplest cyclic extension does not give rise to ramification and there are no preferred primes.

2. What kind of polynomials could define preferred algebraic extensions of rationals? Irreducible polynomials are certainly an attractive candidate, since any polynomial reduces to a product of them. One can say that they define the elementary particles of number theory. Irreducible polynomials have integer coefficients with the property that they do not decompose to products of polynomials with rational coefficients. It would be wrong to say that only these algebraic extensions can appear, but there is a temptation to say that one can reduce the study of extensions to their study. One can even consider the possibility that string world sheets associated with products of irreducible polynomials are unstable against decay to those characterized by irreducible polynomials.

3. What can one say about irreducible polynomials? The Eisenstein criterion states the following: if Q(x) = ∑_(k=0,...,n) a_k x^k is an n:th order polynomial with integer coefficients such that there exists at least one prime p dividing all coefficients a_i except a_n, and p^2 does not divide a_0, then Q is irreducible.
Thus one can assign one or more preferred primes to the algebraic extension defined by an irreducible polynomial Q - in fact to any polynomial allowing ramification. There are also other kinds of irreducible polynomials, since Eisenstein's condition is only sufficient but not necessary.

4. Furthermore, in the algebraic extension defined by Q, the primes P having the above-mentioned characteristic property decompose to an n:th power of a single prime P_i: P = P_i^n. The primes are maximally/completely ramified. The physical analog P = P_0^n is a Bose-Einstein condensate of n bosons. There is a strong temptation to identify the preferred primes of irreducible polynomials as preferred p-adic primes. A good illustration is provided by the equation x^2 + 1 = 0 with roots x_± = ±i, and the equation x^2 + 2px + p = 0 with roots x_± = −p ± p^(1/2)(p−1)^(1/2). In the first case the ideals associated with ±i are different. In the second case these ideals are one and the same, since x_+ = −x_− − 2p: hence one indeed has ramification. Note that the first example also represents an example of an irreducible polynomial which does not satisfy the Eisenstein criterion. In the more general case the n conditions defined by the symmetric functions of the roots imply that the ideals are one and the same when the Eisenstein conditions are satisfied.

5. What does this mean in the p-adic context? The identity of the ideals can be stated by saying P = P_0^n for the ideals defined by the primes satisfying the Eisenstein condition. Very loosely one can say that the algebraic extension defined by the root involves an n:th root of the p-adic prime p. This does not work! The extension would have a number whose n:th power is zero modulo p. On the other hand, the p-adic numbers of the extension modulo p should form a finite field, but this would not be a field anymore, since there would exist a number whose n:th power vanishes. The algebraic extension simply does not exist for the preferred primes. The physical meaning of this will be considered later.

6.
What is so nice is that one could readily construct polynomials giving rise to given preferred primes. The complex roots of these polynomials could correspond to the points of partonic 2-surfaces carrying fermions and defining the ends of boundaries of string world sheets. It must however be emphasized that the form of the polynomial depends on the choice of the complex coordinate. For instance, the shift x → x + 1 transforms (x^n − 1)/(x − 1) (for prime n) to a polynomial satisfying the Eisenstein criterion. One should be able to fix the allowed coordinate changes in such a manner that the extension remains irreducible for all allowed coordinate changes. Already an integral shift of the complex coordinate affects the situation. It would seem that the action of the allowed coordinate changes must reduce to the action of the Galois group permuting the roots of the polynomials. A natural assumption is that the complex coordinate corresponds to a complex coordinate transforming linearly under a subgroup of isometries of the imbedding space. In the general situation one has P = ∏ P_i^e(i), e(i) ≥ 1, so that also now there are preferred primes; the appearance of preferred primes is a completely general phenomenon.

3. A connection with Langlands program?

In the Langlands program (see this) the great vision is that the n-dimensional representations of the Galois groups G characterizing algebraic extensions of rationals or more general number fields define n-dimensional adelic representations of adelic Lie groups, in particular the adelic linear group Gl(n,A). This would mean that it is possible to reduce these representations to number theory for adeles. This would be highly relevant for the vision about TGD as a generalized number theory. I have speculated on this possibility earlier (see this), but the mathematics is so horribly abstract that it takes a decade before one can have even a hope of building a rough vision.
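The Eisenstein criterion quoted earlier, and the x → x + 1 shift of (x^n − 1)/(x − 1), are easy to check mechanically. A minimal sketch (brute-force primality, illustrative only):

```python
import math

# Eisenstein's criterion as stated in the text: coeffs[k] is a_k, and we look
# for a prime p dividing every a_i except a_n, with p^2 not dividing a_0.
def eisenstein_primes(coeffs):
    a0, an = coeffs[0], coeffs[-1]
    hits = []
    for p in range(2, abs(a0) + 2):
        if any(p % d == 0 for d in range(2, p)):
            continue                     # skip composite p
        if all(c % p == 0 for c in coeffs[:-1]) and an % p and a0 % (p * p):
            hits.append(p)
    return hits

# The shift x -> x+1 mentioned in the text: ((x+1)^5 - 1)/x has ascending
# coefficients C(5,1), ..., C(5,5) and satisfies the criterion at p = 5.
shifted = [math.comb(5, k) for k in range(1, 6)]    # [5, 10, 10, 5, 1]
print(shifted, eisenstein_primes(shifted))          # -> [5, 10, 10, 5, 1] [5]
```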
One can wonder whether the irreducible polynomials could define preferred extensions K of rationals such that the maximal abelian extensions of the fields K would in turn define the adeles utilized in the Langlands program. At least one might hope that everything reduces to the maximally ramified extensions. At the level of TGD, string world sheets with parameters in an extension defined by an irreducible polynomial would define an adele containing various p-adic number fields defined by the primes of the extension. This would define a hierarchy in which the prime ideals of the previous level would decompose to those of the higher level. Each irreducible extension of rationals would correspond to some physically preferred p-adic primes. It should be possible to tell what the preferred character means in terms of the adelic representations. What happens to these representations of the Galois group in this case? This is known.

1. For Galois extensions the ramification indices are constant: e(i) = e, and the Galois group acts transitively on the ideals P_i dividing P. One obtains an n-dimensional representation of the Galois group. The same applies to the factor group G/I, where I is the subgroup of G leaving P_i invariant. This group is called the inertia group. In the maximally ramified case G maps the ideal P_0 in P = P_0^n to itself, so that G = I and the action of the Galois group is trivial, taking P_0 to itself, and one obtains singlet representations.

2. The trivial action of the Galois group looks like a technical problem for the Langlands program and also for TGD, unless the singletness of the P_i under G has some physical interpretation. One possibility is that the Galois group acts like a gauge group, and here the hierarchy of sub-algebras of the super-symplectic algebra labelled by integers n is highly suggestive. This raises obvious questions.
Could the integer n, characterizing the sub-algebra of the super-symplectic algebra acting as conformal gauge transformations, correspond to the integer defined by the product of the ramified primes? P_0^n brings to mind the n conformal equivalence classes which remain invariant under the conformal transformations acting as gauge transformations. Recalling that the relative discriminant is an ideal of K divisible by the ramified prime ideals of K, this means that n would correspond to the relative discriminant for K = Q. Are the preferred primes those which are "physical" in the sense that one can assign them to states satisfying the conformal gauge conditions?

4. A connection with infinite primes?

Infinite primes are one of the mathematical outcomes of TGD. There are two kinds of infinite primes. There are the analogs of free many-particle states consisting of fermions and bosons labelled by primes of the previous level in the hierarchy. They correspond to states of a supersymmetric arithmetic quantum field theory - or actually a hierarchy of them obtained by a repeated second quantization of this theory. A connection between infinite primes representing bound states and irreducible polynomials is highly suggestive.

1. The infinite prime representing a free many-particle state decomposes to a sum of an infinite part and a finite part having no common finite prime divisors, so that a prime is obtained. The infinite part is obtained from the "fermionic vacuum" X = ∏_k p_k by dividing away some fermionic primes p_i and adding their product, so that one has X → X/m + m, where m is a square-free integer. Also m = 1 is allowed and is analogous to the fermionic vacuum, interpreted as a Dirac sea without holes. X/m + m is an infinite prime and physically a pure many-fermion state. One can add bosons by multiplying X/m with any integer having no common divisors with m; its prime decomposition defines the bosonic content of the state. One can also multiply m by integers whose prime factors are prime factors of m.

2.
There are also infinite primes, which are analogs of bound states; at the lowest level of the hierarchy they correspond to irreducible polynomials P(x) with integer coefficients. At the second level the bound states would naturally correspond to irreducible polynomials P_n(x) with coefficients Q_k(y), which are infinite integers at the previous level of the hierarchy.

3. What is remarkable is that bound-state infinite primes at a given level of the hierarchy would define maximally ramified algebraic extensions at the previous level. One indeed has an infinite hierarchy of infinite primes, since the infinite primes at a given level are infinite primes in the sense that they are not divisible by the primes of the previous level. The formal construction works as such. Infinite primes correspond to polynomials of a single variable at the first level, polynomials of two variables at the second level, and so on. Could the Langlands program be generalized from the extensions of rationals to polynomials of complex argument, so that one would obtain an infinite hierarchy?

4. Infinite integers in turn could correspond to products of irreducible polynomials defining more general extensions. This raises the conjecture that infinite primes for an extension K of rationals could code for the algebraic extensions of K quite generally. If infinite primes correspond to real quantum states, they would thus correspond to the extensions of rationals to which the parameters appearing in the functions defining partonic 2-surfaces and string world sheets belong. This would support the view that partonic 2-surfaces associated with algebraic extensions defined by infinite integers - and thus not irreducible - are unstable against decay to partonic 2-surfaces which correspond to extensions assignable to infinite primes. An infinite composite integer defining an intermediate unstable state would decay to its composites. Basic particle physics phenomenology would have a number theoretic analog, and even more.

5.
According to Wikipedia, Eisenstein's criterion (see this) allows generalization, and what comes to mind is that it applies in exactly the same form also at the higher levels of the hierarchy. Primes would only be replaced with prime polynomials, and there would be at least one prime polynomial Q(y) dividing the coefficients of P_n(x) except the highest one, such that its square would not divide the lowest coefficient P_0.

Infinite primes would give rise to an infinite hierarchy of functions of many complex variables. At the first level the zeros of the function would give discrete points at the partonic 2-surface. At the second level one would obtain a 2-D surface: partonic 2-surfaces or string world sheets. At the next level one would obtain 4-D surfaces. What about higher levels? Does one obtain higher-dimensional objects or something else? The union of n 2-surfaces can be interpreted also as a 2n-dimensional surface, and one could think that the hierarchy describes a hierarchy of unions of correlated partonic 2-surfaces. The correlation would be due to the preferred extremal property of Kähler action.

One can ask whether this hierarchy could allow one to generalize the number theoretical Langlands program to the case of function fields, using the notion of a prime function assignable to an infinite prime. What this hierarchy of polynomials of arbitrarily many complex arguments means physically is unclear. Do these polynomials describe many-particle states consisting of partonic 2-surfaces such that there is a correlation between them as sub-manifolds of the same space-time sheet representing a preferred extremal of Kähler action? This would suggest strongly the generalization of the notion of p-adicity so that it applies to infinite primes.

1.
Note that infinite primes (irreducible polynomials) would give rise to a hierarchy of preferred coordinate variables. In terms of infinite primes this expansion would require that the coefficients are smaller than the infinite prime P used. Are the coefficients lower level primes? Or also infinite integers at the same level smaller than the infinite prime in question? This criterion makes sense, since one can calculate the ratios of infinite primes as real numbers.

2. I would guess that the definition of infinite-P p-adicity is not a problem, since mathematicians have generalized the number theoretical notions to a level of abstraction far above that of a layman like me. The basic question is how to define the p-adic norm for the infinite primes (infinite only in the real sense; p-adically they have unit norm for all lower level primes) so that it is finite.

3. There exists an extremely general definition of generalized p-adic number fields (see this). One considers a Dedekind domain D, which is a generalization of the integers of an ordinary number field having the property that ideals factorize uniquely into prime ideals. Now D would contain infinite integers. One introduces the field E of fractions consisting of infinite rationals. Consider an element e of E and a general fractional ideal eD as the counterpart of an ordinary rational, and decompose it into a ratio of products of powers of ideals defined by prime ideals, now those defined by infinite primes. The general expression for the norm of x is c^(-ord(P)), where ord(P) is the total number of ideal factors P appearing in the factorization of the fractional ideal in E: this number can also be negative for rationals. When the residue field is finite (the finite field G(p,1) for p-adic numbers), one can take c to be the number of its elements (c=p for p-adic numbers). Now it seems that this number is not finite, since the number of ordinary primes smaller than P is infinite!
But this is not a problem, since the topology of the completion does not depend on the value of c. The simple infinite primes at the first level (free many-particle states) can be mapped to ordinary rationals, and the q-adic norm suggests itself: could it be that infinite-P p-adicity corresponds to the q-adicity discussed by Khrennikov in the context of p-adic analysis? Note however that q-adic numbers are not a field.

Finally, a loosely related question. Could the transition from infinite primes of K to those of L take place just by replacing the finite primes appearing in an infinite prime with their decompositions in L? The resulting entity is an infinite prime if the finite and infinite parts contain no common prime divisors in L. This is not the case generally if one can have primes P1 and P2 of K having common divisors as primes of L: in this case one can include P1 in the infinite part of the infinite prime and P2 in the finite part.

For details see the article The Origin of Preferred p-Adic Primes?.

For a summary of earlier postings see Links to the latest progress in TGD.

## Monday, April 27, 2015

### Could adelic approach allow to understand the origin of preferred p-adic primes?

The comment of Crow to the posting Intentions, cognitions, and time stimulated rather interesting ideas about the adelization of quantum TGD. First, two questions.

1. What is adelic quantum TGD? The basic vision is that scattering amplitudes are obtained by algebraic continuation to various number fields from the intersection of realities and p-adicities (briefly "intersection" in what follows) represented at the space-time level by string world sheets and partonic 2-surfaces for which the defining parameters (WCW coordinates) are rational or in some algebraic extension of p-adic numbers. This principle is a combination of strong form of holography and algebraic continuation as a manner to achieve number theoretic universality. Some years ago Crow sent me the book of Lapidus about adelic strings.
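The number theoretic core of the adelic approach is the product formula for rationals: the real norm of a rational number times all of its p-adic norms equals one. A minimal numerical check (the sample rational q = -50/27 is an arbitrary illustrative choice):

```python
from fractions import Fraction

def p_adic_norm(q, p):
    """|q|_p = p**(-k), where p**k is the exact power of p dividing q."""
    if q == 0:
        return 0
    num, den = q.numerator, q.denominator
    k = 0
    while num % p == 0:   # powers of p in the numerator
        num //= p
        k += 1
    while den % p == 0:   # powers of p in the denominator
        den //= p
        k -= 1
    return Fraction(p) ** (-k)

q = Fraction(-50, 27)
# Only primes dividing the numerator or denominator give norms != 1
product = abs(q)
for p in (2, 3, 5, 7, 11):
    product *= p_adic_norm(q, p)
assert product == 1   # adelic product formula: |q| * prod_p |q|_p = 1
```

Each prime "sees" only its own share of q, and the real norm exactly compensates the product of the p-adic norms.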
Witten wrote a long time ago an article in which he demonstrated that the product of the real stringy vacuum amplitude and its p-adic variants equals 1. This is a generalization of the adelic identity for a rational number, stating that the product of the norm of a rational number with its p-adic norms equals one. The real amplitude in the intersection of realities and p-adicities is, for all values of the parameters, a rational number or in an appropriate algebraic extension of rationals. If a given p-adic amplitude is just the p-adic norm of the real amplitude, one would have the adelic identity. This would however require that the p-adic variant of the amplitude is real number-valued: I want p-adic valued amplitudes. A further restriction is that Witten's adelic identity holds for the vacuum amplitude. I live in Zero Energy Ontology (ZEO) and want it for the entire S-matrix, M-matrix, and/or U-matrix and for all states of the basis in some sense.

In ZEO one must consider S-, M-, or U-matrix elements. U and S are unitary. M is a product of a hermitian square root of density matrix times a unitary S-matrix. Consider next the S-matrix.

1. For S-matrix elements one should have pm=(SS†)mm=1. This states the unitarity of the S-matrix. Probability is conserved. Could it make sense to generalize this condition and demand that it holds true only adelically, that is only for the product of the real and p-adic norms of pm: NR(pm(R))∏p Np(pm(p))=1. This could actually be true identically in the intersection if the algebraic continuation principle holds true. Despite the apparent triviality of the adelicity condition, one would no longer need unitarity separately for reals and p-adic number fields. Notice that the numbers pm would be arbitrary rationals in the most general case.

2. Could one even replace Np with canonical identification or some form of it with cutoffs reflecting the length scale cutoffs?
Canonical identification behaves for powers of p like the p-adic norm and means only a more precise map of p-adics to reals.

3. For a given diagonal element of the unit matrix characterizing a particular state m, one would have a product of the real norm and p-adic norms. The number of norms differing from unity would be finite. This condition would give a finite number of exceptional p-adic primes, that is, assign to a given quantum state m a finite number of preferred p-adic primes! I have been searching for a long time for the underlying deep reason for this assignment forced by the p-adic mass calculations, and here it might be.

4. Unitarity might thus fail in the real sector and in a finite number of p-adic sectors (otherwise the product of p-adic norms would be infinite or zero). In some sense the failures would compensate each other in the adelic picture. The failure of course brings to mind p-adic thermodynamics, which indeed means that the adelic SS†, or should it be called MM†, is not unitary but defines the density matrix defining the p-adic thermal state! Recall that the M-matrix is defined as a hermitian square root of density matrix times a unitary S-matrix.

5. The weakness of these arguments is that the states are assumed to be labelled by discrete indices. Finite measurement resolution implies discretization and could justify this.

The p-adic norms of pm, or the images of pm under canonical identification in a given number field, would define analogs of probabilities. Could one indeed have ∑m pm=1 so that SS† would define a density matrix?

1. For the ordinary S-matrix this cannot be the case, since the sum of the probabilities pm equals the dimension N of the state space: ∑ pm=N. In this case one could accept pm>1 both in real and p-adic sectors. For this option adelic unitarity would make sense and would be a highly non-trivial condition, allowing perhaps to understand how preferred p-adic primes emerge at the fundamental level.

2.
If the S-matrix is multiplied by a hermitian square root of density matrix to get the M-matrix, the situation changes and one indeed obtains ∑ pm=1. MM†=1 does not make sense anymore and must be replaced with MM†=ρ, in a special case a projector to an N-dimensional subspace multiplied by 1/N. In this case the numbers p(m) would have p-adic norm larger than one for the divisors of N and would define preferred p-adic primes. For these primes the sum ∑m Np(p(m)) would not equal 1 but N×Np(1/N).

3. The situation is different for hyper-finite factors of type II1, for which the trace of the unit matrix equals one by definition, and MM†=1 and ∑ pm=1, with the sum defined appropriately, could make sense. MM† could also be a projector to an infinite-D subspace. Could the M-matrix using the ordinary definition of the dimension of Hilbert space be equivalent with the S-matrix for the state space using the definition of dimension assignable to HFFs? Could these notions be duals of each other? Could the adelic S-matrix define the counterpart of the M-matrix for HFFs?

This looks like a nice idea, but usually good looking ideas do not live long in the crossfire of counter arguments. The following is my own. The reader is encouraged to invent his or her own objections.

1. The most obvious objection against the very attractive direct algebraic continuation from the real to the p-adic sector is that if the real norm of the real amplitude is small, then the p-adic norm of its p-adic counterpart is large, so that the p-adic variants of pm(p) can become larger than 1 and the probability interpretation fails. As noticed, there is actually no need to pose the probability interpretation. The only way to overcome the "problem" is to assume that unitarity holds separately in each sector so that one would have p(m)=1 in all number fields, but this would lead to the loss of preferred primes.

2. Should the p-adic variants of the real amplitude be defined by canonical identification or its variant with cutoffs?
This is mildly suggested by p-adic thermodynamics. In this case it might be possible to satisfy the condition NR(pm(R))∏p Np(pm(p))=1. One can however argue that the adelic condition is an ad hoc condition in this case.

To sum up, if the above idea survives all the objections, it could give rise to considerable progress: a first principle understanding of how preferred p-adic primes are assigned to quantum states, and thus a first principle justification for p-adic thermodynamics. For the ordinary definition of the S-matrix this picture makes sense, and also for the M-matrix. One would still need a justification for the canonical identification map playing a key role in p-adic thermodynamics, allowing to map p-adic mass squared to its real counterpart.

## Sunday, April 26, 2015

### Hierarchies of conformal symmetry breakings, quantum criticalities, Planck constants, and hyper-finite factors

TGD is characterized by various hierarchies: fractal hierarchies of quantum criticalities, Planck constants and hyper-finite factors, which relate to hierarchies of space-time sheets and selves. These hierarchies are closely related, and this article describes the recent view about the connections between them.

For details see the article Hierarchies of conformal symmetry breakings, quantum criticalities, Planck constants, and hyper-finite factors.

For a summary of earlier postings see Links to the latest progress in TGD.

### Updated View about Kähler geometry of WCW

Quantum TGD reduces to a construction of Kähler geometry for what I call the "World of Classical Worlds" (WCW). It has been clear from the beginning that the gigantic super-conformal symmetries generalizing ordinary super-conformal symmetries are crucial for the existence of the WCW Kähler metric. The detailed identification of the Kähler function and WCW Kähler metric has however turned out to be a difficult problem.
It is now clear that WCW geometry can be understood in terms of the analog of AdS/CFT duality between fermionic and space-time degrees of freedom (or between Minkowskian and Euclidian space-time regions), allowing one to express the Kähler metric either in terms of the Kähler function or in terms of anti-commutators of WCW gamma matrices identifiable as super-conformal Noether super-charges for the symplectic algebra assignable to δM4+/-×CP2. A string model type description of gravitation emerges, and also the TGD based view about dark matter becomes more precise. The string tension is however dynamical rather than pregiven, and the hierarchy of Planck constants is necessary in order to understand the formation of gravitationally bound states. Also the proposal that sparticles correspond to dark matter becomes much stronger: sparticles actually are dark variants of particles.

A crucial element of the construction is the assumption that the super-symplectic and other super-conformal symmetries, having the same structure as 2-D super-conformal groups, can be seen as broken gauge symmetries such that the sub-algebra with conformal weights coming as n-multiples of those for the full algebra acts as gauge symmetries. In particular, the Noether charges of this algebra vanish for preferred extremals: this would realize the strong form of holography implied by the strong form of General Coordinate Invariance. This gives rise to an infinite number of hierarchies of conformal gauge symmetry breakings with levels labelled by integers n(i) such that n(i) divides n(i+1), interpreted as hierarchies of dark matter with levels labelled by the value of Planck constant heff=n×h. These hierarchies also define hierarchies of quantum criticalities and are proposed to give rise to inclusion hierarchies of hyperfinite factors of type II1 having an interpretation in terms of finite cognitive resolution. These hierarchies would be fundamental for the understanding of living matter.
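The labelling of the breaking hierarchies by integers n(i) with n(i) dividing n(i+1) is easy to make concrete: each hierarchy is a divisibility chain, and all of them can be enumerated up to a cutoff. A small sketch (the starting point n=1 and the cutoff 12 are arbitrary illustrative choices):

```python
def divisibility_chains(n, n_max, chain=None):
    """Yield every chain n(0) | n(1) | ... of proper divisibility steps
    that starts at n and stays within the cutoff n_max."""
    chain = (chain or []) + [n]
    yield chain
    for m in range(2 * n, n_max + 1, n):  # every multiple m of n, so n | m
        yield from divisibility_chains(m, n_max, chain)

# All breaking hierarchies n(i) | n(i+1) starting from n=1 with cutoff 12
chains = list(divisibility_chains(1, 12))
longest = max(chains, key=len)
print(longest)  # prints [1, 2, 4, 8]
```

Each such chain would label a sequence of dark matter levels with heff = n(i)×h, the longest chains corresponding to the most deeply broken conformal gauge symmetry.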
For details see the article Updated view about Kähler geometry of WCW.

For a summary of earlier postings see Links to the latest progress in TGD.

### Intentions, Cognition, and Time

Intentions involve time in an essential manner, and this led to the idea that p-adic-to-real quantum jumps could correspond to a realization of intentions as actions. It however seems that this hypothesis, posing strong additional mathematical challenges, is not needed if one accepts the adelic approach in which real space-time and its p-adic variants are all present and quantum physics is adelic. I have already earlier developed the first formulation of p-adic space-time surfaces as cognitive charts of real space-time surfaces and also the ideas related to the adelic vision. The recent view involving strong form of holography would provide a dramatically simplified view about how these representations are formed as continuations of representations of string world sheets and partonic 2-surfaces in the intersection of real and p-adic variants of WCW ("World of Classical Worlds"), in the sense that the parameters characterizing these representations are algebraic numbers in the algebraic extension of p-adic numbers involved.

For details see the article Intentions, Cognition, and Time.

For a summary of earlier postings see Links to the latest progress in TGD.

## Saturday, April 25, 2015

### Good and Evil, Life and Death

In principle the proposed conceptual framework already allows a consideration of the basic questions relating to concepts like Good and Evil and Life and Death. Of course, too many uncertainties are involved to allow any definite conclusions, and one could also regard the speculations as outputs of the babbling period necessarily accompanying the development of the linguistic and conceptual apparatus making it ultimately possible to discuss these questions more seriously.
Even the most hard-boiled materialist sceptic mentions ethics and morals when suffering personal injustice. Is there actual justification for moral laws? Are they only social conventions, or is there some hard core involved? Is there some basic ethical principle telling what deeds are good and what deeds are bad?

A second group of questions relates to life and biological death. How should one define life? What happens in biological death? Is self preserved in biological death in some form? Is there something deserving to be called soul? Are reincarnations possible? Are we perhaps responsible for our deeds even after our biological death? Could the law of Karma be consistent with physics? Is liberation from the cycle of Karma possible?

In the sequel these questions are discussed from the point of view of TGD inspired theory of consciousness. It must be emphasized that the discussion represents various points of view rather than being a final summary; also mutually conflicting points of view are considered. The cosmology of consciousness, the concept of self having space-time sheet and causal diamond as its correlates, the vision about the fundamental role of negentropic entanglement, and the hierarchy of Planck constants identified as a hierarchy of dark matters and of quantum critical systems, provide the building blocks needed to make guesses about what biological death could mean from the subjective point of view.

For details see the article Good and Evil, Life and Death.

For a summary of earlier postings see Links to the latest progress in TGD.

## Friday, April 24, 2015

### Variation of Newton's constant and of length of day

J. D. Anderson et al have published an article discussing observations suggesting a periodic variation of the measured value of Newton's constant and a variation of the length of day (LOD) (see also this). This article represents a TGD based explanation of the observations in terms of a variation of Earth's radius.
The variation would be due to pulsations of Earth coupling via gravitational interaction to a dark matter shell with mass about 1.3×10^-4 ME introduced to explain the Flyby anomaly: the model would predict ΔG/G = 2ΔR/R and ΔLOD/LOD = 2ΔRE/RE, with the variations of G and length of day in opposite phases. The experimental finding ΔRE/RE = MD/ME is natural in this framework but should be deduced from first principles.

The gravitational coupling would be in the radial scaling degree of freedom and in rigid body rotational degrees of freedom. In rotational degrees of freedom the model is, in the lowest order approximation, mathematically equivalent with the Kepler model. The model for the formation of planets around the Sun suggests that the dark matter shell has radius equal to that of Moon's orbit. This leads to a prediction for the oscillation period of Earth's radius: the prediction is consistent with the observed 5.9 year period. The dark matter shell would correspond to the n=1 Bohr orbit in the earlier model for quantum gravitational bound states based on a large value of Planck constant. Also n>1 orbits are suggestive, and their existence would provide additional support for the TGD view about quantum gravitation.

For details see the chapter Cosmology and Astrophysics in Many-Sheeted Space-Time or the article Variation of Newton's constant and of length of day.

For a summary of earlier postings see Links to the latest progress in TGD.

## Tuesday, April 21, 2015

### Connection between Boolean cognition and emotions

The weak form of NMP allows the state function reduction to occur in 2^n-1 manners corresponding to subspaces of the sub-space defined by an n-dimensional projector when the density matrix is an n-dimensional projector (the outcome corresponding to the 0-dimensional subspace is excluded).
If the probability for the outcome of the state function reduction is the same for all values of the dimension 1≤ m ≤n, the probability distribution for the outcome is given by the binomial distribution B(n,p) for p=1/2 (head and tail equally probable), that is p(m) = b(n,m)×2^-n = (n!/(m!(n-m)!))×2^-n. This gives for the average dimension E(m) = n/2, so that the negentropy would increase on the average. The world would become gradually better.

One cannot avoid the idea that these different degrees of negentropic entanglement could actually give a realization of Boolean algebra in terms of conscious experiences.

1. Could one speak about hierarchies of codes of cognition based on the assignment of different degrees of "feeling good" to the Boolean statements? If one assumes that the n:th bit is always 1, all independent statements except one correspond to at least two non-vanishing bits and thus to negentropic entanglement. Only one statement (only the last bit equal to 1) would correspond to 1 bit and to a state function reduction reducing the entanglement completely (this brings to mind the fruit of the tree of Knowledge of Good and Evil!).

2. A given hierarchy of breakings of super-symplectic symmetry corresponds to a hierarchy of integers n(i+1) = ∏k≤i mk. The codons of the first code would consist of sequences of m1 bits. The codons of the second code consist of m2 codons of the first code, and so on. One would have a hierarchy in which the codons of the previous level become the letters of the code words at the next level of the hierarchy.

In fact, I ended up with an almost-Boolean algebra decades ago when considering the hierarchy of genetic codes suggested by the hierarchy of Mersenne primes M(n+1) = MM(n), Mn = 2^n-1.

1. The hierarchy starting from M2=3 contains the Mersenne primes 3, 7, 127 and 2^127-1, and Hilbert conjectured that all these integers are primes. These numbers are almost-dimensions of Boolean algebras with n=2, 3, 7, 127 bits. The maximal Boolean sub-algebras have m=n-1=1, 2, 6, 126 bits.

2.
The observation that m=6 gives 64 elements led to the proposal that it corresponds to a Boolean algebra assignable to the genetic code and that the sub-algebra represents the maximal number of independent statements defining analogs of axioms. The remaining elements would correspond to negations of these statements. I also proposed that the Boolean algebra with m=126=6×21 bits (21 pieces consisting of 6 bits each) corresponds to what I called the memetic code, obviously realizable as sequences of 21 DNA codons with stop codons included. Emotions and information are closely related, and peptides are regarded as both information molecules and molecules of emotion.

3. This hierarchy of codes would have the additional property that the Boolean algebra at the n+1:th level can be regarded as the set of statements about statements of the previous level. One would have a hierarchy representing thoughts about thoughts about...

It should be emphasized that there is no need to assume that Hilbert's conjecture is true. One can obtain this kind of hierarchy as hierarchies with dimensions m, 2^m, 2^(2^m), ..., that is n(i+1) = 2^n(i). The condition that n(i) divides n(i+1) is non-trivial only at the lowest step and implies that m is a power of 2, so that the hierarchies start from m=2^k. This is natural since Boolean algebras are involved. If n corresponds to the size scale of a CD, it would come as a power of 2. The p-adic length scale hypothesis has also led to this conjecture. A related conjecture is that the sizes of CDs correspond to secondary p-adic length scales, which indeed come as powers of two by the p-adic length scale hypothesis. In the case of the electron this predicts that the minimal size of the CD associated with the electron corresponds to the time scale T=0.1 seconds, the fundamental time scale in living matter (10 Hz is the fundamental bio-rhythm).
It seems that the basic hypotheses of TGD, inspired partly by the study of the elementary particle mass spectrum and basic bio-scales (there are 4 p-adic length scales defined by Gaussian Mersenne primes in the range between the cell membrane thickness of 10 nm and the size 2.5 μm of the cell nucleus!), follow from the proposed connection between emotions and Boolean cognition.

4. NMP would be in the role of God. Strong NMP as God would always force the optimal choice maximizing the negentropy gain and increasing the negentropy resources of the Universe. Weak NMP as God allows free choice, so that the negentropy gain need not be maximal and sinners populate the world. Why would the omnipotent God allow this? The reason is now obvious. The weak form of NMP makes possible the realization of Boolean algebras in terms of degrees of "feels good"! Without the God allowing the possibility to sin there would be no emotional intelligence!

Hilbert's conjecture relates in an interesting manner to space-time dimension. Suppose that Hilbert's conjecture fails and only the four lowest Mersenne integers in the hierarchy are Mersenne primes, that is 3, 7, 127, 2^127-1. In TGD one has a hierarchy of dimensions associated with the space-time surface coming as 0, 1, 2, 4, plus the imbedding space dimension 8. The abstraction hierarchy associated with the space-time dimensions would correspond to the discretization of partonic 2-surfaces as a point set, the discretization of 3-surfaces as a set of strings connecting partonic 2-surfaces characterized by discrete parameters, the discretization of space-time surfaces as a collection of string world sheets with discretized parameters, and maybe the discretization of the imbedding space by a collection of space-time surfaces. Discretization means that the parameters in question are algebraic numbers in an extension of rationals associated with p-adic numbers. In the TGD framework it is clear why the imbedding space cannot be higher-dimensional and why the hierarchy does not continue.
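The first four levels of this Mersenne hierarchy, M(n+1) = 2^M(n) - 1 starting from 3, can actually be checked by computer: the Lucas-Lehmer test settles the primality of 2^p - 1 for the exponents p = 2, 3, 7 and 127 essentially instantly, while the next candidate 2^(2^127-1) - 1 is far beyond any known test. A sketch:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime exponent p, the Mersenne
    number 2**p - 1 is prime iff the iterate s_(p-2) vanishes mod it."""
    if p == 2:
        return True  # 2**2 - 1 = 3 is prime; the test needs p odd
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Iterate p -> 2**p - 1 starting from p = 2, giving 3, 7, 127, 2**127-1
p = 2
levels = []
for _ in range(4):
    p = 2 ** p - 1
    levels.append(p)

assert levels[:3] == [3, 7, 127]
assert all(lucas_lehmer(q) for q in (2, 3, 7, 127))  # each 2**q-1 is prime
```

The last assertion confirms in particular that 2^127-1 is prime, so the conjecture holds as far as it can be tested.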
Could there be a deeper connection between these two hierarchies? For instance, could it be that higher dimensional manifolds of dimension 2×n can be represented physically only as unions of n 2-D partonic 2-surfaces (just like a 3×N dimensional space can be represented as the configuration space of N point-like particles)? Also infinite primes define a hierarchy of abstractions. Could it be that one has also now a similar restriction, so that the hierarchy would have only a finite number of levels, say four? Note that the notions of n-group and n-algebra involve an analogous abstraction hierarchy.

For details see the article Good and Evil, Life and Death.

For a summary of earlier postings see Links to the latest progress in TGD.

## Monday, April 20, 2015

### Can one identify quantum physical correlates of ethics and moral?

TGD inspired theory of consciousness involves a bundle of new concepts making it possible to seriously discuss quantum physical correlates of ethics and morals, assuming that we live in the TGD Universe. In the following I summarize the recent understanding. I do not guarantee that I will agree with myself tomorrow, since I am just going through this stuff in the updating of TGD inspired theory of consciousness and quantum biology.

Quantum ethics very briefly

Could physics generalized to a theory of consciousness allow one to understand the physical correlates of ethics and morals? The proposal is that this is the case. The basic ethical principle would be that good deeds help evolution to occur. Evolution should correspond to the increase of negentropic entanglement resources, defining negentropy sources, which I have called Akashic records. This idea can be criticized.

1. If strong form of NMP prevails, one can worry that the TGD Universe does not allow Evil at all, perhaps not even genuine free will! No-one wants Evil, but Evil seems to be present in this world.

2.
Could one weaken NMP so that it does not force but only allows a reduction to a final state characterized by a density matrix which is a projection operator? Self could choose whether to perform a projection to some sub-space of this subspace, say a 1-D ray as in the ordinary state function reduction. NMP would be like the Christian God allowing the sinner to choose between Good and Evil. The final entanglement negentropy would be a measure for the goodness of the deed. This is so if entanglement negentropy is a correlate for love. Deeds done with love would be good. Reduction of entanglement would in turn mean loneliness and separation.

3. Or one could think that the definition of a good deed is as a selection between deeds which correspond to the same maximal increase of negentropy, so that NMP cannot tell what happens. For instance, the density matrix operator is a direct sum of projection operators of the same dimension but with varying coefficients, and there is a selection between these. It is difficult to imagine what the criterion for a good deed could be in this case. And how can self know which deed is good and which is bad?

Good deeds would support evolution. There are many manners to interpret evolution in the TGD Universe.

1. p-Adic evolution would mean a gradual increase of the p-adic primes characterizing individual partonic 2-surfaces and therefore their size. The identification of p-adic space-time sheets as representations for cognitions gives additional concreteness to this vision. The earlier proposal that p-adic-to-real phase transitions correspond to the realization of intentions and formation of cognitions seems however to be wrong. Instead, the adelic view that both real and p-adic sectors are present simultaneously and that fermions at string world sheets correspond to the intersection of realities and p-adicities seems more realistic.
The inclusion of phases q=exp(i2π/n) in the algebraic extension of p-adics allows one to define the notion of angle in the p-adic context, but only with a finite resolution, since only a finite number of angles are represented as phases for a given value of n. The increase of the integer n could be interpreted as the emergence of higher algebraic extensions of p-adic numbers in the intersection of the real and p-adic worlds. These observations suggest that all three views about evolution are closely related.

2. The hierarchy of Planck constants suggests evolution as the gradual increase of the Planck constant characterizing a p-adic space-time sheet (or partonic 2-surface for the minimal option). The original vision about this evolution was as a migration to the pages of the book-like structure defined by the generalized imbedding space, and it has therefore a quite concrete geometric meaning. It implies longer time scales of long term memory and planned action, and macroscopic quantum coherence in longer scales. The new view is in terms of first quantum jumps to the opposite boundary of the CD, leading to the death of self and its re-incarnation at the opposite boundary.

3. The vision about life as something in the intersection of real and p-adic worlds allows one to see evolution information theoretically as the increase of number theoretic entanglement negentropy, implying entanglement in increasing length scales. This option is equivalent with the second view, and consistent with the first one if the effective p-adic topology characterizes the real partonic 2-surfaces in the intersection of p-adic and real worlds.

The third kind of evolution would mean also the evolution of spiritual consciousness, if the proposed interpretation is correct. In each quantum jump the U-process generates a superposition of states in which any sub-system can have both real and algebraic entanglement with the external world.
If the state function reduction process involves also the choice of the type of entanglement, it could be interpreted as a choice between good and evil: on one hand the hedonistic complete freedom resulting as the entanglement entropy is reduced to zero, on the other hand the negentropic entanglement implying correlations with the external world and meaning giving up the maximal freedom. The selfish option means separation and loneliness. The second option means expansion of consciousness, a fusion to the ocean of consciousness as described by spiritual practices.

In this framework one could understand the physical correlates of ethics and moral. The ethics is simple: evolution of consciousness to higher levels is a good thing. Anything which tends to reduce consciousness represents violence and is a bad thing. Moral rules are related to the relationship between individual and society, presumably develop via a self-organization process, and are by no means unique. Moral rules however tend to optimize evolution. As blind normative rules they can however become a source of violence, identified as any action which reduces the level of consciousness. There is an entire hierarchy of selves, and every self has the selfish desire to survive; moral rules develop as a kind of compromise and evolve all the time.

ZEO leads to the notion that I have christened the cosmology of consciousness. It forces one to extend the concept of society to a four-dimensional society. The decisions of "me now" affect both my past and future, and time-like quantum entanglement makes possible conscious communication in the time direction by sharing conscious experiences. One can therefore speak of a genuinely four-dimensional society. Besides my next-door neighbors I had better take into account also my nearest neighbors in past and future (the nearest ones being perhaps copies of me!). If I make wrong decisions, those copies of me in future and past will suffer the most.
Perhaps my personal hell and paradise are here and are created mostly by me. What could the quantum correlates of moral be? We make moral choices all the time. Some deeds are good, some deeds are bad. In the world of a materialist there are no moral choices: the deeds are not good or bad, there are just physical events. I am not a materialist, so I cannot avoid questions such as how moral rules emerge and how some deeds become good and others bad. Negentropic entanglement is the obvious first guess if one wants to understand the emergence of moral. 1. One can start from ordinary quantum entanglement. It corresponds to a superposition of pairs of states. One member of the pair corresponds to the internal state of the self and the other to a state of the external world or the biological body of the self. In negentropic quantum entanglement each is replaced with a pair of sub-spaces of the state spaces of self and external world. The dimension of the sub-space depends on which pair is in question. In state function reduction one of these pairs is selected and the deed is done. How to make some of these deeds good and some bad? 2. Obviously the value of heff/h=n gives the criterion in the case that the weak form of NMP holds true. Recall that the weak form of NMP allows only the possibility to generate negentropic entanglement but does not force it. NMP is like God allowing the possibility to do good but not forcing good deeds. Self can choose any sub-space of the subspace defined by the n-dimensional projector, and a 1-D subspace corresponds to the standard quantum measurement. For n=1 the state function reduction leads to vanishing negentropy and separation of self and the target of the action. Negentropy does not increase in this action and self is isolated from the target: a kind of price for sin. For the maximal dimension of this sub-space the negentropy gain is maximal. 
This deed is good, and by the proposed criterion the negentropic entanglement corresponds to love or, more generally, positively colored conscious experience. Interestingly, there are 2^n possible choices, which is the dimension of the Boolean algebra consisting of n independent bits. This could relate directly to fermionic oscillator operators defining a basis of Boolean algebra. The deed in this sense would be a choice of how loving the attention towards a system of the external world is. 3. Could the moral rules of society be represented as this kind of entanglement patterns between its members? Here one of course has an entire fractal hierarchy of societies corresponding to different length scales. Attention, and the magnetic flux tubes serving as its correlates, is the basic element also in TGD inspired quantum biology already at the level of bio-molecules and even elementary particles. The value of heff/h=n associated with the magnetic flux tube connecting members of the pair would serve as a measure for the ethical value of the maximally good deed. Dark phases of matter would correspond to good: usually darkness is associated with bad! 4. These moral rules seem to be universal. There are however also moral rules - or should one talk about rules of survival - which are based on negative emotions such as fear. Moral rules as rules of desired behavior are often tailored for the purposes of the power holder. How could this kind of moral rules develop? Maybe they cannot be realized in terms of negentropic entanglement. Maybe the superposition of the allowed alternatives for the deed contains only the alternatives allowed by the power holder, and the superposition in question corresponds to ordinary entanglement, for which the signature is simple: the probabilities of the various options are different. This forces the self to choose just one option from the options that the power holder accepts. These rules do not allow the generation of a loving relationship. 
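The counting of choices can be made concrete with a small sketch. It is purely illustrative: the identification of log(k) as the negentropy retained by a k-dimensional choice is an assumption made here for concreteness, not a formula from the text.

```python
import math
from itertools import combinations

n = 3  # dimension of the negentropic entanglement projector (heff/h = n)
basis = list(range(n))

# All possible "deeds": choices of a sub-space spanned by a subset of basis
# states. Including the empty set there are 2^n subsets -- the dimension of
# the Boolean algebra of n independent bits.
subsets = [frozenset(s) for k in range(n + 1) for s in combinations(basis, k)]
assert len(subsets) == 2 ** n

# Toy measure: a dimension-k choice retains negentropy log(k).
# k = 1 is the ordinary quantum measurement (negentropy 0, "separation");
# k = n keeps the maximal negentropy log(n).
for s in sorted(subsets, key=len):
    if s:
        print(f"dim {len(s)}: negentropy {math.log(len(s)):.3f}")
```

Running it for n = 3 lists the 7 non-empty sub-space choices, with negentropy ranging from 0 (one-dimensional reduction) up to log 3 for the maximally "ethical" choice.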
Moral rules seem to be generated by society, upbringing, culture, civilization. How do moral rules develop? One can try to formulate an answer in terms of quantum physical correlates. 1. Basically the rules should be generated in the state function reductions corresponding to volitional action, that is in the first state function reduction to the earlier active boundary of CD. Old self dies and new self is born at the opposite boundary of CD, and the arrow of time associated with CD changes. 2. The repeated sequences of state function reductions can generate negentropic entanglement during the quantum evolutions between them. This time evolution would be the analog for the time evolution defined by the Hamiltonian - that is energy - associated with ordinary time translation, whereas the first state function reduction at the opposite boundary, inducing a scaling of heff and CD, would be accompanied by a time evolution defined by the conformal scaling generator L0. Note that the state at the passive boundary does not change during the sequence of repeated state function reductions. These repeated reductions however change the parts of the zero energy states associated with the new active boundary and generate also negentropic entanglement. As the self dies, the moral choices can be made if the weak form of NMP is true. 3. Who makes the moral choices? It looks of course very weird that self would apply free will only at the moment of its death or birth! The situation is saved by the fact that self has also sub-selves, which correspond to sub-CDs and represent mental images of self. We know that mental images die (as we also do some day) and are born again (as we also do some day), and these mental images can generate negentropic resources within the CD of self. One can argue that these mental images do not themselves decide whether to make the maximally ethical choice at the moment of death. The decision must be made by a self at a higher level. 
It is me who decides about the fate of my mental images - to some degree also after their death! I can choose how negentropic the quantum entanglement characterizing the relationship of my mental image and the world outside it is. I realize that the misused idea of positive thinking seems to unavoidably creep in! I have however no intention to make money with it! There are still many questions that are waiting for a more detailed answer. These questions are also a good way to detect logical inconsistencies. 1. What is the size of the CD characterizing self? For electron it would be at least of the order of the size of Earth. During the lifetime of CD the size of CD increases, and for us the order of magnitude is measured in light-lifetimes. This would allow one to understand our usual deeds affecting the environment in terms of our subselves and their entanglement with the external world, which is actually our internal world, at least if magnetic bodies are considered. 2. Can one assume that the dynamics inside CD is independent of what happens outside CD? Can one say that the boundaries of CD define the ends of space-time, or does space-time continue outside them? Do the boundaries of CD define boundaries for a 4-D spotlight of attention or for one particular reality? Does the answer to this question have any relevance if everything physically testable is formulated in terms of the physics of string world sheets associated with space-time surfaces inside CD? Note that the (average) size of CDs (which could be in superposition but need not be, if every repeated state function reduction is followed by a localization in the moduli space of CDs) increases during the life cycle of self. This makes possible the generation of negentropic entanglement between more and more distant systems. I have written about the possibility that ZEO could make possible interaction with distant civilizations (see this). 
The possibility of having communications in both time directions would allow one to circumvent the barrier due to the finite light-velocity, and gravitational quantum coherence in cosmic scales would make possible negentropic entanglement. 3. How do selves interact? CDs as spotlights of attention should overlap in order that interaction is possible. Formation of flux tubes makes possible quantum entanglement. The string world sheets carrying fermions are also essential correlates of entanglement, and the entanglement is possibly between fermions associated with the partonic 2-surfaces. The string world sheets define the intersection of real and p-adic worlds, where cognition and life reside. For details see the article Good and Evil, Life and Death.

### Intentions, cognitions, time, and p-adic physics

Intentions involve time in an essential manner, and this led to the idea that p-adic-to-real quantum jumps could correspond to a realization of intentions as actions. It however seems that this hypothesis, posing strong additional mathematical challenges, is not needed if one accepts the adelic approach in which real space-time and its p-adic variants are all present and quantum physics is adelic. I have developed the first formulation of p-adic space-time surfaces and the ideas related to the adelic vision (see this, this, and this). 1. What are intentions? One of the earlier ideas about the flow of subjective time was that it corresponds to a phase transition front representing a transformation of intentions to actions and propagating towards the geometric future quantum jump by quantum jump. The assumption about this front is unnecessary in the recent view inspired by ZEO. Intentions should relate to active aspects of conscious experience. The question is what the quantum physical correlates of intentions are and what happens in the transformation of intention to action. 1. 
The old proposal was that p-adic-to-real transitions could correspond to the realization of intention as action. One can even consider the possibility that the sequence of state function reductions decomposes to pairs of real-to-p-adic and p-adic-to-real transitions. This picture does not explain why and how intention gradually evolves stronger and stronger and is finally realized. The identification of p-adic space-time sheets as correlates of cognition is however natural. 2. The newer proposal, which might be called adelic, is that real and p-adic space-time sheets form a larger sensory-cognitive structure: cognitive and sensory aspects would be simultaneously present. Real and p-adic space-time surfaces would form a single coherent whole which could be called adelic space-time. All p-adic manifolds could be present and define a kind of chart maps about real preferred extremals, so that they would not be independent entities as for the first option. The first objection is that the assignment of fermions separately to every factor of the adelic space-time does not make sense. This objection is circumvented if fermions belong to the intersection of realities and p-adicities. This makes sense if string world sheets carrying the induced spinor fields define seats of cognitive representations in the intersection of reality and p-adicities. Cognition would still be associated with the p-adic space-time sheets and sensory experience with the real ones. What can be sensed and cognized would reside in the intersection. Intention would however be something different for the adelic option. The intention to perform a quantum jump at the opposite boundary would develop during the sequence of state function reductions at the fixed boundary, and eventually NMP would force the transformation of intention to action as the first state function reduction at the opposite boundary. NMP would guarantee that the urge to do something develops so strong that eventually something is done. Intention involves two aspects. 
The plan for achieving something corresponds to cognition, and the will to achieve something corresponds to an emotional state. These aspects could correspond to the p-adic and real aspects of intentionality. 2. p-Adic physics as physics of only cognition? There are two views about the p-adic-real correspondence corresponding to two views about p-adic physics. According to the first view p-adic physics defines correlates for both cognition and intentionality, whereas the second view states that it provides correlates for cognition only. 1. Option A: The older view is that p-adic-to-real transitions realize intentions as actions and the opposite transitions generate cognitive representations. A quantum state would be either real or p-adic. This option raises hard mathematical challenges since scattering amplitudes between different number fields are needed, and the needed mathematics might not exist at all. 2. Option B: The second view is that cognitive and sensory aspects of experience are simultaneously present at all levels. It means that real space-time surfaces and their p-adic counterparts form a larger structure in the spirit of what might be called Adelic TGD. p-Adic space-time charts could be present for all primes. It is of course necessary to understand why it is possible to assign a definite prime to a given elementary particle. This option could be developed by generalizing the existing mathematics of adeles by replacing a number in a given number field with a space-time surface in the imbedding space corresponding to that number field. Therefore this option looks more promising. For this option also the development of intention can be understood. The condition that the scattering amplitudes are in the intersection of reality and p-adicities is a very powerful condition on the scattering amplitudes and would reduce the realization of number theoretical universality and p-adicization to that for string world sheets and partonic 2-surfaces. 
For instance, the difficult problem of defining p-adic analogs of topological invariants would trivialize since these invariants (say genus) have an algebraic representation for 2-D geometries. The 2-dimensionality of cognitive representations would perhaps be basically due to the close correspondence between algebra and topology in dimension D=2. Most of the following considerations apply in both cases. 3. Some questions to ponder The following questions are part of the list of questions that one must ponder. a) Do cognitive representations reside in the intersection of reality and p-adicities? The idea that cognitive representations reside in the intersection of reality and various p-adicities is one of the key ideas of TGD inspired theory of consciousness. 1. All quantum states have vanishing total quantum numbers in ZEO, which now forms the basis of quantum TGD (see this). In principle conservation laws do not pose any constraints on possibly occurring real--p-adic transitions (Option A) if they occur between zero energy states. On the other hand, there are good hopes about the definition of p-adic variants of conserved quantities by algebraic continuation since the stringy quantal Noether charges make sense in all number fields if string world sheets are in the real--p-adic intersection. This continuation is indeed needed if quantum states have adelic structure (Option B). In accordance with this, quantum classical correspondence (QCC) demands that the classical conserved quantities in the Cartan algebra of symmetries are equal to the eigenvalues of the quantal charges. 2. The starting point is the interpretation of fermions as correlates for Boolean cognition and p-adic space-time sheets as space-time correlates of cognition (see this). Induced spinor fields are localized at string world sheets, which suggests that string world sheets and partonic 2-surfaces define cognitive representations in the intersection of realities and p-adicities. 
The space-time adele would have a book-like structure with the back of the book defined by string world sheets. 3. At the level of partonic 2-surfaces common rational points (or more generally, common points in an algebraic extension of rationals) correspond to the real--p-adic intersection. It is natural to identify the set of these points as the intersection of string world sheets and partonic 2-surfaces at the boundaries of CDs. These points would also correspond to the ends of strings connecting partonic 2-surfaces and the ends of fermion lines at the orbits of partonic 2-surfaces (at these surfaces the signature of the induced 4-metric changes). This would give a direct connection with fermions and Boolean cognition. 1. For option A the interpretation is simple. The larger the number of points is, the higher the probability for the transitions to occur. This is because the transition amplitude must involve the sum of amplitudes determined by data from the common points. 2. For option B the number of common points measures the goodness of the particular cognitive representation but does not tell anything about the probability of any quantum transition. It however allows one to discriminate between different p-adic primes using the precision of the cognitive representation as a criterion. For instance, the non-determinism of Kähler action could resemble p-adic non-determinism for some algebraic extension of a p-adic number field for some value of p. Also the entanglement assignable to a density matrix which is an n-dimensional projector would be negentropic only if the p-adic prime defining the number theoretic entropy is a divisor of n. Therefore also an entangled quantum state would give a strong suggestion about the optimal p-adic cognitive representation as that associated with the largest power of p appearing in n. b) Could cognitive resolution fix the measurement resolution? 
For p-adic numbers the algebraic extension used (roots of unity) fixes the resolution in angle degrees of freedom, and pinary cutoffs fix the resolution in "radial" variables, which are naturally positive. Could the character of a quantum state, or perhaps of a quantum transition, fix the measurement resolution uniquely? 1. If transitions (state function reductions) can occur only between different number fields (Option A), discretization is unavoidable and unique if maximal. For real-real transitions the discretization would be motivated only by finite measurement resolution and need be neither necessary nor unique. Discretization is required and unique also if one requires an adelic structure for the state space (Option B). Therefore both options A and B are allowed by this criterion. 2. For both options cognition and intention (if p-adic) would be one half of existence, and sensory perception and motor actions would be the second half of existence at the fundamental level. The latter half would correspond to sensory experience and motor action as time reversals of each other. This would be true even at the level of elementary particles, which would explain the amazing success of p-adic mass calculations. 3. For option A the state function reduction sequence would correspond to a formation of p-adic maps about real maps and real maps about p-adic maps: real → p-adic → real →..... For option B it would correspond to the sequence adelic → adelic → adelic →..... 4. For both options p-adic and real physics would be unified to a single coherent whole at the fundamental level, but the adelic option would be much simpler. This kind of unification is highly suggestive - consider only the success of p-adic mass calculations - but I have not really seriously considered what it could mean. c) What selects the preferred p-adic prime? What determines the p-adic prime or preferred p-adic prime assignable to the system considered? Is it unique? Can it change? 1. 
An attractive hypothesis is that the most favorable p-adic prime is a factor of the integer n defining the dimension of the n×n density matrix associated with the flux tubes/fermionic strings connecting partonic 2-surfaces: the presence of fermionic strings already implies at least two partonic 2-surfaces. During the sequence of reductions at the same boundary of CD, n receives additional factors so that p cannot change. If wormhole contacts behave as magnetic monopoles there must be at least two of them connected by monopole flux tubes. This would give a connection with negentropic entanglement, and for heff/h=n a connection to quantum criticality, dark matter, and the hierarchy of inclusions of HFFs. 2. A second possibility is that the classical non-determinism, making itself visible via super-symplectic invariance acting as broken conformal gauge invariance, has the same character as p-adic non-determinism for some value of the p-adic prime. This would mean that p-adic space-time surfaces would be especially good representations of real space-time sheets. At the lowest level of the hierarchy this would mean a large number of common points. At higher levels a large number of common parameter values in the algebraic extension of rationals in question. d) How does finite measurement resolution relate to hyper-finite factors? The connection with hyper-finite factors suggests itself. 1. Negentropic entanglement can be said to be stabilized by finite cognitive resolution if hyper-finite factors are associated with the hierarchy of Planck constants and cognitive resolutions. For HFFs the projection to a single ray of state space in state function reduction is replaced with a projection to an infinite-dimensional sub-space whose von Neumann dimension is not larger than one. 2. This raises an interesting question. Could infinite integers constructible from infinite primes correspond to these infinite dimensions, so that the prime p would appear as a factor of this kind of infinite integer? 
One can say that for inclusions of hyperfinite factors the ratio of dimensions for the including and included factors is a quantum dimension, which is an algebraic number expressible in terms of the quantum phase q=exp(i2π/n). Could n correspond to the integer ratio n=nf/ni for the integers characterizing the sub-algebra of the super-symplectic algebra acting as gauge transformations? 4. Generalizing the notion of p-adic space-time surface The notion of p-adic manifold is an attempt to formulate p-adic space-time surfaces, identified as preferred extremals of the p-adic variants of field equations, as cognitive charts of real space-time sheets. Here the essential point is that the p-adic variants of field equations make sense: this is due to the fact that the induced metric and induced gauge fields make sense (differential geometry exists p-adically, unlike global geometry involving notions of length, area, etc.: in particular, the local notions of angle and conformal invariance make sense). The second key element is finite resolution, so that the p-adic chart map is not unique. The same applies to the real counterpart of a p-adic extremal, which would have a representation as a space-time correlate for an intention realized as action. The discretization of the entire space-time surface proposed in the formulation of the p-adic manifold concept (see this) looks too naive an approach. It is plausible that one has an abstraction hierarchy for discretizations at various abstraction levels. 1. The simplest discretization would occur at space-time level only at partonic 2-surfaces, in terms of string ends identified as algebraic points in the extension of p-adics used. For the boundaries of string world sheets at the orbits of partonic 2-surfaces one would have discretization for the parameters defining the boundary curve. 
By field equations this curve is actually a segment of light-like geodesic line and characterized by initial light-like 8-velocity, which should be therefore a number in algebraic extension of rationals. The string world sheets should have similar parameterization in terms of algebraic numbers. By conformal invariance the finite-dimensional conformal moduli spaces and topological invariants would characterize string world sheets and partonic 2-surfaces. The p-adic variant of Teichmueller parameters was indeed introduced in p-adic mass calculations and corresponds to the dominating contribution to the particle mass (see this and this). 2. What might be called co-dimension 2 rule for discretization suggests itself. Partonic 2-surface would be replaced with the ends of fermion lines at it or equivalently: with the ends of space-like strings connecting partonic 2-surfaces at it. 3-D partonic orbit would be replaced with the fermion lines at it. 4-D space-time surface would be replaced with 2-D string world sheets. Number theoretically this would mean that one has always commutative tangent space. Physically the condition that em charge is well-defined for the spinor modes would demand co-dimension 2 rule. 3. This rule would reduce the real-p-adic correspondence at space-time level to construction of real and p-adic space-time surfaces as pairs to that for string world sheets and partonic 2-surfaces determining algebraically the corresponding space-time surfaces as preferred extremals of Kähler action. Strong form of holography indeed leads to the vision that these geometric objects can be extended to 4-D space-time surface representing preferred extremals. 4. In accordance with the generalization of AdS/CFT correspondence to TGD framework cognitive representations for physics would involve only partonic 2-surfaces and string world sheets. This would tell more about cognition rather than Universe. 
The 2-D objects in question would be in the intersection of reality and p-adicities and define cognitive representations of 4-D physics. Both classical and quantum physics would be adelic. 5. Space-time surfaces would not be unique but possess a degeneracy corresponding to a sub-algebra of the super-symplectic algebra isomorphic to it and acting as conformal gauge symmetries, giving rise to n conformal gauge invariance classes. The conformal weights for the sub-algebra would be n-multiples of those for the entire algebra, and n would correspond to the effective Planck constant heff/h=n. The hierarchy of quantum criticalities labelled by n would correspond to a hierarchy of cognitive resolutions defining measurement resolutions. Clearly, very many big ideas behind TGD and TGD inspired theory of consciousness would have this picture as a Boolean intersection. 5. Number theoretic universality for cognitive representations 1. By number theoretic universality p-adic zero energy states should be formally similar to their real counterparts for option B. For option A the states between which real--p-adic transitions are highly probable would be similar. The states would have as basic building bricks the elements of the Yangian of the super-symplectic algebra associated with these strings, which one can hope to be algebraically universal. 2. Finite measurement resolution demands that all scattering amplitudes representing zero energy states involve discretization. In purely p-adic context this is unavoidable because the notion of integral is highly problematic. Residue integral is p-adically well-defined if one can deal with π. A p-adic integral can be defined as the algebraic continuation of a real integral made possible by the notion of p-adic manifold, and this works at least in the real--p-adic intersection. 
String world sheets would belong to the intersection if they are cognitive representations, as the interpretation of fermions as correlates of Boolean cognition suggests. In this case there are excellent hopes that all real integrals can be continued to the various p-adic sectors (which can involve algebraic extensions of p-adic number fields). Quantum TGD would be adelic. There are of course potential problems with transcendentals like powers of π. 3. Discrete Fourier analysis allows one to define integration in angle degrees of freedom represented in terms of an algebraic extension involving roots of unity. In purely p-adic context the notion of angle does not make sense, but trigonometric functions make sense: the reason is that only the local aspects of geometry, characterized by the metric, generalize. The global aspects such as line length involving integrals do not. One can however introduce algebraic extensions of p-adic numbers containing roots of unity, and this gives rise to a realistic notion of trigonometric function. One can also define the counterpart of integration as discrete Fourier analysis in discretized angle degrees of freedom. 4. Maybe the 2-dimensionality of cognition has something to do with the fact that quaternions and octonions do not have a p-adic counterpart (the p-adic norm squared of a quaternion/octonion can vanish). I have earlier proposed that life and cognitive representations reside in the real-p-adic intersection. The stringy description of TGD could be seen as a number theoretically universal cognitive representation of 4-D physics - the best that the limitations of cognition allow to obtain. This hypothesis would also guarantee that various conserved quantal charges make sense both in the real and p-adic sense, as p-adic mass calculations demand.

## Monday, April 13, 2015

### Manifest unitarity and information loss in gravitational collapse

There was a guest posting in the blog of Lubos by Prof. Dejan Stojkovic from Buffalo University. 
The title of the post was Manifest unitarity and information loss in gravitational collapse. It explained the contents of the article Radiation from a collapsing object is manifestly unitary by Stojkovic and Saini. The posting describes calculations carried out for a collapsing spherical mass shell, whose radius approaches its own Schwarzschild radius. The metric outside the shell, with radius larger than rS, is assumed to be the Schwarzschild metric. In the interior of the shell the metric would be the Minkowski metric. The system considered is a second quantized massless scalar field. One can calculate the Hamiltonian of the radiation field in terms of eigenmodes of the kinetic and potential parts, and by canonical quantization the Schrödinger equation for the eigenmodes reduces to that for a harmonic oscillator with time dependent frequency. Solutions can be developed in terms of solutions of the time-independent harmonic oscillator. The average value of the photon number turns out to approach that associated with a thermal distribution, irrespective of initial values, at the limit when the radius of the shell approaches its blackhole radius. The temperature is the Hawking temperature. This is of course a highly interesting result and should reflect the fact that the Minkowski vacuum looks, from the point of view of an accelerated system, to be in thermal equilibrium. Manifest unitarity is just what one expects. The authors assign a density matrix to the state in the harmonic oscillator basis. Since the state is pure, the density matrix is just a projector to the quantum state, since the components of the density matrix are products of the coefficients characterizing the state in the oscillator basis (there are a couple of typos in the formulas; the reader certainly notices them). In Hawking's original argument the non-diagonal cross terms are neglected and one obtains a non-pure density matrix. 
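The purity point can be checked numerically in a two-mode toy basis. This is a generic linear-algebra illustration, not the authors' actual mode functions: for the full pure-state density matrix Tr ρ² = 1, while dropping the non-diagonal cross terms in Hawking's manner yields Tr ρ² < 1, i.e. a non-pure matrix.

```python
import numpy as np

# Toy pure state |psi> = c0|0> + c1|1> in an oscillator basis (coefficients arbitrary)
c = np.array([0.8, 0.6j])
c = c / np.linalg.norm(c)

rho = np.outer(c, c.conj())          # density matrix of the pure state: a projector
purity_full = np.trace(rho @ rho).real
assert abs(purity_full - 1.0) < 1e-12   # pure state: Tr rho^2 = 1

# Neglect the non-diagonal cross terms, as in Hawking's original argument
rho_decohered = np.diag(np.diag(rho))
purity_dec = np.trace(rho_decohered @ rho_decohered).real
print(purity_dec)  # 0.5392 < 1: the truncated matrix describes a mixed state
```

The same two lines of algebra are all that is behind the "manifestly unitary" claim: as long as the cross terms are kept, no purity is lost.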
The approach of the authors is of course correct since they consider only the situation before the formation of the horizon. Hawking considers the situation after the formation of the horizon and assumes some unspecified process taking the non-diagonal components of the density matrix to zero. This decoherence hypothesis is one of the strange figments of insane theoretical imagination which plagues recent day theoretical physics. The authors mention as a criterion for the purity of the state the condition that the square of the density matrix has trace equal to one. This states that the density matrix is an N-dimensional projector. The criterion alone does not however guarantee the purity of the state for N>1. This is clear from the fact that the entropy is in this case non-vanishing and equal to log(N). I notice this because negentropic entanglement in TGD framework corresponds to the situation in which the entanglement matrix is proportional to a unit matrix (that is, a projector). For this kind of states the number theoretic counterpart of Shannon entropy makes sense and gives negative entropy, meaning that the entanglement carries information. Note that unitary 2-body entanglement gives rise to negentropic entanglement. The authors inform that Hawking used Bogoliubov transformations between the initial Minkowski vacuum and the final Schwarzschild vacuum at the end of the collapse, which looks like a thermal distribution with Hawking temperature from the Minkowski space point of view. I think that here comes an essential physical point. The question is about the relationship between two observers - one might call them the observer falling into the blackhole and the observer far away approximating space-time with Minkowski space. If the latter observer traces out the degrees of freedom associated with the region below the horizon, the outcome is a genuine density matrix and information loss. 
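The number theoretic counterpart of Shannon entropy referred to here can be made concrete with a short sketch. The definition S_p = −Σ_k p_k log |p_k|_p, with |·|_p the p-adic norm, is the one used in the TGD writings; the choice n = 12 below is just an example. For the entanglement probabilities p_k = 1/n defined by an n-dimensional projector, S_p is negative exactly when p divides n, and the most negative value is obtained for the prime whose largest power divides n.

```python
import math

def p_adic_valuation(m, p):
    """Largest k such that p^k divides the positive integer m."""
    k = 0
    while m % p == 0:
        m //= p
        k += 1
    return k

def number_theoretic_entropy(n, p):
    """S_p = -sum_k p_k log|p_k|_p for p_k = 1/n (n-dim projector entanglement).

    Since |1/n|_p = p^{v_p(n)}, this reduces to S_p = -v_p(n) * log(p),
    which is negative (negentropy) if and only if p divides n."""
    return -p_adic_valuation(n, p) * math.log(p)

n = 12  # 12 = 2^2 * 3
print(number_theoretic_entropy(n, 2))  # -2 log 2 ~ -1.386 (negentropy)
print(number_theoretic_entropy(n, 3))  # -log 3   ~ -1.099
print(number_theoretic_entropy(n, 5))  # 0: p = 5 does not divide 12
# The ordinary Shannon entropy log(12) is positive; the p-adic variant singles
# out p = 2 (largest prime power 2^2 = 4 dividing 12) as the optimal prime.
```

This is the sense in which a projector-type entangled state "suggests" its own p-adic prime: the prime maximizing the negentropy is the one whose largest power divides n.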
This point is not discussed in the article, and the authors state that their next project is to look at the situation after the spherical shell has reached the Schwarzschild radius and the horizon is born. One might say that all that is done concerns the system before the formation of the blackhole (if it is formed at all!). Several poorly defined notions arise when one tries to interpret the results of the calculation.

1. What do we mean by observer? What do we mean by information? For instance, the authors define information as the difference between maximum entropy and real entropy. Is this definition just an ad hoc way to get some well-defined number christened as information? Can we really reduce the notion of information to thermodynamics? Shouldn't we be very careful in distinguishing between thermodynamical entropy and entanglement entropy? A sub-system possessing entanglement entropy with its complement can be purified by seeing it as a part of the entire system. This entropy relates to a pair of systems. Thermal entropy can be naturally assigned to an average representative of an ensemble and is a single-particle observable.

2. A second list of questions relates to quantum gravitation. Is the blackhole really a relevant notion, or just a singular outcome of a theory exceeding its limits? Does something deserving to be called blackhole collapse really occur? Is quantum theory in its recent form enough to describe what happens in this process or its analog? Do we really understand the quantal description of gravitational binding? What can TGD say about blackholes?
The usual argument of the string theory hegemony is that there are no competing scenarios, so that superstrings are the only "known" interesting approach to quantum gravitation (knowing in the academic sense is not at all the same thing as knowing in the naive layman sense and involves a lot of sociological factors transforming actual knowing to sociological unknowing: in some situations these sociological factors can make a scientist practically blind, deaf, and - as it looks - brainless!). I dare however claim that TGD represents an approach which leads to a new vision challenging a long list of cherished notions associated with blackholes. To my view blackhole science crystallizes a huge amount of conceptual sloppiness. People can calculate but are not so good at conceptualizing. Therefore one must start the conceptual cleaning from fundamental notions such as information, the notions of time (experienced and geometric), observer, etc. In the attempt to develop TGD from a bundle of ideas to a real theory I have been forced to carry out this kind of distillation, and the following tries to summarize the outcome.

1. TGD provides a fundamental description for the notions of observer and information. Observer is replaced with "self", identified in ZEO as a sequence of quantum jumps occurring at the same boundary of the CD and leaving it, and the part of the zero energy state at it, fixed, whereas the second boundary of the CD is delocalized: one has a superposition of CDs for which the average distance between the tips increases. This gives rise to the experienced flow of time and its correlation with the flow of geometric time. The average size of the CDs simply increases, and this means that the experienced geometric time increases. Self "dies" as the first state function reduction to the opposite boundary takes place, and a new self assignable to it is born.

2. Negentropy Maximization Principle favors the generation of entanglement negentropy.
For states with a projection operator as density matrix, the number theoretic negentropy is possible for primes dividing the dimension N of the projection and is maximal for the largest prime power factor of N. The second law is replaced with its opposite, but for negentropy, which is a two-particle observable rather than a single-particle observable like thermodynamical entropy. The second law follows at the ensemble level from the non-determinism of the state function reduction alone.

The notions related to the blackhole are also in need of profound reconsideration.

1. The blackhole disappears in the TGD framework as a fundamental object and is replaced by a space-time region having Euclidian signature of the induced metric, identifiable as a wormhole contact and defining a line of a generalized Feynman diagram (here "Feynman" could be replaced with "twistor" or "Yangian", something even more appropriate). The blackhole horizon is replaced by the 3-D light-like region defining the orbit of the wormhole throat, having a metric degenerate in the 4-D sense with signature (0,-1,-1,-1). The orbits of wormhole throats are carriers of various quantum numbers, and the sizes of their M4 projections are of order CP2 size in elementary particle scales. This is why I refer to these regions also as light-like parton orbits. The wormhole contacts involved connect two space-time sheets with Minkowskian signature, and stability requires that the wormhole contacts carry monopole magnetic flux. This demands at least two wormhole contacts to get closed flux lines. Elementary particles are this kind of pairs, but also multiples are possible, and valence quarks in baryons could be one example.

2. The connection with the GRT picture could emerge as follows. The radial component of the Reissner-Nordström metric associated with electric charge can be deformed slightly at the horizon to transform the horizon to a light-like surface.
In the deep interior CP2 would provide a gravitational instanton solution to the Maxwell-Einstein system with cosmological constant, thus having Euclidian metric. This is the nearest to the TGD description that one can get within the GRT framework, obtained from TGD at asymptotic regions by replacing the many-sheeted space-time with a slightly deformed region of Minkowski space and summing the gravitational fields of the sheets to get the gravitational field of the M4 region. All physical systems have space-time sheets with Euclidian signature analogous to a blackhole. The analog of the blackhole horizon provides a very general definition of "elementary particle".

3. Strong form of general coordinate invariance is a central piece of TGD and implies strong form of holography, stating that partonic 2-surfaces and their 4-D tangent space data should be enough to code for quantum physics. The magnetic flux tubes and fermionic strings assignable to them are however essential. The localization of induced spinor fields to string world sheets follows from the well-definedness of em charge and also from number theoretical arguments as well as the generalization of twistorialization from D=4 to D=8. One also ends up with the analog of AdS/CFT duality applying to the generalization of conformal invariance in the TGD framework. This duality states that one can describe the physics in terms of Kähler action and related bosonic data or in terms of Kähler-Dirac action and related data. In particular, Kähler action is expressible as string world sheet area in the effective metric defined by the Kähler-Dirac gamma matrices. Furthermore, gravitational binding is describable by strings connecting partonic 2-surfaces. The hierarchy of Planck constants is absolutely essential for the description of gravitationally bound states in terms of gravitational quantum coherence in macroscopic scales. The proportionality of the string area in the effective metric to 1/heff2, heff = n×h = hgr = GMm/v0, is absolutely essential for achieving this.
If the stringy action were the ordinary area of the string world sheet, as in string models, only gravitational bound states with size of order Planck length would be possible. Hence TGD forces one to say that superstring models are on a completely wrong track concerning the quantum description of gravitation. Even standard quantum theory lacks something fundamental required by this goal. This something fundamental relates directly to the mathematics of extended superconformal invariance: these algebras allow an infinite number of fractal inclusion hierarchies in which the algebras are isomorphic with each other. This makes it possible to realize infinite hierarchies of quantum criticalities. As heff increases, some critical gauge degrees of freedom are promoted to genuine dynamical degrees of freedom, but the system is still critical, albeit in a longer scale.

4. A naive model for the TGD analog of a blackhole is as a macroscopic wormhole contact surrounded by particle wormhole contacts, whose throats are connected to the throats of the large wormhole contact by flux tubes and strings. The macroscopic wormhole contact would carry magnetic charge equal to the sum of those associated with the elementary particle wormhole throats.

5. What about blackhole collapse and blackhole evaporation if blackholes are replaced with wormhole contacts with Euclidian signature of metric? Do they have any counterparts in TGD? Maybe! Any phase transition increasing heff = hgr would occur spontaneously as a transition to lower criticality and could be interpreted as the analog of blackhole evaporation. The gravitationally bound object would just increase in size. I have proposed that this phase transition has happened for Earth (Cambrian explosion) and increased its radius by a factor of 2. This would explain the strange finding that the continents seem to fit nicely together if the radius of Earth is one half of its present value.
These phase transitions would be the quantum counterpart of smooth classical cosmic expansion. The phase transition reducing heff would not occur spontaneously, and in living systems metabolic energy would be needed to drive it. Indeed, from the condition that heff = hgr = GMm/v0 increases as M and v0 change, also the gravitational Compton length Lgr = hgr/m = GM/v0, defining the size scale of the gravitational object, increases, so that the spontaneous increase of hgr means an increase of size.

Does TGD predict any process resembling blackhole collapse? In Zero Energy Ontology (ZEO) state function reductions occurring at the same boundary of the causal diamond (CD) define the notion of self possessing an arrow of time. The first quantum state function reduction at the opposite boundary is eventually forced by Negentropy Maximization Principle (NMP) and induces a reversal of geometric time. The expansion of an object with a reversed arrow of geometric time with respect to the observer looks like a collapse. This is indeed what the geometry of the causal diamond suggests.

6. The role of strings (and the magnetic flux tubes with which they are associated) in the description of gravitational binding (and possibly also other kinds of binding) is crucial in the TGD framework. They are present in arbitrarily long length scales, since the value of the gravitational Planck constant heff = hgr = GMm/v0, where v0 (v0/c < 1) has dimensions of velocity, can be huge compared with the ordinary Planck constant. This implies macroscopic quantum gravitational coherence, and the fountain effect of superfluidity could be seen as an example of this. That the presence of flux tubes and strings serves as a correlate for quantum entanglement present in all scales is highly suggestive. This entanglement could be negentropic, and by NMP it could be transferred but not destroyed. The information would be coded in the relationship between two gravitationally bound systems, and instead of entropy one would have enormous negentropy resources.
Whether this information can be made conscious is a fascinating problem. Could one generalize the interaction-free quantum measurement so that it would give information about this entanglement? Or could the transfer of this information make it conscious? The superstring camp has also become aware of possible geometric and topological correlates of entanglement. The GRT based proposal relies on wormhole connections. The much older TGD based proposal, applied systematically in quantum biology and the TGD inspired theory of consciousness, identifies magnetic flux tubes and the associated fermionic string world sheets as correlates of negentropic entanglement.
http://mathoverflow.net/questions/23877/a-question-on-a-davis-complex-of-a-coxeter-group
# A question on a Davis complex of a Coxeter group

Let us have a look at p. 64 of M. Davis' book "The Geometry and Topology of Coxeter Groups". The discussion preceding Definition 5.1.3 shows that $\mathcal{U}(G, X)/G$ is homeomorphic to $X$. Theorem 7.2.4 says that $\mathcal{U}(W, K)$ is $W$-equivariantly homeomorphic to the Davis complex $\Sigma$. So, $\Sigma/W$ is homeomorphic to $K$. $K$ is the cone on the barycentric subdivision of the nerve $L$. $L$ can have the topological type of any polyhedron. So $K$ can be a cone on any polyhedron (up to homeomorphism). But the action of $W$ on $\Sigma$ is cocompact (p. 4, bottom). So $\Sigma/W$ is compact, i.e., $K$ is compact. So a cone on any polyhedron is compact. What's wrong?

- Without having looked closely (I don't have the book to hand), I guess: the action of W on $\Sigma$ is cocompact if and only if W is finitely generated, which is true if and only if L is compact (which is true if and only if the cone on L is compact). – HJRW May 7 '10 at 17:11
- Actually, my first statement (the action of W on $\Sigma$ is cocompact if and only if W is fg) is probably only true for freely indecomposable W. What I'm trying to say is that several of these statements probably implicitly assume that W is finitely generated. – HJRW May 7 '10 at 17:19
- OK, I stand by what I said first time round. The action of W on $\Sigma$ is cocompact if and only if W is fg. (Otherwise, $\Sigma$ isn't even locally compact.) – HJRW May 7 '10 at 17:47
- But doesn't $W$ act simply transitively on the vertex set of $\Sigma$, and hence collapse some vertices of $K$ into a point in $\Sigma/W$? Why is $K$ then a strict fundamental domain? – Kestutis Cesnavicius May 7 '10 at 18:30
- No, I don't think W collapses any vertices of K. Here's the example I have in mind. Let L be a disjoint union of countably many points. Then W is the free product of countably many copies of Z/2, and $\Sigma$ is a locally countably-infinite tree. Each generator is a reflection in an edge of $\Sigma$. Topologically, $\Sigma/W$ is indeed the cone on L; from another point of view, it's the union of a vertex of $\Sigma$ with all the incident half-edges - in other words, a fundamental domain. Does that help? – HJRW May 7 '10 at 18:39
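The smallest finitely generated case makes HJRW's resolution concrete. Here is that example written out (a sketch in my own notation, not taken from Davis's book):

```latex
\[
  L=\{p,q\} \text{ (no edge)},\qquad
  W=\langle s,t\mid s^{2}=t^{2}=1\rangle\cong D_{\infty},\qquad
  K=\operatorname{cone}(L)\cong[0,1],
\]
\[
  \Sigma\cong\mathbb{R},\qquad
  \Sigma/W\;\cong\;[0,1]\;\cong\;K
  \quad\text{(compact, as it must be),}
\]
```

with $s$ and $t$ acting as the reflections in the endpoints of the fundamental chamber $[0,1]$. Here $W$ is finitely generated, $L$ is compact, and cocompactness holds; for $L$ an infinite discrete set, $W$ is not finitely generated, $\Sigma$ is not locally compact, and the quotient $\Sigma/W\cong K$ is a non-compact cone, so there is no contradiction.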
http://www.qfak.com/education_reference/science_mathematics/?id=2988587
# Can velocity be zero when acceleration is non-zero?

Best answer: yes. At the highest position of the flight of a body the velocity is zero but the acceleration isn't.

#1 May be, or may not be.

#2 YES! But not permanently, only momentarily.

#3 Yes. This may sound crazy, but if a car was sliding down a hill (let's say at -5 km/h) and the driver pressed down on the accelerator, the car would accelerate, say at a steady rate, from -5 km/h to 5 km/h. If you drew a graph you could see a point at which the speed was zero. However, acceleration cannot be calculated from a single point, and then the answer could be no. It's interesting. Where did you get the question from?

#4 That's impossible. Velocity is simply speed with a direction, and acceleration is the rate of change of velocity. If you use calculus, differentiating a displacement-time graph gives you the velocity-time graph as the first derivative, and taking the second derivative gives you the acceleration-time graph. So we can clearly see that the three are closely linked, i.e. speed, velocity and acceleration. If velocity is zero, speed is zero; as I mentioned above, velocity is just speed in a specified direction. So in the end a zero velocity means the object is not moving, therefore there is no change in speed, therefore zero acceleration. Displacement is also zero if velocity is zero.

#5 Yes... For example: a car travelling at a velocity of -10 m/s at time = 0 s is accelerating at 10 m/s^2. At time = 1 s, the velocity would be 0, but the car would still be accelerating at 10 m/s^2. So for this scenario, the car would be decelerating from a SPEED of 10 m/s at time = 0 s, moving to the left for one second, then would change direction at time = 1 s and start to move to the right. Draw a graph for this... you will see that at time = 1 s the velocity is zero.

#6
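The last answer's numerical example can be checked directly. A minimal sketch (plain Python, values taken from that answer): with constant acceleration a = 10 m/s² and initial velocity v0 = -10 m/s, the velocity passes through zero at t = 1 s while the acceleration never vanishes.

```python
v0 = -10.0   # initial velocity, m/s (moving left)
a = 10.0     # constant, non-zero acceleration, m/s^2

def velocity(t):
    """Velocity under constant acceleration: v(t) = v0 + a*t."""
    return v0 + a * t

t_zero = -v0 / a          # instant at which the velocity vanishes: 1 s
print(velocity(0.0))      # -10.0 m/s: moving left
print(velocity(t_zero))   # 0.0 m/s: momentarily at rest, yet a = 10 m/s^2
print(velocity(2.0))      # 10.0 m/s: now moving right
```

The zero of v(t) is a single instant, which is exactly the "momentarily" of answer #2.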
http://4tex.ntg.nl/4tex5/4project/4p-options.html
# Options

You can change any of the parameters used above to suit your personal needs. Just note that all the parameters above are lists of (La)TeX commands separated by a semicolon (";"). For instance, if we changed the \chapter macro into \hoofdstuk we could easily adjust ChapterMarks to:
```
ChapterMarks=\hoofdstuk;\chapter;\appendix;\nonumchapter;\bibliography;
```
You can use the "Options" tabsheet to configure the 4Project parameters. Note that for all the TeX formats you have (specified in the US_FRM.LST file) you can specify different parameters used/needed by the 4Project program. This can be seen in the "Options" tab. The first combobox, "Format", shows you all your formats; you can choose one of them and associate it with the second combobox. The second combobox, "Project", specifies sections in the 4TEX.INI file that store the parameters for 4Project. You can either select one of the already available sections or add/type a new one. After selecting a format and INI section you can view/edit/change the associated parameters by clicking on one of the buttons indicating the 4Project parameters in the 4TEX.INI file. After clicking one of the buttons the parameter is shown in a memo field, and each line in the memo contains one TeX command (so you do not need to type the semicolon). The combination of TeX format and 4TEX INI section is stored in the file US_PROJ.LST. When starting 4Project the 4PAR file of the main file will be read, and for the specified format the correct INI section is searched for in the US_PROJ.LST file. If not specified, the LATEX section will be used. When you have edited/changed a 4Project parameter you can press the "Apply" button to save it, or press the "Abort" button to discard all the changes. Just like in 4TeX, the user-interface language of 4Project can be changed on the fly. From the "Language" field you can select a language.

With the "Use scaled fonts for 4Project screens" checkbox you can choose whether or not the font used by 4Project is scaled every time you scale a 4Project window.
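To make the pieces concrete, a 4TEX.INI section holding 4Project parameters might look like the following. This is a hypothetical sketch: only the ChapterMarks key, the LATEX section name, and the semicolon-separated list format are taken from the text above; the US_PROJ.LST pairing syntax shown is an assumption for illustration.

```
; 4TEX.INI -- one section per 4Project configuration (sketch)
[LATEX]
ChapterMarks=\chapter;\appendix;\nonumchapter;\bibliography;

; US_PROJ.LST -- pairs a TeX format with the INI section to use
; (the exact syntax here is an assumption)
latex=LATEX
```

In the "Options" tab the memo field would show this list one command per line, and "Apply" would write it back as a single semicolon-separated value.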
http://www.win-vector.com/blog/2015/06/what-is-a-good-sharpe-ratio/
# What is a good Sharpe ratio?

We have previously written that we like the investment performance summary called the Sharpe ratio (though it does have some limits). What the Sharpe ratio does is: give you a dimensionless score to compare similar investments that may vary both in riskiness and returns without needing to know the investor's risk tolerance. It does this by separating the task of valuing an investment (which can be made independent of the investor's risk tolerance) from the task of allocating/valuing a portfolio (which must depend on the investor's preferences). But what we have noticed is that nobody is willing to honestly say what a good value for this number is. We will use the R analysis suite and Yahoo finance data to produce some example real Sharpe ratios here, so you can get a qualitative sense of the metric.

"What is a good Sharpe ratio" was a fairly popular query in our search log (until search engines stopped sharing the incoming queries with mere bloggers such as myself). When you do such a search you see advice of the form:

… a ratio of 1 or better is considered good, 2 and better is very good, and 3 and better is considered excellent …

Some sources of this statement include:

Reading these together you see a bit of a content-free echo chamber. Remember: on the web, when you see the exact same answer again and again it is more likely due to copying than due to authoritativeness. The last reference indicates a part of the problem: once somebody claims some specific number (such as 1) is a middling Sharpe ratio, no one dares call any smaller number good (for fear of looking weak). One also wonders if "2 is good" is some sort of confounding interpretation of the Sharpe ratio as a Fisher style Z statistic (which uses the same ratio of mean over deviance).
The point being: the rule of thumb "two standard deviations has a two-sided significance of 0.0455" falls fairly close to the heavily ritualized p-value of 0.05. The correct perspectives about the Sharpe ratio are a bit more nuanced:

• Morningstar classroom: How to Use the Sharpe Ratio

Of course, the higher the Sharpe ratio the better. But given no other information, you can't tell whether a Sharpe ratio of 1.5 is good or bad. Only when you compare one fund's Sharpe ratio with that of another fund (or group of funds) do you get a feel for its risk-adjusted return relative to other funds.

In fact it is Morningstar that gave a specific range for annual returns (around 0.3) that I used in my article Betting with their money (though now I am not sure I have enough context to be sure if the number they gave was a real example or just notional). The theory of the Sharpe ratio is: if you have access to the ability to borrow or lend money, then of two similar investments you should always prefer the one with the higher Sharpe ratio. So the Sharpe ratio is definitely used for comparison. When I was in finance I used the Sharpe ratio for comparison, but I didn't have a Sharpe ratio goal. Reading from a primary source we see that estimating the Sharpe ratio of a particular investment at a particular time depends on at least the choice of:

• Investment time frame: are we talking about daily, monthly, quarterly, or annual returns? Changing scale changes returns (daily returns compound about 365 times to get yearly returns!) and changes deviation (theoretically the deviation of yearly returns is around `sqrt(365)` times that of daily returns). The Sharpe ratio is the ratio of these two quantities, and they do not vary in similar ways as we change scale.

• Choice of "risk free" reference returns.
“Return” in the Sharpe ratio is actually defined as “excess return over a chosen risk-free investment.” Choose a comparison investment with low returns and you artificially look good. • Length of data used to estimate empirical variance (as we are talking about Ex Post Sharpe Ratio, which means we don’t have a theoretical variance to use). In theory (for normal data or data with bounded theoretical variance) variance/deviation estimates should stabilize at moderate window sizes. Picking a too small window may let you avoid some rare losses and display an elevated Sharpe ratio. Picking a too large window may let in data from markets climates not relevant to the current market. So without holding at least these three choices constant, it doesn’t make a lot of sense to compare. We emphasize because the Sharpe ratio itself varies over time (even with the above windowing) it in fact does not strictly make sense to talk about “the Sharpe ratio of an investment.” Instead you must consider “the Sharpe ratio of an investment at a particular time” or “the distribution of the Sharpe ratio of an investment over a particular time interval.” This means if you want to estimate a Sharpe ratio for an investment you at least must specify an additional time scale or smoothing window to average over. For example: below is the Sharpe ratio of the S&P500 index annual returns using a 500 day window to estimate variance/deviation using the 10-year US T-note interest rate as the risk-free rate of return (note we are using the interest rate as the risk-free return, we are not using the returns you would see from buying and selling T-notes). Note: the choice of “risk free” investment here is a bit heterodox. Notice two things: • The ratio is all over the map, we can call the S&P Sharpe ratio just about any value between -2 and 2 by picking the right 5 year period to consider “typical.” The mean over the period graphed (1960 through 2015) is 0.3 and the median is 0.17. 
• The ratio spends a lot of time well below 1 over this history.

For our betting article we needed a Sharpe ratio on a 10 day scale. Here is the Sharpe ratio of the S&P500 index 10 day returns, using a 500 day window to estimate variance/deviation and the 10-year US T-note interest rate as the risk-free rate of return:

Over the graphed time interval the upper quartile value is 0.1. So the S&P 10 day return Sharpe ratio spends 75% of its time below 0.1. Thus the theoretical 10 day Sharpe ratio of 0.18 in "Betting with their money" is in fact large. Though we have found this calculation is sensitive to the length of the window used to estimate variance (for example using a window of 30 days gives us mean: 0.08, median: 0.075, 3rd quartile 0.45).

And for fun here is a similar Sharpe ratio calculation for the PIMCO TENZ bond fund. This just confirms the last few years have not been good for US bonds.

Note: it is traditional to use very low interest rate instruments as the "safe comparison" in the Sharpe ratio. So using 10-year T-note interest rates gives an analysis that is a bit pessimistic (and also ascribes the T-note variance to the instrument being scored). However, the "safe comparison" is really only used in the Sharpe portfolio argument as the rate you can borrow and/or lend money at (which is not in fact risk-free in the real world). So there is some value in using an easy to obtain, realistic "boring investment" as a proxy for the "risk-free return rate." Ignoring the risk-free rate in the Betting with their money article is also not strictly correct (but also something Sharpe ignored for a while), but given the scale of potential wins and losses in that set-up it is not going to cause significant issues.

Basically remember this: there are a lot of analyst-chosen details in estimating a Sharpe ratio. One of the biggest ones you can fudge is the estimate of deviation/variance (be it theoretical/Ex-Ante or Ex-Post).
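The graphs in this post come from the R code linked at the end. As a language-agnostic sketch of the calculation being discussed, here is the ex post Sharpe ratio with the three analyst choices made explicit parameters (period scale, risk-free rate, and the window of returns supplied). This is Python rather than the post's R, and the 252-trading-day annualization factor and the sample data are illustrative assumptions, not values from the post.

```python
import statistics

def sharpe_ratio(returns, risk_free, periods_per_year=252):
    """Ex post Sharpe ratio over one window of per-period returns.

    returns: per-period (e.g. daily) returns over the chosen window.
    risk_free: per-period risk-free return used to form excess returns.
    The result is annualized by sqrt(periods_per_year); the mean and the
    deviation scale differently with the period, which is exactly why the
    choice of time frame matters so much.
    """
    excess = [r - risk_free for r in returns]
    mean = statistics.fmean(excess)
    sd = statistics.stdev(excess)   # empirical (sample) deviation
    return (mean / sd) * periods_per_year ** 0.5

# Illustrative daily returns; a different window length, risk-free rate,
# or annualization choice would give a different ratio for the same data.
daily = [0.001, -0.002, 0.003, 0.0, 0.002]
print(sharpe_ratio(daily, risk_free=0.0001))
```

Swapping in a longer or shorter `returns` window, or a different `risk_free` series, moves the number around just as the 500-day-vs-30-day comparison above shows.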
I would say very high Sharpe ratios are more likely evidence of underestimating the deviation/variance and the reference return process than evidence of actual astronomical risk-adjusted returns.

The complete R code to produce these graphs from downloaded finance data is given here.

## 6 thoughts on “What is a good Sharpe ratio?”

1. Highgamma says:

Nice post. I enjoyed it. I would make one correction. The adjustment from daily to annual Sharpe ratio is by the square root of 252 (or so), the number of trading days in a year. You are calculating the standard deviation using only trading-day data (not days for which the market is closed), so you use the number of trading days in the year for the adjustment. (Think of it as sampling from a continuous-time process.)

1. Highgamma, thank you, you are right: the theoretical standard deviation correction should use the number of actual trading days, while the compounding of rates remains date-based. Fortunately I never used these empirical transforms in the article (I calculated returns for each time period by lagging, which has its own issues).

2. Hi John, thanks for sharing your thoughts on the Sharpe ratio. I just wanted to add that besides the choice of the scale of returns (daily, monthly…), the risk-free rate, and the length of data used, even a simple transformation from discrete returns to log returns, often used for their easier algebra, affects the Sharpe ratio (i.e. the Sharpe ratio computed on the log returns of a price is usually quite different from the Sharpe ratio on the discrete returns of the same price…). Did Bill Sharpe ever clarify if the mean and the std dev of the returns are meant to be computed on discrete or log returns? So far, from a practical standpoint it seems to me that a reasonable suggestion is to compare risk and returns across different dimensions and using several different measures and ratios (although each of them has its own drawbacks).
However I would definitely appreciate it if somebody comes up with a better idea!

1. Francesco, I agree. I wish just one more step in the calculation was shown (the formation of returns from close prices) to confirm what is meant by return. I know log-returns are a legit metric, but I am pretty sure the standard Sharpe ratio calculation depends on returns that are of the form (finalPrice/initialPrice − 1). The point being it is convenient to have no effect be a zero return (not a 1), and the borrowing/loaning portfolio arguments that justify the Sharpe ratio assume you are working in money units (not log-money units). In the R code I tried some games (putting zero as the risk-free comparable). I also thought about (but didn’t bother with after seeing the zero result) computing the variance only on the instrument (and not on the instrument minus the risk-free reference). But it looks like there is a lot of sensitivity to what sort of window you are using to estimate variance (in particular, neglect of serial correlation).

3. All specific questions of how things were calculated and how the graphs are made are answered in the “.Rmd” file in the Github archive ( https://github.com/JohnMount/Sharpe ). I also added examples of using zero or the Fed prime rate as the rate of risk-free return.

Comments are closed.
https://proofwiki.org/wiki/Accuracy_of_Convergents_of_Convergent_Simple_Infinite_Continued_Fraction
# Accuracy of Convergents of Convergent Simple Infinite Continued Fraction

## Theorem

Let $C = (a_0, a_1, \ldots)$ be an infinite simple continued fraction in $\R$. Let $C$ converge to $x \in \R$. For $n \geq 0$, let $C_n = p_n/q_n$ be the $n$th convergent of $C$, where $p_n$ and $q_n$ are the $n$th numerator and denominator.

Then for all $n \geq 0$:

$\left\vert x - \dfrac{p_n}{q_n}\right\vert < \dfrac{1}{q_n q_{n+1}}$

## Proof

We show that either $x \in \left[C_n .. C_{n+1}\right]$ or $x \in \left[C_{n+1} .. C_n\right]$, so that the result follows from:

- Difference between Adjacent Convergents of Simple Continued Fraction
- Distance between Point of Real Interval and Endpoint is at most Length

#### Odd case

Let $n \geq 1$ be odd. We have $x = \displaystyle\lim_{k \to \infty} C_{2k}$.

For all $2k \geq n+1$, by:

- Even Convergents of Simple Continued Fraction are Strictly Increasing
- Even Convergent of Simple Continued Fraction is Strictly Smaller than Odd Convergent

we have $C_{n+1} \leq C_{2k} < C_n$. By Lower and Upper Bounds for Sequences, $x \in \left[C_{n+1} .. C_n\right]$.

#### Even case

Let $n \geq 0$ be even. We have $x = \displaystyle\lim_{k \to \infty} C_{2k+1}$.

For all $2k+1 \geq n+1$, by:

- Odd Convergents of Simple Continued Fraction are Strictly Decreasing
- Even Convergent of Simple Continued Fraction is Strictly Smaller than Odd Convergent

we have $C_n < C_{2k+1} \leq C_{n+1}$. By Lower and Upper Bounds for Sequences, $x \in \left[C_n .. C_{n+1}\right]$.

$\blacksquare$
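As a numerical illustration (not part of the theorem page), the convergents of $\sqrt 2 = [1; 2, 2, 2, \ldots]$ can be generated with the standard recurrences $p_n = a_n p_{n-1} + p_{n-2}$, $q_n = a_n q_{n-1} + q_{n-2}$ and checked against the bound:

```python
import math

def convergents(terms):
    """Yield (p_n, q_n) for the simple continued fraction
    [a0; a1, a2, ...] via p_n = a_n p_{n-1} + p_{n-2} and
    q_n = a_n q_{n-1} + q_{n-2}."""
    p_prev, p = 1, terms[0]
    q_prev, q = 0, 1
    yield p, q
    for a in terms[1:]:
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        yield p, q

# sqrt(2) = [1; 2, 2, 2, ...]
cs = list(convergents([1] + [2] * 12))
x = math.sqrt(2)
for (p, q), (_, q_next) in zip(cs, cs[1:]):
    assert abs(x - p / q) < 1 / (q * q_next)
print(cs[:4])  # [(1, 1), (3, 2), (7, 5), (17, 12)]
```

With only a dozen terms the denominators stay small enough that double-precision arithmetic verifies the inequality comfortably.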
https://electronics.stackexchange.com/questions/268331/pnp-and-npn-collector-emitter-naming
# PNP and NPN Collector Emitter naming

In an NPN circuit, the BCE arrangement is as follows: And for PNP: This seems confusing to me and to some other people. In an NPN transistor, current flows from the collector to the emitter, while in a PNP transistor, current flows from the emitter to the collector. Why is the direction different for these complementary transistors? I always remembered that the collector was where current is collected by the transistor, which then emits the current out of the emitter, but it doesn't work like that for the PNP transistor. Is there a reason why it was named like this? Is there an easy way to remember that NPN goes from C to E while PNP goes from E to C? This applies to FETs and MOSFETs as well.

• Different charge carriers based on the NPN or PNP material sandwich. – M D Nov 8, 2016 at 23:05
• The arrow represents the PN junction that is emitting the charge carriers. In one type, the carriers are holes (conventional current direction); in the other type, the carriers are electrons (opposite to conventional current flow). Remember holes and electrons flow in opposite directions; that may be what's causing the confusion. Nov 8, 2016 at 23:40
• The emitter is where the arrow is. I never had problems remembering it when I was learning about transistors. Also, the base-emitter diode is the forward biased one. Nov 8, 2016 at 23:49
• In an NPN transistor, the emitter arrow is Not Pointing iN. Nov 9, 2016 at 1:20
• Whoever downvoted this, can you explain why? I've seen people get confused by emitter and collector before on this website. Nov 15, 2016 at 0:32

To understand this, one has to understand what is going on inside a transistor. An NPN transistor has three sections of doped-semiconductor types: N-type semiconductor (doped with atoms that provide a free electron) on the edges, with P-type semiconductor (doped with atoms with one less electron than required, leaving a resultant 'positive' hole) in between.
Namely, an n-type -> p-type -> n-type semiconductor arrangement. PNP is a p-type -> n-type -> p-type semiconductor arrangement.

Typically, the emitter is the 'source' of energy, rather than acting as the output. The collector is the one that serves as the 'output' of current, not the emitter. So the notion that the collector 'collects' the current and the emitter outputs it is perhaps backwards. There is no 'collection' of current by the collector to be 'emitted' by the emitter per se. Rather, the collector is the part of the transistor that allows current to flow from the emitter (the source) if the base has a current in it.

Notice that the arrow is always between the emitter and the base. So what does the arrow in the emitter mean, especially if it is the source of electrons (negative charge if the emitter is n-type) or holes (positive charge if the emitter is p-type)? It shows the direction from the p-type to the n-type semiconductor (that is, the direction of forward bias). This shows the direction of the current between the emitter and the base (equivalently, the opposite of the direction of electron flow) for each type of transistor.

In an NPN transistor, current must flow into the base, and the emitter must be connected to ground (current flows towards the ground, or lower potential), hence the arrow points outward, showing that the current goes from the p-type to the n-type in the NPN transistor. On the other hand, the PNP transistor has the arrow pointing towards the base, showing the base is the n-type region, while the emitter is the p-type region.

Do not think of the arrow as the direction of the resultant current - the emitter is actually the 'source' of the energy. The arrow shows the direction from the p-region to the n-region of each individual transistor (between the emitter and the base). Hope this long explanation helped!

• Ah this makes sense. The emitter is where you emit the energy.
Without the energy, you can't collect it from the collector. Nov 9, 2016 at 0:42
http://physics.stackexchange.com/questions/132550/having-trouble-weighing-the-sun
# Having trouble weighing the sun

So I gather the way you (and Vera Rubin) calculate a galaxy's mass is by measuring a star's orbital velocity $v$ and its distance $R$ from the galactic center, and then plugging them into this equation derived from Newton's second law: $$M_{gal}=Rv^2/G$$ ($G=6.67\times10^{-11}$. Units of $v$ and $R$ are km/s and km, respectively.)

The value obtained for $M_{gal}$ this way famously disagrees with the value you would obtain by measuring the brightness of the galaxy, leading to the dark matter theory. I was playing around with some solar system data and found that if I calculate the mass of the sun by plugging this data into the equation above, the result is too low by 9 orders of magnitude: $\sim 1.98 \times 10^{21}$ kg instead of the actual solar mass (according to Wikipedia) of $1.98 \times 10^{30}$ kg. Is there some "dark vacuum" in the solar system? Or where have I gone wrong in the calculation?

Are there any datasets of $v$ and $R$ for stars in a particular galaxy out there that can be downloaded? I've heard that thousands of galaxies have now been observed to have stars "going too fast." Has any of that data been made available?

- Side note: the relation you cited is for circular orbits (a good approximation for many stars in the galaxy) orbiting a spherically symmetric mass. That many galaxies look like disks should give you pause when applying the formula. – Chris White Aug 26 '14 at 8:25 Other side note: Historically, it was very easy to determine relative distances $R$ and velocities $v$ in the solar system. It was much harder but doable to get absolute values for $R$ and $v$. It was harder still to get $GM_\odot$ independently, so really we know this last quantity only because of applying the formula you have. (And the most difficult measurement of all is $G$ itself, which is why we know $GM_\odot$ better than $M_\odot$.)
– Chris White Aug 26 '14 at 8:31

Ben, you made a typical freshman mistake in treating G as a number rather than as a quantity with dimensions. If you take the dimensionality of G into account, you'll arrive at 2*10^30 kg for all of the planets. Aside: Chris White is spot on in his comment about the gravitational parameter $\mu_{\odot}=GM_{\odot}$ versus the naive $G*M_{\odot}$. While astronomers can determine the Sun's gravitational parameter $\mu_{\odot}$ to ten places of accuracy, scientists know G (and hence $M_{\odot}$) to a paltry four places of accuracy. – David Hammen Aug 27 '14 at 6:38

$$M_{gal}=Rv^2/G$$

Here $G=6.67\times10^{-11}\ \mathrm{N \cdot m^2/kg^2}$, so the units of $v$ and $R$ must be m/s and m, not km/s and km.

You gave G in MKS (SI) units, so $R$ and $v$ must be in m and m/s. Since $1\ \mathrm{km} = 10^3\ \mathrm{m}$, plugging in km and km/s makes the result too small by a factor of $10^3 \times (10^3)^2 = 10^9$, which is exactly the order of magnitude you are missing. Using SI values for Earth's orbit:

$$\frac{1.5\times 10^{11}\ \mathrm{m} \times (3\times 10^4\ \mathrm{m/s})^2}{6.67\times 10^{-11}\ \mathrm{N \cdot m^2/kg^2}} \approx 2\times 10^{30}\ \mathrm{kg}$$

-

I think you are doing your math incorrectly if you get $10^{21}\ \mathrm{kg}$. $$M = \frac{Rv^2}{G}$$ Let's try Jupiter from your reference. $$M = \frac{(778 \times 10^6\,\mathrm{km})(13.1\,\tfrac{\mathrm{km}}{\mathrm{s}})^2}{G}$$ $$M = \frac{(7.78 \times 10^{11}\,\mathrm{m})(1.31 \times 10^4\,\tfrac{\mathrm{m}}{\mathrm{s}})^2}{6.6743 \times 10^{-11}\,\tfrac{\mathrm{m}^3}{\mathrm{kg\,s}^2}}$$ $$M = \frac{1.34 \times 10^{20}\,\tfrac{\mathrm{m}^3}{\mathrm{s}^2}}{6.6743 \times 10^{-11}\,\tfrac{\mathrm{m}^3}{\mathrm{kg\,s}^2}}$$ $$M = 2.00\times 10^{30}\,\mathrm{kg}$$

-
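The unit bookkeeping in these answers is easy to check with a short script; the helper name below is our own, and the rounded SI values match the ones used above:

```python
G = 6.674e-11  # m^3 kg^-1 s^-2 (SI), so R must be in meters and v in m/s

def central_mass(R, v):
    """Mass enclosed by a circular orbit of radius R (m) and
    orbital speed v (m/s), from v^2 = G*M/R."""
    return R * v**2 / G

# Earth's orbit in SI units: R ~ 1.5e11 m, v ~ 3.0e4 m/s
m_sun = central_mass(1.5e11, 3.0e4)
print(f"{m_sun:.2e} kg")  # about 2e30 kg, the solar mass

# Feeding km and km/s into an SI-valued G reproduces the
# question's nine-orders-of-magnitude error:
m_wrong = central_mass(1.5e8, 30.0)
print(m_sun / m_wrong)  # the missing factor of 10^9
```

The ratio of the two results is exactly $10^3 \times (10^3)^2 = 10^9$, the discrepancy the question reports.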
http://mathoverflow.net/questions/68233/simple-ore-extensions/81186
# Simple Ore extensions

Let $R[x;\sigma,\delta]$ be an Ore extension, where $R$ is an associative and unital ring and $\sigma : R\to R$ is a (not necessarily injective!) ring endomorphism. (In the literature it is often assumed to be injective). My question is the following: If $R[x;\sigma,\delta]$ is a simple ring, is $\sigma$ necessarily an injective map? Note that the answer is affirmative in the case when the maps $\sigma$ and $\delta$ commute. Does anyone know of an example of a simple Ore extension where $\sigma$ is NOT injective? - More generally, it is true if $\delta(\ker\sigma)=0$, for then $\ker\sigma$ is an ideal in the Ore extension. –  Mariano Suárez-Alvarez Jun 19 '11 at 21:26 Mariano, $\ker\sigma \subseteq R$ is never an ideal of the Ore extension, but your idea is correct. It can be made more general: It is true if $\delta(\ker\sigma)\subseteq \ker\sigma$, for then (put $A:=R[x;\sigma,\delta]$) the set $(\ker\sigma) A$ is a proper ideal of the Ore extension. (I like to think of $R[x;\sigma,\delta]$ as a free left $R$-module with basis $\{1,x,x^2,...\}$.) –  Johan Öinert Jun 19 '11 at 23:11 Oops, that's what I meant, actually :) –  Mariano Suárez-Alvarez Jun 20 '11 at 19:32 It is worth noting that $\ker \sigma$ does not have to be invariant under $\delta$. In fact there does not have to be any ideal that is invariant under both $\sigma$ and $\delta$. –  Johan Jun 23 '11 at 7:25 Johan, you obviously intended to write "non-trivial ideal" instead of "ideal". That was perhaps a stupid remark. While I'm at it, I might as well point out that in my previous comment I assumed that $\sigma$ was NOT injective. Otherwise $\ker\sigma = \{0\}$ is of course an ideal in the Ore extension, and the first sentence of my comment would be false. –  Johan Öinert Jun 23 '11 at 15:22 Actually, the argument in my earlier answer can be refined to show the following.
Proposition: Suppose $R$ is a reduced abelian ring, $\sigma:R\rightarrow R$ is a ring homomorphism and $\delta:R\rightarrow R$ is a $\sigma$-derivation. If the Ore extension $A=R[x,\sigma,\delta]$ is simple, then $\sigma$ has to be injective. Proof: Suppose $\sigma$ is not injective. Then we find a non-zero element $b\in\ker\sigma$. Because $R$ is reduced we know that all powers of $b$ are non-zero. Define $I=\{P\in A\mid \exists k\in \mathbb{N}:P\,b^k=0\}$. Then it is clear that $I$ is a left ideal of $A$, that it is a right $R$-submodule of $A$, that it does not contain $1$ and that $bx-\delta(b)\in I$. To show that $I$ is a non-trivial ideal, we only have to show that $Ix\subset I$. Take any $P\in I$, so $P\,b^k=0$ for some $k$. Now we see that $(P\, x)b^{k+1}=P\,b^k\delta(b)=0$ and hence $P\,x\in I$. QED - Just two naive constructions I thought about that do not give simple Ore extensions. We start with an algebraically closed field $F$ of characteristic $0$ and consider the ring of polynomials $R_0=F[y]$. As a homomorphism $\sigma_0:R_0\rightarrow R_0$, we use the evaluation at $0$, i.e. $\sigma_0(\sum_{i=0}^n a_iy^i)=a_0$. The $\sigma_0$-derivation is then given by $\delta_0(\sum_{i=0}^n a_iy^i)=\sum_{i=1}^n a_iy^{i-1}$. The Ore extension $R_0[x,\sigma_0,\delta_0]$ is not simple because $yx-1$ generates a non-trivial ideal. But we can do the following. Define $R=R_0[(y-a)^{-1}\mid 0\not=a\in F]$ to be the localization of $R_0$ with respect to all polynomials $P\in R_0$ with $P(0)\not=0$. The homomorphism $\sigma_0$ and the derivation $\delta_0$ extend uniquely to this localization, i.e. we find a unique homomorphism $\sigma:R\rightarrow R$ such that $\sigma\vert_{R_0}=\sigma_0$, and a unique $\sigma$-derivation $\delta:R\rightarrow R$ such that $\delta\vert_{R_0}=\delta_0$. Now the Ore extension $A=R[x,\sigma,\delta]$ is still not simple. Consider the set $I=\{b\in A\mid \exists k\in\mathbb{N}:by^k=0\}$.
This set is clearly a left ideal; it does not contain $1$ and it contains $yx-1$. It is also immediate that $I\cdot R=I$. In order to show that $I$ is a non-trivial ideal, we only have to show that $Ix\subset I$. Suppose $b\in I$, so $by^k=0$ for some $k$. Then we see that $(bx)y^{k+1}=by^k=0$ so $bx\in I$. -
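A small computational sketch (our own, not from the thread) of the first construction: take $R_0 = \mathbb{Q}[y]$ with $\sigma_0$ evaluation at $0$ and $\delta_0(P) = (P - P(0))/y$, and multiply in $A = R_0[x;\sigma_0,\delta_0]$ using the Ore relation $x\,r = \sigma_0(r)\,x + \delta_0(r)$. One sees $x\,y = 1$ while $y\,x \neq 1$, and $x\,(yx - 1) = 0$, which illustrates how left multiplication degenerates when $\sigma$ is not injective:

```python
from itertools import zip_longest

# R0 = Q[y]: coefficient lists [c0, c1, ...] meaning c0 + c1*y + ...
def sigma(r):  # evaluation at 0
    return r[:1]

def delta(r):  # (P - P(0)) / y
    return r[1:]

def r_add(a, b):
    return [x + y for x, y in zip_longest(a, b, fillvalue=0)]

def trim(p):  # drop trailing zero R0-coefficients of an A-element
    while p and not any(p[-1]):
        p.pop()
    return p

# A = R0[x; sigma, delta]: lists of R0-elements [r0, r1, ...]
# meaning r0 + r1*x + r2*x^2 + ...  Left multiplication by x uses
# x*(sum r_i x^i) = sum sigma(r_i) x^{i+1} + sum delta(r_i) x^i.
def x_times(P):
    out = [[] for _ in range(len(P) + 1)]
    for i, r in enumerate(P):
        out[i] = r_add(out[i], delta(r))
        out[i + 1] = r_add(out[i + 1], sigma(r))
    return trim(out)

y_elt = [[0, 1]]             # the element y
yx_minus_1 = [[-1], [0, 1]]  # y*x - 1

print(x_times(y_elt))        # [[1]] -> x*y = 1
print(x_times(yx_minus_1))   # []   -> x*(y*x - 1) = 0
```

The second print is consistent with the answer's observation that $yx-1$ lies in a non-trivial ideal of this Ore extension.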
https://openstax.org/books/university-physics-volume-1/pages/13-7-einsteins-theory-of-gravity
University Physics Volume 1

# 13.7 Einstein's Theory of Gravity

### Learning Objectives By the end of this section, you will be able to: • Describe how the theory of general relativity approaches gravitation • Explain the principle of equivalence • Calculate the Schwarzschild radius of an object • Summarize the evidence for black holes Newton’s law of universal gravitation accurately predicts much of what we see within our solar system. Indeed, only Newton’s laws have been needed to accurately send every space vehicle on its journey. The paths of Earth-crossing asteroids, and most other celestial objects, can be accurately determined solely with Newton’s laws. Nevertheless, many phenomena have shown a discrepancy from what Newton’s laws predict, including the orbit of Mercury and the effect that gravity has on light. In this section, we examine a different way of envisioning gravitation. ### A Revolution in Perspective In 1905, Albert Einstein published his theory of special relativity. This theory is discussed in great detail in Relativity, so we say only a few words here. In this theory, no motion can exceed the speed of light—it is the speed limit of the Universe. This simple fact has been verified in countless experiments. However, it has incredible consequences—space and time are no longer absolute. Two people moving relative to one another do not agree on the length of objects or the passage of time. Almost all of the mechanics you learned in previous chapters, while remarkably accurate even for speeds of many thousands of miles per second, begin to fail when approaching the speed of light. This speed limit on the Universe was also a challenge to the inherent assumption in Newton’s law of gravitation that gravity is an action-at-a-distance force. That is, without physical contact, any change in the position of one mass is instantly communicated to all other masses.
This assumption does not come from any first principle, as Newton’s theory simply does not address the question. (The same was believed of electromagnetic forces, as well. It is fair to say that most scientists were not completely comfortable with the action-at-a-distance concept.) A second assumption also appears in Newton’s law of gravitation, Equation 13.1. The masses are assumed to be exactly the same as those used in Newton’s second law, $\vec{F} = m\vec{a}$. We made that assumption in many of our derivations in this chapter. Again, there is no underlying principle that this must be, but experimental results are consistent with this assumption. In Einstein’s subsequent theory of general relativity (1916), both of these issues were addressed. His theory was a theory of space-time geometry and how mass (and acceleration) distort and interact with that space-time. It was not a theory of gravitational forces. The mathematics of the general theory is beyond the scope of this text, but we can look at some underlying principles and their consequences. ### The Principle of Equivalence Einstein came to his general theory in part by wondering why someone who was free falling did not feel his or her weight. Indeed, it is common to speak of astronauts orbiting Earth as being weightless, despite the fact that Earth’s gravity is still quite strong there. In Einstein’s general theory, there is no difference between free fall and being weightless. This is called the principle of equivalence. The equally surprising corollary to this is that there is no difference between a uniform gravitational field and a uniform acceleration in the absence of gravity. Let’s focus on this last statement. Although a perfectly uniform gravitational field is not feasible, we can approximate it very well. Within a reasonably sized laboratory on Earth, the gravitational field $\vec{g}$ is essentially uniform.
The corollary states that any physical experiments performed there have the identical results as those done in a laboratory accelerating at $\vec{a} = \vec{g}$ in deep space, well away from all other masses. Figure 13.28 illustrates the concept. Figure 13.28 According to the principle of equivalence, the results of all experiments performed in a laboratory in a uniform gravitational field are identical to the results of the same experiments performed in a uniformly accelerating laboratory. How can these two apparently fundamentally different situations be the same? The answer is that gravitation is not a force between two objects but is the result of each object responding to the effect that the other has on the space-time surrounding it. A uniform gravitational field and a uniform acceleration have exactly the same effect on space-time. ### A Geometric Theory of Gravity Euclidean geometry assumes a “flat” space in which, among the most commonly known attributes, a straight line is the shortest distance between two points, the sum of the angles of all triangles must be 180 degrees, and parallel lines never intersect. Non-Euclidean geometry was not seriously investigated until the nineteenth century, so it is not surprising that Euclidean space is inherently assumed in all of Newton’s laws. The general theory of relativity challenges this long-held assumption. Only empty space is flat. The presence of mass—or energy, since relativity does not distinguish between the two—distorts or curves space and time, or space-time, around it. The motion of any other mass is simply a response to this curved space-time. Figure 13.29 is a two-dimensional representation of a smaller mass orbiting in response to the distorted space created by the presence of a larger mass. In a more precise but confusing picture, we would also see space distorted by the orbiting mass, and both masses would be in motion in response to the total distortion of space.
Note that the figure is a representation to help visualize the concept. These are distortions in our three-dimensional space and time. We do not see them as we would a dimple on a ball. We see the distortion only by careful measurements of the motion of objects and light as they move through space. Figure 13.29 A smaller mass orbiting in the distorted space-time of a larger mass. In fact, all mass or energy distorts space-time. For weak gravitational fields, the results of general relativity do not differ significantly from Newton’s law of gravitation. But for intense gravitational fields, the results diverge, and general relativity has been shown to predict the correct results. Even in our Sun’s relatively weak gravitational field at the distance of Mercury’s orbit, we can observe the effect. Starting in the mid-1800s, Mercury’s elliptical orbit has been carefully measured. However, although it is elliptical, its motion is complicated by the fact that the perihelion position of the ellipse slowly advances. Most of the advance is due to the gravitational pull of other planets, but a small portion of that advancement could not be accounted for by Newton’s law. At one time, there was even a search for a “companion” planet that would explain the discrepancy. But general relativity correctly predicts the measurements. Since then, many measurements, such as the deflection of light of distant objects by the Sun, have verified that general relativity correctly predicts the observations. We close this discussion with one final comment. We have often referred to distortions of space-time or distortions in both space and time. In both special and general relativity, the dimension of time has equal footing with each spatial dimension (differing in its place in both theories only by an ultimately unimportant scaling factor). Near a very large mass, not only is the nearby space “stretched out,” but time is dilated or “slowed.” We discuss these effects more in the next section. 
### Black Holes Einstein’s theory of gravitation is expressed in one deceptively simple-looking tensor equation (tensors are a generalization of scalars and vectors), which expresses how a mass determines the curvature of space-time around it. The solutions to that equation yield one of the most fascinating predictions: the black hole. The prediction is that if an object is sufficiently dense, it will collapse in upon itself and be surrounded by an event horizon from which nothing can escape. The name “black hole,” which was coined by astronomer John Wheeler in 1969, refers to the fact that light cannot escape such an object. Karl Schwarzschild was the first person to note this phenomenon in 1916, but at that time, it was considered mostly to be a mathematical curiosity. Surprisingly, the idea of a massive body from which light cannot escape dates back to the late 1700s. Independently, John Michell and Pierre-Simon Laplace used Newton’s law of gravitation to show that light leaving the surface of a star with enough mass could not escape. Their work was based on the fact that the speed of light had been measured by Ole Rømer in 1676. He noted discrepancies in the data for the orbital period of the moon Io about Jupiter. Rømer realized that the difference arose from the relative positions of Earth and Jupiter at different times and that he could find the speed of light from that difference. Michell and Laplace both realized that since light had a finite speed, there could be a star massive enough that the escape speed from its surface could exceed that speed. Hence, light always would fall back to the star. Oddly, observers far enough away from the very largest stars would not be able to see them, yet they could see a smaller star from the same distance. Recall that in Gravitational Potential Energy and Total Energy, we found that the escape speed, given by Equation 13.6, is independent of the mass of the object escaping. 
Even though the nature of light was not fully understood at the time, the mass of light, if it had any, was not relevant. Hence, Equation 13.6 should be valid for light. Substituting c, the speed of light, for the escape velocity, we have

$$v_{\text{esc}} = c = \sqrt{\frac{2GM}{R}}.$$

Thus, we only need values for R and M such that the escape velocity exceeds c, and then light will not be able to escape. Michell posited that if a star had the density of our Sun and a radius that extended just beyond the orbit of Mars, then light would not be able to escape from its surface. He also conjectured that we would still be able to detect such a star from the gravitational effect it would have on objects around it. This was an insightful conclusion, as this is precisely how we infer the existence of such objects today. While we have yet to, and may never, visit a black hole, the circumstantial evidence for them has become so compelling that few astronomers doubt their existence. Before we examine some of that evidence, we turn our attention back to Schwarzschild’s solution to the tensor equation from general relativity. In that solution arises a critical radius, now called the Schwarzschild radius $(R_S)$. For any mass M, if that mass were compressed to the extent that its radius becomes less than the Schwarzschild radius, then the mass will collapse to a singularity, and anything that passes inside that radius cannot escape. Once inside $R_S$, the arrow of time takes all things to the singularity. (In a broad mathematical sense, a singularity is where the value of a function goes to infinity. In this case, it is a point in space of zero volume with a finite mass. Hence, the mass density and gravitational energy become infinite.) The Schwarzschild radius is given by

$$R_S = \frac{2GM}{c^2}. \tag{13.12}$$

If you look at our escape velocity equation with $v_{\text{esc}} = c$, you will notice that it gives precisely this result. But that is merely a fortuitous accident caused by several incorrect assumptions.
One of these assumptions is the use of the incorrect classical expression for the kinetic energy for light. Just how dense does an object have to be in order to turn into a black hole?

### Example 13.15

Calculate the Schwarzschild radius for both the Sun and Earth. Compare the density of the nucleus of an atom to the density required to compress Earth’s mass uniformly to its Schwarzschild radius. The density of a nucleus is about $2.3\times10^{17}\,\mathrm{kg/m^3}$.

#### Strategy

We use Equation 13.12 for this calculation. We need only the masses of Earth and the Sun, which we obtain from the astronomical data given in Appendix D.

#### Solution

Substituting the mass of the Sun, we have

$$R_S = \frac{2GM}{c^2} = \frac{2(6.67\times10^{-11}\,\mathrm{N\cdot m^2/kg^2})(1.99\times10^{30}\,\mathrm{kg})}{(3.0\times10^{8}\,\mathrm{m/s})^2} = 2.95\times10^{3}\,\mathrm{m}.$$

This is a diameter of only about 6 km. If we use the mass of Earth, we get $R_S = 8.85\times10^{-3}\,\mathrm{m}$. This is a diameter of less than 2 cm! If we pack Earth’s mass into a sphere with the radius $R_S = 8.85\times10^{-3}\,\mathrm{m}$, we get a density of

$$\rho = \frac{\text{mass}}{\text{volume}} = \frac{5.97\times10^{24}\,\mathrm{kg}}{\left(\frac{4}{3}\pi\right)(8.85\times10^{-3}\,\mathrm{m})^3} = 2.06\times10^{30}\,\mathrm{kg/m^3}.$$

#### Significance

A neutron star is the most compact object known—outside of a black hole itself. The neutron star is composed of neutrons, with the density of an atomic nucleus, and, like many black holes, is believed to be the remnant of a supernova—a star that explodes at the end of its lifetime. To create a black hole from Earth, we would have to compress it to a density thirteen orders of magnitude greater than that of a neutron star. This process would require unimaginable force. There is no known mechanism that could cause an Earth-sized object to become a black hole. For the Sun, you should be able to show that it would have to be compressed to a density only about 80 times that of a nucleus.
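The arithmetic in this example is easy to check. The sketch below (an editorial addition, not part of the textbook) reproduces the three numbers using Equation 13.12 and the density formula:

```python
# Sketch: reproducing the numbers in Example 13.15.
import math

G = 6.67e-11        # N·m²/kg²
c = 3.0e8           # m/s
M_sun = 1.99e30     # kg
M_earth = 5.97e24   # kg

def schwarzschild_radius(M):
    """Equation 13.12: R_S = 2GM/c^2."""
    return 2 * G * M / c**2

R_sun = schwarzschild_radius(M_sun)      # ≈ 2.95e3 m
R_earth = schwarzschild_radius(M_earth)  # ≈ 8.85e-3 m

# Density of Earth's mass packed into a sphere of radius R_earth
rho = M_earth / ((4 / 3) * math.pi * R_earth**3)   # ≈ 2.06e30 kg/m³
```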
(Note: Once the mass is compressed within its Schwarzschild radius, general relativity dictates that it will collapse to a singularity. These calculations merely show the density we must achieve to initiate that collapse.)

Consider the density required to make Earth a black hole compared to that required for the Sun. What conclusion can you draw from this comparison about what would be required to create a black hole? Would you expect the Universe to have many black holes with small mass?

#### The event horizon

The Schwarzschild radius is also called the event horizon of a black hole. We noted that both space and time are stretched near massive objects, such as black holes. Figure 13.30 illustrates that effect on space. The distortion caused by our Sun is actually quite small, and the diagram is exaggerated for clarity. Consider the neutron star, described in Example 13.15. Although the distortion of space-time at the surface of a neutron star is very high, the radius is still larger than its Schwarzschild radius. Objects could still escape from its surface. However, if a neutron star gains additional mass, it would eventually collapse, shrinking beyond the Schwarzschild radius. Once that happens, the entire mass would be pulled, inevitably, to a singularity. In the diagram, space is stretched to infinity. Time is also stretched to infinity. As objects fall toward the event horizon, we see them approaching ever more slowly, but never reaching the event horizon. As outside observers, we never see objects pass through the event horizon—effectively, time is stretched to a stop.

### Interactive

Visit this site to view an animated example of these spatial distortions.

Figure 13.30 The space distortion becomes more noticeable around increasingly larger masses. Once the mass density reaches a critical level, a black hole forms and the fabric of space-time is torn. The curvature of space is greatest at the surface of each of the first three objects shown and is finite.
The curvature then decreases (not shown) to zero as you move to the center of the object. But the black hole is different. The curvature becomes infinite: The surface has collapsed to a singularity, and the cone extends to infinity. (Note: These diagrams are not to any scale.) (credit: modification of work by NASA)

#### The evidence for black holes

Not until the 1960s, when the first neutron star was discovered, did interest in the existence of black holes become renewed. Evidence for black holes is based upon several types of observations, such as radiation analysis of X-ray binaries, gravitational lensing of the light from distant galaxies, and the motion of visible objects around invisible partners. We will focus on these later observations as they relate to what we have learned in this chapter. Although light cannot escape from a black hole for us to see, we can nevertheless see the gravitational effect of the black hole on surrounding masses. The closest, and perhaps most dramatic, evidence for a black hole is at the center of our Milky Way galaxy. The UCLA Galactic Group, using data obtained by the W. M. Keck telescopes, has determined the orbits of several stars near the center of our galaxy. Some of that data is shown in Figure 13.31. The orbits of two stars are highlighted. From measurements of the periods and sizes of their orbits, it is estimated that they are orbiting a mass of approximately 4 million solar masses. Note that the mass must reside in the region created by the intersection of the ellipses of the stars. The region in which that mass must reside would fit inside the orbit of Mercury—yet nothing is seen there in the visible spectrum.

Figure 13.31 Paths of stars orbiting about a mass at the center of our Milky Way galaxy. From their motion, it is estimated that a black hole of about 4 million solar masses resides at the center. (credit: modification of work by UCLA Galactic Center Group – W.M.
Keck Observatory Laser Team)

The physics of stellar creation and evolution is well established. The ultimate source of energy that makes stars shine is the self-gravitational energy that triggers fusion. The general behavior is that the more massive a star, the brighter it shines and the shorter it lives. The logical inference is that a mass that is 4 million times the mass of our Sun, confined to a very small region, and that cannot be seen, has no viable interpretation other than a black hole. Extragalactic observations strongly suggest that black holes are common at the center of galaxies.

### Interactive

Visit the UCLA Galactic Center Group main page for information on X-ray binaries and gravitational lensing. Visit this page to view a three-dimensional visualization of the stars orbiting near the center of our galaxy, where the animation is near the bottom of the page.

#### Dark matter

Stars orbiting near the very heart of our galaxy provide strong evidence for a black hole there, but the orbits of stars far from the center suggest another intriguing phenomenon that is observed indirectly as well. Recall from Gravitation Near Earth’s Surface that we can consider the mass for spherical objects to be located at a point at the center for calculating their gravitational effects on other masses. Similarly, we can treat the total mass that lies within the orbit of any star in our galaxy as being located at the center of the Milky Way disc. We can estimate that mass from counting the visible stars and include in our estimate the mass of the black hole at the center as well. But when we do that, we find the orbital speed of the stars is far too fast to be caused by that amount of matter. Figure 13.32 shows the orbital velocities of stars as a function of their distance from the center of the Milky Way. The blue line represents the velocities we would expect from our estimates of the mass, whereas the green curve is what we get from direct measurements.
Apparently, there is a lot of matter we don’t see, estimated to be about five times as much as what we do see, so it has been dubbed dark matter. Furthermore, the velocity profile does not follow what we expect from the observed distribution of visible stars. Not only is the estimate of the total mass inconsistent with the data, but the expected distribution is inconsistent as well. And this phenomenon is not restricted to our galaxy, but seems to be a feature of all galaxies. In fact, the issue was first noted in the 1930s, when galaxies within clusters were measured to be orbiting about the center of mass of those clusters faster than they should based upon visible mass estimates.

Figure 13.32 The blue curve shows the expected orbital velocity of stars in the Milky Way based upon the visible stars we can see. The green curve shows that the actual velocities are higher, suggesting additional matter that cannot be seen. (credit: modification of work by Matthew Newby)

There are two prevailing ideas of what this matter could be—WIMPs and MACHOs. WIMPs stands for weakly interacting massive particles. These particles (neutrinos are one example) interact very weakly with ordinary matter and, hence, are very difficult to detect directly. MACHOs stands for massive compact halo objects, which are composed of ordinary baryonic matter, such as neutrons and protons. There are unresolved issues with both of these ideas, and far more research will be needed to solve the mystery.
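The discrepancy can be illustrated with a rough calculation (an editorial sketch; the enclosed mass and radii below are assumed round numbers, not values read off the figure). With only the visible mass, orbital speeds should fall off in the Keplerian way, $v = \sqrt{GM_{\text{enc}}/r}$, whereas the measured rotation curve stays roughly flat:

```python
# Sketch: Keplerian orbital speed expected from visible (enclosed) mass.
# The mass and radii are assumed round numbers for illustration only.
import math

G = 6.674e-11            # N·m²/kg²
M_sun = 1.989e30         # kg
kpc = 3.086e19           # m

M_visible = 1e11 * M_sun  # assumed enclosed visible mass

def keplerian_speed(r):
    """v = sqrt(G*M/r): speed expected if all mass lies inside radius r."""
    return math.sqrt(G * M_visible / r)

# Doubling the radius should cut the expected speed by a factor of sqrt(2);
# observed galactic rotation curves stay nearly flat instead.
v8 = keplerian_speed(8 * kpc)     # roughly 230 km/s for these inputs
v16 = keplerian_speed(16 * kpc)
```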
https://physics.stackexchange.com/questions/348092/in-what-sense-if-any-is-action-a-physical-observable
# In what sense (if any) is Action a physical observable?

Is there any sense in which we can consider Action a physical observable? What would experiments measuring it even look like? I am interested in answers both in classical and quantum mechanics.

I ran across a physics textbook called "Motion Mountain" today, with volumes covering a broad swath of physics, written over the past decade by a dedicated German physicist with support from some foundations for physics outreach. So it appears to be a serious endeavor, but its approach to many things is non-standard and often just sounds wrong to me. In discussing his approach to some topics the author says:

On action as an observable: Numerous physicists finish their university studies without knowing that action is a physical observable. Students need to learn this. Action is the integral of the Lagrangian over time. It is a physical observable: action measures how much is happening in a system over a lapse of time. If you falsely believe that action is not an observable, explore the issue and convince yourself - especially if you give lectures. http://www.motionmountain.net/onteaching.html

Further on, the author also discusses measurements of this physical observable, saying:

No single experiment yields [...] action values smaller than $\hbar$

So I think he does mean it literally that action is physically measurable and is furthermore quantized. But in what sense, if any, can we discuss action as an observable?

My current thoughts: Not useful in answering the question in general, but hopefully this explains where my confusion is coming from. I will admit, as chastised in the quote, I did not learn this in university, and actually it just sounds wrong to me. The textbook covers classical and quantum mechanics, and I really don't see how this idea fits in either.

**Classical physics:** The system evolves along a clear path, so I guess we could try to measure all the terms in the Lagrangian and integrate them along the path.
However, multiple Lagrangians can describe the same evolution classically. The trivial example is scaling by a constant. Or consider the Lagrangian from electrodynamics, which includes a term proportional to the vector potential, which is itself not directly measurable. So if Action were actually a "physical observable", one could determine the "correct" Lagrangian, which to me sounds like nonsense. Maybe I'm reading into the phrasing too much, but I cannot figure out how to interpret it in a manner that is actually both useful and correct.

**Quantum mechanics:** At least here, the constant scaling issue from classical physics goes away. However, the way to use the Lagrangian in quantum mechanics is to sum over all the paths. Furthermore, the issue of the vector potential remains. So I don't see how one could claim there is a definite action, let alone a measurable one. Alternatively, we could approach this by asking if the Action can be viewed as a self-adjoint operator on Hilbert space ... but the Action is a functional of a specific path; it is not an operator that acts on a state in Hilbert space and gives you a new state. So at first blush it doesn't even appear to be in the same class of mathematical objects as observables.

Ultimately, the comments that experiments have measured action and show it is quantized make it sound like this is just routine and basic stuff I should have already learned. In what sense, if any, can we discuss action as a physical observable? What would experiments measuring it even look like?

• Related: physics.stackexchange.com/q/9686/2451, physics.stackexchange.com/q/41138/2451 and links therein. Jul 23, 2017 at 3:08
• Classically, if energy is measurable at all times then the integral of energy over time is measurable. For a particle it is the integral of $\frac12 mv^2$.
I believe we should check the mathematical definitions in physics of action, not deduce them only from a rant about teaching philosophy that presupposes we are familiar with the definition. Definition of action: en.m.wikipedia.org/wiki/Action_(physics) Stationary Action Principle: en.m.wikipedia.org/wiki/Stationary_Action_Principle , should be in the question Jul 28, 2021 at 22:03
• Also relevant... quora.com/Is-Motion-Mountain-Physics-a-good-reference-book Jul 29, 2021 at 2:53

The book propagates a myth. Experiments measure angular momentum, not action - even though these have the same units. One finds empirically that angular momentum in any particular (unit length) direction appears in multiples of $\hbar/2$, due to the fact that its components generate the compact Lie group SO(3), or its double cover SU(2). That Planck's constant $\hbar$ is called the "quantum of action" is solely due to historical reasons. It does not imply that the action is quantized or that its minimal value is $\hbar$. Early quantum theories such as the Bohr-Sommerfeld approximation used a quantized action, but this was an approximation to the more general quantization of Dirac and others. One must additionally take care not to confuse the word action in "action-angle coordinates" with the action of variational calculus. Indeed, the action of a system defined by a Lagrangian is a well-defined observable only in the very general and abstract sense of quantum mechanics, where every self-adjoint operator on a Hilbert space is called an observable, regardless of whether or not we have a way to measure it. The action of a system along a fixed dynamically-allowed path depends on an assumed initial time and final time, and it goes to zero as these times approach each other - this holds even when it is an operator. Hence its eigenvalues are continuous in time and must go to zero when the time interval tends to zero.
This is incompatible with a spectrum consisting of integral or half-integral multiples of $\hbar$.

• The value of the action between two time slices, aka Hamilton's function, is an observable on the phase space. Dec 5, 2020 at 8:09
• @Prof.Legolasov: How can it be observed? Dec 6, 2020 at 15:15
• It is a function on the phase space, therefore it can be observed by observing the coordinate and momentum and plugging them into the model-specific formula for the Hamilton-Jacobi function. In QM it becomes an operator that acts on the Hilbert space. Dec 12, 2020 at 7:18
• Arnold, can you answer the post below? It suggests that there is an error in your argument. – user85598 Jan 26, 2021 at 6:28
• @Christian: By the way, the answer of Motion Mountain does not exhibit an error in my arguments but proves his contrary position by arguments from authority, which carry little substantial weight. Jan 26, 2021 at 21:16

In the language of the OP, action is a functional, since it is an integral of the Lagrangian... but over an arbitrary path. In other words, it is an abstract mathematical object, which has no counterpart in the real world. This functional is then minimized with respect to all possible trajectories. In quantum mechanical terms, the action along the optimal trajectory corresponds to the phase of a wave function, which is measurable (although defined up to a constant), e.g., in experiments on the Aharonov-Bohm effect and in any other interference experiment. The fact was recognized long before the advent of path integrals - Landau & Lifshitz derive the quasiclassical approximation as an eikonal expansion of the phase of the wave function, which they openly call the action.

• I believe you're supposed to take the actual path. It cannot be calculated if the path isn't known.
Secondly, recall the principle of least action when calculating: en.wikipedia.org/wiki/Principle_of_least_action Classically, if energy is measurable at all times then the integral of energy over time is measurable. For a particle it is the integral of $\frac12 mv^2$. Jul 28, 2021 at 22:05
• @AlBrown Not sure what you disagree with, since you say yourself that action is known only for a specified path. It is like a function: $f(.)$ is an abstract object that cannot be measured, but for any given point $x$ the value of the function at this point, $f(x)$, is a number that could be measurable. Jul 29, 2021 at 12:47
• That makes sense. I see Aug 1, 2021 at 22:28

One can measure an on-shell action by counting cycles and noting the phase within the cycle. Details: $$∂τ$$ is a proper time step of the system consisting of observable simultaneous components. By observing the components one can find repeated patterns. Basically all systems cycle. $$τ=∫∂τ$$ is the constant information of the system, or Hamilton's principal function. $$0=\frac{dτ}{dt}=\frac{∂τ}{∂x}\;\frac{∂x}{∂t}+\frac{∂τ}{∂t}\\ W=\frac{∂τ}{∂t}=-\frac{∂τ}{∂x}\;\frac{∂x}{∂t}=-H\\ H=pẋ=mẋ²$$ Note, $$p = mẋ$$ is by observation, i.e. the physical content. Then it is assigned to $$\frac{∂τ}{∂x}$$ by definition, leading to $$H=mẋ²$$. $$∂τ=mẋ∂x$$ then says that $$ẋ$$ and $$∂x$$ contribute independently to a time step $$∂τ$$. Splitting off some non-observable part of the system and associating it to the location of the observed part, $$H(ẋ,x)=T(ẋ)+V(x)$$, half-half, makes $$T(ẋ)=mẋ²/2$$. Half-half is the usual but not mandatory choice for $$V$$. $$W=-H$$ stays constant. It is the cycle's information divided by the period time: $$I=∮(∂τ/∂t)dt=∮(-H)dt=Wt.$$ One cannot minimize $$∫Wdt$$ because it increases monotonously, counting until the system ceases to exist. $$L(x)=mẋ²+W=mẋ²-H$$ oscillates and returns to 0 in a cycle. $$J=∫Ldt$$ returns to the same value after one or many cycles.
Minimizing this produces conditions to attribute observables to the same system time step (the equations of motion). $$0 = \frac{δJ}{δx} = \frac{1}{δx}∫\left(δx\frac{∂L}{∂x}+δẋ\frac{∂L}{∂ẋ}\right)dt = \frac{1}{δx}∫δx\left(\frac{∂L}{∂x}-\frac{d}{dt}\;\frac{∂L}{∂ẋ}\right)dt \\ \frac{∂L}{∂x} = \frac{d}{dt}\;\frac{∂L}{∂ẋ} \\ F=ṗ$$ One can measure an action satisfying the equations of motion by counting cycles and noting the phase within the cycle. Measuring our everyday time is also done via measuring action. Comparing system changes to our time unit motivates energy $$W$$. $$W=\frac{∂τ}{∂t}$$ is system time divided by our time, which is frequency times a factor to keep the units consistent.
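Whatever one's stance on its measurability, the action itself is straightforward to compute along a given path. The sketch below (an editorial addition; the oscillator, unit mass, and endpoints are assumed purely for illustration) discretizes $S[x]=\int\left(\tfrac{m}{2}\dot{x}^2-\tfrac{k}{2}x^2\right)dt$ for a harmonic oscillator and checks that the classical path makes the action stationary - here a minimum, since the time interval is shorter than half a period:

```python
# Sketch: the action S[x] = integral of (m/2 x'^2 - k/2 x^2) dt, evaluated
# numerically as a functional of a path (m = k = 1, assumed values).
import math

m, k, T, N = 1.0, 1.0, 1.0, 2000
dt = T / N
ts = [i * dt for i in range(N + 1)]

def action(path):
    """Discretized action: finite difference for x', midpoint rule for x."""
    S = 0.0
    for i in range(N):
        v = (path[i + 1] - path[i]) / dt
        x_mid = 0.5 * (path[i] + path[i + 1])
        S += (0.5 * m * v**2 - 0.5 * k * x_mid**2) * dt
    return S

# Classical solution with boundary conditions x(0) = 0, x(T) = 1
classical = [math.sin(t) / math.sin(T) for t in ts]

def perturbed(eps):
    """Classical path plus a variation that vanishes at both endpoints."""
    return [x + eps * math.sin(math.pi * t / T) for x, t in zip(classical, ts)]

S0 = action(classical)   # close to the analytic value cos(1)/(2 sin(1))
```

Any endpoint-preserving perturbation raises the action here, which is the numerical face of the stationary-action principle the comments above link to.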
https://brilliant.org/weekly-problems/2017-03-20/advanced/?problem=can-there-be-such-a-weird-function
# Problems of the Week

Does there exist a function $f: \mathbb{R} \rightarrow \mathbb{R}$ that is continuous on exactly the rational numbers?

Note: The following function $f: \mathbb{R} \rightarrow \mathbb{R}$ is discontinuous on exactly the rational numbers: index all the rational numbers using a bijection $a: \mathbb{N} \rightarrow \mathbb{Q}$, and define $f: \mathbb{R} \rightarrow \mathbb{R}$ as $\displaystyle f(x) = \sum_{ a(n) < x } 2^{-n }.$

Let $ABCDE$ be an equilateral convex pentagon such that $\angle ABC=136^\circ$ and $\angle BCD=104^\circ$. What is the measure (in degrees) of $\angle AED$?

Note: An equilateral polygon is a polygon whose sides are all of the same length. It does not imply that all the internal angles are equal, nor that the polygon is cyclic.

A parallel plate capacitor, with plates $A$ and $B$ of equal dimensions $t \times L$ at a distance of $L$, is filled with square tiles of dielectric to make a chess-board-like capacitor, as shown in the picture above. The dielectric constant of the dark tile is $\sigma_1,$ that of the light tile is $\sigma_2,$ and $\epsilon_0$ is the permittivity of free space. All the dielectric tiles are square cuboids of thickness $t$. Find the capacitance of this capacitor.

Details and Assumptions:
• Assume that the electric field varies between the plates like an ideal parallel plate capacitor.

How many ways are there to tile a $4\times6$ rectangle with twelve $1\times2$ dominoes?

A ball with zero initial velocity falls from a height of $\frac{R}{n}$ near the vertical axis of symmetry on a concave spherical surface of radius $R$. Assuming that the collision is elastic, it is observed that the second impact of the ball is at the lowest point of the spherical surface. Determine the value of $n$ to the nearest integer.
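For the domino-tiling problem above, the count can be sketched with a standard broken-profile dynamic program (an editorial addition, not Brilliant's posted solution):

```python
# Sketch: broken-profile DP over columns. `mask` marks cells in the
# current column already covered by horizontal dominoes protruding
# from the previous column.
from functools import lru_cache

def count_domino_tilings(rows, cols):
    @lru_cache(maxsize=None)
    def fill(col, mask):
        if col == cols:
            return 1 if mask == 0 else 0  # nothing may stick past the edge
        def place(row, cur, nxt):
            if row == rows:
                return fill(col + 1, nxt)
            if cur & (1 << row):              # cell already covered
                return place(row + 1, cur, nxt)
            # horizontal domino reaching into the next column
            total = place(row + 1, cur, nxt | (1 << row))
            # vertical domino covering this row and the one below it
            if row + 1 < rows and not cur & (1 << (row + 1)):
                total += place(row + 2, cur, nxt)
            return total
        return place(0, mask, 0)
    return fill(0, 0)
```

As sanity checks, a 2x2 square has 2 tilings and a 4x1 column has exactly 1 (two stacked vertical dominoes).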
http://www.computer.org/csdl/proceedings/grc/2007/3032/00/30320471-abs.html
San Jose, California, Nov. 2, 2007 to Nov. 4, 2007. ISBN: 0-7695-3032-X, pp: 471

ABSTRACT

Constrained QoS multicast routing for networks is an NP-hard problem. Classical approaches to multicast routing consider a tree path whose computational cost entails high use of resources such as time and memory. This paper presents a chaotic optimization adaptive genetic algorithm applied to the multicast routing problem, in which no tree is built. The major objectives of this study are: to modify the encoding to be suitable for the multicast routing problem; to develop an adaptive solution to this problem (new options of fitness functions, variation and selection operators were proposed to increase the ability to generate feasible routes); and to compare the performance of the proposed algorithm with some existing multicast routing algorithms. The simulations were performed for several networks with different network and multicast sizes. The results suggest promising performance for this approach.

Keywords: multicast routing; quality of service; chaotic optimization; adaptive genetic algorithm

CITATION

Changbing Li, Yong Wang, Maokang Du, Changjiang Yue, "Multicast Routing Scheme Based on Chaotic Optimization Adaptive Genetic Algorithm", IEEE International Conference on Granular Computing (GrC) 2007, pp. 471, doi:10.1109/GrC.2007.65
http://mathhelpforum.com/math-topics/13978-science-principles-radiation-convection-conduction.html
# Math Help - Science: The Principles of Radiation, Convection and Conduction...

1. ## Science: The Principles of Radiation, Convection and Conduction...

Anyone? I wasn't sure whether this was the place to post here, but I can't seem to find anywhere where it describes the Principles of Radiation, Convection and Conduction eloquently... Perhaps you can help me...

2. Originally Posted by Sazza
Anyone? I wasn't sure whether this was the place to post here, but I can't seem to find anywhere where it describes the Principles of Radiation, Convection and Conduction eloquently... Perhaps you can help me...

Heat Transfer - radiation, conduction and convection
§ 18. conduction / convection / radiation Basics

If you read through these and still don't get it, you can post your questions here, and our local physics genius, topsquark, or someone else, will answer your questions.

3. ## Woot!
Okay, thanks heaps! I got an A+ thanks to your help.

4. Originally Posted by Sazza
Okay, thanks heaps! I got an A+ thanks to your help.

Really? I guess the information was good then! Great, I'm happy. I didn't even look that hard.

5. ## Laugh out Loud.
Yeah well... I guess I should have looked a bit harder.

6. Originally Posted by Sazza
Yeah well... I guess I should have looked a bit harder.

What search engine did you use? I basically just googled "convection conduction radiation" and clicked on whatever links looked interesting.

7. Originally Posted by Sazza
Okay, thanks heaps! I got an A+ thanks to your help.

In under 24 hours and at the weekend? RonL
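For readers landing on this thread, the three mechanisms the linked pages describe have standard rate laws: conduction follows Fourier's law, convection Newton's law of cooling, and radiation the Stefan-Boltzmann law. A quick numerical sketch (an editorial addition; the material values below are assumed round numbers, not from the thread):

```python
# Sketch of the three heat-transfer rate laws (assumed example values).

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def conduction(k, A, dT, L):
    """Fourier's law: q = k * A * dT / L  (watts through a slab)."""
    return k * A * dT / L

def convection(h, A, dT):
    """Newton's law of cooling: q = h * A * dT  (watts off a surface)."""
    return h * A * dT

def radiation(eps, A, T_surf, T_env):
    """Stefan-Boltzmann law: q = eps * sigma * A * (T^4 - T_env^4)."""
    return eps * SIGMA * A * (T_surf**4 - T_env**4)

# Example: 1 m^2 surface, 20 K temperature difference (assumed values)
q_cond = conduction(k=0.8, A=1.0, dT=20.0, L=0.005)  # a thin glass-like slab
q_conv = convection(h=10.0, A=1.0, dT=20.0)          # still air
q_rad = radiation(eps=0.9, A=1.0, T_surf=293.0, T_env=273.0)
```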
http://physics.stackexchange.com/questions/43796/is-the-gravitational-constant-g-a-fundamental-universal-constant?answertab=oldest
# Is the gravitational constant $G$ a fundamental universal constant?

Is the gravitational constant $G$ a fundamental universal constant like the Planck constant $h$ and the speed of light $c$?

-

The velocity of light $c$, the elementary charge $e$, the mass of the electron $m_e$, the mass of the proton $m_p$, the Avogadro constant $N$, Planck's constant $h$, the universal gravitational constant $G$ and Boltzmann's constant $k$ are all considered fundamental constants in astrophysics and many other fields. If any of these values were to change, there would be a serious contradiction between our measured values and the observed and predicted ones.

• But there are cases where $G$ is treated as a variable, with a standard deviation of about 0.003 (in the same units), which is quite small. Hence, we use $6.67\times10^{-11}\,\mathrm{N\,m^2\,kg^{-2}}$ for doing most of our homework. The thing is, it's still fundamental...!

So far, investigations have found no evidence of variation of fundamental "constants." So to the best of our current ability to observe, the fundamental constants really are constant.

-

I know it's a constant, but I'm asking if it is a "fundamental" universal constant. – Farhâd Nov 9 '12 at 18:50
@Farhâd: Hello Farhad, that's what we all are repeating around. They are fundamental..! – Waffle's Crazy Peanut Nov 10 '12 at 3:04
Why? Just repeating "fundamental" in bold font doesn't make it so. – Farhâd Nov 12 '12 at 13:29

It is probably constant - at least we have no evidence of any change. "Is it fundamental?" is the big question of theoretical physics. Nobody has yet managed to derive it in terms of more fundamental constants - but a lot of people have tried.

-

Yes. I agree with that. Just like Dirac. Pss.. You forgot the formatting in case of your speed-typing eh..? :-) – Waffle's Crazy Peanut Nov 9 '12 at 15:50
@CrazyBuddy - early morning pre-coffee typing!
– Martin Beckett Nov 9 '12 at 16:50 Martin, can you give me some sources on those who have tried to derive G in terms of other more fundamental universal constants? – Farhâd Nov 9 '12 at 18:53 For all intents and purposes, it is a fundamental constant. No one has been able to prove that it isn't fundamental, and within our error in measurement, it's definitely a constant. Like @Crazy Buddy says, $c$ (speed of light), $h$ (Planck's constant), $k_{B}$ (Boltzmann's constant) are all considered to be fundamental constants of the universe. You could have a look at this wiki page. I think it's important also to realize that the values they have are only valid within a particular unit convention for measurements. For example, $G = 6.67 \times 10^{-11}\,\text{m}^{3}\,\text{kg}^{-1}\,\text{s}^{-2}$, but this value will obviously change if you measure it in, say, centimeter-gram-second (cgs) units. You could also set $G = 1$ (which they do in Planck units), except the rest of your units will have to change accordingly to keep the dimensions correct. I hope the last part wasn't confusing. - There is an alternative to General Relativity known as Brans-Dicke theory that treats the constant $G$ as having a value derivable from a scalar field $\phi$ with its own dynamics. The coupling of $\phi$ to other matter is defined by a parameter $\omega$ in the theory, which was assumed to be of order unity. In the limit where $\omega \rightarrow\infty$, Brans-Dicke theory becomes General Relativity. Current experiments and observations tell us that if Brans-Dicke theory describes the universe, $\omega > 40,000$. Other theories with a varying $G$ would face similar constraints. - Thanks Jerry. Very useful. – Farhâd Nov 9 '12 at 18:58 Real "fundamental" constants should be dimensionless, i.e. numbers that don't depend on units. The existence of $c$ is simply due to the Lorentzian nature of spacetime; its value is only a matter of choice of unit.
The existence of $\hbar$ is simply due to the path integral or canonical commutation relations, and its value is again a matter of choice of unit. Similarly for the Boltzmann constant, etc. On the other hand, the fine structure constant $\alpha\simeq 1/137$ is dimensionless, so this quantity actually means something other than a choice of unit. But the number itself is still not that "fundamental" (we will discuss whether the quantity itself is fundamental in the next paragraph) because the number can change under the renormalization flow - i.e. it changes if you define it at different energy scales. So it's the quantity, rather than the number, that has some actual physical meaning. In the Standard Model of particle physics there are a bunch of such dimensionless quantities. Are these quantities "fundamental"? People tend to believe NO, because Kenneth Wilson let us realize that quantum field theories like the Standard Model are just low-energy effective theories with some high-energy cutoff (just as nuclear physics is an effective theory of the Standard Model); dimensionless quantities in an effective theory should depend on those in the higher-level theory (just as the dimensionless Reynolds number that describes the behavior of a fluid depends on the molecular constituents of the fluid). String theorists and others are trying to find a theory that has the least number of dimensionless quantities. Some people think an ultimate theory of everything, if it exists, should ideally have no such quantities at all, but only numbers that have mathematical significance (like $1, 2, \pi$, or some number with certain analytical, algebraic or topological significance). In terms of the gravitational constant itself, people generally believe Einstein's General Relativity is an effective theory whose cutoff is about (or lower than) the Planck scale ($\sim 10^{19}\,\text{GeV}$; our current experimental reach is $\sim 10^4\,\text{GeV}$ at the LHC), above which it needs to be replaced by a theory of quantum gravity.
But the quantity $G$ might still be there (just as it survived the passage from Newton to Einstein); we are not sure. -
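As a numerical illustration of the Planck scale mentioned above, the Planck units combine $G$, $\hbar$ and $c$ into natural scales. The sketch below uses rounded SI values for the constants (the values, not taken from the discussion above, are standard CODATA-style figures):

```python
import math

# rounded SI values of the three constants
G = 6.674e-11        # m^3 kg^-1 s^-2
hbar = 1.0546e-34    # J s
c = 2.9979e8         # m/s

l_planck = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
t_planck = l_planck / c                 # Planck time,   ~5.4e-44 s
m_planck = math.sqrt(hbar * c / G)      # Planck mass,   ~2.2e-8 kg

# Planck energy in GeV (~1.2e19 GeV, the cutoff scale quoted above)
E_planck_GeV = m_planck * c**2 / 1.602e-10
```

Setting $G = \hbar = c = 1$, as mentioned in an earlier answer, amounts to measuring everything in these units.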
https://www.physicsforums.com/threads/identifying-the-forces-acting-on-masses-in-uniform-circular-motion.312313/
# Identifying the forces acting on masses in uniform circular motion.

• Thread starter deancodemo • Start date • #1 deancodemo 20 0

## Homework Statement

Two particles of masses 2 kg and 1 kg are attached to a light string at distances 0.5 m and 1 m respectively from one fixed end. Given that the string rotates in a horizontal plane at 5 revolutions/second, find the tensions in both portions of the string.

## The Attempt at a Solution

First I convert the angular velocity from rev/sec to rad/sec:
$$\omega = 5 \text{ rev/sec} = 10\pi \text{ rad/sec}$$
Now, let the 2 kg mass be P, the 1 kg mass be Q and the fixed point be O. I understand that the tension in PQ < tension in OP. Actually, the tension in OP should equal the centripetal force acting on P plus the tension in PQ. Right?

I get confused when identifying the forces acting on P. By using R = ma:
$$\text{(Tension in OP)} + \text{(Tension in PQ)} = m r \omega^2$$
$$\text{(Tension in OP)} = m r \omega^2 - \text{(Tension in PQ)}$$
This is incorrect! The only solution would be:
$$\text{(Tension in OP)} - \text{(Tension in PQ)} = m r \omega^2$$
$$\text{(Tension in OP)} = m r \omega^2 + \text{(Tension in PQ)}$$
Which makes sense, but this means that the tension forces acting on P in the different sections of string act in opposite directions. Is that right?
Last edited:

## Answers and Replies

• #2 nickjer 674 2
Draw a free body diagram of the forces acting on the mass at P. You will see both ropes on either side of the mass pulling in opposite directions. That is why your 2nd set of equations is correct.
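For a concrete check of the second set of equations, here is a short sketch with the numbers from the problem statement: the outer mass Q needs only the tension in PQ for its centripetal force, while for the inner mass P the tension in OP pulls inward and the tension in PQ pulls outward.

```python
import math

omega = 10 * math.pi          # 5 rev/s converted to rad/s

# Q (outer mass, 1 kg at r = 1 m): only T_PQ provides its centripetal force
m_Q, r_Q = 1.0, 1.0
T_PQ = m_Q * r_Q * omega**2   # 100*pi^2 ~ 987 N

# P (inner mass, 2 kg at r = 0.5 m): T_OP - T_PQ = m*r*omega^2
m_P, r_P = 2.0, 0.5
T_OP = m_P * r_P * omega**2 + T_PQ   # 200*pi^2 ~ 1974 N
```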
https://socratic.org/questions/how-many-hydrogen-molecules-are-in-2-75-l-of-h-2-gas-at-stp
# How many hydrogen molecules are in 2.75 L of H_2 gas at STP?

Jun 27, 2016

$\dfrac{2.75\,\text{L}}{22.4\,\text{L}\,\text{mol}^{-1}} \times 6.022 \times 10^{23}\,\text{hydrogen molecules}\,\text{mol}^{-1} \approx 7.4 \times 10^{22}\ \text{hydrogen molecules}$

#### Explanation:

The molar volume of an ideal gas at $\text{STP}$ is $22.4\,\text{L}$. We assume (reasonably) that dihydrogen behaves ideally, and multiply the quotient of volumes by $\text{Avogadro's number}$, $N_A = 6.022 \times 10^{23}\,\text{mol}^{-1}$, to give an answer in numbers of hydrogen molecules. How many hydrogen atoms are in such a quantity?
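The arithmetic in the answer, including the follow-up question about atoms, can be checked in a few lines (a sketch; 22.4 L/mol is the STP molar volume quoted above):

```python
V = 2.75            # L of H2 gas
V_m = 22.4          # L/mol, molar volume of an ideal gas at STP
N_A = 6.022e23      # molecules/mol, Avogadro's number

moles = V / V_m                 # ~0.123 mol
molecules = moles * N_A         # ~7.4e22 H2 molecules
atoms = 2 * molecules           # each H2 molecule carries two H atoms
```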
http://typo3master.com/standard-error/repair-standard-error-gaussian-distribution.php
# Standard Error Gaussian Distribution

The means of repeated samples drawn from a population form their own distribution, called the sampling distribution of the mean. The standard deviation of this sampling distribution is the standard error of the mean: for samples of size $n$ from a population with standard deviation $\sigma$, it equals $\sigma/\sqrt{n}$. The larger the sample size, the smaller the standard error of the mean, and the sampling distribution looks closer to a normal distribution even when the population distribution is not normal (the quality of this approximation is given by the Berry–Esseen theorem).

For example, with a population standard deviation of 9.3 and samples of size 25, the standard error of the mean is $9.3/5 = 1.86$, very close to the value of 1.87 obtained by repeatedly simulating sample means. Likewise, for a population of 9,732 runners with mean age 33.88 years and standard deviation 9.27 years, samples of $n = 16$ runners have a standard error of $9.27/\sqrt{16} \approx 2.32$ years; for comparison, the population standard deviation of 4.72 years for age at first marriage is about half the standard deviation of 9.27 years for the runners.

In practice the population $\sigma$ is unknown and is replaced by the sample standard deviation; the resulting estimate is itself often called the "standard error" of the statistic, and Gurland and Tripathi (1971) provide a correction for the bias this introduces in small samples. The standard error is what quantitative statements of uncertainty are built from: the margin of error in polling data (for example, 2000 voters chosen at random for an upcoming national election) is determined from the expected spread of sample proportions, and for a large sample an approximate 95% confidence interval is the sample mean plus or minus about two standard errors.
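The $\sigma/\sqrt{n}$ behaviour of the standard error of the mean discussed above can be illustrated with a small simulation, here using the running example's $\sigma = 9.27$ and $n = 16$ (the seed and trial count are arbitrary):

```python
import random
import statistics

random.seed(0)
mu, sigma, n, trials = 33.88, 9.27, 16, 2000

# Draw many samples of size n and look at the spread of their means
means = [
    statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(trials)
]

theoretical_se = sigma / n**0.5          # 9.27/4 ~ 2.32
observed_se = statistics.stdev(means)    # close to the theoretical value
```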
http://www.ck12.org/trigonometry/Tangent-Graphs/lecture/user:cmF5bmEuc2NobG9zc2JlcmdAZ21haWwuY29t/Changes-in-the-Period-of-a-Sine-and-Cosine-Function/
Tangent Graphs

Adjust the length of the curve, or the distance before the y values repeat, from 2pi.

Changes in the Period of a Sine and Cosine Function

How to determine and change the period of a function.
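A quick numerical sketch of how a horizontal stretch changes the period: for $y = \sin(bx)$ the period is $2\pi/b$ (this formula is the standard one, not stated on the page above):

```python
import math

def period_of_sin_bx(b):
    """Period of y = sin(b*x): the x-distance before the y values repeat."""
    return 2 * math.pi / abs(b)

# b = 1 gives the usual period 2*pi; b = 2 squeezes it to pi
p1 = period_of_sin_bx(1)
p2 = period_of_sin_bx(2)

# sanity check: sin(2x) repeats after one period p2
x = 0.7
repeats = math.isclose(math.sin(2 * x), math.sin(2 * (x + p2)), abs_tol=1e-12)
```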
https://www.physicsforums.com/threads/finding-direction-where-the-rate-of-change-is-fastest.679152/
# Finding direction where the rate of change is fastest

1. Mar 18, 2013

### munkhuu1

1. The problem statement, all variables and given/known data
The surface of a mountain is modeled by h(x,y) = 5000 - 0.001x^2 - 0.004y^2. A climber is at (500, 300, 4390) and he drops a bottle, which lands at (300, 450, 4100). What is his elevation rate of change if he heads for the bottle? In what direction should he proceed to reach z = 4100 feet the fastest, so that he can be on a constant elevation to reach his bottle?

2. Relevant equations

3. The attempt at a solution
I solved the first question and got -.334, where I just found the gradient and multiplied by the unit vector. For the 2nd question I'm a little confused.

2. Mar 18, 2013

### HallsofIvy

The gradient is a vector and contains two pieces of information - its "length" and its direction. In which direction does the gradient here point?

3. Mar 18, 2013

### munkhuu1

Is the direction just the gradient at (500, 300), which is <-1, -2.4>, or do I find the difference of the 2 points and find the gradient at (200, -150) = <-.6, -1.2>?

4. Mar 18, 2013

### HallsofIvy

The gradient of the function f(x,y,z) is the vector $$\frac{\partial f}{\partial x}\vec{i}+ \frac{\partial f}{\partial y}\vec{j}+ \frac{\partial f}{\partial z}\vec{k}$$ Isn't that what you did? The "direction" you want to go is the direction of that vector. It may be, since what you wrote was ambiguous, that you want a compass direction, not including the "downward" part. If that is the case, find the angle that $$\frac{\partial f}{\partial x}\vec{i}+ \frac{\partial f}{\partial y}\vec{j}$$ makes with "north", the positive y-axis.

5. Mar 18, 2013

### munkhuu1

Yeah, I did find the gradient, which was <-1, -2.4>; I just wasn't sure how to get the direction from this.

6. Mar 18, 2013

### HallsofIvy

What do you mean by "direction"?
A direction in three dimensions is given either by a unit vector or by the "direction cosines" (the cosines of the angles a line in that direction makes with each coordinate axis, which are the same as the components of the unit vector). But, as I said, you may want the two-dimensional compass direction the person should take.
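The quantities discussed in the thread can be sketched numerically. Note that with the horizontal (2-D) unit vector toward the bottle, the directional derivative comes out as -0.64 here; the -.334 quoted in the thread may come from a different normalization.

```python
import math

def grad_h(x, y):
    """Gradient of h(x, y) = 5000 - 0.001*x**2 - 0.004*y**2."""
    return (-0.002 * x, -0.008 * y)

gx, gy = grad_h(500, 300)     # (-1.0, -2.4), as found in the thread

# horizontal unit vector from the climber (500, 300) toward the bottle (300, 450)
dx, dy = 300 - 500, 450 - 300
norm = math.hypot(dx, dy)     # 250.0
ux, uy = dx / norm, dy / norm # (-0.8, 0.6)

rate = gx * ux + gy * uy      # directional derivative toward the bottle
```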
http://mathhelpforum.com/statistics/204871-how-many-averages-enough-print.html
How many averages are enough?
• October 8th 2012, 10:59 AM algorithm
How many averages are enough?
Hi, I have been trying to figure this out for two days but I don't seem to be reaching a conclusion (Shake) Let me describe what I'm doing... I have a vector of constants: $X$. I obtain a vector of random values, $Z$, by adding a vector of AWGN, $N$, to $X$: $Z = X + N$. Finally, I compute $C$ between $X$ and $Z$: $C = \sigma_{XZ}/\sqrt{\sigma_{X}\sigma_{Z}}$. As you can see, the value of $C$ changes each time I generate a new vector of AWGN noise. Therefore I will average $C$ over several trials. What I can't get my head round is how I can decide how many trials are enough. I have gone on various errands to the standard error of the mean, confidence intervals, and Monte Carlo trials, without fruition. Thanks.
• October 8th 2012, 01:16 PM HallsofIvy
Re: How many averages are enough?
You seem to have neglected to tell us what you are trying to do!
• October 8th 2012, 02:06 PM algorithm
Re: How many averages are enough?
Oh yes...here goes :) I want to obtain an accurate value of $C$. I know that sounds very vague - what do I mean by "accurate"?... Well, I could go on taking averages of $C$ ad infinitum to get a better and better estimate of $C$. This isn't practical, and also I can't just decide to take 10 000 averages because it looks big enough. What I must do is determine how many averages of $C$ give a 'good' representation of the r.v. $C$. My knowns are: $X$, a vector of constants; $Z = X + \text{AWGN}$, where the AWGN is white noise distributed $N(\mu, \sigma)$; and $C$ defined as $C = \sigma_{XZ}/\sqrt{\sigma_{X}\sigma_{Z}}$. The value of $C$ will fluctuate as the AWGN (white noise) is different each trial. Thanks.
• October 9th 2012, 11:00 AM algorithm
Re: How many averages are enough?
I hope my description hasn't obscured things; it is a simple concept, I'm sure. Please tell me if anything needs any clarification. Let me try and sum it up very simply:
All vectors have the same length.
Step 1. $Z = X$ (vector of known constants) $+$ white Gaussian noise (known variance and mean).
Step 2. $C = \sigma_{XZ}/\sqrt{\sigma_{X}\sigma_{Z}}$ (i.e. the correlation coefficient between $X$ and $Z$).
My objective: repeat Steps 1 and 2 $N$ times to obtain a simple average of $C$.
The question: what approaches can I take for deciding $N$? Thanks.
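One standard approach, not given in the thread itself, is a hedged sketch of a sequential stopping rule: keep running trials and stop once the standard error of the running mean of $C$ falls below a chosen tolerance. The vector $X$, the noise level, and the tolerance below are arbitrary illustrations.

```python
import random
import statistics

random.seed(1)

def corr(xs, zs):
    """Sample Pearson correlation coefficient between two equal-length vectors."""
    mx, mz = statistics.fmean(xs), statistics.fmean(zs)
    cov = sum((x - mx) * (z - mz) for x, z in zip(xs, zs))
    vx = sum((x - mx) ** 2 for x in xs)
    vz = sum((z - mz) ** 2 for z in zs)
    return cov / (vx * vz) ** 0.5

X = [float(i) for i in range(100)]   # arbitrary vector of constants
tol = 0.001                          # stop when SE of the mean of C < tol

samples = []
while True:
    Z = [x + random.gauss(0.0, 5.0) for x in X]   # AWGN with sigma = 5
    samples.append(corr(X, Z))
    n = len(samples)
    # require a minimum number of trials so the stdev estimate is meaningful
    if n >= 10 and statistics.stdev(samples) / n**0.5 < tol:
        break

C_hat = statistics.fmean(samples)    # Monte Carlo average of C
```

The number of trials $N$ then comes out of the data rather than being fixed in advance; equivalently, one can fix a confidence-interval half-width and solve for $N$ from a pilot run's sample variance.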
https://www.bradford-delong.com/2019/10/wikipedia-_amphidromic-point_-_-an-amphidromic-point-also-called-a-tidal-node-is-a-geographical-location-which-has-z.html
Wikipedia: Amphidromic Point https://en.wikipedia.org/wiki/Amphidromic_point_: "An amphidromic point, also called a tidal node, is a geographical location which has zero tidal amplitude for one harmonic constituent of the tide.... The term amphidromic point derives from the Greek words amphi (around) and dromos (running), referring to the rotary tides running around them.... Amphidromic points occur because of the Coriolis effect and interference within oceanic basins.... At the amphidromic points of the dominant tidal constituent, there is almost no vertical movement from tidal action. There can be tidal currents since the water levels on either side of the amphidromic point are not the same. A separate amphidromic system is created by each periodic tidal component. In most locations the 'principal lunar semi-diurnal', known as M2, is the largest tidal constituent, with an amplitude of roughly half of the full tidal range... #noted #2019-10-09
http://physics.stackexchange.com/questions/10239/can-the-entropy-of-a-subsystem-exceed-the-maximum-entropy-of-the-system-in-quant
# Can the entropy of a subsystem exceed the maximum entropy of the system in quantum mechanics?

Quantum mechanics has a peculiar feature, entanglement entropy, which allows the total entropy of a system to be less than the sum of the entropies of the individual subsystems comprising it. Can the entropy of a subsystem exceed the maximum entropy of the system in quantum mechanics?

What I have in mind is eternal inflation. The de Sitter radius is only a few orders of magnitude larger than the Planck length. If the maximum entropy is given by the area of the boundary of the causal patch, the maximum entropy can't be all that large. Suppose a bubble nucleation of the metastable vacuum into another phase with an exponentially tiny cosmological constant happens. After reheating inside the bubble, the entropy of the bubble increases significantly until it exceeds the maximum entropy of the causal patch. If this is described by entanglement entropy within the bubble itself, then when restricted to a subsystem of the bubble, we get a mixed state. In other words, the number of many-worlds increases exponentially until it exceeds the exponential of the maximum causal patch entropy. Obviously, the causal patch itself can't possibly have that many many-worlds. So, what is the best way of interpreting these many-worlds for this example? Thanks a lot! -

Take the state $|\psi\rangle=(|00\rangle+|11\rangle)/\sqrt{2}$. It is a pure state, so its (von Neumann) entropy is 0. But both of its one-particle states have entropy equal to 1 bit, as they are completely mixed states of dimension two.
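The answer's example can be checked numerically: the reduced density matrix of either qubit of $(|00\rangle+|11\rangle)/\sqrt{2}$ is maximally mixed, with exactly one bit of von Neumann entropy. A sketch in plain Python (no quantum library assumed):

```python
import math

# Bell state (|00> + |11>)/sqrt(2) as amplitudes psi[a][b]
amp = 1 / math.sqrt(2)
psi = [[amp, 0.0], [0.0, amp]]

# Reduced density matrix of the first qubit: rho[a][a2] = sum_b psi[a][b]*psi[a2][b]
rho = [
    [sum(psi[a][b] * psi[a2][b] for b in range(2)) for a2 in range(2)]
    for a in range(2)
]

# rho is diagonal here, so its eigenvalues are the diagonal entries [0.5, 0.5]
eigs = [rho[0][0], rho[1][1]]

# von Neumann entropy in bits: S = -sum p*log2(p), giving 1 bit for the subsystem
S = -sum(p * math.log2(p) for p in eigs if p > 0)
```

The full two-qubit state is pure (entropy 0), so the subsystem's 1 bit indeed exceeds the system's entropy, which is the general point of the answer.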
http://www.emis.de/classics/Erdos/cit/85705051.htm
## Zentralblatt MATH

Publications of (and about) Paul Erdös

Zbl.No:  857.05051
Autor:  Alon, Noga; Erdös, Paul; Holzman, Ron; Krivelevich, Michael
Title:  On k-saturated graphs with restrictions on the degrees. (In English)
Source:  J. Graph Theory 23, No.1, 1-20 (1996).
Review:  A graph $G$ is called $k$-saturated if it is $K_k$-free, but the addition of any edge produces a $K_k$ (where $K_k$ is the complete graph of order $k$). There is an old result that if $G$ has order $n$ and is $k$-saturated, then the number of edges in $G$ is at least $(k-2)n - \binom{k-1}{2}$. However, the extremal examples contain vertices of degree $n-1$. This paper defines $F_k(n,D)$ to be the minimal number of edges in a $k$-saturated graph of order $n$ and maximum degree at most $D$. The case $k = 3$ has been studied by Z. Füredi and Á. Seress [J. Graph Theory 18, No. 1, 11-24 (1994; Zbl 787.05054)]. In this paper it is shown that $F_4(n,D) = 4n-15$ for $n > n_0$ and $\lfloor (2n-1)/3 \rfloor \leq D \leq n-2$. Also, it is shown that $\lim_{n \to \infty} F_k(n,cn)$ exists for all $0 < c \leq 1$, except maybe for some values of $c$ contained in a sequence $c_i \to 0$. For sufficiently large $n$, they also construct some $k$-saturated graphs of order $n$ with maximal degree at most $2k\sqrt{n}$, significantly improving earlier results.
Reviewer:  C.Jagger (Cambridge)
Classif.:  * 05C35 Extremal problems (graph theory) 05C65 Hypergraphs
Keywords:  maximum degree; k-saturated graphs
Citations:  Zbl 787.05054

© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
https://math.stackexchange.com/questions/672244/set-notation-what-does-it-mean
# Set notation, what does it mean

Can somebody explain to me what the set notation $C_N M$ means? The C in the given notation is not the letter C; it is some kind of very narrow C. I could not find an alternative way to write it. Thanks.

You probably mean $\complement_N M$, which I first saw in Bourbaki. $\complement_N M$ is the complement of $M$ with respect to $N$, also known as $N \setminus M$. $\complement_N M$ is written as \complement_N M in $\TeX$.
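In most programming languages the same operation is plain set difference; for example, in Python (an illustration, not part of the question):

```python
N = {1, 2, 3, 4, 5}
M = {2, 4}

# The complement of M with respect to N, i.e. N \ M:
complement = N - M
print(complement)  # {1, 3, 5}
```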
https://www.math24.net/integration-hyperbolic-functions/
# Integration of Hyperbolic Functions

The hyperbolic functions are defined in terms of the exponential functions:

$$\sinh x = \frac{e^x - e^{-x}}{2}$$

$$\cosh x = \frac{e^x + e^{-x}}{2}$$

$$\tanh x = \frac{\sinh x}{\cosh x} = \frac{e^x - e^{-x}}{e^x + e^{-x}}$$

$$\coth x = \frac{\cosh x}{\sinh x} = \frac{e^x + e^{-x}}{e^x - e^{-x}}$$

$$\text{sech}\,x = \frac{1}{\cosh x} = \frac{2}{e^x + e^{-x}}$$

$$\text{csch}\,x = \frac{1}{\sinh x} = \frac{2}{e^x - e^{-x}}$$

The hyperbolic functions have identities that are similar to those of trigonometric functions:

$$\cosh^2 x - \sinh^2 x = 1;$$

$$1 - \tanh^2 x = \text{sech}^2 x;$$

$$\coth^2 x - 1 = \text{csch}^2 x;$$

$$\sinh 2x = 2\sinh x\cosh x;$$

$$\cosh 2x = \cosh^2 x + \sinh^2 x.$$

Since the hyperbolic functions are expressed in terms of $e^x$ and $e^{-x}$, we can easily derive rules for their differentiation and integration:

$$(\sinh x)' = \cosh x, \qquad \int \cosh x\, dx = \sinh x + C$$

$$(\cosh x)' = \sinh x, \qquad \int \sinh x\, dx = \cosh x + C$$

$$(\tanh x)' = \text{sech}^2 x, \qquad \int \text{sech}^2 x\, dx = \tanh x + C$$

$$(\coth x)' = -\text{csch}^2 x, \qquad \int \text{csch}^2 x\, dx = -\coth x + C$$

$$(\text{sech}\,x)' = -\text{sech}\,x\tanh x, \qquad \int \text{sech}\,x\tanh x\, dx = -\text{sech}\,x + C$$

$$(\text{csch}\,x)' = -\text{csch}\,x\coth x, \qquad \int \text{csch}\,x\coth x\, dx = -\text{csch}\,x + C$$

In certain cases, the integrals of hyperbolic functions can be evaluated using the substitution

$$u = e^x \;\; \Rightarrow \;\; x = \ln u, \;\; dx = \frac{du}{u}.$$

## Solved Problems
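These identities and rules are straightforward to spot-check numerically. The following Python sketch (an illustration, not part of the original page) verifies a few of them at sample points, using a central finite difference for the derivatives:

```python
import math

def sech(x):
    return 1 / math.cosh(x)

def deriv(f, x, h=1e-6):
    # central finite difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (-2.0, 0.5, 1.7):
    # cosh^2 x - sinh^2 x = 1
    assert abs(math.cosh(x)**2 - math.sinh(x)**2 - 1) < 1e-9
    # 1 - tanh^2 x = sech^2 x
    assert abs(1 - math.tanh(x)**2 - sech(x)**2) < 1e-9
    # sinh 2x = 2 sinh x cosh x
    assert abs(math.sinh(2*x) - 2*math.sinh(x)*math.cosh(x)) < 1e-8
    # (sinh x)' = cosh x and (tanh x)' = sech^2 x
    assert abs(deriv(math.sinh, x) - math.cosh(x)) < 1e-5
    assert abs(deriv(math.tanh, x) - sech(x)**2) < 1e-5
```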
Problems:

1. Calculate the integral $\int \frac{\cosh x}{2 + 3\sinh x}\, dx.$
2. Evaluate the integral $\int \frac{\sinh x}{1 + \cosh x}\, dx.$
3. Evaluate the integral $\int \sinh^2 x\, dx.$
4. Evaluate the integral $\int \cosh^2 x\, dx.$
5. Evaluate $\int \sinh^3 x\, dx.$
6. Evaluate the integral $\int x\sinh x\, dx.$
7. Evaluate the integral $\int e^x\sinh x\, dx.$
8. Evaluate the integral $\int e^{2x}\cosh x\, dx.$
9. Evaluate the integral $\int e^{-x}\sinh 2x\, dx.$
10. Evaluate the integral $\int \frac{dx}{\sinh x}.$
11. Find the integral $\int \frac{dx}{1 + \cosh x}.$
12. Find the integral $\int \frac{dx}{\sinh x + 2\cosh x}.$
13. Find the integral $\int \frac{dx}{\sinh x - \cosh x}.$
14. Find the integral $\int \frac{dx}{3\sinh x - 5\cosh x}.$
15. Evaluate the integral $\int \sinh x\cosh(-x)\, dx.$
16. Calculate the integral $\int \sinh 2x\cosh 3x\, dx.$
17. Find the integral $\int \tanh 2x\, dx.$
18. Find the integral $\int \coth \frac{x}{3}\, dx.$
19. Evaluate the integral $\int \sinh x\cos x\, dx.$

### Example 1

Calculate the integral $\int \frac{\cosh x}{2 + 3\sinh x}\, dx.$

Solution.
We make the substitution $u = 2 + 3\sinh x,$ $du = 3\cosh x\, dx.$ Then $\cosh x\, dx = \frac{du}{3}.$ Hence, the integral is

$$\int \frac{\cosh x}{2 + 3\sinh x}\, dx = \int \frac{\frac{du}{3}}{u} = \frac{1}{3}\int \frac{du}{u} = \frac{1}{3}\ln |u| + C = \frac{1}{3}\ln |2 + 3\sinh x| + C.$$

### Example 2

Evaluate the integral $\int \frac{\sinh x}{1 + \cosh x}\, dx.$

Solution.

Using the substitution $u = 1 + \cosh x,$ $du = \sinh x\, dx,$ we get

$$I = \int \frac{\sinh x}{1 + \cosh x}\, dx = \int \frac{du}{u} = \ln |u| + C = \ln |1 + \cosh x| + C.$$

The hyperbolic cosine is a positive function. Hence, we can write the answer in the form

$$I = \ln (1 + \cosh x) + C.$$

### Example 3

Evaluate the integral $\int \sinh^2 x\, dx.$

Solution.

Combining the identities

$$\cosh^2 x - \sinh^2 x = 1, \qquad \cosh 2x = \cosh^2 x + \sinh^2 x,$$

we write:

$$\cosh 2x = \cosh^2 x + \sinh^2 x = 1 + \sinh^2 x + \sinh^2 x = 1 + 2\sinh^2 x.$$

So we can use the following half-angle formula:

$$\sinh^2 x = \frac{1}{2}\left(\cosh 2x - 1\right).$$

Then the integral becomes

$$\int \sinh^2 x\, dx = \frac{1}{2}\int \left(\cosh 2x - 1\right) dx = \frac{1}{2}\left(\frac{\sinh 2x}{2} - x\right) + C = \frac{1}{4}\sinh 2x - \frac{x}{2} + C.$$

### Example 4

Evaluate the integral $\int \cosh^2 x\, dx.$

Solution.
We reduce the power of the integrand using the identities

$$\cosh^2 x - \sinh^2 x = 1, \qquad \cosh 2x = \cosh^2 x + \sinh^2 x.$$

Then

$$\cosh 2x = \cosh^2 x + \sinh^2 x = \cosh^2 x + \cosh^2 x - 1 = 2\cosh^2 x - 1,$$

and

$$\cosh^2 x = \frac{1}{2}\left(\cosh 2x + 1\right).$$

Now we can find the initial integral:

$$\int \cosh^2 x\, dx = \frac{1}{2}\int \left(\cosh 2x + 1\right) dx = \frac{1}{2}\left(\frac{\sinh 2x}{2} + x\right) + C = \frac{1}{4}\sinh 2x + \frac{x}{2} + C.$$

### Example 5

Evaluate $\int \sinh^3 x\, dx.$

Solution.

Since $\cosh^2 x - \sinh^2 x = 1$ and, hence, $\sinh^2 x = \cosh^2 x - 1,$ we can write the integral as

$$I = \int \sinh^3 x\, dx = \int \sinh^2 x\sinh x\, dx = \int \left(\cosh^2 x - 1\right)\sinh x\, dx.$$

Making the substitution $u = \cosh x,$ $du = \sinh x\, dx,$ we obtain

$$I = \int \left(u^2 - 1\right) du = \frac{u^3}{3} - u + C = \frac{\cosh^3 x}{3} - \cosh x + C.$$

### Example 6

Evaluate the integral $\int x\sinh x\, dx.$

Solution.

We use integration by parts: $\int u\, dv = uv - \int v\, du.$ Let $u = x,$ $dv = \sinh x\, dx.$ Then $du = dx,$ $v = \int \sinh x\, dx = \cosh x.$ Hence, the integral is

$$\int x\sinh x\, dx = x\cosh x - \int \cosh x\, dx = x\cosh x - \sinh x + C.$$
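Answers obtained this way can be verified by differentiating them back. For instance, the antiderivative found in Example 5 should differentiate to $\sinh^3 x$; a quick numerical check in Python (illustrative, not part of the original page):

```python
import math

def F(x):
    # claimed antiderivative from Example 5: cosh^3(x)/3 - cosh(x)
    return math.cosh(x)**3 / 3 - math.cosh(x)

def integrand(x):
    return math.sinh(x)**3

def deriv(f, x, h=1e-6):
    # central finite difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (-1.5, 0.3, 2.0):
    assert abs(deriv(F, x) - integrand(x)) < 1e-5
```

The same check applies to Example 3: differentiating $\frac{1}{4}\sinh 2x - \frac{x}{2}$ gives $\frac{1}{2}\cosh 2x - \frac{1}{2} = \sinh^2 x$.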
https://www.msri.org/seminars/20056
# Mathematical Sciences Research Institute

# Seminar: Commutative Algebra and Algebraic Geometry (Eisenbud Seminar)

April 16, 2013 (03:45PM PDT - 04:45PM PDT)

Location: UC Berkeley

Commutative Algebra and Algebraic Geometry Seminar
Tuesdays, 3:45-4:45, 939 Evans Hall
Organized by David Eisenbud

Abstract: Let $R$ be a standard graded algebra over a field of characteristic $p > 0$. Let $\phi:R\to R$ be the Frobenius endomorphism. For each finitely generated graded $R$-module $M$, let $^{\phi}M$ be the abelian group $M$ with an $R$-module structure induced by the Frobenius endomorphism. The $R$-module $^{\phi}M$ has a natural grading given by $\deg x=j$ if $x\in M_{jp+i}$ for some $0\le i \le p-1$. In this talk, I'll discuss our new characterization of Koszul algebras saying that $R$ is Koszul if and only if there exists a non-zero finitely generated graded $R$-module $M$ such that $\operatorname{reg}_R {}^{\phi}M$
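The grading rule on ${}^{\phi}M$ amounts to integer division of the original degree by $p$; as a toy illustration (not part of the abstract), in Python:

```python
def induced_degree(d, p):
    """Degree in ^phi M of a homogeneous element that lives in M_d:
    deg x = j exactly when d = j*p + i with 0 <= i <= p - 1."""
    return d // p

# In characteristic p = 3, elements of M_6, M_7, M_8 all get degree 2:
print([induced_degree(d, 3) for d in (6, 7, 8)])  # [2, 2, 2]
```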
https://tnboardsolutions.com/samacheer-kalvi-11th-business-maths-guide-chapter-5-ex-5-3/
Students can download 11th Business Maths Chapter 5 Differential Calculus Ex 5.3 Questions and Answers, Notes, Samacheer Kalvi 11th Business Maths Guide Pdf, which helps you to revise the complete Tamilnadu State Board New Syllabus, complete homework assignments, and score high marks in board exams.

## Tamilnadu Samacheer Kalvi 11th Business Maths Solutions Chapter 5 Differential Calculus Ex 5.3

### Samacheer Kalvi 11th Business Maths Differential Calculus Ex 5.3 Text Book Back Questions and Answers

Question 1.
Examine the following functions for continuity at the indicated points.

Solution:
(a) f(x) = $$\frac{x^{2}-4}{x-2}$$ at x = 2, also given that f(2) = 0.

For x ≠ 2, f(x) = $$\frac{(x-2)(x+2)}{x-2}$$ = x + 2, so taking x = 2 - h and x = 2 + h with h → 0 [∵ x = 2 - h, where h → 0, x → 2], both one-sided limits equal 4, whereas f(2) = 0.

∴ The given function is not continuous at x = 2.

(b) Given that f(x) = $$\frac{x^{2}-9}{x-3}$$ at x = 3 and f(3) = 6.

For x ≠ 3, f(x) = $$\frac{(x-3)(x+3)}{x-3}$$ = x + 3. Taking x = 3 - h [∵ x = 3 - h, where h → 0, x → 3], the left limit is 6 - h → 6; taking x = 3 + h [∵ x = 3 + h, where x → 3, h → 0], the right limit is h + 6 → 0 + 6 = 6.

Also given that f(3) = 6.
∴ The given function f(x) is continuous at x = 3.

Question 2.
Show that f(x) = |x| is continuous at x = 0.

Solution:
For x < 0, |x| = -x, so the left limit at 0 is 0 [∵ x = 0 - h]; for x > 0, |x| = x [∵ |x| = x if x > 0], so the right limit at 0 is also 0. Both one-sided limits equal f(0) = 0.
∴ f(x) is continuous at x = 0.
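The one-sided-limit arguments above can also be confirmed numerically; the following Python sketch (an illustration, not part of the textbook solutions) evaluates the functions near the points in question:

```python
def f(x):
    # function from Question 1(b)
    return (x * x - 9) / (x - 3)

h = 1e-7
left, right = f(3 - h), f(3 + h)   # x -> 3- and x -> 3+
# Both one-sided limits approach 6, which equals f(3) = 6 as given,
# so f is continuous at x = 3.
assert abs(left - 6) < 1e-4 and abs(right - 6) < 1e-4

g = abs   # g(x) = |x| from Question 2
# |x| -> 0 from both sides, matching g(0) = 0.
assert g(-h) < 1e-6 and g(h) < 1e-6 and g(0) == 0
```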
https://www.deepdyve.com/lp/springer_journal/igreen-green-scheduling-for-peak-demand-minimization-AO588v1pFc
# iGreen: green scheduling for peak demand minimization

Home owners are typically charged differently when they consume power at different periods within a day. Specifically, they are charged more during peak periods. Thus, in this paper, we explore how scheduling algorithms can be designed to minimize the peak energy consumption of a group of homes served by the same substation. We assume that a set of demand/response switches are deployed at a group of homes to control the activities of different appliances, such as air conditioners or electric water heaters, in these homes. Given a set of appliances, where each appliance is associated with its instantaneous power consumption and duration, our objective is to decide when to activate different appliances in order to reduce the peak power consumption. This scheduling problem is shown to be NP-Hard. To tackle this problem, we propose a set of appliance scheduling algorithms under both offline and online settings. For the offline setting, we propose a constant ratio approximation algorithm (with approximation ratio $$\frac{1+\sqrt{5}}{2}+1$$). For the online setting, we adopt a greedy algorithm whose competitive ratio is also bounded. We conduct extensive simulations using a real-life appliance energy consumption data trace to evaluate the performance of our algorithms. Extensive evaluations show that our schedulers significantly reduce the peak demand when compared with several existing heuristics.

Journal of Global Optimization, Volume 69 (1) – Apr 18, 2017, 23 pages
Publisher: Springer US
Subject: Mathematics; Optimization; Operations Research/Decision Theory; Real Functions; Computer Science, general
ISSN: 0925-5001
eISSN: 1573-2916
D.O.I.:
10.1007/s10898-017-0524-y

Journal: Journal of Global Optimization, Springer Journals
Published: Apr 18, 2017
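The paper's algorithms are not reproduced in this preview. Purely to illustrate the problem setup — appliances with fixed power and duration, scheduled so that the peak load stays low — here is a naive greedy sketch in Python; the appliance data are hypothetical, and this is not the authors' approximation or bounded-competitive-ratio algorithm:

```python
def schedule_greedy(jobs, horizon):
    """Place each (power, duration) job at the start slot that keeps
    the running peak load smallest. Naive illustration only; the
    returned start times follow the energy-sorted job order."""
    load = [0.0] * horizon
    starts = []
    # Consider the most energy-hungry appliances first.
    for power, dur in sorted(jobs, key=lambda j: -j[0] * j[1]):
        best_start, best_peak = None, float("inf")
        for s in range(horizon - dur + 1):
            peak = max(max(load[s:s + dur]) + power, max(load))
            if peak < best_peak:
                best_start, best_peak = s, peak
        for t in range(best_start, best_start + dur):
            load[t] += power
        starts.append(best_start)
    return starts, max(load)

# Hypothetical appliances: (power in kW, duration in time slots).
jobs = [(3.0, 2), (2.0, 3), (1.5, 1), (2.5, 2)]
starts, peak = schedule_greedy(jobs, horizon=6)
print(peak)  # 4.5 here, versus 9.0 if everything started at slot 0
```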
https://linearalgebras.com/solution-abstract-algebra-exercise-7-1-8.html
## Compute the center of the Hamiltonian quaternions

Solution to Abstract Algebra by Dummit & Foote 3rd edition Chapter 7.1 Exercise 7.1.8

Describe $Z(\mathbb{H})$, where $\mathbb{H}$ denotes the Hamiltonian Quaternions. Prove that $\{a+bi \ |\ a,b \in \mathbb{R} \}$ is a subring of $\mathbb{H}$ which is a field but is not contained in $Z(\mathbb{H})$.

Solution: We claim that $Z(\mathbb{H}) = \{ x + 0i + 0j + 0k \ |\ x \in \mathbb{R} \}$. The ($\supseteq$) direction is clear. To see the ($\subseteq$) direction, let $\beta = x + yi + zj + wk \in Z(\mathbb{H})$. Using $i\beta = \beta i$, we have $-zk+wj = zk-wj$. Hence $z=0$ and $w=0$. So $\beta = x+yi$. Using $j\beta = \beta j$, we have $yk = -yk$. Thus $y=0$. It follows that $\beta \in \{ x + 0i + 0j + 0k \ |\ x \in \mathbb{R} \}$.

Now let $S = \{ a + bi \ |\ a,b \in \mathbb{R} \}$. Let $\alpha = a + bi$ and $\beta = c+di$. Since $\alpha - \beta = (a-c) + (b-d)i \in S$ and $0 + 0i \in S$, by the subgroup criterion $S \leq \mathbb{H}$. Since $$\alpha\beta = (ac-bd) + (ad + bc)i \in S,$$ $S$ is a subring of $\mathbb{H}$. Now if $a + bi \neq 0 + 0i$, we see that $$(a+bi)(a-bi)/(a^2 + b^2) = 1;$$ thus $\alpha$ has an inverse in $S$, specifically $$\alpha^{-1} = (a-bi)/(a^2 + b^2).$$ Thus $S$ is a division ring. Moreover, $$\alpha \beta = (ac - bd) + (ad + bc)i = (ca - db) + (da + bc)i = \beta \alpha,$$ so that $S$ is a field. Note, however, that $ij = k$ while $ji = -k$, so that $S \not\subseteq Z(\mathbb{H})$.
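The commutator computations in the proof (e.g. $ij = k$ but $ji = -k$) can be verified mechanically. The following Python sketch (illustrative, not part of the solution) represents a quaternion $a+bi+cj+dk$ as a 4-tuple and implements the standard multiplication rule:

```python
def qmul(p, q):
    """Product of quaternions written as tuples (a, b, c, d) = a + bi + cj + dk."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)   # ij = k, ji = -k
# A real scalar x + 0i + 0j + 0k commutes with every sample element:
x = (5, 0, 0, 0)
for q in (i, j, k, (1, 2, 3, 4)):
    assert qmul(x, q) == qmul(q, x)
# But a + bi with b != 0 is not central: it fails to commute with j,
# matching the yk = -yk step in the proof.
beta = (1, 2, 0, 0)
assert qmul(beta, j) != qmul(j, beta)
```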
https://communities.sas.com/t5/SAS-Statistical-Procedures/Proc-Mixed-need-help-on-basic-question/td-p/43331?nobounce
New Contributor
Posts: 4

# Proc Mixed - need help on basic question

I am trying to run a simple linear regression; however, I have 2 observations for each individual in my sample (each obs. collected on each of two non-consecutive days). I recognize that one option is to take the mean of the observations and run a proc reg, but I was hoping to pool my data to increase my sample size and then correct for the fact that two obs. came from each individual. I understand that proc mixed is an option here, but I am unclear on how to approach this. What I have so far is:

    proc mixed data=new;
      class id;
      model serumHg = fishintake/solution;
      repeated /subject=id;
    run;

Any help would be very much appreciated.

Regular Contributor
Posts: 169

## Re: Proc Mixed - need help on basic question

Is there a constant number of days between observations from one subject to the next? If so, then you could use code which is only slightly modified from the code which you show. For a consistent number of days between the two observations, you could employ the code

    proc mixed data=new;
      class id;
      model serumHg = fishintake/solution;
      repeated /subject=id type=cs;
    run;

An alternate specification of the MIXED procedure which would produce the same result is

    proc mixed data=new;
      class id;
      model serumHg = fishintake/solution;
      random intercept /subject=id;
    run;

Both of the above models assume that the residual variance is the same for each of the two measures. If you believe that is not a tenable assumption, then you could use the code:

    proc mixed data=new;
      class id;
      model serumHg = fishintake/solution;
      repeated /subject=id type=un;
    run;

As mentioned previously, the above models are appropriate if the number of days between observations is consistent from one subject to the next. If that is not the case, then you might need to employ a spatial covariance structure.
(Note that time is the fourth dimension, so spatial structures are appropriate for modeling observations which are more or less distant in time.) Let me make one more comment. You really do not gain in degrees of freedom when using the individual observations as compared with using the subject means. Using the individual observations can be important if there is some complexity to the residual variance structure like when there is a different amount of time between observations. Using the individual observations could also be important if you have period-specific predictors to incorporate into your model. Using the mixed model would also be indicated if you are really interested in understanding components of variance. From the limited description which you have provided, it is my guess that the model in which you average the two responses per subject and regress those on the (single) predictor variable would be just as good for your needs as the mixed model. But that assumption is based on a guess about how your experiment is conducted based on limited information. New Contributor Posts: 4 ## Re: Proc Mixed - need help on basic question Thank you so much for your insight, it was really helpful. I think I will now seriously consider taking the mean of my 2 observations - but just to clarify, my two days of data were collected 3-10 days apart, therefore not consistent from one subject to the next, so in this case you recommend a spatial covariance structure? Regular Contributor Posts: 169 ## Re: Proc Mixed - need help on basic question Whether 3 days or 10 days produce a difference in the covariance structure of the subject-specific values probably depends on a lot of considerations that I don't have knowledge of. From your model, I see that your predictor variable is fishintake. You appear to be modeling serum mercury in fish based on the amount of food that they have consumed - or the serum mercury of an animal which feeds on fish such as river otters. 
How much mercury is taken up and expressed in serum probably depends on fish (or river otter) age. If you are studying juveniles, then a difference of 3 days compared to a difference of 10 days could make a substantial difference. But this is just speculation on my part. You should investigate alternative models starting with the compound symmetry model specified previously (alternatively, the random effects model). For a spatial model, you could use code as follows:

    proc mixed data=new;
      class id;
      model serumHg = fishintake/solution;
      repeated /subject=id type=sp(pow)(time);
    run;

where time is measurement date. The compound symmetry and spatial covariance models are not nested, so you cannot formally test which is better using a likelihood ratio test. However, I would note that the covariance structure of the compound symmetry model can be expressed as

                   | V        V*rho |
    Cov(R1, R2) =  |                |
                   | V*rho    V     |

while the spatial covariance structure can be expressed as:

                   | V                V*(rho**d{12}) |
    Cov(R1, R2) =  |                                 |
                   | V*(rho**d{12})   V              |

where d{12} is the difference in days between the first and second measurement. You will note that both models are identical with the exception that the spatial model incorporates the distance between measurements as a correction to the covariance between the two measures, with the distance between measurements a known quantity (not a parameter to estimate). Thus, whichever of these models has the smaller value of -2LL would be the preferred model. There are other spatial covariance structures which you could employ as an alternative to the spatial power model specified above. See the REPEATED statement syntax for the MIXED procedure for other spatial covariance structures.
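To make the comparison concrete, here is a small Python sketch of the two 2x2 covariance matrices (the values of V, rho, and the day gaps are made up for illustration):

```python
def cs_cov(v, rho):
    """Compound symmetry: within-subject covariance ignores spacing."""
    return [[v,       v * rho],
            [v * rho, v      ]]

def sp_pow_cov(v, rho, d12):
    """Spatial power: covariance decays with d12 = days between visits."""
    return [[v,              v * rho ** d12],
            [v * rho ** d12, v             ]]

# Subjects measured 3 vs 10 days apart get different covariances under
# the spatial power model, but identical ones under compound symmetry:
near = sp_pow_cov(4.0, 0.9, 3)[0][1]    # 4 * 0.9**3
far = sp_pow_cov(4.0, 0.9, 10)[0][1]    # 4 * 0.9**10
assert near > far
assert cs_cov(4.0, 0.9)[0][1] == cs_cov(4.0, 0.9)[1][0]
```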
Again, for the spatial covariance structures which you might employ (sp(exp), sp(gau), sp(lin), sp(linl), sp(sph)), there will not be a likelihood ratio test that allows selection of the best model. Model selection may be based on established literature on the subject or on which model produces the smallest value for -2LL.

New Contributor
Posts: 4

## Re: Proc Mixed - need help on basic question

Thanks very much for your help, I think I know where I can go from here!

Contributor
Posts: 43

## Re: Proc Mixed - need help on basic question

If you only have two observations and these are more or less equidistant, I would simplify the problem, either adjusting by the baseline value or analysing the difference from baseline:

    proc glm data=new;
      model serumHg_second_measurement = serumHG_baseline fishintake /solution;
    run;

or

    proc glm data=new;
      model serumHG_difference = fishintake /solution;
    run;

where serumHG_difference = final - baseline.

Regards,
Juanvte.
http://mathhelpforum.com/geometry/217641-conics.html
1. ## conics

Compute the points of the intersection between the circle x^2+y^2=1 and hyperbola xy=1? I tried to put 1/y in place of x but I couldn't solve.

2. ## Re: conics

Originally Posted by kastamonu
Compute the points of the intersection between the circle x^2+y^2=1 and hyperbola xy=1? I tried to put 1/y in place of x but I couldn't solve.

You have $x^2+y^2=1$ and $xy=1$. We may assume $x,y\not=0$ since they don't lie on the hyperbola anyways. Taking $x=1/y$ as you have tried, $\frac{1}{y^2}+y^2=1$. Multiplying by $y^2$ and rearranging gives $y^4-y^2+1=0$, which has no real solutions. So there is no intersection. In fact, this is obvious if you graphed the two functions.

3. ## Re: conics

There is an answer. By the discriminant we can find the roots. t^2=x^4 and we can find the roots. $\Delta = b^2-4ac$.

4. ## Re: conics

Yes, but the discriminant is negative, so all the roots are non-real numbers in the complex plane. That is to say there is no real solution. As I have suggested earlier, just graphing these two functions makes it clear that they do not intersect on the xy plane.

6. ## Re: conics

It is positive.

7. ## Re: conics

good drawing.

8. ## Re: conics

Originally Posted by Gusbob
Yes, but the discriminant is negative, so all the roots are non-real numbers in the complex plane. That is to say there is no real solution. As I have suggested earlier, just graphing these two functions makes it clear that they do not intersect on the xy plane.

It is positive.

9. ## Re: conics

The equation reduces to t^2-t+1=0 when we put t = y^2. Now the discriminant of the equation is b^2-4ac = (1)^2 - 4*1*1 = 1 - 4 = -3, NEGATIVE.

10. ## Re: conics

It was x^2+y^2=4. I made a mistake.

11. ## Re: conics

Kastamonu ...........
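The substitution discussed in the thread is easy to check numerically. This small Python function (my own illustration, not from the thread) reduces x^2 + y^2 = r^2 with xy = 1 to the quadratic t^2 - r^2 t + 1 = 0 in t = y^2, so the sign of the discriminant decides everything — including why the corrected radius-2 circle does intersect:

```python
import math

def circle_hyperbola_intersections(r_squared):
    """Real intersections of x^2 + y^2 = r_squared with xy = 1.
    Substituting x = 1/y gives y^4 - r_squared*y^2 + 1 = 0,
    a quadratic t^2 - r_squared*t + 1 = 0 in t = y^2."""
    disc = r_squared ** 2 - 4
    if disc < 0:
        return []                      # no real t, hence no real intersection
    roots_t = {(r_squared + math.sqrt(disc)) / 2,
               (r_squared - math.sqrt(disc)) / 2}
    points = []
    for t in sorted(roots_t):
        if t <= 0:
            continue
        y = math.sqrt(t)
        points += [(1 / y, y), (-1 / y, -y)]
    return points

print(circle_hyperbola_intersections(1))        # [] -- the unit circle misses xy = 1
print(len(circle_hyperbola_intersections(4)))   # 4 -- but x^2 + y^2 = 4 meets it
```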
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9666781425476074, "perplexity": 1065.1906401428182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00572-ip-10-171-10-108.ec2.internal.warc.gz"}
https://brilliant.org/practice/astro-what-are-atoms/
### Astronomy We have seen that all matter emits light constantly in the form of thermal radiation. In this quiz, we will take a step further and begin to understand why matter emits EM radiation as we examine the forces within individual atoms. In addition to gravity, matter particles are subject to another long-range force that is crucial to understanding astrophysical phenomena. The electromagnetic force binding atoms together endows each kind of atom with a unique structure that is reflected in its emission spectrum. If you have ever seen Aurora Borealis (the northern lights) or Aurora Australis (the southern lights) you have directly observed a gas emission spectrum. Emission spectra are useful to astronomers because their wavelengths are uniquely related to the electronic structure of atoms. We will begin to understand this connection by pulling on the single electron in the simplest kind of atom, hydrogen. # Atomic Spectra Protons and electrons, subatomic particles within common matter, carry electric charge. Thus, the behavior of atoms is dominated by electric forces. The electric force law, known as Coulomb's law, is strikingly similar to gravity. Two charges $q_1$ and $q_2$ separated by a distance $r$ each feel an electric force $F_e$ according to $F_e=\frac{K_e q_1 q_2}{r^2},$ where $K_e=\SI[per-mode=symbol]{8.99e9}{\joule\meter\per\coulomb\squared}.$ In a hydrogen atom, the electron is measured at a radius of $r_B=\SI{5.29e-11}{\meter}.$ What is the force on the electron? Details and Assumptions: • Both the electron and the proton carry the elementary charge $q_e=\SI{1.60e-19}{\coulomb}.$ This is the basic unit of charge in our universe. # Atomic Spectra Quantum mechanics, the set of rules governing subatomic particles, alters the behavior of an electron in an atom. Despite the electric force's similarity to gravity, electrons do not orbit the nucleus the way planets orbit the Sun. 
In fact, one of the central results of quantum mechanics is that electrons do not have a definite position (until they are measured), only a definite energy. Thus, atomic physics is framed around energy, not forces and accelerations. By analogy with the gravitational force, the potential energy between two charges is $U_e = \frac{K_e q_1 q_2}r,$ where $K_e=\SI[per-mode=symbol]{8.99 e 9}{\joule\meter\per\coulomb\squared}.$ What is the potential energy of an electron measured at a radius of $r_B=\SI{5.29e-11}{\meter}?$ Details and Assumptions: • Both the electron and the proton carry the elementary charge $q_e=\SI{1.60e-19}{\coulomb}.$ • Like gravity, a body attracted to another has negative potential energy until it escapes. # Atomic Spectra In the gravity quiz, we noticed that the kinetic energy $K$ of an object in orbit obeyed a simple relationship with its potential energy $U$. Namely, $K=-\frac U2.$ This means that the total energy of an object within the Sun's gravitational influence $(K+U)$ is $E_\text{tot}=-\frac12 U+U=\frac12 U.$ This is called the virial theorem, and it applies also to electrons and atomic nuclei bound by electric forces. Apply the virial theorem to the result of the previous question to estimate the total energy of an electron orbiting a hydrogen nucleus. Details and Assumptions: • Express your answer in electron-volts $(\si{\electronvolt})$ instead of Joules. $\SI{1}{\electronvolt}=\SI{1.60e-19}{\joule}.$ To avoid working with extremely small quantities, as in the previous question, electron-volts are employed when describing atomic processes. # Atomic Spectra Electrons in hydrogen and other atoms occupy a set of orbitals that have definite energies. In the last question, you found the ground state energy of hydrogen—the lowest energy orbital. Each orbital has a distinct shape corresponding to where the electron is more likely to be measured. One of Hydrogen's electron orbitals. The first few orbital energies are in the table below. 
$\begin{array}{c|c} \hline \text{Orbital}& \text{Energy} \\ \hline \text{Ground}& \SI{-13.6}{\electronvolt} \\ \hline 1^\text{st} \text{ excited}& \SI{-3.4}{\electronvolt} \\ \hline 2^\text{nd} \text{ excited}& \SI{-1.5}{\electronvolt} \\ \hline 3^\text{rd} \text{ excited}& \SI{-0.85}{\electronvolt} \\ \hline 4^\text{th} \text{ excited}& \SI{-0.54}{\electronvolt} \\ \hline \end{array}$ The important thing to remember about electrons is, at any time, they take one of a specific set of energies that are stacked within the atom. For our purposes, we do not need to understand the exact way these orbital energies are calculated using quantum mechanics in order to understand how light is emitted in Aurora Borealis or by gases in a stellar atmosphere. # Atomic Spectra An electron does not have to remain in the same orbital forever. Radiation passing by can be absorbed by an electron, which boosts it into a higher-energy orbital (called an excited state). If you measured the average position and average velocity of the electron in this higher-energy state, what would you measure? Details and Assumptions: • Electron orbitals have complicated shapes. For simplicity, we visualize the energy levels as wavy classical orbits, but keep in mind that the electron is actually smeared over a three-dimensional orbital. # Atomic Spectra Excited electron states are not stable. Like a rock on a hill, the electron can "fall" back to a lower-energy state. Conservation of energy says when the electron loses energy, something else must gain energy. Consequently, the electron emits a single photon as it transitions to a lower-energy state. Suppose an electron emits a photon when it transitions from an excited state of hydrogen with energy $-\SI{1.51}{\electronvolt}$ to another state with energy $-\SI{3.40}{\electronvolt}.$ What is the frequency of the photon emitted? Details and Assumptions: • The energy of the photon is exactly equal to the amount of energy lost by the electron. 
• In the last quiz, we introduced the fundamental relationship between photon energy and photon frequency, $E=hf,$ where Planck's constant $h=\SI{6.63e-34}{\joule\second}.$ • $\SI{1}{\electronvolt}=\SI{1.6e-19}{\joule}.$ # Atomic Spectra The same absorption and emission process occurs high in Earth's atmosphere, and we see it as the aurora. Atmospheric gas atoms, directed by Earth's magnetic field, are excited by solar radiation. As electrons later transition into lower-energy states, light with very specific wavelengths is emitted by the nitrogen and oxygen, the most common gases in Earth's atmosphere. If you measure the wavelength of light emitted by a green aurora, you would find its wavelength to be $\SI{557.7}{\nano\meter}.$ Below is a table with three electron states of oxygen. Can you find a transition between two of these states that produces the wavelength observed in an aurora? $\begin{array}{c|c} \hline \text{Orbital}& \text{Energy} \\ \hline \mathbf{A}& \SI{-13.62}{\electronvolt} \\ \hline \mathbf{B}& \SI{-10.99}{\electronvolt} \\ \hline \mathbf{C}& \SI{-8.77}{\electronvolt} \\ \hline \end{array}$ Details and Assumptions: • $h=\SI{6.63e-34}{\joule\second}.$ • $c=\SI[per-mode=symbol]{3.0e8}{\meter\per\second}.$ # Atomic Spectra We now have a more refined understanding of EM radiation: photons are emitted by electrons (and other charged particles) as they lose energy. In a hot gas, this leads to a characteristic line emission spectrum that astronomers use to identify the gas, no matter how far away the light is emitted. Knowledge of gas spectra is an indispensable tool for an astronomer studying the composition of objects across the universe, as it will be to us moving forward. In this chapter, we have completed a crash course on matter in the universe, and the long-range forces important for understanding its structure. 
Astronomers and physicists assume that the basic properties of matter we observe on Earth can be extended beyond the edges of our galaxy, to the farthest objects whose light can be detected. Now, we are equipped to explore the brilliant work of astronomers through the ages and get a sense of our place in the universe. # Atomic Spectra ×
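Using the constants quoted in the quiz ($K_e$, $q_e$, $h$, $c$, $r_B$), the ground-state energy and the auroral transition can be recomputed in a few lines of Python — a sketch for checking the arithmetic, not part of the quiz itself:

```python
K_e = 8.99e9        # J*m/C^2
q_e = 1.60e-19      # C
eV  = 1.60e-19      # J per electron-volt
h   = 6.63e-34      # J*s
c   = 3.0e8         # m/s
r_B = 5.29e-11      # m

# Coulomb potential energy of the electron, then the virial theorem E_tot = U/2
U = -K_e * q_e**2 / r_B
E_ground = (U / 2) / eV
print(round(E_ground, 1))            # -13.6 eV, the hydrogen ground state

# Oxygen levels from the aurora question (eV); find which downward
# transition best matches the 557.7 nm green line via lambda = h*c / dE
levels = {"A": -13.62, "B": -10.99, "C": -8.77}
target = 557.7e-9                    # m
best = min(((hi, lo) for hi in levels for lo in levels
            if levels[hi] > levels[lo]),
           key=lambda p: abs(h * c / ((levels[p[0]] - levels[p[1]]) * eV) - target))
print(best)                          # the C -> B transition (about 2.22 eV)
```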
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 52, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8562706112861633, "perplexity": 436.5153455177255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391309.4/warc/CC-MAIN-20200526191453-20200526221453-00175.warc.gz"}
http://mathhelpforum.com/differential-equations/154466-solving-first-order-differential-equation.html
# Thread: Solving first order differential equation 1. ## Solving first order differential equation xy^1/2 dy/dx - xy = -y^2/3 initial condition y(1) = 0 The first thing i did was divide it through by x: so i got y^1/2 dy/dx - y = (-y^3/2)/x Then i divided it through by y^1/2 from this I got: dy/dx - y^1/2 = -y/x But now im stuck as to how to solve it, Do i use Bernoulli's equation? or am i totally off...? 2. Well, between one or more steps, you've switched the exponent of y on the RHS. Is it 2/3 or 3/2? That will make a HUGE difference. 3. Oh woops! Its suppose to be 3/2 -.-"" must have typed it wrong 4. Ok. Starting from the original DE: $x\,y^{1/2}\,y'-xy=-y^{3/2}.$ I'm not so sure I would solve for $y'$. You might be able to get a product rule going here. Rearrange: $x\,y^{1/2}\,y'+y^{3/2}=xy.$ I like the idea of dividing through by $y$, which yields: $x\,y^{-1/2}\,y'+y^{1/2}=x.$ Now the LHS is close to a product rule, but not quite. What's missing? 5. uumm What do you mean by Product rule..? 6. I just mean the normal product rule from differential calculus: $(fg)'=f'g+fg'.$ 7. I didn't know we could solve differential equations using the product rule...also I'm not quite seeing how the LHS looks like a product rule apart from having xy^-1/2 8. Sure you can. You can also use the quotient rule. The idea is this: if you can write the LHS as a product rule (fg)', then you can integrate both sides directly, because the LHS is then a total derivative. So, for example, suppose you had to solve the DE $2xyy'+y^{2}=e^{x}.$ You can "notice" that the LHS is a total derivative by seeing that $(xy^{2})'=2xyy'+y^{2}.$ That means you can rewrite the DE as follows: $(xy^{2})'=e^{x}.$ Integrating both sides yields $\displaystyle{\int (xy^{2})'\,dx=\int e^{x}\,dx,}$ and so $xy^{2}=e^{x}+C,$ by the Fundamental Theorem of the Calculus. Solve for $y$ and you're done. In terms of your DE, you've almost got a product rule, but not quite. 
If you look at $(x\sqrt{y})'=\frac{1}{2}\,x\,y^{-1/2}\,y'+y^{1/2},$ you'll see that the entire LHS is very close to being the derivative of a product. The problem is that the coefficients of each term are not the same in this product, whereas the coefficients are the same in your DE. Any ideas on how to fix this? This is a fantastic application of the problem-solving strategy of introducing symmetry where there is none. 9. Thanks for you help I have solved the question already 10. Great! Have a good one.
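For completeness, one way to finish the thread's idea: the substitution u = sqrt(y) turns x·y^(-1/2)·y' + y^(1/2) = x into the linear equation 2x·u' + u = x, whose solution with u(1) = 0 is u = x/3 - 1/(3·sqrt(x)). That closed form is my own derivation — the thread never posts its answer — so here is a quick numerical cross-check with a fourth-order Runge-Kutta integrator:

```python
import math

def f(x, u):
    # 2*x*u' + u = x  =>  u' = (x - u) / (2*x)
    return (x - u) / (2.0 * x)

def rk4(x, u, x_end, n_steps):
    """Integrate u' = f(x, u) from (x, u) to x_end with classic RK4."""
    hstep = (x_end - x) / n_steps
    for _ in range(n_steps):
        k1 = f(x, u)
        k2 = f(x + hstep / 2, u + hstep * k1 / 2)
        k3 = f(x + hstep / 2, u + hstep * k2 / 2)
        k4 = f(x + hstep, u + hstep * k3)
        u += hstep * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += hstep
    return u

exact = lambda x: x / 3 - 1 / (3 * math.sqrt(x))   # candidate closed form, exact(1) = 0

u_num = rk4(1.0, 0.0, 2.0, 1000)
print(abs(u_num - exact(2.0)) < 1e-8)   # True: the closed form satisfies the ODE
```

Squaring u recovers y. Note that y ≡ 0 also passes through y(1) = 0 — the right-hand side is not Lipschitz at y = 0, a typical source of non-uniqueness at such initial points.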
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9013369083404541, "perplexity": 386.6590945702666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345775580/warc/CC-MAIN-20131218054935-00084-ip-10-33-133-15.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-roots-real-and-imaginary-of-y-x-5-5-3x-2-using-the-quadratic
Algebra Topics How do you find the roots, real and imaginary, of y=(x/5-5)(-3x-2) using the quadratic formula? Jul 24, 2017

Answer: See a solution process below:

Explanation: First, we need to write this equation in standard form:

$y = \left(\frac{x}{5} - 5\right)(-3x - 2)$

becomes:

$y = -\left(\frac{x}{5} \cdot 3x\right) - \left(\frac{x}{5} \cdot 2\right) + (5 \cdot 3x) + (5 \cdot 2)$

$y = -\frac{3}{5}x^2 - \frac{2}{5}x + 15x + 10$

$y = -\frac{3}{5}x^2 - \frac{2}{5}x + \frac{75}{5}x + 10$

$y = -\frac{3}{5}x^2 + \frac{73}{5}x + 10$

We can now use the quadratic formula to solve the equation. The quadratic formula states: For $ax^2 + bx + c = 0$, the values of $x$ which are the solutions to the equation are given by:

$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$

Substituting $a = -\frac{3}{5}$, $b = \frac{73}{5}$ and $c = 10$ gives:

$x = \frac{-\frac{73}{5} \pm \sqrt{\left(\frac{73}{5}\right)^2 - \left(4 \cdot \left(-\frac{3}{5}\right) \cdot 10\right)}}{2 \cdot \left(-\frac{3}{5}\right)}$

$x = \frac{-\frac{73}{5} \pm \sqrt{\frac{5329}{25} + \frac{600}{25}}}{-\frac{6}{5}}$

$x = \frac{-\frac{73}{5} \pm \sqrt{\frac{5929}{25}}}{-\frac{6}{5}} = \frac{-\frac{73}{5} \pm \frac{77}{5}}{-\frac{6}{5}}$

$x = \frac{\frac{4}{5}}{-\frac{6}{5}} = -\frac{2}{3}$ and $x = \frac{-\frac{150}{5}}{-\frac{6}{5}} = 25$
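The two roots can be verified exactly with rational arithmetic — plugging each back into the factored form should give zero — and cross-checked against the quadratic formula in floating point. A short illustrative check using Python's fractions module:

```python
from fractions import Fraction
import math

def y(x):
    # the original factored form
    return (x / 5 - 5) * (-3 * x - 2)

roots = [Fraction(-2, 3), Fraction(25)]
print([y(r) for r in roots])          # [Fraction(0, 1), Fraction(0, 1)]

# cross-check with the quadratic formula on -3/5 x^2 + 73/5 x + 10
a, b, c = -3 / 5, 73 / 5, 10.0
disc = math.sqrt(b * b - 4 * a * c)
print(sorted([(-b + disc) / (2 * a), (-b - disc) / (2 * a)]))  # approx [-0.667, 25.0]
```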
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 31, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.888381838798523, "perplexity": 723.7732459315736}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583884996.76/warc/CC-MAIN-20190123023710-20190123045710-00178.warc.gz"}
https://cran.rstudio.org/web/packages/cholera/vignettes/kernel.density.html
# Kernel Density Plot

#### 2023-03-01

By default, the addKernelDensity() function pools all observations:

snowMap()
addKernelDensity()

However, this presupposes that all cases have a common source. To consider the possible existence of multiple pump neighborhoods, the function provides two ways to explore hypothetical scenarios. By using the pump.select argument, you can define a “population” of pump neighborhoods by specifying the pumps to consider:

snowMap()
addKernelDensity(pump.select = c(6, 8))

By using the pump.subset argument, you can define the subset of the “population” to consider:

snowMap()
addKernelDensity(pump.subset = c(6, 8))
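addKernelDensity() draws a two-dimensional kernel density estimate over the case locations; pump.select and pump.subset only change which observations feed the estimate. The estimator itself is simple enough to sketch — here in one dimension and in Python, with invented "case" coordinates, purely to illustrate what pooling versus subsetting changes:

```python
import math

def gaussian_kde(data, grid, bandwidth):
    """Average of Gaussian bumps centred on each observation."""
    norm = 1.0 / (len(data) * bandwidth * math.sqrt(2 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((g - x) / bandwidth) ** 2) for x in data)
            for g in grid]

# invented 1-D 'case locations': two clusters, as if near two different pumps
cases = [1.0, 1.2, 0.8, 1.1, 4.0, 4.3, 3.9, 4.1]
grid = [i * 0.05 for i in range(-40, 161)]      # -2.0 .. 8.0

pooled = gaussian_kde(cases, grid, 0.4)         # all observations (the default)
subset = gaussian_kde(cases[:4], grid, 0.4)     # only the first cluster

# each density integrates to ~1 (Riemann sum over the grid); the subset
# estimate has essentially no mass near the second cluster at x = 4
print(round(sum(pooled) * 0.05, 2), round(sum(subset) * 0.05, 2))
```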
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9381668567657471, "perplexity": 4809.228381361171}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948708.2/warc/CC-MAIN-20230327220742-20230328010742-00795.warc.gz"}
https://www.physicsforums.com/threads/equilibrium-and-wire-tension-problem.51954/
Equilibrium and wire tension problem 1. Nov 8, 2004 FarazAli the problem as in the book "Two guy wires run from the top of a pole 2.6 m tall that supports a volleyball net. The two wires are anchored to the ground 2.0 m apart and each is 2.0m from the pole. The tension in each wire is 95 N. What is the tension in the net, assumed horizontal and attached at the top of the pole?" - I've attached an image. What I tried to do was figure out the distance from one wire to the top of the pole. $$X = (2.6m^2 + 2.0m^2)^{\frac{1}{2}} = 3.28m$$ Then using the sum of torques (which is zero in equilibrium) at the first rope as the pivot point $$\sum\tau = -\tau_{2} + F_{tensioninnet}X = 0 \Rightarrow F_{tensionnet} = \frac{\tau_{2}}{X} = \frac{95N \cdot 2.0m}{3.28m}$$ I get 60N, but the answer in the back of the book is 100 newtons. I also noticed if I multiplied the answer by $$tan(60)$$ (equilateral triangle at the bottom), and I get 100 N Attached Files: • diagram.GIF File size: 2.3 KB Views: 155 Last edited: Nov 8, 2004 2. Nov 8, 2004 Staff: Mentor No need for distances or torques. Find the net horizontal force exerted by the guy wires on the pole. 3. Nov 8, 2004 FarazAli but we don't have a system in only two directions do we? 4. Nov 8, 2004 Staff: Mentor Figure it out in steps. First, find the component in the horizontal plane of the tension from each wire. Then add those two horizontal plane components to find the net force from the wires in the horizontal plane. 5. Nov 8, 2004 FarazAli so I resolve the tension into components. You would need an angle to resolve the x-component. We have none. So what I tried was using the law of sines, $$sin^{-1}\left(\frac{sin 90 \cdot 2.6m}{3.8m}\right) = \theta = 43.2$$. So I multiplied the tension times cosine of this to get 65N. So the tension in the net is supposed to be two times this (two guy wires), which is 138N. The answer is 100 N however 6. Nov 9, 2004 Staff: Mentor You used 3.8m instead of 3.28m. 
These horizontal-plane components are not parallel, so you can't just add them like scalars. Once you get the correct horizontal component, find the angle the two components make and add them like vectors. Think of the coordinate system this way: the z-axis is vertical (along the pole); the x-axis is parallel to the net; the y-axis perpendicular to the net. The force that each wire exerts on the pole has components along each axis. First find the component of each in the x-y plane (what I've been calling the horizontal plane). Once you've done that, add those two vectors to find the net force in the horizontal plane.
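Following the mentor's recipe, the numbers do work out to the book's answer. A quick Python check — the 60° angle comes from the equilateral triangle formed by the two anchors and the pole base:

```python
import math

h, anchor_dist, T = 2.6, 2.0, 95.0    # pole height, anchor distance, wire tension (from the problem)
L = math.hypot(h, anchor_dist)        # wire length, about 3.28 m
T_horiz = T * anchor_dist / L         # horizontal-plane component of one wire's pull

# anchors are 2.0 m apart and each 2.0 m from the pole base: equilateral triangle,
# so the two horizontal components are 60 degrees apart; their vector sum is
# symmetric about the net direction
net = 2 * T_horiz * math.cos(math.radians(30))
print(round(net))                     # about 100 N, matching the book's answer
```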
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8911838531494141, "perplexity": 704.2968687447091}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320040.36/warc/CC-MAIN-20170623082050-20170623102050-00230.warc.gz"}
https://www.expii.com/t/measurement-uncertainty-8103
Measurement Uncertainty - Expii

There is always some uncertainty in measurements. The goal is to minimize error and calculate what impact it has on the quality of an experiment.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9901902675628662, "perplexity": 748.7973658408216}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517485.8/warc/CC-MAIN-20220517130706-20220517160706-00109.warc.gz"}
https://math.stackexchange.com/questions/78588/solving-math-word-problems-without-brute-force
# Solving math word problems WITHOUT brute force

How can we solve these problems without using brute force? http://edhelper.com/math/multiplication51.htm

• What is brute force? Is it using logic to reduce the number of possible cases, then testing those? – The Chaz 2.0 Nov 3 '11 at 14:53
• It also looks like they are intended for 5th graders. I'm working on applying some logic to it now. Brute force would be writing a computer program in order to solve this or (shudder) manually trying every combination. I think we can do better.. – foodals Nov 3 '11 at 14:53
• this can be done by applying systems of equations with several variables – Peđa Terzić Nov 3 '11 at 15:06
• pedja, how? can you show an example? – foodals Nov 3 '11 at 15:11
• These are known as "cryptarithms" or "verbal arithmetic". You can get started in Wikipedia's page on solving them. As the page notes, one usually uses a mix of logic to eliminate possibilities and reduce to a small number, and then a bit of trial-and-error. – Arturo Magidin Nov 3 '11 at 16:31

AND$\times$NOT$=$SOCKS

$(100A+10N+D)\times(100N+10O+T)=10000S+1000O+100C+10K+S$

So you have to calculate the left-hand side. You will get an expression similar to the right-hand side. Use the fact that the equality is true if corresponding coefficients are equal. Applying this condition you should get $5$ equations with $8$ unknown variables.

• $A\cdot N=S$ or $D \cdot T=S$ for example... – Peđa Terzić Nov 3 '11 at 16:07
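The system-of-equations framing pairs naturally with the "logic to eliminate possibilities, then a bit of trial-and-error" mix mentioned in the comments. This Python sketch (my own, not from the thread) fixes AND first, which pins down N as the leading digit of NOT, so only about 90,000 products are ever formed instead of all 10P8 ≈ 1.8 million digit assignments; whether the puzzle has a solution is left to the search itself:

```python
def solve_and_not_socks():
    """All digit assignments with AND * NOT = SOCKS, distinct digits per letter,
    no leading zeros. Shared letters (N, O, S) prune the search heavily."""
    solutions = []
    for and_ in range(102, 1000):
        a, n, d = and_ // 100, (and_ // 10) % 10, and_ % 10
        if n == 0 or len({a, n, d}) < 3:
            continue                          # NOT needs a nonzero leading digit N
        for ot in range(100):
            o, t = ot // 10, ot % 10
            if len({a, n, d, o, t}) < 5:
                continue
            not_ = 100 * n + ot
            product = and_ * not_
            if not 10000 <= product <= 99999:
                continue                      # SOCKS must be five digits
            s, o2, c, k, s2 = (int(ch) for ch in str(product))
            # pattern S O C K S: first digit equals last, middle O matches NOT's O
            if s == s2 and o2 == o and len({a, n, d, o, t, s, c, k}) == 8:
                solutions.append((and_, not_, product))
    return solutions

for and_, not_, socks in solve_and_not_socks():
    print(f"{and_} x {not_} = {socks}")
```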
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8054169416427612, "perplexity": 769.0899215582112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250597458.22/warc/CC-MAIN-20200120052454-20200120080454-00169.warc.gz"}
http://www.thespectrumofriemannium.com/2017/07/08/log181-duality-and-gravity-a-short-note/
# LOG#181. Duality and gravity: a short note.

The 2nd superstring revolution circled/circles about (electromagnetic) duality. In D=4 (or a general space-time), the duality transformations for the electromagnetic field strength read $F_{\mu\nu}\rightarrow \tilde{F}_{\mu\nu}$ and for the dual field 2-curvature $\tilde{F}_{\mu\nu}\rightarrow -F_{\mu\nu}$, where $\tilde{F}^{\mu\nu}=\frac{1}{2}\varepsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}$. They are a true symmetry for the Maxwell equations in vacuum, $\partial_\mu F^{\mu\nu}=0=\partial_\mu \tilde{F}^{\mu\nu}$, and for the whole inhomogeneous Maxwell equations if you add magnetic charges (e, g) and a magnetic current alongside the electric one, and interchange both electric and magnetic degrees of freedom. For the electric and magnetic part: $E\rightarrow B$, $B\rightarrow -E$. Indeed, this symmetry can be thought of as some kind of phase-space or phase space-time symmetry. Note the resemblance of the corresponding action integrals for the field and its dual. In fact, the last action has a symmetry under duality rotations. Some people argue against it because it is not Lorentz invariant in the plane, but a deeper analysis suggests this criticism is naive and wrong.

Well, the point is… What happens with gravity? Gravitational "charge" is mass/energy. Thus, one would ask if there is a kind of duality in the gravitational sector of our gravitational theories. It is subtle. Effectively, you can dualize the theory: you can take the dual of the scalar curvature. But it comes at a certain price. You have to introduce the "gravitational magnetic charge", also called the NUT parameter, into the theory. The Schwarzschild mass is complexified as $M = m + iN$, where $m$ is the Schwarzschild mass, $N$ the NUT charge and $M$ the fully dualized gravitational mass (cf. the electromagnetic fully dualized charge, $q = e + ig$). Moreover, EM duality is often an off-shell symmetry (i.e. a symmetry of the action, not just of the equations of motion, or on-shell symmetry). It has certain caveats, and people got disoriented about how to define a theory where the degrees of freedom are so large and with such a big symmetry. After the 1995 revolution, many people thought of the duality principle: all fields and dual fields must be treated "democratically".
Have a look at this table and the branes on it: Therefore, under duality invariance, any p-form gauge field and its dual $(D-p-2)$-forms should appear equally in the action or equations of motion in a dual formulation. Using electromagnetic duality in the gravity sector has proved to be a formidable task, since, promoted to a symmetry of the theory, it implies a certain infinite-dimensional group content that has not been fully solved. Peter West has conjectured that the Kac-Moody groups $E_{11}$ (or $E_{10}$) are behind M-theory, maximal SUGRA in D=11 and the final theory of everything (TOE), but it is yet to be proved. Are you a SUGRA fan? Read Peter West's papers on this subject. It is hard but enlightening to see the good and bad points of the proposal. Nowadays, duality is used for much more. After the 1995 revolution, it was also found that theories defined on anti-de Sitter space-times (AdS spacetimes for short) could be described by a QFT on the boundary thanks to a clever application of the holographic principle by Maldacena. This equivalence (duality) is now dubbed the AdS/CFT correspondence, but it is also a "duality" in a generalized sense. That is what the duality revolution left us: theories can be described very differently according to the degrees of freedom or "coordinates" you use. But, then, the issue is… Why choose some over others? And even more… Are they "real"? Does it even matter? After all, no one has yet seen a magnetic monopole, a dyon (a particle with both electric and magnetic charges) or a NUT gravimagnetic mass… What do you think? See you in another blog post! May the duality and the Dp-branes be with you! P.S.: I like TOEs, but I dislike the preference for certain classes of branes in the current M-theory formulation. I believe some day we will formulate M-theory with any p-brane dimension in a self-consistent way in any space-time dimension, enlarging our idea of space-time. Beyond time, beyond space, beyond space-time and phase space-time. 
Whether SUSY matters, in any form, is yet to be seen.
https://www.math.columbia.edu/~woit/wordpress/?p=11387
# Various and Sundry

Update: Wired describes job opportunities for physicists and astrophysicists in the fashion industry.

### 6 Responses to Various and Sundry

1. Anthony Reynolds says: I thought Scott Aaronson's blurb about Sean Carroll's book was a bit over the top. Tears of joy? Really?

2. Peter Woit says: Anthony Reynolds, Aaronson is the first person thanked in the book, for extensive editorial help. I suspect he may be the one responsible for keeping the multiple universe woo to a minimum in the book itself. Too bad he likely had no say on the book jacket. I do wonder what he thinks of the ongoing Carroll multiple worlds promotional campaign.

3. Shantanu says: Peter: I don't think the KICP conference on cosmic controversies is livewebcast (despite the link you mentioned). Only the panel debates are put up online.

4. Peter Woit says: Shantanu, I'm getting the livestream at that link right now, Arkani-Hamed is on…

5. tulpoeid says: So, the physics briefing book looks really good as an up-to-date summary of the field. However, one sentence in the linked CERN Courier article grabbed my attention: "Readers are reminded that the discovery of neutrino oscillations constitutes a 'laboratory' proof of physics beyond the Standard Model." Last I checked, there was nothing "proving" BSM physics in neutrino oscillations. I went through that part in the book looking for developments that I might have missed and there are none. Actually, quoting Par. 6.1.1, "To obtain finite neutrino masses, the Standard Model has to be extended in some way. A minimal extension is to introduce gauge singlet neutrinos (so-called right-handed or sterile neutrinos) which would allow to write down a Dirac mass term for neutrinos, in the same way as for all other fermions.
This could indeed be the only source of neutrino masses, but in this case coupling constants need to be smaller than 10^-11 and lepton-number conservation has to be postulated as a fundamental symmetry." Despite clarifying that the SM might turn out to be adequate, a few lines above it is stated indeed that "The discovery of neutrino oscillation proves that neutrinos have non-zero masses. This is one of the few solid experimental proofs of physics beyond the Standard Model, as new interactions or new elementary particle states are needed to introduce this mass term in the Lagrangian." I'm not saying that I wouldn't like neutrino masses to turn out BSM, but … is neutrino hype the new selling point? Are we so desperate now?

6. Peter Woit says: tulpoeid, I pretty much agree, although the "neutrino masses are BSM" argument is a common one. What people have in mind is the argument that if you don't add a sterile right-handed neutrino field and just have Majorana mass terms, the usual Higgs sector won't do it, you need something else. However, you can just add a sterile right-handed neutrino field and have exactly the same kind of Dirac mass terms as for the other fermions (then you have to explain why the Yukawas are so small, but you don't understand anything about the values of Yukawas anyway). To me, calling such a scenario "BSM" is kind of misleading.
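The 10^-11 figure quoted from the briefing book is easy to reproduce with a back-of-envelope sketch (mine, not from the comments; the 0.1 eV neutrino mass is an illustrative assumption, not a measured value). For a Dirac mass $m = y\,v/\sqrt{2}$ with the Higgs vev $v \approx 246$ GeV:

```python
import math

v = 246e9          # Higgs vacuum expectation value, in eV
m_nu = 0.1         # illustrative neutrino mass, in eV

# Dirac mass term m = y * v / sqrt(2)  =>  Yukawa coupling y = sqrt(2) * m / v
y = math.sqrt(2) * m_nu / v
print(f"y ~ {y:.1e}")          # ~ 5.7e-13, comfortably below the 1e-11 figure

# Compare with the electron, the next-lightest charged fermion (m_e = 511 keV):
y_e = math.sqrt(2) * 511e3 / v
print(f"y_e ~ {y_e:.1e}")      # ~ 2.9e-06
```

So the Dirac option works numerically; the "why is it so small" question is exactly the one Woit notes we cannot answer for any of the Yukawas.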
https://chemistry.stackexchange.com/questions/68743/iupac-numbering-of-carbon-atoms-in-a-chain/68759
IUPAC numbering of carbon atoms in a chain

A chemistry handout on the nomenclature of organic compounds that I've been reading states a particular rule for every functional group, viz.

If there is a choice in numbering not previously covered, the parent chain is numbered to give the substituents the lowest number at the first point of difference.

Can anyone explain what it means? $\ce{CH3-CH2-CH2-CHCl-CH2-CH3}$
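The rule is a tie-breaker: number the chain from each end, list the substituent locants in increasing order, and at the first position where the two lists differ, pick the numbering with the smaller value. Python's tuple comparison happens to implement exactly this "first point of difference" order, so the rule can be sketched as follows (not an official IUPAC tool, just an illustration; the two examples are mine, not from the handout):

```python
def choose_locants(*candidate_numberings):
    """Pick the locant set that is lowest at the first point of difference."""
    # Tuples compare element by element, i.e. at the first point of difference.
    return min(tuple(sorted(c)) for c in candidate_numberings)

# CH3-CH2-CH2-CHCl-CH2-CH3: Cl sits on C4 counted from the left end,
# on C3 counted from the right end -> number from the right: 3-chlorohexane.
assert choose_locants([4], [3]) == (3,)

# With several substituents, the first difference decides: (2,7,8) beats
# (3,4,9) because 2 < 3 at the first position, even though 7 > 4 later on.
assert choose_locants([2, 7, 8], [3, 4, 9]) == (2, 7, 8)
```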
https://www.mapleprimes.com/users/Al86/replies
## @Carl Love thanks for the correctio... @Carl Love thanks for the correction! I forgot the proper word for describing what I meant.

## @Thomas Richard    OK, indee... OK, indeed you are correct. I checked again and it seems I mistyped in the equation: (a^2*sin(theta)^2 - Delta^2)*((a^2 + r^2)^2 - a^2*Delta*sin(theta)^2)*sin(theta)^2/rho^4 - 4*a^2*M^2*r^2*sin(theta)^4/rho^4; the first appearance of Delta in this expression should be without the square... :-) Now, it seems to be working: 1=1. BTW, how do you suggest I get Maple to show all the steps of this expansion?

## @Thomas Richard  You can check the... You can check the post that I posted in PF here: There are screenshots of the textbook's pages where this expression appears.

## @Mac Dude   I know that the expre... @Mac Dude I know that the expression I typed should be expanded to -\Delta*\sin^2(\theta) in the end since it's written in the textbook I got this expression from. Schutz's A First Course in GR second edition on pages 309-313.

## @acer I tried using the plots:display co... @acer I tried using the plots:display command by reading the help in maple, here's what I wrote: HH := 5*log[10]((c/H_0)*Int(1/(A*(1+zp)^4+B*(1+z)^3+C)^(1/2)/10,                              zp=0..z,                              method=_d01ajc, epsilon=1e-5)):    F:=plot(eval(HH,[A=0, B=0.31, C=0.69, H_0=1,c=1]), z=1e-7 .. 1.0,       thickness=3, color=red, smartview=false); G:=plot(eval(HH,[A=0.7, B=0.3, C=0 H_0=1,c=1]), z=1e-7 .. 1.0,       thickness=3, color=blue, smartview=false);  H:=plot(eval(HH,[A=0.6, B=0.1, C=0.3, H_0=1,c=1]), z=1e-7 .. 
1.0,       thickness=3, color=green,smartview=false);   display({F,G,H},axes=boxed,,scaling=constrained,title='3 comsological models') But I get an error message: Error, invalid function arguments Typesetting:-mambiguous(HH Assign 5astlog(10)ApplyFunction((csol H_0)astIntApplyFunction(1sol(Aast(1 + zp)circ4 + Bast(1 + z) circ3 + C)circ(1sol2)sol10comma   zpequals0period;&periodzcomma methodequals_d01ajccomma epsilonequals1e-5))colon    FAssign plotApplyFunction(evalApplyFunction(HHcomma(Aequals0comma B equals0.31comma Cequals0.69comma H_0equals1commacequals1))comma zequals1e-7 period;&period 1.0comma   thicknessequals3comma colorequalsredcomma smartviewequalsfalse)semi GAssignplotApply\ Function(evalApplyFunction(HHcomma(Aequals0.7comma Bequals0.3 comma Cequals0 H_0equals1commacequals1))comma zequals1e-7 period;&period 1.0comma  thicknessequals3comma colorequalsblue comma smartviewequalsfalse)semi  HAssignplotApplyFunction(eval ApplyFunction(HHcomma(Aequals0.6comma Bequals0.1comma Cequals 0.3comma H_0equals1commacequals1))comma zequals1e-7 period;&period 1.0comma  thicknessequals3comma colorequalsgreen commasmartviewequalsfalse)semi  displayApplyFunction Typesetting:-mambiguous(((FcommaGcommaH)commaaxesequalsboxed commacommascalingequalsconstrainedcommatitleequals(3 comsological model)), Typesetting:-merror("invalid function arguments"))   ) How to fix this? Thanks!

## @acer I still don't understand how t... @acer I still don't understand how to plot three plots in one graph, can you please write an example for me? Thanks!

## @acer How can I plot all the three plots... @acer How can I plot all the three plots with 3 different A,B,C parameters in each graph? All three different plots in the same graph. Thanks!

## @Mariusz Iwaniuk I am a member of so man... @Mariusz Iwaniuk I am a member of so many forums, that if I don't use something frequently I am bound to forget.

## @Kitonum I meant that I want to plot two...
@Kitonum I meant that I want to plot two functions: 1. f(x):=a*x*(1-x)+(x/4)*ln(x)+(1-x)*ln(1-x) which satisfies f'(x1)=f'(x2)=b=(f(x1)-f(x2))/(x1-x2) in the interval [x1,x2]. 2. f(x):=a*x*(1-x)+(x/4)*ln(x)+(1-x)*ln(1-x) where in order to find the parameter a one needs to solve f'(x0)=0, find x0 with respect to a, and then find a with the condition f''(x0)=0. Is this better understood? Can you implement this for me?

## A follow-up question.... @Kitonum I have the following follow-up question to the original opening question. I have the function f(x):=a*x*(1-x)+(x/4)*ln(x)+(1-x)*ln(1-x) which satisfies f'(x1)=f'(x2)=b=(f(x1)-f(x2))/(x1-x2) I want to find the graph of f above in the interval [x1,x2] or [x2,x1] alongside the same function f but which also satisfies, at the point of intersection with the above graph, f''(x)=0. How to implement this in maple code?

## @Preben Alsholm  Well I am trying ... Well I am trying to solve the following problem from Murdock's Perturbations: Theory and Methods. "Exercise 2.6.1 (a) Letting f(y)=y^3, investigate the correct and incorrect perturbation problems as follows. Attempt to solve (2.6.3)-(2.6.4) for y1. Obtain the differential equation for y1, find its general solution, and show that the constants of integration cannot be chosen to satisfy the boundary conditions." where (2.6.3) y''+n^2 \pi^2 y = epsilon*f(y) , y(0)=0 , y(1)=0; (2.6.4) y ~ A sin(n\pi *x) + epsilon*y1(x)+epsilon^2*y2(x)+... So I plugged in the first-order in epsilon ansatz of A sin(n\pi*x)+epsilon*y1(x); so I get the following ode: y1''+n^2 pi^2 * y1 = A^3 sin^3(n*pi*x), y1(0)=y1(1)=0. So it seems both of you solved my problem. I need to remember this option of assuming n::integer. Thanks.

## @Kitonum how to find those values algebr... @Kitonum how to find those values algebraically? I mean I have: epsilon = sqrt((x-2)^2(3-x)) and then I plug in the values x=0, x=2 or 3 and get: +-2*sqrt(3), 0. Are those the values I am looking for? I am not sure I understand why.
## @Carl Love thanks. Perhaps you know how... @Carl Love thanks. Perhaps you know how to answer my following question from Murdock's text, which I asked on stackexchange and didn't get an answer to. "Using the graph of y=(x-2)^2(x-3), sketch the bifurcation diagram of (x-2)^2(x-3)+epsilon^2=0 for all real epsilon. Notice that this diagram is symmetrical about the x axis. Find the values of epsilon at the two pair bifurcation points". My question is about the last sentence: how to find these values?

## @Carl Love Well the equation is epsilon ... @Carl Love Well the equation is epsilon = -(x-2)^2(x-3); in that case, how to use the Bifurcation command here? I've only seen an example for the logistic equation.

## @Rouben Rostamian  I know there'... @Rouben Rostamian  I know there's a feature in maple where you can use iterativemaps, I looked at it in the help and also on the web when searching bifurcation with maple. Do you perhaps know how to use this feature here? Thanks!
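For the record, the `Error, invalid function arguments` in the plotting attempt quoted above comes from plain syntax slips, not from `display` itself: a missing comma in `C=0 H_0=1`, a doubled comma after `axes=boxed`, and a multi-word title wrapped in single quotes instead of a string. A corrected sketch (my guess at the intent, untested here, assuming the same `HH`, `F` and `H` definitions):

```maple
G := plot(eval(HH, [A = 0.7, B = 0.3, C = 0, H_0 = 1, c = 1]), z = 1e-7 .. 1.0,
          thickness = 3, color = blue, smartview = false);
plots:-display({F, G, H}, axes = boxed, scaling = constrained,
               title = "3 cosmological models");
```

Using the fully qualified `plots:-display` (or `with(plots)` first) also avoids the bare `display` name not being bound.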
https://www.physicsforums.com/threads/how-to-get-coefficient-of-f-k-without-mass-and-without-constant-force.553911/
# How to get coefficient of f^k without mass and without constant force 1. Nov 25, 2011 ### MrMaterial 1. The problem statement, all variables and given/known data When mass M is at the position shown, it is sliding down the inclined part of a slide at a speed of 1.99 m/s. The mass stops a distance S2 = 2.5 m along the level part of the slide. The distance S1 = 1.18 m and the angle θ = 27.70°. Calculate the coefficient of kinetic friction for the mass on the surface. 2. Relevant equations *maybe relevant* w=F*d f^k = (Normal Force) * uk mechanical energy = (potential energy: h*m*g)+(kinetic energy: 1/2m*v^2) the general idea that kinetic friction is going to take away from the total energy. 3. The attempt at a solution My first attempt actually involved kinematic equations, however i soon learned that this was a silly route (i will need to know at least "m" if i were to solve it this way...) So then i tried with mechanical energy... Potential energy = h(which is 1.18sin(27.7 deg) * m * g(which is 9.8m/s^2) = 5.375m Kinetic energy = 1/2m*v(which initially is 1.99)^2 = 1.98m so E_mech,i = (7.355m)J and E_mech,f = 0 This also made me realize that i chose the wrong method... Then i decided to analyze f^k a little further. I know it has a similar relationship to f = ma. This way i can effectively get rid of this whole "m" thing. However, I don't have any evidence of what the object is doing after it exits the slope. All i know is that it gains x amount of velocity going down the 27.7 deg slope in 1.18m, then it takes 2.5m of f^k to slow it down to a stop. I then decided to try it out as a sort of work problem. I also decided to draw a straight line between x_i and x_f..... So yeah I am obviously not getting the point here. I think there is some basic info in this question that i am supposed to be paying attention to, but i'm not. This is a problem in a section where we talk about energy, power, and work. 
The whole thing where the object "m" suddenly goes from a theoretical acceleration due to the Normal Force minus f^k to a pure deceleration of f^k is really giving me hell. I know just about nothing about this object's movement, all i really know is that at some point it moved from the top of that slope to the x_f point in an unknown period of time. Last edited: Nov 25, 2011 2. Nov 25, 2011 ### Staff: Mentor The figure you refer to doesn't show up. Can you upload it? 3. Nov 25, 2011 ### MrMaterial ok, does it show now? 4. Nov 25, 2011 ### Staff: Mentor Yes it does. Thanks. It looks like conservation of energy is the way to approach this. You're given two distances and you should be able to find expressions for the frictional force for each section (leave the mass as a variable, M. It'll cancel out eventually). When the mass comes to rest, all of its kinetic energy will have been given up to work done by friction. 5. Nov 25, 2011 ### MrMaterial thanks for the direction, I am going to try this one again. 6. Nov 25, 2011 ### MrMaterial okay so here goes So i figured out that kinetic energy is the spotlight here, i can disregard potential energy because it's a useless value in this equation. here are my variables: ki = starting kinetic energy = 1.98(mass) kf = final kinetic energy = 0 k(s1) = kinetic energy at the end of distance "s1" = ki + ((fg-f^k)*s1) fg = positive force on object during distance "s1" (in direction of its motion) = 9.8(mass)*sin(27.7deg) f^k(s1) = kinetic friction in the direction that opposes motion during distance s1 = 9.8(mass)*cos(27.7deg)*uk f^k(s2) = kinetic friction opposing motion during distance s2 s1 = distance1=1.18m s2 = distance2=2.5m So first i took the kf = ki + w equation and made kf be the point at the end of s1 because i want to split this up into s1 and s2 so i can single out the friction force. k(s1) = ki + ((fg-f^k)*s1) then i took the formula again and...
0 = k(s1) + w that turns into -w = k(s1) now to make it bigger and uglier... -(f^k(s2)*s2) = ki + ((fg-f^k)*s1) and uglier -(f^k(s2)*s2) = 1.98m + ((9.8(mass)*sin(27.7deg)-9.8(mass)*cos(27.7deg)*uk)*s1) and uglier -(9.8m*uk*s2) = 1.98 + ((9.8(mass)*sin(27.7deg)-9.8(mass)*cos(27.7deg)*uk)*s1) EVENTUALLY I get that uk = 6.055... which is the wrong answer. I am pretty sure there is some way easier solution to this problem. I can't even find problems like it online! I almost wonder if i keep misreading this problem, or just don't have enough basic experience with energy conservation to be able to tackle it. 7. Nov 25, 2011 ### Staff: Mentor I think you're dissecting the problem too finely as you try to work through the minutiae step by step. Let's take a step back. In the broadest picture you've got sources of energy and sinks for energy. At the start there are two sources for energy: the initial kinetic energy of the object and the initial potential energy of the object. At the end all the available energy from the sources has been given up to work done by friction. So start by adding up your assets. What's the initial total available energy? 8. Nov 25, 2011 ### MrMaterial The total available energy, or "mechanical energy" as I have been informed, is 7.355(mass)J ui = (1.18*sin(27.7deg)*9.8m)J or (5.375*mass)J ki = (1.98*mass)J So now that I look at it this way, I suppose that the important function s1 serves is that it is the period over which all of the potential energy turns into kinetic energy; s2, then, would be the period over which all of the kinetic energy then dissipates.
I've been looking at my notes and I think I will now try a changing mechanical energy model: Ef = Ei + W where: Ef = final mechanical energy = 0 Ei = starting mechanical energy = (7.355*mass)J W = work done by a non conservative force such as friction 0 = (7.355*mass)J + W that would mean -W = (7.355*mass) w = force*distance, and i am going to assume the distance, since it is the work of friction, is s1+s2=3.68m Also F is the kinetic friction force which is 9.8*mass*uk -(9.8*mass*uk*3.68) = (7.355*mass)J now i will divide mass -(9.8*uk*3.68)=7.355J -(uk)=7.355/36.064 uk = -0.203 Now i am going to assume that i messed up a sign somewhere and that uk actually = 0.203... WOOT! ok so the answer IS uk=0.203! 9. Nov 25, 2011 ### Staff: Mentor Not to harsh your buzz or anything but you should keep the division between the two sections of the trajectory since they will each experience a different amount of energy loss per unit length. Note that S1 is along a slope so it will have a lower normal force (to cause friction) than the horizontal section S2. On S2 the normal force is just Mg, so the friction there is μkMg. On the slope it'll be different. You can still equate the starting mechanical energy to the energy lost to friction, but you'll need to take these two sections into account. The energy lost on the horizontal section is F*d = μkMgS2. What is it on the slope? 10. Nov 25, 2011 ### MrMaterial if energy lost on the horizontal section is F*d = ukMgS2 I suppose on the incline the normal force would be 9.8M*cos(27.7deg) which means f^k would be 9.8M*cos(27.7deg)*uk so the energy lost on the incline is 9.8M*cos(27.7deg)*uk*S1? and that just leaves the energy gained on S1, which would be (5.37*mass)J which is what was the potential energy prior. And, of course, the initial kinetic energy of (1.98*mass)J and I think that leaves me with enough to solve the problem. 11. Nov 25, 2011 ### Staff: Mentor Yup. 
So the total energy lost on both sections is: $$\mu_k M g S_2 + \mu_k M g cos(\theta)S_1 = \mu_k M g (S_2 + cos(\theta) S_1)$$ Equate this to your initial mechanical energy and you're home free.
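Plugging numbers into the final energy balance above, with the mass cancelled, v^2/2 + g*S1*sin(theta) = mu_k*g*(S2 + S1*cos(theta)), gives mu_k just over 0.21, a bit above the 0.203 found earlier by (incorrectly) applying the flat-ground friction force to the whole path. A quick check with the problem's numbers (my addition, not part of the thread):

```python
import math

v, s1, s2 = 1.99, 1.18, 2.5           # m/s, m, m
theta = math.radians(27.70)
g = 9.8                                # m/s^2, as used in the thread

# Initial mechanical energy per unit mass = friction work per unit mass:
#   v^2/2 + g*s1*sin(theta) = mu_k * g * (s2 + s1*cos(theta))
mu_k = (v**2 / 2 + g * s1 * math.sin(theta)) / (g * (s2 + s1 * math.cos(theta)))
print(f"mu_k = {mu_k:.3f}")            # ~ 0.212
```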
https://www.physicsforums.com/threads/complex-numbers.182793/
# Complex numbers

1. Sep 3, 2007 ### JPC

hey i know the basics about complex numbers like: 5*i^7 = 5*i^3 = 5 * -i = -5i = (- pi/2, 5) but now: how would i represent -> 1 ^ i = ? = ( ? , ? ) or would it involve another mathematical dimension and be more of a (? , ? , ?) ? //////////////////// and now, how can i draw a cube of length = i /////// i mean, at my school, they told me how to use i, but not how to understand it. we don't even really know why we have the graph with real numbers and pure imaginary numbers as axes?

2. Sep 3, 2007 ### Eighty

1^i = 1. 1^x = 1 for all complex numbers x. The complex numbers are closed, so you'll never need another unit j, say, to solve equations involving complex numbers, such as 1^i=x. Lengths are positive real numbers. You can't have something with an imaginary length. Putting complex numbers on a graph with real part and imaginary parts as axes is just a representation. It doesn't much mean anything. You can do it differently if you want (magnitude and angle axes, for example). It's just useful to think of them as points in the plane to help your intuition.

3. Sep 3, 2007 ### JPC

ok since 1^x = 1, x belonging to all complex numbers, then 2 ^ i = ?

4. Sep 3, 2007 ### mathwonk

if a is a positive real number, then a^z can be defined as e^(ln(a)z) where ln(a) is the positive real natural log of a. if a is a more complicated complex number, there is no such nice unique choice of a natural log of a, so a^z has more than one meaning. i know of no way to make sense of a complex length, so a cube of side length i makes no sense to me. what does it mean to you? maybe you can think of something interesting.

5. Sep 3, 2007 ### mathwonk

so your second example 2^i equals e^(i.ln(2)), which is approximated as closely as desired by the series for e^z. so the first two terms are 1 + i.ln(2).

6.
Sep 3, 2007 ### JPC

thx for the a^i and for the cube, maybe a cube with imaginary borders, sides, etc. = an imaginary cube : ) or maybe a cube with no length in our 3 main dimensions (we cannot see it), but with an existence in another dimension : )

7. Sep 3, 2007 ### HallsofIvy

How did you get off complex numbers to geometry? I know of no way of defining "a cube with imaginary borders, sides, etc." I have no idea what you could mean by an imaginary length.

8. Sep 7, 2007 ### JPC

i didn't mean into geometry, but in existence. i admit, the idea of the cube was a bad idea, but complex numbers surely must be found somewhere in nature (or somewhere in space)? i mean is there somewhere in space, or more precisely earth, where we see sqrt(-1)?

9. Sep 7, 2007 ### Moridin

Complex numbers can be applied to models dealing with alternating current. There are probably more.

10. Sep 7, 2007 ### JPC

can you tell me where exactly in alternating current we find complex numbers?

11. Sep 7, 2007 ### HallsofIvy

e^(ix) = cos(x) + i sin(x), so complex exponentials are routinely used to represent waves such as alternating current. Of course those Wacky engineers use j instead of i!
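mathwonk's prescription 2^i = e^(i ln 2) can be evaluated directly: by Euler's formula it equals cos(ln 2) + i sin(ln 2), a point on the unit circle, roughly 0.769 + 0.639i. A quick check with Python's complex arithmetic (my addition, not from the thread):

```python
import cmath
import math

z = cmath.exp(1j * math.log(2))   # principal value of 2**i = e^(i ln 2)
print(z)                           # ~ (0.769 + 0.639j)

assert cmath.isclose(z, 2 ** 1j)   # Python's ** uses the same principal branch
assert math.isclose(abs(z), 1.0)   # |2**i| = 1: it lies on the unit circle
```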
https://rightstartmath.com/rs2-level-f-lesson-105/
# RS2 Level F, Lesson 105

Lesson 105 in RS2 Level F is titled Dividing Fractions on a Fraction Chart. This lesson is very critical to understanding division of fractions. Yes, when dividing fractions we are generally taught to just invert and multiply. But why? Why does this work? That's what this lesson is all about. It's setting up the pattern to make the discovery as to what dividing fractions is about.

Let's start super easy. When we say 6 ÷ 2, what are we asking? We are asking how many 2s are in 6. There are 3 twos in 6. So what about 5/6 ÷ 1/6? We're asking the same thing: how many 1/6s are in 5/6? Look at the fraction chart. 5/6 is shaded and 1/6 is circled. So how many 1/6s are in 5/6? That'd be 5!

Let's try a slightly harder one. Let's do 3/7 ÷ 2/7. Think about it for a minute, then sneak a peek at the chart here. Got the answer? Yup. It'd be 1-1/2 times. The circled amount, 2/7, fits in the shaded amount, 3/7, one and a half times.

Let's look at a harder one: 1/2 ÷ 2/3. Get the fraction chart out. First, let's look at what we're asking when we say 1/2 ÷ 2/3. We want to know how many 2/3s are in 1/2. The 1/2 is identified by shading and 2/3 is circled. So how many 2/3s will fit into 1/2? Just by looking, we can see that it's going to be less than one, seeing that 2/3 is bigger than 1/2. The answer is more than 1/2 and less than one. Giving it a quick look, it appears to be 3/4.

Let's find the common denominator to check our initial answer. This means 1/2 is the same as 3/6 and 2/3 is the same as 4/6. Ooooh. This makes it super easy now. Look at the chart. Ask the question again. What is 3/6 ÷ 4/6? Or, in other words, how many 4/6s fit into 3/6? Looking at the chart above with the 3/6 shaded and 4/6 circled, we can definitely see the answer is 3/4.

Yes, we can just invert and multiply, but this explains WHAT we are doing. Of course, you are only seeing a vignette of the situation. You'll just have to look at the next lessons to see how this progresses…..
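If you want an answer key for these chart questions, exact rational arithmetic confirms each one; here is a small check using Python's `fractions` module (my addition, not part of the RightStart materials):

```python
from fractions import Fraction

# How many 1/6s are in 5/6?  Five.
assert Fraction(5, 6) / Fraction(1, 6) == 5

# How many 2/7s are in 3/7?  One and a half.
assert Fraction(3, 7) / Fraction(2, 7) == Fraction(3, 2)

# The harder one from the chart: 1/2 divided by 2/3, i.e. 3/6 into 4/6-size
# pieces, gives 3/4 -- the same answer invert-and-multiply produces.
assert Fraction(1, 2) / Fraction(2, 3) == Fraction(3, 4)
```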
For those of you just wanting to review your fractions, check out the RightStart Fractions kit. Great activities for the summer to keep that math fresh!
https://www.physicsforums.com/threads/what-speed-do-subatomic-particles-travel-around-the-nucleus.58141/
# What speed do subatomic particles travel around the nucleus?

1. Dec 30, 2004 ### Gamish

What speed do subatomic particles travel around the nucleus? Do the speeds vary with different particles, i.e. electron, proton, neutron? Are the speeds different for their identical positrons? I do know that electrons and protons travel around the nucleus at different speeds, this we call "heat", in which photons are sometimes released at the corresponding frequency. Last edited: Dec 30, 2004

2. Dec 30, 2004 ### Gokul43201 Staff Emeritus

Gamish, Protons make up the nucleus and so do not revolve around it. Older models of the atom (Rutherford, Bohr) had the electrons revolving around the nucleus, but we know that this is only a very crude representation. The behavior of an electron (or any particle, for that matter) is determined from something known as its wavefunction. The wavefunction of an electron in a stable, non-interacting atom does not evolve in time, so we can discard the picture of revolving electrons when we treat the atom quantum mechanically. Heat has nothing to do with the "speeds of revolution". Heat, in a material medium, comes from the kinetic energy of the atoms/molecules themselves. And when electrons are involved, it is the free electrons that contribute to the heat capacity. These are the electrons that are not considered to be bound strongly to any one nucleus. Yes, electronic transitions do result in an emission or absorption of photons and vice versa. However, excited electronic states are very short lived, and they end up transferring energy into the free electrons or phonons (vibrational modes of the atoms). It is through these that heat propagates along a medium.

3. Dec 31, 2004 ### Gamish

OK, you did not answer my question, I am assuming that you do not know the answer. If you did not know, electrons do travel around a nucleus, I am asking at what speed. Let's say the speed of the electron in one atom of hydrogen, at 50 degrees C.

4.
Jan 1, 2005

### elas

Electrons form a shell; would you ask "what speed does the earth's crust travel around the core"? The question cannot be answered in the form you desire because there is no fixed orbital speed; all speeds up to the speed of light are possible depending on the degree of activity. Gokul43201 has the answer in professional form; I hope my amateur approach will add some enlightenment.

5. Jan 1, 2005

### Gamish

Actually, no. According to special relativity, no object with rest mass can reach the speed of light (c). As an object's mass increases near speeds of c, more energy will be put toward the moving of the object rather than the actual speeding up of the object. Also, can you please tell me what variables are involved that determine the speed at which electrons travel around the nucleus of their atom?

6. Jan 1, 2005

### dextercioby

$$\langle\hat{v}\rangle_{|n,l,m\rangle}=\frac{1}{m}\langle\hat{p}\rangle_{|n,l,m\rangle}=\frac{1}{m}\langle n,l,m|\hat{p}|n,l,m\rangle$$

So it suffices to know the quantum state of the electron in one certain representation and, in the same representation, the form of the momentum operator.

Daniel.

7. Jan 2, 2005

### elas

"Also, can you please tell me what variables are involved that determine the speed at which electrons travel around the nucleus of its atom?"

Spin (and therefore rotation) is slowed down in experiments that show the wave nature of electrons, as this is done by cooling; we know that temperature affects speed of rotation. Distance from the centre also plays a part in determining rotation speeds when the electron is viewed as a particle. Variations in shell density probably cause variations in shell rotation even within a single atom, leading to (three-dimensional) wave rotation. You are, of course, correct about c.

8. Jan 3, 2005

### tozhan

I think I heard a while ago that an electron takes about 150 attoseconds (150 quintillionths ($$10^{-18}$$) of a second) to 'orbit' a nucleus.
If we take the diameter of an atom to be $$10^{-10}$$ metres we can VERY roughly make a guess at its speed:

$$\frac{\pi\times 10^{-10}}{150\times 10^{-18}}=2.1\times 10^{8}\,ms^{-1}$$

Does this seem reasonable?? It would be about 1.4 times its rest mass if you apply special relativity to it. I only have a very patchy idea of classical physics, so all you quantum guys can tell me what it really is!

Also, the Pauli exclusion principle says electrons can orbit in pairs when the electrons are of opposite spin, but I thought all naturally occurring electrons were spin-left? I also don't understand lone pairs; is this the same thing?

9. Jan 3, 2005

### dextercioby

1. The result u obtained for the speed of 'orbit' is absurd, and it comes from the fact u used a wrong value for the period of orbit. I suggest u take a good look into Bohr's theory of the atom and then u can come up with a decent value for the period. The velocity for the first Bohr orbit should be 10 times less than u found, and then u can conclude that relativistic effects are not essential.

2. Pauli's exclusion principle applies to all fermions without discrimination. Electrons have 2 possible 'orientations' for their spin: up (m_{s}=+1/2) and down (m_{s}=-1/2), so there are two types of electrons judging by the eigenvalues of their spin projection on Oz.

Daniel.

10. Jan 3, 2005

### tozhan

Being out by a factor of 10 isn't bad for me ;-) my sketchy maths is always a little wrong! Can anyone come up with a real answer? Or shall I search Google?

Tom

11. Jan 3, 2005

### dextercioby

The first orbital speed for the H atom in the theory of Bohr is

$$v(n=1)=\alpha c\sim\frac{c}{137}$$

where $\alpha$ is the famous fine structure constant of Sommerfeld and has the approximate value of 1/137, a number which can be obtained if u plug the constants into its definition

$$\alpha=:\frac{e^{2}}{4\pi\epsilon_{0}\hbar c}$$

Daniel.

12. Jan 4, 2005

### Gokul43201 (Staff Emeritus)

Thanks Dexter, for the clarification. In my hurry, I did a terribly shoddy job.
I started out with the hydrogen atom, but eventually decided to just write down how to find the velocity operator in a general 1-d case, and forgot to remove the subscripts that I had started out with. What I simply meant to suggest was that

$$\langle\hat {v}_x\rangle = \frac {\hbar}{im} \int_{- \infty}^{\infty} \psi^* \frac {\partial \psi }{\partial x }~dx$$

What I was hoping to show was that information about "velocity" can be got from the wavefunction, and that systems such as those being discussed cannot be treated classically. I suspected that Gamish may be looking for a classical answer to his question (since he brought up "heat" and so on). But now, I think, by "heat" he merely meant radiation.

Last edited: Jan 4, 2005

13. Jan 4, 2005

### Gamish

Example

Can you give me an example using that equation? I'm not good with understanding TeX formatting. And can you please explain what all those variables are?

14. Jan 4, 2005

### dextercioby

Gokul gave a particular realization of the formula I posted in post no. 6 of this thread. $<v_{n,l,m}>$ should have been put in the form $\langle \hat{v}_{x}\rangle _{|n,l,m\rangle}$ and represents the average of the operator $\hat{v}_{x}$ computed with the Q system in the pure state $|n,l,m\rangle$. \psi and \psi star represent the wave functions of the Q system (probably the H atom he meant, but in that case the formula is totally wrong; let's assume he didn't consider the H atom, though the indices "n", "l", "m" make us think that way) in the coordinate representation.

The partial derivative wrt 'x' is part of the formula giving the 'x' component of the momentum operator in the position representation. As I said before (when speaking about the H atom), if the wave functions depend on all 3 variables, then Gokul's formula is incomplete, as the integral must be evaluated over all three coordinates. But let's stick to this simple unidimensional case.
U virtually have to find and use a wavefunction of the system psi(x) and with it compute 2 quantities: its derivative wrt "x" and its complex conjugate. Multiply the 2 results and integrate over the real axis. Then simply multiply with the reduced Planck's constant and divide through the product 'im'. U'll find the average of the velocity operator in the Q state u've chosen.

Daniel.

Last edited: Jan 4, 2005

15. Jan 5, 2005

### da_willem

In quantum mechanics (QM) all you can know about a particle is its state. Where in classical mechanics you're solving the equation of motion (Newton's F=ma, e.g.) to find the position of a particle as a function of time, in QM you solve (in the nonrelativistic case) the Schrödinger equation. This equation:

$$i\hbar\frac{\partial \Psi}{\partial t}=\hat{H}\Psi$$

gives you, when you know the physical circumstances ($\hat{H}$), the state $\Psi$. This state does not directly give you the position of a particle as a function of time. You can only use it to find a 'probability density' for the particle's position, from which an 'expectation value' (mean) for the particle's position can be found. In QM this is all we can know about a particle's position: the probability that upon measurement you find it here or there!

Now, this Schrödinger equation can only be solved analytically for a hydrogen atom (and some similar atoms), a system of a proton and an electron. The relevant part of the state you need to say something about the electron's position or velocity is called the wavefunction. Its 'square' (actually its modulus) gives you the aforementioned probability distribution. The result in the form of a wavefunction you can find in almost any textbook on QM. It is denoted by $$\psi_{n,l,m}$$ and is actually a whole collection of wavefunctions depending on what numbers of n, l and m you are interested in. They represent the different orbitals, or excited states, of the hydrogen atom.
Note that n is the same variable as in Bohr's theory of the hydrogen atom and is called the 'principal quantum number'. As an example I will try to answer your question for the ground state (n=1, l=m=0) of a hydrogen atom. The wavefunction as a function of the radial coordinate r is

$$\psi_{1,0,0}=\frac{1}{\sqrt{\pi a^3}}e^{-r/a}$$

with

$$a=\frac{4 \pi \epsilon_0 \hbar^2}{me^2}=0.529\times 10^{-10}\ m$$

(the Bohr radius in meters). As said before, the 'square' gives you the probability density:

$$|\psi_{1,0,0}|^2=\frac{1}{\pi a^3}e^{-2r/a}$$

and tells you the electron has the greatest probability density to be found at r=0, which is at the nucleus! Furthermore, the wavefunction is spherically symmetric. But all this is not really relevant for finding out the velocity, so let's move on.

As said before, you cannot find the position of the electron, simply because it has no position. You can find the expectation value for the position though. Suppose you know a particle can be found with probability (1/2) at x=1, probability (1/4) at x=2 and probability (1/4) at x=4; you can find the expectation value by calculating the following sum:

$$<x>=\sum_i P(x_i)x_i = (1/2)*1+(1/4)*2+(1/4)*4=2.$$

But now we don't deal with a discrete spectrum of probabilities ((1/2), (1/4) and (1/4)) but a continuum of probabilities. So instead of summing over the probabilities you will have to integrate:

$$<x>=\int |\psi_{1,0,0}|^2 x dx$$

where the probabilities P are replaced by the probability density $|\psi_{1,0,0}|^2$. This integral has to be evaluated over all space. Also, the electron has no definite speed, but only a probability to be measured at this speed or that speed. So I will have to disappoint you: again, all we can find out about the velocity is its expectation value <v>.
Now:

$$<v>=\frac{d<x>}{dt}=\int \frac{\partial |\psi_{1,0,0}|^2}{\partial t} x dx = \frac{-i \hbar}{m} \int \psi_{1,0,0}^* \frac{\partial \psi_{1,0,0}}{\partial x} dx$$

where the last step follows after some juggling with the integral and using the Schrödinger equation. [I don't know how far your mathematics skills reach, but $|\psi_{1,0,0}|^2 = \psi_{1,0,0}^* \psi_{1,0,0}$ is called the 'modulus' and a star denotes 'complex conjugation'. In case your wavefunction is real, this means respectively squaring and doing nothing.]

This is (for the ground state of hydrogen) the result dextercioby and Gokul pointed out, and hopefully you now more or less know where it came from.

16. Jan 5, 2005

### da_willem

But anyway, using the equation to find the velocity of an electron in a hydrogen atom will not work, because the atom as a whole stands still. <x> of the electron is zero, and the expectation value of the velocity will be zero as well, because it moves with the same speed but opposite sign in the x and -x direction. With a real wavefunction like that of the electron in a hydrogen atom, this can also be seen in a different (mathematical) way. The formula for <v> involves an 'i'. But as the wavefunction is real and of course <v> is real, this leaves only one possibility: the integral vanishes and <v>=0.

17. Jan 12, 2005

### vluth

In the Bohr model of the hydrogen atom (which gives some right answers, but is known to be essentially incorrect) electrons _do_ spin around the nucleus. In the simplest case of a hydrogen atom with a single electron spinning around a single proton, the electron moves at about 1/137 of the speed of light, which is MUCH faster than sound. Sound travels at about 1100 feet per second, while light travels at 186,000 miles per second. You do the math. As a side note, with the Bohr model, the 'inner' electrons in atoms with greater atomic numbers would be moving faster...

18. Jan 12, 2005

### dextercioby

Neat trick u used...
But hopefully u're aware of the tedious proof that

$$\langle \hat{v}_{x}\rangle _{|\psi\rangle} =\frac{d\langle \hat{x}\rangle _{|\psi\rangle}}{dt}$$

independent of the description... As in Gokul's case, please remove the indices (1,0,0). They remind one of the H atom, and that's not the case with your example/formula.

Daniel.

PS. I didn't trust your formula, so I did the proof on my own. It wasn't nice at all. :yuck: Juggling with the Heisenberg and Schrödinger descriptions... Anyways, I'm a theorist, so I shouldn't complain.

19. Jan 13, 2005

### da_willem

I didn't know the proof was that tedious. But Griffiths only 'postulated' it, so that should say something. And about the (1,0,0): my post was about the hydrogen atom, which has a fairly easy ground-state wavefunction, so it's okay that they remind you about it.
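For a concrete number, the Bohr-model figures quoted in this thread can be checked with a few lines of Python. The constants below are standard CODATA values; this is only an order-of-magnitude sanity check of the Bohr picture, not a quantum-mechanical calculation:

```python
import math

c = 2.99792458e8          # speed of light, m/s
alpha = 7.2973525693e-3   # Sommerfeld's fine structure constant, ~1/137
a0 = 5.29177210903e-11    # Bohr radius, m

v = alpha * c             # first-orbit speed in the Bohr model, v(n=1) = alpha*c
T = 2 * math.pi * a0 / v  # corresponding orbital period

print(v)  # ~2.19e6 m/s, less than 1% of c
print(T)  # ~1.52e-16 s, i.e. roughly 150 attoseconds
```

The period of roughly 150 attoseconds matches tozhan's quoted figure, while the speed comes out near 2.2×10⁶ m/s, so relativistic effects for the ground-state electron are indeed small, as dextercioby says.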
http://mathhelpforum.com/advanced-statistics/35005-solved-normal-distrubtion-question.html
# Math Help - [SOLVED] Normal Distribution Question

1. ## [SOLVED] Normal Distribution Question

Question: $X$ has a normal distribution, and $P(X > 73.05) = 0.0289$. Given that the variance of the distribution is $18$, find the mean.

Attempt:

$1 - 0.0289 = 0.9711$

$z = -1.897$

$\sigma^2 = 18$, $\sigma = 3\sqrt2$

$P\left( \frac{X - \mu}{\sigma} > 73.05 \right) = -1.897$

Don't know how to find the mean...

2. Originally Posted by looi76 (question quoted above)

The z-score corresponding to a cumulative standard normal probability of $0.9711$ is $1.897$. So:

$\frac{73.05-\mu}{\sqrt{18}}=1.897$

Now solve for $\mu$.

RonL

3. Originally Posted by looi76 (question quoted above)

A question of similar ilk that might be of interest: http://www.mathhelpforum.com/math-he...n-problem.html
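As a quick check of RonL's method, the whole calculation can be done with Python's standard library (`statistics.NormalDist`), reproducing the thread's numbers:

```python
from math import sqrt
from statistics import NormalDist

# P(X > 73.05) = 0.0289, so the cumulative probability up to 73.05 is 0.9711
z = NormalDist().inv_cdf(1 - 0.0289)   # z-score with Phi(z) = 0.9711

# Solve (73.05 - mu) / sigma = z for mu, with sigma = sqrt(18)
mu = 73.05 - z * sqrt(18)

print(round(z, 3))   # 1.897
print(round(mu, 1))  # 65.0
```

Note the z-score is positive: the student's sign confusion comes from reading the table for the upper tail rather than the cumulative probability.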
https://brilliant.org/problems/radiation-symbol/
The symbol is made from three concentric circles and three equally spaced diameters of the large circle. The diameter of the large circle is $$12$$, the diameter of the small circle is $$4$$, and the diameter of the smallest circle is $$2$$. What is the area of the symbol (shaded region)?
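The statement alone does not fully specify which regions are shaded, but under the standard radiation-trefoil layout — here an *assumed* layout: a shaded central disk of diameter 2 plus three alternating 60° "blades" of the annulus between the middle and outer circles — the area is a short computation:

```python
import math

r_outer, r_middle, r_inner = 6.0, 2.0, 1.0   # radii from diameters 12, 4, 2

# assumed shading: central disk of radius 1, plus three 60-degree
# sectors of the annulus between r = 2 and r = 6
central_disk = math.pi * r_inner ** 2
blades = 3 * (60 / 360) * math.pi * (r_outer ** 2 - r_middle ** 2)

area = central_disk + blades
print(area / math.pi)  # 17.0, i.e. area = 17*pi
```

Each blade is one sixth of the annulus because the three diameters cut the annulus into six equal 60° sectors, of which alternate ones are shaded.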
https://link.springer.com/article/10.1007/s00396-016-3988-2
# Swelling of latex particles—towards a solution of the riddle

## Abstract

The assumption that, during emulsion polymerization, the monomer molecules simply diffuse through the aqueous phase into the latex particles is a commonplace. However, there are experimental hints that it might not be that easy. Here, simulation results based on Fick's diffusion laws are discussed regarding the swelling of latex particles. The results of the quantitative application of these laws to the swelling of latex particles allow the conclusion that the instantaneous replenishment of the consumed monomer during emulsion polymerization requires close contact between the monomer and the polymer particles.

## Introduction

Starting an aqueous heterophase polymerization outside the monomer drops is the typical scenario of classical emulsion polymerization (EP). This polymerization technique has been used industrially for many decades [1, 2], and the kinetics of the process has been the topic of numerous scientific papers and textbooks since the middle of the 1940s [3–11]. A key assumption of the widespread and mostly accepted mechanism of EP is the immediate substitution of the monomer consumed by propagation inside the polymer particles by fresh monomer via diffusion through the aqueous phase as long as monomer droplets (or a free monomer phase) exist (Footnote 1) [3, 12]. Accordingly, the monomer concentration inside the latex particles is supposed to be constant until the monomer droplets (the free monomer phase) disappear. This presumption is long lasting even though experimental data on the monomer concentration inside the latex particles during the course of EP do not support it [13–15]. Remarkably, the corresponding results have been obtained with both water-soluble (potassium peroxodisulfate) [13] and oil-soluble (2,2′-azobis(2-methylpropionitrile)) [14, 15] initiators whereby, regardless of the initiator, typical emulsion polymerization kinetics has been observed.
Our purpose in writing this short communication is to draw attention to the fact that, despite the many accomplishments of industrial EP and chemical engineering with respect to product development and process understanding, respectively, at least one fundamental question remains to be answered. Harkins' idea of monomer diffusion, from the reservoir (which can be a bulk or dispersed monomer phase) through the aqueous phase to the main reaction loci (the equilibrium-swollen monomer–polymer particles), appears to be straightforwardly concluded based on indisputable experimental facts. The decisive aspect here is the extremely high rate of polymerization (monomer consumption) achievable with EP despite the spatial separation of monomer and the main reaction loci (Footnote 2) [3]. The instantaneous replenishment of the monomer inside the active particles containing a propagating radical requires that the monomer uptake frequency correspond to at least the propagation frequency. This requirement can be expressed by Eq. (1), where $$C_{M,P}$$ is the monomer concentration inside the particles, $$k_p$$ the propagation rate constant, $$\tilde{D}$$ the monomer diffusion coefficient, and x the distance inside the particle (x = 0 is the center of the spherical particle with radius $$r_0$$, and $$x = r_0$$ the distance from the center to the interface). A relation such as Eq. (1) is also known as the Thiele modulus ($$\phi_{Th}$$) [16, 17], a characteristic number typically describing the ratio between the reaction and the diffusion rate in catalytic reactions.

$${k}_p{C}_{M,P}=\frac{\tilde{D}}{x^2}$$

(1)

However, a detailed look at the scenario during aqueous EP reveals a serious problem with this apparently quite logical assumption of an easy monomer diffusion through the aqueous phase (cf. Figure 1).
In general, neglecting for the moment interactions between components of the reaction mixture, diffusion is the transport of matter from a more concentrated region to a less concentrated region with the aim of equilibrating the chemical potential, here that of the monomer inside the reaction system. Hereinafter, the reaction system comprises only droplets, particles, and water, and neglects the gas phase. Figure 1 sketches the situation with respect to the monomer concentration across the EP space and illustrates the problem to be addressed.

## Computation methods, technical information

Fick's diffusion law for spherical geometry, cf. Eq. (7) below, can be represented in a dimensionless form using the following substitutions:

$${C}^{*}=C/{C}_0;\kern0.5em {x}^{*}=x/{x}_0;\kern0.5em {D}^{*}=D/{D}_0;\kern0.5em {t}^{*}={D}_0t/{x}_0^2$$

(2)

where $$D_0$$ is the diffusion coefficient of the swelling agent at $$r = r_0$$. This treatment is similar to the approach by Hsu [20]. Using the substitutions given above (2), Eqs. (7)–(9) can be expressed as

$$\frac{\partial {C}^{*}}{\partial {t}^{*}}=\frac{\partial }{\partial {x}^{*}}\left({D}^{*}\frac{\partial {C}^{*}}{\partial {x}^{*}}\right)+\frac{2}{x^{*}}\left({D}^{*}\frac{\partial {C}^{*}}{\partial {x}^{*}}\right)$$

(3)

$$\begin{array}{llll}\frac{\partial {C}^{*}}{\partial {x}^{*}}=0\hfill & at\hfill & {x}^{*}=0,\hfill & {t}^{*}\ge 0\hfill \end{array}$$

(4)

$$\begin{array}{llll}{C}^{\ast }=1\kern.2em & at\kern.2em & {x}^{\ast }=1\kern.3em & {t}^{\ast }>0\end{array}$$

(5)

Equation (3) was solved numerically using a finite-difference method similar to [20]. In this approach, the polymer particle is assumed to be made of n (herein, n = 200) spherical shells, and the concentration in each shell is calculated numerically. The integration with respect to time (or dimensionless time t*) was done using Matlab R2015a.
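A minimal Python sketch of such a shell-based finite-difference scheme is given below. Unlike the paper's calculation, it uses a constant dimensionless diffusion coefficient D* = 1 (not the concentration-dependent model fitted from [21]), so the numbers are illustrative only:

```python
def swell_profile(n=50, dt=5e-5, t_end=0.5):
    """Dimensionless concentration profile C*(x*) at t* = t_end for
    diffusion into a sphere: Eq. (3) with D* = 1, BCs (4) and (5)."""
    dx = 1.0 / n
    C = [0.0] * (n + 1)
    C[n] = 1.0                       # Eq. (5): surface held at saturation
    for _ in range(int(t_end / dt)):
        new = C[:]
        # center (x* = 0): spherical symmetry, Eq. (4), gives dC/dt = 3 d2C/dx2
        new[0] = C[0] + dt * 6.0 * (C[1] - C[0]) / dx ** 2
        for i in range(1, n):
            x = i * dx
            d2 = (C[i + 1] - 2.0 * C[i] + C[i - 1]) / dx ** 2
            d1 = (C[i + 1] - C[i - 1]) / (2.0 * dx)
            new[i] = C[i] + dt * (d2 + 2.0 / x * d1)
        C = new
    return C

profile = swell_profile()
print(round(profile[0], 3))  # center concentration at t* = 0.5, close to 1
```

At t* = 0.5 the center has already reached more than 95% of the surface value, consistent with the classical result for diffusion into a sphere; the explicit time step must satisfy dt ≲ dx²/6 for stability.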
It should be pointed out that the diffusion coefficient in a polymer depends strongly on the difference between the actual temperature and the glass transition temperature of the polymer particle, which changes along with the degree of swelling; this suggests that the diffusion coefficient in each shell can be different. The diffusion coefficient of the swelling agent was estimated using the approach suggested by Karlsson et al. [21]. It should also be pointed out that, along with the swelling agent, water can hydroplasticize the polymer particle and influence diffusion [22], although, depending on the hydrophilicity of the polymer, to a different extent.

## Results and discussion

The monomer concentrations at the various spots of EP as sketched in Fig. 1 suggest that a simple concentration-gradient-driven diffusion from the monomer drops to the aqueous phase along path (a) is easily possible, but that it is rather unlikely along path (c), which is from the aqueous phase directly into the particles. This conclusion is buttressed by estimating the diffusion rates using Fick's second diffusion law, Eq. (6).

$$\frac{\partial {C}_M}{\partial t}=\tilde{D}\frac{\partial^2{C}_M}{\partial {x}^2}$$

(6)

Equation (6) was adapted for spherical geometry according to the treatment of Crank [23] by Eq. (7). This equation was solved, within the model-related Eqs. (2)–(5), to characterize the diffusion of the monomer (or, in general, of any swelling agent) (Footnote 3) in a spherical unswollen polymer particle of radius $$r_0$$.

$$\frac{\partial {C}_M}{\partial t}=\frac{1}{x^2}\frac{\partial }{\partial x}\left({x}^2\tilde{D}\frac{\partial {C}_M}{\partial x}\right)$$

(7)

For the estimations, only radial diffusion was considered, and the volume change of the particle was assumed negligible. The total radial change in particle size for a monomer concentration ≤5 M is at maximum about 26%. The impact of the particle size change, which is anisotropic with respect to the radial distance, will be investigated later.
The boundary conditions were chosen according to Eqs. (8) and (9).

$$\begin{array}{lllll}\frac{\partial {C}_M}{\partial x}=0\hfill & at\hfill & x=0\hfill & and\hfill & t>0\hfill \end{array}$$

(8)

$$\begin{array}{lllll}{C}_M={C}_{M,0}\hfill & at\hfill & x={r}_0\hfill & and\hfill & t\ge 0\hfill \end{array}$$

(9)

$$C_{M,0}$$ is the swelling agent concentration at the particle surface (particle with radius $$r_0$$) and is assumed to be in equilibrium with the continuous phase at any time (Footnote 4). It is to be emphasized that $$C_{M,0}$$ is a model-related fictive value necessary to establish the required concentration gradient driving the swelling process. During swelling, the conditions inside the particles, particularly with respect to viscosity and hence the diffusion coefficient, are changing. Clearly, the values of both $$C_M$$ and $$\tilde{D}$$ in Eqs. (1), (6), and (7) are interdependent. The change of $$\tilde{D}$$ with increasing monomer concentration is considered based on experimental data described in [21]. Accordingly, $$\tilde{D}$$ can be fitted by an empirical model which comprises four different regions (1 > $$\phi_m$$ > 0.3, 0.3 > $$\phi_m$$ > 0.15, 0.15 > $$\phi_m$$ > 0.1, 0.1 > $$\phi_m$$ > 0) over a range of about 10 orders of magnitude. Figure 2 shows simulation results for a polymer particle with an unswollen diameter of 100 nm (corresponding to an average dry particle) and a varying concentration of the swelling agent at the interface ($$C_{M,0}$$ as boundary condition). Note, $$C_{M,0}$$ corresponds to the initial concentration difference that thermodynamically drives the swelling process. The time it takes for the swelling agent to penetrate into the particle until the center is saturated to 95% relative to the particular $$C_{M,0}$$ value ($$t_{95\%}$$), plotted in dependence on $$C_{M,0}$$, shows two distinctly different regions in a log–log plot.
Between 10⁻² M < $$C_{M,0}$$ ≤ 1.5 M, the time ($$t_{95\%}$$) drops only very little (from 4900 to 4150 s), whereas between 1.75 M ≤ $$C_{M,0}$$ < 9 M it decreases over almost eight orders of magnitude (from 414 to 2.16·10⁻⁶ s) with increasing $$C_{M,0}$$. Apparently, the range 1.5 M < $$C_{M,0}$$ < 1.75 M is a critical one, because somewhere within this quite narrow range a value of $$C_{M,0}$$, or of the volume fraction ($$\phi_M$$), exists at which the swelling kinetics changes.

In a typical EP, a nonmonodisperse particle size distribution is the rule rather than the exception, and hence the dependence of particle swelling on the average particle size is important. The simulation data put together in Fig. 3 prove the expected quadratic dependence of $$t_{95\%}$$ on the particle size, exemplarily for two $$C_{M,0}$$ values above (5 M) and below (0.05 M) the critical range. The overall range of $$t_{95\%}$$ values nevertheless comprises quite impressive 14 orders of magnitude.

The flux of swelling agent (expressed as molecules per particle and second) that is needed to swell the particle and to keep $$C_{M,0}$$ constant throughout the whole process is compared in Fig. 4 for three values of $$C_{M,0}$$. The flux of swelling agent stops as soon as it is uniformly distributed across the particle and its concentration equals $$C_{M,0}$$. With increasing concentration of swelling agent inside the particles (that is, with ongoing time), the flux decreases over several orders of magnitude as a consequence of the decreasing driving force (the decreasing difference in the chemical potential of the swelling agent with an increasing degree of swelling).

The comprehensive consideration of the simulation results, the situation regarding the concentrations as sketched in Fig. 1, and the experimental facts that EP simultaneously allows high rates of polymerization and the highest molecular weights for free radical polymerization reveals an apparent riddle with respect to the swelling of latex particles during EP of water-insoluble monomers. The crucial point is to answer the question of how the high monomer concentration, required for both fast monomer diffusion into the latex particles and eventually the high monomer concentration inside, moves from the monomer reservoir to the particle interface. To illustrate this, let us consider a single growing radical inside a particle during a styrene emulsion polymerization, which consumes $$k_p C_{M,P}$$ monomer molecules per second. Instantaneous replenishment of the consumed monomer requires an equal number of monomer molecules diffusing into the particle. The ratio between the consumption of monomer by propagation inside the particle and monomer diffusion into the particle is expressed by the Thiele modulus ($$\phi_{Th}$$), Eq. (10).

$${\phi}_{Th}^2=\frac{k_p{C}_{M,P}\cdot {r}_0^2}{\tilde{D}}$$

(10)

Figure 5 shows how $$\phi_{Th}^2$$ changes for a single propagating radical in a particle with $$r_0$$ = 50 nm in dependence on $$C_{M,P}$$. For this calculation, it is assumed that the particle is equilibrium-swollen with the concentration $$C_{M,P}$$ which, according to the equilibrium condition, is equal to $$C_{M,0}$$ at the particle–water interface. For the particular calculation parameters chosen to generate the graph of Fig. 5, the propagation and diffusion frequencies are equal ($$\phi_{Th}^2$$ = 1) at a $$C_{M,P}$$ of about 2.6 M. For a monomer concentration $$C_{M,P}$$ ≥ 2.6 M (or $$\phi_M$$ ≥ 0.26), monomer diffusion is faster than propagation, and the equilibrium swelling is maintained. If, however, $$C_{M,P}$$ < 2.6 M ($$\phi_M$$ < 0.26), the replenishment of monomer via diffusion is not fast enough and the particle, with respect to monomer, starves out.
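The crossover in Fig. 5 can be illustrated with a few lines of Python. The propagation rate constant and the strongly concentration-dependent diffusion coefficient below are illustrative assumptions (the paper fits $$\tilde{D}$$ to data from [21]; the crude exponential stand-in here is tuned only so that $$\phi_{Th}^2 = 1$$ falls near the quoted 2.6 M):

```python
def D_particle(phi_M):
    # crude stand-in for the concentration-dependent diffusion coefficient
    # (assumption; the paper uses a four-region empirical fit to [21])
    return 10.0 ** (-14.41 + 10.0 * phi_M)   # m^2/s

def thiele_squared(k_p, c_mp, r0):
    # Eq. (10); phi_M ~ c_mp / 10, since 5 M corresponds to phi_M ~ 0.5
    phi_M = c_mp / 10.0
    return k_p * c_mp * r0 ** 2 / D_particle(phi_M)

k_p = 240.0   # L mol^-1 s^-1, roughly styrene near 50 degC (assumption)
r0 = 50e-9    # 50 nm particle radius, as in Fig. 5

for c in (1.0, 2.6, 5.0):
    print(c, thiele_squared(k_p, c, r0))
# phi_Th^2 > 1 at 1 M (monomer-starved), ~1 near 2.6 M, << 1 at 5 M
```

Because the diffusion coefficient rises much faster than linearly with swelling, $$\phi_{Th}^2$$ *decreases* with increasing monomer concentration, reproducing the qualitative shape of Fig. 5.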
In summary, the simulation results using Fick's second diffusion law with respect to latex particle swelling are clear; they essentially hold no surprise, and the following conclusions can be drawn. Firstly, a high degree of swelling in the molar concentration range as observed for aqueous latex particles (and a high monomer concentration during EP) requires a high concentration of swelling agent (monomer during EP) immediately at the particle–water interface. Secondly, the concentration of swelling agent (monomer during EP) at the particle interface determines the influx into the particle interior. This means, for the situation during EP, that there is a critical monomer concentration above which monomer diffusion is fast enough to instantaneously replenish the consumed monomer. Thirdly, as a logical consequence of the simulation results, all situations or measures that reduce the concentration of the swelling agent (monomer) in the immediate proximity of the particle surface are of detrimental influence on swelling.

Now let us consider how relevant these conclusions are for a better understanding of EP. The second conclusion seems to support the existence of a period of constant monomer concentration inside the particles during batch EP. However, it is to be mentioned that for the estimation of the Thiele modulus (Fig. 5), propagation started in an equilibrium-swollen particle, which is a special situation and not necessarily given in every EP. The implications of the first and third conclusions are much more crucial and universal. The main question is how the required high concentration of a hydrophobic monomer with low solubility in water (cf. Figure 1) is delivered to the water–particle interface, particularly for experimentally observed $$\phi_M$$ values of about 0.5 (corresponding to a concentration of about 5 M in the latex particles).
Necessarily, swelling to such a degree within a period of time realistic with respect to polymerization requires a correspondingly high concentration in direct contact. The accumulation of a corresponding amount of monomer solely via molecular diffusion through the aqueous phase is not fast enough on the time scales relevant to polymerization and hence does not contribute to the solution of the riddle. In a certain sense, water as the continuous phase acts as a quite effective barrier. Within the frame sketched in Fig. 1 for monomer diffusion, starting from the droplets first into the water and from there into the particles, the simulation scenario outlined in Fig. 6 might help to elucidate the issue. Compared with the simulation scenario considered so far, the presence of water as a continuous phase between the monomer drops and the particles increases the complexity. It is now necessary to consider both an additional concentration and a diffusion coefficient of the monomer in water, as well as the distance between the source (droplet) and the recipient (particle). In order to swell the particle evenly, the monomer has to complete the path first from the droplet–water interface to the particle–water interface ($x_w$) and then inside the particle to the center ($x_p$). For these calculations, the monomer reservoir was located at a distance $x_w = z \cdot r_0$ ($z > 1$) away from the particle surface. The aqueous phase at this distance, i.e., at the droplet–water interface, is assumed to be saturated with the swelling agent at all times ($t \ge 0$). At the particle interface ($x_p = r_0$) the total swelling-agent concentration ($C_{I,M}$) is the sum of the concentrations on the inner polymer side ($C_{x=r_{0,P}}$ at $x_p = r_{0,P}$) and the outer water side ($C_{x=r_{0,W}}$ at $x_w = r_{0,W}$), that is, towards the particle's interior and towards the adjacent aqueous phase, respectively. This interface is also assumed to be in equilibrium at all times.
This equilibrium is described by a simple distribution coefficient ($K_d$), which is the ratio of the equilibrium concentration in the particle ($C_{M,P}$) to that in the aqueous phase ($C_{M,W}$). In this way, the surface concentration on both sides of the particle can be expressed by Eqs. (11) and (12).Footnote 5 $${C}_{x={r}_{0,P}}=\frac{K_d}{\left(1+{K}_d\right)}{C}_{I,M}$$ (11) $${C}_{x={r}_{0,W}}=\frac{1}{\left(1+{K}_d\right)}{C}_{I,M}$$ (12) The time evolution of the concentration at the particle surface, $x = r_0$, is estimated by the flux balance given in Eq. (13). $$\frac{\partial {C}_{I,M}}{\partial t}=-{\tilde{D}}_W\frac{\partial {C}_{x={r}_{0,W}}}{\partial x}+{\tilde{D}}_P\frac{\partial {C}_{x={r}_{0,P}}}{\partial x}$$ (13) As soon as the first monomer molecules reach the particle surface, swelling starts. However, the initial rate is lower than in the case where direct contact between the pure swelling agent and the polymer was assumed (cf. Figure 2). Due to the slow diffusion inside the particles, monomer accumulates in the interfacial region of the particles. The particle rapidly swells in this interfacial region, and the highly swollen region expands with ongoing time towards the center. Obviously, this scenario supports the idea that swelling leads to the formation of an inhomogeneous particle structure, as has been discussed for quite a long time [24–27]. However, the complete EP, that is, the combination of monomer diffusion into and monomer consumption inside the particles by propagation, is not considered here; results will be reported later. The simulation data compared in Fig. 7 reveal two remarkable details. Firstly, the water phase between the monomer and the particle indeed acts as an effective barrier and drastically increases the time until equilibrium is reached. Secondly, the data show quite a strong, almost linear influence of the solubility of the monomer in water on the swelling kinetics in the log-log plot.
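Equations (11) and (12) simply split the total interfacial concentration between the two sides of the interface in the ratio $K_d : 1$. A minimal sketch (the numeric values of $K_d$ and $C_{I,M}$ below are arbitrary illustrations, not taken from the paper):

```python
def split_interface_concentration(c_interface, k_d):
    """Split the total interfacial concentration C_I,M into the
    particle-side and water-side contributions, Eqs. (11) and (12)."""
    c_particle_side = k_d / (1.0 + k_d) * c_interface   # Eq. (11)
    c_water_side = 1.0 / (1.0 + k_d) * c_interface      # Eq. (12)
    return c_particle_side, c_water_side

# Illustrative numbers: a strongly hydrophobic monomer (large K_d)
cp, cw = split_interface_concentration(c_interface=4.0, k_d=1300.0)
```

By construction the two sides add up to $C_{I,M}$ and their ratio equals $K_d$, which is exactly the stated equilibrium condition $C_{M,P}/C_{M,W} = K_d$.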
Increasing the water solubility of the monomer by a certain factor decreases the time to reach equilibrium swelling by almost the same factor. This result is in qualitative agreement with experimental experience showing that heterophase polymerization of extremely hydrophobic monomers such as lauryl methacrylate needs special measures in order to avoid excessive formation of coagulum.Footnote 6 The influence of the hydrophilicity of the monomer is much stronger than that of the average distance between the monomer–water interface and the particle surface ($x_w$). Increasing this distance from 150 nm to 1 μm, which corresponds to a decrease in the overall volume fraction of the colloidal objects by about a factor of 100, only marginally prolongs the time to reach equilibrium, from 1.074 to 1.44 milliseconds. In the simulations it is easy to position $C_{M,0}$ at the interface, which in reality means that there should be a monomer-rich phase between the water and the particles. To prove such a scenario in experiments with latex particles under conditions relevant to EP is an extremely hard task. Luckily, a few model experiments have been described [29–31] supporting the possibility of such layer formation. One set of data proves the accumulation of alkanes at the interface of polystyrene latex particles by ellipsometric light scattering [29]. Other experimental data from studies with bulk polymer samples support the idea that direct contact between a swelling agent and a polymer is necessary for fast swelling [30, 31]. Very recently, it was shown that the swelling of a bulk polymer sample embedded in water, with the swelling agent placed on top, does not take place within several hours in the absence of stirring but begins immediately after the stirrer is switched on. The importance of direct contact between drops and polymer for the transfer of matter was evidenced by tinting the polymer with the extremely hydrophobic dye Hostasol YellowFootnote 7 [31].
There is, however, still another fact to take into account: the thermodynamic force that causes the swelling agent and the particles to congregate along the gradient in the chemical potential (μ) [32]. The driving force (F = −dμ/dx) is the maximization of entropy or the minimization of the free energy in the system of swelling latex particles. How strong a force this tendency can generate is illustrated by the accumulation of micron-sized latex particles at the quiescent swelling agent–latex interface against the action of gravity [19, 31]. Experimental evidence also supports the third, indirect conclusion drawn from the simulation results regarding a possible hindrance of mass transfer between the monomer layer and the particles [31]. Assuming that swelling pressure measurements are a way to characterize the swelling process, it was shown that a surfactant layer around the monomer drops can quite effectively hinder the transfer process. The swelling rates of polystyrene with ethylbenzene in stirred systems were fastest in the absence of surfactants, second fastest in the presence of a nonionic surfactant, and slowest in the presence of sodium dodecyl sulfate. ## Conclusion The simulation studies of the swelling of latex particles based on Fick's second law of diffusion support the recent experimental findings that fast swelling of latex particles requires direct contact between the components. The consequences for aqueous EP are quite significant: stirring supports the fast uptake of monomer by the latex particles by facilitating contacts between droplets and particles, whereas stabilizer layers delay the process by hindering the transport across the interface.
The simulation results based on Harkins' idea [3] that in EP the monomer drops serve "as a storehouse from which the molecules diffuse (through) the aqueous phase ··· into ··· latex particles" show that the details of this process are crucial and need to be elaborated. Interestingly, the simulation data theoretically back experimental findings showing that the accumulation of monomer at the particle–water interface is crucial for fast swelling. ## Notes 1. This idea dates back to 1947, when Harkins in his seminal paper on emulsion polymerization kinetics stated that the role of the monomer drops is "to act as a storehouse of monomer from which its molecules diffuse into the aqueous phase and from this into either soap micelles or polymer monomer latex particles" [3]. Even today, this idea is repeated in state-of-the-art textbooks, which say that in the "presence of monomer droplets, the monomer-swollen particles grow and the monomer concentration within these particles is kept constant by monomer diffusing through the water phase from the monomer droplets" [12]. 2. The high polymerization rate of EP is caused mainly by the spatial separation of primary radical generation in the continuous phase and radical propagation in the polymer particles where, provided the particle size is small enough, the growing radical is isolated and effectively protected against frequent termination. 3. Henceforth, the terms monomer and swelling agent will be used interchangeably. 4. Assuming a molar volume of 100 cm3/mol for the swelling agent, a value of $C_{M,0} = 10$ M corresponds to the pure swelling agent at the particle interface, or to a volume fraction of the swelling agent ϕM = 1. Correspondingly, the other values of $C_{M,0}$ represent smaller values of ϕM, which together with the polymer volume fraction (ϕP) add to one. 5.
$K_d$ is used to calculate the development of $C_{M,P}$ with time according to $C_{M,P}(t) = K_d \cdot C_{M,W}(t)$, which in turn is used to estimate the diffusion coefficient. 6. For instance, emulsion polymerization of lauryl methacrylate under a kind of standard conditions with potassium peroxodisulfate as initiator and sodium lauryl sulfate as emulsifier leads to 17% of the polymer in the form of latex and 83% in the form of coagulum. The application of a more hydrophobic initiator leads to a drastically increased latex yield [28]. 7. Hostasol Yellow (also Solvent Yellow 98 or Fluorescent Yellow 3G) has the chemical name 2-Octadecyl-1H-thioxantheno[2,1,9-def]isoquinoline-1,3(2H)-dione (C36H45NO2S, CAS Registry Number 12671-74-8/27870-92-4) and is listed as a water-insoluble dye. ## References 1. Tauer K (2003) Heterophase polymerization. In: Mark HF, Bikales NM, Overberger CG, Menges G, Kroschwitz JI (eds) Encyclopedia of polymer science & technology. online 15.04.2003 edn. Wiley 2. Urban D, Takamura K (2002) Aqueous polymer dispersions. Wiley-VCH, Weinheim 3. Harkins WD (1947) A general theory of the mechanism of emulsion polymerization. J Am Chem Soc 69(6):1428–1444. doi:10.1021/Ja01198a053 4. Smith WV, Ewart RH (1948) Kinetics of emulsion polymerization. J Chem Phys 16(6):592–599 5. Bovey FA, Kolthoff IM, Medalia AI, Meehan EJ (1955) Emulsion polymerization. Interscience Publishers, Inc., New York 6. Gerrens H (1964) Kinetik der Emulsionspolymerisation bei technisch wichtigen Monomeren. In: Behrens H, Bretschneider H (eds) Dechema Monographien Nr. 859–875, Band 49. Dechema, Frankfurt am Main, pp 53–97 7. Eliseeva VI (1980) Polimernie Dispersii. Chimija, Moscow 8. Schmidt A (1987) Systematik und Eigenschaften von Latices und kolloidalen Systemen, Polymerisation und Terpolymerisation in Emulsion. In: Bartl H, Falbe J (eds) Methoden der Organischen Chemie. Makromolekulare Stoffe, Band E 20, Teil 1. Thieme, Stuttgart, pp 227–268 9.
Gilbert RG (1995) Emulsion polymerization. Academic Press, London 10. Hernandez HF, Tauer K (2012) Emulsion polymerization. In: Schlüter DA, Hawker C, Sakamoto J (eds) Synthesis of polymers, vol 2. Wiley-VCH, Weinheim, pp. 741–773 11. van Herk A (2013) Chemistry and Technology of Emulsion Polymerisation. John Wiley & Sons, Ltd, Chichester (UK) 12. van Herk A, Gilbert RG (2013) Emulsion polymerisation. In: van Herk AM (ed) Chemistry and Technology of Emulsion Polymerisation. John Wiley & Sons, Ltd, Chichester (UK), pp. 43–73 13. van der Hoff BME (1962) Kinetics of emulsion polymerization. Adv Chem Ser 34:6–31 14. Ryabova MS, Sautin SN, Smirnov NI (1975) Number of particles in the course of the emulsion polymerization of styrene initiated by an oil-soluble initiator. Zh Prikl Khim 48(7):1577–1582 15. Stähler K (1994) Einfluß von Monomer-Emulgatoren auf die AIBN-initierte Emulsionspolymerisation von Styren. Universität Potsdam, Potsdam 16. Hill CG, Root TW (2014) Introduction to chemical engineering kinetics and reactor design, 2nd edn. John Wiley & Sons, Inc., Hoboken 17. Thiele EW (1939) Relation between catalytic activity and size of particle. Ind Eng Chem 31:916–920. doi:10.1021/ie50355a027 18. Lane WH (1946) Determination of the solubility of styrene in water and of water in styrene. Ind Eng Chem-Anal Ed 18(5):295–296. doi:10.1021/i560153a009 19. Tauer K, Hernandez H, Kozempel S, Lazareva O, Nazaran P (2008) Towards a consistent mechanism of emulsion polymerization—new experimental details. Colloid Polym Sci 286(5):499–515. doi:10.1007/s00396-007-1797-3 20. Hsu KH (1983) A diffusion model with a concentration-dependent diffusion-coefficient for describing water-movement in legumes during soaking. J Food Sci 48(2):618–622. doi:10.1111/j.1365-2621.1983.tb10803.x 21. Karlsson OJ, Stubbs JM, Karlsson LE, Sundberg DC (2001) Estimating diffusion coefficients for small molecules in polymers and polymer solutions. Polymer 42(11):4915–4923. 
doi:10.1016/S0032-3861(00)00765-5 22. Tsavalas JG, Sundberg DC (2010) Hydroplasticization of polymers: model predictions and application to emulsion polymers. Langmuir 26(10):6960–6966. doi:10.1021/la904211e 23. Crank J (1975) The mathematics of diffusion, 2nd edn. Oxford Clarendon Press, London (UK) 24. Grancio MR, Williams DJ (1970) Molecular weight development in constant-rate styrene emulsion polymerization. J Polym Sci A-1-Polym Chem 8(10):2733–2745 25. Grancio MR, Williams DJ (1970) Morphology of the monomer-polymer particle in styrene emulsion polymerization. J Polym Sci A-1-Polym Chem 8(9):2617–2629 26. Medvedev SS (1971) Problems of emulsion polymerization. In: IUPAC International Symposium on Macromolecular Chemistry, Budapest 1969. Akademiai Kiado, Budapest, p 39–63 27. Bolze J, Ballauff M (1995) Study of spatial Inhomogeneities in swollen latex-particles by small-angle X-ray-scattering: the wall-repulsion effect revisited. Macromolecules 28(22):7429–7433 28. Tauer K, Ali AMI, Yildiz U, Sedlak M (2005) On the role of hydrophilicity and hydrophobicity in aqueous heterophase polymerization. Polymer 46(4):1003–1015. doi:10.1016/j.polymer.2004.11.035 29. Tauer K, Weber N, Nozari S, Padtberg K, Sigel R, Stark A, Völkel A (2009) Heterophase polymerization as synthetic tool in polymer chemistry for making nanocomposites. Macromol Symp 281:1–13. doi:10.1002/masy.200950701 30. Krüger K, Wei CX, Nuasaen S, Höhne P, Tangboriboonrat P, Tauer K (2015) Heterophase polymerization: pressures, polymers, particles. Colloid Polym Sci 293(3):761–776. doi:10.1007/s00396-014-3448-9 31. Wei CX, Tauer K (2016) Features of emulsion polymerization - how comes the monomer from the droplets into the latex particles? Macromol Symp, accepted 32. Atkins PW, de Paula J (2002) Atkins' physical chemistry, 7th edn. Oxford University Press Inc., New York ## Acknowledgements Open access funding provided by Max Planck Society. 
The authors gratefully acknowledge the financial support of the Max Planck Institute of Colloids and Interfaces. ## Author information ### Corresponding author Correspondence to Klaus Tauer. ## Ethics declarations ### Conflict of interest The authors declare that they have no conflict of interest. ## Cite this article Tripathi, A., Wei, C. & Tauer, K. Swelling of latex particles—towards a solution of the riddle. Colloid Polym Sci 295, 189–196 (2017). https://doi.org/10.1007/s00396-016-3988-2 ### Keywords • Emulsion polymerization • Monomer diffusion • Particle swelling
http://claesjohnson.blogspot.com/2011/11/josef-stefan-backradiation-and-dlr-pure.html
onsdag 16 november 2011

Josef Stefan: Backradiation and DLR Pure Fiction!!

Josef Stefan 1879: Backradiation and DLR is pure fiction!! Let us quote from Josef Stefan's On the Relation between Radiation and Temperature from 1879 (my translation of page 411):

• The absolute value of the heat energy emission from a radiating body cannot be determined by experiment. An experiment can only determine the surplus of emission over absorption, with the absorption determined by the emission from the environment of the body.
• However, if one has a formula for the emission as a function of temperature (like the Stefan-Boltzmann Law), then the absolute value of the emission can be determined, but such a formula has only a hypothetical meaning.

We see that Stefan understood very well that only the net/surplus has a physical meaning. Stefan would thus say that the mantra of CO2 alarmism of backradiation or Downwelling Longwave Radiation DLR from the atmosphere to the warmer Earth surface is pure fiction. Are CO2 alarmists relying on the Stefan-Boltzmann Law willing to listen to what Stefan has to say? Or can the work of a dead scientist be misused at will without any consequence?

6 kommentarer:

1. Claes, thanks for the reference. My German is poor and I need to use a dictionary. I agree with your translation and comments. It is a fault of native English speakers to ignore the vast body of research reported in books and journals written in languages other than English. German speakers made a huge contribution to Engineering, Physics and Chemistry from around 1840 to around 1940.

2. You seem to find it illuminating to imagine that long-dead scientists would hold beliefs that would make them look like idiots. The downward radiation from the atmosphere that you keep on trying to pretend doesn't exist is directly detected. Here's an example: http://www.aanda.org/index.php?option=com_article&access=standard&Itemid=129&url=/articles/aa/full/2003/33/aa3874/aa3874.html

3. What is the connection to warming of the Earth surface by transfer of heat energy from the atmosphere of the order of 300 W/m2 postulated in the Kiehl-Trenberth diagram?

4. Postulated? No, it's observed.

5. Stefan was a clever man; he was first with the SB-law, empirically. But I think he would have admitted that the background temperature will affect the net radiation and thus affect the temperature of a body.

6. "The absolute value of the heat energy emission from a radiating body cannot be determined by experiment. An experiment can only determine the surplus of emission over absorption, with the absorption determined by the emission from the environment of the body." If one can cool the surroundings to a low enough temperature, the surroundings will emit a negligible amount of radiation to be absorbed by a body. Under these circumstances, the absolute emission of a body would be meaningful. If Stefan were living today, when liquid helium and temperatures near absolute zero are readily accessible, I doubt he would feel that absolute emission is only a hypothetical concept. Unfortunately Stefan died 15 years before helium was first liquefied and about half a century before such low temperatures were common enough that superfluid He II was discovered. Frank
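The point about net exchange that runs through this thread can be made concrete with the Stefan-Boltzmann law: per unit area, the measurable surplus between a black body and its environment is σ(T_body⁴ − T_env⁴). The sketch below uses illustrative temperatures, not measured values; note that as the environment is cooled toward 0 K the surplus approaches the absolute emission σT⁴, which is the scenario raised in the last comment.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_emission(t_body, t_env):
    """Surplus of emission over absorption per unit area (W/m^2)
    between a black body at t_body and surroundings at t_env, in kelvin."""
    return SIGMA * (t_body**4 - t_env**4)

# Illustrative numbers: a 288 K surface facing a 255 K environment
# radiates a positive net surplus; facing surroundings at the same
# temperature, the net exchange is zero.
```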
http://mathhelpforum.com/advanced-algebra/173097-linear-transformation-y-bx-invertible-print.html
# Is the linear transformation y = A(Bx) invertible? • Mar 1st 2011, 03:59 PM centenial Is the linear transformation y = A(Bx) invertible? I'm having trouble with this question. http://img585.imageshack.us/img585/1729/243m.png Here is what I have: From the hint, we solve the equation $\vec{y}=A(B\vec{x})$ first for $B(\vec{x})$ and then for $\vec{x}$. So we have $B\vec{x}=A^{-1}\vec{y}$ so... $\vec{x}=B^{-1}A^{-1}\vec{y}$ I'm really not sure where to go from here though. • Mar 1st 2011, 07:30 PM Ackbeet Well, what does the fact that both $A$ and $B$ are invertible (so you can write their inverses) say about the last equation you wrote down? Is the transformation invertible? • Mar 1st 2011, 08:01 PM centenial Ok, I think I understand. So it is invertible and its inverse is simply: $\vec{x} = B^{-1}A^{-1}\vec{y}$ Is that correct? • Mar 1st 2011, 08:10 PM Ackbeet I suppose your answer is in keeping with the question's nomenclature in the OP. I would say that $AB$ is the linear transformation, and that $(AB)^{-1}=B^{-1}A^{-1}$ is the inverse transformation. I think you get the main mathematical idea, though.
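As a sanity check, the identity $(AB)^{-1}=B^{-1}A^{-1}$ discussed above can be verified numerically. Here is a small self-contained sketch with two arbitrary invertible $2\times 2$ matrices (any invertible pair works):

```python
def matmul2(m, n):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(m):
    """Inverse of an invertible 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Two arbitrary invertible matrices (det A = 5, det B = 2)
A = [[2.0, 1.0], [1.0, 3.0]]
B = [[1.0, 4.0], [0.0, 2.0]]

# Inverse of the composition y = A(Bx) ...
lhs = inv2(matmul2(A, B))
# ... equals the composition of the inverses in reverse order
rhs = matmul2(inv2(B), inv2(A))
```

Comparing `lhs` and `rhs` element by element shows they agree, which is the numerical counterpart of solving $\vec{y}=A(B\vec{x})$ first for $B\vec{x}$ and then for $\vec{x}$.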
http://www.fixerrs.com/2014/03/Fix-Runtime-Error-339.html?showComment=1498238651841
# Getting Runtime Error 339? Here is how to Fix it

Runtime errors occur whenever a program needs certain files during its execution to maintain its normal flow, but the Windows registry fails to locate a required file. This may happen for several reasons: the required file is corrupt, deleted, or moved, or its entry in the Windows registry is incorrect. Today we will be covering Runtime Error 339 in Windows.

### Causes of Runtime Error 339

• Corrupt .OCX files
• Program not installed properly
• DLL files of the program damaged, corrupt, or deleted
• Malware / virus infection
• Problem with the Windows registry

#### How to fix Runtime Error 339

A missing or corrupt .OCX file is the major reason behind Runtime Error 339. There are almost 10,000 .ocx files on an average system, so it is quite difficult to tell exactly which file is causing the problem. The good thing is that the error message displays the name of the file causing the error. For example: Runtime error 339 component "MSMASK32.OCX" or one of its dependencies not correctly registered: a file is missing or invalid. So at least you get an idea of which file is causing this error, and you can focus on that file only. Here are some solutions to get rid of Runtime Error 339:

1) Reinstall the program: If you have recently installed some program and it is causing this error, uninstall and reinstall the program. A program may cause the error if it is not installed correctly.

2) Reregister the .ocx file: For example, if you receive the message "run time error '339' component COMDLG32.ocx or one of its dependencies not correctly registered: a file is missing or invalid", then you should reregister the COMDLG32.ocx file with the help of the command prompt as follows:

• Go to command prompt > Run as administrator and type the following command exactly: regsvr32 comdlg32.ocx

You will receive a message saying DllRegisterServer in comdlg32.ocx succeeded.
It means you have successfully reregistered your .ocx file. (You may replace the file name above with whichever file name is causing the error for you.) Generally, reregistering the TABCTL32.ocx, ssa3d30.ocx, COMCTL32.ocx, RICHTX32.ocx, and comct232.ocx files solves the problem for many users.

3) Fix using the command prompt: If the error still persists after trying the above methods, right click on Command Prompt, click on Run as administrator, and then try to execute the following commands one by one:

Type this command exactly: regsvr32 \Windows\System32\msflxgrd.ocx /u and press enter. Once the above command has completely executed, type the next command: regsvr32 \Windows\System32\msflxgrd.ocx and press enter. Now exit the command prompt and check if you still receive the error message. (Note: Replace msflxgrd.ocx with whichever .ocx file name is causing the error for you.)

One of the best solutions for this problem in many cases is simply to download a fresh copy of the missing or corrupt .ocx file and replace the old one present in your system. This simple trick works very well. Just search Google for the missing .ocx file name and download it. I am quite sure that performing the above steps will definitely solve your issue of fixing Runtime Error 339 in Windows.

it works !!! thanks

Thanks Vishnu for this fix. Your second option succeeded for me. Once again thanks.
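If you are comfortable with scripting, a small helper can list every .ocx file in a folder and print the matching regsvr32 commands so you do not have to type them one by one. This script is a hypothetical convenience, not part of Windows; review its output and run the commands yourself from an administrator command prompt.

```python
import os

def regsvr32_commands(folder):
    """Return the regsvr32 command line for every .ocx file found in folder."""
    commands = []
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith(".ocx"):
            commands.append("regsvr32 " + os.path.join(folder, name))
    return commands

# Example: print re-registration commands for the System32 directory
# for cmd in regsvr32_commands(r"C:\Windows\System32"):
#     print(cmd)
```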
https://www.khanacademy.org/math/multivariable-calculus
# Multivariable calculus

Think calculus. Then think algebra II and working with two variables in a single equation. Now generalize and combine these two mathematical concepts, and you begin to see some of what multivariable calculus entails, only now with multidimensional thinking included. Typical concepts or operations include: limits and continuity, partial differentiation, multiple integration, scalar functions, and the fundamental theorem of calculus in multiple dimensions.
http://patthompson.net/ThompsonCalc/section_3_15.html
# Section 3.15 Constant Rate of Change & Linear Functions

The concept of constant rate of change is central to all of calculus; this will become evident in Chapters 4 and 5. In this chapter we review situations in which the idea of constant rate of change is involved. Constant rate of change always involves two quantities varying together. Two quantities vary at a constant rate with respect to each other if all variations in one are proportional to the corresponding variations in the other. If you are told two quantities vary at a constant rate with respect to each other, then you know immediately that all variations in one are proportional to corresponding variations in the other. The one exception is a constant rate of change of 0; in that case the relationship is not reversible.

Let x represent the value of Quantity A, let y represent the value of Quantity B, and let x be the independent variable. Assume that B varies at a constant rate with respect to A. Then, according to the meaning of constant rate of change, any variation in y will be m times as large as the variation in x that corresponds to it, where m is a real number and $m \ne 0$.

We can state the above paragraph symbolically. Suppose that y varies at a constant rate with respect to x. Let $dx$ represent a variation in x, and let $dy$ represent the corresponding variation in y. Then the relationship between variations is $dy=m\cdot dx$ for some number $m, m \neq 0$. If the variation in the value of x is a variation from 0, then $dx=x$ and $dy=m\cdot x$. The value of y when $x=0$ is represented by the letter b, thus $y=b$ when $x=0$. Therefore, $y=mx+b$ is the general form of the relationship between the values of two quantities that vary at a constant rate with respect to each other. The letter m represents the value of that constant rate of change.

## Why Our Definition of Constant Rate of Change?
Students often wonder why we say that two quantities vary at a constant rate with respect to each other if and only if $dy=m\cdot dx$, that is, if variations in their measures are proportional. They often insist on using the more common definition given below. It turns out that the more common definition is both inaccurate and misleading.

Common, but misleading, definition of constant rate of change: y varies at a constant rate with respect to x if for a fixed amount of change in x (i.e., $\Delta x$) the amount of change in y (i.e., $\Delta y$) is constant.

Figure 3.15.0 illustrates why thinking in terms of $\Delta y$ in relation to fixed variations $\Delta x$ is problematic. You must think in terms of variations in y in relation to variations in x within $\Delta x$-intervals, which means attending to $dy$ and $dx$.

Figure 3.15.0. An illustration of why you must think of constant rate of change in terms of relationships between differentials $dy$ and $dx$ instead of relationships between "chunks" of variation $\Delta y$ and $\Delta x$.

Notice that "change" is often used in mathematics with three different meanings:
• change in progress
• completed change
• replace one thing by another

This can lead to misreading a statement about "change in x" or "change in y": are we talking about change in progress, completed change, or replacing one value by another? It is useful to avoid using the word "change" except in the case of replacing one thing by another. We use "vary" to mean change in progress and "variation" to mean completed change. We use "change" to mean replacing one thing with another. This usage of "vary" and "variation" is not a rule; the important thing is to be aware of which meaning of "change" you have in mind or someone else intends. The word "change" in the phrase "rate of change" always has the connotation of change in progress.

## Envisioning Constant Rate of Change

The top bar in Figure 3.15.1 represents the value of y as the value of x varies.
The value of y is b when the value of x is 0. The bottom bar in Figure 3.15.1 represents the value of x. The value of y varies at a constant rate of m with respect to the value of x. As the value of x varies by $dx$, the value of y varies by $m\,dx$. This is true no matter the value of m.

Notice also that the value of $dx$ varies smoothly. Variations are not added in whole chunks; rather, they accrue smoothly. Therefore values of $dy$ vary smoothly so that $dy=m\, dx$ even as $dx$ varies. Move your pointer away from the animation to hide the control bar.

Figure 3.15.1. Variations in the value of y (dy) are proportional to variations in the value of x (dx). Move your pointer away from the animation to hide the control bar.

In Figure 3.15.2, the top bar represents the value of y as the value of x varies. The bottom bar represents the value of x. The value of y varies at a constant rate of m with respect to the value of x. The variations in x aggregate to the value of x. The corresponding variations in y aggregate to mx. The value of y is therefore always $y = mx + b$. This is true no matter the value of m.

Figure 3.15.2. Variations in x aggregate to the value of x; variations in y aggregate to the value of mx. The value of y is always y = mx + b.

Reflection 3.15.1. Some people think that Figures 3.15.1 and 3.15.2 say the same thing. Others disagree. Why might each group of people think as they do about Figures 3.15.1 and 3.15.2?

Alert! How you read the next two examples is important. Read them with the goal that you become able to repeat their patterns of reasoning. Do not read them with the goal of remembering what to write!

### Example 3.15.1

Stan started running errands in Phoenix at 8:00 am. He had driven 32 km at the end of the last errand. At 11:30 am Stan headed for Tucson at a constant speed of 113 km/hr; 1.6 hours later he began slowing to a stop. Let D represent the number of km that Stan drove since 8:00 am, and let t represent the number of hours since 8:00 am.

While on his trip to Tucson, the rate of change of D with respect to t was 113 km/hr, so $dD = 113\,dt$ km, and D, Stan's distance traveled at the constant speed of 113 km/hr, is $D=113(t-3.5)+32$ km.

Why must we subtract 3.5 from t? Because it was not until 11:30 am, when t = 3.5, that Stan began driving at a constant speed. So it is only for $3.5 \le t \le 5.1$ that $D=113(t-3.5)+32$ is a valid model of the distance Stan drove while driving to Tucson. Stan's distance driven with respect to time did not vary at a constant rate while he ran errands nor after he began slowing down in Tucson.

Reflection 3.15.2. Describe similarities and differences between the story of Stan's trip and Figures 3.15.1 and 3.15.2.

### Example 3.15.2

I am traveling away from home on a straight road at the constant speed of 0.35 km/min. I am 3.8 km from home and I first looked at my watch 7 minutes ago.

1. How much will my distance from home vary in the next 1/10 minute? How much will my distance from home vary in the next 3/10 minute? How much will my distance from home vary in the next t minutes?
   • Let D represent the number of km from home.
   • Let t represent the number of minutes since I looked at my watch.
   • When $dt = 0.1$, $dD = (0.35)(0.1)$.
   • When $dt = 0.3$, $dD = (0.35)(0.3)$.
   • When $dt = t$, $dD = (0.35)t$.
2. Was I at home when I looked at my watch?
   • No; 7 minutes ago I was $(0.35)(-7)$ km from where I am now.
   • Home is -3.8 km from where I am now.
   • So I was $3.8 + (0.35)(-7)$ km from home when I first looked at my watch.
3. Define a function that relates my distance from home in km to the number of minutes since I looked at my watch.
   • From (b), I was $3.8 + (0.35)(-7)$ km from home when I looked at my watch.
   • My distance from home increased by 0.35t km in the t minutes since looking at my watch.
   • My distance from home as a function of time is $D = 0.35t+(3.8+ 0.35(-7))$, or $D = 0.35(t-7) + 3.8$.
4. Generalize a-c.
• The variable y varies at a constant rate m with respect to x.
• The value of y is $y_0$ when the value of x is $x_0$.
• Define a function f that relates values of y to their corresponding values of x.
• Variations in y and x are proportional.
• The variation in the value of x from $x_0$ is $(x-x_0)$. So the variation in the value of y that corresponds to the variation $(x-x_0)$ is $m(x-x_0)$.
• The value of y is $y_0$ when the value of x is $x_0$. Therefore, the value of y for any value of x is $y = m(x-x_0) + y_0$.
• Put differently, the value of y for any value of x equals the initial value of y plus the variation in y due to the variation in x from $x_0$. Define the function f to be $f(x) = m(x-x_0) + y_0$.

## Determining a Constant Rate of Change

Suppose that we are told that Quantity A varied by p kg while Quantity B varied by q liters, $p, q > 0$, and that they varied at a constant rate of m with respect to each other. Let y represent the value of Quantity A and let x represent the value of Quantity B. What is the value of m?

There are two ways to reason about determining the constant rate of change of Quantity A with respect to Quantity B. The first is to reason quantitatively; the second is to reason algebraically.

Way of Reasoning #1: Look at variations in A relative to a variation of 1 liter in B.

• Since y and x varied at a constant rate with respect to each other, variations in y are proportional to variations in x.
• The value of x varied by q liters. Thus, for the value of x to vary by 1 liter, it would vary by $\left(\dfrac{1}{q}\text{ of }q\right)$ liters.
• Since variations in x and y are proportional, y would vary by $\left(\dfrac{1}{q}\text{ of }p\right)$ kg when x varies by $\left(\dfrac{1}{q}\text{ of }q\right)$ liters.
• But $\left(\dfrac{1}{q}\text{ of }p\right)$ is $\dfrac{p}{q}$.
• Therefore y varies $\dfrac{p}{q}$ kg per liter, or $m = \dfrac{p}{q}$.

Way of Reasoning #2: Look at total variation in A relative to total variation in B.

• Since y and x varied at a constant rate with respect to each other, all variations in y are proportional to variations in x.
• y varied by p kg; x varied by q liters.
• So $p = mq$.
• Therefore $m = \dfrac{p}{q}$.

The quantitative way of reasoning explains why you compute a constant rate of change by dividing the value of one variation by the value of the other variation. The algebraic way of reasoning is shorter, but it is about algebra, making it harder to connect with the idea of constant rate of change. You should strive to think both ways.

## Determining Variations and Constant Rate of Change

Let f be a function having a constant rate of change with respect to its independent variable x. The value of x varies from 7.2 to 9.35.

• What is the variation in the value of x from $x=7.2$ to $x=9.35$?
• What is the variation in the value of $f(x)$ when the value of x varies from $x=7.2$ to $x=9.35$?
• What is f's constant rate of change with respect to x as the value of x varies from $x=7.2$ to $x=9.35$?

Let $\Delta x$ represent the variation in the value of x; let $\Delta f(x)$ represent the variation in the value of $f(x)$. Then:

• Variation in the value of x from $x=7.2$ to $x=9.35$: $\quad\Delta x=9.35-7.2$
• Variation in the value of $f(x)$ from $x=7.2$ to $x=9.35$: $\quad\Delta f(x)=f(9.35)-f(7.2)$
• Constant rate of change of f with respect to x as the value of x varies from 7.2 to 9.35:
$$\frac{\Delta f(x)}{\Delta x} = \frac{f(9.35)-f(7.2)}{9.35-7.2}$$

More generally, if the value of x varies from $x=a$ to $x=b$, $a\neq b$, then:

• Variation in the value of x from $x=a$ to $x=b$: $\quad\Delta x=b-a$
• Variation in the value of $f(x)$ from $x=a$ to $x=b$: $\quad\Delta f(x)=f(b)-f(a)$
• Constant rate of change of $f(x)$ with respect to x as the value of x varies from $x=a$ to $x=b$: $\quad\dfrac{\Delta f(x)}{\Delta x} = \dfrac{f(b)-f(a)}{b-a}$

Reflection 3.15.3. Notice two things:

1. We answered these questions without knowing a definition of f.
2. In the general statement, we stated $a\neq b$. We said nothing about whether $a\lt b$ or $b\lt a$. Is the general statement true, that $\dfrac{f(b)-f(a)}{b-a}$ gives a function's constant rate of change, regardless of whether $a\lt b$ or $b\lt a$? Test your answer with $f(x)=2.5x-1$.

## Connecting Constant Rate of Change to Cartesian Graphs

The animation in Figure 3.15.3 is similar to that in Figure 3.15.2. It shows an initial value of y (labeled b) and variations in y that are m times as large as variations in x as the value of x varies. Figure 3.15.3 then shows the upper bar rotating so that it rests upon the end of the bar representing the value of x as it (the value of x) varies. This is like plotting the value of y above a value of x in a rectangular coordinate system. The result is that the graph of $y=mx+b$ is a line in a rectangular coordinate system.

Figure 3.15.3. Variations in y being proportional to variations in x creates linear graphs in a rectangular coordinate system.

## Exercise Set 3.15

1. Turtle and Rabbit ran a race. The graph on the right shows the total distance they had run at each moment in time while running.
   1. Estimate Rabbit's speed over and speed back. Zoom the animation to full screen if it is too small as shown.
   2. Estimate Turtle's speed over and speed back. Zoom the animation to full screen if it is too small as shown.
   3. At what speed must Rabbit travel back so that the two tie?
2. Type $r = 2\theta$ in GC. Print its displayed graph. On the printed graph, add details that illustrate that r varies at a constant rate of 2 units per radian with respect to θ.
3. A function g has a constant rate of change of 2.25 with respect to its independent variable. The point (3.25, 4.3125) is on g's graph. Use the fact that g has a constant rate of change to give coordinates of three other points on g's graph.
4. A function h has a constant rate of change with respect to its independent variable. The points (1.5, -0.375) and (4.2, 4.35) are on h's graph. Use the fact that h has a constant rate of change to give coordinates of three other points on h's graph. How does your strategy build from your strategy for question 3?
5. Define a function that has a constant rate of change of 0.75 with respect to its independent variable and passes through the point (0, 2) in the polar coordinate system.
6. The animation below illustrates how to answer questions 6a - 6f. Complete 6a - 6f similarly.
7. A function $k$ has a constant rate of change of 2.5 with respect to values of the function $h$, where $h(x)=1+\ln(x)$, x > 0. Also, k has a value of 3 when h has a value of 2. Express k as a function of x.
8. A top-fuel dragster ran a 1/4-mile (1320 ft) race. It had traveled 1305.48 feet after 3.58 seconds and it traveled the entire 1320 feet in 3.61 seconds.
   1. What was the dragster's speed, approximately, in feet per second at the end of the race?
   2. What was the dragster's speed, approximately, in miles per hour at the end of the race?
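The two ways of reasoning above ("constant rate equals one variation divided by the other", then "value of y equals initial value plus the variation $m(x-x_0)$") can be sketched in a few lines of Python. The function names and the sample points below are mine, not from the text, and are deliberately different from the exercise data so nothing is given away.

```python
def rate_from_points(x0, y0, x1, y1):
    """Constant rate of change: m = (variation in y) / (variation in x)."""
    return (y1 - y0) / (x1 - x0)

def linear_from_point(m, x0, y0):
    """f(x) = m*(x - x0) + y0: the initial value of y plus the variation
    in y that corresponds to the variation (x - x0) in x."""
    return lambda x: m * (x - x0) + y0

# Hypothetical data: y varies at a constant rate with respect to x,
# and (1.0, 2.0) and (5.0, 14.0) are two known (x, y) pairs.
m = rate_from_points(1.0, 2.0, 5.0, 14.0)   # 12.0 / 4.0 = 3.0
f = linear_from_point(m, 1.0, 2.0)
print(m, f(0.0), f(5.0))   # 3.0 -1.0 14.0
```

Evaluating `f` at any x then gives another point on the graph, which is exactly the strategy questions 3 and 4 ask you to articulate.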
http://math.stackexchange.com/questions/285227/prove-expxy-expx-expy
# Prove $\exp(x+y) = \exp(x) \exp(y)$

I am trying to prove $\exp(x+y) = \exp(x) \exp(y)$. I may use that
$$\exp(x) = \sum_{n=0}^\infty \frac {x^n}{n!}$$
I further know how to multiply two power series about the same point, i.e. if $f(x) = \sum_{n=0}^\infty c_n(x-a)^n$ and $g(x) = \sum_{n=0}^\infty d_n(x-a)^n$ then
$$f(x)g(x) = \sum_{n=0}^\infty e_n(x-a)^n$$
with
$$e_n = \sum_{m=0}^n c_m d_{n-m}$$

- What's the question? – nbubis Jan 23 '13 at 18:54
- Presumably you tried to multiply the power series for $\exp(x)$ and $\exp(y)$. What did you get stuck on? – rschwieb Jan 23 '13 at 18:54
- The title is the question @nbubis. This question is from Tao's Analysis II and he gives the hint with the multiplication of power series. I don't see how to apply that hint. – André Jan 23 '13 at 18:57
- Ok I got it :) Can I delete this question? – André Jan 23 '13 at 19:00
- This must be a duplicate... – Fabian Jan 23 '13 at 19:02

\begin{align} \exp(x+y)&=\sum_n\frac{(x+y)^n}{n!} \\\\ &=\sum_{n}\frac{1}{n!}\sum_{a+b=n} {n \choose a} x^ay^b \\\\ &= \sum_{n}\frac{n!}{n!}\sum_{a+b=n}\frac{x^a}{a!}\frac{y^b}{b!} \\\\ &= \sum_{a,b} \frac{x^a}{a!}\frac{y^b}{b!} \\\\ &= \exp(x)\cdot\exp(y) \end{align}

- You can write faster than me :D – André Jan 23 '13 at 19:16
- You overcomplicated things in your solution. There is no need to work with a generating function as you have done. Also, you left out some steps in your first equality when going from a product of two generating functions into a single generating function. – pre-kidney Jan 23 '13 at 19:17
- I gave the justification for that in my question. – André Jan 23 '13 at 19:19
- Oh, I see now. Thanks for pointing that out :) – pre-kidney Jan 23 '13 at 19:20
- You have used this equality: $$\bigsqcup _n\{(a,b) \mid a+b=n\}=\{(a,b)\mid a,b\ge 0\}$$ and absolute convergence.
– user59671 May 9 '13 at 11:05

My solution: Let $x,y \in \mathbb R$ and $f(z) := \sum_{n=0}^\infty \left(\frac {x^n}{n!} \right )z^n$ and $g(z) := \sum_{n=0}^\infty \left(\frac {y^n}{n!} \right )z^n$. Then $\exp(x) \exp(y) = f(1)g(1)$. Multiplying the series,
$$f(z)g(z) = \sum_{n=0}^\infty \left( \sum_{m=0}^n \frac {x^m y^{n-m}}{m! (n-m)!} \right)z^n = \sum_{n=0}^\infty \frac 1 {n!} (x+y)^n z^n$$
thus $f(1)g(1) = \exp(x+y)$.

This can actually be done without writing a single sum. Consider the function
$$f(x, y) = \frac{e^x e^y}{e^{x+y}}.$$
Observe that
$$\frac{\partial f}{\partial x} = \frac{e^x e^y e^{x+y} - e^x e^y e^{x+y}}{(e^{x+y})^2} = 0.$$
Similarly,
$$\frac{\partial f}{\partial y} = \frac{e^x e^y e^{x+y} - e^x e^y e^{x+y}}{(e^{x+y})^2} = 0.$$
This shows that $f$ is a constant function. Now we need only use the series definition to show $f(0, 0) = 1$. Then, by rearrangement, we have the desired result:
$$e^{x+y} = e^x e^y.$$

$A(t) = \sum_{n=0}^\infty \frac {x^n}{n!}t^n$, so $A(1)=\exp(x)$.
$B(t) = \sum_{n=0}^\infty \frac {y^n}{n!}t^n$, so $B(1)=\exp(y)$.
$C(t) = A(t)\cdot B(t)=\sum_{n=0}^\infty \left(\sum_{k+z=n} \frac {x^k}{k!}\cdot\frac {y^z}{z!}\right)t^n=\sum_{n=0}^\infty \frac {(x+y)^n}{n!}t^n$, and use $t=1$: $C(1)=\exp(x+y)$. Sry I was too late^^

- Duplicate of the two answers ^^ – André Jan 23 '13 at 19:23
- Actually, both you and @nordmann use an extraneous generating function [in the variables $t$, $z$ respectively]. The reason why I say "extraneous" is because if you look at my solution above, no new variable need be introduced. – pre-kidney Jan 23 '13 at 19:43
- sry im new to this latex stuff^^ – nordmann Jan 23 '13 at 23:16
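For readers who want to see the Cauchy-product argument above in action, here is a short numerical sanity check in Python (the helper names are mine). It verifies that each product coefficient $e_n=\sum_{m=0}^n \frac{x^m}{m!}\frac{y^{n-m}}{(n-m)!}$ collapses to $\frac{(x+y)^n}{n!}$, and that the truncated series product matches $\exp(x+y)$ to floating-point accuracy.

```python
from math import exp, factorial

def exp_series(x, terms=40):
    """Partial sum of exp(x) = sum_{n>=0} x**n / n!."""
    return sum(x**n / factorial(n) for n in range(terms))

def product_coeff(x, y, n):
    """Cauchy product coefficient e_n = sum_{m=0}^{n} (x^m/m!)(y^{n-m}/(n-m)!)."""
    return sum(x**m / factorial(m) * y**(n - m) / factorial(n - m)
               for m in range(n + 1))

x, y = 0.7, 1.3
# Each product coefficient equals the series coefficient of exp(x+y).
for n in range(15):
    assert abs(product_coeff(x, y, n) - (x + y)**n / factorial(n)) < 1e-12
# And the truncated series agree with exp(x)*exp(y) = exp(x+y).
print(abs(exp_series(x) * exp_series(y) - exp(x + y)) < 1e-9)   # True
```

This is only a numerical check of finitely many coefficients, of course; the actual proof needs the absolute-convergence argument mentioned in the comments.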
https://courses.lumenlearning.com/calculus2/chapter/introduction-to-integration-formulas-and-the-net-change-theorem/
## What you’ll learn to do: Use the net change theorem In this section, we use some basic integration formulas studied previously to solve some key applied problems. It is important to note that these formulas are presented in terms of indefinite integrals. Although definite and indefinite integrals are closely related, there are some key differences to keep in mind. A definite integral is either a number (when the limits of integration are constants) or a single function (when one or both of the limits of integration are variables). An indefinite integral represents a family of functions, all of which differ by a constant. As you become more familiar with integration, you will get a feel for when to use definite integrals and when to use indefinite integrals. You will naturally select the correct approach for a given problem without thinking too much about it. However, until these concepts are cemented in your mind, think carefully about whether you need a definite integral or an indefinite integral and make sure you are using the proper notation based on your choice.
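The definite/indefinite distinction can be made concrete with a small sketch. The code below is illustrative only and uses names and numbers of my own choosing: a definite integral (approximated here by a midpoint Riemann sum) evaluates to a single number, while an indefinite integral is a whole family of antiderivatives differing by a constant, and every member of that family produces the same definite integral via $F(b)-F(a)$.

```python
def definite_integral(f, a, b, n=100_000):
    """Midpoint Riemann sum: a definite integral with constant limits
    is a single number."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

def antiderivative_of_x_squared(C):
    """An indefinite integral of f(x) = x**2 is the family x**3/3 + C."""
    return lambda x: x**3 / 3 + C

f = lambda x: x**2
value = definite_integral(f, 0.0, 1.0)   # approximately 1/3, a number
# Every member of the family gives the same definite integral F(b) - F(a):
for C in (0.0, 5.0, -2.5):
    F = antiderivative_of_x_squared(C)
    assert abs((F(1.0) - F(0.0)) - value) < 1e-6
print(round(value, 6))   # 0.333333
```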
https://www.physicsforums.com/threads/electron-wave-funtion-harmonic-oscillator.972365/
# Electron wave function harmonic oscillator

Thread starter: jhonnyS

Summary: the electron wave function for a given energy level, without superposition of states, decreases its frequency as the distance from the center increases. So shouldn't the "oscillation" be slower farther from the center?

As we see in this PhET simulator, this is only the real part of the wave function; the frequency decreases with the potential, so it loses energy as it moves away from the center. We see this real-imaginary animation in Wikipedia, waves C, D, E, F. Because with less energy the frequency of the quantum wave decreases, and the speed decreases too, wouldn't the oscillation be slower according to the frequency variation in image F, for example (the same way we see image D oscillating slower than F)? The desired response: without formulas and without the time-independent Schrödinger equation, just explained. Thank you!

#### BvU (Science Advisor, Homework Helper)

Hello jhonny, !

> and the speed decreases too

No. C, D, E, F are not 'moving': they are solutions of the SE consisting of solutions of the TISE times $e^{i\omega t}$. For those, $<x(t)>=0$: the expectation value of the position is zero (i.e., constant).

#### BvU (Science Advisor, Homework Helper)

> As we see in this Phet simulator

Your interpretation of the variations in the wave function as 'frequency' is wrong: those are (spatial) variations in the wave function, nothing else. $\Psi^2$ is a probability density and that is fluctuating, unlike in a classical harmonic oscillator. Even if you hate formulas you can look at this difference between QM and classical.
https://spinningnumbers.org/a/rlc-natural-response-derivation.html
We derive the natural response of a series resistor-inductor-capacitor $(\text{RLC})$ circuit. The $\text{RLC}$ circuit is representative of real-life circuits we actually build, since every real circuit has some finite resistance, inductance, and capacitance. This circuit has a rich and complex behavior that shows up in many areas of engineering. The mechanical analog of an $\text{RLC}$ circuit is a pendulum with friction.

If you've never solved a differential equation, I recommend you begin with the RC natural response - derivation.

Written by Willy McAllister.

The $\text{RLC}$ circuit can be modeled with a second-order linear differential equation, with current $i$ as the dependent variable,

$\text L \,\dfrac{d^2i}{dt^2} + \text R\,\dfrac{di}{dt} + \dfrac{1}{\text C}\,i = 0$

The resulting characteristic equation is,

$s^2 + \dfrac{\text R}{\text L}s + \dfrac{1}{\text{LC}} = 0$

We find the roots of the characteristic equation with the quadratic formula,

$s=\dfrac{-\text R \pm\sqrt{\text R^2-4\text L/\text C}}{2\text L}$

It is very helpful to introduce variables $\alpha$ and $\omega_o$,

Let $\quad \alpha = \dfrac{\text R}{2\text L}\quad$ and $\quad\omega_o = \dfrac{1}{\sqrt{\text{LC}}}$

Then the characteristic equation and its roots can be compactly written as,

$s^2 + 2\alpha s + \omega_o^2 = 0$

$s=-\alpha \pm\,\sqrt{\alpha^2 - \omega_o^2}$

where $\alpha$ is called the damping factor and $\omega_o$ is called the resonant frequency.

## Strategy

The strategy for solving this circuit is the same one we used for the second-order LC circuit.

1. The second-order differential equation is based on the $i$-$v$ equations for $\text R$, $\text L$, and $\text C$. Use Kirchhoff's Voltage Law (sum of voltages around a loop) to assemble the equation.
2. Make an informed guess at a solution. As usual, our guess will be an exponential function of the form $Ke^{st}$.
3. Insert the proposed solution into the differential equation.
4. Do a little algebra: factor out the exponential terms to leave us with a characteristic equation in the variable $s$.
5. Find the roots of the characteristic equation with the quadratic formula.
6. Find the $K$ constants by accounting for the initial conditions.
7. The original guess is confirmed if $K$'s are found and are in fact constant (not changing with time).
8. Celebrate the solution!

In this article we cover the first steps of the derivation, up to the point where we have the so-called characteristic equation. The following article on RLC natural response - variations carries through with three possible outcomes depending on the specific component values.

## Model the circuit

Here's the $\text{RLC}$ circuit the moment before the switch is closed. We call this time $t(0^-)$, the moment before the switch closes. The current $i$ is $0$ everywhere, and the capacitor is charged up to an initial voltage $\text V_0$.

Voltage polarity and current direction

There's a bit of cleverness with the voltage polarity and current direction. I looked ahead a little in the analysis and arranged the voltage polarities to get some positive signs where I want them, just for aesthetic value. At the same time, it is important to respect the sign convention for passive components.

Capacitor voltage: I want the capacitor to start out with a positive charge on the top plate, which means the positive sign for $v_\text C$ is also at the top plate. The natural response will start out with a positive voltage hump.

Inductor current: When the switch closes, the initial surge of current flows from the capacitor over to the inductor, in a counter-clockwise direction. I want this initial current surge to have a positive sign. Current $i$ flows into the inductor from the top. I think this makes the natural response current plot look nicer.

Inductor voltage: The sign convention for the passive inductor tells me to assign $v_\text L$ with the positive voltage sign at the top.
Resistor voltage: The resistor voltage makes no artistic contribution, so it can be assigned to match either the capacitor or the inductor. I happened to match it to the capacitor, but you could do it either way.

The voltage and current assignment used in this article: $v_\text C$ is positive on the top plate of the capacitor. Both $v_\text R$ and $v_\text C$ will have $-$ signs in the clockwise KVL equation.

Respect the passive sign convention: The artistic voltage polarity I chose for $v_\text C$ (positive at the top) conflicts with the direction of $i$ in terms of the passive sign convention. Current $i$ flows up out of the $+$ terminal of the capacitor instead of down into the $+$ terminal as the sign convention requires. I account for the backwards current when I write the $i$-$v$ equation for the capacitor, with a $-$ sign in front of $i$.

The current through the resistor has the same issue as the capacitor; it's also backwards relative to the passive sign convention. I will handle it the same way when I write Ohm's law for the resistor, with a $-$ sign in front of $i$.

Notice how I achieved artistic intent and respected the passive sign convention. I thought it would be helpful to walk through this in detail. Most textbooks give you the integro-differential equation without this long explanation; you have to work out the signs yourself.

Now we close the switch and the circuit becomes,

From the moment the switch closes we want to find the current and voltage for $t=0^+$ and after. We write $i$-$v$ equations for each individual element,

$v_\text L = \text L\, \dfrac{di}{dt}$

$v_\text R = - i\,\text R$

$v_\text C = \dfrac{1}{\text C}\,\displaystyle \int{-i \,dt}$

The $-$ signs in the $v_\text R$ and $v_\text C$ equations appear because the current arrow points backwards relative to the passive sign convention.

We model the connectivity with Kirchhoff's Voltage Law (KVL). Let's start in the lower left corner and sum voltages around the loop going clockwise.
The inductor has a voltage rise, while the resistor and capacitor have voltage drops.

$+v_{\text L} - v_{\text R} - v_{\text C} = 0$

We substitute each $v$ term with its $i$-$v$ relationship,

$\text L \,\dfrac{di}{dt} + \text R\,i + \dfrac{1}{\text C}\,\displaystyle \int{i \,dt} = 0$

If we wanted to, we could attack this equation and try to solve it. However, the integral term is awkward and makes this approach a pain in the neck. It's possible to retire the integral by taking the derivative of the entire equation,

$\dfrac{d}{dt}\left (\,\text L \,\dfrac{di}{dt} + \text R\,i + \dfrac{1}{\text C}\,\displaystyle \int{i \,dt}\,\right) = 0$

We end up with a second derivative term, a first derivative term, and a plain $i$ term, all still equal to $0$.

$\text L \,\dfrac{d^2i}{dt^2} + \text R\,\dfrac{di}{dt} + \dfrac{1}{\text C}\,i = 0$

This is called a homogeneous second-order ordinary differential equation. It is homogeneous because every term is related to $i$ and its derivatives. It is second order because the highest derivative is a second derivative. It is ordinary because there is only one independent variable, $t$ (no partial derivatives). Now we solve our differential equation.

## Propose a solution

Just like we did with previous natural response problems (RC, RL, LC), we assume a solution with an exponential form ("assume a solution" is a mathy way to say "guess"),

$i(t) = Ke^{st}$

$K$ is an adjustable parameter. It determines the amplitude of the current. $s$ is up there in the exponent next to $t$, so it must represent some kind of frequency ($s$ has to have units of $1/t$ to make the exponent dimensionless). We call $s$ the natural frequency.

When we have multiple derivatives in an equation it's really nice when they all have a strong family resemblance. An exponential function has a wondrous property: its derivatives look a lot like itself. It has the strongest family resemblance of all.
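The "family resemblance" is easy to see numerically. The sketch below (with made-up values of $K$ and $s$, not from the article) checks by central differences that the derivative of $Ke^{st}$ is just $s$ times the function itself, which is exactly why the exponential guess survives every derivative in the differential equation.

```python
import math

K, s = 2.0, -3.0   # made-up amplitude and natural frequency

def i(t):
    """Proposed solution i(t) = K * e^(s*t)."""
    return K * math.exp(s * t)

def ddt(f, t, h=1e-6):
    """Central-difference numerical derivative."""
    return (f(t + h) - f(t - h)) / (2 * h)

# The derivative of Ke^(st) is s times the function itself, so every
# derivative term in the ODE keeps a common factor of Ke^(st).
for t in (0.0, 0.5, 1.0):
    assert abs(ddt(i, t) - s * i(t)) < 1e-4
print("d/dt of K*e^(s*t) equals s * K*e^(s*t) at the sampled points")
```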
## Try the proposed solution

Next, we substitute the proposed solution into the differential equation. If the equation turns out to be true then our proposed solution is a winner.

$\text L \,\dfrac{d^2}{dt^2}Ke^{st} + \text R\,\dfrac{d}{dt}Ke^{st} + \dfrac{1}{\text C}Ke^{st} = 0$

First, go to work on the two derivative terms. The middle term has a first derivative,

$\text R\,\dfrac{d}{dt}Ke^{st} = s\text{R}Ke^{st}$

The leading term has a second derivative, so we take the derivative of $Ke^{st}$ two times,

$\dfrac{d}{dt}Ke^{st}= sKe^{st}$

$\dfrac{d}{dt}sKe^{st}= s^2Ke^{st}$

$\text L \,\dfrac{d^2}{dt^2}Ke^{st} = s^2\text LKe^{st}$

Now we can plug our new derivatives back into the differential equation,

$s^2\text LKe^{st} + s\text RKe^{st} + \dfrac{1}{\text C}\,Ke^{st} = 0$

Next, factor out the common $Ke^{st}$ terms,

$Ke^{st}\left (s^2\text L + s\text R + \dfrac{1}{\text C}\right ) = 0$

This is what our differential equation becomes when we assume $i(t) = Ke^{st}$.

## Characteristic equation

Now let's figure out how many ways we can make this equation true.

We could set the amplitude term $K$ to $0$. That means $i = 0$. We put nothing into the circuit and get nothing out. $K = 0$ is pretty boring.

We could let $e^{st}$ decay to $0$. The term $e^{st}$ goes to $0$ if $s$ is negative and we wait until $t$ goes to $\infty$. Infinity is a really long time. If we wait for $e^{st}$ to go to zero we get pretty bored, too.

We have one more way to make the equation true: we can set the term with all the $s$'s equal to zero,

$s^2\text L + s\text R + \dfrac{1}{\text C} = 0$

This is called the characteristic equation of the $\text{RLC}$ circuit. It is by far the most interesting way to make the differential equation true.

## Roots of the characteristic equation

Let's find values of $s$ that make the characteristic equation true. If we can make the characteristic equation true, then the differential equation becomes true, and our proposed solution is a winner.
We need to find the roots of the characteristic equation. We have exactly the right tool, the quadratic formula. Quadratic equations have the form,

$ax^2 + bx + c = 0$

and the roots are given by the quadratic formula,

$x=\dfrac{-b \pm\sqrt{b^2-4ac}}{2a}$

Now look back at the characteristic equation and match up the components to $a$, $b$, and $c$,

$a = \text L$, $b = \text R$, and $c = 1/\text{C}$

and the roots of the characteristic equation become,

$s=\dfrac{-\text R \pm\sqrt{\text R^2-4\text L/\text C}}{2\text L}$

We have solved for $s$, the natural frequency. As we might expect, the natural frequency is determined by (a rather complicated) combination of all three component values.

## $\alpha$ and $\omega_o$ notation

We can make the characteristic equation and the expression for $s$ more compact if we create two new made-up variables, $\alpha$ and $\omega_o$. Let,

$\alpha = \dfrac{\text R}{2\text L}\quad$ and $\quad\omega_o = \dfrac{1}{\sqrt{\text{LC}}}$

Reformat the characteristic equation a little: divide through by $\text L$,

$s^2 + \dfrac{\text R}{\text L}s + \dfrac{1}{\text{LC}} = 0$

Substitute in $\alpha$ and $\omega_o$ and we get this compact expression,

$s^2 + 2\alpha s + \omega_o^2 = 0$

Use the quadratic formula on this version of the characteristic equation,

$x=\dfrac{-b \pm\sqrt{b^2-4ac}}{2a}$

$s = \dfrac{-2\alpha \pm\sqrt{4\alpha^2-4\omega_o^2}}{2}$

and it simplifies to a neat and tidy,

$s=-\alpha \pm\,\sqrt{\alpha^2 - \omega_o^2}$

### What are $s$, $\alpha$, and $\omega_o$?

We know $s$ has to be some sort of frequency because it appears next to $t$ in the exponent of $e^{st}$. An exponent has to be dimensionless, so the units of $s$ must be $1/t$, the unit of frequency. That means $\alpha$ and $\omega_o$, the two terms inside $s$, are also some sort of frequency.

- $s$ is called the natural frequency, composed of two parts, $\alpha$ and $\omega_o$.
- $\alpha$ is called the damping factor.
It will determine how fast (or slow) the signal fades to zero.
- $\omega_o$ is called the resonant frequency. It will determine how often the system swings back and forth. This is the same resonant frequency we found for the $\text{LC}$ natural response.

The quadratic formula gives us two solutions for $s$, because of the $\pm$ term in the quadratic formula. We'll call these $s_1$ and $s_2$. Perhaps both of them impact the final answer, so we update our proposed solution so the current is a linear combination of (the sum or superposition of) two separate exponential terms,

$i = K_1 e^{s_1t} + K_2 e^{s_2t}$

We know $s_1$ and $s_2$ from above. Now we have to deal with two adjustable amplitude parameters, $K_1$ and $K_2$. We'll see what happens with this change to two exponentials in the worked examples.

## Three variations

Now it gets really interesting. The problem splits into three different paths based on how $s$ turns out.

$s=-\alpha \pm\,\sqrt{\alpha^2 - \omega_o^2}$

Depending on the relative size of $\alpha$ compared to $\omega_o$, the expression $\alpha^2 - \omega_o^2$ under the square root will be positive, zero, or negative,

| relation | sign of $\alpha^2 - \omega_o^2$ | $\sqrt{\alpha^2 - \omega_o^2}$ |
| --- | --- | --- |
| $\alpha>\omega_o$ | $+$ | positive real number |
| $\alpha=\omega_o$ | $0$ | vanishes |
| $\alpha<\omega_o$ | $-$ | imaginary number |

The roots $s$ will come out like this,

| relation | sign of $\alpha^2 - \omega_o^2$ | $s$ |
| --- | --- | --- |
| $\alpha>\omega_o$ | $+$ | 2 different real roots |
| $\alpha=\omega_o$ | $0$ | 2 repeated real roots |
| $\alpha<\omega_o$ | $-$ | 2 complex conjugate roots |

Looking farther ahead, the response $i(t)$ will come out like this,

| relation | sign of $\alpha^2 - \omega_o^2$ | $i(t)$ |
| --- | --- | --- |
| $\alpha>\omega_o$ | $+$ | sum of 2 decaying exponentials |
| $\alpha=\omega_o$ | $0$ | $t\,\cdot$ decaying exponential |
| $\alpha<\omega_o$ | $-$ | $\sin\,\cdot$ decaying exponential |

We have nicknames for the three variations,

| relation | sign of $\alpha^2 - \omega_o^2$ | nickname |
| --- | --- | --- |
| $\alpha>\omega_o$ | $+$ | over damped |
| $\alpha=\omega_o$ | $0$ | critically damped |
| $\alpha<\omega_o$ | $-$ | under damped |

Now you are ready to go to the following article, RLC natural response - variations, where we look at each outcome in detail.

## Summary

The $\text{RLC}$ circuit is modeled by this second-order linear differential equation,

$\text L \,\dfrac{d^2i}{dt^2} + \text R\,\dfrac{di}{dt} + \dfrac{1}{\text C}\,i = 0$

The characteristic equation is,

$s^2 + \dfrac{\text R}{\text L}s + \dfrac{1}{\text{LC}} = 0$

We solved for the roots of the characteristic equation with the quadratic formula,

$s=\dfrac{-\text R \pm\sqrt{\text R^2-4\text L/\text C}}{2\text L}$

We defined the variables $\alpha$ and $\omega_o$ as,

$\alpha = \dfrac{\text R}{2\text L}\quad$ and $\quad\omega_o = \dfrac{1}{\sqrt{\text{LC}}}$

which allows us to write the characteristic equation as,

$s^2 + 2\alpha s + \omega_o^2 = 0$

and $s$ as,

$s = -\alpha \pm\,\sqrt{\alpha^2 - \omega_o^2}$

The roots of the characteristic equation can be real or complex, depending on the relative size of $\alpha^2$ and $\omega_o^2$. Since two roots come out of the characteristic equation, we modified the proposed solution to be a superposition of two exponential terms,

$i = K_1 e^{s_1t} + K_2 e^{s_2t}$

The next article picks up at this point and completes the solution(s).
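To make the three cases concrete, here is a short sketch that computes $\alpha$, $\omega_o$, and the two roots $s_{1,2} = -\alpha \pm \sqrt{\alpha^2 - \omega_o^2}$, then names the damping regime. The component values are invented for tidy arithmetic, not realism, and `cmath.sqrt` keeps working when $\alpha^2 - \omega_o^2$ goes negative and the roots turn complex:

```python
import cmath

def natural_frequencies(L, R, C):
    """Damping factor, resonant frequency, and the two natural frequencies
    s = -alpha +/- sqrt(alpha^2 - omega0^2) of a series RLC circuit."""
    alpha = R / (2.0 * L)
    omega0 = 1.0 / (L * C) ** 0.5
    root = cmath.sqrt(alpha**2 - omega0**2)   # stays valid when negative
    return alpha, omega0, -alpha + root, -alpha - root

def damping(alpha, omega0):
    if alpha > omega0:
        return "over damped"
    if alpha == omega0:
        return "critically damped"
    return "under damped"

# Toy component values: L = 1, C = 1/4, so omega0 = 2 exactly;
# critical damping happens at R = 2 sqrt(L/C) = 4.
L, C = 1.0, 0.25
for R in (10.0, 4.0, 1.0):
    alpha, omega0, s1, s2 = natural_frequencies(L, R, C)
    for s in (s1, s2):
        # each root must satisfy the characteristic equation s^2 L + s R + 1/C = 0
        assert abs(s**2 * L + s * R + 1.0 / C) < 1e-9
    print(R, damping(alpha, omega0))   # over, critically, under damped in turn
```

The assertion inside the loop is the whole point of the article in one line: the natural frequencies are precisely the values of $s$ that zero the characteristic polynomial.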
http://www.zora.uzh.ch/53433/
# Filtrations

Coculescu, D; Nikeghbali, A (2010). Filtrations. In: Cont, R. Encyclopedia of quantitative finance. Chichester, UK: John Wiley & Sons, 683-686.

## Abstract

In this article, we define the notion of a filtration and the related notion of the usual hypotheses. We then explain the problem of enlargements of filtrations: how are (semi)martingales affected under a change of filtrations? We state the main theorems in the classical frameworks of initial and progressive enlargements of filtrations. In the case of initial enlargements of filtrations, we state the well known Jacod's condition and in the framework of progressive enlargements of filtrations, we give the decomposition of a local martingale in the larger filtration. We finally specialize to the case of immersed filtrations, which is very widely used in credit risk modeling, and study the effect of a combination of changes of filtration and probability measure in this situation.
https://mathstrek.blog/2020/04/13/commutative-algebra-26/
# Left-Exact Functors

We saw (in theorem 1 here) that the localization functor $M \mapsto S^{-1}M$ is exact, which gave us a whole slew of nice properties, including preservation of submodules, quotient modules, finite intersection/sum, etc. However, exactness is often too much to ask for.

Throughout this article, A and B are fixed rings and $F: A\text{-}\mathbf{Mod} \to B\text{-}\mathbf{Mod}$ is a covariant additive functor.

Definition. We say F is left-exact if it takes a short exact sequence of A-modules

$0 \longrightarrow N \stackrel f\longrightarrow M \stackrel g\longrightarrow P \longrightarrow 0$

to an exact sequence of B-modules

$0 \longrightarrow F(N) \stackrel {F(f)}\longrightarrow F(M) \stackrel {F(g)} \longrightarrow F(P).$

Immediately we can weaken the condition.

Lemma 1. F is left-exact if and only if it takes an exact sequence of A-modules

$0 \longrightarrow N \stackrel f\longrightarrow M \stackrel g\longrightarrow P$

to an exact sequence of B-modules

$0 \longrightarrow F(N) \stackrel {F(f)}\longrightarrow F(M) \stackrel {F(g)} \longrightarrow F(P).$

Proof

(⇐) is obvious. For (⇒), given an exact $0 \to N \stackrel f\to M \stackrel g\to P$, we extend it to an exact $0 \to N \to M \to P \to \mathrm{coker}\, g \to 0$. [Recall that $\mathrm{coker}\, g = P/\mathrm{im}\, g$ is the cokernel of g.] We can split this exact sequence into short exact ones:

\begin{aligned} 0 \to N \stackrel f\to M \to \mathrm{im}\, g \to 0 &\implies 0 \to F(N) \stackrel{F(f)}\to F(M) \to F(\mathrm{im}\, g)\\ 0 \to \mathrm{im}\, g \to P \to \mathrm{coker}\, g \to 0 &\implies 0 \to F(\mathrm{im}\, g) \to F(P) \to F(\mathrm{coker}\, g).\end{aligned}

Since $F(\mathrm{im}\, g) \to F(P)$ is injective, these splice to give an exact sequence $0\to F(N) \to F(M) \to F(P)$ as desired. ♦

Note

Left-exact functors are not as nice as exact ones, but we still get some useful results out of them. For example, if $f : N\to M$ is injective, so is $F(f) : F(N) \to F(M)$, so for a submodule $N\subseteq M$ we can consider $F(N)$ as a submodule of $F(M)$.
Also, for any $f:N\to M$, since $0\to \mathrm{ker}\, f \to N \to M$ is exact, applying F gives an exact $0 \to F(\mathrm{ker}\, f) \to F(N) \stackrel{F(f)}\to F(M)$ and thus $F(\mathrm{ker}\, f) = \mathrm{ker}\, F(f)$.

# Hom Functors Are Left-Exact

The main result we wish to show is

Proposition 1. For any A-module M, the functor $\mathrm{Hom}_A(M, -)$ is a left-exact functor.

Proof

Take the exact sequence $0 \to N_1 \stackrel f\to N_2 \stackrel g\to N_3$. We need to show that

$0 \longrightarrow \mathrm{Hom}_A(M, N_1) \stackrel {f_*}\longrightarrow \mathrm{Hom}_A(M, N_2) \stackrel{g_*}\longrightarrow \mathrm{Hom}_A(M, N_3)$

is exact. It is easy to show $f_*$ is injective (easy exercise). Next, we have

$g_*\circ f_* = (g\circ f)_* = 0 \implies \mathrm{im}\, f_* \subseteq \mathrm{ker}\, g_*.$

Conversely, suppose $h:M\to N_2$ satisfies $g_*(h) = g\circ h = 0$. Then $\mathrm{im}\, h \subseteq \mathrm{ker}\, g = \mathrm{im}\, f$. Since f is injective, it follows that for each $m\in M$ we have $h(m) = f(x)$ for a unique $x\in N_1$. This map $h' : M \to N_1, m\mapsto x$ is clearly A-linear, so $h = f\circ h' = f_*(h')$. ♦

Here is one application of this result.

Corollary 1. Let $a\in A$; for each A-module, let $M[a] = \{m \in M : am = 0\}$. Then

$0 \longrightarrow N \stackrel f \longrightarrow M \stackrel g \longrightarrow P \text{ exact } \implies 0 \longrightarrow N[a] \stackrel f\longrightarrow M[a] \stackrel g \longrightarrow P[a] \text{ exact.}$

Proof

Indeed the functor $M\mapsto M[a]$ is naturally isomorphic to $\mathrm{Hom}_A(A/(a), -)$. Now apply the above. ♦

Exercise A

Find a surjective $M\to P$ for which $M[a] \to P[a]$ is not surjective. This shows that the functor $\mathrm{Hom}_A(M, -)$ is not exact in general.

# Converse Statement

Next we have the following converse.

Proposition 2.
Suppose $f:N_1 \to N_2$ and $g:N_2\to N_3$ are A-linear maps such that for any A-module M,

$0\longrightarrow \mathrm{Hom}_A(M, N_1) \stackrel {f_*} \longrightarrow \mathrm{Hom}_A(M, N_2) \stackrel {g_*} \longrightarrow \mathrm{Hom}_A(M, N_3)$

is exact. Then $0 \to N_1 \stackrel f \to N_2 \stackrel g \to N_3$ is exact.

Proof

First we show f is injective. Let $M =\mathrm{ker}\, f$, with inclusion $\iota : M \hookrightarrow N_1$, so that $0 \to M \stackrel \iota\to N_1 \stackrel {f} \to N_2$ is exact. By left-exactness of Hom we have an exact

$0 \to \mathrm{Hom}_A(M, M) \stackrel{\iota_*}\to \mathrm{Hom}_A(M, N_1) \stackrel {f_*} \to \mathrm{Hom}_A(M, N_2)$

Since $f_*$ is injective, exactness at the middle term gives $\mathrm{im}\, \iota_* = \mathrm{ker}\, f_* = 0$; as $\iota_*$ is injective, $\mathrm{Hom}_A(M, M) = 0$ and in particular $1_M = 0\implies M=0$.

Next we show $g\circ f = 0$. Setting $M = N_1$ gives:

$0 = g_* f_* (\mathrm{Hom}_A(N_1, N_1)) = (g\circ f)_*(\mathrm{Hom}_A(N_1, N_1))$

so $g\circ f = (g\circ f)_* (1_{N_1}) = 0.$ This gives $\mathrm{im}\, f \subseteq \mathrm{ker}\, g$.

Finally we set $M = \mathrm{ker}\, g$. The following sequence:

$0\longrightarrow \mathrm{Hom}_A(M, N_1) \stackrel {f_*} \longrightarrow \mathrm{Hom}_A(M, N_2) \stackrel {g_*} \longrightarrow \mathrm{Hom}_A(M, N_3)$

is exact. The inclusion $i : M\hookrightarrow N_2$ gives $g_*(i) = g\circ i = 0$ and so $i = f_*(j) = f\circ j$ for some $j : M \to N_1$. Then $M = \mathrm{im}\, i \subseteq \mathrm{im}\, f$. ♦

# Another Left-Exactness

Now suppose $F: A\text{-}\mathbf{Mod} \to B\text{-}\mathbf{Mod}$ is a contravariant additive functor.

Definition. We say F is left-exact if it takes a short exact sequence of A-modules

$0 \longrightarrow N \stackrel f\longrightarrow M \stackrel g\longrightarrow P \longrightarrow 0$

to an exact sequence of B-modules

$0 \longrightarrow F(P) \stackrel {F(g)}\longrightarrow F(M) \stackrel {F(f)} \longrightarrow F(N).$

Again we have:

Lemma 2.
F is left-exact if and only if it takes an exact sequence of A-modules

$N \stackrel f\longrightarrow M \stackrel g\longrightarrow P \longrightarrow 0$

to an exact sequence of B-modules

$0 \longrightarrow F(P) \stackrel {F(g)}\longrightarrow F(M) \stackrel {F(f)} \longrightarrow F(N).$

Proof

Exercise. ♦

Now we show, as before:

Proposition 3. For any A-module M, the functor $\mathrm{Hom}_A(-, M)$ is a left-exact functor.

Proof

Take the exact sequence $N_1 \stackrel f\to N_2 \stackrel g\to N_3 \to 0$. We need to show that

$0 \longrightarrow \mathrm{Hom}_A(N_3, M) \stackrel {g^*}\longrightarrow \mathrm{Hom}_A(N_2, M) \stackrel{f^*}\longrightarrow \mathrm{Hom}_A(N_1, M)$

is exact for any A-module M. Injectivity of $g^*$ is obvious. Also $f^* \circ g^* = (g\circ f)^* = 0$, so $\mathrm{im}\, g^* \subseteq \mathrm{ker}\, f^*$.

It remains to show $\mathrm{ker}\, f^* \subseteq \mathrm{im}\, g^*$. Pick $h: N_2 \to M$ such that $f^*(h) = h\circ f = 0$, so that $\mathrm{ker}\, g = \mathrm{im}\, f \subseteq \mathrm{ker}\, h$. Hence h factors through $N_2 / \mathrm{ker}\, g \to M$. Since $N_2 / \mathrm{ker}\, g \cong N_3$, we have $h = h'\circ g = g^*(h')$ for some $h' : N_3 \to M$. ♦

# Another Converse Statement

Finally, the reader should expect the following.

Proposition 4. Suppose $f:N_1 \to N_2$ and $g:N_2\to N_3$ are A-linear maps such that for any A-module M,

$0\longrightarrow \mathrm{Hom}_A(N_3, M) \stackrel {g^*} \longrightarrow \mathrm{Hom}_A(N_2, M) \stackrel {f^*} \longrightarrow \mathrm{Hom}_A(N_1, M)$

is exact. Then $N_1 \stackrel f \to N_2 \stackrel g \to N_3 \to 0$ is exact.

Proof

We leave it as an exercise to show: g is surjective, $g\circ f = 0$.
It remains to show $\mathrm{ker}\, g \subseteq \mathrm{im}\, f$; let $M = \mathrm{coker}\, f$ and take the exact sequence

$0\longrightarrow \mathrm{Hom}_A(N_3, M) \stackrel {g^*} \longrightarrow \mathrm{Hom}_A(N_2, M) \stackrel {f^*} \longrightarrow \mathrm{Hom}_A(N_1, M)$

If $\pi : N_2 \to M$ is the canonical map, then $f^*(\pi) = \pi\circ f = 0$, so we have $\pi = g^*(j) = j\circ g$ for some $j : N_3 \to M$. Thus $\mathrm{ker}\, g \subseteq \mathrm{ker}\, \pi = \mathrm{im}\, f$. ♦

As an application, let us prove the following.

Corollary 2. Let $\mathfrak a \subseteq A$ be an ideal. If $N \to M \to P \to 0$ is an exact sequence of A-modules, then we get an exact sequence of $(A/\mathfrak a)$-modules:

$N/\mathfrak a N \longrightarrow M/\mathfrak a M \longrightarrow P/\mathfrak a P \longrightarrow 0.$

Proof

Let $B = A/\mathfrak a$. By Proposition 4, it suffices to show that for any B-module Q, the sequence

$0 \longrightarrow \mathrm{Hom}_B(P/\mathfrak a P, Q) \longrightarrow \mathrm{Hom}_B(M/\mathfrak a M, Q) \longrightarrow \mathrm{Hom}_B(N/\mathfrak a N, Q)$

is exact. But since $M\mapsto M/\mathfrak a M$ gives the B-module induced from M, the above sequence is naturally isomorphic to:

$0 \longrightarrow \mathrm{Hom}_A(P, Q) \longrightarrow \mathrm{Hom}_A (M, Q) \longrightarrow \mathrm{Hom}_A(N, Q)$

which we know is exact because the Hom functor is left-exact. ♦

This leads to the following definition.

Definition. Suppose $F: A\text{-}\mathbf{Mod} \to B\text{-}\mathbf{Mod}$ is a covariant additive functor.
We say F is right-exact if it takes a short exact sequence of A-modules

$0 \longrightarrow N \stackrel f\longrightarrow M \stackrel g\longrightarrow P \longrightarrow 0$

to an exact sequence of B-modules

$F(N) \stackrel {F(f)}\longrightarrow F(M) \stackrel {F(g)} \longrightarrow F(P) \longrightarrow 0.$

Exercise B

Prove that F is right-exact if and only if it takes an exact sequence of A-modules $N \to M \to P \to 0$ to an exact sequence of B-modules $F(N) \to F(M) \to F(P) \to 0.$

Thus, we have shown that the functor

$A\text{-}\mathbf{Mod} \longrightarrow (A/\mathfrak a)\text{-}\mathbf{Mod}, \quad M \mapsto M/\mathfrak aM$

is right-exact. This will be generalized in future chapters.

Exercise C

- Prove that the functor $M\mapsto M/\mathfrak a M$ is not exact in general.
- Use a direct proof to show $M\mapsto M/\mathfrak a M$ is right-exact.
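Since $\mathbb{Z}$-modules are just abelian groups, the failure of exactness on both sides can be watched directly on a small finite example. The sketch below is my own toy computation, not part of the post: it takes $A = \mathbb{Z}$, $a = 2$, and the short exact sequence $0 \to \mathbb{Z}/2 \xrightarrow{\times 2} \mathbb{Z}/4 \to \mathbb{Z}/2 \to 0$, and may be read as a hint towards Exercises A and C.

```python
# Toy check over A = Z with a = 2, using the short exact sequence
#   0 -> Z/2 --f--> Z/4 --g--> Z/2 -> 0,  f(x) = 2x mod 4,  g(y) = y mod 2.
Z2 = [0, 1]
Z4 = [0, 1, 2, 3]
f = lambda x: (2 * x) % 4
g = lambda y: y % 2

# The torsion functor M |-> M[2] is left-exact but not exact:
tors_Z4 = [y for y in Z4 if (2 * y) % 4 == 0]   # (Z/4)[2] = {0, 2}
image = {g(y) for y in tors_Z4}
print(image)   # {0}: g is onto, but its restriction to 2-torsion is not

# The quotient functor M |-> M/2M is right-exact but not exact:
two_Z4 = {(2 * y) % 4 for y in Z4}              # 2(Z/4) = {0, 2}
killed = [x for x in Z2 if f(x) in two_Z4]      # kernel of the induced map
print(killed)  # [0, 1]: the induced map (Z/2)/2(Z/2) -> (Z/4)/2(Z/4) is zero
```

So the same short exact sequence witnesses that neither functor preserves exactness on the side the theory does not promise.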
http://mathhelpforum.com/advanced-algebra/61389-prove.html
1. ## prove

Prove that there is no homomorphism from Z_8 (+) Z_2 onto Z_4 (+) Z_4

[(+) is the external direct product]

2. Originally Posted by mandy123

Prove that there is no homomorphism from Z_8 (+) Z_2 onto Z_4 (+) Z_4

[(+) is the external direct product]

Hint: Consider $f(1,1)$. Try to arrive at a contradiction.
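The groups here are small enough to verify the claim exhaustively. A homomorphism out of Z_8 (+) Z_2 is pinned down by the generator images a = f(1,0) and b = f(0,1), where b must be 2-torsion; the brute-force sketch below (my own addition, not from the thread) checks that no choice of (a, b) yields an onto map:

```python
from itertools import product

G = list(product(range(4), range(4)))   # the 16 elements of Z_4 (+) Z_4

def add(x, y):
    return ((x[0] + y[0]) % 4, (x[1] + y[1]) % 4)

def scale(k, x):
    return ((k * x[0]) % 4, (k * x[1]) % 4)

# f is determined by a = f(1,0) and b = f(0,1); well-definedness forces
# 8a = 0 (automatic in Z_4 (+) Z_4) and 2b = 0.
two_torsion = [x for x in G if scale(2, x) == (0, 0)]

largest = 0
for a in G:
    for b in two_torsion:
        image = {add(scale(m, a), scale(n, b))
                 for m in range(8) for n in range(2)}
        largest = max(largest, len(image))

print(largest)  # prints 8: no image ever has 16 elements, so f is never onto
```

Every image has at most 8 elements, matching the hint's contradiction: a surjection between two groups of order 16 would be a bijection, yet Z_4 (+) Z_4 has no element of order 8 to receive (1,1).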
https://math.stackexchange.com/questions/4085092/notational-differences-for-motives
# Notational differences for motives?

I have recently been learning about pure motives, and I have begun to learn about mixed motives. In this paper, motives are written as a triple $$h(X,e,n)$$ where $$e$$ is an idempotent endomorphism and $$n$$ is the Tate twist. However, in this paper they seem to be written as $$h(X)(n)[m]$$. I know that the $$(n)$$ is a Tate twist, but what is the $$[m]$$?

• The second paper is about the derived category of mixed motives (the abelian category of mixed motives is still conjectural, so this is simply a triangulated category which looks like the derived category of mixed motives). The $[m]$ denotes the usual shift in a triangulated category. Apr 1 at 7:08
http://www.iga.adelaide.edu.au/dgseminar/dgwg_2016.html
## Differential Geometry Seminars 2016

### School of Mathematical Sciences – The University of Adelaide

Stay tuned for the Differential Geometry Seminar 2017.

#### ~ Past talks ~

• Roozbeh Hazrat (Western Sydney University)
Leavitt path algebras
Friday, 2 December 2016 at 12:10pm in Engineering & Math EM213

From a directed graph one can generate an algebra which captures the movements along the graph. One such algebra is the Leavitt path algebra. Despite being introduced only 10 years ago, Leavitt path algebras have arisen in a variety of different contexts as diverse as analysis, symbolic dynamics, noncommutative geometry and representation theory. In fact, Leavitt path algebras are the algebraic counterpart to graph C*-algebras, a theory which has become an area of intensive research globally. There are strikingly parallel similarities between these two theories. Even more surprisingly, one cannot (yet) obtain the results in one theory as a consequence of the other; the statements look the same, however the techniques to prove them are quite different (as the names suggest, one uses Algebra and the other Analysis). These all suggest that there might be a bridge between Algebra and Analysis yet to be uncovered. In this talk, we introduce Leavitt path algebras and try to classify them by means of (graded) Grothendieck groups. We will ask nice questions!

• Abdelghani Zeghib (École Normale Supérieure de Lyon)
Introduction to Lorentz Geometry: Riemann vs Lorentz
Friday, 18 November 2016 at 12:10pm in Engineering North N132

The goal is to compare Riemannian and Lorentzian geometries and see what one loses and wins when going from Riemann to Lorentz. Essentially, one loses compactness and ellipticity, but wins causality structure and mathematical and physical situations when natural Lorentzian metrics emerge.
• Emma Carberry (University of Sydney)
Toroidal Soap Bubbles: Constant Mean Curvature Tori in $$\mathbb{S}^3$$ and $$\mathbb{R}^3$$
Friday, 28 October 2016 at 12:10pm in Ingkarni Wardli B18

Constant mean curvature (CMC) tori in $$\mathbb{S}^3$$, $$\mathbb{R}^3$$ or $$\mathbb{H}^3$$ are in bijective correspondence with spectral curve data, consisting of a hyperelliptic curve, a line bundle on this curve and some additional data, which in particular determines the relevant space form. This point of view is particularly relevant for considering moduli-space questions, such as the prevalence of tori amongst CMC planes and whether tori can be deformed. I will address these questions for the spherical and Euclidean cases, using Whitham deformations.

• David Baraglia (University of Adelaide)
Parahoric bundles, invariant theory and the Kazhdan-Lusztig map
Friday, 21 October 2016 at 12:10pm in Ingkarni Wardli B18

In this talk I will introduce the notion of parahoric groups, a loop group analogue of parabolic subgroups. I will also discuss a global version of this, namely parahoric bundles on a complex curve. This leads us to a problem concerning the behaviour of invariant polynomials on the dual of the Lie algebra, a kind of "parahoric invariant theory". The key to solving this problem turns out to be the Kazhdan-Lusztig map, which assigns to each nilpotent orbit in a semisimple Lie algebra a conjugacy class in the Weyl group. Based on joint work with Masoud Kamgarpour and Rohith Varma.

• Hang Wang (University of Adelaide)
Character Formula for Discrete Series
Friday, 14 October 2016 at 12:10pm in Ingkarni Wardli B18

The Weyl character formula describes characters of irreducible representations of compact Lie groups. This formula can be obtained using geometric methods, for example, from the Atiyah-Bott fixed point theorem or the Atiyah-Segal-Singer index theorem.
The Harish-Chandra character formula, the noncompact analogue of the Weyl character formula, can also be studied from the point of view of index theory. We apply orbital integrals on K-theory of the Harish-Chandra Schwartz algebra of a semisimple Lie group $$G$$, and then use geometric methods to deduce Harish-Chandra character formulas for discrete series representations of $$G$$. This is work in progress with Peter Hochs.

• Yann Bernard (Monash University)
Energy quantisation for the Willmore functional
Friday, 7 October 2016 at 11:10pm in Ligertwood 314 Flinders Room

We prove a bubble-neck decomposition and an energy quantisation result for sequences of Willmore surfaces immersed into $$\mathbb{R}^{m\ge3}$$ with uniformly bounded energy and non-degenerating conformal structure. We deduce the strong compactness (modulo the action of the Moebius group) of closed Willmore surfaces of a given genus below some energy threshold. This is joint work with Tristan Rivière (ETH Zürich).

• Ugo Bruzzo (International School for Advanced Studies, Trieste)
Hilbert schemes of points of some surfaces and quiver representations
Friday, 23 September 2016 at 12:10pm in Ingkarni Wardli B17

Hilbert schemes of points on the total spaces of the line bundles O(-n) on P1 (desingularizations of toric singularities of type (1/n)(1,1)) can be given an ADHM description, and as a result, they can be realized as varieties of quiver representations.

• Mathai Varghese (University of Adelaide)
Geometry of pseudodifferential algebra bundles
Friday, 16 September 2016 at 12:10pm in Ingkarni Wardli B18

I will motivate the construction of pseudodifferential algebra bundles arising in index theory, and also outline the construction of general pseudodifferential algebra bundles (and the associated sphere bundles), showing that there are many that are purely infinite dimensional that do not come from usual constructions in index theory. I will also discuss characteristic classes of such bundles.
This is joint work with Richard Melrose.

• Guo Chuan Thiang (University of Adelaide)
Singular vector bundles and topological semi-metals
Friday, 2 September 2016 at 12:10pm in Ingkarni Wardli B18

The elusive Weyl fermion was recently realised as quasiparticle excitations of a topological semimetal. I will explain what a semi-metal is, and the precise mathematical sense in which they can be "topological", in the sense of the general theory of topological insulators. This involves understanding vector bundles with singularities, with the aid of Mayer-Vietoris principles, gerbes, and generalised degree theory.

• Lesley Ward (University of South Australia)
Product Hardy spaces associated to operators with heat kernel bounds on spaces of homogeneous type
Friday, 19 August 2016 at 12:10pm in Ingkarni Wardli B18

Much effort has been devoted to generalizing the Calderón-Zygmund theory in harmonic analysis from Euclidean spaces to metric measure spaces, or spaces of homogeneous type. Here the underlying space $$\mathbb{R}^n$$ with Euclidean metric and Lebesgue measure is replaced by a set $$X$$ with a general metric or quasi-metric and a doubling measure. Further, one can replace the Laplacian operator that underpins the Calderón-Zygmund theory by more general operators $$L$$ satisfying heat kernel estimates. I will present recent joint work with P. Chen, X.T. Duong, J. Li and L.X. Yan along these lines. We develop the theory of product Hardy spaces $$H^p_{L_1,L_2}(X_1 \times X_2)$$, for $$1 \leq p < \infty$$, defined on products of spaces of homogeneous type, and associated to operators $$L_1$$, $$L_2$$ satisfying Davies-Gaffney estimates. This theory includes definitions of Hardy spaces via appropriate square functions, an atomic Hardy space, a Calderón-Zygmund decomposition, interpolation theorems, and the boundedness of a class of Marcinkiewicz-type spectral multiplier operators.
• Mike Eastwood (University of Adelaide)
Calculus on symplectic manifolds
Friday, 12 August 2016 at 12:10pm in Ingkarni Wardli B18

One can use the symplectic form to construct an elliptic complex replacing the de Rham complex. Then, under suitable curvature conditions, one can form coupled versions of this complex. Finally, on complex projective space, these constructions give rise to a series of elliptic complexes with geometric consequences for the Fubini-Study metric and its X-ray transform. This talk, which will start from scratch, is based on the work of many authors but, especially, current joint work with Jan Slovak.

• Tuyen Truong (University of Adelaide)
Étale ideas in topological and algebraic dynamical systems
Friday, 12 August 2016 at 12:10pm in Ingkarni Wardli B18

In étale topology, instead of considering open subsets of a space, we consider étale neighbourhoods lying over these open subsets. In this talk, I define an étale analog of dynamical systems: to understand a dynamical system $$f:(X,\Omega )\to(X,\Omega )$$, we consider other dynamical systems lying over it. I then propose to use this to resolve the following two questions:

Question 1: What should be the topological entropy of a dynamical system $$(f,X,\Omega )$$ when $$(X,\Omega )$$ is not a compact space?

Question 2: What is the relation between the topological entropy of a rational map or correspondence (over a field of arbitrary characteristic) and the pullback on cohomology groups and algebraic cycles?

• David Bowman (University of Adelaide)
Holomorphic Flexibility Properties of Spaces of Elliptic Functions
Friday, 29 July 2016 at 12:10pm in Ingkarni Wardli B18

The set of meromorphic functions on an elliptic curve naturally possesses the structure of a complex manifold. The component of degree 3 functions is 6-dimensional and enjoys several interesting complex-analytic properties that make it, loosely speaking, the opposite of a hyperbolic manifold.
Our main result is that this component has a 54-sheeted branched covering space that is an Oka manifold. • Elizabeth Gillaspy (University of Colorado, Boulder) Twists over etale groupoids and twisted vector bundles Tuesday, 22 July 2016 at 12:10pm in Ingkarni Wardli B18 Given a twist over an etale groupoid, one can construct an associated C*-algebra which carries a good deal of geometric and physical meaning; for example, the K-theory group of this C*-algebra classifies D-brane charges in string theory. Twisted vector bundles, when they exist, give rise to particularly important elements in this K-theory group. In this talk, we will explain how to use the classifying space of the etale groupoid to construct twisted vector bundles, under some mild hypotheses on the twist and the classifying space. My hope is that this talk will be accessible to a broad audience; in particular, no prior familiarity with groupoids, their twists, or the associated C*-algebras will be assumed. This is joint work with Carla Farsi. • Ryan Mickler (Northeastern University) Chern-Simons invariants of Seifert manifolds via Loop spaces Tuesday, 28 June 2016 at 2:10pm in Ingkarni Wardli B17 Over the past 30 years the Chern-Simons functional for connections on G-bundles over three-manifolds has led to a deep understanding of the geometry of three-manifolds, as well as knot invariants such as the Jones polynomial. Here we study this functional for three-manifolds that are topologically given as the total space of a principal circle bundle over a compact Riemann surface base, which are known as Seifert manifolds. We show that on such manifolds the Chern-Simons functional reduces to a particular gauge-theoretic functional on the 2d base, that describes a gauge theory of connections on an infinite dimensional bundle over this base with structure group given by the level-k affine central extension of the loop group LG.
We show that this formulation gives a new understanding of results of Beasley-Witten on the computability of quantum Chern-Simons invariants of these manifolds as well as knot invariants for knots that wrap a single fiber of the circle bundle. A central tool in our analysis is the Caloron correspondence of Murray-Stevenson-Vozzo. • Steve Rosenberg (The University of Adelaide / Boston University) Algebraic structures associated to Brownian motion on Lie groups Thursday, 16 June 2016 at 1:10pm in Ingkarni Wardli B17 In (1+1)-d TQFT, products and coproducts are associated to pairs of pants decompositions of Riemann surfaces. We consider a toy model in dimension (0+1) consisting of specific broken paths in a Lie group. The products and coproducts are constructed by a Brownian motion average of holonomy along these paths with respect to a connection on an auxiliary bundle. In the trivial case over the torus, we (seem to) recover the Hopf algebra structure on the symmetric algebra. In the general case, we (seem to) get deformations of this Hopf algebra. This is a preliminary report on joint work with Michael Murray and Raymond Vozzo. • Yoshiyasu Fukumoto (Kyoto University) On the Strong Novikov Conjecture for Locally Compact Groups in Low Degree Cohomology Classes Friday, 3 June 2016 at 12:10pm in Eng & Maths EM205 The main result I will discuss is non-vanishing of the image of the index map from the $$G$$-equivariant K-homology of a $$G$$-manifold $$X$$ to the K-theory of the C*-algebra of the group $$G$$. The action of $$G$$ on $$X$$ is assumed to be proper and cocompact. Under the assumption that the Kronecker pairing of a K-homology class with a low-dimensional cohomology class is non-zero, we prove that the image of this class under the index map is non-zero. Neither discreteness of the locally compact group $$G$$ nor freeness of the action of $$G$$ on $$X$$ are required. The case of free actions of discrete groups was considered earlier by B. Hanke and T. Schick. 
• Valentina Wheeler (University of Wollongong) Some free boundary value problems in mean curvature flow and fully nonlinear curvature flows Friday, 27 May 2016 at 12:10pm in Eng & Maths EM205 In this talk we present an overview of the current research in mean curvature flow and fully nonlinear curvature flows with free boundaries, with particular focus on our own results. Firstly we consider the scenario of a mean curvature flow solution with a ninety-degree angle condition on a fixed hypersurface in Euclidean space, that we call the contact hypersurface. We prove that under restrictions on either the initial hypersurface (such as rotational symmetry) or restrictions on the contact hypersurface the flow exists for all times and converges to a self-similar solution. We also discuss the possibility of a curvature singularity appearing on the free boundary contained in the contact hypersurface. We extend some of these results to the setting of a hypersurface evolving in its normal direction with speed given by a fully nonlinear functional of the principal curvatures. • David Roberts (University of Adelaide) Smooth mapping orbifolds Friday, 20 May 2016 at 12:10pm in Eng & Maths EM205 It is well-known that orbifolds can be represented by a special kind of Lie groupoid, namely those that are étale and proper. Lie groupoids themselves are one way of presenting certain nice differentiable stacks. In joint work with Ray Vozzo we have constructed a presentation of the mapping stack $$\mathrm{Hom}(\mathrm{disc}(M),X)$$, for $$M$$ a compact manifold and $$X$$ a differentiable stack, by a Fréchet-Lie groupoid. This uses an apparently new result in global analysis about the map $$C^\infty(K_1,Y) \to C^\infty(K_2,Y)$$ induced by restriction along the inclusion $$K_2 \to K_1$$, for certain compact $$K_1$$, $$K_2$$. We apply this to the case of $$X$$ being an orbifold to show that the mapping stack is an infinite-dimensional orbifold groupoid. 
We also present results about mapping groupoids for bundle gerbes. • Pierre Portal (Australian National University) Harmonic analysis of Hodge-Dirac operators Friday, 13 May 2016 at 12:10pm in Eng & Maths EM205 When the metric on a Riemannian manifold is perturbed in a rough (merely bounded and measurable) manner, do basic estimates involving the Hodge Dirac operator $$D = d+d^*$$ remain valid? Even in the model case of a perturbation of the euclidean metric on $$\mathbb{R}^n$$, this is a difficult question. For instance, the fact that the $$L^2$$ estimate $$\|Du\|_2 \sim \|\sqrt{D^{2}}u\|_2$$ remains valid for perturbed versions of $$D$$ was a famous conjecture made by Kato in 1961 and solved, positively, in a ground breaking paper of Auscher, Hofmann, Lacey, McIntosh and Tchamitchian in 2002. In the past fifteen years, a theory has emerged from the solution of this conjecture, making rough perturbation problems much more tractable. In this talk, I will give a general introduction to this theory, and present one of its latest results: a flexible approach to $$L^p$$ estimates for the holomorphic functional calculus of $$D$$. This is joint work with D. Frey (Delft) and A. McIntosh (ANU). • David Baraglia (The University of Adelaide) How to count Betti numbers Friday, 6 May 2016 at 12:10pm in Eng & Maths EM205 I will begin this talk by showing how to obtain the Betti numbers of certain smooth complex projective varieties by counting points over a finite field. For singular or non-compact varieties this motivates us to consider the "virtual Hodge numbers" encoded by the "Hodge-Deligne polynomial", a refinement of the topological Euler characteristic. I will then discuss the computation of Hodge-Deligne polynomials for certain singular character varieties (i.e. moduli spaces of flat connections). 
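The point-counting theme in Baraglia's abstract can be seen in the simplest example: projective n-space over the field with q elements has 1 + q + ... + q^n points, and the coefficients of this polynomial are exactly the even Betti numbers of complex projective n-space (the odd ones vanish). The brute-force sketch below is my own toy illustration, restricted to prime fields; it counts projective points by normalising nonzero vectors so that the first nonzero coordinate is 1:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ProjectiveCount {
    // Count points of P^n over F_p (p prime) by enumerating nonzero vectors in
    // F_p^{n+1} and normalising so the first nonzero coordinate equals 1.
    static int countPn(int p, int n) {
        Set<String> points = new HashSet<>();
        int dim = n + 1;
        int total = 1;
        for (int i = 0; i < dim; i++) total *= p;        // p^(n+1) vectors
        for (int code = 1; code < total; code++) {       // skip the zero vector
            int[] v = new int[dim];
            for (int i = 0, c = code; i < dim; i++, c /= p) v[i] = c % p;
            int j = 0;
            while (v[j] == 0) j++;                       // first nonzero coordinate
            int inv = modInverse(v[j], p);
            for (int i = 0; i < dim; i++) v[i] = (v[i] * inv) % p;
            points.add(Arrays.toString(v));              // canonical representative
        }
        return points.size();
    }

    static int modInverse(int a, int p) {                // p prime, 0 < a < p
        for (int x = 1; x < p; x++) if ((a * x) % p == 1) return x;
        throw new IllegalArgumentException("not invertible");
    }

    public static void main(String[] args) {
        for (int p : new int[]{2, 3, 5}) {
            for (int n = 1; n <= 2; n++) {
                int betti = 0;
                for (int i = 0, q = 1; i <= n; i++, q *= p) betti += q;
                System.out.println("#P^" + n + "(F_" + p + ") = " + countPn(p, n)
                        + ", 1 + q + ... + q^n = " + betti);
            }
        }
    }
}
```

Setting q = 1 in 1 + q + ... + q^n recovers the Euler characteristic n + 1 of complex projective n-space, the simplest instance of the dictionary between point counts and Betti numbers.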
• Alessandro Ottazzi (University of New South Wales) Sard Theorem for the endpoint map in sub-Riemannian manifolds Friday, 29 April 2016 at 12:10pm in Eng & Maths EM205 Sub-Riemannian geometries occur in several areas of pure and applied mathematics, including harmonic analysis, PDEs, control theory, metric geometry, geometric group theory, and neurobiology. We introduce sub-Riemannian manifolds and give some examples. We then discuss some of the open problems, and in particular we focus on the Sard Theorem for the endpoint map, which is related to the study of length minimizers. Finally, we consider some recent results obtained in collaboration with E. Le Donne, R. Montgomery, P. Pansu and D. Vittone. • Mathai Varghese (The University of Adelaide) Geometric analysis of gap-labelling Friday, 8 April 2016 at 12:10pm in Eng & Maths EM205 Using an earlier result, joint with Quillen, I will formulate a gap labelling conjecture for magnetic Schrodinger operators with smooth aperiodic potentials on Euclidean space. Results in low dimensions will be given, and the formulation of the same problem for certain non-Euclidean spaces will be given if time permits. This is ongoing joint work with Moulay Benameur. • Tuyen Truong (The University of Adelaide) Counting periodic points of plane Cremona maps Friday, 1 April 2016 at 12:10pm in Eng & Maths EM205 In this talk, I will present recent results, joint with Tien-Cuong Dinh and Viet-Anh Nguyen, on counting periodic points of plane Cremona maps (i.e. birational maps of $$\mathbb{P}^2$$).
The tools used include a Lefschetz fixed point formula of Saito, Iwasaki and Uehara for birational maps of surfaces whose fixed point set may contain curves; a bound on the arithmetic genus of curves of periodic points by Diller, Jackson and Sommese; a result by Diller, Dujardin and Guedj on invariant (1,1) currents of meromorphic maps of compact Kahler surfaces; and a theory developed recently by Dinh and Sibony for non-proper intersections of varieties. Among new results in the paper, we give a complete characterisation of when two positive closed (1,1) currents on a compact Kahler surface behave nicely in the view of Dinh and Sibony's theory, even if their wedge intersection may not be well-defined with respect to the classical pluripotential theory. If time allows, I will present some generalisations to meromorphic maps (including an upper bound for the number of isolated periodic points which is sometimes overlooked in the literature) and open questions. • Andy Hammerlindl (Monash University) Expanding maps Friday, 18 March 2016 at 12:10pm in Eng & Maths EM205 Consider a function from the circle to itself such that the derivative is greater than one at every point. Examples are maps of the form $$f(x) = mx$$ for integers $$m > 1$$. In some sense, these are the only possible examples. This fact and the corresponding question for maps on higher dimensional manifolds was a major motivation for Gromov to develop pioneering results in the field of geometric group theory. In this talk, I'll give an overview of this and other results relating dynamical systems to the geometry of the manifolds on which they act and (time permitting) talk about my own work in the area. • Finnur Larusson (The University of Adelaide) The parametric h-principle for minimal surfaces in $$\mathbb{R}^n$$ and null curves in $$\mathbb{C}^n$$ Friday, 11 March 2016 at 12:10pm in Ingkarni Wardli B17 I will describe new joint work with Franc Forstneric (arXiv:1602.01529).
This work brings together four diverse topics from differential geometry, holomorphic geometry, and topology; namely the theory of minimal surfaces, Oka theory, convex integration theory, and the theory of absolute neighborhood retracts. Our goal is to determine the rough shape of several infinite-dimensional spaces of maps of geometric interest. It turns out that they all have the same rough shape. • Jonathan Rosenberg (University of Maryland) T-duality for elliptic curve orientifolds Friday, 4 March 2016 at 12:10pm in Ingkarni Wardli B17 Orientifold string theories are quantum field theories based on the geometry of a space with an involution. T-dualities are certain relationships between such theories that look different on the surface but give rise to the same observable physics. In this talk I will not assume any knowledge of physics but will concentrate on the associated geometry, in the case where the underlying space is a (complex) elliptic curve and the involution is either holomorphic or anti-holomorphic. The results blend algebraic topology and algebraic geometry. This is mostly joint work with Chuck Doran and Stefan Mendez-Diez. A fixed point theorem on noncompact manifolds Friday, 12 February 2016 at 12:10pm in Ingkarni Wardli B21 For an elliptic operator on a compact manifold acted on by a compact Lie group, the Atiyah-Segal-Singer fixed point formula expresses its equivariant index in terms of data on fixed point sets of group elements. This can for example be used to prove Weyl's character formula. We extend the definition of the equivariant index to noncompact manifolds, and prove a generalisation of the Atiyah-Segal-Singer formula, for group elements with compact fixed point sets. In one example, this leads to a relation with characters of discrete series representations of semisimple Lie groups. (This is joint work with Hang Wang.) 
• Franc Forstneric (University of Ljubljana) A long $$\mathbb{C}^2$$ without holomorphic functions Friday, 29 January 2016 at 12:10pm in Engineering North N132 For every integer $$n>1$$ we construct a complex manifold of dimension n which is exhausted by an increasing sequence of biholomorphic images of $$\mathbb{C}^n$$ (i.e., a long $$\mathbb{C}^n$$), but it does not admit any nonconstant holomorphic functions. We also introduce new biholomorphic invariants of a complex manifold, the stable core and the strongly stable core, and we prove that every compact strongly pseudoconvex and polynomially convex domain $$B$$ in $$\mathbb{C}^n$$ is the strongly stable core of a long $$\mathbb{C}^n$$; in particular, non-equivalent domains give rise to non-equivalent long $$\mathbb{C}^n$$'s. Thus, for any $$n>1$$ there exist uncountably many pairwise non-equivalent long $$\mathbb{C}^n$$'s. These results answer several long-standing open questions. (Joint work with Luka Boc Thaler.) • Siye Wu (National Tsing Hua University) Quantisation of Hitchin's moduli space Friday, 22 January 2016 at 12:10pm in Engineering North N132 In this talk, I construct prequantum line bundles on Hitchin's moduli spaces of orientable and non-orientable surfaces and study the geometric quantisation and quantisation via branes by complexification of the moduli spaces. • Frank Kutzschebauch (University of Bern) A fibered density property and the automorphism group of the spectral ball Friday, 15 January 2016 at 12:10pm in Engineering North N132 The spectral ball is defined as the set of complex n by n matrices whose eigenvalues are all less than 1 in absolute value. Its group of holomorphic automorphisms has been studied over many decades in several papers and a precise conjecture about its structure has been formulated. In dimension 2 this conjecture was recently disproved by Kosinski.
We not only disprove the conjecture in all dimensions but also give the best possible description of the automorphism group. Namely we explain how the invariant theoretic quotient map divides the automorphism group of the spectral ball into a finite dimensional part of symmetries which lift from the quotient and an infinite dimensional part which leaves the fibration invariant. We prove a precise statement as to how hopelessly huge this latter part is. This is joint work with R. Andrist. • Gerald Schwarz (Brandeis University) Oka principles and the linearization problem Friday, 8 January 2016 at 12:10pm in Engineering North N132 Let $$G$$ be a reductive complex Lie group (e.g., $$\mathrm{SL}(n,\mathbb{C})$$) and let $$X$$ and $$Y$$ be Stein manifolds (closed complex submanifolds of some $$\mathbb{C}^n$$). Suppose that $$G$$ acts freely on $$X$$ and $$Y$$. Then there are quotient Stein manifolds $$X/G$$ and $$Y/G$$ and quotient mappings $$p_X:X\to X/G$$ and $$p_Y: Y\to Y/G$$ such that $$X$$ and $$Y$$ are principal $$G$$-bundles over $$X/G$$ and $$Y/G$$. Let us suppose that $$Q=X/G \cong Y/G$$ so that $$X$$ and $$Y$$ have the same quotient $$Q$$. A map $$\Phi: X\to Y$$ of principal bundles (over $$Q$$) is simply an equivariant continuous map commuting with the projections. That is, $$\Phi(gx)=g \Phi(x)$$ for all $$g$$ in $$G$$ and $$x$$ in $$X$$, and $$p_X=p_Y\circ \Phi$$. The famous Oka Principle of Grauert says that any $$\Phi$$ as above embeds in a continuous family $$\Phi_t: X \to Y$$, $$t \in [0,1]$$, where $$\Phi_0=\Phi$$, all the $$\Phi_t$$ satisfy the same conditions as $$\Phi$$ does and $$\Phi_1$$ is holomorphic. This is rather amazing. We consider the case where $$G$$ does not necessarily act freely on $$X$$ and $$Y$$.
There is still a notion of quotient and quotient mappings $$p_X: X\to X/\hspace{-1mm}/ G$$ and $$p_Y: Y\to Y/\hspace{-1mm}/G$$ where $$X/\hspace{-1mm}/G$$ and $$Y/\hspace{-1mm}/G$$ are now Stein spaces and parameterize the closed $$G$$-orbits in $$X$$ and $$Y$$. We assume that $$Q\cong X/\hspace{-1mm}/G\cong Y/\hspace{-1mm}/G$$ and that we have a continuous equivariant $$\Phi$$ such that $$p_X=p_Y \circ \Phi$$. We find conditions under which $$\Phi$$ embeds into a continuous family $$\Phi_t$$ such that $$\Phi_1$$ is holomorphic. We give an application to the Linearization Problem. Let $$G$$ act holomorphically on $$\mathbb{C}^n$$. When is there a biholomorphic map $$\Phi:\mathbb{C}^n \to \mathbb{C}^n$$ such that $$\Phi^{-1} \circ g \circ \Phi \in \mathrm{GL}(n,\mathbb{C})$$ for all $$g$$ in $$G$$? We find a condition which is necessary and sufficient for "most" $$G$$-actions. This is joint work with F. Kutzschebauch and F. Larusson.
# Homework Help: How do I find the radius of this path

1. Nov 14, 2008

### devanlevin

Given $$\vec{R}=(30t-t^{3})\hat{x}+(22t-4t^{2})\hat{y}$$, find, for time t = 2 s:

1) the acceleration (a)
2) the angular acceleration ($$\alpha$$)
3) the radius of the curve of the path it takes, at t = 2 s

As far as I can see, I can integrate $$\vec{R}$$ twice to get the acceleration, but I need the radius to find $$\alpha$$... The only thing I could think of to find the radius is using the equation of a circle, $$(X-X_{0})^{2}+(Y-Y_{0})^{2}=r^{2}$$, then plugging the values of $$\vec{R}$$ at t = 2 s in for X and Y. The problem is that I don't know the centre point (X0, Y0) of the circle. I thought maybe to plug in X, Y at t = 0 for X0, Y0, but that just seems wrong. Any ideas?

2. Nov 14, 2008

### Redbelly98

Staff Emeritus

Well, actually you don't integrate R. Hopefully you know that, and just typed in the wrong thing. True, to get angular information there has to be a reference point for the rotation. Since none was given, I'm guessing we use the origin.

3. Nov 14, 2008

### devanlevin

Sorry, I meant take the derivative of R twice, but that's not really my problem: how do I find the radius?

4. Nov 14, 2008

### Redbelly98

Staff Emeritus

Do you have a calculus book lying around? Curvature of a path or function is usually covered in the 1st semester.

5. Nov 14, 2008

### devanlevin

Like I said, the only thing I can think of is $$r^{2}=(x-x_{0})^{2}+(y-y_{0})^{2}$$, and I don't know where the centre (x0, y0) is. Any ideas?

6. Nov 14, 2008

### alphysicist

Redbelly98, if I'm reading the question correctly I don't believe that is correct. The path itself defines the center of rotation at any particular time. As an alternative approach to find the radius of curvature, you could examine the tangent and normal components of the acceleration.

7. Nov 14, 2008

### Redbelly98

Staff Emeritus

I'm not convinced. I'd like to know if this is a basic Phys 101 class that is studying introductory rotational dynamics.
If so, I'd expect a straightforward use of the origin as reference: (x0, y0) = (0, 0), r is simply the vector (x(t), y(t)), and the angle is taken between r and the +x axis. Your interpretation could be right, but I would only expect that in the case of an advanced honors track course -- which is of course possible. Presumably there is an example or discussion in devanlevin's textbook that would clear this up.

8. Nov 14, 2008

### alphysicist

You mean find $$\frac{d^2\theta}{dt^2}$$ for the coordinate $\theta$? That's possible, but to me doesn't seem to fit the rest of the question. If they are expected to find the radius of curvature, I think finding the tangential and normal components is something they would be expected to do. Since the velocity, speed, and acceleration were already found in part a, a dot product and the Pythagorean theorem are all that is required to find the two acceleration components. Then one component gives the radius of curvature, and the other gives (alpha) = aT/R. (So my point is that these ideas all go together, and the above work takes only a few lines, so I think it is well within reason.) Please correct me if I misunderstand what you were saying, but I think this is much more straightforward than calculating $d^2 \theta/dt^2$ (which seems to involve taking the second derivative of the arctangent of y(t)/x(t), which is a lot of tedium).

I definitely agree with that; just asking for the "angular acceleration" is rather ambiguous since it could go either way.

9. Nov 15, 2008

### devanlevin

Sorry about the terminology; I'm not taking it from an English textbook, and English isn't the language I'm studying in. What I'm looking for when I say "angular acceleration" is the amount of radians/s^2. How do I find the tangent and normal components of the acceleration?

10. Nov 16, 2008

### alphysicist

Unfortunately that does not clear up the ambiguity.
The question is whether the word "angular" in angular acceleration here refers to the angle that the position vector makes with the x axis, or to the angle it makes with the center of rotation at that particular time.

As I was using the terms, the tangential component is parallel to the velocity, and the normal component is perpendicular to the velocity. (Tangential and normal to the path.) Since you already have the forms for the velocity, speed, and acceleration, you can use the dot product to get the component of the acceleration parallel to the velocity. How can you then use that to find the normal component?

11. Nov 16, 2008

### devanlevin

The angle it makes with the centre of the circle.

12. Nov 16, 2008

### devanlevin

But how would I get the radius, or the "angular acceleration", from the components of the acceleration parallel and normal to the velocity?

13. Nov 16, 2008

### alphysicist

The angular acceleration relative to the center of the path is given by

$$a_T= \alpha\ R$$

(similar to simple circular motion), where aT is the tangential component (the component parallel to the velocity), R is the radius of curvature, and alpha is related to the angle about the instantaneous center of rotation. The normal component is similar to the formula for the centripetal acceleration:

$$a_N = \frac{v^2}{R}$$

where aN is the normal component (perpendicular to the velocity). Since you have the speed, once you find aN and aT you can find alpha and R.

Last edited: Nov 16, 2008
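Following alphysicist's recipe, the numbers for this particular problem can be worked out mechanically. The sketch below is my own check, not from the thread (double-check the arithmetic before relying on it): it differentiates R(t) by hand, projects the acceleration onto the velocity to get aT, recovers aN from the Pythagorean theorem, and then solves aN = v^2/R and aT = alpha*R.

```java
public class RadiusOfCurvature {
    // Velocity and acceleration of R(t) = (30t - t^3, 22t - 4t^2), differentiated by hand.
    static double[] velocity(double t) { return new double[]{30 - 3 * t * t, 22 - 8 * t}; }
    static double[] accel(double t)    { return new double[]{-6 * t, -8}; }

    // Tangential component: projection of a onto the unit tangent v/|v|.
    static double tangentialA(double t) {
        double[] v = velocity(t), a = accel(t);
        return (a[0] * v[0] + a[1] * v[1]) / Math.hypot(v[0], v[1]);
    }

    // Normal component from |a|^2 = aT^2 + aN^2.
    static double normalA(double t) {
        double[] a = accel(t);
        double aT = tangentialA(t);
        return Math.sqrt(a[0] * a[0] + a[1] * a[1] - aT * aT);
    }

    // Radius of curvature from aN = v^2 / R.
    static double radius(double t) {
        double[] v = velocity(t);
        return (v[0] * v[0] + v[1] * v[1]) / normalA(t);
    }

    // Angular acceleration from aT = alpha * R.
    static double alpha(double t) { return tangentialA(t) / radius(t); }

    public static void main(String[] args) {
        double t = 2.0;
        System.out.printf("aT = %.4f, aN = %.4f%n", tangentialA(t), normalA(t));
        System.out.printf("R = %.3f, alpha = %.5f rad/s^2%n", radius(t), alpha(t));
    }
}
```

At t = 2 this gives v = (18, 6), a = (-12, -8), aT = -264/sqrt(360) (negative, so the particle is slowing), aN = sqrt(14.4), hence R = 360/sqrt(14.4), roughly 94.87, and alpha = -11/75, roughly -0.1467 rad/s^2.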
# Improved convergence rate for the simulation of stochastic differential equations driven by subordinated Lévy processes

Abstract: We consider the Euler approximation of stochastic differential equations (SDEs) driven by Lévy processes in the case where we cannot simulate the increments of the driving process exactly. In some cases, where the driving process Y is a subordinated stable process, i.e., Y = Z(V) with V a subordinator and Z a stable process, we propose an approximation of Y by Z(Vn), where Vn is an approximation of V. We then compute the rate of convergence for the approximation of the solution X of an SDE driven by Y, using results about the stability of SDEs.

Document type: Journal articles

### Citation

Sylvain Rubenthaler, Magnus Wiktorsson. Improved convergence rate for the simulation of stochastic differential equations driven by subordinated Lévy processes. Stochastic Processes and their Applications, Elsevier, 2003, 108 (1), pp. 1-26. ⟨10.1016/S0304-4149(03)00100-5⟩. ⟨hal-00755435⟩
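The subordination idea Y = Z(V) is easy to prototype. The sketch below is my own illustration, not the paper's scheme: it takes the simplest stable process, α = 2, so that Z is a standard Brownian motion, and lets V be a compound Poisson subordinator with exponential jumps, which can be simulated exactly. Conditionally on V_T, Z(V_T) is N(0, V_T), so a Monte Carlo check of E[Y_T] = 0 and Var(Y_T) = E[V_T] = λTμ gives a quick sanity test.

```java
import java.util.Random;

public class SubordinatedSim {
    // V_T: compound Poisson subordinator at time T, jump rate lambda, Exp(mean mu) jumps.
    static double sampleVT(double T, double lambda, double mu, Random rng) {
        double v = 0;
        double t = -Math.log(1.0 - rng.nextDouble()) / lambda; // first jump time
        while (t < T) {
            v += -mu * Math.log(1.0 - rng.nextDouble());       // Exp(mean mu) jump size
            t += -Math.log(1.0 - rng.nextDouble()) / lambda;   // next inter-arrival time
        }
        return v;
    }

    // Monte Carlo mean and second moment of Y_T = Z(V_T), with Z Brownian:
    // conditionally on V_T, Z(V_T) ~ N(0, V_T).
    static double[] meanVar(double T, double lambda, double mu, int n, long seed) {
        Random rng = new Random(seed);
        double s = 0, s2 = 0;
        for (int i = 0; i < n; i++) {
            double vT = sampleVT(T, lambda, mu, rng);
            double y = Math.sqrt(vT) * rng.nextGaussian();
            s += y;
            s2 += y * y;
        }
        return new double[]{s / n, s2 / n};
    }

    public static void main(String[] args) {
        double T = 1.0, lambda = 5.0, mu = 0.3;
        double[] mv = meanVar(T, lambda, mu, 200000, 42L);
        // Theory: E[Y_T] = 0 and Var(Y_T) = E[V_T] = lambda * T * mu = 1.5
        System.out.printf("mean = %.4f, var = %.4f (theory: 0 and %.2f)%n",
                mv[0], mv[1], lambda * T * mu);
    }
}
```

The paper's setting is harder precisely because a general α-stable Z and a general subordinator V cannot be sampled exactly, which is what motivates the approximation Vn and the convergence-rate analysis.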
# Continued Fraction Expansion Problems

Page 2 of 3

• June 8th 2010, 10:11 AM
undefined

Quote:

Originally Posted by Samson
I was able to follow along but I don't see why they chose the number '4' to start the expansion off with. I don't know how it relates to Sqrt[23]. Can someone start me off with how they got Sqrt[5]'s expansion? They give the answer but I'd like to know how they got there. As far as Q5 goes, I need similar help because I don't know how to get to the point to where they left off in the book.

The procedure is the same for Q4 and Q5, so there's no need to treat them separately. If you know how to do one, you know how to do them all. The 4 is chosen because $4^2 < 23$ and $5^2 > 23$.

• June 8th 2010, 10:17 AM
Amer

Quote:

Originally Posted by Samson
Well for Sqrt[5], the Wiki article shows [2;4,4,4,4,4,...] and they have it shown visually at the top of the page. How did they reach this conclusion though? How did they set it up? Sqrt[7] I'm lost on as well.

$\sqrt{5}$: the floor of sqrt[5] is 2

$\sqrt{5} = 2 + \sqrt{5} -2 = 2 + \frac{(\sqrt{5}-2)(\sqrt{5}+2)}{\sqrt{5}+2}= 2 + \frac{5 -4}{\sqrt{5}+2} = 2 + \frac{1}{\sqrt{5}+2}$

[2 ;

$\sqrt{5}+2 = 4 + \sqrt{5} - 2$

[2;4

$\sqrt{5} - 2 = \frac{(\sqrt{5}-2)(\sqrt{5}+2)}{\sqrt{5}+2} = \frac{1}{\sqrt{5} +2 }$

$\sqrt{5} +2 = 4 + \sqrt{5} - 2$

[2;4,4

and so on

• June 8th 2010, 10:23 AM
Samson

Quote:

Originally Posted by Samson
I was able to follow along but I don't see why they chose the number '4' to start the expansion off with. I don't know how it relates to Sqrt[23]. Can someone start me off with how they got Sqrt[5]'s expansion? They give the answer but I'd like to know how they got there. As far as Q5 goes, I need similar help because I don't know how to get to the point to where they left off in the book.

Can someone explain how they got from (1/(Sqrt[23]-4)) --> 1 + (Sqrt[23]-3)/7 ?
Source: (from the link - Problem 64)

• June 8th 2010, 10:26 AM
Samson

Quote:

Originally Posted by Amer
$\sqrt{5}$: the floor of sqrt[5] is 2
$\sqrt{5} = 2 + \sqrt{5} -2 = 2 + \frac{(\sqrt{5}-2)(\sqrt{5}+2)}{\sqrt{5}+2}= 2 + \frac{5 -4}{\sqrt{5}+2} = 2 + \frac{1}{\sqrt{5}+2}$
[2 ;
$\sqrt{5}+2 = 4 + \sqrt{5} - 2$
[2;4
$\sqrt{5} - 2 = \frac{(\sqrt{5}-2)(\sqrt{5}+2)}{\sqrt{5}+2} = \frac{1}{\sqrt{5} +2 }$
$\sqrt{5} +2 = 4 + \sqrt{5} - 2$
[2;4,4
and so on

So why do you use the 4 in this part? (I see why, but I mean how did you decide to use it instead of, say, 6 or 8 (6, then -4, or 8 then -6)?)

$\sqrt{5} +2 = 4 + \sqrt{5} - 2$

• June 8th 2010, 10:31 AM
undefined

Quote:

Originally Posted by Samson
Can someone explain how they got from (1/(Sqrt[23]-4)) --> 1 + (Sqrt[23]-3)/7 ?
Source: (from the link - Problem 64)

First rationalize the denominator. Then express as the sum of an integer and a number between 0 and 1.

• June 8th 2010, 10:32 AM
Amer

$\sqrt{7}$ has floor 2:

$\sqrt{7} = 2 + \frac{\sqrt{7} -2}{1} = 2 + \frac{3}{\sqrt{7}+2}$

turned over, $\frac{3}{\sqrt{7}+2}$ becomes $\frac{\sqrt{7}+2}{3}$

we said before $\sqrt{7} = 2 +\sqrt{7}-2$, so

$\frac{2+\sqrt{7} -2 +2 }{3} = \frac{4 + \sqrt{7}-2}{3} = 1 + \frac{\sqrt{7} -1}{3}= 1 + \frac{7-1}{3(\sqrt{7}+1)}= 1 + \frac{6}{3(\sqrt{7}+1)}$

turned over, $\frac{6}{3(\sqrt{7}+1)}$ becomes $\frac{\sqrt{7}+1}{2}$

multiply with the conjugate, and so on; I find [2;1

can you continue?

• June 8th 2010, 10:50 AM
Samson

Okay, first off, what's the point of turning over $\frac{3}{\sqrt{7}+2}$ and $\frac{6}{3(\sqrt{7}+1)}$ (respectively)?

Then, how did you get from $\frac{4 + \sqrt{7}-2}{3}$ to $1 + \frac{\sqrt{7} -1}{3}$ ?

Lastly, when did you know to place 2 and 1 in the list like you did ( [2;1 )?

• June 8th 2010, 11:07 AM
Amer

Quote:

Originally Posted by Samson
Okay, first off, what's the point of turning over $\frac{3}{\sqrt{7}+2}$ and $\frac{6}{3(\sqrt{7}+1)}$ (respectively)?
Then, how did you get from $\frac{4 + \sqrt{7}-2}{3}$ to $1 + \frac{\sqrt{7} -1}{3}$ ?

Lastly, when did you know to place 2 and 1 in the list like you did ( [2;1 )?

In fact I'm just doing what is in here. The way is: for example, we want to find the continued fraction of sqrt[s]. Let f = floor of sqrt[s]. Then it is trivial that sqrt[s] = f + (sqrt[s] - f). Is that OK so far? If it is, then we have found the first number: [f;

Now we say sqrt[s] = f + (sqrt[s] - f). Multiply the expression in the brackets by its conjugate:

$\frac{(\sqrt{s}-f)(\sqrt{s}+f)}{\sqrt{s}+f} = \frac{s - f^2}{\sqrt{s}+f}$

Turn it over:

$\frac{\sqrt{s}+f}{s - f^2}$

We say that sqrt[s] = f + sqrt[s] - f, so

$\frac{\sqrt{s}+f}{s - f^2} = \frac{2f + \sqrt{s} - f }{s - f^2 }$

Suppose that 2f/(s-f^2) = r + t/(s-f^2), with r an integer, so

$\frac{2f + \sqrt{s} - f }{s - f^2 } = r + \frac{ \sqrt{s} - f +t}{s-f^2}$

[f;r,

$\frac{ \sqrt{s} - f +t}{s-f^2}$

Multiply with the conjugate, then turn over, and so on.

• June 8th 2010, 11:11 AM
Samson

I'm trying to follow you, but I think if you worked with the explicit example I had pointed out I'd understand. Those are the steps I'm missing.

• June 8th 2010, 11:20 AM
Amer

I worked it for sqrt[7] and sqrt[5]; hope you will get the answer.

• June 8th 2010, 11:24 AM
Samson

Quote:

Originally Posted by Amer
I worked it for sqrt[7] and sqrt[5]; hope you will get the answer.

I'm asking explicitly if you or someone can explain how you got from $\frac{4 + \sqrt{7}-2}{3}$ to $1 + \frac{\sqrt{7} -1}{3}$ ?

• June 8th 2010, 11:33 AM
undefined

Quote:

Originally Posted by Samson
I'm asking explicitly if you or someone can explain how you got from $\frac{4 + \sqrt{7}-2}{3}$ to $1 + \frac{\sqrt{7} -1}{3}$ ?

$\frac{4 + \sqrt{7}-2}{3} = \frac{\sqrt{7}+2}{3} = 1 + \frac{\sqrt{7}+2}{3} - \frac{3}{3} = 1 + \frac{\sqrt{7} -1}{3}$

Note that $\left\lfloor \frac{\sqrt{7}+2}{3} \right\rfloor = 1$ and $0 < \frac{\sqrt{7} -1}{3} < 1$.
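Amer's rationalise-and-turn-over recipe can be run mechanically. The sketch below is my own (not posted in the thread): it keeps the fractional part in the form (sqrt(s) - b)/c, so each step rationalises the reciprocal c/(sqrt(s) - b) = (sqrt(s) + b)/c' with c' = (s - b^2)/c and peels off the integer part. Applied to the book's original Problem 64 it reproduces sqrt(23) = [4; 1, 3, 1, 8, 1, 3, 1, 8, ...].

```java
import java.util.ArrayList;
import java.util.List;

public class SqrtCF {
    // Partial quotients of sqrt(s). State: fractional part is (sqrt(s) - b)/c.
    static List<Integer> expand(int s, int terms) {
        List<Integer> out = new ArrayList<>();
        int f = (int) Math.sqrt(s);
        out.add(f);                        // integer part, floor(sqrt(s))
        if (f * f == s) return out;        // perfect square: no fractional part
        int b = f, c = 1;
        for (int k = 0; k < terms; k++) {
            c = (s - b * b) / c;           // new denominator (the division is always exact)
            int q = (f + b) / c;           // next partial quotient, floor((sqrt(s)+b)/c)
            b = q * c - b;                 // new fractional part is (sqrt(s) - b)/c
            out.add(q);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println("sqrt(23): " + expand(23, 8));
        System.out.println("sqrt(5):  " + expand(5, 5));
        System.out.println("sqrt(7):  " + expand(7, 8));
    }
}
```

For sqrt(5) and sqrt(7) this recovers [2; 4, 4, ...] and [2; 1, 1, 1, 4, ...], matching the hand computations above.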
• June 8th 2010, 11:45 AM Samson

Quote: Originally Posted by undefined
$\frac{4 + \sqrt{7}-2}{3} = \frac{\sqrt{7}+2}{3} = 1 + \frac{\sqrt{7}+2}{3} - \frac{3}{3} = 1 + \frac{\sqrt{7} -1}{3}$ Note that $\left\lfloor \frac{\sqrt{7}+2}{3} \right\rfloor = 1$ and $0 < \frac{\sqrt{7} -1}{3} < 1$.

Thank you for that explanation, that is exactly what I needed. I will try to apply this, but if anyone wants to add any more onto either Sqrt[5] or Sqrt[7], please feel free. I'll be logging on later and it would be great if I had something to check my work against!

• July 2nd 2010, 04:44 AM Samson

Quote: Originally Posted by Samson
Hello all, Can anyone provide the complete solution to either one of these? I know that Latex has been down for a while but now that its back up, I'm hoping somebody might be able to provide these. *Note: If the users that helped give part of it before are still available, it would be totally awesome if you're able to complete the solutions you started! I'd really appreciate it!

lol, I've been trying to see the end of the tunnel for 2 weeks on just these 2 problems! Can anybody help with these? It's been floating up here for nearly a month lol! Anyone have these solutions?
• July 4th 2010, 12:51 AM undefined

Here's my code:

Code:
    import java.util.ArrayList;

    public class ContFracSqrt {
        static boolean v = true; // verbosity

        public static void main(String[] args) {
            print(cfe(5));
        }

        static void print(ArrayList<Integer> a) {
            int i;
            String s = a.toString();
            if (a.size() > 1)
                for (i = 2;; i++)
                    if (s.charAt(i) == ',') {
                        s = s.substring(0, i) + "; (" + s.substring(i + 2);
                        s = s.substring(0, s.length() - 1) + ")]";
                        break;
                    }
            System.out.println(s);
        }

        static ArrayList<Integer> cfe(int n) {
            ArrayList<Integer> x = new ArrayList<Integer>();
            int a = (int) Math.sqrt(n), b = a, c = 1, d, e, f, g;
            if (a * a == n) return x;
            x.add(a); // record the integer part a0
            if (v) System.out.println("\\sqrt{" + n + "}=" + a + "+\\dfrac{\\sqrt{" + n + "}-" + a + "}{1}");
            if (v) System.out.print("\\dfrac{1}{\\sqrt{" + n + "}-" + a + "}=");
            while (true) {
                d = c;
                c = n - b * b;
                g = gcd(c, d);
                c /= g;
                d /= g;
                b = -b;
                f = a - c;
                for (e = 0; b <= f; e++) b += c;
                x.add(e); // record the next partial quotient
                if (v) System.out.println(e + "+\\dfrac{\\sqrt{" + n + "}-" + b + "}{" + c + "}");
                if (b == a && c == 1) return x;
                if (v) System.out.print("\\dfrac{" + c + "}{\\sqrt{" + n + "}-" + b + "}=");
            }
        }

        static int gcd(int a, int b) {
            return (b == 0) ? a : gcd(b, a % b);
        }
    }

Which gives output for $\displaystyle \sqrt{5}$:

$\sqrt{5}=2+\dfrac{\sqrt{5}-2}{1}$
$\dfrac{1}{\sqrt{5}-2}=4+\dfrac{\sqrt{5}-2}{1}$
$[2; (4)]$

and for $\displaystyle \sqrt{7}$:

$\sqrt{7}=2+\dfrac{\sqrt{7}-2}{1}$
$\dfrac{1}{\sqrt{7}-2}=1+\dfrac{\sqrt{7}-1}{3}$
$\dfrac{3}{\sqrt{7}-1}=1+\dfrac{\sqrt{7}-1}{2}$
$\dfrac{2}{\sqrt{7}-1}=1+\dfrac{\sqrt{7}-2}{3}$
$\dfrac{3}{\sqrt{7}-2}=4+\dfrac{\sqrt{7}-2}{1}$
$[2; (1, 1, 1, 4)]$

Let me know how this works for you. I decided against ASCII mode since LaTeX is so much nicer.
https://www.physicsforums.com/threads/moment-of-inertia-equation.59068/
Moment of inertia equation

1. Jan 8, 2005 CuriousJonathan

I'm in AP physics because I was really bored in acc physics, but I'm not actually in calculus, so I may ask you to explain basic concepts further if I have not yet had the chance to figure them out for myself. I was wondering about the equation for the moment of inertia of a uniform disc rotating about its central axis. My book tells me that it's 1/2MR^2, but I attempted to arrive at that same equation myself and got MR^2, which is wrong-o. I've attempted several times trying different ways, but I think the problem isn't that I don't know how conceptually, it's that I haven't learned to think in calculus yet. If someone could explain to me how to do this, that would be great. Thank you very much for your help. Also, I would like to know if mathematical symbols are built into this forum anywhere.

2. Jan 8, 2005 CuriousJonathan

I figured out the mathematical symbols! $$\int \ r^{2} \ dm$$ This is what my book tells me to start with as the generic moment of inertia equation and then to substitute for dm.

3. Jan 8, 2005 dextercioby

Choose a mass element "dm" at a distance $\rho$ from the center of the circle. Due to the symmetry of the problem, it's wise to choose polar coordinates. The mass element is given wrt its surface by $$dm=\frac{M}{\pi R^{2}} dS$$(1) The surface element in polar coordinates is $$dS=\rho \ d\rho \ d\phi$$(2) Therefore the integral $$I=\int \rho^{2} dm$$ (3) becomes: $$I=\int_{0}^{R}d\rho \int_{0}^{2\pi} d\phi \frac{M}{\pi R^{2}} \rho^{3}$$(4) Integrate and find $$I=\frac{M}{\pi R^{2}} 2\pi \frac{\rho^{4}}{4}|_{0}^{R} =\frac{1}{2}MR^{2}$$(5)

Daniel.

Last edited: Jan 8, 2005

4.
Jan 8, 2005 dextercioby

ALTERNATIVE METHOD

Think of the disk as a union of concentric pieces in the form of coronas (thin rings). Then: The surface element of such a corona situated at a distance "r" from the center of the circle is $$dS=2\pi r \ dr$$ (1) If the disk has homogeneous surface density, then the mass element "dm" has the value: $$dm=\rho_{Superficial} dS=\frac{M_{disk}}{S_{disk}} dS$$(2) (this assertion is valid for the prior post as well). Then the mass element of such an infinitesimal corona is $$dm=\frac{M}{\pi R^{2}} 2\pi r \ dr =\frac{2M}{R^{2}} r \ dr$$(3) The integral giving the moment of inertia is $$I=\int r^{2} dm =\int_{0}^{R} \frac{2M}{R^{2}} r^{3} dr$$ (4), which is $$I=\frac{2M}{R^{2}} \frac{r^{4}}{4}|_{0}^{R} =\frac{1}{2}MR^{2}$$ (5)

Daniel.

Last edited: Jan 8, 2005

5. Jan 9, 2005 Galileo

In general, if you know the density $\rho(x,y,z)$ of the body, using $dm=\rho dV$ will enable you to solve the problem: $$I=\int \limits_E r^2\rho dV$$ where E denotes the region you integrate over. In this particular case $\rho$ is constant, so you can bring it outside the integral. Then, switching to polar coordinates, letting $r$ run from 0 to R and $\theta$ from 0 to $2\pi$: $$I=\rho\int_0^{2\pi}\int_0^R r^2\, r\, dr\, d\theta$$ You can eliminate $\rho$ in favor of the mass to get the answer.

6. Jan 9, 2005 CuriousJonathan

Thanks for the help

Thank you for all of the help. To tell the truth, I don't really understand any of it just reading it through; I'm going to have to print it out and look at it, but having the right answer will help me greatly in understanding why. I have a question about the last post before this. I was wondering if the author of that post could supply me with the rest of the math, because I don't have any idea how one would proceed with integrating with respect to two different things. Thank you again for the help.

7.
Jan 9, 2005 CuriousJonathan

Also, when I tried to accomplish this on my own, I attempted to think of it not as a union of concentric rings, but as a union of pie-piece-shaped wedges. This didn't work, but it did get me the moment of inertia equation for a ring. At least I got somewhere, even if it wasn't where I was intending to go.

8. Jan 9, 2005 CuriousJonathan

I was hoping someone could explain to me why my previous approach did not work.

9. Jan 9, 2005 dextercioby

It is called a double integral and I doubt u'll see any of these in HS. Maybe if u go to a college/university in a science domain, u'll learn about these "animals"...

Daniel.

10. Jan 9, 2005 dextercioby

Rings have no width. They don't have a surface. They have only one dimension, namely length. You need to find the moment of inertia for a disk, which has both width and length. The assumption of infinitesimally thin circular coronas yields the result, however, because these mathematical objects are similar to a disk, in the respect that a finite union of such concentric coronas covers the disk...

Daniel.

11. Jan 10, 2005 Galileo

Sorry about the double integral. Fortunately it can be done without it. For a ring (circle) the moment of inertia is really simple. Since all the mass is at a distance R from the center, r in the integral is constant. So you can bring it outside of the integral and the integral over dm simply gives the mass. $$I_{ring}=\int_{C} r^2 dm = R^2 \int_{C}dm=MR^2$$ where C is the circle you are integrating over. If you consider the uniform disc as a collection of polar 'rectangles', you'll end up with a double integral. I think the only way to get a single integral is to consider the disc as a collection of concentric rings with width dr. Then express dm in terms of r: $$dm = \left(\frac{M}{\pi R^2}\right)(2\pi r\,dr)=\frac{2M}{R^2}r\,dr$$ Now that you know dm in terms of dr, can you set up the integral and get the moment of inertia?
BTW: This is identical to the method in dexter's second post, and to a double integral after having integrated wrt theta.

12. Jan 10, 2005 CuriousJonathan

Yes yes, I can proceed with the ring approach, and actually some friends I know are going to be doing double integrals second semester, but since I'm not in calc I don't have to worry about them. --My friend says "He's an idiot! He's taking AP physics without taking calc"-- Anyways, thank you for the help!
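The ring decomposition discussed in the thread can also be sanity-checked numerically by literally summing thin rings. This Python snippet is an illustrative addition (a midpoint Riemann sum), not part of the original discussion:

```python
import math

def disk_inertia(M, R, n=100_000):
    """Approximate I = integral of r^2 dm over a uniform disk by
    summing thin concentric rings of width dr, with dm = sigma*2*pi*r*dr."""
    sigma = M / (math.pi * R**2)        # uniform surface density
    dr = R / n
    I = 0.0
    for i in range(n):
        r = (i + 0.5) * dr              # midpoint radius of ring i
        dm = sigma * 2 * math.pi * r * dr
        I += r * r * dm
    return I

# Converges to (1/2) M R^2, matching the exact integration above
print(disk_inertia(2.0, 3.0))           # about 9.0 = 0.5 * 2 * 3**2
```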
http://math.stackexchange.com/questions/26169/cross-correlation-in-matlab-help
# Cross correlation in MATLAB HELP

I have values from two sensors stored in two vectors A and B. They both represent values of the sensors at times TA and TB, which are stored in two other vectors (since it is not uniform sampling). Both A and B represent the same data, but A is shifted a bit to the right because of the delay in starting the sensors. My question is: how do I calculate this delay, and more importantly, how do I shift A to match B or vice versa, such that I can do a one-to-one correspondence of the data? Right now, I am finding the peak of the cross-correlation function of A and B to find the offset (in number of samples, not time) and padding the arrays with zeros up to that number, but I am 100% sure that is wrong. The problem is, I don't know how else to do it.
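One common way to handle the question above, sketched here in Python/NumPy for concreteness (the same steps map onto MATLAB's `interp1` and `xcorr`): resample both records onto a shared uniform time grid, locate the cross-correlation peak to get the delay in time units, and then shift one record's time axis by that amount instead of zero-padding. All signal parameters below are made up for the demonstration:

```python
import numpy as np

def estimate_delay(TA, A, TB, B, dt=0.01):
    """Delay of record A relative to record B, in seconds.

    TA, TB: (possibly non-uniformly spaced) sample times; A, B: values.
    Positive result means A's events happen later on the common clock.
    """
    t0 = max(TA[0], TB[0])
    t1 = min(TA[-1], TB[-1])
    t = np.arange(t0, t1, dt)                # shared uniform grid
    a = np.interp(t, TA, A)
    b = np.interp(t, TB, B)
    a = a - a.mean()                         # remove DC before correlating
    b = b - b.mean()
    xc = np.correlate(a, b, mode='full')     # index len(t)-1 is zero lag
    lag = int(np.argmax(xc)) - (len(t) - 1)
    return lag * dt

# Synthetic demo: both sensors sample the same 0.7 Hz signal at random
# times, but sensor A was started 0.5 s late (A shifted to the right).
rng = np.random.default_rng(0)
TA = np.sort(rng.uniform(0, 10, 400))
TB = np.sort(rng.uniform(0, 10, 400))
A = np.sin(2 * np.pi * 0.7 * (TA - 0.5))
B = np.sin(2 * np.pi * 0.7 * TB)

delay = estimate_delay(TA, A, TB, B)
```

To align the records, subtract the estimated delay from TA (or add it to TB) and interpolate one signal onto the other's corrected times; that gives the one-to-one correspondence without any zero-padding.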
http://mathhelpforum.com/advanced-algebra/96379-algebra-problems-fun-36-a.html
# Math Help - Algebra, Problems For Fun (36)

1. ## Algebra, Problems For Fun (36)

Definition: An element $r$ in a ring $R$ is called central if $r$ is in the center of $R,$ i.e. $rs=sr,$ for all $s \in R.$

1) Suppose $R$ is a ring in which $x^2=0 \Longrightarrow x=0.$ Let $e \in R$ be an idempotent, i.e. $e^2=e.$ Prove that $e$ is central.

Hint: Spoiler: Let $r \in R.$ What do you get if you expand $(er-ere)^2$ and $(re-ere)^2$ ?

2) Use 1) to give a short proof of this very special case of Jacobson's theorem: if $x^3=x,$ for all $x$ in a ring with identity $R,$ then $R$ is commutative.

Hint: Spoiler: clearly $r^2$ is idempotent, and thus central by 1), for any $r \in R.$ Show that $2r$ and $3r$ are also central.

3) This time suppose $x^4=x,$ for all $x$ in a ring with identity $R.$ Use 1) to prove that $R$ is commutative.

Hint: Spoiler: It's immediate that $r^2 + r$ is an idempotent, and thus central by 1), for all $r \in R.$ Put $r=a+b$ to show that $ab+ba$ is central for all $a,b \in R.$

4) Challenge: Suppose $x^5=x,$ for all $x$ in a ring with identity $R.$ Use 1) to prove that $R$ is commutative.

2. Originally Posted by NonCommAlg
Hint: Spoiler: It's immediate that $r^2 + r$ is an idempotent, and thus central by 1), for all $r \in R.$ Put $r=a+b$ to show that $ab+ba$ is central for all $a,b \in R.$

Why is $r^2+r$ an idempotent? Isn’t $(r^2+r)^2=r^2+r+2r^3?$

3. Originally Posted by TheAbstractionist
Why is $r^2+r$ an idempotent? Isn’t $(r^2+r)^2=r^2+r+2r^3?$

because in $R$ we have $(-1)^4=-1,$ which gives us $2=0.$ so $2r^3=0.$

4. Ah! I didn’t realize that $R$ was a ring with multiplicative identity.

5. Right. For the last one, $r^4$ is idempotent and therefore central. Now $(r+1)^5\,=\,r+1$ $\implies\ 2r^3+2r^2+r\,=\,-r^4$ is central and $(r-1)^5\,=\,r-1$ $\implies\ 2r^3-2r^2+r\,=\,r^4$ is central, and so $(2r^3+2r^2+r)-(2r^3-2r^2+r)=4r^2$ and $(2r^3+2r^2+r)+(2r^3-2r^2+r)=4r^3+2r$ are central. Hence $8(4r^2)=2^5r^2=2r^2$ is central.
Moreover $4r^3+6r^2+4r\,=\,(r+1)^4-r^4-1$ is central. Hence $(4r^3+6r^2+4r)-(4r^3+2r)-3(2r^2)=2r$ is central. Hence $(4r^3+2r)-2r=4r^3$ is central and so $8(4r^3)=2^5r^3=2r^3$ is central. Hence $(2r^3+2r^2+r)-2r^3-2r^2=r$ is central.

6. Originally Posted by TheAbstractionist
$(r+1)^5\,=\,r+1$ $\implies\ 2r^3+2r^2+r\,=\,-r^4$ is central

we can't divide by 5. so we actually get: $10r^3+10r^2+5r=-5r^4$ is central.

7. Okay, let’s try again. $10r^3+10r^2+5r=-5r^4$ and $10r^3-10r^2+5r=-5r^4$ are central, so $(10r^3+10r^2+5r)-(10r^3-10r^2+5r)=20r^2$ and $(10r^3+10r^2+5r)+(10r^3-10r^2+5r)=20r^3+10r$ are central. So $2(20r^2)=(2^5+8)r^2=10r^2$ is central. Moreover $4r^3+6r^2+4r\,=\,(r+1)^4-r^4-1$ is central. Hence $5(4r^3+6r^2+4r)-(20r^3+10r)-3(10r^2)=10r$ is central. Hence $(20r^3+10r)-10r=20r^3$ is central, and so $10r^3$ is central. Hence $(10r^3+10r^2+5r)-10r^3-10r^2=5r$ is central. Now $4r^3-6r^2+4r\,=\,-(r+1)^4+r^4+1$ is also central, and so $(4r^3+6r^2+4r)-(4r^3-6r^2+4r)=12r^2$ is central, and so $12r^2-10r^2=2r^2$ is central. Replacing $r$ by $r+1$ gives that $2(r+1)^2$ is central. Hence $2(r+1)^2-2r^2-2=4r$ is central. At last! Hence $r=5r-4r$ is central.

8. that's nice work! there are two points, which won't have any effect on your solution:

Originally Posted by TheAbstractionist
Hence $5(4r^3+6r^2+4r)-(20r^3+10r)-3(10r^2)=10r$ is central.

or, since $30 = 0$ in R, we have $5(4r^3 + 6r^2 + 4r) - (20r^3 + 10r)=10r.$

Originally Posted by TheAbstractionist
$4r^3-6r^2+4r\,=\,-(r+1)^4+r^4+1$

the right hand side should be $-(r-1)^4+r^4+1.$

9. My question is related to:

2) Use 1) to give a short proof of this very special case of Jacobson's theorem: if $x^3=x,$ for all $x$ in a ring with identity $R,$ then $R$ is commutative. Hint: Spoiler: clearly $r^2$ is idempotent, and thus central by 1), for any $r \in R.$ Show that $2r$ and $3r$ are also central.

Let the center of the ring be C.
$C=\{r \in R \,|\, rs=sr \ \forall s \in R\}$

Clearly C is a sub-ring of R. Also $1 , r^2 \in C$ $\forall r \in R.$

To prove $2r \in C$ $\forall r \in R$: consider $(1+r)^2 - 1 - r^2 = 2r.$ From the closure property of a sub-ring we get $2r \in C.$

Is my approach correct? I am having trouble in proving $3r \in C$. Any push plz?

10. Originally Posted by aman_cc
Let the center of the ring be C. ... Is my approach correct?

yes.

Originally Posted by aman_cc
I am having trouble in proving $3r \in C$. Any push plz?

$1+r=(1+r)^3 = 1+3r+3r^2 + r^3 = 1+3r+3r^2+r.$ thus $3r=-3r^2 \in C.$

11. Originally Posted by NonCommAlg
yes. $1+r=(1+r)^3 = 1+3r+3r^2 + r^3 = 1+3r+3r^2+r.$ thus $3r=-3r^2 \in C.$

Thanks. It looks so easy when you see it. I'm now stuck with:

3) This time suppose $x^4=x,$ for all $x$ in a ring with identity $R.$ Use 1) to prove that $R$ is commutative. Hint: Spoiler: It's immediate that $r^2+r$ is an idempotent, and thus central by 1), for all $r \in R.$ Put $r=a+b$ to show that $ab+ba$ is central for all $a,b \in R.$

As suggested in the hint, I could prove the following: $\forall a,b \in R$
1. $a+a = 0$
2. $a^2 + a \in C$
3. $ab+ba \in C$

How do I show R is commutative? A push again plz? In fact I was trying to show that ab+ba=0.
Then using (1) above I will show ab = ba and hence will be done. But no success.

12. Originally Posted by aman_cc
How do I show R is commutative? A push again plz? In fact I was trying to show that ab+ba=0. Then using (1) above I will show ab = ba and hence will be done. But no success.

so $a^2b + ba^2 \in C,$ for all $a,b \in R.$ thus $a^2(a^2b + ba^2)=(a^2b + ba^2)a^2,$ which gives us: $ab=ba.$

13. Originally Posted by NonCommAlg
so $a^2b + ba^2 \in C,$ for all $a,b \in R.$ thus $a^2(a^2b + ba^2)=(a^2b + ba^2)a^2,$ which gives us: $ab=ba.$

Thanks. And now for the last part:

4) Challenge: Suppose $x^5=x,$ for all $x$ in a ring with identity $R.$ Use 1) to prove that $R$ is commutative.

I completely follow TheAbstractionist's proof. I have just two questions.
1. I would never have been able to do this proof. Is there a structured line of attack which you were following here? Or is this just a matter of a lot of practice and, of course, a lot of gray matter?
2. Do we end at 5? Or is this true in general, where $x^n=x$?

14. Originally Posted by aman_cc
1. I would never have been able to do this proof. Is there a structured line of attack which you were following here?

the idea is to look for idempotents.

Originally Posted by aman_cc
2. Do we end at 5? Or is this true in general, where $x^n=x$?

the result is true for all $n.$ the interesting thing is that $n$ can even change as the element $x$ changes. so, Jacobson's theorem says: if $R$ is a ring with identity such that for every $x \in R,$ there exists an integer $n(x) \geq 2$ such that $x^{n(x)}=x,$ then $R$ is commutative. of course, understanding the proof of this theorem requires a fairly deep knowledge of ring theory. there are, however, special cases of this theorem which can be proved quite easily. for example if $R$ is finite or, more generally, Artinian, then the theorem is just a quick result of the Artin-Wedderburn theorem.
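The expansion suggested in the hint to 1) can even be machine-checked. The snippet below is an illustrative addition, not from the thread: it works in the free ring on two noncommuting symbols e and r, reduces words only by the idempotent rule e*e = e, and confirms that (er - ere)^2 and (re - ere)^2 expand to 0 identically. In a ring where x^2 = 0 forces x = 0, this gives er = ere = re, i.e. e is central.

```python
from collections import defaultdict

# Elements of the free Z-algebra on {e, r} as {word: coefficient},
# where words are strings over 'e', 'r' reduced by the rule ee -> e.

def reduce_word(w):
    while 'ee' in w:
        w = w.replace('ee', 'e')
    return w

def mul(p, q):
    out = defaultdict(int)
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            out[reduce_word(w1 + w2)] += c1 * c2
    return {w: c for w, c in out.items() if c != 0}

def sub(p, q):
    out = defaultdict(int, p)
    for w, c in q.items():
        out[w] -= c
    return {w: c for w, c in out.items() if c != 0}

e, r = {'e': 1}, {'r': 1}
er, re = mul(e, r), mul(r, e)
ere = mul(er, e)

# Both squares vanish using nothing but e^2 = e:
print(mul(sub(er, ere), sub(er, ere)))   # {}  i.e. (er - ere)^2 = 0
print(mul(sub(re, ere), sub(re, ere)))   # {}  i.e. (re - ere)^2 = 0
```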
http://mathhelpforum.com/algebra/51317-algebra-word-problem.html
1. ## algebra word problem

Alice invested a portion of $500 at 4% interest and the remainder at 5%. How much did she invest at each rate if her total income from both investments is $23.28?

2. Originally Posted by watsod8
Alice invested a portion of $500 at 4% interest and the remainder at 5%. How much did she invest at each rate if her total income from both investments is $23.28?

Let x = amount invested at 4%
Let 500 - x = amount invested at 5%
.04x + .05(500 - x) = 23.28
Finish up....good luck.

3. Originally Posted by watsod8
Alice invested a portion of $500 at 4% interest and the remainder at 5%. How much did she invest at each rate if her total income from both investments is $23.28?

Let x denote the amount which is invested at 4%, then (500-x) is invested at 5%:
$x \cdot \frac4{100} + (500-x) \cdot \frac5{100} = 23.38$
Solve for x. I've got $162.

4. Hi, I tried to solve this and I don't get $162. Can someone please check my working and tell me where I have gone wrong. Thank you.
$0.4x + 0.5(500-x) = 23.38$
$0.4x + 250-0.5x = 23.38$
$-0.1x + 250 = 23.38$
$-0.1x = -226.62$
$x = 2266.2$ ?

5. Originally Posted by earboth
Let x denote the amount which is invested at 4%, then (500-x) is invested at 5%:
$x \cdot \frac4{100} + (500-x) \cdot \frac5{100} = {\color{red}23.38}$
Solve for x. I've got $162.

Typo, Earboth, but setup is good. Investment at 4% = $172

6. I don't get $172. Where have I gone wrong?

7. Originally Posted by Tweety
Hi, I tried to solve this and I don't get $162. Can someone please check my working and tell me where I have gone wrong. Thank you.
$0.4x + 0.5(500-x) = 23.38$
First, it's 23.28, not 23.38
Second, 4%=.04 and 5%=.05
$0.4x + 250-0.5x = 23.38$
$-0.1x + 250 = 23.38$
$-0.1x = -226.62$
$x = 2266.2$ ?
Corrections in red.
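The arithmetic behind the corrected setup can be checked in a couple of lines. This Python fragment is an illustrative addition, not part of the thread:

```python
total, income = 500, 23.28
r_low, r_high = 0.04, 0.05

# 0.04x + 0.05(500 - x) = 23.28  =>  x = (0.05*500 - 23.28) / (0.05 - 0.04)
x = (r_high * total - income) / (r_high - r_low)
print(round(x, 2), round(total - x, 2))   # 172.0 at 4%, 328.0 at 5%
```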
https://en.wikipedia.org/wiki/Damping
# Harmonic oscillator

(Redirected from Damping)

This article is about the harmonic oscillator in classical mechanics. For its uses in quantum mechanics, see quantum harmonic oscillator.

In classical mechanics, a harmonic oscillator is a system that, when displaced from its equilibrium position, experiences a restoring force, F, proportional to the displacement, x:

${\displaystyle {\vec {F}}=-k{\vec {x}}\,}$

where k is a positive constant. If F is the only force acting on the system, the system is called a simple harmonic oscillator, and it undergoes simple harmonic motion: sinusoidal oscillations about the equilibrium point, with a constant amplitude and a constant frequency (which does not depend on the amplitude).

If a frictional force (damping) proportional to the velocity is also present, the harmonic oscillator is described as a damped oscillator. Depending on the friction coefficient, the system can:

• Oscillate with a frequency lower than in the non-damped case, and an amplitude decreasing with time (underdamped oscillator).
• Decay to the equilibrium position, without oscillations (overdamped oscillator).

The boundary solution between an underdamped oscillator and an overdamped oscillator occurs at a particular value of the friction coefficient and is called "critically damped."

If an external time dependent force is present, the harmonic oscillator is described as a driven oscillator. Mechanical examples include pendulums (with small angles of displacement), masses connected to springs, and acoustical systems. Other analogous systems include electrical harmonic oscillators such as RLC circuits. The harmonic oscillator model is very important in physics, because any mass subject to a force in stable equilibrium acts as a harmonic oscillator for small vibrations. Harmonic oscillators occur widely in nature and are exploited in many manmade devices, such as clocks and radio circuits.
They are the source of virtually all sinusoidal vibrations and waves.

## Simple harmonic oscillator

Figure: Simple harmonic motion.

A simple harmonic oscillator is an oscillator that is neither driven nor damped. It consists of a mass m, which experiences a single force, F, which pulls the mass in the direction of the point x=0 and depends only on the mass's position x and a constant k. Balance of forces (Newton's second law) for the system is

${\displaystyle F=ma=m{\frac {\mathrm {d} ^{2}x}{\mathrm {d} t^{2}}}=m{\ddot {x}}=-kx.}$

Solving this differential equation, we find that the motion is described by the function

${\displaystyle x(t)=A\cos \left(\omega t+\phi \right),}$

where

${\displaystyle \omega ={\sqrt {\frac {k}{m}}}={\frac {2\pi }{T}}.}$

The motion is periodic, repeating itself in a sinusoidal fashion with constant amplitude, A. In addition to its amplitude, the motion of a simple harmonic oscillator is characterized by its period T, the time for a single oscillation, or its frequency f = 1/T, the number of cycles per unit time. The position at a given time t also depends on the phase, φ, which determines the starting point on the sine wave. The period and frequency are determined by the size of the mass m and the force constant k, while the amplitude and phase are determined by the starting position and velocity.

The velocity and acceleration of a simple harmonic oscillator oscillate with the same frequency as the position but with shifted phases. The velocity is maximum for zero displacement, while the acceleration is in the opposite direction as the displacement.

The potential energy stored in a simple harmonic oscillator at position x is

${\displaystyle U={\frac {1}{2}}kx^{2}.}$

## Damped harmonic oscillator

Figure: Dependence of the system behavior on the value of the damping ratio ζ.
Figure: A damped harmonic oscillator, which slows down due to friction.
Figure: Another damped harmonic oscillator.

In real oscillators, friction, or damping, slows the motion of the system.
Due to frictional force, the velocity decreases in proportion to the acting frictional force. While simple harmonic motion oscillates with only the restoring force acting on the system, damped harmonic motion experiences friction. In many vibrating systems the frictional force Ff can be modeled as being proportional to the velocity v of the object: Ff = −cv, where c is called the viscous damping coefficient. Balance of forces (Newton's second law) for damped harmonic oscillators is then

${\displaystyle F=F_{ext}-kx-c{\frac {\mathrm {d} x}{\mathrm {d} t}}=m{\frac {\mathrm {d} ^{2}x}{\mathrm {d} t^{2}}}.}$

When no external forces are present (i.e. when ${\displaystyle F_{ext}=0}$), this can be rewritten into the form

${\displaystyle {\frac {\mathrm {d} ^{2}x}{\mathrm {d} t^{2}}}+2\zeta \omega _{0}{\frac {\mathrm {d} x}{\mathrm {d} t}}+\omega _{0}^{\,2}x=0,}$

where ${\displaystyle \omega _{0}={\sqrt {\frac {k}{m}}}}$ is called the 'undamped angular frequency of the oscillator' and ${\displaystyle \zeta ={\frac {c}{2{\sqrt {mk}}}}}$ is called the 'damping ratio'.

Figure: Step-response of a damped harmonic oscillator; curves are plotted for three values of μ = ω1 = ω0√(1 − ζ²). Time is in units of the decay time τ = 1/(ζω0).

The value of the damping ratio ζ critically determines the behavior of the system. A damped harmonic oscillator can be:

• Overdamped (ζ > 1): The system returns (exponentially decays) to steady state without oscillating. Larger values of the damping ratio ζ return to equilibrium slower.
• Critically damped (ζ = 1): The system returns to steady state as quickly as possible without oscillating (although overshoot can occur). This is often desired for the damping of systems such as doors.
• Underdamped (ζ < 1): The system oscillates (with a slightly different frequency than the undamped case) with the amplitude gradually decreasing to zero.
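The three regimes are determined entirely by ζ = c/(2√(mk)). A small Python helper (an illustrative addition; the sample m, c, k values are made up) makes the classification concrete:

```python
import math

def damping_ratio(m, c, k):
    """zeta = c / (2*sqrt(m*k)) for the oscillator m x'' + c x' + k x = 0."""
    return c / (2 * math.sqrt(m * k))

def classify(m, c, k, tol=1e-9):
    """Label the damping regime; critical damping is tested with a tolerance,
    since exact equality of floats is fragile in practice."""
    z = damping_ratio(m, c, k)
    if abs(z - 1) < tol:
        return 'critically damped'
    return 'overdamped' if z > 1 else 'underdamped'

for c in (0.5, 2.0, 5.0):        # with m = k = 1, the critical value is c = 2
    print(c, damping_ratio(1.0, c, 1.0), classify(1.0, c, 1.0))
```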
The angular frequency of the underdamped harmonic oscillator is given by ${\displaystyle \omega _{1}=\omega _{0}{\sqrt {1-\zeta ^{2}}},}$ the exponential decay of the underdamped harmonic oscillator is given by ${\displaystyle \lambda =\omega _{0}\zeta .}$

The Q factor of a damped oscillator is defined as

${\displaystyle Q=2\pi \times {\frac {\text{Energy stored}}{\text{Energy lost per cycle}}}.}$

Q is related to the damping ratio by the equation ${\displaystyle Q={\frac {1}{2\zeta }}.}$

## Driven harmonic oscillators

Driven harmonic oscillators are damped oscillators further affected by an externally applied force F(t). Newton's second law takes the form

${\displaystyle F(t)-kx-c{\frac {\mathrm {d} x}{\mathrm {d} t}}=m{\frac {\mathrm {d} ^{2}x}{\mathrm {d} t^{2}}}.}$

It is usually rewritten into the form

${\displaystyle {\frac {\mathrm {d} ^{2}x}{\mathrm {d} t^{2}}}+2\zeta \omega _{0}{\frac {\mathrm {d} x}{\mathrm {d} t}}+\omega _{0}^{2}x={\frac {F(t)}{m}}.}$

This equation can be solved exactly for any driving force, using the solutions z(t) which satisfy the unforced equation:

${\displaystyle {\frac {\mathrm {d} ^{2}z}{\mathrm {d} t^{2}}}+2\zeta \omega _{0}{\frac {\mathrm {d} z}{\mathrm {d} t}}+\omega _{0}^{2}z=0,}$

and which can be expressed as damped sinusoidal oscillations,

${\displaystyle z(t)=A\mathrm {e} ^{-\zeta \omega _{0}t}\ \sin \left({\sqrt {1-\zeta ^{2}}}\ \omega _{0}t+\varphi \right),}$

in the case where ζ ≤ 1. The amplitude A and phase φ determine the behavior needed to match the initial conditions.
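That the damped sinusoid z(t) really solves the unforced equation can be checked numerically. The following sketch is an illustrative addition (with arbitrarily chosen ζ, ω0, A and φ); it evaluates the residual z'' + 2ζω0 z' + ω0² z with central finite differences:

```python
import math

zeta, w0 = 0.3, 2.0        # sample damping ratio and natural frequency
A, phi = 1.0, 0.7          # arbitrary amplitude and phase
w1 = w0 * math.sqrt(1 - zeta**2)

def z(t):
    """Damped sinusoid z(t) = A e^(-zeta*w0*t) sin(w1*t + phi)."""
    return A * math.exp(-zeta * w0 * t) * math.sin(w1 * t + phi)

def residual(t, h=1e-4):
    """z'' + 2*zeta*w0*z' + w0**2*z, via central finite differences."""
    z1 = (z(t + h) - z(t - h)) / (2 * h)
    z2 = (z(t + h) - 2 * z(t) + z(t - h)) / h**2
    return z2 + 2 * zeta * w0 * z1 + w0**2 * z(t)

worst = max(abs(residual(k / 10)) for k in range(1, 30))
print(worst)               # tiny: z(t) satisfies the unforced equation
```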
### Step input See also: Step response In the case ζ < 1 and a unit step input with x(0) = 0: ${\displaystyle {F(t) \over m}={\begin{cases}\omega _{0}^{2}&t\geq 0\\0&t<0\end{cases}}}$ the solution is: ${\displaystyle x(t)=1-\mathrm {e} ^{-\zeta \omega _{0}t}{\frac {\sin \left({\sqrt {1-\zeta ^{2}}}\ \omega _{0}t+\varphi \right)}{\sin(\varphi )}},}$ with phase φ given by ${\displaystyle \cos \varphi =\zeta .\,}$ The time an oscillator needs to adapt to changed external conditions is of the order τ = 1/(ζω0). In physics, the adaptation is called relaxation, and τ is called the relaxation time. In electrical engineering, a multiple of τ is called the settling time, i.e. the time necessary to ensure the signal is within a fixed departure from final value, typically within 10%. The term overshoot refers to the extent the maximum response exceeds final value, and undershoot refers to the extent the response falls below final value for times following the maximum response. ### Sinusoidal driving force Steady state variation of amplitude with relative frequency ${\displaystyle \omega /\omega _{0}}$ and damping ${\displaystyle \zeta }$ of a driven simple harmonic oscillator. In the case of a sinusoidal driving force: ${\displaystyle {\frac {\mathrm {d} ^{2}x}{\mathrm {d} t^{2}}}+2\zeta \omega _{0}{\frac {\mathrm {d} x}{\mathrm {d} t}}+\omega _{0}^{2}x={\frac {1}{m}}F_{0}\sin(\omega t),}$ where ${\displaystyle \,\!F_{0}}$ is the driving amplitude and ${\displaystyle \,\!\omega }$ is the driving frequency for a sinusoidal driving mechanism. This type of system appears in AC driven RLC circuits (resistor-inductor-capacitor) and driven spring systems having internal mechanical resistance or external air resistance. 
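The closed-form step response from the step-input case above can be evaluated numerically; this is an illustrative sketch (function name and parameter values are mine, not the article's):

```python
import math

def step_response(t, omega0=1.0, zeta=0.3):
    """Unit-step response of an underdamped oscillator, from the closed form:
    x(t) = 1 - exp(-zeta*omega0*t) * sin(sqrt(1-zeta^2)*omega0*t + phi) / sin(phi),
    with phase phi given by cos(phi) = zeta."""
    phi = math.acos(zeta)
    wd = omega0 * math.sqrt(1.0 - zeta**2)
    return 1.0 - math.exp(-zeta * omega0 * t) * math.sin(wd * t + phi) / math.sin(phi)
```

It starts at zero, overshoots above the final value (as the text notes can happen for ζ < 1), and relaxes to 1 on the time scale τ = 1/(ζω₀).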
The general solution is a sum of a transient solution that depends on initial conditions, and a steady state that is independent of initial conditions and depends only on the driving amplitude ${\displaystyle \,\!F_{0}}$, driving frequency ${\displaystyle \,\!\omega }$, undamped angular frequency ${\displaystyle \,\!\omega _{0}}$, and the damping ratio ${\displaystyle \,\!\zeta }$.

The steady-state solution is proportional to the driving force with an induced phase change of ${\displaystyle \,\!\phi }$:

${\displaystyle x(t)={\frac {F_{0}}{mZ_{m}\omega }}\sin(\omega t+\phi )}$

where

${\displaystyle Z_{m}={\sqrt {\left(2\omega _{0}\zeta \right)^{2}+{\frac {1}{\omega ^{2}}}\left(\omega _{0}^{2}-\omega ^{2}\right)^{2}}}}$

is the absolute value of the impedance or linear response function and

${\displaystyle \phi =\arctan \left({\frac {2\omega \omega _{0}\zeta }{\omega ^{2}-\omega _{0}^{2}}}\right)+n\pi }$

is the phase of the oscillation relative to the driving force. The phase value is usually taken to be between -180 degrees and 0 (that is, it represents a phase lag, for both positive and negative values of the arctan's argument).

For a particular driving frequency called the resonance, or resonant frequency ${\displaystyle \,\!\omega _{r}=\omega _{0}{\sqrt {1-2\zeta ^{2}}}}$, the amplitude (for a given ${\displaystyle \,\!F_{0}}$) is maximum. This resonance effect only occurs when ${\displaystyle \,\zeta <1/{\sqrt {2}}}$, i.e. for significantly underdamped systems. For strongly underdamped systems the value of the amplitude can become quite large near the resonance frequency.

The transient solutions are the same as the unforced (${\displaystyle \,\!F_{0}=0}$) damped harmonic oscillator and represent the system's response to other events that occurred previously. The transient solutions typically die out rapidly enough that they can be ignored.
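The steady-state amplitude and the resonance condition can be checked numerically (a sketch with arbitrary parameter values; names are mine):

```python
import math

def driven_amplitude(omega, omega0=1.0, zeta=0.2, F0=1.0, m=1.0):
    """Steady-state amplitude F0/(m*Zm*omega) of the sinusoidally driven oscillator,
    with Zm = sqrt((2*omega0*zeta)^2 + (omega0^2 - omega^2)^2 / omega^2)."""
    Zm = math.sqrt((2.0 * omega0 * zeta) ** 2
                   + (omega0**2 - omega**2) ** 2 / omega**2)
    return F0 / (m * Zm * omega)

def resonant_frequency(omega0=1.0, zeta=0.2):
    """omega_r = omega0*sqrt(1 - 2*zeta^2); only real for zeta < 1/sqrt(2)."""
    return omega0 * math.sqrt(1.0 - 2.0 * zeta**2)
```

Scanning ω over a grid confirms the amplitude peaks near ω_r = ω₀√(1 − 2ζ²), slightly below ω₀.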
## Parametric oscillators Main article: Parametric oscillator A parametric oscillator is a driven harmonic oscillator in which the drive energy is provided by varying the parameters of the oscillator, such as the damping or restoring force. A familiar example of parametric oscillation is "pumping" on a playground swing.[1][2][3] A person on a moving swing can increase the amplitude of the swing's oscillations without any external drive force (pushes) being applied, by changing the moment of inertia of the swing by rocking back and forth ("pumping") or alternately standing and squatting, in rhythm with the swing's oscillations. The varying of the parameters drives the system. Examples of parameters that may be varied are its resonance frequency ${\displaystyle \omega }$ and damping ${\displaystyle \beta }$. Parametric oscillators are used in many applications. The classical varactor parametric oscillator oscillates when the diode's capacitance is varied periodically. The circuit that varies the diode's capacitance is called the "pump" or "driver". In microwave electronics, waveguide/YAG based parametric oscillators operate in the same fashion. The designer varies a parameter periodically to induce oscillations. Parametric oscillators have been developed as low-noise amplifiers, especially in the radio and microwave frequency range. Thermal noise is minimal, since a reactance (not a resistance) is varied. Another common use is frequency conversion, e.g., conversion from audio to radio frequencies. For example, the Optical parametric oscillator converts an input laser wave into two output waves of lower frequency (${\displaystyle \omega _{s},\omega _{i}}$). Parametric resonance occurs in a mechanical system when a system is parametrically excited and oscillates at one of its resonant frequencies. Parametric excitation differs from forcing, since the action appears as a time varying modification on a system parameter. 
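The swing-pumping mechanism can be illustrated numerically with a Mathieu-type equation whose stiffness is modulated at twice the natural frequency. This is my own sketch with arbitrary parameters, not an implementation from the article: pumping makes the amplitude grow without any external drive force, while an unpumped oscillator's amplitude stays constant.

```python
import math

def simulate_parametric(omega0=1.0, eps=0.3, x0=0.01, t_end=50.0, dt=0.001):
    """Integrate x'' + omega0^2 * (1 + eps*cos(2*omega0*t)) * x = 0 with RK4.
    Modulating the stiffness at 2*omega0 excites the principal parametric
    resonance, so |x| grows exponentially; eps=0 gives plain SHM."""
    def acc(t, x):
        return -omega0**2 * (1.0 + eps * math.cos(2.0 * omega0 * t)) * x

    x, v, t = x0, 0.0, 0.0
    peak = abs(x)
    for _ in range(int(t_end / dt)):
        # classical RK4 step for the first-order system (x' = v, v' = acc)
        k1x, k1v = v, acc(t, x)
        k2x, k2v = v + 0.5 * dt * k1v, acc(t + 0.5 * dt, x + 0.5 * dt * k1x)
        k3x, k3v = v + 0.5 * dt * k2v, acc(t + 0.5 * dt, x + 0.5 * dt * k2x)
        k4x, k4v = v + dt * k3v, acc(t + dt, x + dt * k3x)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
        peak = max(peak, abs(x))
    return peak
```

With the default pumping strength the peak displacement ends up more than an order of magnitude above the initial amplitude, while with eps = 0 it never exceeds it.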
This effect is different from regular resonance because it exhibits the instability phenomenon.

## Universal oscillator equation

The equation

${\displaystyle {\frac {\mathrm {d} ^{2}q}{\mathrm {d} \tau ^{2}}}+2\zeta {\frac {\mathrm {d} q}{\mathrm {d} \tau }}+q=0}$

is known as the universal oscillator equation, since all second-order linear oscillatory systems can be reduced to this form. This is done through nondimensionalization. If the forcing function is f(t) = cos(ω′t) = cos(ω′t_cτ) = cos(ωτ), where ω′ is the dimensional driving frequency, t_c is the characteristic time used in the nondimensionalization (so that τ = t/t_c), and ω = ω′t_c is the dimensionless driving frequency, the equation becomes

${\displaystyle {\frac {\mathrm {d} ^{2}q}{\mathrm {d} \tau ^{2}}}+2\zeta {\frac {\mathrm {d} q}{\mathrm {d} \tau }}+q=\cos(\omega \tau ).}$

The solution to this differential equation contains two parts: the "transient" and the "steady state".

### Transient solution

The solution obtained by solving the ordinary differential equation is, for arbitrary constants c1 and c2,

${\displaystyle q_{t}(\tau )={\begin{cases}\mathrm {e} ^{-\zeta \tau }\left(c_{1}\mathrm {e} ^{\tau {\sqrt {\zeta ^{2}-1}}}+c_{2}\mathrm {e} ^{-\tau {\sqrt {\zeta ^{2}-1}}}\right)&\zeta >1{\text{ (overdamping)}}\\\mathrm {e} ^{-\zeta \tau }(c_{1}+c_{2}\tau )=\mathrm {e} ^{-\tau }(c_{1}+c_{2}\tau )&\zeta =1{\text{ (critical damping)}}\\\mathrm {e} ^{-\zeta \tau }\left[c_{1}\cos \left({\sqrt {1-\zeta ^{2}}}\tau \right)+c_{2}\sin \left({\sqrt {1-\zeta ^{2}}}\tau \right)\right]&\zeta <1{\text{ (underdamping)}}\end{cases}}}$

The transient solution is independent of the forcing function.
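The underdamped transient solution above can be verified against the homogeneous universal oscillator equation by a finite-difference residual check (an illustrative sketch with an arbitrary ζ; names are mine):

```python
import math

ZETA = 0.5  # an arbitrary underdamped damping ratio

def q_transient(tau, c1=1.0, c2=0.5):
    """Underdamped transient solution of q'' + 2*zeta*q' + q = 0 (zeta < 1):
    exp(-zeta*tau) * (c1*cos(sqrt(1-zeta^2)*tau) + c2*sin(sqrt(1-zeta^2)*tau))."""
    wd = math.sqrt(1.0 - ZETA**2)
    return math.exp(-ZETA * tau) * (c1 * math.cos(wd * tau) + c2 * math.sin(wd * tau))

def residual(tau, h=1e-4):
    """Central-difference estimate of q'' + 2*zeta*q' + q; should vanish up to O(h^2)."""
    d1 = (q_transient(tau + h) - q_transient(tau - h)) / (2.0 * h)
    d2 = (q_transient(tau + h) - 2.0 * q_transient(tau) + q_transient(tau - h)) / h**2
    return d2 + 2.0 * ZETA * d1 + q_transient(tau)
```

The residual stays at the level of the finite-difference error for any τ and any choice of c1, c2, confirming the closed form.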
### Steady-state solution Apply the "complex variables method" by solving the auxiliary equation below and then finding the real part of its solution: ${\displaystyle {\frac {\mathrm {d} ^{2}q}{\mathrm {d} \tau ^{2}}}+2\zeta {\frac {\mathrm {d} q}{\mathrm {d} \tau }}+q=\cos(\omega \tau )+\mathrm {i} \sin(\omega \tau )=\mathrm {e} ^{\mathrm {i} \omega \tau }.}$ Supposing the solution is of the form ${\displaystyle \,\!q_{s}(\tau )=A\mathrm {e} ^{\mathrm {i} (\omega \tau +\phi )}.}$ Its derivatives from zero to 2nd order are ${\displaystyle q_{s}=A\mathrm {e} ^{\mathrm {i} (\omega \tau +\phi )},\ {\frac {\mathrm {d} q_{s}}{\mathrm {d} \tau }}=\mathrm {i} \omega A\mathrm {e} ^{\mathrm {i} (\omega \tau +\phi )},\ {\frac {\mathrm {d} ^{2}q_{s}}{\mathrm {d} \tau ^{2}}}=-\omega ^{2}A\mathrm {e} ^{\mathrm {i} (\omega \tau +\phi )}.}$ Substituting these quantities into the differential equation gives ${\displaystyle \,\!-\omega ^{2}A\mathrm {e} ^{\mathrm {i} (\omega \tau +\phi )}+2\zeta \mathrm {i} \omega A\mathrm {e} ^{\mathrm {i} (\omega \tau +\phi )}+A\mathrm {e} ^{\mathrm {i} (\omega \tau +\phi )}=(-\omega ^{2}A\,+\,2\zeta \mathrm {i} \omega A\,+\,A)\mathrm {e} ^{\mathrm {i} (\omega \tau +\phi )}=\mathrm {e} ^{\mathrm {i} \omega \tau }.}$ Dividing by the exponential term on the left results in ${\displaystyle \,\!-\omega ^{2}A+2\zeta \mathrm {i} \omega A+A=\mathrm {e} ^{-\mathrm {i} \phi }=\cos \phi -\mathrm {i} \sin \phi .}$ Equating the real and imaginary parts results in two independent equations ${\displaystyle A(1-\omega ^{2})=\cos \phi \qquad 2\zeta \omega A=-\sin \phi .}$ #### Amplitude part Bode plot of the frequency response of an ideal harmonic oscillator. 
Squaring both equations and adding them together gives

${\displaystyle \left.{\begin{array}{rcl}A^{2}(1-\omega ^{2})^{2}&=&\cos ^{2}\phi \\[6pt](2\zeta \omega A)^{2}&=&\sin ^{2}\phi \end{array}}\right\}\Rightarrow A^{2}[(1-\omega ^{2})^{2}+(2\zeta \omega )^{2}]=1.}$

Therefore,

${\displaystyle A=A(\zeta ,\omega )={\text{sign}}\left({\frac {-\sin \phi }{2\zeta \omega }}\right){\frac {1}{\sqrt {(1-\omega ^{2})^{2}+(2\zeta \omega )^{2}}}}.}$

Compare this result with the theory section on resonance, as well as the "magnitude part" of the RLC circuit. This amplitude function is particularly important in the analysis and understanding of the frequency response of second-order systems.

#### Phase part

To solve for φ, divide both equations to get

${\displaystyle \tan \phi =-{\frac {2\zeta \omega }{1-\omega ^{2}}}={\frac {2\zeta \omega }{\omega ^{2}-1}}\Rightarrow \phi \equiv \phi (\zeta ,\omega )=\arctan \left({\frac {2\zeta \omega }{\omega ^{2}-1}}\right)+n\pi .}$

This phase function is particularly important in the analysis and understanding of the frequency response of second-order systems.

### Full solution

Combining the amplitude and phase portions results in the steady-state solution

${\displaystyle \,\!q_{s}(\tau )=A(\zeta ,\omega )\cos(\omega \tau +\phi (\zeta ,\omega ))=A\cos(\omega \tau +\phi ).}$

The solution of the original universal oscillator equation is a superposition (sum) of the transient and steady-state solutions

${\displaystyle \,\!q(\tau )=q_{t}(\tau )+q_{s}(\tau ).}$

For a more complete description of how to solve the above equation, see linear ODEs with constant coefficients.

## Equivalent systems

Main article: System equivalence

Harmonic oscillators occurring in a number of areas of engineering are equivalent in the sense that their mathematical models are identical (see universal oscillator equation above). Below is a table showing analogous quantities in four harmonic oscillator systems in mechanics and electronics.
If analogous parameters on the same line in the table are given numerically equal values, the behavior of the oscillators—their output waveform, resonant frequency, damping factor, etc.—is the same.

| Translational mechanical | Rotational mechanical | Series RLC circuit | Parallel RLC circuit |
|---|---|---|---|
| Position $x$ | Angle $\theta$ | Charge $q$ | Flux linkage $\phi$ |
| Velocity $\mathrm{d}x/\mathrm{d}t$ | Angular velocity $\mathrm{d}\theta/\mathrm{d}t$ | Current $\mathrm{d}q/\mathrm{d}t$ | Voltage $\mathrm{d}\phi/\mathrm{d}t$ |
| Mass $M$ | Moment of inertia $I$ | Inductance $L$ | Capacitance $C$ |
| Spring constant $K$ | Torsion constant $\mu$ | Elastance $1/C$ | Electrical reluctance $1/L$ |
| Damping $\gamma$ | Rotational friction $\Gamma$ | Resistance $R$ | Conductance $G=1/R$ |
| Drive force $F(t)$ | Drive torque $\tau(t)$ | Voltage $e$ | Current $i$ |
| Undamped resonant frequency $f_n$: $\frac{1}{2\pi}\sqrt{\frac{K}{M}}$ | $\frac{1}{2\pi}\sqrt{\frac{\mu}{I}}$ | $\frac{1}{2\pi}\sqrt{\frac{1}{LC}}$ | $\frac{1}{2\pi}\sqrt{\frac{1}{LC}}$ |
| Differential equation: $M\ddot{x}+\gamma\dot{x}+Kx=F$ | $I\ddot{\theta}+\Gamma\dot{\theta}+\mu\theta=\tau$ | $L\ddot{q}+R\dot{q}+q/C=e$ | $C\ddot{\phi}+G\dot{\phi}+\phi/L=i$ |

## Application to a conservative force

The problem of the simple harmonic oscillator occurs frequently in physics, because
a mass at equilibrium under the influence of any conservative force, in the limit of small motions, behaves as a simple harmonic oscillator. A conservative force is one that has a potential energy function. The potential energy function of a harmonic oscillator is: ${\displaystyle V(x)={\frac {1}{2}}kx^{2}}$ Given an arbitrary potential energy function ${\displaystyle V(x)}$, one can do a Taylor expansion in terms of ${\displaystyle x}$ around an energy minimum (${\displaystyle x=x_{0}}$) to model the behavior of small perturbations from equilibrium. ${\displaystyle V(x)=V(x_{0})+(x-x_{0})V'(x_{0})+{\frac {1}{2}}(x-x_{0})^{2}V^{(2)}(x_{0})+O(x-x_{0})^{3}}$ Because ${\displaystyle V(x_{0})}$ is a minimum, the first derivative evaluated at ${\displaystyle x_{0}}$ must be zero, so the linear term drops out: ${\displaystyle V(x)=V(x_{0})+{\frac {1}{2}}(x-x_{0})^{2}V^{(2)}(x_{0})+O(x-x_{0})^{3}}$ The constant term V(x0) is arbitrary and thus may be dropped, and a coordinate transformation allows the form of the simple harmonic oscillator to be retrieved: ${\displaystyle V(x)\approx {\frac {1}{2}}x^{2}V^{(2)}(0)={\frac {1}{2}}kx^{2}}$ Thus, given an arbitrary potential energy function ${\displaystyle V(x)}$ with a non-vanishing second derivative, one can use the solution to the simple harmonic oscillator to provide an approximate solution for small perturbations around the equilibrium point. ## Examples ### Simple pendulum A simple pendulum exhibits approximately simple harmonic motion under the conditions of no damping and small amplitude. 
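The Taylor-expansion argument above applies directly to the pendulum: the effective spring constant is the curvature of the potential at its minimum, which numerically recovers the small-angle frequency and period. This sketch is mine (the per-unit-$m\ell^2$ potential and parameter values are my assumptions):

```python
import math

G, L = 9.81, 1.0   # gravitational acceleration, pendulum length (arbitrary values)

def pendulum_potential(theta):
    """Pendulum potential per unit m*l^2: V(theta) = (g/l)*(1 - cos(theta)),
    which has its minimum at theta = 0."""
    return (G / L) * (1.0 - math.cos(theta))

def curvature_at_minimum(V, x0=0.0, h=1e-4):
    """Effective spring constant k = V''(x0), estimated by central differences."""
    return (V(x0 + h) - 2.0 * V(x0) + V(x0 - h)) / h**2

# Small oscillations about the minimum have omega0 = sqrt(k) and period 2*pi/omega0,
# which for the pendulum recovers T0 = 2*pi*sqrt(l/g).
```

Any other potential with a non-vanishing second derivative at its minimum can be dropped into `curvature_at_minimum` the same way.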
Assuming no damping, the differential equation governing a simple pendulum is ${\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+{\frac {g}{\ell }}\sin \theta =0.}$ If the maximum displacement of the pendulum is small, we can use the approximation ${\displaystyle \sin \theta \approx \theta }$ and instead consider the equation ${\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+{\frac {g}{\ell }}\theta =0.}$ The solution to this equation is given by: ${\displaystyle \theta (t)=\theta _{0}\cos \left({\sqrt {g \over \ell }}t\right)}$ where ${\displaystyle \theta _{0}}$ is the largest angle attained by the pendulum. The period, the time for one complete oscillation, is given by the expression ${\displaystyle T_{0}=2\pi {\sqrt {\frac {\ell }{g}}},}$ which is a good approximation of the actual period when ${\displaystyle \theta _{0}}$ is small. ### Pendulum swinging over turntable Simple harmonic motion can in some cases be considered to be the one-dimensional projection of two-dimensional circular motion. Consider a long pendulum swinging over the turntable of a record player. On the edge of the turntable there is an object. If the object is viewed from the same level as the turntable, a projection of the motion of the object seems to be moving backwards and forwards on a straight line orthogonal to the view direction, sinusoidally like the pendulum. ### Spring/mass system Spring–mass system in equilibrium (A), compressed (B) and stretched (C) states. When a spring is stretched or compressed by a mass, the spring develops a restoring force. Hooke's law gives the relationship of the force exerted by the spring when the spring is compressed or stretched a certain length: ${\displaystyle F\left(t\right)=-kx\left(t\right)}$ where F is the force, k is the spring constant, and x is the displacement of the mass with respect to the equilibrium position. 
The minus sign in the equation indicates that the force exerted by the spring always acts in a direction that is opposite to the displacement (i.e. the force always acts towards the zero position), and so prevents the mass from flying off to infinity. By using either force balance or an energy method, it can be readily shown that the motion of this system is given by the following differential equation: ${\displaystyle F(t)=-kx(t)=m{\frac {\mathrm {d} ^{2}}{\mathrm {d} {t}^{2}}}x\left(t\right)=ma.}$ ...the latter being Newton's second law of motion. If the initial displacement is A, and there is no initial velocity, the solution of this equation is given by: ${\displaystyle x\left(t\right)=A\cos \left({\sqrt {k \over m}}t\right).}$ Given an ideal massless spring, ${\displaystyle m}$ is the mass on the end of the spring. If the spring itself has mass, its effective mass must be included in ${\displaystyle m}$. #### Energy variation in the spring–damping system In terms of energy, all systems have two types of energy, potential energy and kinetic energy. When a spring is stretched or compressed, it stores elastic potential energy, which then is transferred into kinetic energy. The potential energy within a spring is determined by the equation ${\displaystyle U=k{x}^{2}/2.}$ When the spring is stretched or compressed, kinetic energy of the mass gets converted into potential energy of the spring. By conservation of energy, assuming the datum is defined at the equilibrium position, when the spring reaches its maximum potential energy, the kinetic energy of the mass is zero. When the spring is released, it tries to return to equilibrium, and all its potential energy converts to kinetic energy of the mass. 
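The energy exchange described above can be checked on the closed-form solution: along x(t) = A·cos(√(k/m)·t), the sum U + K is constant and equal to kA²/2. A minimal sketch (parameter values are arbitrary):

```python
import math

def spring_mass_energy(t, A=0.05, k=40.0, m=0.5):
    """Total energy along x(t) = A*cos(sqrt(k/m)*t): U = k*x^2/2 plus K = m*v^2/2.
    By conservation of energy this equals k*A^2/2 at every instant."""
    w = math.sqrt(k / m)
    x = A * math.cos(w * t)
    v = -A * w * math.sin(w * t)
    return 0.5 * k * x**2 + 0.5 * m * v**2
```

At t = 0 the energy is all potential; a quarter period later it is all kinetic, but the total never changes.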
## Definition of terms

| Symbol | Definition | Dimensions | SI units |
|---|---|---|---|
| $a$ | Acceleration of mass | $\mathbf{LT^{-2}}$ | meter/second² |
| $A$ | Peak amplitude of oscillation | $\mathbf{L}$ | meter |
| $c$ | Viscous damping coefficient | $\mathbf{MT^{-1}}$ | newton second/meter |
| $F$ | Drive force | $\mathbf{MLT^{-2}}$ | newton |
| $g$ | Acceleration of gravity at the Earth's surface | $\mathbf{LT^{-2}}$ | meter/second² |
| $i$ | Imaginary unit, $\sqrt{-1}$ | - | - |
| $k$ | Spring constant | $\mathbf{MT^{-2}}$ | newton/meter |
| $m, M$ | Mass | $\mathbf{M}$ | kilogram |
| $Q$ | Quality factor | - | - |
| $T$ | Period of oscillation | $\mathbf{T}$ | second |
| $t$ | Time | $\mathbf{T}$ | second |
| $U$ | Potential energy stored in oscillator | $\mathbf{ML^2T^{-2}}$ | joule |
| $x$ | Position of mass | $\mathbf{L}$ | meter |
| $\zeta$ | Damping ratio | - | - |
| $\phi$ | Phase shift | - | radian |
| $\omega$ | Angular frequency | $\mathbf{T^{-1}}$ | radian/second |
| $\omega_0$ | Natural resonant frequency | $\mathbf{T^{-1}}$ | radian/second |

## Notes

1. Case, William. "Two ways of driving a child's swing". Retrieved 27 November 2011.
2. Case, W. B. (1996). "The pumping of a swing from the standing position". American Journal of Physics. 64 (3): 215–220. Bibcode:1996AmJPh..64..215C. doi:10.1119/1.18209.
3. Roura, P.; Gonzalez, J. A. (2010). "Towards a more realistic description of swing pumping due to the exchange of angular momentum". European Journal of Physics. 31 (5): 1195–1207. Bibcode:2010EJPh...31.1195R. doi:10.1088/0143-0807/31/5/020.
## Theory of Combinatorial Algorithms

Prof. Emo Welzl and Prof. Bernd Gärtner

# Mittagsseminar (in cooperation with M. Ghaffari, A. Steger and B. Sudakov)

Mittagsseminar Talk Information

Date and Time: Tuesday, December 11, 2012, 12:15 pm

Duration: 30 minutes

Location: CAB G51

Speaker: Anna Gundert

## The Containment Problem for Random 2-Complexes

In this talk we consider 2-complexes as complete graphs where some triangles have been added. Given a fixed 2-complex, consider the probability that a random 2-complex contains a copy of it. Just as for random graphs, the threshold for this is determined by the density of the densest subcomplex of the given 2-complex. What if we only want to find a subdivision of our complex in the random structure? Costa, Farber and Kappeler have shown that for any ε > 0 the random complex X²(n,p) with p = n^(−1/2 + ε) contains a subdivision of ANY fixed 2-complex. I will present work in progress on how this can be improved to p = c·n^(−1/2), at least for fixed complexes with a constant number of vertices.

Joint work with Uli Wagner
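A minimal sketch (my own, not from the talk) of the random 2-complex model X²(n,p) described above: take the complete graph on n vertices and include each of the C(n,3) possible triangles independently with probability p.

```python
import itertools
import random

def random_2_complex(n, p, seed=0):
    """Sample X^2(n, p): the complete graph on n vertices together with a random
    set of triangles, each of the C(n, 3) candidates kept with probability p."""
    rng = random.Random(seed)
    return {t for t in itertools.combinations(range(n), 3) if rng.random() < p}
```

With p = 1 every triangle is present; with p = 0 only the 1-skeleton (the complete graph) remains.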
# las3rjock

reputation 816 · location Ann Arbor, MI · age 35 · member for 4 years · seen May 20 at 15:01 · profile views 61

- Mar23, comment on "Self-publishing an academic journal": I asked a similar question in January, and the comments and answers there may be helpful: tex.stackexchange.com/q/8730/569
- Mar9, comment on "Problem with generating Reference list (no bbl file generated?)": I'm glad you found and fixed your problem. JabRef is a great program that can do many things, but generating .bbl files from .bib files is not one of them. :-)
- Mar9, comment on "Problem with generating Reference list (no bbl file generated?)": I think TH has probably diagnosed your problem. In your first screenshot, it seems that you have told TeXMaker to run JabRef instead of bibtex at the stage when TeXMaker would ordinarily run bibtex to generate a .bbl file from your .bib file. Replacing "C:/Program Files (x86)/JabRef/JabRef.exe" with bibtex should fix your problem.
- Feb16, comment on "Can "style justification" be part of the colophon?": Why don't you structure your thesis so that you can compile it in your institution's provided format and in classicthesis format?
- Feb15, comment on "How do I install TikZ on a MacBook?": @Joseph: It's just point 2 that's inaccurate, yes? Specifically, the lack of administrative privileges was responsible for the error messages that ann reported, but the underlying problem was trying to use a TikZ library that doesn't actually exist.
- Feb15, comment on "How do I install TikZ on a MacBook?": I think the problem here is that you should be using \usetikzlibrary{mindmap} instead of \usetikzlibrary{mindmaps}.
- Feb15, comment on "How do I install TikZ on a MacBook?": For what it's worth, the current version of PGF/TikZ on my MacTeX 2010 install (updated through the TeX Live Utility) is 2.10.
- Feb15, comment on "How do I install TikZ on a MacBook?": As for your specific problem, you do not ordinarily have write permissions in /usr in Mac OS X, so you need to use sudo texhash instead of texhash (which is essentially what Caramdir is suggesting that you do).
- Feb15, comment on "How do I install TikZ on a MacBook?": When you say that you have TeXShop, do you mean that you have MacTeX: tug.org/mactex ? If so, then you should be able to run the "TeX Live Utility" (found in /Applications/TeX) to install PGF (and TikZ) if they're not already installed.
- Jan31, comment on "Headings and advice to write a PhD in physics using LaTeX": @Mermoz: Git works fine with LaTeX. You may want to take a look at the vc package ( ctan.org/tex-archive/help/Catalogue/entries/vc.html ), which allows you to include version control information (from Git, Bazaar, or Subversion) in LaTeX documents.
- Jan27, comment on "What's a good package for typesetting quantum circuits?": OP's primary objection to XyPic was speed, which matters if you're using it every time you build your document (which is the standard workflow for Q-circuit). The workflow for qasm2circ is to build the quantum circuit diagrams beforehand and then use \includegraphics, which offsets the slowness of XyPic (and is a better workflow for preparing documents for publication). And regardless of the OP's objections to XyPic, the fact that qasm2circ was used to produce the quantum circuit diagrams for the book on quantum computation makes it an important alternative to be aware of.
- Jan26, comment on "Should one use thousands separators in equations?": Are you using the \num macro from the siunitx package?
- Aug31, comment on "How can I format sections/subsections/etc. like nested lists?": The particular document in question is a compiled code of legislation, analogous to the United States Code ( gpoaccess.gov/uscode ), so it's much more semantic to mark it up with \section{heading}, \subsection{heading}, etc., than to use \item \textbf{heading}
- Jun4, comment on "How do I cite author in LaTeX?": I've updated my example and included an image of the output. I hope that will help you track down your problem.
- Jun3, comment on "How do I cite author in LaTeX?": Whoops, there is no alphanat.bst--I've fixed the Wikibooks article (I wrote the table of natbib-compatible styles ;-) ). I believe plainnat.bst is the style you're looking for--the plain and alpha citation format appears to be the same, it's just that in the bibliography, plain uses numbers for the cross-references, whereas alpha uses abbreviated last names and two-digit years. natbib defaults to author-year cross-references, but it can also do numerical cross-references if you use "\usepackage[numbers]{natbib}" instead of "\usepackage{natbib}"
## The truncated moment problem on N

- Date: May 26, 2015
- Hour: 3 pm
- Room: GSSI Main Lecture Hall
- Speaker: Tobias Kuna (University of Reading)

The realizability problem is a long open problem in the theory of liquids and quantum chemistry. Conditional on its solution, several results have been derived in the theory of heterogeneous materials. The realizability problem is to realise given functions as the correlation functions of a point process. The zero-dimensional version is the truncated moment problem on N: given numbers $m_1, \dots, m_n$, find a probability measure $\mu$ on N such that $\sum_{x \in N} x^k \mu(\{x\}) = m_k$. This seemingly easy problem was completely unsolved for $n \geq 4$. The results are based on the truncated moment problem on the non-negative axis and the classification of the convex cone of non-negative polynomials. This is joint work with M. Infusino, J. Lebowitz and E. Speer.
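As an illustrative sketch (mine, not the speaker's): the forward direction of the problem is easy to compute, so any moment sequence generated from a measure on N is realizable by construction, and one classical necessary condition for realizability is that the variance m₂ − m₁² be non-negative.

```python
from fractions import Fraction

def truncated_moments(mu, n):
    """First n power moments m_k = sum_{x in N} x^k * mu({x}) of a finitely
    supported probability measure mu on the natural numbers (dict x -> mass)."""
    return [sum(Fraction(x) ** k * w for x, w in mu.items()) for k in range(1, n + 1)]

# An example measure: uniform on the support {0, 1, 3}.
mu = {0: Fraction(1, 3), 1: Fraction(1, 3), 3: Fraction(1, 3)}
```

The truncated moment problem asks the inverse question: which sequences (m₁, ..., mₙ) arise this way from some measure on N.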
Next: State Distance Assumptions Up: Planning Graph Heuristics for Previous: # Belief State Distance In both the CAltAlt and planners we need to guide search node expansion with heuristics that estimate the plan distance between two belief states and . By convention, we assume precedes (i.e., in progression is a search node and is the goal belief state, or in regression is the initial belief state and is a search node). For simplicity, we limit our discussion to progression planning. Since a strong plan (executed in ) ensures that every state will transition to some state , we define the plan distance between and as the number of actions needed to transition every state to a state . Naturally, in a strong plan, the actions used to transition a state may affect how we transition another state . There is usually some degree of positive or negative interaction between and that can be ignored or captured in estimating plan distance.4 In the following we explore how to perform such estimates by using several intuitions from classical planning state distance heuristics. We start with an example search scenario in Figure 3. There are three belief states (containing states and ), (containing state ), and (containing states and ). The goal belief state is , and the two progression search nodes are and . We want to expand the search node with the smallest distance to by estimating - denoted by the bold, dashed line - and - denoted by the bold, solid line. We will assume for now that we have estimates of state distance measures - denoted by the light dashed and solid lines with numbers. The state distances can be represented as numbers or action sequences. In our example, we will use the following action sequences for illustration: , , . In each sequence there may be several actions in each step. For instance, has and in its first step, and there are a total of eight actions in the sequence - meaning the distance is eight. 
Notice that our example includes several state distance estimates, which can be found with classical planning techniques. There are many ways that we can use similar ideas to estimate belief state distance once we have addressed the issue of belief states containing several states. Selecting States for Distance Estimation: There exists a considerable body of literature on estimating the plan distance between states in classical planning [5,23,18], and we would like to apply it to estimate the plan distance between two belief states, say and . We identify four possible options for using state distance estimates to compute the distance between belief states and : • Sample a State Pair: We can sample a single state from and a single state from , whose plan distance is used for the belief state distance. For example, we might sample from and from , then define . • Aggregate States: We can form aggregate states for and and measure their plan distance. An aggregate state is the union of the literals needed to express a belief state formula, which we define as: Since it is possible to express a belief state formula with every literal (e.g., using to express the belief state where is true), we assume a reasonably succinct representation, such as a ROBDD [8]. It is quite possible the aggregate states are inconsistent, but many classical planning techniques (such as planning graphs) do not require consistent states. For example, with aggregate states we would compute the belief state distance . • Choose a Subset of States: We can choose a set of states (e.g., by random sampling) from and a set of states from , and then compute state distances for all pairs of states from the sets. Upon computing all state distances, we can aggregate the state distances (as we will describe shortly). For example, we might sample both and from and from , compute and , and then aggregate the state distances to define . 
• Use All States: We can use all states in and , and, similar to sampling a subset of states (above), we can compute all distances for state pairs and aggregate the distances. The former two options for computing belief state distance are reasonably straightforward, given the existing work in classical planning. In the latter two options we compute multiple state distances. With multiple state distances there are two details which require consideration in order to obtain a belief state distance measure. In the following we treat belief states as if they contain all states because they can be appropriately replaced with the subset of chosen states. The first issue is that some of the state distances may not be needed. Since each state in needs to reach a state in , we should consider the distance for each state in to “a” state in . However, we don't necessarily need the distance for every state in to “every” state in . We will explore assumptions about which state distances need to be computed in Section 3.1. The second issue, which arises after computing the state distances, is that we need to aggregate the state distances into a belief state distance. We notice that the popular state distance estimates used in classical planning typically measure aggregate costs of state features (literals). Since we are planning in belief space, we wish to estimate belief state distance with the aggregate cost of belief state features (states). In Section 3.2, we will examine several choices for aggregating state distances and discuss how each captures different types of state interaction. In Section 3.3, we conclude with a summary of the choices we make in order to compute belief state distances.

2006-05-26
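The aggregation choices above can be sketched in a few lines of code. This is a hypothetical illustration, not code from the planners discussed; the state names and pairwise distances are invented, loosely echoing the running example:

```python
def belief_state_distance(bs1, bs2, state_dist, aggregate=max):
    """Estimate the plan distance from belief state bs1 to bs2.

    Each state s in bs1 must reach *some* state in bs2, so we take the
    cheapest destination for each s, then combine the per-state costs
    with an aggregation function (max ~ worst case, sum ~ independent
    costs; both ignore some positive/negative interaction)."""
    per_state = [min(state_dist[(s, sp)] for sp in bs2) for s in bs1]
    return aggregate(per_state)

# Hypothetical pairwise state distances between source and goal states.
d = {("s1", "g1"): 8, ("s1", "g2"): 10,
     ("s2", "g1"): 5, ("s2", "g2"): 7}

belief_state_distance({"s1", "s2"}, {"g1", "g2"}, d, aggregate=max)  # 8
belief_state_distance({"s1", "s2"}, {"g1", "g2"}, d, aggregate=sum)  # 13
```

Swapping the `aggregate` argument switches between the interaction assumptions discussed in Section 3.2.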
https://www.arxiv-vanity.com/papers/0811.0828/
# Accurate universal models for the mass accretion histories and concentrations of dark matter halos

D.H. Zhao, Y.P. Jing, H.J. Mo, G. Börner

###### Abstract

A large number of observations have constrained the cosmological parameters and the initial density fluctuation spectrum to very high accuracy. However, the cosmological parameters change with time and the power index of the power spectrum varies dramatically with mass scale in the so-called concordance ΛCDM cosmology. Thus, any successful model for its structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and ΛCDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and parameters of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the age of the universe when its progenitor on the mass accretion history first reaches 4% of its current mass. Based on these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations.
These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the ΛCDM case the model predictions match the simulation results very well even though halo mass is traced down to about 0.0005 times the final mass, by which point the cosmological parameters and the power index of the initial density fluctuation spectrum have changed dramatically. Our model predictions also match the PINOCCHIO mass accretion histories very well, which are largely independent of our numerical simulations and of our definitions of halo merger trees. These models are also simple and easy to implement, making them very useful in modeling the growth and structure of dark matter halos. We provide appendices describing the step-by-step implementation of our models. A calculator which allows one to interactively generate data for any given cosmological model is provided at http://www.shao.ac.cn/dhzhao/mandc.html, together with a user-friendly code to make the relevant calculations and some tables listing the expected concentration as a function of halo mass and redshift in several popular cosmological models. We explain why ΛCDM and open CDM halos on nearly all mass scales show two distinct phases in their mass growth histories. We discuss implications of the universal relations we find in connection to the formation of dark matter halos in the cosmic density field.

cosmology: miscellaneous — galaxies: clusters: general — methods: numerical

1. Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, CAS, 80 Nandan Road, Shanghai 200030, China; e-mail:
2. Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Strasse 1, 85748 Garching, Germany
3. Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA

## 1.
Introduction

In the current cold dark matter (CDM) paradigm of structure formation, a key concept in the buildup of structure in the universe is the formation of dark matter halos. These halos are quasi-equilibrium systems of CDM particles formed through nonlinear gravitational collapse in the cosmic density field. Since galaxies and other luminous objects are believed to form by cooling and condensation of the baryonic gas in potential wells of dark matter halos, the understanding of the formation and properties of CDM halos is an important part of galaxy formation. One of the most important properties of the halo population is their density profiles. Based on N-body simulations, Navarro, Frenk, & White (1997, hereafter NFW) found that CDM halos can in general be approximated by a two-parameter profile,

ρ(r) = 4ρ_s / [(r/R_s)(1 + r/R_s)^2],   (1)

where R_s is a characteristic “inner” radius at which the logarithmic density slope is −2, and ρ_s is the density at R_s. A halo is often defined so that the mean density within the halo radius R_h is a factor Δ_h times the mean density of the universe (ρ̄) at the redshift (z) in consideration. The halo mass can then be written as

M_h ≡ (4π/3) Δ_h ρ̄ R_h^3.   (2)

One can define the circular velocity of a halo as V_h ≡ (GM_h/R_h)^{1/2}, so that

M_h = V_h^2 R_h / G = 2^{1/2} V_h^3 / (G [Δ_h Ω_m(z)]^{1/2} H(z)),   (3)

where H(z) is the Hubble parameter and Ω_m(z) is the cosmic mass density parameter at redshift z. The shape of an NFW profile is usually characterized by a concentration parameter c, defined as c ≡ R_h/R_s. It is then easy to show that

ρ_s = ρ_h c^3 / (12 [ln(1+c) − c/(1+c)]),   (4)

where ρ_h ≡ Δ_h ρ̄. We denote the mass within R_s by M_s, and the circular velocity at R_s by V_s. These quantities are related to M_h and V_h by

M_s = [(ln 2 − 1/2) / (ln(1+c) − c/(1+c))] M_h,   V_s^2 = V_h^2 c M_s/M_h.   (5)

In the literature, a number of definitions have been used for Δ_h (hence R_h). Some authors opt to use (e.g., Jenkins et al. 2001) or (e.g., NFW), while others (e.g., Bullock et al. 2001; Jing & Suto 2002) choose Δ_h according to the spherical virialization criterion (Kitayama & Suto 1996; Bryan & Norman 1998).
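Equations (4) and (5) collect into a few lines of code. The following is an illustrative sketch (the function name is ours, not from the paper), returning the density, mass and velocity ratios that follow from a given concentration:

```python
import math

def nfw_relations(c):
    """Derived NFW quantities for a halo of concentration c = R_h / R_s.

    With mu(c) = ln(1 + c) - c/(1 + c), returns the ratios
    (rho_s / rho_h, M_s / M_h, V_s^2 / V_h^2) following Eqs. (4)-(5)."""
    mu = math.log(1.0 + c) - c / (1.0 + c)
    rho_s_over_rho_h = c**3 / (12.0 * mu)       # Eq. (4)
    Ms_over_Mh = (math.log(2.0) - 0.5) / mu     # Eq. (5), mass within R_s
    Vs2_over_Vh2 = c * Ms_over_Mh               # Eq. (5), circular velocities
    return rho_s_over_rho_h, Ms_over_Mh, Vs2_over_Vh2

# e.g. a halo with c = 10: about 13% of the halo mass lies within R_s.
rho_ratio, m_ratio, v2_ratio = nfw_relations(10.0)
```

Note that a less concentrated halo has a larger fraction of its mass within R_s, since R_s then sits at a larger fraction of R_h.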
These different definitions can lead to sizable differences in for a given halo, and the differences are cosmology-dependent. In our discussion in the main text we use , and adopt the fitting formulae of obtained by Bryan & Norman (1998). The value of ranges from at high redshift to at present for the current ‘concordance’ ΛCDM cosmology. In appendices A and B, we also provide results based on the other two definitions of . To avoid confusion, we denote halo mass, radius, concentration and circular velocity by , , , and for the last definition (), by , , , and for the second definition (), and by , , , and for the first definition (). The structure of a halo is expected to depend not only on cosmology and fluctuation power spectrum, but also on its formation history. Attempts have therefore been made to relate the halo concentration to quantities that characterize the formation of the halo. In their original paper, NFW suggested that the characteristic density of a halo, , is a constant () times the mean density of the universe, , at the redshift (referred to as the formation time of the halo by NFW) where half of the halo’s mass was first assembled into progenitors more massive than times the halo mass. NFW used the extended Press–Schechter formula to calculate and found that the anticorrelation between and observed in their simulation can be reproduced reasonably well with a proper choice of values of the parameters and . Subsequent investigations demonstrated that additional complexities are involved in the CDM halo structure. First, halos of a fixed mass may have significant scatter in their values (e.g., Jing 2000), although there is a mean trend of with . If this trend is indeed due to a correlation between concentration and formation time, the scatter in may reflect the expected scatter in the formation history for halos of a given mass (e.g., Jing 2000; Lu et al. 2006). Second, Bullock et al. (2001, hereafter B01) and Eke et al.
(2001, hereafter E01) found that the halo concentration at a fixed mass is systematically lower at higher redshift. B01 proposed a model with ( being the mass at which the rms of the linear density field is ), which has a stronger halo-mass dependence, and a much stronger redshift dependence, than that predicted by the NFW model. E01 proposed a similar model that has a weaker mass dependence, and a slightly weaker redshift dependence, than the B01 model. Both of these models have been widely used in the literature to predict the concentration–halo mass relation. Using the same set of numerical simulations as that used in B01, Wechsler et al. (2002; hereafter W02) found that, over a large mass range, the mass accretion histories (hereafter MAHs) of individual halos identified at redshift can be approximated by a one-parameter exponential form,

M(z) = M(z_obs) exp[−2(z − z_obs)/(1 + z_f)],   (6)

where z_f is the formation time of a halo, determined by fitting the simulated MAH with the above formula. Assuming that equals at and grows proportionally to the scale factor , W02 proposed a recipe to predict for individual halos through their MAHs, and found that their model can reproduce the dependence of on both halo mass and redshift found in B01. Using a large set of Monte Carlo realizations of the extended Press–Schechter formalism (EPS), van den Bosch (2002) showed that the average MAH of CDM halos follows a simple universal function with two scaling variables. Based on a set of high-resolution N-body simulations, Zhao et al. (2003a, 2003b, hereafter Z03a and Z03b, respectively) found that the MAH of a halo in general consists of two distinct phases: an early fast phase and a late slow phase (see also Li et al. 2007 and Hoffman et al. 2007).
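As a concrete illustration of the W02 parameterization in Eq. (6), here is a minimal sketch (the function name is ours; z_f is a per-halo fit parameter, not something the formula predicts):

```python
import math

def mah_w02(z, z_obs, M_obs, z_f):
    """One-parameter exponential MAH of W02, Eq. (6):
    M(z) = M(z_obs) * exp[-2 (z - z_obs) / (1 + z_f)],
    where z_f is fitted to each simulated history."""
    return M_obs * math.exp(-2.0 * (z - z_obs) / (1.0 + z_f))

# For a halo observed at z_obs = 0, the mass at its formation redshift is
# M(z_f) / M_obs = exp(-2 z_f / (1 + z_f)); e.g. ~0.26 for z_f = 2.
ratio_at_zf = mah_w02(2.0, 0.0, 1.0, 2.0)  # M_obs = 1 in arbitrary units
```

A single parameter thus fixes the whole history, which is precisely why this form cannot capture the two distinct growth phases discussed next.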
As shown in Z03a, the early phase is characterized by rapid halo growth dominated by major mergers, which effectively reconfigure and deepen the gravitational potential wells and cause the collisionless dark matter particles to undergo dynamical relaxation and mix sufficiently to form the core structure, while the late phase is characterized by slower quiescent growth predominantly through accretion of material onto the halo outskirts, which affects the inner structure and potential very little. Z03a proposed that the concentration evolution of a halo depends strongly on its mass accretion rate: the faster the mass grows, the slower the concentration increases. In particular, they predicted that halos which are still in the fast growth regime should all have a similar concentration, . Using a combination of N-body simulations of different resolutions, Z03b studied in detail how the concentrations of CDM halos depend on halo mass at different redshifts. They confirmed that halo concentration at the present time depends strongly on halo mass, but their results also show marked differences from the predictions of the B01 and E01 models. The mass dependence of halo concentrations becomes weaker at higher redshifts, and at halos with all have a similar median concentration, . While the median concentrations of low-mass halos grow significantly with time, those of massive halos change very little with redshift. These results are in good agreement with the empirical model proposed by Z03a and favored by the Chandra observation of Schmidt & Allen (2007), but are very different from the predictions of the models proposed by B01, E01 and W02. Recently, Gao et al. (2008) and Macció et al. (2008) confirmed the results of Z03b with the use of a large set of simulated halos, suggesting again that the much-used B01 and E01 models need to be revised. There have been attempts to modify the early empirical models of halo MAH and halo concentration to accommodate the new simulation results.
Miller et al. (2006) and Neistein et al. (2006) provided MAH models based on the theoretical EPS formalism. With the help of the Millennium simulation, Neistein & Dekel (2008) presented an empirical algorithm for constructing halo merger trees, which is significantly better than their previous method based on EPS but fails to reproduce the non-Markov features of merger trees. Moreover, because of the limited resolution of the simulation, this algorithm is tuned to reproduce MAHs within redshift 2.5 and only for massive halos. McBride et al. (2009) examined halo MAHs in the Millennium simulation and modeled the scatter among different halos by fitting individual MAHs with a two-parameter function, which was shown to be more accurate than the one-parameter function of W02. Note that both models are restricted to the specific cosmology and the specific fluctuation power spectrum of the Millennium simulation, similar to the WMAP1 results. Gao et al. (2008) confirmed the finding of B01 that the NFW prescription overpredicts the halo concentrations at high redshift, and tried to overcome this shortcoming by modifying the definition of halo formation time. However, the revised model still fails to reproduce the evolution of concentrations of galaxy-sized halos. Modifying the parameters of the B01 and E01 models can also reduce the discrepancy between these models and simulation results at high masses, but the overly rapid redshift evolution of the concentration remains. Motivated partly by Z03a and Z03b, Macció et al. (2008) presented a model based on some modification of the B01 model, and found that it can reproduce the concentration–mass relation at in their simulations. Unfortunately, their model is not universal, because the normalization of the concentration–mass relation has to be calibrated for each cosmological model and for each redshift, and thus the model cannot be used to predict correctly the redshift evolution of halo concentrations.
In addition, as we will show below, the model of Macció et al. has the same shortcoming as the original B01 model in that it predicts too steep a concentration–mass relation for halos at high redshift in the current ΛCDM models and for halos at all redshifts in the ‘standard’ CDM model with . Thus, all the existing empirical models for the halo concentration can at best provide reliable predictions for dark matter halos in limited ranges of halo mass and redshift (often around ), and only for some specific cosmological models (according to which the empirical models are calibrated). Clearly, they are insufficient in the era of precision cosmology. We believe that, even though a large number of observations have constrained the cosmological parameters and the initial density fluctuation spectrum to very high accuracy, any successful model for structural evolution should still work well simultaneously for various cosmological models and different power spectra, because the cosmological parameters change with time and the power index of the power spectrum varies dramatically with mass scale in the so-called concordance ΛCDM cosmology. In this paper, we use N-body simulations of a variety of structure formation models, including a set of scale-free (SF) models, two ΛCDM (LCDM) models, the standard CDM (SCDM) model, and an open CDM (OCDM) model, to investigate in detail the MAHs, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We show that earlier empirical models for halo MAHs and concentrations all fail significantly to describe the simulation results. Based on our simulation results, we develop new empirical models for both the MAHs and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations.
These models are accurate and universal, in the sense that the same set of model parameters works well for different cosmological models and for halos of different masses and at different redshifts. These models are also simple and easy to implement, making them very useful in modeling the formation and structure of dark matter halos. Furthermore, the universal relations we find may also provide important insight into the formation processes of dark matter halos in the cosmic density field. The organization of the paper is as follows. In Section 2, we present the set of N-body simulations used in the paper. We describe the simulated MAHs and the corresponding modeling in Section 3. Our modeling of halo concentrations is presented in Section 4. Finally, we discuss and summarize our results in Section 5. We provide appendices describing the step-by-step implementation of our models. A calculator which allows one to interactively generate data for any given cosmological model is provided at http://www.shao.ac.cn/dhzhao/mandc.html, together with a user-friendly code to make the relevant calculations and some tables listing the expected concentration as a function of halo mass and redshift in several currently popular models of structure formation.

## 2. Numerical simulations and dark matter halos

We use a very large set of cosmological simulations to study the formation and structure of dark matter halos of different masses in different cosmological models. The first subset contains the simulations that were used in Z03b: the then ‘concordance’ ΛCDM model with density parameter and a cosmological constant given by . These simulations, labeled as LCDM1–3 in Table 1, were generated with a parallel-vectorized Particle-Particle/Particle-Mesh code (Jing & Suto 2002).
The linear power spectrum has a shape parameter , and an amplitude specified by , where is the Hubble constant in units of , and is the rms of the linear density field smoothed within a sphere of radius at the present time. We used particles for the simulation of boxsize , and particles for the other two simulations, and (see Table 1). The simulations with and were evolved by 5000 time steps with a force softening length (the diameter of the S2 shaped particles, Hockney & Eastwood 1981) equal to and , respectively. In order to examine the dependence on cosmological parameters, we also use simulations for the ‘Standard’ CDM model () and the open CDM model (, ), as described in Jing & Suto (2002, 1998). These are listed as SCDM1, SCDM2 and OCDM1 in Table 1, together with the corresponding model and simulation parameters. In addition, we also use a set of four scale-free (SF) simulations in an Einstein de Sitter universe, with the linear power-law power spectra given by . These models are listed as SF1–4 in Table 1 and Figure 1 presents the rms of the linear density field as a function of the mass scale for SF1–4 and LCDM1–3. We use both the friends of friends (hereafter FOF) algorithm and the spherical overdensity (hereafter SO) algorithm of Lacey & Cole (1994) to identify dark matter groups in the simulations. The FOF algorithm connects every pair of particles closer than 0.2 times the mean particle distance and the SO algorithm selects spherical regions whose average density is equal to times the mean cosmic density . We select groups at a total of 20–30 snapshots for each CDM simulation and at 10 snapshots for each SF simulation. The outputs are uniformly spaced in the logarithm of the cosmic scale factor . For each group, we choose the particle of the highest local density as the group center, and around it we select as ‘halo’ a spherical region whose average density is , with a routine very similar to the SO algorithm except that here the group center is fixed. 
We find that the results presented in this paper do not depend significantly on the group-finding algorithm. Our following presentation is based on the FOF groups, and we use all FOF groups without applying any further selection criteria. The simulations described above are also used to study the density profiles and the concentrations of dark matter halos. We use all halos containing more than 500 particles for our analysis. Each halo is fitted with the NFW profile, using a method similar to that described in Z03b. As demonstrated in Z03b, 500 particles are sufficient for measuring the concentration parameter reliably. In order to examine the concentrations of very massive halos, we also use three additional simulations with large boxsizes (see Jing, Suto & Mo 2007). These simulations are listed as LCDM4–6 in Table 1. Note that the model parameters of these simulations are slightly different from those of LCDM1–3. Furthermore, the initial power spectrum for these simulations is obtained using the fitting formula of Eisenstein & Hu (1998), which includes the effects of baryonic oscillations. The details of these simulations can be found in Jing, Suto & Mo (2007), and all the changes in model parameters are properly taken into account when we compare our model predictions of halo concentrations with the simulation results. For the same reason, we also use an SCDM simulation of boxsize (listed as SCDM3 in Table 1) to investigate the concentration–mass relation at the highest halo masses in this model.

## 3. A universal model for the halo mass accretion histories

Since we want to study how a halo grows with time, we need to construct the main branch of the merger tree for each halo. Given a group of dark matter particles at a given output time (which we refer to as group 2), we trace all its particles back to an earlier output time.
A group (group 1) at the earlier output is selected as the “main progenitor” of group 2 if it contributes the largest number of particles to group 2 among all groups at this earlier output. We found that in most cases more than half of the particles of group 1 are contained in group 2. We refer to group 2 as the “main offspring” of group 1. We construct the main branch of the merger tree for each of the 10000 most massive halos identified at redshift , until the number of particles in the progenitor drops below 10 or the main progenitor can no longer be found. More than of the histories analyzed here can be traced until the particle number goes below 10. As an illustration, Figure 2 shows the median MAHs of halos both in the SF simulations and in the CDM models. The halo mass is normalized by the characteristic mass at the final output in the SF simulations. Clearly, the MAHs depend on the power spectrum. This is because halos grow faster in a model with a smaller value of . The effect of cosmological parameters can also be seen clearly in the CDM models: the growth of halos is slower in the LCDM model than in the SCDM model (despite the fact that the LCDM model has more clustering power than the SCDM model) and is slowest in the OCDM model. In our analysis, we have taken into account the incompleteness of the MAHs at the earliest epochs, when progenitors containing fewer than 10 particles cannot be followed in the simulation. This effect leads to an overestimation of the MAHs. In order to correct for it, the statistical analysis of the MAHs is carried out only up to the redshift at which the progenitors of of all the halos in consideration can still be traced. As one can see from Figure 2, the MAHs show a complex dependence on power spectrum, cosmology and halo mass. The goal of this section is to find an order in such complexity so as to obtain a universal model that describes all the MAHs.

### 3.1.
The Expected Asymptotic Behavior at High Redshift

Before modeling the MAHs in detail, let us first consider some generic properties of the growth of CDM halos in the cosmic density field. In the CDM scenario, structures form in a hierarchical fashion, and the growth of dark matter halos in general has the following properties. First, at any given time, more massive halos are, on average, growing faster because they sit in higher-density environments. Second, the higher the redshift, the faster halos of a fixed mass grow, because at high redshift they are relatively more massive with respect to others. Third, the MAH of a halo depends on the power spectrum of the initial density fluctuations, as mentioned before. Fourth, the MAH of a halo also depends on cosmology, because of the change of the background expansion. Finally, halo growth is also affected by nonlinear processes in the local environment. All these factors are entangled, and each varies in its own complicated way. For example, in a ΛCDM universe, the cosmological density parameter decreases with time and the power index of the initial density fluctuation spectrum increases with mass scale dramatically, as shown in Figure 3 and Figure 1, respectively. This is why one needs a universal evolution model which works well for various cosmologies and different spectra, as already mentioned in the introduction. For exactly the same reason, however, we may not expect that the variations of all these factors compensate exactly to yield a universal functional form for the MAH in terms of and . Thus, we turn to modeling the halo mass accretion rate instead, which can be integrated to build the MAH; and in order to describe the accretion rate accurately and universally, we need to disentangle all the effects due to halo mass, redshift, the cosmological parameters, the linear power spectrum and the relevant nonlinear processes. Consider two average halos of different masses at a given time.
Since the more massive one grows faster, we expect that at a slightly earlier time the two halos were closer in mass (in terms of the mass ratio), and the masses of their progenitors at higher redshift are expected to be even closer. The two halos are expected to have similar mass accretion rates at sufficiently high redshift, because otherwise, with a finite difference in accretion rates, their mass accretion trajectories would cross. This would lead to an unlikely situation in which more massive progenitors actually grow more slowly than low-mass progenitors at redshifts higher than the crossing redshift. This means that the average MAHs of halos of different masses should all have the same asymptotic behavior at high redshift. Here we try to find this behavior. Let us first consider an Einstein-de Sitter universe with an initial Gaussian density fluctuation field of a power-law power spectrum, . As there is no characteristic scale either in time or in space, this is a case where structure formation is a self-similar process. When halo mass is scaled with a time-dependent characteristic mass, all statistical quantities as functions of the scaled mass should be the same at all redshifts. Given a linear power spectrum, we can linearly extrapolate it to and estimate the rms of the linear density field on a mass scale . According to the spherical collapse model, the linear critical overdensity for collapse at redshift is , where is the linear growth factor, which is in an Einstein-de Sitter universe, and is the conventional critical overdensity for collapse at redshift , which is a constant in this universe. Because is a function of only and is a function of only , we can think of as the “mass” and as the “time”. Furthermore, since , which satisfies , represents the characteristic nonlinear mass at redshift , actually corresponds to the characteristic nonlinear “mass” at redshift if is regarded as “mass”.
In this case, the scaled “mass” is , i.e., the peak height of the halo in the conventional terminology. Suppose that there are some halos at a given redshift and the average mass growth rate is , i.e., so that . Then at a slightly higher redshift, , the scaled mass of their progenitors will be the same as that at . Since the statistical properties of halos of the same scaled mass are the same at all redshifts in the self-similar case, these progenitors will have the same growth rate as that at , i.e., both the mass growth rate , and the scaled mass , remain constant with time. Therefore, the main progenitors at different redshifts will all have the same and the same growth rate as they have at redshift , and the average MAH will be a straight line in the logarithmic diagram of versus . This line is exactly the asymptotic behavior we are looking for. In the case of cold dark matter cosmologies (SCDM, LCDM, and OCDM), structure formation is not self-similar. Nevertheless, we can still use to represent halo mass, to represent time, and to represent the scaled mass. However, it is not guaranteed that the relation has a unique asymptotic behavior as it has in the SF case. As we will see below, simulated median MAHs of different final masses show the same asymptotic behavior at high not only for the SF models but also for the CDM models. We can then use this result to guide our modeling of the halo MAHs in simulations.

### 3.2. Toward a Universal Model for the Halo Mass Accretion Histories

If we use as mass and as time, the MAH looks like the curves shown in Figure 4, where we have adopted the SF model as an example. The figure clearly shows the asymptotic behavior described above. The MAHs of halos of different present masses converge at high redshift, and the scaled mass approaches a constant, as one can see in the case of the most massive halos in the figure, and as expected from the asymptotic behavior discussed above.
Figure 5 shows the median accretion rate as a function of the scaled mass for the four SF models. For each model, a set of symbols of the same type connected with a solid line represents the accretion rates of the progenitors of halos of a given final mass, and the filled symbol represents the end of the MAH. It is interesting to note that, for each SF model, the results for different halo masses are similar except at the end of the history: the accretion rate plotted in this way is approximately a constant for all the progenitors. As mentioned above, in each model the scaled mass is asymptotically a constant at high redshift. This corresponds to a horizontal line, which is plotted as the dashed line in the figure together with a star showing the corresponding asymptotic accretion rate. At the end of each history, the mass accretion rate is somewhat lower than the average of the progenitors. This deviation from self-similarity is not a numerical artifact, as the simulation resolution is sufficient there. Instead, it reflects the fact that the main progenitors are a special subset of the total halo population. While the halos at the end of the MAHs are chosen only by mass, their progenitors carry an additional selection bias: they are chosen to be main progenitors of more massive halos. This biases the progenitor halos toward higher mass accretion rates than typical halos of the same scaled mass. In other words, a fraction of the halos in the total population are not main progenitors of larger halos in the future, but will instead be swallowed by more massive halos. Their mass accretion rates may have been suppressed by their massive neighbors through environmental heating or tidal stripping, as envisaged in Wang et al. (2007), and thus their merger trees should also have some non-Markov features. As seen in Figure 5, the accretion rates differ between the SF models, again indicating that the accretion rate depends on the shape of the power spectrum.
In order to model such dependence, we use the dimensionless rate d lg σ(M)/d lg δc(z) to represent the accretion rate. The results are shown in the upper left panel of Figure 6, which again indicates that the accretion rate depends strongly on the power index of the linear spectrum. However, we find that this dependence can be scaled away if we use the following variable:

w(z, M) ≡ δc(z)/s(M),   (7)

where

s(M) ≡ σ(M) × 10^{(d lg σ/d lg M)|_M}.   (8)

This is shown in the upper right panel of Figure 6. With the use of w, all the halo MAHs lie on top of each other, except for the snapshots close to the end of the MAHs. Even more remarkably, the accretion rates of the progenitors at high redshift can all be well described by a straight line,

d lg σ(M)/d lg δc(z) = w(z, M)/5.85,   (9)

which is shown in the panel as the solid line. Note that lg s(M) is equal to lg σ(M) plus its logarithmic slope at mass M. For a given power spectrum, s(M) decreases with halo mass, and so w increases with halo mass. The simple relation given by Equation (9) also describes well the MAHs at high redshifts in the LCDM model (see the lower two panels of Figure 6), although the power spectrum and the cosmology are very different from the SF models. As shown in the two right panels of Figure 6, the accretion rate is systematically lower than that given by Equation (9) at the end of the MAHs. We find that this decrease of the accretion rate can be accounted for by replacing w with w − p. The shift, p, in the horizontal axis depends on redshift and halo mass in the following way:

p(z, zobs, Mobs) = p(zobs, zobs, Mobs) × Max[0, 1 − (lg δc(z) − lg δc(zobs))/(0.272/w(zobs, Mobs))],   (10)

where

p(zobs, zobs, Mobs) = [w(zobs, Mobs)/2] / {1 + [w(zobs, Mobs)/4]^6}   (11)

is the shift at zobs, the redshift at which the final halo is identified. The horizontal gap between the dashed and solid curves in the right two panels of Figure 6 shows this shift. As one can see, the shift describes well the deviation of the mass accretion rates at the end of the histories (the solid symbols) for the LCDM model.
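As a concrete reading of Equations (7) and (8), the following sketch evaluates s(M) and w(z, M) for an assumed power-law sigma(M) in an Einstein de Sitter background; the normalization (sigma = 2 at lg M = 12) and delta_c0 = 1.686 are illustrative choices, not values from the paper:

```python
import numpy as np

def lg_sigma(lgM, n=-1.0):
    # lg sigma(M) for a power-law spectrum, normalized so sigma = 2 at lg M = 12
    # (normalization illustrative).
    gamma = (n + 3.0) / 6.0
    return np.log10(2.0) - gamma * (lgM - 12.0)

def s_of_M(lgM, n=-1.0, dlg=1e-4):
    # Eq. (8): lg s(M) = lg sigma(M) + dlg sigma/dlg M, slope by central difference.
    slope = (lg_sigma(lgM + dlg, n) - lg_sigma(lgM - dlg, n)) / (2.0 * dlg)
    return 10.0 ** (lg_sigma(lgM, n) + slope)

def w_of(lgM, z, n=-1.0, delta_c0=1.686):
    # Eq. (7): w(z, M) = delta_c(z)/s(M), with the EdS delta_c(z) = delta_c0*(1+z).
    return delta_c0 * (1.0 + z) / s_of_M(lgM, n)

print(round(w_of(12.0, 0.0), 3))   # -> 1.816
```

For an exact power law the finite-difference slope equals the analytic slope −(n+3)/6; for a realistic CDM sigma(M) the same central-difference estimate would be used on the tabulated lg sigma curve.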
For the SF models, the shift appears to be too large, but this is mainly because the interval between the last two snapshots in the SF models is quite big, so the accretion rate is not estimated accurately. Figure 7 shows d lg σ/d lg δc versus w − p. It is clear that this relation is much tighter and is well described by the following relation:

d lg σ(M)/d lg δc(z) = [w(z, M) − p(z, zobs, Mobs)]/5.85,   (12)

which is shown as the solid lines. The same results are also found for the SCDM and OCDM simulations, although they are not shown here. Given a cosmology and a linear power spectrum, it is easy to calculate w for a halo of mass M at a given redshift z. From Equation (12), one can estimate the mass accretion rate d lg σ/d lg δc at z. One can then compute σ (or equivalently M) at a redshift incrementally higher than z, thus tracing the MAH to higher redshifts. The MAHs, in terms of σ and δc, thus computed are shown in Figures 8 and 9. Since σ is a monotonic function of M and δc is a monotonic function of z, the σ–δc relation can easily be converted into an MAH in terms of M and z. Figures 10–12 show the median MAHs so obtained for halos of different final masses at different redshifts in the various models (solid curves), in comparison with the simulation results (circles). For the CDM case, the model predictions match the simulation results very well even though halo mass is traced down to about 0.0005 times the final mass, over which the cosmological parameters and the power index of the initial density fluctuation spectrum change dramatically. Moreover, the same set of model parameters works well for all halo masses, all redshifts, and all cosmologies studied here. The typical errors of the predicted median MAHs are small in most cases. Somewhat larger errors seen in the highest mass bin in Figure 11 may be due to an inaccurate determination in the simulation, because the number of halos in this mass bin is small.
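The incremental tracing procedure described above can be sketched numerically. This minimal implementation assumes an Einstein de Sitter background with delta_c(z) = 1.686 (1+z) and a power-law sigma(M) (both illustrative; the paper's cosmologies use full transfer functions), and drops the end-of-history shift p of Equation (10) for simplicity, so it integrates Equation (9) only:

```python
import numpy as np

GAMMA = 1.0 / 3.0            # -dlg sigma/dlg M for an n = -1 power-law spectrum
S_SHIFT = 10.0 ** (-GAMMA)   # s(M)/sigma(M) = 10^(dlg sigma/dlg M)

def lg_sigma_from_lgM(lgM):
    return np.log10(2.0) - GAMMA * (lgM - 12.0)

def lgM_from_lg_sigma(lgs):
    return 12.0 - (lgs - np.log10(2.0)) / GAMMA

def trace_mah(lgM_obs, z_obs=0.0, z_max=10.0, nstep=2000, delta_c0=1.686):
    """Euler-step lg delta_c upward, updating lg sigma with
    dlg sigma/dlg delta_c = w/5.85 (Eq. 9; the end-of-history shift p
    of Eq. 10 is dropped for simplicity). Returns a list of (z, lg M)."""
    lgdc = np.log10(delta_c0 * (1.0 + z_obs))
    h = (np.log10(delta_c0 * (1.0 + z_max)) - lgdc) / nstep
    lgs = lg_sigma_from_lgM(lgM_obs)
    history = [(z_obs, lgM_obs)]
    for _ in range(nstep):
        w = 10.0 ** lgdc / (10.0 ** lgs * S_SHIFT)   # Eq. (7)
        lgs += (w / 5.85) * h                        # Eq. (9)
        lgdc += h
        history.append((10.0 ** lgdc / delta_c0 - 1.0, lgM_from_lg_sigma(lgs)))
    return history

mah = trace_mah(14.0)        # a cluster-sized halo today (illustrative)
print(round(mah[-1][1], 1))  # progenitor lg M at z = 10
```

The full model replaces the analytic power-law sigma(M) with the tabulated one for the cosmology in question and subtracts the shift p of Equations (10) and (11) from w at each step.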
These predictions are much more accurate than those of the model of W02, shown as the short-dashed lines in Figures 10–12, and those of the model of van den Bosch (2002), shown as the long-dashed lines in Figures 10–12. (Following the instructions on Bullock’s Web site, http://www.physics.uci.edu/~bullock/CVIR/, we set the relevant parameter in the B01 model (see Section 4.1 for more details) and use the returned collapse redshift as input for the free parameter in the W02 model, Equation (6). Note also that the predictions of the model of van den Bosch (2002) are average MAHs, while ours are median MAHs; if the same definition were adopted, the difference between the predictions of these two models would be even larger in the CDM cases, for which our model always predicts more massive progenitors. This is because individual MAHs show a log-normal distribution (D. H. Zhao et al. 2010, in preparation), so the average value should be somewhat higher than the median one.) Furthermore, unlike the latter two models, the MAHs predicted by our model for halos of different final masses do not cross at high redshift. Note that the W02 model works quite well for low-mass halos in the CDM and OCDM models. However, for more massive halos in these two models, and for all halos in the scale-free and SCDM simulations, it fails to provide an accurate description of the MAHs. Even though the W02 model and ours give similar MAHs for low-mass halos in some CDM models, they have very different asymptotic behaviors at high redshift, because the W02 model is an exponential function of redshift while ours is a power-law function of δc. Several months after this paper was submitted to the Journal and the Internet, McBride et al. (2009) modeled the distribution of halo MAHs in the Millennium Simulation by fitting individual MAHs with a two-parameter function, instead of the one-parameter function of W02.
They found our model prediction for the mass accretion rate to have a slightly steeper mass dependence than their fitting formula: our median value is close to their median mass accretion rate at low redshift but exceeds theirs at high redshift. The reason for this discrepancy is unknown; however, it is worth pointing out that, since we combine a set of high-resolution simulations, the dynamical range explored here is larger than that of the Millennium Simulation, and our halo samples are large enough for analyzing the median value. To test our model further, we utilize the Lagrangian semianalytic code PINOCCHIO (a public version is available from http://adlibitum.oats.inaf.it/monaco/Homepage/Pinocchio), proposed by Monaco et al. (2002) for identifying dark matter halos in a given numerical realization of the initial density field in a hierarchical universe. Figure 13 compares our model predictions with the median redshifts of the MAHs produced by the PINOCCHIO code, for a cosmology the same as LCDM1–3 and very similar to that of the Millennium Simulation. Here four different PINOCCHIO runs, with box sizes of 25, 100, and 300 plus a fourth larger box and correspondingly increasing particle numbers, are analyzed; to avoid artifacts, only those MAHs that converge among runs of different box sizes are plotted. Again, at all redshifts available, our model predictions match the median values of the PINOCCHIO MAHs very well over the halo mass range probed by the Millennium Simulation. The fluctuations in the median MAHs in the smallest box are due to small-number statistics. Note that the PINOCCHIO MAHs are output directly by the PINOCCHIO code without any postprocessing, and so are largely independent of our numerical simulations and of our definitions of halo merger trees. It is interesting to examine the scaling relations obtained above more closely. At sufficiently high redshift, Equation (10) gives p = 0, and Equation (12) reduces to the simpler form of Equation (9).
In this case, w = 5.85 corresponds to d lg σ/d lg δc = 1, or equivalently to σ ∝ δc along the history. The scaled mass, ν = δc/σ, is then independent of time, and so is the mass accretion rate in the SF case. This corresponds to the high-redshift asymptotic behavior discussed in the last subsection. For halos more massive than this characteristic scale, the model predicts that w > 5.85 and that w increases with time. For the scale-free cases, this means that the most massive halos will grow faster and faster, as demonstrated in the two upper panels of Figure 10, while in the CDM and OCDM cases such rapidly growing halos are very rare, because neither σ(M) nor δc(z) is a simple power law there. Both the decrease of the cosmological density parameter with time and the increase of the power index of the power spectrum with mass scale slow down the halo mass growth rate, as shown above. This is why, at the present time, halos on nearly all mass scales show two distinct phases in their mass growth histories, as found by Z03b and Z03a. For the SF models we can obtain some useful analytic formulae, because here σ has a simple power-law dependence on M. First, using the high-redshift asymptote w = 5.85, the mass of a median progenitor at sufficiently high redshift is a fixed fraction of the characteristic mass M*(z), with a fraction that depends on the power index. Second, because the shift p is a constant in an SF model, Equation (12) can be integrated along each median history, with a single constant of integration per history. This solution describes the MAHs in the SF models quite well, as shown in Figure 8. The corresponding relations in the CDM models are steeper than those given by this simple model, as shown in Figure 9, because the effective power spectrum index, which enters the definition of s(M), increases with the mass scale. Before ending this section, we would like to point out that the scaling relations obtained here may also apply to individual halos.
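The fixed point behind this asymptote can be written down explicitly for a power-law spectrum: setting d lg σ/d lg δc = 1 (constant scaled mass) in Equation (9) gives w = 5.85, which, with s(M) = σ(M) × 10^(−γ) and γ = (n+3)/6, pins the asymptotic peak height. A short sketch (the power indices below are chosen for illustration, not the paper's SF set):

```python
def asymptotic_nu(n):
    """High-z fixed point for a power-law spectrum: along the asymptote the
    scaled mass is constant, so dlg sigma/dlg delta_c = 1 and Eq. (9) gives
    w = 5.85.  With s(M) = sigma(M)*10^(-gamma), gamma = (n+3)/6, this fixes
    the peak height nu = delta_c/sigma = 5.85 * 10^(-gamma)."""
    gamma = (n + 3.0) / 6.0
    return 5.85 * 10.0 ** (-gamma)

for n in (-2.0, -1.0, 0.0):   # illustrative power-law indices
    print(n, round(asymptotic_nu(n), 3))
```

Spectra with steeper effective slopes (larger n) give smaller asymptotic peak heights, i.e., median progenitors that sit closer to the characteristic nonlinear mass.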
As shown in Figure 14, the linear relation between the accretion rate and w is also a good approximation for individual halos, although the slope may change from halo to halo. Thus, one may model individual MAHs using straight lines whose slopes change from halo to halo. The scatter in the MAHs of halos of a given mass may then be modeled through the distribution of the slope. Since the linear relation is a good approximation for different halo masses and in different cosmologies, as shown in Figures 8 and 9, this kind of modeling is expected to be valid for different halos and in different cosmological models. We will come back to this point in a forthcoming paper.

## 4. A universal model for the dark matter halo concentrations

### 4.1. The Mass and Redshift Dependence of Halo Concentration

Median values of the halo concentrations measured in the CDM models are plotted as symbols in Figures 15–17. The results obtained here from simulations LCDM1–3 are in excellent agreement with those obtained in Z03b. In these plots we also compare our simulation results with three much-used empirical models. The first is the NFW model, which relates the halo characteristic density to the density of the universe at the time when a fixed fraction of the halo mass is already contained in progenitors above a fixed fraction of the halo mass. In agreement with B01, our results show that the NFW model not only fails to predict correctly the redshift dependence of halo concentration, but also fails to predict the present-day concentration accurately. The model of E01 matches our results better over part of the mass and redshift range, but it also fails to match the concentration–mass relation, particularly at high redshift. The model of B01 has several versions of its parameters (see http://www.physics.uci.edu/~bullock/CVIR/).
We adopt the original version of the parameters F and K in Equations (9) and (12) of their paper. (Although a different parameter value is quoted in the published paper of B01, their latest Web site has deleted all description of that value and claims that the version adopted here is the original one corresponding to the published paper.) This version matches our simulation results for LCDM1–3 well at low redshift. However, it fails to match the concentration–mass relation for massive halos, especially at high redshift. If the parameter values suggested lately for the total halo population by Macció et al. (2007) and by James Bullock on his Web site are used, the model underpredicts the concentration even for low-mass halos at low redshift. The conflicts between the early model predictions and the simulation data were already discussed by Z03b, and were also confirmed recently by Gao et al. (2008), who used the Millennium Simulation to carry out an analysis very similar to that in Z03b. As mentioned in the introduction, Gao et al. tried some revisions of the NFW model and found that the revised version still fails to match their simulation results. The simulation results presented here are in good agreement with those in Z03b and in Gao et al. (2008). Since we combine a set of high-resolution simulations, the dynamical ranges explored in Z03b and here are larger than that of the Millennium Simulation. With our new simulation data for the SCDM model, we find that the B01 model cannot even match our data unless its parameters are adjusted. As in the LCDM case, the B01 model also fails to account for the redshift dependence of halo concentration in the SCDM model. Z03a found that the scale radius is tightly correlated with the mass enclosed by it. With this tight correlation, one can predict halo concentrations from the MAHs. Using the MAH model given in the last section, we can predict the halo concentration according to the Z03a model. The prediction of this model is shown in Figure 16 as the dot-dashed line.
For comparison, we also show the predictions of the B01 model and of the model of Macció et al. (2007) in the same figure. As one can see, the prediction of the Z03a model is more accurate than those of the other two, particularly for high-mass halos. Although the model of Z03a matches the simulation data well, it is not easy to implement. Here we use our simulation results to find a new model that is both more accurate and easier to implement. Let us first consider an SF case with a given linear power index. In this case, halos of a given mass have a uniquely determined time at which their main progenitors reach a fixed fraction, e.g., 4%, of their final mass. There is thus a one-to-one correspondence between the MAH and this time. If the concentration of a halo is fully determined by its MAH, one may build a model linking this time to the final halo concentration. In principle, such a relation can be found for the time at any fixed fraction of the final halo mass, but the resulting relation may be very different for scale-free models of different power indices. What we are seeking is a mass fraction for which the relations between concentration and time are the same for all the scale-free models. After many trials, we find that the concentration of a halo is tightly correlated with the time t0.04 at which its main progenitor gained 4% of its final mass. This relation is almost identical for three of the SF models, as shown in Figure 18, but is slightly lower for the remaining model, whose power index is the largest. Since for realistic cosmological models the effective slope of the linear power spectrum on the scales of halos that can form is always less than 0, we exclude that model in modeling the relation. We find that the relation given by the other three models can be accurately described by the following simple expression:

c = [4^8 + (t/t0.04)^{8.4}]^{1/8} = 4 × [1 + (t/(3.75 t0.04))^{8.4}]^{1/8}.   (13)

Nontrivially, this same relation also applies very well to the CDM models, as shown in the small window of Figure 18.
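Equation (13) is a one-line function of the single argument t/t0.04; a minimal sketch:

```python
def concentration(t_over_t004):
    """Eq. (13): c = [4^8 + (t/t0.04)^8.4]^(1/8)
                   = 4*[1 + (t/(3.75*t0.04))^8.4]^(1/8)."""
    return (4.0 ** 8 + t_over_t004 ** 8.4) ** (1.0 / 8.0)

# Fast-growth regime (t << 3.75*t0.04): c is pinned near 4.
print(round(concentration(1.0), 3))    # -> 4.0
# Around the transition and beyond, c grows roughly as (t/t0.04)^1.05.
print(round(concentration(3.75), 2))
print(round(concentration(10.0), 2))
```

The two regimes are visible directly in the functional form: for t well below 3.75 t0.04 the constant 4^8 dominates the bracket, while for t well above it the power-law term dominates and c ≈ (t/t0.04)^1.05.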
From our MAH model described in Section 3, we can easily compute the time t0.04 for a halo of mass M. We can then predict its concentration straightforwardly using Equation (13). The concentration–halo mass relations so obtained are shown as solid lines in Figure 19 for the SF models, and in Figures 15–17 for the CDM models. Comparing the model predictions with the corresponding simulation data, we see that our model works accurately for all the models and for halos at different redshifts. The typical error of our prediction is less than 5%. Somewhat larger deviations, seen for the most massive halos at high redshift, could be a result of numerical inaccuracy in the simulations, because these halos are not well relaxed. Compared with the early models, ours is clearly a very significant improvement. In Appendix B, we describe step by step how to use Equation (13) to predict the concentration–mass relation at any given redshift for a given cosmological model. It should be pointed out that both our model and the model of NFW are based on a correlation between halo concentration and a characteristic formation time. However, the two models have several important differences. First, in the NFW model the formation time was defined as the epoch when half of the halo mass has been assembled into progenitors above a fixed mass threshold, while in our model the time is defined as the epoch when the main progenitor has gained 4% of the halo mass. Second, the ways of relating the concentration to the characteristic time are also different: NFW assumed that the inner density of a halo is proportional to the mean density of the universe at the formation time, while we relate the halo concentration to the characteristic time by Equation (13). Finally, NFW used the extended Press–Schechter formalism to compute the formation time, while we use our model of MAHs to calculate the time t0.04. These differences make a very big difference in the model predictions, as shown above.

### 4.2. The Evolution of Halo Structure Along the Main Branch

The model described above can also be used to predict how halo structural properties, such as the scale radius, the characteristic density, and the circular velocities, and hence the density profile, evolve along the main branch. For a given MAH, M(z), we can estimate the virial mass of the halo (and the corresponding virial radius and circular velocity) at the redshift in question, and use the model described above to obtain its concentration at that redshift. Since the virial radius is determined by the virial mass, we can then obtain the scale radius, the characteristic density, and the maximum circular velocity through the definition of the concentration and Equations (4) and (5), respectively. The solid lines in Figure 20 show the model predictions in comparison with the simulation results of LCDM1–3, shown by circles connected by lines. Clearly, our model also works very well in describing these evolutions. These results demonstrate that, for a given halo, our model can not only predict how its total mass grows with time, but also how its inner structure changes with time. Thus, one can plot its NFW density profile at any point along the main branch, and can obtain the density evolution in a spherical shell of any radius. As one can see, for low-mass halos the scale radius, the characteristic density, and the maximum circular velocity all remain more or less constant at low redshift, suggesting that the inner structures of these halos change little in the late stages of their evolution. Since by definition the virial radius increases with time, the concentration of such halos increases rapidly with decreasing redshift. Note that the circular velocity at the virial radius actually decreases with time for such halos at late times, as shown by the dashed lines in the lower right panel of Figure 20. On the other hand, for massive halos these structural quantities all change rapidly with redshift even at low redshift, implying that the inner structures of these halos are still being adjusted by the mass accretion. Therefore, the concentration of such halos increases very slowly, or even remains constant, with redshift, much different from the W02 model, which assumes that the concentration simply scales with the expansion factor.
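The chain from (Mvir, c) to the structural quantities can be sketched with the standard NFW relations. The unit system, Delta_vir, and density values below are illustrative placeholders, not the paper's choices; only the NFW algebra itself is used:

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def mu(c):
    # NFW mass-shape function: M(<r) = 4*pi*rho_s*r_s^3 * mu(r/r_s).
    return np.log(1.0 + c) - c / (1.0 + c)

def nfw_structure(Mvir, c, rho_vir):
    """Derive (Rvir, r_s, rho_s, Vvir, Vmax) from (Mvir, c).
    rho_vir is the mean density within the virial radius (Delta_vir times
    the background density; the caller chooses the halo definition)."""
    Rvir = (3.0 * Mvir / (4.0 * np.pi * rho_vir)) ** (1.0 / 3.0)
    rs = Rvir / c
    rho_s = Mvir / (4.0 * np.pi * rs ** 3 * mu(c))   # from Mvir = M(<Rvir)
    Vvir = np.sqrt(G * Mvir / Rvir)
    x = 2.163                                        # NFW Vc peaks at r = 2.163 r_s
    Vmax = np.sqrt(G * 4.0 * np.pi * rho_s * rs ** 2 * mu(x) / x)
    return Rvir, rs, rho_s, Vvir, Vmax

# Illustrative: a 1e12 Msun halo with c = 10, Delta_vir = 200 and a made-up
# background density of 1.4e2 Msun/kpc^3 (round numbers, not from the paper).
Rvir, rs, rho_s, Vvir, Vmax = nfw_structure(1e12, 10.0, 200.0 * 1.4e2)
print(round(Vmax / Vvir, 3))   # -> 1.205
```

Applied along a model MAH, with M(z) from Section 3 and c(z) from Equation (13), this mapping yields the evolution of each structural quantity along the main branch.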
All these behaviors were illustrated clearly in Z03a and Z03b. As discussed in Z03a, these different behaviors are mainly due to the fact that massive halos are still in their early growth phase, which is characterized by rapid halo growth dominated by major mergers that effectively reconfigure and deepen the gravitational potential wells, causing the collisionless dark matter particles to undergo dynamical relaxation and to mix sufficiently to form the core structure; low-mass halos, in contrast, have reached the late growth phase, which is characterized by slower quiescent growth predominantly through accretion of material onto the halo outskirts, affecting the inner structure and potential only little. Figure 21 presents the model predictions of the evolution of the median concentration in comparison with the simulation results of SF1–4. It is very interesting that the simulated median concentrations of massive halos in each SF model also remain constant, but the values are not 4, except for SF4. This is not surprising: as shown in Figure 10, the mass accretion of SF4 halos, which is the fastest among these SF models, is very similar to that in the CDM cases and thus triggers dynamical relaxation and core-structure formation, resulting in a constant concentration of about 4. For the remaining SF models, the mass accretion is slower, and even the massive halos with straight-line MAHs are in their slow growth regime (see Figure 18); according to Equation (13), progenitors on these straight-line MAHs, which have time-invariant t/t0.04, will all have constant but power-index-dependent concentrations. According to Section 3.1, all median MAHs in the SF cases asymptote to straight lines and, in an Einstein de Sitter universe, the universe age scales as t ∝ δc^{−3/2}. Combining these two relations with Equation (13), we find that halo median concentrations also have a unique asymptotic behavior at high redshift, with constant values that depend on the power index.
These asymptotic values are well verified in Figure 21 (except that in the SF1 case the prediction should be shifted slightly when compared with the simulation results, because a short-wavelength cutoff in the linear power spectrum has been included in this simulation); for comparison, the assumption of W02 is also plotted in each panel of the figure. In fact, for this kind of self-similar straight-line MAH, any concentration model which claims that halo concentration is tightly correlated with its MAH should predict a constant concentration, as required by logic. For the most massive, and thus very rare, halos which grow faster and faster, as demonstrated in the two upper panels of Figure 10, our model predicts a decrease of concentration with time, as their t/t0.04 is diminishing. This very interesting behavior is again supported by the simulation results and is worth further detailed study.

### 4.3. Comparison with the Zhao et al. (2003a) Model

As mentioned above, the Z03a model for the halo concentrations is based on the tight correlation between the scale radius and the mass enclosed by it. In Figure 22 we show this relation obtained from the simulations used in this paper. Clearly, for a given mass, the two quantities are tightly correlated, and the relation is well described by a power law with index α. The value of α lies between the values obtained in Z03a for the late slow and the early fast growth regimes. This suggests that the model we are proposing here is closely related to that of Z03a. To show this more clearly, let us start with Equation (10) in Z03a:

{[ln(1+c) − c/(1+c)] c^{−3α}} / {[ln(1+c0) − c0/(1+c0)] c0^{−3α}} = [ρ(z)/ρ0]^α [M(z)/M0]^{1−α}.   (14)

The function of c in square brackets on the left-hand side behaves like a power law, c^β, for moderate concentrations, and we will therefore use a power-law function instead. Suppose that we have a halo whose main branch reaches 4% of its current mass at a time t0.04, and assume that the concentration of the progenitor at t0.04 is 4.
Substituting the quantities with subscript 0 in the above equation with the corresponding quantities at t0.04 gives

(c/4)^{β−3α} = (ρ/ρ0.04)^α 25^{1−α},   (15)

where β is the effective power-law index of the bracketed function for moderate concentrations and 25 = 1/0.04 is the mass ratio. We can then obtain

c = 4 [(ρ/ρ0.04)^α 25^{1−α}]^{1/(β−3α)} = 4 [(1/4)(ρ/ρ0.04)^{−1.05/2}]   (16)

for the fitted values of α and β. This is very similar to relation (13) at t ≫ t0.04 (note that ρ ∝ t^{−2} in an Einstein de Sitter universe), indicating that the model proposed here is consistent with that of Z03a. Indeed, for a given MAH, the redshift at which t = 3.75 t0.04 is uniquely determined. We found that this redshift separates well the fast growth regime (t < 3.75 t0.04) from the slow growth regime (t > 3.75 t0.04). In the fast growth regime all halos have about the same median concentration, independent of halo mass, while in the slow growth regime the concentration scales with time roughly as (t/t0.04)^{1.05}. This is exactly the proposal made in Z03a and Z03b, namely that the concentration evolution of a halo depends strongly on its mass accretion rate: the faster the mass grows, the slower the concentration increases.

## 5. Conclusions and Discussion

A large body of observations has constrained the cosmological parameters and the initial density fluctuation spectrum to very high accuracy. However, the cosmological parameters change with time, and the power index of the power spectrum varies dramatically with mass scale in the so-called concordance CDM cosmology. Thus, any successful model for halo structural evolution should work well simultaneously for various cosmological models and different power spectra. In this paper, we use a large set of N-body simulations of various structure formation models to study the MAHs, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that our simulation results cannot be described by the much-used empirical models in the literature. Using our simulation results, we developed new empirical models for both the MAHs and the concentration evolution histories of dark matter halos; the latter can also be used to predict the mass and redshift dependence of halo concentrations.
These models are universal, in the sense that the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts. Our model predictions also match the PINOCCHIO MAHs very well; these are output directly by the Lagrangian semianalytic code PINOCCHIO without any postprocessing, and so are largely independent of our numerical simulations and of our definitions of halo merger trees. These models are also simple and easy to implement, making them very useful in modeling the growth and structure of the dark matter halo population. We found that, in describing the median MAH of dark matter halos, some degree of universality can be achieved by using δc(z), the critical overdensity for spherical collapse, to represent the time, and σ(M), the rms of the linear density field on mass scale M, to represent the mass. This is consistent with the Press–Schechter (PS) theory, in which the dependence on cosmology and power spectrum enters entirely through δc and σ. A more universal relation can be found by taking into account the local slope of the power spectrum on the mass scale in question. This is again expected, because the growth of an average halo in a given cosmological background is determined by the perturbation power spectrum on larger scales. Indeed, in the extended PS theory, the average mass accretion rate of a halo of a given mass depends both on σ and on its derivative. However, the exact dependence on the shape of the power spectrum in our empirical model is actually different from that given by the extended PS theory, as demonstrated clearly by the discrepancy between our results and the model of van den Bosch (2002), which is based on the extended PS theory (see Figures 10–12). We find that another modification is required in order to correct the deviation of halos corresponding to relatively low peaks.
We suggest that such deviation is due to the neglect of the large-scale tidal field in models of the formation of dark matter halos, as found in Wang et al. (2007). Thus, the empirical model we find for the halo accretion histories may help us to develop new models for the mass function and merger trees of dark matter halos. In addition, we find that a linear relation involving s(M), defined in Equation (8), is also a good approximation for individual halos, although the slope may change from halo to halo. Thus, the MAHs of individual halos may be modeled with a set of straight lines with different slopes specified by a distribution function. Research along these lines is clearly worthwhile, and we plan to come back to these problems in forthcoming papers. Our other finding is that the concentration of a halo is strongly correlated with t0.04, the age of the universe when its progenitor on the main branch first reaches 4% of its current mass. This is consistent with the general idea that the structure of a dark matter halo is correlated with its formation history. The concentration of a halo selected at a given redshift is determined by the ratio of the universe age at this redshift, t, to t0.04. If this ratio is smaller than 3.75, the halo is in the fast growth regime and its concentration is expected to be about 4, independent of its mass. For a halo in the fast growth regime, its first 4% of mass settled down at a late epoch (of age t0.04) and so has low density. In addition, its current accretion rate is large in the sense that it takes a time interval of much less than 2.75 t0.04 to gain the 96% of mass remaining. Such accretion is expected to have a significant impact on the inner structure, which is not very dense, and so the inner structure of this halo is adjusted constantly by the mass accretion.
In contrast, for a halo in the slow growth regime (t > 3.75 t0.04), its first 4% of mass settled down at an early epoch, when the universe age was much smaller than the current age, and so this fraction of the mass is expected to have settled into a very dense clump. Also, for such a halo the current accretion rate is small, in the sense that it takes a time interval of more than 2.75 t0.04 to gain the 96% of mass remaining. Such slow accretion is not expected to have a significant impact on the inner dense structure that has already formed. All this is in good qualitative agreement with the results presented in Z03a. Our model can also be used to predict accurately how halo structural properties, such as the scale radius, the characteristic density, and the circular velocities, and hence the density profile, evolve along the main branch, making it very useful for our understanding of the formation and evolution of dark matter halos. Assuming that the NFW profile is a good approximation for the halo density profile, the model can also predict the MAHs, the mass and redshift dependence of concentrations, and the concentration evolution histories for different halo definitions. In a forthcoming paper, we will apply this model to individual halos. We found that in a self-similar cosmology, halos on some characteristic mass scale have a straight-line median MAH and a time-invariant median concentration. All halos asymptote to these median features at high redshift: halos below this mass scale grow more and more slowly on average and their concentration increases with time, while halos above this mass scale grow faster and faster on average and their concentration decreases. On the other hand, in the CDM and OCDM cases, both the decrease of the cosmological density parameter with time and the increase of the power index of the power spectrum with mass scale slow down the halo mass growth rate, so that at the present time halos on nearly all mass scales show two distinct phases in their mass growth histories, as found by Z03b and Z03a.
Z03a and Z03b also pointed out that when the halo mass accretion rate exceeds a certain threshold, the fast mass growth triggers dynamical relaxation and core-structure formation, resulting in a constant concentration of about 4. Here we have found another mechanism that leads to a time-invariant concentration along the main branch of a halo merger tree: a self-similar, straight-line MAH necessarily induces a constant concentration, and when the mass accretion rate is below the above threshold, the value of this constant is set by the accretion rate and is always larger than 4.

We are grateful to Simon D. M. White for helpful discussions and to Julio Navarro and James Bullock for providing their codes to compute halo concentrations. The work is supported by NSFC (10303004, 10533030, 10873028, 10821302, 10878001), by 973 Program (2007CB815402, 2007CB815403), Shanghai Qimingxing project (04QMX1460), and by the Knowledge Innovation Program of CAS (KJCX2-YW-T05). H.J.M. thanks the Chinese Academy of Sciences for supporting his visit to SHAO. He also acknowledges the support of NSF AST-0607535, NASA AISR-126270 and NSF IIS-0611948. Part of the numerical simulations presented in this paper were carried out at the Astronomical Data Analysis Center (ADAC) of the National Astronomical Observatory, Japan.

## References

• Bardeen, J. M., Bond, J. R., Kaiser, N., & Szalay, A. 1986, ApJ, 304, 15
• Bryan, G. L., & Norman, M. 1998, ApJ, 495, 80
• Bullock, J. S., Kolatt, T. S., Sigad, Y., Somerville, R. S., Kravtsov, A. V., Klypin, A. A., Primack, J. R., & Dekel, A. 2001, MNRAS, 321, 559
• Eisenstein, D. J., & Hu, W. 1998, ApJ, 496, 605
• Eke, V. R., Navarro, J. F., & Steinmetz, M. 2001, ApJ, 554, 114
• Gao, L., Navarro, J. F., Cole, S., Frenk, C. S., White, S. D. M., Springel, V., Jenkins, A., & Neto, A. F. 2008, MNRAS, 387, 536
• Hockney, R. W., & Eastwood, J. W. 1981, Computer Simulation Using Particles (New York: McGraw-Hill)
• Hoffman, Y., Romano-Díaz, E., Shlosman, I., & Heller, C. 2007, ApJ, 671, 1108
• Jenkins, A., Frenk, C. S., White, S. D. M., Colberg, J. M., Cole, S., Evrard, A. E., Couchman, H. M. P., & Yoshida, N. 2001, MNRAS, 321, 372
• Jing, Y. P. 2000, ApJ, 535, 30
• Jing, Y. P., & Suto, Y. 1998, ApJ, 494, L5
• Jing, Y. P., & Suto, Y. 2002, ApJ, 574, 538
• Jing, Y. P., Suto, Y., & Mo, H. J. 2007, ApJ, 657, 664
• Kitayama, T., & Suto, Y. 1996, MNRAS, 280, 638
• Lacey, C., & Cole, S. 1994, MNRAS, 271, 676
• Li, Y., Mo, H. J., van den Bosch, F. C., & Lin, W. P. 2007, MNRAS, 379, 689
• Lu, Y., Mo, H. J., Katz, N., & Weinberg, M. D. 2006, MNRAS, 368, 1931
• Macciò, A. V., Dutton, A. A., van den Bosch, F. C., Moore, B., Potter, D., & Stadel, J. 2007, MNRAS, 378, 55
• Macciò, A. V., Dutton, A. A., & van den Bosch, F. C. 2008, MNRAS, 391, 1940
• McBride, J., Fakhouri, O., & Ma, C.-P. 2009, MNRAS, 398, 1858
• Miller, L., Percival, W. J., Croom, S. M., & Babić, A. 2006, A&A, 459, 43
• Monaco, P., Theuns, T., Taffoni, G., Governato, F., Quinn, T., & Stadel, J. 2002, ApJ, 564, 8
• Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493
• Neistein, E., & Dekel, A. 2008, MNRAS, 383, 615
• Neistein, E., van den Bosch, F. C., & Dekel, A. 2006, MNRAS, 372, 933
• Schmidt, R. W., & Allen, S. W. 2007, MNRAS, 379, 209
• van den Bosch, F. C. 2002, MNRAS, 331, 98
• Wang, H. Y., Mo, H. J., & Jing, Y. P. 2007, MNRAS, 375, 633
• Wechsler, R. H., Bullock, J. S., Primack, J. R., Kravtsov, A. V., & Dekel, A. 2002, ApJ, 568, 52
• Zhao, D. H., Mo, H. J., Jing, Y. P., & Boerner, G. 2003a, MNRAS, 339, 12
• Zhao, D. H., Jing, Y. P., Mo, H. J., & Boerner, G. 2003b, ApJ, 597, L9

In Appendices A and B, we give a detailed user's guide on how to use the results of the present paper to compute the median mass accretion history and the concentration of dark matter halos for a given cosmological model.

## Appendix A Calculating the median mass accretion history of halos

For a given cosmological model, one can calculate the critical collapse threshold at any given redshift by dividing the conventional spherical-collapse threshold by the linear growth factor (normalized to 1 at z = 0); this gives the collapse threshold for the density field linearly evolved to z = 0. Given a linear power spectrum at z = 0, one can also calculate the rms density fluctuation σ(M) for a spherical volume of mass M. Once these quantities are obtained, one can calculate the median mass growth history of dark matter halos of mass M at redshift z, i.e., the median mass of their main progenitors at any higher redshift. The step-by-step procedure is as follows.

1. With the collapse threshold and σ(M) for halos of mass M at redshift z, calculate the quantity defined in Equation (7) and the shift defined in Equation (11), and initialize the iteration with these values.
2. Obtain the accretion rate at this redshift by substituting these values into Equation (12).
3. At an incrementally higher redshift, calculate the new critical collapse threshold and its change over the step, and hence, through the accretion rate, the corresponding change in σ. This yields a new value of σ, from which the median mass of the main progenitors at the higher redshift follows by inverting the σ(M) relation.
4. In order to trace the history further backward, recalculate the quantity of Equation (7) and the shift of Equation (3.2).
Update these running values accordingly and repeat steps 2 and 3. Step by step, one can thus trace the MAH backwards to high redshift. One can also calculate the halo radius and halo circular velocity along this MAH according to Equations (2) and (3), with the halo overdensity set as defined in Section 1.

## Appendix B Calculating the halo concentration

For a given MAH ending at a chosen redshift in a given cosmology, one can predict the concentrations of all the main progenitors at different redshifts in addition to that of the final offspring, i.e., one can obtain the concentrations along the MAH. Many other useful quantities can then be calculated, as shown below. There are two steps in computing the concentrations.

1. From the given MAH, compute the redshift at which the main progenitor of the halo under consideration first reaches 4 percent of the latter's mass.
2. In the given cosmology, calculate the ages of the universe at these two redshifts and obtain the concentration by substituting them into Equation (13).

Furthermore, one can calculate the characteristic inner quantities of the halo along the MAH, such as the characteristic inner radius and density, according to the definition of the concentration and Equations (2)–(5), and hence plot the NFW density profile at any point of the MAH according to Equation (1). In the literature there are several definitions of the halo radius. Assuming that the NFW profile is a good approximation for the halo density profile, one can predict the MAHs and the concentration evolution histories for different halo definitions, because halos of different definitions share the same inner structure even though their boundaries differ. For example, according to Equations (4) and (5), one can calculate the mass, radius, and concentration for one halo definition and convert them to the corresponding quantities for another definition. Note that the concentration obtained with steps 1 and 2 is that of a main progenitor of a halo of given final mass and redshift.
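The first step of the procedure above — locating the epoch at which the main progenitor reaches a small fixed fraction of the final mass — amounts to a simple interpolation on a tabulated MAH. The sketch below makes some assumptions: the 4 percent threshold follows the discussion in the main text, the MAH is supplied as arrays of age and normalized mass, and Equation (13) itself is not reproduced in this chunk, so the conversion from the age ratio to a concentration is left as a user-supplied function:

```python
import numpy as np

def age_at_mass_fraction(t, m, fraction=0.04):
    """Interpolate a tabulated MAH m(t) (both arrays increasing with t, with m
    normalized to the final mass) to find the age at which m first reaches
    `fraction` of the final mass."""
    t = np.asarray(t, dtype=float)
    m = np.asarray(m, dtype=float)
    if not (m[0] <= fraction <= m[-1]):
        raise ValueError("requested mass fraction outside tabulated MAH")
    # np.interp expects the x-coordinate (here: mass) to be increasing.
    return float(np.interp(fraction, m, t))

def concentration_at_final_time(t, m, eq13, fraction=0.04):
    """Concentration at the final time of the MAH. `eq13` stands in for
    Equation (13) of the paper (not reproduced here): a function mapping the
    age ratio t_final / t_fraction to a concentration."""
    t_frac = age_at_mass_fraction(t, m, fraction)
    return eq13(t[-1] / t_frac)
```

For example, for a power-law history m ∝ t², the 4 percent point lies at t = 0.2 t_final, giving an age ratio of 5 — in the slow-growth regime of the main text.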
As we pointed out in Section 3.2, such a population of main progenitors is not statistically the same as the whole population of halos of the same mass at the same redshift, although in some cases the difference between them is negligible. The former population resides in environments with more frequent mergers and so has slightly smaller concentrations. In any case, one can easily compute the concentration for the whole population of halos of a given mass and redshift by taking that mass and redshift as the end point of the MAH in the above calculation.

## Appendix C A calculator, a user-friendly code and tables available on the Internet

A calculator that allows one to interactively generate the median MAHs, the concentration evolution histories, and the mass and redshift dependence of concentrations of dark matter halos is provided at http://www.shao.ac.cn/dhzhao/mandc.html, together with a user-friendly code that performs the relevant calculations. The calculator and the code can be used for almost all current cosmological models, with or without inclusion of the effects of baryons in the initial power spectrum. We also provide tables of median concentrations for halos of different masses at different redshifts in several popular cosmological models. Detailed instructions on how to use the calculator, the code, and the tables can be found at the Web site.
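The conversions between halo definitions described in Appendix B all rest on the NFW form being fixed in its inner structure. As a reminder, the standard NFW relations (textbook formulas, not code from this paper) for the density and enclosed mass are:

```python
import numpy as np

# Standard NFW profile: rho(r) = rho_s / [(r/r_s) * (1 + r/r_s)**2].
# The concentration is c = R_halo / r_s for any chosen halo-radius definition,
# which is why halos defined with different boundaries share the same
# characteristic density rho_s and radius r_s.

def nfw_density(r, rho_s, r_s):
    """NFW density at radius r."""
    x = np.asarray(r, dtype=float) / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def nfw_mass(r, rho_s, r_s):
    """Mass enclosed within radius r: 4*pi*rho_s*r_s**3 * mu(r/r_s),
    where mu(x) = ln(1 + x) - x / (1 + x)."""
    x = np.asarray(r, dtype=float) / r_s
    mu = np.log(1.0 + x) - x / (1.0 + x)
    return 4.0 * np.pi * rho_s * r_s ** 3 * mu
```

Converting between halo definitions then amounts to solving, for each definition's overdensity Δ, the boundary condition M_Δ = (4π/3) Δ ρ_ref R_Δ³ with M_Δ = nfw_mass(R_Δ, ρ_s, r_s), holding (ρ_s, r_s) fixed.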