https://byjus.com/questions/which-one-of-the-following-is-not-an-example-of-projectile/

Which of the following is not a projectile? (1) An aircraft taking off (2) A ball thrown horizontally from a roof (3) A bullet fired from a rifle (4) A football kicked by a player.
Answer: (1) An aircraft taking off
Projectile motion

Any object thrown into space with only gravity acting on it is called a projectile, and its motion is called projectile motion. When a particle is thrown obliquely near the earth’s surface, it moves with a constant acceleration directed toward the earth’s centre and follows a curved (parabolic) path. An aircraft taking off is not a projectile: it is driven by engine thrust and lift in addition to gravity, so it does not follow a parabolic free-fall path. As a result, its motion is not projectile motion.
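To make the parabolic-path claim concrete, here are the standard projectile kinematics (a textbook addition, not part of the original answer), for launch speed $v_0$ at angle $\theta$ with gravitational acceleration $g$:

$$x = v_0\cos\theta\, t, \qquad y = v_0\sin\theta\, t - \tfrac{1}{2}g t^{2} \;\;\Longrightarrow\;\; y = x\tan\theta - \frac{g\,x^{2}}{2 v_0^{2}\cos^{2}\theta},$$

which is a parabola in $x$. A powered aircraft satisfies no such relation, since forces other than gravity act on it throughout.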
https://blog.feedly.com/secrets-of-success/

# Secrets of success
Here is an interesting 3-minute TED presentation by Richard St. John on how to increase your chances of being successful.
https://socratic.org/questions/what-is-the-square-root-of-62-5
# What is the square root of -62.5?
The square of every real number is non-negative, so no real number squares to $-62.5$. Therefore, in Algebra (over the real numbers), the solution is the empty or null set: $\emptyset$. (Over the complex numbers, one would instead write $\sqrt{-62.5}=\frac{5\sqrt{10}}{2}\,i\approx 7.91\,i$.)
https://portlandpress.com/clinsci/article-abstract/53/3/241/72200/A-Reappraisal-of-Osmotic-Evidence-for-Intact

1. The validity of evidence for intact peptide absorption, derived from analysis of the relation of water and total solute absorption, has been tested.
2. Solute and water absorption from saline solutions of the disaccharide maltose have been studied in the normal human jejunum, using a double-lumen perfusion technique with a proximal occlusive balloon. It was expected that maltose would yield very different results from peptides, because maltose is virtually completely hydrolysed before absorption, whereas at least a proportion of some peptides is transported into the intestinal mucosal cells before hydrolysis. This expectation was not confirmed by experiment.
3. The assumption that the absorbate is always isotonic with plasma has been tested by altering the osmolality of glucose/saline solutions perfused in the jejunal lumen. This assumption was not substantiated by experiment: when the luminal fluid was hypertonic to plasma, so was the absorbate.
4. It is suggested that our findings with peptides and saccharides could be explained by the production of a hypertonic absorbate by hydrolysis of these solutes to their monomer units. We therefore conclude that analyses of the relation of net solute and water absorption cannot be used to predict the form in which peptides enter the mucosal cells.
https://mathoverflow.net/questions/194181/on-certain-representations-of-algebraic-numbers-in-terms-of-trigonometric-functi

# On certain representations of algebraic numbers in terms of trigonometric functions
Let's say that a real number has a simple trigonometric representation, if it can be represented as a product of zero or more rational powers of positive integers and zero or more (positive or negative) integer powers of $\sin(\cdot)$ at rational multiples of $\pi$ (the number of terms in the product is assumed to be finite, the empty product is taken to be $1$).
Examples:
• $\sqrt[3]{\sin\!\left(\frac\pi3\right)}$ has a simple trigonometric representation, because $\sqrt[3]{\sin\!\left(\frac\pi3\right)}=\frac{3^{1/6}}{2^{1/3}}$.
• $\sqrt{\sin\!\left(\frac\pi{10}\right)}$ has a simple trigonometric representation, because $\sqrt{\sin\!\left(\frac\pi{10}\right)}=\frac{2^{1/2}}{5^{1/4}}\,\sin\!\left(\frac\pi5\right)$.
• $\pi$ does not have a simple trigonometric representation, because it is not an algebraic number.
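Both example identities are easy to confirm numerically; a minimal Python sketch (mine, not part of the original question):

```python
import math

# sin(pi/3)**(1/3) should equal 3**(1/6) / 2**(1/3)
lhs = math.sin(math.pi / 3) ** (1 / 3)
rhs = 3 ** (1 / 6) / 2 ** (1 / 3)
print(abs(lhs - rhs))  # ~1e-16: the identity holds to machine precision

# sqrt(sin(pi/10)) should equal 2**(1/2) / 5**(1/4) * sin(pi/5)
lhs = math.sqrt(math.sin(math.pi / 10))
rhs = 2 ** 0.5 / 5 ** 0.25 * math.sin(math.pi / 5)
print(abs(lhs - rhs))  # ~1e-16: the identity holds to machine precision
```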
Questions:
• Do $\sqrt{\sin\!\left(\frac\pi5\right)},$ $\sqrt{\sin\!\left(\frac\pi8\right)},$ $\sqrt{\sin\!\left(\frac\pi{12}\right)},$ $\sqrt{\sin\!\left(\frac\pi{15}\right)},$ $\sqrt{\sin\!\left(\frac\pi{20}\right)},$ $\sqrt{\sin\!\left(\frac\pi{24}\right)}$ have simple trigonometric representations?
• Is there an algorithm that, given a rational power of $\sin(\cdot)$ at a rational multiple of $\pi$, would determine if it has a simple trigonometric representation? If so, could you give (or outline) a concrete example of such an algorithm (efficient, if possible)?
• More generally, is there an algorithm that, given a real algebraic number (in some explicit form, e.g. as its minimal polynomial and a rational isolating interval), would determine if it has a simple trigonometric representation? If so, could you give (or outline) a concrete example of such an algorithm (efficient, if possible)?
http://science.sciencemag.org/content/338/6114/1560.full
# Journey in the Search for the Higgs Boson: The ATLAS and CMS Experiments at the Large Hadron Collider
Science 21 Dec 2012:
Vol. 338, Issue 6114, pp. 1560-1568
DOI: 10.1126/science.1230827
## Abstract
The search for the standard model Higgs boson at the Large Hadron Collider (LHC) started more than two decades ago. Much innovation was required and diverse challenges had to be overcome during the conception and construction of the LHC and its experiments. The ATLAS and CMS experiments at the LHC have discovered a heavy boson that could complete the standard model of particle physics.
One of the remarkable achievements of 20th-century science is the revelation that a large number of natural phenomena that characterize the world around us can be described in terms of underlying principles of great simplicity and beauty. The standard model (SM) of particle physics is built upon those principles. The SM comprises quarks and leptons as the building blocks of matter, and describes their interactions through the exchange of force carriers: the photon for electromagnetic interactions, the W and Z bosons for weak interactions, and the gluons for strong interactions. Quarks are bound by the strong interaction into protons and neutrons; protons and neutrons bind together into nuclei; electrons bind to nuclei by electromagnetic interaction to form atoms, molecules, and matter. The electromagnetic and weak interactions are unified in the electroweak theory (1–3). Although integrating gravity into the model remains a fundamental challenge, the SM provides a beautiful explanation, from first principles, of much of Nature that we observe directly.
The SM has been tested by many experiments over the past four decades and has been shown to successfully describe high-energy particle interactions. The simplest and most elegant way to construct the SM would be to assert that all fundamental particles are massless. However, we know this to be untrue, otherwise atoms would not have formed and we would not exist. The question of the origin of mass of fundamental particles is the same as the one that is posed in the unified electroweak theory: Why does the photon remain strictly massless while its close cousins, the W and Z bosons, acquire a mass some 100 times that of the proton? To generate mass of the W and Z bosons, the electroweak gauge symmetry must somehow be broken. For protons or neutrons, which are composite particles, most of the mass arises from the mass equivalent of the internal energy due to the intense binding field of the strong interactions.
Nearly 50 years ago, a mechanism was proposed for spontaneously breaking this symmetry (4–9) involving a complex scalar quantum field that permeates the entire universe. This new field has the same quantum numbers as the vacuum. The quantum of this field is called the Higgs boson. In addition to imparting mass to the W and Z bosons, this scalar field would also impart masses to the quarks and leptons, all in proportion to their coupling to this field. After the discovery of the W and Z bosons in the early 1980s, the search for the Higgs boson, considered to be an integral part of the SM, became a central theme in particle physics. The discovery of the Higgs boson would establish the existence of this field, leading to a revolutionary step in our understanding of how Nature works at the fundamental level. The elucidation of this mass-generating mechanism became one of the primary scientific goals of the Large Hadron Collider (LHC).
The mass of the Higgs boson (mH) is not predicted by theory. Below mH = 600 GeV, previous direct searches at the Large Electron Positron collider (LEP), the Tevatron, and the LHC were unable to exclude mass regions between 114 and 130 GeV (10–14). Furthermore, in December 2011, the A Toroidal LHC Apparatus (ATLAS) Collaboration and Compact Muon Solenoid (CMS) Collaboration experiments reported an excess of events near a mass of 125 GeV (13, 14). The Tevatron experiments, CDF and D0, recently reported an excess of events in the range 120 to 135 GeV (15).
In July 2012, the discovery of a new heavy boson with a mass around 125 GeV was announced at CERN by the ATLAS and CMS experiments (16, 17), and the current data are consistent with the expectation for a Higgs boson. Here, we present an overview of the search for the Higgs boson and highlight some of the exciting implications. Despite its incredible success, the SM is still considered incomplete. A number of questions remain: Why would the mass of the Higgs boson be only ~10² GeV? What is dark matter? How does the matter-antimatter asymmetry arise? How is gravity to be included? Physics beyond the SM has been much discussed over the past few decades, and such physics might manifest itself via the production of exotic particles such as superparticles from a new symmetry (supersymmetry), heavy Z-like bosons in grand unified theories or theories with extra space-time dimensions.
Design challenges. In the 1980s, it was clear that new accelerators were needed that could reach energies beyond those that had allowed the discovery of many of the subnuclear particles within the SM. Several ideas were vigorously discussed concerning the accelerators and detector concepts most capable of tackling the major open questions in particle physics, often paraphrased as the “known unknowns,” and possibly discovering new physics beyond the SM, the “unknown unknowns.” Finding the Higgs boson was clearly a priority in the first category and was expected to be challenging. The mass of the Higgs boson (mH) is not predicted by theory and could be as high as 1000 GeV (1 TeV). This required a search over a broad range of mass, ideally suiting an exploratory machine such as a high-energy proton-proton (pp) collider. The suitability arises from the fact that the energy carried by the protons is distributed among its constituent partons (quarks and gluons), allowing the entire range of masses to be “scanned” at the same time. The main mechanisms predicted to produce the Higgs boson involve the combination of these subnuclear particles and force carriers.
The accelerator favored at CERN to probe the TeV energy scale was a pp collider. The required energy of the collider can be estimated by considering the reaction in which a Higgs boson is produced with a mass, mH, of 1 TeV. This happens via the WW fusion production mechanism. A quark from each of the protons radiates a W boson with an energy of ~0.5 TeV, implying that the radiating quark should carry an energy of ~1 TeV so as to have a reasonable probability of emitting such a W boson. Hence, the proton should have an energy of ~6 × 1 TeV, as the average energy carried by a quark inside the proton is about one-sixth of the proton energy. The LHC (18, 19) was designed to accelerate protons to 7 TeV, an order of magnitude higher than the most powerful available accelerator, with an instantaneous pp collision rate of 800 million per second.
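The arithmetic of this estimate is simple enough to spell out; a sketch of the reasoning above, using the rough values quoted in the text:

```python
# Back-of-envelope estimate of the required proton beam energy
m_H = 1.0               # TeV: heaviest Higgs mass to be probed
E_W = m_H / 2           # each radiated W carries ~0.5 TeV
E_quark = 2 * E_W       # radiating quark needs ~2x that energy: ~1 TeV
x_quark = 1 / 6         # average fraction of proton energy per quark
E_proton = E_quark / x_quark
print(E_proton)         # ~6 TeV -> the LHC was designed for 7 TeV beams
```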
It was proposed to build the new accelerator in the existing underground tunnel of LEP. Because the radius of the tunnel is fixed, the value of the magnetic field of the dipole-bending magnets had to be ~8.5 T to hold in orbit the 7-TeV protons. The main challenges for the accelerator were to build ~1200 superconducting dipoles, each 15 m in length, able to reach this magnetic field; the large distributed cryogenic plant to cool the magnets and other accelerator structures; and control systems for the beams. The stored energy in each of the beams at nominal intensity and energy is 350 MJ, equivalent to more than 80 kg of TNT. Hence, if the beam is lost in an uncontrolled way, it can do considerable damage to the machine components, which would result in months of down time [see (18, 19) for further details].
The LHC was approved in 1994 and construction began in 1998. A parallel attempt to build an accelerator that could reach even higher energies was made with the Superconducting SuperCollider (SSC) in Texas in the late 1980s but was canceled in 1993 during the early stages of construction.
This search for the Higgs boson also provided a stringent benchmark for evaluating the physics performance of various experiment designs under consideration some 20 years ago. The predicted rate of production (R = L⋅σ, where L is the instantaneous luminosity of the colliding beams, measured in units of cm⁻² s⁻¹, and σ is the cross section of the production reaction, measured in units of cm²) and natural width (Γ = ħ/τ, where τ is the lifetime, and ħ is Planck’s constant divided by 2π) of the SM Higgs boson vary widely over the allowed mass range (100 to 1000 GeV). Once produced, the Higgs boson disintegrates immediately in one of several ways (decay modes) into known SM particles, depending on its mass. A search had to be envisaged not only over a large range of masses but also over many possible decay modes: in pairs of photons, Z bosons, W bosons, τ leptons, and b quarks. Not only is the putative SM Higgs boson rarely produced in the proton collisions, it also rarely decays into particles that are the best identifiable signatures of its production at the LHC: photons, electrons, and muons. The rarity is illustrated by the fact that Higgs boson production and decay to one such distinguishable signature (H → ZZ → ℓℓℓℓ, where ℓ is a charged lepton, either a muon or an electron) happens roughly only once in 10 trillion pp collisions. This means that a large number of pp collisions per second must be studied; the current operating number is around 600 million per second, corresponding to an instantaneous luminosity of 7 × 10³³ cm⁻² s⁻¹. Hence, the ATLAS and CMS detectors operate in the harsh environment created by this huge rate of pp collisions.
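As an illustration of the rate formula R = L⋅σ (the cross-section value below is an assumed round number for illustration, not a figure from the paper):

```python
L = 7e33                      # cm^-2 s^-1, instantaneous luminosity (from the text)
sigma_pb = 20.0               # pb, assumed Higgs production cross section
sigma_cm2 = sigma_pb * 1e-36  # 1 pb = 1e-36 cm^2
R = L * sigma_cm2             # Higgs bosons produced per second
print(R)                      # ~0.14 per second, i.e. one every ~7 seconds
```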
A saying prevalent in the late 1980s and early 1990s captured the challenge that lay ahead: “We think we know how to build a high-energy, high-luminosity hadron collider—but we don’t have the technology to build a detector for it.” The role of the search for the Higgs boson at the LHC in influencing the design of the ATLAS and CMS experiments, and the experimental challenges associated with operating them at very high instantaneous collision rates, are described in (20).
Different values of mH place differing demands on the detectors, none more stringent than in the mass range mH < 2 × mZ (where the mass of the Z boson mZ = 90 GeV). In designing the LHC experiments, considerable emphasis was placed on this region. At masses below ~140 GeV, the SM Higgs boson is produced with a spread (natural width) of mass values of only a few MeV (i.e., a few parts in 105) such that the width of any observed mass peak would be entirely dominated by instrumental mass resolution. The Higgs boson decay modes generally accepted to be most promising in this region were those into two photons, occurring a few times in every thousand Higgs boson decays, and those, occurring even less often, into a pair of Z bosons, each of which in turn decays into a pair of two oppositely charged leptons (electrons or muons). However, in a detector with a very good muon momentum resolution and electron/photon energy resolution, the invariant mass of the parent Higgs bosons could be measured precisely enough with the prospect of seeing a narrow peak, over background, in the distribution of the invariant masses of the decay particles (e.g., two photons or four charged leptons). Early detailed studies can be found in (21, 22).
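The quantity histogrammed in both of these channels is the invariant mass of the decay products; a minimal sketch of the computation (illustrative, not experiment code):

```python
import math

def invariant_mass(particles):
    """Invariant mass (GeV) of a list of (E, px, py, pz) four-vectors in GeV."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two back-to-back 62.5 GeV photons (massless, so E = |p|) reconstruct
# to a 125 GeV parent -- hypothetical numbers for illustration
photons = [(62.5, 62.5, 0.0, 0.0), (62.5, -62.5, 0.0, 0.0)]
print(invariant_mass(photons))  # 125.0
```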
Other signatures are associated with a Higgs boson. Most of these signatures are plagued by larger backgrounds, as the signal characteristics are less distinctive. For example, some of these signatures include narrow sprays of particles, known as “jets,” resulting from the fragmentation of quarks. These represent the most likely final states from the decay of a SM Higgs boson, but in a hadron collider they are overwhelmed by the copious production from known SM processes. Among these jets are b quark jets characterized by a short (submillimeter) flight path in the detector before disintegrating. Finally, neutrinos can be produced, which, as neutral weakly interacting leptons, leave the detector without direct trace. Energy balance in the transverse plane of the colliding protons appears to be violated, as the neutrino energy is not measured, leading to the signature of missing transverse energy (ETmiss). For example, when a Higgs boson decays into two Z bosons, one can decay into a charged lepton pair and the other into a pair of neutrinos, leaving as final state an oppositely charged lepton pair and ETmiss.
The conditions in hadron colliders are more ferocious than in electron-positron colliders. For example, at the LHC ~1000 charged particles from ~20 pp interactions emerge from the interaction region in every crossing of the proton bunches. Highly granular detectors (to give low probability of a given cell or channel registering a hit or energy), with fast response and good time resolution, are required. Tens of millions of detector electronic channels are hence required, and these must be synchronized to cleanly separate the different “bursts” of particles emerging from the interaction point every 25 ns. The enormous flux of particles emerging from the interaction region leads to high radiation levels in the detecting elements and the associated front-end electronics. This presented challenges for detectors and electronics not previously encountered in particle physics.
The counterrotating LHC beams are organized in 2808 bunches comprising some 10¹¹ protons per bunch separated by 25 ns, leading to a bunch crossing rate of ~40 MHz (the LHC accelerator currently operates at 50-ns bunch spacing with 1380 bunches). The event selection process (“trigger”) must reduce this rate to ~0.5 kHz for storage and subsequent detailed offline analysis. New collisions occur in each crossing, and a trigger decision must be made for every crossing. It is not feasible to make a trigger decision in 25 ns; it takes about 3 μs. During this time the data must be stored in pipelines integrated into onboard (front-end) electronics associated with almost every one of 100 million electronic channels in each experiment. This online event selection process proceeds in two steps: The first step reduces the rate from ~40 MHz to a maximum of 100 kHz and comprises hardware processors that use coarse information from the detectors; upon receipt of a positive trigger signal (set by high-momentum muons or high-energy deposits in calorimeters; see below), the data from these events are transferred to a processor farm, which uses event reconstruction algorithms operating in real time to decrease the event rate to ~0.5 kHz before data storage. The tens of petabytes that are generated per year per experiment are distributed to scientists located across the globe and motivated the development of the so-called Worldwide LHC Computing Grid (WLCG) (23).
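The data-reduction arithmetic implied by these numbers (the live-time figure below is an assumption for illustration):

```python
bunch_rate = 40e6      # Hz: bunch-crossing rate at design spacing
level1_out = 100e3     # Hz: after the hardware first-level trigger
hlt_out    = 0.5e3     # Hz: after the software processor farm
event_size = 1e6       # bytes: ~1 MB of raw data per stored event
to_storage = hlt_out * event_size
print(to_storage / 1e6)                  # ~500 MB/s to storage per experiment
live_seconds = 7e6                       # assumed LHC live time per year
print(to_storage * live_seconds / 1e15)  # ~3.5 PB/yr of raw data alone;
# copies, reprocessing, and simulated events grow this to tens of PB/yr
```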
Timeline and general features of the ATLAS and CMS experiments. To accomplish the physics goals, new detector technologies had to be invented and most of the existing technologies had to be pushed to their limits. Several detector concepts were proposed; two complementary ones, ATLAS and CMS, were selected in 1993 after peer review to proceed to detailed design (24–27). These designs were fully developed, and all elements prototyped and tested, over many years before construction commenced around 1997. Today each experiment comprises more than 3000 scientists and engineers from around 180 universities and laboratories in around 40 countries. Table 1 provides a timeline of these developments.
Table 1. The timeline of the LHC project.
The typical form of a collider detector is a “cylindrical onion” containing four principal layers. A particle emerging from the collision and traveling outward will first encounter the inner tracking system, immersed in a uniform magnetic field, comprising an array of pixels and microstrip detectors. These measure precisely the trajectory of the spiraling charged particles and the curvature of their paths, revealing their momenta. The stronger the magnetic field, the higher the curvature of the paths, and the more precise the measurement of each particle’s momentum. The energies of particles are measured in the next two layers of the detector, the electromagnetic (em) and hadronic calorimeters. Electrons and photons will be stopped by the em calorimeter; jets will be stopped by both the em and hadronic calorimeters. The only known particles that penetrate beyond the hadron calorimeter are muons and neutrinos. Muons, being charged particles, are tracked in dedicated muon chambers. Their momenta are also measured from the curvature of their paths in a magnetic field. Neutrinos escape detection, and their presence gives rise to ETmiss.
ATLAS and CMS have differing but complementary designs (28, 29). The single most important aspect of the overall design is the choice of the magnetic field configuration for measuring the muons. The two basic configurations are solenoidal and toroidal, in which the magnetic field is parallel or azimuthal to the beam axis, respectively. The CMS has a superconducting high-field solenoid with a large ratio of length to inside diameter; ATLAS has a superconducting air-core toroid. These are the two largest magnets of their kind and hold a stored energy of up to 3 GJ. In both magnets a current of ~20 kA flows through the superconductor. The CMS solenoid additionally provides the magnetic field for the inner tracking system, whereas ATLAS has an additional solenoid magnet to carry out the same function.
At the nominal pp collision rate, the particle flux varies from 10⁸ cm⁻² s⁻¹ (at a radius of r = 4 cm) to 2 × 10⁶ cm⁻² s⁻¹ (at r = 50 cm), requiring small detection cells (channels) of typical size varying from 100 μm × 100 μm (pixels) to 10 cm × 100 μm (microstrips). The more channels there are, the easier it is to recognize the trajectories of all the charged particles produced. In practice, the number of channels is limited by the cost of the associated electronics, by the power they dissipate (which in turn requires cooling fluids), and by the need to minimize the amount of material in front of the em calorimeter. The inner tracker detectors, comprising silicon sensors and gaseous “straw” chambers, were challenging to develop because of the need to operate in a harsh radiation environment, especially when close to the beam pipe. Radiation-hard electronics associated with each cell, with a high degree of functionality, needed to be packed into as small a space as possible, using as little material as possible.
In the early 1990s there were only two complementary possibilities for the em calorimeters that could perform in a high-radiation environment and had good enough electron and photon energy resolution to cleanly detect the two-photon decay of the SM Higgs boson at low mass: a lead–liquid argon sampling calorimeter, chosen by ATLAS, and fully sensitive dense lead tungstate scintillating crystals, chosen by CMS. Both are novel techniques, and each was tested and developed over many years before mass production could commence. The electrons and positrons in the electromagnetic showers excite atoms of lead tungstate or ionize atoms of liquid argon, respectively. The amount of light emitted, or charge collected, is proportional to the energy of the incoming electrons or photons.
The hadron calorimeters in each detector are similar and are based on known technologies: alternating layers of iron or brass absorber in which the particles interact, producing showers of secondary particles, and plastic scintillator plates that sample the energy of these showers. The total amount of scintillation light detected by the photodetectors is proportional to the incident energy.
The muon detectors used complementary technologies based on gaseous chambers: drift chambers and cathode strip chambers that provide precise position measurement (and also provide a trigger signal in the case of CMS), and thin-gap chambers and/or resistive plate chambers that provide precise timing information as well as a fast trigger signal.
The electronics on the detectors, much of which was manufactured in radiation-hard technology, represented a substantial part of the materials cost of the LHC experiments. The requirement of radiation hardness was previously found only in military and space applications.
The construction of the various components of the detectors took place over about 10 years in universities, national laboratories, and industries, from which they were sent to CERN in Geneva. This paper can do only partial justice to the technological challenges that had to be overcome in developing, constructing, and installing all the components in the large underground caverns. All the detector elements were connected to the off-detector electronics, and data were fed to computers housed in a neighboring service cavern. Each experiment has more than 50,000 cables with a total length exceeding 3000 km, and more than 10,000 pipes and tubes for services (cooling, ventilation, power, signal transmission, etc.). Access to repair any substantial fault, or faulty connection, buried inside the experiment would require months just to open the experiments. Hence, a high degree of long-term operational reliability, which is usually associated with space-bound systems, had to be attained.
The design of the ATLAS experiment. The design of the ATLAS detector (28) was based on a superconducting air-core toroid magnet system containing ~80 km of superconductor cable in eight separate barrel coils (each 25 m × 5 m in a “racetrack” shape) and two matching end-cap toroid systems. A field of ~0.5 T is generated over a large volume. The toroids are complemented with a smaller solenoid (diameter 2.5 m, length 6 m) at the center, which provides a magnetic field of 2 T.
The detector includes an em calorimeter complemented by a full-coverage hadronic calorimeter for jet and ETmiss measurements. The em calorimeter is a cryogenic lead–liquid argon sampling calorimeter in a novel “accordion” geometry allowing fine granularity, both laterally and in depth, and full coverage without any uninstrumented regions. A plastic scintillator–iron sampling hadronic calorimeter, also with a novel geometry, is used in the barrel part of the experiment. Liquid argon hadronic calorimeters are used in the end-cap regions near the beam axis. The em and hadronic calorimeters have 200,000 and 10,000 cells, respectively, and are in an almost field-free region between the toroids and the solenoid. They provide fine lateral and longitudinal segmentation.
The momentum of the muons can be precisely measured as they travel unperturbed by material for more than ~5 m in the air-core toroid field. About 1200 large muon chambers of various shapes, with a total area of 5000 m2, measure the impact position with an accuracy better than 0.1 mm. Another set of about 4200 fast chambers are used to provide the “trigger.” The chambers were built in about 20 collaborating institutes on three continents. (This was also the case for other components of the experiment.)
The reconstruction of all charged particles, including that of displaced vertices, is achieved in the inner detector, which combines highly granular pixel (50 μm × 400 μm elements leading to 80 million channels) and microstrip (13 cm × 80 μm elements leading to 6 million channels) silicon semiconductor sensors placed close to the beam axis, and a “straw tube” gaseous detector (350,000 channels), which provides about 30 to 40 signal hits per track. The latter also helps in the identification of electrons using information from the effects of transition radiation.
The air-core magnet system allows a relatively lightweight overall structure leading to a detector weighing 7000 tonnes. The muon spectrometer defines the overall dimensions of the ATLAS detector: a diameter of 22 m and a length of 46 m. Given its size and structure, the ATLAS detector had to be assembled directly in the underground cavern. Figure 1 shows one end of the cylindrical barrel detector after about 4 years of installation work, 1.5 years before completion. The ends of four of the barrel toroid coils are visible, illustrating the eightfold symmetry of the structure.
The design of the CMS experiment. The design of the CMS detector (29) was based on a superconducting high-field solenoid, which first reached the design field of 4 T in 2006. The CMS design was first optimized to detect muons from the H → ZZ → 4μ decay. To identify these muons and measure their momenta, the interaction region of the CMS detector is surrounded with enough absorber material, equivalent to about 2 m of iron, to stop all the particles produced except muons and neutrinos. The muons have spiral trajectories in the magnetic field, which are reconstructed in the surrounding drift chambers. The CMS solenoid was designed to have the maximum magnetic field considered feasible at the time, 4 T. This is produced by a current of 20 kA flowing through a reinforced Nb-Ti superconducting coil built in four layers. Economic and transportation constraints limited the outer radius of the coil to 3 m and its length to 13 m. The field is returned through an iron yoke, 1.5 m thick, which houses four muon stations to ensure robustness of measurement and full geometric coverage. The iron yoke is sectioned into five barrel wheels and three end-cap disks at each end, for a total weight of 12,500 tonnes. The sectioning enabled the detector to be assembled and tested in a large surface hall while the underground cavern was being prepared. The sections, weighing 350 to 2000 tonnes, were then lowered sequentially between October 2006 and January 2008, using a dedicated gantry system equipped with strand jacks; this represented the first use of this technology to simplify the underground assembly of large experiments.
The next design priority was driven by the search for the decay of the SM Higgs boson into two photons. This called for an em calorimeter with the best possible energy resolution. A new type of crystal was selected: lead tungstate (PbWO₄) scintillating crystal. Five years of research and development (1993–1998) were necessary to improve the transparency and the radiation hardness of these crystals, and it took 10 years (1998–2008) of round-the-clock production to manufacture the 75,848 crystals—more crystals than were used in all previous particle physics experiments put together. The last of the crystals was delivered in March 2008.
The solution to charged particle tracking was to opt for a small number of precise position measurements of each charged track (~13 each with a position resolution of ~15 μm per measurement), leading to a large number of cells distributed inside a cylindrical volume 5.8 m in length and 2.5 m in diameter: 66 million silicon pixels, each 100 μm × 150 μm, and 9.3 million silicon microstrips ranging from ~10 cm × 80 μm to ~20 cm × 180 μm. With 198 m2 of active silicon area, the CMS tracker is by far the largest silicon tracker ever built.
Finally, the hadron calorimeter, comprising ~3000 small solid angle projective towers covering almost the full solid angle, is built from alternate plates of ~5 cm brass absorber and ~4-mm-thick scintillator plates that sample the energy. The scintillation light is detected by photodetectors (hybrid photodiodes) that can operate in the strong magnetic field. Figure 2 shows the transverse view of the barrel part of CMS in late 2007 during the installation phase in the underground cavern.
Preparation of the experiments. All detector components were tested at their production sites, after delivery to CERN, and again after their installation in the underground caverns. The experiments made use of the constant flow of cosmic rays impinging on Earth. Even at depths of 100 m, there is still a small flux of muons—a few hundred per second traversing each of the experiments. The muons were used to check the whole chain from the hardware to the analysis programs of the experiments, and also to align the detector elements and calibrate their response prior to pp collisions (30, 31).
The ATLAS and CMS experiments would generate huge amounts of data (tens of petabytes of data per year; 1 PB = 10⁶ GB), requiring a fully distributed computing model. The LHC Computing Grid allows any user anywhere access to any data recorded or calculated during the lifetime of the experiments. The computing system consists of a hierarchical architecture of tiered centers, with one large Tier-0 center at CERN, about 10 large Tier-1 centers at national computing facilities, and about 100 Tier-2 centers at various institutes. The center at CERN receives the raw data, carries out prompt reconstruction almost in real time, and exports the raw and reconstructed data to the Tier-1 centers and also to Tier-2 centers for physics analysis. The Tier-0 center must keep pace with the event rate of 0.5 kHz (~1 MB of raw data per event) from each experiment. The large Tier-1 centers provide long-term storage of raw data and reconstructed data outside of CERN (a second copy). They carry out second-pass reconstruction, when better calibration constants are available. The large number of events simulated by Monte Carlo methods and necessary for quantifying the expectations are produced mainly in Tier-2 centers.
The operation of the LHC accelerator and the experiments. The LHC accelerator began to operate on 10 September 2008. On 19 September 2008, during the final powering tests of the main dipole circuit in the last sector (3-4) to be powered up, an electrical fault in one of the tens of thousands of connections generated a powerful spark that punctured the vessel of a magnet, resulting in a large release of helium from the magnet cold mass and leading to mechanical damage to ~50 magnets. These were repaired in 2009, and it was decided to run the accelerator at an energy of 3.5 TeV per beam (i.e., at half the nominal energy).
The first pp collisions (at an energy of 450 GeV per beam) occurred on 23 November 2009; the first high-energy collisions (at 3.5 TeV per beam) were recorded on 30 March 2010, and since then the collider has operated smoothly, providing the two general-purpose experiments, ATLAS and CMS, with data samples corresponding to an integrated luminosity of close to 5 fb⁻¹ (fb, femtobarn) during 2011, and another 5 fb⁻¹, at the slightly higher energy of 4 TeV per beam, up to June 2012. In total, these data (~10 fb⁻¹) correspond to the examination of some 10¹⁵ pp collisions. Typically, there are 20 overlapping pp interactions (“pile-up”) in the same crossing of proton bunches as the interaction of interest. ATLAS and CMS have recorded ~95% of the collision data delivered with the LHC operating in stable conditions. In all, 98% of the roughly 100 million electronic readout channels in each experiment have been performing at design specification. This outstanding achievement is the result of a constant and dedicated effort by the teams of physicists, engineers, and technicians responsible for the hardware, software, and maintenance of the detectors. This efficient operation of the accelerator and the experiments has led to the discovery of the Higgs-like boson soon after the first pp collisions at high energy.
The ATLAS and CMS experiments started recording high-energy pp collisions in March 2010 after a preliminary low-energy run in the autumn of 2009. Many SM processes, including inclusive production of quarks (seen as hadronic jets), bottom quarks, top quarks, and W and Z bosons, have been measured with high precision. These measurements, in a previously unexplored energy region, confirm the predictions of the SM. It is essential to establish this agreement before any claims for new physics can be made, as SM processes constitute large backgrounds to new physics.
Extensive searches for new physics beyond the SM have also been performed. New limits have been set on quark substructure, supersymmetric particles (e.g., disfavoring at 95% CL gluino masses below 1 TeV in simple models of supersymmetry), potential new bosons (e.g., disfavoring at 95% CL new heavy W-like W′ and Z-like Z′ bosons with masses below 2 TeV for couplings similar to the ones for the known W and Z bosons), and even signs of TeV-scale gravity (e.g., disfavoring at 95% CL black holes with masses below 4 TeV).
Undoubtedly, the most striking result to emerge from the ATLAS and CMS experiments is the discovery of a new heavy boson with a mass of ~125 GeV. The analysis was carried out in the context of the search for the SM Higgs boson.
For mH around 125 GeV, and from the number of collisions examined, some 200,000 Higgs bosons would have been produced in each experiment. Folding in the branching fraction, each experiment expected to identify a comparatively tiny number of signal events (e.g., a few hundred two-photon events or tens of four-lepton events) from a hypothetical Higgs boson, before including factors of efficiency. The four–charged-lepton mode (H → ZZ → ℓℓℓℓ) offers the promise of the purest signal (S/B ~ 1, where S is the number of expected signal events and B is the number of expected background events) and has therefore been called the “golden channel.”
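The stated numbers are consistent with simple branching-ratio arithmetic; in the sketch below, the cross section and branching fractions are assumed approximate values, not figures taken from the paper:

```python
sigma_H   = 20.0      # pb: assumed Higgs production cross section
int_lumi  = 10_000.0  # pb^-1: the ~10 fb^-1 examined (from the text)
n_higgs   = sigma_H * int_lumi
print(n_higgs)        # ~200,000 Higgs bosons per experiment

br_h_zz   = 0.026     # assumed BR(H -> ZZ*) at m_H ~ 125 GeV
br_z_ll   = 0.0673    # assumed BR(Z -> ee or mumu)
n_4lepton = n_higgs * br_h_zz * br_z_ll ** 2
print(n_4lepton)      # ~20 four-lepton events, before efficiencies
```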
The search for the Higgs boson is carried out in a variety of modes. Below, we give some details of the two modes that have the best invariant mass resolution and had played a particularly important role in the design of the ATLAS and CMS experiments.
H → γγ. In our detectors, the signature of the H → γγ decay mode is a pair of isolated photons each with a high transverse momentum of ~30 GeV or higher. Transverse momentum is the component of the momentum vector projected onto the plane perpendicular to the beams. Figure 3 shows such an event recorded with the CMS detector.
Events containing two isolated photon candidates were selected with the goal of identifying a narrow peak in the diphoton invariant mass distribution superimposed on a large background. This background arises from two sources: the dominant and irreducible one from a variety of SM processes, and a reducible background where one or both of the reconstructed photon candidates originate from misidentification of jet fragments.
The criteria to distinguish real photons from those coming from jet fragmentation (labeled “fake photons”) depend on the detector technologies of the two experiments. Both experiments are able to reject fake photons such that their contribution to the background is only 25% of the total. The size of this contribution was the subject of much debate in the 1990s, and the low value has been attained through the design of the em calorimeters and the rejection power of the associated analyses.
To enhance the sensitivity of the analysis, candidate two-photon events were separated into many mutually exclusive categories of different expected S/B ratios. These categories are defined on the basis of the expected properties of the reconstructed photons and on the presence of two jets expected to accompany a Higgs boson produced through the vector-boson fusion process, a particularly sensitive category. The analysis of events in each category represented a separate measurement, with a specific mass resolution and background, and the results from each category were statistically combined through a procedure that used likelihood analysis.
The distributions of the two-photon invariant masses, weighted by category, are shown in Fig. 4 for ATLAS and CMS, respectively, along with the best fit of a signal peak on top of a continuum background. The weight chosen was proportional to the expected S/B in the respective category.
An excess of events, over the background, was observed at a mass of ~125 GeV by both experiments, corresponding to a local significance of 4.5 standard deviations (σ) for ATLAS and 4.1σ for CMS.
H → ZZ → ℓℓℓℓ. The signature of the H → ZZ → ℓℓℓℓ decay mode is two pairs of oppositely charged isolated leptons (electrons or muons). The main background arises from a small continuum of known and nonresonant production of Z boson pairs. Figure 5 shows an event recorded with the ATLAS detector with the characteristics expected from the decay of the SM Higgs boson to a pair of Z bosons, one of which subsequently decays into a pair of electrons and the other into a pair of muons.
For a Higgs boson with a mass below twice the mass of the Z boson, one of the lepton pairs will typically have an invariant mass compatible with the Z boson mass (~91 GeV), whereas the other one will have a considerably lower mass, called “off-mass shell.” Because there are differences in the instrumental backgrounds and in the mass resolutions for the three possible combinations of electron and muon pairs (4e, 4μ, and 2e2μ), the searches were made in these independent subchannels and then combined statistically with a likelihood procedure. In the case of CMS, the angular distribution of the four leptons is included in the likelihood.
Figure 6 shows the four-lepton invariant mass distribution for the ATLAS and CMS experiments, in each case for the combination of all the channels (4e, 4μ, and 2e2μ). The peak near 90 GeV corresponds to the expected but rare decay of Z bosons to four leptons (Z → ℓℓℓℓ). The rate is higher in the CMS experiment than in ATLAS because of differing kinematic criteria applied to the four leptons. Both experiments observe a small but significant excess of events around an invariant mass of about 125 GeV above the expected continuum background, with a spread as expected from the mass resolution and statistical fluctuations corresponding to a local significance of 3.4σ and 3.2σ in ATLAS and CMS, respectively.
Combined results. The ATLAS and CMS experiments have both studied more Higgs boson decay modes than described in this paper, as discussed in the associated papers in this issue and (16, 17). Figure 7 shows the combined statistical significance observed for the different Higgs mass hypotheses by the ATLAS and CMS experiments, respectively. The largest local significance is observed for a SM Higgs boson mass of mH = 126.5 and 125.5 GeV, where it reaches 6.0σ and 5.0σ, corresponding to a background fluctuation probability of 2 × 10⁻⁹ and 5 × 10⁻⁷ for the ATLAS and CMS experiments, respectively. The expected local significance in the presence of a SM Higgs boson signal at these masses is found to be 4.9σ for the ATLAS experiment and 5.8σ for the CMS experiment. The evidence for a new particle is strengthened by the observation in two different experiments, comprising complementary detectors, operating independently.
In both experiments the excess was most significant in the two decay modes γγ and ZZ. These two decay modes indicate that the new particle is a boson; the two-photon decay implies that its spin (J) is different from 1 (32, 33). Because the Higgs field is scalar, the spin of the SM Higgs boson is predicted to be zero.
Furthermore, the number of observed events is roughly equal to the number of events expected from the production of a SM Higgs boson for all the decay modes analyzed, within the errors, in both experiments. The measured value of the observed/expected ratio, for the combined data from all the decay modes, was found to be 1.4 ± 0.3 and 0.87 ± 0.23 for the ATLAS and CMS experiments, respectively. The best estimates of the masses measured by the ATLAS and CMS experiments are also consistent: 126.0 ± 0.6 GeV and 125.3 ± 0.6 GeV, respectively.
Outlook. The results from the two experiments are consistent, within uncertainties, with the expectations for the SM Higgs boson, a fundamental spin-0 (scalar) boson. Much more data need to be collected to enable rigorous testing of the compatibility of the new boson with the SM and to establish whether the properties of the new particle imply the existence of physics beyond the SM. For this boson, at a mass of ~125 GeV, almost all the decay modes are detectable, and hence comprehensive tests can be made. Among the remaining questions are whether the boson has spin J = 0 or J = 2, whether its parity is positive or negative, whether it is elementary or composite, and whether it couples to particles in the exact proportion predicted by the SM [i.e., for fermions (f) proportional to mf² and for bosons (V) proportional to mV⁴]. These properties are studied via the new boson’s rate of decay into different final states, the angular distributions of the decay particles, and its rate of production in association with other particles such as W and Z bosons. The SM Higgs boson is predicted to be an elementary particle with JP = 0⁺. Much progress is expected, as by the end of 2012 the ATLAS and CMS detectors should be able to triple the amount of data used for the results presented here. The LHC will then be shut down in 2013 and 2014 to refurbish parts of the accelerator so that it will be able to reach its full design energy (14 TeV) and enable precise measurements of the properties of the new boson and the full exploration of the physics of the TeV energy scale, especially the search for physics beyond the SM.
It is known that quantum corrections make the mass of a fundamental scalar particle float up to the next highest physical mass scale currently known, which, in the absence of extensions to the SM, is as high as 10¹⁵ GeV. A favored conjecture states that this is avoided by a set of heavy particles not yet discovered. For each known SM particle there would be a partner with spin differing by half a unit; fermions would have boson partners and vice versa, in a symmetry called supersymmetry. This happens because in quantum mechanics, corrections involving fermions and bosons have opposite signs for their amplitudes and hence cancel each other. In the minimal supersymmetry model, five types of Higgs bosons are predicted to exist. Furthermore, the lightest stable neutral particle of this new family of supersymmetric particles could be the particle constituting dark matter. If, as conjectured, such particles are light enough, they ought to reveal themselves at the LHC.
The discovery of the new boson suggests that we could well have discovered a fundamental scalar field that pervades our universe. Astronomical and astrophysical measurements point to the following composition of energy-matter in the universe: ~4% normal matter that “shines,” ~23% dark matter, and the rest forming “dark energy.” Dark matter is weakly and gravitationally interacting matter with no electromagnetic or strong interactions. These are the properties carried by the lightest supersymmetric particle. Hence the question: Is dark matter supersymmetric in nature? Fundamental scalar fields could well have played a critical role in the conjectured inflation of our universe immediately after the Big Bang and in the recently observed accelerating expansion of the universe that, among other measurements, signals the presence of dark energy in our universe.
The discovery of the new boson is widely expected to be a portal to physics beyond the SM. Physicists at the LHC are eagerly looking forward to establishing the true nature of the new boson and to the higher-energy running of the LHC, to find clues or answers to some of the other fundamental open questions in particle physics and cosmology. Such a program of work at the LHC is likely to take several decades.
## References and Notes
Acknowledgments: The construction, and now the operation and exploitation, of the large and complex ATLAS and CMS experiments have required the talents, the resources, and the dedication of thousands of scientists, engineers, and technicians worldwide. Many have already spent a substantial fraction of their working lives on these experiments. This paper is dedicated to all our colleagues who have worked on these experiments. None of these results could have been obtained without the wise planning, superb construction, and efficient operation of the LHC accelerator and the WLCG computing.
https://www.dlubal.com/en-US/support-and-learning/support/faq/002491

FAQ 002491 EN-US
02/22/2019
# Can I display the center of gravity of my model?
Select the entire model (or just a group of objects) and right-click the selection. In the shortcut menu, select the "Center of Gravity and Info" feature.
The result is an overview of the center of gravity coordinates and additional information, such as volume and weight of material (see the figure). The center of gravity is also shown in a graphic.
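Under the hood, a center of gravity is just a mass-weighted average of positions; a generic sketch for illustration (plain Python with hypothetical masses and coordinates, not the Dlubal API):

```python
# parts: (mass_in_kg, (x, y, z)) for each object in the selection
parts = [
    (120.0, (0.0, 0.0, 1.5)),   # hypothetical member 1
    ( 80.0, (2.0, 0.0, 1.0)),   # hypothetical member 2
]
total_mass = sum(m for m, _ in parts)
cog = tuple(
    sum(m * xyz[i] for m, xyz in parts) / total_mass for i in range(3)
)
print(total_mass, cog)  # 200.0 kg, (0.8, 0.0, 1.3)
```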
https://math.stackexchange.com/questions/1092323/why-isnt-fx-x-cos-frac-pix-differentiable-at-x-0-and-how-do-we-for/1092329 | # Why isn't $f(x) = x\cos\frac{\pi}{x}$ differentiable at $x=0$, and how do we foresee it?
Consider $$f(x)=\begin{cases} x\cos\frac{\pi}{x} & \text{for} \ x\ne0 \\ 0 & \text{for} \ x=0. \end{cases}$$
Its difference quotient at the origin is $\frac{f(h)-f(0)}{h}=\cos\frac{\pi}{h}$, and thus $f$ is not differentiable there because $\lim\limits_{h\to0}\cos\frac{\pi}{h}$ does not exist. This is the plot of $y=x \cos \frac{\pi}{x}$:
But here's how my book goes on:
Examining the figure we can foresee that the tangent line in a generic point $P$ of the graph doesn't tend to any limiting position as $P$ tends to the origin along the curve itself. One may think this happens because the graph of the function completes infinitely many oscillations in any neighbourhood of the origin. In fact, no: indeed the function thus defined: $$g(x)=\begin{cases} x^2\cos\frac{\pi}{x} & \text{for} \ x\ne0 \\ 0 & \text{for} \ x=0 \end{cases}$$ has a graph that completes infinitely many oscillations in any neighbourhood of the origin, but, as you can verify, it is differentiable at $x=0$ and we have $g'(0)=0$.
This is the plot of $y=x^2 \cos \frac{\pi}{x}$:
So, I have two questions related to what I quoted from the book: how do we foresee the non-differentiability of $f$, given that, correctly, the infinitude of the oscillations is not an argument for it? And then, why isn't $f$ differentiable, instead of $g$?
I shall emphasise that I know that, simply, the limit as $h\to 0$ of the difference quotient of $f$ doesn't exist, while that of $g$ does, but I've been wondering about another kind of reason after reading that excerpt. Or is my book wrong in mentioning other reasons?
• Since non-differentiability is so common I would rather suspect every point of being non-differentiable locus and focus on foreseeing differentiability instead. – Pp.. Jan 5 '15 at 21:47
• @Pp.. that would be a good point of view if you were choosing functions at random. But we don't do that and since most functions we look at are infinitely differentiable.... – Ittay Weiss Jan 5 '15 at 22:04
• @IttayWeiss Actually, that is exactly what we do, don't get confused. It just happens that we use theorems like the properties of differentiation with respect to the arithmetic operations and elementary functions to get rid of big chunks of suspects. – Pp.. Jan 5 '15 at 22:15
• What the quoted passage is trying to say is that there is a big difference between the statements "$f'(0)=A$ exists" and "$f'(x)\to A$ as $x\to 0$". The function $g$ is a classic example of a function where the derivative exists at every point, but the function $g'$ is not continuous at the origin, since $g'(0)=0$ even though $g'(x)$ doesn't have a limit as $x \to 0$. – Hans Lundmark Jan 6 '15 at 9:33
• In other words: the way to think about $f'(0)$ is not to look at the tangent line at a nearby point $P$ on the curve and then let that point $P$ approach the origin. Rather, you take a rubber band ("infinitely shrinkable"), connect one end to the origin and the other end to a moving point $P$ on the curve, let $P$ approach the origin along the curve, and see if the rubber band can "make up its mind" about what slope to have in the limit. – Hans Lundmark Jan 6 '15 at 9:38
One way to "foresee" it is that there are clearly two lines in the first image you posted that serve as an envelope to $f(x)$. These two lines crossing at the origin make it impossible to approximate $f$ near $x=0$ as a linear function. This is the criterion of differentiability you want to keep in mind when trying to make this kind of judgement.
On the other hand, in the second image, the envelope is two parabolas touching at the origin. Since the parabolas are tangent at the origin, they force $y=0\cdot x$ to be the only way to approximate $f(x)$ as a linear function near $x=0$.
In the end, the criterion for differentiability of functions squeezed inside an envelope $$e_-(x)\leq f(x)\leq e_+(x)$$ is: no matter how wildly $f(x)$ oscillates inside the envelope, $f(x)$ will be differentiable at $x=0$ if (i) the envelopes touch each other: $$e_-(0)=e_+(0)$$ that is, they do squeeze $f(x)$ appropriately; and (ii) they are both differentiable with equal derivatives: $${e'}_{\!-}(0)={e'}_{\!+}(0)$$ thus forcing $f(x)$ to be differentiable with the same derivative.
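Applied to the two curves above (a quick check): for $f$ the envelope lines are $e_\pm(x)=\pm x$, which satisfy (i) but not (ii), since ${e'}_{\!-}(0)=-1\neq 1={e'}_{\!+}(0)$; and indeed $f$ touches both lines arbitrarily close to the origin (at $x=\frac{1}{2k}$ and $x=\frac{1}{2k+1}$), so no single tangent slope can exist there. For $g$ the envelopes are $e_\pm(x)=\pm x^2$, which satisfy both (i) and (ii) with ${e'}_{\!\pm}(0)=0$, forcing $g'(0)=0$.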
• Perhaps you might comment that the parabolas have their vertices at $x=0$. – Ian Jan 5 '15 at 22:04
• Indeed, this is the case in this function. However, it is not a necessary condition for differentiability. It just indicates that the derivative is $0$. – fonini Jan 5 '15 at 22:05
• It's not necessary, but the envelope could be two parabolas that don't have their vertex at the point and which have different derivatives at the point. Then you'd be back in the first circumstance. – Ian Jan 5 '15 at 22:07
• Thank you very much, Pedro (or, if you prefer, Angelo)! (My mother is from Rio.) See you soon! – Vincenzo Oliva Jan 5 '15 at 22:49
• haha :) Just for the record, envelope is envoltória in Portuguese. Might be similar in Italian. – fonini Jan 5 '15 at 22:56
Normally the rules of differentiation show that any elementary function is continuous and differentiable wherever it is defined. The problems arise when the function's formula is undefined at some point and then we need to check the differentiability via the definition of derivative. Another case is when we define functions via multiple formulas in different parts of the domain. Then we need to check for existence of derivative at the boundary points.
The point which your book has highlighted is very important but has perhaps been left out by the other answers. It says that most common functions are differentiable except at exceptional points. Thus $f(x) = x\cos (\pi/x)$ is an example where $f$ is differentiable at all $x \neq 0$ and $$f'(x) = \cos (\pi/x) + \frac{\pi}{x}\sin(\pi/x)$$ When we define $f(0) = 0$ then we get continuity. But $f$ is still not differentiable at $0$.
Now your book mentions a very deep idea. Normally people try to look at the formula for $f'(x)$ and try to see if it tends to a particular value as $x \to 0$. In the above example the limit of $f'(x)$ does not exist and hence we conclude that $f$ is not differentiable at $0$.
The book says that this is not the right way to go and gives another example which sort of is a failure case for above technique. The example is $f(x) = x^{2}\cos (\pi/x), f(0) = 0$. We have $$f'(x) = 2x\cos(\pi/x) + \pi\sin(\pi/x)$$ and again we see that $f'(x)$ does not tend to a limit as $x \to 0$. And hence we conclude that $f'(0)$ does not exist. This is wrong.
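Indeed, computing directly from the definition settles it in one line: $$g'(0)=\lim_{h\to 0}\frac{g(h)-g(0)}{h}=\lim_{h\to 0}h\cos\frac{\pi}{h}=0,$$ by the squeeze $\left|h\cos\frac{\pi}{h}\right|\leq|h|$. So $g'(0)$ exists even though $\lim_{x\to 0}g'(x)$ does not.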
We have the following theorem:
Theorem: If $f$ is continuous at $a$ and $\lim_{x \to a}f'(x) = L$ then $f'(a) = L$.
But the converse of the theorem does not hold. Thus if $f'(a) = L$ it does not mean that $\lim_{x \to a}f'(x)$ exists necessarily.
Thus the method of using limit of $f'(x)$ works only when the limit exists. Better not to try this approach and rather use the definition of derivative. I must say that your book has done a great job to highlight this fact about limit of derivative at a point and its relation to existence of derivative at that point (although not in the formal manner which I have given in my answer).
• Excellent. I took me a while to discern what the real Q is, which is "Why did the book say that?" – DanielWainfleet Sep 6 '17 at 18:55
• Thanks @DanielWainfleet for your encouraging words. Such feedback means a lot to me. Btw I was really impressed by the book excerpt given in the question where it talked about limiting position of tangent. – Paramanand Singh Sep 6 '17 at 21:47
The enveloping curves determine differentiability. During the infinite oscillations, the tangent of the first curve cannot decide between the two envelope slopes. For the second curve, both envelope slopes equal zero, which makes it differentiable, with the derivative coinciding with that common slope.
The book is giving a WARNING: "One $may$ think this (i.e. non-differentiable at $0$) happens because ( of the oscillations)...." and then shows that the same type of pattern of oscillations also exists for $g(x)=x^2\cos (\pi /x),$ so the occurrence of such an oscillatory pattern is insufficient to determine whether the function is differentiable at $0$.
Perhaps a different phrasing would have made this clearer. And I think it would have been more emphatic to have said that the oscillations in $g(x)$ near $0$ are large enough that $\lim_{x\to 0}g'(x)$ does not exist, but $g'(0)$ does exist, so the non-existence of $\lim_{x\to 0}f'(x)$ is, by itself, insufficient to determine whether $f'(0)$ exists.
https://math.stackexchange.com/questions/2930611/derivative-of-fracddt-f-gammat-with-differential-operators-frac-pa | # Derivative of $\frac{d}{dt} f(\gamma(t))$ with differential operators $\frac{\partial}{\partial z}$ and $\frac{\partial}{\partial \overline{z}}$
Let $$f: \mathbb{C} \rightarrow \mathbb{C}$$ a $$C^1$$ function (i.e. real and imaginary part $$f_1, f_2$$ are continuously differentiable, where $$f=f_1 + i \cdot f_2$$) and let $$\gamma: \mathbb{R} \rightarrow \mathbb{C}, t \mapsto \gamma(t)\, \,$$ $$C^1$$. Then we have that \begin{align} \frac{d}{dt} \, \, f(\gamma(t)) = \frac{\partial f(\gamma(t))}{\partial z} \cdot \gamma'(t) + \frac{\partial f}{\partial \overline{z}} \cdot \overline{\gamma'(t)} \end{align}
I have some trouble to do the calculation to get the formula (I tried to use the Cauchy-Riemann equations but it doesn't work, because the function $$f$$ is not holomorphic). Any suggestion? Thanks in advance!
• I think real and imaginary parts of $f$ will not help you. Perhaps you need $z = \gamma(t)$ and $\overline{z} = \overline{\gamma(t)}$. – GEdgar Sep 25 '18 at 19:01
What you are saying is that you have the path in the complex plane that is specified by a parametrization $$\gamma(t)$$. So you have that $$z(t) = \gamma(t) \Rightarrow \bar{z} = \bar{\gamma}$$. Then from the multivariable chain rule you have that
$$\frac{df(z,\bar{z})}{dt} = \frac{\partial f}{\partial z}\frac{d z}{d t} + \frac{\partial f}{\partial \bar{z}}\frac{d \bar{z} }{d t}$$
$$\frac{df(z,\bar{z})}{dt} = \frac{\partial f}{\partial z} \gamma'(t) + \frac{\partial f}{\partial \bar{z}} \overline{\gamma'(t)}$$
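One way to verify this from real variables (a short check, writing $\gamma = x + iy$ and using the Wirtinger operators $\partial_z = \tfrac{1}{2}(\partial_x - i\partial_y)$, $\partial_{\bar z} = \tfrac{1}{2}(\partial_x + i\partial_y)$):
$$\frac{\partial f}{\partial z}\gamma' + \frac{\partial f}{\partial \bar z}\overline{\gamma'} = \tfrac{1}{2}(f_x - if_y)(x' + iy') + \tfrac{1}{2}(f_x + if_y)(x' - iy') = f_x x' + f_y y' = \frac{d}{dt}f(\gamma(t)),$$
which is exactly the real multivariable chain rule applied to $f_1 + i f_2$.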
• I think you mean $$\frac{df(z, \bar{z})}{dt} = \frac{\color{red} \partial f}{\color{red} \partial z} \frac{\color{blue} dz}{\color{blue} dt} + \frac{\color{red} \partial f}{\color{red} \partial \bar{z}} \frac{\color{blue} d \bar{z}}{\color{blue} dt}$$ – Mattos Sep 26 '18 at 4:54
https://math.stackexchange.com/questions/2475375/volume-of-a-manifold | # Volume of a manifold
Throughout this post, I am presuming $M$ to be an $2$-dimensional manifold that is parametrized by one chart $\varphi$, and I presume $\omega$ be a $2$-form on $M$.
Apparently, there is no natural way to define the volume of a manifold, if it's not a pseudo-Riemannian manifold - i.e., we don't have a metric on it of some kind. Here is a question I have, based on that:
1
My "argument" against this a few days ago was this - why does it not make sense to define the volume as $\int_M 1 dx_1 \wedge .. \wedge dx_n$?
To this I got the reply that I can "scale" that form, and it still makes perfect sense to define that as volume - so it's arbitrary to choose $1 dx_1 \wedge .. \wedge dx_n$ as opposed to $2 dx_1 \wedge .. \wedge dx_n$. I didn't understand this argument, but I think I do now, and I'd like to make sure I understand it correctly.
I think the issue here stems from my previous understanding of $dx_i$ primarily as symbols, as opposed to actual linear maps from the tangent space, that are induced by some map. I thought $1 dx_1 \wedge .. \wedge dx_n$ is the always the same thing as $1 dy_1 \wedge .. \wedge dy_n$, that it's just a different notation of the coordinates - but I see now, that in some sense, that's not the case.
-
As a linear map from the tangent space, the expression $1 dx_1 \wedge .. \wedge dx_n$ by itself doesn't really make sense on it's own, and it's not something I can integrate on $M$ - first, I need to pick a certain chart, to know what $dx_i$ actually are - and this is arbitrary.
More formally:
If for a chart $\varphi$ I denote the 'induced' forms $dx_i$ as those forms, for which $dx_i(\frac{\partial }{\partial x_i})=1$, where $(\frac{\partial }{\partial x_i})$ is the tangent vector induced by $\varphi$. I take a "scaled" version of $\varphi$ and call this new chart $\varphi'$. If I now consider two differential forms :
$\omega = 1 dx_1 \wedge .. \wedge dx_n$ (where $dx_i$ are induced by $\varphi$)
$\omega'= 1 dx_1' \wedge .. \wedge dx_n'$ (where $dx_i'$ are induced by $\varphi'$),
it's actually true that $\omega' = \alpha dx_1 \wedge .. \wedge dx_n$ , for some $\alpha$.
Thus, their integrals will give something different, depending on $\alpha$, and choosing one as opposed to the other is arbitrary, so it doesn't make sense to prefer either of them for defining volume.
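A concrete instance (my own illustration, with a hypothetical rescaled chart): if $\varphi'$ is $\varphi$ composed with a halving of coordinates, the new coordinates satisfy $y_i = 2x_i$, so $dy_1 \wedge dy_2 = 4\, dx_1 \wedge dx_2$ and $\int_M dy_1 \wedge dy_2 = 4 \int_M dx_1 \wedge dx_2$; the "volume" depends on the chosen chart.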
-
Is my answer to 1 correct?
2
Based on all of this, does it then make sense to think of differential forms as some type of "measure", that gives me volumes of tangent vectors, parallelograms formed by a choice of two tangent vectors, or generalizations of the previous?
Essentially, I find that your answer is correct: $dx_1\wedge...\wedge dx_n$ depends on the parametrisation chosen, as in your case, where $\phi$ and $\phi'$ give different differential forms.
2: In general differential forms are not "measures", because they are not always positive. For example, there is no $1$-form on $\mathbb{R}^2$ which gives you the length of curves.
But you could think of an $n$-form as a "functional" on differentiable $n$-subvarieties which satisfies a supplementary condition: you can describe it locally as an alternating form on $n$ copies of the vector space.
So, $1$-forms could be integrated on curves, $2$-forms on surfaces, etc.
For example, the integral $\int_a^bf(x) dx$ could be seen as integrating the differential form $\omega=f(x) dx$ on the interval $[a,b]$, which is a curve of $\mathbb{R}$.
Anyway, there is one case where you can consider a differential form as a measure: when $\omega$ is an $n$-differential form on an $n$-dimensional variety, you can measure a subvariety $V$ of dimension $n$ by calculating $|\int_V \omega|$
Personally, I've found these notes pretty useful in understanding differential forms: http://www.math.uqam.ca/~powell/Bachman_Geometric_Approach_to_Differential_Forms.pdf
https://www.physicsforums.com/threads/seperating-a-summation-problem.232930/ | # Seperating a Summation problem.
1. May 3, 2008
### DKATyler
[SOLVED] Separating a Summation problem.
1. The problem statement, all variables and given/known data
The Problem:
Separate a sum into 2 pieces (part of a proof problem).
Using: $$X= \sum^{n}_{k=1}\frac{n!}{(n-k)!}$$
Solve in relation to n and X:
$$\sum^{n+1}_{k=1}\frac{(n+1)!}{(n+1-k)!}$$
2. Relevant equations
???
3. The attempt at a solution
$$\sum^{n}_{k=1}[\frac{(n+1)!}{(n+1-k)!}]+\frac{(n+1)!}{(n+1-[n+1])!}$$
$$\sum^{n}_{k=1}[\frac{(n)!}{(n-k)!}*\frac{(n+1)}{(n+1-k)}]+\frac{(n+1)!}{(n+1-[n+1])!}$$
$$(n+1)*\sum^{n}_{k=1}[\frac{(n)!}{(n-k)!}*\frac{1}{(n+1-k)}]+(n+1)!$$
I think this is fairly close but, I have no way of getting rid of the 1/(n+1-k) term.
2. May 4, 2008
### Defennder
Can you show that $$\sum_{k=2}^n \frac{n!}{(n-k+1)!} \ + \ \frac{n!}{(n+1-(n+1))!} = \sum^{n}_{k=1}\frac{n!}{(n-k)!}$$?
If you do that, you can express $$\sum^{n+1}_{k=1}\frac{(n+1)!}{(n+1-k)!}$$ as a summation starting from k=2. Then you should be able to get the desired expression.
Try it out for some values of k and n, then you'll see a pattern.
In general the pattern is $$\sum_{k=1}^{n}f(k) = \sum_{k=2}^{n} f(k-1)\ +\ f(n)$$
3. May 4, 2008
### DKATyler
That helps quite a bit.
$$n!+\sum^{n}_{k=2}\frac{n!}{(n+1-k)!}=\sum^{n}_{k=1}\frac{n!}{(n-k)!}=X$$
So continuing from this step:
$$(n+1)*\sum^{n}_{k=1}\frac{n!}{(n-k+1)!}+\frac{(n+1)!}{(n+1-[n+1])!}$$
changing the Index and adding/subtracting n!
$$(n+1)*(1-n!+n!+\sum^{n}_{k=2}\frac{n!}{(n-k+1)!})+(n+1)!$$
Solves the Equation in terms of n and X:
$$(n+1)*(1-n!+X)+(n+1)!$$
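A quick sanity check with $n=2$: $X=\frac{2!}{1!}+\frac{2!}{0!}=4$, and the target sum is $\frac{3!}{2!}+\frac{3!}{1!}+\frac{3!}{0!}=3+6+6=15$, while the formula gives $(n+1)(1-n!+X)+(n+1)!=3(1-2+4)+3!=9+6=15$. It matches.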
Yep, that worked, now I can complete the rest of the proof :) Thank you very much. How to mark this "[solved]?"
4. May 4, 2008
### Defennder
Go to your first post in this thread, at the top right hand corner of the post marked "Thread Tools"
http://www.boundaryvalueproblems.com/content/2013/1/179 | Research
# Infinitely many solutions for a class of quasilinear elliptic equations with p-Laplacian in $\mathbb{R}^N$
Gao Jia*, Jie Chen and Long-jie Zhang
Author Affiliations
College of Science, University of Shanghai for Science and Technology, Shanghai, 200093, China
For all author emails, please log on.
Boundary Value Problems 2013, 2013:179 doi:10.1186/1687-2770-2013-179
Received: 15 December 2012 Accepted: 22 July 2013 Published: 6 August 2013
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
### Abstract
In this paper, we study the multiplicity of solutions for a class of quasilinear elliptic equations with p-Laplacian in $\mathbb{R}^N$. In this case, the functional J is not differentiable. Hence, it is difficult to work within the classical framework of critical point theory. To overcome this difficulty, we use a nonsmooth critical point theory, which provides the existence of critical points for nondifferentiable functionals.
MSC: 35J20, 35J92, 58E05.
##### Keywords:
quasilinear elliptic equations; nondifferentiable functional; p-Laplacian; multiple solutions
### 1 Introduction and main results
Recently, the multiplicity of solutions for the quasilinear elliptic equations has been studied extensively, and many fruitful results have been obtained. For example, in [1], Shibo Liu considered the existence of multiple nonzero solutions of the Dirichlet boundary value problem
(1.1)
where $\Delta_p$ denotes the p-Laplacian operator, and $\Omega$ is a bounded domain in $\mathbb{R}^N$ with smooth boundary $\partial\Omega$.
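(Here and throughout, $\Delta_p$ is the usual p-Laplacian, $\Delta_p u = \operatorname{div}\left(|\nabla u|^{p-2}\nabla u\right)$, which reduces to the Laplacian when $p=2$.)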
Moreover, Aouaoui studied the following quasilinear elliptic equation in [2]:
(1.2)
and proved the multiplicity of solutions of the problem (1.2) by using the nonsmooth critical point theory. One can refer to [3,4] and [5] for more results.
In this paper, we shall investigate the existence of infinitely many solutions of the following problem
(1.3)
where , and , is a given continuous function satisfying
In order to determine weak solutions of (1.3) in a suitable functional space E, we look for critical points of the functional defined by
(1.4)
where . Under reasonable assumptions, the functional J is continuous, but not even locally Lipschitz. However, one can see from [4,6] and [7] that the Gâteaux-derivative of J exists in the smooth directions, i.e., it is possible to evaluate
for all and .
Definition 1.1 A critical point u of the functional J is defined as a function such that , , i.e.,
(1.5)
Our approach to study (1.3) is based on the nonsmooth critical point theory developed in [8] and [9]. Dealing with this class of problems, the main difficulty is that the associated functional is not differentiable in all directions.
The main goal here is to establish multiplicity of results for (1.3), when is odd and is even in s. Such solutions for (1.3) will follow from a version of the symmetric mountain pass theorem due to Ambrosetti and Rabinowitz [10,11]. Compared with problem (1.2) in [2], problem (1.3) is much more difficult, since the discreteness of the spectrum is not guaranteed. Therefore, we only consider the first eigenvalue .
To state and prove our main result, we consider the following assumptions.
Suppose that and .
(H1) Let be a function such that
• for each , is measurable with respect to x;
• for a.e. , is a function of class with respect to s;
• there exist such that
(1.6)
(1.7)
(H2) There exist , and such that
(1.8)
(H3) Let a Carathéodory function satisfy , a.e. and
(1.9)
where θ is the same as that in (H2).
(H4) There exists such that
(1.10)
where is a positive constant.
Example 1.1 Let . The following function satisfies hypotheses (H1) and (H2)
and the corresponding constants are
Example 1.2 The following function satisfies hypotheses (H3) and (H4)
On the other hand, we define the operator . It follows from [12] that the discreteness of the spectrum is not guaranteed. Hence, we only consider the first eigenvalue , where
Next, we can state the main theorem of the paper.
Theorem 1.1 Assume that and satisfy (H1)-(H4). Moreover, let and , a.e. , . If there exists a positive number μ such that , then problem (1.3) has infinitely many distinct solutions in , i.e., there exists a sequence satisfying (1.3) and , as .
To explain our result, we introduce some functional spaces. We define the reflexive Banach space E of all functions with the norm .
Such a weighted Sobolev space has been used in many previous papers, see [13] and [14]. Now, we give an important property of the space E, which will play an essential role in proving our main results.
Remark 1.1 One can easily deduce and for . More details can be found in [2].
Throughout this paper, let denote the norm of E and () means that converges strongly (weakly) in corresponding spaces. ↪ stands for a continuous map, and ↪↪ means a compact embedding map. C denotes any universal positive constant unless specified.
The paper is organized as follows. In Section 2, we introduce the nonsmooth critical framework and preliminaries to our work. In Section 3, we give some lemmas to prove the main result. Finally, the proof of Theorem 1.1 is presented in Section 4.
### 2 Nonsmooth critical framework and preliminaries
Our results are based on the techniques of nonsmooth critical point theory. In this section, we recall some basic tools from [8] and [9].
Definition 2.1 Let be a metric space, let be a continuous functional and . We denote by the supremum of the σ’s in such that there exist and a continuous map , satisfying
The extended real number is called the weak slope of I at u.
Note that the notion above was independently introduced in [15], as well.
Definition 2.2 Let be a metric space, let be a continuous functional and . We say that I satisfies , i.e., the Palais-Smale condition at level c, if every sequence in X with and admits a strongly convergent subsequence.
In order to treat the Palais-Smale condition, we need to introduce an auxiliary notion.
Definition 2.3 Let c be a real number. We say that functional I satisfies the concrete Palais-Smale condition at level c ( for short) if every sequence satisfying
possesses a strongly convergent subsequence in E, where is some real number converging to zero.
Remark 2.1 Under assumptions (H1)-(H4), if the functional J satisfies (1.4), then J is continuous, and for every we have
where denotes the weak slope of J at u.
Remark 2.2 Let c be a real number. If J satisfies , then J satisfies .
Proof Let be a sequence such that
Note that for ,
By Remark 2.1, we have . Taking , the conclusion follows. □
### 3 Basic lemmas
To derive our main theorem, we need the following lemmas. The first lemma is the version of the Ambrosetti-Rabinowitz mountain pass lemma [10,11] and [16].
Lemma 3.1 Let X be an infinite-dimensional Banach space, and let be a continuous even functional satisfying for every . Assume that
(i) there exist , and a subspace of finite codimension such that
(ii) for every finite-dimensional subspace , there exists such that
Then there exists a sequence of critical values of I with .
Lemma 3.2 If is a critical point of J, then .
Proof For , , consider the real functions , and defined in ℝ by
(3.1)
and . Denoting and , we can take as a test function in (1.5). Therefore,
Noting that and , we get
From (1.10) and the fact we deduce
Since a.e. in and in E as . It follows from that
Denote . If , then the result is true. In the following discussion, is assumed. By (1.6), we obtain
(3.2)
Note that , then we can get
(3.3)
On the other hand, we have
which implies that
(3.4)
Eventually, one can deduce from (3.2)-(3.4) that
(3.5)
By Theorem 5.2 of [17], we get that . Replacing by , we can similarly prove that . We conclude that , and the proof of Lemma 3.2 is completed. □
Lemma 3.3 Let be a bounded sequence in E with
(3.6)
where is a sequence of real numbers converging to zero. Then there exists such that a.e. in and, up to a subsequence, it is weakly convergent to u in E. Moreover, we have
(3.7)
i.e., u is a critical point of J.
Proof Since is bounded in E, and there is a (see [18]) such that, up to a subsequence,
Moreover, since satisfies (3.6), by Theorem 2.1 of [19], we have, up to a further subsequence, a.e. in .
We will use the device of [20]. We consider the test functions
(3.8)
where , and . According to (1.6) and (1.7), we have
Since (3.6) holds by density for every , we can put in (3.6) and obtain that
(3.9)
On the other hand, note that
(3.10)
One can deduce from (3.10) and Fatou’s lemma that
(3.11)
We consider the test functions with , and , , ,
This together with (3.11) can prove that
(3.12)
In a similar way, by considering the test functions , it is possible to prove that
(3.13)
From (3.12) and (3.13), it follows that
(3.14)
Finally, we can deduce (3.7) from (3.14). □
Remark 3.1 (see [21])
Let be a sequence in E satisfying (3.6). Then and
(3.15)
In the following lemma, we will prove the boundedness of a sequence under (1.6), (1.8) and (1.9).
Lemma 3.4 Let and be a sequence in E satisfying (3.6) and
(3.16)
Then the sequence is bounded in E.
Proof Calculating , from (3.15) and (3.16), we obtain
From (1.8) and (1.9), it follows that
(3.17)
Moreover, there exist and such that
Therefore, denoting , we obtain from (3.17) that
(3.18)
By virtue of hypothesis (H3), we know that there exist and such that
(3.19)
From (3.18) and (3.19), it follows that
(3.20)
On the other hand, by Hölder’s inequality and Young’s inequality, for all , there exists such that
(3.21)
Using (3.20) and (3.21), we get
(3.22)
Choosing in (3.22), we find that is bounded in E. □
Lemma 3.5 Let the sequence be the same as that in Lemma 3.3. Then, up to a subsequence, it converges strongly to u in E.
Proof By Lemma 3.3, we know that u is a critical point of the functional J. Then, from Lemma 3.2, we get . Therefore, taking as a test function in (3.7), we get
(3.23)
Since the sequence is bounded in E, we can assume that there exists satisfying
By Lemma 3.3, a.e. in . Then by Fatou’s lemma, we have
(3.24)
Moreover, by and , we get
(3.25)
(3.26)
By using (3.23)-(3.26) and passing to limit in (3.15), we obtain
(3.27)
On the other hand, by Lebesgue’s dominated convergence theorem and the weak convergence of to u in E, we get
(3.28)
(3.29)
(3.30)
Moreover, since and are bounded in , then we have
Therefore, from the definition of weak convergence, we obtain
(3.31)
(3.32)
Combining (3.27)-(3.32), it follows that
It is well known that the following inequality
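(presumably in its standard form, the monotonicity estimate for the p-Laplacian with $p \geq 2$)
$$\left\langle |\xi|^{p-2}\xi - |\eta|^{p-2}\eta,\ \xi-\eta \right\rangle \ \geq\ C\,|\xi-\eta|^{p}$$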
(3.33)
holds for any , and . Therefore,
According to (1.6), we conclude that converges strongly to u in E. □
Lemma 3.6 For every real number c, the functional J satisfies .
Proof Let be a sequence in E satisfying (3.6) and (3.16). By Lemma 3.4, is bounded in E. Therefore, the conclusion can be deduced from Lemma 3.5. □
### 4 Proof of Theorem 1.1
It is easy to check that the functional J is continuous and even. Moreover, by Remark 2.2 and Lemma 3.6, J satisfies for every .
On the other hand, from (1.4), (1.6), (1.9) and (1.10), for , we have
(4.1)
We discuss (4.1) in the following two cases:
In case , we get
In case , by the definition of , we get
i.e., . Therefore, if λ satisfies , there exist small enough and such that
Hence, condition (i) of Lemma 3.1 holds with .
Now we consider a finite-dimensional subspace W of E. Let and . From (1.6), we have
(4.2)
By virtue of (1.9) and (1.10), we know that there exist , satisfying a.e. and a positive constant such that
(4.3)
Combining (4.2)-(4.3), we have
(4.4)
Since W is finite-dimensional, all norms on W are equivalent. From (4.4), there exists such that
In view of , we deduce that the set is bounded in E and condition (ii) of Lemma 3.1 holds. By Lemma 3.1, the conclusion follows.
### Competing interests
The authors declare that they have no competing interests.
### Authors’ contributions
We declare that all authors collaborated and dedicated the same amount of time to producing this article.
### Acknowledgements
The authors express their sincere thanks to the referees for their valuable criticism of the manuscript and for helpful suggestions. This work has been supported by the Natural Science Foundation of China (No. 11171220) and Shanghai Leading Academic Discipline Project (XTKX2012).
### References
1. Liu, SB: Multiplicity results for coercive p-Laplacian equations. J. Math. Anal. Appl.. 316, 229–236 (2006). Publisher Full Text
2. Aouaoui, S: Multiplicity of solutions for quasilinear elliptic equations in . J. Math. Anal. Appl.. 370(2), 639–648 (2010). Publisher Full Text
3. Alves, CO, Carrião, PC, Miyagaki, OH: Existence and multiplicity results for a class of resonant quasilinear elliptic problems on . Nonlinear Anal.. 39, 99–110 (2000). Publisher Full Text
4. Canino, A: Multiplicity of solutions for quasilinear elliptic equations. Topol. Methods Nonlinear Anal.. 6, 357–370 (1995)
5. Squassina, M: Existence of multiple solutions for quasilinear diagonal elliptic systems. Electron. J. Differ. Equ.. 1999, 1–12 (1999)
6. Arcoya, D, Boccardo, L: Critical points for multiple integrals of the calculus of variations. Arch. Ration. Mech. Anal.. 134, 249–274 (1996). Publisher Full Text
7. Arcoya, D, Boccardo, L: Some remarks on critical point theory for nondifferentiable functionals. NoDEA Nonlinear Differ. Equ. Appl.. 6, 79–100 (1999). Publisher Full Text
8. Corvellec, JN, Degiovanni, M, Marzocchi, M: Deformation properties of continuous functionals and critical point theory. Topol. Methods Nonlinear Anal.. 1, 151–171 (1993)
9. Degiovanni, M, Marzocchi, M: A critical point theory for nonsmooth functionals. Ann. Mat. Pura Appl.. 167(4), 73–100 (1994)
10. Ambrosetti, A, Rabinowitz, PH: Dual variational methods in critical point theory and applications. J. Funct. Anal.. 14, 349–381 (1973). Publisher Full Text
11. Silva, EAB: Critical point theorems and applications to differential equations. Ph.D. thesis, University of Wisconsin-Madison (1988)
12. Brasco, L, Franzina, G: On the Hong-Krahn-Szego inequality for the p-Laplace operator. Manuscr. Math.. 141, 537–557 (2013). Publisher Full Text
13. Bartsh, T, Wang, ZQ: Existence and multiplicity results for some superlinear elliptic problems on . Commun. Partial Differ. Equ.. 20, 1725–1741 (1995). Publisher Full Text
14. Rabinowitz, PH: On a class of nonlinear Schrödinger equations. Z. Angew. Math. Phys.. 43, 270–291 (1992). Publisher Full Text
15. Katriel, G: Mountain pass theorems and global homeomorphism theorems. Ann. Inst. Henri Poincaré, Anal. Non Linéaire. 11, 189–209 (1994)
16. Rabinowitz, PH: Minimax Methods in Critical Point Theory with Applications to Differential Equations, Am. Math. Soc., Providence (1986)
17. Ladyzenskaya, OA, Uralceva, NN: Equations aux dérivées partielles de type elliptiques, Dunod, Paris (1968)
18. Brezis, H, Lieb, E: A relation between pointwise convergence of functions and convergence of functionals. Proc. Am. Math. Soc.. 88, 486–490 (1983)
19. Boccardo, L, Murat, F: Almost everywhere convergence of the gradients of solutions to elliptic and parabolic equations. Nonlinear Anal.. 19, 581–597 (1992). Publisher Full Text
20. Boccardo, L, Murat, F, Puel, JP: Existence de solutions non bornées pour certaines équations quasi-linéaires. Port. Math.. 41, 507–534 (1982)
21. Brezis, H, Browder, FE: Sur une propriété des espaces de Sobolev. C. R. Math. Acad. Sci. Paris. 287, 113–115 (1978)
https://tex.stackexchange.com/questions/127466/how-can-i-effect-line-breaks-within-the-title-without-hardcoding-s-in-title | # How can I effect line breaks within the title without hardcoding \\'s in \title?
Although I do need line breaks within my title for my document's title page (I'm using the book class), I'd like to avoid hardcoding line breaks in the argument passed to the \title command, for three reasons:
1. I always endeavour to separate form and content, for maintainability reasons.
2. The title may need to be inserted elsewhere in the document, in places where line breaks within it are not warranted.
3. I pass my title to \hypersetup{pdftitle=...}; hyperref returns a warning,
Token not allowed in a PDF string (PDFDocEncoding): (hyperref) removing `\\'
if the title contains some \\, and I want to limit the number of compilation warnings to the bare minimum.
How can I effect line breaks at specific places within the title, upon inserting the latter somewhere in my document, without hardcoding \\'s in the \title{...} command? What's the best practice?
\documentclass{book}
\author{Me, me, me}
%\title{Me, myself, and I} % compiles without mishap; strictly content
\title{Me,\\ myself, \\ and I} % compilation warnings occur; form and content are mixed
\usepackage{kantlipsum}
\usepackage{hyperref}
\makeatletter
\hypersetup{
pdftitle = \@title,
}
\makeatother
\begin{document}
\maketitle
\kant
\end{document}
• @AndrewSwann I didn't think the question warranted an MWE, but sure; it's on the way. – jub0bs Aug 9 '13 at 12:19
• @Jubobs, I think Andrew Swann may have had the same thought as me, I'd like to figure out why you need a forced line break in the title, rather than a subtitle – Chris H Aug 9 '13 at 12:21
• Well for one thing, the title macro depends on the documentclass, and may or may not take an optional argument that could be used. Then there is \textorpdfstring for hyperref... – Andrew Swann Aug 9 '13 at 12:23
• @ChrisH I can sympathise with the general thought. Title layout on the page and its line breaks are quite critical with regard to emphasising/deemphasising certain words etc. – Andrew Swann Aug 9 '13 at 12:25
• @AndrewSwann You're right: using \texorpdfstring would obviate the warning. However, what about my 2nd point? What if I need to format my title in a different way, somewhere else than on the title page? – jub0bs Aug 9 '13 at 12:30
if you always want certain phrases "tied together", use the ~ (tilde = tie) instead of a space between them. sometimes this works to encourage a line break, but sometimes you end up with an overfull box and still have to break the line explicitly.
(this is the same approach recommended in the texbook for keeping things like "Dr.~No" and "Figure~1" from breaking apart at the end of a line.)
the actual result depends on how the \title argument is processed; the stretch has to be defined with sufficient flexibility, which depends on the document class.
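for example (an illustration only): \title{Me, myself, and~I} keeps "and I" together on one line, without hard-coding any \\ and without upsetting hyperref, which supports ~ in PDF strings.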
• This tilde trick let me line-break a section title properly, in a place where \\ would accidentally introduce a page break at the line break! – Camille Goudeseune Oct 21 '15 at 21:46
The easiest way is to use the option pdfusetitle, without spaces around \\ in \title:
\documentclass{book}
\usepackage{kantlipsum}
\usepackage[pdfusetitle]{hyperref}
\author{Me, me, me}
\title{Me,\\myself,\\and I}
\begin{document}
\maketitle
\kant
\end{document}
Result of pdfinfo:
Title: Me, myself, and I
Author: Me, me, me
Otherwise, there is \texorpdfstring:
\title{My,\texorpdfstring{\\}{ }myself,\texorpdfstring{\\}{ }and I}
Or \pdfstringdefDisableCommands can be used, e.g.:
\pdfstringdefDisableCommands{\def\\#1{ #1}}
That is the definition that is used for the title with option pdfusetitle, it supports the following forms:
\title{Me,\\myself,\\and I}
\title{Me,\\ myself,\\ and I}
If you prefer more spaces:
\pdfstringdefDisableCommands{\let\\\@firstofone}
\title{Me, \\ myself, \\ and I}
The trick with the argument is that TeX does not accept a space as an undelimited argument; thus the space after \\ is gobbled.
The spaces are one of the reasons that \\ is not provided by hyperref and a warning is given. The usual tricks (\unskip, \ignorespaces, ...) do not work inside the generation of PDF strings.
• Such a great answer! Thanks. I guess I can also use \texorpdfstring to include nonbreaking spaces in the title as recommended by Barbara. – jub0bs Aug 9 '13 at 13:33
• @Jubobs: ~ is supported by hyperref, no need for \texorpdfstring. – Heiko Oberdiek Aug 9 '13 at 13:34
• @HeikoOberdiek As a substitute for \ignorespaces inside the generation of PDF strings, you could use \romannumeral-^^ \q. Of course, that leaves a preceding space: this could be a problem. Alternatively, one could implement \unskip by adding a step after the full expansion and after stripping braces which would strip all occurrences of <space>\unskip. – Bruno Le Floch Aug 9 '13 at 18:27
https://www.physicsforums.com/threads/determining-which-coffee-has-a-greater-rate-of-cooling.526063/ | # Determining which coffee has a greater rate of cooling
1. Aug 31, 2011
Hey,
I have this math investigation task which asks us to model the cooling of coffee in three different cups over a period of time. Anyway, I got the data, did a scatter plot and found the exponential model, but I don't know how I could determine which one has the greatest rate of cooling.
I know that I need to look at the slope, but they don't all start from the same point on the graph, so I can't determine which one has the fastest rate of decay. If they all started from the same point I would be able to see the different functions branching off from that point, and if one had a steeper slope than the other I would know, but this wasn't the case. Can someone please help! Thanks.
2. Aug 31, 2011
### micromass
Staff Emeritus
Calculate the slope at a common point. For example, take the point 40° at every coffee and calculate the slope at that point.
3. Aug 31, 2011
Hey thanks man. Do you know if the relationship between the derivatives of the three graphs will change? So if I measure the derivative of each graph at some temperature (remember it's exponential) and find that one graph has a steeper slope than the other... could that change, so the other graph would now have the steeper slope? Because if that is true, then we can't use that to determine which graph has the fastest rate of cooling overall. Thanks.
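A note on that worry, assuming the usual Newton's-law model $T(t) = T_s + Ae^{-kt}$ with a shared ambient temperature $T_s$: differentiating gives $T'(t) = -k\,(T(t) - T_s)$, so at any common temperature $T^*$ the slopes compare exactly as the fitted $k$ values do, regardless of which $T^*$ you pick. Under that assumption the ordering cannot flip, and the cup with the largest $k$ has the fastest rate of cooling overall.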
http://www.federica.unina.it/smfn/high-energy-astrophysics/emission-processes-ii/ | # Maurizio Paolillo » 5.Emission processes – Part II
### Contents
• Radiative recombination of heavy ions.
• Coronal approximation.
• Cooling function.
### Thermal plasma emission
X-ray emission in astrophysical plasmas at temperatures T ~ 10^6–10^7 K (kT ~ 0.1–1 keV) is mainly due to radiative recombination and line emission processes.
In fact these temperatures are too low for Bremsstrahlung to be dominant.
These conditions characterize very different astrophysical sources: from stellar coronae to the diffuse gaseous halos of massive galaxies.
The image on the left shows an X-ray image of a massive elliptical galaxy whose emission is almost entirely due to the radiative recombination and line emission processes described below.
In the lower left corner the smaller companion NGC1404 has a much smaller and compact halo. Several compact sources, mostly background AGNs and accreting bynary stars (see later lectures), are also visible.
X-ray image of the massive elliptical galaxy NGC 1399 in the Fornax cluster. The emission is represented on a logarithmic color scale which traces the surface brightness. Courtesy of M.Paolillo.
### Stellar coronal emission
This emission process was first observed in the corona of the Sun, and thus it is called “coronal emission”.
The turbulence in the upper layer of a star creates magnetic loops trapped inside the sun’s plasma, whose energy can heat the corona to temperatures that allow it to emit X-rays through radiative recombination and line emission.
This is a solar coronal loop imaged by the TRACE satellite. Courtesy of NASA.
### Coronal approximation
To describe the ionization state and X-ray line and continuum emission from a low density (n = 10^-1–10^-3 cm^-3), hot (T = 10^6–10^8 K) plasma, several simple assumptions can be made:
1. the time scale for elastic Coulomb collisions between particles in the plasma is much shorter than the age or cooling time of the plasma, and thus the free particles can be assumed to have a Maxwell-Boltzmann distribution at the temperature T_g. This is the kinetic temperature of the electrons, and therefore determines the rates of all excitation and ionization processes.
2. at these low densities collisional excitation and de-excitation processes are much slower than radiative decays, and thus any ionization or excitation process will be assumed to be initiated from the ground state of an ion.
3. Three-body (or more) collisional processes will be ignored because of the low density.
### Coronal approximation (cont’ed)
4. the radiation field in a stellar corona, a galaxy or a cluster is sufficiently diluted that stimulated radiative transitions are not important, and the effect of the radiation field on the gas is insignificant.
5. at these low densities, the gas is optically thin, and the transport of the radiation field can therefore be ignored.
Under these conditions, ionization and emission result primarily from collisions of ions with electrons, and collisions with ions can be ignored.
Finally, in galaxies and clusters (but not in stellar coronae) the time scales for ionization and recombination are generally considerably less than the age of the system or any relevant hydrodynamic time scale, and the plasma can therefore be assumed to be in ionization equilibrium.
### Coronal X-ray line emission
The helium and hydrogen in stellar, galactic and cluster coronae are all ionized, so X-ray line emission comes from transitions in trace element atoms.
### X-ray continuum emissivity in coronal plasmas
The X-ray continuum emission from a hot diffuse plasma is due primarily to three processes:
1. thermal bremsstrahlung (free-free emission),
2. recombination (free-bound) emission, and
3. two-photon decay of metastable levels.
The emissivity (emission per unit volume) for thermal Bremsstrahlung, as discussed in a previous lecture, is given by:
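In its standard cgs form (e.g., Rybicki & Lightman; quoted here as the textbook expression):
$$\epsilon_\nu^{ff} = \frac{2^5 \pi e^6}{3 m_e c^3}\left(\frac{2\pi}{3 k m_e}\right)^{1/2} T_g^{-1/2}\, e^{-h\nu/k T_g}\; n_e \sum_i Z_i^2\, n_i\, \bar{g}_{ff}(Z_i, T_g, \nu)$$
where $\bar{g}_{ff}$ is the velocity-averaged Gaunt factor and the sum runs over the ion species.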
### X-ray continuum emissivity in coronal plasmas
The radiative recombination (bound-free) continuum emissivity is usually calculated by applying the Milne relation for detailed balance to the photoionization cross sections, which yields the following emissivity expression:
### X-ray line emission in coronal plasmas
Processes that contribute to the X-ray line emission from a diffuse plasma include collisional excitation of valence or inner shell electrons, radiative and dielectronic recombination, inner shell collisional ionization, and radiative cascades following any of these processes.
The emissivity due to a collisionally excited line is usually written (Osterbrock, 1974):
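In the low-density limit, where each collisional excitation is followed by a radiative decay, this takes roughly the form
$$\epsilon_{\mathrm{line}} = n_e\, n(X^i)\, \frac{8.63\times10^{-6}}{T_g^{1/2}}\, \frac{\Omega(1,2)}{\omega_1}\, e^{-\Delta E/k T_g}\; h\nu_{21}$$
with $\Omega(1,2)$ the Maxwellian-averaged collision strength, $\omega_1$ the statistical weight of the lower level, and $\Delta E = h\nu_{21}$ the transition energy (the numerical constant assumes cgs units with $T_g$ in K).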
### Cooling function
All of the emission processes described above give emissivities that increase proportionally to the ion and electron densities, and otherwise depend only on the temperature.
Thus a general expression for the emissivity in coronal plasmas is given by the formula:
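Written out, with $\Lambda_\nu$ the per-ion emission defined just below:
$$\epsilon_\nu = \sum_X \sum_i \Lambda_\nu(X^i, T_g)\; n(X^i)\, n_e$$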
where $\Lambda_\nu(X^i, T_g)$ is the emission per ion at unit electron density and is usually called the "cooling function".
The cooling function incorporates all physical processes described in these lectures, and is usually calculated using numerical codes that compute the atomic transition rates for each element and ionization state.
### Cooling function (cont’ed)
If n(X) is the total density of the element X, then in equilibrium the ionization fractions $f(X^i) \equiv n(X^i)/n(X)$ depend only on the temperature, and the previous equation can be written as:
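Substituting $n(X^i) = f(X^i)\, n(X)$ into the expression above gives
$$\epsilon_\nu = \sum_X \frac{n(X)}{n(\mathrm{H})} \sum_i \Lambda_\nu(X^i, T_g)\, f(X^i)\; n(\mathrm{H})\, n_e.$$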
This allows us to separate the dependence on temperature from the dependence on density, if we assume the ionization fractions and the metallicity of the plasma are known.
### Emission Integral
Since most astrophysical observable quantities are integrated over large volumes or at the least over the line of sight, it is useful to define the emission integral EI as:
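Following Sarazin (cited below), the usual definition is
$$EI \equiv \int n_e\, n_p\; dV,$$
the integral of the product of the electron and proton densities over the emitting volume.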
Then the shape of the spectrum depends only on the abundances of elements n(X)/n(H) and the distribution of temperatures d(EI)/dT_g.
The normalization of the spectrum (the overall level or luminosity) is thus set by the EI.
### The Cosmic Cooling curve
The Cooling Function at low and intermediate X-ray emitting regimes, and its dependence on the metallicity.
### The Cosmic Cooling curve
Dependence of the Cooling function on temperature and metallicity of the plasma.
### Spectrum of a thermal plasma
Example of the spectrum produced by the thermal plasma of a galaxy cluster at different temperatures: as the temperature drops, the emission lines due to radiative recombination become apparent.
### Lecture support materials
Craig L. Sarazin, X-ray Emission from Clusters of Galaxies.
https://infoscience.epfl.ch/record/152714 | Infoscience
Journal article
# Influence of pressure on the optical properties of InxGa1-xN epilayers and quantum structures
The influence of hydrostatic pressure on the emission and absorption spectra measured for various types of InGaN structures (epilayers, quantum wells, and quantum dots) is studied. While the known pressure coefficients of the GaN and InN band gaps are about 40 and 25 meV/GPa, respectively, the observed pressure-induced shifts in light emission energy in the InGaN alloys differ significantly from concentration-interpolated values. With increasing In concentration, and thus decreasing emission energy, the observed pressure coefficients become very small, reaching zero for emission energies of ~2 eV (roughly the value of the InN band gap). On the other hand, the pressure coefficients derived from absorption experiments exhibit a much smaller decrease with decreasing energy when referred to the same scale as the emission data. First-principles calculations of InGaN band structures and their modification with pressure are performed. The results are not able to explain the huge effect observed in the emission experiments, but they are in good agreement with the optical absorption data. Significant bowings of the band gap and its pressure coefficients are found, and they are especially large for small In concentrations. This behavior is related to the changes in the upper valence band states due to In alloying. Some possible mechanisms are discussed which might be expected to account for the low pressure coefficients of the light emission energy and the difference between the sensitivity of the emission and absorption to pressure.
https://www.physicsforums.com/threads/please-check-this-differentiation-result.164188/ | # Please check this differentiation result
1. Apr 5, 2007
### cabellos
I have to differentiate (e^-x) ((1-x)^1/2)
(-e^-x) ((1-x)^1/2) + (e^-x)/((-1/2 + 1/2x)^-1/2))
Is this correct?
Thank you!
2. Apr 5, 2007
### Vagrant
In the second part I believe the -1/2 term is outside the square root
3. Apr 5, 2007
### cristo
Staff Emeritus
You want to differentiate this: $e^{-x}(1-x)^{1/2}$? You have used the product rule correctly, but the second term is incorrect. The second term is $$e^{-x}\frac{d}{dx}(1-x)^{1/2}=e^{-x}(1-x)^{-1/2}\cdot(-\frac{1}{2})$$
Note that, when using the chain rule on the parentheses, whatever's inside the parentheses does not change.
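Not part of the original thread, but a quick SymPy check of cristo's result (assuming SymPy is available) confirms it:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-x) * sp.sqrt(1 - x)

# cristo's answer: -e^(-x)(1-x)^(1/2) - (1/2) e^(-x)(1-x)^(-1/2)
claimed = -sp.exp(-x) * sp.sqrt(1 - x) - sp.exp(-x) / (2 * sp.sqrt(1 - x))

print(sp.simplify(sp.diff(f, x) - claimed))  # prints 0, so they agree
```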
http://eprint.iacr.org/2013/657 | ## Cryptology ePrint Archive: Report 2013/657
New Trapdoor Projection Maps for Composite-Order Bilinear Groups
Sarah Meiklejohn and Hovav Shacham
Abstract: An asymmetric pairing over groups of composite order is a bilinear map $e: G_1 \times G_2 \to G_T$ for groups $G_1$ and $G_2$ of composite order $N=pq$. We observe that a recent construction of pairing-friendly elliptic curves in this setting by Boneh, Rubin, and Silverberg exhibits surprising and unprecedented structure: projecting an element of the order-$N^2$ group $G_1 \oplus G_2$ onto the bilinear groups $G_1$ and $G_2$ requires knowledge of a trapdoor. This trapdoor, the square root of a certain number modulo $N$, seems strictly weaker than the trapdoors previously used in composite-order bilinear cryptography.
In this paper, we describe, characterize, and exploit this surprising structure. It is our thesis that the additional structure available in these curves will give rise to novel cryptographic constructions, and we initiate the study of such constructions. Both the subgroup hiding and SXDH assumptions appear to hold in the new setting; in addition, we introduce custom-tailored assumptions designed to capture the trapdoor nature of the projection maps into $G_1$ and $G_2$. Using the old and new assumptions, we describe an extended variant of the Boneh-Goh-Nissim cryptosystem that allows a user, at the time of encryption, to restrict the homomorphic operations that may be performed. We also present a variant of the Groth-Ostrovsky-Sahai NIZK, and new anonymous IBE, signature, and encryption schemes.
Category / Keywords: foundations / bilinear groups
Contact author: smeiklej at cs ucsd edu
https://labs.tib.eu/arxiv/?author=Y.%20W.%20Zhang | • ### Extraction of the Neutron Electric Form Factor from Measurements of Inclusive Double Spin Asymmetries(1704.06253)
Nov. 28, 2017 nucl-ex
[Background] Measurements of the neutron charge form factor, $G^n_E$, are challenging due to the fact that the neutron has no net charge. In addition, measurements of the neutron form factors must use nuclear targets which require accurately accounting for nuclear effects. Extracting $G^n_E$ with different targets and techniques provides an important test of our handling of these effects. [Purpose] The goal of the measurement was to use an inclusive asymmetry measurement technique to extract the neutron charge form factor at a four-momentum transfer of $1~(\rm{GeV/c})^2$. This technique has very different systematic uncertainties than traditional exclusive measurements and thus serves as an independent check of whether nuclear effects have been taken into account correctly. [Method] The inclusive quasi-elastic reaction $^3\overrightarrow{\rm{He}}(\overrightarrow{e},e')$ was measured at Jefferson Lab. The neutron electric form factor, $G_E^n$, was extracted at $Q^2 = 0.98~(\rm{GeV/c})^2$ from ratios of electron-polarization asymmetries measured for two orthogonal target spin orientations. This $Q^2$ is high enough that the sensitivity to $G_E^n$ is not overwhelmed by the neutron magnetic contribution, and yet low enough that explicit neutron detection is not required to suppress pion production. [Results] The neutron electric form factor, $G_E^n$, was determined to be $0.0414\pm0.0077\;{(stat)}\pm0.0022\;{(syst)}$, providing the first high-precision inclusive extraction of the neutron's charge form factor. [Conclusions] The use of the inclusive quasi-elastic $^3\overrightarrow{\rm{He}}(\overrightarrow{e},e')$ with a four-momentum transfer near $1~(\rm{GeV/c})^2$ has been used to provide a unique measurement of $G^n_E$. This new result provides a systematically independent validation of the exclusive extraction technique results.
### From semiclassical transport to quantum Hall effect under low-field Landau quantization (arXiv:cond-mat/0608408)
Aug. 18, 2006 cond-mat.mes-hall
The crossover from the semiclassical transport to quantum Hall effect is studied by examining a two-dimensional electron system in an AlGaAs/GaAs heterostructure. By probing the magneto-oscillations, it is shown that the semiclassical Shubnikov-de Haas (SdH) formulation can be valid even when the minima of the longitudinal resistivity approach zero. The extension of the applicable range of the SdH theory could be due to the damping effects resulting from disorder and temperature. Moreover, we observed plateau-plateau transition like behavior with such an extension. From our study, it is important to include the positive magnetoresistance to refine the SdH theory.
### Effects of Zeeman spin splitting on the modular symmetry in the quantum Hall effect (arXiv:cond-mat/0508577)
Aug. 24, 2005 cond-mat.mes-hall
Magnetic-field-induced phase transitions in the integer quantum Hall effect are studied under the formation of paired Landau bands arising from Zeeman spin splitting. By investigating features of modular symmetry, we showed that modifications to the particle-hole transformation should be considered under the coupling between the paired Landau bands. Our study indicates that such a transformation should be modified either when the Zeeman gap is much smaller than the cyclotron gap, or when these two gaps are comparable. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9553694725036621, "perplexity": 985.2596695262453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400232211.54/warc/CC-MAIN-20200926004805-20200926034805-00250.warc.gz"} |
https://www.mathflix.org/category/jeemain/page/6/ | # JEE Main
This category collects the articles that are useful for JEE Main. It may include theory, solved examples, unsolved examples, test papers, and previous years' papers.
## nth roots of unity – Concept, problems and solutions
nth roots of unity: Do you wonder what the nth roots of unity are, and how they are useful in solving and understanding problems? Roots of the Equation …
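As a quick illustration (my addition, not part of the original article), the nth roots of unity are e^(2πik/n) for k = 0, ..., n-1 and can be listed numerically:

```python
import cmath

# The n-th roots of unity: exp(2*pi*i*k/n) for k = 0, ..., n-1.
n = 6
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
for z in roots:
    print(f"{z:.3f} -> z**{n} = {z**n:.3f}")  # every power is 1 up to rounding
```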
## Fundamental Counting Principle – Begin to Count
Do you know how powerful the fundamental counting principle is? Let me ask you a question: do you know how many squares there are on a chessboard? If your answer is 64, I would say you need to look at the chessboard again. There are many more than 64 squares; in fact, there are 204 squares on a chessboard. …
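The count of 204 comes from adding up the k-by-k squares for every size k from 1 to 8; here is a one-line check (added for illustration, not part of the original post):

```python
# An n x n board has (n - k + 1)^2 squares of size k, so the total is a sum of squares.
n = 8
print(sum((n - k + 1) ** 2 for k in range(1, n + 1)))  # 204
```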
https://www.physicsforums.com/threads/influence-lines-concentrated-load-system-with-the-problem-this-time.705794/ | # Homework Help: Influence lines (concentrated load system)(with the problem this time)
1. Aug 15, 2013
### mohamadh95
I started by getting the equation for the reaction (which is really simple). I'm a bit confused here: they're asking for the absolute maximum moment, but they don't specify where the maximum moment will occur. Should I assume it happens at the middle of the beam? And another question: how can I find the value of this maximum moment? I'm normally accustomed to a single concentrated load, or a uniformly distributed load, but not a system of concentrated forces. Thank you.
https://www.dropbox.com/s/5rngrqme4fv929y/2013-08-15 17.52.30.jpg
2. Aug 15, 2013
### SteamKing
Staff Emeritus
No, the location of the maximum moment probably will not be in the center of the beam. The beam is not loaded symmetrically.
You are forgetting one critical aspect of analyzing beams. Multiple loads can be treated using the principle of superposition, i.e., find the effect of one load at a time on the beam, and then add up the effects of a series of single loads to find the total effect of ALL the loads on the beam.
The superposition principle also works with influence lines.
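To illustrate the idea (the numbers below are made up, not taken from the attached problem): once you have the influence-line ordinates under each load position, the total response is just a weighted sum.

```python
# Superposition with an influence line: response = sum of (load * ordinate).
loads = [20.0, 35.0, 35.0]       # hypothetical concentrated loads, kN
ordinates = [1.8, 2.4, 1.6]      # hypothetical influence-line ordinates for moment, m

moment = sum(P * y for P, y in zip(loads, ordinates))
print(f"Moment at the section = {moment:.1f} kN*m")
```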
3. Aug 15, 2013
### mohamadh95
Could you provide some more information? Because I don't know where I should take the cut on the beam.
Oh, and something else: should I find the center of gravity of the load system (Gx)? Is the location where the maximum moment occurs equal to Gx?
Last edited: Aug 15, 2013
4. Aug 15, 2013
### SteamKing
Staff Emeritus
The following should help you with your influence line calculation:
http://www.mm.anadolu.edu.tr/insaat/icerik/dersnotlari/ins313/influencelines.pdf [Broken]
Last edited by a moderator: May 6, 2017
5. Aug 16, 2013
### mohamadh95
Thank you, this was helpful.
Last edited by a moderator: May 6, 2017
http://math.stackexchange.com/questions/202449/median-of-the-f-distribution | # Median of the F-distribution
Is the median of the F-distribution with m and n degrees of freedom decreasing in n, for any m?
From experiments it looks like it might be, but I have been unable to prove it.
Is n the denominator degrees of freedom? – Michael Chernick Sep 25 '12 at 22:19
@Michael yes it is – fra Sep 25 '12 at 22:48
Have you looked at tables of the F distribution. I don't think the tables give percentiles close to 50 but maybe looking at the 25th and the 75th with a gross interpolation would give you an idea whether or not it is true. I am sure that there are software packages that will give the cumulative F or its inverse and you can see the result from that. – Michael Chernick Sep 26 '12 at 0:10
@Michael yes from software calculation it seems like the result is true. But I am looking for a proof! – fra Sep 27 '12 at 21:09
If it is not true the tables could tell you by giving you just one counterexample. If you need precision you can do numerical integration to a desired level of accuracy. – Michael Chernick Sep 27 '12 at 21:13
This officially free to download "Handbook of Statistical Distributions" shows that (using their notation) the cumulative distribution (cdf) of the F-distribution can be written as the cdf of a transformed Beta variable: $$\int_{0}^{F_a} f\left(F;m,n\right)dF=1-a=\frac{B_x(\frac m2,\frac n2)}{B(\frac m2,\frac n2)}\equiv G_B(x;\frac m2,\frac n2),\;x=\frac{mF_a}{n+mF_a}$$ Moreover, based on this source the median of a $B(\alpha,\beta)$ distribution, for $\alpha>1, \beta>1$ is approx. equal to $\frac {\alpha -\frac 13}{\alpha +\beta -\frac 23}$. In our case this approximate formula holds for $m\gt 2, n \gt2$. For this case then we have (setting $a=\frac 12$)
$$\frac{mF_\frac 12}{n+mF_\frac 12} \approx\frac {\frac m2 -\frac 13}{\frac m2 +\frac n2 -\frac 23}$$ Doing the algebra we arrive at
$$F_\frac 12\approx \frac{n}{3n-2}\frac{3m-2}{m}$$ and so $$\text{sign} \left(\frac {\partial}{\partial n} F_\frac 12\right)=\text{sign} \left(\frac {\partial}{\partial n} \frac{n}{3n-2}\right)= \text{sign} \left(3n-2 - 3n\right) \lt 0$$
For the case $\alpha =1 \Rightarrow m=2$ we have (source again) $$\frac {mF_\frac 12}{n+mF_\frac 12} = 1-\frac {1}{2^{(2/n)}}$$
which leads again to a negative relationship between the median of the F-distribution and the denominator degrees of freedom.
I have no proof for the remaining degrees of freedom. The $F(1,n)$ distribution is the distribution of a squared Student's-t random variable, if that helps.
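Not a proof, but a quick numerical spot-check with SciPy (my addition) is consistent with the conjecture for all the degrees of freedom tried, including the remaining cases:

```python
from scipy.stats import f

# Check that the median of F(m, n) is decreasing in n for a few values of m.
for m in (1, 2, 5, 10):
    medians = [f.ppf(0.5, m, n) for n in range(1, 31)]
    print(m, all(a > b for a, b in zip(medians, medians[1:])))  # True for each m
```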
http://www.lofoya.com/Solved/1110/rs-432-is-divided-amongst-three-workers-a-b-and-c-such-that-8-times | # Moderate Ratios & Proportion Solved QuestionAptitude Discussion
Q. Rs.432 is divided amongst three workers $A$, $B$ and $C$ such that 8 times $A$’s share is equal to 12 times $B$’s share which is equal to 6 times $C$’s share. How much did $A$ get?
✖ A. Rs.192
✖ B. Rs.133
✔ C. Rs.144
✖ D. Rs.128
Solution:
Option(C) is correct
8 times $A$’s share = 12 times $B$’s share = 6 times $C$’s share
Note that this is not the same as the ratio of their wages being $8:12:6$
In this case, find out the L.C.M of 8, 12 and 6 and divide the L.C.M by each of the above numbers to get the ratio of their respective shares.
The L.C.M of 8, 12 and 6 is 24.
Therefore, the ratio $A:B:C$ is
$\dfrac{24}{8}:\dfrac{24}{12}:\dfrac{24}{6}\Rightarrow A:B:C::3:2:4$
The sum of the wages is
$3x+2x+4x=432$
$\implies 9x=432$ or $x=48$.
Hence $A$ gets $3×48=$ Rs. $144$.
Edit: For an alternative solution, check comment by Sravan Reddy.
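For illustration (my addition, mirroring the LCM argument above):

```python
from math import lcm  # Python 3.9+

multipliers = {'A': 8, 'B': 12, 'C': 6}               # 8A = 12B = 6C
L = lcm(*multipliers.values())                        # 24
ratio = {k: L // v for k, v in multipliers.items()}   # A:B:C = 3:2:4
unit = 432 // sum(ratio.values())                     # 432 / 9 = 48
print({k: r * unit for k, r in ratio.items()})        # {'A': 144, 'B': 96, 'C': 192}
```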
## (6) Comment(s)
Anant
8A=12B => A:B=3:2
12B=6C=> B:C=1:2
so A:B:C=3:2:4
From here you can find the rest of the solution
Disha
x + y + z = 432 ------ (1)
8x = 12y >> so x = 3y/2
y = 2x/3
8x=6z >>> so x = 3z/4
so, z = 4x/3
Put all values into eq. (1):
x + 2x/3 + 4x/3 = 432
Multiply through by 3: 9x = 432*3
x = 144
Vijay
Choice C:
8a = 12b = 6c
and a + b + c = 432.
Solving: a = 144, b = 96, c = 192.
Priyanka
Why should we use the LCM? Please explain.
Sravan Reddy
8 times A = 12 times B
Suppose A:B is $12:8$; then $8*12x = 12*8x$, which is TRUE.
Suppose A:B is $8:12$; then $8*8x = 12*12x$, which is clearly FALSE.
Also, 8 times A = 12 times B means A is clearly bigger, so $8:12$ can't be the case.
Now, whenever the problem is like this, the shares should be in inverse ratio.
So, Instead of $8:12:6$, it will be $\dfrac{1}{8}:\dfrac{1}{12}:\dfrac{1}{6}$
Bhaumik
Why is it not $8:12:6$?
https://ec.gateoverflow.in/538/gate-ece-2014-set-3-question-1 | 47 views
The maximum value of the function $f(x) = \text{ln } (1+x) – x$ (where $x >-1$) occurs at $x=$_______. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9632061123847961, "perplexity": 1333.642173748516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00113.warc.gz"} |
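A quick symbolic check (my addition, not part of the original page): $f'(x) = \frac{1}{1+x} - 1$ vanishes at $x = 0$, and $f''(0) = -1 < 0$, so the maximum occurs at $x = 0$.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.log(1 + x) - x                  # defined for x > -1

crit = sp.solve(sp.diff(f, x), x)      # solve f'(x) = 0
print(crit, sp.diff(f, x, 2).subs(x, crit[0]))  # [0] -1  => maximum at x = 0
```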
https://randommathgenerator.com/2016/10/01/projective-morphisms/ | ### Projective morphisms
This post is about the morphisms between projective varieties. There are some aspects of such morphisms that trouble me. The development will closely follow that in Karen Smith's "Invitation to Algebraic Geometry".
First, say we have a morphism $\phi:\Bbb{P}^1\to\Bbb{P}^2$ such that $[s,t]\to[s^2,st,t^2]$. We will try to analyze this map.
This map has to be homogeneous: each coordinate has to be a homogeneous polynomial, and all of the same degree. This is the only way that such a map between projective varieties can be well-defined.
Now let us talk about the mappings from affine charts. Essentially, the affine charts cover the projective space, and hence every projective variety that lives in that space. When we restrict to a particular affine chart, we can reduce the number of variables by $1$, because the value of one variable is always $1$ and hence can be neglected. However, is the image also contained in an affine chart? That depends. In the case of $[s,t]\to [s^2,st,t^2]$, the image of an affine chart lands in an affine chart: this is because $s=1\implies s^2=1$, and similarly $t=1\implies t^2=1$.
We’ve covered all the possible points in the domain by picking out the affine charts. Hence, we have fully described the map.
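A tiny symbolic check (my addition, not from the original post) that the image of this map lies on the conic $V(xz-y^2)$, which is exactly the curve appearing in the next example:

```python
import sympy as sp

s, t = sp.symbols('s t')
x, y, z = s**2, s*t, t**2          # the map [s, t] -> [s^2, st, t^2]
print(sp.simplify(x*z - y**2))     # 0: the image satisfies xz - y^2 = 0
```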
A map $f$ between projective varieties is a projective morphism if for each $p\in V$, where $V$ is the domain, there exists a neighbourhood $N(p)$ such that $f|_{N(p)}$ is a homogeneous polynomial map. Is an affine chart an open set? Yes: if it is the $z$th affine chart, it is the complement of the algebraic set $z=0$ in $\Bbb{P}^n$.
Let us now consider a different map: consider $V(xz-y^2)\subset\Bbb{P}^2$. Let us call this curve $C$. Now consider the map $C\to \Bbb{P}^1$, defined as $[x,y,z]\to [x,y]$ if $x\neq 0$ and $[x,y,z]\to [y,z]$ if $z\neq 0$. What does this mean?
First of all, why is the option $y\neq 0$ not included? If $y\neq 0$, then on the curve $xz=y^2\neq 0$, so both $x\neq 0$ and $z\neq 0$. Hence, this case is a subcase of the two cases considered earlier. Secondly, what does it mean to map to a projective space of a lower dimension? The curve is one-dimensional. Is that the reason why we can embed it in $\Bbb{P}^1$? Probably. Note that we haven't yet proven that this mapping is an embedding. However, this will indeed turn out to be the case.
Is this map consistent? In other words, are the two maps the same on the intersection of the open sets $x\neq 0$ and $z\neq 0$? Let us see: $[x,y] = [xz,yz] = [y^2,yz] = [y,z]$, scaling by $z$ and then using $xz=y^2$ on the curve. Hence, when $x,z\neq 0$, the two definitions agree.
Why do we have to have such a broken-up map? Why not one consistent map? First of all, mapping from affine charts seems like a systematic way to map. You can always ensure that at least one coordinate is non-zero, both in the domain and range. That is really all there is to it. Sometimes on restricting to affine charts, you write affine maps, like in the previous example. In other cases, including this one, you write a projective map. Defining the various projective maps, whether they change with affine charts or not, is of paramount importance. The affine map part is just an observation which may or may not be made.
How do we map a curve $V(f(x_0,x_1,x_2,\dots,x_n))$ to $\Bbb{P}^1$ in general? This seems to be a very difficult question. [This](http://web.stanford.edu/~tonyfeng/Brill_Noether1.pdf) suggests that every smooth projective curve can be embedded in $\Bbb{P}^3$; that seems to be the best we can do.
For completeness, I would like to mention that the two maps given above are inverse to each other, although this is unrelated to the motivation for this article. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 33, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9727377891540527, "perplexity": 142.8062473428007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210559.6/warc/CC-MAIN-20180816074040-20180816094040-00479.warc.gz"} |
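In coordinates the inverse claim can also be checked directly (again my addition): composing $[s,t]\to[s^2,st,t^2]$ with $[x,y,z]\to[x,y]$ gives $[s^2,st]$, which equals $[s,t]$ as a projective point since the ratio of coordinates is the same.

```python
import sympy as sp

s, t = sp.symbols('s t', nonzero=True)
x, y, z = s**2, s*t, t**2     # phi: [s, t] -> [s^2, st, t^2]

# On the chart x != 0, psi sends [x, y, z] to [x, y]; two projective points
# agree exactly when their coordinate ratios agree.
print(sp.simplify(x/y - s/t))  # 0, so [x, y] = [s, t] in P^1
```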
https://www.usgs.gov/media/images/fig-2-nonlinear-regression-prediction-plot-use-50-pooled-l | # Fig. 2. Nonlinear regression prediction plot for use with 50 pooled l
## Detailed Description
Fig. 2. Nonlinear regression prediction plot for use with 50 pooled larvae that accurately estimates the percentage of triploids in a mixed population of triploids and diploids of that spawn. The plot was generated from 10 replicate experiments, each using a FCM protocol for cell processing and analysis at 15 known levels of ploidy. The 95% confidence limits (black dashed lines) and mean values (dots), along with the 95% prediction curves (red dashed lines) are shown. The corresponding quadratic equation is above. Thus, a flow cytometric-derived observed value (y–axis) (within the range) will correspond to a predicted triploidy value (x-axis) for a spawn.
## Details
Image Dimensions: 495 x 609 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8197974562644958, "perplexity": 3911.489683167331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527474.85/warc/CC-MAIN-20190722030952-20190722052952-00249.warc.gz"} |
https://quizmastershop.com/blogs/blog/ships-ladder-quiz-puzzle-answer | On a recent holiday to the seaside the Quiz Master Shop team was enjoying a restorative pint overlooking the harbour. We noticed that the biggest boat had a ladder that was hanging over the side and into the water, with four rungs submerged. In fact the top of the fourth rung was exactly on the surface of the water.
Each of the rungs was a cylinder with a diameter of two inches, and the gap between the rungs was eleven inches. The tide was rising at a rate of 18 inches per hour.
So how many rungs were submerged two hours later when we left the harbourside pub?
There is no maths required for this. The boat (and the ladder) will rise with the tide, leaving the same number of rungs submerged. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8732780814170837, "perplexity": 1579.2937172283614}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00154.warc.gz"} |
https://www.physicsforums.com/threads/2d-motion-help.571209/ | # Homework Help: 2D motion help
1. Jan 26, 2012
### monac
A spring cannon is located at the edge of a table that is 1.2 m above the floor. A steel ball is launched from the cannon with speed Vo at 35.0 degrees above the horizontal.
(a) Find the horizontal displacement component of the ball to the point where it lands on the floor as a function of Vo. We write this function as x(Vo).
Evaluate x for (b) Vo = 0.100 m/s and for
(c) Vo = m/s.
(d) Assume Vo is close to zero but not equal to zero. Show that one term in the answer to part (a) dominates so that the function x(Vo) reduces to a simpler form.
(e) If Vo is very large, what is the approximate form of x(Vo)?
I found the answers to a, b, and c. I am thinking of taking a limit on d but the problem is really complicated... It's a quadratic, and with limit laws I would set it equal to 0, but it says it can't be 0. So I am really confused on how to do d and e.
2. Jan 26, 2012
### Curious3141
Why don't you show what you've done so far (to all parts)?
Just a hint, but it'll likely be helpful. Remember the Taylor series (or Binomial theorem) for things like (1+x)^(-1).
3. Jan 26, 2012
### monac
it's a long process. I ll just type the main parts of the answers.
(a) x = vi cos35 t
t = (vi sin35 + √vi^2 sin^2 35 + 23.544) / 9.81
so x = vi cos35 ((vi sin35 + √vi^2 sin^2 35 + 23.544) / 9.81)
For b and c I just had to plug in values for vi and get the x, so that was easy.
I have not learned Taylor's series yet in my Calculus II class. :(
4. Jan 26, 2012
### Curious3141
Sorry, your expression is ambiguous. √vi^2 = vi, isn't it?
5. Jan 26, 2012
### monac
Oh the whole thing is under the square root.
t = (vi sin35 + √(vi^2 sin^2 35 + 23.544)) / 9.81
6. Jan 26, 2012
### Staff: Mentor
I don't think you need any more maths for (d). Just inspect the expression you earlier derived for time but now saying that anything involving Vo is insignificant compared with associated other terms.
7. Jan 26, 2012
### monac
So imagine that vi = 0?
8. Jan 26, 2012
### Staff: Mentor
For (d), close to 0. (Careful with the notation. The question uses Vo and you seem to have changed it to vi.)
9. Jan 26, 2012
### monac
In the back of the book, the answer for d was x ~ 0.405 m
10. Jan 26, 2012
### Staff: Mentor
Regardless of the speed it's launched at, provided it's slow, it still travels 0.4 m horizontally? That's interesting.
My answer for (d) is XH = þVo
where þ is a fraction that I'll leave you to work out for yourself, but it's between 0.1 and 0.9
Last edited: Jan 26, 2012 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8990463614463806, "perplexity": 2223.9204346557503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867277.64/warc/CC-MAIN-20180526014543-20180526034531-00034.warc.gz"} |
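For readers following along, here is a sketch of part (d) (my addition, not from the thread). For Vo near zero, the flight time is dominated by the free-fall term sqrt(2h/g), so x(Vo) ≈ cos(35°)·sqrt(2h/g)·Vo; with h = 1.2 m the coefficient comes out to about 0.405, which suggests the book's "0.405" should be read as x ≈ 0.405·Vo rather than 0.405 m.

```python
import sympy as sp

v, g, h = sp.symbols('v g h', positive=True)
th = sp.rad(35)  # 35 degrees in radians

t_land = (v*sp.sin(th) + sp.sqrt(v**2*sp.sin(th)**2 + 2*g*h)) / g
x = v * sp.cos(th) * t_land

# Leading behaviour as v -> 0+: x/v -> cos(35) * sqrt(2h/g)
coeff = sp.limit(x / v, v, 0, '+')
print(coeff.subs({g: 9.81, h: 1.2}).evalf())  # about 0.405
```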
http://mathhelpforum.com/calculus/159744-i-do-not-understand-get-these-problems.html | # Math Help - I do not understand or get these problems?
1. ## I do not understand or get these problems?
Yeah.. Homework help. I seriously do not understand it even though I am following exact procedures and made sure I wasn't wrong, but I'm still wrong? I'm on my last submission for my online homework in order to get points and I would like to get it right.. But I don't understand it?
2. Originally Posted by elpermic
Yeah.. Homework help. I seriously do not understand it even though I am following exact procedures and made sure I wasn't wrong, but I'm still wrong? I'm on my last submission for my online homework in order to get points and I would like to get it right.. But I don't understand it?
Please show all your working so that you can be pointed in the right direction.
3. Well, for the 1st one...
I take the integral of π r^2 from [0,2] and I get 8.378 as my answer (I'm sure this is right... but I'm wrong).
As for the 2nd one,
I take the integral of 4(3-3x^2) from [-1,1] and after calculation I get 24. Still wrong though.
4. Originally Posted by elpermic
Well, for the 1st one...
I take the integral of π r^2 from [0,2] and I get 8.378 as my answer (I'm sure this is right... but I'm wrong).
As for the 2nd one,
I take the integral of 4(3-3x^2) from [-1,1] and after calculation I get 24. Still wrong though.
I will point you in the right direction for #1:
$\displaystyle V = \pi \int_3^5 x^2 \, dy$. An exact answer is probably required.
5. Sorry. I forgot to say I tried that integral as well and I got 98π/3.. But my answer was still wrong.
6. Originally Posted by elpermic
Sorry. I forgot to say I tried that integral as well and I got 98π/3.. But my answer was still wrong.
In my first reply I bold faced the word "all" for a very good reason. Show all your work, every step.
7. Those were the only 2 integrals I tried. Both of them are wrong and I'm still confused because I'm sure I did everything right.
8. I finally got the answer to the first problem! But I still can't do the second one.
9. Originally Posted by elpermic
I finally got the answer to the first problem! But I still can't do the second one.
I will point you in the right direction: $\displaystyle V = \int_0^3 (2x)^2 \, dy$.
Your job is to understand where this has come from and what to do with it. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9349547028541565, "perplexity": 574.8218504094466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776438940.80/warc/CC-MAIN-20140707234038-00058-ip-10-180-212-248.ec2.internal.warc.gz"} |
https://brilliant.org/discussions/thread/daniel-lius-messageboard/ | # Daniel Liu's Messageboard
I've seen some other people do this, so why not, I'll jump on the bandwagon.
First, a few things: I am an 8th grader who is taking Geometry in class, but is working on advanced Pre-calculus concepts and a little Calculus for fun.
You can ask me on advice, whether it be doing math competitions or how to write good problems. Any question is encouraged, as long as I deem it fit for me to answer (no privacy-infringing questions please). Or you can just talk about whatever you want to talk about with me :3
Note by Daniel Liu
4 years, 7 months ago
How do I make good problems?
I have a feeling that my problems are too easy!
- 4 years, 7 months ago
Making a good problem all starts with an idea. Surf the internet for ideas to start from. Wikipedia is a good source, if you do not have other resources like the AoPS books and whatnot.
Once you have a basic idea, play around with it. Explore, and question what will happen if you do something. Once you happen upon a particularly good idea, you can expand upon that idea. Make it more complex; add "bells and whistles", if you will. You can do this by trying to find a relation to some other part of math: the problems that use ideas from multiple categories of math are often the best. If you are out of ideas on how to relate to other categories, just try to make it harder to solve. For example, replace $$10$$ with $$100$$. Layer the idea on top of itself multiple times. In the case of Algebra, randomly do algebraic maneuvers in hopes of making it more complicated. The only thing to avoid is making it a "relay" problem, where you take the answer of a solved problem and use it for the next part of the problem.
Once you make your problem more sophisticated (without making it a drag) you have to solve it yourself. Yep you have to actually solve the problem yourself first. Fortunately, you already have your idea in mind, so you already know how to proceed. Make sure not to mess up the math.
Once you have done that, you're done! You have successfully made a new problem.
I hope this helped a little. If you have any questions or problems, feel free to ask. If you are in the process of creating a problem and really don't have any idea how to make it more complicated than a simple idea, you can also ask me for help.
- 4 years, 7 months ago
I really don't get why you guys are arguing over such petty things. :?
- 4 years, 7 months ago
Creating problems is a new tool for me!
I have always solved problems, but never made my own!
I'll try to work harder!
Thank you, all the best!
- 4 years, 7 months ago
Hi Daniel, I like your problem very much! It is awesome! I wonder if you can make your own tag like #DanielLiu or something? It would be more convenient for me :D
- 4 years, 6 months ago
If you look in my profile, there's an unshared set called "Daniel's problems". All (or almost all) my problems are contained there back to a certain point.
- 4 years, 2 months ago
What sources do you refer to? I mean, I'm 17 and I just know the basics of calculus. I'm very interested in maths and would like to learn advanced stuff, but I never stumble upon good sources. (Except Brilliant, of course.)
- 4 years, 7 months ago
I use AoPS for competition problems. You can find competition problems here and here
I also have many of the AoPS books. You can find them here.
I have used Khan Academy for learning Calculus.
Participating in various local and national math competitions has also helped a lot.
Is this enough for your purposes?
- 4 years, 7 months ago
Yep.Thanks a lot. Will ask for further help if needed.
- 4 years, 7 months ago
Do you have anyone/thing who has inspired you to be who you are now or will be in the future? ~_~
- 4 years, 7 months ago
I want to help other people learn math and other things when I grow up, so people like Calvin Lin, Richard Rusczyk, Salman Khan, etc.
- 4 years, 7 months ago
How many hours do you study maths ?
- 3 years, 5 months ago
Hi, bob! :P
- 4 years, 7 months ago
I doubt using my AoPS nickname is convenient on Brilliant.org :P
- 4 years, 7 months ago
It's not your AoPS nickname, its my nickname for YOU! :D
- 4 years, 7 months ago
You're still in Year 8? I'm in Year 8.
- 3 years, 5 months ago
- 3 years, 5 months ago
Hi Daniel, it is nice to talk to a *MathWiz* like you. Have you ever been to India?
- 3 years, 5 months ago
Sorry, I haven't.
- 3 years, 5 months ago
Will you please add me on Facebook? I will be really very happy to have a friend like you.
- 3 years, 5 months ago
Done.
- 3 years, 5 months ago
If you accept my friend request, then we could have some talks related to maths.
- 2 years, 11 months ago
Some weeks ago, I noticed that your status stated '150 problems with 300 solutions'. Have you uploaded them on Brilliant? I am eager to solve them. Thanks!
- 3 years, 8 months ago
Yes, they are on brilliant. You can check my profile and search a bit to find them.
- 3 years, 8 months ago
Thanks a lot! Nice set!
- 3 years, 8 months ago
@Daniel Liu I'm on AoPS too (Shabashbeta147). How do I see the documents I've saved using TeXer? I know they give the link, but when I click on it, it just takes me back to the same TeXer page. Please, please tell me how.
- 3 years, 9 months ago
As it's quite obvious, you're really good at math, and I myself aspire to become better, like geniuses such as you. Unfortunately, I only started doing real math (aka math outside of school) about a year ago (9th grade; I'm actually 15, ignore my profile age). So over the past year I have taught myself trig, alg 2, advanced geom, and half of pre-calc. I've heard of some kids who started at age 2 and they're unbelievably smart at math. So when did you start?
- 4 years, 3 months ago
I wonder who this is :P
- 4 years, 6 months ago
@Daniel Liu I just looked on your profile for a certain problem when I noticed that solving a Rubik's cube is one of your interests. Can you talk about that? How fast can you solve it, and with what method? Have you been to any competitions?
Personally, I can do it in about 20-30 seconds with a method I came up with on my own that is quite similar to CFOP. No competitions, though. They don't have those in Florida anymore.
Also, you said somewhere else that I got you hooked on Homestuck. How far have you gotten (what act), and who are your favorite characters?
- 4 years, 7 months ago
I'm not very good at Rubik's cubes. My best is about 40 seconds. I have also not gone to any competitions.
As for Homestuck, I am currently up to date. Waiting for the final update now... :D
- 4 years, 7 months ago
What cube do you use? With a standard Rubik's brand I can do it in a minute at the expense of an aching hand, but I generally use my modified Zhanchi (made by the Chinese company Dayan).
Wow, you read that fast. I got it in 7 weeks, 5 days (6 weeks 12 days!) Who are your favorite characters? In addition to Eridan, I like Terezi, Dave, and Jade.
- 4 years, 7 months ago
Teenager Meenah, Sollux, Dave, Jane, Mayor (omg such a cute mayor XD), Roxy...
Yea they're all pretty cool. I don't really not like any of the characters in particular.
- 4 years, 7 months ago
Yes, there is no one cuter than the Mayor!
- 4 years, 7 months ago
I thought message boards were a private non-re-shareable thing; at least according to the description of Peter's set right here.
Umm... could you post your solution to Classic Climbing Problem 2? I'd really love that!
- 4 years, 7 months ago
The main difference is whether you want your messageboard to be sharable.
If you post it as a note to a set, it will not be sharable, and others will not see it in their newsfeed. If they have a link (or are @mentioned), they will be able to access it. This leads to conversations that are initiated by you, and you have slightly greater control over it.
Posting it as a note directly would allow your messageboard to be sharable, and hence much more public. You tend to end up with broader discussions, sometimes initiated by others who are interested in getting to know you. Of course, you could have both versions.
I set mine up by posting to a set, as I wanted to have 'semi-private' conversations easily. The original intention was to easily drop a quick note to various people, somewhat akin to being able to comment on their problem/set.
Staff - 4 years, 7 months ago
I guess I prefer the semi-private conversations to the public ones. Maybe I'll do a public message board.
To be honest, I was somewhat overwhelmed by the number of message-boards in my feed today.
On a related note, what do you think of adding a message board feature to everyone's profile? It'd be hard to moderate though.
- 4 years, 7 months ago
Certainly possible - if lots of people seem to like them we'll think about a more elegant integration.
Staff - 4 years, 7 months ago
Hello, my name is Vish! Nice to meet you. :-0
- 4 years, 7 months ago
What's the meaning of life? Jk... x"D What do you do in your spare time? Do you know any languages other than English?
- 4 years, 7 months ago
42 :D
I do math in my spare time, for fun and for preparing for competitions. I also enjoy playing video games.
I know (Mandarin) Chinese too.
- 4 years, 7 months ago
Oh nice!
- 4 years, 7 months ago | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8773383498191833, "perplexity": 2011.1334179362188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823872.13/warc/CC-MAIN-20181212112626-20181212134126-00573.warc.gz"} |
http://math.stackexchange.com/questions/211023/solving-a-particular-differential-equation?answertab=oldest | # Solving a particular differential equation
I have a differential equation:
$y''-\frac{3}{2(1+x)(2-x)}y'+\frac{3y}{4(1+x)(2-x)}-\frac{Kf(x)(1+x)^2y}{2x(1+x)(2-x)}=0$.
Here $K>0$ is a fixed constant and $f(x)$ is some (as yet) unknown function of $x$, which is in our hands to chose.
What I want is that I should decide $f(x)$ suitably to find a solution $y(x)$ of the above with the condition $y(0)=1$ where $y(x)$ is a rational function of $x$, i.e. a quotient of two polynomials. The solution should not be free of $K$.
I tried setting $f(x)$ so that the coefficient of $K$ becomes $1$ but the differential equation turned out to be so complicated that I could not solve it.
Can anyone offer any help or suggestions please?
Let $R(x)$ be any rational function such that $R(0)=1$ and define $$f(x)=\Bigl(R''-\frac{3}{2(1+x)(2-x)}R'+\frac{3R}{4(1+x)(2-x)}\Bigr)\frac{2x(1+x)(2-x)}{(1+x)^2R}.$$ Then $R$ is a solution of your equation with $K=1$.
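To make the construction concrete, here is a symbolic verification (my addition) with one particular choice, $R(x)=1/(1-x)$, which is rational and satisfies $R(0)=1$:

```python
import sympy as sp

x = sp.symbols('x')
a = 3 / (2*(1 + x)*(2 - x))
b = 3 / (4*(1 + x)*(2 - x))

R = 1 / (1 - x)   # any rational function with R(0) = 1 would do
f = (sp.diff(R, x, 2) - a*sp.diff(R, x) + b*R) * 2*x*(1 + x)*(2 - x) / ((1 + x)**2 * R)

# Residual of the original ODE with y = R and K = 1:
res = sp.diff(R, x, 2) - a*sp.diff(R, x) + b*R - f*(1 + x)**2*R / (2*x*(1 + x)*(2 - x))
print(sp.simplify(res))  # 0
```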
Thanks for your reply. However I am not looking for a solution which is free of $K$, or is true for a fixed value of $K$. – Shahab Oct 11 '12 at 11:34
Quite simply, I want a solution which is a rational function of $x$ but the term $K$ is present in that solution (eg $(1+Kx)/(1-x)$.) – Shahab Oct 11 '12 at 12:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9020992517471313, "perplexity": 84.72937203524616}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860112727.96/warc/CC-MAIN-20160428161512-00201-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/3129344/condition-when-angle-between-two-lines-is-pi-over-3 | Condition when angle between two lines is ${\pi\over 3}$
The whole question is-
Show that if the angle between the lines whose direction cosines are given $$l+m+n=0$$ and $$fmn+gnl+hlm=0$$ is $${\pi\over 3}$$ then $${1\over f}+{1\over g}+{1\over h}=0$$.
I'm trying to solve the problem in the following manner-
From the first equation $$n=-l-m$$, substituting this value of $$n$$ in the second equation we get-
$$fm(-l-m)+g(-l-m)l+hlm=0$$
$$\implies g\left({l\over m}\right)^2+(f+g-h)\left({l\over m}\right)+f=0$$
Now, the roots of this equation are $${l_1\over m_1}$$ and $${l_2\over m_2}$$. So their product is $${l_1\over m_1}\cdot{l_2\over m_2}={f\over g}\implies {l_1 l_2\over f}={m_1 m_2\over g}$$.
Similarly, we get $${m_1 m_2\over g}={n_1 n_2\over h}$$.
Hence, $${l_1 l_2\over f}={m_1 m_2\over g}={n_1 n_2\over h}=K$$(say)
Thus, $$\cos {\pi\over3}=l_1 l_2+m_1 m_2+n_1n_2=K(f+g+h)\implies K=\frac{1}{2(f+g+h)}$$.
Now, I can't proceed further. I can't prove $${1\over f}+{1\over g}+{1\over h}=0$$.
Can anybody solve the problem? Thanks for assistance in advance.
• The direction cosines (in $3D$) are (three) numbers, not an equation, aren't they? – user Feb 28 '19 at 17:47
• Yes, I am saying that the direction cosines of the two lines $(l_1,m_1,n_1)$ and $(l_2,m_2,n_2)$ satisfy these equations. – MathBS Mar 2 '19 at 19:00
• This is still not clear. Do you mean that the direction cosines are $(l,m,n)$ and $(fmn,gnl,hlm)$, respectively? – user Mar 2 '19 at 20:03
• If $(l_1,m_1,n_1), (l_2,m_2,n_2)$ are the direction cosines of two lines respectively. Then $l_1+m_1+n_1=0, fm_1 n_1+gn_1 l_1+hl_1 m_1=0$ and $l_2+m_2+n_2=0, fm_2 n_2+gn_2 l_2+hl_2 m_2=0$ – MathBS Mar 4 '19 at 15:17
As clarified in the comments, the correct formulation of the problem is:
Show that the angle between the lines whose direction cosines $$(l_1,m_1,n_1)$$ and $$(l_2,m_2,n_2)$$ satisfy the equations $$l+m+n=0\tag1$$ and $$fmn+gnl+hlm=0\tag2$$ is $${\pi\over 3}$$ if $${1\over f}+{1\over g}+{1\over h}=0.\tag3$$
In fact, as will be shown below, an even stronger statement, with "if" replaced by "if and only if", holds. It will additionally be assumed that $$(l_1,m_1,n_1)\nparallel(l_2,m_2,n_2)$$ and $$(f,g,h)\ne(0,0,0)$$. Otherwise extra solutions are possible.
The entirety of the conditions implies that none of $$l_i,m_i,n_i$$ is $$0$$. Indeed, the assumption that any of the components, say $$n_1$$, is $$0$$ results, in combination with $$(1)$$ and $$(2)$$, in the equality $$(f,g,h)=(0,0,0)$$.
Observe now that equality $$(1)$$ $$(l,m,n)\cdot(1,1,1)=0$$ means that the vectors $${\bf a}_1=(l_1,m_1,n_1)$$ and $${\bf a}_2=(l_2,m_2,n_2)$$ are orthogonal to the vector $${\bf 1}=(1,1,1)$$. This implies $${\bf a}_1\times {\bf a}_2 \parallel {\bf 1}$$ or component-wise: $$m_1n_2-n_1m_2=n_1l_2-l_1n_2=l_1m_2-m_1l_2=\frac{\sin\alpha}{\sqrt3},\tag4$$ where $$\alpha$$ is the angle between $${\bf a}_1$$ and $${\bf a}_2$$ which is to be found. The last equality is due to $$\sin^2\alpha=|{\bf a}_1\times {\bf a}_2|^2=\sum_{i=x,y,z}({\bf a}_1\times {\bf a}_2)_i^2.$$
Similarly the equation $$(2)$$ $$(f,g,h)\cdot(mn,nl,lm)=0$$ means that the vectors $${\bf b}_1=(m_1n_1,n_1l_1,l_1m_1)$$ and $${\bf b}_2=(m_2n_2,n_2l_2,l_2m_2)$$ are orthogonal to the vector $${\bf c}=(f,g,h)$$. This in turn implies that the vector $${\bf c}$$ is collinear to the vector product $${\bf b}_1\times {\bf b}_2$$, which reads component-wise: \begin{align} (f,g,h)& =A\big(l_1l_2(n_1m_2-m_1n_2),m_1m_2(l_1n_2-n_1l_2),n_1n_2(m_1l_2-l_1m_2)\big)\\ &=\frac{A\sin\alpha}{\sqrt3}(l_1l_2,m_1m_2,n_1n_2)\tag5 \end{align} where $$A$$ is a non-zero constant.
We start now with the proof of the "if" part. The equations $$(3)$$ and $$(5)$$ imply: $$\frac{1}{l_1l_2}+\frac{1}{m_1m_2}+\frac{1}{n_1n_2}=0\tag6$$ or $$m_1m_2n_1n_2+n_1n_2l_1l_2+l_1l_2m_1m_2=0.\tag7$$ With help of $$(1)$$ the equation reads: \begin{align} 0&=(l_1+m_1)(l_2+m_2)(l_1l_2+m_1m_2)+l_1l_2m_1m_2\\ &=(l_1l_2+m_1m_2+l_1m_2)(l_1l_2+m_1m_2+m_1l_2).\tag8 \end{align} Now we can proceed with computation of $$\cos\alpha={\bf a}_1\cdot {\bf a}_2$$: \begin{align} \cos\alpha &=l_1l_2+m_1m_2+n_1n_2\\ &=l_1l_2+m_1m_2+(l_1+m_1)(l_2+m_2)\\ &=2(l_1l_2+m_1m_2)+l_1m_2+m_1l_2\\ &\stackrel{(8)}=\pm(l_1m_2-m_1l_2)\stackrel{(4)}=\pm\frac{\sin\alpha}{\sqrt3}.\tag9 \end{align} Squaring the equation one finally obtains: $$\cos^2\alpha=\frac13 \sin^2\alpha\implies \cos^2\alpha=\frac14 \implies \alpha=\frac\pi3.\tag{10}$$
As already mentioned the inverse implication $$\alpha={\pi\over 3}\implies {1\over f}+{1\over g}+{1\over h}=0$$ holds as well, since, provided the equality $$(1)$$, the equations $$(6)-(10)$$ are equivalence relations, so that the whole argument can be easily reversed.
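A numerical sanity check of the "if" direction (my addition): pick f = g = 1, h = -1/2, so that 1/f + 1/g + 1/h = 0, solve the quadratic in l/m derived in the question, and measure the angle between the two resulting unit vectors.

```python
import numpy as np

f, g, h = 1.0, 1.0, -0.5   # satisfies 1/f + 1/g + 1/h = 0

# With n = -(l + m), the constraints reduce to g*(l/m)^2 + (f+g-h)*(l/m) + f = 0.
r1, r2 = np.roots([g, f + g - h, f])

lines = []
for r in (r1, r2):
    v = np.array([r, 1.0, -(r + 1.0)])   # (l, m, n) up to scale
    lines.append(v / np.linalg.norm(v))

cos_alpha = abs(np.dot(lines[0], lines[1]))
print(np.degrees(np.arccos(cos_alpha)))  # 60.0
```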
In general, if a, b, c and d, e, f are the components of 2 vectors, then the direction cosines of the first vector are $$\frac{a}{\sqrt{a^2 + b^2 + c^2}}$$ and $$\frac{b}{\sqrt{a^2 + b^2 + c^2}}$$ and $$\frac{c}{\sqrt{a^2 + b^2 + c^2}}$$, and similarly for the second vector.
Then the angle $$\theta$$ between these vectors is
$$\cos\theta = \frac{ad + be + cf}{\sqrt{a^2 + b^2 + c^2}\sqrt{d^2 + e^2 + f^2}}$$
by the definition of the dot product. In other words, $$\cos\theta$$ equals the sum of the products of the corresponding direction cosines. In this question we have
$$\cos\theta = l(fmn) + m(gnl) + n(hlm) = lmn(f + g + h)$$
The next step is to prove this equals 1/2.
From the question we have
$$l + m + n = 0$$ $$l^2 + m^2 + n^2 = 1$$ Since there are three unknowns but only 2 equations, we can only express l and n in terms of m :
$$\implies l^2 + lm + m^2 = \frac{1}{2}$$
$$\implies l = \frac{-m + D}{2}$$ $$\implies n = \frac{-m - D}{2}$$ Where $$D = \sqrt{2 - 3m^2}$$. It does not matter which root we take for l the other root will be for n.
Also we have other conditions:
$$fmn + gnl + hlm = 0 ... (1)$$ $$(fmn)^2 + (gnl)^2 + (hlm)^2 = 1 ... (2)$$ $$\frac{1}{f} + \frac{1}{g} + \frac{1}{h} = 0 \implies fg + gh + fh = 0... (3)$$
Put l and n into (1) we have $$-fm(m + D) - g(1 - 2m^2) + hm(-m + D) = 0...(4)$$
Put l and n into (2) we have $$f^2m^2(2 - 2m^2 + 2mD) + g^2(1 - 2m^2)^2 + h^2m^2(2 - 2m^2 -2mD) = 4...(5)$$
Eliminate f from (3) and (4) we have $$f^2m(m + D) + fg(2mD - 2m^2 + 1) + g^2(1 - 2m^2) = 0$$ $$\implies f = \frac{2m^2 - 2mD - 1\pm\sqrt{(2mD -2m^2 + 1)^2 - 4(m + D)m(1 - 2m^2)}}{2m(m + D)}g$$ $$\implies f = \frac{2m^2 - 2mD - 1 \pm 1}{2m(m + D)}g$$ We take plus 1 for simplicity since the other root may give a result making $$cos\theta$$ greater than 1 or less than -1. $$f = \frac{m - D}{m + D}g$$
Eliminate h from (4) and (5) gives $$f^2m^2(2 - 2m^2 + 2mD) + g^2(1 - 2m^2)^2 + fgm(m + D)(1 - 2m^2) = 2$$ Put f into this equation we get $$m^2(m - D)^2g^2 + g^2(1 - 2m^2)^2 + (m - D)mg^2(1 - 2m^2) = 2$$ $$\implies g^2(1 - Dm - m^2) = 2$$ $$\implies g = \sqrt{\frac{2}{1 - Dm - m^2}}$$ $$\implies f = \frac{m - D}{m + D}\sqrt{\frac{2}{1 - Dm - m^2}}$$ $$\implies h = \frac{D - m}{2m}\sqrt{\frac{2}{1 - Dm - m^2}}$$
Hence put l, m, n, f, g, h to the cosine expression we get $$cos\theta = lmn(f + g +h) = \frac{-(m - D)}{2}m\frac{-(m + D)}{2}\sqrt{\frac{2}{1 - Dm - m^2}}(\frac{m - D}{m + D} + 1 + \frac{D - m}{2m})$$ $$=\frac{m - D}{4}\sqrt{\frac{2}{1 - Dm - m^2}}$$ $$=\frac{1}{4}\sqrt{\frac{2(m^2 + 2 - 3m^2 - 2Dm)}{1 - Dm - m^2}} = \frac{1}{2}$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 94, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.976889431476593, "perplexity": 178.787149560844}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529128.47/warc/CC-MAIN-20210122051338-20210122081338-00560.warc.gz"} |
http://math.stackexchange.com/questions/184528/is-the-set-of-all-definable-subsets-of-the-natural-numbers-recursively-enumerabl/184580 | # Is the set of all definable subsets of the natural numbers recursively enumerable?
I asked myself similar questions before, for example "Are the definable real numbers countable"? It seemed to me that the set of all explicitly and unambiguously definable objects is "countable", because we could write down the text defining a specific object in the English language. (Or use the English language to define a formal language appropriate to write down the definition.) But English isn't unambiguous, and the meaning of "countable" depends on the used set theory.
Right now I'm thinking about second order logic. Assuming that the natural numbers exist and are well defined seems unproblematic to me. However, the question which subsets of the natural numbers (should we assume to) exist has no good answer. If we say that "all" subsets exist, we run into all sorts of set theoretic questions. If we resort to axiomatic set theory for these questions, we end up with the fact that the natural numbers can't be defined up to isomorphism. The alternative seems to be to assume that only the definable subsets of the natural numbers exist. But my guess is that this won't work out either, i.e. that the "set of all definable subsets of the natural numbers" turns out to be a very strange beast. But if it is a strange beast, I guess it won't be recursively enumerable.
Question Assuming the Church-Turing thesis (or a suitable analogue for definability, or some other sufficiently solid foundation), what can be said about the "set of all definable subsets of the natural numbers"? Is it well defined and recursively enumerable?
Edit It has been pointed out that my use of "recursively enumerable" is ambiguous without further clarifications, especially that I have to specify how a definable subset of the natural numbers is expected to be encoded as a natural number. Given a natural number, take its binary representation and interpret it as a document in the "OpenDocument" format (ODF). Assume that we can efficiently decide whether the document is valid ODF and whether its content is written sufficiently clear and tries to define a subset of the natural numbers. Let's call this a well formed definition. Assume further that each definable subset has at least one well formed definition which can be recognized to succeed in defining a subset of the natural numbers. Now the question is whether there exists a recursively enumerable subset $S$ of the natural numbers such that each number in $S$ corresponds to a well formed definition which succeeds, and that for each definable subset the set $S$ contains a corresponding definition.
-
Let me remember that the "Turing machine" model is a finitistic one; it only accepts finite inputs and it only produces finite outputs. How do you want to codify in a finitistic way a subset of natural numbers? The more natural way is to use a finitistic formula to describe the set, but then it is obvious which is the answer to your question (YES). If you do not want to use the natural way then you must add to the question what is the finitistic codification (for subsets of natural numbers) you want to use. – boumol Aug 20 '12 at 9:57
@boumol It's true that the "Turing machine" only accepts finite inputs, but what do you mean by "it only produces finite outputs"? One natural way to define a subset of the natural numbers by a Turing machine would be to take the set of input numbers for which the machine stops. It should be clear from the question that any unambiguous way to define a subset of the natural numbers is acceptable. Especially if $S_1$ and $S_2$ are definable, then $S_1 - S_2$ is also definable. However, this doesn't mean that it is also definable as the set of input numbers for which some Turing machine stops. – Thomas Klimpel Aug 20 '12 at 12:28
@boumol Assuming you're still sure the methods with a finitistic formula works (I'm not sure what this means exactly), can you expand your comment into an answer? – Thomas Klimpel Aug 20 '12 at 12:32
!Thomas Klimpel: you should consult the answers to this question from MathOverflow, which may be the question you had in mind: mathoverflow.net/questions/44102/… – Carl Mummert Aug 20 '12 at 13:58
I do not understand the downvote. – Arkamis Aug 20 '12 at 15:33
The question is somewhat ambiguous about exactly how the sets would be enumerated. The natural way to read it in computability theory is the following, which is the usual sense in which a sequence of sets can be r.e.:
Is there a computable double enumeration $\{ n_{i,j} : i,j \in \omega\}$ such that for each definable set $S \subseteq \omega$ there is an $i_0$ such that $S = \{ n_{i_0,j} : j \in \omega\}$.
The answer to that is certainly "no", because if it was true then every definable set would be r.e., which is not the case.
-
For those with some interest, I want to point out I have taken the weakest possible definition of "enumerable sequence of sets", in the sense that there may not be any effective way to tell which $i_0$ corresponds a particular $S$. Thus the enumeration has infinitely many chances to capture any particular $S$. – Carl Mummert Aug 20 '12 at 14:02
A short answer would be that the concept of "recursively enumerable" is only defined for subsets of some countable universe, where that universe has a ("reasonable") encoding as finite strings of symbols that can be input or output by a Turing machine. In your case the set of "subsets of $\mathbb N$ that are definable" (whatever exactly that means) doesn't satisfy that condition, becuase the natural universe to use would be $\mathcal P(\mathbb N)$, which is not countable.
So it doesn't even make sense to ask whether your set is r.e.
One could attempt to broaden the definitions by imagining a non-deterministic Turing machine that writes an infinite sequence of 0s and 1s to a write-only output tape, where that sequence can be interpreted as a subset of $\mathbb N$. One could then declare that some $A\subseteq 2^{\mathbb N}$ is "recursively enumerable"-ish iff it is the set of possible outputs of some non-deterministic machine.
However, though this can be argued to generalize the ordinary concept of recursive enumerability, it wouldn't be the ordinary concept of recursive enumerability.
-
As long as every "definable" number may be defined using a finite string in some finite alphabet you may sort those strings by length and enumerate numbers using this sort.
-
The problem with that is that if "definable" means that there is some formula that is true for this number but false for all others, the relation between defining strings and actual numbers is not itself definable inside the system. There are models of ZFC in which every real number is definable in this way (but the relation between defining formulas and defined numbers is not a set), and we cannot be sure we're not living in such a world. – Henning Makholm Aug 22 '12 at 11:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9109853506088257, "perplexity": 259.169825086842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987628.47/warc/CC-MAIN-20150728002307-00250-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://blog.geomblog.org/2011/07/axioms-of-clustering.html | ## Monday, July 18, 2011
### Axioms of Clustering
(this is part of an occasional series of essays on clustering: for all posts in this topic, click here
I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it..."
-- Justice Potter Stewart in Jacobellis v. Ohio.
What makes one clustering better than another? To answer that question we have previously assumed that a well motivated objective function was given to us. Then the situation is easy, we compute the value of the objective and decide accordingly. In practice, however, clustering is often an intermediate step in a longer chain of computations, and there is not specific motivation for one objective function over another. Debating the pros and cons of the multitude of possible objective functions can easily take hours. Instead, we can further the I know it when I see it'' intuition by writing down the properties that any good clustering objective should satisfy and see if this axiomatic approach guides us towards the right solution.
This approach was undertaken by Kleinberg in his work on clustering axioms. A clustering function is one that takes a set of points (and their pairwise distances) and returns a partition of the data. The function should abide by the simple axioms:
• Scale-Invariance: If all distances are scaled by a constant factor, the clustering should not change. Put another way, measuring the distance in miles or kilometers (or nanometers) should not change the final clustering partition.
• Richness: Depending on the pairwise distances, any partition should be possible. For example, the clustering should not always put two specific points $x$ and $y$ together, regardless of the distance metric.
• Consistency: If the pairwise distance between points in a cluster decreases, while the distances between a pair of points in different clusters increases, the clustering should not change. Indeed, such a transformation makes clusters tighter' and better separated from each other, so why should the clustering change?
Each of the axioms seems reasonable, and it is surprising that there is no clustering function that is consistent with all three axioms ! The trouble lies with the last axiom: by changing the distances we require that a clustering stay the same, but an even better clustering may emerge. For example, consider points lying in several `obvious'' clusters, and proceed to move one of these clusters out to infinity. As we move the points further and further out, the resulting dataset appears to be two-clusterable (see the image below).
This leads to a slightly different tack at this problem. Instead of thinking about a specific clustering, or a specific partition of the data, instead we try to define an objective function, so that the optimum clustering under that objective behaves in a reasonable manner. This is the approach adapted by Ackerman and Ben-David. Let $m$ be a function measuring the quality of the clustering. As before, Scale-Invariance requires the measure $m$ to be independent of the units on the distances and Richness requires that every clustering can be the optimum (under $m$) clustering.
The Consistency axiom is the one undergoing a subtle change. Instead of requiring that the clustering stay the same if such a perturbation to the distances occurs, we only require that the score, or measure of the clustering {\em improve} under such a transformation. This breaks the counterexample above -- while the score of a 1-cluster decreases as we move the distances, the score of a 2-clustering decreases even more and surpasses the score of the 1-clustering. Indeed, the authors demonstrate a set of different clustering metrics that are consistent under this set of axioms.
-- Sergei Vassilvitskii
1. Alex Lopez-Ortiz7/18/2011 01:09:00 PM
All three axioms, while they may seem natural at first, are ultimately questionable.
For example, almost any real life clustering problem has an implicit notion of unit distance built into it.
Say, if we are tracking human diseases and you have a four point quadrangular configuration a few city blocks apart, this clearly forms a cluster. Now scale up the square so that the corners lie one each in North America, South America, Europe and Africa. Clearly, the new configuration does not form a cluster. Out goes Scale invariance.
Richness is perhaps the most unnatural of the axioms. If you are given two points, and yourclustering algorithm is scale and rotation invariant it is not possible to achieve all partitions of the input set. There is nothing wrong with this, yet it violates the richness axiom.
2. In my opinion, there is a simple reason why the richness axiom should be modified. Namely, I see no intuitive reason why both the partition that lumps everything into a single cluster and the partition that puts everything into a singleton cluster of its own should be realized. Both of these are basically "failure modes" where the verdict is that there is no natural clustering. If we simply pick one of these two clusterings to represent our failure mode, and forbid the other one, then the inconsistency among the axioms disappears. Conversely, if you insist on richness as standardly defined, it's not really surprising that if you start with everything in a single cluster and then dilate everything, then there's no principled way to decide when you should shatter into singletons—and yet the axioms say you must.
3. I'm not an expert here but you may want to consider level set clustering http://bit.ly/oJ8C2x
Its satisfying two of the axioms (consistency and richness)..scale would need change of parameters.
This is also another simpler algorithm it satisfies the two conditions:
http://bit.ly/nPl72W | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8472424745559692, "perplexity": 588.9116072145364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585178.60/warc/CC-MAIN-20211017144318-20211017174318-00334.warc.gz"} |
http://mathhelpforum.com/geometry/108977-equation-circle-right-angled-triangle.html | # Thread: Equation of Circle from right-angled triangle
1. ## Equation of Circle from right-angled triangle
The point D, E and F have coordinates (-2,0), (0,-1) and (2,3) respectively.
i) gradient of DE = -0.5
ii) 0.5x+y-4=0 line F parallel to DE
iii) by calculating the gradient of EF, show that DEF is a right-angled triangle.
iv) calculate the length of DF.
v) Use the results of parts(iii) and (iv to show that the circle which passes through D,E and F has equation $x^2+y^2-3y-4=0.$
i've done i),ii),iii),iv)
i) gradient of DE = -0.5
ii) 0.5x+y-4=0 line F parallel to DE
iii) y= 2x-1 = grad of normal
which bisect DE at some point therefore it's a right angled triangle.
iv) $sqroot(4^2+3^2)$
thank you for helping!
2. Do you have to use the earlier parts? If so, are you allowed to use the result (from geometry) relating right angles and triangles?
If so, then find the midpoint of the hypotenuse, as this will be the center of the circle. You've found the length of the hypotenuse; now divide by two to find the radius value.
Once you have the center and the radius, you can plug into the center-radius form of the circle equation, multiply everything out, and rearrange to get the required form of the equation.
3. Originally Posted by stapel
If so, then find the midpoint of the hypotenuse, as this will be the center of the circle. You've found the length of the hypotenuse; now divide by two to find the radius value.
ok i understand this part which leave me with 2.5 as the radius.
Once you have the center and the radius, you can plug into the center-radius form of the circle equation, multiply everything out, and rearrange to get the required form of the equation.
but how do i find out the centre points of the circle? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9139682054519653, "perplexity": 598.1649372489012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00579-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://intellectualmathematics.com/blog/critique-of-theological-foundations-of-keplers-astronomy/ | # Critique of “Theological Foundations of Kepler’s Astronomy”
Barker & Goldstein’s article “Theological Foundations of Kepler’s Astronomy” (Osiris, 16, 2001, pp. 88–113) claims that theological factors were crucial in Kepler’s derivation of the ellipse in the Astronomia Nova (AN). For this discussion we need to be aware of two laws announced earlier in the same work. The distance–velocity law says that the velocity of a planet is inversely proportional to its distance from the sun. The reciprocation law says, in brief, that “the planet’s body reciprocates [i.e., deviates from a circle] according to the measure of the versed sine of the eccentric anomaly” (AN, Donahue trans., p. 558), the meaning of which I shall explicate in detail below. These two laws are discovered essentially empirically, though of course by means of intermediate models, since the data is not available directly (AN, chs. 32 and 56 resp.). It is then shown that they can be physically realised by taking the sun and the planets to be magnetic (AN, chs. 33 and 57 resp.).
Now let us look at Barker & Goldstein’s reconstruction of Kepler’s derivation of the ellipse. They claim that “religious ideas contribute directly” to this derivation (p. 89) in the form of a theologically entrenched argument type called exemplum (§VII). A precise definition of exemplum appears elusive, but we need only be concerned with its alleged rôle in the Astronomia Nova. The crucial point according to Barker & Goldstein is that “the distance–velocity law and the reciprocation law are individually defensible by exemplum arguments” (p. 111), as follows:
“[O]n the grounds that the physical influence responsible for [these laws] is a species of an established genus [i.e., magnetism], this influence, whatever it is, can be recognizes as part of God’s governance of his creation and hence a law of nature.” (pp. 109–110)
I am not denying that these considerations may be part of the explanation for Kepler’s fondness for analogies. The textual evidence for this is very weak, however, as Barker & Goldstein are well aware. So to support their claim of direct religious influence, the exemplum theory needs a lot more to show for it. For this purpose they reconstruct Kepler’s derivation of the ellipse as follows.
“[The influence of the exemplum] is especially clear in the case of the last alternative to the ellipse, eliminated by Kepler in chapter 58, the via buccosa. Here the alternative curve is eliminated, not because it fails to fit the observations but because the ellipse—and only the ellipse—follows from the combination of the distance–velocity law and the reciprocation law. And what makes that a good basis for selecting between otherwise equally successful curves is that the two laws invoked here have already been shown to be parts of the providential plan, by means of exemplum inferences.” (p. 110)
Almost everything in this quotation is wrong. First of all we must note a mistake which does not seriously affect their main point: Kepler’s proof that the orbit is an ellipse (AN, ch. 59, thm. XI) uses only the reciprocation law, not the distance–velocity law, while Barker & Goldstein state repeatedly that both laws are needed for this proof (pp. 110, 111). The Newtonian idea of velocity and gravity interacting to generate the shape of the orbit is not present in Kepler. (The distance–velocity law is of course needed to compute the planet’s position on the ellipse at any given time, but this is not what Barker & Goldstein are referring to, for they specify explicitly that the theorem concerns “its two-dimensional track” only [p. 110].)
As for the rest of the above quotation, its many errors can only be sorted out after first describing what Kepler actually said. My account agrees with the excellent studies by Aiton (Mathematical Gazette, 59(410), 1975, pp. 250–260), Wilson (Isis, 59(1), 1968, pp. 4–25), and Stephenson (Kepler’s Physical Astronomy, Springer–Verlag, 1987, ch. 3). Barker & Goldstein refer to the first two of these “for different interpretations” (p. 111). I shall show that this is not a matter of “interpretations,” since in fact every step of Barker & Goldstein’s account is flatly contradicted by Kepler’s own words.
Consider first the reciprocation law, which says, in the notation of figure 1, that the Mars–sun distance is r = 1 + e cos x. The way in which Kepler discovered this law is important. He discovered “quite by chance” that “if the radius is substituted for the secant at the middle longitude [i.e., at x = 90°], this accomplishes what the observations suggest” (AN, p. 543). He then conjectured the reciprocation law as a generalisation of this: “the effect [of the reciprocation law] on all the eccentric positions will be the same as what was done here at the middle longitudes” (AN, p. 543). He then immediately (ch. 57) proceeded to satisfy himself that this law can be accounted for magnetically.
Now comes the problematic part: finding the orbit determined by this law. The problem is that the generalisation is ambiguous: it can generate either the via buccosa or the ellipse depending on how one interprets the equation. These two possible interpretations are shown in figures 2 and 3 respectively. Note that both interpretations agree when x = 90°, which, as we saw, was the empirical case that the law was induced from. Thus they are both permissible interpretations of the law. Indeed, Kepler makes this perfectly clear: on the basis of the reciprocation law “an orbit can be made for the planet … in a ‘puff-cheeked’ [i.e., via buccosa] form as well” (AN, p. 104). However, “this orbit is convicted of error by the equations” (AN, p. 104), i.e. empirically. Having “convicted” the via buccosa, Kepler “began … recalling the ellipses, quite convinced that I was following an hypothesis far, far different from the reciprocation hypothesis” (AN, p. 575). He then realised that the second interpretation of the reciprocation law is also possible, i.e., “that the distances constructed by the reciprocation and supported by the observations are contained in the perfect ellipse no less than in the puff-cheeked orbit” (AN, p. 104, italics added).
The truth is thus the exact opposite of Barker & Goldstein’s account above: the via buccosa is rejected solely for empirical reasons; despite the fact that it follows from the reciprocation law; which law does not uniquely determine the ellipse. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8977856040000916, "perplexity": 1499.3554422412417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645413.2/warc/CC-MAIN-20180318013134-20180318033134-00182.warc.gz"} |
https://www.enotes.com/homework-help/find-implicit-solution-following-initial-value-288395 | # Find the implicit solution of the following initial value problem. (2y-3)dy = (6x^2 + 4x - 5)dx , y(1) = 2
You need to integrate both sides such that:
`int (2y-3)dy = int (6x^2 + 4x - 5)dx =gt int 2y dy - int 3dy =`
`= int 6x^2 dx+ int 4x dx- int 5 dx =gt`
`=gt 2y^2/2 - 3y + c = 6x^3/3 + 4x^2/2 - 5x + c`
...
## See This Answer Now
Start your 48-hour free trial to unlock this answer and thousands more. Enjoy eNotes ad-free and cancel anytime.
You need to integrate both sides such that:
`int (2y-3)dy = int (6x^2 + 4x - 5)dx =gt int 2y dy - int 3dy =`
`= int 6x^2 dx+ int 4x dx- int 5 dx =gt`
`=gt 2y^2/2 - 3y + c = 6x^3/3 + 4x^2/2 - 5x + c`
`y^2 - 3y + c = 2x^3 + 2x^2 - 5x + c`
Since `y(1) = 2 =gt 2^2 - 3*2 + c = 2*1^3 + 2*1^2 - 5*1 + c`
`4 - 6 + c = 2 + 2 - 5 + c =gt -2 + 5 - 4 = c =gt c = -1`
The implicit solution to the given differential equation is `y^2 - 3y - 2x^3 - 2x^2 +5x - 1 = 0.`
Approved by eNotes Editorial Team | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9102358818054199, "perplexity": 4250.301008034914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00468.warc.gz"} |
https://brilliant.org/problems/pizza-war-1/ | # Pizza War 1
Vaibhav gives a challenge to Kalash. He says that describe a motion such that the velocity vector at some instant of time makes an angle of $45^{\circ}$ with the acceleration vector. Kalash then swiftly answers that $\vec{r} = at\hat{i} + at\left(1 - \dfrac{t}{10}\right)\hat{j}$.
Kalash then asks Vaibhav at what instant of time will this occur for the motion described above. Vaibhav is confused. Can you help him out?
×
Problem Loading...
Note Loading...
Set Loading... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8707435131072998, "perplexity": 1988.5254240238983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370491857.4/warc/CC-MAIN-20200328104722-20200328134722-00433.warc.gz"} |
http://sciencehq.com/mathematics/continuity-theorems.html | # Continuity Theorems.
The basic theorems based on continuity are given below:
If the functions f(x) and g(x) are continuous at x=a then,
1> $f(x) \pm g(x)$ is continuous at x=a.
2>$f(x) \times g(x)$ is continuous at x=a.
3>$\cfrac{f(x)}{g(x)}$ is continuous at x=a , if g(x) is not equal to 0.
4>$\sqrt[n]{f(x)}$ is continuous at x=a if f(x) is greater than 0 , or is a positive number when “n” is even.
Related posts:
1. Basic properties or theorems of limit. Four basic properties of limits of a function or limit...
2. Continuity of a function(continuous and discontinuous functions). Continuity of a function , a function can either be...
3. Exponent(index) and laws of exponents(indices). When , a number is repeatedly multiplied by itself the...
4. Basic properties of Logarithms. Logarithms are important throughout mathematics and science. Establishing some basic...
5. The concept of Limit. Limit of a function is the value to which function... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9640204906463623, "perplexity": 1161.2854673484628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743011.30/warc/CC-MAIN-20181116111645-20181116133645-00382.warc.gz"} |
http://mathhelpforum.com/trigonometry/91431-trig-problem.html | 1. ## Trig problem
Hi. Got a maths trig question.
For a angle of less than 90 degrees, sinx=3cosx. Show and decide the value of sin2x.
I just cant bring myself to figure this one out. Helps or pointers would be very appreciated
2. Originally Posted by Burger king
Hi. Got a maths trig question.
For a angle of less than 90 degrees, sinx=3cosx. Show and decide the value of sin2x.
I just cant bring myself to figure this one out. Helps or pointers would be very appreciated
$\sin(2x) = 2\sin{x}\cos{x}$
since $\sin{x} = 3\cos{x}$ ...
$
\sin(2x) = 2(3\cos{x})\cos{x} = 6\cos^2{x}
$
3. Hello, Burger king!
I can't imagine what "show and decide" means.
For an angle $x < 90^o\!:\;\;\sin x\:=\:3\cos x.$
Show and decide the value of $\sin2x.$
We have: . $\sin x \:=\:3\cos x \quad\Rightarrow\quad \frac{\sin x}{\cos x} \:=\:3 \quad\Rightarrow\quad \tan x \:=\:3$
Hence: . $\tan x \:=\:\frac{3}{1} \:=\:\frac{opp}{adj}$
So $x$ is an angle in a right triangle with: $opp = 3,\;adj = 1$
. . Pythagorus says: . $hyp = \sqrt{10}$
Hence: . $\sin x = \tfrac{3}{\sqrt{10}},\;\;\cos x = \tfrac{1}{\sqrt{10}}$
Therefore: . $\sin2x \:=\:2\sin x\cos x \:=\:2\left(\frac{3}{\sqrt{10}}\right)\left(\frac{ 1}{\sqrt{10}}\right) \:=\:\frac{3}{5}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9708929061889648, "perplexity": 2110.986303385664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540932.10/warc/CC-MAIN-20161202170900-00397-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://dsp.stackexchange.com/questions/56510/convolution-of-two-impulse-signals | # Convolution of Two Impulse Signals
I have encountered convolution of two different impulse signals.
x[n] = (1/2)^n . u[n-2] * u[n]
x[n] = u[n] * [n]
u[n] = discrete impulse signal
. = product operation
* = convolution operation
For the first one, I found this solution:
x[n] = 1/4 if n = 2
x[n] = 0 if n != 2
For the second one, I found impulse signal itself
Edit: Are my answers are true ? My professor told me that the answer for the first one is wrong, but he did not say the correct answer.
• what is the question ? – Ahmad Bazzi Apr 7 at 15:49
• I cannot make sure that my answers are true or not – Goktug Apr 7 at 16:00
• My answer is specified above. – Goktug Apr 7 at 16:01
• $u[n]$ is generally used to denote the unit step function, not the unit impulse function which is usually denoted $\delta[n]$. Please don't introduce new notation unnecessarily. – Dilip Sarwate Apr 7 at 22:38
But you better use the standard notation as Dilip Sarwate already indicated; $$u[n]$$ is the unit-step and $$\delta[n]$$ is the unit impulse. Then
$$0.5^n \delta[n-2] \star \delta[n] = 0.5^2 \delta[n-2] = \begin{cases} { 0.25 ~~~, ~~~n= 2 \\ 0.00 ~~~,~~~n \neq 2 } \end{cases}$$
This is basically an exercise to test the student's understanding of the concept that the unit impulse is effectively the unit in convolutions, that is: $$\delta \star x = x$$ for all signals $$x$$. Perhaps a systems explaination might help. If an LTI system has impulse response $$h$$, then we know that the output of the system when $$x$$ is the input is $$y = h \star x$$. So, $$\delta \star x$$ can be thought of as the output of an LTI system with impulse response $$\delta$$ when the input to the LTI system is $$x$$. What LTI system has output $$\delta$$ when its input is the unit impulse $$\delta$$?? It is just the canonical straight wire with (no) gain that audio enthusiasts dream about! And so, $$\delta \star x = x$$ for all $$x$$.
With this, it it is easy to verify that the OP's answer to the first question is correct (but maybe his professor wanted to see the answer as $$\left(\frac 12\right)^n \delta[n-2]$$ or $$\left(\frac 12\right)^2 \delta[n-2]$$ instead of what the OP wrote) while the second answer $$\delta[n]\star [n] = \delta[n]$$ is incorrect, it should be $$[n]$$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9051299095153809, "perplexity": 439.6063578706522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665809.73/warc/CC-MAIN-20191112230002-20191113014002-00013.warc.gz"} |
https://powerfuldatablog.wordpress.com/category/frequentist-inference/ | Category: Frequentist Inference
# Frequentist Inference III – Confidence Intervals
Confidence Intervals are often used and are a great way to not just give a point estimate but to tell where else the point might be. We will focus on confidence intervals based on normal data today. z-confidence intervals for the mean Suppose data $latex x_{ 1 },...,x_{ n } \sim N(\mu,\sigma^{ 2 })$ with … Continue reading Frequentist Inference III – Confidence Intervals
This here shall be a summary of the most common significance tests. Designing NHST specify $latex H_{ 0 }$ and $latex H_{ 1 }$ choose test statistic of which we know the null distribution and alternative distribution specify rejection region, significance level and decide if the rejection region is one or two-sided compute … Continue reading Frequentist Inference II – NHST II | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8316494226455688, "perplexity": 1210.0332183986966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511175.9/warc/CC-MAIN-20181017132258-20181017153758-00459.warc.gz"} |
http://hal.in2p3.fr/in2p3-01226856 | # First combined search for neutrino point-sources in the Southern Hemisphere with the ANTARES and IceCube neutrino telescopes
Abstract : We present the results of searches for point-like sources of neutrinos based on the first combined analysis of data from both the ANTARES and IceCube neutrino telescopes. The combination of both detectors which differ in size and location forms a window in the Southern sky where the sensitivity to point sources improves by up to a factor of two compared to individual analyses. Using data recorded by ANTARES from 2007 to 2012, and by IceCube from 2008 to 2011, we search for sources of neutrino emission both across the Southern sky and from a pre-selected list of candidate objects. No significant excess over background has been found in these searches, and flux upper limits for the candidate sources are presented for $E^{-2.5}$ and $E^{-2}$ power-law spectra with different energy cut-offs.
Document type :
Journal articles
Domain :
http://hal.in2p3.fr/in2p3-01226856
Contributor : Danielle Cristofol <>
Submitted on : Tuesday, November 10, 2015 - 3:23:40 PM
Last modification on : Thursday, September 5, 2019 - 3:14:05 PM
### Identifiers
• HAL Id : in2p3-01226856, version 1
• ARXIV : 1511.02149
### Citation
S. Adrián-Martínez, A. Albert, M. André, G. Anton, M. Ardid, et al.. First combined search for neutrino point-sources in the Southern Hemisphere with the ANTARES and IceCube neutrino telescopes. The Astrophysical Journal, American Astronomical Society, 2016, 823, pp.65. ⟨in2p3-01226856⟩
Record views | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8057791590690613, "perplexity": 3009.8196358170044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572439.21/warc/CC-MAIN-20190915235555-20190916021555-00090.warc.gz"} |
http://mathhelpforum.com/statistics/34307-combinatorics-help.html | 1. Combinatorics HELP
Let a1, a2, a3, a4 ,a5, be any distinct positive integers. Show that there exists at least one subset of three integers whose sum is divisible by 3.
what is the answer to this...plzz HELPP!!!
2. Originally Posted by fafafa
Let a1, a2, a3, a4 ,a5, be any distinct positive integers. Show that there exists at least one subset of three integers whose sum is divisible by 3.
what is the answer to this...plzz HELPP!!!
Have you ever heard of the Pigeon Hole principle? You can use that here.
Note that any positive integer is either congruent to 0 mod 3 or 1 mod 3 or 2 mod 3. we have more than three integers here, so we know all congruences are accounted for. Further more, at least one congruence repeats. Try doing this by cases. (There may be an easier way, I'm not a Number Theorist)
Any questions? Do you have an idea of how to proceed?
EDIT: Ah, nevermind, I just realized this post makes no sense! Nothing is stopping us from choosing all the numbers of one congruence class
3. i have no idea what to do..is it possible you can show me the answer step by step plz...
4. Hello,
Here are the possibilities :
0-0-0 -> ok
0-0-1 -> not ok
What are the conditions upon those integers ? Consecutive ?
5. You have 5 integers.
Each integer must be congruent to either 0, 1, or 2 (mod 3).
if any 3 are congruent (mod 3), then 3 divides their sum
ie 5 + 5 + 5 = 3+2 + 3+2 + 3+2 = 3(3+2)
and if all 3 mods are represented, then 3 divides their sum
ie 5 + 6 + 7 = 3+2 + 3+3 + 3+4 = 3+(3-1) + 3+(3+0) + 3+(3+1) = 3*3 + (3*3 -1+0+1) = 3*3 + 3*3 = 3(3+3)
*note, those examples are for your understanding, I wouldn't put them in your proof, if you need to show this, show it in a more abstract way that can be applied to any given situation*
So worst case, two are congruent to 0, or 1, or 2 (mod 3) and two more are congruent to 0, or 1, or 2 (mod 3), but these two sets of numbers are not congruent to eachother (mod 3). Then if the 5th term is congruent to either of these sets, 3 will divide their sum. The only alternative is that it is not congruent to any of the other numbers (mod 3) in which case, all three mods are represented, and so any combination of a 0, 1, and 2 (mod 3) will sum to a number divisible by 3. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8023249506950378, "perplexity": 274.61798376218854}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00065-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://www.maplesoft.com/support/help/Maple/view.aspx?path=Finance%2Fgrowingannuity | Finance - Maple Programming Help
Home : Support : Online Help : Mathematics : Finance : Personal Finance : Finance/growingannuity
Finance
growingannuity
present value of a growing annuity
Calling Sequence growingannuity(cash, rate, growth, nperiods)
Parameters
cash - amount of first payment rate - interest rate growth - rate of growth of the payments nperiods - number of payments
Description
• The function growingannuity calculates the present value at period=0, of an annuity of nperiods payments, starting at period=1 with a payment of cash. The payments increase at a rate growth per period.
• Since growingannuity used to be part of the (now deprecated) finance package, for compatibility with older worksheets, this command can also be called using finance[growingannuity]. However, it is recommended that you use the superseding package name, Finance, instead: Finance[growingannuity].
Examples
I hold an investment that will pay me every year for 5 years starting next year. The first payment is 100 units, and each payment is expected to grow by 3% each year. If the interest rate is 11%, what is the present value of the investment.
> $\mathrm{with}\left(\mathrm{Finance}\right):$
> $\mathrm{growingannuity}\left(100,0.11,0.03,5\right)$
${390.0340764}$ (1)
This can also be calculated as follows.
The cash flows are given by:
> $\mathrm{cf}≔\left[100,100\cdot 1.03,100{1.03}^{2},100{1.03}^{3},100{1.03}^{4}\right]$
${\mathrm{cf}}{≔}\left[{100}{,}{103.00}{,}{106.0900}{,}{109.272700}{,}{112.5508810}\right]$ (2)
or equivalently as
> $i≔'i':$
> $\mathrm{cf}≔\left[\mathrm{seq}\left(\mathrm{futurevalue}\left(100,0.03,i\right),i=0..4\right)\right]$
${\mathrm{cf}}{≔}\left[{100.0}{,}{103.00}{,}{106.0900}{,}{109.272700}{,}{112.5508810}\right]$ (3)
> $\mathrm{cashflows}\left(\mathrm{cf},0.11\right)$
${390.0340762}$ (4)
Here, we deal with a more complicated example illustrating differential growth. We have an investment that will pay dividends of 1.12 units starting one year from now, growing at 12 % per year for the next 5 years. From then on, it will be growing at 8%. What is the present value of these dividends if the required return is 12%? Solution: first part, the present value for the first 6 years is a growing annuity
> $\mathrm{part1}≔\mathrm{growingannuity}\left(1.12,0.12,0.12,6\right)$
${\mathrm{part1}}{≔}{6.000000000}$ (5)
The fact that this is 6 times the present value of the first dividend is because the growth rate is equal to the required return. The second part, is a (deferred) growing perpetuity. Six years from now, the dividends will be
> $\mathrm{div_6}≔\mathrm{futurevalue}\left(1.12,0.12,5\right)$
${\mathrm{div_6}}{≔}{1.973822685}$ (6)
> $\mathrm{div_7}≔\mathrm{futurevalue}\left(\mathrm{div_6},0.08,1\right)$
${\mathrm{div_7}}{≔}{2.131728500}$ (7)
Its value 6 years from now is
> $\mathrm{part2_6}≔\mathrm{growingperpetuity}\left(\mathrm{div_7},0.12,0.08\right)$
${\mathrm{part2_6}}{≔}{53.29321250}$ (8)
Which has a present value of
> $\mathrm{part2}≔\mathrm{presentvalue}\left(\mathrm{part2_6},0.12,6\right)$
${\mathrm{part2}}{≔}{27.00000000}$ (9)
Therefore the investment has a present value of
> $\mathrm{part1}+\mathrm{part2}$
${33.00000000}$ (10)
33 units.
Compatibility
• The Finance[growingannuity] command was introduced in Maple 15. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 22, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8385365009307861, "perplexity": 1134.2012226950221}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347387219.0/warc/CC-MAIN-20200525032636-20200525062636-00506.warc.gz"} |
http://math.stackexchange.com/questions/94122/showing-that-fz-leq-prod-limits-k-1n-left-fracz-z-k1-overlinez/94146 | # Showing that $|f(z)| \leq \prod \limits_{k=1}^n \left|\frac{z-z_k}{1-\overline{z_k}z} \right|$
I need some help with this problem:
Let $f\colon D \to D$ analytic and $f(z_1)=0, f(z_2)=0, \ldots, f(z_n)=0$ where $z_1, z_2, \ldots, z_n \in D= \{z:|z|<1\}$. I want to show that $$|f(z)| \leq \prod_{k=1}^n \left| \frac{z-z_k}{1-\overline{z_k}\, z} \right|$$ for all $z \in D$.
It seems that I need to use Schwarz-Pick Lemma but it seems that the problem doesn't satisfy the conditions. Another lemma that I can use is that of Lindelöf saying: Let $f:D \to D$ analytic, then $$|f(z)|\leq \frac{|f(0)|+|z|}{1+|f(0)| \cdot |z|}$$ for all $z \in D$.
It seems to be an easy problem but I couldn't succeed in solving it.
-
Do you know that there is an option to thank people who help you by accepting their answers? If not, I seriously recommend you to go through the faq. – user21436 Dec 25 '11 at 22:16
It's possible that $f(z)$ is not a real number for some $z$, so the inequality doesn't make sense (you probably missed $|\cdot |$). – Davide Giraudo Dec 25 '11 at 22:25
I believe you imitate the proof of the Shwartz lemma. Divide f(z) by all of the factors on the right. What does the maximum modulus principle give you? – Potato Dec 25 '11 at 22:32
yes, thanks, I've edited it. – bond Dec 25 '11 at 22:32
Also note that the function you get when you do what I said will be analytic, because all singularities will be removable. – Potato Dec 25 '11 at 22:45
Let $f$ be non-constant, and continuously extends to $\overline{D}$ (the closed unit disc.)
Let $B(z)=\prod_{k=1}^n \frac{z-z_k}{1-\overline{z_k}z}.$ Note that $|B(z)|=1$ for $|z|=1.$ Define $g(z):=f(z)/B(z).$ Now, $g$ is a holomorphic map on $D$. By maximum modulus principle, $|g(z)|$ attains its maximum value on the boundary $|z|=1.$ Therefore, $|g(z)| \leq 1$ for $|z| \leq 1.$ Hence,
$$|f(z)| \leq |B(z)|= \prod_{k=1}^n \left|\frac{z-z_k}{1-\overline{z_k}z} \right|.$$
See the Blaschke Product as well.
-
I don't think you really need to assume that $f$ extends to $\overline D$ continuously for your argument to work. Couldn't just remark that (by the maximum modulus principle) for $|z|<r<1$: $$\frac{|f(z)|}{|B(z)|} \le \max_{\theta} \frac{1}{|B(re^{i\theta})|} \overset{r\to 1}\longrightarrow 1$$ so $|f(z)|\le |B(z)|$? – Sam Dec 26 '11 at 3:00
@Sam: That's a good point. It can be improved in this way. – Ehsan M. Kermani Dec 27 '11 at 4:20
It's look good but Where have you used the fact of $f(z_k)=0$? – bond Dec 27 '11 at 14:23
@bond: The zeros of $B(z)$ are precisely $z_k$'s, that's why $g$ is holomorphic on $D.$ – Ehsan M. Kermani Dec 27 '11 at 19:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9466854929924011, "perplexity": 235.92245256182537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701164289.84/warc/CC-MAIN-20160205193924-00297-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://asmedigitalcollection.asme.org/verification/article/6/3/031004/1110711/Uncertainty-Reduction-for-Model-Error-Detection-in | ## Abstract
Uncertainty quantification (UQ) is an important step in the verification and validation of scientific computing. Validation can be inconclusive when uncertainties are larger than acceptable ranges for both simulation and experiment. Therefore, uncertainty reduction (UR) is important to achieve meaningful validation. A unique approach in this paper is to separate model error from uncertainty such that UR can reveal the model error. This paper aims to share lessons learned from UQ and UR of a horizontal shock tube simulation, whose goal is to validate the particle drag force model for the compressible multiphase flow. First, simulation UQ revealed the inconsistency in simulation predictions due to the numerical flux scheme, which was clearly shown using the parametric design of experiments. By improving the numerical flux scheme, the uncertainty due to inconsistency was removed, while increasing the overall prediction error. Second, the mismatch between the geometry of the experiments and the simplified 1D simulation model was identified as a lack of knowledge. After modifying simulation conditions and experiments, it turned out that the error due to the mismatch was small, which was unexpected based on expert opinions. Last, the uncertainty in the initial volume fraction of particles was reduced based on rigorous UQ. All these UR measures worked together to reveal the hidden modeling error in the simulation predictions, which can lead to a model improvement in the future. We summarized the lessons learned from this exercise in terms of empty success, useful failure, and deceptive success.
## 1 Introduction
According to the ASME verification, validation, and uncertainty quantification committee standards and AIAA verification, validation, and uncertainty quantification guides [1,2], model validation is defined as the process of determining the degree to which a model is an accurate representation of the real phenomenon, from the perspective of the model's intended uses. The purpose of model validation is not only to assess the accuracy of a computational model but also to improve the model based on the validation results. Uncertainty quantification (UQ) has been recognized as a key component in verification and validation [3], whose aim is to build predictive computational models. Validation experiments may include measurement variability and processing uncertainty [4] while simulation predictions may include propagated uncertainty and numerical and model form errors [5]. The validation assessment is often performed using validation metrics that compare the uncertainties between simulation and experiment in the form of probability distributions [6].
If uncertainties in simulation and experiment are larger than errors, validation may not be useful even if two distributions are overlapped. Therefore, in order to have a meaningful validation, it is required to separate model error from uncertainty and reduce the uncertainty to be less than the model error, which is a unique approach in this paper compared to other approaches in the literature [7]. There are many different ways of reducing uncertainty, such as using more samples to reduce statistical uncertainty [8], bias correction [9], and model conditioning [10]. The objective of this paper is to share lessons learned from UQ and uncertainty reduction (UR) of a horizontal multiphase shock tube simulation, whose goal is to validate the particle drag force model for the compressible multiphase flow. The initial UQ for the horizontal shock tube experiment was presented by Park et al. [4]. The primary goal of this paper is to expose the model errors so that the model can be improved to ensure the required prediction capability of the simulation. This paper is an extension of our previous work on validation and uncertainty quantification of the shock-tube simulation [11].
The rest of the paper is organized as follows: Section 2 explains the multiphase shock tube simulation and experiment, which is an intermediate stage of a compressible multiphase turbulence project at the University of Florida. Section 3 summarizes the validation procedure and UQ, where the quantities of interest are defined, and uncertain variables are quantified. Section 4 presents three UR activities to reveal the model error in particle drag force, followed by conclusions and a summary of lessons learned from this work in Sec. 5.
## 2 Shock Tube Experiment and Simulation
The physics of shock interaction with a cloud of particles has many interesting applications, such as explosive volcanic eruptions, dust explosions in coal mines, and supernovae [12]. In addition, understanding this physics plays a key role in accurately predicting explosive dispersal of particles and controlling and designing the consequences of the explosion. The center for compressible multiphase turbulence at the University of Florida is developing software that can simulate a high-speed dispersal of an annular, dry particle bed driven by a core of reacting explosive. Figure 1 shows a blastpad experiment of explosive dispersal of solid particles conducted at Eglin Air Force Base under the guidance of the Air Force Research Laboratory [13]. This experiment serves as a testbed for exploring the rich physics of compressible multiphase instabilities and turbulence. The quantities of interest (QoI) are the shock location and particle front location as a function of time, which will be compared with simulation predictions for validation.
Fig. 1
Many interesting and complex physics are present in the experiment, including detonation chemistry, turbulence, particle collisions, drag forces, real gas effects, shock–particle interactions, and particle–gas interactions. Figure 2 shows eight physics models for simulating the behavior of explosives, gases, and particles. Among them, four key physics models were selected by the authors based on their importance to achieving the prediction capability of the simulation: (1) particle force, (2) particle collision, (3) compaction, and (4) multiphase turbulence.
Fig. 2
Since the blastpad experiment in Fig. 1 is expensive, only a small number of experiments are affordable. Therefore, simplified experiments and simulations are planned to focus on individual physics model validation while the effects of other models are either controlled or ignorable. Among the eight physics models in Fig. 2, in this paper, the multiphase shock tube simulation and experiment are used to validate the particle drag force model (T6) and the collision model (T4), which explain shock–particle and particle–particle interactions. Since the QoIs strongly depend on these two models, their accuracy is critically important to ensure the prediction capability of the blastpad simulation. The shock tube experiment is effective for these two models because the initial experimental conditions can be easily controlled such that the shock–particle and gas–particle interactions mostly contribute to the QoI [14]. The effect of the compaction model is minimized by using a relatively low volume fraction. In addition, the effect of turbulence is ignorable by focusing on the motion of particles at an early time.
The apparatus of the shock tube experiment conducted by Sandia National Laboratories [14] is shown in Fig. 3(a). The shock tube of 5.2 m in length is composed of the driver, driven, and test sections. The particle curtain is located in the test section, where a glass window on the side makes it possible to observe the particle motion. The lime-glass particles in the reservoir fall through the slit to form a particle curtain. A shock wave is generated due to the pressure difference between the driver and driven sections when the diaphragm between them bursts. The planar shock wave travels through the driven section and is stabilized before arriving at the test section.
Fig. 3
When the shock moves in the flow direction, its interaction with particles is observed through the side window in the test section. The motion of the particle curtain is represented by the upstream front position (UFP) and the downstream front position (DFP) as a function of time as shown in Fig. 3(b). In addition, the thickness of the curtain is calculated by the distance between the two fronts. Initially, the curtain thickness is about 2 mm before hit by the shock (Fig. 3(c)), while the curtain moves and expands afterward (Fig. 3(d)).
Figures 3(c) and 3(d) show the schematic view of the glass window in the test section, where the black vertical bar in the middle represents the particle curtain and dashed lines are the observation window. The motion of the particle curtain is measured using imaging techniques. More specifically, a Schlieren imaging system [15] is used to capture the motion of the particle curtain and shock. Figure 3(e) is a Schlieren image taken through the observation window. The rectangles with the dashed line of Figs. 3(c) and 3(d) show the coverage of the Schlieren image. In addition, an X-ray radiography imaging system [16] is used to measure the volume fraction of the particle curtain. The Schlieren imaging system can take an image every 24.4 μs, while the X-ray imaging system can take a single image per experiment. Since the Schlieren and X-ray cannot be used simultaneously, either the curtain location or volume fraction can be measured, not both.
An additional challenge in measurement is due to the gap between the particle curtain and the sidewall. Figure 4 shows the schematic top view of the test section with the particle curtain where the curtain occupies about 80% of the width of the test section. The gap allows the air bypassing around the particle curtain, which makes the particles on both sides move faster than those at the center. The two-dimensional simulation results in the xy-plane in Fig. 4(b) support this behavior, where the particle motion on the edge (y = 0.04 m) is faster than that at the center (y = 0 m). The two vertical solid lines are UFP (red) on the left and DFP (blue) on the right when all particles are considered. On the other hand, the two dashed lines are UFP (green) on the left and DFP (cyan) on the right when only particles near the center (0 ≤ y ≤ 0.01) are considered. These particle front lines are determined as the left and right 2.5 percentile of particle volumes. Since the curtain movement can be observed through the window on the side, it is difficult to observe the central portion of the particle curtain. Due to the placement of the particle reservoir and collector, it is difficult to observe the particle movement in the vertical direction. Since one-dimensional simulation assumes that there is no variation through y- and z-coordinate direction, the gap effect can be a major model error. This fact initially provoked serious doubt about the comparison between the one-dimensional prediction and the experiment with gaps because the former cannot capture the gap effect.
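As a side note, the front-line definition above is easy to state in code. The sketch below is not from the paper; it assumes equal-volume particles (the paper weights by particle volume) and hypothetical positions, and simply takes the 2.5th and 97.5th percentiles of the particle x-coordinates as UFP and DFP.

```python
# Minimal sketch (hypothetical data, equal-volume particles assumed):
# front positions as the left/right 2.5-percentile cuts of particle x-positions.
import numpy as np

rng = np.random.default_rng(0)
x_particles = rng.normal(loc=30.0, scale=5.0, size=10_000)  # particle x-positions, mm

ufp, dfp = np.percentile(x_particles, [2.5, 97.5])
print(f"UFP = {ufp:.1f} mm, DFP = {dfp:.1f} mm")
```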
Fig. 4
## 3 Uncertainty Quantification and Validation Process
Both shock tube simulation and experiment include many sources of uncertainty. To have meaningful validation, one must quantify them carefully. Figure 5 shows the validation and UQ framework for the simulation based on reducing error and uncertainty in both simulation and experiment. The major sources of uncertainty in the experiment are measurement variability, sampling uncertainty, and measurement processing uncertainty. The first one is aleatory, while the other two are epistemic uncertainty. The QoI can be different at different tests due to measurement uncertainty, while the measurement may have a bias due to calibration error, which is measurement processing uncertainty. These errors and uncertainty are quantified by repeating experiments or investigating the measurement process. In addition, the input conditions and their uncertainties should be quantified so that they can be used in the simulation. The major source of uncertainty in simulation is the propagated uncertainty, which is the effect of input uncertainty on QoIs. In addition, the stochastic variability and discretization error in the numerical computation process should be included. These errors and uncertainties are quantified by running the simulation multiple times with different realizations of input uncertainty and performing convergence analysis. When the difference between uncertainties in experiment and simulation is bigger than the difference in the mean values (model error), it is difficult to identify the model error. In such a case, UR is initiated to reduce the uncertainty in both the experiment and simulation until they are less than the model error. If the model error is larger than a threshold, model improvement is initiated to reduce the model error. This process is repeated until both the uncertainty and model error are less than a user-defined threshold.
Fig. 5
A unique approach in this paper is the separated treatment of model error from other sources of uncertainty. In the literature [7,17], the model form error is generally considered as a part of epistemic uncertainty, which is good for validating the prediction capability of simulation. However, to identify the model error, which is the main goal of this paper, it would be necessary to separate the model error from other uncertainties. With the given uncertainties in Fig. 5, the model error can be expressed as
$e_{\text{model}} = (y_{\text{meas}} + e_{\text{samp}} + e_{\text{meas}}) - (y_{\text{model}} + e_{\text{prop}} + e_{\text{num}})$ (1)

where $y_{\text{meas}}$ and $y_{\text{model}}$ are, respectively, the means of measurement and model prediction. In addition, $e_{\text{samp}}$, $e_{\text{meas}}$, $e_{\text{prop}}$, and $e_{\text{num}}$ are, respectively, sampling, measurement, propagated, and numerical errors. All these four errors are modeled as uncertain variables with mean values of zero. For example, measurement processing uncertainty is a bias error. However, since we do not know its exact value, it is modeled as a distribution with zero mean. Then, the discrepancy between the means of experimental measurement and the simulation calculation represents the model error, while its uncertainty is the sum of all four uncertainties

$E(e_{\text{model}}) = y_{\text{meas}} - y_{\text{model}}$ (2)

$V(e_{\text{model}}) = V(e_{\text{samp}}) + V(e_{\text{meas}}) + V(e_{\text{prop}}) + V(e_{\text{num}})$ (3)

where $E(\cdot)$ and $V(\cdot)$ represent, respectively, the expected value and the variance of an uncertain variable. Figure 6(a) illustrates the proposed method of defining model error. The model prediction becomes a distribution whose mean is at $y_{\text{model}}$ and whose variance is the combination of that of propagated uncertainty ($e_{\text{prop}}$) and numerical errors ($e_{\text{num}}$). The experimental measurement is also a distribution whose mean is at $y_{\text{meas}}$, and its variance comes from that of sampling uncertainty ($e_{\text{samp}}$) and measurement processing uncertainty ($e_{\text{meas}}$). It is noted that the uncertainty in Fig. 6(a) is schematic and does not represent any specific distribution type. Traditionally, the model error is defined based on the model's accuracy. However, this is only possible when the level of model error is known. Since the goal of this paper is to quantify the model error, Eq. (1) is a proper way to estimate it with given uncertainties.
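To make Eqs. (1)–(3) concrete, the sketch below draws Monte Carlo samples for the four zero-mean error terms (all numbers are hypothetical, not the paper's values), forms the model-error samples of Eq. (1), and checks the mean and variance identities of Eqs. (2) and (3), along with the identification test of Fig. 6.

```python
# Minimal sketch with made-up numbers; the four error terms are independent and zero-mean.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
y_meas, y_model = 65.0, 60.0                     # hypothetical mean DFP values, mm

e_samp = rng.uniform(-0.5, 0.5, n)               # sampling uncertainty
e_meas = rng.uniform(-0.8, 0.8, n)               # measurement processing uncertainty
e_prop = rng.normal(0.0, 1.5, n)                 # propagated input uncertainty
e_num  = rng.uniform(-0.3, 0.3, n)               # numerical error

e_model = (y_meas + e_samp + e_meas) - (y_model + e_prop + e_num)   # Eq. (1)
print(e_model.mean())                            # Eq. (2): ~ y_meas - y_model = 5.0
print(e_model.var(), e_samp.var() + e_meas.var() + e_prop.var() + e_num.var())  # Eq. (3)
print(abs(e_model.mean()) > e_model.std())       # Fig. 6: error revealed if mean > std
```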
Fig. 6
The identification of the model error is the comparison between the mean of model error in Eq. (2) and its uncertainty in Eq. (3). The uncertainty can be measured in terms of the standard deviation. When the uncertainty is larger than the mean, it is impossible to identify the model error because the uncertainty covers the mean of model error, as shown in Fig. 6(b). Therefore, to reveal the model error, uncertainty in Eq. (3) must be reduced such that the mean of model error can be estimated, as shown in Fig. 6(c). Therefore, the “large uncertainty?” loop in Fig. 5 is repeated until both uncertainties from experiments and simulations are less than the mean of model error in Eq. (2). Once the uncertainty is reduced enough, the “large error?” loop in Fig. 5 is repeated until the level of model error is acceptable.
Table 1 summarizes important sources of uncertainty and error in the shock tube experiment. Additional measurements were performed to estimate uncertainty in particle diameter [21]. Schlieren image analysis is used to estimate the uncertainty in the curtain thickness [14]. X-ray image analysis is used to estimate the distribution and uncertainty in the initial particle volume fraction [16]. All uncertainties are considered uniformly distributed as we were able to identify their lower and upper bounds through measurements. In the case of initial volume fraction, however, it was possible to identify its distribution based on an X-ray image, which is detailed in Sec. 4.3. All the uncertainties except for measurement bias are used for simulation inputs to calculate the propagated uncertainty ($e_{\text{prop}}$), while the measurement bias is included in the measurement uncertainty ($e_{\text{meas}}$). First, a Kriging surrogate model of the curtain location is constructed using the volume fraction, particle diameter, and curtain thickness as input variables. Then, the Monte Carlo simulation with 10,000 samples was used to generate samples of curtain locations. To obtain the sampling uncertainty ($e_{\text{samp}}$), experiments are repeated four times. To understand the influence of the unknown particle locations, we randomly placed particles and made ten runs of simulations with different random particle locations. The difference between the simulation results was small enough that we concluded that the effect of the randomness in the initial particle location on the QoI is negligible. The discretization error has been studied by Nili et al. [22], and the error from the numerical scheme will be discussed in Sec. 4.1; together these make up the numerical error ($e_{\text{num}}$). The error due to the gap effect is quantified by comparing one- and two-dimensional simulations.
Table 1: Key uncertainty and error sources in the validation of the one-dimensional simulation

| Uncertainty source | Description |
| --- | --- |
| Measurement bias in particle front positions | Systematic bias uncertainty because of the gap between the particle curtain and walls [−10, 0]% |
| Initial particle volume fraction | Uncertainty in initial volume fraction measurement process and local variation in particle curtain [18, 19]% |
| Initial particle positions | Variability in initial particle positions |
| Particle diameters | Variability in particle diameters [100, 130] μm |
| Initial curtain thickness | Variation in the curtain thickness [1.6, 2.4] mm |
| Pressure at driver section | Negligible measurement noise |
| Discretization | Error due to temporal and spatial discretization |
| Model error | Error in the drag force model [20] |
| Gap effect | Error for not being able to measure DFP due to the gap |
| Numerical flux scheme | Error due to numerical scheme |
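The propagation step described just above Table 1 — a Kriging (Gaussian process) surrogate of the curtain location sampled 10,000 times over the input uncertainty ranges — can be sketched as follows. The library choice and the linear stand-in "simulator" are illustration-only assumptions; the paper does not specify an implementation.

```python
# Minimal sketch: Monte Carlo propagation of input uncertainty through a Kriging surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
lo = np.array([18.0, 100.0, 1.6])     # volume fraction %, diameter um, thickness mm
hi = np.array([23.0, 130.0, 2.4])

# Hypothetical training data; the linear "simulator" is a stand-in for the real code.
X_train = rng.uniform(lo, hi, size=(30, 3))
y_train = 25 + 0.5*X_train[:, 0] - 0.1*X_train[:, 1] + 3.0*X_train[:, 2]   # DFP, mm
gp = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

# 10,000 Monte Carlo samples over the (uniform) input uncertainty ranges.
X_mc = rng.uniform(lo, hi, size=(10_000, 3))
dfp = gp.predict(X_mc)
print(dfp.mean(), dfp.std())          # propagated uncertainty, e_prop
```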
## 4 Error and Uncertainty Reduction
### 4.1 Uncertainty Due to Bias From a Numerical Flux Scheme.
To calculate the propagated uncertainty, it is necessary to run multiple simulations with varying input parameters according to their uncertainty distribution. Since simulations are computationally expensive, surrogate models are often used to replace them. During the design of experiments to build the surrogate model, we observed inconsistencies in the simulation results. Since the physical explanation of these inconsistencies was difficult, we conducted a parametric study to systematically investigate this behavior. The parametric design of experiments was conducted for three major sources of input uncertainty: particle volume fraction, curtain thickness, and particle diameter. Figure 7 shows the parametric design of experiments along four lines in the three-dimensional parameter space, where simulation is conducted at each circle. From the surrogate model, point A, which shows a significant deviation from the actual simulation, is selected (volume fraction of 23%, particle diameter of 110 μm, and curtain thickness of 2.4 mm). The purpose is to confirm whether the simulation results along each line show a physically meaningful trend. Each line is used to identify anomalies in simulation for the corresponding parameter [23]. The QoIs (particle front positions) are calculated as the mean value over random initial particle positions. Since the purpose was to investigate the inconsistency of simulation results, the range of parameters is not the same as the variability of the input parameters reported in Table 1.
Fig. 7
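The parametric lines of Fig. 7 can be generated as below — a sketch with a hypothetical start point. The normalized coordinate s runs from 1 (line start) to 0 (point A), matching the axis convention used in Fig. 8.

```python
# Minimal sketch: sample inputs along one DOE line ending at point A.
import numpy as np

point_a = np.array([23.0, 110.0, 2.4])   # volume fraction %, diameter um, thickness mm
start   = np.array([18.0, 130.0, 1.6])   # hypothetical start of the line

s = np.linspace(1.0, 0.0, 11)            # 1 = start point, 0 = point A
samples = s[:, None]*start + (1 - s)[:, None]*point_a
print(samples[0], samples[-1])           # first row = start, last row = point A
```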
Figure 8 shows the results of QoI at 500 μs along the four lines in Fig. 7. All four lines are normalized in the input variables, where 1 denotes their individual starting points and 0 denotes the ending point (point A). The values of different lines should have the same results at the 0 point, which is {23%, 110 μm, 2.4 mm}. Figure 8(a) shows that the DFP along the four lines is significantly different; it varies between 25 and 60 mm. The behavior of the diameter line departs significantly from the other lines. The discontinuity along the diameter line near the value 0.45 cannot be explained physically. Since the trend is not consistent with physics, it was concluded that it was caused by numerical error. The behavior was most likely associated with the advective upstream splitting method plus (AUSM+) scheme, a numerical flux calculation scheme [24]. The flux of a cell is calculated considering Lagrangian particles traveling in the numerical cells, and the particles are not always distributed evenly in a cell as AUSM+ assumes. This AUSM+ assumption provoked numerical instability when particles in one cell were extremely concentrated. Thus, the AUSM+ scheme was upgraded to the AUSM+up scheme to take into account the particle concentration in a cell [19]. After upgrading to the AUSM+up scheme, the prediction along the four lines is shown in Fig. 8(b), where the prediction uncertainty is significantly reduced and the behavior along the different lines is consistent. After improving the numerical scheme, the range of the DFP prediction is reduced to between 25 and 30 mm. However, the prediction value itself at point A changed significantly: from 40 mm using AUSM+ to 27 mm using AUSM+up. Therefore, by changing the numerical scheme, the numerical uncertainty is significantly reduced while the model error itself is increased. The initial simulation with AUSM+ thus had a numerical uncertainty so large that the model error was unclear (see Fig. 9(a)), as the simulation results overlapped with experiment results due to the large uncertainty. As Fig. 9(b) shows, the inconsistency between the simulation and experimental results for the AUSM+up scheme is slightly larger compared to the AUSM+ scheme. This gives us a lesson that reducing uncertainty (here, via the numerical flux scheme) can reveal the hidden model error. In this case, the model error and numerical scheme were compensating for each other. It is noted that the results in Figs. 8 and 9 are slightly different because Fig. 8 is the result for point A while Fig. 9 is the result from three sources of uncertainty.
Fig. 8
Fig. 9
After the implementation of AUSM+up, the model error and uncertainty in the QoI were quantified. Figure 9(b) shows a comparison between the calculated and measured QoIs with uncertainties. The uncertainties are accumulated from the uncertainty sources in Table 1 by propagating them through the simulation. Figure 10 shows the model error in DFP with uncertainty. The contributions from different sources of uncertainty were indicated with bands of different colors. The simulation sampling uncertainty (red) represents the uncertainty in obtaining the mean front position from the repeated simulation runs. The measurement uncertainty (orange) is the uncertainty in the measuring process of the front positions. Both uncertainties are too small to be shown in the figure [14]. The largest contribution is from the uncertainty due to the inconsistency between simulation and experiment due to the gap. The second-largest uncertainty is the measurement processing uncertainty in the initial volume fraction based on X-ray images. The third-largest uncertainty is due to the limited number of experiments and high variability between them. Figure 10 shows that the error of the simulation has a wide distribution because of the large uncertainty. For example, the range of model error at t = 700 μs is [−2,8] mm, which represents uncertainty. Therefore, it is inconclusive if the model error is small or large. In Secs. 4.2 and 4.3, it will be discussed how to reduce the uncertainties so that the model error can be revealed clearly.
Fig. 10
### 4.2 Uncertainty in the Gap Effect.
Based on the ordering of uncertainty in Sec. 4.1, it was concluded that the gap effect is the largest source of uncertainty in the one-dimensional simulation. It is emphasized here that the uncertainty due to the gap was estimated based on experts' opinions. To assess the model adequacy properly, it would be necessary to reduce the uncertainty associated with the gap effect. Two options are possible for UR: (a) use a two-dimensional simulation that can model the gap, or (b) conduct a new shock tube experiment without the gap [16]. The former can reduce the uncertainty by including the gap in the simulation, while the latter can reduce the uncertainty by removing the gap from the experiment. Both options were explored in this paper to quantify/reduce the uncertainty due to the gap effect. The two-dimensional simulation was performed where the x-axis is in the flow direction and the y-axis is in the depth direction of the test section. DeMauro et al. [16] performed an additional shock tube test by extending the particle curtain to the wall such that no gap exists between the particle curtain and the wall of the test section.
Figure 11 shows the comparisons and the corresponding error estimates with uncertainty for the two options. Figure 11(a) shows 95% confidence intervals of QoIs when the gap effect is included in the two-dimensional simulation. Both experiment and simulation used the particle curtain covering about 80% of that of the test section. Based on these results, Fig. 11(b) shows the distribution of the model error estimate. At 700 μs, the range of model error was [13,17] mm. Due to UR, the uncertainty in model error is reduced by 2.5 times, while the median of model error is increased almost five times compared to that of Fig. 9(b). The reason for having a large error in the 2D simulation results in Fig. 11(b) is due to an incorrect volume fraction model (top-hat distribution) and the numerical scheme AUSM+ (AUSM+up was not available for the 2D simulation). Figure 11(c) shows 95% confidence intervals of QoIs when the gap effect is removed. That is, the particle curtain fills the depth of the test section during the experiment, which is then compared with a one-dimensional simulation. Figure 11(d) shows the corresponding distribution of the model error estimate. At 700 μs, the range of model error is [4,8] mm; that is, the uncertainty is reduced by 2.5 times and the median is increased by two times. Both options significantly reduce uncertainty, while revealing model error.
Fig. 11
An interesting observation from the new experiment is that the influence of the gap was minimal contrary to the comments from experts and the simulation study. Figure 12 shows a comparison between the mean front positions of the experiments with and without the gap. It clearly shows that the influence of the gap is ignorable. This study provided a lesson that some epistemic uncertainties are unintentionally exaggerated. Without quantifying them, they can increase the prediction uncertainty. This corresponds to the case when an erroneous estimate of uncertainty can mislead the UQ process. The follow-up experiments without the gap showed that the gap effect was ignorable.
Fig. 12
### 4.3 Reducing the Uncertainty in the Initial Volume Fraction.
Since the largest uncertainty source, the gap effect, was removed, the second-largest uncertainty source, the initial volume fraction, became the next target for uncertainty reduction. The particle volume fraction can be estimated using X-ray radiography, where the intensity of the image is attenuated when the X-ray beam passes through particles. DeMauro et al. [16] used the Beer–Lambert law to estimate the volume fraction from intensity measurements.
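The inversion step is essentially the Beer–Lambert law solved for the volume fraction. The sketch below uses toy constants (all values hypothetical), rather than the calibrated coefficients of Ref. [16].

```python
# Minimal sketch: volume fraction from X-ray attenuation, I = I0 * exp(-mu * phi * L).
import numpy as np

I0, I = 1.0, 0.66      # incident and transmitted intensities (hypothetical)
mu = 1.0               # effective attenuation coefficient, 1/mm (hypothetical)
L = 2.0                # beam path length through the curtain, mm

phi = -np.log(I / I0) / (mu * L)
print(phi)             # ~0.21, i.e., about 21% volume fraction on this beam path
```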
In all simulations so far, a constant volume fraction of 21% was used through the curtain thickness, which was the maximum volume fraction from the X-ray image processing [25]. However, X-ray images showed that the particle volume fraction is not uniform through the curtain thickness. Rather, they showed a bell-shaped density distribution, which was also observed by Wagner et al. [25]. Therefore, the uniformity assumption with the maximum volume fraction uses many more particles than the experiment. Such a conservative estimate of the volume fraction makes the number of particles in the experiment different from that in the simulation. To make the simulation condition consistent with the experiment, it would be necessary to use a variable volume fraction through the curtain thickness. A rigorous UQ study was carried out to reduce the inconsistency in this paper.
An important issue is that the particle volume fraction cannot be measured directly, but it is estimated based on image intensity attenuation. The particle volume fraction was measured from X-ray images through a calibration and fitting process [4], which introduces another source of uncertainty, called measurement processing uncertainty. The uncertainty in the process was propagated to the uncertainty in the measured volume fraction. Finally, a bell-shaped initial particle volume fraction shown in Fig. 13 was identified along with measurement processing uncertainty. This is a significant uncertainty reduction from the constant volume fraction of 21% in the previous simulation.
Fig. 13
After the error and uncertainty in the initial volume fraction were reduced with the bell-shaped profile, Fig. 14 shows the comparison between experiment and simulation, where the uncertainty in the error estimate has been reduced compared to Fig. 11(d). After this uncertainty reduction, the error estimate at 700 μs becomes [4,6] mm. Compared to the initial error estimate of [−2, 8] mm, the reduced error estimate provides much more accurate information about the prediction error in the simulation. Now, the uncertainty in model error is less than the mean of model error. Therefore, uncertainty reduction revealed the model error.
Fig. 14
After UR for the initial volume fraction, the largest remaining uncertainty is the sampling uncertainty, which can only be reduced by increasing the number of experiments. Since all experiments were already finished, it was impractical to have more experiments. In addition, the uncertainty in the prediction error in Fig. 14(b) is small enough to reveal the model error. Therefore, it is determined to stop the UR process.
## 5 Concluding Remarks and Lessons Learned
In this paper, the importance of UR is emphasized as a tool to expose the model error in the validation process of the multiphase shock tube simulation. The model error is separated from epistemic uncertainty such that the UQ process yields the error estimate with uncertainty distribution. Initially, the error estimate was not informative due to the large uncertainty in it. Therefore, a series of uncertainty reductions has been conducted until the uncertainty in the model error becomes much smaller than the error itself. The possible sources of error and uncertainty were (a) inconsistency between simulation and experiment, (b) lack of knowledge in physics, and (c) inaccurate information in simulation inputs. It has been shown that removing error and uncertainty does not always improve the prediction accuracy due to the canceling effect between different errors. Initially, the error estimate for the DFP at t = 700 μs was [−2, 8] mm while the measured mean DFP was 65 mm. After rigorous uncertainty reduction, it was [4,6] mm, which revealed the model error.
The next step will be to improve the particle force model of the simulation so that the discrepancy between the simulation and experiment can be reduced. However, the particle force model is composed of five subforce components: quasi-steady force model, pressure gradient force model, added mass force model, inviscid viscous force model, and particle collision model. The improvement of the individual models can be planned based on their importance to achieving the desired prediction accuracy. A systematic approach is considered based on the idea of global sensitivity analysis [26].
The lesson learned through UR in this paper is illustrated in Fig. 15. In this paper, success is defined when the experimental results do not contradict the simulations because the uncertainty is larger than the difference between the two means. Failure is defined when the experimental results are clearly different from the simulations because the uncertainty is smaller than the difference between the two means. Figure 15(a) illustrates a scenario where uncertainties in both simulation and experiment are so large that validation is not useful. This was the case when the initial shock tube simulation was finished. Uncertainty is considered too large when the trend of prediction/measurement cannot be determined when parameters change. For both simulation and experiment, the dashed line represents the mean prediction or measurement, while the range represents uncertainty associated with predictions or measurements. Even if the two distributions are almost overlapped, it cannot confirm that the model has a prediction capability. This case is called “empty success” since it does not yield a meaningful conclusion. Once the uncertainties in both simulation and experiment are reduced enough, it is possible that the validation may reveal the model error as shown in Fig. 15(b). This was the case when the shock tube simulation and experiment went through the rigorous UR process. Uncertainty needs to be reduced until the trend of function can be clearly shown when parameters change. Even if the validation fails, it provides useful information for improving the model. Therefore, this case is referred to as “useful failure”. The definition of validation in ASME standards and AIAA guides includes the model improvement process, which is only possible after rigorous UR. When multiple models are involved in simulation, it is possible that errors may cancel each other. When the error from one model (e1) is compensated with that of other models (e2), the final prediction looks accurate. However, improving the second model may increase the magnitude of the prediction error. We call it “deceptive success,” as shown in Fig. 15(c). This happened before an additional experiment was conducted with the gap. It is necessary to conduct a series of uncertainty reduction to have meaningful validation.
Fig. 15
## Acknowledgment
This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.
## Funding Data
• National Nuclear Security Administration (Grant No. DE-NA0002378; Funder ID: 10.13039/100006168).
## References
1. ASME, 2009, "Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer," ASME, New York, Standard No. VV20.
2. AIAA, 2002, "Guide for the Verification and Validation of Computational Fluid Dynamics Simulations," AIAA Paper No. G-077-1998. 10.1115/G-077-1998
3. Sankararaman, S., and Mahadevan, S., 2015, "Integration of Model Verification, Validation, and Calibration for Uncertainty Quantification in Engineering Systems," Reliab. Eng. Syst. Saf., 138, pp. 194–209. 10.1016/j.ress.2015.01.023
4. Park, C., Matthew, J., Kim, N. H., and Haftka, R. T., March 2019, "Epistemic Uncertainty Stemming From Measurement Processing—A Case Study of Multiphase Shock Tube Experiments," ASME J. Verif. Valid. Uncertainty Quantif., 3(4), p. 041001. 10.1115/1.4042814
5. Kim, H.-S., Jang, S.-G., Kim, N. H., and Choi, J.-H., 2016, "Statistical Calibration and Validation of Elasto-Plastic Insertion Analysis in Pyrotechnically Actuated Devices," Struct. Multidiscip. Optim., 54(6), pp. 1573–1585. 10.1007/s00158-016-1545-8
6. Thacker, B. H., and Paez, T. L., 2014, "A Simple Probabilistic Validation Metric for the Comparison of Uncertain Model and Test Results," AIAA Paper No. 2014-0121. 10.2514/6.2014-0121
7. Sankararaman, S., Ling, Y., and Mahadevan, S., 2011, "Uncertainty Quantification and Model Validation of Fatigue Crack Growth Prediction," Eng. Fract. Mech., 78(7), pp. 1487–1504. 10.1016/j.engfracmech.2011.02.017
8. Bae, S., Kim, N. H., and Jang, S.-G., 2018, "Reliability-Based Design Optimization Under Sampling Uncertainty: Shifting Design Versus Shaping Uncertainty," Struct. Multidiscip. Optim., 57(5), pp. 1845–1855. 10.1007/s00158-018-1936-0
9. Xi, Z., Fu, Y., and Yang, R.-J., 2013, "Model Bias Characterization in the Design Space Under Uncertainty," Int. J. Perform. Eng., 9(4), pp. 433–444. 10.23940/ijpe.13.4.p433.mag
10. Romero, V. J., 2007, "Validated Model? Not So Fast—The Need for Model 'Conditioning' as an Essential Addendum to Model Validation," AIAA Paper No. 2007-1953. 10.2514/6.2007-1953
11. Park, C., Fernández-Godino, M. G., Kim, N. H., and Haftka, R. T., 2016, "Validation, Uncertainty Quantification and Uncertainty Reduction for a Shock Tube Simulation," AIAA Paper No. 2016-1192. 10.2514/6.2016-1192
12. Zhang, F., Frost, D. L., Thibault, P. A., and Murray, S. B., 2001, "Explosive Dispersal of Solid Particles," Shock Waves, 10(6), pp. 431–443. 10.1007/PL00004050
13. Hughes, K. T., Balachandar, S., Diggs, A., Haftka, R. T., Kim, N. H., and Littrell, D., 2020, "Simulation-Driven Design of Experiments Examining the Large-Scale, Explosive Dispersal of Particles," Shock Waves, 30(4), pp. 325–347. 10.1007/s00193-019-00927-x
14. Wagner, J. L., Beresh, S. J., Kearney, S. P., Trott, W. M., Castaneda, J. N., Pruett, B. O., and Baer, M. R., 2012, "A Multiphase Shock Tube for Shock Wave Interactions With Dense Particle Fields," Exp. Fluids, 52(6), pp. 1507–1517. 10.1007/s00348-012-1272-x
15. Linne, M., 2013, "Imaging in the Optically Dense Regions of a Spray: A Review of Developing Techniques," Prog. Energy Combust. Sci., 39(5), pp. 403–440. 10.1016/j.pecs.2013.06.001
16. DeMauro, E. P., Wagner, J. L., DeChant, L. J., Beresh, S., Farias, J., Turpin, P., Sealy, A., Albert, W. S., and Sanderson, P., 2017, "Measurements of the Initial Transient of a Dense Particle Curtain Following Shock Wave Impingement," AIAA Paper No. 2017-1466. 10.2514/6.2017-1466
17. Coleman, H. W., and Steele, W. G., 2009, Experimentation, Validation, and Uncertainty Analysis for Engineers, 3rd ed., Wiley, Hoboken, NJ.
18. Wagner, J. L., Beresh, S. J., Kearney, S. P., Pruett, B. O., and Wright, E. K., 2012, "Shock Tube Investigation of Quasi-Steady Drag in Shock-Particle Interactions," Phys. Fluids, 24(12), p. 123301. 10.1063/1.4768816
19. Liou, M. S., Chang, C. H., Nguyen, L., and Theofanous, T. G., 2008, "How to Solve Compressible Multifluid Equations: A Simple, Robust, and Accurate Method," AIAA J., 46(9), pp. 2345–2356. 10.2514/1.34793
20. Nili, S., Park, C., Haftka, R. T., Kim, N. H., and Balachandar, S., 2017, "Sensitivity Analysis of Unsteady Force Models for Two-Way Coupled Dispersed Multiphase Flow," AIAA Paper No. 2017-3800. 10.2514/6.2017-3800
21. Hughes, K. T., Balachandar, S., Kim, N. H., Park, C., Haftka, R. T., Diggs, A., Littrell, D., and Darr, J., 2018, "Forensic Uncertainty Quantification for Experiments on the Explosively Driven Motion of Particles," ASME J. Verif. Valid. Uncertainty Quantif., 3(4), p. 041004. 10.1115/1.4043478
22. Nili, S., 2019, Error Analysis of Particle Force Model of an Euler-Lagrange Multiphase Dispersed Flow, University of Florida, Gainesville, FL.
23. Fernandez-Godino, M. G., Diggs, A., Park, C., Kim, N. H., and Haftka, R. T., 2016, "Anomaly Detection Via Groups of Simulations," AIAA Paper No. 2016-1195. 10.2514/6.2016-1195
24. Liou, M. S., 1996, "A Sequel to AUSM: AUSM+," J. Comput. Phys., 129(2), pp. 364–382. 10.1006/jcph.1996.0256
25. Wagner, J. L., Kearney, S. P., Beresh, S. J., Demauro, E. P., and Pruett, B. O., 2015, "Flash X-Ray Measurements on the Shock-Induced Dispersal of a Dense Particle Curtain," Exp. Fluids, 56(12), p. 213. 10.1007/s00348-015-2087-3
26. Nili, S., Park, C., Kim, N. H., Haftka, R. T., and Balachandar, S., 2021, "Prioritizing Possible Force Models Error in Multiphase Flow Using Global Sensitivity Analysis," AIAA J., 59(5), pp. 1749–1759. 10.2514/1.J058657
http://mathhelpforum.com/calculus/149753-show-how-two-series-same-sum.html | # Thread: Show how two series as the same Sum
1. ## Show how two series have the same sum
Show that the sum of the first 2n terms of the series 1 - 1/2 + 1/2 - 1/3 + 1/3 - 1/4 + 1/4 - 1/5 + 1/5 ... is the same as the sum of the first n terms of the series 1/(1*2) + 1/(2*3) + 1/(3*4) + 1/(4*5) ... Do these series converge? What is the sum of the first 2n+1 terms of the first series, and how do you get that sum?
2. (1) Hint: 1+(-1/2+1/2)+(-1/3+1/3)+(-1/4+1/4)+(-1/5+1/5)+...+(-1/(2n) + 1/(2n))
(2) Hint: 1/(1*2) + 1/(2*3) + 1/(3*4) +1/(4*5) + ... + 1/(n*(n+1))
1/(n*(n+1)) = 1/n - 1/(n+1)
3. The first one obviously converges to 1: $1+(\frac{-1}{2}+\frac{1}{2})+(\frac{-1}{3}+\frac{1}{3})+...$ Since $(\frac{-1}{2}+\frac{1}{2})=0$, the paired terms eliminate each other.
The second is $\sum_{k=1}^{n}\frac{1}{k(k+1)}=\sum_{k=1}^{n}\left(\frac{1}{k}-\frac{1}{k+1}\right)$ $=(\frac{1}{1}-\frac{1}{2})+(\frac{1}{2}-\frac{1}{3})+...=1+(\frac{-1}{2}+\frac{1}{2})+(\frac{-1}{3}+\frac{1}{3})+...=1$
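For completeness (not part of the original thread), both partial sums telescope to the same closed form, which also answers the 2n+1 question:

$$S_{2n} \;=\; 1 \;+\; \sum_{k=2}^{n}\left(-\tfrac{1}{k}+\tfrac{1}{k}\right)\;-\;\tfrac{1}{n+1} \;=\; 1-\tfrac{1}{n+1} \;=\; \sum_{k=1}^{n}\tfrac{1}{k(k+1)},$$

and the (2n+1)-th term of the first series is $+\tfrac{1}{n+1}$, so $S_{2n+1} = 1$ for every $n$; both series converge to 1.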
4. thank u guys, i was mistaken — i read the 2n as 2×Sn = 2×(1 + 1/2 - 1/2 ......
5. Oh, i missed the 2n, but the argument would be very similar; use the hints above
https://datascience.stackexchange.com/questions/29858/generalization-of-correlation-coefficient | # Generalization of Correlation Coefficient
The correlation coefficient tells me how two variables (sequences of numbers) are correlated with each other. Does it generalize to non-linear scenarios? How could one more generally measure the general predictive power of x over y when the relationship between x and y is not linear?
• Through information-theoretic measures like the mutual information; the reduction of entropy in one variable through knowing the other. This too can be generalized through the Renyi entropy. Welcome to the site. – Emre Apr 3 '18 at 16:39
• That's a pretty broad question. Please edit the question to clarify what is given. Are x,y random variables? Are we given the probability distribution of x,y? Are we given a finite dataset with some values for x,y? Do you know in advance what the relationship looks like, or could it be anything? – D.W. Apr 4 '18 at 5:19
I assume that when you speak of the correlation coefficient, you have the Pearson linear correlation in mind. Indeed, there are other options. Two very popular ones are the rank correlations respectively called Spearman's $\rho$ and Kendall's $\tau$.
To give you an idea of what they are, consider $n$ observations from a $d$-dimensional random vector $X = (X_1,\dots,X_d)$. Also let $X_{ij}$ be the $i$th observation for variable $j$. These measures are called rank correlations because they can be computed using the ranks only. What I mean is that if you sort all $X_{ij}$, $i=1,\dots,n$, and replace the biggest observation by $n$, the second biggest by $n-1$, and so on (do that for all columns $j$) and call you new observations $R_{ij}$, then
1. the empirical Spearman's $\rho$ (matrix) is simply the Pearson linear correlation (matrix) of $(R_1,\dots,R_d)$; and
2. the empirical (pairwise) Kendall's $\tau$ between $X_{i_1}$ and $X_{i_2}$ is the probability of concordance minus the probability of discordance between two iid observations, say $(X_{1 i_1},X_{1 i_2})$ and $(X_{2 i_1},X_{2 i_2})$, which can equivalently be computed from the ranks $(R_{1 i_1},R_{1 i_2})$ and $(R_{2 i_1},R_{2 i_2})$ instead.
A rank correlation between $X_{i_1}$ and $X_{i_2}$ of one indeed means perfect concordance (i.e., $X_{i_1}$ always increases with $X_{i_2}$), but that does not necessarily mean they are linearly related. The ranks are linearly related.
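Not part of the original answer, but a quick sketch of both definitions (scipy's built-in functions are used as a cross-check):

```python
# Minimal sketch: Spearman's rho as the Pearson correlation of ranks, plus Kendall's tau.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.exp(x) + 0.1 * rng.normal(size=200)   # monotone but nonlinear in x

rx, ry = stats.rankdata(x), stats.rankdata(y)    # the ranks R_{ij} described above
rho_manual = np.corrcoef(rx, ry)[0, 1]           # Pearson correlation of the ranks
print(rho_manual, stats.spearmanr(x, y)[0])      # the two should agree
print(stats.kendalltau(x, y)[0])                 # P(concordant) - P(discordant)
```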
Just to make the concept of concordance clearer: in one scatterplot the (bivariate) observations may be all concordant, in another all discordant. So when you consider a cloud of points, you have some pairs that are concordant and others that are discordant.
Note that this answer provides examples, but there are many other ways to approach the question. As commented by Emre, information-theoretic measures are also an option. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8484633564949036, "perplexity": 434.53705260858806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107919459.92/warc/CC-MAIN-20201031151830-20201031181830-00243.warc.gz"} |
https://www.physicsforums.com/threads/hamiltonian-noethers-theorem-in-classical-mechanics.762187/ | # Hamiltonian Noether's theorem in classical mechanics
1. Jul 18, 2014
### bolbteppa
How does one think about, and apply, Noether's theorem in the classical mechanical Hamiltonian formalism?
From the Lagrangian perspective, Noether's theorem (in 1-D) states that the quantity
$$\sum_{i=1}^n \frac{\partial \mathcal{L}}{\partial ( \frac{d y_i}{dx})} \frac{\partial y_i^*}{\partial \varepsilon} - \left[\sum_{j=1}^n \frac{\partial \mathcal{L}}{\partial ( \frac{d y_j}{dx})} \frac{d y_j}{d x} - \mathcal{L}\right]\frac{\partial x^*}{\partial \varepsilon}$$
is conserved if the Lagrangian $\mathcal{L}(x,y_i,y_i')$ is invariant under a continuous one-parameter group of infinitesimal transformations of the form
$$T(x,y_i,\varepsilon) = (x^*,y_i^*) = (x^*(x,y_i,\varepsilon),y_i^*(x,y_i,\varepsilon)).$$
From the action perspective, Noether's theorem states the equality of the 1-forms:
$$\mathcal{L}(x,y_i,y_i')dx = \sum_{j=1}^n p_j \, d y_j - \mathcal{H}\,dx = \mathcal{L}(x^*,y_i^*,y_i'^*)dx^* = \sum_{i=1}^n p_i \, d y_i^* - \mathcal{H}\,dx^*$$
which can be used to determine (additive) symmetries nicely.
How do I use this formalism to understand the Hamiltonian Noether theorem in a general context? I'll usually see a claim that $dA/dt = [H,A]$ is the Hamiltonian Noether's theorem, and I can't make sense of this in the context of my description of Noether above. This appears to derive the Poisson brackets as part of Noether from what I've developed above, but I can't make much sense of it, to be honest. I'm sure the answer is supposed to link the local Lie algebra tangent vector structure to the global Lie group transformation in the Lagrangian, but saying that in words is one thing; in math it's another. Thanks.
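(Not part of the original post, but for context: the bracket statement quoted above is just the chain rule plus Hamilton's equations, with the sign convention $\{A,H\}$:)

$$\frac{dA}{dt} \;=\; \frac{\partial A}{\partial t} + \sum_{i}\left(\frac{\partial A}{\partial q_i}\,\dot q_i + \frac{\partial A}{\partial p_i}\,\dot p_i\right) \;=\; \frac{\partial A}{\partial t} + \sum_{i}\left(\frac{\partial A}{\partial q_i}\frac{\partial H}{\partial p_i} - \frac{\partial A}{\partial p_i}\frac{\partial H}{\partial q_i}\right) \;=\; \frac{\partial A}{\partial t} + \{A,H\},$$

so $A$ is conserved exactly when it has no explicit time dependence and $\{A,H\}=0$; the conserved quantity of the Lagrangian statement plays the role of $A$ here.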
https://www.physicsforums.com/threads/change-of-basis-help.657817/ | # Change of basis help?
1. Dec 9, 2012
### bonfire09
1. The problem statement, all variables and given/known data
Problem is assuming the mapping T: P2 → P2 defined by T(a0 + a1t + a2t^2) = 3a0 + (5a0 - 2a1)t + (4a1 + a2)t^2 is linear. Find the matrix representation of T relative to the basis B = {1, t, t^2}.
The part that I'm confused on is when I go to plug in the basis values T(1), T(t), and T(t^2). I don't know how to do it.
2. Relevant equations
3. The attempt at a solution
So to find T(1) it's just T(1 + 0t + 0t^2) = 3a0 + 5a0t
To find T(t) it's just T(0 + a1t + 0t^2) = 3(0) + (5(0) - 2a1)t + (4a1 + 0)t^2 = -2a1t + 4a1t^2
T(t^2) = T(0 + 0t + a2t^2) = 3(0) + (5(0) - 2(0))t + (4(0) + a2)t^2 = a2t^2
Usually in lots of books they omit steps like these and I'm trying to figure them out. Is this a correct way?
2. Dec 9, 2012
### pasmith
This is the right idea, but to get $T(1)$ you take $a_0 = 1$, $a_1 = 0$, and $a_2 = 0$ so that $T(1) = 3 + 5t$. Similarly for the other two basis vectors.
3. Dec 9, 2012
### bonfire09
Oh ok. So for T(t) just let a0=0, a1=1 and a2=0 and for T(t^2) just let a0=0,a1=0 and a2=1?
That looks like the standard basis {e1,e2,e3}
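(A quick sketch, not from the thread: applying T to each basis vector and reading off columns, e.g., with sympy.)

```python
# Minimal sketch: matrix of T relative to B = {1, t, t^2}, column by column.
import sympy as sp

t = sp.symbols('t')

def T(p):
    a0, a1, a2 = [sp.expand(p).coeff(t, k) for k in range(3)]
    return 3*a0 + (5*a0 - 2*a1)*t + (4*a1 + a2)*t**2

basis = [sp.Integer(1), t, t**2]
cols = [[sp.expand(T(b)).coeff(t, k) for k in range(3)] for b in basis]
M = sp.Matrix(cols).T        # columns are the coordinate vectors of T(1), T(t), T(t^2)
sp.pprint(M)                 # [[3, 0, 0], [5, -2, 0], [0, 4, 1]]
```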
http://clay6.com/qa/46302/if-a-and-b-are-two-sets-such-that-a-subset-b-then-what-is-a-cup-b- | Browse Questions
Home >> AIMS >> Class11 >> Math >> Sets
# If A and B are two sets such that A $\subset$ B, then what is A $\cup$ B?
A $\cup$ B = B, since A $\subset$ B means every element of A already belongs to B.
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Spectroscopy/Magnetic_Resonance_Spectroscopies/Nuclear_Magnetic_Resonance/NMR%3A_Structural_Assignment/Integration_in_Proton_NMR | # Integration in Proton NMR
There is additional information obtained from 1H NMR spectroscopy that is not typically available from 13C NMR spectroscopy. Chemical shift can show how many different types of hydrogens are found in a molecule; integration reveals the number of hydrogens of each type. An integrator trace (or integration trace) can be used to find the ratio of the numbers of hydrogen atoms in different environments in an organic compound.
An integrator trace is a computer generated line which is superimposed on a proton NMR spectra. In the diagram, the integrator trace is shown in red.
An integrator trace measures the relative areas under the various peaks in the spectrum. When the integrator trace crosses a peak or group of peaks, it gains height. The height gained is proportional to the area under the peak or group of peaks. You measure the height gained at each peak or group of peaks by measuring the distances shown in green in the diagram above - and then find their ratio.
For example, if the heights were 0.7 cm, 1.4 cm and 2.1 cm, the ratio of the peak areas would be 1:2:3. That in turn shows that the ratio of the hydrogen atoms in the three different environments is 1:2:3.
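(A small sketch, not from the original page: normalizing measured integral heights to a lowest whole-number ratio.)

```python
# Minimal sketch: reduce integrator-trace heights (cm) to a lowest whole-number ratio.
from math import gcd
from functools import reduce

heights_cm = [0.7, 1.4, 2.1]                      # rises measured at each peak group
counts = [round(h / min(heights_cm)) for h in heights_cm]
ratio = [c // reduce(gcd, counts) for c in counts]
print(ratio)   # [1, 2, 3] -> hydrogens in the three environments in ratio 1:2:3
```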
Figure NMR16. 1H NMR spectrum of ethanol with solid integral line. Source: Spectrum taken in CDCl3 on a Varian Gemini 2000 Spectrometer with 300 MHz Oxford magnet.
Looking at the spectrum of ethanol, you can see that there are three different kinds of hydrogens in the molecule. You can also see by integration that there are three hydrogens of one type, two of the second type, and one of the third type -- corresponding to the CH3 or methyl group, the CH2 or methylene group and the OH or hydroxyl group. That information helps narrow down the number of possible structures of the sample, and so it makes structure elucidation of an unknown sample much easier.
• Integration reveals the ratio of one type of hydrogen to another within a molecule.
Integral data can be given in different forms. You should be aware of all of them. In raw form, an integral is a horizontal line running across the spectrum from left to right. Where the line crosses the frequency of a peak, the area of the peak is measured. This measurement is shown as a jump or step upward in the integral line; the vertical distance that the line rises is proportional to the area of the peak. The area is related to the amount of radio waves absorbed at that frequency, and the amount of radio waves absorbed is proportional to the number of hydrogen atoms absorbing the radio waves.
Sometimes, the integral line is cut into separate integrals for each peak so that they can be compared to each other more easily.
Figure NMR17. 1H NMR spectrum of ethanol with broken integral line. Source: Spectrum taken in CDCl3 on a Varian Gemini 2000 Spectrometer with 300 MHz Oxford magnet.
Often, instead of displaying raw data, the integrals are measured and their heights are displayed on the spectrum.
Figure NMR18. 1H NMR spectrum of ethanol with numerical integrals.
Source: Spectrum taken in CDCl3 on a Varian Gemini 2000 Spectrometer with 300 MHz Oxford magnet.
Sometimes the heights are "normalized": they are divided through by a common factor so that their ratios are easier to compare. These numbers could correspond to the actual numbers of hydrogens, or only to their ratio. Two peaks in a ratio of 1H:2H could correspond to one and two hydrogens, or they could correspond to two and four hydrogens, etc.
Figure NMR19. 1H NMR spectrum of ethanol with normalized integral numbers.
Source: Spectrum taken in CDCl3 on a Varian Gemini 2000 Spectrometer with 300 MHz Oxford magnet.
## Problem NMR.6.
Sketch a predicted NMR spectrum for each of the following compounds, with an integral line over each peak.
## Problem NMR.7.
Measure the integrals in the following compounds. Given the integral ratios and chemical shifts, can you match each peak to a set of protons?
Source: http://rommy-najoan.blogspot.com/2012/10/materi-kelas-xii-ipa.html

# Integral
A definite integral of a function can be represented as the signed area of the region bounded by its graph.
Integration is an important concept in mathematics and, together with its inverse, differentiation, is one of the two main operations in calculus. Given a function f of a real variable x and an interval [a, b] of the real line, the definite integral
$\int_a^b \! f(x)\,dx \,$
is defined informally to be the area of the region in the xy-plane bounded by the graph of f, the x-axis, and the vertical lines x = a and x = b, such that area above the x-axis adds to the total, and that below the x-axis subtracts from the total.
The term integral may also refer to the notion of the antiderivative, a function F whose derivative is the given function f. In this case, it is called an indefinite integral and is written:
$F = \int f(x)\,dx.$
The principles of integration were formulated independently by Isaac Newton and Gottfried Leibniz in the late 17th century. Through the fundamental theorem of calculus, which they independently developed, integration is connected with differentiation: if f is a continuous real-valued function defined on a closed interval [a, b], then, once an antiderivative F of f is known, the definite integral of f over that interval is given by
$\int_a^b \! f(x)\,dx = F(b) - F(a)\,$
Integrals and derivatives became the basic tools of calculus, with numerous applications in science and engineering. The founders of the calculus thought of the integral as an infinite sum of rectangles of infinitesimal width. A rigorous mathematical definition of the integral was given by Bernhard Riemann. It is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. Beginning in the nineteenth century, more sophisticated notions of integrals began to appear, where the type of the function as well as the domain over which the integration is performed has been generalised. A line integral is defined for functions of two or three variables, and the interval of integration [a, b] is replaced by a certain curve connecting two points on the plane or in the space. In a surface integral, the curve is replaced by a piece of a surface in the three-dimensional space. Integrals of differential forms play a fundamental role in modern differential geometry. These generalizations of integrals first arose from the needs of physics, and they play an important role in the formulation of many physical laws, notably those of electrodynamics. There are many modern concepts of integration, among these, the most common is based on the abstract mathematical theory known as Lebesgue integration, developed by Henri Lebesgue.
## History
### Pre-calculus integration
The first documented systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus (ca. 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of shapes for which the area or volume was known. This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. Similar methods were independently developed in China around the 3rd century AD by Liu Hui, who used it to find the area of the circle. This method was later used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere (Shea 2007; Katz 2004, pp. 125–126).
The next significant advances in integral calculus did not begin to appear until the 16th century. At this time the work of Cavalieri with his method of indivisibles, and work by Fermat, began to lay the foundations of modern calculus, with Cavalieri computing the integrals of $x^n$ up to degree $n = 9$ in Cavalieri's quadrature formula. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers.
### Newton and Leibniz
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions within continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz.
### Formalizing integrals
While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigour. Bishop Berkeley memorably attacked the vanishing increments used by Newton, calling them "ghosts of departed quantities". Calculus acquired a firmer footing with the development of limits. Integration was first rigorously formalized, using limits, by Riemann. Although all bounded piecewise continuous functions are Riemann integrable on a bounded interval, subsequently more general functions were considered – particularly in the context of Fourier analysis – to which Riemann's definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory (a subfield of real analysis). Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed. These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the standard part of an infinite Riemann sum, based on the hyperreal number system.
### Historical notation
Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with $\dot{x}$ or $x'\,\!$, which Newton used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted.
The modern notation for the indefinite integral was introduced by Gottfried Leibniz in 1675 (Burton 1988, p. 359; Leibniz 1899, p. 154). He adapted the integral symbol, ∫, from the letter ſ (long s), standing for summa (written as ſumma; Latin for "sum" or "total"). The modern notation for the definite integral, with limits above and below the integral sign, was first used by Joseph Fourier in Mémoires of the French Academy around 1819–20, reprinted in his book of 1822 (Cajori 1929, pp. 249–250; Fourier 1822, §231).
## Terminology and notation
The simplest case, the integral over x of a real-valued function f(x), is written as
$\int f(x)\,dx .$
The integral sign ∫ represents integration. The dx indicates that we are integrating over x; dx is called the variable of integration. In correct mathematical typography, the dx is separated from the integrand by a space (as shown). Some authors use an upright d (that is, dx instead of dx). Inside the ∫...dx is the expression to be integrated, called the integrand. In this case the integrand is the function f(x). Because there is no domain specified, the integral is called an indefinite integral.
When integrating over a specified domain, we speak of a definite integral. Integrating over a domain D is written as
$\int_D f(x)\,dx ,$ or $\int_a^b f(x)\,dx$ if the domain is an interval [a, b] of x;
The domain D or the interval [a, b] is called the domain of integration.
If a function has an integral, it is said to be integrable. In general, the integrand may be a function of more than one variable, and the domain of integration may be an area, volume, a higher dimensional region, or even an abstract space that does not have a geometric structure in any usual sense (such as a sample space in probability theory).
In the modern Arabic mathematical notation, which aims at pre-university levels of education in the Arab world and is written from right to left, a reflected integral symbol is used (W3C 2006).
The variable of integration dx has different interpretations depending on the theory being used. It can be seen as strictly a notation indicating that x is a dummy variable of integration; if the integral is seen as a Riemann sum, dx is a reflection of the weights or widths d of the intervals of x; in Lebesgue integration and its extensions, dx is a measure; in non-standard analysis, it is an infinitesimal; or it can be seen as an independent mathematical quantity, a differential form. More complicated cases may vary the notation slightly. In Leibniz's notation, dx is interpreted as an infinitesimal change in x, but his interpretation lacks rigour in the end. Nonetheless Leibniz's notation is the most common one today; and as few people are in need of full rigour, even his interpretation is still used in many settings.
## Introduction
Integrals appear in many practical situations. If a swimming pool is rectangular with a flat bottom, then from its length, width, and depth we can easily determine the volume of water it can contain (to fill it), the area of its surface (to cover it), and the length of its edge (to rope it). But if it is oval with a rounded bottom, all of these quantities call for integrals. Practical approximations may suffice for such trivial examples, but precision engineering (of any discipline) requires exact and rigorous values for these elements.
Approximations to integral of √x from 0 to 1, with 5 right samples (above) and 12 left samples (below)
To start off, consider the curve y = f(x) between x = 0 and x = 1 with f(x) = √x. We ask:
What is the area under the function f, in the interval from 0 to 1?
and call this (yet unknown) area the integral of f. The notation for this integral will be
$\int_0^1 \sqrt x \, dx \,\!.$
As a first approximation, look at the unit square given by the sides x = 0 to x = 1 and y = f(0) = 0 and y = f(1) = 1. Its area is exactly 1. As it is, the true value of the integral must be somewhat less. Decreasing the width of the approximation rectangles shall give a better result; so cross the interval in five steps, using the approximation points 0, 1/5, 2/5, and so on to 1. Fit a box for each step using the right end height of each curve piece, thus √(1⁄5), √(2⁄5), and so on to √1 = 1. Summing the areas of these rectangles, we get a better approximation for the sought integral, namely
$\textstyle \sqrt {\frac {1} {5}} \left ( \frac {1} {5} - 0 \right ) + \sqrt {\frac {2} {5}} \left ( \frac {2} {5} - \frac {1} {5} \right ) + \cdots + \sqrt {\frac {5} {5}} \left ( \frac {5} {5} - \frac {4} {5} \right ) \approx 0.7497.\,\!$
Notice that we are taking a sum of finitely many function values of f, multiplied with the differences of two subsequent approximation points. We can easily see that the approximation is still too large. Using more steps produces a closer approximation, but will never be exact: replacing the 5 subintervals by twelve as depicted, we will get an approximate value for the area of 0.6203, which is too small. The key idea is the transition from adding finitely many differences of approximation points multiplied by their respective function values to using infinitely many fine, or infinitesimal steps.
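Both approximations above are easy to reproduce. The following Python sketch (helper and variable names are my own) computes the right-endpoint sum with 5 steps and the left-endpoint sum with 12 steps:

```python
from math import sqrt

def riemann_sum(f, a, b, n, side="right"):
    """Uniform Riemann sum of f over [a, b] with n subintervals,
    sampling each piece at its left or right endpoint."""
    h = (b - a) / n
    offset = h if side == "right" else 0.0
    return sum(f(a + i * h + offset) for i in range(n)) * h

print(riemann_sum(sqrt, 0, 1, 5, "right"))   # ~0.7497  (overestimate)
print(riemann_sum(sqrt, 0, 1, 12, "left"))   # ~0.6203  (underestimate)
print(riemann_sum(sqrt, 0, 1, 10**6))        # ~0.6667 = 2/3
```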
As for the actual calculation of integrals, the fundamental theorem of calculus, due to Newton and Leibniz, is the fundamental link between the operations of differentiating and integrating. Applied to the square root curve, $f(x) = x^{1/2}$, it says to look at the antiderivative $F(x) = (2/3)x^{3/2}$, and simply take F(1) − F(0), where 0 and 1 are the boundaries of the interval [0,1]. So the exact value of the area under the curve is computed formally as
$\int_0^1 \sqrt x \,dx = \int_0^1 x^{1/2} \,dx = F(1)- F(0) = 2/3.$
(This is a case of a general rule, that for $f(x) = x^q$, with $q \ne -1$, the related function, the so-called antiderivative, is $F(x) = x^{q+1}/(q+1)$.)
The notation
$\int f(x) \, dx \,\!$
conceives the integral as a weighted sum, denoted by the elongated s, of function values, f(x), multiplied by infinitesimal step widths, the so-called differentials, denoted by dx. The multiplication sign is usually omitted.
Historically, after the failure of early efforts to rigorously interpret infinitesimals, Riemann formally defined integrals as a limit of weighted sums, so that the dx suggested the limit of a difference (namely, the interval width). Shortcomings of Riemann's dependence on intervals and continuity motivated newer definitions, especially the Lebesgue integral, which is founded on an ability to extend the idea of "measure" in much more flexible ways. Thus the notation
$\int_A f(x) \, d\mu \,\!$
refers to a weighted sum in which the function values are partitioned, with μ measuring the weight to be assigned to each value. Here A denotes the region of integration.
Differential geometry, with its "calculus on manifolds", gives the familiar notation yet another interpretation. Now f(x) and dx become a differential form, ω = f(x) dx, a new differential operator d, known as the exterior derivative is introduced, and the fundamental theorem becomes the more general Stokes' theorem,
$\int_{A} d\omega = \int_{\partial A} \omega , \,\!$
from which Green's theorem, the divergence theorem, and the fundamental theorem of calculus follow.
More recently, infinitesimals have reappeared with rigor, through modern innovations such as non-standard analysis. Not only do these methods vindicate the intuitions of the pioneers; they also lead to new mathematics.
Although there are differences between these conceptions of integral, there is considerable overlap. Thus, the area of the surface of the oval swimming pool can be handled as a geometric ellipse, a sum of infinitesimals, a Riemann integral, a Lebesgue integral, or as a manifold with a differential form. The calculated result will be the same for all.
## Formal definitions
There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but also occasionally for pedagogical reasons. The most commonly used definitions of integral are Riemann integrals and Lebesgue integrals.
### Riemann integral
Integral approached as Riemann sum based on tagged partition, with irregular sampling positions and widths (max in red). True value is 3.76; estimate is 3.648.
The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. Let [a,b] be a closed interval of the real line; then a tagged partition of [a,b] is a finite sequence
$a = x_0 \le t_1 \le x_1 \le t_2 \le x_2 \le \cdots \le x_{n-1} \le t_n \le x_n = b . \,\!$
Riemann sums converging as intervals halve, whether sampled at right, minimum, maximum, or left.
This partitions the interval $[a,b]$ into $n$ sub-intervals $[x_{i-1}, x_i]$ indexed by $i$, each of which is "tagged" with a distinguished point $t_i \in [x_{i-1}, x_i]$. A Riemann sum of a function $f$ with respect to such a tagged partition is defined as
$\sum_{i=1}^{n} f(t_i) \Delta_i ;$
thus each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the sub-interval width. Let $\Delta_i = x_i - x_{i-1}$ be the width of sub-interval $i$; then the mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, $\max_{i=1\ldots n} \Delta_i$. The Riemann integral of a function f over the interval [a,b] is equal to S if:
For all ε > 0 there exists δ > 0 such that, for any tagged partition [a,b] with mesh less than δ, we have
$\left| S - \sum_{i=1}^{n} f(t_i)\Delta_i \right| < \varepsilon.$
When the chosen tags give the maximum (respectively, minimum) value of each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the Riemann integral and the Darboux integral.
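To make the definition concrete, here is a small Python sketch (my own illustration) that evaluates Riemann sums for randomly tagged uniform partitions; as the mesh shrinks, the sums settle toward the integral:

```python
import random

def tagged_riemann_sum(f, partition, tags):
    """Riemann sum for a tagged partition a = x_0 <= ... <= x_n = b
    with tags t_i in [x_{i-1}, x_i]."""
    return sum(f(t) * (x1 - x0)
               for x0, x1, t in zip(partition, partition[1:], tags))

f = lambda x: x * x          # the integral over [0, 1] is 1/3
for n in (10, 100, 10_000):
    xs = [i / n for i in range(n + 1)]
    ts = [random.uniform(x0, x1) for x0, x1 in zip(xs, xs[1:])]
    print(n, tagged_riemann_sum(f, xs, ts))   # -> approaches 0.3333...
```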
### Lebesgue integral
Riemann–Darboux's integration (blue) and Lebesgue integration (red).
It is often of interest, both in theory and applications, to be able to pass to the limit under the integral. For instance, a sequence of functions can frequently be constructed that approximate, in a suitable sense, the solution to a problem. Then the integral of the solution function should be the limit of the integrals of the approximations. However, many functions that can be obtained as limits are not Riemann integrable, and so such limit theorems do not hold with the Riemann integral. Therefore it is of great importance to have a definition of the integral that allows a wider class of functions to be integrated (Rudin 1987).
Such an integral is the Lebesgue integral, that exploits the following fact to enlarge the class of integrable functions: if the values of a function are rearranged over the domain, the integral of a function should remain the same. Thus Henri Lebesgue introduced the integral bearing his name, explaining this integral thus in a letter to Paul Montel:
I have to pay a certain sum, which I have collected in my pocket. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. This is the Riemann integral. But I can proceed differently. After I have taken all the money out of my pocket I order the bills and coins according to identical values and then I pay the several heaps one after the other to the creditor. This is my integral.
Source: (Siegmund-Schultze 2008)
As Folland (1984, p. 56) puts it, "To compute the Riemann integral of f, one partitions the domain [a,b] into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of f". The definition of the Lebesgue integral thus begins with a measure, μ. In the simplest case, the Lebesgue measure μ(A) of an interval A = [a,b] is its width, b − a, so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals.
Using the "partitioning the range of f" philosophy, the integral of a non-negative function f : RR should be the sum over t of the areas between a thin horizontal strip between y = t and y = t + dt. This area is just μ{ x : f(x) > t} dt. Let f(t) = μ{ x : f(x) > t}. The Lebesgue integral of f is then defined by (Lieb & Loss 2001)
$\int f = \int_0^\infty f^*(t)\,dt$
where the integral on the right is an ordinary improper Riemann integral (note that f∗ is a monotonically decreasing positive function, and therefore has a well-defined improper Riemann integral). For a suitable class of functions (the measurable functions) this defines the Lebesgue integral.
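As a numerical illustration of this "layer cake" definition — a sketch of my own, with a grid-based estimate standing in for the Lebesgue measure — consider f(x) = √x on [0, 1], whose layer function is f∗(t) = 1 − t² for 0 ≤ t < 1:

```python
from math import sqrt

N = 4000                                   # grid used to estimate the measure
xs = [(i + 0.5) / N for i in range(N)]

def mu_above(t):
    """Fraction of [0, 1] on which sqrt(x) exceeds t (measure estimate)."""
    return sum(1 for x in xs if sqrt(x) > t) / N

M = 400                                    # steps for the outer t-integral
dt = 1.0 / M                               # f* vanishes for t >= 1 here
integral = sum(mu_above((j + 0.5) * dt) * dt for j in range(M))
print(integral)                            # ~0.6667 = 2/3, the Riemann value
```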
A general measurable function f is Lebesgue integrable if the area between the graph of f and the x-axis is finite:
$\int_E |f|\,d\mu < + \infty.$
In that case, the integral is, as in the Riemannian case, the difference between the area above the x-axis and the area below the x-axis:
$\int_E f \,d\mu = \int_E f^+ \,d\mu - \int_E f^- \,d\mu$
where
\begin{align} f^+(x)&=\max(\{f(x),0\}) &=&\begin{cases} f(x), & \text{if } f(x) > 0, \\ 0, & \text{otherwise,} \end{cases}\\ f^-(x) &=\max(\{-f(x),0\})&=& \begin{cases} -f(x), & \text{if } f(x) < 0, \\ 0, & \text{otherwise.} \end{cases} \end{align}
### Other integrals
Although the Riemann and Lebesgue integrals are the most widely used definitions of the integral, a number of others exist, including:

• the Darboux integral, which is equivalent to the Riemann integral;
• the Riemann–Stieltjes integral and the Lebesgue–Stieltjes integral, which integrate with respect to a function of bounded variation or a measure;
• the Daniell integral, which builds the theory from elementary integrals rather than from measure theory;
• the Haar integral, used for integration on locally compact topological groups;
• the Henstock–Kurzweil (gauge) integral, which extends both the Riemann and the Lebesgue integral;
• the Itō and Stratonovich integrals of stochastic calculus.
## Properties
### Linearity
• The collection of Riemann integrable functions on a closed interval [a, b] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of integration
$f \mapsto \int_a^b f \; dx$
is a linear functional on this vector space. Thus, firstly, the collection of integrable functions is closed under taking linear combinations; and, secondly, the integral of a linear combination is the linear combination of the integrals,
$\int_a^b (\alpha f + \beta g)(x) \, dx = \alpha \int_a^b f(x) \,dx + \beta \int_a^b g(x) \, dx. \,$
• Similarly, the set of real-valued Lebesgue integrable functions on a given measure space E with measure μ is closed under taking linear combinations and hence form a vector space, and the Lebesgue integral
$f\mapsto \int_E f d\mu$
is a linear functional on this vector space, so that
$\int_E (\alpha f + \beta g) \, d\mu = \alpha \int_E f \, d\mu + \beta \int_E g \, d\mu.$
• More generally, consider the vector space of all measurable functions on a measure space (E,μ), taking values in a locally compact complete topological vector space V over a locally compact topological field K. Then one may define an abstract integration map assigning to each function f an element of V or the symbol ∞,

$f\mapsto\int_E f \,d\mu, \,$
that is compatible with linear combinations. In this situation the linearity holds for the subspace of functions whose integral is an element of V (i.e. "finite"). The most important special cases arise when K is R, C, or a finite extension of the field Qp of p-adic numbers, and V is a finite-dimensional vector space over K, and when K=C and V is a complex Hilbert space.
Linearity, together with some natural continuity properties and normalisation for a certain class of "simple" functions, may be used to give an alternative definition of the integral. This is the approach of Daniell for the case of real-valued functions on a set X, generalized by Nicolas Bourbaki to functions with values in a locally compact topological vector space. See (Hildebrandt 1953) for an axiomatic characterisation of the integral.
### Inequalities for integrals
A number of general inequalities hold for Riemann-integrable functions defined on a closed and bounded interval [a, b] and can be generalized to other notions of integral (Lebesgue and Daniell).
• Upper and lower bounds. An integrable function f on [a, b] is necessarily bounded on that interval. Thus there are real numbers m and M so that m ≤ f(x) ≤ M for all x in [a, b]. Since the lower and upper sums of f over [a, b] are therefore bounded by, respectively, m(b − a) and M(b − a), it follows that
$m(b - a) \leq \int_a^b f(x) \, dx \leq M(b - a).$
• Inequalities between functions. If f(x) ≤ g(x) for each x in [a, b] then each of the upper and lower sums of f is bounded above by the upper and lower sums, respectively, of g. Thus
$\int_a^b f(x) \, dx \leq \int_a^b g(x) \, dx.$
This is a generalization of the above inequalities, as M(b − a) is the integral of the constant function with value M over [a, b].
In addition, if the inequality between functions is strict, then the inequality between integrals is also strict. That is, if f(x) < g(x) for each x in [a, b], then
$\int_a^b f(x) \, dx < \int_a^b g(x) \, dx.$
• Subintervals. If [c, d] is a subinterval of [a, b] and f(x) is non-negative for all x, then
$\int_c^d f(x) \, dx \leq \int_a^b f(x) \, dx.$
• Products and absolute values of functions. If f and g are two functions, then we may consider their pointwise products and powers, and absolute values:

$(fg)(x)= f(x) g(x), \; f^2 (x) = (f(x))^2, \; |f| (x) = |f(x)|.\,$
If f is Riemann-integrable on [a, b] then the same is true for |f|, and
$\left| \int_a^b f(x) \, dx \right| \leq \int_a^b | f(x) | \, dx.$
Moreover, if f and g are both Riemann-integrable then f 2, g 2, and fg are also Riemann-integrable, and
$\left( \int_a^b (fg)(x) \, dx \right)^2 \leq \left( \int_a^b f(x)^2 \, dx \right) \left( \int_a^b g(x)^2 \, dx \right).$
This inequality, known as the Cauchy–Schwarz inequality, plays a prominent role in Hilbert space theory, where the left hand side is interpreted as the inner product of two square-integrable functions f and g on the interval [a, b].
• Hölder's inequality. Suppose that p and q are two real numbers, 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1, and f and g are two Riemann-integrable functions. Then the functions |f|p and |g|q are also integrable and the following Hölder's inequality holds:
$\left|\int f(x)g(x)\,dx\right| \leq \left(\int \left|f(x)\right|^p\,dx \right)^{1/p} \left(\int\left|g(x)\right|^q\,dx\right)^{1/q}.$
For p = q = 2, Hölder's inequality becomes the Cauchy–Schwarz inequality.
• Minkowski inequality. Suppose that p ≥ 1 is a real number and f and g are Riemann-integrable functions. Then |f|p, |g|p and |f + g|p are also Riemann integrable and the following Minkowski inequality holds:
$\left(\int \left|f(x)+g(x)\right|^p\,dx \right)^{1/p} \leq \left(\int \left|f(x)\right|^p\,dx \right)^{1/p} + \left(\int \left|g(x)\right|^p\,dx \right)^{1/p}.$
An analogue of this inequality for Lebesgue integral is used in construction of Lp spaces.
### Conventions
In this section f is a real-valued Riemann-integrable function. The integral
$\int_a^b f(x) \, dx$
over an interval [a, b] is defined if a < b. This means that the upper and lower sums of the function f are evaluated on a partition $a = x_0 \le x_1 \le \cdots \le x_n = b$ whose values $x_i$ are increasing. Geometrically, this signifies that integration takes place "left to right", evaluating f within intervals $[x_i, x_{i+1}]$ where an interval with a higher index lies to the right of one with a lower index. The values a and b, the end-points of the interval, are called the limits of integration of f. Integrals can also be defined if a > b:
• Reversing limits of integration. If a > b then define
$\int_a^b f(x) \, dx = - \int_b^a f(x) \, dx.$
This, with a = b, implies:
• Integrals over intervals of length zero. If a is a real number then
$\int_a^a f(x) \, dx = 0.$
The first convention is necessary in consideration of taking integrals over subintervals of [a, b]; the second says that an integral taken over a degenerate interval, or a point, should be zero. One reason for the first convention is that the integrability of f on an interval [a, b] implies that f is integrable on any subinterval [c, d], but in particular integrals have the property that:
• Additivity of integration on intervals. If c is any element of [a, b], then
$\int_a^b f(x) \, dx = \int_a^c f(x) \, dx + \int_c^b f(x) \, dx.$
With the first convention the resulting relation
\begin{align} \int_a^c f(x) \, dx &{}= \int_a^b f(x) \, dx - \int_c^b f(x) \, dx \\ &{} = \int_a^b f(x) \, dx + \int_b^c f(x) \, dx \end{align}
is then well-defined for any cyclic permutation of a, b, and c.
Instead of viewing the above as conventions, one can also adopt the point of view that integration of differential forms is performed on oriented manifolds only. If M is such an oriented m-dimensional manifold, and M′ is the same manifold with opposite orientation and ω is an m-form, then one has:
$\int_M \omega = - \int_{M'} \omega \,.$
These conventions correspond to interpreting the integrand as a differential form, integrated over a chain. In measure theory, by contrast, one interprets the integrand as a function f with respect to a measure $\mu,$ and integrates over a subset A, without any notion of orientation; one writes $\textstyle{\int_A f\,d\mu = \int_{[a,b]} f\,d\mu}$ to indicate integration over a subset A. This is a minor distinction in one dimension, but becomes subtler on higher dimensional manifolds; see Differential form: Relation with measures for details.
## Fundamental theorem of calculus
The fundamental theorem of calculus is the statement that differentiation and integration are inverse operations: if a continuous function is first integrated and then differentiated, the original function is retrieved. An important consequence, sometimes called the second fundamental theorem of calculus, allows one to compute integrals by using an antiderivative of the function to be integrated.
### Statements of theorems
• Fundamental theorem of calculus. Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by
$F(x) = \int_a^x f(t)\, dt\,.$
Then, F is continuous on [a, b], differentiable on the open interval (a, b), and
$F'(x) = f(x)\,$
for all x in (a, b).
• Second fundamental theorem of calculus. Let f be a real-valued function defined on a closed interval [a, b] that admits an antiderivative g on [a, b]. That is, f and g are functions such that for all x in [a, b],
$f(x) = g'(x).\$
If f is integrable on [a, b] then
$\int_a^b f(x)\,dx\, = g(b) - g(a).$
## Extensions
### Improper integrals
The improper integral
$\int_{0}^{\infty} \frac{dx}{(x+1)\sqrt{x}} = \pi$
has unbounded intervals for both domain and range.
A "proper" Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these conditions is not satisfied. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals.
If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity.
$\int_{a}^{\infty} f(x)dx = \lim_{b \to \infty} \int_{a}^{b} f(x)dx$
If the integrand is only defined or finite on a half-open interval, for instance (a,b], then again a limit may provide a finite result.
$\int_{a}^{b} f(x)dx = \lim_{\epsilon \to 0} \int_{a+\epsilon}^{b} f(x)dx$
That is, the improper integral is the limit of proper integrals as one endpoint of the interval of integration approaches either a specified real number, or ∞, or −∞. In more complicated cases, limits are required at both endpoints, or at interior points.
Consider, for example, the function $1/((x+1)\sqrt{x})$ integrated from 0 to ∞ (shown right). At the lower bound, as x goes to 0 the function goes to ∞, and the upper bound is itself ∞, though the function goes to 0. Thus this is a doubly improper integral. Integrated, say, from 1 to 3, an ordinary Riemann sum suffices to produce a result of π/6. To integrate from 1 to ∞, a Riemann sum is not possible. However, any finite upper bound, say t (with t > 1), gives a well-defined result, $2\arctan (\sqrt{t}) - \pi/2$. This has a finite limit as t goes to infinity, namely π/2. Similarly, the integral from 1/3 to 1 allows a Riemann sum as well, coincidentally again producing π/6. Replacing 1/3 by an arbitrary positive value s (with s < 1) is equally safe, giving $\pi/2 - 2\arctan (\sqrt{s})$. This, too, has a finite limit as s goes to zero, namely π/2. Combining the limits of the two fragments, the result of this improper integral is
\begin{align} \int_{0}^{\infty} \frac{dx}{(x+1)\sqrt{x}} &{} = \lim_{s \to 0} \int_{s}^{1} \frac{dx}{(x+1)\sqrt{x}} + \lim_{t \to \infty} \int_{1}^{t} \frac{dx}{(x+1)\sqrt{x}} \\ &{} = \lim_{s \to 0} \left(\frac{\pi}{2} - 2 \arctan{\sqrt{s}} \right) + \lim_{t \to \infty} \left(2 \arctan{\sqrt{t}} - \frac{\pi}{2} \right) \\ &{} = \frac{\pi}{2} + \left(\pi - \frac{\pi}{2} \right) \\ &{} = \frac{\pi}{2} + \frac{\pi}{2} \\ &{} = \pi . \end{align}
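This limit computation is easy to mimic numerically. The following Python sketch (names are my own) evaluates the antiderivative 2 arctan √x at ever more extreme endpoints and watches the difference approach π:

```python
from math import atan, sqrt, pi

# Antiderivative of 1/((x+1)*sqrt(x)) on (0, inf) is 2*arctan(sqrt(x)).
F = lambda x: 2 * atan(sqrt(x))

for s, t in [(1e-2, 1e2), (1e-6, 1e6), (1e-12, 1e12)]:
    print(F(t) - F(s))        # -> approaches pi = 3.14159...

print(pi)                     # reference value
```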
This process does not guarantee success; a limit may fail to exist, or may be unbounded. For example, over the bounded interval 0 to 1 the integral of 1/x does not converge; and over the unbounded interval 1 to ∞ the integral of $1/\sqrt{x}$ does not converge.
The improper integral
$\int_{-1}^{1} \frac{dx}{\sqrt[3]{x^2}} = 6$
is unbounded internally, but both left and right limits exist.
It may also happen that an integrand is unbounded at an interior point, in which case the integral must be split at that point, and the limit integrals on both sides must exist and must be bounded. Thus
\begin{align} \int_{-1}^{1} \frac{dx}{\sqrt[3]{x^2}} &{} = \lim_{s \to 0} \int_{-1}^{-s} \frac{dx}{\sqrt[3]{x^2}} + \lim_{t \to 0} \int_{t}^{1} \frac{dx}{\sqrt[3]{x^2}} \\ &{} = \lim_{s \to 0} 3(1-\sqrt[3]{s}) + \lim_{t \to 0} 3(1-\sqrt[3]{t}) \\ &{} = 3 + 3 \\ &{} = 6. \end{align}
But the similar integral
$\int_{-1}^{1} \frac{dx}{x} \,\!$
cannot be assigned a value in this way, as the integrals above and below zero do not independently converge. (However, see Cauchy principal value.)
### Multiple integration
Double integral as volume under a surface.
Integrals can be taken over regions other than intervals. In general, an integral over a set E of a function f is written:
$\int_E f(x) \, dx.$
Here x need not be a real number, but can be another suitable quantity, for instance, a vector in R3. Fubini's theorem shows that such integrals can be rewritten as an iterated integral. In other words, the integral can be calculated by integrating one coordinate at a time.
Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the x-axis, the double integral of a positive function of two variables represents the volume of the region between the surface defined by the function and the plane which contains its domain. (The same volume can be obtained via the triple integral — the integral of a function in three variables — of the constant function f(x, y, z) = 1 over the above mentioned region between the surface and the plane.) If the number of variables is higher, then the integral represents a hypervolume, a volume of a solid of more than three dimensions that cannot be graphed.
For example, the volume of the cuboid of sides 4 × 6 × 5 may be obtained in two ways:
• By the double integral
$\iint_D 5 \ dx\, dy$
of the function f(x, y) = 5 calculated in the region D in the xy-plane which is the base of the cuboid. For example, if a rectangular base of such a cuboid is given via the xy inequalities 3 ≤ x ≤ 7, 4 ≤ y ≤ 10, our above double integral now reads
$\int_4^{10}\left[ \int_3^7 \ 5 \ dx\right] dy$
From here, integration is conducted with respect to either x or y first; in this example, integration is first done with respect to x, since the inner integral is taken over x. Once the first integration is completed via the $F(b) - F(a)$ method or otherwise, the result is again integrated with respect to the other variable. The result will equate to the volume under the surface. A numerical check of both computations is sketched just after this list.
• By the triple integral
$\iiint_\mathrm{cuboid} 1 \, dx\, dy\, dz$
of the constant function 1 calculated on the cuboid itself.
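Assuming SciPy is available, both computations can be checked numerically with its iterated-quadrature routines dblquad and tplquad (the rectangular base 3 ≤ x ≤ 7, 4 ≤ y ≤ 10 with height 5 is the example above):

```python
from scipy.integrate import dblquad, tplquad

# Double integral of the constant height 5 over the base.
# dblquad integrates func(y, x): x on the outer limits, y on the inner ones.
vol2, err2 = dblquad(lambda y, x: 5.0, 3, 7, lambda x: 4, lambda x: 10)

# Triple integral of the constant 1 over the cuboid itself,
# adding 0 <= z <= 5 as the vertical extent; func is func(z, y, x).
vol3, err3 = tplquad(lambda z, y, x: 1.0, 3, 7,
                     lambda x: 4, lambda x: 10,
                     lambda x, y: 0, lambda x, y: 5)

print(vol2, vol3)   # both -> 120.0 = 4 * 6 * 5
```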
### Line integrals
A line integral sums together elements along a curve.
The concept of an integral can be extended to more general domains of integration, such as curved lines and surfaces. Such integrals are known as line integrals and surface integrals respectively. These have important applications in physics, as when dealing with vector fields.
A line integral (sometimes called a path integral) is an integral where the function to be integrated is evaluated along a curve. Various different line integrals are in use. In the case of a closed curve it is also called a contour integral.
The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve). This weighting distinguishes the line integral from simpler integrals defined on intervals. Many simple formulas in physics have natural continuous analogs in terms of line integrals; for example, the fact that work is equal to force, F, multiplied by displacement, s, may be expressed (in terms of vector quantities) as:
$W=\vec F\cdot\vec s.$
For an object moving along a path in a vector field $\vec F$ such as an electric field or gravitational field, the total work done by the field on the object is obtained by summing up the differential work done in moving from $\vec s$ to $\vec s + d\vec s$. This gives the line integral
$W=\int_C \vec F\cdot d\vec s.$
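As a concrete sketch (the field and path are my own choices, not from the text), the work of F(x, y) = (−y, x) once around the unit circle can be computed from a parameterization with a simple midpoint sum:

```python
from math import cos, sin, pi

# W = \int_C F . ds with r(t) = (cos t, sin t), r'(t) = (-sin t, cos t),
# for t in [0, 2*pi], traversed counter-clockwise.
def integrand(t):
    x, y = cos(t), sin(t)          # position r(t)
    dx, dy = -sin(t), cos(t)       # velocity r'(t)
    Fx, Fy = -y, x                 # field at r(t)
    return Fx * dx + Fy * dy       # F(r(t)) . r'(t)

n = 100_000
h = 2 * pi / n
W = sum(integrand((i + 0.5) * h) for i in range(n)) * h
print(W)   # -> 6.2831... = 2*pi
```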
### Surface integrals
The definition of surface integral relies on splitting the surface into small surface elements.
A surface integral is a definite integral taken over a surface (which may be a curved set in space); it can be thought of as the double integral analog of the line integral. The function to be integrated may be a scalar field or a vector field. The value of the surface integral is the sum of the field at all points on the surface. This can be achieved by splitting the surface into surface elements, which provide the partitioning for Riemann sums.
For an example of applications of surface integrals, consider a vector field v on a surface S; that is, for each point x in S, v(x) is a vector. Imagine that we have a fluid flowing through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S in unit amount of time. To find the flux, we need to take the dot product of v with the unit surface normal to S at each point, which will give us a scalar field, which we integrate over the surface:
$\int_S {\mathbf v}\cdot \,d{\mathbf {S}}.$
The fluid flux in this example may be from a physical fluid such as water or air, or from electrical or magnetic flux. Thus surface integrals have applications in physics, particularly with the classical theory of electromagnetism.
### Integrals of differential forms
A differential form is a mathematical concept in the fields of multivariable calculus, differential topology and tensors. The modern notation for the differential form, as well as the idea of the differential forms as being the wedge products of exterior derivatives forming an exterior algebra, was introduced by Élie Cartan.
We initially work in an open set in Rn. A 0-form is defined to be a smooth function f. When we integrate a function f over an m-dimensional subspace S of Rn, we write it as
$\int_S f\,dx^1 \cdots dx^m.$
(The superscripts are indices, not exponents.) We can consider $dx^1$ through $dx^n$ to be formal objects themselves, rather than tags appended to make integrals look like Riemann sums. Alternatively, we can view them as covectors, and thus a measure of "density" (hence integrable in a general sense). We call $dx^1, \ldots, dx^n$ basic 1-forms.
We define the wedge product, "∧", a bilinear "multiplication" operator on these elements, with the alternating property that
$dx^a \wedge dx^a = 0 \,\!$
for all indices a. Note that alternation along with linearity and associativity implies $dx^b \wedge dx^a = -dx^a \wedge dx^b$. This also ensures that the result of the wedge product has an orientation.
We define the set of all these products to be basic 2-forms, and similarly we define the set of products of the form $dx^a \wedge dx^b \wedge dx^c$ to be basic 3-forms. A general k-form is then a weighted sum of basic k-forms, where the weights are the smooth functions f. Together these form a vector space with basic k-forms as the basis vectors, and 0-forms (smooth functions) as the field of scalars. The wedge product then extends to k-forms in the natural way. Over Rn at most n covectors can be linearly independent, thus a k-form with k > n will always be zero, by the alternating property.
In addition to the wedge product, there is also the exterior derivative operator d. This operator maps k-forms to (k+1)-forms. For a k-form $\omega = f\,dx^a$ over Rn, we define the action of d by:
$d\omega = \sum_{i=1}^n \frac{\partial f}{\partial x_i} dx^i \wedge dx^a.$
with extension to general k-forms occurring linearly.
This more general approach allows for a more natural coordinate-free approach to integration on manifolds. It also allows for a natural generalisation of the fundamental theorem of calculus, called Stokes' theorem, which we may state as
$\int_{\Omega} d\omega = \int_{\partial\Omega} \omega \,\!$
where ω is a general k-form, and ∂Ω denotes the boundary of the region Ω. Thus, in the case that ω is a 0-form and Ω is a closed interval of the real line, this reduces to the fundamental theorem of calculus. In the case that ω is a 1-form and Ω is a two-dimensional region in the plane, the theorem reduces to Green's theorem. Similarly, using 2-forms, and 3-forms and Hodge duality, we can arrive at Stokes' theorem and the divergence theorem. In this way we can see that differential forms provide a powerful unifying view of integration.
### Summations
The discrete equivalent of integration is summation. Summations and integrals can be put on the same foundations using the theory of Lebesgue integrals or time scale calculus.
## Methods
### Computing integrals
The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. Let f(x) be the function of x to be integrated over a given interval [a, b]. Then, find an antiderivative of f; that is, a function F such that F' = f on the interval. Provided the integrand and integral have no singularities on the path of integration, by the fundamental theorem of calculus, $\textstyle\int_a^b f(x)\,dx = F(b)-F(a).$
The integral is not actually the antiderivative, but the fundamental theorem provides a way to use antiderivatives to evaluate definite integrals.
The most difficult step is usually to find the antiderivative of f. It is rarely possible to glance at a function and write down its antiderivative. More often, it is necessary to use one of the many techniques that have been developed to evaluate integrals. Most of these techniques rewrite one integral as a different one which is hopefully more tractable. Techniques include:

• Integration by substitution
• Integration by parts
• Changing the order of integration
• Trigonometric substitution
• Integration by partial fractions
• Reduction formulae
• Differentiation under the integral sign
• Contour integration
Alternate methods exist to compute more complex integrals. Many nonelementary integrals can be expanded in a Taylor series and integrated term by term. Occasionally, the resulting infinite series can be summed analytically. The method of convolution using Meijer G-functions can also be used, assuming that the integrand can be written as a product of Meijer G-functions. There are also many less common ways of calculating definite integrals; for instance, Parseval's identity can be used to transform an integral over a rectangular region into an infinite sum. Occasionally, an integral can be evaluated by a trick; for an example of this, see Gaussian integral.
Computations of volumes of solids of revolution can usually be done with disk integration or shell integration.
Specific results which have been worked out by various techniques are collected in the list of integrals.
### Symbolic algorithms
Many problems in mathematics, physics, and engineering involve integration where an explicit formula for the integral is desired. Extensive tables of integrals have been compiled and published over the years for this purpose. With the spread of computers, many professionals, educators, and students have turned to computer algebra systems that are specifically designed to perform difficult or tedious tasks, including integration. Symbolic integration has been one of the motivations for the development of the first such systems, like Macsyma.
A major mathematical difficulty in symbolic integration is that in many cases, a closed formula for the antiderivative of a rather simple-looking function does not exist. For instance, it is known that the antiderivatives of the functions exp(x^2), x^x and (sin x)/x cannot be expressed in closed form involving only rational and exponential functions, logarithm, trigonometric and inverse trigonometric functions, and the operations of multiplication and composition; in other words, none of the three given functions is integrable in elementary functions, which are the functions which may be built from rational functions, roots of a polynomial, logarithm, and exponential functions. The Risch algorithm provides a general criterion to determine whether the antiderivative of an elementary function is elementary, and, if it is, to compute it. Unfortunately, it turns out that functions with closed expressions of antiderivatives are the exception rather than the rule. Consequently, computer algebra systems have no hope of being able to find an antiderivative for a randomly constructed elementary function. On the positive side, if the 'building blocks' for antiderivatives are fixed in advance, it may still be possible to decide whether the antiderivative of a given function can be expressed using these blocks and operations of multiplication and composition, and to find the symbolic answer whenever it exists. The Risch algorithm, implemented in Mathematica and other computer algebra systems, does just that for functions and antiderivatives built from rational functions, radicals, logarithm, and exponential functions.
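For instance, using SymPy — one freely available computer algebra system — the first two functions integrate only in terms of the special functions erf and Si, while x^x comes back unevaluated:

```python
import sympy as sp

x = sp.symbols('x')

print(sp.integrate(sp.exp(-x**2), x))   # sqrt(pi)*erf(x)/2 -- needs a special function
print(sp.integrate(sp.sin(x)/x, x))     # Si(x)             -- likewise
print(sp.integrate(x**x, x))            # Integral(x**x, x) -- no known closed form
```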
Some special integrands occur often enough to warrant special study. In particular, it may be useful to have, in the set of antiderivatives, the special functions of physics (like the Legendre functions, the hypergeometric function, the Gamma function, the Incomplete Gamma function and so on — see Symbolic integration for more details). Extending the Risch algorithm to include such functions is possible but challenging and has been an active research subject.
More recently a new approach has emerged, using D-finite functions, which are the solutions of linear differential equations with polynomial coefficients. Most of the elementary and special functions are D-finite, and the integral of a D-finite function is also a D-finite function. This provides an algorithm to express the antiderivative of a D-finite function as the solution of a differential equation.
This theory also makes it possible to compute the definite integral of a D-finite function as the sum of a series given by its first coefficients, and provides an algorithm to compute any coefficient.[1]
The integrals encountered in a basic calculus course are deliberately chosen for simplicity; those found in real applications are not always so accommodating. Some integrals cannot be found exactly, some require special functions which themselves are a challenge to compute, and others are so complex that finding the exact answer is too slow. This motivates the study and application of numerical methods for approximating integrals, which today use floating-point arithmetic on digital electronic computers. Many of the ideas arose much earlier, for hand calculations; but the speed of general-purpose computers like the ENIAC created a need for improvements.
The goals of numerical integration are accuracy, reliability, efficiency, and generality. Sophisticated methods can vastly outperform a naive method by all four measures (Dahlquist & Björck 2008; Kahaner, Moler & Nash 1989; Stoer & Bulirsch 2002). Consider, for example, the integral
$\int_{-2}^{2} \tfrac{1}{5} \left( \tfrac{1}{100}(322 + 3 x (98 + x (37 + x))) - 24 \frac{x}{1+x^2} \right) dx ,$
which has the exact answer 94/25 = 3.76. (In ordinary practice the answer is not known in advance, so an important task — not explored here — is to decide when an approximation is good enough.) A “calculus book” approach divides the integration range into, say, 16 equal pieces, and computes function values.
| x | f(x) | x | f(x) |
|-------|----------|-------|----------|
| −2.00 | 2.22800 | −1.75 | 2.33041 |
| −1.50 | 2.45663 | −1.25 | 2.58562 |
| −1.00 | 2.67200 | −0.75 | 2.62934 |
| −0.50 | 2.32475 | −0.25 | 1.64019 |
| 0.00 | 0.64400 | 0.25 | −0.32444 |
| 0.50 | −0.92575 | 0.75 | −1.09159 |
| 1.00 | −0.94000 | 1.25 | −0.60387 |
| 1.50 | −0.16963 | 1.75 | 0.31734 |
| 2.00 | 0.83600 | | |
Numerical quadrature methods: Rectangle, Trapezoid, Romberg, Gauss
Using the left end of each piece, the rectangle method sums 16 function values and multiplies by the step width, h, here 0.25, to get an approximate value of 3.94325 for the integral. The accuracy is not impressive, but calculus formally uses pieces of infinitesimal width, so initially this may seem little cause for concern. Indeed, repeatedly doubling the number of steps eventually produces an approximation of 3.76001. However, $2^{18}$ pieces are required, a great computational expense for such little accuracy; and a reach for greater accuracy can force steps so small that arithmetic precision becomes an obstacle.
A better approach replaces the horizontal tops of the rectangles with slanted tops touching the function at the ends of each piece. This trapezium rule is almost as easy to calculate; it sums all 17 function values, but weights the first and last by one half, and again multiplies by the step width. This immediately improves the approximation to 3.76925, which is noticeably more accurate. Furthermore, only $2^{10}$ pieces are needed to achieve 3.76000, substantially less computation than the rectangle method for comparable accuracy.
Romberg's method builds on the trapezoid method to great effect. First, the step lengths are halved incrementally, giving trapezoid approximations denoted by $T(h_0)$, $T(h_1)$, and so on, where $h_{k+1}$ is half of $h_k$. For each new step size, only half the new function values need to be computed; the others carry over from the previous size (as shown in the table above). But the really powerful idea is to interpolate a polynomial through the approximations, and extrapolate to $T(0)$. With this method a numerically exact answer here requires only four pieces (five function values)! The Lagrange polynomial interpolating $\{(h_k,T(h_k))\}_{k=0\ldots2} = \{(4.00,6.128), (2.00,4.352), (1.00,3.908)\}$ is $3.76 + 0.148h^2$, producing the extrapolated value 3.76 at $h = 0$.
Gaussian quadrature often requires noticeably less work for superior accuracy. In this example, it can compute the function values at just two x positions, ±2⁄√3, then double each value and sum to get the numerically exact answer. The explanation for this dramatic success lies in error analysis, and a little luck. An n-point Gaussian method is exact for polynomials of degree up to 2n−1. The function in this example is a degree 3 polynomial, plus a term that cancels because the chosen endpoints are symmetric around zero. (Cancellation also benefits the Romberg method.)
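The rectangle, trapezoid, and two-point Gauss results quoted above can be reproduced in a few lines of Python (a sketch; the variable names are my own):

```python
from math import sqrt

def f(x):
    """Test integrand from the example above."""
    return ((322 + 3 * x * (98 + x * (37 + x))) / 100
            - 24 * x / (1 + x * x)) / 5

a, b, n = -2.0, 2.0, 16
h = (b - a) / n

rect = h * sum(f(a + i * h) for i in range(n))             # left endpoints
trap = h * (sum(f(a + i * h) for i in range(1, n))
            + (f(a) + f(b)) / 2)
gauss = 2 * (f(-2 / sqrt(3)) + f(2 / sqrt(3)))             # 2-point Gauss on [-2, 2]

print(rect)    # ~3.94325
print(trap)    # ~3.76925
print(gauss)   # 3.76 (numerically exact here)
```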
Shifting the range left a little, so the integral is from −2.25 to 1.75, removes the symmetry. Nevertheless, the trapezoid method is rather slow, the polynomial interpolation method of Romberg is acceptable, and the Gaussian method requires the least work — if the number of points is known in advance. As well, rational interpolation can use the same trapezoid evaluations as the Romberg method to greater effect.
| Method | Points | Rel. Err. |
|-----------|---------|----------------------|
| Trapezoid | 1048577 | $-5.3\times10^{-13}$ |
| Romberg | 257 | $-6.3\times10^{-15}$ |
| Rational | 129 | $8.8\times10^{-15}$ |
| Gauss | 36 | $3.1\times10^{-15}$ |

Value: $\textstyle \int_{-2.25}^{1.75} f(x)\,dx = 4.1639019006585897075\ldots$
In practice, each method must use extra evaluations to ensure an error bound on an unknown function; this tends to offset some of the advantage of the pure Gaussian method, and motivates the popular Gauss–Kronrod quadrature formulae. Symmetry can still be exploited by splitting this integral into two ranges, from −2.25 to −1.75 (no symmetry), and from −1.75 to 1.75 (symmetry). More broadly, adaptive quadrature partitions a range into pieces based on function properties, so that data points are concentrated where they are needed most.
Simpson's rule, named for Thomas Simpson (1710–1761), uses a parabolic curve to approximate integrals. In many cases, it is more accurate than the trapezoidal rule and others. The rule states that
$\int_a^b f(x) \, dx \approx \frac{b-a}{6}\left[f(a) + 4f\left(\frac{a+b}{2}\right)+f(b)\right],$
with an error of
$\left|-\frac{(b-a)^5}{2880} f^{(4)}(\xi)\right|.$
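Applied to the example integrand from the sketch above, a single parabola already gives the exact answer, since the cubic part is integrated exactly and the remaining odd term cancels over the symmetric interval:

```python
def f(x):   # same test integrand as in the quadrature sketch above
    return ((322 + 3 * x * (98 + x * (37 + x))) / 100 - 24 * x / (1 + x * x)) / 5

def simpson(f, a, b):
    """Single application of Simpson's rule on [a, b]."""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

print(simpson(f, -2, 2))   # -> 3.76, exact for this symmetric example
```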
The computation of higher-dimensional integrals (for example, volume calculations) makes important use of such alternatives as Monte Carlo integration.
A calculus text is no substitute for numerical analysis, but the reverse is also true. Even the best adaptive numerical code sometimes requires a user to help with the more demanding integrals. For example, improper integrals may require a change of variable or methods that can avoid infinite function values, and known properties like symmetry and periodicity may provide critical leverage.
Reference: http://en.wikipedia.org/wiki/Integral
Source: http://math.stackexchange.com/questions/128195/lie-derivative-of-a-vector-field-equals-the-lie-bracket?answertab=active

# Lie derivative of a vector field equals the Lie bracket
Let $X$ and $Y$ be vector fields on a smooth manifold $M$, and let $\phi_t$ be the flow of $X$, i.e. $\frac{d}{dt} \phi_t(p) = X_{\phi_t(p)}$. I am trying to prove the following formula:
$\frac{d}{dt} ((\phi_{-t})_* Y)|_{t=0} = [X,Y],$
where $[X,Y]$ is the commutator, defined by $[X,Y] = X\circ Y - Y\circ X$.
This is a question from these online notes: http://www.math.ist.utl.pt/~jnatar/geometria_sem_exercicios.pdf .
Can you show us where you get stuck? The details are a bit of a mess, but the idea is straightforward. Start with the left side and compute it at a point, applied to an arbitrary smooth function, i.e. write out $(L_X Y)_p f$ with the limit definition. You should be able to rearrange terms, rewrite things, and cancel a little to get to the right hand side. – Matt Apr 5 '12 at 2:18
Let $X$ and $Y$ be two vector fields; then the Lie derivative $L_{X}Y$ is the commutator $[X,Y]$.

The proof:
We have $L_{X}Y(f)=\lim_{t\to 0}\frac{d\phi_{-t}Y-Y}{t}(f)=\lim_{t\to 0}d\phi_{-t}\left(\frac{Y-d\phi_{t}Y}{t}\right)(f)=\lim_{t\to 0}\frac{Y(f)-d\phi_{t}Y(f)}{t}=\lim_{t\to 0}\frac{Y(f)-Y(f\circ\phi_{t})\circ\phi_{t}^{-1}}{t}$
We put $\phi_{t}(x)=\phi(t,x)$ and apply the Taylor formula with integral remainder; then there exists a smooth $h(t,x)$ such that:
$$f(\phi(t,x))=f(x)+th(t,x)$$ where $h(0,x)=\frac{\partial}{\partial t}\Big|_{t=0}f(\phi(t,x))$
By definition of the tangent vector: $X(f)(x)=\frac{\partial}{\partial t}\Big|_{t=0}(f\circ\phi_{t})(x)$
then we have $h(0,x)=X(f)(x)$, so:
$$L_{X}Y(f)=\lim_{t\to 0}\left(\frac{Y(f)-Y(f)\circ \phi_{t}^{-1}}{t}-Y(h(t,x))\circ \phi_{t}^{-1}\right)=\lim_{t\to 0}\left(\frac{(Y(f)\circ\phi_{t}-Y(f))\circ\phi_{t}^{-1}}{t}-Y(h(t,x))\circ\phi_{t}^{-1}\right)$$
we have $\lim_{t\to 0}\phi_{t}^{-1}=\phi_{0}^{-1}=id.$
then we conclude that
$$L_{X}Y(f)=\lim_{t\to 0}\left(\frac{Y(f)\circ\phi_{t}-Y(f)}{t}-Y(h(0,x))\right)$$ $$= \frac{\partial}{\partial t}\Big(Y(f)\circ\phi_{t}(x)\Big)\Big|_{t=0}-Y(h(0,x))$$ $$= X(Y(f)) -Y(X(f))$$ $$= [X,Y](f)$$
This completes the proof.
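As a sanity check of the formula (not part of the original answer), one can verify it on a concrete example in the plane where the flow is known in closed form. The sketch below uses SymPy with $X = x\,\partial_y$ (flow $\phi_t(x,y)=(x,\,y+tx)$) and $Y = y\,\partial_x$; all variable names are our own.

```python
import sympy as sp

x, y, t, a, b = sp.symbols('x y t a b', real=True)

# X = x d/dy has the explicit flow phi_t(x, y) = (x, y + t*x).
phi_t = sp.Matrix([x, y + t * x])
# Y = y d/dx, written as the column vector (y, 0).
Y = lambda q: sp.Matrix([q[1], 0])

# ((phi_{-t})_* Y)(p) = D(phi_{-t})|_{phi_t(p)} . Y(phi_t(p)).
phi_minus_t = sp.Matrix([a, b - t * a])        # phi_{-t} at a generic point (a, b)
D = phi_minus_t.jacobian(sp.Matrix([a, b]))    # Jacobian; here constant in (a, b)
pushed = sp.expand(D * Y(phi_t))               # the transported field along the flow

lie_derivative = sp.diff(pushed, t).subs(t, 0)
print(lie_derivative.T)   # Matrix([[x, -y]]), i.e. the field x d/dx - y d/dy

# Direct commutator check: [X, Y]f = X(Y f) - Y(X f) = x f_x - y f_y,
# whose components (x, -y) agree with the limit computed above.
```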
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9837954044342041, "perplexity": 128.98004572578913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446500.34/warc/CC-MAIN-20151124205406-00115-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://cs.stackexchange.com/questions/110138/conditions-for-maximum-period-of-quadratic-congruential-method-prng/110157 | # Conditions for maximum period of quadratic congruential method (PRNG)
$$X_{n} = (d^2X_{n-1} + aX_{n-1} + c) \operatorname{mod} m$$
Knuth lists out the necessity and sufficiency of 4 conditions (Exercise 8 in page 49 of "The art of computer programming Vol.II"):
1. $$c$$ is relatively prime to $$m$$
2. $$d$$ and $$a-1$$ are both multiples of $$p$$, for all odd primes $$p$$ dividing $$m$$
3. $$d \equiv a-1$$(mod 2) when $$2|m$$; $$d$$ is even and $$d \equiv a-1$$(mod 4) when $$4|m$$
4. $$d \not \equiv 3c$$ (mod 9) when $$9|m$$
Knuth writes in the answer to exercise 8:
If $$p \leqslant 3$$, it's easy to establish the necessity of conditions (iii) and (iv) by the trial and error method
I did try to find my own way to prove the necessity of conditions (iii) and (iv). Here's how I prove the first one:
Assume $$m=p^e$$. Firstly, we consider the case when $$p=2,e=1$$
So, the sequence $$X_n$$ with ($$X_0 = 0$$ and $$m=2$$) has the period of $$2$$ when:
$$X_2 = X_0 = 0$$
We can prove that $$X_2 \equiv dc+a+1 \ (\operatorname{mod} 2)$$, using that $$c$$ is odd (it is relatively prime to $$m$$).
Obviously, we have: $$d \equiv a-1 (\operatorname{mod} 2)\space \tag1$$
If $$e \geqslant 2$$ then $$4|m$$. The same sequence $$X_n$$ with $$(X_0=0,m=4)$$ must have period 4, which means $$X_0,X_1,X_2,X_3$$ are pairwise distinct.
$$X_2 \not=X_0$$, and $$X_2$$ is even (mod 2 the sequence has period 2), so $$X_2 = 2$$. Also $$X_3 \not=X_1$$; since $$X_3 - X_1 = dX_2^2 + aX_2 \equiv aX_2 (\operatorname{mod} 4)$$, this implies: $$aX_2 \not\equiv 0 (\operatorname{mod} 4)\space\tag2$$
Due to (1) and (2), $$a$$ must be odd and $$d$$ must be even. After some trials on $$X_2 = dc + a + 1 \operatorname{mod} 4= 2$$ ($$c$$ is odd), we easily prove: $$d \equiv a-1(\operatorname{mod} 4)$$
I have also proved condition (iv) by my "trial and error method", but I'm not sure if that is what Knuth means. So my first question:
1. What exactly is the "trial and error method" applied in this situation?
Finally, the proof of condition (ii) confuses me:
If $$d \not\equiv 0(\operatorname{mod} p)$$ then $$dx^2+ax+c \equiv d(x+a_1)^2 + c_1(\operatorname{mod} p^e)$$ for some integers $$a_1$$, $$c_1$$ and for all integers x
$$d\not\equiv 0 (\operatorname{mod} p)$$ means that $$d$$ is relatively prime to $$p^e$$. But I can't get any further from this.
1. Why does $$dx^2+ax+c\equiv d(x+a_1)^2 + c_1(\operatorname{mod} p^e)$$ hold when $$d \not\equiv 0(\operatorname{mod} p)$$ ?
What exactly is the "trial and error method" applied in this situation?
$$\newcommand{\mymod}{\operatorname{modulo}}$$As I understand it, the "trial and error method" here means checking all cases from a few simple, natural, or known perspectives until a satisfactory solution or proof is found. It is useful and efficient in this situation because the number of cases $$\mymod 2$$, $$\mymod 4$$, or $$\mymod 9$$ is very small.
What you have done seems pretty good.
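For what it's worth, here is a small Python sketch of such a trial-and-error search (our own illustration, not from Knuth): it enumerates all parameter triples for a small modulus and prints those achieving the maximum period, so the patterns behind conditions (i) and (iii) can be read off directly.

```python
def period(d, a, c, m):
    """Cycle length of X_{n+1} = (d*X_n**2 + a*X_n + c) mod m, starting from X_0 = 0."""
    x, seen = 0, {}
    while x not in seen:
        seen[x] = len(seen)
        x = (d * x * x + a * x + c) % m
    return len(seen) - seen[x]   # equals m exactly when the period is maximal

m = 4
for d in range(m):
    for a in range(m):
        for c in range(m):
            if period(d, a, c, m) == m:
                print(d, a, c)   # observe: c odd, d even, d = a - 1 (mod 4)
```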
Why does $$dx^2+ax+c\equiv d(x+a_1)^2 + c_1(\mymod{p^e})$$ hold when $$d \not\equiv 0\,(\mymod{p})$$ ?
Prime $$p\not=2$$, since it has been assumed that $$p\ge5$$. Since $$d \not\equiv 0\,(\mymod p)$$, $$2d$$ and $$p^e$$ are relatively prime, which implies $$2d$$ is invertible $$\mymod p^e$$. Let $$(2d)d'\equiv1\,(\mymod p^e)$$ for some $$d'$$. Then
\begin{aligned} dx^2+ax+c &\equiv dx^2+2dd'ax + c\\ &\equiv d(x+d'a)^2 -d(d'a)^2+c\ (\mymod p^e)\\ \end{aligned}
Letting $$a_1=d'a$$ and $$c_1=-d(d'a)^2+c$$, we are done. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 65, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9525185227394104, "perplexity": 465.09730359885106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671548.98/warc/CC-MAIN-20191122194802-20191122223802-00090.warc.gz"} |
https://www.physicsforums.com/threads/please-help-integration.525520/ | Hi!!
I'm having a little problem with an exercise and I dunno how to sort it out.
Basically what I have is a 5 mm tube ejecting fuel spray. At the exit plane I have positioned lasers to measure the volume flux at different locations, going from one side of the tube to the other. The increment between locations was 0.0254 mm.
At each location I measured the volume flux; now I have to calculate the total volume flux, and I don't know how to do that (I have a basic idea, but I'm not quite sure about it).
Could you guys help me out here?
I've attached a picture illustrating what I have. The spots along the diameter line are the volume flux at each location.
http://img88.imageshack.us/img88/2066/jato.jpg [Broken]
Thanks!
Hootenanny (Staff Emeritus, Gold Member):
This problem isn't as straightforward as it may seem, but it may be if we can exploit certain symmetries.
For example, do you know if the flow rate is independent of the polar angle? In other words, if you rotate your laser array by some angle, do your readings change?
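If the profile does turn out to be independent of the polar angle, the total flux reduces to a one-dimensional integral, $Q = \int_0^R q(r)\, 2\pi r \, dr$, which can be approximated directly from the laser readings (after averaging the two halves of the diameter scan). Here is a minimal Python sketch of that step; the radius and the parabolic profile are placeholders of our own, not data from the thread.

```python
import numpy as np

dr = 0.0254e-3                        # measurement spacing in metres
R = 2.5e-3                            # tube radius (5 mm diameter)
r = np.arange(0.0, R + dr, dr)        # radii from the axis to the wall
q = 1.0 - (r / R) ** 2                # placeholder flux profile q(r)

# With axisymmetry, each reading represents an annulus of area 2*pi*r*dr.
Q = np.trapz(q * 2.0 * np.pi * r, r)
print(f"total volume flux ~ {Q:.3e}")
```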
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8396339416503906, "perplexity": 980.150625263493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154085.58/warc/CC-MAIN-20210731074335-20210731104335-00097.warc.gz"}
http://bayesfactor.blogspot.com/2016/07/stop-saying-confidence-intervals-are.html?showComment=1493261184141 | ## Friday, July 29, 2016
### Stop saying confidence intervals are "better" than p values
One of the common tropes one hears from advocates of confidence intervals is that they are superior, or should be preferred, to p values. In our paper "The Fallacy of Placing Confidence in Confidence Intervals", we outlined a number of interpretation problems in confidence interval theory. We did this from a mostly Bayesian perspective, but in the second section was an example that showed why, from a frequentist perspective, confidence intervals can fail. However, many people missed this because they assumed that the paper was all Bayesian advocacy. The purpose of this blog post is to expand on the frequentist example that many people missed; one doesn't have to be a Bayesian to see that confidence intervals can be less interpretable than the p values they are supposed to replace. Andrew Gelman briefly made this point previously, but I want to expand on it so that people (hopefully) more clearly understand the point.
Understanding the argument I'm going to lay out here is critical to understanding both p values and confidence intervals. As we'll see, fallacies about one or the other are what lead advocates of confidence intervals to falsely believe that CIs are "better".
### p values and "surprise"
First, we must define a p value properly and understand its role in frequentist inference. The p value is the probability of obtaining a result at least as extreme as the one we observed, under some assumption about the true distribution of the data. A low p value is taken as indicating that the result observed was very extreme under the assumptions, and hence calls the assumptions into doubt. One might say that a low p value is "surprising" under the assumptions. I will not question this mode of inference here.
It is critical to keep in mind that a low p value can call an assumption into doubt, but a high p value does not "confirm" anything. This is consistent with falsificationist logic. We often see p values used in the context of null hypothesis significance testing (NHST), where a single p value is computed that indicates how extreme the data are under the assumption of a null hypothesis; however, we can compute p values for any hypothesis we like. As an example, suppose we are interested in whether reading comprehension scores are affected by caffeine. We apply three different doses to N=10 people in each group in a between-subjects design, and test their reading comprehension. For the sake of the example, we assume normality, homogeneity of variance, etc. We apply a one-way ANOVA to the reading comprehension scores and obtain an F statistic of F(2,27)=8.
If we assume that there is no relationship between the reading scores and caffeine dose, then the resulting p value for this F statistic is p=0.002. This indicates that we would expect F statistics as extreme as this one only 0.2% of the time, if there were no true relationship.
The curve shows the distribution of F(2,27) statistics when the null hypothesis is true. The area under the curve to the right of the observed F statistic is the p value.
This low p value would typically be regarded as strong evidence against the null hypothesis, because -- as the graph above shows -- an F statistic as extreme as the observed one would be quite rare if indeed there were no relationship between reading scores and caffeine.
So far, this is all first-year statistics (though it is often misunderstood). Although we typically see p values computed for a single hypothesis, there is nothing stopping us from computing them for multiple hypotheses. Suppose we are interested in the true size of the effect between reading scores and caffeine dosage. One statistic that quantifies this relationship is ω², the proportion of the total variance in the reading scores that is "accounted for" by caffeine (see Steiger, 2004 for details). We won't get into the details of how this is computed; we need only know that:
• When ω²=0, there is no relationship between caffeine and reading scores. All variance is error; that is, knowing someone's reading score does not give any information about which dose group they were in.
• When ω²=1, there is the strongest possible relationship between caffeine and reading scores. No variance is error; that is, by knowing someone's reading score one can know with certainty which dose group they were in.
• As ω² gets larger, larger and larger F statistics are predicted.
We have computed the p value under the assumption that ω²=0, but what about all other ω² values? Try this shiny app to find the predicted distribution of F statistics, and hence p values, for other values of ω². Try to find the value of ω² that would yield a p value of exactly 0.05; it should be about ω²=0.108.
A Shiny app for finding p values in a one-way ANOVA with three groups.
All values of ω² less than 0.108 yield p values of less than 0.05. If we designate p<0.05 as "surprising" p values, then F=8 would be surprising under the assumption of any value of ω² between 0 and 0.108.
Using the Shiny app, we can see that an F=8 yields a right-tailed p value of about 0.05 when ω² is approximately 0.108.
Notice that the p values we've computed thus far are "right-tailed" p values; that is, "extreme" is defined as "too big". We can also ask whether the F statistic we've found is extreme in the other direction: that is, is it "too small"? A p value used to indicate whether the F value is too small is called a "left-tailed" p value. Using the Shiny app, one can work out the value of ω² such that F=8 would be "surprisingly" small at the p=0.05 level; that value is ω²=0.523. Under any true value of ω² greater than 0.523, F=8 would be surprisingly small.
Using the Shiny app, we can see that an F=8 yields a left-tailed p value of about 0.05 when ω² is approximately 0.523.
• If 0 ≤ ω² ≤ 0.108, the observed F statistic would be surprisingly large (that is, the right-tailed p ≤ 0.05)
• If 0.523 ≤ ω² ≤ 1, the observed F statistic would be surprisingly small (that is, the left-tailed p ≤ 0.05)
• If 0.108 ≤ ω² ≤ 0.523, the observed F statistic would not be surprisingly large or small.
Critically, we've used p values to make all of these statements. The p values tell us whether values would be "surprisingly extreme", under particular assumptions; p values allow us, under frequentist logic, to rule out true values of ω², but not to rule them in.
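These one-sided p values come straight from the noncentral F distribution, so the interval can be computed without the Shiny app. The Python sketch below is our own illustration; in particular, the mapping λ = N·ω²/(1−ω²) from ω² to the noncentrality parameter is an assumption on our part (one standard parameterization, cf. Steiger, 2004), so the bounds it prints are only approximate reproductions of the numbers above.

```python
from scipy.stats import ncf
from scipy.optimize import brentq

F_obs, df1, df2, N = 8.0, 2, 27, 30    # one-way ANOVA, three groups of 10

def lam(w2):                           # assumed noncentrality parameterization
    return N * w2 / (1.0 - w2)

def right_p(w2):                       # P(F >= F_obs | omega^2 = w2)
    return ncf.sf(F_obs, df1, df2, lam(w2))

def left_p(w2):                        # P(F <= F_obs | omega^2 = w2)
    return ncf.cdf(F_obs, df1, df2, lam(w2))

lower = brentq(lambda w2: right_p(w2) - 0.05, 1e-9, 0.99)
upper = brentq(lambda w2: left_p(w2) - 0.05, 1e-9, 0.99)
print(lower, upper)   # expect values near 0.108 and 0.523
```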
### p values and confidence intervals
Many people are aware of the relationship between p values and confidence intervals. A typical X% (two-tailed) confidence interval contains all parameter values such that neither one-sided p value is less than (1-X/100)/2. That sounds complicated, but it isn't; for a 90% confidence interval, we just need all the values for which the observed data would not be "too surprising" (p>0.05 for both of the one-sided tests).
We've already computed the 90% confidence interval for ω² in our example; for all values in [0.108, 0.523], the p values for both one-sided tests are p>0.05. From each of the two one-sided tests we get an error rate of 0.05, and hence the confidence coefficient is 100 times 1 - (0.05 + 0.05) = 90%.
How can we interpret the confidence interval? Confidence interval advocates would have us believe that the interval [0.108, 0.523] gives "plausible" or "likely" values for the parameters, and that the width of this interval tells us the precision of our estimate. But remember how the CI was computed: using p values. We know that nonsignificant high p values do not rule in parameter values as plausible; rather, the values outside the interval have been ruled out, due to the fact that if those were the true values, the observed data would be surprising.
So rather than thinking of the CI as values that are "ruled in" as "plausible" or "likely" by the data, we should rather (from a frequentist perspective, at least) think of the confidence interval as values that have not yet been ruled out by a significance test.
### Does this matter?
This distinction matters a great deal for understanding both p values and confidence intervals. In order to use p values in any way that approaches reasonability, we need to understand the "surprise" interpretation, and we need to realise that we can compute p values for many hypotheses, not just the null hypothesis. In order to interpret confidence intervals well, we need to understand the "fallacy of acceptance": Just because a value is in the CI, doesn't mean it is plausible; it only means that it has not yet been ruled out.
To see the real consequences of this fallacy, consider what we would infer if F(2,27)=0.001 (p=0.999). Any competent data analyst would notice that there is something wrong; the means are surprisingly similar. Under the null hypothesis, when all error is due to error within the groups, we expect the means to vary. This F statistic indicates that the means are so similar that even under the null hypothesis -- where the true means are exactly the same -- we would expect more similar observed means only one time in a thousand.
In fact, the F statistic is so small that under all values of ω², the left-tailed p value is at most 0.001. Why? Because ω² can't be any lower than 0, and this represents the null hypothesis. If we built a 90% confidence interval, it would be empty, because there are no values of ω² that yield p>0.05. For all true values of ω², the observed data are "surprising". Now, this presents no particular problem for an interpretation of CIs that rests solely on their relationship with p values. But note that the very high p value tells us more than the confidence interval; the CI depends on the confidence coefficient, and is simply empty. The p value and the F statistic have the information we want; they tell us that the means are much more similar than we would typically expect under any hypothesis. A competent data analyst would, at this point, check the procedure or data for problems. The entire model is suspect.
But what does this mean for a confidence interval advocate who is invested in the (incorrect) interpretation of the CI in terms of "plausible values" or "precision"? Consider Steiger (2004), who suggests replacing a missing bound with "0" in the CI for ω². This is an awful suggestion. In the example above with F=0.001, this would imply that the confidence interval includes a single value, 0. But the observed data F=0.001 would be very surprising if ω² = 0. Under frequentist logic, that value -- and all other values -- should be ruled out. Moreover, a CI of {0} is infinitesimally thin. Steiger admits that this obviously does not imply infinite precision, but neither Steiger nor any other CI advocate gives a formal reason why CIs must, in general, have an interpretation in terms of precision. When the interpretation obviously fails, this should make us doubt whether the interpretation was correct in the first place. The p value tells the story much better than the CI, without encouraging us to fall into fallacies of acceptance or precision.
### Where to go from here?
It is often claimed that confidence intervals are more informative than p values. This assertion is based on a flawed interpretation of confidence intervals, which we call the "likelihood" or "plausibility" fallacy, and is related to Mayo's "fallacy of acceptance". A proper interpretation of confidence intervals, in terms of the underlying significance tests, avoids this fallacy and prevents bad interpretations of CIs, in particular when the model is suspect. The entire concept of the "confidence interval" encourages the fallacy of acceptance, and it would probably be best if CIs were abandoned altogether. If one does not want to be Bayesian, one option that is more useful than confidence intervals -- where all values are either rejected or not at a fixed level of significance -- is viewing curves of p values (for similar use of p value curves, see Mayo's work on "severity").
Curves of right- and left-tailed p values for the two F statistics mentioned in this post.
Consider the plot on the left above, which shows all right- and left-tailed p values for F=8. The horizontal line at p=0.05 allows us to find the 90% confidence interval. For any value of ω² such that either the blue or the red line is lower than the horizontal line, the observed data would be "surprising". It is easy to see that the values not ruled out at p=0.05 form the interval [0.108, 0.523]. The plot easily shows the necessary information without encouraging the fallacy of acceptance.
Now consider the plot on the right. For F=0.001, however, all values of ω² yield a left-tailed p value of less than 0.05, and hence F=0.001 would be "surprising". There are no values for which both the red and blue lines are above p=0.05. The plot does not encourage us to believe that ω² is small or 0, and it does not encourage any interpretation in terms of precision; instead, it shows that all values are suspect.
The answer to fallacious interpretations of p values is not to move to confidence intervals; confidence intervals only encourage related fallacies, which one can find in any confidence interval advocacy paper. If we wish to rid people of fallacies involving p values, more p values are needed, not fewer. Confidence intervals are not "better" than p values. The only way to interpret CIs reasonably is in terms of p values, and considering entire p value curves enables us to jettison the reliance on an arbitrary confidence coefficient, and helps us avoid fallacies. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8529388308525085, "perplexity": 741.4334538803035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320395.62/warc/CC-MAIN-20170625032210-20170625052210-00615.warc.gz"} |
https://www.lessonplanet.com/teachers/write-and-solve-equations-english-learners | # Write and Solve Equations: English Learners
In this algebra equations ELL worksheet, 7th graders write the letter of each phrase next to the equation it describes. Students then write a phrase to describe the equations using the terms from the box for help. Students finish by solving one equation using inverse operations.
Resource Details | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.950869619846344, "perplexity": 2113.1981976817115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424960.67/warc/CC-MAIN-20170725022300-20170725042300-00592.warc.gz"} |
http://www.wias-berlin.de/publications/wias-publ/run.jsp?template=abstract&type=Preprint&year=&number=8 | WIAS Preprint No. 8, (1992)
Piecewise polynomial collocation for the double layer potential equation over polyhedral boundaries. Part I: The wedge, Part II: The cube.
Authors
• Rathsfeld, Andreas
2010 Mathematics Subject Classification
• 45L10 65R20
Keywords
• potential equation, collocation
DOI
10.20347/WIAS.PREPRINT.8
Abstract
In this paper we consider a piecewise polynomial collocation method for the solution of the double layer potential equation corresponding to Laplace's equation in a three-dimensional wedge. We prove the stability of our method in the case of special triangulations over the boundary.
Appeared in
• Boundary Value Problems and Integral Equations on Nonsmooth Domains, M. Costabel, M. Dauge , S. Nicaise, eds., vol. 167 of Lecture Notes in Pure and Applied Mathematics, Marcel Dekker, Inc., New York, 1994, pp. 218--253 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9081913828849792, "perplexity": 1520.801693420289}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999853.94/warc/CC-MAIN-20190625152739-20190625174739-00235.warc.gz"} |
http://mathhelpforum.com/statistics/149809-probability-proof-print.html | Probability Proof
Recall that $\Pr(A \cup B) = \Pr(A) + \Pr(B) - \Pr(A \cap B)$. Is $\Pr(A \cup B) > 1$ possible? Therefore, what do you conclude? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9120030999183655, "perplexity": 1068.3632691850662}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678694628/warc/CC-MAIN-20140313024454-00040-ip-10-183-142-35.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/28931/what-are-the-precise-statements-by-shouryya-ray-of-particle-dynamics-problems-po | # What are the precise statements by Shouryya Ray of particle dynamics problems posed by Newton which this news article claims have been solved?
Shouryya Ray, who moved to Germany from India with his family at the age of 12, has baffled scientists and mathematicians by solving two fundamental particle dynamics problems posed by Sir Isaac Newton over 350 years ago, Die Welt newspaper reported on Monday.
Ray’s solutions make it possible to now calculate not only the flight path of a ball, but also predict how it will hit and bounce off a wall. Previously it had only been possible to estimate this using a computer, wrote the paper.
What are the problems from this description? What is their precise formulation? Also, is there anywhere I can read the details of this person's proposed solutions?
-
My suspicion is that this is yet another example of idiotic science journalism. I'm curious to know if I'm right though :) – Colin K May 24 '12 at 22:50
– Zev Chonoles May 27 '12 at 1:39
I have sent an email to the organisers, asking them if the results are accessible somewhere. But the point is that this competition is a very well-reputed one, so it is likely that the student did something reasonable in a correct way. It's not so likely that it is the breakthrough that the newspapers make out of it or even worthy of a publication. (To give you an idea, the two runners-up in the category mathematics wrote a computer program to simulate the composition of fugues and a computer program for ray-tracing.) – Phira May 27 '12 at 9:41
I doubt that it is the student's fault that he is presented as the nerd genius solving the problems that have stumped centuries of mathematicians. Newspapers love this. "High school student shows a lot of promise and might be a very good researcher in 10 years." just doesn't cut it. – Phira May 27 '12 at 9:43
I agree with @Phira. This competition is organized in three stages: a regional stage, a state-wide stage and a nationwide stage. He made it to nationwide, but only scored second there. The nationwide winner was the runner-up from the second stage and his relativistic ray-tracer, so I doubt that Ray really solved an unsolved Math problem. – mnemosyn May 28 '12 at 10:44
This thread (physicsforums.com) contains a link to Shouryya Ray's poster, in which he presents his results.
So the problem is to find the trajectory of a particle under influence of gravity and quadratic air resistance. The governing equations, as they appear on the poster:
$$\dot u(t) + \alpha u(t) \sqrt{u(t)^2+v(t)^2} = 0 \\ \dot v(t) + \alpha v(t) \sqrt{u(t)^2 + v(t)^2} = -g\text,$$
subject to initial conditions $v(0) = v_0 > 0$ and $u(0) = u_0 \neq 0$.
Thus (it is easily inferred), in his notation, $u(t)$ is the horizontal velocity, $v(t)$ is the vertical velocity, $g$ is the gravitational acceleration, and $\alpha$ is a drag coefficient.
He then writes down the solutions
$$u(t) = \frac{u_0}{1 + \alpha V_0 t - \tfrac{1}{2!}\alpha gt^2 \sin \theta + \tfrac{1}{3!}\left(\alpha g^2 V_0 \cos^2 \theta - \alpha^2 g V_0 \sin \theta\right) t^3 + \cdots} \\ v(t) = \frac{v_0 - g\left[t + \tfrac{1}{2!} \alpha V_0 t^2 - \tfrac{1}{3!} \alpha gt^3 \sin \theta + \tfrac{1}{4!}\left(\alpha g^2 V_0 \cos^2 \theta - \alpha^2 g V_0 \sin \theta\right)t^4 + \cdots\right]}{1 + \alpha V_0 t - \tfrac{1}{2!}\alpha gt^2 \sin \theta + \tfrac{1}{3!}\left(\alpha g^2 V_0 \cos^2 \theta - \alpha^2 g V_0 \sin \theta\right) t^3 + \cdots}\text.$$
From the diagram below the photo of Newton, one sees that $V_0$ is the initial speed, and $\theta$ is the initial elevation angle.
The poster (or at least the part that is visible) does not give details on the derivation of the solution. But some things can be seen:
• He uses, right in the beginning, the substitution $\psi(t) = u(t)/v(t)$.
• There is a section called "...öße der Bewegung". The first word is obscured, but a qualified guess would be "Erhaltungsgröße der Bewegung", which would translate as "conserved quantity of the motion". Here, the conserved quantity described by David Zaslavsky appears, modulo some sign issues.
• However, this section seems to be a subsection of the bigger section "Aus der Lösung ablesbare Eigenschaften", or "Properties that can be seen from the solution". That seems to imply that the solution implies the conservation law, rather than the solution being derived from the conservation law. The text in that section probably provides some clue, but it's only partly visible, and, well, my German is rusty. I welcome someone else to try to make sense of it.
• Also part of the bigger section are subsections where he derives from his solution (a) the trajectory for classical, drag-free projectiles, (b) some "Lamb-Näherung", or "Lamb approximation".
• The next section is called "Verallgemeinerungen", or "Generalizations". Here, he seems to consider two other problems, with drag of the form $\alpha V^2 + \beta$, in the presence of altitude-dependent horizontal wind. I'm not sure what the results here are.
• The diagrams to the left seem to demonstrate the accuracy and convergence of his series solution by comparing them to Runge-Kutta. Though the text is kind of blurry, and, again, my German is rusty, so I'm not too sure.
• Here's a rough translation of the first part of the "Zusammenfassung und Ausblick" (Summary and outlook), with suitable disclaimers as to the accuracy:
• For the first time, a fully analytical solution of a long unsolved problem
• Various excellent properties; in particular, conserved quantity $\Rightarrow$ fundamental [...] extraction of deep new insights using the complete analytical solutions (above all [...] perspectives and approximations are to be gained)
• Convergence of the solution numerically demonstrated
• Solution sketch for two generalizations
EDIT: Two professors at TU Dresden, who have seen Mr Ray's work, have written some comments:
Comments on some recent work by Shouryya Ray
There, the questions he solved are unambiguously stated, so that should answer any outstanding questions.
EDIT2: I should add: I do not doubt that Shouryya Ray is a very intelligent young man. The solution he gave can, perhaps, be obtained using standard methods. I believe, however, that he discovered the solution without being aware that the methods were standard, a very remarkable achievement indeed. I hope that this event has not discouraged him; no doubt, he'll be a successful physicist or mathematician one day, should he choose that path.
-
Link to image of Shouryya Ray's poster is now dead. – Qmechanic♦ Jul 9 '12 at 20:00
Here is another link to the poster image. – Qmechanic♦ Jul 11 '12 at 20:23
It is indeed quite difficult to find information on why exactly this project has attracted so much attention. What I've pieced together from comments on various websites and some images (mainly this one) is that Shouryya Ray discovered the following constant of motion for projectile motion with quadratic drag:
$$\frac{g^2}{2v_x^2} + \frac{\alpha g}{2}\left(\frac{v_y\sqrt{v_x^2 + v_y^2}}{v_x^2} + \sinh^{-1}\biggl|\frac{v_y}{v_x}\biggr|\right) = \text{const.}$$
This applies to a particle which is subject to a quadratic drag force,
$$\vec{F}_d = -m\alpha v\vec{v}$$
It's easily verified that the constant is constant by taking the time derivative and plugging in the equations of motion
\begin{align}\frac{\mathrm{d}v_x}{\mathrm{d}t} &= -\alpha v_x\sqrt{v_x^2 + v_y^2} \\ \frac{\mathrm{d}v_y}{\mathrm{d}t} &= -\alpha v_y\sqrt{v_x^2 + v_y^2} - g\end{align}
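It is also straightforward to check this numerically; the Python sketch below (our own, with arbitrary parameters) integrates the equations of motion with an RK4 step and evaluates the claimed constant before and after. We take $v_x>0$ and write $\sinh^{-1}(v_y/v_x)$ without the absolute value, matching the sign conventions of the Hamiltonian answer below.

```python
import math

alpha, g = 0.5, 9.81                     # arbitrary drag coefficient and gravity

def rhs(vx, vy):
    s = math.hypot(vx, vy)
    return -alpha * vx * s, -alpha * vy * s - g

def invariant(vx, vy):
    return (g**2 / (2 * vx**2)
            + 0.5 * alpha * g * (vy * math.hypot(vx, vy) / vx**2
                                 + math.asinh(vy / vx)))

vx, vy, dt = 3.0, 4.0, 1e-4
print(invariant(vx, vy))
for _ in range(10000):                   # one second of RK4 time-stepping
    k1 = rhs(vx, vy)
    k2 = rhs(vx + dt/2*k1[0], vy + dt/2*k1[1])
    k3 = rhs(vx + dt/2*k2[0], vy + dt/2*k2[1])
    k4 = rhs(vx + dt*k3[0], vy + dt*k3[1])
    vx += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    vy += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
print(invariant(vx, vy))                 # agrees with the first value to high accuracy
```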
The prevailing opinion is that this has not been known before, although some people are claiming to have seen it in old textbooks (never with a reference, though, so take it for what you will).
I haven't heard anything concrete about how this could be put to practical use, although perhaps that is part of the technical details of the project. It's already possible to calculate ballistic trajectories with drag to very high precision using numerical methods, and the presence of this constant doesn't directly lead to a new method of calculating trajectories as far as I can tell.
-
There is a discussion on Reddit about this subject, which describes the problem and a verification of the solution. See reddit.com/r/worldnews/comments/u7551/… – jbatista May 28 '12 at 22:02
@jbatista yeah, that's one of the sources I was getting my information from. – David Zaslavsky May 28 '12 at 22:05
So it sounds like a very neat result, and definitely impressive for a high school student; but not exactly worth a "Kid out-thinks Newton" headline. Junky science journalism, as always. – Colin K May 28 '12 at 22:19
The MathExchange cross-post math.stackexchange.com/q/150242 also copy+pastes the Reddit discussion; particularly salient is that it cites a result by G. W. Parker published in Am.J.Phys. 45 (1977) 606-610 discussing the same problem. This makes it even more interesting to find out about how Ray obtained his result. – jbatista May 28 '12 at 22:19
I) Here we would like to give a Hamiltonian formulation of a point particle in a constant gravitational field with quadratic air resistance
$$\tag{1} \dot{u}~=~ -\alpha u \sqrt{u^2+v^2}, \qquad \dot{v}~=~ -\alpha v \sqrt{u^2+v^2} -g.$$
The $u$ and $v$ are the horizontal and vertical velocity, respectively. A dot on top denotes differentiation with respect to time $t$. The two positive constants $\alpha>0$ and $g>0$ can be put to one by scaling the three variables
$$\tag{2} t'~=~\sqrt{\alpha g}t, \qquad u'~=~\sqrt{\frac{\alpha}{g}}u, \qquad v'~=~\sqrt{\frac{\alpha}{g}}v.$$
See e.g. Ref. [1] for a general introduction to Hamiltonian and Lagrangian formulations.
II) Define two canonical variables (generalized position and momentum) as
$$\tag{3} q~:=~ -\frac{v}{|u|}, \qquad p~:=~ \frac{1}{|u|}~>~0.$$
(The position $q$ is (up to signs) Shouryya Ray's $\psi$ variable, and the momentum $p$ is (up to a multiplicative factor) Shouryya Ray's $\dot{\Psi}$ variable. We assume$^\dagger$ for simplicity that $u\neq 0$.) Then the equations of motion (1) become
$$\tag{4a} \dot{q}~=~ gp,$$ $$\tag{4b} \dot{p}~=~ \alpha \sqrt{1+q^2}.$$
III) Equation (4a) suggests that we should identify $\frac{1}{g}$ with a mass
$$\tag{5} m~:=~ \frac{1}{g},$$
so that we have the standard expression
$$\tag{6} p~=~m\dot{q}$$
for the momentum of a non-relativistic point particle. Let us furthermore define kinetic energy
$$\tag{7} T~:=~\frac{p^2}{2m}~=~ \frac{gp^2}{2}.$$
IV) Equation (4b) and Newton's second law suggest that we should define a modified Hooke's force
$$\tag{8} F(q)~:=~ \alpha \sqrt{1+q^2}~=~-V^{\prime}(q),$$
with potential given by (minus) the antiderivative
$$V(q)~:=~ - \frac{\alpha}{2} \left(q \sqrt{1+q^2} + {\rm arsinh}(q)\right)$$ $$\tag{9} ~=~ - \frac{\alpha}{2} \left(q \sqrt{1+q^2} + \ln(q+\sqrt{1+q^2})\right).$$
Note that this corresponds to an unstable situation because the force $F(-q)~=~F(q)$ is an even function, while the potential $V(-q) = - V(q)$ is a monotonic odd function of the position $q$.
It is tempting to define an angle variable $\theta$ as
$$\tag{10} q~=~\tan\theta,$$
so that the corresponding force and potential read
$$\tag{11} F~=~\frac{\alpha}{\cos\theta} , \qquad V~=~- \frac{\alpha}{2} \left(\frac{\sin\theta}{\cos^2\theta} + \ln\frac{1+\sin\theta}{\cos\theta}\right).$$
V) The Hamiltonian is the total mechanical energy
$$H(q,p)~:=~T+V(q)~=~\frac{gp^2}{2}- \frac{\alpha}{2} \left(q \sqrt{1+q^2} + {\rm arsinh}(q)\right)$$ $$\tag{12}~=~\frac{g}{2u^2} +\frac{\alpha}{2} \left( \frac{v\sqrt{u^2+v^2}}{u^2} + {\rm arsinh}\frac{v}{|u|}\right).$$
Since the Hamiltonian $H$ contains no explicit time dependence, the mechanical energy (12) is conserved in time, which is Shouryya Ray's first integral of motion.
$$\tag{13} \frac{dH}{dt}~=~ \frac{\partial H}{\partial t}~=~0.$$
VI) The Hamiltonian equations of motion are eqs. (4). Suppose that we know $q(t_i)$ and $p(t_i)$ at some initial instant $t_i$, and we would like to find $q(t_f)$ and $p(t_f)$ at some final instant $t_f$.
The Hamiltonian $H$ is the generator of time evolution. If we introduce the canonical equal-time Poisson bracket
$$\tag{14} \{q(t_i),p(t_i)\}~=~1,$$
then (minus) the Hamiltonian vector field reads
$$\tag{15} -X_H~:=~-\{H(q(t_i),p(t_i)), \cdot\} ~=~ gp(t_i)\frac{\partial}{\partial q(t_i)} + F(q(t_i))\frac{\partial}{\partial p(t_i)}.$$
For completeness, let us mention that in terms of the original velocity variables, the Poisson bracket reads
$$\tag{16} \{v(t_i),u(t_i)\}~=~u(t_i)^3.$$
We can write a formal solution to position, momentum, and force, as
$$q(t_f) ~=~ e^{-\tau X_H}q(t_i) ~=~ q(t_i) - \tau X_H[q(t_i)] + \frac{\tau^2}{2}X_H[X_H[q(t_i)]]+\ldots \qquad$$ $$\tag{17a} ~=~ q(t_i) + \tau g p(t_i) + \frac{\tau^2}{2}g F(q(t_i)) +\frac{\tau^3}{6}g \frac{g\alpha^2p(t_i)q(t_i)}{F(q(t_i))} +\ldots ,\qquad$$ $$p(t_f) ~=~ e^{-\tau X_H}p(t_i) ~=~ p(t_i) - \tau X_H[p(t_i)] + \frac{\tau^2}{2}X_H[X_H[p(t_i)]]+\ldots\qquad$$ $$~=~p(t_i) + \tau F(q(t_i)) +\frac{\tau^2}{2}\frac{g\alpha^2p(t_i)q(t_i)}{F(q(t_i))}$$ $$\tag{17b} + \frac{g\alpha^2\tau^3}{6} \left(q(t_i) + \frac{g\alpha^2 p(t_i)^2}{F(q(t_i))^3}\right) +\ldots ,\qquad$$ $$F(q(t_f)) ~=~ e^{-\tau X_H}F(q(t_i))$$ $$~=~ F(q(t_i)) - \tau X_H[F(q(t_i))] + \frac{\tau^2}{2}X_H[X_H[F(q(t_i))]] + \ldots\qquad$$ $$\tag{17c}~=~ F(q(t_i)) + \tau \frac{g\alpha^2p(t_i)q(t_i)}{F(q(t_i))} +\frac{g(\alpha\tau)^2}{2}\left(q(t_i) +\frac{g\alpha^2 p(t_i)^2}{F(q(t_i))^3}\right) +\ldots ,\qquad$$
and calculate to any order in time $\tau:=t_f-t_i$ that we would like. (As a check, note that if one differentiates (17a) with respect to time $\tau$, one gets (17b) multiplied by $g$, and if one differentiates (17b) with respect to time $\tau$, one gets (17c), cf. eq. (4).) In this way we can obtain a Taylor expansion in time $\tau$ of the form
$$\tag{18} F(q(t_f)) ~=~\alpha\sum_{n,k,\ell\in \mathbb{N}_0} \frac{c_{n,k,\ell}}{n!}\left(\tau\sqrt{\alpha g}\right)^n \left(p(t_i)\sqrt{\frac{g}{\alpha}}\right)^k \frac{q(t_i)^{\ell}}{(F(q(t_i))/\alpha)^{k+\ell-1}}.$$
The dimensionless universal constants $c_{n,k,\ell}$ are zero if either $n+k$ or $\frac{n+k}{2}+\ell$ is not an even integer. We have a closed expression
$$F(q(t_f)) ~\approx~ \exp\left[\tau gp(t_i)\frac{\partial}{\partial q(t_i)}\right]F(q(t_i)) ~=~ F(q(t_i)+\tau g p(t_i))$$ $$\tag{19} \qquad \text{for} \qquad ~ p(t_i)~\gg~\frac{ F(q(t_i))}{\sqrt{\alpha g}},$$
i.e., when we can ignore the second term in the Hamiltonian vector field (15).
VII) The corresponding Lagrangian is
$$\tag{20} L(q,\dot{q})~=~T-V(q)~=~\frac{\dot{q}^2}{2g}+ \frac{\alpha}{2} \left(q \sqrt{1+q^2} + {\rm arsinh}(q)\right)$$
The corresponding Euler-Lagrange equation of motion reads
$$\tag{21} \ddot{q}~=~ \alpha g \sqrt{1+q^2}.$$
This is essentially Shouryya Ray's $\psi$ equation.
References:
1. Herbert Goldstein, Classical Mechanics.
$^\dagger$ Note that if $u$ becomes zero at some point, it stays zero in the future, cf. eq.(1). If $u\equiv 0$ identically, then eq.(1) becomes
$$\tag{22} -\dot{v} ~=~ \alpha v |v| + g.$$
The solution to eq. (22) for negative $v\leq 0$ is
$$\tag{23} v(t) ~=~ -\sqrt{\frac{g}{\alpha}} \tanh(\sqrt{\alpha g}(t-t_0)) , \qquad t~\geq~ t_0,$$
where $t_0$ is an integration constant. In general,
$$\tag{24} (u(t),v(t)) ~\to~ (0, -\sqrt{\frac{g}{\alpha}}) \qquad \text{for} \qquad t ~\to~ \infty ,$$
while
$$\tag{25} (q(t),p(t)) ~\to~ (\infty,\infty) \qquad \text{for} \qquad t~\to~ \infty.$$
-
The equations for this projectile problem were formulated by Jacob Bernoulli (1654-1705), and Gottfried Leibniz (1646-1716) developed a solution technique in 1696! The method develops an analytical solution for the velocity and angle of inclination (or equivalently the horizontal and vertical velocities). Obtaining the horizontal and vertical displacements and the time by analytical methods from these intermediate results has not been successful since then, and probably never will be. However, simple numerical techniques can yield solutions more efficiently without the use of a power series representation. For example, MATHEMATICA easily solves the equations from the intermediate results. I'm surprised that someone from the military ballistics community has not commented, or even the aerospace guys. This must be very elementary to them. I happened upon this subject because I am reading an interesting book by Neville De Mestre called "The Mathematics of Projectiles in Sport". I recommend it. Although it was published in 1990 and is probably out of print, it may be available through AMAZON BOOKS.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9046630859375, "perplexity": 685.4424422408368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702444272/warc/CC-MAIN-20130516110724-00054-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://aas.org/archives/BAAS/v25n4/aas183/abs/S5511.html | A Search for Cataclysmic Variables in the EGRET All-Sky Survey
Session 55 -- Interacting Binaries: CVs and XRBs
Display presentation, Thursday, January 13, 9:30-6:45, Salons I/II Room (Crystal Gateway)
## [55.11] A Search for Cataclysmic Variables in the EGRET All-Sky Survey
P. Barrett, E.M. Schlegel (USRA), O.C. DeJager (PU CHE), G. Chanmugam (LSU)
We present results from {\sl Compton/EGRET} observations of the entire class of magnetic Cataclysmic Variables (CV) and many recent novae. The result from this initial survey is negative with no detection greater than $2\sigma$. The average upper limit of the luminosity of a typical (distance $\sim 100pc$) CV is $\approx 7 \times 10^{30}~ergs~ s^{-1}$ which implies a conversion efficiency of accretion luminosity to $\gamma$-ray luminosity of $<1$\% for $\gamma$-rays above $100 MeV$.
This low conversion efficiency places tight constraints on non-thermal models of $\gamma$-ray production from accretion-powered, magnetic compact binaries. For diffusive shock acceleration of protons, which is the only process possible for the AM Herculis subclass of CVs, we obtain upper limits to the flux above $100 MeV$ from VV Puppis, V834 Cen (E1405-451), and AM Herculis of about 10, 5, and 3$\times 10^{30}~ ergs~ s^{-1}$, respectively. These flux upper limits are more than a factor of ten less than the fluxes claimed by Bhat et al. (1989) using COS-B data for VV Puppis and V834 Cen, and about a factor of 100 less than the TeV flux from AM Her (Bhat et al. 1991). These results may mean that the diffusive shock process is not as important in AM Her binaries as is proposed. For the dynamo mechanism of particle acceleration, which is the putative process occurring in the DQ Herculis subclass of CVs, the efficiency of converting angular momentum to $\gamma$-rays must be less than optimal. This result may be important for the production of $\gamma$-rays from neutron star binaries, where the dynamo mechanism was first proposed. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9731457233428955, "perplexity": 4977.97876840965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982924728.51/warc/CC-MAIN-20160823200844-00045-ip-10-153-172-175.ec2.internal.warc.gz"}
http://zbmath.org/?q=an:1128.34051 | # zbMATH — the first resource for mathematics
On a linear differential equation with a proportional delay. (English) Zbl 1128.34051
The paper deals with the non-autonomous linear delay differential equation
$$\dot{x}(t)~=~c(t)\bigl(x(t)-p\,x(\lambda t)\bigr),\qquad 0<\lambda<1,\quad p\neq 0,\quad t>0,$$
where $p$ and $\lambda$ are real scalars and $c$ is a continuous, non-oscillatory function defined on $(0,\infty)$. The equation is referred to as the pantograph equation, since in a simplified version it models the collection of current by the pantograph of an electric locomotive. The asymptotic properties of the solutions are the focus. The following condition on the growth of $c$ is imposed:
$$\limsup_{t\to\infty}\frac{\lambda\, c(\lambda t)}{c(t)}<1.$$
The main result of the paper says that if $c\in C^{1}((0,\infty))$ fulfills this condition and is eventually positive, then there exist real constants $L$ and $\rho$, where $\rho>0$, and a continuous periodic function $g$ of period $\log\lambda^{-1}$ such that
$$x(t)~=~L\,x^{*}(t)+t^{\kappa}g(\log t)+O\bigl(t^{\kappa_r-\rho}\bigr).$$
Here $\kappa_r$ is the real part of the (possibly complex) $\kappa$ such that $\lambda^{\kappa}=1/p$, and $x^{*}$ is the solution of the considered equation such that
$$x^{*}(t)~\sim~\exp\left(\int_{\bar{t}}^{t}c(s)\,ds\right)\qquad\text{as}\quad t\to\infty$$
(the existence of such a solution $x^{*}$ is proved in the paper). Though it is natural to distinguish the cases of eventually positive and eventually negative $c$, it is shown that a similar asymptotic formula is valid also in the case of $c$ eventually negative. Finally, using a transformation approach, these results are generalized to equations with a general form of the delay.
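To get a feel for the equation, note that since $0<\lambda<1$ the delayed argument $\lambda t$ always lies inside the already-computed history, so the equation can be stepped forward from $t=0$ with an interpolated history lookup. The Python sketch below is our own illustration, not from the review; the choices $c(t)=-1$, $p=1/2$, $x(0)=1$ are arbitrary.

```python
import numpy as np

lam, p = 0.5, 0.5
c = lambda t: -1.0                   # an eventually negative c, for illustration

dt, T = 1e-3, 10.0
ts = np.arange(0.0, T + dt, dt)
xs = np.empty_like(ts)
xs[0] = 1.0                          # only x(0) is needed, since lam*t < t

for i in range(len(ts) - 1):
    t = ts[i]
    x_delayed = np.interp(lam * t, ts[:i+1], xs[:i+1])      # history lookup
    xs[i+1] = xs[i] + dt * c(t) * (xs[i] - p * x_delayed)   # Euler step

print(xs[-1])   # the decay profile can be compared with the asymptotic formula
```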
##### MSC:
34K25 Asymptotic theory of functional-differential equations 34K06 Linear functional-differential equations | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 22, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8854199051856995, "perplexity": 2390.328642523575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163041955/warc/CC-MAIN-20131204131721-00017-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/is-f-1-a-b-the-same-as-f-1-a-f-1-b.528104/ | # Homework Help: Is f-1(A ∪ B) the same as f-1(A) ∪ f-1(B)?
1. Sep 7, 2011
### brookey86
2. Sep 7, 2011
### Dick
Yes, you are. But I'd feel more comfortable if you tried to prove it rather than taking as an assumption.
3. Sep 7, 2011
### SammyS
Staff Emeritus
Have you tried to prove it's true?
4. Sep 7, 2011
### brookey86
I can prove it using words, not quite there using mathematical symbols, but that part is out of the scope of my class. Thanks guys!
5. Sep 8, 2011
### HallsofIvy
You prove two sets are equal by proving that each is a subset of the other. You prove "A" is a subset of "B" by saying "let $x\in A$", then show "$x\in B$".
Here, to show that $f^{-1}(A\cup B)\subset f^{-1}(A)\cup f^{-1}(B)$, start by saying "let $x\in f^{-1}(A\cup B)$". Then $y= f(x)\in A\cup B$. And that, in turn, means that either $y\in A$ or $y \in B$. Consider each of those.
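The identity is also easy to spot-check on a finite example; here is a tiny Python sketch of our own (note the map is deliberately non-injective, so it has no inverse function, only inverse images):

```python
def preimage(f, S, domain):
    """f^{-1}(S): all points of the domain that f maps into S."""
    return {x for x in domain if f(x) in S}

domain = range(-5, 6)
f = lambda x: x * x                 # not injective: no inverse function exists
A, B = {1, 4}, {4, 9}

lhs = preimage(f, A | B, domain)
rhs = preimage(f, A, domain) | preimage(f, B, domain)
assert lhs == rhs                   # {-3, -2, -1, 1, 2, 3} on both sides
print(lhs)
```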
Note, by the way, that we are considering the inverse image of sets. None of this implies or requires that f actually have an "inverse". | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.972313404083252, "perplexity": 1085.3209974466167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861641.66/warc/CC-MAIN-20180619002120-20180619022120-00314.warc.gz"} |
http://thurj.org/as/2011/01/702/ | Norman Y. Yao§*, Yi-Chia Lin*, Chase P. Broedersz?, Karen E. Kasza, Frederick C. MacKintosh?, and David A. Weitz*‡¶
§Harvard College 2008; *Department of Physics, Harvard University, Cambridge, MA 02138, USA;Department of Physics and Astronomy, Vrije Universiteit, 1081HV Amsterdam, The Netherlands;School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA.
Neurofilaments are found in abundance in the cytoskeleton of neurons, where they act as an intracellular framework protecting the neuron from external stresses. To elucidate the nature of the mechanical properties that provide this protection, we measure the linear and nonlinear viscoelastic properties of networks of neurofilaments. These networks are soft solids that exhibit dramatic strain stiffening above critical strains of 30–70%. Surprisingly, divalent ions, such as Mg2+, Ca2+, and Zn2+ act as effective cross-linkers for neurofilament networks, controlling their solid-like elastic response. This behavior is comparable to that of actin-binding proteins in reconstituted filamentous actin. We show that the elasticity of neurofilament networks is entropic in origin and is consistent with a model for cross-linked semiflexible networks, which we use to quantify the cross-linking by divalent ions.
## Introduction
The mechanical and functional properties of cells depend largely on their cytoskeleton, which is comprised of networks of biopolymers; these include microtubules, actin, and intermediate filaments. A complex interplay of the mechanics of these networks provides cytoskeletal structure with the relative importance of the individual networks depending strongly on the type of cell [1]. The complexity of the intermingled structure and the mechanical behavior of these networks in vivo has led to extensive in vitro studies of networks of individual biopolymers. Many of these studies have focused on reconstituted networks of filamentous actin (F-actin) which dominates the mechanics of the cytoskeleton of many cells [2-7]. However, intermediate filaments also form an important network in the cytoskeleton of many cells; moreover, in some cells they form the most important network. For example, in mature axons, neurofilaments, a type IV intermediate filament, are the most abundant cytoskeletal element overwhelming the amount of actin and outnumbering microtubules by more than an order of magnitude [8]. Neurofilaments (NF) are assembled from three polypeptide sub-units NF-Light (NF-L), NF-Medium (NF-M), and NF-Heavy (NF-H), with molecular masses of 68 kDa, 150 kDa and 200 kDa, respectively [8]. They have a diameter d ~ 10 nm, a persistence length believed to be of order lp ~ 0.2 µm and an in vitro contour length L ~ 5 µm. They share a conserved sequence with all other intermediate filaments, which is responsible for the formation of coiled dimers that eventually assemble into tetramers and finally into filaments. Unlike other intermediate filaments such as vimentin and desmin, neurofilaments have long carboxy terminal extensions that protrude from the filament backbone [9]. These highly charged “side-arms” lead to significant interactions among individual filaments as well as between filaments and ions [10]. Although the interaction of divalent ions and rigid polymers has been previously examined, little is known about the electrostatic cross-linking mechanism [11]. Networks of neurofilaments are weakly elastic; however, these networks are able to withstand large strains and exhibit pronounced stiffening with increasing strain [12, 13]. An understanding of the underlying origin of this elastic behavior remains elusive; in particular, even the nature of the cross-linkers, which must be present in such a network, is not known. Further, recent findings have shown that NF aggregation and increased network stiffness are common in patients with amyotrophic lateral sclerosis (ALS) and Parkinson’s. Thus, an understanding of the fundamental mechanical properties of these networks of neurofilaments is an essential first step in elucidating the role of neurofilaments in a multitude of diseases [14]. However, the elastic behavior of these networks has not as yet been systematically studied.
Here, we report the linear and nonlinear viscoelastic properties of networks of neurofilaments. We show that these networks form cross-linked gels; the cross-linking is governed by divalent ions such as Mg2+ at millimolar concentrations. To explain the origins of the network’s elasticity, we apply a semiflexible polymer model, which ascribes the network elasticity to the stretching of thermal fluctuations; this quantitatively accounts for the linear and nonlinear elasticity of neurofilament networks, and ultimately, even allows us to extract microstructural network parameters such as the persistence length and the average distance between cross-links directly from bulk rheology.
## Materials and Methods
#### Materials
Neurofilaments are purified from bovine spinal cords using a standard procedure [9, 15, 16]. The fresh tissue is homogenized in the presence of buffer A (Mes 0.1 M, MgCl2 1 mM, EGTA 1 mM, pH 6.8) and then centrifuged at a K-factor of 298.8 (Beckman 70 Ti). The crude neurofilament pellet is purified overnight on a discontinuous sucrose gradient with 0.8 M sucrose (5.9 ml), 1.5 M sucrose (1.3 ml) and 2.0 M sucrose (1.0 ml). After overnight sedimentation, the concentration of the purified neurofilament is determined with a Bradford assay using bovine serum albumin (BSA) as a standard. The purified neurofilament is dialyzed against buffer A containing 0.8 M sucrose for 76 hours, and then 120 μl aliquots are flash frozen in liquid nitrogen and stored at -80 °C.
#### Bulk Rheology
The mechanical response of the cross-linked neurofilament networks is measured with a stress-controlled rheometer (HR Nano, Bohlin Instruments) using a 20 mm diameter 2 degree stainless steel cone-plate geometry and a gap size of 50 μm. Before rheological testing, the neurofilament samples are thawed on ice, after which they are quickly pipetted onto the stainless steel bottom plate of the rheometer in the presence of varying concentrations of Mg2+. We utilize a solvent trap to prevent our networks from drying. To measure the linear viscoelastic moduli, we apply an oscillatory stress of the form σ(t) = A sin(ωt), where A is the amplitude of the stress and ω is the frequency. The resulting strain is of the form γ(t) = B sin(ωt + φ) and yields the storage modulus G'(ω) = (A/B) cos φ and the loss modulus G''(ω) = (A/B) sin φ. To determine the frequency dependence of the linear moduli, G'(ω) and G''(ω) are sampled over a range of frequencies from 0.006–25 rad/s. In addition, we probe the stress dependence of the network response by measuring G'(ω) and G''(ω) at a single frequency, varying the amplitude of the oscillatory stress. To probe nonlinear behavior, we utilize a differential measurement, an effective probe of the tangent elastic modulus, which for a viscoelastic solid such as neurofilaments provides consistent nonlinear measurements of elasticity in comparison to other nonlinear methods [17-19]. A small oscillatory stress is superimposed on a steady pre-stress, σ, resulting in a total stress of the form σ(t) = σ + |δσ| sin(ωt). The resultant strain is γ(t) = γ + |δγ| sin(ωt + φ), yielding a differential elastic modulus K'(ω) = (|δσ|/|δγ|) cos φ and a differential viscous modulus K''(ω) = (|δσ|/|δγ|) sin φ [2].
#### Scaling Parameters
To compare the experiments with theory, we collapse the differential measurements onto a single master curve by scaling the stiffness K’ and stress σ by two free parameters for each data set. According to theory, the stiffness versus stress should have a single, universal form apart from these two scale factors. We determine the scale factors by cubic-spline fitting the data sets to piecewise polynomials; these polynomials are then scaled onto the predicted stiffening curve using a least squares regression.
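A simplified sketch of such a collapse step is given below (hypothetical code, not the authors' implementation; the two-branch stand-in for the master curve and the initial guesses are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

def master(x):
    # Stand-in for the predicted stiffening curve:
    # constant K/G0 = 1 below the critical stress, the 3/2 power law above it.
    return np.where(x < 1.0, 1.0, x ** 1.5)

def scale_factors(sigma, K):
    """Fit (G0, sigma_c) so that K/G0 vs sigma/sigma_c falls on master()."""
    def resid(p):
        log_g0, log_sc = p
        return np.log(K) - log_g0 - np.log(master(sigma / np.exp(log_sc)))
    guess = [np.log(K[0]), np.log(np.median(sigma))]
    fit = least_squares(resid, x0=guess)
    return np.exp(fit.x)              # (G0, sigma_c) for this data set

# Example with synthetic K'(sigma) data for one Mg2+/NF concentration:
sigma = np.logspace(-2, 1, 40)
K = 0.8 * master(sigma / 0.2)         # "true" G0 = 0.8 Pa, sigma_c = 0.2 Pa
print(scale_factors(sigma, K))        # recovers roughly [0.8, 0.2]
```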
## Results and Discussion
To quantify the mechanical properties of neurofilaments, we probe the linear viscoelastic moduli of the network during gelation, which takes approximately one hour; we characterize this by continuously measuring the linear viscoelastic moduli at a single frequency, ω = 0.6 rad/s. Gelation of these networks is initiated by the addition of millimolar amounts of Mg2+ and during this process we find that the linear viscoelastic moduli increase rapidly before reaching a plateau value. We measure the frequency dependence of the linear viscoelastic moduli over a range of neurofilament and Mg2+ concentrations. To ensure that we are probing the linear response, we maintain a maximum applied stress amplitude below 0.01 Pa, corresponding to strains less than approximately 5%; we find that the linear moduli are frequency independent for all tested frequencies, 0.006–25 rad/s. Additionally, neurofilament networks behave as a viscoelastic solid for all ranges of Mg2+ concentrations tested and the linear storage modulus is always at least an order of magnitude greater than the linear loss modulus, as shown in Fig. 1. This is indicative of a cross-linked gel and allows us to define a plateau elastic modulus G0 [20].
The elasticity of neurofilament networks is highly nonlinear; above critical strains γc of 30–70%, the networks show stiffening up to strains of 300% [21], as shown in Fig. 2. This marked strain-stiffening occurs for a wide variety of Mg2+ and neurofilament concentrations. In addition, by varying the neurofilament concentration cNF and the Mg2+ concentration cMg, we can finely tune the linear storage modulus G0 over a wide range of values, as seen in Fig. 3. The strong dependence of G0 on Mg2+ concentration is reminiscent of actin networks cross-linked with incompliant cross-linkers such as scruin [2, 22, 23]; this suggests that in the case of neurofilaments, Mg2+ is effectively acting as a cross-linker leading to the formation of a viscoelastic network. Thus, the neurofilaments are cross-linked ionically on length scales comparable to their persistence length; hence, they should behave as semiflexible biopolymer networks. We therefore hypothesize that the network elasticity is due to the stretching out of thermal fluctuations. These thermally driven transverse fluctuations reduce neurofilament extension, resulting in an entropic spring. To consider the entropic effects we can model the Mg2+-cross-linked network as a collection of thermally fluctuating semiflexible segments of length lc, where lc is the average distance between Mg2+ cross-links. A convincing test of the hypothesis of entropic elasticity is the nonlinear behavior of the network. When the thermal fluctuations are pulled out by increasing strain, the elastic modulus of the network exhibits a pronounced increase.
To probe this nonlinear elasticity of neurofilament networks, we measure the differential or tangent elastic modulus K'(σ) at a constant frequency ω = 0.6 rad/s for a variety of neurofilament and Mg2+ concentrations. If the network elasticity is indeed entropic in origin, this can provide a natural explanation for the nonlinear behavior in terms of the nonlinear elastic force-extension response of individual filaments that deform affinely. Here, the force required to extend a single filament diverges as the length ℓ approaches the full extension lc, since f ∝ (lc − ℓ)^-2 [24-26]. Provided the network deformation is affine, its macroscopic shear stress is primarily due to the stretching and compression of the individual elements of the network. The expected divergence of the single-filament tension leads to a scaling of dσ/dγ ∝ σ^(3/2); we therefore expect a scaling of network stiffness with stress of the form K'(σ) ~ σ^(3/2) in the highly nonlinear regime [2]. Indeed, ionically cross-linked neurofilament networks show remarkable consistency with this affine thermal model for a wide range of neurofilament and cross-link concentrations, as shown in Fig. 4. This consistency provides convincing evidence for the entropic nature of the network's nonlinear elasticity [2, 25].
The affine thermal model also suggests that the functional form of the data should be identical for all values of cMg and cNF. To test this, we scale all the data sets for K'(σ) onto a single master curve. This is accomplished by scaling the modulus by a factor G' and the stress by a factor σ'. Consistent with the theoretical prediction, all the data from various neurofilament and Mg2+ concentrations can indeed be scaled onto a universal curve, as shown in Fig. 5. The scale factor for the modulus is the linear shear modulus, G' ≈ G0, while the scale factor for the stress is a measure of the critical stress σc at which the network begins to stiffen. This provides additional evidence that the nonlinear elasticity of the Mg2+-cross-linked neurofilament networks is due to the entropy associated with single-filament stretching.
To explore the generality of this ionic cross-linking behavior, we use other divalent ions including Ca2+ and Zn2+. We find that the effects of both of these ions are nearly identical to those of Mg2+; they also cross-link neurofilament networks into weak elastic gels. This lack of dependence on the specific ionic cross-link lends evidence that the interaction between filaments and ions is electrostatic in nature. This electrostatic interaction would imply that the various ions are acting as salt-bridges, thereby cross-linking filaments into low energy conformations.
The ability to scale all data sets of K'(σ) onto a single universal curve also provides a means to convincingly confirm that the linear elasticity is entropic in origin. To accomplish this, we derive an expression that relates the two scaling parameters to each other. For small extensions δl of the entropic spring, the force required can be derived from the wormlike chain model, giving f = (90κ^2/kBT lc^4) δl. Assuming an affine deformation, whereby the macroscopic sample strain can be translated into local microscopic deformations, and accounting for an isotropic distribution of filaments, the full expression for the linear elastic modulus of the network is given by
G0 = 6 ρ kBT lp^2 / lc^3,  (1)
where κ = kBT lp is the bending rigidity of neurofilaments, kBT is the thermal energy, and ρ is the filament-length density [2, 25, 27, 28]. The density ρ is also proportional to the mass density cNF, and is related to the mesh size ζ of the network by ζ ~ ρ^(-1/2) [29]. Furthermore, the model predicts a characteristic filament tension proportional to κ/lc^2, and a characteristic stress
σc ≈ ρ kBT lp / lc^2  (2)
[2, 22, 25]. Thus, if the network's linear elasticity is dominated by entropy, we expect the scaling cNF^(-1/2) G0 ~ σc^(3/2), where the pre-factor should depend only on kBT and lp; although the pre-factor will differ for different types of filaments, it should be the same for different networks composed of the same filament type and at the same temperature, such as ours. Thus, plotting cNF^(-1/2) G0 as a function of σc for different neurofilament networks at the same temperature should result in collapse of the data onto a single curve characterized by a 3/2 power law; this even includes systems with different divalent ions or different ionic concentrations. For a variety of divalent ions, we find that cNF^(-1/2) G0 ~ σc^z, where z = 1.54 ± 0.14, in excellent agreement with this model, as shown in Fig. 6. It is essential to note that the 3/2 exponent found here is not a direct consequence of the 3/2 exponent obtained in Fig. 5, which characterizes the highly nonlinear regime. Instead, the plot of cNF^(-1/2) G0 as a function of σc probes the underlying mechanism and extent of the linear elastic regime.
For a fixed ratio of cross-links R = cMg/cNF, we expect cross-linking to occur on the scale of the entanglement length, yielding lc ∝ cNF^(-2/5) [25, 27, 30]. Thus, we expect the linear storage modulus to scale with neurofilament concentration as G0 ∝ cNF^(11/5) [25]. For R = 1000, we find an approximate scaling of G0 ∝ cNF^(5/2), consistent with the predicted power law, as shown in the inset of Fig. 6. Interestingly, the stronger concentration dependence of G0 may be a consequence of the dense cross-linking that we observe. Specifically, for densely cross-linked networks, corresponding to a minimum lc on the order of the typical spacing between filaments as we observe here, the model in Eq. (1) predicts G0 ∝ cNF^(5/2) [25]. The agreement with the affine thermal model in both the linear and nonlinear regimes confirms the existence of an ionically cross-linked neurofilament gel whose elasticity is due to the pulling out of thermal fluctuations.
The ability of the affine thermal model to explain the elasticity of the neurofilament network also suggests that we should be able to quantitatively extract network parameters from the bulk rheology. The model predicts that
lp = ρ kBT G0^2 / (36 σc^3)  (3)
and
lc = ρ kBT G0 / (6 σc^2),  (4)
where ρ ≈ 2.1×10^13 m^-2 for neurofilament networks at a concentration of 1 mg/mL. This yields a persistence length lp ≈ 0.2 µm, which is in excellent agreement with previous measurements [31]. In addition, we find that lc ≈ 0.3 µm, which is close to the theoretical mesh size ζ ≈ 0.26 μm; surprisingly, this is far below the mesh size of 4 µm inferred from tracer particle motion [1]. Such particle tracking only provides an indirect measure: in weakly cross-linked networks, for instance, even particles that are larger than the average inter-filament spacing will tend to diffuse slowly.
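As a numeric sanity check of Eqs. (3) and (4) as written above, the following sketch uses illustrative moduli chosen to be consistent with the quoted persistence length (the values of G0 and σc below are assumptions, not the measured data):

```python
kT = 4.1e-21               # J, thermal energy at room temperature
rho = 2.1e13               # m^-2, filament-length density at 1 mg/mL (from the text)
G0, sigma_c = 0.77, 0.19   # Pa; assumed illustrative values

l_c = rho * kT * G0 / (6 * sigma_c**2)       # Eq. (4)
l_p = rho * kT * G0**2 / (36 * sigma_c**3)   # Eq. (3)
print(l_c, l_p)            # ~3.1e-07 m and ~2.1e-07 m, i.e. ~0.3 um and ~0.2 um
```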
To further elucidate the cross-linking behavior of Mg2+, we explore the dependence of lc on both cMg and cNF, based on Eqs. (3)-(4). Based on the form of G0 and σc, we expect that lc ∝ σc/G0. Assuming that Mg2+ is acting as the cross-linker and that lc is also the typical distance between binary collisions of filament chains, we would expect that lc ≈ le, where le is the entanglement length. Thus, for a given concentration of neurofilaments
lc ∝ cMg^(-X)  (5)
[25]. This yields
lc ∝ cNF^(-2/5) cMg^(-X),  (6)
where X is the exponent of the Mg2+ concentration. Naively, we would expect that X ≈ 1, which would imply that doubling the concentration of Mg2+ would halve the average distance between cross-links. Empirically we find a much weaker dependence on cMg, with X substantially smaller than 1. This weaker dependence suggests that mM concentrations of Mg2+ actually saturate our networks. This is consistent with a calculation of the percentage of Mg2+ ions which actually act as cross-links. The number of cross-linking ions per cubic meter is of order ρ/lc ≈ 7×10^19 m^-3; the number density of ions, N, in a standard 5 mM Mg2+ concentration is N ≈ 3×10^24 m^-3. Thus, there is a large excess of Mg2+ ions available to act as cross-linkers; this may account for the weak cross-link dependence. A similarly weak dependence has been seen previously with actin networks in the presence of the molecular motor heavy meromyosin, where X was found to be 0.4 and thus le ∝ cA^(-2/5) cHMM^(-0.4), where cA is the actin concentration and cHMM is the heavy meromyosin concentration [32]. Utilizing our empirical power law for cMg, we are able to collapse the curves such that lc cMg^X ∝ cNF^(-0.4), which is in excellent agreement with the predicted exponent of 2/5, as shown in Fig. 7. The fact that the cross-linking distance lc scales directly with cMg further confirms the role of Mg2+ as the effective ionic cross-linker of the neurofilament networks. Thus, our findings demonstrate both the entropic origin of the neurofilament network's elasticity and the role of Mg2+ as an effective ionic cross-linker.
## Conclusion
We measure the linear and nonlinear viscoelastic properties of cross-linked neurofilament solutions over a wide range of Mg2+ and neurofilament concentrations. Neurofilaments are interesting intermediate filament networks whose nonlinear elasticity has not been studied systematically. We show that the neurofilament networks form densely cross-linked gels, whose elasticity can be well understood within an affine entropic framework. We provide direct quantitative calculations of lp and lc from bulk rheology using this model. Furthermore, our data provide evidence that Mg2+ acts as the effective ionic cross-linker in the neurofilament networks. The weaker than expected dependence that we observe suggests that Mg2+ may be near saturation in our networks. Future experimental work with other multivalent ions is required to better understand the electrostatic interaction between filaments and cross-links; this would lead to a better microscopic understanding of the effects of electrostatic interactions in the cross-linking of neurofilament networks. Moreover, the effect of divalent ions on the cross-linking of networks of other intermediate filaments would also be very interesting to explore.
## Acknowledgments
This work was supported in part by the NSF (DMR-0602684 and CTS-0505929), the Harvard MRSEC (DMR-0213805), and the Stichting voor Fundamenteel Onderzoek der Materie (FOM/NWO).
## References
1. S. Rammensee, P. A. Janmey, and A. R. Bausch, Eur. Biophys. J. Biophy. 36, 661 (2007).
2. M. L. Gardel et al., Science 304, 1301 (2004).
3. B. Hinner et al., Phys. Rev. Lett. 81, 2614 (1998).
4. R. Tharmann, M. Claessens, and A. R. Bausch, Biophys. J. 90, 2622 (2006).
5. C. Storm et al., Nature 435, 191 (2005).
6. J. Y. Xu et al., Biophys. J. 74, 2731 (1998).
7. J. Kas et al., Biophys. J. 70, 609 (1996).
8. P. C. Wong et al., J. Cell Biol. 130, 1413 (1995).
9. J. F. Leterrier et al., J. Biol. Chem. 271, 15687 (1996).
10. S. Kumar, and J. H. Hoh, Biochem. Biophy. Res. Co. 324, 489 (2004).
11. G. C. L. Wong, Curr. Opin. Colloid In. 11, 310 (2006).
12. O. I. Wagner et al., Exp. Cell Res. 313, 2228 (2007).
13. L. Kreplak et al., J. Mol. Biol. 354, 569 (2005).
14. S. Kumar et al., Biophys. J. 82, 2360 (2002).
15. A. Delacourte et al., Biochem. J. 191, 543 (1980).
16. J. F. Leterrier, and J. Eyer, Biochem. J. 245, 93 (1987).
17. N. Y. Yao, R. Larsen, and D. A. Weitz, J. Rheol. 52, 13 (2008).
18. C. Baravian, G. Benbelkacem, and F. Caton, Rheol. Acta 46, 577 (2007).
19. C. Baravian, and D. Quemada, Rheol. Acta 37, 223 (1998).
20. M. Rubinstein and R. Colby, Polymer Physics (Oxford University Press, Oxford, 2004).
21. D. A. Weitz, and P. A. Janmey, P. Natl. Acad. Sci. USA 105, 1105 (2008).
22. M. L. Gardel et al., Phys. Rev. Lett. 93 (2004).
23. A. R. Bausch, and K. Kroy, Nat. Phys. 2, 231 (2006).
24. C. Bustamante et al., Science 265, 1599 (1994).
25. F. C. Mackintosh, J. Kas, and P. A. Janmey, Phys. Rev. Lett. 75, 4425 (1995).
26. M. Fixman, and J. Kovac, J. Chem. Phys. 58, 1564 (1973).
27. A. N. Semenov, J. Chem. Soc. Faraday Trans. 2 82, 317 (1986).
28. F. Gittes, and F. C. MacKintosh, Phys. Rev. E 58, R1241 (1998).
29. C. F. Schmidt et al., Macromolecules 22, 3638 (1989).
30. T. Odijk, Macromolecules 16, 1340 (1983).
31. Z. Dogic et al., Phys. Rev. Lett. 92 (2004).
32. R. Tharmann, M. Claessens, and A. R. Bausch, Phys. Rev. Lett. 98 (2007).
https://www.physicsforums.com/threads/determine-the-base-unit-for-the-magnetic-flux.294289/ | # Determine the base unit for the magnetic flux
1. Feb 22, 2009
### aaron86
1. The problem statement, all variables and given/known data
The e.m.f E of a battery is given by $$E = \frac{P}{I}$$ where P is the power supplied from the battery when current I flows through it. An e.m.f. $${E_c}$$ can also be induced in a coil when the magnetic flux, $$\Phi$$, associated with it changes with time, t, as expressed by $${E_c} = \frac{{d\Phi }}{{dt}}$$. Determine the base unit for the magnetic flux.
Answer is $$kg{m^2}{A^{ - 1}}{s^{ - 2}}$$
2. Relevant equations
$$E = \frac{P}{I} = \frac{{kg{m^2}{s^{ - 2}}}}{{A \cdot s}}$$
3. The attempt at a solution
I tried to integrate $$E = \frac{P}{I} = \frac{{kg{m^2}{s^{ - 2}}}}{{A \cdot s}}$$ but was stuck at getting the answer. Please help
Thanks!
2. Feb 22, 2009
### Dadface
There is no need to integrate; just rearrange your equation to make phi the subject, then insert the base units and tidy it up.
3. Feb 22, 2009
### rl.bhat
Power = work/time = kg*m^2*s^-3
flux = P*t/I, with t in seconds.
Now find the final unit.
4. Feb 22, 2009
### Dadface
Flux = Pt/I. You have the right units for P; t is in s and I is in A.
5. Feb 22, 2009
### aaron86
Thanks for the replies; however, aren't we supposed to extract the information that is provided by the question?
How do you guys deduce the answer from $${E_c} = \frac{{d\Phi }}{{dt}}$$ ?
6. Feb 22, 2009
### Dadface
Phi = Et (dimensionally).
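For later readers, a quick sanity check of this unit bookkeeping in Python (a sketch with hand-rolled dimension maps rather than any units library):

```python
from collections import Counter

def mul(a, b):
    """Multiply two dimension maps (base unit -> exponent)."""
    out = Counter(a)
    out.update(b)
    return {u: e for u, e in out.items() if e}

def inv(a):
    """Invert a dimension map (reciprocal of the quantity)."""
    return {u: -e for u, e in a.items()}

power = {"kg": 1, "m": 2, "s": -3}   # P in base units
current = {"A": 1}                   # I
time = {"s": 1}                      # t

# Phi = P * t / I
flux = mul(mul(power, time), inv(current))
print(flux)   # {'kg': 1, 'm': 2, 's': -2, 'A': -1}, i.e. kg m^2 A^-1 s^-2
```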
https://infoscience.epfl.ch/record/63342 | Infoscience
Journal article
# A theoretical study of the inclusion of dispersion in boundary conditions and transport equations for zero- order kinetics
The transport of a solute in a soil column is considered for zero-order kinetics. The visible displacement of the solute is affected by dispersion. The dispersion coefficient enters both the transport equation and the boundary condition. It is shown that the latter is the most important effect and a simple equation is proposed to describe solute transport, which takes into account the influence of dispersion in the boundary condition, but not in the transport equation. Validity and limitations of this equation are discussed in some detail by comparison with the complex but exact solution for zero-order kinetics.
https://artofproblemsolving.com/wiki/index.php/2009_AMC_12B_Problems/Problem_22 | # 2009 AMC 12B Problems/Problem 22
## Problem
Parallelogram $ABCD$ has area $1{,}000{,}000$. Vertex $A$ is at $(0,0)$ and all other vertices are in the first quadrant. Vertices $B$ and $D$ are lattice points on the lines $y = x$ and $y = kx$ for some integer $k > 1$, respectively. How many such parallelograms are there? (A lattice point is any point whose coordinates are both integers.)
## Solution
### Solution 1
The area of any parallelogram $ABCD$ can be computed as the size of the vector product of $\overrightarrow{AB}$ and $\overrightarrow{AD}$.
In our setting where $A = (0,0)$, $B = (s,s)$, and $D = (t,kt)$ this is simply $s\cdot kt - s\cdot t = (k-1)st$.
In other words, we need to count the triples of positive integers $(k-1, s, t)$ where $k > 1$ and $(k-1)\cdot s\cdot t = 1{,}000{,}000 = 2^6 5^6$.
These can be counted as follows: We have $6$ identical red balls (representing powers of $2$), $6$ blue balls (representing powers of $5$), and three labeled urns (representing the factors $k-1$, $s$, and $t$). The red balls can be distributed in $\binom{8}{2} = 28$ ways, and for each of these ways, the blue balls can then also be distributed in $28$ ways. (See Distinguishability for a more detailed explanation.)
Thus there are exactly $28\cdot 28 = 784$ ways to break $1{,}000{,}000$ into three positive integer factors, and for each of them we get a single parallelogram. Hence the number of valid parallelograms is $784$.
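The count can also be verified by brute force (a short Python sketch distributing the six 2s and six 5s over the three ordered factors):

```python
from math import comb

# Ordered triples (k-1, s, t) with (k-1)*s*t = 10**6 = 2**6 * 5**6:
# choose the 2- and 5-exponents of the first two factors; the third is forced.
count = sum(1
            for a1 in range(7) for a2 in range(7 - a1)    # exponents of 2
            for b1 in range(7) for b2 in range(7 - b1))   # exponents of 5
assert count == comb(8, 2) ** 2 == 784
print(count)
```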
### Solution 2
Without the vector product the area of $ABCD$ can be computed for example as follows: If $B = (s,s)$ and $D = (t,kt)$, then clearly $C = (s+t, s+kt)$. Let $B'$, $C'$, and $D'$ be the orthogonal projections of $B$, $C$, and $D$ onto the $x$ axis. Let $[P]$ denote the area of the polygon $P$. We can then compute $[ABCD]$ as the area below the path $A \to B \to C$ minus the area below the path $A \to D \to C$; each piece is a triangle or a trapezoid with two vertices on the $x$ axis, and the sum simplifies to $[ABCD] = [AB'B] + [B'C'CB] - [AD'D] - [D'C'CD] = (k-1)st$.
The remainder of the solution is the same as the above.
### Solution 3
We know that $A$ is $(0,0)$. Since $B$ is on the line $y = x$, let it be represented by the point $(a,a)$. Similarly, let $D$ be $(b,kb)$. Since this is a parallelogram, sides $AB$ and $DC$ are parallel. Therefore, the distance and relative position of $C$ to $D$ is equivalent to that of $B$ to $A$ (if we take the translation that maps $A$ to $D$ and apply it to $B$, we will get the coordinates of $C$). This yields $C = (a+b, a+kb)$. Using the Shoelace Theorem we get
$[ABCD] = \frac{1}{2}\left|2abk - 2ab\right| = ab(k-1).$
Since $[ABCD] = 1{,}000{,}000$, the equation becomes $ab(k-1) = 1{,}000{,}000$.
Since $k$ must be a positive integer greater than $1$, we know $k-1$ will be a positive integer. We also know that $ab$ is an integer, so $k-1$ must be a factor of $1{,}000{,}000$. Therefore $ab$ will also be a factor of $1{,}000{,}000$.
Notice that $1{,}000{,}000 = 2^6 5^6$.
Let $m$ and $n$ be integers on the interval $[0,6]$ such that $k-1 = 2^m 5^n$; then $ab = 2^{6-m} 5^{6-n}$.
Let $a = 2^p 5^q$ be such that $p$ and $q$ are integers with $0 \le p \le 6-m$ and $0 \le q \le 6-n$; the co-factor $b = ab/a$ is then determined by $a$.
For a pair $(m,n)$, there are $7-m$ possibilities for $p$ and $7-n$ possibilities for $q$, for a total of $(7-m)(7-n)$ possibilities. So we want
$\sum_{m=0}^{6}\sum_{n=0}^{6}(7-m)(7-n).$
Notice that if we "fix" the value of $m$, at, say $0$, then run through all of the values of $n$, change the value of $m$ to $1$, and run through all of the values of $n$ again, and so on until we exhaust all combinations of $(m,n)$, we get something like this:
$7\cdot 7 + 7\cdot 6 + \cdots + 7\cdot 1 + 6\cdot 7 + \cdots + 1\cdot 1,$
which can be rewritten
$\left(\sum_{m=0}^{6}(7-m)\right)\left(\sum_{n=0}^{6}(7-n)\right) = (7+6+5+4+3+2+1)^2 = 28^2 = 784.$
So there are $784$ possible sets of coordinates for $B$, $C$, and $D$.
https://bytefreaks.net/applications/latex-beamer-variable-to-get-current-section-name | # Latex / Beamer: Variable to get current Section name
Recently, we were trying to write a table of contents for a specific section in a LaTeX/Beamer presentation. Being too lazy to update the frame title each time the section name might change, we needed an automation to do that for us!
Luckily, the following variable does the trick: \secname gets the current section name, and \subsecname gets the subsection name for those who might need it.
\begin{frame}
\frametitle{Outline for \secname}
\tableofcontents[currentsection, hideothersubsections, sectionstyle=show/show]
\end{frame}
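For reference, here is a minimal compilable example (a sketch; the section title is a placeholder, and the table of contents may need two compilation passes to fill in):
\documentclass{beamer}
\begin{document}
\section{Introduction}
\begin{frame}
\frametitle{Outline for \secname}
\tableofcontents[currentsection, hideothersubsections, sectionstyle=show/show]
\end{frame}
\end{document}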
https://enhancedodds.co.uk/hfnjm1k/pl4khw.php?tag=658915-properties-of-dft | Let one consider an electron in a hydrogen-like ion obeying the relativistic Dirac equation. In molecular calculations, however, more sophisticated functionals are needed, and a huge variety of exchange–correlation functionals have been developed for chemical applications. As a special case of general Fourier transform, the discrete time transform shares all properties (and their proofs) of the Fourier transform discussed above, except now some of these properties may take different forms. The first HK theorem demonstrates that the ground-state properties of a many-electron system are uniquely determined by an electron density that depends on only three spatial coordinates. you will find that the DFT very much cares about periodicity. Classical density functional theory uses a similar formalism to calculate properties of non-uniform classical fluids. Matlab Tutorial - Discrete Fourier Transform (DFT) bogotobogo.com site search: DFT "FFT algorithms are so commonly employed to compute DFTs that the term 'FFT' is often used to mean 'DFT' in colloquial settings. Classical DFT is supported by standard software packages, and specific software is currently under development. Electrical Engineering (EE) Properties of DFT Electrical Engineering (EE) Notes | EduRev Summary and Exercise are very important for In work that later won them the Nobel prize in chemistry, the HK theorem was further developed by Walter Kohn and Lu Jeu Sham to produce Kohn–Sham DFT (KS DFT). The time and frequency domains are alternative ways of representing signals. DFT with N = 10 and zero padding to 512 points. Theorem 1. Based on that idea, modern pseudo-potentials are obtained inverting the free-atom Schrödinger equation for a given reference electronic configuration and forcing the pseudo-wavefunctions to coincide with the true valence wavefunctions beyond a certain distance rl. 2. n Specifically, DFT computational methods are applied for synthesis-related systems and processing parameters. {\displaystyle \mathrm {d} ^{3}\mathbf {r} } The many-electron Schrödinger equation can be very much simplified if electrons are divided in two groups: valence electrons and inner core electrons. r {\displaystyle \mathbf {r} } {\displaystyle n_{0}} Periodicity and consequently the ground-state expectation value of an observable Ô is also a functional of n0: In particular, the ground-state energy is a functional of n0: where the contribution of the external potential Electrical Engineering (EE). {\displaystyle p_{\text{F}}} Ψ 3. In other words, Ψ is a unique functional of n0,[13]. If there are several degenerate or close to degenerate eigenstates at the Fermi level, it is possible to get convergence problems, since very small perturbations may change the electron occupation. In the following, we always assume and . ⟨ It is determined as a function that optimizes the thermodynamic potential of the grand canonical ensemble. V The properties of the Fourier transform are summarized below. r Looking back onto the definition of the functional F, we clearly see that the functional produces energy of the system for appropriate density, because the first term amounts to zero for such density and the second one delivers the energy value. 
The foundation of the product is the fast Fourier transform (FFT), a method for computing the DFT … Instead, based on what we have learned, some important properties of the DFT are summarized in Table below with an expectation that the reader can derive themselves by following a similar methodology of plugging in the time domain expression in DFT definition. Despite recent improvements, there are still difficulties in using density functional theory to properly describe: intermolecular interactions (of critical importance to understanding chemical reactions), especially van der Waals forces (dispersion); charge transfer excitations; transition states, global potential energy surfaces, dopant interactions and some strongly correlated systems; and in calculations of the band gap and ferromagnetism in semiconductors.
http://mathhelpforum.com/advanced-statistics/208123-regression-showing-xs-simple-linear-model-linearly-independent-l2.html | # Math Help - Regression: Showing Xs in a simple linear model are linearly independent in L2
1. ## Regression: Showing Xs in a simple linear model are linearly independent in L2
Let X ~ exp(1), Y=e-X and consider the simple linear model $Y = \alpha + \beta X + \gamma X^2 + W$, where $E(W)=0=\rho (X,W) = \rho(X^2,W)$.
Demonstrate that 1, X, X2 are linearly independent in L2.
It also gives a hint: exp(1) = G(1) (the gamma distribution with p = 1).
I'm not sure how to show linear independence in L2; I'm not even quite sure what L2 means exactly. Would showing Cov(1,X) = Cov(X,X^2) = Cov(1,X^2) = 0 be enough for linear independence? I'm also not sure how to use the hint.
2. ## Re: Regression: Showing Xs in a simple linear model are linearly independent in L2
Hey chewitard.
Do your notes or textbooks say anything about L^2? Is this the Lebesgue space L^2?
You might want to see if this is the case and check the following:
Lp space - Wikipedia, the free encyclopedia | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9392337203025818, "perplexity": 1464.5532514197962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297172.60/warc/CC-MAIN-20150323172137-00103-ip-10-168-14-71.ec2.internal.warc.gz"} |
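For later readers, a sketch of one standard route (assuming $L^2$ here denotes the space of square-integrable random variables, and that linear independence means $a + bX + cX^2 = 0$ almost surely forces $a = b = c = 0$): the family $\{1, X, X^2\}$ is linearly independent iff its Gram matrix $G_{ij} = E(X^{i+j})$, $i,j \in \{0,1,2\}$, is nonsingular, since $E\{(a + bX + cX^2)^2\} = v^T G v$ for $v = (a,b,c)^T$. This is where the hint enters: for $X$ ~ exp(1) = G(1) the moments are $E(X^n) = n!$, so
$G = \begin{pmatrix} 1 & 1 & 2 \\ 1 & 2 & 6 \\ 2 & 6 & 24 \end{pmatrix}$ and $\det G = 4 \neq 0$,
hence $1$, $X$, $X^2$ are linearly independent in $L^2$.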
http://mathhelpforum.com/algebra/139646-dividing-equations-finding-quotiant.html | # Thread: Dividing equations and finding quotiant.
1. ## Dividing equations and finding quotiant.
i) Find the quotient and the remainder when $3x^3-2x^2+x+7$ is divided by $x^2-2x+5$.
ii) Hence, or otherwise, determine the values of the constants $a$ and $b$ such that, when $3x^3-2x^2+ax+b$ is divided by $x^2-2x+5$, there is no remainder.
I've been looking at this for ages and have forgotten how to do this kind of question.
Thanks
2. Originally Posted by George321
i) Find the quotient and the remainder when $3x^3-2x^2+x+7$ is divided by $x^2-2x+5$.
ii) Hence, or otherwise, determine the values of the constants $a$ and $b$ such that, when $3x^3-2x^2+ax+b$ is divided by $x^2-2x+5$, there is no remainder.
I've been looking at this for ages and have forgotten how to do this kind of question.
Look at the leading term. $x^2$ divides into $3x^3$ 3x times. Now multiply the entire divisor, $x^2- 2x+ 5$ by that and subtract:
$\begin{array}{cccc}3x^3& - 2x^2& + x& + 7 \\ \underline{3x^3}& \underline{- 6x^2}& \underline{+ 15x} & \\ & 4x^2 & -14x & 7\end{array}$
Now, $x^2$ will divide into $4x^2$ 4 times. Multiply $x^2- 2x+ 5$ by 4 and subtract:
$\begin{array}{ccc}4x^2 & -14x & 7 \\\underline{4x^2}& \underline{-8x} & \underline{20}\\ & -6x& - 13\end{array}$.
That is, $x^2- 2x+ 5$ divides into $3x^3- 2x^2+ x+ 7$ 3x+ 4 times with remainder -6x- 13. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8956174850463867, "perplexity": 422.9810856636575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119782.43/warc/CC-MAIN-20170423031159-00554-ip-10-145-167-34.ec2.internal.warc.gz"} |
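For part (ii), one way to finish, building on the division above: repeating the same two steps with general coefficients, dividing $3x^3- 2x^2+ ax+ b$ by $x^2- 2x+ 5$ gives the same quotient $3x+ 4$ and the remainder $(a- 7)x+ (b- 20)$ (check: $a= 1$, $b= 7$ recovers $-6x- 13$). For the remainder to vanish we need $a= 7$ and $b= 20$, i.e. $3x^3- 2x^2+ 7x+ 20= (x^2- 2x+ 5)(3x+ 4)$.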
http://stats.stackexchange.com/questions/28229/variance-of-the-product-of-a-random-matrix-and-a-random-vector/28231 | Variance of the product of a random matrix and a random vector
If $X$ and $Y$ are independent random variables, then the variance of the product $XY$ is given by
$\mathbb{V}\left(XY\right)=\left\{ \mathbb{E}\left(X\right)\right\} ^{2}\mathbb{V}\left(Y\right)+\left\{ \mathbb{E}\left(Y\right)\right\} ^{2}\mathbb{V}\left(X\right)+\mathbb{V}\left(X\right)\mathbb{V}\left(Y\right)$
If $\mathbf{X}$ and $\mathbf{y}$ are independent matrix and vector of $m\times m$ and $m\times1$ dimension respectively, then what would be the variance of the product $\mathbf{X}\mathbf{y}$?
My Attempt
$\mathbb{V}\left(\mathbf{X}\mathbf{y}\right)=\mathbb{E}\left(\mathbf{X}\right)\mathbb{V}\left(\mathbf{y}\right)\left\{ \mathbb{E}\left(\mathbf{X}\right)\right\} ^{\prime}+\left\{ \mathbb{E}\left(\mathbf{y}\right)\otimes\mathbf{I}_{m}\right\} ^{\prime}\mathbb{V}\left\{ \textrm{vec}\left(\mathbf{X}\right)\right\} \left\{ \mathbb{E}\left(\mathbf{y}\right)\otimes\mathbf{I}_{m}\right\} +\mathbb{V}\left\{ \textrm{vec}\left(\mathbf{X}\right)\right\} \left\{ \mathbb{V}\left(\mathbf{y}\right)\otimes\mathbf{I}_{m}\right\}$
I know this is not right; at least the last term is wrong. I'd highly appreciate it if you could give me the right identity or point out a reference. Thanks in advance for your help and time.
-
Are you interested in the full covariance matrix or just the variances of the elements of the resultant vector (i.e., the diagonal of the covariance matrix)? – jbowman May 11 '12 at 0:26
Interested in full covariance matrix. – MYaseen208 May 11 '12 at 0:27
Thanks @jbowman for your notice. I'm interested in the full covariance matrix. Looking forward to your answer. Thanks – MYaseen208 May 11 '12 at 0:42
What's wrong with the answer you received on Stats.SE? You seem to have not accepted that answer, and are now opening a bounty on this one. It would help if you edited the question to specify what more you want here. – Willie Wong May 14 '12 at 7:51
I'll assume that the elements of $\mathbf{y}$ are i.i.d. and likewise for the elements of $\mathbf{X}$. This is important, though, so be forewarned!
1. Each element of $\mathbf{Xy}$ is the sum of $m$ products of i.i.d. random variates, so the diagonal elements of the covariance matrix equal $m \mathbb{V}(x_{ij}y_j)$, which is the variance you have above in your first equation.
2. The off-diagonal elements all equal zero, as the rows of $\mathbf{X}$ are independent. To see this, without loss of generality assume $\mathbb{E}x_{ij} = \mathbb{E}y_i = 0 \space \forall\thinspace i,j$. Define $\mathbf{x}_i$ as the $i^{\text{th}}$ row of $\mathbf{X}$, transposed to be a column vector. Then:
$\text{Cov}(\mathbf{x_i^\text{T}y},\mathbf{x_j^\text{T}y}) = \mathbb{E}(\mathbf{x_i^\text{T}y})^\text{T}(\mathbf{x_j^\text{T}y}) = \mathbb{E}\mathbf{y}^{\text{T}}\mathbf{x}_i\mathbf{x}_j^\text{T}\mathbf{y}=\mathbb{E}_y\mathbb{E}_x \mathbf{y}^{\text{T}}\mathbf{x}_i\mathbf{x}_j^\text{T}\mathbf{y}$
Note that $\mathbf{x}_i\mathbf{x}_j^\text{T}$ is a matrix, the $(p,q)^\text{th}$ element of which equals $x_{ip}x_{jq}$. When $i \ne j$, the expectation with respect to $x$ of $\mathbf{y}^{\text{T}}\mathbf{x}_i\mathbf{x}_j^\text{T}\mathbf{y}$ equals 0 for any $\mathbf{y}$, as each element is just the expectation of the product of two independent r.v.s with mean 0 times $y_py_q$. Consequently, the entire expectation equals 0.
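A quick Monte Carlo check of both claims (a sketch with standard-normal entries, so all means are 0 and all variances are 1; the sample covariance of $\mathbf{Xy}$ should then be close to $m I$):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 200_000
X = rng.standard_normal((n, m, m))   # i.i.d. entries, mean 0, variance 1
y = rng.standard_normal((n, m, 1))
v = (X @ y)[:, :, 0]                 # each row is one draw of X y
print(np.round(np.cov(v.T), 2))      # should be close to m * I = 3 * I
```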
-
Thanks @jbowman for your answer. I'll check it in more detail later. Would you mind to give any reference. Thanks – MYaseen208 May 11 '12 at 2:02
Sorry, this is just algebra. I'm sure I can dig one up in time, though. – jbowman May 11 '12 at 13:35
(+1), nice answer. @MYaseen208, you can find the identities used here in the matrix cookbook, chapter 6 - orion.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf – Macro May 11 '12 at 14:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9389415979385376, "perplexity": 275.0543108181427}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163053664/warc/CC-MAIN-20131204131733-00097-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://proofwiki.org/wiki/Infinite_Subset_of_Finite_Complement_Space_Intersects_Open_Sets | # Infinite Subset of Finite Complement Space Intersects Open Sets
## Theorem
Let $T = \struct {S, \tau}$ be a finite complement topology on an infinite set $S$.
Let $H \subseteq S$ be an infinite subset of $S$.
Then the intersection of $H$ with any non-empty open set of $T$ is infinite.
## Proof
Let $U \in \tau$ be any non-empty open set of $T$.
Then $\relcomp S U$ is finite.
We have that:
$H = H \cap \paren {U \cup \relcomp S U} = \paren {H \cap U} \cup \paren {H \cap \relcomp S U}$
Aiming for a contradiction, suppose $H \cap U$ is finite.
Since $H \cap \relcomp S U \subseteq \relcomp S U$, $H \cap \relcomp S U$ is also finite.
$H = \paren {H \cap U} \cup \paren {H \cap \relcomp S U}$ is the union of two finite sets, and so it is finite.
This contradicts the fact that $H$ is infinite, so $H \cap U$ must be infinite.
$\blacksquare$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9803013801574707, "perplexity": 116.14355722967468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655892516.24/warc/CC-MAIN-20200707111607-20200707141607-00529.warc.gz"} |
https://mathoverflow.net/questions/343961/do-mixing-homeomorphisms-on-continua-have-positive-entropy | # Do mixing homeomorphisms on continua have positive entropy?
I am trying to understand relations between various measures of topological complexity. I have read that expansive homeomorphisms on continua, for example, have positive entropy. But I do not know whether another property called mixing also implies positive entropy.
Let $$X$$ be a connected compact metric space.
Question. If a homeomorphism $$f:X\to X$$ is mixing, then does $$f$$ necessarily have positive entropy?
A homeomorphism $$f:X\to X$$ is mixing if for every pair of non-empty open sets $$U,V$$ of $$X$$, there exists a positive integer $$M$$ such that $$f^m(U)\cap V\neq\varnothing$$ for all $$m\geq M$$. That is, if $$U$$ is open and non-empty, then $$d_H(f^n(U),X)\to 0$$ as $$n\to\infty$$, where $$d_H$$ is the Hausdorff metric.
See https://en.wikipedia.org/wiki/Topological_entropy for the two equivalent definitions of the topological entropy of a map $$f:X\to X$$.
EDIT: It occurs to me that my question was essentially asked in:
Kato, Hisao, Continuum-wise expansive homeomorphisms, Can. J. Math. 45, No. 3, 576-598 (1993). ZBL0797.54047.
Question 6 in that paper is my question with a weaker hypothesis (it is not difficult to show that mixing implies sensitive dependence on initial conditions).
I'm not sure if anyone ever published a counterexample.
• There are plenty of families of zero entropy systems which can be either mixing or non-mixing. What comes ot my mind first is primitive substitution systems. Pisot substitutions are never mixing, but some families of constant length substitutions can be mixing or not. There is a paper by Kenyon, Sadun, Solomyak that goes into more detail in this direction. I am sure there are many others that I'm not as familiar with. Oct 16, 2019 at 20:12
• In particular, Dekking and Keane showed in Mixing Properties of Substitutions that the subshift associated with the substitution $0 \mapsto 001, 1 \mapsto 11100$ is topologically mixing. Of course, all subshifts associated with primitive substitutions are zero entropy. Oct 16, 2019 at 20:21
• @DanRust What are the underlying topological spaces for these systems? Oct 16, 2019 at 20:31
• Well a counter example can be made by considering a unipotent flow on a compact quotient of a semisimple Lie group, for concreteness one can consider the quotient of PSL2 by a uniform lattice and the action of the (horospherical) unipotent subgroup. It is ergodic and actually mixing (Howe-Moore), but one may show the mixing is polynomial, hence the topological entropy is 0.
– Asaf
Oct 17, 2019 at 2:20
Solenoids usually come up in dynamics as two sided extensions of systems, which are themselves usually expanding endomorphisms. It can be shown easily that in such cases, as the fiber is compact, the metric entropy is the same for the two sided and one sided system (think about their Pinsker factors, say; you don't get any new information gained from the far past) and therefore by the variational principle they have the same topological entropies. So I would look elsewhere rather than at solenoids.
– Asaf
Oct 17, 2019 at 2:23
edit
I realized there is a simpler construction that achieves the same (examples of topologically mixing zero-entropy homeomorphisms on $$S^3$$): Instead of the bi-directional flow through cuboids, have a flow from bottom to top on the cylinder $$D^2 \times [0,1]$$, and have it slow down to rate $$0$$ on the boundary. Also, instead of all the $$C_i$$ playing the same role, have a single special cylinder $$C$$ and, to get mixing, thread small cylinders from its top to its bottom, making sure at all times not to introduce periods (by refining open sets similarly as in the construction below). Open sets in the existing gadget will go around these loops and a little part of them eventually ends up on a free area on the top of $$C$$, and you do the threading from there. (And threading through open sets not in the gadget is easy.) The details of making the flow continuous in the limit, and the calculations showing that a slow enough rate in the new cylinders implies zero entropy, are the same (and I still didn't do them).
original
I think there are counterexamples on all path-connected closed manifolds of dimension at least $$3$$, or at least I don't know what special properties I could possibly be using. For concreteness, we can think about $$M = S^3$$. I'll describe an $$\mathbb{R}$$-flow whose time-$$1$$ map will have the desired property. Some figures I drew on the blackboard are also attached ($$U_1$$ and $$V_1$$ in the figure should be $$U_2$$ and $$V_2$$ resp.).
First, let's introduce for every $$\epsilon > 0$$ a flow $$f_\epsilon : \mathbb{R} \times C \to C$$ on the solid block $$C = [0,1]^2 \times [0, R]$$ (think of a large $$R$$). Think of $$[0, R]$$ as being the vertical axis, and we're staring at $$C$$ from the front. On the boundary of $$C$$, there is no movement. Inside $$C$$ pick two vertical lines from top to bottom, say $$A$$ and $$B$$. On the line $$A$$, the dynamics is trivial, i.e. all points are fixed. On the line $$B$$, points are moving upward at some positive rate, which should be considered very slow and parametrized by $$\epsilon$$ (the time-$$1$$ map on $$B$$ should behave roughly like $$x \mapsto x^{1 + \epsilon}$$ does on $$[0,1]$$). On the strip $$S$$ between $$A$$ and $$B$$, the dynamics is also an upward flow, whose rate is interpolated between those of $$B$$ and $$A$$ in a continuous way. Of course near the bottom and top boundaries, the movement has to slow down and stop.
Outside $$S$$, the vertical movement quickly dies off, and turns into horizontal movement parallel to the strip $$S$$, so that if $$A$$ is on the left and $$B$$ on the right, the dynamics moves points say from left to right, so that the closer they are to $$S$$, the slower they go (and very close they also shift up). We want to introduce some horizontal movement immediately after leaving $$S$$, so that all points close to $$S$$ but not on its affine hull will eventually reach the right boundary of $$C$$. (On the affine hull of $$S$$ you have to stop movement altogether when you hit $$S$$.)
In the blackboard photo, see the leftmost figure for a front "perspective" view of $$C$$ and some indications of the vector field on $$S$$, and see the bottommost "top view" figure for indications of the horizontal flow.
The point is now that if you go into (properly inside) $$C$$ from the bottom, somewhere between the bottom points of $$A$$ and $$B$$, then you walk up $$C$$, and you can control how long it takes to reach the top from the bottom of $$C$$. (Of course the bottom and top have no flow because they are on the boundary, so this is indeed only true if you step properly inside $$C$$, but that'll change later when we start embedding copies of this in $$M$$.)
We need some niceness properties from the flow, which we call the splotch properties, because they describe how the dynamics splotches open sets to the boundary of $$C$$. If you take an open set inside $$C$$, then we want that almost all points (all but the ones in the two-dimensional affine hull of $$S$$) will eventually stop moving vertically and start tending towards the right side of $$C$$. We want that the limit of these points on the right boundary contains a relative open set, i.e. as the open set is squeezed to the right side, the splotch you get in the $$\omega$$-limit always contains a square (homeomorphic copy of $$[0,1]^2$$). For the inverse dynamics, we want the same to happen on the left. Assuming far enough from $$S$$ there is no vertical movement whatsoever, this should be more or less automatic, I didn't write any formulas though. Another thing we need that if $$U$$ tends to the splotch $$U^+$$ on the right hand side and we pick a small square $$D$$ inside $$U^+$$, then some small open ball inside $$U$$ has its splotch ($$\omega$$-limit) contained in $$D$$.
Now, having $$f_{\epsilon} : \mathbb{R} \times C \to C$$ with these properties, let's think of $$C$$ as very flexible and as carrying the dynamics of $$f_{\epsilon}$$. So when I stretch $$C$$ around the manifold $$M$$, the conjugate dynamics follows along.
Now, enumerate a sequence of pairs of open sets $$(U_i, V_i)$$ in $$M^2$$ so that for any pair of open sets $$U, V$$ in $$M^2$$ there exists $$i$$ such that $$U_i \subset U$$ and $$V_i \subset V$$. We may assume $$U_i$$ and $$V_i$$ are open balls whose radius tends to $$0$$ very quickly. We build a sequence of flows inductively. To get the first one, called $$g_1$$, take a path from the center of $$U_1$$ to the center of $$V_1$$ and position an elongated $$C$$ called $$C_1$$ along this path so that its bottom is in $$U_1$$ and its top is in $$V_1$$. Now, the $$f_{\epsilon_1}$$-flow in $$C$$ (pick some tiny $$\epsilon_1$$) turns into a flow $$g_1$$ on $$M$$ which in $$C_1$$ uses the flow conjugate to $$f_{\epsilon_1}$$, and fixes all other points of $$M$$. Observe that for any large enough $$m$$, in the time-$$1$$ flow of $$g_1$$ we can get from $$U_1$$ to $$V_1$$ in exactly $$m$$ steps, by picking a suitable position between the lines $$A$$ and $$B$$.
Now, we continue the construction process. We have some $$U_2$$ and $$V_2$$, and want to do the same for them. If they are disjoint from $$C_1$$, then this can be done in exactly the same way, and if they are not entirely inside $$C_1$$ they can be refined to be completely disjoint from $$C_1$$. So consider the case where one or both are inside $$C_1$$; suppose for concreteness that both are inside $$C_1$$. Follow the dynamics forward from $$U_2$$. It travels according to the flow of $$C_1$$ until most of it gets very close to the side of $$C_1$$ that corresponds to the right side of $$C$$, and it tends to some splotch $$U_2^+$$ in the $$\omega$$-limit. Follow it also backward to obtain an $$\alpha$$-limit $$U_2^-$$ on the left side. Then do the same for $$V_2$$ to get $$V_2^+$$ and $$V_2^-$$. Now, it's possible that $$U_2^+$$ and $$V_2^+$$ intersect, then refine these sets to be smaller so that they don't, using the splotch properties. Do the same in the backward direction.
Now, since $$U_2^+$$ contains a square $$D \cong [0,1]^2$$ on the boundary of $$C_1$$, we can glue another copy $$C_2$$ of $$C$$ so that $$D$$ becomes the bottom square of $$C_2$$, and on $$C_2$$ use the flow $$f_{\epsilon_2}$$. Of course, since there is no movement on the boundary of $$C_1$$ nor the boundary of $$C_2$$, the dynamics are not in any way connected. But we can distort the dynamics near the common boundary of $$C_1$$ and $$C_2$$ slightly so that the flow drags points of $$C_1$$ into $$C_2$$, in particular we want that some points are dragged into the $$S$$-strip of $$C_2$$ and start moving upward along $$C_2$$.
Now glue the top of $$C_2$$ to $$V_2^-$$ and distort the flow on the boundary of $$C_1$$ and $$C_2$$ as we did with the bottom. Observe that the distortion can be made arbitrarily small and made to affect an arbitrarily small area, by making the flow along $$C_2$$ arbitrarily slow by decreasing $$\epsilon_2$$ and making $$C_2$$ very thin. Observe that in this new flow $$g_2$$, in the time-$$1$$ map we can still get from $$U_1$$ to $$V_1$$ in $$m$$ steps, for any large enough $$m$$ (as we didn't modify the dynamics on the $$C_1$$-copy of $$S$$), but now we can also get from $$U_2$$ to $$V_2$$ in any large enough number of steps by following the dynamics to the $$S$$-strip of $$C_2$$, and picking a point on the $$S$$-strip with a suitable rate (the speed at the beginning and the end of these orbits cannot be controlled, but we can freely control the length of time in the middle of the orbit where we go through $$C_2$$). Note that in the flow $$g_2$$, the dynamics of every point that is not on (what corresponds to) the affine hull of $$S$$ in $$C_1$$ or $$C_2$$ tends to the boundary of $$C_1 \cup C_2$$. (We made sure $$U_2^+ \cap V_2^+ = \emptyset$$ to ensure no periodic behavior is introduced, and every point has singleton $$\alpha$$ and $$\omega$$-limit sets.)
Now, the idea is to continue by induction, keeping roughly the following characteristic: we have in $$M$$ a finite set of very thin elongated blocks $$C_i$$. Outside the sets $$C_i$$, there is no movement, and on $$C_i$$ points move according to $$f_{\epsilon_i}$$, with $$\epsilon_i$$ tending to $$0$$ very fast, plus some linking behavior at their common boundaries. Every point that is not on (a set corresponding to) the affine hull of a copy of $$S$$ in some $$C_i$$ will eventually tend to the boundary of their union, so in particular every open set will contain an open ball which has such movement and a well-defined connected splotch as its $$\omega$$-limit. To get the next step of the construction, for $$U_{i+1}, V_{i+1}$$ again follow their forward and backward iterates until you hit splotches on some boundaries (the sets may split finitely many times as you travel between the sets $$C_j$$, but just refine them; again make sure $$U_{i+1}^+ \cap V_{i+1}^+ = \emptyset$$ and $$U_{i+1}^- \cap V_{i+1}^- = \emptyset$$). Now drag a very thin copy of $$C_{i+1}$$ between these splotches. Observe that the existing $$C_j$$s don't disconnect $$M$$ (if you made them thin enough), so it is indeed possible to position $$C_{i+1}$$ this way. Finally add a bit of additional flow to connect the dynamics of $$C_{i+1}$$ to wherever you glued them on the boundary of $$\bigcup_{j = 1}^i C_j$$. Observe that no periodic points are introduced at any finite level.
Now, take the pointwise limit of these flows as $$i \rightarrow \infty$$, say $$g$$. This should tend to a flow assuming $$\epsilon_i$$ tends to zero fast enough, and everything is regular enough, and all the added $$C_i$$s are small enough and so on. Just do the math and you'll see (obviously I haven't done it or I'd show it).
On the other hand, if $$\epsilon_i$$ goes to zero very fast, clearly there is no entropy. This is true on finite levels of the construction just because every point has singleton limit sets. To get it for $$g$$, do the calculations, make sure not to introduce too many new partial orbits with respect to any fixed resolution $$\epsilon > 0$$ at finite steps.
• Two important points: 1. I'm not very used to describing flows or geometric constructions in text since I usually only work in one dimension (namely, zero), so this is probably not so easy to read. Hopefully the idea is possible to extract (and hopefully it's ok). 2. I saw this question just before going to bed, and in my dream I got a 20 point bounty for this construction. A man can dream, as they say. Oct 17, 2019 at 7:25
• You may get more than bonus points; see the edit to my question. Possibly no example like this has been published. Oct 19, 2019 at 23:36
• Ok, but two important points 1. Google scholar does not give you a green +10 when you get a citation, so what's the point really? 2. My paper will probably conclude with "I still didn't do the math". Oct 20, 2019 at 6:21
• After some quick googling, I feel skeptical that this is really new, but I'll give it some thought next week when I have better access to the literature. Oct 20, 2019 at 10:14
The simplest counterexample would be the identity map on a one-point space.
• good point, I should specify that I only think about the non-degenerate spaces Oct 17, 2019 at 21:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 215, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9166894555091858, "perplexity": 221.39901695147648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500017.27/warc/CC-MAIN-20230202101933-20230202131933-00603.warc.gz"} |
https://www.physicsforums.com/threads/proof-of-keplers-first-law.706288/ | # Proof Of kepler's first law?
1. Aug 19, 2013
### babbi.mamgain
I wanted to ask how come Kepler says that the orbits on which planets move are elliptical .... just by obvious saying, what we have learnt is that when the velocity is perpendicular to the force the path traced is circular..... even if we say that in reality the actual angle b/w velocity and force is not 90 .... but my question is: what makes a planet go that far from the foci? Try to give an answer easy to understand, or please try to explain the mathematical proof in detail.
2. Aug 19, 2013
### HallsofIvy
That's a very strange question!
I'm not sure what you mean by "by obvious saying". You appear to be saying that even if something is NOT true, we can say it is! That is certainly not true.
It is not a matter of something "forcing" a planet to move in a non-circular orbit. It is a matter of there not being anything to force the planet to go in a circular orbit! A circular orbit has "eccentricity" exactly equal to 0. An elliptic orbit can have eccentricity anywhere from 0 to 1. It would be very surprising if a parameter were exactly 0 rather than some range of values.
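For reference, a sketch of the standard result behind this (stated without derivation): for an inverse-square central force, solving Newton's equations gives an orbit that is a general conic section,

$$r(\theta) = \frac{p}{1 + e\cos\theta}$$

where the eccentricity $$e$$ is determined by the initial energy and angular momentum. A circle is the single value $$e = 0$$, while any $$0 \le e < 1$$ gives a bound elliptical orbit, which is why an exactly circular orbit would be a surprising coincidence.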
https://www.physicsforums.com/threads/antimatter-creation.925229/ | # B Antimatter Creation
1. Sep 12, 2017
### ISamson
Hello,
I have read that Michio Kaku made antimatter and photographed it when he was only a high schooler. I have read that he used Sodium-22 to produce positrons. How does that happen? I could not find any good sources of answers...
Thanks.
2. Sep 12, 2017
### Staff: Mentor
Where? It is hard to tell what exactly the source said without seeing the source.
Sodium-22 undergoes beta+ decay, which means it emits positrons. You don't have to do anything, you just have to find a way to get enough sodium-22. You cannot really photograph these positrons, directly, but you can let them produce tracks in detectors (e. g. cloud chambers) and take a picture of these tracks.
3. Sep 12, 2017
### vanhees71
Indeed, the most interesting question is where a high schooler could get sodium-22. Nowadays the safety guidelines at high schools, even under supervision of a teacher, are such that it is almost impossible for the students to do interesting experiments (at least in Germany). I cannot imagine that it is allowed to handle even harmless portions of any radioactive material...
4. Sep 12, 2017
### Bandersnatch
5. Sep 12, 2017
### vanhees71
6. Sep 12, 2017
### Staff: Mentor
Born 1947, so around 1965 I guess.
7. Sep 12, 2017
### ISamson
He said he went to a local nuclear research company...
http://mathhelpforum.com/calculus/84407-solved-online-definite-integral-calculators-faulty-print.html | [SOLVED] Online Definite Integral Calculators faulty?
• April 19th 2009, 04:19 AM
dtb
[SOLVED] Online Definite Integral Calculators faulty?
Hi all,
I've been looking at two online calculators of Definite Integrals and I'm wondering if they can't handle limits that cross the x-axis
eg.
$\int_{-3}^{1} \sin(x) \, dx.$
My solution is that the area is 2.4496 (being 1.9899 + 0.4597, the area below the axis added to the area above the axis)
But if you use either of these two popular online Integral calculators
Find the Numerical Answer to a Definite Integral - WebMath
Definite Integral Calculator at SolveMyMath.com
they give a result of -1.53 (the negative area plus the smaller positive area)
So are they wrong? or is there a different purpose for this negative result?
Cheers (Coffee)
• April 19th 2009, 04:56 AM
Moo
Hello,
Quote:
Originally Posted by dtb
Hi all,
I've been looking at two online calculators of definite integrals and I'm wondering whether they can handle integrands that cross the x-axis between the limits
eg.
$\int_{-3}^{1} \sin(x) \, dx.$
My solution is that the area is 2.4496 (being 1.9899 + 0.4597, the area below the axis added to the area above the axis)
But if you use either of these two popular online Integral calculators
Find the Numerical Answer to a Definite Integral - WebMath
Definite Integral Calculator at SolveMyMath.com
they give a result of -1.53 (the negative area plus the smaller positive area)
So are they wrong? or is there a different purpose for this negative result?
Cheers (Coffee)
When calculating an area, you have to subtract any area that is below the x-axis. And add the areas that are above the x-axis.
That's integrals, they can be negative, though areas in geometry are always positive.
If you work with the antiderivatives... :
An antiderivative of sin(x) is -cos(x) (just check by taking the derivative of -cos(x))
By definition, your integral is thus $-\cos(1)-[-\cos(-3)]=-\cos(1)+\cos(-3)=-\cos(1)+\cos(3)$
You can check this result with a calculator, it'll give -1.53 ;)
• April 19th 2009, 05:16 AM
dtb
Hmm, I'm still not sure...
I can work it out that way if I leave all +/- symbols as they are:
I get
-1.9899 (area below x-axis) + 0.4597 (area above x-axis)
= -1.5302
Unfortunately, having looked at 5 calculus books, there isn't much mention of graphs that cross the x-axis....
My teacher was saying that you need to take each area separately - make a negative area positive - then add them together
ie. magnitude |-1.9899| + |0.4597|
= 1.9899 + 0.4597
= 2.4496
So the area is positive, and a sum of the part below the axis and the part above.
Any thoughts? (Worried)
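For what it's worth, both numbers are easy to check numerically (a quick sketch using scipy; the split point $0$ is where $\sin(x)$ changes sign on $[-3,1]$):

```python
# Signed integral vs. geometric area of sin(x) on [-3, 1].
from math import sin
from scipy.integrate import quad

signed, _ = quad(sin, -3, 1)                              # about -1.5302
area, _ = quad(lambda x: abs(sin(x)), -3, 1, points=[0])  # about 2.4496
print(signed, area)
```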
• April 19th 2009, 06:12 AM
Plato
If $f$ is nonnegative and integrable on $[a,b]$ then $\int_a^b {f(x)dx}$ is a measure of the area bound by the graph of $f$ and the x-axis.
Therefore, $\int_0^{\frac{\pi }{2}} {\cos (x)dx} = 1$ means that there is 1 square unit in the region bound by $\cos(x)$ and the x-axis from $x=0$ to $x=\frac{\pi}{2}$.
However, $\int_{\frac{{ - \pi }}{2}}^{\frac{\pi }{2}} {\cos (x)dx} = 0$ this is not area because the function is not nonnegative on that entire interval.
But $\int_{\frac{{ - \pi }}{2}}^{\frac{\pi }{2}} {\left| {\cos (x)} \right|dx} = 2$ which is the correct bounded area.
• April 19th 2009, 06:32 AM
dtb
I figured out why I was getting confused...
There's a difference between the "integral" and "the area between the graph and the x-axis"
or at least there is where the graph crosses the x-axis.
So, those online calculators will evaluate an integral, but they won't tell you the area.
Now I just need to find out what each type of result is used for (Happy)
• April 19th 2009, 07:10 AM
Plato
Quote:
Originally Posted by dtb
There's a difference between the "integral" and "the area between the graph and the x-axis"
Now I just need to find out what each type of result is used for | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.936759889125824, "perplexity": 998.0164440441843}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706470784/warc/CC-MAIN-20130516121430-00026-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/significance-of-the-lagrangian.38983/ | # Significance of the Lagrangian
1. Aug 10, 2004
### alexepascual
While I understand the use of the Lagrangian in Hamilton's principle, I have the gut feeling that there is more to it than meets the eye.
For instance, while the Hamiltonian is conceptually easy to understand and even I could have thought about it, the Lagrangian is something else. I would never have thought about subtracting the potential energy from the kinetic energy. How was this found? Was it just by accident? Did a monkey erase a plus sign in the Hamiltonian and put a minus? Or were there some physical reasons that justified attempting to use the difference of T and V as opposed to their sum? Or maybe someone (Lagrange? Hamilton?) was kind of bored and decided to have some fun by trying something different?
The way the subject is usually presented is more or less along these lines:
Let there be a function which we call the Lagrangian (L), defined by L=T-V. If we do this and that with this function, we obtain some very useful results.
It appears to me that the expression for the Lagrangian is so simple that there should be some simple explanation of its significance, which we could understand even before we start writing any equations.
If such an explanation exists, and you know it, I'll appreciate your sharing it with us.
-Alex-
Last edited: Aug 10, 2004
2. Aug 10, 2004
### marlon
The Lagrangian is a concept that comes from the variational principle. When you put this quantity into a functional and you calculate the extremal value (derivative equals zero) you get Newton's equations of motion.
On a more intuitive note: one can say that when you calculate the minimal action (this is the Lagrangian put into an integral over all possible paths between two points) needed to go from one point to another, you get a motion which is described by the Newton equations.
Or: the Newtonian equations state that nature is as lazy as possible... That is why nature will always aim for the situation with lowest possible potential energy.
regards
marlon
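To see marlon's point concretely, here is a minimal sketch (sympy, assuming a single particle in one dimension with a generic potential V(q)) showing that the Euler-Lagrange equation of L = T - V is exactly Newton's second law:

```python
# Euler-Lagrange equation d/dt(dL/dqdot) - dL/dq = 0 for L = T - V.
import sympy as sp

t, m = sp.symbols('t m', positive=True)
q = sp.Function('q')(t)
V = sp.Function('V')
qdot = sp.diff(q, t)

L = sp.Rational(1, 2) * m * qdot**2 - V(q)          # L = T - V

EL = sp.diff(sp.diff(L, qdot), t) - sp.diff(L, q)   # d/dt(dL/dqdot) - dL/dq
print(sp.simplify(EL))   # m*q'' + V'(q), i.e. m*q'' = -dV/dq
```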
3. Aug 10, 2004
### ZapperZ
Staff Emeritus
To add to what Marlon has said, the Lagrangian/Hamiltonian mechanics arose out of the Least Action Principle. This is a different approach to the dynamics of a system than Newtonian mechanics that uses forces. Such approach, using the calculus of variation, is what produces this formulation, and even Fermat's least time principle.
http://www.eftaylor.com/leastaction.html
Zz.
4. Aug 10, 2004
### Galileo
Yeah, I had (have) the same problem. It stems from the principle of least action.
I've tried to find a book which explains it well, but they are hard to find.
Here's a quote from one of the books:
5. Aug 10, 2004
### ZapperZ
Staff Emeritus
Check the link I gave earlier. It has at least one link that gives an almost "trivial" derivation of the Lagrangian.
I strongly suggest that one cover the calculus of variations to fully understand the principle of least action. I've mentioned Mary Boas's text in a few postings on here. She has very good coverage of this, sufficient for most physics majors.
Zz.
6. Aug 11, 2004
### alexepascual
ZapperZ:
I briefly looked through the links at E.F.Taylor's site and only saw one article that might provide a derivation of the Lagrangian. But I would have to read the article to make sure the derivation gives enough intuitive insight.
Also, in another article ( I think by E.F. Taylor) he talks about reducing the principle of least action to a differential form by bringing the starting and end points very close together. This might provide further insight. Thanks for your advice, I think it'll be very useful.
Marlon:
I understand your explanation and that is the explanation that I have found in the books. But it is not very satisfactory to me because it starts with the use of T-V instead of having T-V come out as the quantity derived.
With respect to your explanation of nature aiming for the lowest potential energy, I doubt this is correct. As a matter of fact, the least action principle minimizes the difference between kinetic and potential energy, which could be achieved by having the highest potential energy possible.
Also, I think the idea that Nature would try to economize some quantities by choosing the minimum (a view which was supported by Maupertuis) was kind of discredited when it was found that Nature was not aiming for a minimum of these quantities but an extremum, meaning it could as well be a maximum.
Thanks for your input Marlon. I hope you post again if you don't agree with what I just said.
Galileo:
I wonder why K.Jacobi chose not to break with tradition. Maybe it was too much work to look for an easy-to-understand explanation.
I have been taking a look at the book "The Variational principles of mechanics" by Cornelius Lanczos. Some of it is too advanced for me, but it has some sections that are quite enlightening. Specifically, he has a Chapter on D'Alembert's Principle and in the following chapter, it appears that he derives the Lagrangian from D'Alembert's principle (pgs.111-113) I would have to read it a couple times and think about it in order to understand it. If you can get a hold of a copy of the book I suggest you take a look at it.
7. Aug 11, 2004
### MiGUi
The Hamiltonian is not always V + T; that holds, for example, only if time doesn't appear in the Lagrangian.
8. Aug 11, 2004
### arildno
alex:
I will offer an argument which possibly yields a bit of insight on the (history of) "action" concept; however, this is my representation, and should not be regarded as authorative in any way:
1. The "vis vitae"-concept:
In 18th-century physics, the quantity $$V_{s}=mv^{2}$$ (that is, twice the kinetic energy "T") was called the "vis vitae" (life force) of the physical system.
(I believe it was Leibniz who championed the concept)
2. Energy and action:
Note that if we combine the "hamiltonian" (T+V=E) with the Lagrangian, we gain for the "action" (A=T-V):
$$A=V_{s}-E$$
Hence, a rough characterization of "action" is:
Action is "excess life force"; nature tends to minimize this
NB!
I have no references to support this view; one really should make a study of the evolution of physics in the 17th-18th centuries to find the "rationale" physicists at that time made of "least-action"
As of today, one might regard the "least-action-principle" as a mathematical trick, but it probably goes "deeper" than that.
9. Aug 11, 2004
### pervect
Staff Emeritus
One of the examples I'm aware of where H is not equal to V+T is the restricted three body problem where we use rotating coordinates (or any problem that uses rotating coordinates for that matter).
[edit #2 total re-write]
We can write the inertial coordinates in terms of the rotating coordinates
$$x_{inertial}= x \left( t \right) \cos \left( \omega\,t \right) -y \left( t \right) \sin \left( \omega\,t \right) \hspace{.5 in} y_{inertial}=y \left( t \right) \cos \left( \omega\,t \right) +x \left( t \right) \sin \left( \omega\,t \right)$$
We can then say that
$$T = \frac{1}{2}m\left[\left(\frac{d x_{inertial}}{dt}\right)^2+\left(\frac{d y_{inertial}}{dt}\right)^2\right]$$
$$T = 1/2\,m{{\it xdot}}^{2}+1/2\,m{{\it ydot}}^{2}+1/2\,m{x}^{2}{\omega}^{2}+1/2\,m{y}^{2}{\omega} ^{2}-m{\it xdot}\,y\omega+mx\omega\,{\it ydot}$$
and $$L = T - V(x,y)$$
We can generate the energy function as follows
$$h = xdot \frac {\partial L}{\partial xdot} + ydot \frac {\partial L}{\partial ydot} - L$$
$$h = 1/2\,m{{\it xdot}}^{2}+1/2\,m{{\it ydot}}^{2}-1/2\,m{x}^{2}{\omega}^{2 }-1/2\,m{y}^{2}{\omega}^{2}+V \left( x,y \right)$$
Note that the energy function, which is the Hamiltonian before we make the variable substitution that changes xdot and ydot into px and py, is NOT equal to the energy of the system. The quantity -2*h, in the above variables, is often called the Jacobi integral of the three body problem.
http://scienceworld.wolfram.com/physics/JacobiIntegral.html
We complete the transformation to the Hamiltonian in the usual variables by setting
$$px = \frac {\partial L}{\partial xdot}\hspace{.5 in} py = \frac {\partial L}{\partial ydot}$$
$$H = \frac {px^2}{2m} + \frac {py^2}{2m} + \omega (px \; y - py \; x) + V(x,y)$$
We can compare H to the value of the total energy in the same variables and again see it's not the same
$$E = \frac {px^2}{2m} + \frac {py^2}{2m} + V(x,y)$$
Last edited: Aug 11, 2004
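pervect's algebra is easy to verify symbolically. A sympy sketch (the symbols mirror the post, with V left as a generic function of x and y):

```python
# Check the energy function h = qdot*dL/dqdot - L for the rotating-frame
# Lagrangian against the closed form quoted in the post.
import sympy as sp

t = sp.symbols('t')
m, w = sp.symbols('m omega', positive=True)
x, y = sp.Function('x')(t), sp.Function('y')(t)
V = sp.Function('V')(x, y)

x_in = x*sp.cos(w*t) - y*sp.sin(w*t)      # inertial coordinates
y_in = y*sp.cos(w*t) + x*sp.sin(w*t)

T = sp.Rational(1, 2)*m*(sp.diff(x_in, t)**2 + sp.diff(y_in, t)**2)
L = sp.trigsimp(sp.expand(T)) - V

xd, yd = sp.diff(x, t), sp.diff(y, t)
h = xd*sp.diff(L, xd) + yd*sp.diff(L, yd) - L

h_claim = (sp.Rational(1, 2)*m*(xd**2 + yd**2)
           - sp.Rational(1, 2)*m*w**2*(x**2 + y**2) + V)
print(sp.simplify(h - h_claim))   # -> 0
```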
10. Aug 11, 2004
### alexepascual
Arildno:
Thanks for your post. I was already somewhat familiar with the "vis vitae" (also known as "vis viva"). But as far as I can see, this quantity would be equivalent to the kinetic energy, except for a factor of two. I understand that in certain problems it may be more convenient since it does not require the division by two, but I think both quantities would be mostly interchangeable (after correcting for the factor 2).
With respect to the equations you post, I don't see the Lagrangian coming out of them. With respect to "Action" my understanding is that it represents the integration of the lagrangian with respect to time. I think the following would be the correct equations: (which don't explain my question either)
H=T+V
L=T-V
Vs=2T
A= Integral{L dt}
A= Integral{(T-V)dt}
If we were to consider only the case where total energy is conserved, then we can consider:
V=H-T
L=T-(H-T)
L=T-H+T
L=2T-H
L=Vs-H
A=integral{(Vs-H)dt}
But these last equations and the inclusion of the vis viva don't appear to throw any more light on the subject.
Something interesting is that if L=2T-H, then when considering alternative paths with the same energy, minimizing A would be equivalent to minimizing the integral of T with respect to time. But I guess in Hamilton's principle we have the freedom to choose paths with different total energy, which would make this a moot point.
11. Aug 12, 2004
### arildno
Hmm..you're probably right.
So much for pet theories..
12. Aug 13, 2004
### turin
IMO, this is crucial to a physical interpretation of the principle of least action. Otherwise, the principle seems kind of "spooky" (i.e. non-causal).
Don't you think that may be a bit picky? Whether a relative extremum is specifically a maximum or a minimum depends on the convention imposed. However, you are neglecting yet a third possibility for the action of a physical path: inflection (or saddle-point). The length of the physical path must be stationary (according to variations of parameters about that path), but not necessarily an extremum.
13. Aug 14, 2004
### krab
Are you saying that you could have intuited that the total energy written as a function of space and momentum coordinates has the characteristic that partial derivatives w.r.t. the momentum coordinates give the time derivatives of the corresponding positions and partial derivatives with respect to the space coordinates are equal to the negative of time derivatives of the corresponding momenta? If so, I find it very hard to believe.
14. Aug 15, 2004
### alexepascual
Turin:
Your observation is very interesting. I didn't think about inflection points. Wouldn't this support my point that Hamilton's principle does not represent an attempt by Nature to obtain an economy in a certain quantity?
I have read that Maupertuis tried to give to the principle of "least action" (Maupertuisian action, not the Hamiltonian action) a kind of magical meaning, as if some kind of intelligence had economy as a purpose.
Don't you think that the fact that action does not perforce need to be a minimum speaks against Maupertuis's interpretation?
Do you think I am wrong in saying that Maupertuis's interpretation has been discredited?
Do you agree with Marlon's statement (which I was arguing against?) Or do you have a different objection to it?
The fact that the principle of least action can be proven equivalent to Newton's second law would, I guess, take some of the spookiness out of it.
But I agree that if we don't have a good intuitive understanding of how it translates to a causal approach, then it would still feel "spooky".
I have made some progress reading Cornelius Lanczos' book. I still have to read more and re-read some sections to fully understand it.
Krab,
I am not saying that I could have come up with Hamiltonian mechanics myself. My statement was not an attempt to brag about my capacity. I was just trying to say that the Hamiltonian as the sum of kinetic and potential energy was sufficiently simple for someone like me to understand, as opposed to the idea of the Lagrangian.
It is a concern for me though, what the mental process that leads to discovery is. I think very often a concept that appears "magical", which we think we would have never been able to find, would appear less so if we knew the mental path the discoverer took.
15. Aug 16, 2004
### turin
I don't know to what extent you intend to take this analogy/personification. I have no doubt in my own mind that the principle has a profound meaning regarding physical reality, and there does seem to be some kind of tendency to, dare I say, "minimization," but it is probably better to refer to the phenomenon as "equilibration." A system "seeks" a state from which all deviations present the same variation in action, to first order. Of course, there seems to be this unwritten rule in physics that the dynamics are only unambiguous up to second order, which I consider also a rather obscure concept to try to get ahold of.
I don't know anything about that. It sounds like metaphysics to me (and I say that in condescension).
I think I basically agree with your position. I don't think that there is some underlying drive towards an extremum condition. Though, I also don't take any integral nearly as seriously as a good, solid derivative in physics. Integrals introduce extra ambiguity whereas derivatives eliminate them (up to a point).
I argue that neither Newton's laws (obviously) nor the principle of least action fundamentally characterize physical behavior; however, to me the principle of least action seems more fundamental than Newton's laws, when considered infinitesimally (integration over a trivially small temporal range).
16. Aug 16, 2004
### alexepascual
Turin,
I am quite frustrated because I had just typed a response to your post and it suddenly dissapeared and the editor window appeared blank again.
I'll try to reproduce my answer in condensed form.
Actually it is not my analogy/personification but Maupertuis's, and all I have said is that it has been discredited, part of the reason being that it is metaphysical, and partly because if "Nature" (some resemblance of "God" here?) really had a "purpose", this purpose would not be one of "economy" as proposed by Maupertuis, but one of "equilibration" as you say.
So, it looks like we agree more than it first appeared.
I also agree that an understanding of the principle would have to be more in terms of a derivative rather than an integration over time. (Although there seems to be a need to integrate at a point, which results in the principle as conventionally stated). This is explained by Cornelius Lanczos, but I don't fully grasp it yet. His explanation uses the concept of "forces of inertia" where every time a particle is accelerated, the "impressed force" is opposed (and often cancelled) by this "force of inertia" (ma). But the forces of inertia would not cancel the impressed forces when there is a constraint that has not been eliminated by a change of coordinates. If I am not explaining this correctly, it is because I am still in the process of understanding it. There is a principle in connection with these "forces of inertia" which is known as "D'Alembert's principle".
With respect to the ambiguity above second order you mention, I am not familiar with that. It would be nice to have Eye_in_the_sky here. I am sure he would have some opinion about that.
Last edited: Aug 16, 2004
17. Aug 16, 2004
### pmb_phy
It's a shame that we don't see more discussion about this principle here. It's an interesting topic.
Pete
18. Aug 16, 2004
### pervect
Staff Emeritus
Well, as I understand it, to get to D'Alembert's principle, you start with the principle of virtual work.
The way I look at it is that if you have a system in equilibrium, no work is being done on the system. In a metaphysical sense, it's "not moving", though this is not necessarily true in a literal sense. (This may be oversimplified, but it works for me)
If you exclude systems where the forces of constraint do any work (usually this excludes dissipative forces of constraint, i.e. friction), you can say that the applied physical forces do no work at equilibrium. This is the principle of virtual work.
Mathematically, we write:
$$\sum F^{applied}_{i} \cdot \delta r_i = 0$$
D'Alembert's principle starts off with this principle, but extends it to cover systems that are not in equilibrium.
To accomplish this we must do something rather clever. We take the equations for a non-equilibrium system, F = dp/dt, and re-write them as F - dp/dt = 0. We then reinterpret this equation to observe that if we physically applied additional forces -dp/dt to the system, we would have a system that was in equilibrium. Now we can apply the equations of virtual work, since our new system is at equilibrium.
In equation form, we write
$$\sum ( F^{applied}_i - \dot p_i) \cdot \delta r_i = 0$$
This is known as D'Alembert's principle, and it allows us to proceed with the derivation of the Lagrangian. The next step in the derivation is to get rid of the physical coordinates r_i through substitution and replace them with the generalized coordinates q_i
However, I'll leave this to you and your textbook at this point.
19. Aug 16, 2004
### alexepascual
Pervect:
Thanks for your nice introduction to D'Alembert's principle. I'll print it out and use it as a guide while I read Goldstein's explanation.
20. Aug 16, 2004
### pmb_phy
I know the principle and did this out several times in the last 20 years. I was simply saying that its an interesting topic that should be discussed more.
Pete
http://math.stackexchange.com/questions/271536/ring-of-formal-power-series-finitely-generated-as-algebra?answertab=active | Ring of formal power series finitely generated as algebra?
I'm asked if the ring of formal power series is finitely generated as a $K$-algebra. Intuition says no, but I don't know where to start. Any hint or suggestion?
-
You mean formal power series? – Siméon Jan 6 '13 at 13:47
Try to write $1+x+x^2+x^3+\cdots$ as a finite linear combination? – Hui Yu Jan 6 '13 at 13:55
@HuiYu yes, you can write it as $1\times (1+x+x^2+...)$. – Louis La Brocante Jan 6 '13 at 13:56
formal series, right sorry – user55354 Jan 6 '13 at 13:56
If $K$ is a field, then show that $K[[x]]$ has uncountable dimension as a $K$-vector space, while any finitely-generated $K$-algebra has at most countable dimension. – Zhen Lin Jan 6 '13 at 14:04
Finitely generated $k$-algebras are Jacobson, hence finitely generated local $k$-algebras are artinian, hence finitely generated local $k$-domains are fields. Well, $k[[x]]$ is not a field.
-
Dear @Martin, This is really nice! – Keenan Kidwell Jan 16 '13 at 18:46
I don't understand your claim that finitely-generated local $k$-algebras are artinian, but it's certainly true that a local Jacobson domain must be a field. (Because then the unique maximal ideal = Jacobson radical = nilradical = 0.) – Zhen Lin Jan 17 '13 at 9:59
In a local Jacobson ring, there is only one prime ideal, and artinian = noetherian + zero-dimensional. – Martin Brandenburg Jan 21 '13 at 22:58
Basically I use the same argument which you suggest. – Martin Brandenburg Jan 22 '13 at 8:47
Let $A$ be a non-trivial commutative ring. Then $A[[x]]$ is not finitely generated as a $A$-algebra.
Indeed, observe that $A$ must have a maximal ideal $\mathfrak{m}$, so we have a field $k = A / \mathfrak{m}$, and if $k[[x]]$ is not finitely-generated as a $k$-algebra, then $A[[x]]$ cannot be finitely-generated as an $A$-algebra. So it suffices to prove that $k[[x]]$ is not finitely generated. Now, it is a straightforward matter to show that the polynomial ring $k[x_1, \ldots, x_n]$ has a countably infinite basis as a $k$-vector space, so any finitely-generated $k$-algebra must have an at most countable basis as a $k$-vector space.
However, $k[[x]]$ has an uncountable basis as a $k$-vector space. Observe that $k[[x]]$ is obviously isomorphic to $k^\mathbb{N}$, the space of all $\mathbb{N}$-indexed sequences of elements of $k$, as $k$-vector spaces. But it is well-known that $k^\mathbb{N}$ is of uncountable dimension: see here, for example. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9387053847312927, "perplexity": 339.505426776412}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021872753/warc/CC-MAIN-20140305121752-00075-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://byjus.com/question-answer/find-the-area-of-a-trapezium-if-the-distance-between-its-parallel-sides-is-26-1/ | Question
# Find the area of a trapezium if the distance between its parallel sides is 26 cm and one of the parallel sides is 84 cm, the other parallel side is half of the perpendicular distance between the parallel sides.
A) 1430 cm2
B) 1261 cm2
C) 1225 cm2
D) Data insufficient
Solution
## The correct option is B: 1261 cm2. Area of a trapezium = (1/2)(a + b)h sq. units. It is given that the other parallel side is half of the perpendicular distance between the parallel sides, i.e. half of 26 cm, which is 13 cm. ⇒ (1/2) × (84 + 13) × 26 = (1/2) × 97 × 26 = 1261 cm2
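A quick check of the arithmetic (a minimal Python sketch with the given values):

```python
h, a = 26, 84            # distance between parallel sides and one side (cm)
b = h / 2                # the other parallel side: half of 26 = 13 cm
print((a + b) * h / 2)   # 1261.0
```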
http://en.m.wiktionary.org/wiki/Lipschitz | # Lipschitz
## English
### Etymology
Named after Rudolf Lipschitz.
### Adjective

Lipschitz (not comparable)
1. (mathematics) (Of a real-valued real function $f$) Such that there exists a constant $K$ such that whenever $x_1$ and $x_2$ are in the domain of $f$, $|f(x_1)-f(x_2)|\leq K|x_1-x_2|$.
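A small numeric illustration of the definition (a sketch; the choice of $f = \sin$ and the random sampling are arbitrary): for $\sin$, the constant $K = 1$ works, since every difference quotient is bounded by $\sup|\cos| = 1$.

```python
# Estimate the best Lipschitz constant of sin by sampling difference quotients.
from math import sin
from itertools import combinations
import random

xs = [random.uniform(-10, 10) for _ in range(500)]
K = max(abs(sin(a) - sin(b)) / abs(a - b) for a, b in combinations(xs, 2))
print(K)   # always <= 1, and close to 1
```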
#### Derived terms
• Lipschitz condition
• Lipschitz constant
• Lipschitz continuity
• Lipschitz continuous | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9955817461013794, "perplexity": 3578.330250715567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163982738/warc/CC-MAIN-20131204133302-00068-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://cs.stackexchange.com/questions/86136/the-recursive-solution-to-the-all-pairs-shortest-paths-of-floyd-warshall-algorit | The recursive solution to the all-pairs shortest-paths of Floyd-Warshall algorithm
In the Floyd-Warshall algorithm we have:
Let $d_{ij}^{(k)}$ be the weight of a shortest path from vertex $i$ to $j$ for which all intermediate vertices are in the set $\{1, 2, \cdots, k\}$ then
\begin{align*} &d_{ij}^{(k)}= \begin{cases} w_{ij} & \text{ if } k = 0 \\ \min\{d_{ij}^{(k-1)}, d_{ik}^{(k-1)} + d_{kj}^{(k-1)}\} & \text{ if } k > 0 \end{cases}\\ \end{align*}
In fact it considers whether $k$ is an intermediate vertex in the shortest path from $i$ to $j$ or not. If $k$ is an intermediate vertex it selects $d_{ik}^{(k-1)} + d_{kj}^{(k-1)}$, because it decomposes the shortest path into $i \stackrel{p_1}{\leadsto} k \stackrel{p_2}{\leadsto} j$; otherwise it selects $d_{ij}^{(k-1)}$, since $k$ is not an intermediate vertex and so has no effect on the shortest path.
My problem is: for a given shortest path between $i$ and $j$, whether $k$ is an intermediate vertex or not is deduced from the structure of the graph, not from our decision. So we have no freedom to select or not to select $k$: if $k$ is an intermediate vertex we must choose $d_{ik}^{(k-1)} + d_{kj}^{(k-1)}$, and if not we must choose $d_{ij}^{(k-1)}$. But when the formula takes the $\min$ of two numbers, it sounds as if we have the option to select either of them, while based on the structure of the graph there is no option for us. I believe the formula must be
\begin{align*} &d_{ij}^{(k)}= \begin{cases} w_{ij} & \text{ if } k = 0 \\ d_{ij}^{(k-1)} & \text{ if } k > 0 \text{ and } k \notin \text{ intermediate}(p)\\ d_{ik}^{(k-1)} + d_{kj}^{(k-1)} & \text{ if } k > 0 \text{ and } k \in \text{ intermediate}(p) \end{cases}\\ \end{align*}
In fact the algorithm determines whether the vertex $k$ is "intermediate" on the path from $i$ to $j$. If indeed $d_{ik}^{(k-1)} + d_{kj}^{(k-1)} < d_{ij}^{(k-1)}$ during the computation we know that (up to the first $k$ vertices) the vertex $k$ is needed to obtain a shorter path between $i$ and $j$.
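For what it's worth, here is a minimal Python sketch of how the recurrence is actually evaluated: for every pair $(i,j)$ the $\min$ is taken over both branches, and the structure of the graph decides which branch wins, so no shortest path (and no intermediate-vertex test) needs to be known in advance. The example weights are assumptions.

```python
# All-pairs shortest paths: at step k the min ranges over both branches
# for every pair (i, j); the graph decides which branch is smaller.
INF = float('inf')

def floyd_warshall(w):
    n = len(w)
    d = [row[:] for row in w]      # d^(0) = w
    for k in range(n):             # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

w = [[0, 3, INF],
     [INF, 0, 1],
     [5, INF, 0]]
print(floyd_warshall(w))   # [[0, 3, 4], [6, 0, 1], [5, 8, 0]]
```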
• actually, I do not think it is true. One might have $d_{ik}^{(k-1)} + d_{kj}^{(k-1)} < d_{ij}^{(k-1)}$ even when $k$ is not on the path from $i$ to $j$, a later step might find an even better shortcut. And frankly, I do not think your formula has a good meaning. Floyd-Warshall does not know which path it tries to optimize, it computes all-pairs shortest path. A vertex that is internal for one $i,j$ pair is not internal for another pair. – Hendrik Jan Jan 2 '18 at 7:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9862611293792725, "perplexity": 184.32738356838416}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573570.6/warc/CC-MAIN-20190919183843-20190919205843-00073.warc.gz"} |
https://quant.stackexchange.com/questions/43803/greeks-and-options-hedging | # Greeks and options hedging
Why is it that theta is sometimes taken as the proxy for gamma of the underlying asset in options hedging?
• For a delta neutral portfolio, the following equation holds $\Theta+\frac{1}{2} \sigma^2 S^2 \Gamma = r \Pi$. From this if know Theta you can calculate Gamma (or vice versa). – Alex C Jan 30 '19 at 21:34
I can argue your case as follows. Consider a portfolio whose value $$\Pi$$ satisfies the differential equation given by: $$\frac{\partial \Pi}{\partial t}+rS\frac{\partial \Pi}{\partial S}+\frac{1}{2}\sigma^{2}S^{2}\frac{\partial^{2}\Pi}{\partial S^{2}}=r\Pi$$ From the differential equation, $$\Theta=\frac{\partial \Pi}{\partial t}$$ $$\Delta=\frac{\partial \Pi}{\partial S}$$ $$\Gamma=\frac{\partial^{2} \Pi}{\partial S^{2}}$$ Substituting the above into our differential equation we shall have: $$\Theta + rS\Delta+\frac{1}{2}\sigma^{2}S^{2}\Gamma=r\Pi$$ We know that for a delta-neutral portfolio $$\Delta=0$$, thus we can write the equation as $$\Theta+\frac{1}{2}\sigma^{2}S^{2}\Gamma=r\Pi$$ From the last equation, we note that when Gamma is large and positive, the theta of the portfolio tends to be large and negative; this explains why theta can be regarded as a gamma proxy strictly in delta-neutral portfolios, not in all scenarios.
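A quick numeric check of this identity (a Python sketch; the Black-Scholes call formulas are standard, the parameter values are assumptions). Hedging a call delta-neutral with stock, the stock contributes no theta or gamma, so $$\Theta+\frac{1}{2}\sigma^{2}S^{2}\Gamma$$ should equal $$r\Pi$$:

```python
# Verify Theta + 0.5*sigma^2*S^2*Gamma == r*Pi for a delta-hedged call.
from math import log, sqrt, exp
from statistics import NormalDist

nd = NormalDist()
S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)

call  = S * nd.cdf(d1) - K * exp(-r * T) * nd.cdf(d2)
delta = nd.cdf(d1)
gamma = nd.pdf(d1) / (S * sigma * sqrt(T))
theta = -S * nd.pdf(d1) * sigma / (2 * sqrt(T)) - r * K * exp(-r * T) * nd.cdf(d2)

Pi = call - delta * S                            # long call, short delta shares
print(theta + 0.5 * sigma**2 * S**2 * gamma)     # about -2.662
print(r * Pi)                                    # matches
```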
• This is the correct answer. Theta and Gamma are related only for delta neutral portfolios absent interest rates, dividends or repo. – Ezy Feb 5 '19 at 8:55
I don't think that people would usually use one as the substitute for the other, as:

$$\theta/\Gamma=-\frac{S^{2}\sigma^{2}}{2}$$

which is arrived at by neglecting the terms of the formula for $$\theta$$ which are preceded by the interest rate $$r$$. I think the background to your question stems from the fact that option market practitioners will consider theta and gamma as essentially the same thing - decay ($$\theta$$) occurs where there is convexity ($$\Gamma$$).
• Option market practitionners do not consider gamma and thera « essentially the same thing » – Ezy Feb 5 '19 at 8:37
• ok - meant to say decay is a consequence of convexity – ZRH Feb 5 '19 at 8:46
• Deep in-the-money european call is still convex but has positive theta quant.stackexchange.com/questions/42611/… – Ezy Feb 5 '19 at 8:52
• I'm an options market practitioner. I don't care about the $$r\Pi$$ term because I run a funding neutral portfolio, so that term cancels versus the interest I have to pay to run my books. – dm63 Mar 31 '19 at 11:41
https://www.physicsforums.com/threads/two-identical-springs-with-spring-constant-k-and-with-two-identical-masses-m.602503/ | # Two identical springs with spring constant k and with two Identical masses m
1. May 2, 2012
### big_zipp
I am trying to figure out the kinetic and potential energy of this system. A spring is attached to point A, and a mass m hangs from the other end of the spring. Another spring hangs from the first mass, and another mass hangs from the second spring. There is no motion in the horizontal direction. The unstretched length of each spring is b.
I'm just looking to find the kinetic and potential energy.
Thank you
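For reference, a minimal sketch of what the energies look like under one common choice of coordinates (an assumption on my part: let $$y_1$$ and $$y_2$$ be the downward extensions of the upper and lower springs beyond the natural length $$b$$, include gravity $$g$$, and allow the masses to move vertically):

$$T = \tfrac{1}{2} m \dot{y}_1^{2} + \tfrac{1}{2} m (\dot{y}_1 + \dot{y}_2)^2$$

$$V = \tfrac{1}{2} k y_1^{2} + \tfrac{1}{2} k y_2^{2} - m g (b + y_1) - m g (2b + y_1 + y_2)$$

(The first mass sits at depth $$b + y_1$$ below A and the second at $$2b + y_1 + y_2$$, so the second mass moves with velocity $$\dot{y}_1 + \dot{y}_2$$.)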
2. May 2, 2012
### Staff: Mentor
What are your thoughts? What are the Relevant Equations? Why would there be kinetic energy involved -- it sounds like the system is in motionless equilibrium?
https://www.thecadforums.com/threads/dc-analysis-gain-differs-from-ac-analysis-gain.37081/ | # DC analysis gain differs from AC analysis gain
Discussion in 'Cadence' started by Amr Kassem, Jul 7, 2009.
1. ### Amr KassemGuest
When I design a simple common-source amplifier to give a gain of 100,
for example, the AC analysis indicates that the gain is actually 100.
But when I do a DC analysis and plot the derivative of Vout against Vin,
I see a gain of 30. The same happens when I use cascode amplifiers and
even aim for a gain of 5000: I see a gain of 40 from the DC analysis
and 5000 from the AC analysis.
I tried removing the load capacitor but nothing changes. Any idea what
the reason for that could be?
Amr Kassem, Jul 7, 2009
2. ### Guest
Hi Amr,
Whatever you are seeing is right. It is bound to happen that way,
because the gain given by AC analysis is at the Q-point (bias point)
and assumes the signal is small compared to the bias point of the circuit.
You can see the same gain in DC as in AC only if you
sweep your signal about the Q-point in a small range.
For example, if your Q-point is 0.9 volt (input common mode), then in
DC you sweep from about 0.9 +/- 100uV.
In the above case you will observe the same gain in both.
With Regards
Pavan Pai
, Jul 8, 2009
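To see Pavan's point numerically (a toy Python sketch, not a SPICE run; the tanh transfer curve and all numbers are assumptions):

```python
# A saturating amplifier vout = A*tanh(vin/vs): the AC (small-signal) gain is
# the derivative at the bias point, while a wide DC sweep measures a secant
# slope that is much smaller once the sweep leaves the linear region.
import numpy as np

A, vs, vq = 100.0, 0.05, 0.0                  # assumed gain, linear range, Q-point
f = lambda v: A * np.tanh((v - vq) / vs)

print("small-signal gain at Q:", A / vs)      # 2000.0
for dv in (0.5, 0.05, 1e-4):                  # DC sweep half-widths
    print(dv, (f(vq + dv) - f(vq - dv)) / (2 * dv))
```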
3. ### Amr KassemGuest
Does this mean that I should decrease the value of abstol and reltol
in Cadence if I want to see the same gain in both analyses?
Amr Kassem, Jul 12, 2009
4. ### Andrew BeckettGuest
Amr Kassem wrote, on 07/12/09 23:17:
No.
Andrew Beckett, Jul 13, 2009
http://mathoverflow.net/questions/167575/von-neumann-algebras-generated-by-commutators | # von Neumann algebras generated by commutators
Let $A$ be a UHF-algebra of type $n^{\infty}$ and denote its unique and faithful trace by $\tau$. Let $L^2(A)$ be the Hilbert space of the GNS-representation associated to $\tau$. We have two commuting representations $L \colon A \to B(L^2(A))$ and $R \colon A^{\rm op} \to B(L^2(A))$ and by the universal property of the maximal tensor product and the nuclearity of $A$, we obtain a $*$-homomorphism $A \otimes A^{\rm op} \to B(L^2(A))$. I think the image of this is weakly dense, i.e. the von Neumann algebra completion of $A \otimes A^{\rm op}$ in this representation is type I and agrees with $B(L^2(A))$. Now consider the $*$-subalgebra $$B = \left\{ \sum_{i} L_{a_i}R_{b_i} \ |\ \sum_{i} a_i b_i = 0\right\}$$ spanned by those operators corresponding to elements of $A \otimes A^{\rm op}$ that lie in the kernel of the multiplication map. Let $M$ be the weak closure of $B$ in $B(L^2(A))$.
What "is" this algebra $M$, more precisely: What is the type of $M$? Is it a factor?
-
I see that $B$ is an $A-A$ bimodule, but why is $B$ a subalgebra? – Dave Penneys May 19 '14 at 15:22
@DavePenneys: The two actions commute and $R_aR_b = R_{ba}$, since it is a representation of the opposite algebra. If I now take two elements, say $\sum_i L_{a_i}R_{b_i}$ and $\sum_j L_{c_j}R_{d_j}$ of $B$ and multiply them, I end up with $\sum_{i,j} L_{a_ic_j}R_{d_jb_i}$, but $\sum_{i,j} a_ic_jd_jb_i$ should vanish since $\sum_j c_jd_j$ vanishes by assumption. – Ulrich Pennig May 19 '14 at 15:28
Whoops, to get a *-subalgebra, I probably have to demand that the sum $\sum_i b_ia_i$ vanishes as well in the definition of $B$. – Ulrich Pennig May 19 '14 at 17:22
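For reference, the closure computation from these comments written out (using the corrected definition of $B$, with both $\sum_i a_ib_i = 0$ and $\sum_i b_ia_i = 0$):
$$\Big(\sum_i L_{a_i}R_{b_i}\Big)\Big(\sum_j L_{c_j}R_{d_j}\Big) = \sum_{i,j} L_{a_ic_j}R_{d_jb_i}, \qquad \sum_{i,j} a_ic_j\,d_jb_i = \sum_i a_i\Big(\sum_j c_jd_j\Big)b_i = 0.$$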
Let $\xi_0 \in L^2(A)$ denote the cyclic vector corresponding to the identity in $A$, and let $P_0$ denote the rank-one projection corresponding to $\xi_0$. Then we clearly have $xP_0 = P_0 x = 0$ for all $x \in M$, and hence $M \subset P_0^\perp B(L^2(A)) P_0^\perp$. I claim that we actually have equality. This should be a well known fact to experts in II$_1$ factors (one just needs that $A$ is a unital $*$-algebra which generates a II$_1$ factor), however I don't know a reference off hand so I'll give a proof instead.
To see that $P_0^\perp \in M$ note that if $u \in A$ is a unitary then the spectral projection of $1 - L_{u}R_{u^*}$ corresponding to $\mathbb C \setminus \{ 0 \}$ is contained in $M$. The supremum of these projections over all $u$ is equal to $P_0^\perp$ since $\tilde A := L(A)''$ is a factor.
Note that the representations $L$ and $R$ extend to normal commuting representations of $\tilde A$ (for which I will use the same notation), and it is then easy to show that in the definition of $B$ we may allow $a_i$ and $b_i$ to be in $\tilde A$.
Note also that if $A_0 \subset \tilde A$ is a von Neumann subalgebra, and if $Q$ denotes the projection onto the closure of $(A_0' \cap \tilde A) \xi_0$, then $Q \in \mathbb CP_0 \oplus M$. This follows from the observation that for $\eta \in L^2(A)$ we have that $Q\eta$ is the unique element of minimal norm in the convex closure of $\{ L_uR_{u^*} \eta \mid u \in \mathcal U(A_0) \}$. Hence, $Q$ is in the weak operator topology convex closure of the set $\{ L_uR_{u^*} \mid u \in \mathcal U(A_0) \} \subset \mathbb CP_0 \oplus M$.
In particular, if $p \in \mathcal P(\tilde A)$ is a projection and we set $A_0 = ( \mathbb Cp \oplus \mathbb Cp^\perp )' \cap \tilde A$, then since $\tilde A$ is a factor it follows that the rank-one projection onto $(p - \tau(p)) \xi_0$ is contained in $M$.
Suppose now that we have a self-adjoint operator $T \in M' \cap P_0^\perp B(L^2(A)) P_0^\perp$. Then for $p \in \mathcal P(\tilde A)$ a non-zero projection, we have shown that there exists $\lambda_p \in \mathbb R$ so that $T( p - \tau(p) )\xi_0 = \lambda_p (p - \tau(p))\xi_0$. If $q \in \mathcal P(\tilde A)$ then we have \begin{align} \lambda_p \langle (p - \tau(p))\xi_0, (q - \tau(q))\xi_0 \rangle &= \langle T(p - \tau(p))\xi_0, (q - \tau(q))\xi_0 \rangle \\ &= \langle (p - \tau(p))\xi_0, T(q - \tau(q))\xi_0 \rangle \\ &= \lambda_q \langle (p - \tau(p))\xi_0, (q - \tau(q))\xi_0 \rangle. \end{align} Since $\tilde A$ is a factor, any pair of non-zero projections has a third projection so that these inner-products are non-zero. Hence, $\lambda := \lambda_p = \lambda_q$ for all non-zero projections $p, q \in \mathcal P(\tilde A)$. The span of projections is norm dense in $\tilde A$ by the spectral theorem, hence $T(x - \tau(x) ) \xi_0 = \lambda (x - \tau(x))\xi_0$ for all $x \in \tilde A$, and so $T = \lambda P_0^\perp$. Since $T$ was arbitrary, the double commutant theorem then gives the result.
-
Nice proof. Thanks! – Ulrich Pennig May 19 '14 at 20:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9981222152709961, "perplexity": 92.20805302227694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701167599.48/warc/CC-MAIN-20160205193927-00248-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/why-is-the-speed-of-light-186-000-miles-per-second.806871/ | # Why is the speed of light 186,000 miles per second?
1. Apr 5, 2015
### thejun
Why is the speed of light 186,000 miles per second? Is that how fast the ether will allow it to travel? And if that is the case, at the edge of the universe (the edge where the universe is speeding up), would the ether out there let light travel at higher or lower speeds? To me that would mean that light travels at 186,000 miles per second only in our area of the universe.
2. Apr 5, 2015
What ether?
3. Apr 5, 2015
### thejun
the ether that all particles travel through, which gives them their momentum, and probably their spin
4. Apr 5, 2015
### rootone
'Ether' is a very wrong term to use to describe space in modern physics.
It is a term used for a long-discarded idea, in which space is a substance through which light propagates in a way similar to sound propagating through air.
Transmission of light (or any electromagnetism) in a vacuum is very different, but it does have a fixed speed 'c', and this has been verified repeatedly in different ways.
Why 'c' has that particular value is unknown; it just does.
According to special relativity 'c' is constant for all points in space, if it wasn't then SR wouldn't work, but clearly it does work.
Last edited: Apr 5, 2015
5. Apr 5, 2015
### thejun
Why does light go at 186,000 miles per second? Why not 196,000, or 296,000?
What makes it travel at 186,000 miles per second?
6. Apr 5, 2015
### rootone
We don't know why it has that particular value any more than we know why Pi has a particular value.
It just does, it has been experimentally confirmed repeatedly, c is not a theory.
Last edited: Apr 5, 2015
7. Apr 5, 2015
### thejun
Hence, the ether. And you don't know why Pi has a particular value... the answer "it just does" sounds religious to me... Physics is theory; I'm just wondering what people are theorizing...
8. Apr 5, 2015
### rootone
The existence of Ether has been proven wrong experimentally.
http://en.wikipedia.org/wiki/Michelson–Morley_experiment
and also other experiments.
Aether theories are not consistent with what is actually observed.
Special relativity IS consistent with what is actually observed (repeatedly)
Observations, measurements, are facts, not a religion.
Last edited: Apr 5, 2015
9. Apr 5, 2015
### phinds
In addition to no ether that light travels in, there is no edge to the universe. You would do well to study some very basic cosmology.
10. Apr 5, 2015
### thejun
I'm not talking about measurements. How do you smash two protons together to get the Higgs? The Higgs is way more massive than the 2 protons, no matter how much energy you throw at it... If you can't answer why the speed of light is c, and you don't have any theories, then just say "I don't know" and let somebody else theorize about the question...
thanks for talking with me though!!!
11. Apr 5, 2015
### rootone
Protons colliding at near light speed apparently ARE able to produce a particle with a rest mass in the range where the Higgs particle was predicted to be.
That's what the LHC run1 set out to look for, that predicted particle (amongst other things), and they found it.
Last edited: Apr 5, 2015
12. Apr 5, 2015
### bahamagreen
I believe current thinking is that the ether theory and relativity theory make identical predictions so they appear experimentally indistinguishable, the only difference being that the ether theory assumes of all possible inertial frame of reference there is one unique frame at absolute rest (which can never be experimentally identified from the others) and relativity theory assumes there is no such unique absolute frame of rest.
thejun, I think the best explanation about c comes from Minkowski's famous "valiant piece of chalk" address, but it is not easy going; here is a step by step walk through that paper... Minkowski.
13. Apr 5, 2015
### Drakkith
Staff Emeritus
It travels that fast because free space has very specific values for the electric and magnetic constants: http://en.wikipedia.org/wiki/Speed_of_light#Propagation_of_light
Now, if you were to ask why those values are what they are, then the only answer we can give is that "we don't know".
Take it as "we don't know" instead. There are plenty of fundamental constants and rules which have no underlying explanation. That's the nature of science. You always have something which isn't currently explained.
14. Apr 8, 2015
### quincy harman
Pi is the ratio of a circle's circumference to its diameter. In other words, it's how many times you can fit the diameter into the circumference of any given circle.
16. Apr 8, 2015
### rootone
Yes that's right, and that ratio is a universal constant, having the same value for all circles.
The same can be said of 'c'; it is similarly a universal constant.
We know what the value of Pi is and we know what the value of c is, to a very high degree of precision.
The OP asked why 'c' has the value it does, and the fact is that we don't know, just as we don't know why Pi has the value it has.
All we do know in both cases is that they are universal constants, and knowing their value is extremely useful.
The situation with Pi is exactly analogous to that of 'c', and there are several other such universal constants.
We know what the value of the constant is, but we don't know why they have the values they do.
Universal constants such as these are observed facts, not a consequence of any theory.
As such they simply are what they are and we can make use of them without the neccessity of an underlying explanation for them.
Last edited: Apr 8, 2015
17. Apr 8, 2015
### phinds
Were you making a point with that statement or did you think we didn't know that?
18. Apr 8, 2015
### quincy harman
well he said we don't know why the value of pi is pi. so I didn't know if he knew. lol
19. Apr 8, 2015
### rootone
That's right, we don't know why Pi has the value it has.
We can measure it, and we calculate it to many decimal places,
but that doesn't explain why the value of Pi is what it is.
20. Apr 8, 2015
### wabbit
But is the question about c? In natural units c = 1, so there's no mystery in that. The number we get is an effect of our choice of units, it seems to me; is there more to it than that?
http://econ101help.com/microeconomics/linear-in-consumption-labor-leisure-tradeoff/ | # Labor-leisure tradeoff with linear in consumption utility function
The labor-leisure tradeoff is the tradeoff between working more hours and earning a wage for an extra hour versus the extra benefit received for consuming an extra hour of leisure.
The labor-leisure tradeoff can be used to determine the optimal labor supply by an individual. For example, consider a consumer with the following utility function:
$U = C - \frac{1}{2}(H)^2$
where C is the level of consumption and H is the labour supplied. This is called a linear-in-consumption utility function because the marginal utility of an extra unit of consumption is always 1. We can also observe that the marginal disutility from working an extra hour increases as the amount of labour supplied increases.
Suppose that this particular worker only receives wage income and does not save any income. His/her budget constraint would be:
$wH = C$
It is also assumed that the price of consumption is 1 in this case. If we substitute the budget constraint into the utility function, we can rewrite the utility function as:
$U = wH - \frac{1}{2}(H)^2$
We can now find the optimal labour supply by taking the derivative of U with respect to H:
$\frac{dU}{dH} = w - H= 0$
Which implies that in equilibrium $w = H$. To see the labour-leisure tradeoff, we note that the consumer's time constraint for a day is:
$24 = L + H$
where L is the amount of leisure that the worker enjoys. Rearranging and substituting in the equilibrium condition $H = w$, we find:
$L = 24 - w$
Thus there is a 1-to-1 negative relationship between leisure and wage. As the wage rate increases, the consumer will consume less leisure and work more. | {"extraction_info": {"found_math": true, "script_math_tex": 7, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8642557859420776, "perplexity": 1197.2646751948973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948738.65/warc/CC-MAIN-20180427002118-20180427022118-00623.warc.gz"} |
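The first-order condition above is easy to verify symbolically; here is a minimal sympy sketch (variable names are mine):

```python
import sympy as sp

w, H = sp.symbols('w H', positive=True)
U = w*H - sp.Rational(1, 2)*H**2        # utility with C = w*H substituted in

H_star = sp.solve(sp.Eq(sp.diff(U, H), 0), H)[0]
print(H_star)            # w  -> optimal labour supply equals the wage
print(24 - H_star)       # 24 - w  -> leisure falls one-for-one with the wage
```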
http://waterinfoods.com/gordon-ramsay-dtgw/f96e7c-central-angle-of-a-sector-calculator

# Central angle of a sector calculator

A central angle is an angle with a vertex at the centre of a circle, whose arms extend to the circumference. You can imagine the central angle being at the tip of a pizza slice in a large circular pizza. A sector is the plane figure enclosed by the two arms and the included arc; a sector whose central angle is 180° is a semicircle.

The central angle formula comes from the definition of a radian: 1 radian is the central angle whose arc length L equals the radius r. In general

θ = L / r

where θ is the central angle in radians, L is the arc length and r is the radius, so the only two measurements needed to calculate the central angle are the arc length and the radius. For example, an arc length of 2 and a radius of 2 give θ = 1 radian ≈ 57.296°, and an arc length of 19 cm with a radius of 5 cm gives θ = 3.8 radians ≈ 217.7°.

How many pizza slices with a central angle of 1 radian could you cut from a circular pizza? Since each crust length equals the radius, 2πr / r = 2π ≈ 6.28 slices fit along the pizza perimeter; equivalently, the central angles of all the slices add up to 2π radians = 360°.

The area of a sector is the fraction of the full circle's area A = πr² cut out by the central angle:

sector area = (C / 360) × πr² (central angle C in degrees), or
sector area = r²θ / 2 (central angle θ in radians).

For example, if the angle is 45° and the radius is 10 inches, the area is (45/360) × π × 10² ≈ 39.27 square inches. For a radius of 18 cm and a sector angle of 25°, the area is (π × 18² × 25)/360 ≈ 70.7 cm². Working backwards, a sector with a radius of 6 cm and an area of 35.4 cm² has central angle 35.4/(36π) × 360° ≈ 112.67°. The perimeter of a sector is the arc plus the two radii, rθ + 2r = r(θ + 2), with θ in radians.

A sector can also be rolled up to make a cone: joining the two straight edges, the radius s of the sector becomes the slant height of the cone.

Bonus challenge: how far does the Earth travel in each season? Simplify the problem by assuming the Earth's orbit is circular; the Earth is approximately 149.6 million km away from the Sun. One quarter of the orbit corresponds to a central angle of 90° = π/2 radians, so each season the Earth travels an arc length of rθ = 149.6 million km × π/2 ≈ 234.9 million km.
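The calculations above are easy to script; here is a minimal Python sketch of the same formulas (the function names are mine, not from any particular calculator):

```python
import math

def sector(radius, central_angle_deg):
    """Arc length and area of a circular sector."""
    theta = math.radians(central_angle_deg)
    return radius * theta, 0.5 * radius**2 * theta

def central_angle(arc_length, radius):
    """Central angle in radians: theta = L / r."""
    return arc_length / radius

print(central_angle(2, 2))      # 1 radian, i.e. about 57.296 degrees
print(sector(10, 45)[1])        # ~39.27 square inches
print(sector(18, 25)[1])        # ~70.7 cm^2
```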
http://www.physicsforums.com/showthread.php?t=271597 | # Density of states
by kasse
Tags: density, states
P: 463
1. The problem statement, all variables and given/known data
We study a one-dimensional metal with length L at 0 K, and ignore the electron spin. Assume that the electrons do not interact with each other. The electron states are given by $$\psi(x) = \frac{1}{\sqrt{L}}\exp(ikx), \qquad \psi(x) = \psi(x + L)$$ What is the density of states at the Fermi level for this metal?
3. The attempt at a solution
The total energy of the system is $$E = \frac{\hbar^{2}\pi^{2}n^{2}}{2mL^{2}}$$ where n^2 is the sum of the squares of the quantum numbers that determine each quantum state. At a certain energy all states up to $$E_{F}(0)=E_{0}n^{2}_{F}$$ are filled.
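For reference, a standard way to finish the counting with periodic boundary conditions (a sketch, not part of the original post): the allowed wavevectors are $$k_n = 2\pi n/L$$, and counting both signs of k while ignoring spin gives
$$N(E) = \frac{L}{\pi\hbar}\sqrt{2mE}, \qquad g(E) = \frac{dN}{dE} = \frac{L}{\pi\hbar}\sqrt{\frac{m}{2E}},$$
evaluated at E = E_F.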
https://www.physicsforums.com/threads/solving-for-current-in-a-circuit.710006/ | # Solving for current in a circuit
• #1
I have no idea what I am doing. I don't know how to deal with the two symbols (the one that looks like a diamond and the circle with an arrow).
If those were replaced with a battery I would be able to solve it, but I can't, because I don't know which direction the current is flowing in each branch of the circuit.
Anyways, I did KVL for each loop; can someone tell me if I am correct?
Loop 1:
I0(6kΩ) - Vx + i1(8Ω) = 0
Loop 2:
V(voltage over the 5 A element) - i1(8Ω) + Vx = 0
Loop 3: -3Vx - Vx = 0
Outermost loop: i0 - 3Vx = 0
Can someone let me know if that is correct?
Then what should I do next to solve for i0?
[Attachment: wt.jpg]
• #2
I just need to know which direction the current is going through each branch. Does anyone know?
• #3
gneill
Mentor
The circle with the arrow inside is an ideal current source. It will always inject exactly 5 A no matter what. The diamond with the arrow inside is called a controlled current source. It will produce 3Vx amps. That is, by some means not shown it "measures" the potential Vx across the 4 Ω resistor and then produces an amperage of 3 times the magnitude of that potential difference. If you wish, you can think of the coefficient "3" as having units of Ohms so then it becomes 3 Ω * Vx, which by Ohm's law results in Amperes.
Regarding your solution attempt, is there some particular reason you chose to use KVL loop analysis rather than nodal analysis? The reason I ask is that your circuit contains only current sources and one independent node, which makes it very amenable to nodal analysis.
If you really, really want to use loop analysis, since the two current sources are in parallel I'd suggest combining them into a single controlled current source: I = 5 - 3Vx amps directed upwards. That will eliminate the third loop entirely.
When you first write your loop equations, don't use Vx as a potential drop. Write it as -I1*4Ω. Use that for Vx everywhere; that will tie in the current I1 to the controlled source's current in your equations without needing a separate equation for Vx.
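Since the attached figure isn't reproduced here, the following sympy sketch solves a stand-in one-node circuit of the kind described above (the 5 A and 3Vx sources feeding a single node, with Vx taken across the 4 Ω resistor from that node to ground); the topology is an assumption for illustration only:

```python
import sympy as sp

V = sp.symbols('V')      # the single independent node voltage
R1, R2 = 4, 8            # ohms (values from the thread; topology assumed)

Vx = V                   # assume Vx is the node-to-ground drop across R1
# KCL at the node: injected source current = current leaving through resistors
kcl = sp.Eq(5 - 3*Vx, V/R1 + V/R2)
V_sol = sp.solve(kcl, V)[0]
print(V_sol, float(V_sol))   # 40/27 V for this assumed topology
```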
[Attachment: Fig1.gif]
• #4
gneill
Mentor
You won't know for sure until you've solved the circuit. If you guess wrong for a current at the outset, the value you find will be negative. No big deal, that just means it's really flowing in the opposite direction to your guess.
• #5
edit..
https://economics.stackexchange.com/questions/26295/ls-book-recursive-macro-theory-chapter8-complete-market | # LS book (Recursive Macro Theory). Chapter8: Complete Market
I am reading the LS book, Chapter 8 - Complete Markets (3rd edition). There is an example on page 264 that I am quite confused about.
I attach a picture of the example below.
I don't understand why we have a history (0,0,0,...,0,1,1), since if $$s_t=0$$, then $$s_{t+1} = 0$$ for sure, right?
Or do the authors mean that the history here is in the order $$(s_t, s_{t-1},....,s_1,s_0)$$? Then it makes sense, since we know that $$s_0=1, s_1=1$$; but isn't that quite weird, since we usually write a history as $$h_t=(s_0,s_1,...s_t)$$, right?
Anyone has an idea? Really appreciate your help!
$$\begin{eqnarray} \pi_t(0,0,\cdots,1,1) &=& \color{blue}{\pi(s_t = 0 | s_{t - 1} = 0)} \cdots \color{magenta}{\pi(s_2 = 0 | s_{1} = 1)} \color{red}{\pi(s_1 = 1 | s_{0} = 1)}\color{orange}{\pi(s_0 = 1)} \\ &=& \color{blue}{1} \times \cdots\times \color{magenta}{0.5}\times\color{red}{1}\times \color{orange}{1} = 0.5 \end{eqnarray}$$
But you are right: if $$t > 2$$ then $$\pi(s_{t + 1} = 0 | s_{t} = 0) = 1$$, so if $$s_t = 0$$ then $$s_{t + 1} = 0$$. But at $$t = 2$$ there's a 50/50 chance that the state changes from $$1$$ to $$0$$, so the state may remain $$1$$, as in the first history, or switch to $$0$$, as in the second one.
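A small sketch of the computation in the answer, written in the $$(s_0, s_1, \dots, s_t)$$ ordering. It assumes, as the two histories in the example suggest, that the t = 2 branch is once-and-for-all, i.e. both states are absorbing afterwards:

```python
def history_prob(history):
    """Probability of a history (s0, s1, ..., st) in the LS example."""
    p = 1.0 if history[0] == 1 else 0.0                   # pi(s0 = 1) = 1
    for t in range(1, len(history)):
        prev, cur = history[t - 1], history[t]
        if t == 1:
            p *= 1.0 if (prev, cur) == (1, 1) else 0.0    # pi(s1=1 | s0=1) = 1
        elif t == 2:
            p *= 0.5                                      # 50/50 branch at t = 2
        else:
            p *= 1.0 if cur == prev else 0.0              # absorbing afterwards
    return p

print(history_prob([1, 1, 0, 0, 0, 0]))   # 0.5
print(history_prob([1, 1, 1, 1, 1, 1]))   # 0.5
```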
https://channel9.msdn.com/Forums/Coffeehouse/546300-Scan-with-Microsoft-Security-Essentials-functionality/546300 | # Coffeehouse Post
## Single Post Permalink
• Please help me, I consider myself a rather 'advanced' Windows user but I'm having a simple problem...
Microsoft Security Essentials has this context-menu option, "Scan with Microsoft Security Essentials..." -- what I want is the ability to scan things on the fly, but Google showed some discouraging entries, such as http://social.answers.microsoft.com/Forums/en-US/msescan/thread/af9ea7da-2ea7-4455-95cd-875646bdde59...
But I'm sure that if there is a context menu, there is a way to make it scan files or directories... So I started in the registry. I found an MSSE shell-extension reference to "{0365FE2C-F183-4091-AC82-BFC39FB75C49}", which points at c:\PROGRA~1\MICROS~3\shellext.dll; makes sense. But now what? When I look at the exported functions of the DLL, all I see is the typical register/unregister-type stuff.
Then I thought I was digging too deep, so I tried scanning a file and just looking at the running command line; that just revealed /hide and /runkey command-line options.
So, in hopes of learning some fundamental Windows troubleshooting, I've come to the experts to assist me with what seems to be a simple question that has already taken me too long to answer.
Thanks,
Dane | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8871505856513977, "perplexity": 2219.3404869922283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00642-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://stats.stackexchange.com/questions/86683/significance-of-interactions | # Significance of interactions
It was suggested to me recently that the significance threshold for an interaction term in a GLM has to be stricter than for a main effect. For example, p<0.05 is commonly taken as significant for a main effect, but a two-way interaction would have to clear a stricter bar (p<0.025 or something) and a three-way interaction stricter still. However, I can't find any literature on this, so I am unsure. Are they getting confused with multiple comparisons, I wonder? Or am I confused and the two things are linked?
-
I can conceive that someone might choose to apply a more stringent significance level on interactions. Since the number of interactions and higher order interactions grow at a rapid rate (if there are enough factors), I can see some argument for it, such as if one were trying to make a similar number of type I errors at each order. I don't see how that somewhat plausible argument leads to an 'ought', though. – Glen_b Feb 15 at 23:08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9012113213539124, "perplexity": 408.4110520815534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444829.13/warc/CC-MAIN-20141017005724-00287-ip-10-16-133-185.ec2.internal.warc.gz"} |
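To make the point in the comment concrete: the number of k-way interaction terms grows quickly with the number of factors, so holding a fixed familywise error budget within each order would force stricter per-term levels. A small Python sketch (six factors is an assumed number):

```python
from math import comb

factors, alpha = 6, 0.05
for k in (1, 2, 3):                    # main effects, two-way, three-way terms
    m = comb(factors, k)               # number of terms of that order
    print(k, m, round(alpha / m, 4))   # Bonferroni-style per-term level
```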
http://www.ics.uci.edu/~theory/269/070126.html | # Approximation Algorithms for Embedding General Metrics Into Trees
## Presented by Kevin Wortman
Abstract:
We consider the problem of embedding general metrics into trees. We give the first non-trivial approximation algorithm for minimizing the multiplicative distortion. Our algorithm produces an embedding with distortion (c log n)^(O(sqrt(log delta))), where c is the optimal distortion, and delta is the spread of the metric (i.e. the ratio of the diameter over the minimum distance). We give an improved O(1)-approximation algorithm for the case where the input is the shortest path metric over an unweighted graph. Moreover, we show that by composing our approximation algorithm for embedding general metrics into trees, with the approximation algorithm of [BCIS05] for embedding trees into the line, we obtain an improved approximation algorithm for embedding general metrics into the line.
We also provide almost tight bounds for the relation between embedding into trees and embedding into spanning subtrees. We show that for any unweighted graph G, the ratio of the distortion required to embed G into a spanning subtree, over the distortion of an optimal tree embedding of G, is at most O(log n). We complement this bound by exhibiting a family of graphs for which the ratio is Omega(log n/ log log n). | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9875548481941223, "perplexity": 535.3399045191941}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805708.41/warc/CC-MAIN-20171119172232-20171119192232-00734.warc.gz"} |
https://www.physicsforums.com/threads/how-many-e-folds.913749/ | # I How many e folds?
1. May 5, 2017
### windy miller
Is there any way to constrain with data how many e-folds occurred during inflation (or during our era of inflation, in the case of eternal inflation)?
2. May 5, 2017
### bapowell
There's no way to collect data on inflation that occurred prior to the last 60 e-folds or so, since such length scales are outside today's cosmological horizon.
3. May 5, 2017
### kimbyd
Sort of. It depends upon the inflation model.
Basically, for a given inflation model, different numbers of e-folds of inflation result in different spectral tilt (that is, the power spectrum's shape is slightly altered by the number of e-folds).
However, this can't be done in general. This depends upon a very specific model of inflation. Other models, with other dynamics, will show very different numbers in terms of the number of e-folds. I'm not currently aware of any methodology to measure the number of e-folds directly.
4. May 10, 2017
### Chronos
Most theorists would agree no less than 50 e-folds are necessary, many prefer 60.
5. May 10, 2017
### bapowell
It's not so much a matter of it being "necessary"...when you begin considering fewer than 50 efolds or so, you begin to have real problems satisfying the flatness and entropy constraints.
6. May 11, 2017
### Chronos
I mean necessary in the sense of modeling a universe consistent with the observational constraints you hint at.
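For scale, an e-fold count N means the scale factor grows by a factor of e^N, since N = ln(a_end / a_start); the 50-60 e-folds discussed above correspond to enormous expansion factors. A quick sketch:

```python
import math

for N in (50, 60):
    print(N, f"{math.exp(N):.2e}")   # 50 -> 5.18e+21, 60 -> 1.14e+26
```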
https://mathoverflow.net/questions/84503/mechanics-convergence-to-an-equilibrium-point | # mechanics: convergence to an equilibrium point
Hello,
this is a math forum, I know, but my question is about classical mechanics. I am looking for a general (but simple) proof of the very intuitive idea physicists have about the following problem.
We consider a particle in $\mathbb{R}^d$ evolving in a potential $V$ with a friction coefficient $\gamma$. The differential equation is thus $$x''= -\nabla V(x) - \gamma x'.$$ I assume that the potential is as smooth as we want and is bounded from below. Edit: I also assume that $V$ is "large enough" at $\pm\infty$: there exist $x_- < x_0 < x_+$ (where $x_0 = x(0)$) such that $V(x_\pm) > E_0$, where $E_0 = x'(0)^2/2 + V(x_0)$ is the initial energy. In this case, the particle cannot go beyond these points.
The intuition says that the particle will stop at an extremum of $V$ (which one depends on the initial condition). How do we actually prove it?
It is easy to see that, if it stops, it necessarily stops at an extremum of $V$. My question is more about the fact that it stops at all...
I would like a proof that does not require abstract ideas such as Lagrangians, so that it can be presented to first- or second-year students. There are probably many references, but I do not know any.
EDIT: of course, it is easy to prove that the mechanical energy $E=x'^2/2+V(x)$ is decreasing and bounded from below, and thus converges; but getting from that back to the behaviour of $x$ and $x'$ doesn't look so easy.
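For completeness, the one-line computation behind this claim, which the post leaves implicit: $$\frac{dE}{dt} = x'\cdot x'' + \nabla V(x)\cdot x' = x'\cdot\bigl(-\nabla V(x) - \gamma x'\bigr) + \nabla V(x)\cdot x' = -\gamma\,|x'|^2 \le 0,$$ so $E$ is nonincreasing, and strictly decreasing whenever $x' \neq 0$.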
• 1) "Bounded from below" isn't enough. You rather need "is greater than the initial value of $E$ near $\infty$. 2) The limit point doesn't need to be a minimum in general. 3) Contrary to your belief, you can oscillate forever. Imagine an infinite road carved into a gentle mountain slope like a trough that spirals into a flat disk-shaped valley. – fedja Dec 29 '11 at 12:30
• I agree with point 1, in order to prevent a trajectory from escaping to infinity; I have edited my post. For point 2), that's why I wrote "extremum" and not "minimum". My problem is with your point 3: once you are in the flat valley, the friction still operates, your velocity decreases exponentially, and you should stop somewhere! Can you exhibit a concrete example of the behaviour you mention? – Damien S. Dec 29 '11 at 13:18
• 2) A saddle is perfectly possible as well. The right word in English is "a critical point". 3) a) You never reach the valley: the road makes infinitely many loops on the slope. b) OK, I'll post something a bit later. – fedja Dec 29 '11 at 13:39
• Thanks for the saddle point! In general, we require only $\nabla V=0$. I was making my drawings in 1D and thus skipped it... For your 3), I am very interested in your example, which I still do not understand. What happens if we restrict to $d=1$? – Damien S. Dec 29 '11 at 14:05
• OK, I posted the 2D example as an answer. When $d=1$, such an effect is impossible, so the statement is true. – fedja Dec 29 '11 at 20:02
Consider the total energy $$E = x'^2/2 + V(x)$$ and assume that $V$ is bounded below and $V(x) \rightarrow \infty$ as $\|x\|\rightarrow \infty$ (i.e., $V$ is radially unbounded). Since $$E' = -\gamma x'^2 < 0, \quad \forall x' \neq 0,$$ it follows from LaSalle's invariance principle that all solutions tend to the largest invariant set in $\{(x,x') \mid x' = 0\}$, namely $$M = \{\,(x,x') \mid x'=0,\ \nabla V(x) = 0 \,\}.$$ If every point in this set is isolated, you will have convergence to an equilibrium point (which need not be stable). Otherwise, you may have quasi-convergence, meaning that while every solution approaches $M$, $\lim_{t\rightarrow\infty} (x(t),x'(t))$ may not exist.
(Also, if $V$ is radially unbounded (and nice), the level sets $\{\,(x,x') \mid E(x,x') \leq E(x_{0},x'_{0})\,\}$ are compact, so the assumption of boundedness below can be replaced by saying, e.g., that $V$ should be continuously differentiable.)
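A minimal numerical sketch of this convergence (not part of the original answer; the double-well potential $V(x)=(x^2-1)^2$, with critical points $x=0,\pm 1$, is just an illustrative choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped motion x'' = -V'(x) - gamma*x' in the double well V(x) = (x^2 - 1)^2.
gamma = 0.5
V = lambda x: (x**2 - 1)**2
dV = lambda x: 4.0 * x * (x**2 - 1)

def rhs(t, y):
    x, v = y  # position, velocity
    return [v, -dV(x) - gamma * v]

E = lambda x, v: 0.5 * v**2 + V(x)

sol = solve_ivp(rhs, (0.0, 200.0), [2.0, 0.0], rtol=1e-10, atol=1e-10)
x_T, v_T = sol.y[:, -1]
print(f"x(T) = {x_T:+.6f},  x'(T) = {v_T:.2e}")                 # settles at one of x = +1, -1
print(f"E(0) = {E(2.0, 0.0):.3f} -> E(T) = {E(x_T, v_T):.6f}")  # energy has decreased
```

Which well the trajectory lands in depends delicately on $\gamma$ and the initial data; what the LaSalle argument guarantees is only that $x' \to 0$ and that $x$ approaches the set of critical points.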
OK, here is an explicit construction. Let $\gamma=1$. Consider $V(r,\theta)=[1.1+\sin(\frac 1{r-1}+\theta)]f(r)$ in polar coordinates, where $f(r)=0$ on $[0,1]$ and $f(r)=\exp(-(r-1)^{-1/2})$ for $r\ge 1$. Then $\nabla V\ne 0$ for $r>1$. If you start with velocity $-\nabla V$ in the trough, where $r$ is slightly greater than $1$ and $\theta$ is chosen so that $\sin=-1$, you will never be able to go over the ridges where $\sin=1$.
The reason is that we can control the quantity $u=x'+\nabla V(x)$ pretty well. Indeed, $|u'+u|<0.00001|x'|$ because the second differential of $V$ is very small for $r$ close to $1$. Let $G(r)=2\frac{f(r)}{(r-1)^2}$. Note that $G$ dominates $|\nabla V|$ and is comparable to it up to a factor of $4$ where $\sin=0$. Hence, $|u'+u|\le 0.1|u|$ whenever $|u|>0.01 G$. Note also that $G$ doesn't change noticeably within a single turn of the trough, and it takes at least constant time to complete one revolution while staying in the trough. Thus, $|u|\le 0.02G$ as long as we follow the trough at all; but as long as $|u|<0.03G$, we cannot even cross the middle of the trough wall $\sin=0$, because $-\nabla V$ points almost directly towards the bottom of the trough there.
• $\int |x'|^2\,dt<+\infty$ and $x''$ is bounded. Hence $x'\to 0$. Hence $V$ tends to some limit. That much is always true. On the line, if the limit of $x$ fails to exist, there exists a point from which you depart a fixed distance in both directions and to which you return infinitely many times with arbitrarily low velocity. Moreover, this point is a (non-strict) local minimum (if not, the potential nearby is lower, and once the velocity drops low enough, the return is impossible). But you cannot go far from a local minimum if you do not have much kinetic energy. – fedja Dec 29 '11 at 21:00
• In general, you get attracted to some connected closed set where $V$ is constant and $\nabla V=0$, but that's all. – fedja Dec 29 '11 at 21:03
http://mathhelpforum.com/math/193959-how-much-math-there-know.html | # Math Help - How much math is there to know?
1. ## How much math is there to know?
Is there math that goes far beyond that which is taught at graduate school? If so, how much? Is it likely that a mathematician will run out of problems to solve and theorems to prove in his or her lifetime?
2. ## Re: How much math is there to know?
When Stanislaw Ulam was alive, the rate of published theorems was about 200,000 per year. This, I think, was in the 1970s. Today the number is surely larger. Now imagine the amount of mathematics out there: at that rate alone, the 36 years from 1975 to 2011 would account for 36 × 200,000 = 7,200,000 theorems.
3. ## Re: How much math is there to know?
Originally Posted by RogueDemon
Is there math that goes far beyond that which is taught at graduate school? If so, how much? Is it likely that a mathematician will run out of problems to solve and theorems to prove in his or her lifetime?
You tell me.