https://kawahara.ca/category/math/
## Weighted Precision and Recall Equation

The "weighted" precision or recall score using scikit-learn is defined as

$$\frac{1}{\sum_{l\in \color{cyan}{L}} |\color{green}{\hat{y}}_l|} \sum_{l \in \color{cyan}{L}} |\color{green}{\hat{y}}_l| \phi(\color{magenta}{y}_l, \color{green}{\hat{y}}_l)$$

where

• $$\color{cyan}{L}$$ is the set of labels
• $$\color{green}{\hat{y}}$$ is the true label
• $$\color{magenta}{y}$$ is the predicted label
• $$\color{green}{\hat{y}}_l$$ is the set of true labels that have the label $$l$$
• $$|\color{green}{\hat{y}}_l|$$ is the number of true labels that have the label $$l$$
• $$\phi(\color{magenta}{y}_l, \color{green}{\hat{y}}_l)$$ computes the precision or recall over the true and predicted labels that have the label $$l$$. To compute precision, let $$\phi(A,B) = \frac{|A \cap B|}{|A|}$$. To compute recall, let $$\phi(A,B) = \frac{|A \cap B|}{|B|}$$.

## How is Weighted Precision and Recall Calculated?

Let's break this apart a bit more.

## How to Compute the Derivative of a Sigmoid Function (fully worked example)

This is a sigmoid function:

$$s(x) = \frac{1}{1 + e^{-x}}$$

The sigmoid function looks like this (made with a bit of MATLAB code):

```matlab
x = -10:0.1:10;
s = 1 ./ (1 + exp(-x));
figure; plot(x, s); title('sigmoid');
```

Alright, now let's put on our calculus hats...
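Before putting on those calculus hats, a quick numerical sanity check is reassuring. The standard closed form for the sigmoid's derivative is $s'(x) = s(x)\,(1 - s(x))$, and a central finite difference agrees with it; this Python sketch (function names are my own) shows the comparison:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    # the standard closed form: s'(x) = s(x) * (1 - s(x))
    s = sigmoid(x)
    return s * (1.0 - s)

# central finite differences agree with the closed form
h = 1e-6
for x in (-2.0, 0.0, 3.0):
    numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2.0 * h)
    assert abs(numeric - sigmoid_derivative(x)) < 1e-8
```

At $x = 0$ this gives $s(0) = 0.5$ and $s'(0) = 0.25$, the sigmoid's maximum slope.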
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9796860218048096, "perplexity": 2174.7208841139145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250595282.35/warc/CC-MAIN-20200119205448-20200119233448-00353.warc.gz"}
https://cs.stackexchange.com/questions/2539/minimizing-the-total-variation-of-a-sequence-of-discrete-choices
# Minimizing the total variation of a sequence of discrete choices My setup is something like this: I have a sequence of sets of integers $C_i (1\leq i\leq n)$, with $|C_i|$ relatively small - on the order of four or five items for all $i$. I want to choose a sequence $x_i (1\leq i\leq n)$ with each $x_i\in C_i$ such that the total variation (either $\ell_1$ or $\ell_2$, i.e. $\sum_{i=1}^{n-1} |x_i-x_{i+1}|$ or $\sum_{i=1}^{n-1} \left(x_i-x_{i+1}\right)^2$) is minimized. While it seems like the choice for each $x_i$ is 'local', the problem is that choices can propagate and have non-local effects and so the problem seems inherently global in nature. My primary concern is in a practical algorithm for the problem; right now I'm using annealing methods based on mutating short subsequences, and while they should be all right it seems like I ought to be able to do better. But I'm also interested in the abstract complexity — my hunch would be that the standard query version ('is there a solution of total variation $\leq k$?') would be NP-complete via a reduction from some constraint problem like 3-SAT but I can't quite see the reduction. Any pointers to previous study would be welcome — it seems like such a natural problem that I can't believe it hasn't been looked at before, but my searches so far haven't turned up anything quite like it. • Nice question! Just a clarifying question: the length of $x_i$ is $n$, but must you pick some element from each $C_i$? Or is it okay to have some number of sets from which you don't pick at all from? – Juho Jun 29 '12 at 9:26 • @mrm There must be an element from each $C_i$ - the $x$s are directly indexed from $1\ldots n$ just as the $C$s are. – Steven Stadnicki Jun 29 '12 at 15:00 Here is a dynamic program. Assume that $C_i = \{C_{i_1}, \ldots, C_{i_m}\}$ for all $i \in [n]$ for the sake of clarity; the following can be adapted to work if the $C_i$ have different cardinalities. 
Let $\operatorname{Cost}(i, j)$ be the minimum cost of a sequence over the first $i$ sets, ending with $C_{i_j}$. The following recursion describes $\operatorname{Cost}$:

$$\begin{align} \operatorname{Cost}(1,j) &= 0, & 1 \leq j \leq m, \\ \operatorname{Cost}(i,j) &= \min_{k = 1}^m \left(\operatorname{Cost}(i - 1, k) + \lvert C_{(i-1)_k} - C_{i_j} \rvert\right), & 2 \leq i \leq n,\ 1 \leq j \leq m. \end{align}$$

The overall minimum cost is $\min_{j = 1}^m \operatorname{Cost}(n, j)$; the actual sequence of choices can be determined by examining the argmins along the way. The runtime is $O(nm^2)$: there are $nm$ table entries, and each takes a minimum over $m$ predecessors.

• I tried to improve your answer's clarity both in formatting and presentation; please check that I did not mess things up. It would be nice if you included an argument why what you propose is correct. – Raphael Jun 29 '12 at 13:21
• Considering Nicholas' answer, this is similar to the Bellman-Ford algorithm, tailored to the problem at hand. – Raphael Jun 30 '12 at 10:52
• Both answers are really excellent (and as Raphael notes, very similar), but while I like the broad applicability of the other, I really prefer this one for its direct application to my particular question. Thank you! – Steven Stadnicki Jul 18 '12 at 0:00

It seems that this can be solved by simply computing a shortest path in a directed acyclic graph. The reasoning is that your objective function minimizes the total distance between selected "neighbors" in your sets $\mathcal{C} = \{C_1, \dotsc, C_n\}$. Construct an $n$-staged graph $G = (\bigcup_{i=1}^n V_i, E)$ where each $v \in V_i$ corresponds to a unique element $x \in C_i$. For each $u \in V_i, v \in V_{i+1}$, add a directed edge $(u, v)$ with cost equal to either the $\ell_1$ or $\ell_2$ distance. Now add a source vertex $s$ with 0-cost edges to $V_1$ and a sink vertex $t$ with 0-cost edges from $V_n$.
Given that $G$ is a DAG and both distance functions force edge costs to be non-negative, you can compute the shortest path in $O(V + E)$ with topological sort and dynamic programming (similar to description here).
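The stage-by-stage dynamic program is short to implement. Here is a sketch in Python (the function name and the `squared` flag for choosing $\ell_1$ vs. $\ell_2$ cost are my own); it also handles sets of different cardinalities naturally:

```python
def min_total_variation(sets, squared=False):
    """Minimum total variation of x_1..x_n with x_i drawn from sets[i-1].

    cost[j] holds the minimum cost of a sequence over the sets seen so
    far that ends at the j-th element of the current set.
    """
    dist = (lambda a, b: (a - b) ** 2) if squared else (lambda a, b: abs(a - b))
    cost = [0] * len(sets[0])
    for prev, cur in zip(sets, sets[1:]):
        # extend every partial sequence by one stage, keeping the best
        # predecessor for each candidate element of the current set
        cost = [min(c + dist(p, x) for c, p in zip(cost, prev)) for x in cur]
    return min(cost)
```

For example, `min_total_variation([[1, 5], [2, 9], [3, 4]])` picks the sequence 1, 2, 3 for a total variation of 2. Recovering the sequence itself just requires storing the argmin at each step.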
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9594611525535583, "perplexity": 282.7420072205226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526508.29/warc/CC-MAIN-20190720111631-20190720133631-00350.warc.gz"}
https://scholarship.rice.edu/handle/1911/17012/browse?rpp=20%CE%B7l=-1&sort_by=1&type=title&starts_with=N%E2%88%A8der=ASC
Now showing items 1210-1229 of 2182 • #### Observation of $t\overline{t}H$ Production  (2018) The observation of Higgs boson production in association with a top quark-antiquark pair is reported, based on a combined analysis of proton-proton collision data at center-of-mass energies of √s=7, 8, and 13 TeV, corresponding ... • #### Observation of a diffractive contribution to dijet production in proton-proton collisions at √s=7  TeV  (2013) The cross section for dijet production in proton-proton collisions at √s=7  TeV is presented as a function of ξ˜, a variable that approximates the fractional momentum loss of the scattered proton in single-diffractive ... • #### Observation of a Helical Luttinger Liquid in InAs/GaSb Quantum Spin Hall Edges  (2015) We report on the observation of a helical Luttinger liquid in the edge of an InAs/GaSb quantum spin Hall insulator, which shows characteristic suppression of conductance at low temperature and low bias voltage. Moreover, ... • #### Observation of a narrow mass state decaying into Υ(1S)+γ in pp¯ collisions at √s=1.96  TeV  (2012) Using data corresponding to an integrated luminosity of 1.3  fb−1, we observe a narrow mass state decaying into Υ(1S)+γ, where the Υ(1S) meson is detected by its decay into a pair of oppositely charged muons, and the photon ... • #### Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC  (2012) Results are presented from searches for the standard model Higgs boson in proton–proton collisions at √s=7 and 8 TeV in the Compact Muon Solenoid experiment at the LHC, using data samples corresponding to integrated ... • #### Observation of a new boson with mass near 125 GeV in pp collisions at √s=7 and 8 TeV  (2013) A detailed description is reported of the analysis used by the CMS Collaboration in the search for the standard model Higgs boson in pp collisions at the LHC, which led to the observation of a new boson. The data sample ... 
• #### Observation of a New Ξb Baryon  (2012) The observation of a new b baryon via its strong decay into Ξ−bπ+ (plus charge conjugates) is reported. The measurement uses a data sample of pp collisions at √s=7  TeV collected by the CMS experiment at the LHC, corresponding ... • #### Observation of a peaking structure in the J/ψϕ mass spectrum from B±→ J/ψϕK± decays  (2014) A peaking structure in the J/ψϕ mass spectrum near threshold is observed in B±→J/ψϕK± decays, produced in pp collisions at √s=7 TeV collected with the CMS detector at the LHC. The data sample, selected on the basis of the ... • #### Observation of an Energy-Dependent Difference in Elliptic Flow between Particles and Antiparticles in Relativistic Heavy Ion Collisions  (2013) Elliptic flow (v2) values for identified particles at midrapidity in Au+Au collisions, measured by the STAR experiment in the beam energy scan at RHIC at √sNN=7.7–62.4  GeV, are presented. A beam-energy-dependent difference ... • #### Observation of antiferromagnetic correlations in the Hubbard model with ultracold atoms  (2015) Ultracold atoms in optical lattices have great potential to contribute to a better understanding of some of the most important issues in many-body physics, such as high-temperature superconductivity[1]. The Hubbard model—a ... • #### Observation of Charge Asymmetry Dependence of Pion Elliptic Flow and the Possible Chiral Magnetic Wave in Heavy-Ion Collisions  (2015) We present measurements of π− and π+ elliptic flow, v2, at midrapidity in Au+Au collisions at √sNN=200, 62.4, 39, 27, 19.6, 11.5, and 7.7 GeV, as a function of event-by-event charge asymmetry, Ach, based on data from the ... 
• #### Observation of Correlated Azimuthal Anisotropy Fourier Harmonics in $pp$ and $p+\mathrm{Pb}$ Collisions at the LHC  (2018) The azimuthal anisotropy Fourier coefficients ($v_n$) in 8.16 TeV p+Pb data are extracted via long-range two-particle correlations as a function of the event multiplicity and compared to corresponding results in pp and PbPb ... • #### Observation of D0 Meson Nuclear Modifications in Au+Au Collisions at √sNN=200  GeV  (2014) We report the first measurement of charmed-hadron (D0) production via the hadronic decay channel (D0→K−+π+) in Au+Au collisions at √sNN=200  GeV with the STAR experiment. The charm production cross section per nucleon-nucleon ... • #### Observation of Edge Transport in the Disordered Regime of Topologically Insulating InAs/GaSb Quantum Wells  (2014-01) We observe edge transport in the topologically insulating InAs/GaSb system in the disordered regime. Using asymmetric current paths we show that conduction occurs exclusively along the device edge, exhibiting a large Hall ... • #### Observation of Electroweak Production of Same-Sign $W$ Boson Pairs in the Two Jet and Two Same-Sign Lepton Final State in Proton-Proton Collisions at $\sqrt{s}=13\ \mathrm{TeV}$  (2018) The first observation of electroweak production of same-sign W boson pairs in proton-proton collisions is reported. The data sample corresponds to an integrated luminosity of 35.9 fb−1 collected at a center-of-mass energy ... • #### Observation of Forbidden Exciton Transitions Mediated by Coulomb Interactions in Photoexcited Semiconductor Quantum Wells  (2013) We use terahertz pulses to induce resonant transitions between the eigenstates of optically generated exciton populations in a high-quality semiconductor quantum well sample. Monitoring the excitonic photoluminescence, we ... 
• #### Observation of long-range, near-side angular correlations in pPb collisions at the LHC  (2013) Results on two-particle angular correlations for charged particles emitted in pPb collisions at a nucleon–nucleon center-of-mass energy of 5.02 TeV are presented. The analysis uses two million collisions collected with the ... • #### Observation of Momentum-Confined In-Gap Impurity State in Ba0.6K0.4Fe2As2: Evidence for Antiphase s± Pairing  (2014) We report the observation by angle-resolved photoemission spectroscopy of an impurity state located inside the superconducting gap of Ba0.6K0.4Fe2As2 and vanishing above the superconducting critical ... • #### Observation of Quantum Jumps in a Single Atom  (1986) We detect the radiatively driven electric quadrupole transition to the metastable ²D₅/₂ state in a single, laser-cooled Hg II ion by monitoring the abrupt cessation of the fluorescence signal from the laser-excited ²S₁/₂→²P₁/₂ ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9961043000221252, "perplexity": 3362.2884692431808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672723.50/warc/CC-MAIN-20191017045957-20191017073457-00352.warc.gz"}
https://jp.maplesoft.com/support/help/maple/view.aspx?path=RandomTools/GetFlavors&L=J
GetFlavors - Maple Help

RandomTools

GetFlavors: return the names of all known flavors

Calling Sequence

GetFlavors()

Description

• The GetFlavors() function returns an expression sequence that contains the names of all flavors known to the function Generate.
• This function is part of the RandomTools package, and so it can be used in the form GetFlavors(..) only after executing the command with(RandomTools). However, it can always be accessed through the long form of the command by using the form RandomTools[GetFlavors](..).

Examples

> with(RandomTools):
> GetFlavors()

Matrix, Vector, choose, complex, distribution, exprseq, float, integer, list, listlist, negative, negint, nonnegative, nonnegint, nonposint, nonpositive, nonzero, nonzeroint, polynom, posint, positive, rational, set, string, truefalse, variable    (1)

> AddFlavor(A = rand(1..20)):
> Generate(A)

15    (2)

> GetFlavors()

A, Matrix, Vector, choose, complex, distribution, exprseq, float, integer, list, listlist, negative, negint, nonnegative, nonnegint, nonposint, nonpositive, nonzero, nonzeroint, polynom, posint, positive, rational, set, string, truefalse, variable    (3)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9963467121124268, "perplexity": 1140.2384426666442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572063.65/warc/CC-MAIN-20220814173832-20220814203832-00018.warc.gz"}
https://www.physicsforums.com/threads/what-is-the-evidence-for-a-higgs-particle.56989/
# What is the evidence for a higgs particle?

1. Dec 16, 2004

### tozhan

I've started to hear a lot about this 'Higgs' particle recently. Any chance someone can explain to me what the particle is/does/comes from, etc.? Any help would be great. Also, what are the implications of its existence?

2. Dec 17, 2004

### misogynisticfeminist

The Higgs particle gives mass to other particles, and it is selective: not everything interacts with the Higgs. It is now used to explain the huge masses of the weak gauge bosons, or just any particle which suddenly has certain unaccounted-for mass. Uhh, to say where the Higgs particle comes from is like asking where muons or electrons come from; it's almost like asking how the universe came about. But if you're talking about a certain decay series which has this particle decaying into something and a Higgs, then I'm not too sure about that. I myself don't know what kind of evidence will imply the existence of a Higgs, probably other particles slowing down, implying that they just gained some mass. Perhaps the experts can fill you in on this.

3. Dec 17, 2004

### Haelfix

The Higgs particle is a massive excitation of a presumably scalar field. So it should in principle have decay modes, the likes of which would be detectable. The reason we think it exists is precisely because the Standard Model works in every other experiment to date, and without a Higgs particle much of the theory would be glaringly inconsistent. Most particle theorists feel it's guaranteed to exist in some form (not necessarily the minimal theory though), just like the top quark was. If it's not found in the next two or three sets of particle accelerators, all hell breaks loose so to speak, and then everyone will be thoroughly confused =) (Incidentally, in one of the most experimentally profound derivations of the Standard Model, using partial wave unitarity bounds as a prerequisite as well as minimality, the Higgs *must* exist.)

Last edited: Dec 17, 2004

4. Dec 17, 2004

### cosmoboy

5. 
Dec 17, 2004

### arivero

As for evidence, we are all holding our breath until 2007 or so (unless cosmic ray films trap it casually first). There have been some deviations at 115 GeV which could fit with a neutral Higgs boson, as well as some smaller ones around 69 GeV which could fit with a charged Higgs. Charged Higgses appear in SUSY breaking, but not at so low an energy.

6. Dec 17, 2004

### PhysicsFan

Higgs physics

Brian Greene's most recent book, The Fabric of the Cosmos (2004), has extensive coverage of Higgs fields. What I (a non-physicist) grasped about Higgs fields is that they address two key areas of theoretical interest: the differential masses of different types of particles, and the unification of different forces of nature. Higgs fields create barriers to the movement of other kinds of particles, with different types of particles encountering different degrees of resistance. The more resistance a type of particle encounters, the more mass it is said to have. Higgs fields have been analogized to molasses or a bunch of paparazzi photographers. As Greene explains: "If we liken a particle's mass to a person's fame, then the Higgs ocean is like the paparazzi: those who are unknown pass through the swarming photographers with ease, but famous politicians and movie stars have to push much harder to reach their destination" (p. 263). Greene also notes that: "Photons pass completely unhindered through the Higgs ocean and so have no mass at all. If, to the contrary, a particle interacts significantly with the Higgs ocean, it will have a higher mass. The heaviest quark (it's called the top quark), with a mass that's about 350,000 times an electron's, interacts 350,000 times more strongly with the Higgs ocean than does an electron; it has greater difficulty accelerating through the Higgs ocean, and that's why it has a greater mass" (p. 263). 
In terms of unifying the different forces, Greene discusses how photons ("messenger particles" of the electromagnetic force) and W and Z particles (particles of the weak nuclear force) were indistinguishable at one point (known as "electroweak unification"), but are now considered to be different, due to the influence of the Higgs field. Glashow, Salam, and Weinberg "realized that before the Higgs ocean formed, not only did all the force particles have identical masses -- zero -- but the photons and W and Z particles were identical in essentially every other way as well... At high enough temperatures, therefore, temperatures that would vaporize today's Higgs-filled vacuum, there is no distinction between the weak nuclear force and the electromagnetic force... The symmetry between the electromagnetic and weak forces is not apparent today because as the universe cooled, the Higgs ocean formed, and -- this is vital -- photons and W and Z particles interact with the condensed Higgs field differently. Photons zip through the Higgs ocean... and therefore remain massless. W and Z particles... have to slog their way through, acquiring masses that are 86 and 97 times that of a proton, respectively" (excerpts from pp. 264-265). A common term used to describe this phenomenon is "symmetry breaking." One of Greene's former professors, Howard Georgi (whose last name, I learned from talking to someone in physics, is pronounced Georg-EYE, not Georgie, as I first thought) spearheaded an idea called "grand unification" that attempted to bring the strong nuclear force into the unification with electromagnetic and weak. According to Greene, grand unification has not yet worked out, but the physicist I talked to seemed fairly optimistic that it might still. 7. 
Dec 19, 2004

### ahrkron

Staff Emeritus

At some point in the development of the SM, things seemed to work fine, but there was a little problem: the symmetry of the model required all particles to be massless, in contradiction with everyday life. However, Peter Higgs (among others) found a way in which such a symmetry, although present, could be "broken" to allow particles to show a mass. This "Higgs mechanism" needs the existence of a new field (plus its corresponding particle). Finding the Higgs boson would be a high point in the history of physics, since its existence was deduced from trying to preserve a symmetry of the model (sort of what happened when Pauli recognized that an apparent violation of energy conservation was reason enough to predict a new particle -- the neutrino). In a way, finding the Higgs would show us that local gauge invariance (the symmetry that such a particle was invented to preserve) is more than just a "desirable feature" of our models. Also, if the Higgs boson is discovered, we'll be able to measure its mass and other properties that will help us explore physics beyond the current model.

8. Dec 21, 2004

### tozhan

Thanks guys, this was all great stuff.

9. Mar 13, 2011

### nickthrop101

It gives a reason why there isn't a disaster: on their own, the particles shouldn't have mass, yet this explains why they do.

10. Mar 13, 2011

### Vanadium 50

Staff Emeritus

This thread is 7 years old.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8346202373504639, "perplexity": 1390.806012451076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186895.51/warc/CC-MAIN-20170322212946-00324-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/average-net-force-and-g-force.210573/
# Average Net Force and G Force

1. Jan 23, 2008

### tod88

1. The problem statement, all variables and given/known data

A 65 kilogram tightrope walker falls vertically downward with a velocity of 9.9 m/s. She falls into a net which stretches 1.5 m vertically as it breaks her fall. What is the average net force on the walker as her fall is being broken, and what is the G force that she experiences? Assume no air resistance.

2. Relevant equations

(Force)*(change in time) = (mass)*(Vf - Vi)
(1/2)kx^2 ??

3. The attempt at a solution

Since I was not given an amount of time, I am not sure how to start this problem. I know her Vf will obviously be 0 m/s, and 9.9 m/s is her initial velocity. From this I can find that F*(change in time) = (65)(9.9), but I am still lost as to how to find the force or the change in time. I thought about using elastic potential energy ((1/2)kx^2) or gravitational potential energy (mgh), but I didn't see how this would help me (especially since I don't know the h). Could someone please give me some tips as to where to start? For G force, I know that 1 G is equal to 9.81 m/s^2, but that is all.

2. Jan 23, 2008

### blochwave

Err, is that all you're given? You just have to know her velocity when she reaches the net, so you need to know how far above it she is. Then once you know her velocity when she reaches the net (maybe we're just assuming that it's 9.9 m/s?), you know she has kinetic energy (1/2)mv^2. Once the net is done applying a force to her over that 1.5 m of distance, it will have done work on her that cancels her kinetic energy (use W = F*d).

3. Jan 23, 2008

### tod88

Thanks... the problem just said that "her velocity is 9.9 m/s", so I'm assuming that is her velocity when she hits the net; otherwise, as I thought, it would be impossible to solve. Thanks for the help though... that does make more sense to do it that way.
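The work-energy approach suggested in the thread can be carried through numerically. This is a sketch (variable names are mine, and it assumes, as the thread concludes, that 9.9 m/s is the speed at the moment she hits the net):

```python
m = 65.0   # mass, kg
v = 9.9    # speed when she hits the net, m/s
d = 1.5    # stopping distance (how far the net stretches), m
g = 9.81   # standard gravity, m/s^2

# Work-energy theorem: the average net force times the stopping
# distance equals the kinetic energy the net must remove.
ke = 0.5 * m * v ** 2   # kinetic energy at the net, J
f_net = ke / d          # average net force, N (about 2.1 kN)
a = f_net / m           # average deceleration, m/s^2
g_force = a / g         # the same deceleration in units of g

print(f_net, g_force)
```

Note this gives the average *net* force the question asks for; the force the net itself exerts would be larger by her weight, mg, since gravity still acts during the stop.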
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8704453110694885, "perplexity": 908.3446816926956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104172.67/warc/CC-MAIN-20170817225858-20170818005858-00222.warc.gz"}
https://link.springer.com/article/10.1007/s11263-012-0591-y
International Journal of Computer Vision, Volume 105, Issue 2, pp 171–185

Geodesic Regression and the Theory of Least Squares on Riemannian Manifolds

Article DOI: 10.1007/s11263-012-0591-y

P. Thomas Fletcher. Int J Comput Vis (2013) 105: 171. doi:10.1007/s11263-012-0591-y

Abstract

This paper develops the theory of geodesic regression and least-squares estimation on Riemannian manifolds. Geodesic regression is a method for finding the relationship between a real-valued independent variable and a manifold-valued dependent random variable, where this relationship is modeled as a geodesic curve on the manifold. Least-squares estimation is formulated intrinsically as a minimization of the sum of squared geodesic distances of the data to the estimated model. Geodesic regression is a direct generalization of linear regression to the manifold setting, and it provides a simple parameterization of the estimated relationship as an initial point and velocity, analogous to the intercept and slope. A nonparametric permutation test for determining the significance of the trend is also given. For the case of symmetric spaces, two main theoretical results are established. First, conditions for existence and uniqueness of the least-squares problem are provided. Second, a maximum likelihood criterion is developed for a suitable definition of Gaussian errors on the manifold. While the method can be generally applied to data on any manifold, specific examples are given for a set of synthetically generated rotation data and an application to analyzing shape changes in the corpus callosum due to age.

Keywords: Geodesic regression; Manifold statistics; Shape analysis
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8337512016296387, "perplexity": 407.6982288154845}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321426.45/warc/CC-MAIN-20170627134151-20170627154151-00709.warc.gz"}
https://courses.maths.ox.ac.uk/node/42521
# C3.4 Algebraic Geometry (2019-2020)

Lecturer(s): Prof. Balazs Szendroi

General Prerequisites: A3 Rings and Modules is essential. In particular, a solid understanding of the definitions of: field, ring, ideal (prime ideal, maximal ideal), integral domain, field of fractions, module, polynomial ring, irreducible and prime elements, zero divisor, unique factorisation domain, quotient of a ring by an ideal, homomorphisms and isomorphisms; and basic results such as the first isomorphism theorem for rings and the Chinese Remainder Theorem. B3.3 Algebraic Curves is useful but not essential: projective spaces and homogeneous coordinates will be defined in C3.4, but a working knowledge of them would be useful, and there is some overlap of topics, as B3.3 studies the algebraic geometry of one-dimensional varieties. B2.2 Commutative Algebra is useful but not essential: there is a substantial overlap of topics with this course, but in C3.4 these topics are rephrased in terms of geometrical properties. Courses closely related to C3.4 include C2.2 Homological Algebra, C2.7 Category Theory, C3.7 Elliptic Curves and C2.6 Introduction to Schemes; partly related are C3.1 Algebraic Topology, C3.3 Differentiable Manifolds and C3.5 Lie Groups.

Course Term: Michaelmas
Course Lecture Information: 16 lectures.
Course Weight: 1.00 unit(s)
Course Level: M

Course Overview: Algebraic geometry is the study of algebraic varieties: an algebraic variety is, roughly speaking, a locus defined by polynomial equations. One of the advantages of algebraic geometry is that it is defined purely algebraically and applies over any field, including fields of finite characteristic. It is geometry based on algebra rather than calculus, but over the real or complex numbers it provides a rich source of examples and inspiration for other areas of geometry.

Course Synopsis: Affine algebraic varieties, the Zariski topology, morphisms of affine varieties. Irreducible varieties.
Projective space. Projective varieties, affine cones over projective varieties. The Zariski topology on projective varieties. The projective closure of an affine variety. Morphisms of projective varieties. Projective equivalence. Veronese morphism: definition, examples. Veronese morphisms are isomorphisms onto their image; statement, and proof in simple cases. Subvarieties of Veronese varieties. Segre maps and products of varieties. Coordinate rings. Hilbert's Nullstellensatz. Correspondence between affine varieties (and morphisms between them) and finitely generated reduced $k$-algebras (and morphisms between them). Graded rings and homogeneous ideals. Homogeneous coordinate rings. Discrete invariants of projective varieties: degree, dimension, Hilbert function. Statement of theorem defining Hilbert polynomial. Quasi-projective varieties, and morphisms between them. The Zariski topology has a basis of affine open subsets. Rings of regular functions on open subsets and points of quasi-projective varieties. The ring of regular functions on an affine variety is the coordinate ring. Localisation and relationship with rings of regular functions. Tangent space and smooth points. The singular locus is a closed subvariety. Algebraic re-formulation of the tangent space. Differentiable maps between tangent spaces. Function fields of irreducible quasi-projective varieties. Rational maps between irreducible varieties, and composition of rational maps. Birational equivalence. Correspondence between dominant rational maps and homomorphisms of function fields. Blow-ups: of affine space at a point, of subvarieties of affine space, and of general quasi-projective varieties along general subvarieties. Statement of Hironaka's Desingularisation Theorem. Every irreducible variety is birational to a hypersurface. Re-formulation of dimension. Smooth points are a dense open subset.
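The Nullstellensatz correspondence in the synopsis can be probed computationally. The sketch below (our own illustration, assuming SymPy is available; it is not part of the course materials) uses the standard Rabinowitsch trick: over an algebraically closed field, a polynomial $f$ vanishes on the affine variety $V(I)$ if and only if $1$ lies in the ideal $I + (1 - t f)$, which a Gröbner basis computation detects.

```python
from sympy import symbols, groebner

t, x, y = symbols('t x y')

# I cuts out the two points of the circle x^2 + y^2 = 1 on the line x = y.
I = [x**2 + y**2 - 1, x - y]
f = 2*x**2 - 1                    # vanishes at both points of V(I)

# Rabinowitsch trick: f lies in the radical of I  iff  1 is in I + (1 - t*f),
# i.e. the reduced Groebner basis of the enlarged ideal is [1].
G = groebner(I + [1 - t*f], t, x, y, order='lex')
print(G.exprs)                    # [1]
```

Replacing `f` by a polynomial that does not vanish on $V(I)$ (say `x`) yields a basis other than `[1]`, witnessing the other direction of the correspondence.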
http://www.ecineq.org/2016/11/01/the-benchmark-of-maximum-relative-bipolarisation-2/
# The benchmark of maximum relative bipolarisation

###### Working Paper 2016-419

Abstract: Relative bipolarisation indices are usually constructed making sure that they achieve their minimum value of bipolarisation if and only if distributions are perfectly egalitarian. However, the literature has neglected to discuss the existence of a benchmark of maximum relative bipolarisation. Consequently, there is no discussion of the implications of maximum bipolarisation for the optimal normalisation of relative bipolarisation indices either. In this note we characterise the situation of maximum relative bipolarisation as the only one consistent with the key axioms of relative bipolarisation. We illustrate the usefulness of incorporating the concept of maximum relative bipolarisation in the design of bipolarisation indices by identifying, among the family of rank-dependent Wang-Tsui indices, the only subclass fulfilling a normalisation axiom that takes into account both benchmarks of minimum and maximum relative bipolarisation.

Authors: Gaston Yalonetzky.

Keywords: Relative bipolarisation, normalisation.

JEL: D30, D31.
https://encyclopediaofmath.org/wiki/Uniformly-convergent_series
# Uniformly-convergent series

A series of functions

$$\tag{1 } \sum _ {n = 1 } ^ \infty a _ {n} ( x),\ \ x \in X,$$

with (in general) complex terms, such that for every $\epsilon > 0$ there is an $n _ \epsilon$ (independent of $x$) such that for all $n > n _ \epsilon$ and all $x \in X$,

$$| s _ {n} ( x) - s ( x) | < \epsilon ,$$

where

$$s _ {n} ( x) = \sum _ {k = 1 } ^ { n } a _ {k} ( x)$$

and

$$s ( x) = \sum _ {k = 1 } ^ \infty a _ {k} ( x).$$

In other words, the sequence of partial sums $s _ {n} ( x)$ is a uniformly-convergent sequence. The definition of a uniformly-convergent series is equivalent to the condition

$$\lim\limits _ {n \rightarrow \infty } \ \sup _ {x \in X } \ | r _ {n} ( x) | = 0,$$

which denotes the uniform convergence to zero on $X$ of the sequence of remainders

$$r _ {n} ( x) = \ \sum _ {k = n + 1 } ^ \infty a _ {k} ( x),\ \ n = 1, 2 \dots$$

of the series (1).

Example. The series

$$\sum _ {n = 0 } ^ \infty \frac{z ^ {n} }{n! } = \ e ^ {z}$$

is uniformly convergent on each bounded disc of the complex plane, but is not uniformly convergent on the whole of $\mathbf C$.

The Cauchy criterion for uniform convergence of a series gives a condition for the uniform convergence of the series (1) on $X$ without using the sum of the series. A sufficient condition for the uniform convergence of a series is given by the Weierstrass criterion (for uniform convergence). A series $\sum _ {n = 1 } ^ \infty a _ {n} ( x)$ is called regularly convergent on a set $X$ if there is a convergent numerical series $\sum \alpha _ {n}$, $\alpha _ {n} \geq 0$, such that for all $n = 1, 2 \dots$ and all $x \in X$,

$$| a _ {n} ( x) | \leq \alpha _ {n} ,$$

that is, if (1) satisfies the conditions of the Weierstrass criterion for uniform convergence. On the strength of this criterion, a regularly-convergent series on $X$ is uniformly convergent on that set.
In general, the converse is false; however, for every series that is uniformly convergent on $X$ the successive terms can be collected into finite groups so that the series thus obtained is regularly convergent on $X$. There are criteria for the uniform convergence of series analogous to Dirichlet's and Abel's criteria for the convergence of series of numbers. These tests for uniform convergence first occurred in papers of G.H. Hardy. If in a series

$$\tag{2 } \sum a _ {n} ( x) b _ {n} ( x)$$

the functions $a _ {n} ( x)$ and $b _ {n} ( x)$, $n = 1, 2 \dots$ defined on $X$, are such that the sequence $\{ a _ {n} ( x) \}$ is monotone for each $x \in X$ and converges uniformly to zero on $X$, while the sequence of partial sums $\{ B _ {n} ( x) \}$ of $\sum b _ {n} ( x)$ is uniformly bounded on $X$, then (2) is uniformly convergent on this set. If the sequence $\{ a _ {n} ( x) \}$ is uniformly bounded on $X$ and is monotone for each fixed $x \in X$, while the series $\sum b _ {n} ( x)$ is uniformly convergent on $X$, then (2) is also uniformly convergent on $X$.

## Properties of uniformly-convergent series.

If two series $\sum a _ {n} ( x)$ and $\sum b _ {n} ( x)$ are uniformly convergent on $X$ and $\lambda , \mu \in \mathbf C$, then the series

$$\sum \lambda a _ {n} ( x) + \mu b _ {n} ( x)$$

is also uniformly convergent on $X$. If a series $\sum a _ {n} ( x)$ is uniformly convergent on $X$ and $b ( x)$ is bounded on $X$, then $\sum b ( x) a _ {n} ( x)$ is also uniformly convergent on $X$.

Continuity of the sum of a series. In the study of the sum of a series of functions, the notion of "point of uniform convergence" turns out to be useful. Let $X$ be a topological space and let the series (1) converge on $X$.
A point $x _ {0} \in X$ is called a point of uniform convergence of (1) if for any $\epsilon > 0$ there are a neighbourhood $U = U ( x _ {0} )$ of $x _ {0}$ and a number $n _ \epsilon$ such that for all $x \in U$ and all $n > n _ \epsilon$ the inequality $| r _ {n} ( x) | < \epsilon$ holds. If $X$ is a compact set, then in order that the series (1) be uniformly convergent on $X$ it is necessary and sufficient that each point $x \in X$ is a point of uniform convergence. If $X$ is a topological space, the series (1) is convergent on $X$, $x _ {0}$ is a point of uniform convergence of (1), and there are finite limits

$$\lim\limits _ {x \rightarrow x _ {0} } a _ {n} ( x) = c _ {n} ,\ \ n = 1, 2 \dots$$

then the numerical series $\sum c _ {n}$ converges, the sum $s ( x)$ of (1) has a limit as $x \rightarrow x _ {0}$, and, moreover,

$$\tag{3 } \lim\limits _ {x \rightarrow x _ {0} } s ( x) = \ \lim\limits _ {x \rightarrow x _ {0} } \ \sum a _ {n} ( x) = \ \sum c _ {n} ,$$

that is, under the assumptions made on (1) it is possible to pass term-by-term to the limit in the sense of formula (3). Hence it follows that if (1) converges on $X$ and its terms are continuous at a point of uniform convergence $x _ {0} \in X$, then its sum is also continuous at that point:

$$\lim\limits _ {x \rightarrow x _ {0} } s ( x) = \sum \lim\limits _ {x \rightarrow x _ {0} } a _ {n} ( x) = \ \sum a _ {n} ( x _ {0} ) = s ( x _ {0} ).$$

Therefore, if a series of continuous functions converges uniformly on a topological space, then its sum is continuous on that space. When $X$ is a compactum and the terms of (1) are non-negative on $X$, then uniform convergence of (1) is also a necessary condition for the continuity on $X$ of the sum (see Dini theorem).
In the general case, a necessary and sufficient condition for the continuity of the sum of a series (1) that converges on a topological space $X$, and whose terms are continuous on $X$, is quasi-uniform convergence of the sequence of partial sums $s _ {n} ( x)$ to the sum $s ( x)$ (the Arzelà–Aleksandrov theorem). The answer to the question of the existence of points of uniform convergence for a convergent series of functions that are continuous on an interval is given by the Osgood–Hobson theorem: If (1) converges at each point of an interval $[ a, b]$ and the terms $a _ {n} ( x)$ are continuous on $[ a, b]$, then there is an everywhere-dense set in $[ a, b]$ of points of uniform convergence of the series (1). Hence it follows that the sum of any series of continuous functions, convergent in some interval, is continuous on a dense set of points of the interval. At the same time there exists a series of continuous functions, convergent at all points of an interval, such that the points at which it converges non-uniformly form an everywhere-dense set in the interval in question.

Term-by-term integration of uniformly-convergent series. Let $X = [ a, b]$. If the terms of the series

$$\tag{4 } \sum a _ {n} ( x),\ \ x \in [ a, b],$$

are Riemann (Lebesgue) integrable on $[ a, b]$ and (4) converges uniformly on this interval, then its sum $s ( x)$ is also Riemann (Lebesgue) integrable on $[ a, b]$, and for any $x \in [ a, b]$ the equality

$$\tag{5 } \int\limits _ { a } ^ { x } s ( t) dt = \ \int\limits _ { a } ^ { x } \left [ \sum a _ {n} ( t) \right ] dt = \ \sum \int\limits _ { a } ^ { x } a _ {n} ( t) dt$$

holds, where the series on the right-hand side is uniformly convergent on $[ a, b]$. In this theorem it is impossible to replace the condition of uniform convergence of (4) by convergence on $[ a, b]$, since there are series, even of continuous functions and with continuous sums, that converge on an interval and for which (5) does not hold.
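Formula (5) can be checked numerically on a concrete regularly convergent series. The sketch below (an illustration of ours, standard library only) uses $\sum_{n \geq 1}(t/2)^n = t/(2-t)$ on $[0, 1]$, which is uniformly convergent there by the Weierstrass criterion with $\alpha_n = 2^{-n}$, and compares term-by-term integration with the closed-form integral of the sum.

```python
import math

# s(t) = sum_{n>=1} (t/2)^n = t/(2 - t) on [0, 1]; integrate term by term.
def termwise_integral(x, N=200):
    # sum over n of  integral_0^x (t/2)^n dt  =  x^(n+1) / ((n+1) * 2^n)
    return sum(x**(n + 1) / ((n + 1) * 2**n) for n in range(1, N + 1))

x = 0.8
closed_form = 2 * math.log(2 / (2 - x)) - x   # integral_0^x t/(2-t) dt
print(closed_form, termwise_integral(x))      # the two values agree
```

The agreement to machine precision is exactly what the theorem behind (5) guarantees for a uniformly convergent series.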
At the same time there are various generalizations. Below some results for the Stieltjes integral are given. If $g ( x)$ is an increasing function on $[ a, b]$, the $a _ {n} ( x)$ are integrable functions relative to $g ( x)$ and (4) converges uniformly on $[ a, b]$, then the sum $s ( x)$ of (4) is Stieltjes integrable relative to $g ( x)$,

$$\int\limits _ { a } ^ { x } s ( t) dg ( t) = \ \sum \int\limits _ { a } ^ { x } a _ {n} ( t) dg ( t),$$

and the series on the right-hand side converges uniformly on $[ a, b]$. Formula (5) has been generalized to functions of several variables.

Conditions for term-by-term differentiation of series in terms of uniform convergence. If the terms of (4) are continuously differentiable on $[ a, b]$, if (4) converges at some point of the interval and the series of derivatives of the terms of (4) is uniformly convergent on $[ a, b]$, then the series (4) itself is uniformly convergent on $[ a, b]$, its sum $s ( x)$ is continuously differentiable and

$$\tag{6 } { \frac{d}{dx} } s ( x) = \ { \frac{d}{dx} } \sum a _ {n} ( x) = \ \sum { \frac{d}{dx} } a _ {n} ( x).$$

In this theorem the condition of uniform convergence of the series obtained by term-by-term differentiation cannot be replaced by convergence on $[ a, b]$, since there are series of continuously-differentiable functions, uniformly convergent on an interval, for which the series obtained by term-by-term differentiation converges on the interval, but the sum of the original series is either not differentiable on the whole interval in question, or it is differentiable but its derivative is not equal to the sum of the series of derivatives.
In this way, the presence of the property of uniform convergence of a series, in much the same way as absolute convergence (see Absolutely convergent series), permits one to transfer to these series certain rules of operating with finite sums: for uniform convergence, term-by-term passage to the limit, term-by-term integration and differentiation (see (3)–(6)); for absolute convergence, the possibility of permuting the order of the terms of the series without changing the sum, and multiplying series term-by-term. The properties of absolute and uniform convergence for series of functions are independent of each other. Thus, the series

$$\sum _ {n = 0 } ^ \infty \frac{x ^ {2} }{( 1 + x ^ {2} ) ^ {n} }$$

is absolutely convergent on the whole axis, since all its terms are non-negative, but obviously $x = 0$ is not a point of uniform convergence, since its sum

$$s ( x) = \ \begin{cases} 0, & x = 0, \\ 1 + x ^ {2} , & x \neq 0, \end{cases}$$

is discontinuous at this point (whereas all terms are continuous). The series

$$\sum _ {n = 1 } ^ \infty \frac{(- 1) ^ {n + 1 } }{x ^ {2} + n }$$

is uniformly convergent on the whole real axis but does not converge absolutely at any point.

For references see Series.

How to Cite This Entry: Uniformly-convergent series. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Uniformly-convergent_series&oldid=49074 This article was adapted from an original article by L.D. Kudryavtsev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
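The failure of uniform convergence at $x = 0$ in the example $\sum_{n \geq 0} x^2/(1+x^2)^n$ can also be seen numerically: the supremum of the remainder stays close to $1$ arbitrarily near $0$, while on any set bounded away from $0$ it dies off. A quick illustration of ours, assuming NumPy:

```python
import numpy as np

def partial_sum(x, N):
    """s_N(x) = sum_{n=0}^{N} x^2 / (1 + x^2)^n."""
    n = np.arange(N + 1)[:, None]
    return np.sum(x**2 / (1 + x**2) ** n, axis=0)

xs = np.linspace(1e-3, 1.0, 1000)   # a punctured neighbourhood of 0
s = 1 + xs**2                       # the sum for x != 0

sup_near_zero = np.max(np.abs(s - partial_sum(xs, 1000)))
away = xs >= 0.5                    # a set bounded away from 0
sup_away = np.max(np.abs(s[away] - partial_sum(xs[away], 1000)))
print(sup_near_zero)                # stays close to 1: not uniform on (0, 1]
print(sup_away)                     # ~ machine precision: uniform on [0.5, 1]
```

This mirrors the definition: uniform convergence on $X$ is exactly $\sup_{x \in X} |r_n(x)| \rightarrow 0$.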
https://artofproblemsolving.com/wiki/index.php/RMS-AM-GM-HM
# Root-Mean Square-Arithmetic Mean-Geometric Mean-Harmonic Mean Inequality

(Redirected from RMS-AM-GM-HM)

The Root-Mean Square-Arithmetic Mean-Geometric Mean-Harmonic Mean Inequality (RMS-AM-GM-HM), or Quadratic Mean-Arithmetic Mean-Geometric Mean-Harmonic Mean Inequality (QM-AM-GM-HM), is an inequality of the root-mean square, arithmetic mean, geometric mean, and harmonic mean of a set of positive real numbers $x_1, \dots, x_n$ that says:

$$\sqrt{\frac{x_1^2 + \cdots + x_n^2}{n}} \geq \frac{x_1 + \cdots + x_n}{n} \geq \sqrt[n]{x_1 \cdots x_n} \geq \frac{n}{\frac{1}{x_1} + \cdots + \frac{1}{x_n}}$$

with equality if and only if $x_1 = x_2 = \cdots = x_n$. This inequality can be expanded to the power mean inequality.

As a consequence we can have the following inequality: If $a_1, \dots, a_n$ are positive reals, then

$$(a_1 + a_2 + \cdots + a_n)\left(\frac{1}{a_1} + \frac{1}{a_2} + \cdots + \frac{1}{a_n}\right) \geq n^2,$$

with equality if and only if $a_1 = a_2 = \cdots = a_n$; this follows directly by cross multiplication from the AM-HM inequality. This is extremely useful in problem solving.

The Root Mean Square is also known as the quadratic mean, and the inequality is therefore sometimes known as the QM-AM-GM-HM Inequality.

## Proof

The RMS-AM part is a direct consequence of the Cauchy-Schwarz Inequality. Alternatively, it can be proved using Jensen's inequality: suppose we let $F(x) = x^2$ (we know that $F$ is convex because $F''(x) = 2$ and therefore $F''(x) > 0$). We have:

$$F\left(\frac{x_1 + \cdots + x_n}{n}\right) \leq \frac{F(x_1) + \cdots + F(x_n)}{n}.$$

Substituting $F$ yields:

$$\left(\frac{x_1 + \cdots + x_n}{n}\right)^2 \leq \frac{x_1^2 + \cdots + x_n^2}{n}.$$

Taking the square root of both sides (remember that both are positive):

$$\frac{x_1 + \cdots + x_n}{n} \leq \sqrt{\frac{x_1^2 + \cdots + x_n^2}{n}}.$$

The inequality $\frac{x_1 + \cdots + x_n}{n} \geq \sqrt[n]{x_1 \cdots x_n}$ is called the AM-GM inequality, and proofs can be found here. The GM-HM part is a direct consequence of AM-GM: applying AM-GM to the reciprocals $\frac{1}{x_1}, \dots, \frac{1}{x_n}$ gives $\frac{\frac{1}{x_1} + \cdots + \frac{1}{x_n}}{n} \geq \sqrt[n]{\frac{1}{x_1 \cdots x_n}}$, so $\frac{n}{\frac{1}{x_1} + \cdots + \frac{1}{x_n}} \leq \sqrt[n]{x_1 \cdots x_n}$. Therefore the original inequality is true.

### Geometric Proof

The two-variable case was originally shown in a diagram (not reproduced here) in which the four means appear as segments of a semicircle constructed on lengths $a$ and $b$.
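A quick numerical sanity check of the chain of means, in plain Python (our own illustration, not part of the wiki article):

```python
import math

def qm(xs): return math.sqrt(sum(x * x for x in xs) / len(xs))
def am(xs): return sum(xs) / len(xs)
def gm(xs): return math.prod(xs) ** (1 / len(xs))
def hm(xs): return len(xs) / sum(1 / x for x in xs)

data = [1.0, 4.0, 4.0, 9.0]
print([round(m(data), 4) for m in (qm, am, gm, hm)])  # [5.3385, 4.5, 3.4641, 2.4828]
assert qm(data) >= am(data) >= gm(data) >= hm(data)   # the RMS-AM-GM-HM chain
```

For a constant list all four means coincide, matching the equality condition.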
https://jp.mathworks.com/help/stats/ncfcdf.html?lang=en
# ncfcdf

Noncentral F cumulative distribution function

## Syntax

```
p = ncfcdf(x,nu1,nu2,delta)
p = ncfcdf(x,nu1,nu2,delta,'upper')
```

## Description

`p = ncfcdf(x,nu1,nu2,delta)` computes the noncentral F cdf at each value in `x` using the corresponding numerator degrees of freedom in `nu1`, denominator degrees of freedom in `nu2`, and positive noncentrality parameters in `delta`. `nu1`, `nu2`, and `delta` can be vectors, matrices, or multidimensional arrays that have the same size, which is also the size of `p`. A scalar input for `x`, `nu1`, `nu2`, or `delta` is expanded to a constant array with the same dimensions as the other inputs.

`p = ncfcdf(x,nu1,nu2,delta,'upper')` returns the complement of the noncentral F cdf at each value in `x`, using an algorithm that more accurately computes the extreme upper tail probabilities.

The noncentral F cdf is

$$F\left(x \mid \nu_1, \nu_2, \delta\right) = \sum_{j=0}^{\infty} \left(\frac{\left(\frac{1}{2}\delta\right)^{j}}{j!} e^{-\frac{\delta}{2}}\right) I\left(\frac{\nu_1 x}{\nu_2 + \nu_1 x} \,\Bigg|\, \frac{\nu_1}{2} + j, \frac{\nu_2}{2}\right)$$

where $I(x|a,b)$ is the incomplete beta function with parameters $a$ and $b$.

## Examples

Compare the noncentral F cdf with $\delta = 10$ to the F cdf with the same number of numerator and denominator degrees of freedom (5 and 20, respectively).

```matlab
x = (0.01:0.1:10.01)';
p1 = ncfcdf(x,5,20,10);
p = fcdf(x,5,20);
plot(x,p,'-',x,p1,'-')
```

## References

[1] Johnson, N., and S. Kotz. *Distributions in Statistics: Continuous Univariate Distributions-2.* Hoboken, NJ: John Wiley & Sons, Inc., 1970, pp. 189–200.
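The series formula above can be sketched directly in Python with only the standard library (this is our own illustration, not MathWorks code: `betainc_reg`, the Simpson integration, and the truncation at `jmax` are all choices of ours), with a Monte-Carlo simulation of the noncentral F distribution as a cross-check:

```python
import math, random

def betainc_reg(a, b, z, steps=4000):
    """Regularized incomplete beta I(z | a, b) by Simpson's rule (a, b > 1 here)."""
    if z <= 0.0:
        return 0.0
    h = z / steps
    f = lambda t: t**(a - 1) * (1 - t)**(b - 1)
    s = f(0.0) + f(z)
    s += 4 * sum(f((2*k - 1) * h) for k in range(1, steps // 2 + 1))
    s += 2 * sum(f(2*k * h) for k in range(1, steps // 2))
    beta = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return (s * h / 3) / beta

def ncfcdf(x, nu1, nu2, delta, jmax=60):
    """Noncentral F cdf via the Poisson-weighted incomplete-beta series."""
    z = nu1 * x / (nu2 + nu1 * x)
    lam = delta / 2
    return sum(math.exp(-lam) * lam**j / math.factorial(j)
               * betainc_reg(nu1 / 2 + j, nu2 / 2, z) for j in range(jmax + 1))

# Monte-Carlo cross-check: F = (chi2'_nu1(delta)/nu1) / (chi2_nu2/nu2),
# where chi2'_nu1(delta) sums nu1 squared normals, one with mean sqrt(delta).
random.seed(0)
x, nu1, nu2, delta = 2.0, 5, 20, 10
n_draws = 50_000
hits = 0
for _ in range(n_draws):
    ncx2 = random.gauss(math.sqrt(delta), 1)**2 \
         + sum(random.gauss(0, 1)**2 for _ in range(nu1 - 1))
    chi2 = sum(random.gauss(0, 1)**2 for _ in range(nu2))
    hits += (ncx2 / nu1) / (chi2 / nu2) <= x
p_series, p_mc = ncfcdf(x, nu1, nu2, delta), hits / n_draws
print(p_series, p_mc)   # the two estimates agree to Monte-Carlo accuracy
```

The truncation at `jmax=60` is safe here because the Poisson weights with $\lambda = \delta/2 = 5$ are negligible well before that index.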
http://mathhelpforum.com/algebra/97855-greatest-integer-function.html
1. ## Greatest Integer Function

For [[5.95]], the answer is 5 "...since 5 is the largest integer which is less than or equal to 5.95." Am I understanding this correctly by saying that the answer is the number in the units place? If that is the case, then the answer for [[3.66]] = 3, right? The textbook tells me that "...negatives are a bit tricky. For [[-1.6]], the answer is -2 not -1." Can someone explain why the answer is -2 and not -1? Also, I need someone to briefly explain how this function is applied in real life. Is this function the same as the step function? What is the easiest way to graph this function?

2. Originally Posted by sharkman

For [[5.95]], the answer is 5 "...since 5 is the largest integer which is less than or equal to 5.95." Am I understanding this correctly by saying that the answer is the number in the units place? If that is the case, then the answer for [[3.66]] = 3, right? The textbook tells me that "...negatives are a bit tricky. For [[-1.6]], the answer is -2 not -1." Can someone explain why the answer is -2 and not -1? Also, I need someone to briefly explain how this function is applied in real life. Is this function the same as the step function? What is the easiest way to graph this function?

Yes, for positive numbers it will be the integer with the decimals ignored. [[-1.6]] = -2, since -2 is the greatest integer which is less than or equal to -1.6; -1 is not less than or equal to -1.6, so -1 is not the answer. In real life you probably won't see this function unless you take more math classes or some physics and engineering classes; it can come up (or something similar) in a differential equation. To graph this function: it is the step function that most people think of; it just consists of horizontal lines of length 1 with breaks at the integers.
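In most programming languages the greatest integer function is just `floor`, which makes the negative-number behaviour easy to check (a quick illustration of ours, not from the original thread):

```python
import math

# floor(x) = the greatest integer <= x, i.e. the textbook's [[x]]
print(math.floor(5.95))   # 5
print(math.floor(3.66))   # 3
print(math.floor(-1.6))   # -2, not -1: -1 > -1.6, so -1 does not qualify
print(math.floor(-2.0))   # -2 (already an integer, so [[x]] = x)
```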
https://www.neetprep.com/question/5748-system-CaFsCa-aq--Faq-increasing-concentration-Ca-ions--times-will-cause-equilibrium-concentration-F-ions-change-toa--initial-valueb--initial-valuec--times-initial-valued-none-above?courseId=19
In the system CaF2(s) $⇌$ Ca2+(aq) + 2F-(aq), increasing the concentration of Ca2+ ions 4 times will cause the equilibrium concentration of F- ions to change to:

(a) 1/4 of the initial value
(b) 1/2 of the initial value
(c) 2 times the initial value
(d) none of the above
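The intended algebra (our own worked check, not part of the original question) rests on the solubility product $K_{sp} = [\mathrm{Ca^{2+}}][\mathrm{F^-}]^2$ being constant at equilibrium:

```python
# If [Ca2+] is increased 4x while Ksp = [Ca2+][F-]^2 stays fixed, then
# [F-]^2 must drop by a factor of 4, so [F-] halves -> choice (b).
ksp = 1.0                      # arbitrary units; only the ratio matters
ca1 = 1.0
f1 = (ksp / ca1) ** 0.5
ca2 = 4 * ca1
f2 = (ksp / ca2) ** 0.5
print(f2 / f1)                 # 0.5
```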
http://www.turkmath.org/beta/seminer.php?id_seminer=1002
#### İstanbul Center for Mathematical Sciences

Which Latin square is the loneliest?

Nick Cavenagh, Waikato University, New Zealand

Abstract: Given two Latin squares $L_1$ and $L_2$ of the same order, the Hamming distance between $L_1$ and $L_2$ gives the number of corresponding cells containing distinct symbols. If we think of Latin squares as sets of ordered triples, this is given by $|L_1 \setminus L_2|$. Given a specific Latin square $L_1$, we may wish to know a Latin square $L_2$ which is closest to it; i.e. for which the Hamming distance is minimized. Equivalently, we may ask for the size of the smallest Latin trade within a given Latin square. It is known that the back circulant Latin square of order $n$ (the operation table for the integers modulo $n$) has Hamming distance at least $e \log n + 2$ to any other Latin square. We explore whether the back circulant Latin square is the loneliest of all Latin squares; i.e. has greatest minimum Hamming distance to any other Latin square. This is joint work with R. Ramadurai.

Date: 25.08.2015
Time: 14:00
Place: IMBM Seminar Room, Bogazici University South Campus
Language: English
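The definitions in the abstract are easy to play with in code. The sketch below (our own illustration, not from the talk) builds the back circulant Latin square of order 4 and a second Latin square obtained by swapping an intercalate (a $2 \times 2$ Latin subsquare, the smallest possible Latin trade), giving Hamming distance 4:

```python
def back_circulant(n):
    # operation table of the integers modulo n: L[i][j] = (i + j) mod n
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def is_latin(L):
    n, syms = len(L), set(range(len(L)))
    return (all(set(row) == syms for row in L)
            and all({L[i][j] for i in range(n)} == syms for j in range(n)))

def hamming(L1, L2):
    # |L1 \ L2| viewing the squares as sets of (row, column, symbol) triples
    return sum(a != b for r1, r2 in zip(L1, L2) for a, b in zip(r1, r2))

L1 = back_circulant(4)
L2 = [row[:] for row in L1]
for i, j in [(0, 0), (0, 2), (2, 0), (2, 2)]:   # intercalate on symbols 0, 2
    L2[i][j] = 2 - L2[i][j]                     # swap 0 <-> 2 in those cells
assert is_latin(L1) and is_latin(L2)
print(hamming(L1, L2))   # 4
```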
http://www.khanacademy.org/math/linear-algebra/matrix_transformations
# Linear algebra

Matrices, vectors, vector spaces, transformations, eigenvectors/values. Covers all topics in a first-year college linear algebra course. This is an advanced course normally taken by science or engineering majors after taking at least two semesters of calculus (although calculus really isn't a prereq), so don't confuse this with regular high school algebra.

Matrix transformations: understanding how we can map one set of vectors to another set. Matrices used to define linear transformations.

All content in “Matrix transformations”

### Functions and linear transformations

People have been telling you forever that linear algebra and matrices are useful for modeling, simulations and computer graphics, but it has been a little non-obvious. This tutorial will start to draw the lines by re-introducing you to functions (with a bit more rigor than you may remember from high school) and linear functions/transformations in particular.

### Linear transformation examples

In this tutorial, we do several examples of actually constructing transformation matrices. Very useful if you've got some actual transforming to do (especially scaling, rotating and projecting) ;)

### Transformations and matrix multiplication

You probably remember how to multiply matrices from high school, but didn't know why or what it represented. This tutorial will address this. You'll see that multiplying two matrices can be viewed as the composition of linear transformations.

### Inverse functions and transformations

You can use a transformation/function to map from one set to another, but can you invert it? In other words, is there a function/transformation that, given the output of the original mapping, can output the original input (this is much clearer with diagrams)? This tutorial addresses this question in a linear algebra context.
Since matrices can represent linear transformations, we're going to spend a lot of time thinking about matrices that represent the inverse transformation.

### Finding inverses and determinants

We've talked a lot about inverse transformations abstractly in the last tutorial. Now we're ready to actually compute inverses. We start from "documenting" the row operations to get a matrix into reduced row echelon form and use this to come up with the formula for the inverse of a 2x2 matrix. After this we define a determinant for 2x2, 3x3, and nxn matrices.

### More determinant depth

In the last tutorial on matrix inverses, we first defined what a determinant is and gave several examples of computing them. In this tutorial we go deeper. We will explore what happens to the determinant under several circumstances and conceptualize it in several ways.

### Transpose of a matrix

We now explore what happens when you switch the rows and columns of a matrix!
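To make the inverse-and-determinant ideas concrete, here is a small NumPy sketch (my own illustration, not part of the course material): a 2x2 rotation matrix applied to a vector, its determinant, and its inverse undoing the transformation.

```python
import numpy as np

# Rotation by 45 degrees: a linear transformation of the plane
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])
w = R @ v                       # apply the transformation to v

# Rotations preserve area and orientation, so det(R) = 1
det = np.linalg.det(R)

# The inverse transformation maps w back to the original input v
v_back = np.linalg.inv(R) @ w
```

Because det(R) is nonzero, the transformation is invertible; `np.linalg.inv` recovers the matrix of the inverse transformation.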
http://link.springer.com/chapter/10.1007%2F978-1-84800-046-9_12
2008, pp 219-232

# Document Representation and Quality of Text: An Analysis

There are three factors involved in text classification: the classification model, the similarity measure, and the document representation. In this chapter, we will focus on document representation and demonstrate that the choice of document representation has a profound impact on the quality of the classification. We will also show that the text quality affects the choice of document representation. In our experiments we have used centroid-based classification, which is a simple and robust text classification scheme. We will compare four different types of document representation: N-grams, single terms, phrases, and a logic-based document representation called RDR. The N-gram representation is a string-based representation with no linguistic processing. The single-term approach is based on words with minimal linguistic processing. The phrase approach is based on linguistically formed phrases and single words. RDR is based on linguistic processing and represents documents as a set of logical predicates. Our experiments on many text collections yielded similar results.
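As an illustration of the N-gram representation described above (my own sketch, not from the chapter): character N-grams are extracted by sliding a fixed-width window over the raw string, with no linguistic processing, which is what makes the representation robust to low text quality.

```python
def char_ngrams(text, n=3):
    """Return the character n-grams of `text` (no linguistic processing)."""
    text = text.lower()
    return [text[i:i + n] for i in range(len(text) - n + 1)]

grams = char_ngrams("document", 3)
# "document" has 8 characters, so it yields 8 - 3 + 1 = 6 trigrams
```

A document would then be represented by the multiset of its N-grams, e.g. as a frequency vector for centroid-based classification.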
https://www.physicsforums.com/threads/find-the-eccentricity-of-this-ellipse.658531/
# Find the eccentricity of this ellipse

1. Dec 12, 2012

### utkarshakash

1. The problem statement, all variables and given/known data

The tangent at any point P of a circle meets the tangent at a fixed point A in T, and T is joined to B, the other end of the diameter through A. Prove that the locus of the point of intersection of AP and BT is an ellipse whose eccentricity is $1/ \sqrt{2}$.

2. Relevant equations

3. The attempt at a solution

The very first thing I do is assume the equation of a circle. The next thing is to write the equations for the tangents and solve them to get T. But it is getting complicated, as nothing is known to me, so there are a number of variables which can't be eliminated. Any other ideas?

2. Dec 13, 2012

### tiny-tim

hi utkarshakash! show us what you've tried, and where you're stuck, and then we'll know how to help!

alternatively, since the eccentricity is 1/√2, the minor axis must be 1/√2 times the major axis … so have you tried squashing the whole diagram by 1/√2 (along AB), so that the final result is a circle?

3. Dec 14, 2012

### utkarshakash

For the sake of simplicity, let the equation of the circle be $x^2 + y^2 = c^2$. Let A = $(x_1,y_1)$ and P = $(x_2,y_2)$.

Equation of tangent at P: $xx_2 + yy_2 - c^2 = 0$

Equation of tangent at A: $xx_1 + yy_1 - c^2 = 0$

When I solve these two equations I get

$x = \dfrac{c^2 (y_2 - y_1)}{x_1y_2-x_2y_1} \\ y = \dfrac{c^2 (x_1 - x_2)}{x_1y_2-x_2y_1}$

OMG, it looks so dangerous! I don't want to proceed ahead, as the calculation will be complex and rigorous.

4. Dec 14, 2012

### tiny-tim

no! A is fixed, so you can simplify by letting A = (c,0) (and P = (c cos θ, c sin θ))

Last edited: Dec 14, 2012

5. Dec 15, 2012

### utkarshakash

That did simplify the expression to a large extent. Thanks!
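Following tiny-tim's simplification, the claimed locus can be checked numerically (this sketch and the closed form $x^2 + 2y^2 = c^2$ are my own working, not from the thread): take the circle $x^2+y^2=c^2$ with A = (c, 0), B = (−c, 0), P = (c cos θ, c sin θ), intersect line AP with line BT, and verify the intersection lies on $x^2 + 2y^2 = c^2$, an ellipse with eccentricity $1/\sqrt{2}$.

```python
import math

def locus_point(theta, c=1.0):
    """Intersection of line AP with line BT for the construction in the thread."""
    px, py = c * math.cos(theta), c * math.sin(theta)
    # Tangent at P (x*px + y*py = c^2) meets the tangent at A (x = c) at T:
    ty = (c * c - c * px) / py
    # Line AP through (c, 0) and P; line BT through (-c, 0) and T = (c, ty).
    m1 = py / (px - c)          # slope of AP
    m2 = ty / (2 * c)           # slope of BT
    x = c * (m1 + m2) / (m1 - m2)
    y = m2 * (x + c)
    return x, y

for theta in [0.3, 0.9, 1.57, 2.5]:
    x, y = locus_point(theta)
    # Each intersection lies on x^2 + 2 y^2 = c^2, so a^2 = c^2 and
    # b^2 = c^2 / 2, giving eccentricity sqrt(1 - 1/2) = 1/sqrt(2).
    assert abs(x * x + 2 * y * y - 1.0) < 1e-9
```

Substituting u = tan(θ/2) gives x = (2 − u²)/(2 + u²), y = 2u/(2 + u²), and x² + 2y² = 1 identically, which is what the assertions confirm.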
https://brilliant.org/problems/can-euler-still-help/
# Can Euler still help?

Algebra Level 4

If the expression $i\pi + e + 1$ can be expressed in the form $\ln(k)$ where $k$ is a real number, determine $k$ to three decimal places.
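A quick sanity check of the idea behind the problem (my own working, not an official solution): Euler's identity gives $e^{i\pi} = -1$, so $e^{i\pi + e + 1} = -e^{e+1}$, suggesting $k = -e^{e+1} \approx -41.19$. The sketch below verifies that the principal complex logarithm of this $k$ recovers $i\pi + e + 1$.

```python
import cmath
import math

# Candidate answer: k = -e^(e+1), a negative real number
k = -math.exp(math.e + 1)

# Principal log of a negative real x is ln|x| + i*pi,
# so log(k) should equal (e + 1) + i*pi = i*pi + e + 1.
target = complex(math.e + 1, math.pi)
assert cmath.isclose(cmath.log(k), target)
```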
https://physics.stackexchange.com/questions/470214/electric-potential-and-field-due-to-a-continuous-charge-distribution
# Electric potential and field due to a continuous charge distribution (1) The electric potential due to a continuous charge distribution is: $$\psi=\int_V \dfrac{\rho}{r}\ dV$$ To calculate this integral $$\rho$$ must be continuous over $$V$$. But $$\rho$$ is discontinuous at the boundary of $$V$$. Why does it not prevent us from carrying out the integral? (2) I have read that discontinuity in $$\rho$$ prevents us from computing the field (via potential) at the point of discontinuity. Why is it so? The potentials and field integrals require that the integrand be integrable, not continuous. Any continuous function is (Riemann) integrable, but an integrable function does not need to be continuous. In fact, for integrals in one dimension, a function is Riemann integrable if and only if it is continuous almost everywhere [that is, everywhere except on a set of (Lebesgue) measure zero]. The condition in three dimensions is more complicated, but for functions that only have discontinuities along the boundaries of simple solids, there is no problem with the integrability of the function. If you want to calculate the potential and field at a point where the charge density $$\rho$$ is discontinuous, there is no problem if the discontinuity is a simple one (meaning there is a change from one smooth function to another smooth function across a similarly smooth surface). As an example of this, consider the field inside and outside a uniformly charged solid sphere. The electric field is well defined and continuous at the boundary, as can be verified by Gauss's Law, and the potential is continuous and differentiable at the boundary, meaning that we can indeed calculate the electric field as the gradient of the potential. • Thanks... Please explain the second part – Alec Apr 3 '19 at 4:19 • @Alec Which second part? – Buzz Apr 3 '19 at 5:15 • @Buzz You know, the part after the first part. 
– Aaron Stevens Apr 3 '19 at 11:50 • @AaronStevens: Can you please answer the second part...? – Alec Apr 3 '19 at 13:55 • @AaronStevens: Since the integrand is integrable at the boundary of a volume charge distribution, does it mean we can calculate the field (via potential) even at the point of discontinuity? Is it what Buzz's answer is saying? – Alec Apr 3 '19 at 14:14
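As a numerical illustration of the solid-sphere example in the answer (my own sketch, with unit constants k = Q = R = 1 for simplicity): Gauss's law gives E(r) = kQr/R³ inside a uniformly charged sphere and E(r) = kQ/r² outside, and the two expressions agree at r = R, so the field is continuous even though the charge density ρ jumps there.

```python
def E(r, R=1.0, Q=1.0, k=1.0):
    """Field magnitude of a uniformly charged solid sphere (via Gauss's law)."""
    if r <= R:
        return k * Q * r / R**3   # grows linearly inside the sphere
    return k * Q / r**2           # Coulomb field outside

R = 1.0
eps = 1e-8
# rho is discontinuous at r = R, but the field is not:
inside, outside = E(R - eps), E(R + eps)
assert abs(inside - outside) < 1e-6
```

Both branches evaluate to kQ/R² at the boundary, matching the answer's claim that the potential is differentiable there and the field well defined.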
http://mathhelpforum.com/algebra/191573-help-maths-equations.html
# Math Help - Help with maths equations

1. ## Help with maths equations

1. Find the coordinates of the points where the graph $y=2x^3+3x^2-4x+1$ cuts the x-axis.

2. Show that $(x-1)$ is a factor of $6x^3+11x^2-5x-12$ and find the other two linear factors of this expression.

2. ## Re: Help with maths equations

Originally Posted by kitobeirens

For 1, we know that any point on the x-axis is of the form (a, 0)... So set y = 0 and find what value(s) "a" can be. This might require factoring, synthetic division, and/or numerical methods. Punch your equation into wolframalpha.com if you need a shortcut for your intuition.

3. ## Re: Help with maths equations

Originally Posted by kitobeirens

For 2, use the factor theorem, which says that $(x-a)$ is a factor of $f(x)$ if and only if $f(a) = 0$. In layman's terms, if $(x-1)$ is a factor then $f(1) = 0$.
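For problem 2, a short check of the factor theorem and the remaining factors (my own working, not from the thread): f(1) = 6 + 11 − 5 − 12 = 0, so (x − 1) is a factor; dividing it out leaves 6x² + 17x + 12 = (2x + 3)(3x + 4).

```python
import numpy as np

coeffs = [6, 11, -5, -12]            # 6x^3 + 11x^2 - 5x - 12

# Factor theorem: (x - 1) is a factor iff f(1) = 0
assert np.polyval(coeffs, 1) == 0

# Divide out (x - 1); the quotient is 6x^2 + 17x + 12 with zero remainder
quotient, remainder = np.polydiv(coeffs, [1, -1])
assert np.allclose(quotient, [6, 17, 12])
assert np.allclose(remainder, 0)

# Its roots -3/2 and -4/3 give the linear factors (2x + 3) and (3x + 4)
roots = sorted(np.roots(quotient))
assert np.allclose(roots, [-1.5, -4 / 3])
```

The same approach (`np.roots([2, 3, -4, 1])`) handles problem 1 numerically; its rational root is x = 1/2.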
http://www.nag.com/numeric/CL/nagdoc_cl23/html/G13/g13dxc.html
NAG C Library Manual

# NAG Library Function Document nag_tsa_arma_roots (g13dxc)

## 1  Purpose

nag_tsa_arma_roots (g13dxc) calculates the zeros of a vector autoregressive (or moving average) operator.

## 2  Specification

#include
#include

void nag_tsa_arma_roots (Integer k, Integer ip, const double par[], double rr[], double ri[], double rmod[], NagError *fail)

## 3  Description

Consider the vector autoregressive moving average (VARMA) model

$$W_t-\mu=\phi_1(W_{t-1}-\mu)+\phi_2(W_{t-2}-\mu)+\cdots+\phi_p(W_{t-p}-\mu)+\epsilon_t-\theta_1\epsilon_{t-1}-\theta_2\epsilon_{t-2}-\cdots-\theta_q\epsilon_{t-q}, \quad (1)$$

where $W_t$ denotes a vector of $k$ time series and $\epsilon_t$ is a vector of $k$ residual series having zero mean and a constant variance-covariance matrix. The components of $\epsilon_t$ are also assumed to be uncorrelated at non-simultaneous lags. $\phi_1,\phi_2,\dots,\phi_p$ denotes a sequence of $k$ by $k$ matrices of autoregressive (AR) parameters and $\theta_1,\theta_2,\dots,\theta_q$ denotes a sequence of $k$ by $k$ matrices of moving average (MA) parameters. $\mu$ is a vector of length $k$ containing the series means. Let

$$A(\phi)=\begin{bmatrix} \phi_1 & I & 0 & \cdots & \cdots & 0\\ \phi_2 & 0 & I & 0 & \cdots & 0\\ \vdots & & & \ddots & & \vdots\\ \phi_{p-1} & 0 & \cdots & \cdots & 0 & I\\ \phi_p & 0 & \cdots & \cdots & 0 & 0 \end{bmatrix}_{pk\times pk}$$

where $I$ denotes the $k$ by $k$ identity matrix. The model (1) is said to be stationary if the eigenvalues of $A(\phi)$ lie inside the unit circle. Similarly let

$$B(\theta)=\begin{bmatrix} \theta_1 & I & 0 & \cdots & \cdots & 0\\ \theta_2 & 0 & I & 0 & \cdots & 0\\ \vdots & & & \ddots & & \vdots\\ \theta_{q-1} & 0 & \cdots & \cdots & 0 & I\\ \theta_q & 0 & \cdots & \cdots & 0 & 0 \end{bmatrix}_{qk\times qk}.$$

Then the model is said to be invertible if the eigenvalues of $B(\theta)$ lie inside the unit circle. nag_tsa_arma_roots (g13dxc) returns the $pk$ eigenvalues of $A(\phi)$ (or the $qk$ eigenvalues of $B(\theta)$) along with their moduli, in descending order of magnitude. Thus to check for stationarity or invertibility you should check whether the modulus of the largest eigenvalue is less than one.
## 4  References

Wei W W S (1990) Time Series Analysis: Univariate and Multivariate Methods Addison–Wesley

## 5  Arguments

1: k (Integer, Input)

On entry: $k$, the dimension of the multivariate time series. Constraint: ${\mathbf{k}}\ge 1$.

2: ip (Integer, Input)

On entry: the number of AR (or MA) parameter matrices, $p$ (or $q$). Constraint: ${\mathbf{ip}}\ge 1$.

3: par[${\mathbf{ip}}\times{\mathbf{k}}\times{\mathbf{k}}$] (const double, Input)

On entry: the AR (or MA) parameter matrices read in row by row in the order $\phi_1,\phi_2,\dots,\phi_p$ (or $\theta_1,\theta_2,\dots,\theta_q$). That is, ${\mathbf{par}}\left[(l-1)\times k\times k+(i-1)\times k+j-1\right]$ must be set equal to the $(i,j)$th element of $\phi_l$, for $l=1,2,\dots,p$ (or the $(i,j)$th element of $\theta_l$, for $l=1,2,\dots,q$).

4: rr[${\mathbf{k}}\times{\mathbf{ip}}$] (double, Output)

On exit: the real parts of the eigenvalues.

5: ri[${\mathbf{k}}\times{\mathbf{ip}}$] (double, Output)

On exit: the imaginary parts of the eigenvalues.

6: rmod[${\mathbf{k}}\times{\mathbf{ip}}$] (double, Output)

On exit: the moduli of the eigenvalues.

7: fail (NagError *, Input/Output)

The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6  Error Indicators and Warnings

NE_ALLOC_FAIL

Dynamic memory allocation failed.

On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.

NE_EIGENVALUES

An excessive number of iterations have been required to calculate the eigenvalues.

NE_INT

On entry, ${\mathbf{ip}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{ip}}\ge 1$.

On entry, ${\mathbf{k}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{k}}\ge 1$.

NE_INTERNAL_ERROR

An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
## 7  Accuracy

The accuracy of the results depends on the original matrix and the multiplicity of the roots.

The time taken is approximately proportional to $kp^3$ (or $kq^3$).

## 9  Example

This example finds the eigenvalues of $A(\phi)$ where $k=2$ and $p=1$ and $\phi_1=\begin{bmatrix}0.802 & 0.065\\ 0.000 & 0.575\end{bmatrix}$.

### 9.1  Program Text

Program Text (g13dxce.c)

### 9.2  Program Data

Program Data (g13dxce.d)

### 9.3  Program Results

Program Results (g13dxce.r)
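The example's result can be reproduced in a few lines of NumPy (an illustrative sketch, not the NAG routine itself): for $k=2$, $p=1$ the companion matrix $A(\phi)$ is just $\phi_1$, so its eigenvalues determine stationarity directly.

```python
import numpy as np

# phi_1 from the example; with k = 2, p = 1 the companion matrix A(phi) = phi_1
phi1 = np.array([[0.802, 0.065],
                 [0.000, 0.575]])

eig = np.linalg.eigvals(phi1)
moduli = np.sort(np.abs(eig))[::-1]   # descending order, as g13dxc reports them

# phi_1 is upper triangular, so its eigenvalues are the diagonal entries
assert np.allclose(moduli, [0.802, 0.575])

# The largest modulus is less than one, so the AR operator is stationary
assert moduli[0] < 1.0
```

For $p > 1$ one would assemble the full $pk \times pk$ block companion matrix shown in Section 3 before calling `eigvals`.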
http://mathhelpforum.com/calculus/153222-solving-improper-integrals-type-1-a.html
# Math Help - Solving Improper Integrals of type 1

1. ## Solving Improper Integrals of type 1

I've been trying to solve the integral $\int_{-\infty}^{+\infty} \frac{x}{(x^2+4)^{3/2}}\,dx$, so I wrote it as $\lim_{A \to -\infty} \int_A^0 \frac{x}{(x^2+4)^{3/2}}\,dx + \lim_{B \to +\infty} \int_0^B \frac{x}{(x^2+4)^{3/2}}\,dx$. By substituting $u = x^2+4$ I got the antiderivative $-1/\sqrt{x^2+4} + C$ and ended up with an answer of 0 when both integrals were added. Is this correct? I feel like this is wrong somehow.

2. You got the correct result; it's zero: $\displaystyle \int \frac {x\; dx }{\sqrt{(x^2+4)^{3}} } = -\frac {1}{\sqrt{x^2+4}} \Big| _{-\infty} ^{+\infty} = 0$

3. That's correct.
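A quick numerical check of the two halves (my own sketch, not from the thread), using the antiderivative F(x) = −1/√(x² + 4): the piece over (−∞, 0] tends to −1/2, the piece over [0, ∞) tends to +1/2, and their sum is 0, so the improper integral genuinely converges rather than cancelling by accident.

```python
import math

def F(x):
    """Antiderivative of x / (x^2 + 4)^(3/2)."""
    return -1.0 / math.sqrt(x * x + 4.0)

B = 1e9  # stand-in for the limits A -> -infinity, B -> +infinity
left = F(0.0) - F(-B)    # integral over (-inf, 0], tends to -1/2
right = F(B) - F(0.0)    # integral over [0, +inf), tends to +1/2

assert abs(left + 0.5) < 1e-6
assert abs(right - 0.5) < 1e-6
assert abs(left + right) < 1e-12
```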
https://www.gradesaver.com/invisible-man/q-and-a/what-is-the-meaning-of-the-word-fancied-in-the-this-sentences-378492
# What is the meaning of the word "fancied" in this sentence: "the shadows, she fancied, had tricked her"?

##### Answers 2

In context, "fancied" would mean supposed, believed, or imagined. Any of the three would work.

"Fancied" can also mean to feel a desire or liking for something, or to regard (a horse, team, or player) as a likely winner, but those senses don't fit this sentence.
https://www.physicsforums.com/threads/resolving-power.53008/
# Resolving Power

1. Nov 17, 2004

### PixelHarmony

Easy Resolving Power Problem

$\sin(\theta_{min}) \approx \theta_{min} = 1.22\,(\lambda/D)$

(a) Two stars are photographed utilizing a telescope with a circular aperture of diameter 2.32 m and light with a wavelength of 461 nm. If both stars are $10^{22}$ m from us, what is their minimum separation so that we can recognize them as two stars (instead of just one)?

d = ???? m

(b) A car passes you on the highway and you notice the taillights of the car are 1.14 m apart. Assume that the pupils of your eyes have a diameter of 6.7 mm and index of refraction of 1.36. Given that the car is 13.8 km away when the taillights appear to merge into a single spot of light because of the effects of diffraction, what wavelength of light does the car emit from its taillights (what would the wavelength be in vacuum)?

l = ???? nm

Last edited: Nov 17, 2004

2. Nov 17, 2004

### PixelHarmony

nevermind i got them
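For reference, here is how the numbers work out under the Rayleigh criterion (my own worked sketch; the poster solved the problem independently). Part (a) reads the stars' distance as $10^{22}$ m; part (b) uses the fact that inside the eye the wavelength is the vacuum wavelength divided by n.

```python
# (a) Minimum separation of two stars at distance L
lam, D, L = 461e-9, 2.32, 1e22
theta_min = 1.22 * lam / D           # Rayleigh criterion, radians
d = theta_min * L                    # small-angle approximation; roughly 2.4e15 m

# (b) Taillight wavelength from the merge distance
sep, dist, pupil, n = 1.14, 13.8e3, 6.7e-3, 1.36
theta = sep / dist                   # angular separation when the lights merge
lam_in_eye = theta * pupil / 1.22    # wavelength inside the eye
lam_vacuum = n * lam_in_eye          # about 617 nm, i.e. red light
```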
http://mathhelpforum.com/calculus/127966-existence-derivative.html
1. Existence of Derivative

Hello, I have the following exercise: $x(t) = \frac{3}{t^3}$ and $y(t) = e^{\tan(t)}$. For which $t$ does the first derivative exist?

2. Originally Posted by coobe

For $x(t)$: $t \ne 0$.

3. For $y(t)$, $\frac{dy}{dt} = e^{\tan(t)} \sec^2(t)$ is defined except at odd multiples of $\pi/2$, i.e. $t = (2n-1)\pi/2$ for all integers $n$, since $\tan(t)$ and $\sec(t)$ are not defined at these points.
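A quick numerical sanity check of the formula in reply 3 (my own sketch): compare a central finite difference of $y(t) = e^{\tan t}$ with the analytic derivative $e^{\tan t}\sec^2 t$ at a point away from the singularities.

```python
import math

def y(t):
    return math.exp(math.tan(t))

def dy_analytic(t):
    # d/dt e^(tan t) = e^(tan t) * sec^2(t)
    return math.exp(math.tan(t)) / math.cos(t) ** 2

t, h = 0.5, 1e-6
dy_numeric = (y(t + h) - y(t - h)) / (2 * h)
assert abs(dy_numeric - dy_analytic(t)) < 1e-4
```

Trying the same comparison near t = π/2 fails, as expected: tan(t) blows up there, which is exactly where the derivative does not exist.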
https://or.stackexchange.com/questions/2999/a-heuristic-approach-to-solve-a-milp-problem
# A heuristic approach to solve a MILP problem?

I have the following optimization problem, which is a MILP. I can solve it with a MILP solver. This one I posted here: Is there a heuristic approach to the MILP problem? Since I have an additional but very important constraint, I am putting it as a new post. I am looking for a heuristic approach to solve this problem. Any hint?

\begin{alignat}{2}\min_t&\quad t\\\text{s.t.}&\quad d_{c}-t\le \sum_{n=1}^{N} B_{n,c}x_{n}\le d_{c}+t,\quad&\forall c\in\{1,2,\cdots,C\}\\&\quad\sum_{n=1}^{N} x_n = M\\ &\quad x_n\le M y_n, \quad\forall n\\ &\quad\sum_{n=1}^N y_n \le 3 \end{alignat}

## VERY IMPORTANT

As this is a part of another problem, I found that this is not serving what I actually needed. I want to add one more constraint: there can be only a given number of non-zero $x_n$. For example, for the previous example, let's say we can have a maximum of $3$ non-zero $x_n$s.

where

• $B$ is a binary matrix of size $N\times C$
• $d$ is a known vector of positive numbers of size $1\times C$
• $M$ is a known parameter
• $x_n$ is an optimization variable (integer variable, $x_n\ge 0$, $x_n\in\{0,1,2,3,\cdots,M\}$)
• $y_n$ is a binary variable
• $t$ is also an optimization variable (integer/continuous)

@prubin has suggested a reformulation of the problem: If we set $S_c = \{n : B_{n,c}=1\}$, we can rewrite the problem as \begin{align*} \min_{t} & \quad t\\ \text{s.t.} & \quad\left|\sum_{n\in S_{c}}x_{n}-d_{c}\right|\le t,\quad\forall c\in\{1,2,\cdots,C\}\\ & \quad\sum_{n=1}^{N}x_{n}=M. \end{align*} A simple greedy heuristic is to start with $x_n=0\,\forall n$ and, in each iteration, bump one of the $x$ variables up by 1, selecting the $x_n$ that most improves (or least degrades) $t$, until the equality constraint is satisfied.
• The heuristic I previously suggested could be adapted here, by adding a restriction that once three different x variables have been bumped, you are limited to using just those three (or however many the limit on nonzeroes is). I don't know that it would be a very good heuristic, though. – prubin Nov 6 '19 at 22:52 Introduce binary variables $$y_n$$ and constraints \begin{align} x_n &\le M y_n &&\text{for all n}\\ \sum_n y_n &\le 3 \end{align} • For once, $M$ is actually the right value to use for big-M. :) – RobPratt Nov 6 '19 at 13:14 Here is a modification of the bumping heuristic, assuming that you are limited to using $$K$$ of the $$x$$ variables (so $$K=3$$ in your example): 1. Select $$K$$ distinct values of the index $$n$$ randomly. 2. Apply the bumping heuristic, but limit it to bumping those $$K$$ variables. Note that the heuristic may not bump all of them.
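A minimal sketch of the bumping heuristic with the cardinality restriction folded in (my own implementation of the idea in the answers; the function and variable names are made up): bump the $x_n$ whose increment yields the smallest $t$, and once $K$ distinct variables have been used, only those may be bumped further.

```python
import numpy as np

def greedy_bump(B, d, M, K):
    """Greedy heuristic: distribute M units over at most K of the x_n."""
    N, _ = B.shape
    x = np.zeros(N, dtype=int)
    used = set()
    for _ in range(M):
        best_n, best_t = None, None
        for n in range(N):
            # Once K distinct variables are nonzero, restrict to them
            if len(used) >= K and n not in used:
                continue
            x[n] += 1
            t = np.abs(x @ B - d).max()   # objective value after this bump
            x[n] -= 1
            if best_t is None or t < best_t:
                best_n, best_t = n, t
        x[best_n] += 1
        used.add(best_n)
    return x, int(np.abs(x @ B - d).max())

# Tiny instance: N = 4 candidate variables, C = 3 constraints
B = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0],
              [0, 0, 1]])
d = np.array([3, 2, 4])
x, t = greedy_bump(B, d, M=5, K=3)
```

Restarting from several random initial choices of the $K$ indices, as suggested in the last answer, and keeping the best result is a cheap way to improve on a single greedy pass.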
https://www.physicsforums.com/threads/bounds-on-sets.763874/
# Bounds on Sets

1. Jul 29, 2014

### res3210

Hey guys, I'm puzzling a bit over an example I read in Rudin's Principles of Mathematical Analysis. He has just defined least upper bound in the section I am reading, and now he wants to give an example of what he means. The argument goes like this:

Consider the set A, where A = {p} s.t. $p^2 < 2$ and $p \in \mathbb{Q}^+$, and the set B, where B = {p} s.t. $p^2 > 2$ and p is the same as above. Now let A $\subset Q$ and B $\subset Q$, where Q is the ordered set of all rational numbers. He says that A has no least upper bound and B has no greatest lower bound. I do not see why. If I consider A by itself as a subset of Q, then I think 2 = sup A, and B by itself as a subset of Q, 2 = inf B. I could see that if we are talking about the set $A \cup B$, then there is no sup A, if $A \subset A \cup B$, because he just proved that there is no least element of B and no greatest element of A, and so it follows there could be neither sup A nor inf B in this case. But he states to consider A and B as subsets of Q. Any help clarifying this matter would be greatly appreciated. Also, sorry if this is in the wrong place; not sure where it goes, so I figured general math would be best.

2. Jul 29, 2014

### Fredrik

Staff Emeritus

He defines $A=\{p\in\mathbb Q|p^2<2\}$ and says that this set has no largest element. He then says that A has no least upper bound in $\mathbb Q$. You say that 2 should be the least upper bound, but that's clearly not true, since (for example) 1.5 is an upper bound of A that's less than 2. The least upper bound of A in $\mathbb R$ is the irrational number $\sqrt 2$. What Rudin is trying to explain is that no rational number can be a least upper bound of A. He defines $B=\{p\in\mathbb Q|p^2>2\}$ and points out two things: 1. A rational number M is an upper bound of A, if and only if it's an element of B. 2. B doesn't have a smallest element.

These two things together imply that no rational number can be the least upper bound of A.
Now regarding the notions of "upper bound in X" vs. "upper bound in Y", where X is a subset of Y, recall that the definition of "upper bound" involves a specific ordering relation. If S is a subset of X (which is still a subset of Y), no element of Y is "an upper bound of S, period". It can be an upper bound of S with respect to the ordering relation on X, or an upper bound of S with respect to the ordering relation on Y. Only an element of X can be an upper bound of S with respect to the ordering relation on X. In the case of $\mathbb Q$ and $\mathbb R$, we can't just say "with respect to ≤", because that symbol is used both for the ordering relation on $\mathbb Q$ and the ordering relation on $\mathbb R$. So we use phrases like "upper bound in $\mathbb Q$" to shorten the phrase "upper bound with respect to the ordering relation on $\mathbb Q$".

Last edited: Jul 29, 2014
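Rudin's claim that B has no smallest element (and A no largest) rests on his explicit rational map $q = p - (p^2-2)/(p+2)$. A quick sketch of it using exact rational arithmetic, assuming Python's `fractions` module (variable names are mine):

```python
from fractions import Fraction

def next_rational(p):
    """Rudin's construction q = p - (p^2 - 2)/(p + 2):
    if p^2 > 2 then q < p and q^2 > 2, so B has no smallest element
    (symmetrically, A has no largest element when p^2 < 2)."""
    return p - (p * p - 2) / (p + 2)

# 3/2 is a rational upper bound of A; we can keep finding strictly smaller
# rational upper bounds forever, which is why no rational number can be
# the least upper bound of A.
p = Fraction(3, 2)
for _ in range(5):
    q = next_rational(p)
    assert q < p and q * q > 2  # still an upper bound of A, but smaller
    p = q
print(p, float(p))  # approaches sqrt(2) from above, never reaching it
```

The iterates converge to $\sqrt 2$ from above while remaining rational, illustrating why sup A exists in $\mathbb R$ but not in $\mathbb Q$.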
https://forensiccoe.org/webinar/effects-of-dna-extraction-methods/
Original Air Date: Thursday, August 20th, 2020, 1:00 PM ET – 2:00 PM ET
Duration: 1 hour

DNA recovered from forensic and ancient DNA (aDNA) sources is generally expected to be in low copy number and degraded in strand length. However, considering the following equation, it is important to realize that the exact amount of recoverable DNA from any source is unknown:

Net yield of DNA = original amount − loss in sampling − loss in extraction/purification − loss due to amplification bias (e.g., due to PCR inhibitors)

This study's first objectives included estimating: 1) the amount of DNA lost during DNA extraction and purification, and 2) the degree to which molecules are damaged as a result of the extraction method. Four DNA standards were created from freshly extracted pig (Sus scrofa) liver DNA, representing highly intact genomic DNA to more degraded and low copy number DNA. Ten aliquots of each of these standards were then subjected to seven published aDNA extraction methods and twelve commercial extraction methods, many of which are marketed as suitable for forensic and ancient DNA applications. "DNA in" of the standards was compared to "DNA out" using a Fragment Analyzer that is capable of simultaneously measuring DNA quantity (i.e., concentration) and quality (i.e., DNA strand length). Some kits performed very well, whereas others were associated with tremendous loss and/or fragmentation. This demonstrated that many extraction methods used to study degraded DNA are far from optimal, and that research should be focused on improving their efficacy. DNA loss during extraction may impact downstream applications. Considering the case of silica-based extractions, DNA can only be lost via one of two main mechanisms. First, it is possible that not all the source DNA binds to the silica column. Second, it is possible that DNA remains bound to silica during the elution step.
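The net-yield equation above is simple enough to state as code; the numbers below are purely hypothetical, chosen only to show how losses compound:

```python
def net_dna_yield(original_ng, sampling_loss_ng, extraction_loss_ng,
                  amplification_bias_loss_ng):
    """Net yield of DNA = original amount - loss in sampling
    - loss in extraction/purification - loss due to amplification bias.
    All quantities in nanograms; values are illustrative only."""
    return (original_ng - sampling_loss_ng - extraction_loss_ng
            - amplification_bias_loss_ng)

# Hypothetical case: of an (unknown) 100 ng original, only 25 ng survives.
print(net_dna_yield(100.0, 20.0, 45.0, 10.0))  # 25.0
```

The point of the equation is that only the left-hand side is ever observed; the original amount and the individual losses remain unknown in practice.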
In the second part of this study, DNA standards of known concentration were subjected to the Qiagen DNeasy Blood and Tissue Kit (a silica-based extraction) to identify: 1) how much DNA did not bind to the silica column and 2) how much DNA is lost due to elution inefficiency. DNA standards were created using genomic porcine DNA. "Intact DNA" was represented by Standard 1Z (~20k bp at 100 ng/μl). To represent "degraded DNA," Standard 3Z was sheared by sonication to ~300 bp. Lower copy number variants of each of these standards were created through dilution of standards to < 1 ng/μl (Standards 2Z and 4Z).

First, following the manufacturer's protocol for extraction of standards, each spin column was moved into a new collection tube and an additional 200 μl of buffer AE was used to elute DNA that remained on the column. This elution process was repeated multiple times (up to 10 times) from the original column to observe the quantity and quality of DNA that continued to be recovered in the eluates. Characteristics of the DNA that did not initially bind to the columns were determined by repeatedly passing the original purification flow-through across new spin columns (up to 10 times). DNA bound to each of these columns was also re-eluted up to 10 times. Following this method, 100 eluates were made from each of Standards 1Z and 3Z, and 25 eluates from each of Standards 2Z and 4Z. DNA from these treatments was quantified using a Qubit Fluorometer and eluates were run on a Fragment Analyzer to determine the size and concentration of the DNA fragments that were recovered in each step. Molecules from all standards were recovered during this process, but detectable amounts varied by standard used. For example, Standard 1Z eluates had detectable amounts of DNA in 40 of 100 eluates (Qubit) vs 56 of 100 eluates on the Fragment Analyzer. However, some general trends were detected.
First, molecules were detected in the flow-through of all standards, especially those at higher concentrations. Second, the larger fragments (over 4000 bp in length) were recovered at a higher frequency in initial elutions in a new spin column. Third, eluates of the smaller-fragment standard (~300 bp in length) were not detectable using the Qubit. This suggests that modifications to the extraction protocol may be needed to recover adequate DNA from low concentration, damaged sources.

Detailed Learning Objectives:

1) To illustrate that we do not know how much DNA is available from any given source. Analysts can only know how much is retained through the process of its extraction and purification. With regard to highly compromised DNA sources, there may exist more DNA than is currently recognized.

2) To illustrate the influence of various DNA extraction methods on the quantity (i.e., concentration) and quality (i.e., strand length) of resulting DNA eluates.

3) To provide a better illustration of the nature of DNA loss from degraded and low-concentration sources when utilizing commonly used silica-based extraction methods.

Funding for this Forensic Technology Center of Excellence event has been provided by the National Institute of Justice.

Speakers: Brian Kemp, Kristine Beaty, Cara Monroe
https://scholars.ttu.edu/en/publications/search-for-long-lived-particles-using-displaced-jets-in-proton-pr
# Search for long-lived particles using displaced jets in proton-proton collisions at √s = 13 TeV

CMS Collaboration

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

## Abstract

An inclusive search is presented for long-lived particles using displaced jets. The search uses a data sample collected with the CMS detector at the CERN LHC in 2017 and 2018, from proton-proton collisions at a center-of-mass energy of 13 TeV. The results of this search are combined with those of a previous search using a data sample collected with the CMS detector in 2016, yielding a total integrated luminosity of 132 fb-1. The analysis searches for the distinctive topology of displaced tracks and displaced vertices associated with a dijet system. For a simplified model, where pair-produced long-lived neutral particles decay into quark-antiquark pairs, pair production cross sections larger than 0.07 fb are excluded at 95% confidence level (C.L.) for long-lived particle masses larger than 500 GeV and mean proper decay lengths between 2 and 250 mm. For a model where the standard model-like Higgs boson decays to two long-lived scalar particles that each decays to a quark-antiquark pair, branching fractions larger than 1% are excluded at 95% C.L. for mean proper decay lengths between 1 mm and 340 mm. A group of supersymmetric models with pair-produced long-lived gluinos or top squarks decaying into various final-state topologies containing displaced jets is also tested. Gluino masses up to 2500 GeV and top squark masses up to 1600 GeV are excluded at 95% C.L. for mean proper decay lengths between 3 and 300 mm. The highest lower bounds on mass reach 2600 GeV for long-lived gluinos and 1800 GeV for long-lived top squarks. These are the most stringent limits to date on these models.
Original language: English
Journal: Physical Review D
Volume: 104
Issue: 1
DOI: https://doi.org/10.1103/PhysRevD.104.012015
State: Published - Jul 1 2021

## Fingerprint

Dive into the research topics of 'Search for long-lived particles using displaced jets in proton-proton collisions at √s = 13 TeV'. Together they form a unique fingerprint.
https://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-12-cumulative-review-page-914/49
## Algebra: A Combined Approach (4th Edition)

$\log_{x} 25=2$ is rewritten as $25=x^{2}$ using the definition of the logarithm. Now solving $x^{2}=25$: $x^{2}=5^{2}$ gives $x=\pm 5$, but the base of a logarithm must be positive and not equal to 1, so the negative root is rejected and $x=5$. Therefore, the solution set is {5}.
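A quick numerical check of this solution, and of why the negative root of $x^2 = 25$ is rejected (a sketch; variable names are mine):

```python
import math

# x = 5 indeed satisfies log_x(25) = 2:
assert math.isclose(math.log(25, 5), 2.0)

# x = -5 also solves x^2 = 25, but it is not a valid logarithm base,
# since a base must be positive and not equal to 1:
for candidate in (5, -5):
    if candidate > 0 and candidate != 1:
        print(candidate, "is a valid base")
    else:
        print(candidate, "is rejected")
```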
http://math.stackexchange.com/questions/275039/correct-use-of-substitution-rule-for-integration-on-riemannian-manifolds
# Correct use of substitution rule for Integration on Riemannian manifolds

Let $(N,g_N)$ be a Riemannian manifold and let $\psi: M \rightarrow N$ be a diffeomorphism. Now I know what the Riemannian metric on $M$ defined by the pull-back of the metric on $N$ looks like (this is given in my case): $$\psi^{\ast}g_N = \sum g_{ij} dx_i \otimes dx_j$$ is a Riemannian metric on $M$. Now I want to compute integrals of the form $$\int_M f \mathrm{dvol_{g_M}}$$ where $f \in C^{\infty}(M)$. But since I only have information about $N$ and the pull-back, I'd like to use the substitution rule. In my case I know that $N=(0,\varepsilon) \times S^1$ and $\psi^{\ast}g_N = \frac{dx^2+d\theta^2}{x^2}$ for a small $\varepsilon > 0$, where $d\theta$ refers to the standard metric of $S^1$.

How do I use the substitution rule for integration correctly in my situation? Is it then correct to write $$\int_M f \mathrm{dvol_{g_M}} = \int_{\psi(M)} f(x,\theta) \frac{1}{x^2} dxd\theta$$ ? As far as I know, the substitution rule reads $$\int_M (f \circ \psi) (x) \, |\det D_\psi| \, \mathrm{dx} = \int_{\psi(M)} f(y) \mathrm{dy}$$ where $D_\psi$ is the Jacobian of the diffeomorphism $\psi: M \rightarrow N$. More generally: if $w$ is a form on $N$ then: $$\int_M \psi^{\ast}w = \int_{\psi(M)} w$$

In my case the metric is given and defined by a pull-back as described. I am confused about whether I got it right: if the metric is defined by pullback, then the volume form is also a pull-back of a form on $N$? So in my case the corresponding form on $N$ is $\frac{\mathrm{dx\wedge d\theta}}{x^2}$? Is this correct with the above definitions? Did I compute the volume form correctly?
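For reference, the volume form can be read off directly from the given metric: in the coordinates $(x,\theta)$,

```latex
g = \frac{dx^2 + d\theta^2}{x^2}
\;\Longrightarrow\;
(g_{ij}) = \begin{pmatrix} x^{-2} & 0 \\ 0 & x^{-2} \end{pmatrix},
\qquad
\mathrm{dvol}_g = \sqrt{\det(g_{ij})}\; dx \wedge d\theta
= \frac{1}{x^2}\, dx \wedge d\theta ,
```

which agrees with the form $\frac{dx \wedge d\theta}{x^2}$ proposed at the end of the question.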
http://mathhelpforum.com/statistics/206936-help-understanding-order-statistics-formula.html
# Thread: Help understanding order statistics formula

1. ## Help understanding order statistics formula

This may be a stupid question, but this formula is just not clicking with me. I understand the rationale behind the factorial, but I do not understand the notation after the factorial. What does it represent? Is it the individual density functions multiplied by each other? Can someone provide a few examples of how this formula would look in action using real numbers? It seems like in the examples I am looking at they always equal 1? Again I apologize for the stupid question, but I need a pointer or two here!

2. ## Re: Help understanding order statistics formula

Hey MN1987. There are a few different order statistics formulas: one for a maximum, one for a minimum, and one for a distribution that is so many places from the minimum or the maximum. Which one does that refer to? The ones I've seen for the general cases and the extreme cases are not the same.

3. ## Re: Help understanding order statistics formula

I think the formula is the one for the joint PDF of the particular sequence of that vector on the LHS. I copy/pasted it from this part on the wiki page: Order statistic - Wikipedia, the free encyclopedia

4. ## Re: Help understanding order statistics formula

What you wrote is not what is in the Wikipedia formula. Anyway, the formula is based on having the maximum where all values are less than it (i.e. for a maximum, you have every realization less than this value, and the probability of this will be P(X1 < a)*P(X2 < a)*...*P(Xn < a) if you are looking at all observations being less than a specified maximum). The minimum uses the reverse argument, and the general one looks at a value where so many are less and so many are more, where you have say X1 < a, X2 < a but X3 >= a, X4 >= a and so on. Note that in an i.i.d. random sample, P(X and Y) = P(X)P(Y) if X and Y are independent.
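For concreteness, the formula being discussed (the joint density of all $n$ order statistics of an i.i.d. sample, $n!\prod_i f(x_i)$ on the ordered region) can be sketched as follows; the function names are mine:

```python
import math

def joint_order_stat_pdf(xs, pdf):
    """Joint density of the order statistics X_(1) <= ... <= X_(n) of an
    i.i.d. sample with marginal density `pdf`: n! times the product of
    pdf(x_i), and zero off the ordered region x_1 <= ... <= x_n."""
    if any(a > b for a, b in zip(xs, xs[1:])):
        return 0.0
    prod = 1.0
    for x in xs:
        prod *= pdf(x)
    return math.factorial(len(xs)) * prod

# For a Uniform(0,1) sample each marginal density is identically 1, which is
# why, in textbook examples, the product after the factorial "always equals 1".
uniform_pdf = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0
print(joint_order_stat_pdf([0.2, 0.5, 0.9], uniform_pdf))  # 6.0, i.e. 3!
```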
http://www.sciforums.com/threads/gravity-attracts-and-repels-energy-mass-and-light.133883/
Gravity attracts and repels energy, mass, and light

Discussion in 'Pseudoscience Archive' started by ryan2006, Mar 7, 2013.

1. ryan2006 Registered Member Messages: 14

Einstein proved that light bends around the earth, proving Newton obsolete. However, gravity attracts mass (say a comet or an asteroid); given the size and weight of an object, I would say a photon weighs very little. Not all energy from the sun makes it into Earth's atmosphere.

3. James R Just this guy, you know? Staff Member Messages: 31,621

Yes, Einstein supersedes Newton. Yes, masses attract one another. Yes, photons weigh very little. They actually have zero rest mass. Yes, not all energy from the Sun makes it into Earth's atmosphere.
https://www.physicsforums.com/threads/friction-what-is-the-net-force-on-the-block-the-instant-that-it-starts-to-slide.536611/
# Friction: what is the net force on the block the instant that it starts to slide

1. Oct 4, 2011 ### paxian

1. The problem statement, all variables and given/known data

The coefficient of static friction between a block and a horizontal floor is 0.40, while the coefficient of kinetic friction is 0.15. The mass of the block is 5.0 kg. If a horizontal force is slowly increased until it is barely enough to make the block start moving, what is the net force on the block the instant that it starts to slide?

2. Relevant equations

Fx = F − (coefficient of static friction)*m*g = 0

3. The attempt at a solution

F = 0.4*5.0 kg*9.8 m/s^2 = 19.6 N. However the answer on the key is 12 N???? I am not sure how to get to 12 N? Please help... My thought was, when the block starts moving, I have to use the static coefficient. Your help is greatly appreciated!

2. Oct 4, 2011 ### kjohnson

If it's moving you always use kinetic friction. Even if it's barely sliding at constant velocity.

3. Oct 4, 2011 ### 1MileCrash

You found the first step. However, that's just one force, not net force. There is another force, opposite, which is kinetic friction. Kinetic friction is there at the instant the block starts to slide; you found the applied force needed to get the block moving. Find the force that kinetic friction will apply. Remember, it's in the opposite direction. If you find the net force of what you found, the applied force, and kinetic friction, I think you can tell me why the answer is what it is.

4. Oct 4, 2011 ### paxian

Fx = Fstatic − Fkinetic = 0.4*5 kg*9.8 m/s^2 − 0.15*5 kg*9.8 m/s^2 = 12.25 N... I found the answer... but why do we have to subtract kinetic friction from static friction???

5. Oct 4, 2011 ### 1MileCrash

We don't really do that. While kinetic friction is found as a set force, static friction is found as a maximum force. Static friction balances out the applied force until its max is reached, at which point the applied force is greater than static friction and the block therefore starts moving.
So, by knowing that we "push the block with a force barely enough to move it" we simply use the maximum static friction to figure out the applied force. We do not subtract kinetic friction from static friction. We use max static friction to know what force we needed to apply, then we subtract kinetic friction from that applied force.

6. Oct 4, 2011 ### paxian

I understand the logic behind it now! Thanks so much for your help!
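The arithmetic in the thread can be laid out in a few lines (a sketch, using the constants from the problem statement):

```python
MU_STATIC = 0.40    # coefficient of static friction
MU_KINETIC = 0.15   # coefficient of kinetic friction
MASS = 5.0          # kg
G = 9.8             # m/s^2

# Force barely enough to start the block: the maximum static friction.
applied = MU_STATIC * MASS * G            # 19.6 N

# The instant it slides, kinetic friction opposes that same applied force.
kinetic_friction = MU_KINETIC * MASS * G  # 7.35 N

net = applied - kinetic_friction          # 12.25 N, the key's "12 N"
print(applied, kinetic_friction, net)
```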
https://www.explorelearning.com/index.cfm?method=cResource.dspView&ResourceID=510
# Bohr Model: Introduction

### DESCRIPTION

Fire photons to determine the spectrum of a gas. Observe how an absorbed photon changes the orbit of an electron and how a photon is emitted from an excited electron. Calculate the energies of absorbed and emitted photons based on energy level diagrams. The light energy produced by the laser can be modulated, and a lamp can be used to view the entire absorption spectrum at once.
http://mathlake.com/article-314-Difference-Between-Sequence-and-Series.html
# Difference Between Sequence and Series

Sequence and series is one of the important topics in Mathematics. Though many students tend to get confused between the two, they can be easily differentiated: the order of terms always matters in a sequence, but it does not in a series. Sequence and series are two important topics which deal with the listing of elements. They are used in the recognition of patterns, for example, identifying the pattern of prime numbers, solving puzzles, and so on. Also, series play an important role in differential equations and in analysis. In this article, let us discuss the key differences between a sequence and a series in detail. Before that, we will see brief definitions of sequence and series.

## Definition of Sequence and Series in Maths

Sequence: A sequence is defined as a list of numbers arranged in a specific pattern. Each number in the sequence is considered a term. For example, 5, 10, 15, 20, 25, … is a sequence. The three dots at the end of the sequence represent that the pattern will continue further. Here, 5 is the first term, 10 is the second term, 15 is the third term and so on. Consecutive terms in the sequence can have a common difference, and the pattern will continue with that common difference. In the example given above, the common difference is 5. Sequences can be classified into different types, such as:

• Arithmetic Sequence
• Geometric Sequence
• Harmonic Sequence
• Fibonacci Sequence

Series: A series is defined as the sum of a sequence, where the order of elements does not matter. It means that a series is a list of numbers with addition symbols in between. A series can be classified as finite or infinite, depending on whether the underlying sequence is finite or infinite.
Note that a finite series is one where the list of numbers has an ending, whereas an infinite series is never-ending. For example, 1+3+5+7+.. is a series. The different types of series are:

• Geometric series
• Harmonic series
• Power series
• Alternating series
• Exponent series (P-series)

### What is the Difference Between Sequence and Series?

Here, the major differences between a sequence and a series are given below:

| Sequence | Series |
| --- | --- |
| A sequence relates to the organization of terms in a particular order (i.e. related terms follow each other). | A series is the summation of the elements of a sequence. A series can also be classified as finite or infinite. |
| In a sequence, the ordering of elements is the most important. | In a series, the order of elements does not matter. |
| The elements in the sequence follow a specific pattern. | The series is the sum of elements in the sequence. |
| Example: 1, 2, 4, 6, 8, . . . , n are said to be in a sequence. | 1 + 2 + 4 + 6 + 8 + . . . + n is said to be a series. |
| General form of a finite sequence: $$[p_{i}]_{i=1}^{n}$$. An unending sequence like p1, p2, p3, p4, p5, p6, . . . , pn, . . . is known as an infinite sequence, with general form $$[p_{n}]_{n=1}^{\infty }$$. | A finite series can be represented as m1 + m2 + m3 + m4 + m5 + m6 + . . . + mn. If m1 + m2 + m3 + . . . + mn = Sn, then Sn is termed the sum to n elements of the series, with general form $$S_{n}=\sum_{r=1}^{n}m_{r}$$. |
| The order of a sequence matters: the sequence 5, 6, 7 is different from 7, 6, 5. | In the case of a series, 5 + 6 + 7 is the same as 7 + 6 + 5. |

## Frequently Asked Questions on Sequence and Series

### What is meant by arithmetic sequence?

In maths, an arithmetic sequence, also known as an arithmetic progression, is one where the difference between two consecutive terms is a constant.

### Write down the next three terms in the given sequence 1, 4, 7, ….

The next three terms in the sequence are 10, 13, 16.
In this sequence, the difference between 1 and 4 is 3, and between 4 and 7 is also 3. So, the common difference of this sequence is 3. Therefore, 7+3 = 10, 10+3 = 13, 13+3 = 16.

### What are the different types of series in maths?

The different types of series in maths are arithmetic series, harmonic series, geometric series, P-series, exponential series and so on.

### Define the finite sequence.

If the number of terms in the sequence is finite (i.e. it has a fixed length), then it is called a finite sequence.

### Define series with an example

A series is defined as the sum of the terms in a sequence. For example, if 2, 4, 6, 8 is a sequence, then the corresponding series is written as 2+4+6+8.
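The sequence/series distinction can be shown in a few lines of code; the example mirrors the 5, 10, 15, 20, 25 sequence used above (function names are mine):

```python
def arithmetic_sequence(first, diff, n):
    """First n terms of an arithmetic sequence: order matters here."""
    return [first + diff * i for i in range(n)]

seq = arithmetic_sequence(5, 5, 5)  # [5, 10, 15, 20, 25]
series_sum = sum(seq)               # the corresponding (finite) series
print(seq, series_sum)              # [5, 10, 15, 20, 25] 75

# For a finite series, order does not change the sum:
assert sum(reversed(seq)) == series_sum
```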
https://astronomy.stackexchange.com/questions/23751/how-do-we-define-temperature-in-outer-space
# How do we define temperature in outer space?

I was recently reading an article on space.com about What's the Temperature of Outer Space. They said:

Some parts of space are hot! Gas between stars, as well as the solar wind, both seem to be what we call "empty space," yet they can be more than a thousand degrees, even millions of degrees.

But here is my question: "How do we define temperature in outer space?" We perceive temperature as the average translational kinetic energy associated with the disordered motion of atoms and molecules, so how can the temperature be so high in some places if there aren't any (or only very, very few) molecules out there in vacuum?

• Temperature is what a thermometer reads. Whatever it is that temperature is a measure of, it is reflected by the reading on a thermometer. The average translational kinetic energy you mention results from that definition retrospectively. I hope that clears some things up. – George Nov 30 '17 at 10:18
• @George that is incorrect. Temperature is related to log(entropy); a thermometer is a tool which measures temperature correctly only when in a proper environment, such as earth's atmosphere – Carl Witthoft Nov 30 '17 at 13:43
• Temperature is a physical quantity expressing the subjective perceptions of hot and cold. That's the definition. All other relations, including the one with entropy you mention, stem from that definition and these relations are defined retrospectively. A quick search of the definition of temperature will convince you of this. – George Nov 30 '17 at 14:40
• @George: Sorry, wikipedia is not an authoritative source on this. Thermodynamics is. And subjective perceptions for sure don't have a solid, reproducible definition. – AtmosphericPrisonEscape Nov 30 '17 at 22:24
• Wikipedia is never an authoritative source.
All I'm saying is that the definition is subjective perception, but the physical meaning of temperature is indeed the measure of the average kinetic energy of particles due to their thermal movement. My point is that there is a distinction between the definition and what it represents. At least that is what we were taught in thermodynamics back in the day. – George Dec 1 '17 at 6:58

First of all, the medium they are mentioning is far from empty, even though it is much less dense than what we know on Earth (see this question for more details). Then, the problem with temperature in astrophysics is that the medium you study is in general far from thermodynamic equilibrium (thermodynamic equilibrium means that there are no flows of energy or matter within the system or with the outside of the system). Most commonly, however, we can use another flavor of equilibrium to estimate temperature. This is kinetic equilibrium: since most collisions are elastic (meaning that the energy is conserved), particle velocities will follow a Maxwell-Boltzmann distribution. Depending on the ionization fraction (how much is your medium ionized?), this temperature will be either the kinetic temperature of electrons or the kinetic temperature of hydrogen.

Then, if you want to understand these high temperatures, you have to take into account the photoionization rate and the photoelectric heating (a process in which electrons are ejected from the matter of the studied medium due to UV absorption). The thermal state of the medium then depends on the equilibrium between the energy absorption by the matter and the re-emission of this energy as thermal radiation. In this framework, the energy that can be absorbed comes from the interstellar radiation field (ISRF).
In the solar neighborhood, it is dominated by six different types of radiation: galactic synchrotron emission from relativistic electrons (electrons accelerated to tremendous speeds in the galaxy), the cosmic microwave background, the infrared emission of matter heated by the light emitted by stars, emission from ionized plasma, the light emitted by stars itself, and lastly X-rays from hot plasma.

In the absence of any particles, we evaluate temperature by the radiation wavelengths seen. In "empty space" only the cosmic background radiation exists, so we mark it pretty cold. See, for example, this note. However, if you look at a chunk of space containing even a few particles, then the temperature is defined based on the mean kinetic energy of the particles, or more precisely, the amount of entropy present. See temperature as an intensive variable.

You are correct in your definition of heat, in that it does need a medium. The "heat" in space, however, is radiant. It does need a medium in order to be called heat, but the energy is there. Scientists measure the radiant energy in space to see how hot it would be if it were in a medium. The space is not physically hot, but the energy there means that if you were to travel there, the energy would be transferred into you, and you would become hot.
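The kinetic-equilibrium idea in the first answer can be sketched numerically. This is my illustration, not part of the thread: for a gas in kinetic equilibrium, (3/2) k_B T equals the mean translational kinetic energy per particle, so a kinetic temperature can be recovered from the mean squared particle speed. The 5 km/s rms speed below is an assumed value.

```python
# Kinetic temperature from the mean squared particle speed, using
# (3/2) * k_B * T = (1/2) * m * <v^2>  =>  T = m * <v^2> / (3 * k_B).
k_B = 1.380649e-23   # Boltzmann constant, J/K
m_H = 1.6735e-27     # mass of a hydrogen atom, kg

def kinetic_temperature(mean_square_speed, mass=m_H):
    """Kinetic temperature in K; mean_square_speed in m^2/s^2."""
    return mass * mean_square_speed / (3.0 * k_B)

# An assumed rms speed of 5 km/s for hydrogen already gives a gas around
# 1000 K, even though a medium this dilute carries almost no heat.
T = kinetic_temperature((5.0e3) ** 2)
```

Only a tiny amount of thermal energy is stored per unit volume at interstellar densities, which is why a "thousand-degree" medium would transfer almost no heat to a thermometer placed in it.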
https://projectivegeometricalgebra.org/wiki/index.php?title=Line
# Line Figure 1. A line is the intersection of a 4D bivector with the 3D subspace where $$w = 1$$. In the 4D projective geometric algebra $$\mathcal G_{3,0,1}$$, a line $$\mathbf L$$ is a bivector having the general form $$\mathbf L = v_x \mathbf e_{41} + v_y \mathbf e_{42} + v_z \mathbf e_{43} + m_x \mathbf e_{23} + m_y \mathbf e_{31} + m_z \mathbf e_{12}$$ . The components $$(v_x, v_y, v_z)$$ correspond to the line's direction, and the components $$(m_x, m_y, m_z)$$ correspond to the line's moment. To possess the geometric property, the components of $$\mathbf L$$ must satisfy the equation $$v_x m_x + v_y m_y + v_z m_z = 0$$ , which means that, when regarded as vectors, the direction and moment of a line are perpendicular. The bulk of a line is given by its $$m_x$$, $$m_y$$, and $$m_z$$ coordinates, and the weight of a line is given by its $$v_x$$, $$v_y$$, and $$v_z$$ coordinates. A line is unitized when $$v_x^2 + v_y^2 + v_z^2 = 1$$. When used as an operator in the sandwich product, a unitized line is a specific kind of motor that performs a 180-degree rotation about itself. ## Lines at Infinity Figure 2. A line at infinity consists of all points at infinity in directions perpendicular to the moment $$\mathbf m$$. If the weight of a line is zero (i.e., its $$v_x$$, $$v_y$$, and $$v_z$$ coordinates are all zero), then the line lies at infinity in all directions perpendicular to its moment $$(m_x, m_y, m_z)$$, regarded as a vector, as shown in Figure 2. Such a line cannot be unitized, but it can be normalized by dividing by its bulk norm. When the moment $$\mathbf m$$ is regarded as a bivector, a line at infinity can be thought of as all directions $$\mathbf v$$ parallel to the moment, which satisfy $$\mathbf m \wedge \mathbf v = 0$$. ## Skew Lines Figure 3. The line $$\mathbf J$$ connecting skew lines is given by a commutator. 
Given two skew lines $$\mathbf L$$ and $$\mathbf K$$, as shown in Figure 3, a third line $$\mathbf J$$ that contains a point on each of the lines $$\mathbf L$$ and $$\mathbf K$$ is given by the commutator $$\mathbf J = [\mathbf L, \mathbf K]^{\Large\unicode{x27C7}}_- = (v_yw_z - v_zw_y)\mathbf e_{41} + (v_zw_x - v_xw_z)\mathbf e_{42} + (v_xw_y - v_yw_x)\mathbf e_{43} + (v_yn_z - v_zn_y + m_yw_z - m_zw_y)\mathbf e_{23} + (v_zn_x - v_xn_z + m_zw_x - m_xw_z)\mathbf e_{31} + (v_xn_y - v_yn_x + m_xw_y - m_yw_x)\mathbf e_{12}$$ , where, in parallel with the general form of $$\mathbf L$$ above, the components of $$\mathbf K$$ are written $$\mathbf K = w_x \mathbf e_{41} + w_y \mathbf e_{42} + w_z \mathbf e_{43} + n_x \mathbf e_{23} + n_y \mathbf e_{31} + n_z \mathbf e_{12}$$ . The direction of $$\mathbf J$$ is perpendicular to the directions of $$\mathbf L$$ and $$\mathbf K$$, and it contains the closest points of approach between $$\mathbf L$$ and $$\mathbf K$$.
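As a numeric sanity check (my sketch, not part of the wiki page), a line can be stored as a pair of 3-vectors (direction, moment); the component formula above then reduces to cross products: the direction of $$\mathbf J$$ is $$\mathbf v \times \mathbf w$$ and its moment is $$\mathbf v \times \mathbf n + \mathbf m \times \mathbf w$$.

```python
# Sketch: numeric check of the commutator formula for the connecting line J.
# A line is represented here as a (direction, moment) pair of 3-tuples; this
# encoding (not the wiki's notation) matches L = (v, m) and K = (w, n).
def cross(a, b):
    """Ordinary 3D cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def connecting_line(L, K):
    """Direction v x w; moment v x n + m x w, per the component formula."""
    (v, m), (w, n) = L, K
    direction = cross(v, w)
    cvn, cmw = cross(v, n), cross(m, w)
    moment = tuple(cvn[i] + cmw[i] for i in range(3))
    return direction, moment

# L is the x-axis; K is parallel to the y-axis through the point (0, 0, 1),
# whose moment is p x w.  Their common perpendicular should be the z-axis.
L = ((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
K = ((0.0, 1.0, 0.0), cross((0.0, 0.0, 1.0), (0.0, 1.0, 0.0)))
J = connecting_line(L, K)   # the z-axis: direction (0, 0, 1), zero moment
```

The result also satisfies the geometric property stated above: the direction and moment of $$\mathbf J$$ come out perpendicular.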
https://www.physicsforums.com/threads/molecular-mass-of-he-mon-and-he-mmh.678745/
Molecular mass of He+MON and He+MMH

1. Mar 16, 2013 hamzaaaa Dear all, I have a case. I have two tanks: one is filled with helium gas and monomethylhydrazine (MMH), and another tank with helium gas and dinitrogen tetroxide (MON). Both tanks are at 47 °C, so they are in gaseous form. I want to know what will be the molecular mass of the gas. The molecular mass of He = 4 g/mol, MMH = 46 g/mol and MON = 92 g/mol. So can anyone tell me what molecular mass I should use for computations for (He+MMH, He+MON and He+MMH+MON)? Thanks

2. Mar 16, 2013 Staff: Mentor For some applications you can use a weighted average - but the calculation requires knowing the composition of the mixtures.

3. Mar 18, 2013 hamzaaaa But why take the average and why not the sum of all three?

4. Mar 18, 2013 Staff: Mentor Because some properties scale as the average of the molar mass. Solve this problem: you mix 0.5 L He and 0.5 L Ar. What is the density of the mixture?
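The mentor's point can be sketched as a short computation (my illustration, not from the thread). For an ideal-gas mixture, the effective molar mass is the mole-fraction-weighted average of the components; the density step assumes an arbitrary 25 °C and 1 atm.

```python
# Mole-fraction-weighted average molar mass of a gas mixture.
def mixture_molar_mass(parts):
    """parts: list of (mole fraction, molar mass in g/mol) pairs."""
    assert abs(sum(x for x, _ in parts) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(x * M for x, M in parts)

# The mentor's exercise: equal volumes (hence equal mole fractions, by
# Avogadro's law) of He (4 g/mol) and Ar (40 g/mol).
M_mix = mixture_molar_mass([(0.5, 4.0), (0.5, 40.0)])    # 22.0 g/mol

# Ideal-gas density rho = p * M / (R * T), with M converted to kg/mol.
p, R, T = 101325.0, 8.314, 298.15      # Pa, J/(mol K), K (assumed 25 degC)
rho = p * (M_mix / 1000.0) / (R * T)   # ~0.90 kg/m^3
```

Note that the sum 4 + 40 = 44 g/mol would describe no species actually present; it is the average, 22 g/mol, that the mixture's bulk properties such as density scale with.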
http://synergyfiles.com/2016/07/wind-resource/
# All you need to know about Wind Resource

Wind resource assessment is one of the first steps when considering a wind turbine for any particular location. Courtesy of the aviation industry, wind data has already been measured for numerous sites around the world. Meteorological departments in several countries have released average wind speed maps. However, atmospheric wind, as much as a climatic event, is also a localized phenomenon that is hugely influenced by surrounding topography and obstacles. The best practice therefore is to measure the wind speed on the site of interest rather than extrapolating wind data from surrounding sites. The quality of a wind resource varies and can be assessed by understanding the following parameters:

#### Average Wind Speeds

Average wind speed is one of the foremost factors that determine the quality of a wind resource. The feasibility of a site for wind power can be instantly ruled out based on this factor alone. As a rule of thumb, an average velocity of 5 m/s at 10 m elevation is generally understood as a healthy number for the feasibility of wind turbines. However, there have been installations in areas where average velocities have been as low as 3.5 m/s, mainly because of the lack of other indigenous energy options. Secondly, even if the average wind speed is low but the variability in wind speed is high, the overall output of the turbine can be higher than that of a turbine facing near-constant wind speed. This anomalous behaviour has been explained in detail in this article. However, variability in wind speed provides inconsistent and unreliable power. Unless storage options are added, it does not make economic sense to invest in low average wind speed sites.

Annual Mean Speed for UK at 25 m above ground level. Courtesy ETSU.

Planet Earth can be divided into zones that have high and low average wind conditions. Regions that are dominated by high pressure generally have a poor wind resource.
These regions can be visually identified on the map by their arid outlook, such as deserts. In contrast, regions that are dominated by low barometric pressure for most of the year generally have a higher quality wind resource. These can also be identified visually on the map as naturally green areas (high precipitation zones). The ITCZ (Intertropical Convergence Zone) and the Sub-polar Low Pressure Belt are regions where locating wind turbines is mostly feasible. In contrast, the Subtropical High Pressure Belt is not a region where many wind farms will be found.

#### Atmospheric Boundary Layer

The boundary layer phenomenon is well known to mechanical and aeronautical engineers. It results from fluid viscosity. The velocity of any flowing fluid is affected when the fluid comes in contact with a surface: viscosity forces the velocity of the layer of fluid touching the surface to reduce to zero. The layer above the contact layer slips on, but again viscous forces result in a reduced speed. The effect of the surface on the fluid diminishes as a fluid particle moves away from the surface, until the flow attains the free-stream velocity. Similar is the case with our atmosphere and the surface of the earth: winds are slower closer to the surface. Wind velocity keeps rising asymptotically as one moves upwards through 5 m, 10 m, 20 m and 30 m elevation until it reaches a constant value. The thickness of the atmospheric boundary layer depends upon both the speed of the wind and surface obstacles; it can reach up to 100 m. For wind power, this means higher wind speeds can be anticipated if the turbine mast is placed high. It should be noted that wind speed and power have a cubic relationship: doubling the wind speed means an 8-fold rise in power.

#### Wind Direction

The assessment of the prevalent wind direction is important for any site where a wind farm is being planned. Normally a "met mast" is installed for a period of a year.
Once the prevalent wind direction is established, it should be made sure that there are no obstacles (buildings, trees and pylons) in the line of sight. If multiple turbines are to be located, then it should also be made sure that the turbines do not fall in each other's wind shadow, particularly in the prevalent wind direction.

Wind data depicted as a wind rose. Courtesy Autodesk

In addition to the prevalent wind direction, the frequency of change in wind direction also has to be considered. If the wind is variable and changes direction every few seconds, then again the site is classed as a low quality wind resource. Note that if the site lies in a valley or around a hill, then localized topographical factors can create a funnelling effect. This can enhance both the wind speed and the consistency of the wind direction, which in turn increases the output of the turbine. Recorded wind data can be displayed in the form of a wind rose (as shown in the picture). A wind rose reveals the prevalent wind direction in a snapshot. For example, in the wind rose shown here, SW to NW is the direction of the prevalent wind.

#### Turbulence Intensity

Simply put, turbulence is the random movement of air particles in the oncoming wind. The higher the randomness and fluctuation in speed, the greater the turbulence. High turbulence is a serious cause for concern. Turbulence not only causes a drop in the output but also induces vibration in the blades of the turbine that can cause structural damage in the long run. The power curves for a turbine at various levels of turbulence are shown in the chart (right) by the American Meteorological Society; the drop in turbine output with rising turbulence can be noticed. Turbulence is measured in a variety of ways, with turbulence kinetic energy being one of the metrics. However, the most common way is to express it in the form of turbulence intensity.
Turbulence Intensity vs Power Curve. Courtesy: American Meteorological Society

Mathematically it can be expressed as

I = u′/U

where u′ is the root mean square of the velocity fluctuations and U is the mean wind speed. For example, if the mean wind speed is 1 m/s while the velocity fluctuates between 0.9 m/s and 1.1 m/s, then the turbulence intensity is 0.1, or 10%.

It should be noted that wind blowing over a body of calm water generally has a lower turbulence intensity. On the other hand, the turbulence intensity of wind blowing past a wooded area is much higher. Thus offshore wind turbines generally have a higher capacity factor compared to their onshore counterparts. A turbulence intensity of 20% means an extremely high degree of turbulence. As mentioned earlier, turbulence substantially increases in the wake of a turbine, and provision must be made that other turbines in the farm are not located in the line of sight of the prevailing wind. It has been noted that, due to the lower background turbulence, the structural damage on wind turbines at the edge of a wind farm is much lower compared to turbines sitting in the middle of the farm.

#### IEC 61400 Standard

The International Electrotechnical Commission is a body that prepares standards for electrical and electronic instruments and other related technologies. It has published IEC 61400, which relates to wind turbines. The reason for publishing this standard was simple: to identify "horses for courses". Different wind turbines have to be designed for different wind conditions. The standard identifies seven different wind classes. They have been listed below.
| Wind class / turbulence | Annual average wind speed at hub height (m/s) | Extreme 50-year gust in meters/second (miles/hour) |
| --- | --- | --- |
| Ia: High wind, higher turbulence (18%) | 10.0 | 70 (156) |
| Ib: High wind, lower turbulence (16%) | 10.0 | 70 (156) |
| IIa: Medium wind, higher turbulence (18%) | 8.5 | 59.5 (133) |
| IIb: Medium wind, lower turbulence (16%) | 8.5 | 59.5 (133) |
| IIIa: Low wind, higher turbulence (18%) | 7.5 | 52.5 (117) |
| IIIb: Low wind, lower turbulence (16%) | 7.5 | 52.5 (117) |
| IV | 6.0 | 42.0 (94) |

When a wind turbine is selected, it normally comes with its IEC specification. Therefore it is important that the site wind data be measured and classified so as to select the right type of turbine.

In addition to average wind speed and direction, there are additional factors that are required from an environmental and aviation point of view for wind farm consideration. An estimate of the migratory birds that pass through the site is important to gauge the ecological impact of the wind farm. For this purpose avian radars are normally installed in addition to the met mast. Similarly, if the site falls in an area that is close to an approach flight path (during landing/take-off), then the turbines can potentially cause interference with the radar systems.
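Two of the relationships from the sections above can be sketched numerically. This is an illustration, not from the article; 1.225 kg/m³ is the standard sea-level air density.

```python
import statistics

# Available wind power per unit rotor area: P/A = 0.5 * rho * v**3.
def wind_power_density(v, rho=1.225):
    """Power flux in W/m^2 for wind speed v (m/s) and air density rho."""
    return 0.5 * rho * v ** 3

# Turbulence intensity I = u'/U: rms of speed fluctuations over mean speed.
def turbulence_intensity(speeds):
    return statistics.pstdev(speeds) / statistics.fmean(speeds)

# Doubling the wind speed raises the available power eightfold:
ratio = wind_power_density(10.0) / wind_power_density(5.0)   # 8.0

# The article's example: mean 1 m/s, fluctuating between 0.9 and 1.1 m/s,
# gives a turbulence intensity of 0.1, i.e. 10%.
I = turbulence_intensity([0.9, 1.1] * 50)
```

The eightfold ratio is why the taller mast discussed in the boundary-layer section pays off: even a modest gain in speed compounds cubically in power.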
https://www.physicsforums.com/threads/rlc-ac-circuits.295887/
# Homework Help: RLC AC Circuits

1. Feb 28, 2009

### Stingarov

1. The problem statement, all variables and given/known data

Information given: Resistance = 105 ohms, XL = 212 ohms, XC = 93 ohms, V rms is 143 volts. Frequency of the circuit is f. Not defined.

Information solved for: Impedance (Z) = 158.7 ohms, I rms = .901 amps, phase angle = 48.58 degrees

The final question, which I cannot solve for some reason, is this: At the instant the voltage across the generator is maximum, what is the current I? I know that the current will be less than maximum as it lags in this case, but I don't know by what factor or how to compute it from here. Any suggestions?

2. Relevant equations

tan(theta) = (XL - XC)/R
Vmax = Imax*Z = 2^(1/2)*V rms
Z = [R^2 + (XL - XC)^2]^(1/2)

3. The attempt at a solution

I have tried everything here and it's been pathetic so far. I tried using the angle difference from 45 degrees (3.58) as somehow why I will not be maximized as the current is out of phase with V. This was wrong. Any suggestions?

Last edited: Feb 28, 2009

2. Feb 28, 2009 ### Staff: Mentor

3. Feb 28, 2009 ### Stingarov Not so far.

4. Feb 28, 2009 ### Staff: Mentor Let me re-phrase.

5. Feb 28, 2009 ### mplayer I don't even know where to start here, do you have a diagram you could post of the circuit for us?

6. Feb 28, 2009 ### Kruum Does $$v(t)= \sqrt{2}V_{rms}\sin( \omega t)$$ say anything to you? Is this given in the problem? Or has your teacher said this is the signal you can use in all of your calculations?

7. Feb 28, 2009 ### Stingarov Kruum: It was not given in the problem; it *is* relevant in the next problem, but I don't think in this one. As far as the circuit goes, it is a simple Generator----R----L----C----(return to generator) circuit. berkeman: aside from putting amps where I should have put volts, I don't see any other typos.

8. Feb 28, 2009 ### Kruum You don't need the frequency or the time. All you need to know is that sin(90°) is 1.
But since it's not given, or you can't assume that is the signal, it's no use then, obviously.

9. Feb 28, 2009 ### Stingarov I don't see how I can use the reactances when I'm given neither the L nor the C values. I end up with variables Lf or Cf either way, as the equations are XL = 2*pi*f*L and XC = (2*pi*f*C)^-1, respectively.

10. Feb 28, 2009 ### Kruum Yeah, I was a bit too hasty. I remembered the values were given, but it was only the reactances. But you really don't need that info; as I wrote above, all you need is the fact that sin(90°) is 1. So $$i(t)= \sqrt{2}I_{rms}sin(90-48.58)$$ should be the value for the current. But if you can't assume that is the signal, then it's no use.

11. Feb 28, 2009 ### Stingarov It is indeed the signal. I tested this and it was correct. Thanks. However, I would like to know why this assumption is valid? It doesn't seem so apparent to me.

12. Feb 28, 2009 ### Kruum Which assumption, that the signal for the voltage is what it is or the signal for the current?

13. Feb 28, 2009 ### Stingarov The assumption that 2*pi*f*t = 90° (= 1 when passed through sin). I know the phase angle, and I know that v(t) = Vmax*sin(2*pi*f*t + phase angle), but that doesn't give me a logical pathway to i(t) = Imax*sin(2*pi*f*t - phase angle), nor describe why 2*pi*f*t = 90°. In other words, my book does a poor, poor job of explaining this relationship.

14. Feb 28, 2009 ### Kruum You probably have had some trigonometry and you know that $$\frac {\pi}{2}=90^{ \circ}$$ and $$sin( \frac {\pi}{2})=1$$. So $$2 \pi f*t$$ must give us $$\frac {\pi}{2}$$ in order to get the maximum voltage, right? Now, the phase angle is quite a bit more complicated a story. But basically it's due to the physical properties of the capacitor and the inductor. The inductor creates a magnetic field, which tries to oppose the rise of the current. And the capacitor, on the other hand, is building up a potential difference between its plates.
So the faster the potential of the capacitor rises, the faster the electrons have to go from plate to plate, and thus the stronger the current. Does that help?

Last edited: Feb 28, 2009

15. Feb 28, 2009 ### Stingarov That helps quite a bit. Thanks for the in-depth explanation. That's what I was looking for.
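The thread's numbers can be reproduced with a short sketch (mine, not part of the thread). In a series RLC circuit the current lags the source voltage by the phase angle phi, so at the instant v(t) = Vmax*sin(wt) peaks (wt = 90°), the current is i = Imax*sin(90° - phi) = Imax*cos(phi).

```python
import math

R, XL, XC = 105.0, 212.0, 93.0   # ohms, from the problem statement
V_rms = 143.0                    # volts

Z = math.hypot(R, XL - XC)       # impedance sqrt(R^2 + (XL - XC)^2), ~158.7 ohm
I_rms = V_rms / Z                # ~0.901 A
phi = math.atan2(XL - XC, R)     # phase angle in radians (XL > XC: inductive)
phi_deg = math.degrees(phi)      # ~48.58 degrees

# At the instant the generator voltage is maximum, wt = 90 degrees:
I_max = math.sqrt(2.0) * I_rms
i_at_vmax = I_max * math.cos(phi)    # less than I_max, since the current lags
```

This confirms the answer Kruum arrived at via sin(90° - 48.58°), since sin(90° - phi) = cos(phi).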
http://mathhelpforum.com/geometry/54016-circles-vectors.html
# Math Help - Circles & Vectors

1. ## Circles & Vectors

In circle geometry, there is a mathematical property that states a radius, drawn to bisect a chord, will meet the chord at 90 degrees. I must prove this property using vector methods. How do I do this? I posted a similar question here some time ago that uses non-vector methods. I did eventually figure that out, but now I need to use vector methods. Any help would be appreciated.

2. Originally Posted by DJ Hobo In circle geometry, there is a mathematical property that states a radius, drawn to bisect a chord, will meet the chord at 90 degrees. I must prove this property using vector methods. How do I do this? I posted a similar question here some time ago that uses non-vector methods. I did eventually figure that out, but now I need to use vector methods. Any help would be appreciated.

1. Let C denote the center of the circle and the origin. Then the vectors $\vec a$ and $\vec b$ are position vectors pointing at points on the circle line. Therefore $|\vec a| = |\vec b| = r$

2. The vector $\overrightarrow{AB} = \vec b - \vec a$

3. The vector $\vec m = \frac12(\vec a + \vec b)$ has the same direction as the line passing through C and M, the midpoint of $\overline{AB}$

4. Calculate $\vec m \cdot \overrightarrow{AB} = \frac12(\vec a + \vec b) \cdot (\vec b - \vec a) = \frac12\left( \vec a \cdot \vec b + |\vec b|^2 - |\vec a|^2 - \vec a \cdot \vec b \right)$

Since $|\vec a| = |\vec b|$ the value in the bracket is zero. Therefore $\vec m$ and $\overrightarrow{AB}$ are perpendicular.
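A quick numeric sanity check of this argument (my sketch, not part of the thread): pick two random points on a circle centred at the origin and confirm that the midpoint direction is perpendicular to the chord.

```python
import math, random

# With the circle's centre at the origin, m = (a + b)/2 should be
# perpendicular to the chord b - a, because m . (b - a) = (|b|^2 - |a|^2)/2.
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

random.seed(1)
r = 5.0
t1 = random.uniform(0.0, 2.0 * math.pi)
t2 = random.uniform(0.0, 2.0 * math.pi)
a = (r * math.cos(t1), r * math.sin(t1))   # two points on the circle
b = (r * math.cos(t2), r * math.sin(t2))

m = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)   # centre-to-midpoint direction
chord = (b[0] - a[0], b[1] - a[1])
perp = dot(m, chord)    # zero up to floating-point rounding
```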
https://link.springer.com/article/10.1007%2Fs13201-018-0654-5
Applied Water Science, 8:11

# A study on the influence of tides on the water table conditions of the shallow coastal aquifers

Open Access Original Article

## Abstract

Tidal variation and the water level in an aquifer are closely linked in the coastal environment; this study attempts to find the relationship between water table fluctuation and tides in shallow coastal aquifers. The study was conducted by selecting three coastal sites and monitoring the water level at 2-h intervals over 24 h of observation. The study was done during two periods, full moon and new moon, along the Cuddalore coastal region of the southern part of Tamil Nadu, India. The study shows the relationship between tidal variation, water table fluctuations, dissolved oxygen, and electrical conductivity. An attempt has also been made in this study to approximate the rate of flow of water. However, the differences are site-specific, and the angle of inclination of the water table shows a significant relation to the mean sea level with respect to the distance of the point of observation from the sea and the elevation above mean sea level.

## Keywords

Tide Water table Coastal aquifers Dissolved oxygen Electrical conductivity Rate of flow

## Introduction

Water levels in an aquifer are a significant parameter in groundwater hydrology, and a careful and detailed analysis of their spatio-temporal variation reveals useful information on the coastal aquifer system. Various causes affecting groundwater levels are groundwater withdrawals, rainfall recharge, evapotranspiration, interaction with surface water bodies, etc. Ocean tides are also known to affect the groundwater fluctuation in coastal aquifers (Kim et al. 2008). The water level fluctuations can be due to two different factors; the first one is anthropogenic.
It is well known that groundwater withdrawal from an aquifer or from a field of wells induces water level decline, creating a cone of depression depending, among other parameters, on the aquifer hydrodynamic parameters and geometry. But, after the pumping is stopped, the level starts coming up due to recuperation to attain equilibrium. The second cause able to induce daily water level fluctuations is the tides. Actually, the effect of tides has been observed in the groundwater level fluctuation of an aquifer, if monitored continuously or at a shorter interval (Kim et al. 2006; Wang et al. 2015). The dilatation of the Earth, due mainly to the positions of the Moon and Sun, induces measurable water level fluctuations in the well (Davies 2014). In addition, most of the previous research, including the mixing of seawater/freshwater studies, has focused on the behavior of the water table in coastal beaches. All these works were concerned particularly with the relationship between tides and beach water tables, emphasizing the tidal-induced fluctuations of the water table near the shore and the consequences for processes affecting beach stability. None of them give an accurate picture of the rate of flow and the inclination of the water level to the mean sea level near the sea boundary. Dissolved oxygen, electrical conductivity, and turbidity play a significant role in the determination of water quality (Davies and Ugwumba 2013; Davies 2014). Re-oxygenating and photosynthetic processes in water are also known to be a most important indicator of coastal aquifer quality (Praveena et al. 2013). A limited number of researchers have attempted to model the physics of groundwater flow processes in beaches. Ashtiani et al. (2001) used an implicit finite-difference numerical solution to simulate beach water table response to tidal forcing. Kim et al. (2006, 2008) used a time-series monitoring model to solve the beach water-table response to tidal fluctuations.
A review of the tidal studies (Table 1) shows that multifaceted approaches have been adopted to understand the tidal zone dynamics. Usually, regional groundwater flow and contaminant transport studies in the vicinity of the coastal zone assume that the coastal boundary water level is equivalent to the mean sea level and that tidal- and wave-induced variations have negligible effect. This study aims at understanding the relation between the water table, dissolved oxygen and electrical conductivity in the coastal aquifers and the tidal influence at three selected locations in the coastal alluvial aquifers. The tidal and water table behavior was studied for two different periods, during new moon (NM) and full moon (FM).

Table 1 Studies of tides in different parts of the world

| S. no. | Authors and year | Area | Remark |
| --- | --- | --- | --- |
| 1 | Wolaver et al. (1983) | Lower Chesapeake Bay | Tidal exchanges of nitrogen and phosphorus between a mesohaline vegetated marsh |
| 2 | Simmons et al. (1988) | Virginia waters | The role of submarine groundwater discharge in transporting nutrient flux to coastal marine environments |
| 3 | Harvey and Odum (1990) | Virginia | Influence of tidal marshes on upland groundwater discharge to estuaries |
| 4 | Cheng and Chen (2001) | Shandong Province | Three-dimensional modeling, Jahe River basin, Shandong Province |
| 5 | Srinivas and Dinesh Kumar (2002) | Southwest coast of India | Tidal and non-tidal respect to water condition |
| 6 | Barlow (2003) | Atlantic coast | Ground water in fresh water-salt water environments of the Atlantic Coast |
| 7 | Westbrook et al. (2005) | Western Australia | Tidally forced estuarine boundary, Canning River, Western Australia |
| 8 | Vethamony et al. (2005) | Gulf of Kutch | Tidal eddies in a semi-enclosed basin |
| 9 | Eriksson et al. (2006) | South Africa | An unusual fluvial to tidal transition in the Mesoarchean |
| 10 | Mao et al. (2006) | Ardeer, Scotland | Tidal influence on behavior of a coastal aquifer |
| 11 | Marcello and Mauro (2007) | Mediterranean Sea | Tidal, seiche and wind dynamics in a small lagoon |
| 12 | Ramaswamy et al. (2007) | Gulf of Kachchh | Source and dispersal of suspended sediment in the macro-tidal |
| 13 | Li et al. (2008) | Shallow beach | Tide-induced seawater-groundwater circulation in shallow beach aquifers |
| 14 | Kim et al. (2008) | Jeju Island, Korea | Multi-depth monitoring at a multilayered coastal aquifer |
| 15 | Manoj et al. (2009) | West Coast, India | Tidal asymmetry in the Mandovi and Zuari estuaries |
| 16 | Sridevi et al. (2010) | East coast of India | Internal waves on sound propagation |
| 17 | Singaraja (2011) | Southeast coast of India | A study on the influence of tides on the water table conditions |
| 18 | Mitra et al. (2011) | West Bengal, India | Spatial and tidal variations of physico-chemical parameters |
| 19 | Praveena et al. (2013) | Strait of Malacca, Malaysia | Assessment of tidal and anthropogenic impacts on coastal waters |
| 20 | Simon Peter et al. (2014) | Southeast of Tamil Nadu, India | Tidal effects on sandy marine beach |
| 21 | Wang et al. (2015) | Coast | Modeling study of the potential water quality impacts from tidal energy |

## Study area

The study area lies between 11°37′4″ and 11°45′2″ North latitude and 79°44′18″ and 79°47′46″ East longitude (Fig. 1). It falls in Survey of India maps 56M/10 and 14. The study area along the coast is predominantly agricultural in nature. The three locations chosen for the study fall near the tide-dominated region demarcated from the toposheets. The locations are Devanampatnam (79°47′19″ and 11°44′), Rajapettai (79°46′22″ and 11°40′55″) and Tiyagavalli (79°45′38″ and 11°37′14″), of which the first two stations are located very near to the coast (513 and 460 m, respectively), and Tiyagavalli is at a greater distance (700 m) compared to the other two stations. The study area receives rainfall of about 1050 mm to about 1400 mm.
The locations are covered by the unconsolidated eastern coastal plain, which is predominantly occupied by the flood plain of fluvial origin formed under the influence of the Penneyar, Vellar and Coleroon river systems. The unconsolidated Quaternary alluvium is underlain by the Cuddalore sandstone, together forming the principal and potential aquifers of the region (Chidambaram et al. 2010; Singaraja et al. 2014). Groundwater occurs in the Recent alluvial formation. The impact of tides was studied on the coastal alluvial aquifer only. The water level at Devanampatnam ranges from 2.59 to 2.80 m, at Rajapettai from 0.98 to 3.05 m, and at Tiyagavalli from 2.71 to 3.05 m (mbgl).

## Methodology

The tide-influenced area was demarcated from the toposheets and the sites were selected. Three wells at Devanampatnam, Rajapettai and Tiyagavalli were observed for water level fluctuations using a water level sensor. Observations were carried out at 2-h intervals for 24 h in the field. Tidal observation stations exist at Nagapattinam in the south and at Chennai in the north; since the study area falls between these two regions, the tidal range was calculated using the TIDECAL software. The tidal values obtained were specific to the particular latitude and longitude and to the time of field observation for each area. The rate of flow of water through the observation wells was approximately calculated using Darcy's law. The inclination of the water table to the MSL was calculated by a simple Pythagorean method. All the parameters were observed and analyzed for two different periods, full moon (FM) and new moon (NM). The date and time of the observation schedule are given in Table 2.
Table 2 Details of sample locations and times

| Devanampatnam NM (Jan 20, 2015) | Devanampatnam FM (Jan 5, 2015) | Rajapettai NM (Feb 18, 2015) | Rajapettai FM (Feb 3, 2015) | Tiyagavalli NM (Mar 20, 2015) | Tiyagavalli FM (Mar 5, 2015) |
|---|---|---|---|---|---|
| 12:00 | 18:00 | 14:00 | 16:00 | 20:00 | 20:00 |
| 14:00 | 20:00 | 16:00 | 18:00 | 22:00 | 22:00 |
| 16:00 | 22:00 | 18:00 | 20:00 | 24:00 | 24:00 |
| 18:00 | 24:00 | 20:00 | 22:00 | 02:00 | 02:00 |
| 20:00 | 02:00 | 22:00 | 24:00 | 04:00 | 04:00 |
| 22:00 | 04:00 | 24:00 | 02:00 | 06:00 | 06:00 |
| 24:00 | 06:00 | 02:00 | 04:00 | 08:00 | 08:00 |
| 02:00 | 08:00 | 04:00 | 06:00 | 10:00 | 10:00 |
| 04:00 | 10:00 | 06:00 | 08:00 | 12:00 | 12:00 |
| 06:00 | 12:00 | 08:00 | 10:00 | 14:00 | 14:00 |
| 08:00 | 14:00 | 10:00 | 12:00 | 16:00 | 16:00 |
| 10:00 | 16:00 | 12:00 | 14:00 | 18:00 | 18:00 |
| 12:00 | 18:00 | 14:00 | 16:00 | 20:00 | 20:00 |

## Results and discussion

### Water level fluctuations

The water level indicates the piezometric balance of the groundwater with the atmosphere, and its variation indicates the rate of inflow and outflow across a particular well. The data observed in the Devanampatnam well during both periods indicate an increase of water level from 14:00 to 08:00 h, followed by a decrease, in both observation periods (Fig. 2). It is also interesting to note that the water level is shallower during the FM compared to the NM, though the pattern remains the same. The water level fluctuation during the period of observation is about 0.06 m in FM and 0.08 m in NM. The water level observations at Rajapettai show a similar trend to Devanampatnam, but the variation in the elevation of the water level is smaller and a parallelism is maintained (Fig. 2). The water level was lowered from the initial reading by a maximum of about 0.06 m in FM and 0.08 m in NM. The water level observations at Tiyagavalli indicate a different trend, in that the NM water level is higher than the FM water level (Fig. 2).
The increase of water level is noted from 20:00 to 08:00 h. The subsequent decrease of water level does not show steady parallelism. The water level is higher during the NM, though the trend of increase in water level remains the same in both periods. The water level was lowered from the initial reading by a maximum of about 0.07 m in FM and 0.09 m in NM (Fig. 2). Due to the water table-dependent transmissivity of an unconfined aquifer, the sea tide has an enhancing effect on the mean water table (Philip 1973; Kim et al. 2008).

### Tidal variation

The tidal curves of the full moon and the new moon at Devanampatnam show a symmetrical crest and trough (Fig. 3), and the tide fluctuates at 6-h intervals. Both the highest (1.06 m AMSL) and the lowest (0.21 m AMSL) tide levels are noted in the NM rather than the FM. The lowest and the highest tides in both FM and NM are noted at 02:00 and 08:00 h, respectively. The figure also indicates that the fluctuation is greater during the NM than the FM. It is interesting to note that the tidal height is lower in FM than in NM from 20:00 to 02:00 h, but after 02:00 h the FM tidal height exceeds that of the NM, and after 08:00 h it again falls below the NM. Hence a cyclic decrease and increase of the tidal height is alternately witnessed in both NM and FM. The tidal observations made at Rajapettai are similar to Devanampatnam: the tidal curves of the full moon and the new moon show a symmetrical crest and trough (Fig. 3) with a tidal fluctuation at 6-h intervals. Both the highest (1.08 m AMSL) and the lowest (0.24 m AMSL) tide levels are noted in the NM rather than the FM. The lowest and the highest tides in both FM and NM are noted at 02:00 and 08:00 h, respectively, and the fluctuation is greater during the NM than the FM.
It is interesting to note that the tidal height is lower in NM than in FM from 18:00 to 24:00 h, but after 24:00 h the NM tidal height exceeds that of the FM. The duration of high and low water is also affected by local conditions (Manoj et al. 2009; Wang et al. 2015), but for a given location the high water will always lag the passage of the Moon by a fixed number of hours (known as "the establishment of the port"). The tidal observations made at Tiyagavalli are also similar to those at Devanampatnam and Rajapettai: the tidal curves of the full moon and the new moon show a symmetrical crest and trough (Fig. 3) with a tidal fluctuation at 6-h intervals. The highest (1.11 m AMSL) and the lowest (0.18 m AMSL) tide levels were noted in the NM, similar to the other stations. The lowest and the highest tides in the NM are noted at 02:00 and 08:00 h, respectively, whereas in the FM the lowest is at 02:00 h and the highest at 20:00 h. The figure also indicates that the fluctuation is greater during the NM than the FM. It is interesting to note that the tidal height is lower in NM than in FM from 20:00 to 02:00 h, but after 02:00 h the NM tidal height exceeds that of the FM.

### Water level and tidal influence

Tidal forcing of adjacent groundwater is a common feature in coastal environments and can be an important mechanism of pore water movement in saturated and intertidal zones (Hughes et al. 1998). In shallow unconfined aquifers, tidal forcing can enhance the extent of saltwater ingress and can also alter the configuration of solute concentration contours, particularly near the top of the water table (Ashtiani et al. 1999; Davies 2014). The tidal cycling shows water level variation in the coastal region within the 24-h observation period.
There is an increase of 0.06 m of water level at the Devanampatnam coast during the full moon and 0.08 m during the new moon (Fig. 4). A comparison between the water level variation and the tidal variation shows that the water level increase corresponds to the increase of tidal height around 08:00 h. Similarly, the variation of water level at Rajapettai is 0.06 m during FM and 0.08 m during NM (Fig. 5). At Tiyagavalli the variation of water level is 0.07 m in FM and 0.09 m in NM. Comparing FM and NM across all three stations, the difference in water level variation is about 0.02 m; the greater variation in water level is noted in the new moon period at all the stations, and the greatest at Tiyagavalli (Fig. 6). Tiyagavalli is located near a tide-influenced creek of the Uppanar, but it is comparatively far from the sea, which results in a time lag. In general, the water level increases around 08:00 h at all the stations during both observation periods, and the sinusoidal nature observed in the tide is not well reflected in the water level, though a minor increase in water level is noted at the time of high tide.

### Rate of flow

Hydrologists have traditionally applied Darcy's law to describe the freshwater flow resulting from measured hydraulic gradients. However, when comparisons have been made, the modeled outflow is often much less than what is actually measured (Smith and Zawadzki 2003). Hence attempts have also been made here to use Darcy's law to calculate the water flow in this region. Assuming that the groundwater flows from the land to the sea, with the available data the rate of flow at a particular point and its variation with time were also calculated.
The mean sea level at Devanampatnam is 5 m; the water level observed during FM is 2.35 m and that of NM is 2.2 m, and the distance from the well to the sea (dL) is 513 m. At Rajapettai the mean sea level is 3.5 m (water level 2.46 m during FM and 1.76 m during NM) and the distance from the well to the sea is 460 m; at Tiyagavalli the mean sea level is 4.5 m (water level 1.45 m during FM and 1.7 m during NM) and the distance from the well to the sea is 700 m (Fig. 7). The flow of water through an aquifer is governed by Darcy's law, which states that the rate of flow is directly proportional to the hydraulic gradient (Harvey and Odum 1990):

$$Q = K\,\mathrm{d}h/\mathrm{d}L,$$

where Q is the rate of flow, K the hydraulic conductivity (m/day; for an alluvial sandy aquifer K is taken as 12 m/day, Morris and Johnson 1967), dh the head relative to mean sea level, and dL the distance between the well and the sea. Field studies of the groundwater discharge process in unconfined coastal aquifers show that the tide can significantly influence the temporal and spatial patterns of groundwater discharge as well as the salt concentration in the near-shore groundwater (Robinson and Gallagher 1993; Robinson et al. 1998). The rate of flow of water was calculated from the hydraulic gradient (dh/dL), taking the hydraulic conductivity of the sandy aquifer as 12 m/day (Morris and Johnson 1967). The calculated results for Devanampatnam show that the rate of flow of water increases until 08:00 h in the morning and starts decreasing thereafter, during both observation periods. The flow rate of groundwater ranges from 0.045 to 0.047 m/day during the full moon and from 0.043 to 0.044 m/day during the new moon; the rate of flow is higher during FM than NM (Fig. 8). The rate of flow of water at Rajapettai shows a similar pattern to Devanampatnam during both observation periods (Fig. 8), ranging from 0.063 to 0.065 m/day during the full moon and from 0.045 to 0.048 m/day during the new moon.
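As a rough illustration of the flow-rate calculation above, the sketch below applies Darcy's law with the hydraulic conductivity adopted in this study (K = 12 m/day for the sandy alluvial aquifer, after Morris and Johnson 1967). The head difference dh used here is an illustrative assumption of the same order as the reported water levels; the exact head values the authors used are not stated explicitly.

```python
# Sketch of the Darcy's-law flow-rate calculation used in this study.
# K = 12 m/day is the paper's value for the sandy alluvial aquifer;
# dh below is an assumed head difference, dL the Devanampatnam
# well-to-sea distance reported in the text.

def darcy_flow_rate(k, dh, dl):
    """Rate of flow Q = K * dh/dL, in m/day."""
    return k * dh / dl

K = 12.0     # hydraulic conductivity, m/day
dh = 2.0     # assumed head difference relative to MSL, m
dL = 513.0   # well-to-sea distance at Devanampatnam, m

q = darcy_flow_rate(K, dh, dL)
print(f"Q = {q:.4f} m/day")   # 0.0468 m/day, same order as reported
```

With these inputs the computed rate falls in the same range (0.04–0.05 m/day) as the Devanampatnam values reported above, which is the point of the illustration rather than a reproduction of the authors' exact figures.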
The trend of the rate of flow is similar to that of Devanampatnam, though the rate of flow of water at Rajapettai is higher than at Devanampatnam. The observation well at Tiyagavalli shows that the rate increases until 08:00 h and then starts decreasing, during both observation periods, similar to the other two sites. The flow rate of groundwater ranges from 0.018 to 0.019 m/day during the full moon and from 0.02 to 0.022 m/day during the new moon. The rate of flow is higher during NM than FM; in the other two stations the rate of flow is higher during FM, but at Tiyagavalli it is higher during NM (Fig. 8). Variation in hydraulic head in the underlying aquifer was small over the tidal cycle (Harvey et al. 1987), and some variation between periods was apparent: hydraulic heads in the aquifer were highest at full moon and lowest at new moon at Devanampatnam and Rajapettai, but Tiyagavalli shows the opposite, owing to its greater distance from the ocean.

### Influence of EC and DO on coastal aquifers

DO, EC, tides and water level fluctuation are significant parameters of groundwater dynamics, particularly in shallow coastal aquifers. Tides also affect the groundwater fluctuation, EC and DO in the coastal region (Davies 2014). The Devanampatnam and Rajapettai wells (Fig. 9) show that the water level, DO and EC values underwent significant changes during the monitoring period. There is an increase of tide, water level and EC at Devanampatnam and Rajapettai during the NM and FM around 08:00 h, which corresponds to a decrease of DO: tidally enhanced seawater intrusion brings in water from water table aquifers, which often exhibit low DO concentrations (Kim et al. 2008; Singaraja 2011; Davies 2014). The Tiyagavalli monitoring well clearly shows that the high-tide level fluctuates around 08:00 h but does not correspond to an increase in EC; its water level fluctuation corresponds to DO, which may be due to the influence of freshwater recharge.
In this category, there is a clear demarcation of freshwater recharge, because the trend of the DO at this location inversely matches that of the EC.

### Inclination of the water level to the mean sea level

The inclination of the water table can be calculated by a simple Pythagorean method, where AB is the height of the water level observed and AC is the distance from the point of observation to the sea (Fig. 10), the length of the hypotenuse BC being given by

$$\sin\theta = \frac{AB}{BC}, \qquad BC = \sqrt{AB^{2} + AC^{2}}.$$

The angle of inclination with respect to MSL is thus obtained from AB/BC (opposite over hypotenuse). This angle between the water level near the observation well at the measuring point and the MSL is denoted θ. The variation in this angle indirectly indicates the change in water level with respect to the measuring point and the sea: a higher angle shows more variation in water level, and vice versa. At Devanampatnam the angle ranges from 6.284 × 10⁻⁵ to 6.45 × 10⁻⁵ degrees during the NM, with the maximum around 20:00–10:00 h. Similar observations during the FM show that θ ranges from 6.632 × 10⁻⁵ to 6.806 × 10⁻⁵, with the maximum around 22:00–12:00 h. The observations indicate that the water level variation with respect to the sea was maximum around 22:00–12:00 h during both periods, and the FM values are higher than the NM, which indirectly influences the rate of flow (Fig. 11). At Rajapettai the angle ranges from 6.632 × 10⁻⁵ to 6.806 × 10⁻⁵ degrees during the NM, with the maximum around 02:00–10:00 h. During the FM, θ ranges from 9.25 × 10⁻⁵ to 9.424 × 10⁻⁵, with the maximum around 24:00–10:00 h.
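The inclination calculation described in this section can be sketched as follows, using the Devanampatnam full-moon figures from the rate-of-flow section (water level AB = 2.35 m, well-to-sea distance AC = 513 m). This is an illustration of the geometry only; the normalization behind the very small tabulated angle values in the text is not stated, so the magnitude below will differ from them.

```python
import math

# Inclination of the water level to MSL by the Pythagorean method:
# BC = sqrt(AB^2 + AC^2), sin(theta) = AB / BC.

def inclination_deg(ab, ac):
    """Angle (degrees) between the water level at the well and MSL."""
    bc = math.sqrt(ab ** 2 + ac ** 2)   # hypotenuse
    return math.degrees(math.asin(ab / bc))

AB = 2.35    # observed FM water level at Devanampatnam, m
AC = 513.0   # distance from well to sea, m

theta = inclination_deg(AB, AC)
print(f"theta = {theta:.4f} degrees")   # theta ≈ 0.2625 degrees
```

Because AB is tiny compared with AC, the angle is very small, which matches the qualitative point in the text: small changes in water level translate into small but measurable changes in θ.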
The observations indicate that the water level variation with respect to the sea was maximum around 24:00–10:00 h, and the FM values are higher than the NM (Fig. 11). It is also interesting to note that the increase of tide, water level and EC at Devanampatnam and Rajapettai during the NM and FM around 08:00 h corresponds to a decrease of DO. The observations at Tiyagavalli show that the angle ranges from 2.967 × 10⁻⁵ to 3.141 × 10⁻⁵ degrees during the NM, with the maximum around 20:00–12:00 h, indicating that the angle between the water level at the measuring point and the sea is then at its maximum. During the FM, θ ranges from 2.617 × 10⁻⁵ to 2.792 × 10⁻⁵, with the maximum around 04:00–12:00 h. The observations indicate that the water level variation with respect to the sea was maximum around 04:00–12:00 h, and the NM values are higher than the FM, unlike the other two locations (Fig. 11). The water level fluctuation corresponding to DO, and inversely matching the EC, may be due to an increase of photosynthetic processes associated with freshwater recharge in the river creek. Thus, the variation in the angle helps to determine the variation between the water level at the sea and that at the measuring point. It is calculated that for every 1 m the angular variation at Devanampatnam is 2.847 × 10⁻⁵, at Rajapettai 3.794 × 10⁻⁵ and at Tiyagavalli 1.745 × 10⁻⁵.

## Conclusion

The above study shows that the water level is shallower during the FM compared to the NM at Devanampatnam and Rajapettai, whereas at Tiyagavalli the NM shows a shallower water table than the FM. In general, the water level increases from 14:00 to 08:00 h, and the water level fluctuations are greater during the NM than the FM. A cyclic decrease and increase of the tidal height is alternately witnessed in both NM and FM, with a tidal fluctuation at 6-h intervals.
Tidal forcing of adjacent groundwater is a common feature in coastal environments and can be an important mechanism of pore water movement in saturated and intertidal zones. In general there should be a time lag between the tidal rise and the water level rise, but it is interesting to note that tide and water level increase simultaneously here, as the porosity of the alluvium is high and the sampling locations are very near to the coast. There is an increase of tide, water level fluctuation and EC at Devanampatnam and Rajapettai during the NM and FM around 08:00 h, which corresponds to a decrease of DO owing to the increase of seawater intrusion by tidal effect. The Tiyagavalli well is influenced by freshwater recharge, since its water level fluctuation corresponds to DO and does not correspond to an increase of EC. The angles of inclination clearly show that the rate of flow is higher during FM than NM at the Devanampatnam and Rajapettai wells, owing to the tidal effect, while the influence of the river creek is noted in the groundwater at Tiyagavalli through the increase of photosynthetic processes. The Tiyagavalli well departs from certain observations made at the other two stations; this may be due to the distance of this point of observation from the sea and the influence of the river creek. It is calculated that for every 1 m the angular variation at Devanampatnam is 2.847 × 10⁻⁵, at Rajapettai 3.794 × 10⁻⁵ and at Tiyagavalli 1.745 × 10⁻⁵.

## References

1. Ashtiani BA, Volker RE, Lockington DA (1999) Tidal effects on seawater intrusion in unconfined aquifers. J Hydrol 216:17–31
2. Ashtiani BA, Volker RE, Lockington DA (2001) Tidal effects on groundwater dynamics in unconfined aquifers. Hydrol Process 15:655–669
3. Barlow PM (2003) Groundwater in fresh water-saltwater environments of the Atlantic Coast. U.S. Geological Survey Circular 1262
4.
Cheng JM, Chen CX (2001) Three-dimensional modelling of density-dependent salt water intrusion in multilayered coastal aquifers in Jahe River Basin, Shandong Province, China. Groundwater 39(1):137–143
5. Chidambaram S, Ramanathan AL, Prasnna MV, Karmegam U, Dheivanayagi V, Ramesh R, Johnsonbabu G, Premchandar B, Manikandan S (2010) Study on the hydrogeochemical characteristics in groundwater, post- and pre-tsunami scenario, from Portnova to Pumpuhar, southeast coast of India. Environ Monit Assess 169(1):553–568
6. Davies OA (2014) Tidal influence on the physico-chemistry quality of Okpoka Creek, Nigeria. Int J Biol Sci Appl 1(3):113–123
7. Davies OA, Ugwumba OA (2013) Tidal influence on nutrients status and phytoplankton population of Okpoka Creek, upper Bonny estuary, Nigeria. J Mar Biol
8. Eriksson KA, Simpson EL, Mueller W (2006) An unusual fluvial to tidal transition in the Mesoarchean Moodies Group, South Africa: a response to high tidal range and active tectonics. Sedim Geol 190:13–24
9. Harvey JW, Odum WE (1990) The influence of tidal marshes on upland groundwater discharge to estuaries. Biogeochemistry 10(3):217–236
10. Harvey JW, Germann PF, Odum WE (1987) Geomorphological control of subsurface hydrology in the creekbank zone of tidal marshes. Estuar Coast Shelf Sci 25:677–691
11. Hughes CE, Binning P, Willgoose GR (1998) Characterisation of the hydrology of an estuarine wetland. J Hydrol 211:34–49
12. Kim KY, Seong H, Kim T, Park KH, Woo NC, Park YS, Koh GW, Park WB (2006) Tidal effects on variations of fresh-saltwater interface and groundwater flow in a multilayered coastal aquifer on a volcanic island (Jeju Island, Korea). J Hydrol 330:525–542
13. Kim KY, Chon CM, Park KH, Park YS, Woo NC (2008) Multi-depth monitoring of electrical conductivity and temperature of groundwater at a multilayered coastal aquifer: Jeju Island, Korea. Hydrol Process 22:3724–3733
14.
Li H, Boufadel MC, Weaver JW (2008) Tide-induced seawater-groundwater circulation in shallow beach aquifers. J Hydrol 352:211–224
15. Manoj NT, Unnikrishnan AS, Sundar D (2009) Tidal asymmetry in the Mandovi and Zuari estuaries, the west coast of India. J Coast Res 25(6):1187–1197
16. Mao X, Enot P, Barry DA, Li L, Binly A, Jeng DS (2006) Tidal influence on behaviour of a coastal aquifer adjacent to a low-relief estuary. J Hydrol 327:110–127
17. Marcello N, Mauro G (2007) Tidal, seiche and wind dynamics in a small lagoon in the Mediterranean Sea. Estuar Coast Shelf Sci 74:21–30
18. Mitra A, Mondal K, Banerjee K (2011) Spatial and tidal variations of physico-chemical parameters in the lower Gangetic delta region, West Bengal, India. J Spat Hydrol 11(1):52–59
19. Morris DA, Johnson AI (1967) Summary of hydrologic and physical properties of rock and soil materials as analysed by the Hydrologic Laboratory of the U.S. Geological Survey. US Geol Surv Water-Supply Paper 1839-D, p 42
20. Peter TS, Chandrasekar N, Selvakumar S, Kaliraj S, Magesh NS, Srinivas Y (2014) Tidal effects on estuarine water quality through a sandy marine beach: a case study in Vembar estuary, southeast coast of Tamil Nadu, India. J Coast Sci 1(1):6–14
21. Philip JR (1973) Periodic nonlinear diffusion: an integral relation and its physical consequences. Aust J Phys 26:513–519
22. Praveena SM, Siraj SS, Aris AZ, Al-Bakri NM, Suleiman AK, Zainal AA (2013) Assessment of tidal and anthropogenic impacts on coastal waters by exploratory data analysis: an example from Port Dickson, Strait of Malacca, Malaysia. Environ Forensics 14(2):146–154
23. Ramaswamy V, Nath BN, Vethamony P, Illangovan D (2007) Source and dispersal of suspended sediment in the macro-tidal Gulf of Kachchh. Mar Pollut Bull 54:708–719
24. Robinson MA, Gallagher DL (1993) A model of ground water discharge from an unconfined coastal aquifer. Ground Water 37(1):80–87
25.
Robinson MA, Gallagher DL, Reay WG (1998) Field observations of tidal and seasonal variations in ground water discharge to estuarine surface waters. Ground Water Monit Rem 18(1):83–92
26. Simmons GM, Von Schmidt-Pauli K, Waller J, Lemourex E (1988) The role of submarine groundwater discharge in transporting nutrient flux to coastal marine environments, Virginia waters: current developments. Water Resources Research Center, V.P.I, Blacksburg, p 52
27. Singaraja C (2011) Impact of tidal variation in shallow coastal groundwaters of Cuddalore District. Unpublished MPhil Thesis, p 147
28. Singaraja C, Chidambaram S, Srinivasamoorthy K, Thivya C, Thilagavthi R, Sarathidasan J (2014) Study on the groundwater quality using Watclast program, in coastal aquifers of Cuddalore district. Inventi Rapid Water Environ 1:1–5
29. Smith L, Zawadzki W (2003) A hydrogeologic model of submarine groundwater discharge: Florida intercomparison experiment. Biogeochemistry 66:95–110
30. Sridevi B, Ramana Murty TV, Sadhuram Y, Rao MMM, Maneesha K, Sujithkumar S, Prasanna PL (2010) Impact of internal waves on sound propagation off Bhimilipatnam, east coast of India. Estuar Coast Shelf Sci 88:249–259
31. Srinivas K, Dinesh Kumar PK (2002) Tidal and non-tidal sea level variations at adjacent ports on the southwest coast of India. Indian J Mar Sci 31(4):271–282
32. Vethamony P, Reddy GS, Babu MT, Desa E, Sudheesh K (2005) Tidal eddies in a semi-enclosed basin: a model study. Mar Environ Res 59:519–532
33. Wang T, Yang Z, Copping A (2015) A modeling study of the potential water quality impacts from in-stream tidal energy extraction. Estuar Coasts 38(1):173–186
34. Westbrook SJ, Rayner JL, Davis GB, Clement TP, Bjergd PL, Fisher SJ (2005) Interaction between shallow groundwater, saline surface water and contaminant discharge at a seasonally and tidally forced estuarine boundary. J Hydrol 302:255–269
35.
Wolaver TG, Zieman JC, Wetzel R, Webb KL (1983) Tidal exchange of nitrogen and phosphorus between a mesohaline vegetated marsh and the surrounding estuary in the lower Chesapeake Bay. Estuar Coast Shelf Sci 16:321–332
# Math Help - Rational Expression Help

1. ## Rational Expression Help

I'm having trouble with a problem off a review... I'll try my best to write it out.

Solve: 3/(x^2 - 6x + 9) + (x - 2)/(3x - 9) = x/(2x - 6)

First I factor all the denominators, get the LCD, then multiply each term by it to get rid of my fractions. But this is where I have a problem: I don't understand what the LCD should be. Should it be 6(x-3)(x-3)?

2. Is this the equation? $\displaystyle \frac{3}{x^2-6x+9}+\frac{x-2}{3x-9}= \frac{x}{2x-6}$

3. So you are trying to solve for $\displaystyle x$ in the equation $\displaystyle \frac{3}{x^2 - 6x + 9} + \frac{x-2}{3x - 9} = \frac{x}{2x-6}$. For starters, factor all the denominators: $\displaystyle \frac{3}{(x - 3)^2} + \frac{x - 2}{3(x - 3)} = \frac{x}{2(x - 3)}$. Now get a common denominator for both sides, and yes, you are correct that the lowest common denominator is $\displaystyle 6(x - 3)^2$.

4. OK... I multiply each term by 6(x-3)^2 and I get 18 + 2x - 6 = 3x - 9, then I try to solve for x and get 21... which isn't the right answer according to this answer sheet.

5. Originally Posted by gurrry
"OK... I multiply each term by 6(x-3)^2 and I get 18 + 2x - 6 = 3x - 9, then I try to solve for x and get 21... which isn't the right answer according to this answer sheet."
OK, for starters, don't jump to multiplying by the common denominator. I ALWAYS write the fractions with their common denominators first; it is clearer and reduces mistakes. So

$\displaystyle \frac{6\cdot 3}{6(x - 3)^2} + \frac{2(x - 3)(x - 2)}{6(x - 3)^2} = \frac{3x(x - 3)}{6(x - 3)^2}$

$\displaystyle \frac{18}{6(x - 3)^2} + \frac{2x^2 - 10x + 12}{6(x - 3)^2} = \frac{3x^2 - 9x}{6(x- 3)^2}$

$\displaystyle \frac{2x^2 - 10x + 30}{6(x - 3)^2} = \frac{3x^2 - 9x}{6(x - 3)^2}$

$\displaystyle 2x^2 - 10x + 30 = 3x^2 - 9x$.
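Finishing the algebra from the last displayed line: 2x² − 10x + 30 = 3x² − 9x rearranges to x² + x − 30 = 0, i.e. (x + 6)(x − 5) = 0, giving x = 5 or x = −6 (x = 3 is excluded, since it zeroes every denominator). A quick numeric check (a sketch, not part of the original thread) confirms both roots satisfy the original equation:

```python
import math

# Check the candidate roots of x^2 + x - 30 = 0 against the original
# rational equation 3/(x-3)^2 + (x-2)/(3x-9) = x/(2x-6).

def lhs(x):
    return 3 / (x - 3) ** 2 + (x - 2) / (3 * x - 9)

def rhs(x):
    return x / (2 * x - 6)

for root in (5, -6):
    assert math.isclose(lhs(root), rhs(root))
    print(root, lhs(root), rhs(root))
# x = 3 makes every denominator zero, so it could never be a solution.
```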
IIT Entrance Exam

1. If the driver is firmly held in a non-stretching seat belt harness: (a) 52500 N (b) 26250 N (c) 13125 N (d) can't be said

2. If the driver is not wearing a seatbelt and the stopping distance is determined by the nature of the collision with the windshield, steering column, etc.: (a) 262500 N (b) 131250 N (c) 65625 N (d) can't be said

3. If the driver is wearing a stretching seat belt harness (which increases the stopping distance by 50 %): (a) 35000 N (b) 17500 N (c) 8750 N (d) can't be said

Exams.Edurite.com Page : 1/3

4. The magnitude of the normal reaction from the ground is (a) equal to 2Mg (b) less than 2Mg (c) more than 2Mg (d) can't determine

5. The correct relation between acm and α will be (a) acm = √5/4 Rα (b) acm = 1/2 Rα (c) acm = Rα (d) None of these

6. If the ground is rough (instead of smooth), then the direction of the friction force will be (a) forward (b) backwards (c) depends upon the value of µ (d) none of these

7. In rectilinear motion of a particle along the x-axis, the velocity varies as v = a√x, where a is a positive constant. At t = 0 the particle is located at the origin. Statement-1: The mean velocity of the particle averaged over the time that the particle takes to cover the first S meters of the path is. Statement-2: In uniformly accelerated motion, the average velocity over a given duration is the arithmetic mean of the velocities at the initial and final points of that duration. (a) A (b) B (c) C (d) D

8. Statement-1: The principle of conservation of mechanical energy fails if non-conservative forces do work on the system. Statement-2: Non-conservative forces, if they do work, result in a change in mechanical energy. (a) A (b) B (c) C (d) D
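The comprehension passage that questions 1–3 refer to is missing from this extract; the physics behind them is the work–energy theorem, F = mv²/(2d). The values below (70 kg driver, 30 m/s impact speed, 0.6 m harness stopping distance, 0.12 m windshield stopping distance, and a 50% longer stretched-harness distance) are illustrative assumptions chosen here, not data from the missing passage; with them, the computed forces happen to equal option (a) of each question:

```python
# Average stopping force from the work-energy theorem: F = m*v^2 / (2*d).
# m, v and the stopping distances below are illustrative assumptions,
# since the original comprehension passage is missing from this extract.

def stopping_force(m, v, d):
    """Average force (N) needed to stop mass m (kg) at speed v (m/s) over d (m)."""
    return m * v * v / (2 * d)

m, v = 70.0, 30.0  # assumed driver mass and impact speed

print(stopping_force(m, v, 0.6))        # rigid harness
print(stopping_force(m, v, 0.12))       # windshield / steering column
print(stopping_force(m, v, 0.6 * 1.5))  # harness stretching 50% farther
```

Note how halving or doubling the stopping distance scales the force inversely, which is the pattern the three questions probe.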
https://www.physicsforums.com/threads/how-to-identify-the-curve-at-intersection-of-level-surfaces.797869/
# How to identify the curve at intersection of level surfaces

1. Feb 14, 2015

### RJLiberator

1. The problem statement, all variables and given/known data

Sketch a picture of the cone x = sqrt(y^2+z^2) and the elliptic paraboloid x = 2−y^2−z^2 on the same grid. Although the picture does not have to be perfect, indicate clearly the orientation of both figures relative to the coordinate axes. Identify the curve at the intersection of the surfaces.

2. Relevant equations

3. The attempt at a solution

What does it mean to identify the curve at the intersection? My thinking is that this means I have to set these two equations equal to each other and solve, resulting in some curve. Am I on the right track?

2. Feb 14, 2015

### Staff: Mentor

Yes.

3. Feb 14, 2015

### RJLiberator

Perfect. So I set 2−y^2−z^2 = sqrt(y^2+z^2). I then add y^2 and z^2 to both sides:

2 = sqrt(y^2+z^2) + y^2 + z^2.

Something tells me this needs to be simplified and is not a correct answer. Correct?

4. Feb 14, 2015

### RJLiberator

Ok, it seems I was being lazy, perhaps. So, I set them equal to each other in my first step here:

sqrt(y^2+z^2) = 2 − y^2 − z^2

Second step: I square both sides; the right-hand side gives me a long equation that I can deal with:

y^2 + z^2 = 4 − 4y^2 + y^4 − 4z^2 + 2y^2z^2 + z^4

Third step: algebraic manipulation to set the equation with respect to y or z:

5y^2 + 5z^2 = (y^2+z^2)^2 + 4
5(y^2+z^2) = (y^2+z^2)^2 + 4

Divide both sides by (y^2+z^2). I'm not sure if I can do this step... on the right-hand side I have the +4 in the numerator; my math intuition is telling me this cannot be simplified like I am doing, please let me know if I made an error:

5 = (y^2+z^2) + 4
1 − y^2 = z^2
z = sqrt(1 − y^2)

This should be the correct answer, according to Wolfram simplification.

5. Feb 15, 2015

### Staff: Mentor

It's legitimate to divide by y^2 + z^2, provided that y and z are not both zero. However, your work below is incorrect, because you didn't divide the constant term.
When you divide one side of an equation, you have to divide all terms.

I found a much simpler way to do this. Squaring both sides of $x = \sqrt{y^2 + z^2}$ yields $x^2 = y^2 + z^2$. In the new equation, we must have $x \ge 0$ because the square root produces only nonnegative numbers. The other equation is x = 2 − y^2 − z^2 = 2 − (y^2 + z^2). Replace y^2 + z^2 in the equation immediately above with x^2. Solve, keeping only the positive value for x.

6. Feb 15, 2015

### RJLiberator

I see what you did there, however: when we get to x = 2 − (y^2 + z^2) and substitute our x^2 = y^2 + z^2, it becomes x = 2 − x^2, which results in sqrt(2 − x) = x. Isn't this an incomplete result?

7. Feb 15, 2015

### RJLiberator

Anyone have any more insight in this problem?

8. Feb 16, 2015

### HallsofIvy

This is a quadratic equation. Surely you know how to solve x^2 + x − 2 = 0?

9. Feb 16, 2015

### RJLiberator

Ah, I see the light now. The quadratic equation represents the curve. Thank you kindly for the words.

10. Feb 17, 2015

### LCKurtz

What quadratic equation? I have yet to see in this thread where anyone has specifically said that the curve of intersection is a circle, nor have I seen an equation or parametric representation of it, which is the usual way to express a curve in 3d. Can you do that?

11. Feb 17, 2015

### RJLiberator

You are absolutely right. I jumped the gun and didn't reason: x^2 + x − 2 = 0. From x = [−1 ± sqrt(1 − 4·1·(−2))]/2 we get the solutions 1 and −2. So at x = 1 and x = −2 we have points of intersection. Am I correct up until this point?

12. Feb 17, 2015

### LCKurtz

So you have a couple of values of $x$. What do they represent? Do they help you solve the problem? What are you going to do next? What I am trying to get at is whether you have a plan for solving this problem. Are you making progress? Do you know whether you are making progress? Is the problem almost solved? Or are you just shuffling symbols around?

13. Feb 17, 2015

### RJLiberator

To be honest, this is the last part of the assignment that I am unsure of.
Most likely due to the way it is worded.

So we have the values x = 1 and x = −2. By the looks of the graph, it appears that these values represent the y-intercepts and z-intercepts for the equation x = 2 − y^2 − z^2.

I don't really have much else. I'm not quite sure how they help me solve the problem. I was hoping that they would be the intersection points, but it doesn't appear to be so. I have to believe that the problem is very close to being solved, but I do feel like I am merely shuffling symbols around :/. I imagine the curve must represent a circle of intersection points based on the graph.

14. Feb 17, 2015

### LCKurtz

You do have the graph of the two surfaces? Do both $x=1$ and $x=-2$ (remember, those are both equations of planes in 3d) look equally relevant to the intersection? Can you describe in words what the intersection curve must look like and where it is?

15. Feb 17, 2015

### RJLiberator

Assuming my graph is correct, here is my interpretation of the information: x = 1 and x = −2 are both planes. The x = 1 plane seems to be a major point of intersection. x = −2 seems to be of no relevance. Based on this, the curve must be a part of x = 1. I can't imagine that the answer is simply the plane x = 1; that doesn't make sense to me. So it must be something deeper. When x = 1 the curves intersect. But the y and z values can vary in proportion, so they must form some sort of circle of intersection. Does this mean that the radius must be 1 and the circle must be y^2 + z^2 = 1?

16. Feb 17, 2015

### LCKurtz

Now you are getting somewhere. Instead of guessing for that last question, check what happens when you put $x=1$ in both of your original equations.

17. Feb 17, 2015

### RJLiberator

1 = sqrt(y^2 + z^2); square both sides: 1 = y^2 + z^2.

1 = 2 − y^2 − z^2, so −1 = −y^2 − z^2, and again 1 = y^2 + z^2.

Aha. That's a thing of beauty. So what threw me off was the x = −2.
The −2 from the quadratic equation means nothing in this case, as it is not relevant due to the domain; however, using the value of 1 I can then plug it into both equations and get a curve of intersection. I appreciate you taking the extra time to teach me this.

18. Feb 17, 2015

### LCKurtz

A couple more things. The reason you got the extraneous $x=-2$ was that you squared one of the equations during the process of solving, which generated the extra root. So now you know the intersection is the circle $y^2+z^2=1$ in the plane $x=1$. That is an acceptable form for the answer. For extra credit, now let's see a parametric equation for it$$\vec R(t) = \langle x(t),y(t),z(t)\rangle = \langle ?,?,?\rangle$$Think "polar" coordinates.

19. Feb 17, 2015

### RJLiberator

The erroneous solution from the squaring makes sense.

Polar coordinates... hm, let's try: we know the radius is 1, so r cos(theta) and r sin(theta), with r being 1 and theta being an angle from 0 to 2pi. This doesn't make complete sense, since we have three dimensions, and I'm not sure how to proceed beyond this at this point.

<0, cos(theta), sin(theta)>

20. Feb 17, 2015

### LCKurtz

Call the variable theta instead of "t" if it makes you feel more comfortable and you like to type. Only two questions left:

1. Where is the left side and the = sign for the parametric equation?
2. Why did you put the x coordinate zero? Especially after all that work...
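As a quick numerical sanity check (our own sketch, not part of the thread; the function names are ours), every point of the circle ⟨1, cos θ, sin θ⟩ should satisfy both surface equations, while the extraneous root x = −2 satisfies the paraboloid but fails the cone:

```python
import math

def on_cone(x, y, z, tol=1e-9):
    # x = sqrt(y^2 + z^2)
    return abs(x - math.hypot(y, z)) < tol

def on_paraboloid(x, y, z, tol=1e-9):
    # x = 2 - y^2 - z^2
    return abs(x - (2 - y * y - z * z)) < tol

# every point of the circle y^2 + z^2 = 1 in the plane x = 1 lies on both surfaces
for i in range(360):
    theta = 2 * math.pi * i / 360
    assert on_cone(1.0, math.cos(theta), math.sin(theta))
    assert on_paraboloid(1.0, math.cos(theta), math.sin(theta))

# the extraneous root x = -2 lies on the paraboloid but not on the cone (x must be >= 0)
assert on_paraboloid(-2.0, 2.0, 0.0)
assert not on_cone(-2.0, 2.0, 0.0)
```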
http://www.purplemath.com/learning/viewtopic.php?t=3614
## Simplification of complex fractions.

Quadratic equations and inequalities, variation equations, function notation, systems of equations, etc.

### Simplification of complex fractions.

This is my attempt at simplification below:

$\frac{(8^x)(8^{3x})}{(2^{3x})(4^{x+2})}$ = $\frac{8^{4x}}{(2^{3x})(2^{2x})(2^4)}$ = $\frac{2^{12x}}{(2^{3x})(2^{2x})(2^4)}$ = $(2^{12x})(2^{-3x})(2^{-2x})(2^{-4})$ = $(2^{12x-3x-2x})(2^{-4})$ = $(2^{7x})(\frac{1}{16})$

The revision handbook gives the following as options:

1. $(2^{7x})(16)$
2. $\frac{2^{7x}}{16}$
3. $2^{17x+4}$
4. $\frac{64^{4x}}{8^{4x+2}}$
5. none of the above.

I'm second-guessing my calculation. Please could someone review my method?

Ian
Posts: 4
Joined: Tue Feb 25, 2014 8:02 am

### Re: Simplification of complex fractions.

Ian wrote:
> This is my attempt at simplification below: [...] I'm second-guessing my calculation. Please could someone review my method?

Yours is the same as #2.

buddy
Posts: 125
Joined: Sun Feb 22, 2009 10:05 pm

### Re: Simplification of complex fractions.

Voilà! I completely (ashamedly) missed that. Thanks.

Ian
Posts: 4
Joined: Tue Feb 25, 2014 8:02 am
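The simplification can also be spot-checked numerically (a quick sketch of ours; the function names are assumptions): the original expression and option #2, $2^{7x}/16$, should agree for any sample value of $x$.

```python
import math

def original(x):
    # (8^x * 8^(3x)) / (2^(3x) * 4^(x+2))
    return (8 ** x * 8 ** (3 * x)) / (2 ** (3 * x) * 4 ** (x + 2))

def simplified(x):
    # option #2 from the revision handbook: 2^(7x) / 16
    return 2 ** (7 * x) / 16

# the two expressions agree at many sample points, confirming option #2
for x in [-2.0, -0.5, 0.0, 1.0, 2.5]:
    assert math.isclose(original(x), simplified(x), rel_tol=1e-12)
```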
http://physics.stackexchange.com/questions/60418/are-electronic-wavefunctions-in-band-gap-insulators-localized-is-a-single-parti
# Are electronic wavefunctions in band gap insulators localized? is a single-particle picture sufficient in this case?

I am having trouble understanding the physics of band gap insulators. Usually in undergrad solid state physics one looks at non-interacting electrons in a periodic potential, with no disorder. Then, if the chemical potential lies in the gap between two bands, the material is insulating. At least in this derivation, the individual electronic wavefunctions composing the bands are not localized.

Do the electronic wavefunctions become localized in band gap insulators? If they are, is it because of interactions? I was thinking that perhaps, since screening is not effective in insulators, the role of interactions is increased, and therefore perhaps the entire non-interacting, single-particle picture used to construct the band structure breaks down. Similarly, an impurity potential will not be screened and could localize the states. So which is it?

---

The deep insight of Anderson is that the difference between insulators and conductors is not the energy spectrum. In fact the entire picture we are taught in introductory courses is highly misleading. [Note: Everything I am going to talk about will be about single-particle effects, so no interaction.]

First let's just remember the introductory picture. We have a perfect crystal, so we get energy bands. We fill those bands up with electrons. In the case when a band is partially filled, we get a conductor. In the case when all of our bands are completely occupied, so that the Fermi level lies in the gap, we get an insulator.

Now the problems: finite conductivity is entirely dependent on impurities. In the absence of impurities momentum is completely conserved. If I give the carriers any momentum, they will never lose it. Therefore a finite current can never dissipate, which is the same as saying the resistance is zero.
Since there will always be some carriers at any non-zero temperature, in the absence of impurities all materials will be "perfect conductors". So it is clear that to make any sense we need to add impurities. However, if we add impurities the nice energy band picture disappears. Since we just added random stuff to our Hamiltonian, there is no reason we shouldn't be able to find a state of any energy if we look hard enough. Obviously there will be more states in what used to be the bands, but there will also be states in the gap. In short, the bands will blur together.

But if the bands blur together then there is no longer any notion of a gap, so what could possibly separate insulators and conductors? It is not the electronic energy spectrum; it is the electronic wavefunctions themselves. Since there is no longer translational symmetry, these are not restricted to the Bloch form. There are two main possibilities:

1) The wavefunctions near the Fermi level are extended, i.e. their magnitude is roughly constant over the entire system, like a plane wave. This is a conductor.

2) The wavefunctions near the Fermi level are localized, i.e. their magnitude decays roughly exponentially as you go out from some point. This is an insulator.

This is what actually distinguishes insulators and conductors. Going back to the band gap classification of materials: why does it basically work? The reason is that if one adds disorder to a perfect crystal, the states that are added in the gap and near the band edges are usually localized states, so thinking about the gaps leads to the correct answer. But this is not the direct physical mechanism.

---

Interesting. I didn't downvote but I am curious if there are physics problems with this answer. Seriously downvoters: leave a comment explaining. @Bebop, could you cite any good literature on this?
Solid state is not my forte so I only followed you up to this: "there is no reason we shouldn't be able to find a state of any energy if we look hard enough." Of course impurities distort the bands, but it's not obvious to me that they blur them together in any serious way. Can't density of states/spectral functions be directly measured anyway? – Michael Brown Aug 18 '13 at 4:32

@MichaelBrown: It is clearest coming from the insulating side. Think of disorder as providing all these potential wells which have localized states. If your system is infinite and your disorder is not tuned in some weird way, then you will be able to find a potential somewhere hosting any energy level you want. When I couple the wells together the energy levels hybridize, but this won't change the fact that in an infinite system I can find a state with any energy I want. – BebopButUnsteady Aug 20 '13 at 16:04

From a more technical view, a random Gaussian potential just introduces a finite lifetime for all states. A state with a finite lifetime can be measured at any energy. Philosophically, it is another manifestation of "Everything which is not forbidden is mandatory". Once there is disorder there is no conserved momentum, so your system is just a big box of states, and there is no reason to expect a gap. Experimentally, states in the gap are measured all the time; sometimes they are called band tails, or impurity bands. Here is an RMP: rmp.aps.org/abstract/RMP/v64/i3/p755_1 – BebopButUnsteady Aug 20 '13 at 16:23

As to the downvote, I've also always been grumpy that this, of all my answers, has net downvotes. This is just Anderson localization, en.wikipedia.org/wiki/Anderson_localization, so it is hard to imagine what physics problems there are. I'm not sure what other references I can add. – BebopButUnsteady Aug 20 '13 at 16:42

Thanks. That clears it up. I've done what I can about the downvote. :) – Michael Brown Aug 20 '13 at 23:48

---

No, the electrons are not localized.
Insulation is an effect coming from energy/band-structure properties. Kohn-Sham orbitals (the orbitals in a band structure) are in general delocalized.
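Not part of the thread, but the extended-versus-localized distinction in the first answer can be illustrated numerically. The sketch below (our own, with assumed parameter values) estimates the Lyapunov exponent of the standard 1D Anderson tight-binding model from transfer-matrix products: zero disorder at an in-band energy gives a vanishing exponent (extended, Bloch-like states), while on-site disorder gives a positive exponent, i.e. exponentially localized wavefunctions.

```python
import math
import random

def lyapunov_exponent(n_sites, disorder, energy=0.0, seed=42):
    """Average growth rate of the transfer-matrix product for the 1D
    tight-binding chain psi_{n+1} = (E - eps_n) psi_n - psi_{n-1},
    with on-site energies eps_n uniform in [-disorder/2, disorder/2].
    A positive rate means states decay like exp(-gamma * n): localized."""
    rng = random.Random(seed)
    a, b = 1.0, 0.0          # state vector (psi_{n+1}, psi_n)
    log_growth = 0.0
    for _ in range(n_sites):
        eps = rng.uniform(-disorder / 2, disorder / 2)
        a, b = (energy - eps) * a - b, a      # one transfer-matrix step
        norm = math.hypot(a, b)
        log_growth += math.log(norm)
        a, b = a / norm, b / norm             # renormalize to avoid overflow
    return log_growth / n_sites

clean = lyapunov_exponent(50_000, disorder=0.0)   # perfect crystal, E = 0 is inside the band
dirty = lyapunov_exponent(50_000, disorder=3.0)   # disordered chain
assert abs(clean) < 1e-9   # extended states: no exponential growth
assert dirty > 0.02        # localized states: finite inverse localization length
```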
http://math.stackexchange.com/questions/120244/upper-bound-on-sum-of-binomial-coefficients
# Upper bound on sum of binomial coefficients

I have been trying to prove the following limit:

$\displaystyle\lim_{n\to \infty}\dfrac{\binom{n}{n/2}+\cdots+\binom{n}{n/2+\sqrt{n}}}{2^n}=0$

using the Chernoff bounds for $\displaystyle\sum_{i=0}^{n/2+\sqrt{n}}\binom{n}{i}$; then I suppose the sum of the first $n/2$ terms should be easy to calculate and subtract. But thus far I have had no success in proving this. I've thought of using Stirling's formula for each binomial coefficient, but then I see that that may not lead to anything. I would appreciate it if you could give me a hint to start my proof.

---

Let $X_n$ be a symmetric binomial random variable, i.e. $X_n \sim \operatorname{B}(n,1/2)$:

$$\mathbb{P}(X_n = k) = \frac{1}{2^n} \binom{n}{k} \times [ 0 \leqslant k \leqslant n]$$

The mean and the variance of $X_n$ are well known:

$$\mathbb{E}(X_n) = \frac{n}{2} \qquad \mathbb{Var}(X_n) = \frac{n}{4}$$

By the central limit theorem, the random variable $Z_n$ defined as

$$Z_n = \frac{X_n - n/2}{\sqrt{n/4}} = \frac{2 X_n - n}{\sqrt{n}}$$

converges in distribution to the standard normal random variable. Therefore:

$$\sum_{n/2 \leqslant k \leqslant n/2 +\sqrt{n}} \binom{n}{k} \frac{1}{2^n} = \mathbb{P}\left( \frac{n}{2} \leqslant X_n \leqslant \frac{n}{2} + \sqrt{n} \right) = \mathbb{P}\left( 0 \leqslant \frac{2 X_n - n}{\sqrt{n} } \leqslant 2 \right) \stackrel{n\to \infty}{\longrightarrow} \Phi(2) - \Phi(0)$$

With $\Phi(0) = \frac{1}{2}$ this is exactly what Robert stated in his answer. By the De Moivre-Laplace theorem, the limit is not zero: it is $\Phi(2) - 1/2$, where $\Phi$ is the standard normal cumulative distribution function.
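The answer's conclusion can be checked numerically (a small sketch of ours): with $\Phi(2)-\Phi(0) = \tfrac12\operatorname{erf}(\sqrt 2) \approx 0.477$, the normalized sums should approach that constant rather than 0.

```python
from math import comb, erf, isqrt, sqrt

def normalized_sum(n):
    """sum_{k = n/2}^{n/2 + sqrt(n)} C(n, k) / 2^n, for even n."""
    half = n // 2
    return sum(comb(n, k) for k in range(half, half + isqrt(n) + 1)) / 2 ** n

# Phi(2) - Phi(0) = (1/2) * erf(sqrt(2)), about 0.4772
limit = 0.5 * erf(sqrt(2))

# the ratio stays near the limit instead of tending to 0
assert 0.4 < normalized_sum(400) < 0.55
assert abs(normalized_sum(10_000) - limit) < 0.01
```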
http://www.tankonyvtar.hu/en/tartalom/tamop412A/2011_0025_mat_14/ch04s03.html
## Convex Geometry

Csaba Vincze (2013) University of Debrecen

## 4.3 A sandwich theorem

Theorem 4.3.1 Let B be a family of parallel compact segments with different supporting lines in the coordinate plane such that any three segments have a common transversal line. Then there exists a line transversal to all the members of B.

Proof. Without loss of generality we can suppose that all the segments are parallel to the second coordinate axis, labelled by y. Consider such a segment with endpoints (a,r) and (a,s), where r < s, and let

$y=mx+b$ (4.12)

be a line intersecting this segment. Then the common point has second coordinate ma+b. Therefore

$r\le ma+b\le s,$

showing that

$-ma+r\le b\le -ma+s.$ (4.13)

Let us define the parallel lines

$b=-ma+r \quad \textrm{and} \quad b=-ma+s$ (4.14)

corresponding to the endpoints of the segment, and consider the point p with coordinates (m,b) corresponding to the line 4.12. Inequalities 4.13 show that p is an element of the band bounded by the parallel lines 4.14. Therefore we can reformulate our condition in the following way: we have a collection of bands such that any three bands have a common point. The goal is to prove that all of them have a common point. Since the segments have different supporting lines, it is easy to create a compact convex set K in the family we are interested in. Actually, the intersection of finitely many non-parallel bands is a convex polygon, as the intersection of finitely many closed half-planes; see chapter 9. Then the corresponding version 4.1.7 of Helly's theorem implies the existence of a common point of the bands, and we also have a line intersecting all segments in B.

Remark Theorem 4.3.1 plays an important role in the theory of approximation of continuous functions with polynomials. In what follows we show another application resulting in a sandwich theorem [44].
The result presents necessary and sufficient conditions under which the graphs of two functions can be separated by a straight line (functions having lines as graphs are called affine functions).

Theorem 4.3.2 (K. Nikodem and Sz. Wasowicz) Let f and g be real functions defined on a real interval I. There exists an affine function h satisfying the inequalities

$f\le h\le g$

if and only if

$f\left(\lambda x+\left(1-\lambda \right)y\right)\le \lambda g\left(x\right)+\left(1-\lambda \right)g\left(y\right)$ (4.15)

and

$g\left(\lambda x+\left(1-\lambda \right)y\right)\ge \lambda f\left(x\right)+\left(1-\lambda \right)f\left(y\right)$ (4.16)

hold for any x, y from I and λ between 0 and 1.

Proof. Since affine functions preserve the affine (especially convex) combinations of the elements, it is obvious that if an affine function h is between f and g then conditions 4.15 and 4.16 are also satisfied for any x, y from I and λ between 0 and 1.

Figure 32: The proof of the sandwich theorem.

To prove the converse of the statement, first of all note that f(x) is less than or equal to g(x); this can be easily seen by the substitution λ=1. Consider now the set of segments with endpoints (x, f(x)) and (x, g(x)) as x runs through the elements of the interval I. These are parallel compact segments with different supporting lines in the coordinate plane. To finish the proof we are going to show that this collection of segments satisfies the condition of the previous theorem. Let x(1) < x(2) < x(3) be three different points in I and consider the coefficient λ such that x(2) = λx(1) + (1 − λ)x(3). Using the notations y(i) = f(x(i)) and z(i) = g(x(i)), condition 4.15 says that (x(2), y(2)) is under the line of (x(1), z(1)) and (x(3), z(3)). At the same time, by condition 4.16, (x(2), z(2)) is above the line of (x(1), y(1)) and (x(3), y(3)). These conditions obviously guarantee the existence of a common transversal to the segments at x(1), x(2) and x(3), respectively.
Finally, the previous theorem shows the existence of a common transversal to all the segments as well. This is just the graph of an affine function h between f and g, as was to be proved.

Corollary 4.3.3 If a convex function majorizes a concave one then there exists an affine function between them.

Remark Necessary and sufficient conditions for the existence of separation by members of a given linear interpolation family can be found in [46]: the proof is also based on Helly's theorem.
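A concrete instance of Corollary 4.3.3 can be checked numerically (our own example, not from the text): the concave $f(x)=1-x^2$ is majorized by the convex $g(x)=1+x^2$, the affine $h(x)=1$ separates them, and conditions (4.15) and (4.16) hold on sampled triples.

```python
def f(x):  # concave
    return 1 - x * x

def g(x):  # convex, majorizes f everywhere
    return 1 + x * x

def h(x):  # affine separator, f <= h <= g
    return 1.0

xs = [i / 10 for i in range(-30, 31)]
for x in xs:
    assert f(x) <= h(x) <= g(x)

# conditions (4.15) and (4.16) on sampled x, y and lambda
for x in xs[::6]:
    for y in xs[::6]:
        for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
            m = lam * x + (1 - lam) * y
            assert f(m) <= lam * g(x) + (1 - lam) * g(y) + 1e-12   # (4.15)
            assert g(m) >= lam * f(x) + (1 - lam) * f(y) - 1e-12   # (4.16)
```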
http://mathhelpforum.com/calculus/132513-evaluate-improper-integral-help.html
# Math Help - Evaluate an improper integral - help!

1. ## Evaluate an improper integral - help!

Evaluate the improper integral from 0 to infinity:

∫ 5t/(t^2 + 1)^(3/2) dt

I am having a really hard time figuring this one out. Thanks for your help!

2. What is the derivative of $\frac{5}{\sqrt{t^2+1}}?$

3. Originally Posted by Plato

What is the derivative of $\frac{5}{\sqrt{t^2+1}}?$

Is it -2(t^2+1)^-3? Or am I totally off.

To evaluate the corresponding indefinite integral, substitute $u=t^2+1$.
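Following Plato's hint: an antiderivative of $5t/(t^2+1)^{3/2}$ is $-5/\sqrt{t^2+1}$ (the negative of the hinted function), so the improper integral evaluates to $0-(-5)=5$. A quick numerical sketch (our own, not from the thread) confirms both facts:

```python
import math

def integrand(t):
    return 5 * t / (t * t + 1) ** 1.5

def antiderivative(t):
    # F(t) = -5 / sqrt(t^2 + 1), so F'(t) = 5t / (t^2 + 1)^(3/2)
    return -5 / math.sqrt(t * t + 1)

# central-difference check that F' matches the integrand
h = 1e-6
for t in (0.0, 0.5, 1.0, 3.0, 10.0):
    fd = (antiderivative(t + h) - antiderivative(t - h)) / (2 * h)
    assert abs(fd - integrand(t)) < 1e-5

# F(b) -> 0 as b -> infinity, so the integral is F(inf) - F(0) = 0 - (-5) = 5
assert abs((antiderivative(1e9) - antiderivative(0.0)) - 5.0) < 1e-6
```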
https://en.wikipedia.org/wiki/Chromatic_polynomial
# Chromatic polynomial

All non-isomorphic graphs on 3 vertices and their chromatic polynomials, clockwise from the top. The independent 3-set: $k^3$. An edge and a single vertex: $k^2(k-1)$. The 3-path: $k(k-1)^2$. The 3-clique: $k(k-1)(k-2)$.

The chromatic polynomial is a polynomial studied in algebraic graph theory, a branch of mathematics. It counts the number of graph colorings as a function of the number of colors and was originally defined by George David Birkhoff to attack the four color problem. It was generalised to the Tutte polynomial by H. Whitney and W. T. Tutte, linking it to the Potts model of statistical physics.

## History

George David Birkhoff introduced the chromatic polynomial in 1912, defining it only for planar graphs, in an attempt to prove the four color theorem. If $P(G, k)$ denotes the number of proper colorings of G with k colors, then one could establish the four color theorem by showing $P(G, 4)>0$ for all planar graphs G. In this way he hoped to apply the powerful tools of analysis and algebra for studying the roots of polynomials to the combinatorial coloring problem.

Hassler Whitney generalised Birkhoff's polynomial from the planar case to general graphs in 1932. In 1968 Read asked which polynomials are the chromatic polynomials of some graph, a question that remains open, and introduced the concept of chromatically equivalent graphs. Today, chromatic polynomials are one of the central objects of algebraic graph theory.[1]

## Definition

All proper vertex colorings of graphs with 3 vertices using k colors, for $k=0,1,2,3$. The chromatic polynomial of each graph interpolates through the number of proper colorings.

The chromatic polynomial of a graph G counts the number of its proper vertex colorings. It is commonly denoted $P_G(k)$, $\chi_G(k)$, $\pi_G(k)$, or $P(G, k)$, which we will use from now on.

For example, the path graph $P_3$ on 3 vertices cannot be colored at all with 0 or 1 colors.
With 2 colors, it can be colored in 2 ways. With 3 colors, it can be colored in 12 ways.

| Available colors $k$ | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| Number of colorings $P(P_3, k)$ | 0 | 0 | 2 | 12 |

For a graph G with n vertices, the chromatic polynomial is defined as the unique interpolating polynomial of degree at most n through the points

$\left \{ (0, P(G, 0)), (1, P(G, 1)), \cdots, (n, P(G, n)) \right \}.$

If G does not contain any vertex with a self-loop, then the chromatic polynomial is a monic polynomial of degree exactly n. In fact, for the above example we have:

$P(P_3, t)= t(t-1)^2, \qquad P(P_3, 3)=12.$

The chromatic polynomial includes at least as much information about the colorability of G as does the chromatic number. Indeed, the chromatic number is the smallest positive integer that is not a root of the chromatic polynomial,

$\chi (G)=\min\{ k : P(G, k) > 0 \}.$

## Examples

| Graph | Chromatic polynomial |
|---|---|
| Triangle $K_3$ | $t(t-1)(t-2)$ |
| Complete graph $K_n$ | $t(t-1)(t-2)\cdots(t-(n-1))$ |
| Path graph $P_n$ | $t(t-1)^{n-1}$ |
| Any tree on n vertices | $t(t-1)^{n-1}$ |
| Cycle $C_n$ | $(t-1)^n+(-1)^n(t-1)$ |
| Petersen graph | $t(t-1)(t-2) \left (t^7-12t^6+67t^5-230t^4+529t^3-814t^2+775t-352 \right)$ |

## Properties

For fixed G on n vertices, the chromatic polynomial $P(G, t)$ is in fact a polynomial of degree n. By definition, evaluating the chromatic polynomial at k yields the number $P(G, k)$ of k-colorings of G for $k=0, 1,\cdots, n$. The same holds for k > n.

The expression $(-1)^{|V(G)|} P(G,-1)$ yields the number of acyclic orientations of G.[2]

The derivative evaluated at 1, $P'(G, 1)$, equals the chromatic invariant $\theta(G)$ up to sign.

If G has n vertices, m edges, and c components $G_1, \cdots, G_c$, then

• The coefficients of $t^0, \cdots, t^{c-1}$ are zeros.
• The coefficients of $t^c, \cdots, t^n$ are all non-zero.
• The coefficient of $t^n$ in $P(G, t)$ is 1.
• The coefficient of $t^{n-1}$ in $P(G, t)$ is $-m$.
• The coefficients of every chromatic polynomial alternate in signs.
• The absolute values of the coefficients of every chromatic polynomial form a log-concave sequence.[3]
• $P(G, t) = P(G_1, t)P(G_2,t) \cdots P(G_c,t)$

A graph G with n vertices is a tree if and only if

$P(G, t) = t(t-1)^{n-1}.$

### Chromatic equivalence

The three graphs with a chromatic polynomial equal to $(x-2)(x-1)^3x$.

Two graphs are said to be chromatically equivalent if they have the same chromatic polynomial. Isomorphic graphs have the same chromatic polynomial, but non-isomorphic graphs can be chromatically equivalent. For example, all trees on n vertices have the same chromatic polynomial, $(x-1)^{n-1}x$; in particular, $(x-1)^3x$ is the chromatic polynomial of both the claw graph and the path graph on 4 vertices.

### Chromatic uniqueness

A graph is chromatically unique if it is determined by its chromatic polynomial, up to isomorphism. In other words, if G is chromatically unique, then $P(G, t) = P(H, t)$ implies that G and H are isomorphic. All cycle graphs are chromatically unique.[4]

### Chromatic roots

A root (or zero) of a chromatic polynomial, called a "chromatic root", is a value x where $P(G, x)=0$. Chromatic roots have been very well studied; in fact, Birkhoff's original motivation for defining the chromatic polynomial was to show that for planar graphs, $P(G, x)>0$ for x ≥ 4, which would have established the four color theorem.

No graph can be 0-colored, so 0 is always a chromatic root. Only edgeless graphs can be 1-colored, so 1 is a chromatic root of every graph with at least one edge. On the other hand, except for these two points, no graph can have a chromatic root at a real number smaller than or equal to 32/27.
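These facts are easy to check by brute force. The sketch below (our own code; the graph encoding and function names are assumptions of the example, not from the article) counts proper colorings directly, reproducing the $P_3$ table from the definition and exhibiting the chromatic roots at 0 and 1:

```python
from itertools import product

def count_colorings(n, edges, k):
    """Count proper vertex colorings of a graph with n vertices
    and the given edge list, using k available colors."""
    total = 0
    for coloring in product(range(k), repeat=n):
        if all(coloring[u] != coloring[v] for u, v in edges):
            total += 1
    return total

# The path graph P_3: vertices 0-1-2 with edges (0,1) and (1,2).
path3 = [(0, 1), (1, 2)]
values = [count_colorings(3, path3, k) for k in range(4)]
print(values)  # [0, 0, 2, 12] — matches the table, i.e. k(k-1)^2
```

The zero counts at $k=0$ and $k=1$ are exactly the chromatic roots 0 and 1 discussed above; of course this enumeration takes $O(k^n)$ time and is only a sanity check, not an algorithm.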
A result of Tutte connects the golden ratio $\phi$ with the study of chromatic roots, showing that chromatic roots exist very close to $\phi^2$: if $G_n$ is a planar triangulation of a sphere, then

$P(G_n,\phi^2) \leq \phi^{5-n}.$

While the real line thus has large parts that contain no chromatic roots for any graph, every point in the complex plane is arbitrarily close to a chromatic root, in the sense that there exists an infinite family of graphs whose chromatic roots are dense in the complex plane.[5]

## Algorithms

Chromatic polynomial
• Input: Graph G with n vertices.
• Output: Coefficients of $P(G, t)$
• Running time: $O(2^n n^r)$ for some constant $r$
• Complexity: #P-hard
• Reduction from: #3SAT

#k-colorings
• Input: Graph G with n vertices.
• Output: $P(G, k)$
• Running time: In P for $k=0,1,2$; $O(1.6262^n)$ for $k=3$; otherwise $O(2^n n^r)$ for some constant $r$
• Complexity: #P-hard unless $k=0,1,2$
• Approximability: No FPRAS for $k>2$

Computational problems associated with the chromatic polynomial include

• finding the chromatic polynomial $P(G, t)$ of a given graph G;
• evaluating $P(G, k)$ at a fixed point k for given G.

The first problem is more general: if we knew the coefficients of $P(G, t)$, we could evaluate it at any point in polynomial time, since the degree is n. The difficulty of the second type of problem depends strongly on the value of k and has been intensively studied in computational complexity. When k is a natural number, this problem is normally viewed as computing the number of k-colorings of a given graph. For example, this includes the problem #3-coloring of counting the number of 3-colorings, a canonical problem in the study of complexity of counting, complete for the counting class #P.

### Efficient algorithms

For some basic graph classes, closed formulas for the chromatic polynomial are known. For instance, this is true for trees and cliques, as listed in the table above.
Polynomial time algorithms are known for computing the chromatic polynomial for wider classes of graphs, including chordal graphs[6] and graphs of bounded clique-width.[7] The latter class includes cographs and graphs of bounded tree-width, such as outerplanar graphs.

### Deletion–contraction

A recursive way of computing the chromatic polynomial is based on edge contraction: for a pair of vertices $u$ and $v$, the graph $G/uv$ is obtained by merging the two vertices and removing any edges between them. Then the chromatic polynomial satisfies the recurrence relation

$P(G,k)=P(G-uv, k)- P(G/uv,k)$

where $u$ and $v$ are adjacent vertices and $G-uv$ is the graph with the edge $uv$ removed. Equivalently,

$P(G,k)= P(G+uv, k) + P(G/uv,k)$

if $u$ and $v$ are not adjacent and $G+uv$ is the graph with the edge $uv$ added. In the first form, the recurrence terminates in a collection of empty graphs; in the second form, it terminates in a collection of complete graphs. These recurrences are also called the Fundamental Reduction Theorem.[8] Tutte's curiosity about which other graph properties satisfied such recurrences led him to discover a bivariate generalization of the chromatic polynomial, the Tutte polynomial.

The expressions give rise to a recursive procedure, called the deletion–contraction algorithm, which forms the basis of many algorithms for graph coloring.
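As an illustration, the first recurrence translates directly into a recursive evaluator for $P(G, k)$. This is a naive sketch of our own (graph representation and names are assumptions of the example), with none of the branch-and-bound refinements mentioned below:

```python
def chromatic_value(vertices, edges, k):
    """Evaluate P(G, k) by deletion-contraction.

    vertices: a set of hashable vertex labels
    edges: a set of frozenset({u, v}) pairs (simple graph)
    """
    if not edges:
        # Base case: an empty graph on n vertices has k^n colorings.
        return k ** len(vertices)
    e = next(iter(edges))
    u, v = tuple(e)
    # Deletion: remove the edge e.
    deleted = edges - {e}
    # Contraction: merge v into u, dropping loops created along the way
    # (parallel edges collapse automatically in the set representation).
    contracted_vertices = vertices - {v}
    contracted_edges = set()
    for f in deleted:
        f = frozenset(u if w == v else w for w in f)
        if len(f) == 2:
            contracted_edges.add(f)
    return (chromatic_value(vertices, deleted, k)
            - chromatic_value(contracted_vertices, contracted_edges, k))

# Sanity check against the closed forms in the table above:
tri_v = {0, 1, 2}
tri_e = {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})}
print(chromatic_value(tri_v, tri_e, 3))  # triangle at k = 3: 3*2*1 = 6
```

Each call removes one edge, so the recursion tree has the exponential size discussed in the running-time analysis that follows.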
The ChromaticPolynomial function in the computer algebra system Mathematica uses the second recurrence if the graph is dense, and the first recurrence if the graph is sparse.[9] The worst case running time of either formula satisfies the same recurrence relation as the Fibonacci numbers, so in the worst case, the algorithm runs in time within a polynomial factor of

$\phi^{n+m}=\left (\frac{1+\sqrt{5}}{2} \right)^{n+m}\in O\left(1.62^{n+m}\right)$

on a graph with n vertices and m edges.[10] The analysis can be improved to within a polynomial factor of the number $t(G)$ of spanning trees of the input graph.[11] In practice, branch and bound strategies and graph isomorphism rejection are employed to avoid some recursive calls; the running time depends on the heuristic used to pick the vertex pair.

### Cube method

There is a natural geometric perspective on graph colorings: as an assignment of natural numbers to each vertex, a graph coloring is a vector in the integer lattice. Since two vertices $i$ and $j$ being given the same color is equivalent to the $i$'th and $j$'th coordinates in the coloring vector being equal, each edge can be associated with a hyperplane of the form $\{x\in R^d:x_i=x_j\}$. The collection of such hyperplanes for a given graph is called its graphic arrangement. The proper colorings of a graph are those lattice points which avoid the forbidden hyperplanes. Restricting to a set of $k$ colors, the lattice points are contained in the cube $[0,k]^n$. In this context the chromatic polynomial counts the number of lattice points in the $[0,k]$-cube that avoid the graphic arrangement.

### Computational complexity

The problem of computing the number of 3-colorings of a given graph is a canonical example of a #P-complete problem, so the problem of computing the coefficients of the chromatic polynomial is #P-hard. Similarly, evaluating $P(G, 3)$ for given G is #P-complete.
On the other hand, for $k=0,1,2$ it is easy to compute $P(G, k)$, so the corresponding problems are polynomial-time computable. For integers $k>3$ the problem is #P-hard, which is established similarly to the case $k=3$. In fact, it is known that computing $P(G, x)$ is #P-hard for all x (including negative integers and even all complex numbers) except for the three "easy points".[12] Thus, from the perspective of #P-hardness, the complexity of computing the chromatic polynomial is completely understood.

In the expansion

$P(G, t)= a_1 t + a_2t^2+\dots +a_nt^n,$

the coefficient $a_n$ is always equal to 1, and several other properties of the coefficients are known. This raises the question of whether some of the coefficients are easy to compute. However, the computational problem of computing $a_r$ for a fixed r and a given graph G is #P-hard.[13]

No approximation algorithms for computing $P(G, x)$ are known for any x except for the three easy points. At the integer points $k=3,4,\dots$, the corresponding decision problem of deciding if a given graph can be k-colored is NP-hard. Such problems cannot be approximated to any multiplicative factor by a bounded-error probabilistic algorithm unless NP = RP, because any multiplicative approximation would distinguish the values 0 and 1, effectively solving the decision version in bounded-error probabilistic polynomial time. In particular, under the same assumption, this rules out the possibility of a fully polynomial time randomised approximation scheme (FPRAS). For other points, more complicated arguments are needed, and the question is the focus of active research. As of 2008, it is known that there is no FPRAS for computing $P(G, x)$ for any x > 2, unless NP = RP holds.[14]

## References

• Biggs, N. (1993), Algebraic Graph Theory, Cambridge University Press, ISBN 0-521-45897-8
• Chao, C.-Y.; Whitehead, E. G. (1978), "On chromatic equivalence of graphs", Theory and Applications of Graphs, Lecture Notes in Mathematics 642, Springer, pp.
121–131, ISBN 978-3-540-08666-6 • Dong, F. M.; Koh, K. M.; Teo, K. L. (2005), Chromatic polynomials and chromaticity of graphs, World Scientific Publishing Company, ISBN 981-256-317-2 • Giménez, O.; Hliněný, P.; Noy, M. (2005), "Computing the Tutte polynomial on graphs of bounded clique-width", Proc. 31st Int. Worksh. Graph-Theoretic Concepts in Computer Science (WG 2005), Lecture Notes in Computer Science 3787, Springer-Verlag, pp. 59–68, doi:10.1007/11604686_6 • Goldberg, L.A.; Jerrum, M. (2008), "Inapproximability of the Tutte polynomial", Information and Computation 206 (7): 908, doi:10.1016/j.ic.2008.04.003 • Huh, J. (2012), Milnor numbers of projective hypersurfaces and the chromatic polynomial of graphs, arXiv:1008.4749v3 • Jaeger, F.; Vertigan, D. L.; Welsh, D. J. A. (1990), "On the computational complexity of the Jones and Tutte polynomials", Mathematical Proceedings of the Cambridge Philosophical Society 108: 35–53, doi:10.1017/S0305004100068936 • Linial, N. (1986), "Hard enumeration problems in geometry and combinatorics", SIAM J. Algebraic Discrete Methods 7 (2): 331–335, doi:10.1137/0607036 • Makowsky, J. A.; Rotics, U.; Averbouch, I.; Godlin, B. (2006), "Computing graph polynomials on graphs of bounded clique-width", Proc. 32nd Int. Worksh. Graph-Theoretic Concepts in Computer Science (WG 2006), Lecture Notes in Computer Science 4271, Springer-Verlag, pp. 191–204, doi:10.1007/11917496_18 • Naor, J.; Naor, M.; Schaffer, A. (1987), "Fast parallel algorithms for chordal graphs", Proc. 19th ACM Symp. Theory of Computing (STOC '87), pp. 355–364, doi:10.1145/28395.28433. • Oxley, J. G.; Welsh, D. J. A. (2002), "Chromatic, flow and reliability polynomials: The complexity of their coefficients.", Combinatorics, Probability, and Computing 11 (4): 403–426 • Pemmaraju, S.; Skiena, S. 
(2003), Computational Discrete Mathematics: Combinatorics and Graph Theory with Mathematica, Cambridge University Press, section 7.4.2, ISBN 0-521-80686-0 • Sekine, K.; Imai, H.; Tani, S. (1995), "Computing the Tutte polynomial of a graph of moderate size", Algorithms and Computation, 6th International Symposium, Lecture Notes in Computer Science 1004, Cairns, Australia, December 4–6, 1995: Springer, pp. 224–233 • Sokal, A. D. (2004), "Chromatic Roots are Dense in the Whole Complex Plane", Combinatorics, Probability and Computing 13 (2): 221–261, doi:10.1017/S0963548303006023 • Stanley, R. P. (1973), "Acyclic orientations of graphs", Disc. Math. 5 (2): 171–178, doi:10.1016/0012-365X(73)90108-8 • Voloshin, Vitaly I. (2002), Coloring Mixed Hypergraphs: Theory, Algorithms and Applications., American Mathematical Society, ISBN 0-8218-2812-6 • Wilf, H. S. (1986), Algorithms and Complexity, Prentice–Hall, ISBN 0-13-021973-8
http://mathhelpforum.com/advanced-algebra/85411-homomorphism-7-a.html
# Homomorphism 7

1. ## Homomorphism 7

If m is a homomorphism of G onto G' and N is a normal subgroup of G, show that m(N) is a normal subgroup of G'.

I will give you a hint for this problem. In general, if $m: G\to G'$ is a homomorphism and $N$ is a normal subgroup of $G$, then $m(N)$ is a normal subgroup of $m(G)$. Prove this version. Then note that if $m$ is onto, then $m(G) = G'$ and so $m(N)$ is a normal subgroup of $G'$.
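In case it helps, here is the hinted argument written out as a LaTeX sketch (the notation is ours, following the thread):

```latex
% Claim: if N is normal in G, then m(N) is normal in m(G).
% Take x = m(n) with n in N, and y = m(g) with g in G; then
\[
  y\,x\,y^{-1} \;=\; m(g)\,m(n)\,m(g)^{-1} \;=\; m\!\left(g n g^{-1}\right) \in m(N),
\]
% since g n g^{-1} lies in N by the normality of N in G.
% When m is onto, m(G) = G', so m(N) is normal in G'.
```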
https://getrevising.co.uk/revision-cards/p6_revision_cards_2
# P6 - Revision Cards

Waves and the electromagnetic spectrum

• Created by: lara
• Created on: 06-04-11 09:08

## What are waves?

Light and sound travel as waves. There are two types of wave - transverse and longitudinal. Waves can be described by their amplitude, wavelength and frequency. The speed of a wave can be calculated from its frequency and wavelength.

Waves are vibrations that transfer energy from place to place without matter (solid, liquid or gas) being transferred.

Some waves must travel through a substance. The substance is known as the medium, and it can be solid, liquid or gas - sound waves are like this. Other waves do not need to travel through a substance. Visible light, infrared rays, microwaves and other types of electromagnetic radiation are like this. They can travel through empty space. Electrical or magnetic fields vibrate as the waves travel through.

## Transverse

Light and other types of electromagnetic radiation are transverse waves. Water waves and S waves (a type of seismic wave) are also transverse waves. In transverse waves, the vibrations are at right angles to the direction of travel.

## Longitudinal

Sound waves and waves in a stretched spring are longitudinal waves. P waves (a type of seismic wave) are also longitudinal waves. In longitudinal waves, the vibrations are along the same direction as the direction of travel.

## Amplitude, wavelength and frequency

As waves travel, they set up patterns of disturbance. The amplitude of a wave is its maximum disturbance from its undisturbed position. Take care: the amplitude is not the distance between the top and bottom of a wave; it is the distance from the middle to the top.

The wavelength of a wave is the distance between a point on one wave and the same point on the next wave. It is often easiest to measure this from the crest of one wave to the crest of the next wave, but it doesn't matter where, as long as it is the same point in each wave.
The frequency of a wave is the number of waves produced by a source each second. It is also the number of waves that pass a certain point each second. The unit of frequency is the hertz (Hz). For example, most people cannot hear a high-pitched sound above 20 kHz, radio stations broadcast radio waves with frequencies of about 100 MHz, and most wireless computer networks operate at 2.4 GHz.

## How fast do waves travel?

The speed of a wave - its wave speed (metres per second, m/s) - is related to its frequency (hertz, Hz) and wavelength (metres, m) according to this equation:

wave speed = frequency x wavelength

For example, a wave with a frequency of 100 Hz and a wavelength of 2 m travels at 100 x 2 = 200 m/s. The speed of a wave does not usually depend on its frequency or its amplitude.
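The equation can be checked with a few lines of Python (a sketch; the function name is our own):

```python
def wave_speed(frequency_hz, wavelength_m):
    """Wave speed (m/s) = frequency (Hz) x wavelength (m)."""
    return frequency_hz * wavelength_m

# The worked example above: 100 Hz at a 2 m wavelength.
print(wave_speed(100, 2))  # 200 m/s

# Rearranged: wavelength = speed / frequency. A 100 MHz radio wave
# travelling at the speed of light (~3.0e8 m/s) has a wavelength of ~3 m.
print(3.0e8 / 100e6)  # 3.0
```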
http://mathhelpforum.com/advanced-algebra/153748-order-group.html
# Order of a group

1. ## Order of a group

I got the following from Wikipedia:

The order of a group is its cardinality, i.e., the number of its elements; the order, sometimes period, of an element a of a group is the smallest positive integer m such that $a^m = e$ (where e denotes the identity element of the group, and $a^m$ denotes the product of m copies of a). If no such m exists, we say that a has infinite order. All elements of finite groups have finite order.

Question: If I have $a^m$ where $m=4$, is it correct to say that $|G|=4$? If so, then $G$ contains $4$ elements. Must all the elements be distinct? Must an identity be in the group?

2. ## Re: Order of a group

"Question: If I have $a^m$ where $m=4$, is it correct to say that $|G|=4$? If so, then $G$ contains $4$ elements. Must all the elements be distinct? Must an identity be in the group?"

If you mean that G is a cyclic group (in other words, the entire group is $G = \{a^0, a^1, a^2, a^3, a^4 \}$) such that $a^4$ is as high as you can get without repeating, then actually $|G| = 5$ because $G$ has 5 elements (count them). Yes, by definition, a group must have an identity, which in this case is $a^0$, and all elements must be distinct. For example, although $a^0$ and $a^5$ look different, they're actually the same element, because in a cyclic group the elements start repeating.

3. Since $m=4$ and $|G|=5$, is it of order $4$ or $5$? Drawing from what is said in Wikipedia, when $m$ is the smallest positive integer such that $a^m=e$, then $m=0$ must be the number, then what is the order? hmm?

4. In my example, since $|G| = 5$, the order of the group is 5. But what exactly do you mean when you say, "I have $a^m$ where $m = 4$"? We can raise $a$ to any power we want and the result will always be in the group $G$. However, if this is a finite group, we'll get a lot of repetition. Let's say you start out with $a$, raise it to some power $n$, and get the same thing as the identity, $a^0$.
If $n$ is the smallest positive integer power such that $a^0 = a^n$, then $n$ is the order of the group. In my example, since 5 is the smallest power that gives me back the identity, that's the order of the group. Thus, if I have a group of order $n$, since I don't want duplicates in my listing, I'll only raise $a$ to the powers $0, 1, ..., n-1$. This will give me every unique element in the group.

5. Your explanation makes sense. Thanks. There is an example at Wikipedia: Order (group theory) - Wikipedia, the free encyclopedia. It shows a symmetric group table. In the example, each of the elements $s$, $t$, and $w$ squares to $e$. Then it says that these group elements have order 2. I am a little confused by the terms "order of a group" and "order of group elements." Are they the same?

6. Nope, they're not the same--it's confusing at first, I know. The "order of a group" G is the number of elements it contains. Consider $G = \{e, a, a^2, a^3 \}.$ The order of this group is 4 since it has 4 elements. The "order of an element" $a$ means: to what smallest power do I have to raise this element to get back the identity? Consider the element $a^2$ in the group I described above. I know $(a^2)^2 = a^4 = e$. Thus, the element $a^2$ has order 2. So you see that the order of a group and the order of an element are not the same; in fact, the order of an element always divides the order of the group (notice that 2, the order of $a^2$, divides 4, the order of the group). In your example from Wikipedia, it says if you square those elements, you get the identity. By definition, then, these elements have order 2. Make sense?

7. Since $s$, $t$, and $w$ square to $e$, which means that $s^2=s*(s^{-1})=e$, similarly for $t$ and $w$. If $s=s^{-1}=e$, then it's trivial. Now, if $s \not =s^{-1}$, how do you explain $s^2=e$? What is the simplest example?

8. So we're assuming that $s^2 = e,$ which implies $s = s^{-1}$ (you can see this by multiplying each side of the first equation by $s^{-1}$).
Thus, you cannot have both $s^2 = e$ and $s \not= s^{-1}$--it's a contradiction. If an element $s$ has order 2, it is its own inverse, i.e. $s = s^{-1}$. Certainly, if $s = e$, then $s^2 = e$. But $s$ doesn't have to be the identity for this to happen. Let $G = \{e, a, a^2, a^3, a^4, a^5 \}$. Then consider what happens when we square $a^3$. We'll have $(a^3)^2 = a^6 = e$. So we know $a^3 = (a^{3})^{-1}$ in this group, following similar reasoning to the first paragraph. So in summary, if $s$ squares to $e$, you definitely know that $s = s^{-1}$. But $s$ doesn't necessarily have to be the identity.

9. In terms of a real number: if $\forall a \in \mathbb{R}$, $a^0=1$, what will $a$ be where $a\not \in \{0,1\}$, $a=a^{-1}$ and $a^{2n}=1$, $n \in \mathbb{N}$? I can't think of any. Can it be an imaginary number? $-i^2 = 1$

10. If "multiplication" is the group operation, then we're treating 1 as the identity for this group. Keep in mind, this means 0 is not the identity for this group. (0 is the identity for a different group, where "addition" is the operation.) What about $a = -1$? We know $(-1)^{2n} = ((-1)^2)^n= 1$, regardless of what $n$ is. Since $(-1)^2 = 1$, from the previous discussion we know $-1 = (-1)^{-1}$. So $a = -1$ satisfies the criteria, doesn't it? In fact, $a$ must be a real number, since when you square a complex number you double its angle (measured counter-clockwise about the origin from the real axis). So the only possibility is for $a$ to have angle 180° (so that it's a negative number) or angle 0° (so that it's a positive number).

11. Sir, I want to thank you for your time answering all my questions. You gave very clear instructions at every step.
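As a closing illustration (our own code, not from the thread), the two notions of order discussed here, and the fact that element orders divide the group order, can be checked numerically in the multiplicative group of integers mod 7:

```python
def element_order(a, n):
    """Smallest positive k with a^k = 1 (mod n), for a coprime to n."""
    x, k = a % n, 1
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

group = list(range(1, 7))          # multiplicative group mod 7: {1, ..., 6}
group_order = len(group)           # |G| = 6
orders = {a: element_order(a, 7) for a in group}
print(orders)                      # {1: 1, 2: 3, 3: 6, 4: 3, 5: 6, 6: 2}

# Every element order divides the group order:
assert all(group_order % k == 0 for k in orders.values())
```

Note that 6 has order 2, so it is its own inverse without being the identity, exactly as in posts 7-8 above.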
https://arizona.pure.elsevier.com/en/publications/erratum-corrigendum-to-similarity-and-generalized-analysis-of-eff
# Erratum: Corrigendum to "Similarity and generalized analysis of efficiencies of thermal energy storage systems" (Renew. Energy (2012) (39) (388–402))

Peiwen Li, Jon Van Lew, Cholik Chan, Wafaa Karaki, Jake Stephens, J. E. O'Brien

Research output: Contribution to journal › Comment/debate › peer-review

## Abstract

The paper "Similarity and generalized analysis of efficiencies of thermal energy storage systems", Renewable Energy, 39 (2012), 388–402, presented a work in which the authors considered that the two fluid-solid configurations in thermal storage systems, shown in Fig. 2 of the paper and denoted (a) and (b), have similarity. The authors considered that a thermal storage system that has heat transfer fluid tubes embedded through a packed solid (or liquid), also shown clearly in Fig. 8 of the paper, can be viewed as a case of fluid passing through a porous material, as in configuration (a), if a similarity analysis is made. Because of this similarity, the authors developed methods in the paper to introduce and find several equivalent parameters to describe the energy storage process in case (b) in a similar manner, so that our method of generalized charts for analysis of energy storage effectiveness for case (a) in a previous work can be used. These equivalent parameters include equivalent porosity, equivalent heat transfer area per unit length, and effective heat transfer coefficient, as presented in the paper. The idea of introducing the similarity parameters and analyzing the much more complicated fluid-solid configuration (b) with the same simple method applied to case (a) is very valuable, since it requires less analysis effort with no sacrifice of accuracy.

Fig. 2 (from the paper): Thermal storage tanks with the use of thermal storage medium and heat transfer fluid (HTF).
(a) Filler material, such as rocks, fully submerged in flowing HTF; (b) HTF pipes passing through filler materials, such as soil, concrete, sands, or molten salts. [figure presented]

Fig. 8 (from the paper): Heat transfer fluid tubes and the surrounding thermal storage material. (D_eq is an equivalent diameter based on the cross-sectional area of the container divided by the number of HTF tubes.) [figure presented]

The authors would like to point out here that our method of generalized charts for analysis of energy storage effectiveness for case (a) was published in a previous paper, "Generalized charts of energy storage effectiveness for thermocline heat storage tank design and calibration", Solar Energy, 85 (2011), 2130–2143, by the same group with most of the same authors. Furthermore, in order to lay the foundation for the new findings presented in Renewable Energy, 39 (2012), 388–402, text and graphics from our previous work in Solar Energy, 85 (2011), 2130–2143, were repeated without reference. Specifically, Figures 5, 6, 7, 11, 12, 13, and 14 from Solar Energy, 85 (2011), 2130–2143, were duplicated in Renewable Energy, 39 (2012), 388–402. The authors regret that Solar Energy, 85 (2011), 2130–2143, was not cited in Renewable Energy, 39 (2012), 388–402, as the original source. With this corrigendum the authors wish to add the missing reference: Solar Energy, 85 (2011), 2130–2143. The authors apologize to readers that this was not done prior to publication and for any inconvenience caused.

• Original language: English (US)
• Pages: 892-893
• Number of pages: 2
• Journal: Renewable Energy
• Volume: 97
• DOI: https://doi.org/10.1016/j.renene.2016.06.031
• Published: Nov 1 2016

## ASJC Scopus subject areas

• Renewable Energy, Sustainability and the Environment
http://mathoverflow.net/questions/105214/faithfulness-of-derived-functor
# Faithfulness of derived functor

Let $F:{\cal A}\to {\cal B}$ be an additive, exact and faithful functor between abelian categories. Then on the level of complexes, $F$ maps quasi-isomorphisms to quasi-isomorphisms and thus induces a functor on the derived categories $DF:D({\cal A})\to D({\cal B})$.

Is it true that $DF$ is faithful? Likewise for the usual categories of bounded complexes. If not, are there extra conditions that guarantee faithfulness?

This will usually not be the case. For example, consider a "typical forgetful functor", such as $$A\text{-Mod} \rightarrow k\text{-vect}$$ from the category of representations of a $k$-algebra $A$ to vector spaces. It is exact and faithful, but its derived functor will only be faithful if the algebra is semi-simple, since there are no Ext groups between vector spaces. For a concrete example, take $A=k[x]$.

More philosophically, I think of an exact faithful functor as a functor which forgets some additional structure. The question whether $DF$ is faithful is the question whether there are no new extensions coming from the additional structure. This will rarely be the case.
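To spell out why $A = k[x]$ works (a sketch in our notation, using the standard identification $\operatorname{Hom}_{D(\mathcal{A})}(M, N[1]) \cong \operatorname{Ext}^1_{\mathcal{A}}(M, N)$):

```latex
% Over A = k[x], the simple module k = k[x]/(x) has a nonzero self-extension:
\[
  \operatorname{Hom}_{D(A\text{-Mod})}(k, k[1]) \;\cong\; \operatorname{Ext}^1_{A}(k, k) \;\neq\; 0,
\]
% witnessed by the non-split short exact sequence
\[
  0 \longrightarrow k \longrightarrow k[x]/(x^2) \longrightarrow k \longrightarrow 0,
\]
% while every short exact sequence of vector spaces splits:
\[
  \operatorname{Hom}_{D(k\text{-vect})}(k, k[1]) \;\cong\; \operatorname{Ext}^1_{k}(k, k) \;=\; 0.
\]
% Hence DF sends a nonzero morphism to zero, so it is not faithful.
```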
# Spin (physics)

In quantum mechanics and particle physics, spin is an intrinsic form of angular momentum carried by elementary particles, composite particles (hadrons), and atomic nuclei.[1][2] Spin is one of two types of angular momentum in quantum mechanics, the other being orbital angular momentum. Orbital angular momentum is the quantum-mechanical counterpart to the classical notion of angular momentum: it arises when a particle executes a rotating or twisting trajectory (such as when an electron orbits a nucleus).[3][4] The existence of spin angular momentum is inferred from experiments, such as the Stern–Gerlach experiment, in which particles are observed to possess angular momentum that cannot be accounted for by orbital angular momentum alone.[5] In some ways, spin is like a vector quantity; it has a definite magnitude, and it has a "direction" (but quantization makes this "direction" different from the direction of an ordinary vector). All elementary particles of a given kind have the same magnitude of spin angular momentum, which is indicated by assigning the particle a spin quantum number.[2] The SI unit of spin is the joule-second, just as with classical angular momentum. In practice, however, it is written as a multiple of the reduced Planck constant ħ, usually in natural units, where ħ is omitted, resulting in a unitless number. Spin quantum numbers are unitless numbers by definition. When combined with the spin-statistics theorem, the spin of electrons results in the Pauli exclusion principle, which in turn underlies the periodic table of chemical elements. George Uhlenbeck and Samuel Goudsmit at Leiden University suggested a physical interpretation of particles spinning around their own axis. The mathematical theory was worked out in depth by Pauli in 1927. When Paul Dirac derived his relativistic quantum mechanics in 1928, electron spin was an essential part of it.
## Quantum number

As the name suggests, spin was originally conceived as the rotation of a particle around some axis. This picture is correct insofar as spin obeys the same mathematical laws as quantized angular momenta do. On the other hand, spin has some peculiar properties that distinguish it from orbital angular momenta:

The conventional definition of the spin quantum number, s, is s = n/2, where n can be any non-negative integer. Hence the allowed values of s are 0, 1/2, 1, 3/2, 2, etc. The value of s for an elementary particle depends only on the type of particle, and cannot be altered in any known way (in contrast to the spin direction described below). The spin angular momentum, S, of any physical system is quantized. The allowed values of S are:

S = \frac{h}{2\pi} \, \sqrt{s (s+1)}=\frac{h}{4\pi} \, \sqrt{n(n+2)},

where h is the Planck constant. In contrast, orbital angular momentum can only take on integer values of s; i.e., even-numbered values of n.

### Fermions and bosons

Those particles with half-integer spins, such as 1/2, 3/2, 5/2, are known as fermions, while those particles with integer spins, such as 0, 1, 2, are known as bosons. The two families of particles obey different rules and broadly have different roles in the world around us.
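The quantization rule S = ħ√(s(s+1)) can be tabulated with a short script (an illustrative sketch, not from the source; magnitudes are printed in units of ħ):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant in J*s (CODATA value)

def spin_magnitude(s):
    """Spin angular momentum |S| = hbar * sqrt(s(s+1)) for quantum number s."""
    return HBAR * math.sqrt(s * (s + 1))

# Allowed values are s = n/2 for non-negative integers n:
for n in range(5):
    s = n / 2
    print(f"s = {s:>4}: |S| = {spin_magnitude(s) / HBAR:.4f} hbar")
```

For s = 1/2 this gives |S| = √3/2 ħ ≈ 0.8660 ħ, noticeably larger than the maximal projection ħ/2 along any single axis.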
A key distinction between the two families is that fermions obey the Pauli exclusion principle; that is, there cannot be two identical fermions simultaneously having the same quantum numbers (meaning roughly, being in the same place with the same velocity). In contrast, bosons obey the rules of Bose–Einstein statistics and have no such restriction, so they may "bunch" together even if in identical states. Also, composite particles can have spins different from the particles which comprise them. For example, a helium atom can have spin 0 and therefore can behave like a boson even though the quarks and electrons which make it up are all fermions. This has profound practical applications: • Quarks and leptons (including electrons and neutrinos), which make up what is classically known as matter, are all fermions with spin 1/2. The common idea that "matter takes up space" actually comes from the Pauli exclusion principle acting on these particles to prevent the fermions that make up matter from being in the same quantum state. Further compaction would require electrons to occupy the same energy states, and therefore a kind of pressure (sometimes known as degeneracy pressure of electrons) acts to resist the fermions being overly close. It is also this pressure which prevents stars collapsing inwardly, and which, when it finally gives way under immense gravitational pressure in a dying massive star, triggers inward collapse and the dramatic explosion into a supernova. Elementary fermions with other spins (3/2, 5/2 etc.) are not known to exist, as of 2014. Elementary bosons with other spins (0, 2, 3 etc.) were not historically known to exist, although they have received considerable theoretical treatment and are well established within their respective mainstream theories. In particular theoreticians have proposed the graviton (predicted to exist by some quantum gravity theories) with spin 2, and the Higgs boson (explaining electroweak symmetry breaking) with spin 0. 
Since 2013 the Higgs boson with spin 0 has been considered proven to exist. It is the first scalar particle (spin 0) known to exist in nature. Theoretical and experimental studies have shown that the spin possessed by elementary particles cannot be explained by postulating that they are made up of even smaller particles rotating about a common center of mass (an idea analogous to the classical electron radius); as far as can be presently determined, these elementary particles have no inner structure. The spin of an elementary particle is therefore seen as a truly intrinsic physical property, akin to the particle's electric charge and rest mass.

### Spin-statistics theorem

The spin-statistics theorem states that particles with half-integer spin (fermions) obey Fermi–Dirac statistics and the Pauli exclusion principle, while particles with integer spin (bosons) obey Bose–Einstein statistics, occupy "symmetric states", and thus can share quantum states. The theorem relies on both quantum mechanics and the theory of special relativity, and this connection between spin and statistics has been called "one of the most important applications of the special relativity theory".[6]

## Magnetic moments

*Figure: magnetic field lines around a magnetostatic dipole; the dipole itself is in the center and is seen from the side.*

Particles with spin can possess a magnetic dipole moment, just like a rotating electrically charged body in classical electrodynamics. These magnetic moments can be experimentally observed in several ways, e.g. by the deflection of particles by inhomogeneous magnetic fields in a Stern–Gerlach experiment, or by measuring the magnetic fields generated by the particles themselves. The intrinsic magnetic moment μ of a spin-1/2 particle with charge q, mass m, and spin angular momentum S, is[7]

\boldsymbol{\mu} = \frac{g_s q}{2m} \mathbf{S}

where the dimensionless quantity gs is called the spin g-factor.
For exclusively orbital rotations it would be 1 (assuming that the mass and the charge occupy spheres of equal radius). The electron, being a charged elementary particle, possesses a nonzero magnetic moment. One of the triumphs of the theory of quantum electrodynamics is its accurate prediction of the electron g-factor, which has been experimentally determined to have the value −2.0023193043622(15), with the digits in parentheses denoting measurement uncertainty in the last two digits at one standard deviation.[8] The value of 2 arises from the Dirac equation, a fundamental equation connecting the electron's spin with its electromagnetic properties, and the correction of 0.002319304... arises from the electron's interaction with the surrounding electromagnetic field, including its own field.[9] Composite particles also possess magnetic moments associated with their spin. In particular, the neutron possesses a non-zero magnetic moment despite being electrically neutral. This fact was an early indication that the neutron is not an elementary particle. In fact, it is made up of quarks, which are electrically charged particles. The magnetic moment of the neutron comes from the spins of the individual quarks and their orbital motions. Neutrinos are both elementary and electrically neutral. The minimally extended Standard Model that takes into account non-zero neutrino masses predicts neutrino magnetic moments of:[10][11][12] \mu_{\nu}\approx 3\times 10^{-19}\mu_\mathrm{B}\frac{m_{\nu}}{\text{eV}} where the μν are the neutrino magnetic moments, mν are the neutrino masses, and μB is the Bohr magneton. New physics above the electroweak scale could, however, lead to significantly higher neutrino magnetic moments. It can be shown in a model independent way that neutrino magnetic moments larger than about 10−14 μB are unnatural, because they would also lead to large radiative contributions to the neutrino mass. 
Since the neutrino masses cannot exceed about 1 eV, these radiative corrections must then be assumed to be fine tuned to cancel out to a large degree.[13] The measurement of neutrino magnetic moments is an active area of research. As of 2001, the latest experimental results have put the neutrino magnetic moment at less than 1.2×10−10 times the electron's magnetic moment. In ordinary materials, the magnetic dipole moments of individual atoms produce magnetic fields that cancel one another, because each dipole points in a random direction. Ferromagnetic materials below their Curie temperature, however, exhibit magnetic domains in which the atomic dipole moments are locally aligned, producing a macroscopic, non-zero magnetic field from the domain. These are the ordinary "magnets" with which we are all familiar. In paramagnetic materials, the magnetic dipole moments of individual atoms spontaneously align with an externally applied magnetic field. In diamagnetic materials, on the other hand, the magnetic dipole moments of individual atoms spontaneously align oppositely to any externally applied magnetic field, even if it requires energy to do so. The study of the behavior of such "spin models" is a thriving area of research in condensed matter physics. For instance, the Ising model describes spins (dipoles) that have only two possible states, up and down, whereas in the Heisenberg model the spin vector is allowed to point in any direction. These models have many interesting properties, which have led to interesting results in the theory of phase transitions. ## Direction ### Spin projection quantum number and multiplicity In classical mechanics, the angular momentum of a particle possesses not only a magnitude (how fast the body is rotating), but also a direction (either up or down on the axis of rotation of the particle). Quantum mechanical spin also contains information about direction, but in a more subtle form. 
Quantum mechanics states that the component of angular momentum measured along any direction can only take on the values[14]

S_i = \hbar s_i, \quad s_i \in \{ -s, -(s-1), \dots, s-1, s \}

where Si is the spin component along the i-axis (either x, y, or z), si is the spin projection quantum number along the i-axis, and s is the principal spin quantum number (discussed in the previous section). Conventionally the direction chosen is the z-axis:

S_z = \hbar s_z, \quad s_z \in \{ -s, -(s-1), \dots, s-1, s \}

where Sz is the spin component along the z-axis, and sz is the spin projection quantum number along the z-axis. One can see that there are 2s + 1 possible values of sz. The number "2s + 1" is the multiplicity of the spin system. For example, there are only two possible values for a spin-1/2 particle: sz = +1/2 and sz = −1/2. These correspond to quantum states in which the spin is pointing in the +z or −z directions respectively, and are often referred to as "spin up" and "spin down". For a spin-3/2 particle, like a delta baryon, the possible values are +3/2, +1/2, −1/2, −3/2.

### Vector

For a given quantum state, one could think of a spin vector \langle S \rangle whose components are the expectation values of the spin components along each axis, i.e., \langle S \rangle = [\langle S_x \rangle, \langle S_y \rangle, \langle S_z \rangle]. This vector then would describe the "direction" in which the spin is pointing, corresponding to the classical concept of the axis of rotation. It turns out that the spin vector is not very useful in actual quantum mechanical calculations, because it cannot be measured directly: sx, sy and sz cannot possess simultaneous definite values, because of a quantum uncertainty relation between them.
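As an illustrative sketch (assuming ħ = 1 and the spin-1/2 matrix representation Si = σi/2 built from the Pauli matrices introduced later in this article), the expectation-value spin vector of a given state can be computed directly:

```python
import numpy as np

# Pauli matrices; with hbar = 1 the spin operators are S_i = sigma_i / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_vector(psi):
    """Expectation values [<S_x>, <S_y>, <S_z>] for a two-component spinor."""
    psi = psi / np.linalg.norm(psi)
    return np.array([np.real(psi.conj() @ (S / 2) @ psi) for S in (sx, sy, sz)])

# Eigenstate of S_x with eigenvalue +1/2: the spin vector points along +x.
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(spin_vector(up_x))   # ~ [0.5, 0, 0]
```

Note that the vector's length, |⟨S⟩| = 1/2 here, is always shorter than the full magnitude √(s(s+1)), reflecting the uncertainty relation mentioned above.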
However, for statistically large collections of particles that have been placed in the same pure quantum state, such as through the use of a Stern–Gerlach apparatus, the spin vector does have a well-defined experimental meaning: It specifies the direction in ordinary space in which a subsequent detector must be oriented in order to achieve the maximum possible probability (100%) of detecting every particle in the collection. For spin-1/2 particles, this maximum probability drops off smoothly as the angle between the spin vector and the detector increases, until at an angle of 180 degrees—that is, for detectors oriented in the opposite direction to the spin vector—the expectation of detecting particles from the collection reaches a minimum of 0%. As a qualitative concept, the spin vector is often handy because it is easy to picture classically. For instance, quantum mechanical spin can exhibit phenomena analogous to classical gyroscopic effects. For example, one can exert a kind of "torque" on an electron by putting it in a magnetic field (the field acts upon the electron's intrinsic magnetic dipole moment—see the following section). The result is that the spin vector undergoes precession, just like a classical gyroscope. This phenomenon is known as electron spin resonance (ESR). The equivalent behaviour of protons in atomic nuclei is used in nuclear magnetic resonance (NMR) spectroscopy and imaging. Mathematically, quantum mechanical spin states are described by vector-like objects known as spinors. There are subtle differences between the behavior of spinors and vectors under coordinate rotations. For example, rotating a spin-1/2 particle by 360 degrees does not bring it back to the same quantum state, but to the state with the opposite quantum phase; this is detectable, in principle, with interference experiments. To return the particle to its exact original state, one needs a 720 degree rotation. 
A spin-zero particle can only have a single quantum state, even after torque is applied. Rotating a spin-2 particle 180 degrees can bring it back to the same quantum state and a spin-4 particle should be rotated 90 degrees to bring it back to the same quantum state. The spin 2 particle can be analogous to a straight stick that looks the same even after it is rotated 180 degrees and a spin 0 particle can be imagined as sphere which looks the same after whatever angle it is turned through. ## Mathematical formulation ### Operator Spin obeys commutation relations analogous to those of the orbital angular momentum: [S_i, S_j ] = i \hbar \epsilon_{ijk} S_k where \epsilon_{ijk} is the Levi-Civita symbol. It follows (as with angular momentum) that the eigenvectors of S2 and Sz (expressed as kets in the total S basis) are: \begin{align} S^2 |s,m\rangle &= \hbar^2 s(s + 1) |s,m\rangle \\ S_z |s,m\rangle &= \hbar m |s,m\rangle. \end{align} The spin raising and lowering operators acting on these eigenvectors give: S_\pm |s,m\rangle = \hbar\sqrt{s(s+1)-m(m\pm 1)} |s,m\pm 1 \rangle, where S_\pm = S_x \pm i S_y. But unlike orbital angular momentum the eigenvectors are not spherical harmonics. They are not functions of θ and φ. There is also no reason to exclude half-integer values of s and m. In addition to their other properties, all quantum mechanical particles possess an intrinsic spin (though it may have the intrinsic spin 0, too). The spin is quantized in units of the reduced Planck constant, such that the state function of the particle is, say, not \psi = \psi(\mathbf r), but \psi =\psi(\mathbf r,\sigma)\,, where \sigma is out of the following discrete set of values: \sigma \in \{-s\hbar, -(s - 1)\hbar, \cdots, +(s - 1)\hbar, +s\hbar\}. One distinguishes bosons (integer spin) and fermions (half-integer spin). The total angular momentum conserved in interaction processes is then the sum of the orbital angular momentum and the spin. 
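The commutation relation [Si, Sj] = iħεijk Sk and the action of the ladder operators S± can be checked numerically for spin-1/2, using the standard matrix representation Si = (ħ/2)σi (a sketch with ħ = 1; the σi are the Pauli matrices of the next section):

```python
import numpy as np

hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

# Commutation relation [S_x, S_y] = i hbar S_z (and cyclic permutations)
assert np.allclose(Sx @ Sy - Sy @ Sx, 1j * hbar * Sz)

# Ladder operator: S_+ |s, m> = hbar sqrt(s(s+1) - m(m+1)) |s, m+1>
Sp = Sx + 1j * Sy
up = np.array([1, 0], dtype=complex)      # |1/2, +1/2> in the S_z basis
down = np.array([0, 1], dtype=complex)    # |1/2, -1/2>
s, m = 0.5, -0.5
coeff = hbar * np.sqrt(s * (s + 1) - m * (m + 1))  # = hbar for spin-1/2
assert np.allclose(Sp @ down, coeff * up)
assert np.allclose(Sp @ up, 0)            # raising the top state gives zero
print("commutation and ladder relations verified")
```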
### Pauli matrices

The quantum mechanical operators associated with spin observables are:

S_x = {\hbar \over 2} \sigma_x,\quad S_y = {\hbar \over 2} \sigma_y,\quad S_z = {\hbar \over 2} \sigma_z.

In the special case of spin-1/2 particles, σx, σy and σz are the three Pauli matrices, given by:

\sigma_x = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} ,\quad \sigma_y = \begin{pmatrix} 0 & -i\\ i & 0 \end{pmatrix} ,\quad \sigma_z = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}.

### Pauli exclusion principle

For systems of N identical particles this is related to the Pauli exclusion principle, which states that interchanging any two of the N particles must give

\psi ( \cdots \mathbf r_i,\sigma_i\cdots \mathbf r_j,\sigma_j\cdots ) = (-1)^{2s}\psi ( \cdots \mathbf r_j,\sigma_j\cdots \mathbf r_i,\sigma_i\cdots ) .

Thus, for bosons the prefactor (−1)^{2s} reduces to +1, and for fermions to −1. In quantum mechanics all particles are either bosons or fermions. In some speculative relativistic quantum field theories "supersymmetric" particles also exist, where linear combinations of bosonic and fermionic components appear. In two dimensions, the prefactor (−1)^{2s} can be replaced by any complex number of magnitude 1, as in the case of anyons. This permutation postulate for N-particle state functions has important consequences in daily life, e.g. the periodic table of chemistry.

### Rotations

As described above, quantum mechanics states that components of angular momentum measured along any direction can only take a number of discrete values. The most convenient quantum mechanical description of a particle's spin is therefore a set of complex numbers corresponding to amplitudes of finding a given value of the projection of its intrinsic angular momentum on a given axis.
For instance, for a spin 1/2 particle, we would need two numbers a±1/2, giving amplitudes of finding it with projection of angular momentum equal to ħ/2 and −ħ/2, satisfying the requirement \left|a_\frac{1}{2}\right|^2 + \left|a_{-\frac{1}{2}}\right|^2 \, = 1. For a generic particle with spin s, we would need 2s+1 such parameters. Since these numbers depend on the choice of the axis, they transform into each other non-trivially when this axis is rotated. It's clear that the transformation law must be linear, so we can represent it by associating a matrix with each rotation, and the product of two transformation matrices corresponding to rotations A and B must be equal (up to phase) to the matrix representing rotation AB. Further, rotations preserve the quantum mechanical inner product, and so should our transformation matrices: \sum_{m=-j}^{j} a_m^* b_m = \sum_{m=-j}^{j} \left(\sum_{n=-j}^j U_{nm} a_n\right)^* \left(\sum_{k=-j}^j U_{km} b_k\right) \sum_{n=-j}^j \sum_{k=-j}^j U_{np}^* U_{kq} = \delta_{pq}. Mathematically speaking, these matrices furnish a unitary projective representation of the rotation group SO(3). Each such representation corresponds to a representation of the covering group of SO(3), which is SU(2).[15] There is one n-dimensional irreducible representation of SU(2) for each dimension, though this representation is n-dimensional real for odd n and n-dimensional complex for even n (hence of real dimension 2n). For a rotation by angle θ in the plane with normal vector \hat{\theta}, U can be written U = e^{-\frac{i}{\hbar} \vec{\theta} \cdot \vec{S}}, where \vec{\theta} = \theta \hat{\theta} and \vec{S} is the vector of spin operators. 
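For spin-1/2 the rotation operator U = exp(−iθ·S/ħ) has the closed form U = cos(θ/2) I − i sin(θ/2) (θ̂·σ), which follows from (θ̂·σ)² = I. A small numerical sketch (assuming ħ = 1) confirms the characteristic sign flip of spinors under a full 2π rotation:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rotation(theta, n):
    """Spin-1/2 rotation by angle theta about unit axis n:
    U = exp(-i theta n.S) = cos(theta/2) I - i sin(theta/2) (n.sigma)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    n_sigma = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * n_sigma

# A full 2*pi rotation about any axis is -1, not the identity ...
assert np.allclose(rotation(2 * np.pi, [0, 0, 1]), -I2)
# ... and only a 4*pi (720 degree) rotation returns the spinor exactly.
assert np.allclose(rotation(4 * np.pi, [0, 0, 1]), I2)
```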
A generic rotation in 3-dimensional space can be built by compounding operators of this type using Euler angles (in the z-y-z convention, with the spin operators expressed in units of ħ):

\mathcal{R}(\alpha,\beta,\gamma) = e^{-i\alpha S_z}e^{-i\beta S_y}e^{-i\gamma S_z}

An irreducible representation of this group of operators is furnished by the Wigner D-matrix:

D^s_{m'm}(\alpha,\beta,\gamma) \equiv \langle sm' | \mathcal{R}(\alpha,\beta,\gamma)| sm \rangle = e^{-im'\alpha} d^s_{m'm}(\beta)e^{-i m\gamma},

where

d^s_{m'm}(\beta)= \langle sm' |e^{-i\beta S_y} | sm \rangle

is Wigner's small d-matrix. Note that for γ = 2π and α = β = 0, i.e., a full rotation about the z-axis, the Wigner D-matrix elements become

D^s_{m'm}(0,0,2\pi) = d^s_{m'm}(0) e^{-i m 2 \pi} = \delta_{m'm} (-1)^{2m}.

Recalling that a generic spin state can be written as a superposition of states with definite m, we see that if s is an integer, the values of m are all integers, and this matrix corresponds to the identity operator. However, if s is a half-integer, the values of m are also all half-integers, giving (−1)^{2m} = −1 for all m, and hence upon rotation by 2π the state picks up a minus sign. This fact is a crucial element of the proof of the spin-statistics theorem.

### Lorentz transformations

We could try the same approach to determine the behavior of spin under general Lorentz transformations, but we would immediately discover a major obstacle. Unlike SO(3), the group of Lorentz transformations SO(3,1) is non-compact and therefore does not have any faithful, unitary, finite-dimensional representations. In the case of spin-1/2 particles, it is possible to find a construction that includes both a finite-dimensional representation and a scalar product that is preserved by this representation. We associate a 4-component Dirac spinor \psi with each particle.
These spinors transform under Lorentz transformations according to the law

\psi' = \exp{\left(\frac{1}{8} \omega_{\mu\nu} [\gamma_{\mu}, \gamma_{\nu}]\right)} \psi

where \gamma_{\mu} are gamma matrices and \omega_{\mu\nu} is an antisymmetric 4×4 matrix parametrizing the transformation. It can be shown that the scalar product

\langle\psi|\phi\rangle = \bar{\psi}\phi = \psi^{\dagger}\gamma_0\phi

is preserved. It is not, however, positive definite, so the representation is not unitary.

### Measurement of spin along the x, y, and z axes

Each of the (Hermitian) Pauli matrices has two eigenvalues, +1 and −1. The corresponding normalized eigenvectors are:

\begin{align} \psi_{x+} &= \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, & \psi_{x-} &= \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}, \\ \psi_{y+} &= \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ i \end{pmatrix}, & \psi_{y-} &= \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -i \end{pmatrix}, \\ \psi_{z+} &= \begin{pmatrix} 1 \\ 0 \end{pmatrix}, & \psi_{z-} &= \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \end{align}

By the postulates of quantum mechanics, an experiment designed to measure the electron spin on the x, y or z axis can only yield an eigenvalue of the corresponding spin operator (Sx, Sy or Sz) on that axis, i.e. ħ/2 or −ħ/2. The quantum state of a particle (with respect to spin) can be represented by a two-component spinor:

\psi = \begin{pmatrix} {a+bi}\\{c+di}\end{pmatrix}.

When the spin of this particle is measured with respect to a given axis (in this example, the x-axis), the probability that its spin will be measured as ħ/2 is just \left\vert \langle \psi_{x+} \vert \psi \rangle \right\vert ^2. Correspondingly, the probability that its spin will be measured as −ħ/2 is just \left\vert \langle \psi_{x-} \vert \psi \rangle \right\vert ^2.
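These Born-rule probabilities can be evaluated numerically (an illustrative sketch with ħ = 1; the chosen state ψ and the helper `prob` are not from the source):

```python
import numpy as np

# S_x eigenstates: psi_x+ and psi_x-
psi_xp = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi_xm = np.array([1, -1], dtype=complex) / np.sqrt(2)

def prob(eigenstate, psi):
    """Born rule: probability |<eigenstate|psi>|^2 for a normalized psi."""
    psi = psi / np.linalg.norm(psi)
    return abs(np.vdot(eigenstate, psi)) ** 2   # vdot conjugates its 1st arg

psi = np.array([1, 0], dtype=complex)  # spin up along z
p_plus, p_minus = prob(psi_xp, psi), prob(psi_xm, psi)
print(p_plus, p_minus)                 # each ~ 0.5 for this state
assert np.isclose(p_plus + p_minus, 1.0)
```

For a z-up state the two outcomes along x are equally likely, matching the |⟨ψx±|ψz+⟩|² = 1/2 overlaps quoted in the compatibility discussion below.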
Following the measurement, the spin state of the particle will collapse into the corresponding eigenstate. As a result, if the particle's spin along a given axis has been measured to have a given eigenvalue, all subsequent measurements along that axis will yield the same eigenvalue (since \left\vert \langle \psi_{x+} \vert \psi_{x+} \rangle \right\vert ^2 = 1, etc.), provided that no measurements of the spin are made along other axes.

### Measurement of spin along an arbitrary axis

The operator to measure spin along an arbitrary axis direction is easily obtained from the Pauli spin matrices. Let u = (ux, uy, uz) be an arbitrary unit vector. Then the operator for spin in this direction is simply

S_u = \frac{\hbar}{2}(u_x\sigma_x + u_y\sigma_y + u_z\sigma_z).

The operator Su has eigenvalues of ±ħ/2, just like the usual spin matrices. This method of finding the operator for spin in an arbitrary direction generalizes to higher spin states: one takes the dot product of the direction with a vector of the three operators for the x, y, and z axis directions. A normalized spinor for spin-1/2 in the (ux, uy, uz) direction (which works for all spin states except spin down, where it gives 0/0) is:

\frac{1}{\sqrt{2+2u_z}}\begin{pmatrix} 1+u_z \\ u_x+iu_y \end{pmatrix}.

The above spinor is obtained in the usual way by diagonalizing the \sigma_u matrix and finding the eigenstates corresponding to the eigenvalues. In quantum mechanics, vectors are termed "normalized" when multiplied by a normalizing factor, which results in the vector having a length of unity.

### Compatibility of spin measurements

Since the Pauli matrices do not commute, measurements of spin along the different axes are incompatible. This means that if, for example, we know the spin along the x-axis, and we then measure the spin along the y-axis, we have invalidated our previous knowledge of the x-axis spin. This can be seen from the property of the eigenvectors (i.e.
eigenstates) of the Pauli matrices that:

\left| \langle \psi_{x\pm} | \psi_{y\pm} \rangle \right|^2 = \left| \langle \psi_{x\pm} | \psi_{z\pm} \rangle \right|^2 = \left| \langle \psi_{y\pm} | \psi_{z\pm} \rangle \right|^2 = \frac{1}{2}.

So when physicists measure the spin of a particle along the x-axis as, for example, ħ/2, the particle's spin state collapses into the eigenstate | \psi_{x+} \rangle. When we then subsequently measure the particle's spin along the y-axis, the spin state will collapse into either | \psi_{y+} \rangle or | \psi_{y-} \rangle, each with probability 1/2. Let us say, in our example, that we measure −ħ/2. When we now return to measure the particle's spin along the x-axis again, the probabilities that we will measure ħ/2 or −ħ/2 are each 1/2 (i.e. they are \left| \langle \psi_{x+} | \psi_{y-} \rangle \right|^2 and \left| \langle \psi_{x-} | \psi_{y-} \rangle \right|^2 respectively). This implies that the original measurement of the spin along the x-axis is no longer valid, since the spin along the x-axis will now be measured to have either eigenvalue with equal probability.

## Parity

In tables of the spin quantum number s for nuclei or particles, the spin is often followed by a "+" or "−". This refers to the parity, with "+" for even parity (wave function unchanged by spatial inversion) and "−" for odd parity (wave function negated by spatial inversion). For example, see the isotopes of bismuth.

## Applications

Spin has important theoretical implications and practical applications. Well-established direct applications of spin include:

• Electron spin plays an important role in magnetism, with applications for instance in computer memories.
• The manipulation of nuclear spin by radiofrequency waves (nuclear magnetic resonance) is important in chemical spectroscopy and medical imaging.
• Spin-orbit coupling leads to the fine structure of atomic spectra, which is used in atomic clocks and in the modern definition of the second.
• Precise measurements of the g-factor of the electron have played an important role in the development and verification of quantum electrodynamics.
• Photon spin is associated with the polarization of light.
• A possible future direct application of spin is as a binary information carrier in spin transistors. The original concept, proposed in 1990, is known as the Datta–Das spin transistor.[17] Electronics based on spin transistors is called spintronics, which includes the manipulation of spins in semiconductor devices.

There are many indirect applications and manifestations of spin and the associated Pauli exclusion principle, starting with the periodic table of chemistry.

## History

Spin was first discovered in the context of the emission spectrum of alkali metals. In 1924 Wolfgang Pauli introduced what he called a "two-valued quantum degree of freedom" associated with the electron in the outermost shell. This allowed him to formulate the Pauli exclusion principle, stating that no two electrons can share the same quantum state at the same time.

The physical interpretation of Pauli's "degree of freedom" was initially unknown. Ralph Kronig, one of Landé's assistants, suggested in early 1925 that it was produced by the self-rotation of the electron. When Pauli heard about the idea, he criticized it severely, noting that the electron's hypothetical surface would have to be moving faster than the speed of light in order for it to rotate quickly enough to produce the necessary angular momentum. This would violate the theory of relativity. Largely due to Pauli's criticism, Kronig decided not to publish his idea.

In the autumn of 1925, the same thought came to two Dutch physicists, George Uhlenbeck and Samuel Goudsmit at Leiden University. Under the advice of Paul Ehrenfest, they published their results.
It met a favorable response, especially after Llewellyn Thomas managed to resolve a factor-of-two discrepancy between experimental results and Uhlenbeck and Goudsmit's calculations (and Kronig's unpublished ones). This discrepancy was due to the orientation of the electron's tangent frame, in addition to its position. Mathematically speaking, a fiber bundle description is needed. The tangent bundle effect is additive and relativistic; that is, it vanishes if c goes to infinity. It is one half of the value obtained without regard for the tangent space orientation, but with opposite sign. Thus the combined effect differs from the latter by a factor two (Thomas precession).

Despite his initial objections, Pauli formalized the theory of spin in 1927, using the modern theory of quantum mechanics invented by Schrödinger and Heisenberg. He pioneered the use of Pauli matrices as a representation of the spin operators, and introduced a two-component spinor wave-function. Pauli's theory of spin was non-relativistic. However, in 1928, Paul Dirac published the Dirac equation, which described the relativistic electron. In the Dirac equation, a four-component spinor (known as a "Dirac spinor") was used for the electron wave-function. In 1940, Pauli proved the spin-statistics theorem, which states that fermions have half-integer spin and bosons integer spin.

In retrospect, the first direct experimental evidence of the electron spin was the Stern–Gerlach experiment of 1922. However, the correct explanation of this experiment was only given in 1927.[18]

## Notes

1. ^ Merzbacher, Eugen (1998). Quantum Mechanics (3rd ed.). pp. 372–3.
2. ^ a b Griffiths, David (2005). Introduction to Quantum Mechanics (2nd ed.). pp. 183–4.
3. ^ "Angular Momentum Operator Algebra", class notes by Michael Fowler.
4. ^ A Modern Approach to Quantum Mechanics, by Townsend, p. 31 and p. 80.
5. ^ Eisberg, Robert; Resnick, Robert (1985).
Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (2nd ed.). pp. 272–3. 6. ^ 7. ^ Physics of Atoms and Molecules, B.H. Bransden, C.J. Joachain, Longman, 1983, ISBN 0-582-44401-2 8. ^ "CODATA Value: electron g factor". The NIST Reference on Constants, Units, and Uncertainty. 9. ^ "After some years, it was discovered that this value [−g/2] was not exactly 1, but slightly more—something like 1.00116. This correction was worked out for the first time in 1948 by Schwinger as j*j divided by 2 pi [sic] [where j is the square root of the fine-structure constant], and was due to an alternative way the electron can go from place to place: instead of going directly from one point to another, the electron goes along for a while and suddenly emits a photon; then (horrors!) it absorbs its own photon." 10. ^ W.J. Marciano, A.I. Sanda (1977). "Exotic decays of the muon and heavy leptons in gauge theories". 11. ^ B.W. Lee, R.E. Shrock (1977). "Natural suppression of symmetry violation in gauge theories: Muon- and electron-lepton-number nonconservation". 12. ^ K. Fujikawa, R. E. Shrock (1980). "Magnetic Moment of a Massive Neutrino and Neutrino-Spin Rotation". 13. ^ N.F. Bell et al.; Cirigliano, V.; Ramsey-Musolf, M.; Vogel, P.; Wise, Mark (2005). "How Magnetic is the Dirac Neutrino?". 14. ^ Quanta: A handbook of concepts, P.W. Atkins, Oxford University Press, 1974, ISBN 0-19-855493-1 15. ^ B.C. Hall (2013). Quantum Theory for Mathematicians. Springer. pp. 354–358. 16. ^ Modern Quantum Mechanics, by J. J. Sakurai, p. 159 17. ^ Datta, S. and B. Das (1990). "Electronic analog of the electrooptic modulator". Applied Physics Letters 56 (7): 665–667. 18. ^ B. Friedrich, D. Herschbach (2003). "Stern and Gerlach: How a Bad Cigar Helped Reorient Atomic Physics". ## References • Cohen-Tannoudji, Claude; Diu, Bernard; Laloë, Franck (2006). Quantum Mechanics (2 volume set ed.). John Wiley & Sons. • Condon, E. U.; Shortley, G. H. (1935). "Especially Chapter 3".
The Theory of Atomic Spectra. Cambridge University Press. • Hipple, J. A.; Sommer, H.; Thomas, H.A. (1949). A precise method of determining the faraday by magnetic resonance. https://www.academia.edu/6483539/John_A._Hipple_1911-1985_technology_as_knowledge • Edmonds, A. R. (1957). Angular Momentum in Quantum Mechanics. Princeton University Press. • Jackson, John David (1998). Classical Electrodynamics (3rd ed.). John Wiley & Sons. • Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. • Thompson, William J. (1994). Angular Momentum: An Illustrated Guide to Rotational Symmetries for Physical Systems. Wiley. • Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. • Sin-Itiro Tomonaga, The Story of Spin, 1997
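Note 9 above quotes Schwinger's leading-order QED correction to the electron g-factor, g/2 = 1 + α/2π. As a quick arithmetic check (a sketch only; the numerical value of the fine-structure constant α is filled in by us, not taken from the quoted source):

```python
import math

# Schwinger's 1948 leading-order correction to the electron g-factor:
# g/2 = 1 + alpha / (2*pi), with alpha the fine-structure constant.
alpha = 1 / 137.035999  # approximate value, assumed here for illustration
g_over_2 = 1 + alpha / (2 * math.pi)
print(round(g_over_2, 5))  # close to the 1.00116 quoted in note 9
```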
https://worldwidescience.org/topicpages/a/algol.html
#### Sample records for algol 1. Programming Algol CERN Document Server Malcolme-Lawes, D J 2014-01-01 Programming - ALGOL describes the basics of computer programming using Algol. Commands that could be added to Algol and could increase its scope are described, including multiplication and division and the use of brackets. The idea of labeling or naming a command is also explained, along with a command allowing two alternative results. Most of the important features of Algol syntax are discussed, and examples of compound statements (that is, sets of commands enclosed by a begin ... end command) are given. Comprised of 11 chapters, this book begins with an introduction to the digital computer… 2. UXOR Hunting among Algol Variables Science.gov (United States) Poxon, M. 2015-06-01 The class of variable typified by UX Orionis (UXORs or UXors) comprises young stars characterised by aperiodic or semiperiodic fades from maximum. This has led to several members of the class being formerly catalogued as Algol-type eclipsing binaries (EAs), which can show superficially similar light variations. With this in view, I propose a campaign to search for more UX Ori type stars. 3. Architectural design of an Algol interpreter Science.gov (United States) Jackson, C. K. 1971-01-01 The design of a syntax-directed interpreter for a subset of Algol is described. It is a conceptual design with sufficient detail and completeness but as much independence of implementation as possible. The design includes a detailed description of a scanner, an analyzer described in Floyd-Evans productions, a hash-coded symbol table, and an executor. Interpretation of sample programs is also provided to show how the interpreter functions. 4. Rotation of the primary component in Algol International Nuclear Information System (INIS) Rucinski, S.M. 1979-01-01 A 1.8 Å segment of the Algol A spectrum, centered on the 1075.55 Å line of Ni II, was scanned repeatedly at 0.05 Å resolution from the Copernicus satellite.
A numerical model is described in which 37 out of the total 55 scans obtained during two eclipses were combined in a least-squares solution, to determine the period of rotation and the degree of non-solid rotation of Algol A. The period of rotation suggests full synchronism of the rotational and orbital motions; the equatorial velocity is V_e = 53 ± 3 km/s. The non-solid rotation parameter s, measuring the distribution of angular velocity with stellar latitude θ in ω = ω_e(1 − s + s cos²θ), has been found to be s = 0.07 ± 0.25, indicating a rotation law not far from solid-body. The data suggest that the spectroscopic center of eclipse comes −0.0017 ± 0.0002 parts of the orbital period earlier than the predicted photometric center at optical wavelengths, but this result is not firm because of the uncertainty in the time-of-eclipse ephemeris for Algol. (author) 5. Gravity-darkening in the Algol system International Nuclear Information System (INIS) Kopal, Z. 1979-01-01 Infrared observations of the secondary minimum of the eclipsing system of Algol, secured recently by Nadeau et al. (1978) with the 200 in and 60 in reflectors of Mount Wilson and Palomar Observatories at the effective wavelength of 10 μm, show its light curve to be distinctly dish-shaped, i.e. the light diminishes relatively fast in the early stages of the eclipse, and its rate of decline slows down in advanced partial phases. This fact indicates convincingly that the light distribution over the apparent disc of Algol's late-type (contact) component is akin to that produced by the phenomenon of 'gravity-darkening' to a very pronounced degree.
An analysis of Algol's infrared light curve during the secondary minimum (when its contact component undergoes eclipse by its nearly spherical mate), observed at an effective wavelength of 10 μm, now discloses that the (monochromatic) coefficient of the linear law of gravity-darkening, characterizing the distribution of brightness over the apparent disc of the contact star, again comes out at least twice as large as one which would correspond to a purely radiative energy transfer of total light in the far interior of this star. No physical theory can be advanced to explain this fact - except, possibly, a hypothesis that the observed enhancement of the monochromatic coefficient τ of gravity-darkening over that appropriate for total radiation may be caused by a very wide departure of the outer layers of the respective stars from thermodynamic equilibrium. (Auth.) 6. Long photometric cycles in hot algols Directory of Open Access Journals (Sweden) Mennickent R.E. 2017-01-01 Full Text Available We summarize the development of the field of Double Periodic Variables (DPVs; Mennickent et al. 2003) during the last fourteen years, placing these objects in the context of intermediate-mass close interacting binaries similar to β Persei (Algol) and β Lyrae (Sheliak), which are generally called Algols. DPVs show enigmatic long photometric cycles lasting on average about 33 times the orbital period, and have physical properties resembling, in some aspects, β Lyrae. About 200 of these objects have been found in the Galaxy and the Magellanic Clouds. Light curve models and orbitally resolved spectroscopy indicate that DPVs are semi-detached interacting binaries consisting of a near-main-sequence B-type star accreting matter from a cooler giant and surrounded by an optically thick disc. This disc contributes a significant fraction of the system luminosity, and its luminosity is larger than expected from the phenomenon of mass accretion alone.
In some systems, an optically thin disc component is observed in well-developed Balmer emission lines. The optically thick disc shows bright zones up to tens of percent hotter than the disc, probably indicating shocks resulting from the gas and disc stream dynamics. We conjecture that a hotspot wind might be one of the channels for a mild systemic mass loss, since evidence for jets, winds or general mass loss has been found in β Lyrae, AU Mon, HD 170582, OGLE05155332-6925581 and V393 Sco. Also, theoretical work by Van Rensbergen et al. (2008) and Deschamps et al. (2013) suggests that a hotspot could drive mass loss from Algols. We give special consideration to the recently published hypothesis for the long-cycle, consisting of variable mass transfer driven by a magnetic dynamo (Schleicher and Mennickent 2017). The Applegate (1992) mechanism should cyclically modify the equatorial radius of the chromospherically active donor, producing cycles of enhanced mass loss through the inner Lagrangian point. Chromospheric emission in… 7. Chemical History of Algol and its Components Science.gov (United States) Kolbas, V.; Pavlovski, K.; Southworth, J.; Lee, C.-U.; Lee, J. W.; Kim, S.-L.; Kim, H.-I. 2012-04-01 We present a new observational project to study the hierarchical triple stellar system Algol, concentrating on the semidetached eclipsing binary at the heart of the system. Over 140 high-resolution and high-S/N spectra have been secured, of which 80 are from FIES at the Nordic Optical Telescope, La Palma, and the remainder were obtained with BOES at the Bohyunsan Optical Astronomy Observatory in Korea. All three components were successfully detected by the method of spectral disentangling, which yields the individual spectra of the three stars and also high-quality spectroscopic elements for both the inner and outer orbits.
We present a detailed abundance study for the mass-accreting component in the inner orbit, which holds information on the history of mass transfer in the close inner binary system. We also reveal the atmospheric parameters and chemical composition of the tertiary component in the outer orbit. 8. ALGOL compiler. Syntax and semantic analysis International Nuclear Information System (INIS) Tarbouriech, Robert 1971-01-01 In this research thesis, the author reports the development of an ALGOL compiler which performs the following main tasks: systematic scan of the source programme to recognise its different components (identifiers, reserved words, constants, separators), analysis of the source programme structure to build up its statements and arithmetic expressions, processing of symbolic names (identifiers) to associate them with the values they represent, and memory allocation for data and programme. Several issues are thus addressed: characteristics of the machine for which the compiler is developed, exact definition of the language (grammar, identifier and constant formation), the syntax-processing programme which provides the compiler with the necessary elements (language vocabulary, precedence matrix), and a description of the first two phases of compilation: lexicographic analysis and syntax analysis. The last phase (machine-code generation) is not addressed. 9. Numerical methods of mathematical optimization with Algol and Fortran programs CERN Document Server Künzi, Hans P; Zehnder, C A; Rheinboldt, Werner 1971 Numerical Methods of Mathematical Optimization: With ALGOL and FORTRAN Programs reviews the theory and the practical application of the numerical methods of mathematical optimization. An ALGOL and a FORTRAN program were developed for each of the algorithms described in the theoretical section.
This should result in easy access to the application of the different optimization methods. Comprised of four chapters, this volume begins with a discussion of the theory of linear and nonlinear optimization, with the main stress on an easily understood, mathematically precise presentation. In addition… 10. THE ALGOL TRIPLE SYSTEM SPATIALLY RESOLVED AT OPTICAL WAVELENGTHS International Nuclear Information System (INIS) Zavala, R. T.; Hutter, D. J.; Hummel, C. A.; Boboltz, D. A.; Ojha, R.; Shaffer, D. B.; Tycner, C.; Richards, M. T. 2010-01-01 Interacting binaries typically have separations in the milliarcsecond regime, and hence it has been challenging to resolve them at any wavelength. However, recent advances in optical interferometry have improved our ability to discern the components in these systems and have now enabled the direct determination of physical parameters. We used the Navy Prototype Optical Interferometer to produce for the first time images resolving all three components in the well-known Algol triple system. Specifically, we have separated the tertiary component from the binary and simultaneously resolved the eclipsing binary pair, which represents the nearest and brightest eclipsing binary in the sky. We present revised orbital elements for the triple system, and we have rectified the 180° ambiguity in the position angle of Algol C. Our directly determined magnitude differences and masses for this triple star system are consistent with earlier light curve modeling results. 11. Development of the Algol III solid rocket motor for SCOUT. Science.gov (United States) Felix, B. R.; Mcbride, N. M. 1971-01-01 The design and performance of a motor developed for the first stage of the NASA SCOUT-D and E launch vehicles are discussed. The motor delivers a 30% higher total impulse and a 35 to 45% higher payload mass capability than its predecessor, the Algol IIB. The motor is 45 in.
in diameter, has a length-to-diameter ratio of 8:1, and delivers an average 100,000-lb thrust for an action time of 72 sec. The motor design features a very high volumetrically loaded internal-burning charge of 17% aluminized polybutadiene propellant, a plasma-welded and heat-treated steel alloy case, and an all-ablative plastic nose liner enclosed in a steel shell. The only significant development problem was the grain design tailoring to account for erosive burning effects which occurred in the high-subsonic-Mach-number port. The tests performed on the motor are described. 12. Shifting Milestones of Natural Sciences: The Ancient Egyptian Discovery of Algol's Period Confirmed. Directory of Open Access Journals (Sweden) Lauri Jetsu Full Text Available The Ancient Egyptians wrote Calendars of Lucky and Unlucky Days that assigned astronomically influenced prognoses for each day of the year. The best preserved of these calendars is the Cairo Calendar (hereafter CC), dated to 1244-1163 B.C. We have presented evidence that the 2.85-day period in the lucky prognoses of CC is equal to that of the eclipsing binary Algol during this historical era. We wanted to find out the vocabulary that represents Algol in the mythological texts of CC. Here we show that Algol was represented as Horus and thus signified both divinity and kingship. The texts describing the actions of Horus are consistent with the course of events witnessed by any naked-eye observer of Algol. These descriptions support our claim that CC is the oldest preserved historical document of the discovery of a variable star. The period of the Moon, 29.6 days, has also been discovered in CC. We show that the actions of Seth were connected to this period, which also strongly regulated the times described as lucky for Heaven and for Earth. Now, for the first time, periodicity is discovered in the descriptions of the days in CC.
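The 2.85-day signal in record 12 can be illustrated with a toy period search. The sketch below is not the authors' statistical method: the synthetic series of "lucky" day numbers, the assumed 80% out-of-eclipse duty cycle, and the Rayleigh-statistic scoring are all our own assumptions for the demonstration.

```python
import math

# Build a synthetic year of "lucky" day numbers: a day is lucky whenever a
# hidden 2.85 d cycle is in its first 80% (Algol out of eclipse). These data
# and this scoring are illustrative assumptions, not the paper's method.
TRUE_PERIOD = 2.85
lucky_days = [d for d in range(360) if (d % TRUE_PERIOD) / TRUE_PERIOD < 0.8]

def rayleigh_power(times, period):
    """Rayleigh statistic: large when the times cluster in phase at `period`."""
    angles = [2.0 * math.pi * (t % period) / period for t in times]
    c = sum(math.cos(a) for a in angles)
    s = sum(math.sin(a) for a in angles)
    return (c * c + s * s) / len(times)

# Scan trial periods between 2 and 4 days in 0.001 d steps and keep the best.
trials = [2.0 + 0.001 * k for k in range(2001)]
best = max(trials, key=lambda p: rayleigh_power(lucky_days, p))
# `best` should land very close to the injected 2.85 d period.
```

Scanning trial periods of a real dated event series in the same way is the essence of epoch folding: the peak of the statistic marks the period at which the events cluster in phase.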
Unlike many previous attempts to uncover the reasoning behind the myths of individual days, we discover the actual rules in the appearance and behaviour of deities during the whole year. 13. Long-term Spectroscopic and Photometric Monitoring of Bright Interacting Algol-type Binary Stars Science.gov (United States) Reed, Phillip A. 2018-01-01 Binary stars have long been used as natural laboratories for studying such fundamental stellar properties as mass. Interacting binaries allow us to examine more complicated aspects such as mass flow between stars, accretion processes, magnetic fields, and stellar mergers. Algol-type interacting binary stars -- consisting of a cool giant or sub-giant donating mass to a much hotter, less evolved, and more massive main-sequence companion -- undergo steady mass transfer and have been used to measure mass transfer rates and to test stellar evolution theories. The method of back-projection Doppler tomography has also been applied to interacting Algols and has produced indirect velocity-space images of the accretion structures (gas streams, accretion disks, etc.) derived from spectroscopic observations of hydrogen and helium emission lines. The accretion structures in several Algol systems have actually been observed to change between disk-like states and stream-like states on timescales as short as several orbital cycles (Richards et al., 2014). Presented here are the first results from a project aimed at studying bright interacting Algol systems with simultaneous mid-resolution (≈11,000) spectroscopy, in order to monitor transitions between stream-like and disk-like states over different temperature regimes, to identify the various accretion phenomena, and to extract their physical properties. 14. A brief description and comparison of programming languages FORTRAN, ALGOL, COBOL, PL/1, and LISP 1.5 from a critical standpoint Science.gov (United States) Mathur, F. P. 1972-01-01 Several common higher-level programming languages are described.
FORTRAN, ALGOL, COBOL, PL/1, and LISP 1.5 are summarized and compared. FORTRAN is the most widely used scientific programming language. ALGOL is a more powerful language for scientific programming. COBOL is used for most commercial programming applications. LISP 1.5 is primarily a list-processing language. PL/1 attempts to combine the desirable features of FORTRAN, ALGOL, and COBOL into a single language. 15. Echelle Spectroscopy of Algol and the Chemical Composition of its Components Science.gov (United States) Pavlovski, K.; Kolbas, V.; Southworth, J. 2010-12-01 We present a new observational project to study the prototypical semidetached binary system, Algol. Over 80 high-resolution and high-S/N spectra were secured in 2006 and 2007 with FIES at the Nordic Optical Telescope on La Palma. All three components were successfully detected by the method of spectral disentangling, which yields the individual spectra of the three stars and also high-accuracy spectroscopic elements for both the inner and outer orbits. Our next step is to study the chemical composition of the components in detail, in particular the close eclipsing system which is in the stage after mass reversal. 16. Occultations from an Active Accretion Disk in a 72-day Detached Post-Algol System Detected by K2 DEFF Research Database (Denmark) Zhou, G.; Rappaport, S.; Nelson, L. 2018-01-01 Disks in binary systems can cause exotic eclipsing events. MWC 882 (BD –22 4376, EPIC 225300403) is such a disk-eclipsing system identified from observations during Campaign 11 of the K2 mission. We propose that MWC 882 is a post-Algol system with a B7 donor star of mass in a 72-day orbit around ... 17. Photometric and Spectroscopic Observations of the Algol Type Binary V Triangle Energy Technology Data Exchange (ETDEWEB) Ren, A. B.; Fu, J. N.; Zhang, Y. P.; Cang, T. Q.; Li, C. Q.; Khokhuntod, P. [Department of Astronomy, Beijing Normal University, No. 
19 Xinjiekouwai Street, Haidian District, Beijing 100875 (China); Zhang, X. B. [National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China); Fox-Machado, L. [Instituto de Astronomía, Universidad Nacional Autónoma de México, Apartado Postal 877, Ensenada, Baja California, C.P. 22830, México (Mexico); Luo, Y. P., E-mail: [email protected], E-mail: [email protected] [Physics and Space Science College, China West Normal University, Nanchong 637002 (China) 2017-06-01 Time-series, multi-color photometry and high-resolution spectra of the short-period eclipsing binary V Tri were obtained. The completely covered light and radial velocity (RV) curves of the binary system are presented. All times of light minima derived from both photoelectric and CCD photometry were used to calculate the orbital period and new ephemerides of the eclipsing system. The analysis of the O − C diagram reveals that the orbital period is 0.58520481 days, decreasing at a rate of dP/dt = −7.80 × 10⁻⁸ day yr⁻¹. Mass transfer between the two components and the light-travel-time effect due to a third body could be used to explain the period decrease. However, a semi-detached configuration, with the lower-mass component filling and the primary nearly filling their respective Roche lobes, was derived from the synthesis of the light and RV curves using the 2015 version of the Wilson–Devinney code. We attribute the period decrease to nonconservative mass transfer from the secondary component to the primary together with mass loss from the system, which was thought to be of EB type, while it should be of EA type (semi-detached Algol-type) according to our study. The masses, radii, and luminosities of the primary and secondary are 1.60 ± 0.07 M⊙, 1.64 ± 0.02 R⊙, and 14.14 ± 0.73 L⊙ and 0.74 ± 0.02 M⊙, 1.23 ± 0.02 R⊙, and 1.65 ± 0.05 L⊙, respectively. 18.
LUT Reveals an Algol-type Eclipsing Binary With Three Additional Stellar Companions in a Multiple System Science.gov (United States) Zhu, L.-Y.; Zhou, X.; Hu, J.-Y.; Qian, S.-B.; Li, L.-J.; Liao, W.-P.; Tian, X.-M.; Wang, Z.-H. 2016-04-01 A complete light curve of the neglected Algol-type eclipsing binary V548 Cygni in the UV band was obtained with the Lunar-based Ultraviolet Telescope in 2014 May. Photometric solutions are obtained using the Wilson-Devinney method. It is found that solutions with and without third light are quite different. The mass ratio without third light is determined to be q = 0.307, while that derived with third light is q = 0.606. It is shown that V548 Cygni is a semi-detached binary in which the secondary component fills its critical Roche lobe. An analysis of all available eclipse times suggests that there are three cyclic variations in the O-C diagram, which are interpreted by the light travel-time effect via the presence of three additional stellar companions. This is in agreement with the presence of a large quantity of third light in the system. The masses of these companions are estimated as m sin i ≈ 1.09, 0.20, and 0.52 M⊙. They are orbiting the central binary with orbital periods of about 5.5, 23.3, and 69.9 years, i.e., in a 1:4:12 resonance. Their orbital separations are about 4.5, 13.2, and 26.4 au, respectively. Our photometric solutions suggest that they contribute about 32.4% of the total light of the multiple system. No obvious long-term changes in the orbital period were found, indicating that the contributions of mass transfer and of mass loss due to magnetic braking to the period variations are comparable. The detection of three possible additional stellar components orbiting a typical Algol in a multiple system makes V548 Cygni a very interesting binary to study in the future. 19. Structural changes in the hot Algol OGLE-LMC-DPV-097 and its disk related to its long-cycle Science.gov (United States) L, J.
Garcés; Mennickent, R. E.; Djurašević, Gojko; Poleski, R.; Soszyński, I. 2018-03-01 Double Periodic Variables (DPVs) are hot Algols showing a long photometric cycle of uncertain origin. We report the discovery of changes in the orbital light curve of OGLE-LMC-DPV-097 which depend on the phase of its long photometric cycle. During the ascending branch of the long-cycle, the brightness at the first quadrature is larger than at the second quadrature; during the maximum of the long-cycle, the brightness is basically the same at both quadratures; during the descending branch, the brightness at the second quadrature is larger than at the first quadrature; and during the minimum of the long-cycle, the secondary minimum disappears. We model the light curve at different phases of the long-cycle and find that the data are consistent with changes in the properties of the accretion disk and two disk spots. The disk's size and temperature change with the long-cycle period. We find a smaller and hotter disk at minimum and a larger and cooler disk at maximum. The spot temperatures, locations and angular sizes also show variability during the long-cycle. 20. KIC 6048106: an Algol-type eclipsing system with long-term magnetic activity and hybrid pulsations - I. Binary modelling Science.gov (United States) 2018-03-01 The A-F-type stars and pulsators (δ Scuti-γ Dor) are in a critical regime where they experience a transition from radiative to convective transport of energy in their envelopes. Such stars can pulsate in both gravity and acoustic modes. Hence, knowledge of their fundamental parameters along with their observed pulsation characteristics can help in improving the stellar models. When residing in a binary system, these pulsators provide more accurate and less model-dependent stellar parameters than their single counterparts. We present a light-curve model for the eclipsing system KIC 6048106 based on the Kepler photometry and the code PHOEBE.
We aim to obtain accurate physical parameters and tight constraints for the stellar modelling of this intermediate-mass hybrid pulsator. We performed a separate modelling of three light-curve segments which show distinct behaviour due to a difference in activity. We also analysed the Kepler Eclipse Time Variations (ETVs). KIC 6048106 is an Algol-type binary with F5-K5 components, a near-circular orbit, and a 1.56-d period undergoing variations of the order of ΔP/P ≈ 3.60 × 10⁻⁷ in 287 ± 7 d. The primary component is a main-sequence star with M1 = 1.55 ± 0.11 M⊙, R1 = 1.57 ± 0.12 R⊙. The secondary is a much cooler subgiant with M2 = 0.33 ± 0.07 M⊙, R2 = 1.77 ± 0.16 R⊙. Many small near-polar spots are active on its surface. The second quadrature phase shows a brightness modulation on a time-scale of 290 ± 7 d, in good agreement with the ETV modulation. This study reveals a stable binary configuration along with clear evidence of long-term activity of the secondary star. 1. Occultations from an Active Accretion Disk in a 72-day Detached Post-Algol System Detected by K2 Science.gov (United States) Zhou, G.; Rappaport, S.; Nelson, L.; Huang, C. X.; Senhadji, A.; Rodriguez, J. E.; Vanderburg, A.; Quinn, S.; Johnson, C. I.; Latham, D. W.; Torres, G.; Gary, B. L.; Tan, T. G.; Johnson, M. C.; Burt, J.; Kristiansen, M. H.; Jacobs, T. L.; LaCourse, D.; Schwengeler, H. M.; Terentev, I.; Bieryla, A.; Esquerdo, G. A.; Berlind, P.; Calkins, M. L.; Bento, J.; Cochran, W. D.; Karjalainen, M.; Hatzes, A. P.; Karjalainen, R.; Holden, B.; Butler, R. P. 2018-02-01 Disks in binary systems can cause exotic eclipsing events. MWC 882 (BD –22 4376, EPIC 225300403) is such a disk-eclipsing system identified from observations during Campaign 11 of the K2 mission. We propose that MWC 882 is a post-Algol system with a B7 donor star of mass 0.542 ± 0.053 M⊙ in a 72-day orbit around an A0 accreting star of mass 3.24 ± 0.29 M⊙.
The 59.9 ± 6.2 R⊙ disk around the accreting star occults the donor star once every orbit, inducing 19-day-long, 7%-deep eclipses identified by K2 and subsequently found in pre-discovery All-Sky Automated Survey and All Sky Automated Survey for Supernovae observations. We coordinated a campaign of photometric and spectroscopic observations of MWC 882 to measure the dynamical masses of the components and to monitor the system during eclipse. We found the photometric eclipse to be gray to ≈1%. We found that the primary star exhibits spectroscopic signatures of active accretion, and we observed gas absorption features from the disk during eclipse. We suggest that MWC 882 initially consisted of a ≈3.6 M⊙ donor star transferring mass via Roche lobe overflow to a ≈2.1 M⊙ accretor in a ≈7-day initial orbit. Through angular momentum conservation, the donor star was pushed outward during mass transfer to its current orbit of 72 days. The observed state of the system corresponds with the donor star having left the red giant branch ∼0.3 Myr ago, terminating active mass transfer. The present disk is expected to be short-lived (10² yr) without an active feeding mechanism, presenting a challenge to this model. 2. J Is for JavaScript: A Direct-Style Correspondence between Algol-Like Languages and JavaScript Using First-Class Continuations DEFF Research Database (Denmark) Danvy, Olivier; Shan, Chung-chieh; Zerny, Ian Steven 2009-01-01 … This tension arises especially when the domain calls for complex control structures. We illustrate this tension by revisiting Landin's original correspondence between Algol and Church's lambda-notation. We translate domain-specific programs with lexically scoped jumps to JavaScript. Our translation produces
Less extreme translations should emit more idiomatic control-flow instructions such as for, break, and throw. The present experiment leads...... us to conclude that translations should preserve not just the data structures and the block structure of a source program, but also its control structure. We thus identify a new class of use cases for control structures in JavaScript, namely the idiomatic translation of control structures from DSLs.... 3. High-precision broad-band linear polarimetry of early-type binaries. II. Variable, phase-locked polarization in triple Algol-type system λ Tauri Science.gov (United States) Berdyugin, A.; Piirola, V.; Sakanoi, T.; Kagitani, M.; Yoneda, M. 2018-03-01 Aim. To study the binary geometry of the classic Algol-type triple system λ Tau, we have searched for polarization variations over the orbital cycle of the inner semi-detached binary, arising from light scattering in the circumstellar material formed from ongoing mass transfer. Phase-locked polarization curves provide an independent estimate for the inclination i, orientation Ω, and the direction of the rotation for the inner orbit. Methods: Linear polarization measurements of λ Tau in the B, V , and R passbands with the high-precision Dipol-2 polarimeter have been carried out. The data have been obtained on the 60 cm KVA (Observatory Roque de los Muchachos, La Palma, Spain) and Tohoku 60 cm (Haleakala, Hawaii, USA) remotely controlled telescopes over 69 observing nights. Analytic and numerical modelling codes are used to interpret the data. Results: Optical polarimetry revealed small intrinsic polarization in λ Tau with 0.05% peak-to-peak variation over the orbital period of 3.95 d. The variability pattern is typical for binary systems showing strong second harmonic of the orbital period. 
We apply a standard analytical method and our own light scattering models to derive parameters of the inner binary orbit from the fit to the observed variability of the normalized Stokes parameters. From the analytical method, the averages over the three passbands, orbit inclination i = 76° +1°/-2° and orientation Ω = 15°(195°) ± 2°, are obtained. Scattering models give similar inclination values i = 72-76° and orbit orientation ranging from Ω = 16°(196°) to Ω = 19°(199°), depending on the geometry of the scattering cloud. The rotation of the inner system, as seen on the plane of the sky, is clockwise. We have found that with the scattering model the best fit is obtained for the scattering cloud located between the primary and the secondary, near the inner Lagrangian point or along the Roche lobe surface of the secondary facing the primary. 4. Was Algol 60 the First Algorithmic Language? NARCIS (Netherlands) Durnová, H.; Alberts, G. 2014-01-01 During the 1950s, computer programming was a local practice. Programs from one computing center would not work on computers elsewhere. For example, programs written in Munich differed radically in style from programs written in Amsterdam. Similar problems were also encountered in the United States, 5. Universality versus Locality: The Amsterdam Style of Algol Implementation NARCIS (Netherlands) Alberts, G.; Daylight, E.G. 2014-01-01 During the 1950s, computer programming was a local practice. Programs from one computing center would not work on computers elsewhere. For example, programs written in Munich differed radically in style from programs written in Amsterdam. Similar problems were also encountered in the United States, 6. The Algol Triple System Spatially Resolved at Optical Wavelengths Science.gov (United States) 2010-01-01 the Very Long Baseline Array (VLBA), and the Lowell Observatory 42” Hall telescope equipped with the solar-stellar spectrograph.
We defer discussion of...Interferometer is a joint project of the Naval Research Laboratory and the US Naval Observatory, in cooperation with Lowell Observatory, and is funded by the... 7. Photometry of long-period Algol binaries. VI. Multicolor photometric solutions for RZ Cancri Energy Technology Data Exchange (ETDEWEB) Olson, E.C. (Illinois Univ., Urbana (USA)) 1989-09-01 New intermediate-band photometry of the late-giant eclipsing system RZ Cnc is used to obtain photometric solutions, both with the Popper (1976) spectroscopic mass ratio and by allowing the mass ratio and gravity-darkening coefficients to vary. New y observations are combined with earlier V data of Lenouvel (1957) and Broglia and Conconi (1973) in one solution set. Additional solutions are obtained from the new observations. The mean photometric mass ratio is somewhat larger than Popper's spectroscopic values; the general indeterminacy of photometric solutions may explain this apparent discrepancy. Possible photometric effects of a mass-transferring stream are discussed, and it is concluded that such effects cannot account for the mass-ratio discrepancy. 26 refs. 8. The nature of EU Pegasi: An Algol-type binary with a δ Scuti-type component Science.gov (United States) Yang, Yuangui; Yuan, Huiyu; Dai, Haifeng; Zhang, Xiliang 2018-02-01 The comprehensive photometry and spectroscopy for the neglected eclipsing binary EU Pegasi are presented. We determine its spectral type to be A3V. With the W-D program, the photometric solution was deduced from the four-color light curves. The results imply that EU Peg is a detached binary with a mass ratio of q = 0.3105(± 0.0011), whose components nearly fill their Roche lobes. The low-amplitude pulsation occurs around the secondary eclipse, which may be attributed to the more massive component.
Three frequencies are preliminarily explored by the Fourier analysis. The pulsating frequency at f1 = 34.1 c d⁻¹ is a p-mode pulsation. The orbital period may be undergoing a secular decrease, superimposed with a cyclic variation. The period decreases at a rate of dP/dt = -7.34 ± 1.06 d yr⁻¹, which may be attributed to mass loss from the system due to the stellar wind. The cyclic oscillation, with Pmod = 31.0 ± 1.4 yr and A = 0.0054 ± 0.0010 d, may be caused by the light-time effect due to the assumed third body. As it evolves, the pulsating binary EU Peg will pass from the detached configuration to the semi-detached case. 9. Annual review in automatic programming CERN Document Server Goodman, Richard 2014-01-01 Annual Review in Automatic Programming, Volume 4 is a collection of papers that deals with the GIER ALGOL compiler, a parameterized compiler based on mechanical linguistics, and the JOVIAL language. A couple of papers describe a commercial use of stacks, an IBM system, and what an ideal computer program support system should be. One paper reviews the system of compilation, the development of a more advanced language, programming techniques, machine independence, and program transfer to other machines. Another paper describes the ALGOL 60 system for the GIER machine including running ALGOL pro 10. Program elaborated of combined regime for on-line and off-line problems International Nuclear Information System (INIS) Ivanova, A.B.; Ioramashvili, Eh.Sh.; Polyakov, B.F.; Razdol'skaya, L.A. 1979-01-01 A description is provided of the part of the operational system designed to organize batch processing of ALGOL tasks combined with the on-line system. A block scheme of the operational system functioning in the batch regime is presented. The ''Director'' program, the main part of the operational system, is responsible for the functioning of the ALGOL programs. It is started for the first time by the operator.
All the subsequent operation is automated. Problems are considered that are connected with the organization of the interrupts appearing in cases of failure and as a reaction to the end of operation of any ALGOL program or of some of its links. 11. Annual review in automatic programming CERN Document Server Goodman, Richard 2014-01-01 Annual Review in Automatic Programming focuses on the techniques of automatic programming used with digital computers. Topics covered range from the design of machine-independent programming languages to the use of recursive procedures in ALGOL 60. A multi-pass translation scheme for ALGOL 60 is described, along with some commercial source languages. The structure and use of the syntax-directed compiler is also considered. Comprised of 12 chapters, this volume begins with a discussion on the basic ideas involved in the description of a computing process as a program for a computer, expressed in 12. Language-Based Security for Malicious Mobile Code Science.gov (United States) 2007-09-30 Algol), and the Berkeley SDS-940 system employed object-code rewriting as part of its system profiler. More recently, the SPIN [5], Vino [52, 47], and... 13. Some methods of neutron spectra reconstruction according to results of measurements by multispherical spectrometer International Nuclear Information System (INIS) Semenov, V.P.; Trykov, L.A.; Tyufyakov, N.D. 1975-01-01 The MENAOT, MODSPECTRA, MODMESKO, and REGUS programs, designed to reconstruct neutron spectra from the results of measurements with the multi-sphere spectrometer, are described. These programs are written in the ALGOL language (the TA-2M translator) for the M-220 computer. Directed selection, MODSPECTRA, and regularization algorithms were used to develop these programs. 14.
Real-World Experimentation Comparing Time-Sharing and Batch Processing in Teaching Computer Science, Science.gov (United States) effectiveness of time-sharing and batch processing in teaching computer science. The experimental design was centered on direct, ’real world’ comparison...ALGOL). The experimental sample involved all introductory computer science courses with a total population of 415 cadets. 15. On the design of two small batch operating systems 1965 - 1970 NARCIS (Netherlands) F.E.J. Kruseman Aretz 2013-01-01 This paper describes the design considerations and decisions for two small batch operating systems, called MICRO and MILLI, for the Electrologica X8, a Dutch computer delivered from 1965 onwards. Their sole tasks were to run sequences of ALGOL 60 programs, thus transforming the X8 into 16. Thunks and the λ-calculus DEFF Research Database (Denmark) Hatcliff, John; Danvy, Olivier 1996-01-01 Thirty-five years ago, thunks were used to simulate call-by-name under call-by-value in Algol 60. Twenty years ago, Plotkin presented continuation-based simulations of call-by-name under call-by-value and vice versa in the λ-calculus. We connect all three of these classical simulations by factori... 17. Thunks and the λ-calculus DEFF Research Database (Denmark) Hatcliff, John; Danvy, Olivier 1997-01-01 Thirty-five years ago, thunks were used to simulate call-by-name under call-by-value in Algol 60. Twenty years ago, Plotkin presented continuation-based simulations of call-by-name under call-by-value and vice versa in the λ-calculus.
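The thunk simulation of call-by-name under call-by-value recalled in the two entries above can be sketched in any call-by-value language: each argument is wrapped in a zero-argument function and only evaluated (forced) where it is used. The following minimal Python illustration is my own sketch, not code from the cited papers:

```python
# Simulating call-by-name in a call-by-value language via thunks:
# arguments arrive as zero-argument functions and are forced on use.

def cbn_if(cond, then_thunk, else_thunk):
    # Only the selected branch is forced, as under call-by-name.
    return then_thunk() if cond else else_thunk()

def safe_ratio(x, y):
    # Under plain call-by-value, evaluating x / y eagerly would raise
    # ZeroDivisionError before any branch could be chosen; the thunk
    # delays the division until cbn_if decides it is needed.
    return cbn_if(y == 0, lambda: 0.0, lambda: x / y)

print(safe_ratio(10, 4))  # 2.5
print(safe_ratio(10, 0))  # 0.0
```

The same wrapping trick is what an Algol 60 compiler does mechanically for every call-by-name parameter.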
We connect all three of these classical simulations by factori... 18. On a programming language for graph algorithms Science.gov (United States) Rheinboldt, W. C.; Basili, V. R.; Mesztenyi, C. K. 1971-01-01 An algorithmic language, GRAAL, is presented for describing and implementing graph algorithms of the type primarily arising in applications. The language is based on a set algebraic model of graph theory which defines the graph structure in terms of morphisms between certain set algebraic structures over the node set and arc set. GRAAL is modular in the sense that the user specifies which of these mappings are available with any graph. This allows flexibility in the selection of the storage representation for different graph structures. In line with its set theoretic foundation, the language introduces sets as a basic data type and provides for the efficient execution of all set and graph operators. At present, GRAAL is defined as an extension of ALGOL 60 (revised) and its formal description is given as a supplement to the syntactic and semantic definition of ALGOL. Several typical graph algorithms are written in GRAAL to illustrate various features of the language and to show its applicability. 19. The information exchange between moduluses in the system of module programming of the computation complexes International Nuclear Information System (INIS) Zinin, A.I.; Kolesov, V.E.; Nevinitsa, A.I. 1975-01-01 The report describes a method of constructing complexes of computational programs for the M-220 computer using ALGOL-60.
The complex is organized on a modular principle and can include a substantial number of module programs. Information exchange between the separate modules is done by means of a special interpreting program, and the unit of information exchanged is a specially arranged data file. A small number of specially created procedure codes is used to address the interpreting program within the ALGOL-60 framework. The proposed method makes it possible to program the separate modules of the complex independently and to expand the complex if necessary. Individual modules or groups of modules, depending on how the general problem solved by the complex is segmented, may be of independent interest and can be used outside the complex as conventional programs. (author) 20. Symbolic Representation of Algorithmic Game Semantics OpenAIRE Dimovski, Aleksandar S. 2012-01-01 In this paper we revisit the regular-language representation of game semantics of second-order recursion-free Idealized Algol with infinite data types. By using symbolic values instead of concrete ones we generalize the standard notion of regular-language and automata representations to that of corresponding symbolic representations. In this way terms with infinite data types, such as integers, can be expressed as finite symbolic-automata although the standard automata interpretation is infin... 1. Program verification using symbolic game semantics OpenAIRE Dimovski, Aleksandar 2014-01-01 We introduce a new symbolic representation of algorithmic game semantics, and show how it can be applied for efficient verification of open (incomplete) programs. The focus is on an Algol-like programming language which contains the core ingredients of imperative and functional languages, especially on its second-order recursion-free fragment with infinite data types. We revisit the regular-language representation of game semantics of this language fragment. By using symbolic values instead of... 2.
Galerkin method for solving diffusion equations International Nuclear Information System (INIS) Tsapelkin, E.S. 1975-01-01 A programme for the solution of the three-dimensional two-group multizone neutron diffusion problem in (x, y, z)-geometry is described. The programme XYZ-5 gives the currents of both groups, the effective neutron multiplication coefficient and several integral properties of the reactor. The solution was found with the Galerkin method using specially constructed and chosen coordinate functions. The programme is written in ALGOL-60 and consists of 5 parts. Its text is given. 3. Annual review in automatic programming CERN Document Server Halpern, Mark I; Bolliet, Louis 2014-01-01 Computer Science and Technology and their Application is an eight-chapter book that first presents a tutorial on database organization. Subsequent chapters describe the general concepts of the Simula 67 programming language; incremental compilation and conversational interpretation; dynamic syntax; and ALGOL 68. Other chapters discuss a general-purpose conversational system for graphical programming and automatic theorem proving based on resolution. A survey of extensible programming languages is also given. 4. High-resolution observations of the radio emission from beta Persei International Nuclear Information System (INIS) Clark, B.G.; Kellermann, K.I.; Shaffer, D. 1975-01-01 The angular size of the radio emission from β Persei (Algol) was measured during a flare and found to be about 4 milli-arcsec equivalent Gaussian diameter, corresponding to linear dimensions of 0.1 AU and a mean brightness temperature of 4×10⁸ K. The observed change in the interferometer fringe visibility in a few hours corresponds to a mean apparent expansion velocity of 500-1000 km s⁻¹, or to a stationary, slightly elliptical source. 5. Advanced software techniques for data management systems.
Volume 3: Programming language characteristics and comparison reference Science.gov (United States) James, T. A.; Hall, B. C.; Newbold, P. M. 1972-01-01 A comparative evaluation was made of eight higher order languages of general interest in the aerospace field: PL/1; HAL; JOVIAL/J3; SPL/J6; CLASP; ALGOL 60; FORTRAN 4; and MAC360. A summary of the functional requirements for a language for general use in manned aerodynamic applications is presented. The evaluation supplies background material to be used in assessing the worth of each language for some particular application. 6. NZ Ser: the results of the analysis of the 25 years photometric activity Science.gov (United States) Barsunova, O.; Mel'nikov, S.; Grinin, V.; Katysheva, N.; Shugarov, S. 2014-03-01 We present the analysis of the long-term photometric variability of NZ Ser. The object shows both large-scale cyclic variability and low-amplitude Algol-like fadings typical of UX Ori stars. The variations of the stellar brightness are accompanied by variations of the B-V and V-R colors: when the brightness decreases, B-V decreases, while V-R increases. 7. Logic programming extensions of Horn clause logic Directory of Open Access Journals (Sweden) Ron Sigal 1988-11-01 Full Text Available Logic programming is now firmly established as an alternative programming paradigm, distinct and arguably superior to the still dominant imperative style of, for instance, the Algol family of languages. The concept of a logic programming language is not precisely defined, but it is generally understood to be characterized by: a declarative nature; a foundation in some well understood logical system, e.g., first order logic. 8. Symbolic Representation of Algorithmic Game Semantics Directory of Open Access Journals (Sweden) Aleksandar S. Dimovski 2012-10-01 Full Text Available In this paper we revisit the regular-language representation of game semantics of second-order recursion-free Idealized Algol with infinite data types.
By using symbolic values instead of concrete ones we generalize the standard notion of regular-language and automata representations to that of corresponding symbolic representations. In this way terms with infinite data types, such as integers, can be expressed as finite symbolic-automata although the standard automata interpretation is infinite. Moreover, significant reductions of the state space of game semantics models are obtained. This enables efficient verification of terms, which is illustrated with several examples. 9. The Risoe model for calculating the consequences of the release of radioactive material to the atmosphere International Nuclear Information System (INIS) Thykier-Nielsen, S. 1980-07-01 A brief description is given of the model used at Risoe for calculating the consequences of releases of radioactive material to the atmosphere. The model is based on the Gaussian plume model, and it provides possibilities for calculation of: doses to individuals, collective doses, contamination of the ground, probability distribution of doses, and the consequences of doses for given dose-risk relationships. The model is implemented as a computer program PLUCON2, written in ALGOL for the Burroughs B6700 computer at Risoe. A short description of PLUCON2 is given. (author) 10. Programming language structures CERN Document Server Organick, Elliott Irving; Plummer, Robert P 1978-01-01 Programming Language Structures deals with the structures of programming languages and introduces the reader to five important programming languages: Algol, Fortran, Lisp, Snobol, and Pascal. The fundamental similarities and differences among these languages are discussed. A unifying framework is constructed that can be used to study the structure of other languages, such as Cobol, PL/I, and APL.
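The Gaussian plume model on which the Risoe PLUCON2 code (entry 9 above) is based can be sketched in a few lines. This is a generic textbook form, not PLUCON2 itself; all parameter values in the example are illustrative placeholders:

```python
import math

def plume_concentration(q, u, sigma_y, sigma_z, y, z, h):
    # Ground-reflected Gaussian plume concentration.
    # q: release rate, u: wind speed, sigma_y/sigma_z: dispersion
    # parameters at the downwind distance of interest, y: crosswind
    # offset, z: height, h: effective release height.
    crosswind = math.exp(-y**2 / (2 * sigma_y**2))
    # Second exponential is the mirror-image term reflecting the
    # plume at the ground (z = 0).
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

# Concentration peaks on the plume centerline and is symmetric in y:
c_center = plume_concentration(1.0, 5.0, 50.0, 25.0, 0.0, 0.0, 30.0)
c_offset = plume_concentration(1.0, 5.0, 50.0, 25.0, 100.0, 0.0, 30.0)
print(c_center > c_offset)  # True
```

A production code such as PLUCON2 would additionally integrate this concentration over time and apply dose-conversion factors, which are omitted here.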
Several of the tools and methodologies needed to construct large programs are also considered. Comprised of 10 chapters, this book begins with a summary of the relevant concepts and principles about al 11. Evolution of close binaries under the assumption that they lose angular momentum by a magnetic stellar wind International Nuclear Information System (INIS) Kraicheva, Z.T.; Tutukov, A.V.; Yungel'son, L.R. 1986-01-01 A simple method is proposed for describing the evolution of semidetached close binaries whose secondary components have degenerate helium cores and lose orbital angular momentum by a magnetic stellar wind. The results of calculations are used to estimate the initial parameters of a series of low-mass (M1 + M2 ≤ 5 M⊙) systems of Algol type under the two assumptions of conservative and nonconservative evolution with respect to the orbital angular momentum. Only the assumption that the systems with secondary components possessing convective shells lose angular momentum makes it possible to reproduce their initial parameters without contradiction. 12. On-line system for investigation of atomic structure International Nuclear Information System (INIS) Amus'ya, M.Ya.; Chernysheva, L.V. 1983-01-01 A description of the on-line ATOM system is presented that enables one to investigate the structure of atomic electron shells and their interactions with different scattering particles (electrons, positrons, photons, mesons) with the use of computerized numerical solutions. The problem is stated along with a mathematical description of the atomic properties, including theoretical and numerical models for each investigated physical process. The ATOM system structure is considered. The Hartree-Fock method is used to determine the wave functions of the ground and excited atomic states. The programs are written in the ALGOL language. Various atomic characteristics could be calculated for the first time with an accuracy exceeding the experimental one. 13.
Modeling Gyrosynchrotron Coronae of Radio-Loud Stars Science.gov (United States) Peterson, William M. 2015-01-01 Fast gyrosynchrotron codes are used to model the emission in close, active binary star systems. Multiple magnetic field topologies, plasma densities, and scale heights for the emitting plasma are tested in an attempt to duplicate the emission characteristics detected using high-resolution VLBI imaging of the close active binaries UX Arietis and Algol. Also included are effects of occlusion by the companion star. It is found that a co-orbiting coronal loop oriented toward the companion star with its feet anchored on the poles of the active star is consistent with the observed emission from these two radio-loud stars. 14. Fast processing the film data file International Nuclear Information System (INIS) Abramov, B.M.; Avdeev, N.F.; Artemov, A.V. 1978-01-01 The problems of processing images obtained from the three-meter magnetic spectrometer on the new PSP-2 automatic device are considered, with a detailed description of the filtration program, which checks the correct operation of the communication line as well as the scanning parameters and the technical quality of the information. The filtration process can be subdivided into the following main stages: search for fiducial marks; binding of tracks to the fiducial marks; reconstruction, from the sparks, of track fragments in the chambers. The BESM-6 computer has been chosen for the filtration. The complex of filtration programs is shaped as a RAM file; the required version of the program is assembled by the PATCHY program. The subprograms performing the greater part of the calculations are written in the MADLEN autocode, the rest in FORTRAN and ALGOL. The filtration time for one image is 1.2-2 s. The BESM-6 computer processes up to 12 thousand images a day. 15. A translator and simulator for the Burroughs D machine Science.gov (United States) Roberts, J.
1972-01-01 The D Machine is described as a small user microprogrammable computer designed to be a versatile building block for such diverse functions as: disk file controllers, I/O controllers, and emulators. TRANSLANG is an ALGOL-like language, which allows D Machine users to write microprograms in an English-like format as opposed to creating binary bit pattern maps. The TRANSLANG translator parses TRANSLANG programs into D Machine microinstruction bit patterns which can be executed on the D Machine simulator. In addition to simulation and translation, the two programs also offer several debugging tools, such as: a full set of diagnostic error messages, register dumps, simulated memory dumps, traces on instructions and groups of instructions, and breakpoints. 16. ACCULIB, Program Library of Mathematical Routines International Nuclear Information System (INIS) Van Kats, J.M.; Rusman, C.J.; Van der Vorst, H.A. 1987-01-01 Description of program or function - ACCULIB is a collection of programs and subprograms for: - approximation and interpolation problems; - the evaluation of series of orthogonal polynomials; - evaluation of the complementary error function; - sorting problems and permutations; - differential equation problems; - linear algebra eigenvalue problems; - optimization problems; - fast Fourier transformations and Fourier series; - numerical quadrature of continuous functions; - linear systems and other linear algebra problems; - bit manipulation and character handling/transmission; - systems of nonlinear equations, in particular the determination of zeros of polynomials; - solution of over-complete systems; - plotting routines for contouring and surface representation; - statistical investigation of data. In addition, many utilities such as code conversion, microfiche production, disk file surveys, layout improvements for ALGOL60 and FORTRAN programs, and the conversion of IBM FORTRAN programs to CDC FORTRAN are included in the collection 17. 
An upper limit to the low-energy X-ray flux from Beta Persei Science.gov (United States) Kifune, T.; Wolff, R. S.; Weisskopf, M. C. 1975-01-01 A region of the sky including the binary system Algol (Beta Persei) was observed with a one-dimensional grazing-incidence X-ray telescope sensitive in the energy range from 0.5 to 4.0 keV. No statistically significant flux was detected above that attributable to the diffuse and cosmic-ray induced backgrounds. This result allows an upper limit of 2 × 10⁻¹¹ erg cm⁻² s⁻¹ keV⁻¹ at 1.6 keV to be placed on the X-ray emission from this source at the epoch of the observations. It is concluded that the absence of detectable low-energy X-rays is consistent with a thermal radio source, provided that no flares occurred at the time of observation. 18. Observations of zodiacal light of the isolated Herbig BF Ori Ae star International Nuclear Information System (INIS) Grinin, V.P.; Kiselev, N.N.; Minikulov, N.Kh.; AN Tadzhikskoj SSR, Dushanbe 1989-01-01 The isolated Herbig Ae star BF Ori belongs to the subclass of young irregular variables with non-periodic Algol-type brightness variations. In the course of polarization and photometric patrol observations carried out in 1987-88 in the Crimea and at Sanglok, high linear polarization was observed in deep minima. Analysis of the observations shows that the most probable source of the polarization is starlight scattered by the circumstellar disk-like dust envelope, the analogue of the solar zodiacal light. It is concluded that the bimodal distribution of position angles of linear polarization in L 1641 reflects a complex structure of the magnetic field in this giant cloud. 19. Dust around young stars. Observations of the polarization of UX Ori in deep minima International Nuclear Information System (INIS) Voshchinnikov, N.V.; Grinin, V.P.; Kiselev, N.N.; Minikulov, N.K.
1988-01-01 Photometric and polarimetric monitoring observations of UX Ori, begun in 1986 in the Crimea and Bolivia, have resulted in the observation of two deep brightness minima during which a growth of the linear polarization (to ≅7%) was observed, together with a tendency for the circular polarization to increase (up to ≅1%). Analysis of the observational data shows that the main source of the polarized radiation in the deep minima is the emission of the star scattered by grains of circumstellar dust. On the basis of Mie theory for a polydisperse graphite-silicate mixture of particles, the optical properties of ellipsoidal dust envelopes have been calculated and a model of the Algol-like minimum constructed. 20. A numerical library in Java for scientists and engineers CERN Document Server Lau, Hang T 2003-01-01 At last researchers have an inexpensive library of Java-based numeric procedures for use in scientific computation. The first and only book of its kind, A Numerical Library in Java for Scientists and Engineers is a translation into Java of the library NUMAL (NUMerical procedures in ALgol 60). This groundbreaking text presents procedural descriptions for linear algebra, ordinary and partial differential equations, optimization, parameter estimation, mathematical physics, and other tools that are indispensable to any dynamic research group. The book offers test programs that allow researchers to execute the examples provided; users are free to construct their own tests and apply the numeric procedures to them in order to observe a successful computation or simulate failure. The entry for each procedure is logically presented, with name, usage parameters, and Java code included. This handbook serves as a powerful research tool, enabling the performance of critical computations in Java. It stands as a cost-effi... 1. Testing theory of binary evolution with interacting binary stars Science.gov (United States) Ergma, E.; Sarna, M. J.
2002-01-01 Of particular interest to us is the study of mass loss and its influence on the evolution of binary systems. For this we use theoretical evolutionary models, which include: mass accretion, mass loss, nova explosions, super-efficient wind, and mixing processes. To test our theoretical predictions we proposed to determine the ¹²C/¹³C ratio via measurements of the ¹²CO and ¹³CO bands around 2.3 micron. The available observations (Exter et al. 2001, in preparation) show good agreement with the theoretical predictions (Sarna 1992) for Algol-type binaries. Our preliminary estimates of the isotopic ratios for pre-CVs and CVs (Catalan et al. 2000, Dhillon et al. 2001) agree with the theoretical predictions from the common-envelope binary evolution models by Sarna et al. (1995). For the SXTs we proposed (Ergma & Sarna 2001) a similar observational test, which has not been done yet. 2. Aspects of FORTRAN in large-scale programming International Nuclear Information System (INIS) Metcalf, M. 1983-01-01 In these two lectures I examine the following three questions: i) Why did high-energy physicists begin to use FORTRAN? ii) Why do high-energy physicists continue to use FORTRAN? iii) Will high-energy physicists always use FORTRAN? In order to find answers to these questions, it is necessary to look at the history of the language, its present position, and its likely future, and also to consider its manner of use, the topic of portability, and the competition from other languages. Here we think especially of early competition from ALGOL, the more recent spread in the use of PASCAL, and the appearance of a completely new and ambitious language, ADA. (orig.) 3. Flowcharting with D-charts Science.gov (United States) Meyer, D. 1985-01-01 A D-Chart is a style of flowchart using control symbols highly appropriate to modern structured programming languages.
The intent of a D-Chart is to provide a clear and concise one-for-one mapping of control symbols to high-level language constructs for purposes of design and documentation. The notation lends itself to both high-level and code-level algorithmic description. The various issues that may arise when representing, in D-Chart style, algorithms expressed in the more popular high-level languages are addressed. In particular, the peculiarities of mapping control constructs for Ada, PASCAL, FORTRAN 77, C, PL/I, Jovial J73, HAL/S, and Algol are discussed. 4. A Task-driven Grammar Refactoring Algorithm Directory of Open Access Journals (Sweden) Ivan Halupka 2012-01-01 Full Text Available This paper presents our proposal and the implementation of an algorithm for automated refactoring of context-free grammars. Rather than operating under some domain-specific task, in our approach refactoring is performed on the basis of a refactoring task defined by its user. The algorithm and the corresponding refactoring system are called mARTINICA. mARTINICA is able to refactor grammars of arbitrary size and structural complexity. However, the computation time needed to perform a refactoring task with the desired outcome is highly dependent on the size of the grammar. Until now, we have successfully performed refactoring tasks on small and medium-size grammars of Pascal-like languages and parts of the Algol-60 programming language grammar. This paper also briefly introduces the reader to the processes occurring in grammar refactoring and a method for describing the desired properties that a refactored grammar should fulfill, and discusses the overall significance of grammar refactoring. 5. The SINTRAN III NODAL system International Nuclear Information System (INIS) Skaali, T.B. 1980-10-01 NODAL is a high level programming language based on FOCAL and SNOBOL4, with some influence from BASIC.
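Grammar refactoring of the kind mARTINICA automates (entry 4 above) can be illustrated with one classic transformation: removing immediate left recursion, which rewrites A → Aα | β into A → βA', A' → αA' | ε. The sketch below is my own toy implementation, not mARTINICA's algorithm:

```python
def remove_immediate_left_recursion(nonterminal, productions):
    # Productions are lists of symbols; split A -> A alpha (left-recursive)
    # from the remaining A -> beta alternatives.
    recursive = [p[1:] for p in productions if p and p[0] == nonterminal]
    others = [p for p in productions if not p or p[0] != nonterminal]
    if not recursive:
        return {nonterminal: productions}  # nothing to do
    fresh = nonterminal + "'"
    return {
        # A  -> beta A'
        nonterminal: [beta + [fresh] for beta in others],
        # A' -> alpha A' | epsilon   (epsilon encoded as the empty list)
        fresh: [alpha + [fresh] for alpha in recursive] + [[]],
    }

# E -> E + T | T   becomes   E -> T E',  E' -> + T E' | epsilon
print(remove_immediate_left_recursion("E", [["E", "+", "T"], ["T"]]))
```

Task-driven refactoring as described in the entry would chain many such local rewrites until the user-specified properties of the grammar hold.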
The language was developed to operate on the computer network controlling the SPS accelerator at CERN. NODAL is an interpretive language designed for interactive use. This is the most important aspect of the language, and is reflected in its structure. The interactive facilities make it possible to write, debug and modify programs much faster than with compiler-based languages like FORTRAN and ALGOL. Apart from a few minor modifications, the basic part of the Oslo University NODAL system does not differ from the CERN version. However, the Oslo University implementation has been expanded with new functions which enable the user to execute many of the SINTRAN III monitor calls from the NODAL level. In particular, the most important RT monitor calls have been implemented in this way, a property which makes it possible to use NODAL as an RT program administrator. (JIW) 6. Program verification using symbolic game semantics DEFF Research Database (Denmark) Dimovski, Aleksandar 2014-01-01 We introduce a new symbolic representation of algorithmic game semantics, and show how it can be applied for efficient verification of open (incomplete) programs. The focus is on an Algol-like programming language which contains the core ingredients of imperative and functional languages...... of game semantics to that of corresponding symbolic representations. In this way programs with infinite data types, such as integers, can be expressed as finite-state symbolic automata, although the standard automata representation is infinite-state, i.e. the standard regular-language representation has...... infinite summations. Moreover, in this way significant reductions of the state space of game semantics models are obtained. This enables efficient verification of programs by our prototype tool based on symbolic game models, which is illustrated with several examples....
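The symbolic-automaton idea in the record above can be illustrated with a toy sketch. This is not the paper's formalism; all state names, guard names, and the dictionary layout here are invented for illustration only. The point is that transitions guarded by predicates over an unbounded integer input keep the machine finite, whereas enumerating concrete integer values would require infinitely many states.

```python
# Toy "symbolic automaton" sketch (illustrative only, not the paper's
# construction): transitions carry predicate guards over an unbounded
# integer input, so two target states cover what an explicit enumeration
# of integer values could only cover with infinitely many states.

SYMBOLIC_TRANSITIONS = {
    # (source state, guard label): (guard predicate, target state)
    ("q0", "neg"):    (lambda n: n < 0,  "q_neg"),
    ("q0", "nonneg"): (lambda n: n >= 0, "q_nonneg"),
}

def step(state, n, transitions=SYMBOLIC_TRANSITIONS):
    """Follow the (unique) enabled transition for integer input n."""
    for (src, _label), (guard, dst) in transitions.items():
        if src == state and guard(n):
            return dst
    raise ValueError(f"no enabled transition from {state!r} on input {n}")

# Any integer, however large, is handled by the same two target states:
assert step("q0", -(10**100)) == "q_neg"
assert step("q0", 42) == "q_nonneg"
```

The same two-state table handles every integer input, which is the intuition behind reducing an infinite-state standard automaton to a finite symbolic one.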
OpenAIRE Osvaldo Alves dos Santos 1985-01-01 Abstract: The Ada programming language is a general-purpose language offering the resources of classical languages such as Pascal and Algol, as well as facilities for building modular programs, systems programming, and real-time programming, which are normally found only in special-purpose languages. This text presents a proposal for a run-time system for Ada. Given the extent of the work, only the concurrent part and the excep... 8. The Abundances of Carbon and Nitrogen in the Photospheres of Active B Stars Science.gov (United States) Peters, Geraldine J. 2011-01-01 Contemporary models for the structure and evolution of rapidly-rotating OB stars predict a photospheric enrichment of nitrogen due to the mixing of the CNO-processed material from the star's core with the original surface material. The predicted N-enhancement increases as the star approaches its critical rotational velocity. Alternatively, the Algol primaries should have N-enriched photospheres if the material currently being transferred is from the mass loser's original core. To test these predictions, the C and N abundances in selected early Be stars and B-type mass gainers in Algol systems have been determined from spectroscopic data obtained with the IUE and FUSE spacecraft. The abundance analyses, carried through with the Hubeny/Lanz NLTE codes TLUSTY/SYNSPEC, were confronted with some challenges that are not encountered in abundance studies of sharp-lined, non-emission B stars, including the treatment of shallow, blended rotationally-broadened lines, the appropriate value for the microturbulence parameter, correction for disk emission and possible shell absorption, and latitudinal variation of Teff and log g. The FUV offers an advantage over the optical region as there is far less influence from disk emission and the N lines are intrinsically stronger.
Particularly useful are the features of C II 1324 Å, C III 1176 Å, 1247 Å, N I 1243 Å, and N III 1183,84 Å. Be stars with v sin i < 150 km s-1 were chosen to minimize the effect of latitudinal parameter variation. Given the errors, it appears that the N abundance in the Be stars is normal. Expected mixing is apparently suppressed, and this study lends no support for Be star models based upon critical rotation. However, expected N-enhancement and a low C abundance are inferred for the B-type primaries in some interacting binaries. GJP is grateful for support from NASA Grants NNX07AH56G (ADP) and NNX07AF89G (FUSE), and the USC WiSE program. 9. Surprising Rapid Collapse of Sirius B from Red Giant to White Dwarf Through Mass Transfer to Sirius A Science.gov (United States) Yousef, Shahinaz; Ali, Ola 2013-03-01 Sirius was observed in antiquity as a red star. In his famous astronomy textbook the Almagest, written 140 AD, Ptolemy described the star Sirius as fiery red. He curiously depicted it as one of six red-colored stars. The other five are class M and K stars, such as Arcturus and Betelgeuse. Apparent confirmations are found in ancient Greek and Roman sources, and Sirius was also reported red in Europe about 1400 years ago. Sirius must have changed to a white dwarf in the night of Ascension. The star chapter in the Quran started with "by the star as it collapsed" (1), "your companion has not gone astray nor been misled" (2), and in verse 49, which is the rotation period of the companion Sirius B around Sirius A, it is said "He is the Lord of Sirius" (49). If Sirius actually was red, what could have caused it to change into the brilliant bluish-white star we see today? What the naked eye perceives as a single star is actually a binary star system, consisting of a white main sequence star of spectral type A1V, termed Sirius A, and a faint white dwarf companion of spectral type DA2, termed Sirius B. The red color indicates that the star seen then was a red giant.
It appears that what was seen in antiquity was Sirius B, which was then a red giant and later collapsed to form a white dwarf. Since there is no evidence of a planetary nebula, the red Sirius paradox can be solved in terms of stellar evolution with mass transfer. Sirius B was the more massive star; it evolved to a red giant and filled its Roche lobe. Mass transfer to Sirius A occurred through the Lagrangian point. Sirius A then became more massive while Sirius B lost mass and shrank. Sirius B then collapsed abruptly into a white dwarf. In the case of Algol, Ptolemy observed it as a white star, but it was red at the time of Al Sufi. At present it is white. The rate of mass transfer from Sirius B to Sirius A, and from Algol B to A, is estimated from observational data of the colour change from red to bluish white to be 0 10. Orbital period changes in RW CrA, DX Vel and V0646 Cen Science.gov (United States) Volkov, I. M.; Chochol, D.; Grygar, J.; Mašek, M.; Juryšek, J. 2017-06-01 We aim to determine the absolute parameters of the components of the southern Algol-type binaries with deep eclipses RW CrA, DX Vel and V0646 Cen, and to interpret their orbital period changes. The data analysis is based on high-quality Walraven photoelectric photometry obtained in the 1960-70s, our recent CCD photometry, ASAS (Pojmanski, 2002), and Hipparcos (Perryman et al., 1997) photometry of the objects. Their light curves were analyzed using the PHOEBE program with fixed effective temperatures of the primary components, found from disentangling the Walraven (B-U) and (V-B) colour indices. We found the absolute parameters of the components of all three objects. All reliable observed times of minimum light were used to construct and analyze the Eclipse Time Variation (ETV) diagrams. We interpreted the ETV diagrams of the detached binary RW CrA and the semi-detached binary DX Vel by a LIght-Time Effect (LITE), and estimated the parameters of their orbits and the masses of their third bodies.
We suggest a long-term variation of the inclination angle of both eclipsing binaries, caused by a non-coplanar orientation of their third-body orbits. We interpreted the detected orbital period increase in the semi-detached binary V0646 Cen by mass transfer from the less massive to the more massive component at the rate dM/dt = 6.08×10⁻⁹ M⊙/yr. 11. Photometric solution and frequency analysis of the oEA system EW Boo International Nuclear Information System (INIS) Zhang, X. B.; Wang, K.; Luo, Y. P. 2015-01-01 We present the first photometric solution and frequency analysis of the neglected oscillating Algol-type (oEA) binary EW Boo. B- and V-band light curves of the star were obtained on 11 nights in 2014. Using the Wilson–Devinney code, the eclipsing light curves were synthesized and the first photometric solution was derived for the binary system. The results reveal that EW Boo could be a semi-detached system with the less-massive secondary component filling its Roche lobe. By subtracting the eclipsing light changes from the data, we obtained the intrinsic pulsating light curves of the hotter, massive primary component. Frequency analysis of the residual light shows multi-mode pulsation with the dominant period at 0.01909 days. A preliminary mode identification suggests that the star could be pulsating in non-radial (l = 1) modes. The long-term orbital period variation of the system was also investigated for the first time. An improved orbital period and new ephemerides of the eclipsing binary are given. The O−C analysis indicates a secular period increase at a rate of dP/dt = 2.9×10⁻⁷ days yr⁻¹, which could be interpreted as mass transfer from the cooler secondary to the primary component. 12. Kozai Cycles and Tidal Friction Energy Technology Data Exchange (ETDEWEB) L, K; P.P., E 2009-07-17 Several studies in the last three years indicate that close binaries, i.e. those with periods of ≲3 d, are very commonly found to have a third body in attendance.
We argue that this proves that the third body is necessary in order to make the inner period so short, and further argue that the only reasonable explanation is that the third body causes shrinkage of the inner period, from perhaps a week or more to the current short period, by means of the combination of Kozai cycles and tidal friction (KCTF). In addition, once KCTF has produced a rather close binary, magnetic braking, also combined with tidal friction (MBTF), can shrink the inner orbit further, to the point of forming a contact binary or even a merged single star. Some of the products of KCTF that have been suggested, either by others or by us, are W UMa binaries, Blue Stragglers, X-ray active BY Dra stars, and short-period Algols. We also argue that some components of wide binaries are actually merged remnants of former close inner pairs. This may include such objects as rapidly rotating dwarfs (AB Dor, BO Mic) and some (but not all) Be stars. 13. A search for tight hierarchical triple systems amongst the eclipsing binaries in the CoRoT fields Science.gov (United States) Hajdu, T.; Borkovits, T.; Forgács-Dajka, E.; Sztakovics, J.; Marschalkó, G.; Benkő, J. M.; Klagyivik, P.; Sallai, M. J. 2017-10-01 We report a comprehensive search for hierarchical triple stellar system candidates amongst the eclipsing binaries (EBs) observed by the CoRoT spacecraft. We calculate and check eclipse timing variation (ETV) diagrams for almost 1500 EBs in an automated manner. We identify five relatively short-period Algol systems for which our combined light-curve and complex ETV analyses (including both the light-travel time effect and short-term dynamical third-body perturbations) resulted in consistent third-body solutions. The computed periods of the outer bodies are between 82 and 272 d (with an alternative solution of 831 d for one of the targets). We find that the inner and outer orbits are nearly coplanar in all but one case.
The dynamical masses of the outer subsystems determined from the ETV analyses are consistent with both the results of our light-curve analyses and the spectroscopic information available in the literature. One of our candidate systems exhibits outer eclipsing events as well, the locations of which are in good agreement with the ETV solution. We also report another certain triply eclipsing triple system that, however, lacks a reliable ETV solution due to the very short time range of the data, and four new blended systems (composite light curves of two EBs each), where we cannot decide whether the components are gravitationally bound or not. Amongst these blended systems, we identify the longest-period and highest-eccentricity EB in the entire CoRoT sample. 14. An Orbital Stability Study of the Proposed Companions of SW Lyncis Directory of Open Access Journals (Sweden) T. C. Hinse 2014-09-01 Full Text Available We have investigated the dynamical stability of the proposed companions orbiting the Algol-type short-period eclipsing binary SW Lyncis (Kim et al. 2010). The two candidate companions are of stellar to substellar nature, and were inferred from timing measurements of the system's primary and secondary eclipses. We applied well-tested numerical techniques to accurately integrate the orbits of the two companions and to test for chaotic dynamical behavior. We carried out the stability analysis within a systematic parameter survey, varying both the geometries and orientations of the orbits of the companions, as well as their masses. In all our numerical integrations we found that the proposed SW Lyn multi-body system is highly unstable on time-scales on the order of 1000 years. Our results cast doubt on the interpretation that the timing variations are caused by two companions.
This work demonstrates that a straightforward dynamical analysis can help to test whether a best-fit companion-based model is a physically viable explanation for measured eclipse timing variations. We conclude that dynamical considerations reveal that the proposed SW Lyncis multi-body system most likely does not exist, or that the companions have significantly different orbital properties from those conjectured in Kim et al. (2010). 15. LUT Reveals a New Mass-transferring Semi-detached Binary Science.gov (United States) Qian, S.-B.; Zhou, X.; Zhu, L.-Y.; Zejda, M.; Soonthornthum, B.; Zhao, E.-G.; Zhang, J.; Zhang, B.; Liao, W.-P. 2015-12-01 GQ Dra is a short-period eclipsing binary in a double stellar system that was discovered by Hipparcos. Complete light curves in the UV band were obtained with the Lunar-based Ultraviolet Telescope in 2014 November and December. Photometric solutions are determined using the W-D (Wilson and Devinney) method. It is discovered that GQ Dra is a classical Algol-type semi-detached binary where the secondary component is filling the critical Roche lobe. An analysis of all available times of minimum light suggests that the orbital period is increasing continuously at a rate of dP/dt = +3.48(±0.23)×10⁻⁷ days yr⁻¹. This could be explained by mass transfer from the secondary to the primary, which is in agreement with the semi-detached configuration with a lobe-filling secondary. By assuming conservation of mass and angular momentum, the mass transfer rate is estimated as dm/dt = 9.57(±0.63)×10⁻⁸ M⊙ yr⁻¹. All of these results reveal that GQ Dra is a mass-transferring semi-detached binary in a double system that formed from an initially detached binary star. After the massive primary evolves to fill the critical Roche lobe, the mass transfer will be reversed and the binary will evolve into a contact configuration with two sub-giant or giant component stars. 16.
UX Ori Variables in the Cluster IC 348 Science.gov (United States) Barsunova, O. Yu.; Grinin, V. P.; Sergeev, S. G.; Semenov, A. O.; Shugarov, S. Yu. 2015-06-01 Results are presented from many years of photometric (VR_C I_C) observations of three variable T Tauri type stars in the cluster IC 348: V712 Per, V719 Per, and V909 Per. All three stars have photometric activity characteristic of UX Ori stars. The activity of V719 Per has increased significantly over the last 10 years: the amplitude of its Algol-like minima has increased by roughly a factor of 4 and has reached three stellar magnitudes in the I band. Periodograms of the light curves do not confirm the periods found previously by other authors on the basis of shorter series of observations. The slope of the color tracks in "color-magnitude" diagrams is used to determine the reddening law for these stars owing to selective absorption by circumstellar dust. Modelling of these parameters using Mie theory shows that the maximum size a_max of the dust particles in the protoplanetary disks of these stars is 1.5-2 times greater than in the interstellar medium. In V712 Per and V909 Per, the bulk of the mass of the dust particles is concentrated near a_max, while in V719 Per the average mass of the dust particles is determined by the minimum size of the particles. It should be emphasized that these conclusions rely on an analysis of the optical variability of these stars. 17. Data structures for pattern recognition algorithms: a case study International Nuclear Information System (INIS) Zahn, C.T. Jr. 1975-03-01 Experiences gained while programming several pattern recognition algorithms in the languages ALGOL, FORTRAN, PL/1, and PASCAL are described.
The algorithms discussed are for boundary encodings of two-dimensional binary pictures, calculating and exploring the minimum spanning tree for a set of points, recognizing dotted curves from a set of planar points, and performing template matching in the presence of severe noise distortions. The lesson seems to be that pattern recognition algorithms require a range of data structuring capabilities for their implementation, in particular arrays, graphs, and lists. The languages PL/1 and PASCAL have facilities to accommodate graphs and lists, but there are important differences for the programmer. The ease with which the template matching program was written, debugged, and modified during a 3-week period using PASCAL suggests that this small but powerful language should not be overlooked by those researchers who need a quick, reliable, and efficient implementation of a pattern recognition algorithm requiring graphs, lists, and arrays. 5 figures 18. Preparation of small group constants for calculation of shielding International Nuclear Information System (INIS) Khokhlov, V.F.; Shejno, I.N.; Tkachev, V.D. 1979-01-01 The effect on the shielding calculation error of neglecting the angular and spatial neutron-flux dependences when determining the small-group constants from the many-group ones is studied. An economical method allowing for these dependences is proposed. The spatial dependence is replaced by average values over zones singled out within regions of the same composition; the angular cross-section dependence is replaced by average values over half-ranges of the angular variable. To solve the transfer equation, the ALGOL-ROSA-M program, using the method of characteristic interpolation and the trial-run method, was developed. The program correctly accounts for unscattered and singly scattered radiation.
Neutron-transmission calculations (10.5 MeV–0.01 eV) in the 21-group approximation are compared with 3-group calculations for water (layer thickness 30 cm) and 5-group calculations for a heterogeneous shield of alternating stainless-steel layers (3 layers, each 16 cm thick) and graphite layers (2 layers, each 20 cm thick). The analysis shows that the proposed method makes it possible to obtain rather accurate results when preparing the small-group cross sections, considerably decreasing the number of groups (from 21 to 3-5) and saving machine time 19. A Monster CME Obscuring a Demon Star Flare Science.gov (United States) Moschou, Sofia-Paraskevi; Drake, Jeremy J.; Cohen, Ofer; Alvarado-Gomez, Julian D.; Garraffo, Cecilia 2017-12-01 We explore the scenario of a coronal mass ejection (CME) being the cause of the observed continuous X-ray absorption of the 1997 August 30 superflare on the eclipsing binary Algol (the Demon Star). The temporal decay of the absorption is consistent with absorption by a CME undergoing self-similar evolution with uniform expansion velocity. We investigate the kinematic and energetic properties of the CME using the ice cream cone model for its three-dimensional structure in combination with the observed profile of the hydrogen column density decline with time. Different physically justified length scales were used that allowed us to estimate lower and upper limits of the possible CME characteristics. Further consideration of the maximum available magnetic energy in starspots leads us to quantify its mass as likely lying in the range 2×10²¹–2×10²² g and its kinetic energy in the range 7×10³⁵–3×10³⁸ erg. The results are in reasonable agreement with extrapolated relations between flare X-ray fluence and CME mass and kinetic energy derived for solar CMEs. 20.
γ DORADUS PULSATIONS IN THE ECLIPSING BINARY STAR KIC 6048106 Energy Technology Data Exchange (ETDEWEB) Lee, Jae Woo, E-mail: [email protected] [Korea Astronomy and Space Science Institute, Daejeon 34113 (Korea, Republic of)] 2016-12-20 We present the Kepler photometry of KIC 6048106, which exhibits the O'Connell effect and multiperiodic pulsations. Including a starspot on either of the components, light-curve synthesis indicates that this system is a semi-detached Algol with a mass ratio of 0.211, an orbital inclination of 73.9°, and a large temperature difference of 2534 K. To examine in detail both the spot variations and the pulsations, we separately analyzed the Kepler time-series data at intervals of an orbital period in an iterative way. The results reveal that the variable asymmetries of the light maxima can be interpreted as changes with time of a magnetic cool spot on the secondary component. Multiple frequency analyses were performed on the outside-eclipse light residuals after removal of the binarity effects from the observed Kepler data. We detected 30 frequencies with signal-to-noise amplitude ratios larger than 4.0, of which six (f₂–f₆ and f₁₀) can be identified as high-order (17 ≤ n ≤ 25), low-degree (ℓ = 2) gravity-mode pulsations that were stable during the 200-day observing run. In contrast, the other frequencies may be harmonic and combination terms. For the six frequencies, the pulsation periods and pulsation constants are in the ranges of 0.352–0.506 days and 0.232–0.333 days, respectively. These values and the position on the Hertzsprung–Russell diagram demonstrate that the primary star is a γ Dor variable. The evolutionary status and the pulsation nature of KIC 6048106 are discussed. 1.
KIC 8262223: A Post-mass Transfer Eclipsing Binary Consisting of a Delta Scuti Pulsator and a Helium White Dwarf Precursor Science.gov (United States) Guo, Zhao; Gies, Douglas R.; Matson, Rachel A.; García Hernández, Antonio; Han, Zhanwen; Chen, Xuefei 2017-03-01 KIC 8262223 is an eclipsing binary with a short orbital period (P = 1.61 days). The Kepler light curves are of Algol type and display deep and partial eclipses, ellipsoidal variations, and pulsations of δ Scuti type. We analyzed the Kepler photometric data, complemented by phase-resolved spectra from the R-C Spectrograph on the 4-meter Mayall telescope at Kitt Peak National Observatory, and determined the fundamental parameters of this system. The low-mass and oversized secondary (M₂ = 0.20 M⊙, R₂ = 1.31 R⊙) is the remnant of the donor star that transferred most of its mass to the gainer, now the primary star. The current primary star is thus not a normal δ Scuti star but the result of mass accretion from a lower-mass progenitor. We discuss the possible evolutionary history and demonstrate with the MESA evolution code that this system and several other systems discussed in the prior literature can be understood as the result of non-conservative binary evolution for the formation of EL CVn-type binaries. The pulsations of the primary star can be explained as radial and non-radial pressure modes. The equilibrium models from single-star evolutionary tracks can match the observed mass and radius (M₁ = 1.94 M⊙, R₁ = 1.67 R⊙), but the predicted unstable modes associated with these models differ somewhat from those observed. We discuss the need for a better theoretical understanding of such post-mass transfer δ Scuti pulsators. 2. Absolute Properties of the Pulsating Post-mass Transfer Eclipsing Binary OO Draconis Science.gov (United States) Lee, Jae Woo; Hong, Kyeongsoo; Koo, Jae-Rim; Park, Jang-Ho 2018-01-01 OO Dra is a short-period Algol system with a δ Sct-like pulsator.
We obtained time-series spectra between 2016 February and May to derive the fundamental parameters of the binary star and to study its evolutionary scenario. The radial velocity (RV) curves for both components were presented, and the effective temperature of the hotter and more massive primary was determined to be T_eff,1 = 8260 ± 210 K by comparing the disentangled spectrum with Kurucz models. Our RV measurements were solved together with the BV light curves of Zhang et al. using the Wilson-Devinney binary code. The absolute dimensions of each component are determined as follows: M₁ = 2.03 ± 0.06 M⊙, M₂ = 0.19 ± 0.01 M⊙, R₁ = 2.08 ± 0.03 R⊙, R₂ = 1.20 ± 0.02 R⊙, L₁ = 18 ± 2 L⊙, and L₂ = 2.0 ± 0.2 L⊙. Comparison with stellar evolution models indicates that the primary star resides inside the δ Sct instability strip on the main sequence, while the cool secondary component is noticeably overluminous and oversized. We demonstrated that OO Dra is an oscillating post-mass transfer R CMa-type binary; the originally more massive star became the low-mass secondary component through mass loss caused by stellar wind and mass transfer, and the gainer became the pulsating primary as the result of mass accretion. The R CMa stars, such as OO Dra, are thought to have formed by non-conservative binary evolution and ultimately to evolve into EL CVn stars. 3. Epsilon Aurigae - Two-year Totality Transpiring Science.gov (United States) Kloppenborg, Brian K.; Stencel, R. E.; Hopkins, J. L. 2010-01-01 The 27-year-period eclipsing binary epsilon Aurigae exhibits the hallmarks of a classical Algol system, except that the companion to the F supergiant primary star is surprisingly under-luminous for its mass. Eclipse ingress appears to have begun shortly after the predicted time in August 2009, near JD 2,455,065.
At the University of Denver, we have focused on near-infrared interferometry, spectroscopy, and photometry with the superior instrumentation available today, compared to that of the 1983 eclipse. Previously obtained interferometry indicates that the source is asymmetric (Stencel et al. 2009, ApJL) and initial CHARA+MIRC closure-phase imaging shows hints of resolved structures. In parallel, we have pursued SpeX near-IR spectra at the NASA IRTF in order to confirm whether CO molecules seen only during the second half of the 1983 eclipse will reappear on schedule. Additionally, we have obtained J- and H-band photometry using an Optec SSP-4 photometer with a newly written control and analysis suite. Our goal is to refine daytime photometric methods in order to provide coverage of the anticipated mid-eclipse brightening during summer 2010, from our high-altitude observatory atop Mt. Evans, Colorado. Also, many parallel observations are ongoing as part of the epsilon Aurigae international campaign (http://www.hposoft.com/Campaign09.html). In this report, we describe the progress of the eclipse and ongoing observations. We invite interested parties to get involved with the campaign for coverage of the 2009-2011 eclipse via the campaign websites: http://www.hposoft.com/Campaign09.html - and - http://www.du.edu/~rstencel/epsaur.htm - and - http://www.citizensky.org . This research is supported in part by the bequest of William Herschel Womble to the University of Denver. We are grateful to the participants in the observing campaign and invite interested parties to join us in monitoring the star for the balance of the eclipse. 4. Photometric activity of the Herbig Be star MWC 297 over 25 years Science.gov (United States) Barsunova, O. Yu.; Mel'nikov, S. Yu.; Grinin, V. P.; Katysheva, N. A.; Shugarov, S. Yu. 2013-02-01 The photometric behavior of the hot, young Herbig Be star MWC 297 on various time scales is studied using published data, as well as new observations.
The series of photometric observations covers about 25 years. Over this time, the star showed low-amplitude (ΔV ≈ 0.3 mag) irregular variability modulated by large-scale cyclic variability with an amplitude close to 0.2 mag and a period (or quasi-period) of 5.4±0.1 yr. A detailed seasonal analysis of the data shows that the light curve of MWC 297 displays two types of photometric features: low-amplitude Algol-like fadings with an amplitude close to 0.2 mag, and low-amplitude flares resembling the flares of UV Ceti stars but more powerful and of longer duration. The variations of the stellar brightness are accompanied by variations of the B−V and V−R colors: when the brightness decreases, B−V decreases, while V−R increases (the star reddens). The reddening law is close to the standard interstellar reddening law. Although the character of the brightness variability of MWC 297 resembles the photometric activity of UX Ori type stars, which is due to variations of their circumstellar extinction, its scale is very far from the scales observed for UX Ori stars. It is difficult to reconcile this level of photometric activity with the idea that MWC 297 is observed through its own gas-dust disk viewed almost edge-on, as has been suggested in several studies. 5. Highlights of Odessa Branch of AN in 2017 Science.gov (United States) Andronov, I. L. 2017-12-01 An annual report with a list of publications. Our group works on variable-star research within the international campaign "Inter-Longitude Astronomy" (ILA), based on temporary working groups in collaboration with Poland, Slovakia, Korea, the USA and other countries. A recent self-review of highlights was published in 2017. Our group continues the scientific school of Prof. Vladymir P. Tsesevich (1907 - 1983). Another project we participate in is "AstroInformatics".
The unprecedented photo-polarimetric monitoring of a group of AM Her-type magnetic cataclysmic variable stars has been carried out since 1989 (photometry in our group, since 1978). Photometric monitoring of the intermediate polars (MU Cam, V1343 Her, V2306 Cyg et al.) was continued to study the rotational evolution of magnetic white dwarfs. A super-low luminosity state was discovered in the outbursting intermediate polar = magnetic dwarf nova DO Dra. The previously typical low state was sometimes interrupted by outbursts, which are narrower than usual dwarf nova outbursts. Once, TPO ("Transient Periodic Oscillations") were detected. The orbital and quasi-periodic variability was recently studied. Such super-low states are characteristic of nova-like variables (e.g. MV Lyr, TT Ari) or intermediate polars, but unusual for dwarf novae. The electronic "Catalogue of Characteristics and Atlas of the Light Curves of Newly-Discovered Eclipsing Binary Stars" was compiled and is being prepared for publication. The software NAV ("New Algol Variable"), with specially developed algorithms, was used. It makes it possible to determine the beginning and end of the eclipses even in EB- and EW-type stars, whereas the current classification (GCVS, VSX) assumes that the beginning and end of eclipses can be determined only in EA-type objects. Further improvements of the NAV algorithm were comparatively studied. The "Wall-Supported Polynomial" (WSP) algorithms were implemented in the software MAVKA for statistically optimal modeling of flat eclipses 6. Exploring the Variable Sky with the Sloan Digital Sky Survey Energy Technology Data Exchange (ETDEWEB) Sesar, Branimir; Ivezic, Zeljko; Lupton, Robert; Juric, Mario; Gunn, James; Knapp, Gillian; De Lee, Nathan; Smith, J.
Allyn; Miknaitis, Gajus; Lin, Huan; Tucker, Douglas; Doi, Mamoru; Tanaka, Masayuki; Fukugita, Masataka; Holtzman, Jon; Kent, Steve; Yanny, Brian; Schlegel, David; Finkbeiner, Douglas; Padmanabhan, Nikhil; Rockosi, Constance; Bond, Nicholas; Lee, Brian; Stoughton, Chris; Jester, Sebastian; Harris, Hugh; Harding, Paul; Brinkmann, Jon; Schneider, Donald; York, Donald; Richmond, Michael; Vanden Berk, Daniel 2007-04-01 We quantify the variability of faint unresolved optical sources using a catalog based on multiple SDSS imaging observations. The catalog covers SDSS Stripe 82, which lies along the celestial equator in the Southern Galactic Hemisphere (22h 24m < αJ2000 < 04h 08m, −1.27° < δJ2000 < +1.27°, ≈290 deg²), and contains 58 million photometric observations in the SDSS ugriz system for 1.4 million unresolved sources that were observed at least 4 times in each of the gri bands (with a median of 10 observations obtained over ≈5 years). In each photometric bandpass we compute various low-order lightcurve statistics such as root-mean-square scatter (rms), χ² per degree of freedom, skewness, and minimum and maximum magnitude, and use them to select and study variable sources. We find that 2% of unresolved optical sources brighter than g = 20.5 appear variable at the 0.05 mag level (rms) simultaneously in the g and r bands. The majority (2/3) of these variable sources are low-redshift (< 2) quasars, although they represent only 2% of all sources in the adopted flux-limited sample. We find that at least 90% of quasars are variable at the 0.03 mag level (rms) and confirm that variability is as good a method for finding low-redshift quasars as is the UV excess color selection (at high Galactic latitudes). We analyze the distribution of lightcurve skewness for quasars and find that it is centered on zero.
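The low-order lightcurve statistics listed in this abstract (rms scatter, χ² per degree of freedom, skewness) are easy to sketch in code. The following is an illustrative toy, not the SDSS pipeline; the magnitudes and photometric errors below are invented:

```python
import math

def lightcurve_stats(mags, errs):
    """Low-order variability statistics for one band of a lightcurve:
    rms scatter, chi^2 per degree of freedom against a constant-brightness
    model, and sample skewness (in magnitude space)."""
    n = len(mags)
    mean = sum(mags) / n
    devs = [m - mean for m in mags]
    rms = math.sqrt(sum(d * d for d in devs) / n)
    # chi^2 of the mean-magnitude (constant) model, per degree of freedom
    chi2_dof = sum((d / e) ** 2 for d, e in zip(devs, errs)) / (n - 1)
    # third standardized moment; fading events give positive skew in mags
    m3 = sum(d ** 3 for d in devs) / n
    skew = m3 / rms ** 3 if rms > 0 else 0.0
    return rms, chi2_dof, skew

# Toy lightcurve: a constant star plus one Algol-like fading event
mags = [20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.5, 20.4, 20.0]
errs = [0.02] * len(mags)
rms, chi2, skew = lightcurve_stats(mags, errs)
print(rms, chi2, skew)  # rms above the 0.05 mag threshold, positive skew
```

A source like this one would pass the paper's 0.05 mag (rms) variability cut, and its positive skewness is the signature of occasional fading rather than brightening.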
We find that about 1/4 of the variable stars are RR Lyrae stars, and that only 0.5% of stars from the main stellar locus are variable at the 0.05 mag level. The distribution of lightcurve skewness in the g−r vs. u−g color-color diagram on the main stellar locus is found to be bimodal (with one mode consistent with Algol-like behavior). Using over six hundred RR Lyrae stars, we demonstrate rich halo substructure out to distances of 100 kpc. We extrapolate these results to expected performance by the Large Synoptic Survey 7. De AERA. Gedroomde machines en de praktijk van het rekenwerk aan het Mathematisch Centrum te Amsterdam [The AERA. Dream machines and the practice of computational work at the Mathematical Center in Amsterdam] Directory of Open Access Journals (Sweden) Gerard Alberts 2008-06-01 system'. Historically, this is where software begins. Dekker's matrix complex, Dijkstra's interrupt system, Dijkstra and Zonneveld's ALGOL compiler - which for housekeeping contained 'the complex' - were actual examples of such super programs. In 1960 this compiler gave the Mathematical Center a leading edge in the early development of software. 8. The First Photometric Study of NSVS 1461538: A New W-subtype Contact Binary with a Low Mass Ratio and Moderate Fill-out Factor Directory of Open Access Journals (Sweden) Hyoun-Woo Kim 2016-09-01 New multiband BVRI light curves of NSVS 1461538 were obtained as a byproduct during the photometric observations of our program star PV Cas for three years from 2011 to 2013. The light curves indicate characteristics of a typical W-subtype W UMa eclipsing system, displaying a flat bottom at primary eclipse and the O'Connell effect, rather than those of an Algol/β Lyrae eclipsing variable as classified by the Northern Sky Variability Survey (NSVS). A total of 35 times of minimum light were determined from our observations (20 timings) and the SuperWASP measurements (15 timings).
A period study with all the timings shows that the orbital period may vary in a sinusoidal manner with a period of about 5.6 yr and a small semiamplitude of about 0.008 day. The cyclical period variation can be interpreted as a light-time effect due to a tertiary body with a minimum mass of 0.71 M⊙. Simultaneous analysis of the multiband light curves using the 2003 version of the Wilson-Devinney binary model shows that NSVS 1461538 is a genuine W-subtype W UMa contact binary with the hotter primary component being less massive: the system shows a low mass ratio of q (mc/mh) = 3.51, a high orbital inclination of 88.7°, a moderate fill-out factor of 30%, and a temperature difference of ΔT = 412 K. The O'Connell effect can be equally well explained by cool spots on either the hotter primary star or the cool secondary star. A small third light, corresponding to about 5% and 2% of the total systemic light in the B and V bandpasses, respectively, supports the third-body hypothesis proposed by the period study. Preliminary absolute dimensions of the system were derived and used to examine its evolutionary status relative to other W UMa binaries in the mass-radius and mass-luminosity diagrams. A possible evolution scenario of the system is also discussed in the context of the mass vs. mass ratio diagram. 9. Light curve solutions and study of roles of magnetic fields in period variations of the UV Leo system Directory of Open Access Journals (Sweden) D Manzoori 2009-12-01 The solutions of photometric BV light curves for the Algol-like system UV Leo were obtained using the Wilson-Devinney code. The physical and orbital parameters along with absolute dimensions of the system were determined. It has been found that to best fit the V light curve of the system, assumptions of three dark spots were necessary: two on the secondary and one on the primary.
The absolute visual magnitudes (Mv) of the individual components, i.e., primary and secondary, were estimated to be 4.41 and 4.43, respectively, through the color curve analysis. The period analysis of the system, presented elsewhere, indicated a cyclic period change of 12 yr duration, which was attributed to a magnetic activity cycle as the main cause of period variation in the system, through the Applegate mechanism. To verify the Applegate model I performed calculations of some related parameters borrowed from Applegate and Kalimeris. Values of all the calculated parameters were in accordance with those obtained for similar systems by Applegate. The differential magnitudes ΔB and ΔV, along with corresponding values of the Δ(B−V) color index, are presented. The cyclic variations in brightness are quite clear. There are three predictions of Applegate's theory concerning effects of cyclic magnetic changes on the period variations, which can be checked through the observations; these are as follows: (I) The long term variations in mean brightness (outside of eclipses) and the cyclic changes of orbital period vary with the same period. (II) The active star gets bluer as it gets brighter, and/or the brightness and color variations are in phase. (III) Changes in luminosity due to changes in quadrupole moment should be of the order 0.1 mag. All the above mentioned predictions of Applegate's theory are verified. These results, combined with the cyclic character of P(E) presented elsewhere and also the consistency of the parameters obtained in this paper, led me to conclude that one of the main causes of period 10. Numerical Recipes in C++: The Art of Scientific Computing (2nd edn). Numerical Recipes Example Book (C++) (2nd edn). Numerical Recipes Multi-Language Code CD ROM with LINUX or UNIX Single-Screen License Revised Version International Nuclear Information System (INIS) Borcherds, P 2003-01-01
I have not found any mention of parallel computing in NR(C++). Fortran has quite a lot going for it. As someone who has used it in most of its versions from Fortran II, I have seen it develop and leave behind other languages promoted by various enthusiasts: who now uses Algol or Pascal? I think it unlikely that C++ will disappear: it was devised as a systems language, and can also be used for other purposes such as scientific computing. It is possible that Fortran will disappear, but Fortran has the strengths that it can develop, that there are extensive Fortran subroutine libraries, and that it has been developed for parallel computing. To argue with programmers as to which is the best language to use is sterile. If you wish to use C++, then buy NR(C++), but you should also look at volume 2 of NR(F). If you are a Fortran programmer, then make sure you have NR(F), volumes 1 and 2. But whichever language you use, make sure you have one version or the other, and the CD ROM. The Example Book provides listings of complete programs to run nearly all the routines in NR, frequently based on cases where an analytical solution is available. It is helpful when developing a new program incorporating an unfamiliar routine to see that routine actually working, and this is what the programs in the Example Book achieve. I started teaching computational physics before Numerical Recipes was published. If I were starting again, I would make heavy use of both The Art of Scientific Computing and of the Example Book. Every computational physics teaching laboratory should have both volumes: the programs in the Example Book are included on the CD ROM, but the extra commentary in the book itself is of considerable value. (book review: Press, William H; Teukolsky, Saul A; Vetterling, William T; Flannery, Brian P - ISBN 0-521-75034-4; ISBN 0-521-75034-2; ISBN 0-521-75036-9)
https://imstat.org/2017/05/15/medallion-lecture-preview-takashi-kumagai/
Takashi Kumagai studied at Kyoto University, where he defended his PhD thesis in 1994 (supervisor: Shinzo Watanabe). After working at Osaka University and Nagoya University, he went back to Kyoto University in 1998. He is now a professor at the Research Institute for Mathematical Sciences (RIMS), Kyoto University. His research areas are anomalous diffusions on disordered media such as fractals and random media, and potential theory for jump processes on metric measure spaces. He gave the St. Flour 2010 lectures, was a plenary speaker at SPA 2010 in Osaka, and an invited speaker at the International Congress of Mathematicians in Seoul in 2014. Takashi Kumagai's Medallion Lecture will be given at SPA 2017 in Moscow, July 2017. ### Heat kernel estimates and parabolic Harnack inequalities for symmetric jump processes There has been a long history of research on heat kernel estimates and Harnack inequalities for diffusion processes. Harnack inequalities and Hölder regularities for harmonic/caloric functions are important components of the celebrated De Giorgi-Nash-Moser theory in harmonic analysis and partial differential equations. In the early 1990s, equivalent characterizations of parabolic Harnack inequalities (that is, Harnack inequalities for caloric functions) that are stable under perturbations were obtained by Grigor'yan and Saloff-Coste independently, and later extended in various directions. Such a stability theory has been developed only recently for symmetric jump processes, despite its fundamental importance in analysis. In 2002, Bass and Levin obtained heat kernel estimates for Markov chains with long-range jumps on the d-dimensional lattice. Motivated by that work, Chen and Kumagai (2003) obtained two-sided heat kernel estimates and parabolic Harnack inequalities for symmetric stable-like processes, which are perturbations of symmetric stable processes, on Ahlfors regular subsets of Euclidean spaces.
The results include an equivalent, stable condition for the two-sided stable-type heat kernel estimates. There has been a vast amount of work on the potential theory of symmetric stable-like processes since around that time. Definite answers are given in the recent trilogy by Chen, Kumagai and Wang on the stability of heat kernel estimates and parabolic Harnack inequalities for symmetric jump processes on general metric measure spaces. While both of them are stable under perturbations, unlike in the diffusion case, heat kernel estimates are not equivalent to parabolic Harnack inequalities in the jump case. In the talk, I will summarize developments of the De Giorgi-Nash-Moser theory for symmetric jump processes and discuss its applications. The applications include discrete approximations of jump processes and random media with long-range jumps. The talk is based on joint works with my collaborators: M.T. Barlow, R.F. Bass, Z.-Q. Chen, A. Grigor'yan, J. Hu, P. Kim, M. Kassmann and J. Wang.
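The qualitative difference between diffusions and processes with long-range jumps that the lecture concerns can be illustrated with a toy simulation (my own sketch, not from the abstract): a walk whose jump sizes have a Pareto tail with index α spreads like t^(1/α), faster than the diffusive t^(1/2) of a nearest-neighbour walk.

```python
import random

def walk_spread(n_steps, n_walkers, jump, seed=0):
    """Median absolute displacement of independent 1-D random walks."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += jump(rng)
        finals.append(abs(x))
    finals.sort()
    return finals[len(finals) // 2]

# Nearest-neighbour jump: the classical diffusive walk, spread ~ t^(1/2)
simple = lambda rng: rng.choice((-1, 1))

# Long-range jump with tail P(|J| >= k) ~ k^(-alpha): stable-like behaviour,
# spread ~ t^(1/alpha), superdiffusive for alpha < 2
def long_range(rng, alpha=1.5):
    u = 1.0 - rng.random()            # uniform in (0, 1]
    k = int(u ** (-1.0 / alpha))      # Pareto-tailed jump magnitude, k >= 1
    return k if rng.random() < 0.5 else -k

print(walk_spread(1000, 400, simple))      # grows like sqrt(t)
print(walk_spread(1000, 400, long_range))  # markedly larger: superdiffusive
```

The heavy-tailed walk is a crude discrete analogue of the stable-like processes above; its heat kernel is not Gaussian, which is one reason the diffusion-type De Giorgi-Nash-Moser arguments need reworking in the jump setting.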
http://science.sciencemag.org/content/295/5556/825?searchid=1&HITS=10&hits=10&resourcetype=HWCIT&maxtoshow=&RESULTFORMAT=&FIRSTINDEX=0&fulltext=ophir
Report # Tunneling Spectroscopy of the Elementary Excitations in a One-Dimensional Wire Science 01 Feb 2002: Vol. 295, Issue 5556, pp. 825-828 DOI: 10.1126/science.1066266 ## Abstract The collective excitation spectrum of interacting electrons in one dimension has been measured by controlling the energy and momentum of electrons tunneling between two closely spaced, parallel quantum wires in a GaAs/AlGaAs heterostructure while measuring the resulting conductance. The excitation spectrum deviates from the noninteracting spectrum, attesting to the importance of Coulomb interactions. An observed 30% enhancement of the excitation velocity relative to noninteracting electrons with the same density, a parameter determined experimentally, is consistent with theories on interacting electrons in one dimension. In short wires, 6 and 2 micrometers long, finite size effects, resulting from the breaking of translational invariance, are observed. • * To whom correspondence should be addressed. E-mail: ophir.auslaender@weizmann.ac.il
https://www.cosmicnoon.com/category/waves/sound/
## Leslie speaker, Doppler effect and the Nobel Prize in Physics 2019 The Nobel Prize in Physics 2019 was awarded "for contributions to our understanding of the evolution of the universe and Earth's place in the cosmos", with one half to James Peebles "for theoretical discoveries in physical cosmology", the other half jointly to Michel Mayor and Didier Queloz "for the discovery of an exoplanet orbiting a solar-type star." In this sub-section we will try to understand how Michel Mayor and Didier Queloz discovered the first ever exoplanet – 51 Pegasi b… ## Visualizing Doppler Effect using ripple tanks Ripple tanks are really cool ways to explore the way a wave behaves under the influence of a perturbation. They are fairly simple to make, and are usually available in college and school laboratories to render better understanding of the wave phenomenon. How does it work?
There is usually an oscillating paddle (above – used to produce plane waves) or point source(s) (below – used to produce circular waves), which are actuated by eccentric… ## Even and Odd Harmonics of a vibrating string In the previous section we took a look at the vibrating string fixed at both ends and found that in order for the boundary condition to be satisfied, the following are the only solutions possible. The solutions on the left of the image are often termed 'Odd Harmonics' because they have an odd number of anti-nodes, and the ones on the right have an even number of anti-nodes, hence 'Even Harmonics'. If you pluck a string right at the center, you… ## Standing Waves If you have a solid understanding of what traveling waves are (click here if you need a refresher), then when you add up a sine wave moving to the right with a wave moving to the left, you get a standing wave. $y(x,t) = \sin(kx-\omega t) + \sin(kx + \omega t)$ Using $\sin(a) + \sin(b) = 2 \sin((a+b)/2) \cos((a-b)/2)$ and simplifying the above equation we get: $y(x,t) = 2\sin(kx)\cos(\omega t)$ A plot of this…
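The sum-to-product step in the standing-wave derivation is easy to verify numerically (a quick sketch added here, not part of the original post):

```python
import math

# Check sin(kx - wt) + sin(kx + wt) = 2 sin(kx) cos(wt) at sample points
k, w = 2.0, 3.0
for x in (0.0, 0.3, 1.1, 2.5):
    for t in (0.0, 0.2, 0.9):
        travelling = math.sin(k * x - w * t) + math.sin(k * x + w * t)
        standing = 2.0 * math.sin(k * x) * math.cos(w * t)
        assert abs(travelling - standing) < 1e-12
print("identity holds at all sampled (x, t)")
```

Note how the factored form separates the spatial profile sin(kx), whose zeros are the fixed nodes, from the time-dependent amplitude cos(ωt).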
https://www.physicsforums.com/threads/that-queer-experiment-what-is-its-limit.107540/
# That queer experiment; what is its limit? 1. Jan 21, 2006 ### EroticNirvana Some of you may have heard of this experiment: http://www.cs.caltech.edu/~westside/quantum-intro.html [Broken] I'm thinking of the experiment at the top of the page where the light beam seems to take two paths. Does anyone know whether there is a limit for how far you can let this beam "travel in both directions"? E.g. we might try to let the beam travel 1 km. Last edited by a moderator: May 2, 2017 2. Jan 23, 2006 ### EroticNirvana 3. Jan 23, 2006 ### inha IIRC there have been entanglement experiments with distances of that order. Don't quote me on this though. 4. Jan 23, 2006 ### JesseM No, theoretically there shouldn't be any upper limit. In the context of something called the "delayed choice experiment", I've often seen the suggestion that an astronomer could, depending on her choice of measurement, determine whether a photon emitted millions of years ago behaves as though it took a single path around a galaxy or whether it behaves as though it took both paths at once--see the section entitled 'Does our choice "change the past"?' from this page on the delayed-choice experiment. I don't know whether an experiment of this type has actually been performed, though. 5. Jan 23, 2006 ### EroticNirvana Thanks. If anyone's got a link or reference to such a "long distance" experiment I'd be very thankful. 6. Jan 23, 2006 ### Sherlock http://arxiv.org/PS_cache/quant-ph/pdf/9707/9707042.pdf [Broken] http://arxiv.org/PS_cache/quant-ph/pdf/9806/9806043.pdf [Broken] http://www.gap-optique.unige.ch/Publications/Pdf/Optics98.pdf [Broken] Last edited by a moderator: May 2, 2017
http://mathhelpforum.com/advanced-applied-math/219703-hypergeometric-function-print.html
# hypergeometric function! • Jun 10th 2013, 01:23 AM lawochekel the function $f(\alpha, \beta, \gamma; x) = \frac{\Gamma(\gamma)}{\Gamma(\beta)\,\Gamma(\gamma - \beta)}\int_{0}^{1} t^{\beta - 1} (1-t)^{\gamma - \beta - 1} (1-tx)^{- \alpha } dt$ please i need help on proving the function above. thanks • Jun 10th 2013, 02:08 AM JJacquelin Re: hypergeometric function! Hi! There is nothing to prove, since it is the integral definition of the Gauss hypergeometric function.
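One way to convince yourself that this integral really does equal the Gauss hypergeometric function is a numerical check (not a proof; the parameter values below are my own, chosen so that γ > β > 0, which the integral requires for convergence): evaluate the integral by composite Simpson's rule and compare with the defining power series of ₂F₁.

```python
import math

def hyp2f1_series(a, b, c, x, terms=200):
    """Gauss hypergeometric 2F1(a, b; c; x) via its defining power series
    sum_n (a)_n (b)_n / (c)_n * x^n / n!, valid for |x| < 1."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return total

def euler_integral(a, b, c, x, m=2000):
    """The integral representation above, Gamma(c)/(Gamma(b)Gamma(c-b))
    times the integral, by composite Simpson's rule (needs c > b > 0,
    and m even)."""
    coef = math.gamma(c) / (math.gamma(b) * math.gamma(c - b))
    f = lambda t: t ** (b - 1) * (1 - t) ** (c - b - 1) * (1 - t * x) ** (-a)
    h = 1.0 / m
    s = f(0.0) + f(1.0)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(i * h)
    return coef * s * h / 3.0

a, b, c, x = 0.7, 2.0, 4.0, 0.4
print(hyp2f1_series(a, b, c, x), euler_integral(a, b, c, x))  # should agree
```

These parameters make the integrand smooth at both endpoints, so Simpson's rule converges quickly; the two values agree to many digits.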
https://economics.stackexchange.com/questions/9840/corner-solutions-for-pareto-efficiency
# Corner Solutions for Pareto Efficiency I have been reviewing calculations to find Walrasian equilibria and Pareto efficient allocations. Assume we are in an environment with two consumers, $A$ and $B$, and two goods, $x$ and $y$. From this article in Wikipedia, we have that a sufficient condition for Pareto efficiency is that the marginal rates of substitution are equal for both consumers. However, for practical purposes this seems only useful when utility functions are differentiable. Assuming preferences are continuous and locally non-satiated, is there any "fast and loose" rule for Pareto efficiency when utility functions are not differentiable? For example with Leontief preferences $u(x,y) = \min\{ x,2y \}$, in which case there might be a corner solution. • Are we in a setting where the non-differentiable utility functions for both consumers are weakly monotonically increasing in both goods and continuous? – BKay Dec 17, 2015 at 16:39 • @BKay Yes (otherwise I think the question becomes somewhat pathological). I will edit the question. Dec 17, 2015 at 16:51 • If the non-differentiability is caused by kinks in the indifference curves (as in the extreme case of Leontief) then one should be able to calculate a left-hand MRS and a right-hand MRS (being the MRS to the left and right of the kink respectively). Pareto optimality should then reduce to a comparison between these left- and right-hand MRSs. In fact, think about a standard 2-person Edgeworth box. When we say the MRSs should be equal in a standard differentiable, convex case, we are implicitly saying that each agent's left-hand MRS should be greater than the other guy's right-hand MRS. Dec 17, 2015 at 17:22 Let $$F$$ be the set of feasible allocations i.e. the points in the Edgeworth box.
Consider any allocation $$((x_1, y_1), (x_2, y_2)) \in F$$, to check for efficiency of this allocation - consider the two sets: • $$B_1 = \{((x_1', y_1'), (x_2', y_2'))\in F | u_1(x_1', y_1') \geq u_1(x_1, y_1) \}$$ (Upper level set for individual 1) • $$B_2 = \{((x_1', y_1'), (x_2', y_2'))\in F | u_2(x_2', y_2') \geq u_2(x_2, y_2) \}$$ (Upper level set for individual 2) If at every allocation $$((x_1', y_1'), (x_2', y_2')) \in B_1\cap B_2$$, we have • $$u_1(x_1', y_1') = u_1(x_1, y_1)$$ and • $$u_2(x_2', y_2') = u_2(x_2, y_2)$$ then the allocation $$((x_1, y_1), (x_2, y_2))$$ is Pareto efficient. If not, $$((x_1, y_1), (x_2, y_2))$$ is not Pareto efficient.
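This criterion can be checked by brute force on a discretized Edgeworth box. The sketch below uses the Leontief example from the question for both agents; the endowments and the candidate allocations are my own choices for illustration:

```python
from itertools import product

X, Y = 4.0, 4.0                      # total endowments of goods x and y
u1 = lambda x, y: min(x, 2 * y)      # Leontief utility, agent 1
u2 = lambda x, y: min(x, 2 * y)      # Leontief utility, agent 2 (own bundle)

def is_pareto_efficient(x1, y1, step=0.25):
    """Scan a grid of feasible allocations: (x1, y1) for agent 1 is
    efficient iff no grid allocation in B1 ∩ B2 strictly raises either
    agent's utility (the criterion stated above)."""
    v1, v2 = u1(x1, y1), u2(X - x1, Y - y1)
    xs = [i * step for i in range(int(X / step) + 1)]
    ys = [i * step for i in range(int(Y / step) + 1)]
    for xa, ya in product(xs, ys):
        w1, w2 = u1(xa, ya), u2(X - xa, Y - ya)
        if w1 >= v1 and w2 >= v2 and (w1 > v1 or w2 > v2):
            return False             # a Pareto improvement exists
    return True

print(is_pareto_efficient(2.0, 1.0))   # True: utilities are pinned at (2, 2)
print(is_pareto_efficient(3.0, 0.5))   # False: giving agent 1 (2, 1) helps both
```

Note that no derivative of the utility functions is ever taken, so the kink at x = 2y causes no trouble; the price is that the check is only as fine as the grid.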
https://physics.stackexchange.com/questions/323509/why-cant-longitudinal-waves-be-polarised
# Why can't longitudinal waves be polarised? I understand why transverse waves can be polarised, because their oscillations can be blocked by a polarizer. But why can't longitudinal waves be polarised? Are there no polarizers, or something similar to that, available for longitudinal waves? With transverse waves, there is a choice in which direction (in which plane) the oscillations occur. For instance, let the transverse wave move in the $z$-direction. Then the oscillations could be, for instance, in the $x-z$-plane, or they could be in the $y-z$-plane, or they could be anywhere in between. In order to distinguish between these different waves (i.e. waves with oscillations in different directions), physicists introduce a parameter called "polarization" which describes the geometrical orientation of the oscillations.
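A small sketch of why polarization is an inherently transverse concept (my own illustration; it uses the standard Malus's-law cos² projection, which the original answer does not mention):

```python
import math

# A transverse wave propagating along z oscillates in some direction in the
# x-y plane; that direction is its polarization.  An ideal linear polarizer
# transmits only the component of the oscillation along its axis, so the
# transmitted intensity follows Malus's law, I = I0 * cos^2(theta).
# A longitudinal wave oscillates along z itself; there is no transverse
# direction to select, hence nothing for a polarizer to act on.

def transmitted_intensity(i0, pol_angle, axis_angle):
    """Intensity after an ideal linear polarizer (angles in radians)."""
    return i0 * math.cos(pol_angle - axis_angle) ** 2

print(transmitted_intensity(1.0, 0.0, 0.0))          # aligned axes: all passes
print(transmitted_intensity(1.0, 0.0, math.pi / 2))  # crossed axes: ~0
```

The single angle argument is the whole point: a longitudinal wave has no analogous angle to vary, so there is no polarization state to filter.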
https://onepetro.org/SPELACP/proceedings-abstract/07LACPEC/All-07LACPEC/SPE-108274-MS/142458
Nowadays two things are absolutely crucial in the petroleum industry: (1) an integrated approach from seismic to reservoir simulation (history matching included) and (2) uncertainty and risk analysis. The first ensures that the model is coherent with all types of data, and the second drives decisions and quantifies risk. Currently available hardware and software allow simulating stochastic models with statistical analysis of multiple scenarios. It is important to have a standard workflow that takes into account both aspects and can be applied to all projects. Although geophysicists may not be used to it, it is important to include uncertainty analysis in the geophysical model too. A mature field located in Brazil is used as an example to illustrate the approach suggested above. The field has produced more than 20 million bbl of light oil and now a complementary development project is under study. Despite the amount of oil that has been produced, significant geological uncertainty still remains. To provide the asset managers with a realistic range of possible outcomes of a project development, a thorough geological stochastic modeling study was conducted. Four thousand models were generated considering the following varying parameters: reservoir structure, oil/water contact, porosity, net-to-gross, and initial water saturation. The models were ranked by VOIP and the distribution was sampled (P1, P2, … P100) to go through numerical flow simulation. Permeability uncertainty was introduced by considering two possible scenarios for each model, giving a total of 200 reservoir models. Dynamic data were compared to simulation results through an objective function, and those models which gave results too far away from the production history were discarded. The VOIP distribution was recalculated. From this new distribution, the P10, P50, and P90 realizations were picked to be history matched.
After that, 27 models were generated through experimental design to consider variation in the following parameters: (1) geological model, (2) relative permeability, (3) absolute permeability, (4) well productivity index, and (5) the number of wells to be drilled in the project development. The second and third parameters were kept constant in the vicinity of the producing wells so that all 27 models honored production data. A response surface model was generated by interpolation of the 27 flow simulations to obtain 10,000 outcomes for different parameter values. From these results it was possible to perform uncertainty analysis on the prediction.
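As a toy illustration of the response-surface step described above, the sketch below samples a purely hypothetical quadratic response surface (the coefficients, parameters, and units are invented for illustration, not taken from the study) to build an outcome distribution and read off percentiles:

```python
import random

random.seed(42)

# Hypothetical quadratic response surface: cumulative oil (MMbbl) as a
# function of two normalized uncertain parameters. Coefficients are
# illustrative only, not from the study.
def response_surface(perm, wells):
    return 20.0 + 6.0 * perm + 3.0 * wells - 1.5 * perm * perm

# Sample the surface many times to build an outcome distribution,
# mimicking the 10,000 response-surface evaluations mentioned above.
outcomes = sorted(
    response_surface(random.uniform(-1, 1), random.uniform(-1, 1))
    for _ in range(10_000)
)

# Note: here P10/P50/P90 are simply the 10th/50th/90th percentiles of the
# sampled distribution; industry convention sometimes labels the high case P10.
p10 = outcomes[int(0.10 * len(outcomes))]
p50 = outcomes[int(0.50 * len(outcomes))]
p90 = outcomes[int(0.90 * len(outcomes))]
print(f"P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f} MMbbl")
```

The same percentile readout would be applied to the real interpolated surface to quantify the range of predicted recoveries.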
https://www.nag.com/numeric/nl/nagdoc_27.1/flhtml/d01/d01gyf.html
# NAG FL Interface: d01gyf (md_numth_coeff_prime)

## 1 Purpose

d01gyf calculates the optimal coefficients for use by d01gcf and d01gdf, for prime numbers of points.

## 2 Specification

Fortran Interface

Subroutine d01gyf (ndim, npts, vk, ifail)
Integer, Intent (In) :: ndim, npts
Integer, Intent (Inout) :: ifail
Real (Kind=nag_wp), Intent (Out) :: vk(ndim)

C Header Interface

#include <nag.h>
void d01gyf_ (const Integer *ndim, const Integer *npts, double vk[], Integer *ifail)

The routine may be called by the names d01gyf or nagf_quad_md_numth_coeff_prime.

## 3 Description

The Korobov (1963) procedure for calculating the optimal coefficients $a_1, a_2, \dots, a_n$ for $p$-point integration over the $n$-cube $[0,1]^n$ imposes the constraint that

$$a_1 = 1 \quad\text{and}\quad a_i = a^{i-1} \pmod{p}, \quad i = 2, \dots, n, \tag{1}$$

where $p$ is a prime number and $a$ is an adjustable argument. This argument is computed to minimize the error in the integral

$$3^n \int_0^1 dx_1 \cdots \int_0^1 dx_n \prod_{i=1}^n (1 - 2x_i)^2, \tag{2}$$

when computed using the number theoretic rule, and the resulting coefficients can be shown to fit the Korobov definition of optimality. The computation for large values of $p$ is extremely time consuming (the number of elementary operations varying as $p^2$) and there is a practical upper limit to the number of points that can be used. Routine d01gzf is computationally more economical in this respect but the associated error is likely to be larger.

## 4 References

Korobov N M (1963) Number Theoretic Methods in Approximate Analysis Fizmatgiz, Moscow

## 5 Arguments

1: ndim — Integer — Input

On entry: $n$, the number of dimensions of the integral.

Constraint: ndim ≥ 1.

2: npts — Integer — Input

On entry: $p$, the number of points to be used.

Constraint: npts must be a prime number ≥ 5.

3: vk(ndim) — Real (Kind=nag_wp) array — Output

On exit: the $n$ optimal coefficients.
4: ifail — Integer — Input/Output

On entry: ifail must be set to $0$, $-1$ or $1$ to set behaviour on detection of an error; these values have no effect when no error is detected. A value of $0$ causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of $-1$ means that an error message is printed while a value of $1$ means that it is not. If halting is not appropriate, the value $-1$ or $1$ is recommended. If message printing is undesirable, then the value $1$ is recommended. Otherwise, the value $0$ is recommended. When the value $-1$ or $1$ is used it is essential to test the value of ifail on exit.

On exit: ifail $= 0$ unless the routine detects an error or a warning has been flagged (see Section 6).

## 6 Error Indicators and Warnings

If on entry ifail $= 0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf).

Errors or warnings detected by the routine:

ifail $= 1$: On entry, ndim = ⟨value⟩. Constraint: ndim ≥ 1.

ifail $= 2$: On entry, npts = ⟨value⟩. Constraint: npts ≥ 5.

ifail $= 3$: On entry, npts = ⟨value⟩. Constraint: npts must be a prime number.

ifail $= 4$: The machine precision is insufficient to perform the computation exactly. Try reducing npts: npts = ⟨value⟩.

ifail $= -99$: An unexpected error has been triggered by this routine. Please contact NAG. See Section 7 in the Introduction to the NAG Library FL Interface for further information.

ifail $= -399$: Your licence key may have expired or may not have been installed correctly. See Section 8 in the Introduction to the NAG Library FL Interface for further information.

ifail $= -999$: Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.

## 7 Accuracy

The optimal coefficients are returned as exact integers (though stored in a real array).

## 8 Parallelism and Performance

d01gyf is not threaded in any implementation.

## 9 Further Comments

The time taken is approximately proportional to $p^2$ (see Section 3).

## 10 Example

This example calculates the Korobov optimal coefficients where the number of dimensions is $4$ and the number of points is $631$.

### 10.1 Program Text

Program Text (d01gyfe.f90)

### 10.2 Program Data

None.

### 10.3 Program Results

Program Results (d01gyfe.r)
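To make the Korobov search concrete, here is a small, unoptimized Python sketch of the same idea — it is not the NAG implementation, and the scoring functional is the test integral quoted in Section 3, whose exact value over the unit cube is 1. For each candidate $a$ it forms the coefficients $(1, a, a^2, \dots)\bmod p$ and scores them by the error of the number-theoretic rule; the brute-force scan over $a$ is why the cost grows like $p^2$:

```python
def korobov_coeffs(ndim, npts):
    """Brute-force sketch of a Korobov optimal-coefficient search:
    try every a in 1..p-1, score it by the error of the p-point
    number-theoretic rule on 3^n * prod((1 - 2 x_i)^2), whose exact
    integral over [0,1]^n is 1, and keep the best a."""
    p = npts
    best_a, best_err = None, float("inf")
    for a in range(1, p):
        vk = [pow(a, i, p) for i in range(ndim)]   # (1, a, a^2, ...) mod p
        total = 0.0
        for k in range(1, p + 1):
            prod = 1.0
            for ai in vk:
                x = (k * ai / p) % 1.0             # fractional part of k*a_i/p
                prod *= 3.0 * (1.0 - 2.0 * x) ** 2
            total += prod
        err = abs(total / p - 1.0)                 # rule value minus exact value
        if err < best_err:
            best_a, best_err = a, err
    return [pow(best_a, i, p) for i in range(ndim)], best_err

vk, err = korobov_coeffs(ndim=2, npts=31)
print(vk, err)
```

For the real routine the small prime here would be replaced by something like the npts = 631 of the worked example, at correspondingly higher cost.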
http://mathoverflow.net/questions/3007/division-algebras-as-algebraic-groups?answertab=oldest
# Division Algebras as Algebraic Groups

If I'm given a division algebra D with Z(D) = F, then how can I view D^× as an algebraic group defined over F? I'd like to see first how D^× can be given the structure of a variety defined over F, and then to see how the group law on D^× is defined over F.

-

Choose an F-basis of D. The multiplication is described by certain quadratic functions, with respect to this basis; D^× is given by the nonvanishing of a polynomial function (the norm). So the multiplication can be understood as defining an algebraic group structure on the complement of a hypersurface in an affine space.

-

And for the record, this is the exact same way that you show GL_n is an algebraic group. – Tyler Lawson Oct 28 '09 at 13:01

Is it part of the general theory of division algebras that the norm is a polynomial function? – Joel Dodge Oct 28 '09 at 19:27

@Joel: After base change to a matrix ring, the norm becomes the determinant. – S. Carnahan Oct 29 '09 at 2:58
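A concrete illustration (not from the thread) using the Hamilton quaternions: in the basis 1, i, j, k each coordinate of a product is a bilinear polynomial in the coordinates of the factors, and the reduced norm is a quadratic polynomial whose nonvanishing cuts out the unit group — multiplicativity of the norm is what makes the complement of the norm hypersurface closed under multiplication:

```python
def qmul(q, r):
    """Quaternion multiplication in the basis (1, i, j, k); each output
    coordinate is a bilinear polynomial in the coordinates of q and r."""
    a, b, c, d = q
    e, f, g, h = r
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def norm(q):
    # Reduced norm: a polynomial (quadratic form) in the coordinates.
    # After base change to a matrix ring it becomes the determinant.
    a, b, c, d = q
    return a*a + b*b + c*c + d*d

q, r = (1, 2, 3, 4), (5, -1, 0, 2)
print(norm(qmul(q, r)), norm(q) * norm(r))  # → 900 900: N(qr) = N(q) N(r)
```

Since N(q) ≠ 0 forces q to be invertible (with inverse conj(q)/N(q)), the points where the norm polynomial is nonzero are exactly the group of units.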
http://mathhelpforum.com/differential-geometry/131773-entire-bounded-function-must-constant-print.html
# Entire bounded function must be constant.

• March 2nd 2010, 07:50 PM davismj

[attached image: problem statement — dead tinypic link]

I can kind of see what's going on, but not well enough to construct a logical progression towards a proof. Any ideas?

• March 2nd 2010, 08:16 PM davismj

This feels like the solution.

[attached image: attempted solution — dead tinypic link]

This feels right, but I tend to get things wrong with my ignorance. Verify?

• March 3rd 2010, 07:07 AM Opalg

That basic idea (taking the exponential) is exactly what is needed, but some of the details are a bit dubious. You can use inequalities for real numbers, but not for complex numbers. It would be better to say $\exp(f(z)) = e^{u+iv} = e^ue^{iv}$ and therefore $|\exp(f(z))| = e^u$ (since $e^u>0$ and $|e^{iv}|=1$). Then apply Liouville's theorem.

• March 3rd 2010, 09:17 AM davismj

Quote: Originally Posted by Opalg
That basic idea (taking the exponential) is exactly what is needed, but some of the details are a bit dubious. You can use inequalities for real numbers, but not for complex numbers. It would be better to say $\exp(f(z)) = e^{u+iv} = e^ue^{iv}$ and therefore $|\exp(f(z))| = e^u$ (since $e^u>0$ and $|e^{iv}|=1$). Then apply Liouville's theorem.

Of course. This makes a lot of sense to me. Thank you.
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/4400/2/a/bi/
# Properties

Label: 4400.2.a.bi
Level: $4400$
Weight: $2$
Character orbit: 4400.a
Self dual: yes
Analytic conductor: $35.134$
Analytic rank: $1$
Dimension: $2$
CM: no
Inner twists: $1$

## Newspace parameters

Level: $$N$$ $$=$$ $$4400 = 2^{4} \cdot 5^{2} \cdot 11$$
Weight: $$k$$ $$=$$ $$2$$
Character orbit: $$[\chi]$$ $$=$$ 4400.a (trivial)

## Newform invariants

Self dual: yes
Analytic conductor: $$35.1341768894$$
Analytic rank: $$1$$
Dimension: $$2$$
Coefficient field: $$\Q(\sqrt{21})$$
Defining polynomial: $$x^{2} - x - 5$$
Coefficient ring: $$\Z[a_1, a_2, a_3]$$
Coefficient ring index: $$1$$
Twist minimal: no (minimal twist has level 1100)
Fricke sign: $$1$$
Sato-Tate group: $\mathrm{SU}(2)$

## $q$-expansion

Coefficients of the $$q$$-expansion are expressed in terms of $$\beta = \frac{1}{2}(1 + \sqrt{21})$$. We also show the integral $$q$$-expansion of the trace form.

$$f(q) = q - \beta q^{3} + (\beta - 3) q^{7} + (\beta + 2) q^{9} + q^{11} + q^{13} + ( - \beta + 2) q^{17} + ( - 2 \beta - 1) q^{19} + (2 \beta - 5) q^{21} + ( - \beta - 1) q^{23} - 5 q^{27} + (\beta + 4) q^{29} + (2 \beta + 3) q^{31} - \beta q^{33} + (2 \beta - 3) q^{37} - \beta q^{39} + (2 \beta - 7) q^{41} - 10 q^{43} + (2 \beta - 7) q^{47} + ( - 5 \beta + 7) q^{49} + ( - \beta + 5) q^{51} + ( - 3 \beta - 3) q^{53} + (3 \beta + 10) q^{57} + (2 \beta + 5) q^{59} + ( - \beta + 7) q^{61} - q^{63} - 4 q^{67} + (2 \beta + 5) q^{69} + (6 \beta - 6) q^{71} + (\beta + 5) q^{73} + (\beta - 3) q^{77} + (7 \beta - 4) q^{79} + (2 \beta - 6) q^{81} + ( - 5 \beta + 4) q^{83} + ( - 5 \beta - 5) q^{87} + ( - \beta + 2) q^{89} + (\beta - 3) q^{91} + ( - 5 \beta - 10) q^{93} + ( - \beta + 9) q^{97} + (\beta + 2) q^{99} + O(q^{100})$$

$$\operatorname{Tr}(f)(q) = 2 q - q^{3} - 5 q^{7} + 5 q^{9} + 2 q^{11} + 2 q^{13} + 3 q^{17} - 4 q^{19} - 8 q^{21} - 3 q^{23} - 10 q^{27} + 9 q^{29} + 8 q^{31} - q^{33} - 4 q^{37} - q^{39} - 12 q^{41} - 20 q^{43} - 12 q^{47} + 9 q^{49} + 9 q^{51} - 9 q^{53} + 23 q^{57} + 12 q^{59} + 13 q^{61} - 2 q^{63} - 8 q^{67} + 12 q^{69} - 6 q^{71} + 11 q^{73} - 5 q^{77} - q^{79} - 10 q^{81} + 3 q^{83} - 15 q^{87} + 3 q^{89} - 5 q^{91} - 25 q^{93} + 17 q^{97} + 5 q^{99} + O(q^{100})$$

## Embeddings

For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label.

Label 1.1: $$\iota_1(\nu) = 2.79129$$; $$a_2 = 0$$, $$a_3 = -2.79129$$, $$a_4 = 0$$, $$a_5 = 0$$, $$a_6 = 0$$, $$a_7 = -0.208712$$, $$a_8 = 0$$, $$a_9 = 4.79129$$, $$a_{10} = 0$$
Label 1.2: $$\iota_2(\nu) = -1.79129$$; $$a_2 = 0$$, $$a_3 = 1.79129$$, $$a_4 = 0$$, $$a_5 = 0$$, $$a_6 = 0$$, $$a_7 = -4.79129$$, $$a_8 = 0$$, $$a_9 = 0.208712$$, $$a_{10} = 0$$
## Atkin-Lehner signs

$$p = 2$$: sign $$-1$$
$$p = 5$$: sign $$1$$
$$p = 11$$: sign $$-1$$

## Inner twists

This newform does not admit any (nontrivial) inner twists.

## Twists

By twisting character orbit (Char, Parity, Ord, Mult, Type, Twist, Min, Dim):

1.a even 1 1 trivial 4400.2.a.bi 2
4.b odd 2 1 1100.2.a.h yes 2
5.b even 2 1 4400.2.a.bu 2
5.c odd 4 2 4400.2.b.s 4
12.b even 2 1 9900.2.a.bz 2
20.d odd 2 1 1100.2.a.g 2
20.e even 4 2 1100.2.b.d 4
60.h even 2 1 9900.2.a.bh 2
60.l odd 4 2 9900.2.c.x 4

By twisted newform orbit (Twist, Min, Dim, Char, Parity, Ord, Mult, Type):

1100.2.a.g 2 20.d odd 2 1
1100.2.a.h yes 2 4.b odd 2 1
1100.2.b.d 4 20.e even 4 2
4400.2.a.bi 2 1.a even 1 1 trivial
4400.2.a.bu 2 5.b even 2 1
4400.2.b.s 4 5.c odd 4 2
9900.2.a.bh 2 60.h even 2 1
9900.2.a.bz 2 12.b even 2 1
9900.2.c.x 4 60.l odd 4 2

## Hecke kernels

This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(\Gamma_0(4400))$$:

$$T_{3}^{2} + T_{3} - 5$$
$$T_{7}^{2} + 5T_{7} + 1$$
$$T_{13} - 1$$

## Hecke characteristic polynomials

$p$: $F_p(T)$
$2$: $$T^{2}$$
$3$: $$T^{2} + T - 5$$
$5$: $$T^{2}$$
$7$: $$T^{2} + 5T + 1$$
$11$: $$(T - 1)^{2}$$
$13$: $$(T - 1)^{2}$$
$17$: $$T^{2} - 3T - 3$$
$19$: $$T^{2} + 4T - 17$$
$23$: $$T^{2} + 3T - 3$$
$29$: $$T^{2} - 9T + 15$$
$31$: $$T^{2} - 8T - 5$$
$37$: $$T^{2} + 4T - 17$$
$41$: $$T^{2} + 12T + 15$$
$43$: $$(T + 10)^{2}$$
$47$: $$T^{2} + 12T + 15$$
$53$: $$T^{2} + 9T - 27$$
$59$: $$T^{2} - 12T + 15$$
$61$: $$T^{2} - 13T + 37$$
$67$: $$(T + 4)^{2}$$
$71$: $$T^{2} + 6T - 180$$
$73$: $$T^{2} - 11T + 25$$
$79$: $$T^{2} + T - 257$$
$83$: $$T^{2} - 3T - 129$$
$89$: $$T^{2} - 3T - 3$$
$97$: $$T^{2} - 17T + 67$$
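A quick numerical sanity check (not part of the LMFDB page itself) that the two real embeddings are the roots of the defining polynomial x² − x − 5, and that the embedded a₃, a₇, a₉ values follow from the q-expansion relations a₃ = −β, a₇ = β − 3, a₉ = β + 2:

```python
import math

beta1 = (1 + math.sqrt(21)) / 2   # ≈ 2.79129, embedding 1.1
beta2 = (1 - math.sqrt(21)) / 2   # ≈ -1.79129, embedding 1.2

for b in (beta1, beta2):
    # Both embeddings are roots of the defining polynomial x^2 - x - 5.
    assert abs(b * b - b - 5) < 1e-12
    # a_3 = -beta, a_7 = beta - 3, a_9 = beta + 2 from the q-expansion.
    print(round(-b, 5), round(b - 3, 5), round(b + 2, 5))
```

The printed triples reproduce the a₃, a₇, a₉ columns of the embeddings list above.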
https://sergeev.io/notes/cauchy_schwarz/
Proof of the Cauchy-Schwarz Inequality

For two vectors $$f$$ and $$g$$ in a real inner product space, pick an arbitrary real scalar $$\lambda$$ and write:

$$\langle f + \lambda g, \, f + \lambda g \rangle \geq 0,$$

which is true because it's a length squared, so

$$\langle g, g \rangle \lambda^2 + 2 \langle f, g \rangle \lambda + \langle f, f \rangle \geq 0.$$

The expression on the left is quadratic in $$\lambda$$, and since it's always greater than or equal to zero, it has zero real roots (i.e. entirely above the x axis) or one real root (i.e. just touching the x axis). It can't have two real roots, because that would require that the expression be negative for some $$\lambda$$ (i.e. go underneath the x axis), and that can't be the case since it's a length squared. Altogether, this means that the quadratic's discriminant is less than or equal to zero:

$$4 \langle f, g \rangle^2 - 4 \langle f, f \rangle \langle g, g \rangle \leq 0 \quad\Longleftrightarrow\quad |\langle f, g \rangle| \leq \|f\| \, \|g\|.$$

Equality occurs when $$f$$ and $$g$$ are linearly dependent. This means $$f = \lambda g$$ for some scalar $$\lambda$$, and we can show this directly by writing:

$$|\langle f, g \rangle| = |\lambda| \, \langle g, g \rangle = \|\lambda g\| \, \|g\| = \|f\| \, \|g\|.$$

See some similar/alternate proofs and applications of the Cauchy-Schwarz Inequality here: http://cnx.org/content/m10757/latest/
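The inequality and its equality case are easy to check numerically (illustrative only — random vectors for the strict case, a scalar multiple for equality):

```python
import math
import random

random.seed(0)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Random vectors: |<f,g>| <= ||f|| ||g|| should hold every time.
for _ in range(1000):
    f = [random.uniform(-1, 1) for _ in range(5)]
    g = [random.uniform(-1, 1) for _ in range(5)]
    assert abs(dot(f, g)) <= math.sqrt(dot(f, f)) * math.sqrt(dot(g, g)) + 1e-12

# Equality case: f = lambda * g gives |<f,g>| = ||f|| ||g|| exactly.
g = [1.0, 2.0, 3.0]
f = [-2.5 * x for x in g]
assert math.isclose(abs(dot(f, g)), math.sqrt(dot(f, f)) * math.sqrt(dot(g, g)))
print("Cauchy-Schwarz verified on random samples")
```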
https://www.physicsforums.com/threads/equation-for-transverse-wave.659420/
# Equation for transverse wave

1. ### LizardCobra 17

What is [an] equation for a transverse wave with no boundary conditions, as a function of x and t? I want to model a fluctuating string where neither of the ends is bound.

2. ### Studiot

The wave equation and the boundary conditions are separate matters. The simplest transverse wave equation is

$$\frac{\partial^2 y}{\partial x^2} = c^2 \frac{\partial^2 y}{\partial t^2}$$

You need to apply boundary conditions to establish the two arbitrary functions that appear in the solution.

3. ### LizardCobra 17

That's what I'm confused about. I know that I can express the motion of a vibrating string using sin and cos terms, but I don't know what BCs to apply if the ends of the string are free to move. I want to express the shape of a string using sin and cos, but I am not sure what combination of terms is appropriate here. Would y = A cos(kx−wt) + B cos(kx+wt) + C sin(kx−wt) + D sin(kx+wt) fully describe the motion?

4. ### Studiot

Let the string be ABCD with A, D the ends of the string. If both A and D are 'free to move' then how do you apply tension to the string? If you clamp at intermediate points, say B and C, then AB and CD play no part in the wave motion.

5. ### Studiot

It does not really matter but I am going to put the $c^2$ in its conventional place in what follows. Sorry, that is what happens when you trust to an aging memory.

$$\frac{\partial^2 y}{\partial x^2} = \frac{1}{c^2} \frac{\partial^2 y}{\partial t^2}$$

Now for fixed ends the boundary conditions are

$$y(0,t) = y(l,t) = 0$$

If the ends are not 'free' but still participating in the wave then they can be attributed initial displacement and velocity conditions

$$y(x,0) = f(x), \qquad \left( \frac{\partial y}{\partial t} \right)_{t=0} = g(x)$$

The wave equation itself may be solved by the method of separating the variables

$$y(x,t) = F(x)G(t)$$

F is a function of x only and G a function of t only.
Substituting and dividing through by y = FG

$$\frac{1}{F}\frac{d^2 F}{dx^2} = \frac{1}{Gc^2}\frac{d^2 G}{dt^2}$$

Both sides of this equation can only be equal if they are constant. Convention has this constant as $-\lambda^2$. Some algebra on the resultant pair of ordinary differential equations will lead to your required trigonometric solution (not the one you offered) where F has the form

$$F_n(x) = \sin \frac{n\pi x}{l}$$

and G has the form

$$G_n(t) = A_n \cos \omega_n t + B_n \sin \omega_n t$$

Thus

$$y_n(x,t) = F_n(x) G_n(t) = \sin \frac{n\pi x}{l} \left[ A_n \cos \omega_n t + B_n \sin \omega_n t \right]$$

Where A and B are determined by the initial conditions. In general the solution above will not be complete since it depends upon n. To obtain a complete solution you need to sum solutions over n from n=1 to ∞. Thus

$$y(x,0) = f(x) = \sum_{n = 1}^\infty A_n \sin \frac{n\pi x}{l}, \qquad \left( \frac{\partial y}{\partial t} \right)_{t = 0} = g(x) = \sum_{n = 1}^\infty B_n \omega_n \sin \frac{n\pi x}{l}$$

6. ### LizardCobra 17

Thank you, that makes sense. But why does Fn(x) only consist of sin terms, instead of both sin and cos terms? What I originally had was y = Ʃ sin(nπx/L)·[An cos($\omega$t) + Bn sin($\omega$t)] + cos(nπx/L)·[Cn cos($\omega$t) + Dn sin($\omega$t)]. Is there a reason not to include the second term?

ps, the reason why the ends are unbound is because there is not supposed to be any tension- this is a string in Brownian motion (so there are no real initial conditions that I can apply either, since it does not have to be in any specific configuration at t=0). I guess it would have been better to talk about it as an abstract wave than an actual string.

7. ### Studiot

Without tension there is no wave in the string. To derive the wave equation from first mechanical principles you require a restoring force. There is none in your situation.
Your string takes up the classic random walk line under the influence of Brownian motion. This crosses and recrosses itself many times so takes up a bit of an undulatory shape. But it is not a wave. Are you familiar with the mathematics of a 'random walk'?

8. ### LizardCobra 17

You are correct. It is definitely not a real wave in the classical sense. I am familiar with the random walk, and I agree that that is what this is. But since the string will have an internal resistance to bending, that rigidity brings it back to equilibrium (so the resistance to bending is the restoring force). It will fluctuate away from this equilibrium (and I am neglecting the fact that the entire string is diffusing, rotating or doing anything else more complicated) and the time scale is determined by drag forces, and the amplitude of these fluctuations is determined by the rigidity. But the dynamics of the fluctuation should be able to be modeled as a simple wave equation- I just wanted to check what that wave equation is, since most of the time (for obvious reasons) there actually are boundary conditions.

I've modeled the shape at t = 0 as Ʃ A sin(nπx/L) + B cos(nπx/L). Can I just multiply this by (cos(wt) + sin(wt)) to make it a function of time?

Last edited: Dec 17, 2012

9. ### Studiot

I'm not getting at you but that's three different versions of some formula you have posted so far. I obtained my expression by carrying out the algebra (separating the partial equation into two ordinary differential equations, one in x, one in t) as stated in post #5.

Using

$$\frac{1}{F}\frac{d^2 F}{dx^2} = - \lambda^2$$

and the boundary conditions F(0) = F(l) = 0,

$$F_n(x) = \sin \frac{n\pi x}{l}$$

where

$$\lambda_n = \frac{n\pi}{l}$$

If we took any value but zero for the cosine term constant in the general solution we could not satisfy the boundary conditions. So we are left with only a sine function in x.
Similarly the initial ordinary differential equation in t leads to

$$\frac{1}{Gc^2}\frac{d^2 G}{dt^2} = - \lambda^2$$

with initial conditions as stated. Since this is an initial value problem that does not require G to vanish, we put in our values to determine A and B in the general solution. You cannot avoid putting up some values for these to determine A and B.

10. ### LizardCobra 17

I thought post #3 and post #6 were equivalent through some trig identity. I see why you applied the boundary conditions and set the coefficient on the cos(n$\pi$x/L) term to zero. But if I want my wave to look and act like the progressive wave shown here: http://www.mta.ca/faculty/science/physics/suren/Twave/Twave01.html I thought that I can't apply that boundary condition (because the ends aren't actually pinned). I remember deriving the equation that you showed for a particle in the infinite square well and always throwing out the spatial cos term. But I thought that in this case we need to keep it.

11. ### Studiot

What I am saying to you is that you need to find some initial conditions to put into f(x) and g(x) in post #5. Alternatively you need to find some other points on your string. You cannot solve for the arbitrary functions/constants without this. The whole point (trick) of the exercise is to choose easy-to-solve positions on the string. For sure the ends of the string must start and be somewhere.

12. ### Darwin123 733

You can use "periodic boundary conditions". You can assume that the displacement and velocity are periodic in distance with a period of length κL. This can be expressed by the following two equations:

f(x,t) = f(x+κL,t), and ∂f(x,t)/∂t = ∂f(x+κL,t)/∂t, for all x and all t.

Here, κ is an arbitrary constant that can be any real number. L is a scale length.
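The standing-wave modes discussed in the thread can be checked numerically (illustration only — the parameter values are arbitrary): a mode y_n(x,t) = sin(nπx/L)·cos(ω_n t) satisfies the wave equation exactly when ω_n = nπc/L, and central finite differences make the residual visibly tiny:

```python
import math

L, c, n = 2.0, 3.0, 2
w = n * math.pi * c / L            # dispersion relation: omega_n = n*pi*c/L

def y(x, t):
    return math.sin(n * math.pi * x / L) * math.cos(w * t)

h = 1e-4
def d2(f, u):
    # Central finite difference for the second derivative of a 1-arg function.
    return (f(u + h) - 2 * f(u) + f(u - h)) / (h * h)

x0, t0 = 0.7, 0.3
y_xx = d2(lambda x: y(x, t0), x0)
y_tt = d2(lambda t: y(x0, t), t0)

residual = y_xx - y_tt / (c * c)   # should vanish for the wave equation
print(abs(residual))
```

Replacing ω with anything other than nπc/L makes the residual order one, which is one way to see why the temporal and spatial frequencies in the separated solution are tied together.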
https://phys.libretexts.org/Bookshelves/Astronomy_and_Cosmology_TextMaps/Map%3A_Celestial_Mechanics_(Tatum)/8%3A_Planetary_Motions/8.2%3A_Opposition%2C_Conjunction_and_Quadrature
# 8.2: Opposition, Conjunction and Quadrature

Planets that are closer to the Sun than Earth (i.e. whose orbital radii are less than 1 AU), that is to say the planets Mercury and Venus, are inferior planets. (Any asteroids that may be found in such orbits are therefore inferior asteroids, and, technically, any spacecraft that are in solar orbits within that of the orbit of Earth could also be called inferior spacecraft, although it is doubtful whether this nomenclature would ever win general acceptance.) Other planets (i.e. Mars and beyond) are superior planets. In figure $$\text{VIII.1}$$ I draw the orbits of Earth and of an inferior planet.

$$\text{FIGURE VIII.1}$$

The symbol $$\odot$$ denotes the Sun and $$\oplus$$ denotes Earth. At $$\text{IC}$$, the planet is at inferior conjunction with the Sun. At $$\text{SC}$$, it is at superior conjunction with the Sun. At $$\text{GWE}$$ it is at greatest western elongation from the Sun. At $$\text{GEE}$$ it is at greatest eastern elongation. It should be evident that the sine of the greatest elongation is equal to the radius of the planet's orbit in $$\text{AU}$$. Thus the radius of Venus's (almost circular) orbit is 0.7233 $$\text{AU}$$, and therefore its greatest elongation from the Sun is about $$46^\circ$$. Mercury's orbit is relatively eccentric ($$e = 0.2056$$), so that its distance from the Sun varies from 0.3075 $$\text{AU}$$ at perihelion to 0.4667 $$\text{AU}$$ at aphelion. Consequently greatest elongations can be from $$18^\circ$$ to $$28^\circ$$, depending on where in its orbit they occur. In figure $$\text{VIII.2}$$ I draw the orbits of Earth and of a superior planet.

$$\text{FIGURE VIII.2}$$

At $$\text{C}$$, the planet is in conjunction with the Sun. At $$\text{O}$$ it is in opposition to the Sun. The opposition point is very familiar to observers of asteroids. Its right ascension differs from that of the Sun by 12 hours, and it transits across the meridian at midnight local solar time.
The points $$\text{EQ}$$ and $$\text{WQ}$$ are eastern quadrature and western quadrature.
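The greatest-elongation relation stated above (the sine of the greatest elongation equals the planet's orbital radius in AU, treating each Sun-planet distance as a circular radius) is easy to check numerically. A quick sketch; the function name is illustrative:

```python
import math

def greatest_elongation_deg(radius_au):
    """Greatest elongation (in degrees) of an inferior planet at the
    given distance from the Sun, as seen from Earth at 1 AU."""
    return math.degrees(math.asin(radius_au))

print(round(greatest_elongation_deg(0.7233), 1))  # Venus
print(round(greatest_elongation_deg(0.3075), 1))  # Mercury at perihelion
print(round(greatest_elongation_deg(0.4667), 1))  # Mercury at aphelion
```

The three values reproduce the figures in the text: about 46 degrees for Venus, and roughly 18 to 28 degrees for Mercury depending on where in its orbit the elongation occurs.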
http://math.stackexchange.com/questions/241087/show-that-the-equation-x55px35p2xq-0-will-have-a-pair-of-equal-roots-if/241101
# show that the equation $x^5+5px^3+5p^2x+q=0$ will have a pair of equal roots, if $q^2+4p^5=0$

How can I show that the equation $x^5+5px^3+5p^2x+q=0$ will have a pair of equal roots if $q^2+4p^5=0$? Can anyone help me? Thanks a lot. - There is a more systematic approach, involving the resultant (which you can look up). The polynomial and its derivative have a common root if and only if the resultant of the polynomial and its derivative is zero. So, you can calculate the resultant, then see whether $q^2+4p^5$ is a factor of the resultant. The discriminant vanishes iff the polynomial has a repeated root. In this case the discriminant is $3125(4p^5+q^2)^2$.
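As a sanity check (a SymPy sketch, not part of the original answer), one can compute the discriminant symbolically and confirm that it factors as $3125(q^2+4p^5)^2$, so it vanishes exactly when $q^2+4p^5=0$:

```python
from sympy import symbols, discriminant, simplify, factor

x, p, q = symbols('x p q')
f = x**5 + 5*p*x**3 + 5*p**2*x + q

# The discriminant of f with respect to x vanishes exactly
# when f has a repeated root.
disc = discriminant(f, x)

# Confirm the closed form 3125*(q^2 + 4*p^5)^2
assert simplify(disc - 3125*(q**2 + 4*p**5)**2) == 0
print(factor(disc))
```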
http://math.stackexchange.com/questions/226436/suggestion-for-a-project-on-harmonic-measure-and-fourier-analysis
# Suggestion for a project on Harmonic measure and Fourier analysis

I have a course project on harmonic measure and Fourier analysis. The goal is to give a presentation on a part of harmonic measure theory that relates to Fourier analysis. Harmonic measure is a vast body of knowledge with which I am unfamiliar. I delved into Garnett's book on harmonic measure, but I don't see which area of the theory has a connection with Fourier analysis. Can you point to some interconnected areas? I initially planned to study hitting times for Brownian motion, but I think that exploits little of Fourier analysis.
http://cs.stackexchange.com/questions/6719/cfg-and-pda-for-the-grammar-that-has-perfectly-nested-parentheses-and-brackets
# CFG and PDA for the grammar that has perfectly nested parentheses and brackets

I gotta make a CFG and PDA for the grammar that has perfectly nested parentheses and brackets.

\begin{align} S &\to [S] \\ S &\to (S) \\ S &\to SS \\ S &\to \varepsilon \end{align}

Not sure if this is correct, or how to make the PDA from it? - Try using the standard construction from the proof that CFGs and NPDAs are equally powerful! Does "perfectly nested" exclude $([)]$ here? –  Raphael Nov 19 '12 at 18:11 The language you study is a classic, the one-sided Dyck language (on two pairs of brackets). You can directly make a PDA by considering the following property of nested strings: every closing bracket you read should match the last unmatched opening bracket. Keep the unmatched $[$ and $($ on the stack and you are ready to go. -
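The stack idea in the answer can be sketched directly (an illustrative simulation of the PDA's stack discipline, not the formal NPDA construction): push each opening bracket, and on each closing bracket check that it matches the most recent unmatched opener.

```python
def nested(s):
    """Return True iff s is a perfectly nested string over () and []."""
    pairs = {')': '(', ']': '['}
    stack = []
    for ch in s:
        if ch in '([':
            stack.append(ch)              # unmatched opener goes on the stack
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False              # closer without a matching opener
        else:
            return False                  # symbol outside the alphabet
    return not stack                      # accept only with an empty stack

print(nested("([])()"))   # nested
print(nested("([)]"))     # interleaved, not nested
```

Note that `([)]` is rejected, which answers Raphael's question for this particular stack discipline.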
http://aas.org/archives/BAAS/v26n2/aas184/abs/S4908.html
Inhomogeneous Hot Gas in NGC 507 Session 49 -- Elliptical Galaxies Display presentation, Wednesday, 1, 1994, 9:20-6:30

## [49.08] Inhomogeneous Hot Gas in NGC 507

D.-W. Kim and G. Fabbiano (SAO)

We present the X-ray properties of NGC 507 observed by the ROSAT PSPC and provide observational data on radial changes in the X-ray emission and its emitting temperature, which are critical to understanding the nature of the X-ray emitting gas and to building theoretical cooling flow models. The X-ray emission is extended at least out to 1000 arcsec (480 kpc at a distance of 99.3 Mpc). The radial profile of the X-ray surface brightness is $\Sigma_X \sim r^{-1.8}$ outside the core region. The radial profile is a function of energy, such that the soft X-rays have a smaller core radius and a flatter slope. The spectral analysis reveals that the emission temperature peaks at an intermediate radius of 2-3 arcmin and falls both inward and outward. The absorption column density is consistent with the galactic line-of-sight value. The observed radial profiles of X-ray surface brightness and emission temperature are qualitatively best matched by a model with the full Tammann supernova rate, a large amount of heavy halo, and mass sinks over a wide range of radii. Assuming hydrostatic equilibrium, the estimated mass-to-light ratio is 76 $\pm$ 13, indicating a large amount of heavy halo. Near the edge of the X-ray emitting region there are many faint sources, and we discuss various possibilities for their nature.
http://kamilov.info/blog/2016/07/
# A simple ISTA ↔ FISTA switch

Today, let us revisit the topic of enforcing sparsity and see an easy trick to accelerate the convergence speed of the algorithm (demo). The basic algorithm that we discussed last time is the iterative shrinkage/thresholding algorithm (ISTA), which can be specified as follows

$$\mathbf{f}^{t} = \eta\left(\mathbf{f}^{t-1} - \gamma \mathbf{H}^\mathrm{T}(\mathbf{H}\mathbf{f}^{t-1} - \mathbf{y});\, \gamma\tau\right),$$

where $t = 1, 2, 3, \dots$ is the iteration number, $\mathbf{y}$ is the measurement vector, $\mathbf{H}$ is the measurement matrix that models the acquisition system, $\gamma > 0$ is a step-size of the algorithm that we can always set to the inverse of the largest eigenvalue of $\mathbf{H}^\mathrm{T}\mathbf{H}$ to ensure convergence (i.e., set $\gamma = 1/L$ with $L = \lambda_{\text{max}}(\mathbf{H}^\mathrm{T}\mathbf{H})$), $\tau > 0$ is the regularization parameter that controls the sparsity of the final solution (a larger $\tau$ leads to a sparser solution), and finally $\eta$ is a scalar thresholding function applied in a component-wise fashion. One of the most popular thresholding functions is the soft-thresholding defined as

$$\eta(x;\, \lambda) = \mathrm{sgn}(x)\,(|x| - \lambda)_+,$$

where $(\cdot)_+$ returns the positive part of its argument, and $\mathrm{sgn}(\cdot)$ is a signum function that returns $+1$ if its argument is positive and $-1$ when it is negative. ISTA is a very well understood method, and it is well known that its rate of convergence corresponds to that of the gradient-descent method, which is $O(1/t)$. Let us consider the following simple iteration

$$\mathbf{f}^{t} = \eta\left(\mathbf{s}^{t-1} - \gamma \mathbf{H}^\mathrm{T}(\mathbf{H}\mathbf{s}^{t-1} - \mathbf{y});\, \gamma\tau\right), \qquad \mathbf{s}^{t} = \mathbf{f}^{t} + \left(\frac{q_{t-1}-1}{q_{t}}\right)(\mathbf{f}^{t} - \mathbf{f}^{t-1}),$$

where $\mathbf{f}^0 = \mathbf{s}^0 = \mathbf{f}_{\text{init}}$. When $q_t = 1$ for all $t = 1, 2, 3, \dots$ the iteration corresponds to ISTA; however, an appropriate choice of $\{q_t\}_{t \in [0, 1, \dots]}$ leads to a faster $O(1/t^2)$ convergence, which is crucial for larger scale problems where one tries to reduce the number of matrix-vector products with $\mathbf{H}$ and $\mathbf{H}^\mathrm{T}$. The faster version of ISTA was originally proposed by Beck & Teboulle 2009 and is widely known as fast ISTA (FISTA).
So, what is that appropriate choice of $\{q_t\}_{t \in [0, 1, \dots]}$? Beck & Teboulle proposed to initialize $q_0 = 1$ and then set the rest iteratively as follows

$$q_t = \frac{1}{2}\left(1 + \sqrt{1 + 4q_{t-1}^2}\right).$$

A short note on the Python demo. It was done as an IPython notebook in a fully self-contained way. The first two cells create and save two files, matrixmodel.py and algorithm.py, that are re-usable as stand-alone files. The switch from ISTA to FISTA is done in cell 8:

# Reconstruct with ISTA
[fhatISTA, costISTA] = fistaEst(y, forwardObj, tau, numIter, accelerate=False)

# Reconstruct with FISTA
[fhatFISTA, costFISTA] = fistaEst(y, forwardObj, tau, numIter, accelerate=True)

The output of the demo is a figure plotting the results of both algorithms over 100 iterations.
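The ISTA ↔ FISTA switch can be condensed into a few lines of NumPy. This is an illustrative, self-contained sketch: `fistaEst`, `matrixmodel.py`, and `algorithm.py` are the post's own files and are not reproduced here, so the function below is an assumption about their structure rather than a copy.

```python
import numpy as np

def soft(x, lam):
    """Soft-thresholding: sgn(x) * (|x| - lam)_+."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_fista(y, H, tau, num_iter=300, accelerate=True):
    """Minimize 0.5*||y - H f||^2 + tau*||f||_1 with ISTA or FISTA."""
    gamma = 1.0 / np.linalg.norm(H, 2) ** 2  # 1/L, L = lambda_max(H^T H)
    f = np.zeros(H.shape[1])
    s = f.copy()
    q = 1.0
    for _ in range(num_iter):
        f_new = soft(s - gamma * H.T @ (H @ s - y), gamma * tau)
        if accelerate:  # FISTA: momentum driven by the q-sequence
            q_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * q * q))
            s = f_new + ((q - 1.0) / q_new) * (f_new - f)
            q = q_new
        else:           # ISTA: q_t = 1 for all t, so s^t = f^t
            s = f_new
        f = f_new
    return f

# Tiny sparse-recovery demo with synthetic data
rng = np.random.default_rng(0)
H = rng.standard_normal((40, 100))
f_true = np.zeros(100)
f_true[[3, 17, 60]] = [1.0, -2.0, 1.5]
y = H @ f_true
f_hat = ista_fista(y, H, tau=0.05)
print(np.round(f_hat[[3, 17, 60]], 1))
```

With `accelerate=False` the same function runs plain ISTA, mirroring the single boolean switch in the notebook.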
https://www.physicsforums.com/threads/relative-sign-in-diagrams-for-z-boson-exchange-for-ww-scattering.909156/
# A Relative sign in diagrams for $Z$-boson exchange for $WW$ scattering

1. Mar 26, 2017

### spaghetti3451

Consider the scattering process $W^{+}W^{-} \to W^{+}W^{-}$. This process is mediated in the Standard Model by

1. a four-$W$ contact interaction,
2. $Z$-boson exchange,
3. Higgs exchange.

------------------------------------------------------------------------------------------------------------------------------

Let us consider the diagrams for $Z$-boson exchange: Why is there a relative minus sign between the matrix elements for the two diagrams?

2. Mar 26, 2017

### Deneen2000

Newbie here... have you considered that the relative minus sign between the two terms is dictated by Fermi-Dirac statistics? Something about a possible closed loop, which carries the extra minus sign. I hope this helps. Deneen

3. Mar 26, 2017

### spaghetti3451

Could you be more specific?

5. Mar 27, 2017

### ChrisVer

How can you use Fermi-Dirac statistics for $W$/$Z$ bosons? Also, I think the minus sign mentioned in your reference appears between the t- and u-channels...

Last edited: Mar 27, 2017

6. Mar 27, 2017

### CAF123

@spaghetti3451 Could you explain why you think there should be a relative minus sign in the first place for this process? There is no u-channel process here in the case of a $Z$ mediator.
https://diagram.nu/haiws6w/bad475-logarithmic-differentiation-formulas-pdf
# logarithmic differentiation formulas

The Natural Logarithmic Function: Integration. Until learning about the Log Rule, we could only find the antiderivatives that corresponded directly to the differentiation rules.

Key Point: a function of the form $f(x) = a^x$ (where $a > 0$) is called an exponential function. The function $f(x) = a^x$ for $a > 1$ has a graph which is close to the $x$-axis for negative $x$ and increases rapidly for positive $x$.

Rules for elementary functions: $Dc = 0$ where $c$ is constant, and $D(ax + b) = a$ where $a$ and $b$ are constants.

Given an equation $y = y(x)$ expressing $y$ explicitly as a function of $x$, the derivative $y'$ is found using logarithmic differentiation as follows: apply the natural logarithm $\ln$ to both sides of the equation and use the laws of logarithms to simplify the right-hand side. The idea of each method is straightforward, but actually using each of them …

If $f(x)$ is a one-to-one function (i.e. …
8 Miami Dade College -- Hialeah Campus: Differentiation Formulas / Antiderivative (Integral) Formulas.

Derivatives of logarithmic functions: recall that if $a$ is a positive number (a constant) with $a \neq 1$, then $y = \log_a x$ means that $a^y = x$.

Common integrals (indefinite integral; method of substitution). Integrals of exponential and logarithmic functions:

$$\int \ln x \, dx = x \ln x - x + C$$

$$\int x^n \ln x \, dx = \frac{x^{n+1}}{n+1} \ln x - \frac{x^{n+1}}{(n+1)^2} + C$$

$$\int e^x \, dx = e^x + C$$

$$\int b^x \, dx = \frac{b^x}{\ln b} + C$$

$$\int \sinh x \, dx = \cosh x + C$$

$$\int \cosh x \, dx = \sinh x + C$$

Key Point: $\log_a a = 1$.
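The antiderivative table above can be checked mechanically by differentiating each right-hand side (a quick SymPy sketch, not from the original notes):

```python
from sympy import symbols, log, exp, sinh, cosh, diff, simplify

x = symbols('x', positive=True)
n, b = symbols('n b', positive=True)

# (antiderivative, integrand) pairs from the table above
pairs = [
    (x*log(x) - x,                              log(x)),
    (x**(n+1)/(n+1)*log(x) - x**(n+1)/(n+1)**2, x**n*log(x)),
    (exp(x),                                    exp(x)),
    (b**x/log(b),                               b**x),
    (cosh(x),                                   sinh(x)),
    (sinh(x),                                   cosh(x)),
]
for F, f in pairs:
    # d/dx of each antiderivative must reproduce its integrand
    assert simplify(diff(F, x) - f) == 0
print("all antiderivative formulas check out")
```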
From the derivative formula

$$\frac{d}{dx}(\log_a x) = \frac{1}{x \ln a}$$

it follows that $\frac{d}{dx}(\ln x) = \frac{1}{x}$.

In mathematics, differentiation is a well-known term, generally studied in the calculus portion of mathematics; we have all studied and solved many of its problems in high school. Logarithmic differentiation provides a way to differentiate a function of the form $h(x) = g(x)^{f(x)}$, i.e. a function raised to a function power, and for some functions it may be the only method that works. The formula for logarithmic differentiation of $x^x$ is

$$\frac{d}{dx}\left(x^x\right) = x^x(1 + \ln x).$$

In general, for any base $a$, $a = a^1$ and so $\log_a a = 1$. Similarly, the logarithmic form of the statement $2^1 = 2$ is $\log_2 2 = 1$. In the same way that we have rules or laws of indices, we have laws of logarithms. You may have seen that there are two notations popularly used for natural logarithms, $\log_e$ and $\ln$. If you are not familiar with exponential and logarithmic functions, you may wish to consult the booklet Exponents and Logarithms, which is available from the Mathematics Learning Centre. (3.6 Derivatives of Logarithmic Functions, Math 1271, TA: Amy DeCelles.)

Overview of derivatives of logs: the derivative of the natural log is $(\ln x)' = \frac{1}{x}$, and the derivative of the log base $b$ is $(\log_b x)' = \frac{1}{\ln b} \cdot \frac{1}{x}$. Use $\log_b |x| = \ln|x| / \ln b$ to differentiate logs to other bases, and $b^p = e^{p \ln b}$ to differentiate powers. Also $Dx^p = p x^{p-1}$, $p$ constant. The function $y = \log_a x$, which is defined for all $x > 0$, is called the base-$a$ logarithm function. The function $f(x) = a^x$ for $0 < a < 1$ has a graph which is close to the $x$-axis for positive $x$. The graph of the constant function $f(x) = c$ is the horizontal line $y = c$, which has slope 0, so we must have $f'(x) = 0$.

3.10 Implicit and Logarithmic Differentiation: this short section presents two final differentiation techniques. These two techniques are more specialized than the ones we have already seen, and they are used on a smaller class of functions. Logarithmic differentiation requires deft algebra skills and careful use of the following unpopular, but well-known, properties of logarithms. Use it to avoid the product and quotient rules on complicated products and quotients, and also to differentiate powers that are messy. It can also convert a very complex differentiation problem into a simpler one, such as finding the derivative of $y = \dfrac{x\sqrt{2x+1}}{e^x \sin^3 x}$.

Example 3.80 (finding the slope of a tangent line): find the slope of the line tangent to the graph of $y = \log_2(3x+1)$ at $x = 1$.

Example 1: differentiate $[\sin x \cos(x^2)]/[x^3 + \log x]$ with respect to $x$. Solution: we can differentiate this function using the quotient rule together with the logarithmic-function derivatives.

For problems 1-3, use logarithmic differentiation to find the first derivative of the given function, e.g. $f(x) = \left(5 - 3x^2\right)^7 \sqrt{6x^2 + 8x - 12}$.

Example: find the derivative of $y = \sqrt[4]{\dfrac{x^2+1}{x^2-1}}$. We take the natural logarithm of both sides to get $\ln y = \ln \sqrt[4]{\dfrac{x^2+1}{x^2-1}}$, use the rules of logarithms to expand the right-hand side, then find $y'$ using implicit differentiation.

Logarithm formulas:

1. $y = \log_a x \iff a^y = x$ (for $a, x > 0$, $a \neq 1$)
2. $\log_a 1 = 0$
3. $\log_a a = 1$
4. $\log_a (mn) = \log_a m + \log_a n$
5. $\log_a \frac{m}{n} = \log_a m - \log_a n$
6. $\log_a m^n = n \log_a m$
7. $\log_a m = \log_b m \cdot \log_a b$
8. $\log_a m = \frac{\log_b m}{\log_b a}$
9. $\log_a b = \frac{1}{\log_b a}$
10. $\log_a x = \frac{\ln x}{\ln a}$

The logarithmic derivative formula can be applied more widely; for example, if $f(z)$ is a meromorphic function, it makes sense at all complex values of $z$ at which $f$ has neither a zero nor a pole. Further, at a zero or a pole the logarithmic derivative behaves in a way that is easily analysed in terms of the particular case $z^n$ with $n$ an integer, $n \neq 0$.

Integration guidelines: 1. Find an integration formula that resembles the integral you are trying to solve ($u$-substitution should accomplish this goal). 2. Learn your rules (power rule, trig rules, log rules, etc.).
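The $x^x$ rule quoted in these notes can be derived exactly as described: set $y = x^x$, take $\ln$ of both sides to get $\ln y = x \ln x$, and differentiate implicitly. A small SymPy sketch (illustrative, not from the original notes) confirms the result:

```python
from sympy import symbols, log, diff, simplify

x = symbols('x', positive=True)

# Direct differentiation of x**x ...
direct = diff(x**x, x)

# ... matches the logarithmic-differentiation result x^x * (1 + ln x)
claimed = x**x * (1 + log(x))
assert simplify(direct - claimed) == 0
print(claimed)
```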
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9615443348884583, "perplexity": 1106.278305302531}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991921.61/warc/CC-MAIN-20210516232554-20210517022554-00553.warc.gz"}
http://resolutionofriemannhypothesis.blogspot.com/2014/11/
## Saturday, November 29, 2014

### Do Numbers Evolve? (7)

In all that has been said so far, it is plain that we have in fact two distinct notions of number, which dynamically interact in experience.

For example the number 3 can be given an independent (cardinal) existence or alternatively an interdependent (ordinal) type definition.

Thus, from the first perspective, 3 is viewed in quantitative terms, which can be expressed in terms of its component units as 1 + 1 + 1. So the independent (absolute) units are all homogeneous in nature (thereby lacking any qualitative distinction).

However 3 can also be given an interdependent ordinal type definition, which is expressed in terms of its component members as 1st + 2nd + 3rd. Thus the units here are of an interdependent (relative) nature (thereby lacking quantitative distinction).

I have dealt with this latter ordinal (Type 2) notion of number in my previous entries. This established the vitally important fact that associated with it is a simple function (Zeta 2) with a corresponding set of zeros. These in effect show how to convert Type 2 qualitative type meaning in a (reduced) Type 1 quantitative manner.

So the various prime roots of 1 (excluding 1) can thereby be indirectly used to uniquely express the ordinal members of each prime. So again the 3 roots of 1 can be used to express the unique ordinal nature of the 1st, 2nd and 3rd members (in the context of 3).

However since all ordinal type relationships necessarily entail the fixing of position with respect to one member in an independent fashion, one solution, i.e. 1, is thereby trivial in this respect!

Once again therefore the importance of the (unrecognised) Zeta 2 zeros is that they enable the unique conversion of the qualitative (Type 2) ordinal nature of number in an indirect quantitative (Type 1) manner.

It is very important to appreciate this fact, as the Zeta 1 (Riemann) zeros can then be shown to play a direct complementary role with respect to number.
The general Zeta 2 equation can be expressed as follows:

ζ2(s) = 1 + s^1 + s^2 + s^3 + ..... + s^(t – 1) (with t prime). However ultimately this can be extended to all natural numbers.

Now this can equally be written as

ζ2(s) = 1 + s^(– 1) + s^(– 2) + s^(– 3) + ..... + s^(– (t – 1))

Therefore the zeros for this function are given as

ζ2(s) = 1 + s^(– 1) + s^(– 2) + s^(– 3) + ..... + s^(– (t – 1)) = 0.

The corresponding Zeta 1 equation, i.e. the Riemann zeta function, can be expressed by the infinite series

ζ1(s) = 1^(– s) + 2^(– s) + 3^(– s) + 4^(– s) + .....

And the zeros for this function are given by

ζ1(s) = 1^(– s) + 2^(– s) + 3^(– s) + 4^(– s) + ..... = 0.

Notice the complementarity as between both expressions! Whereas the first expression represents a finite, the second represents an infinite series of terms. Then whereas the natural numbers 1, 2, 3, .... represent dimensional powers with respect to the Zeta 2, they represent base quantities with respect to the Zeta 1; and in reverse fashion, whereas s represents a base quantity with respect to the Zeta 2, it represents a dimensional power with respect to the Zeta 1.

Now in the case of the Riemann (Zeta 1) function we start with the notion of numbers, expressed as the unique product of primes (representing base quantities), as independent entities. However this begs the obvious question of how to express the corresponding interdependence of these numbers in their consistent interaction with respect to the overall number system!

In other words, in dynamic interactive terms, quantitative independence (with respect to individual numbers) implies the opposite notion of qualitative interdependence (with respect to their overall relationship with each other).

Thus the Riemann (Zeta 1) zeros represent an infinite set of paired numbers that uniquely expresses in Type 2 fashion the qualitative interdependent nature of the number system.
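The finite Zeta 2 equation above can be explored numerically. A minimal sketch in Python (the function names are my own, not the author's): the zeros of 1 + s^1 + ... + s^(t – 1) turn out to be the t-th roots of unity other than the trivial root 1.

```python
import cmath

def zeta2(s, t):
    # The finite Zeta 2 sum: 1 + s^1 + s^2 + ... + s^(t - 1)
    return sum(s ** k for k in range(t))

def zeta2_zeros(t):
    # The t-th roots of unity, excluding the trivial root 1
    return [cmath.exp(2j * cmath.pi * k / t) for k in range(1, t)]

t = 5  # a prime, as in the text
for z in zeta2_zeros(t):
    # each non-trivial root annihilates the finite sum
    assert abs(zeta2(z, t)) < 1e-9
```

Note that the same roots also satisfy the version written with negative powers, since the reciprocal of a root of unity is again a root of unity.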
So just as the Zeta 2 zeros, as we have seen, convert (Type 2) qualitative ordinal type natural number notions in a (Type 1) quantitative manner, in reverse fashion the Zeta 1 zeros convert (Type 1) quantitative cardinal type prime notions in a (Type 2) qualitative manner.

Indeed we could rightly say - though Conventional Mathematics has no way of adequately interpreting this notion - that the Zeta 1 zeros express the collective ordinal nature of the primes (with respect to the natural number system).

Indeed in even simpler terms, the Zeta 1 zeros express the holistic basis of the cardinal number system.

## Friday, November 28, 2014

### Do Numbers Evolve? (6)

As we have seen there are two possible extremes in terms of the appreciation of number.

At one extreme we attempt to separate polarities (such as external and internal, quantitative and qualitative) in an absolute independent manner. This leads to the apparent existence of numbers as absolute fixed entities (of phenomenal form). This in fact represents the abstract analytic approach to number that characterises conventional mathematical interpretation.

At the other extreme we attempt to view such opposite polarities ultimately as totally interdependent with each other, leading to the appreciation of number as pure energy states (ultimately of an ineffable nature).

For simplicity I refer to the first as the analytic aspect of interpretation (identified with linear reason) and the second as the corresponding holistic aspect (identified with pure intuition, which indirectly has a circular paradoxical interpretation in rational terms).

Actual experience of number is implicitly of a relative nature that necessarily falls between the two extremes. So (absolute) analytic interpretation represents just one limiting perspective that can be approached (but never fully achieved).
Likewise (purely relative) holistic interpretation represents the other limiting perspective that can be approached (but again never fully achieved). So both aspects are in fact controlled by a fundamental uncertainty principle.

So the attempt to achieve analytic understanding (in an absolute manner) tends to blot out recognition of the equally important holistic aspect; equally the attempt to achieve holistic understanding (in a purely relative manner) tends to block out corresponding recognition of the analytic aspect.

Conventional Mathematics is however characterised by such an extreme attention on the analytic aspect that the holistic aspect (which in truth is equally important) is not even formally recognised.

So it must be said - and continually repeated - that current Mathematics, despite its admitted great achievements in the quantitative arena, is hugely unbalanced and thereby hugely distorted in nature.

Now, properly understood, the zeta zeros (Zeta 1 and Zeta 2) represent the holistic extreme with respect to mathematical interpretation (where it approaches a purely relative state).

Again it might be instructive to illustrate this with respect to the first of the Zeta 2 zeros, indirectly represented through the two roots of 1 (in the context of 2).

So these two roots, + 1 and – 1, now relate directly to the opposite polarities (such as external and internal) that condition all phenomenal experience.

Now when experience becomes highly refined in an increasingly dynamically interactive manner, one better realises that each pole only has meaning in terms of the other. So as soon as one posits understanding with respect to one pole, e.g. a number in objective terms, one quickly realises that this has no meaning in the absence of the corresponding perception of number that is opposite and thereby negative.
So now one posits the internal perception of number, before again quickly realising (directly through intuition) that it has no meaning independent of its external object.

Thus a ceaseless dynamic interplay takes place in experience as between two opposite poles that momentarily are identified as separate in quantitative terms (in a fixed rational manner). However these poles are then equally experienced as complementary and ultimately identical (in a directly intuitive manner). So through the interplay, the opposite poles continually keep switching as between their positive and negative identities.

Now indirectly this holistic understanding can be represented as + 1 – 1 = 0. And it must be clearly recognised that each pole (external and internal) has both positive and negative states that continually alternate between each other.

So here we combine the momentary quantitative existence of each pole as independent with the combined qualitative existence of both poles as interdependent. And such interdependence = 0 (which in holistic terms entails a purely qualitative meaning, i.e. without quantitative identity).

And if we take any prime number and then express its prime roots, all of these (except 1) will be unique in nature and cannot recur with any other prime.

So in holistic terms, each prime number is thereby uniquely expressed through its ordinal members, indirectly expressed in a quantitative manner by all its roots (except 1).

And the momentary separate identity of each root (as quantitative and independent) is perfectly balanced in each case by the collective identity of all roots (as qualitative and interdependent).

Now the ultimate limit of such understanding approaches a timeless (and spaceless) state where we can no longer distinguish the (separate) quantitative identity of each member from the (collective) qualitative identity of all members. And this represents ineffable reality (of pure emptiness).
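The two claims in the last few paragraphs - that the roots of each prime (other than 1) are unique to it, and that the full set of roots always combines to zero - can be checked directly. A minimal sketch, assuming the usual representation of the roots as points on the unit circle in the complex plane:

```python
import cmath

def roots_of_unity(n):
    # The n roots of x^n = 1, as points on the unit circle
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

def close(a, b, tol=1e-9):
    return abs(a - b) < tol

# For any n, the separate roots collectively sum to zero
for n in (2, 3, 5, 7):
    assert abs(sum(roots_of_unity(n))) < 1e-9

# For distinct primes p and q, only the root 1 is shared,
# so every other root of p is unique to p
p, q = 3, 5
shared = [r for r in roots_of_unity(p)
          if any(close(r, s) for s in roots_of_unity(q))]
assert len(shared) == 1 and close(shared[0], 1)
```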
So properly understood, the evolution of the number system spans the holistic extreme of pure ineffable reality (of emptiness) and the corresponding analytic extreme of absolutely fixed phenomenal reality (of form).

Thus properly understood in experiential terms, both analytic and holistic aspects interact as matter and energy in the ceaseless transformation of number.

Now we have seen in Type 1 terms that all natural numbers are viewed in quantitative terms as the unique product of primes. So for example 6 is uniquely represented as 2 * 3.

However there is a complementary Type 2 approach where the natural numbers in ordinal terms are the building blocks of each prime.

Besides prime numbers (as dimensions) we also have natural numbers as dimensions. However the roots of these natural numbers can be directly derived from the constituent primes.

So in Type 1 terms, 2^1 * 3^1 = 6^1.

Equally, in Type 2 terms, 1^2 * 1^3 = 1^6.

Then when we find the six roots of 1, holistic order is fully preserved in that these roots, while preserving a relative quantitative independence, can again be collectively combined to give a total of zero (representing their qualitative interdependence).

In this way the primes can be seen to be unique in both Type 1 and Type 2 terms (though the order of relationship with the natural numbers is inverted in each case). In fact both relationships - ultimately expressing the two-way interdependence of primes and natural numbers - mutually imply each other.

## Thursday, November 27, 2014

### Do Numbers Evolve? (5)

I will attempt here to provide additional clarification on the holistic - as opposed to the standard analytic - interpretation of number.

Once again analytic interpretation is by its very nature linear (i.e. 1-dimensional), thus enabling numbers to be interpreted with respect to their reduced quantitative values. This entails interpretation within single polar reference frames (e.g. as unambiguously objective) in an independent absolute manner.
Thus with 1-dimensional interpretation (i.e. single poles of reference), dynamic interdependence (resulting from the interaction of more than one pole) cannot properly be interpreted and is thereby reduced in an independent manner.

Therefore at a minimum we require at least two interacting polar frames of reference to establish genuine interdependence. And in short, holistic appreciation relates to the explicit recognition of the nature of such interdependence.

Now all holistic interdependence necessarily starts with the initial recognition of independence (which is consciously posited as the 1st dimension). However 2-dimensional appreciation combines this 1st dimension (entailing analytic type appreciation) with a second dimension that entails the negation (of what has been posited).

Now this in fact is deeply relevant to the multiplication of two numbers.

In standard analytic terms, when we multiply - say - 3 * 5, the answer is given in a reduced 1-dimensional fashion as 15 (i.e. 15^1). However a simple geometrical representation of this relationship will suggest that through multiplication the nature of the units has changed from linear (1-dimensional) to square (2-dimensional) format. Thus there is something fundamentally missing from the conventional mathematical treatment of multiplication.

So when we probe more deeply into the nature of this simple operation (i.e. 3 * 5) we find that it cannot be properly explained in the absence of both the quantitative notion of independence and the qualitative notion of interdependence respectively.

So imagine 5 units laid out in 3 separate rows (in a rectangular fashion)! Now this implies that we recognise each unit in a (separate) independent fashion. However to then multiply by 3 we must also recognise that the units in each row share a common identity (thus enabling each row to be placed in correspondence with the others).
Now this recognition of interdependence (in a mutual shared identity of each unit) literally entails the (temporary) negation with respect to the (conscious) recognition of a posited independent identity. Thus the qualitative recognition of a shared identity (through negation of each separate unit) in fact implies the 2nd dimension of understanding in this case.

Thus a comprehensive appreciation of the multiplication of 3 * 5 entails recognition of the quantitative independence of each individual unit with the qualitative interdependence of all units (i.e. as sharing a common quality).

So comprehensive appreciation is here 2-dimensional, entailing a 1st dimension (relating to independent recognition) and a 2nd dimension (relating to qualitative interdependence in a mutual common recognition).

Now if we were to properly explain - say - 3 * 5 * 4, this would entail 3-dimensional interpretation.

So once again the 1st dimension would relate to the standard analytic appreciation of 60 independent units. However we would now have two layers of interdependence to appreciate. So for example if we arranged 5 units each in 3 rows on a bottom layer, this would entail - as before - the 1st level of interdependence. Then we could stack each of these rectangles four units high, creating a second compounded level of interdependence.

Now remarkably the various roots of 1 (when appropriately interpreted) provide the means to properly resolve the true nature of multiplication.

So the multiplication of 2 numbers requires 2-dimensional interpretation (with a 1st and 2nd dimension applying); the multiplication of 3 numbers would then entail 3-dimensional interpretation (with a 1st, 2nd and 3rd dimension applying). In general the multiplication of n numbers would require n-dimensional interpretation (with a 1st, 2nd, 3rd, ...., nth dimension applying).
Now - what I refer to as - the Zeta 2 zeros relate to all the dimensions (other than the 1st), which provide the general means for holistically interpreting all such relationships.

So 1 = x^n; thus 1 – x^n = 0.

Therefore (1 – x)(1 + x^1 + x^2 + x^3 + .... + x^(n – 1)) = 0.

Now 1 – x = 0 represents the trivial solution (i.e. x = 1), which relates to the 1st dimension and the initial recognition of the independence of all units.

However 1 + x^1 + x^2 + x^3 + .... + x^(n – 1) = 0 provides the equation for establishing the true holistic nature of all higher dimensions.

The simplest possible case (which serves as a holistic template for all others) occurs when n = 2.

So here 1 + x^1 (i.e. 1 + x) = 0; therefore x = – 1.

This is the first of the Zeta 2 zeros and has a vitally important role to play. Basically it serves to express (in an indirect manner) the nature of holistic interdependence in the 2-dimensional case.

Now if we look for the extreme example (of the most highly refined intuitive understanding possible), then – 1 (i.e. the 1st non-trivial zero) can holistically be understood as representing a pure psycho-spiritual energy state (with a complementary interpretation as a pure physical energy state).

So just as matter and anti-matter particles, when in contact, will fuse in a pure physical energy state, likewise this is true of number (which represents the encoded nature of reality in both physical and psychological terms).

Equally all other Zeta 2 zeros can be understood (in their fullest experiential attainment) as representing in holistic terms pure energy states. What this implies is that one can then directly intuit in experience the purely relative nature of an ever increasing number of different frames. This requires therefore great transparency with respect to understanding, where phenomenal rigidity is greatly eroded.

However the Zeta 2 zeros equally play a remarkably important role with respect to our everyday understanding of number (that is not at all well realised).
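The factorisation used above, 1 – x^n = (1 – x)(1 + x^1 + .... + x^(n – 1)), which separates the trivial solution x = 1 from the remaining dimensions, can be verified numerically; a short sketch:

```python
def lhs(x, n):
    return 1 - x ** n

def rhs(x, n):
    # (1 - x) times the geometric sum 1 + x + ... + x^(n - 1)
    return (1 - x) * sum(x ** k for k in range(n))

# The identity holds for arbitrary x and n
for n in range(2, 8):
    for x in (0.5, -1.25, 2.0):
        assert abs(lhs(x, n) - rhs(x, n)) < 1e-9

# x = 1 kills the first factor (the trivial 1st dimension);
# the remaining n - 1 zeros come from the second factor.
assert lhs(1, 6) == 0
```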
If we return again to the simplest case of 2, we see that this is identified with a 1st and 2nd dimension that can be holistically represented as + 1 and – 1 respectively.

Now in qualitative terms + 1 simply relates to analytic type understanding (where polar frames are understood as separate). However – 1 represents the unconscious negation of such understanding, leading to the directly intuitive realisation (at an unconscious level) of their mutual identity.

Now implicitly such understanding is required to understand the ordinal relationship of 2 members (of a group of 2). Thus the ordinal identification of 1st and 2nd (with respect to this group of members) implicitly entails corresponding realisation of the first two zeros (trivial and non-trivial).

Likewise the identification of the 1st, 2nd and 3rd (in the context of 3 members) implicitly entails corresponding realisation of the zeros corresponding to n = 3 (with again one trivial zero corresponding to the root 1 and the other two non-trivial zeros corresponding to the other two roots).

And in general the ordinal identification of 1st, 2nd, 3rd, ...., nth (in the context of n members) implicitly entails corresponding realisation of the zeros corresponding to n (with again one trivial and the other n – 1 corresponding to non-trivial solutions).

Thus to put it briefly, the Zeta 2 zeros intimately underlie our everyday analytic appreciation of the ordinal nature of number (as its unrecognised holistic basis). And this unrecognised holistic basis equally implies its unrecognised unconscious basis! So without implicit interaction of this deepest holistic (unconscious) layer of understanding, the conventional ordinal appreciation of number would simply not be possible.

One important consequence of this is that it demonstrates the merely relative nature of ordinal understanding. For example we might initially think that the notion of 2nd has an unambiguous identity.
However 2nd (in the context of 2) is distinct from 2nd (in the context of 3), which is distinct from 2nd (in the context of 4), and so on! Thus the notion of 2nd - as indeed all other ordinal number notions - can potentially be given an unlimited number of possible definitions.

## Wednesday, November 26, 2014

### Do Numbers Evolve? (4)

I have referred repeatedly to the dynamic interaction as between quantitative and qualitative aspects with respect to number.

Ultimately this interaction relates to the interplay of both finite (actual) and infinite (potential) notions, which in psychological terms relate to both conscious and unconscious aspects of understanding respectively.

So mathematical objects such as numbers possess an actual existence from a finite (conscious) perspective, directly mediated in rational terms; however equally they possess a potential existence from an infinite (unconscious) perspective, directly mediated in an intuitive manner. And both of these ceaselessly interact dynamically in experience, leading to continual transformation with respect to such objects.

So properly, i.e. in a dynamic interactive manner, number thereby necessarily evolves. And this relates not just to the nature of (internal) psychological understanding, but also to the external objects (both of which - by definition - are now necessarily relative to each other).

However as an alternative to the sole use of quantitative and qualitative terms, I would suggest the corresponding pairing of analytic and holistic (which perhaps appears a little more scientific).

However it is important to point out that I am using analytic in the broader sense in which the term is commonly used in science, which equates directly with a reduced quantitative interpretation of relationships! Now analytic also has a well-defined narrower meaning within Mathematics in relation to the treatment of infinite series and limits.
However suffice it to say that within Mathematics, the more restricted uses of the terms "analytic" (and "analysis") are also analytic in the broader sense of the term (in that they are defined solely within a reduced quantitative context).

Therefore to return to my basic position: properly understood, all number has both analytic and holistic aspects (in dynamic relationship with each other).

From one important perspective, this is true internally for each number. So, as we have seen, the number "2" for example entails both the analytic aspect of "2" as a specific number quantity in cardinal terms, and the holistic aspect of "2" (i.e. "twoness") as collectively applying to all possible instances of "2". So properly understood these two notions are actual and potential with respect to each other.

And because in the dynamics of experience (like approaching a crossroads from opposite directions) polar reference frames continually switch, there is also an important sense where "2" now refers to a specific number quality (i.e. in the ordinal notion of 2nd), while "2" now attains a collective meaning in the cardinal notion of dimension that actually applies to all numbers.

Thus in the dynamics of the experience of each number, there is a ceaseless two-way interplay of both analytic and holistic type understanding, through which we are enabled to switch seamlessly as between cardinal and ordinal type appreciation (with respect to both objects and dimensions).

Then from the other important perspective, similar dynamics apply to the number system as a whole. This then enables us to consistently combine both the cardinal and ordinal identities of all numbers (not in relative isolation) but in full relationship with other numbers.

Now the precondition for such consistency is that a seamless means exists for switching as between both the Type 1 and Type 2 aspects of the number system.
Thus from one perspective we need to be able to seamlessly convert the Type 2 aspect in a Type 1 manner. Then equally, from the alternative perspective, we need to be able to seamlessly convert the Type 1 aspect in a Type 2 manner.

Though its significance seems to me to be completely missed by the mathematical community, I will start with the first of these conversions (which in fact is relatively easy to appreciate). Now we will illustrate here again for convenience with respect to the number "2".

So the standard analytic definition of "2" (as a specific number quantity) is given through the Type 1 aspect as 2^1. So once again the Type 1 aspect is always defined with respect to the default dimensional value of 1.

The corresponding holistic definition of "2" (as the collective number quality of "twoness") is given through the Type 2 aspect as 1^2. "2" now refers directly to a number dimension (rather than a base quantity).

Thus to convert this Type 2 aspect in Type 1 terms, we need in effect to obtain the square root. So in general terms x^n = 1, with in this case x^2 = 1. So x = + 1 and – 1.

We have now moved to a circular definition of number (with both + 1 and – 1 lying on the unit circle in the complex plane). However these two results are given merely an analytic quantitative interpretation in conventional mathematical terms. The corresponding holistic meaning is highly revealing, requiring in effect a uniquely distinctive manner of mathematical interpretation.

+ in this context entails the psychological notion of positing (i.e. making conscious). – however entails the corresponding notion of negation (i.e. of what has been consciously posited), thereby representing unconscious understanding.

When understanding is especially refined, as with the fusion of matter and anti-matter particles in physics, unconscious negation (of what is consciously posited) will approach full attainment, resulting in a pure intuitive understanding (representing a psycho-spiritual energy state).
So strictly speaking the holistic appreciation of each number represents a pure energy state (with complementary physical and psychological meanings).

Thus in effect we have two extremes with respect to the understanding of number (and remember, in dynamic terms, number as object has no strict meaning independent of such understanding)!

Thus we can appreciate number in the standard analytic fashion as an absolutely existing quantitative form (that never changes). Here it is viewed as nothing in qualitative terms.

However from the opposite extreme we can appreciate number in the unrecognised holistic fashion as approaching a pure energy state (where it is nothing in quantitative terms).

However properly understood, number experience entails an interaction somewhere between both extremes, where both quantitative aspects (as form) and qualitative aspects (as energy) ceaselessly interact, leading thereby to a continual transformation in the nature of each number.

So once again we have the analytic quantitative extreme (recognised through the Type 1 aspect). Here 2 = 1 + 1 (strictly 2^1 = 1^1 + 1^1). So here the two units are defined in a homogeneous quantitative manner (i.e. without any distinctive quality).

Then in the Type 2 system 2 = 1st + 2nd (so both units are now defined without any quantitative distinction!)

Then when we convert the Type 2 to the Type 1, we can indirectly represent this important reality consistently in a quantitative manner. So 1st and 2nd are now represented as + 1 and – 1 respectively. And + 1 – 1 = 0!

So the task of converting consistently from Type 2 to Type 1 implies that we can represent the ordinal members of each group uniquely by a set of circular numbers (lying as roots on the unit circle) that always add up to zero.

And this is where the prime numbers can be seen to have an equally valid Type 2 (as well as Type 1) identity.
From the Type 1 perspective, the unique importance of the primes comes from viewing them as the "building blocks" of the natural number system. So all natural numbers (other than 1) can be uniquely expressed as the product of prime factors.

However, the primes have an equally important role in Type 2 terms, where their directional link to the natural numbers is completely reversed.

So from the Type 2 perspective, each prime can be uniquely expressed in an ordinal natural number fashion by its various roots (again except 1). So for example if we take 5 as a prime, it can be uniquely expressed in terms of its 5 roots (excluding 1, which is common to all roots). Now these 5 roots provide an indirect (Type 1) means of uniquely expressing in quantitative terms the various natural number members of 5 (i.e. 1st, 2nd, 3rd, 4th and 5th respectively) in an ordinal manner.

However there is an obvious paradox with respect to the Type 1 and Type 2 approaches. In the first case, each natural number (except 1) is uniquely defined by its prime members in cardinal terms. In the second case, each prime is uniquely defined by its natural number members (except 1) in ordinal terms (indirectly expressed in a quantitative manner through its prime roots).

This leads directly to the holistic qualitative recognition of the two-way interdependence of primes and natural numbers in both cardinal and ordinal terms. In other words a holistic synchronicity, entailing the two-way interaction of primes and natural numbers (which is directly qualitative in nature), underlies the deepest workings of the number system.

However though obvious when viewed from the appropriate perspective, the realisation of this simple fact will permanently elude a mathematical profession that reduces interpretation of number in a merely quantitative fashion.

## Tuesday, November 25, 2014

### Do Numbers Evolve? (3)

Just to recap briefly from yesterday's entry! Every number has two distinctive meanings.
So 2, for example, represents a specific quantity in cardinal terms; equally, however, it represents a collective dimensional quality as "twoness" (that potentially applies to all specific quantities). And both of these meanings are, in experiential terms, dynamically inseparable from each other. So every number represents a dynamic interaction with respect to both its quantitative and qualitative aspects (which are complementary). Then, in the dynamics of experience, reference frames continually switch. So 2 now attains a specific quality as 2nd (i.e. the ordinal nature of 2), while the dimensional notion of 2, in complementary fashion, assumes a cardinal identity (which is the conventional meaning associated with a number representing a power or exponent). So rather than just one natural number system that can be unambiguously defined in rigid absolute terms as 1, 2, 3, 4, ..., we now have two complementary aspects of the number system which dynamically interact with each other. Thus, to identify number with its specific quantitative aspect, we assume a default fixed dimensional value of 1. Therefore, from this perspective, the numerical value of an expression entailing higher powers (i.e. dimensions) is thereby reduced in a 1-dimensional manner. This quantitative aspect is then represented as:

1^1, 2^1, 3^1, 4^1, ...

Then, in reverse manner, to identify number with its collective qualitative aspect, we now in complementary fashion keep the base number fixed at 1, while allowing the dimensional value to vary through the natural numbers. So from this alternative qualitative perspective, the number system is defined as:

1^1, 1^2, 1^3, 1^4, ...

I refer to these two aspects as Type 1 and Type 2 respectively. Both can only be properly understood (thereby mirroring authentic experience) as in dynamic complementary relationship with each other, i.e. as quantitative to qualitative (and qualitative to quantitative) respectively.
The quantitative (cardinal) aspect is defined strictly without qualitative meaning. Thus from this perspective 2 = 1 + 1 (i.e. 2^1 = 1^1 + 1^1). The two units here are fully homogeneous in quantitative terms (thereby lacking qualitative distinction). It is the reverse from the opposite ordinal perspective. Here the two units are represented in qualitative terms as 1st and 2nd (thereby lacking any quantitative distinction). When one clearly realises that in truth all number operations properly entail both quantitative and qualitative aspects in dynamic relationship with each other, then the key issue arises as to consistency as between both sets of meanings. This entails that a satisfactory way of converting from quantitative to qualitative (and qualitative to quantitative respectively) must necessarily exist, if we are to maintain true confidence in all subsequent operations. And this is what the zeta zeros essentially relate to, though this is not at all yet realised, due to the strongly reduced (i.e. merely quantitative) nature of accepted mathematical interpretation. However, just as we have two aspects to the number system, likewise properly we should have two sets of zeta zeros. I refer to these two sets as Zeta 1 and Zeta 2 respectively. Now the first set (i.e. Zeta 1) can be identified directly with the Riemann zeros. However, there is an alternative - and simpler - set (which I refer to as Zeta 2), whose true function is not yet properly realised. Referring back once again to yesterday's blog, I identified two key quantitative/qualitative type relationships with respect to the number system. So again, illustrating with reference to the number "2", I stated that two notions, which are quantitative and qualitative with respect to each other, are necessarily involved. Thus we have the specific quantitative notion of 2 in relation to a general collective qualitative notion (as "twoness").
Then, when we switch the frame of reference (as in the manner of approaching a crossroads from the opposite direction), we now obtain a specific qualitative notion of 2 (as the ordinal notion of 2nd) with respect to a collective quantitative notion of 2 (as representing power or dimension). Note once again how these dynamics are completely short-circuited from the conventional perspective, where number (both as base and as dimension) remains merely quantitative in meaning. So ordinal notions in conventional mathematics are treated in a merely reduced fashion as "rankings" based on cardinal understanding of a quantitative nature! Now in basic terms, as we shall see, the Zeta 2 zeros refer directly to conversion as between quantitative and qualitative interpretation with respect to individual numbers (such as "2"). However, we also saw that a more general problem exists with respect to the collective relationship of numbers to the number system (in quantitative and qualitative terms). The quantitative general notion of "a number" has strictly no meaning in the absence of the corresponding qualitative notion of "numberness" (that potentially applies in all cases). Now the famed Riemann zeros (which I refer to as the Zeta 1 zeros) properly relate to this more general problem with respect to the number system as a whole, of ensuring consistency with respect to both the quantitative and qualitative interpretation of number. In direct terms they provide a means of converting as between quantitative and qualitative type usage. Put another way - both with respect to any specific number and numbers generally - the notion of quantitative independence has no meaning in the absence of the corresponding notion of qualitative interdependence (that thereby enables numbers to be related to each other).
Therefore we can only properly understand the number system in a dynamic relative manner (entailing the complementary notions of independence and interdependence respectively). Once again, even momentary reflection on the matter should immediately suggest that there is something fundamentally wrong with conventional mathematical interpretation. We insist on interpreting numbers in an absolute independent manner (i.e. with respect to their mere quantitative characteristics). However, this begs the obvious question of how numbers can then be related to each other (which assumes some quality of interdependence). Because such reduced interpretation has become so ingrained through an unquestioned consensus, the mathematical community remains blind to this most fundamental of all issues!

## Monday, November 24, 2014

### Do Numbers Evolve? (2)

In the last blog entry I argued that the conventional belief in the absolute existence of number is untenable from an experiential perspective. All numbers possess both external (objective) and internal (mental) aspects which dynamically interact. Thus the conventional view of number represents but a special limiting case where both poles are fully abstracted from each other. Now this cannot of course completely occur in experiential terms (which would render understanding of number impossible); however, it can be approached in a relative manner. Thus the conventional absolute view of number (as rigid unchanging entities) is appropriately understood as just one special - though admittedly important - limiting case with respect to interpretation. As I have frequently stated, this is directly associated with linear (1-dimensional) understanding, based on interpretation within single isolated polar reference frames.
So, for example, the conventional treatment of number in merely quantitative terms - rather than as a relationship entailing both quantitative and qualitative aspects - represents such linear interpretation. However, when we recognise the truly relative nature of mathematical understanding, as the interaction of opposite poles such as external and internal, this opens up entirely new vistas where number can be given a potentially unlimited series of dimensional interpretations. So we move here from the extremely restricted default position of Conventional Mathematics, i.e. as 1-dimensional in absolute terms, to an unlimited number of partial relative interpretations, where each number represents a unique dynamic configuration. Therefore, from this relative perspective, if the interpretation can change as between differing numbers (representing dimensions), then the objective reality likewise necessarily changes with respect to all these numbers. So from this enhanced dynamic perspective, the dimensional notion of number represents perpetual evolution with respect to its very nature. Once again, due to the restricted quantitative bias of Conventional Mathematics, this dynamic notion of number evolution is entirely edited out of the picture. To give a simple example, when one raises the number 2 to a non-unitary power (i.e. dimension) such as 2, the result is given in a merely reduced quantitative fashion (i.e. as 1-dimensional)! Thus 2^2 = 4 (i.e. 4^1). Now one can easily appreciate, when seen in geometrical terms, that 2^2 represents square rather than linear units. However, this qualitative change in the nature of the units involved is simply ignored in conventional mathematical terms (with a merely reduced quantitative interpretation remaining). In fact this reduced view is graphically illustrated in the following quote from Alain Connes (from Karl Sabbagh's "Dr. Riemann's Zeros", p. 205).
“It really is a fantastic step to understand that the square of a number - which is just a geometrical square - and the cube, which is just a geometrical cube - can be added together, even though you would say, "But one has dimension the length squared and the other the length cubed" and you would never add things which have different dimensions. So algebra is an amazing achievement, and once you have formulated things in algebraic terms then they take on a life of their own.” This brings me directly to consideration of the second fundamental set of polarities that govern all mathematical experience, i.e. whole (collective) and part (individual), which in a very direct way determine this key relationship as between quantitative and qualitative. So, when one recognises a number, for example "2", both individual and collective aspects are necessarily involved (which are quantitative and qualitative with respect to each other). Thus, in external terms, the individual number object "2", which has an actual existence, has no meaning in the absence of the collective number notion of "2" (that potentially applies to all specific instances of "2"). Put another way, the recognition of "2" in any specific case requires the corresponding notion of "twoness" (that collectively applies to all such possible cases). Then, from the corresponding internal perspective, the individual number perception of "2" - again with an actual existence - has no meaning in the absence of the corresponding concept of "2" (i.e. twoness), with a general potential applicability to all possible cases of "2". So when the individual recognition of "2" is quantitative (in actual terms), the corresponding collective recognition of "2" (or twoness) is - relatively - of a qualitative potential nature. However, as always in the dynamics of experience, reference frames can switch, with the individual recognition qualitative and the collective recognition now of a quantitative nature.
In effect this qualitative recognition corresponds with the ordinal notion of "2" (as 2nd). Likewise, from this perspective, the collective recognition of "2" (as twoness) is now - relatively - quantitative (applying to all actual instances of "2"). Therefore, in the dynamics of experience, one keeps switching as between both the quantitative and qualitative notions of "2" in individual terms, and equally the quantitative and qualitative notions of "2" (as twoness), with respect to both cardinal and ordinal usage. And this happens both externally, with respect to objective recognition, and internally (with respect to perception and corresponding concept). A similar dynamic interaction is involved with respect to the recognition of any specific number. We then move on to consideration of the general recognition of number. Once again this will combine both internal (mental) and external (objective) aspects. And again, the general recognition of an individual number integer in a cardinal quantitative manner has no strict meaning in the absence of the collective qualitative notion of number (as "numberness") that potentially applies in all specific cases. Then, when the reference frames switch, we attain the individual recognition of that number in a corresponding ordinal manner (which is qualitative in nature). Then - in relative terms - the collective notion of number attains a quantitative interpretation (as applying to all actual numbers). Now with respect to conventional mathematical interpretation, all these mutually interacting dynamics are short-circuited in a grossly reduced fashion. Thus, as we have already seen, the external/internal interaction is disregarded, with numbers viewed absolutely in objective terms (with a corresponding absolute mental interpretation).
Likewise, the individual number "2" is interpreted strictly with respect to its quantitative nature, while the general notion of "2" (insofar as it is recognised) is treated merely with respect to actual occurrences (that are likewise interpreted in a merely quantitative manner). It is somewhat similar with respect to the general recognition of a number, with both individual and collective aspects treated in a merely reduced quantitative manner. However, if we are to properly understand the key role that the zeta zeros (both Zeta 1 and Zeta 2) play with respect to the number system, we have to inherently appreciate it in a dynamic interactive manner (where both quantitative and qualitative aspects are equally recognised).

## Wednesday, November 19, 2014

### Do Numbers Evolve? (1)

On first impression, this might seem to most people a somewhat ridiculous question. Indeed the conventional view - which gives great comfort to so many practitioners in our fast-changing world - is that number represents the only thing we can rely on to remain absolutely the same. So from this perspective, the prime numbers, for example, were the same yesterday and today, and will forever remain so, here and indeed anywhere else in the Universe (where intelligent beings exist to discover them)! However, on closer examination, this conventional view can be convincingly shown to represent but an illusion (which admittedly, in a reduced quantitative sense, has proved of enormous benefit). In physical terms, to accurately classify any object we must be able to identify it with a universal class to which it belongs. To give a trivial example, to speak unambiguously with respect to a fruit such as a strawberry, we need to be able to define accurately a universal class to which all strawberries belong.
However, we will eventually discover that at the margins difficult problems of identification exist, with a certain degree of arbitrariness as to whether a particular example correctly falls into the relevant class. So the boundaries of our definition are necessarily vague and approximate. Now we might initially think that this problem does not exist in the mathematical world of "abstract" objects, which thereby frees us from such physical restraints. However, paradoxically, on closer reflection a much greater degree of mystery attaches to the universal class constituting "number" than to any physical class (such as strawberries). So to unambiguously recognise a particular number we should be able to define the universal class of "number". However, this is a far more difficult task than one might imagine. For example, the development of Mathematics has seen a steady increase in the somewhat exotic objects that are universally recognised as numbers. Initially, number was identified solely with the natural (counting) numbers, which are solely positive. Then gradually, after much resistance, their negative counterparts also came to be included. A further advance then led to the inclusion of rational fractions (such as 1/2) in the number system. Then the Pythagoreans, in investigating the square root of 2, were, to their horror, led to discover a new type of irrational number. Such irrational numbers have now been further refined to include both algebraic (such as √2) and transcendental numbers (such as π). Another major development, with respect to the solutions of polynomial equations, led to the recognition of imaginary numbers (based on i as the square root of –1). And in more recent times, further developments led to the inclusion of transfinite numbers and a whole new strange class of numbers (based on the primes) referred to as p-adic numbers.
So over the millennia we have seen a remarkable evolution in the objects that are now recognised as legitimately belonging to the number system. Thus it seems to me reasonable to assume that further extension is likely to take place in the future, with as yet unknown number objects becoming included. There are clearly, then, very fuzzy boundaries as to what might be considered as number. To put it more bluntly, we are unable to properly define what number is, and yet we attempt to claim an absolute unambiguous identity for every specific number object encountered. There is, though, no proper (epistemological) justification for such certainty. So what really characterises - as I hope to presently demonstrate at length - the apparent absolute nature of mathematical objects such as numbers is a largely unquestioned mass consensus, which at bottom is geared to the preservation of a considerable illusion regarding their true nature, and indeed the true nature of Mathematics generally. Let me illustrate this now with respect to one of the simplest, most important and best known numbers, i.e. "2". Now again, according to the general mathematical consensus, 2 has an absolute rigid identity that can be successfully abstracted from our changing everyday physical world. This view, expressed in its extreme fashion for example by G. H. Hardy, looks on numbers as eternally existing in some kind of mathematical Heaven (ungoverned by the laws of space and time). However, on closer reflection, this view can be shown to be quite untenable. The starting point here for more authentic understanding is the recognition that Mathematics is intimately bound up with experience. So we start by examining how the recognition of number experientially unfolds. Now all experience - including of course mathematical - is governed by twin sets of fundamental polarities that dynamically interact. The first of these relates to external (objective) and internal (mental subjective) polarities.
Therefore the experience of the number "2" entails both an external pole (i.e. as object) and a corresponding internal pole (as mental perception). So the experience of the number "2" entails a dynamic interaction of both object and perception (which cannot be meaningfully abstracted in an absolute manner from each other). Put another way, strictly speaking a mathematical object such as "2" has no meaning independent of the corresponding mental perception of "2", with both poles in tandem properly constituting an interactive dialogue of number meaning. In other words, all number understanding has a merely relative validity. Conventional Mathematics is in fact directly based on a reduced interpretation of such experience. Here the two poles are viewed with respect to their absolute separation (though implicitly, in experiential terms, this is not possible). Thus the number object (in this case "2") is misleadingly given an abstract absolute objective identity. Interpretation is then misleadingly viewed as simply mirroring in mental terms (again in an absolute manner) this absolute identity. In this way, in conventional mathematical terms, the interaction as between opposite poles (external and internal) is completely edited out of the picture (in explicit terms). This is then misleadingly associated with the considerable illusion that numbers enjoy an absolute rigid identity (unrelated to time). However, because such reductionism is so entrenched in our mathematical thought processes (conditioned now through several millennia), it is extremely difficult to get mathematicians to address this issue. Even on the rare occasions when I have seen mathematicians seriously question the basis of such procedures (in philosophical reflection on their discipline), they still seemed in a sense to operate with split personalities, readily accepting all such reductionism (without question) when operating as mathematicians.
And I accept that there is enormous pressure on professional mathematicians (in maintaining the respect of their peers) to operate precisely in this manner. This is why I have long considered that, paradoxically, the blunt message that Mathematics (as presently understood) is not in fact fit for purpose can only be properly preached by someone standing outside the profession altogether (while still remaining deeply interested in Mathematics).

## Tuesday, November 18, 2014

### A Simple Example

In a recent blog, I suggested that in principle the Erdős–Kac Theorem should have a complementary application with respect to the distribution of prime numbers, where the normal (Gaussian) distribution can be used to explain behaviour. To demonstrate this important aspect, we take repeated samples (of the same size) within a relatively restricted region of the number system, where changes in the average gap as between primes are so small as to be discounted. So starting at n = 100,000,000, I took 100 samples of size t = 1000, up to n = 100,100,000. For convenience in identifying the number of primes in each sample, I took the samples in strict sequence, with the accompanying values listed below. As the primes are themselves distributed in a random fashion, this would seem permissible in this instance. Alternatively - at least in theory - 100 repeated random samples of 1000 (with replacement) could be taken within the same range, i.e. 100,000,000 to 100,100,000. However, in practice this would be very difficult. So the first row, for example, represents the 10 samples in sequence from 100,000,000 to 100,010,000, with the last row representing the final 10 sample values from 100,090,000 to 100,100,000.
54 56 57 55 57 61 57 56 47 51
61 55 57 49 43 54 56 43 58 54
51 56 51 49 43 54 56 43 58 54
60 56 55 57 54 52 59 56 51 56
49 43 59 64 55 63 62 53 49 51
63 54 40 54 51 56 52 54 42 57
53 73 55 50 54 53 61 49 52 56
54 59 44 57 50 56 56 53 52 54
57 56 52 54 63 43 54 52 51 48
49 55 57 52 54 52 56 48 56 66

Now in general terms, the mean number of primes in each sample can be approximated as t/log n = 1000/18.42 = 54.29 (approx). This equates well with the actual value averaged over the 100 sample results above, i.e. 54.11. Much more problematic, however, is the provision of a general formula to approximate the standard deviation (for all values of n). Though I experienced doubts on several occasions with respect to my initial "hunch", repeated empirical testing seems to suggest it as perhaps the simplest and best estimate, i.e. √(t/(2 log n)). This would give the standard deviation as 5.21, which compares well enough with the estimated standard deviation (based on the 100 sample values) of 5.46. This does not of course constitute a proof, and indeed a much greater degree of sampling would be required to truly establish it as the most likely estimate. However, in principle, by now using the normal distribution, we could estimate the probabilities, for example, of sample values lying within any prescribed distance from the mean (on both sides). For example, we would expect for the above a little in excess of 2/3 of sample values to lie within 1 standard deviation of the mean value. This would suggest therefore that 2/3 of sample values (for the frequency of primes occurring) would lie in the range 49 - 59 (approx).

Addendum (5/3/2016). Having returned to this issue in recent days, I feel I can bring more clarity to a situation that I did not really feel had been properly dealt with first time around.
Though I was hoping that the standard deviation would correspond to √(t/log n), I was led - largely through the empirical evidence of a small sample - to adjust it somewhat to fit the data. However, on reflection this was not warranted. Even just a few stray "outliers" with respect to this data would have a very large influence on the standard deviation. Therefore it was unrealistic to expect that the empirical example would fit in with theoretical explanations. The Erdős–Kac Theorem states that if ω(n) is the number of (distinct) prime factors of n, the probability distribution of $\frac{\omega(n) - \log\log n}{\sqrt{\log\log n}}$ is the normal distribution. In like manner, I am suggesting that if ω(n) is the number of primes in a sample of size t (i.e. where samples are taken in the region of n), the probability distribution of $\frac{\omega(n) - t/\log n}{\sqrt{t/\log n}}$ is the normal distribution. The Erdős–Kac Theorem would suggest that where the average number of (distinct) prime factors is approximately 100 (with standard deviation 10), we would expect, for just a little more than 2/3 of all n, the number of factors to lie within 1 standard deviation either side of the mean (i.e. between 90 and 110). In like manner, if the mean value of the number of primes in each sample is 100 (where samples are taken in the region of n), then we would likewise expect that in a little more than 2/3 of samples, the number of primes would lie within 1 standard deviation (i.e. 10) of 100 (i.e. between 90 and 110).
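The experiment can be reproduced in miniature with a short program. The sketch below is my own illustration (not code from the original post): it counts primes by trial division in 20 consecutive windows of t = 1000 near n = 1,000,000, compares the observed mean with the predicted t/log n, and evaluates the normal probability of lying within one standard deviation (≈ 0.683) via the error function.

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [i for i, flag in enumerate(sieve) if flag]

def is_prime(m, small_primes):
    """Trial division; small_primes must cover sqrt(m)."""
    root = int(m**0.5)
    for p in small_primes:
        if p > root:
            return True
        if m % p == 0:
            return False
    return True

n, t, samples = 1_000_000, 1000, 20
small = primes_up_to(int((n + samples * t) ** 0.5) + 1)
counts = [sum(1 for m in range(n + i * t, n + (i + 1) * t) if is_prime(m, small))
          for i in range(samples)]

predicted_mean = t / math.log(n)            # ~72.4 for n = 10^6
observed_mean = sum(counts) / samples
print(round(predicted_mean, 2), round(observed_mean, 2))

# Normal probability of a value lying within 1 sd of its mean:
p_within_1sd = math.erf(1 / math.sqrt(2))   # ~0.6827
```

The observed mean typically lands within a couple of units of t/log n; a far larger sampling exercise would be needed to discriminate between the √(t/log n) and √(t/(2 log n)) conjectures for the standard deviation.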
http://math.stackexchange.com/questions/142639/integral-variable-substitution-using-hausdorff-measure
# Integral variable substitution using Hausdorff measure

Suppose we have a positive density $q$ with "good" qualities (continuity, etc.). I need to calculate this integral: $$\int_B q(\textbf{z})\, d\textbf{z},\quad \textbf{z} \in \mathbb{R}^d,$$ where $B \subset \mathbb{R}^d$ is a Borel set that is bounded away from the origin. In order to do that, it is necessary (in my case) to make a $d$-dimensional "polar coordinate" substitution, where $r=||\textbf{z}||$ ($||\cdot||$ is the $L_2$ norm) and $\textbf{w}=\textbf{z}/r$ are, respectively, the "radius" and "direction". The integral then becomes $$\int_B q(r\textbf{w})\, r^{d-1}\, dr\, d\lambda(\textbf{w}),$$ where $\lambda$ is the Hausdorff measure (I assume the "surface area" on the $d$-dimensional hypersphere). The question is: how do I prove that the Jacobian is $r^{d-1}$? I found that this is the right Jacobian for this transform in this article p. 1818, and intuitively the idea looks fine, but I could not find a mathematical proof anywhere. I know how to find the Jacobian in simple cases like $d=2$ or $d=3$, but I don't really know how to deal with this Hausdorff measure.

It's not really trivial. You may proceed by induction on the dimension. But the cleanest proof is by the co-area formula. – Siminore May 8 '12 at 13:41

You can think of it as a Riemann integral, since your $q$ is continuous. If $q$ is positive and radially symmetric, you may use the distribution function $\omega(\alpha)=|\{|q|>\alpha\}|$ and integrate it from zero to infinity to obtain the desired result! – checkmath May 8 '12 at 13:46

What we have before us is an instance of "Fubini" in disguise. I assume your $B$ is a ball of radius $R$ centered at ${\bf 0}\in{\mathbb R}^d$. Since the integrand $q(\cdot)$ is well behaved, we can use an approximation by Riemann sums in order to arrive at the correct formula. Partition the interval $[0,R]$ into $M$ subintervals $I_i:=[r_{i-1},r_i]$.
Likewise, partition $S^{d-1}$ into $N$ essentially disjoint patches $\Omega_k$, and for each $k\in[N]$ choose a sampling point $\omega_k\in\Omega_k$. In this way you obtain a partition of $B$ into $MN$ essentially disjoint pieces $$B_{ik}:=\{{\bf x}=r \omega\,|\, r\in I_i, \ \omega\in\Omega_k\}\ .$$ The $d$-dimensional volume of $B_{ik}$ is given by (this is elementary geometry) $$\eqalign{\mu(B_{ik})&={1\over d}(r_i)^{d-1}\lambda(\Omega_k)\cdot r_i -{1\over d}(r_{i-1})^{d-1}\lambda(\Omega_k)\cdot r_{i-1}\cr &={1\over d}(r_i^d-r_{i-1}^d)\lambda(\Omega_k)=\rho_i^{d-1}(r_i-r_{i-1})\lambda(\Omega_k)\ ,\cr}$$ where a suitable value $\rho_i\in I_i$ is guaranteed by the MVT of differential calculus. In particular $\rho_i\,\omega_k\in B_{ik}$ for all $i$ and $k$. Now we approximate our integral as a Riemann sum: $$\eqalign{\int_B q({\bf x})\,{\rm d}{\bf x}&\doteq \sum_{i,\, k} q(\rho_i \omega_k)\ \mu(B_{ik})=\sum_{i,\, k} q(\rho_i \omega_k)\ \rho_i^{d-1}(r_i-r_{i-1})\lambda(\Omega_k)\cr &=\sum_k\left(\sum_i q(\rho_i \omega_k)\ \rho_i^{d-1}(r_i-r_{i-1})\right)\lambda(\Omega_k)\cr &\doteq\sum_k\left(\int_0^R q(r \omega_k) \, r^{d-1} dr\right)\lambda(\Omega_k)\cr &\doteq\int_{S^{d-1}} \int_0^R q(r\,\omega)\, r^{d-1}\ dr \ {\rm d}\lambda(\omega)\ .\cr}$$ Since these approximations can be made as precise as desired, the integrals on the LHS and the RHS of this chain must have equal value.

Although I forgot to mention that B is a Borel set that is bounded away from the origin and not necessarily a ball, I see the point in your proof. – jem77bfp Sep 3 '12 at 11:56

The easiest way is to use a really simple idea: two-dimensional polar coordinates. The vector $\mathbf{z}$ can be mapped to the vector $(r \cos \phi, r \sin \phi, rw_3, rw_4, \dots, rw_d)$, where $\phi$ and $r$ are such that $z_1=r \cos \phi$ and $z_2=r \sin \phi$.
Then the Jacobian is $$\left| \begin{array}{ccccc} \frac{\partial r \cos \phi}{\partial r} & \frac{\partial r \cos \phi}{\partial \phi} & \frac{\partial r \cos \phi}{\partial w_3} & \cdots & \frac{\partial r \cos \phi}{\partial w_d} \\ \frac{\partial r \sin \phi}{\partial r} & \frac{\partial r \sin \phi}{\partial \phi} & \frac{\partial r \sin \phi}{\partial w_3} & \cdots & \frac{\partial r \sin \phi}{\partial w_d} \\ \frac{\partial r w_3}{\partial r} & \frac{\partial r w_3}{\partial \phi} & \frac{\partial r w_3}{\partial w_3} & \cdots & \frac{\partial r w_3}{\partial w_d} \\ \vdots & \vdots &\vdots & \ddots & \vdots \\ \frac{\partial r w_d}{\partial r} & \frac{\partial r w_d}{\partial \phi} & \frac{\partial r w_d}{\partial w_3} & \cdots & \frac{\partial r w_d}{\partial w_d} \end{array} \right| .$$ Then, by taking the derivatives and evaluating the determinant, we get that the Jacobian equals $r^{d-1}$.
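Both answers can be sanity-checked numerically; the Python sketch below uses my own test cases, not code from the thread. The first part evaluates the determinant of the matrix above for d = 4 at arbitrary values of r, φ, w and compares it with r^(d−1); the second part checks the polar formula itself for the radial density q(x) = |x|^2 on the unit ball in R^3, where the polar side gives λ(S^2) · ∫₀¹ r^2 · r^2 dr = 4π/5, against a direct Cartesian grid sum.

```python
import math

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def jacobian_det(r, phi, w3, w4):
    """Jacobian of (r, phi, w3, w4) -> (r cos phi, r sin phi, r w3, r w4)."""
    J = [[math.cos(phi), -r * math.sin(phi), 0.0, 0.0],
         [math.sin(phi),  r * math.cos(phi), 0.0, 0.0],
         [w3,             0.0,               r,   0.0],
         [w4,             0.0,               0.0, r  ]]
    return det(J)

r, phi = 2.5, 0.7
print(jacobian_det(r, phi, 0.3, 0.4), r ** 3)   # both ~15.625

# Check the polar formula in d = 3 for q(x) = |x|^2 on the unit ball:
# direct midpoint grid sum vs. lambda(S^2) * integral_0^1 r^2 * r^(3-1) dr.
ngrid = 80
h = 2.0 / ngrid
total = 0.0
for i in range(ngrid):
    x = -1 + (i + 0.5) * h
    for j in range(ngrid):
        y = -1 + (j + 0.5) * h
        for k in range(ngrid):
            z = -1 + (k + 0.5) * h
            r2 = x * x + y * y + z * z
            if r2 <= 1.0:                    # keep only cells inside the ball
                total += r2 * h ** 3

polar = 4 * math.pi / 5                      # lambda(S^2) = 4 pi, times 1/5
print(total, polar)                          # both ~2.51
```

The Jacobian check is exact up to floating-point rounding; the grid sum agrees with the polar value to within the discretization error at the ball's boundary.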
https://www.jiskha.com/questions/491718/use-a-graph-to-determine-whether-the-given-function-is-continuous-on-its-domain-hint-see
# math

Use a graph to determine whether the given function is continuous on its domain. HINT [See Example 1.]

f(x) = x + 7 if x < 0
       2x − 5 if x ≥ 0

If it is not continuous on its domain, list the points of discontinuity. (If there is no such point, enter NONE.)

1. This is very simple and quick to graph. Have you tried to graph these? What don't you understand?

2. In Google, type "function graphs online". In the list of results, click on rechneronline.de/function-graphs/. When the page opens, type x+7 in the blue rectangle and 2*x-5 in the grey rectangle. In Display properties set Range x-axis from -20 to 20 and Range y-axis from -20 to 20, then click Draw.

3. The two pieces meet at x = 0 with different values (7 from the left, −5 from the right), so the function is discontinuous at x = 0.
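As a quick numerical check (not part of the original Q&A), one can evaluate the two pieces near the joining point; the one-sided values disagree at x = 0, so that is the point of discontinuity:

```python
def f(x):
    # the piecewise function from the question
    return x + 7 if x < 0 else 2 * x - 5

left = f(-1e-9)      # value just to the left of 0: about 0 + 7 = 7
right = f(0.0)       # value at (and just to the right of) 0: 2*0 - 5 = -5

assert abs(left - 7) < 1e-6
assert right == -5
# the one-sided values disagree, so f has a jump of 12 at x = 0
assert abs((left - right) - 12) < 1e-6
```

Away from x = 0 each piece is a line, so the only candidate point of discontinuity is where the pieces join.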
https://ru.scribd.com/document/466071459/REACTION-KINETICS-pdf
# 1. How fast? Kinetics

The rate of reaction is defined as the change in concentration of a substance in unit time. Its usual unit is mol dm-3 s-1.

When a graph of concentration of reactant is plotted vs time, the gradient of the curve is the rate of reaction. The initial rate is the gradient of the tangent at the start of the reaction, where the reaction is fastest. Reaction rates can be calculated from graphs of concentration of reactants or products against time.

## Techniques to investigate rates of reaction

There are several different methods for measuring reaction rates. Some reactions can be measured in several ways.

Measurement of the change in volume of a gas: this works if there is a change in the number of moles of gas in the reaction. Using a gas syringe is a common way of following this. If drawing a gas syringe, make sure you draw it with some measurement markings on the barrel to show that measurements can be made.
(CH3)2C=CH2(g) + HI(g) → (CH3)3CI(g)

Following a gas given off: this works if there is a gas produced which is allowed to escape. It works better with heavy gases such as CO2.
HCOOH(aq) + Br2(aq) → 2H+(aq) + 2Br-(aq) + CO2(g)

Titrating samples of the reaction mixture with acid, alkali, sodium thiosulphate etc.: small samples are removed from the reaction mixture, quenched (which stops the reaction) and then titrated with a suitable reagent.
HCOOCH3(aq) + NaOH(aq) → HCOONa(aq) + CH3OH(aq) (the NaOH could be titrated with an acid)
BrO3-(aq) + 5Br-(aq) + 6H+(aq) → 3Br2(aq) + 3H2O(l) (the H+ could be titrated with an alkali)
CH3COCH3(aq) + I2(aq) → CH3COCH2I(aq) + H+(aq) + I-(aq) (the I2 could be titrated with sodium thiosulphate)

Colorimetry:
If one of the reactants or products is coloured, then colorimetry can be used to measure the change in colour of the reacting mixture.
H2O2(aq) + 2I-(aq) + 2H+(aq) → 2H2O(l) + I2(aq) (the I2 produced is a brown solution)

Measuring the change in electrical conductivity: can be used if there is a change in the number of ions in the reaction mixture.
HCOOH(aq) + Br2(aq) → 2H+(aq) + 2Br-(aq) + CO2(g)

Measurement of optical activity: if there is a change in the optical activity through the reaction, this could be followed in a polarimeter.
CH3CH2Br(l) + OH-(aq) → CH3CH2OH(l) + Br-(aq)

## Rate Equations

The rate equation relates mathematically the rate of reaction to the concentrations of the reactants. For the reaction aA + bB → products, the generalised rate equation is:

r = k[A]^m [B]^n

r is the symbol for rate; its unit is usually mol dm-3 s-1. The square brackets [A] mean the concentration of A (unit mol dm-3). m and n are called reaction orders and are usually integers (0, 1, 2): 0 means the reaction is zero order with respect to that reactant, 1 means first order, 2 means second order. k is called the rate constant. The total order for a reaction is worked out by adding all the individual orders together (m + n). NOTE: the orders have nothing to do with the stoichiometric coefficients in the balanced equation; they are worked out experimentally.

For zero order, the concentration of A has no effect on the rate of reaction: r = k[A]^0 = k.
For first order, the rate of reaction is directly proportional to the concentration of A: r = k[A]^1.
For second order, the rate of reaction is proportional to the concentration of A squared: r = k[A]^2.

Graphs of initial rate against concentration show the different orders. The initial rate may have been calculated by taking gradients from concentration/time graphs. For a rate-concentration graph to show the order of a particular reactant, the concentration of that reactant must be varied whilst the concentrations of the other reactants are kept constant.
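A minimal sketch (with illustrative numbers, not taken from the notes) of how the rate equation r = k[A]^m[B]^n behaves when one concentration is doubled:

```python
def rate(k, A, B, m, n):
    # r = k [A]^m [B]^n
    return k * A ** m * B ** n

k = 0.5  # hypothetical rate constant, for illustration only
r1 = rate(k, 0.1, 0.2, m=1, n=2)
r2 = rate(k, 0.2, 0.2, m=1, n=2)   # [A] doubled; A is first order
r3 = rate(k, 0.1, 0.4, m=1, n=2)   # [B] doubled; B is second order

assert abs(r2 / r1 - 2) < 1e-9   # rate doubles
assert abs(r3 / r1 - 4) < 1e-9   # rate quadruples
```

Doubling a first-order reactant doubles the rate; doubling a second-order reactant quadruples it, which is exactly the pattern used later to read orders off initial-rate tables.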
## Continuous rate experiments

This is data from one experiment in which the concentration of one substance is followed throughout the experiment. For this method to work, the concentrations of the reactants not being followed must be in large excess, so that their concentrations stay virtually constant and do not affect the rate. The data is processed by plotting concentration against time and calculating successive half-lives.

(Graph: [A] falls 0.060 → 0.030 → 0.015 → 0.0075 over equal intervals t½.)

If the half-lives are constant, then the reaction is 1st order: the half-life of a first-order reaction is independent of the concentration and is constant.
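The constant half-life of a first-order reaction can be illustrated with a short sketch (the rate constant here is chosen for convenience, not taken from the notes):

```python
import math

c0 = 0.060
k = math.log(2)   # chosen so the half-life is exactly 1 time unit

def conc(t):
    # integrated first-order rate law: [A] = [A]0 exp(-k t)
    return c0 * math.exp(-k * t)

values = [conc(t) for t in range(4)]    # t = 0, 1, 2, 3
for a, b in zip(values, values[1:]):
    assert abs(a / b - 2) < 1e-9        # successive values halve: constant t1/2
assert abs(values[-1] - 0.0075) < 1e-9  # 0.060 -> 0.030 -> 0.015 -> 0.0075
```

Whatever the starting concentration, an exponential decay takes the same time to halve, which is why equal successive half-lives diagnose first order.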
Rearrange rate equation to 2. Insert units and 3. Simplify fraction give k as subject cancel k= s-1 k = Rate k = mol dm-3s-1 Unit of k = mol-2dm6s-1 mol2dm-6 [A][B]2 mol dm-3.(moldm-3)2 3 Rate Equations The rate equation relates mathematically the rate of reaction to the concentration of the reactants. ## For the following reaction, aA + bB → r is used as symbol for rate products, the generalised rate equation is: r = k[A]m[B]n The unit of r is usually mol dm-3 s-1 ## The square brackets [A] means m, n are called reaction orders the concentration of A Orders are usually integers 0,1,2 (unit mol dm-3) 0 means the reaction is zero order with respect to that reactant k is called the rate constant 1 means first order 2 means second order The total order for a reaction is worked NOTE: the orders have nothing to do with the stoichiometric out by adding all the individual orders coefficients in the balanced equation. They are worked out together (m+n) experimentally ## For zero order: the concentration of A has no effect on the rate of reaction r = k[A]0 = k ## For first order: the rate of reaction is directly proportional to the concentration of A r = k[A]1 ## For second order: the rate of reaction is proportional to the concentration of A squared r = k[A]2 ## The rate constant (k) 1. The units of k depend on the overall order of For a 1st order overall reaction the unit of k is reaction. It must be worked out from the rate s-1 equation For a 2nd order overall reaction the unit of k is 2. The value of k is independent of concentration and mol-1dm3s-1 time. It is constant at a fixed temperature. For a 3rd order overall reaction the unit of k is 3. 
The value of k refers to a specific temperature and mol-2dm6s-1 it increases if we increase temperature ## Example (first order overall) Rate = k[A][B]0 m = 1 and n = 0 Remember: the values of the reaction orders must - reaction is first order in A and zero order in B be determined from experiment; they cannot be - overall order = 1 + 0 = 1 found by looking at the balanced reaction equation - usually written: Rate = k[A] Calculating units of k ## 1. Rearrange rate equation 2. Insert units and to give k as subject cancel ## k = Rate k = mol dm-3s-1 Unit of k = s-1 [A] mol dm-3 4 Example: Write rate equation for reaction between A and B where A is 1st order and B is 2nd order. ## 1. Rearrange rate equation to 2. Insert units and 3. Simplify fraction give k as subject cancel k= s-1 k = Rate k = mol dm-3s-1 Unit of k = mol-2dm6s-1 mol2dm-6 [A][B]2 mol dm-3.(moldm-3)2 ## Working out orders from experimental initial rate data The initial rate is the rate at the start of the reaction, where it is fastest. It is often obtained by taking the gradient of the conc vs time graph. ## Normally to work out the rate equation we do a series of experiments where the initial concentrations of reactants are changed (one at a time) and measure the initial rate each time. This data is normally presented in a table. 
Example: work out the rate equation for the reaction A + B + 2C → D + 2E, using the initial rate data in the table.

Experiment   [A] /mol dm-3   [B] /mol dm-3   [C] /mol dm-3   Rate /mol dm-3 s-1
    1            0.1             0.5             0.25              0.1
    2            0.2             0.5             0.25              0.2
    3            0.1             1.0             0.25              0.4
    4            0.1             0.5             0.50              0.1

In order to calculate the order for a particular reactant, it is easiest to compare two experiments where only that reactant is being changed. If the concentration is doubled and the rate stays the same: order = 0. If the concentration is doubled and the rate doubles: order = 1. If the concentration is doubled and the rate quadruples: order = 2.

For reactant A, compare experiments 1 and 2: as the concentration doubles (B and C staying constant) so does the rate, so the order with respect to A is first order.

For reactant B, compare experiments 1 and 3: as the concentration of B doubles (A and C staying constant) the rate quadruples, so the order with respect to B is second order.

For reactant C, compare experiments 1 and 4: as the concentration of C doubles (A and B staying constant) the rate stays the same, so the order with respect to C is zero order.

The overall rate equation is r = k[A][B]^2. The reaction is 3rd order overall, and the unit of the rate constant is mol-2 dm6 s-1.

## Working out orders when two reactant concentrations are changed simultaneously

In most questions it is possible to compare two experiments where only one reactant has its initial concentration changed. If, however, both reactants are changed, then the effects of the individual concentration changes are multiplied together to give the effect on rate.
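The table comparisons above can be automated; this sketch (not part of the notes) deduces each order from the ratio of rates when one concentration changes:

```python
import math

# (A, B, C, rate) for experiments 1-4, from the table above
data = [
    (0.1, 0.5, 0.25, 0.1),
    (0.2, 0.5, 0.25, 0.2),
    (0.1, 1.0, 0.25, 0.4),
    (0.1, 0.5, 0.50, 0.1),
]

def order(c1, c2, r1, r2):
    # scaling conc by f scales rate by f^n, so n = log(r2/r1) / log(c2/c1)
    return round(math.log(r2 / r1) / math.log(c2 / c1))

order_A = order(data[0][0], data[1][0], data[0][3], data[1][3])  # exps 1 and 2
order_B = order(data[0][1], data[2][1], data[0][3], data[2][3])  # exps 1 and 3
order_C = order(data[0][2], data[3][2], data[0][3], data[3][3])  # exps 1 and 4

assert (order_A, order_B, order_C) == (1, 2, 0)   # so r = k [A][B]^2
```

Taking the logarithm of the rate ratio is just the general form of the "doubles → order 1, quadruples → order 2" rule in the text.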
In a reaction where the rate equation is r = k[A][B]^2: if [A] is doubled, the rate is ×2; if [B] is tripled, the rate is ×3^2 = ×9. If these changes happen at the same time, the rate is multiplied by 2 × 9 = 18.

Example: work out the rate equation for the reaction between X and Y, using the initial rate data in the table (columns: experiment, initial concentration of X /mol dm-3, initial concentration of Y /mol dm-3, initial rate /mol dm-3 s-1).

For reactant X, compare experiments 1 and 2: as the concentration of X doubles (Y staying constant) so does the rate, so the order with respect to X is first order.

We know X is first order, so that will have doubled the rate. The remaining effect of Y on the rate is therefore to have quadrupled it, so Y must be second order. The overall rate equation is r = k[X][Y]^2; the reaction is 3rd order overall and the unit of the rate constant is mol-2 dm6 s-1.

## Calculating a value for k using initial rate data

Using the above example, choose any one of the experiments and put its values into the rate equation rearranged to give k. Using experiment 3:

r = k[X][Y]^2, so k = r / ([X][Y]^2) = 2.40 × 10^-6 / (0.2 × 0.2^2) = 3.0 × 10^-4 mol-2 dm6 s-1

Remember, k is the same for all experiments done at the same temperature.

## Effect of temperature on the rate constant

Increasing the temperature increases the value of the rate constant k. The relationship is given by the Arrhenius equation, k = A e^(-Ea/RT), where A is a constant, R is the gas constant and Ea is the activation energy.
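The k calculation above can be reproduced in a couple of lines (values taken from the worked example for experiment 3):

```python
# experiment 3 values from the worked example above
r = 2.40e-6        # initial rate, mol dm-3 s-1
X, Y = 0.2, 0.2    # initial concentrations, mol dm-3

k = r / (X * Y ** 2)             # rearranged from r = k [X][Y]^2
assert abs(k - 3.0e-4) < 1e-9    # unit: mol-2 dm6 s-1, as in the notes
```

Any of the experiments at the same temperature should give the same k; that consistency is itself a check on the proposed rate equation.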
The Arrhenius equation can be rearranged to ln k = constant − Ea/(RT). Since k is proportional to the rate of reaction, ln k can be replaced by ln(rate). From a plot of ln(rate) or ln k against 1/T, the activation energy can be calculated: the gradient equals −Ea/R, so Ea = −gradient × R.

## Rate equations and mechanisms

A mechanism is a series of steps through which the reaction progresses, often forming intermediate compounds. If all the steps are added together, they add up to the overall equation for the reaction. Each step can have a different rate of reaction; the slowest step controls the overall rate and is called the rate-determining step.

The molecularity (number of moles of each substance) of the molecules in the slowest step will be the same as the order of reaction for each substance: e.g. 0 moles of A in the slow step means A is zero order; 1 mole of A in the slow step means A is first order.

Example 1. Overall reaction: A + 2B + C → D + E.
Mechanism: Step 1: A + B → X + D (slow); Step 2: X + C → Y (fast); Step 3: Y + B → E (fast).
Rate equation: r = k[A]^1[B]^1[C]^0. C is zero order because it appears in the mechanism only in a fast step after the slow step.

Example 2. Overall reaction: A + 2B + C → D + E.
Mechanism: Step 1: A + B → X + D (fast); Step 2: X + C → Y (slow); Step 3: Y + B → E (fast).
The slow step gives r = k[X]^1[C]^1, but the intermediate X is not one of the reactants, so it must be replaced by the substances that make up the intermediate in the previous step (A + B → X + D), giving r = k[A]^1[B]^1[C]^1.

## Investigating the rate of reaction between iodine and propanone

Propanone reacts with iodine in acidic solution (the acid is a catalyst) as shown in the equation below.

CH3COCH3(aq) + I2(aq) → CH3COCH2I(aq) + H+(aq) + I-(aq)

This reaction can be followed by removing small samples from the reaction mixture with a volumetric pipette. The sample is then quenched by adding excess sodium hydrogencarbonate to neutralise the acid catalyst, which stops the reaction.
Then the sample can be titrated with sodium thiosulphate using a starch indicator:

2S2O3^2-(aq) + I2(aq) → 2I-(aq) + S4O6^2-(aq)
(yellow/brown solution → colourless solution)

This reaction is zero order with respect to I2 but 1st order with respect to the propanone and the acid catalyst. The rate equation for the reaction is rate = k[CH3COCH3(aq)][H+(aq)]. (Graph: [I2] falls in a straight line against time, consistent with zero order in I2.)

If there is a zero order reactant, there must be at least two steps in the mechanism, because the rate-determining step will not involve the zero order reactant. The rate-determining step of this reaction must therefore contain one propanone molecule and one H+ ion, forming an intermediate. The iodine is involved in a subsequent, faster step.

## Example 3: SN1 or SN2?

Remember the nucleophilic substitution reaction of halogenoalkanes and hydroxide ions. This is a one-step mechanism (diagram: HO- attacks the δ+ carbon as the C–Br bond breaks, through a single transition state):

CH3CH2Br + OH- → CH3CH2OH + Br- (slow step)

The rate equation is r = k[CH3CH2Br][OH-]. This is called SN2: Substitution, Nucleophilic, 2 molecules in the rate-determining step. Primary halogenoalkanes tend to react via the SN2 mechanism.

## SN1 nucleophilic substitution mechanism for tertiary halogenoalkanes

Tertiary halogenoalkanes undergo this mechanism because the carbocation intermediate formed is stabilised by the electron-releasing methyl groups around it (see the alkenes topic for another example of this). Also, the bulky methyl groups prevent the hydroxide ion from attacking the halogenoalkane in the same way as in the mechanism above. The Br first breaks away from the halogenoalkane to form a carbocation intermediate; the hydroxide nucleophile then attacks the positive carbon.
Overall reaction: (CH3)3CBr + OH- → (CH3)3COH + Br-
Mechanism: (CH3)3CBr → (CH3)3C+ + Br- (slow); (CH3)3C+ + OH- → (CH3)3COH (fast).
The rate equation is r = k[(CH3)3CBr]. This is called SN1: Substitution, Nucleophilic, 1 molecule in the rate-determining step. Primary halogenoalkanes don't do the SN1 mechanism because they would only form an unstable primary carbocation.

Example 4. Overall reaction: NO2(g) + CO(g) → NO(g) + CO2(g)
Mechanism: Step 1: NO2 + NO2 → NO + NO3 (slow); Step 2: NO3 + CO → NO2 + CO2 (fast).
NO3 is a reaction intermediate. NO2 appears twice in the slow step, so it is second order; CO does not appear in the slow step, so it is zero order. Hence r = k[NO2]^2.

Example 5. Using the rate equation rate = k[NO]^2[H2] and the overall equation 2NO(g) + 2H2(g) → N2(g) + 2H2O(g), the following three-step mechanism was suggested (X and Y are intermediate species):
Step 1: NO + NO → X
Step 2: X + H2 → Y
Step 3: Y + H2 → N2 + 2H2O
Which one of the three steps is the rate-determining step? Step 2, because H2 appears in the rate equation and the combination of steps 1 and 2 gives the ratio of species that appears in the rate equation.
http://mathhelpforum.com/differential-equations/41101-differential-equations-hw-question.html
# Math Help - Differential Equations HW question

1. ## Differential Equations HW question

Hey guys, been looking for some online support for diff eqs and this is where I wound up :P Just started summer session last week, so I have a semester's worth of diff eqs crammed into 7 weeks and the HW is coming hard and heavy. This question I have is pretty elementary stuff, but I am not very well-versed in diff eqs so I am having some trouble with it. Here it is verbatim from the book:

a.) Draw a direction field for the given differential equation. How do solutions appear to behave as t becomes large? Does the behavior depend on the choice of the initial value a? Let a0 be the value of a for which the transition from one type of behavior to another occurs. Estimate the value of a0.
b.) Solve the initial value problem and find the critical value a0 exactly.
c.) Describe the behavior of the solution corresponding to the initial value a0.

Equation: y' - y/2 = 2cos(t), y(0) = a

----------

Now, I can draw the direction field, albeit it's a little time-consuming. I can solve the equation as well since it's linear; not too tough. The problems I have are with the initial value stuff. I just plain don't understand how to do it. Does the behavior depend on the initial value a? I don't know; I don't think so. I'm not sure how to tell. Estimate the value of a0? I have no idea how to do this either, other than looking at my direction field and just guessing. Find the critical value a0? Don't know how to do that one. I mean, when you solve the equation you're going to be left with a constant C, and then you have your initial value y(0) = a = (some equation + C). How are you supposed to solve for a when you have an unknown C?

Anyway, this stuff probably isn't hard, but it's more or less my first experience with it, so I just need to get my mind wrapped around it a little bit, I think. Hopefully someone around here can help me out. Thanks guys.

2.
Originally Posted by griffsterb
So you know that the general solution to the DE is:

$y(t)=A \exp(t/2)-(4/5)\cos(t)+(8/5)\sin(t)$

Also you have the initial condition $y(0)=a$, but:

$y(0)=A -(4/5)$

So $A=a+(4/5)$, and the solution is:

$y(t)=[a+(4/5)] \exp(t/2)-(4/5)\cos(t)+(8/5)\sin(t)$

RonL

3. Okay, thanks for your help. I can see how it actually works out. However, working through the problem I got stuck on the integral when solving the equation. The method we're using for linear equations right now is that

$y(t) = \frac{1}{\mu(t)} \int \mu(t)\, g(t)\, dt$

for linear equations of the form $y' + p(t)\, y = g(t)$, where $\mu(t) = e^{\int p(t)\, dt}$ is the integrating factor. So eventually it comes to

$y(t) = e^{t/2} \int \frac{2\cos(t)}{e^{t/2}}\, dt$

And I'm not sure how to solve that integral in there. Integration by parts won't work, and neither will any of the simpler methods that I recall from Calc. It's been a while though since I did any complicated integrations, so I'm probably forgetting something =\

And also: Is there a way to insert math symbols into my posts? The integral symbol makes things look cleaner than writing "integral" :P

4. Originally Posted by griffsterb
I think this is the integral you want:

$\int \frac{2\cos(t)}{e^{\frac{t}{2}}}dt$

Here is the LaTeX for it: \int \frac{2\cos(t)}{e^{\frac{t}{2}}}dt

Here is a link with all the code you will need: Help:Displaying a formula - Wikipedia, the free encyclopedia

If so, integration by parts will work; it just needs another little "trick". First let's rewrite this as

$2\int e^{\frac{-t}{2}}\cos(t)dt$

By parts with $u=e^{\frac{-t}{2}}, \ dv=\cos(t)dt$ we get

$\int e^{\frac{-t}{2}}\cos(t)dt=e^{\frac{-t}{2}}\sin(t)+\frac{1}{2}\int e^{\frac{-t}{2}}\sin(t)dt$

Now by parts again (I know you think this is going nowhere), with $u=e^{\frac{-t}{2}}, \ dv=\sin(t)dt$, we get

$\int e^{\frac{-t}{2}}\cos(t)dt=e^{\frac{-t}{2}}\sin(t)+\frac{1}{2}\left( -e^{\frac{-t}{2}}\cos(t)-\frac{1}{2}\int e^{\frac{-t}{2}}\cos(t)dt\right)$

Simplifying we get

$\int e^{\frac{-t}{2}}\cos(t)dt=e^{\frac{-t}{2}}\sin(t)-\frac{1}{2}e^{\frac{-t}{2}}\cos(t)-\frac{1}{4}\int e^{\frac{-t}{2}}\cos(t)dt$

Now this is where the "magic" happens: notice we have the same integral on both sides of the equation, so we add $\frac{1}{4}\int e^{\frac{-t}{2}}\cos(t)dt$ to both sides to get

$\frac{5}{4}\int e^{\frac{-t}{2}}\cos(t)dt=e^{\frac{-t}{2}}\sin(t)-\frac{1}{2}e^{\frac{-t}{2}}\cos(t)$

Multiplying by 4/5 and factoring gives

$\int e^{\frac{-t}{2}}\cos(t)dt=\frac{e^{\frac{-t}{2}}\left(4\sin(t)-2\cos(t)\right)}{5}$

(Don't forget the overall factor of 2 and the constant of integration when you use this in the solution.)

P.S. Most people just look this up in an integral table.
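For what it's worth (this is not part of the original thread), the solution and the critical value can be checked numerically in Python; a0 = -4/5 is the value of a that kills the exponential term, which answers part (b):

```python
import math

def y(t, a):
    # the general solution above with y(0) = a
    return (a + 4/5) * math.exp(t / 2) - (4/5) * math.cos(t) + (8/5) * math.sin(t)

# check it satisfies y' - y/2 = 2 cos(t), with y' by central difference
a, t, h = 1.3, 0.7, 1e-6
dydt = (y(t + h, a) - y(t - h, a)) / (2 * h)
assert abs(dydt - y(t, a) / 2 - 2 * math.cos(t)) < 1e-6

# the exp(t/2) term vanishes exactly when a = -4/5, so a0 = -0.8
# separates growing solutions from bounded, oscillating ones
a0 = -4/5
assert abs(y(50.0, a0)) < 2.4        # bounded: |y| <= 4/5 + 8/5 = 2.4
assert y(50.0, a0 + 0.01) > 1e6      # slightly above a0: blows up
```

For a < a0 the solution diverges to minus infinity, for a > a0 to plus infinity, and at a = a0 it stays a bounded oscillation, which is exactly the transition part (c) asks about.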
http://mathoverflow.net/questions/102256/are-p-limits-scales-dense-in-the-infinite-musical-scale-of-all-rational-frequen
are p-limit scales dense in the infinite musical scale of all rational frequencies? In the wiki section on prime limit tuning, one reads: p-Limit Tuning. Given a prime number p, the subset of $Q^+$ consisting of those rational numbers x whose prime factorization has the form $x=p_1^{\alpha_1}\ldots p_r^{\alpha_r}$ with $p_1, \ldots, p_r\leq p$ forms a subgroup of $(Q^+, *)$. We say that a scale or system of tuning uses p-limit tuning if all interval ratios between pitches lie in this subgroup. Notice that for $p=3$ one gets the just-tuning version of the usual cycle of fifths (except that it is an infinite scale; there is no cycle here. To get the usual chromatic 12-note scale, people have fudged it via equal temperament.) Because of the fundamental law of the octaves, one does not actually work with the above, but with the quotient space given by identifying two notes $q_1$ and $q_2$ differing by a factor of $2^n$. Modding out by this equivalence, one can work with the "infinite musical scale" $[1, 2]\cap Q$: let us think of $1$ as the DO (C, if you like) of the infinite keyboard and $2$ as the next DO, and all rationals in between as the "notes" available. Now my question (which may be trivial, maybe not so trivial; I have no clue): if you take a p-limit scale, is it dense in the infinite musical scale? In other words, can I approximate arbitrarily closely any rational in $[1, 2]$ by a suitable element of the p-scale, for a fixed p? If not, what is the distribution of the infinite subscale in the infinite keyboard? Clearly, the infinite keyboard is the colimit of the intermediate scales as p goes to infinity. But how does each one "sit" in the higher ones? - I assume that you are everywhere working up to multiplication by a power of $2$ (or else the result is trivially false if $p = 2$, for example) and assuming that your scale has some nontrivial interval ratio $n$ in it which is not a power of $2$. Then this is straightforward.
Lemma: A subgroup of $\mathbb{R}$ is dense if and only if it contains arbitrarily small positive elements. Proof. The integer multiples of a small element $r$ are at most $|r|$ apart from each other. We apply the lemma as follows. The subgroup generated by $1$ and $\log_2 n$ contains arbitrarily small elements because $\log_2 n$ is irrational by prime factorization, so it is dense in $\mathbb{R}$. Given $\alpha \in (0, 1]$, it then follows that we can find a sequence $a_k, b_k$ of integers such that $$a_k + b_k \log_2 n \to \log_2 \alpha$$ hence such that $$2^{a_k} n^{b_k} \to \alpha$$ and the conclusion follows. - The question was trivial after all. But anyway, your proof is still beautiful: very simple and very elegant. Kudos, Qiaochu! PS Do you know if those subgroups play any role whatsoever aside from musical tuning? –  Mirco Mannucci Jul 15 '12 at 15:55
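To make the density concrete, here is a small numerical illustration (a sketch only; the function name and the search bound `max_b` are our own choices). It octave-reduces powers of 3 into $[1, 2)$ and finds the 3-limit pitch closest to a given target:

```python
import math

def best_3limit_approx(target, max_b=200):
    """Among 3-limit pitches 2^a * 3^b, octave-reduced into [1, 2),
    find the one closest to `target`."""
    log2_3 = math.log2(3)
    best_pitch, best_err = 1.0, abs(1.0 - target)
    for b in range(-max_b, max_b + 1):
        # choose the octave shift a so that 2^a * 3^b lands in [1, 2)
        exponent = b * log2_3 - math.floor(b * log2_3)  # = a + b*log2(3)
        pitch = 2.0 ** exponent
        err = abs(pitch - target)
        if err < best_err:
            best_pitch, best_err = pitch, err
    return best_pitch, best_err

pitch, err = best_3limit_approx(1.3)
# The error keeps shrinking as max_b grows, mirroring the density argument:
# the fractional parts of b*log2(3) fill up [0, 1) because log2(3) is irrational.
```

Increasing `max_b` makes the best error shrink without bound, which is exactly the equidistribution behind the proof above.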
http://slideplayer.com/slide/4317825/
# 9/21/12. I can identify which factors are prime numbers. LEARNING TARGET.

## Presentation transcript:

Why is Prime Factorization Important? Cryptography is the study of secret codes. Prime factorization is very important to people who try to make (or break) secret codes based on numbers. That is because factoring very large numbers is very hard, and can take computers a long time to do. If you want to know more, the subject is "encryption" or "cryptography".

Prime Factorization: finding which prime numbers multiply together to make the original number.

Word Bank: 36 has the factors 1, 2, 3, 4, 6, 9, 12, 18, 36.

What are the prime factors of 12? It is best to start working from the smallest prime number, which is 2, so let's check: 12 ÷ 2 = 6. Yes, it divided evenly by 2. We have taken the first step! But 6 is not a prime number, so we need to go further. Let's try 2 again: 6 ÷ 2 = 3. Yes, that worked also. And 3 is a prime number, so we have the answer: 12 = 2 × 2 × 3. As you can see, every factor is a prime number, so the answer must be right.
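The trial division the slides walk through (keep dividing by the smallest prime that works, then move up) can be sketched in a few lines of Python; the function name is our own:

```python
def prime_factors(n):
    """Return the prime factorization of n as a list, smallest primes first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out d as many times as it goes in evenly
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever is left is itself prime
        factors.append(n)
    return factors

print(prime_factors(12))  # [2, 2, 3], i.e. 12 = 2 × 2 × 3
print(prime_factors(36))  # [2, 2, 3, 3]
```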
http://mathoverflow.net/questions/14892/looking-for-reference-talking-about-relationship-between-descent-theory-and-cohom/43063
## Looking for a reference on the relationship between descent theory and cohomological descent

I am now taking a course focusing on triangulated geometry. The professor formulated Beck's theorem for Karoubian triangulated categories. The proof is very simple: just use the universal homological functor (equivalent to Verdier abelianization) to get back to the abelian setting (in particular, a Frobenius abelian category), then use Beck's theorem for abelian categories to get the proof. When he finished the proof, he remarked that cohomological descent theory can be taken as a consequence of the triangulated version of Beck's theorem. As we know, Beck's theorem for abelian categories is equivalent to Grothendieck flat descent theory (Beck's theorem may be more general). I have two questions: 1. Is there any reference (other than SGA 4) in English explaining the relationship between usual descent theory and cohomological descent theory? What I am looking for is not a very thick book but lecture notes with some examples. 2. I know Jacob Lurie developed a derived version of Beck's theorem for his infinity-categories (correct me if I am mistaken). But I have never read his paper very carefully. I wonder whether he explained the relationship between Beck's theorem and "cohomological descent" in his setting (if such terminology exists). All other comments are welcome. Thank you - The relationship between cohomological descent and Lurie's Barr-Beck is exactly the same as the relationship between ordinary descent and ordinary Barr-Beck. To put things somewhat blithely, let's say you have some category of geometric objects C (e.g. varieties) and some contravariant functor Sh from C to some category of categories (e.g. to X gets associated its derived quasi-coherent sheaves, or, as in SGA, its bounded constructible complexes of l-adic sheaves). Now let's say you have a map p:Y --> X in C, and you want to know if it's good for descent or not.
All you do is apply Barr-Beck to the pullback map p*:Sh(X) --> Sh(Y). For this there are two steps: check the conditions, then interpret the conclusion. The first step is very simple -- you need something like p* conservative, which usually happens when p is suitably surjective, and some more technical condition which I think is usually fine as long as p isn't, say, infinite-dimensional. For the second step, you need to relate the endofunctor p*p_* (here p_* is right adjoint to p*... you should assume this exists) to something more geometric; this is possible whenever you have a base-change result for the fiber square gotten from the two maps p:Y --> X and p:Y --> X (which are the same map). For instance, in the l-adic setting you're OK if p is either proper or smooth (or flat, actually, I think). Anyway, when you have this base-change result (maybe for p as well as for its iterated fiber products), you can (presumably) successfully identify the algebras over the monad p*p_* (should I say co- everywhere?) with the limit of Sh over the usual simplicial object associated to p, and so Barr-Beck tells you that Sh(Y) identifies with this too, and that's descent. The big difference between this homotopical version and the classical one is that you need the whole simplicial object and not just its first few terms, to have the space to patch your higher gluing homotopies together. - Oh, this is exactly what I am looking for! – Shizhuo Zhang Feb 10 2010 at 14:10 Stacks-GIT develops this theory in (what was at one point) chapter 16, in the chapter on hypercoverings. It is explained in 16.7, although I haven't read it, so I can't vouch for its quality. Specifically, descent by cohomology is covered in chapter 27, but it relies very much on the machinery developed in chapter 16.
Another useful thing you might want to look at (in French): http://www.math.univ-toulouse.fr/~toen/m2.html Cours 5 develops homotopical descent, which will be very useful for studying Lurie and Toen-Vezzosi. - For 1 -- Brian Conrad's notes here are great and in English. Lemma 6.8 and the discussion preceding it explain the relation to descent theory. -
http://mathhelpforum.com/differential-geometry/187105-integrating-copmplex-function-proof-one-property.html
# Math Help - Integrating complex function (proof of one property)

1. ## Integrating complex function (proof of one property)

Hello (sorry for bad usage of English), I have some issues with this (got too rusty on this). I need to prove that: $\displaystyle \int c f(z) dz = c\int f(z) dz$ (I try to do it like this... so if I'm wrong or there's a "smarter" way to do it I would be very grateful... perhaps I go way back to show what the integral is... don't know... please help.) Well... Let the complex function $f(z) = u(x,y) + i v(x,y)$ be defined in a region $G\subset \mathbb{C}$ and let $l\in G$ be a smooth oriented curve with starting point A and end point B. We make a division: $\pi : A = z_0, z_1, z_2, ..., z_{k-1}, z_k, ..., z_n = B$ and choose sample points $\xi _k = \alpha _k + i\beta _k$ (k = 1,2,3,...,n), where the point $\xi _k$ lies between the points $z_{k-1}$ and $z_k$. We form the integral sum for the division $\pi$: $\displaystyle \varsigma _\pi = \sum_{k =1}^{n} f(\xi _k )(z_k - z_{k-1})$ With $\displaystyle \Delta z_k = z_k - z_{k-1}$, $\displaystyle \Delta z_k = \Delta x_k + i \Delta y_k$, $\displaystyle f(\xi_k) = u(\alpha _k, \beta _k) + i v(\alpha _k, \beta_k)$, we put this all back in the integral sum and have: $\displaystyle \varsigma _\pi = \sum_{k =1}^{n}\left [ u(\alpha _k, \beta _k)\Delta x_k - v(\alpha _k, \beta_k)\Delta y_k \right ] + i \sum_{k =1}^{n}\left [ u(\alpha _k, \beta _k)\Delta y_k + v(\alpha _k, \beta_k)\Delta x_k \right ]$ Let $\displaystyle \lambda _\pi = \max_k |\Delta z_k|$ denote the mesh of the division. Then we have the definition: "For a complex number I we say it is the Riemann integral of the complex function f(z) on the curve l from point A to point B if and only if: $\displaystyle (\forall \varepsilon >0)(\exists \delta (\varepsilon )>0)$ such that $|I-\varsigma _\pi | <\varepsilon$ whenever $\lambda_\pi = \max_k |\Delta z_k|<\delta$," i.e. $\displaystyle \lim_{\lambda _\pi \to 0} \sum_{k=1}^{n}f(\xi _k)\Delta z_k = I$ If I exists, then we can write...
$\displaystyle I = \int _l f(z) dz$ or $\displaystyle I =(l) \int _A ^B f(z) dz$ So if this exists, then if we have some complex function g(z) = c f(z), where c is a constant, we can say that: $\displaystyle \lim_{\max_k |\Delta z_k| \to 0} \sum_{k=1}^{n}c f(\xi _k)\Delta z_k = I_g$ and since c is a constant, c can come in front of the sum and the limit, so $\displaystyle c \lim_{\max_k |\Delta z_k| \to 0} \sum_{k=1}^{n} f(\xi _k)\Delta z_k = I_g$, that is, $I_g = c I$, and we can write $\displaystyle \int c f(z) dz = c\int f(z) dz$ I think this is way too simple to write this much (if it's true)... but I don't know, and that's why I need help... and one more thing... how to prove that: $\displaystyle \int_{z_1} ^{z_2} A dz = A (z_2 - z_1)$ where A is a constant, but without using the Newton-Leibniz formula?! (It all seems to be trivial until I get started... or I'm just complicating it for myself.) Thanks for any help

2. ## Re: Integrating complex function (proof of one property)

Originally Posted by sedam7 Hello (sorry for bad usage of English), I have some issues with this (got too rusty on this). I need to prove that: $\displaystyle \int c f(z) dz = c\int f(z) dz$ (I try to do it like this... so if I'm wrong or there's a "smarter" way to do it I would be very grateful... perhaps I go way back to show what the integral is... don't know... please help.) Could you transcribe the exact formulation of the question? If we are talking only about indefinite integrals, the question is easier; use for example the definition: if $f(z),F(z)$ are analytic on a region $\mathcal{R}$ such that $F'(z)=f(z)$, then $F(z)$ is called an indefinite integral or antiderivative of $f(z)$, denoted by $F(z)=\int f(z)dz$. Now, apply a well known property about derivatives.

3.
## Re: Integrating complex function (proof of one property)

Thanks. The question goes like this: if f(z) is an integrable function, show that $\int c f(z) dz = c \int f(z) dz$. There's nothing more to the question (and yes, this one is about indefinite integrals... the second proof is for definite integrals). Originally Posted by FernandoRevilla: if $f(z),F(z)$ are analytic on a region $\mathcal{R}$ such that $F'(z)=f(z)$, then $F(z)$ is called an indefinite integral or antiderivative of $f(z)$, denoted by $F(z)=\int f(z)dz$. Now, apply a well known property about derivatives. I don't seem to follow (it looks like the definition of the primitive function... or it just looks like that to me, sorry).
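As a numerical sanity check on the limit-of-sums argument in the thread, here is a small sketch (function names and the test contour are our own) that approximates both sides of $\int c f(z)\,dz = c\int f(z)\,dz$ with Riemann sums $\sum f(\xi_k)\Delta z_k$ along a contour:

```python
def contour_integral(f, path, n=2000):
    """Approximate the contour integral of f along path(t), t in [0, 1],
    with the Riemann sum  sum f(xi_k) * (z_k - z_{k-1})."""
    total = 0.0 + 0.0j
    for k in range(n):
        z0 = path(k / n)
        z1 = path((k + 1) / n)
        xi = path((k + 0.5) / n)   # sample point between z_{k-1} and z_k
        total += f(xi) * (z1 - z0)
    return total

# Integrate f(z) = z^2 along the straight segment from 1 to 1 + i.
path = lambda t: 1 + 1j * t
f = lambda z: z * z
c = 2 - 3j

I = contour_integral(f, path)
I_scaled = contour_integral(lambda z: c * f(z), path)
# I_scaled agrees with c * I (up to rounding): the constant factors out of every
# term of the sum, so it factors out of the limit as well.
```

Since the constant factors out of each term of the sum exactly, the two approximations agree to machine precision, for every division — which is the whole content of the proof.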
http://answers.wikia.com/wiki/How_much_is_3/4_of_32_color_tiles
# How do you find a fraction of a number?

## Redirected from How much is 3/4 of 32 color tiles

To find 1/25 of 50: $1/25 = 0.04$, and $0.04 \times 50 = 2$, so 1/25 of 50 is 2.
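The same convert-and-multiply procedure can be sketched in Python (the function name is our own), including the "3/4 of 32" case from the redirected title:

```python
from fractions import Fraction

def fraction_of(numerator, denominator, amount):
    """Multiply the fraction by the amount, as in the worked example above."""
    return Fraction(numerator, denominator) * amount

print(fraction_of(1, 25, 50))  # 2
print(fraction_of(3, 4, 32))   # 24, i.e. 3/4 of 32 color tiles
```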
https://filipiknow.net/integral-calculus-formulas/
# Basic Integration

Calculus has two major branches – differential calculus and integral calculus. Differential calculus focuses on the concept of derivatives, how to derive them, and their applications. We have already covered the basics of differential calculus in the previous reviewer. This time, let's jump into the second branch of calculus, which deals with integrals. Just like derivatives, integrals offer a lot of practical applications in various fields. But what are integrals? How are they different from derivatives? How can you calculate them? Let us answer these questions through this reviewer.

Click below to go to the main reviewers: Ultimate LET Reviewer Ultimate UPCAT Reviewer

## What Are Integrals?

Integrals are the "antiderivative" of a function. This means that the integral is the opposite of the derivative. The process of identifying the integral is known as integration.

In integration, the given is the derivative of the function, and your goal is to compute the original function. That "original function" is the integral of the given function. For example, suppose that we want to find the integral of 2x. This means that we need to find a function such that its derivative is equal to 2x. x^2 is an integral of 2x since the derivative of x^2 is equal to 2x.

Different functions can be an integral of 2x. For example, x^2, x^2 + 1, and x^2 + 100 are all possible integrals of 2x. Because a given function has multiple integrals, we add a letter "C" every time we have solved the integral. "C" stands for an arbitrary constant. This is why "C" is called the constant of integration (actually, you can use any letter to represent the constant of integration, but for the sake of convenience, we will use "C" throughout this reviewer to refer to the constant of integration).

Going back to 2x, this function can have multiple integrals. For this reason, we provide a general solution that expresses the integral of 2x.
Specifically, we express the integral as x^2 + C, where C is an arbitrary constant. This means that C can take any numerical value such as 1, 1000, ½, π, 0.12, and so on. Since x^2 + 1, x^2 + 1000, x^2 + ½, and x^2 + π all have a derivative equal to 2x, we just generalize the integral as x^2 + C.

## Integral Notation

When you see the integral symbol ∫ before a given function, it means that we are taking the integral of that function. The function that we are taking the integral of is called the integrand. Note that there's always "dx" written after the given function to indicate the variable involved in the integration process (i.e., x). For example, the notation ∫ 3x^2 dx means that we are integrating the function 3x^2. We need to identify a specific function whose derivative is equal to 3x^2. In this case, the integrand is 3x^2.

## Indefinite and Definite Integrals

When taking the integral of a specific function, we must identify whether we are taking its indefinite integral or its definite integral. The example above, ∫ 3x^2 dx, is an example of a case where we are taking an indefinite integral. In an indefinite integral, there are no upper limits and lower limits involved, and the answer is always a function with a variable and the constant of integration "C."

On the other hand, definite integrals are those with upper and lower limits, such as the integral of 2x from 1 to 2. A definite integral has small numbers written above and below the integral sign. These are what we refer to as the limits of the integration, and they mean we are taking a definite integral. In a definite integral, the answer is not a function with a variable x but a specific number.

In this reviewer, we will primarily focus on indefinite integrals. However, we will also provide an overview of definite integrals to prepare you for an actual calculus class.

### How To Find the Indefinite Integral of a Function

To find the indefinite integral of a function, we apply various integration rules.
Note that the indefinite integral of a function is also a function with a variable and contains a constant of integration "C."

## Integration Rules

### 1. Integral of a Constant

"The indefinite integral of a constant k is kx + C, where x is the variable involved." In symbols: ∫ k dx = kx + C

This is the simplest of all the integration rules. It implies that the indefinite integral of a constant (or any numerical value) is just the constant itself times the variable involved in the integration, plus the constant of integration C.

Sample Problem 1: Solve for ∫ 2 dx

Solution: To find the integral of 2, we multiply 2 by the variable involved (which is x) and then add C. Hence, the indefinite integral of 2 is 2x + C. Note that the derivative of 2x + C (where C is any numerical value) is equal to 2. Indeed, 2x + C is the integral of 2.

Sample Problem 2: Compute for ∫ 9 dx

Solution: Using the integral of a constant rule, we multiply 9 by x and then add C. Hence, the answer is 9x + C.

Sample Problem 3: What is ∫ π dx?

Solution: Note that the irrational number π is also a constant (since it is still a numerical value). Hence, to find ∫ π dx, we need to apply the integral of a constant rule. To solve for the indefinite integral, we just multiply π by x and then add C. Hence, we have πx + C as the answer.

Sample Problem 4: What is ∫ 9 dm?

Solution: Note that the involved variable this time is not x but m instead. Hence, we should multiply 9 by m and then add C. The answer is simply 9m + C.

### 2. Power Rule for Integrals

The power rule allows us to identify the integral of a given function with a variable that is raised to a real number value. We can follow the formula below to find the integral of x^n (where x is the variable and n is a real number, n ≠ -1):

∫ x^n dx = x^(n + 1)/(n + 1) + C

Thus, to find the integral of x^n, all you have to do is to identify first the value of n and then plug it into the formula above.
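The power rule ∫ x^n dx = x^(n + 1)/(n + 1) + C can be sanity-checked numerically: differentiating the formula's output should recover x^n, since the integral is the antiderivative. A short Python sketch (function names are our own):

```python
def power_rule_integral(n):
    """Antiderivative of x^n from the power rule: F(x) = x^(n+1)/(n+1)."""
    def F(x):
        return x ** (n + 1) / (n + 1)
    return F

def numeric_derivative(F, x, h=1e-6):
    """Central-difference estimate of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

# The derivative of the power-rule antiderivative should give back x^n:
F = power_rule_integral(5)            # F(x) = x^6/6
print(numeric_derivative(F, 2.0))     # ≈ 2^5 = 32
```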
Sample Problem 1: Determine ∫ x dx

Solution: The given variable here is x with an exponent of 1 (note that x can be written as x^1). This means that we have n = 1. Let us plug n = 1 into the formula for the power rule. Hence, the indefinite integral of x is: x^2/2 + C

Sample Problem 2: Compute for ∫ x^5 dx

Solution: The given variable here is x with an exponent of 5. This means that we have n = 5. Let us plug n = 5 into the formula for the power rule. Hence, the indefinite integral of x^5 is: x^6/6 + C

Sample Problem 3: Compute for ∫ u^3 du

Solution: The given variable here is u with an exponent of 3. This means that we have n = 3. Let us plug n = 3 into the formula for the power rule. Hence, the indefinite integral of u^3 is: u^4/4 + C

Sample Problem 4: Use the power rule for integration to identify ∫ x^π dx

Solution: The given variable here is x with an exponent of π. This means that we have n = π. Let us plug n = π into the formula for the power rule. Hence, the indefinite integral of x^π is: x^(π + 1)/(π + 1) + C

### 3. Multiplication of a Constant

"The indefinite integral of kx^n, where k is a constant, is equal to k ∫ x^n dx."

The multiplication of a constant integration rule tells us that the integral of the product of a constant and a function is equal to the product of the constant and the integral of the function. In symbols: ∫ kx^n dx = k ∫ x^n dx

Let's try to apply this rule to identify the integral of the product of a constant and a function.

Sample Problem 1: What is ∫ 6x^2 dx?

Solution: The integral of the product of a constant and a function can be expressed as the product of the constant and the integral of the function. This means that we can rewrite ∫ 6x^2 dx as 6 ∫ x^2 dx. The only thing we have to perform now is to compute ∫ x^2 dx using the power rule for integrals: ∫ x^2 dx = x^3/3 + C. This means that ∫ 6x^2 dx = 6(x^3/3) + C, which is also equivalent to (6/3)x^3 + C, and 6/3 in lowest terms is 2.
Therefore, the simplified form of the integral is 2x^3 + C.

Here's a quick preview of what we have performed above: ∫ 6x^2 dx = 6 ∫ x^2 dx = 6(x^3/3) + C = 2x^3 + C

Sample Problem 2: Identify ∫ ½ x^3 dx

Solution: The integral of the product of a constant and a function can be expressed as the product of the constant and the integral of the function. This means that we can rewrite ∫ ½ x^3 dx as ½ ∫ x^3 dx. Let us apply the power rule for integrals so that we can identify the value of ∫ x^3 dx: ∫ x^3 dx = x^4/4 + C. Since we already have ½ ∫ x^3 dx earlier and we have computed that ∫ x^3 dx = x^4/4 + C, we then have ½(x^4/4) + C, which can be simplified into x^4/8 + C. Hence, the answer to this problem is: x^4/8 + C

Sample Problem 3: Compute for ∫ 0.10u^3 du

Solution: The integral of the product of a constant and a function can be expressed as the product of the constant and the integral of the function. This means that we can rewrite ∫ 0.10u^3 du as 0.10 ∫ u^3 du. Let us apply the power rule for integrals so that we can identify the value of ∫ u^3 du: ∫ u^3 du = u^4/4 + C. Since we already have 0.10 ∫ u^3 du earlier and we have computed that ∫ u^3 du = u^4/4 + C, we then have 0.10(u^4/4) + C, which can be simplified into 0.025u^4 + C. Hence, the answer to this problem is: 0.025u^4 + C

### 4. Sum Rule of Integrals

"The indefinite integral of the sum of two functions is equal to the sum of the indefinite integrals of the functions." In symbols, ∫ [f(x) + g(x)] dx = ∫ f(x) dx + ∫ g(x) dx

The sum rule allows us to find the integral of the sum of functions by simply rewriting it as the sum of their respective integrals.

Sample Problem 1: Solve ∫ (x^2 + x) dx using the sum rule.

Solution: In ∫ (x^2 + x) dx, the integrand is the sum x^2 + x. This implies that we are taking the integral of the sum of two functions. By applying the sum rule, we can express ∫ (x^2 + x) dx as ∫ x^2 dx + ∫ x dx. Thus, our next move is to determine the respective indefinite integrals of x^2 and x and then add them: ∫ x^2 dx = x^3/3 + C and ∫ x dx = x^2/2 + C. We combine the respective integrals of x^2 and x and consolidate the two constants of integration as a single constant (since the sum of two constants is also a constant): ∫ (x^2 + x) dx = x^3/3 + x^2/2 + C
Here’s a quick preview of what we have performed above: Sample Problem 2: Identify ∫ (2x + x3) dx Solution: By the sum rule of integrals, we can express ∫ (2x + x3) dx as ∫ 2x dx + x3 dx Now, let us derive the respective integrals of 2x and x3 : Here’s a quick preview of what we have performed above: ### 5. Difference Rule of Integrals This is the counterpart of the sum rule of integrals for subtraction. Essentially, this rule states that “the indefinite integral of the difference of two functions is equal to the difference of the integrals of the functions.” In symbols, ∫ [f(x) – g(x)] dx = ∫ f(x) dx – ∫ g(x) dx Sample Problem 1: Identify ∫ (x2 – x4) dx Solution: In the given expression above, the integrand is x2 – x4, which means we are integrating a difference between two functions. We can apply the difference rule for this case. According to the difference rule, we can derive the integral of the difference between two functions by simply rewriting them as the difference of the integrals of the functions. Hence, we can rewrite ∫ (x2 – x4) dx as ∫ x2 dx – ∫ x4 dx. Let us now identify the indefinite integrals of x2 and x4 and subtract them so that we can solve this problem. Using our solution above, we have identified that the integral is Here’s a quick preview of what we have performed above: Sample Problem 2: Use the difference rule to identify ∫ (3x3 – ½ x4) dx Solution: Using the difference rule, we can rewrite the given expression which is ∫ (3x3 – ½ x4) dx as 3x3 dx – ∫ ½ x4 dx. Applying the integration rules we have learned, let us identify the respective integral of 3x3 and ½ x4. Let us start with 3x3. We can express this as 3 * x3. Using the multiplication of a constant rule, ∫ (3 * x3) is equivalent to 3 * ∫ x3 We can compute the value of ∫ x3 using the power rule for integrals: Hence, which is equivalent to Now, let us find the integral of ½ x4. 
Using the multiplication of a constant rule, ∫ ½ x^4 dx = ½ ∫ x^4 dx. We can compute the value of ∫ x^4 dx using the power rule for integrals: ∫ x^4 dx = x^5/5 + C. Thus, ½ ∫ x^4 dx = ½(x^5/5) + C, which is equivalent to x^5/10 + C. This means that the integral must be (3/4)x^4 – x^5/10 + C.

Sample Problem 3: Calculate ∫ (2x^3 + 3x^2 – x) dx

Solution: Since the given integrand has addition and subtraction signs involved, we can apply the sum and difference rules simultaneously. This implies that we can rewrite the given as ∫ 2x^3 dx + ∫ 3x^2 dx – ∫ x dx. Applying the multiplication by a constant rule, we have: 2 ∫ x^3 dx + 3 ∫ x^2 dx – ∫ x dx. We then apply the power rule to calculate the respective integrals of x^3, x^2, and x: 2(x^4/4) + 3(x^3/3) – x^2/2 + C. Rewriting the result above, we have: x^4/2 + x^3 – x^2/2 + C. Hence, the integral is: x^4/2 + x^3 – x^2/2 + C

## Definite Integrals

In the previous section, we learned how to compute the indefinite integrals of basic functions. Remember that in an indefinite integral, the result is a function with a constant of integration "C." On the other hand, a definite integral of a function will give you a specific numerical value. Unlike indefinite integrals, definite integrals don't have to include the arbitrary constant "C," since we will obtain a specific value in this case. By including an upper limit and a lower limit on our integral symbol, we can evaluate the integral and obtain a numerical value.

For instance, the indefinite integral ∫ 2x dx = x^2 + C. However, if we put an upper limit and a lower limit on the given expression, we can obtain a numerical value. Suppose that we take the definite integral of 2x from 1 to 2. If we solve this, we will obtain 3. Note that we get a numerical value, not a function with an arbitrary constant.

What is this "numerical value" we obtain in a definite integral? This is the area under the curve of the given integrand within the given limits. For instance, the definite integral of 2x from 1 to 2 being equal to 3 means that the area under the function 2x, when graphed in the coordinate plane, is equal to 3 within the range from 1 to 2.
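The claim that the definite integral of 2x from 1 to 2 equals the area 3 can be checked numerically with a Riemann sum, which adds up thin rectangles under the curve. A short Python sketch (function names are our own):

```python
def riemann_sum(f, a, b, n=100000):
    """Left Riemann sum approximating the definite integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + k * dx) for k in range(n)) * dx

approx = riemann_sum(lambda x: 2 * x, 1.0, 2.0)   # area under 2x from 1 to 2
exact = 2.0 ** 2 - 1.0 ** 2                       # F(2) - F(1) with F(x) = x^2
print(approx, exact)                              # approx ≈ 3, exact = 3.0
```

The rectangle total converges to the antiderivative difference F(2) − F(1) as the number of rectangles grows.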
We will not delve too much into this analytical concept of integrals because it is already beyond the scope of our reviewer. For this reason, we will be focusing only on the algebra of calculating the definite integral.

### How To Compute the Definite Integral

Suppose we want to find the definite integral given a lower limit a and an upper limit b. To find the definite integral:

1. Compute the indefinite integral of the given function.
2. Evaluate the indefinite integral at the lower limit a.
3. Evaluate the indefinite integral at the upper limit b.
4. Subtract the computed value in step 2 from the computed value in step 3.

Sample Problem: Find the definite integral of 2x from 1 to 2.

Solution:

Step 1: Compute the indefinite integral of the given function. By applying the integration rules we learned in the previous sections, we can compute the indefinite integral of the given function (2x): ∫ 2x dx = x^2 + C

Step 2: Evaluate the indefinite integral at the lower limit a. The lower limit in the given problem is 1. So, we substitute 1 in the indefinite integral computed in step 1, which is x^2 + C: (1)^2 + C = 1 + C. Thus, we have obtained 1 + C.

Step 3: Evaluate the indefinite integral at the upper limit b. The upper limit in the given problem is 2. So, we substitute 2 in the indefinite integral computed in step 1, which is x^2 + C: (2)^2 + C = 4 + C. Thus, we have 4 + C.

Step 4: Subtract the computed value in step 2 from the computed value in step 3. We obtained 4 + C in step 3 and 1 + C in step 2. Thus, we have: (4 + C) – (1 + C) = (4 – 1) + (C – C) = 3

Thus, the definite integral is 3.

Next topic: Propositional Logic

Previous topic: Basic Differentiation
http://cpr-condmat-meshall.blogspot.com/2013/07/13066875-guang-yang-et-al.html
## Influence of device geometry on tunneling in $\nu=5/2$ quantum Hall liquid [PDF]

Guang Yang, D. E. Feldman

Two recent experiments [I. P. Radu {\it et al.}, Science $\mathbf{320}$, 899 (2008) and X. Lin {\it et al.}, Phys. Rev. B $\mathbf{85}$, 165321 (2012)] measured the temperature and voltage dependence of the quasiparticle tunneling through a quantum point contact in the $\nu=5/2$ quantum Hall liquid. The results led to conflicting conclusions about the nature of the quantum Hall state. In this paper, we show that the conflict can be resolved by recognizing different geometries of the devices in the experiments. We argue that in some of those geometries there is significant unscreened electrostatic interaction between the segments of the quantum Hall edge on the opposite sides of the point contact. Coulomb interaction affects the tunneling current. We compare experimental results with theoretical predictions for the Pfaffian, $SU(2)_2$, 331 and K=8 states and their particle-hole conjugates. After Coulomb corrections are taken into account, measurements in all geometries agree with the spin-polarized and spin-unpolarized Halperin 331 states.

View original: http://arxiv.org/abs/1306.6875
https://www.groundai.com/project/stm-imaging-of-a-bound-state-along-a-step-on-the-surface-of-the-topological-insulator-bi_2te_3/3
# STM imaging of a bound state along a step on the surface of the topological insulator Bi2Te3

Zhanybek Alpichshev (Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025; Geballe Laboratory for Advanced Materials, Stanford University, Stanford, CA 94305; Department of Physics, Stanford University, Stanford, CA 94305)

J. G. Analytis (Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory; Geballe Laboratory for Advanced Materials, Stanford University)

J.-H. Chu (Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory; Geballe Laboratory for Advanced Materials, Stanford University)

I.R. Fisher (Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory; Geballe Laboratory for Advanced Materials, Stanford University; Department of Applied Physics, Stanford University)

A. Kapitulnik (Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory; Geballe Laboratory for Advanced Materials, Stanford University; Department of Physics, Stanford University; Department of Applied Physics, Stanford University)

July 26, 2019

###### Abstract

A detailed study of the LDOS associated with the surface-state band near a step-edge of the strong topological insulator Bi2Te3 reveals a one-dimensional bound state that runs parallel to the step-edge and is bound to it at some characteristic distance.
This bound state is clearly observed in the bulk gap region, while it becomes entangled with the oscillations of the warped surface band at high energy alpichshev (), and with the valence band states near the Dirac point. We obtain excellent fits to theoretical predictions theory () that properly incorporate the three-dimensional nature of the problem to the surface state hzhang (). Fitting the data at different energies, we can recalculate the LDOS originating from the Dirac band without the contribution of the bulk bands or incoherent tunneling effects.

###### pacs: 71.18.+y, 71.20.Nr, 79.60.-i

Bi2Te3 and Bi2Se3 have recently been argued to be three-dimensional (3D) topological insulators (TI) pt (); fu (); qi1 (); hsieh1 (), exhibiting a bulk gap and a single, non-degenerate Dirac fermion surface band topologically protected by time-reversal symmetry hzhang (). Subsequent angle-resolved photoemission spectroscopy (ARPES) chen (); hsieh2 () and scanning tunneling microscopy (STM) and spectroscopy (STS) roushan (); alpichshev (); zhang () experiments confirmed that prediction. Focusing on Bi2Te3, ARPES revealed that with appropriate hole-doping, the Fermi level could be tuned to intersect only the surface states, indicating fully gapped bulk states, as is expected from a three-dimensional TI chen (); hsieh2 (). Complementing ARPES, STM and STS studies emphasizing quasiparticle scattering interference from impurities and macroscopic defects demonstrated the "protected" nature of the surface band roushan (); alpichshev (); zhang (). While protection of the surface state is guaranteed within a simple Dirac band, warping effects can produce special nesting wavevectors that allow for electronic waves within the surface state band (SSB) fu1 (). These effects have been observed experimentally in the upper part of the SSB by Alpichshev et al. alpichshev ().
In that paper we reported STS studies on high-quality Bi2Te3 crystals which exhibit oscillations of the LDOS near a step-edge. While within the main part of the SSB oscillations were shown to be strongly damped, supporting the hypothesis of topological protection, at higher energies, as the SSB becomes concave (as is observed by complementary ARPES data), oscillations appear which disperse with a particular wave-vector corresponding to the allowed spin states of the warped constant-energy contour fu (). In this paper we present a more detailed study of the LDOS associated with the SSB near a step-edge, and report on the observation of what is most probably a one-dimensional (1D) state bound onto it. This bound state is clearly observed in the bulk gap region, while it becomes entangled with the oscillations of the warped SSB at high energy and with the valence band states near the Dirac point. We successfully fit the data to theoretical predictions theory () that properly incorporate the three-dimensional nature of the problem (outlined in hzhang ()) to the surface state. Fitting the data at different energies, we can recalculate the LDOS originating from the Dirac band without the contribution of the bulk bands or incoherent tunneling effects. The excellent agreement between the experimental data and simple theory is another testimony to the simple, yet robust nature of the surface state in this model system of TI. For the present study we used Cd-doped single crystals of Bi2Te3 identical to those used in our earlier study of the warped surface band alpichshev (). Nominal doping levels of up to 1 for Cd were incorporated to compensate n-type doping from vacancy and anti-site defects that are common in the Bi2Te3 system. Actual doping was determined separately using chemical and Hall-effect methods and was shown by ARPES chen () to be in excellent agreement with the relative position of the Dirac point with respect to the Fermi energy.
For the data described in this paper the Dirac point was located near 300 meV. Samples were cleaved in vacuum of better than Torr, and quickly lowered to the 9 K section of the microscope, where cryo-pumping ensures that the surface remains free from adsorbates for weeks or longer. Topography scans were taken at several bias voltages and setpoint currents (usually 200mV and 100pA). A thorough discussion of the important features of the data that was collected from many topography and spectroscopy scans can be found in alpichshev (), here we concentrate on scans in the vicinity of a step-edge defect that was obtained in the process of cleaving the crystal. The measured thickness of the step is 30.5 (in excellent agreement with the thickness of one unit cell bite ()), and we concentrate on the region away from the step as is shown in Fig. 1. We mark the beginning of the step which runs along the [100] direction with (this choice is arbitrary to within a few because of the roughness of the step). Above the topography, we also show in Fig. 1 the LDOS of that region for a bias voltage -160 meV (in the middle of the SSB) which lies in the gap between the bulk conduction band (BCB) and the bulk valence band (BVB). While the LDOS is rather flat and featureless in most of the scanned area, a pronounced LDOS-peak is observed next to the step with constant strength along the step. We identify this peak as a bound state and study it further as a function of energy. As described in ref. alpichshev (), at high enough energies, away from the Dirac point and in the region of warped surface band contour we observe oscillations of the LDOS that can be fit with the expression: (amplitude , and a scattering potential phase-shift ). In this region LDOS demonstrates only simple and naturally expected deviation from the simple asymptotic expression around the origin. 
As the energy is lowered, and we enter the region in which oscillations are strongly damped (below meV), a new peak seems to emerge at a distance from the step. For example, we show in Fig. 2 raw LDOS data as a function of distance from the step (zero marks the position of the step) in both direction, for bias energy of meV, inside the bulk gap. For reference we also show in the same figure a cross section of the topography of the step. Note that due to the height of the step ( 30 ), when the tip drops down to keep the distance to the surface constant, it “probes” the edge of the step which results in a rounded region that extends to . We estimate that the actual step drops down to the lower region rather steeply around zero. To the right of the step the peak at 15 is the only feature that exists. The solid line shows a fit to our model theory (), as will be discussed below. We note that we can impose the mirror-image of the fit to the “behind the step” region. The excellent agreement that is observed beyond the region in which the tip did not clear the step is a direct proof of the two-dimensional nature of the surface band that beyond the exponentially decaying effect of the bound state, it “does not know” about the defect as if there is one, uninterrupted surface that wraps the crystal. Fig. 3 shows the raw data of LDOS near the step with similar fits to Fig. 2. The result at meV seems to be a smooth progression from higher energies, if we subtract the oscillatory part, where by meV the peak stops changing shape and it can be fitted with our model with no residual. Despite the robustness of the experimentally observed peak, neither a clear prediction, nor possible explanation exists to date to account for this phenomenon. We attribute this deficiency to the pure-2D theoretical framework which all of the previous studies used. The results of these calculations (e.g. 
biswas1 ()) show that any features obtained are necessarily dispersive with energy, while the peak in LDOS we report on is clearly not, which unambiguously implies some details are missing. While the initial Hamiltonian derived from the band structure of Bi2Te3 did include all bands hzhang (), all the calculations of impurity scattering or scattering from macroscopic defects used the truncated two-dimensional effective surface Hamiltonian first introduced by Zhang et al. hzhang (). At most, higher-order surface terms were introduced to account for the hexagonal warping effects fu1 (). Here we claim that by using a pure 2D Hamiltonian, one inherently assumes that $k_x$ and $k_y$ are "good" quantum numbers, which may be a wrong starting point in the case of a strong perturbation such as a step-edge. When the local curvature of the surface of a topological insulator is much less than the characteristic length-scale of the problem (a scale set by the bulk gap), one cannot assume that the surface state will be just a solution of the conventional effective surface Hamiltonian wrapping the new curved surface. Instead, effects due to the gradients near the corrugated surface induce bulk interference which becomes relevant. Therefore, a coherent treatment of such a surface defect should start with the appropriate 3D Hamiltonian, such as the one derived by Zhang et al. hzhang () (we choose units such that $\hbar=1$):

$$\hat{H}\psi(\mathbf{r})=\Big\{M(\mathbf{r})\,I_{2\times 2}\otimes\tau_z+v\big(\hat{k}_x\,\sigma_x\otimes\tau_x+\hat{k}_y\,\sigma_y\otimes\tau_x\big)+v_z\big(\hat{k}_z\,\sigma_z\otimes\tau_x\big)+O(k^2)\Big\}\psi(\mathbf{r})=E\psi(\mathbf{r}) \qquad (1)$$

where $\sigma_i$ and $\tau_i$ are Pauli matrices in spin and orbital space respectively, and $I_{2\times 2}$ is the identity matrix. $v$, $v_z$, and $M$ are material-specific parameters. In particular, the "mass term" $M(\mathbf{r})$ represents the asymmetry between the inside and the outside of the bulk of the TI material and has units of energy in our notation. Asymptotically, $M$ takes one constant value inside the bulk and another outside, while near the step it varies rapidly to match the asymptotic behavior.
It is easy to see that from the topology point of view, it is enough to give $M$ opposite signs in vacuum and in the TI bulk in order to create a boundary that separates the two distinct topological states. Further, to localize the effect of the mass twist to the TI surface, Eq. 1 can be squared. The result is a new Schrödinger-like equation (for convenience, the z-axis was rescaled so that a single velocity $v$ appears), for which it can be demonstrated that the LDOS obtained from it is directly related to the LDOS of the original unsquared Hamiltonian:

$$\Big\{\big[M^2-v^2(\partial_x^2+\partial_y^2+\partial_{\tilde{z}}^2)\big]I_{4\times 4}-v\,\big(\nabla M(\mathbf{r})\cdot\boldsymbol{\sigma}\big)\otimes\tau_y\Big\}\psi(\mathbf{r})=E^2\psi(\mathbf{r}). \qquad (2)$$

However, despite the fact that the effect of the step is now separated into the gradient term, Eq. 2 is still difficult to solve due to the unique spin structure associated with the eigenstates. However, one may argue that since at least one of the edges of the step (the one perpendicular to the cleavage surface) is very rough, it may serve as a termination line for the spin state. Close examination of the geometry of the step as shown in Fig. 1 indicates a bound surface state that sits some distance away from the step edge and spreads along it. The step, which is a line defect in the surface plane, is created by an intersection of a surface perpendicular to the z-axis with a surface perpendicular to the x-axis. In such a formulation the momentum along the step continues to be a good quantum number in the plane. Therefore, setting it to zero, we make an approximation in which we assume that we can ignore the spin structure on the rough surface of the step, hence obtaining the equation:

$$\big(M^2-v^2\nabla^2\big)\,|\psi|-v\,|\nabla M|\,|\psi|=0 \qquad (3)$$

Solving this equation, to first approximation and without the spin variables as discussed above, yields:

$$\frac{|\psi(x)|}{|\psi_0|}=\frac{1}{2}+\frac{1}{2}\,e^{-x/\lambda}+\frac{1}{\lambda}\int_0^x \frac{K_0(x'/\lambda)}{\pi}\,dx' \qquad (4)$$

where $\psi_0$ is the asymptotic amplitude of the wavefunction far away from the step, $K_0$ is the zero-order modified Bessel function of the second kind, and $\lambda$ is the characteristic length-scale for variations in the direction perpendicular to the step.
An examination of the above solution shows that $|\psi(x)|$ has a single maximum near the step and then decays exponentially far away from the step. The corresponding LDOS ratio is found to be approximately 1.2, and independent of energy for the wavefunction solution in Eq. 4. Obviously, the total density of states will be built of a combination of wavefunctions that take into account the actual roughness of the surface and the correct spin configuration. While such a calculation is beyond the scope of the present paper, we expect that the shape of the correct solution, i.e. a density of states that peaks close to the step and decays asymptotically far from the step, will be of the same nature theory (). Apart from particular details, within the presented approach the structure of the peak in the LDOS is determined by the bulk properties of the samples, i.e. by the energy scale of the gap, implying that in the low-energy regime (close to the Dirac point) the LDOS profile should be independent of energy. This is certainly the case for the position of the LDOS maximum and the shape of the profile up to a vertical shift. However, it also means that the relative height of the LDOS peak must be independent of energy too. Fig. 4 shows a typical LDOS spectrum away from the step, together with the ratio of peak height to asymptotic level of the LDOS as determined from the experiment as a function of bias voltage. What it shows, though, is that in the exposed region of the Dirac band the ratio is clearly energy dependent. We assume that this dependence is due to an incoherent density of states $\rho_{inc}$, such that $\rho_{tot}=\rho_{coh}+\rho_{inc}$. Assuming that the coherent part of the LDOS behaves as explained above, we can obtain from here an expression for it in terms of measurable quantities only:

$$\rho_{coh}(E,\infty)=\rho_{tot}(E,\infty)\cdot\left(\frac{\alpha_{exp}(E)-1}{\alpha_{max}-1}\right) \qquad (5)$$

Here $\alpha_{max}$ is the relative height the peak would have if there were no incoherent contribution to the LDOS.
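The bound-state profile of Eq. (4) is straightforward to evaluate numerically. The sketch below is stdlib-only Python: $K_0$ is computed from its integral representation, $\lambda$ is set to 1, and the quadrature parameters are illustrative choices of mine, not values from the paper. It reproduces the qualitative behavior stated above: the ratio starts at 1 on the step, overshoots to a single maximum, and relaxes back toward 1 far from the step.

```python
import math

def bessel_k0(z, n=800, t_max=15.0):
    # Modified Bessel function of the second kind, order zero, via its
    # integral representation K0(z) = \int_0^inf exp(-z*cosh(t)) dt
    # (trapezoid rule; t_max and n are rough but adequate here).
    dt = t_max / n
    s = 0.5 * (math.exp(-z) + math.exp(-z * math.cosh(t_max)))
    for i in range(1, n):
        s += math.exp(-z * math.cosh(i * dt))
    return s * dt

def psi_ratio(x, lam=1.0, m=120):
    # Eq. (4): |psi(x)|/|psi0| = 1/2 + (1/2) e^{-x/lam}
    #          + (1/lam) \int_0^x K0(x'/lam)/pi dx'
    # Midpoint rule for the integral, which tolerates the logarithmic
    # singularity of K0 at the origin.
    if x == 0:
        return 1.0
    dx = x / m
    integral = sum(bessel_k0(((i + 0.5) * dx) / lam) / math.pi * dx
                   for i in range(m))
    return 0.5 + 0.5 * math.exp(-x / lam) + integral / lam

# Profile in units of lambda: 1 at the step, a single overshoot, then decay to 1.
profile = [psi_ratio(0.2 * i) for i in range(16)]
```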
Since the latter can only decrease the height of the peak, for specificity we take for $\alpha_{max}$ the maximum observed value of $\alpha_{exp}(E)$. The actual value of $\alpha_{max}$ should be given by the complete theory. The resulting coherent LDOS obtained this way is plotted in Fig. 4 as a function of energy. It is seen immediately that the resulting line is strikingly straight and points exactly to the position where the Dirac point is expected to be located based on the electron-wave analysis presented in alpichshev (), as well as ARPES data on these samples chen (). To justify extrapolating the validity region of Eq. 5 quite far away from the Dirac point, although it is formally applicable only in the vicinity of the Dirac point, we point out that, first of all, the actual criterion is still roughly met at these energies; and second, it is seen from Fig. 3 that for these energies Eq. 4 still provides a good fit, suggesting that the physics involved in obtaining Eq. 4 is still valid. In conclusion, we report on the observation of an accumulation of DOS near an atomic step on the surface of Bi2Te3, identified as a bound state, whose structure is found to be in good quantitative agreement with the theory developed in ref. theory (). We also introduce a technique to extract the coherent part of the DOS, and demonstrate that, unlike the total measured DOS, its shape is in accordance with the expected profile of the DOS of a 2D Dirac band.

###### Acknowledgements.

Discussions with Xiaoliang Qi, Shoucheng Zhang, and especially Srinivas Raghu are greatly appreciated. This work was supported by the Department of Energy Grant DE-AC02-76SF00515.

## References

• (1) Zhanybek Alpichshev, J. G. Analytis, J.-H. Chu, I.R. Fisher, Y.L. Chen, Z.X. Shen, A. Fang, A. Kapitulnik, Phys. Rev. Lett. 104, 016401 (2010). • (2) Zhanybek Alpichshev, Weejee Cho, and Aharon Kapitulnik, "Density of states of a topological insulator in the presence of a step," XXXXXX (2011). • (3) H. Zhang, C.-X. Liu, X.-L. Qi, X. Dai, Z. Fang, and S.-C.
Zhang, Nature Phys 5, 438 (2009). • (4) For a recent review see e.g. Xiaoliang Qi and Shoucheng Zhang, Physics Today 63, 33 (2010). • (5) L. Fu and C.L. Kane, Phys. Rev. B 76, 045302 (2007). • (6) Xiao-Liang Qi, Taylor L. Hughes and Shou-Cheng Zhang, Phys. Rev. B 78, 195424 (2008); Xiao-Liang Qi, Taylor L. Hughes, S. Raghu, and Shou-Cheng Zhang, Phys. Rev. Lett. 102, 187001 (2009). • (7) D. Hsieh, Y. Xia, L. Wray, D. Qian, A. Pal, J. H. Dil, J. Osterwalder, F. Meier, G. Bihlmayer, C. L. Kane, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Science 323, 919 (2009). • (8) Y. L. Chen, J. G. Analytis, J. H. Chu, Z. K. Liu, S. K. Mo, X. L. Qi, H. J. Zhang, D. H. Lu, X. Dai, Z. Fang, S. C. Zhang, I. R. Fisher, Z. Hussain, Z. X. Shen, Science 325, 178 (2009). • (9) D. Hsieh, Y. Xia, D. Qian, L. Wray, J. H. Dil, F. Meier, J. Osterwalder, L. Patthey, A. V. Fedorov, H. Lin, A. Bansil, D. Grauer, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Phys. Rev. Lett. 103, 146401 (2009). • (10) P. Roushan, J. Seo, C. V. Parker, Y. S. Hor, D. Hsieh, D. Qian, A. Richardella, M. Z. Hasan, R. J. Cava and A. Yazdani, Nature 460, 1106 (2009). • (11) Tong Zhang, Peng Cheng, Xi Chen, Jin-Feng Jia, Xucun Ma, Ke He, Lili Wang, Haijun Zhang, Xi Dai, Zhong Fang, Xincheng Xie, and Qi-Kun Xue, Phys. Rev. Lett. 103, 266803 (2009). • (12) Liang Fu, Phys. Rev. Lett. 103, 266801 (2009). • (13) L. E. Shelimova, O. G. Karpinskii, P. P. Konstantinov, E. S. Avilov, M. A. Kretova, and V. S. Zemskov, Inorganic Materials 40, 451 (2004), Translated from Neorganicheskie Materialy 40, 530 (2004). • (14) M. F. Crommie, C. P. Lutz, D. M. Eigler, Nature 363, 524 (1993). • (15) In contrast see Rudro R. Biswas, Alexander V. Balatsky, arXiv:0912.4477 where a “bound state” is related to native oscillations (i.e. unrelated to warped band contour), it produces a depression in the LDOS, and it disperses with energy, none of which we observe. 
• (16) Ying Ran, Yi Zhang, and Ashvin Vishwanath, Nature Physics 5, 298 - 303 (2009).
https://fr.maplesoft.com/support/help/maple/view.aspx?path=StringTools/Metaphone&L=F
Metaphone - Maple Help

StringTools[Metaphone] - the Metaphone function

Calling Sequence

Metaphone( s )

Parameters

s - Maple string

Description

• The Metaphone(s) command implements the Metaphone algorithm.
• The metaphone of a name is a phonetic hash of the consonant sounds in the name. It is designed to be an improvement on the traditional soundex algorithm. Metaphones have the property that words which are pronounced similarly produce the same metaphone, and can thus be used to simplify searches in databases where the pronunciation, but not the spelling, is known.
• While the soundex algorithm hashes words to a four-character string consisting of a letter followed by three digits, the metaphone of a word is a varying-length string consisting entirely of consonants, with the possible exception of a leading vowel.
• All of the StringTools package commands treat strings as (null-terminated) sequences of 8-bit (ASCII) characters. Thus, there is no support for multibyte character encodings, such as Unicode encodings.

Examples

> with(StringTools):
> Metaphone("James");
                                "JM"
> Metaphone("Barb");
                                "BRB"
> Metaphone("Whitting");
                                "WTNK"
> Metaphone("Gauss");
                                "KS"
> Metaphone("Goethe");
                                "K0"
> Metaphone("Ghosh");
                                "KX"
> Metaphone("Kline");
                                "KLN"
> Metaphone("Cline");
                                "KLN"
> Metaphone("Vallis");
                                "FL"
> Metaphone("Fallis");
                                "FL"

References

Philips, Lawrence. "Hanging on the Metaphone." Computer Language, December 1990, pp. 38-44.
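Maple's implementation is not visible here, but the flavor of a consonant-oriented phonetic hash can be sketched in a few lines of Python. This is a toy illustration, not the real Metaphone rule set: the function name and the tiny substitution table are my own, and its outputs agree with Metaphone only on easy cases such as "Barb" and the "Kline"/"Cline" pair shown above.

```python
def toy_phonetic_hash(word):
    """Toy consonant hash in the spirit of Metaphone (NOT the real rules)."""
    word = word.upper().replace("PH", "F")
    # Crude sound-alike substitutions (illustrative, deliberately incomplete)
    subs = {"C": "K", "Q": "K", "V": "F", "Z": "S"}
    out = []
    for i, ch in enumerate(word):
        ch = subs.get(ch, ch)
        if ch in "AEIOU":
            if i == 0:           # keep a leading vowel, as Metaphone does
                out.append(ch)
            continue
        if out and out[-1] == ch:
            continue             # collapse doubled consonants
        out.append(ch)
    return "".join(out)

print(toy_phonetic_hash("Kline"), toy_phonetic_hash("Cline"))  # KLN KLN
print(toy_phonetic_hash("Barb"))                               # BRB
```

Even this crude version shows the key property: differently spelled but similar-sounding names ("Kline", "Cline") hash to the same string.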
https://normgoldblatt.com/find-the-magnitude-of-a-vector-60
# Find the magnitude of a vector

When you try to find the magnitude of a vector, there are often multiple ways to approach it.

## Online calculator. Vector magnitude calculator

If we are given a vector Ā = x î + y ĵ + z k̂, then the magnitude of vector Ā can be calculated using the formula

|Ā| = √(x^2 + y^2 + z^2)

If the starting point and end point of the vector are known instead, subtract the starting coordinates from the end coordinates to obtain the components, then apply the same formula.

## Magnitude and Direction of Vectors

Magnitude: The magnitude of a vector is the length of the vector. The magnitude of v = ⟨a, b⟩ is given by |v| = √(a^2 + b^2).

## Vectors: Forms, Notation, and Formulas

To find the magnitude of a vector in rectangular (component) form, first determine its horizontal and vertical components on their respective number lines.
## Magnitude of a Vector: Definition, Formulas and Problems

The magnitude of a vector is always defined as the length of the vector and is denoted ∥a∥. For a two-dimensional vector a = (a₁, a₂),

||a|| = √(a₁^2 + a₂^2)
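In code, the component formula is a one-liner that works in any dimension. A minimal Python sketch (the function name is mine):

```python
import math

def magnitude(v):
    # |v| = sqrt(v1^2 + v2^2 + ... + vn^2), for any number of components
    return math.sqrt(sum(c * c for c in v))

print(magnitude((3, 4)))     # 5.0  (2-D: sqrt(3^2 + 4^2))
print(magnitude((1, 2, 2)))  # 3.0  (3-D: sqrt(1^2 + 2^2 + 2^2))
```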
https://www.physicsforums.com/threads/bianchis-entropy-result-what-to-ask-what-to-learn-from-it.599812/
# Bianchi's entropy result--what to ask, what to learn from it

1. Apr 24, 2012

### marcus

Bianchi's entropy result--what to ask, what to learn from it

I think most if not all here are familiar with the idea that entropy, by definition, is not an absolute but depends on the observer. (Padmanabhan loves to make that point. :-D) There may also be an explicit scale-dependence. And in the Loop context one expects the Immirzi parameter to run with scale. Likewise black hole horizon temperature is highly dependent on how far away the observer is hovering. So there is this interesting and suggestive nexus of ideas that we need to pick apart and learn something from. Bianchi has just made a significant contribution to this.

http://arxiv.org/abs/1204.5122
Entropy of Non-Extremal Black Holes from Loop Gravity
Eugenio Bianchi
(Submitted on 23 Apr 2012)
We compute the entropy of non-extremal black holes using the quantum dynamics of Loop Gravity. The horizon entropy is finite, scales linearly with the area A, and reproduces the Bekenstein-Hawking expression S = A/4 with the one-fourth coefficient for all values of the Immirzi parameter. The near-horizon geometry of a non-extremal black hole - as seen by a stationary observer - is described by a Rindler horizon. We introduce the notion of a quantum Rindler horizon in the framework of Loop Gravity. The system is described by a quantum surface and the dynamics is generated by the boost Hamiltonian of Lorentzian Spinfoams. We show that the expectation value of the boost Hamiltonian reproduces the local horizon energy of Frodden, Ghosh and Perez. We study the coupling of the geometry of the quantum horizon to a two-level system and show that it thermalizes to the local Unruh temperature. The derived values of the energy and the temperature allow one to compute the thermodynamic entropy of the quantum horizon. The relation with the Spinfoam partition function is discussed.
6 pages, 1 figure

2.
Apr 24, 2012 ### marcus

Bianchi uses the classic Clausius definition of entropy, δS = δE/T, and makes it very clear where the observer is hovering, at what distance from the horizon. So the observer's measurements of E and T depend on that, but the effects cancel and to first order he gets S = A/4.

Earlier treatments of BH entropy did not use the Clausius relation. Instead, they employed state counting. One assumes the observer is low-resolution and can make only the coarsest distinctions. So the more states he confuses with each other, the more entropy. You take the log of the number of states and that's it. Or, if it is a Hilbert space of quantum states, you take the log of the dimension of the Hilbert space.

Ted Jacobson made some critical comments about this in a 2007 paper, which is Bianchi's reference [20] at the end. I would be really interested to know Jacobson's reaction to Bianchi's paper.

http://arxiv.org/abs/0707.4026
Renormalization and black hole entropy in Loop Quantum Gravity
Ted Jacobson
(Submitted on 26 Jul 2007) 7 pages
"Microscopic state counting for a black hole in Loop Quantum Gravity yields a result proportional to horizon area, and inversely proportional to Newton's constant and the Immirzi parameter. It is argued here that before this result can be compared to the Bekenstein-Hawking entropy of a macroscopic black hole, the scale dependence of both Newton's constant and the area must be accounted for. The two entropies could then agree for any value of the Immirzi parameter, if a certain renormalization property holds."

Bianchi also introduces the concept of a "quantum Rindler horizon", which I don't recall being used in earlier Loop BH entropy papers. If you know of an instance, please let me know--I could have simply missed it. Mathematically, the idea of "γ-simple" unitary representations of SL(2,C) is intriguing and could turn out to be a fertile, useful concept.
It was already there, he just found a good terminology, I think, and occasionally in math that can be important.

I wonder if one might conclude that the bare value of the Immirzi is 0.2375. In many papers that study the long-distance limit they let gamma go to zero, meaning that the region stays the same size but its geometry gets less fuzzy. Less "rumpled", like an unmade bed is rumpled. Then gamma = 0.2375 would represent the maximal rumpling of nature. Just speculating.

It's classy to use the Clausius definition of entropy. In my humble opinion, if you ever want a beard this is the kind to have: http://en.wikipedia.org/wiki/Rudolf_Clausius (1822-1888). Under no circumstances do you want one like this: http://en.wikipedia.org/wiki/Ludwig_Boltzmann

Last edited: Apr 24, 2012

3. Apr 24, 2012 ### atyy

If there's no state counting, isn't this just a semiclassical calculation, like Hawking's?

4. Apr 24, 2012 ### Physics Monkey

What intrigues me is the appearance of the boost generator. I would like to understand this operator better in the loop gravity context. The connection between Rindler horizons and the boost generator is long known, but I have a lot of interest in this topic because we have recently been able to put this connection to good use in condensed matter.

Of course, all this spin network stuff reminds me of my old pal tensor networks, and I wonder if there is some grand synthesis (involving tensor networks, entanglement, holography, ...) possible here.

5. Apr 24, 2012 ### atyy

How?

6. Apr 25, 2012 ### Paulibus

7. Apr 25, 2012 ### francesca

No, it's not semiclassical. In fact, in the paper all the ingredients are derived from the full quantum theory.
The relation between energy and area was found by Frodden, Ghosh and Perez using the Einstein equations; here it is found using the boost generator given by Spinfoam theory. The calculation to find the Unruh temperature is done here, again using the boost generator; it's completely new. And finally there is the remarkable demonstration that the Spinfoam amplitude implies the right distribution, which yields the Hawking entropy.

8. Apr 25, 2012 ### Physics Monkey

Having read the paper a little more closely, I have some basic confusion about what is going on:

1. Although Bianchi claims that E and A don't commute, it looks like on the image under $Y_\gamma$ of the spatial spin network states they are essentially identical. This seems to be so because of the $\gamma$-simple constraint, Eq. 6.
2. Related to 1, in what sense can an eigenstate of energy and area possibly have an entropy?
3. Everything looks like a product state over facets, but I would expect entropy and thermalization to be associated with some interactions between facets.
4. Is there a $\rho$ for which $S = - \text{tr}(\rho \log{\rho})$?
5. What is the physical state space? Is it the finite spin network basis (given a set of punctures)? Surely the continuous space of SL(2,C) representations are not the physical states?

I should say that I haven't yet processed the temperature derivation section, although it looks like a standard Unruh-type setup. Perhaps some of the answers can be found there, but many of these issues seem more basic, as if they should be understood before tackling the issue of temperature.

9. Apr 25, 2012 ### Physics Monkey

I am interested in the spectrum of the reduced density matrix of spatial regions inside bulk materials. This spectrum knows a lot about entanglement, e.g. the entanglement entropy is computable from it.
If $\rho_R = \exp{(-H_R)}$ is the reduced density matrix of region R in the ground state, then the spectrum of $H_R$ (defined by this equation) is the entanglement spectrum. It is an old result in Lorentz invariant field theory that when R is the half space, say x>0, then $H_R = 2 \pi K$ with $K$ the boost generator mixing x and t. Thus for LI field theory we know the entanglement spectrum for a special subregion, the half space. The form of the operator $K$ is $K = \int_{x>0} dx\, dx^2 \cdots dx^d \left( x\, T^{tt} \right)$ (at t=0), and hence it looks like the physical Hamiltonian with an edge.

We used this to show that in many cases the entanglement spectrum shares many universal features with the energy spectrum of a physical edge. In other words, the imaginary entanglement cut becomes a real physical cut in the system.

A simple example is provided by the fractional quantum Hall effect. In that case a physical edge always has a chiral edge mode circulating around the sample. Using the technology above we were able to show that the entanglement spectrum also has this chiral edge mode. So even in a system with no boundary you can, by looking at entanglement, detect the existence of protected chiral edge states.

10. Apr 25, 2012 ### atyy

But isn't the action at the end a semiclassical one?

11. Apr 25, 2012 ### marcus

The main result(s) of the paper are proved in the first 4 pages, up through the section called *Entropy of the Quantum Horizon*. You must be talking about some action that appears in pages 1-4, but I can't figure out which. There is the section on page 5 which I see as kind of a postscript. It contains some interesting reflections and points to some future work (a paper which Wieland and Bianchi have in the works.)
But that is not essential to the main work of the paper; it's more interpretive afterthought, and it does mention something that occurs in the "semiclassical limit of the Spinfoam path integral..." But that hardly means that the whole paper is proving things only at the semiclassical level (which is what some of your earlier comments seemed to be suggesting.)

12. Apr 25, 2012 ### Physics Monkey

A simpler reason to worry about semiclassicality is found in the early pages, especially after Eq. 8 and Eq. 9. There Bianchi makes heavy use of the classical results to identify the right operator to call the "energy" of the horizon. One could worry in the usual way that this identification is semiclassical. For example, will the quantum hair proposal of Ghosh-Perez be captured by these identifications?

13. Apr 25, 2012 ### fzero

The Clausius relation is classical. The calculation done here is semiclassical because $\delta E$ corresponds to the addition of a single quantum of energy. In this regard, it's not that different from Hawking's approach. A fully quantum treatment must involve the counting of microstates.

In fact, the derivation of the energy of the black hole is quite confusing from entropy considerations. Bianchi says that the Rindler surface is described by the state
$$|s\rangle = \otimes_f | j_f \rangle,$$
which results from a tessellation into the facets $f$. But this is a pure state and should have zero information-theoretic entropy. I'm not sure if it even makes sense to talk about other tessellations in this framework, but from the statistical point of view, one would want a mixed state obtained by summing over tessellations. The black hole should then turn out to be a maximal entropy configuration.

14.
Apr 25, 2012 ### marcus

I've been appreciating your comments, since you know a lot about this. I'm glad you took an interest and read the paper. Part of the confusion could be due to problems with notation. I think of what we have now as a draft to which more explanation could be added.

I could be wrong, but I don't think it says "E and A don't commute". The OPERATORS for energy and area are denoted H and A, are they not? The letter E seems to denote a quantity. At one point he says E = <s|H|s>, so as a quantity it would commute with everything, I suppose. The energy operator H is defined by eqn (8) and seems to be composed of boost pieces. The area operator seems to be composed of rotation pieces. Correct me if I'm wrong: these don't commute as operators, do they? Equation (6) just says they have the same matrix element form. Let me know if I'm saying something really stupid.

So anyway, I think on page 3, middle of the first column, where he says "the energy does not commute with the area of the quantum horizon", what he means is "H and A don't commute." Is this right? You are by far the expert in this context.

15. Apr 25, 2012 ### fzero

How is the operator $|\vec{L}_f|$ defined? I understand the subscript, but is it supposed to be the operator whose eigenvalue is the square root of that of $|\vec{L}_f|^2$? How do we find $j_f$ instead of $\sqrt{j_f(j_f+1)}$ (or has the limit of large $j_f$ been taken?) In any case, the states $|E\rangle$ defined below (9) are simultaneous eigenstates of $|L|, L_z$ and $K_z$, so $H$ and $A$ commute on them as operators.

16. Apr 25, 2012 ### marcus

Just to be clear, what I said in post #14 was in reply to this of Physics Monkey's. Physics Monkey also asked about the physical state space.
It may help us to better understand the paper, and even some of the notation, if we read the first paragraph, where he refers to an earlier paper of his about black hole entropy. This is his reference [4].

==quote first paragraph==
Loop Gravity [2] has been shown to provide a geometric explanation of the finiteness of the entropy and of the proportionality to the area of the horizon [3]. The microstates are quantum geometries of the horizon [4]. What has been missing until recently is the identification of the near-horizon quantum dynamics and a derivation of the universal form of the Bekenstein-Hawking entropy with its 1/4 prefactor. This is achieved in this letter.
==endquote==

Here is [4]:
http://arxiv.org/abs/1011.5628
Black Hole Entropy, Loop Gravity, and Polymer Physics
Eugenio Bianchi
(Submitted on 25 Nov 2010)
"Loop Gravity provides a microscopic derivation of Black Hole entropy. In this paper, I show that the microstates counted admit a semiclassical description in terms of shapes of a tessellated horizon. The counting of microstates and the computation of the entropy can be done via a mapping to an equivalent statistical mechanical problem: the counting of conformations of a closed polymer chain. This correspondence suggests a number of intriguing relations between the thermodynamics of Black Holes and the physics of polymers."
13 pages, 2 figures

This was a year and a half ago and employed an entirely different method, namely (semiclassical) state counting. But some of the notation and definitions undoubtedly overlap, so this paper might be of use.

Last edited: Apr 25, 2012

17. Apr 25, 2012 ### fzero

In that paper, he uses $A_f \sim \sqrt{j_f(j_f+1)}$ and the entropy still has the coefficient of $\gamma$. The troublesome thing is that, in the new paper,
$$E \sim \sum_f j_f$$
is not proportional to
$$A \sim \sum_f \sqrt{j_f(j_f+1)}.$$
Using the Clausius relation gives a correction to the area law.
At first order, the correction is proportional to $N$, the number of facets. The 2010 paper, if it applies here, suggests in eq (19) that $N \sim A$, so this corrects the coefficient of the leading term (away from 1/4).

Edit: there is a mistake in the estimate above, see https://www.physicsforums.com/showpost.php?p=3884283&postcount=22

Last edited: Apr 25, 2012

18. Apr 25, 2012 ### atyy

That's fascinating. In the original case of a Lorentz invariant field theory on the half space, are there also "edge states"?

19. Apr 25, 2012 ### Physics Monkey

I didn't even notice this at first, but it looks like Bianchi is either doing the large-$j$ limit or made an important mistake?

20. Apr 25, 2012 ### Physics Monkey

There certainly can be. Not all Lorentz invariant theories have protected physical edge states on a half space, but we showed that if they do, then the half-space entanglement spectrum (with no physical edge) also has the universal aspects of these physical edge states.
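fzero's point that $\sum_f j_f$ and $\sum_f \sqrt{j_f(j_f+1)}$ differ by a term of order $N$ is easy to check numerically, since $\sqrt{j(j+1)} \approx j + \tfrac12$ for large $j$. A quick sketch in Python (the spin values below are invented for illustration, not taken from the paper):

```python
import math

# Illustrative facet spins j = 1/2, 1, ..., 5, repeated to give N = 1000 facets.
# (These values are made up for the check; they are not from Bianchi's paper.)
spins = [k / 2 for k in range(1, 11)] * 100
N = len(spins)

E_like = sum(spins)                                   # behaves like sum_f j_f
A_like = sum(math.sqrt(j * (j + 1)) for j in spins)   # like sum_f sqrt(j_f(j_f+1))

# sqrt(j(j+1)) = j + 1/2 - 1/(8j) + ..., so the gap between the two sums
# grows like N/2, i.e. proportionally to the number of facets.
gap = A_like - E_like
print(gap / N)  # a bit under 0.5 for these spins
```

So the two sums are not proportional to each other, and the difference per facet approaches 1/2 as the spins grow, which is exactly the $N$-proportional correction discussed above.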
http://blog.kleinproject.org/?p=2011
# Calculators, Power Series and Chebyshev Polynomials

Originating author is Graeme Cohen.

Of all the familiar functions, such as trigonometric, exponential and logarithmic functions, surely the simplest to evaluate are polynomial functions. The purposes of this article are, first, to introduce the concept of a power series, which can be thought of as a polynomial function of infinite degree, and, second, to show their application to evaluating functions on a calculator. When a calculator gives values of trigonometric or exponential or logarithmic functions, the most straightforward way is to evaluate polynomial functions obtained by truncating power series that represent those functions and are sufficiently good approximations. But there are often better ways. We will, in particular, deduce a power series for $\sin x$ and will see how to improve on the straightforward approach to approximating its values. That will involve Chebyshev polynomials, which are used in many ways for a similar purpose and in many other applications, as well. (For trigonometric functions, the Cordic algorithm is in fact often the preferred method of evaluation—the subject of another article here, perhaps.) In the spirit of Felix Klein, there will be some reliance on a graphical approach. Other than that, we need only some basic trigonometry and calculus.

## Manipulations with geometric series

The geometric series $1 + x + x^2 + x^3 + \cdots$ is the simplest power series. The sum of the series exists when $|x| < 1$. In fact,

$$1 + x + x^2 + x^3 + \cdots = \frac{1}{1-x}, \qquad |x| < 1. \qquad (1)$$

The general form of a power series is

$$a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots,$$

so the geometric series above is a power series in which all the coefficients $a_n$ are equal to 1. In this case, since the series converges to $\frac{1}{1-x}$ when $|x| < 1$, we say that the function $f$, where $f(x) = \frac{1}{1-x}$, has the series expansion $1 + x + x^2 + \cdots$, or that $f$ is represented by this series. We are interested initially to show some other functions that can be represented by power series. Many such functions may be obtained directly from the result in (1).
For example, by replacing $x$ by $-x$, we immediately have a series representation for the function $\frac{1}{1+x}$:

$$\frac{1}{1+x} = 1 - x + x^2 - x^3 + \cdots, \qquad |x| < 1. \qquad (2)$$

We can differentiate both sides of (1) to give a series representation of the function $\frac{1}{(1-x)^2}$:

$$\frac{1}{(1-x)^2} = 1 + 2x + 3x^2 + 4x^3 + \cdots, \qquad |x| < 1.$$

We can also integrate both sides of (1). Multiply through by $-1$ (for convenience), then write $t$ for $x$ and integrate with respect to $t$ from 0 to $x$, where $|x| < 1$:

$$-\int_0^x \frac{dt}{1-t} = -\int_0^x \left(1 + t + t^2 + \cdots\right) dt,$$

so

$$\ln(1-x) = -\left(x + \frac{x^2}{2} + \frac{x^3}{3} + \cdots\right).$$

So this gives a series representation of the function $\ln(1-x)$ for $|x| < 1$. In the same way, from (2),

$$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots. \qquad (3)$$

Much of what we have done here (and will do later) requires justification, but we can leave that to the textbooks.

## The power series for the sine function

We will show next how to find a power series representation for $\sin x$. In general terms, we can write

$$\sin x = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots. \qquad (4)$$

Put $x = 0$, and immediately we have $a_0 = 0$. Differentiate both sides of (4):

$$\cos x = a_1 + 2a_2 x + 3a_3 x^2 + \cdots.$$

Again put $x = 0$, giving $a_1 = 1$. Keep differentiating and putting $x = 0$:

$$-\sin x = 2a_2 + 6a_3 x + \cdots \ \Rightarrow\ a_2 = 0, \qquad -\cos x = 6a_3 + 24a_4 x + \cdots \ \Rightarrow\ a_3 = -\frac{1}{3!}.$$

In this way, we can find a formula for all the coefficients $a_n$, namely,

$$a_{2n} = 0, \qquad a_{2n+1} = \frac{(-1)^n}{(2n+1)!},$$

for $n = 0, 1, 2, \ldots$. (The coefficients of even index and those of odd index are specified separately.) Thus

$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots.$$

This is the power series representation that we were after. From the way we developed it, it is reasonable that the series will represent $\sin x$ for values of $x$ at and near 0 (as for all the earlier examples), so it is surprising to know that it can be shown that the series represents $\sin x$ for all values of $x$. Then partial sums of the series, obtained by stopping after some finite number of terms, should give polynomial functions that can be used to find approximate values of the sine function, such as you find in tables of trigonometric functions or as output on a calculator.

## Approximation by Chebyshev Polynomials

For example, write

$$P_3(x) = x - \frac{x^3}{3!}, \qquad P_5(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!}.$$

The cubic polynomial $P_3$ and the quintic polynomial $P_5$ are plotted below, along with $\sin x$, all for $-1 \le x \le 1$. It can be seen that these are both very good approximations for $x$ near 0, say, but not so good near $x = \pm 1$. The quintic $P_5$ is much better than the cubic $P_3$ in these outer regions, as might be expected, but can we do better than $P_3$ with some other cubic polynomial function?
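The behaviour of the two truncations can also be checked numerically. A short sketch (Python; not part of the original article), measuring the worst-case error of each partial sum over a fine grid on $[-1, 1]$:

```python
import math

def P3(x):
    """Cubic partial sum of the sine series."""
    return x - x**3 / 6

def P5(x):
    """Quintic partial sum of the sine series."""
    return x - x**3 / 6 + x**5 / 120

# Worst-case errors over [-1, 1], sampled on a fine grid.
xs = [k / 1000 for k in range(-1000, 1001)]
err3 = max(abs(math.sin(x) - P3(x)) for x in xs)
err5 = max(abs(math.sin(x) - P5(x)) for x in xs)
print(err3, err5)  # roughly 0.008 and 0.0002, both attained at x = ±1
```

The errors are largest at the endpoints, confirming what the plots show: both truncations are excellent near 0 and worst near $x = \pm 1$.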
When $x = 1$, for example, the error in using the cubic is $\sin 1 - P_3(1) \approx 0.008$. We will construct a cubic polynomial function $C$ whose values differ from those of $\sin x$ by less than 0.001 for $-1 \le x \le 1$. The curve $y = C(x)$ has been included in the graph below for $-1 \le x \le 1$, and it is clear from the graph that this curve is closer to that of $\sin x$ than $y = P_3(x)$ is, even near $x = \pm 1$.

We will use Chebyshev polynomials to construct $C$. These are used extensively in approximation problems, as we are doing here. They are the functions $T_n$ given by

$$T_n(x) = \cos n\theta, \quad \text{where } x = \cos\theta, \qquad (5)$$

for integer $n \ge 0$ (or you can write $T_n(x) = \cos(n \cos^{-1} x)$). By properties of the cosine, they all have domain $[-1, 1]$ and their range is also in $[-1, 1]$. Putting $n = 0$ and $n = 1$ gives

$$T_0(x) = 1, \qquad T_1(x) = x,$$

but it is not immediately apparent that the $T_n$ are indeed polynomials for $n \ge 2$. To see that this is the case, recall that

$$\cos(n+1)\theta + \cos(n-1)\theta = 2\cos\theta\cos n\theta.$$

Therefore,

$$T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x). \qquad (6)$$

Now put $n = 1, 2, 3, \ldots$ and obtain

$$T_2(x) = 2x^2 - 1, \qquad T_3(x) = 4x^3 - 3x, \qquad T_4(x) = 8x^4 - 8x^2 + 1, \qquad T_5(x) = 16x^5 - 20x^3 + 5x,$$

and so on, clearly obtaining a polynomial each time. As polynomials, we no longer need to think of their domains as restricted to $[-1, 1]$.

Returning to our problem of approximating $\sin x$ for $-1 \le x \le 1$ with error less than 0.001, we notice first that the quintic $P_5$ has that property. In fact, the theory of alternating infinite series shows that

$$|\sin x - P_5(x)| \le \frac{1}{7!} < 0.0002 \qquad (7)$$

throughout our interval, as certainly seems reasonable from the figure.

We next express $P_5$ in terms of Chebyshev polynomials. Using (5) and (6), we have

$$x = T_1(x), \qquad x^3 = \frac{1}{4}\bigl(3T_1(x) + T_3(x)\bigr), \qquad x^5 = \frac{1}{16}\bigl(10T_1(x) + 5T_3(x) + T_5(x)\bigr),$$

so

$$P_5(x) = \frac{169}{192}T_1(x) - \frac{5}{128}T_3(x) + \frac{1}{1920}T_5(x).$$

Since $|T_5(x)| \le 1$ when $|x| \le 1$, omitting the term $\frac{1}{1920}T_5(x)$ will admit a further error of at most $\frac{1}{1920} < 0.0006$ which, using (7), gives a total error less than 0.0008, still within our bound of 0.001. Now,

$$\frac{169}{192}T_1(x) - \frac{5}{128}T_3(x) = \frac{169}{192}x - \frac{5}{128}(4x^3 - 3x) = \frac{383}{384}x - \frac{5}{32}x^3,$$

and the cubic function we end with is the function we called $C$. We have thus shown, partly through a graphical justification, that values of the cubic function $C$, where

$$C(x) = \frac{383}{384}x - \frac{5}{32}x^3,$$

are closer to values of $\sin x$, for $-1 \le x \le 1$, than are values of the cubic function $P_3$, which is obtained from the power series representation of $\sin x$.

## Conclusion: The Point of the Story

The effectiveness of Chebyshev polynomials for our purpose here arises in part from a property of the cosine function. It implies that for all integers $n \ge 0$ and for $-1 \le x \le 1$, we have $|T_n(x)| \le 1$.
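Carrying out the economization numerically confirms the error bound. In the sketch below (Python; not part of the original article), the coefficients $\frac{383}{384}$ and $\frac{5}{32}$ come from collecting the $T_1$ and $T_3$ terms of $P_5$ and discarding the small $T_5$ term:

```python
import math

def P3(x):
    """Cubic truncation of the sine power series."""
    return x - x**3 / 6

def C(x):
    """Economized cubic: P5 rewritten in the Chebyshev basis, T5 term dropped."""
    return (383 / 384) * x - (5 / 32) * x**3

# Worst-case errors over [-1, 1], sampled on a fine grid.
xs = [k / 1000 for k in range(-1000, 1001)]
errP3 = max(abs(math.sin(x) - P3(x)) for x in xs)
errC = max(abs(math.sin(x) - C(x)) for x in xs)
print(errP3, errC)
```

The economized cubic stays within the 0.001 target over the whole interval, while the power-series cubic misses it by nearly an order of magnitude at the endpoints.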
Pafnuty Lvovich Chebyshev, whose surname is often given alternatively as Tchebycheff, was Russian; he introduced these polynomials in a paper in 1854. They are denoted by $T_n$ because of the spelling Tchebycheff. The procedure outlined above is known as economization of power series and is studied in the branch of mathematics known as numerical analysis.

Economization is not always necessary for evaluating the sine function. Because $\sin x$ is approximately $x$ when $x$ is small, this is often good enough! See Chuck Allison, Machine Computation of the Sine Function, for more on this. We mentioned at the beginning that the Cordic algorithm is often better still, but for evaluating other functions on a calculator, particularly $\ln(1+x)$, which has the slowly converging power series expansion found in (3), economization is considered to be essential.

Perhaps the point of the story is that the obvious can very often be improved upon.

This post is also available in: German, Arabic

This entry was posted in Mathematics Within the Last 100 Years.

1. Cor Fortgens says:

The title suggests that this vignette actually also digs into how (different) calculators are actually doing their calculations. However, this is not the case. Perhaps it would be of interest to compare the presented theory with methods that are actually being used in HP, TI, Casio, ... calculators.
http://math.stackexchange.com/questions/294353/a-question-from-modular-arithmetic
# A question from modular arithmetic

Given $a, b \in \mathbb{Z}_N^*$ for some composite positive integer $N$, let the bit sizes of $a$, $b$, $N$ be $a_N$, $b_N$, $N_N$ respectively, with

$a_N = N_N$ or $N_N - 1$, $\quad a < N$, $\quad \gcd(a, N) = 1$,
$b_N = N_N$ or $N_N - 1$, $\quad b < N$, $\quad \gcd(b, N) = 1$.

Select $t$ with $\gcd(t, N) = 1$ and $0 < t < N$ such that, if $at \equiv u \pmod N$ and $bt \equiv v \pmod N$, then the output pair $(u, v)$ has bit sizes $0 < u_N < \frac{N_N}{4}$ and $0 < v_N < \frac{N_N}{2}$ respectively. Any other values are not accepted.

The attempt I tried: since $a, b$ are given and $t$ can be any value within $N$, I took a value of $u$ satisfying the condition $0 < u_N < \frac{N_N}{4}$ to get $t$, i.e., $t \equiv a^{-1}u \pmod N$, then substituted it in the second congruence $bt \equiv v \pmod N$ to get $v \equiv a^{-1}ub \pmod N$. But it seems this relation does not guarantee the size of $v_N$ to be within $0 < v_N < \frac{N_N}{2}$ for every $u$ selected with $0 < u_N < \frac{N_N}{4}$. I know I made a mistake somewhere but am unable to recognize it.

So the question is: is there any possible pair $(u, v)$ that satisfies the given congruences and bit relations? If one exists, is there an easy way to find $v$ for the corresponding valid $u$? Please suggest an idea to solve the problem.

- The problem is not clear. What is given? What is to be determined? It looks like $b$, $t$, and $N$ are given, in which case $v$, which is $bt$ reduced modulo $N$, is completely determined, and there is no reason to think it will be small compared to $N$. – Gerry Myerson Feb 4 '13 at 9:14

@Gerry Myerson: yes, $v$ compared to $N$ is small, but my condition is that $t$ should be such that the bit size of $v$ is in the range $0 < v_N < \frac{N_N}{2}$ and $at \equiv u \pmod N$ where $0 < u_N < \frac{N_N}{4}$.
If you carefully observe, though the values of $u, a, b, t, N$ are already determined, they have to be selected carefully, such that all the conditions are satisfied. – smslce Feb 4 '13 at 9:24

Given $a$, $b$, and $N$, you want $t$ such that (roughly speaking) $at$ reduced modulo $N$ is less than $\root4\of N$ and $bt$ reduced modulo $N$ is less than $\sqrt N$. I don't think this is always possible. I think there's a theorem of Thue that lets you get both of them down to $\sqrt N$, but that's it. By the way, your first comment broke the formatting. I'd recommend deleting it. – Gerry Myerson Feb 4 '13 at 12:04

The Thue theorem isn't exactly as I remembered it but may still be helpful. See math.uga.edu/~pete/Brauer-Reynolds51.pdf – Gerry Myerson Feb 4 '13 at 12:10
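As the comments indicate, such a $t$ need not exist at all for a given $(a, b, N)$, and there are only about $2^{N_N/4} \cdot 2^{N_N/2} / N \approx N^{-1/4}$-scale odds per candidate; for small moduli one can simply test every candidate. A brute-force sketch (Python; the numbers below are constructed so that a solution happens to exist, they are illustrative only and the search is hopeless at cryptographic sizes):

```python
from math import gcd

def find_t(a, b, N):
    """Search for t coprime to N such that u = a*t mod N has fewer than
    N_bits/4 bits and v = b*t mod N has fewer than N_bits/2 bits.
    Returns (t, u, v), or None when no such t exists."""
    nbits = N.bit_length()
    for t in range(1, N):
        if gcd(t, N) != 1:
            continue
        u = a * t % N
        v = b * t % N
        if 0 < u.bit_length() < nbits / 4 and 0 < v.bit_length() < nbits / 2:
            return t, u, v
    return None

# Small composite modulus N = 61 * 53 (12 bits); a and b were back-computed
# from a known solution, so the search succeeds here.
N = 3233
a, b = 1386, 2774
result = find_t(a, b, N)
print(result)  # (7, 3, 20): u = 3 has 2 < 3 bits, v = 20 has 5 < 6 bits
```

Exhaustive search is exponential in the bit size of $N$, which is why the comments point toward Thue-type lattice results rather than a direct algorithm.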