https://cstheory.stackexchange.com/questions/26003/what-if-an-mathsf-l-complete-problem-has-mathsfnc1-circuits-more-gener
# What if an $\mathsf L$-complete problem has $\mathsf{NC}^1$ circuits? More generally, what evidence is there against $\mathsf{NC}^1=\mathsf{L}$?

Edit: let me reformulate the question in a more specific way (and change the title accordingly). A slightly edited version of the original question follows.

Is there a result comparable to the Karp-Lipton theorem starting from the assumption $L\in\mathsf{NC}^1/\mathsf{poly}$, with $L$ an $\mathsf L$-complete language (under, say, $\mathsf{AC}^0$ reductions)? By "comparable" I mean that it derives non-trivial consequences (perhaps considered unlikely) from the assumption.

I am having trouble finding in the literature (or online) a discussion of the relationship between $\mathsf{NC}^1$ and logarithmic space going beyond "the former is included in the latter and it is open whether the inclusion is strict". More specifically, I could find only three pieces of evidence against equality of the two classes:

• Barrington's theorem, together with the characterization of non-uniform deterministic logspace in terms of branching programs, gives us $\mathsf{NC}^1/\mathsf{poly}=\mathsf{L}/\mathsf{poly}$ iff bounded-width polysize branching programs are as expressive as arbitrary polysize branching programs, which I guess would be highly surprising (especially considering how surprising Barrington's theorem itself is).

• Markus Holzer [Hol02] proved that $\mathsf{NC}^1/\mathsf{poly}=\mathsf{L}/\mathsf{poly}$ iff one-head two-way non-uniform deterministic finite automata have the same expressive power whether they are oblivious or not ("oblivious" means that the movement of the head during the computation depends only on the length of the input, not on the input itself). Oblivious polytime Turing machines do have the same power as non-oblivious ones, but I guess it is hard to see how that simulation may be done in the much more restricted framework of finite automata.

• Edit: there is a paper by Allender et al. [ABCDR09] in which a number of reachability problems for certain classes of graphs are shown to be hard for $\mathsf{NC}^1$ under $\mathsf{AC}^0$ reductions, whereas the same problems are not known to be hard for $\mathsf{L}$. As stated by the authors, "this gives a cluster of natural problems that are candidates for having complexity intermediate between $\mathsf{NC}^1$ and $\mathsf{L}$".

Besides the above points and the usual empirical evidence ($\mathsf{L}$-complete problems do not seem to have logarithmic-depth bounded fan-in circuits), is there any other evidence against $\mathsf{NC}^1=\mathsf{L}$?

[Hol02] Markus Holzer. Multi-head finite automata: data-independent versus data-dependent computations. Theor. Comput. Sci. 286(1):97–116 (2002).

[ABCDR09] Eric Allender, David A. Mix Barrington, Tanmoy Chakraborty, Samir Datta and Sambuddha Roy. Planar and Grid Graph Reachability Problems. Theory Comput. Syst. 45(4):675–723 (2009).

• AP = PSPACE is no more evidence for ALOGTIME = L than PSPACE = NPSPACE is evidence for L = NL. That is, it is not evidence at all. The simulation of deterministic space by alternating time squares the bound, so the proper analogue of the "polynomial and higher cases" is APOLYLOGTIME = POLYLOGSPACE. – Emil Jeřábek Oct 9 '14 at 10:58

• @EmilJeřábek: yes, that's what I meant by "this is hardly evidence". I guess you're saying that it's pointless to even mention it. I will edit the question. – Damiano Mazza Oct 9 '14 at 14:05

• Ah, all right. I'm not saying it is pointless to mention it, but it wasn't clear what you meant by the remark. – Emil Jeřábek Oct 10 '14 at 13:54

• FYI, the algebraic analogue is the formula size of det, for which the best known upper bound is currently quasi-poly. (Ben-Or & Cleve showed poly-size width-3 ABPs are equivalent to poly-size formulas; it is well-known that arbitrary poly-size ABPs are equivalent to poly-size projections of the determinant.) Det having poly-size formulas would not only be surprising, but would likely also have consequences in proof complexity (e.g. whether the Hard Matrix Identities are provable in Frege, rather than $\mathsf{NC}^2$-Frege). Of course, none of this is "evidence" analogous to Karp-Lipton... – Joshua Grochow Dec 30 '17 at 3:49

• Thank you for your comment! It makes me realize how much I don't know about algebraic complexity... – Damiano Mazza Dec 31 '17 at 16:05
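To make the branching-program bullet concrete, here is a minimal Python sketch of the commutator trick behind Barrington's theorem: a width-5 permutation branching program computing the AND of two bits. The specific pair of 5-cycles is one valid choice among many; the full theorem recurses this trick over an $\mathsf{NC}^1$ circuit.

```python
# Width-5 permutation branching program for AND(a, b), after Barrington.
# A permutation on {0,...,4} is a tuple p, where p[i] is the image of i.

def compose(p, q):
    """Permutation product: first apply q, then p."""
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

IDENTITY = (0, 1, 2, 3, 4)
SIGMA = (1, 2, 3, 4, 0)  # the 5-cycle (0 1 2 3 4)
TAU = (2, 0, 4, 1, 3)    # a 5-cycle whose commutator with SIGMA is a 5-cycle

def and_program(a, b):
    """Evaluate the 4-instruction program sigma^a tau^b sigma^-a tau^-b."""
    instructions = [
        SIGMA if a else IDENTITY,          # queries bit a
        TAU if b else IDENTITY,            # queries bit b
        inverse(SIGMA) if a else IDENTITY, # queries bit a again
        inverse(TAU) if b else IDENTITY,   # queries bit b again
    ]
    result = IDENTITY
    for g in instructions:
        result = compose(result, g)
    return result

# The program yields the identity unless a = b = 1, in which case it yields
# the commutator of SIGMA and TAU, itself a 5-cycle; "!= IDENTITY" decides AND.
for a in (0, 1):
    for b in (0, 1):
        assert (and_program(a, b) != IDENTITY) == bool(a and b)
print("AND(1,1) ->", and_program(1, 1))
```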
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8682978749275208, "perplexity": 1004.7174925993218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998100.52/warc/CC-MAIN-20190616102719-20190616124719-00253.warc.gz"}
https://www.arxiv-vanity.com/papers/1806.07043/
# Production Rate Measurement of Tritium and Other Cosmogenic Isotopes in Germanium with CDMSlite R. Agnese T. Aralis T. Aramaki I.J. Arnquist E. Azadbakht W. Baker S. Banik D. Barker D.A. Bauer T. Binder M.A. Bowles P.L. Brink R. Bunker B. Cabrera R. Calkins C. Cartaro D.G. Cerdeño Y.-Y. Chang J. Cooley B. Cornell P. Cushman T. Doughty E. Fascione E. Figueroa-Feliciano C.W. Fink M. Fritts G. Gerbier R. Germond M. Ghaith S.R. Golwala H.R. Harris Z. Hong E.W. Hoppe L. Hsu M.E. Huber V. Iyer D. Jardin A. Jastram C. Jena M.H. Kelsey A. Kennedy A. Kubik N.A. Kurinsky R.E. Lawrence B. Loer E. Lopez Asamar P. Lukens D. MacDonell R. Mahapatra V. Mandic N. Mast E. Miller N. Mirabolfathi B. Mohanty J.D. Morales Mendoza J. Nelson J.L. Orrell S.M. Oser W.A. Page R. Partridge M. Pepin F. Ponce S. Poudel M. Pyle H. Qiu W. Rau A. Reisetter R. Ren T. Reynolds A. Roberts A.E. Robinson H.E. Rogers T. Saab B. Sadoulet J. Sander A. Scarff R.W. Schnee S. Scorza K. Senapati B. Serfass D. Speller M. Stein J. Street H.A. Tanaka D. Toback R. Underwood A.N. Villano B. von Krosigk S.L. Watkins J.S. Wilson M.J. Wilson J. Winchell D.H. Wright S. Yellin B.A. Young X. Zhang X. Zhao Department of Physics, University of Florida, Gainesville, FL 32611, USA Division of Physics, Mathematics, & Astronomy, California Institute of Technology, Pasadena, CA 91125, USA SLAC National Accelerator Laboratory/Kavli Institute for Particle Astrophysics and Cosmology, Menlo Park, CA 94025, USA Pacific Northwest National Laboratory, Richland, WA 99352, USA Department of Physics and Astronomy, and the Mitchell Institute for Fundamental Physics and Astronomy, Texas A&M University, College Station, TX 77843, USA School of Physical Sciences, National Institute of Science Education and Research, HBNI, Jatni - 752050, India School of Physics & Astronomy, University of Minnesota, Minneapolis, MN 55455, USA Fermi National Accelerator Laboratory, Batavia, IL 60510, USA Department of Physics, University of South Dakota, Vermillion, SD 57069, USA Department of Physics, South Dakota School of Mines and Technology, Rapid City, SD 57701, USA Department of Physics, Stanford University, Stanford, CA 94305, USA Department of Physics, Southern Methodist University, Dallas, TX 75275, USA Department of Physics, Durham University, Durham DH1 3LE, UK Instituto de Física Teórica UAM/CSIC, Universidad Autónoma de Madrid, 28049 Madrid, Spain Department of Physics, Queen’s University, Kingston, ON K7L 3N6, Canada Department of Physics, University of California, Berkeley, CA 94720, USA Department of Physics & Astronomy, Northwestern University, Evanston, IL 60208-3112, USA Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA Department of Physics, University of Colorado Denver, Denver, CO 80217, USA Department of Electrical Engineering, University of Colorado Denver, Denver, CO 80217, USA Department of Physics & Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1, Canada TRIUMF, Vancouver, BC V6T 2A3, Canada Department of Physics, University of Evansville, Evansville, IN 47722, USA Département de Physique, Université de Montréal, Montréal, Québec H3C 3J7, Canada Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA SNOLAB, Creighton Mine #9, 1039 Regional Road 24, Sudbury, ON P3Y 1N2, Canada Department of Physics, University of Toronto, Toronto, ON M5S 1A7, Canada Department of Physics, Santa Clara University, Santa Clara, CA 95053, USA ###### Abstract Future direct searches for low-mass dark 
matter particles with germanium detectors, such as SuperCDMS SNOLAB, are expected to be limited by backgrounds from radioactive isotopes activated by cosmogenic radiation inside the germanium. There are limited experimental data available to constrain production rates, and a large spread of theoretical predictions. We examine the calculation of expected production rates, and analyze data from the second run of the CDMS low ionization threshold experiment (CDMSlite) to estimate the rates for several isotopes. We model the measured CDMSlite spectrum and fit for contributions from tritium and other isotopes. Using the knowledge of the detector history, these results are converted to cosmogenic production rates at sea level. The production rates in atoms/(kg·day) are 74 ± 9 for ³H, 1.5 ± 0.7 for ⁵⁵Fe, 17 ± 5 for ⁶⁵Zn, and 30 ± 18 for ⁶⁸Ge.

###### keywords: Dark Matter, SuperCDMS, CDMSlite, Germanium Detectors, Cosmogenic Activation

## 1 Introduction

Astrophysical observations indicate that dark matter constitutes a majority of the matter in the Universe (1; 2). Weakly interacting massive particles (WIMPs) are a well-motivated class of candidates that could explain these observations (3; 4) and may be directly detectable with a sufficiently sensitive Earth-based detector (5). Traditionally, direct searches have focused on WIMPs with masses in the range of 10 GeV/c² to several TeV/c². Although searches in this mass range are ongoing, the lack of evidence for such particles (6; 7; 8), or for supersymmetry at the Large Hadron Collider (9; 10), motivates exploration of lower-mass alternatives (11; 12; 13; 14; 15).

The kinematics of low-mass dark matter interactions with atomic nuclei lead to low-energy nuclear recoils (NRs). The performance of discrimination techniques typically used to distinguish the electron-recoil (ER) background from NRs generally degrades with decreasing recoil energy (16; 7; 17; 18; 19). The ER background is therefore likely to become the primary limiting factor for the experimental reach of low-mass dark matter searches (20). A particularly important source of ERs is radioactivity produced through cosmogenic activation of the detector material.

### 1.1 SuperCDMS and CDMSlite

The Super Cryogenic Dark Matter Search experiment (SuperCDMS) operated an array of 15 interleaved Z-sensitive ionization and phonon (iZIP) Ge detectors (21) from 2012 to 2015 in the Soudan Underground Laboratory to search for NRs from dark matter interactions (22; 16). Each detector was equipped with four phonon and two charge readout channels on each of the flat faces. One channel of each type acted as an outer guard ring on each side to reduce background by identifying and removing events at high radius. When operated in their normal iZIP mode, with a modest bias voltage of a few volts applied between charge and phonon sensors, simultaneous readout of phonon and charge signals enabled an effective ER-background identification for recoil energies larger than 8 keV (23). This provided world-leading sensitivity among all solid-state detectors to WIMP masses ≳ 12 GeV/c² (16).

Sensitivity to interactions of low-mass dark matter particles (≲ 6 GeV/c²) was enhanced by operating one of the detectors in an alternative mode. In the CDMS low ionization threshold experiment (CDMSlite), a larger bias voltage of 70 V was applied between the two flat faces of the detector. In this mode, the detector no longer has the capability to discriminate ER events from NR events.
However, the Neganov-Trofimov-Luke mechanism (24; 25) amplifies the charge signal (in proportion to the voltage bias) into a large phonon signal, without a corresponding increase in electronic noise. In this way a much larger signal-to-noise ratio is achieved, lowering the threshold to well below a keV and thus gaining sensitivity to dark matter particles with masses of a few GeV/c². Further details on searches for low-mass dark matter with CDMSlite can be found in Refs. (22; 26; 27). The next-generation experiment SuperCDMS SNOLAB will further extend the low-mass experimental reach by operating new detectors (Si and Ge) based on the CDMSlite concept but optimized to achieve even lower energy thresholds (HV detectors) (20; 28).

### 1.2 Cosmogenic Background in CDMSlite

For CDMSlite and SuperCDMS SNOLAB, ERs from cosmogenic isotopes produced in the detector crystals during detector fabrication, testing, and storage above ground are a significant source of background. A cosmogenic isotope is of concern if its half-life is long enough that it does not decay away between the time the detectors are brought underground and the start of the dark matter search, but short enough that the decay rate is comparable to other sources of background. Half-lives of isotopes relevant to our analysis range from 100 days to a few tens of years. Table 1 lists all isotopes with half-lives in the relevant range that could potentially be produced in germanium by cosmogenic radiation. In addition, we include ⁷¹Ge and ⁶⁸Ga. The latter has a very short half-life but is produced by the decay of the long-lived ⁶⁸Ge, while ⁷¹Ge is produced during calibration measurements with a ²⁵²Cf neutron source through neutron capture on ⁷⁰Ge (26).

A number of publications (listed in Table 2) discuss cosmogenic activation in germanium. For a review of cosmogenic production rates in various materials, including germanium, see Ref. (29). As Table 2 demonstrates, the different published calculations are not always in agreement with one another or with the sparse experimental results. Tritium (³H) produced by cosmogenic radiation in germanium is expected to be the dominant background for the SuperCDMS SNOLAB HV germanium detectors (20). For this isotope, only one experimental result is available (30), and the theoretical calculations show a relatively large spread in predicted activation rates. We perform a calculation in Section 2, addressing some of the known shortcomings of previous approaches. In Section 3 we analyze the spectrum acquired during the second run of CDMSlite (26), and extract the tritium production rate in germanium in Section 4. In Section 5, we evaluate rates for several other isotopes either identified in CDMSlite data or reported by other experimental efforts.

## 2 Cosmogenic Activation

The energy transferred by cosmic radiation to an atomic nucleus may cause protons, neutrons, or nuclear clusters to escape from the core nuclear potential, dispersing the absorbed energy and producing radioactive isotopes such as those listed in Table 1. In principle, the production rate R of an isotope by cosmic-ray secondaries (neutrons, protons, muons, and pions), dominated by the contribution from neutrons, can be calculated from the production cross-section excitation functions, $\sigma_i$, and the measured cosmic-ray flux spectra, $\Phi_i$, as a function of the cosmic-ray energy $E_i$:

$$R = \sum_{i=n,p,\mu,\pi} \int \sigma_i(E_i)\,\Phi_i(E_i)\,\mathrm{d}E_i. \quad (1)$$

In practice, values for the isotope-production excitation functions rely heavily on extrapolations using nuclear models, since measurements are often unavailable.
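As a concrete, if simplified, illustration of Eq. (1), the following Python sketch numerically integrates an excitation function against a sea-level neutron flux to obtain a rate in atoms/(kg·day). Both `sigma_mb` and `phi` are hypothetical stand-ins, not the TALYS/INCL++-ABLA cross sections or the Gordon et al. spectrum actually used in this work, so the printed number is illustrative only.

```python
import numpy as np

# Hypothetical stand-ins: sigma(E) in millibarns, phi(E) in n/(cm^2 s MeV).
def sigma_mb(E_MeV):
    # Tritium-like shape: ~10 MeV threshold, sub-linear growth (see Sec. 2.1).
    return np.where(E_MeV > 10.0, 30.0 * np.log1p((E_MeV - 10.0) / 100.0), 0.0)

def phi(E_MeV):
    # Crude power-law stand-in for the fast-neutron flux spectrum.
    return 2e-3 * E_MeV**-1.5

E = np.logspace(1, 4, 2000)            # 10 MeV .. 10 GeV grid
MB_TO_CM2 = 1e-27                      # millibarn -> cm^2
N_A = 6.022e23
TARGETS_PER_KG = N_A / 72.64 * 1000.0  # atoms per kg of natural Ge
SECONDS_PER_DAY = 86400.0

# R = integral of sigma(E) * phi(E) dE, converted to atoms/(kg day)
rate_per_atom = np.trapz(sigma_mb(E) * MB_TO_CM2 * phi(E), E)
R = rate_per_atom * TARGETS_PER_KG * SECONDS_PER_DAY
print(f"production rate ~ {R:.1f} atoms/(kg day)  [toy inputs]")
```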
The previous efforts at calculations listed in Table 2 vary depending on the particular nuclear models used. This section reevaluates these models and recalculates expected production rates for tritium and other isotopes observed in CDMSlite, as well as for tritium from neutron spallation in silicon.

### 2.1 Excitation Functions from Neutron Spallation

In order to understand the effect of cosmogenic radiation, we need nuclear models that describe how energy is transferred within a nucleus, how particles are ejected from an excited nucleus, and what residual nucleus remains once the energy is dissipated. For nuclear excitation energies below 100 MeV, most particle emission occurs relatively slowly and the excitation energy is able to equilibrate among the internal degrees of freedom of the nucleus. At higher energies, some nucleons escape before the nucleus reaches thermal equilibrium, and nucleons need to be modeled individually. In Ref. (32), this difference in excitation-function behavior at low and high energies was recognized, and appropriate codes were benchmarked and used in each region to estimate the production of mid-mass radioisotopes. However, as tritium is produced as an ejectile rather than as a residual nucleus, models that account for clustering of ejected nucleons are required, and the tools used in Ref. (32) cannot be applied.

At excitation energies below 100 MeV, detailed models for thermalized decay mechanisms and approximations to pre-equilibrium behaviour are required. TALYS is one of several codes available that implement these models and is widely used (33), including for the activation calculations in Refs. (34) and (35). To more accurately model processes for spallation at energies of hundreds of MeV, the Liège Intranuclear Cascade model (INCL) implements Monte Carlo algorithms to simulate energy cascading amongst nucleons and to predict how escaping nucleons cluster into nuclear fragments (36). INCL comes packaged with the ABLAtion code (37), which performs calculations similar to TALYS once the nucleus is thermalized. Ref. (38) compares available experimental data to a wide range of available spallation models, and INCL4.5-ABLA is shown to be significantly better than other models at predicting the production of residual nuclei from the spallation of iron, a mid-mass nucleus analogous to the germanium and silicon targets considered here.

For the estimates presented here, we use a slightly newer version of this code (INCL++-ABLA version 5.2.9.5) with its default parameters, and TALYS version 1.8 with custom parameters. (Some TALYS parameters whose default values help reduce computation time were relaxed: maxlevelstar, maxlevelsres, and maxlevelsbin for all light ejectiles up to mass 4 were increased to 40, to account for known nuclear levels that may affect nucleon production; the pre-equilibrium model contribution was calculated for all incident energies; and the thresholds for discarding negligible reaction channels, xseps and popeps, were reduced. For all other parameters, default values were used.) Cross sections calculated with TALYS are used for neutron energies below 100 MeV, and those calculated with INCL++-ABLA are used for neutron energies above 100 MeV.

Figure 1 shows the calculated tritium-production excitation functions in Ge and Si with natural isotopic composition (ⁿᵃᵗGe and ⁿᵃᵗSi, respectively). The same method was used to produce, in Figure 2, the production excitation functions of the other isotopes listed in Table 2.
For these isotopes, the excitation functions have shapes similar to those obtained in Ref. (32) using complementary methods, but are generally slightly lower. As a check, the general shape of the isotope-production excitation functions can be inferred before running the calculations. For most isotopes, the cross section peaks near a small multiple of the nucleon separation energy (∼10 MeV per ejected nucleon) and then falls as the number of alternative exit channels in the reaction increases. For tritium, which may be emitted multiple times during nuclear deexcitation, the production cross section grows monotonically and sub-linearly with the collision energy, from threshold up to energies on the order of the total nuclear binding energy (∼1 GeV). Note that in Ref. (34) TALYS was used to calculate a tritium-production cross section that did not increase monotonically, thus significantly reducing the calculated production rate. (An attempt to calculate the tritium-production cross sections using TALYS 1.0, as used in Ref. (34), did not reproduce their result for neutron energies above 80 MeV. In addition, in Ref. (34) the exposure of the IGEX detector crystals is overstated by nearly a factor of nine (59), leading to an apparent, but false, confirmation of their calculated value.)

Studies to benchmark the TALYS and INCL-ABLA models have demonstrated accuracies of better than 40 % in most of their respective domains of applicability for reactions similar to those considered above. TALYS has been benchmarked against the measured production of residual nuclei from proton irradiation at various energies, as well as from neutron irradiation up to 180 MeV, in Si, Co, Fe, Ni, and Cu targets (60). The latter study shows disagreements for the production of light residual nuclei that require multiple emissions in their production, such as a factor of 5 overprediction for Cr production from an iron target. By restricting TALYS to excitation energies below 100 MeV, this multiple-emission regime is partially avoided. Benchmark studies of INCL-ABLA calculations for the production of residual nuclei with atomic numbers between 13 and 24, by proton irradiation of an Fe target at 300 MeV, show that predictions are generally within 40 % of the measured values (38). INCL-ABLA fails to accurately predict the production of the isotopes Co and Fe, suggesting that it may not be suitable for calculating the cross section of processes that do not require charged-particle emission from the target, such as the production of Ge isotopes from spallation in germanium. Fortunately, the cosmogenic production of such isotopes is dominated by neutrons with energies below 100 MeV, for which TALYS provides reasonably accurate excitation functions. By using TALYS for excitation energies below 100 MeV and INCL-ABLA at higher energies, the known failures of these models are avoided, and an uncertainty of 40 % can be propagated to the predicted cosmogenic production rates.

As the excitation functions of both TALYS and INCL-ABLA agree at 100 MeV for most of the cosmogenic radioisotopes considered in Figures 1 and 2, the exact choice of 100 MeV versus other nearby energies contributes negligibly to uncertainties in the predicted cosmogenic production rates. However, a significant difference between the two models is observed at a neutron energy of 100 MeV in the production of ⁶⁵Zn. Other choices for this cutoff, between 20 MeV and 300 MeV, may change the predicted production rate, but by an amount that is still small compared to the considered uncertainty.
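The splicing of the two model calculations at 100 MeV, and the check that the result is insensitive to the exact cutoff, can be expressed along the following lines. The two cross-section curves here are hypothetical stand-ins as in the earlier sketch; only the relative change between cutoff choices is meaningful.

```python
import numpy as np

def sigma_talys(E):   # stand-in for the low-energy (TALYS) calculation, mb
    return np.where(E > 10.0, 25.0 * np.log1p((E - 10.0) / 50.0), 0.0)

def sigma_incl(E):    # stand-in for the cascade-model (INCL++-ABLA) calculation, mb
    return 20.0 * np.log1p(E / 200.0)

def sigma_spliced(E, cutoff=100.0):
    """Use the TALYS curve below the cutoff and the INCL curve above it."""
    return np.where(E < cutoff, sigma_talys(E), sigma_incl(E))

def phi(E):           # crude stand-in flux, n/(cm^2 s MeV)
    return 2e-3 * E**-1.5

E = np.logspace(1, 4, 4000)  # 10 MeV .. 10 GeV
R100 = np.trapz(sigma_spliced(E, 100.0) * phi(E), E)
for cutoff in (20.0, 100.0, 300.0):
    R = np.trapz(sigma_spliced(E, cutoff) * phi(E), E)
    print(f"cutoff {cutoff:5.0f} MeV: rate / rate(100 MeV) = {R / R100:.3f}")
```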
Calculations of tritium production from neutron spallation can be compared to measurements using 96 MeV neutrons on silicon (56) and iron (61). The calculations and experiments agree within the small (5 %-level) experimental uncertainties for both the TALYS and INCL-ABLA calculations, with calculated production cross sections in iron of 21.8 and 21.6 mb, respectively, versus a measurement of 21 ± 1.1 mb. Despite the good agreement for these specific data points, the overall uncertainty on the calculations for tritium should not be considered more precise than the typical uncertainty of 40 % observed in general.

### 2.2 Predicted Cosmogenic Activation Rates

Several competing parameterizations of the sea-level neutron flux exist, as noted in Ref. (29). We adopt the model of Gordon (43) for consistency with other recent estimates (34; 35; 32). Figure 3 uses the excitation functions of Figures 1 and 2 and the adopted sea-level neutron spectrum to show the expected contribution of neutrons of different energies to the production of particular radioisotopes.

The cosmic-ray neutron fluxes published in Ref. (43) are normalized to the average cosmic-ray flux observed at sea level in New York. Adjustments for solar-cycle variation, altitude, and the latitude-dependent geomagnetic cutoff were considered (the solar-cycle correction was obtained using data from the neutron monitor at Newark/Swarthmore, provided by the University of Delaware Department of Physics and Astronomy and the Bartol Research Institute, and accessed through the NMDB database at www.nmdb.eu). However, it was found that for the location and time period of above-ground fabrication and storage of the CDMSlite detector (Stanford University, from 2009 to 2011) these percent-level corrections largely cancel, so the New York sea-level normalization from Ref. (43) is used.

Forms of radiation other than fast neutrons may also cause transmutation into the isotopes listed in Table 1. The most important of these is spallation by cosmic-ray protons. In the energy range that contributes most to the production of cosmogenic radioisotopes, from 0.1 to 1 GeV, the proton flux is 5 % of the neutron flux. At these energies, the spallation processes induced by protons and neutrons are very similar; thus, the calculated production cross sections from neutrons have been increased by 5 % to approximately account for the proton flux. Other publications find contributions of 3 % to 25 % (53; 55; 50; 35; 51) for the isotopes considered herein.

In addition to spallation processes, cosmogenic activation can occur through the capture of stopped negative muons and pions. Approximately 500 muons/(kg·day) are stopped in materials at the Earth's surface and at shallow depths of up to 5 meters of water equivalent (62). The capture of these negative muons converts a proton into a neutron while releasing tens of MeV into the nucleus. Ref. (63) reports the measured fraction of these captures that generate various residual isotopes. This provides a small (of order 1 %) addition to the production rate, at the Earth's surface, of tritium and some other radioisotopes with production energy thresholds below 100 MeV. We ignore this contribution in our calculated rates; however, it may be important for the production of cosmogenic radioisotopes in materials stored for long periods in sites with shallow overburden, where the production rate from cosmic-ray neutrons is substantially reduced.
This process was also considered in Ref. (53) for the production of Co from germanium and found to be negligible.

The total calculated production rates in Ge are 95 atoms/(kg·d) for ³H, 5.6 atoms/(kg·d) for ⁵⁵Fe, 51 atoms/(kg·d) for ⁶⁵Zn, and 49 atoms/(kg·d) for ⁶⁸Ge; these values are also listed in Table 5. The calculated production rate of ³H in Si is 124 atoms/(kg·d).

## 3 Experimental Analysis of CDMSlite Run 2

In this section, we reanalyze the CDMSlite Run 2 spectrum (originally used in Ref. (26) to search for low-mass WIMPs) using a likelihood method to extract background event rates due to cosmogenically produced radioisotopes. A background model is constructed that includes the tritium beta-decay spectrum, a relatively flat component due to scattering of higher-energy gamma rays with incomplete energy transfer ("Compton background"), mostly from radioactive contaminants in the experimental setup such as the ²³⁸U and ²³²Th chains and ⁴⁰K, and several peaks. The latter are produced by X-ray/Auger-electron cascades following electron-capture (EC) decays of radioisotopes to the ground states of their daughter nuclei. Table 3 lists the total cascade energies and branching ratios (BR) for captures from different shells for the EC-decay isotopes that we consider. We include all those listed in Table 1 except for ²²Na and ⁴⁴Ti, for which there is no evidence in the CDMSlite spectrum. Potential contributions from non-tritium beta decays with higher-energy endpoints are not explicitly considered, but are accounted for in the fit by the Compton background contribution (see Section 3.3). The known above-ground exposure history of the detector is then used to convert statistically significant detections from the likelihood fit into cosmogenic production rates.

The prior analysis of the CDMSlite Run 2 spectrum included energies only up to 2 keV (26), including the evaluation of the detection efficiency. All of the EC decays that we consider dominantly give rise to peak energies above 2 keV (cf. the K-shell captures in Table 3). Furthermore, to effectively differentiate between the spectral contributions from tritium betas and the Compton background, the likelihood fit should include energies above the tritium beta-decay endpoint. Consequently, an important aspect of the analysis presented here is an extension of the published CDMSlite detection efficiency to higher energies.

### 3.1 CDMSlite Detection Efficiency above 2 keV

For CDMSlite Run 2 the efficiency above 100 eV is of order 50 % and is largely determined by the radial fiducialization (radial cut) that is necessary to remove data from regions of the detector where an inhomogeneous electric field leads to a reduced Neganov-Trofimov-Luke amplification and thus a significantly distorted energy spectrum (26; 27). Using the ⁷¹Ge capture lines and a simulation method based on pulse shape (65), the radial-cut efficiency was shown to be fairly flat below the 1.3 keV L-shell line. For energies directly above the 1.3 keV line, up to about 2 keV, there is no indication that the radial event distribution changes significantly; for energies between 1.3 keV and 2 keV, the radial-cut efficiency is therefore linearly interpolated between its values at the 1.3 keV and 10.37 keV lines. However, at energies above about 2 keV the outer phonon channel shows partial signal saturation, leading to a reduction in the efficiency of the radial fiducialization (27). This downward trend is confirmed by an estimate of the efficiency using events from the 10.37 keV K-shell line (27).
The decreasing selection efficiency with increasing energy can be observed in Figure 4, which shows the distribution of the radial parameter (on which the radial cut is based) as a function of energy. Based on the observed distribution, we start our study with an initial hypothesis for the detection efficiency over the full energy range of interest (threshold to 20 keV), defined as follows:

• Below 2 keV the previously published efficiency is used.

• From 2 to 10.37 keV we assume that the efficiency drops linearly down to 45.4 %, the efficiency reported in Ref. (27) for the ⁷¹Ge K-shell line.

• Above 10.37 keV the efficiency is presumed to be constant. This is a simple choice based on the behaviour of the radial distribution below 20 keV, and is not expected to account for the decreasing selection efficiency at higher energies.

Figure 5 shows this initial estimate of the efficiency function. In order to test this initial hypothesis we compare ¹³³Ba calibration data to a Monte Carlo simulation generated for the same experimental configuration using Geant4 (66; 67; 68; 69). The initial-hypothesis efficiency is applied to the simulated energy spectrum, which is then normalized to the corresponding measured rate in the energy range between 3 and 10 keV. The top panel of Figure 6 shows the resulting simulation together with the measured spectrum. The two spectra are in good agreement below 18 keV, thus supporting the initial efficiency hypothesis. However, there is a significant discrepancy above 20 keV, growing with increasing energy, that reflects the diminishing performance of the radial parameter in this energy range (as seen in Figure 4).

We derive a correction to the initial efficiency hypothesis based on the ratio of the measured to simulated spectra. As shown in the bottom panel of Figure 6, this ratio decreases approximately linearly with increasing energy in the upper portion of the energy range. Therefore, we introduce as a correction a piecewise-defined function of energy, which is constant (unity) below some energy E₀ and decreases linearly with a slope s above this energy. The values of the parameters E₀ and s are determined by fitting this correction function to the ratio of the measured and simulated spectra in the energy range from 0.5 to 30 keV, as shown in Figure 6 (bottom panel). The best-fit values and their 95 % C.L. uncertainties are E₀ = (17.3 ± 2.7) keV and s = (0.026 ± 0.009) keV⁻¹, respectively. A reduced χ² of 0.84 indicates that this is a good fit.

Figure 7 shows the final, corrected detection efficiency over the full energy region considered. Also shown are the measured and simulated ¹³³Ba calibration spectra, where the final efficiency has been applied to the latter. Use of a Kolmogorov-Smirnov test (70) to compare the measured and simulated spectra, both before and after application of the correction function, confirms that the corrected efficiency is a more accurate representation of the true detection efficiency. With p values of 1.6 × 10⁻⁴ and 0.78 for the initial and corrected efficiencies, respectively, the initial hypothesis is rejected at 99.9 % confidence, while the final efficiency is consistent with the measured spectrum. The uncertainty on the final efficiency is determined by propagating the uncertainties on the initial efficiency (determined analogously to the efficiency itself) together with the fit uncertainties of E₀ and s, leading to a maximum relative uncertainty of 8 %.
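A minimal sketch of the correction-function fit described above: the piecewise form (unity below E₀, linear decrease with slope s above) follows the text, while the measured/simulated ratio here is a synthetic stand-in rather than the actual ¹³³Ba data.

```python
import numpy as np
from scipy.optimize import curve_fit

def correction(E, E0, s):
    """Piecewise correction: unity below E0 (keV), falling with slope s above."""
    return np.where(E < E0, 1.0, 1.0 - s * (E - E0))

# Synthetic stand-in for the measured/simulated spectrum ratio, 0.5-30 keV.
rng = np.random.default_rng(0)
E = np.linspace(0.5, 30.0, 60)
ratio = correction(E, 17.3, 0.026) + rng.normal(0.0, 0.02, E.size)

popt, pcov = curve_fit(correction, E, ratio, p0=[15.0, 0.02])
perr = np.sqrt(np.diag(pcov))  # 1-sigma; the paper quotes 95 % C.L. intervals
print(f"E0 = {popt[0]:.1f} +/- {perr[0]:.1f} keV, "
      f"s = {popt[1]:.3f} +/- {perr[1]:.3f} /keV")
```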
To gauge how the choice of the 3 keV lower bound of the energy normalization range impacts the results, this value was varied between 0.5 and 4.5 keV, resulting in a maximum variation of the efficiency of 3.5 % (relative). This is sub-dominant compared to the uncertainty on the final efficiency function discussed above.

### 3.2 Analysis of the CDMSlite Spectrum

In order to determine the contributions of the different components to the CDMSlite Run 2 spectrum, a maximum likelihood fit is performed. The likelihood analysis includes models for the EC X-ray peaks, the tritium beta-decay spectrum, and a component corresponding to interactions of higher-energy gamma rays from the decay of radiocontaminants in the setup (e.g. ²³⁸U, ⁴⁰K, and ²³²Th) depositing only a fraction of their energy (hereafter referred to as the "Compton" component). The energy spectrum of each component (EC peaks, tritium, and Compton) is modeled by a probability distribution function (PDF) f_b, to which the final efficiency determined in Sec. 3.1 is applied, with an associated likelihood estimator n_b corresponding to the number of events that the component contributes to the overall spectrum. For N events, with energies denoted by E_i, the negative log-likelihood function is

$$-\ln(\mathcal{L}) = \sum_b n_b - \sum_{i=1}^{N} \ln\!\left(\sum_b n_b\, f_b(E_i)\right), \quad (2)$$

where the f_b are the individual background PDFs and the n_b are the numbers of events that each background contributes to the spectrum.

#### 3.2.1 Electron-Capture Peaks

The EC peaks of each radioisotope in Table 3 are modeled by Gaussian functions centered at the K-, L- and M-shell binding energies of the respective daughter isotope, with the standard deviation of each Gaussian set by the energy-dependent resolution function reported in Ref. (27) and listed in Table 3. In the likelihood fit, the amplitude ratios between the K-, L-, and M-shell peaks for each radioisotope are fixed according to the expected branching ratios in Ref. (64) (listed in Table 3), ignoring potential uncertainties. The M-shell contribution is neglected for all isotopes other than germanium, as the branching ratio is on the order of 0.1 % (compared to the L-shell branching ratio on the order of 10 %). As ⁷¹Ge makes a significant contribution to the spectrum, the germanium M shell is included in this analysis.

#### 3.2.2 Tritium Beta-Decay Spectrum

The tritium beta-decay spectrum is given by

$$N(T_e) = C\,\sqrt{T_e^2 + 2\,T_e m_e c^2}\;(Q - T_e)^2\,(T_e + m_e c^2)\,F(Z, T_e), \quad (3)$$

where C is a normalization constant, T_e is the kinetic energy of the emitted electron (i.e. the energy measured by our detector), m_e is the mass of the electron, and Q is the Q-value (71). For the Fermi function F(Z, T_e), where Z is the atomic number of the daughter nucleus, we use the following non-relativistic approximation (72):

$$F(Z, T_e) = \frac{2\pi\eta}{1 - e^{-2\pi\eta}}, \quad (4)$$

where η = Zαc/v, with α the fine-structure constant and v the electron velocity. This spectrum is convolved with the energy-dependent resolution function.

#### 3.2.3 Compton Background Component

The spectral shape of the Compton model is simulated with Geant4 based on the Monash model (73; 74). The Monash model takes into account changes to the gamma-ray scattering rate that occur at small scattering angles, where the energy transfer is of the order of the atomic binding energies. Steps appear at the germanium K-, L-, and M-shell binding energies as fewer and fewer electrons are available for the scattering process, as shown in Figure 8.

#### 3.2.4 Likelihood Fit Results

The results of the likelihood fit are shown in Figure 8.
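Equations (2)-(4) translate directly into code. The sketch below implements the tritium beta spectrum with the non-relativistic Fermi correction, normalizes it into a PDF, and evaluates the extended negative log-likelihood of Eq. (2) for a toy set of event energies. Detector resolution, efficiency, and the peak and Compton PDFs are omitted, so this is a skeleton of the fit rather than the analysis code itself.

```python
import numpy as np

ME_C2 = 511.0   # electron rest energy, keV
Q = 18.6        # tritium Q-value, keV
ALPHA = 1.0 / 137.036
Z_DAUGHTER = 2  # 3He

def fermi(T):
    """Non-relativistic Fermi function, Eq. (4), with eta = Z*alpha*c/v."""
    beta = np.sqrt(T**2 + 2 * T * ME_C2) / (T + ME_C2)  # v/c = pc/E
    eta = Z_DAUGHTER * ALPHA / beta
    return 2 * np.pi * eta / (1.0 - np.exp(-2 * np.pi * eta))

def tritium_shape(T):
    """Unnormalized beta spectrum, Eq. (3)."""
    T = np.asarray(T, dtype=float)
    shape = (np.sqrt(T**2 + 2 * T * ME_C2) * (Q - T)**2
             * (T + ME_C2) * fermi(T))
    return np.where((T > 0) & (T < Q), shape, 0.0)

# Normalize into a PDF on a grid (resolution/efficiency omitted here).
grid = np.linspace(1e-3, Q, 2000)
norm = np.trapz(tritium_shape(grid), grid)
def tritium_pdf(E):
    return tritium_shape(E) / norm

def neg_log_likelihood(n_by_component, pdfs, energies):
    """Extended NLL, Eq. (2): sum_b n_b - sum_i ln(sum_b n_b f_b(E_i))."""
    total = sum(n * f(energies) for n, f in zip(n_by_component, pdfs))
    return sum(n_by_component) - np.sum(np.log(total))

# Toy usage with tritium as the only component and fake event energies (keV):
fake_events = np.array([2.0, 5.5, 9.1, 12.3])
print(neg_log_likelihood([100.0], [tritium_pdf], fake_events))
```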
The uncertainty on each fit parameter is determined from its likelihood distribution by varying the value of the parameter over a wide range about the best-fit value and calculating the likelihood at each value. The uncertainties are then extracted from the resulting likelihood distribution. Similarly, we also calculated the two-dimensional correlations. The two examples with the strongest correlation (tritium vs. Compton and tritium vs. Ge) are shown in Figure 9, indicating that the uncertainties on the other components have only a small effect on the tritium result. The fit results are summarized in Table 4. All values refer to the number of events contributed by the respective component to the measured spectrum.

Other Ge-based rare-event searches have identified additional isotopes such as ⁴⁹V, ⁵⁴Mn, ⁵⁶Co, ⁵⁷Co, ⁵⁸Co, ⁶⁰Co, ⁶³Ni, and ⁶⁷Ga (39; 52; 53; 34; 32; 55; 30; 50; 35). Thus, the fit includes not only those isotopes for which there is clear evidence in the CDMSlite data, but also three additional isotopes: ⁴⁹V, ⁵⁴Mn, and ⁵⁷Co. All three have half-lives within the relevant range. The fit values for these isotopes (also included in Table 4) are compatible with zero. The remaining isotopes are neglected: ⁵⁶Co and ⁵⁸Co have half-lives that are too short and, in addition, would not be distinguishable from ⁵⁷Co; the same argument applies to ⁶⁷Ga (indistinguishable from ⁶⁸Ga and having too short a half-life). ⁶⁰Co and ⁶³Ni are β⁻ emitters (⁶³Ni with a half-life of 101.2 y, and therefore not included in Table 1) with Q-values well above our energy region of interest, thus contributing an almost flat component that is absorbed in the Compton component. Section 3.3 further motivates neglecting these beta emitters. As mentioned in Section 1.2, ²²Na and ⁴⁴Ti have appropriate half-lives and could potentially be produced cosmogenically in germanium, but inclusion of these isotopes in the fit results in negligible amplitudes for both.

#### 3.2.5 Time Dependence of the EC Rates

The half-lives of the observed EC decays from cosmogenic isotopes are between 240 days and about 3 years (see Table 1). As a result, over the course of the measurement period of roughly one year, the measured rate in the EC peaks is expected to drop. We have studied the time dependence, and in all cases the rate as a function of time is consistent with the respective decay time; however, due to the small number of observed events, the half-lives cannot be positively confirmed. No constraint on the decay of the EC peaks was used in the likelihood analysis of Section 3.2.4.

The only case where a clear time dependence is observed is the decay of the Ge EC peak, which is dominated by the decay of ⁷¹Ge produced in situ by neutron activation during three nuclear-recoil calibration campaigns separated by several months (26). In principle, the time dependence of the rate in the Ge EC peak could provide an additional way to extract the ⁶⁸Ge decay rate. However, the time distribution and strength of the ⁷¹Ge signal, together with the overall measurement schedule, with a significant gap for maintenance of the cryogenic equipment in the summer of 2014, led to a large uncertainty in this analysis. The constraints on the ⁶⁸Ge decay from the time dependence are considerably weaker than (but compatible with) those deduced using the likelihood fit results for the ⁶⁸Ga EC peak.
### 3.3 Systematic Uncertainties from the Choice of the Background Model

Before drawing conclusions from the fit results about the cosmogenic production rates of the observed isotopes, it is important to understand how the presence of unidentified background components could impact those fit results.

#### β⁻ Decays

In addition to tritium there are other β⁻-active nuclei that can be produced cosmogenically. However, all of the isotopes that can be produced and fall within the relevant decay-time window have considerably higher endpoint energies. This means that their contribution in the energy range of interest is reduced accordingly and that their spectra are close to flat; our fit would therefore absorb them in the Compton contribution. Literature values for the production rates of ⁶⁰Co and ⁶³Ni are available and range from 2.0 to 6.6 and from 1.9 to 5.2 atoms/(kg·day), respectively (34; 52). Even assuming the highest values, they would make up only a few percent of the deduced Compton contribution and thus can safely be neglected as separate terms in the likelihood fit.

#### β⁺ Decays

For both ⁶⁸Ga and ⁶⁵Zn, β⁺ decay is an alternative to the previously considered EC, with branching ratios of 88.9 % and 1.7 %, respectively (see Table 1). However, the expected combined contribution of these backgrounds to the measured spectrum below 20 keV is less than one event.

#### Instrumental Noise

The instrumental-noise background is effectively removed by the analysis (27) and is therefore ignored here.

#### Surface Events from ²¹⁰Pb

Surface events may be a non-negligible component of the observed spectrum. The spectral shape of this background depends critically on the geometrical distribution of the contaminant, but is generally expected to rise in the energy range below a few keV (75). If such a component is present in the data (but ignored in the fit), it would lead to an overestimate of the tritium rate. A detailed study of a potential contribution of this background to the data discussed here has not yet been carried out, but estimates based on the observed alpha rates in the detector suggest that it contributes no more than about 8 % to the continuous low-energy spectrum. A correction to the extracted tritium rate would be subdominant compared to the statistical uncertainty.

#### Unidentified Backgrounds

There is no indication of significant contributions to the observed spectrum from other sources. However, since cosmogenic tritium is expected to be the dominant background in the Ge detectors of SuperCDMS SNOLAB (20), it is important to understand how an unidentified background could impact the conclusion about the tritium production rate. The two most extreme assumptions about unidentified background in this context would be a background that has a shape similar to the tritium spectrum, or a background that dominates the spectrum at high energy but drops to zero in the range where tritium may contribute. In the former case we could explain the spectrum without the presence of tritium, while the latter case provides a very conservative upper limit on the tritium rate and thus can be used for a conservative prediction of the expected sensitivity of SuperCDMS SNOLAB. In order to produce such a conservative estimate, we performed a second likelihood fit in which the Compton component is set to zero. Because a pure tritium spectrum is incompatible with the observed spectral shape near the endpoint, this fit is performed over a restricted energy range, only up to 11 keV.
The result of this fit is shown in Figure 10. The extracted tritium rate in this case is roughly 30 % higher than for the best fit discussed earlier.

## 4 Experimental Production Rates

### 4.1 Efficiency Correction for Gamma-Emitting Isotopes

Both ⁶⁵Zn and ⁶⁸Ga can decay via EC to an excited state of the daughter nucleus, releasing a γ-ray in the subsequent transition to the ground state. These decays only appear in the EC peaks if the γ-ray escapes the CDMSlite detector without interaction; if the γ-ray does interact in the CDMSlite detector, it shifts the event's energy out of the EC peak. If the γ-ray escapes the CDMSlite detector but strikes another operating detector in the same tower, the event is classified as a multiple-scatter event and removed as part of the standard dark-matter-analysis event selection. In both of these cases the number of events in the EC peak is reduced when compared to the decay rate of the respective isotope. The spectrum discussed above (Figures 8, 10) only includes single-scatter events, as it is derived from the standard SuperCDMS WIMP event-selection criteria. (One may expect the EC peaks of these two isotopes to also appear in the multiple-scatter spectrum. However, we confirmed using Geant4 Monte Carlo simulations that the probability for the gamma to leave the CDMSlite detector without interaction and subsequently interact in a neighbouring detector is too small for a resulting feature to be visible in our multiple-scatter data.)

As it is our goal to determine cosmogenic production rates for the various isotopes, we consider these inefficiencies in more detail. We use data from a Geant4 simulation to determine the fraction of events removed from the measured EC peaks due to a γ-ray interaction in the same or another detector. The simulation model is the same as that used in Ref. (20), but adapted for the experiment at the Soudan Underground Laboratory and modified to simulate the decays of ⁶⁵Zn and ⁶⁸Ga in the CDMSlite detector. An analysis of the simulation output, mirroring that of the CDMSlite dark matter analysis, shows that (64.3 ± 0.1) % of ⁶⁵Zn events and (9.64 ± 0.04) % of ⁶⁸Ga events are expected to appear in their respective single-scatter EC peaks. The given uncertainties are from simulation statistics.

### 4.2 Detector History

The CDMSlite detector has a well-documented location history. After crystal pulling on November 24, 2008 at ORTEC in Oak Ridge, TN, the detector spent 1065 days above ground during the fabrication and testing process at various locations in the San Francisco Bay Area (including Berkeley, Stanford, and SLAC), with intermittent storage periods in a shallow underground tunnel at Stanford that has shielding of 16 m water equivalent against cosmic rays (76). Subsequently the detector was brought to the Soudan Underground Laboratory (1100 m water equivalent) on October 25, 2011. CDMSlite Run 2 started 833 days later, on February 4, 2014, and took place over a period of 279 days.

If a detector is exposed for a time much longer than the lifetime of a given isotope, it will eventually reach saturation, with a constant decay rate for atoms of this species determined by the cosmogenic production rate. Given the measurement schedule, the detector history, and the lifetime of the respective isotope, we appropriately integrate the production and decay equations to convert the measured number of decays, using the detection efficiency, into a production rate in atoms per kg of detector material per day of exposure to cosmic radiation.
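The integration of production and decay over the exposure history can be sketched as follows. During surface exposure the population of an isotope obeys dN/dt = R − λN, and underground it simply decays. The day counts below follow the history quoted above, while the single uninterrupted surface exposure (ignoring the shallow-tunnel storage periods), the 271-day half-life, and the 50 observed decays are illustrative assumptions only.

```python
import numpy as np

def expose(N0, R, lam, days):
    """Population after 'days' of surface exposure: solves dN/dt = R - lam*N."""
    return R / lam + (N0 - R / lam) * np.exp(-lam * days)

def cool(N0, lam, days):
    """Population after 'days' underground (decay only)."""
    return N0 * np.exp(-lam * days)

def decays_during_run(R, half_life_days, N0=0.0,
                      surface_days=1065, cooldown_days=833, run_days=279):
    """Expected decays per kg during the run for production rate R (per kg day)."""
    lam = np.log(2) / half_life_days
    n_start = cool(expose(N0, R, lam, surface_days), lam, cooldown_days)
    return n_start * (1.0 - np.exp(-lam * run_days))

# Invert for R given a hypothetical 50 observed (efficiency-corrected) decays,
# for a 68Ge-like isotope, under the two initial-condition assumptions of
# Sec. 4.3: no 68Ge at crystal pulling vs. a crystal already in saturation.
observed = 50.0
half_life = 271.0
lam = np.log(2) / half_life
for label, n0 in (("zero at pulling", 0.0), ("saturated at pulling", 1.0 / lam)):
    # Decays scale linearly in R (with N0 = R/lam for saturation), so
    # R = observed / decays_during_run(R=1, ...).
    unit = decays_during_run(1.0, half_life, N0=n0)
    print(f"{label}: R = {observed / unit:.1f} atoms/(kg day)")
```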
The production rate is assumed to be constant during the times the detector was exposed to cosmic radiation. Corrections are made for the EC decays accompanied by γ emission, as discussed in Section 4.1. For tritium we additionally determine a conservative upper limit using the result from the second likelihood fit, which neglects the Compton background and thus attributes all events between the EC peaks to the tritium spectrum.

### 4.3 Production Rates

It is assumed that all cosmogenic isotopes, with the exception of ⁶⁸Ge, are expelled during the pulling of the crystal. For ⁶⁸Ge we make two extreme assumptions: either the amount of this isotope at the time of pulling is zero, or the crystal is already in full saturation. We calculate the production rate for both of these extreme assumptions. Since the exposure history after pulling is long compared to the lifetime of the isotope, and the rate of observed ⁶⁸Ga events, from which the ⁶⁸Ge activity is deduced, is rather small, the effect of this uncertainty on the final production rate is small compared to the statistical uncertainty. Both values are listed in Table 5, together with the results for all other isotopes. The calculated production rates from Section 2 are also listed for comparison.

While the detector history during detector production and testing is well documented, there is some uncertainty in the travel history of the detector, which is important for elevation and shielding. As a conservative approach, the maximum uncertainties in the recorded detector history are considered and then propagated together with the uncertainties (68 % interval) from the likelihood fit (cf. Table 4), the detection efficiency (cf. Section 3.1), and the efficiency for gamma-emitting isotopes (cf. Section 4.1).

## 5 Discussion and Conclusion

With this analysis of the data from the second run of CDMSlite at Soudan, we expand the knowledge base of cosmogenic production rates in natural germanium for various isotopes, including tritium. The best-fit tritium production rate of (74 ± 9) atoms/(kg·day) determined here is slightly lower than, though within uncertainty of, the production rate of (82 ± 21) atoms/(kg·day) measured by EDELWEISS (30). This holds true even if we consider potential contributions from the additional backgrounds discussed in Section 3.3 that are ignored in the main analysis, which would likely reduce the extracted tritium rate by a few percent. The measured production rates for the other isotopes, ⁵⁵Fe, ⁶⁵Zn and ⁶⁸Ge, however, are considerably lower (see Table 2) than those measured by EDELWEISS. At first glance the CDMSlite and EDELWEISS measurements appear incompatible. However, it is conceivable that the discrepancy can be explained by a difference in the flux and spectra of the cosmogenic radiation between the two experiments, together with the assumption that other factors may impact the concentration of ⁶⁸Ge. This is also a possible explanation for the discrepancy between the calculations and the measurements. A conclusive interpretation of the data will likely require a better understanding of the production mechanisms, including an improved knowledge of the temporal and spatial variation of cosmogenic neutron fluxes, as well as additional well-controlled activation measurements.

## 6 Acknowledgements

The SuperCDMS collaboration gratefully acknowledges technical assistance from the staff of the Soudan Underground Laboratory and the Minnesota Department of Natural Resources.
The iZIP detectors were fabricated in the Stanford Nanofabrication Facility, which is a member of the National Nanofabrication Infrastructure Network, sponsored and supported by the NSF. Funding and support were received from the National Science Foundation, the U.S. Department of Energy, Fermilab URA Visiting Scholar Grant No. 15-S-33, NSERC Canada, the Canada First Research Excellence Fund, and MultiDark (Spanish MINECO). This document was prepared by the SuperCDMS collaboration using the resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. Pacific Northwest National Laboratory is operated by Battelle Memorial Institute under Contract No. DE-AC05-76RL01830 for the U.S. Department of Energy. SLAC is operated under Contract No. DE-AC02-76SF00515 with the U.S. Department of Energy.

## References

• (1) K. Olive et al. (Particle Data Group), Review of particle physics, Chin. Phys. C 38 (2014) 090001.
• (2) P. A. R. Ade et al. (Planck Collaboration), Planck 2015 results: XIII. Cosmological parameters, Astron. Astrophys. 594 (2016) A13.
• (3) G. Steigman, M. Turner, Cosmological constraints on the properties of weakly interacting massive particles, Nucl. Phys. B 253 (1985) 375–386.
• (4) B. Lee, S. Weinberg, Cosmological lower bound on heavy neutrino masses, Phys. Rev. Lett. 39 (1977) 165–168.
• (5) M. W. Goodman, E. Witten, Detectability of certain dark-matter candidates, Phys. Rev. D 31 (1985) 3059.
• (6) D. S. Akerib et al. (LUX Collaboration), Results from a search for dark matter in the complete LUX exposure, Phys. Rev. Lett. 118 (2017) 021303.
• (7) E. Aprile et al. (XENON Collaboration), First dark matter search results from the XENON1T experiment, Phys. Rev. Lett. 119 (2017) 181301.
• (8) Xiangyi Cui et al. (PandaX-II Collaboration), Dark matter results from 54-ton-day exposure of PandaX-II experiment, Phys. Rev. Lett. 119 (2017) 181302.
• (9) M. Aaboud et al. (ATLAS Collaboration), Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √s = 13 TeV using the ATLAS detector, Phys. Rev. D 94 (2016) 032005.
• (10) V. Khachatryan et al. (CMS Collaboration), Search for dark matter, extra dimensions, and unparticles in monojet events in proton-proton collisions at √s = 8 TeV, Eur. Phys. J. C 75 (2015) 235.
• (11) D. E. Kaplan, Single explanation for both baryon and dark matter densities, Phys. Rev. Lett. 68 (1992) 741.
• (12) S. Barr, Baryogenesis, sphalerons and the cogeneration of dark matter, Phys. Rev. D 44 (1991) 3062–3066.
• (13) T. Cohen, D. Phalen, A. Pierce, K. Zurek, Asymmetric dark matter from a GeV hidden sector, Phys. Rev. D 82 (2010) 056001.
• (14) K. Sigurdson, M. Doran, A. Kurylov, R. Caldwell, M. Kamionkowski, Dark-matter electric and magnetic dipole moments, Phys. Rev. D 70 (2004) 083501 [Erratum: Phys. Rev. D 73 (2006) 089903].
• (15) J.-F. Fortin, T. Tait, Collider constraints on dipole-interacting dark matter, Phys. Rev. D 85 (2012) 063506.
• (16) R. Agnese et al. (SuperCDMS Collaboration), Results from the Super Cryogenic Dark Matter Search experiment at Soudan, Phys. Rev. Lett. 120 (2018) 061802.
• (17) C. Amole et al. (PICO Collaboration), Dark matter search results from the PICO-60 C₃F₈ bubble chamber, Phys. Rev. Lett. 118 (2017) 251301.
• (18) P.-A. Amaudruz et al.
(DEAP-3600 Collaboration), First results from the DEAP-3600 dark matter search with argon at SNOLAB, Phys. Rev. Lett. 121 (2018) 071801.
• (19) F. Petricca et al. (CRESST Collaboration), First results on low-mass dark matter from the CRESST-III experiment, to be published in the proceedings of TAUP 2017 (2017), arXiv:1711.07692.
• (20) R. Agnese et al. (SuperCDMS Collaboration), Projected sensitivity of the SuperCDMS SNOLAB experiment, Phys. Rev. D 95 (2017) 082002.
• (21) P. L. Brink et al. (SuperCDMS Collaboration), First test runs of a dark-matter detector with interleaved ionization electrodes and phonon sensors for surface-event rejection, Nucl. Instrum. Meth. A 559 (2006) 414–416.
• (22) R. Agnese et al. (SuperCDMS Collaboration), Search for low-mass weakly interacting massive particles with SuperCDMS, Phys. Rev. Lett. 112 (2014) 241302.
• (23) R. Agnese et al. (SuperCDMS Collaboration), Demonstration of surface electron rejection with interleaved germanium detectors for dark matter searches, Appl. Phys. Lett. 103 (2013) 164105.
• (24) B. S. Neganov, V. N. Trofimov, Calorimetric method measuring ionizing radiation, Otkryt. Izobret. 146 (1985) 215, USSR Patent No. 1037771.
• (25) P. N. Luke, Voltage-assisted calorimetric ionization detector, J. Appl. Phys. 64 (1988) 6858–6860.
• (26) R. Agnese et al. (SuperCDMS Collaboration), New results from the search for low-mass weakly interacting massive particles with the CDMS low ionization threshold experiment, Phys. Rev. Lett. 116 (2016) 071301.
• (27) R. Agnese et al. (SuperCDMS Collaboration), Low-mass dark matter search with CDMSlite, Phys. Rev. D 97 (2018) 022002.
• (28) R. Agnese et al. (SuperCDMS Collaboration), First dark matter constraints from SuperCDMS single-charge sensitive detectors, Phys. Rev. Lett. 121 (2018) 051301.
• (29) S. Cebrian, Cosmogenic activation of materials, Int. J. Mod. Phys. A 32 (2017) 1743006.
• (30) E. Armengaud et al. (EDELWEISS Collaboration), Measurement of the cosmogenic activation of germanium detectors in EDELWEISS-III, Astropart. Phys. 91 (2017) 51–64.
• (31) M. B. Chadwick et al., ENDF/B-VII.1 nuclear data for science and technology: Cross sections, covariances, fission product yields and decay data, Nuclear Data Sheets 112 (2011) 2887–2996.
• (32) S. Cebrian et al., Cosmogenic activation in germanium and copper for rare event searches, Astropart. Phys. 33 (2010) 316–329.
• (33) A. J. Koning, S. Hilaire, M. Duijvestijn, TALYS-1.0, in: Proceedings of the International Conference on Nuclear Data for Science and Technology, April 22-27, 2007, EDP Sciences, Nice, France, 2008, pp. 211–214.
• (34) D.-M. Mei, Z.-B. Yin, S. Elliott, Cosmogenic production as a background in searching for rare physics processes, Astropart. Phys. 31 (2009) 417–420.
• (35) J. Amare et al., Cosmogenic production of tritium in dark matter detectors, Astropart. Phys. 97 (2018) 96–105. doi:10.1016/j.astropartphys.2017.11.004.
• (36) S. Leray, D. Mancusi, P. Kaitaniemi, J. C. David, A. Boudard, B. Braunn, J. Cugnon, Extension of the Liège Intra Nuclear Cascade model to light ion-induced collisions for medical and space applications, J. Phys.: Conf. Ser. 420 (2013) 012065.
• (37) A. Junghans et al., Projectile-fragment yields as a probe for the collective enhancement in the nuclear level density, Nucl. Phys. A 629 (1998) 635–655.
• (38) J. C. David, Spallation reactions: A successful interplay between modeling and applications, Eur. Phys. J. A 51 (2015) 68.
• (39) F. T.
Avignone III, et al., Theoretical and experimental investigation of cosmogenic radioisotope production in germanium, Nucl. Phys. B (Proc. Suppl.) 28 (1992) 280–285. • (40) D. Lal, B. Peters, Cosmic Ray Produced Radioactivity on the Earth, Springer, Berlin-Heidelberg, 1967. • (41) W. N. Hess, H. W. Patterson, R. Wallace, E. L. Chupp, Cosmic-ray neutron energy spectrum, Phys. Rev. 116 (1959) 445. • (42) J. F. Ziegler, Terrestrial cosmic ray intensities, IBM Journal of Research and Development 42 (1998) 117–140. • (43) M. S. Gordon, et al., Measurement of the flux and energy spectrum of cosmic-ray induced neutrons on the ground, IEEE Trans. Nucl. Sci. 51 (2004) 3427–3434. • (44) R. Silberberg, C. Tsao, Partial cross-sections in high-energy nuclear reactions, and astrophysical applications. II. targets heavier than nickel, Astrophys. J. Suppl 25 (1973) 335–367. • (45) R. Silberberg, C. Tsao, Cross sections for (p, xn) reactions, and astrophysical applications, Astrophys. J. Suppl 35 (1977) 129–136. • (46) R. Silberberg, C. Tsao, J. Letaw, Improved cross section calculations for astrophysical applications, Astrophys. J. Suppl 58 (1985) 873–881. • (47) R. Silberberg, C. Tsao, Spallation processes and nuclear interaction products of cosmic rays, Phys. Rep. 191 (1990) 351–408. • (48) R. Silberberg, C. Tsao, A. Barghouty, Updated partial cross sections of proton-nucleus reactions, Astrophys. J. Suppl 501 (1998) 911–919. • (49) Y. Shubin, V. Lunev, A. Konobeyev, A. Dityuk, MENDL-2P: Proton reaction data library for nuclear activation (medium energy nuclear data library), IAEA-NSD-204. • (50) W.-Z. Wei, D.-M. Mei, C. Zhang, Cosmogenic activation of germanium used for tonne-scale rare event search experiments, Astropart. Phys. 96 (2017) 24–31. • (51) J. L. Ma, et al., Study on cosmogenic activation in germanium detectors for future tonne-scale CDEX experiment (Accepted for publication by Sci. China Phys. Mech.)arXiv:1802.09327. • (52) H. Klapdor-Kleingrothaus, et al., GENIUS-TF: a test facility for the GENIUS project, Nucl. Instrum. Meth. A 481 (2002) 149–159. • (53) I. Barabanov, et al., Cosmogenic activation of germanium and its reduction for low background experiments, Nucl. Instrum. Meth. B 251 (2006) 115–120. • (54) J. J. Back, Y. A. Ramachers, ACTIVIA: Calculation of isotope production cross-sections and yields, Nucl. Instrum. Meth. A 586 (2008) 286–294. • (55) C. Zhang, et al., Cosmogenic activation of materials used in rare event search experiments, Astropart. Phys. 84 (2016) 62–69. • (56) U. Tippawan, et al., Light-ion production in the interaction of neutrons with silicon, Phys. Rev. C 69 (2004) 064609. • (57) S. M. Qaim, R. Wölfle, Triton emission in the interactions of fast neutrons with nuclei, Nucl. Phys. A 295 (1978) 150–162. • (58) S. Benck, I. Slypen, J.-P. Meulders, V. Corcalciuc, Secondary light charged particle emission from the interaction of 25- to 65-MeV neutrons on silicon, Nucl. Sci. Eng. 141 (2002) 55–65. • (59) S. Cebrian, et al., Status of the non-cryogenic dark matter searches at the canfranc underground laboratory, Nucl. Phys. B (Proc. Suppl.) 138 (2005) 147–149. • (60) R. Michel, et al., Excitation functions for the production of radionuclides by neutron-induced reactions on C, O, Mg, Al, Si, Fe, Co, Ni, Cu, Ag, Te, Pb, and U up to 180 MeV, Nucl. Instrum. Meth. B 43 (2015) 30 – 43. • (61) V. Blideanu, et al., Nucleon-induced reactions at intermediate energies: New data at and theoretical status, Phys. Rev. C 70 (2004) 014607. • (62) S. 
Charalambus, Nuclear transmutation by negative stopped muons and the activity induced by the cosmic-ray muons, Nucl. Phys. A 166 (1971) 145–161. • (63) A. Wyttenbach, et al., Probabilities of muon induced nuclear reactions involving charged particle emission, Nucl. Phys. A 294 (1978) 278–292. • (64) E. Schönfeld, Calculation of fractional electron capture probabilities, Appl. Radiat. Isot. 49 (1998) 1353–1357. • (65) R. Underwood, Rejecting outer radius backgrounds in SuperCDMS high voltage dark matter searches, Master’s thesis, Queen’s University (2016). • (66) S. Agostinelli, et al., Geant4 – a simulation toolkit, Nucl. Instrum. Methods Phys. Res. Sect. A 506 (2003) 250–303. • (67) J. Allison, et al., Geant4 developments and applications, IEEE Transactions on Nuclear Science 53 (2006) 270. • (68) J. Allison, et al., Recent developments in Geant4, Nucl. Instrum. Meth. A 835 (2016) 186. • (69) V. Ivanchenko, et al., Recent improvements in Geant4 electromagnetic physics models and interfaces, Progress in Nuclear Science and Technology 2 (2011) 898–903. • (70) I. Chakravarti, R. Laha, J. Roy, Handbook of Methods of Applied Statistics, Vol. 1, John Wiley and Sons, 1967, pp. 392–394. • (71) K. S. Krane, Introductory Nuclear Physics, Wiley, 1988. • (72) B. Povh, K. Rith, C. Scholz, F. Zetsche, M. Lavelle, Particles and Nuclei, Springer, 2008. • (73) J. Brown, M. Gimmick, J. Gillam, D. Paganin, A low energy bound atomic electron compton scattering model for Geant4, Nuc. Inst. Sec. B 338 (2014) 77–88. • (74) D. Barker for the SuperCDMS Collaboration, Low energy background spectrum in CDMSlite, ICHEP 874. arXiv:1611.05792. • (75) R. Agnese et al. (SuperCDMS Collaboration), Maximum likelihood analysis of low energy CDMS II germanium data, Phys. Rev. D 91 (2015) 052021. • (76) A. Da Silva, Development of a low background environment for the cryogenic dark matter search, Ph.D. thesis, University of British Columbia (1996).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9003866314888, "perplexity": 2144.2662409230247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153392.43/warc/CC-MAIN-20210727135323-20210727165323-00177.warc.gz"}
https://collaborate.princeton.edu/en/publications/studies-of-high-pressure-13-butadiene-flame-speeds-and-high-tempe
# Studies of high pressure 1,3-butadiene flame speeds and high temperature kinetics using hydrogen and oxygen sensitization

Hao Zhao, Zunhua Zhang, Yacine Rezgui, Ningbo Zhao, Yiguang Ju

Research output: Contribution to journal › Article › peer-review (11 Scopus citations)

## Abstract

The high pressure flame speeds and high temperature kinetics of 1,3-butadiene mixtures are studied by using spherical flames at fuel rich and lean conditions at 1–18 atm with H2 and O2 additions. H2 and O2 are added to the mixture to perturb the concentrations of H/O radicals and the flame speed sensitivity to 1,3-butadiene + O/H reactions. The presently measured 1,3-butadiene/air flame speeds agree with previous studies at ambient pressure, but are lower at 5 atm. Comparison between the new experimental data and the prediction by the recently developed 1,3-butadiene model (Zhou et al., 2018) shows over-prediction of the flame speeds, especially at high pressure and fuel lean conditions. With H2 and O2 sensitization, the discrepancy between the model prediction and experiments becomes even larger. Sensitivity analysis shows that the flame speed is very sensitive to 1,3-butadiene + O/H reactions, especially to the branching and termination reaction ratio of 1,3-C4H6 + O = C2H3 + CH2CHO (R1) and 1,3-C4H6 + O = CH2O + C3H4-a (R2). The sensitivity of the flame speed to the branching ratio of these two reactions increases with the increase of H2 and O2 enrichment. Due to the lack of accurate theoretical calculations of these two reaction rates, the new flame speed data and previously measured ignition delay times were used to assess the uncertainty of the branching ratio and reaction rates of R1 and R2. It shows that the optimized branching ratio significantly improves the flame speed and ignition delay time predictions, especially at high pressure and fuel lean conditions. The present study reveals that H2 and O2 sensitization in flames provides an important way to identify the uncertainties of fuel + O/H reactions and to improve the model predictability for flames.

Original language: English (US)
Pages: 135-141 (7 pages)
Journal: Combustion and Flame
Volume: 200
DOI: https://doi.org/10.1016/j.combustflame.2018.11.018
State: Published - Feb 2019

## All Science Journal Classification (ASJC) codes

• Chemistry(all)
• Chemical Engineering(all)
• Fuel Technology
• Energy Engineering and Power Technology
• Physics and Astronomy(all)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.861342191696167, "perplexity": 4528.818052880939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499890.39/warc/CC-MAIN-20230131190543-20230131220543-00516.warc.gz"}
http://www.dummies.com/how-to/content/how-to-determine-harmonic-oscillator-eigenstates-o.navId-817347.html
In quantum physics, when you have the eigenstates of a system, you can determine the allowable states of the system and the relative probability that the system will be in any of those states.

The commutator of operators $A$, $B$ is $[A, B] = AB - BA$, so note that the commutator of $a$ and $a^\dagger$ is the following:

$$[a, a^\dagger] = a a^\dagger - a^\dagger a$$

This is equal to the following:

$$[a, a^\dagger] = \frac{m\omega}{2\hbar}\left[x + \frac{ip}{m\omega},\; x - \frac{ip}{m\omega}\right] = \frac{i}{\hbar}[p, x]$$

This equation breaks down to

$$[a, a^\dagger] = \frac{i}{\hbar}(-i\hbar) = 1.$$

And putting together this equation with the Hamiltonian,

$$H = \hbar\omega\left(a^\dagger a + \tfrac{1}{2}\right).$$

Okay, with the commutator relations, you're ready to go. The first question is: if the energy of state $|n\rangle$ is $E_n$, what is the energy of the state $a|n\rangle$? Well, to find this, rearrange the commutator:

$$a^\dagger a = a a^\dagger - 1.$$

Then use this to write the action of $H$ on $a|n\rangle$ like this:

$$H\,a|n\rangle = \hbar\omega\left(a^\dagger a + \tfrac{1}{2}\right)a|n\rangle = a\,\hbar\omega\left(a^\dagger a - \tfrac{1}{2}\right)|n\rangle = a\,(H - \hbar\omega)|n\rangle = (E_n - \hbar\omega)\,a|n\rangle.$$

So $a|n\rangle$ is also an eigenstate of the harmonic oscillator, with energy $E_n - \hbar\omega$, not $E_n$. That's why $a$ is called the annihilation or lowering operator: it lowers the energy level of a harmonic oscillator eigenstate by one level.

So what's the energy level of $a^\dagger|n\rangle$? You can write that like this:

$$H\,a^\dagger|n\rangle = (E_n + \hbar\omega)\,a^\dagger|n\rangle.$$

All this means that $a^\dagger|n\rangle$ is an eigenstate of the harmonic oscillator, with energy $E_n + \hbar\omega$, not just $E_n$; that is, $a^\dagger$ raises the energy level of an eigenstate of the harmonic oscillator by one level.

So now you know that

$$a|n\rangle = C\,|n-1\rangle, \qquad a^\dagger|n\rangle = D\,|n+1\rangle.$$

$C$ and $D$ are positive constants, but what do they equal? The states $|n-1\rangle$ and $|n+1\rangle$ have to be normalized, which means that $\langle n-1|n-1\rangle = \langle n+1|n+1\rangle = 1$. So take a look at the quantity $\langle n|a^\dagger a|n\rangle$ using the constant $C$:

$$\langle n|a^\dagger a|n\rangle = C^2\,\langle n-1|n-1\rangle.$$

And because $|n-1\rangle$ is normalized, $\langle n-1|n-1\rangle = 1$:

$$\langle n|a^\dagger a|n\rangle = C^2.$$

But you also know that $N = a^\dagger a$ is the energy level (number) operator, so you get the following equation:

$$\langle n|N|n\rangle = C^2.$$

$N|n\rangle = n|n\rangle$, where $n$ is the energy level, so

$$n\,\langle n|n\rangle = C^2.$$

However, $\langle n|n\rangle = 1$, so

$$C = \sqrt{n}.$$

This finally tells you, from $a|n\rangle = C|n-1\rangle$, that

$$a|n\rangle = \sqrt{n}\,|n-1\rangle.$$

That's cool: now you know how to use the lowering operator, $a$, on eigenstates of the harmonic oscillator.
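If you like checking this kind of operator algebra numerically, here's a minimal sketch (our own illustration, not from the original article) that builds $a$ as a matrix in a truncated number basis and verifies both $[a, a^\dagger] = 1$ (away from the truncation level) and $a|n\rangle = \sqrt{n}\,|n-1\rangle$:

```python
import numpy as np

M = 8  # truncate the Fock space at 8 levels
# lowering operator a in the number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, M)), k=1)
adag = a.conj().T

# the commutator [a, a†] should be the identity, except at the top
# level, where the truncation spoils it
comm = a @ adag - adag @ a
print(np.round(comm[:M-1, :M-1]))  # identity on the untruncated levels

# check a|n> = sqrt(n)|n-1> for n = 3
n = 3
ket = np.zeros(M); ket[n] = 1.0
out = a @ ket
print(out[n-1], np.sqrt(n))  # both print 1.7320...
```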
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8483536839485168, "perplexity": 700.3180290344054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404382.73/warc/CC-MAIN-20160624155004-00055-ip-10-164-35-72.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-statistics/39659-support-distribution.html
# Math Help - Support of a Distribution

1. ## Support of a Distribution

Can someone define the term support of a distribution with a simple example? Thanks a lot.

2. Originally Posted by vioravis
Can someone define the term support of a distribution with a simple example? Thanks a lot.
Informally, it is the largest set on which the density (or mass function) is nowhere zero.
RonL

3. Thanks, Captain. Is it possible for you to provide a mathematical example for this? I am particularly interested to know whether the densities of two normal distributions with different supports can be multiplied and, if so, what the resultant distribution would be.

4. Originally Posted by vioravis
Thanks, Captain. Is it possible for you to provide a mathematical example for this? I am particularly interested to know whether the densities of two normal distributions with different supports can be multiplied and, if so, what the resultant distribution would be.
What would having different supports mean in the context of the normal distribution? The support is the entire real line, whatever the mean and standard deviation.
RonL

5. Originally Posted by CaptainBlack
What would having different supports mean in the context of the normal distribution?
RonL
Captain, that's what I am trying to understand myself. All I have is a methodology to sample from the product of two distributions (obtained from a professor) for a problem that I am working on, and one of the necessary conditions to use this methodology is that the two distributions under consideration must have the same support. In most cases we would be considering normal distributions for both, and that's why I raised the question in the first place. Thanks.

6. Originally Posted by vioravis
Captain, that's what I am trying to understand myself. All I have is a methodology to sample from the product of two distributions (obtained from a professor) for a problem that I am working on, and one of the necessary conditions to use this methodology is that the two distributions under consideration must have the same support. In most cases we would be considering normal distributions for both, and that's why I raised the question in the first place. Thanks.
Well, if the methodology is for general distributions but you have two normals, you should not need to worry, as they automatically have the same support.
RonL

7. Captain, do you mean to say that if we are considering different distributions the support would be different, or otherwise it is always the same? I am still not able to understand the concept of support. A mathematical example would really help. Thanks a lot.

8. Originally Posted by vioravis
Captain, do you mean to say that if we are considering different distributions the support would be different, or otherwise it is always the same? I am still not able to understand the concept of support. A mathematical example would really help. Thanks a lot.
The method he gives may require that the distributions have the same support, but as you are using normals they do have the same support, and so you don't have to worry.
The uniform distribution on $[0,1]$ has the interval $[0,1]$ as its support, the exponential distribution has support $[0,\infty)$, and the binomial distribution $B(N,p)$ has support $\{0, 1, \dots, N\}$.
RonL

9. Captain, thanks a lot. Is this true in the case of the multivariate normal also?

10. Originally Posted by vioravis
Captain, thanks a lot. Is this true in the case of the multivariate normal also?
If you're familiar with the multivariate normal distribution you should suspect what the answer to this question is going to be .....

11. Fantastic, thanks. But the answer is not too obvious to me. That is why I am asking. Has it got anything to do with the number of variables in each of the distributions?

12. Originally Posted by vioravis
Fantastic, thanks. But the answer is not too obvious to me. That is why I am asking. Has it got anything to do with the number of variables in each of the distributions?
No. Two things:
1. Recall the definition of support given by CaptainB.
2. Look at the pdf for a multivariate normal.
(The answer to the previous question is yes, the supports are the same in the multivariate normal case).

13. Fantastic, I understand that if the two multivariate normals have the same number of variables, as in X1' = [X1, X2, X3] and X2' = [X1, X2, X3]. However, I am not able to see how it can extend to the following case:
1. X1' = [X1, X2, X3] and X2' = [X1, X2, X3, X4] - there is an additional variable.
Any help would be appreciated. Thanks a lot.

14. Originally Posted by vioravis
Fantastic, I understand that if the two multivariate normals have the same number of variables, as in X1' = [X1, X2, X3] and X2' = [X1, X2, X3]. However, I am not able to see how it can extend to the following case:
1. X1' = [X1, X2, X3] and X2' = [X1, X2, X3, X4] - there is an additional variable.
Any help would be appreciated. Thanks a lot.
Sorry, I misunderstood. I think that you'd need the dimensions to match. So I recant on my no. But ....... It's probably best to wait and see what CaptainB says as I'm not on solid ground here.

15. Thanks, fantastic. I will wait for Captain's answer.
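[Editor's note: the supports CaptainBlack lists above are easy to inspect programmatically; the following is a small sketch using SciPy's stats module, whose `.support()` method exists in recent SciPy versions (1.2 and later).]

```python
from scipy import stats

# .support() returns the interval on which the density/mass is non-zero
print(stats.uniform.support())         # (0.0, 1.0)
print(stats.expon.support())           # (0.0, inf)
print(stats.norm(5, 2).support())      # (-inf, inf): same for any mean/sd
print(stats.binom(10, 0.3).support())  # (0, 10), i.e. the set {0, 1, ..., 10}
```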
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8771182298660278, "perplexity": 262.5285428455163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769373.55/warc/CC-MAIN-20141217075249-00152-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.zora.uzh.ch/id/eprint/121573/
# Hadronic Higgs production through NLO + PS in the SM, the 2HDM and the MSSM

Mantler, Hendrik; Wiesemann, Marius (2015). Hadronic Higgs production through NLO + PS in the SM, the 2HDM and the MSSM. European Physical Journal C - Particles and Fields, 75:257.

## Abstract

The next-to-leading order (NLO) cross section of the gluon fusion process is matched to parton showers in the MC@NLO approach. We work in the framework of MadGraph5_aMC@NLO and document the inclusion of the full quark-mass dependence in the Standard Model (SM) as well as the state-of-the-art squark and gluino effects within the Minimal Supersymmetric SM embodied in the program SusHi. The combination of the two programs is realized by a script which is publicly available and whose usage is detailed. We discuss the input cards and the relevant parameter switches. One of our focuses is on the shower scale, which is specifically important for gluon-induced Higgs production, particularly in models with enhanced Higgs-bottom Yukawa coupling.

Item Type: Journal Article, refereed, original work
Faculty: 07 Faculty of Science > Physics Institute
Dewey Decimal Classification: 530 Physics
Language: English
Date: 2015
Publisher: Springer
ISSN: 1434-6044
Open Access: Gold
DOI: https://doi.org/10.1140/epjc/s10052-015-3462-1
PubMed ID: 26097409
arXiv: arXiv:1504.06625v1
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.817990243434906, "perplexity": 3760.525286404979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515088.88/warc/CC-MAIN-20181022134402-20181022155902-00335.warc.gz"}
http://m.intmath.com/laplace-transformation/intro.php
# The Laplace Transformation

Pierre-Simon Laplace (1749-1827)

Laplace was a French mathematician, astronomer, and physicist who applied the Newtonian theory of gravitation to the solar system (an important problem of his day). He played a leading role in the development of the metric system.

The Laplace Transform is widely used in engineering applications (mechanical and electronic), especially where the driving force is discontinuous. It is also used in process control.
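As a small illustration of the "discontinuous driving force" use case, here is a hedged sketch using SymPy's `laplace_transform` (both it and `Heaviside` are standard SymPy objects; the specific switch-on time of 2 seconds is an arbitrary choice for the example):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
# transform of a unit step that switches on at t = 2 (a discontinuous drive)
f = sp.Heaviside(t - 2)
F = sp.laplace_transform(f, t, s, noconds=True)
print(F)  # exp(-2*s)/s
```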
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9129674434661865, "perplexity": 843.3818526504759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188924.7/warc/CC-MAIN-20170322212948-00290-ip-10-233-31-227.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/147567/prove-that-the-order-of-an-element-in-the-group-n-is-the-lcmorder-of-the-elemen?answertab=votes
# Prove that the order of an element in the group N is the lcm(order of the element in N's factors p and q)

How would you prove that $$\operatorname{ord}_N(\alpha) = \operatorname{lcm}(\operatorname{ord}_p(\alpha),\operatorname{ord}_q(\alpha))$$ where $N=pq$ ($p$ and $q$ are distinct primes) and $\alpha \in \mathbb{Z}^*_N$?

I've got this: the order of an element $\alpha$ of a group is the smallest positive integer $m$ such that $\alpha^m = e$, where $e$ denotes the identity element. And I guess that the right side has to be the $\operatorname{lcm}()$ of the orders from $p$ and $q$ because they are relatively prime to each other. But I can't put it together; any help would be appreciated!

- Ehr, $p$ and $q$ are not relatively prime to $N$: they divide $N$. They are relatively prime to each other, but not to $N$. – Arturo Magidin May 20 '12 at 23:43
- Oops, I guess I was tired when I wrote it. Of course they divide $N$ and are relatively prime to each other. – Sup3rgnu May 21 '12 at 21:19

Hint. There are natural maps $\mathbb{Z}^*_N\to\mathbb{Z}^*_p$ and $\mathbb{Z}^*_N\to\mathbb{Z}^*_q$ given by reduction modulo $p$ and reduction modulo $q$. This gives you a homomorphism $\mathbb{Z}^*_N\to \mathbb{Z}^*_p\times\mathbb{Z}^*_q$. What is the kernel of the map into the product? What is the order of an element $(x,y)$ in the product?

Hint $\rm\ \ pq\:|\:a^n\!-\!1\iff p,q\:|\:a^n\!-\!1\iff ord_p a, ord_q a\:|\:n\iff lcm(ord_p a, ord_q a)\:|\: n$
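The identity in the hints is easy to sanity-check by brute force; here is a small sketch (the helper `order()` and the primes 7 and 11 are our own arbitrary choices; `math.lcm` requires Python 3.9+):

```python
from math import gcd, lcm

def order(a, m):
    # multiplicative order of a modulo m (assumes gcd(a, m) == 1)
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

p, q = 7, 11
N = p * q
for a in range(2, N):
    if gcd(a, N) == 1:
        assert order(a, N) == lcm(order(a, p), order(a, q))
print("ord_N(a) = lcm(ord_p(a), ord_q(a)) holds for all a in Z_N^*")
```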
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8203270435333252, "perplexity": 109.69882473146434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111809.10/warc/CC-MAIN-20160428161511-00139-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/rope-swinging-with-periodic-radius-changes.775050/
# Rope swinging with periodic radius changes

1. Oct 8, 2014

### David Carroll

If one simply swings a rope with an object tied to the end of it, the object describes a circle. But if one were to create a contraption that caused the radius of the rope to periodically decrease 4 times every revolution, one could cause the path of the object to describe a square. My question is, why is the object's acceleration not infinite once it makes the 90 degree angle?

2. Oct 8, 2014

### Staff: Mentor

For a perfect square and with a finite velocity of the object, acceleration "is" infinite at the corners. And you would need an infinite force.

3. Oct 8, 2014

### David Carroll

So the above experiment would be impossible?

4. Oct 8, 2014

### jbriggs444

For a centrally-directed force such as a rope, yes, it would be impossible to make the corners of the path perfectly square. The loophole that mfb left open ("finite velocity") could allow for a finite force if the velocity at the corners were zero. But with a centrally-directed force, the resulting trajectory cannot be a square of non-zero size. It would have to be a line headed straight toward the center. The remaining loophole, a square of zero size, is probably not what you had in mind.

5. Oct 8, 2014

### David Carroll

If the period of radius contraction were perfectly continuous, then the velocity at the corners couldn't be zero, could it? In other words, the centrally-directed force is a reel which reels in the rope slightly 4 times per revolution in such a way as to create a square trajectory for a ball tied to the other end of the rope. And if the reel were connected to a perfectly timed motor, then the reeling would be smooth and continuous. But if that's the case, how could the velocity of the ball be zero at the corners? Wouldn't a motorized reel using a constant force result in constant velocity for the ball at the end of the rope?

Oh, wait a minute..... when the motorized reel has extended the rope to its upper limit, the reel itself has reached zero velocity, because it cannot go from reeling out to reeling in in zero time. Otherwise the reel itself would have infinite acceleration. I just answered my own question!

6. Oct 9, 2014

### Staff: Mentor

If your force is central, then angular momentum is conserved and not zero, so the velocity can never get zero.

7. Oct 9, 2014

### David Carroll

The velocity of the reel would be zero. The reel has an axis that is perpendicular to another contraption that is spinning that reel. That contraption would have a constant angular momentum. But the axis of the reel itself, qua reel, would change momentum once it ceased to be reeling out and started reeling in.
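[Editor's note: mfb's point in post 2 can be seen numerically. The sketch below is our own, not from the thread; it assumes the square path is swept at a constant angular rate ω (any smooth sweep gives the same conclusion) and estimates the acceleration by finite differences. The maximum acceleration grows without bound as the time resolution increases, because the velocity direction jumps by 90 degrees at each corner.]

```python
import numpy as np

d, omega = 1.0, 1.0  # half side length of the square, constant angular rate

def xy(theta):
    # point of the square path at polar angle theta: fold theta into
    # [-pi/4, pi/4) relative to the nearest side, where r = d / cos(angle)
    a = np.mod(theta + np.pi/4, np.pi/2) - np.pi/4
    r = d / np.cos(a)
    return np.array([r * np.cos(theta), r * np.sin(theta)])

for n in (10**3, 10**4, 10**5):
    t = np.linspace(0.0, np.pi / (2 * omega), n)   # sweep through one corner
    p = np.array([xy(omega * ti) for ti in t])
    v = np.gradient(p, t, axis=0)
    acc = np.gradient(v, t, axis=0)
    print(n, np.linalg.norm(acc, axis=1).max())    # grows roughly like n
```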
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8971661329269409, "perplexity": 1156.3396514079293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886939.10/warc/CC-MAIN-20180117122304-20180117142304-00796.warc.gz"}
https://www.semanticscholar.org/paper/A-globally-convergent-proximal-Newton-type-method-Mordukhovich-Yuan/b092ad424f81754973c3348b783f3bc4c7c4a804
# A globally convergent proximal Newton-type method in nonsmooth convex optimization

@article{Mordukhovich2022AGC,
title={A globally convergent proximal Newton-type method in nonsmooth convex optimization},
author={Boris S. Mordukhovich and Xiaoming Yuan and Shangzhi Zeng and Jin Zhang},
journal={Mathematical Programming},
year={2022}
}

The paper proposes and justifies a new algorithm of the proximal Newton type to solve a broad class of nonsmooth composite convex optimization problems without strong convexity assumptions. Based on advanced notions and techniques of variational analysis, we establish implementable results on the global convergence of the proposed algorithm as well as its local convergence with superlinear and quadratic rates. For certain structural problems, the obtained local convergence conditions do not…
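The paper's algorithm itself rests on tools of variational analysis not reproduced here, but the flavor of a proximal Newton-type step can be seen in its simplest special case, where the Hessian model is a multiple of the identity; that reduces to the classical proximal gradient (ISTA) iteration for the lasso. The sketch below is that special case only, not the method of the paper:

```python
import numpy as np

def soft_threshold(x, tau):
    # proximal mapping of tau * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_gradient_lasso(A, b, lam, step, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1.

    Proximal Newton step with the Hessian replaced by (1/step) * I,
    i.e. plain proximal gradient (ISTA)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
b = rng.standard_normal(40)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth part
print(prox_gradient_lasso(A, b, lam=1.0, step=step))
```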
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9346172213554382, "perplexity": 1327.4296929473917}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571056.58/warc/CC-MAIN-20220809155137-20220809185137-00157.warc.gz"}
https://researchportal.unamur.be/en/publications/scaling-law-in-the-standard-map-critical-function-interpolating-h
# Scaling law in the standard map critical function. Interpolating Hamiltonian and frequency map analysis

Research output: Contribution to journal › Article › peer-review

## Abstract

We study the behaviour of the standard map critical function in a neighbourhood of a fixed resonance, that is, the scaling law at the fixed resonance. We prove that for the fundamental resonance the scaling law is linear. We show numerical evidence that for the other resonances $p/q$, with $q > 1$, $p \neq 0$, and $p$ and $q$ relatively prime, the scaling law follows a power law with exponent $1/q$.

Original language: English
Pages: 2033-2061 (29 pages)
Journal: Nonlinearity
Volume: 13
State: Published - 2000

## Keywords

• Bruno number
• Critical function
• Frequency map analysis
• standard map
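For readers who have not seen it, the (Chirikov) standard map studied in this line of work is an area-preserving map of the cylinder; the sketch below uses one common normalization (conventions for the factors of 2π vary, so treat it as illustrative rather than as the paper's exact convention):

```python
import numpy as np

def standard_map(x, y, K, n):
    """Iterate the Chirikov standard map n times.

    One common normalization:
        y' = y + (K / 2*pi) * sin(2*pi*x),   x' = (x + y') mod 1
    """
    for _ in range(n):
        y = y + K / (2 * np.pi) * np.sin(2 * np.pi * x)
        x = (x + y) % 1.0
    return x, y

# a short orbit near the numerically observed critical coupling K ~ 0.9716
print(standard_map(0.1, 0.2, 0.9716, 5))
```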
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9792159795761108, "perplexity": 2961.0853262527125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.91/warc/CC-MAIN-20211020055136-20211020085136-00667.warc.gz"}
https://scholars.ttu.edu/en/publications/transient-penetration-of-a-viscoelastic-fluid-in-a-narrow-capilla-15
# Transient penetration of a viscoelastic fluid in a narrow capillary channel

Udugama R. Sumanasekara, Martin N. Azese, Sukalyan Bhattacharya

Research output: Contribution to journal › Article › peer-review (6 Scopus citations)

## Abstract

This article describes an unexplored transport phenomenon where a mildly viscoelastic medium encroaches a narrow capillary channel under the action of surface-tension force. The ultimate goal of the study is to provide the penetration length and the intrusion rate of the liquid as functions of time. The resulting analysis would be instrumental in building an inexpensive and convenient rheometric device which can measure the temporal scale for viscoelastic relaxation from the stored data of the aforementioned quantities. The key step in the formulation is a transient eigenfunction expansion of the instantaneous velocity profile. The time-dependent amplitude of the expansion as well as the intruded length are governed by a system of integro-differential relations which are derived by exploiting the mass and momentum conservation principles. The obtained integro-differential equations are simultaneously solved by using a fourth-order Runge-Kutta method assuming a start-up problem from rest. The resulting numerical solution properly represents the predominantly one-dimensional flow which gradually slows down after an initial acceleration and subsequent oscillation. The computational findings are independently verified by two separate perturbation theories. The first of these is based on a Weissenberg number expansion revealing the departure in the unsteady imbibition due to small but finite viscoelasticity. In contrast, the second one explains the long-time behaviour of the system by analytically predicting the decay features of the dynamics. These asymptotic results unequivocally corroborate the simulation inferring the accuracy of the numerics as well as the utility of the simplified mathematical models.

Original language: English
Pages: 528-552 (25 pages)
Journal: Journal of Fluid Mechanics
Volume: 830
DOI: https://doi.org/10.1017/jfm.2017.576
State: Published - Nov 10 2017

## Keywords

• capillary flows
• non-Newtonian flows
• viscoelasticity
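The abstract mentions solving the governing integro-differential system with a fourth-order Runge-Kutta method starting from rest. The paper's actual system is not reproduced here; the following is only a generic sketch of a classical RK4 step, applied to a toy relaxation equation with zero initial condition to mimic the start-up-from-rest setting:

```python
import numpy as np

def rk4_step(f, t, y, h):
    # one classical fourth-order Runge-Kutta step for y' = f(t, y)
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# toy stand-in for the paper's system: y' = -y + 1, y(0) = 0 (start from rest)
f = lambda t, y: -y + 1.0
y, t, h = np.array([0.0]), 0.0, 0.01
for _ in range(100):
    y = rk4_step(f, t, y, h)
    t += h
print(t, y)  # matches the exact solution 1 - exp(-t)
```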
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8516845107078552, "perplexity": 1205.2984643234765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039526421.82/warc/CC-MAIN-20210421065303-20210421095303-00340.warc.gz"}
https://www.groundai.com/project/toric-varieties-vs-horofunction-compactifications-of-polyhedral-norms/
# Toric Varieties vs. Horofunction Compactifications of Polyhedral Norms

## Abstract.

We establish a natural and geometric 1-1 correspondence between projective toric varieties of dimension $n$ and horofunction compactifications of $\mathbb{R}^n$ with respect to rational polyhedral norms. For this purpose, we explain a topological model of toric varieties. Consequently, toric varieties in algebraic geometry, normed spaces in convex analysis, and horofunction compactifications in metric geometry are directly and explicitly related.

The first author acknowledges support from NSF grants DMS 1107452, 1107263, 1107367 GEometric structures And Representation varieties (the GEAR Network) and partial support from Simons Fellowship (grant #305526) and the Simons grant #353785. The second author was partially supported by the European Research Council under ERC-Consolidator grant 614733, and by the German Research Foundation in the RTG 2229 Asymptotic Invariants and Limits of Groups and Spaces.

## 1. Introduction

In this paper we give a correspondence between the three seemingly different concepts of toric varieties, horofunction compactifications and polyhedral norms. Toric varieties provide a basic class of algebraic varieties which are relatively simple. The nonnegative part and the moment map of toric varieties are essential ingredients of the rich structure of toric varieties. The horofunction compactification of metric spaces is a general method to construct compactifications of metric spaces introduced by Gromov [Gr1, §1.2] in 1981 (see §4 below). Finally, polyhedral norms on $\mathbb{R}^n$ give a special class of normed linear spaces (or Minkowski spaces [Th]) and metric spaces (see §3 below). In this paper we establish an explicit geometric connection between projective toric varieties of dimension $n$ and horofunction compactifications of $\mathbb{R}^n$ with respect to rational polyhedral norms.

###### Theorem 1.1.

In every dimension $n$, there exists a bijective correspondence between projective toric varieties $X_\Sigma$ of dimension $n$ and rational polyhedral norms $\|\cdot\|$ on $\mathbb{R}^n$ up to scaling such that:

1. The nonnegative part of a projective toric variety $X_\Sigma$ is homeomorphic to the horofunction compactification of $\mathbb{R}^n$ with respect to the distance induced by the corresponding polyhedral norm $\|\cdot\|$.
2. Equivalently, the image of the moment map of the toric variety $X_\Sigma$ is homeomorphic to the horofunction compactification of $\mathbb{R}^n$ with respect to the distance induced by the corresponding polyhedral norm $\|\cdot\|$.

This correspondence is canonical and given as follows: the unit ball of a rational polyhedral norm $\|\cdot\|$ is a rational convex polytope $P$ in $\mathbb{R}^n$ which contains the origin as an interior point, which in turn gives a fan $\Sigma_P$ in $\mathbb{R}^n$ by taking cones over the faces of $P$, and hence gives a toric variety $X_{\Sigma_P}$. Note that the fan does not change when the polytope is scaled, and hence the correspondence is up to scaling on the polyhedral norms $\|\cdot\|$. This result adds another perspective on the close relations between integral convex polytopes and toric projective varieties; for a detailed description see [Od1, Chap 2]. Theorem 1.1 implies the following

###### Corollary 1.2.

Let $\|\cdot\|$ be a rational polyhedral norm on $\mathbb{R}^n$, and $P$ its unit ball. Let $P^\circ$ be the polar set of $P$ (that is, $P^\circ = \{y \in \mathbb{R}^n \mid \langle x, y \rangle \leq 1 \text{ for all } x \in P\}$), a polytope dual to $P$. Then the horofunction compactification of $\mathbb{R}^n$ with respect to $\|\cdot\|$ is homeomorphic to $P^\circ$. This gives a bounded realization of the horofunction compactification of $\mathbb{R}^n$.
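As a concrete illustration of the metric side (this example is ours and not taken from the paper): for the $\ell^1$ norm on $\mathbb{R}^2$, a polyhedral norm whose unit ball is a square, the normalized distance functions $h_y(x) = \|x - y\|_1 - \|y\|_1$ stabilize as $y$ runs to infinity along a fixed direction, and the resulting limits are the horofunctions. The sketch below shows the limit $-x_1 + |x_2|$ emerging along the positive $x_1$-axis; since the norm is polyhedral, the value stabilizes already at finite $t$:

```python
import numpy as np

def h(x, y):
    # normalized distance function whose limits along rays are horofunctions
    return np.abs(x - y).sum() - np.abs(y).sum()

x = np.array([0.7, -1.3])
for t in (10.0, 100.0, 1000.0):
    y = t * np.array([1.0, 0.0])   # go to infinity along the x1-axis
    print(t, h(x, y))              # always -x1 + |x2| = 0.6 once t > x1
```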
The same result holds for the horofunction compactification of any polyhedral norm on $\mathbb{R}^n$, whether it is rational or not, and is proven in [JS, Theorem 1.2].

It is well-known that algebro-geometric and cohomology properties of toric varieties $X_\Sigma$ are determined by combinatorial and convex properties of their fans $\Sigma$ (see [Fu] [CLS] [Od1]). Consequently, the existence of a correspondence between toric varieties and polyhedral norms is not surprising. But it is probably not obvious that there exists such a direct connection between horofunction compactifications of $\mathbb{R}^n$ in metric geometry and important parts of toric varieties $X_\Sigma$: the nonnegative part and the image of the moment map of $X_\Sigma$.

The correspondence in Theorem 1.1 can give numerical invariants of toric varieties. In the statement of Theorem 1.1, we have fixed the standard integral structure $\mathbb{Z}^n$ of $\mathbb{R}^n$ when we discuss toric varieties and the rationality of polyhedral norms. Consequently, by requiring the standard basis of $\mathbb{R}^n$ to be unit vectors, we can also fix the standard Euclidean metric on $\mathbb{R}^n$. Though scaling of the polyhedral norm does change its unit ball, i.e., the polytope, it does not change the fan induced from it. On the other hand, we can use the following canonical normalization of polyhedral norms: every vertex of the unit ball $P$ of $\|\cdot\|$ is integral, and one of them is primitive. With this normalization, by Theorem 1.1, each projective toric variety gives a unique polyhedral norm $\|\cdot\|$ and its unit ball $P$, which is a polytope in $\mathbb{R}^n$. Besides computing the volume of $P$ with respect to the standard Euclidean metric on $\mathbb{R}^n$, we can also compute the volume of $P$ with respect to a suitable notion of volume induced from the norm $\|\cdot\|$. According to [AT], there are four commonly used definitions of volumes on normed vector spaces: Busemann volume, Holmes-Thompson volume, Gromov volume, and Benson volume. Consequently, we obtain the following corollary:

###### Corollary 1.3.

Choose one of the four volumes mentioned above; for each projective toric variety $X_\Sigma$, there is a canonical number given by the volume of the unit ball $P$ of the normalized polyhedral norm corresponding to the toric variety $X_\Sigma$.

One natural question is the meaning of such volumes for toric varieties. Note that if we use the standard Euclidean metric on $\mathbb{R}^n$, then the volume of the convex polytope is related to the implicit degree of the projective toric variety $X_\Sigma$. See [So, §5]. The correspondence in Theorem 1.1 also raises the question of how to understand toric varieties by using metric geometry.

Horofunction compactification and noncommutative geometry

Before we explain some detailed definitions of toric varieties and horofunction compactifications in later sections, we point out a connection between horofunction compactifications of normed vector spaces and reduced $C^*$-algebras of discrete groups and consequently the noncommutative geometry. After the horofunction compactification of a proper metric space was introduced by Gromov [Gr1, §1.2] in 1981, the horofunction compactification of a complete simply connected nonpositively curved manifold was identified with the geodesic compactification in [BGS, §3]. This gives a direct connection between the geometry of geodesics and analysis, or rather a class of special functions, on the manifold. Among nonpositively curved simply connected Riemannian manifolds, horofunctions are difficult to compute except for symmetric spaces of noncompact type (see [Ha] [GJT]). For noncompact locally symmetric spaces of nonpositive curvature, the horofunction compactification was identified in [JM] and [DFS].
It will be seen below that $\mathbb{R}^n$ with polyhedral norms provides another class of spaces for which all horofunctions can be computed [Wa1]. It turned out that the horofunction compactifications of $\mathbb{Z}^n$ with respect to norms are unexpectedly related to the noncommutative geometry developed by Alain Connes (see [Co], [Ri1] and [Ri2]). This brought another perspective to horofunction compactifications and motivated the work in this paper.

Let $\Gamma$ be a countable discrete group such as $\mathbb{Z}^n$. Let $\mathbb{C}\Gamma$ be the convolution $*$-algebra of complex valued functions of finite support, i.e., of compact support, on $\Gamma$. Let $\pi$ be the usual $*$-representation of $\mathbb{C}\Gamma$ on $\ell^2(\Gamma)$. Then the norm-completion of $\pi(\mathbb{C}\Gamma)$ in the space of bounded operators on $\ell^2(\Gamma)$ is the reduced $C^*$-algebra of $\Gamma$, denoted by $C^*_r(\Gamma)$.

Let $\ell$ be a length function of $\Gamma$, i.e., a function $\ell\colon \Gamma \to [0, +\infty)$ such that (1) $\ell(e) = 0$, (2) $\ell(\gamma^{-1}) = \ell(\gamma)$ for all $\gamma \in \Gamma$, (3) $\ell(\gamma_1\gamma_2) \leq \ell(\gamma_1) + \ell(\gamma_2)$ for all $\gamma_1, \gamma_2 \in \Gamma$. For example, the word length on $\Gamma$ with respect to a set of generators gives rise to such a length function. Let $M_\ell$ be the multiplication operator on $\ell^2(\Gamma)$ defined by the length function $\ell$, which is usually unbounded. Then $M_\ell$ serves as a Dirac operator in the noncommutative geometry of $\Gamma$. The following fact is true: for every $f \in \mathbb{C}\Gamma$, the commutator $[M_\ell, \pi(f)]$ is a bounded operator on $\ell^2(\Gamma)$. This allows one to define a semi-norm $L_\ell$ on $\mathbb{C}\Gamma$:

$$L_\ell(f) = \|[M_\ell, \pi(f)]\|.$$

In general, if $L$ is a semi-norm on a dense sub-$*$-algebra $\mathcal{A}$ of a unital $C^*$-algebra $\bar{\mathcal{A}}$ such that $L$ vanishes on scalar multiples of the identity, then Connes [Co] (see [Ri1, p. 606]) defined a metric $\rho_L$ on the state space $S(\bar{\mathcal{A}})$ of $\bar{\mathcal{A}}$ as follows: for any two states $\mu, \nu \in S(\bar{\mathcal{A}})$,

$$\rho_L(\mu,\nu) \coloneqq \sup\{|\mu(a)-\nu(a)| \mid a \in \mathcal{A},\ L(a) \leq 1\}.$$

We recall that a state on a $C^*$-algebra is a positive linear functional of norm 1. The set of all states of a $C^*$-algebra $\bar{\mathcal{A}}$ is denoted by $S(\bar{\mathcal{A}})$, and is a convex subset of the space of linear functionals of $\bar{\mathcal{A}}$. Extreme points of $S(\bar{\mathcal{A}})$ are called pure states of $\bar{\mathcal{A}}$. When $\bar{\mathcal{A}} = C(X)$, the space of continuous functions on a compact topological space $X$, then states on $\bar{\mathcal{A}}$ correspond to probability measures on $X$, and pure states correspond to evaluations at points of $X$.

In [Ri1, p. 606], Rieffel called a semi-norm $L$ on $\mathcal{A}$ a Lip-norm if the topology on $S(\bar{\mathcal{A}})$ induced from $\rho_L$ coincides with the weak $*$-topology, and he called a unital $C^*$-algebra equipped with a Lip-norm a compact quantum metric space.

In [Ri1], Rieffel asked the question: given a discrete group $\Gamma$, is the seminorm $L_\ell$ on $\mathbb{C}\Gamma$ coming from a length function $\ell$ on $\Gamma$ a Lip-norm? He could only handle the case $\Gamma = \mathbb{Z}^n$ and prove the following result:

###### Proposition 1.4 ([Ri1], Thm 0.1).

Let $\ell$ be a length function on $\mathbb{Z}^n$ which is either the word length for some finite generating set or the restriction to $\mathbb{Z}^n$ of some norm on $\mathbb{R}^n$. Then the induced seminorm $L_\ell$ is a Lip-norm, and hence $C^*_r(\mathbb{Z}^n)$ is a compact quantum metric space.

In proving this result, Rieffel made crucial use of horofunction compactifications of $\mathbb{Z}^n$ with respect to norms. In this paper, he also raised the following question ([Ri1, Question 6.5]): is it true that, for every finite-dimensional vector space and every norm on it, every horofunction (i.e., a boundary point of the horofunction compactification of the normed vector space) is a Busemann function, i.e., the limit of an almost-geodesic ray? This question motivated the paper [KMN] and was settled completely in [Wa1]. It also motivated the other papers [Wa2], [Wa3], [Wa5], [AGW], [WW1], [WW2], [An], [De] and [LS] on horofunction compactifications.

## 2. Toric varieties

In this section, we give a summary of several results on toric varieties which are needed to understand and prove Theorem 1.1. The basic references for this section are [Fu], [CLS], [Od1], [Od2], [AM], [Cox], and [So].

###### Definition 2.1.

A toric variety over $\mathbb{C}$ is an irreducible variety $X$ over $\mathbb{C}$ such that
1. the complex torus $(\mathbb{C}^*)^n$ is a Zariski dense subvariety of $X$, and
2. the action of $(\mathbb{C}^*)^n$ on itself by multiplication extends to an action of $(\mathbb{C}^*)^n$ on $X$.

We fix the standard lattice $\mathbb{Z}^n$ in $\mathbb{R}^n$, which gives an integral structure, and also a $\mathbb{Q}$-structure, $\mathbb{Q}^n \subset \mathbb{R}^n$. Recall that a rational polyhedral cone $\sigma$ is a cone generated by finitely many elements $u_1, \dots, u_m$ of $\mathbb{Z}^n$, or equivalently of $\mathbb{Q}^n$:

$$\sigma = \{\lambda_1 u_1 + \cdots + \lambda_m u_m \in \mathbb{R}^n \mid \lambda_1, \cdots, \lambda_m \geq 0\}.$$

Usually, $\sigma$ is assumed to be strongly convex: $\sigma \cap (-\sigma) = \{0\}$, i.e., $\sigma$ does not contain any line through the origin. A face of a cone $\sigma$ is the intersection of $\sigma$ with the 0-level set of a linear functional which is nonnegative on $\sigma$. The relative interior and relative boundary of a cone are the interior respectively the boundary of the cone in the linear subspace spanned by it. For each strongly convex rational polyhedral cone $\sigma$, define its dual cone $\sigma^\vee$ by

(2.1) $$\sigma^\vee \coloneqq \{v \in \mathbb{R}^n \mid \langle v, u \rangle \geq 0 \text{ for all } u \in \sigma\}.$$

Then $\sigma^\vee$ is also a convex rational polyhedral cone, though it is not strongly convex anymore unless $\sigma$ has full dimension $n$.

###### Definition 2.2.

A fan $\Sigma$ in $\mathbb{R}^n$ is a collection of strongly convex rational polyhedral cones such that

1. if $\sigma \in \Sigma$, then every face of $\sigma$ also belongs to $\Sigma$;
2. if $\sigma_1, \sigma_2 \in \Sigma$, then their intersection $\sigma_1 \cap \sigma_2$ is a common face of both of them, and hence belongs to $\Sigma$.

In this paper, we only deal with fans which consist of finitely many polyhedral cones. It is known that there is a strong correspondence between fans and toric varieties, namely:

1. For every fan $\Sigma$ of $\mathbb{R}^n$, there is an associated toric variety $X_\Sigma$, which is a normal algebraic variety.
2. If a toric variety $X$ is a normal variety, then $X$ is of the form $X_\Sigma$ for some fan $\Sigma$ in $\mathbb{R}^n$.

Because of this correspondence, toric varieties are often required to be normal, for example in [Fu]. In this paper, we follow this convention and require toric varieties to be normal.

The construction of a toric variety $X_\Sigma$ from a fan $\Sigma$ and a description of its topology in terms of $\Sigma$ is crucial to the proof of Theorem 1.1. Therefore we give a short description here. Given a fan $\Sigma$ in $\mathbb{R}^n$, its associated toric variety $X_\Sigma$ is constructed as follows:

1. Each cone $\sigma \in \Sigma$ gives rise to an affine toric variety $U_\sigma$. Specifically, $\sigma^\vee \cap \mathbb{Z}^n$ is a finitely generated semigroup. Let $m_1, \dots, m_k$ be a set of generators of this semigroup, i.e., every element of $\sigma^\vee \cap \mathbb{Z}^n$ is of the form $a_1 m_1 + \cdots + a_k m_k$, with $a_1, \dots, a_k$ being non-negative integers. Then the Zariski closure of the image of $(\mathbb{C}^*)^n$ in $\mathbb{C}^k$ under the embedding

$$\varphi\colon (\mathbb{C}^*)^n \to \mathbb{C}^k, \quad t \mapsto (t^{m_1}, \cdots, t^{m_k})$$

is the affine toric variety $U_\sigma$. Note that we use Laurent monomials for the notation: $t^m = t_1^{m^{(1)}} \cdots t_n^{m^{(n)}}$ for all $m \in \mathbb{Z}^n$, where $m^{(i)}$ denotes the $i$-th component of $m$.
2. For any two cones $\sigma_1, \sigma_2 \in \Sigma$, if $\sigma_1$ is a face of $\sigma_2$, then $U_{\sigma_1}$ is a Zariski dense subvariety of $U_{\sigma_2}$.
3. The toric variety $X_\Sigma$ is obtained by gluing these affine toric varieties together:

$$X_\Sigma = \bigcup_{\sigma \in \Sigma} U_\sigma \,/ \sim,$$

where the relation $\sim$ is given by the inclusion relation in (2): note that for any two cones $\sigma_1, \sigma_2 \in \Sigma$, the intersection $\sigma_1 \cap \sigma_2$, if nonempty, is a common face of both of them, and hence $U_{\sigma_1 \cap \sigma_2}$ can be identified with a subvariety of both $U_{\sigma_1}$ and $U_{\sigma_2}$.

Many properties of $X_\Sigma$ can be expressed in terms of the combinatorial properties of the fan $\Sigma$. We state one about the orbits of $(\mathbb{C}^*)^n$ in $X_\Sigma$; details can be found for example in [CLS, p. 119], [Cox, §9] or [Fu, §3.1].

###### Proposition 2.3.

For every toric variety $X_\Sigma$, there is a bijective correspondence between orbits of the torus $(\mathbb{C}^*)^n$ in $X_\Sigma$ and cones in the fan $\Sigma$. Denote the orbit in $X_\Sigma$ corresponding to $\sigma \in \Sigma$ by $O_\sigma$. Then $O_\sigma$ is a complex torus isomorphic to $(\mathbb{C}^*)^{n - \dim \sigma}$. In particular, the open and dense orbit $(\mathbb{C}^*)^n$ corresponds to the trivial cone $\{0\}$.
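To see the construction in action, here is the standard first example (a textbook illustration in the spirit of [Fu], not part of this paper's text). In $\mathbb{R}^2$, take $\sigma = \mathrm{Cone}(e_1, e_2)$, the first quadrant. Then $\sigma^\vee = \mathrm{Cone}(e_1, e_2)$ as well, the semigroup $\sigma^\vee \cap \mathbb{Z}^2$ is generated by $m_1 = e_1$ and $m_2 = e_2$, and

$$\varphi(t_1, t_2) = (t^{m_1}, t^{m_2}) = (t_1, t_2), \qquad U_\sigma = \overline{\varphi\big((\mathbb{C}^*)^2\big)} = \mathbb{C}^2.$$

For the ray $\tau = \mathrm{Cone}(e_1)$ one gets $\tau^\vee = \{(a, b) \in \mathbb{R}^2 \mid a \geq 0\}$, with semigroup generators $e_1$, $e_2$, $-e_2$, so

$$\varphi(t_1, t_2) = (t_1, t_2, t_2^{-1}), \qquad U_\tau \cong \mathbb{C} \times \mathbb{C}^*,$$

which contains $(\mathbb{C}^*)^2$ as a Zariski dense subvariety, as the construction promises.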
### 2.1. A topological model of toric varieties

In order to better understand the toric variety $X_\Sigma$ as a compactification of $(\mathbb{C}^*)^n$, we want to give a topological description of $X_\Sigma$ which exhibits its dependence on $\Sigma$ clearly and also describes explicitly sequences in $(\mathbb{C}^*)^n$ which converge to points in the complement $X_\Sigma\setminus(\mathbb{C}^*)^n$. To do so, note that in terms of the standard integral structure $\mathbb{Z}\subset\mathbb{R}$, we can realize $\mathbb{C}^*$ by

$$i\mathbb{Z}\backslash\mathbb{C}\cong\mathbb{C}^*,\quad z\mapsto e^{-2\pi z},$$

and when $\operatorname{Re}(z)\to+\infty$, it holds $e^{-2\pi z}\to 0$. Then the exponential map gives an identification $i\mathbb{Z}^n\backslash\mathbb{C}^n\cong(\mathbb{C}^*)^n$. Conversely, using the logarithmic function, we get an identification

(2.2) $$(\mathbb{C}^*)^n\cong i\mathbb{Z}^n\backslash\mathbb{C}^n.$$

In the following, we denote the complex torus $i\mathbb{Z}^n\backslash\mathbb{C}^n$ by $T$. Given any fan $\Sigma$ in $\mathbb{R}^n$, we will define a bordification $\overline{T}_\Sigma$ of $T$ and show in Proposition 2.10 below that $\overline{T}_\Sigma$ is homeomorphic to the toric variety $X_\Sigma$ as $T$-topological spaces.

###### Definition 2.4.

For each cone $\sigma\in\Sigma$, define a boundary component

$$O(\sigma)=i\mathbb{Z}^n\backslash\mathbb{C}^n/\operatorname{Span}_\mathbb{C}(\sigma).$$

Note that this is a complex torus of dimension equal to $n-\dim\sigma$. When $\sigma=\{0\}$, then $O(\{0\})=T$. Later we will identify $O(\sigma)$ with the torus orbit $\mathcal{O}(\sigma)$.

###### Definition 2.5.

Define a topological bordification $\overline{T}_\Sigma$ by

(2.3) $$\overline{T}_\Sigma=T\cup\coprod_{\sigma\in\Sigma,\,\sigma\neq\{0\}}O(\sigma)$$

with the following topology: a sequence $z_j=x_j+iy_j$ in $T$, where $x_j\in\mathbb{R}^n$ and $iy_j$ lies in the compact factor, converges to a point $z_\infty\in O(\sigma)$ for some $\sigma\in\Sigma$, $\sigma\neq\{0\}$, if and only if the following conditions hold:

1. The real part $x_j$ can be written as $x_j=x_j'+x_j''$ such that when $j\to+\infty$ it holds:
   1. the first part $x_j'$ is contained in the relative interior of the cone $\sigma$ and its distance to the relative boundary of $\sigma$ goes to infinity,
   2. the second part $x_j''$ is bounded.
2. The image of $z_j$ in $O(\sigma)$ under the projection
$$i\mathbb{Z}^n\backslash\mathbb{C}^n\to i\mathbb{Z}^n\backslash\mathbb{C}^n/\operatorname{Span}_\mathbb{C}(\sigma)$$
converges to the point $z_\infty$.

Note that the imaginary part $iy_j$ of $z_j$ lies in the compact torus $i(\mathbb{Z}^n\backslash\mathbb{R}^n)$, and the second condition controls both the imaginary part and the bounded component $x_j''$ of the real part $x_j$. The behaviour of converging sequences is schematically shown in Figure 1 and Figure 2.

###### Remark 2.6.

The above definition of $\overline{T}_\Sigma$ and the identification of $\overline{T}_\Sigma$ with $X_\Sigma$ in Proposition 2.10 follow the construction and discussion in [AM, pp. 1-6]. We note that there is one difference with the convention there: on page 2 in [AM], the complex torus is identified with $\mathbb{Z}^n\backslash\mathbb{C}^n$, the real part is the compact torus $\mathbb{Z}^n\backslash\mathbb{R}^n$, and the imaginary part is $i\mathbb{R}^n$, which can be identified with $\mathbb{R}^n$.

###### Remark 2.7.

The toric variety $X_\Sigma$ is compact if and only if the support of $\Sigma$ is equal to $\mathbb{R}^n$, i.e., $\Sigma$ gives a rational polyhedral decomposition of $\mathbb{R}^n$. Similarly, it is clear from the definition that the bordification $\overline{T}_\Sigma$ is a compactification of $T$ if and only if the support of $\Sigma$ is equal to $\mathbb{R}^n$.

To obtain a continuous action of $T$ on $\overline{T}_\Sigma$, we note that $\mathbb{C}^n$, or rather $T$, acts on $T$ and every boundary component $O(\sigma)$ by translation. These translations are compatible in the following sense.

###### Lemma 2.8.

For any sequence $z_j$ in $T$, if $z_j$ is convergent in $\overline{T}_\Sigma$, then for any vector $z\in\mathbb{C}^n$, or rather its image in $T$, the shifted sequence $z+z_j$ is also convergent. Furthermore,

$$\lim_{j\to+\infty}z+z_j=z+\lim_{j\to+\infty}z_j.$$

This implies the following result.

###### Proposition 2.9.

The action of $T$ on itself by multiplication extends to a continuous action on $\overline{T}_\Sigma$, and the decomposition in Equation 2.3 gives the orbit decomposition of $\overline{T}_\Sigma$ with respect to the action of $T$.

###### Proof.

We note that the multiplication of the torus $T$ on itself and on the boundary components corresponds to translation in $T$ and $O(\sigma)$. Then the proposition follows from Lemma 2.8. ∎
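To illustrate Definition 2.5 in the simplest case (a standard check, not taken from the paper): for the complete fan of $\mathbb{P}^1$ in $\mathbb{R}$, with cones $\{0\}$, $\sigma_+=\mathbb{R}_{\geq 0}$ and $\sigma_-=\mathbb{R}_{\leq 0}$, we have $\operatorname{Span}_\mathbb{C}(\sigma_+)=\mathbb{C}$, so $O(\sigma_+)=i\mathbb{Z}\backslash\mathbb{C}/\mathbb{C}$ is a single point. A sequence $z_j=x_j+iy_j$ with $x_j\to+\infty$ satisfies both conditions (take $x_j'=x_j$ and $x_j''=0$) and hence converges to this point, regardless of what the imaginary parts $y_j$ do. Under $t=e^{-2\pi z}$ this is exactly a sequence $t_j\to 0$ in $\mathbb{C}\subset\mathbb{P}^1$, with the circle fibre $|t|=e^{-2\pi x_j}$ collapsing to the boundary point; similarly $O(\sigma_-)$ corresponds to $\infty$.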
One key result we need for the proof of Theorem 1.1 is the following description of the toric variety as a topological $T$-space. Since this proposition and its proof are not explicitly written down in the literature, we give an outline of the proof below for the convenience of the reader.

###### Proposition 2.10.

The identity map on $T$ extends to a homeomorphism $\overline{T}_\Sigma\to X_\Sigma$, which is equivariant with respect to the action of $T$, and the $T$-orbits $\mathcal{O}(\sigma)$ in the toric variety are mapped homeomorphically to the boundary components $O(\sigma)$.

The identification between $\overline{T}_\Sigma$ and $X_\Sigma$ allows one to see that when a sequence of points in the real part of the complex torus $T$ goes to infinity along the directions contained in a cone $\sigma$ of the fan $\Sigma$, the sequence will converge to a point of a complex torus of smaller dimension. Hence the compact torus $i(\mathbb{Z}^n\backslash\mathbb{R}^n)$, which is the fiber of $T$ over each point of the real part $\mathbb{R}^n$, will collapse to a torus of smaller real dimension $n-\dim\sigma$.

###### Remark 2.11.

This result is well-known and can be found for example in [AM, pp. 1-6], [Od2, §10], [Cox, p. 211] or [Fu, p. 54]. Such a picture of toric varieties including also the compact part of the torus is often described in connection with the moment map of toric varieties (for a reference see [Fu, p. 79] or [Mi]). We will come back to this map later. But first we need a description of the toric variety as a bordification of $T$ instead of a bounded realization via the image of the moment map. A bordification of the noncompact part $\mathbb{R}^n$ of the torus is described in this way in [AM, p. 6] (see also [Od2, §10]) together with a map from the toric variety to this bordification.

First, we recall some properties of orbits of $(\mathbb{C}^*)^n$ in a toric variety $X_\Sigma$. The 1-1 correspondence between $(\mathbb{C}^*)^n$-orbits in $X_\Sigma$ and cones in $\Sigma$ mentioned in Proposition 2.3 above can be described more explicitly (see [Fu, p. 28], [CLS, p. 118] and [Cox, p. 212]):

###### Proposition 2.12.

For every cone $\sigma\in\Sigma$, there is a distinguished point $x_\sigma$ in the affine toric variety $U_\sigma$. It is contained in the orbit of $(\mathbb{C}^*)^n$ in $X_\Sigma$ corresponding to $\sigma$ (see Proposition 2.3), and hence the orbit $\mathcal{O}(\sigma)$ is equal to the orbit $(\mathbb{C}^*)^n\cdot x_\sigma$.

The distinguished point can be described as follows. The smallest cone $\{0\}$ of the fan $\Sigma$ corresponds to the affine toric variety $U_{\{0\}}=(\mathbb{C}^*)^n$, and the distinguished point is $x_{\{0\}}=(1,\cdots,1)$ in this case. The $(\mathbb{C}^*)^n$-orbit through this point gives $\mathcal{O}(\{0\})=(\mathbb{C}^*)^n$. In general, we note that every one-parameter subgroup of $(\mathbb{C}^*)^n$ is of the form

$$\lambda_m(z)=(z^{m_1},\cdots,z^{m_n}),$$

where $m=(m_1,\cdots,m_n)\in\mathbb{Z}^n$. Let $m$ be an integral vector contained in the relative interior of the cone $\sigma$. By [Fu, p. 37], [CLS, Proposition 3.2.2] (see also [Cox, p. 212]), the distinguished point is given by

$$x_\sigma=\lim_{z\to 0}\lambda_m(z)\in X_\Sigma.$$

We need to identify this distinguished point with a corresponding distinguished point in the bordification $\overline{T}_\Sigma$.

###### Lemma 2.13.

Under the identification of $T$ with $(\mathbb{C}^*)^n$ in Proposition 2.10, this distinguished point $x_\sigma$ in $X_\Sigma$ corresponds to the image of the origin of $\mathbb{C}^n$ in $O(\sigma)$ under the projection $i\mathbb{Z}^n\backslash\mathbb{C}^n\to i\mathbb{Z}^n\backslash\mathbb{C}^n/\operatorname{Span}_\mathbb{C}(\sigma)$. When the orbit $\mathcal{O}(\sigma)$ is identified with $(\mathbb{C}^*)^{n-k}$, where $k=\dim\sigma$, then $x_\sigma$ corresponds to $(1,\cdots,1)$.

###### Proof.

As mentioned before, for the trivial cone $\{0\}$ of the fan $\Sigma$, the distinguished point is $x_{\{0\}}=(1,\cdots,1)$. Under the identification in Equation 2.2, the distinguished point corresponds to the image of the origin of $\mathbb{C}^n$ under the projection $\mathbb{C}^n\to i\mathbb{Z}^n\backslash\mathbb{C}^n$. Therefore, $x_{\{0\}}$ corresponds to the image of the origin in $T$. For any nontrivial cone $\sigma$, the distinguished point $x_\sigma$ in the orbit $\mathcal{O}(\sigma)$ is equal to the limit $\lim_{z\to 0}\lambda_m(z)$ in $X_\Sigma$, where $m$ is an integral vector contained in the relative interior of the cone $\sigma$. We need to determine the limit in the bordification $\overline{T}_\Sigma$. When we identify $(\mathbb{C}^*)^n$ with $i\mathbb{Z}^n\backslash\mathbb{C}^n$ as above in Equation 2.2, the complex curve $\lambda_m(z)$ ($z\in\mathbb{C}^*$) in $(\mathbb{C}^*)^n$ is the image of a complex line in $\mathbb{C}^n$ with slope given by $m$, and hence its real part is a straight line in $\mathbb{R}^n$ through the origin with slope $m$; for $z\to 0$ the real part goes to infinity along the direction $m$, which is contained in the relative interior of $\sigma$. By the definition of the topology of $\overline{T}_\Sigma$ above, $\lambda_m(z)$ converges to the distinguished point in $O(\sigma)$, i.e., to the image of the origin of $\mathbb{C}^n$ in $O(\sigma)$. This proves Lemma 2.13. ∎
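As a quick sanity check (a standard example, not from the paper): take $n=2$ and $\sigma=\operatorname{Cone}(e_1,e_2)$, so $U_\sigma=\mathbb{C}^2$. The vector $m=(1,1)$ lies in the relative interior of $\sigma$, and $x_\sigma=\lim_{z\to 0}(z,z)=(0,0)$, the unique fixed point of the torus action on $\mathbb{C}^2$. On the bordification side, $\lambda_m(e^{-2\pi w})=e^{-2\pi wm}$ has real part $s\,(1,1)$ with $s\to+\infty$, which moves into the interior of $\sigma$ away from its relative boundary; since $\sigma$ is full-dimensional, $O(\sigma)$ is a single point, the image of the origin, in agreement with Lemma 2.13.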
###### Lemma 2.14.

For any cone $\sigma\in\Sigma$, a sequence $z_j$ in $T$ converges to the distinguished point $x_\sigma$ in the toric variety $X_\Sigma$ if and only if it converges to the distinguished point in the topological model $\overline{T}_\Sigma$.

###### Proof.

We note that for the open subset $U_\sigma\subset X_\Sigma$, under the embedding $\varphi$ of $U_\sigma$ above, the coordinates of the distinguished point $x_\sigma$ are either 0 or 1 depending on whether the corresponding generator $m_i$ of $S_\sigma$ is positive somewhere on $\sigma$ or vanishes identically on $\sigma$. This implies that a sequence $t_j$ in $(\mathbb{C}^*)^n$ converges to the distinguished point $x_\sigma$ if and only if the following conditions are satisfied:

1. for any $m\in S_\sigma$ with $m\in\sigma^\perp$ it holds $t_j^m\to 1$ as $j\to+\infty$,
2. for any $m\in S_\sigma$ with $m\notin\sigma^\perp$ it holds $t_j^m\to 0$ as $j\to+\infty$.

Note that the vectors in $S_\sigma$ with these two properties together span the dual cone $\sigma^\vee$. In terms of the identification $(\mathbb{C}^*)^n\cong i\mathbb{Z}^n\backslash\mathbb{C}^n$, write $t_j=e^{-2\pi z_j}$ with $z_j=x_j+iy_j$ as in the definition of the topology of $\overline{T}_\Sigma$; then the above condition on $t_j$ is equivalent to the following conditions:

1. The real part $x_j$ can be written as $x_j=x_j'+x_j''$ such that when $j\to+\infty$,
   1. the first part $x_j'$ is contained in the interior of the cone $\sigma$ and its distance to the relative boundary of $\sigma$ goes to infinity,
   2. the second part $x_j''$ is bounded.
2. The image of $z_j$ in $O(\sigma)$ under the projection
$$i\mathbb{Z}^n\backslash\mathbb{C}^n\to i\mathbb{Z}^n\backslash\mathbb{C}^n/\operatorname{Span}_\mathbb{C}(\sigma)$$
converges to the image in $O(\sigma)$ of the zero vector in $\mathbb{C}^n$.

By the definition of $\overline{T}_\Sigma$, these are exactly the conditions for the sequence $z_j$ to converge to the distinguished point in $O(\sigma)$. This proves Lemma 2.14. ∎

**Proof of Proposition 2.10.** The idea of the proof is to use the continuous actions of $T$ on $X_\Sigma$ and $\overline{T}_\Sigma$ to extend the equivalence of convergence of interior sequences to the distinguished point in Lemma 2.14 to other boundary points. Under the action of $T$, the orbit of the distinguished point in $\overline{T}_\Sigma$ gives $O(\sigma)$. As pointed out in Proposition 2.3 and Proposition 2.12, the orbit of $x_\sigma$ in $X_\Sigma$ gives the orbit $\mathcal{O}(\sigma)$ corresponding to $\sigma$. It can be seen that the stabilizer of the distinguished point $x_\sigma$ in $(\mathbb{C}^*)^n$ is equal to the subtorus corresponding to $\operatorname{Span}_\mathbb{C}(\sigma)$ (see [CLS, Lemma 3.2.5]). By the definition of $O(\sigma)$, the stabilizer of the distinguished point in $O(\sigma)$ is also equal to this subtorus. Therefore, there is a canonical identification between $\mathcal{O}(\sigma)$ and $O(\sigma)$. By Lemma 2.14, for any sequence $z_j$ in $T$, $z_j$ converges to $x_\sigma$ in $X_\Sigma$ if and only if it converges to the distinguished point in $\overline{T}_\Sigma$. Take any such sequence $z_j$. Let $g_j$ be any converging sequence in $T$ with limit $g$. For both the toric variety and the bordification, the continuous actions of $T$ on $X_\Sigma$ and $\overline{T}_\Sigma$ in Proposition 2.9 imply that the sequence $g_j\cdot z_j$ converges to $g\cdot x_\sigma$ in $X_\Sigma$, and to the corresponding point $g\cdot[0]$ in $\overline{T}_\Sigma$, respectively. This implies that a sequence of interior points in $T$ converges to a boundary point in the orbit $\mathcal{O}(\sigma)$ if and only if it converges to the corresponding point in $O(\sigma)$. Since $\sigma$ is an arbitrary cone in $\Sigma$ and $g_j$ is an arbitrary convergent sequence in $T$, this proves the topological description of toric varieties in Proposition 2.10. ∎

### 2.2. Fans coming from convex polytopes

One way to construct fans in $\mathbb{R}^n$ is to start with a rational convex polytope $P$ which contains the origin as an interior point. As a more detailed reference for this construction see [Fu, Section 1.5]. Recall that a convex polytope in $\mathbb{R}^n$ is the convex hull of finitely many points of $\mathbb{R}^n$. If the vertices of $P$ are contained in $\mathbb{Q}^n$, then $P$ is called a rational convex polytope.

Assume that $P$ is a rational convex polytope in $\mathbb{R}^n$ and contains the origin as an interior point. Then each face $F$ of $P$ spans a rational polyhedral cone

$$\sigma_F=\mathbb{R}_{\geq 0}\cdot F,$$

i.e., the face $F$ is a section of the cone $\sigma_F$, and these cones form a fan in $\mathbb{R}^n$, denoted by $\Sigma_P$. See for example Figure 3 below. Denote the toric variety defined by the fan $\Sigma_P$ by $X_{\Sigma_P}$. Since the support of the fan $\Sigma_P$ is equal to $\mathbb{R}^n$, $X_{\Sigma_P}$ is compact. Note that for any positive integer $k$, the scaled polytope $kP$ is also a rational polytope and gives the same fan, $\Sigma_{kP}=\Sigma_P$. It is known that not every fan in $\mathbb{R}^n$ comes from such a rational convex polytope $P$, as the following example shows.
###### Example 2.15 ([Fu, p. 25]).

Take the fan generated by the eight half-lines through the origin and one of the following eight points:

$$(-1,\pm 1,\pm 1),\quad(1,-1,\pm 1),\quad(1,1,-1),\quad(1,2,3).$$

Then it is not possible to find eight points, one on each of the half-lines, such that for each of the six cones the four corresponding generating points lie on one affine hyperplane.

The toric varieties defined by fans which do come from rational convex polytopes as above have a simple characterization:

###### Proposition 2.16.

A toric variety $X_\Sigma$ is a projective variety if and only if the fan $\Sigma$ is equal to the fan $\Sigma_P$ induced from a rational convex polytope $P$ containing the origin as an interior point as above.

In [Fu, p. 26] and [Cox, p. 219], a rational polytope dual to $P$ is used to construct a toric variety.

###### Definition 2.17.

The polar set of a convex polytope $P$ is defined by

(2.4) $$P^\circ=\{v\in\mathbb{R}^n\mid\langle v,u\rangle\geq -1\ \text{for all } u\in P\}.$$

###### Remark 2.18.

- When $P$ is a rational convex polytope containing the origin as an interior point, then $P^\circ$ is also a rational convex polytope containing the origin as an interior point.
- When $P$ is symmetric with respect to the origin, then this is equivalent to another common definition of the polar set: $\{v\in\mathbb{R}^n\mid\langle v,u\rangle\leq 1\ \text{for all } u\in P\}$.

The following fact is well-known; for a reference see for example [Fu, p. 24], or [HSW, Lemma 3.7] for a proof.

###### Proposition 2.19.

There is a duality between $P$ and $P^\circ$ which is given by a one-to-one correspondence between the set of faces of $P$ and the set of faces of $P^\circ$ which reverses the inclusion relation. The correspondence is as follows: let $F$ be a face of $P$. Then there is exactly one face $F^\circ$ of $P^\circ$, called the dual face of $F$, which satisfies the following two conditions:

1. for any $u\in F$ and $v\in F^\circ$ it holds: $\langle v,u\rangle=-1$,
2. $\dim F+\dim F^\circ=n-1$.

###### Proof of Proposition 2.16.

The fan associated with the dual polytope $P^\circ$ above is called the normal fan of the polytope $P$ and is defined for example in [Cox, pp. 217-218] or [Fu, Proposition, p. 26]. The toric varieties there are defined by the normal fans of polytopes and not by the fans $\Sigma_P$ as in this paper. But since the polar set of $P^\circ$ is equal to $P$, $(P^\circ)^\circ=P$, the above statement in Proposition 2.16 is equivalent to that in [Cox, Theorem 12.2], which is stated in terms of normal fans of rational polytopes. ∎

### 2.3. Real and nonnegative part of toric varieties and the moment map

For every toric variety $X_\Sigma$, there is the notion of the nonnegative part $X_{\Sigma,\geq 0}$ (see also [Fu, p. 78], [Od1, §1.3] and [So, §6]). In $\mathbb{C}^*$, the real part is $\mathbb{R}^*$, and in the complex torus $(\mathbb{C}^*)^n$, the real part is $(\mathbb{R}^*)^n$, which has $2^n$ connected components. The positive part of $\mathbb{C}^*$ is $\mathbb{R}_{>0}$, and the positive part of $(\mathbb{C}^*)^n$ is $(\mathbb{R}_{>0})^n$. Under the identification (Equation 2.2)

$$(\mathbb{C}^*)^n\cong i\mathbb{Z}^n\backslash\mathbb{C}^n=\mathbb{R}^n\times i(\mathbb{Z}^n\backslash\mathbb{R}^n)$$

the positive part corresponds to $\mathbb{R}^n$.

###### Definition 2.20 ([So, Definition 6.2]).

For any toric variety $X_\Sigma$, the closure of the positive part $(\mathbb{R}_{>0})^n$ is called the nonnegative part of $X_\Sigma$, denoted by $X_{\Sigma,\geq 0}$.

Under the identification in Proposition 2.10, $X_{\Sigma,\geq 0}$ can be described as follows:

###### Proposition 2.21.

For any fan $\Sigma$ of $\mathbb{R}^n$, the nonnegative part $X_{\Sigma,\geq 0}$ is homeomorphic to the space

$$\overline{\mathbb{R}^n}_\Sigma=\mathbb{R}^n\cup\coprod_{\sigma\in\Sigma,\,\sigma\neq\{0\}}\mathbb{R}^n/\operatorname{Span}_\mathbb{R}(\sigma)$$

with the following topology: an unbounded sequence $x_j$ in $\mathbb{R}^n$ converges to a boundary point $x_\infty$ in $\mathbb{R}^n/\operatorname{Span}_\mathbb{R}(\sigma)$ for a cone $\sigma\in\Sigma$ if and only if one can write $x_j=x_j'+x_j''$ such that the following conditions are satisfied:

1. when $j\to+\infty$, $x_j'$ is contained in the cone $\sigma$ and its distance to the relative boundary of $\sigma$ goes to infinity,
2. $x_j''$ is bounded,
3. the image of $x_j$ in $\mathbb{R}^n/\operatorname{Span}_\mathbb{R}(\sigma)$ under the projection converges to $x_\infty$.

This proposition was explained in detail and proved in [AM, pp. 2-6] and motivated Proposition 2.10 above.
If we denote $\mathbb{R}^n/\operatorname{Span}_\mathbb{R}(\sigma)$ by $O_\mathbb{R}(\sigma)$, then for any two cones $\sigma_1,\sigma_2\in\Sigma$, it holds that $O_\mathbb{R}(\sigma_2)$ is contained in the closure of $O_\mathbb{R}(\sigma_1)$ if and only if $\sigma_1$ is a face of $\sigma_2$. Therefore, the nonnegative part can be rewritten as

(2.5) $$X_{\Sigma,\geq 0}=\mathbb{R}^n\cup\coprod_{\sigma\in\Sigma,\,\sigma\neq\{0\}}O_\mathbb{R}(\sigma).$$

Consequently, we have

###### Corollary 2.22.

The translation action of $\mathbb{R}^n$ on itself extends to a continuous action on $X_{\Sigma,\geq 0}$, and the decomposition of $X_{\Sigma,\geq 0}$ in Equation 2.5 is the decomposition into $\mathbb{R}^n$-orbits.

This decomposition of the nonnegative part of the toric variety is a cell complex dual to the fan $\Sigma$. If $\Sigma=\Sigma_P$ for a rational convex polytope $P$ containing the origin as an interior point, then this cell complex structure is isomorphic to the cell structure of the polar set $P^\circ$. Using the moment map for projective toric varieties, we can realize this cell complex of $X_{\Sigma_P,\geq 0}$, and hence the compactification of $\mathbb{R}^n$, by a bounded convex polytope: let $P$ be a rational convex polytope containing the origin as an interior point, and $X_{\Sigma_P}$ the associated projective variety. By definition, each cone $\sigma_F$ of $\Sigma_P$ corresponds to a unique face $F$ of $P$, which gives by Proposition 2.19 a dual face $F^\circ$ of the polar set $P^\circ$.

###### Proposition 2.23.

The moment map induces a homeomorphism

$$\mu\colon X_{\Sigma_P,\geq 0}\to P^\circ$$

such that for every cone $\sigma_F\in\Sigma_P$, the positive part of the orbit $\mathcal{O}(\sigma_F)$ as a complex torus, or equivalently the orbit $O_\mathbb{R}(\sigma_F)$ in Proposition 2.21, is mapped homeomorphically to the relative interior of the face $F^\circ$ corresponding to the cone $\sigma_F$.

For more details about the moment map and the induced homeomorphism see [Od1, p. 94], [Fu, §4.2], [So, §8] and [JS, Theorem 1.2].

## 3. Polyhedral metrics

In this section, we recall the definition of polyhedral norms on $\mathbb{R}^n$. They are rather special in view of the Minkowski geometry of normed real vector spaces and the Hilbert geometry of bounded convex subsets of real vector spaces.

Let $\|\cdot\|$ be an asymmetric norm on $\mathbb{R}^n$, i.e., a function $\|\cdot\|\colon\mathbb{R}^n\to\mathbb{R}_{\geq 0}$ satisfying:

1. For any $x\in\mathbb{R}^n$, if $\|x\|=0$, then $x=0$.
2. For any $x\in\mathbb{R}^n$ and $\lambda>0$, $\|\lambda x\|=\lambda\|x\|$.
3. For any two vectors $x,y\in\mathbb{R}^n$, $\|x+y\|\leq\|x\|+\|y\|$.

In particular, $\|x\|$ and $\|-x\|$ may not be equal to each other. If the second condition is replaced by the stronger condition $\|\lambda x\|=|\lambda|\,\|x\|$ for all $\lambda\in\mathbb{R}$, then $\|\cdot\|$ is symmetric and is a usual norm on $\mathbb{R}^n$. Normed vector spaces have been extensively studied; they are also called Minkowski geometries in [Th]. Asymmetric norms on vector spaces have also been studied systematically, see [Cob]. In terms of their connection with convex domains below, they are natural.

Given an asymmetric norm $\|\cdot\|$ on $\mathbb{R}^n$, the unit ball of $\|\cdot\|$,

$$B_{\|\cdot\|}=\{x\in\mathbb{R}^n\mid\|x\|\leq 1\},$$

is a closed convex subset of $\mathbb{R}^n$ which contains the origin as an interior point. Conversely, given any closed convex subset $P$ of $\mathbb{R}^n$ which contains the origin as an interior point, we can define the Minkowski functional $\|\cdot\|_P$ on $\mathbb{R}^n$ by

$$\|x\|_P=\inf\{\lambda>0\mid x\in\lambda P\}.$$

It can be checked easily that $\|\cdot\|_P$ defines an asymmetric norm on $\mathbb{R}^n$. If $P$ is symmetric with respect to the origin, i.e., $-P=P$, then $\|\cdot\|_P$ is a norm on $\mathbb{R}^n$. It is also easy to see that the unit ball of $\|\cdot\|_P$ is equal to $P$. Since any asymmetric norm on $\mathbb{R}^n$ is uniquely determined by its unit ball, it is of the form $\|\cdot\|_P$ for some closed convex domain $P$ in $\mathbb{R}^n$ containing the origin in its interior.

###### Definition 3.1.

When $P$ is a polytope, the asymmetric norm $\|\cdot\|_P$ is called a polyhedral norm. If $P$ is a rational polytope with respect to the integral structure $\mathbb{Z}^n$, the norm is also called a rational polyhedral norm.

###### Remark 3.2 (Connections to the Minkowski and Hilbert geometry).

This interplay between convex subsets of $\mathbb{R}^n$ and norms on $\mathbb{R}^n$ plays a foundational role in the convex analysis of Minkowski geometry, see for example [Gru] and [Th]. If the lattice $\mathbb{Z}^n$ is taken into account, connections with number theory and counting of lattice points are established and the structure becomes richer. The geometry of numbers relies crucially on these connections, see also [GrL] and [Ba].
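A standard example tying Definition 2.17, Proposition 2.19 and Definition 3.1 together (included for illustration; it is not one of the paper's figures): for the cube $P=[-1,1]^n$, the Minkowski functional is $\|x\|_P=\max_i|x_i|=\|x\|_\infty$, and the polar set is

$$P^\circ=\{v\in\mathbb{R}^n\mid\langle v,u\rangle\geq -1\ \text{for all } u\in P\}=\{v\mid |v_1|+\cdots+|v_n|\leq 1\},$$

the cross-polytope, whose Minkowski functional is $\|v\|_{P^\circ}=\|v\|_1$. Each facet of the cube (dimension $n-1$) is dual to a vertex of the cross-polytope (dimension 0), so $\dim F+\dim F^\circ=n-1$ as in Proposition 2.19; for instance the facet $F=\{u\in P\mid u_1=1\}$ pairs with the vertex $F^\circ=\{-e_1\}$, and indeed $\langle -e_1,u\rangle=-1$ for all $u\in F$.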
There is another metric space associated with a bounded convex domain $\Omega$ of $\mathbb{R}^n$. It is the domain itself equipped with the Hilbert metric defined on it. When $\Omega$ is the open unit disc, this is Klein's model of the hyperbolic plane. In general, the Hilbert metric is a complete metric on $\Omega$ defined through the cross-ratio. See [deL] for details. Since $\Omega$ is diffeomorphic to $\mathbb{R}^n$, the Hilbert metric induces a metric on $\mathbb{R}^n$. When $\Omega$ is the interior of a convex polytope $P$, the Hilbert metric on $\Omega$ is quasi-isometric to a polyhedral normed space [Be] [Ve]. The polyhedral Hilbert metric associated with a polytope is isometric to a normed vector space if and only if the polytope is the simplex [FK, Theorem 2]. Furthermore, polyhedral Hilbert metrics also have special isometry groups [LW]. See also [LN] for other special properties of these Hilbert metrics. These discussions show that polyhedral norms on $\mathbb{R}^n$, in particular rational polyhedral norms, are very special in the context of the Minkowski geometry [Th] and the Hilbert geometry [deL].

## 4. Horofunction compactification of metric spaces

In this section we recall briefly the horofunction compactification of metric spaces, which was first introduced in [Gr1, §1.2]. Let $(X,d)$ be a proper metric space, i.e., a metric space in which closed balls are compact. The metric $d$ can be asymmetric, i.e., it satisfies all conditions of a usual metric except for the symmetry: $d(x,y)\neq d(y,x)$ is possible. Such metrics arise naturally in view of asymmetric polyhedral norms on $\mathbb{R}^n$ as we saw in the previous section. Let $C(X)$ be the space of continuous functions on $X$ with the compact-open topology. Let $\tilde{C}(X)$ be the quotient space $C(X)/\{\text{constant functions}\}$. Denote the image of a function $f$ in $\tilde{C}(X)$ by $[f]$. Define a map

$$\psi\colon X\to\tilde{C}(X),\quad x\mapsto[d(\cdot,x)].$$

If we fix a basepoint $x_0\in X$, then we can consider functions normalized to take value 0 at $x_0$, and get a map

$$\psi\colon X\to C(X),\quad x\mapsto d(\cdot,x)-d(x_0,x).$$
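For orientation, here is the classical computation in the Euclidean case (a standard fact, not specific to this paper). For $X=(\mathbb{R}^n,\|\cdot\|_2)$ with basepoint $x_0=0$ and the geodesic ray $\gamma(t)=t\xi$ with $\|\xi\|_2=1$,

$$\lim_{t\to+\infty}\big(\|x-t\xi\|_2-\|t\xi\|_2\big)=\lim_{t\to+\infty}\Big(\sqrt{t^2-2t\langle x,\xi\rangle+\|x\|_2^2}-t\Big)=-\langle x,\xi\rangle,$$

so every horofunction of Euclidean space is the Busemann function $h_\xi(x)=-\langle x,\xi\rangle$ of a geodesic ray. Rieffel's Question 6.5 above asks whether the analogous statement, with almost-geodesic rays, remains true for an arbitrary norm on a finite-dimensional vector space.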
https://tex.stackexchange.com/questions/447881/how-to-make-beginalign-looks-like-begineqnarray/447887
# How to make \begin{align} look like \begin{eqnarray}

I am trying to align a programming model using eqnarray, but the command \tag does not work with it. I could use align instead, but there the alignment is right for the first column and left for the second column. So what I want to do is to get the eqnarray alignment (which is right for the first column and centered for the second column) with align. This is how both commands look:

• Welcome to TeX SX! No = sign (or < or …) between the two ‘columns’? – Bernard Aug 27 '18 at 0:23
• Nope, that's why I don't want the right-left alignment – GuadalupeAnimation Aug 27 '18 at 0:26
• Centred w.r.t. what? And should the first column be aligned? – Bernard Aug 27 '18 at 0:33
• I added a picture so you can see what the problem is – GuadalupeAnimation Aug 27 '18 at 0:42
• the way you are using eqnarray – barbara beeton Aug 27 '18 at 1:22

Here is a solution using \mathmakebox from the mathtools package:

\documentclass{article}
\usepackage{mathtools}
\DeclareMathOperator*{\Max}{Max}
\begin{document}
A solution with \verb|\mathmakebox|:
\begin{alignat}{2}
\Max_t & \quad & F(t) = at, \\
\text{subject to} & & \mathmakebox[\widthof{$F(t) = at,$}][c]{t \le b,} \\
& & \mathmakebox[\widthof{$F(t) = at,$}][c]{t \ge 0.}
\end{alignat}
\end{document}

• (+1) I had completely forgotten this command (much more recent than eqparbox)! – Bernard Aug 27 '18 at 1:47

Here are two possibilities with alignat and eqparbox:

\documentclass{article}
\usepackage{mathtools}
\usepackage{eqparbox}
\newcommand{\eqmathbox}[2][M]{\eqmakebox[#1]{$\displaystyle#2$}}
\begin{document}
\begin{alignat}{2}
\max_t & &\quad & \eqmathbox{F(t) = at}\\
\text{subject to}& & & \eqmathbox{\begin{gathered}[t] t\le b, \\ t\ge 0. \end{gathered}}
\end{alignat}

\begin{alignat}{2}
\max_t & &\quad & \eqmathbox{F(t) = at}\\
\text{subject to}& & & \eqmathbox{t \le b,} \\
& & & \eqmathbox{t \ge 0.}
\end{alignat}
\end{document}

• is the mathtools package only needed with gathered? – GuadalupeAnimation Aug 27 '18 at 1:12
• It is a very useful extension of amsmath (which would be enough for this code), deeply recommended. B.t.w., eqnarray shouldn't be used any more, as it leads to bad spacings. – Bernard Aug 27 '18 at 1:16
http://www.ck12.org/statistics/Variance-of-a-Data-Set/lesson/Calculating-Variance-PST/
# Variance of a Data Set

## The mean of the squares of the deviation of data values

If you were told that the mean income at a certain company was \$35,000, you wouldn't really know much about the actual income of the majority of the employees, since there could be a few upper-level managers or owners whose income might skew the mean badly. However, if you were also given the variance of the incomes, how would that help?

### Calculating Variance

Variance (commonly denoted $\sigma^2$) is a very useful measure of the relative amount of 'scattering' of a given set. In other words, knowing the variance can give you an idea of how closely the values in a set cluster around the mean. The greater the variance, the more the data values in the set are spread out away from the mean. Variance is an important calculation to become familiar with because, like the arithmetic mean, variance is used in many other more complex statistical evaluations.

The calculation of variance is slightly different depending on whether you are working with a population (you do not intend to generalize the results back to a larger group) or a sample (you do intend to use the sample results to predict the results of a larger population). The difference is really only at the end of the process, so let's start with the calculation for a population.

To calculate the variance of a population:

1. First, identify the arithmetic mean of your data by finding the sum of the values and dividing it by the number of values.
2. Next, subtract the mean from each value and record the result. This value is called the deviation of each score from the mean.
3. For each value, square the deviation.
4. Finally, divide the sum of the squared deviations by the number of values in the set. The resulting quotient is the variance $(\sigma^2)$ of the set.

To calculate the variance of a sample, the only difference is that in step 4, you divide the sum of squared deviations by the number of values in the sample minus 1. Dividing by one less than the number of values increases the calculated variance by a small amount; this compensates for the tendency of a sample to underestimate the spread of the larger population it was drawn from.

#### Calculating the Variance
1. Calculate the variance of set $x=\{12, 7, 6, 3, 10, 5, 18, 15\}$.

Follow the steps from above to calculate the variance:

- First, calculate the arithmetic mean:
\begin{align*}\mu =\frac{12+7+6+3+10+5+18+15}{8}=9.5\end{align*}
- Subtract the mean from each value to get the deviation of each value, then square each deviation:

| Value − Mean = Deviation | Deviation² |
| --- | --- |
| $12-9.5=2.5$ | 6.25 |
| $7-9.5=-2.5$ | 6.25 |
| $6-9.5=-3.5$ | 12.25 |
| $3-9.5=-6.5$ | 42.25 |
| $10-9.5=0.5$ | 0.25 |
| $5-9.5=-4.5$ | 20.25 |
| $18-9.5=8.5$ | 72.25 |
| $15-9.5=5.5$ | 30.25 |
| TOTAL (sum of deviations²) | 190.00 |

- Finally, divide the sum of the squared deviations by the count of values in the data set:
\begin{align*}\frac{190}{8}=23.75\end{align*}
Therefore the variance of set $x$ is 23.75.

2. Find the variance of set $z=\{1, 2, 3, 4, 5, 6, 7, 9\}$.

Divide the sum of the squared deviations of each value from the mean by the total number of values in the set:
\begin{align*}& \mu =\frac{1+2+3+4+5+6+7+9}{8}=4.625 \\ &(1-4.625)^2+(2-4.625)^2+(3-4.625)^2+(4-4.625)^2+(5-4.625)^2+(6-4.625)^2+(7-4.625)^2+(9-4.625)^2 =49.875 \\ & \frac{49.875}{8}=6.234\end{align*}
Therefore the variance $(\sigma^2)$ of set $z$ is 6.234.

3. Find $\sigma^2$ of $y=\{13, 14, 15, 16, 17, 18, 19, 20, 21\}$.

Let's do this one differently, using a nifty trick known as the "mean of the squares minus the square of the mean." Start, as before, by finding the arithmetic mean:
\begin{align*}\mu =\frac{13+14+15+16+17+18+19+20+21}{9}=17\end{align*}
Then, to find the variance, divide the sum of the squares of the values by the number of values (this is the "mean of the squares"), then square the mean calculated above, 17 (the "square of the mean"), and subtract it from the mean of the squares:
\begin{align*}\sigma ^2 = \frac{13^2+14^2+15^2+16^2+17^2+18^2+19^2+20^2+21^2}{9}-17^2=6.6\overline{6}\end{align*}
Therefore $\sigma^2$ of $y$ is $6.6\overline{6}$.

#### Earlier Problem Revisited

If you were told that the mean income at a certain company was \$35,000, you wouldn't really know much about the actual income of the majority of the employees, since there could be a few upper-level managers or owners whose income might skew the mean badly. However, if you were also given the variance of the incomes, how would that help?

By learning the variance of the set of incomes, you could get a feel for how representative the \$35,000 figure was of the likely salary of a common employee.
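The recipes above translate directly into code. Here is a short sketch (my own illustration, not part of the original lesson) that checks the population rule, the sample rule, and the "mean of the squares minus the square of the mean" shortcut against Python's built-in statistics module:

```python
import statistics

x = [12, 7, 6, 3, 10, 5, 18, 15]
n = len(x)
mu = sum(x) / n                                       # arithmetic mean: 9.5

# Population variance: mean of the squared deviations from the mean.
pop_var = sum((v - mu) ** 2 for v in x) / n           # 190 / 8 = 23.75

# Shortcut: "mean of the squares minus the square of the mean".
pop_var_alt = sum(v * v for v in x) / n - mu ** 2     # also 23.75

# Sample variance: divide by n - 1 instead of n.
sample_var = sum((v - mu) ** 2 for v in x) / (n - 1)  # 190 / 7 ≈ 27.14

assert abs(pop_var - statistics.pvariance(x)) < 1e-12
assert abs(pop_var - pop_var_alt) < 1e-12
assert abs(sample_var - statistics.variance(x)) < 1e-12
```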
### Examples

#### Example 1

Find $\mu$ and $\sigma^2$ of set $z=\{3.25, 3.5, 2.85, 3.4, 2.95, 3.02, 3.17\}$.

Let's use the "mean of the squares minus the square of the mean" method. First find the mean of the set:
\begin{align*}\mu=\frac{3.25+3.5+2.85+3.4+2.95+3.02+3.17}{7}=3.16286\end{align*}
Now divide the sum of each of the values squared by the number of values, and subtract the square of the mean ($3.16286^2 = 10.0036$):
\begin{align*}\frac{3.25^2+3.5^2+2.85^2+3.4^2+2.95^2+3.02^2+3.17^2}{7}-10.0036=10.0524-10.0036=0.049\end{align*}
is the variance.

#### Example 2

If all values of set $z$, above, were increased by 5, what would the new mean and variance be?

Find the mean of the new set:
\begin{align*}\frac{8.25+8.5+7.85+8.4+7.95+8.02+8.17}{7}=8.16286\end{align*}
Divide the sum of the values squared by the number of values:
\begin{align*}\frac{466.7668}{7}=66.681\end{align*}
Subtract the squared mean:
\begin{align*}66.681-66.632=0.049\end{align*}
is the variance.

The variance is the same as before! Does that surprise you? It shouldn't: adding the same constant to every value shifts the mean by that constant but leaves every deviation from the mean, and therefore the variance, exactly unchanged. Computed without intermediate rounding, both variances equal 0.0487347 to seven decimal places; rounding at intermediate steps is the only thing that can make the two computed values look slightly different.

#### Example 3

If all values of set $z$ from Example 1 were doubled, how would that affect $\mu$ and $\sigma^2$?

The question is what would happen if all of the values were doubled. Do the mean and variance also double? Let's see:

The mean of the new set is
\begin{align*}\frac{6.5+7+5.7+6.8+5.9+6.04+6.34}{7}=\frac{44.28}{7}=6.326,\end{align*}
which is twice the mean of the original set. So far so good.

The "mean of the squares" is
\begin{align*}\frac{6.5^2+7^2+5.7^2+6.8^2+5.9^2+6.04^2+6.34^2}{7}=\frac{281.47}{7}=40.21,\end{align*}
which is four times the original mean of the squares, not double after all (which makes sense, given that each doubled value was squared).

Finally, subtract the two values:
\begin{align*}40.21-6.326^2 = 0.192\end{align*}
is the variance. If we compare this to the original, $\frac{0.192}{0.049}\approx 4$, so we can see that doubling the original values quadruples the variance.

### Review Questions

For 1–12, find $\sigma^2$:

1. $y=\{4, 50, 63, 2, 82, 99\}$
2. Set $x$ is a random sample from a population with 38 members: $x=\{8, 13, 5, 10\}$
3. Set $z$ is a random sample from a larger population: $z=\{4, 3, 5, 15, 5\}$
4. $y=\{3, 26, 5, 1, 1\}$
5. 22, 21, 13, 19, 16, 18
6. Sample: 1, 2, 5, 1
7. Sample: 10, 6, 3, 4
8. 8, 11, 17, 7, 19
9. 15, 17, 19, 21, 23, 25, 27, 29
10. Sample: 15, 17, 19, 21, 23, 25, 27, 29
11. .25, .35, .45, .55, .26, .75
12. Find the variance of the data in the table:

| Height (rounded to the nearest inch) | Frequency of students |
| --- | --- |
| 60 | 35 |
| 61 | 33 |
| 62 | 45 |
| 63 | 4 |
| 64 | 3 |
| 65 | 4 |
| 66 | 7 |
| 67 | 4 |

### Vocabulary

**absolute deviation**: The absolute deviation is the sum total of how different each number is from the mean.

**deviation**: Deviation is a measure of the difference between a given value and the mean.

**Mean**: The mean of a data set is the average of the data set.
The mean is found by calculating the sum of the values in the data set and then dividing by the number of values in the data set.

**mean absolute deviation**: The mean absolute deviation is an alternate measure of how spread out the data is. It involves finding the mean of the distance between each data value and the mean. While this method might seem more intuitive, in statistics it has been found to be too limited and is not commonly used.

**Population**: In statistics, the population is the entire group of interest from which the sample is drawn.

**Sample**: A sample is a specified part of a population, intended to represent the population as a whole.

**Skew**: To skew a given set means to cause the trend of data to favor one end or the other.

**standard deviation**: The square root of the variance is the standard deviation. Standard deviation is one way to measure the spread of a set of data.

**variance**: A measure of the spread of the data set equal to the mean of the squared deviations of each data value from the mean of the data set.
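For data given as a frequency table, as in review question 12, the same population formula applies with frequency weights. Here is one possible sketch (my own, not part of the lesson):

```python
# Heights and frequencies from review question 12.
heights = [60, 61, 62, 63, 64, 65, 66, 67]
freqs   = [35, 33, 45,  4,  3,  4,  7,  4]

n = sum(freqs)                                          # total number of students
mean = sum(h * f for h, f in zip(heights, freqs)) / n   # frequency-weighted mean
# Population variance: frequency-weighted mean of squared deviations.
variance = sum(f * (h - mean) ** 2 for h, f in zip(heights, freqs)) / n
print(round(mean, 2), round(variance, 2))
```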
http://www.chegg.com/homework-help/questions-and-answers/50-kg-block-initially-rest-level-frictionless-surface-attached-hooke-s-law-spring-spring-c-q2160379
## 4C)

A 5.0-kg block, initially at rest on a level, frictionless surface, is attached to a Hooke's-law spring with spring constant k = 80 N/m. The spring, which is also level, is rigidly attached to a wall on the other end as shown in the diagram above. Also assume that there is no friction in the spring or at the point of attachment. The block is then stretched from its equilibrium position at x = 0.00 m to a distance of 50 cm to the right and released. Taking the initial time to be t = 0.00 s at the release point, answer the following: what is the ordinary frequency in Hz? (1 Hz = 1 cycle/s)
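A sketch of the standard computation (my own, not part of the posted exercise): for a mass on an ideal Hooke's-law spring, the angular frequency is ω = √(k/m) and the ordinary frequency is f = ω/(2π); the 50 cm amplitude does not affect the frequency.

```python
import math

m = 5.0    # mass of the block, kg
k = 80.0   # spring constant, N/m

omega = math.sqrt(k / m)     # angular frequency: sqrt(16) = 4.0 rad/s
f = omega / (2 * math.pi)    # ordinary frequency: 2/pi ≈ 0.64 Hz

print(f"f = {f:.3f} Hz")     # f = 0.637 Hz
```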
https://tex.stackexchange.com/questions/455951/how-draw-this-figure-spiral-in-tikz/456069
# How to draw this figure (spiral) in TikZ?

I would like to draw this figure in TikZ. The figure below was already made with TikZ, but I would like to develop the code.

• Welcome to TeX - LaTeX! Please show us what code you have so far and be precise about where you have problems. – Andrew Swann Oct 20 '18 at 9:49
• If you know that diagram was created with TikZ, have you tried asking whoever made it if you can see the code? – Torbjørn T. Oct 21 '18 at 14:17

Welcome to TeX.SE! Since this is your first question, here is some attempt to answer it. Please consider posting what you have tried in the future. (EDIT: closed the gaps and added the two missing annotations, thanks to Werner for reminding me.)

\documentclass[tikz,border=3.14mm]{standalone}
\usetikzlibrary{decorations.markings}
\begin{document}
\tikzset{midmark/.style n args={2}{postaction={decorate,decoration={markings,
mark=at position 0.5 with {\node[font=\tiny] at (0,#2) {#1};}}}}}
\begin{tikzpicture}[declare function={R(\x)=0.3+\x/360;}]
\draw[line width=5mm,green] node[below right,black,xshift=-3mm,yshift=6mm]{AVA}
plot[variable=\x,domain=65:321] (\x:{R(\x)}) node[above left,black,xshift=2mm,yshift=-2mm]{FVA};
\draw[line width=5mm,orange] plot[variable=\x,domain=320:452,samples=31] (\x:{R(\x)});
\draw[line width=5mm,red] plot[variable=\x,domain=450:551,samples=61] (\x:{R(\x)}) node[black,left]{AVE};
\draw[line width=5mm,blue] plot[variable=\x,domain=550:830,samples=71] (\x:{R(\x)}) node[black,left,yshift=2.5mm,xshift=2mm]{FVE};
%
\draw[thin,<->,midmark={$25^\circ$}{-5pt}] plot[variable=\x,domain=65:90] (\x:{R(\x)+0.2});
\draw[thin,<->,midmark={$50^\circ$}{5pt}] plot[variable=\x,domain=270:320] (\x:{R(\x)+0.2});
\draw[white,thin,<->,midmark={$80^\circ$}{7pt}] plot[variable=\x,domain=550:630] (\x:{R(\x)+0.2});
\draw[white,thin,<->,midmark={$20^\circ$}{-5pt}] plot[variable=\x,domain=810:830] (\x:{R(\x)-0.2});
%
\draw (0,-pi) node[below] {PMI} -- (0,pi) node[above] {PMS};
%
\end{tikzpicture}
\end{document}

• Awesome as usual :) – Diaa Oct 20 '18 at 14:37
• How can one close the connections between the coloured segments? – Werner Oct 21 '18 at 16:55
• @Werner By just adding 0.5 to the upper ends of all but the last domains or by increasing the numbers of samples or dialing smooth. As you probably know, plot (without the smooth option) just connects the points (whose number is given by samples) by straight lines. This is why the gaps arise: the slopes of the last segment of a plot and the first segment of the next plot do not coincide. – marmot Oct 21 '18 at 17:01

Edit: explanation of the method used to determine the nature of the spiral. This curve is a spiral with two centers. To determine this, I printed your image (my printer did not respect the original colors) and then constructed with a ruler and compass the perpendicular bisector of the segment that cuts the small green curve on the left. This allowed me to understand that this curve is a spiral with two centers. To find the second centre, I extended the separation lines of the green part with the other two. I was able to see that the distance between the two centers is 5 mm. The spiral is constructed by drawing arcs of circles whose centre is alternately R and L, which are 5 mm apart. According to my measurements, the first circle has a radius of 5 mm, and each of the following radii increases by 5 mm.
First I built it this way with a line of normal thickness; L and R are the centers of the circular arcs: \documentclass[tikz,border=5mm]{standalone} \usetikzlibrary{arrows.meta} \begin{document} \begin{tikzpicture} [every node/.style={font=\footnotesize,inner sep=0pt,outer sep=1pt}, >={Straight Barb[inset=3pt,angle=90:3pt]}] \fill[green!50!black](0,0)circle(1pt)node[left]{L}; \fill[green!50!black](0,5mm)circle(1pt)node[left]{R}; \draw[draw=green,line width=.2mm](0,1)arc(90:65:5mm); \draw[draw=green,line width=.2mm](0,1)arc(90:270:10mm); \draw[draw=green,line width=.2mm](0,-1)arc(270:320:15mm)coordinate(a); \draw[draw=orange,line width=.2mm](a)arc(-40:90:15mm); \draw[draw=red,line width=.2mm](0,2)arc(90:190:20mm)coordinate(b); \draw[draw=blue,line width=.2mm](b)arc(-170:-90:20mm)coordinate(c); \draw[draw=blue,line width=.2mm](c)arc(-90:90:25mm)coordinate(d); \draw[draw=blue,line width=.2mm](d)arc(90:110:30mm); \end{tikzpicture} \end{document} To build this spiral, it is therefore necessary to draw arcs of circles with given centers. Here, to avoid having to recalculate these centers, I have named the points where the arcs end (a), (b), (c) and (d). To draw the arrows, I had to draw arcs of circles of known center, and to do so I used the technique indicated by @JonathanGratus in his answer to "Draw arc in tikz when center of circle is specified", like this: \draw[<->] (0,.5)+(65:7mm) arc (65:90:7mm) And finally, I placed the text separately. \documentclass[tikz,border=5mm]{standalone} \usetikzlibrary{arrows.meta} \begin{document} \begin{tikzpicture} [every node/.style={font=\footnotesize,inner sep=0pt,outer sep=1pt}, >={Straight Barb[inset=3pt,angle=90:3pt]}] %\fill[green!50!black](0,0)circle(1pt)node[left]{L}; %\fill[green!50!black](0,5mm)circle(1pt)node[left]{R}; \draw[draw=green,line width=6mm](0,1)arc(90:65:5mm); \draw[draw=green,line width=6mm](0,1)arc(90:270:10mm); \draw[draw=green,line width=6mm](0,-1)arc(270:320:15mm)coordinate(a); \draw[draw=orange,line width=6mm](a)arc(-40:90:15mm); \draw[draw=red,line width=6mm](0,2)arc(90:190:20mm)coordinate(b); \draw[draw=blue,line width=6mm](b)arc(-170:-90:20mm)coordinate(c); \draw[draw=blue,line width=6mm](c)arc(-90:90:25mm)coordinate(d); \draw[draw=blue,line width=6mm](d)arc(90:110:30mm); % arrows placement % from https://tex.stackexchange.com/a/453649/138900 \draw[<->] (0,.5)+(65:7mm) arc (65:90:7mm)node[shift={(9pt,6pt)}]{$25^\circ$}; \draw[<->](0,.5)+(-90:17mm)arc(-90:-40:17mm)node[midway,above]{$50^\circ$}; \draw[<->,white](0,0)+(190:22mm)arc(190:270:22mm)node[midway,above right,inner sep=0pt]{$80^\circ$}; \draw[<->,white](0,0)+(90:28mm)arc(90:110:28mm)node[midway,above,inner sep=1pt]{$20^\circ$}; % node placement \node[anchor=north west] at ([shift={(-.1,.5)}]65:2mm){AVA}; \node[above left]at([shift={(-5pt,5pt)}]a){FVA}; \node[anchor=east]at(190:23mm){AVE}; \draw[node font=\bf](0,-3)node[below]{PMI}--(0,4)node[above]{PMS}; \end{tikzpicture} \end{document} Edit: Just for fun, an animation that builds this spiral. Its code is not optimized.
\documentclass[tikz,border=5mm]{standalone} \usetikzlibrary{arrows.meta} \begin{document} %\begin{tikzpicture} %\useasboundingbox(-2.5,-2.5)rectangle(3,3.5); %\fill[green!50!black](0,0)circle(1pt)node[left]{L}; %\fill[green!50!black](0,5mm)circle(1pt)node[left]{R}; %\end{tikzpicture} \tikzset{every node/.style={font=\footnotesize,inner sep=0pt,outer sep=1pt},>={Straight Barb[inset=3pt,angle=90:3pt]}} \foreach \a in {-90,-85,...,65}{ \begin{tikzpicture} \useasboundingbox(-2.5,-2.5)rectangle(3,3.5); \fill[green!50!black](0,0)circle(1pt)node[left]{L}; \fill[green!50!black](0,5mm)circle(1pt)node[left]{R}; \draw(0,5mm)--([shift={(0,5mm)}]\a:8mm); %\draw[draw=green,line width=6mm](0,5mm)+(65:5mm)arc(65:\a:5mm); \draw(0,-3)--(0,3.5); \end{tikzpicture} } \foreach \a in {70,75,...,90}{ \begin{tikzpicture} \useasboundingbox(-2.5,-2.5)rectangle(3,3.5); \fill[green!50!black](0,0)circle(1pt)node[left]{L}; \fill[green!50!black](0,5mm)circle(1pt)node[left]{R}; \draw(0,5mm)--([shift={(0,5mm)}]\a:8mm); \draw[draw=green,line width=6mm](0,5mm)+(65:5mm)arc(65:\a:5mm); \draw(0,-3)--(0,3.5); \end{tikzpicture} } \foreach \a in {95,100,...,270}{ \begin{tikzpicture} \useasboundingbox(-2.5,-2.5)rectangle(3,3.5); \fill[green!50!black](0,0)circle(1pt)node[left]{L}; \fill[green!50!black](0,5mm)circle(1pt)node[left]{R}; \draw[draw=green,line width=6mm](0,5mm)+(65:5mm)arc(65:90:5mm); %\draw[<->] (0,.5)+(65:7mm) arc (65:90:7mm)node[shift={(9pt,6pt)}]{$25^\circ$}; \draw(0,0)--(\a:13mm); \draw[draw=green,line width=6mm](0,1)arc(90:\a:10mm); \draw(0,-3)--(0,3.5); \end{tikzpicture} } \foreach \a in {275,280,...,320}{ \begin{tikzpicture} \useasboundingbox(-2.5,-2.5)rectangle(3,3.5); \fill[green!50!black](0,0)circle(1pt)node[left]{L}; \fill[green!50!black](0,5mm)circle(1pt)node[left]{R}; \draw[draw=green,line width=6mm](0,5mm)+(65:5mm)arc(65:90:5mm); %\draw[<->] (0,.5)+(65:7mm) arc (65:90:7mm)node[shift={(9pt,6pt)}]{$25^\circ$}; \draw[draw=green,line width=6mm](0,1)arc(90:270:10mm); \draw(0,5mm)--([shift={(0,5mm)}]\a:18mm); \draw[draw=green,line width=6mm](0,-1)arc(270:\a:15mm); \draw(0,-3)--(0,3.5); \end{tikzpicture} } \foreach \a in {-35,-30,...,90}{ \begin{tikzpicture} \useasboundingbox(-2.5,-2.5)rectangle(3,3.5); \fill[green!50!black](0,0)circle(1pt)node[left]{L}; \fill[green!50!black](0,5mm)circle(1pt)node[left]{R}; \draw[draw=green,line width=6mm](0,5mm)+(65:5mm)arc(65:90:5mm); %\draw[<->] (0,.5)+(65:7mm) arc (65:90:7mm)node[shift={(9pt,6pt)}]{$25^\circ$}; \draw[draw=green,line width=6mm](0,1)arc(90:270:10mm); \draw[draw=green,line width=6mm](0,-1)arc(270:320:15mm)coordinate(a); %\draw[<->](0,.5)+(-90:17mm)arc(-90:-40:17mm)node[midway,above]{$50^\circ$}; \draw(0,.5)--([shift={(0,5mm)}]\a:18mm); \draw[draw=orange,line width=6mm](a)arc(-40:\a:15mm); \draw(0,-3)--(0,3.5); \end{tikzpicture} } \foreach \a in {95,100,...,190}{ \begin{tikzpicture} \useasboundingbox(-2.5,-2.5)rectangle(3,3.5); \fill[green!50!black](0,0)circle(1pt)node[left]{L}; \fill[green!50!black](0,5mm)circle(1pt)node[left]{R}; \draw[draw=green,line width=6mm](0,5mm)+(65:5mm)arc(65:90:5mm); %\draw[<->] (0,.5)+(65:7mm) arc (65:90:7mm)node[shift={(9pt,6pt)}]{$25^\circ$}; \draw[draw=green,line width=6mm](0,1)arc(90:270:10mm); \draw[draw=green,line width=6mm](0,-1)arc(270:320:15mm)coordinate(a); %\draw[<->](0,.5)+(-90:17mm)arc(-90:-40:17mm)node[midway,above]{$50^\circ$}; \draw[draw=orange,line width=6mm](a)arc(-40:90:15mm); \draw[draw=red,line width=6mm](0,2)arc(90:\a:20mm)coordinate(b); \draw(0,0)--(\a:23mm); \draw(0,-3)--(0,3.5); \end{tikzpicture} } \foreach \a 
in {-165,-160,...,-90}{ \begin{tikzpicture} \useasboundingbox(-2.5,-2.5)rectangle(3,3.5); \fill[green!50!black](0,0)circle(1pt)node[left]{L}; \fill[green!50!black](0,5mm)circle(1pt)node[left]{R}; \draw[draw=green,line width=6mm](0,5mm)+(65:5mm)arc(65:90:5mm); %\draw[<->] (0,.5)+(65:7mm) arc (65:90:7mm)node[shift={(9pt,6pt)}]{$25^\circ$}; \draw[draw=green,line width=6mm](0,1)arc(90:270:10mm); \draw[draw=green,line width=6mm](0,-1)arc(270:320:15mm)coordinate(a); %\draw[<->](0,.5)+(-90:17mm)arc(-90:-40:17mm)node[midway,above]{$50^\circ$}; \draw[draw=orange,line width=6mm](a)arc(-40:90:15mm); \draw[draw=red,line width=6mm](0,2)arc(90:190:20mm)coordinate(b); \draw[draw=blue,line width=6mm](b)arc(-170:\a:20mm)coordinate(c); \draw(0,0)--(\a:23mm); \draw(0,-3)--(0,3.5); \end{tikzpicture} } \foreach \a in {-85,-80,...,90}{ \begin{tikzpicture} \useasboundingbox(-2.5,-2.5)rectangle(3,3.5); \fill[green!50!black](0,0)circle(1pt)node[left]{L}; \fill[green!50!black](0,5mm)circle(1pt)node[left]{R}; \draw[draw=green,line width=6mm](0,5mm)+(65:5mm)arc(65:90:5mm); %\draw[<->] (0,.5)+(65:7mm) arc (65:90:7mm)node[shift={(9pt,6pt)}]{$25^\circ$}; \draw[draw=green,line width=6mm](0,1)arc(90:270:10mm); \draw[draw=green,line width=6mm](0,-1)arc(270:320:15mm)coordinate(a); %\draw[<->](0,.5)+(-90:17mm)arc(-90:-40:17mm)node[midway,above]{$50^\circ$}; \draw[draw=orange,line width=6mm](a)arc(-40:90:15mm); \draw[draw=red,line width=6mm](0,2)arc(90:190:20mm)coordinate(b); \draw[draw=blue,line width=6mm](b)arc(-170:-90:20mm)coordinate(c); %\draw[<->,white](0,0)+(190:22mm)arc(190:270:22mm)node[midway,above right,inner sep=0pt]{$80^\circ$}; \draw[draw=blue,line width=6mm](c)arc(-90:\a:25mm)coordinate(d); \draw(0,.5)--([shift={(0,5mm)}]\a:28mm); \draw(0,-3)--(0,3.5); \end{tikzpicture} } \foreach \a in {95,100,...,110}{ \begin{tikzpicture} \useasboundingbox(-2.5,-2.5)rectangle(3,3.5); \fill[green!50!black](0,0)circle(1pt)node[left]{L}; \fill[green!50!black](0,5mm)circle(1pt)node[left]{R}; \draw[draw=green,line width=6mm](0,5mm)+(65:5mm)arc(65:90:5mm); %\draw[<->] (0,.5)+(65:7mm) arc (65:90:7mm)node[shift={(9pt,6pt)}]{$25^\circ$}; \draw[draw=green,line width=6mm](0,1)arc(90:270:10mm); \draw[draw=green,line width=6mm](0,-1)arc(270:320:15mm)coordinate(a); %\draw[<->](0,.5)+(-90:17mm)arc(-90:-40:17mm)node[midway,above]{$50^\circ$}; \draw[draw=orange,line width=6mm](a)arc(-40:90:15mm); \draw[draw=red,line width=6mm](0,2)arc(90:190:20mm)coordinate(b); \draw[draw=blue,line width=6mm](b)arc(-170:-90:20mm)coordinate(c); %\draw[<->,white](0,0)+(190:22mm)arc(190:270:22mm)node[midway,above right,inner sep=0pt]{$80^\circ$}; \draw[draw=blue,line width=6mm](c)arc(-90:90:25mm)coordinate(d); \draw[draw=blue,line width=6mm](d)arc(90:\a:30mm); %\draw[<->,white](0,0)+(90:28mm)arc(90:110:28mm)node[midway,above,inner sep=1pt]{$20^\circ$}; \draw(0,0)--(\a:33mm); \draw(0,-3)--(0,3.5); \end{tikzpicture} } \foreach \a in {115,120,...,270}{ \begin{tikzpicture} \useasboundingbox(-2.5,-2.5)rectangle(3,3.5); \fill[green!50!black](0,0)circle(1pt)node[left]{L}; \fill[green!50!black](0,5mm)circle(1pt)node[left]{R}; \draw[draw=green,line width=6mm](0,5mm)+(65:5mm)arc(65:90:5mm); %\draw[<->] (0,.5)+(65:7mm) arc (65:90:7mm)node[shift={(9pt,6pt)}]{$25^\circ$}; \draw[draw=green,line width=6mm](0,1)arc(90:270:10mm); \draw[draw=green,line width=6mm](0,-1)arc(270:320:15mm)coordinate(a); %\draw[<->](0,.5)+(-90:17mm)arc(-90:-40:17mm)node[midway,above]{$50^\circ$}; \draw[draw=orange,line width=6mm](a)arc(-40:90:15mm); \draw[draw=red,line width=6mm](0,2)arc(90:190:20mm)coordinate(b);
\draw[draw=blue,line width=6mm](b)arc(-170:-90:20mm)coordinate(c); %\draw[<->,white](0,0)+(190:22mm)arc(190:270:22mm)node[midway,above right,inner sep=0pt]{$80^\circ$}; \draw[draw=blue,line width=6mm](c)arc(-90:90:25mm)coordinate(d); \draw[draw=blue,line width=6mm](d)arc(90:110:30mm); %\draw[<->,white](0,0)+(90:28mm)arc(90:110:28mm);%node[midway,above,inner sep=1pt]{$20^\circ$}; \draw(0,0)--(\a:33mm); \draw(0,-3)--(0,3.5); \end{tikzpicture} } \end{document} • Did you change the title of the question? If so, how do you know that the OP wants a "spiral with two centers"? – marmot Oct 21 '18 at 14:04 • @marmot Yes, I changed the title of the question because this curve is really a two-center spiral. To verify this, I printed the image attached by the OP and checked with a compass and a ruler by drawing perpendicular bisectors, which allowed me to verify the nature of this curve as well as its construction. – AndréC Oct 21 '18 at 14:29 • Are you saying that my proposal does not match? (To me the results look almost identical.) I would actually refrain from making such drastic edits to questions without consulting the OP first. – marmot Oct 21 '18 at 14:32 • @marmot I had not noticed that your solution did not correspond, I was convinced that you had also made a spiral with two centres without having taken the trouble to check, I am confused. I voted for your solution because it taught me a lot of things I didn't know yet. – AndréC Oct 21 '18 at 14:40 • I believe that my solution (also) reproduces the OP's screen shot. I am not saying yours does not, but I am not sure if your construction mechanism is advantageous or precisely what the OP wants. Rather, I really believe one should consult the OP for that. – marmot Oct 21 '18 at 14:43
http://www.math.snu.ac.kr/board/index.php?mid=seminars&page=63&l=ko&document_srl=560282&sort_index=speaker&order_type=asc
The Omori-Yau maximum principle is a generalization of the classical pointwise maximum principle for C² smooth functions on compact manifolds to complete noncompact Riemannian manifolds with Ricci curvature bounded from below. It was introduced by S. T. Yau in connection with Liouville-type theorems for harmonic functions on complete noncompact Riemannian manifolds with Ricci curvature bounded from below. It has turned out to be useful in a number of geometric analysis problems on Riemannian manifolds. We consider its extension to Alexandrov spaces, which appear as limit spaces of Riemannian manifolds under the Gromov-Hausdorff distance.
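For context, here is one standard formulation of the principle (added for orientation; it is not part of the abstract): if $(M,g)$ is a complete Riemannian manifold with $\operatorname{Ric}\geq -K$ for some constant $K\geq 0$, and $u\in C^2(M)$ is bounded above, then there exists a sequence $(x_k)$ in $M$ with

$$u(x_k)>\sup_M u-\tfrac{1}{k},\qquad |\nabla u|(x_k)<\tfrac{1}{k},\qquad \Delta u(x_k)<\tfrac{1}{k}.$$

On a compact manifold all three conditions hold exactly at a maximum point; the content of the principle is that on such complete noncompact manifolds they can still be achieved asymptotically.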
http://cosmocoffee.info/viewtopic.php?t=611
[astro-ph/0606538] Point source power in three-year Wilkinson Microwave Anisotropy Probe data

Authors: K. M. Huffenberger, H. K. Eriksen, F. K. Hansen

Abstract: Using a set of multifrequency cross-spectra computed from the three year WMAP sky maps, we fit for the unresolved point source contribution. For a white noise power spectrum, we find a Q-band amplitude of A = 0.011 +/- 0.001 muK^2 sr (antenna temperature), significantly smaller than the value of 0.017 +/- 0.002 muK^2 sr used to correct the spectra in the WMAP release. Modifying the point source correction in this way largely resolves the discrepancy Eriksen et al. (2006) found between the WMAP V- and W-band power spectra. Correcting the co-added WMAP spectrum for both the low-l power excess due to residual foregrounds and the high-l power deficit due to over-subtracted point sources, we find that the net effect in terms of cosmological parameters is a ~ 0.7 sigma shift in n_s to larger values: For the combination of WMAP, BOOMERanG and Acbar data, we find n_s = 0.969 +/- 0.016, lowering the significance of n_s not equal to 1 from ~ 2.7 sigma to ~ 2.0 sigma.

**Anze Slosar** (Brookhaven National Laboratory), posted July 06 2006:

This is a quite interesting paper; I had hoped that someone else would come up with a discussion about it. They refit the residual point-source spectrum in WMAP3 and show that n_s moves by about 0.7 sigma. In general I wouldn't care about sub-1-sigma shifts, but this time it moves n_s = 1 from the tail of the distribution into a completely OK zone. A few comments though:

- I don't like the 100-200 bin: it makes the χ² horrible, and it cannot possibly be a noise fluctuation, unless the errors are badly underestimated. I think it is also quite difficult to come up with something that changes things on such large scales.
- β = −2: is assuming this a good assumption? What happens if you put some sensible prior around β (say −2 ± 0.5)? I would expect that faint sources are not the same population as sources so bright that you can actually cut them out...
- Poisson distribution: surely there are some LSS effects in the unresolved sources; what the authors do (I think) is to assume white noise for unresolved sources. Instead you could do something like Limber between z = 0 and 5, fit the amplitude on large scales and extrapolate to smaller scales; surely this would have been better than C_ℓ = const?
- Just as an aside: could you detect them via non-Gaussianity? If there are not too many sources, you should see them in the bispectrum, I guess?

**Hans Kristian Eriksen** (ITA, University of Oslo), posted July 06 2006:

> Anze Slosar wrote: I don't like the 100-200 bin: it makes the χ² horrible, and it cannot possibly be a noise fluctuation, unless the errors are badly underestimated. I think it is also quite difficult to come up with something that changes things on such large scales.

If it's any comfort, we don't like that bin either :-) My best guess is that it's residual foregrounds leaking in from the Q-band. Diffuse foregrounds still have a significant impact at l ~ 100, and the Q-band is not clean by any means. But one never knows. Another thing that may be worth checking out is the fact that the maps are cleaned with (smoothed) K-Ka as one of three templates.
These are of course noisy, and since this noise contribution is the same for all channels in the yearly maps, it's not killed by the cross-correlation estimator. Don't know the magnitude of this effect yet, though...

Quote: "β=−2: is this a good assumption? What happens if you put some sensible prior around β (say −2±0.5)? I would expect that faint sources are not the same population as sources so bright that you can actually cut them out..."

Certainly, the fact that WMAP (and we) use the same spectral index for both the resolved and the unresolved point sources is a quite bold assumption. I think the best justification for doing so is that it seems to work reasonably OK. But of course, there is still a ~2σ difference between the V- and W-bands, so there may be something there. As always, more work is needed...

Quote: "Poisson distribution: surely there are some LSS effects in the unresolved sources; what the authors do (I think) is to assume white noise for unresolved sources: instead you could do something like Limber between z=0 and 5, fit the amplitude on large scales and extrapolate to smaller scales - surely this would have been better than C = const?"

Right. The justification for using a flat spectrum is simply that CMB experiments are rather insensitive, and therefore only pick up the brightest sources. In other words, the point sources are very sparsely sampled. And that pushes the effective spectrum towards a flat "white noise" spectrum. But in principle, there may certainly be clustering effects present, yes.

I think the main conclusion from all of this work is that current parameter constraints shouldn't be taken too literally, in the sense that 3σ detections really are 3σ detections. There is a reason why particle physicists operate with a 5σ criterion, and that's precisely because of unknown systematics. Of course, we like to think that CMB observations are both cleaner and simpler, but it's still a good idea to have these issues in mind, I think.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8715390563011169, "perplexity": 2231.2656082394074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515311.25/warc/CC-MAIN-20171212075935-20171212095935-00363.warc.gz"}
https://differencebetweenatoz.com/difference-between-mass-and-weight/
What is the Difference Between Mass and Weight

Mass is a measure of how much matter an object contains. Weight is a measure of how strongly gravity pulls on that matter. Mass and weight are two important quantities in physics. In this article, we share the basic difference between mass and weight and how they are related to each other.

Mass is the amount of matter within a body, while weight is a measure of how strongly gravity pulls on that matter. Weight is a force, and force is mass times acceleration. The weight of an object is its mass times the acceleration due to gravity. The weight of a body varies with position. For example, objects weigh less on the Moon, where gravity is weaker than on Earth.

Measurement of mass vs. weight

Weight is measured using a scale, which effectively measures the pull on a mass exerted by Earth's gravity. The mass of a body is measured by balancing it against another known amount of mass. Mass can be measured using a pan balance, while weight can be measured using a spring balance. The two methods may be interchanged if gravity is known and constant, as it is on Earth.

Effect of gravity on mass and weight

The weight of an object depends on the gravity at its location, while mass is constant anywhere and at any time. For example, if an object has a mass of 60 kg, then its weight on Earth is about 600 newtons, but when taken to the Moon it weighs only about 100 newtons, since the Moon's gravity is about 1/6th of Earth's. The mass of the object, however, remains the same.

Imagine yourself out in space, far from any gravitational pull, with a bowling ball in your hands. Let go of it, and it simply floats before you. Without gravity, it has no weight. Now grab it again and shake it back and forth. That resistance to being moved is inertia, and mass measures how much inertia an object has. Inertia does not depend on gravity.

Difference Between Mass and Weight

Any two masses exert a mutual attractive force on one another; for a body near the Earth, the magnitude of that force is its weight. The difference between mass and weight is that mass is the amount of matter in a substance, while weight is a measure of how the force of gravity acts upon that mass. Mass is the measure of the quantity of matter in a body and is denoted by m or M. Weight is given by

W = m * g

Relative weight on different planets relative to Earth

• Weight on Mercury: 0.378
• Weight on Venus: 0.907
• Weight on Earth: 1
• Weight on Moon: 0.165
• Weight on Mars: 0.377
• Weight on Jupiter: 2.364
• Weight on Saturn: 0.910
• Weight on Uranus: 0.889
• Weight on Neptune: 1.125

Hope you liked this article on the basic difference between mass and weight in physics and how mass affects weight in space. If you want to share anything with us, comment below.
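As a quick illustration of W = m * g, here is a small Python sketch (the planet factors are the relative gravities tabulated above; Earth's g is taken as 9.81 m/s^2, whereas the 600 N figure in the text uses the rounded value g ≈ 10 m/s^2):

```python
# Weight is mass times the local gravitational acceleration: W = m * g.
RELATIVE_G = {
    "Mercury": 0.378, "Venus": 0.907, "Earth": 1.0, "Moon": 0.165,
    "Mars": 0.377, "Jupiter": 2.364, "Saturn": 0.910,
    "Uranus": 0.889, "Neptune": 1.125,
}
G_EARTH = 9.81  # m/s^2

def weight_newtons(mass_kg, body="Earth"):
    """Weight (in newtons) of the given mass on the chosen body."""
    return mass_kg * G_EARTH * RELATIVE_G[body]

for body in ("Earth", "Moon", "Jupiter"):
    print(f"60 kg on {body}: {weight_newtons(60, body):.0f} N")
# The mass stays 60 kg everywhere; only the weight changes.
```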
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.847075343132019, "perplexity": 912.8627786956163}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645538.8/warc/CC-MAIN-20180318052202-20180318072202-00346.warc.gz"}
https://www.physicsforums.com/threads/quantum-gravity-distortion.84158/
# Quantum gravity distortion

1. Aug 4, 2005

### professor

I know what it is, and why it's searched for... but I have brought myself to a fairly simple answer, and would like to know why it is not a correct one. Could quantum gravity not merely state that the reason for the distortion of particles into the QM description (partial-wave) is the ripples of gravitation (i.e. curved spacetime)? This seems perfectly reasonable to me... does anyone else see what might be wrong with this... or is it correct, just not exact enough to be considered fact just yet?

Last edited: Aug 4, 2005

2. Aug 4, 2005

### marlon

What do you mean by distortion of particles into the QM description?

marlon

3. Aug 4, 2005

### professor

I mean that the very reason that they are not completely particulate, and that neither is a photon completely wavelike, could be the shifting of spacetime, therefore making it seem as if these (particles) are indeed moving with a partial wave themselves, when rather it is the gravitational waves that affect them in this way.

4. Aug 4, 2005

### professor

AKA - without gravitation of any sort... would the waves become less significant?

5. Aug 4, 2005

### professor

Perhaps not... I may see my flaw: if I had hold of a space station, and some very precise laser fluctuation readers, then I could prove myself completely wrong, I'm willing to bet... oh well.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8753584623336792, "perplexity": 1645.7339630181907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323604.1/warc/CC-MAIN-20170628101910-20170628121910-00183.warc.gz"}
http://mathhelpforum.com/statistics/139008-normal-distribution-problem.html
# Math Help - normal distribution problem

1. ## normal distribution problem

A certain brand of soft drink is sold in a so-called "litre" bottle. In fact, the amount of liquid in each bottle (in litres) is a normally distributed random variable with mean 1.005 and standard deviation 0.01.

a) If I buy four bottles, find the probability that the mean contents of the four bottles is less than 1 litre.

2. Originally Posted by shawli

A certain brand of soft drink is sold in a so-called "litre" bottle. In fact, the amount of liquid in each bottle (in litres) is a normally distributed random variable with mean 1.005 and standard deviation 0.01.

a) If I buy four bottles, find the probability that the mean contents of the four bottles is less than 1 litre.

i think its (1.005 - 1)/0.01 = 0.5
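For reference, the reply above computes a z-score for a single bottle and leaves out the sample-size factor: the mean of n = 4 bottles is itself normally distributed with standard deviation 0.01/sqrt(4) = 0.005, so z = (1 - 1.005)/0.005 = -1 and the required probability is Phi(-1) ≈ 0.1587. A quick numerical check in Python (assuming SciPy is available):

```python
from math import sqrt
from scipy.stats import norm  # assumes SciPy is installed

mu, sigma, n = 1.005, 0.01, 4
se = sigma / sqrt(n)   # standard deviation of the sample mean: 0.005
z = (1 - mu) / se      # z-score of 1 litre: -1.0
print(norm.cdf(z))     # P(mean of 4 bottles < 1 litre) ~ 0.1587
```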
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.832413911819458, "perplexity": 1296.4557672058627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398456975.30/warc/CC-MAIN-20151124205416-00320-ip-10-71-132-137.ec2.internal.warc.gz"}
https://brilliant.org/problems/finding-infinite-areas-are-awesome/
# Finding Infinite Areas are Awesome!

Calculus Level 3

If the area enclosed by the curve $y = \dfrac {360}{36 + x^2}$ and the $x$-axis can be represented as $a \pi$, find $a$.
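For reference, the area is a standard improper integral with an arctangent antiderivative:

$\displaystyle\int_{-\infty}^{\infty} \frac{360}{36+x^2}\,dx = \left[ 60 \arctan \frac{x}{6} \right]_{-\infty}^{\infty} = 60\left(\frac{\pi}{2} - \left(-\frac{\pi}{2}\right)\right) = 60\pi, \quad \text{so } a = 60.$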
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.878604531288147, "perplexity": 1471.0485853209514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583656577.40/warc/CC-MAIN-20190116011131-20190116033131-00152.warc.gz"}
https://www.nag.com/numeric/nl/nagdoc_latest/clhtml/f01/f01sac.html
# NAG CL Interface f01sac (real_nmf)

## 1 Purpose

f01sac computes a non-negative matrix factorization for a real non-negative $m \times n$ matrix $A$.

## 2 Specification

#include <nag.h>

void f01sac (Integer m, Integer n, Integer k, const double a[], Integer pda, double w[], Integer pdw, double h[], Integer pdh, Integer seed, double errtol, Integer maxit, NagError *fail)

The function may be called by the names: f01sac or nag_matop_real_nmf.

## 3 Description

The matrix $A$ is factorized into the product of an $m \times k$ matrix $W$ and a $k \times n$ matrix $H$, both with non-negative elements. The factorization is approximate, $A \approx WH$, with $W$ and $H$ chosen to minimize the functional

$f(W,H) = \|A - WH\|_F^2.$

You are free to choose any value for $k$, provided $k < \min(m,n)$. The product $WH$ will then be a low-rank approximation to $A$, with rank at most $k$.

f01sac finds $W$ and $H$ using an iterative method known as the Hierarchical Alternating Least Squares algorithm. You may specify initial values for $W$ and $H$, or you may provide a seed value for f01sac to generate the initial values using a random number generator.

## 4 References

Cichocki A and Phan A–H (2009) Fast local algorithms for large scale nonnegative matrix and tensor factorizations IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E92–A 708–721

Cichocki A, Zdunek R and Amari S–I (2007) Hierarchical ALS algorithms for nonnegative matrix and 3D tensor factorization Lecture Notes in Computer Science 4666 Springer 169–176

Ho N–D (2008) Nonnegative matrix factorization algorithms and applications PhD Thesis Univ. Catholique de Louvain

## 5 Arguments

1: m – Integer. Input.
On entry: $m$, the number of rows of the matrix $A$. Also the number of rows of the matrix $W$.
Constraint: m ≥ 2.

2: n – Integer. Input.
On entry: $n$, the number of columns of the matrix $A$. Also the number of columns of the matrix $H$.
Constraint: n ≥ 2.

3: k – Integer. Input.
On entry: $k$, the number of columns of the matrix $W$; the number of rows of the matrix $H$. See Section 9.2 for further details.
Constraint: 1 ≤ k < min(m, n).

4: a[dim] – const double. Input.
Note: the dimension, dim, of the array a must be at least pda × n. The (i,j)th element of the matrix $A$ is stored in a[(j-1) × pda + i - 1].
On entry: the $m \times n$ non-negative matrix $A$.

5: pda – Integer. Input.
On entry: the stride separating matrix row elements in the array a.
Constraint: pda ≥ m.

6: w[dim] – double. Input/Output.
Note: the dimension, dim, of the array w must be at least pdw × k. The (i,j)th element of the matrix $W$ is stored in w[(j-1) × pdw + i - 1].
On entry:
• if seed ≤ 0, w should be set to an initial iterate for the non-negative matrix factor, $W$;
• if seed ≥ 1, w need not be set. f01sac will generate a random initial iterate.
On exit: the non-negative matrix factor, $W$.

7: pdw – Integer. Input.
On entry: the stride separating matrix row elements in the array w.
Constraint: pdw ≥ m.
8: h[dim] – double. Input/Output.
Note: the dimension, dim, of the array h must be at least pdh × n. The (i,j)th element of the matrix $H$ is stored in h[(j-1) × pdh + i - 1].
On entry:
• if seed ≤ 0, h should be set to an initial iterate for the non-negative matrix factor, $H$;
• if seed ≥ 1, h need not be set. f01sac will generate a random initial iterate.
On exit: the non-negative matrix factor, $H$.

9: pdh – Integer. Input.
On entry: the stride separating matrix row elements in the array h.
Constraint: pdh ≥ k.

10: seed – Integer. Input.
On entry:
• if seed ≤ 0, the supplied values of $W$ and $H$ are used for the initial iterate;
• if seed ≥ 1, the value of seed is used to seed a random number generator for the initial iterates $W$ and $H$. See Section 9.3 for further details.

11: errtol – double. Input.
On entry: the convergence tolerance for when the Hierarchical Alternating Least Squares iteration has reached a stationary point. If errtol ≤ 0.0, a default value is used.

12: maxit – Integer. Input.
On entry: specifies the maximum number of iterations to be used. If maxit ≤ 0, 200 is used.

13: fail – NagError *. Input/Output.
The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).

## 6 Error Indicators and Warnings

NE_ALLOC_FAIL
Dynamic memory allocation failed. See Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information.

NE_BAD_PARAM
On entry, argument ⟨value⟩ had an illegal value.

NE_CONVERGENCE
The function has failed to converge after ⟨value⟩ iterations. The factorization given by w and h may still be a good enough approximation to be useful. Alternatively an improved factorization may be obtained by increasing maxit or using different initial choices of w and h.

NE_INIT_ESTIMATE

NE_INT
On entry, m = ⟨value⟩. Constraint: m ≥ 2.
On entry, n = ⟨value⟩. Constraint: n ≥ 2.

NE_INT_2
On entry, pda = ⟨value⟩ and m = ⟨value⟩. Constraint: pda ≥ m.
On entry, pdh = ⟨value⟩ and k = ⟨value⟩. Constraint: pdh ≥ k.
On entry, pdw = ⟨value⟩ and m = ⟨value⟩. Constraint: pdw ≥ m.

NE_INT_3
On entry, k = ⟨value⟩, m = ⟨value⟩ and n = ⟨value⟩. Constraint: 1 ≤ k < min(m, n).

NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance. See Section 7.5 in the Introduction to the NAG Library CL Interface for further information.

NE_INVALID_ARRAY
On entry, one or more of the elements of a, w or h were negative.

NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly. See Section 8 in the Introduction to the NAG Library CL Interface for further information.
## 7 Accuracy

The Hierarchical Alternating Least Squares algorithm used by f01sac is locally convergent; it is guaranteed to converge to a stationary point of $f(W,H)$, but this may not be the global minimum. The iteration is deemed to have converged if the gradient of $f(W,H)$ is less than errtol times the gradient at the initial values of $W$ and $H$.

Due to the local convergence property, you may wish to run f01sac multiple times with different starting iterates. This can be done by explicitly providing the starting values of $W$ and $H$ each time, or by choosing a different random seed for each function call. Note that even if f01sac exits with fail.code = NE_CONVERGENCE, the factorization given by $W$ and $H$ may still be a good enough approximation to be useful.

## 8 Parallelism and Performance

f01sac is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library. f01sac makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information. Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the Users' Note for your implementation for any additional implementation-specific information.

## 9 Further Comments

Each iteration of the Hierarchical Alternating Least Squares algorithm requires $O(mnk)$ floating-point operations. The real allocatable memory required is $mn + k(m+n)$. If $A$ is large and sparse, then f01sbc should be used to compute a non-negative matrix factorization.

### 9.1 Uniqueness

Note that non-negative matrix factorization is not unique. For a factorization given by the matrices $W$ and $H$, an equally good solution is given by $WD$ and $D^{-1}H$, where $D$ is any real non-negative $k \times k$ matrix whose inverse is also non-negative. In f01sac, $W$ and $H$ are normalized so that the columns of $W$ have unit length.

### 9.2 Choice of k

The most appropriate choice of the factorization rank, $k$, is often problem dependent. Details of your particular application may help in guiding your choice of $k$; for example, it may be known a priori that the data in $A$ naturally falls into a certain number of categories. Alternatively, trial and error can be used. Compute non-negative matrix factorizations for several different values of $k$ (typically with $k \ll \min(m,n)$) and select the one that performs the best. Finally, it is also possible to use a singular value decomposition of $A$ to guide your choice of $k$, by looking for an abrupt decay in the size of the singular values of $A$. The singular value decomposition can be computed using f08kbc.

### 9.3 Generating Random Initial Iterates

If seed ≥ 1 on entry, then f01sac uses the functions g05kfc and g05sac, with the NAG basic generator, to populate w and h. For further information on this random number generator see Section 2.1.1 in the G05 Chapter Introduction. Note that this generator gives a repeatable sequence of random numbers, so if the value of seed is not changed between function calls, then the same initial iterates will be generated.
## 10 Example

This example finds a non-negative matrix factorization for the matrix

$A = \begin{pmatrix} 8 & 10 & 5 & 10 & 5 \\ 9 & 21 & 7 & 17 & 10 \\ 12 & 17 & 8 & 14 & 6 \\ 14 & 18 & 9 & 16 & 7 \\ 13 & 29 & 10 & 23 & 13 \\ 10 & 17 & 7 & 14 & 7 \end{pmatrix}.$

### 10.1 Program Text

Program Text (f01sace.c)

### 10.2 Program Data

Program Data (f01sace.d)

### 10.3 Program Results

Program Results (f01sace.r)
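The NAG program text is linked above rather than reproduced, so as an independent illustration of the Hierarchical Alternating Least Squares idea described in Section 3, here is a short NumPy sketch applied to the example matrix. This is a sketch of the algorithm class only, not the NAG implementation: the library's updates, normalization of $W$, and gradient-based stopping rule differ.

```python
import numpy as np

# Example matrix A from Section 10 (6 x 5).
A = np.array([[ 8, 10,  5, 10,  5],
              [ 9, 21,  7, 17, 10],
              [12, 17,  8, 14,  6],
              [14, 18,  9, 16,  7],
              [13, 29, 10, 23, 13],
              [10, 17,  7, 14,  7]], dtype=float)

def hals_nmf(A, k, maxit=200, seed=1):
    """Minimize ||A - W H||_F^2 over non-negative W (m x k) and H (k x n)
    by cycling HALS updates over the k rank-1 terms."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(maxit):
        for j in range(k):
            # Residual with the j-th rank-1 term removed.
            R = A - W @ H + np.outer(W[:, j], H[j, :])
            W[:, j] = np.maximum(R @ H[j, :], 0) / max(H[j, :] @ H[j, :], 1e-12)
            H[j, :] = np.maximum(W[:, j] @ R, 0) / max(W[:, j] @ W[:, j], 1e-12)
    return W, H

W, H = hals_nmf(A, k=2)
print(np.linalg.norm(A - W @ H))  # Frobenius-norm residual of the fit
```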
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 130, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9278625249862671, "perplexity": 1012.2044404990962}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.69/warc/CC-MAIN-20211202024322-20211202054322-00181.warc.gz"}
http://tex.stackexchange.com/questions/28179/colored-background-in-inline-listings?answertab=votes
# Colored background in inline listings

I would like to make inline code more distinguishable from the text surrounding it, and I thought that using a background might work (like it is done on the stackexchange pages). Since I use the listings package for my other code samples, I tried the inline version of listings, but the key=value pairs seem not to work in the inline version (although the documentation seems to indicate that they should). Full example:

```
\documentclass{article}
\usepackage{color}
\usepackage{listings}
\begin{document}

This is a test where \lstinline[backgroundcolor=\color{yellow}]{A=1}
should have a yellow background.

This, on the other hand, actually works:
\begin{lstlisting}[backgroundcolor=\color{yellow}]
A = 1
\end{lstlisting}

\end{document}
```

Questions:

1. Can it be done with listings? And how?
2. What other ways to increase the distinguishability of inline verbatim do you use?

- Yes, it would be nice, for consistency, if color options worked in \lstinline as they work in the lstlisting environment. – alfC Jul 19 '13 at 0:46

I'm not familiar with most of the color- and highlighting-related options of the listings package. However, the \colorbox macro, which is defined in the color package and takes two arguments (the color and the item to be placed in the colored box), will do a nice job of highlighting a piece of inline text. The following MWE builds directly on your code:

```
\documentclass{article}
\usepackage{color,listings}
\begin{document}
This is to test whether \colorbox{yellow}{\lstinline{A=1}} has a yellow background.
\end{document}
```

Addendum: As @MartinScharrer has pointed out in a comment to my original posting, the \colorbox command won't get the job done if the argument of your \lstinline command contains "real" verbatim material -- which may well feature some TeX "special" characters such as %, _, &, and so on. For that contingency, you should load the realboxes package and employ its \Colorbox macro:

```
This is to test whether \Colorbox{yellow}{\lstinline{A=@#\$%^&*()1}} has a yellow background.
```

- Thanks, that is a good solution. – xubuntix Sep 12 '11 at 9:40
- @xubuntix: Actually it doesn't work with real verbatim code, because you can't have it in an argument of another macro. E.g. replace A=1 with A=1% and see how it breaks. You can however use \Colorbox from the realboxes package (requires xcolor too) which reads the content as a box, not as a macro argument. – Martin Scharrer Apr 9 '12 at 11:10

```
\documentclass{article}
\usepackage{xcolor}
\usepackage{listings}
\usepackage{realboxes}
\begin{document}
This is to test whether \Colorbox{yellow}{\lstinline{A=%1}} has a yellow background.
\end{document}
```

– Martin Scharrer Apr 9 '12 at 11:11

- THANKS, great solution! +1 – AndreasT Jul 30 '13 at 15:20

You could also use the solution from How to redefine \lstinline to automatically highlight or draw frames around all inline code snippets? The highlighting code is from Highlight text in code listing while also keeping syntax highlighting.

## Known Issues:

- This is intended for use with short inline code snippets that do not span line breaks.

## Code:

```
\documentclass{article}
\usepackage{etoolbox}
\usepackage{atbegshi,ifthen,listings,tikz}

% change this to customize the appearance of the highlight
\tikzstyle{highlighter} = [
  yellow,% set the color for inline listings here.
  line width = \baselineskip,
]

% enable these two lines for a more human-looking highlight
%\usetikzlibrary{decorations.pathmorphing}
%\tikzstyle{highlighter} += [decorate, decoration = random steps]

% implementation of the core highlighting logic; do not change!
\newcounter{highlight}[page]
\newcommand{\tikzhighlightanchor}[1]{\ensuremath{\vcenter{\hbox{\tikz[remember picture, overlay]{\coordinate (#1 highlight \arabic{highlight});}}}}}
\newcommand{\bh}[0]{\stepcounter{highlight}\tikzhighlightanchor{begin}}
\newcommand{\eh}[0]{\tikzhighlightanchor{end}}
\AtBeginShipout{\AtBeginShipoutUpperLeft{\ifthenelse{\value{highlight} > 0}{\tikz[remember picture, overlay]{\foreach \stroke in {1,...,\arabic{highlight}} \draw[highlighter] (begin highlight \stroke) -- (end highlight \stroke);}}{}}}

%--------------------------
\makeatletter
% Redefine macros from listings package:
\newtoggle{@InInlineListing}%
\togglefalse{@InInlineListing}%

\renewcommand\lstinline[1][]{%
    \leavevmode\bgroup\toggletrue{@InInlineListing}\bh % \hbox\bgroup --> \bgroup
    \def\lst@boxpos{b}%
    \lsthk@PreSet\lstset{flexiblecolumns,#1}%
    \lsthk@TextStyle
    \@ifnextchar\bgroup{\afterassignment\lst@InlineG \let\@let@token}%
    \lstinline@}%

\def\lst@LeaveAllModes{%
    \ifnum\lst@mode=\lst@nomode
        \expandafter\lsthk@EndGroup\iftoggle{@InInlineListing}{\eh{}}{}%
    \else
        \expandafter\egroup\expandafter\lst@LeaveAllModes
    \fi%
}
\makeatother

\lstset{backgroundcolor=\color{green!10}}%

\begin{document}
This is a test where \lstinline{A=1} should have a yellow background.

This, on the other hand, actually works:
\begin{lstlisting}[backgroundcolor=\color{green}]
A = 1
\end{lstlisting}
\end{document}
```

- This is great. However, when I used it, on some pages there appear spurious highlights at places where there is no lstinline, or some highlights do not appear. Apparently what happens is that highlights pertaining to a page appear on the previous page. This is related to highlights near the page break, in combination with footnotes on the same page. Do you have a clue about why this happens and how to fix it? – JLDiaz Jul 15 '13 at 9:06
- @JLDiaz: Added a Known Issues to note that this won't work across line breaks (and also page breaks). – Peter Grill Jul 15 '13 at 12:33
- Thank you, but the problem is not due to the code being too long. It is indeed a short snippet (\lstinline!print!), but it happens at the end of a paragraph which latex decided to break across pages, perhaps as a consequence of a footnote, or the starting of a new section after that paragraph, or this kind of reason. Apparently, the paragraph is broken after the tikzmark is written in the aux file, and thus the information about the page on which it should appear is wrong. If you want, I can post a new question with a MWE showing the problem. – JLDiaz Jul 15 '13 at 15:25
- @JLDiaz: Yeah, I think posting a new question makes sense. – Peter Grill Jul 15 '13 at 16:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9306007027626038, "perplexity": 2903.2935018656626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645340161.79/warc/CC-MAIN-20150827031540-00009-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.impan.pl/en/activities/banach-center/conferences/21-juliasets3/abstracts
## Abstracts + Slides

A PDF file with all the abstracts is available here.

Anna Miriam Benini (Università di Parma)

This is joint work with M. Astorg and N. Fagella. We present a version of the Mañé-Sad-Sullivan Theorem for natural families of meromorphic maps with finitely many singular values. One of the main differences with respect to the rational case (Lyubich, Mañé-Sad-Sullivan) and to the entire transcendental case (Eremenko, Lyubich) is that some points in periodic cycles can disappear at infinity, creating so-called virtual cycles.

Walter Bergweiler (Christian-Albrechts-Universität zu Kiel)

The Hausdorff dimension of Julia sets of meromorphic functions in the Speiser class

We show that for each d in (0,2] there exists a meromorphic function f such that the inverse function of f has three singularities and the Julia set of f has Hausdorff dimension d. This is joint work with Weiwei Cui.

Fabrizio Bianchi (CNRS - Université de Lille)

Higher bifurcations for polynomial skew products

Given a holomorphic family of rational maps on the Riemann Sphere, one can decompose the parameter space into a stability locus and a bifurcation locus. The latter corresponds to maps whose global dynamics are very sensitive to a perturbation of the parameter and is characterized as the support of the so-called bifurcation current. The changes in the global dynamics are dictated by changes in the dynamics of the critical set. When several critical points are present, it makes sense to define a stratification of the bifurcation locus, depending on how many critical points bifurcate independently. The good framework to do this is by means of the self-intersections of the bifurcation current, and one can prove that the resulting stratification is strict.

We consider in this talk dynamical systems on $\mathbb C^2$ of the form $f(z,w)= (p(z),q(z,w))$, for suitable polynomials $p,q$. The stability/bifurcation dichotomy in higher dimensions was developed in previous joint work with Berteloot and Dupont. We prove here that, in contrast with the one-dimensional case, all the self-intersections of the bifurcation current have the same support. A key point in the proof is the construction of a sufficiently large hyperbolic set. This is achieved by exploiting a recent result by Przytycki-Zdunik on non completely disconnected Julia sets and by extending techniques coming from the thermodynamic formalism of rational maps to this skew-product setting. Joint work with Matthieu Astorg, Orleans.

Christopher Bishop (Stony Brook University)

Dimensions of transcendental Julia sets

This is a survey of some known results about Hausdorff and packing dimension of Julia sets of transcendental entire functions. After reviewing the basic definitions and some analogous results for polynomials, I will discuss Baker's theorem that the Hausdorff dimension is always at least 1, and describe examples showing that all values in the interval [1,2] can be attained. Whereas in polynomial dynamics it is hard to construct Julia sets with dimension 2 or positive area, in transcendental dynamics the difficult problem is to build "small" examples, e.g., dimension close to 1 or having finite (spherical) length. I will end by stating some open problems.

Luka Boc Thaler (Univerza v Ljubljani)

On the geometry of simply connected wandering domains

In this talk we will construct an entire function $f:\mathbb{C}\rightarrow\mathbb{C}$ for which the unit disk $\mathbb{D}$ is a wandering domain.
The construction relies on approximation techniques and can be generalized so that the unit disk $\mathbb{D}$ may be replaced by any bounded connected regular open set $U$ whose closure has a connected complement. In particular this implies that every simply connected Jordan domain is a wandering domain of some entire function. These wandering domains can either be escaping or oscillating [2]. A similar approach was used earlier in [1] to show that a unit ball $\mathbb{B}\subset\mathbb{C}^m$ is a wandering domain of some automorphism of $\mathbb{C}^m$.

[1] L. Boc Thaler, Automorphisms of $\mathbb{C}^m$ with bounded wandering domains, Annali di Matematica Pura ed Applicata (2021), doi:10.1007/s10231-020-01057-3

[2] L. Boc Thaler, On the geometry of simply connected wandering domains, Bulletin of the London Mathematical Society (2021), doi:10.1112/blms.12518

Andrew Brown (University of Liverpool)

Eremenko’s Conjecture, Devaney’s Hairs, and the Growth of Counterexamples

Fatou noticed in 1926 that certain transcendental entire functions have Julia sets in which there are curves of points that escape to infinity under iteration, and he wondered whether this might hold for a more general class of functions. In 1989, Eremenko carried out an investigation of the escaping set of a transcendental entire function $f$, $I(f) = \lbrace z \in \mathbb{C} \colon \vert f^n(z) \vert \rightarrow \infty \rbrace$, and produced a conjecture with a weak and a strong form. The strong form asks if every point in the escaping set of an arbitrary transcendental entire function can be joined to infinity by a curve in the escaping set. This was answered in the negative by the 2011 paper of Rottenfusser, Rückert, Rempe, and Schleicher (RRRS) by constructing a tract that produces a function that cannot contain such a curve. In the same paper, it was also shown that if the function is of finite order, that is, $\log \log \vert f(z)\vert = \mathcal{O}(\log\vert z\vert)$ as $\vert z \vert \rightarrow \infty$, then every point in the escaping set can indeed be connected to infinity by a curve in the escaping set. The counterexample $f$ used in the RRRS paper has growth such that $\log \log \vert f(z)\vert = \mathcal{O}\left((\log \vert z \vert)^{K}\right)$, where $K>12$ is an arbitrary constant. The question is, can this exponent, $K$, be decreased, and can explicit calculations and counterexamples be performed and constructed that improve on this?

Davoud Cheraghi (Imperial College London)

Analytic symmetries of parabolic and elliptic elements

In this talk we discuss the analytic centralisers of parabolic and elliptic analytic maps. We explain the analytic centraliser for a number of classes of maps; in particular, we show that for a dense set of irrational numbers $\alpha$, the analytic centraliser of the map $e^{2\pi i \alpha} z+ z^2$ near 0 is trivial. We also present the first examples of analytic circle diffeomorphisms, with irrational rotation numbers, which have trivial centralisers.

Arnaud Chéritat (Institut de Mathématiques de Toulouse)

Bounded type Siegel disks of finite type maps with few singular values

Let $U$ be an open subset of the Riemann sphere $\widehat{\mathbb{C}}$. We give sufficient conditions under which a finite type map $f:U\to \widehat{\mathbb{C}}$ with at most three singular values has a Siegel disk compactly contained in $U$ and whose boundary is a quasicircle containing a unique critical point. The main tool is quasiconformal surgery à la Douady-Ghys-Herman-Świątek.
We also give sufficient conditions under which, instead, the Siegel disk $\Delta$ does not have compact closure in $U$. The main tool is the Schwarzian derivative and area inequalities à la Graczyk-Świątek.

Weiwei Cui (Lunds Universitet)

Hausdorff dimension of escaping sets for Speiser functions with few singular values

A meromorphic function $f \colon \mathbb{C} \to \widehat{\mathbb{C}}$ is called a Speiser function if it has finitely many singular values. In this talk we discuss the construction of Speiser functions with only few singular values. The purpose is to show that to attain each possible value of the Hausdorff dimension of escaping sets for Speiser functions, it is enough to consider functions with only four singular values. This is a joint work with Magnus Aspenberg.

Kostiantyn Drach (Aix-Marseille Université)

How to use box mappings as “black boxes”

The concept of a complex box mapping (aka puzzle mapping) is a generalization of the classical notion of polynomial-like map to the case when one allows for countably many components in the domain and finitely many components in the range of the mapping. In one-dimensional dynamics, box mappings appear naturally as first return maps to certain nice sets intersecting the critical set of the map. In this talk, we will discuss various features of general box mappings, as well as so-called dynamically natural box mappings; these features will include local connectivity of their Julia sets, ergodicity, etc. We will then show how these results can be used almost as “black boxes” to conclude similar properties in those families of rational maps where non-trivial box mappings can be extracted. Among the key examples for us will be complex polynomials of arbitrary degrees and their Newton maps. (The talk is based on joint work with Trevor Clark, Oleg Kozlovski, Dierk Schleicher and Sebastian van Strien.)

Dzmitry Dudko (Stony Brook University)

Expanding and relatively expanding Thurston maps

One of the prominent features of post-critically finite rational maps is that they expand the hyperbolic metric of the complement of the postcritical set. More generally, a non-invertible Thurston map f is isotopic to an expanding map if and only if f admits no Levy cycle (unless an "exceptional" torus endomorphism doubly covers f). Even if there is a Levy cycle, f may still have an expanding "cactoid" quotient. We will discuss this "relative expansion" property and its relation to the mating and Thurston decidability problems. Based on joint work with Laurent Bartholdi and Kevin Pilgrim.

Vasiliki Evdoridou (The Open University)

Boundary dynamics of wandering domains: sufficient conditions for uniform behaviour 2

This is the third of a series of talks on this topic. In this talk we focus on the boundary behaviour of contracting wandering domains. For such a wandering domain U of a transcendental entire function f, we show that the Denjoy-Wolff set of (f^n|_U) has either full or zero harmonic measure. In fact, we prove a more general result concerning compositions of holomorphic maps, which is inspired by, and in some sense extends, a result of Pommerenke on compositions of inner functions. We also give an example which shows that the contracting condition is necessary in the general case. This is joint work with A.M. Benini, N. Fagella, P. Rippon and G. Stallard.

Núria Fagella (Universitat de Barcelona)

Virtual centers in the parameter space of meromorphic maps

(Joint work with Anna Miriam Benini and Matthieu Astorg.)
We present new results about the bifurcation loci of natural families of meromorphic transcendental maps. We describe some special types of parameter values and their role in the bifurcation locus. In particular we prove (in great generality) that the set of parameters for which an asymptotic value is a prepole of order $n$ coincides with the set of parameters for which an attracting periodic orbit of period $n+1$ disappears to infinity (virtual centers).

Robert Florido Llinàs (Universitat de Barcelona)

Projecting Newton maps of Entire functions via the Exponential map

Lifting maps by the exponential function is a fruitful procedure to construct examples of escaping wandering domains. This method relies on a theorem by Bergweiler relating the Julia sets of an entire map and its projection, which was extended by Zheng in 2005 to meromorphic functions outside a small set. We investigate the Fatou components of a particular class of transcendental Newton maps and the corresponding projections, and we seek conditions for the existence of possible wandering domains. Work in progress.

Antonio Garijo (Universitat Rovira i Virgili)

Dynamics of the secant map

We investigate the root finding algorithm given by the secant method applied to a real polynomial $p$ as a discrete dynamical system defined on the real plane. In particular, given a simple root of $p$ we show the existence of a four cycle related to the immediate basin of attraction. This is a joint work with Ernest Fontich, Laura Gardini and Xavier Jarque.

Oleg Ivrii (Tel Aviv University)

Shapes of trees

A finite tree in the plane is called a true tree if every side of every edge has the same harmonic measure as seen from infinity. It is well known that any finite tree has a conformally balanced shape, unique up to scale. In this talk, we study shapes of infinite trees, focusing on the case of an infinite trivalent tree. To conformally balance the infinite trivalent tree, we truncate it at level $n$, form the true tree $\mathcal T_n$ and take $n \to \infty$. We show that the Hausdorff limit of the $\mathcal T_n$ contains the boundary of the developed deltoid, the domain obtained by repeatedly reflecting the deltoid in its sides. We also give a sequence of trees which produces the Cauliflower, the Julia set of $z^2+1/4$. (This is joint work with P. Lin, S. Rohde and E. Sygal.)

Xavier Jarque (Universitat de Barcelona)

On the basins of attraction of a one-dimensional family of root finding algorithms: From Newton to Traub

In this paper we study the dynamics of damped Traub's methods $T_\delta$ when applied to polynomials. The family of damped Traub's methods consists of root finding algorithms which contain both Newton's ($\delta=0$) and Traub's method ($\delta=1$). Our goal is to obtain several topological properties of the basins of attraction of the roots of a polynomial $p$ under $T_1$, which are used to determine a (universal) set of initial conditions for which convergence to all roots of $p$ can be guaranteed. We also numerically explore the global properties of the dynamical plane for $T_\delta$ to better understand the connection between Newton's method and Traub's method.

Anna Jové Campabadal (Universitat de Barcelona)

Dynamics on the boundary of Fatou components

In this talk, we compile the known results about the dynamics on the boundary of invariant simply connected Fatou components, as well as the questions which are still open concerning the topic. We focus on ergodicity and recurrence.
One of the main tools to deal with this kind of question is to study the boundary behaviour of the associated inner functions. Therefore, ergodicity and recurrence are first studied for inner functions. Second, these results are applied to study the dynamics on the boundary of invariant simply connected Fatou components. Moreover, we study the concrete example $f(z)=z+e^{-z}$, which presents infinitely many invariant doubly-parabolic Baker domains $U_k$. Making use of the associated inner function, which can be computed explicitly, we give a complete characterization of the periodic points in $\partial U_k$ and prove the existence of uncountably many curves of non-accessible escaping points.

Masashi Kisaka (Kyoto University)

Fatou-Shishikura inequality for transcendental entire functions in the Speiser class

We discuss the following realizability problem: For given numbers which satisfy the Fatou-Shishikura inequality, is there a transcendental entire function in the Speiser class S with the given numbers of Fatou components?

Genadi Levin (Hebrew University of Jerusalem)

On hyperbolic sets of polynomials, II

Let $f$ be an infinitely-renormalizable quadratic polynomial and $J_\infty$ the intersection of orbits of 'small' Julia sets of simple renormalizations of $f$. In [LP] we show that the restriction map $f:J_\infty\to J_\infty$ has no hyperbolic sets. I plan to talk about a more general question: can the Lyapunov exponent of an invariant measure of the map $f: J_\infty\to J_\infty$ be positive? Joint work (in progress) with Feliks Przytycki.

[LP] G. Levin, F. Przytycki, On hyperbolic sets of polynomials. arXiv:2107.11962

Daniel Meyer (University of Liverpool)

Quasisymmetric Uniformization, quasi-visual approximations, and Thurston maps

Quasisymmetric maps are generalizations of conformal maps and may be viewed as global versions of quasiconformal maps. Originally, they were introduced in the context of geometric function theory, but they appear now in geometric group theory and analysis on metric spaces, among others. The quasisymmetric uniformization problem asks when a given metric space is quasisymmetric to some model space. Of particular importance is the case where the model space is the $2$-sphere. The reason is that Cannon's conjecture, and Thurston's characterization of rational maps, may be expressed by saying that certain metric spaces are quasisymmetrically equivalent to the standard $2$-sphere. A quasi-visual approximation is a sequence of discrete approximations of a given metric space. There is a necessary and sufficient condition for quasisymmetric equivalence in terms of quasi-visual approximations. This has applications to the quasisymmetric uniformization of trees as well as to Thurston maps. This is joint work with Mario Bonk and Mikhail Hlushchanka.

Dan Nicks (University of Nottingham)

Orbits and bungee sets

In the study of discrete dynamical systems, we typically start with a function from a space into itself, and ask questions about the properties of sequences of iterates of the function. In the first part of this talk we reverse the direction of this study by starting with a sequence of points and studying the functions (if any) for which this sequence is an orbit under iteration. This gives rise to questions of existence and of uniqueness. The answers depend on the class of functions considered: holomorphic functions, quasiregular functions, continuous functions, etc.
In the second part of the talk, we consider the *bungee set* of a function $f$; that is, the set of points $x$ for which the orbit $(f^n(x))_{n\ge0}$ has both bounded and unbounded subsequences. For a quasiregular map $f$ we make some connections between the bungee set and the Julia set of $f$. Joint work with David Sixsmith.

Dan Alexandru Paraschiv (Universitat de Barcelona)

Newton-like behaviour in the Chebyshev-Halley family of degree n polynomials

Previously, Campos, Canela, and Vindel have studied the family of maps obtained by applying the Chebyshev-Halley family of numerical methods to degree n unicritical polynomials. More precisely, they have studied the possible connectivities of Fatou components in a general setting and the dynamical/parameter plane for n=2. We prove that for n>2, there exist parameters such that there exists a connected component of the Julia set which is a quasiconformal copy of the Julia set of a Newton's map for degree n unicritical polynomials. Work in progress.

Łukasz Pawelec (SGH - Warsaw School of Economics)

The important set for non-autonomous exponential map

In studying the behaviour of the exponential map it is often useful to consider the set of points whose trajectory does not leave the strip $\mathbb{R}\times (-\pi,+\pi)$. The same holds for non-autonomous maps $\lambda_n e^z$. We will discuss some properties of this set, such as its Hausdorff dimension and its topology. We will sketch the proof that the dimension is equal to one under some assumptions on $\lambda$.

Han Peters (Universiteit van Amsterdam)

Zeros of the Independence polynomial for graphs of large degrees

I will report on joint work with Pjotr Buys and Ferenc Bencs, both from the University of Amsterdam (UvA). The goal of our project is to accurately describe the maximal zero-free component of the independence polynomial for graphs of bounded degree, for large degree bounds. In previous work with David de Boer, Lorenzo Guerini and Guus Regts (all UvA) we demonstrated that this component coincides with the normality region of a discrete semi-group generated by finitely many rational maps. By passing through the infinite degree limit this normality region translates to the boundedness region for a continuous semi-group generated by infinitely many exponential maps. The key observation is that in the interior of the boundedness component, dominating singular orbits are strictly invariant under all the exponential generators, and hence provide bounded invariant sets for the rational semi-groups for sufficiently large degrees. We prove that away from the real axis, the exponential boundedness component avoids a neighborhood of the limit cardioid, answering a recent question posed by Andreas Galanis (Oxford). We also describe the boundary of the exponential boundedness component near each of the real boundary points. Finally, we show that properties of the singular orbit can be used for rigorous computer computations of the exponential boundedness component.

Phil Rippon (The Open University)

Boundary dynamics of wandering domains: sufficient conditions for uniform behaviour 1

This is the second of a series of talks on this topic.
In this talk we prove a theorem which shows that if the orbit under a holomorphic map $f$ of an interior point in a wandering domain of $f$ converges to the boundaries of the corresponding wandering domains sufficiently quickly, depending on the geometry of the domains, then almost all points on the boundary have the same convergence property; that is, the so-called Denjoy-Wolff set of $(f^n|_U)$ has full harmonic measure. In fact, we prove such a result in a much more general setting. This generalises one part of a dichotomy due to Aaronson and Doering & Mañé concerning the boundary dynamics of inner functions, and we give examples relating to the other part of the dichotomy. This is joint work with A.M. Benini, V. Evdoridou, N. Fagella and G. Stallard.

Gustavo Rodrigues Ferreira (The Open University)

Uniformity in internal dynamics of wandering domains

In 2019, Benini et al. showed that, in a simply connected wandering domain, all pairs of orbits behave the same way relative to the hyperbolic metric, thus giving us our first insight into the general internal dynamics of such domains. After the more recent observation that the same is not true for multiply connected wandering domains, we ask ourselves: how inhomogeneous can multiply connected wandering domains get? We give an answer to this question, in that we show that whatever happens inside an open subset of the domain generalises (in some sense) to the whole wandering domain. After that (time allowing), we will show an application of this result towards the construction of new examples.

Dierk Schleicher (Aix-Marseille Université)

Postsingularly finite entire functions: combinatorics, complexity, Thurston theory

We describe the dynamics of post-singularly finite transcendental entire functions, reporting on work by and with David Pfrang, Roman Chernov, Malte Hassler, and Sergey Shemyakov. We show that every such function has a Homotopy Hubbard Tree that allows one to distinguish different maps within any given parameter space from each other. Via these Hubbard trees, one can define "core entropy" as a measure of complexity of the dynamics. Unlike topological entropy, we show that core entropy is always finite. For certain families of maps such as exponential maps, the entropy is always uniformly bounded (for exponential maps, by log 2). For other families, such as the cosine family, there is no uniform bound throughout the family. The main focus of this talk is on extending Thurston theory from rational maps to a certain family of transcendental entire functions. Together with the existence of Hubbard trees, this provides a possibility to actually classify certain explicit families of entire functions in terms of their Hubbard trees. Some parts of this presentation have the character of an overview of recent work, and others describe work in progress.

Mitsuhiro Shishikura (Kyoto University)

Multiply connected Fatou components

Gwyneth Stallard (The Open University)

Boundary dynamics of wandering domains: overview

Although the dynamical behaviour of periodic Fatou components is well understood, far less is known about the behaviour of wandering domains. Recently, we showed that the internal dynamics of wandering domains can be classified into nine possible types. Now we give several results concerning the relationship between the behaviour of interior points and points on the boundary.
We state our results in the much more general setting of sequences of holomorphic maps between simply connected domains, generalising classical results about iterates of self-maps of the unit disc. Motivated by the Denjoy-Wolff theorem, we introduce the notion of the Denjoy-Wolff set: those points on the boundary whose images have the same limiting behaviour as the images of all interior points. We state several results about the possible size of this set. The two subsequent talks will give details of some of the proofs and construct examples to show the rich variety of possible behaviours that can occur in our setting. This is joint work with Anna Miriam Benini, Vasiliki Evdoridou, Nuria Fagella and Phil Rippon.

Athanasios Tsantaris (University of Nottingham)

Explosion points of Zorich maps

In the theory of one-dimensional holomorphic dynamics, one of the most well studied families of maps is the exponential family $E_\lambda(z):=\lambda e^z$, $\lambda\in \mathbb{C}\setminus\{0\}$. Zorich maps are the quasiregular higher dimensional analogues of the exponential map on the plane. For the exponential family $E_\lambda(z):=\lambda e^z$, $\lambda>0$, it is generally well known that for $0<\lambda\leq 1/e$ the Julia set of $E_\lambda$ is a collection of disjoint curves. Mayer has shown that the set of endpoints of those curves together with the point at infinity form a connected set, but the endpoints themselves are totally separated. In this talk we will discuss how we can generalize this result to the higher dimensional setting of Zorich maps.

James Waterman (Stony Brook University)

Whether Lakes of Wada continua can arise in complex dynamics is a long standing open problem, an analogue of which Fatou first posed in 1920 concerning Fatou components of rational functions. We discuss constructing a transcendental entire function for which infinitely many Fatou components share the same boundary, answering this question. Our theorem also provides the first example of an entire function having a simply connected Fatou component whose closure has a disconnected complement, answering a recent question of Boc Thaler. Using the same techniques, we discuss giving new counterexamples to a conjecture of Eremenko concerning curves in the escaping set of a transcendental entire function. This is joint work with David Martí-Pete and Lasse Rempe.

Michael Yampolsky (University of Toronto)

Harmonic measures, Julia sets, and computability

The interplay between computability theory and complex analysis leads to interesting results in both disciplines. I will discuss some examples relating to computable properties of harmonic measures.

Michel Zinsmeister (Université d'Orléans)

Integral means spectrum

In this talk I will 1) survey some known facts and conjectures about the integral means spectrum, with a focus on their connection with holomorphic dynamics (through the notion of pressure), and 2) discuss some cases where this spectrum can be explicitly computed. (Joint work with B. Duplantier, Han Yong, Nguyen Thi Phung Chi)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.882588803768158, "perplexity": 499.63859323579527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363327.64/warc/CC-MAIN-20211206224536-20211207014536-00362.warc.gz"}
https://people.mpi-inf.mpg.de/~pmiettin/btc/
# Boolean Tensor Factorization

## Clustering Boolean Tensors

Much of the computational complexity of Boolean CP and Tucker3 tensor decompositions comes from the fact that the rank-1 components can overlap. Yet, in many real-world applications, at least one mode can be considered non-overlapping (for example, in subject–relation–object data the relations behave differently to subjects and objects). It is therefore natural to ask what can be gained (and what will be lost) if we restrict the Boolean CP decomposition by requiring one factor matrix to be a cluster assignment matrix, that is, a binary matrix where each row has exactly one non-zero.

We [1] phrase the problem as a maximum-similarity problem (see Figure 1): Given a binary $$n$$-by-$$m$$-by-$$l$$ tensor $$\mathcal{X}$$ and an integer $$k$$, our goal is to find matrices $$A\in\{0,1\}^{n\times k}$$, $$B\in\{0,1\}^{m\times k}$$, and $$C\in\{0,1\}^{l\times k}$$, where $$C$$ is a cluster assignment matrix, such that $$\tilde{\mathcal{X}} = A\otimes B\otimes C$$ ($$\otimes$$ denotes the outer product) is as similar to $$\mathcal{X}$$ as possible.

Given a tensor $$\mathcal{X}$$, the optimal solution of the problem requires binary matrices $$A$$, $$B$$, and $$C$$ that maximize the similarity between $$\mathcal{X}$$ and $$A\otimes B\otimes C$$. The latter can be expressed as $$C(B\odot A)^T$$ ($$\odot$$ denotes the Khatri-Rao product) when we compare to the matricization of $$\mathcal{X}$$ along the third mode, $$X_{(3)}$$, so the objective is $$\text{sim}(X_{(3)}, C(B\odot A)^T)$$. If we replaced $$B\odot A$$ with an arbitrary binary matrix, this would be exactly the Hypercube segmentation problem defined by Kleinberg et al. [2004]: Given a set $$S$$ of $$l$$ vertices of the $$d$$-dimensional cube $$\{0,1\}^d$$, find $$k$$ vertices $$P_1,\ldots,P_k\in \{0,1\}^d$$ and a partition of $$S$$ into $$k$$ segments to maximize $$\sum_{i=1}^k\sum_{c\in S} \text{sim}(P_i, c)$$. Therefore we employ an algorithm that resembles those for hypercube segmentation, with the added restrictions on the centroids.

Our algorithm, SaBoTeur (Sampling for Boolean Tensor clustering), acts on the unfolded tensor $$X_{(3)}$$, that is, the tensor $$\mathcal{X}$$ turned into a matrix by arranging its tubes (fibers of the third mode) as columns. In each iteration, SaBoTeur samples $$k$$ rows of $$X_{(3)}$$ as the initial, unrestricted centroids. The next step is to turn these centroids into the restricted type, which means computing the maximum-similarity binary rank-1 decomposition of each centroid. Then, SaBoTeur assigns each row of $$X_{(3)}$$, i.e. each slice of $$\mathcal{X}$$, to its closest restricted centroid. The sampling is repeated multiple times, and in the end the factors that gave the highest similarity are returned.

We show that, like the hypercube segmentation problem, the maximum-similarity binary rank-1 decomposition admits a PTAS. For hypercube segmentation, Alon and Sudakov [1999] present a randomized algorithm that in expectation attains a similarity within $$(1 - \epsilon)$$ of the optimum and can be derandomized. To show the approximability of the maximum-similarity binary rank-1 decomposition, we prove that a variation of Alon and Sudakov's algorithm solves the decomposition while the approximation bounds are maintained.

## References

1. Clustering Boolean Tensors. Data Mining and Knowledge Discovery 29(5), 2015, 1343–1373. doi:10.1007/s10618-015-0420-3
2. Clustering Boolean Tensors. arXiv:1501.00696 [cs.NA].
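Below is a minimal illustrative sketch of the SaBoTeur sampling loop described above. It is not the authors' released implementation; the function names `rank1_binary` and `saboteur` and the parameters `iters` and `samples` are invented for this example, and the PTAS for the maximum-similarity binary rank-1 step is replaced by a simple alternating heuristic.

```python
import numpy as np

def rank1_binary(M, iters=10, rng=None):
    # Alternating heuristic for a maximum-similarity binary rank-1
    # approximation a b^T of a 0/1 matrix M (stands in for the PTAS
    # discussed in the text).
    rng = rng if rng is not None else np.random.default_rng()
    n, m = M.shape
    b = M[rng.integers(n)].copy()  # seed b with a random row of M
    a = np.zeros(n, dtype=int)
    for _ in range(iters):
        # With b fixed, row i of a b^T is the zero row (a_i = 0) or b
        # itself (a_i = 1); pick whichever matches row i of M better.
        a = ((M == b).sum(axis=1) > (M == 0).sum(axis=1)).astype(int)
        # Symmetric exact update for b with a fixed.
        b = ((M == a[:, None]).sum(axis=0) > (M == 0).sum(axis=0)).astype(int)
    return a, b

def saboteur(X, k, samples=50, seed=0):
    # Sketch of the SaBoTeur loop on a 0/1 tensor X of shape (n, m, l):
    # sample k slices as centroids, restrict each to binary rank 1,
    # assign every slice to its most similar restricted centroid, and
    # keep the best assignment over all sampling rounds.
    rng = np.random.default_rng(seed)
    n, m, l = X.shape
    X3 = X.reshape(n * m, l).T  # mode-3 unfolding: one row per slice
    best_sim, best = -1, None
    for _ in range(samples):
        rows = rng.choice(l, size=k, replace=False)
        cents = np.stack(
            [np.outer(*rank1_binary(X3[r].reshape(n, m), rng=rng)).ravel()
             for r in rows])  # k restricted centroids
        matches = (X3[:, None, :] == cents[None, :, :]).sum(axis=2)  # l-by-k
        total = matches.max(axis=1).sum()
        if total > best_sim:
            best_sim, best = total, (cents, matches.argmax(axis=1))
    return best  # (restricted centroids, cluster label per slice)
```

For instance, `saboteur(X, k=5)` on a 0/1 numpy array `X` of shape `(n, m, l)` returns the restricted centroids and a cluster label for each of the `l` slices.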
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9704384803771973, "perplexity": 736.8766887837344}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517745.15/warc/CC-MAIN-20190418161426-20190418183426-00354.warc.gz"}
http://math.eretrandre.org/tetrationforum/showthread.php?tid=877&pid=7095
Further observations on fractional calc solution to tetration

JmsNxn
Long Time Fellow
Posts: 291 Threads: 67 Joined: Dec 2010
05/30/2014, 04:10 PM (This post was last modified: 05/30/2014, 04:16 PM by JmsNxn.)

Hi, everyone. This is a continuation of my last thread http://math.eretrandre.org/tetrationforu...hp?tid=847 I don't have the time to explain too much now, but I realized a mistake I made, and it leads to an inaccurate result. Take $0<\sigma <1$:

$\frac{1}{2 \pi i} \int_{\sigma - i\infty}^{\sigma + i\infty} \Gamma(s) \frac{w^{-s}}{(^{-s} e)}\,ds = \vartheta(-w)$

Quite remarkably:

$\vartheta(-w) = \sum_{n=0}^\infty \frac{(-w)^n}{(^n e)n!} + k(w)$

where I've recently calculated that:

$k(w) = \lim_{n\to\infty} \frac{w^n}{2 \pi i} \int_{\sigma - i\infty}^{\sigma + i \infty} \Gamma(s-n) \frac{w^{-s}}{(^{n-s} e)}\,ds$

where I have been unable to determine whether this converges to a limit. If it does (and I think it does), we are good. I'm not sure if I can show recursion with this new transform, but I'll try my best to work on this. We recall the important property of recovering tetration:

$[\frac{d^z}{dw^z} \vartheta(w)]_{w=0} = \frac{1}{^z e}$, for $\Re(z) > -1$

tommy1729
Ultimate Fellow
Posts: 1,370 Threads: 335 Joined: Feb 2009
05/30/2014, 09:58 PM

(05/30/2014, 04:10 PM)JmsNxn Wrote: Hi, everyone. This is a continuation of my last thread http://math.eretrandre.org/tetrationforu...hp?tid=847 I don't have the time to explain too much now, but I realized a mistake I made, and it leads to an inaccurate result. Take $0<\sigma <1$: $\frac{1}{2 \pi i} \int_{\sigma - i\infty}^{\sigma + i\infty} \Gamma(s) \frac{w^{-s}}{(^{-s} e)}\,ds = \vartheta(-w)$ Quite remarkably: $\vartheta(-w) = \sum_{n=0}^\infty \frac{(-w)^n}{(^n e)n!} + k(w)$ where I've recently calculated that: $k(w) = \lim_{n\to\infty} \frac{w^n}{2 \pi i} \int_{\sigma - i\infty}^{\sigma + i \infty} \Gamma(s-n) \frac{w^{-s}}{(^{n-s} e)}\,ds$ where I have been unable to determine whether this converges to a limit. If it does (and I think it does), we are good. I'm not sure if I can show recursion with this new transform, but I'll try my best to work on this. We recall the important property of recovering tetration: $[\frac{d^z}{dw^z} \vartheta(w)]_{w=0} = \frac{1}{^z e}$, for $\Re(z) > -1$

But every integral integrates a tetration component ... while we do not have that function yet ... It feels a bit like when someone asks for a proof that a function is analytic, and someone shouts: Cauchy's theorem. Which does nothing ... at least not by itself.

regards

tommy1729

mike3
Long Time Fellow
Posts: 368 Threads: 44 Joined: Sep 2009
05/31/2014, 01:46 AM (This post was last modified: 05/31/2014, 01:51 AM by mike3.)

(05/30/2014, 04:10 PM)JmsNxn Wrote: Hi, everyone. This is a continuation of my last thread http://math.eretrandre.org/tetrationforu...hp?tid=847 I don't have the time to explain too much now, but I realized a mistake I made, and it leads to an inaccurate result. Take $0<\sigma <1$: $\frac{1}{2 \pi i} \int_{\sigma - i\infty}^{\sigma + i\infty} \Gamma(s) \frac{w^{-s}}{(^{-s} e)}\,ds = \vartheta(-w)$ Quite remarkably: $\vartheta(-w) = \sum_{n=0}^\infty \frac{(-w)^n}{(^n e)n!} + k(w)$ where I've recently calculated that: $k(w) = \lim_{n\to\infty} \frac{w^n}{2 \pi i} \int_{\sigma - i\infty}^{\sigma + i \infty} \Gamma(s-n) \frac{w^{-s}}{(^{n-s} e)}\,ds$ where I have been unable to determine whether this converges to a limit. If it does (and I think it does), we are good.
I'm not sure if I can show recursion with this new transform, but I'll try my best to work on this. We recall the important property of recovering tetration: $[\frac{d^z}{dw^z} \vartheta(w)]_{w=0} = \frac{1}{^z e}$, for $\Re(z) > -1$

Two issues I see here:

1. The integral for $k(w)$ involves a complex tetration. Yet, we don't have complex tetration in the first place if we are trying to construct it -- so how does this work?

2. Given the chaotic behavior of tetration I've mentioned, I don't think this integral converges. The following is a graph of the reciprocal Kneser tetration $\frac{1}{^z e}$ on the complex plane. The brightness gives the magnitude (brighter = bigger), and the hue the phase. The lines show effective (i.e. when shifted by the $n - s$ in the tower) integration paths for a $\sigma$ value near $\frac{1}{2}$ and increasing $n$. Grey areas are areas where the function could not be computed due to arithmetic overflow. It will have both very large and very small values in those areas. (When I say "very large" I mean HUGE -- the integer parts of the magnitudes of most of these numbers are so big that all the matter in the Universe could not give anywhere even close to enough stuff to write down their digits.)

I didn't include the factor of $\Gamma$ or the $w^{-z}$ factor in the plot, because these things don't decay strongly enough to suppress the monster growth of reciprocal tetration in the complex plane (also, to include the $\Gamma$ factor in this specific formula would require the creation of a separate graph for each $n$, and I don't want to clog this post up with graphs!). The areas of huge values will remain regardless, so this should still be a useful illustration of the problem. I'd be extremely surprised if an integral through all that mess was well-behaved enough to have some kind of limit.

This seems to be a common problem plaguing all of these tetration integral formulas. I think any such formula, with nothing to cancel out the rapid growth, that takes a limit of integrals in the right half-plane with integration paths extending further and further right and crossing that region, or which relies on such a limit, is doomed to failure.

I did, however, have an idea for a possible solution: what about trying to use your fractional calculus not for the tetration function, but for the super-logarithm? The principal branch of the super-logarithm $\mathrm{slog}(z)$ is defined with

$\mathrm{slog}(1) = 0$
$\mathrm{slog}(e^z) = \mathrm{slog}(z) + 1$
$\mathrm{slog}(z)$ is holomorphic for all $\Re(z) > \Re(L)$, where $L = -W_0(-1)$ is a principal fixed point of the logarithm.

Henryk Trappmann showed that a function satisfying these conditions (actually, the range of holomorphy need only be the "sickle" between the two principal fixed points of the logarithm, but the above is simpler to state) is unique. Kneser gave the construction which shows it exists. This function is very well-behaved in the region given, compared to the tetration. It looks to be asymptotically exponentially bounded -- indeed, to be asymptotically bounded by a linear function there. That is, $|\mathrm{slog}(z)| < K|z|$ for some $K > 0$ and $\Re(z) > R > \Re(L)$ for any $R$. The only rub is that the non-extended version (i.e. only what you can derive from the first two criteria above) is not defined at every positive integer; rather, it is defined at every power tower $1$, $e$, $e^e$, $e^{e^e}$, ... and takes on integer values at each of those. I'm not sure how that impacts the use of your methods.
With the super-logarithm in hand, you then have $\mathrm{slog}^{-1}(z) =\ ^z e$ (functional inverse).

tommy1729
Ultimate Fellow
Posts: 1,370 Threads: 335 Joined: Feb 2009
05/31/2014, 08:30 PM

I'm sorry Mike, but I think switching to slog is not going to help. Convergence is not the big problem here; the big problem is the self-reference. I see no correlation between the convergence issue and the self-reference issue. As you adequately put: "The integral involves a complex tetration. Yet, we don't have complex tetration in the first place if we are trying to construct it -- so how does this work?" (quote slightly modified)

Ergo, without a fundamental change in the strategy, I see no future for this method at the moment. After a countable amount of efforts I gave up on trying to fix this anomaly. Self-reference is the red wire and déjà-vu in hard complex analysis. It requires the action of a master.

regards

tommy1729

mike3
Long Time Fellow
Posts: 368 Threads: 44 Joined: Sep 2009
06/01/2014, 11:16 PM (This post was last modified: 06/02/2014, 07:30 AM by mike3.)

(05/31/2014, 08:30 PM)tommy1729 Wrote: I'm sorry Mike, but I think switching to slog is not going to help. Convergence is not the big problem here; the big problem is the self-reference. I see no correlation between the convergence issue and the self-reference issue. As you adequately put: "The integral involves a complex tetration. Yet, we don't have complex tetration in the first place if we are trying to construct it -- so how does this work?" (quote slightly modified) Ergo, without a fundamental change in the strategy, I see no future for this method at the moment. After a countable amount of efforts I gave up on trying to fix this anomaly. Self-reference is the red wire and déjà-vu in hard complex analysis. It requires the action of a master. regards tommy1729

Yes, however I believe, given this is a continuation of the previous thread, what is being done is to hypothesize that a tetration function satisfying certain criteria exists. Namely, he hypothesizes that there exists a tetration function $F(z)$ satisfying the criteria:

0. $F(z)$ satisfies the tetration functional equations and is holomorphic on at least a cut plane, so $F(0) = 1$ and $F(z+1) = e^{F(z)}$.
1. $|\frac{1}{F(z)}| \le Ce^{\alpha |\Im(z)|}$ for $0 \le \alpha < \pi/2$, $\Re(z) < -1$.
2. $\frac{1}{F(z)}$ decays uniformly as $\Re(z) \to \infty$.
3. $\Re(F(z)) > 0$ for $\Re(z) > -1$.

Then he works from that hypothesis to a formula for that function using his fractional calculus methods. In particular, using his fractional calculus results he gets the first formula given in his first post in this new thread from the above hypotheses, and then works from there.

(--- Note, this is a slight derail as you were talking about circularity, but I just noticed this! ---)

Now, Kneser's function looks to show (I don't have a proof on hand) the existence of a function satisfying criteria 0, 1, and 3. The problem with this is that there does not exist a tetration function which also satisfies criterion 2! This is a consequence of the chaotic nature of the exponential map (the fact that the Julia set $J[\exp]$ is the whole complex plane $\mathbb{C}$, so it is chaotic everywhere). So his method looks to start from a flawed premise, and therefore it is no surprise it does not converge. I just realized this, as I hadn't quite paid close enough attention to his criteria before to notice criterion (2) above.
(Now, if you, JmsNxn, or anyone else can shoot down my argument above as to why a tetration function satisfying the above criteria doesn't exist, I'd be happy to hear about it. Though I don't think the Kneser function would be the one that would work, since at least from the graph its reciprocal does not appear to satisfy hypothesis 2.) EDIT: I also notice that, as strictly worded, criterion 1 does not apply either, due to the pole at $z = -1$, but it would work for $\Re(z) < -1 - \epsilon$.

(--- End derail ---)

This does not look circular, since you can try to start from a series of hypotheses to attempt to construct an object satisfying them. If the object is constructed successfully, then that shows that the statement "there exists an object satisfying these hypotheses" is true. In this case, it is not, but that does not matter with regard to the validity of the underlying method in general. The reasoning is:

-- We seek the construction of an object satisfying some hypotheses.
1. Deduce from the hypotheses and known results an equation which an object satisfying them would also satisfy, and such that an object satisfying the formula would also satisfy the original hypotheses.
2. If the equation can be gotten to a form where it involves only known quantities on one side and the hypothesized object on the other, attempt to calculate a solution. If the solution can be obtained, then we have an object satisfying the given hypotheses.
--

The argument is not circular. By using truths already proven, it essentially restates the hypotheses in a different form, and then solves that formula, which is equivalent by logic to the original hypotheses. At least that's what I get from it, anyways. I'm not sure if my above description is entirely right, but hopefully it should show why this is not circular. Although, what he gave in the first post does not appear to be complete, since he hasn't yet gotten the formula to a form involving only known quantities such as only the integer (discrete) values of tetration, which follow immediately from criterion 0. (derail) (although if the premise is flawed this is not going to go anywhere anyways -- I'm just saying) (/derail)

JmsNxn
Long Time Fellow
Posts: 291 Threads: 67 Joined: Dec 2010
06/04/2014, 03:11 AM (This post was last modified: 06/04/2014, 03:15 AM by JmsNxn.)

I understand the circularity ^_^, it was worded a little weird. I got excited when I thought of applying these methods to tetration and I didn't consider too much about the erratic behaviour of the tetration function in the complex plane. As mike pointed out, it blows up as $n$ grows in $1/(^{n-s} e)$, which is not what I would've expected :\

However! Using the slogarithm makes a lot more sense. NOW I have something to say. Using fractional calculus (and even Carlson's theorem if you want), if a slogarithm satisfies the exponential bounds in a half plane, it is UNIQUELY determined by the values it takes on at integers. This implies that the tetration it provides is fully determined by the sequence of numbers $a_n$ such that $(^{a_n} e) = n$. Now that's gotta be something interesting. Particularly:

$\int_0 ^\infty t^{z-1} \sum_{n=0}^\infty a_n (-t)^n/n!\,dt = \Gamma(z)\,\mathrm{slog}(-z)$ for $0 < \Re(z) < \Re(L)$

That's pretty interesting. I'll have to mull over what we can accomplish with this, but a very good idea I'm thinking about is determining these $a_n$ (or a criterion they all satisfy) and seeing what else we can do with them.
FURTHERMORE, find me a function that (a) satisfies $f(e^n) = f(n)+1$ and (b) is holomorphic in a half plane and satisfies our exponential bounds, then DA DUM DA DUM: $f$ is a slogarithm.

tommy1729
Ultimate Fellow
Posts: 1,370 Threads: 335 Joined: Feb 2009
06/04/2014, 10:02 PM (This post was last modified: 06/04/2014, 10:04 PM by tommy1729.)

(06/04/2014, 03:11 AM)JmsNxn Wrote: ... fully determined by the sequence of numbers $a_n$ such that $(^{a_n} e) = n$. Particularly: $\int_0 ^\infty t^{z-1} \sum_{n=0}^\infty a_n (-t)^n/n!\,dt = \Gamma(z)\,\mathrm{slog}(-z)$ for $0 < \Re(z) < \Re(L)$ That's pretty interesting. I'll have to mull over what we can accomplish with this, but a very good idea I'm thinking about is determining these $a_n$ (or a criterion they all satisfy) and seeing what else we can do with them. FURTHERMORE, find me a function that (a) satisfies $f(e^n) = f(n)+1$ and (b) is holomorphic in a half plane and satisfies our exponential bounds, then DA DUM DA DUM: $f$ is a slogarithm.

1) HUH?

$\int_0 ^\infty t^{z-1} \sum_{n=0}^\infty a_n (-t)^n/n!\,dt = \Gamma(z)\,\mathrm{slog}(-z)$ for $0 < \Re(z) < \Re(L)$

What is $\Re(L)$??

2) As for finding those $a_n$ without a given slog or sexp: perhaps useful is the approximation

$\frac{d\,\mathrm{slog}(x)}{dx} \sim C \left(x \ln(x) \ln^{[2]}(x) \ln^{[3]}(x) \cdots\right)^{-1}$

from where you can estimate the $a_n$.

3) *quote*: find me a function that (a) satisfies f(e^n) = f(n)+1 (b) holomorphic in a half plane and ... *end quote*

Probably NO, and (!) that is already a uniqueness criterion for sexp. And I think it is also a uniqueness criterion for slog. Isn't that $f(x)$ $2\pi i$ periodic? Is that not a problem? Need to think about it more though ...

regards

tommy1729

mike3
Long Time Fellow
Posts: 368 Threads: 44 Joined: Sep 2009
06/04/2014, 11:55 PM (This post was last modified: 06/04/2014, 11:58 PM by mike3.)

(06/04/2014, 03:11 AM)JmsNxn Wrote: I understand the circularity ^_^, it was worded a little weird. I got excited when I thought of applying these methods to tetration and I didn't consider too much about the erratic behaviour of the tetration function in the complex plane. As mike pointed out, it blows up as $n$ grows in $1/(^{n-s} e)$, which is not what I would've expected :\ However! Using the slogarithm makes a lot more sense. NOW I have something to say. Using fractional calculus (and even Carlson's theorem if you want), if a slogarithm satisfies the exponential bounds in a half plane, it is UNIQUELY determined by the values it takes on at integers. This implies that the tetration it provides is fully determined by the sequence of numbers $a_n$ such that $(^{a_n} e) = n$. Now that's gotta be something interesting. Particularly: $\int_0 ^\infty t^{z-1} \sum_{n=0}^\infty a_n (-t)^n/n!\,dt = \Gamma(z)\,\mathrm{slog}(-z)$ for $0 < \Re(z) < \Re(L)$ That's pretty interesting. I'll have to mull over what we can accomplish with this, but a very good idea I'm thinking about is determining these $a_n$ (or a criterion they all satisfy) and seeing what else we can do with them. FURTHERMORE, find me a function that (a) satisfies $f(e^n) = f(n)+1$ and (b) is holomorphic in a half plane and satisfies our exponential bounds, then DA DUM DA DUM: $f$ is a slogarithm.

I just refreshed my memory on what Trappmann's uniqueness condition was. It turns out it's a little more complicated. I mentioned holomorphy on the "sickle" between the fixed points. The condition is actually stronger: the function must actually be biholomorphic (holomorphic and injective with holomorphic inverse) on that sickle, not just holomorphic.
In addition, the image of the sickle under the function must be unbounded in the imaginary direction (both directions). The second criterion seems related to the notion of tetration "approaching the fixed points of the logarithm" at $\pm i\infty$ (which gives the super-logarithm singularities at the fixed points). The sickle region is defined as the region bounded by the straight line connecting $L$ and $\bar{L}$ together with its image (a curve) under the exponential $\exp$. It is a subset of the half-plane $\Re(z) > \Re(L)$ if you don't include the boundary. So if you can find coefficients which will make your super-logarithm function satisfy these criteria, then you will have the Kneser super-logarithm.

mike3
Long Time Fellow
Posts: 368 Threads: 44 Joined: Sep 2009
06/05/2014, 12:45 AM (This post was last modified: 06/05/2014, 01:08 AM by mike3.)

I decided to test your integral, plugging in the $a_n$ from the Kneser super-logarithm as calculated via other methods. If it works, you should get that super-logarithm back out. However, I wasn't able to quite get it to converge on a numerical test for $z = 0.1$. The infinite-sum-defined function exhibits what is probably a very slow and very rapidly (tetrationally, I bet!)-increasing period oscillation, and so I'm not sure how well the negative-power factor in the integrand damps it out, i.e. if it damps it out enough to converge.

I want to point out that the Kneser super-logarithm has singularities at the fixed points of the logarithm $L$ and $\bar{L}$. Therefore, on the boundary of the half-plane $\Re(z) > \Re(L)$, there are two singularities. These are logarithmic singularities, and so the function is exponentially unbounded on that half-plane. If you need a tight (and not just asymptotic) bound, try a half-plane $\Re(z) > R > \Re(L)$ for some $R$. Then there are no singularities on the boundary and the function is exponentially bounded on the whole half-plane.

JmsNxn
Long Time Fellow
Posts: 291 Threads: 67 Joined: Dec 2010
06/05/2014, 01:17 AM (This post was last modified: 06/05/2014, 01:23 AM by JmsNxn.)

(06/05/2014, 12:45 AM)mike3 Wrote: I decided to test your integral, plugging in the $a_n$ from the Kneser super-logarithm as calculated via other methods. If it works, you should get that super-logarithm back out. However, I wasn't able to quite get it to converge on a numerical test for $z = 0.1$. The infinite-sum-defined function exhibits what is probably a very slow and very rapidly (tetrationally, I bet!)-increasing period oscillation, and so I'm not sure how well the negative-power factor in the integrand damps it out, i.e. if it damps it out enough to converge. I want to point out that the Kneser super-logarithm has singularities at the fixed points of the logarithm $L$ and $\bar{L}$. Therefore, on the boundary of the half-plane $\Re(z) > \Re(L)$, there are two singularities. These are logarithmic singularities, and so the function is exponentially unbounded on that half-plane. If you need a tight (and not just asymptotic) bound, try a half-plane $\Re(z) > R > \Re(L)$ for some $R$. Then there are no singularities on the boundary and the function is exponentially bounded on the whole half-plane.

Yep, that should fix it, makes a lot more sense. And as to tommy's blunt statement: lol, you're wrong.
$\int_{\sigma - i\infty}^{\sigma + i \infty} \Gamma(z) f(R-z) w^{-z} \,dz = 2 \pi i\sum_{n=0}^\infty f(R+n) \frac{(-w)^n}{n!}$

and as well:

$\int_{\sigma - i\infty}^{\sigma + i \infty} \Gamma(z) f(e^{R-z}) w^{-z} \,dz = 2 \pi i \sum_{n=0}^\infty f(e^{R+n}) \frac{(-w)^n}{n!} = 2 \pi i \sum_{n=0}^\infty (f(R+n)+1) \frac{(-w)^n}{n!} = p(w)$

Similarly:

$\int_{\sigma - i\infty}^{\sigma + i \infty} \Gamma(z) (f(R-z)+1) w^{-z} \,dz = p(w)$

where $R$ is an integer. Now are you going to doubt the one-to-one nature of the Fourier transform?
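[Editorial note: the interchange of Mellin integral and Taylor series that this thread leans on is an instance of Ramanujan's master theorem. In Hardy's formulation (the precise growth condition is quoted here as a pointer to the literature, not a verified citation): if $\phi$ extends analytically to a half-plane $\Re(s) \ge -\delta$ with $|\phi(s)| \le C e^{P\Re(s) + A|\Im(s)|}$ for some $A < \pi$, then

$\int_0^\infty t^{z-1} \left( \sum_{n=0}^\infty \phi(n) \frac{(-t)^n}{n!} \right) dt = \Gamma(z)\,\phi(-z)$

for $0 < \Re(z) < \delta$. Taking $\phi(n) = a_n = \mathrm{slog}(n)$ recovers the formula JmsNxn states above, which is why exponential bounds on a half-plane are the crux of the whole discussion.]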
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 97, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9291653037071228, "perplexity": 777.4182965548677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146809.98/warc/CC-MAIN-20200227191150-20200227221150-00381.warc.gz"}
https://brilliant.org/problems/going-to-the-basics/
# Going to the basics

Let $$b, c \ne 0$$ be two coprime integers, and define $$A = \{ bx + cy : bx + cy > 0;\ x, y \text{ are integers} \}$$. Find the infimum of the set $$A$$.
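A hint for checking your answer (added here for completeness): since $$\gcd(b, c) = 1$$, Bézout's identity gives integers $$x_0, y_0$$ with $$bx_0 + cy_0 = 1$$, so $$1 \in A$$; and every element of $$A$$ is a positive integer, hence at least $$1$$. The infimum is therefore $$1$$, and it is attained as a minimum.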
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9844433069229126, "perplexity": 541.8024223371187}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608765.79/warc/CC-MAIN-20170527021224-20170527041224-00152.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/1-determine-v-circuit-shown--also-determine-v-short-connected-terminals-b-resistances-ohms-q3055214
1. Determine V and I for the circuit shown below. Also, determine V and I when a short is connected between terminals a-b. All resistances are in ohms.

2. For the following circuits, determine R_TH by finding the equivalent resistance "seen" by the load at terminals a-b. All resistances are in ohms.

3. Determine the Thevenin equivalent circuit (both V_TH and R_TH) at terminals a-b. For the Thevenin resistance R_TH, find the equivalent resistance "seen" by the load. All resistances are in ohms.

4. For the circuit of Problem 3, determine the Norton equivalent circuit (both I_N and R_N) at terminals a-b by connecting a short between terminals a-b and solving for I_SC = I_N.

5. The Thevenin resistance R_TH can also be calculated using the Ohm's law equation below. Use the calculated values of V_TH and I_SC from Problems 3 and 4 to determine R_TH. Does R_TH equal the R_EQ value "seen" by the load?

R_TH = V_TH / I_SC

6. Given the following circuit (all resistances are in ohms):

a) What R_L value will provide maximum power transfer to the load?

b) Calculate the power in watts delivered to the load for the value of R_L of part a.

c) It is desired to provide maximum power transfer to a 10 ohm load resistance (R_L). If a resistor (R_4) is placed in parallel with R_2, what is the required R_4 value?
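A reminder for Problem 6 (an editorial note, using the notation of Problems 3-5, not part of the original assignment): once the source network is reduced to its Thevenin equivalent, a voltage V_TH in series with R_TH, the power delivered to the load is P = V_TH^2 * R_L / (R_TH + R_L)^2. This is maximized when R_L = R_TH, and the maximum power is then P_max = V_TH^2 / (4 * R_TH).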
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8783620595932007, "perplexity": 4707.427128692334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720845.92/warc/CC-MAIN-20161020183840-00331-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/find-primitives-of-function-1-4x-sqrt-1-x-2x-2.144074/
# Find primitives of function (1+4x)/sqrt(1 + x + 2x^2)

1. Nov 16, 2006

### Taryn

This is a practice exam question that I have been given! Find the primitives of the function (1+4x)/(sqrt(1+x+2x^2)). My question is: is a primitive the antiderivative? I don't remember my lecturer using "primitive" during our course!

2. Nov 16, 2006

It is basically asking you to find: $$\int \frac{1+4x}{\sqrt{1+x+2x^{2}}} \; dx$$ A primitive is an antiderivative. So set $$u = 1+x+2x^{2}$$. Then $$du = (4x+1) \; dx$$ and you end up with $$\int u^{-\frac{1}{2}} \; du$$. Finding all the primitives means that you add the integration constant $$C$$.

Last edited: Nov 16, 2006

3. Nov 16, 2006

ahhh thanks!
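For completeness, the last step the thread leaves to the reader: $$\int u^{-\frac{1}{2}} \; du = 2u^{\frac{1}{2}} + C$$ so, substituting back $$u = 1+x+2x^{2}$$, $$\int \frac{1+4x}{\sqrt{1+x+2x^{2}}} \; dx = 2\sqrt{1+x+2x^{2}} + C$$ which can be verified by differentiating.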
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9572818875312805, "perplexity": 2260.5533185079225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698544140.93/warc/CC-MAIN-20161202170904-00381-ip-10-31-129-80.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/acceleration-due-to-gravity-help.240407/
# Acceleration due to gravity help

1. Jun 14, 2008

### sheevz

acceleration due to gravity help!!

1. The problem statement, all variables and given/known data

PROB: How long would it take for a mass of 1 kg to fall a distance of 2 m to the surface of the moon?

2. Relevant equations

G = 6.673*10^-11 Nm^2/kg^2
R of moon = 1.76*10^6 m
m of moon = 7.35*10^22 kg

3. The attempt at a solution

I started by finding the acceleration due to gravity by using g = (m)(G)/r^2 (m = mass of moon, G the gravitational constant, r = radius of moon), finding that g is 1.58 m/s^2. Now I am lost as to what formula to use to get the displacement of this object???

2. Jun 14, 2008

### konthelion

Check your acceleration due to gravity for the moon, it should be ~1.62 m/s^2. To solve for time t, use the formula $$x=v_{0}t+\frac{1}{2}at^2$$

3. Jun 14, 2008

### sheevz

OK, I don't know if this is a typo or not, but yes, the acceleration due to gravity for the moon is ~1.62 m/s^2. But in this problem the radius is given @ 1.76*10^6, thus giving an acceleration due to gravity @ ~1.58 m/s^2. And in using the above formula you gave me, x = v_0 t + 0.5at^2, is v_0 my acceleration due to gravity on the moon and a my G constant? Why in the original problem was G given to me -- is it necessary in this? (Probably a stupid question.)

4. Jun 15, 2008

### Gib Z

Those pieces of data were given to you so you could work out the acceleration due to gravity, to sub into the formula konthelion gave to you. Obviously you did this through Newton's Universal Law of Gravitation. After we have the force, using Newton's Second Law we can get the acceleration. We can take the force and acceleration to be practically constant, because the value of r in the Universal Gravitation Law changes by only 2 m during the fall, very small in comparison to the radius of the moon. Although yes, it is true it is not exactly constant.
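For completeness, the numbers that follow from the data as given in the problem (starting from rest, so $$v_0 = 0$$): $$g = \frac{Gm}{r^2} = \frac{(6.673\times 10^{-11})(7.35\times 10^{22})}{(1.76\times 10^{6})^2} \approx 1.58 \text{ m/s}^2$$ and solving $$x = \frac{1}{2}gt^2$$ for t gives $$t = \sqrt{\frac{2x}{g}} = \sqrt{\frac{2(2 \text{ m})}{1.58 \text{ m/s}^2}} \approx 1.6 \text{ s}$$ Using the commonly quoted lunar surface gravity of ~1.62 m/s^2 instead gives essentially the same answer, about 1.57 s.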
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.987761378288269, "perplexity": 712.2527111889802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320532.88/warc/CC-MAIN-20170625134002-20170625154002-00224.warc.gz"}
http://www.gnu.org/software/emacs/manual/html_node/efaq-w32/Uninstall.html
### 3.12 How do I uninstall Emacs?

Emacs does not come with an uninstall program. No files are installed outside of the directories you find in the binary zip archive, so deleting those directories is sufficient to clean away the files. If you ran `addpm`, you'll need to delete the Start Menu group too. The registry entries inserted by `addpm` will not cause any problems if you leave them there, but for the sake of completeness, you can use `regedit` to remove the key `SOFTWARE\GNU\Emacs` under `HKEY_LOCAL_MACHINE` or `HKEY_CURRENT_USER`, and the key `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\emacs.exe` if it exists.
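For those who prefer to script this cleanup, here is a hedged sketch in Python rather than an official uninstaller. It assumes the archive was unpacked to `C:\emacs` (adjust to your install directory), must be run with sufficient privileges for the `HKEY_LOCAL_MACHINE` keys, and uses a recursive helper because `winreg.DeleteKey` only removes keys that have no subkeys:

```python
# Windows-only sketch of the manual uninstall steps described above.
import shutil
import winreg

def delete_key_tree(root, path):
    """Recursively delete a registry key and all of its subkeys."""
    try:
        with winreg.OpenKey(root, path, 0, winreg.KEY_ALL_ACCESS) as key:
            while True:
                try:
                    child = winreg.EnumKey(key, 0)  # always take the first child
                except OSError:                      # no children left
                    break
                delete_key_tree(root, path + "\\" + child)
        winreg.DeleteKey(root, path)
    except FileNotFoundError:
        pass  # key was never created (e.g. addpm was not run)

for root in (winreg.HKEY_LOCAL_MACHINE, winreg.HKEY_CURRENT_USER):
    delete_key_tree(root, r"SOFTWARE\GNU\Emacs")
delete_key_tree(winreg.HKEY_LOCAL_MACHINE,
                r"SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\emacs.exe")
shutil.rmtree(r"C:\emacs", ignore_errors=True)  # the unpacked zip archive
```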
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9059880375862122, "perplexity": 1973.2054152025828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065488.33/warc/CC-MAIN-20150827025425-00078-ip-10-171-96-226.ec2.internal.warc.gz"}
http://nrich.maths.org/1943/index?nomenu=1
Show that for every integer $k$ the point $(x, y)$, where $$x = {2k\over k^2 + 1}, \ y = {k^2 - 1\over k^2 + 1},$$ lies on the unit circle, $x^2 + y^2 =1$. That is, there are infinitely many rational points on this circle. Show that there are no rational points on the circle $x^2 + y^2 =3$.
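A check of the first claim, which is a one-line computation: $$x^2 + y^2 = \frac{(2k)^2 + (k^2 - 1)^2}{(k^2 + 1)^2} = \frac{k^4 + 2k^2 + 1}{(k^2 + 1)^2} = \frac{(k^2 + 1)^2}{(k^2 + 1)^2} = 1.$$ For the second part, one standard approach (a hint, not the only route) is to write a candidate rational point over a common denominator, clear denominators to get $a^2 + b^2 = 3c^2$ in integers with no common factor, and derive a contradiction modulo $4$, where every square is congruent to $0$ or $1$.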
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9874733686447144, "perplexity": 63.493379144644535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095557.73/warc/CC-MAIN-20150627031815-00250-ip-10-179-60-89.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/143404-compute-integral.html
1. ## Compute this integral.

Integral from minus to plus infinity of x(1/3)^x with respect to x. Thanks.

2. Originally Posted by feyomi
Integral from minus to plus infinity of x(1/3)^x with respect to x. Thanks.

$\int_{-\infty}^{\infty} x \cdot 3^{-x} \, dx = \int_{-\infty}^{0} x \cdot 3^{-x} \, dx + \int_{0}^{\infty} x \cdot 3^{-x} \, dx$

It should be easy to show the integral from $-\infty$ to $0$ diverges.
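To spell out that last remark: substituting $t = -x$ gives $\int_{-\infty}^{0} x \cdot 3^{-x} \, dx = -\int_{0}^{\infty} t \cdot 3^{t} \, dt$, and the integrand grows exponentially, so this piece diverges to $-\infty$. (The piece over $[0, \infty)$ converges and equals $\frac{1}{(\ln 3)^2}$.) Hence the integral over the whole real line does not converge.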
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9862358570098877, "perplexity": 1243.6043696425431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105700.94/warc/CC-MAIN-20170819162833-20170819182833-00674.warc.gz"}
http://physics.stackexchange.com/questions/92640/how-would-a-physicist-measure-temperature-of-molten-metals-in-1850-1920s
# How would a physicist measure temperature of molten metals in 1850-1920s?

How would a physicist measure temperature of molten metals in 1850-1920s? What equipment would be used?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8207353949546814, "perplexity": 4718.010415585711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929803.61/warc/CC-MAIN-20150521113209-00062-ip-10-180-206-219.ec2.internal.warc.gz"}
https://ng.siyavula.com/read/maths/jss2/graphs/09-graphs?id=93-practical-applications
# Chapter 9: Graphs

## 9.1 Indicating position

In Chapter 8 you represented algebraic inequalities on a number line. A number line can also be used to give you the position of a number. The number line below shows that the number $3$ is three units to the right of zero and the number $-5$ is five units to the left of zero.

Last year you learnt that we can also show numbers on a vertical number line. A thermometer is an example of an instrument that uses a vertical number line. On the number line below, the number $7$ is seven units above zero and the number $-4$ is four units below zero.

A line has only length. It does not have breadth or width. We say a line has only one dimension. When we describe position along a straight line, we use only one dimension.

line
A line is a continuous path with length but no breadth. A line is one-dimensional.

Look at the grid below. We can describe the position of point X as nine units to the right and two units upwards from zero. This relationship is shown by the red dashed lines. The green dashed lines show the same relationship. From zero, point X is two units upwards and nine units to the right.

Similarly, the position of point Y is seven units upwards and two units to the right of zero. We can also say it is two units to the right and seven units upwards from zero.

The grid represents a plane. A plane is a flat surface that has length and breadth. We say a plane has two dimensions. When we describe position on a plane, we use two dimensions. We can use the length and breadth of a plane to determine its area.

plane
A plane is a flat surface that has length and breadth. A plane is two-dimensional.

In later grades you will learn how to describe position in a three-dimensional space, which has length and breadth and height.

## 9.2 The Cartesian plane

The Cartesian plane is a plane used to show the position of any point relative to a horizontal number line and a vertical number line. The horizontal number line is called the $x$-axis and the vertical number line is called the $y$-axis. The two axes intersect at zero on both number lines. This point of intersection is called the origin. The Cartesian plane is named after a French mathematician, René Descartes, who introduced the system.

The position of any point on the Cartesian plane is given as two coordinates, an $x$-coordinate and a $y$-coordinate. The $x$-coordinate gives the horizontal relationship between the point and the origin. The $y$-coordinate gives the vertical relationship between the point and the origin. The coordinates are written in the form $(x, y)$. When we work on the Cartesian plane, the $x$-coordinate is always placed first. The coordinates of the origin are $(0, 0)$.

Cartesian plane
The Cartesian plane is a plane used to show the position of any point relative to the intersection of a horizontal number line and a vertical number line.

origin
The origin on the Cartesian plane is the point of intersection of the $x$-axis and the $y$-axis. Its coordinates are $(0, 0)$.

coordinates
The coordinates of a point on the Cartesian plane give the position of the point relative to the origin of a horizontal number line and a vertical number line. Coordinates are written in the form $(x, y)$.

### Worked example 9.1: Determining the coordinates of a point on the Cartesian plane

Determine the coordinates of point $X$ on the Cartesian plane below.

1. Step 1: Write down the label of the point using the general format for coordinates.

2.
Step 2: Determine the horizontal displacement of the point from the origin. This is the $x$-coordinate.

"Displacement" means how much something has moved.

Along the horizontal axis, you must move from the origin to the right until you get to the number $5$. The $x$-coordinate is $5$.

3. Step 3: Determine the vertical displacement of the point from the origin. This is the $y$-coordinate.

To get from the number $5$ on the horizontal axis to point $X$, you must move two units downwards. This is the same as moving from the origin downwards to the number $-2$. The $y$-coordinate is $-2$.

4. Step 4: Write down the answer in the correct format. Remember that the $x$-coordinate must always be written down first, in other words, on the left.

When we want to show the position of a point on the Cartesian plane, we plot the point as described in the worked example that follows.

plot
We plot a point on the Cartesian plane by marking it according to its coordinates.

### Worked example 9.2: Plotting a point on the Cartesian plane

Plot the point $X (-3, 6)$ on the Cartesian plane.

1. Step 1: Draw the axes of the Cartesian plane correctly.

• The axes must be perpendicular to each other.
• There must be an arrowhead at both ends of each axis.
• Label the horizontal axis $x$ and the vertical axis $y$.
• Label the origin with the number 0.
• Insert equal divisions on each axis. Label them with whole numbers, as you would do for a number line.

2. Step 2: From the origin, move along the horizontal axis until you reach the $x$-coordinate. The $x$-coordinate is $-3$.

3. Step 3: From the $x$-coordinate, move parallel to the vertical axis towards the $y$-coordinate. Stop when you are across from the $y$-coordinate. The $y$-coordinate is $6$.

4. Step 4: Draw a dot to show the position of the point. Add the label of the point.

### Exercise 9.1: Determine the coordinates and plot points on the Cartesian plane

1. Write down the coordinates for each of the points A to F.

If the $x$-coordinate of a point is 0, it means the point is not displaced from the origin in a horizontal direction. The point lies on the $y$-axis. Similarly, if the $y$-coordinate of a point is 0, it means the point is not displaced from the origin in a vertical direction. The point lies on the $x$-axis.

2. Plot the following points on the Cartesian plane: $A(-1, -4)$, $B(0, 0)$, $C(-6, 3)$, $D(6, 4)$, $E(2, -2)$, $F(-3, -5)$

## 9.3 Plotting graphs

### Compiling a table of values

An algebraic equation can be used to pair up $x$-coordinates and $y$-coordinates on the Cartesian plane. Take the $x$-coordinate $2$ as an example. If we put the number $2$ in the place of $x$ in the equation $y=x+1$, then $y=2+1=3$. This means the equation $y=x+1$ pairs the $x$-coordinate $2$ with the $y$-coordinate $3$. This gives us a point in the Cartesian plane: $(2, 3)$.

Another equation will pair up the $x$-coordinate $2$ with another $y$-coordinate. For the equation $y=-2x+3$, the $y$-coordinate that corresponds to the $x$-coordinate $2$ is $y = (-2)(2)+3 = -4+3 = -1$. This gives us the point $(2, -1)$ on the Cartesian plane.

We can use this method to compile a table of coordinate pairs for a specific equation. Such a table is known as a table of values.

table of values
A table of values shows the pairs of coordinates for a given algebraic equation, and can be used for plotting the points.

### Worked example 9.3: Compiling a table of values

Compile a table of values for the equation $y=3x-1$.

1. Step 1: Choose at least five $x$-coordinates to pair up.
The whole numbers $-2,\;-1,\;0,\;1,\;2$ usually work well.

\begin{array}{|c|c|c|c|c|c|} \hline x & -2 & -1 & \;0\; & \;1\; & \;2\; \\ \hline y & & & & & \\ \hline \end{array}

2. Step 2: Replace the $x$ in the given equation with each of the values you chose in Step 1. Calculate the corresponding $y$-coordinates.

\begin{align} y&=3x-1 \\ &=3(-2)-1 \\ &=-6-1 \\ &=-7 \end{align}

\begin{align} y&=3x-1 \\ &=3(-1)-1 \\ &=-3-1 \\ &=-4 \end{align}

\begin{align} y&=3x-1 \\ &=3(0)-1 \\ &=0-1 \\ &=-1 \end{align}

\begin{align} y&=3x-1 \\ &=3(1)-1 \\ &=3-1 \\ &=2 \end{align}

\begin{align} y&=3x-1 \\ &=3(2)-1 \\ &=6-1 \\ &=5 \end{align}

3. Step 3: Enter the values you calculated in Step 2 in your table.

\begin{array}{|c|c|c|c|c|c|} \hline x & -2 & -1 & \;0\; & \;1\; & \;2\; \\ \hline y & -7 & -4 & -1 & 2 & 5 \\ \hline \end{array}

### Exercise 9.2: Compile a table of values

Compile a table of values for each of the following equations.

1. $y=x+4$

\begin{array}{|c|c|c|c|c|c|} \hline x & -2 & -1 & \;0\; & \;1\; & \;2\; \\ \hline y & 2 & 3 & 4 & 5 & 6 \\ \hline \end{array}

2. $y=3x+2$

\begin{array}{|c|c|c|c|c|c|} \hline x & -2 & -1 & \;0\; & \;1\; & \;2\; \\ \hline y & -4 & -1 & 2 & 5 & 8 \\ \hline \end{array}

3. $y=-5x$

\begin{array}{|c|c|c|c|c|c|} \hline x & -2 & -1 & \;0\; & \;1\; & \;2\; \\ \hline y & 10 & 5 & 0 & -5 & -10 \\ \hline \end{array}

4. $y=-2x+2$

\begin{array}{|c|c|c|c|c|c|} \hline x & -2 & -1 & \;0\; & \;1\; & \;2\; \\ \hline y & 6 & 4 & 2 & 0 & -2 \\ \hline \end{array}

5. $y=-x+\frac{1}{2}$

\begin{array}{|c|c|c|c|c|c|} \hline x & -2 & -1 & \;0\; & \;1\; & \;2\; \\ \hline y & 2\frac{1}{2} & 1\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -1\frac{1}{2} \\ \hline \end{array}

### Plotting points from a table of values

Each pair in a table of values represents the coordinates of a point on the Cartesian plane. An algebraic equation therefore gives the relationship between the $x$-coordinates and the $y$-coordinates of a collection of points. We can plot these points to see whether they form a pattern. This pattern or shape is called a graph.

graph
A graph is a pattern on a Cartesian plane formed by points whose coordinates are obtained from a given equation.

### Worked example 9.4: Plotting a graph

Plot a graph for the equation $y=3x-1$.

1. Step 1: Compile a table of values.

We compiled a table of values for this equation in the previous worked example:

\begin{array}{|c|c|c|c|c|c|} \hline x & -2 & -1 & \;0\; & \;1\; & \;2\; \\ \hline y & -7 & -4 & -1 & 2 & 5 \\ \hline \end{array}

2. Step 2: Write down the coordinates of the points represented in the table of values in the correct format.

3. Step 3: Draw the axes of the Cartesian plane correctly.

• The axes must be perpendicular to each other.
• There must be an arrowhead at both ends of each axis.
• Label the horizontal axis $x$ and the vertical axis $y$.
• Label the origin with the number 0.
• Insert equal divisions on each axis. Label them with whole numbers, as you would do for a number line.

4. Step 4: Plot each of the points in Step 2 as described in Worked example 9.2.

5. Step 5: Draw a line through the points to get the graph. These points form a straight line, so you must use a ruler to draw the line.

### Exercise 9.3: Plot graphs

Plot a graph for each of the following equations.

1. $y=x+2$

\begin{array}{|c|c|c|c|c|c|} \hline x & -2 & -1 & \;0\; & \;1\; & \;2\; \\ \hline y & 0 & 1 & 2 & 3 & 4 \\ \hline \end{array}

2. $y=2x-3$

\begin{array}{|c|c|c|c|c|c|} \hline x & -2 & -1 & \;0\; & \;1\; & \;2\; \\ \hline y & -7 & -5 & -3 & -1 & 1 \\ \hline \end{array}

3. $y=-x-3$
\begin{array}{|c|c|c|c|c|c|} \hline x & -2 & -1 & \;0\; & \;1\; & \;2\; \\ \hline y & -1 & -2 & -3 & -4 & -5 \\ \hline \end{array}

4. $y=-3x$

\begin{array}{|c|c|c|c|c|c|} \hline x & -2 & -1 & \;0\; & \;1\; & \;2\; \\ \hline y & 6 & 3 & 0 & -3 & -6 \\ \hline \end{array}

5. $y=-\frac{1}{2}x+3$

\begin{array}{|c|c|c|c|c|c|} \hline x & -2 & -1 & \;0\; & \;1\; & \;2\; \\ \hline y & 4 & 3\frac{1}{2} & 3 & 2\frac{1}{2} & 2 \\ \hline \end{array}

## 9.4 Practical applications

Graphs can be very useful in everyday situations. Here are some examples:

• A distance-time graph shows us the relationship between the distance covered and the time elapsed for any given journey. Note that $\text{speed}=\frac{\text{distance}}{\text{time}}$.
• A velocity-time graph shows us how the velocity of an object changes as time goes by. The displacement of an object is the distance it covered in a certain direction in a straight line, and velocity measures how quickly displacement changes. Note that $\text{acceleration}=\frac{\text{change in velocity}}{\text{time}}$.
• A wage-hours graph shows the relationship between the number of hours worked and the total wages earned.
• A cost-items graph shows the relationship between the number of items bought and the total cost.

### Worked example 9.5: Plotting and interpreting a wages-time graph

Ruqayyah earns ₦3,000 per hour. Plot a graph to show her total wages for up to five hours of work. Use the graph to answer the following questions:

1. Determine Ruqayyah's wages for three hours of work.
2. Determine how long Ruqayyah must work to earn wages of ₦12,000.

1. Step 1: Compile a table of values.

Start at zero. Ruqayyah earns ₦3,000 per hour. Therefore:

\begin{array}{|l|r|r|r|r|r|r|} \hline \text{Time worked} & \;\;\;0 & 1 & 2 & 3 & 4 & 5 \\ \hline \text{Total wages} & 0 & 3,000 & 6,000 & 9,000 & 12,000 & 15,000 \\ \hline \end{array}

2. Step 2: Draw a horizontal axis and vertical axis.

• Only draw the negative parts of the axes if they are needed. There is no such thing as negative time or negative wages, so in this example we only draw the positive parts of the axes.
• Insert an arrowhead at the end of each axis.
• Label each axis with a description of what the axis represents, as well as a unit of measurement.
• Label the origin with the number 0.
• Insert equal divisions on each axis. Label them with appropriate numbers.

3. Step 3: Plot the points represented in the table of values you compiled in Step 1. Draw a line through the points to get the graph.

4. Step 4: Use the graph to answer the questions.

To determine Ruqayyah's wages for three hours of work:

• Find 3 hours on the horizontal axis.
• Move from the value on the horizontal axis straight towards the graph.
• From the position on the graph, move straight towards the vertical axis.
• Write down the value from the vertical axis.

For three hours of work, Ruqayyah receives wages of ₦9,000.

To determine how long Ruqayyah must work to earn wages of ₦12,000:

• Find ₦12,000 on the vertical axis.
• Move from the value on the vertical axis straight towards the graph.
• From the position on the graph, move straight towards the horizontal axis.
• Write down the value from the horizontal axis.

Ruqayyah must work 4 hours to earn wages of ₦12,000.

### Worked example 9.6: Interpreting a distance-time graph

Chizoba lives 90 metres from a shop. The graph shows his journey to the shop, the time he spent at the shop, and his journey back home. Interpret each section of the graph.

1. Step 1: Look at the first section of the graph, AB.
• The graph shows that the distance from Chizoba's home increases as time goes by. This means he is moving towards the shop. • Point B is across from the distance 90 metres on the vertical axis. This means Chizoba is at the shop. • Point B is straight above 6 minutes on the horizontal axis. This means it took Chizoba 6 minutes to reach the shop. 2. Step 2: Look at the second section of the graph, BC. • The graph shows that the distance from Chizoba's home remains constant at 90 metres as time goes by. This means Chizoba did not move away from the shop. • Point B is straight above 6 minutes on the horizontal axis. Point C is straight above 14 minutes on the horizontal axis. This means Chizoba spent $14 - 6 = 8$ minutes at the shop. 3. Step 3: Look at the third section of the graph, CD. • The graph shows that the distance from Chizoba's home decreases as time goes by. This means he is moving away from the shop and towards his home. • Point D is at zero on the vertical axis. This means Chizoba is back at his home. • Point C is above 14 minutes on the horizontal axis. Point D is at 20 minutes on the horizontal axis. This means it took Chizoba $20 - 14 = 6$ minutes to move from the shop back to his home. • Point D shows that the full journey took 20 minutes. We can confirm this by adding the time for each interval: \begin{align} \text{Total time}&=\text{Time AB}+\text{Time BC}+\text{Time CD} \\ &=6\text{ minutes}+8\text{ minutes}+6\text{ minutes} \\ &=20\text{ minutes} \end{align} ### Exercise 9.4: Apply graphs to everyday situations 1. The full price of one movie ticket for an adult is ₦1,500 at a certain cinema. 1. Plot a graph to show the total price of 10, 20, 30, 40 and 50 of these movie tickets. \begin{array}{|l|r|r|r|r|r|r|} \hline \text{Number of tickets} & \;\;\;0 & 10 & 20 & 30 & 40 & 50 \\ \hline \text{Total price} & 0 & 15,000 & 30,000 & 45,000 & 60,000 & 75,000 \\ \hline \end{array} 1. Use the graph to determine the total price of 35 of these movie tickets. On the horizontal axis, 35 is halfway between 30 and 40. This gives the number halfway between ₦45,000 and ₦60,000 on the vertical axis: $$\dfrac{ ₦45,000 + ₦60,000}{2}= ₦52,500$$ $\therefore$ The price for 35 tickets is ₦52,500. 1. Use the graph to determine how many of these movie tickets you can buy for ₦37,500. On the vertical axis, ₦37,500 is halfway between ₦30,000 and ₦45,000, because: • $45,000 - 30,000 = 15,000$ • $15,000 \div 2 = 7,500$ • $30,000 + 7,500 = 37,500$ From the point on the graph for ₦37,500, straight down to the horizontal axis gives the number halfway between 20 and 30: $\dfrac{20 + 30}{2}=25$. $\therefore$ You can buy 25 tickets for ₦37,500. 2. Adaeze lives 80 m from her friend. She walks to her friend's home and spends a few minutes there. Adaeze then walks back home. The graph shows her journey. Use the graph to answer the questions that follow. 1. How long did it take Adaeze to reach her friend's home? Adaeze reached her friend's home when she was 80 metres from her own home. The corresponding time is 5 minutes, so it took 5 minutes for Adaeze to reach her friend's home. 1. How much time did Adaeze spend at her friend's home? Adaeze's distance from her home remained constant at 80 metres for 15 minutes $-$ 5 minutes = 10 minutes. $\therefore$ Adaeze spent 10 minutes at her friend's home. 1. How long did it take Adaeze to walk back from her friend's home to her own home? Adaeze's distance from her home decreased for 20 minutes $-$ 15 minutes = 5 minutes.
$\therefore$ It took Adaeze 5 minutes to walk back to her own home. 3. The table shows the velocity of a car that is travelling in a straight line over a period of 90 seconds. \begin{array}{|l|r|r|r|r|r|r|r|} \hline \text{Time (seconds)} & \;\;\;0 & 15 & 30 & 45 & 60 & 75 & 90\\ \hline \text{Velocity (m/s)} & 0 & 10 & 20 & 30 & 40 & 40 & 40 \\ \hline \end{array} 1. Plot a velocity-time graph for the car. 1. Give the maximum velocity that the car reached. The maximum value for velocity is 40 m/s. 1. How long did it take the car to reach maximum velocity? It took the car 60 seconds to reach a maximum velocity of 40 m/s. 1. What happened from 60 seconds to 90 seconds? The velocity of the car remained the same at 40 m/s. 1. From 0 seconds to 60 seconds, the car accelerates from 0 m/s to 40 m/s. Determine the acceleration of the car. \begin{align} \text{acceleration}&=\frac{\text{change in velocity}}{\text{time}} \\ &=\frac{40\text{ m/s}-0\text{ m/s}}{60\text{ s}-0\text{ s}}\\ &=\frac{40}{60}\;\frac{\text m}{\text s}\times\frac{1}{\text s}\\ &=\frac{2}{3}\text{ m/s}^2 \end{align} ## 9.5 Summary • A line is a continuous path with length but no breadth. A line is one-dimensional. • A plane is a flat surface that has length and breadth. A plane is two-dimensional. • The Cartesian plane is a plane used to show the position of any point relative to the intersection of a horizontal number line and a vertical number line. • The origin on the Cartesian plane is the point of intersection of the $x$-axis and the $y$-axis. Its coordinates are $(0, 0)$. • The coordinates of a point on the Cartesian plane give the position of the point relative to the origin of a horizontal number line and a vertical number line. Coordinates are written in the form $(x, y)$. • We plot a point on the Cartesian plane by marking it according to its coordinates. • If the $x$-coordinate of a point is 0, it means the point is not displaced from the origin in a horizontal direction. The point lies on the $y$-axis. Similarly, if the $y$-coordinate of a point is 0, it means the point is not displaced from the origin in a vertical direction. The point lies on the $x$-axis. • A table of values shows the pairs of coordinates for a given algebraic equation. • A graph is a pattern formed on a Cartesian plane by points whose coordinates are obtained from a given equation. • Graphs can be applied to everyday situations.
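As a quick end-of-chapter check, the steps of Worked examples 9.3 and 9.4 can be reproduced with a short script. The sketch below is only an illustration (the chapter itself uses no code, and Python is an arbitrary choice); it rebuilds the table of values for the equation $y=3x-1$ and prints the coordinate pairs that are plotted on the Cartesian plane:

```python
# Rebuild the table of values for y = 3x - 1 (Worked example 9.3)
# and list the (x, y) coordinate pairs to plot (Worked example 9.4).
def y(x):
    return 3 * x - 1

xs = [-2, -1, 0, 1, 2]               # the whole numbers chosen in Step 1
table = [(x, y(x)) for x in xs]      # Step 2: calculate each y-coordinate

print("x:", [p[0] for p in table])   # x: [-2, -1, 0, 1, 2]
print("y:", [p[1] for p in table])   # y: [-7, -4, -1, 2, 5]
print("points:", table)              # pairs such as (-2, -7), (0, -1), (2, 5)
```

Because the equation is of the first degree in $x$, the printed points all lie on a straight line, which is why a ruler is used in Step 5 of Worked example 9.4.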
{"extraction_info": {"found_math": true, "script_math_tex": 105, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 7, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9918250441551208, "perplexity": 373.67745238705885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878879.33/warc/CC-MAIN-20201022024236-20201022054236-00701.warc.gz"}
https://cs.stackexchange.com/questions/57786/when-can-you-invert-an-equation-in-the-lambda-calculus
# When can you “invert” an equation in the lambda calculus Suppose that $M$ is a full model of the simply typed lambda calculus. Suppose each base type is infinite. Now suppose that $f$ and $g$ are two functions in $M$ (not necessarily in the same domain) that are not definable by any pure term, and that $\alpha$ is a pure term of the $\lambda$-calculus such that: $$M(\alpha) g = f$$ My question is: when is it possible to find a pure term $\beta$ such that $$g=M(\beta)f$$ Are there easy-to-state necessary or sufficient conditions? I have been unable to find counterexamples when $g$'s type complexity is higher than $f$'s, so maybe that is a sufficient condition? A few examples: if $f$ is the converse of $g$: $$f=M(\lambda xyz. xzy)g$$ then we can find such a $\beta$ -- in this case, letting it also be $\lambda xyz. xzy$. Similarly, if $f$ is application to $a$: $$f = M(\lambda xy.yx)a$$ then $a$ is $f$ applied to $\lambda x.x$: $$a = M(\lambda y.y(\lambda x.x))f$$ (indeed, $f$ sends each $h$ to $h\,a$, so applying $f$ to the identity returns $a$). Lastly, if $a$ is an element of some base type and $f=M(\lambda xy.x)a$ (so that $f$ is the constant function with value $a$), then I don't think there is any pure term such that $a=M(\beta)f$.
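Since the first two examples are just combinator manipulations, they can be sanity-checked in any language with first-class functions. The sketch below is only an illustration (Python stands in for the full model, and the curried functions are arbitrary stand-ins for $g$ and $a$):

```python
# flip plays the role of the pure term \xyz. xzy.
flip = lambda x: lambda y: lambda z: x(z)(y)

g = lambda y: lambda z: (y, z)     # an arbitrary curried binary function
f = flip(g)                        # f = M(\xyz.xzy) g

assert flip(f)(1)(2) == g(1)(2)    # beta = \xyz.xzy recovers g from f

# Second example: f2 = M(\xy.yx) a, i.e. f2 h = h a.
a = 42
f2 = (lambda x: lambda y: y(x))(a)
assert f2(lambda x: x) == a        # a = M(\y. y(\x.x)) f2
```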
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9147372841835022, "perplexity": 143.79194207179117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987795253.70/warc/CC-MAIN-20191021221245-20191022004745-00243.warc.gz"}
https://www.simis.io/docs/user-manual-turbulent-wind-creator
# Turbulent Wind Creator Turbulent wind can be simulated as a 3D grid where each grid point has a wind speed defined by 3 components in the 3 directions of space. Turbulent wind is generally neither constant in time nor space. Ashes is able to read and generate two types of turbulent wind files: TurbSim files and Mann simulator files. Both these simulators are shipped with Ashes, and the Turbulent Wind Creator offers a graphical user interface for you to use them within Ashes. The following section explains how to use the Turbulent Wind Creator. ## 1 Turbulent Wind Creator The Turbulent Wind Creator enables you to generate turbulent wind files which can then be selected as the wind input file for a time simulation with Turbulent wind. There are two situations where you can run a simulation with turbulent wind without the need to use the Turbulent Wind Creator: • for short simulations: if you select Turbulent wind in the Atmosphere part, a 60-second turbulent wind field will be automatically generated once you start the simulation. In this case, the wind is generated with a set of default parameters. • for batches using the Batch manager: in this case you will have to enter the parameters of your turbulent wind in the Batch window The Turbulent Wind Creator can be opened by clicking the Turbulent Wind Creator icon in the Top Ribbon of the Simulation window. This will open the following window: The different components of the Turbulent Wind Creator are explained in the following list: 1. Name: the generated turbulent wind field will be saved to a file with that name. The extension of the file name will be set according to the simulator selected: if TurbSim is selected, the extension will be .wnd; if the Mann simulator is selected, the extension will be .bin. 2. Folder path: the folder to which the turbulent wind file will be saved. 3. Turbulence simulator: the two different simulators shipped with Ashes use different theories to generate turbulent wind fields. This implies that the required input for the two simulators is slightly different, as explained in Sections 1.2 and 1.3 on this page. 4. Parameters: the input parameters to generate the turbulent wind field. These parameters are briefly explained in the remainder of this page and in the Turbulent wind section. 5. Create: clicking this button will generate the turbulent wind field and save it to the file specified by the Name and the Folder path. 6. Viewing window: the current model appears in this window, together with a visualization of the turbulent wind field, where the green crosses correspond to the grid points and the red rectangle represents the size of the field. This window helps you make sure that the turbulent wind field covers your whole rotor (or rotors for multi-rotor turbines) or the support structure, for example. Note that if during a time simulation any blade station from the model moves out of the turbulent wind field, the simulation will stop and an error message will be displayed. In the rest of this page, we briefly explain some of the parameters that can be adjusted in the different tabs of the Parameters pane. A more thorough description is given in the Turbulent wind section. ### 1.1 Parameters common to both simulators For both simulators, the size of the grid can be set to automatically fit the rotor of the current model by ticking the box Automatically cover model's rotor swept area. In the single rotor case, the grid will be a square centered at the hub and with a side equal to the rotor diameter multiplied by the Size factor parameter.
In addition, if the Automatically cover model's support structure box is ticked, the grid will be extended down to the ground (or mean sea level). The seed used to generate the turbulent wind field can be adjusted in the Simulation tab. It can either be • Random: based on the computer clock, this will produce different turbulent wind fields every time the Creator is used • User defined: you define which seed is used. This enables you to reproduce exactly the same turbulent wind field for two different simulations. The TurbSim and the Mann simulator are conceptually different, which means that they require significantly different parameters. The next two sections explain these differences and how they affect the selection of parameters. ### 1.2 Parameters for the TurbSim simulator For the TurbSim simulator, the environmental characteristics of the turbulent wind field are chosen beforehand. This means that parameters such as Average wind speed, Turbulence intensity or Wind profile must be adjusted in the Turbulent Wind Creator. In addition, different Turbulent models are available. The number of grid points in the longitudinal direction (i.e. the wind speed direction) is calculated from the duration of the field and the time step. TurbSim also offers the possibility to adjust some advanced parameters that can be found in the Advanced tab. ### 1.3 Parameters for the Mann simulator The Mann simulator generates a turbulent wind grid independently of the environmental conditions. This means that the average wind speed or the turbulence intensity must be adjusted at the start of the time simulation, once the turbulent wind field has been generated. Therefore, these parameters are not available in the Turbulent Wind Creator. The geometrical length of the turbulent wind field can be defined either as Absolute, in which case a length in meters must be given, or as the product of a temporal length and an average wind speed. Note: the average wind speed used to specify a grid length for the Mann simulator is independent of the average speed that the wind will have during the simulation. When starting the simulation, make sure that the average wind speed used in the time simulation corresponds to your requirements. The number of grid points in the longitudinal direction must be entered manually. For the Mann simulator, the number of grid points in all directions must be a power of 2.
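As a small illustration of the last two constraints, the sketch below shows a hypothetical pre-flight check (it is not part of Ashes or of the Mann simulator's actual interface): it verifies that a grid-point count is a power of 2 and computes an absolute grid length from a temporal length and an average wind speed:

```python
# Hypothetical helpers for preparing a Mann-simulator grid definition.
def is_power_of_two(n: int) -> bool:
    # A positive integer is a power of 2 iff exactly one bit is set.
    return n > 0 and (n & (n - 1)) == 0

def grid_length_m(duration_s: float, mean_wind_speed_ms: float) -> float:
    # "Temporal" length definition: length = duration x average wind speed.
    return duration_s * mean_wind_speed_ms

for n in (32, 64, 100):
    print(n, is_power_of_two(n))        # 32 True, 64 True, 100 False

# e.g. a 600 s field at an assumed 10 m/s average gives a 6000 m grid
print(grid_length_m(600.0, 10.0))
```

Remember that the average wind speed used here only fixes the geometrical length of the grid; the wind speed actually used in the time simulation is set separately, as noted above.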
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8938679099082947, "perplexity": 898.0285091405442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943747.51/warc/CC-MAIN-20230321225117-20230322015117-00597.warc.gz"}
http://www.ams.org/joursearch/servlet/PubSearch?f1=msc&pubname=all&v1=22E46&startRec=31
AMS eContent Search Results. Matches for: msc=(22E46) AND publication=(all). Sort order: Date. Format: Standard display. Results: 31 to 60 of 267 found.
[31] Wolfgang Soergel. Corrections to: “On the $\mathfrak n$-cohomology of limits of discrete series representations”. Represent. Theory 19 (2015) 1-2.
[32] Binyong Sun and Chen-Bo Zhu. Conservation relations for local theta correspondence. J. Amer. Math. Soc. 28 (2015) 939-983.
[33] Alexander Gorodnik and Amos Nevo. Quantitative ergodic theorems and their number-theoretic applications. Bull. Amer. Math. Soc. 52 (2015) 65-113.
[34] Toshihisa Kubo. Special values for conformally invariant systems associated to maximal parabolics of quasi-Heisenberg type. Trans. Amer. Math. Soc. 366 (2014) 4649-4696.
[35] O. G. Styrt. On the orbit space of an irreducible representation of the special unitary group. Trans. Moscow Math. Soc. 74 (2013) 145-164.
[36] Alexander Gorodnik and Felipe A. Ramírez. Limit theorems for rank-one Lie groups. Proc. Amer. Math. Soc. 142 (2014) 1359-1369.
[37] Toshiki Nakashima. Decorated geometric crystals and polyhedral realizations of type $D_n$. Contemporary Mathematics 623 (2014) 227-242.
[38] Simon Gindikin. Harmonic analysis on symmetric spaces as complex analysis. Contemporary Mathematics 614 (2014) 69-80.
[39] Toshiyuki Kobayashi. Special Functions in Minimal Representations. Contemporary Mathematics 610 (2014) 253-266.
[40] Matt Kerr. Notes on the representation theory of $SL_{2}(\mathbb{R})$. Contemporary Mathematics 608 (2014) 173-198.
[41] Matt Kerr. Cup products in automorphic cohomology: The case of $Sp_{4}$. Contemporary Mathematics 608 (2014) 199-233.
[42] Mark Green and Phillip Griffiths. On the differential equations satisfied by certain Harish-Chandra modules. Contemporary Mathematics 608 (2014) 85-141.
[43] Siddhartha Sahi. The Capelli identity for Grassmann manifolds. Represent. Theory 17 (2013) 326-336.
[44] Benjamin Schwarz and Henrik Seppänen. Symplectic branching laws and Hermitian symmetric spaces. Trans. Amer. Math. Soc. 365 (2013) 6595-6623.
[45] Liang Yang. On the quantization of spherical nilpotent orbits. Trans. Amer. Math. Soc. 365 (2013) 6499-6515.
[46] O. G. Styrt. The simplest stationary subalgebras, for compact linear Lie algebras. Trans. Moscow Math. Soc. 73 (2012) 107-120.
[47] Toshiki Nakashima. Decorated Geometric Crystals, Polyhedral and Monomial Realizations of Crystal Bases. Contemporary Mathematics 602 (2013) 143-163.
[48] Toshiyuki Kobayashi. $F$-method for constructing equivariant differential operators. Contemporary Mathematics 598 (2013) 139-146.
[49] Joseph A. Wolf. Principal series representations of infinite dimensional Lie groups, II: Construction of induced representations. Contemporary Mathematics 598 (2013) 257-280.
[50] Sigurdur Helgason. Some personal remarks on the Radon transform. Contemporary Mathematics 598 (2013) 3-19.
[51] G. Ólafsson, A. Pasquale and B. Rubin. Analytic and group-theoretic aspects of the Cosine transform. Contemporary Mathematics 598 (2013) 167-188.
[52] Hideko Sekiguchi. Radon–Penrose transform between symmetric spaces. Contemporary Mathematics 598 (2013) 239-256.
[53] Benjamin Harris. Tempered representations and nilpotent orbits. Represent. Theory 16 (2012) 610-619.
[54] Kenji Taniguchi. On the composition series of the standard Whittaker $(\mathfrak{g},K)$-modules. Trans. Amer. Math. Soc. 365 (2013) 3899-3922.
[55] Pierre Baumann and Joel Kamnitzer. Preprojective algebras and MV polytopes. Represent. Theory 16 (2012) 152-188.
[56] Michael G. Eastwood and Joseph A. Wolf. A duality for the double fibration transform. Contemporary Mathematics 584 (2012) 1-16.
[57] Roberto Camporesi and Bernhard Krötz. The complex crown for homogeneous harmonic spaces. Trans. Amer. Math. Soc. 364 (2012) 2227-2240. MR 2869205.
[58] Robert Guralnick, Michael Larsen and Corey Manack. Low degree representations of simple Lie groups. Proc. Amer. Math. Soc. 140 (2012) 1823-1834. MR 2869167.
[59] Hans Wenzl. Quotients of representation rings. Represent. Theory 15 (2011) 385-406. MR 2801174.
[60] Toshiyuki Kobayashi and Gen Mano. The Schrödinger model for the minimal representation of the indefinite orthogonal group $O(p, q)$. Memoirs of the AMS 213 (2011). MR 2858535.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.948233962059021, "perplexity": 4229.028366044203}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888077.41/warc/CC-MAIN-20180119164824-20180119184824-00562.warc.gz"}
https://www.physicsforums.com/threads/gauss-law-for-a-point-charge.46426/
# Gauss' Law for a point charge 1. Oct 6, 2004 ### stunner5000pt This is SUPPOSED to be easy but I seemingly find it hard... A point charge of +Q is placed a distance d/2 above the centre of a square surface of side d. Find the electric flux through the square. So I know that E dA = EA (because the flux through the square is all at 90 degree angles) = kQd^2 / (d/2)^2 = Q / pi epsilon0. But the answer is Q / 6epsilon0. Have I got the concept wrong here? Please do help! 2. Oct 6, 2004 ### Staff: Mentor The electric field at the surface is not simply kQ/(d/2)^2: the distance to the surface is not just d/2! Furthermore, the electric field is not perpendicular to that surface! (The field from a point charge radiates out from the center.) To calculate the flux directly, you need to find the component of the field perpendicular to the surface and integrate. But don't do that. Instead, take advantage of symmetry. Hint: Imagine other sides were added forming a cube around the point charge. (It is easy.) 3. Oct 6, 2004 ### stunner5000pt I think I figured something out. If flux is Qenc / permittivity, then for the charge +Q in a cube of side d the flux is simply Qenc / epsilon0. But since this is a square, the flux is one sixth (since a cube has six sides) of the flux through the cube, so it is Q / 6epsilon0. Am I right?? 4. Oct 6, 2004 ### Tide Why would you think the electric field is everywhere normal to the surface? 5. Oct 6, 2004 ### stunner5000pt I thought wrong, read my second post, I believe it is more relevant 6. Oct 6, 2004 ### Staff: Mentor Yes, you are right. 7. Mar 28, 2011 ### Muhammad Usma Can't we derive it by any other method?
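The cube argument can also be checked numerically. Here is a minimal sketch (not part of the original thread; it assumes SciPy is available) that integrates the normal component of the point charge's field over the square, in units where Q = 1 and epsilon0 = 1, and compares the result with Q/(6 epsilon0) = 1/6:

```python
import math
from scipy.integrate import dblquad

d = 2.0                 # side of the square; the result is independent of d
h = d / 2               # height of the charge above the square's centre
k = 1 / (4 * math.pi)   # Coulomb constant with Q = 1, epsilon0 = 1

# scipy's dblquad integrates f(y, x) over x in [a, b], y in [gfun(x), hfun(x)].
def Ez(y, x):
    # Normal (z) component of E at the point (x, y) on the square.
    r = math.sqrt(x**2 + y**2 + h**2)
    return k * h / r**3

flux, _ = dblquad(Ez, -d/2, d/2, lambda x: -d/2, lambda x: d/2)
print(flux, 1/6)        # both are ~0.16667: the flux is Q/(6 epsilon0)
```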
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9499083161354065, "perplexity": 2534.797707358585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423222.65/warc/CC-MAIN-20170720141821-20170720161821-00457.warc.gz"}
https://www.intechopen.com/books/electromagnetic-materials-and-devices/the-influence-of-the-dielectric-materials-on-the-fields-in-the-millimeter-and-infrared-wave-regimes
Open access peer-reviewed chapter # The Influence of the Dielectric Materials on the Fields in the Millimeter and Infrared Wave Regimes By Zion Menachem Submitted: July 9th 2018. Reviewed: August 16th 2018. Published: November 5th 2018. DOI: 10.5772/intechopen.80943 ## Abstract This chapter presents the influence of the dielectric materials on the output field for the millimeter and infrared regimes. This chapter presents seven examples of the discontinuous problems in the cross section of the straight waveguide. Two different geometrical of the dielectric profiles in the cross section of the straight rectangular and circular waveguides will be proposed to understand the behavior of the output fields. The two different methods for rectangular and circular waveguides and the techniques to calculate any geometry in the cross section are very important to understand the influence of the dielectric materials on the output fields. The two different methods are based on Laplace and Fourier transforms and the inverse Laplace and Fourier transforms. Laplace transform on the differential wave equations is needed to obtain the wave equations and the output fields that are expressed directly as functions of the transmitted fields at the entrance of the waveguide. Thus, the Laplace transform is necessary to obtain the comfortable and simple input-output connections of the fields. The applications are useful for straight waveguides in the millimeter and infrared wave regimes. ### Keywords • wave propagation • dielectric profiles • rectangular and circular waveguides • dielectric materials ## 1. Introduction Various methods for analysis of waveguides have been studied in the literature. The review for the modal analysis of general methods has been published [1]. The important methods, such as the finite difference method and integral equation method, and methods based on series expansion have been described. An analytical model for the corrugated rectangular waveguide has been extended to compute the dispersion and interaction impedance [2]. The application of an analytical method based on the field equations has been presented to design corrugated rectangular waveguide slow-wave structure THz amplifiers. A fundamental technique has been proposed to compute the propagation constant of waves in a lossy rectangular waveguide [3]. An important consequence of this work is the demonstration that the loss computed for degenerate modes propagating simultaneously is not simply additive. The electromagnetic fields in rectangular conducting waveguides filled with uniaxial anisotropic media have been characterized [4]. A full-vectorial boundary integral equation method for computing guided modes of optical waveguides has been proposed [5]. The integral equations are used to compute the Neumann-to-Dirichlet operators for sub-domains of constant refractive index on the transverse plane of the waveguide. Wave propagation in an inhomogeneous transversely magnetized rectangular waveguide has been studied with the aid of a modified Sturm-Liouville differential equation [6]. An advantageous finite element method for the rectangular waveguide problem has been developed [7] by which complex propagation characteristics may be obtained for arbitrarily shaped waveguides. The characteristic impedance of the fundamental mode in a rectangular waveguide was computed using the finite element method. The finite element method has been used to derive approximate values of the possible propagation constant for each frequency.
A new structure has been proposed for microwave filters [8]. This structure utilizes a waveguide filled by several dielectric layers. The relative electric permittivity and the length of the layers were optimally obtained using the least mean square method. An interesting method has been introduced for frequency domain analysis of arbitrary longitudinally inhomogeneous waveguides [9]. In this method, the integral equations of the longitudinally inhomogeneous waveguides are converted from their differential equations and solved using the method of moments. A general method has been introduced for frequency domain analysis of longitudinally inhomogeneous waveguides [10]. In this method, the electric permittivity and also the transverse electric and magnetic fields were expanded in Taylor's series. The field solutions were obtained after finding the unknown coefficients of the series. A general method has been introduced to analyze aperiodic or periodic longitudinally inhomogeneous waveguides [11]. The periodic longitudinally inhomogeneous waveguides were analyzed using the Fourier series expansion of the electric permittivity function to find their propagation constant and characteristic impedances. Various methods for the analysis of cylindrical hollow metallic or metallic with inner dielectric coating waveguides have been studied in the literature. A review of the hollow waveguide technology [12, 13] and a review of IR transmitting, hollow waveguides, fibers, and integrated optics [14] were published. Hollow waveguides with both metallic and dielectric internal layers have been proposed to reduce the transmission losses. A hollow waveguide can be made, in principle, from any flexible or rigid tube (plastic, glass, metal, etc.) if its inner hollow surface (the core) is covered by a metallic layer and a dielectric overlayer. This layer structure enables us to transmit both the TE and TM polarizations with low attenuation [15]. A transfer matrix function for the analysis of electromagnetic wave propagation along the straight dielectric waveguide with arbitrary profiles has been proposed [16]. This method is based on the Laplace and Fourier transforms, and on the Fourier coefficients of the transverse dielectric profile and those of the input-wave profile. Laplace transform is necessary to obtain the comfortable and simple input-output connections of the fields. The transverse field profiles are computed by the inverse Laplace and Fourier transforms. The influence of the spot size and cross section on the output fields and power density along the straight hollow waveguide has been studied [17]. The derivation is based on Maxwell's equations. The longitudinal components of the fields are developed into the Fourier-Bessel series. The transverse components of the fields are expressed as functions of the longitudinal components in the Laplace plane and are obtained by using the inverse Laplace transform by the residue method. These are two different kinds of methods that enable us to solve practical problems with different boundary conditions. The calculations in all methods are based on using Laplace and Fourier transforms, and the output fields are computed by the inverse Laplace and Fourier transforms. Laplace transform on the differential wave equations is needed to obtain the wave equations (and thus also the output fields) that are expressed directly as functions of the transmitted fields at the entrance of the waveguide at $z=0^+$.
Thus, the Laplace transform is necessary to obtain the comfortable and simple input-output connections of the fields. All the models mentioned above solve interesting wave propagation problems with a particular geometry. If we want to solve more complex discontinuous problems of coatings in the cross section of dielectric waveguides, then it is important to develop for each problem an improved technique for calculating the profiles with the dielectric material in the cross section of the straight waveguide. This chapter presents two techniques for two different geometries of the straight waveguide. The two proposed techniques are very important to solve discontinuous problems with dielectric material in the cross section of the straight rectangular and circular waveguides. The proposed technique relates to the method for the propagation along the straight rectangular metallic waveguide [16]. The examples will be demonstrated for the rectangular and circular dielectric profiles in the straight rectangular waveguide. In this chapter, we present seven dielectric structures as shown in Figure 1(a)–(g). Figure 1(a)–(e) shows five examples of the discontinuous dielectric materials in the cross section of the rectangular straight waveguide. Figure 1(f)–(g) shows two examples of the discontinuous dielectric materials in the cross section of the circular straight waveguide. ## 2 Two proposed techniques for discontinuous problems in the cross section of the rectangular and circular waveguides The wave equations for the components of the electric and magnetic field are given by

$$\nabla^2\mathbf{E}+\omega^2\mu\varepsilon\,\mathbf{E}+\nabla\!\left(\mathbf{E}\cdot\frac{\nabla\varepsilon}{\varepsilon}\right)=0 \tag{1}$$

and

$$\nabla^2\mathbf{H}+\omega^2\mu\varepsilon\,\mathbf{H}+\frac{\nabla\varepsilon}{\varepsilon}\times\left(\nabla\times\mathbf{H}\right)=0, \tag{2}$$

where $\varepsilon_0$ represents the vacuum dielectric constant, $\chi_0$ is the susceptibility, and $g$ is the dielectric profile function in the waveguide. Let us introduce the dielectric profile function for the examples as shown in Figure 1(a)–(g) for the inhomogeneous dielectric materials. ## 3 The derivation for rectangular straight waveguide The wave Eqs. (1) and (2) are given in the case of the rectangular straight waveguide, where $\varepsilon(x,y)=\varepsilon_0[1+\chi_0 g(x,y)]$, $g_x=(1/\varepsilon(x,y))\,\partial\varepsilon(x,y)/\partial x$, and $g_y=(1/\varepsilon(x,y))\,\partial\varepsilon(x,y)/\partial y$. ### 3.1 The first technique to calculate the discontinuous structure of the cross section Figure 2(a) shows an example of the cross section of the straight waveguide (Figure 1(a)) for the $g_x$ function. In order to solve inhomogeneous dielectric profiles, we use the $\omega_\varepsilon$ function, with the parameters $\varepsilon_1$ and $\varepsilon_2$ (Figures 1(a) and (b)). The $\omega_\varepsilon$ function [18] is used in order to solve discontinuous problems in the cross section of the straight waveguide. The $\omega_\varepsilon$ function is defined according to Figure 2(b) as

$$\omega_\varepsilon(r)=\begin{cases}C_\varepsilon\,e^{-\varepsilon^2/(\varepsilon^2-r^2)}, & |r|\le\varepsilon,\\ 0, & |r|>\varepsilon,\end{cases} \tag{3}$$

where $C_\varepsilon$ is a constant and $\int\omega_\varepsilon(r)\,dr=1$. In order to solve inhomogeneous dielectric profiles (e.g., in Figure 1(a)–(b)) in the cross section of the straight waveguide, the parameter $\varepsilon$ is used according to the $\omega_\varepsilon$ function (Figure 2(b)), where $\varepsilon\to 0$.
The dielectric profile in this case of a rectangular dielectric material in the rectangular cross section (Figure 1(b)) is given by

$$g(x)=\begin{cases}0, & 0\le x<\frac{a-d-\varepsilon}{2},\\ g_0\exp\left[1-\dfrac{\varepsilon^2}{\varepsilon^2-\left(x-\frac{a-d+\varepsilon}{2}\right)^2}\right], & \frac{a-d-\varepsilon}{2}\le x<\frac{a-d+\varepsilon}{2},\\ g_0, & \frac{a-d+\varepsilon}{2}<x<\frac{a+d-\varepsilon}{2},\\ g_0\exp\left[1-\dfrac{\varepsilon^2}{\varepsilon^2-\left(x-\frac{a+d-\varepsilon}{2}\right)^2}\right], & \frac{a+d-\varepsilon}{2}\le x<\frac{a+d+\varepsilon}{2},\\ 0, & \frac{a+d+\varepsilon}{2}<x\le a,\end{cases} \tag{4}$$

and

$$g(y)=\begin{cases}0, & 0\le y<\frac{b-c-\varepsilon}{2},\\ g_0\exp\left[1-\dfrac{\varepsilon^2}{\varepsilon^2-\left(y-\frac{b-c+\varepsilon}{2}\right)^2}\right], & \frac{b-c-\varepsilon}{2}\le y<\frac{b-c+\varepsilon}{2},\\ g_0, & \frac{b-c+\varepsilon}{2}<y<\frac{b+c-\varepsilon}{2},\\ g_0\exp\left[1-\dfrac{\varepsilon^2}{\varepsilon^2-\left(y-\frac{b+c-\varepsilon}{2}\right)^2}\right], & \frac{b+c-\varepsilon}{2}\le y<\frac{b+c+\varepsilon}{2},\\ 0, & \frac{b+c+\varepsilon}{2}<y\le b.\end{cases} \tag{5}$$

The elements of the matrices are given according to Figure 1(b), in the case of $b\ne c$, by

$$g_{nm}=\frac{g_0}{ab}\left[\int_{\frac{a-d-\varepsilon}{2}}^{\frac{a-d+\varepsilon}{2}}\exp\left[1-\frac{\varepsilon^2}{\varepsilon^2-\left(x-\frac{a-d+\varepsilon}{2}\right)^2}\right]\cos\frac{n\pi x}{a}\,dx+\int_{\frac{a-d+\varepsilon}{2}}^{\frac{a+d-\varepsilon}{2}}\cos\frac{n\pi x}{a}\,dx+\int_{\frac{a+d-\varepsilon}{2}}^{\frac{a+d+\varepsilon}{2}}\exp\left[1-\frac{\varepsilon^2}{\varepsilon^2-\left(x-\frac{a+d-\varepsilon}{2}\right)^2}\right]\cos\frac{n\pi x}{a}\,dx\right]\left[\int_{\frac{b-c-\varepsilon}{2}}^{\frac{b-c+\varepsilon}{2}}\exp\left[1-\frac{\varepsilon^2}{\varepsilon^2-\left(y-\frac{b-c+\varepsilon}{2}\right)^2}\right]\cos\frac{m\pi y}{b}\,dy+\int_{\frac{b-c+\varepsilon}{2}}^{\frac{b+c-\varepsilon}{2}}\cos\frac{m\pi y}{b}\,dy+\int_{\frac{b+c-\varepsilon}{2}}^{\frac{b+c+\varepsilon}{2}}\exp\left[1-\frac{\varepsilon^2}{\varepsilon^2-\left(y-\frac{b+c-\varepsilon}{2}\right)^2}\right]\cos\frac{m\pi y}{b}\,dy\right], \tag{6}$$

where

$$\int_{\frac{a-d+\varepsilon}{2}}^{\frac{a+d-\varepsilon}{2}}\cos\frac{n\pi x}{a}\,dx=\begin{cases}\dfrac{2a}{n\pi}\sin\left(\dfrac{n\pi(d-\varepsilon)}{2a}\right)\cos\left(\dfrac{n\pi}{2}\right), & n\ne0,\\ d-\varepsilon, & n=0,\end{cases}$$

and

$$\int_{\frac{b-c+\varepsilon}{2}}^{\frac{b+c-\varepsilon}{2}}\cos\frac{m\pi y}{b}\,dy=\begin{cases}\dfrac{2b}{m\pi}\sin\left(\dfrac{m\pi(c-\varepsilon)}{2b}\right)\cos\left(\dfrac{m\pi}{2}\right), & m\ne0,\\ c-\varepsilon, & m=0.\end{cases}$$

The elements of the matrices are given according to Figure 1(a), in the case of $b=c$, by the same three $x$-integrals multiplied by $\int_0^b\cos(m\pi y/b)\,dy$, where

$$\int_0^b\cos\frac{m\pi y}{b}\,dy=\begin{cases}b, & m=0,\\ 0, & m\ne0.\end{cases}$$

### 3.2 The second technique to calculate the discontinuous structure of the cross section The second technique to calculate the discontinuous structure of the cross section is shown in Figure 1(a) and (b). The dielectric profile $g(x,y)$ is given according to $\varepsilon(x,y)=\varepsilon_0[1+g(x,y)]$. According to Figure 3 and for $g(x,y)=g_0$, we obtain

$$g_{nm}=\frac{g_0}{4ab}\int_{-a}^{a}dx\int_{-b}^{b}e^{j(k_xx+k_yy)}\,dy=\frac{g_0}{4ab}\left[\int_{x_{11}}^{x_{12}}\!dx\int_{y_{11}}^{y_{12}}e^{j(k_xx+k_yy)}\,dy+\int_{-x_{12}}^{-x_{11}}\!dx\int_{y_{11}}^{y_{12}}e^{j(k_xx+k_yy)}\,dy+\int_{-x_{12}}^{-x_{11}}\!dx\int_{-y_{12}}^{-y_{11}}e^{j(k_xx+k_yy)}\,dy+\int_{x_{11}}^{x_{12}}\!dx\int_{-y_{12}}^{-y_{11}}e^{j(k_xx+k_yy)}\,dy\right]. \tag{7}$$

If $y_{11}$ and $y_{12}$ are functions of $x$, then we obtain

$$g_{nm}=\frac{g_0}{abk_y}\int_{x_{11}}^{x_{12}}\left[\sin(k_yy_{12})-\sin(k_yy_{11})\right]\cos(k_xx)\,dx=\frac{2g_0}{am\pi}\int_{x_{11}}^{x_{12}}\sin\left(\frac{k_y}{2}(y_{12}-y_{11})\right)\cos\left(\frac{k_y}{2}(y_{12}+y_{11})\right)\cos(k_xx)\,dx. \tag{8}$$

The dielectric profile for Figure 1(b) is given by

$$g_{nm}=\begin{cases}\dfrac{g_0}{4ab}\,4cd, & n=0,\ m=0,\\ \dfrac{g_0}{4ab}\,\dfrac{8d}{k_{0y_m}}\sin\left(k_{0y_m}\dfrac{c}{2}\right)\cos\left(k_{0y_m}\dfrac{b}{2}\right), & n=0,\ m\ne0,\\ \dfrac{g_0}{4ab}\,\dfrac{8c}{k_{0x_n}}\sin\left(k_{0x_n}\dfrac{d}{2}\right)\cos\left(k_{0x_n}\dfrac{a}{2}\right), & n\ne0,\ m=0,\\ \dfrac{g_0}{4ab}\,\dfrac{16}{k_{0x_n}k_{0y_m}}\sin\left(k_{0x_n}\dfrac{d}{2}\right)\cos\left(k_{0x_n}\dfrac{a}{2}\right)\sin\left(k_{0y_m}\dfrac{c}{2}\right)\cos\left(k_{0y_m}\dfrac{b}{2}\right), & n\ne0,\ m\ne0.\end{cases} \tag{9}$$

### 3.3 The dielectric profile for the circular profile in the cross section The dielectric profile for the circular profile in the cross section of the straight rectangular waveguide is given by (Figure 1(c))

$$g(x,y)=\begin{cases}g_0, & 0\le r<r_1-\varepsilon/2,\\ g_0\exp\left[1-q_\varepsilon(r)\right], & r_1-\varepsilon/2\le r<r_1+\varepsilon/2,\\ 0, & \text{else},\end{cases} \tag{10}$$

where $q_\varepsilon(r)=\varepsilon^2/\left[\varepsilon^2-(r-r_1+\varepsilon/2)^2\right]$. The radius of the circle is given by $r=\sqrt{(x-a/2)^2+(y-b/2)^2}$. ### 3.4 The dielectric profile for the waveguide filled with dielectric material in the entire cross section The dielectric profile (Figure 1(d)) is given by

$$g_{nm}=\begin{cases}g_0, & n=0,\ m=0,\\ \dfrac{g_0}{4ab}\,\dfrac{8a}{k_{0y_m}}\sin\left(k_{0y_m}\dfrac{b}{2}\right)\cos\left(k_{0y_m}\dfrac{b}{2}\right), & n=0,\ m\ne0,\\ \dfrac{g_0}{4ab}\,\dfrac{8b}{k_{0x_n}}\sin\left(k_{0x_n}\dfrac{a}{2}\right)\cos\left(k_{0x_n}\dfrac{a}{2}\right), & n\ne0,\ m=0,\\ \dfrac{g_0}{4ab}\,\dfrac{16}{k_{0x_n}k_{0y_m}}\sin\left(k_{0x_n}\dfrac{a}{2}\right)\cos\left(k_{0x_n}\dfrac{a}{2}\right)\sin\left(k_{0y_m}\dfrac{b}{2}\right)\cos\left(k_{0y_m}\dfrac{b}{2}\right), & n\ne0,\ m\ne0.\end{cases} \tag{11}$$

### 3.5 The hollow rectangular waveguide with the dielectric material between the hollow rectangle and the metallic layer The dielectric profile of the hollow rectangular waveguide with the dielectric material between the hollow rectangle and the metal (Figure 1(e)) is calculated by subtracting the dielectric profile of Figure 1(b) from the dielectric profile of Figure 1(d). The matrix $G$ is given by the form

$$G=\begin{pmatrix}g_{00} & g_{10} & g_{20} & \cdots & g_{nm} & \cdots & g_{NM}\\ g_{10} & g_{00} & g_{10} & \cdots & g_{(n-1)m} & \cdots & g_{(N-1)M}\\ g_{20} & g_{10} & g_{00} & & \vdots & & \vdots\\ \vdots & & & \ddots & & & \\ g_{NM} & \cdots & & & & & g_{00}\end{pmatrix}. \tag{12}$$

Similarly, the $G_x$ and $G_y$ matrices are obtained by the derivatives of the dielectric profile. These matrices relate to the method that is based on the Laplace and Fourier transforms and the inverse Laplace and Fourier transforms [16]. Laplace transform is necessary to obtain the comfortable and simple input-output connections of the fields. The output transverse fields are computed by the inverse Laplace and Fourier transforms.
This method becomes an improved method by using the proposed technique and the particular application also in the cases of discontinuous problems of the hollow rectangular waveguide with dielectric material between the hollow rectangle and the metal (Figure 1(e)), in the cross section of the straight rectangular waveguide. In addition, we can find the thickness of the dielectric layer that is recommended to obtain the desired behavior of the output fields. Several examples will be demonstrated in the next section in order to understand the influence of the hollow rectangular waveguide with dielectric material in the cross section (Figure 1) on the output field. All the graphical results will be demonstrated as a response to a half-sine (TE10) input-wave profile and the hollow rectangular waveguide with dielectric material in the cross section of the straight rectangular waveguide. ## 4 The derivation for circular straight waveguide The wave Eqs. (1) and (2) are given in the case of the circular straight waveguide, where $\varepsilon(r)=\varepsilon_0[1+\chi_0 g(r)]$ and $g_r(r)=(1/\varepsilon(r))\,\partial\varepsilon(r)/\partial r$. The proposed technique to calculate the refractive index for discontinuous problems (Figure 1(f) and (g)) is given in this section for the one dielectric coating (Figure 1(f)) and for three dielectric coatings (Figure 1(g)). ### 4.1 The refractive index for the circular hollow waveguide with one dielectric coating in the cross section The cross section of the hollow waveguide (Figure 1(f)) is made of a tube of various types of one dielectric layer and a metallic layer. The refractive indices of the air, dielectric, and metallic layers are $n_0=1$, $n_{\mathrm{AgI}}=2$, and $n_{\mathrm{Ag}}=10-j60$, respectively. The value of the refractive index of the material at a wavelength of $\lambda=10.6~\mu$m is taken from the table given by Miyagi et al. [19]. The refractive indices of the air, dielectric layer (AgI), and metallic layer (Ag) are shown in Figure 1(f). The refractive index $n(r)$ depends on the transition regions in the cross section between the two different materials (air-AgI, AgI-Ag). The refractive index is calculated as follows:

$$n(r)=\begin{cases}n_0, & 0\le r<b-\varepsilon_1/2,\\ n_0+(n_d-n_0)\exp\left[1-\dfrac{\varepsilon_1^2}{\varepsilon_1^2-(r-b-\varepsilon_1/2)^2}\right], & b-\varepsilon_1/2\le r<b+\varepsilon_1/2,\\ n_d, & b+\varepsilon_1/2\le r<a-\varepsilon_2/2,\\ n_d+(n_m-n_d)\exp\left[1-\dfrac{\varepsilon_2^2}{\varepsilon_2^2-(r-a-\varepsilon_2/2)^2}\right], & a-\varepsilon_2/2\le r<a+\varepsilon_2/2,\\ n_m, & \text{else},\end{cases}$$

where the internal and external diameters are denoted as $2b$, $2a$, and $2(a+\delta_m)$, respectively, where $\delta_m$ is the metallic layer. The thickness of the dielectric coating ($d$) is defined as $a-b$, and the thickness of the metallic layer ($\delta_m$) is defined as $(a+\delta_m)-a$. The parameter $\varepsilon$ is very small [$\varepsilon=(a-b)/50$]. The refractive indices of the air, dielectric, and metallic layers are denoted as $n_0$, $n_d$, and $n_m$, respectively. ### 4.2 The refractive index for the circular hollow waveguide with three dielectric coatings in the cross section The cross section of the hollow waveguide (Figure 1(g)) is made of a tube of various types of three dielectric layers and a metallic layer. The internal and external diameters are denoted as $2b$, $2b_1$, $2b_2$, $2a$, and $2(a+\delta_m)$, respectively, where $\delta_m$ is the thickness of the metallic layer. In addition, we denote the thicknesses of the dielectric layers as $d_1$, $d_2$, and $d_3$, respectively, where $d_1=b_1-b$, $d_2=b_2-b_1$, and $d_3=a-b_2$.
The refractive index in the particular case with the three dielectric layers and the metallic layer in the cross section of the straight hollow waveguide (Figure 1(g)) is calculated as follows:

$$n(r)=\begin{cases}n_0, & 0\le r<b-\varepsilon/2,\\ n_0+(n_1-n_0)\exp\left[1-\dfrac{\varepsilon^2}{\varepsilon^2-(r-b-\varepsilon/2)^2}\right], & b-\varepsilon/2\le r<b+\varepsilon/2,\\ n_1, & b+\varepsilon/2\le r<b_1-\varepsilon/2,\\ n_1+(n_2-n_1)\exp\left[1-\dfrac{\varepsilon^2}{\varepsilon^2-(r-b_1-\varepsilon/2)^2}\right], & b_1-\varepsilon/2\le r<b_1+\varepsilon/2,\\ n_2, & b_1+\varepsilon/2\le r<b_2-\varepsilon/2,\\ n_2+(n_3-n_2)\exp\left[1-\dfrac{\varepsilon^2}{\varepsilon^2-(r-b_2-\varepsilon/2)^2}\right], & b_2-\varepsilon/2\le r<b_2+\varepsilon/2,\\ n_3, & b_2+\varepsilon/2\le r<a-\varepsilon/2,\\ n_3+(n_m-n_3)\exp\left[1-\dfrac{\varepsilon^2}{\varepsilon^2-(r-a-\varepsilon/2)^2}\right], & a-\varepsilon/2\le r<a+\varepsilon/2,\\ n_m, & \text{else},\end{cases}$$

where the parameter $\varepsilon$ is very small [$\varepsilon=(a-b)/50$]. The refractive indices of the air, dielectric, and metallic layers are denoted as $n_0$, $n_1$, $n_2$, $n_3$, and $n_m$, respectively. In this study we suppose that $n_3>n_2>n_1$. The proposed technique to calculate the refractive indices of the dielectric profile of one dielectric coating (Figure 1(f)) or three dielectric coatings (Figure 1(g)), and the metallic layer in the cross section, relates to the method that is based on Maxwell's equations, the Fourier-Bessel series, Laplace transform, and the inverse Laplace transform by the residue method [17]. This method becomes an improved method by using the proposed technique also in the cases of discontinuous problems of the hollow circular waveguide with one dielectric coating (Figure 1(f)), three dielectric coatings (Figure 1(g)), or more dielectric coatings. ## 5 Numerical results Several examples for the rectangular and circular waveguides with the discontinuous dielectric profile in the cross section of the straight waveguide are demonstrated in this section according to Figure 1(a)–(g). Figure 4(a)–(c) demonstrates the output field as a response to a half-sine (TE10) input-wave profile in the case of the slab profile (Figure 1(a)), where a = b = 20 mm, c = 20 mm, and d = 2 mm, for εr = 3, 4, and 5, respectively. Figure 4(c) shows the output field for εr = 3, 4, and 5, respectively, where y = b/2 = 10 mm. By increasing only the value of the dielectric profile from εr = 3 to εr = 5, the width of the output field decreased, and also the output amplitude decreased. Figure 5(a)–(e) demonstrates the output field as a response to a half-sine (TE10) input-wave profile in the case of the rectangular dielectric profile in the rectangular waveguide (Figure 1(b)), where a = b = 20 mm and c = d = 2 mm, for εr = 3, 5, 7, and 10, respectively. Figure 5(e) shows the output field for εr = 3, 5, 7, and 10, respectively, where y = b/2 = 10 mm. By increasing only the dielectric profile from εr = 3 to εr = 5, the width of the output field increased, and also the output amplitude increased. The output fields are strongly affected by the input-wave profile (TE10 mode), the location, and the dielectric profile, as shown in Figure 4(a)–(c) and Figure 5(a)–(e). Figure 6(a)–(e) shows the output field as a response to a half-sine (TE10) input-wave profile in the case of the circular dielectric profile (Figure 1(c)), for εr = 3, 5, 7 and 10, respectively, where a = b = 20 mm, and the radius of the circular dielectric profile is equal to 1 mm. Figure 6(e) shows the output field for εr = 3, 5, 7, and 10, respectively, where y = b/2 = 10 mm. The other parameters are z = 0.15 m, k0 = 167 1/m, λ = 3.75 cm, and β = 58 1/m. The proposed technique in Section 3.3 is also effective to solve discontinuous problems of periodic circular profiles in the cross section of the straight rectangular waveguides, and some examples were demonstrated in Ref. [20]. The behavior of the output fields (Figures 5(a)–(e) and 6(a)–(e)) is similar when the dimensions of the rectangular dielectric profile (Figure 1(b)) and the circular profile (Figure 1(c)) are very close.
The output field (Figure 5(a)–(e)) is shown for c = d = 2 mm as regards to the dimensions a = b = 20 mm. The output field (Figure 6(a)–(e)) is shown where the radius of the circular profile is equal to 1 mm (viz., the diameter is 2 mm), as regards to the dimensions a = b = 20 mm. Figure 7(a)–(e) shows the output field as a response to a half-sine (TE10) input-wave profile in the case of the circular dielectric profile (Figure 1(c)), where a = b = 20 mm, and the radius of the circular dielectric profile is equal to 2 mm, for εr = 3, 5, 7, and 10, respectively. The other parameters are z = 0.15 m, k0 = 167 1/m, λ = 3.75 cm, and β = 58 1/m. Figure 7(e) shows the output field for εr = 3, 5, 7, and 10, respectively, where y = b/2 = 10 mm. By changing only the value of the radius of the circular dielectric profile (Figure 1(c)) from 1 mm to 2 mm, as regards to the dimensions of the cross section of the waveguide (a = b = 20 mm), the output field of the Gaussian shape increased, and the half-sine (TE10) input-wave profile decreased. The dielectric profile of the hollow rectangular waveguide with the dielectric material between the hollow rectangle and the metal (Figure 1(e)) is calculated by subtracting the dielectric profile of the waveguide with the dielectric material in the core (Figure 1(b)) from the dielectric profile according to the waveguide entirely filled with the dielectric profile (Figure 1(d)). Figure 8(a)–(c) shows the output field as a response to a half-sine (TE10) input-wave profile in the case of the hollow rectangular waveguide with one dielectric material between the hollow rectangle and the metal (Figure 1(e)), where a = b = 20 mm, c = 14 mm, and d = 14 mm, namely, e = 3 mm and f = 3 mm. Figure 8(a)–(b) shows the output field for εr = 2.5 and εr = 4, respectively. Figure 8(c) shows the output field for εr = 2.5, 3, 3.5, and 4, respectively, where y = b/2 = 10 mm. The other parameters are z = 0.15 m, k0 = 167 1/m, λ = 3.75 cm, and β = 58 1/m. Figure 9(a)–(c) shows the output power density in the case of the hollow circular waveguide with one dielectric coating (Figure 1(f)), where a = 0.5 mm. Figure 9(a)–(b) shows the output power density for w0 = 0.15 mm and w0 = 0.25 mm, respectively. The output power density of the central peak is shown for w0 = 0.15 mm, w0 = 0.2 mm, and w0 = 0.25 mm, respectively, where y = b/2. The other parameters are z = 1 m, nd = 2.2, and nAg = 13.5 − j75.3. Figure 1(f) and (g) shows two examples of discontinuous problems for circular waveguides. The practical results are demonstrated for Figure 1(f). Figure 10(a)–(c) also shows the output power density in the case of the hollow circular waveguide with one dielectric coating (Figure 1(f)), where a = 0.5 mm, but for other values of the spot size. Figure 10(a)–(b) shows the output power density for w0 = 0.26 mm and w0 = 0.3 mm, respectively. The output power density of the central peak is shown for w0 = 0.26 mm, w0 = 0.28 mm, and w0 = 0.3 mm, respectively, where y = b/2. The other parameters are z = 1 m, nd = 2.2, and nAg = 13.5 − j75.3. By changing only the values of the spot size from w0 = 0.15 mm, w0 = 0.2 mm, and w0 = 0.25 mm to w0 = 0.26 mm, w0 = 0.28 mm, and w0 = 0.3 mm, respectively, the results of the output power density for a = 0.5 mm are changed, as shown in Figure 10(a)–(c). The output modal profile is greatly affected by the parameters of the spot size and the dimensions of the cross section of the waveguide.
Figure 10(a)–(c) demonstrates that, in addition to the main propagation mode, several other secondary modes and a symmetric output shape appear in the results of the output power density for the values of w0 = 0.26 mm, w0 = 0.28 mm, and w0 = 0.3 mm, respectively. The proposed technique in Section 4.2 is also effective to solve discontinuous problems of the straight hollow circular waveguide with three dielectric layers (Figure 1(g)), and some examples were demonstrated in Ref. [21]. ## 6 Conclusions Several examples for the rectangular and circular waveguides with the discontinuous dielectric profile in the cross section of the straight waveguide were demonstrated in this research, according to Figure 1(a)–(g). Figure 4(a)–(c) demonstrates the output field as a response to a half-sine (TE10) input-wave profile in the case of the slab profile (Figure 1(a)), where a = b = 20 mm, c = 20 mm, and d = 2 mm, for εr = 3 and 5, respectively. By increasing only the value of the dielectric profile from εr = 3 to εr = 5, the width of the output field decreased, and also the output amplitude decreased. Figure 5(a)–(e) demonstrates the output field as a response to a half-sine (TE10) input-wave profile in the case of the rectangular dielectric profile in the rectangular waveguide (Figure 1(b)), where a = b = 20 mm and c = d = 2 mm, for εr = 3, 5, 7, and 10, respectively. By increasing only the dielectric profile from εr = 3 to εr = 5, the width of the output field increased, and also the output amplitude increased. The output fields are strongly affected by the input-wave profile (TE10 mode), the location, and the dielectric profile, as shown in Figure 4(a)–(c) and Figure 5(a)–(e). The behavior of the output fields (Figures 5(a)–(e) and 6(a)–(e)) is similar when the dimensions of the rectangular dielectric profile (Figure 1(b)) and the circular profile (Figure 1(c)) are very close. The output field (Figure 5(a)–(e)) is shown for c = d = 2 mm as regards to the dimensions a = b = 20 mm. The output field (Figure 6(a)–(e)) is shown where the radius of the circular profile is equal to 1 mm (viz., the diameter is 2 mm), as regards to the dimensions a = b = 20 mm. Figures 6(a)–(e) and 7(a)–(e) show the output field as a response to a half-sine (TE10) input-wave profile in the case of the circular dielectric profile (Figure 1(c)), for εr = 3, 5, 7, and 10, respectively, where a = b = 20 mm, and the radius of the circular dielectric profile is equal to 1 mm. By changing only the value of the radius of the circular dielectric profile (Figure 1(c)) from 1 mm to 2 mm, as regards to the dimensions of the cross section of the waveguide (a = b = 20 mm), the output field of the Gaussian shape increased, and the half-sine (TE10) input-wave profile decreased. Figure 8(a)–(c) shows the output field as a response to a half-sine (TE10) input-wave profile in the case of the hollow rectangular waveguide with one dielectric material between the hollow rectangle and the metal (Figure 1(e)), where a = b = 20 mm, c = 14 mm, and d = 14 mm, namely, e = 3 mm and f = 3 mm. Figures 9(a)–(c) and 10(a)–(c) show the output power density in the case of the hollow circular waveguide with one dielectric coating (Figure 1(f)), where a = 0.5 mm. By changing only the values of the spot size from w0 = 0.15 mm, w0 = 0.2 mm, and w0 = 0.25 mm to w0 = 0.26 mm, w0 = 0.28 mm, and w0 = 0.3 mm, respectively, the results of the output power density for a = 0.5 mm are changed as shown in Figure 10(a)–(c).
The output modal profile is greatly affected by the parameters of the spot size and the dimensions of the cross section of the waveguide. Figure 10(a)–(c) demonstrates that, in addition to the main propagation mode, several other secondary modes and a symmetric output shape appear in the results of the output power density for the values of w0 = 0.26 mm, w0 = 0.28 mm, and w0 = 0.3 mm, respectively. The two important parameters that we studied were the spot size and the dimensions of the cross section of the straight hollow waveguide. The output results are affected by the parameters of the spot size and the dimensions of the cross section of the waveguide. © 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
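To make the smoothed transitions of Sections 3.1 and 4.1 concrete, here is a minimal numerical sketch (an illustration only, not the chapter's computational code; the core radius b is an assumed value, and only the real parts of the indices are used, although the silver index is complex in the text):

```python
import math

# Smoothed step between two refractive indices across a transition
# region [r0 - eps/2, r0 + eps/2], using exp(1 - eps^2/(eps^2 - u^2))
# with u = r - r0 - eps/2, as in the reconstructed profiles above.
def smooth_step(r, r0, eps, n_left, n_right):
    if r < r0 - eps / 2:
        return n_left
    if r >= r0 + eps / 2:
        return n_right
    u = r - r0 - eps / 2
    if abs(u) >= eps:                    # guard the edge of the region
        return n_left
    return n_left + (n_right - n_left) * math.exp(1 - eps**2 / (eps**2 - u**2))

a = 0.5e-3        # outer radius of the dielectric coating [m] (quoted in the text)
b = 0.4e-3        # assumed core radius [m] (not quoted in the text)
eps = (a - b) / 50                       # transition width, eps = (a - b)/50
n0, nd = 1.0, 2.2                        # air and AgI (real part, as for Figures 9-10)

# Air -> AgI transition around r = b (the metal transition at r = a is analogous).
for r in (0.0, b - eps, b, b + eps, (a + b) / 2):
    print(r, smooth_step(r, b, eps, n0, nd))
```

The printed values rise smoothly from $n_0$ to $n_d$ across the thin transition region, which is exactly the role played by the $\omega_\varepsilon$-type smoothing in the two techniques.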
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8482891917228699, "perplexity": 1055.7659790726186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367949.58/warc/CC-MAIN-20210303230849-20210304020849-00104.warc.gz"}
http://www.icoachmath.com/physics/definition-of-hydraulic-machine.html
# Hydraulic Machine

## Definition of Hydraulic Machine

A machine in which force is transmitted by a liquid under pressure is known as a hydraulic machine. Examples: hydraulic brakes, hydraulic jack.

#### Hydraulic Brakes

The brakes used in cars are hydraulic brakes.

- The diagram (not reproduced here) describes the working principle of a hydraulic brake.
- When the brake pedal is pressed, a piston forces brake fluid from one cylinder along a connecting pipe to another cylinder.
- There, the fluid pushes on another piston. This pushes a brake pad against a metal disc attached to the rotating wheel of the car.
- The friction slows the wheel.
- Liquids are practically incompressible (they cannot be compressed).
- When a liquid is trapped (closed in a container) and pressure is exerted on it, this pressure is transmitted to all parts of the liquid.
- A force exerted on a small piston area therefore reappears, at the same pressure, as a much larger force on a larger piston area.
- Thus the small force applied to a car's brake pedal results in a large force at all four wheels.

#### Worked Example

The following system contains a fluid. A downward force of 20 N is applied on piston X. What will be the upward pressure exerted by the liquid at Y? (The piston areas were given in a figure that is not reproduced here.)

A. 4 N/cm²
B. 1 N/cm²
C. 20 N/cm²
D. 40 N/cm²
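The bullets above amount to Pascal's principle: the pressure P = F1/A1 is transmitted unchanged through the fluid, so the output force is F2 = P · A2. A minimal Python sketch of the arithmetic; since the figure with the piston dimensions is not reproduced here, the areas below are illustrative assumptions (note that answer choice A, 4 N/cm², would correspond to piston X having an area of 5 cm²):

```python
# Pascal's principle: pressure applied to a trapped liquid is transmitted
# undiminished, so F1 / A1 = F2 / A2 throughout the system.

def transmitted_pressure(force_n: float, area_cm2: float) -> float:
    """Pressure (N/cm^2) produced by a force on a piston of given area."""
    return force_n / area_cm2

def output_force(pressure: float, area_cm2: float) -> float:
    """Force (N) exerted by that pressure on another piston."""
    return pressure * area_cm2

# Illustrative numbers only -- the figure giving the piston areas is not
# reproduced above. If piston X had an area of 5 cm^2, the 20 N push would
# give 20 / 5 = 4 N/cm^2 everywhere in the fluid, and a 50 cm^2 output
# piston would then feel 4 * 50 = 200 N.
p = transmitted_pressure(20.0, 5.0)
print(p)                      # 4.0 N/cm^2
print(output_force(p, 50.0))  # 200.0 N
```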
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9197558164596558, "perplexity": 3185.4699689676904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00209-ip-10-171-10-70.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/6501/is-there-a-known-well-ordering-of-the-reals
# Is there a known well ordering of the reals?

So, from what I understand, the axiom of choice is equivalent to the claim that every set can be well ordered. A set is well ordered by a relation, $R$, if every non-empty subset has a least element. My question is: has anyone constructed a well ordering on the reals?

First, I was going to ask this question about the rationals, but then I realised that if you pick your favourite bijection between the rationals and the naturals, this determines a well ordering on the rationals through the natural well order on $\mathbb{N}$. So it's not the denseness of the reals that makes it hard to well order them. So is it just the size of $\mathbb{R}$ that makes it difficult to find a well order for it? Why should that be?

To reiterate:

- Is there a known well order on the reals?
- If there is, does a similar construction work for larger cardinalities?
- Is there a largest cardinality for which the construction works?

- You read XKCD don't you? – BCS Oct 11 '10 at 15:30
- @BCS yes, but I only saw today's in this question: math.stackexchange.com/questions/6489/… and in fact, the question I asked has been bugging me for some time. – Seamus Oct 11 '10 at 15:37
- Goedel explicitly constructed a subset of the reals and a well order on the subset such that (in ZF) it is consistent that the subset is all reals. But subsequently Cohen showed it is also consistent that the subset is NOT all reals. – GEdgar May 18 '11 at 19:33

I assume you know the general theorem that, using the axiom of choice, every set can be well ordered. Given that, I think you're asking how hard it is to actually define the well ordering. This is a natural question but it turns out that the answer may be unsatisfying.

First, of course, without the axiom of choice it's consistent with ZF set theory that there is no well ordering of the reals. So you can't just write down a formula of set theory akin to the quadratic formula that will "obviously" define a well ordering. Any formula that does define a well-ordering of the reals is going to require a nontrivial proof to verify that it's correct.

However, there is not even a formula that unequivocally defines a well ordering of the reals in ZFC.

- The theorem of "Borel determinacy" implies that there is no well ordering of the reals whose graph is a Borel set. This is provable in ZFC. The stronger hypothesis of "projective determinacy" implies there is no well ordering of the reals definable by a formula in the projective hierarchy. This is consistent with ZFC but not provable in ZFC.
- Worse, it's even consistent with ZFC that no formula in the language of set theory defines a well ordering of the reals (even though one exists). That is, there is a model of ZFC in which no formula defines a well ordering of the reals.

A set theorist could tell you more about these results. They are in the set theoretic literature but not in the undergraduate literature.

Here is a positive result. If you work in $L$ (that is, you assume the axiom of constructibility) then a specific formula is known that defines a well ordering of the reals in that context. However, the axiom of constructibility is not provable in ZFC (although it is consistent with ZFC), and the formula in question does not define a well ordering of the reals in arbitrary models of ZFC.

A second positive result, for relative definability.
By looking at the standard proof of the well ordering principle (Zermelo's proof), we see that there is a single formula $\phi(x,y,z)$ in the language of set theory such that if we have any choice function $F$ on the powerset of the reals then the formula $\psi(x,y) = \phi(x,y,F)$ defines a well ordering of the reals, in any model of ZF that happens to have such a choice function. Informally, this says that the reason the usual proof can't explicitly construct a well ordering is because we can't explicitly construct the choice function that the proof takes as an input. - No, it's not just the size. One can constructively prove the existence of large well-ordered sets, but for example even when one has the first uncountable ordinal in hand, one can't show that it is in bijection with $\mathbb{R}$ without the continuum hypothesis. All the difficulty in the problem has to do with what you mean by "constructed." If one has a well-ordering on $\mathbb{R}$ then it is possible to carry out the construction of a Vitali set, which is a non-measurable subset of $[0, 1]$. And it is known that the existence of non-measurable subsets of $\mathbb{R}$ is independent of ZF. In other words, it is impossible to write down a well-ordering of $\mathbb{R}$ in ZF.
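As a concrete footnote to the question's remark about the rationals: any explicit enumeration of $\mathbb{Q}$ transfers the well-order of $\mathbb{N}$. A minimal Python sketch using the Calkin–Wilf enumeration of the positive rationals (the function names are ours, for illustration; extending to $0$ and the negative rationals is a routine interleaving):

```python
import itertools
from fractions import Fraction

def calkin_wilf():
    """Enumerate every positive rational exactly once: 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ..."""
    q = Fraction(1)
    while True:
        yield q
        # Newman's successor formula for the Calkin-Wilf sequence
        q = 1 / (2 * (q.numerator // q.denominator) + 1 - q)

def precedes(x, y, bound=10**6):
    """The induced well-order on the positive rationals: x comes before y iff
    x is enumerated first. Every non-empty subset has a least element because
    its set of enumeration indices is a non-empty subset of N."""
    for q in itertools.islice(calkin_wilf(), bound):
        if q == x:
            return True
        if q == y:
            return False
    raise ValueError("bound too small for these inputs")

print([str(q) for q in itertools.islice(calkin_wilf(), 6)])  # ['1', '1/2', '2', '1/3', '3/2', '2/3']
print(precedes(Fraction(2), Fraction(1, 3)))                 # True: 2 precedes 1/3 in this order
```

No such explicit enumeration trick is available for $\mathbb{R}$, which is exactly the definability obstruction the answers above describe.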
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9527174234390259, "perplexity": 121.769071511458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999676283/warc/CC-MAIN-20140305060756-00048-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.arxiv-vanity.com/papers/astro-ph/0208035/
# Monte Carlo Study of Supernova Neutrino Spectra Formation

Mathias Th. Keil and Georg G. Raffelt

Max-Planck-Institut für Physik (Werner-Heisenberg-Institut) Föhringer Ring 6, 80805 München, Germany

Hans-Thomas Janka

Max-Planck-Institut für Astrophysik Karl-Schwarzschild-Str. 1, 85741 Garching, Germany

###### Abstract

The neutrino flux and spectra formation in a supernova core is studied by using a Monte Carlo code. The dominant opacity contribution for $\nu_\mu$ is elastic scattering on nucleons, $\nu_\mu N \to N \nu_\mu$, where $\nu_\mu$ always stands for either $\nu_\mu$ or $\nu_\tau$. In addition we switch on or off a variety of processes which allow for the exchange of energy or the creation and destruction of $\nu_\mu\bar\nu_\mu$ pairs, notably nucleon bremsstrahlung $NN \to NN\nu_\mu\bar\nu_\mu$, the pair annihilation processes $e^+e^- \to \nu_\mu\bar\nu_\mu$ and $\nu_e\bar\nu_e \to \nu_\mu\bar\nu_\mu$, recoil and weak magnetism in elastic nucleon scattering, elastic scattering on electrons $\nu_\mu e^\pm \to e^\pm \nu_\mu$, and elastic scattering on electron neutrinos and anti-neutrinos $\nu_\mu \nu_e \to \nu_e \nu_\mu$ and $\nu_\mu \bar\nu_e \to \bar\nu_e \nu_\mu$. The least important processes are neutrino-neutrino scattering and $e^+e^-$ annihilation. The formation of the spectra and fluxes of $\nu_\mu$ is dominated by the nucleonic processes, i.e. bremsstrahlung and elastic scattering with recoil, but also $\nu_e\bar\nu_e$ annihilation and $\nu_\mu e^\pm$ scattering contribute significantly. When all processes are included, the spectral shape of the emitted neutrino flux is always “pinched,” i.e. the width of the spectrum is smaller than that of a thermal spectrum with the same average energy. In all of our cases we find that the average $\bar\nu_\mu$ energy exceeds the average $\bar\nu_e$ energy by only a small amount, 10% being a typical number. Weak magnetism effects cause the opacity of $\nu_\mu$ to differ slightly from that of $\bar\nu_\mu$, translating into differences of the luminosities and average energies of a few percent. Depending on the density, temperature, and composition profile, the flavor-dependent luminosities $L_{\nu_e}$, $L_{\bar\nu_e}$, and $L_{\nu_\mu}$ can mutually differ from each other by up to a factor of two in either direction.

diffusion — neutrinos — supernovae: general

## 1 Introduction

In numerical core-collapse supernova (SN) simulations, the transport of $\mu$- and $\tau$-neutrinos has received scant attention because their exact fluxes and spectra are probably not crucial for the explosion mechanism. However, the recent experimental evidence for neutrino oscillations implies that the flavor-dependent fluxes and spectra emitted by a SN will be partly swapped so that at any distance from the source the actual fluxes and spectra can be very different from those originally produced. In principle, this effect can be important for the SN shock revival (Fuller et al. 1992) and r-process nucleosynthesis (Qian et al. 1993, Pastor & Raffelt 2002), although the experimentally favored small neutrino mass differences suggest that this is not the case. On the other hand, in view of the large-mixing-angle solution of the solar neutrino problem flavor oscillations are quite relevant for the interpretation of the SN 1987A neutrino signal (Jegerlehner, Neubig, & Raffelt 1996, Lunardini & Smirnov 2001a, Kachelriess et al. 2002, Smirnov, Spergel, & Bahcall 1994). More importantly, the high-statistics neutrino signal from a future galactic SN may allow one to differentiate between some of the neutrino mixing scenarios which explain the presently available data (Chiu & Kuo 2000, Dighe & Smirnov 2000, Dutta et al. 2000, Fuller, Haxton, & McLaughlin 1999, Lunardini & Smirnov 2001b, 2003, Minakata & Nunokawa 2001, Takahashi & Sato 2002).
Even though the solution of the solar neutrino problem has been established, the magnitude of the small mixing angle and the question if the neutrino mass hierarchy is normal or inverted will remain open and can be settled only by future precision measurements at dedicated long-baseline oscillation experiments (Barger et al. 2001, Cervera et al. 2000, Freund, Huber, & Lindner 2001) and/or the observation of a future galactic SN.

The usefulness of SN neutrinos for diagnosing flavor oscillations depends on the flavor dependence of the fluxes and spectra at the source. Very crudely, a SN core is a black-body source of neutrinos of all flavors which are emitted from the surface of the proto-neutron star that was born after collapse. It is the flavor-dependent details of the neutrino transport in the neutron-star atmosphere which cause the spectral and flux differences that can lead to interesting oscillation effects.

The $\nu_e$ and $\bar\nu_e$ opacity is dominated by the charged-current processes $\nu_e n \to p e^-$ and $\bar\nu_e p \to n e^+$, reactions that allow for the exchange of energy and lepton number between the medium and the neutrinos. Therefore, it is straightforward to define an energy-dependent neutrinosphere where this reaction freezes out for neutrinos of a particular energy. This sphere yields a thermal contribution to the neutrino flux at the considered energy. The atmosphere of the proto-neutron star is neutron rich, providing for a larger $\nu_e$ opacity than for $\bar\nu_e$ so that for a given energy the $\bar\nu_e$ flux originates at deeper and thus hotter layers than the $\nu_e$ flux. In other words, a larger fraction of the $\bar\nu_e$ flux emerges with high energies. This simple observation explains the usual hierarchy $\langle\epsilon_{\nu_e}\rangle < \langle\epsilon_{\bar\nu_e}\rangle$ of the mean energies. The spectra are found to be “pinched”, meaning that the high-energy tail is suppressed relative to that of a thermal spectrum with the same mean energy (Janka & Hillebrandt 1989a,b). This numerical result can be understood analytically by constructing the neutrino spectrum from the fluxes emitted by the energy-dependent neutrinospheres which are at different temperatures (Myra, Lattimer, & Yahil 1988, Giovanoni, Ellison, & Bruenn 1989).

The formation of the $\nu_\mu$, $\bar\nu_\mu$, $\nu_\tau$, and $\bar\nu_\tau$ spectra is far more complicated. The opacity is dominated by neutral-current scattering on nucleons, $\nu_\mu N \to N \nu_\mu$, a process that prevents neutrino free streaming, but is unable to change the neutrino number and is usually considered to be inefficient at exchanging energy. (Here and in the following $\nu_\mu$ stands for either $\nu_\mu$ or $\nu_\tau$.) Neutrino pairs can be created by nucleon bremsstrahlung, $NN \to NN\nu_\mu\bar\nu_\mu$, and pair annihilation, $e^+e^- \to \nu_\mu\bar\nu_\mu$ or $\nu_e\bar\nu_e \to \nu_\mu\bar\nu_\mu$, while pairs are absorbed by the inverse reactions. In addition, energy is exchanged by elastic scattering on leptons, notably $\nu_\mu e^\pm \to e^\pm \nu_\mu$, by the recoil in nucleon scattering, $\nu_\mu N \to N \nu_\mu$, and by inelastic scattering on nucleons $\nu_\mu NN \to NN \nu_\mu$, a channel that is the “crossed process” of bremsstrahlung. For a given neutrino energy these processes freeze out at different radii so that one can define a “number sphere” for the pair processes, an “energy sphere” for the energy-exchange processes, and a “transport sphere” for elastic nucleon scattering, with $R_{\rm number} < R_{\rm energy} < R_{\rm transport}$ (Suzuki 1990). The region between the number sphere and the transport sphere plays the role of a scattering atmosphere because neutrinos can not be created or destroyed. They propagate by diffusion and can still exchange energy with the background medium. Usually the $\bar\nu_e$ transport sphere is deeper than the $\nu_\mu$ sphere so that numerical simulations find $\langle\epsilon_{\bar\nu_e}\rangle < \langle\epsilon_{\nu_\mu}\rangle$. This hierarchy is the main motivation for the proposed use of SN neutrinos as a diagnostic for neutrino oscillations.
However, the quantitative statements found in the literature range from $\langle\epsilon_{\nu_\mu}\rangle$ being 20% to nearly a factor of 2 larger than $\langle\epsilon_{\bar\nu_e}\rangle$; for a review see Janka (1993) and Sec. 4.3. Of course, the mean energies and their ratios change significantly between the SN bounce, accretion phase, and the later neutron-star cooling phase. Therefore, one must distinguish carefully between instantaneous fluxes and spectra and the time-integrated values. While for the analysis of the sparse SN 1987A data only time-integrated values make sense, a future galactic SN may well produce enough events to study the instantaneous fluxes and spectra (Barger, Marfatia, & Wood 2001, Minakata et al. 2001).

The overall energy emitted by a SN is often said to be equipartitioned among all six neutrino degrees of freedom. In some numerical simulations the neutrino luminosities are indeed astonishingly equal for all flavors (Totani et al. 1998), while other simulations easily find a factor of two difference between, say, the $\bar\nu_e$ and $\nu_\mu$ luminosities, at least during the accretion phase (Mezzacappa et al. 2001). Therefore, it is by no means obvious how precisely equipartition can be assumed for the purpose of diagnosing neutrino oscillations.

Another important feature is the neutrino spectral shape, notably the amount of pinching. If one could assume with confidence that the instantaneous spectra of all flavors are pinched at the source, and if the measured SN neutrino spectra were instead found to be anti-pinched, this effect would be a powerful diagnostic for the partial spectral swapping caused by flavor oscillations (Dighe & Smirnov 2000).

Unfortunately, the existing literature does not allow one to develop a clear view on these “fine points” of the neutrino fluxes and spectra, largely because not enough attention has been paid to the $\nu_\mu$ and $\bar\nu_\mu$ emission from a SN core. The published full numerical SN collapse simulations have not yet included the bremsstrahlung process or nucleon recoils (but see first results of state-of-the-art models in Rampp et al. 2002), even though it is no longer controversial that these effects are important (Janka et al. 1996, Burrows et al. 2000, Hannestad & Raffelt 1998, Raffelt 2001, Suzuki 1991, 1993, Thompson, Burrows, & Horvath 2000). Moreover, some of the interesting information such as the spectral pinching was usually not documented. Another problem with self-consistent hydrodynamic simulations is that the models with the most elaborate neutrino transport usually do not explode so that even the most recent state-of-the-art simulations do not reach beyond the accretion phase at a few hundred milliseconds after bounce (Rampp & Janka 2000, Mezzacappa et al. 2001, Liebendörfer et al. 2001), thus not providing any information on the neutron-star cooling phase. Successful multi-dimensional models of the explosion (e.g., Fryer & Warren 2002, Fryer 1999 and references therein) were also not continued to the neutron-star cooling phase. These simulations, moreover, treat the neutrino transport only in a very approximate way and do not provide spectral information. The calculations performed by the Livermore group also yield robust explosions (Totani et al. 1998). They include a mixing-length treatment of the phenomenon of neutron-finger convection in the neutron star, that increases the early neutrino luminosities and thus enhances the energy transfer by neutrinos to the postshock medium (Wilson & Mayle 1993).
Whether neutron-finger convection actually occurs inside the neutrinosphere and has effects on a macroscopic scale, however, is an unsettled issue.

We will follow here an alternative approach to full hydrodynamic simulations, i.e. we will study neutrino transport on the background of an assumed neutron-star atmosphere. While this approach lacks hydrodynamic self-consistency, it has the great advantage of allowing one to study systematically the influence of various pieces of microscopic input physics and of the medium profile. The goal is to develop a clearer picture of the generic properties of the SN neutrino spectra and fluxes and what they depend upon. To this end we have adapted the Monte Carlo code of Janka (1987, 1991) and added new microphysics to it. We go beyond the work of Janka & Hillebrandt (1989a,b) in that we include the bremsstrahlung process, nucleon recoils and weak magnetism, $\nu_e\bar\nu_e$ pair annihilation into $\nu_\mu\bar\nu_\mu$, and scattering of $\nu_\mu$ on $\nu_e$ and $\bar\nu_e$. With these extensions we investigate the neutrino transport systematically for a variety of medium profiles that are representative for different SN phases.

One of us (Raffelt 2001) has recently studied the spectra-formation problem with the limitation to nucleonic processes (elastic and inelastic scattering, recoils, bremsstrahlung), to Maxwell-Boltzmann statistics for the neutrinos, and plane-parallel geometry. Our present study complements this more schematic work by including the leptonic processes, Fermi-Dirac statistics, and spherical geometry. In addition we apply our Monte Carlo code to the transport of $\nu_e$ and $\bar\nu_e$ and thus are able to compare the flavor-dependent fluxes and spectra.

In Sec. 2 we first assess the relative importance of different processes in terms of their energy-dependent “thermalization depth”. In this context we introduce a number of stellar background models. In Sec. 3 we perform a Monte Carlo study of $\nu_\mu$ transport on the previously introduced background models in order to assess the importance of different pieces of input physics. In Sec. 4 we compare the $\nu_\mu$ fluxes and spectra with those of $\nu_e$ and $\bar\nu_e$. We conclude in Sec. 5 with a discussion and summary of our findings.

## 2 Thermalization Depth of Energy-Exchange Processes

### 2.1 Simple Picture of Spectra Formation

One of our goals is to assess the relative importance of different neutrino interaction channels with the background medium of the SN core. As a first step it is instructive to study the thermalization depth of various energy-exchange processes. Within the transport sphere, the neutrinos are trapped by elastic scatterings on nucleons, $\nu N \to N\nu$, which are by far the most frequent reactions between neutrinos and particles of the stellar medium. (Unless otherwise noted, “neutrinos” always refers to any of $\nu_\mu$, $\bar\nu_\mu$, $\nu_\tau$, or $\bar\nu_\tau$.) Assuming for the moment that these collisions are iso-energetic (no nucleon recoils), it is straightforward to define for a neutrino of given energy $\epsilon$ the location (“thermalization depth”) where it last exchanged energy with the medium by a reaction such as nucleon bremsstrahlung. Following Shapiro & Teukolsky (1983) we define the optical depth for energy exchange or thermalization by

$$\tau_{\rm therm}(r)=\int_r^\infty dr'\,\sqrt{\frac{1}{\lambda_E(r')}\left[\frac{1}{\lambda_T(r')}+\frac{1}{\lambda_E(r')}\right]}\,. \qquad (1)$$

Here, $\lambda_E$ is the mean free path (mfp) for the relevant energy-exchange process and $\lambda_T$ the transport mfp, i.e. the mfp corresponding to the cross section for momentum exchange in the reaction. The quantities $\lambda_E$, $\lambda_T$, and $\tau_{\rm therm}$ are all understood to depend on the neutrino energy $\epsilon$. The main philosophy of Eq. (1) is that a neutrino trapped by elastic scattering has a chance to exchange energy corresponding to its actual diffusive path through the scattering atmosphere; for a discussion see Suzuki (1990). The thermalization depth is given by

$$\tau_{\rm therm}(R_{\rm therm})=\frac{2}{3}\,, \qquad (2)$$

where $R_{\rm therm}$ depends on the neutrino energy $\epsilon$.
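To make Eqs. (1) and (2) concrete, here is a minimal numerical sketch, assuming the mean free paths have already been tabulated on a radial grid. The profile and mfp scalings below are illustrative placeholders (only the $\epsilon^2$ transport-cross-section scaling is taken from the text), not the Appendix B rates:

```python
import numpy as np

def tau_therm(r, lam_E, lam_T):
    """Optical depth for thermalization, Eq. (1): integrate
    sqrt[(1/lam_E)(1/lam_T + 1/lam_E)] from each grid radius to the surface
    (trapezoidal rule). All inputs are arrays on the same radial grid."""
    integrand = np.sqrt((1.0 / lam_E) * (1.0 / lam_T + 1.0 / lam_E))
    shells = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)
    # accumulate from the outside in; tau vanishes at the outer edge
    return np.concatenate([np.cumsum(shells[::-1])[::-1], [0.0]])

def r_therm(r, lam_E, lam_T):
    """Thermalization radius defined by tau_therm(R_therm) = 2/3, Eq. (2)."""
    tau = tau_therm(r, lam_E, lam_T)
    return np.interp(2.0 / 3.0, tau[::-1], r[::-1])  # tau decreases outward

# Toy mean free paths for one neutrino energy (placeholders, not the
# Appendix B rates): lam_T ~ 1/(rho eps^2) from the nucleon transport
# cross section; energy exchange is taken to be much rarer than scattering.
r = np.linspace(10e5, 100e5, 400)        # radius in cm (10-100 km)
rho = 1e13 * (r[0] / r) ** 5             # toy power-law density, g/cm^3
eps = 20.0                               # neutrino energy in MeV
lam_T = 2.0e18 / (rho * eps**2)          # cm, schematic scaling only
lam_E = 50.0 * lam_T                     # energy-exchange mfp
print(f"R_therm({eps:.0f} MeV) ~ {r_therm(r, lam_E, lam_T) / 1e5:.1f} km")
```

With the real rates, repeating this for a grid of energies yields the energy-dependent $R_{\rm therm}(\epsilon)$ curves of Figs. 3–6.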
When this energy dependence is not too steep it makes sense to define an average thermalization depth, i.e. an “energy sphere” that for pair creating processes is equal to the “number sphere.” For nucleon bremsstrahlung this requirement is well fulfilled (Raffelt 2001) so that one may picture the energy sphere as a blackbody surface that injects neutrinos into the scattering atmosphere and absorbs those scattered back. The neutrino flux and spectrum emerging from the transport sphere is then easily understood in terms of the energy-dependent transmission probability of the blackbody spectrum launched at the energy sphere. The transport cross section scales as $\epsilon^2$, implying that the transmitted flux spectrum is shifted to lower energies relative to the temperature at the energy sphere. This simple “filter effect” accounts surprisingly well for the emerging flux spectrum (Raffelt 2001). For typical conditions the mean flux energies are 50–60% of those corresponding to the blackbody conditions at the energy sphere.

Moreover, it is straightforward to understand that the effective temperature of the emerging flux spectrum is not overly sensitive to the exact location of the energy sphere. If the energy-exchange reaction is somewhat more effective, the energy sphere is at a larger radius with a lower medium temperature. However, the scattering atmosphere has a smaller optical depth so that the higher-energy neutrinos are less suppressed by the filter effect, partly compensating the smaller energy-sphere temperature. For typical situations Raffelt (2001) found that changing the bremsstrahlung rate by a factor of 3 would change the emerging neutrino energies only by some 10%. This finding suggests that the emitted average neutrino energy is not overly sensitive to the details of the energy-exchange processes.

### 2.2 Neutron-Star Atmospheres

In order to determine the location of the thermalization depth for different processes we need to define our assumed neutron-star atmospheres. As a first example we use a model taken from a full hydrodynamic simulation. This model is representative for the accretion phase; henceforth we will refer to it as the “Accretion-Phase Model I” (Fig. 1). It was provided to us by O. E. B. Messer and was already used in Raffelt (2001) for a more schematic study. Based on the Woosley & Weaver progenitor model labeled s15s7b, the Newtonian collapse simulation was performed with the SN code developed by Mezzacappa et al. (2001). The snapshot is taken at 324 ms after bounce when the shock is at about 120 km, i.e. the star still accretes matter. In this simulation the traditional microphysics for $\nu_\mu$ transport was included, i.e. iso-energetic scattering on nucleons, $e^+e^-$ annihilation and $\nu_\mu e^\pm$ scattering.

As another self-consistent example (Accretion-Phase Model II) we obtained a 150 ms postbounce model from M. Rampp (personal communication) that uses a very similar progenitor (s15s7b2). The simulation includes an approximate general relativistic treatment in spherical symmetry as described by Rampp & Janka (2002). The three neutrino flavors are transported with all relevant interactions except $\nu_e\bar\nu_e$ pair annihilation to $\nu_\mu\bar\nu_\mu$ (see also Sec. 4.1 and Rampp et al. 2002).
As another set of examples we use two power-law profiles of the form

$$\rho=\rho_0\left(\frac{r_0}{r}\right)^p,\qquad T=T_0\left(\frac{r_0}{r}\right)^q, \qquad (3)$$

with a constant electron fraction per baryon $Y_e$. We adjust parameters such that $\langle\epsilon\rangle\approx20$–25 MeV for the emerging neutrinos to obtain model atmospheres in the ballpark of results from proto-neutron star evolution calculations. We define a “steep” power-law model, corresponding to the one used by Raffelt (2001), and a “shallow” one; the characteristics are given in Table 1. The shallow model could be characteristic of a SN core during the accretion phase while the steep model is more characteristic for the neutron-star cooling phase. The constant electron fraction $Y_e$ is another parameter that allows us to investigate the relative importance of the leptonic processes as a function of the assumed $Y_e$.

### 2.3 Thermalization Depth

We now calculate the thermalization depth as a function of the neutrino energy for several energy-exchanging processes and the neutron-star atmospheres described above. We consider the neutrino mfp for nucleon bremsstrahlung $NN \to NN\nu\bar\nu$, pair annihilation $e^+e^- \to \nu\bar\nu$ and $\nu_e\bar\nu_e \to \nu_\mu\bar\nu_\mu$, and scattering on charged leptons $\nu e^\pm \to e^\pm\nu$. The numerical implementation of the reaction rates is described in Appendix B.

In Figs. 3 and 4 we give the thermalization depth as a function of neutrino energy for the two hydrodynamically self-consistent accretion-phase models. From top to bottom the panels show the results for $\nu_\mu$, $\nu_e$, and $\bar\nu_e$, respectively. The step-like curves represent the temperature profiles in terms of the mean neutrino energy, $\langle\epsilon\rangle\approx3.15\,T$, for non-degenerate neutrinos at the local medium temperature; the steps correspond to the radial zones of our Monte Carlo simulation. The other curves represent $R_{\rm therm}(\epsilon)$ for bremsstrahlung (b), $e^+e^-$ annihilation (p), $\nu_e\bar\nu_e$ annihilation (n), and scattering on $e^\pm$ (s). In the case of $\nu_e$ and $\bar\nu_e$ we do not include bremsstrahlung and $\nu_e\bar\nu_e$ annihilation. Particle creation is dominated by the charged current reactions on nucleons (urca).

For the power-law models we show $R_{\rm therm}$ for $\nu_\mu$ in Figs. 5 and 6. The different panels correspond to the indicated values of the electron fraction $Y_e$. Note that $Y_e$ represents the net electron density per baryon, i.e. the $e^-$ density minus that of $e^+$, so that $Y_e=0$ implies that there is an equal thermal population of $e^-$ and $e^+$.

The absorption rate for the bremsstrahlung process varies approximately as $\epsilon^{-1/2}$, the transport cross section as $\epsilon^2$, so that the inverse mfp for thermalization varies only as $\epsilon^{3/4}$. This explains why $R_{\rm therm}$ for bremsstrahlung is indeed quite independent of $\epsilon$. Therefore, bremsstrahlung alone allows one to specify a rather well-defined energy sphere. The other processes depend much more sensitively on $\epsilon$ so that a mean energy sphere is much less well defined. Both electron scattering and the leptonic pair processes are so ineffective at low energies that true local thermodynamic equilibrium (LTE) can not be established even for astonishingly deep locations. Bremsstrahlung easily “plugs” this low-energy hole so that one can indeed expect LTE for all relevant neutrino energies below a certain radius. For higher energies, the leptonic processes dominate and shift the energy sphere to larger radii than bremsstrahlung alone. The relative importance of the various processes depends on the density and temperature profiles as well as $Y_e$.

To assess the role of the various processes for the overall spectra formation one needs to specify some typical neutrino energy. One possibility would be $\langle\epsilon\rangle$ for neutrinos in LTE. Another possibility is the mean energy of the neutrino flux, in particular the mean energy of those neutrinos which actually leave the star.
For our power-law atmospheres this is always around 20–25 MeV. Therefore, the process with the largest $R_{\rm therm}$ in this energy band is the one most relevant for determining the emerging neutrino spectrum. It appears that at least for steep profiles $e^+e^-$ pair annihilation is never crucial once bremsstrahlung is included, i.e. we would guess that including pair annihilation will not affect the emerging neutrino spectra. The relevance of electron scattering is far more difficult to guess. On the one hand it surely is more important than recoil in nucleon scatterings for some of the relevant energies, on the other hand we are not able to define an energy sphere for nucleon recoils because this process is different from the others in that neutrinos transfer only a small fraction of their energy per scattering. Therefore, it is not straightforward to assess the relevance of electron scattering compared with nucleon recoils on the basis of the various thermalization spheres alone.

## 3 Monte Carlo Study of Muon Neutrino Transport

### 3.1 Spectral Characteristics

In order to characterize the neutrino spectra and fluxes emerging from a neutron star we need to introduce some simple and intuitive parameters. One is the mean energy

$$\langle\epsilon\rangle=\frac{\int_0^\infty d\epsilon\,\epsilon\int_{-1}^{+1}d\mu\,f(\epsilon,\mu)}{\int_0^\infty d\epsilon\int_{-1}^{+1}d\mu\,f(\epsilon,\mu)}\,, \qquad (4)$$

where $f(\epsilon,\mu)$ is the neutrino distribution function with $\epsilon$ the energy and $\mu$ the cosine of the angle between the neutrino momentum and the radial direction. If the neutrinos are in LTE without a chemical potential one has

$$f(\epsilon,\mu)=\frac{\epsilon^2}{1+\exp(\epsilon/T)} \qquad (5)$$

and therefore

$$\langle\epsilon\rangle=\frac{7\pi^4}{180\,\zeta_3}\,T\approx3.1514\,T\,. \qquad (6)$$

One can define an effective neutrino temperature for non-equilibrium distributions by inverting this relationship.

It is often useful to extract spectral characteristics for those neutrinos which are actually flowing by removing the isotropic part of the distribution. Specifically, we define the average flux energy by

$$\langle\epsilon\rangle_{\rm flux}=\frac{\int_0^\infty d\epsilon\,\epsilon\int_{-1}^{+1}d\mu\,\mu\,f(\epsilon,\mu)}{\int_0^\infty d\epsilon\int_{-1}^{+1}d\mu\,\mu\,f(\epsilon,\mu)}\,. \qquad (7)$$

Far away from the star all neutrinos will flow essentially in the radial direction, implying that the angular distribution becomes a delta-function in the forward direction so that $\langle\epsilon\rangle_{\rm flux}=\langle\epsilon\rangle$. However, in the trapping regions the two averages are very different because the distribution function is dominated by its isotropic term.

To characterize the spectrum beyond the mean energy one can consider a series of moments $\langle\epsilon^k\rangle$ (Janka & Hillebrandt 1989a); we usually limit ourselves to $k=1$ and 2. Note that a Fermi-Dirac distribution at zero chemical potential yields

$$a\equiv\frac{\langle\epsilon^2\rangle}{\langle\epsilon\rangle^2}=\frac{486000\,\zeta_3\zeta_5}{49\,\pi^8}\approx1.3029\,. \qquad (8)$$

For a Maxwell-Boltzmann distribution this quantity would be 4/3. Following Raffelt (2001) we further define the “pinching parameter”

$$p\equiv\frac{1}{a}\,\frac{\langle\epsilon^2\rangle}{\langle\epsilon\rangle^2}\,, \qquad (9)$$

where $p=1$ signifies that the spectrum is thermal up to its second moment, while $p<1$ signifies a pinched spectrum (high-energy tail suppressed), $p>1$ an anti-pinched spectrum (high-energy tail enhanced). An analogous definition applies to the pinching parameter of the flux spectrum by replacing the moments $\langle\epsilon^k\rangle$ with their flux counterparts $\langle\epsilon^k\rangle_{\rm flux}$.

In some publications the root-mean-square energy is given instead of the average energy. The definition corresponding to Eq. (4) is

$$\langle\epsilon\rangle_{\rm rms}=\sqrt{\frac{\int_0^\infty d\epsilon\int_{-1}^{+1}d\mu\,\epsilon^3 f(\epsilon,\mu)}{\int_0^\infty d\epsilon\int_{-1}^{+1}d\mu\,\epsilon\,f(\epsilon,\mu)}}=\sqrt{\frac{\langle\epsilon^3\rangle}{\langle\epsilon\rangle}}\,. \qquad (10)$$

This characteristic spectral energy is useful for estimating the energy transfer from neutrinos to the stellar medium in reactions with cross sections proportional to $\epsilon^2$. For thermal neutrinos with vanishing chemical potential we find

$$\langle\epsilon\rangle_{\rm rms}=\sqrt{\frac{930}{441}}\,\pi\,T\approx4.5622\,T\,. \qquad (11)$$

With Eq. (6) this corresponds to $\langle\epsilon\rangle_{\rm rms}\approx1.45\,\langle\epsilon\rangle$.
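The numerical coefficients quoted in Eqs. (6), (8), and (11) are easy to check directly. A short verification sketch (our own check, not code from the paper), assuming nothing beyond the zero-chemical-potential spectrum of Eq. (5):

```python
import numpy as np

# Numerical check of the closed forms in Eqs. (6), (8), and (11) for a
# Fermi-Dirac spectrum with zero chemical potential, f ~ eps^2/(1+exp(eps/T)).
# We set T = 1, so all energies are in units of T.
eps = np.linspace(1e-6, 60.0, 200_000)
f = eps**2 / (1.0 + np.exp(eps))

def moment(k):
    """<eps^k> with f(eps) as weight, in the sense of Eq. (4)."""
    return np.trapz(eps**k * f, eps) / np.trapz(f, eps)

mean = moment(1)                      # Eq. (6):  7 pi^4/(180 zeta_3)        = 3.1514
a = moment(2) / mean**2               # Eq. (8):  486000 zeta_3 zeta_5/(49 pi^8) = 1.3029
rms = np.sqrt(moment(3) / moment(1))  # Eqs. (10)-(11): sqrt(930/441) pi     = 4.5622
print(f"<eps>/T = {mean:.4f}, a = {a:.4f}, <eps>_rms/T = {rms:.4f}")
```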
Beyond the energy moments and related parameters it is often useful to approximate the neutrino spectrum by a simple analytic fit. If one uses two parameters beyond the overall normalization one can adjust the fit to reproduce two moments, for example $\langle\epsilon\rangle$ and $\langle\epsilon^2\rangle$. In the literature one frequently encounters an approximation in terms of a nominal Fermi-Dirac distribution characterized by a temperature $T$ and a degeneracy parameter $\eta$ according to

$$f_\eta(\epsilon)=\frac{\epsilon^2}{1+\exp(\epsilon/T-\eta)} \qquad (12)$$

(Janka & Hillebrandt 1989a). In Fig. 7 we show $\langle\epsilon/T\rangle$ and $p$ as a function of $\eta$. Up to second order, expansions are

$$\langle\epsilon/T\rangle \approx 3.1514+0.1250\,\eta+0.0429\,\eta^2\,,\qquad p \approx 1-0.0174\,\eta-0.0046\,\eta^2\,. \qquad (13)$$

These expansions are shown in Fig. 7 as dashed lines.

Using a nominal Fermi-Dirac fit to approximate the spectrum is physically motivated because a truly thermal neutrino flux would follow this behavior. On the other hand, the neutrino flux emitted from a SN core is not very close to being thermal so that the limiting behavior of the fit function is not a strong argument. Therefore, we consider an alternative fit function for which analytic simplicity is the main motivation,

$$f_\alpha(\epsilon)=\left(\frac{\epsilon}{\bar\epsilon}\right)^{\alpha}e^{-(\alpha+1)\,\epsilon/\bar\epsilon}\,. \qquad (14)$$

For any value of $\alpha$ we have $\langle\epsilon\rangle=\bar\epsilon$ while

$$\frac{\langle\epsilon^2\rangle}{\langle\epsilon\rangle^2}=\frac{2+\alpha}{1+\alpha}\,. \qquad (15)$$

Put another way, $\bar\epsilon$ is the average energy while $\alpha$ represents the amount of spectral pinching. For general moments the analogous relation is

$$\frac{\langle\epsilon^k\rangle}{\langle\epsilon^{k-1}\rangle}=\frac{k+\alpha}{1+\alpha}\,\bar\epsilon\,. \qquad (16)$$

In the upper panel of Fig. 8 we show $f_\alpha(\epsilon)$, the integral normalized to unity, for several values of $\alpha$. The broadest curve is for the smallest $\alpha$ while for the narrower ones the width

$$w=\sqrt{\langle\epsilon^2\rangle-\langle\epsilon\rangle^2} \qquad (17)$$

was decreased in 10% decrements as shown in Table 2. The middle panel of Fig. 8 shows the corresponding curves $f_\eta(\epsilon)$ with the $\eta$-values given in Table 2. The broadest curves in each panel are identical and correspond to a Maxwell-Boltzmann spectrum with a width $w=\bar\epsilon/\sqrt{3}$. The limiting behavior of $f_\alpha$ for large $\alpha$ is a vanishing width, $w\to0$, while for $f_\eta$ the limiting width is $w=\bar\epsilon/\sqrt{15}\approx0.26\,\bar\epsilon$. Evidently the curves $f_\alpha$ can accommodate a much broader range of widths than the curves $f_\eta$. We will find that the neutrino spectra are always fit with parameters in this range. On the basis of a few high-statistics Monte-Carlo runs we will show in Sec. 3.5 that the numerical spectra are actually better approximated over a broader range of energies by the “power-law” fit functions $f_\alpha$. In addition, these functions are more flexible at representing the high-energy tail of the spectrum that is most relevant for studying the Earth effect in neutrino oscillations.

### 3.2 Monte Carlo Set Up

We have run our Monte Carlo code, that is described in Appendix A, for the stellar background models introduced in Sec. 2.2 and for different combinations of energy-changing neutrino reactions. Our main interest is to assess the impact of the scattering atmosphere on the flux and spectrum formation. Therefore, it is sufficient to simulate the neutrino transport above some radius where we have to specify a boundary condition. We always use a blackbody boundary condition at the bottom of the atmosphere, i.e. we assume neutrinos to be in LTE at the local temperature and the appropriate chemical potential; for $\nu_\mu$ and $\bar\nu_\mu$ the latter is taken to vanish. As a consequence of this boundary condition, the luminosity emerging at the surface is generated within the computational domain and calculated by our Monte Carlo transport. A small flux across the inner boundary develops because of the negative gradients of temperature and density in the atmosphere, but its magnitude depends on the radial resolution of the neutron-star atmosphere and will not in general correspond to the physical diffusive flux.
But as long as the flux is small compared to the luminosity at the surface, the emerging neutrino spectra will not depend on the lower boundary condition. Usually it is sufficient to place the inner grid radius deeper in the star than the thermalization depth of the dominant pair process. The shallow energy dependence of the thermalization depth of the nucleon bremsstrahlung implies that whenever we include this process it is not difficult to choose a reasonable location for the lower boundary. Taking the latter too deep in the star is very CPU-expensive as one spends most of the simulation for calculating frequent scatterings of neutrinos that are essentially in LTE. We always include scattering as the main opacity source. For energy exchange, we switch on or off bremsstrahlung (b), nucleon recoil (r), scattering on electrons (s), pair annihilation (p), and annihilation (n). We never include inelastic nucleon scattering as this process is never important relative to recoil (Raffelt 2001). Likewise, we ignore scattering on and which is always unimportant if is included (Buras et al. 2002). We also neglect or scattering even though such processes may have a larger rate than some of the included leptonic processes. Processes of this type do not exchange energy between the neutrinos and the background medium. They are therefore not expected to affect the emerging fluxes and should also have a minor effect on the emitted spectra. ### 3.3 Importance of Different Processes #### 3.3.1 Accretion-Phase Model I Our first goal is to assess the relative importance of different energy-exchange processes for the transport. As a first example we begin with our Accretion-Phase Model I. The results from our numerical runs are summarized in Table 3 where for each run we give , our fit parameter determined by Eq. (15), and the pinching parameter for the emerging flux spectrum, the temperature and degeneracy parameter of an effective Fermi-Dirac spectrum producing the same first two energy moments, and the luminosity. The first row contains the muon neutrino flux characteristics of the original Boltzmann transport calculation by Messer. To make a connection to these results we ran our code with the same input physics, i.e.  scattering (s) and annihilation (p). There remain small differences between the original spectral characteristics and ours. These can be caused by differences in the implementation of the neutrino processes, by the limited number of energy and angular bins in the Boltzmann solver, the coarser resolution of the radial grid in our Monte Carlo runs, and by our simple blackbody lower boundary condition. We interpret the first two rows of Table 3 as agreeing sufficiently well with each other that a detailed understanding of the differences is not warranted. Henceforth we will only discuss differential effects within our own implementation. In the next row (bsp) we include nucleon bremsstrahlung which has the effect of increasing the luminosity by a sizable amount without affecting much the spectral shape. This suggests that bremsstrahlung is important as a source for pairs, but that the spectrum is then shaped by the energy-exchange in scattering with . In the next row we switch off scattering (bp) so that no energy is exchanged except by pair-producing processes. The spectral energy indeed increases significantly. However, the biggest energy-exchange effect in the scattering regime is nucleon recoil. 
In the next two rows we include recoil (brp) and then additionally scattering (brsp), both lowering the spectral energies and also the luminosities. The picture of all relevant processes is completed by adding pair annihilation (brspn), which is similar to pair annihilation, but a factor of 2–3 more important (Buras et al. 2002). The luminosity is again increased, an effect which is understood in terms of our blackbody picture for the number and energy spheres. In the lower panel of Fig. 3 we see that moves to larger radii once “n” is switched on, the radiating surface of the “blackbody” increases and more pairs are emitted. For both “p” and “n” is strongly energy dependent and therefore it is impossible to define a sharp thermalization radius. Switching off “r” again (bspn) shows that also with “n” included, “r” really dominates the mean energy and shaping of the spectrum. To study the relative importance of the different pair processes, we switch off the leptonic ones (row “brs”) and compare this to only the leptonic processes (row “rspn”). In this stellar model both types contribute significantly. Comparing then “brsp” with “brsn” shows that among the leptonic processes “n” is clearly more important than “p”. The last row “brsnpn” includes in addition to all other processes scattering on and . It was already shown by Buras et al. (2002) that this process is about half as important as scattering on and its influence on the neutrino flux and spectra is negligible. We show this case for completeness but do not include scattering on and for any of our further models. In order to illustrate some of the cases of Table 3 we show in the upper panel of Fig. 9 several flux spectra from high-statistics Monte-Carlo runs. Starting again with the input physics of the original hydrodynamic simulation (sp) we add bremsstrahlung (b), recoil (r), and finally pair annihilation (n). Each of these processes has a significant and clearly visible influence on the curves. The pair-creation processes (“b” and “n”) hardly change the spectral shape but increase the number flux, whereas recoil (r) strongly modifies the spectral shape. In the lower panel of Fig. 9 we show the same curves, normalized to equal particle fluxes. In this representation it is particularly obvious that the pair processes do not affect the spectral shape. The very different impact of pair processes and nucleon recoils has a simple explanation. The thermalization depth for the pair processes is deeper than that of the energy-exchanging reactions, i.e. the “number sphere” is below the “energy sphere.” Therefore, the particle flux is fixed more deeply in the star while the spectra are still modified by energy-exchanging reactions in the scattering atmosphere. #### 3.3.2 Steep Power Law As another example we study the steep power-law model defined in Eq. (3) and Table 1. This model is supposed to represent the outer layers of a late-time proto-neutron star but without being hydrostatically self-consistent. It connects directly with Raffelt (2001), where the same profile was used in a plane parallel setup, studying bremsstrahlung and nucleon recoil. The results of our runs are displayed in Table 4 and agree very nicely with those obtained by Raffelt (2001), corresponding to our cases “b” and “br”. For investigating the importance of leptonic processes, we run our code with a variety of neutrino interactions and in addition assume a constant electron fraction throughout the whole stellar atmosphere. 
This assumption is somewhat artificial, but gives us the opportunity to study extreme cases in a controlled way. In the relevant region yields the highest possible electron density. In addition we study the electron fraction being one order of magnitude smaller, , and finally the extreme case with an equal number of electrons and positrons, . The first leptonic process we consider is pair annihilation. Comparing the rows “bp” with the row “b” shows a negligible effect on the spectrum, but a rise in luminosity. Increasing brings the luminosity almost back to the “b” case, because the electron degeneracy rises and the positron density decreases so that the pair process becomes less important. Adding scattering on forces the transported neutrinos to stay closer to the medium temperature, i.e. reduces their mean energy. Of course, the scattering rate increases with the number of electrons and positrons, i.e. for higher we get lower spectral energies. For the luminosity the situation is more complicated. Since the neutrino flux energies decrease when we switch on scattering we would expect a lower luminosity. However, the opacity of the medium to neutrinos is strongly energy dependent and low energy neutrinos can escape more easily than high-energy ones, increasing the number flux. On balance, the “bsp” luminosities are larger compared to the “bp” ones. To compare the scattering on with that on nucleons, we turn off “s” again and instead switch on recoil (r). Qualitatively, the energy exchange is very different from the earlier case. In the scattering on a neutrino can exchange a large amount of energy, while for scattering on nucleons the energy exchange is small. But since neutrino-nucleon scattering is the dominant source of opacity that keeps the neutrinos inside the star, the scatterings are very frequent. This leads to a stronger suppression in the high-energy tail of the neutrino spectrum and therefore to a visibly smaller mean flux energy and lower effective spectral temperature, but higher effective degeneracy. Many nucleon scatterings, however, are needed to downgrade the high-energy neutrinos (different from e scattering). Therefore neutrinos stay longer at high energies and experience a larger opacity and a larger amount of backscattering. This suppresses the neutrino flux significantly. In the runs including both scattering reactions (brsp), we find a mixture of the effects of e and nucleon scatterings and an enhanced reduction of the mean flux energy. Finally, adding the neutrino pair process yields almost no change in energy and pinching, but an increased luminosity as expected from the analogous case in Sec. 3.3.1. Although this profile is rather steep, leptonic pair processes are still important (Fig. 5). In order to estimate the sensitivity to the exact treatment of nucleon bremsstrahlung we have performed one run with the bremsstrahlung rate artificially enhanced by a factor of 3, and one where it was decreased by a factor 0.3. All other processes were included. The emerging fluxes and spectra indeed do not depend sensitively on the exact strength of bremsstrahlung as argued in Sec. 2.1. #### 3.3.3 Shallow Power Law For the shallow power law almost the same discussion as for the steep case applies. As we can already infer from Fig. 6, leptonic processes are more important. This leads to a much higher increase of the neutrino flux once “p” or “n” are included, and to stronger spectral pinching when ee annihilation is switched on. 
Scattering on $e^\pm$ downgrades the transported neutrino flux by a larger amount.

### 3.4 The Effect of Binning

Evidently nucleon recoil plays an important role for the spectrum formation. It is straightforward to implement this reaction in our Monte Carlo approach, but it may be more difficult in those treatments of neutrino transport that rely on binned energy spectra. The energy exchange in a given collision is relatively small (see e.g. Raffelt 2001) so that one may lose this effect if the spectrum is too crudely binned. To test the impact of binning we have performed two runs for the Accretion-Phase Model I where we fix the neutrino energies to a small number of values. After every interaction the final-state energy is set to the central value of the energy bin it falls into. We have chosen the 17 logarithmically spaced bins on the interval from 0 to around 380 MeV that are used in the simulations of the Garching SN group.

In Fig. 10 we compare the emergent flux spectra from runs with binning (histograms) and without (smooth curves). For both cases recoil was once included (brspn) and once not (bspn). The mean energies calculated from either binned or unbinned runs agree to better than 0.5%, and also the shapes are well reproduced, although there are slight differences around the spectral peak. We conclude that the impact of recoil is well accounted for in our runs with discrete energies. Therefore, the energy grid of the Garching group should well suffice to reproduce the main spectral impact of nucleon recoil.

### 3.5 Detailed Spectral Shape

Thus far we have characterized the neutrino spectra by a few simple parameters. However, it is extremely useful to have a simple analytic fit to the overall spectrum that can be used, for example, to simulate the response of a neutrino detector to a SN signal. To study the quality of different fit functions we have performed a few high-statistics Monte-Carlo runs for the Accretion-Phase Model I, including all interaction processes. Moreover, we have performed these runs for the flavors $\nu_e$, $\bar\nu_e$, and $\nu_\mu$. (A detailed discussion of the flavor dependence is deferred to Sec. 4.) In order to get smooth spectral curves we have averaged the output of 70,000 time steps. In addition we have refined the energy grid of the neutrino interaction rates. Both measures leave the previous results unaffected but increase computing time and demand for memory significantly.

In Fig. 11 we show our high-statistics Monte Carlo (MC) spectra together with the $\alpha$-fit function defined in Eq. (14) and the $\eta$-fit function of Eq. (12). The analytic functions can only fit the spectrum well over a certain range of energies. We have chosen to optimize the fit for the event spectrum in a detector, assuming the cross section scales with $\epsilon^2$. Therefore, we actually show the neutrino flux spectra multiplied with $\epsilon^2$. Accordingly, the parameters $\bar\epsilon$ and $\alpha$, as well as $T$ and $\eta$, and the normalizations are determined such that the relevant energy moments are reproduced by the fits. Below each spectrum we show the ratio of our MC results with the fit functions. In the energy range where the statistics in a detector would be reasonable for a galactic SN, say from 5–10 MeV up to around 40 MeV, both types of fits represent the MC results nicely. However, in all cases the $\alpha$-fit works somewhat better than the $\eta$-fit. We have repeated this exercise for the steep power-law models with different values of $Y_e$; the quality of the fits is comparable to the previous example.
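As a concrete companion to the fitting procedure of this section, here is a minimal sketch that pins the $\alpha$-fit of Eq. (14) to a numerical spectrum by matching its first two moments via Eq. (15); the fits described above additionally weight the spectrum with $\epsilon^2$ for the detector response, which this sketch omits, and the synthetic spectrum is an illustrative placeholder:

```python
import numpy as np

def alpha_fit(eps, spectrum):
    """Pin the fit of Eq. (14) to a numerical spectrum: <eps> fixes eps_bar,
    and Eq. (15), <eps^2>/<eps>^2 = (2+alpha)/(1+alpha), is inverted for alpha."""
    norm = np.trapz(spectrum, eps)
    m1 = np.trapz(eps * spectrum, eps) / norm
    m2 = np.trapz(eps**2 * spectrum, eps) / norm
    r = m2 / m1**2
    return m1, (2.0 - r) / (r - 1.0)   # (eps_bar, alpha)

# Round trip on a synthetic alpha-spectrum with eps_bar = 15 MeV, alpha = 3
# (illustrative values only):
eps = np.linspace(0.01, 150.0, 5_000)
spec = (eps / 15.0) ** 3 * np.exp(-4.0 * eps / 15.0)
print(alpha_fit(eps, spec))            # approximately (15.0, 3.0)
```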
### 3.6 Summary We find that the spectra are reasonably well described by the simple picture of a blackbody sphere determined by the thermalization depth of the nucleonic bremsstrahlung process, the “filter effect” of the scattering atmosphere, and energy transfers by nucleon recoils. This is also true for the flux in case of steep neutron-star atmospheres. For more shallow atmospheres pair annihilation (ee and ), however, yields a large contribution to the emitted flux and e scattering reduces the mean flux energy significantly. It is therefore important for state-of-the art transport calculations to include these leptonic processes. The traditional process is subdominant compared to as previously found by Buras et al. (2002). The relative importance of the various reactions depends on the stellar profile. Neutrinos emitted from a blackbody surface and filtered by a scattering atmosphere without recoils and leptonic processes have an anti-pinched spectrum (Raffelt 2001). However, after all energy-exchanging reactions have been included we find that the spectra are always pinched. When described by effective Fermi-Dirac distributions, the nominal degeneracy parameter is typically in the range 1–2, depending on the profile and electron concentration. ## 4 Comparing Different Flavors ### 4.1 Monte Carlo Study The new energy-exchange channels studied in the previous section lower the average energies. In order to compare the fluxes and spectra with those of and we perform a new series of runs where we include the full set of relevant microphysics for and also simulate the transport of and . The microphysics for the interactions of and is the same as in Janka & Hillebrandt (1989a,b), i.e. charged-current reactions of e with nucleons, iso-energetic scattering on nucleons, scattering on , and pair annihilation. In principle one should also include nucleon bremsstrahlung and the effect of nucleon recoils for the transport of and , but their effects will be minimal. Therefore, we preferred to leave the original working code unmodified for these flavors. In the first three rows of Table 6 we give the spectral characteristics for the Accretion-Phase Model I from the original simulation of Messer. The usual hierarchy of average neutrino energies is found, i.e. . The luminosities are essentially equal between and while , , , and each provide about half of the luminosity. Our Monte Carlo runs of this profile establish the same picture for the same input physics. Although our mean energies are slightly offset to lower values for all flavors relative to the original run, our energies relative to each other are and thus very similar. However, once we include all energy exchanging processes we find instead. Therefore, no longer exceeds by much. The luminosity of is about half that of or which are approximately equal, in rough agreement with the original results. Even though the additional processes lower the mean energy of they yield a more than 10% higher luminosity, mainly due to annihilation. As another example of an accreting proto-neutron star we use the Accretion-Phase Model II. The neutrino interactions included in this model were nucleon bremsstrahlung, scattering on , and annihilation. Nucleon correlations, effective mass, and recoil were taken into account, following Burrows & Sawyer (1998, 1999), as well as weak magnetism effects (Horowitz 2002) and quenching of at high densities (Carter & Prakash 2002). 
All these improvements to the traditional microphysics affect mainly and to some degree also . Weak magnetism terms decrease the nucleon scattering cross sections for more strongly than they modify scatterings. In this hydrodynamic calculation, however, and were treated identically by using the average of the corresponding reaction cross sections. The effects of weak magnetism on the transport of and are therefore not included to very high accuracy. Note, moreover, that the original data come from a general relativistic hydrodynamic simulation with the solution of the Boltzmann equation for neutrino transport calculated in the comoving frame of the stellar fluid. Therefore the neutrino results are affected by gravitational redshift and, depending on where they are measured, may also be blueshifted by Doppler effects due to the accretion flow to the nascent neutron star. Our Monte Carlo simulation in contrast was performed on a static background without general relativistic corrections. It includes bremsstrahlung, recoil, pair annihilation, scattering on , and annihilation, i.e. our microphysics is similar but not identical with that used in the original run. As an outer radius we took 100 km; all flux parameters are measured at this radius because farther out Doppler effects of the original model would make it difficult to compare the results. Keeping in mind that we use very different numerical approaches and somewhat different input physics, the agreement in particular for and is remarkably good. This agreement shows once more that our Monte Carlo approach likely captures at least the differential effects of the new microphysics in a satisfactory manner. In Fig. 12 we compare our calculations for the Accretion-Phase Model II with those of the original simulation. The step-like curve again represents the mean energy of neutrinos in LTE for zero chemical potential. The smooth solid line is the mean energy from our runs, the dotted (lower) line gives . The crosses are the corresponding results from the original runs. For the transport of our inner boundary is  km, while for and we use  km. For and the charged-current processes (urca) keep these neutrinos in LTE up to larger radii than pair processes in the case of . With our choice of the neutrinos are in LTE within the innermost radial zones. The results are similar to the Accretion-Phase Model I. The luminosities are not equipartitioned but instead follow roughly . The ratios of mean energies are in the original run and in our run. In summary, both accretion-phase models agree reasonably well in the ratio for all runs. Moreover, using traditional input physics one finds something like . Depending on the implementation of the new input physics and depending on the model one finds results between and . The higher ratio in Rampp’s simulation could be due to the inclusion of weak magnetism which tends to raise more than . In order to estimate the corresponding results for later stages of the proto-neutron star evolution we employ our steep power-law model. We vary the power of the temperature profile within a reasonable range so that –0.35, with and defined in Eq. (3). is fixed by demanding roughly equal number fluxes for and because a few seconds after bounce deleptonization should be essentially complete. The fluxes of these neutrinos depend very sensitively on so that this constraint is only reached to within about 30% without tuning to three decimal places. However, the mean energies are rather insensitive to the exact value of . 
This is illustrated by the steep power-law model, where we show results for two values of $Y_e$, one of them 0.20. The number fluxes of $\nu_e$ and $\bar\nu_e$ differ by less than 30% for one choice of $Y_e$, but differ by a factor of 3 for the other. At the same time, the average spectral energies barely change. The ratios of mean energies are not very different from those of the accretion-phase models. Of course, the absolute flux energies have no physical meaning because we adjusted the stellar profile in order to obtain realistic values. For the luminosities we find that $L_{\nu_\mu}$ now exceeds $L_{\bar\nu_e}$, different from the accretion phase. The steep power law implies that the radiating surfaces are similar for all flavors, so that it is not surprising that the flavor with the largest energies also produces the largest luminosity.

We find that $\langle\epsilon_{\bar\nu_\mu}\rangle$ always exceeds $\langle\epsilon_{\bar\nu_e}\rangle$ by a small amount, the exact value depending on the stellar model. During the accretion phase the energies seem to be almost identical, later they may differ by up to 20%. We have not found a model where the energies differ by the large amounts which are sometimes assumed in the literature. At late times, when $Y_e$ is small, the microphysics governing $\bar\nu_e$ transport is closer to that for $\bar\nu_\mu$ than at early times. Therefore, one expects that at late times the behavior of $\bar\nu_e$ is more similar to $\bar\nu_\mu$ than at early times. We do not see any argument for expecting an extreme hierarchy of energies at late times for self-consistent stellar models.

We never find exact equipartition of the flavor-dependent luminosities. Depending on the stellar profile the fluxes can mutually differ by up to a factor of 2 in either direction.

### 4.2 Weak Magnetism

Weak magnetism causes a significant correction to the neutrino-nucleon cross section that arises due to the large anomalous magnetic moments of protons and neutrons (Vogel & Beacom 1999, Horowitz 2002). It increases the neutrino interaction rate but lowers the rate for anti-neutrinos. It is expected to be a small correction in the SN context, but had never been implemented so far. Following Horowitz (2002) we add weak magnetism to our nucleon-recoil rate as given in Appendix B.

Our Monte Carlo code transports only one species of neutrinos at a time. In order to test the impact of weak magnetism we assumed that a chemical potential for $\nu_\mu$ would build up, and assumed a fixed value for the degeneracy parameter throughout our stellar model. We then iterated several runs for $\nu_\mu$ and $\bar\nu_\mu$ with different degeneracy parameters until their particle fluxes were equal, because in a stationary state there will be no net flux of $\mu$-lepton number. We performed this procedure for our Accretion-Phase Model I and our steepest power law; the results are summarized in Table 6. In both cases the mean energies of $\nu_\mu$ go down by a small amount and those of $\bar\nu_\mu$ go up by a similar amount. The mean luminosities are unaffected. We conclude that weak-magnetism corrections are small. Transporting $\nu_\mu$ and $\bar\nu_\mu$ separately in a self-consistent hydrodynamic simulation is probably not worth the cost in computer time.

### 4.3 Previous Literature

There is a large recent body of literature quoted in our introduction where the effect of flavor oscillations on SN neutrino spectra and fluxes is studied. Many of these papers assumed that $\langle\epsilon_{\bar\nu_\mu}\rangle$ is much larger than $\langle\epsilon_{\bar\nu_e}\rangle$ and that the luminosities of all flavors were exactly equipartitioned. Our findings here are almost orthogonal to this perception. Where does it come from? To the best of our knowledge, the microphysics employed for $\nu_\mu$ transport is roughly the same in all published simulations. It includes iso-energetic scattering on nucleons, $e^+e^-$ annihilation and $\nu e$ scattering.
Of course, the transport method and the numerical implementation of the neutrino processes differ in the codes of different groups. The new reactions and nucleon recoil lower $\langle\epsilon_{\nu_\mu}\rangle$ and modify the luminosities, but not by such a large amount as to explain a completely different paradigm. Therefore, we have inspected the previous literature and collect a representative sample of pertinent results in Table 7. Note that the simulations discussed below did not in all cases use the same stellar models and equations of state for the dense matter in the supernova core.

We begin with the simulations of the Livermore group who find robust explosions by virtue of the neutron-finger convection phenomenon. Neutrino transport is treated in the hydrodynamic models with a multigroup flux-limited diffusion scheme. Mayle, Wilson, & Schramm (1987) gave detailed results for their SN simulation of a massive star. For half a second after bounce they obtained a somewhat oscillatory behavior of the neutrino luminosities; after the prompt peak of the electron neutrino luminosity, and after about one second, the values stabilize. This calculation did not produce the “standard” hierarchy of energies. However, there is clearly a tendency that $\bar\nu_e$ and $\nu_\mu$ behave more similarly to each other at late times.

The most recent published Livermore simulation is of a massive star (Totani et al. 1998). It shows an astonishing degree of luminosity equipartition from the accretion phase throughout the early Kelvin-Helmholtz cooling phase. About two seconds after bounce the $\nu_\mu$ flux falls off more slowly than that of the other flavors. In Table 7 we show representative results for an early and a late time. The mean energies and their ratios are consistent with what we would have expected on the basis of our study.

With a different numerical code, Bruenn (1987) found, for a particular progenitor, qualitatively different results for luminosities and energies. At about 0.5 s after bounce the luminosities and energies became stable at the values given in Table 7. This simulation is an example of an extreme hierarchy of mean energies.

In Burrows (1988) all luminosities are said to be equal. In addition it is stated that the mean energy of $\bar\nu_e$ stays fixed for the first 5 seconds, with a fixed relation to the other flavors. Detailed results are only given for $\bar\nu_e$, so we are not able to add this reference to our table. The large variety of models investigated by Burrows (1988) and the detailed results for $\bar\nu_e$ go beyond the scope of our brief description. In a later paper Myra & Burrows (1990) studied a progenitor model and found the extreme hierarchy of energies shown in our table.

With the original version of our code Janka & Hillebrandt (1989b) performed their analyses for a progenitor from a core-collapse calculation by Hillebrandt (1987). Of course, like our present study, these were Monte Carlo simulations on a fixed background model, not self-consistent simulations. Taking into account the different microphysics, the mean energies are consistent with our present work. The mean energies of $\nu_e$ were somewhat on the low side relative to $\bar\nu_e$, and the $\bar\nu_e$ luminosity was overestimated. Both can be understood by the fact that the stellar background contained an overly large abundance of neutrons, because the model resulted from a post-bounce calculation which only included electron neutrino transport.

Suzuki (1990) studied models with initial temperature and density profiles typical of proto-neutron stars at the beginning of the Kelvin-Helmholtz cooling phase about half a second after bounce.
He used the relatively stiff nuclear equation of state developed by Hillebrandt & Wolff (1985). In our table we show the results of the model C12. From Suzuki (1991) we took the model labeled C20 which includes bremsstrahlung. The model C48 from Suzuki (1993) includes multiple-scattering suppression of bremsstrahlung. Suzuki’s models are the only ones from the previous literature which go beyond the traditional microphysics for $\nu_\mu$ transport. It is reassuring that his ratios of mean energies come closest to the ones we find.

Over the past few years, first results from Boltzmann solvers coupled with hydrodynamic simulations have become available, for example the unpublished ones that we used as our Accretion-Phase Models I and II. For convenience we include them in Table 7. Further, we include a very recent accretion phase model of the Garching group (Buras et al., personal communication) that includes the full set of microphysical input. Finally, we include two simulations similar to the Accretion-Phase Model I, one by Mezzacappa et al. (2001) and the other by Liebendörfer et al. (2001). These latter papers show rms energies instead of mean energies. Recalling that the former tend to be about 15% larger than the latter, these results are entirely consistent with our Accretion-Phase Models. Moreover, the ratios of rms energies tend to exaggerate the spread between the flavor-dependent mean energies because of different amounts of spectral pinching, i.e. different effective degeneracy parameters. To illustrate this point we take the first two rows from Table 6 as an example. For Fermi-Dirac spectra with the temperatures and degeneracy parameters of those two rows, the ratio of mean energies is smaller than the ratio of rms energies, which equals 1.30.

To summarize, the frequently assumed exact equipartition of the emitted energy among all flavors appears only in some simulations of the Livermore group. We note that the flavor-dependent luminosities tend to be quite sensitive to the detailed atmospheric structure and chemical composition. On the other hand, the often-assumed extreme hierarchy of mean energies was only found in the early simulations of Bruenn (1987) and of Myra & Burrows (1990), possibly a consequence of the neutron-star equation of state used in these calculations. If we ignore results which appear to be “outliers”, the picture emerging from Table 7 is quite consistent with our own findings. For the luminosities, typically $L_{\nu_e}\approx L_{\bar\nu_e}$, with a factor of 2–3 between this and $L_{\nu_\mu}$ in either direction, depending on the evolutionary phase. For the mean energies we read typical ratios $\langle\epsilon_{\bar\nu_\mu}\rangle/\langle\epsilon_{\bar\nu_e}\rangle$ of up to about 1.3. The more recent simulations involving Boltzmann solvers show a consistent behavior and will in the future provide reliable information about neutrino fluxes and spectra.

## 5 Discussion and Summary

We have studied the formation of neutrino spectra and fluxes in a SN core. Using a Monte Carlo code for neutrino transport, we varied the microscopic input physics as well as the underlying static proto-neutron star atmosphere. We used two background models from self-consistent hydrodynamic simulations, and several power-law models with varying power-law indices for the density and temperature and different values for the electron fraction $Y_e$, taken to be constant.

The transport opacity is dominated by neutral-current scattering on nucleons. In addition, there are number-changing processes (nucleon bremsstrahlung, leptonic pair annihilation) and energy-changing processes (nucleon recoil, $\nu e$ scattering).
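As an aside to the rms-versus-mean comparison in Sect. 4.3: the relation between mean and rms energies of effective Fermi-Dirac spectra can be checked with a short numerical sketch. This is not the authors' code; the parametrization $f(\epsilon)\propto\epsilon^2/(e^{\epsilon/T-\eta}+1)$ is the standard one used in this literature, but the temperature and degeneracy values below are our own placeholder choices.

```python
import numpy as np

def fd_moments(T, eta, n=200_000, xmax=60.0):
    """Mean and rms energy of a nominal Fermi-Dirac spectrum
    f(eps) ~ eps^2 / (exp(eps/T - eta) + 1); T in MeV."""
    eps = np.linspace(1e-6, xmax * T, n)
    d = eps[1] - eps[0]
    f = eps**2 / (np.exp(eps / T - eta) + 1.0)
    norm = f.sum() * d
    mean = (eps * f).sum() * d / norm
    rms = np.sqrt((eps**2 * f).sum() * d / norm)
    return mean, rms

for eta in (0.0, 1.0, 2.0):
    mean, rms = fd_moments(T=4.0, eta=eta)
    print(f"eta={eta}: <eps>={mean:5.2f} MeV  rms={rms:5.2f} MeV  "
          f"rms/<eps>={rms/mean:.3f}")
```

For degeneracy parameters between 0 and 2 the rms energy comes out roughly 10–15% above the mean, consistent with the rule of thumb quoted above, and the ratio shrinks as the spectrum becomes more pinched.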
The spectra and fluxes are roughly accounted for if one includes one significant channel of pair production and one for energy exchange in addition to scattering. For example, the traditional set of microphysics (iso-energetic scattering on nucleons, $e^+e^-$ annihilation, and $\nu e$ scattering) yields comparable spectra and fluxes to a calculation where pairs are produced by nucleon bremsstrahlung and energy is exchanged by nucleon recoil. The overall result is quite robust against the detailed choice of microphysics. However, in state-of-the-art simulations where one aims at a precision better than some 10–20% for the fluxes and spectral energies, one needs to include bremsstrahlung, leptonic pair annihilation, neutrino-electron scattering, and energy transfer in neutrino-nucleon collisions. Interestingly, the traditional $e^+e^-$ annihilation process is always much less important than $\nu_e\bar\nu_e$ annihilation, a point that we previously raised with our collaborators (Buras et al. 2002). None of the reactions studied here can be neglected, except perhaps the traditional $e^+e^-$ annihilation process and scattering on $\nu_e$ and $\bar\nu_e$.

The existing treatments of the nuclear-physics aspects of the bremsstrahlung process are rather schematic. We find, however, that the fluxes and spectra do not depend sensitively on the exact strength of the bremsstrahlung rate. Therefore, while a more adequate treatment of bremsstrahlung remains desirable, the final results are unlikely to be much affected.

The transport of $\nu_\mu$ and $\bar\nu_\mu$ is usually treated identically. However, weak-magnetism effects render the $\nu_\mu$ and $\bar\nu_\mu$ scattering cross sections somewhat different (Horowitz 2002), causing a small chemical potential to build up. We find that the differences between the average energies of $\nu_\mu$ and $\bar\nu_\mu$ are only a few percent and can thus be neglected for most purposes.

Including all processes works in the direction of making the fluxes and spectra of $\nu_\mu$ more similar to those of $\bar\nu_e$ compared to a calculation with the traditional set of input physics. During the accretion phase the neutron-star atmosphere is relatively expanded, i.e. the density and temperature gradients are relatively shallow. Our investigation suggests that during this phase $\langle\epsilon_{\bar\nu_\mu}\rangle$ is only slightly larger than $\langle\epsilon_{\bar\nu_e}\rangle$, perhaps by a few percent or 10% at most. This result agrees with the first hydrodynamic simulation including all of the relevant microphysics except $\nu_e\bar\nu_e$ annihilation (Accretion-Phase Model II) provided to us by M. Rampp. For the luminosities of the different neutrino species one finds $L_{\nu_e}\approx L_{\bar\nu_e}\approx 2\,L_{\nu_\mu}$. The smallness of $L_{\nu_\mu}$ is not surprising because the effective radiating surface is much smaller than for $\bar\nu_e$.

During the Kelvin-Helmholtz cooling phase the neutron-star atmosphere will be more compact, the density and temperature gradients will be steeper. Therefore, the radiating surfaces for all species will become more similar. In this situation $L_{\nu_\mu}$ may well become larger than $L_{\bar\nu_e}$. However, the relative luminosities depend sensitively on the electron concentration. Therefore, without a self-consistent hydrostatic late-time model it is difficult to claim this luminosity cross-over with confidence.

The ratio of the spectral energies is most sensitive to the temperature gradient relative to the density gradient. In our power-law models we varied the temperature index relative to the density index. Varying this ratio between 0.25 and 0.35 we find that the spectral-energy ratio varies accordingly. Noting that the upper end of this range seems unrealistically large, we conclude that even at late times the spectral differences should be small; 20% sounds like a safe upper limit.
We are looking forward to this prediction being checked in a full-scale self-consistent neutron-star evolution model with a Boltzmann solver.

The statements in the previous literature fall into two classes. One group of workers, using the traditional set of microphysics, found spectral differences between $\bar\nu_e$ and $\bar\nu_\mu$ on the 25% level, a range which largely agrees with our findings in view of the different microphysics. Other papers claim far larger ratios. We have no explanation for these latter results. At least within the framework of our simple power-law models we do not understand which parameter could be reasonably adjusted to reach such extreme spectral differences.

In a high-statistics neutrino observation of a future galactic SN one may well be able to discover signatures for flavor oscillations. However, when studying these questions one has to allow for the possibility of very small spectral differences, and conversely, for the possibility of large flux differences. This situation is almost orthogonal to what often has been assumed in papers studying possible oscillation signatures. A realistic assessment of the potential of a future galactic SN to disentangle different neutrino mixing scenarios should allow for the possibility of very small spectral differences among the different flavors of anti-neutrinos. The spectral differences between $\nu_e$ and $\nu_\mu$ are always much larger, but a large SN neutrino (as opposed to anti-neutrino) detector does not exist.

The diffuse neutrino flux from all past SNe in the universe is difficult to detect, although Super-Kamiokande has recently established an upper limit that touches the upper end of theoretical predictions (Malek et al. 2002). If our findings are correct, neutrino oscillations will not much enhance the high-energy tail of the $\bar\nu_e$ spectrum and thus will not significantly enhance the event rate.

## Acknowledgments

We thank the Institute for Nuclear Theory (University of Washington, Seattle) for its hospitality during a visit when this work was begun. In Munich, this work was partly supported by the Deutsche Forschungsgemeinschaft under grant No. SFB 375 and by the ESF network Neutrino Astrophysics. We thank Bronson Messer and Markus Rampp for providing unpublished stellar profiles from self-consistent collapse simulations.

## A Monte Carlo Code

Our Monte Carlo code is based on that developed by Janka (1987), where a detailed description of the numerical aspects can be found. The code was first applied to calculations of neutrino transport in supernovae by Janka & Hillebrandt (1989a,b) and Janka (1991). It uses Monte Carlo methods to follow the individual destinies of sample neutrinos (particle “packages” with suitably attributed weights to represent a number of real neutrinos) on their way through the star from the moment of creation or inflow to their absorption or escape through the inner or outer boundaries. The considered stellar background is assumed to be spherically symmetric and static, and the sample neutrinos are characterized by their weight factors and by continuous values of energy, radial position and direction of motion, represented by the cosine of the angle relative to the radial direction. The rates of neutrino interactions with particles of the stellar medium can be evaluated by taking into account fermion blocking effects according to the local phase-space distributions of neutrinos (Janka & Hillebrandt 1989b). As background stellar models we use the ones described in Sec. 2.2.
They are defined by radial profiles of the density $\rho$, temperature $T$, and electron fraction $Y_e$, i.e. the number of electrons per baryon. The calculations span the range between an inner radius and an outer radius. These bound the computational domain, which is divided into 30 equally spaced radial zones. In each zone $\rho$, $T$, and $Y_e$ are taken to be constant. The inner radius is chosen at such high density and temperature that the neutrinos are in LTE in at least the first radial zone. The outer radius is placed in a region where the neutrinos essentially stream freely. At the inner boundary neutrinos are injected isotropically according to LTE. While a small net flux across the inner boundary develops, the neutrinos emerging from the star are generated almost exclusively within our computational domain. If the inner boundary is chosen so deep that the neutrinos are in LTE, the assumed boundary condition for the flux will therefore not affect the results.

The stellar medium is assumed to be in thermodynamic equilibrium with nuclei being completely disintegrated into free nucleons. Based on $\rho$, $T$, and $Y_e$ we calculate all the required thermodynamic quantities, notably the number densities, chemical potentials, and temperatures of protons, neutrons, electrons, positrons, and the relevant neutrinos. The chemical potentials for $\nu_\mu$ and $\bar\nu_\mu$ are taken to be zero. Next we compute the interaction rates in each radial zone for all included processes. In the simulations discussed in the present work, fermion phase-space blocking is calculated from the neutrino equilibrium distributions instead of the computed phase-space distributions. This simplification saves a lot of CPU time because otherwise the rates would have to be re-evaluated whenever the distribution of neutrinos has changed after a transport time step. The approximation is justified because phase-space blocking is most important in regions where neutrinos frequently interact and thus are close to LTE.

At the start of a Monte Carlo run, 800,000 test neutrinos are randomly distributed in the model according to the local equilibrium distributions. Each test neutrino represents a certain number of real neutrinos. In this initial setup the number of real neutrinos is determined by LTE. Then transport is started. The time step is fixed; recall that the interaction rates do not change. At the beginning of each step neutrino creation takes place. The number of test particles that can be created is given by the number of neutrinos that were lost through the inner and outer boundaries plus those absorbed by the medium. Based on the production rates and the fact that the inner boundary radiates neutrinos, we calculate the number of neutrinos that are produced in one time step and distribute them among the available test neutrinos by attributing suitable weight factors. The sample particles are created within the medium or injected at the inner boundary in appropriate proportions.

During a time step the path of each test particle through the stellar atmosphere is followed by Monte Carlo sampling (a toy version of this loop is sketched below). With random numbers we decide whether it flies freely or interacts. If it interacts it can scatter or it can be absorbed; in the latter case we turn to the next particle. For scattering we determine the new momentum and position and continue with the process until the time step is used up. Particles leaving through the lower or upper boundaries are eliminated from the transport. After a certain number of time steps (typically around 15,000) the neutrino distribution reaches a stationary state and further changes occur only due to statistical fluctuations.
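The original code (Janka 1987) is not reproduced here; the following is a deliberately simplified sketch, in Python, of the per-step sampling logic just described: exponential sampling of the distance to the next interaction, an absorption/scattering branch chosen by the local branching ratio, and isotropic re-emission, with the spherical $(r,\mu)$ geometry update. All names and the grey (energy-independent, spatially constant) rates are our own simplifications.

```python
import numpy as np

rng = np.random.default_rng(1)

def advance(r, mu, s):
    """Move a distance s along a straight ray in spherical symmetry."""
    r_new = np.sqrt(r * r + s * s + 2.0 * r * s * mu)
    mu_new = (r * mu + s) / r_new
    return r_new, mu_new

def step_particle(r, mu, rate_abs, rate_scat, path):
    """Follow one sample neutrino for a path length `path` (= c * dt).

    rate_abs, rate_scat: local absorption/scattering rates per unit path
    (held fixed here; the real code uses per-zone tabulated rates).
    Returns (r, mu, status) with status "alive" or "absorbed".
    """
    while path > 0.0:
        rate_tot = rate_abs + rate_scat
        s = rng.exponential(1.0 / rate_tot)   # distance to next interaction
        if s > path:                          # free flight ends the step
            r, mu = advance(r, mu, path)
            return r, mu, "alive"
        r, mu = advance(r, mu, s)
        path -= s
        if rng.random() < rate_abs / rate_tot:
            return r, mu, "absorbed"          # energy handed to the medium
        mu = 2.0 * rng.random() - 1.0         # isotropic scattering

print(step_particle(r=10.0, mu=0.5, rate_abs=0.02, rate_scat=0.2, path=5.0))
```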
Once this stationary state is reached, we start averaging the output quantities over the next 500 time steps.

## B Neutrino Processes

### B.1 Neutrino-Nucleon Scattering

The rates for the reactions $\nu N \to N \nu$ are calculated following Raffelt (2001). For a neutrino with initial energy $\epsilon_1$ and final energy $\epsilon_2$, the differential cross section is given by

$$\frac{d\sigma}{d\epsilon_2\, d\cos\theta} = \frac{C_A^2\,(3-\cos\theta)}{2\pi}\,\frac{G_F^2\,\epsilon_2^2\,S(\omega,k)}{2\pi} \tag{B1}$$

with $\omega=\epsilon_1-\epsilon_2$ the energy transfer, $k$ the modulus of the momentum transfer to the medium, and $\theta$ the scattering angle. We do not distinguish between protons and neutrons. Since for nonrelativistic nucleons the scattering cross section is proportional to $C_V^2+3C_A^2$, the vector current (
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9478503465652466, "perplexity": 1016.4676837537627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663019783.90/warc/CC-MAIN-20220528185151-20220528215151-00601.warc.gz"}
https://www.physicsforums.com/threads/friction-problem-involving-2-blocks-sliding-in-2-directions-with-2-frictions.270709/
# Friction problem involving 2 blocks sliding in 2 directions with 2 frictions

1. Nov 9, 2008

### 2FAST4U8

1. The problem statement, all variables and given/known data

A 1-kg block is pushed against a 4-kg block on a horizontal surface of coefficient of friction 0.25, as shown in the figure. Determine the minimum force needed to ensure that the 1-kg block does not slip down. Assume that the coefficient of friction at the interface between the blocks is 0.4. Hint: The two blocks exert equal and opposite forces on each other.

2. Relevant equations

$F=(A+B)a$
$Ag=f_s$
$N=Ba$
$Ag=\mu B a$

3. The attempt at a solution

In class we have done similar problems, only with a frictionless horizontal surface. I don't know how to account for the horizontal coefficient of friction of 0.25 in this problem. Using the above equations and ignoring the horizontal coefficient of friction, I get this:

$a=(Ag)/(\mu B)$
$F=((A+B)Ag)/(\mu B)$
$F=((4+1)(1)(9.8))/((0.4)(4))$
$F=30.62\ \mathrm{N}$

Now how do I account for the horizontal coefficient of friction of 0.25? Thanks in advance for any help you can provide.

2. Nov 9, 2008

### asleight

Well, examining block one, we notice that we want $$\sum\vec{F}=m\vec{a}=0=\vec{F}_f+\vec{F}_g\rightarrow\vec{F}_f=-\vec{F}_g$$. From this, we can examine the frictional force, specifically: $$\vec{F}_f=\mu_i\vec{N}\rightarrow m\vec{g}/\mu_i=\vec{N}$$. What next? :)
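Neither reply completes the calculation, so here is a numerical sketch of one common reading of the setup (the figure is not reproduced in the thread): the 1-kg block is held against the vertical face of the 4-kg block, its weight reaches the floor through the 4-kg block, and the pair slides with kinetic floor friction. The variable names and this interpretation are ours, not from the original figure.

```python
g = 9.8
m1, m2 = 1.0, 4.0                 # small block A, large block B (kg)
mu_iface = 0.40                   # friction at the block interface

def f_min(mu_floor):
    """Minimum push so interface friction can support the small block.

    System:  F - mu_floor*(m1+m2)*g = (m1+m2)*a   (kinetic floor friction)
    Block 1: F - N = m1*a                          (N = interface normal)
    No slip: mu_iface*N >= m1*g
    Eliminating a and N gives the closed form below.
    """
    return m1 * g * (1.0 / mu_iface - mu_floor) * (m1 + m2) / m2

print(f_min(0.0))    # 30.625 N -- reproduces the frictionless attempt above
print(f_min(0.25))   # 27.5625 N -- floor friction lowers the required force
```

Under this reading, floor friction reduces the common acceleration for a given push, which raises the interface normal force, so the minimum force is smaller than in the frictionless case.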
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9383964538574219, "perplexity": 423.5873213270927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982296721.55/warc/CC-MAIN-20160823195816-00234-ip-10-153-172-175.ec2.internal.warc.gz"}
http://eprints.pascal-network.org/archive/00002774/
Minimum Cost Homomorphisms to Semicomplete Bipartite Digraphs G Gutin, A Rafiey and A Yeo SIAM J. Discrete Mathematics 2006. ## Abstract For digraphs $D$ and $H$, a mapping $f:\ V(D)\to V(H)$ is a homomorphism of $D$ to $H$ if $uv\in A(D)$ implies $f(u)f(v)\in E(H).$ If, moreover, each vertex $u \in V(D)$ is associated with costs $c_i(u), i \in V(H)$, then the cost of the homomorphism $f$ is $\sum_{u\in V(D)}c_{f(u)}(u)$. For each fixed digraph $H$, we have the {\em minimum cost homomorphism problem for} $H$. The problem is to decide, for an input digraph $D$ with costs $c_i(u),$ $u \in V(D), i\in V(H)$, whether there exists a homomorphism of $D$ to $H$ and, if one exists, to find one of minimum cost. Minimum cost homomorphism problems encompass (or are related to) many well-studied optimization problems. We describe a dichotomy of the minimum cost homomorphism problem for semicomplete multipartite digraphs $H$. This solves an open problem from an earlier paper. To obtain the dichotomy of this paper, we introduce and study a new notion, a $k$-Min-Max ordering of digraphs.
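To make the definitions concrete, here is a tiny brute-force checker for the minimum cost homomorphism problem on small instances. It works for any target digraph $H$ (not just the semicomplete classes the paper studies), runs in exponential time, and is meant purely to illustrate the abstract's definitions; all names and the toy instance are ours.

```python
from itertools import product

def min_cost_hom(VD, AD, VH, EH, cost):
    """Brute-force minimum cost homomorphism from digraph D to digraph H.

    VD, VH: vertex lists; AD, EH: sets of arcs (u, v);
    cost[u][i]: cost of mapping vertex u of D to vertex i of H.
    Returns (best_cost, best_map) or None if no homomorphism exists.
    """
    best = None
    for images in product(VH, repeat=len(VD)):
        f = dict(zip(VD, images))
        if all((f[u], f[v]) in EH for (u, v) in AD):
            c = sum(cost[u][f[u]] for u in VD)
            if best is None or c < best[0]:
                best = (c, f)
    return best

# Toy example: map a directed path a -> b onto H with a single arc 0 -> 1.
VD, AD = ["a", "b"], {("a", "b")}
VH, EH = [0, 1], {(0, 1)}
cost = {"a": {0: 2, 1: 5}, "b": {0: 1, 1: 3}}
print(min_cost_hom(VD, AD, VH, EH, cost))  # (5, {'a': 0, 'b': 1})
```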
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9330093264579773, "perplexity": 435.1067009540838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345769117/warc/CC-MAIN-20131218054929-00096-ip-10-33-133-15.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/238575/law-of-sines-for-tetrahedra
# Law of sines for tetrahedra Wikipedia gives a generalization of the law of sines to higher dimensions, as defined in F. Eriksson, The law of sines for tetrahedra and n-simplices. However, this generalization misses an important point about the standard law of sines, which relates it to the radius of the circumcircle of the triangle. Is there a property which generalizes this relation of the 2-dimensional law of sines? In other words: is there a constant relation of this kind that all tetrahedra inscribed in a sphere of the same radius have in common?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9928292632102966, "perplexity": 166.39076657797804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144637.88/warc/CC-MAIN-20200220035657-20200220065657-00495.warc.gz"}
https://pokemongo.gamepress.gg/maximizing-candy-raid-bosses-pinap-or-golden-razz
# Maximizing Candy from Raid Bosses: Pinap or Golden Razz?

## Introduction

This guide provides a simple formula to determine the optimal strategy that maximizes the expected candies gained from catching a raid boss.

## Conclusion

- The optimal strategy is always to use Pinap Berries first and Golden Razz Berries (GRB) last.
- The number of balls left at which you switch to GRB is independent of the number of balls you receive.

Let

- $p_{1}$ be the catch rate when using GRB
- $c_{1}$ be the number of candies gained when using GRB
- $p_{2}$ be the catch rate when using Pinap
- $c_{2}$ be the number of candies gained when using Pinap

where $0 < p_{2} < p_{1} < 1$ and $0 < c_{1} < c_{2}$. Then:

- If $p_{1} c_{1} > p_{2} c_{2}$, then the optimal strategy is using GRB in the last $k$ balls, where $$k = \lceil \log_{1 - p_{1}} ( \frac{ c_{2} - c_{1} }{ c_{1} } \frac{ p_{2} }{ p_{1} - p_{2} } ) \rceil$$
- If $p_{1} c_{1} \leq p_{2} c_{2}$, then it is optimal to always use Pinap.

For the bosses with base catch rate of 2% (including the current Eon Duo), assuming you have golden badges, and that you'll transfer the boss if caught (and get one candy, during a non-candy event), the optimal strategy is:

[Table: optimal number of balls remaining at which to switch to GRB, by throw quality (Excellent/Great/Nice Curve) and weather (non-boosted/weather boosted).]

The result above uses the Grand Unified Catch Theory. From it, we calculate the catch rates on a single throw in all cases (GRB / Pinap):

|  | Excellent Curve | Great Curve | Nice Curve |
| --- | --- | --- | --- |
| Non-boosted | 14.668% / 6.154% | 12.166% / 5.054% | 9.837% / 4.054% |
| Weather Boosted | 13.207% / 5.514% | 11.167% / 4.626% | 8.832% / 3.628% |

## Derivation

In either case (using GRB or Pinap), denote the catch rate as $p$ and the number of candies gained as $c$. Let $X_{n}$ be the number of candies gained with $n$ balls left to throw. $X_{n}$ takes two possible values:

$X_{n} = \begin{cases} c & \quad \text{with the probability of } p \\ X_{n-1} & \quad \text{with the probability of } 1-p \end{cases}$

Therefore: $$E[X_{n}] = pc + (1-p)E[X_{n-1}]$$

At the $n$-th last ball, use GRB if and only if $$p_{1}c_{1} + (1-p_{1})E[X_{n-1}] > p_{2}c_{2} + (1-p_{2})E[X_{n-1}]$$

That is (with the assumption that $p_{1} > p_{2}$): $$E[X_{n-1}] < \frac { p_{1}c_{1} - p_{2}c_{2} } { p_{1} - p_{2} }$$

- If $p_{1} c_{1} \leq p_{2} c_{2}$, then it is never optimal to use GRB (hence use Pinap only) since $$E[X_{n}] \geq 0 \geq \frac { p_{1}c_{1} - p_{2}c_{2} } { p_{1} - p_{2} }$$
- If $p_{1} c_{1} > p_{2} c_{2}$, then the proof continues.

Note that $E[X_{n}]$ increases in $n$. Hence, if it is better to use GRB at the $n$-th last ball, then it must also be better to use GRB at the $(n-1)$-th last, at the $(n-2)$-th last, ..., and at the last ball, since: $$0 = E[X_{0}] < E[X_{1}] < \dots < E[X_{n-1}] < \frac { p_{1}c_{1} - p_{2}c_{2} } { p_{1} - p_{2} }$$

Likewise, if it is better to use Pinap at the $(n+1)$-th last ball, then it must also be better to use Pinap at the $(n+2)$-th last, ..., and at the first ball: $$\frac { p_{1}c_{1} - p_{2}c_{2} } { p_{1} - p_{2} } \leq E[X_{n}] < E[X_{n+1}] < \dots$$

Suppose using GRB in the last $k$ balls is the optimal strategy. That is, it is better to use GRB at the $k$-th last ball and to use Pinap at the $(k+1)$-th last ball.
Thus: $$E[X_{k-1}] < \frac { p_{1}c_{1} - p_{2}c_{2} } { p_{1} - p_{2} } \leq E[X_{k}]$$ Since GRB is always used from the $k$-th last ball to the very last ball, we have: $\begin{cases} E[X_{k-1}] = (1 - (1-p_{1}) ^ {k-1} ) c_{1} \\ E[X_{k}] = (1 - (1-p_{1}) ^ {k} ) c_{1} \end{cases}$ Plugging in and solving the inequality with respect to $k$ gives: $$k = \lceil \log_{1 - p_{1}} ( \frac{ c_{2} - c_{1} }{ c_{1} } \frac{ p_{2} }{ p_{1} - p_{2} } ) \rceil$$ -- Results have been independently confirmed by reddit user u/ZicNik in this thread.
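Both the closed-form switch-over point and the recursion for $E[X_n]$ are easy to check numerically. The sketch below uses illustrative catch rates and candy counts (the true in-game values depend on badges, events, and the Grand Unified Catch Theory inputs), so treat the numbers as placeholders.

```python
import math
import random

def optimal_k(p1, c1, p2, c2):
    """Number of final balls on which to use GRB (p1, c1) vs Pinap (p2, c2)."""
    if p1 * c1 <= p2 * c2:
        return 0                      # always Pinap
    x = (c2 - c1) / c1 * p2 / (p1 - p2)
    return math.ceil(math.log(x) / math.log(1.0 - p1))

def expected_candy(n_balls, k, p1, c1, p2, c2, trials=200_000):
    """Monte Carlo estimate of E[candy] when switching to GRB at k balls left."""
    total = 0.0
    for _ in range(trials):
        for ball in range(n_balls, 0, -1):       # balls remaining
            p, c = (p1, c1) if ball <= k else (p2, c2)
            if random.random() < p:
                total += c
                break
    return total / trials

p1, c1 = 0.147, 4   # GRB: illustrative catch rate; 3 catch candy + 1 transfer
p2, c2 = 0.062, 7   # Pinap: illustrative catch rate; doubled candy + transfer
k = optimal_k(p1, c1, p2, c2)
print("switch to GRB with", k, "balls left")
for kk in (k - 1, k, k + 1):
    print(kk, round(expected_candy(10, kk, p1, c1, p2, c2), 4))
```

With these placeholder values the simulation shows the expected candy peaking at the computed $k$, with slightly lower values one ball earlier or later, as the derivation predicts.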
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.928710401058197, "perplexity": 1820.0784988881496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376831334.97/warc/CC-MAIN-20181219045716-20181219071716-00382.warc.gz"}
http://math.stackexchange.com/questions/638756/when-to-use-the-contrapositive-to-prove-a-statment
# When to use the contrapositive to prove a statement

My question tries to address the intuition for, or the situations in which, using the contrapositive to prove a mathematical statement is a promising approach. Whenever we have a mathematical statement of the form $A \implies B$, we can always try to prove the contrapositive instead, i.e. $\neg B \implies \neg A$. However, what I find interesting to think about is: when should this approach look promising? When is it a good idea, when trying to prove something, to use the contrapositive? What is the intuition that $A \implies B$ might be harder to prove directly than its contrapositive?

I am looking more for a set of guidelines or intuitions or heuristics that might suggest that trying to use the contrapositive to prove the mathematical statement is a good idea.

Some problems have structures that make it more "obvious" to try induction or contradiction. For example, in cases where a recursion is involved or something is iterating, induction is sometimes a natural way to try the problem. Or when some mathematical object has property X, assuming it doesn't have property X can seem promising because assuming the opposite might lead to a contradiction. So I was wondering if there was something analogous for proving statements using the contrapositive. I was wondering if the community had a good set of heuristics for when they thought it could be a good idea to use the contrapositive in a proof.

Also, this question might benefit from some simple but very insightful examples that show why the negation might be easier to prove. Also, I know that this intuition can be gained from experience, so providing good or solid examples could be a great way to contribute. Just don't forget to say what type of intuition you are trying to teach with your examples!

Note that I am probably not expecting an actual, fully general magical algorithm, because an algorithm like that could probably be used for automated proving, which might imply something big like $P=NP$. (Obviously a proof of P vs NP is always interesting, but I think that asking the community to prove P vs NP is not a realistic thing to ask...)

- It's a good idea if it works :) No, seriously, I don't think there is some kind of general suggestion, but a nice question (+1). I would be interested in an answer as well. –  user127.0.0.1 Jan 15 at 0:21 Prove is the verb, proof is the noun. –  Pedro Tamaroff Jan 15 at 0:25 Re the question, a bit of it is experience, and trial and error. Some proofs work out nicely by contradiction or contraposition (some people fail to distinguish those), some theorems yield nicely to direct proofs. –  Pedro Tamaroff Jan 15 at 0:26 Maybe a collection of the ideas/guidelines/examples learned by people that have the experience might be a great answer! :) –  Pinocchio Jan 15 at 0:30 It also happens that one tries to argue by contradiction or contraposition when the negation is simpler than the assertion; for example, if a $\forall$ gets turned into an $\exists$, we can work with something less restrictive. For example, suppose I wanted to show that "If $X$ is a metric space with the Bolzano–Weierstrass property (or that is sequentially compact) and $O$ is an open cover, then it has a Lebesgue number $\varepsilon_L$." This means that for every $x\in X$, $B(x,\varepsilon_L)$ is contained in some element of the cover.
–  Pedro Tamaroff Jan 15 at 0:44 Contraposition is often helpful when an implication has multiple hypotheses, or when the hypothesis specifies multiple objects (perhaps infinitely many). As a simple (and arguably artificial) example, compare, for $x$ a real number: 1(a). If $x^4 - x^3 + x^2 \neq 1$, then $x \neq 1$. (Not easy to see without implicit contraposition?) 1(b). If $x = 1$, then $x^4 - x^3 + x^2 = 1$. (Immediately obvious.) Here's a classic from elementary real analysis, with $x$ again standing for a real number: 2(a). If $|x| \leq \frac{1}{n}$ for every positive integer $n$, then $x = 0$. This is true, but awkward to prove directly because there are effectively infinitely many hypotheses (one for each positive $n$), yet no finite number of these hypotheses implies the conclusion. 2(b). If $x \neq 0$, there exists a positive integer $n$ such that $\frac{1}{n} < |x|$. The contrapositive, which follows immediately from the Archimedean property, requires only a strategy for showing some hypothesis is violated if the conclusion is false. 3(a). If $f$ is a continuous, real-valued function on $[0, 1]$ and if $$\int_0^1 f(x) g(x)\, dx = 0\quad\text{for every continuous g,}$$ then $f \equiv 0$. 3(b). If $f$ is a continuous, real-valued function on $[0, 1]$ that is not identically zero, then $$\int_0^1 f(x) g(x)\, dx \neq 0\quad\text{for some continuous g.}$$ Here contraposition is not especially helpful because the specific choice $f = g$ gives either direction. That is, 3(a) has infinitely many hypotheses, but one of them is sufficient to deduce the conclusion. Contrast with the superficially similar-looking: 4(a). If $f$ is a continuous, real-valued function on $[0, 1]$ and if $$\int_0^1 f(x) g(x)\, dx = 0\quad\text{for every non-negative step function g,}$$ then $f \equiv 0$. 4(b). If $f$ is a continuous, real-valued function on $[0, 1]$ that is not identically zero, then $$\int_0^1 f(x) g(x)\, dx \neq 0\quad\text{for some non-negative step function g.}$$ (Sketch of 4(b): If $f$ is not identically $0$, there is an $x_0$ in $(0, 1)$ such that $f(x_0) \neq 0$. By continuity, there exists a $\delta > 0$ such that $[x_0 - \delta, x_0 + \delta] \subset (0, 1)$ and $|f(x)| > |f(x_0)|/2$ for all $x$ with $|x - x_0| < \delta$. Let $g$ be the characteristic function of $[x_0 - \delta, x_0 + \delta]$.) As in 2(a), the hypothesis of 4(a) comprises infinitely many conditions, and no finite number suffice. Here, the contrapositive 4(b) arises naturally, because in trying to establish 4(a) directly one is all but forced to ask how the conclusion could fail. - Fantastic answer –  Spencer Jan 25 at 4:31 I have been a little busy lately and haven't had time to read your answer, but I feel that if so many people liked your response, you'll get the bounty. However, I will read it later and probably write comments/questions, just to make sure that if people in the future come and read it, they can take advantage of our comments. Btw, thanks! :) –  Pinocchio Feb 1 at 15:48 Here are some examples, hope they help. First an easy one. Theorem. Let $n$ be an integer. If $n$ is even then $n^2$ is even. Proof (outline). Let $n$ be even. Then $n=2k$ for some integer $k$, so $n^2=4k^2=2(2k^2)$, which is even. Now try the converse by the same method. Theorem. Let $n$ be an integer. If $n^2$ is even then $n$ is even. Proof (attempted). Suppose that $n^2$ is even, say $n^2=2k$. Then $n=\sqrt{2k}$ and so...???? This seems hopeless, $\sqrt{2k}$ does not look like an integer at all, never mind proving that it's even!
Now try proving the converse by using its contrapositive. Theorem. Let $n$ be an integer. If $n^2$ is even then $n$ is even. Proof. We have to prove that if $n$ is odd, then $n^2$ is odd. So, let $n=2k+1$; then $n^2=4k^2+4k+1=2(2k^2+2k)+1$ which is odd. Done! I think the point here is that for the attempt at a direct proof we start with $n^2=2k$. This implicitly gives us some information about $n$, but it's rather indirect and hard to get hold of. Using the contrapositive begins with $n=2k+1$, which gives us very clear and usable information about $n$. Perhaps you could put a heuristic in the following form: "try both ways, just for a couple of steps, and see if either looks notably easier than the other". - Well, you can use that $2$ is prime. Since $2\mid n^2$, $2\mid n$, so $n=2k$. =) –  Pedro Tamaroff Jan 15 at 1:19 @Pedro, bad idea in my view because you are using a sophisticated result to prove something very simple. In any case the question was about techniques of proof, not about how to prove my particular example. –  David Jan 15 at 1:24 I am just saying you're forcing an impossibility that is not there. –  Pedro Tamaroff Jan 15 at 1:27 In a way, "try both ways" is essentially using proof by contradiction. In fact, I often find myself using proof by contradiction (or at least, proving from both directions at once) when constructing a solution, and I only simplify my solution to direct or contrapositive later. That's because proof by contradiction, or "try both ways", essentially lets you use two hypotheses rather than just one, and in that sense it is superior. –  Goos Jan 27 at 20:25 Look at another example: In the language of sets, it takes the form $$A\subset B\Longleftrightarrow B^c\subset A^c,$$ where $A^c$ is the complement of $A$. - Not exactly what I was looking for, but quite an interesting and cute example. Maybe if you add an intuitive explanation, it would be more clear what you were trying to communicate. Regardless, I thought it was a cute example. :) –  Pinocchio Jan 24 at 22:04 When you are confronted with proving $A=B$ via $A\subset B$ and $A\supset B$, one or both of these may need to be transformed by this technique. –  janmarqz Jan 24 at 22:44 One reason why doing a proof of $A\Rightarrow B$ by contradiction might be easier is the following. For a direct proof you can only use $A$ being true as your working ground, while a proof by contradiction gives you both $A$ and $\neg B$ to get started, so you can draw more conclusions. In this case, we aren't even proving $\neg B \Rightarrow \neg A$ but $$A \wedge \neg B \Rightarrow \text{false}.$$ In the end, all of these are equivalent of course. - So contradiction is the same as contrapositive basically? Or what do you mean? –  Pinocchio Jan 25 at 4:13 Well, logically all these implications are equivalent. But if you just prove the contraposition, you assume $\neg B$ and try to infer $\neg A$, while a proof by contradiction gives you $A\wedge\neg B$ as the assumption and you try to reach a contradiction from that. –  Christoph Jan 25 at 11:51 I think it is better to distinguish contraposition, based on the tautological schema $\vdash (P \supset Q) \supset (\lnot Q \supset \lnot P)$, from ex falso quodlibet, based on $\vdash (\lnot P \supset (P \supset Q))$. –  Mauro ALLEGRANZA Jan 26 at 10:31 A proof by contradiction is based on the tautological schema: $\vdash (P \supset Q) \supset ( (P \supset \lnot Q) \supset \lnot P )$.
–  Mauro ALLEGRANZA Jan 27 at 15:45 On what basis do we think that, in general, a contrapositive argument is "easier" to find than a direct one? In user86418's very useful list of examples, we have that: (1) has "logical form" $\forall x ( \lnot \phi(x) \rightarrow \lnot \psi(x) )$. So, when we contrapose it, we reduce it to $\psi(x) \rightarrow \phi(x)$, which is the direct one. (2) has the form $\forall x (\forall n\, \phi (x,n) \rightarrow x = 0 )$; the contraposition gives us $x \neq 0 \rightarrow \exists n\, \lnot \phi (x,n)$. (3) and (4) both are like $\forall f ( \forall g \int_0^1 f(x) g(x)\, dx = 0 \rightarrow f \equiv 0 )$. The contrapositive is "if $f$ is not identically zero" $\rightarrow \exists g \int_0^1 f(x) g(x)\, dx \neq 0$. My suggestion is that the last three cases support the idea that it is not, in general, easier to prove $(\lnot Q \rightarrow \lnot P)$ instead of the direct $(P \rightarrow Q)$, but that it is easier to prove $(\lnot \psi \rightarrow \exists \lnot \phi)$ than $(\forall \phi \rightarrow \psi)$. My suggestion (to be verified) is that the interplay between the conditional and quantifiers is the relevant issue here. - PS: this answer is just a rough draft; maybe I will add more later.

In general there are 6 ways to prove conditional theorems. I think that if you want to prove $P \to Q$ you have the following 6 options (more details below):

1. Direct conditional proof
2. Direct contrapositive proof
3. Conditional indirect proof
4. Contrapositive indirect proof
5. Indirect proof
6. Indirect contrapositive proof

Which way is easiest in your case depends on what the theorem you want to prove is. I think in general:

• If only one of the formulas "$P \lor \lnot P$" $( \forall x P(x) \lor \forall x \lnot P(x) )$ or "$Q \lor \lnot Q$" $( \forall x Q(x) \lor \forall x \lnot Q (x) )$ is provable, prefer the negation of that variable over the negation of the other variable.
• Direct conditional proof is best (it is constructive).
• Then the methods 2, 3 and 4.
• Only use one of the indirect proofs if everything else fails (because they add an extra layer of negation); if you act this way, possibly you will never have to use the indirect method again, although many will argue that methods 3 and 4 are just indirect proof methods in disguise.

PS 1: the names of methods 2, 3, 4 and 6 are my own; there is no official terminology. (I just made them up while thinking about the question.)

PS 2: of course there are many combinations possible of the 6 methods I mentioned, and it is even true that a "Conditional indirect proof" is a combination of an "Indirect proof" inside a "Direct proof", but I organised them a bit so that all major methods (my opinion) are mentioned.

PS 3: all proof methods that contain a "double negation elimination" (~~Elimination) are not constructive (you prove that $P \to Q$ is a theorem, but have not found a method to transform a P into a Q); in fact all except the direct proof method contain ~~Eliminations.

PS 4: Proof to get $P \to Q$ from $\lnot Q \to \lnot P$. This proof in itself contains a double negation elimination, so all proofs leading through $\lnot Q \to \lnot P$ are not constructive.

1 | . . . ~Q -> ~P proved before
2 | |____ P Assumption
3 | | |__ ~Q Assumption
4 | | | . ~P 1,3 -> Elimination
. | | <----------------------- end subproof
6 | | . . ~~Q 3-5 ~ Introduction
7 | | . . Q 6 ~~Elimination
. | <------------------------- end subproof
8 | . . .
P -> Q 2-7 -> Introduction

# The different methods:

1) Direct Conditional proof

• Assume P
• somehow get to Q
• implication introduction

Formal proof

1 | |____ P Assumption
: | | : : ????????? some applications of inference rules
: | | : : ????????? some applications of inference rules
k | | . . Q inference rule
. | <-------------------- end subproof
m | . . . P -> Q 1-k -> Introduction

2) Direct Contrapositive proof

• Assume ~Q
• somehow get to ~P
• implication introduction

Formal proof

1 | |____ ~Q Assumption
: | | : : ????????? some applications of inference rules
: | | : : ????????? some applications of inference rules
k | | . . ~P inference rule
. | <--------------------- end subproof
m | . . . ~Q -> ~P 1-k -> Introduction

3) Conditional Indirect proof

• Assume P
• Assume ~Q
• implication introduction

Formal proof

1 | |____ P Assumption
: | | : : ????????? some applications of inference rules
: | | : : ????????? some applications of inference rules
a | | |__ ~Q Second Assumption
: | | | : ????????? some applications of inference rules
: | | | : ????????? some applications of inference rules
. | | <---------------------- end subproof
j | | . . ~~Q a-i ~ Introduction
k | | . . Q j ~~Elimination
. | <------------------------ end subproof
m | . . . P -> Q 1-k -> Introduction

The results between line 1 and line a may be interesting in their own right; that is why this method is better than 5) Indirect proof.

4) Contrapositive Indirect proof

This is a variation of the conditional indirect proof method (no. 3); the assumptions are reshuffled. Choose this method if more useful things are provable from assuming ~Q than from assuming P.

• Assume ~Q
• Assume P
• implication introduction

Formal proof

1 | |____ ~Q Assumption
: | | : : ????????? some applications of inference rules
: | | : : ????????? some applications of inference rules
a | | |__ P Second Assumption
: | | | : ????????? some applications of inference rules
: | | | : ????????? some applications of inference rules
. | | <---------------------- end subproof
k | | . . ~P a-i ~ Introduction
. | <------------------------ end subproof
m | . . . ~Q -> ~P 1-k -> Introduction

The results between line 1 and line a may be interesting in their own right; that is why this method is better than 6) Indirect Contrapositive proof.

5) Indirect proof

• Assume ~(P -> Q)

Formal proof

1 | |____ ~(P -> Q) Assumption
2 | | : : ????????? some applications of inference rules
: | | : : ????????? some applications of inference rules
: | | : : ????????? some applications of inference rules
j | | : : ????????? some applications of inference rules
. | <------------------------ end subproof
m | . . . ~~(P -> Q) 1-k ~ Introduction
n | . . . P -> Q m ~~Elimination

Often (always?) this proof can be replaced by a Conditional Indirect Proof (3); it is advisable to use that method.

6) Indirect Contrapositive proof

• Assume ~(~Q -> ~P)

Formal proof

1 | |____ ~(~Q -> ~P) Assumption
2 | | : : ????????? some applications of inference rules
: | | : : ????????? some applications of inference rules
: | | : : ????????? some applications of inference rules
j | | : : ????????? some applications of inference rules
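All six methods lean on a handful of propositional tautologies (contraposition, reductio, ex falso quodlibet). As a quick mechanical sanity check, here is a short truth-table verification, sketched in Python; the helper names are ours.

```python
from itertools import product

def tautology(f):
    """Check that a two-variable propositional formula is always true."""
    return all(f(p, q) for p, q in product((False, True), repeat=2))

imp = lambda a, b: (not a) or b  # material implication

# Contraposition:         (P -> Q)  <->  (~Q -> ~P)
print(tautology(lambda p, q: imp(p, q) == imp(not q, not p)))
# Proof by contradiction: (P -> Q)  <->  ~(P & ~Q)
print(tautology(lambda p, q: imp(p, q) == (not (p and (not q)))))
# Ex falso quodlibet:     ~P -> (P -> Q)
print(tautology(lambda p, q: imp(not p, imp(p, q))))
```

All three checks print `True`, confirming that the classical equivalences behind the six methods hold, even though (as PS 3 notes) the proofs they license differ in constructive content.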
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8091204166412354, "perplexity": 706.3885730637795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500832032.48/warc/CC-MAIN-20140820021352-00441-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.x-mol.com/paper/1295424512411181056
International Journal of Mathematics (IF 0.604) Pub Date: 2020-08-17, DOI: 10.1142/s0129167x20500834 Constantin Shramov We classify finite groups acting by birational transformations of a nontrivial Severi–Brauer surface over a field of characteristic zero that are not conjugate to subgroups of the automorphism group. Also, we show that the automorphism group of a smooth cubic surface over a field $𝕂$ of characteristic zero that has no $𝕂$-points is abelian, and find a sharp bound for the Jordan constants of birational automorphism groups of such cubic surfaces.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8586964011192322, "perplexity": 216.85577639592506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400188841.7/warc/CC-MAIN-20200918190514-20200918220514-00653.warc.gz"}
https://www.physicsforums.com/threads/cross-correlation-of-two-sound-signals.661990/
# Cross Correlation of two sound signals

1. Jan 2, 2013

### sampathkumarm

Hello! We are trying to verify whether an acoustic standing wave has been established in a cavity. In order to do so, we are giving a constant-frequency signal to a speaker. We are picking up the signal at two locations, one very close to the speaker (the reference signal) and one at a particular location (the captured signal). We are cross-correlating them. Ideally, we are supposed to get a zero phase difference between the two signals for the frequency sent in, proving that the reference and captured signals are varying simultaneously in time. However, we are getting a jump from -pi to pi at that frequency (here, it's 520 Hz). Could anyone explain what is happening? Thank you for taking your time for this!

2. Jan 3, 2013

### Staff: Mentor

Welcome to the PF. Could you also post a diagram of your cavity, showing the speaker location and the locations of the microphones?

3. Jan 3, 2013

### AlephZero

Why do you think that should happen? And why do you think it should happen only at a resonance? Also, what are you measuring (or calculating) on the plot that looks like an amplitude? The sudden change in phase suggests the system has a mode (or resonance) at that frequency, but without a drawing of the complete setup it's hard to guess what is happening.
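One way such a jump can arise, independent of the details of the cavity: if the captured signal is in antiphase with the reference (for example, inverted microphone polarity, or a point on the opposite side of a pressure node of the standing wave), the true phase is ±π, which sits exactly on the branch cut of the arctangent, so noisy estimates flip between −π and +π. The synthetic sketch below (our parameters, not the original measurement) illustrates this; it estimates phase from the cross-spectrum rather than the time-domain cross-correlation, which is equivalent for a stationary tone.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0 = 8000.0, 520.0                  # sample rate and drive frequency (Hz)
t = np.arange(int(fs)) / fs             # one second of data

ref = np.sin(2 * np.pi * f0 * t)                            # near the speaker
cap = np.sin(2 * np.pi * f0 * t + np.pi) \
      + 0.05 * rng.standard_normal(t.size)                  # antiphase + noise

# Phase of the cross-spectrum conj(REF) * CAP at the drive frequency
R, C = np.fft.rfft(ref), np.fft.rfft(cap)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
k = np.argmin(np.abs(freqs - f0))
phase = np.angle(np.conj(R[k]) * C[k])
print("phase at %.0f Hz: %+.3f rad" % (freqs[k], phase))    # about +/- pi
```

Repeated runs with different noise realizations land near +3.14 or −3.14 rad essentially at random, reproducing the apparent −π/π jump without any physics beyond a half-cycle offset.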
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.842831552028656, "perplexity": 767.6610430687833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509958.44/warc/CC-MAIN-20181015225726-20181016011226-00448.warc.gz"}
http://support.sas.com/documentation/cdl/en/statug/66859/HTML/default/statug_power_details70.htm
#### Analyses in the TWOSAMPLEMEANS Statement

##### Two-Sample t Test Assuming Equal Variances (TEST=DIFF)

The hypotheses for the two-sample t test state that the mean difference equals the null value $\mu_0$ under $H_0$, with a two-sided or one-sided alternative. The test assumes normally distributed data and a common standard deviation $\sigma$ per group. The test statistic is

$$t = \frac{\bar{x}_2 - \bar{x}_1 - \mu_0}{s_p \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}},$$

where $\bar{x}_1$ and $\bar{x}_2$ are the sample means and $s_p$ is the pooled standard deviation,

$$s_p^2 = \frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1 + n_2 - 2}.$$

The test rejects $H_0$ when $t$ exceeds the appropriate critical value of the central t distribution with $n_1 + n_2 - 2$ degrees of freedom. Exact power computations for t tests are given in O’Brien and Muller (1993, Section 8.2.1); the power is a tail probability of the noncentral t distribution with the same degrees of freedom and noncentrality $\delta = (\mu_{\mathrm{diff}} - \mu_0)/(\sigma\sqrt{1/n_1 + 1/n_2})$. Solutions for the sample sizes and the other parameters are obtained by numerically inverting the power equation; for some parameters, closed-form solutions in terms of the noncentrality follow by algebra (one case requires the quadratic formula).

##### Two-Sample Satterthwaite t Test Assuming Unequal Variances (TEST=DIFF_SATT)

The hypotheses for the two-sample Satterthwaite t test are the same as above, but the variances of the two groups are not assumed equal. The test assumes normally distributed data. The test statistic is

$$t' = \frac{\bar{x}_2 - \bar{x}_1 - \mu_0}{\sqrt{s_1^2/n_1 + s_2^2/n_2}},$$

where $\bar{x}_1$ and $\bar{x}_2$ are the sample means and $s_1$ and $s_2$ are the sample standard deviations. As DiSantostefano and Muller (1995, p. 585) state, the test is based on assuming that under $H_0$ the statistic follows a t distribution whose degrees of freedom are given by Satterthwaite’s approximation (Satterthwaite, 1946),

$$\nu = \frac{\left(\sigma_1^2/n_1 + \sigma_2^2/n_2\right)^2}{\dfrac{(\sigma_1^2/n_1)^2}{n_1-1} + \dfrac{(\sigma_2^2/n_2)^2}{n_2-1}}.$$

Since $\nu$ depends on the unknown variances, in practice it must be replaced by an estimate computed from the sample variances, and the test rejects $H_0$ when $t'$ exceeds the corresponding critical value. Exact solutions for power for the two-sided and upper one-sided cases are given in Moser, Stevens, and Watts (1989); the lower one-sided case follows easily by using symmetry. Because the test is biased, the achieved significance level might differ from the nominal significance level. The actual alpha is computed in the same way as the power, except that the mean difference is replaced by the null mean difference $\mu_0$.

##### Two-Sample Pooled t Test of Mean Ratio with Lognormal Data (TEST=RATIO)

The lognormal case is handled by reexpressing the analysis equivalently as a normality-based test on the log-transformed data, by using properties of the lognormal distribution as discussed in Johnson, Kotz, and Balakrishnan (1994, Chapter 14). The approaches in the section Two-Sample t Test Assuming Equal Variances (TEST=DIFF) then apply. In contrast to the usual t test on normal data, the hypotheses with lognormal data are defined in terms of geometric means rather than arithmetic means, and the test assumes equal coefficients of variation in the two groups. In terms of the (arithmetic) means and common standard deviation of the corresponding normal distributions of the log-transformed data, the hypotheses can be rewritten as hypotheses about a difference of log means. The test assumes lognormally distributed data, and the power follows from the equal-variance normal case applied to the log-transformed data.

##### Additive Equivalence Test for Mean Difference with Normal Data (TEST=EQUIV_DIFF)

The hypotheses for the equivalence test state that the mean difference lies outside the equivalence bounds under $H_0$ and inside them under $H_1$. The analysis is the two one-sided tests (TOST) procedure of Schuirmann (1987). The test assumes normally distributed data. Phillips (1990) derives an expression for the exact power assuming a balanced design; the results are easily adapted to an unbalanced design and are expressed in terms of Owen’s Q function, defined in the section Common Notation.
##### Multiplicative Equivalence Test for Mean Ratio with Lognormal Data (TEST=EQUIV_RATIO)

The lognormal case is handled by re-expressing the analysis equivalently as a normality-based test on the log-transformed data, by using properties of the lognormal distribution as discussed in Johnson, Kotz, and Balakrishnan (1994, Chapter 14). The approaches in the section Additive Equivalence Test for Mean Difference with Normal Data (TEST=EQUIV_DIFF) then apply. In contrast to the additive equivalence test on normal data, the hypotheses with lognormal data are defined in terms of geometric means rather than arithmetic means: $H_0\colon \gamma_2/\gamma_1 < \theta_L$ or $\gamma_2/\gamma_1 > \theta_U$, against $H_1\colon \theta_L \le \gamma_2/\gamma_1 \le \theta_U$. The analysis is the two one-sided tests (TOST) procedure of Schuirmann (1987) on the log-transformed data. Diletti, Hauschke, and Steinijans (1991) derive an expression for the exact power assuming a crossover design; the results are easily adapted to an unbalanced two-sample design. The power is expressed in terms of $\sigma^\star$, the (assumed common) standard deviation of the normal distribution of the log-transformed data, and Owen’s Q function, defined in the section Common Notation.

##### Confidence Interval for Mean Difference (CI=DIFF)

This analysis of precision applies to the standard t-based confidence interval

$$\bar{x}_2 - \bar{x}_1 \pm t_{1-\alpha/2}(N-2)\; s_p\sqrt{\frac{1}{n_1}+\frac{1}{n_2}},$$

where $\bar{x}_1$ and $\bar{x}_2$ are the sample means and $s_p$ is the pooled standard deviation. The half-width is defined as the distance from the point estimate to a finite endpoint,

$$h = t_{1-\alpha/2}(N-2)\; s_p\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}.$$

A valid confidence interval captures the true mean difference. The exact probability of obtaining at most the target confidence interval half-width $h$, unconditional or conditional on validity, is given by Beal (1989); the expression involves Owen’s Q function, defined in the section Common Notation. A quality confidence interval is both sufficiently narrow (half-width at most $h$) and valid.
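As a rough cross-check of the exact power formula for the equal-variance t test above, here is a small sketch using SciPy's noncentral t distribution. This is not SAS code, and the function and variable names are ours, not part of PROC POWER:

```python
import numpy as np
from scipy import stats

def power_two_sample_t(mean_diff, sigma, n1, n2, alpha=0.05, sides=2):
    """Exact power of the pooled two-sample t test for normal data with
    equal variances, computed via the noncentral t distribution."""
    df = n1 + n2 - 2
    # Noncentrality parameter delta from the section above.
    ncp = mean_diff / (sigma * np.sqrt(1.0 / n1 + 1.0 / n2))
    if sides == 2:
        tcrit = stats.t.ppf(1 - alpha / 2, df)
        return stats.nct.sf(tcrit, df, ncp) + stats.nct.cdf(-tcrit, df, ncp)
    tcrit = stats.t.ppf(1 - alpha, df)
    return stats.nct.sf(tcrit, df, ncp)

# e.g. a mean difference of 5 with sigma = 10 and 25 subjects per group:
print(power_two_sample_t(5, 10, 25, 25))   # roughly 0.4
```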
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9705296754837036, "perplexity": 1309.1809430024027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865250.0/warc/CC-MAIN-20180623210406-20180623230406-00119.warc.gz"}
http://math.stackexchange.com/questions/15024/convolution-of-multiple-probability-density-functions
# Convolution of multiple probability density functions

I have a series of tasks where when one task finishes the next task runs, until all of the tasks are done. I need to find the probability that everything will be finished at different points in time. How should I approach this? Is there a way to find this in polynomial time? The PDFs for how long individual tasks will run have been found experimentally and are not guaranteed to follow any particular type of distribution.

-

If the durations of the different tasks are independent, then the PDF of the overall duration is indeed given by the convolution of the PDFs of the individual task durations. For efficient numerical computation of the convolutions, you probably want to apply something like a Fourier transform to them first. If the PDFs are discretized and of bounded support, as one would expect of empirical data, you can use a Fast Fourier Transform. Then just multiply the transformed PDFs together and take the inverse transform.

-

I don't know if that is what you are looking for, but if $X_1,\ldots,X_n$ are i.i.d. random variables with mean $\mu$ and (finite) variance $\sigma^2$, then $${\rm P}\bigg(\sum\limits_{i = 1}^n {X_i } \le t \bigg) = {\rm P}\bigg(\frac{{\sum\nolimits_{i = 1}^n {X_i } - n\mu }}{{\sigma \sqrt n }} \le \frac{{t - n\mu }}{{\sigma \sqrt n }}\bigg),$$ and the random variable $\frac{{\sum\nolimits_{i = 1}^n {X_i } - n\mu }}{{\sigma \sqrt n }}$ converges to the standard normal distribution as $n \to \infty$. Now, you can estimate the unknown parameters $\mu$ and $\sigma^2$ (assuming the variance is finite), so if $n$ is sufficiently large, you are actually done. For further details see this (note the subsection Density functions).

-

Unfortunately, I don't have enough variables to use the Central Limit Theorem. In most cases there will probably only be 2 or 3, and there is no guarantee about their distributions. – Statler Dec 21 '10 at 3:45

Are the variables i.i.d.? – Shai Covo Dec 21 '10 at 4:05

No, unfortunately not. They are definitely not identically distributed, and probably not independent in most cases. I am willing to live with restricting it to independent variables though, if that is necessary to have a solution. – Statler Dec 21 '10 at 4:15

If there are only 2 or 3, you can just numerically integrate the PDFs. This doesn't depend upon the distributions being the same, nor even independence. – Ross Millikan Feb 19 '11 at 20:07
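For instance, here is a minimal NumPy sketch of the FFT approach described in the first answer. It assumes the task durations are independent; the array names and grid spacing are illustrative assumptions:

```python
import numpy as np

def total_duration_pdf(pdfs, dt):
    """PDF of the sum of independent task durations, each given as a
    discretized PDF sampled on a common grid of spacing dt."""
    # Length of the full (linear, not circular) convolution of all inputs.
    n = sum(len(p) for p in pdfs) - len(pdfs) + 1
    nfft = int(2 ** np.ceil(np.log2(n)))  # pad to a power of two for speed

    acc = np.ones(nfft, dtype=complex)
    for p in pdfs:
        acc *= np.fft.fft(p, nfft)        # multiplying transforms = convolving

    out = np.fft.ifft(acc).real[:n]
    # Each pairwise continuous convolution contributes a factor dt, so
    # rescale the discrete result to integrate to (approximately) 1.
    return out * dt ** (len(pdfs) - 1)

# Example: two tasks uniform on [0, 1] hour give a triangular total on [0, 2].
dt = 0.01
uniform = np.ones(int(1 / dt))            # density 1 on [0, 1]
total = total_duration_pdf([uniform, uniform], dt)
print(total.sum() * dt)                   # approximately 1.0
```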
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8118699789047241, "perplexity": 232.03826659302462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775394.157/warc/CC-MAIN-20141217075255-00121-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.mathxplain.com/precalculus/complex-numbers/what-are-complex-numbers
Contents of this Precalculus episode: Integer, Rational numbers, Real numbers, Complex numbers, Imaginary axis, Imaginary unit, Addition, Multiplication.

Text of slideshow: What are complex numbers? Operations on complex numbers. Absolute value of complex numbers, sets on the complex plane.

Let's see what complex numbers are. First, let's talk a bit about numbers. This is 3, for example. And this is 4. And unfortunately, sometimes we need negative numbers, too. Then we may need numbers that express ratios. These are called rational numbers. Like the solution of this equation: $2x=3$. And then there are equations where the solution is not a rational number, such as $x^2=2$. So, we introduce the irrational numbers that fill the gap between the rational numbers on the number line. And that takes us to real numbers. At every point of the number line there is a real number. But in certain cases - especially if physicists are lurking around - we need numbers that have some quite unusual properties. For example, one like this: $x^2=-1$. Right off the top of our heads, we cannot find many numbers that would fit here, because the square of a real number is never negative. These strange numbers were named imaginary numbers. Since the real numbers already took up all spots on the number line, we place the imaginary numbers on an axis perpendicular to it. The unit of the imaginary axis is $i$. Its most important property is $i^2=-1$. Numbers that consist of real and imaginary parts are called complex numbers. So, complex numbers are in the form of $a+bi$, and they are located on the so-called complex plane. Here are two complex numbers, $z_1=a+bi$ and $z_2=c+di$; let's see how we add or even multiply them together. To add them, we add the real parts and the imaginary parts. Multiplication is more exciting: we expand the product as usual, but $i^2=-1$. The funniest is division. Stay tuned...
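For readers who want to experiment, Python has complex numbers built in (the imaginary unit is written 1j), so the addition and multiplication rules above can be checked directly:

```python
# 1j is the imaginary unit: 1j ** 2 == -1.
z1 = 3 + 4j
z2 = 1 - 2j

print(z1 + z2)   # addition: add real and imaginary parts -> (4+2j)
print(z1 * z2)   # multiplication: expand, then use i**2 = -1 -> (11-2j)
print(z1 / z2)   # division: multiply by the conjugate of the denominator -> (-1+2j)
print(abs(z1))   # absolute value: sqrt(3**2 + 4**2) = 5.0
```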
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9171767830848694, "perplexity": 912.2337318367897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360745.35/warc/CC-MAIN-20210228084740-20210228114740-00106.warc.gz"}
https://ijqf.org/archives/6163
# Weekly Papers on Quantum Foundations (3)

Nonlocality via Entanglement-Swapping — a Bridge Too Far? (arXiv:2101.05370v1 [quant-ph])

A 2015 experiment by Hanson and his Delft colleagues provided new confirmation that the quantum world violates the Bell inequalities, closing some loopholes left open by previous experiments. The experiment was also taken to provide new evidence of quantum nonlocality. Here we argue for caution about the latter claim. The Delft experiment relies on entanglement swapping, and our main point is that this introduces new loopholes in the argument from violation of the Bell inequalities to nonlocality. As we explain, the sensitivity of such experiments to these new loopholes depends on the temporal relation between the entanglement-swapping measurement C and the two measurements A and B between which we seek to infer nonlocality. If C is in the future of A and B, the loopholes loom large. If C is in the past, they are less of a threat. The Delft experiment itself is the intermediate case, in which the separation is spacelike. We argue that this still leaves it vulnerable to the new loopholes, unable to establish conclusively that it avoids them.

Quantum logics close to Boolean algebras. (arXiv:2101.05501v1 [quant-ph])

We consider orthomodular posets endowed with a symmetric difference. We call them ODPs. Expressed in the quantum logic language, we consider quantum logics with an XOR-type connective. We study three classes of “almost Boolean” ODPs, two of them defined by requiring rather specific behaviour of infima and the third by a Boolean-like behaviour of Frink ideals. We establish a (rather surprising) inclusion between the three classes, thus shedding light on their intrinsic properties. (More details can be found in the Introduction that follows.) Let us only note that the orthomodular posets pursued here, though close to Boolean algebras (i.e., close to standard quantum logics), still have a potential for an arbitrarily high degree of non-compatibility and hence they may enrich the studies of mathematical foundations of quantum mechanics.

Entangled Kernels — Beyond Separability. (arXiv:2101.05514v1 [cs.LG])

We consider the problem of operator-valued kernel learning and investigate the possibility of going beyond the well-known separable kernels. Borrowing tools and concepts from the field of quantum computing, such as partial trace and entanglement, we propose a new view on operator-valued kernels and define a general family of kernels that encompasses previously known operator-valued kernels, including separable and transformable kernels. Within this framework, we introduce another novel class of operator-valued kernels called entangled kernels that are not separable. We propose an efficient two-step algorithm for this framework, where the entangled kernel is learned based on a novel extension of kernel alignment to operator-valued kernels. We illustrate our algorithm with an application to supervised dimensionality reduction, and demonstrate its effectiveness with both artificial and real data for multi-output regression.

Layers of classicality in the compatibility of measurements. (arXiv:2101.05752v1 [quant-ph])

The term “layers of classicality” in the context of quantum measurements was introduced in [T. Heinosaari, Phys. Rev. A 93, 042118 (2016)]. The strongest layer among these consists of the sets of observables that can be broadcast, and the weakest layer consists of the sets of compatible observables.
There are several other layers in between those two. In this work, we show the differences and similarities in their mathematical and geometric properties. We also show the relations among these layers.

An experimental proposal to study collapse of the wave function in travelling-wave parametric amplifiers. (arXiv:1811.01698v6 [quant-ph] UPDATED)

The read-out of a microwave qubit state occurs using an amplification chain that enlarges the quantum state to a signal detectable with a classical measurement apparatus. However, at what point in this process is the quantum state really ‘measured’? In order to investigate whether the ‘measurement’ takes place in the amplification chain, in which a parametric amplifier is often chosen as the first amplifier, it is proposed to construct a microwave interferometer that has such an amplifier added to each of its arms. Feeding the interferometer with single photons, the interference visibility depends on the gain of the amplifiers and whether a measurement collapse has taken place during the amplification process. The visibility as given by standard quantum mechanics is calculated as a function of gain, insertion loss and temperature. We find a visibility of 1/3 in the limit of large gain without taking into account losses, which is reduced to 0.26 in case the insertion loss of the amplifiers is 2.2 dB at a temperature of 50 mK. It is shown that if the wave function collapses within the interferometer, the measured visibility is reduced compared to its magnitude predicted by standard quantum mechanics once this collapse process sets in.

Introducing the inverse hoop conjecture for black holes. (arXiv:2101.05290v1 [gr-qc])

Authors: Shahar Hod

It is conjectured that stationary black holes are characterized by the inverse hoop relation ${\cal A}\leq {\cal C}^2/\pi$, where ${\cal A}$ and ${\cal C}$ are respectively the black-hole surface area and the circumference length of the smallest ring that can engulf the black-hole horizon in every direction. We explicitly prove that generic Kerr-Newman-(anti)-de Sitter black holes conform to this conjectured area-circumference relation.

Generalized gravity theory with curvature, torsion and nonmetricity. (arXiv:2101.05318v1 [gr-qc])

In this article, the generalized gravity theory with curvature, torsion and nonmetricity is studied. For the FRW spacetime case, in particular, the Lagrangian, Hamiltonian and gravitational equations are obtained. The particular case $F(R,T)=\alpha R+\beta T+\mu Q+\nu{\cal T}$ is investigated in detail. In the quantum case, the corresponding Wheeler-DeWitt equation is obtained. Finally, some gravity theories with curvature, torsion and nonmetricity are presented.

Gravitationally induced uncertainty relations in curved backgrounds. (arXiv:2101.05552v1 [gr-qc])

Authors: Luciano Petruzziello, Fabian Wagner

This paper aims at investigating the influence of space-time curvature on the uncertainty relation. In particular, relying on previous findings, we assume the quantum wave function to be confined to a geodesic ball on a given space-like hypersurface whose radius is a measure of the position uncertainty. On the other hand, we concurrently work out a viable physical definition of the momentum operator and its standard deviation in the non-relativistic limit of the 3+1 formalism. Finally, we evaluate the uncertainty relation which to second order depends on the Ricci scalar of the effective 3-metric and the corresponding covariant derivative of the shift vector.
For the sake of illustration, we apply our general result to a number of examples arising in the context of both general relativity and extended theories of gravity.

A cosmic shadow on CSL. (arXiv:1906.04405v4 [quant-ph] UPDATED)

Authors: Jerome Martin, Vincent Vennin

The Continuous Spontaneous Localisation (CSL) model solves the measurement problem of standard quantum mechanics, by coupling the mass density of a quantum system to a white-noise field. Since the mass density is not uniquely defined in general relativity, this model is ambiguous when applied to cosmology. We however show that most natural choices of the density contrast already make current measurements of the cosmic microwave background incompatible with other laboratory experiments.

IR dynamics and entanglement entropy. (arXiv:1910.07847v3 [hep-th] UPDATED)

Authors: Theodore N. Tomaras, Nicolaos Toumbas

We consider scattering of Faddeev-Kulish electrons in QED and study the entanglement between the hard and soft particles in the final state at the perturbative level. The soft photon spectrum naturally splits into two parts: i) soft photons with energies less than a characteristic infrared scale $E_d$ present in the clouds accompanying the asymptotic charged particles, and ii) sufficiently low energy photons with energies greater than $E_d$, comprising the soft part of the emitted radiation. We construct the density matrix associated with tracing over the radiative soft photons and calculate the entanglement entropy perturbatively. We find that the entanglement entropy is free of any infrared divergences order by order in perturbation theory. On the other hand infrared divergences in the perturbative expansion for the entanglement entropy appear upon tracing over the entire spectrum of soft photons, including those in the clouds. To leading order the entanglement entropy is set by the square of the Fock basis amplitude for real single soft photon emission, which leads to a logarithmic infrared divergence when integrated over the photon momentum. We argue that the infrared divergences in the entanglement entropy (per particle flux per unit time) in this latter case persist to all orders in perturbation theory in the infinite volume limit.

Conserved charges in general relativity. (arXiv:2005.13233v3 [gr-qc] UPDATED)

Authors: Sinya Aoki, Tetsuya Onogi, Shuichi Yokoyama

We present a precise definition of a conserved quantity from an arbitrary covariantly conserved current available in a general curved spacetime with Killing vectors. This definition enables us to define energy and momentum for matter by the volume integral. As a result we can compute charges of Schwarzschild and BTZ black holes by the volume integration of a delta function singularity. Employing the definition we also compute the total energy of a static compact star. It contains both the gravitational mass known as the Misner-Sharp mass in the Oppenheimer-Volkoff equation and the gravitational binding energy. We show that the gravitational binding energy has a negative contribution, at maximum by 68% of the gravitational mass in the case of a constant density. We finally comment on a definition of generators associated with a vector field on a general curved manifold.

The Generalized OTOC from Supersymmetric Quantum Mechanics: Study of Random Fluctuations from Eigenstate Representation of Correlation Functions.
(arXiv:2008.03280v2 [hep-th] UPDATED)

The concept of the out-of-time-ordered correlation (OTOC) function is treated as a very strong theoretical probe of quantum randomness, using which one can study both chaotic and non-chaotic phenomena in the context of quantum statistical mechanics. In this paper, we define a general class of OTOC, which can perfectly capture quantum randomness phenomena in a better way. Further we demonstrate an equivalent formalism of computation using a general time-independent Hamiltonian having a well-defined eigenstate representation for integrable supersymmetric quantum systems. We found that one needs to consider two new correlators apart from the usual one to have a complete quantum description. To visualize the impact of the given formalism we consider two well-known models, viz. the harmonic oscillator and the one-dimensional potential well, within the framework of supersymmetry. For the harmonic oscillator case we obtain similar periodic time dependence but dissimilar parameter dependences compared to the results obtained from both micro-canonical and canonical ensembles in quantum mechanics without supersymmetry. On the other hand, for the one-dimensional potential well problem we found a significantly different time scale and parameter dependence compared to the results obtained from non-supersymmetric quantum mechanics. Finally, to establish the consistency of the prescribed formalism in the classical limit, we demonstrate the phase-space-averaged version of the classical OTOCs from a model-independent Hamiltonian, along with the previously mentioned well-known models.

Quantum Detection of Inertial Frame Dragging. (arXiv:2009.10584v3 [gr-qc] UPDATED)

A relativistic theory of gravity like general relativity produces phenomena differing fundamentally from Newton’s theory. An example, analogous to electromagnetic induction, is gravitomagnetism, or the dragging of inertial frames by mass-energy currents. These effects have recently been confirmed by classical observations. Here we show, for the first time, that they can be observed by a quantum detector. We study the response function of Unruh-DeWitt detectors placed in a slowly rotating shell. We show that the response function picks up the presence of rotation even though the spacetime inside the shell is flat and the detector is locally inertial. The detector can distinguish between the static situation when the shell is non-rotating and the stationary case when the shell rotates and the dragging of inertial frames, i.e. gravitomagnetic effects, arise. Moreover, it can do so when the detector is switched on for a finite time interval within which a light signal cannot travel to the shell and back to convey the presence of rotation.

String Theory, the Dark Sector and the Hierarchy Problem. (arXiv:2010.15610v2 [hep-th] UPDATED)

Authors: Per Berglund, Tristan Hübsch, Djordje Minic

We discuss dark energy, dark matter and the hierarchy problem in the context of a general non-commutative formulation of string theory. In this framework dark energy is generated by the dynamical geometry of the dual spacetime while dark matter, on the other hand, comes from the degrees of freedom dual to the visible matter. This formulation of string theory is sensitive both to the IR and UV scales and the Higgs scale is radiatively stable by being a geometric mean of radiatively stable UV and IR scales.
We also comment on various phenomenological signatures of this novel approach to dark energy, dark matter and the hierarchy problem. We find that this new view on the hierarchy problem is realized in a toy model based on a non-holomorphic deformation of the stringy cosmic string. Finally, we discuss a proposal for a new non-perturbative formulation of string theory, which sheds light on M theory and F theory, as well as on supersymmetry and holography.

The history of LHCb. (arXiv:2101.05331v1 [physics.hist-ph])

In this paper we describe the history of the LHCb experiment over the last three decades, and its remarkable successes and achievements. LHCb was conceived primarily as a b-physics experiment, dedicated to CP violation studies and measurements of very rare b decays; however, the tremendous potential for c-physics was also clear. At first data taking, the versatility of the experiment as a general-purpose detector in the forward region also became evident, with measurements achievable such as electroweak physics, jets and new particle searches in open states. These were facilitated by the excellent capability of the detector to identify muons and to reconstruct decay vertices close to the primary pp interaction region. By the end of the LHC Run 2 in 2018, before the accelerator paused for its second long shutdown, LHCb had measured the CKM quark mixing matrix elements and CP violation parameters to world-leading precision in the heavy-quark systems. The experiment had also measured many rare decays of b and c quark mesons and baryons to below their Standard Model expectations, some down to branching ratios of order $10^{-9}$. In addition, world knowledge of b and c spectroscopy had improved significantly through discoveries of many new resonances already anticipated in the quark model, and also adding new exotic four- and five-quark states.

On the Relationship Between Modelling Practices and Interpretive Stances in Quantum Mechanics

Ruyant, Quentin (2021) On the Relationship Between Modelling Practices and Interpretive Stances in Quantum Mechanics. Foundations of Science. ISSN 1233-1821

Efficient Simulation of Loop Quantum Gravity: A Scalable Linear-Optical Approach

Author(s): Lior Cohen, Anthony J. Brady, Zichang Huang, Hongguang Liu, Dongxue Qu, Jonathan P. Dowling, and Muxin Han

The problem of simulating complex quantum processes on classical computers gave rise to the field of quantum simulations. Quantum simulators solve problems, such as boson sampling, where classical counterparts fail. In another field of physics, the unification of general relativity and quantum theor… [Phys. Rev. Lett. 126, 020501] Published Mon Jan 11, 2021

Truth and beauty in physics and biology

Nature Physics, Published online: 11 January 2021; doi:10.1038/s41567-020-01132-9

Physicists and biologists have different conceptions of beauty. A better appreciation of these differences may bring the disciplines closer and help develop a more integrated view of life.

No Time for Time from No-Time

Chua, Eugene Y. S. and Callender, Craig (2020) No Time for Time from No-Time. [Preprint]

Humean Laws of Nature: The End of the Good Old Days

Callender, Craig (2021) Humean Laws of Nature: The End of the Good Old Days. [Preprint]

Downward Causation Defended

Woodward, James (2021) Downward Causation Defended. [Preprint]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.918080747127533, "perplexity": 810.9186639876754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151563.91/warc/CC-MAIN-20210725014052-20210725044052-00467.warc.gz"}
http://math.stackexchange.com/questions/60675/social-network-representation-graphs-or-sets/60687
# Social network representation: graphs or sets?

For social network representation, which is better: sets or graphs? What kinds of features does the first give that the second doesn't, and vice versa?

- A graph is a set that has additional structure -- specifically, it has structure representing the connections between its elements (and possibly other information, such as the strength of the connection if it's a weighted graph). If you need to represent connections between elements you should use a graph representation of the problem. If you're doing computations, the question of how to implement your graph on the computer is a separate one - there are at least three possible representations which offer different benefits and drawbacks. – Chris Taylor Aug 30 '11 at 9:25

- Not every question about graphs is graph theory, and not every question about sets is set theory. In this case, this question is neither. – Asaf Karagila Aug 30 '11 at 9:32

- Hmm... from the title, I kind of expected the question to be about deep foundational issues and to involve things like Aczel's axiom. I guess I was wrong. – Ilmari Karonen Aug 30 '11 at 10:48

- The more I think about it, the more I'm starting to doubt that the question is mathematical in its nature. – Asaf Karagila Aug 30 '11 at 11:10

- @Niel: If this were on SO, I'd be inclined to tag it as data-structures (and probably also social-networking). Perhaps we could use such a tag here, or perhaps the problem is that the question really belongs elsewhere. – Ilmari Karonen Aug 30 '11 at 11:10

As a potential foundation for mathematics, sets can be used to do essentially anything if you put your mind to it. For instance, sets can be used to represent a graph. You can represent a graph as a set of labelled nodes, where the edges $u\text{-}v$ are represented by unordered pairs $\{u,v\}$ (which will be singleton sets in the case of self-loops). Or you can represent a graph as a set of labelled edges, where the nodes are given as collections of edges $\{e_1, e_2, \ldots, e_k\}$ that meet one another (and where each edge can belong to at most two such sets).
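As a small illustration of the answers above, here is how one tiny network looks as a bare set, as an adjacency mapping, and as a set of unordered edges (a Python sketch; the names are made up):

```python
# (a) A bare set of members: no connection information at all.
people = {"ann", "bob", "cal"}

# (b) A graph as an adjacency mapping (node -> set of neighbours).
friends = {
    "ann": {"bob", "cal"},
    "bob": {"ann"},
    "cal": {"ann"},
}

# (c) A graph encoded purely with sets, as in the answer: unordered pairs.
edges = {frozenset({"ann", "bob"}), frozenset({"ann", "cal"})}

def are_connected(u, v):
    """Connection test on the set-of-edges encoding."""
    return frozenset({u, v}) in edges

print(are_connected("bob", "ann"))   # True
print(are_connected("bob", "cal"))   # False
```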
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8581984043121338, "perplexity": 393.5706939592768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928015.28/warc/CC-MAIN-20150521113208-00110-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/terms-in-a-mathematical-expression.522864/
# Terms in a Mathematical Expression

1. Aug 19, 2011

### gwsinger

When we refer to terms in an equation, what EXACTLY are we referring to? Suppose for example we have:

ab + bc + cd = A

Suppose somebody refers to the term "ab". Are they referring to the syntactical NAME "ab", the IMAGE of ab (i.e., its value), or the ARGUMENT (a,b) which belongs to some ordered triplet in the multiplication function?

2. Aug 19, 2011

### disregardthat

Do you mean the ordered triplet in the addition function? Yes, I believe that's it.

3. Aug 20, 2011

### Stephen Tashi

I don't think there is any general rule for this. It depends on the context. It's a matter of interpreting English unless you are studying a text that is using such terminology to describe the precise syntax of a symbolic language. For example, one might just as well ask "Who is the 'we' that you refer to?" or "EXACTLY what particular passages of text are you talking about?".

4. Aug 20, 2011

### TylerH

In high school algebra II, we're taught that a term is an expression that is being added to another. In your example, you would have three terms: ab, bc, and cd.

5. Aug 22, 2011

### alexfloo

A "term" is not a formal mathematical object; it's just a word that's used for conveying a point. It can mean any of the three things you mentioned, and it's still (mathematically) unambiguous, because two of the three are the same (the value and the argument to the operation, since the value is what is actually being used as an argument) and the other one (the name) has no bearing on the mathematical value of the sentence.
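Computer algebra systems take the same view as the last two posts: a term is a summand. For example, in SymPy the arguments of a sum are exactly its terms:

```python
import sympy as sp

a, b, c, d = sp.symbols("a b c d")
expr = a*b + b*c + c*d

print(expr.args)        # (a*b, b*c, c*d): the three terms (summands)
print(len(expr.args))   # 3
```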
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8770824074745178, "perplexity": 744.9016147162146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210133.37/warc/CC-MAIN-20180815141842-20180815161842-00671.warc.gz"}
https://mikepawliuk.ca/tag/ultrahomogeneous/
## Dual Ramsey, an introduction – Ramsey DocCourse Prague 2016

The following notes are from the Ramsey DocCourse in Prague 2016. The notes are taken by me and I have edited them. In the process I may have introduced some errors; email me or comment below and I will happily fix them.

Title: Dual Ramsey, the Gurarij space and the Poulsen simplex 1 (of 3).

Lecturer: Dana Bartošová.

Date: December 12, 2016.

Main Topics: Comparison of various Fraïssé settings, metric Fraïssé definitions and properties, KPT of metric structures, Thick sets

Definitions: continuous logic, metric Fraïssé properties, NAP (near amalgamation property), PP (Polish Property), ARP (Approximate Ramsey Property), Thick, Thick partition regular.

Lecture 1 – Lecture 2 – Lecture 3

Ramsey DocCourse Prague 2016 Index of lectures.

## Bootcamp 5 – Ramsey DocCourse Prague 2016

The following notes are from the Ramsey DocCourse in Prague 2016. The notes are taken by me and I have edited them. In the process I may have introduced some errors; email me or comment below and I will happily fix them.

Title: Bootcamp 5 (of 8)

Lecturer: Jan Hubička

Date: Friday September 30, 2016.

Main Topics: Rado Graph, Fraïssé’s Theorem, Examples of Fraïssé classes, Ramsey implies Amalgamation, Lifts and Reducts, Ramsey classes have linear orders

Definitions: Extension Property, Ultrahomogeneous, Universal, $\text{Age}(A)$, Fraïssé class, irreducible structure, Lifts/Expansions and Shadows/reducts.

Bootcamp 1 – Bootcamp 2 – Bootcamp 3 – Bootcamp 4 – Bootcamp 5 – Bootcamp 6 – Bootcamp 7 – Bootcamp 8

## Facts about the Urysohn Space – Some useful, some cool

(This is almost verbatim the talk I gave recently (Feb 23, 2012) at the Toronto Student Set Theory and Topology Seminar. I will be giving this talk again on April 5, 2012.)

I have been working on a problem involving the Urysohn space recently, and I figured that I should fill people in with the basic facts and techniques involved in this space. I will give some useful facts, a key technique and 3 cool facts. First, the definition!

Definition: A metric space $U$ has the Urysohn property if

• $U$ is complete and separable
• $U$ contains every separable metric space as an isometric copy.
• $U$ is ultrahomogeneous in the sense that if $A,B$ are finite, isometric subspaces of $U$ then there is an automorphism of $U$ that takes $A$ to $B$.

You might already know a space that satisfies the first two properties – the Hilbert cube $[0,1]^\omega$ or $C[0,1]$, the continuous functions from $[0,1]$ to $\mathbb{R}$. However, these spaces are not ultrahomogeneous. Should a Urysohn space even exist? It does, but the construction isn't particularly illuminating so I will skip it.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 14, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8065575361251831, "perplexity": 2260.8969910983988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202506.45/warc/CC-MAIN-20190321072128-20190321094128-00193.warc.gz"}
http://tex.stackexchange.com/questions/84603/not-able-to-write-bold-characters-using-textbf/84620
not able to write bold characters using textbf

I'm working on LaTeX and R (Sweave) and I am using utf8 encoding, but when I use \textbf{dataset}, the text is not bold. Below is part of my code, but I am not sure what the reason behind it is.

\documentclass[11pt,table,a4paper]{article}
\usepackage[T1,T2A]{fontenc}
\usepackage{CJKutf8}
\usepackage[english,russian]{babel}
\begin{document}
%chunk = 1
<<echo=FALSE,results=tex>>=
cat("dataset","\n")
cat("\\textbf{dataset}","\n") # it should be bold
@
\end{document}
linecolor=goldenpoppy

The error is due to \usepackage[english,russian]{babel}. Please help me out in fixing this problem.

- Sorry, but without an MWE few people will be able or willing to help you. For a start clean up your package list: You load everything and the kitchen sink, and you load it twice. – Martin Schröder Nov 28 '12 at 7:16

- If I remove the unnecessary packages then I never face such a problem. The problem is due to a clash of packages. – Manish Nov 28 '12 at 7:29

- One idea would be to remove one package after another in order to narrow down the clash you propose to have. If one package is necessary to produce your problem, remove others (that are not involved, of course) to further compress your MWE and post that MWE here. – Ronny Nov 28 '12 at 7:31

- Please don't downvote below -1, even if the question in its current form needs some improvement. A score of -1 is enough to show that the question needs work; anything below that is of no use. Also, if you downvote or vote to close, please leave a comment explaining why you did so. – Jake Nov 28 '12 at 7:39

- This should not have been closed: The question has been improved significantly from its first version. Please wait at least 24 hours after asking the OP for improvements to the question before voting to close. Also, if you downvote, please don't forget to revert the vote after the question is improved. – Jake Nov 28 '12 at 8:31

UPDATE: See the solution at point 4.

I don't really understand the question. Apparently you are trying to insert some R code inside your latex document, but didn't take into account that some characters used in the listing have special meaning for latex. For example, you have %chunk=1, but since % is the comment character for TeX, that line will be ignored. Then you have some \n inside, but since \ is the escape character for TeX, it complains about \n not being defined. Oddly enough, you doubled the \ before textbf, so textbf is not a tex command anymore in that context (while \\ is the command for a newline). Also you have # to introduce a comment in R, but # is a special char used by tex to denote an argument. All these errors are not related to \usepackage[english,russian]{babel}, as you suggest. They are instead the usual errors which happen when you insert code from some other language into tex, without "protecting" it in a verbatim environment. Also, you have an extraneous linecolor=goldenpoppy after \end{document}, which will be ignored (what was it for, anyway?)

While more clarifications about your actual problem arrive, I can try guessing and suggesting some solutions.

1. Solve the errors and get a compilable document. Use verbatim to disable the special meaning of all those special chars.

\documentclass[11pt,table,a4paper]{article}
\usepackage[T1,T2A]{fontenc}
\usepackage{CJKutf8}
\usepackage[english,russian]{babel}
\begin{document}
\begin{verbatim}
%chunk = 1
<<echo=FALSE,results=tex>>=
cat("dataset","\n")
cat("\\textbf{dataset}","\n") # it should be bold
@
\end{verbatim}
\end{document}

Result:

2.
Use the listings package to get syntax highlighting for R (comments are italicized, keywords are bold, etc.)

\documentclass[11pt,table,a4paper]{article}
\usepackage[T1,T2A]{fontenc}
\usepackage{CJKutf8}
\usepackage[english,russian]{babel}
\usepackage{listings}
\begin{document}
\begin{lstlisting}[language=R]
%chunk = 1
<<echo=FALSE,results=tex>>=
cat("dataset","\n")
cat("\\textbf{dataset}","\n") # it should be bold
@
\end{lstlisting}
\end{document}

Result:

3. I guess that you want the text "dataset" to be typeset in bold. If you use verbatim, the command \textbf is not "executed" (because \ has lost its status of escape char). So you need a verbatim environment which disables most of the special chars, but still allows for \ as escape. The fancyvrb package has such an option. Unfortunately in that case \n would be interpreted also as a command. An easy solution is to define that command.

\documentclass[11pt,table,a4paper]{article}
\usepackage[T1,T2A]{fontenc}
\usepackage{CJKutf8}
\usepackage[english,russian]{babel}
\usepackage{fancyvrb}
\def\n{\textbackslash n}
\begin{document}
\begin{Verbatim}[commandchars=\\\{\}]
%chunk = 1
<<echo=FALSE,results=tex>>=
cat("dataset","\n")
cat("\textbf{dataset}","\n") # it should be bold
@
\end{Verbatim}
\end{document}

But, oh! The result shows no bold (perhaps this was your actual problem?):

In this case it is because your tt font has no bold variant. And this could be related to the cyrillic fonts you use. Please confirm if this is the intended question.

4. UPDATED: Once it has been confirmed (in a comment) that the problem was not the syntax errors found in the (non-working) minimal example, but instead the lack of an appropriate tt bold font, the solution is to provide such a font. Typically the solution would be as simple as \usepackage{bold-extra}, which installs bold versions of the cmtt font. However this does not work either in this case, because this package installs those fonts under the 'OT1' encoding, but the document uses T2A encoding. A "brute force" solution is to fool latex into "thinking" that font cmttb has T2A encoding. This works for the minimal example provided because the listing contains only ascii characters. I don't know what would happen if the listing had contained cyrillic or chinese characters. You have to try yourself.

In order to implement this "solution" (note the quotes), you have to write a file, called myboldtt.sty for example, in the same folder as your main document. The content of this file would be:

\providecommand{\EC@ttfamily}[5]{%
  \DeclareFontShape{#1}{#2}{#3}{#4}%
  {<5><6><7><8>#50800%
  <9><10><10.95><12><14.4><17.28><20.74><24.88><29.86>%
  <35.83>genb*#5}{}}
\DeclareFontFamily{T2A}{cmtt}{\hyphenchar\font\m@ne}
\EC@ttfamily{T2A}{cmtt}{m}{n}{latt}
\EC@ttfamily{T2A}{cmtt}{m}{sl}{last}
\EC@ttfamily{T2A}{cmtt}{m}{it}{lait}
\EC@ttfamily{T2A}{cmtt}{m}{sc}{latc}
\EC@ttfamily{T2A}{cmtt}{bx}{n}{labtl}
\DeclareFontShape{T2A}{cmtt}{b}{n}
  { <5><6><7><8><9><10><12><10.95><14.4><17.28><20.74><24.88>cmttb10 }{}
\DeclareFontShape{T2A}{cmtt}{bx}{n}%
  {<->ssub*cmtt/b/n}{}
\endinput

And then ensure that \usepackage{myboldtt} appears in your main document. An MWE follows:

\documentclass[11pt,table,a4paper]{article}
\usepackage{bold-extra}
\usepackage[T1,T2A]{fontenc}
\usepackage{CJKutf8}
\usepackage[english,russian]{babel}
\usepackage{fancyvrb}
\usepackage{myboldtt}
\def\n{\textbackslash n}
\begin{document}
Regular text. \textbf{Bold}.
\begin{Verbatim}[commandchars=\\\{\}]
%chunk = 1
<<echo=FALSE,results=tex>>=
cat("dataset","\n")
cat("\textbf{dataset}","\n") # it should be bold
@
\end{Verbatim}
\end{document}

Which (at last!) produces the intended output:

NOTE: You need to have the cm-mf-extra fonts also installed in your system.

- Yes! Just to inform, the posted code is Sweave code (R & LaTeX) which is part of my big code (2000 lines). Now the problem is that dataset should be in bold but I am not getting the desired output. – Manish Nov 29 '12 at 2:27

- Answer updated to show a possible solution. – JLDiaz Nov 29 '12 at 15:50

- @JLDiaz I have bold-extra and myboldtt.sty but still the same problem. \documentclass[11pt,table,a4paper]{article} \usepackage{bold-extra} \usepackage[T1,T2A]{fontenc} \usepackage{CJKutf8} \usepackage[english,russian]{babel} \usepackage{fancyvrb} \usepackage{myboldtt} \begin{document} <<echo=FALSE,results=tex>>= cat("dataset","\n") cat("\\textbf{dataset}","\n") # it should be bold @ \end{document} – Manish Nov 30 '12 at 1:26

- @Manish You need to have cm-mf-extra fonts also installed in your system. I added a final note in my answer. – JLDiaz Nov 30 '12 at 8:42

- I just checked: all the cm font sources are already there. cmbcsc10.mf cmbtex10.mf cmbtt10.mf cmbtt8.mf cmbtt9.mf cmexb10.mf cmttb10.mf are in /usr/local/texlive/2012/texmf-dist/fonts/source/public/cm/. Still I am facing the same problem. – manish Dec 4 '12 at 8:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9332923889160156, "perplexity": 2497.7203548820466}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989507.42/warc/CC-MAIN-20150728002309-00193-ip-10-236-191-2.ec2.internal.warc.gz"}
https://imstat.org/2013/04/02/medallion-lecture-preview-jeremy-quastel/
A specialist in probability theory, stochastic processes and partial differential equations, Jeremy Quastel has been at the University of Toronto since 1998. He studied at McGill University, then the Courant Institute at New York University where he completed his PhD in 1990 under the direction of S.R.S. Varadhan; he has also worked at the Mathematical Sciences Research Institute in Berkeley, and UC Davis. His research is on the large scale behaviour of interacting particle systems and stochastic partial differential equations. He was a Sloan Fellow 1996–98, a Killam Fellow 2013–15, invited speaker at the International Congress of Mathematicians in Hyderabad 2010, gave the Current Developments in Mathematics 2011 and St. Flour 2012 lectures, and was a plenary speaker at the International Congress of Mathematical Physics in Aalborg 2012. Jeremy’s Medallion Lecture will be at JSM Montreal on August 5. ## The Kardar-Parisi-Zhang equation and its universality class Stochastic partial differential equations are used throughout the sciences to provide more realistic models than partial differential equations, taking into account the natural randomness or uncertainty in the environment. Sometimes the effects can be highly non-trivial, especially in small scale cases, e.g. biological cells, nanotechnology, chemical kinetics, but also large scale phenomena such as climate modelling. Our understanding of stochastic partial differential equations is still in a very primitive stage, and we are just starting to be able to study problems of genuine relevance to applications. A key scientific question for which there is no general recipe is how the input noise is transformed by a non-linear stochastic partial differential equation into fluctuations of the solution. One of the most important non-linear stochastic partial differential equations is the Kardar-Parisi-Zhang equation (KPZ), introduced in 1986 as a canonical model for random surface growth. In the one dimensional case, it is equivalent to the stochastic Burgers equation, which is a model for randomly forced one-dimensional fluids. These models have been widely used in physics, but their mathematics was very poorly understood. At the physical level, it was discovered that the one-dimensional KPZ equation had highly non-trivial fluctuation behaviour, shared by a large collection of one-dimensional asymmetric, randomly forced systems: stochastic interface growth on a one dimensional substrate, randomly stirred one dimensional fluids, polymer chains directed in one dimension and fluctuating transversally in the other due to a random potential (with applications to domain interfaces in disordered crystals), driven lattice gas models, reaction-diffusion models in two-dimensional random media (including biological models such as bacterial colonies), randomly forced Hamilton-Jacobi equations, etc. These form the conjectural KPZ universality class. A combination of non-rigorous methods (renormalization, mode-coupling, replicas) and mathematical breakthroughs on a few special solvable models led to very precise predictions of universal scaling exponents and exact statistical distributions describing the long time properties. Surprisingly, they are the same as those found in random matrix theory: The Tracy-Widom distributions and their process level generalizations, the Airy processes. 
These predictions have been repeatedly confirmed through Monte Carlo simulation as well as experiments; in particular, recent spectacular experiments on turbulent liquid crystals. However, at the mathematical level the KPZ equation proved difficult until recently, when, in a series of unexpected breakthroughs, the equation was shown to be well-posed and exact distributions were computed for the main scaling-invariant initial data. The goal of this talk will be to describe the background and the progress that has been made.
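For reference (the abstract above does not display it, but the form is standard), the one-dimensional KPZ equation is usually written as

$$\partial_t h(t,x) \;=\; \nu\,\partial_x^2 h \;+\; \frac{\lambda}{2}\,\big(\partial_x h\big)^2 \;+\; \sqrt{D}\,\xi(t,x),$$

where $h$ is the height of the growing interface and $\xi$ is space-time white noise; differentiating in $x$ yields the stochastic Burgers equation mentioned above.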
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8411850333213806, "perplexity": 623.6251826351943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.58/warc/CC-MAIN-20211201234046-20211202024046-00559.warc.gz"}
https://jpmccarthymaths.com/2010/11/03/ms-2001-week-7/
On Monday we proved that if a function is differentiable then it is continuous (today I stated that a rough word for 'differentiable' is 'smooth'). We showed that a continuous function need not be differentiable by presenting the counterexample $f(x)=|x|$. We presented and proved the sum, scalar, product and quotient rules of differentiation. The proof of the quotient rule is on this page. We computed the derivative of $x^n$ and $x^{-n}$ for $n\geq 1$. As a corollary we showed that polynomials are differentiable everywhere. Finally we wrote down the Chain Rule.

In the tutorial nobody asked any questions, so I threw the exercises up on the projector and allowed ye to work away. Ye put up your hand if ye wanted assistance. In the end I did questions 1 & 2 from http://euclid.ucc.ie/pages/staff/wills/teaching/ms2001/exercise3.pdf.

On Wednesday we wrote down the Chain Rule again, stated that the proof is up here, and gave a very dodgy explanation of why we must multiply by the derivative of the 'inside' function. We stated and proved the derivatives of $\sin x$, $\cos x$, $\tan x$, $e^x$ and $\log x$ (the last two proved non-rigorously). Finally we wrote down Rolle's Theorem.

Problems

You need to do exercises: you should be able to attempt all of the following. Do as many as you can/want, in the following order of most beneficial:

Wills' Exercise Sheets

More exercise sheets

Section 3 from Problems

Past Exam Papers

Q. 1(c), 3(b), 4(b) from http://booleweb.ucc.ie/ExamPapers/exams2010/MathsStds/MS2001Sum2010.pdf
Q. 1(c), 3(a), 4 from http://booleweb.ucc.ie/ExamPapers/exams2009/MathsStds/MS2001s09.pdf
Q. 1(c), 3(a), 4 from http://booleweb.ucc.ie/ExamPapers/exams2009/MathsStds/Autumn/MS2001A09.pdf
Q. 1(c), 3(b), 4(a) from http://booleweb.ucc.ie/ExamPapers/exams2008/Maths_Stds/MS2001Sum08.pdf
Q. 3(b), 4(b), 5(a) from http://booleweb.ucc.ie/ExamPapers/exams2006/Maths_Stds/MS2001Sum06.pdf
Q. 3(b), 4(a) from http://booleweb.ucc.ie/ExamPapers/Exams2005/Maths_Stds/MS2001.pdf
Q. 4, 5(a), 6(a) from http://booleweb.ucc.ie/ExamPapers/exams2002/Maths_Stds/ms2001.pdf
Q. 1(b), 4(b), 5(b) from http://booleweb.ucc.ie/ExamPapers/exams2001/Maths_studies/MS2001Summer01.pdf
Q. 1(b), 4(b) from http://booleweb.ucc.ie/ExamPapers/exams/Mathematical_Studies/MS2001.pdf

From the Class

1. Prove Proposition 4.1.4 (ii)
2. Prove Proposition 4.1.9 (ii)
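As an aside on Monday's result, the proof that differentiability implies continuity is essentially one line: for $x\neq a$,

$f(x)-f(a)=\dfrac{f(x)-f(a)}{x-a}\cdot(x-a)\;\longrightarrow\; f'(a)\cdot 0=0 \text{ as } x\to a,$

so $\lim_{x\to a}f(x)=f(a)$, which is exactly continuity at $a$.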
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 9, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.96613609790802, "perplexity": 1155.2572813015547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587606.8/warc/CC-MAIN-20211024204628-20211024234628-00201.warc.gz"}
https://www.physicsforums.com/threads/forces-with-friction-problem-two-block-on-each-other.472088/
# Forces with friction problem, two blocks on each other

• #1 Asphyxiated

## Homework Statement

The original problem statement is at http://images3a.snapfish.com/232323232%7Ffp733%3B9%3Enu%3D52%3A%3A%3E379%3E256%3EWSNRCG%3D335%3A728%3A%3B4347nu0mrj

## Homework Equations

Newton's Laws 1-3

## The Attempt at a Solution

Note that the table is frictionless; the only friction is between the two boxes.

So first:

$$\mu = .45$$

and the T on M1 is

$$T_{m_{1}}=F_{a}$$

and T on M2 is

$$T_{m_{2}}=-F_{a}$$

and

$$F_{n_{1}}=m_{1}g$$

$$F_{n_{2}}=m_{1}g+m_{2}g$$

So the friction force on M1 from M2 should be:

$$F_{\mu_{1}} = \mu F_{n_{1}}$$

and the friction on M2 from M1 should be:

$$F_{\mu_{2}}=\mu F_{n_{1}}$$

So I am kind of confused as to how I should use the T forces... here is what I did:

$$F_{net_{x}} = F_{a}+F_{\mu_{1}}-T-F_{\mu_{2}}=0$$

$$F_{net_{y}}=F_{n_{1}}+F_{n_{2}}-m_{1}g-m_{2}g=0$$

Is that right? I am not sure how to set up T, because for m1 T is opposite the friction force and for m2 T is opposite the Fa force... should I set up a relationship that way to do this correctly?

• #2 Staff Emeritus, Gold Member

It seems to me that Fa ≠ T. After all, the horizontal forces on block 1 must balance, and according to the FBD that YOU drew for it (which looks correct), there are three such forces. Fa is to the right and BOTH T and the frictional force from block 2 are pointing to the left.

• #3 ajith.mk91

Yes, horizontal forces on block 1 should balance. If T is the tension in the string acting to pull the body to the left then the frictional force between the bodies will act to the right to counter the tension. So T=f. Now coming to the second block, it has got F acting to the right and tension acting to the left. Since the frictional force is acting to the right on block 1, it acts to the left on block 2. To sum up the whole thing, your F acts to counter both T(=f) and f. So F equals 2f. Anyway, I am not sure of this.

• #4 Staff Emeritus, Gold Member

"Yes, horizontal forces on block 1 should balance. If T is the tension in the string acting to pull the body to the left then the frictional force between the bodies will act to the right to counter the tension. So T=f."

Yeah, but you're forgetting the applied force (also points to the right). EDIT: and you also haven't specified which frictional force you're referring to. The one that acts on block 1 points to the left, in the SAME direction as the tension. I explicitly said that there were THREE horizontal forces acting on block 1 in my previous post. Let's let the OP take stock of things...

• #5 ajith.mk91

Block 1 is the top one and block 2 is the bottom one, right? A force acting on the bottom block cannot directly act on the top one. The only way for it is through friction. Now if the friction tries to stop the relative motion it should act to the right on the top block, since it tends to move to the left.

• #6 ashishsinghal

Do not write vertical and horizontal equations for the whole system; that is unnecessary. Just balance the vertical and horizontal forces on each block. Remember the tension in the string is uniform.

• #7 Staff Emeritus, Gold Member

"Block 1 is the top one and block 2 is the bottom one, right? A force acting on the bottom block cannot directly act on the top one. The only way for it is through friction. Now if the friction tries to stop the relative motion it should act to the right on the top block, since it tends to move to the left."
Hey -- I was referring to the bottom block as "block 1" without double-checking the labeling on the picture. Sorry.
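For anyone who wants to sanity-check the F = 2f conclusion from post #3 numerically, here is a small sketch. The masses are made up, since the original problem statement is behind the dead link above; only the structure of the force balance comes from the thread.

```python
# Hypothetical numbers: the actual masses are in the linked (dead) image.
m1 = 2.0    # kg, top block
m2 = 5.0    # kg, bottom block (the table under it is frictionless)
mu = 0.45   # friction coefficient between the blocks
g = 9.81    # m/s^2

N1 = m1 * g       # normal force at the block-block interface
f = mu * N1       # limiting friction at that interface

# Static balance as argued in posts #2-#3:
T = f             # top block: tension balances friction
F = T + f         # bottom block: applied force balances tension + friction reaction
print(f"f = {f:.2f} N, T = {T:.2f} N, F = {F:.2f} N")  # F = 2f
```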
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.836650550365448, "perplexity": 998.9993770039225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00418.warc.gz"}
https://vknight.org/Year_3_game_theory_course/Content/Chapter_10_Infinetely_Repeated_Games/
## Recap

In the previous chapter:

• We defined repeated games;
• We showed that a sequence of stage Nash games would give a subgame perfect equilibrium;
• We considered a game, illustrating how to identify equilibria that are not a sequence of stage Nash profiles.

In this chapter we'll take a look at what happens when games are repeated infinitely.

## Discounting

To illustrate infinitely repeated games ($$T\to\infty$$) we will consider a Prisoner's Dilemma as our stage game.

Let us denote $$s_{C}$$ as the strategy "cooperate at every stage". Let us denote $$s_{D}$$ as the strategy "defect at every stage".

If we assume that both players play $$s_{C}$$ their utility would be:

$$U_1(s_C, s_C) = \sum_{t=1}^{\infty} u_1(C, C) \to \infty$$

Similarly:

$$U_1(s_D, s_D) = \sum_{t=1}^{\infty} u_1(D, D) \to \infty$$

It is impossible to compare these two strategies. To be able to carry out analysis of strategies in infinitely repeated games we make use of a discounting factor $$0<\delta<1$$.

The interpretation of $$\delta$$ is that there is less importance given to future payoffs. One way of thinking about this is that "the probability of receiving the future payoffs decreases with time".

In this case we write the utility in an infinitely repeated game as:

$$U_i(s_1, s_2) = \sum_{t=1}^{\infty} \delta^{t-1} u_i(s_1(t), s_2(t))$$

Thus:

$$U_1(s_C, s_C) = \sum_{t=1}^{\infty} \delta^{t-1} u_1(C, C) = \frac{u_1(C, C)}{1-\delta}$$

and:

$$U_1(s_D, s_D) = \sum_{t=1}^{\infty} \delta^{t-1} u_1(D, D) = \frac{u_1(D, D)}{1-\delta}$$

## Conditions for cooperation in Prisoner's Dilemmas

Let us consider the "Grudger" strategy (which we denote $$s_G$$): "Start by cooperating until your opponent defects, at which point defect in all future stages."

If both players play $$s_G$$, play proceeds exactly as if both played $$s_C$$:

$$U_i(s_G, s_G) = \frac{u_i(C, C)}{1-\delta}$$

If we assume that $$S_1=S_2=\{s_C,s_D,s_G\}$$ and player 2 deviates from $$s_G$$ at the first stage to $$s_D$$ we get:

$$U_2(s_G, s_D) = u_2(C, D) + \sum_{t=2}^{\infty} \delta^{t-1} u_2(D, D) = u_2(C, D) + \frac{\delta}{1-\delta} u_2(D, D)$$

Deviation from $$s_G$$ to $$s_D$$ is rational if and only if:

$$u_2(C, D) + \frac{\delta}{1-\delta} u_2(D, D) > \frac{u_2(C, C)}{1-\delta}$$

$$\Leftrightarrow \delta < \frac{u_2(C, D) - u_2(C, C)}{u_2(C, D) - u_2(D, D)}$$

thus if $$\delta$$ is large enough, $$(s_G,s_G)$$ is a Nash equilibrium.

Importantly, $$(s_G,s_G)$$ is not a subgame perfect Nash equilibrium. Consider the subgame following $$(r_1,c_2)$$ having been played in the first stage of the game. Assume that player 1 adheres to $$s_G$$:

1. If player 2 also plays $$s_G$$ then the first stage of the subgame will be $$(r_2,c_1)$$ (player 1 punishes while player 2 sticks with $$c_1$$ as player 1 played $$r_1$$ in the previous stage). All subsequent plays will be $$(r_2,c_2)$$ so player 2's utility will be:

$$u_2(r_2, c_1) + \sum_{t=2}^{\infty} \delta^{t-1} u_2(r_2, c_2)$$

2. If player 2 deviates from $$s_G$$ and chooses to play $$D$$ in every period of the subgame then player 2's utility will be:

$$\sum_{t=1}^{\infty} \delta^{t-1} u_2(r_2, c_2)$$

which is a rational deviation (as $$0<\delta<1$$ and, in a Prisoner's Dilemma, $$u_2(r_2,c_2) > u_2(r_2,c_1)$$: the first-stage payoff improves while all later payoffs are unchanged).

Two questions arise:

1. Can we always find a strategy in a repeated game that gives us a better outcome than simply repeating the stage Nash equilibria? (Like $$s_G$$.)
2. Can we also find a strategy with the above property that in fact is subgame perfect? (Unlike $$s_G$$.)

## Folk theorem

The answer is yes! To prove this we need to define a couple of things.

### Definition of an average payoff

If we interpret $$\delta$$ as the probability of the repeated game not ending then the average length of the game is:

$$\bar T = \sum_{t=1}^{\infty} \delta^{t-1} = \frac{1}{1-\delta}$$

We can use this to define the average payoffs per stage:

$$\bar u_i = (1-\delta) U_i$$

This average payoff is a tool that allows us to compare the payoffs in an infinitely repeated game to the payoffs in a single stage game.

### Definition of individually rational payoffs

Individually rational payoffs are average payoffs that exceed the stage game Nash equilibrium payoffs for both players.

As an example consider the plot corresponding to a repeated Prisoner's Dilemma. The feasible average payoffs correspond to the feasible payoffs in the stage game. The individually rational payoffs show the payoffs that are better for both players than the stage Nash equilibrium.
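Before moving on, here is a small numerical sketch of the discounting and average-payoff definitions above. The stage payoffs are illustrative assumptions (a generic Prisoner's Dilemma), not values from this course.

```python
import numpy as np

# Assumed stage payoffs for player 1 (a generic Prisoner's Dilemma):
u_CC, u_DD = 2, 1
delta = 0.9

# Truncated version of U_1 = sum_{t>=1} delta^(t-1) * u_1(...):
t = np.arange(10_000)
U_coop = np.sum(delta**t * u_CC)
U_defect = np.sum(delta**t * u_DD)

print(U_coop, u_CC / (1 - delta))   # ~20.0 vs 20.0: the geometric-sum formula
print((1 - delta) * U_coop)         # ~2.0: the average payoff per stage
```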
The following theorem states that we can choose a particular discount rate for which there exists a subgame perfect Nash equilibrium that would give any individually rational payoff pair!

### Folk Theorem for infinitely repeated games

Let $$(u_1^*,u_2^*)$$ be a pair of Nash equilibrium payoffs for a stage game. For every individually rational pair $$(v_1,v_2)$$ there exists $$\bar \delta$$ such that for all $$1>\delta>\bar \delta>0$$ there is a subgame perfect Nash equilibrium with payoffs $$(v_1,v_2)$$.

### Proof

Let $$(\sigma_1^*,\sigma_2^*)$$ be the stage Nash profile that yields $$(u_1^*,u_2^*)$$. Now assume that playing $$\bar\sigma_1\in\Delta S_1$$ and $$\bar\sigma_2\in\Delta S_2$$ in every stage gives $$(v_1,v_2)$$ (an individually rational payoff pair).

Consider the following strategy:

"Begin by using $$\bar \sigma_i$$ and continue to use $$\bar \sigma_i$$ as long as both players use the agreed strategies. If any player deviates: use $$\sigma_i^*$$ for all future stages."

We begin by proving that the above is a Nash equilibrium. Without loss of generality, if player 1 deviates to $$\sigma_1'\in\Delta S_1$$ such that $$u_1(\sigma_1',\bar \sigma_2)>v_1$$ in stage $$k$$ then:

$$U_1^{(k)} = \sum_{t=1}^{k-1}\delta^{t-1}v_1 + \delta^{k-1}u_1(\sigma_1',\bar\sigma_2) + \sum_{t=k+1}^{\infty}\delta^{t-1}u_1^*$$

Recalling that player 1 would receive $$v_1$$ in every stage with no deviation, the biggest gain to be made from deviating is if player 1 deviates in the first stage (all future gains are more heavily discounted). Thus if we can find $$\bar\delta$$ such that $$\delta>\bar\delta$$ implies that $$U_1^{(1)}\leq \frac{v_1}{1-\delta}$$ then player 1 has no incentive to deviate. We require:

$$U_1^{(1)} = u_1(\sigma_1',\bar\sigma_2) + \frac{\delta}{1-\delta}u_1^* \leq \frac{v_1}{1-\delta} \Leftrightarrow (1-\delta)\,u_1(\sigma_1',\bar\sigma_2) + \delta u_1^* \leq v_1 \Leftrightarrow \delta \geq \frac{u_1(\sigma_1',\bar\sigma_2)-v_1}{u_1(\sigma_1',\bar\sigma_2)-u_1^*}$$

As $$u_1(\sigma_1',\bar \sigma_2)>v_1>u_1^*$$, taking $$\bar\delta=\frac{u_1(\sigma_1',\bar\sigma_2)-v_1}{u_1(\sigma_1',\bar\sigma_2)-u_1^*}$$ gives the required result for player 1, and repeating the argument for player 2 completes the proof of the fact that the prescribed strategy is a Nash equilibrium.

By construction this strategy is also a subgame perfect Nash equilibrium. Given any history both players will act in the same way and no player will have an incentive to deviate:

• If we consider a subgame just after any player has deviated from $$\bar\sigma_i$$ then both players use $$\sigma_i^*$$.
• If we consider a subgame just after no player has deviated from $$\bar\sigma_i$$ then both players continue to use $$\bar\sigma_i$$.
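A quick numerical illustration of the threshold constructed in the proof, with made-up values for $$u_1(\sigma_1',\bar\sigma_2)$$, $$v_1$$ and $$u_1^*$$ (they are not from the chapter):

```python
# Illustrative numbers: deviation payoff 3, target payoff v1 = 2, stage Nash 1.
u_dev, v1, u_star = 3.0, 2.0, 1.0

delta_bar = (u_dev - v1) / (u_dev - u_star)
print(delta_bar)  # 0.5: for delta > 0.5 deviating is unprofitable

# Sanity check: one-shot deviation followed by punishment vs complying forever,
# using the closed forms for the geometric sums.
for delta in (0.4, 0.6):
    U_deviate = u_dev + delta / (1 - delta) * u_star
    U_comply = v1 / (1 - delta)
    print(delta, U_deviate > U_comply)  # True for 0.4, False for 0.6
```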
{"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9128772616386414, "perplexity": 523.045588951656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056572.96/warc/CC-MAIN-20210918184640-20210918214640-00282.warc.gz"}
https://www.arxiv-vanity.com/papers/hep-ph/9707417/
NEAP-54, July 1997

Flavor-Nonconservation and CP-Violation with Singlet Quarks

Katsuji Yamamoto

Department of Nuclear Engineering, Kyoto University, Kyoto 606-01, Japan

abstract

Some aspects are considered on the flavor-nonconservation and CP-violation arising from the quark mixings with singlet quarks. In certain models incorporating the singlet quarks, the contributions of the quark couplings to the neutral Higgs fields may become more significant than those of the neutral gauge interactions. Then, they would provide distinct signatures for new physics beyond the standard model in various flavor-nonconserving and CP-violating processes such as the neutron electric dipole moment, neutral meson mixings such as $D^0$-$\bar{D}^0$, and so on.

Talk presented at "Masses and Mixings of Quarks and Leptons", March, 1997, Shizuoka, Japan

## 1 Introduction

Some extensions of the standard model may be expected in various points of view. Among such possibilities, electroweak models incorporating singlet quarks with electric charges $2/3$ and $-1/3$ have been investigated extensively in the literature [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. In the presence of singlet quarks, various interesting issues are presented phenomenologically. In particular, some novel features arise from the mixings between the ordinary quarks ($q$) and the singlet quarks ($Q$): The CKM unitarity in the ordinary quark sector is violated, and the flavor changing neutral currents (FCNC's) are present at the tree-level in both the gauge and Higgs interactions of the quarks. The $q$-$Q$ mixings even involve new CP-violating sources. In this talk, we make further considerations on the flavor-nonconservation and CP-violation arising from the quark mixings with singlet quarks. We present some relevant formulations for the quark mixings and the FCNC's, which are useful for making more precise estimates on the $q$-$Q$ mixing effects in the flavor-nonconserving and CP-violating processes such as the neutron electric dipole moment (NEDM), neutral meson mixings, and so on. Then, these formulations are applied, for instance, for estimating the neutral Higgs contributions to the NEDM and the $D^0$-$\bar{D}^0$ mixing in the presence of significant mixing between the top quark and singlet quarks [11]. These estimates may be relevant for investigating the electroweak baryogenesis with CP-violating $t$-$U$ mixing [12, 13]. Some variants of the electroweak model incorporating singlet quarks will also be discussed later, in particular, by considering possible parametrizations of the quark mass matrices, relative significance between the gauge and Higgs FCNC's and the role of the singlet Higgs field.

## 2 Quark masses and mixings with singlet quarks

We here describe explicitly a specific version of electroweak model incorporating singlet quarks, where the quark Yukawa couplings are given by

$$\begin{aligned} {\cal L}_{\rm Yukawa} = \ &- u^c_0 \lambda_u q_0 H - u^c_0 ( f_U S + f'_U S^\dagger ) U_0 - U^c_0 ( \lambda_U S + \lambda'_U S^\dagger ) U_0 \\ &- d^c_0 \lambda_d V_0^\dagger q_0 \tilde{H} - d^c_0 ( f_D S + f'_D S^\dagger ) D_0 - D^c_0 ( \lambda_D S + \lambda'_D S^\dagger ) D_0 \\ &+ {\rm h.c.} \end{aligned} \tag{1}$$

with the two-component Weyl fields (the generation indices and the factors representing the Lorentz covariance are omitted for simplicity). Here $q_0$ represents the quark doublets with a unitary matrix $V_0$, and $H$ is the electroweak Higgs doublet with $\tilde{H} = i \tau_2 H^*$. Suitable redefinitions among the $u^c_0$ and $U^c_0$ (and $d^c_0$ and $D^c_0$) fields with the same quantum numbers have been made to eliminate the $U^c_0 q_0 H$ and $D^c_0 q_0 \tilde{H}$ couplings without loss of generality. Then, the Yukawa coupling matrices $\lambda_u$ and $\lambda_d$ have been made diagonal by using unitary transformations among the ordinary quark fields. In this basis, by turning off the $q$-$Q$ mixings with $f_Q = f'_Q = 0$, $u_0$ and $d_0$ are reduced to the mass eigenstates, and $V_0$ is identified with the CKM matrix.
The actual CKM matrix is slightly modified due to the $q$-$Q$ mixings, as shown explicitly later. A complex singlet Higgs field $S$ is introduced in the present model given by eq. (1) to provide the singlet quark mass terms and the $q$-$Q$ mixing terms. Some variants of the model will be considered later by noting whether the singlet Higgs field is introduced or not. The Higgs fields develop vev's,

$$\langle H^0 \rangle = \frac{v}{\sqrt{2}} \ , \quad \langle S \rangle = e^{i \phi_S} \frac{v_S}{\sqrt{2}} \ , \tag{2}$$

where $\langle S \rangle$ may acquire a nonvanishing phase $\phi_S$ due to either spontaneous or explicit CP violation originating in the Higgs sector. The quark mass matrices are produced with these vev's as

$$\mathcal{M}_U = \begin{pmatrix} M_u & \Delta_{u\text{-}U} \\ 0 & M_U \end{pmatrix} \ , \quad \mathcal{M}_D = \begin{pmatrix} M_d & \Delta_{d\text{-}D} \\ 0 & M_D \end{pmatrix} \ . \tag{3}$$

These quark mass matrices are diagonalized by unitary transformations $V_{QL}$ and $V_{QR}$ ($Q = U, D$) as

$$V_{UR}^\dagger \mathcal{M}_U V_{UL} = {\rm diag.}(m_u, m_c, m_t, m_{U_1}, \ldots) \ , \quad V_{DR}^\dagger \mathcal{M}_D V_{DL} = {\rm diag.}(m_d, m_s, m_b, m_{D_1}, \ldots) \tag{4}$$

with quark mass eigenvalues given by

$$m_{q_i} = \lambda_{q_i} \frac{v}{\sqrt{2}} \left[ 1 + O(\epsilon^2_{q\text{-}Q}) \right] \ . \tag{5}$$

Here the parameters $\epsilon_{q\text{-}Q}$ represent the mean magnitudes of the $q$-$Q$ mixings. The generalized CKM matrix is given by

$$\mathcal{V} = V_{UL}^\dagger \begin{pmatrix} V_0 & 0 \\ 0 & 0 \end{pmatrix} V_{DL} = \begin{pmatrix} V & * \\ * & * \end{pmatrix} \ . \tag{6}$$

Here the CKM matrix $V$ is no longer unitary due to the $q$-$Q$ mixings. It is found by determining perturbatively that the CKM unitarity violation arises as

$$(V^\dagger V - 1)_{ij} \ , \ (V V^\dagger - 1)_{ij} \sim (m_{u_i} m_{u_j}/m_U^2)\,\epsilon^2_{u\text{-}U} + (m_{d_i} m_{d_j}/m_D^2)\,\epsilon^2_{d\text{-}D} \ , \tag{7}$$

being related to the FCNC's coupled to the $Z$ boson [1, 2, 4, 5, 9, 10, 11]. This relation ensures that the CKM unitarity violation is sufficiently below the experimental bounds [14] for reasonable ranges of the model parameters.

## 3 FCNC's

The FCNC's, which may include CP violating sources, arise in both the gauge interactions and the neutral Higgs couplings. We here consider these FCNC's, respectively.

FCNC's in the gauge interactions

It is straightforward to write down the quark gauge interactions coupled to the $Z$ boson in terms of the quark mass eigenstates:

$$\mathcal{L}_{\rm NC}(Z) = g_Z Z_\mu \left[ \sum_{Q=U,D} Q^\dagger \sigma^\mu V_Z(Q) Q + \sum_{Q^c=U^c,D^c} Q^{c\dagger} \sigma^\mu V_Z(Q^c) Q^c \right] \ , \tag{8}$$

where $g_Z$ is the $Z$ coupling constant and the coupling matrices are given by

$$V_Z(Q) = V_{QL}^\dagger \begin{pmatrix} I_Z(q_0) \mathbf{1} & 0 \\ 0 & I_Z(Q_0) \mathbf{1} \end{pmatrix} V_{QL} = \begin{pmatrix} V_Z(q) & * \\ * & * \end{pmatrix} \ , \tag{9}$$

$$V_Z(Q^c) = I_Z(Q^c_0) \begin{pmatrix} \mathbf{1} & 0 \\ 0 & \mathbf{1} \end{pmatrix} \tag{10}$$

with $I_Z(f) = I_3(f) - Q_{\rm em}(f) \sin^2\theta_W$. The FCNC's do not appear for the right-handed quarks in the gauge interactions, since the right-handed ordinary quarks have the same quantum numbers as the singlet quarks. The coupling matrices for the left-handed quarks are actually determined in respective models depending on how the singlet quark mass terms and the $q$-$Q$ mixing terms are provided, and how the Yukawa couplings and the quark mass matrices are parametrized in certain bases. In the present model given by eq. (1), the $U^c_0 q_0 H$ and $D^c_0 q_0 \tilde{H}$ couplings have been rotated out by suitable redefinition among the right-handed quarks. Then, the quark mass matrices have the specific form (3), respecting the relation as given in eq. (5). By making perturbative calculations with these quark mass matrices, the $q$-$Q$ mixing effects on the neutral gauge couplings of the ordinary quarks are found as

$$V_Z(q)_{ij} - V_Z(q_0)_{ij} \sim (m_{q_i} m_{q_j}/m_Q^2)\,\epsilon^2_{q\text{-}Q} \ , \tag{11}$$

where $V_Z(q_0)$ represents the usual neutral currents in the absence of $q$-$Q$ mixings. This modification on the neutral currents is related to the unitarity violation given in eq. (7) [1, 2, 4, 5, 9, 10, 11].

FCNC's in the neutral Higgs couplings

The quark couplings to the neutral Higgs fields are extracted from (1) as

$$\mathcal{L}_{\rm NC}({\rm Higgs}) = - \sum_{Q;\, a=0,1,2} Q^c \Lambda_{Qa} Q\, \phi_a + {\rm h.c.} \ , \tag{12}$$

where $\phi_a$ ($a = 0, 1, 2$) are the mass eigenstates of the neutral Higgs fields. Then, the coupling matrices in eq. (12) are given by

$$\Lambda_{Qa} = \frac{1}{\sqrt{2}} V_{QR}^\dagger \left( {\sf O}_{a0} \Lambda_q + {\sf O}_{a1} \Lambda^+_Q + i\, {\sf O}_{a2} \Lambda^-_Q \right) V_{QL} \tag{13}$$

with

$$\Lambda_u = \begin{pmatrix} \lambda_u & 0 \\ 0 & 0 \end{pmatrix} \ , \quad \Lambda_d = \begin{pmatrix} -\lambda_d V_0^\dagger & 0 \\ 0 & 0 \end{pmatrix} \ , \quad \Lambda^\pm_Q = \begin{pmatrix} 0 & f_Q \pm f'_Q \\ 0 & \lambda_Q \pm \lambda'_Q \end{pmatrix} \ . \tag{14}$$

Here an orthogonal matrix ${\sf O}$ is introduced to parametrize the mass eigenstates of the neutral Higgs fields.
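As a rough numerical illustration (not part of the original talk) of the diagonalization in eqs. (3)-(5), one may take a toy two-state up-quark mass matrix and check by SVD that the light mass and the left-handed mixing scale as stated; all numbers below are made up.

```python
import numpy as np

# Toy 2x2 version of M_U in eq. (3): one ordinary quark (mass ~ m_u) and one
# singlet quark (mass ~ M_Q), with mixing Delta ~ eps * M_Q.
m_u, M_Q, eps = 1.5, 500.0, 0.1
M = np.array([[m_u, eps * M_Q],
              [0.0, M_Q]])

# SVD gives M = V_R diag(m) V_L^dagger, cf. eq. (4)
V_R, m, V_Lh = np.linalg.svd(M)
V_L = V_Lh.conj().T

print(m)                         # heavy ~ M_Q; light ~ m_u * [1 + O(eps^2)], cf. eq. (5)
print(np.abs(np.round(V_L, 6)))  # the light-heavy entry of V_L is O((m_u/M_Q) * eps)
```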
It is seen by making perturbative calculations that the ordinary quark couplings to the neutral Higgs fields, in particular, have specific generation dependence as

$$(\Lambda_{Qa})_{ij} \sim (m_{q_j}/m_Q)\,\epsilon^2_{q\text{-}Q} \ . \tag{15}$$

FCNC's ($Z$) versus FCNC's (Higgs)

As seen in eqs. (11) and (15), the FCNC's coupled to the neutral Higgs fields are of the first order of the relevant ordinary quark masses, while the $q$-$Q$ mixing effects on the $Z$ boson couplings appear at the second order. This implies that the FCNC's (Higgs) are more significant than the FCNC's ($Z$) in this sort of models with the quark mass matrices of the form given in eq. (3), where the quark mass hierarchy is respected naturally under the relation (5). Then, the neutral Higgs contributions in various flavor-nonconserving and CP-violating processes are expected to serve as signals for the new physics beyond the standard model.

## 4 Higgs contributions to NEDM and $D^0$-$\bar{D}^0$ with $u$-$U$ mixings

The neutral Higgs contribution to the NEDM was considered earlier [5], claiming that the singlet Higgs mass scale should be in the TeV region or larger. On the other hand, from the viewpoint of electroweak baryogenesis [12, 13], the mass scale of the singlet Higgs field to provide the CP-violating $t$-$U$ mixings is desired to be comparable to the electroweak scale. In order to clarify this apparently controversial situation, detailed analyses have been made recently [11], showing that the NEDM becomes comparable to the experimental bound for a singlet Higgs mass scale of order the electroweak scale even in the presence of significant $t$-$U$ mixing. We here describe these analyses briefly, where the $D^0$-$\bar{D}^0$ mixing is also considered.

The total one-loop contribution of the neutral Higgs fields to the $u$ quark EDM is calculated by a usual formula

$$d_u(\phi) = - \frac{2e}{3(4\pi)^2} \sum_a \sum_{U_K = u_i, U} {\rm Im} \left[ (\Lambda_a)_{1K} (\Lambda_a)_{K1} \right] \frac{m_{U_K}}{m^2_{\phi_a}}\, I(m^2_{U_K}/m^2_{\phi_a}) \ , \tag{16}$$

where $I(r)$ is a certain function of $r = m^2_{U_K}/m^2_{\phi_a}$. The effective Hamiltonian for the $D^0$-$\bar{D}^0$ mixing, on the other hand, is obtained from the quark couplings to the neutral Higgs fields (13) as

$$\mathcal{H}^{\Delta c = 2}_\phi = \sum_a \frac{1}{m^2_{\phi_a}} \left[ \bar{c} \left\{ (\Gamma^S_a)_{21} + (\Gamma^P_a)_{21} \gamma_5 \right\} u \right]^2 \ , \tag{17}$$

where

$$(\Gamma^S_a)_{21} = \frac{1}{2} \left[ (\Lambda_{Ua})^*_{12} + (\Lambda_{Ua})_{21} \right] \ , \quad (\Gamma^P_a)_{21} = \frac{1}{2} \left[ (\Lambda_{Ua})^*_{12} - (\Lambda_{Ua})_{21} \right] \ . \tag{18}$$

Systematic analyses have been done in [11] for the neutral Higgs contributions (16) and (17) to the NEDM and the $D^0$-$\bar{D}^0$ mixing due to the $u$-$U$ mixings. There, the quark mass matrices are diagonalized numerically to determine precisely the quark mixing matrices and the quark couplings to the $Z$ boson and neutral Higgs fields. The relevant coupling parameters are taken in certain reasonable ranges as

$$\epsilon_{u\text{-}U} \sim 0.1 \ , \quad \text{complex phases in } {\cal L}_{\rm Yukawa} \text{ and } \langle S \rangle \sim 1 \ , \quad m_U \sim \text{several} \times 100 \ {\rm GeV} \ , \quad m_{\phi_0} \sim 100 \ {\rm GeV} \ , \quad m_{\phi_1}, m_{\phi_2} \sim v_S \gtrsim 100 \ {\rm GeV} \ .$$

The results are given as

$$|d_u(\phi)| \sim (10^{-25} - 10^{-27}) \ e\,{\rm cm} \ , \quad \Delta m_D(\phi) \sim (10^{-13} - 10^{-15}) \ {\rm GeV} \ ,$$

which are comparable to the present experimental bounds [14]. As for the case of $d$-$D$ mixings, the neutral Higgs contributions to the $K^0$-$\bar{K}^0$ mixing and the NEDM should be investigated as well. It is actually found that the CP violation parameter for the $K^0$-$\bar{K}^0$ mixing, in particular, can be sizable for reasonable ranges of the couplings.

## 5 Some variants of the model with singlet quarks

We finally consider some possible variants of the model incorporating the singlet quarks.

alternative form of the quark mass matrices

We have rotated out the $U^c_0 q_0 H$ and $D^c_0 q_0 \tilde{H}$ couplings in eq. (1), providing the specific form of the quark mass matrices (3). This respects naturally the mass hierarchy of the ordinary quarks with the relation (5). It is instead possible to eliminate the $u^c_0 U_0$ and $d^c_0 D_0$ mixing couplings with $f_Q = f'_Q = 0$, while the $U^c_0 q_0 H$ and $D^c_0 q_0 \tilde{H}$ couplings may still have generic forms with the diagonal $\lambda_u$ and $\lambda_d$ couplings.
Then, an alternative form of the quark mass matrices is obtained. Even in this case, if the $U^c_0 q_0 H$ and $D^c_0 q_0 \tilde{H}$ couplings are small enough (not necessarily for the top quark), then the masses of the ordinary quarks are not changed significantly, maintaining the natural relation (5). It is interesting in this case, as confirmed by numerical calculations, that the FCNC's ($Z$) can be larger than the FCNC's (Higgs). Then, for instance, a significant contribution of the FCNC's ($Z$) to the neutral meson mixing may be obtained, as investigated in [7].

real singlet Higgs field

The complex Higgs field $S$ may be replaced by a real field with $\langle S \rangle = v_S/\sqrt{2}$. Even in this case, similar contributions are expected from the FCNC's (Higgs) for the flavor-nonconserving and CP-violating processes. It should here be mentioned that with only one real singlet Higgs field the CP-violating $q$-$Q$ mixing is ineffective for the electroweak baryogenesis. This is because the complex phases in the $q$-$Q$ couplings to the real singlet Higgs field are eliminated away by rephasing the $q$ and $Q$ fields. The alternative forms of the quark mass matrices,

$$\mathcal{M}_Q = \begin{pmatrix} M_q & 0 \\ \Delta'_{q\text{-}Q} & M_Q \end{pmatrix} \ ,$$

are also possible, as mentioned above, by redefining the right-handed quark fields.

no singlet Higgs field

The singlet Higgs field may be absent, with explicit mass terms $M_Q$ and mixing terms $\Delta_{q\text{-}Q}$ in eq. (3). Even in this case, the significant FCNC's with CP-violating phases may still be present in the quark couplings to the $Z$ boson and the standard neutral Higgs field. The one-loop neutral Higgs contribution to the NEDM is, however, vanishing, just as the one-loop $Z$ boson contribution. This is because the standard neutral Higgs field and the Nambu-Goldstone mode couple to the quarks in the same way.

## 6 Summary

Some aspects have been considered on the flavor-nonconservation and CP-violation arising from the quark mixings with singlet quarks. In certain models incorporating the singlet quarks, the contributions of the quark couplings to the neutral Higgs fields may become more significant than those of the neutral gauge interactions. Then, they would provide distinct signatures for new physics beyond the standard model in various flavor-nonconserving and CP-violating processes such as the NEDM, neutral meson mixings, and so on. It is, for instance, found that the neutral Higgs contributions to the NEDM and the neutral meson mass difference can be comparable to the present experimental bounds for the case where the singlet Higgs mass scale is of order of the electroweak scale and a significant CP-violating $u$-$U$ mixing is present. This situation may be desired for the electroweak baryogenesis with $t$-$U$ mixing.

acknowledgement

I would like to thank I. Kakebe for his collaboration in a part of this work.

## References

• [1] G. C. Branco and L. Lavoura, Nucl. Phys. B 278 (1986) 738.
• [2] F. del Aguila and J. Cortés, Phys. Lett. B 156 (1985) 243.
• [3] K. S. Babu and R. N. Mohapatra, Phys. Rev. Lett. 62 (1989) 1079.
• [4] P. Langacker and D. London, Phys. Rev. D 38 (1988) 886; Y. Nir and D. Silverman, Phys. Rev. D 42 (1990) 1477; L. Lavoura and J. P. Silva, Phys. Rev. D 47 (1993) 1117; G. C. Branco, T. Morozumi, P. A. Parada and M. N. Rebelo, Phys. Rev. D 48 (1993) 1167; V. Barger, M. S. Berger and R. J. N. Phillips, Phys. Rev. D 52 (1995) 1663.
• [5] L. Bento and G. C. Branco, Phys. Lett. B 245 (1990) 599.
• [6] L. Lavoura and J. P. Silva, Phys. Rev. D 47 (1993) 2046.
• [7] G. C. Branco, P. A. Parada and M. N. Rebelo, Phys. Rev. D 52 (1995) 4217.
• [8] W.-S. Hou and H.-c. Kao, Phys. Lett. B 387 (1996) 544; G. Bhattacharyya, G. C. Branco and W.-S. Hou, Phys. Rev. D 54 (1996) 2114.
• [9] Y. Takeda, I. Umemura, K. Yamamoto and D. Yamazaki, Phys. Lett. B 386 (1996) 167.
• [10] F. del Aguila, J. A. Aguilar-Saavedra and G. C. Branco, UG-FT-69/97, hep-ph/9703410, March 1997.
• [11] I. Kakebe and K. Yamamoto, NEAP-53, hep-ph/9705203, April 1997.
• [12] For a review see for instance, A. G. Cohen, D. B. Kaplan and A. E. Nelson, Ann. Rev. Nucl. Part. Sci. 43 (1993) 27, and references therein; K. Funakubo, Prog. Theor. Phys. 46 (1996) 652, and references therein.
• [13] J. McDonald, Phys. Rev. D 53 (1996) 645; T. Uesugi, A. Sugamoto and A. Yamaguchi, Phys. Lett. B 392 (1997) 389.
• [14] Particle Data Group, Phys. Rev. D 54 (1996) 1.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9649274349212646, "perplexity": 1224.2497732974039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587770.37/warc/CC-MAIN-20211025220214-20211026010214-00144.warc.gz"}
https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.9-The-Moore-Penrose-Pseudoinverse/
# Introduction

We saw that not all matrices have an inverse. It is unfortunate because the inverse is used to solve systems of equations. In some cases, a system of equations has no solution, and thus the inverse doesn't exist. However, it can be useful to find a value that is almost a solution (in terms of minimizing the error). We will see for instance how we can find the best-fit line of a set of data points with the pseudoinverse.

# 2.9 The Moore-Penrose Pseudoinverse

The Moore-Penrose pseudoinverse is a direct application of the SVD (see 2.8). But before all, we have to remind that systems of equations can be expressed under the matrix form.

As we have seen in 2.3, the inverse of a matrix $\bs{A}$ can be used to solve the equation $\bs{Ax}=\bs{b}$:

$\bs{A}^{-1}\bs{Ax}=\bs{A}^{-1}\bs{b}$

$\bs{I}_n\bs{x}=\bs{A}^{-1}\bs{b}$

$\bs{x}=\bs{A}^{-1}\bs{b}$

But in the case where the set of equations has 0 or many solutions, the inverse cannot be found and the equation cannot be solved. The pseudoinverse is $\bs{A}^+$ such that:

$\bs{A}\bs{A}^+\approx\bs{I_n}$

minimizing

$\norm{\bs{A}\bs{A}^+-\bs{I_n}}_2$

The following formula can be used to find the pseudoinverse:

$\bs{A}^+= \bs{VD}^+\bs{U}^T$

with $\bs{U}$, $\bs{D}$ and $\bs{V}$ respectively the left singular vectors, the singular values and the right singular vectors of $\bs{A}$ (see the SVD in 2.8). $\bs{A}^+$ is the pseudoinverse of $\bs{A}$ and $\bs{D}^+$ the pseudoinverse of $\bs{D}$. We saw that $\bs{D}$ is a diagonal matrix, and thus $\bs{D}^+$ can be calculated by taking the reciprocal of the nonzero values of $\bs{D}$. This is a bit crude but we will see some examples to clarify all of this.

### Example 1.

Let's see how to implement that. We will create a non-square matrix $\bs{A}$, calculate its singular value decomposition and its pseudoinverse.

$\bs{A}=\begin{bmatrix} 7 & 2 \\ 3 & 4 \\ 5 & 3 \end{bmatrix}$

import numpy as np
A = np.array([[7, 2], [3, 4], [5, 3]])
U, D, V = np.linalg.svd(A)
D_plus = np.zeros((A.shape[0], A.shape[1])).T
D_plus[:D.shape[0], :D.shape[0]] = np.linalg.inv(np.diag(D))
A_plus = V.T.dot(D_plus).dot(U.T)
A_plus

array([[ 0.16666667, -0.10606061, 0.03030303], [-0.16666667, 0.28787879, 0.06060606]])

We can now check with the pinv() function from Numpy that the pseudoinverse is correct:

np.linalg.pinv(A)

array([[ 0.16666667, -0.10606061, 0.03030303], [-0.16666667, 0.28787879, 0.06060606]])

It looks good! We can now check that it is really the near inverse of $\bs{A}$. Since we know that $\bs{A}^{-1}\bs{A}=\bs{I_n}$ with

$\bs{I_2}=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$

A_plus.dot(A)

array([[ 1.00000000e+00, 2.63677968e-16], [ 5.55111512e-17, 1.00000000e+00]])

This is not bad! This is almost the identity matrix! A difference with the real inverse is that $\bs{A}^+\bs{A}\approx\bs{I}$ but $\bs{A}\bs{A}^+\neq\bs{I}$.

Another way of computing the pseudoinverse is to use this formula:

$(\bs{A}^T\bs{A})^{-1}\bs{A}^T$

The result is less accurate than the SVD method, and Numpy's pinv() uses the SVD (cf. the Numpy doc). Here is an example with the same matrix $\bs{A}$:

A_plus_1 = np.linalg.inv(A.T.dot(A)).dot(A.T)
A_plus_1

array([[ 0.16666667, -0.10606061, 0.03030303], [-0.16666667, 0.28787879, 0.06060606]])

In this case the result is the same as with the SVD way.
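As a side note (my own variant, not from the original construction above), D+ can also be built without np.linalg.inv, by taking reciprocals of the nonzero singular values directly; this is what keeps the pseudoinverse well defined even for rank-deficient matrices:

```python
import numpy as np

A = np.array([[7, 2], [3, 4], [5, 3]])
U, D, V = np.linalg.svd(A)

# Reciprocal of the singular values, zeroing the (numerically) zero ones
tol = max(A.shape) * np.finfo(float).eps * D.max()
D_inv = np.array([1.0 / d if d > tol else 0.0 for d in D])

D_plus = np.zeros(A.shape).T
np.fill_diagonal(D_plus, D_inv)

A_plus = V.T.dot(D_plus).dot(U.T)
print(A_plus)  # same result as np.linalg.pinv(A)
```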
## Using the pseudoinverse to solve an overdetermined system of linear equations

In general there is no solution to overdetermined systems (see 2.4; Overdetermined systems). In the following picture, there is no point at the intersection of the three lines corresponding to three equations:

There are more equations (3) than unknowns (2), so this is an overdetermined system of equations

The pseudoinverse solves the system in the least-square-error perspective: it finds the solution that minimizes the error. We will see this more explicitly with an example.

The pseudoinverse solves the system in the least square error perspective

### Example 2.

For this example we will consider this set of three equations with two unknowns:

$\begin{cases} -2x_1 + 2 = x_2 \\ 4x_1 + 8 = x_2 \\ -1x_1 + 2 = x_2 \end{cases} \Leftrightarrow \begin{cases} -2x_1 - x_2 = -2 \\ 4x_1 - x_2 = -8 \\ -1x_1 - x_2 = -2 \end{cases}$

Let's see their graphical representation:

import matplotlib.pyplot as plt
x1 = np.linspace(-5, 5, 1000)
x2_1 = -2*x1 + 2
x2_2 = 4*x1 + 8
x2_3 = -1*x1 + 2

plt.plot(x1, x2_1)
plt.plot(x1, x2_2)
plt.plot(x1, x2_3)
plt.xlim(-2., 1)
plt.ylim(1, 5)
plt.show()

Representation of our overdetermined system of equations

We actually see that there is no solution. Putting this into the matrix form we have:

$\bs{A}= \begin{bmatrix} -2 & -1 \\ 4 & -1 \\ -1 & -1 \end{bmatrix}$

$\bs{x}= \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$

and

$\bs{b}= \begin{bmatrix} -2 \\ -8 \\ -2 \end{bmatrix}$

So we have:

$\bs{Ax} = \bs{b} \Leftrightarrow \begin{bmatrix} -2 & -1 \\ 4 & -1 \\ -1 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -2 \\ -8 \\ -2 \end{bmatrix}$

We will now calculate the pseudoinverse of $\bs{A}$:

A = np.array([[-2, -1], [4, -1], [-1, -1]])
A_plus = np.linalg.pinv(A)
A_plus

array([[-0.11290323, 0.17741935, -0.06451613], [-0.37096774, -0.27419355, -0.35483871]])

Now that we have calculated the pseudoinverse of $\bs{A}$:

$\bs{A}^+= \begin{bmatrix} -0.1129 & 0.1774 & -0.0645 \\ -0.3710 & -0.2742 & -0.3548 \end{bmatrix}$

we can use it to find $\bs{x}$, knowing that:

$\bs{x}=\bs{A}^+\bs{b}$

with:

$\bs{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$

b = np.array([[-2], [-8], [-2]])
res = A_plus.dot(b)
res

array([[-1.06451613], [ 3.64516129]])

So we have

$\begin{aligned} \bs{A}^+\bs{b}&= \begin{bmatrix} -0.1129 & 0.1774 & -0.0645 \\ -0.3710 & -0.2742 & -0.3548 \end{bmatrix} \begin{bmatrix} -2 \\ -8 \\ -2 \end{bmatrix} = \begin{bmatrix} -1.06451613 \\ 3.64516129 \end{bmatrix} \end{aligned}$

In our two dimensions, the coordinates of $\bs{x}$ are

$\begin{bmatrix} -1.06451613 \\ 3.64516129 \end{bmatrix}$

Let's plot this point along with the equation lines:

plt.plot(x1, x2_1)
plt.plot(x1, x2_2)
plt.plot(x1, x2_3)
plt.xlim(-2., 1)
plt.ylim(1, 5)
plt.scatter(res[0], res[1])
plt.show()

The pseudoinverse can be used to find the point that minimizes the mean square error

Maybe you would have expected the point to be at the barycenter of the triangle (cf. Least square solution in the triangle center). This is not the case because the equations are not scaled the same way. Actually the point is at the intersection of the three symmedians of the triangle.

### Example 3.

This method can also be used to fit a line to a set of points. Let's take the following data points:

We want to fit a line to this set of data points

We have this set of $\bs{x}$ and $\bs{y}$, and we are looking for the line $y=mx+b$ that minimizes the error. The error can be evaluated as the sum of the differences between the fit and the actual data points.
We can represent the data points with a matrix equation:

$\bs{Ax} = \bs{b} \Leftrightarrow \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \\ 3 & 1 \\ 3 & 1 \\ 4 & 1 \end{bmatrix} \begin{bmatrix} m \\ b \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ 0 \\ 2 \\ 5 \\ 3 \end{bmatrix}$

Note that here the matrix $\bs{A}$ represents the values of the coefficients. The column of 1s corresponds to the intercept (without it the fit would be constrained to cross the origin). It gives the following set of equations:

$\begin{cases} 0m + 1b = 2 \\ 1m + 1b = 4 \\ 2m + 1b = 0 \\ 3m + 1b = 2 \\ 3m + 1b = 5 \\ 4m + 1b = 3 \end{cases}$

We have the set of equations $mx+b=y$. The ones are used to give back the intercept parameter. For instance, in the first equation, corresponding to the first point, we indeed have $x=0$ and $y=2$. This can be confusing because here the vector $\bs{x}$ corresponds to the coefficients. This is because the problem is different from the other examples: we are looking for the coefficients of a line and not for $x$ and $y$ unknowns. We kept this notation to indicate the similarity with the previous examples.

So we will construct these matrices and try to use the pseudoinverse to find the equation of the line minimizing the error (the difference between the line and the actual data points). Let's start with the creation of the matrices $\bs{A}$ and $\bs{b}$:

A = np.array([[0, 1], [1, 1], [2, 1], [3, 1], [3, 1], [4, 1]])
A

array([[0, 1], [1, 1], [2, 1], [3, 1], [3, 1], [4, 1]])

b = np.array([[2], [4], [0], [2], [5], [3]])
b

array([[2], [4], [0], [2], [5], [3]])

We can now calculate the pseudoinverse of $\bs{A}$:

A_plus = np.linalg.pinv(A)
A_plus

array([[ -2.00000000e-01, -1.07692308e-01, -1.53846154e-02, 7.69230769e-02, 7.69230769e-02, 1.69230769e-01], [ 6.00000000e-01, 4.00000000e-01, 2.00000000e-01, 4.16333634e-17, 4.16333634e-17, -2.00000000e-01]])

and apply it to the result to find the coefficients with the formula:

$\bs{x}=\bs{A}^+\bs{b}$

coefs = A_plus.dot(b)
coefs

array([[ 0.21538462], [ 2.2 ]])

These are the parameters of the fit. The slope is $m=0.21538462$ and the intercept is $b=2.2$. We will plot the data points and the regression line:

x = np.linspace(-1, 5, 1000)
y = coefs[0]*x + coefs[1]

plt.plot(A[:, 0], b, '*')
plt.plot(x, y)
plt.xlim(-1., 6)
plt.ylim(-0.5, 5.5)
plt.show()

We found the line minimizing the error!

If you are not sure about the result, just check it with another method. For instance, I double-checked with R:

a <- data.frame(x=c(0, 1, 2, 3, 3, 4), y=c(2, 4, 0, 2, 5, 3))
ggplot(data=a, aes(x=x, y=y)) + geom_point() + stat_smooth(method = "lm", col = "red") + xlim(-1, 5) + ylim(-1, 6)

outputs:

Just checking with another method

You can also do the fit with Numpy's polyfit() to check the parameters:

np.polyfit(A[:, 0], b, 1)

array([[ 0.21538462], [ 2.2 ]])

That's good! We have seen how to use the pseudoinverse in order to solve a simple regression problem.
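One more cross-check (my addition, not from the original post): NumPy's least-squares solver minimizes the sum of squared residuals directly and should return the same coefficients as the pseudoinverse:

```python
# Uses the same A and b as in example 3 above
coefs_lstsq, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(coefs_lstsq)  # array([[ 0.21538462], [ 2.2 ]]) -- same slope and intercept
```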
Let's now see a more realistic case.

### Example 4.

To see the process with more data points we can generate data (see this nice blog post for other methods of fitting). We will generate column vectors (see reshape() below) containing 100 points with random $x$ values and pseudo-random $y$ values. The function seed() from the Numpy random module is used to freeze the randomization and be able to reproduce the results:

np.random.seed(123)
x = 5*np.random.rand(100)
y = 2*x + 1 + np.random.randn(100)
x = x.reshape(100, 1)
y = y.reshape(100, 1)

We will create the matrix $\bs{A}$ from $\bs{x}$ by adding a column of ones exactly like we did in example 3.

A = np.hstack((x, np.ones(np.shape(x))))
A[:10]

array([[ 3.48234593, 1. ], [ 1.43069667, 1. ], [ 1.13425727, 1. ], [ 2.75657385, 1. ], [ 3.59734485, 1. ], [ 2.1155323 , 1. ], [ 4.90382099, 1. ], [ 3.42414869, 1. ], [ 2.40465951, 1. ], [ 1.96058759, 1. ]])

We can now find the pseudoinverse of $\bs{A}$ and calculate the coefficients of the regression line:

A_plus = np.linalg.pinv(A)
coefs = A_plus.dot(y)
coefs

array([[ 1.9461907 ], [ 1.16994745]])

We can finally draw the points and the regression line:

x_line = np.linspace(0, 5, 1000)
y_line = coefs[0]*x_line + coefs[1]

plt.plot(x, y, '*')
plt.plot(x_line, y_line)
plt.show()

Fitting a line to a set of data points

Looks good!

# Conclusion

You can see that the pseudoinverse can be very useful for this kind of problem! The series is not completely finished since we still have 3 chapters to cover. However, we have done the hardest part! We will now see two very light chapters before going to a nice example using all the linear algebra we have learned: the PCA.

Feel free to drop me an email or a comment. The syllabus of this series can be found in the introduction post. All the notebooks can be found on Github.

This content is part of a series following the chapter 2 on linear algebra from the Deep Learning Book by Goodfellow, I., Bengio, Y., and Courville, A. (2016). It aims to provide intuitions/drawings/python code on mathematical theories and is constructed as my understanding of these concepts. You can check the syllabus in the introduction post.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.906358003616333, "perplexity": 702.012241777298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.80/warc/CC-MAIN-20210507090724-20210507120724-00020.warc.gz"}
https://physics.stackexchange.com/questions/54208/internal-rotational-angular-momentum
# Internal/Rotational angular momentum

I have some difficulties understanding the relation between the internal and the rotational angular momentum of a rigid body, which is also known as König's theorem, so what physical intuition lies behind this equation? (I googled it but I can't find anything, so I will also appreciate any references if possible.)

edit:

$$H(O/Rg) = OG \times p + H^\star$$

(as vector quantities) $p$ is the linear momentum, $G$ is the center of mass, $Rg$ is a Galilean frame of reference, $H$ is the angular momentum with respect to a point $O$ fixed in that frame, and $H^\star$ is the angular momentum in the CM frame.

• Can you maybe give some more details? I.e., state König's theorem in your own words as far as possible, give maybe an example, and explain where you got stuck. – Bernhard Feb 17 '13 at 16:47
• Here is König's theorem according to Wikipedia, so I'm guessing OP is talking about splitting motion into (i) motion relative to CM, and (ii) motion of CM. See e.g. here. – Qmechanic Feb 17 '13 at 16:51
• @Guest, can you edit this in your question? Such that it is self-explanatory without the comments? – Bernhard Feb 17 '13 at 17:02
• @Qmechanic I don't want to guess here ;) – Bernhard Feb 17 '13 at 17:02
• sorry for mistakes in the previous comment – Guest Feb 17 '13 at 17:19

König's theorem is essentially a statement of conservation of angular momentum. If you consider the angular momenta of a system of particles, they had better add up to the same value no matter what frame you consider, provided that you are computing the angular momentum about the same fixed point in space.

In a fixed frame, a system of particles has some angular momentum, which you have denoted $H(O/Rg)$. In a frame of reference co-moving with the center of mass of that system of particles, the total angular momentum appears to be $H^*$. König's theorem states that the connection between these two frames that resolves the apparent discrepancy in observed angular momentum is to adjust for the angular momentum of the center of mass itself, as viewed from the Galilean reference frame. This angular momentum is $\mathbf{r}_{CM} \times \mathbf{p}_{CM}$, where $\mathbf{r}_{CM}$ is the position of the center of mass relative to the origin $O$ and $\mathbf{p}_{CM}$ is the momentum of the center of mass of that system of particles, again relative to $O$.

Let a system consist of particles with positions $\mathbf x_i$ as measured in some inertial frame, and let $\mathbf x'$ denote the position of the center of mass of the system. If we define the center of mass positions by

$$\mathbf x'_i = \mathbf x_i -\mathbf x'$$

then we have

$$\mathbf x_i = \mathbf x' + \mathbf x_i'$$

And the angular momentum of the system is

\begin{align} \mathbf L &= \sum_i \mathbf x_i\times(m_i \dot {\mathbf x}_i) \\ &= \sum_i m_i(\mathbf x'+\mathbf x_i')\times (\dot {\mathbf x}' + \dot{\mathbf x}_i') \\ &= \mathbf x'\times (M\dot{\mathbf x}')+\mathbf x'\times\left(\sum_i m_i\dot{\mathbf x}_i'\right)+ \left(\sum_i m_i{\mathbf x}_i'\right)\times \dot{\mathbf x}' +\sum_i \mathbf x_i'\times(m_i \dot{\mathbf x}_i') \\ \end{align}

where

$$M = \sum_i m_i$$

is the total mass.
The second and third terms vanish because $$\sum_i m_i \mathbf x_i' = \sum_im_i(\mathbf x_i - \mathbf x') = \sum_i m_i \mathbf x_i - M \mathbf x' = M\mathbf x' - M\mathbf x' = \mathbf 0$$ So we finally get $$\boxed{\mathbf L = \mathbf L' + \mathbf L_{\mathrm{cm}}}$$ where $\mathbf L'$ denotes the angular momentum of the center of mass and $\mathbf L_{\mathrm{cm}}$ denotes the angular momentum of the system about the center of mass; $$\mathbf L' = \mathbf x'\times (M\dot{\mathbf x}'), \qquad \mathbf L_{\mathrm{cm}} = \sum_i \mathbf x_i'\times(m_i \dot{\mathbf x}_i')$$
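If it helps intuition, the boxed identity is easy to verify numerically for a random system of particles (a quick sketch, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
m = rng.random(n)                 # masses
x = rng.standard_normal((n, 3))   # positions
v = rng.standard_normal((n, 3))   # velocities

M = m.sum()
x_cm = (m[:, None] * x).sum(axis=0) / M
v_cm = (m[:, None] * v).sum(axis=0) / M

L_total = np.sum(np.cross(x, m[:, None] * v), axis=0)
L_orbital = np.cross(x_cm, M * v_cm)   # L' of the center of mass itself
L_spin = np.sum(np.cross(x - x_cm, m[:, None] * (v - v_cm)), axis=0)

print(np.allclose(L_total, L_orbital + L_spin))  # True
```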
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992019534111023, "perplexity": 256.7276030386255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141196324.38/warc/CC-MAIN-20201129034021-20201129064021-00549.warc.gz"}
http://mathhelpforum.com/pre-calculus/103628-finding-inverse-function.html
# Math Help - Finding the inverse of this function

1. ## Finding the inverse of this function

f(x) = 3 + x^2 + tan(pi(x/2)) , -1 < x < 1

I found the inverse and it was: 3 + y^2 + tan(pi(y/2)), -1 < y < 1

Correct me if I was wrong. Then the question asks me to find f(f^-1(5))

And here's what I did.

f^-1 (5) = 5 = 3 + y^2 + tan(pi(y/2))

2 = y^2 + tan(pi(y/2))

f^-1 (5) = 1

f (f^-1 (5)) = f(1) = 3 + (1)^2 + tan(pi(1/2)) = 3 + 1 + 1 = 5

But the domain clearly states that x has to be between -1 and 1. What am I doing wrong?

2. Originally Posted by letzdiscuss

f(x) = 3 + x^2 + tan(pi(x/2)) , -1 < x < 1

I found the inverse and it was: 3 + y^2 + tan(pi(y/2)), -1 < y < 1

Mr F says: The inverse is the function y such that x = 3 + y^2 + tan(pi(y/2)) where -1 < y < 1.

Correct me if I was wrong. Then the question asks me to find f(f^-1(5))

And here's what I did.

f^-1 (5) = 5 = 3 + y^2 + tan(pi(y/2))

Mr F says: f^-1 (5) is the value of y such that 5 = 3 + y^2 + tan(pi(y/2)).

2 = y^2 + tan(pi(y/2))

f^-1 (5) = 1

Mr F says: This is clearly wrong. x = 5 does not give y = 1. In other words, y = 1 is NOT a solution to 2 = y^2 + tan(pi(y/2)). In fact, y = 0.64216.

f (f^-1 (5)) = f(1) = 3 + (1)^2 + tan(pi(1/2)) = 3 + 1 + 1 = 5

But the domain clearly states that x has to be between -1 and 1. What am I doing wrong?

The logic of your setting out leaves something to be desired. Be that as it may, none of the above is necessary, since by definition f(f^-1(a)) = a.

3. Thank you! I know what I did wrong, but how do I calculate what y is from 2 = y^2 + tan(pi(y/2))?

4. Originally Posted by letzdiscuss

Thank you! I know what I did wrong, but how do I calculate what y is from 2 = y^2 + tan(pi(y/2))?

An exact answer using algebra is not possible. You have to settle for a numerical approximation using technology. But as I said earlier, the solution to this equation is not required in this case.

5. Originally Posted by mr fantastic

An exact answer using algebra is not possible. You have to settle for a numerical approximation using technology. But as I said earlier, the solution to this equation is not required in this case.

Once again, thanks a lot!!!
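A quick sketch of the numerical approximation mentioned above, using SciPy's bracketing root finder (any comparable technology will do):

```python
import numpy as np
from scipy.optimize import brentq

# Solve 2 = y**2 + tan(pi*y/2) on (-1, 1); the left-minus-right function is
# continuous there and changes sign, which brackets the root.
g = lambda y: y**2 + np.tan(np.pi * y / 2) - 2

root = brentq(g, -0.99, 0.99)
print(root)  # ~0.64216, the value quoted in the thread
```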
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9123590588569641, "perplexity": 478.1363282758085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.cheenta.com/spiral-similarity-of-cyclic-quadrilaterals/
Let ABC be any triangle and P any point inside the triangle ABC. Let $PA_1, PB_1, PC_1$ be the perpendiculars dropped from P on the sides BC, CA and AB respectively. Then $A_1 B_1 C_1$ constitutes a pedal triangle.

Drop perpendiculars from P on $A_1 B_1, B_1 C_1, C_1 A_1$ at $C_2, A_2, B_2$ respectively. $A_2 B_2 C_2$ is known as the second pedal triangle. Finally, repeat the process to have the third pedal triangle $A_3 B_3 C_3$.

Proposition (easy angle chasing): The third pedal triangle is similar to the original triangle ($\Delta ABC \sim \Delta A_3 B_3 C_3$).

## Spiral Similarity

Notice that the quadrilateral $q_1 = P A_1 B C_1$ is cyclic (why?). Rotate $q_1$ by $180^\circ$ and dilate it by a factor of $\frac{1}{8}$. This spiral similarity sends the vertex B to $B_3$.

Exercise 1: Prove this using complex bashing or otherwise. (A numerical check of the proposition is sketched at the end of this note.)

Exercise 2: Normalize by recreating the process in an equilateral triangle.

Remark: It is interesting to note that $P A_1 B C_1$ appears to be spirally similar to $P B_2 C_1 A_2$ and $P C_3 A_2 B_3$, but that does not happen.
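A minimal numerical check of the proposition above; the triangle coordinates and the point P are arbitrary illustrative choices.

```python
import numpy as np

def foot(p, a, b):
    """Foot of the perpendicular from p onto line ab."""
    ab = b - a
    t = np.dot(p - a, ab) / np.dot(ab, ab)
    return a + t * ab

def pedal(p, tri):
    a, b, c = tri
    return [foot(p, b, c), foot(p, c, a), foot(p, a, b)]

def sorted_angles(tri):
    a, b, c = tri
    def ang(u, v, w):  # angle at vertex v
        d1, d2 = u - v, w - v
        return np.arccos(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2)))
    return sorted([ang(b, a, c), ang(a, b, c), ang(a, c, b)])

tri = [np.array([0.0, 0.0]), np.array([5.0, 0.0]), np.array([1.5, 4.0])]
P = np.array([2.0, 1.2])  # an interior point

t = tri
for _ in range(3):
    t = pedal(P, t)

print(np.allclose(sorted_angles(tri), sorted_angles(t)))  # True: third pedal ~ ABC
```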
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9751675128936768, "perplexity": 1834.2513379288391}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400193391.9/warc/CC-MAIN-20200920031425-20200920061425-00444.warc.gz"}
https://de.maplesoft.com/support/help/addons/view.aspx?path=Magma/Zero&L=G
Zero - Maple Help Magma Zero return a zero element of a magma with a zero. Calling Sequence Zero( m ) Parameters m - Array representing the Cayley table of a finite magma Description • The Zero(m) command returns a zero element of a magma m with a zero.  If the magma has no zero element, then an exception is raised.  In the case in which a magma has multiple zeroes, the first one found is returned. Examples > $\mathrm{with}\left(\mathrm{Magma}\right):$ > $m≔⟨⟨⟨1|2|3⟩,⟨2|2|2⟩,⟨3|2|1⟩⟩⟩$ ${m}{≔}\left[\begin{array}{ccc}{1}& {2}& {3}\\ {2}& {2}& {2}\\ {3}& {2}& {1}\end{array}\right]$ (1) > $\mathrm{Zero}\left(m\right)$ ${2}$ (2) Compatibility • The Magma[Zero] command was introduced in Maple 15.
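The semantics is easy to mirror outside Maple; below is a hypothetical Python sketch of the same zero-element search over a Cayley table (this is an illustration, not Maple's implementation): z is a zero of a finite magma iff z*x = x*z = z for every element x.

```python
def zero(cayley):
    """cayley[i][j] is the product of elements i+1 and j+1 (1-based values)."""
    n = len(cayley)
    for z in range(1, n + 1):
        if all(cayley[z - 1][x - 1] == z and cayley[x - 1][z - 1] == z
               for x in range(1, n + 1)):
            return z  # first zero found, as in the Maple command
    raise ValueError("magma has no zero element")

m = [[1, 2, 3],
     [2, 2, 2],
     [3, 2, 1]]
print(zero(m))  # 2, matching the Maple example above
```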
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.987726628780365, "perplexity": 1618.1171854121962}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103620968.33/warc/CC-MAIN-20220629024217-20220629054217-00276.warc.gz"}
http://onlineprediction.net/index.html?n=Main.LossFunction
# Loss Function

This is the key component of prediction games considered in competitive on-line prediction.

Formally, a loss function is any function $\lambda: \Omega\times\Gamma \to [-\infty,\infty]$, where $\Omega$ is the outcome space and $\Gamma$ is the prediction space. It measures the discrepancy between the prediction and the actual outcome.

All algorithms for competitive on-line prediction make some assumptions about the loss function. Among the most popular classes of loss functions are: nonnegative, bounded, perfectly mixable, continuous in the second argument, having a bounded derivative in the second argument.

The most popular loss functions in the case where $\Omega=\Gamma=[0,1]$ are:

• The square loss function, sometimes also called the Brier loss function: $\lambda(\omega,\gamma)=\lvert\omega-\gamma\rvert^2$.
• The absolute loss function: $\lambda(\omega,\gamma)=\lvert\omega-\gamma\rvert$.
• The Hellinger loss function: $\lambda(\omega,\gamma)=(1/2)((\sqrt{1-\omega} - \sqrt{1-\gamma})^2 + (\sqrt\omega - \sqrt\gamma)^2)$.
• The log loss, or logarithmic loss, or relative entropy loss function: $\lambda(\omega,\gamma)=(1-\omega)\ln\frac{1-\omega}{1-\gamma} + \omega\ln\frac{\omega}{\gamma}$. This loss function is unbounded.

The counting loss function used in the simple prediction game is the restriction of the absolute loss function (or the square loss function, or the Hellinger loss function) to $\Omega=\Gamma=\{0,1\}$.

Some algorithms consider the expected loss rather than the "pure" loss. Some algorithms are capable of competing under discounted loss, where at each step $t$ the loss on all the previous steps is discounted with a factor $\alpha_t$. One such algorithm is the Shortcut Defensive Forecasting, as shown here.

An important subclass of loss functions are the proper scoring rules. The square loss and the logarithmic loss are proper.

One can easily generalize some of the losses given above to the case of multi-dimensional outcomes.
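For concreteness, here is a small sketch of the four loss functions listed above (my own illustration; the endpoint convention 0 ln 0 = 0 is used for the log loss):

```python
import numpy as np

def square(w, g):   return (w - g) ** 2
def absolute(w, g): return abs(w - g)
def hellinger(w, g):
    return 0.5 * ((np.sqrt(1 - w) - np.sqrt(1 - g)) ** 2
                  + (np.sqrt(w) - np.sqrt(g)) ** 2)
def logloss(w, g):
    term = lambda x, y: 0.0 if x == 0 else x * np.log(x / y)
    return term(1 - w, 1 - g) + term(w, g)

w, g = 1, 0.9  # outcome and prediction in [0, 1]
print(square(w, g), absolute(w, g), hellinger(w, g), logloss(w, g))

# The log loss is the unbounded one: it blows up as the prediction
# approaches the wrong endpoint.
print(logloss(1, 1e-6))  # ~13.8
```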
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9974893927574158, "perplexity": 467.95923639185287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738609.73/warc/CC-MAIN-20200810042140-20200810072140-00534.warc.gz"}
https://dsp.stackexchange.com/questions/15909/correlation-between-real-and-imaginary-parts-of-a-fourier-transform-of-zero-mean
# Correlation between real and imaginary parts of a Fourier transform of zero mean Gaussian

From this previous post, the real and imaginary parts of the Fourier transform of a zero mean Gaussian are uncorrelated (and i.i.d. Gaussians). This somehow seems counter-intuitive. It seems that if the real part has a large intensity, then the imaginary part should be small. I read through the proof linked from the previous post, but I don't have a good intuitive feel from this proof. Can you provide an intuitive explanation of why this is true?

Recall the following properties of the Fourier transform:

• If $x(t)$ is an even function, then its Fourier transform $X(\omega)$ is purely real.
• If $x(t)$ is an odd function, then its Fourier transform $X(\omega)$ is purely imaginary.

Thus, we can think of the real and imaginary parts of the Fourier transform of a zero-mean Gaussian random process $x(t)$ as the Fourier transforms of two separate inputs $x_e(t)$ and $x_o(t)$: the components of the process that have even and odd symmetry, respectively. We split the process into these components as follows:

$$x_e(t) = \frac{x(t)+x(-t)}{2}$$

$$x_o(t) = \frac{x(t)-x(-t)}{2}$$

Note that $x(t) = x_e(t) + x_o(t)$. Moving through this step by step,

• Your observation was that $\text{Re}\{X(\omega)\}$ and $\text{Im}\{X(\omega)\}$ (the real and imaginary parts of the process's DFT) are uncorrelated.
• Using the Fourier transform properties mentioned above, we can deduce that $\text{Re}\{X(\omega)\} = X_e(\omega)$ and $\text{Im}\{X(\omega)\} = X_o(\omega)$; the real and imaginary parts of $X(\omega)$ are none other than the Fourier transforms of $x_e(t)$ and $x_o(t)$, respectively.
• Therefore, your observation is equivalent to saying that $X_e(\omega)$ and $X_o(\omega)$ are uncorrelated.
• Since the Fourier transform is a one-to-one mapping between the time and frequency domains, I posit that the lack of correlation between $X_e(\omega)$ and $X_o(\omega)$ would imply a lack of correlation between $x_e(t)$ and $x_o(t)$ as well.

What is the correlation between $x_e(t)$ and $x_o(t)$? Simple:

\begin{align} \mathbb{E}(x_e(t)x_o(t)) &= \mathbb{E}\left(\left(\frac{x(t)+x(-t)}{2}\right)\left(\frac{x(t)-x(-t)}{2}\right)\right) \\ &= \mathbb{E}\left(\frac{1}{4}\left(x^2(t) - x^2(-t)\right)\right) \\ &= \frac{1}{4} \left(\mathbb{E}(x^2(t)) - \mathbb{E}(x^2(-t))\right) \\ &= \frac{1}{4} \left(\sigma^2 - \sigma^2\right) \\ &= 0 \end{align}

As expected, the even and odd components are uncorrelated. So, to summarize, I would say the following:

• The real and imaginary components of a Fourier transform correspond to the individual Fourier transforms of the even and odd components of the input function.
• For a zero-mean Gaussian random process, these even and odd components are uncorrelated.
• Therefore, their Fourier transforms (the real and imaginary components that you asked about) are also uncorrelated.
• If $x(t)$ is Gaussian, then its even and odd components $x_e(t)$ and $x_o(t)$ are as well, due to the property that any weighted sum of Gaussian random variables is also Gaussian.
• If $x_e(t)$ and $x_o(t)$ are Gaussian random processes, then their Fourier transforms $X_e(\omega)$ and $X_o(\omega)$ are as well. This follows from the same property as the previous statement; if you look at the transform, you're computing a weighted sum of a bunch of Gaussian random variables.
• If $X_e(\omega)$ and $X_o(\omega)$ are Gaussian, and they are uncorrelated with one another (as described above), then they are also independent.
This is a property of the Gaussian distribution. • I think the first added bullet is a bit short. The sum of two independent Gaussian variables is Gaussian, but you're perhaps a bit quick in assuming x(-t) and x(t) are independent. (They are, because the autocorrelation function of x is a delta function) – MSalters May 2 '14 at 16:39 • @MSalters: I don't make that assumption. Check out the part above where I show that $x_e(t)$ and $x_o(t)$ are uncorrelated. I don't assume that $x(t)$ is independent of $x(-t)$ there. The expectation operator is linear, so the order of expectation and subtraction can be swapped. – Jason R May 2 '14 at 16:43 • @JasonR: The added bullet states that the sum of two Gaussians is Gaussian, therefore x(t) is Gaussian implies xe(t) is Gaussian. That makes sense: xe(t) = x(t) + x(-t), the sum of two Gaussians. But the real theorem is that the sum of two independent Gaussians is Gaussian. Thus to prove xe(t) is Gaussian that way means proving x(-t) is independent. – MSalters May 2 '14 at 16:52
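To make the uncorrelatedness claim concrete, here is a quick numerical sanity check (my own sketch, not part of the thread; the array sizes and bin index are arbitrary choices): draw many realizations of zero-mean white Gaussian noise, FFT each one, and estimate the correlation between the real and imaginary parts of a fixed frequency bin.

```python
import numpy as np

rng = np.random.default_rng(0)
trials, n = 20_000, 256
x = rng.standard_normal((trials, n))   # zero-mean white Gaussian realizations
X = np.fft.fft(x, axis=1)
k = 10                                 # arbitrary bin away from DC and Nyquist

re, im = X[:, k].real, X[:, k].imag
print(np.corrcoef(re, im)[0, 1])       # close to 0: uncorrelated
print(re.var(), im.var())              # both close to n/2: identically distributed
```

Both printed variances come out near $n/2$, matching the i.i.d. claim for bins away from DC and Nyquist (those two bins are purely real and behave differently).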
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9957892894744873, "perplexity": 204.28005077014586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655934052.75/warc/CC-MAIN-20200711161442-20200711191442-00431.warc.gz"}
http://www.contrib.andrew.cmu.edu/~ryanod/?p=1153
# §7.1: Dictator testing In Chapter 1.6 we described the BLR property testing algorithm: given query access to an unknown function $f : \{0,1\}^n \to \{0,1\}$, this algorithm queries $f$ on a few random inputs and approximately determines whether $f$ has the property of being linear over ${\mathbb F}_2$. The field of property testing for boolean functions is concerned with coming up with similar algorithms for other properties. In general, a “property” can be any collection $\mathcal{C}$ of $n$-bit boolean functions; it’s the same as the notion of “concept class” from learning theory. Indeed, before running an algorithm to try to learn an unknown $f \in \mathcal{C}$, one might first run a property testing algorithm to try to verify that indeed $f \in \mathcal{C}$. Let’s encapsulate the key aspects of the BLR linearity test with some definitions: Definition 1 An $r$-query function testing algorithm for boolean functions $f : \{0,1\}^n \to \{0,1\}$ is a randomized algorithm which: • chooses $r$ (or fewer) strings ${\boldsymbol{x}}^{(1)}, \dots, {\boldsymbol{x}}^{(r)} \in \{0,1\}^n$ according to some probability distribution; • queries $f({\boldsymbol{x}}^{(1)}), \dots, f({\boldsymbol{x}}^{(r)})$; • based on the outcomes, decides (deterministically) whether to “accept” $f$. Definition 2 Let $\mathcal{C}$ be a “property” of $n$-bit boolean functions; i.e., a collection of functions $\{0,1\}^n \to \{0,1\}$. We say a function testing algorithm is a local tester for $\mathcal{C}$ (with rejection rate $\lambda > 0$) if it satisfies the following: • If $f \in \mathcal{C}$ then the tester accepts with probability $1$. • For all $0 \leq \epsilon \leq 1$, if $\mathrm{dist}(f,\mathcal{C}) > \epsilon$ (in the sense of Definition 1.30) then the tester rejects $f$ with probability greater than $\lambda \cdot \epsilon$. Equivalently, if the tester accepts $f$ with probability at least $1 - \lambda \cdot \epsilon$ then $f$ is $\epsilon$-close to $\mathcal{C}$; i.e., $\exists g \in \mathcal{C}$ such that $\mathrm{dist}(f,g) \leq \epsilon$. By taking $\epsilon = 0$ in the above definition you see that any local tester gives a characterization of $\mathcal{C}$: a function is in $\mathcal{C}$ if and only if it is accepted by the tester with probability $1$. But a local tester furthermore gives a “robust” characterization: any function accepted with probability close to $1$ must be close to satisfying $\mathcal{C}$. Example 3 By Theorem 1.31, the BLR Test is a $3$-query local tester for the property $\mathcal{C} = \{f : {\mathbb F}_2^n \to {\mathbb F}_2 \mid f \text{ is linear}\}$ (with rejection rate $1$). Remark 4 To be pedantic, the BLR linearity test is actually a family of local testers, one for each value of $n$. This is a common scenario: we will usually be interested in testing natural families of properties $(\mathcal{C}_n)_{n \in {\mathbb N}^+}$, where $\mathcal{C}_n$ contains functions $\{0,1\}^n \to \{0,1\}$. In this case we need to describe a family of testers, one for each $n$. Generally, these testers will “act the same” for all values of $n$ and will have the property that the rejection rate $\lambda > 0$ is a universal constant independent of $n$. There are a number of standard variations of Definition 2 that one could consider. One variation is to allow for an adaptive testing algorithm, meaning that the algorithm can decide how to generate ${\boldsymbol{x}}^{(t)}$ based on the query outcomes $f({\boldsymbol{x}}^{(1)}), \dots, f({\boldsymbol{x}}^{(t-1)})$.
However in this book we will only consider non-adaptive testing. Another variation is to relax the requirement that $\epsilon$-far functions be rejected with probability $\Omega(\epsilon)$; one could allow for smaller rates such as $\Omega(\epsilon^2)$, or $\Omega(\epsilon/\log n)$. For simplicity, we will stick with the strict demand that the rejection probability be linear in $\epsilon$. Finally, the most common definition of property testing allows the number of queries to be a function $r(\epsilon)$ of $\epsilon$ but requires that any function $\epsilon$-far from $\mathcal{C}$ be rejected with probability at least $1/2$. This is easier to achieve than satisfying Definition 2; see the exercises. So far we have seen that the property of being linear over ${\mathbb F}_2$ is locally testable. We’ll now spend some time discussing local testability of an even simpler property, the property of being a dictator. In other words, we’ll consider the property ${\mathcal D} = \{f : \{0,1\}^n \to \{0,1\} \mid f(x) = x_i \text{ for some } i \in [n]\}.$ As we will see, dictatorship is in some ways the most important property to be able to test. We begin with a reminder: even though ${\mathcal D}$ is a subclass of the linear functions and we have a local tester for linearity, this doesn’t mean we automatically have a local tester for dictatorship. (This is in contrast to learning theory, where a learning algorithm for a concept class automatically works for any subclass.) The reason is that the non-dictator linear functions — i.e., $\chi_S$ for $|S| \neq 1$ — are at distance $\tfrac{1}{2}$ from ${\mathcal D}$ but are accepted by any linearity test with probability $1$. Still, we could use a linearity test as a first component of a test for dictatorship; this essentially reduces the problem to testing if an unknown linear function is a dictator. Historically, the first local testers for dictatorship [BGS95,PRS01] worked this way; after testing linearity, they chose ${\boldsymbol{x}}, \boldsymbol{y} \sim \{0,1\}^n$ uniformly and independently, set $\boldsymbol{z} = {\boldsymbol{x}} \wedge \boldsymbol{y}$ (the bitwise logical AND), and tested whether $f(\boldsymbol{z}) = f({\boldsymbol{x}}) \wedge f(\boldsymbol{y})$. The idea is that the only parity functions which satisfy this “AND test” with probability $1$ are the dictators (and the constant $0$). The analysis of the test takes a bit of work; see the exercises for details. Here we will describe a simpler dictatorship test. Recall we have already seen an important result which characterizes dictatorship: Arrow’s Theorem, from Chapter 2.5. Furthermore the robust version of Arrow’s Theorem (Corollary 2.59) involves evaluating a $3$-candidate Condorcet election under the impartial culture assumption, and this is the same as querying the election rule $f$ on $3$ correlated random inputs. This suggests a dictatorship testing component we call the “NAE Test”: NAE Test Given query access to $f : \{-1,1\}^n \to \{-1,1\}$: • Choose ${\boldsymbol{x}}, \boldsymbol{y}, \boldsymbol{z} \in \{-1,1\}^n$ by letting each triple $({\boldsymbol{x}}_i, \boldsymbol{y}_i, \boldsymbol{z}_i)$ be drawn independently and uniformly at random from among the $6$ triples satisfying the not-all-equal predicate $\mathrm{NAE}_3 : \{-1,1\}^3 \to \{0,1\}$. • Query $f$ at ${\boldsymbol{x}}$, $\boldsymbol{y}$, $\boldsymbol{z}$. • Accept if $\mathrm{NAE}_3(f({\boldsymbol{x}}), f(\boldsymbol{y}), f(\boldsymbol{z}))$ is satisfied.
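As a concrete illustration, here is a small Python sketch (my own, not from the book) that estimates the NAE Test's acceptance probability by Monte Carlo for a function given as a callable on $\{-1,1\}^n$ tuples; the trial count is an arbitrary choice.

```python
import itertools
import random

# The 6 triples in {-1,1}^3 satisfying the not-all-equal predicate NAE_3.
NAE_TRIPLES = [t for t in itertools.product((-1, 1), repeat=3)
               if not (t[0] == t[1] == t[2])]

def nae_acceptance(f, n, trials=200_000, seed=0):
    """Monte Carlo estimate of Pr[NAE Test accepts f]."""
    rng = random.Random(seed)
    accept = 0
    for _ in range(trials):
        cols = [rng.choice(NAE_TRIPLES) for _ in range(n)]  # one triple per coordinate
        x, y, z = (tuple(c[i] for c in cols) for i in range(3))
        if not (f(x) == f(y) == f(z)):  # accept iff NAE_3(f(x), f(y), f(z)) holds
            accept += 1
    return accept / trials

dictator = lambda v: v[0]                      # accepted with probability 1
majority = lambda v: 1 if sum(v) > 0 else -1   # accepted with probability < 1
print(nae_acceptance(dictator, 5), nae_acceptance(majority, 5))
```

Running it shows the dictator accepted every time, while majority is rejected a constant fraction of the time, consistent with Theorem 5 below.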
The NAE Test by itself is almost a $3$-query local tester for the property of being a dictator. Certainly if $f$ is a dictator then the NAE Test accepts with probability $1$. Furthermore, in Chapter 2.5 we proved: Theorem 5 (Restatement of Corollary 2.58) If the NAE Test accepts $f$ with probability $1-\epsilon$ then $\mathbf{W}^{1}[f] \geq 1 - \frac92\epsilon$ and hence $f$ is $O(\epsilon)$-close to $\pm \chi_i$ for some $i \in [n]$ by the FKN Theorem. There are two slightly unsatisfactory aspects to this theorem. First, it gives a local tester only for the property of being a dictator or a negated-dictator. Second, though the deduction $\mathbf{W}^{1}[f] \geq 1 - \frac92\epsilon$ requires only simple Fourier analysis, the conclusion that $f$ is close to a (negated-)dictator relies on the non-trivial FKN Theorem. Fortunately we can fix both issues simply by adding in the BLR Test: Theorem 6 Given query access to $f : \{-1,1\}^n \to \{-1,1\}$, perform both the BLR Test and the NAE Test. This is a $6$-query local tester for the property of being a dictator (with rejection rate $.1$). Proof: The first condition in Definition 2 is easy to check: If $f : \{-1,1\}^n \to \{-1,1\}$ is a dictator then both tests accept $f$ with probability $1$. To check the second condition, fix $0 \leq \epsilon \leq 1$ and assume the overall test accepts $f$ with probability at least $1 - .1 \epsilon$. Our goal is to show that $f$ is $\epsilon$-close to some dictator. Since the overall test accepts with probability at least $1-.1 \epsilon$, both the BLR and the NAE tests must individually accept $f$ with probability at least $1-.1\epsilon$. By the analysis of the NAE Test we deduce that $\mathbf{W}^{1}[f] \geq 1-\frac92 \cdot .1\epsilon = 1-.45\epsilon$. By the analysis of the BLR Test (Theorem 1.31) we deduce that $f$ is $.1\epsilon$-close to some parity function; i.e., $\widehat{f}(S^*) \geq 1-.2\epsilon$ for some $S^* \subseteq [n]$. Now if $|S^*| \neq 1$ we would have $1 = \sum_{k=0}^n \mathbf{W}^{k}[f] \geq (1 - .45\epsilon) + (1-.2\epsilon)^2 \geq 2 - .85 \epsilon > 1,$ a contradiction. Thus we must have $|S^*| = 1$ and hence $f$ is $.1\epsilon$-close to the dictator $\chi_{S^*}$, stronger than what we need. $\Box$ As you can see, we haven’t been particularly careful about obtaining the largest possible rejection rate. Instead, we will be more interested in using as few queries as possible (while maintaining some positive constant rejection rate). Indeed we now show a small trick which lets us reduce our $6$-query local tester for dictatorship down to a $3$-query one. This is best possible since dictatorship can’t be locally tested with $2$ queries (see the exercises). BLR+NAE Test Given query access to $f : \{-1,1\}^n \to \{-1,1\}$: • With probability $1/2$, perform the BLR Test on $f$. • With probability $1/2$, perform the NAE Test on $f$. Theorem 7 The BLR+NAE Test is a $3$-query local tester for the property of being a dictator (with rejection rate $.05$). Proof: The only observation we need to make is that if the BLR+NAE Test accepts with probability $1-.05\epsilon$ then both the BLR and the NAE tests individually must accept $f$ with probability at least $1-.1\epsilon$. The result then follows from the analysis of Theorem 6. $\Box$ Remark 8 In general, this trick lets us take the maximum of the query complexities when we combine tests, rather than the sum (at the expense of worsening the rejection rate).
Suppose we wish to combine $t = O(1)$ different testing algorithms, where the $i$th tester uses $r_i$ queries. We make an overall test which performs each subtest with probability $1/t$. This gives a $\max(r_1, \dots, r_t)$-query testing algorithm with the following guarantee: if the overall test accepts $f$ with probability $1-\frac{\lambda}{t} \epsilon$ then every subtest must accept $f$ with probability at least $1-\lambda\epsilon$. We can now explain one reason why dictatorship is a particularly important property to be able to test locally. Given the BLR Test for linear functions it still took us a little thought to find a local test for the subclass ${\mathcal D}$ of dictators. But given our dictatorship test, it’s easy to give a $3$-query local tester for any subclass of ${\mathcal D}$. (On a related note, in the exercises you are asked to give a $3$-query local tester for any affine subspace of the linear functions.) Theorem 9 Let $\mathcal{S}$ be any subclass of $n$-bit dictators; i.e., let $S \subseteq [n]$ and let $\mathcal{S} = \{\chi_i : \{0,1\}^n \to \{0,1\} \mid i \in S\}.$ Then there is a $3$-query local tester for $\mathcal{S}$ (with rejection rate $.01$). Proof: Let $1_S \in \{0,1\}^n$ denote the indicator string for the subset $S$. Given access to $f : \{0,1\}^n \to \{0,1\}$, the test is as follows: • With probability $1/2$, perform the BLR+NAE Test on $f$. • With probability $1/2$, apply the local correcting routine of Proposition 1.32 to $f$ on string $1_S$; accept if and only if the output value is $1$. This test always makes either $2$ or $3$ queries, and whenever $f \in \mathcal{S}$ it accepts with probability $1$. Now let $0 \leq \epsilon \leq 1$ and suppose the test accepts $f$ with probability at least $1 - \lambda\epsilon$, where $\lambda = .01$. Our goal will be to show that $f$ is $\epsilon$-close to a dictator $\chi_i$ with $i \in S$. Since the overall test accepts $f$ with probability at least $1-\lambda \epsilon$, the BLR+NAE Test must accept $f$ with probability at least $1-2\lambda\epsilon$. By Theorem 7 we may deduce that $f$ is $40\lambda\epsilon$-close to some dictator $\chi_i$. Our goal is to show that $i \in S$; this will complete the proof because $40\lambda\epsilon \leq \epsilon$ (by our choice of $\lambda = .01$). So suppose by way of contradiction that $i \not \in S$; i.e., $\chi_i(1_S) = 0$. Since $f$ is $40\lambda\epsilon$-close to the parity function $\chi_i$, Proposition 1.32 tells us that $\mathop{\bf Pr}[\text{locally correcting } f \text{ on input } 1_S \text{ produces the output } \chi_i(1_S) = 0] \geq 1 - 80\lambda\epsilon.$ On the other hand, since the overall test accepts $f$ with probability at least $1-\lambda \epsilon$, the second subtest must accept $f$ with probability at least $1-2\lambda\epsilon$. This means $\mathop{\bf Pr}[\text{locally correcting } f \text{ on input } 1_S \text{ produces the output } 0] \leq 2\lambda\epsilon.$ But this is a contradiction, since $2\lambda\epsilon < 1-80\lambda\epsilon$ for all $0 \leq \epsilon \leq 1$ (by our choice of $\lambda=.01$). Hence $i \in S$ as desired. $\Box$ ### 11 comments to §7.1: Dictator testing • Just a couple of really minor typos: At the beginning of the proof of theorem 9 – I think you are missing a “the” – “Let $1_S$ denote indicator string for the subset $S$” At the third paragraph above the definition of the BLR+NAE test – the comma is before the parenthesis “(This is in contrast to learning theory, where a learning algorithm for a concept class automatically works for any subclass.)” • Thanks Tom!
Actually, I didn’t understand what the typo was in the second part of your comment. • It’s probably just me being over-pedantic, but I think: “works for any subclass.)” should be: “works for any subclass).” • Chicago Manual of Style, #6.96, “A period precedes the closing parenthesis if the entire sentence is in parentheses; otherwise it follows.” Thanks though, keep the suggestions coming! • Deepak After Remark 4, do you mean to say “by” in “This is easier to achieve than satisfying by Definition 2”? • Thanks, fixed. • While we’re on the topic of minor typos: * just before Definition 1, “give encapsulate” * this may be a matter of style, but I think “An r-query” rolls off the tongue easier than “A r-query” • Thanks, fixed both! • Amos W. In the definition of the NAE Test, you might consider changing “drawn independent” to “drawn independently”. • You’re right, fixed! Thanks Amos.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9417126774787903, "perplexity": 398.06966208633946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982291015.10/warc/CC-MAIN-20160823195811-00072-ip-10-153-172-175.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/20362/in-what-category-is-the-sum-of-real-numbers-a-coproduct/20365
# In what category is the sum of real numbers a coproduct? (if any?) I understand that in the natural numbers, the sum of two numbers can be readily thought of as the disjoint union of two finite sets. John Baez even spent a week talking about how you can extend this idea to thinking about the integers here: TWF 102. This led into a discussion of the homotopy groups of spheres. But then you have to pass to a certain colimit to get the rationals, and take a certain completion to get the reals. It all gets very complicated. One place where we rely on a correspondence between a sum of real numbers and a certain coproduct is in measure theory-- perhaps in analogy to the relation between finite sets and natural numbers, we should think of some measure space as the categorification of the real numbers. But this sounds unpromising-- what space would be in any sense a canonical categorification? Moreover, what I was really hoping for originally was a precise sense in which the $\sigma$-additivity of a measure states that it preserves coproducts or something, so I was hoping there might be more to it than sigma-algebras. - In Baez's categorification of the integers, sum isn't coproduct; it's just a monoidal operation. So you need to be asking a slightly different question. –  Qiaochu Yuan Apr 5 '10 at 5:08 There's also something of a problem with thinking of sigma-additivity as preserving coproducts: the natural category structure to place on a measurable space is the one where objects are measurable subsets and morphisms are inclusions. Then coproducts are unions; I assume this is what you were talking about. However, sigma-additivity only applies to disjoint unions. –  Qiaochu Yuan Apr 5 '10 at 5:28 Aren't coproducts of sets already disjoint unions? The union would be a pushout, right? –  Tim Campion Apr 5 '10 at 21:19 ## 2 Answers None (except trivially). It's an elementary (though maybe not obvious) lemma that if $X$ and $Y$ are objects of a category and their coproduct $X + Y$ is initial, then $X$ and $Y$ are both initial. Suppose there is some category whose objects are the real numbers, and such that finite coproducts of objects exist and are the same as finite sums of real numbers. In particular (taking the empty sum/coproduct), the real number $0$ is an initial object. Now for any real number $x$ we have $x + (-x) = 0$, so by the lemma, $x$ is initial. So every object is initial, so all objects of the category are uniquely isomorphic, so the category is equivalent to the terminal category 1. If you just want non-negative real numbers then this argument doesn't work, and I don't immediately see an argument to take its place. But I don't think it's too likely that an interesting such category exists. I wonder if it would be more fruitful to ask a slightly different question. Product and coproduct aren't the only interesting binary operations on a category. You can equip a category with binary operations (as in the concept of monoidal category). Sometimes this is a better thing to do. For example, there is on the one hand the concept of distributive category, which is something like a rig (=semiring) in that it has finite products $\times$ and finite coproducts $+$, with one distributing over the other. On the other hand, there is the concept of rig category, which is a category equipped with binary operations $\otimes$ and $\oplus$, with one distributing over the other. Distributive categories are examples of rig categories. 
Any rig, seen as a category with no morphisms other than identities, is a rig category. Any ordered rig can be regarded as a rig category (just as any poset can be regarded as a category): e.g. $[0, \infty]$ is one, with its usual ordering, $\otimes = \times$, and $\oplus = +$. - You didn't write this but it is easy to come up with examples of rig categories where 0 is not initial. In fact there are non-trivial bimonoidal categories (bipermutative in fact) which has $\mathbb{Z}$ as the underlying object-set with its usual ring structure induced from $\otimes$ and $\oplus$. –  Thomas Kragh Apr 5 '10 at 10:11 Right; e.g. any nontrivial rig regarded as a discrete rig category gives an example where 0 is not initial. On the second point (re $\mathbb{Z}$): there's also a nontrivial rig category whose underlying rig of isomorphism classes of objects is $\mathbb{Z}[i]$. –  Tom Leinster Apr 5 '10 at 15:06 I'm probably just slow here, but I don't see why the arrow $0 \rightarrow X$ should be unique without making some additional assumptions... That said, point taken. Any category like the one I asked for is not going to be nice, not in any nice sense going to be the reals, rationals, postive reals, ... Thanks. –  Tim Campion Apr 5 '10 at 21:25 Don't worry about being slow... it's just something to wrap one's head around. For any finite family of numbers, you can take the sum. In particular, you can do it for the empty family; and the sum of the empty family is zero. (You might call that a "convention", but it's the only sensible one.) In the same way, for any finite family of objects of a category, you can (if the category allows) take the coproduct. In particular, the coproduct of the empty family is the initial object. More precisely, the empty family has a coproduct iff an initial object exists, and in that case they coincide. –  Tom Leinster Apr 5 '10 at 23:09 Isomorphism classes of finitely generated right Hilbert modules over a II_1 factor are in a bijective correspondence with the nonnegative reals. The correspondence sends every module to its dimension. Moreover, the dimension function is additive with respect to the coproduct of modules. I believe you can obtain all reals using some form of supersymmetry (Quillen construction?), but then the addition of reals will no longer correspond to the coproduct of objects. - +1. There were a couple of related questions by David Corfield: mathoverflow.net/questions/2100 and mathoverflow.net/questions/5190 –  Tom Leinster Apr 5 '10 at 15:08
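For readers who want the "elementary (though maybe not obvious) lemma" from the first answer spelled out, here is one standard write-up (my own, not from the thread):

```latex
\begin{lemma}
If the coproduct $X + Y$ is an initial object, then $X$ and $Y$ are initial.
\end{lemma}
\begin{proof}
Write $\iota_X \colon X \to X+Y$ for the coprojection and let $Z$ be any object.
Initiality of $X+Y$ gives a map $u \colon X+Y \to Z$, so
$u \circ \iota_X \colon X \to Z$ shows existence.
For uniqueness, take $f, g \colon X \to Z$ and set $h = u \circ \iota_Y$.
The coproduct property yields maps $[f,h], [g,h] \colon X+Y \to Z$;
since $X+Y$ is initial these coincide, and precomposing with $\iota_X$
gives $f = [f,h]\circ\iota_X = [g,h]\circ\iota_X = g$.
The argument for $Y$ is symmetric.
\end{proof}
```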
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8488504886627197, "perplexity": 325.7241683650897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065341.3/warc/CC-MAIN-20150827025425-00104-ip-10-171-96-226.ec2.internal.warc.gz"}
http://www.wikihow.com/Integrate-Using-the-Riemann-Zeta-Function
# wikiHow to Integrate Using the Riemann Zeta Function The Riemann zeta function is one of the most important functions in mathematics. It was first introduced by Euler and its properties were further explored in Riemann's 1859 paper, leading to the famed Riemann hypothesis concerning the distribution of prime numbers. While the function and its generalizations are of critical importance to number theorists, it is also useful in applications such as thermodynamics and quantum field theory. In this article, we show how the integral form of the Riemann zeta function can be used to evaluate integrals inaccessible through the standard methods. ## Preliminaries • The Riemann zeta function $\zeta(s)$ is a function of a complex variable $s$ initially defined as the infinite sum given below, convergent for all $\operatorname{Re}(s)>1.$ • $\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^{s}}=\frac{1}{1^{s}}+\frac{1}{2^{s}}+\frac{1}{3^{s}}+\cdots$ • The Riemann zeta function is also defined by the integral given below, where $\Gamma(s)$ is the Gamma function, also convergent for all $\operatorname{Re}(s)>1.$ This result is derived in part 1 of this article. • $\zeta(s)=\frac{1}{\Gamma(s)}\int_{0}^{\infty}\frac{x^{s-1}}{e^{x}-1}\,\mathrm{d}x$ • This function satisfies Riemann's functional equation that analytically continues the function to all complex numbers except for the pole at $s=1.$ • $\zeta(s)=2^{s}\pi^{s-1}\sin\frac{\pi s}{2}\,\Gamma(1-s)\,\zeta(1-s)$ • In part 3, we will also be using the related Dirichlet eta function to evaluate integrals. The Dirichlet eta function is the Riemann zeta function with alternating signs. • $\eta(s)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^{s}}=\frac{1}{1^{s}}-\frac{1}{2^{s}}+\frac{1}{3^{s}}-\cdots$ • $\eta(s)=\frac{1}{\Gamma(s)}\int_{0}^{\infty}\frac{x^{s-1}}{e^{x}+1}\,\mathrm{d}x$ • $\eta(s)=\frac{2^{s-1}-1}{2^{s-1}}\zeta(s)$ ### Part 1 Derivation of the Integral Form 1. Begin with the integral. The identity involving the integral, the Gamma function, and the Riemann zeta function is pretty straightforward to derive. • $\int_{0}^{\infty}\frac{x^{s-1}}{e^{x}-1}\,\mathrm{d}x$ 2. Multiply the top and bottom by $e^{-x}$. This allows us to rewrite the integral in terms of a power series $\frac{1}{1-e^{-x}}=\sum_{k=0}^{\infty}e^{-kx}.$ • $\int_{0}^{\infty}\frac{x^{s-1}e^{-x}}{1-e^{-x}}\,\mathrm{d}x=\sum_{k=0}^{\infty}\int_{0}^{\infty}x^{s-1}e^{-(k+1)x}\,\mathrm{d}x$ 3. Make the u-sub $u=(k+1)x$. Then we see that the integral is simply the Gamma function after pulling the terms with $k$ out of the integral. Furthermore, the sum is simply the Riemann zeta function. • $\sum_{k=0}^{\infty}\int_{0}^{\infty}\frac{u^{s-1}}{(k+1)^{s-1}}\frac{e^{-u}}{k+1}\,\mathrm{d}u=\left(\sum_{k=0}^{\infty}\frac{1}{(k+1)^{s}}\right)\Gamma(s)=\Gamma(s)\zeta(s)$ • This is a result that can be used to evaluate any types of integrals of this form. Furthermore, we can introduce additional parameters and differentiate under the integral to yield even more results, which we explore in part 2. 4. Verify the integrals below.
These integrals can directly be evaluated using the Riemann zeta function and should be considered trivial. • $\int_{0}^{\infty}\frac{x}{e^{x}-1}\,\mathrm{d}x=\frac{\pi^{2}}{6}$ • $\int_{0}^{\infty}\frac{x^{2}}{e^{x}-1}\,\mathrm{d}x=2\zeta(3)$ • $\int_{0}^{\infty}\frac{x^{2}\sqrt{x}}{e^{x}-1}\,\mathrm{d}x=\frac{15\sqrt{\pi}}{8}\zeta(7/2)$ • $\int_{0}^{\infty}\frac{x^{3}}{e^{x}-1}\,\mathrm{d}x=\frac{\pi^{4}}{15}$ ### Part 2 Differentiating Under the Integral 1. Consider the integral below. We introduce an additional constant $\alpha,$ allowing us to determine some more integrals. Make sure that you verify this integral using u-substitution. • $\int_{0}^{\infty}\frac{x^{s-1}}{e^{\alpha x}-1}\,\mathrm{d}x=\frac{\Gamma(s)\zeta(s)}{\alpha^{s}}$ 2. Differentiate under the integral with respect to $\alpha$. If you are familiar with differentiating under the integral, you will recognize that doing so will negate both the integral and the result, so the answers will stay positive no matter how many times we differentiate. • After simplifying, we get these results. • $\int_{0}^{\infty}\frac{x^{s}e^{\alpha x}}{(e^{\alpha x}-1)^{2}}\,\mathrm{d}x=\frac{\Gamma(s+1)\zeta(s)}{\alpha^{s+1}}$ • $\int_{0}^{\infty}\frac{x^{s+1}e^{\alpha x}(1+e^{\alpha x})}{(e^{\alpha x}-1)^{3}}\,\mathrm{d}x=\frac{\Gamma(s+2)\zeta(s)}{\alpha^{s+2}}$ • For each of these integrals, we can set $\alpha=1$ to obtain these results. • $\int_{0}^{\infty}\frac{x^{s}e^{x}}{(e^{x}-1)^{2}}\,\mathrm{d}x=\Gamma(s+1)\zeta(s)$ • $\int_{0}^{\infty}\frac{x^{s+1}e^{x}(1+e^{x})}{(e^{x}-1)^{3}}\,\mathrm{d}x=\Gamma(s+2)\zeta(s)$ • We can also combine these results to obtain another integral. This identity can be verified by writing the right side in integral form and multiplying the top and bottom of the right integral by $(e^{x}-1).$ • $\int_{0}^{\infty}\frac{x^{s}}{(e^{x}-1)^{2}}\,\mathrm{d}x=\Gamma(s+1)(\zeta(s)-\zeta(s+1))$ 3. Differentiate with respect to $s$. This introduces some more integrals involving logarithms. However, it also introduces derivatives of the Gamma function and the Riemann zeta function. Obviously, more of these can be found simply by repeatedly differentiating with respect to either $\alpha$ or $s.$ • $\int_{0}^{\infty}\frac{x^{s-1}\ln x}{e^{x}-1}\,\mathrm{d}x=\Gamma(s)\zeta^{\prime}(s)+\Gamma^{\prime}(s)\zeta(s)$ 4. Verify the integrals below. Using the techniques discussed in this section, as well as u-substitution, we can evaluate these classes of integrals. • $\int_{0}^{\infty}\frac{x^{3}}{e^{4x}-1}\,\mathrm{d}x=\frac{\pi^{4}}{2^{8}\cdot 3\cdot 5}$ • $\int_{0}^{\infty}\frac{x^{2}}{e^{4x^{2}}-1}\,\mathrm{d}x=\frac{\sqrt{\pi}}{32}\zeta(3/2)$ • $\int_{0}^{\infty}\frac{x^{3}e^{2x}}{(e^{2x}-1)^{2}}\,\mathrm{d}x=\frac{3}{8}\zeta(3)$ • $\int_{0}^{\infty}\frac{x^{2}\sqrt[3]{x}}{(e^{x}-1)^{2}}\,\mathrm{d}x=\frac{28}{27}\Gamma\left(\frac{1}{3}\right)\left(\zeta\left(\frac{7}{3}\right)-\zeta\left(\frac{10}{3}\right)\right)$ ### Part 3 Dirichlet Eta Function
1. Consider the integral below. In this section, we consider a related function to the Riemann zeta function whose terms alternate signs: the Dirichlet eta function. This relation can be derived in a similar manner as before. It converges whenever $\operatorname{Re}(s)>0.$ • $\int_{0}^{\infty}\frac{x^{s-1}}{e^{x}+1}\,\mathrm{d}x=\Gamma(s)\eta(s)=\Gamma(s)\frac{2^{s-1}-1}{2^{s-1}}\zeta(s)$ 2. Insert a parameter $\alpha$ in the exponent and differentiate under the integral. The same differentiation under the integral that was performed on the Riemann zeta function can be done on the Dirichlet eta function as well. We write some of these integrals below, where we set $\alpha=1$ at a convenient time. • $\int_{0}^{\infty}\frac{x^{s}e^{x}}{(e^{x}+1)^{2}}\,\mathrm{d}x=\Gamma(s+1)\eta(s)$ • $\int_{0}^{\infty}\frac{x^{s}}{(e^{x}+1)^{2}}\,\mathrm{d}x=\Gamma(s+1)(\eta(s+1)-\eta(s))$ 3. Verify the integrals below. It turns out that the Dirichlet eta function is also useful in certain fields of physics, which is why we go over them in this article. • $\int_{0}^{\infty}\frac{x^{2}}{e^{4x}+1}\,\mathrm{d}x=\frac{3}{128}\zeta(3)$ • $\int_{0}^{\infty}\frac{xe^{x}}{(e^{x}+1)^{2}}\,\mathrm{d}x=\ln 2$ • $\int_{0}^{\infty}\frac{x^{2}e^{3x}}{(e^{3x}+1)^{2}}\,\mathrm{d}x=\frac{\pi^{2}}{162}$
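These identities are easy to sanity-check numerically. The sketch below (my addition, not part of the article) verifies one integral from Part 1 and one from Part 3 using mpmath's arbitrary-precision quadrature:

```python
from mpmath import mp, quad, inf, exp, pi, zeta

mp.dps = 25  # working precision in decimal places

# Part 1: integral of x^3/(e^x - 1) over (0, inf) should equal pi^4/15.
print(quad(lambda x: x**3 / (exp(x) - 1), [0, inf]), pi**4 / 15)

# Part 3: integral of x^2/(e^(4x) + 1) should equal 3*zeta(3)/128.
print(quad(lambda x: x**2 / (exp(4*x) + 1), [0, inf]), 3 * zeta(3) / 128)
```

Each line prints two matching values, which is a quick way to catch sign or constant errors when deriving new members of these families.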
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 50, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9963037371635437, "perplexity": 191.70189772380508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424910.80/warc/CC-MAIN-20170724202315-20170724222315-00045.warc.gz"}
https://math.stackexchange.com/questions/650739/using-field-axioms-for-a-simple-proof
# Using field axioms for a simple proof Question: If $F$ is a field, and $a, b, c \in F$, then prove that if $a+b = a+c$, then $b=c$ by using the axioms for a field. Relevant information: Field Axioms (for $a, b, c \in F$): $a+b = b+a$ (Commutativity) $a+(b+c) = (a+b)+c$ (Associativity) $a+0 = a$ (Identity element exists) $a+(-a) = 0$ (Inverse exists) Multiplication: $ab = ba$ (Commutativity) $a(bc) = (ab)c$ (Associativity) $a1 = a$ (Identity element exists) $aa^{-1} = 1$ (Inverse exists) Distributive Property: $a(b+c) = ab + ac$ Attempt at solution: I'm not sure where I can begin. Is it ok to start with adding the inverse of a to both sides, as in the following? $(a+b)+(-a) = (a+c)+(-a)$ (Justification?) $(b+a)+(-a) = (c+a)+(-a)$ (Commutativity) $b+(a+(-a)) = c+(a+(-a))$ (Associativity) $b+0 = c+0$ (Definition of additive inverse) $b = c$ (Definition of additive identity) I'm wondering about my very first step. Specifically, the axioms don't mention anything about doing something to both sides of an equation simultaneously. Is there some other axiom I can use to justify this step? This is Exercise 1, part b in Section 1 on page 2 of Halmos, Finite Dimensional Vector Spaces (reading book for fun--this is not homework (probably too easy to be a homework problem anyway!)). In part a, I proved that $0+a = a$, in case that is somehow helpful in this problem. Thanks! • Just a minor note: you may want to write $1/a$ as $a^{-1}$ because you would want to define "/" later using the inverses and it wouldn't be good to have syntax with ambiguous parsing. – user21820 Jan 25 '14 at 11:33 • That's a good idea--I made the change. – Mike Bell Jan 25 '14 at 11:44 Martín-Blas Pérez Pinilla suggests that "=" can be considered a logical symbol obeying logical axioms. While I agree that it fundamentally is so, I would like to note that it is possible to consider it an equivalence relation obeying 'internal' field axioms, because for example the rational numbers can be taken as equivalence classes of a certain set of pairs of integers, and so it is not quite right to consider the equality between these rationals as a logical equality. Also, Ittay made a mistake where he used an unstated axiom that allows substitution. What you need, either way, is something equivalent to the following for any field $F$: $a=a$ for any $a \in F$ [reflexivity of =] $a=b \Rightarrow b=a$ for any $a,b \in F$ [commutativity of =] $a=b \wedge b=c \Rightarrow a=c$ for any $a,b,c \in F$ [transitivity of =] (These describe "=" as an equivalence relation on $F$) $a=b \Rightarrow P(a)=P(b)$ for any $a,b \in F$ and predicate $P$ [substitution] (This describes substitution, which can be used to replace separate axioms governing how "=" and the field operations interact. Ittay used this in one of his steps.) 
These allow us to "do the same thing to both sides", for example: For any $a,b,c \in F$ such that $a=b$, Let $d=a+c$ [closure under +] $a+c=a+c$ [transitivity of =; $a+c=d=a+c$] $a+c=b+c$ [substitution; where the predicate is given by $P(x) \equiv (a+c=x+c)$] Note that to prove that something is a field, we will have to prove the substitution axiom, which boils down to proving the following equivalent set of axioms: $a=b \Rightarrow a+c=b+c$ for any $a,b,c \in F$ $a=b \Rightarrow ac=bc$ for any $a,b,c \in F$ The original problem can then be proven as follows: For any $a,b,c \in F$ such that $a+b=a+c$, $b = 0+b = (-a+a)+b = (-a)+(a+b) = (-a)+(a+c) = (-a+a)+c = 0+c = c$ • Hurkyl had given a comment indicating that we could use substitution of an argument of $+$ considered as a 2-argument function. As I noted, fields constructed through equivalence relations will still require one to prove the substitution rule before using, so it amounts to the same two more basic axioms that I gave. – user21820 Jan 25 '14 at 11:48 The first step is justified by the existence of $-a$. No further axiom is required in order to deduce that $(a+b)+(-a)=(a+c)+(-a)$. To see that, just write $a+b=y=a+c$ (after all, it is given that $a+b=a+c$). Now, one of the pieces of data defining a field is the addition function: $(u,v)\mapsto u+v$. Applying it to $u=y$ and $v=(-a)$, yields the element $y+(-a)$. But, since $y=a+b$, $y+(-a)=(a+b)+(-a)$. Similarly, since $y=a+c$, $y+(-a)=(a+c)+(-a)$. By transitivity of equality, it follows that $(a+b)+(-a)=(a+c)+(-a)$. Note that it would be a lot easier to add $(-a)$ on the left rather than on the right. It will save you using commutativity. • The step where you substitute $(a+b)$ into $y$ requires another axiom such as substitution. (See my answer.) With substitution or the alternative, you can just do it in one line. And that very axiom is the most important part of the problem! – user21820 Jan 25 '14 at 11:19 • Yes, you can put $(-a)$ on the left. But the axiom says $a + (-a) = 0$, not $(-a) + a = 0$. So you have to use Commutativity anyway! – TonyK Jan 25 '14 at 11:26 "Specifically, the axioms don't mention anything about doing something to both sides of an equation simultaneously." Because "=" is a logical axiom. If you have two equal things and do the same with both things the results are equal. Search "first order logic with identity". • See also en.wikipedia.org/wiki/…, for when we don't have two equal things but have two things that behave the same way under some operations, and we still want to have 'substitution'. – user21820 Jan 25 '14 at 11:29
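Since the thread is about which axioms license each step, it may be worth noting that the whole argument is machine-checkable. A minimal sketch in Lean 4 with Mathlib (my addition, not from the thread; `add_left_cancel` is Mathlib's packaged cancellation lemma, and exact lemma names can drift between Mathlib versions):

```lean
import Mathlib.Tactic

-- Cancellation in any field (in fact, in any additive cancel monoid):
-- from a + b = a + c, conclude b = c.
example {F : Type*} [Field F] (a b c : F) (h : a + b = a + c) : b = c :=
  add_left_cancel h
```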
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9481188058853149, "perplexity": 380.9813622254612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575596.77/warc/CC-MAIN-20190922160018-20190922182018-00029.warc.gz"}
https://stats.stackexchange.com/questions/264485/how-to-detect-multivariate-binomial-distributions/264487
# How to detect multivariate bimodal distributions? I tried Hartigan's dip test, and it works well for univariate distributions. However, when I tried taking each variable (dimension) and applying Hartigan's dip test to it (assuming that if the distribution is bimodal along one dimension/variable, the whole distribution is bimodal), it did not work. As an example, this is how my bimodal distribution looks like. Can you please help me with any other way in which I can detect multimodality in my data?
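One approach that does not reduce to per-axis dip tests is to fit Gaussian mixtures with increasing numbers of components and compare BIC; a clear win for $k \ge 2$ suggests multimodality even when every one-dimensional projection looks unimodal. A minimal sketch with scikit-learn (the synthetic two-blob data below is a hypothetical stand-in for your dataset):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for the data in question: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(0, 1, size=(500, 2)),
               rng.normal(4, 1, size=(500, 2))])

# Lower BIC is better; a clearly lower value at k >= 2 indicates bimodality.
for k in (1, 2, 3):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(k, gmm.bic(X))
```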
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9028065800666809, "perplexity": 1093.8173034589724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00124.warc.gz"}
https://math.stackexchange.com/questions/2390879/prove-that-if-integer-a-0-is-not-a-square-then-a-neq-fracb2c2
# Prove that if integer $a > 0$ is not a square, then $a \neq \frac{b^2}{c^2}$ for non-zero integers b,c Prove that if integer $a > 0$ is not a square, then $a \neq \frac{b^2}{c^2}$ for non-zero integers b,c. I would like to know if the proposed proof below is valid. Assume that $ac^2 = b^2$. With $a > 0$ we have $ac^2 > 0$. $b^2$ is a square and can be written with the following factorization: $b^2 = p_1^{\alpha_1} \cdot p_2^{\alpha_2} \ldots p_n^{\alpha_n}$ where ${\alpha_1,\alpha_2,\ldots,\alpha_n}$ are all even integers and ${p_1,p_2,\ldots,p_n}$ are prime numbers. Now $c^2$ and $a$ can be written with the following factorizations: $c^2 = r_1^{2\beta_1}\cdot r_2^{2\beta_2} \ldots r_n^{2\beta_n}$ $a = r_1^{\gamma_1}\cdot r_2^{\gamma_2} \ldots r_n^{\gamma_n}$ where ${r_1,r_2,\ldots,r_n}$ are prime numbers, and ${\beta_1,\beta_2,\ldots,\beta_n}$ and ${\gamma_1,\gamma_2,\ldots,\gamma_n}$ are positive integers. Then for any prime $s$ in the factorization of $b^2$ we have that ${s\mid r_1^{\gamma_1+2\beta_1}\cdot r_2^{\gamma_2+2\beta_2} \ldots r_n^{\gamma_n+2\beta_n}}$ which implies there is a unique $r_i$ such that ${s\mid r_i^{\gamma_i+2\beta_i}}$. It also implies $s\mid r_i$ and $s= r_i$ since $s$ and $r_i$ are both prime. (Believe last implication is correct but needs confirmation). By the uniqueness of the factorization of $b^2$, $s$ has the power $t$ so that ${s^t = r_i^{\gamma_i+2\beta_i}}$ and $t = \gamma_i+2\beta_i$. $t$ is odd if $\gamma_i$ is odd but $t$ is even if $\gamma_i$ is even. So some of the powers in ${b^2 = p_1^{\alpha_1} \cdot p_2^{\alpha_2} \ldots p_n^{\alpha_n}}$ are odd. This is a contradiction and ${ac^2 \neq b^2}$. • Why do a and c have the same prime factors? And why must some of the powers of $b^2$ must be odd? And you never used a at all, nor that it's not square. Those part I don't get. Your idea is correct. You do find which factors are even or odd. – fleablood Aug 12 '17 at 3:34 • Thanks for the comments. You are correct in asking why do $a$ and $c$ have the same prime factors? I did not know if I could do that or not. Below "dxiv" pointed out that it is not a given and it is incorrect to do so. In the proof I gave I showed that some powers of $b^2$ are odd where in fact they must be all even for $b^2$ to be a square. Please note in the 2nd to last paragraph $a$ is used as shown by the factorization of $a c^2$ which are made of terms like $r_i^{\gamma_i+2\beta_i}$. – Rene Girard Aug 13 '17 at 1:25 Yes, your proof looks correct. I think proof by contrapositive might also work here, too: if $a=b^2/c^2$ is an integer, suppose $b^2/c^2 = (kp)^2/(kq)^2$, where $p/q$ and hence $p^2/q^2$ are in lowest terms. Then $q^2=1$ because $a$ is an integer, and hence $a = b^2$ is the square of an integer. (You would need a similar kind of factorization argument if you wanted to prove in more detail the part which says “$p/q$ and hence $p^2/q^2$ are in lowest terms”.) • Thanks for the comments. I understand from your proof that ${k=gcd(b,c)}$. Is this correct? You indicate that ${q \mid p}$. Is this because ${c \mid b}$ and $b^2/c^2 = a$ where $a$ is integer? I am not familiar with the terminology " ${p^2/q^2}$ are in lowest terms ". Can you give some details on its meaning? – Rene Girard Aug 13 '17 at 2:18 Almost correct, but don't start your proof with $$ac^2 = b^2$$ instead, start with: assume $$a = \frac{b^2}{c^2},$$ now we get $$ac^2 = b^2.$$ The intuition is right, but the formalization of the proof has some holes.
$c^2$ and $a$ can be written $\,\dots\,$ where $\,\dots\,$ ${\beta_1,\beta_2,\ldots,\beta_n}$ and ${\gamma_1,\gamma_2,\ldots,\gamma_n}$ are positive integers This assumes that $a$ and $c$ have the same prime factors, which is not a given. Also, as just a side advice, it helps with making proofs more readable (to others, and even yourself) to use consistent, easy to follow notation. The posted proof uses: $b^2 = p_1^{\alpha_1} \cdot p_2^{\alpha_2} \ldots p_n^{\alpha_n}$ $c^2 = r_1^{2\beta_1}\cdot r_2^{2\beta_2} \ldots r_n^{2\beta_n}$ Using $\alpha_i$ for twice the exponent of prime factors of $b$ vs. $\beta_i$ for exponents of prime factors of $c$ doesn't make it any easier to follow. Below is my alternative writeup of what is essentially the same proof... Assume that $a = b^2/c^2$ which is equivalent to $ac^2=b^2$. Let $p_1, p_2, \dots, p_n$ be the prime factors that divide either of $a,b,c$ so that $a = p_1^{\alpha_1}p_2^{\alpha_2}\dots p_n^{\alpha_n}\,$, $b = p_1^{\beta_1}p_2^{\beta_2}\dots p_n^{\beta_n}\,$, $c = p_1^{\gamma_1}p_2^{\gamma_2}\dots p_n^{\gamma_n}\,$, where $\alpha_i, \beta_i,\gamma_i$ are non-negative integers (some can be $0\,$, for example $\alpha_k=0$ if $p_k$ does not divide $a$). Then, by the unique factorization theorem (also known as FTA), the powers of each prime on the two sides of the equality $ac^2=b^2$ must be equal, therefore: $$\alpha_i+2\gamma_i = 2\beta_i$$ It follows that $\alpha_i = 2(\beta_i-\gamma_i)\,$ is even, but in that case each prime factor of $a$ occurs at an even power, therefore $a$ is a perfect square, in fact $a=\big(p_1^{\beta_1-\gamma_1}p_2^{\beta_2-\gamma_2} \dots p_n^{\beta_n-\gamma_n}\big)^2$ which proves the contrapositive of the problem statement, and so the statement itself. • P.S. The problem is of course equivalent to Prove that the square root of a positive integer is either an integer or irrational. In this case the square root of positive integer $a$ is $b/c$ which is obviously not irrational, so $b/c$ must be an integer, then it follows that $a = (b/c)^2$ is a perfect square. – dxiv Aug 12 '17 at 3:39 • Thank you for the advice on the use of symbols for factorization exponents. The 2nd sentence of the 1st paragraph of your answer really help in improving my understanding of factorization when dealing with several integers. If I understand you correctly we have a set P that contains all the prime numbers that are used in the factorizations of $a,b,c$. Each of these factorization uses different subset of P. This enables writing the factorization of $a,b,c$ as you did above where some of the exponents are 0 if the corresponding prime number does not divide the integer being factored. – Rene Girard Aug 13 '17 at 1:53 • @ReneGirard a set P that contains all the prime numbers that are used in the factorizations of a,b,c. Each of these factorization uses different subset of P. Right, that's precisely so. some of the exponents are 0 if the corresponding prime number does not divide the integer being factored Right again. – dxiv Aug 13 '17 at 1:58 • Your reply made me understand better a proof of the Fund. Thm of Algebra that I am studying. This is great. Thanks again. – Rene Girard Aug 13 '17 at 2:23
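The contrapositive is easy to probe numerically. A small brute-force sketch (my own addition; the search bound `N` is arbitrary) looks for nonzero $b, c$ with $a c^2 = b^2$ and finds them exactly when $a$ is a perfect square:

```python
import math

def is_square(a: int) -> bool:
    r = math.isqrt(a)
    return r * r == a

N = 200  # arbitrary search bound
for a in (12, 16):  # 12 is not a square, 16 is
    hits = [(b, c) for c in range(1, N) for b in range(1, N)
            if a * c * c == b * b]
    print(a, is_square(a), hits[:3])  # non-square: no hits; square: b = sqrt(a)*c
```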
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9195647835731506, "perplexity": 127.85798877254982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573258.74/warc/CC-MAIN-20190918065330-20190918091330-00204.warc.gz"}
http://mathhelpforum.com/pre-calculus/121580-find-solution.html
# Math Help - Find the solution 1. ## Find the solution Find the solution in R : $x^3 + y^3 = 28,\;x + y = 4$ 2. Originally Posted by dapore Find the solution in R : $x^3 + y^3 = 28,\;x + y = 4$ I think you can see from the second equation that $y = 4 - x$. Substitute into the first equation: $x^3 + (4 - x)^3 = 28$ $x^3 + 64 - 48x + 12x^2 - x^3 = 28$ $12x^2 - 48x + 36 = 0$ $x^2 - 4x + 3 = 0$ $(x - 1)(x - 3) = 0$ $x = 1$ or $x = 3$. 3. Originally Posted by Prove It x=1, y=4-1=3 x=3, y=4-3=1 4. Hello, dapore! Here is another approach. It's no shorter, but it's rather "cute". Find the solution in R: $\begin{array}{cccc}x^3 + y^3 &=& 28 & [1] \\ x + y &=& 4 & [2] \end{array}$ $\text{Cube [2]: }\;(x+y)^3 = 4^3 \quad\Rightarrow\quad x^3 + 3x^2y + 3xy^2 + y^3 = 64$ $\text{and we have: }\;\underbrace{x^3 + y^3}_{\text{This is 28}} + 3xy\underbrace{(x+y)}_{\text{This is 4}} = 64 \quad\Rightarrow\quad 28 + 3xy(4) = 64$ $12xy = 36 \quad\Rightarrow\quad y = \tfrac{3}{x}\;\;[3]$ Substitute into [2]: $x + \tfrac{3}{x} = 4 \quad\Rightarrow\quad x^2 - 4x + 3 = 0$ $(x-1)(x-3) = 0 \quad\Rightarrow\quad x = 1,\:3$ Substitute into [3]: $y = 3,\:1$ Solutions: $(x,y) = (1,3),\;(3,1)$ 5. Thank you my friends 6. $x^{3}+y^{3}=(x+y)\left( (x+y)^{2}-3xy \right)=4(16-3xy)=28,$ thus $xy=3.$ (1) Now put the second equation into (1) and get $4x-x^2=3\implies x^2-4x+3=(x-1)(x-3)=0.$ The solutions are $(x,y)=(1,3)$ and $(x,y)=(3,1).$
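As a check on the three solutions above, the system can also be handed to a computer algebra system; a quick sketch with SymPy:

```python
from sympy import symbols, solve

x, y = symbols('x y')
# The system from the thread: x^3 + y^3 = 28 and x + y = 4.
print(solve([x**3 + y**3 - 28, x + y - 4], [x, y]))  # [(1, 3), (3, 1)]
```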
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9784183502197266, "perplexity": 3721.1802510479693}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500834258.45/warc/CC-MAIN-20140820021354-00330-ip-10-180-136-8.ec2.internal.warc.gz"}
https://passthecpp.com/lesson-5-controlling-exposure/
# Calculating Exposure by Steve Kozak M. Photog., CR. CPP The basics of photography begin with correctly exposing the sensor by finding the right combination of f-stops and shutter speeds. Exposure can be expressed in this formula: Exposure = INTENSITY X TIME or, E = IT "Intensity" is the f-stop and "Time" is the shutter speed. By the way, there is no actual multiplication taking place in this formula. It is used as a way of explaining the concept of keeping exposures "equal". "E" will represent the amount of light that reaches the sensor. If you remember any algebra, because there is an = in the equation, I can change my variables, so long as I keep the overall value equal. For example: E = 2 x 10, and E = 4 x 5 (E is the same answer in both, but we used different variables.) Using this formula we can plug in some f-stops and shutter speeds just to see what happens. So let's do it… E = I x T E = F8 @ 1/125 (F8 and 1/125 are variables that I selected entirely at random for this example.) In my sample, I stated E = F8 @ 1/125. If for some reason I wanted to change the F8 to F5.6, could I do it? YES! As long as I keep the amount of light that reaches the sensor equal! -If I move from F8 to F5.6, I am opening up the lens to let twice the amount of light reach the sensor. -In order to remain equal, I have to cut the light in half by moving the shutter speed to the next faster speed. E = F8 @ 1/125 And E = F5.6 @ 1/250 F5.6 @ 1/250 would be an equal, or equivalent, exposure to F8 @ 1/125 because in both cases, the exact same amount of light reaches the sensor. As a matter of fact, there can be many equivalent exposures: F22 @ 1/15 F16 @ 1/30 F11 @ 1/60 F8 @ 1/125 F5.6 @ 1/250 F4 @ 1/500 F2.8 @ 1/1000 Each of the above combinations yields the exact same amount of light onto the sensor. This process is the same no matter which f-stop and shutter speed combination you choose to start out with. The hard part about exposure is figuring out which f-stop and shutter speed to start with in the first place. (We will cover this in-depth later.) It is at this point that you may be thinking, "Gosh, if F8 @ 1/125 will work, why the heck would I want to change to F4 @ 1/500?" This is where we first begin to realize the magic of photography. Remember, f-stops control depth of field, so changing from F8 @ 1/125 to F4 @ 1/500 allows you to create an image with a shallower depth of field. Welcome to your Lesson 5: Controlling Exposure 1. Given F8 @ 1/125, what would the new shutter speed need to be if you moved the lens to F5.6? 2. Define "equivalent exposure". 3. Give an equivalent exposure to F11 @ 1/15. 4. Give an equivalent exposure to F4 @ 1/125. 5. Give an equivalent exposure to F5.6 @ 1/500.
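The one-stop bookkeeping above is mechanical enough to put in a few lines of code. Here is a small sketch (my own, not part of the lesson) that walks the two standard full-stop scales in lockstep; the tables only cover the stops the lesson lists:

```python
F_STOPS = [1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22]  # each step halves the light
SPEEDS = [15, 30, 60, 125, 250, 500, 1000]       # denominators: 1/15 s .. 1/1000 s

def equivalent_exposures(f_stop, speed):
    """All listed pairs passing the same total light as (f_stop, 1/speed)."""
    i, j = F_STOPS.index(f_stop), SPEEDS.index(speed)
    pairs = []
    for step in range(-len(SPEEDS), len(SPEEDS)):
        fi, sj = i + step, j - step  # close one stop -> double the time
        if 0 <= fi < len(F_STOPS) and 0 <= sj < len(SPEEDS):
            pairs.append((F_STOPS[fi], f"1/{SPEEDS[sj]}"))
    return pairs

print(equivalent_exposures(8, 125))
# [(2.8, '1/1000'), (4, '1/500'), (5.6, '1/250'), (8, '1/125'),
#  (11, '1/60'), (16, '1/30'), (22, '1/15')]
```

The printed list reproduces the lesson's table of equivalent exposures, and it also answers quiz questions 1 and 3 through 5 for any starting pair on these scales.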
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8116676807403564, "perplexity": 1648.9942399699892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00458.warc.gz"}
https://www.knowpia.com/knowpedia/Orthogonal_complement
# Orthogonal complement

## Summary

In the mathematical fields of linear algebra and functional analysis, the orthogonal complement of a subspace $W$ of a vector space $V$ equipped with a bilinear form $B$ is the set $W^{\bot}$ of all vectors in $V$ that are orthogonal to every vector in $W$. Informally, it is called the perp, short for perpendicular complement. It is a subspace of $V$.

## Example

Let $V=(\mathbb{R}^{5},\langle \cdot ,\cdot \rangle)$ be the vector space equipped with the usual dot product $\langle \cdot ,\cdot \rangle$ (thus making it an inner product space), and let

$W=\{u\in V: Ax=u,\ x\in \mathbb{R}^{2}\},$ with $A=\begin{pmatrix}1&0\\0&1\\2&6\\3&9\\5&3\end{pmatrix}.$

Then its orthogonal complement $W^{\perp}=\{v\in V:\langle u,v\rangle =0,\ \forall u\in W\}$ can also be defined as

$W^{\perp}=\{v\in V:\tilde{A}y=v,\ y\in \mathbb{R}^{3}\},$ with $\tilde{A}=\begin{pmatrix}-2&-3&-5\\-6&-9&-3\\1&0&0\\0&1&0\\0&0&1\end{pmatrix}.$

The fact that every column vector in $A$ is orthogonal to every column vector in $\tilde{A}$ can be checked by direct computation (see the short numerical sketch at the end of this article). The fact that the spans of these vectors are orthogonal then follows by bilinearity of the dot product. Finally, the fact that these spaces are orthogonal complements follows from the dimension relationships given below.

## General bilinear forms

Let $V$ be a vector space over a field $F$ equipped with a bilinear form $B$. We define $u$ to be left-orthogonal to $v$, and $v$ to be right-orthogonal to $u$, when $B(u,v)=0$. For a subset $W$ of $V$, define the left orthogonal complement $W^{\bot}$ to be

$W^{\bot}=\left\{x\in V:B(x,y)=0{\text{ for all }}y\in W\right\}.$

There is a corresponding definition of right orthogonal complement. For a reflexive bilinear form, where $B(u,v)=0$ implies $B(v,u)=0$ for all $u$ and $v$ in $V$, the left and right complements coincide. This will be the case if $B$ is a symmetric or an alternating form.
The definition extends to a bilinear form on a free module over a commutative ring, and to a sesquilinear form extended to include any free module over a commutative ring with conjugation.[1] ### Properties • An orthogonal complement is a subspace of ${\displaystyle V}$ ; • If ${\displaystyle X\subseteq Y}$  then ${\displaystyle X^{\bot }\supseteq Y^{\bot }}$ ; • The radical ${\displaystyle V^{\bot }}$  of ${\displaystyle V}$  is a subspace of every orthogonal complement; • ${\displaystyle W\subseteq (W^{\bot })^{\bot }}$ ; • If ${\displaystyle B}$  is non-degenerate and ${\displaystyle V}$  is finite-dimensional, then ${\displaystyle \dim(W)+\dim(W^{\bot })=\dim V.}$ • If ${\displaystyle L_{1},\ldots ,L_{r}}$  are subspaces of a finite-dimensional space ${\displaystyle V}$  and ${\displaystyle L_{*}=L_{1}\cap \cdots \cap L_{r},}$  then ${\displaystyle L_{*}^{\bot }=L_{1}^{\bot }+\cdots +L_{r}^{\bot }.}$ ## Inner product spaces This section considers orthogonal complements in an inner product space ${\displaystyle H.}$ [2] Two vectors ${\displaystyle x}$  and ${\displaystyle y}$  are called orthogonal if ${\displaystyle \langle x,y\rangle =0,}$  which happens if and only if ${\displaystyle \|x\|\leq \|x+sy\|}$  for all scalars ${\displaystyle s.}$ [3] If ${\displaystyle C}$  is any subset of an inner product space ${\displaystyle H}$  then its orthogonal complement in ${\displaystyle H}$  is the vector subspace {\displaystyle {\begin{alignedat}{4}C^{\bot }:&=\{x\in H:\langle x,c\rangle =0{\text{ for all }}c\in C\}\\&=\{x\in H:\langle c,x\rangle =0{\text{ for all }}c\in C\}\end{alignedat}}} which is always a closed subset of ${\displaystyle H}$ [3][proof 1] that satisfies ${\displaystyle C^{\bot }=\left(\operatorname {cl} _{H}\left(\operatorname {span} C\right)\right)^{\bot }}$  and if ${\displaystyle C\neq \varnothing }$  then also ${\displaystyle C^{\bot }\cap \operatorname {cl} _{H}\left(\operatorname {span} C\right)=\{0\}}$  and ${\displaystyle \operatorname {cl} _{H}\left(\operatorname {span} C\right)\subseteq \left(C^{\bot }\right)^{\bot }.}$  If ${\displaystyle C}$  is a vector subspace of an inner product space ${\displaystyle H}$  then ${\displaystyle C^{\bot }=\left\{x\in H:\|x\|\leq \|x+c\|{\text{ for all }}c\in C\right\}.}$ If ${\displaystyle C}$  is a closed vector subspace of a Hilbert space ${\displaystyle H}$  then[3] ${\displaystyle H=C\oplus C^{\bot }\qquad {\text{ and }}\qquad \left(C^{\bot }\right)^{\bot }=C}$ where ${\displaystyle H=C\oplus C^{\bot }}$  is called the orthogonal decomposition of ${\displaystyle H}$  into ${\displaystyle C}$  and ${\displaystyle C^{\bot }}$  and it indicates that ${\displaystyle C}$  is a complemented subspace of ${\displaystyle H}$  with complement ${\displaystyle C^{\bot }.}$ ### Properties The orthogonal complement is always closed in the metric topology. In finite-dimensional spaces, that is merely an instance of the fact that all subspaces of a vector space are closed. In infinite-dimensional Hilbert spaces, some subspaces are not closed, but all orthogonal complements are closed. If ${\displaystyle W}$  is a vector subspace of an inner product space the orthogonal complement of the orthogonal complement of ${\displaystyle W}$  is the closure of ${\displaystyle W,}$  that is, ${\displaystyle \left(W^{\bot }\right)^{\bot }={\overline {W}}.}$ Some other useful properties that always hold are the following. Let ${\displaystyle H}$  be a Hilbert space and let ${\displaystyle X}$  and ${\displaystyle Y}$  be its linear subspaces. 
Then: • ${\displaystyle X^{\bot }={\overline {X}}^{\bot }}$ ; • if ${\displaystyle Y\subseteq X}$  then ${\displaystyle X^{\bot }\subseteq Y^{\bot }}$ ; • ${\displaystyle X\cap X^{\bot }=\{0\}}$ ; • ${\displaystyle X\subseteq (X^{\bot })^{\bot }}$ ; • if ${\displaystyle X}$  is a closed linear subspace of ${\displaystyle H}$  then ${\displaystyle (X^{\bot })^{\bot }=X}$ ; • if ${\displaystyle X}$  is a closed linear subspace of ${\displaystyle H}$  then ${\displaystyle H=X\oplus X^{\bot },}$  the (inner) direct sum. The orthogonal complement generalizes to the annihilator, and gives a Galois connection on subsets of the inner product space, with associated closure operator the topological closure of the span. ### Finite dimensions For a finite-dimensional inner product space of dimension ${\displaystyle n,}$  the orthogonal complement of a ${\displaystyle k}$ -dimensional subspace is an ${\displaystyle (n-k)}$ -dimensional subspace, and the double orthogonal complement is the original subspace: ${\displaystyle \left(W^{\bot }\right)^{\bot }=W.}$ If ${\displaystyle A}$  is an ${\displaystyle m\times n}$  matrix, where ${\displaystyle \operatorname {Row} A,}$  ${\displaystyle \operatorname {Col} A,}$  and ${\displaystyle \operatorname {Null} A}$  refer to the row space, column space, and null space of ${\displaystyle A}$  (respectively), then[4] ${\displaystyle \left(\operatorname {Row} A\right)^{\bot }=\operatorname {Null} A\qquad {\text{ and }}\qquad \left(\operatorname {Col} A\right)^{\bot }=\operatorname {Null} A^{\operatorname {T} }.}$ ## Banach spaces There is a natural analog of this notion in general Banach spaces. In this case one defines the orthogonal complement of W to be a subspace of the dual of V defined similarly as the annihilator ${\displaystyle W^{\bot }=\left\{x\in V^{*}:\forall y\in W,x(y)=0\right\}.}$ It is always a closed subspace of V. There is also an analog of the double complement property. W⊥⊥ is now a subspace of V∗∗ (which is not identical to V). However, the reflexive spaces have a natural isomorphism i between V and V∗∗. In this case we have ${\displaystyle i{\overline {W}}=W^{\bot \,\bot }.}$ This is a rather straightforward consequence of the Hahn–Banach theorem. ## Applications In special relativity the orthogonal complement is used to determine the simultaneous hyperplane at a point of a world line. The bilinear form η used in Minkowski space determines a pseudo-Euclidean space of events. The origin and all events on the light cone are self-orthogonal. When a time event and a space event evaluate to zero under the bilinear form, then they are hyperbolic-orthogonal. This terminology stems from the use of two conjugate hyperbolas in the pseudo-Euclidean plane: conjugate diameters of these hyperbolas are hyperbolic-orthogonal. ## Notes 1. 
^ If ${\displaystyle C=\varnothing }$  then ${\displaystyle C^{\bot }=H,}$  which is closed in ${\displaystyle H}$  so assume ${\displaystyle C\neq \varnothing .}$  Let ${\textstyle P:=\prod _{c\in C}\mathbb {F} }$  where ${\displaystyle \mathbb {F} }$  is the underlying scalar field of ${\displaystyle H}$  and define ${\displaystyle L:H\to P}$  by ${\displaystyle L(h):=\left(\langle h,c\rangle \right)_{c\in C},}$  which is continuous because this is true of each of its coordinates ${\displaystyle h\mapsto \langle h,c\rangle .}$  Then ${\displaystyle C^{\bot }=L^{-1}(0)=L^{-1}\left(\{0\}\right)}$  is closed in ${\displaystyle H}$  because ${\displaystyle \{0\}}$  is closed in ${\displaystyle P}$  and ${\displaystyle L:H\to P}$  is continuous. If ${\displaystyle \langle \,\cdot \,,\,\cdot \,\rangle }$  is linear in its first (respectively, its second) coordinate then ${\displaystyle L:H\to P}$  is a linear map (resp. an antilinear map); either way, its kernel ${\displaystyle \operatorname {ker} L=L^{-1}(0)=C^{\bot }}$  is a vector subspace of ${\displaystyle H.}$  Q.E.D. ## References 1. ^ Adkins & Weintraub (1992) p.359
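As an addendum to the worked example at the top of this article: the claimed column-wise orthogonality of $A$ and $\tilde{A}$ is a one-line check in numpy (the code is my own illustration; the article itself only asserts "direct computation"):

```python
import numpy as np

A = np.array([[1, 0], [0, 1], [2, 6], [3, 9], [5, 3]])
A_tilde = np.array([[-2, -3, -5], [-6, -9, -3], [1, 0, 0], [0, 1, 0], [0, 0, 1]])

# Entry (i, j) of A.T @ A_tilde is the dot product of column i of A with
# column j of A~.  All entries are zero, so every column of A is
# orthogonal to every column of A~; span(A~) therefore lies inside
# W-perp, and equality follows from the dimension count 2 + 3 = 5.
print(A.T @ A_tilde)   # 2x3 zero matrix
```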
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 118, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9676833152770996, "perplexity": 756.8113388366237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.78/warc/CC-MAIN-20220817001643-20220817031643-00691.warc.gz"}
http://mathoverflow.net/questions/7063/a-problem-of-an-infinite-number-of-balls-and-an-urn/7113
# A problem of an infinite number of balls and an urn [closed]

I think that the following problem originated in a probability textbook: You have a countably infinite supply of numbered balls at your disposal. They are all labeled with the natural numbers {1,2,3,...}. At 11h30, you put in the urn the balls labeled 1 to 10, and then right after remove the ball labeled 1. Then, at 11h45, you put into the jug balls 11 to 20, and remove ball number 2, etc. In general, at $\frac{1}{2^n}$ hours before midnight, you put in balls $(n-1)10+1$ to $10n$, and remove the ball labeled $n$.

The question is: how many balls are left in the urn at (or after) midnight? (That is, after a countably infinite number of steps.)

I know that the most accepted answer to this question is that at midnight there aren't any balls left in the urn, because if you consider ball $n$, you know that it has been removed from the urn $\frac{1}{2^n}$ hours before midnight, and hasn't been put back at any subsequent step. Thus there can't be any balls in the urn at midnight.

I know that the fact that this problem is very counter-intuitive is probably because it is not worded in any particular axiomatic system, so you can have multiple interpretations of the answer. I know that the above reasoning implies that after midnight, there can't be any balls in the urn that have a natural number on them, and I also know that there are only balls with natural-numbered labels available, so that would imply that the jug is empty, but it is still not that convincing.

Consider the following different problem: At the start, there is one ball in the jug, labeled 1. Then, at the first step, you remove ball 1 and put in ball 2 at the same time. Then, you remove ball 2 and put in ball 3, etc... There is a ball in the jug at any time, so certainly there should still be a ball in the urn at midnight, but it can't have a natural-numbered label.

Consider also the following: in the previous problem, after removing ball #2 at the second step, put back in ball #1. Then at the next step remove ball 1 and put in ball 2, etc. At midnight there is no way to know if the ball is labeled 1 or 2. (Probably because the limit of the sequence $(-1)^n$ doesn't exist?)

My question is: Is there any more satisfying way to formalize this problem, or to explain it, that would resolve the paradox that the number of balls in the jug gets constantly larger, never diminishes, and yet at the end there is nothing left in the urn?

-

## closed as no longer relevant by Robin Chapman, François G. Dorais♦ Jul 9 '10 at 19:34

I think the real answer is that the problem gives no information about what happens at midnight or later. Maybe at exactly midnight a jaguar comes along and eats all but 8 of the balls. – Reid Barton Nov 28 '09 at 17:13

I assume that there is a typo: at 11:45 you put in balls 11 to 20. You can't put in ball 10 as it is already in. A somewhat different typo is in the "In general..." next sentence. I assume that you mean to put in the balls numbered (n-1)*10 + 1 to n*10?
– Sam Nead Nov 29 '09 at 18:23

This is how I justify buying mathematics books faster than I can read them. If I live forever, even though I keep buying them, I'll eventually read each one. – Dan Piponi May 24 '10 at 21:02

Can you, please, change "infinite balls" to $\textit{infinitely many}$ balls in the title? Infinite balls have something unsavory about them. – Victor Protsak May 25 '10 at 4:56

Write about balls and jugs ... someone may think you intend some sexual double-entendre... Isn't that mathematical item you put the balls in usually known as an urn? – Gerald Edgar May 25 '10 at 12:22

A good way to formalize the problem is to describe the state of the jug not as a set (a subset of N), because those are hard to take limits of, but as its characteristic function. The first "paradox" is that pointwise limits are not the same as uniform limits. Later ones can be "resolved" by extending the notion of "limit" in various ways, most of which were invented by Euler. See for example the book Divergent Series by Hardy.

-

The second part is the paradox of Thomson's lamp, which can be formalized as thinking about Grandi's series $\sum_{n=0}^{\infty}{(-1)^n}$. This is a decent summary; one can argue either "the sum is 1/2" or "there is no sum" based on the formalization used. For some philosophers, this is a good illustration that the operation above does not occur in real life (which is not necessarily irrelevant, since it affects which models of space and time are valid). The research generally falls under the title of "supertasks".

To address formalizing the first part of the problem, I prefer the analogous $\sum_{n=0}^{\infty}{10} - \sum_{n=0}^{\infty}{1}$, so that instead of considering the series $1 - 1 + 1 - 1 + 1 - 1 \cdots$ we consider $10 - 1 + 10 - 1 + 10 - 1 \cdots$, although there are certainly other ways. One can try to obtain the continuation of the function $f(1-\frac{1}{2^n})=9n$ from [0,1] using the Alexandroff compactification of the real numbers. This gives the answer of "infinite", although I'm not keen on this formalization because it removes the iterative sense of the original problem.

I find such paradoxes to be most useful (to mathematicians) pedagogically, because they allow students to apply their mathematical intuition and give them some investment before formalizing their handling of infinity.

-

You are describing what is known as a supertask, or task involving infinitely many steps, and there are numerous interesting examples. In a previous MO answer, for example, I described an entertaining example about the deal with the Devil, which is similar to your example. Let me mention a few additional examples here.

In the article "A beautiful supertask" (Mind, 105(417):81-84, 1996), the author Laraudogoitia considers the situation with Newtonian physics in which there are infinitely many billiard balls, getting progressively smaller, with the $n^{th}$ ball positioned at $\frac1n$, converging to $0$. Now, set ball $1$ in motion, which hits ball 2 in such a way that all energy is transferred to ball 2, which hits ball 3 and so on.
All collisions take place in finite time, because of the positions of the balls, and so the motion disappears into the origin; in finite time after the collisions are completed, all the balls are stationary. Thus:

• Even though each step of the physical system is energy-conserving, the system as a whole is not energy-conserving in time.

The general conclusion is that one cannot expect to prove the principle of conservation of energy throughout time without completeness assumptions about the nature of time, space and spacetime.

A similar example has the balls spaced out to infinity, and this time the collisions are arranged so that the balls move faster and faster out to infinity (using Newtonian physics), completing their progressively rapid interactions in finite total time. In this case, once again, a physical system that is energy-preserving at each step does not seem to be energy-preserving throughout time, and the energy seems to have leaked away out to infinity. The interesting thing about this example is that one can imagine running it in reverse, in effect gaining energy from infinity, where the balls suddenly start moving towards us from infinity, without any apparent violation of energy-conservation in any one interaction.

Another example uses relativistic physics. Suppose that you want to solve an existential number-theoretic question, of the form $\exists n\varphi(n)$. In general, such statements are verified by a single numerical example, and there is in principle no way of getting a yes-no answer to such questions in finite time. The thing to do is to get into a rocket ship and fly around the earth, while your graduate student---and her graduate students, and so on in perpetuity---search for an additional example, with the agreement that if an example is ever found, then a signal will be sent up to your rocket. Meanwhile, you should accelerate unboundedly close to the speed of light, in such a way that because of relativistic time contraction, the eternity on earth corresponds to only a finite time on the rocket. In this way, one will know the answer in finite time. With rockets flying around rockets, one can in principle learn the answer to any arithmetic statement in finite time. There are, of course, numerous issues with this story, beginning with the fact that unbounded energy is required for the required time foreshortening, but nevertheless Malament-Hogarth spacetimes can be constructed to avoid these issues, and allow a single observer to have access to an infinite time history of another individual.

These examples speak to an intriguing possible argument against the Church-Turing thesis, based on the idea that there may be unrealized computational power arising from the fact that we live in a quantum-mechanical relativistic world.

-

Joel, this is so cool! I know you did work on infinite time Turing machines -- In the literature, are there any discussions of infinite time Turing machines which go into serious detail w.r.t. the physics you mention? – Grant Olney Passmore May 25 '10 at 17:32

Yes, Philip Welch has done some nice work on it. Look at a few of his recent papers on his web page: maths.bris.ac.uk/~mapdw One paper is explicitly concerned with physical models for infinitary computability. – Joel David Hamkins May 25 '10 at 17:52

IIRC if you have access to unbounded energy or precision, you can tackle NP-complete problems in polytime with analog computers.
– Steve Huntsman Jul 10 '10 at 11:31

This procedure is very common in mathematics in general, especially in inductive constructions. Usually one doesn't scream "paradox!" when seeing a function on the set of positive integers defined recursively by, say, $f(1)=1$, $f(a)=f([a/2])+1$. But let's think of it this way. We have a machine that computes the values. After each computation, we put the values that can be computed next in a queue. At the first step we know $f$ only at $1$, so the queue consists of 2 and 3. After we serve 2, the queue becomes 3,4,5. After we serve 3, it becomes 4,5,6,7, and so on. What a horror: the more numbers we've served, the longer the queue becomes, so in the end we'll, probably, have infinitely many numbers that still need to be computed!

It is very funny how one easily swallows such things when they aren't singled out from long arguments and how one finds them "paradoxical" and "unbelievable" when they are presented in their pure form. As to "occurring in real life", just restate Zeno's paradox as "After you reach the middle of the way, you'll still need to reach 5/8, 6/8 and 7/8 of the way. After you reach the 5/8 mark, you'll need to reach 41/64, 42/64, ..., 63/64. And so on." In this version, the conclusion is that not only will you never reach the end, but it'll be farther and farther away from you with each step. I wonder how many people realize that this might be one of the things the Red Queen meant when she told Alice that "Here you need to run as fast as you can just to stay where you are, and if you want to get anywhere else, you need to run twice as fast".

-

There is a paradox involving the series $1-1+1-\cdots$: the terms can be grouped in pairs, each pair summing to zero, $(1-1)+(1-1)+\cdots$, or there is the following regrouping $1+(1+1)-1+(1+1+1+1)-(1+1)-\cdots$, in which each pair of groups contributes $2^k$ ones followed by $2^{k-1}$ subtracted ones, continuing to infinity. This grouping will produce the same ever-growing pattern as the one in your problem. This involves regrouping an infinite series to change its value. This is related to the Riemann series theorem: if an infinite series is conditionally convergent, then its terms can be arranged so that the series converges to any given value, or diverges. See the following for more information: http://en.wikipedia.org/wiki/Riemann_series_theorem

-

Thinking about these kinds of problems has convinced me that it may be quite reasonable to assume that there's an absolute bound on how fast an object can travel. Or at least, that this is more "intuitive" than the alternative.

-

You can formalize this problem by talking about limits of sets, where the limsup of a sequence of sets consists of the elements that are in infinitely many of them, and the liminf consists of the elements that are in cofinitely many of them, and the limit exists if these are equal. In this case, if we look at the sets of balls in the urn at each time, we see that the limit of this sequence is the empty set. Of course, as you point out, in the case where you keep putting in 10, the limit of the cardinalities is $\aleph_0$, and in the case of taking 1 out, it doesn't exist. So I think what we get out of this really is the conclusion that taking cardinalities is a discontinuous operation...

Edit: I just realized, it would probably help to clarify if you're unfamiliar - the reason the above are sensible notions of limsup and liminf is because they're $\bigcap_{n\in\mathbb{N}} \bigcup_{k=n}^\infty A_k$ and $\bigcup_{n\in\mathbb{N}} \bigcap_{k=n}^\infty A_k$, respectively.
- Yeah, I think this is the key. If you think in measure theoretic terms, you could say that integration is not continuous with respect to the topology of pointwise convergence (though it is lower semicontinuous, which is Fatou's lemma). In particular, it is not necessarily true that $\liminf \mu(A_n) = \mu(\liminf A_n)$, though $\ge$ does hold. (For this example, $\mu$ could be counting measure.) So I think this paradox is just the human tendency to assume that "simple" operations are continuous. – Nate Eldredge May 25 '10 at 19:20

For me, one reason why the accepted answer to the original problem is somewhat "less than satisfying" is that it answers the stated question by rephrasing it into a supposedly equivalent question. "How many balls are left at midnight?" changes to "Can I remove each ball before midnight?" Our experience with completing finitely many steps in long (i.e. not infinitesimal) lengths of time tells us these two questions are the same... But I don't think they are.

After all, our prior experience with series and sequences tells us how to answer the first, by taking a limit. At any time before midnight we know how many balls are in the urn: from $\frac{1}{2^n}$ before midnight until $\frac{1}{2^{n+1}}$ before midnight we have $9n$ balls. For any number of balls $M$ we can find a time $T = \frac{1}{2^n}$ for some $n$ where for all times $t\geq T$ (before midnight) we have $9n>M$ balls in the urn. I.e. as we approach midnight the number of balls is unbounded.

We also know how to answer the second phrasing of the question: "Yes, we can remove all the balls before midnight", by the aforementioned "name the ball and I'll tell you when it left" method. At this point I think that "When all balls have been removed there are none left" is a tautology.

With the two answers contradicting each other, it seems to me that either: the way we deal with unbounded sequences is wrong, or my all vs none tautology is not actually a tautology, or those two phrases we assume are the same... are not. Unless someone sees another possibility beyond these three, I have to think that it is the third option that we must pick.

Some additional ways to think about the problem to muddy the waters further: "How long is ball $n$ in the urn before it is removed?" and the followup question "What is the average length of time each ball is in the urn?" "If you can't/won't/don't read the labels, why isn't this equivalent to throwing 9 balls in the urn at $\frac{1}{2^n}$ before midnight for each $n\in\mathbb{N}$?" and the followup "If, instead of placing the removed ball in a different spot, you place the "non-read label ball" back into the countably infinite supply, is this now equivalent to throwing in 9 balls at each $\frac{1}{2^n}$?"

-
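To see the set-limit viewpoint from the answers above concretely, here is a small simulation sketch (my own illustration, not from any of the answers). The cardinality $9n$ grows without bound, yet every individual ball leaves the urn and never returns, so the liminf (and limsup) of the sequence of sets is empty:

```python
# Simulating finitely many stages of the urn supertask.
# At stage n we add balls 10(n-1)+1 .. 10n and remove ball n.
def urn_after(stages):
    urn = set()
    for n in range(1, stages + 1):
        urn.update(range(10 * (n - 1) + 1, 10 * n + 1))
        urn.discard(n)
    return urn

for n in (10, 100, 1000):
    print(n, len(urn_after(n)))      # 90, 900, 9000: cardinality 9n grows

# Yet ball k is only ever present between stages ceil(k/10) and k, so it
# belongs to only finitely many of the sets; the limit set is empty,
# matching the "0 balls at midnight" answer.
print(1 in urn_after(5), 1 in urn_after(50))   # False, False
```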
- The reason this problem has the generally accepted answer of 0 balls in the jug at midnight -- among mathematicians -- is that for any given ball, one may follow its itinerary: at some point it goes into the jug, and then at some later point it leaves the jug, never to be moved again. Thus it would be easy to formalize a generalized class of problems so that one may prove rigorously that, as long as each ball n is moved only finitely many times before midnight, at midnight it is wherever its last motion took it.

The infinitely-many-marbles problem was probably created by the mathematician J.E. Littlewood in the early 1950s, and was popularized in Martin Gardner's Mathematical Games column in Scientific American, where "zero balls at midnight" was given as the correct solution. (See A Mathematician's Miscellany, J.E. Littlewood, Methuen, 1953.)

Philosophers, on the other hand, are still debating this problem. (See, for example, Paradoxes from A to Z by Michael Clark, 2nd ed., Routledge, 2007.)

-

I am a mathematician and I don't accept "0 balls" as "the" answer (because of implicit continuity assumptions). There is ample evidence at this page that I am not alone in that. – Victor Protsak May 26 '10 at 4:48

Naturally, not everyone will agree on a philosophical issue. Which is why I said Martin Gardner's solution is "generally accepted," not "universally accepted," among mathematicians. – Daniel Asimov May 30 '10 at 0:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8747800588607788, "perplexity": 397.4128704711977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558065752.33/warc/CC-MAIN-20141017150105-00189-ip-10-16-133-185.ec2.internal.warc.gz"}
https://brilliant.org/problems/how-much-bigger-is-your-screen-2/
# How much bigger is your screen?

Geometry Level 1

The dimension of a rectangular television screen is given as the diagonal length (in centimeters). What is the dimension of a rectangular television screen which has a length of 80 centimeters and a width of 60 centimeters?
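For the record, the diagonal follows from the Pythagorean theorem; the problem page leaves the arithmetic to the reader, so the worked step below is mine:

$d = \sqrt{80^2 + 60^2} = \sqrt{6400 + 3600} = \sqrt{10000} = 100$ centimeters.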
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8668065667152405, "perplexity": 1205.785946672695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186780.20/warc/CC-MAIN-20170322212946-00249-ip-10-233-31-227.ec2.internal.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=15&t=34740
## 6th Edition, Question 1.55

$E = h\nu$

Tyra Nguyen 4H
Posts: 60
Joined: Thu Sep 27, 2018 11:25 pm

### 6th Edition, Question 1.55

Anybody know how to do question 1.55, part c? The wording of the question is slightly confusing to me, where it says "absorbing at 3500 cm$^{-1}$".

Te Jung Yang 4K
Posts: 51
Joined: Thu Sep 27, 2018 11:18 pm

### Re: 6th Edition, Question 1.55

It means to use the energy you got from part b of the exercise. So, from the previous part, you would have $7.2 \times 10^{-20}$ J for one absorption. For the answer to part c, you would need to multiply that energy by Avogadro's constant.
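A quick sketch of that last multiplication (the per-photon energy is taken from the thread; the constant is standard):

```python
# Energy per mole from energy per absorption (value from the thread).
E_photon = 7.2e-20          # J per absorption, from part (b)
N_A = 6.022e23              # Avogadro's constant, mol^-1
E_per_mole = E_photon * N_A
print(f"{E_per_mole:.2e} J/mol")   # ~4.3e+04 J/mol, i.e. about 43 kJ/mol
```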
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9454642534255981, "perplexity": 2926.7273599284626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494741.0/warc/CC-MAIN-20190220105613-20190220131613-00343.warc.gz"}
https://www.physicsforums.com/threads/flywheel-rotation-question.903037/
# Flywheel rotation question.

1. Feb 7, 2017

### Arman777

1. The problem statement, all variables and given/known data

A flywheel with D = 1.2 m is rotating at an angular speed of 200 rev/min.
(a) What is the angular speed in rad/s?
(b) What is the speed v of a point on the rim?
(c) What constant angular acceleration α (in rev/min²) will increase its angular speed to 1000 rev/min in 60 s?
(d) How many revolutions does the wheel make during that 60 s?

2. Relevant equations

Rotational kinematics equations

3. The attempt at a solution

I found (a), (b), (c) correctly (after long tries) but I got stuck at (d). I guess I'll use $\Delta\theta = \omega t - \frac{1}{2}\alpha t^2$ here.

(b) v = 12.5 m/s
(c) α = 800 rev/min²

Now for (d), let's write the whole equation in rev/min form. I don't know what the left part will be, because Δθ in units is normally just rev, I guess, so

θ - 200 rev/min = 7879 rev/min · 1 min + 1/2 (800 rev/min²) · (1 min)²

Here I get a huge number...

Thank you

2. Feb 7, 2017

### Staff: Mentor

You established before that 200 rev/min corresponds to 20.9 rad/s (which is correct). How can it correspond to a larger number of revolutions per minute now?

You cannot subtract an angle and an angular velocity. That's like adding 3 meters to 1 minute: it does not make sense.

You can use the formula you posted, but it is probably easier to find the average angular velocity (in rev/min) and to use that.

3. Feb 7, 2017

### Arman777

Ohh I see... so can I convert 200 rev/min to rev, which is just 200, or... I don't know?

4. Feb 7, 2017

### Staff: Mentor

You cannot "convert an angular velocity to revolutions" in the same way you cannot convert apples to minutes. There is no need for such a conversion. Your formula multiplies the angular velocity by a time. The product of those two is a number of revolutions or an angle, depending on which units you use.

5. Feb 7, 2017

### Arman777

OK, then how can I find the initial rev? If the other parts are correct?

6. Feb 7, 2017

oh I see...

7. Feb 7, 2017

### Arman777

Here is my new try

8. Feb 7, 2017

### Staff: Mentor

Your angular acceleration seems to be too low.

9. Feb 7, 2017

### Arman777

Yeah, sorry, it would be 1.40 rad/s²?

10. Feb 7, 2017

### Arman777

I found it, yeyyyyy

11. Feb 7, 2017

### Arman777

Thanks
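For reference, a small script working through all four parts with the thread's numbers (using the mentor's average-angular-velocity hint for part (d); the code itself is my own sketch):

```python
import math

# Given data from the thread.
D = 1.2                                 # flywheel diameter, m
w0_rpm, w1_rpm, t_min = 200.0, 1000.0, 1.0

# (a) angular speed in rad/s
w0 = w0_rpm * 2 * math.pi / 60          # ~20.9 rad/s
# (b) rim speed v = omega * r
v = w0 * (D / 2)                        # ~12.6 m/s
# (c) constant angular acceleration in rev/min^2
alpha = (w1_rpm - w0_rpm) / t_min       # 800 rev/min^2
# (d) revolutions in 60 s: average angular velocity times time
revs = (w0_rpm + w1_rpm) / 2 * t_min    # 600 rev
print(round(w0, 1), round(v, 1), alpha, revs)
```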
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9016199111938477, "perplexity": 4597.195018194202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806736.55/warc/CC-MAIN-20171123050243-20171123070243-00617.warc.gz"}
https://pureportal.strath.ac.uk/en/publications/parameter-estimation-for-the-stochastic-sis-epidemic-model
Parameter estimation for the stochastic SIS epidemic model

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

In this paper we estimate the parameters in the stochastic SIS epidemic model by using pseudo-maximum likelihood estimation (pseudo-MLE) and least squares estimation. We obtain the point estimators and 100(1 − α)% confidence intervals as well as 100(1 − α)% joint confidence regions by applying least squares techniques. The pseudo-MLEs have almost the same form as the least squares case. We also obtain the exact as well as the asymptotic 100(1 − α)% joint confidence regions for the pseudo-MLEs. Computer simulations are performed to illustrate our theory.

Original language: English
Pages: 75-98
Number of pages: 24
Journal: Statistical Inference for Stochastic Processes
Volume: 17
Issue: 1
DOI: https://doi.org/10.1007/s11203-014-9091-8
Publication status: Published - 1 Apr 2014

Keywords

• Stochastic SIS epidemic model
• pseudo-maximum likelihood estimation
• least squares estimation
• confidence interval
• confidence region
• asymptotic estimator
• logistic equations
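The abstract does not restate the model itself; in the closely related literature (Gray, Greenhalgh, Hu, Mao and Pan, 2011) the stochastic SIS model is the SDE $dI = I[(\beta N - \mu - \gamma - \beta I)\,dt + \sigma(N - I)\,dB]$. Assuming that form, a minimal Euler-Maruyama simulation of one sample path, the kind of data such estimators would be fitted to, might look like this; every parameter value below is invented purely for illustration:

```python
import numpy as np

# Euler-Maruyama sketch of a stochastic SIS path (assumed model form:
# dI = I[(beta*N - mu - gamma - beta*I) dt + sigma*(N - I) dB]).
rng = np.random.default_rng(0)
N, beta, mu, gamma, sigma = 100.0, 0.03, 0.01, 0.05, 0.002
dt, steps, I = 0.01, 10_000, 10.0
path = [I]
for _ in range(steps):
    dB = rng.normal(0.0, np.sqrt(dt))
    I += I * ((beta * N - mu - gamma - beta * I) * dt + sigma * (N - I) * dB)
    I = min(max(I, 0.0), N)       # keep the discretized path inside [0, N]
    path.append(I)
print(path[-1])
```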
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8952586054801941, "perplexity": 1876.896016935845}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038118762.49/warc/CC-MAIN-20210417071833-20210417101833-00537.warc.gz"}
https://ee.gateoverflow.in/317/gate-electrical-2016-set-2-question-32
Let $P=\begin{bmatrix} 3&1 \\ 1 & 3 \end{bmatrix}$ Consider the set $S$ of all vectors $\begin{pmatrix} x\\ y \end{pmatrix}$ such that $a^{2}+b^{2}=1$ where $\begin{pmatrix} a\\ b \end{pmatrix}= P \begin{pmatrix} x\\ y \end{pmatrix}$. Then $S$ is 1. A circle of radius $\sqrt{10}$. 2. A circle of radius $\frac{1}{\sqrt{10}}$ 3. An ellipse with major axis along $\begin{pmatrix} 1\\ 1 \end{pmatrix}$ 4. An ellipse with minor axis along $\begin{pmatrix} 1\\ 1 \end{pmatrix}$
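Not part of the original exam question, but the geometry is easy to verify numerically: $P$ is symmetric with eigenvalue 4 along $(1,1)$ and eigenvalue 2 along $(1,-1)$, and $S = \{v : \|Pv\| = 1\}$, so the semi-axis along $(1,1)$ has length $1/4$ and the one along $(1,-1)$ has length $1/2$; the shorter (minor) axis lies along $(1,1)$. A quick numpy sanity check:

```python
import numpy as np

P = np.array([[3.0, 1.0], [1.0, 3.0]])
vals, vecs = np.linalg.eigh(P)
print(vals)                 # [2. 4.]: eigenvalues of the symmetric P

# Along a unit eigenvector e with eigenvalue lam, |P (t e)| = lam * t,
# so t = 1/lam gives a point of S: semi-axes 1/2 (along (1,-1), up to
# sign) and 1/4 (along (1,1)), i.e. the minor axis is along (1,1).
for lam, e in zip(vals, vecs.T):
    v = e / lam
    print(np.linalg.norm(P @ v), e)   # norm 1.0 in each axis direction
```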
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9904289841651917, "perplexity": 158.73619447198203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00150.warc.gz"}
https://aas.aanda.org/articles/aas/full/1999/01/ds1595/node3.html
# 3 HR 1094

Hill & Blake (1996) discovered that the peculiar CP star HR 1094 (= HD 22316) possesses a fairly strong magnetic field whose effective longitudinal field varies between -2200 and 600 gauss with an ephemeris HJD (magnetic maximum) = 2449007.589 ± 0.130 + (2.9761 ± 0.0014) E. This star is one of the few CP stars with Co II lines in its spectrum and is also very Cl overabundant (Sadakane 1992).

Seven, 35, and 46 observations with HD 23383 as the comparison star and HD 23594 as the check star were obtained respectively in the 1995-96, 1996-97, and 1997-98 observing seasons. Another 12 observations were obtained in 1997-98 using HD 21447 as the comparison star and HD 20536 as the check star. Although Hipparcos photometry (Adelman et al. 1998) suggested that the use of the latter stars was preferable to the use of the former, the standard deviations of the means do not confirm this.

Scargle periodograms of the larger set of HR 1094 observations suggest several possible periods, of which only 2.9749 days is compatible with the magnetic field variations. As the light curves are very similar for this period and that of Hill & Blake, and as the latter period phases the light variations better relative to that of the magnetic field, I have adopted their ephemeris.

Figure 1 shows the photometry as a function of phase. HR 1094 is a low-amplitude variable with the amplitudes of u and b being 0.015 mag, v being of order 0.007 mag, and y being 0.01 mag. For comparison, the magnetic extrema occur at phases 0.0 and 0.5. The minimum of u is at phase 0.3 and its maximum at 0.8. The light curve for v is almost constant with a weak maximum near phase 0.5. The light curves for b and y are in phase with maxima near phase 0.5. Thus the photometry suggests that HR 1094 may have complex surface abundance patterns.

Figure 1: FCAPT uvby photometry of HR 1094 plotted according to the Hill & Blake (1996) ephemeris: HJD (magnetic maximum) = 2449007.589 + 2.9761 E
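Phase-folding observations on that ephemeris is a one-line computation; here is a sketch (the sample HJD values below are made up for illustration, not real FCAPT data):

```python
# Fold observation times on the Hill & Blake (1996) ephemeris:
# HJD(magnetic maximum) = 2449007.589 + 2.9761 E
T0, P = 2449007.589, 2.9761       # epoch of magnetic maximum, period (days)

def phase(hjd):
    return ((hjd - T0) / P) % 1.0  # phase 0 corresponds to magnetic maximum

for hjd in (2450720.75, 2450721.80):   # invented sample times
    print(hjd, round(phase(hjd), 3))
```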
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8629991412162781, "perplexity": 2692.132528423643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158766.53/warc/CC-MAIN-20180922221246-20180923001646-00390.warc.gz"}
http://www.thespectrumofriemannium.com/2012/10/07/log036-action-and-relativity/
# LOG#036. Action and relativity.

The hamiltonian formalism and the hamiltonian H in special relativity have some issues with the definition. In the case of the free particle, one possible definition, not completely covariant, is the relativistic energy

$$H = E = \gamma mc^2 = \sqrt{p^2c^2 + m^2c^4}$$

There are two other interesting scalars in classical relativistic theories. They are the lagrangian L and the action functional S. The lagrangian is obtained through a Legendre transformation from the hamiltonian:

$$L = p\dot{x} - H$$

From the hamiltonian, we get the velocity using the so-called hamiltonian equation:

$$v = \dot{x} = \frac{\partial H}{\partial p} = \frac{pc^2}{E}$$

Then,

$$p = \gamma m v = \frac{mv}{\sqrt{1 - v^2/c^2}}$$

and finally

$$L = pv - H = -mc^2\sqrt{1 - \frac{v^2}{c^2}} = -\frac{mc^2}{\gamma}$$

The action functional is the time integral of the lagrangian:

$$S = \int L\,\mathrm{d}t$$

However, let me point out that the above hamiltonian in SR has some difficulties in gauge field theories. Indeed, it is quite easy to derive that a more careful and reasonable election for the hamiltonian in SR should be zero! In the case of the free relativistic particle, we obtain

$$S = -mc^2\int \sqrt{1 - \frac{v^2}{c^2}}\,\mathrm{d}t$$

Using the relation between time and proper time (the time dilation formula):

$$\mathrm{d}t = \gamma\,\mathrm{d}\tau$$

direct substitution provides

$$S = -mc^2\int \mathrm{d}\tau$$

And defining the infinitesimal proper length in spacetime as $\mathrm{d}s = c\,\mathrm{d}\tau$, we get the simple and wonderful result:

$$S = -mc\int \mathrm{d}s$$

Sometimes, the covariant lagrangian for the free particle is also obtained from the following argument. The proper length is defined as

$$s = \int \mathrm{d}s, \qquad \mathrm{d}s^2 = c^2\mathrm{d}t^2 - \mathrm{d}x^2 - \mathrm{d}y^2 - \mathrm{d}z^2$$

The invariant in spacetime is related with the proper time in this way:

$$\mathrm{d}s = c\,\mathrm{d}\tau$$

Thus, dividing by $\mathrm{d}t$, so that $\dfrac{\mathrm{d}s}{\mathrm{d}t} = c\sqrt{1 - \dfrac{v^2}{c^2}}$, the free coordinate action for the free particle would be:

$$S = -mc\int \frac{\mathrm{d}s}{\mathrm{d}t}\,\mathrm{d}t$$

Note that, since the election of time "t" is "free", we can choose any parameter $\tau$ to obtain the generally covariant free action:

$$S = -mc\int \sqrt{\eta_{\mu\nu}\,\frac{\mathrm{d}x^\mu}{\mathrm{d}\tau}\frac{\mathrm{d}x^\nu}{\mathrm{d}\tau}}\,\mathrm{d}\tau$$

Remark: the (rest) mass is the "coupling" constant for the free particle proper length to guess the free lagrangian

$$L = -mc\sqrt{\dot{x}_\mu \dot{x}^\mu}$$

Now, we can see from this covariant action that the relativistic hamiltonian should be a feynmanity! From the equations of motion, the canonical momentum is

$$p_\mu = \frac{\partial L}{\partial \dot{x}^\mu} = -\frac{mc\,\dot{x}_\mu}{\sqrt{\dot{x}_\nu\dot{x}^\nu}}$$

The covariant hamiltonian $\mathcal{H}$, different from H, can be built in the following way:

$$\mathcal{H} = p_\mu\dot{x}^\mu - L = 0$$

The meaning of this result is hidden in the next identity (Noether identity or "hamiltonian constraint" in some contexts):

$$p_\mu p^\mu - m^2c^2 = 0$$

since

$$p_\mu p^\mu = \frac{m^2c^2\,\dot{x}_\mu\dot{x}^\mu}{\dot{x}_\nu\dot{x}^\nu} = m^2c^2$$

This strange fact, that in SR a feynmanity appears as the hamiltonian, is related to the Noether identity for the free relativistic lagrangian; it is, indeed, a consequence of the hamiltonian constraint and the so-called reparametrization invariance. Note, in addition, that the free relativistic particle would also be invariant under diffeomorphisms if we were to make the metric space-time dependent, i.e., if we make the substitution $\eta_{\mu\nu} \to g_{\mu\nu}(x)$. This last result is useful and important in general relativity, but we will not discuss it further in this moment.

In summary, of the two possible hamiltonians in special relativity, the natural and more elegant one (due to covariance/invariance) is the second one. Moreover, the free particle lagrangian and action are:

$$L = -mc\frac{\mathrm{d}s}{\mathrm{d}t} = -mc^2\sqrt{1-\frac{v^2}{c^2}}, \qquad S = -mc\int \mathrm{d}s$$

Remark: The true covariant lagrangian dynamics in SR is a "constrained" dynamics, i.e., a dynamics where we are underdetermined. There are more variables than equations as a result of a large set of symmetries (reparametrization invariance and, in the case of local metrics, we also find diffeomorphism invariance). The dynamical equations of motion, for a first order lagrangian (e.g., the free particle we have studied here), read for the lagrangian formalism:

$$\frac{\mathrm{d}}{\mathrm{d}\tau}\frac{\partial L}{\partial \dot{x}^\mu} - \frac{\partial L}{\partial x^\mu} = 0$$

On the other hand, for the hamiltonian formalism, the dynamical equations are:

$$\dot{x}^\mu = \frac{\partial \mathcal{H}}{\partial p_\mu}, \qquad \dot{p}_\mu = -\frac{\partial \mathcal{H}}{\partial x^\mu}$$
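The Legendre-transform computation above is easy to verify symbolically. Here is a small sympy sketch (my own, not from the original post); it should recover $L = -mc^2\sqrt{1-v^2/c^2}$ from the relativistic hamiltonian:

```python
import sympy as sp

m, c, v, p = sp.symbols('m c v p', positive=True)

# Relativistic free-particle hamiltonian H(p) = sqrt(p^2 c^2 + m^2 c^4).
H = sp.sqrt(p**2 * c**2 + m**2 * c**4)

# Hamilton's equation: v = dH/dp; invert it to get p as a function of v.
p_of_v = sp.solve(sp.Eq(sp.diff(H, p), v), p)[0]

# Legendre transform back: L = p v - H, which should simplify to
# -m*c*sqrt(c**2 - v**2), i.e. -m c^2 sqrt(1 - v^2/c^2).
L = sp.simplify(p_of_v * v - H.subs(p, p_of_v))
print(L)
```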
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9796688556671143, "perplexity": 513.6969388747729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301217.83/warc/CC-MAIN-20220119003144-20220119033144-00031.warc.gz"}
https://mathleadershipcorps.com/question/which-of-the-following-numbers-are-cubes-of-even-numbers-a-729-b-1000-c-2744-d-6859-19278766-6/
## Which of the following numbers are cubes of even numbers?

a) 729
b) 1000
c) 2744
d) 6859
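A quick check (the arithmetic is mine, not part of the problem page): $729 = 9^3$ and $6859 = 19^3$ are cubes of odd numbers, while $1000 = 10^3$ and $2744 = 14^3$ are cubes of even numbers, so (b) and (c). In code:

```python
# Check which options are cubes of even integers.
def even_cube_root(n):
    r = round(n ** (1 / 3))
    return r if r ** 3 == n and r % 2 == 0 else None

for n in (729, 1000, 2744, 6859):
    # 729 -> None (9^3), 1000 -> 10, 2744 -> 14, 6859 -> None (19^3)
    print(n, even_cube_root(n))
```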
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8036131262779236, "perplexity": 2218.6240824013494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359082.78/warc/CC-MAIN-20211201022332-20211201052332-00290.warc.gz"}
https://proofwiki.org/wiki/Bernstein%27s_Theorem_on_Unique_Global_Solution_to_y%27%27%3DF(x,y,y%27)
Bernstein's Theorem on Unique Global Solution to y''=F(x,y,y') Theorem Let $F$ and its partial derivatives $F_y, F_{y'}$ be real functions, defined on the closed interval $I = \closedint a b$. Let $F, F_y, F_{y'}$ be continuous at every point $\tuple {x, y}$ for all finite $y'$. Suppose there exists a constant $k > 0$ such that: $\map {F_y} {x, y, y'} > k$ Suppose there exist real functions $\alpha = \map \alpha {x, y} \ge 0$, $\beta = \map \beta {x, y}\ge 0$ bounded in every bounded region of the plane such that: $\size {\map F {x, y, y'} } \le \alpha y'^2 + \beta$ Then one and only one integral curve of equation $y'' = \map F {x, y, y'}$ passes through any two points $\tuple {a, A}$ and $\tuple {b, B}$ such that $a \ne b$. Proof Lemma 1 (Uniqueness) Aiming for a contradiction, suppose there are two integral curves $y = \map {\phi_1} x$ and $y = \map {\phi_2} x$ such that: $\map {y''} x = \map F {x, y, y'}$ Define a mapping $\delta: I \to S \subset \R$: $\map \delta x = \map {\phi_2} x - \map {\phi_1} x$ From definition it follows that: $\map \delta a = \map \delta b = 0$ Then the second derivative of $\delta$ yields: $\displaystyle \map {\delta''} x$ $=$ $\displaystyle \map {\phi_2''} x - \map {\phi_1''} x$ Definition of $\delta$ $\displaystyle$ $=$ $\displaystyle \map F {x, \phi_2, \phi_2'} - \map F {x, \phi_1, \phi_1'}$ as $y'' = \map F {x, y', y''}$ $\displaystyle$ $=$ $\displaystyle \map F {x, \phi_1 + \delta, \phi_1' + \delta'} - \map F {x, \phi_1, \phi_1'}$ Definition of $\delta$ $(1):\quad$ $\displaystyle$ $=$ $\displaystyle \map \delta x F_y^* + \map {\delta'} x F_{y'}^*$ Multivariate Mean Value Theorem with respect to $y, y'$ where: $F_y^* = \map {F_y} {x, \phi_1 + \theta \delta, \phi_1' + \theta \delta'}$ $F_{y'}^* = \map {F_{y'} } {x, \phi_1 + \theta \delta, \phi_1' + \theta \delta'}$ and $0 < \theta < 1$. Suppose: $\forall x \in I: \map {\phi_2} x \ne \map {\phi_1} x$ Then there are two possibilities for $\delta$: $\map \delta x$ attains a positive maximum within $\tuple {a, b}$ $\map \delta x$ attains a negative minimum within $\tuple {a, b}$. Denote this point by $\xi$. Suppose $\xi$ denotes a maximum. Then: $\map {\delta''} \xi \le 0$ $\map \delta \xi > 0$ $\map {\delta'} \xi = 0$ These, together with $(1)$, imply that $F_y^* \le 0$. This is in contradiction with assumption. For the minimum the inequalities are reversed, but the last equality is the same. 
Therefore, it must be the case that:

$\map {\phi_1} x = \map {\phi_2} x$

$\Box$

Lemma 2

Suppose $\map {y''} x = \map F {x, y, y'}$ for all $x \in \closedint a c$, where:

$\map y a = a_1$
$\map y c = c_1$

Then the following bound holds:

$\size {\map y x - \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a} } \le \dfrac 1 k \max \limits_{a \mathop \le x \mathop \le b} \size {\map F {x, \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a}, \dfrac {c_1 - a_1} {c - a} } }$

Proof

As a consequence of $y'' = \map F {x, y, y'}$ we have:

$\displaystyle y''$ $=$ $\displaystyle \map F {x, y, y'}$
$\displaystyle$ $=$ $\displaystyle \map F {x, y, y'} - \map F {x, \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a}, y'} + \map F {x, \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a}, y'}$
$(2):\quad$ $\displaystyle$ $=$ $\displaystyle \sqbrk {\map y x - \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a} } \map {F_y} {x, \psi, y'} + \map F {x, \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a}, y'}$ Multivariate Mean Value Theorem with respect to $y$

where:

$\psi = \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a} + \theta \sqbrk {\map y x - \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a} }$

and $0 < \theta < 1$.

Note that the term $\chi$, defined as:

$\chi = \map y x - \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a}$

vanishes at $x = a$ and $x = c$. Unless $\chi$ is identically zero, there exists a point $\xi \in \openint a c$ such that one of the following holds:

$\chi$ attains a positive maximum at $\xi$
$\chi$ attains a negative minimum at $\xi$.

In the first case:

$\map {y''} \xi \le 0$, $\map {y'} \xi = \dfrac {c_1 - a_1} {c - a}$

which implies:

$\displaystyle 0$ $\ge$ $\displaystyle \map {y''} \xi$
$\displaystyle$ $=$ $\displaystyle \map F {\xi, \dfrac {a_1 \paren {c - \xi} + c_1 \paren {\xi - a} } {c - a}, \dfrac {c_1 - a_1} {c - a} } + \map {F_y} {\xi, \map \psi \xi, \map {y'} \xi} \sqbrk {\map y \xi - \dfrac {a_1 \paren {c - \xi} + c_1 \paren {\xi - a} } {c - a} }$ equation $(2)$
$\displaystyle$ $\ge$ $\displaystyle \map F {\xi, \dfrac {a_1 \paren {c - \xi} + c_1 \paren {\xi - a} } {c - a}, \dfrac {c_1 - a_1} {c - a} } + k \sqbrk {\map y \xi - \dfrac {a_1 \paren {c - \xi} + c_1 \paren {\xi - a} } {c - a} }$ Assumption of Theorem

Hence:

$\map y \xi - \dfrac {a_1 \paren {c - \xi} + c_1 \paren {\xi - a} } {c - a} \le -\dfrac 1 k \map F {\xi, \dfrac {a_1 \paren {c - \xi} + c_1 \paren {\xi - a} } {c - a}, \dfrac {c_1 - a_1} {c - a} }$

The negative minimum part is proven analogously. Hence, the following holds:

$\size {\map y x - \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a} } \le \dfrac 1 k \max \limits_{a \mathop \le x \mathop \le b} \size {\map F {x, \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a}, \dfrac {c_1 - a_1} {c - a} } }$

$\Box$

Lemma 3

Suppose that for $x \in I$:

$\map {y''} x = \map F {x, y, y'}$

where:

$\map y a = a_1$
$\map y c = c_1$

Then:

$\exists M \in \R: \forall x \in I: \size {\map {y'} x - \dfrac {c_1 - a_1} {c - a} } \le M$

Proof

Let $\AA$ and $\BB$ be the least upper bounds of $\map \alpha {x, y}$ and $\map \beta {x, y}$ respectively in the rectangle $a \le x \le c$, $\size y \le m + \max \set {\size {a_1}, \size {c_1} }$, where:

$m = \dfrac 1 k \max \limits_{a \mathop \le x \mathop \le b} \size {\map F {x, \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a}, \dfrac {c_1 - a_1} {c - a} } }$

Suppose that $\AA \ge 1$.
Let $u, v$ be real functions such that $(3):\quad$ $\displaystyle \map y x - \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a} + m$ $=$ $\displaystyle \dfrac {\Ln u} {2 \AA}$ $(4):\quad$ $\displaystyle -\map y x + \dfrac {a_1 \paren {c - x} + c_1 \paren {x - a} } {c - a} + m$ $=$ $\displaystyle \dfrac {\Ln v} {2 \AA}$ Due to Lemma 2, for $x \in I$ the left hand sides of $(3)$ and $(4)$ are not negative. Thus: $\forall x \in I: u, v \ge 1$ Differentiate equations $(3)$ and $(4)$ with respect to $x$: $(5):\quad$ $\displaystyle \map {y'} x - \dfrac {c_1 - a_1} {c - a}$ $=$ $\displaystyle \dfrac {u'} {2 \AA u}$ $(6):\quad$ $\displaystyle -\map {y'} x + \dfrac {c_1 - a_1} {c - a}$ $=$ $\displaystyle \dfrac {v'} {2 \AA v}$ Differentiate again: $(7):\quad$ $\displaystyle \map {y''} x$ $=$ $\displaystyle \dfrac {u''} {2 \AA u} - \dfrac {u'^2} {2 \AA u^2}$ $(8):\quad$ $\displaystyle -\map {y''} x$ $=$ $\displaystyle \dfrac {v''} {2 \AA v} - \dfrac {v'^2} {2 \AA v^2}$ By assumption: $\displaystyle \size {\map F {x, y, y'} }$ $=$ $\displaystyle \size {\map {y''} x}$ $\displaystyle$ $\le$ $\displaystyle \map \alpha {x, y} \map {y'^2} x + \map \beta {x, y}$ $\displaystyle$ $\le$ $\displaystyle \AA \map {y'^2} x + \BB$ $\displaystyle$ $\le$ $\displaystyle 2 \AA \map {y'^2} x + \BB$ $\displaystyle$ $=$ $\displaystyle 2 \AA {\paren {\map {y'} x - \dfrac {c_1 - a_1} {c - a} }^2} + 2 \AA \paren {\dfrac {c_1 - a_1} {c - a} }^2 - 2 \map {y'} x \dfrac {c_1 - a_1} {c - a} + \BB$ $(9):\quad$ $\displaystyle$ $\le$ $\displaystyle 2 \AA \paren {\map {y'} x - \dfrac {c_1 - a_1} {c - a} }^2 + \BB_1$ where: $\BB_1 = \BB + 2 \AA \paren {\dfrac {c_1 - a_1} {c - a} }^2$ Then: $\displaystyle y''$ $=$ $\displaystyle \dfrac {u''} {2 \AA u} - \dfrac {u'^2} {2 \AA u^2}$ Equation $(7)$ $\displaystyle$ $\ge$ $\displaystyle -2 \AA \paren {\map {y'} x - \dfrac {c_1 - a_1} {c - a} }^2 - \BB_1$ Inequality $(9)$ $\displaystyle$ $=$ $\displaystyle -2 \AA \paren {\dfrac {u'} {2 \AA u} }^2 - \BB_1$ Equation $(5)$ Multiply the inequality by $2 \AA u$ and simplify: $\displaystyle u''$ $\ge$ $\displaystyle -2 \AA \BB_1 u$ Similarly: $\displaystyle y''$ $=$ $\displaystyle - \dfrac {v''} {2 \AA v} + \dfrac {v'^2} {2 \AA v^2}$ Equation $(8)$ $\displaystyle$ $\le$ $\displaystyle 2 \AA \paren {\map {y'} x - \dfrac {c_1 - a_1} {c - a} }^2 + \BB_1$ Inequality $(9)$ $\displaystyle$ $=$ $\displaystyle 2 \AA \paren {\dfrac {v'} {2 \AA v} }^2 + \BB_1$ Equation $(6)$ Multiply the inequality by $-2 \AA v$ and simplify: $\displaystyle v''$ $\ge$ $\displaystyle -2 \AA \BB_1 v$ Note that $\map u a = \map u c$ and $\map v a = \map v c$. From Intermediate Value Theorem it follows that $\exists K \subset I: \forall x_0 \in K: \map {y'} {x_0} - \dfrac {c_1 - a_1} {c - a} = 0$ Points $x_0$ divide $I$ into subintervals. Due to $(5)$ and $(6)$ both $\map {u'} x$ and $\map {v'} x$ maintain sign in the subintervals and vanish at one or both endpoints of each subinterval. Let $J$ be one of the subintervals. Let functions $\map {u'} x$, $\map {v'} x$ be zero at $\xi$, the right endpoint. The quantity $\map {y'} x - \dfrac {c_1 - a_1} {c - a}$ has to be either positive or negative. Suppose it is positive in $J$. From $(5)$, $u'$ is not negative. 
Multiply both sides of $(10)$ by $u'$:
$u'' u' \ge -2 \AA \BB_1 u u'$
Integrating this from $x \in J$ to $\xi$, together with $\map {u'} \xi = 0$, yields:
$-\map {u'^2} x \ge -2 \AA \BB_1 \paren {\map {u^2} \xi - \map {u^2} x}$
Then:
$\map {u'^2} x \le 2 \AA \BB_1 \paren {\map {u^2} \xi - \map {u^2} x} \le 2 \AA \BB_1 \map {u^2} \xi$
By equation $(3)$:
$\map {u'^2} x \le 2 \AA \BB_1 \exp \sqbrk {4 \AA \paren {m + \map y \xi - \map \ell \xi} }$
By Lemma 2, $\map y \xi - \map \ell \xi \le m$, so:
$\map {u'^2} x \le 2 \AA \BB_1 e^{8 \AA m}$
Since $u \ge 1$:
$\dfrac {\map {u'^2} x} {\map {u^2} x} \le 2 \AA \BB_1 e^{8 \AA m}$
By equation $(5)$:
$4 \AA^2 \paren {\map {y'} x - \dfrac {c_1 - a_1} {c - a} }^2 \le 2 \AA \BB_1 e^{8 \AA m}$
Hence:
$\forall x \in J: \size {\map {y'} x - \dfrac {c_1 - a_1} {c - a} } \le \sqrt {\dfrac {\BB_1} {2 \AA} } e^{4 \AA m}$
A similar argument, with $v$ in place of $u$, covers the case where the quantity is negative.
$\Box$

Consider a plane with axes denoted by $x$ and $y$.
Put the point $A \tuple {a, a_1}$. Through this point draw an arc of the integral curve such that $\map {y'} a = 0$. On this arc put another point $D \tuple {d, d_1}$.
For $x \ge d$ draw the straight line $y = d_1$.
Put the point $B \tuple {b, b_1}$. For $y \ge d_1$ draw the straight line $x = b$.
Denote the intersection of these two straight lines by $Q$. Then the broken line $DQB$ connects the points $D$ and $B$.
Choose any point of $DQB$ and denote it by $P \tuple {\xi, \xi_1}$.
Consider a family of integral curves $y = \map \phi {x, \alpha}$ passing through the point $A$, where $\alpha$ is the angle of inclination of the curve at $A$, so that $\tan \alpha = \map {y'} a$.
For $\alpha = 0$ the integral curve coincides with $AD$.
Suppose the point $P$ is sufficiently close to the point $D$. By Lemma 1, there exists a unique curve $AP$. Then $\alpha$ can be found uniquely from:
$d_1 = \map \phi {\xi, \alpha}$
Due to uniqueness and continuity, $\xi$ is a monotonic function of $\alpha$. Hence, $\alpha$ is a monotonic function of $\xi$.
Put a point $R$ between $D$ and $Q$.
Suppose that every point of $DR$, except possibly $R$ itself, can be reached by the aforementioned procedure.
As $\xi$ approaches the abscissa $r$ of $R$, $\alpha$ monotonically approaches a limit. If this limit is different from $\pm \dfrac \pi 2$, the point $R$ is attained.
By assumption, $R$ is not attained. Thus:
$\displaystyle \lim_{\xi \mathop \to r} \alpha = \pm \dfrac \pi 2$
In other words, as $P$ approaches $R$, the derivative at $x = a$ of the integral curve $\map y x$ joining $A$ to $P$ becomes unbounded.
This contradicts the bounds from Lemmas 2 and 3, together with the fact that the difference of the abscissas of $A$ and $P$ does not approach $0$.
Therefore, $R$ can be reached.
A similar argument can be repeated for the line segment $QB$.
$\blacksquare$

Source of Name

This entry was named for Sergei Natanovich Bernstein.
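As a quick editorial illustration of the Lemma 2 bound (this example is not part of the original entry), consider the boundary value problem $y'' = y$ on $\closedint 0 1$ with $\map y 0 = 0$ and $\map y 1 = 1$. Here $\map F {x, y, y'} = y$, so $F_y = 1$ and we may take $k = 1$; the interpolating line is $\map \ell x = x$ with slope $1$. Lemma 2 then gives:
$\size {\map y x - x} \le \dfrac 1 1 \max \limits_{0 \mathop \le x \mathop \le 1} \size {\map F {x, x, 1} } = \max \limits_{0 \mathop \le x \mathop \le 1} \size x = 1$
The exact solution is $\map y x = \dfrac {\sinh x} {\sinh 1}$, for which $0 \le x - \map y x < 1$ on $\closedint 0 1$, consistent with the bound.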
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.989980161190033, "perplexity": 185.26955944831573}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370526982.53/warc/CC-MAIN-20200404231315-20200405021315-00127.warc.gz"}
http://math.stackexchange.com/questions/33680/on-maximizing-a-function
# On maximizing a function

I have a function as below:
$$f(n) = \sum_{i}{-\Big( (1-q_i)^{n-1} - b \Big)^2}$$
How do I find an integer $n\in[0,N]$ that maximizes $f(\cdot)$? Here, $q_i\in[0,1]$, $b\in[0,1]$ and $N \gg 1$. Or which area should I look into to solve this problem?

A few questions: is $n$ real or is it an integer? what do you know about $b$? what about $q_i$? what is the range on your sum? The reason I ask is that if your sum range is small and $n$ is an integer (with a small $N$), then it may be easiest just to calculate every possible value. – rcollyer Apr 18 '11 at 19:53

edited accordingly. – Richard Apr 18 '11 at 21:01

In that case, you'd want to look at integer programming. – J. M. Apr 18 '11 at 21:08

By looking at the function, it seems that you only have to compute $f(n)$ for a few values of $n$. Let $q \in \mathbb{R}^k$ and the solution be $n^*$. Now, $n^* \neq 0$ since $(1-q_i) \in [0,1]$. Also, note that $f(1) = -k(1-b)^2$. Now, for $n>1$, you'll have some fluctuations in $f(n)$ due to $b$, but as $n$ increases, $f(n)$ should monotonically decrease to $-kb^2$. Therefore, finding the $\max$ of the first $m=20$ values should do the trick (you can experiment with the value of $m$).
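Since $N \gg 1$ but $f(n)$ settles down quickly, the suggestion in the answer translates directly into a short scan. Here is a minimal Python sketch; the values of q, b and the scan length m below are made up purely for illustration:

import numpy as np

def f(n, q, b):
    # f(n) = -sum_i ((1 - q_i)^(n - 1) - b)^2
    return -np.sum(((1.0 - q) ** (n - 1) - b) ** 2)

q = np.array([0.1, 0.3, 0.5])  # hypothetical q_i values
b = 0.4                        # hypothetical b
m = 50                         # scan length; experiment with this, as the answer suggests
best_n = max(range(m + 1), key=lambda n: f(n, q, b))
print(best_n, f(best_n, q, b))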
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9441685676574707, "perplexity": 169.96616078958598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860110805.57/warc/CC-MAIN-20160428161510-00121-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/inclined-glass-wedge-diffraction-help.598792/
# Inclined Glass Wedge Diffraction - Help!

1. Apr 21, 2012

### Zaknife

1. The problem statement, all variables and given/known data

A monochromatic beam of light is incident on the surface of a glass wedge whose upper face is inclined at an angle $\gamma = 0.05^\circ$ to the base. In reflected light a series of interference fringes is observed, and the distance between adjacent dark fringes is $\Delta x = 0.21$ mm. Calculate the wavelength $\lambda$ of the incident light. The refractive index of the glass is $n = 1.5$.

2. Relevant equations

$$d \sin \alpha = n \lambda$$

3. The attempt at a solution

I need some guidance! Is this a thin-film problem?

2. Apr 21, 2012

### rude man

Need a picture of this setup.
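The thread stops without a solution, so the following is an editorial sketch of the standard thin-film wedge treatment (it assumes near-normal incidence, which the problem statement does not spell out). Adjacent dark fringes in reflection correspond to the optical thickness $nt$ of the wedge changing by $\lambda/2$, and the wedge thickness at distance $x$ from the apex is $t \approx \gamma x$ for small $\gamma$, so:
$$\Delta x = \frac{\lambda}{2 n \gamma} \quad\Longrightarrow\quad \lambda = 2 n \gamma \, \Delta x = 2 \times 1.5 \times \left( 0.05 \times \frac{\pi}{180} \right) \times 0.21\ \text{mm} \approx 5.5 \times 10^{-7}\ \text{m} \approx 550\ \text{nm}$$
which lands comfortably in the visible range.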
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8055141568183899, "perplexity": 3509.1892942346294}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948563083.64/warc/CC-MAIN-20171215021156-20171215041156-00709.warc.gz"}
https://physics.stackexchange.com/questions/53638/what-was-ticking-just-after-the-big-bang
# What was ticking just after the Big Bang?

When reading about the Big Bang, I see phrases like 3 trillionths of a second after... So, what was ticking to give a time scale like this? We define time now in terms of atomic oscillations, but these effects occurred before there were any atoms, or anything else, oscillating to give a reference to measure time against. A similar question has already been posted, but the esoteric answers there do not give me a way of visualising what information these time intervals are really intended to convey.

• "We define time now in terms of atomic oscillations," Er...no. We define units of time in terms of atomic oscillations. That's subtly different. Nothing fundamentally prevents us from measuring lengths of time shorter than those oscillations, and indeed we measure things like the lifetimes of hadronic states to times very much shorter than a single oscillation of any atomic system. Feb 11, 2013 at 16:42
• Thank you Qmechanic for your answer. I am sorry for my loose use of "time", but that is not really the point. Perhaps I should have asked how, say, a millisecond immediately after the Big Bang, which I cannot visualise, relates to a millisecond today, which, as an amateur radio licensee, I am happily familiar with. Feb 12, 2013 at 10:41
• Sorry, got the thanks wrong, should have been: thank you Qmechanic for the edits, and dmckee for the answer. Feb 12, 2013 at 14:35
• It is a slightly different question to ask whether time meant the same thing in the early epochs as it does now. I believe that it can be shown exactly back to shortly after the CMB decoupled, and there is no reason to assume anything else for earlier epochs. Feb 12, 2013 at 16:26

As dmckee points out in his comment on your question, there's a difference between defining a unit of time and defining time. There are several ways to define our units of time. We could define them in terms of atomic transitions or in terms of distant stars or from pulsars or from the sun. These are all different ways of measuring time for different purposes. But we can always imagine breaking time up into arbitrarily small chunks. By this I mean we don't as yet know whether time and space themselves are fundamentally quantized, or if there is another physical limitation which will make measuring arbitrarily small units of time and space impossible. So in theory, we can always hypothesize about what happens at arbitrarily small time increments. dmckee made an astute point that time is not the same as units of time. Today, we have a very well-defined notion of time since the time scale of the vibration of atoms is much smaller than the age of the universe. If I think of time as a coordinate in $ds^2=dt^2-dx^2-dy^2-dz^2$ (Minkowski spacetime, for example), the (timelike) interval between two events $dt$ is still a well-defined concept for very small intervals. By that, I mean suppose the time interval for an atomic vibration is $\Delta t=1$ (arbitrary units); I may still talk about time on scales of $\Delta t=0.001$ since $dt$ can be made arbitrarily small*. I just won't be able to build a clock that uses atomic vibrations to measure time, since the clock itself would not have the necessary precision to differentiate between time $t=2.000$ and $t=2.001$. This leads to the question of what we can use in place of a clock that measures in atomic vibrations. In cosmology we may choose proxies for the time coordinate $t$ such as temperature, size of the universe, or average density.
(This is of course assuming we possess a decent understanding of the physics at these epochs and of how these quantities relate to each other.) For example, we believe that the universe was radiation dominated from the era of reheating until matter-radiation equality, and the physics is well understood. In a radiation-dominated universe, I can calculate the scale factor $a(t)\propto t^{1/2}$, temperature $T(a)\propto 1/a$, and density $\rho(a)\propto a^{-4}$. Using these relations, I may use time, size, temperature and density interchangeably. These relations allow us to make statements about time scales smaller than clocks are able to measure.

$*$ By arbitrarily small, I mean classically arbitrarily small. Quantum effects will be a whole other interesting story.
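To make the temperature-as-clock idea concrete, here is an editorial order-of-magnitude addition (the numerical prefactor depends on the effective number of relativistic species, so only the scaling should be taken seriously). Combining the radiation-era relations above:
$$a \propto t^{1/2}, \quad T \propto \frac{1}{a} \quad\Longrightarrow\quad T \propto t^{-1/2}, \qquad t \sim 1\ \text{s} \times \left( \frac{T}{1\ \text{MeV}} \right)^{-2}$$
so a phrase like "3 trillionths of a second after the Big Bang" corresponds to temperatures of roughly $T \sim (3 \times 10^{-12})^{-1/2}\ \text{MeV} \approx 6 \times 10^{5}\ \text{MeV}$, i.e. a few hundred GeV, and it is really this temperature (or the corresponding density) that such a statement pins down.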
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8131596446037292, "perplexity": 279.1309490267222}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949331.26/warc/CC-MAIN-20230330132508-20230330162508-00570.warc.gz"}
https://iwaponline.com/wst/article-abstract/70/4/627/18203/Visualized-study-on-the-interaction-between-single?redirectedFrom=PDF
The present study has been devoted to bubble–curved solid surface interaction in water, which is critical to the separation of suspended particles by air flotation. For this purpose, two particular stages of the interaction (collision and attachment) have been examined visually using high-speed photography in a laboratory-scale flotation column. The effects of the surface material and surfactant concentration on these two stages have been also studied quantitatively. The considered solid materials are the cleaned glass as hydrophilic surface and Teflon as hydrophobic surface. The experimental results show that the presence of surfactant significantly affects the collision and rebound process of a gas bubble, while there is no obvious effect of the surface material on the rebound process. An increase in surfactant concentration has been observed to suppress the rebound number and maximal distance of the bubble from the surface. Moreover, the three-phase contact time of the bubble is a strong function of the surfactant concentration and surface hydrophobicity as well as of the bubble diameter. Another important finding is that the bubble attachment is only observed at the hydrophobic Teflon surface below the surfactant CMC (critical micelle concentration). Results of this study are relevant for deep understanding of the attachment mechanism and to determine the proper conditions for a selective flotation process.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8238164186477661, "perplexity": 1003.7225220824095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203547.62/warc/CC-MAIN-20190325010547-20190325032547-00261.warc.gz"}
https://www.aip.de/de/fuer-studierende/leibniz-graduiertenschule/projekt-e
# Projekt E

## Massive Stars in Binary Systems

Prof. Dr. Wolf-Rainer Hamann (UP, advisor) and Dr. Swetlana Hubrig (AIP, co-advisor)

Background: Stars that are initially much more massive than the sun play a fundamental role in the evolution of the Universe. They are a main source of ionizing photons and generate chemical elements which they distribute by mass loss. Finally they explode as supernovae or gamma-ray bursts. However, precisely because of their stellar winds, massive stars are not well understood, especially with regard to their evolution and their feedback to the galactic ecology. Binaries are a potential key for massive-star research. Perhaps the majority of all massive stars form and evolve in close-binary systems. In such cases, their observed spectra are composed from two stars, which makes their analysis more difficult. However, as an advantage they provide additional information from the binary orbit, especially about the stellar masses. Strong global magnetic fields are found in a number of massive stars. Among the different scenarios for their origin is the possibility that such fields are generated by strong binary interactions in stellar mergers or during mass transfer.

Aims: The broad scientific goal of this project is to understand massive stars and their contribution to the evolution of galaxies and to the cosmic circuit of matter. We propose to study the roughly 50 binary systems containing a Wolf-Rayet star that are known in the Magellanic Clouds and the Galaxy, and to combine the obtained results with the information from the binary properties. These three galaxies exhibit different concentrations of heavy elements, which makes a big difference for the evolution of massive stars. From such an investigation we expect important conclusions about the evolutionary fate of massive stars, concerning single stars as well as interacting binary systems.

Methods: The spectral analysis of early-type stars with winds requires especially laborious and complex model calculations, as we have developed in Potsdam over many years of work (the Potsdam Wolf-Rayet model atmospheres, PoWR). Spectroscopic observations of many massive binary systems are available to us, often for multiple epochs. We propose to:

• analyze the known SB2 systems with a WR component in the SMC, LMC and the Galaxy, by calculating and applying PoWR models;
• plan and perform further observational campaigns to complete the data if necessary;
• study the phase-dependent observations in order to improve the spectral decomposition and the orbital data;
• fit and analyze the multi-wavelength light curves and spectral variability;
• search for and measure magnetic fields in spectropolarimetric observations of massive stars; and
• discuss the results with respect to the evolution of massive stars as single stars and as binaries, and with regard to their feedback to the galactic ecology.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8938875198364258, "perplexity": 1204.9809088712677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370519111.47/warc/CC-MAIN-20200404011558-20200404041558-00379.warc.gz"}
http://clay6.com/qa/51942/which-of-the-following-equations-is-not-correctly-formulated-
Q) Which of the following equations is not correctly formulated?

$\begin {array} {1 1} (A)\;4Zn+10HNO_3(dilute) \rightarrow 4Zn(NO_3)_2+N_2O+5H_2O \\ (B)\;3Zn+8HNO_3(dilute) \rightarrow 3Zn(NO_3)_2+2N_2O+4H_2O \\ (C)\;6Hg+8HNO_3(dilute) \rightarrow 3Hg_2(NO_3)_2+2N_2O+4H_2O \\ (D)\;Zn+2OH^-+2H_2O \rightarrow [Zn(OH)_4]^{2-}+H_2 \end {array}$

Answer: (B) $3Zn+8HNO_3(\text{dilute}) \rightarrow 3Zn(NO_3)_2+2N_2O+4H_2O$ is not correctly formulated: nitrogen does not balance, with $8$ N atoms on the left against $10$ on the right. The correctly balanced equation for dilute nitric acid is (A): $4Zn+10HNO_3(\text{dilute}) \rightarrow 4Zn(NO_3)_2+N_2O+5H_2O$.
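As a quick editorial check of this answer, count atoms on each side of $(A)$ and $(B)$:
$$\begin{array}{l|cccc} & \text{Zn} & \text{N} & \text{H} & \text{O} \\ \hline (A)\ \text{left} & 4 & 10 & 10 & 30 \\ (A)\ \text{right} & 4 & 4 \times 2 + 2 = 10 & 10 & 24 + 1 + 5 = 30 \\ (B)\ \text{left} & 3 & 8 & 8 & 24 \\ (B)\ \text{right} & 3 & 3 \times 2 + 2 \times 2 = 10 & 8 & 18 + 2 + 4 = 24 \end{array}$$
Every element balances in $(A)$, while nitrogen fails to balance in $(B)$.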
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9670910835266113, "perplexity": 1616.4144617317663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319082.81/warc/CC-MAIN-20190823214536-20190824000536-00230.warc.gz"}
http://tex.stackexchange.com/questions/59722/hyphenation-of-terms-containing-symbols-other-than-letters?answertab=votes
# Hyphenation of terms containing symbols other than letters

I'm trying to hyphenate a chemical name and have come into difficulty. The word I'm trying to hyphenate is poly(ethyleneglycol). I'm using babel and inputenc. I've tried specifying a custom hyphenation pattern using \hyphenation{poly(ethylene-glycol)} but it fails to compile. Being chemistry terminology, I wondered if this was dealt with by a chemistry package such as mhchem, but I have not found a package that does it.

I have noticed all your words "hyphenate" were missing the h, including the command \hyphenation{poly(ethylene-glycol)}. I corrected them all in your question, but perhaps your code needs to be corrected as well? Maybe it is just a misspelling problem? – Vivi Jun 13 '12 at 16:19

Have you looked at bpchem and the \IUPAC macro? – Joseph Wright Jun 13 '12 at 16:23

Hi Joseph, the \IUPAC macro didn't seem to do anything to it. Thanks everyone. – Darling Jun 14 '12 at 10:14

Technically, ( and ) are not letters, so they won't participate in hyphenation. You can make them acceptable with the following trick, which should have no adverse effect on normal input:

\lccode`\(=`\(
\lccode`\)=`\)
\hyphenation{poly(ethylene-glycol)}

A different strategy could be to define a macro:

\newcommand{\Q}[1]{(\nobreak\hspace{0pt}#1)}
\hyphenation{ethylene-glycol} % TeX incorrectly hyphenates this word

and use

poly\Q{ethyleneglycol}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8834696412086487, "perplexity": 2668.12397226975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860110356.23/warc/CC-MAIN-20160428161510-00182-ip-10-239-7-51.ec2.internal.warc.gz"}
http://mathonline.wikidot.com/second-countable-topological-spaces-are-separable-topologica
# Second Countable Topological Spaces are Separable Topo. Spaces

Recall from the Second Countable Topological Spaces page that the topological space $(X, \tau)$ is said to be second countable if there exists a countable basis $\mathcal B$ of $\tau$.

Furthermore, recall from the Separable Topological Spaces page that the topological space $(X, \tau)$ is said to be separable if it contains a countable dense subset.

We will now look at a rather nice theorem which says that every second countable topological space is a separable topological space.

Theorem 1: Let $(X, \tau)$ be a topological space. If $(X, \tau)$ is second countable then $(X, \tau)$ is separable.

• Proof: Let $(X, \tau)$ be a second countable topological space. Then there exists a countable basis $\mathcal B = \{ B_1, B_2, ..., B_n, ... \}$ of $\tau$. Since $\mathcal B$ is a basis of $\tau$ we have that every open set $U \in \tau$ can be expressed as the union of sets in some subcollection $\mathcal B^* \subseteq \mathcal B$. In particular:

(1) \begin{align} \quad U = \bigcup_{B \in \mathcal B^*} B \end{align}

• We must now construct a countable dense subset of $X$. Assume that $\mathcal B$ does not contain the empty set. (If it does contain the empty set, then we can discard it.) Then for each $B_n \in \mathcal B$ take $x_n \in B_n$ and define the set $A$ as:

(2) \begin{align} \quad A = \{ x_n : x_n \: \mathrm{is \: any \: element \: in \:} B_n, n = 1, 2, ... \} \end{align}

• Then $A$ is a countable subset of $X$ since we take one element from each set in the countable basis.

• Furthermore, for all $U \in \tau \setminus \{ \emptyset \}$ we have that $A \cap U \neq \emptyset$, because $A$ contains one element from each of the basis sets and $U$ is the union of some subcollection of the basis sets. Therefore $A$ is a dense subset of $X$.

• Hence $A$ is a countable dense subset of $X$, so $(X, \tau)$ is a separable topological space. $\blacksquare$

Note: The above theorem tells us that every second countable topological space is separable. The converse is NOT true in general! There are separable topological spaces that are not second countable (for example, the Sorgenfrey line). However, if $X$ is a metric space then separability and second countability are equivalent properties of topological spaces.
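To see the construction of Theorem 1 in a familiar setting (an editorial illustration): take $X = \mathbb{R}$ with the standard topology and the countable basis
$$\mathcal B = \{ (p, q) : p, q \in \mathbb{Q}, p < q \}$$
Choosing each $x_n \in B_n$ to be rational produces a countable set $A \subseteq \mathbb{Q}$, and every nonempty open $U$ contains some basis interval $B_n$ and hence the point $x_n \in A$, so $A$ is dense. This recovers the familiar fact that $\mathbb{R}$ is separable.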
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9969388842582703, "perplexity": 83.9684161196114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302723.60/warc/CC-MAIN-20220121040956-20220121070956-00068.warc.gz"}
http://math.stackexchange.com/questions/390570/how-to-find-the-inverse-laplace-transforms
# How to find the inverse Laplace transform?

How do you find the inverse Laplace transform of the following?
$$\frac{2 (s^2+4 s+5)^2+40}{(s^2+4 s+5)^2}$$
Separating them into complex coefficients is too long.

Let $F(s)$ denote the fraction in the post, hence $F(s)=2+40\frac1{(s^2+4s+5)^2}$. The $2$ part of $F(s)$ is the Laplace transform of twice the Dirac measure at $0$. The fraction $\frac1{s^2+4s+5}$ is a linear combination of $\frac1{s+2\pm\mathrm i}$ hence it is the Laplace transform of a linear combination of the functions $t\mapsto\exp(-(2\pm\mathrm i)t)$ on $t\geqslant0$, namely, the Laplace transform of $f$ where $f(t)=\mathrm e^{-2t}\sin(t)$ for every $t\geqslant0$. Thus, the fraction $\frac1{(s^2+4s+5)^2}$ is the Laplace transform of $f\ast f$, which might (or might not, check for yourself) be such that $(f\ast f)(t)=\frac12\mathrm e^{-2t}(\sin(t)-t\cos(t))$ for every $t\geqslant0$. Finally, the function $F$ is the Laplace transform of the measure $\nu=2\delta_0+40\mu$ where $\mu$ has density $f\ast f$ with respect to the Lebesgue measure on $t\geqslant0$. Note that $F$ is the Laplace transform of no function; only a measure will do, and one has
$$F(s)=\int_0^{+\infty}\mathrm e^{-st}\mathrm d\nu(t).$$

- Although I didn't quite get how $L(f\ast f)$ is equal to that, thanks for the answer :) – Eddy May 13 at 17:30
- It is a general fact that $L(f\ast g)=(Lf)(Lg)$, hence for $f=g$ one gets... – Did May 13 at 19:16
- Are you certain? – Eddy May 14 at 13:25
- Certain about what? (For what it is worth, as explained (and acknowledged by the OP) in the comments, the answer you accepted does not answer your question. Can you explain?) – Did May 14 at 16:56
- That $L(f\ast g)=(Lf)(Lg)$ is true. My book states that $L(f(t)\ast g(t))$ does not equal $L(f(t))\ast L(g(t))$. Are you saying the accepted answer is wrong? It's like yours. – Eddy May 14 at 17:16

The problem reduces to
$$2+\frac{40}{(s^2+4s+5)^2}$$
Let us denote the Laplace transform of $f(t)$ as $L\left(f(t)\right)=F(s)$ and the inverse as $L^{-1}\left(F(s)\right)=f(t)$.
For $L^{-1}\frac1{(s^2+4s+5)^2}$:
$$\text{If }L\left(f(t)\right)=\frac{s-a}{\{(s-a)^2+b^2\}^2},$$
$$L\left(\frac {f(t)}t\right)=\int_s^\infty \frac{s-a}{\{(s-a)^2+b^2\}^2} ds=\frac12\int_s^\infty \frac{2(s-a)}{\{(s-a)^2+b^2\}^2} ds=-\frac12\left(\frac1{(s-a)^2+b^2}\right)_s^\infty=\frac12\frac1{(s-a)^2+b^2}$$
$$\implies \frac {2f(t)}t=L^{-1}\left(\frac1{(s-a)^2+b^2}\right)=\frac{e^{at}\sin bt}{b}$$
$$\implies L^{-1}\frac{s-a}{\{(s-a)^2+b^2\}^2} =f(t)=\frac{te^{at}\sin bt}{2b}$$
$$a=0\implies L^{-1}\frac s {\{s^2+b^2\}^2} =\frac{t\sin bt}{2b}$$
As $L\{g(t)\}=G(s)\implies \int_0^t g(\tau)\,d\tau=L^{-1}\left(\frac{G(s)}s\right)$:
$$L^{-1}\frac1{(s^2+b^2)^2}=L^{-1}\left(\frac1s\cdot \frac s{(s^2+b^2)^2}\right)=\int_0^t\frac{\tau\sin b\tau}{2b}\,d\tau=\frac{\sin bt-bt\cos bt}{2b^3}$$
(integrating by parts).
Now using the shifting property $L\{g(t)\}=G(s)\implies L(e^{at}g(t))=G(s-a)$:
$$L^{-1}\frac1{\{(s-a)^2+b^2\}^2}=\frac{e^{at}(\sin bt-bt\cos bt)}{2b^3}$$
Here $s^2+4s+5=(s+2)^2+1^2\implies a=-2,b=1$.

- Sorry but what does $L^{-1}(2)=\frac 2s$ mean? – Did May 13 at 19:17
- Quite unrelated, I would say. – Did May 13 at 19:22
- @Did, not sure what you mean? How do you handle the 1st part? – lab bhattacharjee May 13 at 19:24
- Sorry but this is not/should not be the question: the question is what you mean by $L^{-1}(2)=\frac 2s$. – Did May 13 at 19:26

I have done it below; you can check whether it's useful to you. I'm not sure about it, so if you find a mistake, please fix it.
$$F(s) = \frac{2(s^2+4s+5)^2+40}{(s^2+4s+5)^2} = 2 + \frac{40}{[(s+2)^2+1]^2}$$
$$L^{-1}[2] = 2\delta(t) \qquad \left\{ \int_0^\infty 2 e^{-st} \delta(t)\,dt = 2 e^{-s \cdot 0} = 2 \right\}$$
$$G(s) = \frac{40}{[(s+2)^2+1]^2} = 40 \cdot \frac{1}{(s+2)^2+1} \cdot \frac{1}{(s+2)^2+1} = 40\, L\left[e^{-2t}\sin t\right] \cdot L\left[e^{-2t}\sin t\right]$$
$$\implies g(t) = 40\, \left(e^{-2t}\sin t\right) * \left(e^{-2t}\sin t\right) = 40 \int_0^t e^{-2(t-u)}\sin(t-u)\, e^{-2u}\sin u \,du = 40\, e^{-2t}\int_0^t (\sin t\cos u - \cos t\sin u)\sin u\,du$$
$$\implies g(t) = 40\, e^{-2t}\left[ \sin t \underbrace{\int_0^t \cos u \sin u\,du}_{I_1} - \cos t \underbrace{\int_0^t \sin^2 u\,du}_{I_2} \right]$$
$$\implies f(t) = 2\delta(t) + g(t)$$
You can evaluate the integrals $I_1$ and $I_2$ yourself.
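To double-check the closed form suggested in the accepted answer, namely $g(t) = 40 \cdot \frac12 e^{-2t}(\sin t - t \cos t) = 20\, e^{-2t}(\sin t - t \cos t)$, here is a short sympy sketch (my own verification, not taken from the thread); it confirms the forward transform:

import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Candidate inverse of G(s) = 40 / (s^2 + 4s + 5)^2
g = 20 * sp.exp(-2*t) * (sp.sin(t) - t*sp.cos(t))

# Forward Laplace transform; noconds=True drops the convergence conditions.
G = sp.laplace_transform(g, t, s, noconds=True)

print(sp.simplify(G - 40 / (s**2 + 4*s + 5)**2))  # expected output: 0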
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9802265763282776, "perplexity": 472.9419544453509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345775423/warc/CC-MAIN-20131218054935-00053-ip-10-33-133-15.ec2.internal.warc.gz"}