https://www.hepdata.net/search/?q=&collaboration=ALICE&page=1&author=Alessandro%2C+Bruno
Showing 25 of 213 results #### Evidence of rescattering effect in Pb-Pb collisions at the LHC through production of $\rm{K}^{*}(892)^{0}$ and $\phi(1020)$ mesons The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adler, Alexander ; et al. Phys.Lett. B802 (2020) 135225, 2020. Inspire Record 1762368 Measurements of $\rm{K}^{*}(892)^{0}$ and $\phi(1020)$ resonance production in Pb-Pb and pp collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV with the ALICE detector at the Large Hadron Collider are reported. The resonances are measured at midrapidity ($|y| < 0.5$) via their hadronic decay channels and the transverse momentum ($p_{\rm T}$) distributions are obtained for various collision centrality classes up to $p_{\rm T}$ = 20 GeV/$c$. The $p_{\rm T}$-integrated yield ratio $\rm{K}^{*}(892)^{0}/\rm{K}$ in Pb-Pb collisions shows significant suppression relative to pp collisions and decreases towards more central collisions. In contrast, the $\phi(1020)/\rm{K}$ ratio does not show any suppression. Furthermore, the measured $\rm{K}^{*}(892)^{0}/\rm{K}$ ratio in central Pb-Pb collisions is significantly suppressed with respect to the expectations based on a thermal model calculation, while the $\phi(1020)/\rm{K}$ ratio agrees with the model prediction. These measurements are an experimental demonstration of rescattering of $\rm{K}^{*}(892)^{0}$ decay products in the hadronic phase of the collisions. The $\rm{K}^{*}(892)^{0}/\rm{K}$ yield ratios in Pb-Pb and pp collisions are used to estimate the time duration between chemical and kinetic freeze-out, which is found to be $\sim$4-7 fm/$c$ for central collisions. The $p_{\rm T}$-differential ratios of $\rm{K}^{*}(892)^{0}/\rm{K}$, $\phi(1020)/\rm{K}$, $\rm{K}^{*}(892)^{0}/\pi$, $\phi(1020)/\pi$, $\rm{p}/\rm{K}^{*}(892)^{0}$ and $\rm{p}/\phi(1020)$ are also presented for Pb-Pb and pp collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV. These ratios show that the rescattering effect is predominantly a low-$p_{\rm T}$ phenomenon. 20 data tables $p_{\rm T}$-distributions of $\rm{K}^{*0}$ (average of particle and anti-particle) meson measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV. 
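The freeze-out duration quoted in the rescattering entry above comes from treating the loss of $\rm{K}^{*}(892)^{0}$ between chemical and kinetic freeze-out as simple exponential decay. A minimal Python sketch of that arithmetic, assuming the pp ratio approximates the chemical freeze-out value; the yield ratios below are illustrative placeholders, not the measured values, and the toy model ignores effects (e.g. Lorentz dilation, regeneration) included in the full analysis:

```python
import math

# Known K*(892)^0 mean lifetime: c*tau = hbar*c / Gamma ~ 197.3 MeV fm / 47.4 MeV
CTAU_KSTAR_FM = 197.3 / 47.4  # ~4.16 fm

def freezeout_duration(ratio_pp, ratio_pbpb, ctau=CTAU_KSTAR_FM):
    """Estimate the chemical-to-kinetic freeze-out duration (fm/c).

    Toy model: K*0 decaying before kinetic freeze-out are lost to
    rescattering, so ratio_PbPb = ratio_pp * exp(-dt / ctau),
    hence dt = ctau * ln(ratio_pp / ratio_PbPb).
    """
    return ctau * math.log(ratio_pp / ratio_pbpb)

# Illustrative inputs only (not the ALICE measurements):
dt = freezeout_duration(ratio_pp=0.33, ratio_pbpb=0.19)
print(f"estimated duration: {dt:.1f} fm/c")
```

The stronger the suppression of the Pb-Pb ratio relative to pp, the longer the inferred hadronic-phase duration; the published 4-7 fm/$c$ value comes from the full treatment, not this sketch.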
$p_{\rm T}$-distributions of $\rm{K}^{*0}$ (average of particle and anti-particle) meson measured in pp collisions at $\sqrt{s}$ = 5.02 TeV. $p_{\rm T}$-distributions of $\phi$ meson measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV. More… #### Longitudinal and azimuthal evolution of two-particle transverse momentum correlations in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adler, Alexander ; et al. Phys.Lett. B804 (2020) 135375, 2020. Inspire Record 1762340 This paper presents the first measurements of the charge independent (CI) and charge dependent (CD) two-particle transverse momentum correlators $G_{2}^{\rm CI}$ and $G_{2}^{\rm CD}$ in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV by the ALICE collaboration. The correlators are measured as a function of pair separation in pseudorapidity ($\Delta \eta$) and azimuth ($\Delta \varphi$) and as a function of collision centrality. The correlator $G_{2}^{\rm CI}$ exhibits a longitudinal broadening while undergoing a monotonic azimuthal narrowing from peripheral to central collisions. By contrast, $G_{2}^{\rm CD}$ exhibits a narrowing along both dimensions towards central events. These features are not reproduced by models such as HIJING and AMPT. However, the observed narrowing of the correlators is expected to result from the stronger transverse flow profiles produced in more central collisions and the longitudinal broadening is predicted to be sensitive to momentum currents and the shear viscosity per unit of entropy density $\eta/s$ of the matter produced in the collisions. The observed broadening is found to be consistent with the hypothesized lower bound of $\eta/s$ and is in qualitative agreement with values obtained from anisotropic flow measurements. 
12 data tables Longitudinal width evolution with the number of participants of the two-particle transverse momentum correlation $G_{2}^{\rm CI}$ in Pb--Pb collisions at $\sqrt{s_{\rm NN}}=2.76\;\text{TeV}$. The widths are extracted from bi-dimensional (2D) or projection (1D) fits of the correlation function. Two-particle transverse momentum correlation $G_{2}^{\rm CI}$ for central (0-5%) Pb--Pb collisions at $\sqrt{s_{\rm NN}}=2.76\;\text{TeV}$. Two-particle transverse momentum correlation $G_{2}^{\rm CI}$ for semi-central (30-40%) Pb--Pb collisions at $\sqrt{s_{\rm NN}}=2.76\;\text{TeV}$. More… #### Jet-hadron correlations measured relative to the second order event plane in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adler, Alexander ; et al. No Journal Information, 2019. Inspire Record 1762358 The Quark Gluon Plasma (QGP) produced in ultra relativistic heavy-ion collisions at the Large Hadron Collider (LHC) can be studied by measuring the modifications of jets formed by hard scattered partons which interact with the medium. We studied these modifications via angular correlations of jets with charged hadrons for jets with momenta 20 < $p_{\rm{T}}^{\rm{jet}}$ < 40 GeV/$c$ as a function of the associated particle momentum. The reaction plane fit (RPF) method is used in this analysis to remove the flow modulated background. The analysis of angular correlations for different orientations of the jet relative to the second order event plane allows for the study of the path length dependence of medium modifications to jets. We present the dependence of azimuthal angular correlations of charged hadrons with respect to the angle of the axis of a reconstructed jet relative to the event plane in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. The dependence of particle yields associated with jets on the angle of the jet with respect to the event plane is presented. 
Correlations at different angles relative to the event plane are compared through ratios and differences of the yield. No dependence of the results on the angle of the jet with respect to the event plane is observed within uncertainties, which is consistent with no significant path length dependence of the medium modifications for this observable. 58 data tables The near-side and away-side yield vs $p_{T}^{assoc}$ for $20<p_T^{jet}<40$ GeV/$c$ full jets of 30-50% centrality in Pb-Pb collisions. The background uncertainty is non-trivially correlated point-to-point. The correlated systematic uncertainties come from the shape uncertainty of the acceptance correction. There is an additional 5% global scale uncertainty. The differences between out-of-plane and in-plane yields and mid-plane and in-plane yields on near-side and away-side vs $p_{T}^{assoc}$ for $20<p_T^{jet}<40$ GeV/$c$ full jets of 30-50% centrality in Pb-Pb collisions. The background uncertainty is non-trivially correlated point-to-point. The correlated systematic uncertainties come from the shape uncertainty of the acceptance correction. There is an additional 5% global scale uncertainty. The ratios of out-of-plane to in-plane yields and mid-plane to in-plane yields on near-side and away-side vs $p_{T}^{assoc}$ for $20<p_T^{jet}<40$ GeV/$c$ full jets of 30-50% centrality in Pb-Pb collisions. The background uncertainty is non-trivially correlated point-to-point. The correlated systematic uncertainties come from the shape uncertainty of the acceptance correction. More… #### Measurements of inclusive jet spectra in pp and central Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adler, Alexander ; et al. No Journal Information, 2019. 
Inspire Record 1755387 This article reports measurements of the $p_{\rm{T}}$-differential inclusive jet cross-section in pp collisions at $\sqrt{s}$ = 5.02 TeV and the $p_{\rm{T}}$-differential inclusive jet yield in Pb-Pb 0-10% central collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV. Jets were reconstructed at mid-rapidity with the ALICE tracking detectors and electromagnetic calorimeter using the anti-$k_{\rm{T}}$ algorithm. For pp collisions, we report jet cross-sections for jet resolution parameters $R=0.1-0.6$ over the range $20<p_{\rm{T,jet}}<140$ GeV/$c$, as well as the jet cross-section ratios of different $R$, and comparisons to two next-to-leading-order (NLO)-based theoretical predictions. For Pb-Pb collisions, we report the $R=0.2$ and $R=0.4$ jet spectra for $40<p_{\rm{T,jet}}<140$ GeV/$c$ and $60<p_{\rm{T,jet}}<140$ GeV/$c$, respectively. The scaled ratio of jet yields observed in Pb-Pb to pp collisions, $R_{\rm{AA}}$, is constructed, and exhibits strong jet quenching and a clear $p_{\rm{T}}$-dependence for $R=0.2$. No significant $R$-dependence of the jet $R_{\rm{AA}}$ is observed within the uncertainties of the measurement. These results are compared to several theoretical predictions. 33 data tables Fig. 1 Left, data for jet radius R=0.1. Unfolded pp full jet cross-section at $\sqrt{s}$ = 5.02 TeV for R = 0.1 − 0.6. No leading track requirement is imposed. Fig. 1 Left, data for jet radius R=0.2. Unfolded pp full jet cross-section at $\sqrt{s}$ = 5.02 TeV for R = 0.1 − 0.6. No leading track requirement is imposed. Fig. 1 Left, data for jet radius R=0.3. Unfolded pp full jet cross-section at $\sqrt{s}$ = 5.02 TeV for R = 0.1 − 0.6. No leading track requirement is imposed. More… #### Studies of J/$\psi$ production at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. JHEP 2002 (2020) 041, 2020. 
Inspire Record 1753083 The inclusive J/$\psi$ production in Pb-Pb collisions at the center-of-mass energy per nucleon pair $\sqrt{s_{\rm NN}}$ = 5.02 TeV, measured with the ALICE detector at the CERN LHC, is reported. The J/$\psi$ meson is reconstructed via the dimuon decay channel at forward rapidity ($2.5 < y < 4$) down to zero transverse momentum. The suppression of the J/$\psi$ yield in Pb-Pb collisions with respect to binary-scaled pp collisions is quantified by the nuclear modification factor ($R_{\rm AA}$). The $R_{\rm AA}$ at $\sqrt{s_{\rm NN}}$ = 5.02 TeV is presented and compared with previous measurements at $\sqrt{s_{\rm NN}}$ = 2.76 TeV as a function of the centrality of the collision, and of the J/$\psi$ transverse momentum and rapidity. The inclusive J/$\psi$ $R_{\rm AA}$ shows a suppression increasing toward higher transverse momentum, with a steeper dependence for central collisions. The modification of the J/$\psi$ average transverse momentum and average squared transverse momentum is also studied. Comparisons with the results of models based on a transport equation and on statistical hadronization are carried out. 43 data tables Transverse momentum dependence (in 0-90% centrality class) of the inclusive J/$\psi$ $R_{\rm AA}$. The first uncertainty is statistical, the second is the uncorrelated systematic, while the third one is a $p_{\rm T}$-correlated systematic uncertainty. The minimum and maximum variations for the $R_{\rm AA}$ of prompt J/$\psi$ with respect to the $R_{\rm AA}$ values of inclusive J/$\psi$ reported in Table 1. The variations correspond to two extreme hypotheses on the unknown contribution of non-prompt J/$\psi$. Transverse momentum dependence (in 0-90% centrality class) of the ratio of the inclusive J/$\psi$ $R_{\rm AA}$ at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV. The first uncertainty is statistical, the second is the uncorrelated systematic, while the third one is a $p_{\rm T}$-correlated systematic uncertainty. 
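Several entries above quantify suppression through the nuclear modification factor, the yield in Pb-Pb divided by the binary-collision-scaled pp reference: $R_{\rm AA} = ({\rm d}^2N_{\rm AA}/{\rm d}p_{\rm T}{\rm d}y) / (\langle T_{\rm AA}\rangle \cdot {\rm d}^2\sigma_{\rm pp}/{\rm d}p_{\rm T}{\rm d}y)$. A minimal sketch of this standard definition; all numerical inputs below are made-up illustrations, not measured values:

```python
def r_aa(yield_pbpb, sigma_pp, taa):
    """Nuclear modification factor in one (pT, y) bin.

    yield_pbpb : per-event Pb-Pb yield d^2N/(dpT dy)
    sigma_pp   : pp cross section d^2sigma/(dpT dy), in mb * (GeV/c)^-1
    taa        : average nuclear overlap function <T_AA>, in mb^-1

    R_AA = 1 means no medium modification; R_AA < 1 means suppression.
    """
    return yield_pbpb / (taa * sigma_pp)

# Illustrative numbers only:
print(r_aa(yield_pbpb=0.5, sigma_pp=2.0, taa=1.25))  # -> 0.2 (suppressed)
```

The same construction, with the p-Pb overlap function, gives the $R_{\rm pPb}$ used in the D-meson entry further down.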
More… #### Multiplicity dependence of (multi-)strange hadron production in proton-proton collisions at $\sqrt{s}$ = 13 TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. Eur.Phys.J. C80 (2020) 167, 2020. Inspire Record 1748157 The production rates and the transverse momentum distribution of strange hadrons at mid-rapidity ($|y| < 0.5$) are measured in proton-proton collisions at $\sqrt{s}$ = 13 TeV as a function of the charged particle multiplicity, using the ALICE detector at the LHC. It is found that the production rates of $\rm{K}^{0}_{S}$, $\Lambda$, $\Xi$, and $\Omega$ increase with the multiplicity faster than what is reported for inclusive charged particles. The increase is found to be more pronounced for hadrons with a larger strangeness content. Possible auto-correlations between the charged particles and the strange hadrons are evaluated by measuring the event-activity with charged particle multiplicity estimators covering different pseudorapidity regions. The yields of strange hadrons are found to depend only on the mid-rapidity multiplicity for charged particle multiplicity estimators selecting in the forward region, which turn out to be more directly related to the number of Multiple Parton Interactions. Several features of the data are reproduced qualitatively by general purpose QCD Monte Carlo models that take into account the effect of densely-packed QCD strings in high multiplicity collisions. However, none of the tested models reproduce the data quantitatively. This work corroborates and extends the ALICE findings on strangeness production in proton-proton collisions at 7 TeV. 59 data tables $K^{0}_{S}$ transverse momentum spectrum - V0M multiplicity classes. Total systematic uncertainties include both correlated and uncorrelated uncertainties across multiplicity. Uncorrelated systematic originating from the multiplicity dependence of the efficiency (2%) is not included. 
$\Lambda+\bar{\Lambda}$ transverse momentum spectrum - V0M multiplicity classes. Total systematic uncertainties include both correlated and uncorrelated uncertainties across multiplicity. Uncorrelated systematic originating from the multiplicity dependence of the efficiency (2%) is not included. $\Xi^{-}+\bar{\Xi^{+}}$ transverse momentum spectrum - V0M multiplicity classes. Total systematic uncertainties include both correlated and uncorrelated uncertainties across multiplicity. Uncorrelated systematic originating from the multiplicity dependence of the efficiency (2%) is not included. More… #### $^3_\Lambda\mathrm{H}$ and $^3_{\overline{\Lambda}}\mathrm{\overline{H}}$ lifetime measurement in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} =$ 5.02 TeV via two-body decay The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. No Journal Information, 2019. Inspire Record 1743989 An improved value for the lifetime of the (anti-)hypertriton has been obtained using the data sample of Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} =$ 5.02 TeV collected by the ALICE experiment at the LHC. The (anti-)hypertriton has been reconstructed via its charged two-body mesonic decay channel and the lifetime has been determined from an exponential fit to the d$N$/d($ct$) spectrum. The measured value, $\tau$ = 242$^{+34}_{-38}$ (stat.) $\pm$ 17 (syst.) ps, is compatible with all the available theoretical predictions, thus contributing to the solution of the longstanding hypertriton lifetime puzzle. 1 data table (Hypertriton + Anti-Hypertriton)dN/d(ct) distribution. #### Measurement of $\Upsilon(1{\rm S})$ elliptic flow at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}=5.02$ TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. No Journal Information, 2019. 
Inspire Record 1742764 The first measurement of the $\Upsilon(1{\rm S})$ elliptic flow coefficient ($v_2$) is performed at forward rapidity (2.5 $<$ $y$ $<$ 4) in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The results are obtained with the scalar product method and are reported as a function of transverse momentum ($p_{\rm{T}}$) up to 15 GeV/$c$ in the 5-60% centrality interval. The measured $\Upsilon(1{\rm S})$ $v_2$ is consistent with zero and with the small positive values predicted by transport models within uncertainties. The $v_2$ coefficient in 2 $<$ $p_{\rm T}$ $<$ 15 GeV/$c$ is lower than that of inclusive J/$\psi$ mesons in the same $p_{\rm{T}}$ interval by 2.6 standard deviations. These results, combined with earlier suppression measurements, are in agreement with a scenario in which the $\Upsilon$(1S) production in Pb-Pb collisions at LHC energies is dominated by dissociation limited to the early stage of the collision whereas in the J/$\psi$ case there is substantial experimental evidence of an additional regeneration component. 4 data tables The J/$\psi$ $v_2$ coefficient as a function of $p_{\rm T}$ in 5-60% centrality interval in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV. The $\Upsilon$(1S) $v_2$ coefficient as a function of $p_{\rm T}$ in 5-60% centrality interval in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV. The J/$\psi$ $v_2$ coefficient in three centrality intervals integrated over the transverse momentum range 2 $<$ $p_{\rm T}$ $<$ 15 GeV/$c$ in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV. More… #### Measurement of prompt D$^{0}$, D$^{+}$, D$^{*+}$, and ${\mathrm{D}}_{\mathrm{S}}^{+}$ production in p–Pb collisions at $\sqrt{{\mathrm{s}}_{\mathrm{NN}}}$ = 5.02 TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. JHEP 1912 (2019) 092, 2019. 
Inspire Record 1738950 The measurement of the production of prompt D$^0$, D$^+$, D$^{*+}$, and D$^+_s$ mesons in proton$-$lead (p$-$Pb) collisions at the centre-of-mass energy per nucleon pair of $\sqrt{s_{\rm NN}}$ = 5.02 TeV, with an integrated luminosity of $292\pm 11$ $\mu$b$^{-1}$, is reported. Differential production cross sections are measured at mid-rapidity ($-0.96<y_{\rm cms}<0.04$) as a function of transverse momentum ($p_{\rm T}$) in the intervals $0< p_{\rm T} < 36$ GeV/$c$ for D$^0$, $1< p_{\rm T} <36$ GeV/$c$ for D$^+$ and D$^{*+}$, and $2< p_{\rm T} <24$ GeV/$c$ for D$^+_s$ mesons. For each species, the nuclear modification factor $R_{\rm pPb}$ is calculated as a function of $p_{\rm T}$ using a proton-proton (pp) reference measured at the same collision energy. The results are compatible with unity in the whole $p_{\rm T}$ range. The average of the non-strange D mesons $R_{\rm pPb}$ is compared with theoretical model predictions that include initial-state effects and parton transport model predictions. The $p_{\rm T}$ dependence of the D$^0$, D$^+$, and D$^{*+}$ nuclear modification factors is also reported in the interval $1< p_{\rm T} < 36$ GeV/$c$ as a function of the collision centrality, and the central-to-peripheral ratios are computed from the D-meson yields measured in different centrality classes. The results are further compared with charged-particle measurements and a similar trend is observed in all the centrality classes. The ratios of the $p_{\rm T}$-differential cross sections of D$^0$, D$^+$, D$^{*+}$, and D$^+_s$ mesons are also reported. The D$^+_s$ and D$^+$ yields are compared as a function of the charged-particle multiplicity for several $p_{\rm T}$ intervals. No modification in the relative abundances of the four species is observed with respect to pp collisions within the statistical and systematic uncertainties. 
27 data tables Ratio of prompt Ds+ over D+ production cross section as a function of the charged particle pseudorapidity density in p-Pb collisions at $\mathbf{\sqrt{{\textit s}_{\rm NN}}~=~5.02~TeV}$. Ratio of prompt Ds+ over D+ production cross section as a function of the charged particle pseudorapidity density in p-Pb collisions at $\mathbf{\sqrt{{\textit s}_{\rm NN}}~=~5.02~TeV}$. $p_{\rm{T}}$ differential cross section of prompt D0 mesons obtained from the analysis without vertexing reconstruction in p-Pb collisions at $\mathbf{\sqrt{{\textit s}_{\rm NN}}~=~5.02~TeV}$. More… #### Multiplicity dependence of light (anti-)nuclei production in p-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. No Journal Information, 2019. Inspire Record 1738836 The measurement of the deuteron and anti-deuteron production in the rapidity range $-1 < y < 0$ as a function of transverse momentum and event multiplicity in p-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV is presented. (Anti-)deuterons are identified via their specific energy loss $\rm{d}E/\rm{d}x$ and via their time-of-flight. Their production in p-Pb collisions is compared to pp and Pb-Pb collisions and is discussed within the context of thermal and coalescence models. The ratio of integrated yields of deuterons to protons (d/p) shows a significant increase as a function of the charged-particle multiplicity of the event starting from values similar to those observed in pp collisions at low multiplicities and approaching those observed in Pb-Pb collisions at high multiplicities. The mean transverse momenta are extracted from the deuteron spectra and the values are similar to those obtained for p and $\Lambda$ particles. Thus, deuteron spectra do not follow mass ordering. This behaviour is in contrast to the trend observed for non-composite particles in p-Pb collisions. 
In addition, the production of the rare $^{3}{\rm{He}}$ and $^{3}\overline{\rm He}$ nuclei has been studied. The spectrum corresponding to all non-single diffractive p-Pb collisions is obtained in the rapidity window $-1 < y < 0$ and the $p_{\rm{T}}$-integrated yield d$N$/d$y$ is extracted. It is found that the yields of protons, deuterons, and $^{3}{\rm{He}}$, normalised by the spin degeneracy factor, follow an exponential decrease with mass number. 25 data tables Transverse momentum distributions of deuterons in the 0-10% V0A multiplicity class Transverse momentum distributions of anti-deuterons in the 60-100% V0A multiplicity class $\bar{d}$/d ratio as a function of transverse momentum in the 0-10% V0A multiplicity class More… #### Measurement of the inclusive isolated photon production cross section in pp collisions at $\sqrt{s}$ = 7 TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. Eur.Phys.J. C79 (2019) 896, 2019. Inspire Record 1738300 The production cross section of inclusive isolated photons has been measured by the ALICE experiment at the CERN LHC in pp collisions at a centre-of-momentum energy of $\sqrt{s}=$ 7 TeV. The measurement is performed with the electromagnetic calorimeter EMCal and the central tracking detectors, covering a range of $|\eta|<0.27$ in pseudorapidity and a transverse momentum range of $10 < p_{\rm T}^{\gamma} <$ 60 GeV/$c$. The result extends the $p_{\rm T}$ coverage of previously published results of the ATLAS and CMS experiments at the same collision energy to smaller $p_{\rm T}$. The measurement is compared to next-to-leading order perturbative QCD calculations and to the results from the ATLAS and CMS experiments. All measurements and theory predictions are in agreement with each other. 1 data table Double $p_{T}$-differential production cross section of isolated photons in pp collisions at $\sqrt{s}$=7 TeV in the rapidity interval -0.27<$\eta$<0.27. 
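The light-nuclei entry above notes that the spin-degeneracy-normalised yields of protons, deuterons, and $^{3}{\rm{He}}$ fall exponentially with mass number $A$; this is often summarised as a roughly constant "penalty factor" per added nucleon. A minimal sketch of that check with made-up yields (the real d$N$/d$y$ values are in the cited record):

```python
def penalty_factor(yield_a, yield_a_plus_1):
    """Yield reduction per added nucleon.

    If dN/dy(A) = C * exp(-A / kappa), then the ratio of yields at
    consecutive mass numbers A and A+1 is a constant exp(1 / kappa).
    """
    return yield_a / yield_a_plus_1

# Illustrative yields for A = 1, 2, 3 (p, d, 3He); not the measured values.
yields = {1: 1.0e-1, 2: 1.6e-4, 3: 2.4e-7}
p12 = penalty_factor(yields[1], yields[2])
p23 = penalty_factor(yields[2], yields[3])
print(p12, p23)  # comparable factors indicate the exponential trend
```

An exponential trend is exactly a straight line of $\ln({\rm d}N/{\rm d}y)$ versus $A$, so comparable consecutive ratios are the quickest diagnostic.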
#### Scattering studies with low-energy kaon-proton femtoscopy in proton-proton collisions at the LHC The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. Phys.Rev.Lett. 124 (2020) 092301, 2020. Inspire Record 1737592 The study of the strength and behavior of the antikaon-nucleon ($\overline{\rm K}{\rm N}$) interaction constitutes one of the key focuses of the strangeness sector in low-energy quantum chromodynamics (QCD). In this Letter a unique high-precision measurement of the strong interaction between kaons and protons, close to and above the kinematic threshold, is presented. The femtoscopic measurements of the correlation function at low pair-frame relative momentum of ($\rm K^{+}p \oplus K^{-}\overline{p}$) and ($\rm K^{-}p \oplus K^{+}\overline{p}$) pairs measured in pp collisions at $\sqrt{s}$ = 5, 7, and 13 TeV are reported. A structure observed around a relative momentum of 58 MeV/$c$ in the measured correlation function of ($\rm K^{-}p \oplus K^{+}\overline{p}$) with a significance of 4.4$\sigma$ constitutes the first experimental evidence for the opening of the ($\overline{\rm K}^{0}{\rm n} \oplus {\rm K}^{0}\overline{\rm n}$) isospin breaking channel due to the mass difference between charged and neutral kaons. The measured correlation functions have been compared to Jülich and Kyoto models in addition to the Coulomb potential. The high-precision data at low relative momenta presented in this work prove femtoscopy to be a powerful complementary tool to scattering experiments and provide new constraints above the $\overline{\rm K}{\rm N}$ threshold for low-energy QCD chiral models. 7 data tables K-p correlation function in p-p collisions at $\sqrt{s}=5$ TeV. K-p correlation function in p-p collisions at $\sqrt{s}=7$ TeV. K-p correlation function in p-p collisions at $\sqrt{s}=13$ TeV. More… #### Production of muons from heavy-flavour hadron decays in pp collisions at $\sqrt{s}$ = 5.02 TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. JHEP 1909 (2019) 008, 2019. 
Inspire Record 1735344 Production cross sections of muons from semi-leptonic decays of charm and beauty hadrons were measured at forward rapidity ($2.5<y<4$) in proton--proton (pp) collisions at a centre-of-mass energy $\sqrt{s}=5.02$ TeV with the ALICE detector at the CERN LHC. The results were obtained in an extended transverse momentum interval, $2 < p_{\rm T} < 20$ GeV/$c$, and with an improved precision compared to previous measurements performed in the same rapidity interval at centre-of-mass energies $\sqrt{s}= 2.76$ and 7 TeV. The $p_{\rm T}$- and $y$-differential production cross sections as well as the $p_{\rm T}$-differential production cross section ratios between different centre-of-mass energies and different rapidity intervals are described, within experimental and theoretical uncertainties, by predictions based on perturbative QCD. 10 data tables $p_{\rm T}$-differential production cross section of muons from heavy-flavour hadron decays at forward rapidity in pp collisions at $\sqrt{s}=5.02$ TeV in the rapidity interval $2.5 < y < 4$. Production cross section of muons from heavy-flavour hadron decays as a function of rapidity in pp collisions at $\sqrt{s} = 5.02$ TeV for the $p_{\rm T}$ interval $2 < p_{\rm T} < 7$ GeV/$c$. Production cross section of muons from heavy-flavour hadron decays as a function of rapidity in pp collisions at $\sqrt{s} = 5.02$ TeV for the $p_{\rm T}$ interval $7 < p_{\rm T} < 20$ GeV/$c$. More… #### Study of the $\Lambda$-$\Lambda$ interaction with femtoscopy correlations in pp and p-Pb collisions at the LHC The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. No Journal Information, 2019. Inspire Record 1735349 This work presents new constraints on the existence and the binding energy of a possible $\Lambda$-$\Lambda$ bound state, the H-dibaryon, derived from $\Lambda$-$\Lambda$ femtoscopic measurements by the ALICE collaboration. 
The results are obtained from a new measurement using the femtoscopy technique in pp collisions at $\sqrt{s}=13$ TeV and p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV, combined with previously published results from p-Pb collisions at $\sqrt{s}=7$ TeV. The $\Lambda$-$\Lambda$ scattering parameter space, spanned by the inverse scattering length $f_0^{-1}$ and the effective range $d_0$, is constrained by comparing the measured $\Lambda$-$\Lambda$ correlation function with calculations obtained within the Lednicky model. The data are compatible with hypernuclei results and lattice computations, both predicting a shallow attractive interaction, and make it possible to test different theoretical approaches describing the $\Lambda$-$\Lambda$ interaction. The region in the $(f_0^{-1},d_0)$ plane which would accommodate a $\Lambda$-$\Lambda$ bound state is substantially restricted compared to previous studies. The binding energy of the possible $\Lambda$-$\Lambda$ bound state is estimated within an effective-range expansion approach and is found to be $B_{\Lambda\Lambda}=3.2^{+1.6}_{-2.4}\mathrm{(stat)}^{+1.8}_{-1.0}\mathrm{(syst)}$ MeV. 8 data tables Exclusion plot for the $\Lambda$-$\Lambda$ binding energy (statistical uncertainty). Exclusion plot for the $\Lambda$-$\Lambda$ binding energy (total uncertainty). p-p correlation function in p-p collisions at $\sqrt{s}=13$ TeV. More… #### Inclusive J/$\psi$ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 5.02 TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. No Journal Information, 2019. Inspire Record 1735351 Inclusive J/$\psi$ production is studied in minimum-bias proton-proton collisions at a centre-of-mass energy of $\sqrt{s}$ = 5.02 TeV by ALICE at the CERN LHC. 
The measurement is performed at mid-rapidity ($|y| < 0.9$) in the dielectron decay channel down to zero transverse momentum $p_{\rm T}$, using a data sample corresponding to an integrated luminosity of $L_{\rm int} = 19.4 \pm 0.4$ nb$^{-1}$. The measured $p_{\rm T}$-integrated inclusive J/$\psi$ production cross section is d$\sigma$/d$y$ = 5.64 $\pm$ 0.22 (stat.) $\pm$ 0.33 (syst.) $\pm$ 0.12 (lumi.) $\mu$b. The $p_{\rm T}$-differential cross section d$^{2} \sigma$/d$p_{\rm T}$d$y$ is measured in the $p_{\rm T}$ range 0$-$10 GeV/$c$ and compared with state-of-the-art perturbative QCD calculations. The J/$\psi$ $\langle p_{\rm T} \rangle$ and $\langle p_{\rm T}^{2} \rangle$ are extracted and compared with results obtained at other collision energies. 4 data tables $p_{\rm T}$-integrated inclusive J/$\psi$ cross section. $p_{\rm T}$-differential inclusive J/$\psi$ cross section. Mean $p_{\rm T}$ square of the inclusive J/$\psi$ spectrum at 5.02 TeV. More… #### Version 2 Charged-particle production as a function of multiplicity and transverse spherocity in pp collisions at $\sqrt{s}=5.02$ and 13 TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. Eur.Phys.J. C79 (2019) 857, 2019. Inspire Record 1735345 We present a study of the inclusive charged-particle transverse momentum ($p_{\mathrm{T}}$) spectra as a function of charged-particle multiplicity density at mid-pseudorapidity, $\mathrm{d}N_{\mathrm{ch}}/\mathrm{d}\eta$, in pp collisions at $\sqrt{s}=5.02$ and 13 TeV covering the kinematic range $|\eta|<0.8$ and $0.15<p_{\mathrm{T}}<20$ GeV/$c$. The results are presented for events with at least one charged particle in $|\eta|<1$ (INEL$>0$). The $p_\mathrm{T}$ spectra are reported for two multiplicity estimators covering different pseudorapidity regions. The $p_{\mathrm{T}}$ spectra normalized to that for INEL$>0$ show little energy dependence. 
Moreover, the high-$p_{\mathrm{T}}$ yields of charged particles increase faster than the charged-particle multiplicity density. The average $p_{\mathrm{T}}$ as a function of multiplicity and transverse spherocity is reported for pp collisions at $\sqrt{s}=13$ TeV. For low- (high-) spherocity events, corresponding to jet-like (isotropic) events, the average $p_\mathrm{T}$ is higher (lower) than that measured in INEL$>0$ pp collisions. Within uncertainties, the functional form of $\langle p_{\mathrm{T}} \rangle(N_{\mathrm{ch}})$ is not affected by the spherocity selection. While EPOS LHC gives a good description of many features of data, PYTHIA overestimates the average $p_{\mathrm{T}}$ in jet-like events. 18 data tables Transverse momentum spectra as a function of the event multiplicity for pp collisions at 13 TeV. Event multiplicity is estimated with the number of SPD tracklets. Uncorrelated systematic uncertainties are the multiplicity dependent systematic uncertainties. Transverse momentum spectra as a function of the event multiplicity for pp collisions at 5.02 TeV. Event multiplicity is estimated with the number of SPD tracklets. Uncorrelated systematic uncertainties are the multiplicity dependent systematic uncertainties. Transverse momentum spectra as a function of the event multiplicity for pp collisions at 13 TeV. Event multiplicity is estimated with the signal in the VZERO detector. Uncorrelated systematic uncertainties are the multiplicity dependent systematic uncertainties. More… #### Measurement of charged jet cross section in pp collisions at $\sqrt{s}=5.02$ TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. No Journal Information, 2019. 
Inspire Record 1733689 The cross section of jets reconstructed from charged particles is measured in the transverse momentum range of $5<p_\mathrm{T}<100\ \mathrm{GeV}/c$ in pp collisions at the center-of-mass energy of $\sqrt{s} = 5.02\ \mathrm{TeV}$ with the ALICE detector. The jets are reconstructed using the anti-$k_\mathrm{T}$ algorithm with resolution parameters $R=0.2$, $0.3$, $0.4$, and $0.6$ in the pseudorapidity range $|\eta|< 0.9-R$. The charged jet cross sections are compared with leading-order (LO) and next-to-leading-order (NLO) perturbative Quantum Chromodynamics (pQCD) calculations; the NLO calculations agree better with the measurements. The cross-section ratios for different resolution parameters were also measured. These ratios increase with $p_\mathrm{T}$ and saturate at high $p_\mathrm{T}$, indicating that jet collimation is larger at high $p_\mathrm{T}$ than at low $p_\mathrm{T}$. These results provide a precision test of pQCD predictions and serve as a baseline for the measurement in Pb$-$Pb collisions at the same energy to quantify the effects of the hot and dense medium created in heavy-ion collisions at the LHC. 4 data tables Charged jet differential cross sections without UE subtraction in pp collisions at $\sqrt{s}$ = 5.02 TeV with the leading track bias. All jets must contain at least one track with $p_{T}$ > 5 GeV/$c$. Statistical uncertainties are displayed as vertical error bars. The total systematic uncertainties are shown as shaded bands around the data points. Data are scaled to enhance visibility. Fig. 6: Charged jet cross section ratios for $\sigma$(R = 0.2)/$\sigma$(R = 0.4) (red) and $\sigma$(R = 0.2)/$\sigma$(R = 0.6). The systematic uncertainty of the cross section ratio is indicated by a shaded band drawn around data points. Fig. 3: Fully corrected charged jet differential cross sections in pp collisions at $\sqrt{s}$ = 5.02 TeV.
Statistical uncertainties are displayed as vertical error bars. The total systematic uncertainties are shown as shaded bands around the data points. Data are scaled to enhance visibility. #### Measurement of the production of charm jets tagged with D$^{0}$ mesons in pp collisions at $\sqrt{s} = 7$ TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. No Journal Information, 2019.
The data are compared with results from Monte Carlo event generators (PYTHIA 6, PYTHIA 8 and Herwig 7) and with a Next-to-Leading-Order perturbative Quantum Chromodynamics calculation, obtained with the POWHEG method and interfaced with PYTHIA 6 for the generation of the parton shower, fragmentation, hadronisation and underlying event. 6 data tables Ratio of the $p_{\rm T}$-differential cross section of charm jets tagged with D$^0$ mesons to the inclusive jet cross section in pp collisions at $\sqrt{s}$ = 7 TeV. $p_{\rm T}$-differential cross section of charm jets tagged with D$^0$ mesons in pp collisions at $\sqrt{s}$ = 7 TeV. $z_{||}^{\rm ch}$-differential cross section of D$^0$-meson tagged track-based jets in pp collisions at $\sqrt{s}$ = 7 TeV, with $p_{\rm T,D}$ > 2 GeV/$c$ and 5 < $p_{\rm T,jet}^{\rm ch}$ < 15 GeV/$c$. More… #### First observation of an attractive interaction between a proton and a multi-strange baryon The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. Phys.Rev.Lett., 2019. Inspire Record 1731784 This work presents the first experimental observation of the attractive strong interaction between a proton and a multi-strange baryon (hyperon) $\Xi^-$. The result is extracted from two-particle correlations of combined $\rm{p}-\Xi^{-}$$\oplus$$\rm{\overline{p}}-\overline{\Xi}^{+}$ pairs measured in p-Pb collisions at $\sqrt{s_{\rm{NN}}}=5.02$ TeV at the LHC with ALICE. The measured correlation function is compared with the prediction obtained assuming only an attractive Coulomb interaction and a standard deviation in the range $[3.6,5.3]$ is found. Since the measured $\rm{p}-\Xi^{-}$$\oplus$$\rm{\overline{p}}-\overline{\Xi}^{+}$ correlation is significantly enhanced with respect to the Coulomb prediction, the presence of an additional, strong, attractive interaction is evident. 
The data are compatible with recent lattice calculations by the HAL-QCD Collaboration, with a standard deviation in the range $[1.8,3.7]$. The lattice potential predicts a shallow repulsive $\Xi^-$ interaction within pure neutron matter at saturation densities, which implies stiffer equations of state for neutron-rich matter including hyperons. Implications of the strong interaction for the modeling of neutron stars are discussed. 2 data tables The p$-$p $\oplus$ $\overline{\mathrm{p}}-\overline{\mathrm{p}}$ correlation function. The p$-\Xi^{-}$ $\oplus$ $\overline{\mathrm{p}}-\overline{\Xi}^{+}$ correlation function. #### Coherent J/$\psi$ photoproduction at forward rapidity in ultra-peripheral Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}=5.02$ TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. No Journal Information, 2019. Inspire Record 1729529 The coherent photoproduction of J/$\psi$ was measured in ultra-peripheral Pb-Pb collisions at a center-of-mass energy $\sqrt{s_{\rm{NN}}}=5.02$ TeV with the ALICE detector. The J/$\psi$ is detected via its dimuon decay in the forward rapidity region for events where the hadronic activity is required to be minimal. The analysis is based on an event sample corresponding to an integrated luminosity of about 750 $\mu$b$^{-1}$. The cross section for coherent J/$\psi$ production is presented in six rapidity bins, covering the interval $-4.0 < y < -2.5$. The results are compared with theoretical models for coherent J/$\psi$ photoproduction. The results indicate that gluon shadowing effects play a role in the photoproduction process. The ratio of $\psi'$ to J/$\psi$ coherent photoproduction cross sections was measured and found to be consistent with that measured for photoproduction off protons. 1 data table Differential cross section as a function of rapidity for coherent J/$\psi$ photoproduction in ultra-peripheral Pb-Pb collisions.
#### One-dimensional charged kaon femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. No Journal Information, 2019. Inspire Record 1727337 The correlations of identical charged kaons were measured in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV by the ALICE experiment at the LHC. The femtoscopic invariant radii and correlation strengths were extracted from one-dimensional kaon correlation functions and were compared with those obtained in pp and Pb-Pb collisions at $\sqrt{s}=7$ TeV and $\sqrt{s_{\rm NN}}=2.76$ TeV, respectively. The presented results also complement the identical-pion femtoscopic data published by the ALICE collaboration. The extracted radii increase with increasing charged-particle multiplicity and decrease with increasing pair transverse momentum. At comparable multiplicities, the radii measured in p-Pb collisions are found to be close to those observed in pp collisions. The obtained femtoscopic parameters are reproduced by the EPOS hadronic interaction model and disfavor models with large initial size or strong collective expansion at low multiplicities. 8 data tables Correlation function as a function of pair relative momentum for 0-20% multiplicity class and pair transverse momentum range (0.2-0.5) GeV/c. Correlation function as a function of pair relative momentum for 0-20% multiplicity class and pair transverse momentum range (0.5-1.0) GeV/c. Correlation function as a function of pair relative momentum for 20-40% multiplicity class and pair transverse momentum range (0.2-0.5) GeV/c. More… #### Investigations of anisotropic flow using multi-particle azimuthal correlations in pp, p-Pb, Xe-Xe, and Pb-Pb collisions at the LHC The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. No Journal Information, 2019. 
Inspire Record 1723697 Measurements of anisotropic flow coefficients ($v_n$) and their cross-correlations using two- and multi-particle cumulant methods are reported in collisions of pp at $\sqrt{s} = 13$ TeV, p-Pb at $\sqrt{s_{_{\rm NN}}} = 5.02$ TeV, Xe-Xe at $\sqrt{s_{_{\rm NN}}} = 5.44$ TeV, and Pb-Pb at $\sqrt{s_{_{\rm NN}}} = 5.02$ TeV recorded with the ALICE detector. These measurements are performed as a function of multiplicity in the mid-rapidity region $|\eta|<0.8$ for the transverse momentum range $0.2 < p_{\rm T} < 3.0$ GeV/$c$. An ordering of the coefficients $v_2 > v_3 > v_4$ is found in pp and p-Pb collisions, similar to that seen in large collision systems, while a weak $v_2$ multiplicity dependence is observed relative to nucleus--nucleus collisions in the same multiplicity range. Using the novel subevent method, $v_{2}$ measured in pp and p-Pb collisions with four-particle cumulants is found to be compatible with that from six-particle cumulants. The symmetric cumulants $SC(m,n)$ calculated with the subevent method which evaluate the correlation strength between $v_n^2$ and $v_m^2$ are also presented. The presented data, which add further support to the existence of long-range multi-particle azimuthal correlations in high multiplicity pp and p-Pb collisions, can neither be described by PYTHIA8 nor by IP-Glasma+MUSIC+UrQMD model calculations, and hence provide new insights into the understanding of collective effects in small collision systems. 43 data tables $v_2\{2\}$ with $|\Delta \eta| > 1.4$ in pp collisions at $\sqrt{s} = 13$ TeV. $v_3\{2\}$ with $|\Delta \eta| > 1.0$ in pp collisions at $\sqrt{s} = 13$ TeV. $v_4\{2\}$ with $|\Delta \eta| > 1.0$ in pp collisions at $\sqrt{s} = 13$ TeV. More… #### Multiplicity dependence of (anti-)deuteron production in pp collisions at $\sqrt{s}$ = 7 TeV The collaboration Acharya, Shreyasi ; Torales - Acosta, Fernando ; Adamova, Dagmar ; et al. Phys.Lett. B794 (2019) 50-63, 2019. 
Inspire Record 1721729 In this letter, the production of deuterons and anti-deuterons in pp collisions at $\sqrt{s}=7$ TeV is studied as a function of the charged-particle multiplicity density at mid-rapidity with the ALICE detector at the LHC. Production yields are measured at mid-rapidity in five multiplicity classes and as a function of the deuteron transverse momentum ($p_{\rm T}$). The measurements are discussed in the context of hadron-coalescence models. The coalescence parameter $B_2$, extracted from the measured spectra of (anti-)deuterons and primary (anti-)protons, exhibits no significant $p_{\rm T}$ dependence for $p_{\rm T}<3$ GeV/$c$, in agreement with the expectations of a simple coalescence picture. At fixed transverse momentum per nucleon, the $B_2$ parameter is found to decrease smoothly from low-multiplicity pp to Pb–Pb collisions, in qualitative agreement with more elaborate coalescence models. The measured mean transverse momentum of (anti-)deuterons in pp is not reproduced by the Blast-Wave model calculations that simultaneously describe pion, kaon and proton spectra, in contrast to central Pb–Pb collisions. The ratio between the $p_{\rm T}$-integrated yield of deuterons to protons, d/p, is found to increase with the charged-particle multiplicity, as observed in inelastic pp collisions at different centre-of-mass energies. The d/p ratios are reported in a wide range, from the lowest to the highest multiplicity values measured in pp collisions at the LHC.
12 data tables Transverse-momentum spectra of deuterons and anti-deuterons measured at mid-rapidity in V0M multiplicity class I+II Transverse-momentum spectra of deuterons and anti-deuterons measured at mid-rapidity in V0M multiplicity class III Transverse-momentum spectra of deuterons and anti-deuterons measured at mid-rapidity in V0M multiplicity class IV+V More… #### Measurement of ${\rm D^0}$, ${\rm D^+}$, ${\rm D^{*+}}$ and ${{\rm D^+_s}}$ production in pp collisions at $\mathbf{\sqrt{{\textit s}}~=~5.02~TeV}$ with ALICE The collaboration Acharya, Shreyasi ; Adamova, Dagmar ; Adhya, Souvik Priyam ; et al. No Journal Information, 2019. Inspire Record 1716440 The measurements of the production of prompt ${\rm D^0}$, ${\rm D^+}$, ${\rm D^{*+}}$, and ${{\rm D^+_s}}$ mesons in proton--proton (pp) collisions at $\sqrt{s}=5.02$ TeV with the ALICE detector at the Large Hadron Collider (LHC) are reported. D mesons were reconstructed at mid-rapidity ($|y|<0.5$) via their hadronic decay channels ${\rm D}^0 \to {\rm K}^-\pi^+$, ${\rm D}^+\to {\rm K}^-\pi^+\pi^+$, ${\rm D}^{*+} \to {\rm D}^0 \pi^+ \to {\rm K}^- \pi^+ \pi^+$, ${\rm D^{+}_{s}\to \phi\pi^+\to K^{+} K^{-} \pi^{+}}$, and their charge conjugates. The production cross sections were measured in the transverse momentum interval $0<p_{\rm T}<36~\mathrm{GeV}/c$ for ${\rm D^0}$, $1<p_{\rm T}<36~\mathrm{GeV}/c$ for ${\rm D^+}$ and ${\rm D^{*+}}$, and in $2<p_{\rm T}<24~\mathrm{GeV}/c$ for ${{\rm D^+_s}}$ mesons. Thanks to the higher integrated luminosity, an analysis in finer $p_{\rm T}$ bins with respect to the previous measurements at $\sqrt{s}=7$ TeV was performed, allowing for a more detailed description of the cross-section $p_{\rm T}$ shape. The measured $p_{\rm T}$-differential production cross sections are compared to the results at $\sqrt{s}=7$ TeV and to four different perturbative QCD calculations. 
Its rapidity dependence is also tested combining the ALICE and LHCb measurements in pp collisions at $\sqrt{s}=5.02$ TeV. This measurement will allow for a more accurate determination of the nuclear modification factor in p-Pb and Pb-Pb collisions performed at the same nucleon-nucleon centre-of-mass energy. 18 data tables $p_{\rm T}$-differential cross section of prompt $\rm{D}^{0}$ mesons in pp collisions at $\sqrt{\rm{s_{NN}}}$=5.02 TeV in the rapidity interval $|y|$<0.5. Branching ratio of $\rm{D}^{0}\rightarrow K\pi$ : 0.0389. $p_{\rm T}$-differential cross section of prompt $\rm{D^{+}}$ mesons in pp collisions at $\sqrt{\rm{s_{NN}}}$=5.02 TeV in the rapidity interval $|y|$<0.5. Branching ratio of $\rm D^{+-}\rightarrow K{\rm{\pi}}{\rm{\pi}}$ : 0.0898. $p_{\rm T}$-differential cross section of prompt $\rm D^{*}$ mesons in pp collisions at $\sqrt{\rm{s_{NN}}}$=5.02 TeV in the rapidity interval $|y|$<0.5. Branching ratio of $\rm{D}^{*+}\rightarrow \rm{D}^{0}\pi\rightarrow K\pi\pi$ : 0.02633. More… #### Event-shape and multiplicity dependence of freeze-out radii in pp collisions at $\sqrt{{\textit s}}=7$ TeV The collaboration Acharya, Shreyasi ; Torales - Acosta, Fernando ; Adamova, Dagmar ; et al. No Journal Information, 2019. Inspire Record 1714695 Two-particle correlations in high-energy collision experiments enable the extraction of particle source radii by using the Bose-Einstein enhancement of pion production at low relative momentum $q\propto 1/R$. It was previously observed that in $\rm{p}\rm{p}$ collisions at $\sqrt{s}=7$ TeV the average pair transverse momentum $k_{\rm T}$ range of such analyses is limited due to large background correlations which were attributed to mini-jet phenomena. To investigate this further, an event-shape dependent analysis of Bose-Einstein correlations for pion pairs is performed in this work. 
By categorizing the events by their transverse sphericity $S_{\rm T}$ into spherical $(S_\textrm{T}>0.7)$ and jet-like $(S_\textrm{T}<0.3)$ events a method was developed that allows for the determination of source radii for much larger values of $k_{\rm T}$ for the first time. Spherical events demonstrate little or no background correlations while jet-like events are dominated by them. This observation agrees with the hypothesis of a mini-jet origin of the non-femtoscopic background correlations and gives new insight into the physics interpretation of the $k_{\rm T}$ dependence of the radii. The emission source size in spherical events shows a substantially diminished $k_{\rm T}$ dependence, while jet-like events show indications of a negative trend with respect to $k_{\rm T}$ in the highest multiplicity events. Regarding the emission source shape, the correlation functions for both event sphericity classes show good agreement with an exponential shape, rather than a Gaussian one. 18 data tables Opposite-sign pion pair correlation functions in data for sphericity S_{T} < 0.3 (jet-like events). Opposite-sign pion pair correlation functions in PYTHIA simulations for sphericity S_{T} < 0.3 (jet-like events). Opposite-sign pion pair correlation functions in data for sphericity S_{T} > 0.7 (spherical events). More…
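The event-shape classification above cuts on transverse sphericity $S_{\rm T}$. As a rough illustration (not the collaboration's analysis code; the definition below is the eigenvalue-based transverse sphericity commonly used in the event-shape literature, taken here as an assumption), $S_{\rm T}$ can be computed from a list of track momenta as:

```python
import math

def transverse_sphericity(tracks):
    """Transverse sphericity S_T = 2*l2/(l1 + l2), where l1 >= l2 are the
    eigenvalues of the linearized transverse-momentum matrix
    S_ab = sum_i (p_a * p_b / pT_i) / sum_i pT_i,  a, b in {x, y}.
    S_T -> 0 for back-to-back (jet-like) events, -> 1 for isotropic ones.
    `tracks` is a list of (px, py) pairs with nonzero pT."""
    sxx = sxy = syy = sum_pt = 0.0
    for px, py in tracks:
        pt = math.hypot(px, py)
        sum_pt += pt
        sxx += px * px / pt
        sxy += px * py / pt
        syy += py * py / pt
    sxx, sxy, syy = sxx / sum_pt, sxy / sum_pt, syy / sum_pt
    # closed-form eigenvalues of a symmetric 2x2 matrix
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0
    return 2.0 * lam2 / (lam1 + lam2)

# A back-to-back dijet-like event gives S_T ~ 0; an isotropic ring gives S_T ~ 1.
dijet = [(2.0, 0.0), (-2.0, 0.0)]
ring = [(math.cos(2 * math.pi * k / 64), math.sin(2 * math.pi * k / 64))
        for k in range(64)]
print(transverse_sphericity(dijet))  # ~0.0
print(transverse_sphericity(ring))   # ~1.0
```

This makes concrete why $S_{\rm T}<0.3$ selects jet-like and $S_{\rm T}>0.7$ selects spherical events.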
http://mathhelpforum.com/advanced-math-topics/192995-bound-sum-logarithms.html
bound on sum of logarithms

I have that $\prod_{i,j} (1-y_{ij}) \geq l$, where $1> y_{ij}\geq 0$.

I need to bound the following using $l$: $\prod_{i,j} (1-a_{ij}y_{ij}) \geq\ ?$, where $1> y_{ij}\geq 0$ and $0.5 \geq a_{ij}\geq 0$, such that $?$ has to be a function of $l$, $f(l)$.

Can I use something like Hölder's inequality? I started by taking the logarithm of both sides: $\sum_{i,j} \log (1-y_{ij}) \geq \log(l)$ and $\sum_{i,j} \log (1-a_{ij}y_{ij}) \geq\ ?$

And I believe that the solution is $\sum_{i,j} \log (1-a_{ij}y_{ij}) \geq \log\left(\frac{1+l}{2}\right)$, but I don't know how to prove it. Any pointers?
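Not part of the original thread, but a quick numerical probe (mine, illustrative only) is a cheap sanity check before attempting a proof. It tests the candidate bound $\prod_{i,j}(1-a_{ij}y_{ij}) \geq \sqrt{l}$, which follows from $1-ay \geq (1-y)^{a} \geq (1-y)^{1/2}$ for $0 \le a \le 1/2$ (the first inequality by convexity of $a \mapsto (1-y)^a$):

```python
import random

def check_sqrt_bound(trials=2000, n=10, seed=1):
    """Randomly probe  prod(1 - a*y) >= sqrt(prod(1 - y))  for
    0 <= y < 1 and 0 <= a <= 1/2.  The bound follows from
    1 - a*y >= (1 - y)**a >= (1 - y)**0.5 for a in [0, 1/2].
    Returns True if no counterexample is found."""
    rng = random.Random(seed)
    for _ in range(trials):
        prod_l = 1.0  # l = prod(1 - y)
        lhs = 1.0     # prod(1 - a*y)
        for _ in range(n):
            y = rng.uniform(0.0, 0.999)
            a = rng.uniform(0.0, 0.5)
            prod_l *= 1.0 - y
            lhs *= 1.0 - a * y
        if lhs < prod_l ** 0.5:
            return False
    return True

print(check_sqrt_bound())  # True

# Note: the conjectured bound (1 + l)/2 already fails for two factors:
# y1 = y2 = 0.999, a1 = a2 = 0.5 gives prod(1 - a*y) ~ 0.2505,
# while (1 + l)/2 ~ 0.5.
```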
http://tailieu.vn/doc/oracle-database-11g-p1-158429.html
# Oracle Database 11g P1

Shared by: Vong Phat | Date: | File type: PDF | Pages: 40

## Oracle Database 11g P1

Document description: Reference material on 'Oracle Database 11g P1' (information technology, databases), for study, research, and effective work.

## Text content: Oracle Database 11g P1

The Expert's Voice® in Oracle. All major features of Oracle Database 11g Release 1 tested and explained. Oracle Database 11g New Features for DBAs and Developers. Learn the powerful new features in Oracle Database 11g and advance to the cutting edge of Oracle database administration and development. Sam R. Alapati and Charles Kim

To Jim Gray (Microsoft Technical Fellow), who is deeply missed by the database world, which remembers him with fondness and respect for both his professional brilliance and his warm personal qualities. Jim Gray is responsible for several fundamental database technologies, especially in online transaction processing. Jim Gray is still missing after embarking on a solo one-day boating trip from San Francisco on January 28, 2007, to immerse his mother's ashes at sea. In 1997 Jim Gray received the A.M. Turing Award (which is considered by some to be the Nobel Prize for computer science) for his "seminal contributions to database and transaction processing research and technical leadership in system implementation." Jim Gray is the author of Transaction Processing: Concepts and Techniques, which has been the classic reference in the field for the last several years. Much of what we do in online transaction processing today flows directly from Jim Gray's seminal contributions, and all of us who work with relational databases owe an immense debt to him. —Sam R. Alapati

I dedicate the completed endeavor of this book to my parents, Wan Kyu and Chong Sik Kim, who made incredible sacrifices for my sisters and me. I thank you for my upbringing, education, work ethic, and any and all accomplishments. Thank you for exemplifying what it means to be a follower of Christ. As a parent myself now, I know that you are truly good and Godly parents. —Charles Kim

Contents at a Glance

About the Authors
Acknowledgments
Introduction
■CHAPTER 1 Installing, Upgrading, and Managing Change
■CHAPTER 2 Database Diagnosability and Failure Repair
■CHAPTER 3 Database Administration
■CHAPTER 4 Performance Management
■CHAPTER 5 Database Security
■CHAPTER 6 Backup and Recovery
■CHAPTER 7 Data Pump
■CHAPTER 8 Oracle Streams
■CHAPTER 9 Storage Management
■CHAPTER 10 Data Guard
■CHAPTER 11 Application Development
■CHAPTER 12 Data Warehousing
■INDEX
http://mathoverflow.net/questions/132653/the-balls-and-bins-model-bounding-the-marginal-contributions-in-the-mn-regime
The balls and bins model: bounding the marginal contributions in the $m \gg n$ regime

Consider the standard balls and bins process, where $m$ balls are thrown into $n$ bins, and consider the case where $m \gg n$. Denote the load on bin $i$ by the random variable $L_i$. Given a set $S \subseteq [n]$ of size $k$ which does not contain the maximum- and minimum-load bins, and a threshold $t \in [m]$, a bin $j$ is pivotal to $S$ if $\sum_{i \in S}L_i < t \leq \sum_{i \in S \cup \{j\}}L_i$. For a value $k \in [m-2]$, I want to bound, as a function of $t$, the number of sets of size $k$ to which the maximum-load bin is pivotal but the minimum-load bin is not. For $t$ sufficiently bounded away from $\frac{km}{n}$, it is not hard to show that, with high probability, bin $i$ is pivotal to set $S$ iff bin $j \neq i$ is pivotal to $S$, since the difference in the loads of every two bins is $O(\sqrt{\frac{m\log n}{n}})$ w.h.p. (using the Chernoff bound). However, for $t$ very close to $\frac{km}{n}$, the converse seems to be true (i.e., the maximum-load bin is pivotal to many more sets $S$). Now, from this, we know that if $n\log n \ll m < n\cdot \mathrm{polylog}(n)$, then $L_{\max} = \frac{m}{n} + \Theta(\sqrt{\frac{m\log n}{n}})$ w.h.p. By showing that $|L_i - \frac{m}{n}| = O(\sqrt{\frac{m\log n}{n}})$ for all $i=1,\ldots,n$ w.h.p. (using a Chernoff bound), we get that $L_{\max} - L_{\min} = \Theta(\sqrt{\frac{m\log n}{n}})$ with high probability. Any ideas on how to lower-bound the number of sets?
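The concentration claim used above — every load lies within $O(\sqrt{m\log n/n})$ of $m/n$, so $L_{\max}-L_{\min} = \Theta(\sqrt{m\log n/n})$ — is easy to check empirically. A small simulation (my illustration, not part of the question):

```python
import math
import random

def load_spread(m, n, seed=0):
    """Throw m balls into n bins uniformly at random and return
    (max load - min load) normalized by the Chernoff scale
    sqrt(m * log(n) / n).  In the m >> n regime this normalized
    spread should be Theta(1)."""
    rng = random.Random(seed)
    loads = [0] * n
    for _ in range(m):
        loads[rng.randrange(n)] += 1
    scale = math.sqrt(m * math.log(n) / n)
    return (max(loads) - min(loads)) / scale

# m/n = 2000 balls per bin on average; the printed ratio is O(1).
print(load_spread(m=200_000, n=100))
```

Rerunning with different seeds, or with $m$ scaled up at fixed $n$, shows the normalized spread stabilizing at a constant, as the Chernoff argument predicts.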
https://www.clutchprep.com/chemistry/practice-problems/90216/carry-out-the-following-operations-and-express-the-answer-with-the-appropriate-n-1
# Problem: Carry out the following operations, and express the answer with the appropriate number of significant figures.

863 × [1255 − (3.45 × 10^8)]
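For the significant-figure bookkeeping: $3.45\times10^{8}$ dominates the subtraction and carries three significant figures, so the final product also keeps three. A quick check of the arithmetic (illustrative; `to_sig_figs` is just a hypothetical helper built on Python's `g` format):

```python
def to_sig_figs(x, sig):
    """Format x with the given number of significant figures
    using the 'g' presentation type."""
    return f"{x:.{sig}g}"

# 1255 - 3.45e8 is dominated by the 3-sig-fig term 3.45e8,
# and 863 also has 3 sig figs, so round the product to 3 sig figs.
value = 863 * (1255 - 3.45e8)
print(to_sig_figs(value, 3))  # -2.98e+11
```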
http://mathoverflow.net/questions/164295/what-forces-us-to-accept-large-cardinal-axioms
What “forces” us to accept large cardinal axioms? Large cardinal axioms are not provable using the usual mathematical tools (those developed in $\text{ZFC}$). Their non-existence is consistent with the axioms of usual mathematics. It is even provable that some of them do not exist at all. They exhibit many unusual, strange properties. $\vdots$ These are some of the arguments that could be used against large cardinal axioms, yet many set theorists not only believe in the existence of large cardinals but also reject every statement, such as $V=L$, that contradicts their existence. What makes large cardinal axioms reasonable enough to add them to the set of axioms of usual mathematics? Is there any particular mathematical or philosophical reason which forces/convinces us to accept large cardinal axioms? Is there any fundamental axiom which is philosophically reasonable and implies the necessity of adding large cardinals to mathematics? Is it the Reflection Principle, which informally says "all properties of the universe $V$ should be reflected in some level of von Neumann's hierarchy"? On that view, because within $\text{ZF-Inf}$ the universe $V$ is infinite, we should add the large cardinal $\omega$ (which is inaccessible from the finite numbers) by accepting the large cardinal axiom $\text{Inf}$; and because $V$ is a model of (second-order) $\text{ZFC}$, we should accept the existence of inaccessible cardinals, reflecting this property to $V_{\kappa}$ for $\kappa$ inaccessible; and so on. Question. I am searching for useful mathematical, philosophical, ... references which investigate possible answers to the above questions. - Isn't it a little arrogant to propose some anti-large cardinal axiom like V=L? It's as if you know all there is to know about the structure of the world. Large cardinals are a way of acknowledging our limitations in comparison to the huge complicated world out there. – Monroe Eskew Apr 25 '14 at 3:43 @Monroe Some have argued that point on the other side.
For example, Stephen Simpson compares large cardinal skepticism with religious skepticism. – Joel David Hamkins Apr 25 '14 at 4:05
- Nobody forces you to accept anything. If you don't want to accept large cardinal axioms, you don't have to. – Asaf Karagila Apr 25 '14 at 4:15
- While I think large cardinals are fun, I don't feel particularly forced to have an opinion about their existence. It would be way cool if they could be explained in terms of computation. For instance, inaccessible cardinals correspond to type-theoretic universes, and Mahlo cardinals can be seen as a very strong induction principle in type theory. But what about even larger cardinals? Do they have a computational meaning? That would "force" me personally to regard them seriously. – Andrej Bauer Apr 25 '14 at 7:39
- He has said it during some talks I've seen, so you might find it in his slides. For example, see pages 10-11 of the slides from his talk at the 2009 NYU conference: personal.psu.edu/t20/talks/nyu0904/nyu.pdf, also personal.psu.edu/t20/talks/nyu0904/nyu-slides.pdf. – Joel David Hamkins Apr 25 '14 at 11:03

The line of reasoning you mention at the end of your post, firmly in support of large cardinals, was first argued forcefully in

• W. N. Reinhardt, “Remarks on reflection principles, large cardinals, and elementary embeddings,” Proceedings of Symposia in Pure Mathematics, Vol. 13, Part II, 1974, pp. 189-205,

and the ideas are further discussed, explained, and basically supported in subsequent articles. These articles have by now generated a rather large literature of discussion and criticism in the philosophy of set theory. To get started, you might find further resources on the reading list of my recent course NYU Philosophy of Set Theory. One can now find numerous articles arguing on any given side of each issue.
- This is only a partial answer, but Harvey Friedman has a research program to find concrete $\Pi^0_1$ sentences that are purely combinatorial (i.e., make no reference to concepts from logic such as axioms or formal systems), that can be deduced from a large cardinal axiom, and that imply the consistency of a (slightly weaker) large cardinal axiom. The $\Pi^0_1$ statement can of course be partially verified by direct computation, so if you convince yourself that it is true, then the large cardinal axiom helps "explain" why it is true. I believe that Friedman has carried out his program up to and including subtle cardinals; see this post on the Foundations of Mathematics mailing list, for example. I believe Friedman is optimistic that his program can in principle be carried out for any large cardinal axiom, but at present I believe he has no natural, explicit $\Pi^0_1$ statements that require (say) measurable cardinals to prove.

- In Friedman's examples, do you know if the consistency of a given large cardinal axiom is enough to prove the given $\Pi^0_1$ statements, or does it actually require the existence of a given large cardinal? – Jesse Elliott Dec 11 '14 at 12:28
- @JesseElliott: Usually what is needed is 1-consistency. If you think about it, there's no way that a large cardinal could be required by an arithmetical statement S in the strongest possible sense that its existence is actually implied by S, because there are models of true arithmetic in which there is no inaccessible cardinal. – Timothy Chow Dec 11 '14 at 18:37

There are (possibly) two questions here:

1. Why should we believe that large cardinal axioms are consistent?
2. Why, if we believe that large cardinal axioms are consistent, should we believe that they are true?

Here are some reasons for believing that large cardinal axioms are consistent (or at least, that small large cardinal axioms that have been studied for a long time are consistent).
First, there is the empirical fact that no one has published a proof of a contradiction from the assumption $\mathsf{ZFC} + {}$"there is an inaccessible cardinal" (for example), despite a long period of study in which many theorems have been proved from this assumption. Although some large cardinal hypotheses (such as Reinhardt cardinals) have turned out to be inconsistent, this was discovered relatively quickly, in the period during which most people were still skeptical of them.

Second, there is "fine structure," which gives canonical models for the smaller large cardinal axioms (so far, up to Woodin cardinals and a bit further). It seems reasonable to expect that a systematic study of the structure of the models of a theory would eventually reveal the inconsistency of the theory if it were inconsistent, and this has not happened yet.

For question 2, let us now assume (informally, for the sake of non-mathematical argument) that large cardinal axioms are consistent. Why should we then believe that they are true? Most people find it natural to believe the assumptions that they use in their day-to-day work, so this question is closely related to the question of which axioms people should use. Of course, the answer will depend on what types of theorems they want to prove.

In most areas of mathematical research, $\mathsf{ZFC}$ seems to be sufficient in a practical sense, and there does not seem (to me) to be a compelling argument that people working in these areas should use, or believe, any kind of axiom beyond $\mathsf{ZFC}$. So perhaps the question should be revised to "why should mathematicians who want to prove theorems beyond $\mathsf{ZFC}$ use large cardinal axioms, instead of alternatives such as $V=L$?" A practical answer is that doing so allows us to prove lots of interesting theorems. Suppose that I assume $\mathsf{ZFC} + {}$"there is a measurable cardinal" and you assume "$\mathsf{ZFC} + V=L$."
Then for every theorem that you prove, I could have proved (if I were clever enough) a corresponding theorem of the form $L \models \ldots.$ On the other hand, I may have the opportunity to discover an interesting theorem about measurable cardinals that you do not have the opportunity to discover (unless you investigate countable transitive models with measurable cardinals, which seems like an unnatural thing to do if you believe that $V=L$, even though it is presumably formally consistent for you to assume the existence of such models). This last point is summarized by the slogan "maximize interpretive power."

Many of the points I made above are better made in the following paper. I think that what I wrote here leans toward Steel's viewpoint, but I do not claim to have rendered it faithfully.

Feferman, Solomon; Friedman, Harvey M.; Maddy, Penelope & Steel, John R. (2000). Does mathematics need new axioms? Bulletin of Symbolic Logic 6 (4): 401-446.

- Regarding your comment "Most people find it natural to believe the assumptions that they use in their day-to-day work…": I would venture to guess, from the point of view of human psychology, that one common trait of successful mathematicians is the ability to manage their level of belief in various unproved statements---strengthening it as they attempt to achieve proof; gutting it as they attempt to achieve disproof. – Lee Mosher Apr 26 '14 at 14:21

Large cardinals are useful in category theory, in particular in its applications to algebraic geometry. "Small" categories (whose objects form a set) are much nicer than "large" categories (whose objects merely form a "class," whatever that means).
In algebraic geometry one wants to consider categories like Sets, Schemes, and so on as small --- and technically this is done using Grothendieck's "axiom of universes," which is equivalent to the existence of a strongly inaccessible cardinal larger than any given cardinal; see http://en.wikipedia.org/wiki/Grothendieck_universe

- I think this is a good reason. Non-set-theoretic reasons for believing in set-theoretic concepts are the most compelling, by far. – goblin May 23 '14 at 13:39

Let me give an argument, not that we should believe large cardinal axioms or their consistency, but rather that regardless of our belief in their consistency we should still be interested in results around them.

First, large cardinals are uniquely useful in analyzing principles of strong consistency strength. That is, we know from experience that there are many principles, with consistency strength greater than $ZFC$, for which large cardinals function as a useful organizational principle. This is especially important if I'm agnostic about the consistency of large cardinals (which I am), since then I'm also agnostic about a bunch of other fairly natural principles and want a nice "yardstick" to organize my knowledge of them.

Even if I actively believe, say, that "there is an inaccessible" is inconsistent with $ZFC$, playing with large cardinals is still useful to me: if I believe inaccessibles are inconsistent with $ZFC$, then I must also believe that "DC + every set of reals is Lebesgue measurable" is inconsistent with $ZF$. The point is, there are reasonably natural philosophical viewpoints which reject inaccessibles - say, believing that the Inner Model Hypothesis is true - which yield, via arguments around large cardinals, philosophical positions against other principles for which no such natural viewpoint is known to exist.
- Do we have examples of axioms A and B, not obviously about large cardinals, for which Con(A) $\rightarrow$ Con(B) can only / best / most easily be proved via theorems about large cardinals? Such an example would strengthen this argument substantially. – Matt F. Apr 26 '14 at 21:08
- @MattF. An example is that $\mathbf{\Pi}^1_1$-determinacy implies $<\omega^2$-$\mathbf{\Pi}^1_1$-determinacy. The only known proof goes by showing that the first assumption implies the existence of sharps for reals, and that the sharps imply the stronger determinacy assumption. Other examples of such transfer theorems in descriptive set theory are also known at higher consistency strength. See here for more on this. – Andrés E. Caicedo Apr 26 '14 at 21:19
- @AndresCaicedo, that is an interesting result, which was new to me. But Ralf Schindler in that very reference says: "Point is: We don’t know another, 'direct', proof. The only proof known goes through the study of L." He argues there that anyone who cares about determinacy should care about inner models, and does not argue there that they should care about large cardinals specifically. For me "$0^\sharp$ exists" is easier to understand and think about without large cardinals. – Matt F. Apr 26 '14 at 21:42
- @MattF. Sure. The point is that unless you are in a situation of very limited interest or too specialized to be a general phenomenon, you are not going to interpolate from $A$ to $B$ using large cardinals in $V$ but rather in inner models. – Andrés E. Caicedo Apr 26 '14 at 21:51
- @MattF., this is the result mentioned in the final paragraph of my answer - the relevant source is "Can you take Solovay's inaccessible away?" (link.springer.com/article/10.1007%2FBF02760522) by Saharon Shelah (specifically, this is the paper that showed that Con(ZF+DC+everything measurable)$\implies$Con(ZFC+inaccessible); the converse direction had been proved by Solovay math.wisc.edu/~miller/old/m873-03/solovay.pdf).
Zero sharp in my comment is kind of a red herring; the point is that inaccessibles exactly capture "everything's measurable," and zero sharp is stronger. – Noah Schweber Apr 26 '14 at 23:48

This is a personal opinion rather than an answer (in fact, another personal opinion of mine is that this kind of question cannot have a meaningful objective answer). Compare this situation with Euclidean geometry. It is not quite correct to ask whether one should believe in the fifth postulate or not, because with the current state of knowledge there is no problem at all in dealing with all possible versions of it. In fact, even situations in which the status of the fifth postulate varies from point to point are very well understood.

In set theory, likewise, the state of knowledge is already ripe enough to study, if not all, then at least a significant range of the possibilities which can arise from various combinations of large cardinal (and several other important) axioms. And it is perfectly meaningful to consider and study mathematical structures which allow the status of these axioms to vary, similarly to the variation of curvature on a geometric surface.

I believe that in such circumstances the question of belief becomes obsolete. It is true that in physics one may believe that the universe is positively or negatively curved, or flat, but this is because we are placed inside this universe. In the case of mathematics, we are not placed inside any particular model of set theory, hence we are not forced to choose. Certainly some models are distinguished from the rest by special properties, just as flat geometry is distinguished among the rest of the geometries, but that's all one can say, I think.
https://physics.stackexchange.com/questions/491144/magnetic-field-lines-vs-magnetic-vector-field
# Magnetic Field Lines vs. Magnetic Vector Field

I am studying electromagnetic theory, and when I started researching the history of the conventions used to describe magnetic interactions, I could not make sense of them. The basics of how the magnetic interaction was modelled are a bit confusing. What is the "number of magnetic field lines" trying to convey, when field lines are just a simple visualisation tool and an approximation of the underlying magnetic vector field? And why are further concepts built on this visualisation idea of field lines? If field lines are just for visualisation, why do measured quantities such as magnetic flux, magnetic flux density, magnetic flux intensity, and magnetic field strength depend on the number of those field lines? I can draw however many field lines I want, right? And how do these measured quantities relate to the underlying vector field?
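One way to see past the field-line picture: "number of field lines per unit area" is only a cartoon for the magnitude of B, and flux is the surface integral of the vector field over a surface, not a count of drawn lines. Here is a minimal numeric sketch (the field strength and area are made-up values) for the simplest case of a uniform field through a flat surface:

```python
# Hedged sketch with assumed values: for a uniform field B through a flat
# surface with unit normal n_hat, the flux is Phi = (B · n_hat) * A; the
# general case is the surface integral of B over the surface. Drawn field
# lines merely visualize this quantity, so "how many lines you draw" never
# enters the physics.
B = (0.0, 0.0, 0.5)            # uniform 0.5 T field along z (made-up value)
n_hat = (0.0, 0.0, 1.0)        # unit normal of the flat surface
area = 0.02                    # surface area in m^2 (made-up value)

dot = sum(b * n for b, n in zip(B, n_hat))
flux = dot * area              # magnetic flux in webers (Wb)
flux_density = flux / area     # recovers the normal component of B: Wb/m^2 = tesla
```

So flux density really is just the field vector's component through the surface; the "lines" are a drawing convention on top of that.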
https://www.physicsforums.com/threads/multistep-stumper.140880/
# Homework Help: Multistep stumper

1. Oct 31, 2006

### Sportsman4920

A 5.4 kg block is pushed 3.0 m up a rough 37° inclined plane by a horizontal force of 75 N. If the initial speed of the block is 2.2 m/s up the plane and a constant kinetic friction force of 25 N opposes the motion, calculate the following.
(a) the initial kinetic energy of the block (J)
(b) the work done by the 75 N force (J)
(c) the work done by the friction force (J)
(d) the work done by gravity (J)
(e) the work done by the normal force (J)
(f) the final kinetic energy of the block (J)
HELP!!, please

2. Oct 31, 2006

### stunner5000pt

what have u done to answer them yourself?

3. Oct 31, 2006

### BishopUser

$$W = |F||D|\cos\theta$$ with theta being the angle between the force and displacement vectors. That should take care of the first 5. The last one looks like $$W_{net} = \Delta KE$$

Last edited: Oct 31, 2006

4. Oct 31, 2006

### Sportsman4920

thank you

5. Oct 31, 2006

### bosox3790

I tried to use the formula but I can't get the right answer.

6. Oct 31, 2006

### bosox3790

how would you do part (b) on this question?

7. Oct 31, 2006

### BishopUser

In part (b) the force is 75 N, the distance is 3 m, and the angle between them is 37 degrees, so using the formula the work should be 180 N·m.

8. Nov 4, 2006

### Sportsman4920

Thanks, now I'm trying to find the work done by friction. So I did W = 25 N * 5.2 (distance) * cos(theta), but it didn't work.
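A worked numeric sketch of parts (a)-(f), assuming g = 9.8 m/s² and using the two formulas quoted in the thread (W = |F||D|cosθ for each force, and W_net = ΔKE for the last part). Note that for the friction question in the last post, the friction force acts directly opposite the 3.0 m displacement, so its work is just -25 N × 3.0 m:

```python
import math

# Work-energy walkthrough for the block problem; g = 9.8 m/s^2 assumed.
m, d, theta = 5.4, 3.0, math.radians(37)   # mass (kg), displacement (m), incline angle
F, v0, f_k, g = 75.0, 2.2, 25.0, 9.8       # push (N), initial speed (m/s), friction (N)

ke_initial = 0.5 * m * v0**2               # (a) ≈ 13.1 J
w_applied  = F * d * math.cos(theta)       # (b) horizontal force vs. up-incline displacement ≈ 180 J
w_friction = -f_k * d                      # (c) friction directly opposes the motion: -75 J
w_gravity  = -m * g * d * math.sin(theta)  # (d) the block rises d*sin(37°): ≈ -95.5 J
w_normal   = 0.0                           # (e) normal force is perpendicular to the motion
# (f) work-energy theorem: W_net = ΔKE
ke_final = ke_initial + w_applied + w_friction + w_gravity + w_normal  # ≈ 22.2 J
```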
https://www.jobilize.com/physics-k12/section/few-words-of-caution-velocity-by-openstax?qcr=www.quizover.com
1.14 Velocity (Page 5/6)

$\begin{array}{l}\mathbf{v}=\frac{d\mathbf{r}}{dt}=\frac{dx}{dt}\mathbf{i}+\frac{dy}{dt}\mathbf{j}\\ \mathbf{v}={v}_{x}\mathbf{i}+{v}_{y}\mathbf{j}\\ v=|\mathbf{v}|=\sqrt{{v}_{x}^{2}+{v}_{y}^{2}}\end{array}$

Similarly, one-dimensional motion (for example, in the x-direction) is described by one of the components of velocity.

$\begin{array}{l}\mathbf{v}=\frac{d\mathbf{r}}{dt}=\frac{dx}{dt}\mathbf{i}\\ \mathbf{v}={v}_{x}\mathbf{i}\\ v=|\mathbf{v}|={v}_{x}\end{array}$

Few words of caution

Study of kinematics usually brings about closely related concepts, terms and symbols. It is always desirable to be precise and specific in using these terms and symbols. The following list of terms, along with their meanings, is given here as a reminder:

1: Position vector, r : a vector specifying position, drawn from the origin to the point occupied by the point object.
2: Distance, s : length of the actual path; not treated as the magnitude of displacement.
3: Displacement, AB or Δr : a vector along the straight line joining the end points A and B of the path; its magnitude, |AB| or |Δr|, is not equal to the distance s.
4: Difference of position vectors, Δr : equal to the displacement AB. The direction of Δr is not the same as that of the position vector r.
5: Magnitude of displacement, |AB| or |Δr| : length of the shortest path.
6: Average speed, ${v}_{a}$ : ratio of distance and time interval; not treated as the magnitude of average velocity.
7: Speed, v : first derivative of distance with respect to time; equal to the magnitude of velocity, |v|.
8: Average velocity, ${\mathbf{v}}_{\mathbf{a}}$ : ratio of displacement and time interval; its magnitude, $|{\mathbf{v}}_{\mathbf{a}}|$, is not equal to the average speed ${v}_{a}$.
9: Velocity, v : first derivative of displacement or position vector with respect to time.

Summary

The paragraphs here are presented to highlight the similarities and differences between the two important concepts of speed and velocity, with a view to summarizing the discussion held so far.

1: Speed is measured without direction, whereas velocity is measured with direction. Speed and velocity are both calculated at a position or time instant; as such, both are independent of the actual path. Most physical measurements, like the speedometer of a car, determine instantaneous speed. Evidently, speed is the magnitude of velocity,

$\begin{array}{l}v=|\mathbf{v}|\end{array}$

2: Since speed is a scalar quantity, it can be plotted on a single axis. For this reason, the tangent to the distance-time curve gives the speed at that point of the motion. As $ds=v\,dt$, the area under the speed-time plot gives the distance covered between two time instants.

3: On the other hand, velocity requires three axes to be represented on a plot. It means that a velocity-time plot would need 4 dimensions to be plotted, which is not possible on a three-dimensional Cartesian coordinate system. A two-dimensional velocity-time plot is possible, but is highly complicated to draw.

4: One-dimensional velocity can be treated as a scalar magnitude with an appropriate sign to represent direction. It is, therefore, possible to draw a one-dimensional velocity-time plot.

5: Average speed involves the length of the path (distance), whereas average velocity involves the shortest distance (displacement).
As distance is either greater than or equal to the magnitude of displacement,

$\begin{array}{l}s\ge |\Delta \mathbf{r}|\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}{v}_{a}\ge |{\mathbf{v}}_{\mathbf{a}}|\end{array}$

Exercises

The position vector of a particle (in meters) is given as a function of time as:

$\begin{array}{l}\mathbf{r}=2t\mathbf{i}+2{t}^{2}\mathbf{j}\end{array}$

Determine the time rate of change of the angle “θ” made by the velocity vector with the positive x-axis at time t = 2 s.

Solution: It is a two-dimensional motion. The figure below shows how the velocity vector makes an angle "θ" with the x-axis of the coordinate system. In order to find the time rate of change of this angle "θ", we need to express a trigonometric ratio of the angle in terms of the components of the velocity vector. From the figure:

$\begin{array}{l}\mathrm{tan}\theta =\frac{{v}_{y}}{{v}_{x}}\end{array}$

As given by the expression of the position vector, its components in the coordinate directions are:

$\begin{array}{l}x=2t\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}y=2{t}^{2}\end{array}$

We obtain expressions for the components of velocity in the two directions by differentiating the "x" and "y" components of the position vector with respect to time:

$\begin{array}{l}{v}_{x}=2\phantom{\rule{4pt}{0ex}}\mathrm{and}\phantom{\rule{4pt}{0ex}}{v}_{y}=4t\end{array}$

Putting these in the trigonometric function, we have:

$\begin{array}{l}\mathrm{tan}\theta =\frac{{v}_{y}}{{v}_{x}}=\frac{4t}{2}=2t\end{array}$

Since we are required to know the time rate of change of the angle, we differentiate the above trigonometric ratio with respect to time:

$\begin{array}{l}{\mathrm{sec}}^{2}\theta \frac{d\theta }{dt}=2\end{array}$

$\begin{array}{l}⇒\left(1+{\mathrm{tan}}^{2}\theta \right)\frac{d\theta }{dt}=2\\ ⇒\left(1+4{t}^{2}\right)\frac{d\theta }{dt}=2\\ ⇒\frac{d\theta }{dt}=\frac{2}{\left(1+4{t}^{2}\right)}\end{array}$

At t = 2 s,

$\begin{array}{l}⇒\frac{d\theta }{dt}=\frac{2}{\left(1+4\times {2}^{2}\right)}=\frac{2}{17}\phantom{\rule{2pt}{0ex}}\mathrm{rad/s}\end{array}$
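The exercise's result can be checked numerically. A small pure-Python sketch compares a central-difference derivative of θ(t) = arctan(2t) against the closed form 2/(1 + 4t²) derived above:

```python
import math

# Check of the worked exercise: x = 2t and y = 2t^2 give v_x = 2 and v_y = 4t,
# so tan(theta) = 2t and the closed form is d(theta)/dt = 2 / (1 + 4 t^2).
def theta(t):
    return math.atan(4 * t / 2)   # angle of the velocity vector with the x-axis

t0, h = 2.0, 1e-6
numeric = (theta(t0 + h) - theta(t0 - h)) / (2 * h)  # central difference at t = 2 s
exact = 2 / (1 + 4 * t0**2)                          # 2/17 rad/s, matching the text
```

The two values agree to many decimal places, confirming dθ/dt = 2/17 rad/s at t = 2 s.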
http://www.ni.com/documentation/en/labview/1.0/node-ref/tdms-close/
# TDMS Close (G Dataflow)

Version:

Closes a reference to the .tdms file. This node disposes of the reference to the .tdms file once the node closes the reference.

## tdms file

A reference to a .tdms file. Use the TDMS Open node to obtain the reference.

## error in

Error conditions that occur before this node runs. This node runs normally even if an error occurred before this node runs. Otherwise, the node responds to this input according to standard error behavior.

Standard Error Behavior: Many nodes provide an error in input and an error out output so that the node can respond to and communicate errors that occur while code is running. The value of error in specifies whether an error occurred before the node runs. Most nodes respond to values of error in in a standard, predictable way.

- If error in does not contain an error, the node begins execution normally. If no error occurs while the node runs, it returns no error. If an error does occur while the node runs, it returns that error information as error out.
- If error in contains an error, the node does not execute. Instead, it returns the error in value as error out.

## file path out

Path to the .tdms file reference that the node closed.

## error out

Error information. The node produces this output according to the standard error behavior described above under error in.

Where This Node Can Run:

Desktop OS: Windows

FPGA: This product does not support FPGA devices
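The standard error behavior is essentially error chaining through a dataflow graph. As a hedged analogy only (LabVIEW nodes are graphical, and these names are hypothetical, not NI's API), the same convention could be written in Python roughly as:

```python
# Hedged Python analogy of LabVIEW's "error in / error out" convention
# (hypothetical names, not the G API): a node that receives a non-empty
# "error in" passes it through without executing; otherwise it runs and
# reports its own failure, if any, on "error out".
def tdms_close(file_ref, error_in=None):
    if error_in is not None:
        return None, error_in            # node does not execute; error chains through
    try:
        path = file_ref.path             # "file path out": path of the closed reference
        file_ref.close()                 # dispose of the .tdms reference
        return path, None                # no error occurred while the node ran
    except Exception as error:
        return None, error               # the node's own failure becomes "error out"
```

Wiring error out of one node into error in of the next gives the familiar property that a single upstream failure short-circuits the rest of the chain while still propagating the original error information.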
https://www.physicsforums.com/threads/how-to-evaluate-arctan-cosx-from-2-to.582273/
# How to evaluate -arctan(cosx) from π/2 to π

1. Feb 28, 2012

### mathnoobie

1. The problem statement, all variables and given/known data
The direction is to evaluate the integral; this isn't really a calculus issue, it's more of a trig issue.
∫ sin x dx/(1 + cos²x) from π/2 to π

2. Relevant equations
∫ du/(a² + u²) = (1/a) arctan(u/a) + C

3. The attempt at a solution
I did all the integrating and ended up at -arctan(cos x), evaluated from π/2 to π. This is where I'm stuck. I know I'm supposed to use the Fundamental Theorem of Calculus, but I don't know what to do once I plug in π/2 and π. How do I generate values out of this? Do I draw a triangle? Do I use the unit circle? If so, how would I use it?

2. Feb 28, 2012

### Staff: Mentor

This seems pretty straightforward. -arctan(cos($\pi$)) - (-arctan(cos($\pi$/2))). What is cos($\pi$)? cos($\pi$/2)?

3. Feb 28, 2012

### mathnoobie

cos π is -1 and cos π/2 is 0, I believe. So then I would take -arctan(-1) + arctan(0). So I would just find on the unit circle where tangent equals -1 and 0, then subtract?

4. Feb 28, 2012

### Staff: Mentor

Right.
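As a sanity check on the thread's conclusion — the value is -arctan(cos π) + arctan(cos π/2) = -arctan(-1) + arctan(0) = π/4 — here is a small pure-Python sketch comparing that closed form against a Simpson's-rule evaluation of the integral:

```python
import math

# Numerically verify that ∫ sin(x)/(1 + cos²x) dx from π/2 to π equals π/4.
def f(x):
    return math.sin(x) / (1 + math.cos(x) ** 2)

def simpson(f, a, b, n=1000):          # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

value = simpson(f, math.pi / 2, math.pi)
# Fundamental Theorem of Calculus applied to the antiderivative -arctan(cos x):
exact = -math.atan(math.cos(math.pi)) + math.atan(math.cos(math.pi / 2))  # π/4
```

Both agree with π/4 ≈ 0.7854, confirming the unit-circle reasoning in the thread (tan θ = -1 at θ = -π/4, tan θ = 0 at θ = 0).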
https://math.stackexchange.com/questions/1091881/standard-deviation-used-in-confidence-interval-for-mean
# Standard deviation used in confidence interval for mean

I am a novice to confidence intervals. To figure out the confidence interval for a mean, one could use either the $Z$ distribution or the $t$ distribution, depending on the sample size and whether the population standard deviation is known. When the size is less than $30$ and the standard deviation is unknown, we go for the $t$ distribution. On the other hand, when the standard deviation is known, we go for the $Z$ distribution. A confidence interval for the mean quantifies uncertainty by providing a lower limit and an upper limit, a range of values that will contain the true population mean with a specified level of confidence. Now, in the case of the $Z$ distribution, how is the population standard deviation alone known prior to the estimation of the population mean? In other words, in what cases is the population standard deviation known before estimating the population mean?

We interpret your question as asking under what conditions it is reasonable to use a model in which the population standard deviation is known. Let us suppose that we are using a high-precision scientific instrument to determine, say, the mass of an object. We will do this by making a series of $n$ measurements of the mass of the object. The behaviour of the instrument may be well known, since it has been used for a long time. It is known that the result $Y_i$ produced by the instrument on the $i$-th measurement is $\mu+X_i$, where $\mu$ is the actual mass, and the $X_i$ are independent normally distributed "error" random variables, say with mean $0$ (the instrument is well-calibrated). From long experience with the instrument, the standard deviation $\sigma$ of the $X_i$ may be known with high accuracy. Then the standard deviation of $Y_i$ is $\sigma$, and may be assumed known.

• @Nicolas I haven't understood your example quite well, I guess. The standard deviation of the error random variables is known from experience. What is also known is its mean, i.e. zero. My doubt is about cases where you use the standard deviation to estimate the CI for the mean. – Raji Jan 6 '15 at 13:24
• In the example, the mean of the $X_i$ is known. However, $\mu$ is not known; it is the actual weight of the object. So the mean of the $Y_i$ is not known. But the variance of the $Y_i$ is instrument-dependent only, and can in this situation be taken as known. Jan 6 '15 at 15:38
• Reasonable enough. Thanks a lot. – Raji Jan 6 '15 at 18:03
• You are welcome. There are a number of situations where we take a quite small sample (make a small number of measurements) and assume the standard deviation is known. Another instance is age determination using radioactive decay. A small number of tiny fragments are taken from the sample, and their mean age is determined. The errors in this process are well enough understood that we can assume we know $\sigma$. Jan 7 '15 at 1:46
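As a small illustration of the accepted answer, here is a sketch (not from the thread; the instrument values are invented) of the $z$-based confidence interval $\bar{x} \pm z_{0.975}\,\sigma/\sqrt{n}$ one would use when $\sigma$ is treated as known from instrument experience:

```python
import math
import random

# Hypothetical instrument: true mass mu_true, known error sd sigma.
random.seed(0)
mu_true, sigma = 10.0, 0.5          # sigma assumed known from long experience
n = 20
sample = [random.gauss(mu_true, sigma) for _ in range(n)]
xbar = sum(sample) / n              # sample mean of the n measurements

z = 1.959963984540054               # 97.5th percentile of N(0, 1)
half_width = z * sigma / math.sqrt(n)
lo, hi = xbar - half_width, xbar + half_width
print(f"95% z-interval for mu: ({lo:.3f}, {hi:.3f})")
```

Note that the interval width depends only on the known $\sigma$ and $n$, not on the sample spread; with unknown $\sigma$ one would instead use the sample standard deviation and a $t$ critical value.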
https://www.physicsforums.com/threads/a-humble-question.132021/
# A humble question

1. Sep 14, 2006

### unscientific

Does ice melt or remain frozen at 0 degrees Celsius?

2. Sep 14, 2006

### tehno

He, he, he: Both!

3. Sep 14, 2006

### masudr

Thanks to latent heat (look up the Clausius-Clapeyron equation for more details).

4. Sep 14, 2006

### Bystander

Are you adding or removing energy (heat)? Are you increasing or decreasing pressure?

5. Sep 18, 2006

### unscientific

No energy is removed or added, and pressure is at a constant 1 atm.

6. Sep 18, 2006

### Farsight

7. Sep 18, 2006

### DaveC426913

Tehno is right, it does do both. Molecules are constantly joining and departing from the surface at the same time. The ratio stays about the same unless heat is added or removed.

8. Sep 18, 2006

### Claude Bile

9. Sep 18, 2006

### Epicurus

H2O does not have a triple point at atmospheric pressure.

10. Sep 18, 2006

### scarecrow

lol... if that was the case we'd be dead.

11. Sep 18, 2006

### Epicurus

In general there is a coexistence region whereby two phases will be simultaneously present. This happens over an extended temperature range, as predicted by the van der Waals equation of state. I am unsure what this is for water, but there is no definite point at which we have either totally water or totally ice, just proportions of either.

12. Sep 18, 2006

### castaway

Oops, I didn't read that pressure will be constant, sorry... well, ice will start melting at zero degrees Celsius.

13. Sep 19, 2006

### unscientific

So does it remain frozen... or does it start to melt???

14. Sep 19, 2006

### Staff: Mentor

It takes energy to melt ice. If you don't add any, the ice won't melt.
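As a side note to the latent-heat comments above, a back-of-envelope sketch (not from the thread; it uses the standard tabulated latent heat of fusion of water, roughly 334 kJ/kg) makes the Mentor's closing point quantitative — with zero heat flow, zero ice melts:

```python
# Latent heat of fusion of water at 0 degrees C and 1 atm, ~334 kJ/kg
# (standard tabulated value).
L_FUSION = 334e3  # J/kg

def heat_to_melt(mass_kg: float) -> float:
    """Energy (J) that must be added to melt mass_kg of ice at 0 deg C."""
    return mass_kg * L_FUSION

print(heat_to_melt(1.0))   # 1 kg of ice needs ~334 kJ to melt
print(heat_to_melt(0.0))   # no energy added -> nothing melts
```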
http://mathhelpforum.com/calculus/61083-asymptote-question.html
Math Help - Asymptote Question...

1. Asymptote Question...

I have found the slant asymptote of $\frac{x^4-6x^3-11x^2+60x+100}{x^3-9x^2+27x-27}$ to be $y=x+3$. Somehow, on its graph, the slant looks totally off. What is the problem with my math? The factored form that was given is $\frac{(x-5)^2(x+2)^2}{(x-3)^4}$. Thanks!

2. Originally Posted by nivek516
I have found the slant asymptote of $\frac{x^4-6x^3-11x^2+60x+100}{x^3-9x^2+27x-27}$ to be $y=x+3$. Somehow, on its graph, the slant looks totally off. What is the problem with my math? The factored form that was given is $\frac{(x-5)^2(x+2)^2}{(x-3)^{\color{red}4}}$. Thanks!
Do you mean $(x-3)^3 = x^3 - 9x^2 + 27x-27$? If so then your expansions are correct and your slant asymptote is correct. The graph you're looking at might be wrong.... Is it a graph you've drawn using technology? Check how you entered the equation.

3. Hello nivek:

Your math looks fine; the equation of the slant asymptote is y = x + 3. (There is a typographical error in your factored form; the exponent in the denominator should be 3.) What do you mean by saying that the slant on your graph is off? The slope is one; if the scales on both axes are the same, then the graph of y = x + 3 should make a 45-degree angle with the horizontal axis.

Cheers,

~ Mark

4. Yes, I typed in the wrong equation in my post. When I say the asymptote is off, I mean that the graph in quadrant 1 begins at around x=3 and avoids the asymptote. As $x\rightarrow \infty$, the graph starts to acknowledge the asymptote. Take a look. Could you explain why this happens?
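The slant asymptote can be double-checked by polynomial long division; here is a small sketch (not from the thread) that divides the numerator by the denominator coefficient-wise:

```python
# Recover the slant asymptote of the rational function by polynomial
# long division of x^4 - 6x^3 - 11x^2 + 60x + 100 by x^3 - 9x^2 + 27x - 27.
num = [1, -6, -11, 60, 100]   # coefficients, highest degree first
den = [1, -9, 27, -27]

def polydiv(num, den):
    """Return (quotient, remainder) coefficients of num / den."""
    rem = num[:]
    quot = []
    while len(rem) >= len(den):
        c = rem[0] / den[0]           # leading coefficient of next quotient term
        quot.append(c)
        for i, d in enumerate(den):   # subtract c * den, aligned at the front
            rem[i] -= c * d
        rem.pop(0)                    # leading term is now zero
    return quot, rem

q, r = polydiv(num, den)
print(q)   # [1.0, 3.0] -> quotient x + 3, i.e. the slant asymptote y = x + 3
print(r)   # remainder -11x^2 + 6x + 181, which vanishes relative to the
           # denominator as x -> infinity
```

The remainder term explains the behaviour nivek516 describes: near the pole at x = 3 the remainder-over-denominator part dominates, so the curve only hugs y = x + 3 for large |x|.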
https://www.groundai.com/project/regularization-in-regression-comparing-bayesian-and-frequentist-methods-in-a-poorly-informative-situation/
# Regularization in regression: comparing Bayesian and frequentist methods in a poorly informative situation¹

¹ This paper is part of Mohammed EL Anbari's PhD thesis. This work has been partly supported by the Agence Nationale de la Recherche (ANR, 212, rue de Bercy, 75012 Paris) through the 2009-2012 project ANR-09-BLAN-01 EMILE for the last two authors, and by the Institut Universitaire de France for the last author. Jean-Michel Marin and Christian P. Robert are grateful to the participants of the BIRS 07w5079 meeting on "Bioinformatics, Genetics and Stochastic Computation: Bridging the Gap" for their helpful comments. Discussions in 2007 in Banff with Sylvia Richardson and in Roma with Jim Berger and Paul Speckman are also gratefully acknowledged. Given that Arnold Zellner sadly passed away last August, we would like to dedicate this paper to the memory of this leading Bayesian thinker, who influenced the field so much and will continue to do so much longer.
Gilles Celeux, Projet Select, INRIA Saclay, Université Paris Sud, Orsay, France
Mohammed EL Anbari
Jean-Michel Marin, Institut de Mathématiques et Modélisation de Montpellier, Université de Montpellier 2, France
Christian P. Robert, Institut Universitaire de France & CREST, France

###### Abstract

Using a collection of simulated and real benchmarks, we compare Bayesian and frequentist regularization approaches under a low-information constraint, when the number of variables is almost equal to the number of observations. This comparison includes new global noninformative approaches for Bayesian variable selection built on Zellner's $g$-priors that are similar to Liang et al. (2008). The interest of these calibration-free proposals is discussed. The numerical experiments we present highlight the appeal of Bayesian regularization methods when compared with non-Bayesian alternatives: they dominate frequentist methods in the sense that they provide smaller prediction errors while selecting the most relevant variables in a parsimonious way.

Keywords: Model choice, regularization methods, noninformative priors, Zellner's $g$-prior, calibration, Lasso, elastic net, Dantzig selector.

## 1 Introduction

Given a response variable $y$ and a collection of associated potential predictor variables $x_1,\dots,x_p$, the classical linear regression model imposes a linear dependence on the conditional expectation (Rao, 1973)
$$\mathbb{E}[y|x_1,\dots,x_p]=\beta_0+\beta_1 x_1+\dots+\beta_p x_p.$$
A fundamental inferential direction for those models relates to the variable selection problem, namely that only variables of relevance should be kept within the regression while the others should be removed. While we cannot discuss at length the potential applications of this perspective, variable selection is particularly relevant when the number of regressors is larger than the number of observations (as in microarray and other genetic data analyses).
To deal with poorly or ill-posed regression problems, many regularization methods have been proposed, like ridge regression (Hoerl and Kennard, 1970) and the Lasso (Tibshirani, 1996). Recently the interest in frequentist regularization methods has increased, and this has produced a flurry of methods (see, among others, Candes and Tao, 2007, Zou and Hastie, 2005, Zou, 2006, Yuan and Lin, 2007). However, a natural approach to regularization is to follow the Bayesian paradigm, as demonstrated recently by the Bayesian Lasso of Park and Casella (2008). The amount of literature on Bayesian variable selection is quite enormous (a small subset of which is, for instance, Mitchell and Beauchamp, 1988, George and McCulloch, 1993, Chipman, 1996, Smith and Kohn, 1996, George and McCulloch, 1997, Dupuis and Robert, 2003, Brown and Vannucci, 1998, Philips and Guttman, 1998, George, 2000, Kohn et al., 2001, Nott and Green, 2004, Schneider and Corcoran, 2004, Casella and Moreno, 2006, Cui and George, 2008, Liang et al., 2008, Bottolo and Richardson, 2010). The number of approaches and scenarios that have been advanced to undertake the selection of the most relevant variables given a set of observations is quite large, presumably due to the vague decisional setting induced by the question: Which variables do matter? Such a variety of resolutions signals a lack of agreement between the actors in the field. Most of the solutions, including Liang et al. (2008) and Bottolo and Richardson (2010), focus on the use of the $g$-prior, introduced by Zellner (1986). While this prior has a long history and while it reduces the prior input to a single integer, $g$, the influence of this remaining prior factor is long-lasting, and large values of $g$ are no guarantee of negligible effects, in connection with the Bartlett or Lindley-Jeffreys paradoxes (Bartlett, 1957, Lindley, 1957, Robert, 1993), as illustrated for instance in Celeux et al. (2006) or Marin and Robert (2007).
In order to alleviate this influence, some empirical Bayes [Cui and George (2008)] and hierarchical Bayes [Zellner and Siow (1980), Celeux et al. (2006), Marin and Robert (2007), Liang et al. (2008) and Bottolo and Richardson (2010)] solutions have been proposed. In this paper, we pay special attention to two calibration-free hierarchical Zellner $g$-priors. The first one is the Jeffreys prior, which is not location invariant. The second one avoids this problem by only considering models with at least one variable in the model. The purpose of our paper is to compare the frequentist and the Bayesian points of view on regularization when $n$ remains (slightly) greater than $p$; we limit our attention to full-rank models. This comparison is considered from both the predictive and the explicative points of view. The outcome of this study is that the Bayesian methods are quite similar to one another while dominating their frequentist counterparts. The plan of the paper is as follows: we recall the details of Zellner's (1986) original $g$-prior in Section 2, and discuss therein the potential choices of $g$. We present hierarchical noninformative alternatives in Section 3. Section 4 compares the results of Bayesian and frequentist methods on simulated and real datasets. Section 5 concludes the paper.

## 2 Zellner's g-priors

Following standard notation, we introduce a variable $\gamma$ that indicates which variables are active in the regression, excluding the constant vector corresponding to the intercept, which is assumed to be always present in the linear regression model. We observe $(y,X)$; the model is defined as the conditional distribution
$$y|X,\gamma,\beta^{\gamma},\sigma^2\sim\mathcal{N}_n(X^{\gamma}\beta^{\gamma},\sigma^2 I_n), \qquad (1)$$
where
• $\gamma\in\{0,1\}^p$ and $p_\gamma$ denotes the number of variables selected by $\gamma$,
• $X^{\gamma}$ is the matrix whose columns are made of the vector $1_n$ and of the variables for which $\gamma_i=1$,
• $\beta^{\gamma}$ and $\sigma^2$ are unknown parameters. The same symbol for the parameter $\sigma^2$ is used across all models.
For model $\gamma$, Zellner's $g$-prior is given by
$$\beta^{\gamma}|X,\gamma,\sigma^2\sim\mathcal{N}_{p_\gamma+1}\big(\tilde\beta^{\gamma},\,g_\gamma\sigma^2((X^{\gamma})'X^{\gamma})^{-1}\big),\qquad \pi(\sigma^2|X,\gamma)\propto\sigma^{-2}.$$
The experimenter chooses the prior expectation $\tilde\beta^{\gamma}$ and $g_\gamma$.
For such a prior, we obtain the classical average between prior and observed regressors,
$$\mathbb{E}(\beta^{\gamma}|X,\gamma,y)=\frac{g_\gamma\hat\beta^{\gamma}+\tilde\beta^{\gamma}}{g_\gamma+1}.$$
This prior is traditionally called Zellner's $g$-prior in the Bayesian folklore because of the use of the constant $g$ by Zellner (1986) in front of the Fisher information matrix. Its appeal is that, by using the information matrix as a global scale,
• it avoids the specification of a whole prior covariance matrix, which would be a tremendous task;
• it allows for a specification of the constant $g$ in terms of observational units, or virtual prior pseudo-observations in the sense of de Finetti (1972).
However, a fundamental feature of the $g$-prior is that this prior is improper, due to the use of an infinite mass on $\sigma^2$. From a theoretical point of view, this should jeopardize the use of posterior model probabilities, since these probabilities are not uniquely scaled under improper priors, because there is no way of eliminating the residual constant factor in those priors (DeGroot, 1973, Kass and Raftery, 1995, Robert, 2001). However, under the assumption that $\sigma^2$ is a parameter that has a meaning common to all models $\gamma$, Berger et al. (1998) develop a framework that allows one to work with a single improper prior that is common to all models (see also Marin and Robert, 2007). A fundamental appeal of Zellner's $g$-prior in model comparison, and in particular in variable selection, is its simplicity, since it reduces the prior input to the sole specification of a scale parameter $g$. At this stage, we need to point out that an alternative $g$-prior is often used (Berger et al., 1998, Fernandez et al., 2001, Liang et al., 2008, Bottolo and Richardson, 2010), by singling out the intercept parameter in the linear regression. By first assuming a centering of the covariates, the intercept is given a flat prior while the other parameters are associated with a corresponding $g$-prior.
Thus, this is an alternative to model (1), which we denote by model (2) to stress the distinction between both representations, and which is such that
$$y|X,\gamma,\alpha,\beta^{\gamma}_{\mathrm{inv}},\sigma^2\sim\mathcal{N}_n(\alpha 1_n+X^{\gamma}_{\mathrm{inv}}\beta^{\gamma}_{\mathrm{inv}},\sigma^2 I_n), \qquad (2)$$
where
• $X^{\gamma}_{\mathrm{inv}}$ is the matrix whose columns are made of the variables for which $\gamma_i=1$,
• $\alpha$, $\beta^{\gamma}_{\mathrm{inv}}$ and $\sigma^2$ are unknown parameters. The parameters $\alpha$ and $\sigma^2$ are denoted the same way across all models and rely on the same prior.
Namely, for model (2), the corresponding Zellner's $g$-prior is given by
$$\beta^{\gamma}_{\mathrm{inv}}|X,\gamma,\sigma^2\sim\mathcal{N}_{p_\gamma}\big(\tilde\beta^{\gamma}_{\mathrm{inv}},\,g_\gamma\sigma^2((X^{\gamma}_{\mathrm{inv}})'X^{\gamma}_{\mathrm{inv}})^{-1}\big),\qquad \pi(\alpha,\sigma^2|X,\gamma)\propto\sigma^{-2}.$$
In that case, we obtain
$$\mathbb{E}(\beta^{\gamma}_{\mathrm{inv}}|X,\gamma,y)=\frac{g_\gamma\hat\beta^{\gamma}_{\mathrm{inv}}+\tilde\beta^{\gamma}_{\mathrm{inv}}}{g_\gamma+1}\quad\text{and}\quad \mathbb{E}(\alpha|X,\gamma,y)=\bar y=\frac{1}{n}\sum_{i=1}^n y_i.$$
For models (1) and (2), in a noninformative setting, we can for instance choose the prior expectation equal to zero and $g$ large. However, as pointed out in Marin and Robert (2007, Chapter 3) among others, there is a lasting influence of $g$ over the resulting inference, and it is impossible to "let $g$ go to infinity" to eliminate this influence, because of the Bartlett and Lindley-Jeffreys paradoxes (Bartlett, 1957, Lindley, 1957, Robert, 1993): an infinite value of $g$ ends up selecting the null model, regardless of the information brought by the data. For this reason, data-dependent versions of $g$ have been proposed, with various degrees of justification:
• Kass and Wasserman (1995) use $g=n$, so that the amount of information about the parameters contained in the prior equals the amount of information brought by one observation. As shown by Foster and George (1994), for $n$ large enough this perspective is very close to using the Schwarz (Kass and Wasserman, 1995) or BIC criterion, in that the log-posterior corresponding to $g=n$ is equal to the penalized log-likelihood of this criterion.
• Foster and George (1994) and George and Foster (2000) propose $g=p^2$, in connection with the Risk Inflation Criterion (RIC) that penalizes the regression sum of squares.
• Fernandez et al. (2001) gather both perspectives in $g=\max(n,p^2)$, as a conservative bridge between BIC and RIC, a choice that they christened the "benchmark prior".
• George and Foster (2000) and Cui and George (2008) resort to empirical Bayes techniques.
These solutions, while commendable since based on asymptotic properties (see in particular Fernandez et al., 2001 for consistency results), are nonetheless unsatisfactory in that they depend on the sample size and involve a degree of arbitrariness.

## 3 Mixtures of g-priors

The most natural Bayesian approach to resolving the uncertainty about the parameter $g$ is to put a hyperprior on this parameter:
• This was implicitly proposed by Zellner and Siow (1980), since those authors introduced Cauchy priors on the regression coefficients, which corresponds to a $g$-prior augmented by an inverse-Gamma prior on $g$.
• For model (2), Liang et al. (2008), Cui and George (2008) and Bottolo and Richardson (2010) use
$$\beta^{\gamma}_{\mathrm{inv}}|X,\gamma,\sigma^2\sim\mathcal{N}_{p_\gamma}\big(0_{p_\gamma},\,g\sigma^2((X^{\gamma}_{\mathrm{inv}})'X^{\gamma}_{\mathrm{inv}})^{-1}\big)$$
and a hyperprior of the form
$$\pi(\alpha,\sigma^2,g|X,\gamma)\propto(1+g)^{-a/2}\sigma^{-2},$$
with $a>2$. This constraint on $a$ is due to the fact that the hyperprior must be proper, in connection with the separate processing of the intercept and the use of a Lebesgue measure as a prior on $\alpha$. We note that $a$ needs to be specified, $a=3$ and $a=4$ being the solutions favored by Liang et al. (2008).
• For model (1), Celeux et al. (2006) and Marin and Robert (2007) used
$$\beta^{\gamma}|X,\gamma,\sigma^2\sim\mathcal{N}_{p_\gamma+1}\big(0_{p_\gamma+1},\,g\sigma^2((X^{\gamma})'X^{\gamma})^{-1}\big)$$
and a hyperprior of the form
$$\pi(\sigma^2,g|X)\propto\sigma^{-2}g^{-1}\mathbb{I}_{\mathbb{N}^*}(g).$$
The choice of the integer support is mostly computational, while the Jeffreys-like shape is not justified, but the authors claim that it is appropriate for a scale parameter.
For model (1) a more convincing modelling is possible, since the Jeffreys prior is available. Indeed, if
$$\beta^{\gamma}|X,\gamma,\sigma^2\sim\mathcal{N}_{p_\gamma+1}\big(0_{p_\gamma+1},\,g\sigma^2((X^{\gamma})'X^{\gamma})^{-1}\big),$$
then
$$y|X,\gamma,g,\sigma^2\sim\mathcal{N}_n\Big(0_n,\,\sigma^2\big[I_n-\tfrac{g}{g+1}P^{\gamma}\big]^{-1}\Big),$$
where $P^{\gamma}$ is the orthogonal projector on the linear subspace spanned by the columns of $X^{\gamma}$. Since the Fisher information matrix is
$$I(\sigma^2,g)=\frac{1}{2}\begin{pmatrix} n/\sigma^4 & (p_\gamma+1)/(\sigma^2(g+1)) \\ (p_\gamma+1)/(\sigma^2(g+1)) & (p_\gamma+1)/(g+1)^2 \end{pmatrix},$$
the corresponding Jeffreys prior on $(\sigma^2,g)$ is
$$\pi(\sigma^2,g|X)\propto\sigma^{-2}(g+1)^{-1}.$$
Note that, for model (2), Liang et al.
(2008) discuss the choices of $a$ leading to the reference prior and the Jeffreys prior, presumably also under the marginal model after integrating out $\beta^{\gamma}$, although details are not given. For such a prior modelling, there exists a closed-form representation for posterior quantities, in that
$$\pi(\gamma,g|X,y)\propto(g+1)^{n/2-(p_\gamma+1)/2-1}\big(1+g(1-y'P^{\gamma}y/y'y)\big)^{-n/2}$$
and
$$\pi(\gamma|X,y)\propto\frac{{}_2F_1\big(n/2,1;(p_\gamma+3)/2;\,y'P^{\gamma}y/y'y\big)}{p_\gamma+1}, \qquad (3)$$
where ${}_2F_1$ is the Gaussian hypergeometric function (Butler and Wood, 2002). We can thus proceed to undertake Bayesian variable selection without resorting at all to numerical methods (Marin and Robert, 2007). Moreover, the shrinkage factor due to the Bayesian modelling can also be expressed in closed form as
$$\mathbb{E}\big(g/(g+1)\,|\,X,\gamma,y\big) = \frac{\int_0^\infty g\,(g+1)^{n/2-(p_\gamma+1)/2-2}\big(1+g(1-y'P^{\gamma}y/y'y)\big)^{-n/2}\,\mathrm{d}g}{\int_0^\infty (g+1)^{n/2-(p_\gamma+1)/2-1}\big(1+g(1-y'P^{\gamma}y/y'y)\big)^{-n/2}\,\mathrm{d}g} = \frac{2\,{}_2F_1\big(n/2,2;(p_\gamma+3)/2+1;y'P^{\gamma}y/y'y\big)}{(p_\gamma+3)\,{}_2F_1\big(n/2,1;(p_\gamma+3)/2;y'P^{\gamma}y/y'y\big)}.$$
This obviously leads to straightforward representations for Bayes estimates. If $X_{\mathrm{new}}$ is a matrix containing new values of the explanatory variables for which we would like to predict the corresponding response $y_{\mathrm{new}}$, the Bayesian predictor of $y_{\mathrm{new}}$ is given by
$$\hat y^{\gamma}_{\mathrm{new}} = \mathbb{E}[y_{\mathrm{new}}|X_{\mathrm{new}},X,\gamma,y] = \frac{2\,{}_2F_1\big(n/2,2;(p_\gamma+3)/2+1;y'P^{\gamma}y/y'y\big)}{(p_\gamma+3)\,{}_2F_1\big(n/2,1;(p_\gamma+3)/2;y'P^{\gamma}y/y'y\big)}\,X_{\mathrm{new}}\hat\beta^{\gamma}.$$
Similarly, the Bayesian model averaging predictor of $y_{\mathrm{new}}$ is given by
$$\hat y_{\mathrm{new}} = \mathbb{E}[y_{\mathrm{new}}|X_{\mathrm{new}},X,y] = \frac{2\sum_{\gamma\in\Gamma}{}_2F_1\big(n/2,2;(p_\gamma+3)/2+1;y'P^{\gamma}y/y'y\big)\,X_{\mathrm{new}}\hat\beta^{\gamma}\big/[(p_\gamma+1)(p_\gamma+3)]}{\sum_{\gamma\in\Gamma}{}_2F_1\big(n/2,1;(p_\gamma+3)/2;y'P^{\gamma}y/y'y\big)\big/(p_\gamma+1)}.$$
This numerical simplification in the derivation of Bayesian estimates and predictors is found in Liang et al. (2008) and exploited further in Bottolo and Richardson (2010). Note also that Guo and Speckman (2009) have furthermore established the consistency of the Bayes factors based on such priors. In contrast with this proposal, the prior of Liang et al. (2008) depends on a tuning parameter $a$. Despite that, there also exist arguments to support this prior modelling, including the important issue of invariance under location-scale transforms.
As seen in the above formulae, the Jeffreys prior associated with model (1) ensures scale invariance but not location invariance. In order to ensure location invariance for model (1), it would be necessary to center the observation variable $y$ as well as the dependent variables. Obviously, this centering of the data is completely unjustified from a Bayesian perspective, and furthermore it creates artificial correlations between observations. However, it could be argued that the lack of location invariance only pertains to quite specific and somehow artificial situations, and that it is negligible in most situations. We will return to this point in the comparison section. A location-scale alternative consists in using the prior of Liang et al. (2008) and excluding the null model from the competitors. This prior leads to the model posterior probability
$$\pi(\gamma|X,y)\propto\frac{{}_2F_1\big((n-1)/2,1;(p_\gamma+2)/2;\,(y-\bar y)'P^{\gamma}(y-\bar y)/(y-\bar y)'(y-\bar y)\big)}{p_\gamma}. \qquad (5)$$
Equations (3) and (5) are similar. However, in (5), $y$ is centered, ensuring the location invariance of the selection procedure.

## 4 Numerical comparisons

We present here the results of numerical experiments aiming at comparing the behavior of Bayesian variable selection and of some (non-Bayesian) popular regularization methods in regression, when considered from a variable selection point of view. The regularization methods that we consider are the Lasso, the Dantzig selector, and the elastic net, described in Section 4.1. The Bayesian variable selection procedures we consider oppose strategies for selecting the hyperparameter $g$ in Zellner's $g$-priors. We include in this comparison the intrinsic prior (Casella and Moreno, 2006), which is another default objective prior for the noninformative setting that does not require any tuning parameters and is also invariant under location and scale changes. All procedures under comparison are described in Table 1. We have also included in this comparison the highly standard AIC and BIC penalized likelihood criteria.
Moreover, we will refer to the performances of an ORACLE procedure that assumes the true model is known and estimates the regression coefficients by least squares.

### 4.1 Regularization methods

1) The Lasso: introduced by Tibshirani (1996), the Lasso is a shrinkage method for linear regression. It is defined as the solution to the penalized least squares optimization problem
$$\hat\beta^{\mathrm{Lasso}}=\arg\min_{\beta}\;\|y-X\beta\|_2^2+\lambda\sum_{j=1}^{p}|\beta_j|,$$
where $\lambda$ is a positive tuning parameter.

2) The Dantzig selector: Candes and Tao (2007) introduced the Dantzig selector as an alternative to the Lasso. The Dantzig selector is the solution to the optimization problem
$$\min_{\beta\in\mathbb{R}^p}\|\beta\|_1\quad\text{subject to}\quad\|X^t(y-X\beta)\|_\infty\le\lambda,$$
where $\lambda$ is a positive tuning parameter. The constraint can be viewed as a relaxation of the normal equations of classical linear regression.

3) The elastic net (Enet): the Lasso has at least two limitations: a) it does not encourage grouped selection in the presence of highly correlated covariates, and b) when $p>n$ it can select at most $n$ covariates. To overcome these limitations, Zou and Hastie (2005) proposed the elastic net, which combines the ridge and Lasso penalties, i.e.
$$\hat\beta^{\mathrm{Enet}}=\arg\min_{\beta}\;\|y-X\beta\|_2^2+\lambda\sum_{j=1}^{p}|\beta_j|+\mu\sum_{j=1}^{p}\beta_j^2,$$
where $\lambda$ and $\mu$ are two positive tuning parameters.

### 4.2 Numerical experiments on simulated datasets

We have designed six different simulated datasets as benchmarks, chosen as follows:
1. Example 1 (sparse uncorrelated design) corresponds to an uncorrelated covariate setting, the components of the predictors being iid realizations. The response is simulated as
$$y\sim\mathcal{N}_n(2+x_2+2x_3-2x_6-1.5x_7,\,I_n).$$
2. Example 2 (sparse correlated design) corresponds to a correlated case, in which the predictors are built from shared underlying iid terms. The use of common terms in the construction of the covariates obviously induces correlation among them: the pairwise correlation within each of three groups of variables is 0.9.
There is no correlation between those three groups of variables. The response is simulated as
$$y\sim\mathcal{N}_n(2+x_2+2x_3-2x_6-1.5x_7,\,I_n).$$
3. Example 3 (sparse noisy correlated design) involves predictors generated from a multivariate Gaussian distribution with correlations
$$\rho(x_i,x_j)=0.5^{|i-j|}.$$
The response is simulated as
$$y\sim\mathcal{N}_n(3x_1+1.5x_2+2x_5,\,9I_n).$$
4. Example 4 (saturated correlated design) is the same as Example 3, except that the response is simulated as
$$y\sim\mathcal{N}_n\Big(0.85\sum_{i=1}^{8}x_i,\,I_n\Big).$$
5. Example 5 involves predictors generated from a multivariate Gaussian distribution with correlations
$$\rho(x_i,x_j)=0.7^{|i-j|}.$$
The response is simulated as
$$y\sim\mathcal{N}_n(2x_2-3x_4,\,I_n).$$
6. Example 6 (null model) involves predictors generated from a multivariate Gaussian distribution with correlations
$$\rho(x_i,x_j)=0.5^{|i-j|}.$$
The response is simulated as
$$y\sim\mathcal{N}_n(2,\,4I_n).$$
Each dataset consists of a training set, on which the regression model is fitted, and a test set for assessing performance. Tuning parameters in the Lasso, the Dantzig selector (DZ), and the elastic net (ENET) have been selected by minimizing the leave-one-out cross-validation prediction error. For each example, independent datasets have been simulated. We use three measures of performance:
1. the root mean squared error (MSE), $\hat y$ being the prediction of $y$ on the test set;
2. HITS: the number of correctly identified influential variables;
3. FP (false positives): the number of non-influential variables declared influential.
Using those six datasets as benchmarks, we compare the variable selection methods listed in Table 1. The performances of these selection methods are summarized in Tables 2-13. In the Bayesian approaches, the set of variables is naturally selected according to the maximum posterior probability, and the prediction is obtained via the Bayesian model averaging predictors.
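As an illustration of the Lasso described in Section 4.1, here is a minimal coordinate-descent sketch (not the implementation used in the paper's experiments; the data, seed and value of $\lambda$ are arbitrary choices). Each coordinate update soft-thresholds the correlation between a column and its partial residual, which is what produces exact zeros in the estimate:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=500):
    """Coordinate descent for min_b ||y - Xb||_2^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]          # partial residual
            b[j] = soft_threshold(X[:, j] @ r_j, lam / 2) / col_sq[j]
    return b

# Toy sparse design: only coefficients 0 and 3 are truly nonzero.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
beta_true = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0])
y = X @ beta_true + 0.1 * rng.normal(size=60)

b = lasso_cd(X, y, lam=5.0)
print(np.round(b, 2))   # influential coefficients kept, the rest shrunk to ~0
```

The exact-zero behaviour of the $\ell_1$ penalty is what makes the Lasso a variable selection method, in contrast to the ridge penalty, which only shrinks.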
In this numerical experiment, the Bayesian procedures are clearly much more parsimonious than the regularization procedures, in that they almost always avoid overfitting. In all examples, the false positive rate FP is smaller for the Bayesian solutions than for the regularization methods. Except for the ZS-F and OVS scenarios, which behave slightly worse than the others, all the Bayesian procedures tested here produce the same selection of predictors. It seems that ZS-F has a slight tendency to select too many variables. The performances of OVS are somewhat disappointing, and this procedure seems to have a tendency to be too parsimonious. From a predictive viewpoint, computing the MSE by model averaging, the Bayesian approaches also perform better than the regularization approaches, except for the saturated correlated example (Example 4). We further note that the classical selection procedures based on AIC and BIC do not easily reject variables and are thus slightly worse than the Bayesian and regularization procedures (a fact not surprising for AIC). In all examples, the NIMS and HG-2 approaches lead to optimal performances, in that they select the right covariates and only the right covariates, while achieving close to the minimal root mean squared error compared with all the other Bayesian solutions we considered. They also do almost systematically better than BIC and AIC. A global remark about this comparison is that all Bayesian procedures have a very similar MSE, and thus they all correspond to the same regularization effect, except for OVS, which does systematically worse. However, it is important to notice that the MSE for OVS has not been computed by model averaging, but by using the best model. Otherwise, it would be hazardous to recommend one of the priors from those simulations, since there is no sensitive difference between them from either the selection or the prediction point of view.
https://kidbrooke.com/blog/the-volatility-components-and-their-effect-on-the-macroeconomy/
May 2017

# The Volatility Components and Their Effect on the Macroeconomy

Cyclicality is a well-established behaviour of volatility and has been widely used in its modelling. In particular, it is well documented that market volatility can be characterised by a two-factor process: a slowly varying long-run component called the core volatility, and a strongly mean-reverting short-run component commonly referred to as the transitory volatility. A paper by the Bank of England (BoE) studies the relationship between these two volatility components and macroeconomic fundamentals of the U.S. economy. The aim is to investigate whether there are systematic relationships in how macroeconomic shocks impact the different volatility components, as well as how changes in volatility affect the macroeconomy. Using data between 2001 and 2015, a structural vector autoregression (SVAR) model is used in which the core and transitory volatility are fitted to a series of macroeconomic measures, including the industrial production growth rate, the inflation rate, the short-term interest rate, and Shiller's crash confidence index, a proxy for investor sentiment. The study highlights three structural shocks, related to aggregate demand, aggregate supply and monetary policy. With respect to supply and demand, the study shows that an adverse shock to either aggregate demand or aggregate supply creates a significant and sustained increase in both the core and total volatility, with the former peaking later and staying significant for a considerably longer period. For the transitory component, on the other hand, the impact of changes in supply and demand is found to be insignificant. Moreover, the study shows that a change in core volatility carries a deeper recessionary impact on the market than what is observed from shocks in transitory or total volatility.
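To make the two-factor decomposition concrete, a toy simulation of a slowly varying core component plus a strongly mean-reverting transitory component might look as follows. The persistence and noise parameters are illustrative assumptions, not the BoE estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 1000
phi_core, phi_trans = 0.99, 0.5    # assumed AR(1) persistences
core = np.zeros(T)                 # slowly varying long-run component
trans = np.zeros(T)                # strongly mean-reverting short-run component
for t in range(1, T):
    core[t] = phi_core * core[t - 1] + 0.05 * rng.standard_normal()
    trans[t] = phi_trans * trans[t - 1] + 0.3 * rng.standard_normal()
log_vol = core + trans             # total (log-)volatility
```

The high persistence of the core series is what lets its response to a shock peak later and stay elevated longer, while the transitory series forgets shocks within a few periods.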
In particular, an equal shock to the core volatility and to the total volatility resulted in a more sustained rise in volatility and a deeper contraction in industrial production growth and the inflation rate for the core volatility in every case. In contrast, the transitory component was found to have a much weaker relationship with the real economy, but a more pronounced effect on investor sentiment. In conclusion, it is apparent from the BoE study that the choice of volatility component is an important factor in studying how volatility interacts with the macroeconomy, and a practitioner might therefore want to choose the measure of volatility most suitable for their needs. If the intention is to understand how volatility impacts the macroeconomy, core volatility should be considered. On the other hand, if the interest lies in the relationship between investor sentiment and volatility, transitory volatility may be the more suitable component to consider.

## References

Bank of England
https://mc-stan.org/docs/2_18/functions-reference/poisson-log-glm.html
## 13.7 Poisson-Log Generalised Linear Model (Poisson Regression)

Stan also supplies a single primitive for a generalised linear model with Poisson likelihood and log link function, i.e. a primitive for a Poisson regression. This should provide a more efficient implementation of Poisson regression than a manually written regression in terms of a Poisson likelihood and matrix multiplication.

### 13.7.1 Probability Mass Function

If \(x\in \mathbb{R}^{n\cdot m}, \alpha \in \mathbb{R}^n, \beta\in \mathbb{R}^m\), then for \(y \in \mathbb{N}^n\), $\text{PoissonLogGLM}(y|x, \alpha, \beta) = \prod_{1\leq i \leq n}\text{Poisson}(y_i|\exp(\alpha_i + x_i\cdot \beta)).$

### 13.7.2 Sampling Statement

y ~ poisson_log_glm(x, alpha, beta)

Increment target log probability density with poisson_log_glm_lpmf(y | x, alpha, beta), dropping constant additive terms.

### 13.7.3 Stan Functions

real poisson_log_glm_lpmf(int[] y | matrix x, real alpha, vector beta)
The log Poisson probability mass of y given log-rate alpha + x*beta, where a constant intercept alpha is used for all observations. The number of rows of the independent variable matrix x must match the length of the dependent variable vector y, and the number of columns of x must match the length of the weight vector beta.

real poisson_log_glm_lpmf(int[] y | matrix x, vector alpha, vector beta)
The log Poisson probability mass of y given log-rate alpha + x*beta, where the intercept alpha is allowed to vary between observations. The number of rows of the independent variable matrix x must match the length of the dependent variable vector y, and the number of columns of x must match the length of the weight vector beta.
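The probability mass function above can be checked numerically outside of Stan. The following Python sketch evaluates the same log mass for the constant-intercept signature directly from the definition (it is not Stan's optimized implementation):

```python
import numpy as np
from math import lgamma, exp

def poisson_log_glm_lpmf(y, x, alpha, beta):
    """Sum of log Poisson(y_i | rate = exp(alpha + x_i . beta)); alpha shared."""
    eta = alpha + x @ beta                        # log-rate per observation
    return float(sum(yi * ei - exp(ei) - lgamma(yi + 1)
                     for yi, ei in zip(y, eta)))

# With alpha = 0 and beta = 0 every rate is exp(0) = 1, so a single
# observation y = 0 has log mass log(e**-1) = -1.
lp = poisson_log_glm_lpmf(np.array([0]), np.zeros((1, 2)), 0.0, np.zeros(2))
```

The term `yi * ei - exp(ei) - lgamma(yi + 1)` is the log of the Poisson pmf with the rate written on the log scale, which is exactly what the product formula above computes term by term.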
https://www.physicsforums.com/threads/why-does-mass-distort-space.751470/
# Why does mass distort space?

1. Apr 30, 2014

### Evan

I understand that the distortion of space is what gives rise to the force of gravity, and that the Earth is basically stuck in the sun's distortion, but why does mass cause this distortion? Is it just a fundamental property of space, or can there be a reason why it causes the distortion which, with more technology and advancements, could be counteracted? Also, if gravity is a property of all mass, is it possible that gravity is a friction that keeps mass from ever achieving the speed of light?

2. Apr 30, 2014

### ZapperZ

Staff Emeritus

You should not try to "extrapolate" an idea when that idea isn't well-understood in the first place. Proposing "gravity is a friction" falls under that description. I think you haven't fully understood General Relativity. It isn't a "distortion of space", but rather a distortion of space-time. So I would recommend a simple introduction to General Relativity for you to read (assuming that you already know about Special Relativity and the connection between space and time): http://www.physics.fsu.edu/courses/spring98/ast3033/relativity/generalrelativity.htm [Broken]

Zz.

Last edited by a moderator: May 6, 2017

3. Apr 30, 2014

### HallsofIvy

Staff Emeritus

What do you mean by "mass"?

4. Apr 30, 2014

### Evan

I did understand that it was space-time, but I was wondering why mass affects it. I will read that though, because it will probably help me understand the concept better. I'm good with my calculus, but I need to work my way through more of the physics before GR.

Last edited: Apr 30, 2014

5. Apr 30, 2014

### Evan

Massive objects in space. Special Relativity says matter with mass can't go the speed of light. I kind of want to know why, rather than just making a right triangle with E^2 = (mc^2)^2 + (pc)^2. I have read that the Higgs field is what keeps particles with mass from going the speed of light, but why?

6. Apr 30, 2014

### HomogenousCow

Particles are necessarily coupled to the metric.

7.
Apr 30, 2014

### Staff: Mentor

Mass curves spacetime because it has energy, and the stress-energy tensor is the source of spacetime curvature according to the Einstein field equation. As to why the EFE is correct: like all fundamental physical principles, it is simply postulated, and it is justified because it seems to fit the data well.

8. Apr 30, 2014

### Evan

Thank you all, you have been a huge help!!

9. Apr 30, 2014

### homeomorphic

In my GR class, the Einstein field equation was motivated as a 4-dimensional, relativistic analogue of the Poisson equation for Newtonian gravity, which is equivalent to Newton's inverse square law. So, no, we don't know why mass curves space-time; it just does, but there were some initial observations, like special relativity, time dilation in a gravitational field, the equivalence principle, and a bunch of tensor calculus/differential geometry that helped Einstein (with some help from his mathematician friend, Marcel Grossmann) to figure out that it does. Newton's law of gravity also needed to be fixed, just as the rest of Newton's laws needed to be fixed, due to special relativity. Incidentally, if you want to get a better feel for why space-time curvature explains gravity, here's a good page to read: http://math.ucr.edu/home/baez/einstein/

10. Apr 30, 2014

### Evan

That's all neat. I have much to learn until I can do the math and the physics, but I'll bookmark the page. I read a little about the stress-energy tensor to get a better understanding, but I am failing to understand where the energy comes from; is it just the energy of the mass and momentum that bends space-time?
http://www.physicsforums.com/showthread.php?t=410286
# Thermodynamic potentials

by Lojzek
Tags: potentials, thermodynamic

P: 249 I have some questions about thermodynamic potentials (internal energy U, enthalpy H, Helmholtz free energy F, Gibbs free energy G):

1. The differentials of the potentials:

dU <= TdS - pdV
dH <= TdS + Vdp
dF <= -SdT - pdV
dG <= -SdT + Vdp

Do these equations apply only to a single homogeneous system, or can they be used for a system composed of several different subsystems? Example: let's have N subsystems, each respecting the equation dUi <= TidSi - pidVi. Considering U=$$\sum$$Ui, S=$$\sum$$Si, V=$$\sum$$Vi, does it always follow that dU <= TdS - pdV? I think I can prove this if all pressures and temperatures are equal. Can this equation also be used if the pressures and temperatures of the subsystems are not equal? In this case, should we use the outside temperature and pressure in the equation corresponding to the whole system? Can a similar generalization be used for the other potentials?

2. In which cases can we get strict inequalities, like dU < TdS - pdV?
https://puzzling.stackexchange.com/questions/80116/the-master-puzzle
# The master puzzle

Take any angle in a circle:

1. Divide that angle into equal proportions, up to infinitely many intervals, then apply the pattern given below.
2. The numbers you obtain after doing so should be equal.
3. Pattern: let's take 120 as an angle. 120 = 1 + 2 + 0 = 3 (pattern).
4. Divide the number 3 in equal proportions; let us divide it by 2.
5. 3 ÷ 2 = 1.5
6. 1.5 = 1 + 5 = 6
7. which isn't the same number (3 ≠ 6).
8. So find the number.

Let's take the angle \(360^\circ\):

1. \(360 = 3 + 6 + 0 = 9\).
2. Divide 360 by 2 repeatedly: 180, then 90, then 45, and \(4+5 = 9\).
3. Continue the process: divide 45 by 2 again to get 22.5, and \(2+2+5 = 9\).
4. Doing this infinitely many times, we get the same result.
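The repeated digit sum the puzzle relies on (a digital root, with the decimal point ignored) can be sketched in Python:

```python
def digit_sum(x):
    """Repeatedly sum the decimal digits of x (ignoring any '.') down to one digit."""
    s = str(x).replace('.', '')
    while len(s) > 1:
        s = str(sum(int(d) for d in s))
    return int(s)

# Halving 360 keeps the digit sum fixed at 9:
angle = 360.0
sums = []
for _ in range(5):          # 360, 180, 90, 45, 22.5
    sums.append(digit_sum(angle))
    angle /= 2
```

Angles whose digit root is 9 (that is, multiples of 9 such as 360) keep that digit root under halving, whereas an angle like 120, with digit root 3, does not, which is what the puzzle asks you to notice.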
https://convert.ehehdada.com/rankinetocelsius
# Rankine to Celsius

Calculates the Celsius or centigrade temperature from the given Rankine scale value.

## Rankine to Celsius

The Rankine temperature scale was proposed by William John Macquorn Rankine in 1859, analogously to the Kelvin temperature scale. Absolute zero on the Rankine scale coincides with 0 K, and a Rankine degree has the same size as a Fahrenheit degree. Rankine temperatures are written with °R after the value, and sometimes °Ra. The Celsius or centigrade temperature scale is based on the phase transitions of water, placing 0 at the point where liquid water becomes ice, and 100 at the point where liquid water becomes gas, both at 1 atm of pressure. Its unit is written with ℃ after the value. Celsius values are calculated from Rankine values using the formula $$(Rankine - 491.67) × {5 \over 9}$$
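The formula above translates directly into code; a minimal sketch:

```python
def rankine_to_celsius(rankine):
    """Convert a temperature from degrees Rankine to degrees Celsius."""
    return (rankine - 491.67) * 5.0 / 9.0

# 491.67 °R is the freezing point of water (0 °C);
# 671.67 °R is the boiling point at 1 atm (100 °C).
freezing = rankine_to_celsius(491.67)
boiling = rankine_to_celsius(671.67)
```

Note that 0 °R maps to −273.15 °C, confirming that the two scales share the same absolute zero.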
https://quasirandomideas.wordpress.com/tag/l_2-space/
# Tag Archives: L_2 space

## Math2111: Chapter 1: Fourier series. Section 2: Inner product and norm

In this blog entry you can find lecture notes for Math2111, several variable calculus. See also the table of contents for this course.

We repeat two fundamental concepts which you should have seen in linear algebra already.

Inner product and norm in $\small \mathbb{R}^n$

Let $\boldsymbol{u}, \boldsymbol{v} \in \mathbb{R}^n$ be vectors with $\displaystyle \boldsymbol{u} = (u_1, \ldots, u_n)^\top, \quad \boldsymbol{v} = (v_1,\ldots, v_n)^\top$ where $(u_1,\ldots, u_n)^\top$ stands for the transpose of the vector $(u_1,\ldots, u_n)$. Then the dot product of these vectors is defined by $\displaystyle \boldsymbol{u} \cdot \boldsymbol{v} = u_1 v_1 + \cdots + u_n v_n.$
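As a quick numerical illustration of the dot product and the norm it induces (using NumPy, which is not part of the course notes):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, 1.0])

dot = u @ v                 # inner product: u_1 v_1 + ... + u_n v_n
norm_u = np.sqrt(u @ u)     # Euclidean norm induced by the inner product
```

Here `u @ v` gives 1·2 + 2·0 + 2·1 = 4, and the norm of u is √(1 + 4 + 4) = 3.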
https://www.physicsforums.com/threads/why-e-x-2-autocorrelation-of-x-evaluated-in-0.551205/
# Why E[x^2] = autocorrelation of x evaluated at 0?

1. Nov 16, 2011

### mnb96

Hi, I was studying the derivation of the solution of the Wiener filter from the Wikipedia article (http://en.wikipedia.org/wiki/Wiener_filter#Wiener_filter_problem_setup). There is a step I don't quite understand. First, we define the square error between the estimated signal $\hat{s}(t)$ and the original true signal $s(t)$: $$e^2(t) = s^2(t) - 2s(t)\hat{s}(t) + \hat{s}^2(t)$$ then the authors calculate the mean value of $e^2$, that is $E[e^2]$. At this point I would note that: $$E[e^2] = E[s^2] - 2E[s\hat{s}] + E[\hat{s}^2]$$ Unfortunately we don't know the probability density function of the original signal $s$, so we cannot compute $E[s^2]$. However, the authors of the article seem to suggest that: $$E[s^2]=R_s(0)$$ where $R_s(0)$ is the autocorrelation function of $s$ evaluated at 0 (though I might have misunderstood this). Could anyone elaborate on this point? I thought that $E[s^2]=\int f_{s}(x) x^2 dx$, while $R_s(0)=\int s(x)s(x)dx$. I don't see either how they solve this problem without any knowledge of the p.d.f. of $s$. Thanks.

Last edited by a moderator: Apr 26, 2017

2. Nov 17, 2011

### mathman

$E[s(u)s(v)] = R(|u-v|)$ by definition (for a wide-sense stationary process). So if $u=v$ we have $E[s^2(u)] = R(0)$.
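mathman's point can be checked numerically on a sample path: for a discrete signal, the sample estimate of E[s²] and the sample autocorrelation at lag 0 are the same quantity. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal(100_000)        # stationary zero-mean signal

mean_square = np.mean(s * s)                         # estimate of E[s^2]
R0 = np.correlate(s, s, mode='valid')[0] / len(s)    # autocorrelation at lag 0
```

Both expressions reduce to (1/n)·Σ s_i², so they agree exactly, which is the lag-0 special case of E[s(u)s(v)] = R(|u−v|).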
https://tex.stackexchange.com/questions/213432/how-not-to-extract-text-when-using-includesvg
# How not to extract text when using \includesvg? When using the `includesvg` macro that is provided by the `svg` package to include an `svg` image, all "text components" of the image (see the manual) are separated into a `pdf_tex` file (which is basically a `tex` file), while the rest of the image is converted into a `pdf` file. Then the `includesvg` macro somehow merges the `pdf` file and the `pdf_tex` file at the place where the `svg` is included in the document, but the result of this isn't always that great, since when the text components are extracted from the `svg` files, the different font sizes of the different text components are lost and they will all get the same font size in the document they are included in. So my question is: Is there any way when using the `includesvg` macro not to extract the text components into a `pdf_tex` file, but put it in the `pdf` file that is generated, in order to preserve the original appearance of the text? I do realize that it is possible to manually open the svg file in Inkscape and save it as a pdf without using the PDF+LaTeX option and then include the pdf using the `graphicx` package, but I was looking for a way in which includesvg could do this work for you. • What do you mean by "text"? Does the svg have a `<text>` element? Or a `<foreignObject>` with a text element? Or do you perhaps mean the svg file itself? svg files are made of text... – morbusg Nov 22 '14 at 16:38 • The `svg` package manual speaks about a text component, this is what I mean. I will update my question. – StrawberryFieldsForever Nov 22 '14 at 18:13 • I found: "Only PDFLaTeX supports importing a single page of a graphics file, so only PDF backend gets interleaved text/graphics". So maybe try with EPS? – morbusg Nov 22 '14 at 19:37 • Why the package svg extracts and the re-embeds the text really baffles me. This "feature" means that the text almost never works right. I'd love to know a solution to this. 
– Heisenberg Sep 29 '15 at 19:45

Use the `inkscapelatex=false` option. Behind the scenes, `\includesvg` uses Inkscape's `--export-latex` option, which causes Inkscape to generate a PDF without texts, and a LaTeX snippet containing all texts, to be included in the document. This allows you to put formulae in the SVG and have them rendered by LaTeX on top of the graphics. `inkscapelatex=false` disables this, causing the texts (and fonts, if I remember correctly) to be included in the PDF export.
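A minimal usage sketch of the `inkscapelatex=false` option (the file name `figure.svg` is a placeholder):

```latex
% Requires the svg package and an Inkscape installation;
% compile with shell escape enabled (e.g. pdflatex --shell-escape).
\documentclass{article}
\usepackage{svg}
\begin{document}
% Keep the text inside the exported PDF instead of re-typesetting it with LaTeX:
\includesvg[inkscapelatex=false, width=0.8\linewidth]{figure}
\end{document}
```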
http://www.physicsforums.com/showthread.php?t=656227
# Proving Rotational K.E. Formula?

by greswd
Tags: formula, proving, rotational

P: 147 The total kinetic energy (as viewed from one inertial frame) of a free, rigid body is the sum of all the infinitesimal kinetic energies of the components that comprise the body. How do we prove that for a rotating body $$E_k=\frac{1}{2}\left(M_{T} v_{c}^{2} + I_{c} \omega^{2}\right)$$

P: 147 where $M_T$ stands for the total mass of all the infinitesimal components combined.

Mentor P: 9,636 Integrate ##\int \frac{1}{2}v^2 \rho \, dV## (in other words, sum the kinetic energy $\frac{1}{2} m v^2$ over all infinitesimal masses m), split v into components from translation and rotation, and you will get the correct result.
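The mentor's recipe can be sanity-checked numerically by modelling the rigid body as a cloud of point masses (a discrete stand-in for the integral; the masses, positions and velocities below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
m = rng.uniform(0.1, 1.0, size=50)        # point masses
r = rng.standard_normal((50, 2))          # positions in the body's plane
r -= np.average(r, axis=0, weights=m)     # put the centre of mass at the origin

v_c = np.array([3.0, -1.0])               # translation of the centre of mass
omega = 2.5                               # angular speed about the centre of mass

# Velocity of each point: v_c + omega x r (2-D cross product).
v = v_c + omega * np.stack([-r[:, 1], r[:, 0]], axis=1)

KE_direct = 0.5 * np.sum(m * np.sum(v**2, axis=1))   # sum of (1/2) m v^2
M_T = m.sum()
I_c = np.sum(m * np.sum(r**2, axis=1))               # moment of inertia about COM
KE_formula = 0.5 * (M_T * v_c @ v_c + I_c * omega**2)
```

The two values agree because the cross term Σ m v_c·(ω×r) vanishes when positions are measured from the centre of mass, which is exactly why the split into translational and rotational parts works.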
https://deepai.org/publication/sparse-image-reconstruction-on-the-sphere-a-general-approach-with-uncertainty-quantification
# Sparse image reconstruction on the sphere: a general approach with uncertainty quantification

Inverse problems defined naturally on the sphere are becoming increasingly of interest. In this article we provide a general framework for evaluation of inverse problems on the sphere, with a strong emphasis on flexibility and scalability. We consider flexibility with respect to the prior selection (regularization), the problem definition - specifically the problem formulation (constrained/unconstrained) and problem setting (analysis/synthesis) - and optimization adopted to solve the problem. We discuss and quantify the trade-offs between problem formulation and setting. Crucially, we consider the Bayesian interpretation of the unconstrained problem which, combined with recent developments in probability density theory, permits rapid, statistically principled uncertainty quantification (UQ) in the spherical setting. Linearity is exploited to significantly increase the computational efficiency of such UQ techniques, which in some cases are shown to permit analytic solutions. We showcase this reconstruction framework and UQ techniques on a variety of spherical inverse problems. The code discussed throughout is provided under a GNU general public license, in both C++ and Python.
https://istopdeath.com/find-the-integral-2e2x/
# Find the Integral 2e^(2x)

Evaluate $\int 2e^{2x}\,dx$.

Since $2$ is constant with respect to $x$, move it out of the integral: $2\int e^{2x}\,dx$.

Let $u = 2x$. Then $du = 2\,dx$, so $\frac{1}{2}\,du = dx$. Rewrite the problem using $u$ and $du$: $2\int e^{u}\cdot\frac{1}{2}\,du$.

Since $\frac{1}{2}$ is constant with respect to $u$, move it out of the integral and cancel the common factor of $2$, leaving $\int e^{u}\,du$.

The integral of $e^{u}$ with respect to $u$ is $e^{u} + C$. Replace all occurrences of $u$ with $2x$:

$\int 2e^{2x}\,dx = e^{2x} + C$
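The result above can be checked numerically: a quick sketch (standard library only) differentiates the candidate antiderivative $e^{2x}$ by central differences and compares it against the integrand $2e^{2x}$.

```python
import math

def F(x):
    # candidate antiderivative from the steps above
    return math.exp(2 * x)

def f(x):
    # the integrand
    return 2 * math.exp(2 * x)

# check F'(x) ~ f(x) via central differences at a few points
h = 1e-6
for x in [-1.0, 0.0, 0.5, 2.0]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - f(x)) < 1e-4 * max(1.0, abs(f(x)))
```

The constant of integration drops out of the derivative, so any $C$ passes the same check.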
https://www.physicsforums.com/threads/radio-microscope.768890/
1. Sep 3, 2014

### vinven7

Suppose there is a set of twenty tiny radio sources that are distributed randomly in an area of 1 mm². What is the best way to locate each of these sources - as in, identify them and their locations? We can suppose that all of them have the same frequency of 1 MHz. Thus, if a radio telescope was inverted - with its size reduced considerably and pointing at this area instead of the sky - how would we realize it?

2. Sep 3, 2014

### davenn

At that frequency (1 MHz) you wouldn't have the resolution to separate even 2 sources within that area, let alone 10 or 20 sources. I would venture further - and someone is sure to correct me if I am wrong - that on a surface of several square metres you would not be able to accurately resolve in which 1 mm² box within that surface a 1 MHz source was located. Resolving ability and wavelength are closely related, which is why we see details of deep-space objects much better with optical telescopes than we do with radio telescopes.

Dave

3. Sep 3, 2014

### mishima

Not sure if there is a practical way to accomplish that. Your wavelength of interest is around 300 meters according to $c = \lambda\nu$ (the speed of light is wavelength times frequency).

If the receiving antenna was 1 cm from your sample, a resolution of well under 0.1 radians would be required, from $s = r\theta$, with $s$ the width of your sample and $r$ the distance from antenna to sample. ($s$ would actually be much smaller in your example, probably on the order of microns.) This would make the diameter of the antenna 3 kilometers at the very least, just to resolve the 1 mm², since $\theta \approx \lambda/\text{diameter}$.

4. Sep 3, 2014

### Staff: Mentor

What is the application? As said, you will not be able to be any distance away from those low-frequency, closely-spaced sources. You may be able to do an x-y physical scan at close distance to find them, though.

5. Sep 4, 2014

### f95toli

It might be possible if you were allowed to use a near-field probe.
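mishima's estimate can be reproduced in a few lines. This is a sketch of the two formulas in the thread ($c = \lambda\nu$ and $\theta \approx \lambda/D$); the 1 mm feature size and 1 cm stand-off distance are the thread's illustrative numbers, not measured quantities.

```python
import math

c = 3.0e8            # speed of light, m/s
nu = 1.0e6           # source frequency, Hz
lam = c / nu         # wavelength from c = lambda * nu, ~300 m

s = 1.0e-3           # feature size to resolve: the 1 mm sample, m
r = 1.0e-2           # antenna-to-sample distance, m
theta = s / r        # required angular resolution from s = r * theta, 0.1 rad

D = lam / theta      # single-aperture limit: theta ~ lambda / D
print(f"wavelength = {lam:.0f} m, required aperture D ~ {D / 1000:.0f} km")
```

This reproduces the "3 kilometers at the very least" figure, and makes clear why the later posts turn to near-field probing and phase measurement instead.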
Near-field effects can be used to "beat" the usual resolution limit, which is why microwave microscopes can reach spatial resolutions of about a micrometer. However, near-field in this case means that you would have to put the probe very close; you would need a proper scanning-probe setup.

6. Sep 4, 2014

### sophiecentaur

It comes down to the measurement of relative phases of signals. mm wavelengths and micron resolution ($10^3$:1) is one thing, but 300 m wavelength and 0.3 mm spacing ($10^6$:1) is significantly harder to achieve. But, in the end, it would come down to the signal-to-noise ratio that you would be working with - so I couldn't say there's no chance.

@vinven7 Btw, is this just an idle bit of exploratory thought (I have no problem with that) or is there some application you had in mind?

7. Sep 4, 2014

### Baluncore

This problem is usually encountered when trying to reverse engineer programmed semiconductors by reading the protected memory contents. One solution is to use a scanning electron microscope with a probe. If you could lower the frequency from 1 MHz to 1 kHz then it would make SEM easier.

There is a rule of thumb: if you want to image an object, you must use radiation that has a wavelength shorter than the dimension of the detail you want in the image. That precludes using 1 MHz radiation for images smaller than 300 metres.

8. Sep 5, 2014

### sophiecentaur

With the right processing, you can image a lot finer than that. You need a long time (many cycles) in order to resolve tiny phase changes - and, as I wrote earlier, a good SNR.

9. Sep 5, 2014

### Baluncore

That is the problem. If they were all pure sine waves with different frequencies they could be separated using interferometry without too much processing. Before the advent of optical interferometers, microwave VLBI had an unfair baseline advantage and gave better resolution than optical systems. I don't know which is now the best.
http://mathhelpforum.com/discrete-math/23429-recurrence-relations-generating-functions.html
# Math Help - Recurrence Relations: Generating Functions

1. ## Recurrence Relations: Generating Functions

I was looking to figure out how to solve the following. The recurrence relation: $a_{n}=4a_{n-1}-4a_{n-2}+4^{n}$, given $n\ge 2 , a_{0}=2 , a_{1} = 8$. How would you go about solving this in terms of a generating function? Thanks! Kev

2. Hello, Kev! This one requires another trick . . .

The recurrence relation: . $a_{n}\:=\:4a_{n-1}-4a_{n-2}+4^{n}$ .[1] Given: . $n\ge 2,\;a_0\,=\,2,\;a_1\,=\,8$

Consider the $(n+1)^{th}$ case: . $a_{n+1} \;=\;4a_n - 4 a_{n-1} + 4^{n+1}$ .[2]

Multiply [1] by 4: . $4a_n \;=\;16a_{n-1} - 16a_{n-2} + 4^{n+1}$ .[3]

Subtract [3] from [2]: . $a_{n+1}-4a_n \;=\;4a_n - 20a_{n-1} + 16a_{n-2}$

. . . . . $a_{n+1} - 8a_n + 20a_{n-1} - 16a_{n-2} \;=\;0$

Let $X^k = a_k\!:\;\;X^{n+1} - 8X^n + 20X^{n-1} - 16X^{n-2} \;=\;0$

Divide by $X^{n-2}\!:\;\;X^3 - 8X^2 + 20X - 16 \;=\;0$, which factors: $(X-2)^2(X-4) \;=\;0$, and has roots: $2,\,2,\,4$ . . . Go for it!

3. I must admit that I'm curious to see the generating function method myself. -Dan

4. I haven't thought about recurrences and GF since college. This is an interesting topic. Here is an example of one I solved years ago. I still had the paper. I keep running into problems with the one posted. Something small I'm overlooking. Most always is. Maybe this will give you something to go by.
Anyway, here's an example of a recurrence using GF: $a_{n}=5a_{n-1}-6a_{n-2}, \;\ a_{0}=1, \;\ a_{1}=0$

$A(x)=1-6x^{2}-30x^{3}-114x^{4}-\cdots$

$A(x)=\sum_{n=0}^{\infty}a_{n}x^{n}$

$=1+\sum_{n=2}^{\infty}a_{n}x^{n}$

$=1+\sum_{n=2}^{\infty}(5a_{n-1}-6a_{n-2})x^{n}$

$=1+5\sum_{n=2}^{\infty}a_{n-1}x^{n}-6\sum_{n=2}^{\infty}a_{n-2}x^{n}$

$=1+5x\sum_{n=2}^{\infty}a_{n-1}x^{n-1}-6x^{2}\sum_{n=2}^{\infty}a_{n-2}x^{n-2}$

$=1+5x\sum_{k=1}^{\infty}a_{k}x^{k}-6x^{2}\sum_{k=0}^{\infty}a_{k}x^{k}$

$=1+5x\sum_{k=0}^{\infty}a_{k}x^{k}-5x(1)-6x^{2}\sum_{k=0}^{\infty}a_{k}x^{k}$

$A(x)=1+5xA(x)-5x-6x^{2}A(x)$

Let y=A(x): $y=1+5xy-5x-6x^{2}y$

$y=\frac{1-5x}{6x^{2}-5x+1}=\frac{-2}{1-3x}+\frac{3}{1-2x}$

Notice the familiar geometric series in the PFD: $\sum_{n=0}^{\infty}x^{n}=\frac{1}{1-x}$

This gives us $\boxed{a_{n}=3\cdot{2^{n}}-2\cdot{3^{n}}}$

Hope this helps. I can easily get the result from Maple, but that's no fun.

5. Hello, Kev ... and anyone else interested,

Thought I'd show how I crank out the recursion function. In the above problem, we had roots: . $X \:=\:2,\,2,\,4$, where $X$ is the base of the exponential terms. We now construct the function. It contains $2^n$ and $4^n$, and because the 2 is repeated, we have the term $n\!\cdot2^n$.

The function has the form: . $a(n) \;=\;A\!\cdot\!2^n + B\!\cdot\!n\!\cdot\!2^n + C\!\cdot\!4^n$, and we must determine $A,\,B,\,C.$

We use the first two terms of the sequence, $a(0) =2,\;a(1) = 8$, and we can calculate the third term: $a(2) = 40$.

So we have: $\begin{array}{ccccccccc}a(0) = 2: & A\!\cdot\!2^0 + B\!\cdot\!0\!\cdot2^0 + C\!\cdot\!4^0 & = & 2 & \Rightarrow & A + C & = & 2 & [1] \\ a(1) = 8: & A\!\cdot\!2^1 + B\!\cdot\!1\!\cdot\!2^1 + C\!\cdot\!4^1 & = & 8 & \Rightarrow & 2A + 2B + 4C & = & 8 & [2] \\ a(2) = 40: & A\!\cdot\!2^2 + B\!\cdot\!2\!\cdot\!2^2 + C\!\cdot\!4^2 & = & 40 & \Rightarrow & 4A + 8B + 16C & = & 40 & [3]\end{array}$

Divide [3] by -4: . $-A - 2B - 4C \:=\,\text{-}10$

Add [2]: .
$2A + 2B + 4C \:=\;\;8$

And we get: . $\boxed{A \:=\:\text{-}2}$

Substitute into [1]: . $\text{-}2 + C \:=\:2\quad\Rightarrow\quad \boxed{C \:=\:4}$

Substitute into [2]: . $2(\text{-}2) + 2B + 4(4) \:=\:8\quad\Rightarrow\quad \boxed{B \:=\:\text{-}2}$

The function is: . $a(n) \;=\;-2\!\cdot\!2^n - 2\!\cdot\!n\!\cdot\!2^n + 4\!\cdot\!4^n \;=\;-2^{n+1} - n\!\cdot\!2^{n+1} + 4^{n+1}$

. . $= \;-(1 + n)2^{n+1} + (2^2)^{n+1} \;=\;-(n+1)2^{n+1} + 2^{2n+2} \;=\;2^{2n+2} - (n+1)2^{n+1}$

Factor: . $\boxed{\:a(n) \;=\;2^{n+1}\left[2^{n+1} - (n+1)\right]\:}$

6. Originally Posted by galactus

Anyway, here's an example of a recurrence using GF: $a_{n}=5a_{n-1}-6a_{n-2}, \;\ a_{0}=1, \;\ a_{1}=0$ $A(x)=\sum_{n=0}^{\infty}a_{n}x^{n}$ $A(x)=1+5xA(x)-5x-6x^{2}A(x)$ Let y=A(x): $y=1+5xy-5x-6x^{2}y$ $y=\frac{1-5x}{6x^{2}-5x+1}=\frac{-2}{1-3x}+\frac{3}{1-2x}$ Notice the familiar geometric series in the PFD: $\sum_{n=0}^{\infty}x^{n}=\frac{1}{1-x}$ This gives us $\boxed{a_{n}=3\cdot{2^{n}}-2\cdot{3^{n}}}$

If you would be so kind... I'm trying to run the original question through this method and I'm running into some trouble. I know I can avoid this by using Soroban's trick of putting the recursion into a form that removes the $4^n$ term, but can this be done from the original problem itself? I'm probably being confusing. I'll just post this:

Define a series $A(x) = \sum_{n = 0}^{\infty}a_nx^n$

Then $A(x) = 2 + 8x + \sum_{n = 2}^{\infty}(4a_{n - 1} - 4a_{n - 2} + 4^n)x^n$

$A(x) = 2 + 8x + 4 \sum_{n = 2}^{\infty}a_{n - 1}x^n - 4 \sum_{n = 2}^{\infty} a_{n - 2}x^n + \sum_{n = 2}^{\infty}4^nx^n$

The last term is giving me problems. All I can think of to do with it is to cast it in the form $\sum_{n = 2}^{\infty}4^nx^n = \sum_{n = 2}^{\infty}(4x)^n$

But when I complete the steps of the method to find an equation for A(x) I get

$A(x) = 2 + 8x + 4xA(x) - 8x - 4x^2A(x) - 2 - 32x + A(4x)$

Assuming I have everything else right, I'm still stuck with that A(4x) term. Any thoughts? -Dan

7.
That is the same exact snag I run into, TQ. Solving for y and using PFD results in $\frac{-7}{2x+1}-\frac{13}{2x-3}$. I don't believe that gets us anywhere, though? Alas, I am at work and don't have much time. I will try to take a glance this evening.

Hey. This is #1000. I am now a contributor, as Janvdl reminded me.

8. Originally Posted by galactus

That is the same exact snag I run into, TQ.

That can't work in general, I don't think. We can't assume that $A(4x) = 4A(x)$ in general. (What's that called anyway? A homogeneous function, or something like that?)

Originally Posted by galactus

Hey. This is #1000. I am now a contributor, as Janvdl reminded me.

Congrats! -Dan

9. With that A(4x), I was grabbing at straws. There's something to be done with that 4^n, I just don't know for sure what it is. I have never tackled any like that. If you figure it out, please let me know. I will surely reciprocate if I get lucky. Perhaps a search regarding generating functions and recurrence relations may prove fruitful.

10. Well: $A(4x)=\sum_{n=0}^{\infty}{a_n\cdot{4^n}\cdot{x^n}}$

But we have: $\sum_{n=2}^{\infty}{4^n\cdot{x^n}}$

Which are not necessarily the same.

Our equation is: $A(x)\cdot{(1-2x)^2}=2+\sum_{n=2}^{\infty}{4^n\cdot{x^n}}$

We have to remember now that $\frac{1}{(1-2x)^2}=\sum_{n=1}^{\infty}{n\cdot{(2x)^{n-1}}}$

Finally: $A(x)=2\sum_{n=0}^{\infty}{(n+1)2^n\cdot{x^{n}}}+\left(\sum_{n=0}^{\infty}{(n+1)2^n\cdot{x^{n}}}\right)\cdot{\left(\sum_{n=2}^{\infty}{4^n\cdot{x^n}}\right)}$

Then doing that product we are done.
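Both closed forms derived in this thread can be checked against their recurrences with a short script. This is a sketch; the helper names `a_closed` and `b_closed` are ours, not from the thread.

```python
def a_closed(n):
    # Soroban's result: a(n) = 2^(n+1) * (2^(n+1) - (n+1))
    # for a_n = 4*a_{n-1} - 4*a_{n-2} + 4^n, a_0 = 2, a_1 = 8
    return 2 ** (n + 1) * (2 ** (n + 1) - (n + 1))

a = [2, 8]
for n in range(2, 20):
    a.append(4 * a[n - 1] - 4 * a[n - 2] + 4 ** n)
assert all(a_closed(n) == a[n] for n in range(20))

def b_closed(n):
    # galactus's warm-up example: a_n = 3*2^n - 2*3^n
    # for a_n = 5*a_{n-1} - 6*a_{n-2}, a_0 = 1, a_1 = 0
    return 3 * 2 ** n - 2 * 3 ** n

b = [1, 0]
for n in range(2, 20):
    b.append(5 * b[n - 1] - 6 * b[n - 2])
assert all(b_closed(n) == b[n] for n in range(20))

print("both closed forms match the recurrences")
```

Integer arithmetic makes the comparison exact, so agreement over the first 20 terms is a strong sanity check on the algebra.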
https://mathoverflow.net/questions/405436/density-of-traces-of-solutions-to-an-elliptic-equation/405473
# Density of traces of solutions to an elliptic equation

Let $$D_1$$ be a domain with smooth boundary and assume that $$D_1$$ is a proper subset of $$D_2$$, which is itself a bounded domain in $$\mathbb R^n$$ with a smooth boundary. Assume also that $$D_2\setminus D_1$$ is connected. We write $$L^2(D_2\setminus D_1)$$ for the set of functions in the space $$\{f \in L^2(D_2)\,:\,\textrm{supp}(f)\subset D_2\setminus \overline{D_1}\},$$ and define the mapping $$S: L^2(D_2\setminus \overline{D_1})\to H^{\frac{3}{2}}(\partial D_1)$$ through $$Sf:= u|_{\partial D_1},$$ where $$u \in H^2(D_2)$$ is the unique solution to the equation $$\Delta u =f \quad \text{on } D_2,$$ subject to $$u|_{\partial D_2}=0$$. Is it true that the image of $$S$$ is dense in $$H^{\frac{1}{2}}(\partial D_1)$$?

The answer is yes: take any smooth function $$g_0$$ on $$\partial D_1$$ and solve the Dirichlet problem $$\begin{cases} \Delta g = 0 & \text{ on } D_1\\ g = g_0 & \text{ on } \partial D_1. \end{cases}$$ Now extend $$g$$ to a smooth function on $$\mathbb{R}^n$$. Multiply by a smooth cutoff function $$\eta$$ which is $$1$$ on $$D_1$$ and compactly supported on $$D_2$$. Then $$f = \Delta (\eta g)$$ is smooth and supported on $$D_2 \setminus \bar{D}_1$$, so in particular lies in the given $$L^2$$ space. This shows that $$g_0$$ is in the image of your operator $$S$$, and smooth functions are dense in $$H^s$$.

• But why is $${\rm supp}(f)\subset D_2\backslash\overline{D_1}$$? Oct 5 at 2:02

The answer is yes. Suppose $$g$$ is orthogonal to the image of $$S$$, and let $$v$$ be the solution of the Dirichlet problem $$\Delta v=g\delta(\partial D_1)$$ on $$D_2$$, where $$\delta(\partial D_1)$$ is a delta function localized on $$\partial D_1$$. We find $$\int_{\partial D_1} gu\,dS=\int_{D_2}v\Delta u\,dx.$$ Now suppose this is true for every $$u$$ for which $$\Delta u$$ has compact support in any subregion of $$D_2\backslash \overline{D_1}$$. Then $$v$$ must vanish on that subregion.
Since $$\Delta v=0$$ on $$D_2\backslash \overline{D_1}$$, it follows that $$v$$ also vanishes there. But $$v$$ is continuous across $$\partial D_1$$ (only the normal derivative has a jump), and $$\Delta v=0$$ in $$D_1$$, so $$v$$ must be zero everywhere by uniqueness of the Dirichlet problem for $$D_1$$. Hence $$g=0$$.
https://www.physicsforums.com/threads/how-much-powerful-is-an-accelerator.15534/
# How powerful is an accelerator?

1. ### juan avellaneda

hi all

It has been said in books that a particle accelerator can reproduce the initial conditions of the Universe. But we know that these gadgets have very high voltages and low currents, so the product V*I (the power) is not high enough to assert that. I think lightning can carry much more energy than this: although it doesn't carry as much voltage, it carries millions of amperes. So I think this is a misconception that should be reviewed.

2. ### ahrkron (Staff Emeritus)

Even a falling piece of chalk may have a higher kinetic energy than that achievable in Fermilab's Tevatron. The difference comes when you consider the energy per particle.

3. ### ZapperZ (Staff Emeritus)

... or there is a misconception in your understanding. Keep in mind that one doesn't spend billions of dollars to build something that has this kind of, let's face it, elementary misconception. You need to understand what is meant by "energy per nucleon", energy in the center-of-mass frame (especially when you have two incoming, colliding beams), etc. If you accept that the particles have energies of the order of GeV or TeV (you're welcome to visit Fermilab, RHIC, or CERN to verify this), then there are no "misconceptions" here.

Zz.

4. ### Cyclotron Boy

Units, anyone? V * I = watts. Watts = joules / second, or energy per unit time. The total beam power at Fermilab during collisions at the top of a stack (when beam current is maximum) is typically on the order of 500 kW. This is equivalent to one joule of energy being released every 0.000002 seconds, or 500 kJ in one second. There is a disconnect in the units of the question.
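ahrkron's point about total versus per-particle energy can be made concrete with a rough calculation. This is a sketch: the chalk mass and drop height are illustrative assumptions, and the 1 TeV figure stands in for a Tevatron-era beam energy.

```python
e = 1.602176634e-19         # joules per electron-volt
E_proton = 1.0e12 * e       # one 1 TeV proton: ~1.6e-7 J

m, g, h = 0.010, 9.81, 1.0  # a 10 g piece of chalk dropped from 1 m
E_chalk = m * g * h         # ~0.1 J of kinetic energy at the floor

# The chalk carries ~10^5 times more total energy than the proton,
# but that energy is spread over roughly 10^23 atoms, so the energy
# per particle is smaller by ~18 orders of magnitude.
print(f"chalk / proton total energy ratio: {E_chalk / E_proton:.2e}")
```

This is why "energy per particle" (or per nucleon), not total beam power, is the figure of merit for recreating early-universe conditions.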
https://www.lessonplanet.com/teachers/word-search-s-words
# Word Search: S Words For this word search worksheet, students use a word list and find 16 hidden words, all beginning with either the upper or lower case letter s.
http://mathoverflow.net/questions/134417/two-variable-polynomials-irreducible-as-formal-power-series
# Two-variable polynomials, irreducible as formal power series

Let $k$ be a field and $f\in k[a,b]$ an irreducible two-variable polynomial, $B := k[a,b]/(f)$ and $C$ the integral closure of $B$ in its fraction field. I call $f$ good if it is irreducible in the ring $k[[a,b]]$ of formal power series; equivalently (Nagata, Local Rings, p. 122, Ex. 1), if $C$ has exactly one prime ideal lying above $\mathfrak m = (a,b)B$ (hence, $f$ being good just means that the curve $B$ is analytically irreducible at the origin).

I'm looking for examples of "good" polynomials $f$ such that the residue field $L$ of $C$ is a proper extension of $k$. I can show (at least if $k$ is infinite) that $[L:k] \cdot r = \mu(f)$, where $\mu(f)$ is the degree of the lowest-degree summand of $f$, and $r$ is the ramification index of $\mathfrak m$ in $C$. Hence, it is clear that $\mu(f)$ must be large enough if one wants interesting examples. The only "generic" class of examples I could come up with is the one where $f$ is homogeneous: then $f$ being irreducible implies $f$ is good, and $[L:k] = \deg f$.

## 1 Answer

This is the answer_bot. Love your question. I am sure that in the meantime you have moved on and are studying fully faithful exact functors of derived categories of coherent sheaves, but I am still going to answer this one. Yeah!

We can construct examples by starting with a normal affine algebraic curve C over k and a closed point c of C with any given residue field L. If L/k is finite separable, this is always possible even with C being geometrically irreducible and smooth over k. I just made some examples where k has characteristic p > 0 and L is k[x, y]/(x^p - a, y^p - b), which I think generalizes. So there are lots of L that occur.

Anyway, we next choose a general projection C ---> A^2_k (with coordinates a, b) which maps our chosen point c to (0, 0). The image of C is V(f) for some irreducible f. Since by construction (this is where the "general" above comes in) there is only one point of C above (0, 0), you get an example of what you want. You can do this explicitly, because you can make explicit curves C and then explicitly project and compute the equation f by taking a resultant. Good luck!
https://www.physicsforums.com/threads/simple-integration-problem-trig-sub.213100/
# Simple? Integration Problem (Trig Sub?)

1. Feb 4, 2008

### PitchBlack

I can't get it! I'm pretty sure it's trig substitution: $$\int \frac{x^{2}}{\sqrt{1-x^{2}}}\,dx$$ It's a practice problem; if someone could show me the light (or steps) that would be wonderful.

2. Feb 4, 2008

### sutupidmath

Well, yeah, a trig substitution would work. Try to let x=sin(t), so you will get dx=cos(t)dt. After you substitute it back you will end up with something like this: integ of (sin(t))^2 dt. Is this a homework problem, by the way? Can you go on from here, anyway?

3. Feb 4, 2008

### PitchBlack

No, it's not a homework problem... it's a conceptual problem... but I don't get it, and I'm not that great with calculus. To tell you the truth I'm not a math major, I just want to get it! So is this what you mean: $$\int \frac{(\sin x)^{2}}{\sqrt{1-(\sin x)^{2}}}$$ so, using u-substitution (or whatever letter you use)... u=sin x, du=cos x dx, and since there is no cos in the original then 1/cos(du): $$\frac{1}{\cos}\int \frac{u^{2}}{\sqrt{1-u^{2}}}\,du$$ and go from there? Did I do it right?

4. Feb 4, 2008

### sutupidmath

Well, you did not get it right, to be honest. Look: $$\int\frac{x^{2}}{\sqrt{1-x^{2}}}dx$$ Now let sin(t)=x; from here, after differentiating, we get cos(t)dt=dx. Now let us substitute this back into the integral, so the integral will take this form: $$\int\frac{(\sin(t))^{2}}{\sqrt{1-(\sin(t))^{2}}}\cos(t)dt$$ Now remember that $$1-(\sin(t))^{2} = (\cos(t))^{2}$$, so after we substitute, the integral becomes: $$\int\frac{(\sin(t))^{2}}{\sqrt{(\cos(t))^{2}}}\cos(t)dt = \int\frac{(\sin(t))^{2}}{\cos(t)}\cos(t)dt = \int (\sin(t))^{2}dt$$ Now do you know how to evaluate this one?

Last edited: Feb 4, 2008
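Carrying the substitution through to the end gives $\int (\sin t)^2\,dt = \tfrac{1}{2}(t - \sin t \cos t)$, which back-substitutes to $\tfrac{1}{2}(\arcsin x - x\sqrt{1-x^2}) + C$. That closed form can be checked against numerical quadrature; a sketch using only the standard library:

```python
import math

def F(x):
    # closed form from the substitution x = sin(t):
    # integral of x^2 / sqrt(1 - x^2) = (arcsin(x) - x*sqrt(1 - x^2)) / 2 + C
    return 0.5 * (math.asin(x) - x * math.sqrt(1.0 - x * x))

def f(x):
    return x * x / math.sqrt(1.0 - x * x)

# compare F(b) - F(a) with a midpoint-rule estimate of the integral
a, b, n = -0.5, 0.8, 20000
h = (b - a) / n
quad = sum(f(a + (i + 0.5) * h) for i in range(n)) * h
assert abs(quad - (F(b) - F(a))) < 1e-6
```

The interval stays inside (-1, 1), where the integrand is defined; near the endpoints the square root in the denominator blows up and a plain midpoint rule would need care.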
http://ompf2.com/viewtopic.php?p=5917
## My little path tracer

Show-off, reference material & tools.

cignox1

### Re: My little path tracer

I think the jade Buddha is really amazing!

dawelter

### Re: My little path tracer

Hi. Thank you both. Glad you like it.

Btw., I forgot to say, I rendered the media with "The Beam Radiance Estimate" by Jarosz (2008). I use Embree to do the beam-point query. This is possible since fairly recent support for ray-aligned-disc intersections! So, if you happen to read this, Embree developer, I say thank you very much. This is a very cool feature. *thumbs*

Currently, I'm trying to implement Walter et al.'s "Microfacet Models for Refraction through Rough Surfaces" (2007). It proved to be much more difficult than I thought. Mostly, getting the expression for the density p(wo|wi) right, where wo and wi are given. I need it for BPT MIS. My renderer knows only a monolithic BSDF. It does not allocate component BxDFs like PBRT. Therefore p must include both reflection and transmission. Oh well, I think I finally got it ...

XMAMan

### Re: My little path tracer

Hey dawelter, can you tell me which paper the Buddha (subsurface scattering) is based on? I also want to get into this topic. If you have questions about Walter's microfacet model, then ask me ^^ I have implemented it in my raytracer. For me the big problem was numerical issues in the GGX normal distribution function when you use very small roughness factors and a theta angle that goes nearly to zero (micronormal == macronormal).

dawelter

### Re: My little path tracer

We know since VCM that photon mapping can be seen as a path sampling method. Therefore, I want to first refer to Raab et al.
(2008) "Unbiased Global Illumination with Participating Media" for the concise path integral formulation of volume rendering.

I use the stochastic progressive variant of photon mapping with the global radius decay from Knaus & Zwicker (2011), "Progressive Photon Mapping: A Probabilistic Approach".

To generate photon paths I use Woodcock tracking, essentially. In "Spectral and Decomposition Tracking for Rendering Heterogeneous Volumes", Kutz et al. (2017) developed many extensions of this. I use the "spectral tracking" variant. I put a volume photon on every sampled interaction point.

When tracing eye paths, I obtain volume interaction points by the same tracking methods as used for photon mapping. At these points I look for nearby photons and add their contribution. That is, in the basic variant, with point-point 3D estimators. In "The Beam Radiance Estimate for Volumetric Photon Mapping", Jarosz et al. (2008) developed a method to gather photons along a beam. I implemented this also. The three pics are rendered with it. It is good for thin media. Actually, I like this paper a lot. Not only does it present the beam thing, in Sec. 3.3 it also has a nice derivation of the photon weights.

In denser media I don't want to look for photons all the way to the next surface intersection. So I use a piecewise-constant stochastic estimate of the transmittance along the query beam. This essentially allows cutting off the query beam after a few mean free path lengths. Inspiration for this comes from Jarosz et al. (2011), "Progressive Photon Beams", Sec. 5.2.1 and Krivanek (2014), "Unifying Points, Beams, and Paths in Volumetric Light Transport Simulation", Sec. 4.2, "long" and "short" beams. But to be honest, this is very much brute force. If you want to render SSS in such dense media as I took for the Buddha, you might be better off using a fast approximation!

Regarding Walter et al.'s rough transmittance model: I think I finally got it right.
Here is a recreation of Figure 1:

glossy_transmissive_globe.jpg

I "only" implemented the Beckmann NDF with the V-cavity masking & shadowing function. Looks fine, and I don't have to implement VNDF sampling to keep the weights low. I also noticed numerical issues with low alpha. But IIRC I can get it down to 1e-3 with no issue. And at that point the material looks pretty much perfectly specular. I do shading calculations in double precision, though.

Btw., since you mention GGX: Heitz recently released a paper on how to sample the VNDF for GGX more easily.
http://jcgt.org/published/0007/04/01/paper.pdf
https://hal.archives-ouvertes.fr/hal-01509746/document
I thought about implementing it ... For you it is probably worthwhile if you don't have it already.

XMAMan
Posts: 25
Joined: Tue Dec 01, 2015 7:52 am
Location: Germany, Dresden

### Re: My little path tracer

Thanks a lot for your detailed answer. This will help me get a good start. At the moment I use the sampling technique from Eric Heitz described in this paper: https://hal.inria.fr/hal-00996995v1/document
The paper from your link is then the next step. But at the moment I'm more interested in subsurface scattering.

papaboo
Posts: 46
Joined: Fri Jun 21, 2013 10:02 am
Contact:

### Re: My little path tracer

If you already have the VNDF sampling from the 2014 paper then implementing the new one is trivial. It's the same set of samples with the same PDF, so you just have to copy-paste the reference sample method and then you'll have faster GGX sampling.

dawelter
Posts: 42
Joined: Sun Oct 29, 2017 3:15 pm
Location: Germany

### Re: My little path tracer

@XMAMan you're welcome.

Meanwhile, I rendered variations of the Buddha. This time with the new glossy transmissive material. I also added a switch in the material to force a path tracing step instead of getting Li from photons, essentially treating the BSDF as if it were a delta function. No NEE yet.
I'm slightly concerned about the black rims, but I just attribute them to the lack of anything to reflect.

buddha_variations.jpg

So far so good. But my renders take awfully long. After reading through the Arnold and Manuka papers, I have come to the conclusion that I should focus on some basics. Sane light selection, QMC sampling, path guiding, splitting, and RR are things I want to have.

knightcrawler25
Posts: 1
Joined: Mon Mar 25, 2019 10:15 am

### Re: My little path tracer

dawelter wrote: Sun Mar 24, 2019 10:04 am
@XMAMan you're welcome. Meanwhile, I rendered variations of the Buddha. [...]

Very pretty indeed!

dawelter
Posts: 42
Joined: Sun Oct 29, 2017 3:15 pm
Location: Germany

### Re: My little path tracer

Thanks, knightcrawler!

Here is another 24 h rendering. I had this fun idea to make a flat-earth version of the globe figure.

glossytransmissiveflatearth.jpg

The dome is filled with a thin, scattering medium. This creates the glow effect around the sun. The image has other details which I like, like the subtle shadows cast on the surrounding walls. But it proved difficult to render, i.e. it's still noisy.
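For readers following along: the free-flight sampling dawelter mentions (Woodcock tracking) can be sketched roughly as below. The function name and the way the extinction coefficient is passed in are my own choices for illustration, not code from any of the renderers discussed.

```python
import math
import random

def woodcock_track(sigma_t, sigma_majorant, t_max, rng=random.random):
    """Sample a free-flight distance along a ray through a heterogeneous
    medium by delta (Woodcock) tracking. sigma_t(t) returns the extinction
    coefficient at distance t along the ray; sigma_majorant must bound it
    from above everywhere. Returns the sampled collision distance, or None
    if the ray leaves the medium before a real collision occurs."""
    t = 0.0
    while True:
        # Tentative free flight through a homogenized medium with the
        # majorant extinction coefficient.
        t -= math.log(1.0 - rng()) / sigma_majorant
        if t >= t_max:
            return None
        # Accept as a real collision with probability sigma_t / majorant;
        # otherwise treat it as a null collision and keep stepping.
        if rng() < sigma_t(t) / sigma_majorant:
            return t
```

Kutz et al.'s spectral and decomposition tracking replace this single accept/reject step with per-channel and per-component probabilities, but the overall control flow stays the same.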
http://www.adrian.idv.hk/2011-07-13-nz01-wdp/
Extended version of this paper:

@article{nelakuditi2004,
  title   = "On Selection of Candidate Paths for Proportional Routing",
  author  = "Srihari Nelakuditi and Zhi-Li Zhang and David H.C. Du",
  journal = "Computer Networks",
  volume  = "44",
  pages   = "79--102",
  year    = "2004"
}

Problem of shortest path routing: unbalanced traffic distribution. Some links become increasingly congested while other links are underloaded. Therefore, multipath routing is introduced. Minimizing the number of paths is important because of:

• Overhead in establishing, maintaining, and tearing down paths
• Complexity in distributing traffic to paths, which increases as the number of paths increases
• Limits imposed by, for example, MPLS label spaces

OSPF is a link state protocol with infrequent link state updates. We can piggyback QoS information on the updates.

# Minimizing overall blocking probability

This paper proposes the following problem setup:

• Source routing network
• Routing over paths set up a priori
• All flows have the same bandwidth demand (1 unit)
• Flows arrive as a Poisson process; holding times are exponential
• Performance metric is the overall blocking probability
• Objective: proportional QoS routing, i.e. distribute flows over paths so as to minimize the overall blocking probability experienced by flows

The global optimal proportioning problem is the following: Assume all nodes know the network topology and the offered load between every source-destination pair. Then we define $\hat c_\ell >0$ to be the capacity of (unidirectional) link $\ell$ and $\sigma = (s,d)$ to be a source-destination pair. Given the arrival rate $\lambda_\sigma$ and service rate $\mu_\sigma$ of this pair, its offered load is $\nu_\sigma = \lambda_\sigma/\mu_\sigma$. Let the set of all feasible paths for routing $\sigma$ be $\hat{R}_\sigma$.
The global optimal proportioning problem is therefore to find $\{\alpha_r^\ast, r\in\hat{R}_\sigma\}$ such that $\sum_{r\in\hat{R}_\sigma} \alpha_r^\ast = 1$ and the carried traffic $W=\sum_\sigma\sum_{r\in\hat{R}_\sigma}\alpha_r\nu_\sigma(1-b_r)$ is maximized, where $b_r$ is the blocking probability on path $r$, which can be derived from Erlang's formula using the link capacities $\hat{c}_\ell$ and the offered loads $\nu_\sigma$. Usually, we use only a subset of paths $R_\sigma \subset \hat R_\sigma$ such that for some small $\epsilon$, $R_\sigma = \{r: r\in\hat{R}_\sigma, \alpha_r^\ast>\epsilon\}$.

Solving the global optimal proportioning problem requires global knowledge of the offered load. Therefore, localized strategies exist to achieve near-optimal solutions. Two strategies are mentioned in the paper:

• equalizing blocking probabilities (ebp): make $b_{r_1} = b_{r_2} = \cdots = b_{r_k}$ over the $k$ paths of a source-destination pair
• equalizing blocking rates (ebr): make $\alpha_{r_1}b_{r_1} = \alpha_{r_2}b_{r_2} = \cdots = \alpha_{r_k}b_{r_k}$

In the paper, ebp is used, with an approximation. Instead of calculating the adjustments directly, the fractions $\alpha_{r_i}$ are adjusted adaptively at a frequency $\theta$. First the average blocking probability $\bar{b} = \sum_i\alpha_{r_i}b_{r_i}$ is found; then $\alpha_{r_i}$ is increased if $b_{r_i} < \bar{b}$ and decreased otherwise. The amount of adjustment depends on $\lvert b_{r_i} - \bar{b}\rvert$.

## Minimizing the number of candidate paths

The set of paths for a particular source-destination pair is determined by "widest disjoint paths" (wdp). The width of a path is the residual bandwidth on its bottleneck link, i.e. $w_r = \min_{\ell\in r} c_\ell$ where $c_\ell = \hat c_\ell - \nu_\ell$ is the average residual bandwidth on link $\ell$. Usually, we compute the residual bandwidth on a link from the utilization $u_\ell$ as reported by the link state update, i.e. $c_\ell = (1-u_\ell)\hat{c}_\ell$.
The distance of a path is defined as $\sum_{\ell\in r} 1/c_\ell$. The width of a set of paths $R$ is computed as follows:

• First pick the path $r^\ast$ with the largest width. If multiple such paths exist, choose the one with the shortest distance.
• Then decrease the residual bandwidth on all links of $r^\ast$ by $w_{r^\ast}$; this essentially removes $r^\ast$ from the next selection.
• Repeat this process until we exhaust $R$.
• The sum of the widths of all selected paths is the width of $R$.
• The last path selected in this computation is denoted by $\textrm{NARROWEST}(R)$.

To select at most $\eta$ paths from the set $\hat{R}_\sigma$, the idea of the algorithm is as follows:

• Include a new path $r\in\hat R_\sigma$ in $R_\sigma$ if it contributes the largest resulting width of $R_\sigma$ and $\lvert R_\sigma\rvert < \eta$.
• If $\lvert R_\sigma\rvert = \eta$, a new path can only be added if an old path is removed, such that the resulting width is maximized.
• Such an addition or addition/removal results in a new width of $R_\sigma$; it is accepted only if the new width is larger than the old width by at least a fraction $\psi$.
• If no addition is made to $R_\sigma$, remove a path from it if this does not decrease its width.

A property of this algorithm is that it selects a set of candidate paths that are mutually disjoint with respect to bottleneck links. This path selection procedure and the proportioning procedure are run together as a heuristic, to "trade-off slight increase in blocking for significant decrease in the number of candidate paths".

## Bibliographic data

@inproceedings{nelakuditi2001,
  title     = "On Selection of Paths for Multipath Routing",
  author    = "Srihari Nelakuditi and Zhi-Li Zhang",
  booktitle = "Proceedings of IWQoS",
  pages     = "170--186",
  year      = "2001"
}
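The width-of-a-set computation above can be sketched in code. The representation here (paths as lists of link ids, residual bandwidths as a dict) is my own choice for illustration, not from the paper.

```python
def set_width(paths, residual):
    """Compute the width of a set of candidate paths per the wdp procedure:
    repeatedly pick the widest path (ties broken by shortest distance),
    subtract its bottleneck width from all of its links, and sum the widths.
    paths: list of paths, each a list of link ids.
    residual: dict mapping link id -> average residual bandwidth c_l.
    Returns (width of the set, NARROWEST(paths))."""
    residual = dict(residual)          # work on a copy
    remaining = list(paths)
    total, narrowest = 0.0, None

    def width(p):
        return min(residual[l] for l in p)

    def distance(p):                   # sum of 1/c_l; guard exhausted links
        return sum(1.0 / residual[l] if residual[l] > 0 else float('inf')
                   for l in p)

    while remaining:
        # widest path first; among equals, the one with shortest distance
        best = max(remaining, key=lambda p: (width(p), -distance(p)))
        w = width(best)
        for l in best:
            residual[l] -= w           # remove best from future selections
        total += w
        narrowest = best
        remaining.remove(best)
    return total, narrowest
```

For example, with links a (residual 10), b (6), c (4) and paths [a, c], [b], [a], the procedure picks [a] first (width 10, exhausting a), then [b] (width 6), leaving [a, c] with width 0 as the narrowest, for a set width of 16.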
https://latex.org/forum/viewtopic.php?t=22470
## LaTeX forum ⇒ BibTeX, biblatex and biber ⇒ @MISC won't show up on reference list

Topic is solved

Information and discussion about BibTeX - the bibliography tool for LaTeX documents.

tbstensrud
Posts: 4
Joined: Mon Jan 28, 2013 5:05 pm

### @MISC won't show up on reference list

I'm trying to add a URL to my reference list, but when I use this in the bib file

`@MISC{tek10,AUTHOR="Lovdata",TITLE="Teknisk forskrift 2010",YEAR="2010",URL="http://www.lovdata.no/cgi-wift/ldles?doc=/sf/sf/sf-20100326-0489.html"}`

nothing shows up in my PDF file when I run it. I get no error messages. Other references do show up in the PDF file.

Last edited by cgnieder on Mon Jan 28, 2013 7:59 pm, edited 1 time in total.

Stefan Kottwitz
Posts: 9617
Joined: Mon Mar 10, 2008 9:44 pm

Hi, welcome to the board!

Perhaps post a minimal working example, which shows the problem when we run it. Follow the link to learn why and how. It could be a compilable but drastically reduced copy of the problematic document and the bibliography. That way we would also see the bibliography style and the settings. Otherwise it's hard to guess - BibTeX or biblatex, natbib or standard, which style, and more is unknown.

Stefan

tbstensrud
Posts: 4
Joined: Mon Jan 28, 2013 5:05 pm

Ok. Examples are the bib file and the tex file. Also, somehow the order of the references is not correct.

Attachments
referanseliste.bib
example.tex

Stefan Kottwitz
Posts: 9617
Joined: Mon Mar 10, 2008 9:44 pm

In the attached referanseliste.bib there's a comma missing before the url. But the example in your first post is ok, so that's probably just another small typo.

The `plain` bibliography style ignores the `url` field. You need to use a style which supports it. For example, you could load natbib:

`\usepackage{natbib}`

and later specify `plainnat`, which is the `natbib` version of `plain`:

`\bibliographystyle{plainnat}`

`natbib` also provides author-year styles, which are popular today.
Stefan

tbstensrud
Posts: 4
Joined: Mon Jan 28, 2013 5:05 pm

I did as you said, however there is still no URL displaying. Also the order of references appears to be random. Ideally I want the references in the text to be labeled 1, 2, 3 and so on regardless of their placement in the bib file. I've looked at this site, http://merkel.zoneo.net/Latex/natbib.php, to find out if there is a way to sort the references correctly, but I can't find it. Perhaps there are other packages that do this?

Stefan Kottwitz
Posts: 9617
Joined: Mon Mar 10, 2008 9:44 pm

tbstensrud wrote: I did as you said, however there is still no URL displaying.

It's displayed when I do as above. Perhaps you did something different, you could check again. Have a look at my output:

bibliography.png

tbstensrud wrote: Also the order of references appears to be random. Ideally I want the references in the text to be labeled 1, 2, 3 and so on regardless of their placement in the bib file.

Then perhaps use `unsrt` or `unsrtnat` instead.

`\bibliographystyle{unsrtnat}`

Stefan

tbstensrud
Posts: 4
Joined: Mon Jan 28, 2013 5:05 pm

The error with the URL was my mistake. I was certain I had added the "," in the bib file, but I seem to have forgotten it, so it works now. As for the numbering, for now I'm going to leave it as it is and go for the (author year) option, which works just fine. Thanks for all the help. Much obliged!
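For future readers: putting Stefan's suggestions together, a minimal working example along these lines might look as follows. This is an untested sketch; the `filecontents` environment just keeps the bib entry in the same file for demonstration.

```latex
% Minimal example of a @MISC entry with a url field, following the thread.
\begin{filecontents}{referanseliste.bib}
@MISC{tek10,
  AUTHOR = "Lovdata",
  TITLE  = "Teknisk forskrift 2010",
  YEAR   = "2010",
  URL    = "http://www.lovdata.no/cgi-wift/ldles?doc=/sf/sf/sf-20100326-0489.html"
}
\end{filecontents}

\documentclass{article}
\usepackage{natbib}
\usepackage{url}

\begin{document}
According to the building regulations \citep{tek10}, \ldots

% unsrtnat numbers references in citation order; plainnat sorts them.
\bibliographystyle{unsrtnat}
\bibliography{referanseliste}
\end{document}
```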
https://web2.0calc.com/questions/statistics-probability-practice-with-expected-value
# Statistics - probability practice with expected value

I'm so lost in this class.. Any help would be greatly appreciated!! I posted a question earlier and was shown the formula, which was such a huge help. Here's another...

A lawyer sometimes represents clients for a contingency fee. The lawyer only gets paid for services rendered if the client wins the case. Suppose a client is suing for $400,000 and the lawyer's fee is 10% of the settlement. The lawyer will spend $2,000 preparing the case and believes there is a 20% chance of winning the case. Find the lawyer's expected profit.

Mar 16, 2019

$$E[\text{profit}] = P[\text{win}] (\text{profit if win}) + P[\text{lose}](\text{"profit" if lose}) = \\ (0.2)\big((0.1)(400{,}000)-2{,}000\big) + (1-0.2)(-2{,}000) = \$6{,}000$$
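The arithmetic above can be double-checked with a few lines of code (a sketch; the function name and parameters are mine):

```python
def expected_profit(p_win, settlement, fee_rate, prep_cost):
    """Expected profit of a contingency-fee case: the preparation cost
    is paid win or lose, but the fee is collected only on a win."""
    profit_if_win = fee_rate * settlement - prep_cost
    profit_if_lose = -prep_cost
    return p_win * profit_if_win + (1 - p_win) * profit_if_lose

print(expected_profit(0.20, 400_000, 0.10, 2_000))
```

Weighting each outcome by its probability gives 0.2 × (40,000 − 2,000) + 0.8 × (−2,000) = 6,000.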
https://math.au.dk/aktuelt/aktiviteter/event/item/6071/
# From scattering theory to essential self-adjointness

Kouichi Taira (University of Tokyo)

Analysis seminar
Thursday, 31 October 2019, 16:15-17:00, in Aud. D3 (1531-215)

Description: It is shown by T. Kato that the Schrödinger operator with the Coulomb potential is essentially self-adjoint. Moreover, it is known that the Laplacian on a complete Riemannian manifold is essentially self-adjoint on the space of compactly supported smooth functions. On the other hand, on a domain, the Laplacian is not essentially self-adjoint. In fact, the Laplacian has at least two self-adjoint extensions: the Dirichlet Laplacian and the Neumann Laplacian. The last example suggests that an obstruction to essential self-adjointness is the "boundary" of the manifold. In this talk, I will explain how techniques of scattering theory can be applied to determining the self-adjointness of differential operators. As an application, I will present the following two results. (1) Essential self-adjointness of real principal type operators. (2) An alternative proof of the failure of essential self-adjointness for repulsive Schrödinger operators with a large exponent, from the viewpoint of scattering theory. This is partially joint work with Shu Nakamura.

Contact person: Erik Skibsted
https://www.physicsforums.com/threads/potential-on-a-conductor.618041/
# Potential on a conductor

#1
Hi,

I consider a connected conductor. Is it right that:
1) the potential at any point of the conductor is the same, but
2) the absolute value of the potential isn't zero in general?

I think these statements are true, but I'm not sure about it, especially the second statement. Thank you a lot! Regards

#2 mfb, Mentor
> 1) the potential at any point of the conductor is the same
For an ideal conductor, yes.
> 2) the absolute value of the potential isn't zero in general?
This value has no meaning. You can define "zero" wherever you want. There are some conventions, depending on the setup, but they do not have a physical meaning themselves.

#3 tiny-tim, Homework Helper
> 1) the potential at any point of the conductor is the same
> For an ideal conductor, yes.
in equilibrium (if charges are moving, i.e. if there is a current, then obviously there is a voltage drop along the conductor, i.e. an electric potential difference)

#4
Thank you very much!!! Now I understand.

#5 mfb, Mentor
> (if charges are moving, i.e. if there is a current, then obviously there is a voltage drop along the conductor, i.e. an electric potential difference)
In superconductors with constant current, you have no voltage drop, even with a current flow. If charges are accelerating, you have an electric field and therefore a voltage drop ;).
http://cosmicreflections.skythisweek.info/tag/impact-flash/
## Meteor Astronomy Terms & Definitions

IAU Commission F1 (Meteors, Meteorites, and Interplanetary Dust) officially approved some terms and definitions in meteor astronomy last year. This is a revision of the terms and definitions that were approved in 1961. Meteor astronomy knowledge has grown by leaps and bounds since then.

Meteoroid
A solid natural object of a size roughly[1] between 30 micrometers and 1 meter moving in, or coming from, interplanetary space

Meteor
The light and associated physical phenomena (heat, shock, ionization) which result from the high speed entry of a solid object from space into a gaseous atmosphere

Meteorite
Any natural solid object that survived the meteor phase in a gaseous atmosphere without being completely vaporized

[1] “Roughly”, because the 1 meter size limit is not a physical boundary; it is set by agreement. There is a continuous population of bodies both smaller and larger than 1 meter. Bodies larger than 1 meter tend to be dominated by asteroidal debris, rather than debris from comets. “Roughly”, also because the 30 micrometer size limit is not a physical boundary; it is set by agreement. There is a continuous population of bodies both smaller and larger than 30 micrometers. Bodies smaller than 30 micrometers, however, tend to radiate heat away well and not vaporize during an atmospheric entry.

“Small dust particles do not give rise to the meteor phenomenon when they enter planetary atmospheres. Being heated below the melting point, they sediment to the ground more or less unaffected.”

“When collected in the atmosphere, they are called interplanetary dust particles (IDPs). When in interplanetary space, they are simply called dust particles. The term micrometeoroid is discouraged.”

Looking at the definition for meteorite above, what about meteoroids that reach the surface of a world with little or no atmosphere, such as the Moon? The IAU Commission has a less-than-satisfying answer (to this writer, at least).
“Foreign objects on the surfaces of atmosphereless bodies are not called meteorites (i.e. there is no meteorite without a meteor). They can be called impact debris.”

What’s the harm in calling any meteoroid that reaches the surface of a planetary body (planet, moon, asteroid, etc.) a meteorite? To me, “impact debris” implies material pre-existing on the planetary body that is excavated by an impact event.

“In the context of meteor observations, any object causing a meteor can be termed a meteoroid, irrespective of size.”

“A meteoroid in the atmosphere becomes a meteorite after the ablation stops and the object continues on dark flight to the ground.”

“A meteorite smaller than 1 millimeter can be called a micrometeorite. Micrometeorites do not have the typical structure of a fresh meteorite—unaffected interior and fusion crust.”

“Meteor stream is a group of meteoroids which have similar orbits and a common origin. Meteor shower is a group of meteors produced by meteoroids of the same meteoroid stream.”

Dust (interplanetary)
Finely divided solid matter, with particle sizes in general smaller than meteoroids, moving in, or coming from, interplanetary space

“Dust in the solar system is observed e.g. as the zodiacal dust cloud, including zodiacal dust bands, and cometary dust tails. In such contexts the term ‘dust’ is not reserved for solid matter smaller than about 30 micron; the zodiacal dust cloud and cometary dust trails contain larger particles that can also be called meteoroids.”

For consistency with the rest of the document, micron in the above paragraph should be micrometers.

Meteoric smoke
Solid matter that has condensed in a gaseous atmosphere from material vaporized during the meteor phase.
“The size of meteoric smoke particles (MSPs) is typically in the sub-100 nm range.”

“Meteors can occur on any planet or moon having a sufficiently dense atmosphere.”

“A meteor brighter than absolute visual magnitude (distance of 100 km) -4 is also termed a bolide or a fireball.”

The fireball definition makes sense, but it was always my understanding that a bolide is accompanied (later) by audible sound and is thus much rarer.

“A meteor brighter than absolute visual magnitude -17 is also called a superbolide.”

“Meteor train is light or ionization left along the trajectory of the meteor after the meteor has passed.”

“Small (typically micron-size) non-vaporized remnants of ablating meteoroids can be called meteoritic dust. They can be observed e.g. as dust trails in the atmosphere after the passage of a bolide.”

Again, for consistency with the rest of the document, micron-size in the above paragraph should be micrometer-size.

“The radiation phenomenon accompanying a direct meteoroid hit of the surface of a body without an atmosphere is not called a meteor but an impact flash.”

References

Koschny, D., & Borovička, J. 2017, WGN, The Journal of the IMO, 45, 5
https://www.iau.org/static/science/scientific_bodies/commissions/f1/meteordefinitions_approved.pdf
http://thephysicsvirtuosi.com/posts/coriolis-effect-on-a-home-run.html
# Coriolis Effect on a Home Run

Citizen’s Bank Park

I like baseball. Well, technically, I like ~~laying~~[3] lying on the couch for three hours half-awake eating potato chips and mumbling obscenities at the television. But let’s not split hairs here. Anyway, out of curiosity and in partial atonement for the sins of my past [1] I would now like to do a quick calculation to see how much effect the Coriolis force has on a home-run ball. The Coriolis force is one of the artificial forces we have to put in if we are going to pretend the Earth is not rotating. For a nice intuitive explanation of the Coriolis force see this post over at Dot Physics.

Let’s now consider the following problem. Citizen’s Bank Park (home to the Philadelphia Phillies) is oriented such that the line from home plate to the foul pole in left field runs essentially South-North. Imagine now that Ryan Howard hits a hard shot down the third base line (that is, he hits the ball due North). Assuming it is long enough to be a home run, how will the Coriolis force affect the ball’s trajectory?

This is a well-posed problem and we could solve it as exactly as we wanted. But please don’t make me. It’s icky and messy and I don’t feel like it. So let’s do some dimensional analysis! Hooray for that!

So what are the relevant physical quantities in this problem? Well, we’ll certainly need the angular velocity of the Earth and the speed of the baseball. We’ll also need the acceleration due to gravity. Alright, so what do we want to get out of this? Well, ideally we’d like to find the distance the ball is displaced from its current trajectory. So is there any way we can combine an angular velocity, a linear velocity, and an acceleration to get a displacement? Let’s see. We can write out the dimensions of each in terms of some length, L, and some time, T.
So:

$$\left[ \Omega \right] = \frac{1}{T}$$

$$\left[ v \right] = \frac{L}{T}$$

$$\left[ g \right] = \frac{L}{T^2}$$

where we have used the notation that [some quantity] = units of that quantity. Combining these in a general way gives:

$$L = \left[ v^{\alpha} \Omega^{\beta} g^{\gamma} \right] = \left( \frac{L}{T}\right)^{\alpha}\left( \frac{1}{T}\right)^{\beta}\left( \frac{L}{T^2}\right)^{\gamma} = L^{\alpha+\gamma}\, T^{-(\alpha+\beta+2\gamma)}$$

Since we just want a length scale here, we need:

$$\alpha+\gamma = 1 \quad\mbox{and}\quad \alpha+\beta+2\gamma = 0.$$

We can fiddle around with the above two equations to get two new equations that are both functions of alpha. This gives:

$$\beta = \alpha - 2 \quad\mbox{and}\quad \gamma = 1 - \alpha.$$

Unfortunately, we have two equations and three unknowns, so we have an infinite number of solutions. I’ve listed a few of these in the Table below.

Ways of getting a length

At this point, we have taken Math as far as we can. We’ll now have to use some physical intuition to narrow down our infinite number of solutions to one. Hot dog! One way we can choose from these expressions is to see which ones have the correct dependencies on each variable. So let’s consider what we would expect to happen to the deflection of our baseball by the Coriolis force if we changed each variable.

What happens if we were to “turn up” the gravity and make g larger? If we make g much larger, then a baseball hit at a given velocity will not be in the air as long. If the ball isn’t in the air as long, then it won’t have as much time to be deflected. So we would expect the deflection to decrease if we were to increase g. This suggests that g should be in the denominator of our final expression.

What happens if we turn up the velocity of the baseball? If we hit the ball harder, then it will be in the air longer and thus we would expect it to have more time to be deflected.
Since increasing the velocity would increase the deflection, we would expect v to be in the numerator.

What happens if we turn up the rotation of the Earth? Well, if the Earth is spinning faster, it’s able to rotate more while the ball is in the air. This would result in a greater deflection in the baseball’s path. Thus, we would expect this term to be in the numerator.

So, using the above criteria, we have eliminated everything on that table with alpha less than 3 based on physical intuition. Unfortunately, we still have an infinite number of solutions to choose from (i.e. all those with alpha greater than or equal to 3). But, we DO have a candidate for the “simplest” solution available, namely the case where alpha = 3. Since we have exhausted our means of winnowing down our solutions, let’s just go with the alpha = 3 case. Our dimensional analysis expression for the deflection of a baseball is then

$$\Delta x \sim \frac{v^3 \Omega}{g^2}$$

Plugging in typical values of

$$v = 50~\mbox{m/s}\quad(110~\mbox{mi/hr})$$

$$\Omega = 7 \times 10^{-5}~\mbox{rad/s}$$

$$g = 9.8~\mbox{m/s}^2$$

we get

$$\Delta x \approx 0.1~\mbox{m} = 10~\mbox{cm}.$$

That’s all fine and good, but which way does the ball get deflected? Is it fair or foul? Well, remembering that the Coriolis force is given by:

$${\bf F} = -2m{\bf \Omega} \times {\bf v}$$

and utilizing Ye Olde Right Hand Rule, we see that a ball hit due north will be deflected to the East. In the case of Citizen’s Bank Park, that is fair territory. But how good is our estimate? Well, I did the full calculation (which you can find here) and found that the deflection due to the Coriolis force is given by

$$\Delta x =-\frac{4}{3}\frac{\Omega v^3_0}{g^2} \cos \phi \sin^3 \alpha \left[1 -3 \tan \phi \cot \alpha \right]$$

where phi is the latitude and alpha is the launch angle of the ball. We see that this is essentially what we found by dimensional analysis, up to that factor of 4/3 and some geometrical terms. Not bad!
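Both the dimensional estimate and the full expression are easy to check numerically. A quick sketch, assuming a latitude of about 40 degrees (roughly Philadelphia) and a 45 degree launch angle:

```python
import math

# Coriolis deflection of a ball hit due north, two ways.
# The latitude and launch angle below are illustrative assumptions.
v = 50.0          # launch speed, m/s
Omega = 7e-5      # Earth's angular velocity, rad/s
g = 9.8           # gravitational acceleration, m/s^2
phi = math.radians(40.0)    # latitude (Philadelphia is near 40 deg N)
alpha = math.radians(45.0)  # launch angle

# Dimensional-analysis estimate: Delta x ~ v^3 * Omega / g^2
dx_estimate = v**3 * Omega / g**2

# Full result: Delta x = -(4/3)(Omega v^3 / g^2) cos(phi) sin^3(alpha)
#              * [1 - 3 tan(phi) cot(alpha)]
dx_full = -(4.0 / 3.0) * (Omega * v**3 / g**2) * math.cos(phi) \
          * math.sin(alpha)**3 * (1.0 - 3.0 * math.tan(phi) / math.tan(alpha))

print(f"estimate: {dx_estimate:.3f} m, full: {dx_full:.3f} m")
# The estimate comes out near 0.1 m, the full result near 0.05 m.
```

The factor-of-two gap between the two numbers is exactly the $\frac{4}{3}\cos\phi\sin^3\alpha\,[\cdots]$ geometry that dimensional analysis cannot see.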
Plugging in the same numbers we used before, along with the appropriate latitude and a 45 degree launch angle, we find that the ball is deflected by:

$$\Delta x = 5~\mbox{cm}.$$

For comparison, we note that the diameter of a baseball is 7.5 cm. So in the grand scheme of things, this effect is essentially negligible. [2]

That wraps up the calculation, but I’m certain that many of you are still a little wary of this voodoo calculating style. And you should be! Although dimensional analysis will give you a result with the proper units and will often give you approximately the right scale, it is not perfect. But, it can be formalized and made rigorous. The rigorous demonstration for dimensional analysis is due to Buckingham and his famous pi-theorem. The original paper can be found behind a pay-wall here and a really nice set of notes can be found here. It’s a pretty neat idea and I highly recommend you check it out!

Unnecessary Footnotes:

[1] Once in college I argued with a meteorologist named Dr. Thunder over the direction of the Coriolis force on a golf ball for the better half of the front nine at Penn State’s golf course. I was wrong. Moral of the story: don’t play golf with meteorologists.

[2] For a counterargument, see Fisk et al. (1975)

[3] Text has been corrected to illustrate our enlightenment by a former English major as to the difference between ‘lay’ and ‘lie’ through the following story: ‘Once in a college psych class, a young student said “It’s too hot. Let’s lay down.” A mature student, a journalist, asked, “Who’s Down?” ’
http://math.stackexchange.com/questions/265723/conformal-map-of-the-complex-plane-is-linear
Conformal map of the complex plane is linear Gamelin, Complex Analysis, p. 265, #8 reads: Show that every conformal self-map of the complex plane $\mathbb C$ is linear. Hint: The isolated singularity of $f(z)$ at $\infty$ must be a simple pole. First of all, how do I argue that the singularity is not essential or removable? Second of all, how do I argue that it is a pole and is simple? I can see there is a singularity at $\infty$ because the function is not really defined there. Some hints please! Addendum Now I can see that the singularity cannot be removable because of Liouville's theorem. If the singularity at $\infty$ were removable, then $f$ would be bounded, and since $f$ is analytic on the entire complex plane, Liouville's theorem would tell us $f$ is constant, which is impossible because $f$ is bijective. - The key point is that $f(z)$ is bijective, so if it had a pole of order $2$ or an essential singularity at infinity can you see how injectivity would fail? –  JSchlather Dec 27 '12 at 3:27 Not really, would you please elaborate? –  Deepak Dec 27 '12 at 3:29 If it has a pole of order 2, then the principal part of the Laurent series has two negative terms; does that imply that $z_{0}$ maps to $\infty$ twice or what? Confused!! –  Deepak Dec 27 '12 at 3:33 The following are links to related questions: one two three four five –  Jonas Meyer Dec 27 '12 at 5:23 Thanks Jonas Meyer. That really helps. –  Deepak Dec 27 '12 at 6:17
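One way to organize the hint, sketched in LaTeX. This is the standard textbook argument, not necessarily Gamelin's exact intended solution:

```latex
Let $f:\mathbb{C}\to\mathbb{C}$ be a conformal bijection, so $f$ has an
isolated singularity at $\infty$ (i.e.\ $g(z)=f(1/z)$ has one at $0$).
\begin{itemize}
  \item \emph{Not removable:} otherwise $f$ would be bounded near
        $\infty$, hence bounded on all of $\mathbb{C}$, hence constant
        by Liouville's theorem, contradicting bijectivity.
  \item \emph{Not essential:} by Casorati--Weierstrass,
        $f(\{|z|>R\})$ would be dense in $\mathbb{C}$; but
        $f(\{|z|<R\})$ is open by the open mapping theorem, so the two
        images would intersect, contradicting injectivity.
  \item \emph{Pole of order $n$, so a polynomial:} a pole at $\infty$
        forces $f$ to be a polynomial of degree $n$. If $n\ge 2$, then
        for all but finitely many $w$ the equation $f(z)=w$ has $n$
        distinct roots, so $f$ is not injective. Hence $n=1$ and
        $f(z)=az+b$ with $a\neq 0$.
\end{itemize}
```

The middle bullet is exactly the injectivity failure JSchlather's comment is pointing at.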
https://www.physicsforums.com/threads/question-about-a-ramsey-graph.661559/
# Question about a Ramsey graph.

1. Dec 30, 2012

### cragar

The Ramsey number $R(\omega,\omega)=\omega$, but $R(\omega+1,\omega)=\omega_1$. My question is about the second one: can we give a counterexample to show that it can't be any countable ordinal? Depending on how I count the natural numbers, I can get any countable ordinal I want. If we assume that $R(\omega+1,\omega)$ were equal to some countable ordinal, then I could just color $\omega$ edges blue, for example, and I would stay under the $\omega+1$ limit, using only a finite number of edges of the other color. I guess I don't really understand why order matters for an infinite Ramsey graph. It doesn't seem like it matters in the finite case.
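The reason order matters is visible in the definition of the partition relation itself. A hedged sketch in LaTeX (this is the standard arrow-notation definition; the specific values quoted in the post are not re-derived here):

```latex
% The partition relation behind the ordinal "Ramsey numbers":
$$\alpha \longrightarrow (\beta,\gamma)^{2} \quad\text{means}\quad
  \forall f:[\alpha]^{2}\to\{0,1\}\ \ \exists H\subseteq\alpha \text{ such that}$$
$$\bigl(\operatorname{otp}(H)=\beta \ \wedge\ f\equiv 0 \text{ on } [H]^{2}\bigr)
  \ \vee\
  \bigl(\operatorname{otp}(H)=\gamma \ \wedge\ f\equiv 1 \text{ on } [H]^{2}\bigr).$$
% The homogeneous set H must have the stated ORDER TYPE under the order
% induced from the ordinal alpha -- not merely the stated cardinality.
% Re-enumerating the natural numbers in order type alpha does not help:
% the coloring f is defined on pairs of the ordinal, so any re-indexing
% transports the coloring along with it.
```

So for $(\omega+1,\omega)$ the blue homogeneous set must contain a point sitting *above* an infinite increasing sequence of its own members, which is a genuinely stronger demand than "infinitely many blue-homogeneous points"; that is where the finite intuition breaks down.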
https://www.physicsforums.com/threads/four-vector-problem.584994/
# Four-vector problem 1. ### Wox 71 The four-velocity, as defined for example here, is given by $$U=\gamma(c,\bar{u})$$ but I get $$U=\gamma(1,\frac{\bar{u}}{c})$$ Consider the timelike curve $\bar{w}(t)=(ct,\bar{x}(t))$ with velocity $\bar{v}(t)=(c,\bar{x}'(t))\equiv (c,\bar{u}(t))$ and the arc-length (proper time) $$\tau\colon I\subset \mathbb{R}\to \mathbb{R}\colon t\mapsto \int_{t_{0}}^{t}\left\| \bar{v}(k)\right\|dk$$ for which (by the First Fundamental Theorem of Calculus (1), the Minkowskian inner product (2) and the definition of the Lorentz factor (3)) $$\frac{d\tau}{dt}=\left\| \bar{v}(t)\right\|=\sqrt{c^{2}-\bar{u}^{2}(t)}\equiv \frac{c}{\gamma}$$ then the velocity of the curve after arc-length (proper time) parameterization is given by $$\bar{v}(\tau)=\frac{d\bar{w}}{d\tau}=\frac{d\bar{w}}{dt}\frac{dt}{d\tau}=\frac{\bar{v}(t)}{\left\| \bar{v}(t)\right\|}=\frac{(c,\bar{u}(t))}{\frac{c}{\gamma}}=\gamma(1,\frac{\bar{u}(t)}{c})$$ I would think that my $\bar{v}(\tau)$ is the four-velocity, but in fact $\bar{v}(\tau)=\frac{U}{c}$ where $U$ is the four-velocity as defined in textbooks. What am I missing? 2. ### Matterwave 3,860 If you are using units where c is not 1, then certainly you want the 4-velocity to be normalized to c (or -c depending on signature of the metric), and not 1, which is not in units of velocity (if, again, c is not set to 1). 3. ### PAllen 5,801 I think it is purely a convention. I learned that U is meant to be a unit vector, always, even when c is not taken to be 1. Some people like norm of U = c, some like 1. Norm of one amounts to units of time rather than distance for positions. However, since you start with 4-position in units of distance, you should get a U whose norm is c. The flaw is your d tau/dt computation. It is 1/gamma, not c/gamma. Then you get U with norm of c. Specifically, your integral formula is not right. Given your units for v, the integral is c * tau, not tau.
This, then, is the initial (and only) error, from which all else follows. Last edited: Mar 8, 2012 4. ### PAllen 5,801 I'll add one more thing. The idea of norming U to c has led to Brian Greene's "speed through space-time equals c", which has led to numerous confusions and debates on these forums, as well as facilitating cranks. The convention of Einstein and Bergmann that U has norm 1, irrespective of the value of c, sidesteps all of this lunacy. 5. ### Matterwave 3,860 If you parameterize your curve with the proper time in seconds, and your proper distances are measured in meters, don't you necessarily get the norm condition in U to be c? What I mean is, if you use units of distance the same as your units of time, aren't you necessarily setting c=1? 6. ### PAllen 5,801 No. The conventions: position: (t, x/c, y/c, z/c) line element: d tau^2 = d t^2 - (dx^2 + dy^2 + dz^2)/c^2 and the consequence that U = d(position) / d tau has norm 1 in no way have c=1. 7. ### Matterwave 3,860 Are you talking about adding a factor of c into the metric? o.o 8. ### PAllen 5,801 Don't know what you are asking. The two common conventions for the metric are: ds^2 = dx^2 + dy^2 + dz^2 - c^2 dt^2 and d tau^2 = dt^2 - dx^2/c^2 - dy^2/c^2 - dz^2/c^2 I have always used the latter. 9. ### Wox 71 Ok, so because $\bar{w}(t)=(ct,\bar{x}(t))$ is in space units, $\left\| \bar{v}(t)\right\|$ has units space/time and the integral is in space units. Then the arc-length (proper time) parameterization in time units is given by $$\tau\colon I\subset \mathbb{R}\to \mathbb{R}\colon t\mapsto \frac{1}{c}\int_{t_{0}}^{t}\left\| \bar{v}(k)\right\|dk$$ and the four-velocity in space/time units $$\bar{v}(\tau)=\gamma(c,\bar{u}(t))$$ Is this the correct explanation? As I understand, the four-velocity in the other convention (norm=1) is unitless, isn't it? So how is it used then? 10. ### PAllen 5,801 Yes, this is correct. The magnitude of U really has no real meaning in either convention.
All information about measured velocity in any basis (frame) is contained in the direction of U as a tangent vector. The norm 1 convention makes this explicit: it is literally a unit tangent vector to a world line. There are pros and cons to either convention. 11. ### Wox 71 Thanks, you've been a great help! And what about the magnitude of space-component $\gamma\bar{u}(t)$ of the four-velocity? I'm not quite sure how all this relates to some physical reality. Can I interpret the space-component of the four-velocity as the classical velocity (at least when not choosing the norm=1 convention)? 12. ### PAllen 5,801 Let's say you have U as a tangent vector, considered a coordinate independent quantity. You want to know the spatial velocity measured in some frame defined by a 4 orthonormal unit vectors, one timelike the others spacelike. You take U dot <x unit vector>/ U dot <t unit vector>, same for y and z. Clearly, the norm of U drops out. In the coordinates you initially used to express U, the spatial velocity is just the u you started with (which you would get by executing the procedure above). The quantity gamma*u would be rather meaningless: the rate of change of distance in a given frame by a particle's proper time. This could exceed c by a large factor. 13. ### PAllen 5,801 Error here. In this convention you simply have position: (t,x,y,z) Then U, with norm 1, becomes: gamma * (1,u) This is not necessarily taking c=1, because the covariant metric diagonal is still (1, -1/c^2, -1/c^2,-1/c^2). Contravariant diagonal obviously (1,-c^2,-c^2,-c^2). Last edited: Mar 9, 2012 15. ### PAllen 5,801 Probably no one interested anymore, but I have clarified a few things in my own mind. Given the desire to express a 4 - tangent vector to a world line in terms of u = (dx/dt, dy/dt, dz/dt) with conventional meanings, two separate conventions affect the form it takes: - how you norm it - is your metric canonic (all +1,-1) or not (you have c^2 or 1/c^2 in your metric). 
The signature of the metric is irrelevant for this situation. With canonic metric, you have a factor of c in your tangent vector components, so you have two natural choices: gamma * (c, u) // norm c; dimensions distance/time; from d (ct,x,y,z) / d tau gamma * (1, u/c) // norm 1; dimensionless; from d (t, x/c, y/c, z/c) / d tau With non-canonic metric, you have only one natural form, with norm 1: gamma * (1, u) // mixed dimensions, as is characteristic of non-canonic metric // from d (t,x,y,z) / d tau With non-canonic metric, a form with norm c is simply unnatural. It happens that I learned SR with non-canonic metric, 4-velocity being a unit vector, and the concept of 'speed through spacetime' not remotely meaningful. It appears that almost all modern books use canonic metric. [EDIT: For emphasis, note something I derived in an earlier post: the norm of c or 1 plays no role at all in computing any observable. You could even normalize to 42 and it would make no difference. Only the direction in 4-space of the tangent vector plays any role in computing observables.] Last edited: Mar 10, 2012 16. ### Matterwave 3,860 Yes, this what I meant when I said "are you talking about adding a factor of c to the metric?" I always learned to us diag(-1,1,1,1) for my metric so any factors of c's I need are in the 4-vectors themselves. 17. ### Wox 71 Not sure what you mean: there is no dot product in Minkowskian space-time... Do you mean that [Ux/Ut,Uy/Ut,Uz/Ut] is the spatial 3-velocity? Why? 18. ### PAllen 5,801 Sure there is a dot product. It is defined by the metric. For example, if the metric is diag(1,-1/c^2, -1/c^2,-1/c^2), then the dot product of X and Y is: x0*y0 - x1*y1/c^2 - x2*y2/c^2 - x3*y3/c^2 19. ### Wox 71 Depends on your definition of the dot product, but I see what you mean. But I don't see why [Ux/Ut,Uy/Ut,Uz/Ut] would correspond to a spatial velocity. 20. ### PAllen 5,801 In a metric space there is one definition of the dot product. 
The Euclidean one looks the way it does solely because the Euclidean metric is diag(1,1,1). If U is some tangent vector, and x,y,z,t are unit vectors for some frame, then the dot product of U with such unit vectors expresses U in that frame basis. Then Ux/Ut gives the x speed (well, actually, x-speed/c, but that is just as good). Look at the tangent vector itself expressed in your starting coordinates (c-normed, canonic metric; works the same in any other convention): U = gamma(c,u) Ux = gamma * ux is not the x speed; but note Ux/Ut = ux/c. This feature will be true in any other basis. In particular, in an orthonormal basis with U itself taken as the time unit vector, you get spatial speed of zero - the particle has no spatial speed in its own basis. Last edited: Mar 13, 2012
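The points made in this thread are easy to verify numerically. A minimal sketch, assuming a particle moving at u = 0.6c along x and the canonic metric diag(1, -1, -1, -1):

```python
import math

# Four-velocity conventions, checked numerically for u = 0.6c along x.
c = 3.0e8                      # m/s
u = (0.6 * c, 0.0, 0.0)        # coordinate 3-velocity (illustrative)
speed = math.sqrt(sum(ui**2 for ui in u))
gamma = 1.0 / math.sqrt(1.0 - speed**2 / c**2)   # = 1.25 here

# Convention with norm c: U = gamma * (c, u), canonic metric diag(1,-1,-1,-1)
U = (gamma * c,) + tuple(gamma * ui for ui in u)
norm_sq = U[0]**2 - U[1]**2 - U[2]**2 - U[3]**2
# norm_sq = gamma^2 (c^2 - u^2) = c^2, as PAllen states.

# The observable never depends on the norm: the measured x-speed over c
# is the ratio of components, Ux/Ut = ux/c.
beta_x = U[1] / U[0]

# Rescale U by any constant (even 42) -- the ratio is unchanged.
U42 = tuple(42.0 * comp for comp in U)
beta_x_42 = U42[1] / U42[0]
```

This makes the "you could even normalize to 42" remark concrete: only the direction of U in 4-space enters the measured velocity.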
https://www.zbmath.org/?q=an%3A1272.30062
# Large and small covers of a hyperbolic manifold. (English) Zbl 1272.30062 For a discrete subgroup $$\Gamma$$ of the isometry group of the hyperbolic space $$\mathbb H^{n+1}$$, we denote by $$\delta(\Gamma)$$ the exponent of convergence of its Poincaré series. By work of C. J. Bishop and P. W. Jones [Acta Math. 179, No. 1, 1–39 (1997; Zbl 0921.30032)] it is known that $$\delta(\Gamma)$$ coincides with the Hausdorff dimension of the conical limit set of $$\Gamma$$. In the paper under review, the authors focus on the behaviour of $$\delta$$ under taking non-trivial normal subgroups. K. Falk and B. O. Stratmann [Tohoku Math. J., II. Ser. 56, No. 4, 571–582 (2004; Zbl 1069.30070)] showed that if $$\hat \Gamma$$ is a non-trivial normal subgroup of a non-elementary $$\Gamma$$, then $$\delta(\hat \Gamma) \geq \delta(\Gamma)/2$$. In the present paper, it is shown that in the case when $$n=1$$ and $$\Gamma$$ is a Fuchsian group corresponding to a closed hyperbolic surface, this inequality is in a sense best possible: there is a sequence of normal subgroups $$\Gamma_i$$ with $$\delta(\Gamma_i)$$ tending to $$1/2$$. Furthermore, it is also shown that when $$\Gamma$$ is non-elementary and convex cocompact for arbitrary $$n$$, for any non-trivial normal subgroup $$\hat \Gamma$$, the strict inequality $$\delta(\hat \Gamma) > \delta(\Gamma)/2$$ holds. In contrast to these results, the authors also show that in the 3-dimensional cocompact case, there is a normal subgroup with large $$\delta$$: when $$\mathbb H^3/\Gamma$$ is a closed hyperbolic 3-manifold fibring over a circle, for any $$\epsilon >0$$ there is a Schottky subgroup $$G$$ of $$\Gamma$$ with $$\delta(G)>2-\epsilon$$.
##### MSC: 30F40 Kleinian groups (aspects of compact Riemann surfaces and uniformization) 57M50 General geometric structures on low-dimensional manifolds ##### Keywords: Kleinian group; hyperbolic space; exponent of convergence Full Text: ##### References: [1] Agol, I.: Criteria for virtual fibering. J. Topol. 1, 269–284 (2008) · Zbl 1148.57023 · doi:10.1112/jtopol/jtn003 [2] Agol, I.: Tameness of hyperbolic 3-manifolds (preprint) · Zbl 1178.57017 [3] Bers, L.: Automorphic forms for Schottky groups. Adv. Math. 16, 332–361 (1975) · Zbl 0327.32011 · doi:10.1016/0001-8708(75)90117-6 [4] Bishop, C., Jones, P.: Hausdorff dimension and Kleinian groups. Acta Math. 179, 1–39 (1997) · Zbl 0921.30032 · doi:10.1007/BF02392718 [5] Bonahon, F.: Bouts des variétés hyperboliques de dimension 3. Ann. Math. 124(2), 71–158 (1986) · Zbl 0671.57008 · doi:10.2307/1971388 [6] Brooks, R.: The fundamental group and the spectrum of the Laplacian. Comment. Math. Helv. 56, 581–598 (1981) · Zbl 0495.58029 · doi:10.1007/BF02566228 [7] Brooks, R.: The bottom of the spectrum of a Riemannian cover. J. Reine Angew. Math. 357, 101–114 (1985) · Zbl 0553.53027 [8] Canary, R.: On the Laplacian and geometry of hyperbolic 3-manifolds. J. Differ. Geom. 36, 349–367 (1992) · Zbl 0763.53040 [9] Canary, R.: A covering theorem for hyperbolic 3-manifolds and its applications. Topology 35, 751–778 (1996) · Zbl 0863.57010 · doi:10.1016/0040-9383(94)00055-7 [10] Canary, R., Taylor, E.C.: Kleinian groups with small limit sets. Duke Math. J. 73, 371–381 (1994) · Zbl 0798.30030 · doi:10.1215/S0012-7094-94-07316-X [11] Canary, R., Minsky, Y., Taylor, E.C.: Spectral theory, Hausdorff dimension and the topology of hyperbolic 3-manifolds. J. Geom. Anal. 9, 17–40 (1999) · Zbl 0957.57012 · doi:10.1007/BF02923086 [12] Calegari, D., Gabai, D.: Shrinkwrapping and the taming of hyperbolic manifolds. J. Am. Math. Soc. 
19, 385–446 (2006) · Zbl 1090.57010 · doi:10.1090/S0894-0347-05-00513-8 [13] Chavel, I.: Eigenvalues in Riemannian Geometry. Pure and Applied Mathematics, vol. 115. Academic Press, San Diego (1984) · Zbl 0551.53001 [14] Cheeger, J.: A lower bound for the smallest eigenvalue of the Laplacian. In: Problems in Analysis, pp. 195–199. Princeton University Press, Princeton (1970) · Zbl 0212.44903 [15] Doyle, P.: On the bass note of a Schottky group. Acta Math. 160, 249–284 (1988) · Zbl 0649.30036 · doi:10.1007/BF02392277 [16] Falk, K.: A note on Myrberg points and ergodicity. Math. Scand. 96, 107–116 (2005) · Zbl 1142.37322 [17] Falk, K., Stratmann, B.: Remarks on Hausdorff dimensions for transient limit sets of Kleinian groups. Tohoku Math. J. 56(2), 571–582 (2004) · Zbl 1069.30070 · doi:10.2748/tmj/1113246751 [18] Fernández, J., Rodríguez, J.: The exponent of convergence of Riemann surfaces. Bass Riemann surfaces. Ann. Acad. Sci. Fenn. 15, 165–183 (1990) · Zbl 0702.30046 [19] Greenberg, L.: Finiteness theorems for Fuchsian and Kleinian groups. In: Discrete Groups and Automorphic Functions, pp. 199–257. Academic Press, New York (1977) [20] Krushkal, S.L., Apanosov, B.N., Gusevskii, N.A.: Kleinian Groups and Uniformization in Examples and Problems. Translations of Mathematical Monographs, vol. 62. Am. Math. Soc., Providence (1986) [21] Lundh, T.: Geodesics on quotient manifolds and their corresponding limit points. Mich. Math. J. 51, 279–304 (2003) · Zbl 1044.37018 · doi:10.1307/mmj/1060013197 [22] Malcev, A.: On faithful representations of infinite groups of matrices. Mat. Sib. 8, 405–422 (1940) · JFM 66.0088.03 [23] Malcev, A.: On faithful representations of infinite groups of matrices. Am. Math. Soc. Transl. 45, 1–8 (1965) [24] Marden, A.: Schottky groups and circles. In: Contributions to Analysis (a Collection of Papers Dedicated to Lipman Bers), pp. 273–278. Academic Press, New York (1974) [25] Maskit, B.: Kleinian Groups. 
Springer, New York (1998) · Zbl 0940.30022 [26] Matsuzaki, K.: Dynamics of Kleinian groups–the Hausdorff dimension of limit sets. Sugaku 51(2), 142–160 (1999) (Translation in Selected Papers on Classical Analysis, pp. 23–44. AMS Translation Series (2), vol. 204. Am. Math. Soc., Providence, 2001) · Zbl 0931.57034 [27] Matsuzaki, K.: Convergence of the Hausdorff dimension of the limit sets of Kleinian groups. In: The Tradition of Ahlfors and Bers: Proceedings of the First Ahlfors-Bers Colloquium. Contemporary Math., vol. 256, pp. 243–254. Am. Math. Soc., Providence (2000) · Zbl 0973.30030 [28] Matsuzaki, K.: Conservative action of Kleinian groups with respect to the Patterson–Sullivan measure. Comput. Methods Funct. Theory 2, 469–479 (2002) · Zbl 1062.30052 · doi:10.1007/BF03321860 [29] Matsuzaki, K.: Isoperimetric constants for conservative Fuchsian groups. Kodai Math. J. 28, 292–300 (2005) · Zbl 1088.30041 · doi:10.2996/kmj/1123767010 [30] Matsuzaki, K., Taniguchi, M.: Hyperbolic Manifolds and Kleinian Groups. Clarendon Press, Oxford (1998) · Zbl 0892.30035 [31] Nicholls, P.: The Ergodic Theory of Discrete Groups. London Math. Soc. Lecture Note Series, vol. 143. Cambridge Univ. Press, Cambridge (1989) · Zbl 0674.58001 [32] Patterson, S.J.: The limit set of a Fuchsian group. Acta Math. 176, 241–273 (1976) · Zbl 0336.30005 · doi:10.1007/BF02392046 [33] Patterson, S.: Some examples of Fuchsian groups. Proc. Lond. Math. Soc. 39(3), 276–298 (1979) · Zbl 0411.30034 · doi:10.1112/plms/s3-39.2.276 [34] Patterson, S.: Further remarks on the exponent of convergence of Poincaré series. Tohoku Math. J. 35(2), 357–373 (1983) · Zbl 0518.20037 · doi:10.2748/tmj/1178228995 [35] Purzitsky, N.: A cutting and pasting of noncompact polygons with applications to Fuchsian groups. Acta Math. 143, 233–250 (1979) · Zbl 0427.30039 · doi:10.1007/BF02392095 [36] Rees, M.: Checking ergodicity of some geodesic flows with infinite Gibbs measure. Ergod. Theory Dyn. Syst. 
1, 107–133 (1981) · Zbl 0469.58012 [37] Sullivan, D.: The density at infinity of a discrete group of hyperbolic motions. Inst. Ht. Études Sci. Publ. Math. 50, 171–202 (1979) · Zbl 0439.30034 · doi:10.1007/BF02684773 [38] Sullivan, D.: Entropy, Hausdorff measures old and new, and limit sets of geometrically finite groups. Acta Math. 153, 259–277 (1984) · Zbl 0566.58022 · doi:10.1007/BF02392379 [39] Sullivan, D.: Related aspects of positivity in Riemannian geometry. J. Differ. Geom. 25, 327–351 (1987) · Zbl 0615.53029 [40] Taylor, E.: Geometric finiteness and the convergence of Kleinian groups. Commun. Anal. Geom. 5, 497–533 (1997) · Zbl 0896.20033 [41] Tukia, P.: The Hausdorff dimension of the limit set of a geometrically finite Kleinian group. Acta Math. 152, 127–140 (1985) · Zbl 0539.30034 · doi:10.1007/BF02392194 [42] Yamamoto, H.: An example of a non-classical Schottky group. Duke Math. J. 63, 193–197 (1991) · Zbl 0731.30036 · doi:10.1215/S0012-7094-91-06308-8 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://warwick.ac.uk/fac/sci/masdoc/current/msc-modules/ma916/pm/analysis/navierbc/discreteel/
# Discretised Euler-Lagrange equations

Let $\mathcal{T}=(T_1,\ldots,T_M)$ be a triangulation of the computational domain $\Omega_h$. We first want to solve the equation for $w$ and $\phi$, and then the equation for $u$. In the case of Navier boundary conditions one can consider piecewise affine elements for $w$, $\phi$ and $u$. Let $V_1,\ldots,V_K$ be the nodes of the triangulation, with $V_1,\ldots,V_N$ the nodes in the interior of $\Omega_h$ and $V_{N+1},\ldots,V_K$ the nodes on the boundary. Let

$$W_h := \left\lbrace u_h \in H^1(\Omega_h) \, : \, \left. u_h \right|_T \in \mathcal{P}^1(T) \quad \forall T \in \mathcal{T} \right\rbrace$$

be the discretised function space. This is the space corresponding to the standard piecewise linear finite element basis functions. Let also, for $g \in C(\partial\Omega)$,

$$W_h^g := \left\lbrace u_h \in W_h \, : \, u_h(V_{N+i}) = g(V_{N+i}) \quad \forall i = 1,\ldots,K-N \right\rbrace.$$

We note that $W_h^0 \subset H^1_0(\Omega_h)$.

#### Mathematical analysis of the discretisation

We write the discretisations of the problems for $w, \phi$ and for $u$ respectively:

• Find $(w_h, \phi_h) \in W_h^f \times W_h^p$ such that $$A((w_h, \phi_h), (v_h, \psi_h)) = 0 \qquad \forall (v_h, \psi_h) \in (W_h^0)^2.$$
• Find $u_h \in W_h^g$ such that $$\int_{\Omega_h} \nabla u_h \cdot \nabla v_h \, dx = \int_{\Omega_h} w_h \, v_h \, dx \qquad \forall v_h \in W_h^0.$$

The existence and uniqueness of solutions $(w_h, \phi_h) \in W_h^f \times W_h^p$, $u_h \in W_h^g$ to the above problems follows by standard arguments (note that $W_h^0 \subset H^1_0(\Omega_h)$).

#### Convergence rate of the approximation

One can prove the following theorem (see Section 5.2.3 of the RSG report).

Theorem: Suppose that $c^2 < 2 \min \{ \kappa b, \sigma a \}$, $\Omega = \Omega_h$, each of the functions $f, g, p : \partial\Omega \to \mathbb{R}$ is constant on each component of
$\partial\Omega$, and that $u, w, \phi \in H^2(\Omega_h)$. Then, if the functions $u$, $w$, $\phi$ solve the system of equations, the following error bounds hold:

$$\| u - u_h \|_{H^1} \leq (1+C)\,\overline{C}\, h \left( | u |_{H^2(\Omega_h)} + \frac{C_0}{\epsilon} \sqrt{ |w|_{H^2(\Omega_h)}^2 + |\phi|_{H^2(\Omega_h)}^2 } \right),$$

$$\sqrt{ \| w - w_h \|^2_{H^1(\Omega_h)} + \| \phi - \phi_h \|_{H^1(\Omega_h)}^2 } \leq \frac{C_0\, \overline{C}}{\epsilon}\, h \sqrt{ |w|_{H^2(\Omega_h)}^2 + |\phi|_{H^2(\Omega_h)}^2 }$$

for all $\epsilon > 0$ such that $\epsilon \leq \frac{1}{2} \min \{ \kappa + b - \sqrt{(\kappa - b)^2 + 2c^2}, \; \sigma + a - \sqrt{(\sigma - a)^2 + 2c^2} \}$.

This theorem states the $O(h)$ convergence rate of the approximations $u_h$, $\phi_h$ to the exact solutions $u$, $\phi$ in the $H^1$ norm. We note that the constants depend on $\epsilon > 0$, which measures how much smaller $c^2$ is than $2 \min \{ \kappa b, \sigma a \}$. We note that the constants are fixed for fixed coefficients $\kappa, \sigma, a, b, c$.
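The second discrete problem above, the Poisson-type equation for $u_h$, has the familiar stiffness-matrix structure. A minimal one-dimensional sketch with piecewise linear elements, homogeneous Dirichlet data, and a constant right-hand side (all choices here are illustrative, not taken from the report):

```python
import numpy as np

# Solve -u'' = w on (0,1), u(0) = u(1) = 0, with P1 elements on a
# uniform mesh: find u_h with sum_j u_j (phi_i', phi_j') = (w, phi_i).
n = 64                      # number of interior nodes
h = 1.0 / (n + 1)
w = 1.0                     # constant source term (illustrative)

# P1 stiffness matrix: tridiag(-1, 2, -1) / h
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
b = w * h * np.ones(n)      # load vector: (w, phi_i) = w*h for interior hats

u_h = np.linalg.solve(A, b)

# Exact solution u(x) = w*x*(1-x)/2; for this 1D problem with exact load
# integration, the P1 solution is nodally exact, so the nodal error is
# at round-off level.
x = np.linspace(h, 1.0 - h, n)
u_exact = w * x * (1.0 - x) / 2.0
err = np.max(np.abs(u_h - u_exact))
```

The 2D problem on a triangulation has exactly the same shape (assemble $A$, assemble the load from $w_h$, solve), only the element integrals differ.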
http://physics.stackexchange.com/questions/63305/energy-conservation-in-nuclear-reactions-and-radiactive-decay
# Energy conservation in nuclear reactions and radioactive decay Reading "Fundamentals of Nuclear Physics" by Atam P. Arya, I understand that in a nuclear reaction, say $x+X \to y+Y$, "when a particle $x$ strikes a target nucleus $X$, the outcome of the nuclear reaction is a recoil nucleus $Y$ and a particle $y$. In many cases more than one type of particle may be given out." Now, Arya applies energy conservation to the nuclear reaction, writing $$E_i=E_f$$ where $$E_i=K_x+m_xc^2+K_X+M_Xc^2$$ and $$E_f=K_Y+M_Yc^2+K_y+m_yc^2$$ where $K_x, K_X, K_Y, K_y$ are the kinetic energies of the particle $x$, the target nucleus (parent), the recoil nucleus (daughter) and the particle $y$, respectively, and $M$ and $m$ the respective masses. Rewriting energy conservation, he defines a quantity $Q$ (the disintegration energy): $$Q:=(K_Y+K_y)-(K_X+K_x)=(M_X+m_x)c^2-(M_Y+m_y)c^2 \qquad (1)$$ Now, my question is whether or not equation (1) (and its interpretation) can be applied to alpha and beta decay (where there are no particles colliding with the nucleus). - Compare the data provided for $^{206}\mathrm{Pb}$ (stable) with that provided for $^{210}\mathrm{Pb}$ (unstable). Look in the section headed "Decay properties". Moreover, note that there is a separate $Q$ provided for each decay mode (but not for each channel). In this instance, however, you need to modify your understanding of what $Q$ means. It represents the energy excess of the parent over the total masses of all the products. In your notation: $$Q_\mathrm{channel} = (M_X - M_Y - \sum_\mathrm{products} m_i)c^2 \quad .$$ Your $Q_\mathrm{channel}$ is my $Q$? If yes, then what is the difference between your equation and (1)? (Obviously there are no $m_x$). So you are saying that Q cannot be considered as the total change of kinetic energies? –  Anuar May 4 '13 at 23:56
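A quick numerical illustration of the decay case (equation (1) with the projectile terms dropped). The uranium-238 alpha-decay masses below are standard tabulated atomic masses quoted from memory for illustration only; consult a nuclide table for precise work.

```python
# Q-value from the mass deficit: Q = (initial masses - final masses) * c^2,
# which for a decay (no incoming particle x) reduces to
# Q = (M_X - M_Y - m_y) * c^2. Masses are in unified atomic mass units (u),
# with 1 u * c^2 = 931.494 MeV.
U_C2_MEV = 931.494

def q_value(parent_mass_u, product_masses_u):
    return (parent_mass_u - sum(product_masses_u)) * U_C2_MEV

# 238U -> 234Th + 4He; the accepted Q is about 4.27 MeV.
# Masses quoted from memory (illustrative, not authoritative).
q_alpha = q_value(238.050788, [234.043601, 4.002602])
```

A positive $Q$ means the decay is energetically allowed, with $Q$ shared among the kinetic energies of the products.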
https://www.emathzone.com/tutorials/group-theory/even-and-odd-permutations.html
# Even and Odd Permutations A permutation is said to be an even permutation if it can be expressed as a product of an even number of transpositions; otherwise it is said to be an odd permutation, i.e. it is a product of an odd number of transpositions. Theorem 1: A permutation cannot be both even and odd, i.e. if a permutation $f$ is expressed as a product of transpositions, then the number of transpositions is either always even or always odd. Proof: Let us consider the polynomial $A$ in distinct symbols ${x_1},{x_2},\ldots,{x_n}$. It is defined as the product of the $\frac{1}{2}n\left( {n - 1} \right)$ factors of the form ${x_i} - {x_j}$ where $i < j$. Thus $A = \prod\limits_{1 \leqslant i < j \leqslant n} {\left( {{x_i} - {x_j}} \right)}$ $\begin{gathered} A = \left( {{x_1} - {x_2}} \right)\left( {{x_1} - {x_3}} \right)\left( {{x_1} - {x_4}} \right) \cdots \left( {{x_1} - {x_n}} \right) \\ \,\,\,\,\,\,\,\,\,\,\left( {{x_2} - {x_3}} \right)\left( {{x_2} - {x_4}} \right)\left( {{x_2} - {x_5}} \right) \cdots \left( {{x_2} - {x_n}} \right) \\ \,\,\,\,\,\,\,\,\,\,\left( {{x_3} - {x_4}} \right) \cdots \left( {{x_3} - {x_n}} \right) \cdots \cdots \cdots \left( {{x_{n - 1}} - {x_n}} \right) \\ \end{gathered}$ Now consider any permutation $P$ on the $n$ symbols $1,2,3, \ldots ,n$. By $AP$ we mean the polynomial obtained by permuting the subscripts $1,2,3, \ldots ,n$ of the ${x_i}$ as prescribed by $P$.
For example, taking $n = 4$, we have $A = \left( {{x_1} - {x_2}} \right)\left( {{x_1} - {x_3}} \right)\left( {{x_1} - {x_4}} \right)\left( {{x_2} - {x_3}} \right)\left( {{x_2} - {x_4}} \right)\left( {{x_3} - {x_4}} \right)$ If $P = \left( {1\,3\,4\,2} \right)$, then $AP = \left( {{x_3} - {x_1}} \right)\left( {{x_3} - {x_4}} \right)\left( {{x_3} - {x_2}} \right)\left( {{x_1} - {x_4}} \right)\left( {{x_1} - {x_2}} \right)\left( {{x_4} - {x_2}} \right)$ In particular, if $P = \left( {1\,2} \right)$, we have $AP = \left( {{x_2} - {x_1}} \right)\left( {{x_2} - {x_3}} \right)\left( {{x_2} - {x_4}} \right)\left( {{x_1} - {x_3}} \right)\left( {{x_1} - {x_4}} \right)\left( {{x_3} - {x_4}} \right) = - A$ This shows that the effect of a transposition on $A$ is to change the sign of $A$. In general, a transposition $\left( {i,j} \right),\,\,i < j$ has the following effects on $A$. (i) Any factor which involves neither the suffix $i$ nor $j$ remains unchanged. (ii) The single factor $\left( {{x_i} - {x_j}} \right)$ changes its sign. (iii) The remaining factors, which involve either the suffix $i$ or $j$ but not both, can be grouped into pairs of products $\pm \left( {{x_m} - {x_i}} \right)\left( {{x_m} - {x_j}} \right)$ where $m \ne i$ or $j$, and such a product remains unaltered when ${x_i}$ and ${x_j}$ are interchanged. Hence the net effect of the transposition $\left( {i,j} \right)$ on $A$ is to change its sign, i.e. $A$ operated upon by the transposition $\left( {i,j} \right)$ gives $- A$. Now if the permutation $P$ is expressed as a product of $s$ transpositions, then operating on $A$ it gives ${\left( { - 1} \right)^s}A$, so that $AP = {\left( { - 1} \right)^s}A$; and if it is expressed as a product of $t$ transpositions, it gives ${\left( { - 1} \right)^t}A$, so that $AP = {\left( { - 1} \right)^t}A$. Hence ${\left( { - 1} \right)^s}A = {\left( { - 1} \right)^t}A$ ${\left( { - 1} \right)^s} = {\left( { - 1} \right)^t}$ Now this equation will hold only if $s$ and $t$ are either both even or both odd.
This completes the proof of the theorem. Theorem 2: Of the $n!$ permutations on $n$ symbols, $\frac{1}{2}n!$ are even permutations and $\frac{1}{2}n!$ are odd permutations. Proof: Let the even permutations be ${e_1},{e_2}, \ldots ,{e_m}$ and the odd permutations be ${o_1},{o_2}, \ldots ,{o_k}$. Then $m + k = n!$ Now let $t$ be any transposition. Since $t$ is evidently an odd permutation, we see that $t{e_1},t{e_2}, \ldots ,t{e_m}$ are odd permutations and that $t{o_1},t{o_2}, \ldots ,t{o_k}$ are even permutations. Since an odd permutation is never an even permutation, we have $t{e_i} \ne t{o_j}$ for any $i = 1,2, \ldots ,m$; $j = 1,2, \ldots ,k$. Furthermore, if $t{e_i} = t{e_j}$, then ${e_i} = {e_j}$ by the cancellation law, so the $t{e_i}$ are all distinct; similarly $t{o_i} \ne t{o_j}$ if $i \ne j$. It follows that all of the $m$ even permutations must appear in the list $t{o_1},t{o_2}, \ldots ,t{o_k}$ (since $e = t(te)$ and $te$ is odd), so that $m \leqslant k$. Similarly, all of the $k$ odd permutations must appear in the list $t{e_1},t{e_2}, \ldots ,t{e_m}$, which are all distinct as shown above, so that $k \leqslant m$. Hence $m = k = \frac{1}{2}n!$ NOTE: (1) A cycle containing an odd number of symbols is an even permutation, whereas a cycle containing an even number of symbols is an odd permutation, since a cycle on $n$ symbols can be expressed as a product of $\left( {n - 1} \right)$ transpositions. (2) The inverse of an even permutation is an even permutation and the inverse of an odd permutation is an odd permutation. (3) The product of two permutations is an even permutation if either both the permutations are even or both are odd, and the product is an odd permutation if one permutation is odd and the other even.
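Both theorems are easy to check computationally. The sketch below (an illustration, not from the text) computes the sign of a permutation by performing explicit transpositions and confirms the half-and-half count of Theorem 2 for $n = 4$.

```python
from itertools import permutations

def parity(perm):
    """Sign of a permutation of 0..n-1, computed by explicitly performing
    transpositions: swap each out-of-place element into position and count
    the swaps. By Theorem 1 the parity of the count is well defined."""
    p, swaps = list(perm), 0
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]  # one transposition
            swaps += 1
    return 1 if swaps % 2 == 0 else -1

# Theorem 2 for n = 4: of the 4! = 24 permutations, half are even, half odd.
signs = [parity(p) for p in permutations(range(4))]
evens, odds = signs.count(1), signs.count(-1)
```

A single transposition such as `(1, 0, 2, 3)` comes out odd, as expected.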
https://getrevising.co.uk/resources/waves_p6_ocr_21st_century_science
# Waves - P6 - OCR 21st Century Science Notes for P6 • Created by: Maura1 • Created on: 13-02-12 20:34 1. Waves are vibrations that transfer energy from place to place without matter (solid, liquid or gas) being transferred. 2. In transverse waves, the vibrations are at right angles to the direction of travel. Light and other types of electromagnetic radiation are transverse waves. Water waves and S waves (a type of seismic wave) are also transverse waves. 3. Sound waves and waves in a stretched spring are longitudinal waves. P waves (a type of seismic wave) are also longitudinal waves. In longitudinal waves, the vibrations are along the same direction as the direction of travel. 4. The frequency of a wave is the number of waves produced by a source each second. It is also the number of waves that pass a certain point each second. The unit of frequency is the hertz (Hz). 5. The wavelength of a wave is the distance between a point on one wave and the same point on the next wave. It is often easiest to measure this from the crest of one wave to the crest of the next wave, but it doesn't matter where as long as it is the same point in each wave. 6. As waves travel, they set up patterns of disturbance. The amplitude of a wave is its maximum disturbance from its undisturbed position. Take care: the amplitude is not the distance between the top and bottom of a wave. It is the distance from the middle to the top. 7. The speed of a wave is generally independent of its frequency or amplitude. 8. Sound waves and light waves reflect from surfaces. Remember that they behave just like water waves in a ripple tank. The angle of incidence equals the angle of reflection. Smooth surfaces produce strong echoes when sound waves hit them, and they can act as mirrors when light waves hit them. The waves are reflected uniformly and light can form images. Rough surfaces scatter sound and light in all directions.
However, each tiny bit of the surface still follows the rule that the angle of incidence equals the angle of reflection. 9. The waves can: Be focused to a point, for example sunlight reflected off a concave telescope mirror. Appear to come from a point behind the mirror, for example a looking glass. 10. Sound waves and light waves change speed when they pass across the boundary between two substances with different densities. This causes them to change direction and this effect is called refraction. 11. Beyond a certain angle, called the critical angle, all the waves reflect back into the glass. We say that they are totally internally reflected. All light waves which hit the surface beyond this critical angle are effectively trapped. The critical angle for most glass is about 42°. 12. An optical fibre is a thin rod of high-quality glass. Very little light is absorbed by the glass.
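The "about 42°" figure in point 11 follows from Snell's law: the critical angle satisfies sin(c) = n2/n1. A quick check, assuming a typical refractive index of 1.5 for glass (an assumed value, not from the notes):

```python
import math

# Critical angle for total internal reflection at a glass-air boundary:
# sin(c) = n_air / n_glass. With n_glass = 1.5 this reproduces the
# "about 42 degrees" figure quoted in the notes.
n_glass, n_air = 1.5, 1.0
critical_angle_deg = math.degrees(math.asin(n_air / n_glass))
```

Denser glasses (larger n) give a smaller critical angle, trapping light over a wider range of angles — which is why optical fibres work so well.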
https://www.inchmeal.io/msda/ch-7/ex-26.html
Mathematical Statistics and Data Analysis - Solutions Chapter 7, Survey Sampling (a) We need to show that $\, \bar X = n^{-1} \sum_{i=1}^{N} U_i x_i \,$. Since we have exactly $\, n \,$ terms in the sum $\, \sum_{i=1}^{N} U_i x_i \,$ where $\, U_i x_i \ne 0 \,$, we can denote those terms by $\, X_j = U_i x_i \,$, where $\, j \,$ varies from $\, 1 \,$ to $\, n \,$. Thus we have $\, \sum_{i=1}^{N} U_i x_i = \sum_{j=1}^{n} X_j \,$. But $\, n^{-1} \sum_{j=1}^{n} X_j = \bar X \,$, so it follows that $\, \bar X = n^{-1} \sum_{i=1}^{N} U_i x_i \,$. (b) $\, P(U_i = 1) \,$ is the probability that the $\, i^{th} \,$ element of the population is present in a random sample of size $n$. This is equivalent to the probability of selecting an element in a random sample of size $\, n \,$. Thus $\, P(U_i = 1) = \frac n N \,$. Since $\, U_i \,$ is either $\, 0 \,$ or $\, 1 \,$, by the fundamental bridge it follows that $\, \Exp(U_i) = P(U_i=1) = \frac n N \,$. (c) $\, \Var(U_i) = P(U_i=1)(1-P(U_i=1)) = \frac{n}{N} \left(1-\frac{n}{N}\right) = \frac{n}{N} \cdot \frac{N-n}{N} \,$.
(d) \, \begin{align*} \Exp(U_i U_j) &= \sum_{m=0}^{1} \sum_{n=0}^{1} m\, n\, P(U_i = m, U_j = n) \\ &= 0 + 0 + 0 + 1 \cdot 1 \cdot P(U_i=1, U_j=1) \\ &= \frac{n}{N} \cdot \frac{n-1}{N-1} \end{align*} \, (e) \, \begin{align*} \Cov(U_i, U_j) &= \Exp(U_i U_j) - \Exp(U_i) \Exp(U_j) \\ &= \frac{n}{N} \cdot \frac{n-1}{N-1} - \Prn{\frac{n}{N}}^2 \\ &= -\frac{n}{N} \cdot \frac{N-n}{N(N-1)} \end{align*} \, (f) \, \begin{align*} \Var(\bar X) &= \Var \Prn{\frac{1}{n} \sum_{i=1}^{N} U_i x_i} \\ &= \frac{1}{n^2} \Var\Prn{\sum_{i=1}^{N} U_i x_i} \\ &= \frac{1}{n^2} \Prn{ \sum_{i=1}^{N} x_i^2 \Var(U_i) \; + \; \sum_{i=1}^{N} \sum_{j=1,\;j \ne i}^{N} x_i x_j \Cov(U_i, U_j) } \\ &= \frac{1}{n^2} \Prn{ \sum_{i=1}^{N} x_i^2 \Prn{\frac{n}{N} \cdot \frac{N-n}{N}} \; + \; \sum_{i=1}^{N} \sum_{j=1,\;j \ne i}^{N} x_i x_j \Prn{-\frac{n}{N} \cdot \frac{N-n}{N(N-1)} } } \\ &= \frac{1}{n^2} \cdot \frac{n}{N} \cdot \frac{N-n}{N} \Prn{ \sum_{i=1}^{N} x_i^2 \; - \; \frac{1}{N-1} \sum_{i=1}^{N} \sum_{j=1,\;j \ne i}^{N} x_i x_j } \\ &= \frac{1}{n^2} \cdot \frac{n}{N} \cdot \frac{N-n}{N} \cdot \frac{1}{N-1} \Prn{ (N-1)\sum_{i=1}^{N} x_i^2 \; - \; \sum_{i=1}^{N} \sum_{j=1,\;j \ne i}^{N} x_i x_j } \\ &= \frac{1}{n^2} \cdot \frac{n}{N} \cdot \frac{N-n}{N} \cdot \frac{1}{N-1} \Prn{ (N-1)\sum_{i=1}^{N} x_i^2 \; - \; \Prn{\sum_{i=1}^{N} \sum_{j=1}^{N} x_i x_j \; - \; \sum_{i=1}^{N} x_i^2 } } \\ &= \frac{1}{n^2} \cdot \frac{n}{N} \cdot \frac{N-n}{N} \cdot \frac{1}{N-1} \Prn{ N\sum_{i=1}^{N} x_i^2 \; - \; \sum_{i=1}^{N} \sum_{j=1}^{N} x_i x_j } \\ &= \frac{1}{n^2} \cdot \frac{n}{N} \cdot \frac{N-n}{N} \cdot \frac{1}{N-1} \cdot N^2 \sigma^2 && \text{using the variance formula} \\ &= \frac{1}{n} \cdot \frac{N-n}{N-1} \, \sigma^2 \end{align*} \, $$\tag*{\blacksquare}$$
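The final formula can be verified exactly on a toy population by enumerating every size-$n$ sample (the population values below are illustrative, not from the text):

```python
from itertools import combinations
from statistics import mean

# Exact check of Var(Xbar) = (sigma^2 / n) * (N - n)/(N - 1) by enumerating
# all C(N, n) equally likely samples without replacement.
x = [2.0, 5.0, 7.0, 11.0, 13.0]               # toy population, N = 5
N, n = len(x), 2
mu = mean(x)
sigma2 = sum((xi - mu) ** 2 for xi in x) / N  # population variance

sample_means = [mean(s) for s in combinations(x, n)]
var_xbar = sum((m - mu) ** 2 for m in sample_means) / len(sample_means)

formula = (sigma2 / n) * (N - n) / (N - 1)
```

Both quantities agree (here each equals 5.94), and the factor $(N-n)/(N-1)$ is the finite population correction that vanishes when $n = N$.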
https://mathoverflow.net/questions/308579/quantity-of-partition-sets-intersecting-a-compact-set
# Quantity of partition sets intersecting a compact set Let $X$ be a compact metric space. Let $\{X_\alpha:\alpha\lt \mathfrak c\}$ be a partition of $X$ into $\mathfrak c=|\mathbb R|$ dense first category $F_\sigma$-subsets of $X$. Let $A$ be a non-empty closed subset of $X$ such that $A\cap X_\alpha$ is first category in $A$ for each $\alpha<\mathfrak c$. It is clear that uncountably many of the $X_\alpha$ must intersect $A$. But what can be said about $$\bigcup \{X_\alpha:X_\alpha\cap A\neq\varnothing\}?$$ Is this set equal to $X$? Does it at least contain a dense $G_\delta$-subset of $X$? Counterexample. Let $\{\alpha:\alpha\lt\mathfrak c\}=I\cup J$ where $I\cap J=\emptyset,\ |I|=|J|=\mathfrak c.$ Let $A=\{t_\alpha:\alpha\in J\}$ be the Cantor ternary set; $t_\alpha\ne t_\beta$ for $\alpha\ne\beta$. Let $S$ be a dense $G_\delta$-subset of $[0,1]$ which has Lebesgue measure zero and is disjoint from $A;$ then $T=[0,1]\setminus S$ is a first category subset of $[0,1].$ For every interval $(a,b)\subseteq[0,1]$ we have $|S\cap(a,b)|=|(T\setminus A)\cap(a,b)|=\mathfrak c.$ Let $\{S_\alpha:\alpha\in I\}$ be a partition of $S$ into countable dense sets. Let $\{T_\alpha:\alpha\in J\}$ be a partition of $T$ into countable dense sets such that $T_\alpha\cap A=\{t_\alpha\}.$ For $\alpha\lt\mathfrak c$ define $$X_\alpha=\begin{cases} S_\alpha\ \text{ if }\ \alpha\in I,\\ T_\alpha\ \text{ if }\ \alpha\in J. \end{cases}$$ $X=[0,1]$ is a compact metric space, and $\{X_\alpha:\alpha\lt\mathfrak c\}$ is a partition of $X$ into $\mathfrak c$ countable dense subsets. $A$ is a nonempty closed subset of $X$ such that $X_\alpha\cap A=\emptyset$ for $\alpha\in I$ and $X_\alpha\cap A=\{t_\alpha\}$ for $\alpha\in J,$ so that $X_\alpha\cap A$ is first category in $A$ for each $\alpha\lt\mathfrak c.$ Finally, $\bigcup\{X_\alpha:X_\alpha\cap A\ne\emptyset\}=\bigcup\{X_\alpha:\alpha\in J\}=T$ is first category in $X.$
https://www.jpsoft.com/forums/threads/use-except-on-files-without-subdirectories.3771/#post-21284
# How to?Use EXCEPT on files without subdirectories? #### Avi Shmidman Feb 23, 2012 240 3 In my first attempts to use EXCEPT, I noticed that it hides not only files from execution, but also subdirectories. (I didn't expect this behavior, because the documentation refers to the parameters as containing "the file or files to exclude from the command", without mentioning subdirectories, and thus I thought at first that an "except (*.*)" command would exclude only the files in the directory, but would include the subdirectories). I wonder then: Is there a way to instruct EXCEPT to exclude all files but to leave the subdirectories alone? #### rconn Staff member May 14, 2008 12,435 157 You could use a size range, but it's unlikely that you actually want to (or should) use EXCEPT, which is an obsolete holdover from the 4DOS days (and only preserved for the sake of compatibility with old batch files). Let us know what you're trying to do, and we can suggest a more suitable alternative. #### Avi Shmidman Feb 23, 2012 240 3 Hi Rex, As I just posted in an alternate thread, I'm trying to "flatten" the file structure of a folder [that is: I have a base directory containing some files, plus a few subdirectories, each with a few files, and I'd like to bring all of the files from the subdirectories into the base directory. "move /sx *.* ." doesn't work because it fails on the files in the base directory, and it cannot be combined with /S+1. I now tried: "except (*.*) move /sx *.* .", but that didn't work either, because EXCEPT hid the directories, too]. However, taking my cue from EXCEPT, with its method of hiding files to exclude them, I've now settled on the following batch sequence: attrib *.* +h move /sx *.* . attrib *.* -h You've stated, though, that it is "unlikely that you actually want to (or should) use EXCEPT", and presumably your statement would apply to the batch file I've proposed as well. So, what would you propose instead? 
And, what is the downside of using the hidden attribute in this manner (other than the minor negative side effect of unhiding previously hidden files in the base directory?) In general, although the docs recommend using "File exclusion ranges" rather than "EXCEPT", it seems to me that EXCEPT can provide a good deal of flexibility that File Exclusion Ranges do not. Specifically, EXCEPT allows specification of a list of specific directories to exclude, while File Exclusion Ranges do not. So, for cases in which specific directories should be excluded, do you still recommend using EXCEPT? #### rconn Staff member May 14, 2008 12,435 157 In general, although the docs recommend using "File exclusion ranges" rather than "EXCEPT", it seems to me that EXCEPT can provide a good deal of flexibility that File Exclusion Ranges do not. Specifically, EXCEPT allows specification of a list of specific directories to exclude, while File Exclusion Ranges do not. So, for cases in which specific directories should be excluded, do you still recommend using EXCEPT? Nope, because file exclusion ranges do allow specifying specific directories to exclude. #### Avi Shmidman Feb 23, 2012 240 3 Ah, OK, I see now that I can exclude directories by appending a "\". However, it seems that I cannot exclude the current directory. Thus, if I write: [c:\temp] dir /[!c:\temp\] *.* /s It still shows me all of the files in c:\temp. Nope, because file exclusion ranges do allow specifying specific directories to exclude. #### Avi Shmidman Feb 23, 2012 240 3 In fact, as far as I can tell, the Exclusion Ranges can't take absolute paths at all; for instance, I have a directory r:\temp\ which has a number of directories such as folder1, folder2, folder3, etc. If I write: [r:\temp] dir /[!folder3\] *.* /s /b Then folder3 and its files are excluded. However, if I write: [r:\temp] dir /[!r:\temp\folder3\] *.* /s /b Then folder3 and its files are displayed, and the exclusion list seems to have no effect. 
If this is the case, EXCEPT still has an important advantage, because it allows specification of full paths. #### rconn Staff member May 14, 2008 12,435 157 No, you can't use paths in exclusion ranges. But you don't need to; just enter the directory names. (There's no advantage in using full paths in EXCEPT, since it cannot handle subdirectories anyway.) #### rconn Staff member May 14, 2008 12,435 157 Ah, OK, I see now that I can exclude directories by appending a "\". However, it seems that I cannot exclude the current directory. Thus, if I write: [c:\temp] dir /[!c:\temp\] *.* /s It still shows me all of the files in c:\temp. Use /S+1. #### Avi Shmidman Feb 23, 2012 240 3 Hi Rex, 1] OK, I didn't realize that EXCEPT can't handle subdirectories further down the line (the documentation seemed to indicate that this would work, since it states: "EXCEPT will assume that the files to be excluded are in the current directory, unless another directory is specified explicitly"). 2] Nevertheless, I think that the ability to specify a full path in Exclusion Ranges is significant, since in many cases I have more than one directory by the same name in a large tree of directories, and in order to exclude only one of them from a large scale "/S" operation I need to be able to specify the full path. 3] Finally, back to the main issue. I am aware that I can use "/S+1" to perform a "dir" without the current directory; however, the problem is that I can't use it together with a "move /SX" command. I used the "dir" commands above just to illustrate the problem that Exclusion Ranges cannot be used to exclude the files in the current directory. But the goal I was aiming for was to find a way to use Exclusion Ranges to execute a "move /SX" while ignoring the files in the current directory.
#### rconn Staff member May 14, 2008 12,435 157 Hi Rex, 1] OK, I didn't realize that EXCEPT can't handle subdirectories further down the line (the documentation seemed to indicate that this would work, since it states: "EXCEPT will assume that the files to be excluded are in the current directory, unless another directory is specified explicitly"). That means that you can specify a full pathname in the exclude list to hide a file in a subdirectory; it does *not* mean that EXCEPT will step into and run a command in that directory. But it would be a lot easier and faster to do it with ATTRIB. #### mathewsdw May 24, 2010 855 0 Northlake, Il Avi, problem completely solved and easily. Please see my posting in the other thread. - Dan
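For readers outside TCC, the "flatten" operation the thread is after can also be sketched portably. This Python sketch (not a TCC feature, just an illustration) assumes no filename collisions between subdirectories, which a real tool would need to detect and resolve:

```python
import shutil
import tempfile
from pathlib import Path

def flatten(base: Path) -> None:
    """Move every file found in subdirectories of `base` up into `base`
    itself, then remove the emptied subdirectories. Assumes no filename
    collisions; a real tool should detect and resolve them."""
    for p in sorted(base.rglob("*")):
        if p.is_file() and p.parent != base:
            shutil.move(str(p), str(base / p.name))
    for d in sorted((d for d in base.rglob("*") if d.is_dir()), reverse=True):
        d.rmdir()  # deepest directories first, so each is empty when removed

# Demo on a throwaway tree: one base file plus files two levels down.
root = Path(tempfile.mkdtemp())
(root / "sub" / "deep").mkdir(parents=True)
(root / "top.txt").write_text("t")
(root / "sub" / "a.txt").write_text("a")
(root / "sub" / "deep" / "b.txt").write_text("b")
flatten(root)
flattened = sorted(p.name for p in root.iterdir())
```

Unlike the ATTRIB trick, this leaves files already in the base directory untouched without relying on the hidden attribute.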
https://www.physicsforums.com/threads/rational-function-asymptotes.125929/
# Rational Function Asymptotes 1. Jul 13, 2006 ### scott_alexsk Hello, Recently I have been trying to reason why certain rational functions such as (-2(2x^2+x-31))/((x-3)*(x+4)*(x+1)) have varying near-horizontal asymptote slopes. I know that the direction of the horizontal asymptote can be varied by altering the parent function x+1/x, but several equations like the one above do not duplicate the single line or single slope that variations of that parent equation produce. So I guess my question is, in rational functions what causes variation in the slope of horizontal asymptotes? I have been able to glean that when the number of roots on top of the rational function is greater than or equal to the number on the bottom, the slope of the asymptotes is something besides 0. Also it seems that the slopes of the horizontal asymptotes vary from each other when there are 1 or more complex roots in the dividend of the equation. Thanks, -scott 2. Jul 13, 2006 ### HallsofIvy Staff Emeritus ?? Horizontal asymptotes always have slope 0! I assume you really mean non-horizontal, slanting, asymptotes. The original function you post, (-2(2x^2+x-31))/((x-3)*(x+4)*(x+1)), since it has degree in the denominator higher than the degree in the numerator, has y= 0 as asymptote, which is horizontal. But I don't know what you mean by "parent function". Certainly, x+ 1/x has y= x as asymptote because, as x goes to either plus or minus infinity, the 1/x part goes to 0. In what sense is that a "variation" of (-2(2x^2+x-31))/((x-3)*(x+4)*(x+1))? If the degree of the numerator is less than the degree of the denominator, then y= 0 is a horizontal asymptote. If the degree of the numerator equals the degree of the denominator, then y= a is a horizontal asymptote, where a is the ratio of the leading coefficients. If the degree of the numerator is one more than the degree of the denominator, then there is a slant asymptote with slope equal to the ratio of the leading coefficients.
If the degree of the numerator is two or more larger than the degree of the denominator, the asymptote is a curve (although many people would not use the word "asymptote" in that case, reserving it for lines). 3. Jul 13, 2006 ### scott_alexsk Well in the case of that equation the near-horizontal asymptotes have a slope that looks like 1/infinity and 0.5/infinity. Most importantly the lines seem to follow two different asymptotes. How would you find the slope of these two lines from the equation? Thanks, -scott 4. Jul 13, 2006 ### arildno (any number)/(members of a number sequence tending towards infinity) tends to zero. 5. Jul 13, 2006 ### HallsofIvy Staff Emeritus It's not clear to me what line you are talking about. The first example you gave, for large x, is close to -4/x which goes to 0. The second you wrote as x+ 1/x which has y= x (with slope 1). Did you mean (x+ 1)/x? That is equivalent to 1+ 1/x which has y= 1 as a horizontal asymptote. In any case, "1/infinity" and "0.5/infinity" are both 0. There are no two different lines. These are not "near-horizontal", the asymptotes are horizontal. 6. Jul 14, 2006 ### scott_alexsk Sorry about my incorrect description, I did not have that much time this morning. Anyways attached are two diagrams. The first is the graph of the equation I have been mentioning and the second is an example of a less 'messy' rational function, ((3x+5)*(x-1))/((x+1)*(x-2)*(x+1)). What bugged me about the first equation, (-2(2x^2+x-31))/((x-3)*(x+4)*(x+1)), is that, as opposed to the second graph, the lines converging on the asymptote y=0 bend inward instead of outward to that asymptote. By the way I meant x+(1/x). Thank you guys very much for your time. I truly appreciate it. -scott #### Attached Files: • math2.jpg 7. Jul 14, 2006 ### Robokapp $${-2(2x^2+x-31)}/{(x-3)(x+4)(x+1)}$$ It's bottom heavy...unless you missed an x somewhere. 8.
Jul 14, 2006

### HallsofIvy

Staff Emeritus

Yes, that's his point. It and the other function he mentions both have horizontal asymptote y = 0. Although in his first post scott_alexsk said "I know that direction of the horizontal asymptote can be varied", I think he is really talking about how the curve approaches the horizontal asymptote. If that is the case, then it depends on the leading coefficients. The leading coefficient of the numerator of ((3x+5)*(x-1))/((x+1)*(x-2)*(x+1)) clearly will be 3 while the leading coefficient in the denominator will be 1. For very large x, the function value will be close to 3/x and will approach the asymptote y = 0 like 3/x does. The leading coefficient of the numerator of (-2(2x^2+x-31))/((x-3)*(x+4)*(x+1)) is -4 while the leading coefficient of the denominator is 1. For very large x, the function value will be close to -4/x. I'm not sure what "bend inward instead outward" means but I suspect it is due to that negative coefficient.

Last edited: Jul 14, 2006

9. Jul 14, 2006

### scott_alexsk

Well just to clarify, I am referring to the interesting feature in the first graph. It looks as if the line to the far left has been translated up 1 unit, but it still bends towards y = 0. The same is with the line on the far right in the first graph, instead translated down about 0.5 units. Personally I do not think that the negative 2 does this. After playing around with the equation 1/(x-1)^2, it seems to me that this feature stems from the incomplete square in the dividend of the equation. When I factor out the equation I get 1/(x^2-2x+1). By simply removing the 1 I get a dramatically changed slope from the originally horizontal asymptote. But this is just a thought. -scott

10. Jul 15, 2006

### uart

Scott, I was also initially unsure about exactly what you were asking, but I think the "thing that bugs you" about the first equation is nothing more than the fact that a zero occurs between the last pole and the asymptote.
Just solve for the zeros in the numerator and you'll see that's all it is: after the last pole the curve has to come back down through zero before finally going back to zero along the asymptote.

11. Jul 15, 2006

### scott_alexsk

No, it is not that, but the fact that the line moves to the other side then bends back towards y = 0, unlike most rational function lines, which stay on one side and bend (outwardly) towards the asymptote. I want to see the reason why it does this, unlike most other rational functions. -scott

12. Jul 15, 2006

### uart

Moves to the "other side". What the hell is that if it's not a zero crossing? Like I said before, this happens because there is a zero in the numerator between the last pole (zero of the denominator) and the asymptote.

13. Jul 16, 2006

### scott_alexsk

OK, that was part of it, but what I am most concerned about is what causes the line to apparently be translated up but to bend back to the y = 0 asymptote, unlike most other rational functions I have seen (like the one in the thumbnail to the right in one of my prior posts). -scott

14. Jul 16, 2006

### Office_Shredder

Staff Emeritus

I think he's talking about how the graph in the first thumbnail crosses the asymptote line.

15. Jul 17, 2006

### scott_alexsk

HOI, you were right, it is the negative 2 that does it, in combination with the -31 and the 2x^2. In the initial positive x-values, y remains positive because of the -31*-2, but eventually the 2x^2+x catches up and bends it in the other direction. Sorry for the delay in understanding. Thanks to everyone who posted in this thread. -scott
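A short numerical check (a sketch in Python, not part of the original thread) confirms both points made above: for large x the function behaves like -4/x, the ratio of leading coefficients, and it crosses y = 0 once more past the last pole, at the positive root of 2x^2 + x - 31:

```python
import math

# The function discussed in the thread.
def f(x):
    return -2 * (2 * x**2 + x - 31) / ((x - 3) * (x + 4) * (x + 1))

# For large x, f(x) is close to -4/x (leading terms -4x^2 / x^3),
# so the curve approaches the horizontal asymptote y = 0 from below.
far = f(1000)

# The numerator vanishes at x = (-1 + sqrt(249)) / 4, roughly 3.69, which
# lies past the last pole x = 3: this is the zero crossing described above.
root = (-1 + math.sqrt(249)) / 4
before, after = f(3.5), f(4.0)  # opposite signs on either side of the root
```

Evaluating `before` and `after` shows the sign change, and `far` is within a few millionths of -4/1000, matching the leading-coefficient argument.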
https://texblog.org/2007/08/07/introduction-to-tables-in-latex/?shared=email&msg=fail
1. […] and MacTex… A table usually has the following structure (for an introduction to tables click here) with the “small”-environment added to slightly decrease its size: begin{table}[htdp] […]

2. […] font size. E.g. begin{footnotesize} begin{floatingtable} … end{footnotesize} Check this post for an introduction to […]

3. Lavinia: Hi, I am trying to change the “title” of a table in LaTeX. You know that first part of the title: “Table 1: —–”. I want this “Table” to disappear or to replace it with something else. Is there any way that I can do that? Thanks, Lavinia

• Hi Lavinia, Try the caption package:

\usepackage{caption}
\captionsetup{tablename=Tab.}

for “Tab.” instead of “Table” and similarly for figures:

\captionsetup{figurename=Fig.}

Source: Caption documentation. Cheers, Tom

4. TeXin: I am using LaTeX to make some reports. One problem I am having is with tables: the headings of the tables are quite long. Right now I use the tabularx package; when I compile I get "underfull \hbox (badness 10000)". I know it will still work, but I want LaTeX to compile with no warnings. A minimal example is given below.

\documentclass[a4paper,10pt]{article}
\usepackage{tabularx}
\usepackage{float} % for the [H] placement specifier
\begin{document}
\begin{table}[H]
\centering
\begin{tabularx}{\linewidth}{|X|X|X|X|}
\hline
Long Name & Long Name & Long Name & Very Long Namesssdsdsdsdsdsdsdssdsdsdsdsdsdsdsssdsds \\
\hline
1 & 2.3 & 30.3 & 36\\
\hline
2 & 8.6 & 10.4 & 17\\
\hline
3.5 & 18.2 & 5.2 & 12\\
\hline
5 & 28.2 & 3.4 & 6\\
\hline
\end{tabularx}
\end{table}
\end{document}

• Hi! Thanks for your question and the minimal working example. As you already pointed out, the problem lies with the very long names of the column headings. I suggest you try the microtype package. Loading that package possibly resolves most of the warnings. Otherwise, you can control how LaTeX splits long names using: \hyphenation{} Hope it helps, Tom.

5.
TeXin: Hi Tom, Thanks, it fixed it up. However, I have decided to make the shift to booktabs tables; they look much more professional. However, another problem with the long names in booktabs is "overfull \hbox (badness 10000)"; sometimes I get this when placing figures side by side. Any assistance?

• Hey! These warnings are relatively easy to fix. Use the documentclass option draft to highlight any “overfull \hbox”. I suspect you are using 0.5\linewidth or 0.5\textwidth for two figures side-by-side. Just slightly reduce the size to e.g. 0.47\textwidth for both figures, since LaTeX adds some space in between.

\usepackage{graphicx}
...
\includegraphics[width=0.47\textwidth](unknown)

Please provide a minimal working example again in case this does not solve your problem. Thanks, Tom.

6. mero: Hello Tom, I'm using LyX and I want to make the word "Table" bold and put the caption below it. I also want both the word "Table" and the caption to be left-justified with the left border of the table, like this:

Table 1
This is table 1 caption

What would be the code to modify, or the steps in LyX?

• Hi mero, I'm sure you can do it in LyX. But since I never use LyX, here is how I would go about it. Insert TeX code before and after your table:

% before
\begin{table}[ht]
\begin{bfseries}
% after
\end{bfseries}
\caption{Table caption text.}
\end{table}

Once that's done, you'll also need to align the caption, which turns out a little more challenging, since you'll need to load the caption package. Navigate to "Document –> Settings" and choose "LaTeX preamble". Then paste the following line in the textbox:

\usepackage[singlelinecheck=off]{caption}

That did the trick for me. Hope it works for you! Cheers, Tom.

7.
Markus: Hi, I have a problem.

\begin{table}
\begin{tabular}{l*3S[table-format=4.4]}
\toprule
\input{result1.tex}
\bottomrule
\end{tabular}
\end{table}

result1.tex:

&\multicolumn{1}{c}{(1)}&\multicolumn{1}{c}{(2)}&\multicolumn{1}{c}{(3)}\\
&\multicolumn{1}{c}{Success}&\multicolumn{1}{c}{Total}&\multicolumn{1}{c}{Round1}\\
\midrule
main&&&\\
Round1&0.405\sym{***}&&\\
&(0.121)&&\\
Gender&0.167&2.852\sym{**}&0.336\sym{***}\\
&(0.298)&(1.177)&(0.128)\\
Biology&-0.695&0.373&0.047\\
&(0.461)&(0.422)&(0.086)\\
Mathematics&0.355&-3.533&-0.829\sym{***}\\
&(0.595)&(2.545)&(0.221)\\
Economics&0.195&-1.535&-0.140\\
&(0.358)&(1.308)&(0.148)\\
Psychology&0.471&-0.011&0.017\\
&(0.608)&(1.833)&(0.248)\\
Law&0.628&0.638&-0.257\\
&(0.568)&(2.058)&(0.211)\\
Treatment&0.722\sym{*}&&0.105\\
&(0.411)&&(0.073)\\
Constant&-1.935\sym{***}&14.719\sym{***}&1.527\sym{***}\\
&(0.702)&(1.343)&(0.138)\\
\midrule
Observations&234&234&234\\

I always get the error: "Misplaced \noalign. \bottomrule ->\noalign". I can solve the problem by changing \input{result1.tex} to \input{result1.tex} \\, but then I have an empty line before the \bottomrule, and that's not what I want. Anybody have an idea?

• Hi Markus, I recommend placing the entire table, not only the table content, into the file and trying again. It works for me. Also, next time, please provide a minimal working example. I first had to google before I was able to run your code. It only works when using quite a substantial amount of additional code. Cheers, Tom.

• Markus Karde: Thanks for your help, Tom. It was the first time that I posted a problem in a forum like this one. That's why I had no idea how I should really do it. Next time I will do it as you recommend. Thanks again.
The problem is that the file result1.tex is produced by Stata. It's important for me that it stays like this. But that's exactly the point. Isn't it strange that it works when one puts everything in the file? I mean, what else does the \input command do other than putting the code in its place? But that's not working.

• Hi Markus, Thanks for following up on your question. Not sure why you get the error. Here is what works for me with your Stata output (result1.tex):

\documentclass[11pt]{article}
\usepackage{booktabs}
\usepackage{caption}
\newcommand{\sym}[1]{\rlap{#1}}%
\usepackage{siunitx} % centering in tables
\sisetup{
  detect-mode,
  tight-spacing = true,
  group-digits = false,
  input-signs = ,
  input-symbols = ( ) [ ] - + *,
  input-open-uncertainty = ,
  input-close-uncertainty = ,
  table-align-text-post = false
}
\let\estinput=\input%
\newcommand{\estauto}[3]{
  \vspace{.75ex}{
    \begin{tabular}{l*{#2}{#3}}
    \toprule
    \estinput{#1}
    \bottomrule
    \end{tabular}
  }
}
\begin{document}
\begin{table}
\estauto{result1.tex}{4}{l}
%\begin{tabular}{l*3S[table-format=4.4]}
%\toprule
%\input{result1.tex}
%\bottomrule
%\end{tabular}
\end{table}
\end{document}

I copied most of the code from this blog. Your code (commented with %) also works. Give it a try! Hope it helps. Tom.

8. Hello Tom, I have a query. I want to make a table which is very long in terms of columns, so I divided the whole thing into two parts. Now these are titled Table 1 and Table 2. But how can I make them Table 1.(A) and Table 1.(B)? Is there any way?

• Hi Arindam, The longtable package does what you are looking for. You can find an example here. Best, Tom.

• It's so kind of you, I think I have got my answer……:) So quickly….

• Great! And good luck with the table… Tom.

9. himanshu sekhar panda: Hi Tom, In my pdf, why is it showing "Tab." in place of "Table"? himanshu

• tom: Hi Himanshu, This might be due to the document class or package you're using.
Try adding this line to your preamble:

\renewcommand{\tablename}{Table}

• himanshu sekhar panda: Hi Tom, With your suggestion also, it is not showing "Table". Please do me a favour. I am about to submit my Ph.D. thesis. PREAMBLE: \documentclass[a4paper,12pt]{thesis} ** code removed by Tom **

• tom: Hi Himanshu, Thanks for sending your preamble. It's the thesis document class that defines \tableshortname to be "Tab." and \figureshortname to be "Fig." (on line 1092/3).

\def\figureshortname{Fig.}
\def\tableshortname{Tab.}

So, simply redefine these in your preamble, e.g.:

\documentclass{thesis}
\renewcommand{\tableshortname}{Table}
\begin{document}
\listoftables
\chapter{First chapter}
\begin{table}[ht]\caption{default}\centering
\begin{tabular}{|c|c|}\hline a&b\\\hline c&d\\\hline\end{tabular}
\end{table}%
\end{document}

• himanshu sekhar panda: Hi Tom, thanks a lot for your sweet solution. It worked. Actually, I am a learner of LaTeX. I will need your help at times, if permitted…. Regards, himanshu

• tom: Sure, no problem! Feel free to drop me a comment, should you have other questions. However, please first try to find a relevant article using the search field on the right. Also, I can help more efficiently when you provide a minimal example :). Tom

10. tamonekolevi: Hi, does anyone know how to write the words 'figure' and 'table' so that they look like in amsart? I think that all letters are capital letters, but only the first is large and the others are small.

• tom: They are using small capitals, provided by the \sc command. Here's a minimal working example where the list headings look similar to what amsart produces.

\documentclass{article}
\usepackage{tocloft}
\renewcommand{\cfttoctitlefont}{\hfil\sc}
\renewcommand{\cftloftitlefont}{\hfil\sc}
\renewcommand{\cftlottitlefont}{\hfil\sc}
\begin{document}
\tableofcontents
\listoffigures
\listoftables
\end{document}

Hope this helps, Tom

• tamonekolevi: Thanks! I found that I can use the \textsc{•} command, but the example was helpful.

• tom: Tom

11. A. M.
Alshuaib: I use the spreadtab package in tables. I try to enter a text column on which no calculation is done, but it doesn't work. Please, any help?

• tom: Hi there, Thanks for this question. You'll have to use '@' or '\STtextcell' for text cells. Next time, please consult the package documentation before asking a question and/or provide a minimal example (see code below). Best, Tom

\documentclass[11pt]{article}
\usepackage{spreadtab}
\begin{document}
\begin{spreadtab}{{tabular}{cccc}}
22 & 54 & a1+b1 & @Row sum\\
43 & 65 & a2+b2 & @Row sum\\
49 & 37 & a3+b3 & @Row sum\\
\hline
a1+a2+a3 & b1+b2+b3 & a4+b4 & @Row sum\\
\end{spreadtab}
\end{document}

12. Hi Tom, I got this table… But when I use \ref{tab:1}, the result on my page says "Table II-A", and I have no other table. I have a figure, but it seems it is being recognized as the same type of label.

\begin{table}
\begin{centering}
\label{tab:1}
\caption{Parameters used for the calculations \cite{Ganouni}-\cite{Lin}}
\par\end{centering}
\begin{centering}
\begin{tabular}{|l|c|c|}
\hline
% Table content
\hline
\end{tabular}
\par\end{centering}
\end{table}

• Hi Santiago, Thanks for your question. I'm not sure what the problem is, since you didn't include the rest of the code. Your table looks fine. Perhaps II is the chapter/section and A is the first table. You can redefine the way the table counter is printed if necessary. HTH, Tom
https://brilliant.org/problems/weird-systems/
# Weird Systems

$$x$$ and $$y$$ are natural numbers satisfying the following system of equations:

$$y^{3}+3y = x^{3}+3x^{2}+266$$

$$x^{2}+y^{2} = 8x+8y+2$$

Find the sum of the digits of the product of $$x$$ and $$y$$.
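Since both unknowns are natural numbers, a brute-force search is one way to check for solutions (this sketch is not part of the original problem page; the helper name `solve` and the search bound of 100 are arbitrary choices — the second equation describes a circle of radius √34 around (4, 4), so all natural solutions lie well inside that bound):

```python
# Brute-force search for natural-number solutions of the system
#   y^3 + 3y = x^3 + 3x^2 + 266
#   x^2 + y^2 = 8x + 8y + 2
def solve(limit=100):  # limit is an arbitrary assumption, ample here
    hits = []
    for x in range(1, limit):
        for y in range(1, limit):
            if (y**3 + 3*y == x**3 + 3*x**2 + 266
                    and x**2 + y**2 == 8*x + 8*y + 2):
                hits.append((x, y))
    return hits

solutions = solve()
x0, y0 = solutions[0]
digit_sum = sum(int(d) for d in str(x0 * y0))
```

Running the search and summing the digits of the product gives the requested answer.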
http://math.stackexchange.com/questions/90137/sequences-of-the-form-p-n-2p-n-1-a
# Sequences of the form $p_n=2^{p_{n-1}}-a$

There is a known Catalan sequence: $C_n=2^{C_{n-1}}-1$, with $C_0=2$. I have noticed that the following sequence produces prime numbers for its first four terms (I don't know whether the fifth term is prime or not): $P_n=2^{P_{n-1}}-3$, with $P_0=3$. Are there other similar prime number sequences of the form $P_n=2^{P_{n-1}}-a$?

-

In short, no. This is because there is no known $a$ such that we can prove $2^n - a$ will be prime infinitely often.
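The claim about the first four terms is easy to check numerically; a small trial-division sketch suffices, since the fourth term $2^{29}-3 = 536870909$ is still below $2^{30}$ (the fifth term, $2^{536870909}-3$, is far beyond this kind of test):

```python
# Check that the first four terms of P_n = 2^(P_{n-1}) - 3, P_0 = 3,
# are prime: 3, 5, 29, and 2^29 - 3 = 536870909.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:       # trial division up to sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True

terms = [3]
for _ in range(3):
    terms.append(2 ** terms[-1] - 3)
```

All four values pass the primality test, confirming the observation in the question.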
https://charlesjlee.com/post/20210205-gpt2-twitter-bot/
I was browsing HN one day and came across this thread that mentioned a new GPT-2 library: aitextgen. This is exactly what I was waiting for! (actually, that day had arrived earlier and the author of aitextgen had written previous versions).
https://academy.madeincosmos.net/2018/09/dark-energy/
# Dark Energy

In physical cosmology and astronomy, dark energy is an unknown form of energy which is hypothesized to permeate all of space, tending to accelerate the expansion of the universe.[1][2] Dark energy is the most accepted hypothesis to explain the observations since the 1990s indicating that the universe is expanding at an accelerating rate. Assuming that the standard model of cosmology is correct, the best current measurements indicate that dark energy contributes 68.3% of the total energy in the present-day observable universe. The mass–energy of dark matter and ordinary (baryonic) matter contribute 26.8% and 4.9%, respectively, and other components such as neutrinos and photons contribute a very small amount.[3][4][5][6] The density of dark energy (~7 × 10⁻³⁰ g/cm³) is very low, much less than the density of ordinary matter or dark matter within galaxies. However, it dominates the mass–energy of the universe because it is uniform across space.[7][8][9] Two proposed forms for dark energy are the cosmological constant,[10][11] representing a constant energy density filling space homogeneously, and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to the zero-point radiation of space, i.e. the vacuum energy.[12] Scalar fields that change in space can be difficult to distinguish from a cosmological constant because the change may be extremely slow.

## History of discovery and previous speculation

### Einstein’s cosmological constant

The “cosmological constant” is a constant term that can be added to Einstein’s field equation of General Relativity.
If considered as a “source term” in the field equation, it can be viewed as equivalent to the mass of empty space (which conceptually could be either positive or negative), or “vacuum energy“. The cosmological constant was first proposed by Einstein as a mechanism to obtain a solution of the gravitational field equation that would lead to a static universe, effectively using dark energy to balance gravity.[13] Einstein gave the cosmological constant the symbol Λ (capital lambda). The mechanism was an example of fine-tuning, and it was later realized that Einstein’s static universe would not be stable: local inhomogeneities would ultimately lead to either the runaway expansion or contraction of the universe. The equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe which contracts slightly will continue contracting. These sorts of disturbances are inevitable, due to the uneven distribution of matter throughout the universe. Further, observations made by Edwin Hubble in 1929 showed that the universe appears to be expanding and not static at all. Einstein reportedly referred to his failure to predict the idea of a dynamic universe, in contrast to a static universe, as his greatest blunder.[14]

### Inflationary dark energy

Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe slightly after the Big Bang. Such expansion is an essential feature of most current models of the Big Bang. However, inflation must have occurred at a much higher energy density than the dark energy we observe today and is thought to have completely ended when the universe was just a fraction of a second old.
It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe. Nearly all inflation models predict that the total (matter+energy) density of the universe should be very close to the critical density. During the 1980s, most cosmological research focused on models with critical density in matter only, usually 95% cold dark matter and 5% ordinary matter (baryons). These models were found to be successful at forming realistic galaxies and clusters, but some problems appeared in the late 1980s: in particular, the model required a value for the Hubble constant lower than preferred by observations, and the model under-predicted observations of large-scale galaxy clustering. These difficulties became stronger after the discovery of anisotropy in the cosmic microwave background by the COBE spacecraft in 1992, and several modified CDM models came under active study through the mid-1990s: these included the Lambda-CDM model and a mixed cold/hot dark matter model. The first direct evidence for dark energy came from supernova observations in 1998 of accelerated expansion in Riess et al.[15] and in Perlmutter et al.,[16] and the Lambda-CDM model then became the leading model. Soon after, dark energy was supported by independent observations: in 2000, the BOOMERanG and Maxima cosmic microwave background experiments observed the first acoustic peak in the CMB, showing that the total (matter+energy) density is close to 100% of critical density. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical. The large difference between these two supports a smooth component of dark energy making up the difference. Much more precise measurements from WMAP in 2003–2010 have continued to support the standard model and give more accurate measurements of the key parameters. 
The term “dark energy”, echoing Fritz Zwicky’s “dark matter” from the 1930s, was coined by Michael Turner in 1998.[17]

### Change in expansion over time

Diagram representing the accelerated expansion of the universe due to dark energy.

High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time and space. In general relativity, the evolution of the expansion rate is estimated from the curvature of the universe and the cosmological equation of state (the relationship between temperature, pressure, and combined matter, energy, and vacuum energy density for any region of space). Measuring the equation of state for dark energy is one of the biggest efforts in observational cosmology today. Adding the cosmological constant to cosmology’s standard FLRW metric leads to the Lambda-CDM model, which has been referred to as the “standard model of cosmology” because of its precise agreement with observations. As of 2013, the Lambda-CDM model is consistent with a series of increasingly rigorous cosmological observations, including the Planck spacecraft and the Supernova Legacy Survey. First results from the SNLS reveal that the average behavior (i.e., equation of state) of dark energy behaves like Einstein’s cosmological constant to a precision of 10%.[18] Recent results from the Hubble Space Telescope Higher-Z Team indicate that dark energy has been present for at least 9 billion years and during the period preceding cosmic acceleration.

## Nature

The nature of dark energy is more hypothetical than that of dark matter, and many things about it remain matters of speculation.[19] Dark energy is thought to be very homogeneous and not very dense, and is not known to interact through any of the fundamental forces other than gravity. Since it is quite rarefied and un-massive — roughly 10⁻²⁷ kg/m³ — it is unlikely to be detectable in laboratory experiments.
The reason dark energy can have such a profound effect on the universe, making up 68% of universal density in spite of being so dilute, is that it uniformly fills otherwise empty space. Independently of its actual nature, dark energy would need to have a strong negative pressure (repulsive action), like radiation pressure in a metamaterial,[20] to explain the observed acceleration of the expansion of the universe. According to general relativity, the pressure within a substance contributes to its gravitational attraction for other objects just as its mass density does. This happens because the physical quantity that causes matter to generate gravitational effects is the stress–energy tensor, which contains both the energy (or matter) density of a substance and its pressure and viscosity. In the Friedmann–Lemaître–Robertson–Walker metric, it can be shown that a strong constant negative pressure in all the universe causes an acceleration in the expansion if the universe is already expanding, or a deceleration in contraction if the universe is already contracting. This accelerating expansion effect is sometimes labeled “gravitational repulsion”.

### Technical definition

In standard cosmology, there are three components of the universe: matter, radiation, and dark energy. Matter is anything whose energy density scales with the inverse cube of the scale factor, i.e., ρ ∝ a⁻³, while radiation is anything which scales to the inverse fourth power of the scale factor (ρ ∝ a⁻⁴). This can be understood intuitively: for an ordinary particle in a square box, doubling the length of a side of the box decreases the density (and hence energy density) by a factor of eight (2³). For radiation, the decrease in energy density is greater, because an increase in spatial distance also causes a redshift.[21] The final component, dark energy, is an intrinsic property of space, and so has a constant energy density regardless of the volume under consideration (ρ ∝ a⁰).
Thus, unlike ordinary matter, it does not get diluted with the expansion of space.

## Evidence of existence

The evidence for dark energy is indirect but comes from three independent sources:

• Distance measurements and their relation to redshift, which suggest the universe has expanded more in the last half of its life.[22]
• The theoretical need for a type of additional energy that is not matter or dark matter to form the observationally flat universe (absence of any detectable global curvature).
• Measures of large-scale wave-patterns of mass density in the universe.

### Supernovae

A Type Ia supernova (bright spot on the bottom-left) near a galaxy.

In 1998, the High-Z Supernova Search Team[15] published observations of Type Ia (“one-A”) supernovae. In 1999, the Supernova Cosmology Project[16] followed, suggesting that the expansion of the universe is accelerating.[23] The 2011 Nobel Prize in Physics was awarded to Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess for their leadership in the discovery.[24][25] Since then, these observations have been corroborated by several independent sources. Measurements of the cosmic microwave background, gravitational lensing, and the large-scale structure of the cosmos, as well as improved measurements of supernovae, have been consistent with the Lambda-CDM model.[26] Some people argue that the only indications for the existence of dark energy are observations of distance measurements and their associated redshifts. Cosmic microwave background anisotropies and baryon acoustic oscillations serve only to demonstrate that distances to a given redshift are larger than would be expected from a “dusty” Friedmann–Lemaître universe and the local measured Hubble constant.[27] Supernovae are useful for cosmology because they are excellent standard candles across cosmological distances.
They allow researchers to measure the expansion history of the universe by looking at the relationship between the distance to an object and its redshift, which gives how fast it is receding from us. The relationship is roughly linear, according to Hubble’s law. It is relatively easy to measure redshift, but finding the distance to an object is more difficult. Usually, astronomers use standard candles: objects for which the intrinsic brightness, or absolute magnitude, is known. This allows the object’s distance to be measured from its actual observed brightness, or apparent magnitude. Type Ia supernovae are the best-known standard candles across cosmological distances because of their extreme and consistent luminosity. Recent observations of supernovae are consistent with a universe made up of 71.3% dark energy and 27.4% of a combination of dark matter and baryonic matter.[28]

### Cosmic microwave background

Estimated division of total energy in the universe into matter, dark matter and dark energy based on five years of WMAP data.[29]

The existence of dark energy, in whatever form, is needed to reconcile the measured geometry of space with the total amount of matter in the universe. Measurements of cosmic microwave background (CMB) anisotropies indicate that the universe is close to flat. For the shape of the universe to be flat, the mass-energy density of the universe must be equal to the critical density. The total amount of matter in the universe (including baryons and dark matter), as measured from the CMB spectrum, accounts for only about 30% of the critical density.
This implies the existence of an additional form of energy to account for the remaining 70%.[26] The Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft seven-year analysis estimated a universe made up of 72.8% dark energy, 22.7% dark matter, and 4.5% ordinary matter.[5] Work done in 2013 based on the Planck spacecraft observations of the CMB gave a more accurate estimate of 68.3% dark energy, 26.8% dark matter, and 4.9% ordinary matter.[30]

### Large-scale structure

The theory of large-scale structure, which governs the formation of structures in the universe (stars, quasars, galaxies, and galaxy groups and clusters), also suggests that the density of matter in the universe is only 30% of the critical density. A 2011 survey, the WiggleZ galaxy survey of more than 200,000 galaxies, provided further evidence towards the existence of dark energy, although the exact physics behind it remains unknown.[31][32] The WiggleZ survey from the Australian Astronomical Observatory scanned the galaxies to determine their redshift. Then, by exploiting the fact that baryon acoustic oscillations have left regularly spaced voids roughly 150 Mpc in diameter, surrounded by galaxies, the voids were used as standard rulers to estimate distances to galaxies as far away as 2,000 Mpc (redshift 0.6), allowing for an accurate estimate of the speeds of galaxies from their redshift and distance. The data confirmed cosmic acceleration up to half of the age of the universe (7 billion years) and constrained its inhomogeneity to 1 part in 10.[32] This provides confirmation of cosmic acceleration independent of supernovae.

### Late-time integrated Sachs–Wolfe effect

Accelerated cosmic expansion causes gravitational potential wells and hills to flatten as photons pass through them, producing cold spots and hot spots on the CMB aligned with vast supervoids and superclusters.
This so-called late-time integrated Sachs–Wolfe effect (ISW) is a direct signal of dark energy in a flat universe.[33] It was reported at high significance in 2008 by Ho et al.[34] and Giannantonio et al.[35]

### Observational Hubble constant data

A new approach to testing evidence of dark energy through observational Hubble constant data (OHD) has gained significant attention in recent years.[36][37][38][39] The Hubble constant, H(z), is measured as a function of cosmological redshift. OHD directly tracks the expansion history of the universe by taking passively evolving early-type galaxies as “cosmic chronometers”.[40] In this way, the approach provides standard clocks in the universe. The core of the idea is the measurement of the differential age evolution of these cosmic chronometers as a function of redshift, which provides a direct estimate of the Hubble parameter

$$H(z) = -\frac{1}{1+z}\frac{dz}{dt} \approx -\frac{1}{1+z}\frac{\Delta z}{\Delta t}.$$

The reliance on a differential quantity, Δz/Δt, minimizes many common issues and systematic effects. Moreover, as a direct measurement of the Hubble parameter, rather than of its integral as with supernovae and baryon acoustic oscillations (BAO), it carries more information and is computationally appealing. For these reasons, it has been widely used to examine the accelerated cosmic expansion and to study the properties of dark energy.

## Theories of dark energy

Dark energy’s status as a hypothetical force with unknown properties makes it a very active target of research. The problem is attacked from a great variety of angles, such as modifying the prevailing theory of gravity (general relativity), attempting to pin down the properties of dark energy, and finding alternative ways to explain the observational data.
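As a sketch of the cosmic-chronometer idea, the snippet below generates the age difference between two mock passively evolving galaxies at nearby redshifts from a fiducial flat ΛCDM model (the parameter values H₀ = 67.7 km/s/Mpc and Ωm = 0.31 are illustrative assumptions, not the methodology of refs. [36–40]), then applies the differential-age estimator H(z) ≈ −(1/(1+z)) Δz/Δt and checks that it recovers the input expansion rate:

```python
import numpy as np

H0 = 67.7               # km/s/Mpc (assumed fiducial value)
OM, OL = 0.31, 0.69     # assumed matter / dark-energy fractions
KMSMPC_PER_INVGYR = 977.8  # 1 Gyr^-1 expressed in km/s/Mpc

def H_true(z):
    """Fiducial flat LCDM Hubble parameter in km/s/Mpc."""
    return H0 * np.sqrt(OM * (1 + z)**3 + OL)

# Cosmic-time difference between two nearby redshifts:
# dt = -dz / ((1+z) H(z)), integrated on a fine grid (result in Gyr).
z1, z2 = 0.48, 0.52
zs = np.linspace(z1, z2, 2001)
integrand = 1.0 / ((1 + zs) * (H_true(zs) / KMSMPC_PER_INVGYR))
dt = np.sum((integrand[:-1] + integrand[1:]) / 2 * np.diff(zs))  # Gyr

# Differential-age estimator evaluated at the midpoint redshift:
# H(z) ~ -(1/(1+z)) * (dz/dt); here dz = z1 - z2 and dt > 0, so signs cancel.
z_mid = 0.5
H_est = (z2 - z1) / ((1 + z_mid) * dt) * KMSMPC_PER_INVGYR

print(f"H_est({z_mid}) = {H_est:.1f} km/s/Mpc, model value = {H_true(z_mid):.1f}")
```

With Δz this small the estimator agrees with the model H(z) to well under a percent, which is the point of the "standard clock" construction: no integration over the expansion history is needed, only relative ages.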
The equation of state of dark energy for four common models as a function of redshift.[41] A: CPL model, B: Jassal model, C: Barboza & Alcaniz model, D: Wetterich model

### Cosmological constant

Estimated distribution of matter and energy in the universe[42]

The simplest explanation for dark energy is that it is an intrinsic, fundamental energy of space. This is the cosmological constant, usually represented by the Greek letter Λ (Lambda, hence the Lambda-CDM model). Since energy and mass are related according to the equation E = mc², Einstein’s theory of general relativity predicts that this energy will have a gravitational effect. It is sometimes called a vacuum energy because it is the energy density of empty vacuum. The cosmological constant has a negative pressure equal in magnitude to its energy density and so causes the expansion of the universe to accelerate. The reason a cosmological constant has negative pressure can be seen from classical thermodynamics. In general, energy must be lost from inside a container (the container must do work on its environment) in order for the volume to increase. Specifically, a change in volume dV requires work equal to a change of energy −P dV, where P is the pressure. But the amount of energy in a container full of vacuum actually increases when the volume increases, because the energy is equal to ρV, where ρ is the energy density of the cosmological constant. Therefore, P is negative and, in fact, P = −ρ. There are two major advantages of the cosmological constant. The first is that it is simple. Einstein had in fact introduced this term in his original formulation of general relativity in order to obtain a static universe. Although he later discarded the term after Hubble found that the universe is expanding, a nonzero cosmological constant can act as dark energy without otherwise changing the Einstein field equations. The other advantage is that there is a natural explanation for its origin.
Most quantum field theories predict vacuum fluctuations that would give the vacuum this sort of energy. This is related to the Casimir effect, in which there is a small suction into regions where virtual particles are geometrically inhibited from forming (e.g. between plates with tiny separation). A major outstanding problem is that the same quantum field theories predict a huge cosmological constant, more than 100 orders of magnitude too large.[11] This would need to be almost, but not exactly, cancelled by an equally large term of the opposite sign. Some supersymmetric theories require a cosmological constant that is exactly zero,[43] which does not help because supersymmetry must be broken. Nonetheless, the cosmological constant is the most economical solution to the problem of cosmic acceleration. Thus, the current standard model of cosmology, the Lambda-CDM model, includes the cosmological constant as an essential feature.

### Quintessence

In quintessence models of dark energy, the observed acceleration of the scale factor is caused by the potential energy of a dynamical field, referred to as the quintessence field. Quintessence differs from the cosmological constant in that it can vary in space and time. In order for it not to clump and form structure like matter, the field must be very light so that it has a large Compton wavelength. No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant.
Some scientists think that the best evidence for quintessence would come from violations of Einstein’s equivalence principle and variation of the fundamental constants in space or time.[44] Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses. The coincidence problem asks why the acceleration of the Universe began when it did. If acceleration had begun earlier in the universe, structures such as galaxies would never have had time to form, and life, at least as we know it, would never have had a chance to exist. Proponents of the anthropic principle view this as support for their arguments. However, many models of quintessence have a so-called “tracker” behavior, which solves this problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter–radiation equality, which triggers quintessence to start behaving as dark energy, eventually dominating the universe. This naturally sets the low energy scale of the dark energy.[45][46] In 2004, when scientists fit the evolution of dark energy to the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A no-go theorem has been proved showing that dark energy models crossing this boundary require at least two degrees of freedom; this is the so-called quintom scenario. Some special cases of quintessence are phantom energy, in which the energy density of quintessence actually increases with time, and k-essence (short for kinetic quintessence), which has a non-standard form of kinetic energy such as a negative kinetic energy.[47] They can have unusual properties: phantom energy, for example, can cause a Big Rip.
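The behaviors just contrasted (cosmological constant, quintessence-like, phantom) can be summarized by how a constant equation of state w scales the dark-energy density, ρ ∝ a^(−3(1+w)). A minimal sketch, with purely illustrative values of w:

```python
def de_density_ratio(a, w):
    """Dark-energy density relative to today for a constant equation of state w.
    Follows rho(a)/rho_0 = a**(-3 * (1 + w)) for scale factor a (a = 1 today)."""
    return a ** (-3.0 * (1.0 + w))

a = 2.0  # universe doubled in linear size
rho_lambda  = de_density_ratio(a, -1.0)   # cosmological constant: unchanged
rho_quint   = de_density_ratio(a, -0.8)   # quintessence-like (w > -1): dilutes
rho_phantom = de_density_ratio(a, -1.2)   # phantom (w < -1): grows -> Big Rip

print(rho_lambda, rho_quint, rho_phantom)
```

The phantom case (w < −1) is the one whose density grows as space expands, which is why it leads to the runaway "Big Rip" expansion mentioned above.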
### Interacting dark energy

This class of theories attempts to come up with an all-encompassing theory of both dark matter and dark energy as a single phenomenon that modifies the laws of gravity at various scales. This could, for example, treat dark energy and dark matter as different facets of the same unknown substance,[48] or postulate that cold dark matter decays into dark energy.[49] Another class of theories that unifies dark matter and dark energy comprises covariant theories of modified gravity. These theories alter the dynamics of spacetime such that the modified dynamics account for what has been attributed to the presence of dark energy and dark matter.[50]

### Variable dark energy models

The density of dark energy might have varied in time over the history of the universe. Modern observational data allow for estimates of the present density. Using baryon acoustic oscillations, it is possible to investigate the effect of dark energy in the history of the Universe and to constrain the parameters of its equation of state. To that end, several models have been proposed. One of the most popular is the Chevallier–Polarski–Linder model (CPL).[51][52] Some other common models are those of Barboza & Alcaniz (2008),[53] Jassal et al. (2005),[54] and Wetterich (2004).[55]

### Observational skepticism

Some alternatives to dark energy aim to explain the observational data by a more refined use of established theories. In this scenario, dark energy doesn’t actually exist and is merely a measurement artifact. For example, if we are located in an emptier-than-average region of space, the observed cosmic expansion rate could be mistaken for a variation in time, or acceleration.[56][57][58][59] A different approach uses a cosmological extension of the equivalence principle to show how space might appear to be expanding more rapidly in the voids surrounding our local cluster.
While weak, such effects considered cumulatively over billions of years could become significant, creating the illusion of cosmic acceleration and making it appear as if we live in a Hubble bubble.[60][61][62] Yet other possibilities are that the accelerated expansion of the universe is an illusion caused by our motion relative to the rest of the universe,[63][64] or that the supernova sample size used wasn’t large enough.[65][66]

## Other mechanisms driving acceleration

### Modified gravity

The evidence for dark energy is heavily dependent on the theory of general relativity. Therefore, it is conceivable that a modification to general relativity could also eliminate the need for dark energy. There are very many such theories, and research is ongoing.[67][68] The measurement of the speed of gravity in the first gravitational wave measured by non-gravitational means (GW170817) ruled out many modified gravity theories as explanations for dark energy.[69][70][71] Astrophysicist Ethan Siegel states that, while such alternatives gain a lot of mainstream press coverage, almost all professional astrophysicists are confident that dark energy exists, and that none of the competing theories explain observations to the same level of precision as standard dark energy.[72]

## Implications for the fate of the universe

Cosmologists estimate that the acceleration began roughly 5 billion years ago.[73][notes 1] Before that, it is thought that the expansion was decelerating, due to the attractive influence of matter. The density of dark matter in an expanding universe decreases more quickly than that of dark energy, and eventually the dark energy dominates. Specifically, when the volume of the universe doubles, the density of dark matter is halved, but the density of dark energy is nearly unchanged (it is exactly constant in the case of a cosmological constant). Projections into the future can differ radically for different models of dark energy.
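The dilution argument above can be made concrete. With matter scaling as a⁻³ and a cosmological constant staying fixed, general relativity gives acceleration (ä > 0) once ρ_Λ exceeds ρ_m/2, from the condition ρ + 3p = 0 with p_Λ = −ρ_Λ. A rough sketch, assuming present-day density fractions Ωm = 0.31 and ΩΛ = 0.69:

```python
OM, OL = 0.31, 0.69   # assumed present-day matter and dark-energy fractions

def matter_density(a):
    # matter dilutes as the inverse cube of the scale factor
    return OM * a**-3

# Doubling the volume (a -> 2^(1/3) * a) halves the matter density:
halved = matter_density(2 ** (1 / 3)) / matter_density(1.0)

# Acceleration begins when rho_m = 2 * rho_Lambda, i.e. OM * a**-3 = 2 * OL:
a_acc = (OM / (2 * OL)) ** (1.0 / 3.0)
z_acc = 1.0 / a_acc - 1.0
print(f"volume doubling scales matter density by {halved:.3f}")
print(f"acceleration began at a ≈ {a_acc:.2f} (z ≈ {z_acc:.2f})")
```

The resulting transition redshift of roughly z ≈ 0.6 corresponds to several billion years ago, broadly consistent with the "roughly 5 billion years" estimate quoted above (the exact figure depends on the assumed parameters).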
For a cosmological constant, or any other model that predicts that the acceleration will continue indefinitely, the ultimate result will be that galaxies outside the Local Group will have a line-of-sight velocity that continually increases with time, eventually far exceeding the speed of light.[74] This is not a violation of special relativity because the notion of “velocity” used here is different from that of velocity in a local inertial frame of reference, which is still constrained to be less than the speed of light for any massive object (see Uses of the proper distance for a discussion of the subtleties of defining any notion of relative velocity in cosmology). Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually.[75][76] However, because of the accelerating expansion, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future[77] because the light never reaches a point where its “peculiar velocity” toward us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Uses of the proper distance). 
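The cosmological event horizon just described can be estimated numerically for a flat ΛCDM universe by evaluating d = c ∫₁^∞ da / (a² H(a)). The sketch below uses assumed parameters (H₀ = 67.7 km/s/Mpc, Ωm = 0.31); it is an order-of-magnitude illustration, not a precision calculation:

```python
import numpy as np

C = 299792.458          # speed of light, km/s
H0 = 67.7               # km/s/Mpc (assumed)
OM, OL = 0.31, 0.69     # assumed density fractions
MPC_TO_GLY = 3.2616e-3  # 1 Mpc = 3.2616 million light years

def H(a):
    """Flat LCDM Hubble parameter as a function of scale factor a."""
    return H0 * np.sqrt(OM * a**-3 + OL)

# Proper distance to the event horizon today:
# d = c * integral from a=1 to infinity of da / (a^2 H(a)).
# The integrand falls off as 1/a^2, so a log-spaced grid up to a = 1e4
# captures essentially all of the integral.
a = np.logspace(0.0, 4.0, 20001)
f = 1.0 / (a**2 * H(a))
d_mpc = C * np.sum((f[:-1] + f[1:]) / 2 * np.diff(a))
d_gly = d_mpc * MPC_TO_GLY
print(f"event horizon ≈ {d_gly:.1f} billion light years")
```

For these assumed parameters the integral comes out near 16–17 billion light years, in line with the figure cited for a constant dark energy.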
Assuming the dark energy is constant (a cosmological constant), the current distance to this cosmological event horizon is about 16 billion light-years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event were less than 16 billion light-years away, but the signal would never reach us if the event were more than 16 billion light-years away.[76] As galaxies approach the point of crossing this cosmological event horizon, the light from them will become more and more redshifted, to the point where the wavelength becomes too large to detect in practice and the galaxies appear to vanish completely[78][79] (see Future of an expanding universe). Planet Earth, the Milky Way, and the Local Group of which the Milky Way is a part would all remain virtually undisturbed as the rest of the universe recedes and disappears from view. In this scenario, the Local Group would ultimately suffer heat death, just as was hypothesized for the flat, matter-dominated universe before measurements of cosmic acceleration. There are other, more speculative ideas about the future of the universe. The phantom energy model of dark energy results in divergent expansion, which would imply that the effective force of dark energy continues growing until it dominates all other forces in the universe. Under this scenario, dark energy would ultimately tear apart all gravitationally bound structures, including galaxies and solar systems, and eventually overcome the electrical and nuclear forces to tear apart atoms themselves, ending the universe in a “Big Rip”. It is also possible the universe may never have an end and will continue in its present state forever[citation needed] (see the second law of thermodynamics as a law of disorder). On the other hand, dark energy might dissipate with time or even become attractive.
Such uncertainties leave open the possibility that gravity might yet rule the day and lead to a universe that contracts in on itself in a “Big Crunch”,[80] or that there may even be a dark energy cycle, which implies a cyclic model of the universe in which every iteration (Big Bang then eventually a Big Crunch) takes about a trillion (10¹²) years.[81][82] While none of these are supported by observations, they are not ruled out.

## In philosophy of science

In philosophy of science, dark energy is an example of an “auxiliary hypothesis”, an ad hoc postulate added to a theory in response to observations that falsify it. It has been argued that the dark energy hypothesis is a conventionalist hypothesis, that is, a hypothesis that adds no empirical content and hence is unfalsifiable in the sense defined by Karl Popper.[83]
https://arxiv.org/abs/1712.06816
# Title: Arnold–Winther mixed finite elements for Stokes eigenvalue problems

Abstract: This paper is devoted to the study of the Arnold–Winther mixed finite element method for two-dimensional Stokes eigenvalue problems using the stress–velocity formulation. A priori error estimates for the eigenvalue and eigenfunction errors are presented. To improve the approximation of both eigenvalues and eigenfunctions, we propose a local post-processing. With the help of the local post-processing, we derive a reliable a posteriori error estimator which is shown to be empirically efficient. We confirm numerically the proven higher-order convergence of the post-processed eigenvalues for convex domains with smooth eigenfunctions. On adaptively refined meshes we obtain numerically optimal higher orders of convergence of the post-processed eigenvalues even on nonconvex domains.

Subjects: Numerical Analysis (math.NA)
Cite as: arXiv:1712.06816 [math.NA] (or arXiv:1712.06816v1 [math.NA] for this version)

## Submission history
From: Joscha Gedicke
[v1] Tue, 19 Dec 2017 08:26:13 GMT (736kb,D)
http://math.stackexchange.com/questions/63072/surjectivity-implies-injectivity
# Surjectivity implies injectivity

Let $S$ be a finite set. Let $f$ be a surjective function from $S$ to $S$. How do I prove that it is injective? -

Have you tried counting elements yet? – Sebastian Sep 9 '11 at 10:32

Suppose $x \neq y \in S$ and that $f(x) = f(y)$. Let $|S| = n$. How many distinct elements can lie in the image of $f$? – m_t_ Sep 9 '11 at 10:36

Let $S$ be a finite set, and $f : S \to S$ a function. Then the following are equivalent:

• $f$ is injective.
• $f$ is surjective.
• $f$ is bijective.

This is really just a counting argument. First, suppose $f$ is injective. If $S$ has $n$ elements, by our assumption, this means the image of $f$ has at least $n$ elements. But the image of $f$ is contained in $S$, so it has at most $n$ elements; so the image of $f$ contains exactly $n$ elements and is therefore the whole of $S$, i.e. $f$ is surjective.

Next, suppose $f$ is surjective. So, for each $y$ in $S$, there is an $x$ in $S$ such that $y = f(x)$; we choose one such $x$ for each $y$ and define a function $g : S \to S$ so that $g(y) = x$. By construction, $f(g(y)) = y$, so $g$ must be injective, and hence must be surjective by the above argument. So $g$ is a bijection, and $f$ is a left inverse for $g$. But a left inverse for a bijection is also a right inverse, so this implies $f$ is a bijection, and a fortiori an injection.

Notice that the very first part of the argument fails when $S$ is not finite. For example, consider the function $f : \mathbb{N} \to \mathbb{N}$ defined by $f(x) = x + 1$. This function is certainly injective but is not surjective. Similarly, the function $g : \mathbb{N} \to \mathbb{N}$ defined by $g(0) = 0$ and $g(x + 1) = x$ is surjective, but not injective. -

Why is the function $g$ injective? – Mohan Sep 9 '11 at 11:06

@user774025: Because we send $y$ to its $x$ such that $f(x) = y$. Since $f$ is a function there can only be one element as $f(x)$.
– Asaf Karagila Sep 9 '11 at 11:46

Though technically correct, the claim that "the image of [an injective] $f$ has at least $n$ elements" is odd and misleading. It follows from the definition of a function that the image of any function has at most $n$ elements when its domain has $n$ elements. So proving the first part really just amounts to noticing that injectivity implies the image of $f$ has exactly $n$ elements, i.e., it coincides with $S$. – pash Jul 26 '13 at 18:10

Suppose that $f$ is an injective function that is not surjective, i.e. there is a point $y \in S$ such that there is no point $x \in S$ with $f(x) = y$. Since $f$ is a function, every $x \in S$ must be mapped somewhere, so all $n$ elements of $S$ are mapped into the at most $n - 1$ elements of $S \setminus \{y\}$. Hence we must have some $x_1 \ne x_2$ with $f(x_1) = f(x_2)$, which contradicts injectivity. Therefore $f$ must be onto. -
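The equivalence can also be checked by brute force for a small finite set. The sketch below enumerates all 4⁴ = 256 functions on a 4-element set and confirms that the surjective ones are exactly the injective ones (both classes being the 4! = 24 bijections):

```python
from itertools import product

n = 4
S = range(n)

surjective = set()
injective = set()
for f in product(S, repeat=n):      # f[i] is the image of element i
    if set(f) == set(S):            # every element of S is hit
        surjective.add(f)
    if len(set(f)) == n:            # no two elements share an image
        injective.add(f)

# On a finite set the two classes coincide (and both equal the bijections).
print(surjective == injective, len(surjective))  # True 24
```

Of course this is evidence, not proof; the counting arguments above are the proof, and the enumeration fails to terminate on an infinite set, matching the counterexamples on ℕ.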
https://www.physicsforums.com/threads/differential-geometry-tangent-vector-reparameterization.847994/
# Differential geometry : Tangent vector & reparameterization

1. Dec 13, 2015

### Schwarzschild90
1. The problem statement, all variables and given/known data

2. Relevant equations
Arc-length function

3. The attempt at a solution
Tangent vector: r = (-sinh(t), cosh(t), 3)

Now, I just need to reparameterize it using arclength and verify my work is unit-speed. Will someone give me a hint? Should I use the arc-length function to accomplish this?

#### Attached Files:
• ###### 1.PNG
File size: 8.7 KB
Views: 60

2. Dec 13, 2015

### Ray Vickson
What is preventing you from trying it for yourself?

3. Dec 13, 2015

### Schwarzschild90
It's that I have no means of checking the solution, so before I invest in it, I would like to know if my method is correct (assuming that I integrate correctly).

4. Dec 13, 2015

### Staff: Mentor
This isn't the tangent vector.

5. Dec 13, 2015

### Schwarzschild90
Tangent vector

Now, compute the norm of the tangent vector:

Using this, make the following substitution

6. Dec 13, 2015

### Staff: Mentor

7. Dec 13, 2015

### Schwarzschild90
How do I compute the arclength, without knowing the range? For example [0 <= t <= 2pi]

Another shot at the arc length of the tangent vector
$\sqrt{(9+9*sinh(t)^2+16*cosh(t)^2)}dt =^* 25 cosh^2(t) = 25 sinh^2(t)+25$
* Using a trigonometric identity

PS: csgn is code used specifically by Maple. It's not necessarily a mathematical function

Last edited: Dec 13, 2015

8. Dec 13, 2015

### Staff: Mentor
The arc length function in your relevant equations gives the arc length in terms of a parameter t. The last expression above is not helpful, but the one before it is helpful. What happened to the square root? Do you know what it means, though? I've never seen it, but I don't use Maple.

9. Dec 13, 2015

### Schwarzschild90
Right, the square root should've been preserved in the above equation. Here it is, in all of its glory: $\sqrt{25cosh^2(t)}$

So, is this equation the reparameterization of the tangent vector?
csgn(x) is the sign function of real AND complex numbers, where csgn = complex signum.

10. Dec 13, 2015

### Ray Vickson

11. Dec 13, 2015

### Schwarzschild90
Plot of the 25cosh^2(t) function; the norm of the tangent vector

Last edited: Dec 13, 2015

12. Dec 13, 2015

### Ray Vickson
This is not relevant. The question is what cosh(t) looks like, not its square. You should not even need to do an actual plot; just picture it in your mind.

13. Dec 13, 2015

### Schwarzschild90
Plot of $5 \sqrt{cosh(t)}$

I can picture it in my mind. What am I supposed to "see"?

Last edited: Dec 13, 2015

14. Dec 13, 2015

### Schwarzschild90
I get this for the parameterization by arclength of the tangent vector
$int(5*sqrt(cosh(t)^2), t = 0 .. 1) = 5 sinh(1)$

15. Dec 13, 2015

### Staff: Mentor
This is not a parameterization -- it's a number. As I said before... IOW, $\int_0^t 5 \sqrt{\cosh^2(w)} dw$
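The numbers in this thread are consistent with a curve like r(t) = (3 cosh t, 4 sinh t, 3t) — a hypothetical reconstruction, since the actual problem statement is in an attachment. For that curve, r′(t) = (3 sinh t, 4 cosh t, 3), so |r′(t)|² = 9 sinh²t + 9 + 16 cosh²t = 25 cosh²t, the arc length from 0 is s(t) = 5 sinh t, and the reparameterization inverts this as t(s) = arcsinh(s/5). A quick numerical check that the reparameterized curve really is unit-speed:

```python
import numpy as np

def r(t):
    # hypothetical curve consistent with |r'(t)| = 5*cosh(t) from the thread
    return np.array([3 * np.cosh(t), 4 * np.sinh(t), 3 * t])

def t_of_s(s):
    # invert the arc-length function s(t) = 5*sinh(t)  =>  t = arcsinh(s/5)
    return np.arcsinh(s / 5.0)

# check |d/ds r(t(s))| = 1 at several arc-length values via central differences
h = 1e-6
for s in [0.0, 1.0, 3.0, 7.0]:
    dr = (r(t_of_s(s + h)) - r(t_of_s(s - h))) / (2 * h)
    print(f"s = {s}: speed = {np.linalg.norm(dr):.6f}")
```

Every printed speed should be 1 to numerical precision, which is exactly the "verify unit-speed" step asked for in the opening post (under the stated assumption about the curve).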
https://iwaponline.com/hr/article-abstract/49/2/303/37828/Projections-of-runoff-in-the-Vistula-and-the-Odra
## Abstract The objective of this paper is to assess climate change impacts on spatiotemporal changes in annual and seasonal runoff and its components in the basins of two large European rivers, the Vistula and the Odra, for future horizons. This study makes use of the Soil and Water Assessment Tool (SWAT) model, set up at high resolution, and driven by a multi-model ensemble (MME) of nine bias-corrected EURO-CORDEX simulations under two representative concentration pathways (RCPs), 4.5 and 8.5. This paper presents a wealth of illustrative material referring to the annual and seasonal runoff (R) in the reference period as well as projections for the future (MME mean change), with explicit illustration of the multi-model spread based on the agreement between models and statistical significance of change according to each model. Annual R increases are dominating, regardless of RCP and future horizon. The magnitude of the MME mean of spatially averaged increase varies between 15.8% (RCP 4.5, near future) and 41.6% (RCP 8.5, far future). The seasonal patterns show the highest increase in winter and the lowest in spring, whereas the spatial patterns show the highest increase in the inner, lowland part, and the lowest in the southern mountainous part of the basin.
http://mathhelpforum.com/calculus/61524-graphs.html
# Math Help - graphs 1. ## graphs The diagram shows the graph of . You are given that . What is the value of ? Give your answer to three significant figures. Could someone explain how I would go about working this out? 2. Originally Posted by Haris The diagram shows the graph of . You are given that . What is the value of ? Give your answer to three significant figures. Could someone explain how I would go about working this out? Hello, please don't ask the same question twice! It's against forum rules.
https://www.physicsforums.com/threads/meaning-of-permeability-of-free-space.740368/
# Meaning Of Permeability Of Free Space 1. Feb 25, 2014 ### LikesIntuition When we talk about the permeability of free space, are we talking about something with physical meaning on its own? Or is it simply a useful constant? If it does have meaning on its own, what exactly is that meaning? 2. Feb 25, 2014 ### Staff: Mentor The permeability of free space is just a constant which is needed for conversion between different units in the SI system of units and other similar systems of units. In other systems of units it doesn't even exist. 3. Feb 25, 2014 ### LikesIntuition Alright, thanks! 4. Feb 26, 2014 ### abitslow I don't know the difference between a "physical constant" and "something that has meaning on its own". The speed of light, c, is a physical constant. Does it have meaning? Your question is really not capable of a simple answer. Depending on your level of knowledge (interest) you might be OK with someone telling you it is "just" a conversion factor. It's not that it isn't; it is. But take a look at the last equation in this section of wikipedia:https://en.wikipedia.org/wiki/Covar...agnetism#Electromagnetic_stress-energy_tensor εₒµₒc² = 1... the permittivity and permeability of the vacuum are two aspects of the same thing, and from them you can calculate (should I say "specify"??) the speed of light. So I wouldn't say "just". εₒ and µₒ are the conversion factors between electricity and magnetism. Here is another wiki example of the laws of electromagnetism, which you would agree, I hope, are important:https://en.wikipedia.org/wiki/Class...lativity#Maxwell.27s_equations_in_tensor_form You don't have to UNDERSTAND the equations to see that in both references µₒ appears all over the place. Sure, you can define some unit so that its value is 1, and even make it unitless. BUT if you do that, you won't be able to speak about (for example) velocity being in units of distance ÷ time. I think that would be a step over the line. Maybe it's just me?
It is a whack-a-mole (corn hole) question. If you got rid of it (by setting its value to 1, unitless), many other normal physical quantities would have to use conversion factors to convert them into distance, time, charge, force, .... You really can't get rid of it: it WILL pop up somewhere (else). 5. Feb 26, 2014 ### Staff: Mentor That isn't necessarily correct. Again, it all depends on the system of units being used. In SI units you have the permeability of free space and you have velocity in units of distance/time. In Gaussian units the permeability of free space doesn't exist (it is a dimensionless 1) and you have velocity in units of distance/time. In Geometrized units the permeability of free space doesn't exist and velocity is unitless. So it depends on the system of units. Gaussian units are common in the EM literature. Geometrized units are common in the GR literature. SI units are common in most of the rest of the literature. English units are common in engineering literature. You should be familiar with many systems of units and know how to handle them. 6. Feb 26, 2014 ### LikesIntuition Does changing units allow us to say permeability is gone? Or is it more accurate to say it's just 1? Although I guess you could take any equation and multiply it by 1, stating that 1 is a constant. 7. Feb 26, 2014 ### Staff: Mentor I guess that would be more accurate. But since you can always divide by 1 without changing anything you can always divide it out completely. For example, in Newton's writings the second law is not ∑F=ma, but rather ∑F=kma. Newton wasn't using SI units, he was using the units of the time, which would be closer to English units than SI. If you write it in some systems of units you still need to add that unit conversion factor back in, but since we generally teach using SI we usually think of Newton's law without the k at all. 8.
Feb 26, 2014 ### gburkhard Hi folks, It was always my understanding that the 'meaning' behind these numbers is relatively simple: when we talk about permittivity or permeability, we're setting the scale for the electric or magnetic polarizability of the vacuum. I.e., the general electric field is the displacement field, D=epsilon*E (and for magnetism, B=mu*H). epsilon0 and mu0 are the polarizabilities of the empty vacuum and we measure the polarizabilities of materials relative to that. However, you can change the units and call the vacuum polarizability something else if you prefer -- then the materials have different relative values in that system of units. But at the end of the day the permittivity and permeability of the vacuum are just the extent to which the vacuum is polarized when you apply a field to it. It is the reference value that we set, since we can't have an absolute field in absence of vacuum (because what would that mean?), but that's what it is. Basically the same as asking what is the meaning of setting the Lorentz gauge such that the potentials go to 0 at infinity. It just sets a reference from which we can measure everything else. Does this make sense? 9. Feb 26, 2014 ### LikesIntuition What does it mean to polarize the vacuum? Is it to give a point in the vacuum the ability to exert a force on something (so set up a field in it)? 10. Feb 26, 2014 ### gburkhard Yep, that's it! I'm just talking about a generalization of the concept here, not implying that there are physical dipoles aligning with the field in a vacuum (because then you would have to ask what the field would look like "outside" of the vacuum, which is not a concept). But just what you said -- if you have a field, say from a dipole, the field propagates through the vacuum, and the shape that the field lines take (curvature, and extent) depends on the vacuum polarizability.
You can see that IF you could change epsilon, you can make the field lines shrink or expand just like you would if you were inside of a dielectric. So the value that we choose (e0) is the value in the dimensionality system of our choice that expresses the true value of the fields everywhere. Maybe in a different universe that value is something different! But in our universe it is what it is... 11. Feb 26, 2014 ### LikesIntuition Oh I see now! So we have a field being set up of which we can describe changes (such as the fact that from a point source it drops off like the inverse of the distance squared). But that doesn't tell us how strong the field will be at each point without some scaling factor. We could just as well have a force that changes through space in the same way, but is stronger or weaker at all those points. So space has some kind of *seemingly* arbitrary value for how strong the fields are that get set up in it? 12. Feb 26, 2014 ### Staff: Mentor It is not just seemingly arbitrary. It is arbitrary. It depends entirely on your units, which are arbitrary. The value of the permeability of free space doesn't tell you anything about free space, it tells you about your units. The only non-arbitrary constants in physics are the dimensionless ones. http://math.ucr.edu/home/baez/constants.html Last edited: Feb 26, 2014 13. Feb 26, 2014 ### LikesIntuition Well I'm speaking in a less mathematical sense. I'm not talking about the "number" of the field. I mean the literal force (not a number) a charge would feel in that field. It's the same physical force no matter what units we choose to describe it with. The units change the number we end up assigning to that given amount of force, but the force is the same magnitude regardless. And the actual, physical strength of our field (regardless of how we are describing it) is dependent on how our world works, not what units we're using. Right? 14. 
Feb 26, 2014 ### Staff: Mentor But what does it even mean, to have a "physical force" that is independent of the units used? What is the "physical force" of 3 N? All you can do is compare it to other forces, such as a certain spring compressed a certain distance or something similar. Those comparisons are dimensionless, and they don't depend on dimensionful constants like the permeability of free space, they depend only on the dimensionless constants like the fine structure constant. Did you read the link I posted? I may have posted it after you saw my response. Last edited: Feb 26, 2014 15. Feb 26, 2014 ### LikesIntuition Oh! I didn't see the link. I'll check that out. Also, even if we change units, isn't the relationship between our different physical concepts still the same? So using force as an example, the relationship between mass and acceleration (which is a relationship between velocity and time, velocity being its own relationship between distance and time) is the same no matter what units we use. Yes, we'll have different numbers for our concepts in each unit system, and I guess we could have different numbers for our rates of change, too. But the relationship is still the same, right? If you watch a certain force be placed on a certain mass, some phenomena will happen. It doesn't matter what the units we write down for what we observe, that phenomena is determined by an arbitrary physical relationship between our concepts like force and mass, right? 16. Feb 27, 2014 ### Staff: Mentor If you change units only dimensionless quantities remain the same. Luckily, if you think about what you intuitively mean by "physical concepts" they are usually dimensionless anyway. For example, the speed of light is a big number in SI units but 1 in Planck units (same dimensions in both cases). But regardless of the units, the dimensionless ratio between the speed of light and your maximum running speed is very large. 
Therefore, you intuitively think of light as being very fast. Not because of the large SI value and despite the small Planck value. You think of it as fast because of the dimensionless ratio between it and something familiar: your own body's motion. The relationship will be the same in all systems of units EXCEPT for the scaling factor. In some systems of units the scaling factor will be entirely absent (dimensionless 1) and in others it will be present and will have different units and dimensions. Sit down and think about what you intuitively mean by "a certain force" and "a certain mass". I bet you will find that you are mentally making dimensionless quantities. 17. Feb 27, 2014 ### LikesIntuition I see what you're saying, but I'm still having trouble with this idea. We invented math and numbers, but we didn't "invent" the physical phenomena we use math to describe. The laws of nature aren't dependent on how we choose to describe them, are they? 18. Feb 27, 2014 ### Staff: Mentor I don't know if we invented math and numbers or if we discovered it. I don't think the distinction matters too much. I agree that the laws of nature do not depend on how we choose to describe them. Therefore, anything which does depend on our choice of description is part of the description, not part of the laws of nature. This includes things like the permittivity of free space. 19. Feb 27, 2014 ### LikesIntuition I see what you mean. How "much" field is set up in a vacuum is determined by permittivity. But the value for permittivity depends on how we choose to describe our forces, right? 20. Feb 27, 2014 ### Staff: Mentor Yes. That is well said.
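The identity εₒµₒc² = 1 mentioned earlier in the thread is easy to check numerically. A minimal sketch, assuming the CODATA 2018 SI values quoted in the comments below (since the 2019 SI redefinition, µₒ is no longer exactly 4π×10⁻⁷):

```python
# CODATA 2018 SI values (assumed here for illustration):
c = 299_792_458.0        # speed of light in vacuum, m/s (exact in SI)
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
mu0 = 1.25663706212e-6   # vacuum permeability, N/A^2

# The combination eps0 * mu0 * c^2 is dimensionless and equals 1,
# whatever dimensionful values the individual constants take in SI.
product = eps0 * mu0 * c ** 2
print(product)
```

Running this prints a value equal to 1 to about ten significant figures, which is exactly the point made above: the individual values are unit-system artifacts, but the dimensionless combination is physical.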
https://math.stackexchange.com/questions/3088540/functions-with-finite-right-hand-limits-are-borel-measurable/3089247
# Functions with finite right-hand limits are Borel measurable I'm studying for my exam in measure and integration theory and we got some exercises that we can do for preparation and I'm stuck on this one. Every function $$f:\Bbb R \rightarrow \Bbb R$$ (from reals to reals) with the property that $$\displaystyle\lim _{ h\to 0^{+}}{ f(x+h) }$$ exists for all $$x\in \Bbb R$$ is Borel measurable. I just proved that the pointwise limit of measurable functions is again measurable. So I thought about ways we could represent our $$f$$ as a limit of (Borel) measurable functions, but I don't think it is in general possible to find such a sequence for an arbitrary $$f$$. Further, I feel like I don't understand the condition that $$\lim _{ h\to 0^+ }{ f(x+h) }$$ exists correctly. Is it possible to derive some form of continuity with this property? Could someone shed some light on this problem for me? Thanks for any help. • Off-hand I don't know how to prove such a function is Borel measurable, but I KNOW that such a function has at most countably many discontinuities (follows from the much stronger results described here), and thus is actually a Baire one function (pointwise limit of continuous functions; the first level of the $\omega_1$-length hierarchy of Borel measurable functions), in fact better than Baire one because a Baire one function can be discontinuous on a $c$-dense set. – Dave L. Renfro Jan 26 at 20:13 If we set $$\tilde{f}(x) := \lim_{h \downarrow 0} f(x+h),$$ then $$\tilde{f}$$ is (by assumption) well-defined. Moreover, straightforward computations show that $$\tilde{f}$$ is right-continuous (see Lemma 3 below) and, hence, Borel measurable. If we can prove that the set $$J:=\{x \in \mathbb{R}; \tilde{f}(x) \neq f(x)\} \tag{1}$$ is countable, then $$g:=f-\tilde{f}$$ is Borel measurable (see e.g. this proof). Consequently, $$f=g+\tilde{f}$$ is Borel measurable as a sum of Borel measurable functions.
To prove that $$J$$ is countable we proceed as follows: For $$x \in \mathbb{R}$$ define the oscillations of $$f$$ at $$x$$ by $$\omega(x) := \inf_{r>0} \omega_r(x) := \inf_{r>0} \left( \sup_{z \in B(x,r)} f(z) - \inf_{z \in B(x,r)} f(z) \right).$$ It is not difficult to see that $$\{\omega=0\} = \{x \in \mathbb{R}; \text{x is a continuity point of f}\} \tag{2}$$ and therefore $$J \subseteq \{\omega \neq 0\}$$. Consequently, we are done if we can show that $$\{\omega \neq 0\}$$ is countable. Lemma 1: For any $$x \in \mathbb{R}$$ and $$n \in \mathbb{N}$$ there exists $$\delta(x)>0$$ such that $$\omega(y) \leq 1/n$$ for all $$y \in (x,x+\delta(x))$$. Proof: Fix $$x \in \mathbb{R}$$ and $$n \in \mathbb{N}$$. Since the limit $$\tilde{f}(x) = \lim_{h \downarrow 0} f(x+h)$$ exists, we have $$\sup_{z \in (x,x+h)} f(z) \xrightarrow[]{h \to 0} \tilde{f}(x) \qquad \inf_{z \in (x,x+h)} f(z) \xrightarrow[]{h \to 0} \tilde{f}(x),$$ and so $$\lim_{h \to 0} \left| \sup_{z \in (x,x+h)} f(z) - \inf_{z \in (x,x+h)} f(z) \right|=0;$$ in particular we can choose $$\delta>0$$ such that $$\left| \sup_{z \in (x,x+\delta)} f(z) - \inf_{z \in (x,x+\delta)} f(z) \right| \leq \frac{1}{n}. \tag{3}$$ Now let $$y \in (x,x+\delta)$$. If we set $$r := \min\{|y-x|,|y-(x+\delta)|\}$$ then $$B(y,r) \subseteq (x,x+\delta)$$. In particular, we have $$\sup_{z \in B(y,r)} f(z) \leq \sup_{z \in (x,x+\delta)} f(z) \qquad \inf_{z \in B(y,r)} f(z) \geq \inf_{z \in (x,x+\delta)} f(z) \tag{4}$$ which implies $$\inf_{z \in (x,x+\delta)} f(z) \leq \inf_{z \in B(y,r)} f(z) \leq \sup_{z \in B(y,r)} f(z) \leq \sup_{z \in (x,x+\delta)} f(z).$$ On the other hand, we know from $$(3)$$ that $$\sup_{z \in (x,x+\delta)} f(z) \leq \inf_{z \in (x,x+\delta)} f(z) + \frac{1}{n}.$$ Combining the two chains of inequalities we conclude that $$\sup_{z \in B(y,r)} f(z) - \inf_{z \in B(y,r)} f(z) \leq \frac{1}{n},$$ i.e. $$\omega_r(y) \leq 1/n$$. In particular, $$\omega(y) \leq 1/n$$ which finishes the proof of the Lemma. 
Lemma 2: $$\{\omega \neq 0\}$$ is countable. Proof: Clearly, it suffices to show that $$\{\omega > 1/n\}$$ is countable for each $$n \in \mathbb{N}$$. For fixed $$n \in \mathbb{N}$$ denote by $$\delta(x)$$ the constant from the previous lemma. For each fixed $$k \in \mathbb{N}$$ and $$N \in \mathbb{N}$$ the set $$B_{k,N} := \{x \in [-N,N] \cap \{\omega>1/n\}; \delta(x) \geq 1/k\}$$ is finite. Indeed: By the previous lemma, the distance between any two points in $$B_{k,N}$$ is at least $$1/k$$ and since the length of the interval $$[-N,N]$$ is $$2N$$, there can exist at most $$2Nk+1$$ points in $$B_{k,N}$$. This implies that $$\{x \in \{\omega>1/n\}; \delta(x) \geq 1/k\} = \bigcup_{N \in \mathbb{N}} B_{k,N}$$ is countable, which, in turn, implies that $$\{\omega>1/n\} = \bigcup_{k \in \mathbb{N}} \{x \in \{\omega>1/n\}; \delta(x) \geq 1/k\}$$ is countable. Edit: Following the comment to my answer, I add a proof for the right-continuity of $$\tilde{f}$$. Lemma 3: $$\tilde{f}$$ is right-continuous. Proof: Since $$\tilde{f}(y) = \lim_{h \downarrow 0} f(y+h)$$ we clearly have $$\inf_{z \in (y,y+r)} f(z) \leq \tilde{f}(y) \leq \sup_{z \in (y,y+r)} f(z) \tag{5}$$ for any $$y \in \mathbb{R}$$ and $$r>0$$. For fixed $$x \in \mathbb{R}$$ and $$\epsilon=1/n$$ let $$\delta=\delta(x)>0$$ be as in (the proof of) Lemma 1. Using (5) for $$y=x$$ we find that $$\inf_{z \in (x,x+\delta)} f(z) \leq \tilde{f}(x) \leq \sup_{z \in (x,x+\delta)} f(z).$$ On the other hand it follows from (4) and (5) that $$\inf_{z \in (x,x+\delta)} f(z) \leq \inf_{z \in B(y,r)} f(z) \leq \tilde{f}(y) \leq \sup_{z \in B(y,r)} f(z) \leq \sup_{z \in (x,x+\delta)} f(z)$$ for any $$y \in (x,x+\delta)$$ where $$r:=\min\{|y-x|,|y-(x+\delta)|\}$$.
Combining both inequalities and using (3) we get $$|\tilde{f}(x)-\tilde{f}(y)| \leq \left| \sup_{z \in (x,x+\delta)} f(z) - \inf_{z \in (x,x+\delta)} f(z) \right| \leq \frac{1}{n}$$ for all $$y \in (x,x+\delta)$$ which finishes the proof of the right-continuity of $$\tilde{f}$$. • Great reasoning! Just to make sure that I understand correctly: the Borel measurability of $g:=f-\tilde{f}$ follows from the fact that every countable set in $\mathbb{R}$ is Borel measurable? – MasterPI Jan 27 at 12:12 • @MasterPI Yes, essentially. We can write $g$ in the form $$g(x) = \sum_{j=1}^{\infty} c_j 1_{\{x_j\}}(x)$$ where $(x_j)_j$ is an enumeration of $J$; using the fact that every countable subset of $\mathbb{R}$ is Borel measurable, it can be easily checked from the definition of (Borel) measurability that $g$ is Borel measurable. – saz Jan 27 at 12:26 • That $\bar{f}$ is Borel also requires a proof. Note that for any open set $U$, $\bar{f}^{-1}(U)$ is an open set with respect to the "lower limit topology". The "lower limit topology" is strictly larger than the usual topology on $\mathbb{R}$. It is true that the $\sigma$-algebras generated by these two topologies are the same but the proof is hard and tricky. – Danny Pak-Keung Chan Jan 28 at 0:37 • @DannyPak-KeungChan It requires a proof, yes, but it's not that difficult. As I indicated in my answer, I would use the right-continuity of $\tilde{f}$ to conclude that $\tilde{f}$ is Borel measurable. Showing that $\tilde{f}$ is right-continuous is a bit tedious but not difficult (I've now added the proof, see Lemma 3) and that right-continuous functions are Borel measurable is a well-known fact which is also not difficult to prove (e.g. via an approximation argument, as in this answer). – saz Jan 28 at 5:19
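As a purely numerical illustration of the objects in this answer (not a substitute for the proof), take $f = 1_{\{0\}}$: it has right-hand limits everywhere, its right-limit function $\tilde{f}$ is identically $0$, and the exceptional set $J$ is the single point $0$. The fixed sampling offset below is an arbitrary stand-in for the limit $h \downarrow 0$:

```python
def f(x):
    # f = indicator of {0}: right-hand limits exist everywhere,
    # but f is discontinuous (and differs from its right limit) at x = 0
    return 1.0 if x == 0 else 0.0

def f_tilde(x, h=1e-9):
    # crude numerical stand-in for lim_{h -> 0+} f(x + h):
    # sample f just to the right of x at a tiny fixed offset
    return f(x + h)

# On a sample grid, the set J = {x : f_tilde(x) != f(x)} is just {0},
# a countable (here finite) set, matching Equation (1) of the answer.
grid = [k / 10 for k in range(-20, 21)]
J = [x for x in grid if f_tilde(x) != f(x)]
print(J)
```

Here `f_tilde` is only a heuristic approximation of the limit; for functions with genuine oscillation near a point, a single small offset would not suffice.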
http://nm.mathforcollege.com/NumericalMethodsTextbookUnabridged/chapter-08.05-on-solving-higher-order-and-coupled-ordinary-differential-equations.html
# Chapter 08.05: On Solving Higher-Order and Coupled Ordinary Differential Equations ## Learning Objectives After successful completion of this lesson, you should be able to: 1) write higher-order ordinary differential equations as simultaneous first-order ordinary differential equations, 2) solve higher-order ordinary differential equations numerically ## Description In the earlier lessons, we have learned Euler’s and Runge-Kutta methods to solve first-order ordinary differential equations of the form $\frac{{dy}}{{dx}} = f\left( x,y \right),y\left( x_{0} \right) = y_{0}\;\;\;\;\;\;\;\;\;\;\;\; (1)$ What do we do to solve simultaneous (coupled) differential equations or differential equations higher than first order? For example, an $$n^{{th}}$$order differential equation of the form $a_{n}\frac{d^{n}y}{dx^{n}} + a_{n - 1}\frac{d^{n - 1}y}{dx^{n - 1}} + \ldots + a_{1}\frac{{dy}}{{dx}} + a_{o}y = f\left( x \right)\;\;\;\;\;\;\;\;\;\;\;\; (2)$ with $$n$$ initial conditions can be solved by assuming $y = z_{1}\;\;\;\;\;\;\;\;\;\;\;\; (3.1)$ $\frac{{dy}}{{dx}} = \frac{dz_{1}}{{dx}} = z_{2}\;\;\;\;\;\;\;\;\;\;\;\; (3.2)$ $\frac{d^{2}y}{dx^{2}} = \frac{dz_{2}}{{dx}} = z_{3}\;\;\;\;\;\;\;\;\;\;\;\; (3.3)$ $\vdots$ $\frac{d^{n - 1}y}{dx^{n - 1}} = \frac{dz_{n - 1}}{{dx}} = z_{n}\;\;\;\;\;\;\;\;\;\;\;\; (3.n)$ $\begin{split} \frac{d^{n}y}{dx^{n}} &= \frac{dz_{n}}{{dx}}\\ &= \frac{1}{a_{n}}\left( - a_{n - 1}\frac{d^{n - 1}y}{dx^{n - 1}}\ldots - a_{1}\frac{{dy}}{{dx}} - a_{0}y + f\left( x \right) \right)\\ &= \frac{1}{a_{n}}\left( - a_{n - 1}z_{n}\ldots - a_{1}z_{2} - a_{0}z_{1} + f\left( x \right) \right)\ (3.n+1) \end{split}$ The above Equations from (3.2) to (3.n+1) represent $$n$$ first-order differential equations as follows $\frac{dz_{1}}{{dx}} = z_{2} = f_{1}\left( z_{1},z_{2},\ldots,x \right)\;\;\;\;\;\;\;\;\;\;\;\; (4.1)$ $\frac{dz_{2}}{{dx}} = z_{3} = f_{2}\left( z_{1},z_{2},\ldots,x \right)\;\;\;\;\;\;\;\;\;\;\;\; (4.2)$ $\vdots$ $\frac{dz_{n}}{{dx}} = 
\frac{1}{a_{n}}\left( - a_{n - 1}z_{n}\ldots - a_{1}z_{2} - a_{0}z_{1} + f\left( x \right) \right)\;\;\;\;\;\;\;\;\;\;\;\; (4.n)$ Each of the $${n }$$ first-order ordinary differential equations should be accompanied by one initial condition. The initial condition should be on the corresponding dependent variable on the left-hand side of the ordinary differential equation. For example, Equation (4.1) would need an initial condition on $$z_1$$, Equation (4.n) would need an initial condition on $$z_n$$, etc. These first-order ordinary differential equations (Equations (4.1) thru (4.n)) are simultaneous. Still, they can be solved by the methods used for solving first-order ordinary differential equations that we have already learned in the previous lessons. ### Example 1 Rewrite the following differential equation as a set of simultaneous first-order differential equations. $3\frac{d^{2}y}{dx^{2}} + 2\frac{{dy}}{{dx}} + 5y = e^{- x},y\left( 0 \right) = 5,\ y^{\prime}\left( 0 \right) = 7$ Solution The ordinary differential equation $3\frac{d^{2}y}{dx^{2}} + 2\frac{{dy}}{{dx}} + 5y = e^{- x},y\left( 0 \right) = 5,\ y^{\prime}\left( 0 \right) = 7 \;\;\;\;\;\;\;\;\;\;\;\; (E1.1)$ would be rewritten as follows. Assume $\frac{{dy}}{{dx}} = z, \;\;\;\;\;\;\;\;\;\;\;\;(E1.2)$ Then $\frac{d^{2}y}{dx^{2}} = \frac{{dz}}{{dx}}\;\;\;\;\;\;\;\;\;\;\;\;(E1.3)$ Substituting Equations (E1.2) and (E1.3) in the given second-order ordinary differential equation gives $3\frac{{dz}}{{dx}} + 2z + 5y = e^{- x}$ and rewritten as $\frac{{dz}}{{dx}} = \frac{1}{3}\left( e^{- x} - 2z - 5y \right) \;\;\;\;\;\;\;\;\;\;\;\;(E1.4)$ The set of two simultaneous first-order ordinary differential equations complete with the initial conditions then is $\frac{{dy}}{{dx}} = z,y\left( 0 \right) = 5 \;\;\;\;\;\;\;\;\;\;\;\; (E1.5a)$ $\frac{{dz}}{{dx}} = \frac{1}{3}\left( e^{- x} - 2z - 5y \right),z\left( 0 \right) = 7. 
\;\;\;\;\;\;\;\;\;\;\;\;(E1.5b)$ Now one can apply any of the numerical methods used for solving first-order ordinary differential equations. We write such equations in a subscripted format in a later lesson to make them suitable for matrix and vector representation. In the above example, we would introduce subscripted dependent variables, say, $$w_1$$ and $$w_2$$ as $w_1=y \;\;\;\;\;\;\;\;\;\;\;\;(E1.6)$ and $w_2=\displaystyle \frac{dy}{dx}\;\;\;\;\;\;\;\;\;\;\;\;(E1.7)$ Using Equations (E1.6) and (E1.7), Equations (E1.5a) and (E1.5b) can be rewritten in subscripted form as $\frac{{dw_1}}{{dx}} = w_2,w_1\left( 0 \right) = 5 \;\;\;\;\;\;\;\;\;\;\;\; (E1.8a)$ $\frac{{dw_2}}{{dx}} = \frac{1}{3}\left( e^{- x} - 2w_2 - 5w_1 \right),w_2\left( 0 \right) = 7. \;\;\;\;\;\;\;\;\;\;\;\;(E1.8b)$ Rewriting Equations (E1.8a) and E(1.8b) now as $\frac{{dw_1}}{{dx}} = 0w_1+1w_2,w_1\left( 0 \right) = 5 \;\;\;\;\;\;\;\;\;\;\;\; (E1.9a)$ $\frac{{dw_2}}{{dx}} = -\frac{5}{3}w_1-\frac{2}{3}w_2+\frac{1}{3}e^{- x}, w_2\left( 0 \right) = 7. \;\;\;\;\;\;\;\;\;\;\;\;(E1.9b)$ makes it ready to be written in a matrix form and is called the state-space model. In the matrix form, the Equations (E1.9a) and (E1.9b) are given as $\begin{bmatrix} \displaystyle \frac{dw_1}{dx} \\ \displaystyle\frac{dw_2}{dx} \\ \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\displaystyle \frac{5}{3} & -\displaystyle \frac{2}{3} \\ \end{bmatrix}\begin{bmatrix} w_{1} \\ w_{2} \\ \end{bmatrix} + \begin{bmatrix} 0 \\ \displaystyle\frac{1}{3}e^{-x} \\ \end{bmatrix} \;\;\;\;\;\;\;\;\;\;\;\; (E1.10)$ where the initial conditions are given by $\begin{bmatrix} w_{1}(0) \\ w_{2}(0) \\ \end{bmatrix} = \begin{bmatrix} 5 \\ 7 \\ \end{bmatrix}\;\;\;\;\;\;\;\;\;\;\;\; (E1.11)$ The motivation and description of the state-space model are described in detail in the next lesson for higher-order and coupled ordinary differential equations. 
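The state-space form in Equation (E1.10) maps directly to code. The following minimal sketch (function and variable names are my own, and the step size $h = 0.25$ is chosen purely for illustration) evaluates $d\mathbf{w}/dx = A\mathbf{w} + \mathbf{b}(x)$ and takes a few explicit Euler steps from the initial condition (E1.11):

```python
import math

A = [[0.0, 1.0],
     [-5.0 / 3.0, -2.0 / 3.0]]       # coefficient matrix from Equation (E1.10)

def b(x):
    return [0.0, math.exp(-x) / 3.0]  # forcing vector from Equation (E1.10)

def rhs(x, w):
    # dw/dx = A @ w + b(x), written out for the 2x2 case
    bx = b(x)
    return [A[0][0] * w[0] + A[0][1] * w[1] + bx[0],
            A[1][0] * w[0] + A[1][1] * w[1] + bx[1]]

# a few explicit Euler steps from w(0) = [5, 7], Equation (E1.11)
w, x, h = [5.0, 7.0], 0.0, 0.25
for _ in range(3):
    dw = rhs(x, w)
    w = [w[0] + h * dw[0], w[1] + h * dw[1]]
    x += h
print(x, w)
```

At $x = 0$ the right-hand side evaluates to $[7, -38/3]$, which you can confirm by hand from (E1.9a) and (E1.9b); any first-order solver from the earlier lessons can then be applied to `rhs` unchanged.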
### Example 2 Given $\frac{d^{2}y}{dt^{2}} + 2\frac{{dy}}{{dt}} + y = e^{- t},y\left( 0 \right) = 1,\frac{{dy}}{{dt}}\left( 0 \right) = 2,$ estimate the following by Euler’s method a)  $$y\left( 0.75 \right)$$ b)  the absolute relative true error for part(a), if $$\left. \ y\left( 0.75 \right) \right|_{{exact}} = 1.668$$ c)  $$\displaystyle \frac{{dy}}{{dt}}\left( 0.75 \right)$$ Use a step size of $$h = 0.25$$. Solution First, the second-order differential equation is written as two simultaneous first-order differential equations as follows. Assume $\frac{{dy}}{{dt}} = z$ then $\frac{{dz}}{{dt}} + 2z + y = e^{- t}$ $\frac{{dz}}{{dt}} = e^{- t} - 2z - y$ So the two simultaneous first-order differential equations are $\frac{{dy}}{{dt}} = z = f_{1}\left( t,y,z \right),y(0) = 1\;\;\;\;\;\;\;\;\;\;\;\; (E2.1)$ $\frac{{dz}}{{dt}} = e^{- t} - 2z - y = f_{2}\left( t,y,z \right),\ z(0) = 2\;\;\;\;\;\;\;\;\;\;\;\; (E2.2)$ Using Euler’s method on Equations (E2.1) and (E2.2), we get $y_{i + 1} = y_{i} + f_{1}\left( t_{i},y_{i},z_{i} \right)h\;\;\;\;\;\;\;\;\;\;\;\; (E2.3)$ $z_{i + 1} = z_{i} + f_{2}\left( t_{i},y_{i},z_{i} \right)h\;\;\;\;\;\;\;\;\;\;\;\; (E2.4)$ a) To find the value of $$y\left( 0.75 \right)$$ and since we are using a step size of $$0.25$$ and starting at $$t = 0$$, we need to take three steps to find the value of $$y\left( 0.75 \right)$$. 
For $$i = 0,t_{0} = 0,y_{0} = 1,z_{0} = 2$$, From Equation (E2.3) $\begin{split} y_{1} &= y_{0} + f_{1}\left( t_{0},y_{0},z_{0} \right)h\\ &= 1 + f_{1}\left( 0,1,2 \right)\left( 0.25 \right)\\ &=1+2\left(0.25\right)\\ &= 1.5 \end{split}$ $$y_{1}$$ is the approximate value of $$y$$ at $t = t_{1} = t_{0} + h = 0 + 0.25 = 0.25$ $y_{1} = y\left( 0.25 \right) \approx 1.5$ From Equation (E2.4) $\begin{split} z_{1} &= z_{0} + f_{2}\left( t_{0},y_{0},z_{0} \right)h\\ &= 2 + f_{2}\left( 0,1,2 \right)\left( 0.25 \right)\\ &= 2 + \left( e^{- 0} - 2\left( 2 \right) - 1 \right)\left( 0.25 \right)\\ &= 1 \end{split}$ $$z_{1}$$ is the approximate value of $$z$$ (same as $$\frac{{dy}}{{dt}}$$) at $$t = 0.25$$ $z_{1} = z\left( 0.25 \right) \approx 1$ For $$i = 1,t_{1} = 0.25,y_{1} = 1.5,z_{1} = 1$$, From Equation (E2.3) $\begin{split} y_{2}\ &= y_{1} + f_{1}\left( t_{1},y_{1},z_{1} \right)h\\ &= 1.5 + f_{1}\left( 0.25,1.5,1 \right)\left( 0.25 \right)\\ &= 1.5 + \left( 1 \right)\left( 0.25 \right)\\ &= 1.75 \end{split}$ $$y_{2}$$ is the approximate value of $$y$$ at $t = t_{2} = t_{1} + h = 0.25 + 0.25 = 0.50$ $y_{2} = y\left( 0.5 \right) \approx 1.75$ From Equation (E2.4) $\begin{split} z_{2} &= z_{1} + f_{2}\left( t_{1},y_{1},z_{1} \right)h\\ &= 1 + f_{2}\left( 0.25,1.5,1 \right)\left( 0.25 \right)\\ &= 1 + \left( e^{- 0.25} - 2\left( 1 \right) - 1.5 \right)\left( 0.25 \right)\\ &= 1 + \left( - 2.7211 \right)\left( 0.25 \right)\\ &= 0.31970 \end{split}$ $$z_{2}$$ is the approximate value of $${z}$$ at $t = t_{2} = 0.5$ $z_{2} = z\left( 0.5 \right) \approx 0.31970$ For $$i = 2,t_{2} = 0.5,y_{2} = 1.75,z_{2} = 0.31970$$, From Equation (E2.3) $\begin{split} y_{3} &= y_{2} + f_{1}\left( t_{2},y_{2},z_{2} \right)h\\ &= 1.75 + f_{1}\left( 0.50,1.75,0.31970 \right)\left( 0.25 \right)\\ &= 1.75 + \left( 0.31970 \right)\left( 0.25 \right)\\ &= 1.8299 \end{split}$ $$y_{3}$$ is the approximate value of $$y$$ at $t = t_{3} = t_{2} + h = 0.5 + 0.25 = 0.75$ $$y_{3} = y\left( 0.75 \right) 
\approx 1.8299$$

From Equation (E2.4)

$\begin{split} z_{3} &= z_{2} + f_{2}\left( t_{2},y_{2},z_{2} \right)h\\ &= 0.31970 + f_{2}\left( 0.50,1.75,0.31970 \right)\left( 0.25 \right)\\ &= 0.31970 + \left( e^{- 0.50} - 2\left( 0.31970 \right) - 1.75 \right)\left( 0.25 \right)\\ &= 0.31970 + \left( - 1.7829 \right)\left( 0.25 \right)\\ &= - 0.12602 \end{split}$

$$z_{3}$$ is the approximate value of $$z$$ at

$t = t_{3} = 0.75$

$z_{3} = z\left( 0.75 \right) \approx - 0.12602$

$y\left( 0.75 \right) \approx y_{3} = 1.8299$

b) The exact value of $$y\left( 0.75 \right)$$ is

$\left. \ y\left( 0.75 \right) \right|_{{exact}} = 1.668$

The absolute relative true error in the result from part (a) is

$\begin{split} \left| \in_{t} \right| &= \left| \frac{1.668 - 1.8299}{1.668} \right| \times 100\\ &= 9.7062\% \end{split}$

c)

$\begin{split} \frac{dy}{dt}\left(0.75\right) &= z_3\\ &\approx - 0.12602 \end{split}$

### Example 3

Given

$\frac{d^{2}y}{dt^{2}} + 2\frac{{dy}}{{dt}} + y = e^{- t},\ y(0) = 1,\ \frac{{dy}}{{dt}}(0) = 2,$

estimate the following by Heun's method.

a)  $$y\left( 0.75 \right)$$

b)  $$\displaystyle \frac{{dy}}{{dt}}\left( 0.75 \right)$$

Use a step size of $$h = 0.25$$.

Solution

First, the second-order differential equation is rewritten as two simultaneous first-order differential equations as follows.
Assume

$\frac{{dy}}{{dt}} = z$

then

$\frac{{dz}}{{dt}} + 2z + y = e^{- t}$

$\frac{{dz}}{{dt}} = e^{- t} - 2z - y$

So the two simultaneous first-order differential equations with the corresponding initial conditions are

$\frac{{dy}}{{dt}} = z = f_{1}\left( t,y,z \right),\ y(0) = 1\;\;\;\;\;\;\;\;\;\;\;\; (E3.1)$

$\frac{{dz}}{{dt}} = e^{- t} - 2z - y = f_{2}\left( t,y,z \right),\ z(0) = 2\;\;\;\;\;\;\;\;\;\;\;\; (E3.2)$

Using Heun's method on Equations (E3.1) and (E3.2), we get

$y_{i + 1} = y_{i} + \frac{1}{2}\left( k_{1}^{y} + k_{2}^{y} \right)h\;\;\;\;\;\;\;\;\;\;\;\; (E3.3)$

$k_{1}^{y} = f_{1}\left( t_{i},\ y_{i},\ z_{i} \right)\;\;\;\;\;\;\;\;\;\;\;\; (E3.4a)$

$k_{2}^{y} = f_{1}\left( t_{i} + h,\ y_{i} + hk_{1}^{y},\ z_{i} + hk_{1}^{z} \right)\;\;\;\;\;\;\;\;\;\;\;\; (E3.4b)$

$z_{i + 1} = z_{i} + \frac{1}{2}\left( k_{1}^{z} + k_{2}^{z} \right)h\;\;\;\;\;\;\;\;\;\;\;\; (E3.5)$

$k_{1}^{z} = f_{2}\left( t_{i},\ y_{i},\ z_{i} \right)\;\;\;\;\;\;\;\;\;\;\;\; (E3.6a)$

$k_{2}^{z} = f_{2}\left( t_{i} + h,\ y_{i} + hk_{1}^{y},\ z_{i} + hk_{1}^{z} \right)\;\;\;\;\;\;\;\;\;\;\;\; (E3.6b)$

For $$i = 0,\ t_{0} = 0,\ y_{0} = 1,\ z_{0} = 2$$, from Equation (E3.4a)

$\begin{split} k_{1}^{y} &= f_{1}\left( t_{0},y_{0},z_{0} \right)\\ &= f_{1}\left( 0,1,2 \right)\\ &= 2 \end{split}$

From Equation (E3.6a)

$\begin{split} k_{1}^{z} &= f_{2}\left( t_{0},y_{0},z_{0} \right)\\ &= f_{2}\left( 0,1,2 \right)\\ &= e^{- 0} - 2\left( 2 \right) - 1\\ &= -4 \end{split}$

From Equation (E3.4b)

$\begin{split} k_{2}^{y} &= f_{1}\left( t_{0} + h,y_{0} + hk_{1}^{y},z_{0} + hk_{1}^{z} \right)\\ &= f_{1}\left( 0 + 0.25,1 + \left( 0.25 \right)\left( 2 \right),2 + \left( 0.25 \right)\left( - 4 \right) \right)\\ &= f_{1}\left( 0.25,1.5,1 \right)\\ &= 1 \end{split}$

From Equation (E3.6b)

$\begin{split} k_{2}^{z} &= f_{2}\left( t_{0} + h,y_{0} + hk_{1}^{y},z_{0} + hk_{1}^{z} \right)\\ &= f_{2}\left( 0 + 0.25,1 + \left( 0.25 \right)\left( 2 \right),2 + \left( 0.25 \right)\left( - 4
\right) \right)\\ &= f_{2}\left( 0.25,1.5,1 \right)\\ &= e^{- 0.25} - 2\left( 1 \right) - 1.5\\ &= - 2.7212 \end{split}$

From Equation (E3.3)

$\begin{split} y_{1} &= y_{0} + \frac{1}{2}\left( k_{1}^{y} + k_{2}^{y} \right)h\\ &= 1 + \frac{1}{2}\left( 2 + 1 \right)\left( 0.25 \right)\\ &= 1.375 \end{split}$

$$y_{1}$$ is the approximate value of $$y$$ at

$t = t_{1} = t_{0} + h = 0 + 0.25 = 0.25$

$y_{1} = y\left( 0.25 \right) \approx 1.375$

From Equation (E3.5)

$\begin{split} z_{1} &= z_{0} + \frac{1}{2}\left( k_{1}^{z} + k_{2}^{z} \right)h\\ &= 2 + \frac{1}{2}( - 4 + ( - 2.7212))(0.25)\\ &= 1.1598 \end{split}$

$$z_{1}$$ is the approximate value of $$z$$ at

$t = t_{1} = 0.25$

$z_{1} = z\left( 0.25 \right) \approx 1.1598$

For $$i = 1,\ t_{1} = 0.25,\ y_{1} = 1.375,\ z_{1} = 1.1598$$, from Equation (E3.4a)

$\begin{split} k_{1}^{y} &= f_{1}\left( t_{1},y_{1},z_{1} \right)\\ &= f_{1}\left( 0.25,1.375,1.1598 \right)\\ &= 1.1598 \end{split}$

From Equation (E3.6a)

$\begin{split} k_{1}^{z} &= f_{2}\left( t_{1},y_{1},z_{1} \right)\\ &= f_{2}\left( 0.25,1.375,1.1598 \right)\\ &= e^{- 0.25} - 2\left( 1.1598 \right) - 1.375\\ &= - 2.9158 \end{split}$

From Equation (E3.4b)

$\begin{split} k_{2}^{y} &= f_{1}\left( t_{1} + h,y_{1} + hk_{1}^{y},z_{1} + hk_{1}^{z} \right)\\ &= f_{1}\left( 0.25 + 0.25,1.375 + \left( 0.25 \right)(1.1598),1.1598 + \left( 0.25 \right)\left( - 2.9158 \right) \right)\\ &= f_{1}\left( 0.50,1.6649,0.43087 \right)\\ &= 0.43087 \end{split}$

From Equation (E3.6b)

$\begin{split} k_{2}^{z} &= f_{2}\left( t_{1} + h,y_{1} + hk_{1}^{y},z_{1} + hk_{1}^{z} \right)\\ &= f_{2}\left( 0.25 + 0.25,1.375 + \left( 0.25 \right)(1.1598),1.1598 + \left( 0.25 \right)\left( - 2.9158 \right) \right)\\ &= f_{2}\left( 0.50,1.6649,0.43087 \right)\\ &= e^{- 0.50} - 2\left( 0.43087 \right) - 1.6649\\ &= - 1.9201 \end{split}$

From Equation (E3.3)

$\begin{split} y_{2} &= y_{1} + \frac{1}{2}\left( k_{1}^{y} + k_{2}^{y} \right)h\\ &= 1.375 + \frac{1}{2}\left( 1.1598 + 0.43087 \right)\left( 0.25
\right)\\ &= 1.5738 \end{split}$

$$y_{2}$$ is the approximate value of $$y$$ at

$t = t_{2} = t_{1} + h = 0.25 + 0.25 = 0.50$

$y_{2} = y\left( 0.50 \right) \approx 1.5738$

From Equation (E3.5)

$\begin{split} z_{2} &= z_{1} + \frac{1}{2}\left( k_{1}^{z} + k_{2}^{z} \right)h\\ &= 1.1598 + \frac{1}{2}( - 2.9158 + ( - 1.9201))(0.25)\\ &= 0.55533 \end{split}$

$$z_{2}$$ is the approximate value of $$z$$ at

$t = t_{2} = 0.50$

$z_{2} = z\left( 0.50 \right) \approx 0.55533$

For $$i = 2,\ t_{2} = 0.50,\ y_{2} = 1.5738,\ z_{2} = 0.55533$$, from Equation (E3.4a)

$\begin{split} k_{1}^{y} &= f_{1}\left( t_{2},y_{2},z_{2} \right)\\ &= f_{1}\left( 0.50,1.5738,0.55533 \right)\\ &= 0.55533 \end{split}$

From Equation (E3.6a)

$\begin{split} k_{1}^{z} &= f_{2}\left( t_{2},y_{2},z_{2} \right)\\ &= f_{2}\left( 0.50,1.5738,0.55533 \right)\\ &= e^{- 0.50} - 2\left( 0.55533 \right) - 1.5738\\ &= - 2.0779 \end{split}$

From Equation (E3.4b)

$\begin{split} k_{2}^{y} &= f_{1}\left( t_{2} + h,y_{2} + hk_{1}^{y},z_{2} + hk_{1}^{z} \right)\\ &= f_{1}\left( 0.50 + 0.25,1.5738 + \left( 0.25 \right)(0.55533),0.55533 + \left( 0.25 \right)\left( - 2.0779 \right) \right)\\ &= f_{1}\left( 0.75,1.7126,0.035836 \right)\\ &= 0.035836 \end{split}$

From Equation (E3.6b)

$\begin{split} k_{2}^{z} &= f_{2}\left( t_{2} + h,y_{2} + hk_{1}^{y},z_{2} + hk_{1}^{z} \right)\\ &= f_{2}\left( 0.50 + 0.25,1.5738 + \left( 0.25 \right)(0.55533),0.55533 + \left( 0.25 \right)\left( - 2.0779 \right) \right)\\ &= f_{2}\left( 0.75,1.7126,0.035836 \right)\\ &= e^{- 0.75} - 2\left( 0.035836 \right) - 1.7126\\ &= - 1.3119 \end{split}$

From Equation (E3.3)

$\begin{split} y_{3} &= y_{2} + \frac{1}{2}\left( k_{1}^{y} + k_{2}^{y} \right)h\\ &= 1.5738 + \frac{1}{2}\left( 0.55533 + 0.035836 \right)\left( 0.25 \right)\\ &= 1.6477 \end{split}$

$$y_{3}$$ is the approximate value of $$y$$ at

$t = t_{3} = t_{2} + h = 0.50 + 0.25 = 0.75$

$y_{3} = y\left( 0.75 \right) \approx 1.6477$

b) From Equation (E3.5)

$\begin{split} z_{3} &= z_{2} +
\frac{1}{2}\left( k_{1}^{z} + k_{2}^{z} \right)h\\ &= 0.55533 + \frac{1}{2}( - 2.0779 + ( - 1.3119))(0.25)\\ &= 0.13158 \end{split}$

$$z_{3}$$ is the approximate value of $$z$$ at

$t = t_{3} = 0.75$

$z_{3} = z\left( 0.75 \right) \approx 0.13158$

The intermediate and the final results are shown in Table 1.

Table 1: Intermediate results of Heun's method.

| | $$i = 0$$ | $$i = 1$$ | $$i = 2$$ |
|---|---|---|---|
| $$t_i$$ | $$0$$ | $$0.25$$ | $$0.50$$ |
| $$y_i$$ | $$1$$ | $$1.3750$$ | $$1.5738$$ |
| $$z_i$$ | $$2$$ | $$1.1598$$ | $$0.55533$$ |
| $$k_1^y$$ | $$2$$ | $$1.1598$$ | $$0.55533$$ |
| $$k_1^z$$ | $$-4$$ | $$-2.9158$$ | $$-2.0779$$ |
| $$k_2^y$$ | $$1$$ | $$0.43087$$ | $$0.035836$$ |
| $$k_2^z$$ | $$-2.7212$$ | $$-1.9201$$ | $$-1.3119$$ |
| $$y_{i+1}$$ | $$1.3750$$ | $$1.5738$$ | $$1.6477$$ |
| $$z_{i+1}$$ | $$1.1598$$ | $$0.55533$$ | $$0.13158$$ |

## Learning Objectives

After successful completion of this lesson, you should be able to:

1) set up higher-order and coupled ordinary differential equations as simultaneous first-order ordinary differential equations

2) write higher-order and coupled ordinary differential equations in a state-space model form

3) solve higher-order and coupled ordinary differential equations numerically using software programs

## Introduction

In the previous lesson, we showed how to write a higher-order ordinary differential equation as a set of first-order differential equations along with the corresponding initial conditions. In this lesson, we show how to rewrite that set of first-order differential equations in matrix form, called the state-space model. We illustrate the concept through examples with higher-order and coupled ordinary differential equations.

### Example 1

Write the following ordinary differential equation as a state-space model

$17\frac{d^{3}y}{{dt}^{3}} + 3\frac{d^{2}y}{{dt}^{2}} + 7\frac{{dy}}{{dt}} + 5y = 11e^{- t},$

$y\left( 0 \right) = 13,\ \frac{{dy}}{{dt}}\left( 0 \right) = 19,\ \frac{d^{2}y}{{dt}^{2}}\left( 0 \right) = 23$

Solution

The differential equation is of the third order, and we have three state variables.
Let these be named as $x_{1} = y\;\;\;\;\;\;\;\;\;\;\;\; (E1.1)$ $x_{2} = \dot{y}\;\;\;\;\;\;\;\;\;\;\;\; (E1.2)$ $x_{3} = \ddot{y}\;\;\;\;\;\;\;\;\;\;\;\; (E1.3)$ Note the symbols. The symbol $$\dot{y}$$ stands for $$\frac{{dy}}{{dt}}$$ and $$\ddot{y}$$ stands for $$\frac{d^{2}y}{{dt}^{2}}$$. We are using these symbols as they are the norm in most textbooks. We also have $\dot{x_{3}} = \dddot{y}\;\;\;\;\;\;\;\;\;\;\;\; (E1.4)$ So the given ordinary differential equation $17\frac{d^{3}y}{{dt}^{3}} + 3\frac{d^{2}y}{{dt}^{2}} + 7\frac{{dy}}{{dt}} + 5y = 11e^{- t},$ can be written as $17\dot{x_{3}} + 3x_{3} + 7x_{2} + 5x_{1} = 11e^{- t}\;\;\;\;\;\;\;\;\;\;\;\; (E1.5)$ Writing equations (E1.2), (E1.3), and (E1.5) with the first derivatives on the left side, we get $\dot{x_{1}} = \dot{y} = x_{2}$ $\dot{x_{2}} = \ddot{y} = x_{3}$ $\dot{x_{3}} = \frac{11e^{- t} - 3x_{3} - 7x_{2} - 5x_{1}}{17}$ Rewriting the above three equations with coefficients for all the state variables, we get $\dot{x_{1}} = 0x_{1} + 1x_{2} + 0x_{3} + 0$ $\dot{x_{2}} = 0x_{1} + 0x_{2} + 1x_{3} + 0$ $\dot{x_{3}} = - \frac{5}{17}x_{1} - \frac{7}{17}x_{2} - \frac{3}{17}x_{3} + \frac{11e^{- t}}{17}$ In the matrix form, the state-space model is given as $\begin{bmatrix} \dot{x_{1}} \\ \dot{x_{2}} \\ \dot{x_{3}} \\ \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ - \displaystyle\frac{5}{17} & - \displaystyle\frac{7}{17} & - \displaystyle\frac{3}{17} \\ \end{bmatrix}\begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \displaystyle\frac{11e^{- t}}{17} \\ \end{bmatrix} \;\;\;\;\;\;\;\;\;\;\;\; (E1.6)$ where the conditions are given by $\begin{bmatrix} x_{1}(0) \\ x_{2}(0) \\ x_{3}(0) \\ \end{bmatrix} = \begin{bmatrix} y(0) \\ \dot{y}(0) \\ \ddot{y}(0) \\ \end{bmatrix} = \begin{bmatrix}\displaystyle y(0) \\\displaystyle \frac{{dy}}{{dt}}\left( 0 \right) \\\displaystyle \frac{d^{2}y}{{dt}^{2}}(0) \\ \end{bmatrix} = \begin{bmatrix} 13 \\ 19 \\ 23 \\ \end{bmatrix}$ and $y 
= \begin{bmatrix} 1 & 0 & 0 \\ \end{bmatrix}\begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ \end{bmatrix}$ In Equation (E1.6), the $$3\times3$$ matrix below $[A] = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ - \displaystyle\frac{5}{17} & - \displaystyle\frac{7}{17} & - \displaystyle\frac{3}{17} \\ \end{bmatrix}$ is the state matrix. The 3-element column vector in Equation (E1.6) $[X]=\begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ \end{bmatrix}$ is called the state vector. The 3-element column vector in Equation (E1.6) $[u] = \begin{bmatrix} 0 \\ 0 \\ \displaystyle\frac{11e^{- t}}{17} \\ \end{bmatrix}$ is called the input vector or forcing function vector. The left side of the equation consists of the first derivatives of the state vector. The general form of the state-space model is $[\dot{X}] = [A][X]+[u]$ ### Example 2 Reduce the following coupled ordinary differential equations to a set of first-order differential equations complete with initial conditions and in the matrix form required to solve them numerically. $10\frac{d^2x_1}{dt^2} - 15\left(-2x_1 + x_2 \right) = 0$ $20 \frac{d^2x_2}{dt^2} - 15\left(x_1-2x_2 \right) = 0$ $\frac{dx_1}{dt}\left(0\right) = 4,\ \frac{dx_2}{dt}\left(0\right) = 5,\ x_1\left(0\right) = 2,\ x_2\left(0\right) = 3$ Solution The differential equations given are second-order ordinary differential equations $10\frac{d^2x_1}{dt^2} - 15\left(-2x_1 + x_2 \right) = 0\;\;\;\;\;\;\;\;\;\;\;\; (E2.1)$ $20 \frac{d^2x_2}{dt^2} - 15\left(x_1-2x_2 \right) = 0\;\;\;\;\;\;\;\;\;\;\;\; (E2.2)$ The coupled ordinary differential equations (E2.1) and (E2.2) would be rewritten as first-order ordinary differential equations. 
Assuming

$\frac{dx_1}{dt} = x_3, \;\;\;\;\;\;\;\;\;\;\;\; (E2.3)$

$\frac{dx_2}{dt} = x_4 \;\;\;\;\;\;\;\;\;\;\;\; (E2.4)$

then

$\frac{d^2x_1}{dt^2} = \frac{dx_3}{dt},\;\;\;\;\;\;\;\;\;\;\;\; (E2.5)$

$\frac{d^2x_2}{dt^2} = \frac{dx_4}{dt}\;\;\;\;\;\;\;\;\;\;\;\; (E2.6)$

Substituting Equations (E2.3), (E2.4), (E2.5), and (E2.6) with the two new variables $$x_3$$ and $$x_4$$ in the given second-order ordinary differential equation (E2.1) gives

$10\frac{dx_3}{dt} - 15\left(-2x_1+x_2\right) = 0$

and then rewriting it gives

$\frac{dx_3}{dt} = 1.5\left(-2x_1+x_2\right)\;\;\;\;\;\;\;\;\;\;\;\; (E2.7)$

Substituting Equations (E2.3), (E2.4), (E2.5), and (E2.6) with the two new variables $$x_3$$ and $$x_4$$ in the given second-order ordinary differential equation (E2.2) gives

$20\frac{dx_4}{dt} - 15\left(x_1-2x_2\right) = 0$

and then rewriting it gives

$\frac{dx_4}{dt} = 0.75\left(x_1-2x_2\right)\;\;\;\;\;\;\;\;\;\;\;\; (E2.8)$

Now, writing Equations (E2.3), (E2.4), (E2.7), and (E2.8) along with the initial conditions gives

$\frac{dx_1}{dt} = x_3,\ x_1(0) = 2,\;\;\;\;\;\;\;\;\;\;\;\; (E2.9a)$

$\frac{dx_2}{dt} = x_4,\ x_2(0) = 3,\;\;\;\;\;\;\;\;\;\;\;\; (E2.9b)$

$\frac{dx_3}{dt} = 1.5(-2x_1+x_2),\ x_3(0) = 4,\;\;\;\;\;\;\;\;\;\;\;\; (E2.9c)$

$\frac{dx_4}{dt} = 0.75(x_1-2x_2),\ x_4(0) = 5,\;\;\;\;\;\;\;\;\;\;\;\; (E2.9d)$

Preparing for the state-space model form, rewriting Equations (E2.9a) through (E2.9d) with coefficients for all the state variables gives

$\frac{dx_1}{dt} = 0x_1+0x_2+ 1x_3+0x_4,\ x_1(0) = 2,\;\;\;\;\;\;\;\;\;\;\;\; (E2.10a)$

$\frac{dx_2}{dt} = 0x_1 + 0x_2+ 0x_3+ 1x_4,\ x_2(0) = 3,\;\;\;\;\;\;\;\;\;\;\;\; (E2.10b)$

$\frac{dx_3}{dt} = -3x_1 +1.5x_2 + 0x_3 +0x_4,\ x_3(0) = 4,\;\;\;\;\;\;\;\;\;\;\;\; (E2.10c)$

$\frac{dx_4}{dt} = 0.75x_1 -1.5x_2 +0x_3+0x_4,\ x_4(0) = 5,\;\;\;\;\;\;\;\;\;\;\;\; (E2.10d)$

Equations (E2.10a)-(E2.10d) can now be written in state-space model form as

$\left[\begin{array}{l} \dot{x}_{1} \\ \dot{x}_{2} \\ \dot{x}_{3} \\ \dot{x}_{4} \end{array}\right]=\left[\begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\
-3 & 1.5 & 0 & 0 \\ 0.75 & -1.5 & 0 & 0 \end{array}\right]\left[\begin{array}{l} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{array}\right]$ with the initial conditions as $\left[\begin{array}{l} x_{1}(0) \\ x_{2}(0) \\ x_{3}(0) \\ x_{4}(0) \end{array}\right]=\left[\begin{array}{l} 2 \\ 3 \\ 4 \\ 5 \end{array}\right]$ Can you identify the state vector, state matrix, input vector, and the initial conditions on the state vector? ### Example 3 A suspension system of a bus can be modeled as below. Figure 1: a) School bus b) A model of 1/4th of suspension system of a bus (Credit: Model and numbers from <http://www.monografias.com/trabajos-pdf/modeling-bus-suspension-transfer-function/modeling-bus-suspension-transfer-function.pdf>) Only 1/4th of the bus is modeled. The differential equations that govern the above system can be derived (this is something you will do in your vibrations course) as $M_{1}\frac{d^{2}x_{1}}{dt^{2}} + B_{1}\left( \frac{dx_{1}}{{dt}} - \frac{dx_{2}}{{dt}} \right) + K_{1}\left( x_{1} - x_{2} \right) = 0\;\;\;\;\;\;\;\;\;\;\;\; (E3.1)$ $\begin{split} M_{2}\frac{d^{2}x_{2}}{dt^{2}} + B_{1}\left( \frac{dx_{2}}{{dt}} - \frac{dx_{1}}{{dt}} \right) &+ K_{1}\left( x_{2} - x_{1} \right) + B_{2}\left( \frac{dx_{2}}{{dt}} - \frac{{dw}}{{dt}} \right)\\ & \ \ \ \ \ \ + K_{2}\left( x_{2} - w \right) = 0\;\;\;\;\;\;\;\;\;\;\;\; (E3.2) \end{split}$ $x_{1}(0) = 0,x_{3}(0) = 0,x_{2}(0) = 0,x_{4}(0) = 0$ where $M_{1} =\text{body mass}$ $M_{2} =\text{suspension mass}$ $K_{1} =\text{spring constant of the suspension system}$ $K_{2} =\text{spring constant of wheel and tire}$ $B_{1} =\text{damping constant of the suspension system}$ $B_{2} =\text{damping constant of wheel and tire}$ $x_{1} =\text{displacement of the body mass as a function of time}$ $x_{2} =\text{displacement of the suspension mass as a function of time}$ $w =\text{input profile of the road as a function of time}$ The constants are given as $m_{1} = 2500\ \text{kg}$ $m_{2} = 320\ \text{kg}$ $k_{1} = 80000\ 
\text{N/m},$ $k_{2} = 500000\ \text{N/m},$ $B_{1} = 350\ \text{N-s/m},$ $B_{2} = 15020\ \text{N-s/m}$

Reduce the simultaneous differential equations (E3.1) and (E3.2) to simultaneous first-order differential equations and put those in the state variable form complete with corresponding initial conditions.

Solution

Substituting the values of the constants in the two differential equations (E3.1) and (E3.2) gives the differential equations (E3.3) and (E3.4), respectively.

$2500\frac{d^{2}x_{1}}{dt^{2}} + 350\left( \frac{dx_{1}}{{dt}} - \frac{dx_{2}}{{dt}} \right) + 80000\left( x_{1} - x_{2} \right) = 0\;\;\;\;\;\;\;\;\;\;\;\; (E3.3)$

$\begin{split} &320\frac{d^{2}x_{2}}{dt^{2}} + 350\left( \frac{dx_{2}}{{dt}} - \frac{dx_{1}}{{dt}} \right) + 80000\left( x_{2} - x_{1} \right) + 15020\left( \frac{dx_{2}}{{dt}} - \frac{{dw}}{{dt}} \right) +\\ & 500000(x_{2} - w) = 0\;\;\;\;\;\;\;\;\;\;\;\; (E3.4) \end{split}$

Since $$w$$ is an input, we take it to the right-hand side to show it as a forcing function and rewrite Equation (E3.4) as

$\begin{split} &320\frac{d^{2}x_{2}}{dt^{2}} + 350\left( \frac{dx_{2}}{{dt}} - \frac{dx_{1}}{{dt}} \right) + 80000\left( x_{2} - x_{1} \right) + 15020\left( \frac{dx_{2}}{{dt}} \right) + 500000x_{2}\\ &=15020\frac{dw}{dt}+500000w\;\;\;\;\;\;\;\;\;\;\;\; (E3.5) \end{split}$

Now let us start the process of reducing the two simultaneous differential equations (Equations (E3.3) and (E3.5)) to four simultaneous first-order ordinary differential equations.
Choose $\frac{dx_{1}}{{dt}} = x_{3},\;\;\;\;\;\;\;\;\;\;\;\; (E3.6)$ $\frac{dx_{2}}{{dt}} = x_{4},\;\;\;\;\;\;\;\;\;\;\;\; (E3.7)$ then Equation (E3.3) $2500\frac{d^{2}x_{1}}{dt^{2}} + 350\left( \frac{dx_{1}}{{dt}} - \frac{dx_{2}}{{dt}} \right) + 80000\left( x_{1} - x_{2} \right) = 0$ can be written as $2500\frac{dx_{3}}{{dt}} + 350\left( x_{3} - x_{4} \right) + 80000\left( x_{1} - x_{2} \right) = 0$ $2500\frac{dx_{3}}{{dt}} = - 350(x_{3} - x_{4}) - 80000(x_{1} - x_{2})$ $\begin{split} \frac{dx_{3}}{{dt}} &= - 0.14\left( x_{3} - x_{4} \right) - 32\left( x_{1} - x_{2} \right)\\ &= - 32x_{1} - 0.14x_{3} + 32x_{2} + 0.14x_{4}\;\;\;\;\;\;\;\;\;\;\;\; (E3.8)\end{split}$ and Equation (E3.5) $\begin{split} &320\frac{d^{2}x_{2}}{dt^{2}} + 350\left( \frac{dx_{2}}{{dt}} - \frac{dx_{1}}{{dt}} \right) + 80000\left( x_{2} - x_{1} \right) + 15020\left( \frac{dx_{2}}{{dt}} \right) + 500000x_{2} \\ &= 15020\frac{{dw}}{{dt}} + 500000w \end{split}$ can be written as $\begin{split} &320\frac{dx_{4}}{{dt}} + 350\left( x_{4} - x_{3} \right) + 80000\left( x_{2} - x_{1} \right) + 15020x_{4} + 500000x_{2}\\ &= 15020\frac{{dw}}{{dt}} + 500000w \end{split}$ $\begin{split} 320\frac{dx_{4}}{{dt}} = &- 350(x_{4} - x_{3}) - 80000(x_{2} - x_{1}) - 15020x_{4} - \\ & 500000x_{2} + 15020\frac{{dw}}{{dt}} + 500000w \end{split}$ $\begin{split} \frac{dx_{4}}{{dt}} &= - 1.09375\left( x_{4} - x_{3} \right) - 250\left( x_{2} - x_{1} \right) - 46.9375x_{4} - 1562.5x_{2}\\& \ \ \ \ \ \ \ \ \ \ + 46.9375\frac{{dw}}{{dt}} + 1562.5w\\ &= 250x_{1} + 1.09375x_{3} - 1812.5x_{2} - 48.03125x_{4}\\ &\ \ \ \ \ \ \ \ \ \ + 1562.5w + 46.9375\frac{{dw}}{{dt}}\ \ \ \ \ \ \ \ \ (E3.9) \end{split}$ The 4 simultaneous first-order differential equations given by Equations (E3.6) thru (E3.9), complete with the corresponding initial condition, are $\begin{split} \frac{dx_{1}}{{dt}} &= x_{3}\\ &=f_1\left(t,x_1,x_2,x_3,x_4\right)\ \text{with } x_1\left(0\right)=0\;\;\;\;\;\;\;\;\;\;\;\; (E3.10) \end{split}$ $\begin{split} 
\frac{dx_{2}}{{dt}} &= x_{4}\\ &=f_2\left(t,x_1,x_2,x_3,x_4\right)\text{with }x_2\left(0\right)=0\;\;\;\;\;\;\;\;\;\;\;\; (E3.11) \end{split}$ $\begin{split} \frac{dx_{3}}{{dt}} &= - 32x_{1} + 32x_{2} - 0.14x_{3} + 0.14x_{4}\\ &=f_3\left(t,x_1,x_2,x_3,x_4\right)\text{with }x_3\left(0\right)=0\;\;\;\;\;\;\;\;\;\;\;\; (E3.12) \end{split}$ $\begin{split} \frac{dx_{4}}{{dt}} &= 250x_{1} - 1812.5x_{2} + 1.09375x_{3} - 48.03125x_{4} + 1562.5w + 46.9375\frac{{dw}}{{dt}}\\ &=f_4\left(t,x_1,x_2,x_3,x_4\right)\text{with }x_4\left(0\right)=0\;\;\;\;\;\;\;\;\;\;\;\; (E3.13) \end{split}$ The profile of the road is given below. Assuming that the bus is going at $$60 \ \text{mph}$$, that is, approximately $$27 \ \text{m/s}$$, it takes $\frac{6m}{27{m/s}} = 0.22\ \text{s}$ to go through one period. So the frequency $\begin{split} f &= \frac{1}{0.22}\\ &= {4.545\ \ \text{Hz}}\end{split}$ The angular frequency then is $\begin{split} \omega &=2 \times \pi \times4.545\\ &= 28.6\ \text{rad/s} \end{split}$ giving $\begin{split} w &= 0.01\sin\left( {\omega t} \right)\\ &= 0.01\sin\left( 28.6t \right)\end{split}$ and $\frac{{dw}}{{dt}} = 0.286\cos\left( 28.6t \right)$ To put the differential equations given by Equations (E3.10)-(E3.13) in matrix form, we rewrite them as $\frac{dx_{1}}{{dt}} = x_{3} = 0x_{1} + 0x_{2} + 1x_{3} + 0x_{4},\ {x}_{1}\left( 0 \right) = 0\;\;\;\;\;\;\;\;\;\;\;\; (E3.14)$ $\frac{dx_{2}}{{dt}} = x_{4} = 0x_{1} + 0x_{2} + 0x_{3} + 1x_{4},\ {x}_{2}\left( 0 \right) = 0\;\;\;\;\;\;\;\;\;\;\;\; (E3.15)$ $\frac{dx_{3}}{{dt}} = - 32x_{1} + 32x_{2} - 0.14x_{3} + 0.14x_{4},\ x_{3}\left( 0 \right) = 0\;\;\;\;\;\;\;\;\;\;\;\; (E3.16)$ $\begin{split} \frac{dx_{4}}{{dt}} = &250x_{1} - 1812.5x_{2} + 1.09375x_{3} - 48.03125x_{4} + \\ &1562.5w + 46.9375\frac{{dw}}{{dt}},\ x_{4}\left( 0 \right) = 0\;\;\;\;\;\;\;\;\;\;\;\; (E3.17) \end{split}$ In state variable matrix form, the differential equations are given by $\begin{split} \begin{bmatrix} \displaystyle \frac{dx_{1}}{{dt}} \\ 
\displaystyle \frac{dx_{2}}{{dt}} \\ \displaystyle \frac{dx_{3}}{{dt}} \\ \displaystyle \frac{dx_{4}}{{dt}} \\ \end{bmatrix} &= \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ - 32 & 32 & - 0.14 & 0.14 \\ 250 & - 1812.5 & 1.09375 & - 48.03125 \\ \end{bmatrix}\begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \\ \end{bmatrix} \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1562.5w + 46.9375\displaystyle \frac{{dw}}{{dt}} \\ \end{bmatrix} \;\;\;\;\;\;\;\;\;\;\;\; (E3.18)\end{split}$

where

$w = 0.01\sin(28.6t) \;\;\;\;\;\;\;\;\;\;\;\; (E3.19)$

and the corresponding initial conditions are

$\begin{bmatrix} x_{1}(0) \\ x_{2}(0) \\ x_{3}(0) \\ x_{4}(0) \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ \end{bmatrix}$

Can you identify the state vector, the state matrix, the input vector, and the initial conditions on the state vector?

The state-space model form is how numerical solvers expect you to enter the data. For example, the above state-space model would be entered as follows in MATLAB. First, a function is defined, where t is the independent variable and x is the state vector. The names diffx and sysofeqn are the choice of the user. The arguments of the function are the independent variable t and the state vector x. Inside the function, A is the state matrix from Equation (E3.18), w is the forcing function from Equation (E3.19), and B is the input vector.

```matlab
function diffx = sysofeqn(t,x)
A = [0 0 1 0; ...
     0 0 0 1; ...
     -32 32 -0.14 0.14; ...
     250 -1812.5 1.09375 -48.03125];
w = 0.01*sin(28.6*t);
dw = 0.286*cos(28.6*t);
B = [0; 0; 0; 1562.5*w + 46.9375*dw];
diffx = A*x + B;
end
```

How is the function used? Assume that you want the output between t = 0 and t = 10. You can use that time span, shown as the tspan variable below. Then the initial conditions of the state vector are entered. The ode45 routine is a combination of the Runge-Kutta 4th and 5th order methods for solving a system of first-order ordinary differential equations.
The inputs of the ode45 routine are the system of equations, the span in which the output is requested, and the initial conditions.

```matlab
tspan = [0 10];
initial_cond = [0; 0; 0; 0];
[t,x] = ode45('sysofeqn',tspan,initial_cond);
figure(1)
plot(t,x(:,1))
```

A sample of the outputs is shown below with the values of the independent variable t, the state vector x, and the output of x(1) vs t.

Figure 3: Output values and $$x_1$$ vs. $$t$$ for the bus-suspension problem.

## Problem Set

(1). Reduce the following 2nd order ordinary differential equation to a set of first-order ordinary differential equations complete with initial conditions and in the form required to solve them numerically.

$5\frac{d^{2}y}{dt^{2}} + 3\frac{dy}{dt} + 7y = e^{- t} + t^{2},\ y(0) = 6,\ \frac{dy}{dt}(0) = 11.$

(2). Reduce the following coupled ordinary differential equations to a set of first-order ordinary differential equations complete with initial conditions and in the form required for solving them numerically.

$10\frac{d^{2}x_{1}}{dt^{2}} - 15( - 2x_{1} + x_{2}) = 0,$

$20\frac{d^{2}x_{2}}{dt^{2}} - 15(x_{1} - 2x_{2}) = 0,$

$x_{1}(0) = 2,\ x_{2}(0) = 3,\ \frac{dx_{1}}{dt}(0) = 4,\ \frac{dx_{2}}{dt}(0) = 5.$

(3). The acceleration of a spring-mass system is given by (for $$t > 0$$)

$\frac{d^{2}x}{dt^{2}} + x = e^{- t}$

where $$x$$ is the displacement of the mass given in meters and $$t$$ is the time given in seconds. The initial conditions are given as $$x(0) = 3$$, $$\displaystyle \frac{dx}{dt}(0) = 2$$.

a) What are the values of displacement, velocity, and acceleration at $$t = 0^{+}$$ seconds?

b) Use Euler's method to find the values of displacement, velocity, and acceleration at $$t = 0.5$$ seconds. Use a step size of $$0.25$$ seconds.

c) What are the exact values of displacement, velocity, and acceleration at $$t = 0.5$$ seconds?

Answer: $$a)\ 3,\ 2,\ -2$$ $$b)\ 3.8750,\ 0.81970,\ -3.2685$$ $$c)\ 3.6958,\ 0.69213,\ -3.0893$$

(4).
The acceleration of a spring-mass system is given by

$\frac{d^{2}x}{dt^{2}} + x = e^{- t}$

where $$x$$ is the displacement of the mass given in meters and $$t$$ is the time given in seconds. The initial conditions are given as $$x(0) = 3$$, $$\displaystyle \frac{dx}{dt}(0) = 2$$. Use the Runge-Kutta 2nd order (Ralston's) method to find the values of displacement, velocity, and acceleration at $$t = 0.5$$ seconds. Use a step size of $$0.25$$ seconds.

Answer: $$3.7067\ m,\ 0.67811\ m/s,\ -3.1001\ m/s^2$$

(5). Reduce the following ordinary differential equation to the state-space model form. Include the initial conditions in proper form as well.

$4\frac{d^{2}y}{dt^{2}} + 9\frac{dy}{dt} + 12y = 3\sin(7t),\ y(0) = 13,\ \frac{dy}{dt}(0) = 19.$

(6). Reduce the following coupled ordinary differential equations to the state-space model form. Include the initial conditions in proper form as well.

$25\frac{d^{2}x_{1}}{dt^{2}} - 40( - 3x_{1} + x_{2}) = 3\cos(3t),$

$32\frac{d^{2}x_{2}}{dt^{2}} - 12x_{1} - 24x_{2} = 7e^{-3t},$

$x_{1}(0) = 23,\ x_{2}(0) = 17,\ \frac{dx_{1}}{dt}(0) = 37,\ \frac{dx_{2}}{dt}(0) = 29.$
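The Euler's method answers quoted in problem (3) can be reproduced in a few lines. The sketch below is an illustrative addition (not part of the original problem set): it rewrites $x'' + x = e^{-t}$ as $x' = z$, $z' = e^{-t} - x$ and takes two steps of size 0.25.

```python
import math

x, z, t, h = 3.0, 2.0, 0.0, 0.25  # x(0) = 3, x'(0) = 2
for _ in range(2):  # two Euler steps reach t = 0.5
    # Euler update on (x, z); old values used on the right-hand side
    x, z = x + z * h, z + (math.exp(-t) - x) * h
    t += h

accel = math.exp(-t) - x  # acceleration straight from the ODE at t = 0.5
print(x, z, accel)  # ≈ 3.8750, 0.81970, -3.2685, matching the stated answer
```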
# Equilibrium Constant

## Defines the equilibrium constant and explains how it is useful in chemistry.

Chemical Equilibrium

In some chemical processes, the forward reaction occurs simultaneously with the reverse reaction. What type of reaction is this called? What is the auto-ionization of water, and how does it relate to the type of reaction described above?

When the rate of the forward reaction equals the rate of the reverse reaction, chemical equilibrium is reached. Beware! When equilibrium is met, the concentrations of products and reactants remain constant, but that does not mean that the forward and reverse reactions have stopped. Instead, the two reactions continuously occur. What are the conditions for chemical equilibrium?

The equilibrium constant, or Keq, is a mathematical expression relating the amounts of products and reactants for a system in equilibrium. Explain how to calculate this constant.
Pushing a ball through a viscous fluid 1. Nov 11, 2007 Kimko We push a ball through a viscous fluid with constant external force. As the ball moves, it compresses a spring. The spring resists compression with an elastic force f=kd, where k is the spring constant. When this force balances the external force, the ball stops moving at d=f/k. Throughout the process, the applied force is fixed, so the work done is fd=f^2/k and energy stored in the spring is 1/2kd^2 or 1/2f^2/k. Suppose that we suddenly reduce the external force to a value of f1 that is smaller than the original external force.The ball moves in the opposite direction. a. How far does the ball move and how much work it does against the external force f1? b. For what constant value of f1 will the useful work be maximal? Show that the useful work output is half of what is stored in the spring =1/4f^2/k. c. How could we make this process more efficient? a. The elastic force is equal to the external force + friction force? How do we get the distance? I am confused. b. The work that is done on the ball by the spring is 1/4f^2/k. Do we need to include the friction force here when the ball is now moving in the opposite direction? c. Free energy transduction is most efficient when it proceeds by the incremental, controlled release of many small constraints. What steps do we need to take to make it more efficient? What are the constraints? 2. Nov 13, 2007 Dick Ignore frictional forces. Without the viscous fluid, the ball wouldn't just stop at the equilibrium positions. It would oscillate. The purpose of the fluid is just to damp oscillations, and it does this by taking away the kinetic energy of the ball at an equilibrium point. And the question only asks what work is done against the force f1, not the work done against the fluid. Once you know the work done relative to spring and external forces, you can deduce the energy lost to the fluid using conservation. You don't have to calculate it directly. 
The point of c) is that there may be another way to change the force from f to f1 (other than a sudden shift) that would let the amount of work done against the external force increase, and hence the amount of work absorbed by the fluid decrease.
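For part (b), the optimum can be checked numerically. The sketch below uses arbitrary illustrative values of f and k (not from the thread) and scans constant values of f1:

```python
# Ball held at d = f/k by a constant force f against a spring
# (stored energy (1/2) f^2 / k).  When the force drops to a constant
# f1 < f, the ball moves back to the new equilibrium d1 = f1/k,
# doing useful work W(f1) = f1 * (d - d1) = f1 * (f - f1) / k against f1.
def useful_work(f, f1, k):
    d, d1 = f / k, f1 / k
    return f1 * (d - d1)

f, k = 10.0, 4.0                      # illustrative values
stored = 0.5 * f**2 / k               # energy stored in the spring
# scan f1 over [0, f] for the maximum useful work
w_best, f1_best = max((useful_work(f, f1, k), f1)
                      for f1 in (f * i / 1000 for i in range(1001)))
print(f1_best, w_best, stored)        # 5.0 6.25 12.5
```

The maximum sits at f1 = f/2, where W = f^2/(4k): exactly half of the stored energy f^2/(2k), matching the result quoted in the problem.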
https://math.stackexchange.com/questions/2227244/probability-that-x2-y2-is-divisible-by-k
# Probability that $x^2-y^2$ is divisible by $k$

Let two numbers $x$ and $y$ be selected from the set of the first $n$ natural numbers with replacement (i.e. the two numbers can be the same). The question is to find the probability that $x^2-y^2$ is divisible by $k\in \mathbb{N}$.

For $k=2$: any number can be expressed as $2p$ or $2p+1$. Now $x^2-y^2=(x-y)(x+y)$. If both numbers are of the same form (both $2p$ or both $2p+1$) then $(x-y)$ is divisible by $2$; if the two numbers are of different forms then it is not. So the probability in this case is $a^2+(1-a)^2$, where $a$ is the probability that the number chosen is divisible by $2$, which is $\frac{\lfloor \frac{n}{2} \rfloor}{n}$. However, this gets complicated from $k=3$ onwards because numbers of different forms may combine to give a multiple. In other words, is there a generalisation or a way to solve this for some large $k$? Thanks.

• $k=3$ is actually as easy as $k=2$, since any square is either $3p$ or $3p+1$. Looking at the difference of the two squares, we see that it is divisible by $3$ iff $x$ and $y$ are either both divisible by $3$ or neither of them is. – Arthur Apr 10 '17 at 7:15
• @navinstudent You might be interested in the complete analytic solution of your problem provided in my solution. – Dr. Wolfgang Hintze Apr 13 '17 at 17:12

### A generalization expressed by a set

A good way to generalize this is to use modular arithmetic, or essentially look at the remainders of $$\frac{x}{k}$$ and $$\frac{y}{k}$$.
As you pointed out in your example for $$k=2$$, the numbers can only be expressed as $$2p,\ 2p+1$$. This can be further generalized into a set of unique expressions that define every number, or every $$x$$ and $$y$$, for a value $$k>1$$ and $$p\geq 0$$: $$S=\{kp,\,kp+1,\,\dots,\,kp+(k-2),\,kp+(k-1)\}$$ Using modular arithmetic, we know that $$S\equiv \{0,1,2,\dots,k-2,k-1\}\pmod k$$ Since $$x$$ and $$y$$ are any two terms from the set $$S$$ for any $$p\geq 0$$, we know that $$x$$ and $$y$$ are really just equal to some value from $$\{0,1,2,\dots,k-2,k-1\}$$, since we only need to look at the remainder when divided by $$k$$ to determine divisibility.

### Probability using mod

Now we know $$x^2-y^2 = (x-y)(x+y)$$, meaning either $$(x-y)$$ or $$(x+y)$$ has to be divisible by $$k$$ in order for $$x^2-y^2$$ to be divisible by $$k$$. It should be noted that the probability of a randomly chosen number being expressed by any given expression in the set $$S$$ is uniform and equal to $$\frac{1}{k}$$. For example, the probability that a chosen number is expressible as $$3p+1$$ is $$\frac{1}{3}$$. This can be proved by showing how every number expressed by $$kp$$ can always be paired with $$kp+1, kp+2, \dots, kp+(k-2), kp+(k-1)$$, thus showing that there are an equal number of numbers of each type.

On to the question: $$(x-y)$$ is divisible by $$k$$ if $$(x-y)\equiv 0\pmod k$$. This simplifies to $$(x \bmod k)-(y \bmod k) = 0$$, so we need the probability that $$x$$ and $$y$$ are both expressible by the same expression in the set $$S$$. This comes down to the probability of choosing the same element of $$S$$ twice, with replacement, which is $$\frac{k}{k} \cdot \frac{1}{k} = \frac{k}{k^2}$$.

$$(x+y)$$ is divisible by $$k$$ if $$(x+y)\equiv 0\pmod k$$. This simplifies to $$(x \bmod k)+(y \bmod k) \equiv 0$$, so we need the probability that $$x$$ and $$y$$ are chosen such that $$x+y$$ is a multiple of $$k$$.
Knowing that $$x$$ and $$y$$ must come from the set $$S$$, we can see that there is a specific pairing of elements from $$S$$ whose sum equals $$2kp+k$$, a multiple of $$k$$. The pair of expressions (the ones expressing $$x$$ and $$y$$) adds up to a multiple of $$k$$ exactly when one is $$kp+m$$ and the other is $$kp+(k-m)$$ for some whole number $$m$$, because $$(kp+m)+(kp+(k-m)) = 2kp+k$$. For example, $$3p+1$$ can be paired up with $$3p+2$$, since their sum is $$6p+3$$, a multiple of $$3$$. So we are now looking for the probability that the two chosen elements from $$S$$ are pairs of each other (if both chosen elements are $$kp$$, the sum is still a multiple of $$k$$).

The total number of pairs that can be chosen is the number of elements in the set $$S$$ squared, which is $$k^2$$. The total number of pairs that fulfill the divisibility by $$k$$ is $$\lfloor\frac{k}{2}\rfloor + 1$$. The floor function corrects for the case where $$k$$ is odd, as you obviously can't have a fractional number of pairs; the extra $$+1$$ accounts for the case when both elements are $$kp$$. Thus the probability that $$(x+y)$$ is divisible by $$k$$ is $$\frac{\lfloor\frac{k}{2}\rfloor + 1}{k^2}$$.

However, we now encounter a problem of overlap: the cases where the two elements chosen from $$S$$ make both $$(x-y)$$ and $$(x+y)$$ divisible by $$k$$. To remove the overcount we need to subtract the number of times this happens for a given set $$S$$. We must subtract one for the case when both elements are $$kp$$, and subtract another one when $$k$$ is even, because then the element $$kp+\frac{k}{2}$$ can pair with itself and satisfy both conditions.
Thus the final answer is $$\frac{k}{k^2}+\frac{\lfloor\frac{k}{2}\rfloor + 1}{k^2} - \frac{((k+1) \bmod 2)+1}{k^2}$$ The term $$\frac{((k+1) \bmod 2)+1}{k^2}$$ tells us to subtract one, and to subtract another one if $$k$$ is even; the modular part is shifted by $$1$$ since an even number mod $$2$$ is $$0$$.

• Thanks for your answer. I think you have approached the problem by taking the two numbers to be chosen from the infinite set of natural numbers, but how will the answer depend on $n$ if the two numbers are chosen from the set of the first $n$ natural numbers with replacement? Before that, would you please explain more of how you applied the ceiling function to get the probability that $x+y$ is divisible by $k$? Thanks. – Navin Apr 10 '17 at 7:47
• The number of pairings that work is always equal to $k$ divided by $2$. However, there is a special case when $k$ is odd. We can just check a few explicit examples to prove whether to use ceiling or floor. If $k$ is $3$, we can pair $0,0$ and $1,2$. If $k$ is $4$, we pair $0,0$; $1,3$; $2,2$. So in fact I wrote it wrong: it should be the floor function, as $3/2$ floored is $1$, and adding the $0,0$ pair gives $2$. For $k=4$, the floor of $k/2$ is $2$, and adding the $0,0$ pair gives $3$. Sorry for the mistake, will change. I will also work on the other part of your question. – Stone Apr 10 '17 at 8:03
• Suppose $k = p \cdot q$. Then $x^2 - y^2$ is also divisible by $k$ if $(x - y)$ is divisible by $p$ and $(x + y)$ is divisible by $q$. So it is not necessary that $(x - y)$ is divisible by $k$ or $(x + y)$ is divisible by $k$. – PSPACEhard Apr 10 '17 at 8:10
• If you can find an example where such a case appears and either $x-y$ doesn't equal $0$ or $x+y$ doesn't equal a multiple of $k$, then your point would be valid. However, I haven't been able to find such an example. – Stone Apr 10 '17 at 8:43
• Suppose $n = 8$, $k = 8$ and $x = 3$, $y = 1$. Would this be a valid case?
– PSPACEhard Apr 10 '17 at 12:42

EDIT 12.04.17 After some days of studying this interesting problem, I now provide a complete analytic solution by retrieving a related problem in the OEIS. I give exact values of the first few probabilities, and an exact general formula for the probability if the divisor $n$ is prime. A Monte Carlo simulation is also presented. Notice that my findings came up in the opposite temporal order.

Analytic solution to the problem

Looking up the sequence of the first few terms of $$s(n) = p(n)\, n^2 \tag{1}$$ which are $$\{1, 2, 5, 8, 9, 10, 13, 24, 21, 18, 21, 40, 25, 26, 45\}$$ in the On-Line Encyclopedia of Integer Sequences https://oeis.org/ gives us the entry A062803, "Number of solutions to x^2 == y^2 mod(n)", created initially by Ahmed Fares (ahmedfares(AT)my-deja.com), Jul 19 2001. A formula was devised by Vladeta Jovovic on Sep 22, 2003: "Multiplicative with a(2^e)=e*2^e and a(p^e)=((p-1)*e+p)*p^(e-1) for an odd prime p."

Let us explore this a little more closely. Our problem is transformed into the question of the number $s(n)$ of solutions to the congruence $$x^2-y^2 \equiv 0 \mod(n)$$ Our probabilities are then given by $$p(n) = s(n)/n^2 \tag{2}$$ Suppose the number $n$ has the representation $$n = \prod_{i=1}^{k} p_i^{a_i} \tag{3}$$ where $p_i$ is the $i$-th prime number appearing in $n$ in ascending order, $a_i$ is its multiplicity (exponent), and $k$ is the number of different prime factors of $n$. Notice that in number theory $k$ is traditionally called $\omega(n)$, the number of distinct prime factors; it is implemented in Mathematica as PrimeNu[n]. The statement of being multiplicative then means that we can apply the formula to each prime power factor separately and multiply the results together.
This gives for $n$ even $$s(n_{even}) = (a_1 2^{a_1}) \prod_{i=2}^{k} ((p_i-1)a_i+p_i) p_i^{a_i-1}\tag{4.1}$$ and for $n$ odd $$s(n_{odd}) = \prod_{i=1}^{k} ((p_i-1)a_i+p_i) p_i^{a_i-1}\tag{4.2}$$ It is easy to see that for an odd prime $n$, $s(n) = 2n-1$, as claimed earlier. The formula simplifies if $n$ is square-free: then all non-vanishing $a_i$ are equal to $1$, and we find $$s(n_{even}) = 2 \prod_{i=2}^{k} (2 p_i-1)\tag{5.1}$$ $$s(n_{odd}) = \prod_{i=1}^{k} (2 p_i-1)\tag{5.2}$$

For those who are interested in encoding this formula, here is an example in Mathematica:

    s[n_] := Module[{fi, pi, ai, pout},
      fi = FactorInteger[n];
      pi = #[[1]] & /@ fi;
      ai = #[[2]] & /@ fi;
      pout = If[OddQ[n],
        Product[((pi[[i]] - 1) ai[[i]] + pi[[i]]) pi[[i]]^(ai[[i]] - 1), {i, 1, Length[pi]}],
        ai[[1]] 2^ai[[1]] Product[((pi[[i]] - 1) ai[[i]] + pi[[i]]) pi[[i]]^(ai[[i]] - 1), {i, 2, Length[pi]}]]]

The prime factor decomposition of $n$ is done by the function FactorInteger[]. From this we extract the $p_i$ and $a_i$, and then apply the formula of Jovovic.

Exact values of the probabilities

We make the (natural) assumption that all possible remainders of a randomly chosen number $x$ modulo $n$ have equal probability. Then we can calculate the exact values of the probabilities with the following piece of code (written here in Mathematica):

    h[n_] := 1/n^2 Count[Flatten[Table[Mod[x^2 - y^2, n], {x, 0, n - 1}, {y, 0, n - 1}]], 0]

Explanation: for a given divisor $n$, the expression $z = x^2-y^2$ needs to be considered only for $x$ and $y$ between $0$ and $n-1$. Table[] lists all elements $x^2-y^2 \bmod n$, and Flatten[] puts them into one array. Then Count[., 0] counts the zeroes in this array, and dividing by $n^2$ gives the probability.
The result for n = 1..30 in the format \{n,p(n)\} is

$$h(n)_{tab} = \left( \begin{array}{ccccc} \{1,1\} & \left\{2,\frac{1}{2}\right\} & \left\{3,\frac{5}{9}\right\} & \left\{4,\frac{1}{2}\right\} & \left\{5,\frac{9}{25}\right\} \\ \left\{6,\frac{5}{18}\right\} & \left\{7,\frac{13}{49}\right\} & \left\{8,\frac{3}{8}\right\} & \left\{9,\frac{7}{27}\right\} & \left\{10,\frac{9}{50}\right\} \\ \left\{11,\frac{21}{121}\right\} & \left\{12,\frac{5}{18}\right\} & \left\{13,\frac{25}{169}\right\} & \left\{14,\frac{13}{98}\right\} & \left\{15,\frac{1}{5}\right\} \\ \left\{16,\frac{1}{4}\right\} & \left\{17,\frac{33}{289}\right\} & \left\{18,\frac{7}{54}\right\} & \left\{19,\frac{37}{361}\right\} & \left\{20,\frac{9}{50}\right\} \\ \left\{21,\frac{65}{441}\right\} & \left\{22,\frac{21}{242}\right\} & \left\{23,\frac{45}{529}\right\} & \left\{24,\frac{5}{24}\right\} & \left\{25,\frac{13}{125}\right\} \\ \left\{26,\frac{25}{338}\right\} & \left\{27,\frac{1}{9}\right\} & \left\{28,\frac{13}{98}\right\} & \left\{29,\frac{57}{841}\right\} & \left\{30,\frac{1}{10}\right\} \\ \end{array} \right)$$

The simulation results (see below) are in reasonable agreement with these results. If $n$ is an odd prime number the probability is given by $$p(n)=\frac{2 n-1}{n^2}$$ If $n$ is composite I have not found the exact formula. The problem here seems to be related to quadratic residues.

Monte Carlo simulation

EDIT 11.04.17 We distinguish two possible basic sets of integers from which to select for the divisibility test:

1. Set with repetition: we create a set $m$ consisting of all numbers $z = x^2-y^2$ for integers $1 \le x \le n_{max}$, $1 \le y \le n_{max}$.
2. Set without repetition: the set $m_0$ is obtained by removing all duplicates from $m$.

Then, for a given divisor $n$, we estimate the probability of divisibility by the ratio of the number of elements of the set for which $\frac{z}{n}$ is an integer to the total number of elements of the set.
The resulting probabilities for $n_{max} = 10^3$ and divisors in the range n = 1..30 are:

Case 1 (with repetition). List of results in the format (n, p(n)):

((1, 1.0), (2, 0.5000005), (3, 0.55533378), (4, 0.5000005), (5, 0.35968128), (6, 0.27766739), (7, 0.26530612), (8, 0.37475112), (9, 0.25933407), (10, 0.17984114), (11, 0.17355372), (12, 0.27766739), (13, 0.14792899), (14, 0.13265356), (15, 0.19973533), (16, 0.25000075), (17, 0.11417454), (18, 0.12966754), (19, 0.10246197), (20, 0.17984114), (21, 0.14736213), (22, 0.08677736), (23, 0.08502487), (24, 0.20808462), (25, 0.10419251), (26, 0.073965), (27, 0.11103881), (28, 0.13265356), (29, 0.06774345), (30, 0.09986916))

[graph omitted]

Case 2 (no repetition). List of results in the format (n, p(n)):

((1, 0.5125414), (2, 0.19847984), (3, 0.22136804), (4, 0.19847984), (5, 0.14290505), (6, 0.08393305), (7, 0.10653981), (8, 0.11713062), (9, 0.08960969), (10, 0.05436621), (11, 0.07129035), (12, 0.08393305), (13, 0.06139016), (14, 0.04070156), (15, 0.05862769), (16, 0.06700892), (17, 0.04836223), (18, 0.03377941), (19, 0.04369956), (20, 0.05436621), (21, 0.04403089), (22, 0.02740017), (23, 0.03676344), (24, 0.04804486), (25, 0.03682731), (26, 0.0236357), (27, 0.03531034), (28, 0.04070156), (29, 0.02972352), (30, 0.02204489))

[graph omitted]

Notice the remarkable periodicity with a period of $4$ in the "artificial" case 2. I'm sure there is a simple explanation for this, and that some readers can give it.

• However, your answer for the probability when dividing by 2 is different from mine. I got 1/4 + 1/4 = 1/2 whereas you got 0.2. The two quarters represent the probability of choosing x and y such that they are congruent modulo 2. – Stone Apr 11 '17 at 0:13
• @Stone you are right. This difference is due to my perhaps artificial removal of duplicate occurrences in the set m. I have added the "pure case" to my answer. – Dr. Wolfgang Hintze Apr 11 '17 at 8:41
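For readers without Mathematica, the two computations above (Jovovic's multiplicative formula and the direct count h[n]) can be cross-checked in plain Python. This port is my own translation, so treat it as a sketch:

```python
def factorize(n):
    """Prime factorization of n as {prime: exponent}, by trial division."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def s(n):
    """A062803 via Jovovic's formula: multiplicative with
    s(2^e) = e*2^e and s(p^e) = ((p-1)e + p) * p^(e-1) for odd primes p."""
    result = 1
    for p, e in factorize(n).items():
        result *= e * 2**e if p == 2 else ((p - 1) * e + p) * p**(e - 1)
    return result

def s_brute(n):
    """Direct count of solutions of x^2 == y^2 (mod n), like h[n] above."""
    return sum(1 for x in range(n) for y in range(n)
               if (x * x - y * y) % n == 0)

# the first terms match the sequence quoted from the OEIS ...
assert [s(n) for n in range(1, 16)] == [1, 2, 5, 8, 9, 10, 13, 24, 21, 18, 21, 40, 25, 26, 45]
# ... and formula and brute force agree well beyond that
assert all(s(n) == s_brute(n) for n in range(1, 101))
print(s(12), s_brute(12))   # 40 40
```

The probabilities in the table then follow as p(n) = s(n)/n^2, e.g. s(12)/144 = 40/144 = 5/18.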
https://www.ias.ac.in/listing/bibliography/jcsc/Sanjay_Kumar
• Sanjay Kumar

Articles written in Journal of Chemical Sciences

• Diabatic potential energy surfaces of H+ + CO

Ab initio adiabatic and diabatic surfaces of the ground and the first excited electronic states have been computed for the H+ + CO system for the collinear (γ = 0°) and the perpendicular (γ = 90°) geometries, employing the multi-reference configuration interaction method and Dunning's cc-pVTZ basis set. Other properties such as the mixing angle, the coupling potential, and the coupling matrix elements have also been obtained in order to provide an understanding of the coupling dynamics of the inelastic and charge transfer processes.

• Non-adiabatic collisions in H+ + O2 system: An ab initio study

An ab initio study of the low-lying potential energy surfaces of the H+ + O2 system for different orientations (γ) of H+ has been undertaken, employing the multi-reference configuration interaction (MRCI) method and Dunning's cc-pVTZ basis set, to examine their role in influencing the collision dynamics. Nonadiabatic interactions have been analysed for the $2 \times 2$ case in two dimensions for γ = 0°, 45° and 90°, and the corresponding diabatic potential energy surfaces have been obtained using the diabatic wavefunctions and their CI coefficients. The characteristics of the collision dynamics have been analysed in terms of vibrational coupling matrix elements for both inelastic and charge transfer processes in the restricted geometries. The strengths of the coupling matrix elements reflect the vibrational excitation patterns observed in the state-to-state beam experiments.
• Quantum dynamics of vibrational excitations and vibrational charge transfer processes in H+ + O2 collisions at collision energy 23 eV

A quantum mechanical study of vibrational state-resolved differential cross sections and transition probabilities for both the elastic/inelastic and the charge transfer processes has been carried out for H+ + O2 collisions at the experimental collision energy of 23 eV. The quantum dynamics has been performed within the vibrational close-coupling rotational infinite-order sudden approximation framework, employing our newly obtained quasi-diabatic potential energy surfaces corresponding to the ground and the first excited electronic states, which have been computed using ab initio procedures and Dunning's correlation-consistent polarized valence triple zeta basis set at the multireference configuration interaction level of accuracy. The present theoretical results for elastic/inelastic processes show an overall agreement with the available state-selected experimental data, whereas the results for the charge transfer channel show some variance in comparison with those of experiments and are similar to the earlier theoretical results obtained using a model effective potential based on the projected valence bond method and using a semi-empirical diatomics-in-molecules potential. The possible reasons for the discrepancies and likely ways to improve the results are discussed in terms of the inclusion of higher excited electronic states in the dynamics calculation.

• Foreword

• Ab initio adiabatic and quasidiabatic potential energy surfaces of H+ + CN system

We present restricted-geometry (collinear and perpendicular approaches of the proton) ab initio three-dimensional potential energy surfaces for the H+ + CN system. The calculations were performed at the internally contracted multi-reference configuration interaction level of theory using Dunning's correlation consistent polarized valence triple zeta basis set.
Adiabatic and quasidiabatic surfaces have been computed for the ground and the first excited electronic states. Nonadiabatic effects arising from radial coupling have been analyzed in terms of nonadiabatic coupling matrix elements and coupling potentials.

• H+ + O2 system revisited: four-state quasidiabatic potential energy surfaces and coupling potentials

The global adiabatic and quasidiabatic potential energy surfaces for the ground and first three excited (1−4 $^3A''$) electronic states of the H+ + O2 system are reported on a finer grid of points in the Jacobi coordinates, using Dunning's cc-pVTZ basis set and the internally contracted multi-reference (single and double) configuration interaction method. Ab initio procedures have been used to compute the corresponding quasidiabatic surfaces and radial coupling potentials, which are relevant for dynamical studies of inelastic vibrational excitation and charge transfer processes. Nonadiabatic couplings between the adiabatic electronic states, arising from the relative motion of the proton and the vibrational motion of O2, have also been analyzed.

• Ab initio potential energy surface and quantum scattering studies of Li+ with N2: comparison with experiments at Ec.m. = 2.47 eV and 3.64 eV

A new ground electronic state potential energy surface of the Li+ + N2 system is presented in the Jacobi scattering coordinates at the MRCI level of accuracy, employing the augmented correlation-consistent polarized valence quadruple zeta (aug-cc-pVQZ) basis set. An analytic fit of the computed ab initio surface has also been obtained. The surface has a global minimum for the collinear geometry at the internuclear distance of N2, r = 2.078 a0, and the distance between Li+ and N2, R = 4.96 a0. Quantum dynamics studies have been performed within the vibrational close coupling-rotational infinite-order sudden approximation at Ec.m. = 3.64 eV, and the collision attributes have been analyzed.
The computed total differential cross-sections are found to be in quantitative agreement with those available from the experiments at Ec.m. = 3.64 eV. The other dynamical attributes, such as angle-dependent opacities and integral cross-sections, are also reported. Preliminary rigid-rotor and vibrational–rotational coupled-state calculations at Ec.m. = 2.47 eV also support the experimental observation that the system exhibits a large number of rotational excitations in the vibrational manifold v = 0.
https://www.ideals.illinois.edu/handle/2142/83975
## Files in this item

9912263.pdf (8MB, PDF)

## Description

Title: Oscillatory Behavior of Fine AP/HTPB Composite Propellants
Author(s): Hickman, Scott Ralston
Doctoral Committee Chair(s): Quinn Brewster
Department / Program: Mechanical Engineering
Discipline: Mechanical Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Engineering, Aerospace

Abstract: The steady and oscillatory combustion of wide-distribution AP/HTPB composite propellants containing coarse AP and fuel-rich, fine-AP/HTPB pocket regions has been investigated experimentally and theoretically. These propellants are of special interest because they are similar to many wide-distribution bi-modal tailorable plateau propellants. The unsteady combustion response was measured using the laser-recoil method. It was found that at 1 atm monomodal fuel-rich propellants containing fine AP (representative of the pocket propellant in the bimodal propellants) exhibit both a low-frequency combustion response peak (∼10 Hz) due to the thermal relaxation in the solid and a secondary peak at a higher frequency (50--300 Hz). The frequency of this second peak has a strong correlation with particle size; it only appears for small AP (≤ 50 μm) and its frequency increases with decreasing AP size, even down to the smallest size tested to date (2 μm). The addition of coarse AP (which results in a nearly stoichiometric overall mixture but still has a reasonably large Peclet number, of order 10) suppresses the second peak. The frequency of the second peak was found to scale linearly with the ratio of mean burning rate to AP particle size (rb/d), except in the case of very fuel-rich propellants. At 2 atm, it was found that the frequency of the second peak for the pocket propellant formulation doubled, with very little change in mean burn rate.
Also, the weak second peak found at 1 atm for a bimodal formulation was larger at 2 atm, on the order of the magnitude of the thermal relaxation peak. Microthermocouple tests at 1 atm in pocket propellant formulations showed oscillatory flame temperatures in the gas phase with a frequency that corresponded to that of the second peak in the combustion recoil response function. An investigation of the mechanism of the second peak in the response function was conducted; the mechanism was concluded to be a coupling between selective pyrolysis of the AP and binder and gas-phase compositional (stoichiometry) fluctuations.

Issue Date: 1998
Type: Text
Language: English
Description: 182 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1998.
URI: http://hdl.handle.net/2142/83975
Other Identifier(s): (MiAaPQ)AAI9912263
Date Available in IDEALS: 2015-09-25
Date Deposited: 1998
https://www.lessonplanet.com/teachers/worksheet-area-of-triangles--9
# Area of Triangles

In this math learning exercise, students draw two triangles and one pentagon. Then they use the correct formula to calculate the area of each.
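The two standard routes to such areas (one half base times height for a triangle, and the shoelace formula for an arbitrary simple polygon such as the pentagon) can be sketched as follows:

```python
def triangle_area(base, height):
    """Area of a triangle: (1/2) * base * height."""
    return 0.5 * base * height

def shoelace_area(vertices):
    """Area of a simple polygon given its vertices in order (shoelace formula)."""
    total = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]   # wrap around to the first vertex
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

print(triangle_area(6, 4))                                      # 12.0
# pentagon: a 4x3 rectangle with a triangular "roof" of height 2
print(shoelace_area([(0, 0), (4, 0), (4, 3), (2, 5), (0, 3)]))  # 16.0
```

The shoelace result checks out by decomposition: the rectangle contributes 4 × 3 = 12 and the roof triangle (1/2) × 4 × 2 = 4.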
https://par.nsf.gov/biblio/10297093-optimal-shape-tree-roots-branches
On the optimal shape of tree roots and branches

This paper introduces two classes of variational problems, determining optimal shapes for tree roots and branches. Given a measure [Formula: see text] describing the distribution of leaves, we introduce a sunlight functional [Formula: see text] computing the total amount of light captured by the leaves. On the other hand, given a measure [Formula: see text] describing the distribution of root hair cells, we consider a harvest functional [Formula: see text] computing the total amount of water and nutrients gathered by the roots. In both cases, we seek to maximize these functionals subject to a ramified transportation cost, for transporting nutrients from the roots to the trunk and from the trunk to the leaves. The main results establish various properties of these functionals, and the existence of optimal distributions. In particular, we prove the upper semicontinuity of [Formula: see text] and [Formula: see text], together with a priori estimates on the support of optimal distributions.

NSF-PAR ID: 10297093
Journal Name: Mathematical Models and Methods in Applied Sciences
Volume: 28
Issue: 14
Page Range or eLocation-ID: 2763 to 2801
ISSN: 0218-2025
http://mathhelpforum.com/differential-geometry/81923-rigging-method.html
## The 'Rigging' Method

Hi, I am a little confused with this method. I am supposed to 'rig' this divergence. Prove that lim n→∞ (e^n + n! + ln n) = ∞.

e^n + n! + ln n > e^n + ln n > e^n > k whenever n > ln k.

I have no idea if this is correct. I am supposed to be making the LHS smaller for a divergence question, though.
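For what it's worth, the bound can be checked numerically. The idea is exactly as stated: dropping the positive terms n! and ln n only makes the left-hand side smaller, so e^n > k (i.e. n > ln k) is enough. A quick sketch:

```python
import math

k = 1e6
n = math.ceil(math.log(k)) + 1        # any n > ln k gives e^n > k
lhs = math.exp(n) + math.factorial(n) + math.log(n)

# Dropping the positive terms n! and ln n only shrinks the left side,
# so e^n > k already forces the whole sum past k.
assert math.exp(n) > k
assert lhs > math.exp(n) > k
```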
http://math.stackexchange.com/questions/208159/extending-of-domain-of-smooth-function-of-two-variables
# Extending of domain of smooth function of two variables

Let $f: [a,b]\times [c,d] \rightarrow \mathbb R$ be a smooth function of two variables (assuming that at boundary points $f$ has continuous one-sided partial derivatives). Is there a simple way to extend $f$ to a smooth function $F: \mathbb R \times \mathbb R \rightarrow \mathbb R$?

## 1 Answer

We can assume that $a=c=0$ and $b=d=1$. Then define $f_1(x,y)=f(-x,y)$ for $x\in [-1,0]$ and $y\in [0,1]$. Then define $f_2(x,-y)=f_1(x,y)$ for $y\in [-1,0]$ to extend the map to $[-1,1]\times [-1,1]$. Repeating this reflection procedure extends the map to all of $\Bbb R^2$. One caveat: the even reflection is continuous, but the normal derivative flips sign across each reflected edge, so the extension has continuous partial derivatives there only when the corresponding one-sided derivatives vanish; a genuinely smooth extension needs a more careful construction (e.g. a Whitney- or Seeley-type extension).
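A numerical sketch of the reflection step described in the answer, using a hypothetical test function (an assumption for illustration only): the reflected extension agrees with $f$ on the original square and is continuous across the reflected edges, though not necessarily differentiable there.

```python
import math

def extend_by_reflection(f):
    # Even reflection in each axis, as in the answer:
    # F(x, y) = f(|x|, |y|) on [-1, 1]^2.
    def F(x, y):
        return f(abs(x), abs(y))
    return F

# Hypothetical test function on the unit square.
f = lambda x, y: x**2 + math.sin(y)
F = extend_by_reflection(f)

# The extension agrees with f on the original square:
assert F(0.3, 0.7) == f(0.3, 0.7)
# and is continuous across the reflected edge x = 0:
assert abs(F(-1e-9, 0.5) - F(1e-9, 0.5)) < 1e-12
```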
https://www.jobilize.com/online/course/4-4-newton-s-second-law-of-motion-concept-of-a-system-by-openstax?qcr=www.quizover.com&page=4
# 4.4 Newton’s second law of motion: concept of a system (Page 5/14)

## What rocket thrust accelerates this sled?

Prior to manned space flights, rocket sleds were used to test aircraft, missile equipment, and physiological effects on human subjects at high speeds. They consisted of a platform mounted on one or two rails and propelled by several rockets. Calculate the magnitude of the force exerted by each rocket, called its thrust $T$, for the four-rocket propulsion system shown in [link]. The sled’s initial acceleration is $49\ \mathrm{m/s^2}$, the mass of the system is 2100 kg, and the force of friction opposing the motion is known to be 650 N.

Strategy

Although there are forces acting vertically and horizontally, we assume the vertical forces cancel since there is no vertical acceleration. This leaves us with only horizontal forces and a simpler one-dimensional problem. Directions are indicated with plus or minus signs, with right taken as the positive direction. See the free-body diagram in the figure.

Solution

Since acceleration, mass, and the force of friction are given, we start with Newton’s second law and look for ways to find the thrust of the engines. Since we have defined the direction of the force and acceleration as acting “to the right,” we need to consider only the magnitudes of these quantities in the calculations. Hence we begin with $F_{\text{net}} = ma,$ where $F_{\text{net}}$ is the net force along the horizontal direction. We can see from [link] that the engine thrusts add, while friction opposes the thrust.
In equation form, the net external force is $F_{\text{net}} = 4T - f.$ Substituting this into Newton’s second law gives $F_{\text{net}} = ma = 4T - f.$ Using a little algebra, we solve for the total thrust $4T$: $4T = ma + f.$ Substituting known values yields $4T = ma + f = (2100\ \mathrm{kg})(49\ \mathrm{m/s^2}) + 650\ \mathrm{N}.$ So the total thrust is $4T = 1.0\times 10^{5}\ \mathrm{N},$ and the individual thrusts are $T = \frac{1.0\times 10^{5}\ \mathrm{N}}{4} = 2.5\times 10^{4}\ \mathrm{N}.$

Discussion

The numbers are quite large, so the result might surprise you. Experiments such as this were performed in the early 1960s to test the limits of human endurance and the setup designed to protect human subjects in jet fighter emergency ejections. Speeds of 1000 km/h were obtained, with accelerations of 45 $g$'s. (Recall that $g$, the acceleration due to gravity, is $9.80\ \mathrm{m/s^2}$. When we say that an acceleration is 45 $g$'s, it is $45\times 9.80\ \mathrm{m/s^2}$, which is approximately $440\ \mathrm{m/s^2}$.) While living subjects are not used any more, land speeds of 10,000 km/h have been obtained with rocket sleds.

In this example, as in the preceding one, the system of interest is obvious. We will see in later examples that choosing the system of interest is crucial, and the choice is not always obvious.
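The arithmetic can be checked with a few lines (the text rounds $4T$ to $1.0\times10^5$ N before dividing, which is why it quotes $T \approx 2.5\times10^4$ N):

```python
m = 2100.0      # sled mass, kg
a = 49.0        # acceleration, m/s^2
f = 650.0       # friction force, N

total_thrust = m * a + f        # 4T = ma + f
T = total_thrust / 4            # thrust per rocket

assert total_thrust == 103550.0             # ~1.0e5 N
assert abs(T - 25887.5) < 1e-9              # ~2.5e4 N per rocket after rounding
```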
https://handwiki.org/wiki/Infinite_set
# Infinite set

In set theory, an infinite set is a set that is not a finite set. Infinite sets may be countable or uncountable.[1][2]

## Properties

The set of natural numbers (whose existence is postulated by the axiom of infinity) is infinite.[2][3] It is the only set that is directly required by the axioms to be infinite. The existence of any other infinite set can be proved in Zermelo–Fraenkel set theory (ZFC), but only by showing that it follows from the existence of the natural numbers.

A set is infinite if and only if for every natural number, the set has a subset whose cardinality is that natural number. If the axiom of choice holds, then a set is infinite if and only if it includes a countable infinite subset.

If a set of sets is infinite or contains an infinite element, then its union is infinite. The power set of an infinite set is infinite.[4] Any superset of an infinite set is infinite. If an infinite set is partitioned into finitely many subsets, then at least one of them must be infinite. Any set which can be mapped onto an infinite set is infinite. The Cartesian product of an infinite set and a nonempty set is infinite. The Cartesian product of an infinite number of sets, each containing at least two elements, is either empty or infinite; if the axiom of choice holds, then it is infinite.

If an infinite set is a well-ordered set, then it must have a nonempty, nontrivial subset that has no greatest element. In ZF, a set is infinite if and only if the power set of its power set is a Dedekind-infinite set, having a proper subset equinumerous to itself.[5] If the axiom of choice is also true, then infinite sets are precisely the Dedekind-infinite sets. If an infinite set is a well-orderable set, then it has many well-orderings which are non-isomorphic.
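The Dedekind characterization mentioned above (a proper subset equinumerous to the whole set) can be made concrete: n ↦ 2n is a bijection from the natural numbers onto the even naturals, which form a proper subset. A finite-window sketch:

```python
N = list(range(1000))          # a finite window onto the naturals
evens = [2 * n for n in N]     # image of the map n -> 2n

# Injective: distinct inputs give distinct outputs on the window.
assert len(set(evens)) == len(N)
# The image is a proper subset of the naturals: 2 is hit, 1 never is.
assert 2 in evens and 1 not in evens
```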
Infinite set theory involves proofs and definitions.[6] Important ideas discussed by Burton include how to define the elements of a set, how to identify unique elements in a set, and how to prove infinity.[6] Burton also discusses proofs for different types of infinity, including countable and uncountable sets.[6] Topics used when comparing infinite and finite sets include ordered sets, cardinality, equivalency, coordinate planes, universal sets, mapping, subsets, continuity, and transcendence.[6] Cantor's ideas about sets were influenced by trigonometry and irrational numbers. Other key ideas in infinite set theory mentioned by Burton, Paula, Narli and Rodger include real numbers such as pi and Euler's number, as well as the integers.[6][7][8] Both Burton and Rogers use finite sets as a starting point for explaining infinite sets, using proof concepts such as mapping, proof by induction, or proof by contradiction.[6][8] Mathematical trees can also be used to understand infinite sets.[9] Burton also discusses proofs about infinite sets involving ideas such as unions and subsets.[6]

In Chapter 12 of The History of Mathematics: An Introduction, Burton emphasizes how mathematicians such as Zermelo, Dedekind, Galileo, Kronecker, Cantor, and Bolzano investigated and influenced infinite set theory.[6] He points to potential historical influences, such as how Prussia's circumstances in the 1800s led to an increase in scholarly mathematical knowledge, including Cantor's theory of infinite sets.[6] Many of these mathematicians either debated infinity or otherwise added to the ideas of infinite sets.[6] One potential application of infinite set theory is in genetics and biology.[10]

## Examples

### Countably infinite sets

The set of all integers, {..., -1, 0, 1, 2, ...} is a countably infinite set.
The set of all even integers is also a countably infinite set, even though it is a proper subset of the integers.[4] The set of all rational numbers is a countably infinite set, as there is a bijection to the set of integers.[4]

### Uncountably infinite sets

The set of all real numbers is an uncountably infinite set. The set of all irrational numbers is also an uncountably infinite set.[4]
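Countability of the integers can be exhibited directly by the enumeration 0, 1, -1, 2, -2, ..., i.e. an explicit bijection from the naturals onto the integers. A sketch:

```python
def integer_at(n):
    """The n-th integer in the enumeration 0, 1, -1, 2, -2, 3, -3, ...
    This is a bijection from the naturals {0, 1, 2, ...} onto all integers."""
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

first = [integer_at(n) for n in range(7)]
assert first == [0, 1, -1, 2, -2, 3, -3]

# Every integer in [-10, 10] appears exactly once in the first 21 terms:
prefix = [integer_at(n) for n in range(21)]
assert sorted(prefix) == list(range(-10, 11))
```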
https://iwaponline.com/aqua/article-abstract/67/5/484/41378/Study-on-the-effect-of-a-static-magnetic-field-in
## Abstract

Successful aggregation and surface hydrophobicity play a significant role in the effective initial-state development of the biogranulation process. Previous studies offer sparse research on the effect of a static magnetic field on the aggregation and surface hydrophobicity of microbial granules. This work therefore explored the feasibility of enhancing both aggregation and surface hydrophobicity using a static magnetic field. The influence of the static magnetic field on the removal of chemical oxygen demand (COD) was also monitored. The results showed that activated sludge magnetically exposed at 15 mT performed better than at the other investigated field intensities. At 15 mT, a maximum of 54% surface hydrophobicity was retained within 48 hours of starting the experiment, and 90.4% aggregation was achieved in 10 hours. The COD removal efficiency also increased at this field intensity compared with the case without static magnetic field exposure. From this initial finding, it can be concluded that a static magnetic field of moderate intensity stands a good chance of positively influencing the initial state of biogranulation.
https://byjus.com/spring-force-formula
# Spring Force Formula

A spring is a tool many of us use daily, and its inertia is frequently neglected by treating it as massless. When a spring is stretched or compressed it is displaced from its natural length, and when released it returns to its equilibrium position. This tells us that a spring exerts an equal and opposite force on the body that compresses or stretches it.

The spring force formula is given by

F = -k(x - x0)

where F is the spring force, x0 is the equilibrium position, x is the displacement of the spring from its equilibrium position, and k is the spring constant. The negative sign indicates that the spring force is a restoring force, acting in the direction opposite to the displacement.

Spring Force Solved Problems

Underneath are some samples based on spring force:

Problem 1: A spring has a natural length of 22 cm. When loaded with 2 kg, it stretches to 38 cm. Compute its spring constant.

Known: (mass) m = 2 kg, (initial length) x0 = 22 cm, (stretched length) x = 38 cm

Extension: x - x0 = 38 cm - 22 cm = 16 cm = 0.16 m

The stretching force is the weight of the load:

F = mg = 2 kg × 9.8 m/s² = 19.6 N

The spring constant (in magnitude) is

k = F / (x - x0) = 19.6 N / 0.16 m = 122.5 N/m

Thus, the spring constant is 122.5 N/m.

Problem 2: A body stretches a spring by 2 m with a force of 100 N. Calculate its spring constant.

Known: (displacement) x = 2 m, (force) F = 100 N

k = F / x = 100 N / 2 m = 50 N/m

Thus, the spring constant is 50 N/m.
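Both computations can be checked in a few lines (assuming g = 9.8 m/s² for the hanging load in Problem 1):

```python
g = 9.8                       # m/s^2, standard gravity (assumed)

# Problem 1: a 2 kg load stretches the spring from 22 cm to 38 cm.
m = 2.0
extension = 0.38 - 0.22       # m
k1 = m * g / extension        # Hooke's law magnitude: k = F / extension
assert abs(k1 - 122.5) < 1e-9

# Problem 2: a 100 N force stretches the spring by 2 m.
k2 = 100.0 / 2.0
assert k2 == 50.0
```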
http://mathhelpforum.com/algebra/199467-election-word-problem.html
# Math Help - Election word problem

1. ## Election word problem

In an election, 2.8 million votes were cast and each vote was for either Candidate I or Candidate II. Candidate I received 28,000 more votes than Candidate II. What percent of the 2.8 million votes were cast for Candidate I?

A. 50.05% B. 50.1% C. 50.5% D. 51% E. 55%

2. ## Re: Election word problem

Let $x$ denote the number of votes Candidate I received. Since he/she received $28{,}000$ votes more than Candidate II, $x-28000$ is the number of votes Candidate II received. Because the total number of votes is $2{,}800{,}000$, you can form the equation: $x+(x-28000)=2800000.$ You will find that $x=1414000$, which represents $\frac{1414000}{2800000}=0.505=50.5\%$. So the correct answer is C.

3. ## Re: Election word problem

$\frac{28000}{2.8\ \text{million}} = \frac{2.8 \times 10^4}{2.8 \times 10^6} = 10^{-2} = 0.01 = 1$% difference of the total vote
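The algebra checks out numerically:

```python
total = 2_800_000
margin = 28_000

# From x + (x - margin) = total:  x = (total + margin) / 2
x = (total + margin) / 2          # votes for Candidate I
assert x == 1_414_000
assert abs(x / total - 0.505) < 1e-12    # 50.5% -> answer C
```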
http://www.mediander.com/connects/675/affirming-the-consequent/
# Affirming the consequent

Affirming the consequent, sometimes called converse error, fallacy of the converse, or confusion of necessity and sufficiency, is a formal fallacy of inferring the converse from the original statement. The corresponding argument has the general form:

1. If P, then Q.
2. Q.
3. Therefore, P.
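The invalidity of this form can be checked mechanically: enumerate all truth assignments and look for one where both premises hold but the conclusion fails.

```python
from itertools import product

# Affirming the consequent: from (P -> Q) and Q, infer P.
# A counterexample row (premises true, conclusion false) shows the form is invalid.
counterexamples = [
    (P, Q)
    for P, Q in product([False, True], repeat=2)
    if ((not P) or Q)      # premise: P -> Q
    and Q                  # premise: Q
    and not P              # conclusion P fails
]
assert counterexamples == [(False, True)]
```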
https://astronomy.stackexchange.com/questions/25339/how-did-uv-from-the-earliest-stars-alter-the-state-of-the-21-cm-line-such-that?noredirect=1
# How did UV from the earliest stars 'alter the state of the 21 cm line' such that it shows up in CMB today? In this question I discuss the recent (open access) paper in Nature An absorption profile centred at 78 megahertz in the sky-averaged spectrum at length. The abstract begins: After stars formed in the early Universe, their ultraviolet light is expected, eventually, to have penetrated the primordial hydrogen gas and altered the excitation state of its 21-centimetre hyperfine line. This alteration would cause the gas to absorb photons from the cosmic microwave background, producing a spectral distortion that should be observable today at radio frequencies of less than 200 megahertz(1). where reference 1 is Jonathan Pritchard and Abraham Loeb (2012) 21 cm cosmology in the 21st century. From there I've found Leonid Chuzhoy and Zheng Zheng (2007) Radiative Transfer Effect on Ultraviolet Pumping of the 21 cm Line in the High-Redshift Universe. I understand some basics about the hyperfine transition in hydrogen, and that the "spin temperature" of a gas in space can differ from the temperature of other partitions if it's being pumped, but these papers are more than a bit hard to read. Is there a simple way to explain the basics behind how the exposure of hydrogen to the UV light produced in early stars would cause the "blip" in the (now red-shifted) 21 cm part of the Cosmic Microwave Background radiation spectrum we see today? • Sounds like multi-photon absorption, where the second absorption can only take place when the electron is in a short-lived excited state. Is that what you're thinking of? – Carl Witthoft Mar 1 '18 at 15:12 • @CarlWitthoft I'm guessing it has something to do with the 21 cm line being associated with the electron in the n=1 level. If excited by UV, the hyperfine splitting is going to be very different and there won't be a transition at 21 cm anymore. But how that affects the blackbody spectrum is complicated. 
– uhoh Mar 1 '18 at 15:36 (one word took the other, and it became a rather long comment.) ### The hyperfine level Neutral hydrogen in its ground state can be in two different configurations; either the proton and the electron may have parallel spins ($\uparrow\uparrow$), or they may have antiparallel spins ($\uparrow\downarrow$). When the spins are parallel, the atom has a slightly higher energy than when they're antiparallel. The atoms "wants" to make a spin flip to the lower energy configuration$^\dagger$, and will eventually do so, but since the line is forbidden, the lifetime of the parallel state is of the order $10^7\,\mathrm{yr}$. The relative population of the states is given by the Boltzmann distribution $$\begin{array}{rcl} \frac{n_1}{n_0} & = & \frac{g_1}{g_0} \, e^{-\Delta E \, / \, k_\mathrm{B} T_S} \\ & = & 3 \, e^{-0.068\,\mathrm{K} \, / \, T_S}\label{a}\tag{1}, \end{array}$$ where subscripts 1 and 0 denote the $\uparrow\uparrow$ and $\uparrow\downarrow$ states, respectively, $n$ is the density, $g$ is the statistical weights (with $g_0,g_1 = 1,3$), $\Delta E = 5.9\times10^{-6}\,\mathrm{eV}$ is the energy difference of the states, $k_\mathrm{B}$ is the Boltzmann constant, and $T_S$ is the spin temperature, which I think is better thought of as "a number that describes the relative populations" than an actual temperature. ### Departure from equilibrium In thermal equilibrium, the spin temperature is equal to the "real", kinetic temperature. Just after decoupling of the radiation from matter at a redshift of $z\simeq1100$, the gas and the photons share the same energy, and since $T\gg 1$, we have that $n_1/n_0 \simeq 3$. But when the first stars begin to shine, they produce massive amounts of hard UV radiation which ionizes their surrounding medium. The ionized gas quickly recombines (in the beginning, at least), with $\sim2/3$ of the recombinations resulting in the emission of a Lyman $\alpha$ photon, i.e. 
a photon with an energy corresponding to the energy difference between the first excited state (one of the three $2P$ states) and the ground state (the $1S$ state) of the hydrogen atom (10.2 eV). The Ly$\alpha$ photons scatter multiple times on the neutral hydrogen. Each scattering excites an atom from $1S\rightarrow 2P$, which subsequently de-excites and emits a Ly$\alpha$ photon in another direction. But since the energy difference between the $2P$ and the $1S$ state is a million times larger than between the hyperfine states, there is an equal chance of ending in the $\uparrow\uparrow$ and the $\uparrow\downarrow$ state. That is, $n_1/n_0$ is no longer $\simeq 3$, but is driven toward $\sim 1$. This is the Wouthuysen–Field effect that Guiseppe Rossi mentions; from eq. \ref{a}, you see that this corresponds to a much smaller spin temperature, and thus the factor that Guiseppe Rossi mentions becomes negative. The full equation describing the brightness (or, equivalently, the flux received) as a function of redshift can be written (e.g. Zaldarriaga et al. 2004) $$T(z) = 23\,\mathrm{mK} \, \frac{T_S - T_\mathrm{CMB}}{T_S} \, (1+\delta) x_\mathrm{HI}(z) \frac{\Omega_\mathrm{b}h^2}{0.02} \left( \frac{0.15}{\Omega_\mathrm{m}h^2} \, \frac{1+z}{10} \right)^{1/2}\label{b}\tag{2}$$ and when the $(T_S - T_\mathrm{CMB})\,/\,T_S$ factor is negative, you will get an absorption line. (In eq. \ref{b}, $\delta$, $x_\mathrm{HI}(z)$, $\Omega_\mathrm{b}$, $\Omega_\mathrm{m}$, and $h$ are the local overdensity, the neutral fraction of hydrogen, the baryon and matter density parameters, and the dimensionless (reduced) Hubble constant, respectively, but this is of less importance.) Since the observed absorption line (Bowman et al.
2018) starts to drop around an observed frequency of $\nu_\mathrm{obs} = 65\text{–}70\,\mathrm{MHz}$, and since the rest frequency of the hyperfine line is $\nu_\mathrm{rest} = 1420\,\mathrm{MHz}$, this means that the first stars appeared around a redshift of $z = \nu_\mathrm{rest}/\nu_\mathrm{obs} - 1 \simeq 20$, corresponding to an age of the Universe of $\sim 180\,\mathrm{Myr}$ (i.e. million years — the largest absorption is reached at $z\simeq 17$, or $t\simeq 200\,\mathrm{Myr}$). Now the big question is: according to eq. \ref{b}, the dip should be of the order of a few tens of mK, but it is in fact roughly 0.5 K, i.e. an order of magnitude larger. One possible mechanism that could produce this effect is coupling of the gas with dark matter, something which is not usually considered possible but could happen if the dark matter particle has a very small charge (Barkana et al. 2018).

### Time evolution of the 21 cm signal

The figure below (from a great review by Pritchard & Loeb 2012) shows how the 21 cm signal evolves with time. The dip discussed in this answer is the orange and red part.

$^\dagger$An analogy would be two magnets aligned parallel to each other with north in the same direction, preferring to flip around, but note Ken G's comment below; the transition doesn't necessarily involve a spin flip, and the analogy is not to be taken literally, since parallel magnets are alike, whereas parallel electrons/protons have opposite charges.

• This is extremely helpful, thank you for taking the time to write this up and explain so clearly. I'll take some time to read it and the references as well. – uhoh Apr 4 '18 at 9:23
• @uhoh I added another good reference to a review.
– pela Apr 4 '18 at 12:23

• Minor correction to that wonderful answer: The 21 cm transition does not necessarily involve a spin flip, indeed the ground state involves a state where the spins of neither the proton nor the electron are specified (they are only specified to be opposite each other in an antisymmetric combination). The upper level can have parallel definite spins, or indefinite spins that are known only to be antiparallel in a symmetric combination. There is a loss of angular momentum in the transition, and given all this complexity, you can see why people claim it's a spin flip, but it's not really true. – Ken G Apr 4 '18 at 14:07
• Another interesting point: the electron and proton have opposite charges, so you'd think that if their spins pointed the same way, that would be like magnets aligned in opposite directions, which is the lowest energy, not the highest. The reason it is the highest energy when the spins are aligned and definite is actually that the strongest interaction between electron and proton is when the electron is inside the proton, so it's like anti-aligned magnets with one inside the other-- that's the highest energy configuration. – Ken G Apr 4 '18 at 14:31
• And when the spins are anti-aligned and indefinite, that's also the highest energy in the symmetric configuration, because the symmetric (two swapping particle states) configuration is the one where the electron is going to be outside the proton, so that's the only situation that acts like two normal magnets. – Ken G Apr 4 '18 at 14:33

The intergalactic medium at the relevant redshift is made of neutral hydrogen. What we can measure is the brightness temperature (the temperature that the IGM would have if it emitted as a blackbody) relative to the CMB. This quantity depends crucially on the following expression: $\frac{T_S - T_{CMB}}{T_S}$ where $T_S$ is called the spin temperature.
The spin temperature is just a measure of the ratio between the number of hydrogen atoms in the first excited hyperfine state (spin parallel) and the ground state (spin antiparallel). It turns out that the spin temperature can be modified in three ways: one of them is Lyman resonant scattering (the other two are collisional coupling and scattering of CMB photons). Using atomic physics it's not difficult to see that $T_S$ is a weighted mean of the CMB temperature and the gas temperature. UV photons can change the spin temperature because a hydrogen atom in the lowest state, n=1 (if you are familiar with chemistry, an s state), with antiparallel spins can absorb a UV photon and jump to a p (n=2) state, and then fall back into an s state but with parallel spins. This mechanism is known as Wouthuysen–Field coupling. Before the emission takes place, the gas is at the same temperature as the CMB and both are equal to the spin temperature, so the relative brightness temperature is zero. When Lyman alpha emission begins, the spin temperature decreases, resulting in the observed absorption peak. At some point the peak stops because the first stars emit X-rays and the gas becomes hotter than the background radiation.

• Thanks for your answer. This focuses mainly on altering the spin temperature, but the last two sentences of my question explain that I'm asking how this "...would cause the "blip" in the (now red-shifted) 21 cm part of the Cosmic Microwave Background..." The linked question includes the line shape, and it's a dip, a decrease in intensity. How does the modified $T_S$ produce this feature? – uhoh Apr 4 '18 at 4:26
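To put rough numbers on the discussion above, here is a small self-contained Python sketch. It is my own illustration, using eq. (1) from the first answer and the ≈78 MHz dip centre from the Bowman et al. measurement discussed in the question:

```python
import math

# Rest frequency of the 21 cm hyperfine line and the centre of the
# observed absorption profile (Bowman et al. 2018), both in MHz.
nu_rest = 1420.0
nu_obs = 78.0

# Redshift at which the absorbing gas sat: 1 + z = nu_rest / nu_obs.
z = nu_rest / nu_obs - 1.0
print(f"redshift of the dip centre: z = {z:.1f}")  # ~17.2

def level_ratio(T_spin):
    """n1/n0 from eq. (1): 3 exp(-0.068 K / T_S)."""
    return 3.0 * math.exp(-0.068 / T_spin)

# A high spin temperature gives the 3:1 degeneracy ratio; a lower T_S
# pulls the ratio down, which is what produces absorption against the CMB.
for T_S in (100.0, 7.0, 1.0, 0.1):
    print(f"T_S = {T_S:6.1f} K  ->  n1/n0 = {level_ratio(T_S):.3f}")
```

The ratio only departs noticeably from 3 once $T_S$ becomes comparable to the 0.068 K level splitting, which is why the sign and size of $(T_S - T_\mathrm{CMB})/T_S$, rather than the populations themselves, carry the observable signal.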
https://simons.berkeley.edu/events/approximate-degree-surjectivity-how-sausage-made
Fall 2018

# The Approximate Degree of Surjectivity: How the Sausage is Made

Sep 26, 2018 10:30 am – 12:00 pm
Speaker: Mark Bun
Location: Room 116

Approximate degree is a basic measure of the complexity of boolean functions which has found applications to lower bounds in quantum query complexity, communication complexity, circuit complexity, and relativized complexity. I will describe in as much detail as possible the proof of a tight ~Omega(n^{3/4}) lower bound on the approximate degree of the Surjectivity function. The lower bound for Surjectivity is, in fact, a special case of a more general hardness amplification theorem which yields approximate degree lower bounds for a broad class of (non-block-)composed functions. This talk is intended to complement the high-level overview given in the Boolean Devices workshop, but will be entirely self-contained. It is based on joint works with Robin Kothari and Justin Thaler.
http://mathhelpforum.com/pre-calculus/138916-differentiation-y-w-r-t-x.html
# Math Help - Differentiation y w.r.t.x

1. ## Differentiation y w.r.t.x

x^-1 + y^-1 = e^y

My textbook has a pretty rubbish explanation of how to differentiate y, from what i gather its like x but you stick a dy/dx after it and use the product rule for xy terms, can anyone do the above question with a slightly better explanation than ive given?! Thanks!

2. Originally Posted by darksupernova
x^-1 + y^-1 = e^y My textbook has a pretty rubbish explanation of how to differentiate y, from what i gather its like x but you stick a dy/dx after it and use the product rule for xy terms, can anyone do the above question with a slightly better explanation than ive given?! Thanks!
Hi darksupernova, i guess you mean when you differentiate both sides with respect to x, and y is a function of x. Then you use $\frac{d}{dx}f(y)=\frac{dy}{dy}\frac{d}{dx}f(y)=\frac{dy}{dx}\frac{d}{dy}f(y)$ You can just think of swapping denominators, as in $\left(\frac{10}{3}\right)\frac{6}{5}=\frac{60}{15}=\left(\frac{10}{5}\right)\frac{6}{3}$ It's called the chain rule $\frac{d}{dx}x^{-1}+\frac{dy}{dx}\frac{d}{dy}y^{-1}=\frac{dy}{dx}\frac{d}{dy}e^y$ and continue

3. Thankyou, its a bit odd to get my head round but thats probably because ive never done it!

4. Hi darksupernova, i forgot to mention, you can differentiate y with respect to y, we can differentiate x with respect to x. when we have $\frac{d}{dx}f(y)$ and we don't have y expressed in terms of x, we've got a dx where we'd prefer to have a dy. We can multiply any value by 1 without changing it, and there are an infinite number of ways to write 1. We get dy where we want it by multiplying by $\frac{dy}{dy}$ and swapping denominators.

5. okay i get that part, but this question has thrown me a little... Find $\frac{dy}{dx}$ as a function of x if y^2 = 2x + 1. How do i make dy/dx a function of x?

6. Originally Posted by darksupernova
okay i get that part, but this question has thrown me a little...
Find $\frac{dy}{dx}$ as a function of x if y^2 = 2x + 1. How do i make dy/dx a function of x?
y^2 = 2x + 1 Differentiating this you get 2y*dy/dx = 2. Or dy/dx = 1/y = 1/sqrt(2x + 1)

7. Originally Posted by darksupernova
okay i get that part, but this question has thrown me a little... Find $\frac{dy}{dx}$ as a function of x if y^2 = 2x + 1. How do i make dy/dx a function of x?
$y^2=2x+1$ $\frac{d}{dx}y^2=\frac{d}{dx}(2x+1)=2$ $\frac{dy}{dx}\frac{d}{dy}y^2=2$ now $y^2$ can be differentiated wrt y $\frac{dy}{dx}2y=2$ $\frac{dy}{dx}=\frac{2}{2y}=\frac{1}{y}$ You get it by differentiating the x terms wrt x and the y terms wrt y, then dividing by the multiplier of $\frac{dy}{dx}$

8. i differentiated it correctly but i didnt substitute y^2 = 2x + 1 back in :S is that because it says as a function of x?

9. Originally Posted by darksupernova
i differentiated it correctly but i didnt substitute y^2 = 2x + 1 back in :S is that because it says as a function of x?
yes, as a function of x, then continue as sa-ri-ga-ma showed. $y^2=2x+1\ \Rightarrow\ y=\pm\sqrt{2x+1}$ Initially, you could also write $y=\pm\sqrt{2x+1}$ and calculate $\frac{dy}{dx}=\pm\frac{d}{dx}(2x+1)^{0.5}=\pm(2)(0.5)(2x+1)^{-0.5}$

10. It should be (0.5)(2)(2x + 1)^-0.5

11. yeah i got it now, thanks guys, it was just the fact i hadnt substituted in the value for y. Got it now, thank you both!
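For anyone wanting to double-check the thread's answer numerically, here is a small Python sketch (not from the original thread) comparing the implicit-differentiation result $\frac{dy}{dx} = \frac{1}{\sqrt{2x+1}}$ on the positive branch against a finite-difference derivative:

```python
import math

def y(x):
    # Positive branch of y^2 = 2x + 1.
    return math.sqrt(2 * x + 1)

def dydx_implicit(x):
    # From implicit differentiation: 2y dy/dx = 2, so dy/dx = 1/y.
    return 1.0 / math.sqrt(2 * x + 1)

def dydx_numeric(x, h=1e-6):
    # Central difference, an independent check.
    return (y(x + h) - y(x - h)) / (2 * h)

for x in (0.0, 1.5, 4.0):
    print(x, dydx_implicit(x), dydx_numeric(x))
```

The two columns agree to about six decimal places, which is the expected accuracy of a central difference with this step size.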
http://cpresourcesllc.com/standard-error/standard-error-larger-than-coefficient.php
# Standard Error Larger Than Coefficient

Sep 29, 2012 Jochen Wilhelm · Justus-Liebig-Universität Gießen Barbara, could you explain to me why/how a multivariate analysis should/does avoid the problem of collinear predictors? When you chose your sample size, took steps to reduce random error (e.g.

The fact that my regression estimators come out differently each time I resample tells me that they follow a sampling distribution. However, if the sample size is very large, for example, sample sizes greater than 1,000, then virtually any statistical result calculated on that sample will be statistically significant. On the other hand, a regression model fitted to stationarized time series data might have an adjusted R-squared of 10%-20% and still be considered useful (although out-of-sample validation would be advisable). Confidence intervals and significance testing rely on essentially the same logic, and it all comes back to standard deviations.
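The remark above about regression estimators following a sampling distribution can be demonstrated directly. The following Python sketch (an illustration with made-up parameter values, not taken from this page) refits the slope of a simple regression on many independently generated samples; the spread of those slope estimates is the sampling distribution whose standard deviation the reported standard error tries to estimate:

```python
import random
import statistics

random.seed(42)

def fit_slope(xs, ys):
    # Ordinary least-squares slope for a simple (one-predictor) regression.
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    return sxy / sxx

xs = [i / 10 for i in range(30)]               # fixed design points
true_slope, true_intercept, sigma = 2.0, 1.0, 0.5

# Redraw the noise many times; each refit gives a slightly different slope.
slopes = []
for _ in range(2000):
    ys = [true_intercept + true_slope * x + random.gauss(0, sigma) for x in xs]
    slopes.append(fit_slope(xs, ys))

print("mean of slope estimates:", round(statistics.mean(slopes), 3))
print("sd of slope estimates  :", round(statistics.stdev(slopes), 3))
```

With these assumed values the printed standard deviation sits close to the analytic value $\sigma/\sqrt{\sum(x_i-\bar{x})^2} \approx 0.105$; more data or less noise shrinks it.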
Whenever you are working with time series data, you should also ask: does the current regression model improve on the best naive (random walk or random trend) model, according to these And, if a regression model is fitted using the skewed variables in their raw form, the distribution of the predictions and/or the dependent variable will also be skewed, which may yield There's nothing magical about the 0.05 criterion, but in practice it usually turns out that a variable whose estimated coefficient has a p-value of greater than 0.05 can be dropped from the model.

1. The smaller the standard error, the closer the sample statistic is to the population parameter.
2. McHugh.
3. In this case, your mean could be 85, and your standard deviation could be 10, indicating that most of the residents fall between the ages of 75 and 95.
4. Available at: http://damidmlane.com/hyperstat/A103397.html.
5. Sep 18, 2012 Jochen Wilhelm · Justus-Liebig-Universität Gießen If you divide the estimate by its standard error you get a "t-value" that is known to be t-distributed if the expected value
6. If the sample size is large and the values of the independent variables are not extreme, the forecast standard error will be only slightly larger than the standard error of the
7. This makes it possible to test so-called null hypotheses about the value of the population regression coefficient.
8. Thus, if we choose 5% likelihood as our criterion, there is a 5% chance that we might refute a correct null hypothesis.
9. In the first case, the standard deviation is greater than the mean.
10. This statistic is used with the correlation measure, the Pearson R.

If a variable's coefficient estimate is significantly different from zero (or some other null hypothesis value), then the corresponding variable is said to be significant. When there are two or more variables/factors/predictors in a regression analysis, one needs to be aware first of how the dependent variable looks on each one by itself.
Now, the standard error of the regression may be considered to measure the overall amount of "noise" in the data, whereas the standard deviation of X measures the strength of the

### Standard Error Of Beta Hat

http://dx.doi.org/10.11613/BM.2008.002 School of Nursing, University of Indianapolis, Indianapolis, Indiana, USA *Corresponding author: Mary [dot] McHugh [at] uchsc [dot] edu Abstract Standard error statistics are a class of inferential statistics that But the standard deviation is not exactly known; instead, we have only an estimate of it, namely the standard error of the coefficient estimate.

### Standard Error Of Coefficient Formula

estimate – Predicted Y values scattered widely above and below regression line. Other standard errors: every inferential statistic has an associated standard error. With the assumptions listed above, it turns out that: $$\hat{\beta_0} \sim \mathcal{N}\left(\beta_0,\, \sigma^2 \left( \frac{1}{n} + \frac{\bar{x}^2}{\sum(X_i - \bar{X})^2} \right) \right)$$ $$\hat{\beta_1} \sim \mathcal{N}\left(\beta_1, \, \frac{\sigma^2}{\sum(X_i - \bar{X})^2} \right)$$

### Standard Error Of Beta Linear Regression

Another use of the value 1.96 ± SEM is to determine whether the population parameter is zero. In a regression model, you want your dependent variable to be statistically dependent on the independent variables, which must be linearly (but not necessarily statistically) independent among themselves. What's the bottom line? Taken together with such measures as effect size, p-value and sample size, the effect size can be a useful tool to the researcher who seeks to understand the accuracy of statistics. Levels that are lower than 1% may occur.
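The distribution of $\hat{\beta_1}$ quoted above can be turned into a concrete computation. This Python sketch (my own illustration, with assumed parameter values) fits one simulated data set, forms the estimated standard error $\sqrt{s^2/\sum(X_i-\bar{X})^2}$, and divides the estimate by it to get the t-value mentioned earlier:

```python
import math
import random

random.seed(7)

# One simulated data set from y = b0 + b1*x + Gaussian noise
# (all parameter values here are assumptions for the illustration).
n, b0, b1, sigma = 50, 1.0, 0.8, 1.0
xs = [random.uniform(0, 10) for _ in range(n)]
ys = [b0 + b1 * x + random.gauss(0, sigma) for x in xs]

xbar = sum(xs) / n
sxx = sum((x - xbar) ** 2 for x in xs)

# OLS estimates of slope and intercept.
b1_hat = sum((x - xbar) * y for x, y in zip(xs, ys)) / sxx
b0_hat = sum(ys) / n - b1_hat * xbar

# Residual variance s^2 and the estimated standard error of the slope,
# the sample version of sqrt(sigma^2 / sum (X_i - Xbar)^2) from above.
residuals = [y - (b0_hat + b1_hat * x) for x, y in zip(xs, ys)]
s2 = sum(r * r for r in residuals) / (n - 2)
se_b1 = math.sqrt(s2 / sxx)

# "Divide the estimate by its standard error": the t-value for H0: b1 = 0.
t = b1_hat / se_b1
print(f"b1_hat = {b1_hat:.3f}, se(b1_hat) = {se_b1:.3f}, t = {t:.1f}")
```

Because the true slope is far from zero relative to its standard error, the t-value comes out large and the coefficient would be called significant at any conventional level.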
### Significance Of Standard Error In Sampling Analysis

The significance of a regression coefficient in a

### How To Interpret Standard Error In Regression

Indeed, given that the p-value is the probability for an event conditional on assuming the null hypothesis, if you don't know for sure whether the null is true, then why would In regression modeling, the best single error statistic to look at is the standard error of the regression, which is the estimated standard deviation of the unexplainable variations in the dependent Sometimes you will discover data entry errors: e.g., "2138" might have been punched instead of "3128." You may discover some other reason: e.g., a strike or stock split occurred, a regulation The SPSS ANOVA command does not automatically provide a report of the Eta-square statistic, but the researcher can obtain the Eta-square as an optional test on the ANOVA menu.

### Standard Error Of Coefficient In Linear Regression

Extremely high values here (say, much above 0.9 in absolute value) suggest that some pairs of variables are not providing independent information. Use the standard error of the coefficient to measure the precision of the estimate of the coefficient. Higher levels than 10% are very rare. Am I missing something? A designed experiment looking for small but statistically significant effects in a very large sample might accept even lower values.
### Importance Of Standard Error In Statistics

X has mean = 3, sd = 1.58, CV = 0.53
Y has mean = 30, sd = 15.81, CV = 0.53
Z has mean = 0, sd = 1.58, CV = infinite

For example if both X and LAG(X,1) are included in the model, and their estimated coefficients turn out to have similar magnitudes but opposite signs, this suggests that they could both In a regression, the effect size statistic is the Pearson Product Moment Correlation Coefficient (which is the full and correct name for the Pearson r correlation, often noted simply as, R). You can do this in Statgraphics by using the WEIGHTS option: e.g., if outliers occur at observations 23 and 59, and you have already created a time-index variable called INDEX, you In a multiple regression model, the exceedance probability for F will generally be smaller than the lowest exceedance probability of the t-statistics of the independent variables (other than the constant).

### Standard Error Significance Rule Of Thumb

Better to determine the best naive model first, and then compare the various error measures of your regression model (both in the estimation and validation periods) against that naive model. More than 2 might be required if you have few degrees of freedom and are using a 2-tailed test. The coefficient? (Since none of those are true, it seems something is wrong with your assertion. Usually the decision to include or exclude the constant is based on a priori reasoning, as noted above. The standard error is not the only measure of dispersion and accuracy of the sample statistic.
If you look closely, you will see that the confidence intervals for means (represented by the inner set of bars around the point forecasts) are noticeably wider for extremely high or If it turns out the outlier (or group thereof) does have a significant effect on the model, then you must ask whether there is justification for throwing it out. If the p-value is less than the chosen threshold then it is significant. I don't question your knowledge, but it seems there is a serious lack of clarity in your exposition at this point.) –whuber♦ Dec 3 '14 at 20:54 @whuber For If heteroscedasticity and/or non-normality is a problem, you may wish to consider a nonlinear transformation of the dependent variable, such as logging or deflating, if such transformations are appropriate for your If the model is not correct or there are unusual patterns in the data, then if the confidence interval for one period's forecast fails to cover the true value, it is
http://math.stackexchange.com/questions/186195/how-to-create-the-generating-function-for-this-sequence
# How to create the generating function for this sequence

This is a question from my discrete math exam of last semester. And I don't really know how to tackle this question. $$a_i$$ is the number of different sequences of i symbols (i >= 0) chosen from {0,1,2}, where no two 1's appear next to each other (so xxx11xx would be impossible) and no two 2's appear next to each other. For example 1200 is a valid sequence for 4 symbols. We assume that $$a_0 = 1$$ (the empty sequence for i = 0)

Question:
(a) What do $$a_1, a_2, a_3$$ equal?
(b) Give the recurrence relation (with initial values) for the sequence $${a_i}$$ for i from 0 to infinity. Explain your answer
(c) Calculate the ordinary generating function of $${a_i}$$, i from 0 to infinity

- What did you do? Question (a) does not seem that difficult... – Did Aug 24 '12 at 7:07
- I have a part of the solution on paper. (a) and (b) aren't the 'biggest' problems, it's especially part (c) that I don't know how to do (but the correct solution for a and b would help too) – Spyral Aug 24 '12 at 7:09
- Then add them to your question. – Did Aug 24 '12 at 7:11

You can calculate $a_n$ by computing $a_n=2a_{n-1}+a_{n-2}$: Any string of length $n-1$ can be extended in two different ways to a string of length $n$, namely by choosing a symbol which is different from the last one in the string. Therefore we get $2a_{n-1}$ strings. We forgot those which end with two $0$'s. How many are there? In fact such a string consists of an arbitrary admissible string of length $n-2$, and of course the two $0$'s. We end up with $a_n=2a_{n-1}+a_{n-2}$. I suggest you work with this explicit recursion. Finding the generating function should be purely technical from here.

- Hint: Define $b_n$ as the number of admissible sequences of length $n$ ending with 0 and $c_n$ as the number of admissible sequences of length $n$ ending with 1 or 2. Write $a_n$, $b_{n+1}$ and $c_{n+1}$ in terms of $b_n$ and $c_n$.
Deduce that $a_{n+1}=2a_n+b_n$ for every $n\geqslant0$, with the convention that $b_0=1$, and that $(a_n)_{n\geqslant0}$ solves a second order difference equation with the initial condition $a_0=a_{-1}=1$. Conclude.

- $b_n = a_{n-1}$ – noname1014 Aug 24 '12 at 7:40
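The recurrence in the first answer is easy to sanity-check by brute force. This short Python sketch (not part of the original thread) enumerates every string over {0,1,2} containing neither "11" nor "22" and compares the counts with $a_n = 2a_{n-1} + a_{n-2}$:

```python
from itertools import product

def valid(s):
    # Admissible: no two adjacent 1's and no two adjacent 2's.
    return "11" not in s and "22" not in s

def count_brute(n):
    return sum(1 for t in product("012", repeat=n) if valid("".join(t)))

# Recurrence a_n = 2*a_{n-1} + a_{n-2} with a_0 = 1, a_1 = 3.
a = [1, 3]
for _ in range(2, 9):
    a.append(2 * a[-1] + a[-2])

assert all(count_brute(n) == a[n] for n in range(9))
print(a)  # [1, 3, 7, 17, 41, 99, 239, 577, 1393]
```

From this recurrence, part (c) follows by the standard manipulation: multiplying $a_n = 2a_{n-1} + a_{n-2}$ by $x^n$ and summing gives $A(x) = \frac{1+x}{1-2x-x^2}$, whose series expansion reproduces the numbers printed above.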
https://math.stackexchange.com/questions/1365220/derivation-of-euler-equation
# Derivation of Euler Equation

In the following notes here I don't understand the very last line of the proof of Theorem 6.1.

> We now use the fact that $\frac{\partial}{\partial a}S[x_a(t)]$ must be zero for any function $\beta(t)$, because we are assuming that $x_{0}(t)$ yields a stationary value.

The function $\beta$ is a small perturbation of $x_0$. Note earlier on the page: "Our requirement is that there be no change in $S$ at first order in $a$." That is, when you differentiate $S[x_a]$, with $x_a = x_0 + a\beta$, with respect to $a$ at $a = 0$, you get zero for any perturbation $\beta$. Think of this as the directional derivative of $S$ at $x_0$ in the direction of $\beta$. If $S$ is stationary at $x_0$, then the derivative in any direction vanishes. Integration by parts then gives you the Euler-Lagrange equation.
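To see the statement in action numerically, here is a self-contained Python sketch (my own toy example, not from the notes): take $L = \dot{x}^2/2$, whose extremal between fixed endpoints is the straight line $x_0(t) = t$, perturb it as $x_a = x_0 + a\beta$ with $\beta(t) = \sin(\pi t)$ vanishing at the endpoints, and check that $\partial S/\partial a = 0$ at $a = 0$:

```python
import math

def action(a, npts=2000):
    # S[x_a] = integral of (dx/dt)^2 / 2 over [0, 1] for
    # x_a(t) = t + a*sin(pi t); beta vanishes at both endpoints.
    h = 1.0 / npts
    total = 0.0
    for i in range(npts):
        t = (i + 0.5) * h                      # midpoint rule
        xdot = 1.0 + a * math.pi * math.cos(math.pi * t)
        total += 0.5 * xdot * xdot * h
    return total

eps = 1e-4
# Central difference in a: vanishes because x_0 is the stationary path.
dS_da = (action(eps) - action(-eps)) / (2 * eps)
# Second derivative in a: positive, so the straight line is a minimum here.
curvature = (action(eps) - 2 * action(0.0) + action(-eps)) / eps ** 2

print("dS/da at a = 0:", dS_da)
print("d2S/da2       :", curvature)
```

Analytically $S(a) = \tfrac{1}{2} + a^2\pi^2/4$ for this choice of $\beta$, so the first derivative at $a = 0$ is exactly zero and the curvature is $\pi^2/2 \approx 4.93$, matching the numerics.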
http://www.newgrounds.com/portal/view/184511/review_page/416
## Credits & Info

Aug 1, 2004 | 7:43 PM EDT

• Daily Feature August 2, 2004
• Weekly 4th Place August 3, 2004

Salad Fingers Episode 3: "Nettles" Thanks to everyone for all the kind comments about Episode 2 (it got in the top 20!!) I hope you all enjoy this episode as much. They will keep coming! Don't worry. doki - www.fat-pie.com

## Reviews

#### Kp-killa-04

##### Rated 5 / 5 stars 2004-08-04 18:57:08

Wow....morbid.... I love this series...Salad Fingers is going to be a character i wont forget.....creepy tho.....in a good way

##### Rated 4 / 5 stars 2004-08-04 17:23:55

Im sorry to be writing this but..... I started watching salad fingers for the first time a couple of days ago and ive seen all three, in the start i hated them but its weird i kept coming back and watching them over and over again now its like i cant stop thinking about him, even when i dont want i just cant get him out, and its not a good feeling either, sometimes i find myself dictating his script. Is this your intention? to make an unforgetable character? Now i cant wait for salad fingers 4 . Good work on brainwashing me. Do any other people share this weird trend. I think i might be going crazy!!!

#### Panaeos

##### Rated 4.5 / 5 stars 2004-08-04 16:05:22

Creeeepy One word: Great. Hope you make more BFM episodes. His voice rocks ^^

#### looper

##### Rated 3 / 5 stars 2004-08-04 15:41:34

o god man this series really creeps me out...and that guy is so krazy!

#### virgin-suicide

##### Rated 5 / 5 stars 2004-08-04 15:41:03

omg! omg! omg! omg! omg! omg! OMG!!!!!!!!!! i want to rape salad fingers!!!!!!! ok it was like 2 weeks ago when i found your site and i would o there every day to look for some more salad fingers or scribbler or something....
yeah but then i stoped going cuz my computer freaked out and i coulfn't get on it till today and i got this thing in my email that said "Salad Fingers Episode 3: "Nettles" and i was like holy shit!!! omfg! and was like freaking out cuz i love salad fingers SOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO MUCH!!!!!!!!!!!!!! hehehe thank you thank you thank you thank you thank you thank you lol -later
http://mathhelpforum.com/calculus/140299-integrals-problem-1-a.html
1. ## Integrals problem 1

Let p(x) be a polynomial so that int(a-->a+1){p(x)dx} = 0 for all a in R. Prove p(x)=0 for all x in R.

2. Originally Posted by Also sprach Zarathustra
Let p(x) be a polynomial so that int(a-->a+1){p(x)dx} = 0 for all a in R. Prove p(x)=0 for all x in R.

Suppose $p(x)\neq 0$ and let $a\in\mathbb{R}$ be the largest real root of $p(x)$ (if the polynomial has no real roots then the contradiction is immediate, since the polynomial's values are then all positive or all negative), and get a contradiction by evaluating the integral from a to a+1.

Tonio

3. "...and get a contradiction evaluating the integral from a to a+1."

How do I get this kind of contradiction (for the maximality of a as a root of p(x))?

My try: let deg(p(x)) = n, so int(a-->a+1){p(x)dx} = q(a+1) - q(a) = 0, where deg(q(x)) = n+1; so deg{q(x+1) - q(x)} = n.

q(a+1) - q(a) = m(a+1) = 0

Thanks!

4. Originally Posted by Also sprach Zarathustra
Let p(x) be a polynomial so that int(a-->a+1){p(x)dx} = 0 for all a in R. Prove p(x)=0 for all x in R.

So, $I_n=\sum_{j=1}^{n}\int_{j}^{j+1}p(x)\,dx=\int_{1}^{n+1}p(x)\,dx=0$. Thus, $\lim_{n\to\infty}I_n=0$. Thus, if $p(x)=a_0+\cdots+a_nx^n$ then $\int p(x)=a_0x+\cdots+ a_nx^{n+1}$ and so $\lim_{n\to\infty}\left(a_0x+\cdots+a_nx^{n+1}\right)=0$. Do your stuff.

5. WHY are the coefficients of p(x) and int(p(x)) identical?

6. Originally Posted by Also sprach Zarathustra
WHY are the coefficients of p(x) and int(p(x)) identical?

Oops. Stupid typo. Won't affect anything.

7. Originally Posted by Also sprach Zarathustra
"...and get a contradiction evaluating the integral from a to a+1."

How do I get this kind of contradiction (for the maximality of a as a root of p(x))?

Because after the largest real root the polynomial will be either all positive or all negative, so its integral cannot be zero, of course.

Tonio
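The core of the exercise can also be checked mechanically: the integral of p over [a, a+1] is itself a polynomial in a, and demanding that it vanish for every a forces all of its coefficients (and hence all of p's coefficients) to zero. A sketch with SymPy for a generic quadratic (the construction is mine, not from the thread; the same computation works for any fixed degree):

```python
import sympy as sp

a, x = sp.symbols('a x')
c0, c1, c2 = sp.symbols('c0 c1 c2')

# A generic quadratic p(x).
p = c0 + c1*x + c2*x**2

# The integral over [a, a+1] is itself a polynomial in a.
I = sp.expand(sp.integrate(p, (x, a, a + 1)))

# "= 0 for all a" means every coefficient (as a polynomial in a) must vanish.
constraints = sp.Poly(I, a).all_coeffs()
solution = sp.solve(constraints, [c0, c1, c2])
print(solution)  # all coefficients are forced to zero
```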
http://mathhelpforum.com/discrete-math/101541-axiom-proofs.html
1. ## Axiom Proofs Prove the Axiom of Pair using: "for any A and B, there is a set C such that A is in C and B is in C." Prove the Axiom of Union using: "for any S, there exist U such that if X is in A and A is in S, then X is in U." Prove the Axiom of Power Set using: "for any set S, there exists P such that X is a subset of S implies X is in P." I would really appreciate any input. Thanks! 2. Originally Posted by larryj76 Prove the Axiom of Pair using: "for any A and B, there is a set C such that A is in C and B is in C." Prove the Axiom of Union using: "for any S, there exist U such that if X is in A and A is in S, then X is in U." Prove the Axiom of Power Set using: "for any set S, there exists P such that X is a subset of S implies X is in P." I would really appreciate any input. Thanks! Can you prove an axiom??
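For what it's worth, the intended reading in many set-theory texts is that each quoted (weaker) statement, together with the Axiom Schema of Separation, yields the usual axiom; Separation is an assumption not stated in the question. For the Axiom of Pair, a sketch:

```latex
% Given: a set $C$ with $A \in C$ and $B \in C$ (the weaker statement).
% Apply the Axiom Schema of Separation to $C$ with the formula $x = A \lor x = B$:
\{A, B\} \;=\; \{\, x \in C : x = A \lor x = B \,\}
% Extensionality gives uniqueness of this set.
```

The Union and Power Set cases are analogous: separate out $\{x \in U : \exists A\,(A \in S \land x \in A)\}$ and $\{x \in P : x \subseteq S\}$ respectively, from the sets $U$ and $P$ that the weaker statements provide.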
http://cs.stackexchange.com/questions/9088/visualized-definition-of-cohomology
# Visualized definition of cohomology I cannot imagine how cohomology is related to graph theory, actually I read solid definition from wiki, and to be honest, I cannot understand it. e.g I know what is homotopy (in simple term), group of functions such that I can continuously convert each of them to another one, and I think this is useful for understanding homology, but, is there similar visualization method for cohomology? (I'm not looking for exact definition, I want to imagine it, actually this is in graph theoretic concept). for more information see introduction of this paper. I want to understand it in this paper, how is useful? how to imagine it? P.S1: my field is not related to group theory, and as in introduction author wrote, this paper doesn't need deep group theoretic definition! and I don't want to be deep in group theory. Just looking for simple way to understand them. P.S2: I think I can imagine what is free group (which is in introduction of paper), at least by Calay graph seems to be easy to imagine it. P.S3: I also asked this in math.stackexchange but I think this is something between two field and may I get some mathematical answer there and some others here (from CS point of view) to understand it well. - "Wanna" is not an English word. –  Dave Clarke Jan 22 '13 at 11:11 @DaveClarke, may be its origin is from other languages, but is also english: oxforddictionaries.com/definition/english/wanna?q=wanna also see this :dictionary.reference.com/browse/wanna –  user742 Jan 22 '13 at 11:16 Should this be in math.SE? –  Peter Shor Jan 22 '13 at 16:55 As far as the use of "wanna" is concerned. It may be fine in informal conversational English, but is never used in written form. –  Nicholas Mancuso Jan 23 '13 at 2:27 @NicholasMancuso: Utter nonsense. The word is absolutely used in informal writing, a fact which is easily verified empirically, and is perfectly acceptable and clearly understood in that context. 
The (inconsistent and mostly arbitrary, but expected on this site) demands of formal writing style have little to do with the English language as a whole. –  C. A. McCann Jan 25 '13 at 17:31 Apparently all algebraic topology is useful for is earning imaginary internet points. More than I expected, I guess... (Actually now that I've finished writing I expect to lose points on this...) tl;dr: Honestly, I don't think anyone here can give you an easy way to really understand what homology and cohomology are, with just a short description. I made an attempt below, but I took a whole course on the subject and I still don't really know what they are. Particularly if you don't know, or care to know, what a group is. These things are groups, that's sort of the whole point. As they relate to graph theory, you can treat a graph as a simplicial complex of dimension 1. Thus you can consider the homology and cohomology groups of the graph and use them to understand the topology of the graph. Here are some notes by Herbert Edelsbrunner on homology and cohomology, the latter of which provides a useful example. Before I can define cohomology I must first make sure you understand the definition of homology. For simplicity I'll describe everything using simplicial complexes, but (co)homology can be, and usually is, defined for more complex complexes (see what I did there?). Simplicial complexes are enough for graph theory, at least as far as I've seen. It turns out that it's really difficult to try to create homeomorphisms between two topological spaces to show that they're topologically equivalent. It's even harder to try and show that no such homeomorphism exists. So the idea is to establish topological invariants, a property that must be equal for two topological spaces if they are topologically equivalent. If the invariant isn't the same for both spaces, then they can't possibly be topologically equivalent. Homology and cohomology groups are two such invariants.
They attempt to put a group structure on a topological space, so that we can work with groups and homomorphisms instead of topological spaces and homeomorphisms. Computer scientists like homology groups because they are easy to compute and lead to fast algorithms. The downside is they're much more difficult to visualize. So how do we describe a topological space in such a way that we can place a group structure on it? Well we build our topological space with a set of simplices, called a simplicial complex. A 0-dimensional simplex is a vertex, a 1-dim simplex is an edge, a 2-simplex is a triangle, a 3-simplex is a tetrahedron, etc. A valid simplicial complex must obey certain rules about how its simplices connect to one another: the intersection of two simplices must also be a simplex in the set and all subsimplices must be in the set. Notice that graphs can be thought about in this way. It makes sense to talk about sums of simplices of the same dimension $p$, what we call $p$-chains. So a $p$-chain can be written as $\sum_{i} a_i \sigma_i$ where $\sigma_i$ is a $p$-simplex and $a_i$ is an integer. With this operation we can define the group of $p$-chains $(C_p,+)$, or just $C_p$. We also have one more operation called the boundary operator, which takes a chain to its boundary. So for an edge $(v,u)$ in some graph, the boundary is just the sum of its vertices $u+v$. The boundary operator of a $p$-simplex $\sigma = [u_0,\ldots, u_n]$ is defined as $\partial \sigma = \sum_{i}(-1)^i[u_0, \ldots, \hat{u_i}, \ldots, u_n]$ (the signs disappear when working over $\mathbb{Z}_2$) where $\hat{u_i}$ means that we remove the vertex $u_i$ and create a sum of $(p-1)$-simplices. To apply the boundary operator to a $p$-chain $c$, we just apply it to each of its $p$-simplices, $\partial c = \sum_{i}a_i \partial \sigma_i$. Now there are two very important types of chains which we use to construct homology groups, cycles and boundaries. A $p$-cycle is a $p$-chain $c$ with no boundary, meaning $\partial c = 0$.
A $p$-boundary $c$ is a $p$-chain that is the boundary of some $(p+1)$-chain $d$, $c = \partial d$. Once again we can define groups $Z_p$ (our group of $p$-cycles) and $B_p$ (our group of $p$-boundaries). The group $B_p$ is a subgroup of $Z_p$, which is a subgroup of $C_p$. Homology groups are not a group of functions where one element can be deformed into another. Intuitively, what the homology group is trying to do is characterize the different loops. Think of a torus (donut) which has two distinct loops, colored in the image below. I can move the blue loop anywhere around the torus, but it's still the same loop because it differs only by its boundary. The elements of a homology group are equivalence classes, where two $p$-cycles in the simplicial complex are in the same equivalence class if and only if they differ only by a boundary chain. Meaning if you take two cycles $c,d \in Z_p$, and there exist boundaries $a,b \in B_p$ such that $c+a = d+b$, then $c$ and $d$ are in the same equivalence class. They are constructed as the quotient groups $H_p = Z_p/B_p$. Now cohomology is much less geometrically intuitive and motivated by algebraic considerations. We define cohomology groups in terms of cochains. A $p$-cochain is a homomorphism $\psi: C_p \rightarrow G$, where $G$ is the group used for the coefficients $a_i$ (usually $\mathbb{Z}$ or $\mathbb{Z}_2$). Instead of considering a group of chains, we consider the group of cochains, all homomorphisms between $C_p$ and $G$, denoted $C^p=\mathrm{Hom}(C_p, G)$. Similarly we can define a coboundary operator on cochains, as the dual homomorphism of $\partial$, which we denote $\delta: C^{p} \rightarrow C^{p+1}$. Notice that since the boundary operator $\partial$ took $(p+1)$-chains to $p$-chains, the dual homomorphism takes $p$-cochains to $(p+1)$-cochains. Now consider a $(p-1)$-cochain $\psi$ and a $p$-chain $c$, so that $\partial c$ is a $(p-1)$-chain.
The coboundary operator $\delta$ requires that $\psi$ applied to $\partial c$ is the same as $\delta \psi$ applied to $c$. Once we have the coboundary operator, we can define cocycle and coboundary groups, denoted $Z^{p}$ and $B^{p}$, just as we did before. Then the $p$th cohomology group is a set of equivalence classes where two cocycles are equivalent iff they differ by a coboundary. They are constructed as the quotient groups $H^{p}=Z^{p}/B^{p}$. Do you see why this is much less geometrically intuitive? - Thank you very much, thoughtful answer, I read it right now, but I think I should read it again tomorrow (with a good look at your references). –  user742 Jan 22 '13 at 19:18
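As the answer notes, homology groups are easy to compute. For a graph treated as a 1-dimensional simplicial complex with $\mathbb{Z}_2$ coefficients, the Betti numbers (ranks of $H_0$ and $H_1$) fall out of a single rank computation on the boundary matrix $\partial_1$. A minimal sketch (the function names and GF(2) elimination here are my own illustration, not from the post):

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    M = (M.copy() % 2).astype(np.uint8)
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]  # move pivot row into place
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]              # eliminate the column entry
        rank += 1
    return rank

def graph_betti(n_vertices, edges):
    """b0 (connected components) and b1 (independent cycles) of a graph
    viewed as a 1-dimensional simplicial complex over Z/2."""
    # Boundary operator d1: C_1 -> C_0; column j is the boundary u+v of edge j.
    d1 = np.zeros((n_vertices, len(edges)), dtype=np.uint8)
    for j, (u, v) in enumerate(edges):
        d1[u, j] = 1
        d1[v, j] = 1
    r = gf2_rank(d1)
    b0 = n_vertices - r   # dim H_0 = dim C_0 - rank d1 (no 0-boundaries)
    b1 = len(edges) - r   # dim H_1 = dim Z_1 = dim C_1 - rank d1 (no 2-chains)
    return b0, b1
```

For a triangle this gives $b_0 = 1$ (one component) and $b_1 = 1$ (one independent loop), matching the intuition about loops above.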
http://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-2nd-edition/chapter-1-functions-1-1-review-of-functions-1-1-exercises-page-12/88
## Calculus: Early Transcendentals (2nd Edition) The solution is $$f(x)=x^2-6.$$ We need $(f(x))^2$ to equal a 4th-degree polynomial, so we demand that $f(x)$ be a 2nd-degree polynomial, i.e. that it is given by $f(x)=ax^2+bx+c$. This gives $$(f(x))^2=(ax^2+bx+c)^2=a^2x^4+b^2x^2+c^2+2abx^3+2acx^2+2bcx = a^2x^4+ 2abx^3+(b^2+2ac)x^2+2bcx+c^2 = x^4-12x^2+36.$$ Equating the coefficients multiplying the same powers of $x$ we get $$a^2=1\Rightarrow a=\pm 1;\ \text{take } a=1$$ $$2ab=0\Rightarrow 2b=0\Rightarrow b=0$$ $$b^2+2ac=-12\Rightarrow 2c=-12\Rightarrow c=-6$$ The last equation serves only as a check. Indeed $$c^2=(-6)^2=36.$$ Finally $$f(x)=x^2-6$$ (the choice $a=-1$ yields the other valid answer, $f(x)=6-x^2$).
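The coefficient matching can be double-checked mechanically; a quick sketch with SymPy (my own verification, not part of the textbook solution):

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 - 6
# Squaring f(x) = x^2 - 6 should reproduce the quartic from the exercise.
assert sp.expand(f**2) == x**4 - 12*x**2 + 36
# The sign-flipped candidate 6 - x^2 squares to the same quartic,
# which is why a = -1 yields a second valid answer.
assert sp.expand((6 - x**2)**2) == x**4 - 12*x**2 + 36
```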
http://www.plosone.org/article/info:doi/10.1371/journal.pone.0075569
Research Article

# Community Structure and Multi-Modal Oscillations in Complex Networks

Affiliation (all authors): School of Computing, Mathematics and Digital Technology, Manchester Metropolitan University, Manchester, United Kingdom

• Published: October 10, 2013
• DOI: 10.1371/journal.pone.0075569

## Abstract

In many types of network, the relationship between structure and function is of great significance. We are particularly interested in community structures, which arise in a wide variety of domains. We apply a simple oscillator model to networks with community structures and show that waves of regular oscillation are caused by synchronised clusters of nodes. Moreover, we show that such global oscillations may arise as a direct result of network topology. We also observe that additional modes of oscillation (as detected through frequency analysis) occur in networks with additional levels of topological hierarchy and that such modes may be directly related to network structure. We apply the method in two specific domains (metabolic networks and metropolitan transport) demonstrating the robustness of our results when applied to real world systems. We conclude that (where the distribution of oscillator frequencies and the interactions between them are known to be unimodal) our observations may be applicable to the detection of underlying community structure in networks, shedding further light on the general relationship between structure and function in complex systems.

### Introduction

The problem of relating the structure of a network to the dynamical behaviour it supports is of significant interest in a large number of domains.
Many different systems may be represented as networks of connected entities, from friends communicating via social media [1], [2], to groups of neurons [3] and chemical reactions [4]. The fundamental issue we address here is how to link the observed dynamics of a system to certain properties of its underlying network structure. The hope is that, by deepening our understanding of how particular types of network behave (in a global sense) over time, we may gain the ability to predict the behaviour of so-far unknown networks with similar structures. In addition, by studying the recurring features of complex networks from a number of different disciplines, we may gain a deeper, more over-arching theoretical understanding of network dynamics. Early work in this area focused on the development of model systems, which were used to analytically study the onset of certain behaviours (such as oscillations) [5], [6] (see also [7], [8] for reviews). These model systems have been successfully applied in a number of different disciplines, including chemistry [9], ecology [10] and sociology [2]. Of particular interest are networks which possess some form of community structure [11]-[15]; for an overview of methods for determining such community structure, see [16]. These are generally characterised as having groups of nodes that are tightly knit (i.e. highly connected) with less dense connections existing between these groups [17]. Such structures are interesting because many ‘real world’ networks (e.g. social, biological, technological) are naturally partitioned into sets of loosely-connected ‘communities’, or ‘modules’ [18]-[22]. Moreover, we do not restrict ourselves to networks which are static (i.e. we consider the possibility that connections are added and removed and nodes update their state) since such structures capture the fact that links between individual nodes - and the properties of nodes - may change over time.

Recent work [23] on community structure in dynamic networks has shown that allowing nodes to influence the state of other nodes facilitates the spontaneous emergence of dynamic equilibrium (that is, the community structure of the network remains stable, even as group composition changes over time) [24]. The idea of nodes influencing one another leads naturally to the notion of synchronisation. The ability of connected dynamic elements to synchronise their behaviour through interaction is ubiquitous (see [25] for a general introduction) and has profound implications for a wide variety of systems. We are particularly interested in the situation where the connected elements are oscillators [6], as their synchrony is observed in many settings, from the human heart [26] and brain [27], to insect locomotion [28] and novel forms of computation [29]. Previous work [30] has established a strong correlation between the connectivity of groups of nodes and the time required for oscillators to synchronise. However, given that full synchronisation does not (and, indeed, should not) occur in many networks (for example, the abnormal synchronisation in neurones is known to be a feature of epilepsy [24]) we are interested in the possible relationship between structure and dynamical behaviour for oscillator networks where the coupling between oscillators is weak enough and the connectivity in the graph is sparse enough, such that synchronisation does not occur. In this paper, we precisely address this question. Network topology has a strong effect on the observed dynamics of oscillator networks [31]-[34]. Previous work has mainly focused on whether or not a network will synchronise, relating this to graphical measures such as the eigenvalues of the Laplacian [31] or clustering coefficients [35]. This work suggests that the ability of an oscillator network to synchronise is enhanced by homogeneity in the distribution of connections [31].

Many complex networks have been shown to demonstrate periodic dynamics. Neural systems, for example, display modes of oscillation at particular frequencies and this has in turn been linked to the hierarchical organisation of the brain network itself [36]. In coupled oscillator networks with all-to-all coupling, oscillating waves of synchronization have been observed in systems with bimodal and trimodal frequencies [37], [38] and in systems of interacting populations of oscillators [39]. Such oscillations may also be observed in globally coupled oscillators, where there is both an excitatory and inhibitory component to the interactions, as observed in [40], [41]. However, in each of these cases the global oscillations are in some way attributable to the individual nodes in the network and not to the network structure itself. In this paper we show how the community structure of a complex network may actively drive periodic dynamics and that such periodic dynamics occur in real world networks. The remainder of this paper describes our methodology in detail, showing how a simple model system is capable of a variety of dynamical behaviours. We then give the results of experimental investigations into the effect of network topology on oscillatory dynamics and how the latter may be used to detect the former. In particular, we demonstrate how our methodology may be applied to two real world networks. We conclude with a discussion and suggestions for future work.

### Methods

In order to rigorously establish the relationship between network structure and dynamics, we require a model system that is broadly applicable, but which supports a wide range of dynamical behaviours. We also need to be able to measure the global network dynamics in a way that readily admits analysis. The well-established Kuramoto model [5], [42], [43] meets all of these requirements and is widely used in related work [44]-[47].
The model describes a system of coupled oscillators via ordinary differential equations (ODEs), with interaction terms between oscillators connected according to the specific network topology:

$$\dot{\theta}_i = \omega_i + K \sum_{j=1}^{N} A_{ij} \sin(\theta_j - \theta_i), \qquad i = 1, \ldots, N \tag{1}$$

where $N$ is the number of nodes in the network, $\omega_i$ is the natural frequency of oscillator $i$, $K$ is the coupling strength between connected oscillators, $\theta_i$ is the oscillatory phase of node $i$ and $A_{ij}$ is the adjacency matrix of the network. This original model of Kuramoto assumes mean-field interactions. In the absence of any external noise, the global dynamics are determined by the coupling strength $K$, the distribution of natural frequencies $\omega_i$ and the connectivity within the underlying network. In general, the coupling strength acts to synchronise the oscillators; the wider the distribution of $\omega_i$, the harder it is for the oscillators to synchronise, and higher connectivity within the graph also serves to cause the oscillators to synchronise (i.e. all-to-all coupling will synchronise more easily than sparsely coupled networks). Many variations of the original Kuramoto model have been developed; of particular interest is the introduction of a phase lag, $\alpha$, between oscillators, which can give rise to so-called chimera states [48]-[50]. These occur when oscillators form into clusters, some of which are synchronised and some of which are desynchronised. Chimera states are inherently interesting, because they describe a situation in which a collection of identical oscillators splits into two domains, one coherent and the other incoherent. As Abrams and Strogatz [48] observe, “Nothing like this has ever been seen for identical oscillators.” Chimera states can arise as a direct result of network topology; specifically, the existence of community structure [51]. The observations we describe in this paper, although in many respects similar to such chimera states in that global observations can be directly attributed to topology, are significantly different.
Motivated, in part, by the realisation that many naturally-occurring networks have complex topologies, recent studies have been extended to systems where the pattern of connections is local but not necessarily regular [52]. Due to the complexity of the analysis, further assumptions have generally been introduced. For example, it is usually assumed that the oscillators are identical. Obviously, therefore, in the absence of disorder (i.e. if all natural frequencies are equal), there is only one attractor of the dynamics: the fully synchronised regime, where all phases coincide. This scenario suggests that, starting from random initial conditions, a complex network with a non-trivial connectivity pattern will exhibit the following behaviour: first, the highly interconnected units that form local clusters will synchronise; second, in a sequential process, increasingly large synchronised spatial structures will emerge, until, finally, the whole population is synchronised [30]. However, for many dynamical complex networks, synchronisation is neither realised nor desirable. In these instances, weakly coupled oscillators may display partial synchronisation or clustering, but not full synchronisation. More formally, Equation 1 can give rise to a variety of dynamical behaviours. For strongly coupled networks (those with high connectivity and large coupling strength) the phases of all oscillators quickly synchronise. With weak coupling, the oscillators appear to move randomly. Between these regimes, we observe partial synchronisation, where some oscillators are synchronised and others form clusters, but no global synchronisation is evident. We use a global order parameter [47]:

$$r = \frac{1}{N} \left| \sum_{j=1}^{N} e^{i\theta_j} \right| \tag{2}$$

as a measure of coherence over the entire network. This is the magnitude of the average phase vector of all oscillators within the network; for fully synchronised networks, $r = 1$; for networks where the phases of all oscillators are equally distributed around the unit circle, $r = 0$; and for all other states, $0 < r < 1$.
In what follows, we use the global order parameter to investigate the effect of network topology on synchronisation.

### Results

We now present the results of our experimental investigations. The over-arching aim is to show how global oscillatory behaviour may be related directly to the community structure of the underlying complex network.

#### Artificial networks

We first study two classes of graph: those with and those without any community structure. For example, consider the typical community structured graph in Figure 1. Given weak coupling, the dynamics of such a graph allow for the possibility of synchronisation within the smaller globally connected clusters, while the entire graph remains only partially synchronised. As such, any global measure of synchronisation appears to oscillate (Figure 2), the oscillation being dependent upon the differences in the frequencies of oscillations between each of the clusters. Figure 2 shows the order parameter oscillating between relatively low levels of synchronisation and almost full synchronisation. We emphasise, though, that the internal frequencies of the oscillators have been specifically selected in order to demonstrate such dynamics and that this will not occur in all cases. In graphs without any community structure, we fail to observe any discernible oscillation above that of the natural frequency of the oscillators. In order to demonstrate that the oscillating dynamics shown above are not simply an artefact of network symmetry, we perturb the original network by repeatedly adding random connections. Figure 3 demonstrates the structural stability of the modal dynamics when the network structure is no longer symmetric, but the community structure is retained. That is, the global oscillations observed are not due to symmetry of the graph structure.
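The setup described above is short to reproduce with NumPy. Everything in this sketch (Euler integration, the block-structured two-cluster example, the specific frequencies and coupling strength) is my own illustrative choice, not the paper's actual code:

```python
import numpy as np

def simulate_kuramoto(A, omega, K, dt=0.01, steps=5000, seed=0):
    """Euler integration of the network Kuramoto model,
    dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i),
    returning the global order parameter r(t) = |mean_j exp(i*theta_j)|."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, len(omega))
    r = np.empty(steps)
    for t in range(steps):
        diff = theta[None, :] - theta[:, None]  # diff[i, j] = theta_j - theta_i
        theta = theta + dt * (omega + K * (A * np.sin(diff)).sum(axis=1))
        r[t] = np.abs(np.mean(np.exp(1j * theta)))
    return r

# Two tightly coupled clusters joined by a single link: with different
# cluster frequencies, r(t) oscillates instead of settling at a constant.
n = 10
A = np.zeros((n, n))
A[:5, :5] = 1
A[5:, 5:] = 1                 # two complete clusters of five nodes
np.fill_diagonal(A, 0)
A[4, 5] = A[5, 4] = 1         # one sparse inter-cluster connection
omega = np.array([0.0] * 5 + [2.0] * 5)
r = simulate_kuramoto(A, omega, K=0.5)
```

Each cluster synchronises internally, while the weak inter-cluster link cannot lock the two clusters together, so the order parameter sweeps periodically between low and high coherence, which is the signature of community structure discussed above.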
Although the asymmetric graphs no longer produce strong regular oscillations, the dynamics are not significantly affected by symmetry-breaking through the addition of connections. For this particular graph, it is possible to add further connections before the onset of global synchronisation. Figure 4 shows another example of network rewiring, in this case using the Xswap algorithm [53], in which the network is randomised while the degree of each node remains constant. This is achieved by randomly selecting a pair of edges in the network and exchanging one endpoint of each, provided the swap does not duplicate an existing edge. It should be noted that in the unperturbed network the nodes are self-connected, so on some iterations these self-edges are swapped. The oscillations break down as the network is randomised, demonstrating that it is the overall graphical structure that causes this behaviour. To develop the study of non-symmetric networks further, we consider a large, idealised network of oscillators arranged such that three highly coupled sub-networks of oscillators are connected via a sparse network of random connections. We report the results of simulations in which each cluster contains the same number of oscillators and approximately the same number of internal connections. We first investigate the effect of varying the coupling strength using standard bifurcation techniques. Figure 5 shows typical one-parameter bifurcation diagrams of the global order parameter as the coupling strength is increased. Here, the initial phases of the oscillators are drawn from a uniform distribution. At each iteration of the simulation the coupling strength is increased in small increments, and we show bifurcations for three different numbers of random additional connections (see Figure 5). In common with networks lacking community structure, these networks synchronise above a critical coupling strength; for small values of the coupling strength, the oscillators are incoherent.
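The Xswap rewiring described above can be sketched as follows. This is a generic degree-preserving edge swap on ordered edge pairs, swaps that would create a self-loop or duplicate an existing edge are skipped; the exact variant of [53] may differ in details such as undirected-edge handling:

```python
import random
from collections import Counter

def xswap(edges, n_swaps, seed=0):
    """Degree-preserving randomisation: repeatedly pick two edges (a, b) and
    (c, d) and rewire them to (a, d) and (c, b), skipping any swap that would
    create a self-loop or duplicate an existing edge."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = set(edges)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        new1, new2 = (a, d), (c, b)
        if a == d or c == b or new1 in present or new2 in present:
            continue
        present.remove(edges[i])
        present.remove(edges[j])
        present.add(new1)
        present.add(new2)
        edges[i], edges[j] = new1, new2
    return edges

def degrees(edges):
    """Total degree of each node (each edge contributes to both endpoints)."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg

# The swap rewires connections but leaves every node's degree unchanged.
original = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
rewired = xswap(original, 100)
assert degrees(original) == degrees(rewired)
```

Each accepted swap preserves the endpoints' degrees by construction, which is why repeated application destroys community structure (and, per the text, the oscillations) without altering the degree sequence.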
In the first example, there exists a specific region of coupling strength for which the order parameter appears to oscillate between the ordered and disordered states. Figure 6 shows the time series of the order parameter for the three networks described above, with a given distribution of internal frequencies and the respective coupling strengths for (A), (B) and (C). We now consider a more complex network, which displays an additional level of hierarchy (Figure 7). For optimised parameter values, we observe multi-modal oscillations of the global order parameter. A Fourier spectrum of this time series demonstrates two modes of oscillation, each with strong echoes (Figure 8). The relationship between these oscillating modes strongly mirrors the graphical structure of the network, in that the two levels of hierarchy cause a bimodal oscillation and therefore two peaks in the Fourier spectrum.

#### 'Real world' networks

In the previous section, we established the feasibility of using a global order measure to detect community structure in artificial networks. We now validate this approach against two classes of 'real world' network, both of which provide examples that may or may not possess community structure. In order to provide a metric for comparison, we use the standard measure of modularity [54]. The measure gives a sense of community structure and is defined as the proportion of the edges that fall within clusters, minus the expected proportion if the edges were distributed at random. Other metrics for determining modularity have also been proposed (see [55], for example); however, we use the best-known measure (the MATLAB program to calculate modularity was downloaded from VisualConnectome [56]).

#### Human metabolic network

The metabolic network of a cell or microorganism describes the connections between the various cellular processes that are essential for sustaining function [4].
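The modularity measure just described can be illustrated with a small sketch. This follows the standard Newman definition (fraction of within-community edges minus the degree-based random expectation), not the authors' exact MATLAB implementation:

```python
from collections import Counter

def modularity(edges, communities):
    """Newman modularity Q: the fraction of edges inside communities minus the
    expected fraction under a degree-preserving random null model."""
    m = len(edges)
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    # Observed fraction of edges whose endpoints share a community
    internal = sum(1 for a, b in edges if communities[a] == communities[b])
    # Expected fraction: sum over communities of (community degree / 2m)^2
    comm_deg = Counter()
    for node, c in communities.items():
        comm_deg[c] += deg[node]
    expected = sum((d / (2 * m)) ** 2 for d in comm_deg.values())
    return internal / m - expected

# Two triangles joined by a single bridge: clear community structure.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
parts = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
print(modularity(edges, parts))  # 6/7 - 1/2, approximately 0.357
```

A value well above zero, as here, indicates many more within-community edges than a degree-matched random graph would produce; randomising the network (e.g. via Xswap) drives this value towards zero.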
Metabolic networks often exhibit strong community structure [57]–[59], and are usually pseudo-hierarchical, in that their structure is not fully hierarchical [60]. In this section, we use our method to correctly identify community structure in metabolic networks. We use metabolic pathway networks in SBML format [61], taken from the BiGG database [62]. These are imported into MATLAB using libSBML [63]. In this analysis, the Homo sapiens Recon 1 (human) metabolic network is used, as this is perhaps the most interesting example available; similar results have been observed on other metabolic networks formulated in a similar manner. In order to establish a relationship between community structure and dynamics, we consider two versions of this network. The first comprises the global connectivity matrix of all chemical reactants in the cell, a connection being present if two or more components are involved in a known reaction (we exclude water and ATP, as these occur in almost all reactions). The second formulation partitions reactions into sub-cellular networks, each representing a different region of the cell (nucleus, Golgi bodies, etc.), which are connected in turn by reactions. Graphical representations of these networks are shown in Figure 9. From a graph-theoretical perspective, these two networks are very similar. Standard graph metrics such as the clustering coefficient and the mean and maximum path lengths do not distinguish between the two. Furthermore, the eigenvalue spectrum (as described in [30]) also shows no discernible difference. The main difference between the two networks lies in their modularity values, with the compartmentalised version scoring higher than the non-compartmentalised version. Due to the higher modularity of the compartmentalised version, we would expect to see regular oscillations in this representation.
Simulations for optimised coupling strengths and frequency distributions are conducted on both forms of the metabolic network. For the non-partitioned network, we fail to observe multi-modal oscillations in the global order parameter. However, for the partitioned network we observe strong modal dynamics (see Figures 10 and 11 for a comparison), which is consistent with the results for modularity. This demonstrates that our method of community detection is viable for complex real-world networks, where the underlying structure is not as regular as in those formed using generative models.

#### Transport networks

We now investigate a completely different type of network: those describing mass transit systems in major cities. Specifically, we compare the networks of the London Underground and the New York Subway, as both are large enough to be interesting but have very different underlying geographical structures. In particular, stations on the London Underground are more evenly distributed than in New York, where the presence of islands in the geography of the city gives rise to clusters of stations, particularly in South Manhattan and Brooklyn (Figure 12). Computing the modularity of these two networks gives New York a higher value than London. From this, we predict that our method will generate a regular oscillating pattern for New York, but not for London. The London Underground and New York Subway maps were taken from the 'Transport for London' [64] and 'Metropolitan Transportation Authority' [65] websites respectively. Using these maps we constructed, by hand, adjacency matrices in which stations are represented by nodes, with an edge connecting a pair of nodes if there is a direct line between the corresponding stations. Structurally, these networks are significantly different from the previous examples. Notably, there are many long chains, the overall graph connectivity is low, and there are very few 'small world' effects.
As such, we are confident that these networks present a novel challenge, over and above that offered by both the artificially generated networks and the metabolic networks. As before, we run numerical simulations in order to optimise model parameters, in an attempt to maximise any oscillatory dynamics. On the London network, we observe a small amount of oscillatory behaviour, although its amplitude is small and the resulting Fourier spectrum shows only a weak peak (Figure 13). On the other hand, experiments on the New York network yield a significantly more pronounced oscillation, which displays very strong periodicity. The primary oscillatory mode is strong and is accompanied by a strong echo; a second oscillatory mode is also observed (Figure 14). In order to demonstrate that this oscillating behaviour is indeed caused by the underlying hierarchy of the network, the New York Subway network was rewired using the Xswap algorithm described previously. We observe that as the network is rewired and the modularity reduced, oscillations no longer occur. As the Xswap algorithm maintains the degree distribution of the network but reduces modularity, this demonstrates that modularity directly causes the oscillations in the global order parameter.

### Discussion

In this paper, we have demonstrated a robust and structurally stable relationship between form and function in complex networks, whereby global oscillations are shown to arise from network topology. We observe modal oscillations in a measure of global synchronisation which can be related directly to the community structure of the network itself. By applying the method to two types of 'real world' network, in which examples with significantly different community structures share similar underlying topologies, we show that the method also works on realistic, more irregular structures.
We demonstrate the breakdown of oscillatory behaviour when networks are rewired (with the degree of each node remaining constant). This confirms that network modularity drives the oscillations: as modularity is reduced, the oscillations break down. We should note, however, that for the real-world examples given, the underlying dynamics of the nodes on the network (chemical reactions and subway trains) are considerably more complex than the simple Kuramoto oscillators used to demonstrate the principle. As such, it is not possible to attribute any observed oscillatory dynamics in such systems to the network structure alone. Many real-world networks (e.g. transport networks, the brain) are pseudo-hierarchical, in that their structure is not fully hierarchical [60]. In the particular example of the brain, multi-modal oscillations (observed as gamma, beta and alpha waves in EEG measurements) may be attributable to structural hierarchies in the neural connectivity. As such, for systems where the dynamics of the individual elements of a complex network are known to be unimodal, as are the interactions between them, global observations of oscillatory behaviour may give some indication of the underlying structures and network connectivity, yielding novel methods of community detection.

### Acknowledgments

We thank the editor Petter Holme and the anonymous reviewers for their comments and suggestions. We would also particularly like to thank Kieran Smallbone for providing the metabolic network data in a usable format.

### Author Contributions

Conceived and designed the experiments: HD JB. Performed the experiments: HD JB. Analyzed the data: HD JB. Wrote the paper: HD JB MA.

### References

1. Milgram S (1967) The small world problem. Psychology Today 2: 60–67.
2. Bearman P, Moody J, Stovel K (2004) Chains of affection: The structure of adolescent romantic and sexual networks.
American Journal of Sociology 110: 44–91. doi: 10.1086/386272
3. Watts D, Strogatz S (1998) Collective dynamics of 'small-world' networks. Nature 393: 440–442. doi: 10.1038/30918
4. Jeong H, Tombor B, Albert R, Oltvai Z, Barabási A (2000) The large-scale organization of metabolic networks. Nature 407: 651–654. doi: 10.1038/35036627
5. Kuramoto Y, Nishikawa I (1987) Statistical macrodynamics of large dynamical systems. Case of a phase transition in oscillator communities. Journal of Statistical Physics 49: 569–605. doi: 10.1007/bf01009349
6. Mirollo R, Strogatz S (1990) Synchronization of pulse-coupled biological oscillators. SIAM Journal on Applied Mathematics 50: 1645–1662. doi: 10.1137/0150098
7. Acebrón JA, Bonilla LL, Vicente CJP, Ritort F, Spigler R (2005) The Kuramoto model: A simple paradigm for synchronization phenomena. Reviews of Modern Physics 77: 137. doi: 10.1103/revmodphys.77.137
8. Strogatz SH (2000) From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators. Physica D: Nonlinear Phenomena 143: 1–20. doi: 10.1016/s0167-2789(00)00094-4
9. Lin Y, Fan L, Shafie S, Bertók B, Friedler F (2010) Graph-theoretic approach to the catalytic-pathway identification of methanol decomposition. Computers & Chemical Engineering 34: 821–824. doi: 10.1016/j.compchemeng.2009.12.004
10. Montoya J, Solé R (2002) Small world patterns in food webs. Journal of Theoretical Biology 214: 405–412. doi: 10.1006/jtbi.2001.2460
11. Girvan M, Newman ME (2002) Community structure in social and biological networks. Proceedings of the National Academy of Sciences 99: 7821–7826. doi: 10.1073/pnas.122653799
12. Hu C, Yu J, Jiang H (2010) Synchronization of complex community networks with nonidentical nodes and adaptive coupling strength. Physics Letters A.
13. Gulbahce N, Lehmann S (2008) The art of community detection. Bioessays 30: 934–938. doi: 10.1002/bies.20820
14.
Lancichinetti A, Fortunato S, Kertész J (2009) Detecting the overlapping and hierarchical community structure in complex networks. New Journal of Physics 11: 033015. doi: 10.1088/1367-2630/11/3/033015
15. Sporns O, Chialvo DR, Kaiser M, Hilgetag CC (2004) Organization, development and function of complex brain networks. Trends in Cognitive Sciences 8: 418–425. doi: 10.1016/j.tics.2004.07.008
16. Fortunato S (2010) Community detection in graphs. Physics Reports 486: 75–174. doi: 10.1016/j.physrep.2009.11.002
17. Newman M (2003) The structure and function of complex networks. SIAM Review 45: 167–256. doi: 10.1137/s003614450342480
18. Newman ME (2006) Modularity and community structure in networks. Proceedings of the National Academy of Sciences 103: 8577–8582. doi: 10.1073/pnas.0601602103
19. Litvin O, Causton H, Chen B, Pe'Er D (2009) Modularity and interactions in the genetics of gene expression. Proceedings of the National Academy of Sciences 106: 6441. doi: 10.1073/pnas.0810208106
20. Ravasz E, Somera A, Mongru D, Oltvai Z, Barabási A (2002) Hierarchical organization of modularity in metabolic networks. Science 297: 1551. doi: 10.1126/science.1073374
21. Zhou C, Zemanová L, Zamora G, Hilgetag CC, Kurths J (2006) Hierarchical organization unveiled by functional connectivity in complex brain networks. Physical Review Letters 97: 238103. doi: 10.1103/physrevlett.97.238103
22. Stam CJ, Reijneveld JC (2007) Graph theoretical analysis of complex networks in the brain. Nonlinear Biomedical Physics 1: 3. doi: 10.1186/1753-4631-1-3
23. Bryden J, Funk S, Geard N, Bullock S, Jansen VA (2011) Stability in flux: community structure in dynamic networks. Journal of The Royal Society Interface 8: 1031–1040. doi: 10.1098/rsif.2010.0524
24. Lehnertz K, Bialonski S, Horstmann MT, Krug D, Rothkegel A, et al. (2009) Synchronization phenomena in human epileptic brain networks. Journal of Neuroscience Methods 183: 42–48.
doi: 10.1016/j.jneumeth.2009.05.015
25. Strogatz S (2003) Sync: The Emerging Science of Spontaneous Order. Hyperion.
26. Honerkamp J (1983) The heart as a system of coupled nonlinear oscillators. Journal of Mathematical Biology 18: 69–88. doi: 10.1007/bf00275911
27. Enright JT (1980) Temporal precision in circadian systems: a reliable neuronal clock from unreliable components? Science 209: 1542. doi: 10.1126/science.7433976
28. Collins J, Stewart I (1993) Hexapodal gaits and coupled nonlinear oscillator models. Biological Cybernetics 68: 287–298. doi: 10.1007/bf00201854
29. Ashwin P, Borresen J (2005) Discrete computation using a perturbed heteroclinic network. Physics Letters A 347: 208–214. doi: 10.1016/j.physleta.2005.08.013
30. Arenas A, Diaz-Guilera A, Pérez-Vicente C (2006) Synchronization reveals topological scales in complex networks. Physical Review Letters 96: 114102. doi: 10.1103/physrevlett.96.114102
31. Nishikawa T, Motter AE, Lai YC, Hoppensteadt FC (2003) Heterogeneity in oscillator networks: Are smaller worlds easier to synchronize? Physical Review Letters 91: 14101. doi: 10.1103/physrevlett.91.014101
32. Lou X, Suykens JA (2011) Finding communities in weighted networks through synchronization. Chaos: An Interdisciplinary Journal of Nonlinear Science 21: 043116. doi: 10.1063/1.3655371
33. Boccaletti S, Ivanchenko M, Latora V, Pluchino A, Rapisarda A (2007) Detecting complex network modularity by dynamical clustering. Physical Review E 75: 045102. doi: 10.1103/physreve.75.045102
34. Wang XH, Jiao LC, Wu JS (2009) Extracting hierarchical organization of complex networks by dynamics towards synchronization. Physica A: Statistical Mechanics and its Applications 388: 2975–2986. doi: 10.1016/j.physa.2009.03.044
35. McGraw PN, Menzinger M (2005) Clustering and the synchronization of oscillator networks. Physical Review E 72: 015101. doi: 10.1103/physreve.72.015101
36.
Bullmore E, Sporns O (2009) Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience 10: 186–198. doi: 10.1038/nrn2575
37. Acebrón JA, Bonilla LL, Leo SD, Spigler R (1998) Breaking the symmetry in bimodal frequency distributions of globally coupled oscillators. Physical Review E 57.
38. Acebrón J, Perales A, Spigler R (2001) Bifurcations and global stability of synchronized stationary states in the Kuramoto model for oscillator populations. Physical Review E 64: 016218. doi: 10.1103/physreve.64.016218
39. Montbrió E, Kurths J, Blasius B (2004) Synchronization of two interacting populations of oscillators. Physical Review E 70: 056125. doi: 10.1103/physreve.70.056125
40. Ashwin P, Orosz G, Borresen J (2010) Heteroclinic switching in coupled oscillator networks: Dynamics on odd graphs. In: Nonlinear Dynamics and Chaos: Advances and Perspectives, Springer Berlin Heidelberg. 31–50.
41. Ashwin P, Borresen J (2004) Encoding via conjugate symmetries of slow oscillations for globally coupled oscillators. Physical Review E 70: 026203. doi: 10.1103/physreve.70.026203
42. Kuramoto Y (1975) Self-entrainment of a population of coupled non-linear oscillators. In: International Symposium on Mathematical Problems in Theoretical Physics. Springer, 420–422.
43. Kuramoto Y (2003) Chemical oscillations, waves, and turbulence. Dover Publications.
44. Hong H, Strogatz S (2011) Kuramoto model of coupled oscillators with positive and negative coupling parameters: An example of conformist and contrarian oscillators. Physical Review Letters 106: 54102. doi: 10.1103/physrevlett.106.054102
45. Dörfler F, Bullo F (2012) Synchronization and transient stability in power networks and nonuniform Kuramoto oscillators. SIAM Journal on Control and Optimization 50: 1616–1642. doi: 10.1137/110851584
46.
DeLellis P, DiBernardo M, Garofalo F (2009) Novel decentralized adaptive strategies for the synchronization of complex networks. Automatica 45: 1312–1318. doi: 10.1016/j.automatica.2009.01.001
47. Assenza S, Gutiérrez R, Gómez-Gardeñes J, Latora V, Boccaletti S (2011) Emergence of structural patterns out of synchronization in networks with competitive interactions. Scientific Reports 1.
48. Abrams DM, Strogatz SH (2004) Chimera states for coupled oscillators. Physical Review Letters 93: 174102. doi: 10.1103/physrevlett.93.174102
49. Abrams D, Strogatz S (2006) Chimera states in a ring of nonlocally coupled oscillators. International Journal of Bifurcation and Chaos 16: 21–37. doi: 10.1142/s0218127406014551
50. Abrams D, Mirollo R, Strogatz S, Wiley D (2008) Solvable model for chimera states of coupled oscillators. Physical Review Letters 101: 84103. doi: 10.1103/physrevlett.101.084103
51. Laing C (2009) The dynamics of chimera states in heterogeneous Kuramoto networks. Physica D: Nonlinear Phenomena 238: 1569–1588. doi: 10.1016/j.physd.2009.04.012
52. Mirollo R, Strogatz S (2007) The spectrum of the partially locked state for the Kuramoto model. Journal of Nonlinear Science 17: 309–347. doi: 10.1007/s00332-006-0806-x
53. Hanhijärvi S, Garriga G, Puolamäki K (2013) Randomization techniques for graphs. URL http://research.ics.aalto.fi/publications/. Accessed 2013 Sep 9.
54. Leicht EA, Newman ME (2008) Community structure in directed networks. Physical Review Letters 100: 118703. doi: 10.1103/physrevlett.100.118703
55. Aldecoa R, Marín I (2011) Deciphering network community structure by surprise. PLoS ONE 6: e24195. doi: 10.1371/journal.pone.0024195
56. Dai Dai HH (2011) VisualConnectome: Toolbox for brain network visualization and analysis, Human Brain Mapping. URL http://code.google.com/p/visualconnectome/. Accessed 2013 Sep 9.
57. Ravasz E, Barabási A (2003) Hierarchical organization in complex networks.
Physical Review E 67: 026112. doi: 10.1103/physreve.67.026112
58. Ioannides A (2007) Dynamic functional connectivity. Current Opinion in Neurobiology 17: 161–170. doi: 10.1016/j.conb.2007.03.008
59. Tangmunarunkit H, Govindan R, Jamin S, Shenker S, Willinger W (2002) Network topology generators: Degree-based vs. structural. In: ACM SIGCOMM Computer Communication Review. ACM, volume 32, 147–159.
60. Trusina A, Maslov S, Minnhagen P, Sneppen K (2004) Hierarchy measures in complex networks. Physical Review Letters 92: 178702. doi: 10.1103/physrevlett.92.178702
61. Hucka M, Finney A, Sauro HM, Bolouri H, Doyle JC, et al. (2003) The Systems Biology Markup Language (SBML): A medium for representation and exchange of biochemical network models. Bioinformatics 19: 524–531. doi: 10.1093/bioinformatics/btg015
62. Schellenberger J, Park J, Conrad T, Palsson B (2010) BiGG: a Biochemical Genetic and Genomic knowledgebase of large scale metabolic reconstructions. BMC Bioinformatics 11: 213. doi: 10.1186/1471-2105-11-213
63. Bornstein BJ, Keating SM, Jouraku A, Hucka M (2008) LibSBML: an API library for SBML. Bioinformatics 24: 880–881. doi: 10.1093/bioinformatics/btn051
64. TFL (2013) Transport for London. URL http://www.tfl.gov.uk/. Accessed 2013 Sep 9.
65. MTA (2013) Metropolitan Transportation Authority (New York, NY, USA). URL http://www.mta.info/nyct/maps/submap.htm. Accessed 2013 Sep 9.
# Math Help - help with riccati equation

1. ## help with riccati equation

dy/dx = A(x)y^2 + B(x)y + C(x) is called a Riccati equation. Suppose that one particular solution y1(x) of this equation is known. Show that the substitution y = y1 + 1/v transforms the Riccati equation into the linear equation:

dv/dx + (B(x) + 2A(x)y1)v = -A(x)

I understand what you need to do. I know that I need to plug in the substitution for y, but I don't understand how it simplifies. I don't know how to do the intermediate steps, only the first and last. Can anyone help…???

2. Originally Posted by mathprincess24
dy/dx = A(x)y^2 + B(x)y + C(x) is called a Riccati equation. Suppose that one particular solution y1(x) of this equation is known. Show that the substitution y = y1 + 1/v transforms the Riccati equation into the linear equation:

dv/dx + (B(x) + 2A(x)y1)v = -A(x)

I understand what you need to do. I know that I need to plug in the substitution for y, but I don't understand how it simplifies. I don't know how to do the intermediate steps, only the first and last. Can anyone help…???
Show what you have done so far, please. Perhaps you are just forgetting that y1 is a solution to the original equation, so that many things will cancel.

3. I start by plugging in y = y1 + 1/v for all the y's in the original equation. Then I get:

dy/dx = A(x)(y1 + 1/v)^2 + B(x)(y1 + 1/v) + C(x)

I tried expanding the squared term and distributing, but I don't see any common things that would cancel. Am I on the right track and just not seeing it?

4. can anyone tell me if i'm on the right track?
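For reference (not part of the original thread), the cancellation works out as follows; the key step is replacing $dy_1/dx$ using the fact that $y_1$ solves the Riccati equation:

```latex
% Substitute y = y_1 + 1/v into  dy/dx = A y^2 + B y + C.
\begin{align*}
\frac{dy}{dx} &= \frac{dy_1}{dx} - \frac{1}{v^2}\frac{dv}{dx},\\
A\Big(y_1 + \tfrac{1}{v}\Big)^2 + B\Big(y_1 + \tfrac{1}{v}\Big) + C
  &= \underbrace{A y_1^2 + B y_1 + C}_{=\, dy_1/dx} + \frac{2A y_1}{v} + \frac{B}{v} + \frac{A}{v^2}.
\end{align*}
% Equate the two sides; the dy_1/dx terms cancel because y_1 is a solution:
\begin{align*}
-\frac{1}{v^2}\frac{dv}{dx} &= \frac{2A y_1}{v} + \frac{B}{v} + \frac{A}{v^2}.
\end{align*}
% Multiply through by -v^2 and rearrange:
\begin{align*}
\frac{dv}{dx} = -(B + 2A y_1)\,v - A
\quad\Longrightarrow\quad
\frac{dv}{dx} + \big(B + 2A y_1\big)v = -A.
\end{align*}
```

So the "common things that cancel" are exactly the three terms $A y_1^2 + B y_1 + C$, which together equal $dy_1/dx$.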
### If a hemisphere and a cylinder stand on equal base and have the same height, find the ratio of their volumes.

$2:3$

1. The volume of a cylinder of radius $r$ and height $h$ is $\pi r^2h$.
2. The volume of a hemisphere of radius $r$ is $\dfrac {2} {3} \pi r^3$.
3. A hemisphere of radius $r$ has height $r$, and the cylinder has the same height, so $r = h$. Then \begin{align} \dfrac { \text { Volume of hemisphere } } { \text { Volume of cylinder } } & = \dfrac { \dfrac {2} {3} \pi r^3 }{ \pi r^2h } \space [\because r = h] \\ & = \dfrac { 2 } { 3 } \end{align} Cancelling out the equal terms, the ratio of their volumes is $2:3$.
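A quick numerical check of this ratio (an illustrative script, not part of the original solution):

```python
import math

r = 2.0   # radius of the shared base (any positive value gives the same ratio)
h = r     # a hemisphere of radius r has height r, and the cylinder matches it

v_hemisphere = (2.0 / 3.0) * math.pi * r**3
v_cylinder = math.pi * r**2 * h

print(v_hemisphere / v_cylinder)  # 0.666..., i.e. the ratio 2:3
```

Because both volumes scale identically with $r$, the choice of radius does not affect the result.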
• Siddhartha Lal

Articles written in Pramana – Journal of Physics

• Transport in quantum wires

With a brief introduction to one-dimensional channels and conductance quantization in mesoscopic systems, we discuss some recent experimental puzzles in these systems, which include the reduction of quantized conductances and an interesting odd-even effect in the presence of an in-plane magnetic field. We then discuss a recent non-homogeneous Luttinger liquid model proposed by us, which addresses and gives an explanation for the reduced conductances and the odd-even effect. We end with a brief summary and a discussion of future projects.
How exactly, or what's the process, rather, of energy changing into matter?

$E=mc^2$: this is the equation by Einstein showing that energy can change into mass. This would have happened at the Big Bang, I assume, when electrons and protons were made to create hydrogen and some helium.

– Kyle Oman Jan 23 '13 at 16:22

The law of conservation of energy states that the amount of energy doesn't change. This is basically a definition of energy. That is, if, according to your current definition of energy, some energy appears or disappears, that means your definition is not complete enough. Einstein's equation $E=mc^2$ means that mass is a form of energy. It's not energy changing into mass; it's a given form of energy changing into another form, which is mass. This can be observed when two particles collide at high velocity: particles are created during the collision whose mass includes the former kinetic energy.

but wasn't the big bang made of just energy? because I know that atoms came from stars except for hydrogen, it was made when the universe expanded and got cool enough for protons and electrons to combine... by the way, im only 14 so im not as smart.... >.< – michael Jan 23 '13 at 23:53

The equation $E=mc^2$ describes the mass–energy equivalence in relativity. It indicates that mass and energy are indeed the same thing: they are convertible. One way to convert matter into usable energy is to annihilate matter with antimatter. The energy released is proportional to the mass, as the equation $E=mc^2$ shows, which makes this very efficient. More common, but less efficient, ways to convert matter into energy are nuclear fusion and nuclear fission. Nuclear fusion is a type of nuclear reaction: the process of colliding two or more atomic nuclei at very high speed and joining them to form a new type of atomic nucleus. The first large-scale man-made nuclear fusion was carried out on November 1, 1952, in the Ivy Mike hydrogen bomb test.
The Sun generates its energy through a fusion process called the proton–proton chain reaction, which converts hydrogen to helium at very high temperature. Nuclear fission is the process in which the nucleus of an atom is split into smaller parts; it can occur either as a nuclear reaction or as a radioactive decay process. It is commonly used in nuclear power plants to produce nuclear power. You can read more about mass–energy equivalence here if you like. -
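To get a feel for the scale of $E=mc^2$, here is a small sketch (in Python, with an illustrative mass of one gram chosen just for the example) that computes the energy released if that mass were fully converted, e.g. by matter–antimatter annihilation:

```python
# E = m * c^2: energy released by total conversion of mass into energy.
c = 299_792_458.0          # speed of light in m/s (exact, by SI definition)
m = 1e-3                   # illustrative mass: 1 gram, expressed in kg

E = m * c**2               # energy in joules
print(f"E = {E:.3e} J")    # prints: E = 8.988e+13 J

# For scale: 1 ton of TNT is defined as 4.184e9 J, so one gram of
# annihilated matter is equivalent to roughly 21,000 tons of TNT.
tnt_tons = E / 4.184e9
print(f"about {tnt_tons:,.0f} tons of TNT equivalent")
```

This is why annihilation is so efficient compared with fusion or fission, which convert only a small fraction of the reactants' rest mass into energy.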
https://par.nsf.gov/search/author:%22Gligorov,%20Vladimir%20V.%22
1. Abstract: This document presents the physics case and ancillary studies for the proposed CODEX-b long-lived particle (LLP) detector, as well as for a smaller proof-of-concept demonstrator detector, CODEX-$\beta$, to be operated during Run 3 of the LHC. Our development of the CODEX-b physics case synthesizes 'top-down' and 'bottom-up' theoretical approaches, providing a detailed survey of both minimal and complete models featuring LLPs. Several of these models have not been studied previously, and for some others we amend studies from previous literature, in particular for gluon- and fermion-coupled axion-like particles. We moreover present updated simulations of expected backgrounds…