http://www.machinedlearnings.com/2011/09/online-label-extraction-from.html
## Friday, September 23, 2011

### Online Label Extraction from Crowdsourced Data

So far I've been using batch EM to optimize the various generative models I've developed to process crowdsourced data (nominal, ordinal, and multi-label). This is despite my fondness for online techniques; I had to crawl before I could walk, and my datasets were fairly modest in size. The business is happy with the results from Mechanical Turk, however, and wants to scale up from tasks involving multiples of $10^4$ items to tasks involving multiples of $10^6$ items. Although that would still fit in memory on my laptop, it seemed a good excuse to develop an online variant of the algorithm.

My previous batch EM approaches can be considered as maximizing the auxiliary function $F (\alpha, \beta, \gamma, q) = E_{Z \sim q}[\log L (D | \alpha, \beta, \gamma, Z)] + \log P (\alpha, \beta, \gamma) + E_{Z \sim q}[\log P (Z)] + H (q),$ where $\alpha$ are worker-indexed parameters, $\beta$ are item-indexed parameters, $\gamma$ are global parameters, $q$ is the joint distribution over all unobserved labels, $Z$ is the set of all unobserved labels, $D$ is the data set of item-worker-label triples, $\log L (D | \alpha, \beta, \gamma, Z)$ is the log-likelihood of the data set, $P (\alpha, \beta, \gamma)$ is the prior distribution over the generative model parameters, $P (Z)$ is the prior unobserved label distribution, and $H (q)$ is the entropy of the unobserved label distribution. The unobserved label distribution is assumed to factor over items, $q (Z) = \prod_i q_i (Z_i)$, as is the prior distribution, $P (Z) = \prod_i P_i (Z_i)$. Equivalently, only a constrained maximum of the auxiliary function is found, subject to this factorization.
The data likelihood is assumed independent conditioned upon $(\alpha, \beta, Z)$, leading to $\begin{split} F (\alpha, \beta, \gamma, q) &= \\ &\sum_i E_{Z_i \sim q_i} [ \log L (D_i | \alpha, \beta_i, \gamma, Z_i)] + \log P (\alpha, \beta, \gamma) \\ &+ \sum_i E_{Z_i \sim q_i} [ \log P_i (Z_i)] + \sum_i H (q_i), \end{split}$ where $i$ indexes items and $D_i$ is the set of data associated with item $i$. Further assuming the prior distribution is of the form $P (\alpha, \beta, \gamma) = P (\alpha, \gamma) \prod_i P (\beta_i)$ and rearranging yields \begin{aligned} F (\alpha, \beta, \gamma, q) &= \sum_i F_i (\alpha, \beta_i, \gamma, q_i), \\ F_i (\alpha, \beta_i, \gamma, q_i) &= E_{Z_i \sim q_i} [ \log L (D_i | \alpha, \beta_i, \gamma, Z_i)] + \frac{1}{|I|} \log P (\alpha, \gamma) + \log P (\beta_i) \\ &\quad + E_{Z_i \sim q_i} [ \log P_i (Z_i)] + H (q_i), \end{aligned} where $|I|$ is the total number of items. Now the objective function looks like a sum of terms in which $\beta_i$ and $q_i$ each appear only once. This indicates that, if the data were streamed in blocks corresponding to the same item and the optimal $\alpha$ and $\gamma$ were already known, each $\beta_i$ and $q_i$ could be individually maximized and discarded. Of course, the optimal $\alpha$ and $\gamma$ are not known, but hopefully over time, as more data is encountered, the estimates get increasingly good. That suggests the following procedure:

1. Receive a block $D_i$ of item-worker-label triples corresponding to a single item.
2. Maximize $F_i (\alpha, \beta_i, \gamma, q_i)$ with respect to $\beta_i$ and $q_i$.
   - Basically, I run EM on this block of data with $\alpha$ and $\gamma$ held fixed.
3. Set $\alpha \leftarrow \alpha + \eta_t \nabla_{\alpha} F_i\bigr|_{\alpha, \beta^*_i, \gamma, q^*_i}$ and $\gamma \leftarrow \gamma + \eta_t \nabla_{\gamma} F_i\bigr|_{\alpha, \beta^*_i, \gamma, q^*_i}$.
   - $\eta_t$ is a learning rate which decays over time, e.g., $\eta_t = \eta_0 (\tau_0 + t)^{-\rho}$.
   - $\eta_0 \geq 0$, $\tau_0 \geq 0$, and $\rho \geq 0$ are tuning parameters for the learning algorithm.
   - Effectively, $|I|$ is also a tuning parameter which sets the relative importance of the prior.
4. If desired (e.g., "inference mode"), output $\beta^*_i$ and $q^*_i$.
5. Discard $\beta^*_i$ and $q^*_i$.

This has very good scalability with respect to the number of items, since no per-item state is maintained across input blocks. It does require that all the labels for a particular item are aggregated; however, even in a true online crowdsourcing scenario this does not present a scalability issue. In practice, items are individually submitted programmatically for crowdsourced analysis, and the number of redundant assessments is typically small (e.g., 5), so a receiving system which buffered crowdsourced data until the entire block of item labels was available would have very modest space requirements. In my case I'm actually applying this online algorithm to an offline, previously collected data set, so I can easily arrange for all the labels corresponding to a particular item to be together.

Scalability with respect to the number of workers is a potential issue. This is because $\alpha$ is maintained as state, and it is indexed by worker (e.g., in nominallabelextract, $\alpha_w$ is the confusion matrix for worker $w$). To overcome this I use the hashing trick: I have a fixed finite number of $\alpha$ parameters, and I hash the worker id to get the $\alpha$ for that worker. A hash collision means I treat two (or more) workers as equivalent, but it allows me to bound the space usage of the algorithm up front. In practice, hashing tricks like this always seem to work out fabulously. In this particular context, in the limit of a very large number of workers I will model every worker with the population confusion matrix. This is a graceful way to degrade as the sample complexity overwhelms the (fixed) model complexity.
(I don't actually anticipate having a large number of workers; the way crowdsourcing seems to go is, one does some small tasks to identify high-quality workers, and then runs a larger version of the task restricted to those workers.)

Here's an example run involving 40 passes over a small test dataset:

```
% time ~/src/nincompoop/nominalonlineextract/src/nominalonlineextract --initial_t 10000 --n_items 9859 --n_labels 5 --priorz 1,1,1,1,1 --model flass --data <(./multicat 40 =(sort -R ethnicity4.noe.in)) --eta 1 --rho 0.5
initial_t = 10000
eta = 1.000000
rho = 0.500000
n_items = 9859
n_labels = 5
n_workers = 65536
symmetric = false
test_only = false
prediction file = (no output)
priorz = 0.199987,0.199987,0.199987,0.199987,0.199987
cumul      since      example  current  current  current
avg q      last       counter  label    predict  ratings
-1.183628  -1.183628        2       -1        0        5
-1.125888  -1.092893        5       -1        0        5
-1.145204  -1.162910       10       -1        0        5
-1.081261  -1.009520       19        0        0        5
-1.124367  -1.173712       36       -1        3        3
-1.083097  -1.039129       69       -1        0        4
-1.037481  -0.988452      134       -1        1        2
-0.929367  -0.820539      263       -1        1        5
-0.820125  -0.709057      520       -1        4        5
-0.738361  -0.653392     1033       -1        1        4
-0.658806  -0.579719     2058       -1        1        5
-0.610473  -0.562028     4107       -1        4        5
-0.566530  -0.522431     8204       -1        0        3
-0.522385  -0.478110    16397       -1        2        4
-0.487094  -0.451771    32782       -1        0        3
-0.460216  -0.433323    65551       -1        4        5
-0.441042  -0.421860   131088       -1        2        5
-0.427205  -0.413365   262161       -1        0        5
-0.420944  -0.408528   394360       -1        1       -1
~/src/nincompoop/nominalonlineextract/src/nominalonlineextract --initial_t  85.77s user 0.22s system 99% cpu 1:26.41 total
```

If that output format looks familiar, it's because I've jacked vowpal wabbit's output style (again). The first column is the progressively validated auxiliary function, i.e., the (averaged over items) $F_i$ function evaluated prior to updating the model parameters ($\alpha$ and $\gamma$). It is akin to a log-likelihood, and if everything is working well it should get bigger as more data is consumed.
nominallabelextract, the implementation of the batch EM analog of the above, converges in about 90 seconds on this dataset, so the run times are a dead heat. For larger datasets there is less need to do so many passes over the data, so I would expect the online version to become increasingly advantageous. Furthermore, I've been improving the performance of nominallabelextract for several months, whereas I just wrote nominalonlineextract, so there might be additional speed improvements in the latter. Nonetheless, it appears that for datasets that fit into memory, batch EM is competitive. nominalonlineextract is available from the nincompoop code repository on Google Code. I'll be putting together online versions of the other algorithms in the near term (the basic approach holds for all of them, but there are different tricks for each specific likelihood).
https://www.arxiv-vanity.com/papers/0711.2216/
# Comments on TPC and RPC calibrations reported by the HARP Collaboration

V Ammosov, I Boyko, G Chelkov, D Dedovitch, F Dydak, A Elagin (now at Texas A&M University, College Station, USA), V Gapienko, M Gostkin, A Guskov, Z Kroumchtein, V Koreshev, Yu Nefedov, K Nikolaev, A Semak, Yu Sviridov, E Usenko, J Wotschack (corresponding author; E-mail: Joerg.W), V Zaets and A Zhemchugov

CERN, Geneva, Switzerland; Joint Institute for Nuclear Research, Dubna, Russia; Institute for High Energy Physics, Protvino, Russia

###### Abstract

The HARP Collaboration recently published calibrations of their TPC and RPC detectors, and differential cross-sections of large-angle pion production in proton–nucleus collisions. We argue that these calibrations are biased and that cross-sections based on them should not be trusted.

Keywords: gaseous detectors, time projection chambers, TPC, timing detectors, resistive plate chambers, RPC

The larger part of the HARP Collaboration (hereafter referred to as 'HARP' or 'authors') recently published a paper entitled 'Momentum scale in the HARP TPC' [1]. Therein, they claim to have calibrated the momentum scale of the HARP TPC with a precision of 3.5%. They also published a paper entitled 'The time response of glass resistive plate chambers to heavily ionizing particles' [2]. Therein, they claim a 500 ps time advance of protons with respect to minimum-ionizing pions in the HARP multi-gap timing RPCs [3]–[6]. Further, they published differential cross-sections of pion production on Ta [7]; C, Cu and Sn [8]; and Be, Al and Pb [9] targets. We, also members of the HARP Collaboration and referred to as 'HARP-CDP' (CDP stands for CERN–Dubna–Protvino), have not signed the above-cited papers because we are unable to take responsibility for the reported calibrations and physics results.
We shall argue that there is no reason to invoke a new detector physics effect in multi-gap timing resistive plate chambers (RPCs), yet there are good reasons why HARP's time projection chamber (TPC) and RPC calibrations should not be trusted, and with them the cross-sections of large-angle pion production on nuclear targets that are based on them.

## 2 HARP's biased pT scale and bad pT resolution

The performance of the HARP TPC was affected by dynamic track distortions that were primarily caused by the build-up of an Ar ion cloud during the 400 ms long spill of the CERN Proton Synchrotron. This ion cloud emanates from the TPC's sense wires and drifts across its active volume toward the high-voltage membrane. (The cause of this hardware problem, the physics of the track distortions, their quantitative assessment, and their corrections are described in Refs. [10]–[13].) These dynamic track distortions increase approximately linearly with time in the spill. Their size typically reaches 15 mm, at small radius, at the end of the spill. That exceeds the TPC's design resolution of 500 μm by a factor of 30 and therefore requires very precise track distortion corrections.

The authors published two quite different analysis concepts to deal with dynamic track distortions. The first concept is to use only the first 100 events out of typically 300 events in the whole accelerator spill. From the 'physics benchmark' of proton–proton elastic scattering they claim that dynamic distortions do not affect the quality of the first 100 events, and hence that dynamic track distortions need not be corrected at all. The second concept is a correction of the distortions based on a specific radial dependence of the charge density of the Ar ion cloud.

In the HARP TPC, with positive magnetic field polarity, dynamic distortions shift cluster positions such that positive tracks are biased toward higher pT (conversely, negative tracks are biased toward smaller pT).
The authors chose, in principle correctly, to fit TPC tracks with the constraint of the beam point, because the increased lever arm permits an approximate doubling of the precision. While the beam point remains unaffected, the cluster positions get shifted by dynamic distortions. Assigning a sufficiently small position error to the beam point renders its weight (the inverse error squared) in the track fit so large that positive tracks get biased toward lower pT, i.e., the trend of the bias is even reversed with respect to the fit without the beam point. This artificially enforced decrease of the pT of positive tracks with the time in the spill is demonstrated in the right panel of figure 15 in Ref. [7]. This makes clear that the weight assigned in the track fit to the beam point is of paramount importance. Despite this importance, the weight of the beam point has never been quantitatively stated by HARP. Because the bias has different size and opposite sign depending on whether the beam point has been used in the fit or not, we recall that the cross-section results reported by HARP use the fit with the beam point, but not all of their 'physics benchmarks' do.

There is no claim that HARP's pT scale is wrong per se. Rather, we claim that HARP's initially (more or less) correct pT scale develops a bias that increases about linearly with the time in the spill. This bias is a direct consequence of the development of dynamic track distortions with time in the spill. It means that the percentage of the claimed bias is not constant, and that the claimed bias is a priori different from data set to data set, since dynamic distortions are different in different data sets. Therefore, conclusions on a bias in one data set (e.g., elastic scattering of 3 GeV/c protons on protons at rest) cannot be applied quantitatively to other data sets.

In their first analysis concept (that underlying the cross-sections published in Refs.
[7]–[9]), the authors fit the distorted track together with the undistorted beam point. The beam point is assigned a weight 'similar to a TPC hit' [7], which implies that the beam point's error is constant and not what it must be: the convolution of the errors of two extrapolations to the interaction vertex, that of the beam particle's trajectory and that of the secondary track's trajectory. Primarily because of the momentum-dependence of multiple scattering, the correct error of the beam point varies considerably for different beam momenta and from track to track. The authors fit a circle to distorted TPC cluster positions that deviate in a radius-dependent way by up to 5 mm from their nominal positions, and to the undistorted beam point that carries a wrong weight in the fit. Under such circumstances, the fit of pT cannot be unbiased.

How large is the bias in this concept? The authors give the answer themselves in the upper left panel of figure 17 in Ref. [7], where they show the measurement of the specific ionization dE/dx of protons as a function of momentum. One reads off that an 800 MeV/c proton is measured with a momentum of 650 MeV/c. From this 20% pT scale error for positive particles at this momentum, one infers a pT scale error of 20% in the opposite direction for negative particles. Expressed as a shift of q/pT (where q denotes the particle's charge), the bias is sizeable for positive magnet polarity. The effect of this bias is well visible in a comparison of HARP's spectrum with the one from our group; see figure 10 in Ref. [14].

In their second analysis concept, the authors apply a correction of dynamic track distortions and use data from the whole spill. The correction stems from the electric field of a charge density of Ar ions that falls with the radial distance from the beam [15]. Is such a radial distribution realistic? The answer is no.
The radial charge distribution depends on beam energy, beam polarity, beam intensity, beam scraping, target type, photon conversion in materials, and spiralling low-momentum electrons. Therefore, the correction algorithm cannot be expected to work with adequate precision. This expectation is confirmed by the difference between the data shown in figure 14 in Ref. [1] and the same data analysed by our group; see figure 1, which shows the spectra of secondary particles from the interactions of protons in a 5% Be target. The difference of the spectra is again consistent with a HARP bias with respect to our results from the same data.

The authors claim [3, 5, 7] a resolution of σ(pT)/pT = (0.25 ± 0.01) pT + (0.04 ± 0.005) (GeV/c)−1 or, approximately, 0.30 (GeV/c)−1 times pT. This claimed resolution refers to fits with the beam point included (in fits without the beam point the resolution is around 0.60). The information given by the authors on the experimental resolution for fits with the beam point included (on which all reported cross-sections are based) is very scarce. It consists of a mere three points in figure 9 in Ref. [7]. Although the resolution one reads off there represents a convolution with the resolution of the reference measurement, it is hardly compatible with the claimed 0.30. Confirmation that the pT resolution is much worse than claimed is given in Refs. [5] and [6]. Therein, the RPC time-of-flight resolution of pions that is equivalent to the pT resolution in the TPC is quoted as 260 ps. As succinctly proven in Refs.
[16] and [17], a time-of-flight resolution of 260 ps for such pions is equivalent to a pT resolution of 46%, which is worse by a stunning factor of 4.6 than the claimed resolution. (This result is obtained when taking literally two more claims by HARP: a beam-particle timing resolution of 70 ps and an RPC timing resolution of 141 ps; it is more likely, however, that the overall discrepancy of 4.6 stems from all three sources and not only from the bad pT resolution.) Figure 1 also proves that HARP's pT resolution is much worse than claimed. The depth of the dip reflects directly the pT resolution, and HARP's dip is considerably more shallow than ours. The difference between HARP's and our spectra is consistent with a HARP pT bias and a HARP pT resolution as stated above. The discrepancy between the spectra means that cross-sections are different by factors of up to two.

The authors claim that the results from the second concept, correcting dynamic track distortions and using the data from the full spill, are in 'excellent agreement' [1] with results from the first concept, not correcting for dynamic track distortions and using the first 30% of the spill only. We agree that there is no difference in the results from these two concepts. Both are affected by a comparable bias and a comparably bad resolution. That the biases in HARP's two analysis concepts happen to have the same size and sign is accidental.

## 3 HARP's '500 ps effect'

The authors reported in Ref. [3] a 500 ps advance of the RPC timing signal of protons with respect to that of pions. They confirmed their discovery in three subsequent publications [4]–[6], and most recently in Ref. [2].
In the latter paper, the authors acknowledge that '…it has been pointed out that a similar behaviour can be obtained when a systematic shift in the measurement of momentum is present', but conclude that 'Momentum measurement biases in the TPC, if any, have been eliminated as possible cause of the effect.' In stark contrast, our group's interpretation of the authors' result is that their pT scale is systematically biased, which leads to the prediction of a longer time of flight for non-relativistic protons (whereas the time of flight of relativistic pions is unchanged). In turn, if the proton momentum is considered correct, the RPC timing of protons appears to be advanced.

The relevant experimental variable is the proton time of flight as measured by the RPCs minus the time of flight calculated from the proton momentum. Figure 2 shows HARP's respective data, taken from their most recent papers [1] (17 Sep 2007) and [2] (24 Sep 2007), data which are based on the pT measurement in the TPC and hence affected by a bias in the TPC pT scale. (All data shown in this section refer to RPC padring 3, i.e., to tracks with polar angles 55–80 degrees.) Also shown are data from the calculated momentum of recoil protons in elastic proton–proton scattering, published by HARP in Ref. [2], data that are not affected by a bias in the TPC measurement. All three data sets should show the same time advance, but they disagree seriously with each other. This hardly supports the notion of a novel detector physics effect.

Figure 3 shows the comparison of HARP and HARP–CDP data on the timing difference of recoil protons from elastic proton–proton scattering. There is good agreement between the data, which confirms that both HARP and HARP–CDP correctly calibrated the RPCs with relativistic pions.
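The kinematics behind this interpretation are straightforward to check. The sketch below (the 1 m flight path and the 0.1 GeV/c bias are illustrative numbers of mine, not values from the paper) shows that a modest momentum bias shifts the predicted time of flight of a non-relativistic proton by hundreds of picoseconds per metre, while the prediction for a relativistic pion of the same momentum barely moves:

```python
import math

M_P, M_PI = 0.938272, 0.139570   # proton and pion masses in GeV/c^2
C = 0.299792458                  # speed of light in m/ns

def tof_ns(p, m, L=1.0):
    """Time of flight in ns over path L (m) for momentum p (GeV/c), mass m (GeV/c^2)."""
    E = math.sqrt(p * p + m * m)   # total energy
    beta = p / E                   # velocity in units of c
    return L / (beta * C)

# Predicted TOF shift if a 0.5 GeV/c momentum is misreconstructed as 0.6 GeV/c:
dt_proton = tof_ns(0.5, M_P) - tof_ns(0.6, M_P)    # ~0.9 ns per metre of flight path
dt_pion = tof_ns(0.5, M_PI) - tof_ns(0.6, M_PI)    # only tens of ps
```

So a momentum bias of this order is more than enough to fake a 500 ps "advance" of protons relative to pions, which is the HARP-CDP argument in a nutshell.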
The data from elastic proton–proton scattering are consistent with the theoretically expected time advance (for the calculation of the theoretically expected time advance, we refer to our pertinent discussion in Ref. [18]). Figure 4 shows the comparison of HARP and HARP–CDP data for the case that the pT reconstruction in the TPC is used to determine the time of flight of the recoil proton. While the HARP–CDP data confirm the results from proton–proton elastic scattering, the HARP data are inconsistent with these results. Figure 5 shows that HARP's time advance of protons (black points; data from Ref. [2]) is satisfactorily explained by a simulation of the time advance that results from a pT bias. There is no need, and no room, for a novel detector physics effect.

## 4 HARP's 'physics benchmark'

The authors make extensive use of elastic scattering of 3 and 5 GeV/c protons and pions on protons at rest to support the claim that their pT scale is correct within 3.5%. In the following we show that their arguments are not conclusive.

### 4.1 Fits of recoil protons with and without beam point

In stark contrast with our claim of a bias of one sign in fits with the beam point, and of the opposite sign in fits without the beam point, the authors write 'The ratio of the unconstrained and constrained fits was checked to be unity with a high precision' and show figure 4 in Ref. [1] in support of this claim. For its importance, this figure is reproduced in the left panel of our figure 6. One would expect to see a Gaussian distribution in the authors' variable (the relative difference between the momentum from a fit without the beam point and the momentum from a fit with the beam point). Since the claimed resolution with the beam point included is 0.30, and without the beam point about 0.60, the Gaussian should have a σ of about 0.50. Their plot shows something very different, though: a narrow spike centred at zero, on top of a broad distribution. The authors interpret this as evidence that the two fits give the same result.
The spike at zero is an artefact which stems from the assignment of a wrong error to the position of clusters: the authors multiply the error of each TPC cluster with an angle-dependent factor (a conceptual mistake of their algorithm, as discussed in Ref. [19]) and hence produce nearly infinite weights for clusters close to the angles 45, 135, 225 and 315 degrees. In comparison with these wrong large weights, the weight of the beam point becomes negligible, which explains why the fits of tracks close to the singular angles yield the same pT with and without the beam point. The expected Gaussian distribution with σ of about 0.50 is indeed visible in their plot: it is the broad distribution below the artificial spike. This is evident from the right panel in figure 6, which shows a simulation of how the Gaussian becomes deformed by this term. We conclude that the authors did not prove that the fits with and without beam point give the same result. Rather, they proved that their track fit is seriously compromised.

### 4.2 Missing mass from elastic scattering

The authors write 'A fit to the distribution [of missing mass squared] provides … in agreement with the PDG value of 0.88035 (GeV/c²)² … a momentum scale bias of 15% would produce a displacement of about 0.085 (GeV/c²)² … As a result, we can conclude that the momentum scale bias (if any) is significantly less than 15%.' For its importance, their supporting figure 2 in Ref. [1] is reproduced in our figure 7. The authors state that the fit of the recoil protons included the beam point. But they do not give important information: which fraction of the spill was used (we assume that the first 100 events of the spill were used), and how the significant energy loss of protons in materials before the TPC volume was handled (we assume that the proton energy loss was corrected as a function of the proton momentum measured in the TPC). Since the beam point was used, the bias will be positive.
For the typical pT of the recoil proton of 0.45 GeV/c, we estimate a bias from the strength of the dynamic distortions in the respective data taking; its difference from the +15% considered in figure 7 is important, since the missing mass squared is not Gaussian-distributed. Figure 8 shows simulations of the missing mass squared in the elastic scattering of 3 GeV/c protons on protons at rest. The left panel shows the difference, for a proton recoil angle of 69 degrees, between a distribution with a resolution of 0.55 and no bias, and a distribution with the same resolution and a bias of +0.20. The missing mass squared distribution is less sensitive to a bias than purported by the authors. The right panel shows, for a resolution of 0.55 and a bias of +0.20, the differences between the proton recoil angles of 65, 69 and 73 degrees, where the contributions from the three angles are weighted with their cross-sections. The sum of the three contributions may look 'Gaussian', but the central value of this 'Gaussian' cannot be taken as the physical missing mass squared.

The rather erratic nature of results from this analysis is corroborated by the fit results of the missing-mass-squared distribution published by the authors in figure 15 in Ref. [3] and reported in Ref. [20]. That result is 15.6 standard deviations away from the PDG value. We conclude that the authors did not prove that their pT from fits with the beam point included is unbiased, certainly not with the precision claimed by them. Rather, they proved that their analysis of missing-mass-squared distributions is too simplistic. For comparison, we show in figure 9 our own results for the missing mass squared in the elastic scattering of 3 GeV/c protons on protons at rest, and compare them with a GEANT simulation. We show the data for two bins in the proton recoil angle, with a view to highlighting the differences both in shape and in rate.
### 4.3 pT scale from elastic scattering

From the comparison of the momentum of recoil protons from the scattering of 3 and 5 GeV/c protons and pions on protons at rest, as measured in the TPC and as predicted from the measurement of the scattering angle of the forward-going beam particle in the forward spectrometer, the authors conclude that '…a 10% bias [of the momentum scale] is excluded at the 18 level (statistics only)…' In this comparison, a fit without the beam point was used. This is important: (i) the resolution will be about twice as bad as in fits with the beam point; and (ii) the expected bias from dynamic distortions will have a different magnitude and the opposite sign compared to the bias from fits with the beam point. Since all data published by HARP are based on fits with the beam point, evidence on a bias from dynamic distortions in fits without the beam point is irrelevant; furthermore, conclusions from the dynamic distortions in one data set cannot be applied to another data set. We conclude that the authors have not proven that the pT scale of fits with the beam point is unbiased, and we could stop our argumentation here. Nevertheless, we follow the argumentation of the authors a bit further.

We note that the authors chose to use only the first 50 events in the spill, which reduces the expected bias from dynamic distortions by a factor of about two compared to the use of the first 100 events. We note that for reasons of acceptance, the use of the scattering angle of the forward-going beam particle restricts the recoil protons to the two horizontal sectors 2 and 5 of the TPC. These are the two sectors which our group decided not to use for data analysis, because of the much stronger electronics cross-talk and the many more bad electronics channels in comparison with the four other TPC sectors, and because of the absence of cross-calibration of performance with cosmic-muon tracks.
Still, one is puzzled why HARP find good agreement between the measured and the predicted momentum of the recoil proton. We know from our own analysis of the same data that they are affected by fairly strong dynamic distortions, albeit smaller in amplitude than in the 5% Be target data shown in Section 2, and with a steeper radial decrease of the Ar ion cloud in the TPC. We have shown in Ref. [21] that at the start of the spill the so-called 'margaritka' effect is dominant, with a sign that is opposite to the sign of the so-called 'stalactite' effect that becomes by far dominant later in the spill. Near the start of the spill, there is a partial cancellation between the two effects (the cancellation is not complete since the radial distributions of these track distortions are different). It is this accidental cancellation that has been exploited by HARP to claim that their analysis is not affected by a bias in the pT scale.

We show in figure 10, with the shaded histogram, the absence of any momentum bias, and the momentum resolution, obtained by our group in the elastic scattering of 3 GeV/c pions and protons on protons at rest. Our resolution, from fits with the beam point included, is well consistent with what is expected from our TPC calibration work [21]. It is unclear why the authors avoid proving their claim of a resolution of 0.30 (GeV/c) by showing their analogous distribution. Rather, they argue their case with the much worse resolution from fits without the beam point (although the authors' missing-mass analysis is based on fits with the beam point). For comparison, their data (copied from the middle panel of figure 6 in Ref. [1]) are shown as the open histogram in figure 10. Superimposed on their data is a Gaussian fit. The authors' resolution is worse even than the 0.60 (GeV/c) expected for fits without the beam point. This is consistent with the evidence shown in Section 2 that their pT resolution is much worse than 0.30 (GeV/c).
## 5 Concluding commentary

We presented evidence of serious defects in the large-angle data analysis of the HARP Collaboration: (i) the scale is systematically biased by  (GeV/c); (ii) the resolution is worse by a factor of two than claimed; and (iii) the discovery of the ‘500 ps effect’ in the HARP multi-gap RPCs is false. In defiance of explicit and repeated criticism of their work at various levels, including published ‘Comments’ [22, 23], HARP keep insisting on the validity of their work [4, 6]. Yet HARP have been unable to disprove any of the critical arguments against their results. Their arguments in their defence confirm, rather than disprove, our claims of serious defects in their large-angle data analysis. In this unusual and regrettable situation, we warn the community that cross-sections based on the TPC and RPC calibrations reported by HARP are wrong by factors of up to two.
https://www.nag.com/numeric/nl/nagdoc_latest/flhtml/d05/d05bdf.html
# NAG FL Interface: d05bdf (abel2_weak)

## 1Purpose

d05bdf computes the solution of a weakly singular nonlinear convolution Volterra–Abel integral equation of the second kind using a fractional Backward Differentiation Formulae (BDF) method.

## 2Specification

Fortran Interface

Subroutine d05bdf (ck, cf, cg, initwt, iorder, tlim, tolnl, nmesh, yn, work, lwk, nct, ifail)

Integer, Intent (In) :: iorder, nmesh, lwk
Integer, Intent (Inout) :: ifail
Integer, Intent (Out) :: nct(nmesh/32+1)
Real (Kind=nag_wp), External :: ck, cf, cg
Real (Kind=nag_wp), Intent (In) :: tlim, tolnl
Real (Kind=nag_wp), Intent (Inout) :: work(lwk)
Real (Kind=nag_wp), Intent (Out) :: yn(nmesh)
Character (1), Intent (In) :: initwt

C Header Interface

#include <nag.h>

void d05bdf_ (double (NAG_CALL *ck)(const double *t), double (NAG_CALL *cf)(const double *t), double (NAG_CALL *cg)(const double *s, const double *y), const char *initwt, const Integer *iorder, const double *tlim, const double *tolnl, const Integer *nmesh, double yn[], double work[], const Integer *lwk, Integer nct[], Integer *ifail, const Charlen length_initwt)

The routine may be called by the names d05bdf or nagf_inteq_abel2_weak.

## 3Description

d05bdf computes the numerical solution of the weakly singular convolution Volterra–Abel integral equation of the second kind

$y(t) = f(t) + \frac{1}{\sqrt{\pi}} \int_0^t \frac{k(t-s)}{\sqrt{t-s}} \, g(s, y(s)) \, ds, \quad 0 \le t \le T. \tag{1}$

Note the constant $\frac{1}{\sqrt{\pi }}$ in (1). It is assumed that the functions involved in (1) are sufficiently smooth. The routine uses a fractional BDF linear multi-step method to generate a family of quadrature rules (see d05byf). The BDF methods available in d05bdf are of orders $4$, $5$ and $6$ ($=p$, say). For a description of the theoretical and practical background to these methods we refer to Lubich (1985), and to Baker and Derakhshan (1987) and Hairer et al. (1988), respectively.
The algorithm is based on computing the solution $y\left(t\right)$ in a step-by-step fashion on a mesh of equispaced points. The size of the mesh is given by $T/\left(N-1\right)$, $N$ being the number of points at which the solution is sought. These methods require $2p-1$ (including $y\left(0\right)$) starting values which are evaluated internally. The computation of the lag term arising from the discretization of (1) is performed by fast Fourier transform (FFT) techniques when $N>32+2p-1$, and directly otherwise. The routine does not provide an error estimate and you are advised to check the behaviour of the solution with a different value of $N$. An option is provided which avoids the re-evaluation of the fractional weights when d05bdf is to be called several times (with the same value of $N$) within the same program unit with different functions. ## 4References Baker C T H and Derakhshan M S (1987) FFT techniques in the numerical solution of convolution equations J. Comput. Appl. Math. 20 5–24 Hairer E, Lubich Ch and Schlichte M (1988) Fast numerical solution of weakly singular Volterra integral equations J. Comput. Appl. Math. 23 87–98 Lubich Ch (1985) Fractional linear multistep methods for Abel–Volterra integral equations of the second kind Math. Comput. 45 463–469 ## 5Arguments 1: $\mathbf{ck}$real (Kind=nag_wp) Function, supplied by the user. External Procedure ck must evaluate the kernel $k\left(t\right)$ of the integral equation (1). The specification of ck is: Fortran Interface Function ck ( t) Real (Kind=nag_wp) :: ck Real (Kind=nag_wp), Intent (In) :: t double ck (const double *t) 1: $\mathbf{t}$Real (Kind=nag_wp) Input On entry: $t$, the value of the independent variable. ck must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which d05bdf is called. Arguments denoted as Input must not be changed by this procedure. 
Note: ck should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by d05bdf. If your code inadvertently does return any NaNs or infinities, d05bdf is likely to produce unexpected results. 2: $\mathbf{cf}$real (Kind=nag_wp) Function, supplied by the user. External Procedure cf must evaluate the function $f\left(t\right)$ in (1). The specification of cf is: Fortran Interface Function cf ( t) Real (Kind=nag_wp) :: cf Real (Kind=nag_wp), Intent (In) :: t double cf (const double *t) 1: $\mathbf{t}$Real (Kind=nag_wp) Input On entry: $t$, the value of the independent variable. cf must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which d05bdf is called. Arguments denoted as Input must not be changed by this procedure. Note: cf should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by d05bdf. If your code inadvertently does return any NaNs or infinities, d05bdf is likely to produce unexpected results. 3: $\mathbf{cg}$real (Kind=nag_wp) Function, supplied by the user. External Procedure cg must evaluate the function $g\left(s,y\left(s\right)\right)$ in (1). The specification of cg is: Fortran Interface Function cg ( s, y) Real (Kind=nag_wp) :: cg Real (Kind=nag_wp), Intent (In) :: s, y double cg (const double *s, const double *y) 1: $\mathbf{s}$Real (Kind=nag_wp) Input On entry: $s$, the value of the independent variable. 2: $\mathbf{y}$Real (Kind=nag_wp) Input On entry: the value of the solution $y$ at the point s. cg must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which d05bdf is called. Arguments denoted as Input must not be changed by this procedure. Note: cg should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by d05bdf. If your code inadvertently does return any NaNs or infinities, d05bdf is likely to produce unexpected results. 
4: $\mathbf{initwt}$Character(1) Input On entry: if the fractional weights required by the method need to be calculated by the routine then set ${\mathbf{initwt}}=\text{'I'}$ (Initial call). If ${\mathbf{initwt}}=\text{'S'}$ (Subsequent call), the routine assumes the fractional weights have been computed on a previous call and are stored in work. Constraint: ${\mathbf{initwt}}=\text{'I'}$ or $\text{'S'}$. Note: when d05bdf is re-entered with the value of ${\mathbf{initwt}}=\text{'S'}$, the values of nmesh, iorder and the contents of work must not be changed. 5: $\mathbf{iorder}$Integer Input On entry: $p$, the order of the BDF method to be used. Suggested value: ${\mathbf{iorder}}=4$. Constraint: $4\le {\mathbf{iorder}}\le 6$. 6: $\mathbf{tlim}$Real (Kind=nag_wp) Input On entry: the final point of the integration interval, $T$. Constraint: . 7: $\mathbf{tolnl}$Real (Kind=nag_wp) Input On entry: the accuracy required for the computation of the starting value and the solution of the nonlinear equation at each step of the computation (see Section 9). Suggested value: ${\mathbf{tolnl}}=\sqrt{\epsilon }$ where $\epsilon$ is the machine precision. Constraint: . 8: $\mathbf{nmesh}$Integer Input On entry: $N$, the number of equispaced points at which the solution is sought. Constraint: ${\mathbf{nmesh}}={2}^{m}+2×{\mathbf{iorder}}-1$, where $m\ge 1$. 9: $\mathbf{yn}\left({\mathbf{nmesh}}\right)$Real (Kind=nag_wp) array Output On exit: ${\mathbf{yn}}\left(\mathit{i}\right)$ contains the approximate value of the true solution $y\left(t\right)$ at the point $t=\left(\mathit{i}-1\right)×h$, for $\mathit{i}=1,2,\dots ,{\mathbf{nmesh}}$, where $h={\mathbf{tlim}}/\left({\mathbf{nmesh}}-1\right)$. 10: $\mathbf{work}\left({\mathbf{lwk}}\right)$Real (Kind=nag_wp) array Communication Array On entry: if ${\mathbf{initwt}}=\text{'S'}$, work must contain fractional weights computed by a previous call of d05bdf (see description of initwt). 
On exit: contains fractional weights which may be used by a subsequent call of d05bdf. 11: $\mathbf{lwk}$Integer Input On entry: the dimension of the array work as declared in the (sub)program from which d05bdf is called. Constraint: ${\mathbf{lwk}}\ge \left(2×{\mathbf{iorder}}+6\right)×{\mathbf{nmesh}}+8×{{\mathbf{iorder}}}^{2}-16×{\mathbf{iorder}}+1$. 12: $\mathbf{nct}\left({\mathbf{nmesh}}/32+1\right)$Integer array Workspace 13: $\mathbf{ifail}$Integer Input/Output On entry: ifail must be set to $0$, $-1$ or $1$ to set behaviour on detection of an error; these values have no effect when no error is detected. A value of $0$ causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of $-1$ means that an error message is printed while a value of $1$ means that it is not. If halting is not appropriate, the value $-1$ or $1$ is recommended. If message printing is undesirable, then the value $1$ is recommended. Otherwise, the value $0$ is recommended. When the value $-\mathbf{1}$ or $\mathbf{1}$ is used it is essential to test the value of ifail on exit. On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6). ## 6Error Indicators and Warnings If on entry ${\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf). Errors or warnings detected by the routine: ${\mathbf{ifail}}=1$ On entry, ${\mathbf{initwt}}=⟨\mathit{\text{value}}⟩$. Constraints: ${\mathbf{initwt}}=\text{'I'}$ or $\text{'S'}$. On entry, ${\mathbf{iorder}}=⟨\mathit{\text{value}}⟩$. Constraint: $4\le {\mathbf{iorder}}\le 6$. On entry, ${\mathbf{lwk}}=⟨\mathit{\text{value}}⟩$. Constraint: ${\mathbf{lwk}}\ge \left(2×{\mathbf{iorder}}+6\right)×{\mathbf{nmesh}}+8×{{\mathbf{iorder}}}^{2}-16×{\mathbf{iorder}}+1$; that is, $⟨\mathit{\text{value}}⟩$. 
On entry, ${\mathbf{nmesh}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{iorder}}=⟨\mathit{\text{value}}⟩$. Constraint: ${\mathbf{nmesh}}={2}^{m}+2×{\mathbf{iorder}}-1$, for some $m$. On entry, ${\mathbf{nmesh}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{iorder}}=⟨\mathit{\text{value}}⟩$. Constraint: ${\mathbf{nmesh}}\ge 2×{\mathbf{iorder}}+1$. On entry, ${\mathbf{tlim}}=⟨\mathit{\text{value}}⟩$. Constraints: . On entry, ${\mathbf{tolnl}}=⟨\mathit{\text{value}}⟩$. Constraint: . ${\mathbf{ifail}}=2$ An error occurred when trying to compute the starting values. Relaxing the value of tolnl and/or increasing the value of nmesh may overcome this problem (see Section 9 for further details). ${\mathbf{ifail}}=3$ An error occurred when trying to compute the solution at a specific step. Relaxing the value of tolnl and/or increasing the value of nmesh may overcome this problem (see Section 9 for further details). ${\mathbf{ifail}}=-99$ An unexpected error has been triggered by this routine. Please contact NAG. See Section 7 in the Introduction to the NAG Library FL Interface for further information. ${\mathbf{ifail}}=-399$ Your licence key may have expired or may not have been installed correctly. See Section 8 in the Introduction to the NAG Library FL Interface for further information. ${\mathbf{ifail}}=-999$ Dynamic memory allocation failed. See Section 9 in the Introduction to the NAG Library FL Interface for further information. ## 7Accuracy The accuracy depends on nmesh and tolnl, the theoretical behaviour of the solution of the integral equation and the interval of integration. The value of tolnl controls the accuracy required for computing the starting values and the solution of (2) at each step of computation. This value can affect the accuracy of the solution. However, for most problems, the value of $\sqrt{\epsilon }$, where $\epsilon$ is the machine precision, should be sufficient. 
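The mesh and workspace constraints above are easy to get wrong by hand. The following sketch (Python, not part of the NAG Library; the helper names are ours) encodes the documented requirements ${\mathbf{nmesh}} = 2^m + 2 \times {\mathbf{iorder}} - 1$ and the minimum lwk:

```python
def valid_nmesh(nmesh: int, iorder: int) -> bool:
    """True if nmesh = 2**m + 2*iorder - 1 for some m >= 1."""
    base = nmesh - (2 * iorder - 1)
    # base must be a power of two with base >= 2 (i.e. m >= 1)
    return base >= 2 and (base & (base - 1)) == 0

def min_lwk(nmesh: int, iorder: int) -> int:
    """Smallest lwk satisfying the documented workspace constraint."""
    return (2 * iorder + 6) * nmesh + 8 * iorder**2 - 16 * iorder + 1
```

For the fourth-order method with nmesh set to $2^6 + 7 = 71$ (the value used in Section 10), the minimum workspace is min_lwk(71, 4) = 1059.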
## 8Parallelism and Performance

d05bdf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library. d05bdf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information. Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.

## 9Further Comments

In solving (1), d05bdf initially computes the solution of a system of nonlinear equations for obtaining the $2p-1$ starting values; c05qdf is used for this purpose. When a failure with ${\mathbf{ifail}}={\mathbf{2}}$ occurs (which corresponds to an error exit from c05qdf), you are advised to either relax the value of tolnl or choose a smaller step size by increasing the value of nmesh. Once the starting values are computed successfully, the solution of a nonlinear equation of the form $Y_n - \alpha g(t_n, Y_n) - \Psi_n = 0,$ (2) is required at each step of the computation, where ${\Psi }_{n}$ and $\alpha$ are constants. d05bdf calls c05axf to find the root of this equation. If a failure with ${\mathbf{ifail}}={\mathbf{3}}$ occurs (which corresponds to an error exit from c05axf), you are advised to relax the value of tolnl or choose a smaller step size by increasing the value of nmesh. If a failure with ${\mathbf{ifail}}={\mathbf{2}}$ or ${\mathbf{3}}$ persists even after adjustments to tolnl and/or nmesh, you should consider whether there is a more fundamental difficulty: for example, the problem may be ill-posed, or the functions in (1) may not be sufficiently smooth.
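Each step therefore reduces to a scalar root-finding problem of the form $Y_n - \alpha g(t_n, Y_n) - \Psi_n = 0$. Purely as an illustration (the library itself uses c05axf; the values of $\alpha$ and $\Psi_n$ below are invented), here is a bisection sketch:

```python
def solve_step(g, t_n, alpha, psi_n, lo=-10.0, hi=10.0, tol=1e-12):
    """Find Y with Y - alpha*g(t_n, Y) - psi_n = 0 by bisection on [lo, hi]."""
    f = lambda y: y - alpha * g(t_n, y) - psi_n
    if f(lo) * f(hi) > 0:
        raise ValueError("root not bracketed on [lo, hi]")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# g(s, y) = -y**3 (as in the first worked example of Section 10),
# with made-up alpha and psi_n chosen only for illustration.
y_n = solve_step(lambda s, y: -y**3, t_n=0.5, alpha=0.1, psi_n=1.0)
```

A robust library solver would of course be preferred in practice; the point is only that the per-step problem is a one-dimensional root find.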
## 10Example

In this example we solve the following integral equations:

$y(t) = \sqrt{t} + \tfrac{3}{8} \pi t^2 - \int_0^t \frac{1}{\sqrt{t-s}} [y(s)]^3 \, ds, \quad 0 \le t \le 7,$

with the solution $y\left(t\right)=\sqrt{t}$, and

$y(t) = (3-t) \sqrt{t} - \int_0^t \frac{1}{\sqrt{t-s}} \exp \left( s (1-s)^2 - [y(s)]^2 \right) ds, \quad 0 \le t \le 5,$

with the solution $y\left(t\right)=\left(1-t\right)\sqrt{t}$. In the above examples, the fourth-order BDF method is used, and nmesh is set to ${2}^{6}+7$.

### 10.1Program Text

Program Text (d05bdfe.f90)

### 10.2Program Data

None.

### 10.3Program Results

Program Results (d05bdfe.r)
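The first exact solution can be checked without any library quadrature: with $y(s)=\sqrt{s}$, the substitution $s = t\sin^2\theta$ turns the singular integral $\int_0^t [y(s)]^3/\sqrt{t-s}\,ds$ into the smooth integral $2t^2\int_0^{\pi/2}\sin^4\theta\,d\theta = \tfrac{3}{8}\pi t^2$, which cancels the $\tfrac{3}{8}\pi t^2$ term of the forcing function. A stdlib-only numerical confirmation (the function names are ours):

```python
import math

def rhs_integral(t: float, n: int = 20000) -> float:
    """Midpoint rule for the integral of y(s)**3 / sqrt(t - s) over [0, t]
    with y(s) = sqrt(s), using s = t*sin(theta)**2 so that the
    transformed integrand 2*t**2*sin(theta)**4 has no singularity."""
    h = (math.pi / 2) / n
    return sum(2 * t * t * math.sin((i + 0.5) * h) ** 4 for i in range(n)) * h

t = 2.0
lhs = math.sqrt(t)                                          # exact solution y(t)
rhs = math.sqrt(t) + (3 * math.pi / 8) * t**2 - rhs_integral(t)
```

Both sides agree to the accuracy of the quadrature, consistent with $y(t)=\sqrt{t}$ being the exact solution.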
http://math.stackexchange.com/questions/231045/limits-of-infimum-and-supremum-for-sequences-of-functions
# Limits of infimum and supremum for sequences of functions

I need to show that $-\infty \leq \liminf_{k \to \infty}f_k \leq \limsup_{k \to \infty}f_k \leq \infty$, where $f_k$ is a sequence of functions from $\mathbb{R}^n$ to $\mathbb{R}$. This seems inherently true, but for some reason I am having difficulty with the details.

What definition of liminf and limsup are you using? Also, did you intend to write $x\rightarrow\infty$, as opposed to $k\rightarrow\infty$? Or are you taking the infimum over $k$ first and then taking a limit in $x$? –  Alex R. Nov 6 '12 at 1:15

No, I did intend for it to be the limit as $k$ approaches infinity. –  Angelo Christophell Nov 6 '12 at 15:23

I assume that what you want to show is that $(\liminf_{k\to\infty}f_k)(x)\le(\limsup_{k\to\infty}f_k)(x)$ holds for all $x$ and that the function $\liminf f_k$ is defined by $(\liminf_{k\to\infty}f_k)(x)=\liminf_{k\to\infty}(f_k(x))$, and likewise for $\limsup f_k$. Let $n\in\mathbb N$. By definition, $\inf_{k\ge n}f_k(x)$ is a lower bound for the set $\{f_k(x)\mid k\ge n\}$ and $\sup_{k\ge n}f_k(x)$ is an upper bound for it. Since $n\ge n$, $f_n(x)$ is an element of this set and we get $$\inf_{k\ge n}f_k(x)\le f_n(x)\le\sup_{k\ge n}f_k(x).$$ Thus $\inf_{k\ge n}f_k(x)\le\sup_{k\ge n}f_k(x)$ holds for every $n$, and we have that $$\lim_{n\to\infty}\inf_{k\ge n}f_k(x)\le\lim_{n\to\infty}\sup_{k\ge n}f_k(x).$$

I cannot understand why you are focusing on a sequence of functions. Would anything change if $\{f_k\}_k$ were a sequence of numbers? Anyway, it all depends on the definition of $\liminf$ and $\limsup$. Usually, given a sequence $\{p_k\}_k$ of real numbers, you consider the set $E$ of those points $p \in \mathbb{R}$ such that a subsequence of $\{p_k\}_k$ converges to $p$. By definition, $\liminf_{k \to +\infty} p_k = \inf E$ and $\limsup_{k \to +\infty} p_k = \sup E$. Since $\inf E \leq \sup E$, the conclusion should be clear.
If you want to use an equivalent definition for $\liminf$ and $\limsup$, then you might need to work a little bit.
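The pointwise inequality in the first answer can also be illustrated numerically. Fix $x$ and take the sequence $a_k = f_k(x) = (-1)^k(1 + 1/k)$, whose liminf is $-1$ and limsup is $1$. Truncating the tails at a large $N$ (an approximation, since the true tail infima/suprema run over all $k \ge n$):

```python
N = 10000
# a[n] holds the term for k = n + 1
a = [(-1) ** k * (1 + 1 / k) for k in range(1, N + 1)]

# Truncated tail infima and suprema: inf/sup over k = n+1, ..., N.
tail_inf = [min(a[n:]) for n in range(200)]
tail_sup = [max(a[n:]) for n in range(200)]
```

Each term is squeezed between its tail infimum and tail supremum, the tail infima are nondecreasing, the tail suprema are nonincreasing, and the two sequences approach $-1$ and $1$ respectively, so their limits satisfy $\liminf \le \limsup$.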
http://umj.imath.kiev.ua/article/?lang=en&article=8370
# On the Logarithmic Residues of Monogenic functions in a Three-Dimensional Harmonic Algebra with Two-Dimensional Radical Abstract For monogenic (continuous and Gâteaux-differentiable) functions taking values in a three-dimensional harmonic algebra with two-dimensional radical, we compute the logarithmic residue. It is shown that the logarithmic residue depends not only on the roots and singular points of a function but also on the points at which the function takes values in the radical of a harmonic algebra. English version (Springer): Ukrainian Mathematical Journal 65 (2013), no. 7, pp 1079-1086. Citation Example: Plaksa S. A., Shpakovskii V. S. On the Logarithmic Residues of Monogenic functions in a Three-Dimensional Harmonic Algebra with Two-Dimensional Radical // Ukr. Mat. Zh. - 2013. - 65, № 7. - pp. 967–973. Full text
http://mathhelpforum.com/advanced-statistics/76326-stat-q.html
1. ## stat Q

So, for a binomial distribution: if I have $n=15$ and $p=.2$ and I have to calculate the probability of exactly 8 failures, I know I can do something like this,

P(X=8) = P(X<=8) - P(X<=7) = B(8;15,.2) - B(7;15,.2)

in which case I'm subtracting the two cdf's to obtain the value at $X=8$. But can I do something like this instead:

$b(8;15,.2) = \binom{15}{8}(.2)^8(1-.2)^{15-8}$?

Do I have to use the cdf's of the binomial random variable, or can I also use the binomial pmf formula directly?

2. Originally Posted by NidhiS

So, for a binomial distribution: if I have $n=15$ and $p=.2$ and I have to calculate the probability of exactly 8 failures, I know I can do something like this, P(X=8) = P(X<=8) - P(X<=7) = B(8;15,.2) - B(7;15,.2), in which case I'm subtracting the two cdf's to obtain the value at $X=8$. But can I do something like this instead: $b(8;15,.2) = \binom{15}{8}(.2)^8(1-.2)^{15-8}$? Do I have to use the cdf's of the binomial random variable, or can I also use the binomial pmf formula directly?

In cases like this ($X=x$), you can just say that $P\!\left(X=8\right)={15\choose 8}\left(.2\right)^8\left(.8\right)^{15-8}$ Otherwise, if you have $P\!\left(x_1\leq X\leq x_2\right)$, you can evaluate $\sum_{x=x_1}^{x_2}{15\choose x}\left(.2\right)^x\left(.8\right)^{15-x}$ (which, in a sense, is taking the difference between two cdfs). Does this make sense?
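Both routes give the same number, which a few lines of Python confirm (math.comb computes the binomial coefficient $\binom{15}{8}$):

```python
from math import comb

n, p = 15, 0.2

def pmf(x: int) -> float:
    """b(x; n, p): probability of exactly x successes."""
    return comb(n, x) * p**x * (1 - p) ** (n - x)

def cdf(x: int) -> float:
    """B(x; n, p): probability of at most x successes."""
    return sum(pmf(k) for k in range(x + 1))

direct = pmf(8)            # plug into the pmf formula
via_cdf = cdf(8) - cdf(7)  # difference of two cdf values
```

Up to floating-point rounding, `direct` and `via_cdf` agree (both are about 0.0035); the cdf-difference route is only needed when reading values off a cumulative table.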
https://zbmath.org/?q=an%3A1147.30003
# zbMATH — the first resource for mathematics

Abstract theory of universal series and applications. (English) Zbl 1147.30003

Inspired by the fact that all – constructive or Baire-category – proofs of existence of universal series (of several kinds, such as Taylor series, Laurent series, Dirichlet series, Faber series, Fourier series, harmonic expansions, etc) share some similarities (in particular, they need to exhibit a function which approximates both a given function in the space where the universal function should live and a given function in the space where the universal property holds), the authors undertake the task of putting the theory of universal series in an appropriate abstract framework, so obtaining in a unified way most of the existing results as well as new and stronger results. This is carried out by using, in each case, an appropriate approximation theorem: Mergelyan’s theorem, Runge’s theorem, Weierstrass’ theorem, DuBois-Reymond’s lemma, Walsh’s theorem, Lusin’s theorem, etc. In addition, dense lineability, that is, existence of large linear manifolds of universal series, as well as extensions to several dimensions, are obtained. The following known results, among others, are proved or even improved as special cases of their abstract theory:

{$$\bullet$$} M. Fekete’s theorem [C. R. Acad. Sci. 158, 1256–1258 (1914; JFM 45.0630.03)] on the existence of a power series $$\sum_{n=1}^\infty a_n x^n$$ such that, for every continuous function $$f$$ on $$[-1,1]$$ with $$f(0) = 0$$, there exists a sequence $$(\lambda_n) \subset\mathbb N= \{0,1,2,\dots\}$$ such that $$\sum_{j=1}^{\lambda_n} a_j x^j \to f(x)$$ uniformly on $$[-1,1]$$.

{$$\bullet$$} D. Menshov’s theorem [C. R. (Dokl.) Acad. Sci. URSS, New. Ser.
49, 79–82 (1945; Zbl 0060.18504)] on the existence of a trigonometric series $$\sum_{n=-\infty}^{+\infty} a_n e^{int}$$ with $$a_n \to 0$$ $$(|n| \to \infty)$$ such that every Lebesgue-measurable function $$f:\mathbb T\to\mathbb C$$ ($$\mathbb T=$$ the unit circle) is a limit almost everywhere of partial sums of this series. Some enhancements were obtained by Chui and Parnes, Luh, Grosse-Erdmann, Kahane, Melas, Nestoridis, Koumoulis, etc. All of them are covered by the results of the present paper.

{$$\bullet$$} A. I. Seleznev’s theorem [Mat. Sb., N. Ser. 28 (70), 453–460 (1951; Zbl 0043.29501)] on the existence of a series $$\sum_{n=0}^\infty a_n z^n$$ whose partial sums approximate every entire function in each compact subset $$K \subset\mathbb C\setminus \{0\}$$ without holes. Several assertions about convergence of Taylor series (or their Cesàro means) to holomorphic functions in a simply connected domain $$\Omega$$, with approximation properties in $$\mathbb C\setminus \Omega$$ or $$\mathbb C\setminus \overline{\Omega}$$, due to Chui and Parnes, Luh, Grosse-Erdmann, Nestoridis, Melas, Bayart, Armitage, Costakis, Kariofillis, Konstadilaki, etc. In particular, universal series in the space $$A^\infty (\Omega )$$ of boundary-regular holomorphic functions are shown to exist.

{$$\bullet$$} Corresponding universality theorems for multiply connected domains, via Taylor series, or Laurent series or Faber series, due to Gehlen, Luh, Vlachou, Costakis, Nestoridis, Papadoperakis, Kariofillis, Mouratides, Diamantopoulos, Müller, Yavrian, etc.
{$$\bullet$$} Armitage’s theorem (2006) on the existence of a harmonic function $$f:\Omega \to\mathbb R$$ – where $$\Omega$$ is a domain in $$\mathbb R^N$$, $$N \geq 2$$, with $$(\mathbb R^N \cup \{\infty\}) \setminus \overline{\Omega}$$ connected – such that all its partial derivatives extend continuously to $$\overline{\Omega}$$ and approximate any harmonic function $$\mathbb R^N \to\mathbb R$$ on any compact set $$K \subset\mathbb R^N \setminus \overline{\Omega}$$ without holes.

{$$\bullet$$} F. Bayart’s theorem [Rev. Mat. Complut. 19, No. 1, 235–247 (2006; Zbl 1103.30003)] on the existence of universal Dirichlet series $$\sum_{n=1}^\infty a_n n^{-z}$$.

{$$\bullet$$} Costakis-Marias-Nestoridis’ theorem [G. Costakis, M. Marias, V. Nestoridis, Analysis, München 26, No. 3, 401–409 (2006; Zbl 1148.41303)] on the existence of a real $$C^\infty$$-function on an open subset $$\Omega \subset\mathbb R^N$$, whose partial sums $$S_n(f,x)$$ of Taylor series at a point $$x \in \Omega$$ approximate any $$C^\infty$$-function $$\mathbb R^N \to\mathbb R$$ on some open set containing $$K$$, where $$K \subset\mathbb R^N \setminus \Omega$$ is a compact set.

Most of the results in the paper under review are obtained by using either the following main theorem, or some corollary, or some variant of it.

Theorem. Let $$X$$ be a metrizable topological vector space over the field $$\mathbb K= \mathbb R$$ or $$\mathbb C$$ whose topology is induced by a translation-invariant metric $$\rho$$. Let $$x_0,x_1,x_2, \dots$$ be a fixed sequence of elements in $$X$$. Fix a subspace $$A$$ of the space $$\omega$$ of all $$\mathbb K$$-valued sequences, and assume that $$A$$ carries a complete metrizable vector space topology, induced by a translation-invariant metric $$d$$.
Suppose that the coordinate projections $$a = (a_n)_{n \geq 0} \mapsto a_m \in\mathbb K$$ are continuous for any $$m \in\mathbb N$$, and that the set $$\{a = (a_n)_{n \geq 0} \in \omega: \{n:a_n \neq 0\}$$ is finite} is a dense subset of $$A$$. Let $$\mu = (\mu_n)_{n \geq 0}$$ be an increasing sequence of positive integers, and denote by $$U_A^\mu$$ (with $$U_A := U_A^\mu$$ when $$\mu =\mathbb N$$) the class of all sequences $$a = (a_n)_{n \geq 0} \in A$$ such that, for every $$x \in X$$, there exists a subsequence $$(\lambda_n)$$ of $$\mu$$ satisfying $$\sum_{j=0}^{\lambda_n} a_j x_j \to x$$ and $$\sum_{j=0}^{\lambda_n} a_j e_j \to a$$ as $$n \to \infty$$, where $$(e_j)$$ is the canonical basis of $$\omega$$. Then the following are equivalent:

(1) $$U_A \neq \emptyset$$.

(2) For every $$p \in\mathbb N$$, $$x \in X$$ and $$\varepsilon > 0$$, there exist $$n \geq p$$ and $$a_p,a_{p+1}, \dots a_n \in\mathbb K$$ such that $$\rho (\sum_{j=p}^n a_jx_j,x) < \varepsilon$$ and $$d(\sum_{j=p}^n a_je_j,0) < \varepsilon$$.

(3) For every $$x \in X$$ and $$\varepsilon > 0$$, there exist $$n \geq 0$$ and $$a_0,a_1, \dots a_n \in\mathbb K$$ such that $$\rho (\sum_{j=0}^n a_j x_j,x) < \varepsilon$$ and $$d(\sum_{j=0}^n a_je_j,0) < \varepsilon$$.

(4) For every increasing sequence $$\mu$$ of positive integers, $$U_A^\mu$$ is a dense $$G_\delta$$ subset of $$A$$.

(5) For every increasing sequence $$\mu$$ of positive integers, $$U_A^\mu \cup \{0\}$$ contains a dense subspace of $$A$$.

The unrestricted universality of series (i.e., without imposing $$\sum_{j=0}^{\lambda_n} a_j e_j \to a$$) is also considered, as is the restricted universality with respect to $$M$$ of sequences of continuous mappings $$Y \to Z$$, where $$M$$ is a closed subset of $$Z$$. Finally, the paper contains an extensive bibliography and may well serve as an up-to-date survey of universal series.
##### MSC: 30B30 Boundary behavior of power series in one complex variable; over-convergence 47A16 Cyclic vectors, hypercyclic and chaotic operators Full Text:
https://zbmath.org/?q=an:0627.12004
# zbMATH — the first resource for mathematics

The computation of the fundamental unit of totally complex quartic orders. (English) Zbl 0627.12004

It is shown how the generalized Voronoi algorithm can be used to produce a practical algorithm for determining the regulator and/or fundamental unit of an arbitrary totally complex quartic order. The author shows that his algorithm will yield a fundamental unit of any such order in $$O(R\,D^{\epsilon})$$ binary operations (for every $$\epsilon > 0$$), where $$D$$ is the absolute value of the discriminant and $$R$$ is the regulator of the order. An analogue of Galois' theorem on the symmetry of the continued fraction expansion of the square root of a rational number is also established. The paper concludes with a table of computational results for the orders $$\mathbb{Z}[\sqrt[4]{-d}]$$ with $$1 \leq d \leq 500$$.

Reviewer: H. C. Williams

##### MSC:
11R16 Cubic and quartic extensions
11R27 Units and factorization
12-04 Software, source code, etc. for problems pertaining to field theory
http://blekko.com/wiki/Raised-cosine_filter?source=672620ff
# Raised-cosine filter

The raised-cosine filter is a filter frequently used for pulse-shaping in digital modulation due to its ability to minimise intersymbol interference (ISI). Its name stems from the fact that the non-zero portion of the frequency spectrum of its simplest form ($\beta = 1$) is a cosine function, 'raised' up to sit above the $f$ (horizontal) axis.

## Mathematical description

[Figures: frequency response and impulse response of the raised-cosine filter with various roll-off factors.]

The raised-cosine filter is an implementation of a low-pass Nyquist filter, i.e., one that has the property of vestigial symmetry. This means that its spectrum exhibits odd symmetry about $\frac{1}{2T}$, where $T$ is the symbol-period of the communications system. Its frequency-domain description is a piecewise function, given by:

$H(f) = \begin{cases} T, & |f| \leq \frac{1 - \beta}{2T} \\ \frac{T}{2}\left[1 + \cos\left(\frac{\pi T}{\beta}\left[|f| - \frac{1 - \beta}{2T}\right]\right)\right], & \frac{1 - \beta}{2T} < |f| \leq \frac{1 + \beta}{2T} \\ 0, & \mbox{otherwise} \end{cases}$

with $0 \leq \beta \leq 1$. The filter is characterised by two parameters: $\beta$, the roll-off factor, and $T$, the reciprocal of the symbol-rate. The impulse response of such a filter [1] is given by:

$h(t) = \mathrm{sinc}\left(\frac{t}{T}\right)\frac{\cos\left(\frac{\pi\beta t}{T}\right)}{1 - \frac{4\beta^2 t^2}{T^2}}$

in terms of the normalised sinc function.

### Roll-off factor

The roll-off factor, $\beta$, is a measure of the excess bandwidth of the filter, i.e. the bandwidth occupied beyond the Nyquist bandwidth of $\frac{1}{2T}$. If we denote the excess bandwidth as $\Delta f$, then:

$\beta = \frac{\Delta f}{\left(\frac{1}{2T}\right)} = \frac{\Delta f}{R_S/2} = 2T\Delta f$

where $R_S = \frac{1}{T}$ is the symbol-rate. The graph shows the amplitude response as $\beta$ is varied between 0 and 1, and the corresponding effect on the impulse response.
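A quick numeric sanity check of the piecewise definition of $H(f)$ above (a sketch, not from the article; the function name and the defaults $T = 1$, $\beta = 0.5$ are my own choices):

```python
import math

def raised_cosine_H(f, T=1.0, beta=0.5):
    """Frequency response of the raised-cosine filter, piecewise form."""
    af = abs(f)
    f1 = (1 - beta) / (2 * T)   # edge of the flat (passband) region
    f2 = (1 + beta) / (2 * T)   # outer edge of the roll-off region
    if af <= f1:
        return T
    if af <= f2:
        return (T / 2) * (1 + math.cos(math.pi * T / beta * (af - f1)))
    return 0.0
```

The vestigial (odd) symmetry about $\frac{1}{2T}$ shows up numerically as $H\!\left(\frac{1}{2T} - d\right) + H\!\left(\frac{1}{2T} + d\right) = T$ for any offset $d$ inside the roll-off region.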
As can be seen, the time-domain ripple level increases as $\beta$ decreases. This shows that the excess bandwidth of the filter can be reduced, but only at the expense of an elongated impulse response.

#### $\beta = 0$

As $\beta$ approaches 0, the roll-off zone becomes infinitesimally narrow, hence:

$\lim_{\beta \rightarrow 0}H(f) = \mathrm{rect}(fT)$

where $\mathrm{rect}(.)$ is the rectangular function, so the impulse response approaches $\mathrm{sinc}\left(\frac{t}{T}\right)$. Hence, it converges to an ideal or brick-wall filter in this case.

#### $\beta = 1$

When $\beta = 1$, the non-zero portion of the spectrum is a pure raised cosine, leading to the simplification:

$H(f)|_{\beta=1} = \left \{ \begin{matrix} \frac{T}{2}\left[1 + \cos\left(\pi fT\right)\right], & |f| \leq \frac{1}{T} \\ 0, & \mbox{otherwise} \end{matrix} \right.$

### Bandwidth

The bandwidth of a raised cosine filter is most commonly defined as the width of the non-zero portion of its spectrum, i.e.:

$BW = \frac{1}{2}R_S(1+\beta)$

### Auto-correlation function

The auto-correlation function of the raised cosine function is as follows:

$R\left(\tau\right) = T \left[\mathrm{sinc}\left( \frac{\tau}{T} \right) \frac{\cos\left( \beta \frac{\pi \tau}{T} \right)}{1 - \left( \frac{2 \beta \tau}{T} \right)^2} - \frac{\beta}{4} \mathrm{sinc}\left(\beta \frac{\tau}{T} \right) \frac{\cos\left( \frac{\pi \tau}{T} \right)}{1 - \left( \frac{\beta \tau}{T} \right)^2} \right]$

The auto-correlation result can be used to analyse the effect of various sampling offsets.

## Application

[Figure: consecutive raised-cosine impulses, demonstrating the zero-ISI property.]

When used to filter a symbol stream, a Nyquist filter has the property of eliminating ISI, as its impulse response is zero at all $nT$ (where $n$ is an integer), except $n = 0$. Therefore, if the transmitted waveform is correctly sampled at the receiver, the original symbol values can be recovered completely.
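The zero-ISI property ($h(nT) = 0$ for every nonzero integer $n$) can be checked numerically. The sketch below also fills in the removable singularity of $h(t)$ at $t = \pm\frac{T}{2\beta}$ with its limiting value $\frac{\pi}{4}\,\mathrm{sinc}\!\left(\frac{1}{2\beta}\right)$, obtained from the formula by L'Hôpital's rule (the function name and defaults are my own):

```python
import math

def raised_cosine_h(t, T=1.0, beta=0.5):
    """Impulse response h(t) = sinc(t/T) cos(pi*beta*t/T) / (1 - (2*beta*t/T)^2),
    with the removable singularity at t = +/- T/(2*beta) filled in by its limit."""
    if beta > 0 and abs(abs(t) - T / (2 * beta)) < 1e-12:
        x = 1 / (2 * beta)
        return (math.pi / 4) * (math.sin(math.pi * x) / (math.pi * x))
    u = t / T
    sinc = 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
    return sinc * math.cos(math.pi * beta * u) / (1 - (2 * beta * u) ** 2)
```

With $T = 1$, sampling at the integers gives $h(0) = 1$ and $h(n) \approx 0$ for $n \neq 0$, which is exactly the zero-ISI condition described above.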
However, in many practical communications systems, a matched filter is used in the receiver, due to the effects of white noise. For zero ISI, it is the net response of the transmit and receive filters that must equal $H(f)$: $H_R(f)\cdot H_T(f) = H(f)$ And therefore: $|H_R(f)| = |H_T(f)| = \sqrt{|H(f)|}$ These filters are called root-raised-cosine filters. ## References • Glover, I.; Grant, P. (2004). Digital Communications (2nd ed.). Pearson Education Ltd. ISBN 0-13-089399-4. • Proakis, J. (1995). Digital Communications (3rd ed.). McGraw-Hill Inc. ISBN 0-07-113814-5. • Tavares, L.M.; Tavares G.N. (1998) Comments on "Performance of Asynchronous Band-Limited DS/SSMA Systems" . IEICE Trans. Commun., Vol. E81-B, No. 9
http://math.stackexchange.com/questions/64251/differential-equations
Differential Equations

I have to solve this differential equation: \left\{\begin{aligned}\frac{dy}{dx} + y &= f(x) \\ y (0)&=0, \end{aligned}\right. where $f (x) = \begin{cases}2 & \text{if } 0 \leq x < 1 \\ 0 &\text{if }x \geq 1.\end{cases}$ Please explain how to solve it, as it involves the discontinuous function $f$.

Comments:
If it is homework, please add the [homework] tag. What did you try? Can you tell us where you are stuck? – Srivatsan Sep 13 '11 at 19:47
This is not an exact differential equation. You'd need to multiply it by an integrating factor to get an exact differential equation. – Robert Israel Sep 13 '11 at 20:07
The integrating factor does not involve the function on the right. It would be the same integrating factor if the equation was $\frac{dy}{dx} + y = 0$. – Robert Israel Sep 13 '11 at 20:15
@Robert Israel: Thanks, I got the integrating factor as $e^x$. – rupa Sep 13 '11 at 20:18
@Robert Israel: Please suggest how to proceed beyond this. – rupa Sep 13 '11 at 20:24

Answer: Multiply by the integrating factor $e^x$ and then you can factor the LHS: $$(e^xy)'=e^xf(x).$$ Now integrate from $0$ to $x$ to get $$e^xy(x)-e^0y(0)=\int_0^xe^uf(u)du$$ but remember $y(0)=0$. Now divide by the integrating factor and you have $y(x)=e^{-x}\int_0^xe^uf(u)du$. We can evaluate this by (a) looking at $x\in[0,1)$ and then (b) looking at $x\ge1$ for a piecewise-defined solution $y$ (note: in the latter case you will have to split $\int_0^x$ into $\int_0^1+\int_1^x$ to substitute for $f$).
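Carrying out the integration in the answer gives, for this particular $f$, $y(x) = 2(1 - e^{-x})$ on $[0, 1)$ and $y(x) = 2(e - 1)e^{-x}$ for $x \geq 1$. A short numeric check of this closed form (my own sketch):

```python
import math

def y(x):
    """Closed-form solution of y' + y = f(x), y(0) = 0,
    where f = 2 on [0, 1) and f = 0 for x >= 1."""
    if x < 1:
        return 2 * (1 - math.exp(-x))            # e^{-x} * 2(e^x - 1)
    return 2 * (math.e - 1) * math.exp(-x)       # the integral stops growing at x = 1
```

The solution is continuous at $x = 1$ (both branches give $2(1 - e^{-1})$ there), and a centered finite difference confirms $y' + y = f$ on each piece.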
https://homework.cpm.org/category/CCI_CT/textbook/pc3/chapter/11/lesson/11.2.5/problem/11-147
11-147. Given $\cos(A) = \frac { 5 } { 13 }$ where $0 < A <\frac { \pi } { 2 }$, evaluate $\csc(A)$ and $\cos(2A)$.

Sketch a right triangle for the given situation and then determine the length of the missing leg.

$\csc(A) = \frac{1}{\sin(A)}$

$\cos(2A) = \cos^2 (A)- \sin^2(A)$
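A sketch of the arithmetic with exact fractions: the missing leg has length $\sqrt{13^2 - 5^2} = 12$, and $\sin(A)$ is positive since $0 < A < \frac{\pi}{2}$.

```python
from fractions import Fraction

cosA = Fraction(5, 13)
sinA = Fraction(12, 13)          # opposite leg / hypotenuse, from the 5-12-13 triangle
cscA = 1 / sinA                  # reciprocal identity
cos2A = cosA**2 - sinA**2        # double-angle identity from the hint
```

This gives $\csc(A) = \frac{13}{12}$ and $\cos(2A) = -\frac{119}{169}$.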
https://www.physicsforums.com/threads/maximum-minimum-values-of-a-graph-on-a-restricted-interval.388865/
# Maximum/Minimum values of a graph on a restricted interval

## Homework Statement
f(x) = x^3 + 12x^2 - 27x + 11
Find the absolute maximum and absolute minimum on the interval [-10, 0]. (There are 3 different interval sets, but if I can do this one, I think I can figure out the rest.)

## Homework Equations
Take the derivative, set it equal to 0, then solve. What I'm confused about is how the solving process differs as the interval changes.

## The Attempt at a Solution
I have the derivative set as 3x^2 + 24x - 27, but what I'm unsure about is how finding the absolute maximum on the interval [-10, 0] differs in process from finding it on, say, [-7, 2]. I think what I'm really trying to ask is how the restrictions on the intervals are reflected in the mathematical process to solve for an absolute maximum and minimum.

LCKurtz (Homework Helper): The extremes of a continuous function on a closed interval must occur for values of x on the interval where one of the following is true:
1. f'(x) = 0
2. f'(x) fails to exist.
3. x is an end point of the closed interval.
You have to check all three cases, noting that values of x that aren't on the interval are irrelevant. Just list the possibilities and pick out the max and min.
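LCKurtz's three-case checklist can be sketched in code for this problem: the derivative factors as $3(x+9)(x-1)$, so the only critical points are $x = -9$ and $x = 1$; since the derivative exists everywhere, only the critical points that lie inside the interval plus the two endpoints need checking (the function names below are my own):

```python
def f(x):
    return x**3 + 12*x**2 - 27*x + 11

def fprime(x):
    return 3*x**2 + 24*x - 27        # = 3(x + 9)(x - 1)

def extrema_on(a, b):
    """Absolute max and min of f on the closed interval [a, b]."""
    critical = [x for x in (-9.0, 1.0) if a <= x <= b]   # case 1 (case 2 is empty)
    candidates = critical + [a, b]                        # case 3: the endpoints
    values = [f(x) for x in candidates]
    return max(values), min(values)
```

On $[-10, 0]$ only $x = -9$ survives the interval check, and the candidates $\{-9, -10, 0\}$ give a maximum of $f(-9) = 497$ and a minimum of $f(0) = 11$; on $[-7, 2]$ the surviving critical point is $x = 1$ instead, which is exactly how the interval restriction enters the process.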
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/265
## A Tableau Calculus for Partial Functions

• Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene has already given a semantic account of partial functions using a three-valued logic decades ago, but there has not been a satisfactory mechanization. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. However, strong Kleene logic, where quantification is restricted and therefore not truth-functional, does not fit the framework directly. We solve this problem by applying recent methods from sorted logics. This paper presents a tableau calculus that combines the proper treatment of partial functions with the efficiency of sorted calculi.
https://en.wikipedia.org/wiki/Weight_(representation_theory)
# Weight (representation theory)

In the mathematical field of representation theory, a weight of an algebra A over a field F is an algebra homomorphism from A to F, or equivalently, a one-dimensional representation of A over F. It is the algebra analogue of a multiplicative character of a group. The importance of the concept, however, stems from its application to representations of Lie algebras and hence also to representations of algebraic and Lie groups. In this context, a weight of a representation is a generalization of the notion of an eigenvalue, and the corresponding eigenspace is called a weight space.

## Motivation and general concept

Given a set S of matrices, each of which is diagonalizable, and any two of which commute, it is always possible to simultaneously diagonalize all of the elements of S.[note 1][note 2] Equivalently, for any set S of mutually commuting semisimple linear transformations of a finite-dimensional vector space V there exists a basis of V consisting of simultaneous eigenvectors of all elements of S. Each of these common eigenvectors v ∈ V defines a linear functional on the subalgebra U of End(V) generated by the set of endomorphisms S; this functional is defined as the map which associates to each element of U its eigenvalue on the eigenvector v. This map is also multiplicative, and sends the identity to 1; thus it is an algebra homomorphism from U to the base field. This "generalized eigenvalue" is a prototype for the notion of a weight.

The notion is closely related to the idea of a multiplicative character in group theory, which is a homomorphism χ from a group G to the multiplicative group of a field F. Thus χ: G → F× satisfies χ(e) = 1 (where e is the identity element of G) and ${\displaystyle \chi (gh)=\chi (g)\chi (h)}$ for all g, h in G.
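A toy illustration of the motivating idea, with hypothetical matrices chosen purely for the example (both are polynomials in the swap matrix $S = \begin{pmatrix}0&1\\1&0\end{pmatrix}$, hence they commute and share the eigenbasis $v_1 = (1,1)$, $v_2 = (1,-1)$). Each common eigenvector determines a "weight": the functional assigning to each matrix its eigenvalue on that vector.

```python
from fractions import Fraction as Fr

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Two commuting symmetric matrices: A = (5/2)I - (1/2)S, B = 6I - S.
A = [[Fr(5, 2), Fr(-1, 2)], [Fr(-1, 2), Fr(5, 2)]]   # eigenvalues 2 and 3
B = [[Fr(6), Fr(-1)], [Fr(-1), Fr(6)]]               # eigenvalues 5 and 7

v1, v2 = [Fr(1), Fr(1)], [Fr(1), Fr(-1)]

# The "weight" attached to each common eigenvector sends each matrix
# (each element of the algebra generated by A and B) to its eigenvalue there.
weight_v1 = {"A": Fr(2), "B": Fr(5)}
weight_v2 = {"A": Fr(3), "B": Fr(7)}
```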
Indeed, if G acts on a vector space V over F, each simultaneous eigenspace for every element of G, if such exists, determines a multiplicative character on G: the eigenvalue on this common eigenspace of each element of the group. The notion of multiplicative character can be extended to any algebra A over F, by replacing χ: G → F× by a linear map χ: A → F with: ${\displaystyle \chi (ab)=\chi (a)\chi (b)}$ for all a, b in A. If an algebra A acts on a vector space V over F, then any simultaneous eigenspace corresponds to an algebra homomorphism from A to F, assigning to each element of A its eigenvalue on that eigenspace. If A is a Lie algebra (which is generally not an associative algebra), then instead of requiring multiplicativity of a character, one requires that it maps any Lie bracket to the corresponding commutator; but since F is commutative this simply means that this map must vanish on Lie brackets: χ([a,b]) = 0. A weight on a Lie algebra g over a field F is a linear map λ: g → F with λ([x, y]) = 0 for all x, y in g. Any weight on a Lie algebra g vanishes on the derived algebra [g,g] and hence descends to a weight on the abelian Lie algebra g/[g,g]. Thus weights are primarily of interest for abelian Lie algebras, where they reduce to the simple notion of a generalized eigenvalue for a space of commuting linear transformations. If G is a Lie group or an algebraic group, then a multiplicative character θ: G → F× induces a weight χ = dθ: g → F on its Lie algebra by differentiation. (For Lie groups, this is differentiation at the identity element of G, and the algebraic group case is an abstraction using the notion of a derivation.)

## Weights in the representation theory of semisimple Lie algebras

Let ${\displaystyle {\mathfrak {g}}}$ be a complex semisimple Lie algebra and ${\displaystyle {\mathfrak {h}}}$ a Cartan subalgebra of ${\displaystyle {\mathfrak {g}}}$.
In this section, we describe the concepts needed to formulate the "theorem of the highest weight" classifying the finite-dimensional representations of ${\displaystyle {\mathfrak {g}}}$. Notably, we will explain the notion of a "dominant integral element." The representations themselves are described in the article linked to above.

### Weight of a representation

[Figure: example of the weights of a representation of the Lie algebra sl(3,C).]

Let V be a representation of a Lie algebra ${\displaystyle {\mathfrak {g}}}$ over C and let λ be a linear functional on ${\displaystyle {\mathfrak {h}}}$. Then the weight space of V with weight λ is the subspace ${\displaystyle V_{\lambda }}$ given by ${\displaystyle V_{\lambda }:=\{v\in V:\forall H\in {\mathfrak {h}},\quad H\cdot v=\lambda (H)v\}}$. A weight of the representation V is a linear functional λ such that the corresponding weight space is nonzero. Nonzero elements of the weight space are called weight vectors. That is to say, a weight vector is a simultaneous eigenvector for the action of the elements of ${\displaystyle {\mathfrak {h}}}$, with the corresponding eigenvalues given by λ. If V is the direct sum of its weight spaces ${\displaystyle V=\bigoplus _{\lambda \in {\mathfrak {h}}^{*}}V_{\lambda }}$ then it is called a weight module; this corresponds to there being a common eigenbasis (a basis of simultaneous eigenvectors) for all the represented elements of the algebra, i.e., to their being simultaneously diagonalizable matrices (see diagonalizable matrix). If G is a group with Lie algebra ${\displaystyle {\mathfrak {g}}}$, every finite-dimensional representation of G induces a representation of ${\displaystyle {\mathfrak {g}}}$. A weight of the representation of G is then simply a weight of the associated representation of ${\displaystyle {\mathfrak {g}}}$.
There is a subtle distinction between weights of group representations and Lie algebra representations, which is that there is a different notion of integrality condition in the two cases; see below. (The integrality condition is more restrictive in the group case, reflecting that not every representation of the Lie algebra comes from a representation of the group.) ### Action of the root vectors If V is the adjoint representation of ${\displaystyle {\mathfrak {g}}}$, the nonzero weights of V are called roots, the weight spaces are called root spaces, and weight vectors are called root vectors. Explicitly, a linear functional ${\displaystyle \alpha }$ on ${\displaystyle {\mathfrak {h}}}$ is called a root if ${\displaystyle \alpha \neq 0}$ and there exists a nonzero ${\displaystyle X}$ in ${\displaystyle {\mathfrak {g}}}$ such that ${\displaystyle [H,X]=\alpha (H)X}$ for all ${\displaystyle H}$ in ${\displaystyle {\mathfrak {h}}}$. The collection of roots forms a root system. From the perspective of representation theory, the significance of the roots and root vectors is the following elementary but important result: If V is a representation of ${\displaystyle {\mathfrak {g}}}$, v is a weight vector with weight ${\displaystyle \lambda }$ and X is a root vector with root ${\displaystyle \alpha }$, then ${\displaystyle H\cdot (X\cdot v)=[(\lambda +\alpha )(H)](X\cdot v)}$ for all H in ${\displaystyle {\mathfrak {h}}}$. That is, ${\displaystyle X\cdot v}$ is either the zero vector or a weight vector with weight ${\displaystyle \lambda +\alpha }$. Thus, the action of ${\displaystyle X}$ maps the weight space with weight ${\displaystyle \lambda }$ into the weight space with weight ${\displaystyle \lambda +\alpha }$. 
### Integral element

[Figure: algebraically integral elements (triangular lattice), dominant integral elements (black dots), and fundamental weights for sl(3,C).]

Let ${\displaystyle {\mathfrak {h}}_{0}^{*}}$ be the real subspace of ${\displaystyle {\mathfrak {h}}^{*}}$ generated by the roots of ${\displaystyle {\mathfrak {g}}}$. For computations, it is convenient to choose an inner product that is invariant under the Weyl group, that is, under reflections about the hyperplanes orthogonal to the roots. We may then use this inner product to identify ${\displaystyle {\mathfrak {h}}_{0}^{*}}$ with a subspace ${\displaystyle {\mathfrak {h}}_{0}}$ of ${\displaystyle {\mathfrak {h}}}$. With this identification, the coroot associated to a root ${\displaystyle \alpha }$ is given as ${\displaystyle H_{\alpha }=2{\frac {\alpha }{\langle \alpha ,\alpha \rangle }}}$. We now define two different notions of integrality for elements of ${\displaystyle {\mathfrak {h}}_{0}}$. The motivation for these definitions is simple: the weights of finite-dimensional representations of ${\displaystyle {\mathfrak {g}}}$ satisfy the first integrality condition, while if G is a group with Lie algebra ${\displaystyle {\mathfrak {g}}}$, the weights of finite-dimensional representations of G satisfy the second integrality condition. An element ${\displaystyle \lambda \in {\mathfrak {h}}_{0}}$ is algebraically integral if ${\displaystyle \langle \lambda ,H_{\alpha }\rangle =2{\frac {\langle \lambda ,\alpha \rangle }{\langle \alpha ,\alpha \rangle }}\in \mathbf {Z} }$ for all roots ${\displaystyle \alpha }$. The motivation for this condition is that the coroot ${\displaystyle H_{\alpha }}$ can be identified with the H element in a standard ${\displaystyle \{X,Y,H\}}$ basis for an sl(2,C)-subalgebra of g.[1] By elementary results for sl(2,C), the eigenvalues of ${\displaystyle H_{\alpha }}$ in any finite-dimensional representation must be integers.
We conclude that, as stated above, the weight of any finite-dimensional representation of ${\displaystyle {\mathfrak {g}}}$ is algebraically integral.[2] The fundamental weights ${\displaystyle \omega _{1},\ldots ,\omega _{n}}$ are defined by the property that they form a basis of ${\displaystyle {\mathfrak {h}}_{0}}$ dual to the set of coroots associated to the simple roots. That is, the fundamental weights are defined by the condition ${\displaystyle 2{\frac {\langle \omega _{i},\alpha _{j}\rangle }{\langle \alpha _{j},\alpha _{j}\rangle }}=\delta _{i,j}}$ where ${\displaystyle \alpha _{1},\ldots ,\alpha _{n}}$ are the simple roots. An element ${\displaystyle \lambda }$ is then algebraically integral if and only if it is an integral combination of the fundamental weights.[3] The set of all algebraically integral weights is a lattice in ${\displaystyle {\mathfrak {h}}_{0}}$ called the weight lattice of ${\displaystyle {\mathfrak {g}}}$, denoted by ${\displaystyle P({\mathfrak {g}})}$. The figure shows the example of the Lie algebra sl(3,C), whose root system is the ${\displaystyle A_{2}}$ root system. There are two simple roots, ${\displaystyle \gamma _{1}}$ and ${\displaystyle \gamma _{2}}$. The first fundamental weight, ${\displaystyle \omega _{1}}$, should be orthogonal to ${\displaystyle \gamma _{2}}$ and should project orthogonally to half of ${\displaystyle \gamma _{1}}$, and similarly for ${\displaystyle \omega _{2}}$. The weight lattice is then the triangular lattice. Suppose now that the Lie algebra ${\displaystyle {\mathfrak {g}}}$ is the Lie algebra of a Lie group G. Then we say that ${\displaystyle \lambda \in {\mathfrak {h}}_{0}}$ is analytically integral (G-integral) if for each t in ${\displaystyle {\mathfrak {h}}}$ such that ${\displaystyle \exp(t)=1\in G}$ we have ${\displaystyle \langle \lambda ,t\rangle \in 2\pi i\mathbf {Z} }$.
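The defining condition of the fundamental weights can be checked numerically for the $A_2$ example. The sketch below uses one common normalization (unit-length simple roots at 120 degrees) and solves the dual-basis condition $2\langle\omega_i,\alpha_j\rangle/\langle\alpha_j,\alpha_j\rangle = \delta_{ij}$ directly; the function names are my own:

```python
import math

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# simple roots of A2, normalized to unit length, at 120 degrees to each other
a1 = (1.0, 0.0)
a2 = (-0.5, math.sqrt(3) / 2)

def solve2(r1, r2, b1, b2):
    """Solve the 2x2 system with rows r1, r2 and right-hand side (b1, b2)."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    return ((b1 * r2[1] - b2 * r1[1]) / det,
            (r1[0] * b2 - r2[0] * b1) / det)

# since <a_j, a_j> = 1 here, the condition reduces to <w_i, a_j> = delta_ij / 2
w1 = solve2(a1, a2, 0.5, 0.0)
w2 = solve2(a1, a2, 0.0, 0.5)
```

As the article states for this case, $\omega_1$ comes out orthogonal to $\gamma_2$ (here $a_2$) with orthogonal projection onto $\gamma_1$ (here $a_1$) equal to half of it, and symmetrically for $\omega_2$.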
The reason for making this definition is that if a representation of ${\displaystyle {\mathfrak {g}}}$ arises from a representation of G, then the weights of the representation will be G-integral.[4] For G semisimple, the set of all G-integral weights is a sublattice P(G) ⊂ P(${\displaystyle {\mathfrak {g}}}$). If G is simply connected, then P(G) = P(${\displaystyle {\mathfrak {g}}}$). If G is not simply connected, then the lattice P(G) is smaller than P(${\displaystyle {\mathfrak {g}}}$) and their quotient is isomorphic to the fundamental group of G.[5] ### Partial ordering on the space of weights If the positive roots are ${\displaystyle \alpha _{1}}$, ${\displaystyle \alpha _{2}}$, and ${\displaystyle \alpha _{3}}$, the shaded region is the set of points higher than ${\displaystyle \lambda }$ We now introduce a partial ordering on the set of weights, which will be used to formulate the theorem of the highest weight describing the representations of g. Recall that R is the set of roots; we now fix a set ${\displaystyle R^{+}}$ of positive roots. Consider two elements ${\displaystyle \mu }$ and ${\displaystyle \lambda }$ of ${\displaystyle {\mathfrak {h}}_{0}}$. We are mainly interested in the case where ${\displaystyle \mu }$ and ${\displaystyle \lambda }$ are integral, but this assumption is not necessary to the definition we are about to introduce. We then say that ${\displaystyle \mu }$ is higher than ${\displaystyle \lambda }$, which we write as ${\displaystyle \mu \succeq \lambda }$, if ${\displaystyle \mu -\lambda }$ is expressible as a linear combination of positive roots with non-negative real coefficients.[6] This means, roughly, that "higher" means in the directions of the positive roots. We equivalently say that ${\displaystyle \lambda }$ is "lower" than ${\displaystyle \mu }$, which we write as ${\displaystyle \lambda \preceq \mu }$. 
It should be emphasized that ${\displaystyle \preceq }$ is only a partial ordering; it can easily happen that ${\displaystyle \mu }$ is neither higher nor lower than ${\displaystyle \lambda }$. ### Dominant weight An integral element λ is dominant if ${\displaystyle \langle \lambda ,\gamma \rangle \geq 0}$ for each positive root γ. Equivalently, λ is dominant if it is a non-negative integer combination of the fundamental weights. In the ${\displaystyle A_{2}}$ case, the dominant integral elements live in a 60-degree sector. The notion of being dominant is not the same as being higher than zero. The set of all λ (not necessarily integral) such that ${\displaystyle \langle \lambda ,\gamma \rangle \geq 0}$ is known as the fundamental Weyl chamber associated to the given set of positive roots. ### Theorem of the highest weight A weight ${\displaystyle \lambda }$ of a representation ${\displaystyle V}$ of ${\displaystyle {\mathfrak {g}}}$ is called a highest weight if every other weight of ${\displaystyle V}$ is lower than ${\displaystyle \lambda }$. The theory classifying the finite-dimensional irreducible representations of ${\displaystyle {\mathfrak {g}}}$ is by means of a "theorem of the highest weight." The theorem says that[7] (1) every irreducible (finite-dimensional) representation has a highest weight, (2) the highest weight is always a dominant, algebraically integral element, (3) two irreducible representations with the same highest weight are isomorphic, and (4) every dominant, algebraically integral element is the highest weight of an irreducible representation. The last point is the most difficult one; the representations may be constructed using Verma modules. ### Highest-weight module A representation (not necessarily finite dimensional) V of ${\displaystyle {\mathfrak {g}}}$ is called highest-weight module if it is generated by a weight vector vV that is annihilated by the action of all positive root spaces in ${\displaystyle {\mathfrak {g}}}$. 
Every irreducible ${\displaystyle {\mathfrak {g}}}$-module with a highest weight is necessarily a highest-weight module, but in the infinite-dimensional case, a highest weight module need not be irreducible. For each ${\displaystyle \lambda \in {\mathfrak {h}}^{*}}$—not necessarily dominant or integral—there exists a unique (up to isomorphism) simple highest-weight ${\displaystyle {\mathfrak {g}}}$-module with highest weight λ, which is denoted L(λ), but this module is infinite dimensional unless λ is dominant integral. It can be shown that each highest weight module with highest weight λ is a quotient of the Verma module M(λ). This is just a restatement of universality property in the definition of a Verma module. Every finite-dimensional highest weight module is irreducible.[8] ## Notes 1. ^ The converse is also true – a set of diagonalizable matrices commutes if and only if the set is simultaneously diagonalisable (Horn & Johnson 1985, pp. 51–53). 2. ^ In fact, given a set of commuting matrices over an algebraically closed field, they are simultaneously triangularizable, without needing to assume that they are diagonalizable. ## References 1. ^ Hall 2015 Theorem 7.19 and Eq. (7.9) 2. ^ Hall 2015 Proposition 9.2 3. ^ Hall 2015 Proposition 8.36 4. ^ Hall 2015 Proposition 12.5 5. ^ Hall 2015 Corollary 13.8 and Corollary 13.20 6. ^ Hall 2015 Definition 8.39 7. ^ Hall 2015 Theorems 9.4 and 9.5 8. ^ This follows from (the proof of) Proposition 6.13 in Hall 2015 together with the general result on complete reducibility of finite-dimensional representations of semisimple Lie algebras
http://www.mathworks.com/matlabcentral/fileexchange/5782-eps2pdf
Code covered by the BSD License

# eps2pdf

25 Aug 2004 (Updated)

Converts an EPS file to a PDF file.

File ID: #5782 | File Size: 4.23 KB | 87 Downloads (last 30 days) | 19 ratings

File Information

Description:

EPS2PDF converts an existing EPS file to a PDF file using Ghostscript. EPS2PDF reads an EPS file, modifies the bounding box, and creates a PDF file whose size is determined by the bounding box and not by the paper size. This cannot be accomplished by using Ghostscript alone. So, all that one needs is of course MATLAB and the Ghostscript drivers.

This tool is especially suited for LaTeX (TeX) users who want to create PDF documents on the fly (by including PDF graphics and using either pdftex or pdflatex). For example, if you are using LaTeX (TeX) to typeset documents, the usual (simple) way to include graphics is to include EPS graphics, for example (if myfigure.eps exists):

\begin{figure}
\centering
\includegraphics[scale=0.8]{myfigure}\\
\caption{some caption.}
\end{figure}

To use pdflatex (pdftex) you do not need to change anything, but you must provide another file myfigure.pdf in the same directory along with myfigure.eps. And this file, of course, can be generated by EPS2PDF.

Acknowledgements: This file inspired Savefig.

MATLAB release: MATLAB 6.5.1 (R13SP1)
Other requirements: Ghostscript

Comments:

29 Mar 2014: This is a very good script. On a 64-bit Windows 7 machine I sometimes get the error "R6016: not enough space for thread data". The error disappears when I use a 32-bit version of Ghostscript.

22 Apr 2013: This script has worked very well for me through Windows XP and Windows 7. Note that you do need to locate the exact path to the Ghostscript executable on your local drive. The example provided in the .m notes is just that, an example, and will likely not reflect where the gs file is located on your machine. For Windows 7, I found that the OS doesn't have a suitable gs to path to.
www.ghostscript.com has a free download for Ghostscript 9.07, which resolves this problem. Just point to the .exe, change the eps2pdf script to the correct file name, and you're all set.

05 Nov 2012: I just downloaded the code but I'm having a problem using it. I have Ghostscript 9.05 running on 64-bit Windows 7, so I changed the code to use gswin64c.exe instead of gswin32c.exe and included the right path as an input argument, but it shows the error "Ghostscript executable could not be found: c:\programfiles\gs\gs9.05\bin\gswin64c.exe". Does anyone have any idea how I could solve this?

18 Feb 2012: Many thanks for this helpful code. Would you please let me know how to give the source and dest addresses, via an example?

27 Nov 2011: Big big thanks!

25 Oct 2011: Great job. Thanks a lot!

03 Aug 2011: Beware that this does not embed fonts in the generated PDF! I used this to create figures for an IEEE paper (which requires all fonts to be embedded) with pdflatex. The only way to get all fonts embedded was to embed them in the figures. I did this by adding the following options to GS_PARAMETERS: -dEmbedAllFonts=true -dSubsetFonts=true -dCompatibilityLevel=1.4 -dPDFSETTINGS=/printer. It might be an idea to include these options by default or to comment on this in the comment for the file. Apart from that it is a great function; I combine it with the function 'exportfig'.

01 Nov 2010: This is exactly what I want!

18 Sep 2009: Works smoothly under Linux! :)

10 Sep 2008: This script works great!

21 May 2008: It helps a lot!

03 Dec 2007: Wonderful, thank you very much!

13 May 2007: This function is exactly what's needed to convert EPS files printed from MATLAB into PDFs that can be interpreted correctly by pdflatex. (A rant: I think it's stupid that pdflatex cannot directly place EPS or PS files when standard latex can. I'm sure there's a reason, but I still think it's stupid.
Annoying nuances like this will keep people using Word.) Anyway, I'm looking forward to putting this function through its paces with some papers I'm writing. Many thanks for this contribution.

19 Feb 2006: Just what I was looking for! Thanks, thanks, thanks!

08 Feb 2006: Great!

02 Sep 2005: Thanks! This is perfect!

21 Jun 2005: Very useful, many thanks!

03 Sep 2004: Excellent! Thanks a lot Primoz! After a few hours of searching, I found the perfect answer here.
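The bounding-box step that eps2pdf automates is easy to illustrate. The sketch below is not code from the actual eps2pdf.m (the helper names are hypothetical); it extracts the `%%BoundingBox` comment from an EPS file and computes the page size, in PostScript points, that the resulting PDF should have:

```python
import re

def parse_bounding_box(eps_text):
    """Extract (llx, lly, urx, ury) from an EPS %%BoundingBox comment."""
    m = re.search(r"^%%BoundingBox:\s*(-?\d+)\s+(-?\d+)\s+(-?\d+)\s+(-?\d+)",
                  eps_text, flags=re.MULTILINE)
    if m is None:
        raise ValueError("no %%BoundingBox comment found")
    return tuple(int(g) for g in m.groups())

def pdf_page_size(eps_text):
    """Page size (width, height) in PostScript points implied by the box."""
    llx, lly, urx, ury = parse_bounding_box(eps_text)
    return urx - llx, ury - lly

sample = "%!PS-Adobe-3.0 EPSF-3.0\n%%BoundingBox: 18 36 594 756\n"
assert parse_bounding_box(sample) == (18, 36, 594, 756)
assert pdf_page_size(sample) == (576, 720)
```

A tool like eps2pdf then hands a size computed this way to Ghostscript's pdfwrite device, so the PDF page matches the figure's bounding box rather than the default paper size.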
https://www.physicsforums.com/threads/why-there-are-no-spinors-for-gl-n.240240/
# Why there are no spinors for GL(n)

**#1 (OP):** Does anybody know a simple proof of the fact that there are no finite-dimensional extensions of the $\mathfrak{so}(n)$-spinor representation to the group of general linear transformations? The proof, it seems, can be based on the well-known fact that when rotated by $2\pi$ a spinor transforms as $\psi\rightarrow-\psi$, but I have found no elementary proof.

**#2 (samalkhaiat):** SO(n)-spinors belong not to the group SO(n) but to its universal covering group, which happens to have linear representations (spinors) other than SO(n) representations: SU(2) for SO(3) and SL(2,C) for SO(1,3). A spinor or double-valued representation of SO(n,m) is by definition a linear representation of Spin(n,m) that cannot be obtained from a representation of SO(n,m). Like SO(n), the general linear group GL(n) is not simply connected. However, unlike SO(n), its universal covering group has no linear representations other than GL(n) representations. See the theorem on page 151 of E. Cartan, "The Theory of Spinors", Dover Edition 1981.

**#3 (OP):** Thank you very much. These are more or less well-known but nontrivial facts. For example, highly nontrivial is the fact that the double covering of the real general linear group is not a matrix group.
I wonder if there is a simple proof with no mention of double coverings, etc.

**#4 (Haelfix):** There's a cute little book called "Spin Geometry" by Lawson et al. that proves the result (chapter 5) by noting that no faithful finite-dimensional representations are possible. The double cover is called the metalinear group, incidentally.

**#5 (OP):** Excuse me, I found no chapter 5 in this book. Paragraph 5 deals with representations, but Lawson says not a single word about the general linear group and its infinite-dimensional spinors.

**#6 (Haelfix):** Eeep, apologies. The PDF I have is evidently a lot different than the published book. Unfortunately the book is checked out of our library, so I can't find the appropriate corresponding sections. Eyeballing the Google Books chapter content, maybe look in the representation section on page 30, or somewhere where they talk about Dirac operators? It's possible that it's not there though, in which case I apologize.

**#7 (samalkhaiat):** No, I don't think you will find such a "proof". The following is the only way to define spinors on an oriented manifold M. One starts from a principal bundle, say a, over M, with total space denoted by E(a). Then one assumes that SO(n) is the structural group of a. A spin-structure on a is (by definition) a pair (b, f) consisting of:

1) a principal bundle b over M, with total space E(b) and structural group identified with Spin(n), i.e., the 2-fold covering of SO(n);

2) a map $f: E(b) \rightarrow E(a)$ such that
$$fr_{1} = r_{2}( f \times g )$$
where g is the homomorphism from Spin(n) to SO(n), $r_{1}$ is the right action of Spin(n), and $r_{2}$ is the right action of SO(n).

So, it is all about replacing SO(n) by its 2-fold covering group Spin(n).
If this is possible, one then says that M admits a spin-structure: "The necessary and sufficient condition for an SO(n) bundle to be endowed with a spin-structure is that its second Stiefel-Whitney class should vanish." The point is that we cannot construct spinors from the metric tensor alone, and the GL(n) generators cannot be written in terms of Clifford numbers.

**#8:** I agree with you that the modern language is best suited, and I would strongly recommend using it as you do. However, I want to point out that spinors were first defined by Cartan, as you know since you have his book, at a time when vector bundles were not even known. So I do believe one could construct a proof without explicit use of fiber bundles (although one will not get away without topological considerations, of course, such as the double covering).

**#9 (samalkhaiat):** Yes. And I did mention that Cartan proves it on page 151. The OP asked for a "simple proof" that avoids the use of the double covering! Such a proof, I believe, does not exist.

**#10 (OP):** You are too skilled! One does not need "spinor bundles" to prove that it is not possible to extend the spinorial representation to a representation of the general linear group without enlarging the representation space. It is all about spinors and vectors, not about sections of bundles. Thank you for trying.
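The sign flip quoted in the opening post ($\psi \to -\psi$ under a $2\pi$ rotation) is easy to check numerically in the simplest case, SU(2) covering SO(3). This is only an illustrative sketch (rotations about the z-axis, with the matrices written out by hand rather than exponentiated):

```python
import cmath
import math

def su2_rotation_z(theta):
    """Spin-1/2 rotation about z: exp(-i*theta*sigma_z/2), diagonal in this basis."""
    return [[cmath.exp(-1j * theta / 2), 0],
            [0, cmath.exp(1j * theta / 2)]]

def so3_rotation_z(theta):
    """Ordinary vector rotation about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

two_pi = 2 * math.pi
U = su2_rotation_z(two_pi)
R = so3_rotation_z(two_pi)

# A 2*pi rotation acts as the identity on vectors ...
assert all(abs(R[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(3) for j in range(3))
# ... but as -1 on spinors: psi -> -psi.
assert abs(U[0][0] + 1) < 1e-12 and abs(U[1][1] + 1) < 1e-12
```

Only after a $4\pi$ rotation does the SU(2) matrix return to the identity, which is exactly the double-valuedness that fails to extend to the universal cover of GL(n).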
http://mathhelpforum.com/statistics/143051-probability-distribution-expected-value.html
# Math Help - probability distribution and expected value

1. ## probability distribution and expected value

I have attempted the question, but I am doubting my answer because it is a rational number and the numbers in context are not. I am providing the whole question for context:

Two standard six-sided dice are tossed. Let X be the sum of the scores on the two dice.
(a) Find (i) P(X = 6) (ii) P(X > 6) (iii) P(X = 7 | X > 5)
(b) Elena plays a game where she tosses two dice. If the sum is 6, she wins 3 points. If the sum is greater than 6, she wins 1 point. If the sum is less than 6, she loses k points. Find the value of k for which Elena's expected number of points is zero.

I need verification with part (b). What I did was construct a probability distribution table where X was k, 3, and 1, with probabilities 10/36, 5/36, 21/36, respectively. Then I set the expected value equal to zero: (10k/36) + (3 × 5/36) + (1 × 21/36) = 0 to solve for k. I get k as -36/10. But I'm not sure if this is right; can someone please check if my method is correct?

2. You should make a two-row table: the first row $x = 2, 3, \dots, 12$, with the second row being $P(x)$. There will be some symmetry here.

3. My probability distribution table is my two-row table. Instead of 2, 3, ... I put the points obtained from the game in. It should be the same thing, right? What do you mean by symmetrical?

4. The sum of two dice can only be from 2 to 12. These are the x's. You then need to find the probability of each of these occurring.
I.e., $P(2)=\frac{1}{36},\ P(3)=\frac{2}{36},\ P(4)=\frac{3}{36},\ \dots,\ P(12)=\frac{1}{36}$. Do you know how I found these? After you have this information you can answer the questions.
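The poster's method can be confirmed by brute-force enumeration. The sketch below (not from the thread) computes the exact distribution with `fractions.Fraction`; note that the poster's $k = -36/10$ is the same answer up to sign convention, since the question defines $k$ as the number of points lost:

```python
from fractions import Fraction
from itertools import product

# Distribution of the sum X of two fair six-sided dice.
counts = {}
for a, b in product(range(1, 7), repeat=2):
    counts[a + b] = counts.get(a + b, 0) + 1
p = {s: Fraction(c, 36) for s, c in counts.items()}

# Part (a)
assert p[6] == Fraction(5, 36)                            # P(X = 6)
assert sum(p[s] for s in p if s > 6) == Fraction(21, 36)  # P(X > 6)
# P(X = 7 | X > 5) = P(X = 7) / P(X > 5)
assert p[7] / sum(p[s] for s in p if s > 5) == Fraction(6, 26)

# Part (b): win 3 points if X = 6, 1 point if X > 6, lose k points if X < 6.
p_lt6 = sum(p[s] for s in p if s < 6)                     # 10/36
k = (3 * p[6] + 1 * Fraction(21, 36)) / p_lt6
assert k == Fraction(18, 5)   # k = 3.6, matching -36/10 up to the sign convention
```

So the method is correct: the expected change per game is $-k \cdot \tfrac{10}{36} + 3 \cdot \tfrac{5}{36} + 1 \cdot \tfrac{21}{36}$, which vanishes at $k = 18/5 = 3.6$.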
https://maharashtraboardsolutions.in/maharashtra-board-12th-maths-solutions-chapter-6-miscellaneous-exercise-6-part-2/
# Maharashtra Board 12th Maths Solutions Chapter 6 Differential Equations Miscellaneous Exercise 6

Balbharti 12th Maharashtra State Board Maths Solutions Book Pdf Chapter 6 Differential Equations Miscellaneous Exercise 6 Questions and Answers.

## Maharashtra State Board 12th Maths Solutions Chapter 6 Differential Equations Miscellaneous Exercise 6

(I) Choose the correct option from the given alternatives:

Question 1.
The order and degree of the differential equation $$\sqrt{1+\left(\frac{d y}{d x}\right)^{2}}=\left(\frac{d^{2} y}{d x^{2}}\right)^{\frac{3}{2}}$$ are respectively
(a) 2, 1
(b) 1, 2
(c) 3, 2
(d) 2, 3
Answer: (d) 2, 3

Question 2.
The differential equation of y = c² + $$\frac{c}{x}$$ is
(a) $$x^{4}\left(\frac{d y}{d x}\right)^{2}-x \frac{d y}{d x}=y$$
(b) $$\frac{d^{2} y}{d x^{2}}+x \frac{d y}{d x}+y=0$$
(c) $$x^{3}\left(\frac{d y}{d x}\right)^{2}+x \frac{d y}{d x}=y$$
(d) $$\frac{d^{2} y}{d x^{2}}+\frac{d y}{d x}-y=0$$
Answer: (a)

Question 3.
x² + y² = a² is a solution of
(a) $$\frac{d^{2} y}{d x^{2}}+\frac{d y}{d x}-y=0$$
(b) $$y=x \sqrt{1+\left(\frac{d y}{d x}\right)^{2}}+a^{2} y$$
(c) $$y=x \frac{d y}{d x}+a \sqrt{1+\left(\frac{d y}{d x}\right)^{2}}$$
(d) $$\frac{d^{2} y}{d x^{2}}=(x+1) \frac{d y}{d x}$$
Answer: (c)

Question 4.
The differential equation of all circles having their centres on the line y = 5 and touching the X-axis is
(a) $$y^{2}\left(1+\frac{d y}{d x}\right)=25$$
(b) $$(y-5)^{2}\left[1+\left(\frac{d y}{d x}\right)^{2}\right]=25$$
(c) $$(y-5)^{2}+\left[1+\left(\frac{d y}{d x}\right)^{2}\right]=25$$
(d) $$(y-5)^{2}\left[1-\left(\frac{d y}{d x}\right)^{2}\right]=25$$
Answer: (b)

Question 5.
The differential equation y $$\frac{d y}{d x}$$ + x = 0 represents a family of
(a) circles
(b) parabolas
(c) ellipses
(d) hyperbolas
Answer: (a) circles
Hint: y $$\frac{d y}{d x}$$ + x = 0, so ∫y dy + ∫x dx = c, i.e. $$\frac{y^{2}}{2}+\frac{x^{2}}{2}=c$$, i.e. x² + y² = 2c, which is a circle.

Question 6.
The solution of $$\frac{1}{x} \cdot \frac{d y}{d x}=\tan ^{-1} x$$ is
(a) $$\frac{x^{2} \tan ^{-1} x}{2}+c=0$$
(b) x tan⁻¹x + c = 0
(c) x – tan⁻¹x = c
(d) $$y=\frac{x^{2} \tan ^{-1} x}{2}-\frac{1}{2}\left(x-\tan ^{-1} x\right)+c$$
Answer: (d)

Question 7.
The solution of (x + y)² $$\frac{d y}{d x}$$ = 1 is
(a) x = tan⁻¹(x + y) + c
(b) y tan⁻¹($$\frac{x}{y}$$) = c
(c) y = tan⁻¹(x + y) + c
(d) y + tan⁻¹(x + y) = c
Answer: (c)

Question 8.
The solution of $$\frac{d y}{d x}=\frac{y+\sqrt{x^{2}-y^{2}}}{2}$$ is
(a) sin⁻¹($$\frac{y}{x}$$) = 2 log |x| + c
(b) sin⁻¹($$\frac{y}{x}$$) = log |x| + c
(c) sin($$\frac{x}{y}$$) = log |x| + c
(d) sin($$\frac{y}{x}$$) = log |y| + c
Answer: (b)

Question 9.
The solution of $$\frac{d y}{d x}$$ + y = cos x – sin x is
(a) y eˣ = cos x + c
(b) y eˣ + eˣ cos x = c
(c) y eˣ = eˣ cos x + c
(d) y² eˣ = eˣ cos x + c
Answer: (c)
Hint: For $$\frac{d y}{d x}$$ + y = cos x – sin x, I.F. = $$e^{\int 1 d x}=e^{x}$$, so the solution is y eˣ = ∫(cos x – sin x) eˣ dx + c, i.e. y eˣ = eˣ cos x + c.

Question 10.
The integrating factor of the linear differential equation x $$\frac{d y}{d x}$$ + 2y = x² log x is
(a) $$\frac{1}{x}$$
(b) k
(c) $$\frac{1}{n^{2}}$$
(d) x²
Answer: (d) x²
Hint: I.F. = $$e^{\int \frac{2}{x} d x}$$ = e^{2 log x} = x²

Question 11.
The solution of the differential equation $$\frac{d y}{d x}$$ = sec x – y tan x is
(a) y sec x + tan x = c
(b) y sec x = tan x + c
(c) sec x + y tan x = c
(d) sec x = y tan x + c
Answer: (b)
Hint: $$\frac{d y}{d x}$$ = sec x – y tan x, i.e. $$\frac{d y}{d x}$$ + y tan x = sec x, so I.F.
= $$e^{\int \tan x d x}=e^{\log \sec x}$$ = sec x. The solution is y sec x = ∫sec x · sec x dx + c, i.e. y sec x = tan x + c.

Question 12.
The particular solution of $$\frac{d y}{d x}=x e^{y-x}$$, when x = y = 0, is
(a) e^{x-y} = x + 1
(b) e^{x+y} = x + 1
(c) eˣ + eʸ = x + 1
(d) e^{y-x} = x – 1
Answer: (a)

Question 13.
$$\frac{x^{2}}{a^{2}}-\frac{y^{2}}{b^{2}}=1$$ is a solution of
(a) $$\frac{d^{2} y}{d x^{2}}+y x+\left(\frac{d y}{d x}\right)^{2}=0$$
(b) $$x y \frac{d^{2} y}{d x^{2}}+2\left(\frac{d y}{d x}\right)^{2}-y \frac{d y}{d x}=0$$
(c) $$y \frac{d^{2} y}{d x^{2}}+2\left(\frac{d y}{d x}\right)^{2}+y=0$$
(d) $$x y \frac{d y}{d x}+y \frac{d^{2} y}{d x^{2}}=0$$
Answer: (b)

Question 14.
The decay rate of a certain substance is directly proportional to the amount present at that instant. Initially there are 27 grams of the substance, and 3 hours later it is found that 8 grams are left. The amount left after one more hour is
(a) $$5\frac{2}{3}$$ grams
(b) $$5\frac{1}{3}$$ grams
(c) 5.1 grams
(d) 5 grams
Answer: (b) $$5\frac{1}{3}$$ grams

Question 15.
If the surrounding air is kept at 20°C and a body cools from 80°C to 70°C in 5 minutes, the temperature of the body after 15 minutes will be
(a) 51.7°C
(b) 54.7°C
(c) 52.7°C
(d) 50.7°C
Answer: (b) 54.7°C

(II) Solve the following:

Question 1.
Determine the order and degree of the following differential equations:

(i) $$\frac{d^{2} y}{d x^{2}}+5 \frac{d y}{d x}+y=x^{3}$$
Solution:
The given D.E. is $$\frac{d^{2} y}{d x^{2}}+5 \frac{d y}{d x}+y=x^{3}$$
This D.E. has highest order derivative $$\frac{d^{2} y}{d x^{2}}$$ with power 1. ∴ the given D.E. is of order 2 and degree 1.

(ii) $$\left(\frac{d^{3} y}{d x^{3}}\right)^{2}=\sqrt[5]{1+\frac{d y}{d x}}$$
Solution:
The given D.E.
is $$\left(\frac{d^{3} y}{d x^{3}}\right)^{2}=\sqrt[5]{1+\frac{d y}{d x}}$$
Raising both sides to the fifth power, we get
$$\left(\frac{d^{3} y}{d x^{3}}\right)^{10}=1+\frac{d y}{d x}$$
This D.E. has highest order derivative $$\frac{d^{3} y}{d x^{3}}$$ with power 10. ∴ the given D.E. is of order 3 and degree 10.

(iii) $$\sqrt[3]{1+\left(\frac{d y}{d x}\right)^{2}}=\frac{d^{2} y}{d x^{2}}$$
Solution:
The given D.E. is $$\sqrt[3]{1+\left(\frac{d y}{d x}\right)^{2}}=\frac{d^{2} y}{d x^{2}}$$
On cubing both sides, we get
$$1+\left(\frac{d y}{d x}\right)^{2}=\left(\frac{d^{2} y}{d x^{2}}\right)^{3}$$
This D.E. has highest order derivative $$\frac{d^{2} y}{d x^{2}}$$ with power 3. ∴ the given D.E. is of order 2 and degree 3.

(iv) $$\frac{d y}{d x}=3 y+\sqrt[4]{1+5\left(\frac{d y}{d x}\right)^{2}}$$
Solution:
The given D.E. is $$\frac{d y}{d x}=3 y+\sqrt[4]{1+5\left(\frac{d y}{d x}\right)^{2}}$$. Raising both sides of $$\frac{d y}{d x}-3 y=\sqrt[4]{1+5\left(\frac{d y}{d x}\right)^{2}}$$ to the fourth power, this D.E. has the highest order derivative $$\frac{d y}{d x}$$ with power 4. ∴ the given D.E. is of order 1 and degree 4.

(v) $$\frac{d^{4} y}{d x^{4}}+\sin \left(\frac{d y}{d x}\right)=0$$
Solution:
The given D.E. is $$\frac{d^{4} y}{d x^{4}}+\sin \left(\frac{d y}{d x}\right)=0$$
This D.E. has highest order derivative $$\frac{d^{4} y}{d x^{4}}$$. ∴ order = 4. Since this D.E. cannot be expressed as a polynomial in the differential coefficients, the degree is not defined.

Question 2.
In each of the following examples verify that the given function is a solution of the differential equation.

(i) $$x^{2}+y^{2}=r^{2} ; \quad x \frac{d y}{d x}+r \sqrt{1+\left(\frac{d y}{d x}\right)^{2}}=y$$
Solution:
x² + y² = r² ……. (1)
Differentiating both sides w.r.t. x, we get 2x + 2y $$\frac{d y}{d x}$$ = 0, i.e. $$\frac{d y}{d x}=-\frac{x}{y}$$. Then (taking y > 0)
$$x \frac{d y}{d x}+r \sqrt{1+\left(\frac{d y}{d x}\right)^{2}}=-\frac{x^{2}}{y}+r \sqrt{\frac{y^{2}+x^{2}}{y^{2}}}=-\frac{x^{2}}{y}+\frac{r^{2}}{y}=\frac{r^{2}-x^{2}}{y}=\frac{y^{2}}{y}=y$$
Hence, x² + y² = r² is a solution of the D.E.
$$x \frac{d y}{d x}+r \sqrt{1+\left(\frac{d y}{d x}\right)^{2}}=y$$

(ii) y = e^{ax} sin bx; $$\frac{d^{2} y}{d x^{2}}-2 a \frac{d y}{d x}+\left(a^{2}+b^{2}\right) y=0$$
Solution:

(iii) y = 3 cos(log x) + 4 sin(log x); $$x^{2} \frac{d^{2} y}{d x^{2}}+x \frac{d y}{d x}+y=0$$
Solution:
y = 3 cos(log x) + 4 sin(log x) …… (1)
Differentiating both sides w.r.t. x, we get

(iv) xy = aeˣ + be⁻ˣ + x²; $$x \frac{d^{2} y}{d x^{2}}+2 \frac{d y}{d x}+x^{2}=x y+2$$
Solution:

(v) x² = 2y² log y; x² + y² = xy $$\frac{d x}{d y}$$
Solution:
x² = 2y² log y ……(1)
Differentiating both sides w.r.t. y, we get
∴ x² + y² = xy $$\frac{d x}{d y}$$
Hence, x² = 2y² log y is a solution of the D.E. x² + y² = xy $$\frac{d x}{d y}$$

Question 3.
Obtain the differential equation by eliminating the arbitrary constants from the following equations:

(i) y² = a(b – x)(b + x)
Solution:
y² = a(b – x)(b + x) = a(b² – x²)
Differentiating both sides w.r.t. x, we get
2y $$\frac{d y}{d x}$$ = a(0 – 2x) = -2ax
∴ y $$\frac{d y}{d x}$$ = -ax …….(1)
Differentiating again w.r.t. x, we get
This is the required D.E.

(ii) y = a sin(x + b)
Solution:
y = a sin(x + b)
This is the required D.E.

(iii) (y – a)² = b(x + 4)
Solution:
(y – a)² = b(x + 4) …….(1)
Differentiating both sides w.r.t. x, we get
$$2(y-a) \cdot \frac{d}{d x}(y-a)=b \frac{d}{d x}(x+4)$$

(iv) y = $$\sqrt{a \cos (\log x)+b \sin (\log x)}$$
Solution:
y = $$\sqrt{a \cos (\log x)+b \sin (\log x)}$$
∴ y² = a cos(log x) + b sin(log x) …….(1)
Differentiating both sides w.r.t. x, we get

(v) y = Ae^{3x+1} + Be^{-3x+1}
Solution:
y = Ae^{3x+1} + Be^{-3x+1} …… (1)
Differentiating twice w.r.t. x, we get
This is the required D.E.

Question 4.
Form the differential equation of:

(i) all circles which pass through the origin and whose centres lie on the X-axis.
Solution:
Let C(h, 0) be the centre of a circle which passes through the origin. Then the radius of the circle is h.
∴ equation of the circle is (x – h)² + (y – 0)² = h²
∴ x² – 2hx + h² + y² = h²
∴ x² + y² = 2hx ……..(1)
Differentiating both sides w.r.t. x, we get
2x + 2y $$\frac{d y}{d x}$$ = 2h
Substituting the value of 2h in equation (1), we get
x² + y² = (2x + 2y $$\frac{d y}{d x}$$)x
∴ x² + y² = 2x² + 2xy $$\frac{d y}{d x}$$
∴ 2xy $$\frac{d y}{d x}$$ + x² – y² = 0
This is the required D.E.

(ii) all parabolas which have 4b as latus rectum and whose axis is parallel to the Y-axis.
Solution:
Let A(h, k) be the vertex of the parabola which has 4b as latus rectum and whose axis is parallel to the Y-axis. Then the equation of the parabola is
(x – h)² = 4b(y – k) ……. (1)
where h and k are arbitrary constants.
Differentiating both sides of (1) w.r.t. x, we get
2(x – h) · $$\frac{d}{d x}$$(x – h) = 4b · $$\frac{d}{d x}$$(y – k)
∴ 2(x – h) × (1 – 0) = 4b($$\frac{d y}{d x}$$ – 0)
∴ (x – h) = 2b $$\frac{d y}{d x}$$
Differentiating again w.r.t. x, we get
1 – 0 = 2b $$\frac{d^{2} y}{d x^{2}}$$
∴ 2b $$\frac{d^{2} y}{d x^{2}}$$ – 1 = 0
This is the required D.E.

(iii) an ellipse whose major axis is twice its minor axis.
Solution:
Let 2a and 2b be the lengths of the major and minor axes of the ellipse. Then
2a = 2(2b) ∴ a = 2b
∴ equation of the ellipse is $$\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1$$
∴ $$\frac{x^{2}}{(2 b)^{2}}+\frac{y^{2}}{b^{2}}=1$$
∴ $$\frac{x^{2}}{4 b^{2}}+\frac{y^{2}}{b^{2}}=1$$
∴ x² + 4y² = 4b²
Differentiating w.r.t. x, we get
2x + 4 × 2y $$\frac{d y}{d x}$$ = 0
∴ x + 4y $$\frac{d y}{d x}$$ = 0
This is the required D.E.

(iv) all the lines which are normal to the line 3x – 2y + 7 = 0.
Solution:
Slope of the line 3x – 2y + 7 = 0 is $$\frac{-3}{-2}=\frac{3}{2}$$. ∴ slope of a normal to this line is $$-\frac{2}{3}$$.
Then the equation of the normal is y = $$-\frac{2}{3}$$x + k, where k is an arbitrary constant.
Differentiating w.r.t. x, we get
$$\frac{d y}{d x}=-\frac{2}{3} \times 1+0$$
∴ 3$$\frac{d y}{d x}$$ + 2 = 0
This is the required D.E.
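Part (i) of Question 4 can be sanity-checked numerically: along any circle $x^2 + y^2 = 2hx$, the derived equation $2xy\,y' + x^2 - y^2 = 0$ should hold. A small sketch (the helper names are mine, with $y'$ estimated by a central finite difference on the upper semicircle):

```python
import math

# Circle with centre (h, 0) through the origin: x^2 + y^2 = 2hx,
# so y = sqrt(2hx - x^2) on the upper half.
def y(x, h):
    return math.sqrt(2 * h * x - x * x)

def residual(x, h, eps=1e-6):
    """Left side of the derived D.E., 2xy y' + x^2 - y^2, with y'
    estimated by a central finite difference."""
    dydx = (y(x + eps, h) - y(x - eps, h)) / (2 * eps)
    return 2 * x * y(x, h) * dydx + x * x - y(x, h) ** 2

# The D.E. is satisfied along the circle for any centre (h, 0).
for h in (1.0, 2.0, 5.0):
    for x in (0.5 * h, h, 1.5 * h):
        assert abs(residual(x, h)) < 1e-4
```

The residual vanishes (up to finite-difference error) for every centre tested, confirming that eliminating the arbitrary constant h really does produce a D.E. satisfied by the whole family.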
(v) the hyperbola whose lengths of transverse and conjugate axes are half those of the given hyperbola $$\frac{x^{2}}{16}-\frac{y^{2}}{36}=k$$.
Solution:
The equation of the hyperbola is $$\frac{x^{2}}{16}-\frac{y^{2}}{36}=k$$, i.e., $$\frac{x^{2}}{16 k}-\frac{y^{2}}{36 k}=1$$
Comparing this equation with $$\frac{x^{2}}{a^{2}}-\frac{y^{2}}{b^{2}}=1$$, we get a² = 16k, b² = 36k
∴ a = 4√k, b = 6√k
∴ l(transverse axis) = 2a = 8√k and l(conjugate axis) = 2b = 12√k
Let 2A and 2B be the lengths of the transverse and conjugate axes of the required hyperbola. Then, according to the given condition,
2A = a = 4√k and 2B = b = 6√k
∴ A = 2√k and B = 3√k
∴ equation of the required hyperbola is $$\frac{x^{2}}{A^{2}}-\frac{y^{2}}{B^{2}}=1$$, i.e., $$\frac{x^{2}}{4 k}-\frac{y^{2}}{9 k}=1$$
∴ 9x² – 4y² = 36k, where k is an arbitrary constant.
Differentiating w.r.t. x, we get
9 × 2x – 4 × 2y $$\frac{d y}{d x}$$ = 0
∴ 9x – 4y $$\frac{d y}{d x}$$ = 0
This is the required D.E.

Question 5.
Solve the following differential equations:

(i) log($$\frac{d y}{d x}$$) = 2x + 3y
Solution:

(ii) $$\frac{d y}{d x}$$ = x²y + y
Solution:

(iii) $$\frac{d y}{d x}=\frac{2 y-x}{2 y+x}$$
Solution:

(iv) x dy = (x + y + 1) dx
Solution:

(v) $$\frac{d y}{d x}$$ + y cot x = x² cot x + 2x
Solution:
$$\frac{d y}{d x}$$ + y cot x = x² cot x + 2x ……..(1)
This is a linear differential equation of the form $$\frac{d y}{d x}$$ + Py = Q, where P = cot x and Q = x² cot x + 2x
∴ I.F. = $$e^{\int P d x}$$ = $$e^{\int \cot x d x}$$ = $$e^{\log (\sin x)}$$ = sin x
∴ the solution of (1) is given by y(I.F.) = ∫Q · (I.F.) dx + c
∴ y sin x = ∫(x² cot x + 2x) sin x dx + c
∴ y sin x = ∫(x² cot x ·
sin x + 2x sin x) dx + c
∴ y sin x = ∫x² cos x dx + 2∫x sin x dx + c
∴ y sin x = x² ∫cos x dx – ∫[$$\frac{d}{d x}\left(x^{2}\right)$$ ∫cos x dx] dx + 2∫x sin x dx + c
∴ y sin x = x²(sin x) – ∫2x(sin x) dx + 2∫x sin x dx + c
∴ y sin x = x² sin x – 2∫x sin x dx + 2∫x sin x dx + c
∴ y sin x = x² sin x + c
∴ y = x² + c cosec x
This is the general solution.

(vi) y log y = (log y² – x) $$\frac{d y}{d x}$$
Solution:

(vii) 4 $$\frac{d x}{d y}$$ + 8x = 5e^{-3y}
Solution:

Question 6.
Find the particular solution of the following differential equations:

(i) y(1 + log x) = (log xˣ) $$\frac{d y}{d x}$$, when y(e) = e²
Solution:

(ii) (x + 2y²) $$\frac{d y}{d x}$$ = y, when x = 2, y = 1
Solution:
This is the general solution. When x = 2, y = 1, we have 2 = 2(1)² + c(1) ∴ c = 0
∴ the particular solution is x = 2y².

(iii) $$\frac{d y}{d x}$$ – 3y cot x = sin 2x, when y($$\frac{\pi}{2}$$) = 2
Solution:
$$\frac{d y}{d x}$$ – 3y cot x = sin 2x
∴ $$\frac{d y}{d x}$$ – (3 cot x)y = sin 2x ……..(1)
This is a linear differential equation of the form

(iv) (x + y) dy + (x – y) dx = 0, when x = 1 = y
Solution:

(v) $$2 e^{\frac{x}{y}} d x+\left(y-2 x e^{\frac{x}{y}}\right) d y=0$$, when y(0) = 1
Solution:

Question 7.
Show that the general solution of the differential equation $$\frac{d y}{d x}+\frac{y^{2}+y+1}{x^{2}+x+1}=0$$ is given by (x + y + 1) = c(1 – x – y – 2xy).
Solution:

Question 8.
The normal lines to a given curve at each point (x, y) on the curve pass through (2, 0). The curve passes through (2, 3). Find the equation of the curve.
Solution:
Let P(x, y) be a point on the curve y = f(x). Then the slope of the normal to the curve is $$-\frac{1}{\left(\frac{d y}{d x}\right)}$$
∴ equation of the normal is
This is the general equation of the curve. Since the required curve passes through the point (2, 3), we get 2² + 3² = 4(2) + c ∴ c = 5
∴ equation of the required curve is x² + y² = 4x + 5.

Question 9.
The volume of a spherical balloon being inflated changes at a constant rate.
If initially its radius is 3 units and after 3 seconds it is 6 units, find the radius of the balloon after t seconds.
Solution:
Let r be the radius and V the volume of the spherical balloon at time t. Then the rate of change in volume of the balloon is $$\frac{d V}{d t}$$, which is a constant. Since V = $$\frac{4}{3}$$πr³, we have V = 36π when r = 3 (at t = 0) and V = 288π when r = 6 (at t = 3), so $$\frac{d V}{d t}$$ = (288π – 36π)/3 = 84π and V = 36π + 84πt. Then r³ = $$\frac{3 V}{4 \pi}$$ = 27 + 63t.
Hence, the radius of the spherical balloon after t seconds is $$(63 t+27)^{\frac{1}{3}}$$ units.

Question 10.
A person's assets start reducing in such a way that the rate of reduction of assets is proportional to the square root of the assets existing at that moment. If the assets at the beginning are ₹ 10 lakhs and they dwindle down to ₹ 10,000 after 2 years, show that the person will be bankrupt in 2$$\frac{2}{9}$$ years from the start.
Solution:
Let x be the assets of the person at time t years. Then the rate of reduction is $$\frac{d x}{d t}$$, which is proportional to √x.
∴ $$\frac{d x}{d t}$$ ∝ √x
∴ $$\frac{d x}{d t}$$ = -k√x, where k > 0
∴ $$\frac{d x}{\sqrt{x}}$$ = -k dt
Integrating both sides, we get
$$\int x^{-\frac{1}{2}} d x$$ = -k∫dt
∴ $$\frac{x^{\frac{1}{2}}}{\left(\frac{1}{2}\right)}$$ = -kt + c
∴ 2√x = -kt + c
At the beginning, i.e. at t = 0, x = 10,00,000:
2√10,00,000 = -k(0) + c ∴ c = 2000
∴ 2√x = -kt + 2000 ……..(1)
Also, when t = 2, x = 10,000:
2√10,000 = -2k + 2000 ∴ 2k = 1800 ∴ k = 900
∴ (1) becomes 2√x = -900t + 2000
When the person is bankrupt, x = 0:
0 = -900t + 2000 ∴ 900t = 2000 ∴ t = $$\frac{20}{9}=2 \frac{2}{9}$$
Hence, the person will be bankrupt in $$2 \frac{2}{9}$$ years.
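Both answers can be verified numerically. The sketch below (the helper names are mine) reconstructs the balloon's linear volume growth from the two given radii, and the asset decay from the fitted constants c = 2000 and k = 900:

```python
import math

# Q9: V grows at a constant rate; r(0) = 3, r(3) = 6, V = (4/3)*pi*r^3.
V0 = (4 / 3) * math.pi * 27        # 36*pi at t = 0
V3 = (4 / 3) * math.pi * 216       # 288*pi at t = 3
rate = (V3 - V0) / 3               # 84*pi per second

def radius(t):
    """Radius at time t; algebraically equal to (63t + 27)^(1/3)."""
    V = V0 + rate * t
    return (3 * V / (4 * math.pi)) ** (1 / 3)

assert abs(radius(0) - 3) < 1e-9
assert abs(radius(3) - 6) < 1e-9
assert abs(radius(1) - 90 ** (1 / 3)) < 1e-9   # (63*1 + 27)^(1/3)

# Q10: 2*sqrt(x) = -900t + 2000, so x(t) = ((2000 - 900t)/2)^2.
c, k = 2000, 900

def assets(t):
    return ((c - k * t) / 2) ** 2

assert assets(0) == 1_000_000      # 10 lakhs at the start
assert assets(2) == 10_000         # 10,000 after 2 years
t_bankrupt = c / k                 # x = 0 when 900t = 2000
assert abs(t_bankrupt - (2 + 2 / 9)) < 1e-12
```

The asserts confirm the fitted constants reproduce both boundary conditions in each problem, and that bankruptcy occurs at exactly t = 20/9 years.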
http://cms.math.ca/cjm/msc/46B07
The behaviour of Legendre and ultraspherical polynomials in $L_p$-spaces

We consider the analogue of the $\Lambda(p)$-problem for subsets of the Legendre polynomials or more general ultraspherical polynomials. We obtain the "best possible" result that if $2 < p$ …

Categories: 42C10, 33C45, 46B07
https://www.transtutors.com/questions/statistics-667290.htm
# statistics

Assignment Objectives: Construct a scatter plot for two variables, compute the correlation coefficient, compute the formula for the regression equation.

Purpose: Use the regression equation to predict, interpret a correlation coefficient, define independent and dependent variables.

Assignment Description: This week we will use the data collected in week one for height and length of foot. We will determine whether there is a significant relationship between the two variables.

1. Create a scatter plot for height and length of foot. We will assume here that height influences length of foot, so height will be the independent variable. Does it appear from the plot that there is a relationship between the two variables? What type of relationship would that be?
2. Calculate the correlation coefficient for the two variables. How would you interpret this measure?
3. What would be the hypotheses to test this relationship? Is the relationship significant?
4. Find the regression equation to predict length of foot using height. Write it in slope-intercept format.
5. Use the regression equation to predict a foot length for a height of 62 inches.

Parameters: Each question will be worth 10 points.
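Steps 2, 4 and 5 can be carried out in a few lines. The heights and foot lengths below are invented sample values for illustration; the actual data collected in week one is not reproduced here:

```python
# Pearson correlation and least-squares regression computed by hand.
# NOTE: these heights (inches) and foot lengths (inches) are made-up values.
heights = [60, 62, 64, 66, 68, 70, 72]
feet = [8.7, 9.0, 9.2, 9.6, 9.9, 10.3, 10.6]

n = len(heights)
mx = sum(heights) / n
my = sum(feet) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(heights, feet))
sxx = sum((x - mx) ** 2 for x in heights)
syy = sum((y - my) ** 2 for y in feet)

r = sxy / (sxx * syy) ** 0.5   # correlation coefficient (step 2)
b = sxy / sxx                  # slope of the regression line (step 4)
a = my - b * mx                # intercept, so y-hat = b*x + a

print(f"r = {r:.3f}")          # close to 1: strong positive linear relationship
print(f"y-hat = {b:.3f}x + {a:.3f}")
print(f"predicted foot length at 62 in: {b * 62 + a:.2f}")  # step 5
```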
https://mathoverflow.net/questions/154766/rigidity-vs-super-rigidity-of-representations-of-k%C3%A4hler-surface-groups
# Rigidity vs Super-rigidity of representations (of Kähler/surface groups)

In the literature there are several definitions of "rigidity" (or "super-rigidity") of representations, adapted to the circumstances. I wonder what the relations between them are; I apologize in advance if the answers are well known, but after some research I could not find them. I have singled out a few definitions for concreteness.

Notation first: let $\Gamma = \pi_1(X, x_0)$ where $X$ is Kähler, and $G$ a (semi-)simple connected algebraic group.

1. (classic rigidity): Every $\phi$ close to $\rho$ (in the analytic topology of $\text{Hom}(\Gamma, G)$) is conjugate to $\rho$ by an element of $G$.
2. (super-rigidity, e.g. Margulis): The harmonic $\rho$-equivariant map $f \colon \tilde{X} \to G/K$ is totally geodesic.
3. (Hermitian super-rigidity, e.g. Siu): Here we suppose $G/K$ is Hermitian symmetric. Then $f$ as above is holomorphic.
4. (from Kim-Pansu, Duke Math. J., to appear): (Here we may need to suppose $X$ to be a surface of high enough genus.) $\rho$ is rigid if every $\phi$ close enough to $\rho$ is not smooth in $\text{Hom}(\Gamma, G)$ (equivalently, under the assumption on $X$: not Zariski-dense).

The last one is a bit "ad hoc" to their situation, since it allows them to prove that $\text{Hom}(\Gamma, G)$ splits into connected components, either entirely rigid or where Zariski-dense points form a dense subset. I think that 3. has been introduced as an intermediate step towards 2. (when $\tilde{X}$ is a Hermitian symmetric space), but I do not know how general this is.

To me, the only clear implication is 1. $\implies$ 4. (if $\phi_n \to \rho$ are smooth, then $\dim T_{\phi_n}\text{Hom}(\Gamma, G) = vdim = (1-\chi(X)) \dim(G)$, but if a neighborhood of $\phi_n$ is conjugate to $\rho$ then $\dim T_{\phi_n}\text{Hom}(\Gamma, G) \leq \dim G$). I also think that despite the "super-" name there is a good chance that 1. $\implies$ 3.
(at least if $X$ is a surface); so, main question:

Is there any implication between 1. and 3.? Or a counter-example?

More for general-knowledge purposes:

Is there any other general implication / counterexample of implications between the points above? For example, how generally does 3. $\implies$ 2. hold? Is 4. much weaker than any other one?

Of course, I would also be happy with answers for $\Gamma$ a cocompact lattice in $H$ and $\tilde{X} = H/K'$, or even just $X$ a Riemann surface.
https://arxiv.org/abs/0704.1269
cond-mat.dis-nn

Title: Phase Transitions in the Coloring of Random Graphs

Abstract: We consider the problem of coloring the vertices of a large sparse random graph with a given number of colors so that no adjacent vertices have the same color. Using the cavity method, we present a detailed and systematic analytical study of the space of proper colorings (solutions). We show that for a fixed number of colors and as the average vertex degree (number of constraints) increases, the set of solutions undergoes several phase transitions similar to those observed in the mean field theory of glasses. First, at the clustering transition, the entropically dominant part of the phase space decomposes into an exponential number of pure states so that beyond this transition a uniform sampling of solutions becomes hard. Afterward, the space of solutions condenses over a finite number of the largest states and consequently the total entropy of solutions becomes smaller than the annealed one. Another transition takes place when in all the entropically dominant states a finite fraction of nodes freezes so that each of these nodes is allowed a single color in all the solutions inside the state. Eventually, above the coloring threshold, no more solutions are available. We compute all the critical connectivities for Erdős–Rényi and regular random graphs and determine their asymptotic values for a large number of colors. Finally, we discuss the algorithmic consequences of our findings. We argue that the onset of computational hardness is not associated with the clustering transition and we suggest instead that the freezing transition might be the relevant phenomenon. We also discuss the performance of a simple local Walk-COL algorithm and of the belief propagation algorithm in the light of our results.
Comments: 36 pages, 15 figures
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech); Computational Complexity (cs.CC)
Journal reference: Phys. Rev. E 76, 031131 (2007)
DOI: 10.1103/PhysRevE.76.031131
Cite as: arXiv:0704.1269 [cond-mat.dis-nn] (or arXiv:0704.1269v2 for this version)

Submission history
From: Lenka Zdeborova
[v1] Tue, 10 Apr 2007 16:42:15 GMT (167kb)
[v2] Wed, 20 Jun 2007 15:26:20 GMT (168kb)
https://crypto.stackexchange.com/questions/43660/rsa-how-to-compute-the-number-of-possible-plaintexts-if-e-is-not-coprime-to
# RSA: How to compute the number of possible plaintexts if $e$ is not coprime to $\varphi(n)$?

I'm studying for an exam next week, and there are two questions that my friend and I don't understand:

Question 1: Let $p,q$ be big prime numbers and $n=pq$. Let $e$ be a number between $1$ and $\varphi(n)$ such that $e\mid(p-1)$, $e\mid(q-1)$. For a plaintext $M$, the ciphertext is $C=M^e \bmod n$. How many different $M$s will be ciphered to the same $C$? Edit: what is the maximum number of messages $M$ that will yield the same $C$? Why is the answer $e^2$?

Question 2: Let $p,q$ be big prime numbers and $n=pq$. Let $e$ be a number between $1$ and $\varphi(n)$ such that $\gcd(e,p-1)=a > 1$ and $\gcd(e,q-1)=1$. For a plaintext $M$, the ciphertext is $C=M^e \bmod n$. How many different $M$s will be ciphered to the same $C$? Why is the answer $a$?

• You might want to clarify which set $M$ is chosen from. For example, if you allow $\gcd(M,n)\neq1$, a counterexample to the first question's claimed answer is given by $n=15$, $e=2$, $M=6$. – yyyyyyy Feb 7 '17 at 13:49
• Fixed the questions; for C = 4, the following M yield the same C: 13, 8, 7, 2 – CSE371 Feb 7 '17 at 14:05
• Since you're learning for an exam, the answers for both those questions are based on the number of roots or the order of subgroups (use Lagrange's theorem or the fundamental theorem of algebra), combined with the Chinese Remainder Theorem to assemble solutions mod $n$. – tylo Feb 7 '17 at 14:44

## 2 Answers

Both questions can be answered using the same kind of argument. First, observe that in both cases we are really only interested in the number of $e$th roots of unity in $\mathbb Z/n$: Any $e$th root of unity can be multiplied onto some $M$ without changing the value of $M^e\bmod n$, and two different $M$ with equal $M^e\bmod n$ differ by an $e$th root of unity. Thus we restrict our attention to the case $M=1$. For succinctness, I will sometimes write only root for "$e$th root of unity".
The Chinese remainder theorem gives an isomorphism of rings $$\pi\colon\; \mathbb Z/n \cong \mathbb Z/p \times \mathbb Z/q \text.$$ The image of $x^e\bmod n$ under $\pi$ is $(x^e\bmod p,x^e\bmod q)$. Some element $x\in\mathbb Z/n$ is a root if and only if both components of $\pi(x)$ are roots in $\mathbb Z/p$ and $\mathbb Z/q$. In other words: Each root in $\mathbb Z/n$ "consists" of roots in $\mathbb Z/p$ and $\mathbb Z/q$, and vice-versa. Thus, if $\mathbb Z/p$ has $a$ roots and $\mathbb Z/q$ has $b$, the ring $\mathbb Z/n$ has $ab$ roots.

Now note that the order of the cyclic group $(\mathbb Z/p)^\ast$ is $p-1$, thus if $e$ divides $p-1$, there must be an element $r$ of multiplicative order $e$ in $\mathbb Z/p$. Its powers $r^1,r^2,\dots,r^{e-1}$ are nontrivial $e$th roots of unity in $\mathbb Z/p$, thus (including $1$) there are $e$ roots in $\mathbb Z/p$. There cannot be more than $e$ roots since $\mathbb Z/p$ is a field. Of course, the same holds for $q$.

Armed with that knowledge, let's tackle question 1: If $e$ divides both $p-1$ and $q-1$, there must exist $e$ different roots in both $\mathbb Z/p$ and $\mathbb Z/q$. Thus, by the above, there are $e^2$ different $e$th roots of unity in $\mathbb Z/n$.

As to question 2: Since $(\mathbb Z/q)^\ast$ has order $q-1$ and $\gcd(e,q-1)=1$, the map $x\mapsto x^e\bmod q$ is a permutation on $\mathbb Z/q$. In particular, there are no nontrivial $e$th roots of unity in $\mathbb Z/q$. Concerning $\mathbb Z/p$, note that the map $x\mapsto x^{e/a}\bmod p$ is a permutation since $e/a$ is coprime to $p-1$. Thus, $1=x^e=(x^a)^{e/a}$ in $\mathbb Z/p$ if and only if $x^a=1$. As $a\mid p-1$, we can apply the above results (with $a$ in place of $e$) to conclude that $\mathbb Z/p$ has $a$ roots, thus $\mathbb Z/n$ has $a\cdot 1=a$.

For arbitrary givens $$n$$, $$e$$, $$c$$ with $$e>0$$ and $$0\le c<n$$, we want to solve for $$m$$ with $$0\le m<n$$ the equation $$c=m^e\bmod n$$.
We assume $$n=p\,q$$ with $$p$$ and $$q$$ distinct primes as in standard RSA. All quantities are integers. $$p$$ and $$q$$ are distinct primes, thus coprime, thus by the Chinese Remainder Theorem we can:

• Solve for $$0\le x<p$$ the equation $$c\equiv x^e\pmod p\quad$$🄐
• Solve for $$0\le y<q$$ the equation $$c\equiv y^e\pmod q\quad$$🄑
• Use each possible $$(x,y)$$ combination to get all $$m=(q^{-1}(x-y)\bmod p)\,q+y$$.

Note: Most actual implementations of RSA decryption follow these steps, because that requires several times less computational effort than computing $$m=c^d\bmod n$$ directly, and parallelizes better on top of that.

Each $$(x,y)$$ leads to a unique $$m$$, with $$0\le m<n$$. Thus the number of possible messages $$m$$ for a given ciphertext $$c$$ is $$u\,v$$, where $$u$$ [resp. $$v\,$$] is the number of solutions to 🄐 [resp. 🄑 ]. Depending on conditions about $$p$$, $$e$$, $$c$$ that we will detail, $$u$$ is one of $$\gcd(e,p-1)$$, $$1$$, or $$0$$ (and similar for $$v$$). The 3×3 cases for $$(u,v)$$ reduce to at most 5 for the numbers $$u\,v$$ of solutions for $$m$$:

1. $$\;\gcd(e,p-1)\gcd(e,q-1)\quad$$ [when $$\gcd(c,n)=1\,$$].
2. $$\;\gcd(e,p-1)\quad$$ [when $$q$$ divides $$c$$; value can conflate with case 1]
3. $$\;\gcd(e,q-1)\quad$$ [when $$p$$ divides $$c$$; value can conflate with case 1]
4. $$\;1\quad$$ [when $$c=0$$; value can conflate with cases 1/2/3]
5. $$\;0\quad$$ [can occur only when $$c$$ is not obtained by actual encryption]

In normal RSA, the condition $$\gcd(e,\varphi(n))=1$$ implies $$u=v=1$$, therefore a single $$m$$ is possible for every $$c$$. Said otherwise, $$\gcd(e,p-1)=1=\gcd(e,q-1)$$, cases 1/2/3/4 conflate to $$1$$, and the last case can't occur.

In this section we detail determining the number $$u$$ of distinct solutions for $$0\le x<p$$ of the equation $$c\equiv x^e\pmod p$$; and solving for $$x$$ in some cases. If $$c\bmod p=0$$, then the only solution is $$x=0$$, and $$u=1$$.
There remains to handle $$c\bmod p\ne 0$$, and we assume that. Since $$p$$ is prime, $$\gcd(c,p)=1$$. Thus by Fermat's Little Theorem $$x^{p-1}\equiv1\pmod p$$, hence $$x^e\equiv x^{e\bmod(p-1)}\pmod p$$.

If $$e\bmod(p-1)=0$$, then the equation $$c\equiv x^e\pmod p$$ becomes $$c\equiv 1\pmod p$$. If that holds, there are $$p-1$$ solutions with $$1\le x<p$$; and $$\gcd(e,p-1)=p-1$$, thus $$u=\gcd(e,p-1)$$ (a case we'll meet later). Otherwise $$u=0$$ (that can't happen if $$c$$ was actually obtained by computing $$m^e\bmod n\,$$).

There remains to handle $$e\bmod(p-1)\ne0$$, and we assume that. Compute $$r=\gcd(p-1,e)$$, then $$f=e/r$$. Define the auxiliary unknown $$z=x^r\bmod p$$. The equation $$c\equiv x^e\pmod p$$ becomes $$z^f\equiv c\pmod{p}$$, with $$\gcd(f,p-1)=1$$. By the FLT that has (modulo $$p$$) a single solution $$z=c^{f^{-1}\bmod(p-1)}\bmod p$$.

When $$r=1$$, we have found the only solution $$x=z$$. That's the case in normal RSA. But in the question we want to handle $$\gcd(e,\varphi(n))>1$$, thus $$r>1$$ will hold while solving for 🄐 and/or 🄑.

There remains to solve for $$0<x<p$$ the equation $$z=x^r\bmod p$$, where $$p$$, $$r$$ and $$z$$ are known, $$p$$ is prime, $$r$$ divides $$p-1$$, it holds $$2\le r<p$$, and $$0<z<p$$.

• if $$z^{(p-1)/r}\bmod p\ne1$$ then there is (per FLT) no solution, thus $$u=0$$.
• otherwise (without proof) there are $$r$$ distinct solutions, thus $$u=\gcd(e,p-1)$$

[To be expanded maybe: when $$\gcd(e,p-1)$$ is neither $$1$$ nor $$p-1$$, we have not told how to compute the solutions $$x$$ in the general case. Some of it is covered here].

• I believe that the answer you're trying to get to is, assuming $c$ is relatively prime to $pq$, then $c = m^e \bmod pq$ has either $\gcd( p-1, e ) \gcd( q-1, e )$ solutions or none. Complications that you've considered: what if $e$ is composite? What if both $p-1, q-1$ are not r.p. to $e$? – poncho Aug 27 at 18:16
• @poncho: Yes. I've added that, and the conditions. – fgrieu Aug 27 at 19:18
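Both claimed counts can be checked by brute force on toy primes (the parameters below are illustrative choices, far smaller than the question's "big prime numbers"), restricting to $M$ coprime to $n$ as the comments require:

```python
# Brute-force preimage counting for m -> m^e mod n over the units mod n.
from math import gcd
from collections import Counter

def preimage_counts(p, q, e):
    """For each ciphertext c of a unit m, count how many units map to it."""
    n = p * q
    return Counter(pow(m, e, n) for m in range(1, n) if gcd(m, n) == 1)

# Question 1: e = 3 divides both p-1 = 12 and q-1 = 6,
# so every such ciphertext has e^2 = 9 preimages.
assert set(preimage_counts(13, 7, 3).values()) == {9}

# Question 2: a = gcd(3, 12) = 3 while gcd(3, 10) = 1,
# so every such ciphertext has exactly a = 3 preimages.
assert set(preimage_counts(13, 11, 3).values()) == {3}
```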
https://projecteuclid.org/euclid.bbms/1307452077
## Bulletin of the Belgian Mathematical Society - Simon Stevin

### Constant angle surfaces in Minkowski space

#### Abstract

A constant angle surface in Minkowski space is a spacelike surface whose unit normal vector field makes a constant hyperbolic angle with a fixed timelike vector. In this work we study and classify these surfaces. In particular, we show that they are flat. Next we prove that a tangent developable surface (resp. cylinder, cone) is a constant angle surface if and only if the generating curve is a helix (resp. a straight line, a circle).

#### Article information

Source: Bull. Belg. Math. Soc. Simon Stevin, Volume 18, Number 2 (2011), 271-286.
Dates: First available in Project Euclid: 7 June 2011
Permanent link: https://projecteuclid.org/euclid.bbms/1307452077
Digital Object Identifier: doi:10.36045/bbms/1307452077
Mathematical Reviews number (MathSciNet): MR2847763
Zentralblatt MATH identifier: 1220.53024
http://tex.stackexchange.com/questions/134278/overriding-the-paragraph-style-in-the-lncs-document-class
# Overriding the paragraph style in the LNCS document class

I am using the LNCS document class to typeset a paper. I would like to keep most of this document style, but I do not like how paragraphs are typeset. The heading of the paragraph is in italics, which does not provide enough contrast with the rest of the text. I would like to change this to bold text. How can I override the paragraph style while being minimally invasive?

- Could you provide a minimal working example (MWE) of your paragraph usage that displays the current, unwanted behaviour? It avoids having to re-invent the wheel to get at the point you're at. – Werner Sep 20 '13 at 15:33

Are you planning on submitting your document to LNCS? If you are, it's best not to modify their style. – Nicola Talbot Sep 20 '13 at 15:34

`\usepackage{etoolbox}\patchcmd{\paragraph}{\itshape}{\bfseries\boldmath}{}{}` should be sufficient. But I have a couple of doubts: (1) if you're using the class for submitting to Springer, don't modify the style; (2) `\paragraph` is the fourth level of sectioning: are you sure you want to use it and give it much prominence? Moreover, you'd get the same as `\subsubsection`. – egreg Sep 20 '13 at 15:38

@NicolaTalbot Not submitting to Springer and the LNCS style is not required, so patching is okay. – clstaudt Sep 20 '13 at 15:49

@clstaudt `\subsubsection` already does that. – egreg Sep 20 '13 at 15:56

```
\usepackage{etoolbox}
\patchcmd{\paragraph}{\itshape}{\bfseries\boldmath}{}{}
```

However, this will make the typesetting of `\paragraph` titles very similar to what's already done for `\subsubsection`. As far as I understand, the `llncs` class has three levels of sectioning, corresponding to `\section`, `\subsection` and `\subsubsection`. The last one is unnumbered by default. The example document uses `\paragraph` for comments about proofs or similar things, so just as a "special paragraph" of text.
I believe this is good usage, but I wouldn't give these special paragraphs more prominence than they already get with the title in italics. Trust me: a paragraph starting with a title in italics and preceded by some vertical space is noticeable. Rather, it's more questionable that there is no vertical space at the end of this special paragraph. You can define a `comments` environment that remedies this situation:

```
\newenvironment{comments}[1]
  {\paragraph{#1}}
  {\par\medskip}% end of environment: add the missing vertical space
```

Don't abuse `\subsubsection` for this purpose.
https://zjkmxy.github.io/posts/2020/11/algebra-misc/
# Algebra Miscs

Published:

Miscellaneous things that are taught in class but not written in the notes of MATH 210.

# Functors

## Functor as a Colimit

For any small category $\mathcal{C}$ and any functor $F: \mathcal{C}^{op}\to Sets$, $F$ can be written as a colimit of representable functors $\hat{X} = \mathrm{Mor}_{\mathcal{C}}(-,X)$.

### Proof:

The trick here is to pick a diagram $\mathcal{I}$. The TA said "stop thinking and just write for category problems", so let's just write down what we can do.

By the Yoneda lemma, there exists a natural transformation $\alpha: \hat{X}\Rightarrow F$ if and only if $F(X)$ is not empty. Therefore, for each nonempty $F(X)$ we can take an arbitrary $u_X\in F(X)$. To satisfy the commutativity requirements we need to have $u_X = Ff(u_Y)$ for $f: X\to Y$. There is no guarantee that we can pick such elements for every $F(X)$, so let's just focus on the part we can pick. Define a natural transformation $\lambda^X: \hat{X}\Rightarrow F$ as $\alpha^{u_X}$ in the Yoneda lemma:

$\begin{eqnarray} \lambda^X_Y: \hat{X}(Y)\to F(Y) \nonumber \\ f \mapsto Ff(u_X) \end{eqnarray}$

$F$ is a cocone, because for all $g: X\to Y$, $\lambda^Y \circ g_* = \lambda^X$:

$\lambda^Y_Z\circ g_*(f) = \lambda^Y_Z(g\circ f) = Ff\circ Fg(u_Y) = Ff(u_X) = \lambda_Z^X(f).$

Clearly, this does not yet exhibit $F$ as a colimit:

• We did not handle every $F(X)$.
• We only picked one special element from each $F(X)$, which cannot capture the whole structure of $F$.

So let's fix this. What we do is pick all possible $u_X$, instead of one element (i.e., use a comma category). Let $\mathcal{J}$ be the category of all pairs $(Y, x)$ s.t. $x\in F(Y)$, with morphisms $(Y, x)\to (Y',x')$ for $f: Y'\to Y$ in $\mathcal{C}$ and $Ff(x) = x'$ in $\mathrm{Set}$. Note that multiple objects in the diagram can be mapped to one in the category. Therefore, we define this map $M: \mathcal{J}\to\mathrm{Fun}(\mathcal{C}^{op}, \mathrm{Set})$ as $M(Y, x) = \hat{Y}$. Then, there are $|F(X)|$ many pairs mapped to the same $\hat{X}$.
We need to specify a $\lambda_X: \hat{X}\Rightarrow F$ for each of these copies. Naturally, just assign $\lambda_X = \alpha^{x}$ for $(X, x) \to \hat{X} \to F$. Recall how we defined $\alpha^x: \hat{X}\Rightarrow F$ in the proof of the Yoneda lemma:

$\begin{eqnarray} \alpha^x_Y: \hat{X}(Y) = \mathrm{Mor}_{\mathcal{C}}(Y, X)\to F(Y) \nonumber \\ f \mapsto Ff(x) \end{eqnarray}$

There is another way of thinking about this. Let $\mathrm{Fun}(\mathcal{C}^{op}, \mathrm{Set}) / F$ denote the category of objects $G\Rightarrow F$, with morphisms $G_1\to G_2$ if $G_1\Rightarrow G_2\Rightarrow F$ commutes with $G_1\Rightarrow F$. Let $\mathcal{J}$ be the full subcategory of $\hat{X}\Rightarrow F$ for all $X\in\mathcal{C}$. There are $|F(X)|$ many of them, as a natural transformation $\alpha$ is uniquely decided by $\alpha_X(\mathrm{id}_X)$, which can be any element in $F(X)$. Let $u: \mathcal{J}\to\mathrm{Fun}(\mathcal{C}^{op}, \mathrm{Set})$ be the forgetful functor:

$u(\hat{X}\Rightarrow F) = \hat{X}.$

Then, $u$ is the map from the diagram to representable functors. $F$ is a union of multiple cocones, so it is a cocone, which can be proved exactly as we did at the beginning. Now, we need to show that $F$ is a colimit.

Suppose $F': \mathcal{C}^{op} \to \mathrm{Set}$ is another cocone, with natural transformations $\lambda'^{X}: \hat{X}\Rightarrow F'$ for the diagram object $(X, x)$ in $\mathcal{J}$. By the Yoneda lemma, for all $X$ there exists an element $u'_X\in F'(X)$ (namely $u'_X = \lambda'^X_X(\mathrm{id}_X)$) s.t. for all $f: Y \to X$, $\lambda'^X_Y(f) = F'f(u'_X)$. Now, we can define a natural transformation $\beta: F \Rightarrow F'$ as $\beta_X(x) = u'_X$, i.e. mapping the deciding element to the deciding element. Note that $u'_X$ depends on the pair $(X, x)$, and there is one for each $x\in F(X)$. Since for any $X,Y\in\mathcal{C}$ and $f: Y\to X$,

$\beta_Y\circ Ff(x) = u'_Y = F'f(u'_X) = F'f\circ \beta_X(x),$

$\beta$ is natural. Clearly $\beta$ is unique.
Thus, $F$ is a colimit.

## Adjoint Functor Theorem

Given a small-complete category $\mathcal{D}$ with small morphism sets, a functor $R:\mathcal{D}\to\mathcal{C}$ has a left adjoint if and only if it preserves all small limits and satisfies the following solution set condition.

Solution Set Condition: For each $X\in\mathcal{C}$ there is a small set $I$ and $g_i: X\to R(Y_i)$ s.t. every $h: X\to R(Y)$ can be written as $h = R(f)\circ g_i$ for some $f: Y_i\to Y$.

### Proof

If $R$ has a left adjoint $L:\mathcal{C}\to\mathcal{D}$, it must preserve all limits, and the natural transformation $\eta: \mathrm{Id}_{\mathcal{C}}\Rightarrow RL$ satisfies the solution set condition, with $I$ the one-point set. This is because for all $h: X\to R(Y)$, we can find a unique $\alpha_{X,Y}^{-1}(h) = f: L(X)\to Y$ s.t. $h = R(f)\circ \eta_X$.

Conversely, given those conditions, it suffices to construct for each $X\in\mathcal{C}$ a universal arrow $\eta_X: X\to R(Y_X)$ from $X$ to $R$, where universality means that for each $h: X\to R(Y)$, we can find a unique $f: Y_X\to Y$ s.t. $h = R(f)\circ \eta_X$. Then, $R$ has a left adjoint $L$, by $L(X) = Y_X$ and $L(h)$ being the unique $f$. In other words, we need to find an initial object $\langle\eta_X, Y_X\rangle$ in the comma category $(X\downarrow R)$ for all $X$.

Here, given $X\in\mathcal{C}$ and $F:\mathcal{D}\to\mathcal{C}$, the comma category $(X\downarrow F)$ is defined as the category with objects $\langle f, Y\rangle$ s.t. $f: X\to F(Y)$. Morphisms $h: \langle f, Y\rangle\to\langle f', Y'\rangle$ are $h: Y\to Y'$ s.t. $f'= F(h)\circ f$.

Lemma: If $F:\mathcal{D}\to\mathcal{C}$ preserves all small limits, then for each $X\in\mathcal{C}$, the projection:

$\begin{eqnarray} Q_X: (X\downarrow F) &\to& \mathcal{D} \nonumber \\ (X\to F(Y)) &\mapsto& Y \end{eqnarray}$

creates all small limits in the comma category.

Proof of Lemma: Suppose $f_i: X\to F(Y_i)$ is an $I$-indexed family of objects in the comma category.
Let $(\lim_I Y_i, \lambda)$ be a limit in $\mathcal{D}$, and let $\theta: Y_j\to Y_k$ be a morphism of the diagram, i.e. $f_k = F(\theta)\circ f_j$. Since $F$ preserves limits, $F(\lim_I Y_i)$ is a limit of the $F(Y_j)$ and $F(\lambda_k) = F(\theta)\circ F(\lambda_j)$. Then, there is a unique $f: X\to F(\lim_I Y_i)$ s.t. $f_i = F(\lambda_i)\circ f$ for all $i\in I$. Thus, $f = \lim_I f_i$.

Now, $(X\downarrow R)$ is a small-complete category satisfying the solution set condition: there is a small set $I$ and objects $g_i \in (X\downarrow R)$ s.t. for every $h\in (X\downarrow R)$ there exists a morphism $g_i\to h$ for some $i\in I$. It suffices to prove $(X\downarrow R)$ has an initial object. This is possible because $(X\downarrow R)$ is small-complete: we can construct the product of all the $g_i$ and then the equalizer of all endomorphisms of that product; the equalizer is an initial object. Actually this is what we did in class to construct the free group functor. Check Mac Lane, p. 123.

# Sylow Theorem and Finite Groups

## Notations

• ${}^{g}h = ghg^{-1}$: conjugation – 共軛(やく)
• $\mathrm{orb}_G(x) = G\cdot x = \{ g\cdot x: g\in G \}$: orbit – 軌道
• $\mathrm{stab}_G(x) = \{g\in G: g\cdot x = x\}$: stabilizer – 固定部分群
• $X^G = \{ x\in X: g\cdot x = x, \forall g\in G \}$: fixed points – 不変元
  • If $G$ is a finite $p$-group, $|X| \equiv |X^G| \ (\text{mod } p)$.
• $Z(G) = \{ g\in G: xg = gx, \forall x\in G\}$: center – 中心
  • If $G$ is a nontrivial finite $p$-group, $p \mid |Z(G)|$ and thus $Z(G)$ is nontrivial.
• $n = v_p(|G|)$: the highest power of $p$ that divides the order of $G$ – $G$の位数に於ける$p$の重複度

## Cauchy’s Lemma

If $p$ divides $|G|$, then $G$ contains an element of order $p$.

## Sylow Theorems

• Existence: a $p$-Sylow subgroup always exists.
• (a) Every $p$-subgroup is contained in a $p$-Sylow subgroup.
• (b) Any two $p$-Sylow subgroups are conjugate in $G$.
• (c) The number of $p$-Sylow subgroups is congruent to $1$ modulo $p$.
• Corollary: A $p$-Sylow subgroup is unique if and only if it is normal in $G$.

### Proof

Existence: Inductive Proof: Prove by mathematical induction on the order.
If there exists a proper subgroup $H < G$ of index prime to $p$, then by induction $H$ has a $p$-Sylow subgroup, which is also a $p$-Sylow subgroup of $G$. If all proper subgroups have index divisible by $p$, then $|Z(G)| = |G^G| \equiv |G| \equiv 0\ (\text{mod }p),$ where the action is conjugation. Thus, by Cauchy’s lemma applied to $Z(G)$, there exists a central element $x$ of order $p$, so $\langle x\rangle$ is normal. If $v_p(|G|) = 1$, then $\langle x\rangle$ is a $p$-Sylow subgroup. If $v_p(|G|) > 1$, then $G / \langle x\rangle$ has a $p$-Sylow subgroup $\bar{P} = P / \langle x\rangle$, and its preimage $P$ is a $p$-Sylow subgroup of $G$.

Constructive Proof: Let $Y$ be the set of all subsets of $G$ of size $p^{n}$, where $n = v_p(|G|)$, and let $G$ act on $Y$ via left multiplication. By Lucas’s theorem, $|Y| = {|G| \choose p^n} \equiv {|G|/p^n \choose 1}{0 \choose 0}^{n} = |G|/p^n \not\equiv 0 \ (\text{mod } p).$ Since $|Y| = \sum [G : \mathrm{stab}_G(X)]$, summing over orbit representatives $X$, there exists some $X$ s.t. $[G : \mathrm{stab}_G(X)]$ is not divisible by $p$. Let $H = \mathrm{stab}_G(X)$. Since $[G : \mathrm{stab}_G(X)] = \frac{|G|}{|H|}$, $|H|$ is divisible by $p^n$. By the definition of stabilizer, $Hx\subseteq X$ for all $x\in X$. Then, $|H| = |Hx| \leq |X| = p^n$. Thus, $|H| = p^n$.

Key Remark: If $H\leq G$ is a $p$-group and $P\leq G$ is a $p$-Sylow subgroup s.t. $H$ normalizes $P$, then $H\leq P$. Indeed, consider $\left\langle H\cup P \right\rangle = HP$. By the second isomorphism theorem, $[HP: P] = [H: H\cap P]$ is a power of $p$. Thus, $HP$ is a $p$-subgroup containing $P$. Since $P$ is a $p$-Sylow subgroup, $HP = P$, which gives $[H: H\cap P] = 1$ and $H\leq P$.

(a) & (b): Let $P_0$ be one $p$-Sylow subgroup and $X = \{ {}^gP_0: g\in G\}$ be the set of its conjugates. Clearly, $X$ is a set of $p$-Sylow subgroups. Suppose $H\leq G$ is a $p$-group and let $H$ act on $X$ via conjugation. Then $|X^H| \equiv |X| = [G : \mathrm{stab}_G(P_0)] = [G: N_G(P_0)] \ (\text{mod } p).$ Since $P_0\leq N_G(P_0)$, $[G: N_G(P_0)]$ is a factor of $[G: P_0]$, and thus prime to $p$. Then, $|X^H|$ is prime to $p$, hence nonzero. Thus, there exists $P\in X^H$ which is conjugate to $P_0$ and normalized by $H$.
Note that $P$ is a $p$-Sylow subgroup, so by the key remark, $H\leq P$.

(c): Consider the action of $P_0$ on $X$ by conjugation. By the key remark, $P_0$ is a subgroup of any fixed point, so the only fixed point is $P_0$ itself. Hence, $|X| \equiv |X^{P_0}| = 1\ (\text{mod } p)$. By (b), $X$ contains all $p$-Sylow subgroups.

Corollary: $X = \{ P \}$ iff. $P$ is normal in $G$.

Remark: $|X|$ divides $|G|$, because it’s the index of a stabilizer.

## Schur–Zassenhaus theorem

Suppose $N\to G \to H$ is a short exact sequence (s.e.s., 短完全系列) s.t. $|N|$ and $|H|$ are coprime. Then, the s.e.s. splits; i.e. $G\cong N\rtimes H$. Moreover, any two subgroups of $G$ of order $|H|$ are conjugate to each other.

A general formulation is $N\unlhd G$ and $H = G / N$ s.t. $|N|$ and $|G / N|$ are coprime. The general proof (especially part 2) is difficult and thus omitted here. For a special case, refer to Prop. 2.7.20 in the Math210 Notes, case (ii).

## Application: Groups of Order 30

### Any group of prime order is cyclic

This follows from the fact that the subgroup generated by any non-trivial element is the whole group.

### The only group of order 15 is cyclic

Since $15 = 3\times 5$ and neither $3$ nor $5$ is congruent to $1$ modulo the other, $G_{15}$ contains exactly one $3$-Sylow subgroup and one $5$-Sylow subgroup, both normal in $G_{15}$. Thus, $G_{15}\cong C_3 \times C_5 = C_{15}$.

### The number of automorphisms of a cyclic group is the Euler function of its order

Since $C_n = \{ 1, c, \ldots, c^{n-1} \}$ is generated by $c$, an automorphism $\varphi$ of it is determined by $\varphi(c)$. Given that every $c^k$ with $k$ coprime to $n$ is a generator, there are $\phi(n)$ automorphisms. In fact, $\mathrm{Aut}(C_n) \cong C_n^{\times}$, where the latter denotes the group of reduced residues modulo $n$. When $n$ is prime, by the existence of primitive roots, $\mathrm{Aut}(C_n) \cong C_{n-1}$.

### There are 4 isomorphism types of order 30

Let $n_p$ denote the number of $p$-Sylow subgroups.
By Sylow, $n_2\in \{1, 3, 5, 15\}$, $n_3\in \{1, 10\}$, $n_5\in \{1, 6\}$. It is impossible to have $n_3 = 10$ and $n_5 = 6$ simultaneously, because otherwise there would be $(3-1)\times 10 = 20$ elements of order $3$ and $(5-1)\times 6 = 24$ elements of order $5$, i.e. $44 > 30 - 1$ non-identity elements. Then, at least one of the Sylow subgroups $P_3$ and $P_5$ is normal in $G$. Thus, by the second isomorphism theorem, $N = P_3P_5\leq G$. By $\gcd(3, 5) = 1$, $P_3\cap P_5 = \{1\}$ and thus $|N| = 15$, so $N\cong C_{15}$. $[G: N] = 2$, so $N$ is normal. Thus, by the Schur–Zassenhaus theorem, $G\cong C_{15} \rtimes C_2$.

Now consider the group action $\alpha$ of $C_2$ on $C_{15}$. We have $\mathrm{Aut}(C_{15}) \cong C_3^{\times}\times C_5^{\times}\cong C_2\times C_4$. Since $1$ in $C_2$ has order 2, its image’s order must divide 2. There are only 4 such elements: $\{(0,0),(1,0),(0,2),(1,2)\}$, or equivalently $\{[1], [11], [4], [14]\}$. Thus, there are 4 isomorphism types. By the definition of $\rtimes$, for $a, b\in C_{15}$ and $h\in C_2$, we have

• $(a, 0)\cdot (b, h) = (a+b, h)$;
• $(a, 1)\cdot (b, h) = (a+\alpha(1)b, 1+h)$.

Now let’s inspect these four types:

• For $\alpha(1) = [1]$, we have $C_{15} \rtimes C_2\cong C_{15} \times C_2 \cong C_{30}$.
  • If $n, m$ are coprime, then $(1,1)\in C_n\times C_m$ is of order $nm$ and thus $C_n\times C_m\cong C_{nm}$.
• For $\alpha(1) = [14] = [-1]$, $C_{15} \rtimes C_2\cong D_{15}$.
  • $D_n \cong C_n\rtimes_{-1} C_2$, with $(p,q)$ mapping to $r^ps^q$.
• For $\alpha(1) = [4]$, the element $(1,1)$ has order $6$ ($(1,1)\to(5,0)\to(6,1)\to(10,0)\to(11,1)\to(0,0)$). Thus, we have $C_{15} \rtimes C_2\cong C_3\times D_{5}$.
  • With $(0,0)\mapsto (0,e)$, $(0,1)\mapsto (0,s)$, $(1,0)\mapsto (1,r)$, $(6,0)\mapsto (0,r)$, $(10,0)\mapsto (1,e)$.
• For $\alpha(1) = [11] = [-4]$, the element $(1,1)$ has order $10$. We have $C_{15} \rtimes C_2\cong C_5\times S_3$.
  • With $(0,0)\mapsto(0,())$, $(6,0)\mapsto(1,())$, $(0,1)\mapsto(0,(12))$, $(5,1)\mapsto(0,(23))$, $(10,0)\mapsto(0,(123))$, $(5,0)\mapsto(0,(132))$.

# Misc on Group

## Five Lemma (5項補題)

It’s hard to show without a figure (>_<)

Let $G_1 \to G_2 \to G_3 \to G_4 \to G_5$ and $H_1 \to H_2 \to H_3 \to H_4 \to H_5$ be exact (完全) at positions 2, 3, 4. Let $f_i: G_i \to H_i$ be homomorphisms making all squares of the diagram commute. Then, if $f_1, f_2, f_4, f_5$ are isomorphisms, so is $f_3$.

### Proof:

Prove by chasing on the two 4-squares. Let $\alpha_i: G_i\to G_{i+1}$ and $\beta_i: H_i\to H_{i+1}$ denote the horizontal homomorphisms.

Claim: $f_3$ is onto:

• Arbitrarily pick $h_3\in H_3$.
• Since $f_4$ is surjective, there exists $g_4\in G_4$ s.t. $f_4(g_4) = \beta_3(h_3)$.
• By exactness at $H_4$, $\beta_4\circ\beta_3(h_3) = 1$.
• By commutativity, $f_5\circ\alpha_4(g_4) = \beta_4\circ f_4(g_4) = \beta_4\circ\beta_3(h_3) = 1$.
• Since $f_5$ is injective, $\alpha_4(g_4) = 1$; thus, by exactness at $G_4$, $g_4\in\mathrm{Im}(\alpha_3)$.
• Pick $g_3\in G_3$ s.t. $\alpha_3(g_3) = g_4$.
• Let $z = f_3(g_3)^{-1}h_3$.
• Then, $\beta_3(z) = f_4(g_4)^{-1}\beta_3(h_3) = 1$.
• By exactness at $H_3$, $z\in \mathrm{Im}(\beta_2)$.
• Since $f_2$ is surjective, there exists $g_2\in G_2$ s.t. $\beta_2\circ f_2(g_2) = z = f_3\circ\alpha_2(g_2)$.
• Thus, $f_3(g_3\cdot\alpha_2(g_2)) = h_3$.

Similarly, $f_3$ is one-to-one. Thus, $f_3$ is an isomorphism.

# References

• Saunders Mac Lane, Categories for the working mathematician, Springer Science & Business Media, 2013.
• Paul Balmer, UCLA MATH210A, Fall 2020.
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=10F20
# AMS eContent Search Results

Matches for: msc=(10F20) AND publication=(all). Results: 1 to 20 of 20 found.

1. J. H. McCabe. The quotient-difference algorithm and the Padé table: an alternative form and a general continued fraction. Math. Comp. 41 (1983) 183–197. MR 701633.
2. K. Alniaçik. On $U_m$-numbers. Proc. Amer. Math. Soc. 85 (1982) 499–505. MR 660590.
3. J. C. Lagarias. Best simultaneous Diophantine approximations. I. Growth rates of best approximation denominators. Trans. Amer. Math. Soc. 272 (1982) 545–554. MR 662052.
4. Don Zagier. On the number of Markoff numbers below a given bound. Math. Comp. 39 (1982) 709–723. MR 669663.
5. D. H. Fowler. Ratio in early Greek mathematics. Bull. Amer. Math. Soc. 1 (1979) 807–846. MR 546311.
6. H. R. P. Ferguson and R. W. Forcade. Generalization of the Euclidean algorithm for real numbers to all dimensions higher than two. Bull. Amer. Math. Soc. 1 (1979) 912–914. MR 546316.
7. Harald Niederreiter. Quasi-Monte Carlo methods and pseudo-random numbers. Bull. Amer. Math. Soc. 84 (1978) 957–1041. MR 508447.
8. Theresa P. Vaughan. A generalization of the simple continued fraction algorithm. Math. Comp. 32 (1978) 537–558. MR 0480367.
9. T. W. Cusick. The Szekeres multidimensional continued fraction. Math. Comp. 31 (1977) 280–317. MR 0429765.
10. Mary E. Gbur. The Markoff spectrum and minima of indefinite binary quadratic forms. Proc. Amer. Math. Soc. 63 (1977) 17–22. MR 0434963.
11. D. Shanks. Table errata: "Regular continued fractions for $\pi$ and $\gamma$" (Math. Comp. 25 (1971), 403); "Rational approximations to $\pi$" (ibid. 25 (1971), 387–392) by K. Y. Choong, D. E. Daykin and C. R. Rathbone. Math. Comp. 30 (1976) 381. MR 0386215.
12. James L. Hlavka. Results on sums of continued fractions. Trans. Amer. Math. Soc. 211 (1975) 123–134. MR 0376545.
13. Melvyn B. Nathanson. Approximation by continued fractions. Proc. Amer. Math. Soc. 45 (1974) 323–324. MR 0349594.
14. Richard B. Lakein. Continued fractions and equivalent complex numbers. Proc. Amer. Math. Soc. 42 (1974) 641–642. MR 0382179.
15. M. D. Hendy. Applications of a continued fraction algorithm to some class number problems. Math. Comp. 28 (1974) 267–277. MR 0330102.
16. T. W. Cusick. On M. Hall's continued fraction theorem. Proc. Amer. Math. Soc. 38 (1973) 253–254. MR 0309875.
17. K. E. Hirst. Continued fractions with sequences of partial quotients. Proc. Amer. Math. Soc. 38 (1973) 221–227. MR 0311581.
18. David G. Cantor, Paul H. Galyean and Horst G. Zimmer. A continued fraction algorithm for real algebraic numbers. Math. Comp. 26 (1972) 785–791. MR 0330118.
19. Michael A. Morrison and John Brillhart. The factorization of $F_7$. Bull. Amer. Math. Soc. 77 (1971) 264. MR 0268113.
20. K. Y. Choong, D. E. Daykin and C. R. Rathbone. Rational approximations to $\pi$. Math. Comp. 25 (1971) 387–392. MR 0300981.
https://en.wikipedia.org/wiki/Finite-difference_approximation
# Finite difference method

Not to be confused with "finite difference method based on variation principle", the first name of the finite element method[citation needed].

In mathematics, finite-difference methods (FDM) are numerical methods for solving differential equations by approximating them with difference equations, in which finite differences approximate the derivatives. FDMs are thus discretization methods. Today, FDMs are the dominant approach to numerical solutions of partial differential equations.[1]

## Derivation from Taylor's polynomial

First, assuming the function whose derivatives are to be approximated is sufficiently well-behaved, by Taylor's theorem we can create a Taylor series expansion ${\displaystyle f(x_{0}+h)=f(x_{0})+{\frac {f'(x_{0})}{1!}}h+{\frac {f^{(2)}(x_{0})}{2!}}h^{2}+\cdots +{\frac {f^{(n)}(x_{0})}{n!}}h^{n}+R_{n}(x),}$ where n! denotes the factorial of n, and Rn(x) is a remainder term, denoting the difference between the Taylor polynomial of degree n and the original function. We will derive an approximation for the first derivative of the function f by first truncating the Taylor polynomial: ${\displaystyle f(x_{0}+h)=f(x_{0})+f'(x_{0})h+R_{1}(x).}$ Setting $x_0 = a$, we have ${\displaystyle f(a+h)=f(a)+f'(a)h+R_{1}(x).}$ Dividing across by h gives ${\displaystyle {f(a+h) \over h}={f(a) \over h}+f'(a)+{R_{1}(x) \over h}.}$ Solving for f'(a): ${\displaystyle f'(a)={f(a+h)-f(a) \over h}-{R_{1}(x) \over h}.}$ Assuming that ${\displaystyle R_{1}(x)}$ is sufficiently small, the approximation of the first derivative of f is ${\displaystyle f'(a)\approx {f(a+h)-f(a) \over h}.}$

## Accuracy and order

The error in a method's solution is defined as the difference between the approximation and the exact analytical solution.
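The forward-difference approximation derived above can be checked numerically. The following is a short illustrative Python sketch (not part of the article): it applies the formula to $f(x)=e^x$ at $a=0$, where the exact derivative is $1$, and halving $h$ roughly halves the error, consistent with first-order accuracy:

```python
import math

def forward_diff(f, a, h):
    # First-order forward-difference approximation of f'(a).
    return (f(a + h) - f(a)) / h

# f(x) = e^x has f'(0) = 1; halving h should roughly halve the error.
errors = [abs(forward_diff(math.exp, 0.0, h) - 1.0) for h in (0.1, 0.05, 0.025)]
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
print(errors)   # a decreasing sequence of errors
print(ratios)   # each ratio is close to 2
```

Each error ratio comes out near 2 (slightly above, since the constant in the remainder term also depends on $h$ through $\xi$).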
The two sources of error in finite difference methods are round-off error, the loss of precision due to computer rounding of decimal quantities, and truncation error or discretization error, the difference between the exact solution of the original differential equation and the exact quantity assuming perfect arithmetic (that is, assuming no round-off).

The finite difference method relies on discretizing a function on a grid. To use a finite difference method to approximate the solution to a problem, one must first discretize the problem's domain. This is usually done by dividing the domain into a uniform grid (see image to the right). Note that this means that finite-difference methods produce sets of discrete numerical approximations to the derivative, often in a "time-stepping" manner.

An expression of general interest is the local truncation error of a method. Typically expressed using Big-O notation, local truncation error refers to the error from a single application of a method. That is, it is the quantity ${\displaystyle f'(x_{i})-f'_{i}}$ if ${\displaystyle f'(x_{i})}$ refers to the exact value and ${\displaystyle f'_{i}}$ to the numerical approximation. The remainder term of a Taylor polynomial is convenient for analyzing the local truncation error. Using the Lagrange form of the remainder from the Taylor polynomial for ${\displaystyle f(x_{0}+h)}$, which is ${\displaystyle R_{n}(x_{0}+h)={\frac {f^{(n+1)}(\xi )}{(n+1)!}}h^{n+1}}$, where ${\displaystyle x_{0}<\xi <x_{0}+h}$, the dominant term of the local truncation error can be discovered.
For example, again using the forward-difference formula for the first derivative, knowing that ${\displaystyle f(x_{i})=f(x_{0}+ih)}$, ${\displaystyle f(x_{0}+ih)=f(x_{0})+f'(x_{0})ih+{\frac {f''(\xi )}{2!}}(ih)^{2},}$ and with some algebraic manipulation, this leads to ${\displaystyle {\frac {f(x_{0}+ih)-f(x_{0})}{ih}}=f'(x_{0})+{\frac {f''(\xi )}{2!}}ih,}$ and further noting that the quantity on the left is the approximation from the finite difference method and that the quantity on the right is the exact quantity of interest plus a remainder, clearly that remainder is the local truncation error. A final expression of this example and its order is: ${\displaystyle {\frac {f(x_{0}+ih)-f(x_{0})}{ih}}=f'(x_{0})+O(h).}$ This means that, in this case, the local truncation error is proportional to the step size.

The quality and duration of a simulated FDM solution depend on the discretization equation selection and the step sizes (time and space steps). The data quality and simulation duration increase significantly with smaller step size.[2] Therefore, a reasonable balance between data quality and simulation duration is necessary for practical usage. Large time steps are useful for increasing simulation speed in practice.
However, time steps which are too large may create instabilities and affect the data quality.[3][4] The von Neumann method is usually applied to determine the numerical model stability.[3][4][5][6]

## Example: ordinary differential equation

For example, consider the ordinary differential equation ${\displaystyle u'(x)=3u(x)+2.\,}$ The Euler method for solving this equation uses the finite difference quotient ${\displaystyle {\frac {u(x+h)-u(x)}{h}}\approx u'(x)}$ to approximate the differential equation by first substituting it for u'(x), then applying a little algebra (multiplying both sides by h, and then adding u(x) to both sides) to get ${\displaystyle u(x+h)=u(x)+h(3u(x)+2).\,}$ The last equation is a finite-difference equation, and solving this equation gives an approximate solution to the differential equation.

## Example: The heat equation

Consider the normalized heat equation in one dimension, with homogeneous Dirichlet boundary conditions ${\displaystyle U_{t}=U_{xx}\,}$ ${\displaystyle U(0,t)=U(1,t)=0\,}$ (boundary condition) ${\displaystyle U(x,0)=U_{0}(x)\,}$ (initial condition) One way to numerically solve this equation is to approximate all the derivatives by finite differences. We partition the domain in space using a mesh ${\displaystyle x_{0},...,x_{J}}$ and in time using a mesh ${\displaystyle t_{0},....,t_{N}}$. We assume a uniform partition both in space and in time, so the difference between two consecutive space points will be h and between two consecutive time points will be k. The points ${\displaystyle u_{j}^{n}}$ will represent the numerical approximation of ${\displaystyle u(x_{j},t_{n}).}$

### Explicit method

The stencil for the most common explicit method for the heat equation.
Using a forward difference at time ${\displaystyle t_{n}}$ and a second-order central difference for the space derivative at position ${\displaystyle x_{j}}$ (FTCS) we get the recurrence equation: ${\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {u_{j+1}^{n}-2u_{j}^{n}+u_{j-1}^{n}}{h^{2}}}.\,}$ This is an explicit method for solving the one-dimensional heat equation. We can obtain ${\displaystyle u_{j}^{n+1}}$ from the other values this way: ${\displaystyle u_{j}^{n+1}=(1-2r)u_{j}^{n}+ru_{j-1}^{n}+ru_{j+1}^{n}}$ where ${\displaystyle r=k/h^{2}.}$ So, with this recurrence relation, and knowing the values at time n, one can obtain the corresponding values at time n+1. ${\displaystyle u_{0}^{n}}$ and ${\displaystyle u_{J}^{n}}$ must be replaced by the boundary conditions, in this example they are both 0. This explicit method is known to be numerically stable and convergent whenever ${\displaystyle r\leq 1/2}$.[7] The numerical errors are proportional to the time step and the square of the space step: ${\displaystyle \Delta u=O(k)+O(h^{2})\,}$ ### Implicit method The implicit method stencil. If we use the backward difference at time ${\displaystyle t_{n+1}}$ and a second-order central difference for the space derivative at position ${\displaystyle x_{j}}$ (The Backward Time, Centered Space Method "BTCS") we get the recurrence equation: ${\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {u_{j+1}^{n+1}-2u_{j}^{n+1}+u_{j-1}^{n+1}}{h^{2}}}.\,}$ This is an implicit method for solving the one-dimensional heat equation. We can obtain ${\displaystyle u_{j}^{n+1}}$ from solving a system of linear equations: ${\displaystyle (1+2r)u_{j}^{n+1}-ru_{j-1}^{n+1}-ru_{j+1}^{n+1}=u_{j}^{n}}$ The scheme is always numerically stable and convergent but usually more numerically intensive than the explicit method as it requires solving a system of numerical equations on each time step. 
The errors are linear over the time step and quadratic over the space step: ${\displaystyle \Delta u=O(k)+O(h^{2}).\,}$ ### Crank–Nicolson method Finally if we use the central difference at time ${\displaystyle t_{n+1/2}}$ and a second-order central difference for the space derivative at position ${\displaystyle x_{j}}$ ("CTCS") we get the recurrence equation: ${\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {1}{2}}\left({\frac {u_{j+1}^{n+1}-2u_{j}^{n+1}+u_{j-1}^{n+1}}{h^{2}}}+{\frac {u_{j+1}^{n}-2u_{j}^{n}+u_{j-1}^{n}}{h^{2}}}\right).\,}$ This formula is known as the Crank–Nicolson method. The Crank–Nicolson stencil. We can obtain ${\displaystyle u_{j}^{n+1}}$ from solving a system of linear equations: ${\displaystyle (2+2r)u_{j}^{n+1}-ru_{j-1}^{n+1}-ru_{j+1}^{n+1}=(2-2r)u_{j}^{n}+ru_{j-1}^{n}+ru_{j+1}^{n}}$ The scheme is always numerically stable and convergent but usually more numerically intensive as it requires solving a system of numerical equations on each time step. The errors are quadratic over both the time step and the space step: ${\displaystyle \Delta u=O(k^{2})+O(h^{2}).\,}$ Usually the Crank–Nicolson scheme is the most accurate scheme for small time steps. The explicit scheme is the least accurate and can be unstable, but is also the easiest to implement and the least numerically intensive. The implicit scheme works the best for large time steps.
https://www.freemathhelp.com/forum/threads/is-zero-a-positive-integer-could-it-be-a-prime-number.111230/
# Is zero a positive integer? Could it be a prime number?

#### Phatfoo

##### New member

Zero is not a negative integer, so is it positive? And could it be an even prime number? Zero divided by itself is zero, but zero divided by one is zero. So this confuses me.

#### HallsofIvy

##### Elite Member

No, zero is not a positive number. Zero is neither positive nor negative. No, zero is not a prime number. No, "zero divided by zero" is NOT 0. It is indeterminate: a/b = c if and only if a = bc. If we claim that 0/0 = c for some specific number c, then 0 = 0(c). But 0 times any number is 0, so every number c satisfies this, and no single value can be assigned to 0/0.

#### tkhunny

##### Moderator

Staff member

There are programming languages that make various assumptions. Some tell you that 0/0 = 1 if it is encountered in a program sequence. This is just an assumption of the code. It's not gospel. I think most of the time, if a program encounters 0/0, you will get either "NaN" (not a number) or a division error or something else catastrophic.

#### JeffM

##### Elite Member

Consider red, blue, and yellow. Do you argue that a red ball is yellow because that ball is not blue? Consider the boundary line between Canada and the US: is it in Canada or the US or both? Tell me what your definition of "positive" is, and I'll tell you whether or not zero is positive according to that definition. A common definition is that a number n is positive if and only if it is greater than zero, and that a number is negative if and only if it is less than zero.
Under that definition, zero is clearly neither negative nor positive. That is possible because, by definition, we have divided numbers into three classes rather than two. If we do want to divide numbers into two classes, one way to do it is to define positive and non-positive numbers, in which case zero is non-positive. Or we can divide numbers into two classes as negative and non-negative, in which case zero is non-negative. Pay attention to definitions.

#### Jomo

##### Elite Member

Zero is said to be a neutral number, neither positive nor negative. A positive integer n is said to be prime if the only positive integers that go evenly into it are 1 and the number n itself. Well 1, 2, 3, 4, 5, 6, ... all go evenly into zero, so zero is the absolute worst candidate for being prime. 0/0 is not zero; 0/0 is indeterminate. You really need to learn your definitions. Math is like a game where you can't play the game very well if you do not know the rules (definitions).

#### mmm4444bot

##### Super Moderator

Staff member

As JeffM wrote, definitions can vary. I would say that most math courses treat zero as neither positive nor negative. But, there are exceptions. One example: electrical engineers sometimes treat zero as a negative number because it makes their life easier.
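To illustrate tkhunny's point that what 0/0 does depends on the language, here is a quick sketch of one language's behavior (Python; other languages differ):

```python
import math

# Python raises an exception rather than returning a value for 0/0:
try:
    result = 0 / 0
except ZeroDivisionError as e:
    print("0 / 0 raises ZeroDivisionError:", e)

# IEEE-754 floating point, by contrast, defines 0.0/0.0 as the special
# value NaN ("not a number"). Python exposes NaN directly as math.nan.
x = math.nan
print(x == x)         # NaN compares unequal to everything, even itself
print(math.isnan(x))  # the correct way to test for NaN
```

So even within one language you can see both conventions: the division operator refuses to answer, while the floating-point standard's answer is a value that is deliberately not equal to anything.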
http://mathoverflow.net/revisions/1959/list
7 reworded a bit

There's an important piece of geometric knowledge usually quoted as Beilinson-Bernstein-Deligne. Here's a refresher: by $IC$ one means the intersection complex, which is just $\mathbb Q$ for a smooth scheme but more complicated for others, and by $IC_i$ one denotes the complex constructed from a pair $(Y_i, \mathcal L_i)$ of a subvariety together with a local system, as $IC_i := j_{!*}\mathcal L_i$. Now it turns out that for a projective morphism $f: X\to Y$ you can decompose in the derived category $$f_*IC = \oplus IC_i[n_i].$$ The special beauty of this decomposition theorem is in its examples. Here are some I think I know:

• For a free action of a group $G$ on some $X$, you get the decomposition by representations of $G$.
• For a resolution of singularities, you get $f_*\mathbb Q = IC_Y \oplus F$ (and $F$ should have support on the exceptional divisor).
• For a smooth algebraic bundle, $f_*\mathbb Q = \oplus\, \mathbb Q[-]$ (the spectral sequence degenerates).

There are many known applications of the theorem, described, e.g., in the review, but I wonder if there are more examples that would continue the list above, that is, "corner cases" which highlight one specific aspect of the decomposition theorem.

Question: What are other examples, especially the "corner" cases?

6 worked up 5 a bit 4 edited tags 3 formulas 2 retag 1
http://www.thespectrumofriemannium.com/tag/interstellar-trip/
## LOG#029. Interstellar trips in SR.

My final article dedicated to the memory of Neil Armstrong. The idea is to study the relativistic rocket motion quantitatively, with numbers; after all, we have deduced the important formulae, and we will explain what is happening in the two …
https://skyciv.com/technical/area-loads-in-one-way-and-two-way-systems/
# Area Loads in One-way and Two-way Systems

When you’re designing a structure to support a slab, it’s important to consider how the slab transmits its load to the structural members. In your model, you might consider adding a plate element to model the slab. However, in the context of checking the adequacy of your beams, joists, and girders, this may not be appropriate. This is because loads are transferred from the plate to the nodes (with no mid-member interaction) and unwanted stiffness may be introduced into your model. A more appropriate approach is to instead consider what is known as an “area load” or “tributary load”. This involves converting a pressure load (which acts along a surface) to distributed loads that act on the beams, joists, and girders (members) which support the surface. In this article we’re going to explore two examples of how this can be done. Mathematically, the calculation for the distributed load is simply expressed as:

$$w = q t_w$$

where $w$ is the distributed (area) load magnitude, $q$ is the pressure load magnitude, and $t_w$ is the tributary width. The tributary width is the width of an area that is divided up according to the area load type. The consideration at this point is to determine whether a system is one-way or two-way, so that the tributary width and load distribution can be assigned to the member. In general, if the load from the slab is delivered to the beams in one direction, then the system is one-way. Conversely, if the load is delivered to the beams and the girders in two directions, then the system is considered two-way.

### Example: One-way System

In this example, the pressure load from the slab is considered to be transferred directly to the beams. Since the girders are not directly supporting the slab, the system is considered to be one-way. The area load is thus calculated as

$$w = q t_w = 1.2 \times 3/2 = 1.8 \text{ kip/ft}$$

A one-way system will divide up the area formed by the two members selected.
In this example, since the area is a rectangle, the profile of the area load is also rectangular, as shown below. Note that the profile is not always rectangular; rather, it is always quadrilateral and represents an equal division of area between the two members. In some cases, point loads are added in order to force-balance the system if the slab overhangs the member.

### Example: Two-way System

For the same structure as the previous example, if the slab were instead directly supported by members 2, 8, 5, and 9, the system would be considered two-way. The slab occupies the same area as in the previous example, but the load profile will change. For two-way systems, bisecting lines are drawn from the corners to create a pair of triangles and a pair of quadrilaterals, as shown in the diagram below. Thus the area load for members 8 and 9 is given by

$$w = q t_w = 1.2 \times 3/2 = 1.8 \text{ kip/ft}$$

and for members 2 and 5 (by simple trigonometry),

$$w = q t_w = 1.2 \times 1.5\tan(45^\circ) = 1.8 \text{ kip/ft}$$

The result of applying these area loads to the structure is shown below.
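As a quick sanity check, the $w = q t_w$ arithmetic from both worked examples can be scripted. The function and variable names below are mine, not SkyCiv's:

```python
import math

# Distributed load from a pressure over a tributary width: w = q * t_w.
def distributed_load(q, t_w):
    """q in kip/ft^2 (ksf), t_w in ft -> w in kip/ft."""
    return q * t_w

# One-way example: q = 1.2 ksf over a tributary width of 3/2 ft.
w_one_way = distributed_load(1.2, 3 / 2)

# Two-way example, members 2 and 5: peak of the triangular profile
# formed by the 45-degree bisecting lines, t_w = 1.5 * tan(45 deg).
w_two_way_peak = distributed_load(1.2, 1.5 * math.tan(math.radians(45)))

print(round(w_one_way, 2), round(w_two_way_peak, 2))  # 1.8 1.8
```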
http://archives.evergreen.edu/webpages/curricular/2003-2004/transformingtheglobe/Homework/AtmosphericPlanetaryTemps.htm
Mathematical Modeling: Solar Fluxes and the Earth's Atmosphere

Objectives

The objective of this lab is to study a simple model of planetary temperatures with atmospheres, based upon energy flows.

Background

The fundamental difference between this lab and the one you did last week is that we now add an atmosphere to the planet. An atmosphere will affect the surface temperature of a planet, but will not affect the exterior temperature (the radiant temperature seen by an observer out in space) of the planet. The exterior radiant temperature is determined from equilibrium, where the net solar inflow is equal to the radiated IR outflow. The net solar inflow is a function only of the solar flux and the planetary albedo. A schematic of the model we used last week looked like Figure 4.1.

Figure 4.1

We now add an atmosphere that we assume (for simplicity) is transparent to visible light, but totally opaque to infrared radiation. Such an atmosphere absorbs all the IR radiated by the earth and re-radiates it in all directions, half of it going back to the earth and half of it going out into space. Such a model is schematically depicted in Figure 4.2.

Figure 4.2

The atmosphere of Earth is, in fact, not totally opaque to IR radiated from the earth, but rather allows some of the IR to pass directly through. If we define f_IRabs as the fraction of the IR radiated by the ground that is absorbed by the atmosphere, then (1 - f_IRabs) passes directly from the ground to space. A schematic for such a model is shown in Figure 4.3, at the top of the next page.

Figure 4.3

To explore such models, we will modify the model you constructed last week. Rather than compute heat capacities for water and for the atmosphere in the midst of the diagram, we'll separate them from the main schematic portion of the flows. Examine Figure 4.4 (on the next page) carefully, and then build a similar model, using your model from last week as a starting point.
Use initial values of zero for the Earth Energy and Atmosphere stocks, and all of the parameter values from last week. In addition, the following values will be necessary:

Atmos Depth 5600 { m }
Density Atmos 1.293 { kg / m^3 }
Specific Heat Atmos 1004 { J / (kg * K) }

Note that we are assuming that the atmosphere is of constant density but only 5600 meters thick. This is an excellent approximation to our real atmosphere, whose density decreases with height and which is actually much thicker. After setting up a graph to plot T_grd and T_atmos, and Numerical Displays for each quantity, run a simulation from time 0 to 1 with DT = 0.0025. Your temperature values should be the same as those shown at the bottom of Figure 4.4. The atmosphere temperature remains at 255 K, but the ground temperature is now 303 K, or 30 C, which is the same as 86 F. This is a bit warmer than the actual average temperature of the earth, but recall that our starting assumption was that the atmosphere absorbed all the infrared radiation from the earth, so half of that got radiated back to the earth to be reabsorbed. In actual fact, the atmosphere is transparent to some wavelengths of infrared radiation, so less radiation than our model projects gets radiated back to the earth, and hence the actual surface temperature is lower than the 303 K our model suggests.

One of the concerns that climatologists have is that human activity (clearing of forests, building of industrial parks, strip malls, and suburbs) may be changing the surface albedo of the planet. Use this model to find the sensitivity of the surface temperature to albedo change. That is, find the percent change in the surface temperature if the albedo changes by 1%.

Figure 4.4

Now we will assume that the atmosphere absorbs only a fraction, f_IRabs, of the IR radiated by the Earth. That means that (1 - f_IRabs) of the IR radiated by the Earth flows directly to space.
Such a model is depicted on the next page. Construct such a model (see the next page), and vary the value of f_IRabs until you obtain an equilibrium surface temperature of 289.4 K, a value for the mean surface temperature of the Earth that is generally accepted by climatologists. What value for f_IRabs did you obtain?

Figure 4.5

This model depicts the energy flow for the Earth under the assumption that the atmosphere absorbs only a fraction, f_IRabs, of the IR radiated by the surface of the planet. When we vary f_IRabs such that the equilibrium surface temperature is 289.4 K, the equilibrium atmosphere temperature is just shy of 230 K. This is cooler than we observe on the Earth, but we have neglected the convective flow of heat and the latent heat transfer from the surface to the atmosphere. Were we to include these as a next step, we would find that the model predicts an equilibrium atmospheric temperature very close to the one climatologists believe is accurate. However, we've done enough with our model for this week.
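The equilibrium temperatures in this lab can also be reproduced without the stock-and-flow simulation by solving the energy balances algebraically. The sketch below assumes a solar constant of 1366 W/m² and an albedo of 0.3 (these values give the 255 K exterior temperature quoted above; the function and variable names are mine, not from the handout):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
S0 = 1366.0       # solar constant, W/m^2 (assumed value)
ALBEDO = 0.3      # planetary albedo (assumed value)

def exterior_temp(s0=S0, albedo=ALBEDO):
    """Radiant temperature seen from space: net solar in = IR out."""
    return (s0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

def ground_temp(f_ir, s0=S0, albedo=ALBEDO):
    """Equilibrium surface temperature when the atmosphere absorbs a
    fraction f_ir of surface IR and re-radiates half of it downward.
    Solving the surface and atmosphere balances gives
    T_ground = T_exterior * (2 / (2 - f_ir)) ** 0.25."""
    return exterior_temp(s0, albedo) * (2.0 / (2.0 - f_ir)) ** 0.25

print(round(exterior_temp(), 1))    # ~255 K exterior temperature
print(round(ground_temp(1.0), 1))   # fully opaque atmosphere: ~303 K

# Fraction of surface IR the atmosphere must absorb to reach 289.4 K:
f = 2.0 - 2.0 * (exterior_temp() / 289.4) ** 4
print(round(f, 2))
```

Inverting the ground-temperature formula this way gives an f_IRabs of roughly 0.8, which you can compare against the value you find by trial and error in the simulation.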
http://mathhelpforum.com/algebra/184849-determine-whether-each-equation-determines-y-function-x.html
# Math Help - Determine whether each equation determines y to be a function of x 1. ## Determine whether each equation determines y to be a function of x I am just completely stuck on these problems. I am sort of making headway, but I am still confused. $y^2=x$ I know the fundamentals of domain, range, function... but I have no idea how to work with the equation to see if it is a function. 2. ## Re: Determine whether each equation determines y to be a function of x Have a go at graphing $y = \pm\sqrt{x}$. What do you notice? 3. ## Re: Determine whether each equation determines y to be a function of x Best way to do any such problem is to start with the definition! The definition of "function" is: given a relation between x and y, y is a function of x if and only if for any value of x, there is, at most, only one corresponding value of y. Now look at x = 1. What values of y give $y^2 = 1$? 4. ## Re: Determine whether each equation determines y to be a function of x Originally Posted by HallsofIvy Best way to do any such problem is to start with the definition! The definition of "function" is: given a relation between x and y, y is a function of x if and only if for any value of x, there is, at most, only one corresponding value of y. Now look at x = 1. What values of y give $y^2 = 1$? It should be that for every x in the domain, there is exactly one y in the co-domain associated with it. Not "at most one value of y". 5. ## Re: Determine whether each equation determines y to be a function of x That is not a correction of what I said, just a restatement. If "x" is not in the domain, there is no corresponding "y", hence "at most, only one corresponding value of y."
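A quick numeric way to see the answer the thread is steering toward: list the real solutions y of y² = x for a given x (the helper name is mine). If any x yields more than one y, the equation does not define y as a function of x.

```python
def y_solutions(x):
    """All real y with y**2 == x."""
    if x < 0:
        return []                 # no real solutions
    root = x ** 0.5
    return sorted({root, -root})  # one element when x == 0, two when x > 0

print(y_solutions(1))   # [-1.0, 1.0]: two y values for x = 1, so not a function
print(y_solutions(0))   # [0.0]
print(y_solutions(-4))  # []
```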
https://www.talkstats.com/threads/interpretation-of-various-output-of-lmer-function-in-r.61787/
# Interpretation of various output of the "lmer" function in R

#### Cynderella

Code:
library(lme4)
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

The notation (Days | Subject) says to allow the intercept and Days to vary randomly for each level of Subject. Can you please explain to me the results of the following commands?

Code:
attr(summary(fm1)$varcor$Subject,"stddev")
(Intercept)        Days
  24.740448    5.922133

c(sd(ranef(fm1)$Subject[,1]), sd(ranef(fm1)$Subject[,2]))
[1] 21.595943  5.455217

summary(fm1)$sigma
[1] 25.59182

residuals(summary(fm1))
sd(residuals(summary(fm1)))
[1] 0.9183965

What is the INTERPRETATION of the results found from these commands? That is, what if one asks me the meaning of the results found from "sd(ranef(fm1)$Subject[,1])" and "attr(summary(fm1)$varcor$Subject,"stddev")[1]"? Both are standard deviations of the "(Intercept)", but of course there is a difference between the two results, and I don't know what it is. In "?getME", it is said that "summary(fm1)$sigma" gives the residual standard error. But why doesn't the result match "sd(residuals(summary(fm1)))"? Also, in "summary(fm1)$varcor" there is a value 0.066 under the column "Corr". Does it mean the correlation between the two random effects "(Intercept)" and "Days" is 0.066? Any help is appreciated. Thank you.

#### Jake

Code:
attr(summary(fm1)$varcor$Subject,"stddev")
(Intercept)        Days
  24.740448    5.922133

c(sd(ranef(fm1)$Subject[,1]), sd(ranef(fm1)$Subject[,2]))
[1] 21.595943  5.455217

The difference between these two pairs of quantities is subtle but conceptually important. The first pair are the actual parameter estimates: they are our best guess about the standard deviation of the intercepts and the standard deviation of the slopes in the population of Subjects. The second pair are the sample standard deviations of the BLUPs for the Subjects in our particular study.
This is typically (or maybe always, but I hesitate to make the general statement) lower than the estimated standard deviation for the population, because the BLUPs have been subjected to some degree of shrinkage (some info here). Because of this shrinkage, the standard deviation of our predictions for a particular sample does not equal our estimate of the standard deviation in the population.

Code:
summary(fm1)$sigma
[1] 25.59182

This is the estimate of the standard deviation of the errors. It is similar conceptually to our estimates of the standard deviations of the random effects.

Code:
residuals(summary(fm1))
sd(residuals(summary(fm1)))
[1] 0.9183965

I actually didn't know before now that you could call residuals() on a summary.merMod object. But apparently you can, and the results are the "scaled residuals", i.e., the raw residuals (which you would obtain with residuals(fm1) -- omitting the summary() command) divided by sigma. In other words:

Code:
identical(resid(summary(fm1)), resid(fm1)/summary(fm1)$sigma)
# [1] TRUE

#### TheEcologist

Yeah, lme4 is constantly evolving and adding more s3/s4 methods to the packages. Two years ago predict didn't work on a merMod object; now it does. It helps to read the release notes. It's a pain because it also invariably breaks legacy code -- in each version they seem to change the slotNames or input forms (re.form, ReForm, REForm, REform) -- but it's still the best package for hierarchical work short of going Bayesian.
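The shrinkage point above can be illustrated with a toy simulation (sketched in Python since it is conceptual rather than anything lme4-specific; all names and numbers here are invented for illustration). Each BLUP-like prediction pulls the observed group mean toward zero, so the sample SD of the predictions ends up below the true population SD, while the SD of the raw group means ends up above it:

```python
import random
import statistics

random.seed(0)
tau, sigma, n_per_group = 5.0, 10.0, 5  # population SD, error SD, obs per group

# Simulate 200 group effects from N(0, tau^2), each observed with sampling noise.
true_effects = [random.gauss(0.0, tau) for _ in range(200)]
observed_means = [u + random.gauss(0.0, sigma / n_per_group ** 0.5)
                  for u in true_effects]

# The best linear predictor shrinks each observed mean toward 0 by this factor.
shrink = tau ** 2 / (tau ** 2 + sigma ** 2 / n_per_group)
blups = [shrink * y for y in observed_means]

# sd(blups) < tau < sd(observed_means): shrunken predictions under-disperse.
print(round(statistics.stdev(blups), 2),
      tau,
      round(statistics.stdev(observed_means), 2))
```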
#### Cynderella

Code:
attr(summary(fm1)$varcor$Subject,"stddev")
(Intercept)        Days
  24.740448    5.922133

c(sd(ranef(fm1)$Subject[,1]), sd(ranef(fm1)$Subject[,2]))
[1] 21.595943  5.455217

The first pair are the actual parameter estimates: they are our best guess about the standard deviation of the intercepts and the standard deviation of the slopes in the population of Subjects.

Does it mean the first pair are the point estimates of the variance components of the random effects? This link https://stat.ethz.ch/pipermail/r-sig-mixed-models/2014q3/022647.html says that attr(summary(fm1)$varcor$Subject,"stddev") gives the point estimates of the variance components of the random effects. Is that right? Thank you again. Regards.
https://www.maa.org/press/periodicals/american-mathematical-monthly/american-mathematical-monthly-april-1999
# American Mathematical Monthly - April 1999

### Articles

The Mandelbrot Set, the Farey Tree, and the Fibonacci Sequence
by Robert L. Devaney
In this paper we discuss several folk theorems involving the Mandelbrot set. Each of these theorems concerns the size of the bulbs or decorations attached to the Mandelbrot set. One may recognize the p/q bulbs from the geometry of their antennas. Moreover, the size of these bulbs is determined by the Farey tree. These results are proved using the theory of external rays of Douady and Hubbard, so we also give an explanation of their results.

Existence Proofs
by Fred Richman
The difference between constructive and nonconstructive existence proofs is examined through three examples. Bezout's equation has well-known nonconstructive and constructive proofs. There is no known constructive proof that some digit appears infinitely often in the decimal expansion of pi. The zero-one valued function f defined by f(n) = 1 if there are at least n consecutive 4's in the decimal expansion of pi is computable according to the received wisdom. Yet no one has a program to compute it, together with a proof that the program is right.

Introduction to Metric-Preserving Functions
by Paul Corazza
When I was a graduate student, a professor asked us to verify that for any metric d, d/(1+d) is also a metric, topologically equivalent to d, while d/(1+d*d) is not generally a metric at all. I became curious about the possibility of characterizing those functions f, like x/(1+x), for which f(d(·,·)) is a metric, or a metric that is equivalent to d. After coming up with such a characterization and obtaining a few related results, I learned that there is a substantial literature on the subject. This paper is a survey of this literature, containing a number of surprising results:

• Every nondecreasing, subadditive function from the nonnegative reals to the nonnegative reals is metric-preserving.
• Metric-preserving functions need not be continuous. Indeed, there exist nowhere continuous metric-preserving functions, and there are "more" metric-preserving functions than there are continuous functions.
• If a metric-preserving function is continuous at 0, it is continuous everywhere.
• A metric-preserving function f has the property that f(d(·,·)) is equivalent to d for every metric d if and only if f is continuous.
• Every metric-preserving function has a (possibly infinite) derivative at 0. If it has a finite derivative at 0, then it is differentiable almost everywhere.

Magic Dice
by Bernard D. Flury, Robert Irving, and M. N. Goria
A magician offers her audience a game with two dice. For certain pairs of numbers (X,Y) shown by the two dice the magician wins 1 ruble, and for other combinations she loses 1 ruble. Sure enough, the magician wins all the time, despite the fact that two people who tally the frequencies of X and Y find that both dice take values 1 to 6 with the required probabilities of 1/6. A third observer tallies X + Y, but again nothing is found to be wrong. A fourth observer tallies X - Y, and a fifth observer X + 2Y, and still all frequencies are in agreement with the assumption that the two dice are fair and independent. However, when the magician is asked to admit a sixth observer she stops the game. How many observers (that is, different linear combinations) can the magician admit and still cheat? For six-sided dice the answer is five. What if she uses 20-sided dice, or 1998-sided dice?

The Set of Differences of a Given Set
by Andrew Granville and Friedrich Roesler
Given a finite set of integers, form all of the reduced fractions that result from dividing any one element of this set by any other. We show that there are at least as many distinct products of the numerator and denominator of such fractions as there were elements in the original set.
We show how this (somewhat contrived sounding) result links up with many questions of current interest in number theory and combinatorial geometry. Several natural open questions are posed, both in terms of number theory and in terms of geometry, and a few partial results are given.

Six Ways of Looking at Burtin's Lemma
by S. Anoulova, J. Bennies, J. Lenhard, D. Metzler, Y. Sung, A. Weber
What is the probability that the graph of a non-uniform random mapping consists of only one component? Consider a random mapping F : {1, ..., n} -> {0, ..., n}, where for each i its image F(i) is chosen independently according to given probability weights p_0, ..., p_n. Associate a random graph consisting of vertices 0, 1, ..., n and edges directed from each i to F(i). Note that there is no edge pointing from 0 to another vertex. Well, what is the probability that all vertices are connected to 0? The answer is surprisingly simple: it equals p_0. This result is known as Burtin's lemma and was originally proved by induction. Should not such a simple result have a simpler proof? We arrived at six different approaches, which still leave open the question whether there is a most natural way of looking at Burtin's lemma.

### NOTES

Lexell's Theorem Via an Inscribed Angle Theorem
by Hiroshi Maehara

A Characteristic Property of Differentiation

A Weighted Mixed-Mean Inequality
by Kiran S. Kedlaya

### UNSOLVED PROBLEMS

Periods in Taking and Splitting Games
by Ian Caines, Carrie Gates, Richard K. Guy, and Richard J. Nowakowski

### REVIEWS

The French Mathematician
By Tom Petsinis
Reviewed by Tony Rothman

Social Constructivism as a Philosophy of Mathematics
By Paul Ernest

What is Mathematics, Really?
By Reuben Hersh
Reviewed by Bonnie Gold
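Returning to the metric-preserving abstract above: the claim that d/(1+d) is metric-preserving while d/(1+d*d) need not be can be probed numerically. The sketch below (all names are mine) checks the triangle inequality after transforming a valid triple of distances. Side lengths 10, 10, 1 are realizable as pairwise distances in the plane, and d/(1+d*d) fails on them precisely because it is not nondecreasing:

```python
def triangle_ok(a, b, c):
    """Do a, b, c satisfy all three triangle inequalities?"""
    return a <= b + c and b <= a + c and c <= a + b

f = lambda d: d / (1 + d)        # metric-preserving
g = lambda d: d / (1 + d * d)    # not metric-preserving

sides = (10.0, 10.0, 1.0)        # two far-apart points plus one nearby pair
assert triangle_ok(*sides)       # a valid metric triangle

a, b, c = sides
print(triangle_ok(f(a), f(b), f(c)))  # True
print(triangle_ok(g(a), g(b), g(c)))  # False: g(1) = 0.5 > g(10) + g(10)
```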
https://www.hepdata.net/search/?q=&phrases=Integrated+Cross+Section&sort_order=&page=1&sort_by=latest
Showing 25 of 3842 results #### Observation of electroweak production of a same-sign $W$ boson pair in association with two jets in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector The collaboration Aaboud, Morad ; Aad, Georges ; Abbott, Brad ; et al. No Journal Information, 2019. Inspire Record 1738841 This Letter presents the observation and measurement of electroweak production of a same-sign $W$ boson pair in association with two jets using 36.1 fb$^{-1}$ of proton-proton collision data recorded at a center-of-mass energy of $\sqrt{s}=13$ TeV by the ATLAS detector at the Large Hadron Collider. The analysis is performed in the detector fiducial phase-space region, defined by the presence of two same-sign leptons, electron or muon, and at least two jets with a large invariant mass and rapidity difference. A total of 122 candidate events are observed for a background expectation of $69 \pm 7$ events, corresponding to an observed signal significance of 6.5 standard deviations. The measured fiducial signal cross section is $\sigma^{\mathrm {fid.}}=2.89^{+0.51}_{-0.48} \mathrm{(stat.)} ^{+0.29}_{-0.28} \mathrm{(syst.)}$ fb. 6 data tables Measured fiducial cross section. The $m_{jj}$ distribution for events meeting all selection criteria for the signal region. Signal and individual background distributions are shown as predicted after the fit. The last bin includes the overflow. The highest value measured in a candidate event in data is $m_{jj}=3.8$ TeV. The $m_{ll}$ distribution for events meeting all selection criteria for the signal region as predicted after the fit. The fitted signal strength and nuisance parameters have been propagated, with the exception of the uncertainties due to the interference and electroweak corrections for which a flat uncertainty is assigned. The last bin includes the overflow. The highest value measured in a candidate event in data is $m_{ll}=824$ GeV. 
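As an aside on reading such results: when a single total uncertainty is wanted, the statistical and systematic components quoted above are often combined in quadrature. This is a common convention, not a calculation performed in the Letter itself; a quick sketch using the same-sign $WW$ numbers:

```python
import math

# sigma_fid = 2.89 +0.51/-0.48 (stat.) +0.29/-0.28 (syst.) fb, from the text above
tot_up = math.hypot(0.51, 0.29)   # upward stat and syst combined in quadrature
tot_dn = math.hypot(0.48, 0.28)   # downward combination
# giving roughly sigma_fid = 2.89 +0.59/-0.56 fb in total
```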
#### Measurement of the cross-section and charge asymmetry of $W$ bosons produced in proton-proton collisions at $\sqrt{s}=8$ TeV with the ATLAS detector The collaboration Aad, Georges ; Abbott, Brad ; Abbott, Dale Charles ; et al. Eur.Phys.J., 2019. Inspire Record 1729240 This paper presents measurements of the $W^+ \rightarrow \mu^+\nu$ and $W^- \rightarrow \mu^-\nu$ cross-sections and the associated charge asymmetry as a function of the absolute pseudorapidity of the decay muon. The data were collected in proton-proton collisions at a centre-of-mass energy of 8 TeV with the ATLAS experiment at the LHC and correspond to a total integrated luminosity of 20.2 fb$^{-1}$. The precision of the cross-section measurements varies between 0.8% and 1.5% as a function of the pseudorapidity, excluding the 1.9% uncertainty on the integrated luminosity. The charge asymmetry is measured with an uncertainty between 0.002 and 0.003. The results are compared with predictions based on next-to-next-to-leading-order calculations with various parton distribution functions and have the sensitivity to discriminate between them. 8 data tables Cross-sections (differential in $\eta_{\mu}$) and asymmetry, as a function of $|\eta_{\mu}|$. The central values are provided along with the statistical and dominant systematic uncertainties: the data statistical uncertainty (Data Stat.), the $E_T^{\textrm{miss}}$ uncertainty, the uncertainties related to muon reconstruction (Muon Reco.), those related to the background, those from MC statistics (MC Stat.), and modelling uncertainties. The uncertainties of the cross-sections are given in percent and those of the asymmetry as an absolute difference from the nominal.
The correction factors, $C_{W^\pm,i}$, with their associated systematic uncertainties as a function of $|\eta_{\mu}|$, for $W^+$ and $W^-$ The integrated global correction factor $C_{W^\pm}$, for $W^+$ and $W^-$ #### Search for light pseudoscalar boson pairs produced from decays of the 125 GeV Higgs boson in final states with two muons and two nearby tracks in pp collisions at $\sqrt{s}=$ 13 TeV The collaboration Sirunyan, Albert M ; Tumasyan, Armen ; Adam, Wolfgang ; et al. No Journal Information, 2019. Inspire Record 1744267 A search is presented for pairs of light pseudoscalar bosons, in the mass range from 4 to 15 GeV, produced from decays of the 125 GeV Higgs boson. The decay modes considered are final states that arise when one of the pseudoscalars decays to a pair of tau leptons, and the other one either into a pair of tau leptons or muons. The search is based on proton-proton collisions collected by the CMS experiment in 2016 at a center-of-mass energy of 13 TeV that correspond to an integrated luminosity of 35.9 fb$^{-1}$. The 2$\mu$2$\tau$ and 4$\tau$ channels are used in combination to constrain the product of the Higgs boson production cross section and the branching fraction into the 4$\tau$ final state, $\sigma\mathcal{B}$, exploiting the linear dependence of the fermionic coupling strength of pseudoscalar bosons on the fermion mass. No significant excess is observed beyond the expectation from the standard model. The observed and expected upper limits at 95% confidence level on $\sigma\mathcal{B}$, relative to the standard model Higgs boson production cross section, are set respectively between 0.022 and 0.23 and between 0.027 and 0.19 in the mass range probed by the analysis.
1 data table Expected and observed 95% CL upper limits on (sigma(pp->h)/sigma(pp->hSM)) * B(h -> aa -> tautautautau) as a function of m(a) obtained from the 13 TeV data, where h(SM) is the Higgs boson of the standard model, h is the observed particle with mass of 125 GeV, and (a) denotes a light Higgs-like state. #### Measurement of fiducial and differential $W^+W^-$ production cross-sections at $\sqrt{s}=$13 TeV with the ATLAS detector The collaboration Aaboud, Morad ; Aad, Georges ; Abbott, Brad ; et al. No Journal Information, 2019. Inspire Record 1734263 A measurement of fiducial and differential cross-sections for $W^+W^-$ production in proton-proton collisions at $\sqrt{s}=$13 TeV with the ATLAS experiment at the Large Hadron Collider using data corresponding to an integrated luminosity of $36.1$ fb$^{-1}$ is presented. Events with one electron and one muon are selected, corresponding to the decay of the diboson system as $WW\rightarrow e^{\pm}\nu\mu^{\mp}\nu$. To suppress top-quark background, events containing jets with a transverse momentum exceeding 35 GeV are not included in the measurement phase space. The fiducial cross-section, six differential distributions and the cross-section as a function of the jet-veto transverse momentum threshold are measured and compared with several theoretical predictions. Constraints on anomalous electroweak gauge boson self-interactions are also presented in the framework of a dimension-six effective field theory. 43 data tables Measured fiducial cross-section as a function of the jet-veto $p_{T}$ threshold. The value at the jet-veto $p_{T}$ threshold of 35GeV corresponds to the nominal fiducial cross section measured in this publication. Statistical correlation between bins in data for the measured fiducial cross-section as a function of the jet-veto $p_{T}$ threshold. The value at the jet-veto $p_{T}$ threshold of 35GeV corresponds to the nominal fiducial cross section measured in this publication. 
Total correlation between bins in data for the measured fiducial cross-section as a function of the jet-veto $p_{T}$ threshold. The value at the jet-veto $p_{T}$ threshold of 35 GeV corresponds to the nominal fiducial cross section measured in this publication. #### Measurement of the inelastic $pp$ cross-section at a centre-of-mass energy of 13 TeV The collaboration Aaij, Roel ; Adeva, Bernardo ; Adinolfi, Marco ; et al. JHEP 1806 (2018) 100, 2018. Inspire Record 1665223 The cross-section for inelastic proton-proton collisions at a centre-of-mass energy of 13 TeV is measured with the LHCb detector. The fiducial cross-section for inelastic interactions producing at least one prompt long-lived charged particle with momentum $p > 2$ GeV/$c$ in the pseudorapidity range $2 < \eta < 5$ is determined to be $\sigma_{\rm acc} = 62.2 \pm 0.2 \pm 2.5$ mb. The first uncertainty is the intrinsic systematic uncertainty of the measurement, the second is due to the uncertainty on the integrated luminosity. The statistical uncertainty is negligible. Extrapolation to full phase space yields the total inelastic proton-proton cross-section $\sigma_{\rm inel} = 75.4 \pm 3.0 \pm 4.5$ mb, where the first uncertainty is experimental and the second due to the extrapolation. An updated value of the inelastic cross-section at a centre-of-mass energy of 7 TeV is also reported. 3 data tables The cross-section for inelastic $pp$ collisions at a centre-of-mass energy $\sqrt{s} = 13$ TeV, yielding one or more prompt long-lived charged particles in the kinematic range $p > 2.0$ GeV/$c$ and $2.0 < \eta < 5.0$ (LHCb acceptance). The quoted uncertainty is almost completely systematic in nature, as the purely statistical uncertainty is found to be negligible. A particle is long-lived if its proper (mean) lifetime is larger than 30 ps, and it is prompt if it is produced directly in the $pp$ interaction or if none of its ancestors is long-lived.
The total cross-section for inelastic $pp$ collisions at a centre-of-mass energy $\sqrt{s} = 13$ TeV, extrapolated from Monte Carlo in a similar way to the measurement at $\sqrt{s}=7$ TeV. Update of the total cross-section for inelastic $pp$ collisions at a centre-of-mass energy $\sqrt{s} = 7$ TeV due to improved calibration of the luminosity scale. #### Energy dependence of exclusive $J/\psi$ photoproduction off protons in ultra-peripheral p-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV The collaboration Acharya, Shreyasi ; Torales - Acosta, Fernando ; Adamova, Dagmar ; et al. No Journal Information, 2018. Inspire Record 1693305 The ALICE Collaboration has measured the energy dependence of exclusive photoproduction of $J/\psi$ vector mesons off proton targets in ultra-peripheral p-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm NN}} = 5.02$ TeV. The e$^+$e$^-$ and $\mu^+\mu^-$ decay channels are used to measure the cross section as a function of the rapidity of the $J/\psi$ in the range $-2.5 < y < 2.7$, corresponding to an energy in the $\gamma$p centre-of-mass in the interval $40 < W_{\gamma\mathrm{p}} < 550$ GeV. The measurements, which are consistent with a power law dependence of the exclusive $J/\psi$ photoproduction cross section, are compared to previous results from HERA and the LHC and to several theoretical models. They are found to be compatible with previous measurements. 1 data table Differential cross sections as a function of rapidity for exclusive J/PSI photoproduction off protons in ultra-peripheral p-Pb collisions. The corresponding J/PSI photoproduction cross sections in bins of the GAMMA-P centre-of-mass, W(GAMMA P), are also presented. #### Measurement of differential cross sections and $W^+/W^-$ cross-section ratios for $W$ boson production in association with jets at $\sqrt{s}=8$ TeV with the ATLAS detector The collaboration Aaboud, Morad ; Aad, Georges ; Abbott, Brad ; et al. JHEP 1805 (2018) 077, 2018.
Inspire Record 1635273 This paper presents a measurement of the W boson production cross section and the W$^{+}$/W$^{-}$ cross-section ratio, both in association with jets, in proton-proton collisions at $\sqrt{s}=8$ TeV with the ATLAS experiment at the Large Hadron Collider. The measurement is performed in final states containing one electron and missing transverse momentum using data corresponding to an integrated luminosity of 20.2 fb$^{-1}$. Differential cross sections for events with at least one or two jets are presented for a range of observables, including jet transverse momenta and rapidities, the scalar sum of transverse momenta of the visible particles and the missing transverse momentum in the event, and the transverse momentum of the W boson. For a subset of the observables, the differential cross sections of positively and negatively charged W bosons are measured separately. In the cross-section ratio of W$^{+}$/W$^{-}$ the dominant systematic uncertainties cancel out, improving the measurement precision by up to a factor of nine. The observables and ratios selected for this paper provide valuable input for the up quark, down quark, and gluon parton distribution functions of the proton. 86 data tables Cross section for the production of W bosons for different inclusive jet multiplicities. Statistical correlation between bins in data for the cross section for the production of W bosons for different inclusive jet multiplicities. Differential cross sections for the production of $W^+$ bosons, $W^-$ bosons and the $W^+/W^-$ cross section ratio as a function of the inclusive jet multiplicity. #### Measurements of the $\mathrm {p}\mathrm {p}\rightarrow \mathrm{Z}\mathrm{Z}$ production cross section and the $\mathrm{Z}\rightarrow 4\ell$ branching fraction, and constraints on anomalous triple gauge couplings at $\sqrt{s} = 13\,\text {TeV}$ The collaboration Sirunyan, Albert M ; Tumasyan, Armen ; Adam, Wolfgang ; et al.
Eur.Phys.J. C78 (2018) 165, 2018. Inspire Record 1625296 Four-lepton production in proton-proton collisions, $\mathrm {p}\mathrm {p}\rightarrow (\mathrm{Z}/ \gamma ^*)(\mathrm{Z}/\gamma ^*) \rightarrow 4\ell$ , where $\ell = \mathrm {e}$ or $\mu$ , is studied at a center-of-mass energy of 13 $\,\text {TeV}$ with the CMS detector at the LHC. The data sample corresponds to an integrated luminosity of 35.9 $\,\text {fb}^{-1}$ . The ZZ production cross section, $\sigma (\mathrm {p}\mathrm {p}\rightarrow \mathrm{Z}\mathrm{Z}) = 17.2 \pm 0.5\,\text {(stat)} \pm 0.7\,\text {(syst)} \pm 0.4\,\text {(theo)} \pm 0.4\,\text {(lumi)} \text { pb}$ , measured using events with two opposite-sign, same-flavor lepton pairs produced in the mass region $60< m_{\ell ^+\ell ^-} < 120\,\text {GeV}$ , is consistent with standard model predictions. Differential cross sections are measured and are well described by the theoretical predictions. The Z boson branching fraction to four leptons is measured to be $\mathcal {B}(\mathrm{Z}\rightarrow 4\ell ) = 4.8 \pm 0.2\,\text {(stat)} \pm 0.2\,\text {(syst)} \pm 0.1\,\text {(theo)} \pm 0.1\,\text {(lumi)} \times 10^{-6}$ for events with a four-lepton invariant mass in the range $80< m_{4\ell } < 100\,\text {GeV}$ and a dilepton mass $m_{\ell \ell } > 4\,\text {GeV}$ for all opposite-sign, same-flavor lepton pairs. The results agree with standard model predictions. The invariant mass distribution of the four-lepton system is used to set limits on anomalous ZZZ and ZZ $\gamma$ couplings at 95% confidence level: $-0.0012<f_4^\mathrm{Z}<0.0010$ , $-0.0010<f_5^\mathrm{Z}<0.0013$ , $-0.0012<f_4^{\gamma }<0.0013$ , $-0.0012<f_5^{\gamma }< 0.0013$ . 14 data tables The measured total ZZ cross section using 2016 data. 
The first systematic uncertainty is the combined systematic uncertainty excluding luminosity and theoretical sources, the second is the theoretical uncertainty on the extrapolation from the selected region to the total phase space, the third is the luminosity uncertainty. The measured total ZZ cross section using 2015 and 2016 data. The first systematic uncertainty is the combined systematic uncertainty excluding luminosity and theoretical sources, the second is the theoretical uncertainty on the extrapolation from the selected region to the total phase space, the third is the luminosity uncertainty. The measured fiducial ZZ cross sections. The first systematic uncertainty is the combined systematic uncertainty excluding luminosity, the second is the luminosity uncertainty. #### Measurement of the inclusive and fiducial $t\bar{t}$ production cross-sections in the lepton+jets channel in $pp$ collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector The collaboration Aaboud, Morad ; Aad, Georges ; Abbott, Brad ; et al. Eur.Phys.J. C78 (2018) 487, 2018. Inspire Record 1644099 The inclusive and fiducial $t\bar{t}$ production cross-sections are measured in the lepton+jets channel using 20.2 fb$^{-1}$ of proton-proton collision data at a centre-of-mass energy of 8 TeV recorded with the ATLAS detector at the LHC. Major systematic uncertainties due to the modelling of the jet energy scale and b-tagging efficiency are constrained by separating selected events into three disjoint regions. In order to reduce systematic uncertainties in the most important background, the $W+$jets process is modelled using $Z$ + jets events in a data-driven approach. The inclusive $t\bar{t}$ cross-section is measured with a precision of 5.7% to be $\sigma _{\text {inc}}(t\bar{t}) = 248.3 \pm 0.7 \, ({\mathrm {stat.}}) \pm 13.4 \, ({\mathrm {syst.}}) \pm 4.7 \, ({\mathrm {lumi.}})~\text {pb}$, assuming a top-quark mass of 172.5 GeV.
The result is in agreement with the Standard Model prediction. The cross-section is also measured in a phase space close to that of the selected data. The fiducial cross-section is $\sigma _{\text {fid}}(t\bar{t}) = 48.8 \pm 0.1 \, ({\mathrm {stat.}}) \pm 2.0 \, ({\mathrm {syst.}}) \pm 0.9 \, ({\mathrm {lumi.}})~\text {pb}$ with a precision of 4.5%. 2 data tables The measured inclusive cross section. The first systematic uncertainty (sys_1) is the combined systematic uncertainty excluding luminosity, the second (sys_2) is the luminosity. The measured fiducial cross section. The first systematic uncertainty (sys_1) is the combined systematic uncertainty excluding luminosity, the second (sys_2) is the luminosity. #### Search for leptoquarks coupled to third-generation quarks in proton-proton collisions at $\sqrt{s}=$ 13 TeV The collaboration Sirunyan, Albert M ; Tumasyan, Armen ; Adam, Wolfgang ; et al. Phys.Rev.Lett. 121 (2018) 241802, 2018. Inspire Record 1694381 Three of the most significant measured deviations from standard model predictions, the enhanced decay rate for B→D(*)τν, hints of lepton universality violation in B→K(*)ℓℓ decays, and the anomalous magnetic moment of the muon, can be explained by the existence of leptoquarks (LQs) with large couplings to third-generation quarks and masses at the TeV scale. The existence of these states can be probed at the LHC in high energy proton-proton collisions. A novel search is presented for pair production of LQs coupled to a top quark and a muon using data at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb$^{-1}$, recorded by the CMS experiment. No deviation from the standard model prediction has been observed and scalar LQs decaying exclusively into tμ are excluded up to masses of 1420 GeV. The results of this search are combined with those from previous searches for LQ decays into tτ and bν, which excluded scalar LQs below masses of 900 and 1080 GeV.
Vector LQs are excluded up to masses of 1190 GeV for all possible combinations of branching fractions to tμ, tτ and bν. With this analysis, all relevant couplings of LQs with an electric charge of -1/3 to third-generation quarks are probed for the first time. 6 data tables Distributions for $M_{LQ}^{rec}$ (category A) after applying the full selection. All backgrounds are normalized according to the post-fit nuisance parameters based on the corresponding SM cross sections. Distributions for $S_{T}$ (category B) after applying the full selection and estimating the $t\overline{t}$ and DY+jets background contributions from data in category B. All backgrounds are normalized according to the post-fit nuisance parameters based on the corresponding SM cross sections. Observed upper limits on the production cross section for pair production of LQs decaying into a top quark and a muon or a $\tau$ lepton at 95% CL in the $M_{LQ} - B(LQ \rightarrow t\mu)$ plane. #### Search for an exotic decay of the Higgs boson to a pair of light pseudoscalars in the final state with two b quarks and two $\tau$ leptons in proton-proton collisions at $\sqrt{s}=$ 13 TeV The collaboration Sirunyan, Albert M ; Tumasyan, Armen ; Adam, Wolfgang ; et al. Phys.Lett. B785 (2018) 462, 2018. Inspire Record 1674926 A search for an exotic decay of the Higgs boson to a pair of light pseudoscalar bosons is performed for the first time in the final state with two b quarks and two $\tau$ leptons. The search is motivated in the context of models of physics beyond the standard model (SM), such as two Higgs doublet models extended with a complex scalar singlet (2HDM+S), which include the next-to-minimal supersymmetric SM (NMSSM). The results are based on a data set of proton-proton collisions corresponding to an integrated luminosity of 35.9 fb$^{-1}$, accumulated by the CMS experiment at the LHC in 2016 at a center-of-mass energy of 13 TeV.
Masses of the pseudoscalar boson between 15 and 60 GeV are probed, and no excess of events above the SM expectation is observed. Upper limits between 3 and 12% are set on the branching fraction $\mathcal{B}$(h $\to$ aa $\to$ 2$\tau$2b) assuming the SM production of the Higgs boson. Upper limits are also set on the branching fraction of the Higgs boson to two light pseudoscalar bosons in different 2HDM+S scenarios. Assuming the SM production cross section for the Higgs boson, the upper limit on this quantity is as low as 20% for a mass of the pseudoscalar of 40 GeV in the NMSSM. 1 data table Expected and observed 95% CL upper limits on (sigma(pp->h)/sigma(pp->hSM)) * B(h -> aa -> bbtautau) as a function of m(a), where h(SM) is the Higgs boson of the standard model, h is the observed particle with mass of 125 GeV, and a denotes a light Higgs-like state, as obtained from the 13 TeV data. #### Search for an exotic decay of the Higgs boson to a pair of light pseudoscalars in the final state of two muons and two $\tau$ leptons in proton-proton collisions at $\sqrt{s}=13$ TeV The collaboration Sirunyan, Albert M ; Tumasyan, Armen ; Adam, Wolfgang ; et al. JHEP 1811 (2018) 018, 2018. Inspire Record 1673011 A search for exotic Higgs boson decays to light pseudoscalars in the final state of two muons and two $\tau$ leptons is performed using proton-proton collision data recorded by the CMS experiment at the LHC at a center-of-mass energy of 13 TeV in 2016, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. Masses of the pseudoscalar boson between 15.0 and 62.5 GeV are probed, and no significant excess of data is observed above the prediction of the standard model. Upper limits are set on the branching fraction of the Higgs boson to two light pseudoscalar bosons in different types of two-Higgs-doublet models extended with a complex scalar singlet. 
1 data table Expected and observed 95% CL upper limits on (sigma(pp->h)/sigma(pp->hSM)) * B(h -> aa -> mumutautau) as a function of m(a), where h(SM) is the Higgs boson of the standard model, h is the observed particle with mass of 125 GeV, and a denotes a light Higgs-like state, as obtained from the 13 TeV data. #### Search for beyond the standard model Higgs bosons decaying into a $\mathrm{b\overline{b}}$ pair in pp collisions at $\sqrt{s} =$ 13 TeV The collaboration Sirunyan, Albert M ; Tumasyan, Armen ; Adam, Wolfgang ; et al. JHEP 1808 (2018) 113, 2018. Inspire Record 1675818 A search for Higgs bosons that decay into a bottom quark-antiquark pair and are accompanied by at least one additional bottom quark is performed with the CMS detector. The data analyzed were recorded in proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=13$ TeV at the LHC, corresponding to an integrated luminosity of 35.7 fb$^{−1}$. The final state considered in this analysis is particularly sensitive to signatures of a Higgs sector beyond the standard model, as predicted in the generic class of two Higgs doublet models (2HDMs). No signal above the standard model background expectation is observed. Stringent upper limits on the cross section times branching fraction are set for Higgs bosons with masses up to 1300 GeV. The results are interpreted within several MSSM and 2HDM scenarios. 3 data tables Expected and observed 95% CL upper limits on sigma(pp->b+H(MSSM)+X) * B(H(MSSM) -> bb) in pb as a function of m(H(MSSM)), where H(MSSM) denotes a heavy Higgs-like state like the H and A bosons of MSSM and 2HDM, as obtained from the 13 TeV data. Expected and observed 95% CL upper limits on tan(beta) as a function of m(A) in the mhmodp benchmark scenario for a higgsino mass parameter of mu=+200 GeV. Since theoretical predictions are not reliable for tan(beta)>60, entries for which tan(beta) would exceed this value are indicated by N/A. 
Expected and observed 95% CL upper limits on tan(beta) as a function of m(A) in the hMSSM benchmark scenario. Since theoretical predictions are not reliable for tan(beta)>60, entries for which tan(beta) would exceed this value are indicated by N/A. #### Search for the production of a long-lived neutral particle decaying within the ATLAS hadronic calorimeter in association with a $Z$ boson from $pp$ collisions at $\sqrt{s} = 13$ TeV The collaboration Aaboud, Morad ; Aad, Georges ; Abbott, Brad ; et al. No Journal Information, 2018. Inspire Record 1702261 This Letter presents a search for the production of a long-lived neutral particle decaying within the ATLAS hadronic calorimeter, in association with a Standard Model $Z$ boson produced via an intermediate scalar boson, where $Z \rightarrow l^+ l^-$ ($l=e,\mu$). The data used were collected by the ATLAS detector during 2015 and 2016 $pp$ collisions with a center-of-mass energy of $\sqrt{s} = 13$ TeV at the Large Hadron Collider and correspond to an integrated luminosity of 36.1 fb$^{-1}$. No significant excess of events is observed above the expected background. Limits on the production cross section of the scalar boson times its decay branching fraction into the long-lived neutral particle are derived as a function of the mass of the intermediate scalar boson, the mass of the long-lived neutral particle, and its $c\tau$ from a few centimeters to one hundred meters. In the case that the intermediate scalar boson is the SM Higgs boson, its decay branching fraction to a long-lived neutral particle with a $c\tau$ approximately between 0.1 m and 7 m is excluded with a 95% confidence level up to 10% for $m_{Z_d}$ between 5 and 15 GeV. 1 data table The product of acceptance and efficiency for all signal MC samples.
#### Suppression of $\Lambda(1520)$ resonance production in central Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV The collaboration Acharya, Shreyasi ; Torales - Acosta, Fernando ; Adamova, Dagmar ; et al. Phys.Rev.Lett., 2018. Inspire Record 1672806 The production yield of the $\Lambda(1520)$ baryon resonance is measured at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The measurement is performed in the $\Lambda(1520)\rightarrow {\rm pK}^{-}$ (and charge conjugate) hadronic decay channel as a function of the transverse momentum ($p_{\rm T}$) and collision centrality. The $p_{\rm T}$-integrated production rate of $\Lambda(1520)$ relative to $\Lambda$ in central collisions is suppressed by about a factor of 2 with respect to peripheral collisions. This is the first observation of the suppression of a baryonic resonance at LHC and the first evidence of $\Lambda(1520)$ suppression in heavy-ion collisions. The measured $\Lambda(1520)/\Lambda$ ratio in central collisions is smaller than the value predicted by the statistical hadronisation model calculations. The shape of the measured $p_{\rm T}$ distribution and the centrality dependence of the suppression are reproduced by the EPOS3 Monte Carlo event generator. The measurement adds further support to the formation of a dense hadronic phase in the final stages of the evolution of the fireball created in heavy-ion collisions, lasting long enough to cause a significant reduction in the observable yield of short-lived resonances. 5 data tables $p_{\rm T}$-differential yields of $\Lambda$(1520) (sum of particle and anti-particle states) at midrapidity in the 0-20% centrality class. The uncertainty 'syst,uncorrelated' indicates the systematic uncertainty after removing the contributions common to all centrality classes $p_{\rm T}$-differential yields of $\Lambda$(1520) (sum of particle and anti-particle states) at midrapidity in the 20-50% centrality class. 
The uncertainty 'syst,uncorrelated' indicates the systematic uncertainty after removing the contributions common to all centrality classes $p_{\rm T}$-differential yields of $\Lambda$(1520) (sum of particle and anti-particle states) at midrapidity in the 50-80% centrality class. The uncertainty 'syst,uncorrelated' indicates the systematic uncertainty after removing the contributions common to all centrality classes #### Version 3 Search for supersymmetry in final states with charm jets and missing transverse momentum in 13 TeV $pp$ collisions with the ATLAS detector The collaboration Aaboud, Morad ; Aad, Georges ; Abbott, Brad ; et al. No Journal Information, 2018. Inspire Record 1672099
https://math.stackexchange.com/questions/175240/solve-system-of-nonlinear-differential-equations
# Solve system of nonlinear differential equations

I am trying to solve a large system of differential equations. Ideally, I would like to solve it exactly, but if not, can anyone suggest a numerical method? In all its generality, the system I am trying to solve is like this: (here, $x = x(t) \in R^n$, and $\dot x = dx/dt$) $$(a_i + P_ix/\Vert P_ix \Vert)^T \dot x = -\Vert P_ix \Vert$$ for $i = 1,\ldots,n$. Here all $P_i$ are positive definite matrices, and the set of $a_i$ is linearly independent. Also, $\Vert \cdot \Vert$ is the 2-norm. It would help me a great deal if someone can help me to solve even a highly restricted special case of it, where $n=2$, $a_i = e_i$ (the $i$-th vector of the canonical basis), and $P_i = I$ for all $i$. Namely, this system: $$(e_i + x/\Vert x \Vert)^T \dot x = -\Vert x \Vert$$ for all $i$. Thanks a lot, Daniel.

Solutions are likely to hit a singularity when the matrix $M$ with rows $(a_i + P_i x/\|P_i x\|)^T$ becomes singular or when $x$ approaches the origin. In your $n=2$ example with $u = x/\|x\|$, I get $\det(M) = 1 + \sum_j u_j$, and you'll get a singularity if that hits $0$. That does happen, e.g. with initial conditions $x_1(0)=1$, $x_2(0)=0$, at approximately $t= 1.2464504$ according to Maple's dsolve(..., numeric). EDIT: Hmm, in fact $x_1 - x_2$ is constant in this system, and you get a singularity when $x_1 = 0$, $x_2 < 0$. If $d = x_1 - x_2$, the system has closed-form implicit solutions $$t+\ln \left( \left( 2 x_1 \left( t \right) +d \right) \sqrt {2}/2+\sqrt {2\, \left( x_1 \left( t \right) \right) ^{2}+2\,x_1 \left( t \right) d+{d}^{2}}/2 \right) \sqrt {2 }/2 +\ln \left( 2\, \left( x_1 \left( t \right) \right) ^{2}+ 2\,x_1 \left( t \right) d+{d}^{2} \right)/2 +c=0$$
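For the $n=2$ special case the system can be solved explicitly for $\dot x$: with $u = x/\Vert x\Vert$, Cramer's rule on $M \dot x = -\Vert x\Vert (1,1)^T$ gives $\dot x_1 = \dot x_2 = -\Vert x\Vert/\det(M)$ with $\det(M) = 1 + u_1 + u_2$, which makes the invariant $x_1 - x_2$ immediate. A minimal numerical sketch (a fixed-step RK4 integrator written for illustration; the step size and stopping time are arbitrary choices, picked to stay short of the singularity near $t \approx 1.246$):

```python
import math

def rhs(x1, x2):
    r = math.hypot(x1, x2)
    u1, u2 = x1 / r, x2 / r
    det = 1.0 + u1 + u2          # det(M); the solution blows up when this hits 0
    # Cramer's rule on M xdot = -r*(1,1): both components reduce to -r/det
    return -r / det, -r / det

# classic fixed-step RK4 from x(0) = (1, 0), stopping before t ~ 1.246
x1, x2, t, h = 1.0, 0.0, 0.0, 1e-4
while t < 1.2:
    k1 = rhs(x1, x2)
    k2 = rhs(x1 + 0.5 * h * k1[0], x2 + 0.5 * h * k1[1])
    k3 = rhs(x1 + 0.5 * h * k2[0], x2 + 0.5 * h * k2[1])
    k4 = rhs(x1 + h * k3[0], x2 + h * k3[1])
    x1 += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    x2 += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    t += h
# x1 - x2 should still equal the initial difference d = 1
```

The difference $x_1 - x_2$ is preserved to rounding error because both components receive identical increments at every step, matching the observation that $x_1 - x_2$ is constant.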
https://mathspace.co/textbooks/syllabuses/Syllabus-983/topics/Topic-19268/subtopics/Subtopic-258816/?textbookIntroActiveTab=overview
# 5.04 Solving one-step equations

One way to solve an equation is to use inverse operations along with the properties of equality. An inverse operation is an operation that "undoes" another operation. A property of equality is an operation that produces a new equation with the same solution as the original.

## Solving equations with addition or subtraction

Addition and subtraction are inverse operations. For example, adding two to a number is the opposite of subtracting two. As we saw in modeling balanced equations, we can add or subtract the same amount on both sides of an equation, and it will remain true: adding or subtracting the same amount on both sides keeps the equation balanced.

**Addition property of equality:** adding the same number to each side of an equation produces an equivalent equation. Example: if $x-2=7$, then $x-2+2=7+2$.

**Subtraction property of equality:** subtracting the same number from each side of an equation produces an equivalent equation. Example: if $x+5=7$, then $x+5-5=7-5$.

Let's apply our knowledge of inverses and the addition and subtraction properties of equality to solve some equations.

#### Worked examples

##### Question 1

Solve for $x$ in the equation $x+8=15$, showing all of your work algebraically.

Think: The number $8$ is being added to the variable $x$. In order to undo the operation of adding $8$, we should subtract $8$ from both sides of the equation.

Do:

- $x+8=15$ (write the original equation)
- $x+8-8=15-8$ (subtract $8$ from each side)
- $x=7$ (simplify by doing the subtraction)

Reflect: We can check our answer by substituting it back into the original equation: $7+8=15$, so the solution checks.
##### Question 2

Solve for $m$ in the equation $m-6=8$, showing all of your work algebraically.

Think: The number $6$ is being subtracted from the variable $m$. In order to undo the operation of subtracting $6$, we should add $6$ to both sides of the equation.

Do:

- $m-6=8$ (write the original equation)
- $m-6+6=8+6$ (add $6$ to each side)
- $m=14$ (simplify by doing the addition)

Reflect: We can check our answer by substituting it back into the original equation: $14-6=8$, so the solution checks.

#### Practice questions

##### Question 3

Solve: $21=x+13$

##### Question 4

Solve: $x-4=10$

## Solving equations with multiplication or division

Multiplication and division are also inverse operations. For example, multiplying a number by two is the opposite of dividing it by two. As we saw in modeling balanced equations, we can multiply or divide both sides of an equation by the same nonzero amount, and it will remain true: multiplying or dividing both sides by the same nonzero amount keeps the equation balanced.

**Multiplication property of equality:** multiplying each side of an equation by the same nonzero number produces an equivalent equation. Example: if $\frac{x}{12}=4$, then $\frac{x}{12}\times12=4\times12$.

**Division property of equality:** dividing each side of an equation by the same nonzero number produces an equivalent equation. Example: if $6x=12$, then $\frac{6x}{6}=\frac{12}{6}$.

Let's apply our knowledge of inverses and the multiplication and division properties of equality to solve some equations.

#### Worked examples

##### Question 5

Solve for $x$ in the equation $3x=15$, showing all of your work algebraically.

Think: The number $3$ is being multiplied by the variable $x$.
In order to undo the operation of multiplying by $3$, we should divide each side of the equation by $3$.

Do:

- $3x=15$ (write the original equation)
- $\frac{3x}{3}=\frac{15}{3}$ (divide each side by $3$)
- $x=5$ (simplify)

Reflect: We can check our answer by substituting it back into the original equation: $3\times5=15$, so the solution checks.

##### Question 6

Solve for $w$ in the equation $8=\frac{w}{2}$, showing all of your work algebraically.

Think: Notice that the variable is on the right side, but this does not change anything about how we solve it. The variable $w$ is being divided by $2$. In order to undo the operation of dividing by $2$, we should multiply each side of the equation by $2$.

Do:

- $8=\frac{w}{2}$ (write the original equation)
- $8\times2=\frac{w}{2}\times2$ (multiply each side by $2$)
- $16=w$ (simplify)

Reflect: We can check our answer by substituting it back into the original equation: $8=\frac{16}{2}$, so the solution checks.

#### Practice questions

##### Question 7

Solve: $5x=45$

##### Question 8

Solve: $\frac{x}{8}=6$

### Outcomes

#### 6.EE.B.7

Solve real-world and mathematical problems by writing and solving equations of the form $x+p=q$ and $px=q$ for cases in which $p$, $q$ and $x$ are all nonnegative rational numbers.
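The inverse-operation procedure in the worked examples is mechanical enough to express in code. A small sketch (Python; the function name and the way an equation is encoded are my own illustration, not part of the lesson) that solves each one-step form by applying the inverse operation to the other side:

```python
def solve_one_step(op, b, c):
    """Solve an equation of the form (x op b) = c for x by applying
    the inverse operation to the other side, as in the worked examples."""
    inverse = {
        "+": lambda c, b: c - b,   # undo addition with subtraction
        "-": lambda c, b: c + b,   # undo subtraction with addition
        "*": lambda c, b: c / b,   # undo multiplication with division (b nonzero)
        "/": lambda c, b: c * b,   # undo division with multiplication (b nonzero)
    }
    return inverse[op](c, b)

# The four worked examples: x+8=15, m-6=8, 3x=15, w/2=8
answers = [solve_one_step("+", 8, 15), solve_one_step("-", 6, 8),
           solve_one_step("*", 3, 15), solve_one_step("/", 2, 8)]
```

Each answer can be checked the same way the "Reflect" steps do: substitute it back into the original equation.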
http://mathhelpforum.com/algebra/177280-quadratic-inequality.html
# Math Help - Quadratic inequality

1. ## Quadratic inequality

$x^2 + 2x - 3 > 0$

I don't understand when to use the union symbol after using my three test points. I know it factors as $(x-1)(x+3)$, so $x = -3$ or $x = 1$.

2. Originally Posted by tonie

It is clear that $x=0$ is not a solution, so $[-3,1]$ is not in the solution set; the solution is the complement of that set: $(-\infty,-3)\cup(1,\infty)$.

3. Yes, but I need to understand the rule you use for deciding when to use the union and when not to.

4. I don't understand the rule for when you should use the union and when not to use it for the problem above.

5. Originally Posted by tonie

You are asking for a hard and fast rule which does not exist. In general, quadratic inequalities can have the whole line as a solution set, such as $x^2+1\ge 0$. Or the solution set can be between two real numbers, as in $(x+2)(x-3)\le 0$, where the set is $[-2,3]$. Or outside two numbers: $(x-2)(x+3)\ge 0$, with solution $(-\infty ,-3]\cup [2,\infty)$. But that is only a rule of thumb.

6. I am sorry but I am not getting it. I know it must be something simple because of the way you are responding to my question, but I need to know when the correct time is to use a union rather than simply an interval like $(-3,1)$.

7. Let $f(x) = ax^2 + bx + c$, and let $D = b^2 - 4ac$ be its discriminant.

- If $D>0$, then $f(x)$ has two real roots. Between them the sign of $f(x)$ is the opposite of the sign of $a$, and outside them it has the same sign as $a$.
- If $D=0$, $f(x)$ has the same sign as $a$ except at its single zero.
- If $D<0$, $f(x)$ has the same sign as $a$ everywhere.

8. Originally Posted by tonie
Are you saying that you do not know what the union symbol means? The fact that $x= -3$ and $x= 1$ make the two sides equal tells you that the inequality goes one way or the other throughout each of the three intervals, $(-\infty, -3)$, $(-3, 1)$, and $(1, \infty)$. Plato told you that the solution cannot be $(-3, 1)$ because $x= 0$ is in that interval and $0$ does not satisfy the inequality. You really should, then, check one point in each of the other intervals. $x= 2$ is in the interval $(1, \infty)$, and $2^2+ 2(2)- 3= 4+ 4- 3= 5> 0$, so $x= 2$ satisfies the inequality and, therefore, so does every number in $(1, \infty)$. $x= -4$ is in $(-\infty, -3)$, and $(-4)^2+ 2(-4)- 3= 16- 8- 3= 5> 0$, so $x= -4$ satisfies it and, therefore, so does every number in $(-\infty, -3)$. That is, any number in $(-\infty, -3)$ or in $(1, \infty)$ will satisfy the inequality. A point is in $A\cup B$ if and only if it is in $A$ or in $B$, by definition.

9. You answered the question. Thank you for understanding what I was trying to convey. I assumed I broke it down clearly to what I did and didn't understand, but I guess people are in a rush and only wish to explain the answer. Thank you so much for being patient and thoroughly explaining everything.

Originally Posted by HallsofIvy
10. $y = x^2 + 2x - 3$ has coefficient $1 > 0$ on $x^2$, hence the parabola opens up. $x^2 + 2x - 3 = (x + 3)(x - 1) > 0$ has roots $x = -3$ and $x = 1$; since the parabola is positive outside its roots, $x < -3$ or $x > 1$.
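The test-point procedure discussed in this thread is easy to automate. A sketch (Python; it assumes the real roots are already known, e.g. from factoring) that checks one point in each interval, mirroring the method above for $x^2 + 2x - 3 > 0$:

```python
def solution_intervals(f, roots):
    """Return the open intervals (between and outside the sorted roots)
    on which f is positive, by testing one point in each interval."""
    rs = sorted(roots)
    # one test point left of all roots, between consecutive roots, and right of all
    points = [rs[0] - 1] + [(a + b) / 2 for a, b in zip(rs, rs[1:])] + [rs[-1] + 1]
    ends = [float("-inf")] + rs + [float("inf")]
    return [(ends[i], ends[i + 1]) for i, p in enumerate(points) if f(p) > 0]

f = lambda x: x**2 + 2*x - 3          # factors as (x - 1)(x + 3)
intervals = solution_intervals(f, [-3, 1])
```

The union symbol in the answer, $(-\infty,-3)\cup(1,\infty)$, simply collects the intervals this returns: a point is a solution if it lies in any one of them.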
http://www.pims.math.ca/scientific-event/131118-gpsdo
## Geometry and Physics Seminar: Dragos Oprea

- Date: 11/18/2013
- Time: 15:10
- Lecturer: Dragos Oprea (San Diego)
- Location: University of British Columbia, ESB 4127
- Topic: The Chern classes of the Verlinde bundle

Description: The Verlinde bundles over the moduli space $M_g$ of smooth curves have as fibers spaces of generalized theta functions, i.e., spaces of global sections of determinant line bundles over moduli of parabolic bundles. I will discuss a formula for the Chern classes of the Verlinde bundles, as well as extensions over the compactification $\overline{M}_g$.
http://www.gaussianwaves.com/2013/01/characteristics-of-noise-received-by-software-defined-radio-part-2/
### Abstract

To study the characteristics of the noise received by a Software Defined Radio (SDR) and the characteristics of a signal transmitted from one SDR to another using an Intermediate Frequency (IF) and Radio Frequency (RF).

## Signal and its characteristics

### 2.1 Introduction

A baseband signal of frequency F Hz is generated, upconverted to IF, and transmitted over the channel from one SDR to the other.

### 2.2 Sine wave generation

In this model, we generated samples of a sine wave at different frequencies and sampling rates. The samples were upsampled by 8 in Matlab, pulse shaped using an RRC filter, and given to the SDR, which upconverts the signal to IF for transmission over the channel from one SDR to the other. The plots of the generated sine wave are shown in the figures.

The first figure shows the upsampled version of the baseband signal. There are zeros between any two samples because the signal is padded with zeros for upsampling. The next figure shows the pulse-shaped version of the previous one. Pulse shaping was done using a raised cosine filter with roll-off factor 0.9.

### 2.3 Autocorrelation of received signal

The transmitted samples from the SDR were received by the other SDR. The received samples were analyzed using the autocorrelation function given in equation (1.1) and the Power Spectral Density (PSD), which is the DFT of the received samples. The DFT is performed using the Fast Fourier Transform (FFT) in Matlab. The received signal, its PSD, and its autocorrelation are shown in the following figure.

Figure 2.3: Received signal of frequency 120 kHz, its PSD and autocorrelation.

The figure has three plots. The first plot is the sine wave received from the channel, as transmitted by the SDR. The second plot is the PSD of the signal; the spikes we see are due to its sinusoidal nature.
Since its frequency is 120 kHz, we can see two spikes at ±120 kHz. The last plot is the autocorrelation of the signal received by the SDR. This autocorrelation was computed without the 1/(N−k) normalization factor in equation (1.1); had that been included, we would expect to recover the sine wave actually received by the SDR, i.e., the first plot.

### 2.4 Conclusion

From the graphs above we can infer that the samples were received with almost zero noise, and we recovered the samples transmitted by the SDR.
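The unnormalized autocorrelation used in Section 2.3 (equation (1.1) without the 1/(N−k) factor, as noted above) can be sketched in a few lines of Python. The sine parameters below are illustrative, not the 120 kHz signal from the experiment:

```python
import math

def autocorr(x):
    """r[k] = sum_n x[n] * x[n+k]: the unnormalized sample autocorrelation."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(n)]

# one illustrative unit-amplitude sine: 8 samples per cycle, 16 whole cycles
N, spc = 128, 8
x = [math.sin(2 * math.pi * i / spc) for i in range(N)]
r = autocorr(x)
```

r[0] is the signal energy (N/2 for a unit sine over whole cycles), and r[k] oscillates at the signal frequency — which is why the autocorrelation plot in Figure 2.3 looks sinusoidal. Dividing r[k] by N−k would recover a (scaled) sine, as the article observes.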
http://www.ck12.org/book/CK-12-Geometry-Second-Edition/r5/section/6.1/
# 6.1: Angles in Polygons

Difficulty Level: At Grade. Created by: CK-12.

## Learning Objectives

- Extend the concept of interior and exterior angles from triangles to convex polygons.
- Find the sums of interior angles in convex polygons.
- Identify the special properties of interior angles in convex quadrilaterals.

## Review Queue

1. Find the measure of $x$ and $y$.
2. Find $w^\circ, x^\circ, y^\circ$, and $z^\circ$.
3. What is $w^\circ + y^\circ + z^\circ$?
4. What two angles add up to $y^\circ$?
5. What are $72^\circ, 59^\circ$, and $x^\circ$ called? What are $w^\circ, y^\circ$, and $z^\circ$ called?

Know What? To the right is a picture of Devils Postpile, near Mammoth Lakes, California. These posts are cooled lava (called columnar basalt), and as the lava pools and cools, it would ideally form regular hexagonal columns. However, variations in cooling caused some columns to be imperfect or pentagonal. First, define regular in your own words. Then, what is the sum of the angles in a regular hexagon? What would each angle be?

## Interior Angles in Convex Polygons

Recall from Chapter 4 that interior angles are the angles inside a closed figure with straight sides. Even though this concept was introduced with triangles, it can be extended to any polygon. As you can see in the images below, a polygon has the same number of interior angles as it does sides. From Chapter 1, we learned that a diagonal connects two non-adjacent vertices of a convex polygon. Also, recall that the sum of the angles in a triangle is $180^\circ$. What about other polygons?
Investigation 6-1: Polygon Sum Formula

Tools Needed: paper, pencil, ruler, colored pencils (optional)

1. Draw a quadrilateral, pentagon, and hexagon.
2. Cut each polygon into triangles by drawing all the diagonals from one vertex. Count the number of triangles. Make sure none of the triangles overlap.
3. Make a table with the information below.

| Name of Polygon | Number of Sides | Number of $\triangle s$ from one vertex | (Column 3) $\times$ ($^\circ$ in a $\triangle$) | Total Number of Degrees |
| --- | --- | --- | --- | --- |
| Quadrilateral | 4 | 2 | $2 \times 180^\circ$ | $360^\circ$ |
| Pentagon | 5 | 3 | $3 \times 180^\circ$ | $540^\circ$ |
| Hexagon | 6 | 4 | $4 \times 180^\circ$ | $720^\circ$ |

4. Do you see a pattern? Notice that the total number of degrees goes up by $180^\circ$. So, if the number of sides is $n$, then the number of triangles from one vertex is $n - 2$. Therefore, the formula would be $(n - 2) \times 180^\circ$.

Polygon Sum Formula: For any $n$-gon, the sum of the interior angles is $(n - 2) \times 180^\circ$.

Example 1: Find the sum of the interior angles of an octagon.

Solution: Use the Polygon Sum Formula and set $n = 8$: $(8 - 2) \times 180^\circ = 6 \times 180^\circ = 1080^\circ$.

Example 2: The sum of the interior angles of a polygon is $1980^\circ$. How many sides does this polygon have?

Solution: Use the Polygon Sum Formula and solve for $n$.
$$(n - 2) \times 180^\circ = 1980^\circ$$
$$180^\circ n - 360^\circ = 1980^\circ$$
$$180^\circ n = 2340^\circ$$
$$n = 13$$

The polygon has $13$ sides.

Example 3: How many degrees does each angle in an equiangular nonagon have?

Solution: First we need to find the sum of the interior angles in a nonagon; set $n = 9$: $(9 - 2) \times 180^\circ = 7 \times 180^\circ = 1260^\circ$. Second, because the nonagon is equiangular, every angle is equal. Dividing $1260^\circ$ by $9$, each angle is $140^\circ$.

Equiangular Polygon Formula: For any equiangular $n$-gon, the measure of each angle is $\frac{(n-2)\times 180^\circ}{n}$.

Regular Polygon: When a polygon is equilateral and equiangular. It is important to note that in the Equiangular Polygon Formula, the word equiangular can be substituted with regular.

Example 4: Algebra Connection. Find the measure of $x$.

Solution: From our investigation, we found that a quadrilateral has $360^\circ$. We can write an equation to solve for $x$:

$$89^\circ+(5x-8)^\circ+(3x+4)^\circ+51^\circ=360^\circ$$
$$8x=224^\circ$$
$$x=28^\circ$$

## Exterior Angles in Convex Polygons

Recall that an exterior angle is an angle on the outside of a polygon, formed by extending a side of the polygon (Chapter 4). As you can see, there are two sets of exterior angles for any vertex on a polygon. It does not matter which set you use, because one set consists of the vertical angles of the other, making the measurements equal. In the picture to the left, the color-matched angles are vertical angles and congruent. In Chapter 4, we introduced the Exterior Angle Sum Theorem, which stated that the exterior angles of a triangle add up to $360^\circ$.
Let's extend this theorem to all polygons.

Investigation 6-2: Exterior Angle Tear-Up

Tools Needed: pencil, paper, colored pencils, scissors

1. Draw a hexagon like the hexagons above. Color in the exterior angles as well.
2. Cut out each exterior angle and label them 1-6.
3. Fit the six angles together by putting their vertices together. What happens?

The angles all fit around a point, meaning that the exterior angles of a hexagon add up to $360^\circ$, just like a triangle. We can say this is true for all polygons.

Exterior Angle Sum Theorem: The sum of the exterior angles of any polygon is $360^\circ$.

Proof of the Exterior Angle Sum Theorem

Given: Any $n$-gon with $n$ sides, $n$ interior angles and $n$ exterior angles.

Prove: the $n$ exterior angles add up to $360^\circ$.

NOTE: The interior angles are $x_1, x_2, \ldots, x_n$. The exterior angles are $y_1, y_2, \ldots, y_n$.

1. Any $n$-gon with $n$ sides, $n$ interior angles and $n$ exterior angles. (Given)
2. $x_n^\circ$ and $y_n^\circ$ are a linear pair. (Definition of a linear pair)
3. $x_n^\circ$ and $y_n^\circ$ are supplementary. (Linear Pair Postulate)
4. $x_n^\circ + y_n^\circ = 180^\circ$ (Definition of supplementary angles)
5. $(x_1^\circ+x_2^\circ+\ldots+x_n^\circ)+(y_1^\circ+ y_2^\circ+\ldots+ y_n^\circ)=180^\circ n$ (Sum of all interior and exterior angles in an $n$-gon)
6. $(n-2)180^\circ=(x_1^\circ+ x_2^\circ+\ldots+x_n^\circ)$ (Polygon Sum Formula)
7. $180^\circ n=(n-2)180^\circ+(y_1^\circ+ y_2^\circ+\ldots+ y_n^\circ)$ (Substitution PoE)
8. $180^\circ n=180^\circ n-360^\circ+(y_1^\circ+ y_2^\circ+\ldots+ y_n^\circ)$ (Distributive PoE)
9. $360^\circ=(y_1^\circ+ y_2^\circ+\ldots+ y_n^\circ)$ (Subtraction PoE)

Example 5: What is $y$?

Solution: $y$ is an exterior angle, as are all the other given angle measures. Exterior angles add up to $360^\circ$, so set up an equation:

$$70^\circ + 60^\circ + 65^\circ + 40^\circ + y = 360^\circ$$
$$y = 125^\circ$$

Example 6: What is the measure of each exterior angle of a regular heptagon?

Solution: Because the polygon is regular, each interior angle is equal. This also means that all the exterior angles are equal. The exterior angles add up to $360^\circ$, so each angle is $\frac{360^\circ}{7} \approx 51.43^\circ$.

Know What? Revisited: A regular polygon has congruent sides and angles. A regular hexagon has $(6-2)180^\circ=4\cdot180^\circ=720^\circ$ total degrees. Each angle would be $720^\circ$ divided by $6$, or $120^\circ$.

## Review Questions

1. Fill in the table.

| # of sides | # of $\triangle s$ from one vertex | $\triangle s \times 180^\circ$ (sum) | Each angle in a regular $n$-gon | Sum of the exterior angles |
| --- | --- | --- | --- | --- |
| 3 | 1 | $180^\circ$ | $60^\circ$ | |
| 4 | 2 | $360^\circ$ | $90^\circ$ | |
| 5 | 3 | $540^\circ$ | $108^\circ$ | |
| 6 | 4 | $720^\circ$ | $120^\circ$ | |
| 7 | | | | |
| 8 | | | | |
| 9 | | | | |
| 10 | | | | |
| 11 | | | | |
| 12 | | | | |

1. What is the sum of the angles in a 15-gon?
2. What is the sum of the angles in a 23-gon?
3.
The sum of the interior angles of a polygon is $4320^\circ$. How many sides does the polygon have?
4. The sum of the interior angles of a polygon is $3240^\circ$. How many sides does the polygon have?
5. What is the measure of each angle in a regular 16-gon?
6. What is the measure of each angle in an equiangular 24-gon?
7. What is the measure of each exterior angle of a dodecagon?
8. What is the measure of each exterior angle of a 36-gon?
9. What is the sum of the exterior angles of a 27-gon?
10. If the measure of one interior angle of a regular polygon is $160^\circ$, how many sides does it have?
11. How many sides does a regular polygon have if the measure of one of its interior angles is $168^\circ$?
12. If the measure of one interior angle of a regular polygon is $158\frac{14}{17}^\circ$, how many sides does it have?
13. How many sides does a regular polygon have if the measure of one exterior angle is $15^\circ$?
14. If the measure of one exterior angle of a regular polygon is $36^\circ$, how many sides does it have?
15. How many sides does a regular polygon have if the measure of one exterior angle is $32\frac{8}{11}^\circ$?

For questions 11-20, find the measure of the missing variable(s).

1. The interior angles of a pentagon are $x^\circ, x^\circ, 2x^\circ, 2x^\circ$, and $2x^\circ$. What is the measure of the larger angles?
2. The exterior angles of a quadrilateral are $x^\circ, 2x^\circ, 3x^\circ$, and $4x^\circ$. What is the measure of the smallest angle?
3. The interior angles of a hexagon are $x^\circ, (x + 1)^\circ, (x + 2)^\circ, (x + 3)^\circ, (x + 4)^\circ$, and $(x + 5)^\circ$. What is $x$?
4.
Challenge: Each interior angle forms a linear pair with an exterior angle. In a regular polygon you can use two different formulas to find the measure of each exterior angle: one is $\frac{360^\circ}{n}$ and the other is $180^\circ - \frac{(n-2)180^\circ}{n}$ ($180^\circ$ minus the Equiangular Polygon Formula). Use algebra to show these two expressions are equivalent.
5. Angle Puzzle: Find the measures of the lettered angles below, given that $m \parallel n$.

## Review Queue Answers

1. $72^\circ + (7x+3)^\circ + (3x+5)^\circ = 180^\circ$, so $10x + 80^\circ = 180^\circ$, $10x = 100^\circ$, and $x = 10^\circ$.
2. $(5x+17)^\circ + (3x-5)^\circ = 180^\circ$, so $8x + 12^\circ = 180^\circ$, $8x = 168^\circ$, and $x = 21^\circ$.

1. $w = 108^\circ, x = 49^\circ, y = 131^\circ, z = 121^\circ$
2. $360^\circ$
3. $59^\circ + 72^\circ$
4. interior angles; exterior angles
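The three formulas in this section (Polygon Sum, Equiangular Polygon, and Exterior Angle Sum) can be bundled into a short sketch (Python; the function names are my own) and checked against the worked examples:

```python
def interior_sum(n):
    """Polygon Sum Formula: (n - 2) * 180 degrees, for any n-gon."""
    return (n - 2) * 180

def regular_interior_angle(n):
    """Equiangular Polygon Formula: each interior angle of a regular n-gon."""
    return interior_sum(n) / n

def regular_exterior_angle(n):
    """Exterior angles always total 360 degrees, so each one is 360 / n."""
    return 360 / n

def sides_from_interior_sum(total):
    """Invert the Polygon Sum Formula, as in Example 2."""
    return total // 180 + 2
```

For instance, `interior_sum(8)` reproduces Example 1 ($1080^\circ$ for an octagon), `sides_from_interior_sum(1980)` reproduces Example 2 ($13$ sides), and `regular_exterior_angle(7)` reproduces Example 6 ($\approx 51.43^\circ$).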
https://dentgoogmimes.host/business/bibtex-package.php
# Bibtex package

An introduction to the usage of BibTeX in combination with LaTeX. To get started, create a plain text file and apply the BibTeX file format. BibTeX processes bibliographies for LaTeX and related tools: it stores all references in an external, flat-file database, and the citation style is specified in the document itself, usually with the help of a LaTeX citation-style package such as natbib, which is added to the preamble so that LaTeX formats the bibliography accordingly. A custom bibliography style file can also be generated with the custom-bib package.

Note: if you are starting from scratch, it is often recommended to use biblatex instead, since that package provides localization in several languages and is actively developed. BibLaTeX is a complete reimplementation of the bibliographic facilities provided by LaTeX; formatting of the bibliography is entirely controlled by LaTeX macros. Its reference manual is a systematic guide to the package, and the sample documents which ship with biblatex give a good first impression. With biblatex you include the package and cite with commands such as \autocite.

(Separately, there is also an R package named bibtex, a utility to parse a BibTeX file.)
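A minimal sketch of the BibTeX-plus-natbib workflow described above (the file names and the sample entry are illustrative, not from this page):

```latex
% refs.bib: a plain-text BibTeX database with one entry (illustrative)
@book{halmos1960,
  author    = {Paul R. Halmos},
  title     = {Naive Set Theory},
  publisher = {Van Nostrand},
  year      = {1960}
}

% main.tex: citing the entry with the natbib package
\documentclass{article}
\usepackage{natbib}          % citation-style package, as mentioned above
\begin{document}
As discussed by \citet{halmos1960}, \dots
\bibliographystyle{plainnat} % an author-date style shipped with natbib
\bibliography{refs}          % compile: latex, bibtex, latex, latex
\end{document}
```

With biblatex the setup differs: the package is loaded with `\usepackage{biblatex}`, the database is declared with `\addbibresource`, and citations use commands such as `\autocite`.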
http://math.stackexchange.com/questions/574728/additive-properties-of-sequences-trying-to-understand-schnirelmann-density
# Additive properties of sequences: trying to understand Schnirelmann density

I have started reading Gelfond & Linnik's Elementary Methods in Analytic Number Theory (1965). They define a sequence $A$ of integers as: $$0, a_1, a_2, a_3, \dots$$ where $$0 < a_1 < a_2 < a_3 < \dots$$ Let: $$A(n) = \sum_{0<{a_i} \le n} 1$$ So that: $$0\le\frac{A(n)}{n}\le1$$ I am following their explanations up to this point. Then, the following definition of density $d(A)$ is offered: $$d(A) = \inf_n \frac{A(n)}{n}$$ At this point, I am not clear on how the definition maps to the examples. I found a Wikipedia article on Schnirelmann density but that didn't help. I'll reread it this evening. Gelfond & Linnik provide examples of density. I would greatly appreciate it if someone could explain to me how the definition above maps to these examples. Here are three examples from the section:

(1) If $1 \notin A$, then $d(A) = 0$

(2) $d(A) = 1$ if and only if $A$ contains all the positive integers.

(3) The densities of the sequences of squares, cubes, and prime numbers equal $0$.

(1) Clearly, if $1\notin A$ then $A(1)=0$ (we count the members of $A$ not exceeding $1$, and there are none). But in general $d(A)\geq 0$, and here $\frac{A(1)}{1}=0$, so $\inf_n \frac{A(n)}{n}=0$.

(2) If $A$ contains all the positive integers, then $A(n) = n$ for every $n$, so $d(A) = 1$. Conversely, suppose there exists a positive integer $k$ which is the least one that is not contained in $A$. Then $\frac{A(k)}{k}<1$, and so $\inf_n \frac{A(n)}{n}\leq \frac{A(k)}{k}<1$, i.e. $d(A) < 1$.
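As a side note (my own sketch, not from the question or the book): the definition can be explored numerically. The function below approximates $d(A)$ by scanning $A(n)/n$ up to a finite limit; note that a finite scan only gives an upper bound on the true infimum.

```python
# Approximate the Schnirelmann density d(A) = inf_n A(n)/n by scanning
# n = 1 .. limit. A finite scan only upper-bounds the infimum.
def schnirelmann_density(members, limit):
    member_set = set(members)
    count = 0            # running value of A(n)
    best = 1.0
    for n in range(1, limit + 1):
        if n in member_set:
            count += 1
        best = min(best, count / n)
    return best

squares = [k * k for k in range(1, 40)]
print(schnirelmann_density(squares, 1000))            # small, consistent with d = 0
print(schnirelmann_density(range(1, 1001, 2), 1000))  # odd numbers: 0.5
print(schnirelmann_density(range(2, 1001, 2), 1000))  # 1 is missing, so 0.0
```

The last line illustrates example (1): any sequence missing $1$ has density $0$, because $A(1)/1 = 0$ already forces the infimum down.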
https://www.futurelearn.com/courses/maths-puzzles/9/steps/267948
1.5

# A few preliminaries

Just before we start, it would be a good idea for us all to ‘speak’ the same math notation language and refresh our minds with some basic math.

For addition, we will be using the short and long addition formats with the usual ‘+’ sign. For subtraction, we will be using long subtraction formats with the usual ‘−’ sign. For multiplication, we will be using short and long multiplication formats with the usual ‘×’ sign. For division, we will be using short and long division formats. Division can be denoted either by two dots ‘:’, or by two dots with a horizontal line between them ‘÷’ (this is the usual standard).

It will also be useful to remember that every whole number has factors - numbers that divide the given number without leaving a remainder. For example: the factors of the number 10 are 1, 2, 5 and 10; all these numbers divide 10 without a remainder. The factors of the number 5 are 1 and 5, because only 1 and 5 divide the number 5 without a remainder. A number that has only two factors is a prime number - it is divisible only by itself and the number 1.

Here is a fun explanation about factors and an online ‘factor calculator’.
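The factor examples above can be turned into a tiny ‘factor calculator’ of our own (a sketch to accompany the text, not the course's linked tool):

```python
# List the factors of a whole number, and use them to test for primality.
def factors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_prime(n):
    # A prime has exactly two factors: 1 and itself.
    return len(factors(n)) == 2

print(factors(10))   # [1, 2, 5, 10]
print(factors(5))    # [1, 5]
print(is_prime(5))   # True
print(is_prime(10))  # False
```

Note that under the two-factor definition, 1 is not prime: its only factor is 1 itself.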
https://topologicalmusings.wordpress.com/tag/paul-halmo/
We are now ready to discuss a couple of familiar set-theoretic operations: unions and intersections. Given two sets $A$ and $B$, it would be “nice” to have a set $U$ that contains all the elements that belong to at least one of $A$ or $B$. In fact, it would be nicer to generalize this to a collection of sets instead of just two, though we must be careful about using words like “two”, “three” and so on, since we haven’t really defined what numbers are so far. We don’t want to run the risk of inadvertently introducing circularity in our arguments! Anyway, this brings us to the following axiom.

Axiom of unions: For every collection of sets, there exists a set that contains all the elements that belong to at least one set of the given collection.

In other words, for every collection $C$, there exists a set $U$ such that if $x \in X$ for some $X$ in $C$, then $x \in U$. Now, the set $U$ may contain “extra” elements that may not belong to any $X$ in $C$. This can be easily fixed by invoking the axiom of specification to form the set $\{ x \in U: x \in X \mbox{ for some } X \mbox{ in } C \}$. This set is called the union of the collection $C$ of sets. Its uniqueness is guaranteed by the axiom of extension. Generally, if $C$ is a collection of sets, then the union is denoted by $\bigcup \{ X: X \in C \}$, or $\bigcup_{X \in C} X$.

A quick look at a couple of simple facts:

1) $\bigcup \{ X: X \in \emptyset \} = \emptyset$, and

2) $\bigcup \{ X: X \in \{ A \} \} = A$.

We finally arrive at the definition of the union of two sets, $A$ and $B$: $A \cup B = \{ x: x \in A \mbox{ or } x \in B \}$. Below is a list of a few facts about unions of pairs:

• $A \cup \emptyset = A$,
• $A \cup B = B \cup A$ (commutativity),
• $A \cup ( B \cup C) = (A \cup B) \cup C$ (associativity),
• $A \cup A = A$ (idempotence),
• $A \subset B$ if and only if $A \cup B = B$.
Now, we define the intersection of two sets, $A$ and $B$ as follows. $A \cap B = \{ x: x \in A \mbox{ and } x \in B\}$. Once again, a few facts about intersections of pairs (analogous to the ones involving unions): • $A \cap \emptyset = \emptyset$, • $A \cap B = B \cap A$ , • $A \cap ( B \cap C) = (A \cap B) \cap C$, • $A \cap A = A$, • $A \subset B$ if and only if $A \cap B = A$. Also, if $A \cap B = \emptyset$, then the sets $A$ and $B$ are called disjoint sets. Two useful distributive laws involving unions and intersections: • $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$, • $A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$. We prove the first one of the above. The proof of the second one is left as an exercise to the reader. The proof relies on the idea that we show each side is a subset of the other. So, suppose $x$ belongs to the left hand side; then $x \in A$ and $x \in B \cup C$, which implies $x \in A$ and $x \in B$ or $C$, which implies $x \in A \cap B$ or $x \in A \cap C$, which implies $x \in (A \cap B) \cup (A \cap C)$; hence $x$ belongs to the right hand side. This proves that the left hand side is a subset of the right hand side. A similar argument shows that the right hand side is a subset of the left hand side. And, we are done. The operation of the intersection of sets from a collection, $C$, is similar to that of the union of sets from $C$. However, the definition will require that we prohibit $C$ from being empty, and we will see why in the next section. So, for each collection, $C$, there exists a set $V$ such that $x \in V$ if and only if $x \in X$ for every $X$ in $C$. To construct such a set $V$, we choose any set $A$ in $C$ – this is possible because $C \not= \emptyset$ – and write $\{ x \in A: x \in X \mbox{ for all } X \mbox{ in } C\}$. Note that the above construction is only used to prove that $V$ exists. The existence of $V$ doesn’t depend on any arbitrary set $A$ in the collection $C$. 
We can, in fact, write $\{ x: x \in X \mbox{ for all } X \mbox{ in } C\}$. The set $V$ is called the intersection of the collection $C$ of sets. The axiom of extension, once again, guarantees its uniqueness. The usual notation for such a set $V$ is $\bigcap \{ X: X \in C \}$ or $\bigcap_{X \in C} X$.

EXERCISE: $(A \cap B) \cup C = A \cap (B \cup C)$ if and only if $C \subset A$.

SOLUTION: We first prove the “if” part. So, suppose $C \subset A$. Now, if $x \in (A \cap B) \cup C$, then either $x \in A \cap B$ or $x \in C$. In the first case, $x \in A$ and $x \in B$, which implies $x \in A$ and $x \in B \cup C$. In the second case, we again have $x \in A$ (since $C \subset A$), which implies $x \in A$ and $x \in B \cup C$. In either case, we have $x \in A \cap (B \cup C)$. Hence, $(A \cap B) \cup C$ is a subset of $A \cap (B \cup C)$. Similarly, if $x \in A \cap (B \cup C)$, then $x \in A$ and $x \in B \cup C$. Now, if $x \in B$, then $x \in A \cap B$, which implies $x \in (A \cap B) \cup C$. And, if $x \in C$, then once again $x \in (A \cap B) \cup C$. Thus, in either case, $x \in (A \cap B) \cup C$. Hence, $A \cap (B \cup C)$ is a subset of $(A \cap B) \cup C$. We, thus, proved $(A \cap B) \cup C = A \cap (B \cup C)$. This concludes the proof of the “if” part.

Now, we prove the “only if” part. So, suppose $(A \cap B) \cup C = A \cap (B \cup C)$. If $x \in C$, then $x$ belongs to the left hand side of the equality, which implies $x$ belongs to the right hand side. This implies $x \in A$ (and $x \in B \cup C$.) Hence, $C \subset A$. And, we are done.

We encounter sets, or if we prefer, collections of objects, every day in our lives. A herd of cattle, a group of women, or a bunch of yahoos are all instances of sets of living beings. “The mathematical concept of a set can be used as the foundation for all known mathematics.” The purpose here is to develop the basic properties of sets.
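As an aside (my own illustration, not Halmos'): the distributive laws and the exercise identity above can be checked mechanically on small finite sets, since Python's built-in set type models finite sets well.

```python
# Check the distributive laws and the exercise identity on small sets.
A = {1, 2, 3}
B = {2, 3, 4}
C = {3, 4, 5}

# Distributive laws: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C), and dually.
assert A & (B | C) == (A & B) | (A & C)
assert A | (B & C) == (A | B) & (A | C)

# Exercise: (A ∩ B) ∪ C = A ∩ (B ∪ C) if and only if C ⊆ A.
D = {1, 3}  # D is a subset of A, so the identity holds for D ...
assert (A & B) | D == A & (B | D)
assert not ((A & B) | C == A & (B | C))  # ... but fails for C, which is not
```

Of course, checking finitely many instances proves nothing in general; the element-chasing proof above is what establishes the laws.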
As a slight digression, I wouldn’t consider myself a Platonist; hence, I don’t believe there are some abstract properties of sets “out there” and that the chief purpose of mathematics is to discover those abstract things, so to speak. Even though the idea of a set is ubiquitous and it seems like the very concept of a set is “external” to us, I still think that we must build, or rather postulate, the existence of the fundamental properties of sets. (I think I am more of a quasi-empiricist.) Now, we won’t define what a set is, just as we don’t define what points or lines are in the familiar axiomatic approach to elementary geometry. So, we somewhat rely on our intuition to develop a definition of sets. Of course, our intuition may go wrong once in a while, but one of the very purposes of our exposition is to reason very clearly about our intuitive ideas, so that we can correct them any time if we discover they are wrong. Now, a very reasonable thing to “expect” from a set is it should have elements or members. So, for example, Einstein was a member of the set of all the people who lived in the past. In mathematics, a line has points as its members, and a plane has lines as its members. The last example is a particularly important one for it underscores the idea that sets can be members of other sets! So, a way to formalize the above notion is by developing the concept of belonging. This is a primitive (undefined) concept in axiomatic set theory. If $x$ is a member of $A$ ($x$ is contained in $A$, or $x$ is an element of $A$), we write $x \in A$. ($\in$ is a derivation of the Greek letter epsilon, $\epsilon$, introduced by Peano in 1888.) If $x$ is not an element of $A$, we write $x \not\in A$. Note that we generally reserve lowercase letters ($x, y$, etc) for members or elements of a set, and we use uppercase letters to denote sets. A possible relation between sets, more elementary than belonging, is equality. 
If two sets $A$ and $B$ are equal, we write $A = B.$ If two sets $A$ and $B$ are not equal, we write $A \not= B.$ Now, the most basic property of belonging is its relation to equality, which brings us to the following formulation of our first axiom of set theory. Axiom of extension: Two sets are equal if and only if they have the same elements. Let us examine the relation between equality and belonging a little more deeply. Suppose we consider human beings instead of sets, and change our definition of belonging a little. If $x$ and $A$ are human beings, we write $x \in A$ whenever $x$ is an ancestor of $A$. Then our new (or analogous) axiom of extension would say if two human beings $A$ and $B$ are equal then they have the same ancestors (this is the “only if” part, and it is certainly true), and also that if $A$ and $B$ have the same ancestors, then they are equal (this is the “if” part, and it certainly is false.) $A$ and $B$ could be two sisters, in which case they have the same ancestors but they are certainly not the same person. Conclusion: The axiom of extension is not just a logically necessary property of equality but a non-trivial statement about belonging. Also, note that the two sets $A = \{ x, y \}$ and $B = \{ x, x, y, y, y \}$ have the same elements, and hence, by the axiom of extension, $A = B$, even though it seems like $A$ has just two elements while $B$ has five! It is due to this that we drop duplicates while writing down the elements of a set. So, in the above example, it is customary to simply write $B = \{ x,y \}$. Now, we come to the definition of a subset. Suppose $A$ and $B$ are sets. If every member of $A$ is a member of $B$, then we say $A$ is a subset of $B$, or $B$ includes $A$, and write $A \subset B$ or $B \supset A$. This definition, clearly, implies that every set $A$ is a subset of itself, i.e. $A \subset A$, which demonstrates the reflexive property of set inclusion. 
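A small illustration (mine, not from the text): Python's sets obey the axiom of extension, so duplicates in a listing collapse, and equality amounts to two-way inclusion.

```python
# The axiom of extension in miniature: sets with the same elements are equal,
# so duplicates in a listing are irrelevant.
A = {"x", "y"}
B = {"x", "x", "y", "y", "y"}
assert A == B
assert len(B) == 2

# Equality is two-way inclusion, and inclusion is reflexive.
assert A <= B and B <= A   # A ⊆ B and B ⊆ A, hence A == B
assert A <= A              # every set is a subset of itself
print(sorted(B))  # ['x', 'y']
```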
(Of course, equality also satisfies the reflexive property, i.e. $A = A$.) We say $A$ is a proper subset of $B$ whenever $A \subset B$ but $A \not= B$. Now, if $A \subset B$ and $B \subset C$, then $A \subset C$, which demonstrates the transitive property of set inclusion. (Once again, equality also satisfies this property, i.e. if $A = B$ and $B = C$, then $A = C$.) However, we note that set inclusion doesn’t satisfy the symmetric property. This means, if $A \subset B$, then it doesn’t necessarily imply $B \subset A$. (On the other hand, equality satisfies the symmetric property, i.e. if $A = B$, then $B = A$.) But, set inclusion does satisfy one very important property: the antisymmetric one. If we have $A \subset B$ and $B \subset A$, then $A$ and $B$ have the same elements, and therefore, by the axiom of extension, $A = B$. In fact, we can reformulate the axiom of extension as follows:

Axiom of extension (another version): Two sets $A$ and $B$ are equal if and only if $A \subset B$ and $B \subset A$.

In mathematics, the above is almost always used whenever we are required to prove that two sets $A$ and $B$ are equal. All we do is show that $A \subset B$ and $B \subset A$, and invoke the (above version of the) axiom of extension to conclude that $A = B$. Before we conclude, we note that conceptually belonging ($\in$) and set inclusion ($\subset$) are two different things. $A \subset A$ always holds, but $A \in A$ is “false”; at least, it isn’t true of any reasonable set that anyone has ever constructed! This means, unlike set inclusion, belonging does not satisfy the reflexive property. Again, unlike set inclusion, belonging does not satisfy the transitive property. For example, a person could be considered a member of a country and a country could be considered a member of the United Nations Organization (UNO); however, a person is not a member of the UNO. I have just started reading Paul R.
Halmos’ classic text Naive Set Theory, and I intend to blog on each section of the book. The purpose is mainly to internalize all the material presented in the book and at the same time provide the gist of each section, so that I can always come back and read whenever I feel like doing so. The actual text, divided into 25 sections (or chapters, if you will), comprises 102 pages. Halmos’ original intention was “to tell the beginning student of advanced mathematics the basic set theoretic facts of life, and to do so with the minimum of philosophical discourse and logical formalism.” The style is usually informal to the point of being conversational. The reader is warned that “the expert specialist will find nothing new here.” Halmos recommends Hausdorff’s Set Theory and Axiomatic Set Theory by Suppes for a more extensive treatment of the subject. Nevertheless, the treatment by Halmos is not trivial at all. I personally feel his exposition is impeccable! Almost all the ideas presented in the following posts belong to the author of the book, and I make absolutely no claims to originality in the exposition.
https://oneandnone.wordpress.com/tag/known-unknown/
# Unknown

I swing from the known to the known
for I only know the known that I have known
I find comfort in the known that I have known
I find pleasure/pain in the known that I have known
I am attached to the known that I have known
so I keep swinging from the known to the known

When I see the unknown that is known
I say, it is magic to know the unknown
I forget that it is tragic to know the unknown, for it is now the known-unknown
So I sing the known of the known-unknown
Never knowing that the known-unknown is also known
until it again becomes known to the known

As the days turn to night and night to day
I sleep and sleep walk in the known to the known
Until the day I wonder the unknown of the unknown
that day, maybe that moment, I may wake up to the unknown
The unknown to the unknown to unknown
And in that unknown I may find the unknown
For I am not known unknown-unknown
And maybe I am the unknown-unknown to the unknown-unknown

# Unknown-Unknown

In the name of the unknown, I make up the known
In the name of the known, I make up the well known
In the name of the well known, I soak in the throne
And to this throne, I am born
I am born to warn of my throne and of the known that made me well known
As a slave to the well known I drown in the throne of the known unknown
I search for the unknown through the known, from the known
Not knowing the known, not knowing the nature of the unknown, I make up an unknown that is very well known
For to know this unknown, I need to know the known
In knowing the known, I make the known, known, I now see the known as known including the known unknown
But a mind that is bored of the already known, ignores the known and goes after the unknown, a known unknown, a magical unknown it comfortably dwells in
And within this known unknown, the unknown becomes another known, becomes another familiar known, a comfortable known
so thus, the idea of the unknown through the known is born to provide me the comfort and magic of the known unknown

Is this really unknown? Does the unknown really have comfort & magic in it?

In this perpetual search for the unknown through the known, my mind becomes a perpetual loop looking for comfort to feel love, to feel the security of the terms and conditions of love

And when I get bored of chasing, searching, researching,
When all methods are done, when all paths are walked, when all that I have known are known, really, really known for what they truly are to my own mind, my mind (thought) falls unto itself
It may no longer seek the comfort of its own knowledge of the past, knowledge of the known and the knowledge/memory of the known unknown
For to see something different, I have to let go that which is familiar
or else, everything I see, is from the filter, through the filter of that which is familiar
May be here, may be now, my mind may come across the now, the known unknown, and the unknown unknown to the unknown unknown
May be …
https://www.physicsforums.com/threads/two-equal-swords-clashing.5157/
# Two equal swords clashing

1. Aug 28, 2003

### Koveras00

Eh.... when two identical swords "clang" together, is it possible that one of them breaks into two while the other one is still intact?? Thanks....sorry for asking such a question....watching too much anime you see....

2. Aug 28, 2003

### HallsofIvy

Staff Emeritus

Yes, certainly- if the edge of one hits the flat blade of the other. Or, more generally, assuming that "identical" means each has the same strong points and weak points, if a strong point on one strikes a weak point on the other.
https://quizlet.com/8895657/quantitative-chemistry-flash-cards/
24 terms

# Quantitative chemistry

Define the mole and its formula: A measure of the number of particles in a substance, in comparison to the number of atoms in 12 grams of carbon-12. n = m/M; n = Cv.

Determine the number of particles and the amount of substance (in moles): Number of particles (N) = number of moles (n) × Avogadro's constant (L); N = nL.

Define relative atomic mass (Ar): The weighted mean of the masses of the naturally occurring isotopes of an element, on a scale in which the mass of an atom of carbon-12 is taken to be 12 exactly. Symbol: Ar (the r is in subscript).

What does the term molar mass mean? The mass of one mole of a substance. Symbol: M (in italics); unit: g mol⁻¹.

Difference between the terms empirical formula and molecular formula? Empirical: the lowest whole-number ratio of elements in a compound. Molecular: a formula that shows the actual number of atoms of each element present in one molecule of a compound.

How can you determine the empirical formula from the percentage composition or from other experimental data? Find the percentage composition (the amount of each element in a compound expressed as a percentage) and go through the steps...

How do you determine the molecular formula when given both the empirical formula and experimental data? Find the mass of the empirical formula, divide the experimental molar mass by this, and then multiply the empirical formula by the result.

How do you deduce chemical equations when all reactants and products are given? Law of conservation of mass: mass of reactants = mass of products.

What is it important to do when writing an equation? The mole ratio; ensuring subscripts are not confused with coefficients; using state symbols, i.e.
(aq), (g), (s), (l).

How do you calculate theoretical yields from chemical equations? Theoretical yield: the amount of product that is expected to be produced in a reaction, based on 100% reaction of the reactants.

Given a chemical equation and the mass or moles of one species, how do you calculate the mass or moles of another species? Use the molar ratio, and n = m/M.

How can you determine the limiting reactant and the reactant in excess when quantities of reacting substances are given? The limiting reactant is the one which is used up completely in the reaction, while the reactant in excess is the one with some amount left over.

How can you solve problems involving theoretical, experimental and percentage yield? Theoretical yield: the amount of product expected, based on 100% reaction of the reactants. Percentage yield: a calculation of the experimental yield as a percentage of the theoretical yield. Experimental yield: the amount of product actually produced in the experiment.

How do you apply Avogadro's law to calculate reacting volumes of gases? Equal volumes of gases at the same temperature and pressure contain equal numbers of particles.

What is molar volume at standard temperature and pressure? The volume occupied by one mole of a gas under a given set of conditions of temperature and pressure. The molar volume of an ideal gas under standard conditions is 2.24 × 10⁻² m³ mol⁻¹ (22.4 dm³ mol⁻¹).

What is the relationship between temperature, pressure and volume for a fixed mass of an ideal gas? PV = nRT.

What are the units for each of the ideal gas factors? P is the pressure of the gas (in atmospheres, atm); V is the volume of the container (in dm³); n is the number of moles of gas in the container (in mol); R is the universal gas constant, 0.0821 dm³ atm K⁻¹ mol⁻¹; T is the temperature of the gas (in K).

Define the term solvent: A substance, usually a liquid, that is able to dissolve another substance, the solute.

Define the term solution: A homogeneous mixture of a solute in a solvent.
Define the term concentration: The amount of solute per unit volume of solution.

What is the molar ratio? The ratio in which reactants and products in a chemical equation react, indicated by the coefficients written in front of each of the reactants and products in the equation.

Define relative molecular mass (Mr): The sum of the relative atomic masses of the elements as given in the molecular formula of a compound.

What is STP? Standard temperature and pressure: a set of conditions applied to gaseous calculations where the temperature is 0 degrees Celsius and the pressure is 1 atm (101.3 kPa).
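A short worked sketch of the formulas on these cards, n = m/M and PV = nRT (the numbers are illustrative, not from the cards):

```python
# n = m/M and PV = nRT, with P in atm, V in dm^3, T in K.
R = 0.0821  # universal gas constant in dm^3 atm K^-1 mol^-1

def moles_from_mass(mass_g, molar_mass_g_per_mol):
    return mass_g / molar_mass_g_per_mol

def gas_volume_dm3(n_mol, temperature_k, pressure_atm):
    # Rearranged ideal gas equation: V = nRT/P
    return n_mol * R * temperature_k / pressure_atm

# One mole of an ideal gas at 273 K and 1 atm occupies ~22.4 dm^3,
# matching the molar volume card above.
print(round(gas_volume_dm3(1.0, 273.0, 1.0), 1))  # 22.4
print(moles_from_mass(36.0, 18.0))  # 36 g of a substance with M = 18 g/mol: 2.0 mol
```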
http://physics.stackexchange.com/questions/14272/two-state-system-problem
# Two-state system problem

Given a 2-state system with (complete set) orthonormal eigenstates $u_1, u_2$ with eigenvalues $E_1, E_2$ respectively, where $E_2>E_1$, and there exists a linear operator $\hat{L}$ with eigenvalues $\pm1$,

1. Would the normalized eigenfunctions (in terms of the given eigenstates) just be $$\frac{u_1}{\sqrt{\int u_1^*u_1}}$$? Since I am not given an argument/coordinate system for the eigenstates, perhaps I should first project them into one? But I don't have any info on the nature of the quantum system...

2. A second question asks for the expectation values of the energy in the respective states. But isn't that just $E_1$ and $E_2$ respectively? Grateful for any enlightenment.

- And what does that linear operator have to do with anything? – justcurious Sep 2 '11 at 8:46

1) Yes. However, you say that the $u$ are orthonormal, which already includes normalization. And what does $\hat{L}$ have to do with the question?

Thanks again, Misha. :) So $\langle E \rangle = \langle H \rangle$? Then we have to compute $\int u_1^* H u_1$ where $H$ is the Hamiltonian? Unfortunately, the question only has the information I have given above... – justcurious Sep 2 '11 at 12:45
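A small numerical check of point 2: in the $\{u_1, u_2\}$ basis the Hamiltonian is diagonal, so the expectation value of the energy in an eigenstate is just its eigenvalue. The energies below are placeholders; any values with $E_2 > E_1$ would do.

```python
import numpy as np

E1, E2 = 1.0, 2.0            # placeholder energies, E2 > E1
H = np.diag([E1, E2])        # Hamiltonian is diagonal in the {u1, u2} basis

u1 = np.array([1.0, 0.0])                 # eigenstate u1
psi = np.array([1.0, 1.0]) / np.sqrt(2)   # an equal superposition

def expectation(state, op):
    """<state| op |state> for a normalized state vector."""
    return float(np.real(np.conj(state) @ op @ state))

print(expectation(u1, H))   # E1 = 1.0
print(expectation(psi, H))  # (E1 + E2)/2 = 1.5
```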
https://xcorr.net/2013/04/30/whiten-images-in-matlab/
# Whiten images in Matlab

Previously, I showed how to whiten a matrix in Matlab. This involves finding the inverse square root of the covariance matrix of a set of observations, which is prohibitively expensive when the observations are high-dimensional – for instance, high-resolution natural images. Thankfully, it’s possible to whiten a set of natural images approximately by multiplying the data by the inverse of the square root of the mean power spectrum in the Fourier domain. Assume that X is a matrix of size (nimages,height,width), then:

mX = bsxfun(@minus,X,mean(X)); %remove mean
fX = fft(fft(mX,[],2),[],3); %fourier transform of the images
spectr = sqrt(mean(abs(fX).^2)); %Mean spectrum
wX = ifft(ifft(bsxfun(@times,fX,1./spectr),[],2),[],3); %whitened X

### Why this works

It’s not obvious at all that this should give similar results to the other method, which involved finding the inverse square root of the covariance matrix M. Let’s assume that the images are stationary – that is, the second-order statistics are similar at any point in the image. That means that the elements of the covariance matrix should only depend on the relative position between two pixels. And of course, M is a symmetric matrix. That means that M is a matrix such that the product Mx performs the convolution between x and the kernel associated with M – call this kernel m. That kernel is actually the sum of the autocorrelation of the images. If $Mx = m * x$, then by the convolution theorem, $Mx = F^{-1}(F(m)\cdot F(x))$. The Fourier transform F(x) is a linear mapping from an n-dimensional complex vector to another n-dimensional complex vector. Therefore, $F(x) = \hat F x$, where $\hat F$ is an n-by-n dimensional matrix. Then, we have that $Mx = \hat F^{-1} \text{diag}(\hat F m) \hat F x$.
If we take images X and transform them via $Y' = \hat F^{-1} \,\text{diag}\left((\hat F m)^{-1/2}\right) \hat F X'$, then it’s easy to verify that the covariance of Y will now be a scalar times the identity matrix. Therefore, modulo edge effects, which break the stationarity requirement, images can be whitened by dividing them in the Fourier domain by the square root of the mean power spectrum.
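For readers working outside Matlab, here is a rough NumPy translation of the same steps (same shapes and operations; the small epsilon guarding against division by zero is my addition, not part of the original snippet):

```python
import numpy as np

def whiten_images(X):
    """Approximately whiten images of shape (nimages, height, width)
    by dividing by the square root of the mean power spectrum."""
    mX = X - X.mean(axis=0, keepdims=True)                    # remove mean image
    fX = np.fft.fft2(mX, axes=(1, 2))                         # 2-D FFT per image
    spectr = np.sqrt((np.abs(fX) ** 2).mean(axis=0)) + 1e-12  # mean spectrum
    wX = np.fft.ifft2(fX / spectr, axes=(1, 2))               # divide and invert
    return np.real(wX)

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 8, 8))
wX = whiten_images(X)
print(wX.shape)  # (16, 8, 8)
```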
https://talkstats.com/search/1624185/
# Search results 1. ### Confusing question on z-score for sample mean Hey folks, I was hoping someone would understand this better than I do. I am trying to solve the following problem: We are testing the hypothesis that the average gas consumption per day in Billings, Montana is greater than 7 gallons per day; we want 95% confidence. We sample 30 drivers. The...
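The thread is truncated here, but the setup it describes is a standard one-sample z test; a sketch with hypothetical numbers (the sample mean and population standard deviation below are invented, only the null value of 7 gallons per day and n = 30 come from the question):

```python
from math import sqrt

# H0: mu <= 7 gallons/day vs H1: mu > 7, alpha = 0.05, n = 30 drivers.
def z_statistic(sample_mean, mu0, sigma, n):
    """z = (sample mean - mu0) / (sigma / sqrt(n))."""
    return (sample_mean - mu0) / (sigma / sqrt(n))

# Hypothetical data: sample mean 7.6 gallons/day, known sigma 1.5.
z = z_statistic(sample_mean=7.6, mu0=7.0, sigma=1.5, n=30)
print(round(z, 2))  # ~2.19; one-sided critical value at 95% is ~1.645
```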
https://matheducators.stackexchange.com/questions/11226/notation-of-line-segment-and-its-length
# Notation of line segment and its length

I have sometimes seen a notation where $AB$ could mean either the line segment or its length. Why can the same notation mean both? Should one teach pupils to use, for example, the notation $d(A,B)$ or $|AB|$ to mean the length of the line segment $AB$? I once saw a text where $AB$ had two different meanings inside one solution.

• French notation: $[AB]$ means the segment (set of points), $AB$ is the length of $[AB]$ and there's the (much less used) notation $]AB[$. – llllllllllllllllllllllllllllll Jul 12 '16 at 12:54
• The Greeks didn't distinguish between a line segment and the length of a line segment as clearly as we do. Hence geometric writing which follows Euclid's Elements tends to be similarly ambiguous. – John Coleman Jul 12 '16 at 13:55
• @JohnColeman, it's unfortunate that it can take this long to change notation. – Joonas Ilmavirta Jul 12 '16 at 21:02
• In my experience, $AB$ is the length of $\overline{AB}$. – John Molokach Jul 14 '16 at 2:11
• Good luck getting mathematicians to agree on a notation--that's what your question is about. This is a huge problem, imo, of mathematics--that it cannot agree on a formal language. – Jared Jul 16 '16 at 6:43

The situation you describe is common in mathematics. Take the notation $\sum_{k=1}^\infty a_k$ which has two different meanings:

• the sequence of partial sums, i.e. $\sum_{k=1}^\infty a_k= \left(\sum_{k=1}^n a_k\right)_{n\in\mathbb N}$
• the limit of the series, i.e. $\sum_{k=1}^\infty a_k= \lim_{n\to\infty} \sum_{k=1}^n a_k$

Another example is the symbol $\subset$. Some authors use it for the subset relation and some for the proper subset relation. I cannot answer your question of why this happens. Since there is no overall style guide in the mathematics community for notations, it happens that different authors use the same notation for different mathematical concepts.
I would suggest the following: As an author or lecturer I would avoid notations with different meanings or notations which are used for different concepts in the literature (if this is possible). So instead of $\subset$ I would use $\subseteq$ or $\subsetneq$ since there is no ambiguity in how to interpret these symbols...

• I don't believe I've ever seen $\sum_{k=1}^\infty a_k$ identified as a sequence of partial sums, at least in a published paper or in a published book. However, the subset symbol is a good example and I do the same as you do when using it. – Dave L Renfro Jul 14 '16 at 20:09
• @DaveLRenfro: Take the sentence "$\sum_{k=1}^\infty a_k$ converges". Here the notation $\sum_{k=1}^\infty a_k$ means the sequence of partial sums (since the sentence "the limit of ... converges" does not make any sense) – Stephan Kulla Jul 15 '16 at 7:56
• O-K, I've seen this before, and I agree that it fits with what you were saying. That said, this is probably something more relevant in undergraduate or beginning graduate level textbook settings, where the reader is still learning standard notation and terminology, and so the writer should be a bit more careful. In more advanced situations, I would be more generous and consider the usage acceptable "abuse of notation" or acceptable "common usage". – Dave L Renfro Jul 18 '16 at 15:26

If you want to use a notation which clearly distinguishes between a line segment and its length, you can use the notation $$[A,B]$$ for the segment and $$d(A,B)\quad\text{or}\quad \|A-B\|$$ for its length. In my experience this notation is used in many textbooks on Linear Algebra.

• Or, more commonly, $| AB |$. – Joseph O'Rourke Jul 17 '16 at 22:50
https://indico.mitp.uni-mainz.de/event/188/timetable/?view=standard
# 57. International Winter Meeting on Nuclear Physics Europe/Berlin Bormio, Italy #### Bormio, Italy , , , Description Long-standing conference bringing together researchers and students from various fields of subatomic physics. The conference location is Bormio, a beautiful mountain resort in the Italian Alps. Participants • Aleksandra Ciemny • Amer Alqaaod • Andreas Mathis • ANGELO pagano • Aurora Tumino • Bernhard Hohlweger • Burkhard Kolb • Carlo Gustavino • Carlo Mancini-Terracciano • Charles Horowitz • Chloë Hebborn • Christian Fischer • Claudia Patrignani • Concettina Sfienti • Cristiano Galbiati • Dalibor Zakoucky • Dariusz Miskowiec • Dean Lee • Dmitry Testov • Edda Gschwendtner • Emanuele Vincenzo Pagano • EMIKO HIYAMA • Fausto Casaburo • Francesca Bellini • Francesco Forti • Francis Halzen • Frederic Colomer • Geirr Sletten • Georg Wolschin • Gianluca Stellin • GIULIA MANCA • Greg Landsberg • Hans-Thomas Janka • Harald Merkel • Herbert Hübel • Hermann Wolter • Isao Tanihata • Jan Fiete Grosse-Oetringhaus • Joana Wirth • John Harris • John Sharpey-Schafer • Krzysztof Nowakowski • laura fabbietti • Laura Moschini • Leszek Kosarzewski • Luciano Musa • Marcel Merk • Mariana Nanova • Martin Ivanov • Matteo Franchini • Micol De Simoni • Mikhail Barabanov • Mitko Gaidarov • Natalia Sokołowska • Patrick Achenbach • Pepe Guelker • Pierluigi Campana • Pierre Capel • Randolf Pohl • Rene Reifarth • Ruediger Haake • Ryan Mitchell • Shiro Matoba • Simon Kegel • Stan Lai • Stefan Leupold • Stefan Lunkenheimer • Stefan Schoenert • Steffen Maurus • Stephan Aulenbacher • Takashi Nakamura • Vanek Jan • Victoria Durant • Wolfgang Kuehn • Wolfgang Trautmann • Monday, 21 January • 06:00 08:00 Pre-Conference School: Sunday Lectures • 06:00 Selected Topics in Flavour Physics 10m Speaker: Prof. Marcel Merk (Nikhef) • 06:10 Selected Topics in Heavy Ion Collisions 10m Speaker: Prof. 
John Harris (Yale University) • 06:20 Selected topics in Nuclear Structures and Astrophysics 10m Speaker: Prof. Pierre Capel (Université Libre de Bruxelles (ULB)) • 06:30 Selected topics in Hadron Physics 10m Speaker: Dr Harald Merkel (Institut für Kernphysik, Johannes Gutenberg-Universität Mainz) • 09:00 13:10 Monday Morning Session Conveners: Prof. Concettina Sfienti (Johannes Gutenberg-Universität Mainz), Prof. Laura Fabbietti (excellence cluster 'universe') • 09:00 Welcome 10m Speakers: Prof. Concettina Sfienti (Johannes Gutenberg-Universität Mainz), Prof. Laura Fabbietti (Technische Universitaet Muenchen) • 09:10 Latest Results from the CMS Experiment 45m In this broad overview talk I'll cover the latest results from the CMS experiment at the LHC, with a focus on a few of the most important topics in high-pT and heavy-ion physics. Speaker: Prof. Greg Landsberg (Brown University) • 09:55 Neutron stars and the equation of state of dense matter 45m I review neutron star structure and how this depends on the equation of state (pressure vs density) of dense matter. Constraints on the equation of state from laboratory experiments and astronomical observations with X-rays, neutrinos, and gravitational waves will be discussed. Speaker: Prof. Charles Horowitz (Indiana University) • 11:10 Recent results of the BES-III experiment 45m Recent results of the BES-III experiment Speaker: Ryan Mitchell (Indiana University) • 11:55 Flavour results at LHCb 30m Flavour results at LHCb Speaker: Dr Pierluigi Campana (LNF-INFN) • 17:00 19:00 Monday Afternoon Session: Poster Session Convener: Prof. Pierre Capel (Université Libre de Bruxelles (ULB)) • 17:00 Studies of the nucleosynthesis $^{12}\mathrm C(\alpha,\gamma)^{16}\mathrm O$ in inverse kinematics for the MAGIX experiment at MESA 3m MAGIX is a versatile fixed-target experiment and will be built at the new accelerator MESA (Mainz Energy-Recovering Superconducting Accelerator) in Mainz.
The accelerator will deliver polarized electron beams with currents up to $1\,\mathrm{mA}$ and energies up to $105\,\mathrm{MeV}$. Using its internal gas target, MAGIX will reach a luminosity of $\mathcal{O}(10^{35}\,\mathrm{cm}^{-2}\mathrm{s}^{-1})$. This allows the study of processes with very low cross sections at small momentum transfer within a rich physics program. The poster presents the planned measurements of the nucleosynthesis process $^{12}\mathrm C(\alpha,\gamma)^{16}\mathrm O$ to determine its S-factor. In the experiment we will scatter electrons on oxygen atoms and detect the scattered electrons in coincidence with the produced $\alpha$-particles. With this measurement we will determine the cross section of the inverse kinematics as a function of the outgoing center-of-mass energy of the carbon-$\alpha$ system and from it calculate the S-factor. We will present the results of the current simulations and the parameter range that MAGIX will be able to explore. Additionally, the planned experimental setup will be discussed. Speaker: Mr Stefan Lunkenheimer (KPH) • 17:03 15C structure and dynamics: coupling Halo EFT to reaction models for transfer, breakup and radiative capture 3m We study various reactions involving the one-neutron halo nucleus 15C using a single structure model based on Halo EFT. First, we determine the low-energy constants needed in this description of 15C to reproduce both the one-neutron binding energy of the 15C ground state and the asymptotic normalization coefficient (ANC) extracted through the analysis of the 14C(d,p)15C transfer reaction at 17.06 MeV [1,2]. Then, we study the 15C breakup at high (605 AMeV [3]) and intermediate (68 AMeV [4]) energies using an eikonal model with a consistent treatment of nuclear and Coulomb interactions at all orders, which takes into account proper relativistic corrections. We show the importance of the inclusion of relativistic corrections in the former case.
Finally, we study the 14C(n,gamma)15C radiative capture comparing our results to the direct measurements performed by Reifarth et al. [5]. Our theoretical predictions are in excellent agreement with the experimental data for each reaction, thus assessing the robustness of the structure model provided for this nucleus. [1] A. M. Mukhamedzhanov et al. Phys. Rev. C 84, 024616 (2011). [2] J. Yang and P. Capel, Phys. Rev. C 98, 054602 (2018). [3] U. D. Pramanik et al. Phys. Lett. B551, 63 (2003). [4] T. Nakamura et al. Phys. Rev. C 79, 035805 (2009). [5] R. Reifarth et al. Phys. Rev. C 77, 015804 (2008). Speaker: Dr Laura Moschini (Université libre de Bruxelles (ULB)) • 17:06 Measurements of open-charm hadrons in heavy-ion collisions by the STAR experiment 3m Charm quarks are primarily produced at early stages of ultra-relativistic heavy-ion collisions and can therefore probe the Quark-Gluon Plasma (QGP) throughout its whole evolution. Final-state open-charm hadrons are commonly used to experimentally study the charm quark interaction with the QGP. Thanks to the precise secondary vertex reconstruction provided by the Heavy Flavor Tracker (HFT), STAR is able to directly reconstruct D$^\pm$, D$^0$, D$_\textrm{s}$, and $\Lambda_\textrm{c}^\pm$ via their hadronic decay channels. Moreover, the topological cuts for signal extraction are optimized using supervised machine learning techniques. In this talk, we will present an overview of recent open charm results from the STAR experiment. In particular, the nuclear modification factors of open-charm mesons, D$_\textrm{s}$/D$^0$ and $\Lambda_\textrm{c}^\pm$/D$^0$ ratios as functions of transverse momentum and collision centrality will be discussed together with their physics implications. 
Speaker: Vanek Jan (Nuclear Physics Institute, Czech Academy of Sciences) • 17:09 Peripherality in inclusive nuclear breakup of halo nuclei 3m The development of Radioactive-Ion Beams (RIBs) in the early 80s has enabled the study of exotic nuclei far from stability. Halo nuclei are among the most peculiar structures discovered since then, they have one or two loosely-bound nucleons which tunnel far from the core of the nucleus, causing their matter radius to be much larger than stable nuclei [1]. Their short lifetimes make their analysis through usual spectroscopic techniques impossible, therefore indirect methods, such as reaction processes, have to be used. To infer precise information about the structure of these exotic nuclei from reaction measurements, an accurate reaction model coupled to a realistic description of the nucleus is needed. The eikonal model provides a simple interpretation of the collision and is very efficient from a computational point-of-view. However, it has several flaws: it does not treat properly the dynamics of the projectile during the reaction, in its standard form it has convergence issues to compute breakup observables due to the Coulomb interaction and it is valid only at energies higher than 50 MeV/nucleon. In this work, we study corrections to the eikonal model to address these issues and we analyse their efficiencies in the cases of elastic scattering and breakup of one-neutron halo nuclei. As experimental facilities such as HIE-ISOLDE at CERN and ReA12 at MSU provide RIB around 10 MeV/nucleon, extending the validity of the eikonal model to such energies would be of great interest. To do so, we study two corrections which aim to improve the deflection of the projectile due to its interaction with the target. The first correction relies on a semi-classical approach [2] while the second is based on an exact correspondence with the partial-wave expansion [3]. 
Both corrections improve the elastic scattering of one-neutron halo nuclei at 10 MeV/nucleon but fail to describe breakup observables [4,5]. We have also studied corrections to better account for the dynamics of the projectile during the collision. This might improve the eikonal analyses of inclusive measurements of breakup reactions. [1] I. Tanihata, J. Phys. G **22**, 157 (1996). [2] C. E. Aguiar, F. Zardi, and A. Vitturi, Phys. Rev. C **56**, 1511 (1997). [3] J. M. Brooke, J. S. Al-Khalili, and J. A. Tostevin, Phys. Rev. C **59**, 1560 (1999). [4] C. Hebborn and P. Capel, Phys. Rev. C **96**, 054607 (2017). [5] C. Hebborn and P. Capel, Phys. Rev. C **98**, 044610 (2018). Speaker: Ms Chloë Hebborn (Université libre de Bruxelles) • 17:12 Meson-baryon interaction in the Fock-Tani formalism 3m The Fock-Tani formalism is a first-principles method to obtain effective interactions from microscopic Hamiltonians. The idea consists in a change of representation such that the operators associated with composite particles can be rewritten as operators that satisfy the canonical anticommutation relations. Starting from Fock space and using creation and annihilation operators for the constituent particles, we consider a system containing quarks and antiquarks which can form bound states. In this new representation, the meson/baryon states can be constructed from meson/baryon creation operators. Originally derived for meson-meson or baryon-baryon scattering, we will present the corresponding equations for meson-baryon scattering. • 17:15 Constraining the Λ-Λ interaction with femtoscopy in small systems at the LHC 3m Pioneering studies by the ALICE Collaboration have demonstrated the potential of employing femtoscopy to investigate and constrain baryon-baryon interactions with unprecedented precision.
In particular, the small size of the particle-emitting source in pp and p-Pb collision systems at ultrarelativistic energies is well suited to study short-ranged strong potentials. Newly developed analysis tools allow for the comparison of the measured correlation function between the particle pairs of interest to theoretical predictions using either potentials or wave functions as an input. In this contribution, we present measurements of Λ–Λ correlations by the ALICE Collaboration in pp collisions at √s = 7 and 13 TeV and p–Pb at √s$_{\mathrm{NN}}$ = 5.02 TeV. The interaction among the Λ–Λ pairs is studied with unprecedented precision, and the data are found to agree with hypernuclei results and state-of-the-art lattice computations. Furthermore, by testing the compatibility of different model predictions and scattering parameters with the high-precision data, the interaction is investigated. The experimental data tightly constrain the existence and the binding energy of a hypothetical H-dibaryon state. Speaker: Andreas Mathis (Technische Universität München, Physik Department E62) • 17:18 Cascading decays of nucleon resonances via meson-pair emission? 3m Photoproduction of mesons provides important information about the excitation spectrum of the nucleon that is still not sufficiently understood despite various long-lasting experimental and theoretical efforts [1]. Reactions with multiple-meson final states are important, in particular $\pi^0\eta$, since the $\eta$ acts as an isospin filter and provides information on the nature of the intermediate resonances. Particular attention has been paid to the recently claimed narrow structure observed at 1685 MeV in the $\eta N$ channel [2]. We have studied two-meson photoproduction with the CB/TAPS detector system at the ELSA accelerator in Bonn in the reaction $\gamma p \to p\pi^0\eta$. High statistics have been obtained by irradiating a liquid hydrogen target with photon beams in the incident energy range from 0.9 to 3.0 GeV.
A kinematic fit has been used in the reconstruction and identification of the exit channels. Preliminary results on the search for the narrow structure at 1685 MeV will be presented. [1] V. Crede and W. Roberts, Rep. Prog. Phys. 76 (2013) 076301 [2] V. Kuznetsov et al., JETP Letters 106 (2017) 693 *Funded by DFG (SFB/TR-16) Speaker: Dr Mariana Nanova (II. Phys. Institut, University of Giessen, Giessen, Germany) • 17:21 Neutronic Analysis for the Effects of High-Level Radioactive Waste Distribution on Subcritical Multiplication Parameters in ADS Reactor 3m The transmutation of high-level radioactive waste (HLW) for nuclear waste management has attracted the attention of many countries and is a subject of current research in many European and national projects [1][2]. This interest comes from the increase of accumulated nuclear waste due to the operation of nuclear power plants, and the need to minimize environmental and proliferation threats. Innovative nuclear reactor concepts like accelerator driven systems (ADS) are currently in development and are predicted to play an effective role in the transmutation of transuranium elements, in particular of minor actinides (MA) and long-lived fission products (LLFP), to reduce the radiotoxicity risk [3]. In this study, the effect of high-level radioactive waste distribution on neutron characteristics such as the subcritical multiplication parameters and the source efficiency is numerically investigated in three different configurations for a simple model of an accelerator driven system (ADS) reactor consisting of two zones, an inner region with a fast neutron spectrum and an outer region with a thermal neutron spectrum, with the subcritical core coupled to an external neutron source. The calculations are conducted using the Monte Carlo N-Particle transport code. [1] L. García, J. Pérez, C. García, A. Escrivá, J. Rosales, and A.
Abánades, “Calculation of the packing fraction in a pebble-bed ADS and redesigning of the Transmutation Advanced Device for Sustainable Energy Applications (TADSEA),” Nucl. Eng. Des., vol. 253, pp. 142–152, 2012. [2] D. De Bruyn, H. A. Abderrahim, P. Baeten, and P. Leysen, “The MYRRHA ADS Project in Belgium Enters the Front End Engineering Phase,” Phys. Procedia, 2015. [3] P. K. Zhivkov, “Energy Production and Transmutation of Nuclear Waste by Accelerator Driven Systems,” in Journal of Physics: Conference Series, 2018. Speaker: Dr Amer A. Al Qaaod (Postdoc fellowship) • 17:24 Simulation and construction of an open TPC 3m Starting from the source, through the cavities, to the target and into the spectrometers, the electrons at MESA/MAGIX do not have to pass any windows. Just before entering the detection system one barrier has to be passed, which is unfortunate but essential to separate the vacuum in the spectrometers from the counting gas inside the detector. For the track reconstruction of low-energy electrons (< 105 MeV) every bit of material is crucial, as the best achievable resolution is limited by multiple scattering in the traversed material. The classic approach for TPCs is to homogenise the drift field by surrounding the area with a field cage consisting of copper and kapton. To restrict the material barrier to an absolute minimum, we want to get rid of the field cage on the entrance face of the detector. In this contribution new ideas on how to accomplish this without distorting the electric field are presented.
Speaker: Mr Jakob, Manuel, Philip, Pepe Guelker (JGU) • 17:27 Detection of primary photons in high energy cosmic rays using Cherenkov imaging and surface detectors 3m Given that two important experiments to study γ rays, LHAASO (Large High Altitude Air Shower Observatory) and CTA (Cherenkov Telescope Array), are currently in the planning phase, we analyzed simulations made with CORSIKA (Cosmic Ray SImulations for KAscade) to compare extensive air showers (EAS) induced by protons to EAS induced by γ rays. We chose two primary-particle energies, E ≈ 150 GeV and E ≈ 1 TeV, and plotted the secondary-particle distributions at observation level; the plots show that the secondary particles of γ-ray showers are arranged on surfaces centered on the EAS core that are smaller than those of proton showers. We then showed that proton showers produce more secondary μ± than γ-ray showers. Moreover, by calculating the particle density in circular annuli centered on the EAS core, we showed that, with increasing distance from the core, the density of secondary particles produced by γ-ray showers decreases faster than that of secondary particles produced by proton showers. Lastly, choosing three distances from the core (10 m, 100 m and 600 m), we calculated the secondary-particle density, showing that at a fixed distance the secondary-particle density increases with the primary-particle energy. The obtained results are important because they allow us to test the ideas at the basis of the LHAASO and CTA designs: thanks to algorithms based on the differences between the lateral development of showers in the atmosphere, the lateral distribution of charged and neutral particles around the shower core at observation level, and the number of μ±, it will be possible to discern γ-ray showers from proton showers (proton EAS / γ EAS ≈ 100), to acquire events and to reject the hadronic background. Finally, by comparing experimental data to the obtained mean values of the studied physical quantities as functions of primary-particle energy, it will be possible to estimate the latter. Speaker: Mr Fausto Casaburo (University La Sapienza) • 17:30 The Silicon Tracking System of CBM getting ready for experiment 3m The Compressed Baryonic Matter (CBM) experiment at the FAIR facility will explore the QCD phase diagram at very high baryon densities, where a first order phase transition from hadronic to partonic matter as well as a chiral phase transition is expected to occur. The Silicon Tracking System is the central detector for charged-particle identification and momentum measurement.
It is designed to measure up to 1000 charged particles per event at A+A collision rates between 0.1 and 10 MHz, to achieve a momentum resolution of better than 2% in a 1 Tm dipole magnetic field, and to be capable of identifying complex particle decay topologies, e.g., those with strangeness content. The STS employs high-granularity double-sided sensors matching the non-uniform track density, and fast self-triggering electronics with a free-streaming data acquisition system and online event selection. With the resulting 1.8 million readout channels, it poses the most demanding requirements on bandwidth and density of all CBM detectors. The STS functional building block is a module consisting of a sensor, micro-cables and two front-end electronics boards. The modules are mounted on carbon-fiber support ladders. The silicon sensors provide double-sided segmentation at a strip pitch of 58 µm and a 7.5-degree stereo angle. Ultra-thin micro-cables of up to 50 cm length transfer the sensor signals to the electronics located outside the detector acceptance. The custom-developed read-out ASIC "STS-XYTER" has a self-triggering architecture that delivers time and amplitude information per channel. For phase 0 of the CBM experiment, mini-CBM (mCBM), a precursor of the full CBM with detector units from all subsystems, the STS will contribute two tracking stations consisting of a total of 13 modules. The mCBM setup will allow testing and optimizing the detector performance, including the data acquisition chain, under realistic experimental conditions, as well as its integration with the other subsystems. This presentation gives an overview of the development status of the module components, the readout chain, first test results, and the system integration within the framework of the mCBM campaign at SIS18 at GSI.
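In a free-streaming readout like the one described above, the front-end delivers a continuous stream of timestamped hits rather than hardware-triggered events, and event candidates are built in software. The sketch below illustrates one simple way this can be done, by clustering hits in time; it is a toy with hypothetical data and a hypothetical gap parameter, not the actual CBM online software:

```python
def build_event_candidates(hits, gap_ns=100.0):
    """Group timestamped hits into event candidates.

    A new candidate is opened whenever the time gap to the previous hit
    exceeds `gap_ns`. `hits` is a list of (timestamp_ns, channel) tuples.
    """
    events, current = [], []
    for t, ch in sorted(hits):
        if current and t - current[-1][0] > gap_ns:
            events.append(current)
            current = []
        current.append((t, ch))
    if current:
        events.append(current)
    return events

# Two bursts of hits separated by a quiet period -> two event candidates.
hits = [(0.0, 1), (12.0, 7), (25.0, 3), (5000.0, 2), (5030.0, 9)]
print([len(e) for e in build_event_candidates(hits)])  # [3, 2]
```

Real event builders use more elaborate criteria (fixed time slices, per-subsystem correlations), but the core idea of replacing a hardware trigger with software time clustering is the same.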
Speaker: Mr Adrian Rodriguez Rodriguez (Goethe University Frankfurt am Main)
• 17:33 Probing of XYZ meson structure with near threshold pp and pA collisions 3m
The spectroscopy of charmonium-like mesons with masses above the $2m_D$ open-charm threshold has been full of surprises and remains poorly understood [1]. The currently most compelling theoretical descriptions of the mysterious XYZ mesons attribute to them a hybrid structure with a tightly bound $c\bar{c}$ [2] or $[cq][\overline{cq'}]$ tetraquark [3-5] core that strongly couples to S-wave $D\bar{D}$ molecular-like structures. In this picture, the production of an XYZ particle in high energy hadron collisions and its decays into light-hadron plus charmonium final states proceed via the core component of the meson, while decays to pairs of open-charm mesons proceed via the $D\bar{D}$ component. These ideas have been applied with some success to the X(3872) [2], where a detailed calculation finds a $c\bar{c}$ core component that is present only about 5% of the time, with the $D\bar{D}$ component (mostly $D^0\bar{D}^0$) accounting for the rest. In this picture, the X(3872) is composed of three rather disparate components: a small charmonium-like $c\bar{c}$ core with $r_{\rm rms} < 1$ fm, a larger $D^+D^-$ component with $r_{\rm rms} = \hbar/\sqrt{2\mu_+ B_+} \approx 1.5$ fm, and a dominant $D^0\bar{D}^0$ component with a huge spatial extent, $r_{\rm rms} = \hbar/\sqrt{2\mu_0 B_0} > 9$ fm. Here $\mu_+$ ($\mu_0$) and $B_+$ ($B_0$) denote the reduced mass of the $D^+D^-$ ($D^0\bar{D}^0$) system and the corresponding binding energy $|m_D + m_D - M_{X(3872)}|$ ($B_+ = 8.2$ MeV, $B_0 < 0.3$ MeV). The different amplitudes and spatial distributions of the $D^+D^-$ and $D^0\bar{D}^0$ components ensure that the X(3872) is not an isospin eigenstate. Instead it is mostly $I = 0$, but has a significant (~25%) $I = 1$ component. In the hybrid scheme, an X(3872) is produced in high energy pA collisions via its compact ($r_{\rm rms} < 1$ fm) charmonium-like structure, and this rapidly mixes, in a time $t \sim \hbar/\delta M$, into a huge and fragile, mostly $D^0\bar{D}^0$, molecular-like structure.
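The time scale $t \sim \hbar/\delta M$ can be converted into a length using $\hbar c \approx 197$ MeV·fm. A quick numerical sketch of this estimate (illustrative only, using the 20-30 MeV range quoted for the mass difference to the $\chi_{c1}(2P)$ candidate):

```python
# Convert an energy scale deltaM (MeV) into the mixing length c*t = hbar*c / deltaM (fm).
HBAR_C_MEV_FM = 197.327  # hbar*c in MeV*fm

def c_tau_fm(energy_mev):
    """Length scale (fm) corresponding to an energy scale (MeV)."""
    return HBAR_C_MEV_FM / energy_mev

for delta_m in (20.0, 30.0):
    print(f"deltaM = {delta_m:>4.0f} MeV  ->  c*t_mix ~ {c_tau_fm(delta_m):.1f} fm")
# deltaM = 20-30 MeV gives c*t_mix of roughly 6.6-9.9 fm,
# consistent with the 5-10 fm mixing length quoted in the abstract.
```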
Here $\delta M$ is the difference between the X(3872) mass and that of the nearest $c\bar{c}$ mass-pole core state, which we take to be the $\chi_{c1}(2P)$ pure charmonium state, expected to lie about 20-30 MeV above $M_{X(3872)}$ [6, 7]. In this case the mixing time, $c\tau_{\rm mix} \approx 5$-$10$ fm, is much shorter than the lifetime of the X(3872), which is $c\tau_{X(3872)} > 150$ fm [8]. The experiments with proton-proton (pp) and proton-nucleus (pA) collisions at $\sqrt{s_{pN}}$ up to 26 GeV and luminosity up to $10^{32}$ cm$^{-2}$s$^{-1}$ planned at NICA are well suited to test this picture for the X(3872) and, possibly, other XYZ mesons. In near-threshold production experiments in the $\sqrt{s_{pN}} \approx 8$ GeV energy range, X(3872) mesons can be produced with typical kinetic energies of a few hundred MeV (i.e., with $\gamma\beta \approx 0.3$). In this case the X(3872) decay length will be greater than 50 fm, while the distance scale for the $c\bar{c} \to D^0\bar{D}^{*0}$ transition would be 2-3 fm. Since the survival probability of an $r_{\rm rms} \sim 9$ fm "molecule" inside nuclear matter should be very small, X(3872) production on a nuclear target with $r_{\rm rms} \sim 5$ fm or more ($A \sim 60$ or larger) should be strongly quenched. Thus, if the hybrid picture is correct, the atomic-number dependence of X(3872) production at fixed $\sqrt{s_{pN}}$ should behave dramatically differently from that of the $\psi'$, which is a long-lived compact charmonium state. The current experimental status of the XYZ mesons, together with hidden-charm tetraquark candidates and simulations of what we might expect for the A-dependence of X(3872) production in pp and pA collisions, is summarized.
References
[1] S. Olsen, Front. Phys. 10, 101401 (2015)
[2] S. Takeuchi, K. Shimizu, M. Takizawa, Progr. Theor. Exp. Phys. 2015, 079203 (2015)
[3] A. Esposito, A. Pilloni, A.D. Polosa, arXiv:1603.07667 [hep-ph]
[4] M.Yu. Barabanov, A.S. Vodopyanov, S.L. Olsen, A.I. Zinchenko, Phys. Atom. Nucl. 79, 126 (2016)
[5] M.Yu. Barabanov, A.S. Vodopyanov, S.L. Olsen, Phys. Scripta 166, 014019 (2015)
[6] N. Isgur, Phys. Rev.
D 32, 189 (1985)
[7] K. Olive et al. (PDG), Chin. Phys. C 38, 090001 (2014)
[8] The width of the X(3872) is experimentally constrained to be $\Gamma_{X(3872)} < 1.2$ MeV (90% C.L.); S.-K. Choi et al. (Belle Collaboration), Phys. Rev. D 84, 052004 (2011)
Speaker: Prof. Mikhail Barabanov (JINR)
• 17:36 Nuclear symmetry energy and its components at zero and finite temperatures 3m
We derive the volume and surface components of the nuclear symmetry energy (NSE) and their ratio [1] within the coherent density fluctuation model [2, 3]. The estimations use the results of the model for the NSE in finite nuclei based on the Brueckner and Skyrme energy-density functionals for nuclear matter. The obtained values of these quantities for the Ni, Sn, and Pb isotopic chains are compared with estimations of other approaches that have used available experimental data on binding energies, neutron-skin thicknesses, and excitation energies of isobaric analog states. Apart from the density dependence investigated in our previous works [4, 5, 6], we also study the temperature dependence of the symmetry energy in finite nuclei [7] in the framework of the local density approximation, combining it with the self-consistent Skyrme-HFB method using the cylindrical transformed deformed harmonic-oscillator basis. The results for the thermal evolution of the NSE in the interval T = 0-4 MeV show that its values decrease with temperature. The same formalism is applied to obtain the values of the volume and surface contributions to the NSE and their ratio at finite temperatures [8]. We confirm the existence of "kinks" in these quantities as functions of the mass number at T = 0 MeV for the double closed-shell nuclei 78Ni and 132Sn and the lack of "kinks" for the Pb isotopes, as well as the disappearance of these kinks as the temperature increases.
References
[1] A. N. Antonov, M. K. Gaidarov, P. Sarriguren, and E. Moya de Guerra, Phys. Rev. C 94, 014319 (2016).
[2] A.N. Antonov, V.A. Nikolaev, and I.Zh. Petkov, Bulg.
J. Phys. 6 (1979) 151; Z. Phys. A 297 (1980) 257; ibid. 304 (1982) 239; Nuovo Cimento A 86 (1985) 23; A.N. Antonov et al., ibid. 102 (1989) 1701; A.N. Antonov, D.N. Kadrev, and P.E. Hodgson, Phys. Rev. C 50 (1994) 164.
[3] A.N. Antonov, P.E. Hodgson, and I.Zh. Petkov, Nucleon Momentum and Density Distributions in Nuclei, Clarendon Press, Oxford (1988); Nucleon Correlations in Nuclei, Springer-Verlag, Berlin-Heidelberg-New York (1993).
[4] M. K. Gaidarov, A. N. Antonov, P. Sarriguren, and E. Moya de Guerra, Phys. Rev. C 84, 034316 (2011).
[5] M. K. Gaidarov, A. N. Antonov, P. Sarriguren, and E. Moya de Guerra, Phys. Rev. C 85, 064319 (2012).
[6] M. K. Gaidarov, P. Sarriguren, A. N. Antonov, and E. Moya de Guerra, Phys. Rev. C 89, 064301 (2014).
[7] A. N. Antonov, D. N. Kadrev, M. K. Gaidarov, P. Sarriguren, and E. Moya de Guerra, Phys. Rev. C 95, 024314 (2017).
[8] A. N. Antonov, D. N. Kadrev, M. K. Gaidarov, P. Sarriguren, and E. Moya de Guerra, Phys. Rev. C 98, 054315 (2018).
Speaker: Dr Mitko Gaidarov (INRNE-BAS)
• 17:39 β decay of 11Be 3m
11Be is a neutron-rich nucleus expected to be a β-delayed proton emitter. The very small branching ratio (BR) for this exotic decay mode ($\sim 10^{-6}$) was determined through indirect observations based on accelerator mass spectrometry (AMS) [1, 2] and turned out to be about two orders of magnitude larger than predicted [3]. The direct measurement of the delayed-proton emission probability and energy spectrum is particularly challenging, given the small energy window available ($\sim 280$ keV). The measurement of the βp energy spectrum is important for estimating the Gamow-Teller strength at high excitation energies and for testing models that predict a direct relation between βp emission and halo structure. Moreover, a new hypothesis that may explain the results of the AMS experiment has recently appeared. According to it, the neutron may have another decay channel in which unknown particles are produced in the final state [4, 5].
In order to solve this puzzle, we decided to perform a series of measurements with the Warsaw Optical Time Projection Chamber (OTPC) [6]. First tests were made in February 2018 at JINR in Dubna. Those measurements focused on studying the behaviour of light nuclei in the region of 11Be under the chosen experimental conditions. Additionally, we measured 9C beta decay, in which low-energy beta-delayed protons (165 keV) are emitted. The main experiment was performed in August/September 2018 at HIE-ISOLDE at CERN. During this run a large amount of 11Be beta decays was observed. A complementary measurement at LNS in Catania is planned for spring 2019; during this experiment the BR for beta-delayed alpha emission from 11Be will be determined. The whole project is extremely challenging and complex, from both the physical and the technical point of view (low BR for β-delayed protons: $10^{-8}$-$10^{-6}$, long half-life: $T_{1/2} = 13.7$ s, and low proton energy: $\sim 180$ keV). It required the development of new solutions for the acquisition system and analysis software. More details on the project and the status of the data analysis will be presented.
[1] K. Riisager, Nucl. Phys. A 925, 112 (2014).
[2] K. Riisager et al., Phys. Lett. B 732, 305 (2014).
[3] M. J. G. Borge et al., J. Phys. G 40, 035109 (2013).
[4] B. Fornal and B. Grinstein, Phys. Rev. Lett. 120, 191801 (2018).
[5] M. Pfützner, K. Riisager, Phys. Rev. C 97, 042501(R) (2018).
[6] M. Pomorski et al., Phys. Rev. C 90, 014311 (2014).
Speaker: Ms Natalia Sokołowska (Faculty of Physics, University of Warsaw)
• 17:42 Charged-current quasielastic (anti)neutrino cross sections on 12C with realistic spectral functions including meson-exchange contributions 3m
We present a detailed study of charged-current quasielastic (anti)neutrino scattering cross sections on a $^{12}$C target obtained using a spectral function $S(p,{\cal E})$ that gives a scaling function in accordance with the electron scattering data.
The spectral function accounts for nucleon-nucleon (\emph{NN}) correlations, has a realistic energy dependence, and is constructed using natural orbitals (NOs) from the Jastrow correlation method~\cite{01, 02, 03}. The results are compared with those obtained when \emph{NN} correlations are not included, i.e., when harmonic-oscillator single-particle wave functions are used instead of NOs. A comparison of the results with recent experiments, as well as with results from the superscaling approach, is made. The contribution of two-particle two-hole meson-exchange currents to neutrino-nucleus interactions is also considered within a fully relativistic Fermi gas. The results show good agreement with the experimental data over the whole range of neutrino energies.
Speaker: Dr Martin Ivanov (INRNE, BAS)
• Tuesday, 22 January
• 09:00 13:30 Tuesday Morning Session
Convener: Wolfgang Kuehn (JLU Giessen)
• 09:00 The AWAKE Experiment 45m
The AWAKE Experiment
Speaker: Dr Edda Gschwendtner (CERN)
• 09:45 Recent developments in nuclear structure theory. 45m
Recent developments in nuclear structure theory.
Speaker: Prof. Dean Lee (Michigan State University)
• 11:00 News from the "proton radius puzzle" 30m
For more than eight years now, the "proton radius puzzle" has let us dream about new physics: our measurements of muonic hydrogen and muonic deuterium, performed by the CREMA Collaboration at PSI, yielded a proton radius which is more than five standard deviations smaller than the CODATA world average from measurements using electrons, namely precision spectroscopy of atomic hydrogen and deuterium, and elastic electron scattering. A wealth of new experiments has been fueled by this exciting discrepancy, and the first results are now coming in. I will report on several new measurements in atomic hydrogen we have performed at MPQ Garching. These, together with new hydrogen measurements from LKB Paris and York U.
Toronto, and new elastic electron scattering data from the PRad experiment at Jefferson Lab, start to paint a clearer picture of the "proton radius puzzle", albeit not without raising new questions.
Speaker: Prof. Randolf Pohl (Johannes Gutenberg University Mainz)
• 11:30 Speaker: Claudia Patrignani (Università and INFN Bologna)
• 12:00 Antimatter measurements at the LHC and implications for indirect dark matter searches 30m
The observation of anti-deuterons and anti-helium in cosmic rays has been suggested as a smoking gun in indirect searches for Dark Matter in the Galaxy, under the hypothesis that the background from secondary astrophysical production is negligible. Constraining predictions for the secondary cosmic-ray flux of anti-helium and anti-deuterons with data is therefore crucial to the experimental searches. This contribution focuses on the impact of antimatter measurements at the Large Hadron Collider (LHC) on these searches. In proton-proton, proton-nucleus and nucleus-nucleus collisions at the TeV collision energy scale, light nuclei and their anti-matter counterparts are produced in equal amounts for a given species. The LHC can thus be used as an "anti-matter factory" to measure the production of $\bar{d}$, $^3\overline{\rm He}$ and $^4\overline{\rm He}$. In addition to providing unique information to characterise the system produced in high-energy collisions, accelerator data on anti-nuclei production can be used to constrain production models such as coalescence. The most recent results on anti-nuclei production at the LHC will be presented, and implications for cosmic-ray physics and indirect dark matter searches will be discussed.
Speaker: Dr Francesca Bellini (CERN)
• 17:00 19:20 Tuesday Afternoon Session
Convener: Dr Burkhard Kolb (GSI Darmstadt)
• 17:00 Search for a stable six-quark state in Upsilon decays 20m
Recently, it has been proposed that the six-quark combination uuddss could be a deeply bound state S, called the "sexaquark".
Depending on its mass, S could have a lifetime longer than the age of the universe, or even be absolutely stable. This makes S a good Dark Matter candidate, if it exists. In this talk we present the first search for a stable, deeply bound, doubly strange six-quark state in the decay $\Upsilon \to S \bar{\Lambda}\bar{\Lambda}$, using a data sample of 90 M $\Upsilon(2S)$ and 110 M $\Upsilon(3S)$ decays collected by the BABAR experiment. No signal is observed, and upper limits on the combined $\Upsilon(nS) \to S \bar{\Lambda}\bar{\Lambda}$ branching fractions are derived, setting stringent limits on the existence of such an exotic particle. [Ref: BABAR Collaboration, Search for a Stable Six-Quark State at BABAR, arXiv:1810.04724, submitted to PRL]
Speaker: Prof. Wolfgang Gradl (Universität Mainz)
• 17:20 Double-folding potentials from local chiral EFT interactions 20m
We present the first determination of double-folding potentials based on chiral effective field theory up to next-to-next-to-leading order. To this end, we construct new soft local chiral effective field theory two-body interactions. We also present a first assessment of the impact of three-body interactions from chiral EFT on the nucleus-nucleus folding potential. We benchmark this approach in oxygen-oxygen scattering, and present results for elastic scattering cross sections, as well as for the astrophysical S-factor of the fusion reaction.
Speaker: Ms Victoria Durant (TU Darmstadt)
• 17:40 Underground Nuclear Astrophysics: Present and future of the LUNA experiment 20m
Thermonuclear reaction rates regulate the evolution of stars and Big Bang nucleosynthesis. The LUNA Collaboration has shown that, by exploiting the ultra-low background achievable deep underground, it is possible to study the relevant nuclear processes down to the nucleosynthesis energies inside stars and during the first minutes of the Universe.
In this presentation the main results of LUNA 50 and LUNA 400 are reviewed, as well as the scientific program of the forthcoming 3.5 MV underground accelerator that will become operative at the Gran Sasso laboratory in 2019.
Speaker: Dr Carlo Gustavino (INFN-Roma1)
• 18:00 Proton-Xi interaction studied via the femtoscopy method in p-Pb collisions measured by ALICE. 20m
Femtoscopic studies of baryon-baryon pairs open a new era in the study of two-particle interactions at colliders. In particular, small collision systems prove to be particularly well suited to probe the short-ranged strong potentials. Experimental data are compared to local potentials with the newly developed Correlation Analysis Tool using the Schrödinger Equation (CATS). This analysis is based on data measured by the ALICE Collaboration in p-Pb collisions at 5.02 TeV, and the correlation function is obtained for pairs of protons and $\Xi$s. For the first time, an attractive strong interaction between the two particles is observed, with a significance of more than 3$\sigma$. Lattice calculations by the HAL QCD Collaboration modelling the latter are validated and are used to explore the implications of including the newly found attractive p-$\Xi$ interaction in the description of neutron stars.
Speaker: Mr Bernhard Hohlweger (PhD Student)
• 18:20 The Dose Profiler tracker: an online Particle Therapy monitor optimised for the detection of charged fragments produced by the ion beams interactions with matter. 20m
The use of C, He and O ions in Particle Therapy (PT) exploits the enhanced Relative Biological Effectiveness and Oxygen Enhancement Ratio of such projectiles to improve the treatment efficacy in damaging the cancerous cells.
To fully profit from the increased tumor-control probability and ballistic precision of the projectiles, an accurate online monitor of the spatial distribution of the dose release is required, to spare the healthy tissues surrounding the tumor area and prevent unwanted damage due to, for example, morphological changes in the patient during treatment with respect to the initial CT scan. A technology capable of monitoring PT treatments online is still missing in the clinical routine. Several studies are under way to develop beam-range verification systems exploiting the detection of the secondary radiation produced by the primary beam interactions with the patient body along the path towards the target volume. An interesting opportunity for C, He and O treatments is represented by the detection of charged particles, which can be performed with high efficiency in a nearly background-free environment. The Dose Profiler (DP) detector, developed within the INSIDE project, is a scintillating-fibre tracker that allows online charged-fragment reconstruction and backtracking. Unfolding the different matter-interaction effects (absorption and multiple scattering inside the patient) from the measured shape represents a crucial task when trying to correlate the measured emission profile with the beam range. Several strategies, based on MC methods, are currently being explored to accomplish this task. In this contribution the preliminary tests performed on the DP, using the $^{12}$C ion beam of the CNAO treatment centre and an anthropomorphic phantom (RANDO®) as target, will be reported, together with the implications for treatment-monitoring applications. The first DP clinical trial is scheduled to start in early 2019 at the CNAO centre, aiming to study the fragment production in the treatment of patients with different clinical conditions and expected treatment toxicity.
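The charged-fragment backtracking mentioned above amounts to extrapolating each reconstructed track back to its point of closest approach to the beam axis and histogramming those points into a longitudinal emission profile. A simplified sketch of that geometry (illustrative only, not the INSIDE reconstruction code; the beam axis is taken as the z-axis):

```python
import math

def closest_approach_z(point, direction):
    """Return (z, distance) of the point on a fragment track closest to the
    beam axis (taken as the z-axis). `point` is any point on the track and
    `direction` its direction vector, both as (x, y, z) tuples."""
    px, py, pz = point
    dx, dy, dz = direction
    denom = dx * dx + dy * dy
    if denom == 0.0:          # track parallel to the beam axis
        return pz, math.hypot(px, py)
    # Minimise the transverse distance (px + t*dx)^2 + (py + t*dy)^2 over t.
    t = -(px * dx + py * dy) / denom
    x, y = px + t * dx, py + t * dy
    return pz + t * dz, math.hypot(x, y)

# A track measured at (5, 0, 40) pointing back along (-1, 0, -4)
# crosses the beam axis at z = 20 with zero distance of closest approach.
z, dca = closest_approach_z((5.0, 0.0, 40.0), (-1.0, 0.0, -4.0))
print(round(z, 1), round(dca, 2))  # 20.0 0.0
```

Filling the returned z values into a histogram, after the MC-based unfolding of absorption and multiple scattering, yields the emission profile that is correlated with the beam range.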
Speaker: Ms Micol De Simoni (Università di Roma "La Sapienza", Scienze di Base e Applicate per l'Ingegneria, Rome, Italy)
• 18:40 Search for new decay modes in neutron-deficient silicon isotopes 20m
A characteristic feature of nuclei lying far to the left of the β-stability path is their high $Q_{\beta^+}$ value. This can result in the population of highly excited, and often unbound, states in the daughter nuclei. As a consequence, it can lead to β-delayed (multi-) charged-particle emission, which competes strongly with deexcitation via γ emission. Hence, the study of such decay channels is a unique tool for gaining insight into the nuclear structure in this region. Moreover, such nuclei are often close to the path followed by the astrophysical rp-process [1, 2]. The two most neutron-deficient silicon isotopes known, $^{22,23}$Si, were investigated in an experiment performed at the MARS spectrometer at the Cyclotron Institute, Texas A&M University. The ions were implanted into the Warsaw Optical Time Projection Chamber [3], which is an excellent tool for investigating rare decay modes with almost 100% detection efficiency. The data collected allowed us to confirm all known decay channels (β1p, β2p), as well as to identify convincing candidates for new decay branches. The results of the analysis will be presented and discussed.
[1] H. Schatz et al., Phys. Rep. 294, 167 (1998)
[2] B.A. Brown et al., Phys. Rev. C 65, 045802 (2002)
[3] M. Pomorski et al., Phys. Rev. C 90, 014311 (2014)
Speaker: A. A. Ciemny (Faculty of Physics, University of Warsaw, Poland)
• Wednesday, 23 January
• 09:00 13:25 Wednesday Morning Session
Convener: Prof. John Harris (Yale University)
• 09:00 IceCube: Opening a Neutrino Window on the Universe from the South Pole 45m
IceCube: Opening a Neutrino Window on the Universe from the South Pole
Speaker: Prof.
Francis Halzen (University of Wisconsin-Madison)
• 09:45 Neutron-star merger modelling after GW170817 45m
This overview talk will summarize the status of the numerical modelling of neutron-star mergers. The focus will be particularly on the role of neutrinos, which determine the composition of the matter ejected during and after the neutron-star collision and thus the properties of the electromagnetic kilonova emission. A newly developed, computationally fast 'Improved Leakage-Equilibration-Absorption Scheme' (ILEAS) will be introduced. This new treatment allows for efficient simulations of large model grids, which is necessary to explore the high-dimensional space of merger parameters and conditions.
Speaker: Prof. Hans-Thomas Janka (Max Planck Institute for Astrophysics)
• 11:00 Experimental constraints from LHC on the Quark-Gluon Plasma 45m
An overview of recent results of the ALICE collaboration is given, with emphasis on recent results from LHC Run 2 and how they provide insight into the understanding of hot and dense nuclear matter. Particular attention is given to the observation of collective effects in small collision systems, which have caused a paradigm shift in the field of heavy ions in the last years. An outlook is given on the upcoming decade of heavy-ion collisions at the LHC.
Speaker: Dr Jan Fiete Grosse-Oetringhaus (CERN)
• 11:45 Status and perspectives of the Belle II experiment. 35m
The Belle II experiment at the SuperKEKB e+e- collider completed a commissioning phase in 2018 and is gearing up for full physics data taking starting in March 2019. In this contribution the status of the experiment, the first results of the 2018 data taking, as well as future plans and perspectives will be presented.
Speaker: Prof.
Francesco Forti (INFN and University, Pisa)
• 12:20 QCD phase diagram from Dyson-Schwinger equations 20m
We review results for the phase diagram of QCD, the properties of quarks and gluons, and the resulting properties of strongly interacting matter at finite temperature and chemical potential. The interplay of two different but related transitions in QCD, chiral symmetry restoration and deconfinement, leads to a rich phenomenology when external parameters such as quark masses, volume, temperature and chemical potential are varied. We discuss the progress in this field from a theoretical perspective, focusing on non-perturbative QCD as encoded in the functional approach via Dyson-Schwinger and Bethe-Salpeter equations. We discuss various aspects associated with the variation of the quark masses, assess recent results for the QCD phase diagram, including the location of a putative critical end point for $N_f=2+1$ and $N_f=2+1+1$, discuss results for quark spectral functions, and summarise aspects of QCD thermodynamics and fluctuations.
Speaker: Prof. Christian Fischer (JLU Giessen)
• 17:00 19:40 Wednesday Afternoon Session
Convener: Dr Harald Merkel (Institut für Kernphysik, Johannes Gutenberg-Universität Mainz)
• 17:00 Comparison of Transport Codes for Intermediate Energy Heavy Ion Collisions under Controlled Conditions 20m
Transport descriptions of heavy-ion collisions in the intermediate energy range are an important method to extract information on the nuclear equation of state, which is also relevant for astrophysical processes. Different transport model codes have been developed and applied widely. The physical deductions of such analyses should be as independent as possible of the particular model, or at least the differences should be well understood. However, this has not always been the case in recent analyses, e.g. of pion production.
In view of this, a transport-code evaluation project under controlled conditions, with the participation of most of the widely used codes, was initiated some time ago to understand these differences. A first study was made for Au+Au collisions, which showed rather substantial differences. To investigate these further, comparisons were made of calculations in infinite nuclear matter, which can be realized to a good approximation in a box with periodic boundary conditions. In this set-up the different aspects of a transport calculation can be investigated separately and compared against analytical results. We have completed a study of the collision term and found an important influence of the fluctuations that are intrinsically built into the codes, e.g. on the Pauli-blocking behavior. Work on the mean-field propagation and on pion production is in progress. In this talk I will give an overview of the status and conclusions of these comparisons and discuss implications for transport-code development and future directions.
Speaker: Prof. Hermann Wolter (University of Munich)
• 17:20 Breaking and restoration of rotational symmetry in the low-energy spectrum of light alpha-conjugate nuclei on the lattice 20m
The breaking of rotational symmetry on the lattice for bound eigenstates of the two lightest alpha-conjugate nuclei is explored. Moreover, a macroscopic alpha-cluster model is used to investigate the general problems associated with the representation of a physical many-body problem on a cubic lattice. In view of the descent from the 3D rotation group to the cubic group symmetry, the role of the squared total angular momentum operator in the classification of the lattice eigenstates in terms of SO(3) irreps is discussed.
In particular, the behaviour of the average values of the latter operator, the Hamiltonian and the inter-particle distance as a function of lattice spacing and size is studied by considering the 0+, 2+, 4+ and 6+ (artificial) bound states of 8Be and the lowest 0+, 2+ and 3− multiplets of 12C. Some preliminary results of the analysis of the 16O spectrum, the subject of a forthcoming paper, will be presented briefly.
Speaker: Mr Gianluca Stellin (Helmholtz Institut für Strahlen- und Kernphysik (HISKP) - Universität Bonn)
• 17:40 New Physics in Nuclear Collectivity – The Latest 20m
J. F. Sharpey-Schafer, University of the Western Cape, Cape Town, South Africa
Recent developments in the experimental data on "collective" structures at low excitation energies have challenged the traditional paradigms of the essential physics involved in the underlying configurations. The importance of systematic experimental studies that vary the deformations and asymmetries of the mean nuclear shape is stressed. The limitations of collective descriptions of nuclear structure are discussed, especially the limitations of "phonon" and "boson" approximations in describing the low-energy excitations of both "spherical" and deformed nuclei. The importance of axial asymmetry in quadrupole deformation is stressed, and octupole deformation is, in some regions of the nuclear chart, vital to understanding the detailed nuclear structure. We will discuss whether the collective excitations at low excitation energies can be described entirely in terms of the mean nuclear shape of non-spherical nuclei, and where real "vibrations" of nuclei might occur.
Speaker: Prof. John F.
Sharpey-Schafer (University of the Western Cape) • 18:00 Development of an accurate DWIA model of coherent pion photoproduction to study neutron skins in medium heavy nuclei 20m Despite decades of studies in which the nuclear charge distribution has been measured with increasing precision, the neutron distribution remains elusive. The difference between the neutron and proton distributions is often expressed as the difference of their root-mean-square radii: the neutron skin thickness. Recently, the A2 collaboration at MAMI measured the skin thickness in lead [1]. This experiment was based on coherent pion photoproduction, where a photon impinges on a nucleus and produces a neutral pion coherently (the nucleus remains in its ground state). The pion is then measured through its two-photon decay by utilizing a large solid-angle photon detector, the Crystal Ball (CB), in conjunction with the Glasgow photon tagger. The coherent photoproduction measurements are thus very clean. At first order, in the plane wave impulse approximation (PWIA), in which the final-state interaction between the outgoing pion and the nucleus is neglected, the photoproduction cross section is proportional to the nuclear density form factor. In combination with charge distribution measurements, coherent pion photoproduction is therefore a good way of measuring the neutron skin thickness. However, the distortion caused by the final-state interaction of the pion with the nucleus has a significant impact on these cross sections and induces model dependency. These effects are included in the distorted wave impulse approximation (DWIA). In this work, we develop a new DWIA reaction code to help the (still ongoing) analysis of recent measurements by the A2 collaboration at MAMI of the coherent pion photoproduction cross section on $^{116,120,124}$Sn isotopes. In this reaction code, we devise a new potential for the scattering of pions off $^{12}$C, based on the work of K. Stricker-Bauer [2].
This potential is constructed from the partial wave analysis of pion scattering off free protons and neutrons in the SAID database [3]. It contains first- and second-order terms (the pion being scattered once or twice by the nucleons of the nucleus, respectively) and absorptive terms. The main interest of this potential is that it should be valid over a large range of energies with only minor adjustments, and it allows the use of realistic densities (for example at the mean-field approximation). This tight collaboration between the experimental and theoretical groups and this new reaction code will thus allow the study of the influence of details of the neutron densities on the pion photoproduction cross sections by improving the quality of the potential used to simulate the final-state interaction. \\ \\ {[1] C. Tarbert et al., Phys. Rev. Lett. 112, 242502 (2014)} \\ {[2] K. Stricker-Bauer, Ph.D. thesis, Michigan State University (1980)}\\ {[3] R. L. Workman et al., Phys. Rev. C 86, 035202 (2012)} Speaker: Mr Frederic Colomer (ULB (Université Libre de Bruxelles)) • 18:20 Machine Learning based jet momentum reconstruction in heavy-ion collisions 20m The precise reconstruction of jet transverse momenta in heavy-ion collisions is a challenging task. A major obstacle is the large number of (mainly) low-$p_\mathrm{T}$ particles overlaying the jets. Strong region-to-region fluctuations of this background complicate the jet measurement and lead to significant uncertainties. In this talk, a novel approach to correct jet momenta (or energies) for the underlying background in heavy-ion collisions will be presented for the first time. The proposed method was recently described in a paper submitted to PRC [1]. The analysis makes use of common Machine Learning techniques to estimate the jet transverse momentum based on several parameters, including properties of the jet constituents.
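The regression idea sketched in this abstract can be illustrated with a small toy example. Everything below — the feature set, the noise levels, and the ordinary-least-squares model — is invented for illustration only; the actual analysis uses HIJING simulations and more sophisticated estimators:

```python
import numpy as np

# Toy model (not the paper's simulation): each "jet" carries a true pT
# plus a fluctuating background contribution proportional to an
# event-wise background density rho and the jet area.
rng = np.random.default_rng(0)
n = 20000
pt_true = rng.uniform(10.0, 100.0, n)       # true jet pT (GeV), known at training time
rho = rng.normal(120.0, 15.0, n)            # background density per unit area
area = rng.normal(0.4, 0.02, n)             # jet catchment area
pt_raw = pt_true + rho * area + rng.normal(0.0, 5.0, n)   # uncorrected jet pT
pt_lead = 0.5 * pt_true + rng.normal(0.0, 3.0, n)         # leading-constituent pT proxy

# Standard area-based correction: subtract a global background estimate.
pt_area = pt_raw - np.median(rho) * area

# "ML" correction: least squares on a few jet-level features, standing in
# for the boosted regression of the paper (train/test split omitted for brevity).
X = np.column_stack([pt_raw, rho * area, pt_lead, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, pt_true, rcond=None)
pt_ml = X @ coef

print("area-based residual std:", round(float(np.std(pt_area - pt_true)), 2))
print("regression residual std:", round(float(np.std(pt_ml - pt_true)), 2))
```

In this toy setting the regression recovers the true jet pT with a markedly narrower residual than the pure area-based subtraction, mirroring the qualitative claim of the abstract.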
Using a toy model and HIJING simulations, the performance of the new method is shown to be superior to the established standard area-based background estimator. The application of the new method to data promises the measurement of jets down to extremely low transverse momenta, so far unprecedented in heavy-ion collision data. [1] preprint available at https://arxiv.org/abs/1810.06324 Speaker: Ruediger Haake (Yale University) • 18:40 DarkMESA: Light dark matter search at the MESA beam-dump 20m At the MESA accelerator in Mainz, Germany, the parasitic electron beam-dump experiment DarkMESA has a powerful discovery potential for dark sector particles in the light mass range. The possible existence of such light dark matter (LDM) is a candidate explanation for the long-standing dark matter problem. With 10 000 hours of operation time scheduled for the P2 beam experiment at MESA, the dump of the external 150 $\mu$A beam could act as a strong source of an LDM beam. LDM would be produced copiously in the relativistic electron-nucleus collisions taking place in the dump if it couples to electrons via vector mediators, called dark photons. After production, LDM particles could be detected within a shielded detector downstream of the dump. A large advantage is provided by the boost with which particles are produced by the beam, allowing an improved reach at low masses. Moreover, such a search is unique since it can probe at the same time both the dark photon production and the LDM interaction. DarkMESA will benefit from the beam energy being below the pion production threshold, producing very little beam-related background, and from the very stable and continuous beam conditions necessary for the P2 experiment. The DarkMESA detector will be constructed from totally absorbing calorimeters made of high-density Cherenkov radiators. Advantages are their speed and relatively low sensitivity to background neutrons.
In order to establish the anticipated performance of such calorimeters experimentally, measurements of detector responses over a range of electron energies relevant for LDM detection down to 5 MeV were performed. PbF$_2$ and SF5 lead glass detectors proved to be well suited. The first phase of DarkMESA will employ available PbF$_2$ crystals to build a (1 x 1 x 0.13) m$^3$ detector of 1200 kg mass. This calorimeter will be arranged in sub-modules of 5 x 5 crystals. In a second phase, additional calorimeters will be constructed from Pb glass blocks: a first prototype with a volume of 0.15 m$^3$ and a mass of 600 kg is already under construction. The completed Phase-2 calorimeters will comprise a volume of 1 m$^3$ and a mass of 4100 kg. A final active volume of above 10 m$^3$ is envisaged. Simulation studies were performed that explored the parameter space of possible dark photon masses and couplings, assuming realistic electron beam energy and angular distributions as well as different detector acceptances and efficiencies. They show that DarkMESA is complementary to experiments at proton beam facilities and reopens the door to regions of the parameter space excluded by searches for dark photon decays into electrons or muons. The studies indicate that DarkMESA has the potential to be sensitive to the LDM thermal relic targets that are predicted by the annihilation cross sections required to reproduce today's dark matter density. Speaker: Prof. Patrick Achenbach (JGU Mainz) • Thursday, 24 January • 09:00 13:20 Thursday Morning Session Convener: Prof. Marcel Merk (Nikhef) • 09:00 Latest results from the ATLAS Experiment 45m Latest results from the ATLAS Experiment Speaker: Stan Lai (Universität Göttingen) • 09:45 The DarkSide experiment. 45m The DarkSide experiment. Speaker: Prof. Cristiano Galbiati (Princeton University) • 11:00 Exploring towards the neutron-rich limit of nuclei, and beyond 35m How many neutrons can be added to a bound nucleus before it becomes unbound?
The location of the neutron drip line, the bound limit on the neutron-rich side of the nuclear chart, is indeed one of the fundamental unsolved questions in nuclear physics, as it has been established experimentally only up to Z=8. The other question we address here is how atomic nuclei behave near the drip line and beyond. With these questions in mind, I present and discuss recent experimental studies on exotic neutron-rich nuclei, using the advanced rare-isotope in-flight beam facilities [1]. Neutron-rich nuclei, in particular near and beyond the neutron drip line, show characteristic structure due to the weak binding (or unbinding) and the large difference between the neutron and proton Fermi energies. Key aspects are nuclear shell evolution, deformation, continuum effects, neutron halos, and the strong two-neutron correlation called the dineutron, which are discussed. Here, I will focus on results for nuclei near and beyond the neutron drip line, using the SAMURAI facility at RIBF, RIKEN. Finally, I will provide perspectives on experimental studies using the new-generation RI-beam facilities towards the neutron-rich limit of the nuclear chart. [1] T. Nakamura, H. Sakurai, H. Watanabe, Prog. Part. Nucl. Phys. 97, 53 (2017). Speaker: Prof. Takashi Nakamura (Tokyo Institute of Technology) • 11:35 Heavy ion and fixed target results at LHCb 30m Heavy ion and fixed target results at LHCb Speaker: Giulia Manca (University of Cagliari and INFN) • 12:05 Structure of light s-shell \Xi hypernuclei 30m One of the important subjects in hypernuclei is to extract information on hyperon-nucleon and hyperon-hyperon interactions. Over the past two decades, through experimental and theoretical efforts, we have succeeded in obtaining information on the \Lambda N interaction. As a next step, we focus on the \Lambda \Lambda and \Xi N interactions. In particular, there is still considerable ambiguity in the \Xi N interaction.
In this talk, I will report on s-shell \Xi hypernuclei such as NN\Xi and NNN\Xi systems, using the Nijmegen potential and a \Xi N potential based on the HAL collaboration. In addition, I will explain how to extract information on the \Xi N interaction from the calculation. Speaker: Prof. Emiko Hiyama (Kyushu University/RIKEN) • 17:00 19:30 Thursday Afternoon Session Convener: Prof. Christian Fischer (JLU Giessen) • 17:20 The fastest Time Projection Chamber of the world 20m The upgrade of the ALICE Time Projection Chamber (TPC) is an essential part of the experiment's preparation for the LHC Run 3 starting in 2021. The production of the new readout detectors has been practically completed; the detectors will be installed in the TPC in a few months from now. The Gas Electron Multiplier technology, on which they are based, will enable us to operate the TPC in a continuous mode, sampling the full rate of lead-lead collisions offered by the LHC. In my presentation I will briefly describe the design and production of the chambers, and review the physics prospects opening up in Run 3. Speaker: Dariusz Miskowiec (GSI) • 17:40 Local equilibration of fermions and bosons 20m It is proposed to model the local kinetic equilibration in finite systems of fermions and bosons based on a nonlinear diffusion equation [1,2]. It properly accounts for their quantum-statistical characteristics, and is solved exactly. The solution is suited to replace the linear relaxation ansatz that has often been used in the literature. The microscopic transport coefficients are determined through the macroscopic variables temperature and local equilibration time. The model can be applied to high energies typical of relativistic particle collisions, and to low energies appropriate for cold quantum gases. With initial conditions that are appropriate for quarks [1] and gluons [2] in a relativistic heavy-ion collision such as Au-Au or Pb-Pb at energies reached at RHIC or LHC, the analytical solution is derived.
It agrees with the numerical solution of the nonlinear equation. The analytical expression for the gluonic local equilibration time in the thermal tail is compared to the corresponding case for fermions, where Pauli's principle delays the thermalisation. Due to the nonlinearity of the basic equation, sharp edges of the initial distributions are continuously smeared out and local equilibrium with a thermal tail in the ultraviolet region is rapidly attained [2]. [1] G. Wolschin, Phys. Rev. Lett. 48, 1004 (1982); T. Bartsch, G. Wolschin, Annals Phys., in press, and arXiv:1806.04044 (2018). [2] G. Wolschin, Physica A 499, 1 (2018); Europhys. Lett. 123, 20009 (2018). Speaker: Prof. Georg Wolschin (U Heidelberg) • 18:00 Probing the structure of weak interactions 20m The Standard Model, as a very successful theory of electroweak interactions, rests on the basic assumption of the pure "V(ector)-A(xial vector)" character of the interaction. Nevertheless, even after more than half a century of development of the model and experimental testing of its fundamental ingredients, experimental data can rule out the existence of other types of weak interactions (scalar, tensor) only at the ~8% level. A new project at ISOLDE/CERN, WISARD, to search for these forbidden components of the weak interaction (or at least to significantly improve their current experimental limits) is being prepared. The WISARD experimental setup, operating online at the beam of the isotope separator ISOLDE, will probe the existence of scalar currents in the weak interaction via the study of β-delayed protons emitted in the decay of 32Ar. A high-precision measurement of the Doppler effect on the protons emitted from the moving recoil nuclei after the β-decay of 32Ar carries information about β angular correlations (different for a scalar current than for the dominant vector current). The current status of the WISARD setup and the first results of the commissioning runs will be presented.
Speaker: Dalibor Zakoucky (Nuclear Physics Institute of ASCR) • 18:20 Overview of recent measurements of Upsilon production and suppression with the STAR experiment 20m $\Upsilon$ states can be used to study the properties of the quark-gluon plasma created in heavy-ion collisions. At sufficiently high temperature, $\Upsilon$ mesons dissociate in the plasma as a result of the Debye-like screening of the strong force. Due to their different binding energies, the ground and excited $\Upsilon$ states are expected to dissociate in a sequential pattern. However, other effects, such as the influence of Cold Nuclear Matter (CNM), need to be taken into account when interpreting the $\Upsilon$ suppression observed in heavy-ion collisions. Furthermore, the quarkonium production mechanism in elementary collisions is not yet fully understood. This can be studied by comparing experimental measurements of $\Upsilon$ production in p+p collisions to theoretical calculations. In addition, the dependence of the $\Upsilon$ yield on charged-particle multiplicity can be used to study the interplay between hard and soft processes. In this talk, we will present recent $\Upsilon$ measurements with the STAR experiment. The $\Upsilon$ transverse momentum and rapidity spectra in $500\:\mathrm{GeV}$ p+p collisions will be compared to model calculations. In addition, the normalized $\Upsilon$ yield vs. normalized charged-particle multiplicity will be presented and compared to results from other experiments and models. The nuclear modification factors for $\Upsilon(1S)$ and $\Upsilon(2S+3S)$ in Au+Au collisions as functions of centrality and transverse momentum will be shown and compared to LHC measurements. Also, the nuclear modification factor of $\Upsilon(1S+2S+3S)$ as a function of rapidity measured in p+Au collisions will be presented to quantify the CNM effects.
Speaker: Dr Leszek Kosarzewski (Czech Technical University in Prague) • 18:40 In-medium properties of Λ in π−-induced reactions at 1.7 GeV/c 20m The high-precision measurement of a two-solar-mass neutron star gives a strong constraint on the equation of state (EOS) of several models describing such dense objects. While more data and recent experimental observations reduce the allowed phase space, the appearance of hyperons inside the neutron star core is still a discussed scenario. For all these EOS the interaction of the hyperon with (normal) nuclear matter is the key ingredient. Of particular interest is thus the $\Lambda$ hyperon, which should appear first, as it is the lightest hyperon. The $\Lambda$-p interaction is known to a certain extent, and the existence of hypernuclei demonstrates the attractive nature of the $\Lambda$-nucleus interaction; however, no differential study of the $\Lambda$ propagation within nuclear matter has been carried out so far. In 2014 the HADES collaboration measured $\pi^- + A$ (A = C, W) reactions at an incident pion momentum of 1.7 GeV/c. Since the pion-nucleon cross section is rather large, hyperon production occurs at the surface of the nucleus in pion-induced reactions. This provides an ideal system, as the path length of the produced hyperons through nuclear matter is rather large and hence in-medium properties can be studied. In our experimental approach we select the exclusive channel $\pi^-+p \rightarrow \Lambda + K^0$, reconstructed in terms of the associated dominant charged decay products, in a light (C) and a heavy (W) nuclear environment. With the help of the GiBUU transport code, we are able to test different scenarios, which include different couplings of the $\Lambda$ to the normal nuclear environment in combination with the $K^0$. One of these scenarios also includes, for the first time, a repulsive $\Sigma^0$ potential, predicted by chiral effective theory.
We will report on the ongoing analysis and present our sensitivity to the different scenarios of the in-medium propagation. Speaker: Mr Steffen Maurus (TUM) • Friday, 25 January • 09:00 12:30 Friday Morning Session Convener: Dr Mariana Nanova (II. Phys. Institut, University of Giessen, Giessen, Germany) • 09:00 Novel silicon pixel detectors 45m Novel silicon pixel detectors Speaker: Dr Luciano Musa (CERN) • 09:45 What it takes to calculate the magnetic moment of the muon in the standard model 35m The magnetic moment of the muon is the observable which currently shows the largest discrepancy between experiment and the standard-model prediction (3 to 4 standard deviations) [1]. Turning the indication into an observation requires the reduction of both the experimental and the standard-model uncertainty. The dominant source of the latter resides in the hadronic contributions that enter the magnetic moment via loop corrections involving strongly interacting fields. At the present level of accuracy the main players are the hadronic vacuum polarization and the hadronic light-by-light (HLbL) scattering contribution. I will report how dispersion theory can be used to relate the hadronic contributions to measurable quantities and to obtain in that way a data-driven determination including a reliable uncertainty estimate. In particular, I will focus on the most recent determination of the leading contribution to HLbL emerging from the pion-pole diagram [2,3]. [1] F. Jegerlehner and A. Nyffeler, Phys. Rept. 477, 1 (2009) [2] M. Hoferichter, B. L. Hoid, B. Kubis, S. Leupold and S. P. Schneider, Phys. Rev. Lett. 121, 112002 (2018) [3] M. Hoferichter, B. L. Hoid, B. Kubis, S. Leupold and S. P.
Schneider, JHEP 1810, 141 (2018) Speaker: Stefan Leupold (Uppsala University) • 10:50 Importance of the Tensor Interaction in Structure of Nuclei 35m Studies of nuclei far from the stability line have revealed drastic changes in nuclear orbitals, manifested as the appearance of new magic numbers and the disappearance of traditional ones. One important reason for such changes is considered to be the effect of tensor forces in nuclear structure. Although tensor forces have long been known to provide much of the binding energy in very light nuclei such as the deuteron and 4He, direct experimental evidence of their importance in nuclear structure is scarce. In particular, it is known that the mixing of higher partial waves, for example the D-wave component of the deuteron wave function with its high-momentum content, is very important for the binding. Recent studies of (p,d) and (p,pd) reactions at high momentum transfer will be presented, and the importance of tensor interactions in low-lying excited states of nuclei will be discussed. Speaker: Prof. Isao Tanihata (RCNP, Osaka University) • 11:25 Searching for a matter creating process in nuclear decays 35m Searching for a matter creating process in nuclear decays Speaker: Prof. Stefan Schoenert (TUM) • 12:00 Indirect techniques in nuclear astrophysics 30m Indirect techniques in nuclear astrophysics Speaker: Prof. Aurora Tumino (Kore University, Enna & INFN-Laboratori Nazionali del Sud, Catania) • 17:00 19:30 Friday Afternoon Session Convener: Prof. John Sharpey-Schafer (The University of the Western Cape, South Africa) • 17:00 Neutron flows and neutron stars 20m The nuclear symmetry energy at high density has been probed with heavy-ion reactions at high energy and by analyzing neutron star properties. A new source of information has opened up with the observation of the first LIGO and Virgo gravitational wave signal, GW170817, from a neutron star merger.
It offers additional possibilities for quantitatively comparing terrestrial and celestial results, with implications for the applied models and methods. The prospects for improved measurements of neutron and proton elliptic flows at FAIR using the NeuLAND and KRAB detectors will be discussed. Speaker: Prof. Wolfgang Trautmann (GSI Helmholtzzentrum für Schwerionenforschung GmbH) • 17:20 Internal Gas-Jet Target for high intensity electron beam experiments 20m In the last three years a new target system has been developed in Mainz, which will serve as the target of the upcoming MAGIX experiment. This target is a so-called gas-jet target, which is completely windowless. It should therefore minimize uncertainties typically induced by target frames and windows. To test this target, a measurement at the A1 facility in Mainz has been performed. This talk covers the target technique and the results of the measurements. Speaker: Stephan Aulenbacher (JGU Mainz) • 17:40 A versatile plastic neutron spectrometer for nuclear reactions and applications 20m With the advent of new facilities for radioactive ion beams, in particular neutron-rich ones, it is necessary to develop neutron detection systems fully integrated with charged-particle detection. It is argued that the integration of the neutron signal, especially for neutron-rich beams, is an important experimental advance for studying the properties of exotic nuclear matter. Neutron detection with high angular and energy resolution is also an important issue in many applications. For these reasons, new detectors using new materials have to be built. In this contribution the NArCoS (Neutron Array for Correlation Studies) project, whose purpose is to realize a new prototype detector for neutrons and charged particles, will be presented, with particular emphasis on the physics motivation and first experimental tests.
Speaker: Dr Emanuele Vincenzo Pagano (LNS-INFN) • 18:00 Dalitz decays of hyperons studied by HADES detector 20m The spectrum of excited states of singly and doubly strange hyperons is only poorly known. Their internal structure is controversially discussed within several models, e.g. quark and bag models, or even pentaquark or meson-baryon molecule interpretations like the famous $\Lambda$(1405). One of the keys to the hyperons' electromagnetic structure are transition form factors, which are predicted to be an ideal tool to discriminate between the various models. Transition form factors in the space-like region have already been measured by the CLAS collaboration. The HADES detector is a perfect tool to perform similar research in the time-like region. HADES is a versatile detector specialized for dilepton and strangeness measurements at SIS18 energies. It has recently been upgraded with an electromagnetic calorimeter and a new RICH photon detector. Next year an additional forward detector will be installed, which will extend the acceptance for forward-peaked hyperon decays. All of this, together with the improved SIS18 operating with protons at a maximum energy of 4.5 GeV, opens up new experimental possibilities. In my contribution I will present feasibility studies for new experiments at 4.5 GeV beam kinetic energy on excited-hyperon Dalitz decays using the forward detector. Additionally, an ongoing analysis of existing data from the [email protected] GeV and [email protected] GeV experiments could give predictions for count rates and production cross-sections. Speaker: Mr Krzysztof Nowakowski (Jagiellonian University) • 18:20 Determination of the $^4$He monopole transition form factor 20m We will give an overview of new results for the $^4$He monopole transition form factor from data taken in an electron scattering experiment at Mainz in 2016. Emphasis will be on different models for background contributions and the monopole resonance itself.
As an aside, we discuss the intrinsic FWHM (full width at half maximum) of the monopole resonance. Speaker: Mr Simon Kegel (Institut fuer Kernphysik Uni Mainz) • 18:40
https://www.physicsforums.com/threads/find-an-expression-for-the-sum-of-the-series.582589/
# Find an expression for the sum of the series

1. Feb 29, 2012

1. The problem statement, all variables and given/known data

(1x2x6) + (2x3x7) + ... + n(n + 1)(n + 5)

2. Relevant equations

Find an expression for the sum of the series. Give your answer as a product of linear factors in n.

3. The attempt at a solution

I haven't tried it since I don't know what to do.

2. Feb 29, 2012

### Staff: Mentor

Why don't you compute the first five terms of the series and see if you notice a pattern:

n=1 sum=12
n=2 sum=54
n=3 ...

Then divide each sum by n and see if you can find a pattern in terms of n.

3. Feb 29, 2012

Ok, so I have: 12/n + 42/n + 96/n + 180/n + 300/n. All I can see is that each number has a highest common factor of 6, but I'm guessing that's not it.
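For what it's worth, the pattern can also be checked numerically. Expanding k(k + 1)(k + 5) = k^3 + 6k^2 + 5k and applying the standard power-sum formulas suggests the closed form S(n) = n(n + 1)(n + 2)(n + 7)/4, which is indeed a product of linear factors; the derivation is not part of the thread, so the script below verifies it against direct summation:

```python
# Compare direct summation of k(k+1)(k+5) with the candidate closed
# form n(n+1)(n+2)(n+7)/4, obtained from the standard formulas for
# sum k, sum k^2 and sum k^3.

def series_sum(n):
    return sum(k * (k + 1) * (k + 5) for k in range(1, n + 1))

def closed_form(n):
    # n(n+1)(n+2)(n+7) is always divisible by 4, so // is exact here.
    return n * (n + 1) * (n + 2) * (n + 7) // 4

for n in range(1, 100):
    assert series_sum(n) == closed_form(n)

print(series_sum(1), series_sum(2), series_sum(5))  # 12 54 630
```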
http://mathoverflow.net/questions/130652/did-smith-correctly-state-the-mass-formula
# Did Smith correctly state the mass formula? Did Smith correctly state the mass formula? H.J.S. "normal form" Smith was the first, in 1867, to state the mass formula for integral quadratic forms in a genus of 4 or more variables. This was forgotten and the formula is usually attributed to Minkowski, who rediscovered it in 1885, and to Siegel, who corrected Minkowski in 1935. Conway and Sloane mention a number of erroneous sources after Siegel, but though they quote Smith worrying that he has made an error, they mention no error, suggesting that he got it right. On the other hand, they say that in addition to the 1867 paper, Smith had an 1884 paper which won a prize from the French Academy jointly with Minkowski. So I expect that the judges compared the two results and would have noticed a discrepancy, suggesting that they were equally right or wrong. I'm not sure what any of these papers actually state. The 1884 competition was about the specific case of the sum of five squares, so perhaps the entries imposed restrictions that saved their correctness and Minkowski's error was only in the 1885 extension? In particular, I think that he restricted to odd forms in 1884, but I forget my source for this. Conway and Sloane say that Smith's 1867 formula restricted to odd determinant. More generally, what are good sources for the history of quadratic forms? - Not entirely sure about Smith. As far as I know, Conway and Sloane's version is correct, I've used it, but they give no proof at all. This is one reason that Shimura got involved. He and his student Jonathan Hanke both published on proofs of the mass formula. Shimura wrote a book, I think that must be item 22 at http://www.ams.org/journals/bull/2006-43-03/S0273-0979-06-01107-4/ Let's see. All the difficulty lies in the 2-adic contribution. There have been attempts to make a canonical 2-adic representative for quadratic forms. See J. W. S. Cassels, Rational Quadratic Forms.
On page 120 we read: "We do not attempt to specify a unique canonical form [see Jones (1944), Pall (1945), or Watson (1976a)]: that is more a job for a parliamentary draftsman than for a mathematician." Note that C+S use Watson's version. There is a fair amount of stuff at http://zakuski.math.utsa.edu/~kap/forms.html and http://zakuski.math.utsa.edu/~kap/more_than_this.html which tends to the modern. Probably enough for now. -
https://www.earthdoc.org/content/papers/10.3997/2214-4609.201700844
### Abstract Summary In this paper we discuss the behaviour of air gun source arrays in marine seismic acquisition. We comment on the fact that the source configuration and depth change continually with the combined actions of surface waves, sea currents and general towing conditions. This has a direct effect on the emitted source pressure wave field and thus on the signature in the seismic data. We describe the method of backpropagation with relative motion that allows an efficient and robust estimation of notional and far-field signatures from near-field measurements at every shot point. The derived shot-by-shot signatures show very good correlation with sea state and sea currents, as we would expect. We show that the variation of the signature can affect the quality of seismic data. We demonstrate that the estimated far-field signatures describe the real variation of the signature in the data, and we show how the estimated shot-by-shot signatures can be used to mitigate the effect of signature variations and thereby improve the quality of the seismic data. References 1. Ziolkowski, A., Parkes, G. E., Hatton, L., and Haugland, T. [1982] The signature of an airgun array: Computation from near-field measurements including interactions - part 1. Geophysics, 47, 1413–1421. 2. Landrø, M., Strandenes, S. and Vaage, S. [1991] Use of near-field measurements to compute far-field marine source signatures - evaluation of the method, First Break, 9 (8), 375–385.
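The superposition underlying such far-field estimates, in the spirit of the notional-source method of Ziolkowski et al. [1982] cited above, can be sketched as follows. This is a minimal fixed-geometry version only: it ignores the relative motion, 1/r spreading, and calibration details the paper deals with, and the function name and the perfect sea-surface reflection coefficient of -1 are simplifying assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 1500.0  # m/s, nominal value for sea water

def far_field(notionals, positions, direction, fs):
    """Superpose notional gun signatures into a far-field signature.

    notionals : (n_guns, n_samples) notional source signatures
    positions : (n_guns, 3) gun positions in metres, z positive downward
    direction : unit vector of the far-field take-off direction
    fs        : sampling rate in Hz

    The sea surface is modelled as a perfect reflector, i.e. each gun
    contributes a direct arrival plus a sign-flipped ghost from its
    mirror image above the surface; amplitude spreading is omitted.
    """
    n_guns, n_samp = np.asarray(notionals).shape
    t = np.arange(n_samp) / fs
    d = np.asarray(direction, dtype=float)
    out = np.zeros(n_samp)
    for sig, pos in zip(np.asarray(notionals), np.asarray(positions, dtype=float)):
        mirror = pos * np.array([1.0, 1.0, -1.0])  # ghost source above the surface
        for refl, p in ((1.0, pos), (-1.0, mirror)):
            delay = -np.dot(p, d) / SPEED_OF_SOUND  # relative travel time
            out += refl * np.interp(t - delay, t, sig, left=0.0, right=0.0)
    return out

# Demo: impulsive notional signature, single gun at 6 m depth,
# vertically downward take-off direction.
fs_demo = 500.0
impulse = np.zeros(100)
impulse[20] = 1.0
ff = far_field(impulse[None, :], [[0.0, 0.0, 6.0]], [0.0, 0.0, 1.0], fs_demo)
print("direct peak index:", int(np.argmax(ff)))
```

With an impulsive signature for a single gun the output shows the direct spike followed, 2·depth/c later, by the inverted surface ghost, the qualitative far-field shape discussed in the references.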
https://www.arxiv-vanity.com/papers/0708.1807/
# Supersolidity from defect-condensation in the extended boson Hubbard model

Yu-Chun Chen, Department of Physics, National Taiwan University, Taipei, Taiwan 106; Roger G. Melko, Department of Physics and Astronomy, University of Waterloo, Ontario, N2L 3G1, Canada, and Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge TN, 37831; Stefan Wessel, Institut für Theoretische Physik III, Universität Stuttgart, 70550 Stuttgart, Germany; Ying-Jer Kao, Department of Physics, National Taiwan University, Taipei, Taiwan 106, and Center for Theoretical Sciences, National Taiwan University, Taipei, Taiwan 106

###### Abstract

We study the ground state phase diagram of the hard-core extended boson Hubbard model on the square lattice with both nearest- (nn) and next-nearest-neighbor (nnn) hopping and repulsion, using Gutzwiller mean field theory and quantum Monte Carlo simulations. We observe the formation of supersolid states with checkerboard, striped, and quarter-filled crystal structures when the system is doped away from commensurate fillings. In the striped supersolid phase, a strong anisotropy in the superfluid density is obtained from the simulations; however, the transverse component remains finite, indicating a true two-dimensional superflow. We find that upon doping, the striped supersolid transitions directly into the supersolid with quarter-filled crystal structure, via a first-order stripe melting transition.

###### pacs: 75.10.Jm, 05.30.Jp, 74.20.-z, 67.40.-w, 67.80.-s

## I Introduction

In 1956, Penrose and Onsager Penrose and Onsager (1956) first posed the question of whether one could expect superfluidity in a solid – a supersolid state – with coexisting diagonal and off-diagonal long-range order.
They showed that for a perfect crystal, where the wave function of the particles is localized near each lattice site, superfluidity does not occur at low temperature. Later it was proposed Andreev and Lifshits (1969); Chester (1970); Leggett (1970) that fluctuating defects in imperfect crystals can condense to form a superfluid, and a supersolid state (with both superflow and periodic modulation in the density) emerges. In 2004, Kim and Chan reported signatures of superfluidity in solid He in torsional oscillator experiments, Kim and M.H.W.Chan (2004) where a drop in the resonant period, observed at around K, suggested the existence of a non-classical rotational inertia in the crystal.Leggett (1970) Following the discovery by Kim and Chan, many experiments and theories have attempted to explain this fascinating observation; the situation remains, however, controversial. Ceperley and Bernu (2004); Prokof’ev and Svistunov (2005); Burovski et al. (2005); Kim and Chan (2006); Rittner and Reppy (2006); Day and Beamish (2006) On the other hand, with improvements of quantum Monte Carlo (QMC) methods, the origin of supersolid phases can be studied exactly in both continuum and lattice models. Exotic quantum phases, including supersolids, are highly sought-after in lattice models, particularly those that may be realized by loading ultra-cold bosonic atoms onto optical lattices.Anderson et al. (1995); Greiner et al. (2002) The generation of a Bose-Einstein condensate (BEC) in a gas of dipolar atoms Griesmaier et al. (2005) with longer-range interactions provides one promising route to search for the supersolid state.dip The extended boson Hubbard model is the obvious microscopic Hamiltonian to study these systems, and supersolids have been found in this model on various lattices. Batrouni et al. (1995); Hébert et al. (2001); Sengupta et al. 
(2005) A simplified phenomenological picture for understanding the supersolid phase in these models has been the aforementioned “defect-condensation” scenario: starting from a perfect lattice crystal at commensurate filling, supersolidity arises when dopants (particles or holes) condense and contribute a superflow. In the simplest scenario – hard-core bosons doped above commensurate filling – this phenomenological picture suggests “microscopic phase separation” between the crystal and superfluid sublattices. Recent work on triangular lattice supersolids Melko et al. (2006); Hassan et al. (2007) has called this simple interpretation into question, since there one apparently finds examples where particles on the crystal lattice also take part in the superflow. However, frustration complicates the interpretation of these results, since the underlying order-by-disorder mechanism facilitates supersolid formation at half-filling. To more closely study the degree to which defect-condensation plays a role in the mechanism behind lattice supersolids, we study the extended hard-core boson Hubbard model with both nearest-neighbor (nn) and next-nearest-neighbor (nnn) hopping and repulsive interactions on the square lattice. The Hamiltonian is

$$H = -t \sum_{\langle i,j \rangle} \left(a_i^\dagger a_j + a_i a_j^\dagger\right) + V_1 \sum_{\langle i,j \rangle} n_i n_j - \mu \sum_i n_i - t' \sum_{\langle\langle i,j \rangle\rangle} \left(a_i^\dagger a_j + a_i a_j^\dagger\right) + V_2 \sum_{\langle\langle i,j \rangle\rangle} n_i n_j, \qquad (1)$$

where $a_i^\dagger$ and $a_i$ are the boson creation and annihilation operators, $n_i = a_i^\dagger a_i$ is the number operator, and $\langle i,j \rangle$ ($\langle\langle i,j \rangle\rangle$) denotes nearest (next-nearest) neighbors. The limit has been studied previously, and is known to harbor several crystal solids, including a checkerboard structure that upon doping is unstable towards phase separation.Batrouni and Scalettar (2000); Hébert et al. (2001) A striped crystal and a stable striped supersolid were also found in the limit.Hébert et al. (2001) Here, we study the full Hamiltonian of Eq.
(1) using both Gutzwiller mean field theory (Section II) and stochastic series expansion (SSE) quantum Monte Carlo (QMC) simulations (Section III) based on the directed loop algorithm.Sandvik and Kurkijärvi (1991); Syljuasen and Sandvik (2002) We confirm that the model contains a variety of lattice crystals, including a checkerboard and striped phase at half-filling, plus a quarter-filled solid.Schmid (2004) The nnn-hopping is found to stabilize a checkerboard supersolid away from half-filling; this and other supersolid phases with differently broken symmetry are studied in detail. In general, we find that although supersolid phases are not stabilized at commensurate fillings, they are readily formed upon doping. However, we demonstrate that, contrary to the simple phenomenological picture of defect-condensation, where the crystal and superfluid sublattices are clearly distinct, in at least one of our supersolid phases particles from the crystal sublattice also participate to a large degree in the superflow.

## II Gutzwiller mean field approximation

We begin by surveying the ground state phase diagram of the model in several limits using Gutzwiller mean-field theory. The Gutzwiller variational methodGutzwiller (1963) is a powerful technique for studying strongly correlated systems. The ground state of an interacting system is constructed from the corresponding noninteracting ground state,

$$|\phi_g\rangle = \prod_i \Big( \sum_{n_i} f_{n_i} |n_i\rangle \Big). \qquad (2)$$

Here, the $f_{n_i}$ are site-dependent variational parameters, which can be optimized by minimizing the energy $E_0 = \langle\phi_g|H|\phi_g\rangle / \langle\phi_g|\phi_g\rangle$ of the variational state. The $|n_i\rangle$ form the local Fock basis at site $i$ with $n_i$ particles. In the hardcore limit, we only need to keep the local states $|0\rangle$ and $|1\rangle$. Physical quantities are calculated within the variational ground states. In particular, we measure the local density $\langle n_i \rangle$, the density structure factor at wave vector $\mathbf{q}$,

$$S(\mathbf{q}) = \frac{1}{N} \sum_{i,j} e^{i \mathbf{q} \cdot (\mathbf{r}_i - \mathbf{r}_j)} \langle n_i n_j \rangle, \qquad (3)$$

and superfluidity, signified by a finite value of the condensate order parameter $\langle a_i \rangle$.
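As a concrete illustration (my own sketch, not code from the paper), the structure factor of Eq. (3) can be evaluated directly for a single classical density configuration in place of the thermal average; a half-filled checkerboard crystal then produces a Bragg peak at $\mathbf{q} = (\pi, \pi)$ and no peak at $(\pi, 0)$. The lattice size is an arbitrary choice here.

```python
import numpy as np

def structure_factor(n, q):
    """S(q) = (1/N) * sum_{i,j} exp(i q.(r_i - r_j)) n_i n_j
    for a single classical density configuration n on an L x L lattice;
    this equals |n(q)|^2 / N, the squared Fourier amplitude of the density."""
    L = n.shape[0]
    N = L * L
    x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    phase = np.exp(1j * (q[0] * x + q[1] * y))
    nq = np.sum(n * phase)
    return (np.abs(nq) ** 2) / N

L = 8
x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
checkerboard = ((x + y) % 2).astype(float)  # half-filled checkerboard crystal

S_pipi = structure_factor(checkerboard, (np.pi, np.pi))
S_pi0 = structure_factor(checkerboard, (np.pi, 0.0))
print(S_pipi, S_pi0)  # Bragg peak at (pi, pi); nothing at (pi, 0)
```

In a QMC simulation the average $\langle n_i n_j \rangle$ would be accumulated over configurations, but the Bragg-peak logic is the same.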
The coexistence of superfluid order and Bragg peaks in the structure factor signifies supersolidity. We begin by studying the phase diagram within the Gutzwiller approximation near half-filling in the absence of the nnn repulsion (i.e. for $V_2 = 0$). From Fig. 1, we find that at the mean-field level, various phases are stabilized already by this restricted set of parameters. These include a uniform superfluid (SF), a checkerboard solid (cS) with ordering wave vector $(\pi, \pi)$ (see Fig. 1d for an illustration), and in particular a checkerboard supersolid (cSS), with coexisting diagonal order and superfluidity, away from half-filling. As expected, increasing the nn hopping destroys the solidity of the system. In particular, the supersolid region, found when the system is doped away from half-filling, becomes a uniform superfluid at large $t$. This clearly indicates that a large nn hopping destabilizes the supersolid state. To study the effects of a finite nnn repulsion $V_2$, we first identify two limiting cases. For the model, a stable supersolid state is the striped supersolid (sSS) Batrouni et al. (1995); Batrouni and Scalettar (2000); Sengupta et al. (2005); Schmid and Troyer (2004) with ordering wavevector $(\pi, 0)$ or $(0, \pi)$. For the model, a stable supersolid state is the checkerboard supersolid. To capture the behavior of the system between these limiting regimes, we introduce a parameter , which interpolates between the two regions, by setting and (see Ref. [Schmid, 2004]). In the following, we thus work in units of . Figure 2 shows Gutzwiller mean field phase diagrams for different values of , and 1. While for finite , half-filling is obtained for , we still take as the abscissa, in order to ease a direct comparison to the previous case of . The coexistence of both nn and nnn repulsions is expected to stabilize various solid states Schmid (2004): at half-filling, with both and large, a checkerboard (striped) solid is formed for Batrouni et al.
(1995); we find that for , a quarter-filled solid (qS) (shown in Fig. 1d) emerges. In order to distinguish the different solids, we measure the structure factors at the reciprocal lattice vectors $(\pi,\pi)$, $(\pi,0)$, and $(0,\pi)$. In addition, for the striped structure, where a strong anisotropy due to the broken rotational symmetry occurs, the magnitude of the difference between $S(\pi,0)$ and $S(0,\pi)$ is almost equal to their sum. Keeping track of this quantity allows us to easily distinguish supersolids with an underlying striped crystal from supersolids with an underlying quarter-filled crystal, in which case all three structure factors become finite, but the difference is zero (see Table 1 for a summary). The case , shown in Fig. 2a, corresponds to the model, which we already discussed (compare to Fig. 1a). At (Fig. 2b), the superfluid phase expands, as the introduction of and destabilizes the checkerboard solid. For (Fig. 2c), with the model parameters approaching the limit, the checkerboard structure disappears. Instead, striped and quarter-filled structures emerge, including a striped supersolid (sSS) and a quarter-filled supersolid (qSS).qSS In the limiting case (Fig. 2d), we find two different transition paths from the sSS to the superfluid upon doping. When , the striped supersolid enters the superfluid directly. When , the qS solid and qSS supersolid regions are passed as an intermediate regime, separating the striped supersolid sSS and the superfluid. Clearly, these mean-field phase diagrams provide evidence for not only the existence of various different supersolid phases, but also for the possibility of direct quantum phase transitions between them (in particular between the qSS and sSS phases). In the next section, using these mean-field results as guidance, we turn to quantum Monte Carlo simulations in order to study in detail the various supersolid phases, as well as the transitions between them.
## III Quantum Monte Carlo results

We performed extensive quantum Monte Carlo (QMC) simulations of the Hamiltonian Eq. (1) using a variation of the stochastic series expansion framework with directed loops.Sandvik and Kurkijärvi (1991); Syljuasen and Sandvik (2002) Correlation functions of density operators are easily measured within the QMC, and crystal order is signified by peaks in the $\mathbf{q}$-dependent structure factor of Eq. (3). The superfluid density is measured in the standard way in terms of winding number fluctuations,

$$\rho_s^a = \frac{\langle W_a^2 \rangle}{\beta}, \qquad (4)$$

where $a$ labels the $x$ or $y$ direction and $\beta$ is the inverse temperature. Typically the stiffness is averaged over both directions, unless measured in a striped phase which breaks rotational symmetry (as discussed below). In the following, we choose $\beta$ large enough to ensure simulation of ground-state properties, and the system size is . We begin by examining the phase transition into the cSS state, identified in Fig. 1. In the limit where vanishes, Fig. 3 shows the behavior of the QMC observables at . For (open symbols), there is a discontinuity near , where a checkerboard solid with finite melts into a superfluid with finite via a first-order transition. This discontinuous jump in the particle density near is a clear indication that phase separation would occur in a canonical system.Batrouni and Scalettar (2000) In contrast, in the limit (solid symbols), the discontinuity disappears, and a smooth decrease in the structure factor as holes are doped into the system is accompanied by an increasing superfluidity. The coexistence of both finite and , in contrast to the case , indicates that a checkerboard supersolid state is stabilized by the nnn hopping. In order to confirm that this is indeed true, we perform simulations with finite nn hopping $t$. In Fig. 4, we show QMC results as a function of nn hopping for various and the chemical potential fixed at .
A supersolid phase emerges for , and a checkerboard supersolid to superfluid transition occurs as increases. The smooth nature of the data across the transition region suggests that the destabilization of the cSS state upon increased occurs via a continuous phase transition. Next, we consider the effect of the nnn repulsion , as alluded to in Fig. 2. We focus on the results from simulations performed at , corresponding to and . Three different values of the nn repulsion are chosen: and . For , the dominant and render the model close to a model, and a striped structure is expected (Fig. 2). In Fig. 5, the equivalence of and indicates the absence of the quarter-filled structure, and indeed, at half-filling, a stable striped solid (sS) is formed. Furthermore, upon hole-doping away from , a striped supersolid emerges. To assess the behavior of the superflow in the sSS, we measured the superfluid densities perpendicular and parallel to the actual stripe direction. For this purpose, and are defined by comparing the magnitude of and calculated after each Monte Carlo step: when , the -direction winding number (see Eq. (4)) is counted as and is counted as , and vice versa Melko et al. (2006). Fig. 5 clearly exhibits a pronounced anisotropy of in the sSS phase. Upon further hole doping, we observe a melting of the crystal structure to a uniform superfluid (SF). This completes the quantum melting of the sS crystal upon doping holes – proceeding to a uniform superfluid state via an intermediate sSS state with coexisting superflow and crystal order. To stabilize the quarter-filled solid, we study the system with a strong nn repulsion. Fig. 6 shows the results for . A quarter-filled solid (qS) is stabilized at , whereas at half-filling, a sS is formed. Doping away from quarter-filling with holes, a “quarter-filled” supersolid (qSS) state is formed,qSS as signified by the coexistence of the quarter-filled crystal structure and superfluidity. 
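The directional bookkeeping for $\rho_\parallel$ and $\rho_\perp$ described above can be sketched as follows. This is my own toy illustration with synthetic winding numbers, not QMC output, and it assumes the larger-winding direction is the one parallel to the stripes: per measurement, the direction with the larger $|W|$ is accumulated into the parallel stiffness and the other into the perpendicular one.

```python
import random

def stiffness_components(samples, beta):
    """Accumulate rho_parallel and rho_perp from (Wx, Wy) winding-number
    samples: in each configuration, the direction with the larger |W| is
    counted as parallel to the stripes, the other as perpendicular."""
    par = perp = 0.0
    for wx, wy in samples:
        big, small = (wx, wy) if abs(wx) >= abs(wy) else (wy, wx)
        par += big ** 2
        perp += small ** 2
    n = len(samples)
    return par / (n * beta), perp / (n * beta)

# Synthetic winding numbers mimicking a striped phase: frequent windings
# along the stripes, rare windings across them.
random.seed(0)
samples = [(random.choice([-2, -1, 0, 1, 2]), random.choice([-1, 0, 0, 0, 1]))
           for _ in range(1000)]
rho_par, rho_perp = stiffness_components(samples, beta=10.0)
print(rho_par > rho_perp, rho_perp > 0)  # pronounced anisotropy, rho_perp finite
```

The anisotropy is strong but the perpendicular component stays finite, mirroring the two-dimensional superflow reported in the sSS phase.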
Upon further hole-doping, the qSS eventually melts into a SF. Doping slightly away from quarter-filling with additional bosons, we observe a similar qSS state. With further doping, however, and , as well as , begin to exhibit significant anisotropies. Near , the anisotropy is most pronounced, and vanishes, signifying a sSS state. We thus observe two seemingly unique supersolid states with different underlying crystal structures. A detailed study of the transition region in Fig. 7 indicates the presence of discontinuities developing in the structure factors and superfluid density at the transition. This indicates a first-order phase transition between the two supersolid phases, as traversed by varying the chemical potential. In a simple phenomenological defect-condensation picture, this transition may be interpreted as occurring via the first-order melting of one of the crystal sublattices that differentiate the qSS from the sSS. This interpretation is discussed more in the next section. In Fig. 8, with slightly smaller , the qSS state is still observed, yet with a reduced extent; no obvious qS crystal is observed at quarter-filling on this lattice size. However, the superfluid density, although finite, shows a large dip near where the average particle density nears . In order to examine this more precisely, we performed simulations at a fixed particle density , by carefully adjusting the chemical potential, and restricting measurements to those Monte Carlo configurations with a particle number that precisely matches . The data in Fig. 9 strongly suggests that the superfluid density indeed scales to zero in the thermodynamic limit, revealing the absence of supersolid behavior at . This observation is consistent with the picture of supersolidity in this model occurring only away from commensurate crystal fillings, and arising due to the superflow of doped defects placed interstitial to the ordered solid structures. 
## IV Discussion

Using mean-field theory and quantum Monte Carlo simulations, we studied in detail the formation of three supersolid phases, which arise in the hard-core extended boson Hubbard model of Eq. (1). For large nnn repulsion, a stable checkerboard supersolid phase can be observed, provided a sufficiently strong nnn hopping is present and the system is doped away from commensurate (1/2) filling. As observed in previous studies, the nn hopping itself is not sufficient to promote superflow within the doped checkerboard crystal. The other 1/2-filled crystal observed in this model is the striped solid that breaks rotational symmetry. Again, upon doping away from half-filling, a supersolid state emerges from the striped solid. Furthermore, at lower density and large repulsive interactions ( and ), the underlying density order changes to a quarter-filled crystal structure in order to avoid the large repulsions. At a particle density of exactly 1/4, traces of the superflow vanish. The above observations lend strong support to the idea of a mechanism for supersolidity involving the condensation of dopants (defects) outside of the lattice crystal. Indeed, in no instance can we successfully stabilize both a finite crystal order parameter and a superfluid density at any commensurate filling. However, the simple phenomenological picture of doped-defect condensation clearly breaks down at least for the striped supersolid phase, where, although the two directional components of the superfluid density show a strong anisotropy, the transverse component remains finite even close to the half-filled striped crystal. This demonstrates that the superfluidity in the striped supersolid is not merely a one-dimensional superflow through one-dimensional channels. This finding is similar to observations in other models on the square lattice,Hébert et al. (2001); Schmid and Troyer (2004); Sengupta et al. (2005) and contrasts with the very weak anisotropies observed on a triangular lattice striped supersolid at half-filling.Melko et al.
(2006) The presence of different supersolid phases in this model also raises the interesting possibility of observing direct supersolid-supersolid phase transitions. In particular, upon tuning , we studied the intermediate region between the and the model. We find that there is no direct transition between the checkerboard and the striped supersolid orders as is tuned – there is always a superfluid phase present when the repulsions become comparable.Che This is similar to the case at half-filling, where the superfluid emerges along the line without a direct transition between the checkerboard and striped solid, even when both and are large.Batrouni et al. (1995) In contrast, we find a direct transition between the qSS and sSS states in this model upon tuning . A detailed finite-size study reveals that this supersolid-supersolid phase transition is a first-order stripe melting transition. Tuning from the sSS towards the qSS by decreasing the chemical potential, an abrupt increase in the superfluid density component perpendicular to the stripe direction takes place, corresponding to a jump into the qSS crystal structure. This observation lends itself to the interpretation that, upon traversing this phase boundary, one of the two occupied sublattices that contribute to the striped crystal abruptly melts into a superfluid component, while the other retains its rigidity and provides the underlying qSS crystal structure. It would be interesting to compare this mechanism to that observed in a supersolid-supersolid phase transition on the triangular lattice,Hassan et al. (2007) where significantly stronger first-order behavior is observed.
More exotic mechanisms have also been proposed, in which superfluids transition into non-uniform solid phases at commensurate filling; these may be compared to the current work.DVT In conclusion, we have found several ground state phases of the hard-core extended boson Hubbard model with nn and nnn hopping and repulsion on the square lattice. Most notably, we find that supersolid states readily emerge when doped away from commensurability “near” their associated crystal phases, given sufficient kinetic (hopping) freedom. The model thus proves an ideal playground for future study of concepts related to doping and the formation of supersolidity through the mechanism of condensed defects. Further studies are necessary to understand the detailed nature of the transitions between these different solid, superfluid and supersolid phases, as well as their finite temperature properties.

###### Acknowledgements.
This work was supported by NSC and NCTS of Taiwan (YCC, YJK), the U.S. Department of Energy, contract DE-AC05-00OR22725 with Oak Ridge National Laboratory, managed by UT-Battelle, LLC (RGM), the German Research Foundation, NIC Jülich and HLRS Stuttgart (SW) and the National Science Foundation under Grant No. NSF PHYS05-51164 (SW, YJK). RGM would like to thank the Center of Theoretical Sciences and Department of Physics, National Taiwan University, for the hospitality extended during a visit, and SW and YJK acknowledge hospitality of the Kavli Institute for Theoretical Physics at Santa Barbara.
http://techie-buzz.com/science/laser-cooling.html
Laser Cooling

One of the coolest things in physics is used as a common tool to make things really cold. Lasers are used to cool a bunch of atoms to extremely low temperatures, temperatures in the micro-Kelvin range. Extremely successful, laser cooling is surely one of the hottest topics in physics. Firstly, let's note a few important points:

1. The 'temperature' of a substance is a measure of the average kinetic energy of the constituent atoms/molecules. This is the definition. And, yes, this is exactly what we measure with a thermometer. Thus, in short, the faster the atoms in a substance move, the higher its temperature.
2. There is a well known effect called the Doppler Effect. If you move towards a light source (or equivalently, if the source moves closer to you) the frequency of light you see goes up (“blueshift”). This is, obviously, a velocity dependent phenomenon. The faster you move, the more the shift in observed frequency. Similarly, if you move away from the source, the frequency goes down (“redshift”). So if a stationary observer observes green light, an observer moving towards the source might see blue light, while an observer moving away may see red light.
3. A substance absorbs energy only at discrete and specific frequencies of radiation. For example, if a substance absorbs at the frequency of red light, radiating it with light of a slightly lower frequency will not cause it to undergo any transition. This light will not be absorbed. This is due to the quantum nature of the energy levels of the atoms.
4. Light is made up of tiny packets of energy called photons. Each photon carries a single value of energy, determined by the frequency of the radiation: the higher the frequency, the higher the energy. Photons also carry momentum; a photon's momentum equals its energy divided by the speed of light.

Armed with these points, we can finish off the explanation for laser cooling in a few lines.
Take an atom and irradiate it with light of a frequency slightly lower than the one it would absorb. The light just passes by without any absorption (point 3). Now say that the atom is moving towards the laser. The frequency it sees is greater than the static laser frequency (point 2). This higher frequency is now good enough for absorption. Imagine two lasers irradiating the atom from opposite directions. Whichever way the atom moves, it moves towards one of the lasers, and only that beam is Doppler-shifted into resonance and absorbed. The absorbed photon's momentum opposes the atom's motion, slowing it down (point 4). The excited atom then releases the energy as a photon, but in a random direction, so over many absorption-emission cycles the emission kicks average out and the atom's kinetic energy steadily decreases. A lower average kinetic energy is just a lower temperature (point 1). Lo and behold, we have cooling.
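The argument above can be put into numbers. The sketch below uses the non-relativistic Doppler formula to show that a laser tuned slightly below resonance is only shifted into resonance for an atom moving toward it; the resonance frequency, detuning, and atom speed are illustrative values chosen by me, not figures from the article.

```python
c = 299_792_458.0                      # speed of light, m/s

f_resonance = 3.84e14                  # assumed absorption frequency of the atom (Hz)
f_laser = f_resonance * (1 - 1e-8)     # laser tuned slightly BELOW resonance

def observed_frequency(f_source, v_toward):
    """Non-relativistic Doppler shift: frequency seen by an atom
    moving with speed v_toward (m/s) toward the source."""
    return f_source * (1 + v_toward / c)

# A stationary atom sees the bare laser frequency: below resonance, no absorption.
f_still = observed_frequency(f_laser, 0.0)

# An atom moving toward the laser sees a blueshifted frequency that reaches
# resonance, so only the counter-propagating beam gets absorbed.
v = 5.0  # m/s, toward the laser
f_moving = observed_frequency(f_laser, v)

print(f_still < f_resonance)    # stationary atom: laser still below resonance
print(f_moving >= f_resonance)  # moving atom: Doppler-shifted into resonance
```

The asymmetry is the whole trick: the beam opposing the motion is preferentially absorbed, so every absorption kick slows the atom.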
http://www.nag.com/numeric/CL/nagdoc_cl23/html/G01/g01efc.html
g01 Chapter Contents g01 Chapter Introduction NAG C Library Manual

# NAG Library Function Document: nag_gamma_dist (g01efc)

## 1  Purpose

nag_gamma_dist (g01efc) returns the lower or upper tail probability of the gamma distribution, with parameters $\alpha$ and $\beta$.

## 2  Specification

#include <nag.h>
#include <nagg01.h>

double nag_gamma_dist (Nag_TailProbability tail, double g, double a, double b, NagError *fail)

## 3  Description

The lower tail probability for the gamma distribution with parameters $\alpha$ and $\beta$, $P(G \le g)$, is defined by:

$$P(G \le g; \alpha, \beta) = \frac{1}{\beta^\alpha \Gamma(\alpha)} \int_0^g G^{\alpha-1} e^{-G/\beta} \, dG, \qquad \alpha > 0.0, \ \beta > 0.0.$$

The mean of the distribution is $\alpha\beta$ and its variance is $\alpha\beta^2$. The transformation $Z = G/\beta$ is applied to yield the following incomplete gamma function in normalized form,

$$P(G \le g; \alpha, \beta) = P(Z \le g/\beta; \alpha, 1.0) = \frac{1}{\Gamma(\alpha)} \int_0^{g/\beta} Z^{\alpha-1} e^{-Z} \, dZ.$$

This is then evaluated using nag_incomplete_gamma (s14bac).

## 4  References

Hastings N A J and Peacock J B (1975) Statistical Distributions Butterworth

## 5  Arguments

1: tail – Nag_TailProbability – Input
On entry: indicates whether an upper or lower tail probability is required.
tail = Nag_LowerTail: the lower tail probability is returned, that is $P(G \le g : \alpha, \beta)$.
tail = Nag_UpperTail: the upper tail probability is returned, that is $P(G \ge g : \alpha, \beta)$.
Constraint: tail = Nag_LowerTail or Nag_UpperTail.

2: g – double – Input
On entry: $g$, the value of the gamma variate.
Constraint: g ≥ 0.0.

3: a – double – Input
On entry: the parameter $\alpha$ of the gamma distribution.
Constraint: a > 0.0.

4: b – double – Input
On entry: the parameter $\beta$ of the gamma distribution.
Constraint: b > 0.0.

5: fail – NagError * – Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6  Error Indicators and Warnings

On any of the error conditions listed below except NE_ALG_NOT_CONV, nag_gamma_dist (g01efc) returns 0.0.

NE_ALG_NOT_CONV
The algorithm has failed to converge in ⟨value⟩ iterations. The probability returned should be a reasonable approximation to the solution.

NE_BAD_PARAM
On entry, argument ⟨value⟩ had an illegal value.

NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.

NE_REAL_ARG_LE
On entry, a = ⟨value⟩ and b = ⟨value⟩. Constraint: a > 0.0 and b > 0.0.

NE_REAL_ARG_LT
On entry, g = ⟨value⟩. Constraint: g ≥ 0.0.

## 7  Accuracy

The result should have a relative accuracy of machine precision. There are rare occasions when the relative accuracy attained is somewhat less than machine precision, but the error should not exceed more than 1 or 2 decimal places. Note also that there is a limit of 18 decimal places on the achievable accuracy, because constants in nag_incomplete_gamma (s14bac) are given to this precision.

## 8  Further Comments

The time taken by nag_gamma_dist (g01efc) varies slightly with the input arguments g, a and b.

## 9  Example

This example reads in values from a number of gamma distributions and computes the associated lower tail probabilities.

### 9.1  Program Text
Program Text (g01efce.c)

### 9.2  Program Data
Program Data (g01efce.d)

### 9.3  Program Results
Program Results (g01efce.r)
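For readers without access to the NAG Library, the computation can be sketched in plain Python: apply the transformation Z = G/β described in Section 3, then evaluate the normalized lower incomplete gamma function by its standard series expansion. This is an illustrative stand-in for the routine's behavior, not the NAG algorithm (s14bac uses more carefully tuned expansions).

```python
import math

def lower_incomplete_gamma_reg(a, x, tol=1e-15, max_iter=500):
    """Normalized lower incomplete gamma P(a, x), via the series
    P(a, x) = x^a e^-x / Gamma(a) * sum_{n>=0} x^n / (a (a+1) ... (a+n))."""
    if x == 0.0:
        return 0.0
    term = 1.0 / a            # n = 0 term of the sum
    total = term
    for n in range(1, max_iter):
        term *= x / (a + n)   # term_n = x^n / (a (a+1) ... (a+n))
        total += term
        if abs(term) < tol * abs(total):
            break
    # Prefactor x^a e^-x / Gamma(a), computed in log space for stability.
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))

def gamma_tail(tail, g, a, b):
    """Sketch of g01efc's interface: lower or upper tail probability of
    the gamma distribution with shape a (alpha) and scale b (beta)."""
    if g < 0 or a <= 0 or b <= 0:
        raise ValueError("require g >= 0, a > 0, b > 0")
    p_lower = lower_incomplete_gamma_reg(a, g / b)   # Z = G / beta
    return p_lower if tail == "lower" else 1.0 - p_lower

# For alpha = 1 the gamma distribution is exponential with mean beta,
# so P(G <= g) = 1 - exp(-g/beta); a convenient sanity check.
p = gamma_tail("lower", 2.0, 1.0, 2.0)
print(p)
```

The series converges quickly for moderate x; a production implementation would switch to a continued-fraction expansion for large x, as standard references recommend.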
https://www.physicsforums.com/threads/strength-of-materials-longitudal-stress-in-thin-cylindrical-shells.462291/
# Strength of materials (longitudinal stress in thin cylindrical shells)

• #1
In R. C. Stephens' book *Strength of Materials*, the longitudinal stress in a cylinder is derived (see attachment capture.jpeg). My question is: how is the area (pi x d x t) derived? My calculations show that this area should equate to (pi x d x t + t x t).

#### Attachments
• capture.jpeg (5.1 KB)

• #2
The method used is the force projected onto the circumference instead of the cross-sectional area. From what I can gather this is wrong, as the stress involved is force divided by the cross-sectional area.

• #3
I assume that d is the average diameter of the shell and t is its thickness. If the shell is thin (as in the title) then the area is approximately equal to pi x d x t, and I'm assuming that an approximate answer is good enough for this particular problem.

• #4
Thanks for your timely response, and sorry for not specifying: p is the atmospheric pressure, d is the inside diameter (thus excluding the thickness of the cylinder wall) and t is the thickness of the aforementioned wall. It is a valid point that, as this is a thin cylinder, the circumference rather than the wall area may be used. As such, would using the wall area not be more accurate? If so, what would such an equation be? My question is in regard to the stress created by a force tending to separate the left and right halves of the cylinder; it is called longitudinal stress because the direction of the stress is in the direction of the force. My second question is how the formula pi x d x t was derived.

Last edited:

• #5
You stated the 'answer' to your own question in the thread title. Engineering is about making good assumptions. It's assumed that the cylinder is 'thin'. In this case it's assumed that all the load is taken on the circumference; as you say, you can use a thick-walled solution (which models the problem more accurately), but it's more difficult to solve and the answers are likely to be similar.
You will probably find a caveat in the book saying this is valid only when D > 10T or 20T.

Say, for example, that solving a full stress equation by hand takes you 10 minutes and you get an answer of 100. You could have applied an assumption and solved the problem in 5 minutes and got an answer of 95, meaning the assumption loses you about 5% accuracy. Before doing this you need to ask yourself: is the extra 5% accuracy really worth 100% extra calculation time?

• #6
The real area is equal to the area of the circle of diameter d+2t minus the area of the circle of diameter d, and it would be more accurate to use the real area. If, however, t is small compared to d, then the area is approximately equal to pi x d x t, and I assume from the title of the thread that it is acceptable to use this approximate value.

• #7
Thanks, it makes sense. I still have two more questions on the subject, namely: how was pi x D x T derived, and can I use the same equation if there were a force tending to crush the cylinder, assuming t is reasonably smaller than d?

Last edited:

• #8
I found the answer to my own question about how pi x D x T was derived. The full equation is pi x D x T + pi x T x T, and as T is assumed to be small, T x T would be minute. Now only the question whether this equation holds for crushing stresses remains, and I assume that it will.
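The conclusion in post #8 is easy to check numerically. The sketch below is my own, not from the thread: it compares the exact annular wall area pi/4 x ((d+2t)^2 - d^2) = pi*d*t + pi*t^2 against the thin-wall approximation pi*d*t; the relative error works out to t/(d+t), which is small whenever d >> t.

```python
import math

def exact_wall_area(d, t):
    """Exact cross-sectional area of the cylinder wall:
    pi/4 * ((d + 2t)^2 - d^2) = pi*d*t + pi*t^2."""
    return math.pi / 4.0 * ((d + 2.0 * t) ** 2 - d ** 2)

def thin_wall_area(d, t):
    """Thin-wall approximation: circumference times thickness."""
    return math.pi * d * t

d, t = 1.0, 0.02                        # d/t = 50, comfortably "thin"
exact = exact_wall_area(d, t)
approx = thin_wall_area(d, t)
rel_err = (exact - approx) / exact      # algebraically equal to t / (d + t)
print(f"relative error = {rel_err:.4f}")   # ~0.0196, i.e. about 2%
```

This is exactly the trade-off posts #5 and #6 describe: for D/t above 10 or 20 the approximation costs only a few percent of accuracy.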
http://math.stackexchange.com/questions/219012/functional-analysis-complementary-subspace
# functional analysis complementary subspace

Let $Y$ and $Z$ be closed subspaces in a Banach space $X$. Show that each $x \in X$ has a unique decomposition $x = y + z$, $y\in Y$, $z\in Z$ iff $Y + Z = X$ and $Y\cap Z = \{0\}$. Show in this case that there is a constant $\alpha>0$ such that $\|y\| + \|z\| \leq \alpha\|x\|$ for every $x \in X$.

- help me to solve this problem i am really in trouble with this problem – math Oct 22 '12 at 22:23
- What part of it are you stuck on? – Robert Israel Oct 22 '12 at 22:24

I think that the constant part requires the Open Mapping Theorem. Consider the space $Y\oplus Z$ with the norm $\|y\oplus z\|_1=\|y\|+\|z\|$. It is easy to see that it is a Banach space. Then we define the map $T:Y\oplus Z\to X$ given by $T(y\oplus z)=y+z$. This map is clearly linear and bijective. Moreover, $$\|T(y\oplus z)\|=\|y+z\|\leq\|y\|+\|z\|=\|y\oplus z\|_1,$$ so $T$ is bounded. By the Open Mapping Theorem $T$ is open, which means that $T^{-1}$ is bounded. So, given $x\in X$ with $x=y+z$, $y\in Y$, $z\in Z$, there exists $\alpha=\|T^{-1}\|>0$ such that $$\|y\|+\|z\|=\|y\oplus z\|_1=\|T^{-1}(x)\|_1\leq\alpha\|x\|.$$

For the sake of completeness, here is a proof for the first part of the question. Note that both sides of the implication have the assertion $X=Y+Z$, so what we have to prove is $$\mbox{unique decomposition }\iff\ Y\cap Z=\{0\}.$$ So assume first that $X=Y+Z$ with unique decomposition, and let $w\in Y\cap Z$. By the decomposition, $w=y+z$, $y\in Y$, $z\in Z$. But as $w\in Y$, we get $z=w-y\in Y$. So $w=(y+z)+0$ is another decomposition of $w$; by the uniqueness, $z=0$. A similar argument shows that $y=0$, and thus $w=0+0=0$. This shows that $Y\cap Z=\{0\}$.

Conversely, suppose that $Y\cap Z=\{0\}$. If $x=y_1+z_1=y_2+z_2$ with $y_1,y_2\in Y$, $z_1,z_2\in Z$, then we have $y_1-y_2=z_2-z_1\in Z$, so $y_1-y_2\in Y\cap Z$ and $y_1=y_2$. A similar argument shows that $z_1=z_2$. So the decomposition is unique.

- thank you, I did not solve it yet.
do I have to combine this? or could you elaborate the first part too – math Oct 23 '12 at 2:40

Hint: Assume that $Y\cap Z=\{0\}$. If $z+y=x=z'+y'$ s.t. $z,z'\in Z$, $y,y'\in Y$ then $z-z'=y'-y$. For the other direction, if $y\in Y\cap Z$ then $0+0=y+(-y)$.

- can you elaborate it more? this is my homework question and tomorrow is my due date – math Oct 22 '12 at 22:40
- how to prove the norm part – math Oct 22 '12 at 22:45
- i appreciate your help but still I am confused with this problem – math Oct 22 '12 at 22:50
- hello, can someone elaborate this a bit so I can solve it – math Oct 22 '12 at 23:23
- Did you finish proving everything except the norm? – Dennis Gulko Oct 22 '12 at 23:39
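To see why the constant $\alpha=\|T^{-1}\|$ from the accepted answer matters, here is a small numerical illustration (my own sketch, not part of the proofs): in $\mathbb{R}^2$ with $Y=\operatorname{span}(u)$ and $Z=\operatorname{span}(v)$, the worst-case ratio $(\|y\|+\|z\|)/\|x\|$ stays at most $\sqrt{2}$ when the subspaces are orthogonal, but blows up as they become nearly parallel.

```python
import numpy as np

def worst_ratio(theta, trials=2000, seed=0):
    """Estimate sup over x of (||y|| + ||z||) / ||x||, where x = y + z is the
    unique decomposition along Y = span(u) and Z = span(v), angle theta apart."""
    rng = np.random.default_rng(seed)
    u = np.array([1.0, 0.0])
    v = np.array([np.cos(theta), np.sin(theta)])
    B = np.column_stack([u, v])
    best = 0.0
    for _ in range(trials):
        x = rng.standard_normal(2)
        c = np.linalg.solve(B, x)      # coefficients of the unique decomposition
        ratio = (abs(c[0]) + abs(c[1])) / np.linalg.norm(x)
        best = max(best, ratio)
    return best

print(worst_ratio(np.pi / 2))   # at most sqrt(2) for orthogonal subspaces
print(worst_ratio(0.01))        # much larger: alpha degrades as Y and Z nearly meet
```

This matches the theory: the bound always exists for closed complementary subspaces, but nothing keeps $\alpha$ small.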
http://tex.stackexchange.com/questions/59520/how-to-place-text-between-two-columns-and-which-breaks-the-rule-dividing-them-in
# How to place text between two columns, breaking the rule dividing them, in ConTeXt?

I have two paragraphs of instructions and I need to indicate to readers that they have a choice of one or another. I am using this code to put the data into different columns, with a line between:

    \startcolumns[n=2, rule=on]
    \startlines
    This is some instructions.
    \stoplines
    \column
    \startlines
    This is some other instructions.
    \stoplines
    \stopcolumns

This makes a document like this, with a nice divide line in the middle:

     _______________________________
    |                               |
    | This is some  :  This is some |
    | instructions. :  other instru-|
    |               :  ctions.      |
    |                               |
    |_______________________________|

I'd like to add the text "or" between the two columns, to make it clearer that readers have a choice. This breaks the divide line and appears exactly in the middle, centered vertically on the line, and horizontally centered as well, e.g.:

     _______________________________
    |                               |
    | This is some  :  This is some |
    | instructions. or other instr- |
    |               :  uctions.     |
    |                               |
    |_______________________________|

How can I add "or" text to the line dividing the two columns?

## 1 Answer

There isn't any inbuilt command to add text to the column rule. Assuming that you are only interested in two-column text that does not break across pages, you can fake two-column text using low-level TeX and add a frame using MetaPost.
    \defineframed
      [fakecolumn]
      [location=top,
       width=0.45\textwidth,
       align=normal,
       frame=off]

    \defineframed
      [ORcolumn]
      [location=top,
       height=\ORcolumnht,
       width=2.5em,
       frame=off,
       background=ORcolumn,
       top=\vss,
       bottom=\vss]

    \defineoverlay[ORcolumn][\useMPgraphic{ORcolumn}]

    \startuseMPgraphic{ORcolumn}
      ht := 2*StrutHeight;
      draw (OverlayWidth/2, OverlayHeight/2-ht/2) -- (OverlayWidth/2, 0);
      draw (OverlayWidth/2, OverlayHeight/2+ht/2) -- (OverlayWidth/2, OverlayHeight);
      setbounds currentpicture to boundingbox OverlayBox;
    \stopuseMPgraphic

    \newbox\leftcolumnbox
    \newbox\rightcolumnbox
    \newdimen\ORcolumnht

    \def\startORcolumns#1\column#2\stopORcolumn
      {\blank
       \setbox\leftcolumnbox \hbox{\fakecolumn{#1}}%
       \setbox\rightcolumnbox\hbox{\fakecolumn{#2}}%
       % location=top sets the ht of the box to strutheight
       % and depth to the remaining length
       \ORcolumnht=\dimexpr\strutheight+
         \dp\ifdim\dp\leftcolumnbox>\dp\rightcolumnbox\leftcolumnbox\else\rightcolumnbox\fi
       \hbox to \textwidth
         {\hss\copy\leftcolumnbox
          \hss\ORcolumn{OR}\hss
          \copy\rightcolumnbox\hss}%
       \blank}

    \starttext
    \input zapf
    \startORcolumns
      \input knuth
    \column
      \input ward
    \stopORcolumn
    \input zapf
    \stoptext
https://www.ias.ac.in/listing/bibliography/boms/E_S_R_Gopal
• E S R Gopal

Articles written in Bulletin of Materials Science

• Development of a roller quenching apparatus for the production of amorphous phases

The details of an apparatus designed to produce amorphous phases by rapid quenching from the melt are described. A drop of molten material is squeezed between two copper rollers rotating against each other at 5000 RPM and a thin foil of the material is produced. The system produces cooling rates of the order of $10^5$ K/sec. Details of the development and construction are mentioned.

• Critical point phenomena, heat capacities and the renormalization group theory of fluctuations

In the normal study of matter, the ordered state is considered first, followed by the addition of minor disorder or fluctuations, for instance, studying crystalline solids with some quasiparticle excitations like phonons and magnons. The discovery of the universality of critical point phenomena seems to provide a chance to study a regime dominated by the fluctuations. The Onsager solution of the two-dimensional Ising model, exhibiting a logarithmic singularity in heat capacity, and the Fairbank-Buckingham-Kellers experiments, showing such a singularity in the heat capacity near the superfluid transition of liquid $^4$He, are landmarks in this topic. The recent renormalization group theory shows a way of studying the patterns among the fluctuations. The dependence of the critical exponents upon spatial and spin dimensionalities, the existence of universal amplitude ratios and the other aspects of critical phenomena are briefly discussed.

• A note on the composition dependence of elastic properties of Se-P glasses

Longitudinal and shear wave ultrasonic velocities are reported in Se-P glasses over the composition range 0–50 at% P. The glass transition temperatures $T_g$ show maxima at 30 and 50 at% of P, in consonance with earlier data. The bulk modulus shows minima at these compositions, contrary to the expectation of maxima.
These are discussed in relation to the formation of compounds at specific compositions and the nature of the covalent bonding in the glasses.

• Electronic conduction in bulk Se$_{1-x}$Te$_x$ glasses at high pressures and at low temperatures

The electrical resistivity of bulk Se$_{1-x}$Te$_x$ glasses is reported as a function of pressure (up to 8 GPa) and temperature (down to 77 K). The activation energy for electronic conduction has been calculated at different pressures. The samples with $0 \le x \le 0.06$ show a single activation energy throughout the temperature range of investigation. On the other hand, samples with $0.08 \le x \le 0.3$ show two activation energies in different regions of temperature. The observed behaviour has been explained on the basis of the band picture of amorphous semiconductors.

• Effect of high pressure on chalcogenide glasses

The effect of high pressures on the various properties of the chalcogenide glasses is reviewed. The properties discussed include the mechanical, electrical, optical and magnetic properties. The phenomenon of the crystallization of the chalcogenide glasses under high pressure is also discussed.

• Average lattices and aperiodic structures

Statistically averaged lattices provide a common basis to understand the diffraction properties of structures displaying deviations from regular crystal structures. An average lattice is defined and examples are given in one and two dimensions along with their diffraction patterns. The absence of periodicity in reciprocal space corresponding to aperiodic structures is shown to arise out of different projected spacings that are irrationally related, when the grid points are projected along the chosen coordinate axes. It is shown that the projected length scales are important factors which determine the existence or absence of observable periodicity in the diffraction pattern, more than the sequence of arrangement.
http://mathhelpforum.com/advanced-statistics/155956-n-choose-k-proof-print.html
# n choose k proof

• September 12th 2010, 04:35 PM
Ardgan

n choose k proof

Hi, I have to hand in my homework in two days and I have no idea how to prove the following (freely translated; English is not my native language, so I hope it makes sense):

"Show that for all positive whole numbers $n$ and $k$, with $n$ greater than or equal to $k$, the following applies: $\binom{n}{k-1} + \binom{n}{k} = \binom{n+1}{k}$."

My textbook uses the word "choose"; I'm not sure if that's universal, so: "$n$ choose $k$" $= \frac{n!}{k!(n-k)!}$, so you know what I'm talking about. Any help greatly appreciated!

• September 12th 2010, 04:58 PM
theodds

Just calculate the sum directly. Get a common denominator using the recursion identity $j! = j (j - 1)!$. Everything falls out in two lines at the most.
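Not a substitute for the algebraic proof the answer sketches, but a quick sanity check before handing it in (my own addition, purely for reassurance): Python's `math.comb` lets you verify the identity over a range of $n$ and $k$.

```python
from math import comb

def pascal_identity_holds(n, k):
    """Check C(n, k-1) + C(n, k) == C(n+1, k)."""
    return comb(n, k - 1) + comb(n, k) == comb(n + 1, k)

assert all(pascal_identity_holds(n, k)
           for n in range(1, 40) for k in range(1, n + 1))
print("identity holds for all 1 <= k <= n < 40")
```

The algebra itself goes exactly as the hint says: put both terms over the common denominator $k!(n-k+1)!$ using $j! = j(j-1)!$, and the numerator collapses to $(n+1)!$.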
https://scicomp.stackexchange.com/questions/11397/is-there-a-reference-source-paper-for-the-tucker-als-in-tensor-toolbox-for-mat
Is there a reference/source paper for TUCKER_ALS() in Tensor Toolbox for MATLAB? TUCKER_ALS computes the best rank-(R1, R2, ..., Rn) approximation of a tensor X, according to the specified dimensions. I am using MATLAB Tensor Toolbox Version 2.5. I am wondering, if I write a paper, how I can refer to the algorithm.
https://blog.kxy.ai/5-reasons-you-should-never-use-pca-for-feature-selection/
Principal Component Analysis, or PCA, is one of the most consequential dimensionality reduction algorithms ever invented. Unfortunately, like all popular tools, PCA is often used for unintended purposes, sometimes abusively. One such purpose is feature selection. In this article, we give you 5 key reasons never to use PCA for feature selection. But first, let's briefly review the inner workings of PCA.

## The Problem

Let us assume we have a vector of inputs $x := (x_1, \dots, x_d) \in \mathbb{R}^d$, which we assume has mean 0 to simplify the argument (i.e. $E(x) = 0$). We are interested in reducing the size $d$ of our vector without losing too much information. Here, a proxy for the information content of $x$ is its energy, defined as $\mathcal{E}(x) := E(||x||^2).$

The challenge is that the information content of $x$ is usually unevenly spread out across its coordinates. In particular, coordinates can be positively or negatively correlated, which makes it hard to gauge the effect of removing a coordinate on the overall energy.

Let's take a concrete example. In the simplest bivariate case ($d=2$), $\mathcal{E}(x) = \text{Var}(x_1) + \text{Var}(x_2) + 2\rho(x_1, x_2) \sqrt{\text{Var}(x_1)\text{Var}(x_2)}$, where $\rho(x_1, x_2)$ is the correlation between the two coordinates, and $\text{Var}(x_i)$ is the variance of $x_i$. Let's assume that $x_1$ has a higher variance than $x_2$. Clearly, the effect of removing $x_2$ on the energy, namely $\mathcal{E}(x)-\text{Var}(x_1) = \text{Var}(x_2) + 2\rho(x_1, x_2) \sqrt{\text{Var}(x_1)\text{Var}(x_2)},$ does not just depend on $x_2$; it also depends on the correlation between $x_1$ and $x_2$, and on the variance/energy of $x_1$!

When $d > 2$, things get even more complicated. The energy now reads $\mathcal{E}(x) =\sum_{i=1}^{d}\sum_{j=1}^{d} \rho(x_i, x_j) \sqrt{\text{Var}(x_i)\text{Var}(x_j)},$ and analyzing the effect on the energy of removing any coordinate becomes a lot more complicated.
The aim of PCA is to find a feature vector $z := (z_1, \dots, z_d) \in \mathbb{R}^d$ obtained from $x$ by a linear transformation, namely $z = Wx,$ satisfying the following conditions:

1. $z$ has the same energy as $x$: $E(||x||^2) = E(||z||^2)$.
2. $z$ has decorrelated coordinates: $\forall i \neq j, ~ \rho(z_i, z_j) = 0$.
3. Coordinates of $z$ have decreasing variances: $\text{Var}(z_1) \geq \text{Var}(z_2) \geq \dots \geq \text{Var}(z_d)$.

When the 3 conditions above are met, we have $\mathcal{E}(x) = \mathcal{E}(z) =\sum_{i=1}^{d} \text{Var}(z_i).$ Thus, dimensionality reduction can be achieved by using features $z^{p} := (z_1, \dots, z_p)$ instead of the original features $x := (x_1, \dots, x_d)$, where $p < d$ is chosen so that the energy loss, namely $\mathcal{E}(z)-\mathcal{E}(z^{p}) = \sum_{i=p+1}^{d}\text{Var}(z_i),$ is only a small fraction of the total energy $\mathcal{E}(z)$: $\frac{\sum_{i=p+1}^{d}\text{Var}(z_i)}{\sum_{i=1}^{d}\text{Var}(z_i)} \ll 1.$

## The Solution

The three conditions above induce a unique solution. The conservation of energy equation implies: $E(||z||^2) = E\left( x^{T} W^{T}Wx\right) = E\left( x^{T} x\right)=E(||x||^2).$ A sufficient condition for this to hold is that $W$ be an orthogonal matrix: $W^{T}W = WW^{T} = I.$ In other words, columns (resp. rows) of $W$ form an orthonormal basis of $\mathbb{R}^{d}$.

As for the second condition, it implies that the autocovariance matrix $\text{Cov}(z) = WE(xx^T)W^T = W\text{Cov}(x)W^T$ should be diagonal. Let us write $\text{Cov}(x) = UDU^T$ the Singular Value Decomposition of $\text{Cov}(x)$, where columns of the orthogonal matrix $U$ are orthonormal eigenvectors of the (positive semidefinite) matrix $\text{Cov}(x)$, sorted in decreasing order of eigenvalues. Plugging $\text{Cov}(x) = UDU^T$ in the equation $\text{Cov}(z) = W\text{Cov}(x)W^T$, we see that, to satisfy the second condition, it is sufficient that $WU=I=U^{T}W^{T}$, which is equivalent to $W=U^{-1} =U^{T}$.
Note that, because $U$ is orthogonal, the choice $W = U^{T}$ also satisfies the first condition. Finally, given that columns of $U$ are sorted in decreasing order of eigenvalues, their variances $\text{Var}(z_i) = \text{Cov}(z)[i, i] = D[i, i]$ also form a decreasing sequence, which satisfies the third condition. Interestingly, it can be shown that any loading matrix $W$ of a linear transformation that satisfies the three conditions above ought to be of the form $W=U^T$ where columns of $U$ are orthonormal eigenvectors of $\text{Cov}(x)$ sorted in decreasing order of their eigenvalues. Coordinates of $z$ are called principal components, and the transformation $x \to U^{T}x$ is the Principal Component Analysis. ## 5 Reasons Not To Use PCA For Feature Selection Now that we are on the same page about what PCA is, let me give you 5 reasons why it is not suitable for feature selection. When used for feature selection, data scientists typically regard $z^{p} := (z_1, \dots, z_p)$ as a feature vector that contains fewer and richer representations than the original input $x$ for predicting a target $y$. Reason 1: Conservation of energy does not guarantee conservation of signal The essence of PCA is that the extent to which dimensionality reduction is lossy is driven by the information content (energy in this case) that is lost in the process. However, for feature selection, what we really want is to make sure that reducing dimensionality will not reduce performance! Unfortunately, maximizing the information content or energy of features $z^p := (z_1, \dots, z_p)$ does not necessarily maximize their predictive power! Think of the predictive power of $z^p$ as the signal part of its overall energy or, equivalently, the fraction of its overall energy that is useful for predicting the target $y$. 
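The claim in Reason 1 is easy to demonstrate numerically. The following is my own sketch, not code from the article: a low-energy feature carrying all of the signal is discarded by rank-1 PCA in favor of a high-energy noise feature.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
y = rng.standard_normal(n)
x1 = 0.1 * y                          # low-energy feature carrying all the signal
x2 = 3.0 * rng.standard_normal(n)     # high-energy feature carrying pure noise
X = np.column_stack([x1, x2])

eigvals, U = np.linalg.eigh(np.cov(X, rowvar=False))
U = U[:, ::-1]                        # principal directions, decreasing variance
explained = eigvals[::-1] / eigvals.sum()
z1 = X @ U[:, 0]                      # keep only the first principal component

print(explained)                      # ~[0.999, 0.001]: z1 retains ~99.9% of the energy
print(abs(np.corrcoef(z1, y)[0, 1]))  # ~0.0: yet essentially none of the signal
```

Rank-1 PCA preserves almost all of the energy here while wiping out $\mathcal{S}(z^p)$ entirely, because the signal lived in the low-variance direction.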
We may decompose an energy into signal and noise as $\mathcal{S}(z^p) + \mathcal{N}(z^p) = \mathcal{E}(z^p) \leq \mathcal{E}(x) = \mathcal{S}(x) + \mathcal{N}(x),$ where $\mathcal{N}(z^p) := E\left(||z^p||^2 \vert y\right)$ is the noise component, and $\mathcal{S}(z^p) := E\left(||z^p||^2 \right)-E\left(||z^p||^2 \vert y\right)$ is the signal.

Clearly, while PCA ensures that $\mathcal{E}(x) \approx \mathcal{E}(z^p)$, we may easily find ourselves in a situation where PCA has wiped out all the signal that was originally in $x$ (i.e. $\mathcal{S}(z^p) \approx 0$)! The lower the Signal-to-Noise Ratio (SNR) $\frac{\mathcal{S}(x)}{\mathcal{N}(x)}$, the more likely this is to happen. Fundamentally, for feature selection, what we want is conservation of signal, $\mathcal{S}(x) \approx \mathcal{S}(z^p)$, not conservation of energy.

Note that, if instead of using the energy as the measure of information content we used the entropy, the noise would have been the conditional entropy $h\left(z^p \vert y\right)$, and the signal would have been the mutual information $I(y; z^p)$.

### Reason 2: Conservation of energy is antithetical to feature selection

Fundamentally, preserving the energy of the original feature vector conflicts with the objectives of feature selection. Feature selection is most needed when the original feature vector $x$ contains coordinates that are uninformative about the target $y$, whether they are used by themselves or in conjunction with other coordinates. In such a case, removing the useless feature(s) is bound to reduce the energy of the feature vector. The more useless features there are, the more energy we will lose, and that's OK!

Let's take a concrete example in the bivariate case $x=(x_1, x_2)$ to illustrate this. Let's assume $x_2$ is uninformative about $y$ and $x_1$ is almost perfectly correlated to $y$. Saying that $x_2$ is uninformative about the target $y$ means that it ought to be independent from $y$ both unconditionally (i.e.
$I(y; x_2)=0$) and conditionally on $x_1$ (i.e. $I(y; x_2 \vert x_1) = 0$). This can occur for instance when $x_2$ is completely random (i.e. independent from both $y$ and $x_1$). In such a case, we absolutely need to remove $x_2$, but doing so would inevitably reduce the energy by $E(||x_2||^2)$. Note that, when both $x_1$ and $x_2$ have been standardized, as is often the case before applying PCA, removing $x_2$, which is the optimal thing to do from a feature selection standpoint, would result in 50% energy loss! Even worse, in this example, $x_1$ and $x_2$ happen to be principal components (i.e. $U=I$) associated with the exact same eigenvalue. Thus, PCA is unable to decide which one to keep, even though $x_2$ is clearly useless and $x_1$ almost perfectly correlated to the target!

### Reason 3: Decorrelation of features does not imply maximum complementarity

It is easy to think that because two features are decorrelated, each must bring something new to the table. That is certainly true, but the 'new thing' which decorrelated features bring is energy or information content, not necessarily signal! Much of that new energy can be pure noise. In fact, features that are completely random are decorrelated with useful features, yet they cannot possibly complement them for predicting the target $y$; they are useless.

### Reason 4: Learning patterns from principal components could be harder than from original features

When PCA is used for feature selection, new features are constructed. In general, the primary goal of feature construction is to simplify the relationship between inputs and the target into one that the models in our toolbox can reliably learn. By linearly combining previously constructed features, PCA creates new features that can be harder to interpret, and in a more complex relationship with the target. The questions you should be asking yourself before applying PCA are:

• Does linearly combining my features make any sense?
• Can I think of an explanation for why the linearly combined features could have as simple a relationship to the target as the original features?

If the answer to either question is no, then PCA features would likely be less useful than the original features.

As an illustration, imagine we want to predict a person's income using, among other features, GPS coordinates of her primary residence, age, number of children, and number of hours worked per week. While it is easy to see how a tree-based learner could exploit these features, linearly combining them would result in features that make little sense and are much harder to learn anything meaningful from using tree-based methods.

### Reason 5: Feature selection ought to be model-specific

Feature selection serves one primary goal: removing useless features from a set of candidates. As explained in this article, feature usefulness is a model-specific notion. A feature can very well be useful for one model, but not so much for another. PCA, however, is model-agnostic. In fact, it does not even utilize any information about the target.

## Conclusion

PCA is a great tool with many high-impact applications. Feature selection is just not one of them.
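As a closing numerical illustration (my own sketch, not from the original article), the standardized two-feature example from Reason 2 can be reproduced in a few lines: PCA splits the energy roughly 50/50 and has no basis for ranking the features, while a target-aware check separates them instantly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
y = rng.standard_normal(n)
x1 = y + 0.05 * rng.standard_normal(n)    # almost perfectly correlated with y
x2 = rng.standard_normal(n)               # useless: independent of y and x1
X = np.column_stack([x1, x2])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize, as is typical before PCA

eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]
explained = eigvals / eigvals.sum()
print(explained)                           # ~[0.5, 0.5]: PCA cannot rank the features

# A target-aware check immediately separates them:
print(abs(np.corrcoef(X[:, 0], y)[0, 1]))  # ~1.0: x1 carries the signal
print(abs(np.corrcoef(X[:, 1], y)[0, 1]))  # ~0.0: x2 carries none
```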
### A Level-Set Based IGA Formulation for Topology Optimization of Flexoelectric Materials

H. Ghasemi, H.S. Park and T. Rabczuk
Accepted for publication in Computer Methods in Applied Mechanics and Engineering, 2016

#### Abstract

This paper presents a design methodology based on a combination of isogeometric analysis (IGA), level set and pointwise density mapping techniques for topology optimization of a continuum considering piezoelectric and flexoelectric effects. The fourth-order partial differential equations (PDEs) of flexoelectricity, which require at least C1-continuous approximations, are discretized using Non-Uniform Rational B-splines (NURBS). The pointwise density mapping technique with consistent derivatives is used directly in the weak form of the governing equations. The boundary of the design domain is clearly and implicitly represented by a level set function. The accuracy of the IGA model is confirmed through numerical examples, including a cantilever beam under a point load and a truncated pyramid under compression with different electrical boundary conditions. Finally, we provide numerical examples demonstrating the significant enhancement in electromechanical coupling coefficient that can be obtained using topology optimization.

### Topology Optimization of Piezoelectric Nanostructures

S.S. Nanthakumar, T. Lahmer, X. Zhuang, H.S. Park and T. Rabczuk
Journal of the Mechanics and Physics of Solids 2016; 94:316-335

#### Abstract

We present an extended finite element formulation for piezoelectric nanobeams and nanoplates that is coupled with topology optimization to study the energy harvesting potential of piezoelectric nanostructures. The finite element model for the nanoplates is based on the Kirchhoff plate model, with a linear through-the-thickness distribution of electric potential.
Based on the topology optimization, the largest enhancements in energy harvesting are found for closed circuit boundary conditions, though significant gains are also found for open circuit boundary conditions. Most interestingly, our results demonstrate the competition between surface elasticity, which reduces the energy conversion efficiency, and surface piezoelectricity, which enhances the energy conversion efficiency, in governing the energy harvesting potential of piezoelectric nanostructures.

This paper is available in PDF form.

### Surface Effects on the Piezoelectricity of ZnO Nanowires

S. Dai and H.S. Park
Journal of the Mechanics and Physics of Solids 2013; 61:385-397

#### Abstract

We utilize classical molecular dynamics to study surface effects on the piezoelectric properties of ZnO nanowires as calculated under uniaxial loading. An important point of our work is that we have utilized two types of surface treatments, charge compensation and surface passivation, to eliminate the polarization divergence that otherwise occurs due to the polar (0001) surfaces of ZnO. In doing so, we find that if appropriate surface treatments are utilized, the elastic modulus and the piezoelectric properties of ZnO nanowires having a variety of axial and surface orientations are all reduced compared to the bulk values, as a result of polarization reduction in the polar [0001] direction. The reduction in effective piezoelectric constant is found to be independent of the expansion or contraction of the polar (0001) surface in response to surface stresses. Instead, the surface polarization, and thus the effective piezoelectric constant, is substantially reduced due to a reduction in the bond length of the Zn-O dimer closest to the polar (0001) surface.
Furthermore, depending on the nanowire axial orientation, we find in the absence of surface treatment that the piezoelectric properties of ZnO are either effectively lost, due to unphysical transformations from the wurtzite to non-piezoelectric d-BCT phases, or become smaller with decreasing nanowire size. The overall implication of this study is that if enhancement of the piezoelectric properties of ZnO is desired, then continued miniaturization of square or nearly square cross-section ZnO wires to the nanometer scale is not likely to achieve this result.

This paper is available in PDF form.

### Surface Piezoelectricity, Size-effects in Nanostructures and the Emergence of Piezoelectricity in Non-piezoelectric Materials

S. Dai, M. Gharbi, P. Sharma and H.S. Park
Journal of Applied Physics 2011; 110:104305

#### Abstract

In this work, using a combination of a theoretical framework and atomistic calculations, we highlight the concept of surface piezoelectricity, which can be used to interpret the piezoelectricity of nanostructures. Focusing on three specific material systems (ZnO, SrTiO3 and BaTiO3), we discuss the renormalization of apparent piezoelectric behavior at small scales. In a rather interesting interplay of symmetry and surface effects, we show that nanostructures of certain non-piezoelectric materials may also exhibit piezoelectric behavior. Finally, for the case of ZnO, using a comparison with first-principles calculations, we also comment on the fidelity of the widely used core-shell interatomic potentials in capturing non-bulk electromechanical response.

This paper is available in PDF form.

### A New Multiscale Formulation for the Electromechanical Behavior of Nanomaterials

H.S. Park, M. Devel and Z. Wang
Computer Methods in Applied Mechanics and Engineering 2011; 200:2447-2457

#### Abstract

We present a new multiscale, finite deformation, electromechanical formulation to capture the response of surface-dominated nanomaterials to externally applied electric fields. To do so, we develop and discretize a total energy that combines both mechanical and electrostatic terms, where the mechanical potential energy is derived from any standard interatomic potential, and where the electrostatic potential energy is derived using a Gaussian-dipole approach. By utilizing Cauchy-Born kinematics, we derive both the bulk and surface electrostatic Piola-Kirchhoff stresses that are required to evaluate the resulting electromechanical finite element equilibrium equations, where the surface Piola-Kirchhoff stress enables us to capture the non-bulk, electric-field-driven polarization of atoms near the surfaces of nanomaterials. Because we minimize a total energy, the present formulation has distinct advantages compared to previous approaches; in particular, only one governing equation needs to be solved. This is in contrast to previous approaches, which require either a staggered or a monolithic solution of both the mechanical and electrostatic equations, along with coupling terms that link the two domains. The present approach thus leads to a significant reduction in computational expense, both in terms of fewer equations to solve and in eliminating the need to remesh either the mechanical or electrostatic domain, due to being based on a total Lagrangian formulation. Though the approach can apply to three-dimensional cases, we concentrate in this paper on the one-dimensional case. We first derive the necessary formulas, then give numerical examples to validate the proposed approach in comparison to fully atomistic electromechanical calculations.

This paper is available in PDF form.
### Piezoelectric Constants for ZnO Calculated Using Classical Polarizable Core-Shell Potentials

S. Dai, M.L. Dunn and H.S. Park
Nanotechnology 2010; 21:445707

#### Abstract

We demonstrate the feasibility of using classical atomistic simulations, i.e. molecular dynamics and molecular statics, to study the piezoelectric properties of ZnO using core-shell interatomic potentials. We accomplish this by reporting piezoelectric constants for ZnO as calculated using two different classical interatomic core-shell potentials: that originally proposed by Binks et al., and that proposed by Nyberg et al. We demonstrate that the classical core-shell potentials are able to qualitatively reproduce the piezoelectric constants as compared to benchmark ab initio calculations. We further demonstrate that while the presence of the shell is required to capture the electron polarization effects that control the clamped-ion part of the piezoelectric constant, the major shortcoming of the classical potentials is a significant underprediction of the clamped-ion term as compared to previous ab initio results. However, the present results suggest that overall these classical core-shell potentials are sufficiently accurate to be utilized for large-scale atomistic simulations of the piezoelectric response of ZnO nanostructures.

This paper is available in PDF form.
# Factoring a time derivative operator outside of an integral in space

I'm trying to integrate $$\int_a^b \frac{d}{dt} \left[ \frac{du}{dx}\right]dx.$$ Assume $u$ is a sufficiently smooth function of both $t$ and $x$. Since the integral operator is in space only, can I simply factor out the time derivative operator and rewrite this integral as $$\frac{d}{dt} \int_a^b \left[ \frac{du}{dx}\right]dx?$$ If so, what property allows me to do this? Linearity of the integral operator? Linearity of the derivative operator? Something else?

## Answer

Write the first integral as $$\int^b_a \lim_{\epsilon\to 0} \frac{\frac{\partial }{\partial x}u(x,t+\epsilon) - \frac{\partial }{\partial x}u(x,t) }{\epsilon}\, dx.\tag{1}$$ The question is whether the above is the same as $$\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left(\int^b_a \frac{\partial }{\partial x}u(x,t+\epsilon)\, dx - \int^b_a\frac{\partial }{\partial x}u(x,t)\, dx\right). \tag{2}$$ Going from (1) to (2), we used the linearity of integration and, more importantly, we interchanged the limit and the integral. To justify interchanging the limit and the integral, a proper assumption has to be made on $u$; for example, $\frac{\partial u}{\partial x}$ and $\frac{\partial^2 u}{\partial x \partial t}$ both being continuous will suffice. More weakly, the conditions of either the dominated convergence theorem or the monotone convergence theorem allow the interchange. The smoothness of $u$ is therefore also used. If the partial derivatives of $u$ have certain discontinuities, there are counterexamples in which the derivative and the integral sign cannot be interchanged.
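Under those smoothness assumptions the interchange is easy to check numerically. The following sketch is an illustration of my own (the test function $u(x,t)=\sin(xt)$ and the step sizes are choices I made, not part of the question): it compares differentiating under the integral sign against differentiating the integral.

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule, to keep the sketch self-contained
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# u(x, t) = sin(x t); its x-derivative in closed form
du_dx = lambda x, t: t * np.cos(x * t)

a, b, t, dt = 0.0, 1.0, 0.7, 1e-5
xs = np.linspace(a, b, 20001)

# (1) take d/dt inside, then integrate over x
lhs = trapz((du_dx(xs, t + dt) - du_dx(xs, t - dt)) / (2 * dt), xs)

# (2) integrate over x first, then take d/dt of the result
F = lambda s: trapz(du_dx(xs, s), xs)
rhs = (F(t + dt) - F(t - dt)) / (2 * dt)

print(lhs, rhs)  # both approach cos(0.7), since the inner integral is sin(t)
```

For this $u$, $\int_0^1 u_x\,dx = u(1,t)-u(0,t)=\sin t$, so both sides should converge to $\cos t$. Note that the two discretized sides agree to roundoff by linearity of the finite sums; that is exactly the easy linearity step from (1) to (2), and the delicate part is only the limit $\epsilon \to 0$.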
# Limits on non-local correlations from the structure of the local state space

## Recording Details

PIRSA Number: 10120032

## Abstract

Nonlocality is arguably one of the most remarkable features of quantum mechanics. On the other hand, nature seems to forbid other no-signaling correlations that cannot be generated by quantum systems. The usual approach to explaining this limitation is based on information-theoretic properties of the correlations, without any reference to the physical theories they might emerge from. However, as shown in [PRL 104, 140401 (2010)], it is the structure of local quantum systems that determines the bipartite correlations possible in quantum mechanics. We investigate this connection further by introducing toy systems with regular polygons as local state spaces. This allows us to study the transition between bipartite classical, no-signaling and quantum correlations by modifying only the local state space. It turns out that the strength of nonlocality of the maximally entangled state depends crucially on a simple geometric property of the local state space, known as strong self-duality. We prove that the limitation of nonlocal correlations is a general result valid for the maximally entangled state in any model with strongly self-dual local state spaces, since such correlations must satisfy the principle of macroscopic locality. This implies notably that Tsirelson's bound for correlations of the maximally entangled state in quantum mechanics can be regarded as a consequence of strong self-duality of local quantum systems. Finally, our results also show that there exist models which are locally almost identical to quantum mechanics, but can nevertheless generate maximally nonlocal correlations.
You are on page 1of 12 # Computer Science Journal of Moldova, vol.17, no. 2(50), 2009 Determination of the normalization level of database schemas through equivalence classes of attributes Cotelea Vitalie Abstract In this paper, based on equivalence classes of attributes there are formulated necessary and sufficient conditions that constraint a database schema to be in the second, third or Boyce-Codd normal forms. These conditions offer a polynomial complexity for the testing algorithms of the normalizations level. Keywords: Relational database schema, functional dependencies, equivalence classes of attributes, normal forms, polynomial algorithms. 1 Introduction The anomalies that appear during database maintaining are known as insertion, update and deletion anomalies. These are directly related to the dependencies between attributes. A rigorous characterization of the quality grade of a database schema can be made through the exclusion of mentioned anomalies, with consideration of attributes dependencies, which offers the possibility to define some formal techniques for design of desirable relation schemes. The process of design of some relation scheme structure with intend to eliminate the anomalies, is called normalization. Normalization consists in following a set of defined rules on data arrangement with the scope to reduce the complexity of scheme structures and its transformation into smaller and stable structures which will facilitate c °2009 by Vitalie Cotelea 123 Vitalie Cotelea data maintenance and manipulation. In Section 2. There exist several normalization levels that are called normal forms.e. third normal form (3NF) and Boyce-Codd normal form (BCNF). can be performed in a polynomial time [2]. the correlation is proven between nonredundant classes 124 . the problem of determination of the normalization level is known to be NP-complete [3. every relation in 3NF is also in 2NF and every relation in 2NF is in 1NF. For example. 
This approach can be a part of the database analysis and design toolset. 3NF and BCNF are important from a database design standpoint [1]. These forms have increasingly restrictive requirements: every relation in BCNF is also in 3NF. Thus a database designer may work in terms of attributes sets and data dependencies. because normalization testing requires finding the candidate keys and nonprime attributes. for the automation of database design and testing. But it is known that a relation can have an exponential number of keys under the number of all attributes of its scheme [5]. second normal form (2NF). A relation is in 1NF if every attribute contains only atomic values. most of the definitions needed in this paper are presented. Therefore. In this paper necessary and sufficient conditions for a scheme to be in 2NF. the design of a 3NF database schema. The problem of prime and nonprime attributes finding has been solved in a polynomial time [6]. 4]. Besides this. through the synthesizing method. several properties for equivalence classes of attributes. and not in terms of keys. The normal forms based on functional dependencies are first normal form (1NF). the definitions of normal forms use the notions of prime and nonprime attributes. the determination of normalization level of a scheme is also polynomial. proved in [6]. Unfortunately. Firstly. Secondly. which are also related to key. third or BCNF) contain the notion of key. are given. i. 3NF or BCNF are defined. In Section 3. 2NF is mainly of historical interest. the definitions of normal schemes (second. These conditions are described in terms of redundant and nonredundant equivalence classes of attributes and the computation of these classes can be performed in polynomial time [6]. The final section is about algorithmic aspects. The set X is a determinant for Y with respect to F if X 0 → Y is not in F + for every proper subset X 0 of X. Let X and Y be two nonempty finite subsets of R. 
where it is shown that the determination of the normalization level of database schemas can be performed in polynomial time. Otherwise A is nonprime in Sch(R. 5 and 6 there are presented necessary and sufficient conditions (Theorems 4-6). written as X + . If X is a determinant for R with respect to F . 3NF or BCNF. then X is a key for relation scheme Sch(R. An attribute A is prime in Sch(R. In Sections 4. Let Sch(R. 2 Preliminary notions In this and in the next section. F ) be a relation scheme. If F is a set of functional dependencies over R and X is a subset of R. respectively. Note that some relation scheme may have more than one key. F ). where F is a set of functional dependencies defined on a set R of attributes. that will be as concise as possible. Armstrong’s Axioms are sound in that they generate only functional dependencies in F + when applied to a set F . for a relation scheme to be in 2NF. 125 . in terms of equivalence classes of attributes. is the set of attributes A such that X → A can be inferred using the Armstrong Axioms. F ). some definitions and statements used in this paper are presented. They are complete in that repeated application of these rules will generate all functional dependencies in the closure F + [1]. F ) if A is contained in some key of Sch(R.Determination of the normalization level of database schemas of attributes and the right and left sides of functional dependency that is inferred from a given set of functional dependencies (Theorem 3). that is X + = {A|X → A ∈ F + } [7]. that is F + = {V → W |F | = V → W } [7]. then the closure of the set X with respect to F . F ). The set of all functional dependencies implied by a given set F of functional dependencies is called the closure of F and is denoted as F + . E). (A. The relation of strong connectivity is an equivalence relation over the set S. Lemma 1.Vitalie Cotelea In what follows. Over the set S ∗ of vertices of graph G∗ a strict partial order is defined. . Tm is obtained. 
If X → Y ∈ F + and X is a determinant of Y under F . Evidently the condensed graph G∗ is free of directed circuits. Given a relation scheme Sch(R. . n. E ∗ ) is defined as follows: S ∗ = {S1 ... [6].. where: • for every attribute A in R. Vertex Si precedes vertex Sj . Then the condensed graph [8] of G. then for every attribute A ∈ (X − Y ) there is an attribute B ∈ Y so that in the contribution graph G there exists a path from vertex A to vertex B and for every attribute B ∈ (Y − X) there exists in X an attribute A. Let G = (S. A ∈ Si and B ∈ Sj }.. . . B) in E. From the ordered sequence of sets S1 . So. F ). there is a vertex labeled by A in S. where T1 = S1 and Tj = Sj − Sj−1 + ( i=1 Ti )F for j = 2.. it will be assumed that the set F of functional dependencies is reduced [7]. . keeping the precedence of prior sets. the set F can be represented by a graph... if Sj is accessible from Si . 126 . there is a partition of set of vertices S into pairwise disjoint Sn subsets. Let S1 . Strict partial orders are useful because they correspond more directly to directed acyclic graphs.. Sn } and E ∗ = {(Si . that is. Sn be the strongly connected components of a graph G = (S.. E). Sj )|i 6= j. E) be divided into strongly connected components... Tn ..... Sn a sequence of ordered nonredundant sets can be built T1 . All empty sets are excluded from the sequence and a sequence of nonempty sets T1 . from which the vertex B can be reached. G∗ = (S ∗ . S = i=1 Si . B) in E that is directed from vertex A to vertex B. • for every functional dependence X → Y in F and for every attribute A in X and every B in Y there is an edge a = (A. called contribution graph [6] for F and denoted by G = (S. Sn4). then S Z.. Y ⊆ T1T . If an attribute ASin S Sn is nonprime in scheme Sch = ( i=1 Si . where Z = X (T . on the contribution graph of set F of dependencies. F ). where j = 1. S S Corollary 1. is a determinant for Y Tj under F . Theorem ([6].. Sn under F . ([6]. m. 
A contradiction has been reached.. Theorem S S 1.. then A ∈ i=1 Ti . In the following sections.. sufficient and necessary conditions for a relation scheme to be in a normal form are presented. ([6]. TEvidently that X ⊆ S S S j S S + 0 T1 . Lemma is a determinant under F of set S S 2.. Tm . Tm . Tj is redundant. Using above structures and statements it will be shown that the problem of determination of the normalization level has polynomial complexity. m..Determination of the normalization level of database schemas 3 Some properties of equivalence classes of attributes In this section a brief overview of several properties of equivalence classes of attributes is given. + Theorem 3... Tm under F .. For aTTj . in terms of equivalence classes of attributes... m. S of set T1 .. Tj−1 . but X Tj = ∅. then A ∈ ( ni=1 Si − m i=1 Ti ). Tm and X → T (Y Tj ) ∈ F . the following takes place: if Y Tj 6= ∅. if and only if X is determinant of set T1 . If an attribute SmA in S1 . Lemma 3). Tj−1 Tj+1 . TheoremT4).. And their proofs are presented in [6]. But in this case. Set X is a determinant S ofS set S1 . 0 where X ⊆ X. Tj under F . Corollary Sn 3). Tm . Sn is prime in scheme Sch = ( i=1 Si ... is a 1 j S determinant for T1 .. According to Lemma 1.. Let X . S S Corollary 2. X 0 ⊆ T1 . where i = 1.. Thereby. 127 . The T soundness of this T statement is proven by contradiction: let Y T = 6 ∅. then X Ti 6= ∅. from every vertex labeled with an attribute in X 0 there exists aSpath T Sto a vertex labeled with an attribute in Y Tj .. Proof. ([6]. then X Tj 6= ∅. T ) and j = 1... Let X S →SY ∈ F . where X is a determinant for Y under F and X. Theorem 2). If set of attributes X is a determinant S 2.. ([6]. F ). If X T S S T1 . Corollary S1 .. then A is called that completely depends on X. Proposition 1.. if each constituent relation scheme is in the 2NF.... if and Theorem 4. Tj−1 )→A∈F + . X ⊆ T1 . From the construction of contribution Sm + that (T1 . Definition 2. 
Scheme Sch = ( ni=1 Si . then A completely depends on X.. Tj ) → A ∈ F . if there exists a proper subset X 0 of set X. namely S S S + . if it is in 1NF and each nonprime S attribute in ni=1 Si doesn’t partially depend on every key for Sch. T .Vitalie Cotelea 4 Second normal form Thus. or A ∈ ( ni=1 Si − m i=1 Ti ). F ) be inSthe 2NF.. F ) is in Sm + only if it is in the 1NF and for every T . S Proof. 128 . Necessity. Because A∈( i=1 Ti −Tj )+ . If such a proper subset doesn’t exist. [9]. Let X → A ∈ F be a nontrivial functional dependency (namely A ∈ / X). m. 1 m In S S addition. ( j i=1 Ti − Tj ) = Sm i=1 Ti − Tj takes place. Database schema is in the 2NF. n Then Sm every nonprime attribute A. Let scheme Sch = ( ni=1 Si . such that X 0 → A ∈ F + . X is a determinant of set T .. but there is an attribute A∈( i=1 Ti −Tj ) such S Sm (S i=1 Ti −Tj ). The next theorem gives a characterization of the 2NF in terms of equivalence classes of attributes. But this contradicts A∈( j−1 i=1 Ti −Tj ) . Let AS∈ TS graph. There are two cases: either A ∈ Tj . According to Theorem 1. j = 1. F ) is in the 2NF under a set of functional dependencies F . Assuming to the contrary.. follows j . If set of attributes X is a determinant for attribute A under set of attributes F . S the 2NF. Tm . the relation scheme in the 2NF can be defined: S Definition 1. that Sch is in Sm + that A ∈ / the 2NF. Relation scheme Sch = ( ni=1 Si . that is a member of set S i=1SSi − Sn .. An attribute A is called partially dependent on X. i=1 Ti completely depends on every determinant X of setSS1 S. then (T1 the fact that set Tj is nonredundant. [9].. . or ( S − i i i i=1 i=1 i=1 Ti ) 6= ∅..Determination of the normalization level of database schemas S S S S Let A ∈ ( ni=1 Si − m for T .. the following equality takes place: ( every T . F ) is in 3NF under a set of functional dependencies F .. m and vice versa. So that (T . XS (TS .. that is in the case when set of nonprime attributes is not empty. 
...if and only if $T_i = S_i$ holds for every $i = 1, \ldots, m$.

Sufficiency. Two cases are possible: either $(\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i) = \emptyset$ or $(\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i) \neq \emptyset$. If $(\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i) = \emptyset$, then the scheme contains no nonprime attributes and, therefore, it is in the 2NF (and even in the third). If $(\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i) \neq \emptyset$, let $X$ be a determinant of the set $T_1 \cup \ldots \cup T_m$ under $F$. Taking Lemma 2 into account, it results that every nonprime attribute $A$ completely depends on $T_1 \cup \ldots \cup T_m$ and, furthermore, completely depends on the determinant $X$ of the set $T_1 \cup \ldots \cup T_m$ under $F$; that is, $A$ fully functionally depends on key $X$ of scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$. The equality $(\bigcup_{i=1}^{m} T_i - T_j)^+ = \bigcup_{i=1}^{m} T_i - T_j$ takes place when $T_i = S_i$ holds for every $i = 1, \ldots, m$; in other words, the scheme is in the 2NF. Otherwise, with $(T_1, \ldots, T_{j-1})$ a determinant for $T_j$, the dependency $(X \cup (T_1 \cup \ldots \cup T_{j-1})) \to A \in F^+$ would hold, i.e., the nonprime attribute $A$ would partially depend on key $X$, a fact that contradicts the assumption that scheme $Sch$ is in the 2NF.

Corollary 3. If $\bigcup_{i=1}^{m} T_i = \bigcup_{i=1}^{n} S_i$, then scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ is in the 2NF. The soundness of this statement follows from the fact that in this case the scheme contains no nonprime attributes.

5 Third normal form

In this section a characterization of the 3NF is given through the equivalence classes. The attribute $A$ is considered to depend transitively on $V$ through $W$ if the following conditions are all satisfied:

1. $V \to W \in F^+$;
2. $W \to V \notin F^+$ (namely, $V$ doesn't functionally depend on $W$);
3. $W \to A \in F^+$;
4. $A \notin VW$.

Definition 4 [9]. Scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ is in the 3NF if it is in the 1NF and no nonprime attribute transitively depends on a key of scheme $Sch$. A database schema is in the 3NF if every constituent relation scheme is in the 3NF.

Theorem 5. Scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ is in the 3NF if and only if $(\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$ is a determinant for $(\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$ under $F$.

Proof. Necessity. Let scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ be in the 3NF. Then there doesn't exist any dependency $W \to A \in F^+$ such that $W \subseteq (\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$, $A \in (\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$ and $A \notin W$; otherwise the nonprime attribute $A$ would transitively depend through $W$ on every determinant of the set $\bigcup_{i=1}^{n} S_i$. Consequently, the dependency $(\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i) \to (\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$ is reduced on the left side, a fact that confirms that $(\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$ is a determinant for $(\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$ under $F$.

Sufficiency. Assume $(\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$ is a determinant for $(\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$ under $F$. According to Theorem 1, the nonprime attributes $A$ do not depend transitively on any determinant $X$ of the set $\bigcup_{i=1}^{n} S_i$: if the attribute $A$ transitively depended on $X$ through some $W$, then $W \to A \in F^+$ would be a nontrivial dependency with $W \subseteq (\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$ and $A \notin XW$, which contradicts the assumption. Hence scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ is in the 3NF.

Determination of the normalization level of database schemas

6 Boyce-Codd normal form

The concept of BCNF is refined from the notion of 3NF. In the determination of a database schema being in BCNF, a given set $F$ of functional dependencies is used.

Definition 5 [11]. Scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ is in the normal form Boyce-Codd (BCNF) under the set $F$ of functional dependencies if it is in the 1NF and for every nontrivial dependency $V \to A \in F^+$, $V \to \bigcup_{i=1}^{n} S_i \in F^+$ takes place; that is, the left side of each functional dependency functionally determines all attributes of the scheme.

Theorem 6. Relation scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ is in BCNF if and only if it is in the 3NF and for every $T_j$, $j = 1, \ldots, m$, the set of attributes $(\bigcup_{i=1}^{m} T_i - T_j)$ is a determinant for $(\bigcup_{i=1}^{m} T_i - T_j)$ under $F$.

Proof. Necessity. Let scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ be in BCNF. By the definition of BCNF, for every nontrivial functional dependency $V \to A \in F^+$, $V \to \bigcup_{i=1}^{n} S_i \in F^+$ holds. Suppose that for some $T_j$ the set $(\bigcup_{i=1}^{m} T_i - T_j)$ is not a determinant for $(\bigcup_{i=1}^{m} T_i - T_j)$ under $F$. Based on the reflexivity rule, $(\bigcup_{i=1}^{m} T_i - T_j) \to (\bigcup_{i=1}^{m} T_i - T_j) \in F^+$; then there exists a set of attributes $V \subset (\bigcup_{i=1}^{m} T_i - T_j)$ so that $V \to (\bigcup_{i=1}^{m} T_i - T_j) \in F^+$, and this last dependency is not trivial. But such a functional dependency contradicts the fact that every determinant $X$ of the set $\bigcup_{i=1}^{n} S_i$, and consequently of the set $\bigcup_{i=1}^{m} T_i$, contains attributes in every class, namely $X \cap T_j \neq \emptyset$ for $j = 1, \ldots, m$.

Sufficiency. Let scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ be in the 3NF and, for every $T_j$, $j = 1, \ldots, m$, let $(\bigcup_{i=1}^{m} T_i - T_j)$ be a determinant for $(\bigcup_{i=1}^{m} T_i - T_j)$ under $F$. Let $V \to A \in F^+$ be a nontrivial reduced dependency, where $V$ is a determinant for $A$ under $F$. Three cases can be examined (other cases don't exist):

1. $V \subseteq (\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$ and $A \in (\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$, that is, the left and right sides are formed just from nonprime attributes. In this case the nonprime attribute $A$ would transitively depend through $V$ on every determinant of the set $\bigcup_{i=1}^{n} S_i$, and then scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ would not be in the 3NF, which contradicts the hypothesis.

2. $V \subseteq \bigcup_{i=1}^{m} T_i$ and $A \in (\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$, that is, the left side is formed from prime attributes and the right side consists of a nonprime attribute. If $V$ were not a determinant of the set $\bigcup_{i=1}^{n} S_i$, the nonprime attribute $A$ would partially depend on a determinant of the set $\bigcup_{i=1}^{n} S_i$; therefore scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ would not be in the 2NF, and hence neither in the third, which again contradicts the hypothesis.

3. $V \subseteq \bigcup_{i=1}^{m} T_i$ and $A \in \bigcup_{i=1}^{m} T_i$, that is, the left and right sides are formed just from prime attributes. Let $m > 1$ and $A \in T_k$. Two cases can exist: either $V \subseteq T_k$, or $V \not\subset T_k$, with $V \subseteq (T_l \cup T_{l+1} \cup \ldots \cup T_k)$ and $V \cap T_i \neq \emptyset$ for $i = l, \ldots, k$ (evidently $m > k - l + 1$). Then, by Theorem 3 and the way the contribution graph is constructed, there would exist a $T_j$ with $V \cap T_j = \emptyset$ such that $(\bigcup_{i=1}^{m} T_i - T_j)$ is not a determinant for $(\bigcup_{i=1}^{m} T_i - T_j)$ under $F$, because $((\bigcup_{i=1}^{m} T_i - \{A\}) - T_j) \to (\bigcup_{i=1}^{m} T_i - T_j) \in F^+$, a fact that contradicts the hypothesis.

From the construction of the contribution graph and from the fact that the dependency $V \to A$ is reduced, it follows in each case that $V \to \bigcup_{i=1}^{n} S_i \in F^+$; by Definition 5, scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ is in BCNF.

7 Algorithms' complexities

Based on the above characterization, the polynomiality of the normal form testing problem can be proved. A few comments about the complexity of the algorithms for finding the normal form of a scheme are made below.
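The determinant conditions discussed in this section all reduce to attribute-closure computations over $F$, each pass costing time proportional to $||F||$. A minimal sketch in Python (the encoding of functional dependencies and the names used are illustrative, not the paper's own algorithm):

```python
def closure(attrs, fds):
    """Closure X+ of an attribute set under fds, a list of (left, right) set pairs.

    A set X is a determinant for Y under F when Y lies inside X+ and X is
    left-reduced; each while-pass scans every dependency once.
    """
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for left, right in fds:
            # Fire the dependency if its left side is covered and it adds something.
            if left <= result and not right <= result:
                result |= right
                changed = True
    return result

# F = {A -> B, B -> C}: the closure of {A} picks up B, then C.
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
print(sorted(closure({"A"}, fds)))  # ['A', 'B', 'C']
```

With closures available, "is $X$ a determinant for $Y$" becomes a containment test on `closure(X, fds)` plus a minimality check over the subsets of $X$.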
It is not hard to calculate the complexity of the algorithms that determine whether a scheme is in the second, third or Boyce-Codd normal form. Both the construction of the equivalence classes of the scheme's attributes and the redundancy elimination from these classes have a complexity $O(|R| \cdot ||F||)$ [6]. Computation of the condition for scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ to be in the 3NF (that is, whether $(\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$ is a determinant for $(\bigcup_{i=1}^{n} S_i - \bigcup_{i=1}^{m} T_i)$) requires a time $O(||F||)$. Similarly, verification of the condition for the scheme $Sch = (\bigcup_{i=1}^{n} S_i, F)$ to be in BCNF (that is, whether for every $T_j$, $j = 1, \ldots, m$, the set of attributes $(\bigcup_{i=1}^{m} T_i - T_j)$ is a determinant for $(\bigcup_{i=1}^{m} T_i - T_j)$ under $F$) requires a time $O(|R| \cdot ||F||)$; if the nonredundant classes are already built, calculation of this condition requires only $O(|NonRedEquivClasses| \cdot ||F||)$. Therefore the time is $O(|R| \cdot ||F||)$ for each of these algorithms. This is explained through the fact that the complexity of the calculation of the classes of nonredundant attributes exceeds the complexity of the calculation of the verification conditions that determine if a scheme is in one of the enumerated forms.

References

[1] Ramakrishnan, Raghu; Gehrke, Johannes. Database Management Systems. Second Edition. McGraw-Hill Higher Education, 2000. 900 pp.

[2] Bernstein, P.A. Synthesizing Third Normal Form Relations from Functional Dependencies. ACM Trans. Database Syst., 1976, Vol. 1, N 4, p. 277–298.

[3] Beeri, C.; Bernstein, Philip A. Computational Problems Related to the Design of Normal Form Relation Schemes. ACM Trans. Database Syst., March 1979, Vol. 4, N 1, p. 30–59.

[4] Jou, J.H.; Fischer, P.C. The complexity of recognizing 3NF relation schemes. Information Processing Letters, 1982, Vol. 14, N 4, p. 187–190.

[5] Yu, C.T.; Johnson, D.T. On the complexity of finding the set of candidate keys for a given set of functional dependencies. Inform. Process. Letters, 1978, p. 100–101.

[6] Cotelea, Vitalie. An approach for testing the primeness of attributes in relational schemas. Computer Science Journal of Moldova, Chisinau, Vol. 17, Nr. 1(49), 2009, p. 89–99.

[7] Maier, D. The Theory of Relational Databases. Computer Science Press, 1983. 637 p.

[8] Even, Shimon. Graph Algorithms. Computer Science Press, 1979. 260 p.

[9] Codd, E.F. Further Normalization of the Data Base Relational Model. In: R. Rustin (ed.), Data Base Systems, Prentice-Hall, Englewood Cliffs, NJ, 1972, p. 33–64.

[10] Chao-Chih Yang. Relational Databases. Prentice-Hall, 1986. 250 p.

[11] Codd, E.F. Recent Investigations in Relational Data Base Systems. IFIP Congress, 1974, p. 1017–1021.

Vitalie Cotelea
Received May 27, 2009

Academy of Economic Studies of Moldova
Phone: (+373 22) 40 28 87
E-mail: vitalie.cotelea@gmail.com
https://www.ias.ac.in/listing/bibliography/joaa/M._I._Nouh
• M. I. Nouh

Articles written in Journal of Astrophysics and Astronomy

• Relation between a function of the right ascension and the angular distance to the vertex for Hyades stars

In this paper, a relation was developed for Hyades stars between a function of the right ascensions and the angular distances from the vertex. The precision criteria of this relation are very satisfactory, and a correlation coefficient value of ≃ 1 was found, which shows that the attributes are completely linearly related. The importance of this relation was illustrated through its uses as:

• a criterion for membership of the cluster,
• a generating function for evaluating some parameters of the cluster,
• a generating function for the initial values of the vertex equatorial coordinates, which could then be improved iteratively using the procedure of differential corrections.

• On the Maximum Separation of Visual Binaries

In this paper, an efficient algorithm is established for computing the maximum (minimum) angular separation ρmax (ρmin), the corresponding apparent position angles (𝜃|ρmax, 𝜃|ρmin) and the individual masses of visual binary systems. The algorithm uses Reed's formulae (1984) for the masses, and a technique of one-dimensional unconstrained minimization, together with the solution of Kepler's equation for (ρmax, 𝜃|ρmax) and (ρmin, 𝜃|ρmin). Iterative schemes of quadratic convergence up to any positive integer order are developed for the solution of Kepler's equation. A sample of 110 systems is selected from the Sixth Catalog of Orbits (Hartkopf et al. 2001). Numerical studies are included, and some important results are as follows: there is no dependence between ρmax and the spectral type, and a minor modification of Giannuzzi's (1989) formula is proposed for the upper limits of ρmax as functions of the spectral type of the primary.

• Spectroscopic Analysis of the Eclipsing Binary α CrB

The eclipsing binary α CrB is a well-known double-lined spectroscopic binary.
The system is considered unique among main-sequence systems with respect to its small mass ratio and the large magnitude difference between the components. Our aim in the present paper is to compute the orbital parameters and to model the atmospheric parameters of the system. Synthetic spectral analysis of both the individual and disentangled spectra has been performed and yielded effective temperatures 𝑇eff = 10000 ± 250 K, surface gravities log 𝑔 = 4 ± 0.25 and projected rotational velocities 𝑣 sin 𝑖 = 110 ± 5 km/sec for the primary component, and 𝑇eff = 6000 ± 250 K and log 𝑔 = 4.5 ± 0.25 for the secondary component. The evolutionary state of the system is investigated using stellar models.

• Light Curve Stability and Period Behavior of the Contact Binary TZ Boo

New CCD observations of the eclipsing binary TZ Boo in BVR bands were carried out in 2006 (yielding three new minima) and used together with all published minima to study and update the orbital period of the system by means of an (O–C) diagram. The period variation from 1926 to 2011 is represented by a polynomial of eighth degree and indicates a period variation of about 9.752 × 10^-10 days/yr. We studied the light curve stability over 85 yr, covering all published observations in the V band, and confirm the cyclic light curve variations.
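The maximum-separation abstract above leans on solving Kepler's equation M = E − e sin E for the eccentric anomaly. The paper develops higher-order iterative schemes; the sketch below is only the generic Newton iteration (the function name, tolerance and starting guess are illustrative assumptions, not the paper's method):

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton's method.

    Plain quadratically convergent scheme; M in radians, 0 <= e < 1.
    """
    E = M if e < 0.8 else math.pi  # common starting guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = solve_kepler(1.0, 0.3)
# Substituting E back into E - e*sin(E) recovers M up to the tolerance.
```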
http://mathhelpforum.com/advanced-algebra/39962-show-property-skew-symmetric.html
# Math Help - show this property of skew-symmetric

1. ## show this property of skew-symmetric

A is a real 3×3 non-zero antisymmetric matrix. Show that A has a real eigenvector x such that Ax = 0.

2. Originally Posted by szpengchao

A is a real 3×3 non-zero antisymmetric matrix. Show that A has a real eigenvector x such that Ax = 0.

Hint: An antisymmetric matrix of odd order is always singular.
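A quick numerical check of the hint (the entries a, b, c below are arbitrary choices): for odd order, det(A) = det(Aᵀ) = det(−A) = −det(A), so det(A) = 0 and a real null vector exists.

```python
# A general real 3x3 antisymmetric matrix is determined by three numbers.
a, b, c = 2.0, -1.0, 3.0
A = [[0.0,  a,   b],
     [-a,  0.0,  c],
     [-b,  -c,  0.0]]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(A))  # 0.0, as the singularity argument predicts

# The real vector x = (c, -b, a) spans the kernel: A x = 0.
x = [c, -b, a]
Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(Ax)  # [0.0, 0.0, 0.0]
```

The explicit kernel vector (c, −b, a) works for any parameters, which is an easy way to exhibit the eigenvector the problem asks for.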
https://math.stackexchange.com/questions/1084891/why-rationalize-the-denominator/1085966
# Why rationalize the denominator? In grade school we learn to rationalize denominators of fractions when possible. We are taught that $\frac{\sqrt{2}}{2}$ is simpler than $\frac{1}{\sqrt{2}}$. An answer on this site says that "there is a bias against roots in the denominator of a fraction". But such fractions are well-defined and I'm failing to see anything wrong with $\frac{1}{\sqrt{2}}$ - in fact, IMO it is simpler than $\frac{\sqrt{2}}{2}$ because 1 is simpler than 2 (or similarly, because the former can trivially be rewritten without a fraction). So why does this bias against roots in the denominator exist and what is its justification? The only reason I can think of is that the bias is a relic of a time before the reals were understood well enough for mathematicians to be comfortable dividing by irrationals, but I have been unable to find a source to corroborate or contradict this guess. • by this time such bias is largely restricted to school level. Furthermore, it is restricted to cases with very short terms; if I need to estimate $$\frac{\sqrt{n+1} - \sqrt n}{2}$$ my quickest trick is to rationalize the numerator – Will Jagy Dec 29 '14 at 19:31 • See this thread in MathEducators.SE. Many answers and explanations there. The discussion is pedagogical rather than mathematical, but anyway. A quick summary: comparing answers is easier in standard form, b4 computers yadda yadda, nice to learn how to do this so that you can when it really matters. – Jyrki Lahtonen Dec 29 '14 at 19:54 • @JyrkiLahtonen, I think it is three yadda's. Hmmm; usually three but varies youtube.com/watch?v=O6kRqnfsBEc – Will Jagy Dec 29 '14 at 19:58 • At least it's a surd number of yaddas, rather than an absurd number! – Walter Mitty Dec 30 '14 at 8:33 • This is part of the moral relativism and moral dissonance problems. We try to rationalize everything! 
:-) – Asaf Karagila Dec 30 '14 at 10:33 This was very important before computers in problems where you had to do something else after computing an answer. One simple example is the following: When you calculate the angle between two vectors, often you get a fraction containing roots. In order to recognize the angle, whenever possible, it is good to have a standard form for these fractions [side note: I often saw students unable to find the angle $\theta$ so that $\cos(\theta)=\frac{1}{\sqrt{2}}$]. The simplest way to define a standard form is by making the denominator or numerator integer. If you wonder why the denominator is the choice, it is the natural choice: as I said, often you need to make computations with fractions. What is easier to add: $$\frac{1}{\sqrt{3}}+\frac{1}{\sqrt{6}+\sqrt{3}} \, \mbox{ or }\, \frac{\sqrt{3}}{3}+\frac{\sqrt{6}-\sqrt{3}}{3} \,?$$ Note that bringing fractions to the same denominator is usually easier if the denominator is an integer. And keep in mind that in many problems you start with quantities which need to be replaced by fractions in standard form [for example in trigonometry, problems are set in terms of $\cos(\theta)$ where $\theta$ is some angle]. But at the end of the day, it is just a convention. And while you think that $\frac{1}{\sqrt{2}}$ looks simpler, and you are right, the key with conventions is that they need to be consistent for the cases where you need recognition. The one which looks simpler is often relative... The historical reason for rationalizing the denominator is that before calculators were invented, square roots had to be approximated by hand. To approximate $\sqrt{n}$, where $n \in \mathbb{N}$, the ancient Babylonians used the following method: 1. Make an initial guess, $x_0$. 2.
Let $$x_{k + 1} = \frac{x_k + \dfrac{n}{x_k}}{2}$$ If you use this method, which is equivalent to applying Newton's Method to the function $f(x) = x^2 - n$, to approximate the square root of $2$ by hand with $x_0 = 3/2$, you will see that while the sequence converges quickly, the calculations become onerous after a few steps. However, once an approximation was known, it was easy to calculate $$\frac{1}{\sqrt{2}}$$ quickly by rationalizing the denominator to obtain $$\frac{\sqrt{2}}{2}$$ then dividing the approximation by $2$. • Minor nitpick: I think you meant "square roots had to be approximated [by hand]." – GregRos Jan 1 '15 at 23:04 • What would be the problem of using the approximation in the denominator? Is it because the errors would be 'expanded', while in the numerator they would not? – An old man in the sea. Aug 25 '16 at 17:42 • @Anoldmaninthesea. Before calculators were invented, approximating square roots was difficult. However, once a particular square root had been calculated, it was easier to rationalize the denominator and use a known approximation rather than calculate a new square root and verify that calculation. In short, rationalizing the denominator was a labor saving device. – N. F. Taussig Aug 25 '16 at 17:50 • @N.F.Taussig Thanks. I understood what you wrote in your answer. My doubt was specific to the last part, namely the example. We have a proximation for the square root of 2. Why should I use $\sqrt{2}/2$ instead of $1/\sqrt{2}$? the approximation used in both is the same... – An old man in the sea. Aug 25 '16 at 17:55 • @Anoldmaninthesea. Sorry, I did not initially understand what you were asking. The reason that it makes more sense to use $\sqrt{2}/2$ than $1/\sqrt{2}$ is that it is easier to divide the approximation $\sqrt{2} \approx 1.414214$ by $2$ than it is to divide $1$ by $1.414214$. Simpler calculations lead to fewer errors. – N. F. 
Taussig Aug 25 '16 at 18:24 I may have missed it, but there is an important reason that I think has been omitted from the other answers. (Ahaan Rungta mentioned it, but did not explain in detail.) Recall how something like $\frac3{17}$ was calculated prior to around 1964: $$\require{enclose} \begin{array}{rl} 17&\enclose{longdiv}{3.000\ldots} \end{array}$$ $$\begin{array}{rlll} & \ \ \ \,0.1\\ 17&\enclose{longdiv}{3.000\ldots} \\ & \ \ 1.7 \\ \hline & \ \ 1\ 3 \end{array}$$ $$\begin{array}{rlll} & \ \ \ \,0.17\\ 17&\enclose{longdiv}{3.000\ldots} \\ & \ \ 1.7 \\ \hline & \ \ 1\ 30 \\ & \ \ 1\ 19 \\ \hline & \ \ \ \ \ 11 \end{array}$$ And so on. The difficulty of the calculations depends only on the complexity of the divisor, which is 17. To extract a result with any required degree of precision one needs only continue the calculation until the required number of digits have been emitted. But the operations themselves are determined by the divisor. Now let us take $\frac3{\sqrt2}$ as an example. To calculate this directly we need to evaluate: $$1.4142\ldots \enclose{longdiv}{3.000\ldots}$$ which is quite onerous. Using an exact value for the divisor is impossible because of the way the algorithm works, so you must truncate the divisor. It's not clear how much error will be introduced by this truncation. And if you round off the divisor to $n$ digits of precision, you must perform many multiplications and subtractions of $n$-digit numbers. In contrast, calculating $\frac{3\sqrt2}2$ is much easier. First calculate $3\times \sqrt2$ with a single multiplication, to obtain $4.242640\ldots$. (If you need more digits later you can easily produce them when you need them.) Then perform the following division: $$2 \enclose{longdiv}{4.242640\ldots}$$ which requires only trivial integer calculations throughout. 
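The Babylonian iteration described in the earlier answer is short enough to sketch directly in Python (the function name and step count are illustrative):

```python
def babylonian_sqrt(n, x0, steps=6):
    """Approximate sqrt(n) via x_{k+1} = (x_k + n/x_k) / 2.

    This is Newton's method applied to f(x) = x^2 - n; convergence is
    quadratic, so a handful of steps suffices in double precision.
    """
    x = x0
    for _ in range(steps):
        x = (x + n / x) / 2.0
    return x

root2 = babylonian_sqrt(2, 1.5)
# With sqrt(2) in hand, 1/sqrt(2) = sqrt(2)/2 is a single easy halving,
# which is the labor-saving point of rationalizing the denominator.
inv_root2 = root2 / 2.0
```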
The main reason I'd guess our math teacher culture tells us to require rationalizing the denominator is so that there is one set universal nomenclature among students about what a standard from means. In school, teachers have a lot of answers to check so if they have to keep seeing things like $\frac{1}{\sqrt{3}}$, $\frac{\sqrt{3}}{3}$, and $\sqrt{\frac{1}{3}}$ (just as a very simple example) floating around, it slows down checking slightly and is, also, to some extent, annoying. Historical thing: before calculators, you had to do things by hand (duh). In this scenario, dividing $1$ by $\sqrt{3}$ is a lot harder than dividing $\sqrt{3}$ by $3$, so rationalizing the denominator (rather than rationalizing the numerator) seems logical. • I can buy the rationale that rationalizing the denominator makes fractions easier to compute by hand. But in that case we should prefer series representations of certain expressions to expressions containing certain common functions like exponentials. I think that terseness, symmetry, and forms that provide insight into the properties of a number are more important than ease of hand-computation, especially considering that we have calculators. – Reinstate Monica Dec 29 '14 at 19:47 • @Ahaan - +1 for the answer. Nonetheless, a good teacher, at any level, should have no trouble realizing that your three radical expressions represent the same number. However, having said that, and being a US resident, maybe not. – Chris Leary Dec 29 '14 at 20:06 • @ChrisLeary Thanks for the +1! Well, often times, there are standard forms. Anyhow, the more traditional reason, I think, is the historical reason I mention. – Ahaan S. Rungta Dec 29 '14 at 20:08 • @Solomonoff'sSecret: Traditionally, when computing by hand, the way you'd deal with exponential or trig functions would be to look them up in a table (or use a slide rule). 
That's easier than using a series expansion, although of course you could do that, if you didn't have a suitable lookup table available. – Ilmari Karonen Dec 30 '14 at 2:05 • +1 for the historical thing - after discussing standard-forms for so long one tends to forget that for some people the numerical results do matter... – piet.t Dec 30 '14 at 7:45 Anecdotally it shows that the inverse of $a+b\sqrt n$ can also be written as $c+d\sqrt n$ (with $a, b, c, d \in \Bbb Q$), which is key in showing that $\Bbb Q[\sqrt n]$ (the set $\{ P(\sqrt n) \mid P \text{ is a rational polynomial} \}$) is a field. • This would be more pertinent if it had mentioned the inverse of $a+b\sqrt n$ with $a,b\in\Bbb Q$. – Marc van Leeuwen Dec 30 '14 at 12:35 • True, laziness on my part. I'll change it. – Alexandre Halm Dec 30 '14 at 12:38 • This is a very important conceptual reason, and becomes even more compelling when you want to show expressions $a + b\sqrt[3]{2} + c\sqrt[3]{4}$ with rational $a, b, c$ form a field. The only nonobvious aspect of being a field is that such expressions are preserved under inversion, since the trick taught in school for the square root case does not work anymore. – KCd Dec 31 '14 at 4:53 • @KCd: So what is a trick that works for $\mathbb{Q}(\sqrt[3]{2})$? – user21820 Jan 1 '15 at 13:33 • Well, the point is that there is no trick. Maybe you just meant to ask how it is done at all? To invert a specific number, say $5 + \sqrt[3]{2} - 8\sqrt[3]{4}$, you could multiply it by an unknown $x + y\sqrt[3]{2} + z\sqrt[3]{4}$, set the product equal to $1 + 0\sqrt[3]{2} + 0\sqrt[3]{4}$ by equating coefficients, and solve the resulting set of 3 linear equations in 3 unknowns: it is a linear algebra problem. Another method is to solve the polynomial equation $(5 + t - 8t^2)u(t) + (t^3 - 2)v(t) = 1$ in $\mathbf Q[t]$ and use $u(\sqrt[3]{2})$. 
– KCd Jan 1 '15 at 17:12 I have a fairly clear memory of doing some high-school math problem and ending up with $\frac3{\sqrt3}$, and thinking "I'll go ahead and get the root out of the denominator, even though it never makes the result any more useful." I was surprised to discover that in that case the rationalized result is more useful (it's just $\sqrt3$, of course). Nowadays I have the facility to make good decisions about whether it's more parsimonious to put the root in the numerator or the denominator — I, like you, prefer $\frac1{\sqrt2}$ to $\frac{\sqrt2}2$. But I developed that facility after rationalizing a bunch of denominators, which makes me think that it's perfectly useful as a pedagogical bias. Adding up two fractions with irrational denominators looks like less roots. See $\frac{1}{\sqrt{2}} + \frac{1}{\sqrt{3}} = \frac{\sqrt{3}+\sqrt{2}}{\sqrt{6}}$ vs $\frac{\sqrt{2}}{2} + \frac{\sqrt{3}}{3} = \frac{2\sqrt{3}+3\sqrt{2}}{6}$ • True, although dividing the numerator and denominator of $\frac{\sqrt{3}+\sqrt{2}}{\sqrt{6}}$ by $\sqrt{3}$ would produce a fraction with just two square roots. Nonetheless, your method generalizes better. – Reinstate Monica Dec 29 '14 at 19:51 This is quite related (but not identical) to making the denominator real for complex valued fractions such as \begin{align} \frac1{1+3i} &= \frac{1-3i}{10} \\ &= \frac1{10} - \frac3{10}i \end{align} which is necessary in order to separate the fraction into its real and imaginary part. Of course that can be intermingled with non-rational numbers, e.g. \begin{align} \frac1{\sqrt3-\sqrt7i} &= \frac{\sqrt3+\sqrt7i}{10} \\ &= \frac{\sqrt3}{10} + \frac{\sqrt7}{10}i. \end{align} Expressions get of course more complicated once $\sqrt[3]{\ }$ and the like occurs, and it gets even funnier when you wonder about the real and imaginary parts of something like $\sqrt{3+7i}$... 
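The "realising the denominator" step in the complex examples above can be checked with exact rational arithmetic; a small sketch (the helper name is illustrative):

```python
from fractions import Fraction

def realise(a, b):
    """Return (c, d) with 1/(a + b*i) = c + d*i, for rational a, b not both 0.

    Multiplying by the conjugate gives (a - b*i) / (a^2 + b^2).
    """
    n = Fraction(a) ** 2 + Fraction(b) ** 2  # (a + b*i)(a - b*i) = a^2 + b^2
    return Fraction(a) / n, Fraction(-b) / n

c, d = realise(1, 3)
print(c, d)  # 1/10 -3/10, matching 1/(1+3i) = 1/10 - (3/10)i
```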
Like so many things it is nothing to get particularly obsessed about, but knowing how to rationalise denominators is quite a useful tool to have at one's disposal. In fact more so in the general context of manipulating expressions than just for simplifying numbers. It is based on a small trick that is easy to understand, but which most people would probably not have thought of if it were not taught to them. Notably, I would not like to do without this method when trying to decide whether a rational expression involving a single square root is equal to $0$. Also I think that a similar method (though maybe better called realising than rationalising) is used in the most straightforward proof of the fact that the complex numbers are a field. Rationalizing the denominator (RTD) (a special case of the method of simpler multiples) is useful because it often serves to simplify problems, e.g. by transforming an irrational denominator (or divisor) into a simpler rational one. This can lead to all sorts of simplifications, e.g. below. In this prior question is an example where RTD transforms a limit of indeterminate form into a simple determinate limit by way of cancelling an apparent singularity at $\rm\ x = a\ $ $$\rm \frac{x^2\!-a\sqrt{ax}}{\sqrt{ax}-a} = \frac{x^2\!-a\sqrt{ax}}{\sqrt{ax}-a} \ \frac{\sqrt{ax}+a}{\sqrt{ax}+a} = \frac{ax(x\!-\!a)\!+\!\sqrt{ax}(x^2\!-\!a^2) }{a(x\!-\!a) } = x+(x\!+\!a)\sqrt{\frac{x}{a}}$$ Here's another example from number theory showing how RTD serves to reduce divisibility of algebraic integers to rational integers. Consider the Gaussian integers $\rm\ \mathbb I = \{ m + n\ i\ : \ m,n\in \mathbb Z \}.\,$ As in any ring we define divisibility by $\rm\ a\mid b\ in\ \mathbb I \iff b/a \in \mathbb I\:.\ $ Suppose we wish to know if $\rm\ 2+3\ i\,\mid\, 91\ in\ \mathbb I,\,$ i.e. is $\rm\ w = 91/(2+3\ i)\in \mathbb I\ ?\ $ Now in fact $\rm\:\mathbb I\:$ happens to have a division algorithm which we could apply.
But it is more elementary to simply RTD, which quickly yields $$\rm\ w = 91\ (2-3\ i)/(2^2+3^2) = 7\ (2-3\ i)\$$ so, indeed, $$\rm\: w\in \mathbb I\:.\$$ More generally we can often reduce problems about algebraic numbers to problems about rational numbers by taking norms, traces, etc. In fact this is (roughly) how Kronecker constructed his divisor theory for algebraic integers, see e.g. Harold Edwards: Divisor Theory. We can also "rationalize" to base fields in any algebraic extension, e.g. we can "realize" denominators of complex fractions, which lifts "existence of inverses of elements $$\ne 0\,$$" from $$\mathbb R$$ to $$\mathbb C.\:$$ Namely, since $$\mathbb R$$ is a field, $$\rm\ 0\ne r\in \mathbb R\ \Rightarrow\ r^{-1}\in \mathbb R,\:$$ so with $$\,\alpha' =$$ conjugate of $$\alpha,$$ $$\rm 0\ne\alpha\in\mathbb C\ \ \Rightarrow\ \ 0\ne\alpha\alpha' = r\in \mathbb R\ \ \Rightarrow\ \frac{1}\alpha\, =\, \frac{\alpha'}{\alpha\:\alpha'}\, =\, \frac{\alpha'}r\in\mathbb C$$ Thus $$\,$$ field $$\mathbb R\, \Rightarrow\,$$ field $$\mathbb C\$$ by using the norm $$\rm\:\alpha\to\alpha\!\ \alpha'\:$$ to lift existence of inverses from $$\mathbb R$$ to $$\mathbb C.$$ Calculate $\frac1{\sqrt{3.0000001}-\sqrt3}$ and compare with the answer obtained from the rationalized form $\frac{\sqrt{3.0000001}+\sqrt3}{0.0000001}$. Adjust the number of $0$'s to the precision of the calculator or software used. In Maple with Digits := 10, the first expression gives $3.571428571\cdot10^7$, while the second gives $3.464101644\cdot10^7$. • This really has little to do with rationalisation of denominators. Take the inverses of those fractions, and the argument backfires. What it really says is avoid differences of near-equal values in multiplicative formulas, and one can sometimes obtain that by methods similar to those used to rationalise denominators – Marc van Leeuwen Dec 30 '14 at 13:16 • @MarcvanLeeuwen. 
The question was "So why does this bias against roots in the denominator exist and what is its justification?" My answer says: "to have a better precision in calculations". Of course, sometimes, it will be more interesting to rationalise the numerator, but I think we may extend the question a bit. – Bernard Massé Dec 30 '14 at 16:43 • @MarcvanLeeuwen Maybe this example is not the best, but I am quite convinced that the precision of numerical approximations is at the heart of rationalizing the denominator. Which is easier to compute, $\sqrt{2}/10$ or $1/(5 \sqrt{2})$? If you computed their decimal approximations naïvely, which would give a more reliable result? – Richard D. James Dec 30 '14 at 20:48 • This example is a numerical illustration of the way rationalizing denominators is used to explain the formula for the derivative of $\sqrt{x}$ from the limit definition of the derivative. – KCd Dec 31 '14 at 4:56 Can you compute this $$\left \lfloor \frac{1}{\sqrt{25}-\sqrt{24}} \right \rfloor$$ without a calculator?!! And can you compute $$\left \lfloor \frac{1}{\sqrt{25}-\sqrt{24}}\cdot\frac{\sqrt{25}+\sqrt{24}}{\sqrt{25}+\sqrt{24}} \right \rfloor= \left \lfloor \frac{\sqrt{25}+\sqrt{24}}{25-24} \right \rfloor= \left \lfloor \frac{\sqrt{25}+\sqrt{24}}{1} \right \rfloor=\left \lfloor \sqrt{25}+\sqrt{24}\right \rfloor= \left \lfloor 5+\sqrt{24} \right \rfloor=9$$ Now which one is easy to understand?
This reduces the problem to simply dividing up rationals, which is easily done with straightedge and compass. • I don't think this is correct. The method for dividing numbers with straightedge and compass is exactly the same whether the lengths are rational or irrational. – MJD Jan 2 '15 at 2:18 If you can visualize ${1 \over \sqrt 2}$ as a number representing ratios of sides of an isosceles right-angled triangle in its own right, fine, well and good. But if you wish to find the difference, say, between $1/(\sqrt p - \sqrt q)$ and $1/(\sqrt p + \sqrt q)$, you need to pay toll at the gate of the denominator. Given $\sqrt{2}\approx1.41421356237$, suppose you're challenged to approximately calculate $1/\sqrt{2}$. Now you will find that $1:1.41421356237=100000000000:141421356237=0.7?$, where the calculation of ? isn't easily done. Knowing that $1/\sqrt2=\sqrt2/2$ it becomes a piece of cake: $1.41421356237:2=0.707106781185$.
http://mathhelpforum.com/calculus/147096-sphere-line-intersection.html
# Math Help - Sphere-line intersection 1. ## Sphere-line intersection I want to find the coordinates of where a line projected from inside a sphere intersects the inner surface of the sphere. The sphere is: $4(x-2)^2 + 16(y-4)^2 + (z-5)^2 = 400$ Eye point at: $(13, -3.5, 29)^T$ Viewing direction: $(-2, 3, -16)^T$ At what coordinates does the line from the eye point in direction of the viewing direction hit the surface of the sphere? 2. Originally Posted by posix_memalign I want to find the coordinates of where a line projected from inside a sphere intersects the inner surface of the sphere. The sphere is: $4(x-2)^2 + 16(y-4)^2 + (z-5)^2 = 400$ Eye point at: $(13, -3.5, 29)^T$ Viewing direction: $(-2, 3, -16)^T$ At what coordinates does the line from the eye point in direction of the viewing direction hit the surface of the sphere? A line, in three dimensions, that includes the point $(x_0, y_0, z_0)$ and points in the direction of vector $\langle A, B, C \rangle$ can be written in parametric equations as $x= At+ x_0$, $y= Bt+ y_0$, and $z= Ct+ z_0$. The line from the eye point, (13, -3.5, 29), in direction $\langle -2, 3, -16 \rangle$ has equations $x= -2t+ 13$, $y= 3t- 3.5$, and $z= -16t+ 29$. Replace x, y, and z in the equation of the sphere with those and you get one quadratic equation for t. Solve for t, then use the parametric equations of the line to find the corresponding x, y, and z values. A quadratic equation may have no real solution, one (double) solution, or two solutions. Those correspond to the cases where the line misses the sphere entirely, is tangent to the sphere, or crosses through the sphere. Since the given point is inside the sphere, there will have to be two solutions, one with t negative and one with t positive. Since the parametric equations were set up with t multiplying the direction vector, the correct solution will be the one for the positive t. 3.
Originally Posted by HallsofIvy A line, in three dimensions, that includes the point $(x_0, y_0, z_0)$ and points in the direction of vector $\langle A, B, C \rangle$ can be written in parametric equations as $x= At+ x_0$, $y= Bt+ y_0$, and $z= Ct+ z_0$. The line from the eye point, (13, -3.5, 29), in direction $\langle -2, 3, -16 \rangle$ has equations $x= -2t+ 13$, $y= 3t- 3.5$, and $z= -16t+ 29$. Replace x, y, and z in the equation of the sphere with those and you get one quadratic equation for t. Solve for t, then use the parametric equations of the line to find the corresponding x, y, and z values. A quadratic equation may have no real solution, one (double) solution, or two solutions. Those correspond to the cases where the line misses the sphere entirely, is tangent to the sphere, or crosses through the sphere. Since the given point is inside the sphere, there will have to be two solutions, one with t negative and one with t positive. Since the parametric equations were set up with t multiplying the direction vector, the correct solution will be the one for the positive t. Thanks! I just tried your suggested solution but I got that t = 2.5 or t = 1.5 -- no negative t. I double-checked my calculations; perhaps I still made some miscalculation, or is it possible to get two positive t? 4. Well, you said "a line projected from inside a sphere" so I assumed that you meant that the eye point was inside. It's easy to see that it isn't: $4(13- 2)^2$ alone is larger than 400. And, by the way, that is not a sphere- it is an ellipsoid. 5. Originally Posted by HallsofIvy Well, you said "a line projected from inside a sphere" so I assumed that you meant that the eye point was inside. It's easy to see that it isn't: $4(13- 2)^2$ alone is larger than 400. And, by the way, that is not a sphere- it is an ellipsoid. Ah, I see, sorry for my mistake.
However, does t = 1.5 and t = 2.5 mean that the line intersects the ellipsoid first once on the surface, and then once again from the inside and out (after having passed through the inside of the ellipsoid)? 6. Well, if the line is projected from inside the sphere to the eyepoint, then the point at which it strikes the surface is the point closer to the eyepoint.
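The whole thread can be checked numerically. Substituting the parametric line (eye point and direction $(-2, 3, -16)$ from the original question) into the ellipsoid equation gives a quadratic in $t$ whose roots are exactly $t = 1.5$ and $t = 2.5$, matching the values reported in post 3 (a sketch; the coefficient-sampling trick is mine, not from the thread):

```python
import math

# Eye point and viewing direction from the original question.
p0 = (13.0, -3.5, 29.0)
d = (-2.0, 3.0, -16.0)

def ellipsoid(x, y, z):
    """Left-hand side of 4(x-2)^2 + 16(y-4)^2 + (z-5)^2 = 400."""
    return 4*(x - 2)**2 + 16*(y - 4)**2 + (z - 5)**2

def line(t):
    return tuple(p + t*v for p, v in zip(p0, d))

# Substituting the line into the ellipsoid gives a quadratic a*t^2 + b*t + c = 0.
# Sample the polynomial at t = -1, 0, 1 to recover the coefficients exactly.
f = lambda t: ellipsoid(*line(t)) - 400
c = f(0.0)
a = (f(1.0) + f(-1.0)) / 2 - c
b = (f(1.0) - f(-1.0)) / 2

disc = b*b - 4*a*c
t1 = (-b - math.sqrt(disc)) / (2*a)
t2 = (-b + math.sqrt(disc)) / (2*a)
print(f"quadratic: {a} t^2 + {b} t + {c} = 0")  # 416, -1664, 1560
print(f"t = {t1}, {t2}")                        # 1.5 and 2.5
print("hit points:", line(t1), line(t2))        # (10, 1, 5) and (8, 4, -11)
```

Both roots are positive because the eye point lies outside the ellipsoid: the ray enters the surface at $t = 1.5$ (point $(10, 1, 5)$) and exits at $t = 2.5$ (point $(8, 4, -11)$), which answers the follow-up question in post 5.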
https://kluedo.ub.uni-kl.de/frontdoor/index/index/year/2000/docId/358
About changing the ordering during Knuth-Bendix completion • We will answer a question posed in [DJK91], and will show that Huet's completion algorithm [Hu81] becomes incomplete, i.e. it may generate a term rewriting system that is not confluent, if it is modified so that the reduction ordering used for completion can be changed during completion, provided that the new ordering is compatible with the actual rules. In particular, we will show that this problem may not only arise if the modified completion algorithm does not terminate: even if the algorithm terminates without failure, the generated finite noetherian term rewriting system may be non-confluent. Most existing implementations of the Knuth-Bendix algorithm provide the user with help in choosing a reduction ordering: if an unorientable equation is encountered, then the user has many options, in particular the option to orient the equation manually. The integration of this feature is based on the widespread assumption that, if equations are oriented by hand during completion and the completion process terminates with success, then the generated finite system is a possibly non-terminating but locally confluent system (see e.g. [KZ89]). Our examples will show that this assumption is not true.
http://www.high-frontier.org/key-concept-2-making-earth-orbit-escape-velocity-spaceflight-affordable/
# Making Earth Orbit to Escape Velocity Spaceflight Affordable Making Earth to orbit spaceflight affordable to everyone is only the beginning. The second step is to make getting to escape velocity affordable to everyone. Accelerating to escape velocity is what will allow us to start moving out into the solar system. The whole purpose of making beyond Earth orbit spaceflight affordable to everyone is so that we can start building a spacefaring civilization. A trip to the Moon, a trip to an outpost space station at L2, a trip to a near-Earth asteroid, a trip to Mars: all of them require accelerating to escape velocity and beyond. This is not an insignificant step. A vertical take-off rocket needs to accelerate to 9,100 meters per second in order to make it to low Earth orbit. That includes 7,800 meters per second for low Earth orbit, plus 1,300 meters per second for drag and gravity losses. To go from low Earth orbit to escape velocity requires another 3,230 meters per second of speed. That is over 1/3 of the speed needed to reach low Earth orbit. It takes a lot of propellant to accelerate that much. For example, the Saturn V rocket that America used to go to the Moon in the late 1960s and early 1970s could place 161,600 kilograms in low Earth orbit. The amount of useful payload that it could send to the Moon was 62,300 kilograms. Of the difference, 84,160 kilograms was propellant and 15,140 kilograms was the expendable upper stage and support structure. Even if the upper stage for doing this were made reusable, there would still be the cost of launching all the propellant for another flight into Earth orbit, plus the need for the upper stage to carry enough extra propellant so that it could return to low Earth orbit. None of this is inexpensive. So how do we make low Earth orbit to escape velocity flight affordable? The answer is a Skyhook.
In the same way that the lower end of the Skyhook reduces the velocity that a launch vehicle needs to achieve to reach low Earth orbit, the upper end of a mature Skyhook can accelerate a spacecraft to escape velocity without the need for an upper stage or by burning any of the spacecraft’s onboard propellant.  The power for this comes from the ion propulsion system on the Skyhook.  The Skyhook uses its ion propulsion system to gradually increase its orbital altitude and orbital energy, altitude and energy that it gives to the departing spacecraft when the departing spacecraft releases from the upper end of the Skyhook.  In effect, the Skyhook acts as a reusable energy storage device for both arriving and departing spacecraft that never leaves Earth orbit.  The amount of energy being stored by the Skyhook at any given moment is measured by the height of its orbit.
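The quoted Saturn V figures can be sanity-checked against the Tsiolkovsky rocket equation. The specific impulse below (421 s, roughly that of a hydrogen/oxygen upper stage such as the S-IVB) is an assumption, not a number from the article:

```python
import math

g0 = 9.80665          # standard gravity, m/s^2
isp = 421.0           # assumed upper-stage specific impulse, s

# Masses from the article, in kg.
m_leo = 161_600       # full stack placed in low Earth orbit
m_payload = 62_300    # useful payload sent to the Moon
m_stage = 15_140      # expendable upper stage and support structure
m_prop = 84_160       # propellant burned going from LEO toward escape

m_final = m_payload + m_stage      # what remains after the burn
assert m_final + m_prop == m_leo   # the article's mass budget is self-consistent

delta_v = isp * g0 * math.log(m_leo / m_final)
print(f"delta-v from the rocket equation: {delta_v:.0f} m/s")
```

This comes out to roughly 3,000 m/s, the same order as the 3,230 m/s quoted; the gap is attributable to the assumed specific impulse and to gravity losses during the burn.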
https://www.physicsforums.com/threads/first-order-ode.189918/
# First Order ODE 1. Oct 8, 2007 ### robbondo 1. The problem statement, all variables and given/known data Find all solutions $$x^{2} y y\prime = (y^{2} - 1)^{\frac{3}{2}}$$ 2. Relevant equations 3. The attempt at a solution I know I have to use separation of variables because it isn't linear. so I get $$\frac{ydy}{(y^{2} - 1)^{\frac{3}{2}}} = \frac{1}{dxx^{2}}$$ Now I'm kinda stuck at how to integrate this. I think I'm supposed to use partial fraction expansion, but since there's the 3/2 exponent, I'm confused as to how to go about doing that. Tips??? 2. Oct 8, 2007 ### dashkin111 Check the right hand side of your equation to make sure the dx is in the right spot. Try substitution on the left 3. Oct 8, 2007 ### robbondo Thanks, that helped. So now I substituted take the integral of both sides and I get $$x = (y^{2}-1)^{\frac{1}{2}} + c$$ This isn't matching up with the correct answer though. I took the integral and got -1/x on the left and 1 / - (y^2 - 1)^1/2 + c on the right. So I just took out the negative signs and changed to the reciprocal on the both sides, whatcha think? 4. Oct 8, 2007 ### rock.freak667 Well that would be correct if the constant wasn't there but because it is...you just can't simply invert both sides if you brought $$\frac{1}{\sqrt{y^2-1}} +c$$ to the same base and then invert...that would be correct 5. Oct 8, 2007 ### robbondo ahh I see. I got the correct answer the only thing that confuses me is that in the book the answer says $$y \equiv \pm 1$$... Actually I still don't really know what "equivalent" represents in this class. 6. Oct 8, 2007 ### rock.freak667 well normally for the function to be continuous i thought it would be y not equal to +/- 1 7. Oct 8, 2007 ### Dick In addition to the solutions you get by integrating, the equation has solutions where y' is identically equal to zero. These are them. 8. 
Oct 8, 2007 ### robbondo So if it didn't say "all solutions" I could just say that y=1, but what's the point of having the weird three-line equal sign? 9. Oct 8, 2007 ### Dick It just means y(x)=1 for all x. I.e. y is IDENTICALLY equal to one. Sure, if you don't need all solutions, then you could just pick an easy one.
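The key integration step in the thread, $\int y\,(y^2-1)^{-3/2}\,dy = -(y^2-1)^{-1/2}$ via the substitution $u = y^2 - 1$, can be spot-checked numerically with a finite difference (stdlib only; the check point $y = 2$ is an arbitrary choice):

```python
import math

def antiderivative(y):
    # Claimed antiderivative of y / (y^2 - 1)^(3/2) from the substitution u = y^2 - 1
    return -1.0 / math.sqrt(y*y - 1.0)

def integrand(y):
    return y / (y*y - 1.0)**1.5

# A central finite difference of the antiderivative should match the integrand.
y, h = 2.0, 1e-6
numeric = (antiderivative(y + h) - antiderivative(y - h)) / (2*h)
print(numeric, integrand(y))  # both about 0.385

# The constant solutions y = +/-1 noted at the end of the thread also satisfy
# the ODE: y' = 0 and (y^2 - 1)^(3/2) = 0, so both sides vanish.
assert (1.0**2 - 1.0)**1.5 == 0.0
```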
https://courses.ansys.com/index.php/courses/magnetostatic-material-interactions/lessons/motional-and-transformer-emf-lesson-5/
# Motional and Transformer EMF - Lesson 5 This video lesson shows that if an applied magnetic field changes, it induces a voltage according to Faraday’s law. This voltage is called a transformer electromotive force, or EMF. The voltage produced when a conductor moves in a static magnetic field is called a motional EMF.
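As a minimal numeric illustration of the two cases described above (all values below are made-up examples, not from the lesson): transformer EMF comes from a changing flux through a fixed coil, $\mathcal{E} = -N\,d\Phi/dt$, while motional EMF comes from a conductor of length $L$ moving at speed $v$ through a static field $B$, $\mathcal{E} = BLv$.

```python
# Transformer EMF: flux through an N-turn coil changes at a constant rate.
N = 200                  # turns (assumed)
dphi_dt = 0.005          # Wb/s, rate of change of flux per turn (assumed)
emf_transformer = -N * dphi_dt
print(f"transformer EMF: {emf_transformer:.2f} V")   # -1.00 V

# Motional EMF: a rod of length L slides at speed v through a static field B.
B = 0.5                  # T (assumed)
L = 0.2                  # m (assumed)
v = 3.0                  # m/s (assumed)
emf_motional = B * L * v
print(f"motional EMF: {emf_motional:.2f} V")         # 0.30 V
```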
https://www.datasciencecentral.com/profiles/blogs/two-beautiful-mathematical-results
# Two Beautiful Mathematical Results These results are relatively easy to prove (first-year college-level calculus needed) and could be a good test to refresh your math skills. We posted another simple one (with a probabilistic / number theory flavor) a while back, see here. Prove the following: where g = 0.5772156649... is the Euler–Mascheroni constant. Solution: Let us introduce the function f(x), defined as follows: The answer to the first question is simply -f(-1). How do we compute it? Here is the key: An exact solution is available for this integral (I computed similar integrals as exercises during my last high school year), and the result can be found here. This WolframAlpha tool allows you to automatically make the cumbersome but simple, mechanical, boring computations. The result involves logarithms and Arctan( (2x+1) / SQRT(3) ), which has known values (a fraction of Pi) if x = 0, 1 or -1. To answer the second question, one can use generalized harmonic numbers H(x) with x = 1/3 (see details here) together with the well-known asymptotic expansion for the diverging harmonic series (this is where the Euler–Mascheroni constant appears.) The mechanics behind this are as follows. Consider A = B + C, with Both A and B diverge, but C converges. We are interested in an asymptotic expansion, here, for A. The asymptotic expansion for B is well known: it involves the Euler–Mascheroni constant, and (log n) / 3. The computation for C is linked to the function f(x) as x tends to 1, and brings in the other numbers in the final result: log(3), and Pi / SQRT(3). Note that if we define h(x) as then C = h(1), and h(x) can be computed using a technique and integral very similar to that used for f(x). Generalization: The idea is to decompose a function g(x) that has a Taylor series into three components. It generalizes the well-known decomposition into two components: the even and odd parts of g. It goes as follows: where the a's are the Taylor coefficients.
Comment by victor zurkowski on June 22, 2018 at 4:27pm: It is worth mentioning that the identity $\sum_{k=0}^{\infty} x^{3k} = \frac{1}{1-x^3}$ and the term-by-term integration hold for $x \in (0,1)$. To justify that the evaluation of the series at $x = -1$ still equals the value of the integral, one appeals to Abel's theorem.
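The identity in the comment, $\sum_{k\ge 0} x^{3k} = 1/(1-x^3)$ for $|x| < 1$, is easy to confirm numerically, along with its term-by-term antiderivative $\sum_{k\ge 0} x^{3k+1}/(3k+1)$ matching $\int_0^x dt/(1-t^3)$ (a sketch; the midpoint-rule integration is my own check, not from the post):

```python
# Geometric-series identity from the comment: sum_{k>=0} x^(3k) = 1/(1 - x^3), |x| < 1.
x = 0.5
series = sum(x**(3*k) for k in range(200))
closed_form = 1.0 / (1.0 - x**3)
print(series, closed_form)  # both about 8/7 = 1.142857...

# Term-by-term integration on (0, x): sum_{k>=0} x^(3k+1)/(3k+1) = integral_0^x dt/(1 - t^3).
integrated_series = sum(x**(3*k + 1) / (3*k + 1) for k in range(200))
n = 100_000
h = x / n
midpoint = sum(h / (1.0 - (h * (i + 0.5))**3) for i in range(n))
print(integrated_series, midpoint)  # agree to many digits
```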
https://www.physicsforums.com/threads/simple-inertia-test-setup.175368/
# Simple Inertia Test Setup 1. Jun 28, 2007 ### remz Any ideas on a simple experiment I can set up to calculate the moment of inertia for any object (sanity check only)? Another post on PF proposed using a pendulum but unfortunately did not go into enough detail, and my google skills are obviously lacking tonight. Since... $$J = \frac {mgr \theta} {\frac {d \omega} {dt}}$$ where... J = inertia of object on end of string m = mass of object g = acceleration due to gravity r = distance of object from centre of rotation $$\theta$$ = the angle to which the object is raised ($$\sin \theta \approx \theta$$ for small $$\theta$$) $$\frac {d \omega} {dt}$$ = angular acceleration of object So, in this test setup, the inertia can be deduced through measuring the angular acceleration of the object only, as the numerator is already fully known. Is there a flaw in my proposed test, or can anyone suggest any alternative solutions? Regards, rem Last edited: Jun 28, 2007 2. Jun 28, 2007 ### Staff: Mentor Do the objects have a center of rotation? If so, just make that axis of rotation vertical, and devise a means of applying a known torque and measuring the rotational acceleration. You could use a mass on a string to generate a known force, and translate that downward force into a torque on the shaft of your unknown object.... 3. Jun 29, 2007 ### FredGarvin Take a look at this: Last edited: Dec 20, 2007
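Both approaches in the thread reduce to the same bookkeeping: a known torque divided by a measured angular acceleration. A sketch with made-up measurement numbers (every value below is assumed purely for illustration):

```python
import math

# Pendulum method from the first post: restoring torque m*g*r*theta
# (small-angle approximation) divided by the measured angular acceleration.
m = 0.40        # kg, mass on the end of the string (assumed)
g = 9.81        # m/s^2
r = 0.25        # m, distance from the centre of rotation (assumed)
theta = math.radians(5)   # release angle, small enough that sin(theta) ~ theta
alpha = 1.78    # rad/s^2, measured angular acceleration at release (assumed)

J_pendulum = m * g * r * theta / alpha
print(f"pendulum estimate:     J = {J_pendulum:.4f} kg m^2")

# Known-torque method from the reply: a hanging mass on a string wrapped
# around a shaft of radius r_shaft applies a torque; measure the resulting
# angular acceleration of the unknown object.
m_hanging = 0.10   # kg (assumed)
r_shaft = 0.02     # m (assumed)
alpha_shaft = 3.2  # rad/s^2, measured (assumed)
# Torque balance, ignoring string mass and friction and assuming the hanging
# mass accelerates slowly enough that the string tension is close to m*g:
J_known_torque = m_hanging * g * r_shaft / alpha_shaft
print(f"known-torque estimate: J = {J_known_torque:.4f} kg m^2")
```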
https://www.jiskha.com/display.cgi?id=1125329740
# phys posted by Cheyenne The speed of ocean waves depends on their wavelength λ (in meters) and the gravitational field strength g (in m/s^2) in this way: V = Kλ^p g^q, where K is a dimensionless constant. Find the value of the exponents p and q. Consider the dimensions. Left side, m/s. Right side: (dimensionless)(m)^p * (m/s^2)^q. m/s = m^p * m^q * 1/s^(2q). Consider the exponents of m: 1 = p + q. Consider the exponents of s: -1 = -2q. You should be able to solve. Check my thinking and math.
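Finishing the exponent bookkeeping from the answer above: $p + q = 1$ and $-1 = -2q$ give $q = 1/2$ and $p = 1/2$, i.e. $v = K\sqrt{\lambda g}$. A quick check with exact rational arithmetic:

```python
from fractions import Fraction

# Matching exponents of m and s in  m/s = m^p * (m/s^2)^q:
#   meters:  1 = p + q
#   seconds: -1 = -2q
q = Fraction(1, 2)          # from -1 = -2q
p = 1 - q                   # from 1 = p + q
print(f"p = {p}, q = {q}")  # p = 1/2, q = 1/2

# So v = K * lambda^(1/2) * g^(1/2) = K * sqrt(lambda * g).
# Dimensional sanity check: (m)^(1/2) * (m/s^2)^(1/2) = m/s.
```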
http://arxiv-export-lb.library.cornell.edu/abs/2206.08940
hep-ph # Title: Scalar Dark Matter Production from Preheating and Structure Formation Constraints Abstract: We investigate the out-of-equilibrium production of scalar dark matter (DM) from the inflaton condensate during inflation and reheating. We assume that this scalar couples only to the inflaton via a direct quartic coupling and is minimally coupled to gravity. We consider all possible production regimes: purely gravitational, weak direct coupling (perturbative), and strong direct coupling (non-perturbative). For each regime, we use different approaches to determine the dark matter phase space distribution and the corresponding relic abundance. For the purely gravitational regime, scalar dark matter quanta are copiously excited during inflation resulting in an infrared (IR) dominated distribution function and a relic abundance which overcloses the universe for a reheating temperature $T_\text{reh}>34 ~\text{GeV}$. A non-vanishing direct coupling induces an effective DM mass and suppresses the large IR modes in favor of ultraviolet (UV) modes and a minimal scalar abundance is generated when the interference between the direct and gravitational couplings is maximal. For large direct couplings, backreaction on the inflaton condensate is accounted for by using the Hartree approximation and lattice simulation techniques. Since scalar DM candidates can behave as non-cold dark matter, we estimate the impact of such species on the matter power spectrum and derive the corresponding constraints from the Lyman-$\alpha$ measurements. We find that they correspond to a lower bound on the DM mass of $\gtrsim 3\times 10^{-4} \, \rm{eV}$ for purely gravitational production, and $\gtrsim 20 \, \rm {eV}$ for direct coupling production. We discuss the implications of these results.
Comments: 58 pages, 18 figures Subjects: High Energy Physics - Phenomenology (hep-ph); Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th) Report number: DESY-22-104 Cite as: arXiv:2206.08940 [hep-ph] (or arXiv:2206.08940v1 [hep-ph] for this version) ## Submission history From: Sarunas Verner [v1] Fri, 17 Jun 2022 18:00:00 GMT (5643kb,D)
http://math.stackexchange.com/questions/101609/general-aggregation-functions
# General aggregation functions

Is there a way to find all or some functions which "aggregate" numbers and are not isomorphic to addition? I mean functions which are commutative and associative:

$f(x,y)=f(y,x)$

$f(x,f(y,z))=f(f(x,y),z)$

Do you know examples?

EDIT: So I want to exclude trivial solutions which are isomorphic to addition: $f(x,y)=g(h(x)+h(y))$

-

Multiplication. Perhaps more interesting, $$f(x,y)=x^{\log y}$$ which is defined for positive $x$ and $y$. The trick is to note that this is $e^{\log x\log y}$, and this makes it easy to prove the properties. Another example is $$f(x,y)=\sqrt[3]{x^3+y^3}$$

EDIT: For an example which is "not isomorphic to addition," I think $$f(x,y)=\max(x,y)$$ will do.

-

But these functions are isomorphic to addition. I'll add an explanation to my question... :) – Gerenuk Jan 23 '12 at 12:54

Hmm, max() isn't "exactly" isomorphic to addition, but in a way it is still a limit and therefore not so interesting for me :( Since $\max(x,y)=\lim_{k\to\infty} \sqrt[k]{x^k+y^k}$ – Gerenuk Jan 24 '12 at 12:53

Looks like Hilbert's 13th. The answer is no.

-

Could you explain? – Trevor Wilson Jan 24 '13 at 0:53

@TrevorWilson (en.wikipedia.org/wiki/Hilbert's_thirteenth_problem) probably explains better. The actual Arnold's proof is very instructive, but I don't have the link in English. – user58697 Jan 24 '13 at 19:04

I read that, but I didn't see its significance for the problem at hand. I think you should add some explanation to your answer. – Trevor Wilson Jan 24 '13 at 19:20
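A quick numerical sanity check of the proposed examples (my own sketch, not from the thread): both $f(x,y)=e^{\log x\log y}$ and $\max$ can be spot-checked for commutativity and associativity on sample inputs.

```python
import math
import itertools

def f_exp(x, y):
    # f(x, y) = x**log(y) = exp(log(x) * log(y)), defined for positive x, y
    return math.exp(math.log(x) * math.log(y))

def f_max(x, y):
    return max(x, y)

def is_comm_assoc(f, samples, tol=1e-9):
    """Numerically spot-check commutativity and associativity on sample triples."""
    for x, y, z in itertools.product(samples, repeat=3):
        if abs(f(x, y) - f(y, x)) > tol:
            return False
        if abs(f(x, f(y, z)) - f(f(x, y), z)) > tol:
            return False
    return True
```

Note that this check cannot detect isomorphism to addition: $f_{\exp}$ is exactly of the excluded form $g(h(x)+h(y))$ with $h=\log\log$ and $g=\exp\exp$ (on $x>1$), which is the asker's point.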
http://mathoverflow.net/questions/38738/a-question-on-the-prime-divisors-of-p-1?answertab=active
# A question on the prime divisors of p-1

For each positive integer n we may define the convergent sum $$s(n)=\sum_{p}\frac{(n,p-1)}{p^2}$$ where the summation is over primes p and $(a,b)$ denotes the greatest common divisor of a and b. It is immediate to deduce that s(n) is bounded on average: using $\sum \limits_{d|a, d|b}\phi(d)=(a,b)$ and inverting the order of summation we get $s(n)=\sum_{d|n}\phi(d) a_d$ where $a_d=\sum_{p \equiv 1 \pmod d}p^{-2}$. Ignoring the fact that we sum over primes we get the bound $a_d \ll \frac{1}{d^2}$, which leads to $$\sum_{n \leq x}s(n) \ll x \sum_{d \geq 1}\frac{\phi(d)a_d}{d}=O(x)$$ and $$s(n) \ll \exp(\sum_{p|n}1)$$ The last inequality means that $s(n)$ stays bounded if $\omega(n)$ is bounded. In the other direction, it seems fair to expect that $s(n)$ grows to infinity if $\omega(n)$ is large in some quantitative sense, say $\omega(n) \geq (1+\epsilon) \log \log n$. Taking into account that the contribution to the sum $s(n)$ of the primes $p$ that satisfy $(p-1,n) \leq \frac{p}{\log p}$ is bounded, since $\sum_{p}\frac{1}{p \log p}$ converges, we see that $s(n)=s'(n)+O(1)$ where $s'(n)=\sum_{(p-1,n)>\frac{p}{\log p}} \frac{(n,p-1)}{p^2}$. We are therefore led to the question as to whether a condition of the form $\frac{\omega(n)}{\log \log n}-1 \gg 1$ can guarantee that $s'(n) \to +\infty$.

Are there any non-trivial techniques that can be used to answer this question?

-

Your guess that $s(n)$ gets large if $\omega(n)$ is large is not correct. It is possible for $n$ to have many prime factors, and for $s(n)$ still to be small. This can be seen from some of the work in your question. As you note, $s(n) =\sum_{d|n} \phi(d) a_d$ where $a_d =\sum_{p\equiv 1\pmod d} p^{-2} \ll 1/d^2$.
Therefore $$s(n) \ll \sum_{d|n} \frac{1}{d} \le \prod_{p|n} \Big(1-\frac 1p\Big)^{-1}.$$ If now every prime factor of $n$ exceeds $\log n$, then (since $\omega(n) \le \log n$ trivially) we have $$s(n) \ll \Big(1-\frac{1}{\log n} \Big)^{-\log n} \ll 1.$$ Thus $n$ can have about $\log n/\log \log n$ prime factors, all larger than $\log n$, and still $s(n)$ would be $\ll 1$.

-

A few more ideas: using the Chebyshev upper bound, by partial summation we have $\sum_{p>y}p^{-2}=O(\frac{1}{y \log y})$ and therefore we see that $s(n)=\sum_{p \leq \frac{n}{\log n}}\frac{(p-1,n)}{p^2}+O(1).$ Furthermore, by the equality $\sum_{p \leq x}p^{-1}=\log \log x +A +O(\frac{1}{\log x})$ we get $$s(n)=\sum_{p \leq n^{1/3}}\frac{(p-1,n)}{p^2}+O(1)$$ and one can make this more accurate. There is something in the expression $s(n)=\sum_{d|n}\phi(d)a_d$ that is linked to Linnik's constant (or the Elliott-Halberstam Conjecture). In particular, using $a_d=\sum_{p\equiv 1 \pmod d,\ p>y} p^{-2} \leq d^{-2}\sum_{m>y/d}m^{-2}$ one can deduce that $$s(n)=\sum_{d|n}\phi(d) a'(d)+O(1)$$ where $$a'(d)=\sum_{p \leq d \log d (\log \log d)^2,\ p\equiv 1 \pmod d}\frac{1}{p^2}$$ That seems to suggest that for each $n$ for which $s(n)$ is quite large, for many divisors $d|n$ there might be many primes $p\equiv 1 \pmod d$ in the interval $[d, d \log d (\log \log d)^{2}]$, and conversely, but I haven't been able to establish a clear connection between these two facts. To this end we may compute the mean values $\sum_{n \leq x} s^{2k}(n), k \geq 0$, which is quite straightforward. Does all this set-up remind you of anything I could look up?

-

Are you just trying to show $s(n)$ is unbounded? And do you insist on a non-trivial technique? Let $n=m!$; then $s(n)>\sum_{p\lt m}\frac{p-1}{p^2}=\sum_{p\lt m}\frac{1}{p}+O(1)$ and of course the sum diverges.

-

(corrected the typo log log n, thanks!) Your choice of n shows that $s(n) \gg \log \log \log n$ infinitely often.
Choosing $n=\prod_{p \leq m}(p-1)$ we deduce that $s(n) \gg \log \log n$ infinitely often. But my question was of another nature: is it true that $s(n) \to \infty$ for each sequence of integers n that have a large number of prime divisors (not just for a particular sequence of the form $m!$ or $\prod_{p \leq y}(p-1)$)? – Captain Darling Sep 15 '10 at 12:13
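A short numerical experiment (my own sketch, not part of the thread) for computing partial sums of $s(n)$ over primes up to a cutoff, which makes it easy to compare $n$ with few versus many small prime divisors:

```python
from math import gcd

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def s_partial(n, limit=10**4):
    """Partial sum of s(n) = sum_p gcd(n, p-1) / p^2 over primes p <= limit."""
    return sum(gcd(n, p - 1) / p**2 for p in primes_up_to(limit))
```

For $n=1$ this is just $\sum_p p^{-2} \approx 0.4522$, while a highly composite $n$ such as $720720 = 2^4\cdot 3^2\cdot 5\cdot 7\cdot 11\cdot 13$ gives a much larger value, since $\gcd(n, p-1)$ is large for many small primes.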
https://chem.libretexts.org/Courses/UW-Whitewater/Chem_260%3A_Inorganic_Chemistry_(Girard)/03%3A_Coordination_Chemistry/3.07%3A_Structural_Isomers-_Coordination_Isomerism_in_Transition_Metal_Complexes
# 3.7: Structural Isomers- Coordination Isomerism in Transition Metal Complexes

Coordination isomerism occurs in compounds containing complex anionic and cationic parts and can be viewed as the interchange of one or more ligands between the cationic complex ion and the anionic complex ion. For example, $$\ce{[Co(NH3)6][Cr(CN)6]}$$ is a coordination isomer with $$\ce{[Cr(NH3)6][Co(CN)6]}$$. Alternatively, coordination isomers may be formed by switching the metals between the two complex ions, like $$\ce{[Zn(NH3)4][CuCl4]}$$ and $$\ce{[Cu(NH3)4][ZnCl4]}$$.

Exercise $$\PageIndex{1}$$

Are $$\ce{[Cu(NH3)4][PtCl4]}$$ and $$\ce{[Pt(NH3)4][CuCl4]}$$ coordination isomers?

Solution

Here, both the cation and anion are complex ions. In the first isomer, $$\ce{NH3}$$ is attached to the copper and the $$\ce{Cl^{-}}$$ are attached to the platinum. In the second isomer, they have swapped. Yes, they are coordination isomers.
Exercise $$\PageIndex{2}$$

What is one coordination isomer of $$\ce{[Co(NH3)6] [Cr(C2O4)3]}$$?

Solution

Coordination isomers involve swapping species from the inner coordination sphere of one metal (e.g., the cation) to the inner coordination sphere of a different metal (e.g., the anion) in the compound. One isomer completely swaps the ligand spheres, e.g., $$\ce{[Co(C2O4)3] [Cr(NH3)6]}$$. Alternative coordination isomers are $$\ce{ [Co(NH3)4(C2O4)] [Cr(NH3)2(C2O4)2]}$$ and $$\ce{ [Co(NH3)2(C2O4)2] [Cr(NH3)4(C2O4)]}$$.

## Contributors and Attributions

• The Department of Chemistry, University of the West Indies

3.7: Structural Isomers- Coordination Isomerism in Transition Metal Complexes is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
https://groupprops.subwiki.org/wiki/Matrix_exponential
# Matrix exponential

## Definition

### Definition for a topological field

Suppose $K$ is a topological field. The matrix exponential, denoted $\exp$, is defined as the map from (a suitable subset of) the set of all $n \times n$ matrices over $K$, denoted $M(n,K)$, to the set of invertible $n \times n$ matrices over $K$, i.e., the general linear group $GL(n,K)$. It is defined as:

$\exp(X) := \sum_{k=0}^\infty \frac{X^k}{k!} = I + X + \frac{X^2}{2!} + \frac{X^3}{3!} + \dots$

More formally, it is the limit of the partial sums:

$\exp(X) := \lim_{m \to \infty} \sum_{k=0}^m \frac{X^k}{k!}$

where the limit is taken entry-wise on the matrices with respect to the field topology. Note the following facts:

• For the field of real numbers as well as the field of complex numbers (equipped with the usual topologies), the matrix exponential is defined for all matrices.
• For the field of $p$-adic numbers, the matrix exponential is defined for all matrices in which all entries are $p$-multiples of elements of the $p$-adic integers (for $p$ odd). For $p = 2$, we need all entries to be 4 times elements of the $2$-adic integers.

### Definition for nilpotent matrices

Suppose $K$ is any field and $X$ is an $n \times n$ nilpotent matrix over $K$ with $X^m = 0$ for some $m$. Suppose further that the characteristic of $K$ is either equal to zero or at least equal to $m$. Then, we define:

$\exp X = \sum_{k=0}^{m-1} \frac{X^k}{k!} = I + X + \frac{X^2}{2!} + \dots + \frac{X^{m-1}}{(m-1)!}$
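For a nilpotent matrix the series terminates, so the exponential can be computed exactly in finitely many terms. A sketch in Python (my own illustration, using exact rational arithmetic to match the finite-sum definition above):

```python
from fractions import Fraction

def mat_mul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm_nilpotent(X, m):
    """exp(X) = sum_{k=0}^{m-1} X^k / k! for a nilpotent matrix with X^m = 0.

    Exact over the rationals: entries of X should be ints or Fractions."""
    n = len(X)
    result = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]  # running term X^k / k!, starts at I
    for k in range(1, m):
        term = mat_mul(term, X)
        term = [[entry / k for entry in row] for row in term]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# Example: strictly upper triangular => nilpotent with X^3 = 0,
# so exp(X) = I + X + X^2/2.
X = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
E = expm_nilpotent(X, 3)
```

Here `E` is the unipotent matrix with ones on the diagonal and superdiagonal and $1/2$ in the corner, matching $I + X + X^2/2$.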
http://help.solidworks.com/2018/english/SolidWorks/motionstudies/HIDD_DVE_SIM_DAMPERS.htm
# Damper PropertyManager

To open the Damper PropertyManager: Click Damper (MotionManager toolbar).

## Damper Type

Linear Damper (Motion Analysis only). Represents forces acting between two parts over a distance and along a particular direction. You can specify the location of the damper on two parts. The Motion Analysis study:

• Calculates the damping forces based on the relative velocity between the locations on the two parts
• Applies the action force to the first part you select, the action body
• Applies an equal and opposite reaction force along the line of sight of the second part you select, the reaction body

Torsional Damper (Motion Analysis only). Rotational damper applied between two components about a specific axis. The Motion Analysis study:

• Calculates the damping moments based on the angular velocity between the two parts about the specified axis
• Applies an action moment about the specified axis to the first part you select
• Applies an equal and opposite reaction moment to the second part you select

## Damper Parameters

Linear dampers selection box: Lists the pair of features defining the damper endpoints.

Torsional dampers first selection box: Lists the feature defining one end of the damper and the torque direction. Select a second feature only to change the torque direction.

Torsional dampers second selection box: Lists an optional second feature that defines the damper. Leave this selection empty to attach the damper to ground.

Exponent of Damper Force Expression: Based on the Functional Expressions for Dampers.

Damping Constant: Based on the Functional Expressions for Dampers.

• Linear damper: `- c*v**n`
• Torsional damper: `- ct*omega**n`

`v` is the current relative velocity between parts at the attachment points. `omega` is the current angular velocity between the parts, about the user-defined axis. `c` is the Linear Damping Constant. `ct` is the Torsional Damping Constant. `n` is the Exponent. For example, if the damper force is `-c*v**2`, then `n = 2`.
Valid options are `<-4,-3,-2,-1,1,2,3,4>`.
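As a sketch of how such an expression can be evaluated (my own illustration, not SolidWorks code; I assume the common motion-solver convention that the force always opposes the current motion, i.e. `v**n` is treated as `sign(v) * |v|**n`, which matters for even exponents):

```python
import math

def damper_force(c, v, n):
    """Damping force -c * v**n.

    Assumes the solver convention that the force opposes motion:
    v**n is interpreted as sign(v) * |v|**n (relevant for even n,
    and well-defined for the negative exponents -1..-4 when v != 0).
    """
    if v == 0:
        return 0.0
    return -c * math.copysign(abs(v) ** n, v)
```

With `c = 2` and `v = 3` the linear damper (`n = 1`) yields a force of `-6`, opposing the positive velocity; reversing the velocity reverses the force.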
http://math.stackexchange.com/questions/251646/splitting-of-the-tangent-bundle-of-a-vector-bundle/251668
# Splitting of the tangent bundle of a vector bundle

Let $\pi:E\to M$ be a rank $k$ vector bundle over the (compact) manifold $M$ and let $i:M\hookrightarrow E$ denote the zero section. I'm interested in a splitting of $i^*(TE)$, the restriction of the tangent bundle $TE$ to the zero section. Intuitively I would guess that one could show the following: $$i^*(TE)\cong TM\oplus E$$ Is this true? If so, how does the proof work? Any details and references are appreciated!

## 1 Answer

The morphism $\pi:E\to M$ (which is a submersion) induces a surjective tangent morphism $T\pi: TE\to \pi^*TM\to 0$ whose kernel is (by definition) the vertical tangent bundle $T_vE$. There results the exact sequence of bundles on $E$ $$0\to T_vE\to TE\stackrel {T\pi}{\to} \pi^*TM\to 0$$ Pulling back that exact sequence to $M$ via the embedding $i$ yields the exact sequence of vector bundles on $M$: $$0\to E\to TE\mid_M \to TM\to 0 \quad (\bigstar)$$ The hypothesis that $M$ is compact is irrelevant to what precedes. However if $M$ is paracompact, the displayed sequence $(\bigstar)$ splits and you may write $$TE\mid_M \cong E\oplus TM$$ Since however the splitting of $(\bigstar)$ is not canonical, I do not recommend this transformation of the preferable (because intrinsic) exact sequence $(\bigstar)$.

-

Thanks for this great answer! I'm a bit unclear on a few points: 1. How is the map $T\pi$ defined? 2. Why is $E$ the pullback of $T_vE$ under $i$? (3. Why is paracompactness needed for the sequence to split?) – Dave Dec 5 '12 at 17:29

1. $T\pi$ is the disjoint union for $e\in E$ of the tangent maps $D_e\pi: T_eE\to T_{\pi(e)}M.\quad$ 2. This is a bit involved. A key ingredient is the *canonical* identification of the tangent space at any point $v$ of a vector space $V$ with that vector space: $T_v V=V.\quad$ 3. You can show that any short exact sequence of vector bundles splits with the help of a Riemannian metric, and such a metric always exists on a paracompact manifold.
– Georges Elencwajg Dec 5 '12 at 21:17

Thank you! I still need to think some more about the second point. Do you happen to know a reference by any chance? – Dave Dec 5 '12 at 22:26

Dear Dave, you might find what you want in Spivak's A Comprehensive Introduction to Differential Geometry, Vol. 1, to which I have no access right now. Don't forget to check the exercises. – Georges Elencwajg Dec 5 '12 at 22:47

I will take a look. Thank you very much. – Dave Dec 5 '12 at 23:13
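Regarding point 2 above, a sketch (my own, using the canonical identification mentioned in the comments) of why the vertical bundle restricts to $E$ on the zero section:

```latex
% For m in M and e = i(m) the corresponding point of the zero section,
% the vertical tangent space at e is the tangent space to the fiber E_m:
\[
(T_v E)_{i(m)} \;=\; T_{i(m)}\bigl(E_m\bigr) \;\cong\; E_m,
\]
% using the canonical identification T_x V \cong V of the tangent space
% of a vector space V at any point x with V itself. Letting m range over
% M assembles these into the bundle isomorphism  i^*(T_v E) \cong E.
```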
http://mathhelpforum.com/advanced-statistics/36153-monte-carlo-importance-sampling.html
# Math Help - Monte Carlo importance sampling

1. ## Monte Carlo importance sampling

I was studying Monte Carlo importance sampling and stopped to make a simulation to see if I understood. Then, I tried to simulate this:

f(x) = absolute( (polynomial function) * exp(-(x^2)/2) )

I tried a constant * exp(-(x^2)/2) as the proposal, but it isn't a really good choice, as the tails won't be covered and the variance would be huge.

Can you give me an idea? Thanks

2. Changed the thread to be more comprehensible
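One standard fix (my own sketch, not from the thread): keep a Gaussian proposal but widen it, e.g. $N(0,\sigma^2)$ with $\sigma > 1$. Then the importance weights $f/q \propto |\mathrm{poly}(x)|\,e^{-x^2(1-1/\sigma^2)/2}$ are bounded, since the leftover Gaussian factor dominates any polynomial in the tails.

```python
import math
import random

def importance_estimate(poly, sigma=2.0, n_samples=200_000, seed=0):
    """Estimate I = ∫ |poly(x)| exp(-x^2/2) dx by importance sampling.

    Proposal: N(0, sigma^2) with sigma > 1, so its tails dominate the
    integrand's Gaussian factor and the weights stay bounded.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(0.0, sigma)
        # proposal density q(x) = exp(-x^2 / (2 sigma^2)) / (sigma sqrt(2 pi))
        q = math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))
        f = abs(poly(x)) * math.exp(-x * x / 2)
        total += f / q
    return total / n_samples
```

Sanity checks: with `poly = lambda x: 1.0` the integral is $\sqrt{2\pi} \approx 2.507$, and with `poly = lambda x: x` it is $\int |x| e^{-x^2/2}\,dx = 2$.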
http://mathhelpforum.com/discrete-math/173412-needing-some-help-congruences.html
# Thread: Needing some help with congruences

1. ## Needing some help with congruences

Solve the following set of linear congruences:

x ≡ 4 (mod 24)
x ≡ 7 (mod 11)

Ok, now I know that we can say that x = 24k + 4 and then plug this into the second congruence to get 24k + 4 ≡ 7 (mod 11). My teacher wrote that 2k ≡ 3 ≡ 14 (mod 11), so k ≡ 7 (mod 11); however, I do not see the reasoning behind this at all. I keep wanting to solve it like you would a general equation: in my head 24k + 4 ≡ 7 (mod 11) becomes 24k ≡ 3 (mod 11), and then you would divide everything by 24 and so on to solve for k. But this is obviously not how the problem is done. Can someone please clearly explain just how exactly the value for k was obtained?

2. Originally Posted by steph3824 (quoting the question above)

Google "Chinese Remainder Theorem" and apply the proof here: Chinese remainder theorem - Wikipedia, the free encyclopedia

Tonio
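The teacher's step follows because 24 ≡ 2 (mod 11): reducing 24k + 4 ≡ 7 (mod 11) gives 2k ≡ 3 (mod 11), and since 3 ≡ 14 (mod 11) and gcd(2, 11) = 1, both sides may be halved to get k ≡ 7 (mod 11). A brute-force check of this (my own sketch, not from the thread):

```python
def solve_crt(a1, m1, a2, m2):
    """Solve x ≡ a1 (mod m1) and x ≡ a2 (mod m2) for coprime moduli by
    scanning the candidates x = a1 + m1*k for k = 0, ..., m2 - 1."""
    for k in range(m2):
        x = a1 + m1 * k
        if x % m2 == a2 % m2:
            return x
    raise ValueError("no solution; are the moduli coprime?")

# x ≡ 4 (mod 24) and x ≡ 7 (mod 11): k ≡ 7 (mod 11), so x = 24*7 + 4 = 172.
solution = solve_crt(4, 24, 7, 11)
```

The scan finds k = 7 and hence x = 172, which indeed leaves remainder 4 modulo 24 and remainder 7 modulo 11; all solutions differ by multiples of 24 · 11 = 264.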
https://www.flexiprep.com/NCERT-Exemplar-Solutions/Biology/Class-11/NCERT-Class-11-Biology-Exemplar-Chapter-11-Transport-In-Plants-Part-4.html
# NCERT Class 11-Biology: Chapter – 11 Transport in Plants Part 4 (For CBSE, ICSE, IAS, NET, NRA 2022)

Question 11: Given below is a table. Fill in the gaps

| Property | Simple diffusion | Facilitated transport | Active transport |
|---|---|---|---|
| i. Highly selective | ________ | Yes | ________ |
| ii. Uphill transport | ________ | ________ | Yes |
| iii. Requires ATP | ________ | ________ | ________ |

Answer:

| Property | Simple diffusion | Facilitated transport | Active transport |
|---|---|---|---|
| i. Highly selective | No | Yes | Yes |
| ii. Uphill transport | No | No | Yes |
| iii. Requires ATP | No | No | Yes |

Question 12: Define water potential and solute potential.

The kinetic energy of water is called water potential. Water potential reduces when a solute is dissolved in it. The magnitude of the lowering of water potential because of a solute is called solute potential.

Question 13: Why is solute potential always negative? Explain.

Water potential is the sum of solute potential and pressure potential. So when a solute is dissolved in water, the water potential of pure water decreases or starts assuming a negative value. Solute potential, also known as osmotic potential, is the potential of a solution that allows water to enter the solution by diffusion or osmosis due to the presence of the solute in it. Pressure potential is the hydrostatic pressure that is exerted on water present in a cell. This usually has a positive value.

Question 14: An onion peel was taken and

a. Placed in salt solution for five minutes.
b. After that it was placed in distilled water.

When seen under the microscope, what would be observed in a and b?

(a) When the onion epidermal peel was placed in salt solution for five minutes, the cells would have shrunk when seen under the microscope, because the salt solution is hypertonic, causing water to move out of the cell and thus leading to exosmosis.
(b) After that, when it was placed in distilled water, the cell regains its turgidity as it absorbs water, and deplasmolysis occurs.

Question 15: Differentiate between Apoplast and Symplast pathways of water movement. Which of these would need active transport?

| Property | Apoplast | Symplast |
|---|---|---|
| Composition | Consists of non-living parts of the plant | Consists of living parts of the plant |
| Water diffusion by | Water diffusion occurs by passive diffusion | Water diffusion occurs by osmosis |
| Resistance to movement of water | Resistance is less to water movement | Resistance is more to water movement |

Question 16: How does most of the water move within the root?

Water moves from the soil to the roots via the process of osmosis. The water potential in the soil is more than in the cytoplasm of the root hair, so water passes across the semi-permeable membrane of the root hair cell into the root via osmosis. The water is then passed on to the xylem vessels, where it either travels through the cortex or goes through the cell walls. The water keeps moving upwards through the xylem vessels by diffusion due to the water potential gradient present. Upon reaching the leaves, the water diffuses into the mesophyll cells, then into the spaces between the cells, after which it vaporizes out through the stomata via the process of transpiration (loss of water from the aerial parts of the plant in the form of water vapour). This whole process is driven by capillarity and root pressure; the driving force is the water potential gradient that is established.

Question 17: Give the location of the casparian strip and explain its role in water movement.

Casparian strips are situated in the endodermal cell walls (radial and transverse) of plant roots. They prevent movement of water from the pericycle to the cortex, thus promoting and establishing a positive hydrostatic pressure. Casparian strips block the apoplastic pathway, due to which water has to enter the symplastic pathway.
Question 18: Differentiate between guttation and transpiration.
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-mathematics-for-calculus-7th-edition/chapter-1-section-1-1-real-numbers-1-1-exercises-page-10/16
## Precalculus: Mathematics for Calculus, 7th Edition

The expression shows that multiplying the first term, (x + a), by each of the elements of the second term, x and b, yields the same result as multiplying the first term by the sum of those elements. This means the expression uses the distributive property. The distributive property can be used to expand the expression even further:

(x + a)(x + b) = (x + a)x + (x + a)b

(x + a)x + (x + a)b = $x^{2}$ + ax + xb + ab
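A quick numeric check of the two expansion steps (my own sketch): since the identity holds for all numbers, it must hold for every sampled triple.

```python
def expand_check(x, a, b):
    """Check that (x + a)(x + b) equals both the distributed form
    (x + a)x + (x + a)b and the fully expanded x^2 + ax + xb + ab."""
    product = (x + a) * (x + b)
    distributed = (x + a) * x + (x + a) * b
    expanded = x**2 + a * x + x * b + a * b
    return product == distributed == expanded

# Exhaustive over small integer triples; any counterexample would return False.
all_ok = all(
    expand_check(x, a, b)
    for x in range(-3, 4)
    for a in range(-3, 4)
    for b in range(-3, 4)
)
```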
http://lptms.u-psud.fr/en/activites/publication/articles-scientifics/publications-2012/?ajaxCalendar=1&mo=6&yr=2022
# Publications 2012 • ## A two-dimensional one component plasma and a test charge : polarization effects and effective potential ### G. Téllez 1, E. Trizac 2 #### Journal of Statistical Physics 146 (2012) 832-849 We study the effective interactions between a test charge Q and a one-component plasma, i.e. a complex made up of mobile point particles with charge q, and a uniform oppositely charged background. The background has the form of a flat disk, in which the mobile charges can move. The test particle is approached perpendicularly to the disk, along its axis of symmetry. All particles interact by a logarithmic potential. The long and short distance features of the effective potential --the free energy of the system for a given distance between Q and the disk-- are worked out analytically in detail. They crucially depend on the sign of Q/q, and on the global charge borne by the discotic complex, that can vanish. While most results are obtained at the intermediate coupling Gamma = beta q^2 = 2 (beta being the inverse temperature), we have also investigated situations with stronger couplings: Gamma=4 and 6. We have found that at large distances, the sign of the effective force reflects subtle details of the charge distribution on the disk, whereas at short distances, polarization effects invariably lead to effective attractions. • 1. Departamento de Fisica, • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Aging of rotational diffusion in colloidal gels and glasses ### S. Jabbari-Farouji 1, 2, G. H. Wegdam 2, Daniel Bonn 3 #### Physical Review E 86 (2012) 041401 We study the rotational diffusion of aging Laponite suspensions for a wide range of concentrations using depolarized dynamic light scattering. The measured orientational correlation functions undergo an ergodic to non-ergodic transition that is characterized by a concentration-dependent ergodicity-breaking time. 
We find that the relaxation times associated with the rotational degree of freedom as a function of waiting time, when scaled with their ergodicity-breaking time, collapse onto two distinct master curves. These master curves are similar to those previously found for the translational dynamics; the two different classes of behavior were attributed to colloidal gels and glasses. Therefore, the aging dynamics of the rotational degree of freedom provides another signature of the distinct dynamical behavior of colloidal gels and glasses. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 2. University of Amsterdam Van der Waals-Zeeman Institute (VAN DER WAALS-ZEEMAN INSTITUTE), University of Amsterdam • 3. Laboratoire de Physique Statistique de l'ENS (LPS), CNRS : UMR8550 – Université Paris VI - Pierre et Marie Curie – Université Paris VII - Paris Diderot – Ecole Normale Supérieure de Paris - ENS Paris • ## Application of a trace formula to the spectra of flat three-dimensional dielectric resonators ### S. Bittner 1, E. Bogomolny 2, B. Dietz 3, M. Miski-Oglu 3, A. Richter 1, 4 #### Physical Review E 85 (2012) 026203 The length spectra of flat three-dimensional dielectric resonators of circular shape were determined from a microwave experiment. They were compared to a semiclassical trace formula obtained within a two-dimensional model based on the effective index of refraction approximation, and good agreement was found. It was necessary to take into account the dispersion of the effective index of refraction for the two-dimensional approximation. Furthermore, small deviations between the experimental length spectrum and the trace formula prediction were attributed to the systematic error of the effective index of refraction approximation.
In summary, the methods developed in this article enable the application of the trace formula for two-dimensional dielectric resonators also to realistic, flat three-dimensional dielectric microcavities and microlasers, allowing for the interpretation of their spectra in terms of classical periodic orbits. • 1. Institut für Kernphysik, • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 3. Institut für Kernphysik, • 4. ECT, ECT • ## Arithmetic area for m planar Brownian paths ### Jean Desbois 1, Stephane Ouvry 1 #### Journal of Statistical Mechanics: Theory and Experiment (2012) P050005 We pursue the analysis made in [1] on the arithmetic area enclosed by m closed Brownian paths. We pay particular attention to the random variable S_{n1,n2,...,nm}(m), which is the arithmetic area of the set of points, also called winding sectors, enclosed n1 times by path 1, n2 times by path 2, ..., nm times by path m. Various results are obtained in the asymptotic limit m->infinity. A key observation is that, since the paths are independent, one can use in the m paths case the SLE information, valid in the 1-path case, on the 0-winding sectors arithmetic area. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Bose-Einstein Condensation of a Gaussian Random Field in the Thermodynamic Limit ### Philippe Mounaix 1, Satya N. Majumdar 2, Abhimanyu Banerjee 3 #### Journal of Physics A: Mathematical and Theoretical 45 (2012) 115002 We derive the criterion for the Bose-Einstein condensation (BEC) of a Gaussian field $\phi$ (real or complex) in the thermodynamic limit. The field is characterized by its covariance function and the control parameter is the intensity $u=\|\phi\|_2^2/V$, where $V$ is the volume of the box containing the field.
We show that for any dimension $d$ (including $d=1$), there is a class of covariance functions for which $\phi$ exhibits a BEC as $u$ is increased through a critical value $u_c$. In this case, we investigate the probability distribution of the part of $u$ contained in the condensate. We show that depending on the parameters characterizing the covariance function and the dimension $d$, there can be two distinct types of condensate: a Gaussian-distributed 'normal' condensate with fluctuations scaling as $1/\sqrt{V}$, and a non-Gaussian-distributed 'anomalous' condensate. A detailed analysis of the anomalous condensate is performed for a one-dimensional system ($d=1$). Extending this one-dimensional analysis to exactly the point of transition between normal and anomalous condensations, we find that the condensate at the transition point is still Gaussian-distributed but with anomalously large fluctuations scaling as $\sqrt{\ln(L)/L}$, where $L$ is the system length. The conditional spectral density of $\phi$, given $u$, is provided for all the regimes (with and without BEC). • 1. Centre de Physique Théorique (CPHT), CNRS : UMR7644 – Polytechnique - X • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 3. Indian Institute of Technology [Kanpur] (IIT Kanpur), Indian Institute of Technology Kanpur • ## Can a Lamb Reach a Haven Before Being Eaten by Diffusing Lions? ### Alan Gabel 1, Satya N. Majumdar 2, Nagendra K. Panduranga 1, S. Redner 1 #### Journal of Statistical Mechanics: Theory and Experiment (2012) P05011 We study the survival of a single diffusing lamb on the positive half line in the presence of N diffusing lions that all start at the same position L to the right of the lamb and a haven at x=0. If the lamb reaches this haven before meeting any lion, the lamb survives.
We investigate the survival probability of the lamb, S_N(x,L), as a function of N and the respective initial positions of the lamb and the lions, x and L. We determine S_N(x,L) analytically for the special cases of N=1 and N -> infinity. For large but finite N, we determine the unusual asymptotic form whose leading behavior is S_N(z) ~ N^{-z^2}, with z=x/L. Simulations of the capture process converge very slowly to this asymptotic prediction, even for N as large as 10^{500}. • 1. Center for Polymer Studies (CPS), Boston University • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Casimir forces beyond the proximity approximation ### G. Bimonte 1, T. Emig 2, R. L. Jaffe 3, M. Kardar 4 #### Europhysics Letters 97 (2012) 50001 The proximity force approximation (PFA) relates the interaction between closely spaced, smoothly curved objects to the force between parallel plates. Precision experiments on Casimir forces necessitate, and spur research on, corrections to the PFA. We use a derivative expansion for gently curved surfaces to derive the leading curvature modifications to the PFA. Our methods apply to any homogeneous and isotropic materials; here we present results for Dirichlet and Neumann boundary conditions and for perfect conductors. A Padé extrapolation, constrained by a multipole expansion at large distance and our improved expansion at short distances, provides an accurate expression for the sphere-plate Casimir force at all separations. • 1. Istituto Nazionale di Fisica Nucleare, Sezione di Napoli (INFN, Sezione di Napoli), INFN • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 3. Department of Physics, Center for Theoretical Physics, Massachusetts Institute of Technology (MIT) • 4. Department of Physics, Massachusetts Institute of Technology • ## Casimir interaction between inclined metallic cylinders ### P.
Rodriguez-Lopez 1, T. Emig 2 #### Physical Review A 85 (2012) 032510 The Casimir interaction between one-dimensional metallic objects (cylinders, wires) displays unconventional features. Here we study the orientation dependence of this interaction by computing the Casimir energy between two inclined cylinders over a wide range of separations. We consider Dirichlet, Neumann and perfect metal boundary conditions, both at zero temperature and in the classical high temperature limit. For all types of boundary conditions, we find that at large distances the interaction decays slowly with distance, similarly to the case of parallel cylinders, and at small distances scales as the interaction of two spheres (but with different numerical coefficients). Our numerical results at intermediate distances agree with our analytic predictions at small and large separations. Experimental implications are discussed. • 1. Departamento de Fisica Aplicada, • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Collective charge fluctuations and Casimir interactions for quasi one-dimensional metals ### Ehsan Noruzifar 1, Thorsten Emig 2, Umar Mohideen 1, Roya Zandi 1 #### Physical Review B 86 (2012) 115449 We investigate the Casimir interaction between two parallel metallic cylinders and between a metallic cylinder and plate. The material properties of the metallic objects are implemented by the plasma, Drude and perfect metal model dielectric functions. We calculate the Casimir interaction numerically at all separation distances and analytically at large separations. The large-distance asymptotic interaction between one plasma cylinder parallel to another plasma cylinder or plate does not depend on the material properties, but for a Drude cylinder it depends on the dc conductivity $\sigma$. 
At intermediate separations, for plasma cylinders the asymptotic interaction depends on the plasma wavelength $\lambda_{\rm p}$ while for Drude cylinders the Casimir interaction can become independent of the material properties. We confirm the analytical results numerically and show that at short separations, the numerical results approach the proximity force approximation. • 1. Department of Physics and Astronomy, University of California, Riverside • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Compressed Sensing of Approximately-Sparse Signals: Phase Transitions and Optimal Reconstruction ### Jean Barbier 1, Florent Krzakala 1, Marc Mézard 2, Lenka Zdeborová 3 #### 50th annual Allerton conference on communication, control, and computing, USA (2012) Compressed sensing is designed to measure sparse signals directly in a compressed form. However, most signals of interest are only "approximately sparse", i.e., even though the signal contains only a small fraction of relevant (large) components, the other components are not strictly equal to zero, but are only close to zero. In this paper we model the approximately sparse signal with a Gaussian distribution of small components, and we study its compressed sensing with dense random matrices. We use replica calculations to determine the mean-squared error of the Bayes-optimal reconstruction for such signals, as a function of the variance of the small components, the density of large components and the measurement rate. We then use the G-AMP algorithm and we quantify the region of parameters for which this algorithm achieves optimality (for large systems). Finally, we show that in the region where G-AMP for the homogeneous measurement matrices is not optimal, a special "seeding" design of a spatially-coupled measurement matrix allows one to restore optimality.
• 1 : Laboratoire de Physico-Chimie Théorique (LPCT) CNRS : UMR7083 – ESPCI ParisTech • 2 : Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS) CNRS : UMR8626 – Université Paris XI - Paris Sud • 3 : Institut de Physique Théorique (ex SPhT) (IPHT) CNRS : URA2306 – CEA : DSM/IPHT • ## Controlling integrability in a quasi-1D atom-dimer mixture ### D. S. Petrov 1, 2, V. Lebedev 3, J. T. M. Walraven 3 #### Physical Review A 85 (2012) 062711 We analytically study the atom-dimer scattering problem in the near-integrable limit when the oscillator length l_0 of the transverse confinement is smaller than the dimer size, ~l_0^2/|a|, where a<0 is the interatomic scattering length. The leading contributions to the atom-diatom reflection and break-up probabilities are proportional to a^6 in the bosonic case and to a^8 for the up-(up-down) scattering in a two-component fermionic mixture. We show that by tuning a and l_0 one can control the 'degree of integrability' in a quasi-1D atom-dimer mixture in an extremely wide range leaving thermodynamic quantities unchanged. We find that the relaxation to deeply bound states in the fermionic (bosonic) case is slower (faster) than transitions between different Bethe ansatz states. We propose a realistic experiment for detailed studies of the crossover from integrable to nonintegrable dynamics. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 2. National Research Centre "Kurchatov Institute" (NRC KI), University of Moscow • 3. Van der Waals-Zeeman Institute, University of Amsterdam • ## Counting function fluctuations and extreme value threshold in multifractal patterns: the case study of an ideal $1/f$ noise ### Yan V.
Fyodorov, Pierre Le Doussal, Alberto Rosso #### Journal of Statistical Physics 149 (2012) 898-920 To understand the sample-to-sample fluctuations in disorder-generated multifractal patterns we investigate analytically as well as numerically the statistics of high values of the simplest model - the ideal periodic $1/f$ Gaussian noise. By employing the thermodynamic formalism we predict the characteristic scale and the precise scaling form of the distribution of the number of points above a given level. We demonstrate that the power-law forward tail of the probability density, with exponent controlled by the level, results in an important difference between the mean and the typical values of the counting function. This can be further used to determine the typical threshold $x_m$ of extreme values in the pattern, which turns out to be given by $x_m^{(typ)}=2-c\ln{\ln{M}}/\ln{M}$ with $c=3/2$. Such an observation provides a rather compelling explanation of the mechanism behind the universality of $c$. The revealed mechanisms are conjectured to retain their qualitative validity for a broad class of disorder-generated multifractal fields. In particular, we predict that the typical value of the maximum $p_{max}$ of intensity is given by $-\ln{p_{max}} = \alpha_{-}\ln{M} + \frac{3}{2f'(\alpha_{-})}\ln{\ln{M}} + O(1)$, where $f(\alpha)$ is the corresponding singularity spectrum vanishing at $\alpha=\alpha_{-}>0$. For the $1/f$ noise we also derive exact as well as well-controlled approximate formulas for the mean and the variance of the counting function without recourse to the thermodynamic formalism. • ## Critical phenomena and phase sequence in classical bilayer Wigner crystal at zero temperature ### L. Samaj 1, E.
Trizac 1 #### Physical Review B (Condensed Matter) 85 (2012) 205131 We study the ground-state properties of a system of identical classical Coulombic point particles, evenly distributed between two equivalently charged parallel plates at distance $d$; the system as a whole is electroneutral. It was previously shown that upon increasing d from 0 to infinity, five different structures of the bilayer Wigner crystal become energetically favored, starting from a hexagonal lattice (phase I, d=0) and ending at a staggered hexagonal lattice (phase V, d -> infinity). In this paper, we derive new series representations of the ground-state energy for all five bilayer structures. The derivation is based on a sequence of transformations for lattice sums of Coulomb two-particle potentials plus the neutralizing background, having their origin in the general theory of Jacobi theta functions. The new series provide convenient starting points for both analytical and numerical progress. Their convergence properties are excellent: truncation at the fourth term in general determines the energy correctly to 17 decimal digits. The accurate series representations are used to improve the specification of transition points between the phases and to resolve a controversy in previous studies. In particular, it is shown both analytically and numerically that the hexagonal phase I is stable only at d=0, and not in a finite interval of small distances between the plates as was anticipated before. The expansions of the structure energies around second-order transition points can be done analytically, which enables us to show that the critical behavior is of the Ginzburg-Landau type, with a mean-field critical index beta=1/2 for the growth of the order parameters. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Dissipative homogeneous Maxwell mixtures: ordering transition in the tracer limit ### V. Garzó 1, E.
Trizac 2 #### Granular Matter 14 (2012) 99 The homogeneous Boltzmann equation for inelastic Maxwell mixtures is considered to study the dynamics of tracer particles or impurities (solute) immersed in a uniform granular gas (solvent). The analysis is based on exact results derived for a granular binary mixture in the homogeneous cooling state (HCS) that apply for arbitrary values of the parameters of the mixture (particle masses $m_i$, mole fractions $c_i$, and coefficients of restitution $\alpha_{ij}$). In the tracer limit ($c_1\to 0$), it is shown that the HCS supports two distinct phases that are evidenced by the corresponding value of $E_1/E$, the relative contribution of the tracer species to the total energy. Defining the mass ratio $\mu = m_1/m_2$, there indeed exist two critical values $\mu_\text{HCS}^{(-)}$ and $\mu_\text{HCS}^{(+)}$ (which depend on the coefficients of restitution), such that $E_1/E=0$ for $\mu_\text{HCS}^{(-)}<\mu<\mu_\text{HCS}^{(+)}$ (disordered or normal phase), while $E_1/E\neq 0$ for $\mu<\mu_\text{HCS}^{(-)}$ and/or $\mu>\mu_\text{HCS}^{(+)}$ (ordered phase). • 1. Departamento de Fisica, • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Dynamic Monte Carlo Simulations of Anisotropic Colloids ### Sara Jabbari-Farouji 1, Emmanuel Trizac 1 #### Journal of Chemical Physics 137 (2012) 054107 We put forward a simple procedure for extracting dynamical information from Monte Carlo simulations, by appropriate matching of the short-time diffusion tensor with its infinite-dilution limit counterpart, which is supposed to be known. This approach --discarding hydrodynamic interactions-- first allows us to improve the efficiency of previous Dynamic Monte Carlo algorithms for spherical Brownian particles. In a second step, we address the case of anisotropic colloids with orientational degrees of freedom.
As an illustration, we present a detailed study of the dynamics of thin platelets, with emphasis on long-time diffusion and orientational correlations. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Dynamical heterogeneity in aging colloidal glasses of Laponite ### Sara Jabbari-Farouji 1, 2, Rojman Zargar 2, Gerard Wegdam 2, Daniel Bonn 3, 4 #### Soft Matter 8 (2012) 5507-5512 Glasses behave as solids due to their long relaxation time; however, the origin of this slow response remains a puzzle. Growing dynamic length scales due to cooperative motion of particles are believed to be central to the understanding of both the slow dynamics and the emergence of rigidity. Here, we provide experimental evidence of a dynamical heterogeneity length scale that grows with increasing waiting time in an aging colloidal glass of Laponite. The signature of heterogeneity in the dynamics follows from dynamic light scattering measurements in which we study both the rotational and translational diffusion of the disk-shaped particles of Laponite in suspension. These measurements are accompanied by simultaneous microrheology and macroscopic rheology experiments. We find that rotational diffusion of particles slows down at a faster rate than their translational motion. Such decoupling of translational and orientational degrees of freedom finds its origin in the dynamic heterogeneity since rotation and translation probe different length scales in the sample. The macroscopic rheology experiments show that the low frequency shear viscosity increases at a much faster rate than both rotational and translational diffusive relaxation times. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 2. Van der Waals-Zeeman Institute, University of Amsterdam • 3.
University of Amsterdam Van der Waals-Zeeman Institute (VAN DER WAALS-ZEEMAN INSTITUTE), University of Amsterdam • 4. Laboratoire de Physique Statistique de l'ENS (LPS), CNRS : UMR8550 – Université Paris VI - Pierre et Marie Curie – Université Paris VII - Paris Diderot – Ecole Normale Supérieure de Paris - ENS Paris • ## Dynamics of a massive intruder in a homogeneously driven granular fluid ### A. Puglisi 1, A. Sarracino 2, G. Gradenigo 3, D. Villamaina 4 #### Granular Matter 14 (2012) 235-238 A massive intruder in a homogeneously driven granular fluid, in dilute configurations, performs a memoryless Brownian motion with drag and temperature simply related to the average density and temperature of the fluid. At volume fractions of roughly 10-50%, the intruder's velocity correlates with the local fluid velocity field: such a situation is approximately described by a system of coupled linear Langevin equations equivalent to a generalized Brownian motion with memory. Here one may verify the breakdown of the Fluctuation-Dissipation relation and the presence of a net entropy flux - from the fluid to the intruder - whose fluctuations satisfy the Fluctuation Relation. • 1. Dipartimento di Fisica, Università La Sapienza • 2. Dipartimento di Fisica, CNR - Consiglio Nazionale delle Ricerche • 3. Dipartimento di Fisica, CNR - Consiglio Nazionale delle Ricerche • 4. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Edge properties of principal fractional quantum Hall states in the cylinder geometry ### Paul Soulé 1, Thierry Jolicoeur 1 #### Physical Review B (Condensed Matter) 86 (2012) 115214 We study fractional quantum Hall states in the cylinder geometry with open boundaries. We focus on principal fermionic 1/3 and bosonic 1/2 fractions in the case of hard-core interactions. The gap behavior as a function of the cylinder radius is analyzed.
By adding enough orbitals to allow for edge modes we show that it is possible to measure the Luttinger parameter of the non-chiral liquid formed by the combination of the two counterpropagating edges when we add a small confining potential. While we measure a Luttinger exponent consistent with the chiral Luttinger theory prediction for the full hard-core interaction, the exponent remains non-trivial in the Tao-Thouless limit as well as for simple truncated states that can be constructed on the cylinder. If the radius of the cylinder is taken to infinity the problem becomes that of a Tonks-Girardeau one-dimensional interacting gas in the Fermi and Bose cases. Finally, we show that the Tao-Thouless and truncated states have an edge electron propagator which decays spatially with a Fermi-liquid exponent even if the energy spectrum can still be described by a non-trivial Luttinger parameter. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Effect of coupling asymmetry on mean-field solutions of direct and inverse Sherrington-Kirkpatrick model ### Jason Sakellariou 1, Yasser Roudi 2, 3, Marc Mezard 1, John Hertz 4, 5 #### Philosophical Magazine 92 (2012) 272-279 We study how the degree of symmetry in the couplings influences the performance of three mean field methods used for solving the direct and inverse problems for generalized Sherrington-Kirkpatrick models. In this context, the direct problem is predicting the potentially time-varying magnetizations. The three theories include the first and second order Plefka expansions, referred to as naive mean field (nMF) and TAP, respectively, and a mean field theory which is exact for fully asymmetric couplings. We call the last of these simply MF theory.
We show that for the direct problem, nMF performs worse than the other two approximations, TAP outperforms MF when the coupling matrix is nearly symmetric, while MF works better when it is strongly asymmetric. For the inverse problem, MF performs better than both TAP and nMF, although an ad hoc adjustment of TAP can make it comparable to MF. At high temperatures the performances of TAP and MF approach each other. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 2. Kavli Institute for Systems Neuroscience, Kavli Institute for Systems Neuroscience • 3. NORDITA, NORDITA • 4. NORDITA, NORDITA • 5. Niels Bohr Institute (NBI), Niels Bohr Institute • ## Einstein relation in superdiffusive systems ### Giacomo Gradenigo 1, Alessandro Sarracino 1, Dario Villamaina 2, Angelo Vulpiani 3 #### Journal of Statistical Mechanics: Theory and Experiment (2012) L06001 We study the Einstein relation between diffusion and response to an external field in systems showing superdiffusion. In particular, we investigate a continuous time Levy walk where the velocity remains constant for a time \tau, with distribution P(\tau) ~ \tau^{-g}. As g varies, the diffusion can be standard or anomalous; in spite of this, if in the unperturbed system a current is absent, the Einstein relation holds. In the case where a current is present the scenario is more complicated and the usual Einstein relation fails. This suggests that the main ingredient for the breaking of the Einstein relation is not the anomalous diffusion but the presence of a mean drift (current). • 1. Istituto dei Sistemi Complessi--CNR, Università Sapienza • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 3.
Dipartimento di Fisica, Università Sapienza • ## Electron and nuclear spin dynamics in the thermal mixing model of dynamic nuclear polarization ### Sonia Colombo Serra 1, Alberto Rosso 2, Fabio Tedoldi 1 #### Physical Chemistry Chemical Physics 14 (2012) 13299-13308 A novel mathematical treatment is proposed for computing the time evolution of dynamic nuclear polarization processes in the low temperature thermal mixing regime. Without assuming any a priori analytical form for the electron polarization, our approach provides a quantitative picture of the steady state that recovers the well-known Borghini prediction based on thermodynamics arguments, as long as the electrons-nuclei transition rates are fast compared to the other relevant time scales. Substantially different final polarization levels are achieved instead when the latter assumption is relaxed in the presence of a nuclear leakage term, even if very weak, suggesting a possible explanation for the deviation between the measured steady state polarizations and the Borghini prediction. The proposed methodology also allows one to calculate nuclear polarization and relaxation times, once the electron/nucleus concentration ratio and the typical rates of the microscopic processes involving the two spin species are specified. Numerical results are shown to account for the manifold dynamical behaviours of typical DNP samples. • 1. Centro Ricerche Bracco, Centro Ricerche Bracco • 2.
Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Entangling many-body bound states with propagative modes in Bose-Hubbard systems ### Mario Collura 1, Helge Aufderheide 2, Guillaume Roux 3, Dragi Karevski 1 #### Physical Review A 86 (2012) 013615 The quantum evolution of a cloud of bosons initially localized on part of a one-dimensional optical lattice and suddenly subjected to a linear ramp is studied, realizing a quantum analog of the 'Galileo ramp' experiment. The main remarkable effects of this realistic setup are revealed using analytical and numerical methods. Only some of the particles are ejected for a high enough ramp, while the others remain self-trapped. Then, the trapped density profile displays rich dynamics with Josephson-like oscillations around a plateau. This setup, by coupling bound states to propagative modes, creates two diverging condensates for which the entanglement is computed and related to the equilibrium one. Further, we address the role of integrability in the entanglement and in the damping and thermalization of simple observables. • 1. Institut Jean Lamour : Matériaux -Métallurgie - Nanosciences - Plasma - Surfaces (IJL), Université Henri Poincaré - Nancy I – CNRS : UMR7198 – Institut National Polytechnique de Lorraine (INPL) – Université Paul Verlaine - Metz • 2. Department Biological Physics, Max-Planck-Institute • 3. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Equilibrium strategy and population-size effects in lowest unique bid auctions ### Simone Pigolotti 1, 2, Sebastian Bernhardsson 1, 3, Jeppe Juul 1, Gorm Galster 1, Pierpaolo Vivo 4 #### Physical Review Letters 108 (2012) 088701 In lowest unique bid auctions, $N$ players bid for an item. The winner is whoever places the \emph{lowest} bid, provided that it is also unique.
We use a grand canonical approach to derive an analytical expression for the equilibrium distribution of strategies. We then study the properties of the solution as a function of the mean number of players, and compare them with a large dataset of internet auctions. The theory agrees with the data with striking accuracy for small population size $N$, while for larger $N$ a qualitatively different distribution is observed. We interpret this result as the emergence of two different regimes, one in which adaptation is feasible and one in which it is not. Our results question whether a large population can actually adapt and find the optimal strategy when participating in a collective game. • 1. Niels Bohr Institute (NBI), Niels Bohr Institute • 2. Dept. de Fisica i Eng. Nuclear, Universitat Politécnica de Catalunya • 3. Swedish Defence Research Agency, Swedish Defence Research Agency • 4. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Exact results for classical Casimir interactions: Dirichlet and Drude model in the sphere-sphere and sphere-plane geometry ### G. Bimonte 1, T. Emig 2 #### Physical Review Letters 109 (2012) 160403 Analytic expressions that describe Casimir interactions over the entire range of separations have been limited to planar surfaces. Here we derive analytic expressions for the classical or high-temperature limit of Casimir interactions between two spheres (interior and exterior configurations), including the sphere-plane geometry as a special case, using bispherical coordinates. We consider both Dirichlet boundary conditions and metallic boundary conditions described by the Drude model. At short distances, closed-form expansions are derived from the exact result, displaying an intricate structure of deviations from the commonly employed proximity force approximation. • 1. Istituto Nazionale di Fisica Nucleare, Sezione di Napoli (INFN, Sezione di Napoli), INFN • 2.
Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Exact wavefunctions for excitations of the nu=1/3 fractional quantum Hall state from a model Hamiltonian ### Paul Soulé 1, Thierry Jolicoeur 1 #### Physical Review B (Condensed Matter) 85 (2012) 155116 We study fractional quantum Hall states in the cylinder geometry with open boundaries. By truncating the Coulomb interactions between electrons we show that it is possible to construct infinitely many exact eigenstates including the ground state, quasiholes, quasielectrons and the magnetoroton branch of excited states. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Feshbach resonances in Cesium at Ultra-low Static Magnetic Fields ### D. J. Papoular 1, 2, S. Bize 3, A. Clairon 3, H. Marion 3, S. J. Kokkelmans 4, G. V. Shlyapnikov 1, 5 #### Physical Review A 86 (2012) 040701 We have observed Feshbach resonances for 133Cs atoms in two different hyperfine states at ultra-low static magnetic fields by using an atomic fountain clock. The extreme sensitivity of our setup allows for high signal-to-noise-ratio observations at densities of only 2*10^7 cm^{-3}. We have reproduced these resonances using coupled-channels calculations which are in excellent agreement with our measurements. We justify that these are s-wave resonances involving weakly-bound states of the triplet molecular Hamiltonian, identify the resonant closed channels, and explain the observed multi-peak structure. We also describe a model which precisely accounts for the collisional processes in the fountain and which explains the asymmetric shape of the observed Feshbach resonances in the regime where the kinetic energy dominates over the coupling strength. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 2. 
Dipartimento di Fisica, Universita Trento • 3. Systèmes de Référence Temps Espace (SYRTE), CNRS : UMR8630 – INSU – Observatoire de Paris – Université Paris VI - Pierre et Marie Curie • 4. Department of Physics, Eindhoven University of Technology • 5. Van der Waals-Zeeman Institute, University of Amsterdam • ## Field induced stationary state for an accelerated tracer in a bath ### Matthieu Barbier 1, Emmanuel Trizac 1 #### Journal of Statistical Physics 149 (2012) 317-341 We are interested in the behavior of a tracer particle, accelerated by a constant and uniform external field, when the energy injected by the field is redistributed through collisions to a bath of unaccelerated particles. A non equilibrium steady state is thereby reached. Solutions of a generalized Boltzmann-Lorentz equation are analyzed analytically, in a versatile framework that embeds the majority of tracer-bath interactions discussed in the literature. These results --mostly derived for a one dimensional system-- are successfully confronted with those of three independent numerical simulation methods: a direct iterative solution, the Gillespie algorithm, and the Direct Simulation Monte Carlo technique. We work out the diffusion properties as well as the velocity tails: large v, and either large -v, or v in the vicinity of its lower cutoff whenever the velocity distribution is bounded from below. Particular emphasis is put on the cold bath limit, with scatterers at rest, which plays a special role in our model. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## From Weak- to Strong-Coupling Mesoscopic Fermi Liquids ### Dong E. Liu 1, Sébastien Burdin 2, Harold U. Baranger 1, Denis Ullmo 3 #### EPL 97 (2012) 17006 We study mesoscopic fluctuations in a system in which there is a continuous connection between two distinct Fermi liquids, asking whether the mesoscopic variation in the two limits is correlated.
The particular system studied is an Anderson impurity coupled to a finite mesoscopic reservoir described by random matrix theory, a structure which can be realized using quantum dots. We use the slave boson mean field approach to connect the levels of the uncoupled system to those of the strong coupling Nozières Fermi liquid. We find strong but not complete correlation between the mesoscopic properties in the two limits and several universal features. • 1. Duke Physics, Duke University • 2. Laboratoire Ondes et Matière d'Aquitaine (LOMA), CNRS : UMR5798 • 3. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud Citations to the Article (3) • ## Generation of dispersive shock waves by the flow of a Bose-Einstein condensate past a narrow obstacle ### A. M. Kamchatnov 1, N. Pavloff 2 #### Physical Review A 85 (2012) 033603 We study the flow of a quasi-one-dimensional Bose-Einstein condensate incident onto a narrow obstacle. We consider a configuration in which a dispersive shock is formed and propagates upstream away from the obstacle while the downstream flow reaches a supersonic velocity, generating a sonic horizon. Conditions for obtaining this regime are explicitly derived and the accuracy of our analytical results is confirmed by numerical simulations. • 1. Institute of Spectroscopy, • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud Citations to the Article (7) • ## Ground state of classical bilayer Wigner crystals ### L. Samaj 1, 2, E. Trizac 2 #### Europhysics Letters 98 (2012) 36004 We study the ground state structure of electronic-like bilayers, where different phases compete upon changing the inter-layer separation or particle density. New series representations with exceptional convergence properties are derived for the exact Coulombic energies under scrutiny.
The complete phase transition scenario --including critical phenomena-- can subsequently be worked out in detail, thereby unifying a rather scattered or contradictory body of literature, hitherto plagued by the inaccuracies inherent to long range interaction potentials. • 1. Institute of Physics, • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Impurity in a sheared inelastic Maxwell gas ### V. Garzó 1, E. Trizac 2 #### Physical Review E 85 (2012) 011302 The Boltzmann equation for inelastic Maxwell models is considered in order to investigate the dynamics of an impurity (or intruder) immersed in a granular gas driven by a uniform shear flow. The analysis is based on an exact solution of the Boltzmann equation for a granular binary mixture. It applies for conditions arbitrarily far from equilibrium (arbitrary values of the shear rate $a$) and for arbitrary values of the parameters of the mixture (particle masses $m_i$, mole fractions $x_i$, and coefficients of restitution $\alpha_{ij}$). In the tracer limit where the mole fraction of the intruder species vanishes, a non equilibrium phase transition takes place. We thereby identify ordered phases where the intruder bears a finite contribution to the properties of the mixture, in a region of parameter space that is worked out in detail. These findings extend previous results obtained for ordinary Maxwell gases, and further show that dissipation leads to new ordered phases. • 1. Departamento de Fisica, • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Lane formation in a lattice model for oppositely driven binary particles ### Hiroki Ohta 1 #### Europhysics Letters 99 (2012) 40006 Oppositely driven binary particles with repulsive interactions on the square lattice are investigated in the zero-temperature limit.
Two classes of steady states related to stuck configurations and lane formations have been constructed in systematic ways under certain conditions. A mean-field type analysis carried out using a percolation problem based on the constructed steady states provides an estimation of the phase diagram, which is qualitatively consistent with numerical simulations. Further, finite size effects in terms of lane formations are discussed. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Level statistics of disordered spin-1/2 systems and its implications for materials with localized Cooper pairs ### Emilio Cuevas, Mikhail Feigel'man, Lev Ioffe, Marc Mezard #### Nature Communications 3 (2012) 1128 The origin of continuous energy spectra in large disordered interacting quantum systems is one of the key unsolved problems in quantum physics. While small quantum systems with discrete energy levels are noiseless and stay coherent forever in the absence of any coupling to the external world, most large-scale quantum systems are able to produce a thermal bath and excitation decay. This intrinsic decoherence is manifested by a broadening of energy levels which acquire a finite width. The important question is what is the driving force and the mechanism of transition(s) between two different types of many-body systems - with and without intrinsic decoherence? Here we address this question via the numerical study of energy level statistics of a system of spins-1/2 with anisotropic exchange interactions and random transverse fields. Our results present the first evidence for a well-defined quantum phase transition between domains of discrete and continuous many-body spectra in a class of random spin models.
Because this model also describes the physics of the superconductor-insulator transition in disordered superconductors like InO and similar materials, our results imply the appearance of novel insulating phases in the vicinity of this transition. • ## Material dependence of Casimir forces: gradient expansion beyond proximity ### G. Bimonte 1, T. Emig 2, M. Kardar 3 #### Applied Physics Letters 100 (2012) 074110 A widely used method for estimating Casimir interactions [H. B. G. Casimir, Proc. K. Ned. Akad. Wet. 51, 793 (1948)] between gently curved material surfaces at short distances is the proximity force approximation (PFA). While this approximation is asymptotically exact at vanishing separations, quantifying corrections to PFA has been notoriously difficult. Here we use a derivative expansion to compute the leading curvature correction to PFA for metals (gold) and insulators (SiO$_2$) at room temperature. We derive an explicit expression for the amplitude $\hat\theta_1$ of the PFA correction to the force gradient for axially symmetric surfaces. In the non-retarded limit, the corrections to the Casimir free energy are found to scale logarithmically with distance. For gold, $\hat\theta_1$ has an unusually large temperature dependence. • 1. Istituto Nazionale di Fisica Nucleare, Sezione di Napoli (INFN) • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 3. Department of Physics, Massachusetts Institute of Technology • ## Mesoscopic Anderson Box: Connecting Weak to Strong Coupling ### Sébastien Burdin 2, Harold U. Baranger 1, Denis Ullmo 3 #### Physical Review B 85, 15 (2012) 155455 Both the weakly coupled and strong coupling Anderson impurity problems are characterized by a Fermi-liquid theory with weakly interacting quasiparticles. In an Anderson box, mesoscopic fluctuations of the effective single particle properties will be large.
We study how the statistical fluctuations at low temperature in these two problems are connected, using random matrix theory and the slave boson mean field approximation (SBMFA). First, for a resonant level model such as results from the SBMFA, we find the joint distribution of energy levels with and without the resonant level present. Second, if only energy levels within the Kondo resonance are considered, the distributions of perturbed levels collapse to universal forms for both orthogonal and unitary ensembles for all values of the coupling. These universal curves are described well by a simple Wigner-surmise type toy model. Third, we study the fluctuations of the mean field parameters in the SBMFA, finding that they are small. Finally, the change in the intensity of an eigenfunction at an arbitrary point is studied, such as is relevant in conductance measurements: we find that the introduction of the strongly-coupled impurity considerably changes the wave function but that a substantial correlation remains. • 1. Duke Physics, Duke University • 2. Laboratoire Ondes et Matière d'Aquitaine (LOMA), CNRS : UMR5798 • 3. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud Citations to the Article (1) • ## Multifractal dimensions for all moments for certain critical random matrix ensembles in the strong multifractality regime ### E. Bogomolny 1, O. Giraud 1 #### Physical Review E 85 (2012) 046208 We construct perturbation series for the q-th moment of eigenfunctions of various critical random matrix ensembles in the strong multifractality regime close to localization. Contrary to previous investigations, our results are valid in the region q<1/2. 
Our findings allow us to verify, at the first leading orders in the strong multifractality limit, the symmetry relation for anomalous fractal dimensions $\Delta(q)=\Delta(1-q)$, recently conjectured for critical models where an analogue of the metal-insulator transition takes place. It is known that this relation is verified at leading order in the weak multifractality regime. Our results thus indicate that this symmetry holds in both limits of small and large coupling constant. For general values of the coupling constant we present careful numerical verifications of this symmetry relation for different critical random matrix ensembles. We also present an example of a system closely related to one of these critical ensembles, but where the symmetry relation, at least numerically, is not fulfilled. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Multifractality of eigenfunctions in spin chains ### Yasar Yilmaz Atas 1, Eugene Bogomolny 1 #### Physical Review E 86 (2012) 021104 We investigate different one-dimensional quantum spin-1/2 chain models and by combining analytical and numerical calculations prove that their ground state wave functions in the natural spin basis are multifractals with, in general, non-trivial fractal dimensions. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Multifractality of quantum wave packets ### Ignacio Garcia-Mata 1, 2, J. Martin 3, Olivier Giraud 4, Bertrand Georgeot 1 #### Physical Review E 86 (2012) 056215 We study a version of the mathematical Ruijsenaars-Schneider model, and reinterpret it physically in order to describe the spreading with time of quantum wave packets in a system where multifractality can be tuned by varying a parameter. We compare different methods to measure the multifractality of wave packets, and identify the best one.
We find the multifractality to decrease with time until it reaches an asymptotic limit, different from the multifractality of eigenvectors, but related to it, as is the rate of the decrease. Our results are relevant to experimental situations. • 1 : Laboratoire de Physique Théorique - IRSAMC (LPT) CNRS : UMR5152 – Université Paul Sabatier (UPS) - Toulouse III • 2 : Instituto de Investigaciones Físicas de Mar del Plata (IFIMAR) Univ. Nacional de La Plata and Conicet • 3 : Institut de Physique Nucléaire, Atomique et de Spectroscopie Université de Liège • 4 : Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS) CNRS : UMR8626 – Université Paris XI - Paris Sud • ## New alphabet-dependent morphological transition in a random RNA alignment ### O. V. Valba 1, 2, M. V. Tamm 3, S. K. Nechaev 1, 4 #### Physical Review Letters 109 (2012) 018102 We study the fraction $f$ of nucleotides involved in the formation of a cactus-like secondary structure of random heteropolymer RNA-like molecules. In the low-temperature limit we study this fraction as a function of the number $c$ of different nucleotide species. We show that, with changing $c$, the secondary structures of random RNAs undergo a morphological transition: $f(c)\to 1$ for $c \le c_{\rm cr}$ as the chain length $n$ goes to infinity, signaling the formation of a virtually 'perfect' gapless secondary structure; while $f(c)<1$ for $c>c_{\rm cr}$, which means that a non-perfect structure with gaps is formed. The strict upper and lower bounds $2 \le c_{\rm cr} \le 4$ are proven, and the numerical evidence for $c_{\rm cr}$ is presented. The relevance of the transition from the evolutionary point of view is discussed. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 2. Moscow Institute of Physics and Technology (MIPT) • 3. Physics Department, Moscow State University • 4. P. N.
Lebedev Physical Institute, • ## Non-equilibrium and information: the role of cross-correlations ### Andrea Crisanti 1, Andrea Puglisi 2, Dario Villamaina 3 #### Physical Review E 85 (2012) 061127 We discuss the relevance of information contained in cross-correlations among different degrees of freedom, which is crucial in non-equilibrium systems. In particular we consider a stochastic system where two degrees of freedom $X_1$ and $X_2$ - in contact with two different thermostats - are coupled together. The production of entropy and the violation of the equilibrium fluctuation-dissipation theorem (FDT) are both related to the cross-correlation between $X_1$ and $X_2$. Information about such cross-correlation may be lost when single-variable reduced models, for $X_1$, are considered. Two different procedures are typically applied: (a) one totally ignores the coupling with $X_2$; (b) one models the effect of $X_2$ as an average memory effect, obtaining a generalized Langevin equation. In case (a) discrepancies between the system and the model appear both in entropy production and linear response; the latter can be exploited to define effective temperatures, but those are meaningful only when time-scales are well separated. In case (b) the linear response of the model reproduces well that of the system; however the loss of information is reflected in a loss of entropy production. When only linear forces are present, such a reduction is dramatic and makes the average entropy production vanish, posing problems in interpreting FDT violations. • 1. Sapienza • 2. Dipartimento di Fisica (DF-LS), Università degli studi di Roma I - La Sapienza • 3. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Novel Fermi Liquid of 2D Polar Molecules ### Zhen-Kai Lu 1, 2, 3, G. V.
Shlyapnikov 1, 4 #### Physical Review A 85 (2012) 023614 We study Fermi liquid properties of a weakly interacting 2D gas of single-component fermionic polar molecules with dipole moments $d$ oriented perpendicularly to the plane of their translational motion. This geometry allows the minimization of inelastic losses due to chemical reactions for reactive molecules and, at the same time, provides a possibility of a clear description of many-body (beyond mean field) effects. The long-range character of the dipole-dipole repulsive interaction between the molecules, which scales as $1/r^3$ at large distances $r$, makes the problem drastically different from the well-known problem of the two-species Fermi gas with repulsive contact interspecies interaction. We solve the low-energy scattering problem and develop a many-body perturbation theory beyond the mean field. The theory relies on the presence of a small parameter $k_Fr_*$, where $k_F$ is the Fermi momentum, and $r_*=md^2/\hbar^2$ is the dipole-dipole length, with $m$ being the molecule mass. We obtain thermodynamic quantities as a series expansion up to second order in $k_Fr_*$ and argue that many-body corrections to the ground-state energy can be identified in experiments with ultracold molecules, as has recently been done for ultracold fermionic atoms. Moreover, we show that zero sound exists only due to many-body effects and calculate the sound velocity. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 2. Max-Planck-Institut für Quantenoptik • 3. Fédération de recherche du département de physique de l'Ecole Normale Supérieure (FRDPENS), CNRS : FR684 – Ecole Normale Supérieure de Paris - ENS Paris • 4. Van der Waals-Zeeman Institute, University of Amsterdam Citations to the Article (8) • ## Number of Common Sites Visited by N Random Walkers ### Satya N. Majumdar 1, Mikhail V.
Tamm 2 #### Physical Review E 86 (2012) 021135 We compute analytically the mean number of common sites, W_N(t), visited by N independent random walkers each of length t and all starting at the origin at t=0 in d dimensions. We show that in the (N-d) plane, there are three distinct regimes for the asymptotic large t growth of W_N(t). These three regimes are separated by two critical lines d=2 and d=d_c(N)=2N/(N-1) in the (N-d) plane. For d<2, W_N(t)\sim t^{d/2} for large t (the N dependence is only in the prefactor). For 2 • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 2. Department of Physics, Lomonosov State University • ## Number of relevant directions in Principal Component Analysis and Wishart random matrices ### Satya N. Majumdar 1, Pierpaolo Vivo 1 #### Physical Review Letters 108 (2012) 200601 We compute analytically, for large $N$, the probability $\mathcal{P}(N_+,N)$ that a $N\times N$ Wishart random matrix has $N_+$ eigenvalues exceeding a threshold $N\zeta$, including its large deviation tails. This probability plays a benchmark role when performing the Principal Component Analysis of a large empirical dataset. We find that $\mathcal{P}(N_+,N)\approx\exp(-\beta N^2 \psi_\zeta(N_+/N))$, where $\beta$ is the Dyson index of the ensemble and $\psi_\zeta(\kappa)$ is a rate function that we compute explicitly in the full range $0\leq \kappa\leq 1$ and for any $\zeta$. The rate function $\psi_\zeta(\kappa)$ displays a quadratic behavior modulated by a logarithmic singularity close to its minimum $\kappa^\star(\zeta)$. This is shown to be a consequence of a phase transition in an associated Coulomb gas problem. The variance $\Delta(N)$ of the number of relevant components is also shown to grow universally (independent of $\zeta)$ as $\Delta(N)\sim (\beta \pi^2)^{-1}\ln N$ for large $N$. • 1. 
Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## On the joint distribution of the maximum and its position of the Airy2 process minus a parabola ### Jinho Baik 1, Karl Liechty 1, Gregory Schehr 2 #### Journal of Mathematical Physics 53 (2012) 083303 The maximal point of the Airy2 process minus a parabola is believed to describe the scaling limit of the end-point of the directed polymer in a random medium, which was proved to be true for a few specific cases. Recently two different formulas for the joint distribution of the location and the height of this maximal point were obtained, one by Moreno Flores, Quastel and Remenik, and the other by Schehr. The first formula is given in terms of the Airy function and an associated operator, and the second formula is expressed in terms of the Lax pair equations of the Painlevé II equation. We give a direct proof that these two formulas are the same. • 1. Department of Mathematics, Michigan State University • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Ordered spectral statistics in 1D disordered supersymmetric quantum mechanics and Sinai diffusion with dilute absorbers ### Christophe Texier 1, 2 #### Physica Scripta 86 (2012) 058515 Some results on the ordered statistics of eigenvalues for one-dimensional random Schrödinger Hamiltonians are reviewed. In the case of supersymmetric quantum mechanics with disorder, the existence of low energy delocalized states induces eigenvalue correlations and makes the ordered statistics problem nontrivial. The resulting distributions are used to analyze the problem of classical diffusion in a random force field (Sinai problem) in the presence of weakly concentrated absorbers.
It is shown that the slowly decaying averaged return probability of the Sinai problem, $\langle P(x,t|x,0)\rangle \sim \ln^{-2}t$, is converted into a power law decay, $\langle P(x,t|x,0)\rangle \sim t^{-\sqrt{2\rho/g}}$, where $g$ is the strength of the random force field and $\rho$ the density of absorbers. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 2. Laboratoire de Physique des Solides (LPS), CNRS : UMR8502 – Université Paris XI - Paris Sud • ## Parametrization of spin-1 classical states ### Olivier Giraud 1, Petr Braun 2, 3, Daniel Braun 4 #### Physical Review A 85 (2012) 032101 We give an explicit parametrization of the set of mixed quantum states and of the set of mixed classical states for a spin-1. Classical states are defined as states with a positive Glauber-Sudarshan P-function. They are at the same time the separable symmetric states of two qubits. We explore the geometry of this set, and show that its boundary consists of a two-parameter family of ellipsoids. The boundary does not contain any facets, but includes straight lines corresponding to mixtures of pure classical states. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 2. Institute of Physics, Saint-Petersburg University • 3. Fachbereich Physik, Universität Duisburg-Essen • 4. Laboratoire de Physique Théorique - IRSAMC (LPT), CNRS : UMR5152 – Université Paul Sabatier - Toulouse III • ## Phase behaviour of colloidal assemblies on 2D corrugated substrates ### Samir El Shawish 1, Emmanuel Trizac 2, Jure Dobnikar 1 #### Journal of Physics: Condensed Matter 24 (2012) 284118 We investigate - with Monte Carlo computer simulations - the phase behaviour of dimeric colloidal molecules on periodic substrates with square symmetry.
The molecules are formed in a two-dimensional suspension of like-charged colloids subject to periodic external confinement, which can be experimentally realized by optical methods. We study the evolution of positional and orientational order by varying the temperature across the melting transition. We propose and evaluate appropriate order parameters as well as the specific heat capacity and show that the decay of positional correlations belongs to a class of crossover transitions while the orientational melting is a second-order phase transition. • 1. Jozef Stefan Institute • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • ## Probabilistic Reconstruction in Compressed Sensing: Algorithms, Phase Diagrams, and Threshold Achieving Matrices ### Florent Krzakala 1, Marc Mézard 2, François Sausset 2, Yifan Sun 1, 3, Lenka Zdeborová 4 #### Journal of Statistical Mechanics: Theory and Experiment (2012) P08009 Compressed sensing is a signal processing method that acquires data directly in a compressed form. This allows one to make fewer measurements than was previously considered necessary to record a signal, enabling faster or more precise measurement protocols in a wide range of applications. Using an interdisciplinary approach, we have recently proposed in [arXiv:1109.4424] a strategy that allows compressed sensing to be performed at acquisition rates approaching the theoretical optimal limits. In this paper, we give a more thorough presentation of our approach, and introduce many new results. We present the probabilistic approach to reconstruction and discuss its optimality and robustness. We detail the derivation of the message passing algorithm for reconstruction and expectation maximization learning of signal-model parameters.
We further develop the asymptotic analysis of the corresponding phase diagrams with and without measurement noise, for different distributions of signals, and discuss the best possible reconstruction performance regardless of the algorithm. We also present new efficient seeding matrices, test them on synthetic data and analyze their performance asymptotically. • 1. Laboratoire de Physico-Chimie Théorique (LPCT), CNRS : UMR7083 – ESPCI ParisTech • 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 3. LMIB and School of Mathematics and Systems Science, Beihang University • 4. Institut de Physique Théorique (ex SPhT) (IPHT), CNRS : URA2306 – CEA : DSM/IPHT Citations to the Article (5) • ## Probing Spin-Charge Relation by Magnetoconductance in One-Dimensional Polymer Nanofibers ### A. Choi 1, 2, K. H. Kim 1, S. J. Hong 1, M. Goh 3, 4, K. Akagi 3, R. B. Kaner 5, N. N. Kirova 6, S. A. Brazovskii 7, A. T. Johnson 8, 9, D. A. Bonnell 8, E. J. Mele 9, Y. W. Park 1 #### Physical Review B (Condensed Matter) 86 (2012) 155423 Polymer nanofibers are one-dimensional organic hydrocarbon systems containing conducting polymers where non-linear local excitations such as solitons, polarons and bipolarons, formed by the electron-phonon interaction, were predicted. Magnetoconductance (MC) can simultaneously probe both the spin and charge of these mobile species and identify the effects of electron-electron interactions on these nonlinear excitations. Here we report our observations of a qualitatively different MC in polyacetylene (PA) and in polyaniline (PANI) and polythiophene (PT) nanofibers. In PA the MC is essentially zero, but it is present in PANI and PT. The universal scaling behavior and the zero (finite) MC in PA (PANI and PT) nanofibers provide evidence of Coulomb interactions between spinless charged solitons (interacting polarons which carry both spin and charge). • 1.
Department of Physics and Astronomy, Seoul National University • 2. WCU Flexible Nanosystems, Korea University • 3. Department of Polymer Chemistry, Kyoto University • 4. Institute of Advanced Composite Materials, KIST • 5. Department of Chemistry and Biochemistry, UCLA • 6. Laboratoire de Physique des Solides (LPS), CNRS : UMR8502 – Université Paris XI - Paris Sud • 7. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 8. Nano-Bio Interface Center, University of Pennsylvania • 9. Department of Physics and Astronomy, University of Pennsylvania • ## Quantitative field theory of the glass transition ### Silvio Franz, Hugo Jacquin, Giorgio Parisi, Pierfrancesco Urbani, Francesco Zamponi #### Proceedings of the National Academy of Sciences 109 (2012) 18725 We develop a full microscopic replica field theory of the dynamical transition in glasses. By studying the soft modes that appear at the dynamical temperature we obtain an effective theory for the critical fluctuations. This analysis leads to several results: we give expressions for the mean field critical exponents, and we study analytically the critical behavior of a set of four-point correlation functions from which we can extract the dynamical correlation length. Finally, we can obtain a Ginzburg criterion that states the range of validity of our analysis. We compute all these quantities within the Hypernetted Chain Approximation (HNC) for the Gibbs free energy and we find results that are consistent with numerical simulations. • ## Quantum fluctuations around black hole horizons in Bose-Einstein condensates ### P. -É. Larré 1, A. Recati 2, I. Carusotto 2, N. Pavloff 1 #### Physical Review A 85 (2012) 013621 We study several realistic configurations allowing one to realize an acoustic horizon in the flow of a one dimensional Bose-Einstein condensate.
In each case we give an analytical description of the flow pattern, of the spectrum of Hawking radiation and of the associated quantum fluctuations. Our calculations confirm that the nonlocal correlations of the density fluctuations previously studied in a simplified model provide a clear signature of Hawking radiation also in realistic configurations. In addition we explain by direct computation how this nonlocal signal relates to short-range modifications of the density correlations.

• 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud
• 2. INO-CNR BEC Center and Dipartimento di Fisica, Università di Trento

## Quantum flutter of supersonic particles in one-dimensional quantum liquids

### Charles J. M. Mathy 1, Mikhail B. Zvonarev 1, 2, Eugene Demler 1

#### Nature Physics 8 (2012) 881-886

The non-equilibrium dynamics of strongly correlated many-body systems exhibits some of the most puzzling phenomena and challenging problems in condensed matter physics. Here we report on essentially exact results on the time evolution of an impurity injected at a finite velocity into a one-dimensional quantum liquid. We provide the first quantitative study of the formation of the correlation hole around a particle in a strongly coupled many-body quantum system, and find that the resulting correlated state does not come to a complete stop but reaches a steady state which propagates at a finite velocity. We also uncover a novel physical phenomenon when the impurity is injected at supersonic velocities: the correlation hole undergoes long-lived coherent oscillations around the impurity, an effect we call quantum flutter. We provide a detailed understanding and an intuitive physical picture of these intriguing discoveries, and propose an experimental setup where this physics can be realized and probed directly.

• 1. Department of Physics, Harvard University
• 2.
Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud

## Raman scattering of atoms from a quasi-condensate in a perturbative regime

### T. Wasak 1, J. Chwedenczuk 1, M. Trippenbach 1, 2, Pawel Zin 2, 3

#### Physical Review A 86 (2012) 043621

It is demonstrated that measurements of positions of atoms scattered from a quasi-condensate in a Raman process provide information on the temperature of the parent cloud. In particular, the widths of the density and second-order correlation functions are sensitive to the phase fluctuations induced by the non-zero temperature of the quasi-condensate. It is also shown how these widths evolve during expansion of the cloud of scattered atoms. These results are useful for planning future Raman scattering experiments and indicate the degree of spatial resolution of atom-position measurements necessary to detect the temperature dependence of the quasi-condensate.

• 1. Institute of Theoretical Physics, Warsaw University
• 2. Andrzej Soltan Institute for Nuclear Studies, Warsaw University
• 3. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud

## Random, blocky and alternating ordering in supramolecular polymers of chemically bidisperse monomers

### Sara Jabbari-Farouji 1, 2, 3, Paul Van Der Schoot 2, 4

#### Journal of Chemical Physics 137 (2012) 064906

As a first step toward understanding the role of molecular or chemical polydispersity in self-assembly, we put forward a coarse-grained model that describes the spontaneous formation of quasi-linear polymers in solutions containing two self-assembling species. Our theoretical framework is based on a two-component self-assembled Ising model in which the bidispersity is parameterized in terms of the strengths of the binding free energies that depend on the monomer species involved in the pairing interaction.
Depending upon the relative values of the binding free energies involved, different morphologies of assemblies that include both components are formed, exhibiting paramagnetic-, ferromagnetic- or antiferromagnetic-like order, i.e., random, blocky or alternating ordering of the two components in the assemblies. Analyzing the model for the case of ferromagnetic ordering, which is of most practical interest, we find that the transition from conditions of minimal assembly to those characterized by strong polymerization can be described by a critical concentration that depends on the concentration ratio of the two species. Interestingly, the distribution of monomers in the assemblies is different from the original distribution, i.e., the ratio of the concentrations of the two components put into the system. The monomers with a smaller binding free energy are more abundant in short assemblies and monomers with a larger binding affinity are more abundant in longer assemblies. Under certain conditions the two components congregate into separate supramolecular polymeric species and in that sense phase separate. We find strong deviations from the expected growth law for supramolecular polymers even for modest amounts of a second component, provided it is chemically sufficiently distinct from the main one.

• 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud
• 2. Theory of Polymer and Soft Matter Group, Eindhoven University of Technology
• 3. Dutch Polymer Institute
• 4.
Institute for Theoretical Physics, Utrecht University

## Reconstruction of financial networks for robust estimation of systemic risk

### Iacopo Mastromatteo 1, Elia Zarinelli 2, Matteo Marsili 3

#### Journal of Statistical Mechanics: Theory and Experiment (2012) P03011

In this paper we estimate the propagation of liquidity shocks through interbank markets when the information about the underlying credit network is incomplete. We show that techniques such as Maximum Entropy currently used to reconstruct credit networks severely underestimate the risk of contagion by assuming a trivial (fully connected) topology, a type of network structure which can be very different from the one empirically observed. We propose an efficient message-passing algorithm to explore the space of possible network structures, and show that a correct estimation of the network degree of connectedness leads to more reliable estimations for systemic risk. This algorithm is also able to produce maximally fragile structures, providing a practical upper bound for the risk of contagion when the actual network structure is unknown. We test our algorithm on ensembles of synthetic data encoding some features of real financial networks (sparsity and heterogeneity), finding that more accurate estimations of risk can be achieved. Finally we find that this algorithm can be used to control the amount of information regulators need to require from banks in order to sufficiently constrain the reconstruction of financial networks.

• 1. Scuola Internazionale Superiore di Studi Avanzati / International School for Advanced Studies (SISSA / ISAS)
• 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud
• 3.
The Abdus Salam International Centre for Theoretical Physics, ICTP Trieste

## Record statistics and persistence for a random walk with a drift

### Satya N. Majumdar 1, Gregory Schehr 1, Gregor Wergen 2

#### Journal of Physics A: Mathematical and Theoretical 45 (2012) 355002

We study the statistics of records of a one-dimensional random walk of $n$ steps, starting from the origin, and in the presence of a constant bias $c$. At each time step the walker makes a random jump of length $\eta$ drawn from a continuous distribution $f(\eta)$ which is symmetric around a constant drift $c$. We focus in particular on the case where $f(\eta)$ is a symmetric stable law with a Lévy index $0 < \mu \leq 2$. The record statistics depends crucially on the persistence probability which, as we show here, exhibits different behaviors depending on the sign of $c$ and the value of the parameter $\mu$. Hence, in the limit of a large number of steps $n$, the record statistics is sensitive to these parameters ($c$ and $\mu$) of the jump distribution. We compute the asymptotic mean record number after $n$ steps as well as its full distribution $P(R,n)$. We also compute the statistics of the ages of the longest and the shortest lasting records. Our exact computations show the existence of five distinct regions in the $(c, 0 < \mu \leq 2)$ strip where these quantities display qualitatively different behaviors. We also present numerical simulation results that verify our analytical predictions.

• 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud
• 2. Institut für Theoretische Physik, Universität zu Köln

## Record Statistics for Multiple Random Walks

### Gregor Wergen 1, Satya N. Majumdar 2, Gregory Schehr 2

#### Physical Review E 86 (2012) 011119

We study the statistics of the number of records $R_{n,N}$ for $N$ identical and independent symmetric discrete-time random walks of $n$ steps in one dimension, all starting at the origin at step 0.
At each time step, each walker jumps by a random length drawn independently from a symmetric and continuous distribution. We consider two cases: (I) when the variance $\sigma^2$ of the jump distribution is finite and (II) when $\sigma^2$ is divergent, as in the case of Lévy flights with index $0 < \mu < 2$. In both cases we find that the mean record number grows universally as $\sim \alpha_N \sqrt{n}$ for large $n$, but with a very different behavior of the amplitude $\alpha_N$ for $N > 1$ in the two cases. We find that for large $N$, $\alpha_N \approx 2 \sqrt{\log N}$ independently of $\sigma^2$ in case I. In contrast, in case II, the amplitude approaches an $N$-independent constant for large $N$, $\alpha_N \approx 4/\sqrt{\pi}$, independently of $0 < \mu < 2$. For finite $\sigma^2$ we argue, and this is confirmed by our numerical simulations, that the full distribution of $(R_{n,N}/\sqrt{n} - 2 \sqrt{\log N})\, \sqrt{\log N}$ converges to a Gumbel law as $n \to \infty$ and $N \to \infty$. In case II, our numerical simulations indicate that the distribution of $R_{n,N}/\sqrt{n}$ converges, for $n \to \infty$ and $N \to \infty$, to a universal nontrivial distribution, independently of $\mu$. We discuss the applications of our results to the study of the record statistics of 366 daily stock prices from the Standard & Poor's 500 index.

• 1. Institut für Theoretische Physik, Universität zu Köln
• 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud

## Self-assembly of spherical interpolyelectrolyte complexes from oppositely charged polymers

### Vladimir A. Baulin 1, 2, Emmanuel Trizac 3

#### Soft Matter 8 (2012) 2755-2766

The formation of inter-polyelectrolyte complexes from the association of oppositely charged polymers in an electrolyte is studied. The charged polymers are linear oppositely charged polyelectrolytes, with possibly a neutral block.
This leads to complexes with a charged core and a more dilute corona of dangling chains, or of loops (flower-like structure). The equilibrium aggregation number of the complexes (number of polycations $m_+$ and polyanions $m_-$) is determined by minimizing the relevant free energy functional, the Coulombic contribution of which is worked out within Poisson-Boltzmann theory. The complexes can be viewed as colloids that are permeable to micro-ionic species, including salt. We find that the complexation process can be highly specific, giving rise to a very localized size distribution in composition space $(m_+, m_-)$.

• 1. Institució Catalana de Recerca i Estudis Avançats [Barcelona] (ICREA), ICREA – Universitat de Barcelona – Fundació Catalana per a la Recerca i la Innovació (FCRI)
• 2. Departament d'Enginyeria Química, Universitat Rovira i Virgili
• 3. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud

## Short time growth of a KPZ interface with flat initial conditions

### Thomas Gueudre 1, Pierre Le Doussal 1, Alberto Rosso 2, Adrien Henry 2, Pasquale Calabrese 3

#### Physical Review E 86 (2012) 041151

The short time behavior of the 1+1 dimensional KPZ growth equation with a flat initial condition is obtained from the exact expressions of the moments of the partition function of a directed polymer with one endpoint free and the other fixed. From these expressions, the short time expansions of the lowest cumulants of the KPZ height field are exactly derived. The results for these two classes of cumulants are checked in high-precision lattice numerical simulations. The short time limit considered here is relevant for the study of interface growth in the large diffusivity/weak noise limit, and describes the universal crossover between the Edwards-Wilkinson and KPZ universality classes for an initially flat interface.

• 1.
Laboratoire de Physique Théorique de l'ENS (LPTENS), CNRS : UMR8549 – Université Paris VI - Pierre et Marie Curie – Ecole Normale Supérieure de Paris - ENS Paris
• 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud
• 3. Dipartimento di Fisica dell'Università di Pisa and INFN, Pisa

## Slow quench dynamics of Mott-insulating regions in a trapped Bose gas

### Jean-Sebastien Bernier 1, 2, Dario Poletti 3, Peter Barmettler 3, Guillaume Roux 4, Corinna Kollath 1, 5

#### Physical Review A 85 (2012) 033641

We investigate the dynamics of Mott-insulating regions of a trapped bosonic gas as the interaction strength is changed linearly with time. The bosonic gas considered is loaded into an optical lattice and confined to a parabolic trapping potential. Two situations are addressed: the formation of Mott domains in a superfluid gas as the interaction is increased, and their melting as the interaction strength is lowered. In the first case, depending on the local filling, Mott-insulating barriers can develop and hinder the density and energy transport throughout the system. In the second case, the density and local energy adjust rapidly whereas long-range correlations require a longer time to settle. For both cases, we consider the time evolution of various observables: the local density and energy, and their respective currents, the local compressibility, the local excess energy, the heat and single-particle correlators. The evolution of these observables is obtained using the time-dependent density-matrix renormalization group technique, and comparisons with time evolutions done within the Gutzwiller approximation are provided.

• 1. Centre de Physique Théorique (CPHT), CNRS : UMR7644 – Polytechnique - X
• 2. Department of Physics and Astronomy, University of British Columbia
• 3. Département de Physique Théorique, Université de Genève
• 4.
Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud
• 5. Département de Physique Théorique, Université de Genève

## Spectral determinants and zeta functions of Schrödinger operators on metric graphs

### J. M. Harrison 1, K. Kirsten 1, C. Texier 2, 3

#### Journal of Physics A: Mathematical and Theoretical 45 (2012) 125206

A derivation of the spectral determinant of the Schrödinger operator on a metric graph is presented where the local matching conditions at the vertices are of the general form classified according to the scheme of Kostrykin and Schrader. To formulate the spectral determinant we first derive the spectral zeta function of the Schrödinger operator using an appropriate secular equation. The result obtained for the spectral determinant is along the lines of a recent conjecture.

• 1. Department of Mathematics, Baylor University
• 2. Laboratoire de Physique des Solides (LPS), CNRS : UMR8502 – Université Paris XI - Paris Sud
• 3. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud

## Statistical physics-based reconstruction in compressed sensing

### Florent Krzakala 1, Marc Mézard 2, François Sausset 2, Yifan Sun 1, 3, Lenka Zdeborová 4

#### Physical Review X 2 (2012) 021005

Compressed sensing is triggering a major evolution in signal acquisition. It consists of sampling a sparse signal at a low rate and later using computational power for its exact reconstruction, so that only the necessary information is measured. Currently used reconstruction techniques are, however, limited to acquisition rates larger than the true density of the signal. We design a new procedure which is able to reconstruct the signal exactly with a number of measurements that approaches the theoretical limit in the limit of large systems.
It is based on the joint use of three essential ingredients: a probabilistic approach to signal reconstruction, a message-passing algorithm adapted from belief propagation, and a careful design of the measurement matrix inspired by the theory of crystal nucleation. The performance of this new algorithm is analyzed by statistical physics methods. The obtained improvement is confirmed by numerical studies of several cases.

• 1. Laboratoire de Physico-Chimie Théorique (LPCT), CNRS : UMR7083 – ESPCI ParisTech
• 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud
• 3. LMIB and School of Mathematics and Systems Science, Beihang University
• 4. Institut de Physique Théorique (ex SPhT) (IPHT), CNRS : URA2306 – CEA : DSM/IPHT

## Strong-coupling theory for a polarizable planar colloid

### L. Samaj 1, 2, E. Trizac 2

#### Contributions to Plasma Physics 52 (2012) 53

We propose a strong-coupling analysis of a polarizable planar interface, in the spirit of a recently introduced Wigner-Crystal formulation. The system is made up of two moieties: a semi-infinite medium ($z<0$) with permittivity $\epsilon'$, while the other half-space ($z>0$) is occupied by a solution with permittivity $\epsilon$ and mobile counter-ions (no added electrolyte). The interface at $z=0$ bears a uniform surface charge. The counter-ion density profile is worked out explicitly for both repulsive and attractive dielectric image cases.

• 1. Institute of Physics
• 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud

## Structure factors in granular experiments with homogeneous fluidization

### A. Puglisi 1, A. Gnoli 2, 3, G. Gradenigo 4, A. Sarracino 5, D. Villamaina 6

Velocity and density structure factors are measured over a hydrodynamic range of scales in a horizontal quasi-2d fluidized granular experiment, with packing fractions $\phi \in [10\%, 40\%]$.
The fluidization is realized by vertically vibrating a rough plate, on top of which particles perform a Brownian-like horizontal motion in addition to inelastic collisions. On one hand, the density structure factor is equal to that of elastic hard spheres, except in the limit of large length-scales, as occurs in the presence of an effective interaction. On the other hand, the velocity field shows a more complex structure which is a genuine expression of a non-equilibrium steady state and which can be compared to a recent fluctuating hydrodynamic theory with non-equilibrium noise. The temporal decay of velocity-mode autocorrelations is compatible with linear hydrodynamic equations with rates dictated by viscous momentum diffusion, corrected by a typical interaction time with the thermostat. Equal-time velocity structure factors display a peculiar shape with a plateau at large length-scales and another one at small scales, marking two different temperatures: the 'bath' temperature $T_b$, depending on shaking parameters, and the 'granular' temperature $T_g$.

• 1. Dipartimento di Fisica, Università La Sapienza
• 2. Dipartimento di Fisica, Università degli Studi di Roma "La Sapienza"
• 3. Istituto dei Sistemi Complessi, CNR - Consiglio Nazionale delle Ricerche
• 4. Dipartimento di Fisica, CNR - Consiglio Nazionale delle Ricerche
• 5. Dipartimento di Fisica, CNR - Consiglio Nazionale delle Ricerche
• 6. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud

## Structure of trajectories of complex matrix eigenvalues in the Hermitian-non-Hermitian transition

### O. Bohigas 1, J. X. de Carvalho 2, 3, M. P. Pato 2

#### Physical Review E 86 (2012) 031118

The statistical properties of trajectories of eigenvalues of Gaussian complex matrices whose Hermitian condition is progressively broken are investigated.
It is shown how the ordering on the real axis of the real eigenvalues is reflected in the structure of the trajectories and also in the final distribution of the eigenvalues in the complex plane.

• 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud
• 2. Instituto de Física, Universidade de São Paulo
• 3. Max-Planck-Institut für Physik komplexer Systeme

## Super-Rough Glassy Phase of the Random Field XY Model in Two Dimensions

### Anthony Perret 1, Zoran Ristivojevic 2, Pierre Le Doussal 2, Gregory Schehr 1, Kay J. Wiese 2

#### Physical Review Letters 109 (2012) 157205

We study both analytically, using the renormalization group (RG) to two-loop order, and numerically, using an exact polynomial algorithm, the disorder-induced glass phase of the two-dimensional XY model with quenched random symmetry-breaking fields and without vortices. In the super-rough glassy phase, i.e. below the critical temperature $T_c$, the disorder- and thermally-averaged correlation function $B(r)$ of the phase field $\theta(x)$, $B(r) = \overline{\langle [\theta(x) - \theta(x+r)]^2 \rangle}$, behaves, for $r \gg a$, as $B(r) \simeq A(\tau) \ln^2 (r/a)$, where $r = |r|$ and $a$ is a microscopic length scale. We derive the RG equations up to cubic order in $\tau = (T_c - T)/T_c$ and predict the universal amplitude $A(\tau) = 2\tau^2 - 2\tau^3 + \mathcal{O}(\tau^4)$. The universality of $A(\tau)$ results from nontrivial cancellations between nonuniversal constants of the RG equations. Using an exact polynomial algorithm on an equivalent dimer version of the model we compute $A(\tau)$ numerically and obtain remarkable agreement with our analytical prediction, up to $\tau \approx 0.5$.

• 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud
• 2.
Laboratoire de Physique Théorique de l'ENS (LPTENS), CNRS : UMR8549 – Université Paris VI - Pierre et Marie Curie – Ecole Normale Supérieure de Paris - ENS Paris

## Survival probability of an immobile target surrounded by mobile traps

### Jasper Franke 1, Satya N. Majumdar 2

#### Journal of Statistical Mechanics: Theory and Experiment (2012) P05024

We study analytically, in one dimension, the survival probability $P_{s}(t)$ up to time $t$ of an immobile target surrounded by mutually noninteracting traps, each performing a continuous-time random walk (CTRW) in continuous space. We consider a general CTRW with symmetric and continuous (but otherwise arbitrary) jump length distribution $f(\eta)$ and arbitrary waiting time distribution $\psi(\tau)$. The traps are initially distributed uniformly in space with density $\rho$. We prove an exact relation, valid for all time $t$, between $P_s(t)$ and the expected maximum $E[M(t)]$ of the trap process up to time $t$, for rather general stochastic motion $x_{\rm trap}(t)$ of each trap. When $x_{\rm trap}(t)$ represents a general CTRW with arbitrary $f(\eta)$ and $\psi(\tau)$, we are able to compute exactly the first two leading terms in the asymptotic behavior of $E[M(t)]$ for large $t$. This allows us subsequently to compute the precise asymptotic behavior, $P_s(t) \sim a\, \exp[-b\, t^{\theta}]$, for large $t$, with exact expressions for the stretching exponent $\theta$ and the constants $a$ and $b$ for an arbitrary CTRW. By choosing appropriate $f(\eta)$ and $\psi(\tau)$, we recover the previously known results for diffusive and subdiffusive traps. However, our result is more general and includes, in particular, the superdiffusive traps as well as totally anomalous traps.

• 1. Institut für Theoretische Physik, Universität zu Köln
• 2.
Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud

## The game of go as a complex network

### Bertrand Georgeot 1, Olivier Giraud 2

#### Europhysics Letters 97, 6 (2012) 68002

We study the game of go from a complex network perspective. We construct a directed network using a suitable definition of tactical moves including local patterns, and study this network for different datasets of professional tournaments and amateur games. The move distribution follows Zipf's law and the network is scale-free, with statistical peculiarities different from those of other real directed networks, such as, e.g., the World Wide Web. These specificities are reflected in the outcome of ranking algorithms applied to it. The fine study of the eigenvalues and eigenvectors of matrices used by the ranking algorithms singles out certain strategic situations. Our results should pave the way to a better modeling of board games and other types of human strategic scheming.

• 1. Laboratoire de Physique Théorique - IRSAMC (LPT), CNRS : UMR5152 – Université Paul Sabatier - Toulouse III
• 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud

## Trace formula for dielectric cavities III: TE modes

### E. Bogomolny 1, R. Dubertrand 2

#### Physical Review E 86 (2012) 026202

The construction of the semiclassical trace formula for the resonances with the transverse electric (TE) polarization for two-dimensional dielectric cavities is discussed. Special attention is given to the derivation of the first two terms of Weyl's series for the average number of such resonances. The obtained formulas agree well with numerical calculations for dielectric cavities of different shapes.

• 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud
• 2.
Institut für Theoretische Physik, University of Heidelberg

## Trace formulae for non-equilibrium Casimir interactions, heat radiation and heat transfer for arbitrary objects

### Matthias Krüger 1, Giuseppe Bimonte 2, Thorsten Emig 3, Mehran Kardar 1

#### Physical Review B 86 (2012) 115423

We present a detailed derivation of heat radiation, heat transfer and (Casimir) interactions for N arbitrary objects in the framework of fluctuational electrodynamics in thermal non-equilibrium. The results can be expressed as basis-independent trace formulae in terms of the scattering operators of the individual objects. We prove that the heat radiation of a single object is positive, and that heat transfer (for two arbitrary passive objects) is from the hotter to the colder body. The heat transferred is also symmetric, exactly reversed if the two temperatures are exchanged. Introducing partial-wave expansions, we transform the results for radiation, transfer and forces into traces of matrices that can be evaluated in any basis, analogous to the equilibrium Casimir force. The method is illustrated by (re)deriving the heat radiation of a plate, a sphere and a cylinder. We analyze the radiation of a sphere for different materials, emphasizing that a simplification often employed for metallic nano-spheres is typically invalid. We derive asymptotic formulae for heat transfer and non-equilibrium interactions for the cases of a sphere in front of a plate and for two spheres, extending previous results. As an example, we show that a hot nano-sphere can levitate above a plate, with the repulsive non-equilibrium force overcoming gravity -- an effect that is not due to radiation pressure.

• 1. Department of Physics, Massachusetts Institute of Technology
• 2. Dipartimento di Scienze Fisiche, Università di Napoli Federico II
• 3.
Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud

## Tuning spreading and avalanche-size exponents in directed percolation with modified activation probabilities

### François Landes 1, E. A. Jagla 2, Alberto Rosso 1

#### Physical Review E 86 (2012) 041150

We consider the directed percolation process as a prototype of systems displaying a nonequilibrium phase transition into an absorbing state. The model is in a critical state when the activation probability is adjusted to some precise value $p_c$. Criticality is lost as soon as the probability to activate sites at the first attempt, $p_1$, is changed. We show here that criticality can be restored by 'compensating' the change in $p_1$ by an appropriate change of the second-attempt activation probability $p_2$ in the opposite direction. At compensation, we observe that the bulk exponents of the process coincide with those of the normal directed percolation process. However, the spreading exponents are changed, and take values that depend continuously on the pair $(p_1, p_2)$. We interpret this situation by acknowledging that the model with modified initial probabilities has an infinite number of absorbing states.

• 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud
• 2. Centro Atómico Bariloche and Instituto Balseiro, Comisión Nacional de Energía Atómica

## Universal reference state in a driven homogeneous granular gas

### M. I. Garcia de Soria 1, P. Maynar 1, E. Trizac 2

#### Physical Review E 85 (2012) 051301

We study the dynamics of a homogeneous granular gas heated by a stochastic thermostat, in the low-density limit.
It is found that, before reaching the stationary regime, the system quickly 'forgets' the initial condition and then evolves through a universal state that depends not only on the dimensionless velocity, but also on the instantaneous temperature, suitably renormalized by its steady-state value. We find excellent agreement between the theoretical predictions at the Boltzmann equation level for the one-particle distribution function and direct Monte Carlo simulations. We conclude that, at variance with the homogeneous cooling phenomenology, the velocity statistics should not be envisioned as a single-parameter scaling form, but as a two-parameter one, keeping track of the distance to stationarity.

• 1. Física Teórica, Universidad de Sevilla
• 2. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud

## Wave pattern induced by a localized obstacle in the flow of a one-dimensional polariton condensate

### P.-É. Larré 1, N. Pavloff 1, A. M. Kamchatnov 2

#### Physical Review B 86 (2012) 165304

Motivated by recent experiments on the generation of wave patterns by a polariton condensate incident on a localized obstacle, we study the characteristics of such flows under the condition that irreversible processes play a crucial role in the system. The dynamics of a non-resonantly pumped polariton condensate in a quasi-one-dimensional quantum wire is modeled by a Gross-Pitaevskii equation with additional phenomenological terms accounting for the dissipation and pumping processes. The response of the condensate flow to an external potential describing a localized obstacle is considered in the weak-perturbation limit and also in the nonlinear regime. The transition from a viscous drag to a regime of wave resistance is identified and studied in detail.

• 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud
• 2.
Institute of Spectroscopy, • ## Wavepacket Dynamics in Nonlinear Schrödinger Equations ### Simon Moulieras 1, Alejandro G. Monastra 2, 3, Marcos Saraceno 4, Patricio Leboeuf 1 #### Physical Review A 85 (2012) 013841 Coherent states play an important role in quantum mechanics because of their unique properties under time evolution. Here we explore this concept for one-dimensional repulsive nonlinear Schrödinger equations, which describe weakly interacting Bose-Einstein condensates or light propagation in a nonlinear medium. It is shown that the dynamics of phase-space translations of the ground state of a harmonic potential is quite simple: the centre follows a classical trajectory whereas its shape does not vary in time. The parabolic potential is the only one that satisfies this property. We study the time evolution of these nonlinear coherent states under perturbations of their shape, or of the confining potential. A rich variety of effects emerges. In particular, in the presence of anharmonicities, we observe that the packet splits into two distinct components. A fraction of the condensate is transferred towards uncoherent high-energy modes, while the amplitude of oscillation of the remaining coherent component is damped towards the bottom of the well. • 1. Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS : UMR8626 – Université Paris XI - Paris Sud • 2. Gerencia Investigación y Aplicaciones, Comision Nacional de Energia Atomica • 3. Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), University of Buenos Aires • 4. Gerencia Investigación y Aplicaciones, Comision Nacional de Energia Atomica • ## Archive ouverte HAL – Mesoscopic Anderson Box: Connecting Weak to Strong Coupling ### Dong E. Liu 1, * Sébastien Burdin 2 Harold U. Baranger 1 Denis Ullmo 3 #### Physical Review B : Condensed matter and materials physics, American Physical Society, 2012, 85 (15), pp.155455 (1-17).
〈10.1103/PhysRevB.85.155455〉 Both the weakly coupled and strong coupling Anderson impurity problems are characterized by a Fermi-liquid theory with weakly interacting quasiparticles. In an Anderson box, mesoscopic fluctuations of the effective single particle properties will be large. We study how the statistical fluctuations at low temperature in these two problems are connected, using random matrix theory and the slave boson mean field approximation (SBMFA). First, for a resonant level model such as results from the SBMFA, we find the joint distribution of energy levels with and without the resonant level present. Second, if only energy levels within the Kondo resonance are considered, the distributions of perturbed levels collapse to universal forms for both orthogonal and unitary ensembles for all values of the coupling. These universal curves are described well by a simple Wigner-surmise type toy model. Third, we study the fluctuations of the mean field parameters in the SBMFA, finding that they are small. Finally, the change in the intensity of an eigenfunction at an arbitrary point is studied, such as is relevant in conductance measurements: we find that the introduction of the strongly-coupled impurity considerably changes the wave function but that a substantial correlation remains. • 1. Duke Physics • 2. LOMA - Laboratoire Ondes et Matière d'Aquitaine • 3. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques • ## Membrane shape at the edge of the dynamin helix sets location and duration of the fission reaction. ### Sandrine Morlot 1, 2 Valentina Galli 1 Marius Klein 2 Nicolas Chiaruttini 3 John Manzi 2 Frédéric Humbert 1 Luis DinisMartin Lenz 4 Giovanni Cappello 2 Aurélien Roux 1 #### Cell, Elsevier (Cell Press), 2012, 151 (3), pp.619-29 The GTPase dynamin polymerizes into a helical coat that constricts membrane necks of endocytic pits to promote their fission. 
However, the dynamin mechanism is still debated because constriction is necessary but not sufficient for fission. Here, we show that fission occurs at the interface between the dynamin coat and the uncoated membrane. At this location, the considerable change in membrane curvature increases the local membrane elastic energy, reducing the energy barrier for fission. Fission kinetics depends on tension, bending rigidity, and the dynamin constriction torque. Indeed, we experimentally find that the fission rate depends on membrane tension in vitro and during endocytosis in vivo. By estimating the energy barrier from the increased elastic energy at the edge of dynamin and measuring the dynamin torque, we show that the mechanical energy spent on dynamin constriction can reduce the energy barrier for fission sufficiently to promote spontaneous fission. : • 1. Department of Biochemistry • 2. PCC - Physico-Chimie-Curie • 3. Nanobiophysique • 4. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques • ## Charge-density waves studied at the surface and at the atomic scale in NbSe3 ### Pierre Monceau 1 C. Brun 2 Zhao-Zhong Wang 2 S. Brazovskii 3 #### Physica B: Condensed Matter, Elsevier, 2012, 407 (11), pp.1845 We have studied by scanning tunneling microscopy (STM) the two charge-density wave (CDW) transitions in NbSe3 on in situ cleaved (b,c) plane. We could identify the three types of chains existing inside a single unit cell as well as characterize how both CDWs are distributed on these elementary chains. We also followed between 5 and 140 K the temperature dependence of first-order CDW satellite spots, obtained from the Fourier transform of the STM images, to extract the surface critical temperatures (T-s). Whereas the high-temperature CDW appears to have comparable critical temperature to the bulk one, the low-T CDW transition occurs at T-2s = 70-75 K, more than 15 K above the bulk T-2b = 59 K while at exactly the same wave number. 
A reasonable mechanism for such an unusually high surface enhancement is a softening of transverse phonon modes involved in the CDW formation. • 1. CristElec MagSup • 2. LPN - Laboratoire de photonique et de nanostructures • 3. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques • ## Archive ouverte HAL – From Weak- to Strong-Coupling Mesoscopic Fermi Liquids ### Dong E. Liu 1 Sébastien Burdin 2 Harold U. Baranger 1 Denis Ullmo 3 #### EPL - Europhysics Letters, European Physical Society/EDP Sciences/Società Italiana di Fisica/IOP Publishing, 2012, 97 (1), pp.17006. 〈10.1209/0295-5075/97/17006〉 We study mesoscopic fluctuations in a system in which there is a continuous connection between two distinct Fermi liquids, asking whether the mesoscopic variation in the two limits is correlated. The particular system studied is an Anderson impurity coupled to a finite mesoscopic reservoir described by random matrix theory, a structure which can be realized using quantum dots. We use the slave boson mean field approach to connect the levels of the uncoupled system to those of the strong coupling Noziéres Fermi liquid. We find strong but not complete correlation between the mesoscopic properties in the two limits and several universal features. • 1. Duke Physics • 2. LOMA - Laboratoire Ondes et Matière d'Aquitaine • 3. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques • ## Finite-temperature and finite-time scaling of the directed polymer free-energy with respect to its geometrical fluctuations ### Elisabeth Agoritsas 1, * Sebastian Bustingorry 2 Vivien Lecomte 3 Gregory Schehr 4 Thierry Giamarchi 1 #### Physical Review E : Statistical, Nonlinear, and Soft Matter Physics, American Physical Society, 2012, 86 (3), pp.031144. <10.1103/PhysRevE.86.031144> We study the fluctuations of the directed polymer in 1+1 dimensions in a Gaussian random environment with a finite correlation length {\xi} and at finite temperature. 
We address the correspondence between the geometrical transverse fluctuations of the directed polymer, described by its roughness, and the fluctuations of its free-energy, characterized by its two-point correlator. Analytical arguments are provided in favor of a generic scaling law between those quantities, at finite time, non-vanishing {\xi} and explicit temperature dependence. Numerical results are in good agreement both for simulations on the discrete directed polymer and on a continuous directed polymer (with short-range correlated disorder). Applications to recent experiments on liquid crystals are discussed. • 1. DPMC - Département de Physique de la Matière Condensée • 2. Centro Atomico Bariloche • 3. LPMA - Laboratoire de Probabilités et Modèles Aléatoires • 4. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques We report on scanning-tunneling microscopy experiments in a charge-density wave (CDW) system allowing us to visually capture and study in detail the individual solitons corresponding to the self-trapping of just one electron. This "Amplitude Soliton" is marked by vanishing of the CDW amplitude and by the pi shift of its phase. It might be the realization of the spinon, the long-sought particle (along with the holon) in the science of strongly correlated electronic systems. As a distinct feature we also observe one-dimensional Friedel oscillations superimposed on the CDW which develop independently of solitons.
https://math.stackexchange.com/questions/2460506/vector-field-flow-chain-rule
# Vector Field Flow Chain Rule Let $M\subset \mathbb{R}^k$ be a smooth $m$-manifold, and let $X$ be a smooth vector field on $M$. Let $\phi$ be the flow of $X$, defined via $\phi(t,p) := \gamma(t)$, where $\gamma$ is the integral curve with $\gamma(0) = p$ (i.e., $\gamma$ satisfies $\dot{\gamma}(t) = X(\gamma(t))$). Let $Y$ be another vector field on $M$, with flow $\psi$. Set $\phi^s(\cdot) := \phi(s,\cdot)$, and $\psi^t$ analogously. Let $\beta(s,t) := \phi^s\circ\psi^t\circ\phi^{-s}\circ\psi^{-t}(p)$, for some $p\in M$. My professor now writes $$\frac{\partial\beta}{\partial s}(0,t) = X(p) - d\psi^t(\psi^{-t}(p))X(\psi^{-t}(p))$$ I don't understand exactly how he is using the chain rule, etc., to get to this equation. I know that $$\frac{d}{dt}\phi^t(p) = X(\phi^t(p)),\quad \phi^0(p)=p,$$ but I can't figure out the rest. Hint (from an answer): set $\Psi(s,s')=\phi^s\circ\psi^t\circ\phi^{-s'}\circ\psi^{-t}(p)$, so that $\beta(s,t)=\Psi(s,s)$. Then compute $\partial_s\Psi(0,0)+\partial_{s'}\Psi(0,0)$, which gives the result.
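Expanding the hint into a full chain-rule computation: write $\beta(s,t)=\phi^s(\psi^t(\phi^{-s}(\psi^{-t}(p))))$ and note that $s$ enters twice, once through the outer flow $\phi^s$ and once through the inner $\phi^{-s}$, so the derivative picks up two terms:

```latex
\begin{aligned}
\frac{\partial\beta}{\partial s}(s,t)
  &= X\bigl(\phi^s\circ\psi^t\circ\phi^{-s}\circ\psi^{-t}(p)\bigr)
   + d\phi^s\, d\psi^t\,\frac{\partial}{\partial s}
     \Bigl[\phi^{-s}\bigl(\psi^{-t}(p)\bigr)\Bigr],
  \qquad
  \frac{\partial}{\partial s}\,\phi^{-s}(y) = -X\bigl(\phi^{-s}(y)\bigr).
\end{aligned}
```

At $s=0$ we have $\phi^0 = \mathrm{id}$ and $d\phi^0 = \mathrm{id}$, so the first term becomes $X(\psi^t\circ\psi^{-t}(p)) = X(p)$ and the second becomes $-\,d\psi^t(\psi^{-t}(p))\,X(\psi^{-t}(p))$, recovering the professor's formula.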
http://mathhelpforum.com/calculus/176789-fun-natural-logs.html
# Thread: fun with natural logs.... 1. ## fun with natural logs.... ok so we have the problem... ln N - ln(N-2000) = t + C so i tried to use ln a - ln b = ln(a/b) so i got ln(N/(N-2000)) = t + C so then i exponentiated both sides and got N/(N-2000) = e^(t+C) my book got the same thing except their answer was positive, i dont see how this could be possible? can someone tell me where i went wrong 2. Originally Posted by slapmaxwell1 ok so we have the problem... ln N - ln(N-2000) = t + C so i tried to use ln a - ln b = ln(a/b) so i got ln(N/(N-2000)) = t + C so then i exponentiated both sides and got N/(N-2000) = e^(t+C) my book got the same thing except their answer was positive, i dont see how this could be possible? can someone tell me where i went wrong You haven't done anything wrong at this stage; the error lies elsewhere with your working or with the textbook. What is the original question? $N=e^{t+C}(N-2000)$ $N=Ne^{t+C}-2000e^{t+C}$ $N-Ne^{t+C}=-2000e^{t+C}$ $N(1-e^{t+C})=-2000e^{t+C}$ $N=\dfrac{-2000e^{t+C}}{1-e^{t+C}}$ 3. Notice that $e^{t+C}= ce^t$ where $c= e^C$. Also, you can multiply both numerator and denominator by -1 to get $\frac{2000ce^t}{ce^t- 1}$ Perhaps that is the "positive" answer you refer to.
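A quick numerical sanity check of the "positive" closed form $N = 2000e^{t+C}/(e^{t+C}-1)$ from the replies above (the function name and the sample value of C are ours, chosen for illustration):

```python
import math

def N_of_t(t, C=0.5):
    # the "positive" closed form: N = 2000 e^(t+C) / (e^(t+C) - 1)
    e = math.exp(t + C)
    return 2000.0 * e / (e - 1.0)

# verify it satisfies the original equation ln N - ln(N - 2000) = t + C
for t in (0.1, 1.0, 3.0):
    N = N_of_t(t)
    assert abs(math.log(N) - math.log(N - 2000.0) - (t + 0.5)) < 1e-9
```

Note that for t + C > 0 the exponential exceeds 1, so both N and N - 2000 are positive and the logarithms are defined; the two forms of the answer differ only by the sign flip in numerator and denominator.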
https://www.arxiv-vanity.com/papers/0705.1015/
# Jet-breaks in the X-ray Light-Curves of Swift GRB Afterglows

### A. Panaitescu

Space Science and Applications, MS D466, Los Alamos National Laboratory, Los Alamos, NM 87545, USA

Keywords: gamma-rays: bursts - radiation mechanisms: non-thermal - shock waves

###### Abstract

In the set of 236 GRB afterglows observed by Swift between January 2005 and March 2007, we identify 30 X-ray light-curves whose power-law fall-off exhibits a steepening ("break") at 0.1–10 day after trigger, to a decay steeper than . For most of these afterglows, the X-ray spectral slope and the decay indices before and after the break can be accommodated by the standard jet model, although a different origin of the breaks cannot be ruled out. In addition, there are 27 other afterglows whose X-ray light-curves may also exhibit a late break to a steep decay, but the evidence is not that compelling. The X-ray emissions of 38 afterglows decay slower than until after 3 day, half of them exhibiting such a slow decay until after 10 day. Therefore, the fraction of well-monitored Swift afterglows with potential jet-breaks is around 60 percent, whether we count only the strongest cases for each type or all of them. This fraction is comparable to the 75 percent of pre-Swift afterglows whose optical light-curves displayed similar breaks at day. The properties of the prompt emission of Swift afterglows with light-curve breaks show the same correlations (peak energy of GRB spectrum with the burst isotropic output and with burst collimated output) as previously found for pre-Swift optical afterglows with light-curve breaks (the Amati and Ghirlanda relations, respectively). However, we find that the Ghirlanda relation is largely a consequence of Amati's and that the use of the jet-break time leads to a stronger Ghirlanda correlation only when the few outliers to the Amati relation are included.

## 1 Introduction

Observations of Gamma-Ray Burst (GRB) afterglows from 1999 to 2005 have evidenced the existence of breaks in the optical light-curve of many afterglows, occurring at 0.3–3 day and being followed by a flux decay , with the index ranging from 1.3 to 2.8. These breaks have been widely interpreted as due to the tight collimation of GRB outflows: when the jet Lorentz factor decreases below the inverse of the jet half-opening (i.e. when the cone of relativistically beamed emission is wider than the jet), the observer "sees" the jet boundary, which leads to a faster decay of the afterglow flux (synchrotron emission from the ambient medium shocked by the blast-wave). This decay may be further "accelerated" by the lateral spreading of the jet (Rhoads 1999). The temporal–spectral properties of the afterglow optical emission are roughly consistent with the expectations of the standard jet model, if it is assumed that the shock microphysical parameters and blast-wave kinetic energy are constant, and the blast-wave kinetic energy per solid angle is the same along any direction. This consistency yielded support to the jet interpretation for the optical light-curve breaks. Nevertheless, the basic confirmation of the jet model through observations of achromatic breaks (i.e. exhibited by light-curves at different frequencies) was lacking because of the limited coverage in the X-rays. To a large extent, because of the sparser optical follow-up, that proof is still modest today, despite the good X-ray monitoring of GRB afterglows provided by Swift. Collimation of GRB outflow is a desirable feature to decrease the burst output, as the largest isotropic-equivalent energy release approaches the equivalent of a solar mass (GRB 990123 – Kulkarni et al 1999).
From the light-curve break epoch of day, it follows that the half-opening of GRB jets is a few to several degrees, for which the GRB output is reduced to erg (e.g. Frail et al 2001, Panaitescu & Kumar 2001). However, such a tight collimation of the GRB outflow is more than necessary on energetic grounds, as the accretion of the debris torus formed during the collapse of a massive star (the origin of long-duration bursts – e.g. Woosley 1993, Paczyński 1998) or of the black-hole spin can power relativistic jets with more than erg (e.g. Mészáros, Rees & Wijers 1999, Narayan, Piran & Kumar 2001). In other words, without demanding too much energy, GRB jets can be wider than several degrees, placing the afterglow jet-break epoch (which scales as for a homogeneous circumburst medium and as for a wind-like medium) later than usually reachable by afterglow observations. Recently, Burrows & Racusin (2007) have suggested that Swift X-ray afterglows do not display day breaks as often as pre-Swift optical afterglows. The purpose of this article is to identify the afterglows observed by Swift from January 2005 through March 2007 whose X-ray light-curves steepen to a decay faster than (which is, roughly, the lower limit of the post-break decay indices of pre-Swift optical afterglows – figure 2 of Zeh, Klose & Kann 2006), to find the Swift X-ray afterglows that were monitored longer than a few days and displayed a decay slower than (indicating a jet-break occurring well after 1 day), and to compare the fraction of Swift X-ray afterglows with potential jet-breaks to that of pre-Swift optical afterglows with such breaks. In addition, the temporal–spectral properties of the X-ray afterglows will be used to identify, on a case-by-case basis, the required features of the standard jet model.
## 2 Late breaks in X-ray afterglow light-curves Figures 1, 2, and 3 display all afterglows observed by Swift until April 2007 with good evidence for a late, 0.1–10 day break, followed to a decay steeper than . Their pre and post-break indices ( and ) of the power-law X-ray light-curve () and the slope of the power-law X-ray continuum (spectral distribution of energetic flux ) are listed in Table 1. The decay indices were obtained by fitting the 0.3–10 keV count-rate light-curves of Evans et al (2007); the spectral slopes were obtained through power-law fits in the 1–5 keV range to the specific count-rates of Butler & Kocevski (2007). Figure 4 compares the decay indices and spectral slopes of these 30 afterglows with the relations expected for the synchrotron emission from the forward-shock (the standard jet model), assuming that the circumburst medium is either homogeneous or has the radial stratification expected for the wind of a massive stellar GRB progenitor. The models that reconcile the observed and are listed in Table 1. Derivations of these relations for a spherical outflow and for a spreading jet can be found in Mészáros & Rees (1997), Sari, Piran & Narayan (1998), Chevalier & Li (1999), Rhoads (1999), Sari, Piran & Halpern (1999), Panaitescu & Kumar (2000). For the post jet-break emission, we also consider the case of a conical jet which does not undergo significant spreading, as could happen if the jet edge is not too sharp (Kumar & Granot 2003), but surrounded by an envelope which prevents its lateral expansion. 
In this case, the post-break decay index is larger by 3/4 (1/2) than the pre-break index for a homogeneous (wind-like) medium (Panaitescu, Mészáros & Rees 1998), this increase resulting from that, after the jet-break, the number of emitting electrons within the ”visible” region of angular extent equal to the inverse of the jet Lorentz factor stops increasing (as the jet is decelerated) because the outflow opening is less than that of the cone of relativistic beaming. Lateral spreading of the jet, if it occurs, enhances the jet deceleration and yields an extra contribution to the jet-break steepening which is smaller than the 3/4 (1/2) resulting from the ”geometrical” effect described above. As can be seen from Figure 4 and Table 1, most pre-break X-ray decays require either that the cooling frequency ( is below X-rays (model S1), in which case is independent of the ambient medium stratification, or that the medium is homogeneous and is above X-rays (model S2a). For six afterglows, the pre-break decay is too slow to be explained by the standard forward-shock model, indicating a departure from its assumptions. Most likely, this departure is the increase of the shock’s energy caused by the arrival of fresh ejecta (Paczyński 1998, Rees & Mészáros 1998, Panaitescu, Mészáros & Rees 1998, Nousek et al 2006, Panaitescu et al 2006, Zhang et al 2006). If the post-break decays of these six afterglows are attributed to a jet whose boundary is already visible then the jet-break time should be before the epoch when energy injection ceases so that, shortly after energy injection stops being dynamically important, the jet Lorentz factor falls below the inverse of the jet opening. Alternatively, the X-ray light-curve breaks of the six afterglows with slow pre-break decays could be attributed to the end of energy injection into a spherical (or wide opening) blast-wave. 
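The decay-index versus spectral-slope relations invoked around Figure 4 were lost in this extraction; for reference, the standard slow-cooling synchrotron closure relations (e.g. Sari, Piran & Narayan 1998; Chevalier & Li 1999; Sari, Piran & Halpern 1999), with $F_\nu \propto t^{-\alpha}\nu^{-\beta}$, are summarized below. The mapping onto the "S1"/"S2a" model labels is our reading of the text, not a reproduction of Table 1:

```latex
% Spherical outflow (pre-break), slow cooling:
\begin{aligned}
\nu_c < \nu_x \ \bigl(\beta = \tfrac{p}{2}\bigr):
  &\quad \alpha = \tfrac{3\beta-1}{2} \quad \text{(any medium; model S1)}\\
\nu_x < \nu_c \ \bigl(\beta = \tfrac{p-1}{2}\bigr):
  &\quad \alpha = \tfrac{3\beta}{2} \ \text{(homogeneous, S2a)},
   \qquad \alpha = \tfrac{3\beta+1}{2} \ \text{(wind)}\\
% Post jet-break:
\text{laterally spreading jet:}
  &\quad \alpha = 2\beta \ (\nu_c < \nu_x),
   \qquad \alpha = 2\beta + 1 \ (\nu_x < \nu_c)\\
\text{conical (non-spreading) jet:}
  &\quad \alpha = \alpha_{\rm sphere} + \tfrac{3}{4} \ \text{(homogeneous)},
   \qquad \alpha_{\rm sphere} + \tfrac{1}{2} \ \text{(wind)}
\end{aligned}
```

The last line restates the 3/4 (homogeneous) and 1/2 (wind) "geometrical" steepenings quoted in the text for a jet that does not spread laterally.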
As shown in the lower panel of Figure 4, the post-break decays of four of these six afterglows are consistent with the S2a model ( above X-ray, homogeneous medium); still, two afterglows (GRBs 060413 and 060607A) exhibit a post-break decay that is too steep for any ”S” model. Attributing X-ray light-curve breaks to cessation of energy injection can be extended to other afterglows: for the measured spectral slope , the post-break decays of 19 afterglows are consistent within with that expected for a spherical outflow (”S” models in the lower panel of Figure 4). Here, consistency within between a model expectation and an observed index is defined by being within of zero, where and are the standard deviations given in Table 1. Thus only 11 afterglows have a post-break decay that is too fast for a spherical outflow and require a jet-interpretation for the break. However, it seems unlikely that cessation of energy injection can be so often the source of X-ray light-curve breaks, for the following reason. A similar analysis of the optical decay indices and spectral slopes done for 10 pre-Swift afterglows with optical light-curve breaks indicated that six of those breaks are consistent with arising from an episode of energy injection into a spherical blast-wave, ending at the break epoch (Panaitescu 2005a). Nevertheless, the numerical modelling of the radio, optical, and X-ray emission of those six afterglows has shown that the broadband emission of only two of them can be accommodated by energy injection (Panaitescu 2005b), the primary reason for the failure of this model being that the radio emission from the ejecta electrons that were accelerated by the reverse shock is too bright. Therefore, while it is possible that up to 2/3 of the Swift X-ray breaks listed in Table 1 arise from cessation of energy injection, this interpretation does not find much support in the light-curve breaks of pre-Swift afterglows. 
For this reason, we propose that the 30 Swift X-ray breaks of Table 1 are due to the GRB outflow being narrowly collimated, although a different origin cannot be ruled out based only on X-ray observations. Last column of Table 1 lists the combination of models for the pre- and post-break decays that explains both phases in a self-consistent manner, the type of medium and location of cooling frequency being the same both before and after the break. The features of those ”global” afterglow models show that: the cooling frequency must be below X-rays (”1” models) for 2/3 of afterglows, 1/4 of afterglows require a homogeneous medium (”a” models), 1/10 require a wind medium (”b” models), 1/3 of afterglows require a spreading jet (”J” models), and 1/3 require a conical jet (”j” models), larger fractions allowing any of the above features as they are not always well constrained. This shows that there is a substantial diversity in the details of the forward-shock model that accommodate the temporal and spectral properties of Swift X-ray afterglows. ## 3 Swift X-ray and pre-Swift optical breaks In the Jan05–Mar07 set of Swift afterglows, we find another 27 potential jet-breaks at 0.1–10 d, followed by a decay or steeper: GRB 050401, 050408, 050525A, 050603, 050712, 050713A, 050713B, 050726, 050802, 050826, 050922B, 051001, 051016B, 051211B, 060121, 060124, 060210, 060218, 060219, 060306, 060707, 060719, 060923C, 061019, 061126, 070125, 070318. For some of these afterglows, the post-break decay was followed for only 0.5 dex in time, for others, the break is only marginally significant, with the break magnitude () being smaller than for the 30 afterglows in Table 1. Therefore, in the Jan05–Mar07 set of X-ray afterglows, there could be as many as 57 with jet-breaks. However, we cannot exclude the possibility that some of those light-curve breaks arise from another mechanism, such as the sudden change of energy injected in the forward shock. 
In the same sample, there are 18 afterglows (GRB 050607, 050915B, 051016A, 060108, 060111A, 060115, 060123, 060510A, 060604, 060708, 060712, 060904A, 060912A, 060923A, 061110A, 070223, 070224, 070328) exhibiting a decay or slower until 3–10 day, indicating that their jet-breaks occurred after the last observation. Similar decays, but lasting until 10–30 day, are displayed by 13 afterglows (GRB 050716, 050824, 051021A, 051109A, 051117A, 060202, 060714, 060814, 061007, 061121, 061122, 070110, 070129) and by 6 afterglows (GRB 050416A, 050822, 060206, 060319, 060729, 061021) until after 30 day. Thus, the number of well-monitored afterglows without a jet-break is 19, but could be as high as 37 if all the other 18 light-curves followed for less than 10 day are included. For the remaining more than 100 afterglows the temporal coverage is insufficient to test for the existence of jet-breaks. We conclude that, if only the afterglows with good evidence for existence or lack of jet-breaks are counted, then the fraction of Swift afterglows with jet-breaks is 30/(30+18)=0.63; if we include all potential cases for each type then that fraction is 57/(57+37)=0.61. In pre-Swift afterglow observations, evidence for light-curve breaks is found only in the optical emission. The X-ray coverage of pre-Swift afterglows at the time of the optical break is too limited to test for the existence of a simultaneous break in the X-ray emission. The radio light-curves of a dozen pre-Swift afterglows show breaks at 1–10 day, however the pre and post-break decays indicate that those breaks arose from the passage of the synchrotron peak frequency through the radio. All radio post-break decays are slower than , hence a jet origin for those breaks is very unlikely. 
There are 12 pre-Swift optical afterglows with good evidence for a break at 0.3–3 day to a decay steeper than : GRB 980519, 990123, 990510, 991216, 000301C, 000926, 011211, 030226, 030328, 030329, 030429, 041006 (figure 1 of Zeh, Klose & Kann 2006). Three afterglows (GRB 011121, 020124, 020405) may also have had a break, though the evidence is not so strong. A break to a decay slightly less steep than was observed for three afterglows (GRB 010222, 020813, 021004). We find only 5 optical afterglows followed for more than several days that have a decay slower than : GRB 970228, 970508, 980329, 000418, 030323. Therefore the fraction of well-monitored optical afterglows with potential jet-breaks is 12/(12+5)=0.71 if only the best cases for light-curve breaks are taken into account and 18/(18+5)=0.78 including the other 6 potential optical breaks. Thus the fraction of Swift X-ray afterglows with light-curve breaks at 0.1–10 day (60 percent) is slightly smaller than that of pre-Swift optical afterglows with breaks at 0.3–3 day (75 percent).

## 4 Amati and Ghirlanda relations

Having identified new breaks in afterglow light-curves that may qualify as jet-breaks, we calculate the jet opening from the break epoch, assuming a GRB efficiency of 50 percent and a homogeneous circumburst medium of particle density . From there, the GRB collimated output is

$$E_{\rm jet} = 1.9\times 10^{50} \left( \frac{E_\gamma}{10^{53}\,{\rm erg}} \cdot \frac{t_{b,d}}{z+1} \right)^{3/4} {\rm erg}, \qquad (1)$$

where $E_\gamma$ is the burst isotropic-equivalent output in the (host-frame) 1 keV–10 MeV range, $t_{b,d}$ is the jet-break epoch measured in days, and the numerical coefficient is for the arrival-time of photons emitted from the jet edge. The results below do not change much if a wind-like medium is assumed, for which $E_{\rm jet} \propto [E_\gamma t_{b,d}/(z+1)]^{1/2}$, because we shall use $\log E_{\rm jet}$ in the calculation of the correlation coefficient and changing the multiplying factor (3/4 to 1/2) of $\log [E_\gamma t_{b,d}/(z+1)]$ does not affect it (though it would alter the slope of the best-fits involving $t_{b,d}$).
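Equation (1) can be sketched numerically as follows (the function name and example values are ours; the formula assumes, as in the text, a 50 percent GRB efficiency and a homogeneous circumburst medium):

```python
def jet_energy_erg(E_iso_erg, t_break_days, z):
    """Collimated GRB output from equation (1):
    E_jet = 1.9e50 * (E_iso/1e53 erg * t_b,d/(1+z))^(3/4) erg."""
    return 1.9e50 * (E_iso_erg / 1e53 * t_break_days / (1.0 + z)) ** 0.75

# Example: an isotropic output of 1e53 erg with a jet-break at 1 day
# for a z = 0 burst gives E_jet = 1.9e50 erg by construction.
E_jet = jet_energy_erg(1e53, 1.0, 0.0)
```

A later break (larger $t_{b,d}$) implies a wider jet and hence a larger collimated output, which is why treating the last observation epoch of unbroken light-curves as the break time only underestimates their true $E_{\rm jet}$.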
The necessary information (low and high-energy burst spectral slopes, peak energy of the GRB spectrum, burst fluence and redshift) to study the Amati and Ghirlanda relations (most recently presented by Amati 2006 and Ghirlanda et al 2007) between the intrinsic peak energy of the burst spectrum and the isotropic GRB output or the collimated GRB energy is available for:

1. 15 pre-Swift optical afterglows: GRB 990123, 990510, 991216, 000926, 010222, 011121, 020124, 020405, 020813, 021004, 030226, 030328, 030329, 030429, 041006
2. 9 of the 57 Swift X-ray afterglows with good or potential evidence for jet-breaks: GRB 050318, 050505, 050525A, 050803, 050814, 050820A, 060124, 060605, 060906
3. 6 of the 37 Swift X-ray afterglows without a jet-break until at least 3 day: GRB 050416A, 051109A, 060115, 060206, 061007, 061121
4. 2 short-bursts (lasting less than 2 seconds): GRB 050709 and 051221A.

Figure 5 displays the significance of the Amati and Ghirlanda correlations for the set of 15 pre-Swift afterglows with optical jet-breaks, 15 Swift afterglows with X-ray jet-breaks or without one until more than 3 day, and the joint set of 30 afterglows. The long-duration GRB 011121 and the only two short-bursts 051221A and 050709 were excluded as they are outliers for the Amati relation. For the Ghirlanda relation, GRB 050416A was excluded as an outlier and the last observation epochs of the 6 X-ray afterglows without jet-breaks were taken as jet-break times. Their true jet-break epochs would evidently be later which, as can be seen from Figure 5, would weaken the Ghirlanda relation (see also figure 6 of Sato et al 2007 and figure 9 of Willingale et al 2007). The linear correlation coefficients and best-fit slopes given in Figure 5 show that: 1. Swift X-ray afterglows display the same Amati correlation as the pre-Swift optical afterglows but a weaker Ghirlanda correlation, 2.
the addition of Swift afterglows weakens the Ghirlanda correlation (smaller correlation coefficient) and increases the statistical significance of the Amati correlation. The former result was also pointed out by Campana et al (2007), but we note that half of the 8 X-ray light-curve breaks identified in that work are followed by slower decays and, thus, may not be jet-breaks, 3. going from the isotropic to the collimated GRB output brings the three outlying bursts for the Amati relation closer to the rest. For the entire set of 30 afterglows with optical and/or X-ray light-curve jet-breaks, we find a log–log space slope for the Amati relation consistent with that obtained by Amati (2006) for 41 afterglows, but a smaller one for the Ghirlanda relation than that obtained by Ghirlanda et al (2007) for 25 afterglows. This discrepancy arises because we did not include here the afterglows of Ghirlanda (2007) that have breaks followed by shallower decays, which are unlikely to be jet-breaks, and because we have used a larger set of Swift afterglows. That the correlation coefficient is nearly the same in both cases indicates that the addition of a new observable (the jet-break time) does not reduce the spread of the Amati relation. This suggests that the jet-break time is not correlated with either burst observable. Indeed, we find the correlations of the host-frame jet-break time with the burst observables not to be statistically significant. For the 15 pre-Swift afterglows with optical breaks, the slope of the Ghirlanda best-fit is equal to that of the Amati best-fit multiplied by the factor implied by equation (1). This indicates that the Ghirlanda relation for pre-Swift afterglows is the consequence of Amati’s. For the entire set of 30 afterglows, the slopes satisfy the same relation, consistent with the same conclusion. 
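The linear correlation coefficients and log–log best-fit slopes quoted above take only a few lines to compute. Here is a self-contained sketch (our own, exercised on toy numbers rather than on the burst sample):

```python
import math

def loglog_fit(x, y):
    """Least-squares slope and Pearson r of log10(y) against log10(x)."""
    lx = [math.log10(v) for v in x]
    ly = [math.log10(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    sxx = sum((a - mx) ** 2 for a in lx)
    syy = sum((b - my) ** 2 for b in ly)
    return sxy / sxx, sxy / math.sqrt(sxx * syy)  # (slope, r)

# Toy check: y = x^0.5 exactly, so slope = 0.5 and r = 1
slope, r = loglog_fit([1e52, 1e53, 1e54], [100.0, 10 ** 2.5, 1000.0])
print(slope, r)
```

Because the correlation coefficient is invariant under a linear rescaling of either log-quantity, raising one variable to a fixed power (e.g. 3/4 versus 1/2 in equation (1)) changes the slope but not r, as noted in §4.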
## 5 Conclusions Summarizing our findings: out of the more than 200 X-ray afterglows monitored by Swift from January 2005 through March 2007, about 100 have been followed sufficiently long to test for the existence of late light-curve steepenings. About 60 percent of these well-monitored afterglows display a clear or a possible X-ray light-curve break at 0.1–10 day followed by a steeper decay. These are potential jet-breaks, resulting when the jet Lorentz factor decreases below the inverse of the jet opening angle. However, from X-ray observations alone, we cannot exclude other origins for the light-curve breaks. More stringent tests of the jet model for X-ray light-curve breaks require good optical coverage. So far, only a few of the 30 X-ray breaks listed in Table 1 were also followed in the optical, allowing a test of whether the X-ray breaks were achromatic, as expected for a jet origin: GRB 050730 (Pandey et al 2006, Perry et al 2007), GRB 060124 (Curran et al 2007), and GRB 050526 (Dai et al 2007). For the last two, the pre- and post-break optical and X-ray decay indices are consistent with the jet interpretation but, for the first, the post-break optical light-curve falls off too slowly compared to the X-ray emission. Around 75 percent of the pre-Swift optical afterglows with good coverage exhibit a light-curve break at 0.3–3 day. Hence the fraction of Swift X-ray afterglows with breaks is slightly smaller than, but comparable to, that of pre-Swift optical afterglows. The burst and redshift information necessary to test the Ghirlanda ($E_p$–$E_{jet}$; $E_p$ = peak energy of the burst spectrum, $E_{jet}$ = GRB collimated output, $E_\gamma$ = GRB isotropic energy release, $t_b$ = afterglow break epoch) and Amati ($E_p$–$E_\gamma$) correlations exists for only eight of the 30 afterglows with X-ray jet-breaks. These afterglows display the mentioned correlations. Adding them to the set of 15 pre-Swift afterglows with optical light-curve breaks leads to correlations which are statistically significant. 
However, including the jet-break time in the Amati correlation does not lead to a stronger (Ghirlanda) correlation, unless the few under-energetic outliers shown in Figure 5 are taken into account. Furthermore, because the jet-break time is not correlated with either burst property, the slope of the best-fit Ghirlanda relation is that expected from the slope of the Amati fit. These two facts indicate that the Ghirlanda correlation results almost entirely from the Amati correlation. For the cosmological use of GRBs, one is interested in obtaining a good calibrator of the source luminosity with other burst observables, a quality which is quantified by the linear correlation coefficient. Although the Ghirlanda correlation is stronger than Amati’s when outliers are included, the Ghirlanda relation is not necessarily better for constraining cosmological parameters, because outliers to the Amati relation, such as the three underluminous bursts shown in Figure 5, should stand out and be easy to excise from the Hubble diagram constructed from the Amati relation (i.e. with the luminosity distance inferred from the burst isotropic luminosity obtained from the peak energy of the burst spectrum). As shown in §4, if we consider only bursts which are not outliers to the Amati relation, the addition of a new observable (the afterglow jet-break epoch) does not yield a stronger correlation than Amati’s. Thus, with the current sample of afterglows with jet-breaks, we suggest that the Amati relation should be at least as useful for constraining cosmological parameters as is the Ghirlanda relation (Ghirlanda et al 2004, Schaefer 2007). ## Acknowledgments This work made use of data supplied by the UK Swift Science Data Center at the University of Leicester. 
https://homework.cpm.org/category/ACC/textbook/gb8i/chapter/6%20Unit%207/lesson/INT1:%206.3.1/problem/6-84
### Home > GB8I > Chapter 6 Unit 7 > Lesson INT1: 6.3.1 > Problem 6-84

6-84. What are the points of intersection of the lines below? Use any method. Write your solutions as a point $\left(x, y\right)$.

1. $y = −x + 8$
   $y = x − 2$

   One way is to graph the two lines and find the point where the lines intersect.

2. $2x − y = 10$
   $y = −4x + 2$

   If you use substitution, be careful when distributing the negative.

   $2x − \left(−4x + 2\right) = 10$

   $2x + 4x − 2 = 10$

   $6x = 12$

   Solve for $y$.
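The substitution steps above can also be checked numerically. Here is a short sketch of our own that writes each line as $ax + by = c$ and solves both systems with Cramer's rule:

```python
def intersect(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # lines are parallel (or identical)
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)

# Part 1: y = -x + 8  ->  x + y = 8;   y = x - 2  ->  -x + y = -2
print(intersect(1, 1, 8, -1, 1, -2))   # (5.0, 3.0)

# Part 2: 2x - y = 10;  y = -4x + 2  ->  4x + y = 2
print(intersect(2, -1, 10, 4, 1, 2))   # (2.0, -6.0)
```

Both answers can be verified by substituting back into the original equations.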
https://dsp.stackexchange.com/questions/73958/spectral-efficiency-of-with-ofdm-vs-without-ofdm
# Spectral efficiency with OFDM vs without OFDM? OFDM has an advantage over frequency-selective channels thanks to the cyclic prefix added after the IFFT of the MPSK or MQAM symbols, but I wonder how the spectral efficiency with OFDM compares to that without OFDM. The spectral efficiency of OFDM is strictly worse than that of a pulse-shaped QAM signal with the same rate. • OFDM requires a guard interval, on which no useful information is transmitted. • It is common to dedicate several subcarriers to pilot signals. • It is often not pulse-shaped (or, more accurately, pulse-shaped with a rectangular pulse). Since each non-pilot subcarrier is essentially a narrowband QAM signal, you can see that its spectral efficiency cannot be better. However, the point of OFDM is to allow for low-complexity frequency-domain equalization, with spectral efficiency given a less important role. • To add to your good list: the DC bin (bin 0) is typically not used, due to inevitably competing with DC offsets in implementation. Mar 23, 2021 at 2:46 • Good point, Dan! – MBaz Mar 23, 2021 at 13:11
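The guard-interval and pilot overheads in the answer can be put into rough numbers. A small sketch (the function is ours; the 802.11a-style parameters, a 64-point FFT, 16-sample cyclic prefix, and 48 data plus 4 pilot subcarriers, are just a familiar example, not something from the answer):

```python
def ofdm_efficiency_factor(n_fft, n_cp, n_data, n_pilot):
    """Fraction of single-carrier QAM spectral efficiency that OFDM
    retains, counting only cyclic-prefix and pilot overhead (nulled
    guard subcarriers are ignored in this simple count)."""
    time_factor = n_fft / (n_fft + n_cp)       # guard-interval loss
    freq_factor = n_data / (n_data + n_pilot)  # pilot-subcarrier loss
    return time_factor * freq_factor

# 802.11a-like numbers: 64-FFT, 16-sample CP, 48 data + 4 pilot tones
print(ofdm_efficiency_factor(64, 16, 48, 4))  # ~0.738
```

So for these parameters roughly a quarter of the raw spectral efficiency is spent buying low-complexity frequency-domain equalization, which is the trade-off the answer describes.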