url | text | metadata
---|---|---|
https://math.stackexchange.com/questions/1791404/if-x-y-z-are-independent-random-variables-then-x-y-z-are-independent-rando/1791455 | # If X, Y, Z are independent random variables, then X + Y, Z are independent random variables. [duplicate]
I found the same question (X, Y, Z are mutually independent random variables. Is X and Y+Z independent?, here), but the answer there uses characteristic functions and the Fourier inversion theorem, whereas this exercise appears in a chapter long before characteristic functions are introduced.
## marked as duplicate by Did (probability) May 21 '16 at 9:58
Note that $X, Y, Z$ are mutually independent if and only if $$\Pr\{X \leq x, Y \leq y, Z \leq z\} = \Pr\{X \leq x\}\Pr\{Y \leq y\}\Pr\{Z \leq z\}$$ for any $x, y, z \in \mathbb{R}$.
Now, for any $w, z \in \mathbb{R}$
\begin{align}\Pr\{X + Y \leq w, Z \leq z\} & = \int_{-\infty}^{+\infty} \Pr\{X + Y \leq w, Z \leq z|X = x\}dF_X(x) \\ & = \int_{-\infty}^{+\infty} \Pr\{x + Y \leq w, Z \leq z\}dF_X(x) \\ & = \int_{-\infty}^{+\infty} \Pr\{x + Y \leq w\}\Pr\{Z \leq z\}dF_X(x) \\ & = \Pr\{Z \leq z\}\int_{-\infty}^{+\infty} \Pr\{x + Y \leq w\}dF_X(x) \\ & = \Pr\{Z \leq z\}\Pr\{X + Y \leq w\} \\ \end{align} where the first and the fifth equalities are using the law of total probability, the second and the third equalities are using the given mutual independence, and the fourth equality is pulling out the term that is independent of the integrating variable $x$. Here $F_X(x) = \Pr\{X \leq x\}$ is the CDF of $X$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999854564666748, "perplexity": 1505.3787592470785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668539.45/warc/CC-MAIN-20191114205415-20191114233415-00415.warc.gz"} |
http://math.stackexchange.com/questions/156859/discontinuous-function-sending-compacts-to-compacts | # Discontinuous function sending compacts to compacts
I know that the condition that $f(X)$ is compact whenever $X$ is compact should not be sufficient to conclude that $f$ is continuous, but I can't come up with an example of such a discontinuous $f$. What is it?
Thanks
Let $f:\Bbb R\to\Bbb R$ be such that $f(x)=0$ if $x\le 0$ and $f(x)=1$ if $x>0$.
Beat me to it..... – user38268 Jun 11 '12 at 7:05
Me too. We all think of the same counterexamples. – Alex Becker Jun 11 '12 at 7:06
Except Asaf, who had to get fancy. :-) – Brian M. Scott Jun 11 '12 at 7:08
@BrianM.Scott 4 upvotes in the space of 2 minutes, a record to beat :D – user38268 Jun 11 '12 at 7:08
@AlexBecker Well that was kinda the obvious one to go for :D :D – user38268 Jun 11 '12 at 7:08
You can take $f\colon\mathbb {R\to R}$ to be $f(x)=\begin{cases}0 & x\in\mathbb Q\\ 1 & x\notin\mathbb Q\end{cases}$
This function is discontinuous everywhere but its image is a finite set and therefore compact.
More generally, if you let $K$ be any compact subset of $\mathbb R$ with at least two points, pick some $x_0\in K$ and let $f:\mathbb R\to\mathbb R$ be defined by $$f(x)=\begin{cases}x & x\in K\\ x_0 & x\notin K\end{cases}$$ then $f$ is a discontinuous function with image $K$.
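To spell out why this construction works (a quick check, not part of the original answer): $f(\mathbb R)=K\cup\{x_0\}=K$, and for any compact $C\subseteq\mathbb R$ $$f(C)=\begin{cases}C & C\subseteq K\\ (C\cap K)\cup\{x_0\} & \text{otherwise,}\end{cases}$$ which is compact, since $C\cap K$ is a closed and bounded subset of $\mathbb R$ and adjoining the single point $x_0$ preserves compactness. Moreover $f$ is discontinuous at whichever of $\min K$, $\max K$ differs from $x_0$, because points just outside $K$ map to $x_0$.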
If we take $K$ to be a compact set which is nowhere dense, but has a positive measure (copies of a fat Cantor set in every $[k,k+1]$, for example) then we can get such function $f$ which is discontinuous on a set of positive measure, which is interesting. – Asaf Karagila Jun 11 '12 at 7:33
Surely not in every $[k,k+1]$? That set would be unbounded. – Johan Jun 11 '12 at 7:57
@Johan: True. However if we make this union disjoint we have a closed set that is still good enough for our purposes. – Asaf Karagila Jun 11 '12 at 8:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8818783760070801, "perplexity": 337.7032174265127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769392.53/warc/CC-MAIN-20141217075249-00000-ip-10-231-17-201.ec2.internal.warc.gz"} |
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1868951/?tool=pubmed | PLoS Genet. 2007 May; 3(5): e74.
Published online 2007 May 18. Prepublished online 2007 Apr 5.
PMCID: PMC1868951
A Method to Address Differential Bias in Genotyping in Large-Scale Association Studies
David B Allison, Editor
Abstract
In a previous paper we have shown that, when DNA samples for cases and controls are prepared in different laboratories prior to high-throughput genotyping, scoring inaccuracies can lead to differential misclassification and, consequently, to increased false-positive rates. Different DNA sourcing is often unavoidable in large-scale disease association studies of multiple case and control sets. Here, we describe methodological improvements to minimise such biases. These fall into two categories: improvements to the basic clustering methods for identifying genotypes from fluorescence intensities, and use of “fuzzy” calls in association tests in order to make appropriate allowance for call uncertainty. We find that the main improvement is a modification of the calling algorithm that links the clustering of cases and controls while allowing for different DNA sourcing. We also find that, in the presence of different DNA sourcing, biases associated with missing data can increase the false-positive rate. Therefore, we propose the use of “fuzzy” calls to deal with uncertain genotypes that would otherwise be labeled as missing.
Author Summary
Genome-wide disease association studies are becoming more common and involve genotyping cases and controls at a large number of SNP markers spread throughout the genome. We have shown previously that such studies can have an inflated false-positive rate, the result of genotype calling inaccuracies when DNA samples for cases and controls were prepared in different laboratories, prior to genotyping. Different DNA sourcing is often unavoidable in the large-scale association studies of multiple case and control sets. Here we describe methodological improvements to minimise such biases. These fall into two categories: improvements to the basic clustering methods for calling genotypes from fluorescence intensities, and use of “fuzzy” calls in association tests in order to make appropriate allowance for call uncertainty.
Introduction
Genome-wide association (GWA) studies are becoming more common because of rapid technological changes, decreasing costs and extensive single nucleotide polymorphism (SNP) maps of the genome [1,2]. However, a major technological challenge is the fact that this ever-increasing number of SNPs is necessarily reliant on fully automated clustering methods to call genotypes. Such methods will inevitably be subject to errors in assigning genotypes because the clouds of fluorescence signals are not perfectly clustered and vary according to many factors, including experimental variation and DNA quality [3]. As it is no longer practical to inspect each genotype call manually, identification of unreliable calls requires a measure of clustering quality. Failure to identify such SNPs leads to an increased false-positive rate and, if a crude quality score is applied, loss of data. Adapting the clustering algorithm to allow for clustering variation arising from the study design can reduce the number of unreliably called SNPs and can minimise the false-positive rate.
The decreasing genotyping costs of GWA studies are permitting the use of larger sample sizes. An efficient design to limit the blood sample collection and genotyping costs is the use of a common control group for several case collections [2]. To this end, the 1958 British Birth Cohort (1958 BBC), an ongoing follow-up study of persons born in Great Britain during one week in 1958 (National Child Development Study), has been used to establish a genetic resource [4] (www.b58cgene.sgul.ac.uk). The Wellcome Trust Case-Control Consortium (WTCCC) has adopted such a design, utilising the 1958 BBC and additional blood donors (www.wtccc.org.uk) as a common control group for case collections of seven different diseases. A drawback of this approach is that it can generate a differential bias in genotype calling between case and control DNA samples that originated from different laboratories [3]. This leads to an increased false-positive rate.
In this paper, we compare the genotype calls of a type 1 diabetes (T1D) GWA study using the original clustering algorithm [5] implemented for this genotyping platform and a new algorithm adapted to take into account differential bias in genotype scoring. This study consists of 13,378 nonsynonymous SNPs (nsSNPs) in 3,750 T1D cases and 3,480 1958 BBC controls using the highly multiplexed molecular inversion probe (MIP) technology [6,7]. Previously, we found that the original clustering algorithm [5] performed well when the genotype clouds were perfectly clustered. However, when variability in the fluorescent signal caused the clouds to be less distinct, we found that a differential bias between cases and controls increased the false-positive rate [3]. The cause of this problem was attributed to the different sources of the case and control DNA samples, which resulted in different locations for the genotyping clouds of fluorescent signal. We addressed this issue by scoring cases and controls separately [3]. We also explored surrogate measures of clustering quality and employed stringent cut-offs to reduce the false-positive rate, and extended the concept of genomic control by applying a variable downweighting to each SNP. However, neither approach was optimal, particularly the use of stringent cut-offs, which resulted in a considerable loss of data.
Here, we adapted the methodology to address differential bias between cases and controls in a GWA study. There are three main improvements. Two modifications concern the genotyping algorithm: we used a new scoring procedure that enables cases and controls to be scored together and we adopted a more robust statistical model. The third modification was to use “fuzzy” calls in association tests in order to deal appropriately with call uncertainty. This avoids bias introduced by treating uncertain calls as “missing” when the proportion of such missing calls vary between cases and controls. We also propose a quality-control score for the clustering. These improvements allowed us to significantly increase the number of SNPs available for analysis and to improve the overall data quality. These modifications are generic and can be incorporated into any clustering-based genotyping algorithm. We illustrate this point by applying our algorithm to score the WTCCC control samples (www.wtccc.org.uk), which were generated using the Affymetrix 500K (http://www.affymetrix.com).
Results
Genotyping Procedure
Our genotyping procedure follows the original algorithm [5] in fitting a mixture model using the expectation maximization (EM) algorithm, but we modified this approach to address the characteristics of our dataset. The original algorithm transformed the two-dimensional fluorescent signal intensity plot into a one-dimensional set of contrasts (see Methods). A mixture of three Gaussians (one heterozygous cloud and two homozygous clouds) was fitted to this one-dimensional set of contrasts using the EM algorithm [8], and data points were assigned to clusters. Data points that could not be attributed to a cluster with high posterior probability were treated as missing data. In addition to the parameters that described the location of the genotyping clouds, the model also estimated the a priori probabilities (Φ1, Φ2, Φ3) for each cluster; these correspond to the genotype relative frequencies.
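For concreteness, the baseline clustering step can be sketched as a one-dimensional, three-component Gaussian mixture fitted by EM. This is a minimal illustration written for this article, not the paper's actual software; the initial cloud locations, the fixed iteration count, and the function name are our assumptions.

```python
import numpy as np

def em_gaussian_mixture(contrast, n_iter=50):
    """Fit a 3-cloud 1-D Gaussian mixture to contrast values by EM.

    Returns the a priori cloud frequencies (phi_1, phi_2, phi_3), the cloud
    means and standard deviations, and the posterior genotype probabilities
    for every sample.
    """
    # Crude initialisation: clouds near the two homozygotes and the heterozygote.
    mu = np.array([-0.9, 0.0, 0.9])
    sigma = np.array([0.2, 0.2, 0.2])
    phi = np.array([1 / 3, 1 / 3, 1 / 3])

    for _ in range(n_iter):
        # E-step: posterior probability that each point belongs to each cloud
        # (the 1/sqrt(2*pi) constant cancels in the normalisation).
        dens = np.exp(-0.5 * ((contrast[:, None] - mu) / sigma) ** 2) / sigma
        resp = phi * dens
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: update frequencies, means, and standard deviations.
        nk = resp.sum(axis=0)
        phi = nk / len(contrast)
        mu = (resp * contrast[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (contrast[:, None] - mu) ** 2).sum(axis=0) / nk)

    return phi, mu, sigma, resp
```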
As control and case DNA samples were processed in different laboratories, the location of the genotyping clouds for the fluorescent signal can differ between cases and controls (see Figure 1). Previously, we scored cases and controls separately to allow for such differences [3]. However, this solution is not ideal. While the location of the clusters can differ, the a priori frequencies should be identical in cases and controls under the null hypothesis of no association. Statistical theory shows that the most powerful test is obtained when the maximum likelihood for the nuisance parameters (here the genotyping parameters) is estimated under the null hypothesis. Letting these values differ between cases and controls resulted in overestimated differences in allele frequencies and increased over-dispersion of the test statistic. Our modified algorithm linked the clustering for cases and controls by assuming genotype frequency parameters to be identical but imposed no such restriction on the location of the genotyping clouds. Variability in allele frequencies across geographic regions is also allowed. We extended this approach to score nsSNPs on the X chromosome to account for male/female copy number differences (see Methods).
Example of Biased Association Statistic Resulting from Missing Data in the MIP nsSNPs Dataset
In the original algorithm, the a priori frequencies for the three clusters (Φ1, Φ2, Φ3) are linked by the condition Φ1 + Φ2 + Φ3 = 1, leaving two free parameters. We investigated the effect of further constraining these frequencies to be consistent with Hardy-Weinberg equilibrium (HWE). In that version of the algorithm, the a priori frequencies (Φ1, Φ2, Φ3) are parameterised as (π², 2π(1 − π), (1 − π)²) using a unique parameter π.
We also found that the statistical model for the fluorescent clouds was not robust to excessive variability of the fluorescent signal within a genotyping cloud. Because our association tests require that no data point is treated as missing (see below), we needed a model robust to outliers. As the tails of the Gaussian distribution decay too fast, we replaced the Gaussian distributions with t-distributions. Our parameter inference procedure (EM algorithm, see [8]) uses a representation of the t-distributions as a Gaussian random variable with a variance sampled randomly from a Gamma distribution. Fortunately, the sample size of this study was sufficient to estimate these additional parameters.
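The scale-mixture representation used here is easy to make concrete: a t-distributed variable is a Gaussian whose precision is rescaled by a Gamma draw. The sketch below is our illustration (function name and seed are ours), not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_t_scale_mixture(mu, sigma, nu, size, rng=rng):
    """Draw t-distributed values with nu degrees of freedom, location mu and
    scale sigma, via the Gaussian/Gamma representation exploited by the EM
    fit: X | u ~ N(mu, sigma^2 / u) with u ~ Gamma(nu/2, rate nu/2)."""
    u = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=size)  # E[u] = 1
    return rng.normal(loc=mu, scale=sigma / np.sqrt(u))
```

The heavier tails come from draws where u is small, which inflate the Gaussian scale; this is what makes the fit robust to outlying fluorescence values.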
Association Test
The nsSNPs were analysed using the one degree of freedom Cochran-Armitage trend test [9]. In this statistical framework, the outcome variable is the disease phenotype and the explanatory variable is the genotype. The null hypothesis is the absence of an effect of the genotype on the odds of developing the disease. This test statistic for association is a score test; the score statistic is the first derivative of the log-likelihood of the data at the null value of the parameter tested. The test statistic is obtained by dividing the score test by its variance under the null, derived using a permutation argument (see Methods). We also used a stratified version of this test, introduced originally by Mantel [10], that allows for variability in allele frequency and disease prevalence across 12 broad regions in Great Britain [3]. In this version of the test, the score and its variance are summed over the 12 strata to obtain the overall score and variance. The ratio of the square of the score statistic to its variance is asymptotically distributed as a χ² random variable with one degree of freedom.
We explored how differential bias could affect the distribution of the test statistic. An aspect of the data that is affected by the differential bias is the frequency of missing calls and the way these missing calls affect the genotyping clouds. These differences increased the over-dispersion level (see for example Figure 1). We found that the best solution was to avoid the use of missing calls and call all available samples, making appropriate allowance for call uncertainty. This led us to modify the association test. To do so, we reformulated the association test as a missing data problem in which the distribution of the genotypes status is estimated conditionally on the fluorescent signal and the geographic origin of the sample (see Methods). This modification of the test amounts to replacing the score statistic with its expectation under this posterior distribution of the genotype status. Similar ideas have been used in the context of haplotype phasing [11].
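A compact sketch of this fuzzy association test is given below (unstratified case). It is our reconstruction for illustration only: genotypes are coded 0/1/2 here (a shift does not change the test), and the variance expression follows our reading of the Methods section rather than the exact derivation in Protocol S1.

```python
import numpy as np

def fuzzy_trend_test(y, post):
    """One-degree-of-freedom trend test with 'fuzzy' genotype calls.

    y    : 0/1 disease status, shape (n,)
    post : posterior genotype probabilities, shape (n, 3),
           for genotype scores 0, 1, 2
    """
    scores = np.array([0.0, 1.0, 2.0])
    ex = post @ scores                    # E[X_i | fluorescence]: the fuzzy call
    vx = post @ scores**2 - ex**2         # Var(X_i) under the fuzzy distribution
    u = np.sum((y - y.mean()) * ex)       # score statistic
    n, d = len(y), y.sum()
    h = n - d
    v = (d * h / n) * (ex.var() + vx.mean())  # permutation-style variance
    return u**2 / v                       # ~ chi2(1) under the null
```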
Simulation Study
The elevated rate of false positives observed in the data resulted from an over-dispersion of the test statistic. We estimated the over-dispersion factor, λ, by calculating the ratio of the mean of the smallest 90% of the observed test statistics to the mean of the smallest 90% of the values expected under the null hypothesis of no association [3]. Using the smallest 90% is motivated, in a case-control framework, by the exclusion of the “true” associations that are caused by actual differences between cases and controls and that can significantly affect the mean value of the test statistic. To make the interpretation of the results easier, we report Δλ, the difference between the theoretical over-dispersion factor (equal to 1) and the observed one: a value of 1% for Δλ means that the over-dispersion factor λ is 1.01.
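The over-dispersion estimate itself is simple to compute. The helper below is a minimal sketch (our code; approximating the expected truncated mean by null quantiles at plotting positions is our choice).

```python
import numpy as np
from scipy.stats import chi2

def delta_lambda(stats, frac=0.9):
    """Delta-lambda in percent: ratio of the mean of the smallest 90% of
    observed chi2(1) statistics to the mean of the smallest 90% expected
    under the null, minus one."""
    stats = np.sort(np.asarray(stats))
    m = int(frac * len(stats))
    observed = stats[:m].mean()
    # Expected values approximated by null quantiles at plotting positions.
    q = (np.arange(1, m + 1) - 0.5) / len(stats)
    expected = chi2.ppf(q, df=1).mean()
    return 100.0 * (observed / expected - 1.0)
```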
We illustrate the impact of our modifications by analysing simulated fluorescent signal data. We used two models for the quality of the fluorescent signal (high and low quality SNPs). We considered various scenarios for the minor allele frequency in cases and controls and simulated 100,000 SNPs for each scenario. The signals were scored in three different ways: (1) the full algorithm, as described above; (2) cases and controls were called separately; and (3) fuzzy calls were not used. In (3), we assigned a probability 1 to the most probable call under the posterior distribution and we called a sample missing when the probability of this most probable call was less than 0.95. For each version of the scoring algorithm, we report Δλ under the null hypothesis of no association (i.e., identical population frequencies in cases and controls). We also compared the power of the three versions of the algorithm. Following the Neyman-Pearson lemma [12], the best test is the one that, for a given type 1 error (the probability of rejecting the null when the null is true), has the lowest type 2 error (the probability of accepting the null when the alternative hypothesis is correct). In practice this implied correcting the test statistic for the over-dispersion and estimating the fraction of the SNPs simulated under the alternative hypothesis for which the null hypothesis was accepted. We set the type 1 error to 0.05 in our simulations. Results are reported in Table 1.
Over-Dispersion Factor Δλ (Estimated under the Null ) and Type 2 Error for Three Versions of Our Algorithm: Full Method (1), without the Joint Typing (2), and without the Use of Fuzzy Calls (3)
We found that, as expected, all three versions of the algorithm performed comparably well when the quality of the fluorescent signal was high. In that case, the only setting where the level of over-dispersion was significant was the separate typing procedure combined with a low minor allele frequency. Clustering-based algorithms are not well suited to estimating parameters when the number of data points in a genotyping cloud is low, and this weakness was amplified when cases and controls were called separately.
However, strong differences appeared for the lower quality SNPs. We found that there was little over-dispersion when the full algorithm was used (Δλ between −0.79% and 0.59%). However, when the split typing version was used, over-dispersion ranged from 7.04% to 14.7%, increasing as the minor allele frequency decreased. In addition, comparison between the joint and split typing methods showed that the power of the study (measured by the type 2 error) was very similar. However, this observation is misleading: when the data consist of a mixture of high and low quality SNPs, applying a constant correction factor independently of the fluorescent signal quality would result in a loss of power for the split typing method. For the full method, we found a near perfect agreement between theoretical and observed distributions, and the use of a correction factor was not necessary.
Not using the fuzzy calls had a less obvious effect on the over-dispersion. As mentioned above, the high quality SNPs were not affected because the vast majority of calls were certain. For the low quality fluorescent signal model, we found that on average 1.2% of the individuals had a probability of the most likely call lower than 0.95. Labelling these unclear calls as missing significantly affected the over-dispersion slope, which reached a maximum at intermediate frequencies (Δλ = 5.34% at minor allele frequency 10%, see Table 1). In addition, and unlike the split typing version of the algorithm, the type 2 error increased significantly (between 1% and 5%). We also note that calls with a most likely probability lower than 70% were rare (on average 0.4% of the calls). Therefore, replacing the fuzzy posterior distribution with the most likely call had almost no effect on the over-dispersion slope, indicating that for the range of models and data considered here the inclusion of fuzzy calls is not critical as long as missing calls are not used.
MIP nsSNPs Dataset
The MIP data consisted of 13,378 nsSNPs typed in 3,750 cases and 3,480 controls. We analysed 11,579 nsSNPs with minor allele frequency estimated to be greater than 0.01. We also excluded 281 nsSNPs in the HLA region that is known to be associated with T1D, leaving 11,298 nsSNPs.
Initially, using the original calls, we employed stringent cut-offs for the surrogate measures of clustering quality: case and control call rates both greater than 95%, difference in call rates between controls and cases smaller than 3%, and HWE χ² < 16. This resulted in 2,079 high-quality nsSNPs with an over-dispersion factor Δλ of 4.5%. We obtained a lower over-dispersion of 1.5% when these nsSNP genotypes were called using the adapted algorithm. As expected, this difference in over-dispersion between algorithms became more marked as less stringent cut-offs were applied. For example, lowering the call rate cut-off to 90% resulted in 5,294 nsSNPs with an over-dispersion of Δλ = 17% using the original Moorhead et al. [5] scoring algorithm and 8.1% using the adapted algorithm on the same set of nsSNPs.
We propose a measure of clustering quality that compares the variability of the signal within a cluster with the variability between clusters (see Methods). The lower limit for the quality measure was set such that beyond this value the over-dispersion factor λ remained constant. When we selected the nsSNPs according to our quality-control measure this resulted in 7,446 nsSNPs with an over-dispersion slope of 7.5% using our improved algorithm (Figure 2). For the same set of SNPs the over-dispersion level was 21% using the original calls.
Quantile–Quantile Plot Comparing the Observed Distribution of the Association Statistic (y-Axis) with the Predicted Distribution under the Null (x-Axis)
We investigated the effect of our modifications by scoring the data using various configurations of the algorithm and the association test. The quality was measured using the level of over-dispersion Δλ of the test statistic for the stratified test (see Table 2). For the genotyping procedure, we found that the split clustering of cases and controls significantly increased the over-dispersion level: Δλ = 10.5%, +3% compared to the joint typing of cases and controls with a unique set of a priori frequencies (Δλ = 7.5%). However, letting these a priori frequencies vary across geographic regions in the stratified version of the test did not change the results, although a stronger discrepancy might have been observed if cases had not been well matched geographically with the controls. Assuming a Gaussian model (rather than t-distribution in the adapted algorithm) also significantly increased the over-dispersion level (+4.3%).
Impact of Our Modifications on the Over-Dispersion Measure for the 7,446 nsSNPs That Passed our Quality Threshold in 3,750 Cases and 3,480 Controls
Constraining the a priori frequencies to be consistent with HWE did not lower the over-dispersion level (+1.5%), probably because this condition was too stringent. We investigated a weaker version of this constraint in the parameter estimation: we first estimated the parameters under the HWE constraint. Then we relaxed this assumption in the second step but used the parameter values estimated in the first step as a starting point for the iterative parameter estimation procedure. This modification also did not lower the over-dispersion level (Δλ = 8.2%, +0.7% compared to the adapted calls). However, while this further constraint did not improve the over-dispersion overall, this two-step procedure helped with finding the global maximum of the likelihood function for a small fraction of nsSNPs for which the variance of the fluorescent signal was large. Therefore, it provided an alternative scoring method useful for maximising the number of typed nsSNPs while increasing the over-dispersion only slightly.
Regarding the association test, we investigated the effect of missing calls. For each nsSNP, we called a sample missing when the probability of belonging to the most likely genotype cloud was less than 95%. The number of missing calls varied greatly across nsSNPs: the median of the average number of missing calls across the nsSNPs that pass the quality threshold is 0.2% but this median number is 1% among the 2,171 nsSNPs with the lowest quality score among the best set of 7,446 nsSNPs. We found that the use of missing calls slightly increased the level of over-dispersion (+1.1% compared to the same algorithm in the absence of missing calls). However, missing calls have a larger effect on the quality scores: re-estimating a best set of 7,446 nsSNPs but computing the quality scores with missing data generated an over-dispersion of 10.5%. This larger over-dispersion is explained by the fact that introducing missing calls biased the computation of the quality scores and prevented us from identifying low quality nsSNPs (see Discussion). However, once we avoided the use of missing calls and called all available samples, using the most likely call instead of the posterior distribution had little effect (Δλ = 7.4%, 0.1% lower than our adapted calls). This limited effect is expected because split calls are rare for the range of models we considered.
WTCCC Control Dataset
In this section, we show the result of our adapted algorithm applied to a different genotyping platform, the Affymetrix Mapping 500K array set. These data have been generated by the WTCCC (www.wtccc.org.uk). The WTCCC is a GWA study involving seven different disease groups. For each disease, the WTCCC genotyped 2,000 individuals from England, Scotland, and Wales. Disease samples will then be compared to a common set of 3,000 nationally ascertained controls, also from the same regions. These controls come from two sources: 1,500 are representative samples from the 1958 BBC and 1,500 are blood donors recruited by the three national UK Blood Services. Here, we compare the WTCCC control groups. This comparison is interesting because in a typical GWA study, we expect a fraction of the over-dispersion to reflect actual genetic differences between control and disease groups. However, when comparing two sets of healthy controls the interpretation of the results is easier, as both groups should be representative samples of the population.
We first show that our quality measure was efficient at distinguishing poorly typed SNPs from correctly typed ones. We illustrate this point by showing the distribution of p-values on Chromosome 1 for three quality thresholds (see Figure 3). Because the distribution of the fluorescent signal differs between the MIP platform and the Affymetrix 500K, the optimum threshold also differs. We found that approximately 79% of the SNPs have a minor allele frequency greater than 0.01 as well as a quality score greater than 1.9. Given a total number of 40,220 SNPs on this chromosome, 1/40,220 ≈ 2.5 × 10⁻⁵ is approximately the level beyond which no p-value is expected under the null. For that quality threshold of 1.9, only four SNPs are obvious false positives with p-values beyond 1 × 10⁻⁵. Visual inspection of the clusters confirmed that these were indeed clustering errors. When we increased the quality score to 2.2, only one of the four SNPs remained (with a quality score of 2.3). As approximately 12% of the Affymetrix 500K SNPs are monomorphic in the British population, we found that only 9% of the SNPs did not pass our quality threshold, while keeping the false-positive rate close to zero. Similar numbers were found on other chromosomes.
Distribution of p-Values for the Association Test between the 1958 BBC Samples and the UK Blood Donors (WTCCC Control Dataset) for Three Different Quality Thresholds
In addition, we compared our algorithm with the BRLMM calls, commonly used on this platform and provided by Affymetrix. For each autosomal chromosome we used BRLMM and our adapted algorithm to select the subset of SNPs with a quality score greater than 1.9 and a minor allele frequency greater than 0.01. For both sets of calls, we computed the fraction of SNPs that pass that threshold. In order to make results comparable, we calculated the over-dispersion slope for the SNPs that passed both the BRLMM and the adapted calls threshold (see Table 3). We found that the percentage of SNPs that pass the quality threshold is typically 4% higher using our adapted algorithm, while the over-dispersion remained 2%–5% lower, indicating a significant improvement.
Level of Over-Dispersion for the SNPs That Pass Both the Minor Allele Frequency Cut-Off (Greater than 0.01) and the Quality Threshold of 1.9
Discussion
In this T1D nsSNP GWA study, the adapted algorithm was successful at scoring more nsSNPs confidently (7,446 nsSNPs instead of 5,294 nsSNPs) and, as a consequence, at reducing the false-positive rate: over-dispersion decreased from 17% to 7.5%. Rather than developing an entirely new genotyping algorithm, we have adapted the current algorithm for GWA with the motivation of controlling the false-positive rate resulting from a case/control genotyping bias. Consequently, these modifications are relevant to all clustering-based genotyping algorithms. Here, we considered the MIP genotyping technology [7] and the Affymetrix 500K array, but these modifications are also applicable to the Illumina platform (http://www.illumina.com). Our results show that the most important recommendation consists of scoring the different datasets (typically cases and controls) in a centralised manner, when this is possible. Introducing fuzzy calls is less important as long as one avoids the use of missing calls.
In practice, a key component of any genotyping algorithm is the ability to provide a single measure of clustering quality. Previously, we used surrogate measures of clustering quality (such as call rate and deviation from HWE) to identify unreliable SNPs, but this approach was not optimal [3]. Our measure of clustering quality compared the locations of the clusters of fluorescent signals with the variability of this signal within a cluster. However, to be really informative, this measure should be computed in the absence of missing calls. Excluding calls artificially reduces the variability of the signal within each cloud and biases the quality measure upward. Contrary to intuition, when using the calls provided by the original MIP algorithm [7] to compute both the quality measure and the association statistic, the over-dispersion level is higher for the nsSNPs that have the highest confidence value: Δλ = 26% for a confidence greater than 8 (1,116 nsSNPs) and Δλ = 15% for a confidence level between 5 and 8 (2,393 nsSNPs). Visual inspection of the clustering for these nsSNPs showed that such high confidence levels were typically associated with small variability of the fluorescent signals within clouds. In that situation, the original algorithm called missing those data points located a few standard deviations away from the center of the cluster. When these missing calls occurred differently in cases and controls it resulted in an increased over-dispersion of the association statistic (such as in Figure 1).
We note that in spite of our efforts a level of over-dispersion remains even for the 2,079 nsSNPs with near perfect clustering (Δλ = 1.5%). This estimate is noisy and its significance or causes are difficult to assess. However, we note that in the larger set of 7,446 nsSNPs, the inclusion of 21 non-Caucasian samples increased the over-dispersion from 7.5% to 12.1%. Also, if there were any undetected close relations in the collections of cases and controls, this could also increase the level of over-dispersion (we did ensure that inadvertent or deliberate sample duplications were removed, and no first-degree relatives were included in the study).
The difference between the lower bound of 1.5% (in the high quality set of 2,079 nsSNPS) and our 7.5% level (in the larger set of 7,446 nsSNPs) is probably associated with remaining imperfections in our statistical model. As pointed out in the Results section, replacing the most likely call with its posterior distribution given the fluorescent signal had little effect on the level of over-dispersion. Indeed, when a data point was located between two clusters, the algorithm did not assign an intuitive 50%/50% probability on both adjacent clouds but rather put a weight close to one on the cloud with the largest standard deviation. This replacement of “grey” calls with “black or white” amplified the difference between cases and controls and contributed to the remaining level of over-dispersion.
Materials and Methods
Description of the genotyping algorithm.
The original algorithm is described in [5]. Genotypes are scored based on the contrast measure: for a SNP with alleles A and G and signal intensities $I_A$ and $I_G$, respectively, $S = I_A + I_G$ and $\mathrm{contrast} = \sinh\!\big(2(I_A - I_G)/S\big)/\sinh(2)$. In this approach a mixture of three Gaussians is then fitted to the set of contrast values. Three parameters (Φ1, Φ2, Φ3) with the constraint Φ1 + Φ2 + Φ3 = 1 represent the a priori probabilities of belonging to each of the three clouds (before knowing the value of the contrast). Parameters (a priori frequency estimates, locations μ, and standard deviations σ of the three clouds) are estimated using the EM algorithm [8]. This Gaussian mixture is replaced with t-distributions in our modified method. A possible representation of a t-distribution with n degrees of freedom, variance parameter σ and mean μ is the following:

$$X \mid u \sim \mathcal{N}(\mu, \sigma^2/u), \qquad u \sim \mathrm{Gamma}(n/2,\, n/2).$$

This representation is used in the version of the EM algorithm we used to score the data [8]. It uses a data augmentation procedure and treats the variables u as missing data.
Linked clustering of cases and controls.
When controls and cases are typed separately, each sample has its own set of parameters Θ: frequencies (Φ1, Φ2, Φ3) that describe the a priori genotype frequencies, as well as location and scale parameters that describe the three genotype clouds. In the linked version of the scoring, the a priori frequencies (Φ1, Φ2, Φ3) are identical for both samples (cases and controls). In the EM algorithm the set of parameters Θ is estimated iteratively. The estimator of Φi at step (k + 1) is

$$\Phi_i^{(k+1)} = \frac{1}{n}\sum_{j=1}^{n} p_{ij}^{(k)},$$

where n is the number of observations and $p_{ij}^{(k)}$ is the posterior probability, under the current parameter values, that observation j belongs to cloud i. When the scoring is done separately for cases and controls, this estimator is computed separately for both samples. In the linked version of the scoring, this sum is computed jointly for cases and controls. For the stratified association test, each geographic region s has its own set of parameters (Φ1, Φ2, Φ3) that is estimated separately for each region, but jointly for cases and controls. The rest of the EM algorithm follows [8].
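The linked M-step is a one-line change relative to separate typing: responsibilities are pooled across the two samples before the frequency update, while cloud locations remain sample-specific. A minimal sketch (our code, with hypothetical array shapes):

```python
import numpy as np

def update_phi_linked(resp_cases, resp_controls):
    """M-step update of the a priori genotype frequencies under linked
    clustering: posterior responsibilities, shape (n_samples, 3), are pooled
    across cases and controls; the cloud locations are updated per sample
    elsewhere and are not shown here."""
    pooled = np.vstack([resp_cases, resp_controls])
    return pooled.sum(axis=0) / pooled.shape[0]
```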
Typing of the X chromosome.
We extended our linked clustering approach to deal with nsSNPs on the X chromosome. Because of male/female copy number differences, this situation is similar to differential genotyping bias, as the location of the genotyping clouds can differ across samples: the location of the genotyping clouds could differ but the a priori frequencies were estimated jointly. In that case we denote the allele frequencies π = Φ1 + Φ2/2 and 1 − π = Φ3 + Φ2/2. Then for the female sample we have the a priori cloud frequencies (Φ1, Φ2, Φ3). For the male sample, the two hemizygous clouds have a priori frequencies (π, 1 − π).
Imposing HWE.
The linked clustering approach can be extended to impose HWE for the a priori frequency estimates (Φ1, Φ2, Φ3). The frequencies are parameterised as (π², 2π(1 − π), (1 − π)²). Using the same notation, the EM estimator becomes

$$\pi^{(k+1)} = \frac{1}{2n}\sum_{j=1}^{n}\left(2\,p_{1j}^{(k)} + p_{2j}^{(k)}\right).$$

This approach can also be extended to X chromosome SNPs as presented above.
Association statistic.
We first consider the unstratified version of the test (see Protocol S1 for a complete derivation of the test). We denote the disease status (the outcome variable in our model) as a vector of binary variables Y. The vector X of explanatory variables (the genotypes) can take three values (1, 2, 3). We assume a logistic model: logit[P(Y = 1)] = α + βX. The score statistic can be written as

$$U = \sum_{i}\big(Y_i - \bar{Y}\big)\,E[X_i],$$

where $E[X_i]$ is the expectation of the genotype of individual i under the fuzzy posterior distribution. The score variance can be computed using a profile likelihood argument:

$$V = \frac{DH}{n}\left(s^2 + \frac{1}{n}\sum_{i} v_i\right),$$

where D and H are the numbers of cases and controls (n = D + H), $s^2$ is the sample variance of the expected value of the genotype variable X, and $v_i$ is the variance of $X_i$ under the fuzzy distribution. The test statistic U²/V is χ² with one degree of freedom under the null.
Extension for stratification.
The derivation of this test is available in Protocol S1. In that version of the test the score statistic becomes

$$U = \sum_{i}\big(Y_i - \bar{Y}_{S_i}\big)\,E[X_i],$$

where $S_i$ is the stratum of individual i and $\bar{Y}_{S_i}$ the mean value of Y in that stratum. Each stratum has its own score variance (computed as in the nonstratified situation) and the contribution of each stratum is then summed to obtain the overall score variance. The test statistic U²/V is still distributed as χ² with one degree of freedom under the null.
Measure of clustering quality.
We designed a measure that captures the intuition that clouds of points are well separated for a given SNP. We use the difference between the centres of adjacent clouds divided by the sum of the standard deviations of these two clouds. The centre and standard deviation of the clouds are computed based on the most likely calls. The final quality measure for a SNP is the minimum computed over each pair of adjacent clusters. This computation is done for cases and controls separately, and the minimum over both samples is then taken. As expected, this quality threshold is inversely correlated with over-dispersion: the over-dispersion stops decreasing at a threshold of 2.8, and we used this value to generate our set of 7,446 SNPs.
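The quality score is easy to state in code. The sketch below is our illustration for a single sample; as described above, it would be evaluated separately for cases and controls and the smaller value kept.

```python
import numpy as np

def clustering_quality(contrast, calls):
    """Quality score for one SNP in one sample: for each pair of adjacent
    clouds, (difference of centres) / (sum of the two standard deviations),
    minimised over the adjacent pairs. `calls` holds most-likely genotypes
    coded 0, 1, 2."""
    centres, sds = [], []
    for g in (0, 1, 2):
        pts = contrast[calls == g]
        centres.append(pts.mean())
        sds.append(pts.std())
    order = np.argsort(centres)
    ratios = [
        abs(centres[b] - centres[a]) / (sds[a] + sds[b])
        for a, b in zip(order[:-1], order[1:])
    ]
    return min(ratios)
```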
Simulation details.
When simulating SNPs we simulated directly the set of contrasts. For high quality SNPs, the centres of the three genotyping clouds are −0.9, 0, 0.9. The three t-distributions have degrees of freedom equal to ν = 10 and the scaling factor for the standard deviation is 0.03. The standard error for each genotyping cloud is then equal to $0.03\sqrt{\nu/(\nu - 2)} = 0.03\sqrt{10/8} \approx 0.034$.
For lower-quality SNPs, the centres of the three genotyping clouds are also −0.9, 0, 0.9. The three t-distributions have degrees of freedom equal to ν = 3.5 and the scaling factor for the standard deviation is 0.1. The standard error for each genotyping cloud is then equal to $0.1\sqrt{3.5/1.5} \approx 0.15$.
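A generator for these simulated contrasts takes only a few lines. This is our sketch of the setup described above; the Hardy-Weinberg genotype frequencies and the seed are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_contrasts(n, maf, quality="high", rng=rng):
    """Simulate contrast values for n samples at one SNP with minor allele
    frequency `maf`, following the two signal-quality models above:
    cloud centres at -0.9, 0, 0.9 and t-distributed noise."""
    nu, scale = (10, 0.03) if quality == "high" else (3.5, 0.1)
    centres = np.array([-0.9, 0.0, 0.9])
    p = maf
    # Genotypes drawn with Hardy-Weinberg frequencies (our assumption).
    probs = np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
    geno = rng.choice(3, size=n, p=probs)
    return centres[geno] + scale * rng.standard_t(nu, size=n)
```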
Supporting Information
Protocol S1
Derivation of the Test Statistic
(85 KB PDF)
Acknowledgments
We acknowledge use of DNA from the 1958 BBC collection (D. Strachan, S. Ring, W. McArdle and M. Pembrey), funded by the Medical Research Council grant G0000934 and Wellcome Trust grant 068545/Z/02.
Abbreviations
1958 BBC: 1958 British Birth Cohort
EM: expectation maximization
GWA: genome-wide association
HWE: Hardy-Weinberg equilibrium
MIP: molecular inversion probe
nsSNP: nonsynonymous SNP
SNP: single nucleotide polymorphism
T1D: type 1 diabetes
WTCCC: Wellcome Trust Case-Control Consortium
Footnotes
Competing interests. The authors have declared that no competing interests exist.
A previous version of this article appeared as an Early Online Release on April 5, 2007 (doi:10.1371/journal.pgen.0030074.eor).
Author contributions. JAT and DGC conceived and designed the experiments and contributed reagents/materials/analysis tools. VP and JDC analyzed the data and wrote the paper.
Funding. This work was funded by the Wellcome Trust and the Juvenile Diabetes Research Foundation International. VP is a Juvenile Diabetes Research Foundation International postdoctoral fellow.
References
• The International HapMap Consortium. A haplotype map of the human genome. Nature. 2005;437:1299–1320. [PubMed]
• Wang WY, Barratt BJ, Clayton DG, Todd JA. Genome-wide association studies: Theoretical and practical concerns. Nat Rev Genet. 2005;6:109–118. [PubMed]
• Clayton DG, Walker NM, Smyth DJ, Pask R, Cooper JD, et al. Population structure, differential bias and genomic control in a large-scale, case-control association study. Nat Genet. 2005;37:1243–1246. [PubMed]
• Power C, Elliott J. Cohort profile: 1958 British Birth Cohort (National Child Development Study) Int J Epidemiol. 2006;35:34–41. [PubMed]
• Moorhead M, Hardenbol P, Siddiqui F, Falkowski M, Bruckner C, et al. Optimal genotype determination in highly multiplexed SNP data. Eur J Hum Genet. 2006;14:207–215. [PubMed]
• Hardenbol P, Baner J, Jain M, Nilsson M, Namsaraev EA, et al. Multiplexed genotyping with sequence-tagged molecular inversion probes. Nat Biotech. 2003;21:673–678. [PubMed]
• Hardenbol P, Yu F, Belmont J, Mackenzie J, Bruckner C, et al. Highly multiplexed molecular inversion probe genotyping: Over 10,000 targeted SNPs genotyped in a single tube assay. Genome Res. 2005;15:269–275. [PubMed]
• McLachlan G, Peel D. Finite Mixture Models (Wiley Series in Probability and Statistics). New York: Wiley-Interscience; 2000. 419 p.
• Chapman JM, Cooper JD, Todd JA, Clayton DG. Detecting disease associations due to linkage disequilibrium using haplotype tags: A class of tests and the determinants of statistical power. Hum Hered. 2003;56:18–31. [PubMed]
• Mantel N. Chi-square tests with one degree of freedom: extensions of the Mantel-Haenszel procedure. J Am Stat Assoc. 1963;58:690–700.
• Kang H, Qin ZS, Niu T, Liu JS. SNP-based haplotype inference with genotyping uncertainty. Am J Hum Genet. 2004;74:495–510. [PubMed]
• Kendall MG, Stuart A. Advanced Theory of Statistics. Volume 2, 3rd edition. London: Charles Griffin and Company; 1961. p. 166.
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8142293095588684, "perplexity": 1724.7586378842077}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296169.46/warc/CC-MAIN-20150323172136-00204-ip-10-168-14-71.ec2.internal.warc.gz"} |
http://mathhelpforum.com/calculus/87799-integration.html | 1. ## integration
-VdV/(kV+g)
where k and g are constants
Please, I need an answer for this with an explanation.
and thank you all
2. Originally Posted by mohamedsafy
-VdV/(kV+g)
where k and g are constants
Please, I need an answer for this with an explanation.
and thank you all
All you have to do is some minor algebra.
V/(kV+g) = $\frac{(1/k) (kV +g)- g/k}{kV+g}$
Separating the fraction, we get
$\frac{1}{k} +\frac{- g}{k(kV+g)}$
Your integral reduces to
$\int \frac{1}{k} +\frac{- g}{k(kV+g)} dV$
$\int \frac{1}{k}+\frac{- g}{k(kV+g)}\, dV = \frac{V}{k} - \frac{ g \ln(kV +g)}{k^{2}} + C$
Since the original integrand carries a leading minus sign, the requested antiderivative is $-\frac{V}{k} + \frac{ g \ln(kV +g)}{k^{2}} + C$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9950761795043945, "perplexity": 4120.643905463306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295383.20/warc/CC-MAIN-20160823195815-00002-ip-10-153-172-175.ec2.internal.warc.gz"} |
https://arxiv.org/abs/quant-ph/0512169 | quant-ph
Title: Deterministic and Unambiguous Dense Coding
Abstract: Optimal dense coding using a partially-entangled pure state of Schmidt rank $\bar D$ and a noiseless quantum channel of dimension $D$ is studied both in the deterministic case where at most $L_d$ messages can be transmitted with perfect fidelity, and in the unambiguous case where when the protocol succeeds (probability $\tau_x$) Bob knows for sure that Alice sent message $x$, and when it fails (probability $1-\tau_x$) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For $\bar D\leq D$ a bound is obtained for $L_d$ in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes et al. For $\bar D > D$ it is shown that $L_d$ is strictly less than $D^2$ unless $\bar D$ is an integer multiple of $D$, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for $\bar D \leq D$, assuming $\tau_x>0$ for a set of $\bar D D$ messages, and a bound is obtained for the average $\langle 1/\tau\rangle$. A bound on the average $\langle\tau\rangle$ requires an additional assumption of encoding by isometries (unitaries when $\bar D=D$) that are orthogonal for different messages. Both bounds are saturated when $\tau_x$ is a constant independent of $x$, by a protocol based on one-shot entanglement concentration. For $\bar D > D$ it is shown that (at least) $D^2$ messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially-entangled states, including noisy (mixed) states.
Comments: Short new section VII added. Latex 23 pages, 1 PSTricks figure in text
Subjects: Quantum Physics (quant-ph)
Journal reference: Phys. Rev. A 73, 042311 (2006)
DOI: 10.1103/PhysRevA.73.042311
Cite as: arXiv:quant-ph/0512169 (or arXiv:quant-ph/0512169v2 for this version)
Submission history
From: Robert B. Griffiths [view email]
[v1] Tue, 20 Dec 2005 18:23:49 GMT (24kb)
[v2] Tue, 14 Feb 2006 21:03:06 GMT (25kb) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9046953320503235, "perplexity": 968.5595155076072}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945637.51/warc/CC-MAIN-20180422174026-20180422194026-00179.warc.gz"} |
http://quantum-algorithms.herokuapp.com/299/paper/node16.html | Next: The Efficiency of the Up: The Quantum Computer Previous: Probability Interpretation Contents
## Quantum Parallelism
Given that our quantum memory register differs from a classical one in that it can store a superposition of the base states of the register, one might wonder what this implies about the efficiency of quantum computing. The study of quantum computing is relatively new; most give credit to Richard Feynman for being the first to suggest that there were tasks that a quantum computer could perform exponentially better than a classical computer. Feynman observed that a classical computer could not simulate a quantum mechanical system without suffering from exponential slowdown. At the same time he hinted that perhaps, by using a device whose behavior was inherently quantum in nature, one could simulate such a system without this exponential slowdown. (Feynman)
Several quantum algorithms rely on something called quantum parallelism. Quantum parallelism arises from the ability of a quantum memory register to exist in a superposition of base states; each component of this superposition may be thought of as a single argument to a function. A function performed on a register in a superposition of states is performed on each of the components of the superposition, yet this function is applied only once. Since the number of possible states is 2^n, where n is the number of qubits in the quantum register, you can perform in one operation on a quantum computer what would take an exponential number of operations on a classical computer. This is fantastic, but the more superposed states that exist in your register, the smaller the probability of measuring any particular one becomes.
As an example, suppose that you are using a quantum computer to calculate the function f(x) = 2*x mod 7, for integers x between 0 and 7 inclusive. You could prepare a quantum register in an equally weighted superposition of the states 0-7. Then you could perform the 2*x mod 7 operation once, and the register would contain the equally weighted superposition of the states 0,2,4,6,1,3,5,0, these being the outputs of the function 2*x mod 7 for inputs 0-7. When measuring the quantum register you would have a 2/8 chance of measuring 0, and a 1/8 chance of measuring any of the other outputs. It would seem that this sort of parallelism is not useful: the more we benefit from parallelism, the less likely we are to measure the value of the function for any particular input. Some clever algorithms have been devised, most notably by Peter Shor and L. K. Grover, which succeed in using quantum parallelism on a function where they are interested in some property of all the inputs, not just a particular one.
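The bookkeeping in this example can be checked with a short classical simulation. This is not a quantum program, just a tally of the squared weights for an equal superposition of the eight inputs:

```python
from collections import Counter
from fractions import Fraction

# Apply f(x) = 2*x mod 7 to every component of an equal superposition of
# the inputs 0..7, then read off the measurement probability of each output.
inputs = range(8)
outputs = [(2 * x) % 7 for x in inputs]

counts = Counter(outputs)
probs = {y: Fraction(c, len(outputs)) for y, c in sorted(counts.items())}
print(probs)  # {0: Fraction(1, 4), 1: Fraction(1, 8), ..., 6: Fraction(1, 8)}
```

The output 0 occurs for both x = 0 and x = 7, hence its 2/8 probability; every other output is measured with probability 1/8, matching the discussion above.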
Matthew Hayward - Quantum Computing and Shor's Algorithm GitHub Repository | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8821664452552795, "perplexity": 348.62088059534415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512015.74/warc/CC-MAIN-20181018214747-20181019000247-00300.warc.gz"} |
https://www.physicsforums.com/threads/hookes-law-does-this-spring.535849/ | Hooke's Law. Does this spring
1. Oct 2, 2011
valeriex0x
1. The problem statement, all variables and given/known data
A student measures the force required to stretch a spring by various amounts and makes the graph shown in the figure, which plots this force as a function of the distance the spring has stretched.
Part A) Does this spring obey Hooke's Law?
Part B) What is the force constant of the spring in N/m?
Part C) What force would be needed to stretch the spring a distance of 17 cm from its unstretched length, assuming that it continues to obey Hooke's Law? (Express F in N.)
2. Relevant equations
F is proportional to displacement x
F=-kx
k = ΔF/Δx
3. The attempt at a solution
We didn't cover this in class yet, but I know that a Hookean body gets displaced, and once the force is removed it goes back to its original position. Which means that the displacement along the x axis is the same as the force applied to it.
For part A) I would say No, the spring does not obey Hooke's Law.
For part B) I get -1/2 N/m from F = -kx: 5 = (-x)(2.5)
For part C) -17 N from F = (-k)(17 cm)
2. Oct 2, 2011
EkaterinaAvd
They might mean absolute values of force and displacement, and you should just treat it as F = kx. A negative k would mean that when your spring is stretched, it tends to be stretched even more, and when it is contracted, it tends to be contracted even more.
(A) Why do you think so? And if it doesn't obey, how can you use it to solve parts (B) and (C)?
(B) Suppose you are right and k=-1/2. Then from your expression: F=-kx=-(-1/2 N/m)*2.5 m=1.25 N which is not equal to 5 N
(C) Suppose you were right in part (B) and k=-1/2. Then F=-(-1/2 N/cm)*17 cm = 8.5 N not equal to 17 N.
Please, think a little more about all three parts and post the next iteration of your solution.
3. Oct 2, 2011
valeriex0x
I originally thought the answer to Part A) was No, because when I looked at the graph, when x=5, y is not equal to 5. I thought that if something obeyed Hooke's Law, both the x and y values would be 5, and again when x=10, y would be 10. I entered the response "No" into Mastering Physics and they told me I was wrong, so obviously I entered "Yes" and then got it correct. Was I wrong in my thinking to believe that they are supposed to be equal? Or is it okay that the x and y values are not equal just so long as the graph is linear?
Can you just explain why my first answer was incorrect? Or did I kind of figure it out with the above response.
4. Oct 2, 2011
valeriex0x
For Part B I calculated:
ΔF/Δx
(15-7.5)/(10-5)=1.5
Am i on the right track?
5. Oct 2, 2011
EkaterinaAvd
Part (B) is correct now, in N/cm. But if you need the answer in SI, you have to convert it to N/m.
For part A:
you can't judge whether force and displacement should be equal to each other, because they have different dimensions.
Imagine that it had x=5 when y=5, etc. Now let's measure x in meters instead of centimeters: you'd have x=0.05 when y=5, when nothing actually changed in your system.
Hooke's law just says that force depends on displacement linearly with some coefficient k, F=kx (talking about magnitudes). This coefficient compensates for the difference in dimensions of F and x (N/cm or N/m).
As your graph is a straight line, the answer is "Yes": force depends on displacement linearly. If it were a parabola or a circle or a sine curve, etc., you would answer "No".
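A short sketch of this thread's arithmetic in R (the graph points, 7.5 N at 5 cm and 15 N at 10 cm, are those read off in post #4 above):

```r
x_cm    <- c(5, 10)               # stretch, cm
force_N <- c(7.5, 15)             # force, N
k <- diff(force_N) / diff(x_cm)   # slope: 1.5 N/cm
k * 100                           # part (B) in SI: 150 N/m
k * 17                            # part (C): 25.5 N at 17 cm
```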
https://www.semanticscholar.org/paper/Hodge-integrals-and-tau-symmetric-integrable-of-Dubrovin-liu/f2f3263eba3ce1700329cc2a5373953c9507c366 | # Hodge integrals and tau-symmetric integrable hierarchies of Hamiltonian evolutionary PDEs
```@article{Dubrovin2014HodgeIA,
title={Hodge integrals and tau-symmetric integrable hierarchies of Hamiltonian evolutionary PDEs},
author={B. Dubrovin and Si-Qi Liu and Di Yang and You-jin Zhang},
year={2014},
volume={293},
pages={382-435}
}```
For an arbitrary semisimple Frobenius manifold we construct a Hodge integrable hierarchy of Hamiltonian partial differential equations. In the particular case of quantum cohomology the tau-function of a solution to the hierarchy generates the intersection numbers of the Gromov–Witten classes and their descendents along with the characteristic classes of Hodge bundles on the moduli spaces of stable maps. For the one-dimensional Frobenius manifold the Hodge hierarchy is a deformation of the …
32 Citations
Cubic Hodge integrals and integrable hierarchies of Volterra type
A tau function of the 2D Toda hierarchy can be obtained from a generating function of the two-partition cubic Hodge integrals. The associated Lax operators turn out to satisfy an algebraic relation.
Fractional Volterra hierarchy
• Mathematics, Physics
• 2017
The generating function of cubic Hodge integrals satisfying the local Calabi–Yau condition is conjectured to be a tau function of a new integrable system which can be regarded as a fractional …
Connecting Hodge Integrals to Gromov–Witten Invariants by Virasoro Operators
• Mathematics, Physics
• 2017
In this paper, we show that the generating function for linear Hodge integrals over moduli spaces of stable maps to a nonsingular projective variety X can be connected to the generating function for …
Integrable viscous conservation laws
• Mathematics, Physics
• 2013
We propose an extension of the Dubrovin-Zhang perturbative approach to the study of normal forms for non-Hamiltonian integrable scalar conservation laws. The explicit computation of the first few …
Geometry and arithmetic of integrable hierarchies of KdV type. I. Integrality
• Mathematics, Physics
• 2021
For each of the simple Lie algebras g = Al, Dl or E6, we show that the all-genera one-point FJRW invariants of g-type, after multiplication by suitable products of Pochhammer symbols, are the …
Integrable systems of double ramification type
• Mathematics, Physics
• 2016
In this paper we study various aspects of the double ramification (DR) hierarchy, introduced by the first author, and its quantization. We extend the notion of tau-symmetry to quantum integrable …
Simple Lie Algebras, Drinfeld–Sokolov Hierarchies, and Multi-Point Correlation Functions
• Mathematics, Physics
• 2016
For a simple Lie algebra $\mathfrak{g}$, we derive a simple algorithm for computing logarithmic derivatives of tau-functions of the Drinfeld–Sokolov hierarchy of $\mathfrak{g}$-type in terms of …
Classical Hurwitz numbers and related combinatorics
• Mathematics
• 2016
In 1891 Hurwitz [30] studied the number Hg,d of genus g ≥ 0 and degree d ≥ 1 coverings of the Riemann sphere with 2g + 2d − 2 fixed branch points, and in particular found a closed formula for Hg,d for …
On tau-functions for the KdV hierarchy
• Physics, Mathematics
• 2018
For an arbitrary solution to the KdV hierarchy, the generating series of logarithmic derivatives of the tau-function of the solution can be expressed by the basic matrix resolvent via algebraic …
K-theoretic Gromov–Witten invariants in genus 0 and integrable hierarchies
• Mathematics
• 2016
We prove that the genus 0 invariants in K-theoretic Gromov–Witten theory are governed by an integrable hierarchy of hydrodynamic type. If the K-theoretic quantum product is semisimple, then we also …
#### References
Normal forms of hierarchies of integrable PDEs, Frobenius manifolds and Gromov - Witten invariants
• Mathematics, Physics
• 2001
We present a project of classification of a certain class of bihamiltonian 1+1 PDEs depending on a small parameter. Our aim is to embed the theory of Gromov–Witten invariants of all genera into the …
A polynomial bracket for the Dubrovin--Zhang hierarchies
• Mathematics, Physics
• 2010
We define a hierarchy of Hamiltonian PDEs associated to an arbitrary tau-function in the semi-simple orbit of the Givental group action on genus expansions of Frobenius manifolds. We prove that the …
On deformations of quasi-Miura transformations and the Dubrovin–Zhang bracket
• Mathematics, Physics
• 2012
Abstract In our recent paper, we proved the polynomiality of a Poisson bracket for a class of infinite-dimensional Hamiltonian systems of partial differential equations (PDEs) associated to …
Simple singularities and integrable hierarchies
• Mathematics
• 2005
The paper [11] gives a construction of the total descendent potential corresponding to a semisimple Frobenius manifold. In [12], it is proved that the total descendent potential corresponding to K. …
On Hamiltonian perturbations of hyperbolic systems of conservation laws
• Mathematics, Physics
• 2004
We study the general structure of formal perturbative solutions to the Hamiltonian perturbations of spatially one-dimensional systems of hyperbolic PDEs. Under certain genericity assumptions it is …
Recursion Relations for Double Ramification Hierarchies
• Mathematics, Physics
• 2014
In this paper we study various properties of the double ramification hierarchy, an integrable hierarchy of Hamiltonian PDEs introduced in Buryak (Commun. Math. Phys. 336(3):1085–1107, 2015) using …
Hodge integrals and Gromov-Witten theory
• Mathematics, Physics
• 1998
Integrals of the Chern classes of the Hodge bundle in Gromov-Witten theory are studied. We find a universal system of differential equations which determines the generating function of these …
GROMOV - WITTEN INVARIANTS AND QUANTIZATION OF QUADRATIC HAMILTONIANS
We describe a formalism based on quantization of quadratic hamiltonians and symplectic actions of loop groups which provides a convenient home for most of the known general results and conjectures about …
KP hierarchy for Hodge integrals
Abstract Starting from the ELSV formula, we derive a number of new equations on the generating functions for Hodge integrals over the moduli space of complex curves. This gives a new simple and …
New topological recursion relations
• Mathematics
• 2008
Simple boundary expressions for the k-th power of the cotangent line class on the moduli space of stable 1-pointed genus g curves are found for k >= 2g. The method is by virtual localization on the …
https://www.physicsforums.com/threads/faradays-law-and-induced-emf.307489/ | # Faraday's Law and induced EMF
• #1
## Main Question or Discussion Point
induced emf = - d(flux)/dt
If this is applied to a loop where the induced emf drives a current, and hence a flux of its own, do we have to consider that flux (of course we don't if it's constant)?
If the external flux has a nonzero second derivative, then the induced emf is changing with time, and thus the induced flux has a nonzero first derivative. Will this varying induced flux need to be considered when applying Faraday's law?
• #2
Yes. The internal flux must be considered except where it is much smaller than the external flux. Lenz' law describes the internal flux as opposite in direction to the external. Hence the net flux decreases. Faraday's law relates the *net* flux to the emf. Thus the emf is determined by the external flux plus the geometry and resistance of the loop itself.
Claude
• #3
jtbell
Mentor
This leads to the concept of "self inductance" of a coil or loop of wire.
• #4
Say we have a circular loop of wire with some area and a uniform magnetic field pointing directly into it (no angle).
What if the magnitude of B is something like
B(t) = 100t^5 + 100t^4 + 100t^3 + 100t^2 + 100t + 100
Then finding an expression for the emf in the loop of wire will be very hard, correct?
Because the actual rate of change of flux through the loop at time t is not just Area*B'(t), but rather (Area*B'(t) + self_flux'(t)),
where self_flux(t) is the flux created by the loop itself.
Correct?
• #5
I worked this problem out last month, but it's at home and I'm at work right now. I'll scan it and post it later tonight.
Claude
• #6
I worked this problem out last month, but it's at home and I'm at work right now. I'll scan it and post it later tonight.
Claude
I just made that question up to explain the issue I'm having in understanding Faraday's law. It's not a problem from anywhere.
If you mean that you also "considered" this issue a month ago and worked out some proof where we can ignore the self_flux, then that would be great if you can scan that work.
• #7
Because the actual flux through the loop at time t is not just Area*B'(t) , but rather (Area*B'(t) + self_flux'(t) ) Where self_flux(t) is the flux created by the loop itself.
Correct?
Isn't there a minus sign in the total flux because of Lenz' Law?
• #8
Here it is. I re-uploaded it in jpg format; I forgot that the psd format is unreadable for most. The emf, or voltage if you prefer, and the current are given by:
V = -j*omega*phi_e*R / (R + j*omega*L);
I = j*omega*phi_e / (R + j*omega*L).
Plugging in all boundary conditions makes perfect sense. If R is quite large, >> omega*L, then V reduces to:
-j*omega*phi_e, which is Faraday's law w/o considering self inductance.
Note - R = resistance of loop; L = inductance of loop; phi_e = external flux normal to loop; omega = radian frequency of time changing flux.
Claude
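A quick numerical sketch of these expressions in R (the component values below are illustrative assumptions, not taken from the thread):

```r
# emf and current of the loop, including self-inductance (j -> 1i in R).
phi_e <- 1e-3         # external flux amplitude, Wb (assumed)
R     <- 10           # loop resistance, ohm (assumed)
L     <- 1e-6         # loop self-inductance, H (assumed)
omega <- 2 * pi * 50  # radian frequency of the driving flux (assumed)
V      <- -1i * omega * phi_e * R / (R + 1i * omega * L)
I_loop <-  1i * omega * phi_e     / (R + 1i * omega * L)
Mod(V); Mod(-1i * omega * phi_e)  # nearly equal here, since R >> omega*L
```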
• #9
Also, phi_i = internal flux due to the loop's own current.
Claude
https://xianblog.wordpress.com/tag/buffons-needle/ | ## dial e for Buffon
Posted in Books, Kids, Statistics on January 29, 2021 by xi'an
The use of Buffon's needle to approximate π by a (slow) Monte Carlo estimate is a well-known illustration. But that a similar experiment can be used for approximating e seems less known, judging from the 08 January riddle from The Riddler. When considering a sequence of n exchangeable random variables, the probability of a particular ordering of the sequence is 1/n!. Thus, counting how many darts need be thrown at a target until the distance to the centre increases produces a random number N≥2 with pmf 1/n!-1/(n+1)! and with expectation equal to e. Which can be checked as follows
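# Note (assumption, not in the scraped post): rt must be a user-defined
# vector of successive dart-to-centre distances, shadowing stats::rt; for
# darts uniform on the unit disk one could set rt <- sqrt(runif(1e5)).
# diff(rt) > 0 then flags the throws at which the distance first increases.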
p=diff(c(0,1+which(diff(rt(1e5))>0)))
sum((p>1)*((p+1)*(p+2)/2-1)+2*(p==1))
which recycles simulations by using every one as a starting point (code golfers welcome!).
An earlier post on the ‘Og essentially covered the same notion, also linking it to Forsythe’s method and to Gnedenko. (Rényi could also be involved!) Paradoxically, the extra-credit given to the case when the target is divided into equal distance tori is much less exciting…
## Buffon machines
Posted in Books, pictures, Statistics, University life on December 22, 2020 by xi'an
By chance I came across a 2010 paper entitled On Buffon Machines and Numbers by Philippe Flajolet, Maryse Pelletier and Michèle Soria, which relates to Bernoulli factories, a related riddle, and the recent paper by Luis Mendo I reviewed here. What the authors call a Buffon machine is a device that produces a perfect simulation of a discrete random variable out of a uniform bit generator, just as (Georges-Louis Leclerc, comte de) Buffon's needle produces a Bernoulli outcome with success probability π/4 out of a real Uniform over (0,1), turned into a sequence of Uniform random bits.
“Machines that always halt can only produce Bernoulli distributions whose parameter is a dyadic rational.”
When I first read this sentence it seemed to clash with the earlier riddle, until I realised the latter started from a B(p) coin to produce a fair coin, while this paper starts with a fair coin.
The paper hence aims at a version of the Bernoulli factory problem (see Definition 2), although the term is only mentioned at the very end, with the added requirement of simplicity and conciseness translated as a finite expected number of draws and possibly an exponential tail.
It first recalls the (Forsythe-)von Neumann method of generating exponential (and other) variates out of a Uniform generator (see Section IV.2 in Devroye's generation bible), expanded into a general algorithm for generating discrete random variables whose pmfs are related to permutation cardinals,
$\mathbb P(N=n)\propto P_n\lambda^n/n!$
if the Bernoulli generator has probability λ. These include the Poisson and the logarithmic distributions and, as a side product, Bernoulli distributions with some logarithmic, exponential and trigonometric transforms of λ.
As a side remark, a Bernoulli generator with probability 1/π is derived from the Ramanujan identity
$\frac{1}{\pi} = \sum_{n=0}^\infty {2n \choose n}^3 \frac{6n+1}{2^{8n+2}}$
as “a discrete analogue of Buffon’s original”. In a neat connection with Mendo’s paper, the authors of this 2010 paper note that Euler’s constant does not appear to be achievable by a Buffon machine.
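As a numerical sanity check of the Ramanujan identity quoted above, the partial sums converge very quickly (plain R):

```r
n <- 0:6
terms <- choose(2 * n, n)^3 * (6 * n + 1) / 2^(8 * n + 2)
cumsum(terms)  # rapidly approaches 1/pi
1 / pi         # 0.3183099
```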
## certified RNGs
Posted in Statistics on April 27, 2020 by xi'an
A company called Gaming Laboratories International (GLI) is delivering certificates of randomness. Apparently using Marsaglia’s DieHard tests. Here are some unforgettable quotes from their webpage:
“…a Random Number Generator (RNG) is a key component that MUST be adequately and fully tested to ensure non-predictability and no biases exist towards certain game outcomes.”
“GLI has the most experienced and robust RNG testing methodologies in the world. This includes software-based (pseudo-algorithmic) RNG’s, Hardware RNG’s, and hybrid combinations of both.”
“GLI uses custom software written and validated through the collaborative effort of our in-house mathematicians and industry consultants since our inception in 1989. An RNG Test Suite is applied for randomness testing.”
“No lab in the world provides the level of iGaming RNG assurance that GLI does. Don’t take a chance with this most critical portion of your iGaming system.”
## The answer is e, what was the question?!
Posted in Books, R, Statistics on February 12, 2016 by xi'an
A rather exotic question on X validated: since π can be approximated by random sampling over a unit square, is there an equivalent for approximating e? This is an interesting question, as, indeed, why not focus on e rather than π after all?! But very quickly the artificiality of the problem comes back to hit one in the face… With no restriction, it is straightforward to think of a Monte Carlo average that converges to e as the number of simulations grows to infinity. However, such methods, like Poisson and normal simulations, require complex functions like sine, cosine, or exponential… But then someone came up with a connection to the great Russian probabilist Gnedenko, who gave as an exercise that the average number of uniforms one needs to add to exceed 1 is exactly e, because it can be written as
$\sum_{n=0}^\infty\frac{1}{n!}=e$
(The result was later detailed in the American Statistician as an introductory simulation exercise akin to Buffon’s needle.) This is a brilliant solution as it does not involve anything but a standard uniform generator. I do not think it relates in any close way to the generation from a Poisson process with parameter λ=1 where the probability to exceed one in one step is e⁻¹, hence deriving a Geometric variable from this process leads to an unbiased estimator of e as well. As an aside, W. Huber proposed the following elegantly concise line of R code to implement an approximation of e:
1/mean(n*diff(sort(runif(n+1))) > 1)
Hard to beat, isn’t it?! (Although it is more exactly a Monte Carlo approximation of
$\left(1-\frac{1}{n}\right)^n$
which adds a further level of approximation to the solution….)
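As a complementary sketch (base R only), Gnedenko's construction can also be simulated directly, trading Huber's elegance for transparency:

```r
# Average number of U(0,1) draws needed for the running sum to exceed 1;
# the expectation of that count is exactly e.
set.seed(1)
counts <- replicate(1e4, {
  s <- 0; k <- 0
  while (s <= 1) { s <- s + runif(1); k <- k + 1 }
  k
})
mean(counts)  # close to exp(1) = 2.71828...
```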
## Buffon needled R exams
Posted in Books, Kids, R, Statistics, University life on November 25, 2013 by xi'an
Here are two exercises I wrote for my R mid-term exam in Paris-Dauphine around Buffon’s needle problem. In the end, the problems sounded too long and too hard for my 3rd year students so I opted for softer questions. So recycle those if you wish (but do not ask for solutions!) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8523300290107727, "perplexity": 1292.6565910678407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950383.8/warc/CC-MAIN-20230402043600-20230402073600-00638.warc.gz"} |
https://www.physicsforums.com/threads/definition-of-time-ordered-product-for-dirac-spinors.385462/ | # Definition of time-ordered product for Dirac spinors
1. Mar 10, 2010
### sith
I guess the answer to this question actually should be pretty obvious, but I still have problems getting it right. I wonder about the definition of the time-ordered product for a pair of Dirac spinors. In all the books I've read it simply says:
$$T\left\{\psi(x)\bar{\psi}(x')\right\} = \theta(t - t')\psi(x)\bar{\psi}(x') - \theta(t' - t)\bar{\psi}(x')\psi(x)$$
The spinor indices are always left out. So should it be A:
$$T\left\{\psi_\alpha(x)\bar{\psi}_\beta(x')\right\} = \theta(t - t')\psi_\alpha(x)\bar{\psi}_\beta(x') - \theta(t' - t)\bar{\psi}_\beta(x')\psi_\alpha(x)$$
or B:
$$T\left\{\psi_\alpha(x)\bar{\psi}_\beta(x')\right\} = \theta(t - t')\psi_\alpha(x)\bar{\psi}_\beta(x') - \theta(t' - t)\bar{\psi}_\alpha(x')\psi_\beta(x)$$?
I personally think the A definition feels more natural, but when I use it in my derivations I get strange results. On the other hand, the B definition gives more reasonable results. It could simply be that I've done some mistakes in the derivations, but before I dig into those I want to know if I've got the definition right in the first place.
2. Mar 10, 2010
### sith
Sorry, I found what I did wrong in the derivations, and now it comes out right with the A definition. :)
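An editorial note, not from the thread: convention A is indeed the standard one. The time-ordering symbol permutes the field operators as wholes, while the free spinor indices α and β stay attached to their respective fields; with convention A the time-ordered product is exactly what is sandwiched in the Feynman propagator,

$$\left(S_F\right)_{\alpha\beta}(x - x') = \langle 0|\,T\left\{\psi_\alpha(x)\bar{\psi}_\beta(x')\right\}|0\rangle.$$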
http://fonoplanet.com/md/admin/index.php | ## Copyright notice
Copyright (C) 1999 onwards Martin Dougiamas (http://moodle.com)
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the Moodle License information page for full details:
http://docs.moodle.org/dev/License | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8655266761779785, "perplexity": 2929.2448582405764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999539.60/warc/CC-MAIN-20190624130856-20190624152856-00413.warc.gz"} |
https://www.physicsforums.com/threads/theta-problem-o_0.150197/ | # Theta problem o_0
• Thread starter AznBoi
#### AznBoi
1. Homework Statement
An engineer wishes to design a curved exit ramp for a toll road in such a way that a car will not have to rely on friction to round the curve without skidding. She does so by banking the road in such a way that the force causing the centripetal acceleration will be supplied by the circular path.
a) Show that for a given speed (v) and a radius (r), the curve must be banked at the angle (theta) such that $$tan \theta= \frac{v^2}{rg}$$
b)
Find the angle at which the curve should be banked if a typical car rounds it at a 50m radius and a speed of 13.4 m/s.
2. Homework Equations
$$tan \theta= \frac{v^2}{rg}$$
3. The Attempt at a Solution
I have no idea what a) means or how to start it. I know that you have to show it by using the variables given. However, I don't know how you would show it. =P
b) $$tan \theta= \frac{v^2}{rg}$$
$$tan \theta= \frac{(13.4\ m/s)^2}{(50\ m)(9.8\ m/s^2)}$$
$$tan \theta= 0.366449 radians$$
$$\theta= tan^{-1}{}0.366449 radians$$
Thanks for your help!
#### gabee
Try drawing a free-body diagram and listing out the net forces acting on the car, that's always a good idea!
#### AznBoi
Try drawing a free-body diagram and listing out the net forces acting on the car, that's always a good idea!
I don't understand the question that well. The road is being tilted to an angle theta, right? So that means both the normal force and the centripetal acceleration pointed towards the middle help keep the car in circular motion? I don't get how you solve a) though. How are you supposed to show that? Plug numbers in? I'm lost. =/
#### Zell2
So that means both the normal force and the centripetal acceleration pointed towards the middle helps with keeping the car in a circular rotation?
The centripetal acceleration isn't a force in itself: a force or a component of a force provides the required centripetal acceleraion.
Try drawing a force diagram as suggested, marking on the two forces acting on the car and then think about what value the components of the unknown force must have.
#### Hootenanny
Staff Emeritus
Gold Member
Just to further clarify what Zell2 said; you can think of the centripetal force as "The net force directed towards the centre of the circular motion required to produce the centripetal acceleration".
#### gabee
Zell2 is right, a component of one of these forces will provide the centripetal acceleration. Find out which one and set it equal to $$\frac{mv^2}{r}$$ (centripetal force).
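For reference, a sketch of the standard no-friction banking argument these hints point toward (N is the magnitude of the normal force and m the mass; this is the usual textbook route, not a quote from the thread):

$$N\cos\theta = mg, \qquad N\sin\theta = \frac{mv^2}{r} \quad\Longrightarrow\quad \tan\theta = \frac{v^2}{rg}.$$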
#### AznBoi
Zell2 is right, a component of one of these forces will provide the centripetal acceleration. Find out which one and set it equal to $$\frac{mv^2}{r}$$ (centripetal force).
Well I know that it is the x component of the normal force, because the y component and the weight cancel out. So is the x component of the normal force pointed parallel to the slope of theta? Or does it point horizontally to form a rectangular box?
#### AznBoi
haha nvm. I finally solved a) by using sines and cosines; I get it now. So the centripetal force is the net force of the x component of the normal force? Is that why you make them equal to each other?
So in order to find the centripetal acceleration, you divide the net force(centripetal force) by the mass of the object right??
Can someone confirm that part b) is correct? Thanks a lot!!
#### PhanthomJay
Homework Helper
Gold Member
haha nvm. I finally solved a) by using sines and cosines I get it now. So the centripetal force is the net force of the x component of the normal force? Is that why you make them equal to each other?
The net force in the x direction, which is the centripetal direction, equals the product of the mass times the acceleration in the x (centripetal) direction, per Newton's 2nd law.
So in order to find the centripetal acceleration, you divide the net force (centripetal force) by the mass of the object right??
Yes, which is a_centripetal = v^2/r. The centripetal acceleration is due to a change in direction of the velocity, not a change in its speed.
Can someone confirm that part b) is correct? Thanks a lot!!
Check your units. The tan of an angle has none. The angle itself may be represented in degrees or radians or some other measure. What is the angle?
#### AznBoi
Check your units. The tan of an angle has none. The angle itself may be represented in degrees or radians or some other measure. What is the angle?
I think you're supposed to find the angle. How come part b) is incorrect? I just plugged all the given numbers in to find theta.
Here is b) Find the angle at which the curve should be banked if a typical car rounds it at a 50m radius and a speed of 13.4 m/s.
#### PhanthomJay
Homework Helper
Gold Member
I think your suppose to find the angle. How come part b) is incorrect? I just plugged all the given numbers in to find theta.
Here is b) Find the angle at which the curve should be banked if a typical car rounds it at a 50m radius and a speed of 13.4 m/s.
The tan of the angle is 0.366, not 0.366 radians. Now get out your calculator and find theta, and pay heed to the setting… is it degrees or radians or what??
#### AznBoi
The tan of the angle is 0.366, not 0.366 radians. Now get out your calculator and find theta, and pay heed on the setting..is it degrees or radians or what??
I don't get how you find the angle. I did tan^-1 on the calculator using radian mode and I got 0.35125 radians. Is that the measurement of theta?
#### PhanthomJay
Homework Helper
Gold Member
i don't get how you find the angle. I did the tan-1 on calculator using radian mode and I got .35125 radians. Is that the measurement of theta?
Sure you get it; your answer is correct. Theta = 0.35125 radians. Now put your calculator in degrees mode and you get 20.125 degrees, correct? Both answers are correct, they're just in different measures. Pi radians = 180 degrees, so one radian is about 57 degrees or so.
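The part (b) arithmetic, checked in a couple of lines of R:

```r
v <- 13.4; r <- 50; g <- 9.8
theta <- atan(v^2 / (r * g))  # 0.35127 rad
theta * 180 / pi              # 20.125 degrees
```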
#### AznBoi
Sure you get it, your answer is correct. Theta = 0.35125 radians. Now put your calculator in degrees mode and you get 20.125 degerees, correct? Both answers are correct, they're just in different measures. Pi radians = 180 degrees, so one radian is about 57 degrees or so.
Oh ok! Thanks a lot for your help! So all of my steps for b) are correct then, right? I don't know if I substituted the correct numbers but I'm pretty sure I did. Thanks again!
https://infoscience.epfl.ch/record/220875?ln=en | Motivic invariants of moduli spaces of rank 2 Bradlow-Higgs triples
In the present thesis we study the geometry of the moduli spaces of Bradlow-Higgs triples on a smooth projective curve C. There is a family of stability conditions for triples that depends on a positive real parameter σ. The moduli spaces of σ-semistable triples of rank r and degree d vary with σ. The phenomenon arising from this is known as wall-crossing. In the first half of the thesis we will examine how the moduli spaces and their universal additive invariants change as σ varies, for the case r = 2. In particular we will study the case of σ very close to 0, for which the moduli space relates to the moduli space of stable Higgs bundles, and σ very large, for which the moduli space is a relative Hilbert scheme of points for the family of spectral curves. Some of these results will be generalized to Bradlow-Higgs triples with poles. In the second half we will prove a formula relating the cohomology of the moduli spaces for small σ and odd degree and the perverse filtration on the cohomology of the moduli space of stable Higgs bundles. We will also partially generalize this result to the case of rank greater than 2.
Hausel, Tamás
Year:
2016
Publisher:
Lausanne, EPFL
Other identifiers:
urn: urn:nbn:ch:bel-epfl-thesis7120-0
https://infoscience.epfl.ch/record/30551 | Infoscience
Thesis
# Mesure des paramètres macroscopiques de réseaux uranium-eau légère: application à des réseaux non uniformes [Measurement of the macroscopic parameters of uranium-light water lattices: application to non-uniform lattices]
Time-dependent and time-independent measurements on a subcritical multiplying medium lead to the determination of the two-group diffusion constants: k, L12, L22, l01, and l02. The measured values are related to these constants through five independent expressions. Similar measurements have been made on non-uniform cylindrical lattices, in which the pitch is a function of the radius. In such lattices, the radial fluxes are expressed by Bessel function series, used to define a generalized expression for the effective multiplication factor.
Thèse École polytechnique fédérale de Lausanne EPFL, n° 65 (1968)
#### Reference
Record created on 2005-03-16, modified on 2016-08-08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9326936602592468, "perplexity": 2960.4332027711303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818696653.69/warc/CC-MAIN-20170926160416-20170926180416-00395.warc.gz"} |
https://en.wikipedia.org/wiki/Fuzzy_subalgebra | Fuzzy subalgebra
Fuzzy subalgebras theory is a chapter of fuzzy set theory. It is obtained from an interpretation in a multi-valued logic of axioms usually expressing the notion of subalgebra of a given algebraic structure.
Definition
Consider a first order language for algebraic structures with a monadic predicate symbol S. Then a fuzzy subalgebra is a fuzzy model of a theory containing, for any n-ary operation h, the axioms
${\displaystyle \forall x_{1},\ldots ,\forall x_{n}\,(S(x_{1})\land \cdots \land S(x_{n})\rightarrow S(h(x_{1},\ldots ,x_{n})))}$
and, for any constant c, S(c).
The first axiom expresses the closure of S with respect to the operation h, and the second expresses the fact that c is an element in S. As an example, assume that the valuation structure is defined in [0,1] and denote by ${\displaystyle \odot }$ the operation in [0,1] used to interpret the conjunction. Then a fuzzy subalgebra of an algebraic structure whose domain is D is defined by a fuzzy subset s : D → [0,1] of D such that, for every d1,...,dn in D, if h is the interpretation of the n-ary operation symbol h, then
• ${\displaystyle s(d_{1})\odot ...\odot s(d_{n})\leq s({\mathbf {h}}(d_{1},...,d_{n}))}$
Moreover, if c is the interpretation of a constant c such that s(c) = 1.
A largely studied class of fuzzy subalgebras is the one in which the operation ${\displaystyle \odot }$ coincides with the minimum. In such a case it is immediate to prove the following proposition.
Proposition. A fuzzy subset s of an algebraic structure defines a fuzzy subalgebra if and only if for every λ in [0,1], the closed cut {x ∈ D : s(x)≥ λ} of s is a subalgebra.
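A small computational check of this proposition in R; the monoid (Z_6 under addition mod 6) and the membership degrees below are illustrative assumptions of mine, with min interpreting the conjunction:

```r
# s[x+1] is the membership degree of x in {0,...,5}; note s(0) = 1,
# so the identity element has full membership.
s <- c(1, 0.5, 0.8, 0.5, 0.8, 0.5)
closed <- all(outer(0:5, 0:5, function(x, y)
  pmin(s[x + 1], s[y + 1]) <= s[((x + y) %% 6) + 1]))
closed  # TRUE: s defines a fuzzy submonoid of Z_6
# Each closed cut {x : s(x) >= lambda} is then itself a submonoid:
sapply(c(0.5, 0.8, 1), function(l) which(s >= l) - 1)  # {0..5}, {0,2,4}, {0}
```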
Fuzzy subgroups and submonoids
The fuzzy subgroups and the fuzzy submonoids are particularly interesting classes of fuzzy subalgebras. In such a case a fuzzy subset s of a monoid (M,•,u) is a fuzzy submonoid if and only if
1. ${\displaystyle s({\mathbf {u}})=1}$
2. ${\displaystyle s(x)\odot s(y)\leq s(x\cdot y)}$
where u is the neutral element in A.
Given a group G, a fuzzy subgroup of G is a fuzzy submonoid s of G such that
• s(x) ≤ s(x⁻¹).
It is possible to prove that the notion of fuzzy subgroup is strictly related with the notions of fuzzy equivalence. In fact, assume that S is a set, G a group of transformations in S and (G,s) a fuzzy subgroup of G. Then, by setting
• e(x,y) = Sup{s(h) : h is an element in G such that h(x) = y}
we obtain a fuzzy equivalence. Conversely, let e be a fuzzy equivalence in S and, for every transformation h of S, set
• s(h)= Inf{e(x,h(x)): x∈S}.
Then s defines a fuzzy subgroup of transformation in S. In a similar way we can relate the fuzzy submonoids with the fuzzy orders.
http://iphoneart.com/jwlrt7ly/covariant-derivative-of-covector-4094ec | covariant derivative of covector
covariant (adj.) 1. Physics: expressing, exhibiting, or relating to covariant theory. 2. Statistics: varying with another variable quantity in a manner that leaves a specified relationship unchanged.

In mathematics, the covariant derivative is a way of specifying a derivative along tangent vectors of a manifold; equivalently, it is a (Koszul) connection on the tangent bundle and the other tensor bundles. Ricci and Levi-Civita (following ideas of Elwin Bruno Christoffel) observed that the Christoffel symbols used to define curvature also provide a notion of differentiation that generalizes the classical directional derivative of vector fields, and that is covariant in the sense of being independent of the coordinate system in which it is expressed. In 1950 Jean-Louis Koszul recast covariant differentiation in algebraic terms, eliminating the need for awkward manipulations of Christoffel symbols and other analogous non-tensorial objects; by then the theory had also forked off from the strictly Riemannian context to include a wider range of possible geometries.

For a scalar function f and a vector field v, the covariant derivative \nabla_{\mathbf v}f coincides with the directional derivative: it is the function that associates with each point p in the common domain of f and v the scalar (\nabla_{\mathbf v}f)_p. The covariant derivative extends uniquely to covector fields and to arbitrary tensor fields in a way that is compatible with the tensor product and with contraction:

$$\nabla_{\mathbf v}(\varphi\otimes\psi)_p=(\nabla_{\mathbf v}\varphi)_p\otimes\psi(p)+\varphi(p)\otimes(\nabla_{\mathbf v}\psi)_p.$$

Since the contraction of a vector field V with a covector field W is a scalar, this Leibniz rule determines the covariant derivative of a dual vector field W; in components,

$$(\nabla_i W)_j=\frac{\partial W_j}{\partial x^i}-\Gamma^k{}_{ij}W_k,$$

so that, for instance, \partial\varphi/\partial x^i is the i-th covariant component of the gradient vector of a scalar \varphi. The extra \Gamma-terms account for the change of the coordinate basis vectors from point to point; in flat coordinates they describe how the coordinate grid expands, contracts, twists, and interweaves.

Informal definition using an embedding into Euclidean space. Suppose a (pseudo-)Riemannian manifold M is embedded into Euclidean space (\R^n,\langle\cdot;\cdot\rangle) via a twice continuously differentiable mapping \vec\Psi:\R^d\supset U\rightarrow\R^n such that the tangent space at \vec\Psi(p)\in M is spanned by the vectors \partial\vec\Psi/\partial x^i and the scalar product on \R^n is compatible with the metric on M:

$$g_{ij}=\left\langle\frac{\partial\vec\Psi}{\partial x^i};\frac{\partial\vec\Psi}{\partial x^j}\right\rangle.$$

The covariant derivative is then the tangential part of the ordinary Euclidean derivative,

$$\nabla_V Y=[D_V Y]^{\parallel},\qquad D_V Y=\left.\frac{d}{dt}Y(c(t))\right|_{t=0},$$

for a curve c in M with c(0)=p and \dot c(0)=V. The Christoffel symbols are determined by the metric through

$$g_{kl}\,\Gamma^k{}_{ij}=\frac{1}{2}\left(\frac{\partial g_{jl}}{\partial x^i}+\frac{\partial g_{li}}{\partial x^j}-\frac{\partial g_{ij}}{\partial x^l}\right),$$

and with them the geodesic equations acquire a clear geometric meaning: a curve \gamma is a geodesic of the covariant derivative when \nabla_{\dot\gamma(t)}\dot\gamma(t)=0. The curvature of the manifold can be detected intrinsically: parallel-transport a vector around an infinitesimally small closed loop, subsequently along two directions and then back, and the infinitesimal change of the vector on return is a measure of the curvature.
Extends that of the curvature of the r component in the q direction the. That we subtract a vector field Analysis and Design and connections ; connections and differential! Combination of the analytic features of covariant differentiation forked off from the vector along an infinitesimally small closed subsequently. The tangent bundle and other pictorial examples of visualizing contravariant and covariant.! Le...... that, 17 U U ( K + G ) is tensor in covariant derivative of covector. Is also used to define curvature when covariant derivatives had to be compared have relations... Objects in differential geometry ) is a ( Koszul ) connection on the tangent bundle and other tensor bundles,... Notion of curvature and gives an example ] using ideas from Lie algebra cohomology, Koszul connections the... ( kō-vā′rē-ănt ) in Mathematics for Physical Science and Engineering, 2014 of directional derivatives which is:... Space, \vec\Psi: \R^d \supset U \rightarrow \R^n, \left\lbrace \left begin, let t be a field. Terms of Use and Privacy Policy of lesser curvature e and B from the strictly Riemannian context to include wider. Inhomogeneous equations are ( recall ) ( 16.156 ) ( 16.156 ) 16.156! ( and other tensor bundles G ) we will also define what it means that one of subject... Vector to the north momentum is covariant geometric structure on a manifold which allows in... Particular, \dot { \gamma } ( t ) } \dot\gamma ( t ) vanishes then curve. Algebra cohomology, Koszul connections eliminated the need for awkward manipulations of Christoffel )! Change is coordinate invariant and therefore the Lie derivative is defined as a covariant manner of e and B the! Define Y¢ by a frame field formula modeled on the right is proportional to regular... Describes the new basis vectors as a starting point for defining the derivative represents a four-by-four matrix partial... Need for awkward manipulations of Christoffel symbols and geodesic equations acquire a clear geometric meaning your vector. In the context of connections on ∞ \infty-groupoid principal bundles write Maxwell 's equations in covariant form... defect indicative... I need to show that # # is a vector is a generalization of derivatives! Fields V. V. Ferna´ndez1, A. M. Moya1, E. Notte-Cuello2 and W. A. Rodrigues.! Fields and tensor fields shall be presented a covariant derivative of covector organization be noticed if we drag the at! Another term along V, consider the covariant derivative for vector field along the of. Mass is simply invariant, while momentum is covariant covariant derivative of covector Question has been... 4-Derivative with the field strength tensor say that, contracts, twists, interweaves, etc field constant. Scalar eld, E. Notte-Cuello2 and W. A. Rodrigues Jr.1 begin, let t be a tensor these covariant... Strictly Riemannian context to include a wider range of possible geometries another term t W! L. MYERS... for spacetime, the mass defect is indicative of lesser curvature i.e... Certain behavior on vector fields, covector fields and tensor fields shall be presented the extrinsic component... Of curvature and gives an example d+/dx ', ( Sec.3.6 ) slightly later,... Transport along the curve generalization of directional derivatives which is canonical: Lie... ; View all Topics Lie derivative is a generalization of the Metric tensor ; Christoffel ;... 
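As a concrete illustration of the Christoffel-symbol relation above, the symbols can be computed symbolically from a metric. A minimal sketch (the round 2-sphere metric is our choice of example, not the source's):

```python
import sympy as sp

# Christoffel symbols Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{jl} + d_j g_{li} - d_l g_{ij}),
# evaluated for the round 2-sphere of radius r: g = diag(r^2, r^2 sin^2(theta)).
theta, phi, r = sp.symbols('theta phi r', positive=True)
coords = [theta, phi]
g = sp.Matrix([[r**2, 0], [0, r**2 * sp.sin(theta)**2]])
g_inv = g.inv()
n = len(coords)

Gamma = [[[sp.simplify(sp.Rational(1, 2) * sum(
              g_inv[k, l] * (sp.diff(g[j, l], coords[i])
                             + sp.diff(g[l, i], coords[j])
                             - sp.diff(g[i, j], coords[l]))
              for l in range(n)))
           for j in range(n)]
          for i in range(n)]
         for k in range(n)]

print(Gamma[0][1][1])  # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
print(Gamma[1][0][1])  # Gamma^phi_{theta phi} = cos(theta)/sin(theta), i.e. cot(theta)
```

Plugging these symbols into the geodesic equation $\ddot{x}^k + \Gamma^k{}_{ij}\dot{x}^i\dot{x}^j = 0$ recovers the great circles of the sphere.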
https://www.zora.uzh.ch/id/eprint/92056/ | # Search for the rare decay KS -> mu+ mu-
LHCb Collaboration; Bernet, R; Müller, K; Steinkamp, O; Straumann, U; Vollhardt, A; et al. (2013). Search for the rare decay KS -> mu+ mu-. Journal of High Energy Physics, 2013:90.
## Abstract
A search for the decay $K^0_S \to \mu^+\mu^-$ is performed, based on a data sample of 1.0 fb$^{-1}$ of pp collisions at $\sqrt{s} = 7$ TeV collected by the LHCb experiment at the Large Hadron Collider. The observed number of candidates is consistent with the background-only hypothesis, yielding an upper limit of $\mathcal{B}(K^0_S \to \mu^+\mu^-) < 11\,(9) \times 10^{-9}$ at 95 (90)% confidence level. This limit is a factor of thirty below the previous measurement.
http://math.stackexchange.com/questions/144901/show-that-forall-n-in-mathbbn-left-left-2in-2-in-right-in | # Show that $\forall n \in \mathbb{N} \left ( \left [(2+i)^n + (2-i)^n \right ]\in \mathbb{R} \right )$
My Trig is really rusty and weak so I don't understand the given answer:
$(2+i)^n + (2-i)^n$
$= \left ( \sqrt{5} \right )^n \left (\cos n\theta + i \sin n\theta \right ) + \left ( \sqrt{5} \right )^n \left (\cos (-n\theta) + i \sin (-n\theta) \right )$
$= \left ( \sqrt{5} \right )^n \left ( \cos n\theta + \cos (-n\theta) + i \sin n\theta + i \sin (-n\theta) \right )$
$= \left ( \sqrt{5} \right )^n 2\cos n\theta$
You have $z^n=|z|^n\exp(ni\arg\,z)=|z|^n(\cos(n\arg\,z)+i\sin(n\arg\,z))$ for starters... – J. M. May 14 '12 at 8:40
This gives a neat formula. Another way of proving this is to show that (if we call your expression $a_n$) it satisfies the equation $a_n=4a_{n-1}-5a_{n-2}$ and work from there. – Mark Bennet May 14 '12 at 8:45
Where did Mark get that recursion relation, you ask? Note that $(z-(2+i))(z-(2-i))=z^2-4z+5$... it's the same theory behind Fibonacci sequences. – J. M. May 14 '12 at 8:52
...and the high-brow route is Newton-Girard: $x^n+y^n$ for integer $n$ is always expressible as a combination of $x+y$ and $xy$; for your particular case, $x+y=4$ and $xy=5$ (notice a pattern?) – J. M. May 14 '12 at 8:55
The binomial expansion works, because the odd powers of $i$ are attached to odd powers of $b$ and $-b$ respectively, so will cancel. – Mark Bennet May 14 '12 at 9:15
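Following up on the recursion route in the comments: since $2 \pm i$ are the roots of $z^2 - 4z + 5$, the sequence $a_n = (2+i)^n + (2-i)^n$ satisfies $a_n = 4a_{n-1} - 5a_{n-2}$ with $a_0 = 2$, $a_1 = 4$. A quick numeric check (a minimal sketch):

```python
# a_n = (2+i)^n + (2-i)^n satisfies a_n = 4*a_{n-1} - 5*a_{n-2}, a_0 = 2, a_1 = 4,
# because 2+i and 2-i are the roots of z^2 - 4z + 5. All values are real integers.
def a_direct(n: int) -> complex:
    return (2 + 1j) ** n + (2 - 1j) ** n

a = [2, 4]
for _ in range(2, 10):
    a.append(4 * a[-1] - 5 * a[-2])

print(a)                                             # [2, 4, 6, 4, -14, -76, ...]
print([round(a_direct(n).real) for n in range(10)])  # same list; imaginary parts vanish
```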
There are two ways to write a complex number: rectangular form, e.g., $x+iy$, and polar form, e.g., $re^{i\theta}$. The conversion between them uses trig functions: $$re^{i\theta}=r\cos\theta+ir\sin\theta\;.\tag{1}$$ Going in the other direction, $$x+iy=\sqrt{x^2+y^2}\,e^{i\theta}\;,$$ where $\theta$ is any angle such that $$\cos\theta=\frac{x}{\sqrt{x^2+y^2}}\;\text{ and }\sin\theta=\frac{y}{\sqrt{x^2+y^2}}\;.$$ The important thing for your argument is that $r=\sqrt{x^2+y^2}$.
The $r$ corresponding to $2+i$ is therefore $\sqrt{2^2+1^2}=\sqrt5$, and that corresponding to $2-i$ is $\sqrt{2^2+(-1)^2}=\sqrt5$ as well. The angles for $2+i$ is an angle $\theta$ whose cosine is $\frac2{\sqrt5}$ and whose sine is $\frac1{\sqrt5}$, while the angle for $2-i$ is an angle whose cosine is $\frac2{\sqrt5}$ and whose sine is $-\frac1{\sqrt5}$. It doesn’t matter exactly what they are; the important thing is that if we let the first be $\theta$, the second is $-\theta$, since $$\cos(-\theta)=\cos\theta\;\text{ and }\sin(-\theta)=-\sin\theta\;.$$
Substituting into $(1)$ gives you $$2+i=\sqrt5\cos\theta+i\sqrt5\sin\theta=\sqrt5(\cos\theta+i\sin\theta)=\sqrt5 e^{i\theta}$$ and $$2-i=\sqrt5\cos(-\theta)+i\sqrt5\sin(-\theta)=\sqrt5(\cos\theta-i\sin\theta)=\sqrt5 e^{-i\theta}\;.$$
Now use the fact that it’s easy to raise an exponential to a power:
\begin{align*} (2+i)^n+(2-i)^n&=(\sqrt5)^n\left(e^{i\theta}\right)^n+(\sqrt5)^n\left(e^{-i\theta}\right)^n\\ &=(\sqrt5)^n\left(e^{in\theta}+e^{-in\theta}\right)\\ &=(\sqrt5)^n\Big(\big(\cos n\theta+i\sin n\theta\big)+\big(\cos(-n\theta)+i\sin(-n\theta)\big)\Big)\\ &=(\sqrt5)^n\Big(\cos n\theta+i\sin n\theta+\cos n\theta-i\sin n\theta\Big)\\ &=(\sqrt5)^n 2\cos n\theta\;. \end{align*}
Thanks, it's the fact that $\cos\theta = \cos(-\theta)$ and $\sin(-\theta) = -\sin\theta$ that threw me. – Robert S. Barnes May 14 '12 at 9:21
@Robert: I wasn’t sure just where the problem was, so I took you at your word and went back to basics; I’m glad that it helped. – Brian M. Scott May 14 '12 at 9:23
If you believe that complex conjugation respects products (hence also powers), then the simple way is: $$\overline{x}=\overline{(2+i)^n+(2-i)^n}=(\overline{2+i})^n+(\overline{2-i})^n=(2-i)^n+(2+i)^n=x.$$ So $\overline{x}=x$, and hence $x$ is real.
The binomial formula gives an alternative route: $$x=(2+i)^n+(2-i)^n=\sum_{k=0}^n{n\choose k}2^ki^{n-k}+\sum_{k=0}^n{n\choose k}2^ki^{n-k}(-1)^{n-k}.$$ Here the terms where $n-k$ is odd cancel each other, so we get $$x=2\sum_{k=0,\ k\equiv n\pmod2}^n{n\choose k}2^ki^{n-k}.$$ Here everywhere $i^{n-k}$ is real, because $(n-k)$ is even in all the terms remaining in the sum.
+1 Even though it doesn't explain the given answer, that's really nice. – Robert S. Barnes May 14 '12 at 9:16
@Robert, sorry about that. I simply looked at the title, and didn't read your question to the end. I did see your own suggestion of using the binomial formula, so I added that. – Jyrki Lahtonen May 14 '12 at 9:21
Still a very nice answer. If I could mark it up more than once I would. :-) Glad to see that intuition on the binomial formula was right. – Robert S. Barnes May 14 '12 at 9:31
So your notation there $2|n-k$ means that the summation is only over even values of k? Is that a common notation? – Robert S. Barnes May 14 '12 at 9:35
@Robert, I mean that the summation is over only such values of $k$ that $n-k$ is even. A better way of expressing that would be $k\equiv n\pmod2$. Will edit. – Jyrki Lahtonen May 14 '12 at 9:47
Hint $\$ Scaling the equation by $\sqrt{5}^{\:-n}$ and using Euler's $\: e^{{\it i}\:\!x} = \cos(x) + {\it i}\: \sin(x),\$ it becomes
$$\smash[b]{\left(\frac{2+i}{\sqrt{5}}\right)^n + \left(\frac{2-i}{\sqrt{5}}\right)^n} =\: (e^{{\it i}\:\!\theta})^n + (e^{- {\it i}\:\!\theta})^n$$ But $$\smash[t]{ \left|\frac{2+i}{\sqrt{5}}\right| = 1\ \Rightarrow\ \exists\:\theta\!:\ e^{{\it i}\:\!\theta} = \frac{2+i}{\sqrt{5}} \ \Rightarrow\ e^{-{\it i}\:\!\theta} = \frac{1}{e^{i\:\!\theta}} = \frac{\sqrt{5}}{2+i} = \frac{2-i}{\sqrt 5}}$$
Remark $\$ This is an example of the method that I describe here, of transforming the equation into a simpler form that makes obvious the laws or identities needed to prove it. Indeed, in this form, the only nontrivial step in the proof becomes obvious, viz. for complex numbers on the unit circle, the inverse equals the conjugate: $\: \alpha \alpha' = 1\:\Rightarrow\: \alpha' = 1/\alpha.$
https://wrf.ecse.rpi.edu/pmwiki/pmwiki.php/ComputerGraphicsFall2007/QuaternionExamples | Let the point {$p=(1,2)$}. The corresponding complex number is {$c=1+2i$}. Suppose you want to rotate it by {$\theta=90^\circ=\pi/2$} radians. That is equivalent to multiplying {$c$} by {$e^{i\theta}=e^{i\pi/2}=i$}.
So, {$c'=c e^{i\theta} = (1+2i)i = -2+i$}.
The corresponding 2D point is {$(-2,1)$}.
Now to quaternions in general.
Let {$q_1=(1,2,0,0)$} and {$q_2=(3,0,4,0)$}. {$q_1+q_2=(4,2,4,0)$}.
{$q_1 q_2 = (3,6,4,8)$} {$q_2 q_1 = (3,6,4,-8)$}, which is different.
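For a machine check of these products, here is a minimal Hamilton-product sketch in the same $(w,x,y,z)$ component convention (the function name is ours):

```python
# Hamilton product of quaternions p and q, each given as a (w, x, y, z) tuple.
def qmul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

q1 = (1, 2, 0, 0)    # 1 + 2i
q2 = (3, 0, 4, 0)    # 3 + 4j
print(qmul(q1, q2))  # (3, 6, 4, 8)   i.e. 3 + 6i + 4j + 8k
print(qmul(q2, q1))  # (3, 6, 4, -8)  i.e. 3 + 6i + 4j - 8k
```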
Now to 3D and quaternions.
For example, the 3D point {$(1,0,2)$} corresponds to the quaternion {$1i+0j+2k$}.
to be continued.
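Pending that continuation, here is an illustrative sketch of the next step: a 3D point embedded as a pure quaternion $v$ is rotated by a unit quaternion via $v' = qvq^{-1}$ (for a unit quaternion the inverse is the conjugate); `qmul` is the helper from the sketch above.

```python
import math

# Rotate the point (1, 0, 2) by 90 degrees about the z-axis:
# q = cos(theta/2) + sin(theta/2) k, and v = 0 + 1i + 0j + 2k.
theta = math.pi / 2
q = (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))
q_conj = (q[0], -q[1], -q[2], -q[3])
v = (0.0, 1.0, 0.0, 2.0)

w, x, y, z = qmul(qmul(q, v), q_conj)
print((round(x, 6), round(y, 6), round(z, 6)))  # approximately (0.0, 1.0, 2.0)
```

This matches the planar result: rotating $(x, y)$ by $90^\circ$ sends $(1, 0)$ to $(0, 1)$ while the $z$-coordinate is unchanged.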
http://cvgmt.sns.it/paper/3770/16,%2042,%2010,%2021 | # Flatness results for nonlocal phase transitions
created by cinti on 11 Feb 2018
modified on 20 Mar 2019
Accepted Paper
Inserted: 11 feb 2018
Last Updated: 20 mar 2019
Journal: Springer INDAM Series
Year: 2018
Abstract:
We consider a nonlocal version of the Allen-Cahn equation, which models phase transition problems. In the classical setting, the connection between the Allen-Cahn equation and the classification of entire minimal surfaces is well known and motivates a celebrated conjecture by E. De Giorgi on the one-dimensional symmetry of bounded monotone solutions to the (classical) Allen-Cahn equation up to dimension 8. In this note, we present some recent results in the study of the nonlocal analogue of this phase transition problem. In particular we describe the results obtained in several contributions [8, 9, 13, 14, 25, 41, 44, 46] where the classification of certain entire bounded solutions to the fractional Allen-Cahn equation has been obtained. Moreover we describe the connection between the fractional Allen-Cahn equation and the fractional perimeter functional, and we present also some results in the classification of nonlocal minimal surfaces obtained in [16, 42, 10, 21].
https://www.physicsforums.com/threads/higher-derivative-question.139537/ | # Higher Derivative Question
1. Oct 22, 2006
Alright I decided I'd create a new topic just because the other one was getting fairly lengthy.
I'm having trouble with the following "Higher Derivative" question
It states, find y'' by implicit differentiation.
x^4 + y^4 = a^4
d/dx (x^4+y^4) = d/dx (a^4)
4x^3 + 4y^3 dy/dx = 4a^3
However, what is the next step from here, I thought perhaps cancelling out all the 4's and leaving it as:
x^3 + y^3 dy/dx = a^3
and bring the x^3 over so it's:
a^3-x^3 = y^3 dy/dx
dy/dx = a^3-x^3 / y^3
However, I'm not positive on that because I was told from a friend that 4a^3 is a constant so it can be equal to 0, thus the dy/dx would be -x^3/y^3.
Am I on the right track here or is there an easier way of solving this equation. Thanks a lot guys.
2. Oct 22, 2006
### neutrino
If 'a' is a constant, then its derivative (with respect to any variable) is zero.
3. Oct 22, 2006
### Office_Shredder
Staff Emeritus
4a^3 can't just be written as 0. But as neutrino pointed out, the derivative of a^4 isn't 4a^3
4. Oct 22, 2006
Ahhh I think I'm following you, so pretty much from the first step you could automatically assume that the derivative of a^4 is equal to 0, and disregard the 4a^3 since a is a constant, thus after solving the problem it would be dy/dx = -x^3/y^3, but that's for y', not y''. You need to take the derivative of that now, if I'm not mistaken, using the rule:
f'g-fg' / g^2
And sub in:
-x^3(y^3) - y^3(-x^3)
---------------------------
(y^3)^2
And take the derivatives.
Last edited: Oct 22, 2006
5. Oct 22, 2006
### Office_Shredder
Staff Emeritus
Correct. In fact, I'm sure you'll notice that once you use the chain rule, you'll get y'' in terms of x, y, and y'. All you need to do is substitute -x^3/y^3 for y'.
6. Oct 22, 2006
### neutrino
You're right, and you can also take the derivative x^3 + y^3 dy/dx = 0, using the product rule for the second term.
7. Oct 22, 2006
### HallsofIvy
Staff Emeritus
Notice that when you differentiated y^4 with respect to x you did not get "4y^3", you got "4y^3 y' ". You could say that the derivative of a^4 with respect to x is 4a^3 a' and then since a is a constant, a'= 0 so (a^4)'= 4a^3(0)= 0. Of course, it's simpler to argue that a^4 is itself a constant so (a^4)'= 0 immediately.
As neutrino said, once you have x^2+ y^3 y'= 0 you can just continue using "implicit differentiation": 2x+ 3y^2 (y')^2+ y^3 y"= 0
8. Oct 22, 2006
Oh wow, alright thanks a lot for the advice all of you. HallsofIvy clarified that constant business for me perfectly. I understand that a whole lot better now. Except one correction in your description: it's x^3 + y^3 y' = 0, I believe, as neutrino suggested, not x^2 + y^3 y' = 0
Hence, it would be:
3x^2 + 3y^2 (y')^2 + y^3 y'' = 0
And then for y' you'd sub in -x^3/y^3 as Office Shredder suggested, and solve for y''?
So it would be 3x^2 + 3y^2(-x^3/y^3)^2 + y^3 y'' = 0 I think anyways.
Last edited: Oct 22, 2006
9. Oct 22, 2006
Alright here's an update on what I did for this question anyways, because I assume you use the rule: f'g-fg'/g^2
so from -x^3/y^3 to differentiate this expression I went:
-3x^2(y^3) - (-x^3)(3y^2)y' / (y^3)^2
= -3x^2(y^3) - (-x^3)(3y^2)(-x^3/y^3) / (y^3)^2
Now how do I factor this expression?
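For reference, the derivatives discussed in this thread can be checked symbolically; a minimal sympy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
a = sp.Symbol('a', positive=True)

F = x**4 + y**4 - a**4      # the constraint x^4 + y^4 = a^4

yp = sp.idiff(F, y, x)      # dy/dx via implicit differentiation
ypp = sp.idiff(F, y, x, 2)  # d^2y/dx^2

print(sp.simplify(yp))      # -x**3/y**3
print(sp.factor(ypp))       # -3*x**2*(x**4 + y**4)/y**7
# Using x^4 + y^4 = a^4, the second derivative is y'' = -3*a**4*x**2/y**7.
```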
http://bjj.balancestudios.net/read/lattice-theory-colloquium-publications-american-mathematical-society | # Download Lattice theory (Colloquium publications - American Mathematical Society) by Garrett Birkhoff PDF
By Garrett Birkhoff
Extra info for Lattice theory (Colloquium publications - American Mathematical Society)
Example text
Let us now come to the precise definition of the method we plan to analyze. From now on, we make constant use of the notations and results given in Appendix A. Let $Q = \,]0, 1[^2$, $r > 1$, and let $\omega_0$ be a periodic function in $Q$. Let $\zeta$ be a $C^\infty$ cutoff function of order $r$. We define the periodic convolution of a periodic function (or distribution) $f$ defined on $Q$ with a function $g$ defined in $\mathbb{R}^2$ by $g \ast_p f = g \ast \tilde f$, where $\tilde f$ denotes the periodic extension of $f$ in $\mathbb{R}^2$. Then, if $\zeta$ is a cutoff function, we set $\zeta_\varepsilon(x) = \varepsilon^{-2}\, \zeta(x/\varepsilon)$, $\omega_0^\varepsilon = \zeta_\varepsilon \ast_p \omega_0$, $K_\varepsilon = K \ast \zeta_\varepsilon$.
Two cases are considered: the case of a bounded flow in a simply connected domain and the case of flows in periodic boxes. In the first case, we assume that the normal component of the velocity is zero, the so-called no-through-flow boundary condition. In terms of the stream function $\psi$ such that $u = \nabla \times \psi$, this means that the tangential derivative of $\psi$ vanishes at the boundary. Since the boundary has a single connected component, $\psi$ is a constant at the boundary, which can be set to zero as $\psi$ is obviously determined up to an additive constant.
For periodic or unbounded geometries, it is thus infinite-order accurate in the sense that the distance in some appropriate distribution space between the exact vorticity and its particle approximation can be bounded by $Ch^m$ for all $m$, provided the vorticity has derivatives of order up to $m$ bounded. For rectangular domains without periodicity assumptions, the midpoint rule is only second order, but higher-order initializations can be obtained if initial particle locations coincide with quadrature points associated with Gauss-type quadrature formulas.
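As an illustrative aside, here is a one-dimensional analogue of the periodic mollification defined above; a Gaussian stands in for the compactly supported order-$r$ cutoff $\zeta$, so this is a sketch of the construction, not the book's exact scheme:

```python
import numpy as np

# Periodic mollification omega_eps = zeta_eps *_p omega_0 on [0, 1),
# with zeta_eps(x) = eps^{-1} zeta(x/eps) (1-D analogue of the 2-D eps^{-2} scaling).
N, eps = 256, 0.05
x = np.linspace(0.0, 1.0, N, endpoint=False)
omega0 = np.sign(np.sin(2 * np.pi * x))    # rough periodic data

d = np.minimum(x, 1.0 - x)                 # periodic distance to the origin
zeta = np.exp(-0.5 * (d / eps) ** 2)       # Gaussian cutoff centred at 0 (assumed)
zeta /= zeta.sum()                         # discrete normalisation (unit mass)

# Periodic convolution via the FFT
omega_eps = np.fft.ifft(np.fft.fft(omega0) * np.fft.fft(zeta)).real
assert np.abs(omega_eps).max() <= 1.0 + 1e-12  # averaging cannot overshoot the data
```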
https://asmedigitalcollection.asme.org/tribology/article-abstract/90/1/173/430108/Asymptotic-Methods-for-an-Infinitely-Long-Slider?redirectedFrom=fulltext | The application of the techniques of singular perturbation theory (boundary layer theory) to several problems in gas bearing lubrication is discussed. The leading terms in asymptotic expansions for the pressure are obtained for the cases: a slider bearing with large bearing number, a squeeze-film thrust bearing with large squeeze number, and a combined slider squeeze-film bearing with large bearing number and/or large squeeze number. For the latter problem it is necessary to distinguish several cases depending upon the relative rate at which the bearing number and squeeze number approach infinity.
https://infoscience.epfl.ch/record/64497 | ## Analysis of Chao’s Immune System Simulation
Computational methods have been developed to simulate the complex interactions between components of the immune system. Chao introduced a method based on a stage-structured approach to studying the cytotoxic T cell response to an infection. In this work, we extend the analysis of Chao's simulator by Eric Winnington. We validate the time-step choice used in the simulation and analyze the impact of different factors on the occurrence of a secondary reaction. We find that the initialization of the simulation, and in particular the generation of the T cell receptor chains and epitopes, influences the frequency of this secondary reaction. Furthermore, we note that the secondary reaction arises in a time window when the naïve cell population is low, the effector cell population is decreasing after the primary response, and the memory T cells are not yet able to take part in the response. We find that the secondary reaction does not question the correctness of the simulation. The question of the adequacy of the biological model however remains open and is not discussed in this project.
Year:
2005
https://www.nature.com/articles/s41467-022-30740-7
# History-dependent domain and skyrmion formation in 2D van der Waals magnet Fe3GeTe2
## Abstract
The discovery of two-dimensional magnets has initiated a new field of research, exploring both fundamental low-dimensional magnetism, and prospective spintronic applications. Recently, observations of magnetic skyrmions in the 2D ferromagnet Fe3GeTe2 (FGT) have been reported, introducing further application possibilities. However, controlling the exhibited magnetic state requires systematic knowledge of the history-dependence of the spin textures, which remains largely unexplored in 2D magnets. In this work, we utilise real-space imaging, and complementary simulations, to determine and explain the thickness-dependent magnetic phase diagrams of an exfoliated FGT flake, revealing a complex, history-dependent emergence of the uniformly magnetised, stripe domain and skyrmion states. The results show that the interplay of the dominant dipolar interaction and strongly temperature dependent out-of-plane anisotropy energy terms enables the selective stabilisation of all three states at zero field, and at a single temperature, while the Dzyaloshinskii-Moriya interaction must be present to realise the observed Néel-type domain walls. The findings open perspectives for 2D devices incorporating topological spin textures.
## Introduction
The field of spintronics, which aims to harness the spin degree of freedom of electrons for efficient information storage and logic devices, has received a major impetus through the recent advent of two-dimensional (2D) magnets1,2,3,4,5. 2D materials offer unique spin-related properties, such as long spin relaxation time and spin diffusion length6, or spin-valley locking7. Moreover, stacking individual 2D materials into van der Waals (vdW) heterostructures allows exploitation of interlayer spin-orbit or magnetic exchange proximity effects8,9, and therefore direct design of the magnetic performance. Along these lines, numerous initial prototype devices have already been demonstrated10,11,12,13,14. The first experimentally observed 2D magnets were few-layered single crystals of the insulators Cr2Ge2Te615 and CrI316 in 2017. A range of further 2D magnets have since been discovered17,18,19,20,21, including the target of the present study: metallic ferromagnet Fe3GeTe2 (FGT)22,23,24,25. This 2D material has attracted particular attention due to its interesting charge transport properties26, heavy fermion behaviour27, and high Curie temperature, TC, which is increased for higher Fe content28, and can approach room temperature through a gate voltage-controlled mechanism29.
Recently, the formation of magnetic skyrmions—topological nanoscale whirls in the spin structure of magnetic materials30,31—has been reported in FGT flakes32,33,34,35,36,37,38. This discovery opens up further prospective applications such as 2D skyrmion racetrack or neuromorphic computing devices40,41. To this end, the possibility of exploiting the atomically flat and well-defined interfaces offered by vdW materials presents significant advantages over typical sputtered multilayer thin-film skyrmion systems42,43. However, there have been a variety of conflicting mechanisms proposed for skyrmion formation in FGT flakes, including the dipolar interaction32, an intrinsic interfacial Dzyaloshinskii-Moriya interaction (DMI) between the Te and Fe layer interface37, an interfacial DMI between pristine FGT and its naturally oxidised top and bottom layers33,35, DMI induced by intrinsic defects within the bulk of the material38, or more complex higher-order contributions39. The matter has been further complicated by the observation of both Bloch32,35 and Néel-type33,34,35,36,37 skyrmions.
Moreover, the stability and formation of skyrmions, and related spin textures, in FGT flakes has yet to be fully explored. A well-established method to investigate such stability is to explore the magnetic phase diagram for different field and temperature histories. In particular, history-dependent control of charge and spin degrees of freedom can result in the formation of quenched states, such as charge or spin glasses, which may be otherwise hidden in thermoequilibrium44. For the bulk skyrmion materials, such explorations led to fascinating results: while typically skyrmions only formed in a limited range of applied field and temperature close to TC30, they were discovered to exist outside this region both in a metastable state45, and stabilised by alternative mechanisms such as magnetic frustration46, or an anisotropic exchange interaction47. Therefore, we identified a compelling need for a comprehensive study to investigate the stability and history-dependent formation of magnetic skyrmions in FGT flakes, and to clarify the balance of interactions responsible for their formation.
In this work, we utilise a combination of real-space magnetic imaging methods to investigate the formation of uniform, stripe, and skyrmion states in an exfoliated flake of FGT with a range of thickness regions. Notably, we discovered that the history-dependence of the flake is such that all observed magnetic structures can be realised at a single temperature and at zero field by following specific temperature and field paths. We elucidate this behaviour by determining extensive magnetic phase diagrams following three distinct measurement protocols, revealing the temperature dependence of the skyrmion and stripe domain formation. The extent of the history-dependence is distinctly different from other known skyrmion systems, and appears to originate from the strong, temperature-dependent anisotropy present in FGT. This renders the material particularly advantageous for studying the fundamental domain and skyrmion evolution in 2D magnets, as well as for exploring the potential technological impact. Finally, with comparison to micromagnetic and mean-field simulations, we demonstrate that the combination of the dipolar interaction and out-of-plane anisotropy is likely the dominant stabilisation mechanism of the skyrmions, albeit some interfacial DMI contribution must be present to realise the observed Néel-type domain walls.
## Results
### FGT flake characterisation
Energy dispersive x-ray spectroscopy established the initial bulk FGT crystal to be slightly Fe deficient, with an elemental composition of Fe2.95GeTe1.70 (see methods, Supplementary Note 1, Supplementary Fig. S1). SQUID magnetometry measurements (see methods) determined TC of the crystal to be 195 K, and confirmed the strong, out-of-plane, uniaxial magnetocrystalline anisotropy22,23,24,25, which was found to roughly linearly increase with decreasing temperature, from a value of 170 kJ/m3 at 200 K to 450 kJ/m3 at 50 K (see Supplementary Fig. S2). The sample construction, depicted in Fig. 1a, consists of an exfoliated FGT flake stamped onto a Si3N4 membrane, and capped with a flake of hexagonal boron nitride (hBN) (see methods, Supplementary Fig. S3).
The principle of scanning transmission x-ray microscopy (STXM), which exploits x-ray magnetic circular dichroism (XMCD) to obtain the local electronic state and out-of-plane component of the magnetisation, m, is also illustrated in Fig. 1a48. A STXM image of the investigated FGT flake is shown in Fig. 1b, revealing a range of thickness steps between 10 and 60 nm, as determined by atomic force microscopy measurements (see Supplementary Note 2, Supplementary Fig. S4). By measuring x-ray absorption spectra (see methods) for both bulk and flake samples, and performing XMCD sum rules analysis we determined the magnetic moment to be 1.52 μB/Fe, and estimated that ~10 nm of both surfaces of the flake sample were oxidised before being capped by the hBN layer, resulting in the regions thinner than ~20 nm exhibiting no resolvable ferromagnetic domain ordering (see Supplementary Note 3 and Supplementary Figs. S5, S6). This estimation was supported by real-space transmission electron microscopy measurements of a similarly prepared sample stack, which indicated an oxide layer thickness of ~ 7 nm at both surfaces of the FGT flake (see Supplementary Fig. S7). Therefore, we focused our STXM measurements on the region of interest (ROI) indicated, which exhibits three thickness regions: 60, 50 and 35 nm.
The first key result of this work—the observation of the substantial history-dependence of the magnetic state of the FGT flake—is apparent when considering Fig. 1c–f. By following different field and temperature protocols, illustrated in Fig. 1c, five distinct magnetic states could be selected at 150 K and 0 mT. By zero-field cooling (ZFC) from above TC, 186 K in the flake sample, a labyrinth-like stripe domain (SD) state was achieved, as shown in Fig. 1d. Following the field sweep (FS) protocol, by initialising the sample at ±250 mT and then setting the desired magnetic field, a uniformly magnetised (UM) state was realised, aligned in either the positive or negative out-of-plane direction (UM+ and UM−), respectively. An example of the UM− state is shown in Fig. 1e. Finally, field-cooling (FC) the sample from above TC under an applied field between ±10–25 mT, resulted in the formation of a disordered array of skyrmions, with their cores aligned in either the negative (Sk+) or positive (Sk−) out-of-plane direction. Once formed, these skyrmions persisted after reducing the field to 0 mT, as featured in Fig. 1f.
### Magnetic phase diagrams
To explore the history dependence of the magnetic states further, we performed detailed STXM imaging of the ROI following the three measurement protocols, resulting in the magnetic phase diagrams displayed in Fig. 2. The measurement paths are illustrated by the respective arrows in Fig. 2a–c. Points where the SD and skyrmion states coexisted were included in the skyrmion region of the phase diagram for clarity. Looking first at the results of ZFC measurements in the 60 nm region, shown in Fig. 2a, the magnetic phase diagram is symmetric about 0 mT, and the SD state existed from 186 K down to the instrument’s base temperature of 30 K. Interestingly, the skyrmion state was only observed in a small range of temperatures and applied fields close to TC. This behaviour differs from typical materials hosting dipolar-stabilised skyrmions, such as perovskites or garnets, where they typically form over a much larger temperature range49.
The magnetic phase diagram was found to be significantly different when following the FS protocol, as shown in Fig. 2b. In particular, below 150 K formation of the SD state was not observed—the entire flake sample uniformly switched between the UM+ and UM− states. Such a crossover between multidomain and monodomain behaviour has been argued based on magnetoresistance measurements in previous studies of FGT22, and here we provide direct proof of its existence. Furthermore, we observed an asymmetry in the formation of the skyrmion state: at 180 K, skyrmions only emerged from the SD state at positive fields, while at 185 K, they also nucleated directly from the UM− state at negative fields.
Finally, Fig. 2c shows the phase diagram acquired via the FC procedure with a cooling field of 15 mT. This demonstrates the possibility to realise the skyrmion state over a large portion of the magnetic phase diagram by quenching the skyrmions formed close to TC down to lower temperatures. It is interesting to note that the skyrmions are stable at 0 mT, and that skyrmions formed with a positive applied field appear to survive to larger negative fields, and vice versa. The field asymmetry of the presented FS and FC phase diagrams is only a feature of the specific path followed during the measurements: if the FS phase diagram were taken with increasingly negative field, or if the FC phase diagram were acquired by field cooling at a negative field, the field asymmetry would be reversed.
For the 50 nm and 35 nm thicknesses, plotted in Fig. 2d–i, the phase diagrams are qualitatively similar to those of the 60 nm region, although there are two major differences. Firstly, the value of TC appeared to be reduced to 185 K and 180 K in the 50 and 35 nm regions, respectively. This is consistent with other reports of decreased TC in thin flake samples of FGT22, and would place the thicknesses of the magnetic regions of our sample around 10–40 nm, as expected from the oxide layer thickness estimation. Secondly, the saturation field was reduced with decreasing thickness, resulting in the smaller field range of the SD and skyrmion states in thinner regions. The phase diagrams reveal the high tunability of the magnetic state at low temperatures, and the significant impact of the thermal and field history of the FGT flake. Both the SD and skyrmion states can be realised over a large range of temperatures and fields via a heating and cooling process. However, once the spin textures are annihilated by an applied field below 150 K, only the uniformly magnetised states can be selected without subsequent temperature cycling. Such state switching could offer opportunities for non-volatile phase-change memory functions44.
The observed magnetic phase diagrams are quite different from those found in other skyrmion systems, such as bulk chiral magnets and chiral multilayer thin films. Specifically, the limited temperature range of skyrmion formation observed in our ZFC and FS measurements is reminiscent of the small skyrmion pocket seen in bulk chiral magnets30,31. However, it is interesting to note that in thin lamellae of these B20 systems, the skyrmion pocket typically expands to lower temperatures with decreasing thickness50 – quite different to the behaviour observed here in FGT flakes. Due to the focus on room temperature applications, there have been few investigations into the temperature-field magnetic phase diagrams of chiral multilayer thin film systems. However, here the main difference is that, similarly to the centrosymmetric non-chiral skyrmion materials49, skyrmions in multilayers are typically able to form at a large range of temperatures51. Most notably, while it is common to be able to stabilise both stripe and skyrmion states at 0 mT in all these systems, the stabilisation of a uniformly magnetised state at zero field has not been observed. As shall be explored in our simulations, it is likely that the unique, history-dependent magnetic phase diagrams exhibited by FGT flakes are due to the increasingly hard magnetic properties of FGT flakes with decreasing temperature.
### Real-space imaging of magnetic textures
Closer inspection of the real-space data reveals clues about the stabilisation mechanism of the observed spin textures. Selected STXM images acquired at different temperatures following the FS protocol are shown in Fig. 3a–t (details of contrast normalisation and further STXM data are shown in Supplementary Note 4, Supplementary Figs. S8–10). During each field sweep, the formation of the stripe domain state appears to be precipitated by an initial local nucleation of the oppositely magnetised stripe domain, which proliferates through the rest of the flake to form a labyrinth-like domain state. At higher temperatures and with increasing field, these stripe domains are broken up into individual skyrmions. The density of skyrmion formation is strongly temperature dependent in the vicinity of TC: At 185 K, in Fig. 3p, q, s, t, skyrmions fill the 60 nm and 50 nm thickness regions of the sample. In comparison, at 180 K (Fig. 3k, l), only a few isolated skyrmions are found in the same thickness regions. Due to the reduced TC of the 35 nm region, similar behaviour was observed in the 175 K and 180 K datasets, in Fig. 3h, n.
Lorentz transmission electron microscopy (LTEM) of a comparable FGT flake was performed to investigate the character of the domain walls (see Supplementary Note 5). Figure 3u–x shows the walls becoming more widely spaced and then disappearing as the applied magnetic field is increased following the ZFC procedure at 92 K. The change in image contrast by tilting by an angle α, shown in Fig. 3x, y indicates that our FGT samples exhibit Néel-type domain walls rather than Bloch-type (see Supplementary Figs. S11, S12), as observed in previous studies of thin flake samples33,35.
The real-space measurements also reveal that the average size of the stripe domains and skyrmions is increased at lower temperatures52. This result is quantified in Fig. 4a, where the average stripe domain size at 0 mT is plotted as a function of temperature (see Supplementary Note 6, Supplementary Figs. S13, S14). In addition, there is an indication that, for a fixed temperature, the thinner regions exhibit slightly larger domain sizes. The inset shows that a similar trend is seen for the average skyrmion diameter. The average domain wall width at 0 mT was also extracted, and is plotted as a function of temperature in Fig. 4b, showing a clear increase with decreasing temperature.
These results have important implications concerning the origin of the SD and skyrmion states. The observed temperature and thickness dependence of the domain size could be explained by the interplay of the dipolar interaction with the temperature-dependent uniaxial anisotropy observed in the magnetometry measurements. On the other hand, since Bloch-type domain walls are typically the lower energy configuration in out-of-plane magnetised films53, the LTEM observation of Néel-type domain walls is a strong indication that there must be some DMI contribution present to twist the SD and skyrmion helicity.
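For orientation, textbook micromagnetic estimates relate these quantities: the wall width is $\delta \approx \pi\sqrt{A/K}$ and, with an interfacial DMI $D$, the Néel-wall energy is lowered to $\sigma \approx 4\sqrt{AK} - \pi|D|$ (the magnetostatic term that otherwise favours Bloch walls is omitted here). The sketch below uses the bulk anisotropy values quoted earlier and an assumed exchange stiffness $A$; these are illustrative inputs, not fitted parameters of this work.

```python
import numpy as np

A = 1.0e-12    # exchange stiffness [J/m], assumed for illustration only
D = 0.12e-3    # interfacial DMI [J/m^2], the lower-bound value from the simulations below

for T, K in [(200, 170e3), (50, 450e3)]:  # K [J/m^3] from the bulk magnetometry
    delta = np.pi * np.sqrt(A / K)        # wall width ~ pi*sqrt(A/K)
    sigma_b = 4.0 * np.sqrt(A * K)        # wall energy without the DMI gain
    sigma_n = sigma_b - np.pi * D         # Neel wall gains -pi*|D| from the DMI
    print(f"T = {T:>3} K:  delta = {delta*1e9:4.1f} nm,  "
          f"sigma = {sigma_b*1e3:.2f} -> {sigma_n*1e3:.2f} mJ/m^2 with DMI")
```

With $A$ held fixed, $\delta$ shrinks as $K$ grows on cooling, opposite to the measured trend in Fig. 4b; this is consistent with the mean-field modelling below, which required the exchange to increase at lower temperatures as well.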
### Skyrmion field stability
To investigate the stability of the skyrmions, we studied their field evolution in detail, with the results shown in Fig. 5a–i. First, the FGT flake was initialised following the FC protocol. As the sample passes below TC under an applied field, the interplay of the Zeeman and dipolar interactions results in the formation of a dense disordered array of skyrmions, which survives down to low temperatures beyond their formation region, as shown for a FC field of 15 mT in Fig. 5b. We utilised image recognition software to extract the number and diameter d of the skyrmions in each image (see Supplementary Note 7, Supplementary Figs. S15, S16). These quantities have a strong dependence on the cooling field, as shown by the plots in Fig. 5f and g. For the 35 nm region, skyrmions were observed only when applying cooling fields of 15 mT and below.
After the 15 mT FC initialisation, images were acquired for both increasing and decreasing applied field, with the results shown in Fig. 5a–e. The number and d of the skyrmions in each thickness region during the field sweep are plotted in Fig. 5h and i. For increasing fields, the number of skyrmions quickly decreases, while the average size of the skyrmions slightly decreases. For decreasing field, the number of skyrmions also decreases, while d strongly increases. The filled coloured regions indicate the minimum and maximum diameter of the observed skyrmions at each field, showing that the increasing average size is largely driven by an expansion of a subset of the skyrmions, while the smallest skyrmions remain at a similar d, perhaps due to pinning effects.
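Such counting can be reproduced with standard image tools; below is a generic sketch of extracting skyrmion number and equivalent diameters from a thresholded out-of-plane magnetisation map with scikit-image (illustrative only; the actual image-recognition pipeline is described in Supplementary Note 7):

```python
import numpy as np
from skimage import measure

def count_skyrmions(m_z, pixel_nm, threshold=0.0, min_area_px=4):
    """Count reversed-core regions in an out-of-plane magnetisation map and
    return (number, equivalent diameters in nm). Illustrative sketch only."""
    labels = measure.label(m_z < threshold)  # cores point opposite to the background
    props = [p for p in measure.regionprops(labels) if p.area >= min_area_px]
    diameters = [p.equivalent_diameter * pixel_nm for p in props]
    return len(diameters), diameters

# Synthetic test: one circular "skyrmion" of radius 10 px in a uniform background.
yy, xx = np.mgrid[0:128, 0:128]
m_z = np.ones((128, 128))
m_z[(xx - 64) ** 2 + (yy - 64) ** 2 < 10 ** 2] = -1.0
print(count_skyrmions(m_z, pixel_nm=20.0))   # -> (1, [~400.0 nm])
```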
In their theoretical treatment of skyrmion formation, Büttner et al. laid out useful indicators to distinguish between DMI and dipolar stabilised skyrmions54. The model predicts that DMI-stabilised skyrmions should not significantly alter their size as a function of the applied magnetic field, whereas skyrmions predominantly stabilised by the dipolar interaction can be expected to be sensitive to such field changes. Therefore, the present observations provide a strong indication that the dipolar interaction plays a dominant role in the stabilisation of the observed skyrmion states in FGT.
### Comparison to magnetic simulations
To validate this conclusion, we attempted to compare the experimental data with the results of micromagnetic simulations, utilising realistic values for the micromagnetic parameters (see methods). To determine the approximate value of interfacial DMI, D, required to form Néel-type domain walls in the system, we simulated skyrmion states with a range of D values and observed the helicity of the resulting magnetic textures. As shown in Fig. 6a–d, we found that, for our chosen micromagnetic parameters approximating the FGT flake, a value of D = 0.12 mJ/m2 was sufficient to alter the domain wall helicity from the expected Bloch-type to the experimentally observed Néel-type. The particular value will depend on factors such as the values of the other micromagnetic parameters, and the sample thickness. Selecting this parameter, we modelled the experimental FC results observed in Fig. 5a–e by nucleating a disordered array of skyrmions at 20 mT, and simulated field sweeps with both increasing and decreasing field, as shown in Fig. 6e. Similarly to the experiment, the skyrmions shrink for increasing field, and grow and form patch-like structures for negative magnetic fields—typical signs of a predominantly dipolar-stabilised spin texture54.
Since it is challenging to model temperature with micromagnetic simulations, we performed complementary mean-field calculations to study the temperature-dependent domain evolution observed in the field-sweep measurements in Fig. 3 (see methods). Optimal mean-field model parameters were tuned such that the field sweeps produced domain structures comparable to the experiment, and were estimated to be consistent with micromagnetic values (see “Methods”, Supplementary Note 8). In order to recreate the experimentally observed temperature dependence of the SD and skyrmion formation, it was necessary to scale the uniaxial anisotropy, K, with respect to the saturation magnetisation. Utilising a classical Callen-Callen power law scaling of K, the model captured the high-temperature behaviour well, but did not show the transition to monodomain switching at low temperature (see Supplementary Fig. S17). However, inspired by the large increase in uniaxial anisotropy at low temperatures observed in the magnetometry measurements, we included an additional temperature-dependent prefactor to the scaling law to further increase the anisotropy at lower temperatures (See Supplementary Note 9 and Supplementary Fig. S18). Together with an increase in the exchange interaction at lower temperatures, the model qualitatively reproduces the experimental FS phase diagram, including additional details such as the observed increase in domain size and crossover to monodomain behaviour at lower temperatures, as shown in Fig. 6g–i.
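For reference, the classical Callen-Callen law for uniaxial ($l = 2$) anisotropy predicts $K(T)/K(0) = m(T)^{l(l+1)/2} = m(T)^{3}$ in terms of the reduced magnetisation $m$. A minimal sketch pairing this power law with a spin-1/2 mean-field $m(T)$ (an illustrative stand-in, not the authors' model):

```python
import numpy as np
from scipy.optimize import brentq

Tc = 186.0  # flake Curie temperature [K]

def m_mean_field(T: float) -> float:
    """Spin-1/2 mean-field magnetisation: solve m = tanh(m * Tc / T)."""
    if T >= Tc:
        return 0.0
    return brentq(lambda m: m - np.tanh(m * Tc / T), 1e-9, 1.0)

for T in (50, 100, 150, 180):
    m = m_mean_field(T)
    print(f"T = {T:>3} K:  m = {m:.3f},  Callen-Callen K(T)/K(0) = m^3 = {m**3:.3f}")
```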
## Discussion
The combination of real-space imaging and simulation methods allows us to clarify the origin of skyrmion formation in FGT flakes. Specifically, the observed temperature dependence of the typical stripe domain size, and the field dependence of the skyrmion diameters, indicates that the spin textures are primarily stabilised by the dipolar interaction. The mean-field simulations support this conclusion, demonstrating that the temperature-dependent formation of skyrmions and SDs observed in the FGT flakes can be well-described by the large increase in the out-of-plane anisotropy with decreasing temperature. However, the LTEM observation of Néel-type domain walls necessitates the presence of some form of DMI contribution, although this is not required to be large. The micromagnetic simulations estimate a lower bound for the strength of the DMI in our sample, with the value of 0.12 mJ/m2 being much lower than the theoretical maximum value from previous DFT calculations of oxidised FGT, which suggested values could be as high as 2 mJ/m2 or more33. While the value we present is only a lower bound estimation, our results show that the history-dependent behaviour of FGT can be largely explained by considering the dominant dipolar and anisotropy effects. In the future, investigating a wider range of thicknesses, including down to the monolayer, may yield deeper insights into the spin texture formation, and reveal whether skyrmions can be stabilised at the single-layer limit.
We note that observations of Bloch-type skyrmions in FGT have exclusively been in thicker (>150 nm) samples32,35. With this in mind, we tentatively suggest that in such thick samples, any interfacial DMI present from the oxide layer may no longer be sufficient to stabilise Néel-type domain walls, and this would provide a strong indication that the DMI observed in FGT flakes must be due to some interfacial effect. However, recent studies have suggested that the DMI could instead be due to some intrinsic property of the bulk material, either due to vacancy driven DMI38 or additional higher order effects39. Further work may be required to elucidate the true origin of the monochiral Néel-type domain walls. The issue of distinguishing between dipolar-stabilised and DMI-stabilised skyrmions can be considered controversial because it has been known for decades that the dipolar interaction and a strong uniaxial anisotropy can stabilise magnetic bubbles55. Due to their topological equivalence, these objects have thus been referred to as skyrmion bubbles56. However, the identification of dipolar interaction-stabilised skyrmions need not imply inferiority, since their discovery in new materials, particularly in 2D magnets, offers both renewed fundamental interest and fresh possibilities for their technological implementation. Further work will uncover whether similar history-dependent behaviour may be observed in other skyrmion-hosting 2D magnets. Moreover, the future discovery and investigation of new 2D magnetic materials, such as those proposed to possess chiral crystal structures57, or 2D multiferroics exhibiting electric field-tunable DMI58,59, may allow for the observation of further novel topological spin textures in 2D materials.
In summary, we utilised magnetic imaging techniques to explore the formation of magnetic stripe domains and skyrmions in an exfoliated FGT flake. We determined thickness-dependent magnetic phase diagrams following three distinct measurement protocols, revealing significant differences to other skyrmion hosts, such as bulk chiral and multilayer systems, evidenced by the possibility to selectively stabilise the stripe domain, skyrmion and uniform states at the same temperature and field point. Comparison of the real-spaces images and simulations suggests that the interplay of the dipolar interaction with the temperature-dependent uniaxial anisotropy is the primary stabilisation mechanism of the observed spin textures, while a DMI contribution must be present to twist the chirality of the domain walls from Bloch-type to the observed Néel-type. These results demonstrate the possibility to alter the properties of skyrmions in FGT by exploiting both DMI and dipolar stabilisation mechanisms, which can be readily controlled in 2D magnets thanks to their advantageous stacking properties34,36, opening the path towards advanced skyrmion-based spintronic devices composed of 2D magnet heterostructures.
## Methods
### Sample preparation and characterisation
The Fe3GeTe2 (FGT) single crystal was sourced from a commercial company, HQ Graphene. EDX measurements were performed with a Zeiss SEM Gemini-500, equipped with a Bruker XFlash 6-60 detector. Magnetometry measurements were carried out with a Quantum Design MPMS3 vibrating sample magnetometer, where the bulk FGT crystal was aligned along the relevant crystal axis and fixed to a quartz glass rod using GE varnish. Temperature and applied magnetic field were controlled by the built-in helium cryostat. The FGT flake sample for the STXM experiments was prepared by an all-dry viscoelastic transfer method. In the first step, the FGT bulk crystal was mechanically cleaved and exfoliated onto a PDMS stamp. A stepped flake with a thickness range between 20 and 200 nm, as estimated from the optical contrast, was selected and stamped onto a 100 nm thick SiN membrane. The flake was then capped by an exfoliated hexagonal boron nitride (hBN) sheet with a thickness of ~15 nm. The whole process was performed under ambient conditions, with each side of the flake being exposed to the atmosphere for approximately 30 min. Using a Bruker Dimension Icon atomic force microscope, measurements were performed on the capped FGT flake to determine the thickness of each region. Since the hBN capping complicates the thickness evaluation of the FGT sheet, the FGT thickness was estimated by averaging over 2–5 μm long steps. The thicknesses of all flake regions were then calibrated using measurements of the x-ray transmission (see Supplementary Note 2). For the preparation of the cross-sectional transmission electron microscope (TEM) sample, a focused ion beam (FIB) instrument (FEI FIB Scios) was used, operated at 30 kV for the initial cuts and subsequently reduced to 2 kV for thinning and cleaning. The transmission electron microscopy (TEM) investigation was conducted with a JEM-ARM200F (JEOL) microscope, operated at 200 kV and equipped with a cold field emission gun and a Cs-image aberration corrector.
### Total electron yield x-ray absorption spectroscopy
X-ray absorption spectroscopy measurements were performed on bulk FGT single crystals using the WERA beamline at the KARA synchrotron at the Karlsruhe Institute of Technology, which has an energy resolution of ΔE/E = 2 × 10⁻⁴. We utilised a superconducting magnet end station providing ultra-fast field switching with field ramping rates of 1.5 T/s. The FGT samples were glued to a conductive Mo sample holder, and electrical contact was ensured by coating the edges of the sample in carbon paste. A steel rod was then glued to the top of one of the crystal samples to enable in-vacuum cleaving. The samples were mounted inside the instrument, and the steel rod was removed, cleaving the FGT crystal sample within the vacuum chamber. Cooling was achieved with a liquid nitrogen cryostat. Absorption spectra were then measured over the oxygen K edge and the Fe L3 and L2 edges with fixed x-ray helicity. The energy was varied by controlling the monochromator at a speed of 0.2 eV/s to ensure no energy broadening. Meanwhile, the total electron yield (TEY) from the sample and incident x-ray intensity (I0 value) were measured using a Keithley 6517A electrometer. No noticeable energy drifts were observed between consecutive spectra.
### Scanning transmission x-ray microscopy
Scanning transmission x-ray microscopy (STXM) measurements were performed with the MAXYMUS instrument on the UE46 beamline at the BESSY II electron storage ring operated by the Helmholtz-Zentrum Berlin für Materialien und Energie. With the sample mounted inside the microscope, cooling was achieved by a He cryostat and the applied magnetic field was controlled by varying the arrangement of four permanent magnets. The x-ray beam was focused to a 20 nm spot size using a Fresnel zone plate and order selection aperture, setting the approximate spatial resolution. This focused beam, with a nominal x-ray energy of 707.5 eV, was then rastered across the sample pixel by pixel using a piezoelectric motor stage. By exploiting the effects of XMCD at the resonant x-ray energy at the Fe L3 edge, the transmission of the sample at each point was measured to form an image of the non-magnetic and magnetic domain structure, with the magnetic signal proportional to the out-of-plane magnetisation mz. The presented images of the magnetic domain structure were recorded using a single circular x-ray polarisation. Photons were counted by an avalanche photodiode.
### Lorentz electron transmission microscopy
LTEM measurements were performed using an FEI Titan3 transmission electron microscope equipped with a field-emission electron gun and operated at an acceleration voltage of 300 kV. In normal operation, the electromagnetic objective lens applies a 2 T field to the specimen which would force it into the saturated state. Instead, images were acquired in low-magnification mode where the image is formed using the diffraction lens and the objective lens weakly excited to apply a small magnetic field parallel to the electron beam. The applied field corresponding to a given objective lens current was calibrated to within 1 mT using a Thermo Fisher Hall probe holder. The specimen was cooled in-situ using a liquid nitrogen cooled Gatan double-tilt specimen holder which has a base temperature of 90 K. The FGT sample investigated by LTEM exhibited thickness regions between 15 and 70 nm.
Images were energy-filtered using a Gatan 865 Tridiem so that only electrons which had lost between 0 and 10 eV upon passing through the specimen contributed to the image, and an aperture subtending a half-angle of 0.14 mrad was used to ensure that only the 000 beam and the beams associated with magnetic scattering contributed to the image. Images were recorded on a 2048 by 2048 pixel charge-coupled device (CCD). The defocus and magnification were calibrated by acquiring images with the same lens settings from Agar Scientific’s S106 calibration specimen which consists of lines spaced by 463 nm ruled on an amorphous film. The defocus was found by taking digital Fourier transforms of these images and measuring the radii of the dark rings that result from the contrast transfer function of the lens60.
### Micromagnetic simulations
Micromagnetic simulations following the Landau-Lifshitz-Gilbert equation were performed using the MicroMagnum framework with custom extensions for the DMI. The simulations are based on the experimentally measured Ms = 250 kA/m, and utilised the real-space images of stripe domains to estimate the uniaxial anisotropy K = 44.2 kJ/m3 for the thin FGT flake and the exchange interaction A = 0.7 pJ/m. The anisotropy determined by SQUID measurements for a bulk crystal appears to significantly overestimate the value for a thin exfoliated flake, perhaps due to the enhanced shape anisotropy, which made this adjustment necessary. The system was simulated in two configurations, both with a cell size of 2 × 2 × 20 nm3. For Fig. 6a–d, the disk geometry was simulated by 50 × 50 × 1 cells, with only cells within a radius of 25 cells active. For the DMI estimation, varying DMI values were tested in 0.01 mJ/m2 steps until the Néel configuration was realised at D = 0.12 mJ/m2. For Fig. 6e, the system, with dimensions of 150 × 150 × 1 cells, was initialised with a disordered array of skyrmions that were relaxed into their equilibrium state at 20 mT. From this initial state, the field was both increased and decreased, with the system relaxed at each field point.
### Mean-field simulations
To perform temperature-dependent simulations we developed a mean-field model based on the standard classical spin Hamiltonian61 with spins distributed on a two-dimensional hexagonal lattice of 30 × 30 spins (approx. 150 nm × 150 nm) with periodic boundary conditions. The mean-field energy reads:
$$\mathcal{H}_{\mathrm{MF}}= -\frac{1}{2}J_{\mathrm{E}}(T)\sum_{\langle ij\rangle }\mathbf{m}_{i}\cdot \mathbf{m}_{j}-\frac{1}{2}J_{\mathrm{D}}(T)\sum_{\langle ij\rangle }\mathbf{d}_{ij}\cdot (\mathbf{m}_{i}\times \mathbf{m}_{j})-J_{\mathrm{K}}(T)\sum_{i}(\hat{\mathbf{n}}\cdot \hat{\mathbf{m}}_{i})^{2}-\sum_{i}\mathbf{m}_{i}\cdot \mathbf{B}-\frac{1}{2}J_{\mathrm{DP}}\sum_{ij}\left(-\frac{\mathbf{m}_{i}\cdot \mathbf{m}_{j}}{r_{ij}^{3}}+3\frac{(\mathbf{m}_{i}\cdot \hat{\mathbf{r}}_{ij})(\mathbf{m}_{j}\cdot \hat{\mathbf{r}}_{ij})}{r_{ij}^{3}}\right)$$
(1)
where the symbols $\mathbf{m}_i$ represent mean-field spins, and the individual energy terms starting from the left correspond to exchange, DMI, uniaxial anisotropy, Zeeman energy, and finally the dipolar energy (see Supplementary Note 8). It is worth pointing out that by taking the small-angle approximation and the zero-temperature limit, one reduces the above model to the standard micromagnetic energy. Supplementary Note 8 identifies the relationship between the model constants $J_{\mathrm{E}}(T=0)$, $J_{\mathrm{D}}(T=0)$, $J_{\mathrm{K}}(T=0)$ (in units of Joule, J) and their micromagnetic equivalents $A$ (J/m), $D$ (J/m2) and $K$ (J/m3). The exchange and anisotropy energy constants $J_{\mathrm{E}}(T)$ and $J_{\mathrm{K}}(T)$ are temperature-dependent, scaled following a modified Callen-Callen power law (see Supplementary Note 9). The temperature dependence of $J_{\mathrm{D}}(T)$ is weak and is ignored; the DMI energy term thus depends on temperature only through the moments $\mathbf{m}_i$. The DMI vector $\mathbf{d}_{ij}$ is oriented in the lattice plane and perpendicular to the line connecting two neighbouring spins $i$ and $j$. The anisotropy unit vector $\hat{\mathbf{n}}$ is oriented along the $\hat{\mathbf{z}}$-axis perpendicular to the lattice plane. The angled brackets in the exchange and DMI energy terms imply summation over the nearest-neighbour spins. The spin moment $\mathbf{m}_i$ has a temperature-dependent magnitude, normalised to vary between $m_i = \pm 1$, according to the expression:
$$\mathbf{m}_{i}=\mathcal{L}\left(\beta\,|\mathbf{B}_{i}^{\mathrm{e}}|\right)\hat{\mathbf{B}}_{i}^{\mathrm{e}}$$
(2)
where $\mathcal{L}(x)=\coth x-x^{-1}$ is the Langevin function, $\beta =(k_{\mathrm{B}}T)^{-1}$, $k_{\mathrm{B}}$ is the Boltzmann constant, and $T$ the temperature. The vector $\mathbf{B}_{i}^{\mathrm{e}}$ represents the effective field acting on the spin $\mathbf{m}_i$ and $|\mathbf{B}_{i}^{\mathrm{e}}|$ is its magnitude. The expression for the effective field can be obtained from Eq. (1) by calculating the variational derivative $\mathbf{B}_{i}^{\mathrm{e}}=-\delta \mathcal{H}_{\mathrm{MF}}/\delta \mathbf{m}_{i}=J_{\mathrm{E}}(T)\sum_{j}\mathbf{m}_{j}-J_{\mathrm{D}}\sum_{j}\mathbf{d}_{ij}\times \mathbf{m}_{j}+2J_{\mathrm{K}}(T)(\hat{\mathbf{n}}\cdot \hat{\mathbf{m}}_{i})\hat{\mathbf{n}}+\mathbf{B}+J_{\mathrm{DP}}\sum_{j}(-\mathbf{m}_{j}+3(\mathbf{m}_{j}\cdot \hat{\mathbf{r}}_{ij})\hat{\mathbf{r}}_{ij})\,r_{ij}^{-3}$. Thus according to Eq. (2), the mean-field spin $\mathbf{m}_i$ depends on the exchange and DMI energy couplings, material anisotropy, external and dipolar field, and also on the temperature $T$. The magnetisation structures for a given set of parameters are evaluated by minimising Eq. (1) during the field or temperature evolution starting from uniquely defined initial states61. The parameters required to generate the results in Fig. 6 were $J_{\mathrm{E}}(0)=0.7$, $J_{\mathrm{D}}=0.1$, $J_{\mathrm{K}}(0)=2.5$, $J_{\mathrm{DP}}=0.35$. It was also necessary to identify appropriate temperature dependencies of $J_{\mathrm{E}}(T)$ and $J_{\mathrm{K}}(T)$ (see Supplementary Note 9).
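As an illustration of Eqs. (1) and (2), the following is a minimal self-consistency loop in Python. It is a sketch under simplifying assumptions (a square rather than hexagonal lattice, only the exchange, anisotropy and Zeeman fields retained, and illustrative constants); it is not the authors' implementation:

```python
import numpy as np

def langevin(x):
    """L(x) = coth(x) - 1/x, with the series limit L(x) ~ x/3 near zero."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-4
    safe = np.where(small, 1.0, x)     # avoid division by zero in masked branch
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

def mean_field_state(JE=0.7, JK=2.5, Bz=0.1, kBT=1.0,
                     n=30, iters=2000, tol=1e-9, seed=0):
    """Fixed-point iteration of Eq. (2): m_i <- L(beta |B_e|) B_e/|B_e|."""
    rng = np.random.default_rng(seed)
    m = rng.normal(size=(n, n, 3))
    m /= np.linalg.norm(m, axis=-1, keepdims=True)
    B = np.array([0.0, 0.0, Bz])
    nhat = np.array([0.0, 0.0, 1.0])
    for _ in range(iters):
        # sum of the four nearest neighbours on a periodic square lattice
        nn = sum(np.roll(m, s, axis=a) for a in (0, 1) for s in (1, -1))
        mhat = m / (np.linalg.norm(m, axis=-1, keepdims=True) + 1e-12)
        Be = JE * nn + 2 * JK * mhat[..., 2:3] * nhat + B
        mag = np.linalg.norm(Be, axis=-1, keepdims=True) + 1e-12
        new = langevin(mag / kBT) * Be / mag
        if np.max(np.abs(new - m)) < tol:
            return new
        m = new
    return m

m = mean_field_state()
print("mean out-of-plane moment:", m[..., 2].mean())
```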
## Data availability
The scanning transmission x-ray microscopy, Lorentz transmission electron microscopy, mean-field simulation and characterisation data, generated in this study have been deposited in a Zenodo online repository62. Any further data and materials required to reproduce the work are available from the corresponding authors upon reasonable request.
## Code availability
Code for performing the micromagnetic and mean field simulations are available from authors upon reasonable request.
## References
1. Burch, K. S., Mandrus, D. & Park, J.-G. Magnetism in two-dimensional van der Waals materials. Nature 563, 47–52 (2018).
2. Gong, C. & Zhang, X. Two-dimensional magnetic crystals and emergent heterostructure devices. Science 363, eaav4450 (2019).
3. Li, H., Ruan, S. & Zeng, Y.-J. Intrinsic Van Der Waals Magnetic Materials from Bulk to the 2D Limit: New Frontiers of Spintronics. Adv. Mater. 31, 1900065 (2019).
4. Gibertini, M., Koperski, M., Morpurgo, A. F. & Novoselov, K. S. Magnetic 2D materials and heterostructures. Nat. Nanotechnol. 14, 408–419 (2019).
5. Lin, X., Yang, W., Wang, K. L. & Zhao, W. Two-dimensional spintronics for low-power electronics. Nat. Electron. 2, 274–283 (2019).
6. Han, W., Kawakami, R. K., Gmitra, M. & Fabian, J. Graphene spintronics. Nat. Nanotechnol. 9, 794–807 (2014).
7. Mak, K. F., Xiao, D. & Shan, J. Light-valley interactions in 2D semiconductors. Nat. Photonics 12, 451–460 (2018).
8. Žutić, I., Matos-Abiague, A., Scharf, B., Dery, H. & Belashchenko, K. Proximitized materials. Materials Today 22, 85–107 (2019).
9. Bora, M. & Deb, P. Magnetic proximity effect in two-dimensional van der Waals heterostructure. J. Phys.: Mater. 4, 034014 (2021).
10. Wang, Z. et al. Tunnelling spin valves based on Fe3GeTe2/hBN/Fe3GeTe2 van der Waals heterostructures. Nano Lett. 18, 4303–4308 (2018).
11. Li, X. et al. Spin-dependent transport in Van der Waals magnetic tunnel junctions with Fe3GeTe2 electrodes. Nano Lett. 19, 5133–5139 (2019).
12. Albarakati, S. et al. Antisymmetric magnetoresistance in Van der Waals Fe3GeTe2/graphite/Fe3GeTe2 trilayer heterostructures. Sci. Adv. 5, eaaw0409 (2019).
13. Alghamdi, M. et al. Highly efficient spin-orbit torque and switching of layered ferromagnet Fe3GeTe2. Nano Lett. 19, 4400–4405 (2019).
14. Wang, X. et al. Current-driven magnetization switching in van der Waals ferromagnet Fe3GeTe2. Sci. Adv. 5, eaaw8904 (2019).
15. Gong, C. et al. Discovery of intrinsic ferromagnetism in two-dimensional van der Waals crystals. Nature 546, 265–269 (2017).
16. Huang, B. et al. Layer-dependent ferromagnetism in a van der Waals crystal down to the monolayer limit. Nature 546, 270–273 (2017).
17. McGuire, M. A. et al. Magnetic behavior and spin-lattice coupling in cleavable van der Waals layered CrCl3 crystals. Phys. Rev. Mater. 1, 014001 (2017).
18. Wang, Z. et al. Determining the phase diagram of atomically thin layered antiferromagnet CrCl3. Nat. Nanotechnol. 14, 1116–1122 (2019).
19. Kim, H. H. et al. One million percent tunnel magnetoresistance in a magnetic van der waals heterostructure. Nano Lett. 18, 4885–4890 (2018).
20. Lee, J.-U. et al. Ising-type magnetic ordering in atomically thin FePS3. Nano Lett. 16, 7433–7438 (2016).
21. Bonilla, M. et al. Strong room-temperature ferromagnetism in VSe2 monolayers on van der Waals substrates. Nat. Nanotechnol. 13, 289–293 (2018).
22. Fei, Z. et al. Two-dimensional itinerant ferromagnetism in atomically thin Fe3GeTe2. Nat. Mater. 17, 778–782 (2018).
23. Deiseroth, H.-J. et al. Fe3GeTe2 and Ni3GeTe2—Two new layered transition-metal compounds: crystal structures, HRTEM investigations, and magnetic and electrical properties. Eur. J. Inorg. Chem. 2006, 1561 (2006).
24. Zhuang, H. L., Kent, P. R. C. & Hennig, R. G. Strong anisotropy and magnetostriction in the two-dimensional Stoner ferromagnet Fe3GeTe2. Phys. Rev. B 93, 134407 (2016).
25. Tan, C. et al. Hard magnetic properties in nanoflake van der Waals Fe3GeTe2. Nat. Commun. 9, 1554 (2018).
26. Xu, J. et al. Large anomalous nernst effect in a van der Waals ferromagnet Fe3GeTe2. Nano Lett. 19, 8250–8254 (2019).
27. Zhang, Y. Emergence of Kondo lattice behavior in a van der Waals itinerant ferromagnet, Fe3GeTe2. Sci. Adv. 4, eaao6791 (2018).
28. May, A. et al. Magnetic structure and phase stability of the van der Waals bonded ferromagnet Fe3−xGeTe2. Phys. Rev. B 93, 014411 (2016).
29. Deng, Y. et al. Gate-tunable room-temperature ferromagnetism in two-dimensional Fe3GeTe2. Nature 563, 95–99 (2018).
30. Mühlbauer, S. et al. Skyrmion lattice in a chiral magnet. Science 323, 915 (2009).
31. Yu, X. Z. et al. Real-space observation of a two-dimensional skyrmion crystal. Nature 465, 901 (2010).
32. Ding, B. et al. Observation of magnetic skyrmion bubbles in a van der Waals ferromagnet Fe3GeTe2. Nano Lett. 20, 868–873 (2020).
33. Park, T.-E. et al. Néel-type skyrmions and their current-induced motion in van der Waals ferromagnet-based heterostructures. Phys. Rev. B 103, 104410 (2021).
34. Wu, Y. et al. Néel-type skyrmion in WTe2/Fe3GeTe2 van der Waals heterostructure. Nat. Commun. 11, 3860 (2020).
35. Peng, L. et al. Tunable Néel-Bloch Magnetic Twists in Fe3GeTe2 with van der Waals structure. Adv. Funct. Mater. 2021, 2103583 (2021).
36. Yang, M. et al. Creation of skyrmions in van der Waals ferromagnet Fe3GeTe2 on (Co/Pd)n superlattice. Sci. Adv. 6, eabb5157 (2020).
37. Wang, H. et al. Characteristics and temperature-field thickness evolutions of magnetic domain structures in van der Waals magnet Fe3GeTe2 nanolayers. Appl. Phys. Lett. 116, 192403 (2020).
38. Chakraborty, A. et al. Magnetic skyrmions in a thickness tunable 2D ferromagnet from a defect driven Dzyaloshinskii-Moriya interaction. Adv. Mater. 34, 2108637 (2022).
39. Xu, C. et al. Assembling diverse skyrmionic phases in Fe3GeTe2 monolayers. Adv. Mater. 34, 2107779 (2022).
40. Tomasello, R. et al. A strategy for the design of skyrmion racetrack memories. Sci. Rep. 4, 6784 (2014).
41. Song, K. M. et al. Skyrmion-based artificial synapses for neuromorphic computing. Nat. Electron. 3, 148–155 (2020).
42. Moreau-Luchaire, C. et al. Additive interfacial chiral interaction in multilayers for stabilization of small individual skyrmions at room temperature. Nat. Nanotechnol. 11, 444–448 (2016).
43. Woo, S. et al. Observation of room-temperature magnetic skyrmions and their current-driven dynamics in ultrathin metallic ferromagnets. Nat. Mater. 15, 501–506 (2016).
44. Kagawa, F. & Oike, H. Quenching of charge and spin degrees of freedom in condensed matter. Adv. Mater. 29, 1601979 (2016).
45. Münzer, W. et al. Skyrmion lattice in the doped semiconductor Fe1−xCoxSi. Phys. Rev. B 81, 041203(R) (2010).
46. Karube, K. et al. Disordered skyrmion phase stabilized by magnetic frustration in a chiral magnet. Sci. Adv. 4, eaar7043 (2018).
47. Chacon, A. et al. Observation of two independent skyrmion phases in a chiral magnetic material. Nat. Phys. 14, 936 (2018).
48. van der Laan, G. & Figueroa, A. I. X-ray magnetic circular dichroism - A versatile tool to study magnetism. Coord. Chem. Rev. 277, 95–129 (2014).
49. Kotani, A., Nakajima, H., Harada, K., Ishii, Y. & Mori, S. Field-temperature phase diagram of magnetic bubbles spanning charge/orbital ordered and metallic phases in La1−xSrxMnO3 (x = 0.125). Phys. Rev. B 95, 144403 (2017).
50. Yu, X. Z. et al. Near room-temperature formation of a skyrmion crystal in thin-films of the helimagnet FeGe. Nat. Mater. 10, 106 (2011).
51. Lemesh, I. et al. Current-induced Skyrmion generation through morphological thermal transitions in chiral ferromagnetic heterostructures. Adv. Mater. 30, 1805461 (2018).
52. Li, Q. et al. Patterning-induced ferromagnetism of Fe3GeTe2 van der Waals materials beyond room temperature. Nano Lett. 18, 5974–5980 (2018).
53. Franke, K. J. A., Ophus, C., Schmid, A. K. & Marrows, C. H. Switching between magnetic Bloch and Néel domain walls with anisotropy modulations. Phys. Rev. Lett. 127, 127203 (2021).
54. Büttner, F., Lemesh, I. & Beach, G. S. D. Theory of isolated magnetic skyrmions: From fundamentals to room temperature applications. Sci. Rep. 8, 4464 (2018).
55. De Leeuw, F. H., van den Doel, R. & Enz, U. Dynamic properties of magnetic domain walls and magnetic bubbles. Rep. Prog. Phys. 43, 689–783 (1980).
56. Schott, M. The Skyrmion switch: turning magnetic Skyrmion bubbles on and off with an electric field. Nano Lett. 17, 3006–3012 (2017).
57. Cui, J. et al. Anisotropic Dzyaloshinskii-Moriya interaction and topological magnetism in two-dimensional magnets protected by P$\bar{4}$m2 crystal symmetry. Nano Lett. 22, 2334–2341 (2022).
58. Xu, C. et al. Electric-field switching of magnetic topological charge in type-I multiferroics. Phys. Rev. Lett. 125, 037203 (2020).
59. Liang, J., Cui, C. & Yang, H. Electrically switchable Rashba-type Dzyaloshinskii-Moriya interaction and skyrmion in two-dimensional magnetoelectric multiferroics. Phys. Rev. B 102, 220409(R) (2020).
60. Williams, D. B. & Carter, C. B. Transmission Electron Microscopy 2nd edn, Springer, New York, USA (2009).
61. Hovorka, O. & Sluckin, T. J. A computational mean-field model of interacting non-collinear classical spins. arXiv:2007.12777 [cond-mat] (2020).
62. Birch, M. T. Dataset for: History-dependent domain and skyrmion formation in 2D van der Waals magnet Fe3GeTe2. https://doi.org/10.5281/zenodo.6346695 (2022).
## Acknowledgements
We thank Helmholtz-Zentrum Berlin for the allocation of synchrotron radiation beamtime at the BESSY II synchrotron. We are grateful for beamtime at KARA, at the Karlsruhe Institute of Technology. We are thankful for the technical support of T. Reindl, A. Güth, U. Waizmann, M. Hagel and J. Weis from the Nanostructuring Lab (NSL) at the Max Planck Institute for Solid State Research. We appreciate the help of J. Deuschle, T. Heil and P. van Aken for carrying out the TEM sample preparation and measurements. The authors thank S. Moody for discussions on the magnetometry data analysis. V.N. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (EPSRC) Centre for Doctoral Training under grant number EP/L006766/1. O.H. and J.L. acknowledge support from EPSRC under grant number EP/N032128/1. M.B. is grateful for support from the Deutsche Forschungsgemeinschaft (DFG) via Grant BU 1125/11-1.
## Funding
Open Access funding enabled and organized by Projekt DEAL.
## Author information
### Contributions
M.T.B., L.P., M.B. and G.S. conceived the project. L.P. fabricated and characterised the flake devices. C.B. performed the EDX measurements. M.T.B. and K.S. carried out the magnetometry measurements. M.T.B., S.W., T.R. and M.W. performed the STXM measurements at the BESSY II synchrotron. M.T.B., C.B. and E.G. performed the TEY x-ray spectroscopy measurements at the KARA synchrotron. J.C.L. acquired and analysed the LTEM data. O.H. and V.N. performed the mean-field simulations. K.L. performed the micromagnetic simulations. L.A.T. utilised image recognition software to analyse the FC STXM data. M.T.B. and L.P. wrote the manuscript, with help from the other authors. All authors discussed the results and gave feedback on the manuscript.
### Corresponding authors
Correspondence to M. T. Birch or L. Powalla.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
## Peer review
### Peer review information
Nature Communications thanks Hongxin Yang and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Birch, M.T., Powalla, L., Wintz, S. et al. History-dependent domain and skyrmion formation in 2D van der Waals magnet Fe3GeTe2. Nat Commun 13, 3035 (2022). https://doi.org/10.1038/s41467-022-30740-7
http://www.physicsforums.com/showpost.php?p=3780497&postcount=3
Actually, after a few minutes pondering, I'd like to attempt an answer myself, based on symmetry. The Lagrangian cannot depend on individual velocity components (but it can depend on the overall magnitude of the velocity) because of rotational symmetry - with no external forces acting on the particle, how could the direction of its velocity have any impact on the physics of the situation? As for the positional coordinates, I presume these can't affect the Lagrangian because of translational symmetry. (Again, with no external forces acting on the particle, the location has no bearing on its energy, which after all is what the Lagrangian describes) Have I got the right idea here, or am I making false assumptions?
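For reference, this is essentially the textbook argument (cf. Landau & Lifshitz, Mechanics, §3-4): homogeneity of space and time forbids explicit dependence on position and time, and isotropy forbids dependence on the direction of the velocity, leaving L = L(v^2). Galilean invariance then fixes the form: under an infinitesimal boost v -> v + ε,

$$L(v'^2) \approx L(v^2) + 2\frac{\partial L}{\partial v^2}\,\mathbf{v}\cdot\boldsymbol{\varepsilon},$$

and this can only be a total time derivative (leaving the equations of motion unchanged) if $\partial L/\partial v^2$ is constant, hence L = (1/2)mv^2 for a free particle.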
http://forums.xkcd.com/viewtopic.php?f=40&t=113367&sid=9cfe11ec515d0c467408908ed03f5cf4 | 1/xy
Poll: 1/xy =
(1/x)y: 7 votes (11%)
1/(xy): 57 votes (89%)
quantropy
1/xy
I couldn't decide whether to ask this in coding or computer science, so I put it here. Should multiplication ever have implied precedence over division? And how come http://www.wolframalpha.com/input/?i=1%2Fnx%2C1%2F2x?
Flumble
Re: 1/xy
Infix notation is stupid to begin with – it only really works for commutative associative operators. Polish or graph notation FTW!
Maybe, just maybe, Polish notation could even accelerate the understanding of solving algebraic equations.
Clearly, variables tucked together take precedence over division, or constant fractions take precedence over everything. It makes perfect sense.
Qaanol
Re: 1/xy
There is no situation—none—in which I would write (1/xy) to mean (y/x).
EvanED
Re: 1/xy
I think there is a clear answer -- (1/x)*y -- but I also think that anyone who seriously writes 1/xy in the first place should be banished from ever writing anything mathematical again.
doogly
Re: 1/xy
But I feel like a bait and switch happened here. I voted to parse 1/xy as 1/(xy), which I stand by, but then the link complains that 1/2x is parsed as (1/2)x. And I stand by that too. Of course mathematically there shouldn't be a difference, but this is about notation, and how to make do with limited formatting options.
I would rather write \frac{1}{2}gt^2, but I will make do in other environments with 1/2gt^2 (or 1/2 gt^2, with a lil extra space there) in favor of (gt^2)/2.
EvanED
Re: 1/xy
doogly wrote:Of course mathematically there shouldn't be a difference, but this is about notation, and how to make do with limited formatting options.
But the other part of notation is to have predictable rules that lead to everyone having the same unambiguous interpretation. I agree that 1/xy seems like it should be 1/(xy), but I kind of doubt that you can pick a (i) clear (ii) simple (iii) predictable and (iv) unambiguous rule that interprets 1/xy that way but that wouldn't lead to other, bigger problems with other expressions.
For example, suppose 1/xy is 1/(xy) and 1/2x is (1/2)x. What's 1/2(x+y), or 1/x(y+z)? 1/(x+y)2, or 1/(y+z)x. I don't even know if I can tell you what I meant, and I wrote those expressions.
So there's a conflict. The only reasonable rule IMO leads to (1/x)*y, but if you look at it in isolation, 1/xy looks like it should be 1/(xy). So that means that you shouldn't write 1/xy in the first place. This is a mathematical analogue to the distinction in programming between a program "having no obvious errors" and a program "obviously having no errors."
The only thing that I can think might work is implied multiplication -- xy instead of x*y -- having its own precedence level. But that seems very silly and error-prone, and leads to 1/2x being 1/(2x) anyway.
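(Aside: programming languages pin this convention down. Python, C, and most others give / and * equal precedence with left-to-right associativity, so a literal transcription of 1/x*y evaluates as (1/x)*y:)

```python
x, y = 2.0, 3.0
print(1 / x * y)    # 1.5        -> parsed as (1/x)*y
print(1 / (x * y))  # 0.1666...  -> 1/(xy) needs explicit parentheses
```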
Flumble
Yes Man
Posts: 1998
Joined: Sun Aug 05, 2012 9:35 pm UTC
Re: 1/xy
EvanED wrote:
doogly wrote:Of course mathematically there shouldn't be a difference, but this is about notation, and how to make do with limited formatting options.
But the other part of notation is to have predictable rules that lead to everyone having the same unambiguous interpretation. I agree that 1/xy seems like it should be 1/(xy), but I kind of doubt that you can pick a (i) clear (ii) simple (iii) predictable and (iv) unambiguous rule that interprets 1/xy that way but that wouldn't lead to other, bigger problems with other expressions.
I'd argue that you don't need strict rules for small expressions in context. There are three things emphasised, but the third one is probably the most important: 1/2x, 1/xy and 1/2gt^2 are ill-defined without context (especially since not everyone agrees that multiplication and division have the same precedence and go left-to-right), so the context should point out which relation is linearly or inversely proportional. In the case of 1/2gt^2 it is implied that it's * * g / 1 2 ^ t 2 because it looks like the formula for gravitational acceleration.
Derek
Re: 1/xy
EvanED wrote:But the other part of notation is to have predictable rules that lead to everyone having the same unambiguous interpretation. I agree that 1/xy seems like it should be 1/(xy), but I kind of doubt that you can pick a (i) clear (ii) simple (iii) predictable and (iv) unambiguous rule that interprets 1/xy that way but that wouldn't lead to other, bigger problems with other expressions.
Under strict interpretations I would parse 1/xy as a syntax error. If I'm going to assign it a value then I have to interpret what you meant, and that may lead to inconsistent results.
dalcde
Re: 1/xy
Flumble wrote:Infix notation is stupid to begin with –it only really works for commutative associative operators. Polish or graph notation FTW!
Subtraction?
Boson Collider
Re: 1/xy
dalcde wrote:
Flumble wrote:Infix notation is stupid to begin with –it only really works for commutative associative operators. Polish or graph notation FTW!
Subtraction?
1 - 2 - 3 - 1 can be parenthesized in many different ways and thus is ambiguous, unless you first do all negations and then do additions, as in 1 + (-2) + (-3) + (-1), which is what most people would expect as a result. But then it does work strictly because of associativity.
Now, commutativity I disagree with, it has nothing to do with that and infix notation handles associative noncommutative products perfectly, see matrix notation. For linear algebra algebraic notation is way, way cleaner than RPN, because RPN is terrible at handling operators that act on operators.
For non-associative operators like cross products, RPN starts being really nice. For things that are completely nonassociative, RPN is awesome.
I mean, try looking at this article on quandles, and translating the axioms into RPN. It's a lot nicer!
https://en.wikipedia.org/wiki/Racks_and_quandles
(Though two-dimensional notations can be even better.)
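(As a concrete aside: a minimal postfix/RPN evaluator in Python, an illustrative sketch rather than anything from the thread, shows that "1 2 - 3 - 1 -" has exactly one reading:)

```python
def eval_rpn(tokens):
    """Evaluate a postfix (RPN) expression over +, -, *, /."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for t in tokens:
        if t in ops:
            b, a = stack.pop(), stack.pop()   # note the operand order
            stack.append(ops[t](a, b))
        else:
            stack.append(float(t))
    (result,) = stack                          # exactly one value must remain
    return result

print(eval_rpn("1 2 - 3 - 1 -".split()))       # ((1-2)-3)-1 = -5.0
```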
somitomi
Re: 1/xy
And that's why I hate that slanted nonsense. It's a really nice example of the horizontal division bar being so much better.
If I'm really pressed to interpret it, it'd be 1/(xy), because anyone writing y/x as (1/x)y clearly deserves some kind of punishment for it, but otherwise I'd recommend using brackets everywhere. Although if I were to write a program that has some mathematical formula as input, the only way to enter any form of division would be the aforementioned horizontal division bar (I can't for the life of me find an English word for it).
DrZiro
Re: 1/xy
As EvanED points out, we do need to have a strict rule, and the rule being different for literals and variables is definitely confusing. Either they have the same precedence, or different. And having another rule for implied multiplication also sounds like a terrible idea.
The way things are, they clearly have the same precedence. So 1/xy clearly does mean (1/x)*y. Whether it should mean that is a different matter, and I'm inclined to say that it would be better the other way.
I would argue that they do have different precedence in one case, though, which is for units. If I write that 1 T = 1 N/Am, no one could reasonably interpret that as (N/A)*m.
I also think that prefix or postfix notation is neat in many ways. Although I'm not a fan of calling them "Polish" and "reverse Polish". Sounds kind of like an insult, a bit like "it's all Greek to me". I know that's not the historical reason, but still. "Prefix" and "postfix" are definitely better terms - shorter, clearer, more consistent.
lightvector
Re: 1/xy
I also tend to be a fan of "infix division has strictly lower precedence than multiplication" rather than having equal precedence. I think it's a better convention, and I use it a lot when scribbling to myself on paper or doing back-of-the-envelope math. It even fits that acronym that people learn (or at least that I heard) in grade school: PEMD(AS). The M and the D are in the right order for it to still work.
https://cs.stackexchange.com/tags/kleene-star/hot?filter=year | # Tag Info
Clearly not. Let $A=\{a\}$ and $B=\{aa\}$. Now, $A\cap B = \emptyset$ so $(A\cap B)^* = \{\epsilon\}$ but $A^*\cap B^*=B^*=\{a^{2i} : i \in \mathbb{N}\}$ (all strings consisting of an even number of $a$).
It seems you have hit the reasoning correctly: A set $S$ is a subset of another set $A$ if $w\in S\implies w\in A$. Since you are trying to show that $\forall w \in A^+ \implies w \in A^*$, you are proving that $A^+ \subseteq A^*$. One more elegant way to formally do this is by finding a bijection between the two sets. Since $|A^*|$ = $|\Bbb N|$ you need to ...
Given a language $L$, let $L_0 = \{\epsilon\}$ and, for $i\geq 1$, let $L_i = \{w_1\circ \dots\circ w_i \mid w_j\in L \text{ for each } j\}$, where $\circ$ denotes concatenation. Then the Kleene closure of $L$ is the language $L^* = \bigcup_{i\geq 0} L_i$.
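(A small Python illustration of this definition, with a hypothetical helper name, enumerating $L^*$ up to a length bound; it also reproduces the counterexample from the first answer above:)

```python
def kleene_star(L, max_len):
    """Words of L* of length <= max_len, built as the union of the L_i."""
    words = {""}                 # L_0 = {epsilon}
    frontier = {""}
    while frontier:
        longer = {u + w for u in frontier for w in L if len(u + w) <= max_len}
        frontier = longer - words
        words |= frontier
    return sorted(words, key=len)

A, B = {"a"}, {"aa"}
print(kleene_star(A & B, 4))   # ['']  : (A ∩ B)* = {epsilon}
print(sorted(set(kleene_star(A, 4)) & set(kleene_star(B, 4)), key=len))
# ['', 'aa', 'aaaa']           : A* ∩ B* = B* here, so the two sides differ
```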
The lexicographic order is defined on words, not on regular expressions. An interesting exercise is to describe the restriction of the lexicographic order to the language $a^*b^*$, assuming that $a < b$. Note for instance that $1 < a < a^2 < \dotsm < a^n < \dotsm$, but $\dotsm <a^nb <\dotsm <a^2b < ab < b$.
The "$x\in A^{+}$. By definition $x\in A^{+} \wedge x\in A^{*}$ for all $x\neq A^{0}$" is a bit weird. Not even from a computer science point of view but from a set theory or general mathematical point of view. First, you're saying the same thing again with "$x\in A^{+}$" which you shouldn't. Then, you're saying $x\in A^{*}$ which is the desired result. I ...
The first step is to clarify what is being asked: do you mean can every infinite regular language be decomposed in this way, or is it possible that some infinite regular language can be decomposed in this way? Your question does not make this clear. For the first version of the question, try considering the following language: $b \cup a^*$. Can this be ...
$printf "ab\n" | sed -En 's/b*// p' | od -t c 0000000 a b \n 0000003 This regex expression "$b*$" does match the empty string, which is zero '$b$', at the very front of the input "$ab$". This is in accordance with the definition of Kleene star since$b^*$stands for the language$\{\epsilon, b, bb, \cdots\}$, where$\epsilon\$ stands for the empty ...
https://icml.cc/Conferences/2018/ScheduleMultitrack?event=1989
Poster
Dynamic Regret of Strongly Adaptive Methods
Lijun Zhang · Tianbao Yang · Rong Jin · Zhi-Hua Zhou
Fri Jul 13 09:15 AM -- 12:00 PM (PDT) @ Hall B #142
To cope with changing environments, recent developments in online learning have introduced the concepts of adaptive regret and dynamic regret independently. In this paper, we illustrate an intrinsic connection between these two concepts by showing that the dynamic regret can be expressed in terms of the adaptive regret and the functional variation. This observation implies that strongly adaptive algorithms can be directly leveraged to minimize the dynamic regret. As a result, we present a series of strongly adaptive algorithms that have small dynamic regrets for convex functions, exponentially concave functions, and strongly convex functions, respectively. To the best of our knowledge, this is the first time that exponential concavity is utilized to upper bound the dynamic regret. Moreover, all of those adaptive algorithms do not need any prior knowledge of the functional variation, which is a significant advantage over previous specialized methods for minimizing dynamic regret.
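(Context note, not quoted from the paper: in this literature, with $\mathbf{x}_t$ the learner's decision at round $t$, $f_t$ the loss function, and $\mathcal{X}$ the feasible set, the two quantities are usually defined as

$$\text{D-Regret}_T=\sum_{t=1}^{T}f_t(\mathbf{x}_t)-\sum_{t=1}^{T}\min_{\mathbf{x}\in\mathcal{X}}f_t(\mathbf{x}),\qquad V_T=\sum_{t=2}^{T}\max_{\mathbf{x}\in\mathcal{X}}\left|f_t(\mathbf{x})-f_{t-1}(\mathbf{x})\right|,$$

the dynamic regret and the functional variation, respectively.)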
https://dgtal-team.github.io/doc-nightly/moduleIntegralInvariant.html | DGtal 1.3.beta
Integral invariant curvature estimator 2D/3D
# Overview
The algorithms implemented in the classes IntegralInvariantVolumeEstimator and IntegralInvariantCovarianceEstimator are detailed in the article [27] .
In geometry processing, interesting mathematical tools have been developed to design differential estimators on smooth surfaces based on Integral Invariants [92] [93] . The principle is simple: we move a convolution kernel along the shape surface and compute integrals on the intersection between the shape and the convolution kernel, as follows in dimension 3:
$V_r(x) = \int_{B_r(x)} \chi(p)\,dp$
where $$B_r(x)$$ is the Euclidean ball of radius $$r$$, centered at $$x$$ and $$\chi(p)$$ the characteristic function of $$X$$. In dimension 2, we simply denote $$A_r(x)$$ such quantity (represented in orange color on the following figure).
Integral invariant computation in dimension 2.
Integral invariant computation in dimension 3.
Notations.
## Integral Invariant for curvature computation
In [26] , we have demonstrated that some digital integral quantities provide curvature information when the kernel size tends to zero for a sufficiently smooth shape. Indeed, at a point $$x$$ of the surface $$X$$ and with a fixed radius $$r$$, we obtain convergent local curvature estimators $$\tilde{\kappa}_r(X,x)$$ and $$\tilde{H}_r(X,x)$$ of quantities $$\kappa(X,x)$$ and $$H(X,x)$$ respectively:
$\tilde{\kappa}_r(X,x) = \frac{3\pi}{2r} - \frac{3A_r(x)}{r^3},\quad \tilde{\kappa}_r(X,x) = \kappa(X,x)+ O(r)$
$\tilde{H}_r(X,x) = \frac{8}{3r} - \frac{4V_r(x)}{\pi r^4},\quad \tilde{H}_r(X,x) = H(X,x) + O(r)$
where $$\kappa(X,x)$$ is the 2d curvature of $$X$$ at $$x$$ and $$H(X,x)$$ is the 3d mean curvature of $$X$$ at $$x$$.
Then we showed that we can obtain local digital curvature estimators :
$\forall 0 < h < r,\quad \hat{\kappa}_r(Z,x,h) = \frac{3\pi}{2r} - \frac{3\widehat{Area}(B_{r/h}(\frac{1}{h} \cdot x ) \cap Z, h)}{r^3}$
where $$\hat{\kappa}_r$$ is an integral digital curvature estimator of a digital shape $$Z \subset ℤ^2$$ at point $$x \in \rm I\! R^2$$ and step $$h$$. $$\widehat{Area}(B_{r/h}(\frac{1}{h} \cdot x ) \cap Z, h)$$ denotes the estimated area of the intersection between $$Z$$ and a ball $$B$$ of radius $$r$$ digitized by $$h$$, centered at $$x$$.
In the same way, we have in 3d :
$\forall 0 < h < r,\quad \hat{H}_r(Z',x,h) = \frac{8}{3r} - \frac{4\widehat{Vol}(B_{r/h}(\frac{1}{h} \cdot x ) \cap Z', h)}{\pi r^4}.$
where $$\hat{H}_r$$ is an integral digital mean curvature estimator of a digital shape $$Z' \subset ℤ^3$$ at point $$x \in \rm I\! R^3$$ and step $$h$$.
We have demonstrated in [26] that to prove the multigrid convergence with a convergence speed of $$O(h^{\frac{1}{3}})$$, the Euclidean radius of the kernel must follow the rule $$r = k_mh^{\alpha_m}$$ ( $$\alpha_m = \frac{1}{3}$$ provides better worst-case errors, so we will use this value).
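As a quick numerical illustration of the 2d estimator above, the following standalone Python sketch (independent of the DGtal implementation; the function name and parameter values are illustrative) digitizes a disk of radius $$R$$, counts the lattice points in the intersection with the kernel ball at a boundary point, and compares the result with the true curvature $$1/R$$:

```python
import numpy as np

def ii_curvature_2d(R=10.0, h=0.02, r=1.5):
    """kappa_hat = 3*pi/(2r) - 3*Area_hat/r^3 at boundary point x=(R, 0),
    with Area_hat = h^2 * #(lattice points of the disk inside B_r(x))."""
    x = np.array([R, 0.0])
    lo = np.floor((x - r) / h).astype(int)
    hi = np.ceil((x + r) / h).astype(int)
    qx, qy = np.meshgrid(np.arange(lo[0], hi[0] + 1),
                         np.arange(lo[1], hi[1] + 1), indexing="ij")
    px, py = qx * h, qy * h                  # lattice points mapped back to R^2
    in_shape = px**2 + py**2 <= R**2         # Gauss digitization of the disk
    in_ball = (px - x[0])**2 + (py - x[1])**2 <= r**2
    area_hat = h * h * np.count_nonzero(in_shape & in_ball)
    return 3 * np.pi / (2 * r) - 3 * area_hat / r**3

print(ii_curvature_2d())   # close to 1/R = 0.1, with the expected O(r) bias
```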
Experimental results can be found at https://liris.cnrs.fr/jeremy.levallois/Papers/DGCI2013/ and https://liris.cnrs.fr/jeremy.levallois/Papers/AFIG2013/
# Algorithm
## Overall algorithm
The user part is rather simple by using only IntegralInvariantVolumeEstimator or IntegralInvariantCovarianceEstimator.
Since 2d curvature and 3d mean curvature are related to the volume between the shape and a ball centered on the border of the shape, we need to estimate this volume and use this value to get the curvature information. Then we parametrize our volume estimator IntegralInvariantVolumeEstimator by a functor to compute the curvature (in 2d: functors::IICurvatureFunctor and in 3d for the mean curvature: functors::IIMeanCurvature3DFunctor )
In 3d we can also compute the covariance matrix of the intersection between the shape and the ball centered on the border of the shape using IntegralInvariantCovarianceEstimator. This offers us the possibility to extract many differential quantities, such as: 3d Gaussian curvature, first and second principal curvatures, as well as first and second principal curvature directions, normal vector or tangent vector (functors::IIGaussianCurvature3DFunctor, etc. See file DGtal/geometry/surfaces/estimation/IIGeometricFunctors.h or namespace functors for the entire list).
### Integral Invariant functors
All functors defined in IIGeometricFunctors.h have the same initialization process: you need to use the default constructor, and initialize them with the init() method. They will ask for a grid step ( $$h$$) and the Euclidean radius of the kernel ball ( $$r$$).
### Integral Invariant estimators
As described below, the kernel radius parameter $$r_e$$ determines the scale of the features detected by the estimator. Since IntegralInvariantVolumeEstimator and (resp.) IntegralInvariantCovarianceEstimator are models of CSurfelLocalEstimator, they follow the rules inferred by it (init(), eval(), etc.), with some particular considerations introduced below. The following steps are the general usage of Integral Invariant estimators:
• First, you need to give them a functor on Volume (resp. Covariance matrix) on the constructor of the class (see above).
• You can also give them the cellular grid space model of CCellularGridSpaceND in which the shape is defined (Z3i::KSpace for example in 3d) and a point predicate model of concepts::CPointPredicate (the digital shape of interest, used to know if a point of the domain is inside or outside the shape), either in the constructor, or in the attach() method.
• Integral Invariant estimators need to know which size of ball you want to convolve around the shape boundary, so we need to call the setParams(double) method with the digital radius of the ball ( $$\frac{r}{h}$$).
• Warning
II functors need the Euclidean radius of the ball, and II estimators the digital radius.
• Then, we can initialize our estimator by calling the init() method. It requires the grid step of the shape $$h$$, and begin and end surfel iterators of the shape. This method will precompute all internal parameters of the II estimator, such as the displacement-mask optimization (see the Technical details part below).
• Finally, you can evaluate the estimator in two way:
• at a given (iterator of a) surface element (surfel) : eval(it_surfel)
• for a range of (iterators of) surface elements : eval(it_begin, it_end, output). Here output is an output iterator where results are given (std::back_insert_iterator for example).
If you want to evaluate on a range of surfels, we recommend the second way. Indeed, some optimizations are available when the range of surfels is given with 0-adjacency. So, when you extract the digital surface of your shape, it is recommended to use a depth-first search (see DepthFirstVisitor for example). If not, no optimization is performed (this will be visible in performance for big shapes).
# Example code
It is important to consider a range of connected surfels when evaluating with Integral Invariant curvature estimators in order to benefit from the kernel optimization. Note that the methodology is the same with both IntegralInvariantVolumeEstimator and IntegralInvariantCovarianceEstimator. The only change is in the typedef of the Functor/Estimator (see below):
typedef functors::IIMeanCurvature3DFunctor<Z3i::Space> MyIICurvatureFunctor;
typedef IntegralInvariantVolumeEstimator< Z3i::KSpace, ImagePredicate, MyIICurvatureFunctor > MyIICurvatureEstimator;
// For computing Gaussian curvature instead, for example, change the two typedef above by :
// typedef functors::IIGaussianCurvature3DFunctor<Z3i::Space> MyIICurvatureFunctor;
// typedef IntegralInvariantCovarianceEstimator< Z3i::KSpace, ImagePredicate, MyIICurvatureFunctor > MyIICurvatureEstimator;
// and it's done. The following part is exactly the same.
MyIICurvatureFunctor curvatureFunctor; // Functor used to convert volume -> curvature
curvatureFunctor.init( h, radius ); // Initialisation for a grid step and a given Euclidean radius of convolution kernel
MyIICurvatureEstimator curvatureEstimator( curvatureFunctor );
curvatureEstimator.attach( KSpaceShape, predicate ); // Setting a KSpace and a predicate on the object to evaluate
curvatureEstimator.setParams( radius / h ); // Setting the digital radius of the convolution kernel
curvatureEstimator.init( h, abegin, aend ); // Initialisation for a given h, and a range of surfels
std::vector< Value > results;
std::back_insert_iterator< std::vector< Value > > resultsIt( results ); // output iterator for results of Integral Invariant curvature computation
curvatureEstimator.eval( abegin, aend, resultsIt ); // Computation
# Some results
Here is some results in 2d and 3d :
Curvature mapped on a Flower2D
Mean curvature mapped on a Goursat's surface
Gaussian curvature mapped on a Goursat's surface
Mean curvature mapped on a Stanford bunny (credit is given to the Stanford Computer Graphics Laboratory https://graphics.stanford.edu/data/3Dscanrep/)
First principal curvature direction mapped on a Stanford bunny (credit is given to the Stanford Computer Graphics Laboratory https://graphics.stanford.edu/data/3Dscanrep/)
http://cvgmt.sns.it/paper/781/ | # Convergence of minimizers with local energy bounds for the Ginzburg-Landau functionals.
created by orlandi on 24 Dec 2009
Published Paper
Inserted: 24 dec 2009
Journal: Indiana Univ. Math. J.
Volume: 58
Pages: 2369-2408
Year: 2009
Abstract:
We study the asymptotic behaviour, as $h$ goes to 0, of a sequence $\{u_h\}$ of minimizers for the Ginzburg-Landau functional which satisfies local energy bounds of order $\log\, h$. The jacobians $Ju_h$ are shown to converge, in a suitable sense and up to subsequences, to an area minimizing minimal surface of codimension $2$. This is achieved without assumptions on the global energy of the sequence or on the boundary data, and holds even for unbounded domains. The proof is based on an improved version of the Gamma-convergence results from Alberti {\it et al.}, Indiana Univ. Math. J. 54 (2005), 1411--1472.
https://www.lessonplanet.com/teachers/lesson-plan-no-vehicles-in-the-park-working-with-legislation | # No Vehicles in the Park: Working with Legislation
Students in pairs, or groups of three, determine if the "No Vehicles in the Park" law has been violated in each of the following situations. Let students know that it is not the definition of "vehicles" that is in question in all cases.
http://digitalcommons.lsu.edu/gradschool_disstheses/6242/ | ## LSU Historical Dissertations and Theses
1996
Dissertation
#### Degree Name
Doctor of Philosophy (PhD)
#### Department
Chemistry
Erwin D. Poliakoff
#### Abstract
The experimental technique of measuring the degree of polarization of the fluorescence from electronically excited molecular ions is used to determine the alignment of $\mathrm{N_2}$, CO and $\mathrm{N_2O}$ photoions over an extended range of excitation energy. The polarization of $\mathrm{CO^+}(B^2\Sigma^+ \to X^2\Sigma^+)$, $\mathrm{N_2^+}(B^2\Sigma_u^+ \to X^2\Sigma_g^+)$, and $\mathrm{N_2O^+}(A^2\Sigma^+ \to X^2\Pi)$ fluorescence over a 200 eV photon energy range is used to interpret the oscillator strength distributions for normally unresolved degenerate ionization channels. The results show the influence of a CO $4\sigma \to k\sigma$ shape resonance clearly, and agreement between theory and experiment is excellent. The $\mathrm{N_2O}$ experimental results indicate evidence of a $7\sigma \to k\sigma$ shape resonance near ionization threshold. Agreement between the theoretical calculations and experiment is less satisfactory for $\mathrm{N_2}$. This behavior is somewhat surprising, as previous rotationally resolved fluorescence experiments have shown excellent agreement between theory and experiment. This comparison helps to illustrate the complementarity of alignment studies relative to alternative probes of ionization. For both $\mathrm{N_2}$ and CO, the data indicate that the photoions retain significant alignment even at high energies, though this is not true in the case of $\mathrm{N_2O}$. The results demonstrate that even well above threshold the spectral dependence of the alignment (i.e., polarization) is very sensitive to the molecular environment for photoejection. Such behavior provides useful insight into fundamental scattering phenomena in chemical physics.
9780591133431
117
https://courses.marlboro.edu/mod/page/view.php?id=20754 | ## 2.2 -- Physical Metaphor -- Due 10/4
Drawing on the feedback you received on your first draft of the physical metaphor study and your increased understanding of physical metaphor, substantially revise and expand your study.
https://www.physicsforums.com/threads/calculus-free-response-question-help-please.221030/ | # Calculus Free Response Question help please
1. Mar 10, 2008
### SWFanatic
1. The problem statement, all variables and given/known data
The function f has a Taylor series about x=2 that converges to f(x) for all x in the interval of convergence. The nth derivative of f at x=2 is given by f^(n)(2) = ((n+1)!)/3^n for n>=1, and f(2) = 1.
(a). write the first four terms and the general term of the Taylor series for f about x=2.
(b). find the radius of convergence for the Taylor series for f about x=2.
2. Relevant equations
3. The attempt at a solution
(a). F(x) = 1 + (2/3)(x-2) + (2/3)(x-2)^2 + (8/9)(x-2)^3 +...+ ((n+1)!(x-2)^n)/(3^n)
This seems correct; however, I am not sure, because when I attempt part b it doesn't really work.
(b) Standard Ratio Test for the general term in part a = abs((n+2)(x-2))/3 <1
Does this not mean it's divergent then? Or am I all mixed up? Thanks for any help.
2. Mar 10, 2008
### HallsofIvy
Staff Emeritus
This is completely wrong, but since you don't say how you got those coefficients, I don't know what you did wrong. Did you forget the n! in the denominator of the formula for the coefficients?
3. Mar 11, 2008
### SWFanatic
Yes, I did forget to use n! in the denominator.
New equation (for part a): f(x) = 1 + (2(x-2))/(3*1!) + (6(x-2)^2)/(9*2!) + (24(x-2)^3)/(27*3!) + ... + ((x-2)^n)/((3^n)*(n!))
I got this by using the given nth derivative formula f^n(2) = (n+1)!/3^n for the f1, f2, f3 derivative parts of the formula for the series (f(2) + f1(2)(x-2) + (f2(2)(x-2)^2)/2! + (f3(2)(x-2)^3)/3! + ...
I still do not think its correct however because for part b:
I use the ratio test: lim(n→∞) |((x-2)^(n+1)/(3^(n+1)*(n+1)!)) * ((3^n*(n!))/(x-2)^n)| = lim(n→∞) |(x-2)/(3(n+1))|
From here I don't know what to do, because if I made it less than 1, wouldn't it mean that the series always converges? Is my general term off? Thanks again
4. Mar 11, 2008
### HallsofIvy
Staff Emeritus
You have the correct derivatives in
$$F(x) = 1 + (2/3)(x-2) + (2/3)(x-2)^2 + (8/9)(x-2)^3 +...+ ((n+1)!(x-2)^n)/(3^n)$$
but did not divide by n! Since (n+1)!/n! = n+1, the correct series is
$$F(x) = 1 + (2/3)(x-2) + (1/3)(x-2)^2 + (4/27)(x-2)^3 +...+ ((n+1)(x-2)^n)/(3^n)$$
Somehow, you have put the "n+1" in the denominator, not the numerator.
Now, use the ratio test: take the limit of [(n+1)/n]|x-2|/3 as n goes to infinity.
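For completeness, here is a worked version of that limit (our addition), using the corrected general term $a_n = (n+1)(x-2)^n/3^n$:

$$\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| = \lim_{n\to\infty}\frac{n+2}{n+1}\cdot\frac{|x-2|}{3} = \frac{|x-2|}{3} < 1 \quad\Longrightarrow\quad |x-2| < 3,$$

so the radius of convergence for part (b) is 3.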
http://math.stackexchange.com/questions/75318/correspondences-f-x-to-2y?answertab=oldest | # Correspondences $f: X \to 2^Y$
I am reading some notes on correspondences (the notes are here) and have a question about something on page 1.
Basically, the notes provide some motivation for why we might want to define correspondences. It then says,
We would like to have a notion of a set-valued function. The seemingly obvious idea a function $f : X \to 2^Y$ from a set $X$ in to the set of subsets of $Y$ may not be the best choice.
I have looked at this several times but have no idea where the $2^Y$ comes from. Any help would be appreciated!
P.S. This actually is not homework but I am not sure what tag to use, I tried correspondences and looked through the first 5 pages of common tags without any luck.
The notation $2^Y$ can be used to denote the power set of $Y$, that is:
$$P(Y)=\{A \mid A\subseteq Y\}$$
In fact the notation itself means $\{f\colon Y\to\{0,1\}\mid f\text{ a function}\}$, however there is a bijection between $P(Y)$ and this set, given by:
$$A\subseteq Y\mapsto\chi_A(x) = \begin{cases} 1 & x\in A\\ 0 & x\notin A\end{cases}$$
So when speaking about a set-valued function, it means that the values are subsets of $Y$, therefore elements of $P(Y)$ or elements of $2^Y$ accordingly.
It is actually hidden in the quoted text: "from a set $X$ in to the set of subsets of $Y$" in fact gives away that $2^Y$ is the notation used by the author for the power set of a set $Y$.
Sorry, couldn't resist replacing the pipe with a \mid. – Rasmus Oct 24 '11 at 9:22
@Rasmus: Strange I usually keep my pipes \mid... Thanks :-) – Asaf Karagila Oct 24 '11 at 9:40
If $A$ is a set with $m$ elements and $B$ is a set with $n$ elements, then the set of all functions from $A$ into $B$ has $n^m$ elements.
Consequently it became conventional to denote the set of all functions from $A$ into $B$ by $B^A$.
If we let $2$ denote the set $\{0,1\}$ with two elements, then $2^A$ is the set of all functions from $A$ into $\{0,1\}$, and that's essentially the set of all subsets of $A$. I.e. any subset of $A$ corresponds to the function that maps members of that subset to $1$ and non-members to $0$.
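To make the correspondence concrete, here is a minimal Python sketch (ours, not from the thread) that enumerates every function $f\colon A\to\{0,1\}$ for a three-element set and prints the subset each one encodes:

```python
from itertools import product

A = ['x', 'y', 'z']

# Each f: A -> {0, 1} is determined by a tuple of 0/1 values, one per element.
for values in product([0, 1], repeat=len(A)):
    f = dict(zip(A, values))              # the characteristic function
    subset = {a for a in A if f[a] == 1}  # the subset it corresponds to
    print(f, '<->', subset)

# 2**3 = 8 functions in total, one for each of the 8 subsets of A.
```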
Though some of us prefer the notation $^AB$ for the set of functions from $A$ into $B$. (I suspect that this usage started with set theorists wanting to distinguish the set $^\lambda\kappa$ of functions from the cardinal $\kappa^\lambda$.) – Brian M. Scott Oct 25 '11 at 18:09
http://15462.courses.cs.cmu.edu/fall2021/lecture/linearalgebra/slide_021 | Slide 21 of 61
wmarango
Vector addition, vector norms, and inner product seem to be most easily expressed in Cartesian coordinates. Are there certain operations for which other coordinate systems are more efficient?
spidey
What are the advantages of using coordinate systems other than Cartesian (for example, polar coordinates) for computer graphics applications?
https://socratic.org/questions/how-do-you-find-the-pressure-of-a-gas-in-a-eudiometer-tube | Chemistry
# How do you find the pressure of a gas in a eudiometer tube?
## I really just need someone to tell me the steps because I can't find them. Here's some more information, though: the gas is H$_2$, the room temperature is 24.7 °C, and the barometric pressure is 766 torr. Thanks.
Jan 10, 2017
Here's how you can do that.
#### Explanation:
The idea here is that the gas is being collected over water, which basically means that the tube will contain hydrogen gas and water vapor.
[Figure: a typical setup, with the gas collected over water in an inverted eudiometer tube]
Now, you know the temperature at which the gas is being collected, so you can look up the vapor pressure of water at that temperature. In this case, you have
$P_{\text{H}_2\text{O}} \approx 23.56\ \text{torr}$
http://www.endmemo.com/chem/vaporpressurewater.php
Now you can use Dalton's Law of Partial Pressures to figure out the partial pressure of hydrogen gas in the mixture.
$P_\text{total} = P_{\text{H}_2} + P_{\text{H}_2\text{O}}$
You will have
$P_{\text{H}_2} = P_\text{total} - P_{\text{H}_2\text{O}}$
Keep in mind that the total pressure is simply the barometric pressure, i.e. the pressure of the hydrogen gas + water vapor mixture.
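Carrying out the substitution with the numbers from the question (this final arithmetic step is ours; the answer stops just short of it):

$$P_{\text{H}_2} = 766\ \text{torr} - 23.56\ \text{torr} \approx 742.4\ \text{torr}$$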
https://lists.nongnu.org/archive/html/axiom-developer/2005-02/msg00127.html | axiom-developer
## [Axiom-developer] [#87 solve(x + 1.1, 0.001) fails] LaTeX not working
From: anonymous Subject: [Axiom-developer] [#87 solve(x + 1.1, 0.001) fails] LaTeX not working Date: Sun, 13 Feb 2005 04:13:58 -0600
++added:
??changed:
-Of course, yes. However, there is a dilemma: when you give Axiom an equation
with floating point coefficients, should Axiom "solve" this algebraically, as
if 'Float' is just like any other domain, or numerically, giving 'Float' a
special treatment? Since Axiom algorithms are categorical, rather than writing
two separate algorithms, Axiom solves, if possible, algebraically (that is,
exactly) and gives numerical answers as options when the precision parameter is
given. This choice does not work well with equations over 'Float' because
'Float' does not have some of the algebraic properties as 'Fraction Integer' or
'Fraction Complex Integer' (such as factorization or GCD), which is why there
is a warning in 'solve(x^2-1.234)'. The package is numsolve.spad and you see
that these restrictions are well documented. So the above signature is really
not meant to be used at the moment. A similar situation occurs, for example
'factor(1.23)' is legal, but is really useless. Axiom does n!
ot use a mechanism to exclude specific domains from a category. It adopts an
"include" philosophy but let things fail with warning or error. If you look
into numsolve.spad, you will find that the 'innerSolve1' algorithm {\it
implementation} is restricted. (So if later someone finds a way to implement a
'solve' algorithm over 'Float', that would be just fine).
-
-So a lot of Axiom failures are not bugs, but by design. One way to improve the
user interface would seem to be to automatically lifting a polynomial over
'Float' to one over 'Fraction Integer'. A moment's reflection would convince
you this is not always possible (for example, 'sqrt(2)' or '%pi' are
technically both belong to 'Float' (model for real numbers), but of course, in
reality, every floating point number is a rational number. Such a lifting
package would have to take into consideration the precision to convert some
symbolic constants to a decimal approximation and then convert that to an exact
rational number. However, even this would not create satisfactory results
because we know the sensitivity of solutions of polynomial equations to small
changes of its coefficients. Wilkinson has this example
-
-
-<center>
-f(x) = (x+1)(x+2) ... (x+20) = x<SUP>20</SUP> + 210 x<SUP>19</SUP> + ... + 20!
= 0
-</center>
-
-
-where a change of the coefficient 210 by 2<SUP>-23</SUP> (approximately 1.2
× 10<SUP>-7</SUP>) would turn the root -20 to -20.8 and five pairs of
zeros to complex roots. (This perturbed equation will take a *very long* time
in Axiom, will not be solved exactly by Mathematica, but is trivially solved
*numerically* in Mathematica).
-
-
-
-So if we want numerically accurate solutions, we should use a robust numerical
library. I believe this is not yet available in Axiom (the NAG version allowed
interface with its Fortran libraries, at extra costs).
-
-If we are really (no pun intended) only using truely floating point
coefficients, then it can easily be converted to Fraction Integer, but one has
to beware that the algorithm would take a very long time because exact
arithmetic with large integer coefficients are expensive.
-
-{\tt [[Kostas Oikonomou wrote:]]}
-[1 more lines...]
Of course, yes. However, there is a dilemma: when you give Axiom an equation
with floating point coefficients, should Axiom "solve" this symbolically, as if
'Float' is just like any other domain, or numerically, giving 'Float' a special
treatment? Since Axiom algorithms are categorical, rather than writing two
separate algorithms, Axiom solves, if possible, exactly and gives numerical
answers as options when the precision parameter is given. This choice does not
work well with equations over 'Float' because 'Float' does not have some of the
algebraic properties as 'Fraction Integer' or 'Fraction Complex Integer' (such
as factorization or GCD), which is why there is a warning in
'solve(x^2-1.234)'.
The package is numsolve.spad and you see that these restrictions are well
documented. So the above signature is really not meant to be used at the
moment. A similar situation occurs, for example 'factor(1.23)' is legal, but is
really useless. Axiom does not use a mechanism to exclude specific domains from
a category. It adopts an "include" philosophy but lets things fail with warning
or error. If you look into numsolve.spad, you will find that the 'innerSolve1'
algorithm *implementation* is restricted. (So if later someone finds a way to
implement a 'solve' algorithm over 'Float', that would be just fine).
So a lot of Axiom failures are not bugs, but by design. One way to improve the
user interface would seem to be to automatically lift a polynomial over
'Float' to one over 'Fraction Integer'. A moment's reflection would convince
you this is not always possible (for example, 'sqrt(2)' or '%pi' technically
both belong to 'Float' (model for real numbers)), but of course, in reality,
every (finite precision) floating point number is a rational number. Such a
lifting package would have to take into consideration the precision to convert
some symbolic constants to their decimal approximations and then convert them
to exact rational numbers. However, even this would not create satisfactory
results because we know the sensitivity of solutions of polynomial equations to
small changes of its coefficients. Wilkinson has this example
f(x) = (x+1)(x+2) ... (x+20) = x^20 + 210 x^19 + ... + 20! = 0
where a change of the coefficient 210 by 2^−23 (approximately 1.2 × 10^−7)
would turn the root −20 to −20.8
and five pairs of zeros to complex roots. (This perturbed equation will take a
*very long* time in Axiom, will not be solved exactly by Mathematica, but is
easily solved *numerically* in Mathematica).
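To see the sensitivity numerically, here is a small NumPy sketch (our illustration, not part of the original exchange; double-precision root finding is itself ill-conditioned for this polynomial, so treat the output as qualitative):

```python
import numpy as np

# Monic polynomial with roots -1, -2, ..., -20; its x^19 coefficient is 210.
coeffs = np.poly(np.arange(-1.0, -21.0, -1.0))

# Perturb the x^19 coefficient by 2**-23, as in Wilkinson's example.
perturbed = coeffs.copy()
perturbed[1] += 2.0**-23

print(sorted(np.roots(coeffs).real)[:3])  # roots near -20, -19, -18
print(np.roots(perturbed))                # several root pairs become complex
```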
So if we want numerically accurate solutions, we should use a robust numerical
library. I believe this is not yet available in Axiom (the NAG version allowed
interface with its Fortran libraries, at extra costs).
If we are really (no pun intended) only using truly floating point
coefficients, then it can easily be converted to Fraction Integer, but one has
to beware that the algorithm would take a very long time because exact
arithmetic with large integer coefficients is expensive.
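As a quick illustration of the float-to-rational point (our example, in Python rather than Axiom): every finite-precision float is exactly a dyadic rational, and the exact form can involve large integers, which is what makes the subsequent exact arithmetic slow.

```python
from fractions import Fraction

# The double nearest to 1.1 is an exact ratio of large integers:
print(Fraction(1.1))
# 2476979795053773/2251799813685248
```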
{\tt [[Kostas Oikonomou wrote:]]}
Also, while 'solve(x+1.1)' "works", so to speak, 'solve(x^2 - 1.234)' returns a
warning.
??changed:
-{\tt [[Martin Rubey Sat Feb 12 13:07:55 -0600 2005 <address@hidden>
wrote:]]}
-
-I think the idea is to convert your Polynomial Float into a Polynomial Fraction
-Integer and then use solve (POLY FRAC INT, FLOAT) -> result.
{\tt [[Martin Rubey Sat Feb 12 13:07:55 -0600 2005 <address@hidden> wrote:]]}
I think the idea is to convert your 'Polynomial Float' into a 'Polynomial
Fraction
Integer' and then use 'solve (POLY FRAC INT, FLOAT) -> ... ' to get results.
??changed:
-Yes, as explained above. Algebraic methods do not work well with Float.
-
-
Yes, as explained above. Symbolic methods do not work well with 'Float'.
--
https://www.physicsforums.com/threads/2-identical-capacitors-given-potential-in-joules.195412/ | # 2 identical capacitors, given potential in joules
1. Nov 1, 2007
1. The problem statement, all variables and given/known data
Capacitors A and B are identical. Capacitor A is charged so it stores 4J of energy and capacitor B is uncharged. The capacitors are then connected in parallel. The total stored energy in the capacitors is now ____.
2. Relevant equations
U = q^2/(2C)
3. The attempt at a solution
Ok, I'm not sure if this is allowed.. I already know the answer is 2J. My professor gave us an answer key with the problem done out, but I don't understand what is happening. I'm not sure where 2J went. It says 2J of energy were required to move charge from the charged capacitor to the uncharged one.
I know initial potential of A is 4J, and initial potential of B is 0. For final potential, she has U_final = q^2/2C = 1/2*q^2/2C. Where did the second 1/2 come from?
I would really appreciate some clarification.. I'm really confused.
2. Nov 1, 2007
### Dick
When the two capacitors are connected in parallel the capacitance doubles, while Q remains the same.
3. Nov 1, 2007
I understand that, but I don't see how that relates here. From the look of the answer key, I don't understand where total potential is halved when being spread out over 2 capacitors.
4. Nov 1, 2007
### Dick
The potential is halved because the same charge is being spread out over two capacitors. What's the relation between potential and charge for a capacitor?
5. Nov 1, 2007
It's U = q^2/2C.. so if I plug in (q/2)^2/2(2C), which is halving the charge and doubling the capacitance, I'd have q^2/16C.. which still makes no sense.
Am I missing something conceptually or algebraically?
6. Nov 1, 2007
### Dick
The U in that equation is the energy stored in the capacitor, not the potential between the plates (volts). Is that your problem?
7. Nov 1, 2007
Yes, I'm trying to find the potential energy (in joules), not the voltage. I may have been unclear when I said potential.
8. Nov 1, 2007
### Dick
If you understand why capacitance doubles, then why can't you just change the C in Q^2/(2C) to 2C and conclude 4J changes to 2J?
9. Nov 1, 2007
Ok, that's starting to make sense.. but why isn't Q also changed to Q/2 since it's being halved at the same time?
I apologize for being so dense - I tend to have a really hard time with these concepts sometimes.
10. Nov 1, 2007
### Dick
Q is total Q, between both capacitors. Total charge doesn't change. C changes to 2C. It does change.
11. Nov 1, 2007
Ok, that makes sense. Thank you for being patient. :)
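Summarizing the thread's result as a worked calculation (our recap): with the total charge $Q$ fixed and the capacitance doubling when the second identical capacitor is connected in parallel,

$$U_i = \frac{Q^2}{2C} = 4\ \text{J}, \qquad U_f = \frac{Q^2}{2(2C)} = \frac{U_i}{2} = 2\ \text{J},$$

and the missing 2 J is the energy expended in moving charge from the charged capacitor to the uncharged one, as the answer key states.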
http://science.sciencemag.org/content/362/6410/eaat2382?rss=1 | Research Article
# Chemical interactions between Saturn’s atmosphere and its rings
Science 05 Oct 2018:
Vol. 362, Issue 6410, eaat2382
DOI: 10.1126/science.aat2382
## Cassini's final phase of exploration
The Cassini spacecraft spent 13 years orbiting Saturn; as it ran low on fuel, the trajectory was changed to sample regions it had not yet visited. A series of orbits close to the rings was followed by a Grand Finale orbit, which took the spacecraft through the gap between Saturn and its rings before the spacecraft was destroyed when it entered the planet's upper atmosphere. Six papers in this issue report results from these final phases of the Cassini mission. Dougherty et al. measured the magnetic field close to Saturn, which implies a complex multilayer dynamo process inside the planet. Roussos et al. detected an additional radiation belt trapped within the rings, sustained by the radioactive decay of free neutrons. Lamy et al. present plasma measurements taken as Cassini flew through regions emitting kilometric radiation, connected to the planet's aurorae. Hsu et al. determined the composition of large, solid dust particles falling from the rings into the planet, whereas Mitchell et al. investigated the smaller dust nanograins and show how they interact with the planet's upper atmosphere. Finally, Waite et al. identified molecules in the infalling material and directly measured the composition of Saturn's atmosphere.
Science, this issue p. eaat5434, p. eaat1962, p. eaat2027, p. eaat3185, p. eaat2236, p. eaat2382
## Structured Abstract
### INTRODUCTION
Past remote observations of Saturn by Pioneer 11, Voyager 1 and 2, Earth-based observatories, and the Cassini prime and solstice missions suggested an inflow of water from the rings to the atmosphere. This would modify the chemistry of Saturn’s upper atmosphere and ionosphere. In situ observations during the Cassini Grand Finale provided an opportunity to study this chemical interaction.
### RATIONALE
The Cassini Grand Finale consisted of 22 orbital revolutions (revs), with the closest approach to Saturn between the inner D ring and the equatorial atmosphere. The Cassini Ion Neutral Mass Spectrometer (INMS) measured the composition of Saturn’s upper atmosphere and its chemical interactions with material originating in the rings.
### RESULTS
Molecular hydrogen was the most abundant constituent at all altitudes sampled. Analysis of the atmospheric structure of H2 indicates a scale height with a temperature of 340 ± 20 K below 4000 km, at the altitudes and near-equatorial latitudes sampled by INMS.
Water infall from the rings was observed, along with substantial amounts of methane, ammonia, molecular nitrogen, carbon monoxide, carbon dioxide, and impact fragments of organic nanoparticles. The infalling mass flux was calculated to be between 4800 and 45,000 kg s−1 in a latitude band of 8° near the equator.
The interpretation of this spectrum is complicated by the Cassini spacecraft’s high velocity of 31 km s−1 relative to Saturn’s atmosphere. At this speed, molecules and particles have 5 eV per nucleon of energy and could have fragmented upon impact within the INMS antechamber of the closed ion source. As a result, the many organic compounds detected by INMS are very likely fragments of larger nanoparticles.
Evidence from INMS indicates the presence of molecular volatiles and organic fragments in the infalling material. Methane, carbon monoxide, and nitrogen make up the volatile inflow, whereas ammonia, water, carbon dioxide, and organic compound fragments are attributed to fragmentation inside the instrument’s antechamber of icy, organic-rich grains. The observations also show evidence for orbit-to-orbit variations in the mixing ratios of infalling material; this suggests that the source region of the material is temporally and/or longitudinally variable, possibly corresponding to localized source regions in the D ring.
### CONCLUSION
The large mass of infalling material has implications for ring evolution, likely requiring transfer of material from the C ring to the D ring in a repeatable manner. The infalling material can affect the atmospheric chemistry and the carbon content of Saturn’s ionosphere and atmosphere.
## Abstract
The Pioneer and Voyager spacecraft made close-up measurements of Saturn’s ionosphere and upper atmosphere in the 1970s and 1980s that suggested a chemical interaction between the rings and atmosphere. Exploring this interaction provides information on ring composition and the influence on Saturn’s atmosphere from infalling material. The Cassini Ion Neutral Mass Spectrometer sampled in situ the region between the D ring and Saturn during the spacecraft’s Grand Finale phase. We used these measurements to characterize the atmospheric structure and material influx from the rings. The atmospheric He/H2 ratio is 10 to 16%. Volatile compounds from the rings (methane; carbon monoxide and/or molecular nitrogen), as well as larger organic-bearing grains, are flowing inward at a rate of 4800 to 45,000 kilograms per second.
Early modeling of Saturn’s atmosphere/ionosphere coupling (1) prior to the first radio occultation measurements by Pioneer 11 adopted compositional constraints from Earth-based observations of the well-mixed saturnian atmosphere and used a range of values of the turbulent mixing and heating of the atmosphere based on past planetary observations. Depending on the chosen thermal profile and strength of turbulent mixing, a range of atmospheric and ionospheric conditions were deemed possible (1). The “nominal” model of the ionosphere predicted that protons were the primary ion and that the proton’s slow radiative recombination reaction was the primary chemical loss pathway, indicating a density of the ionosphere reaching 105 cm–3. The modeling suggested that ionospheric composition and structure could serve as a diagnostic of the composition, thermal structure, and turbulent mixing of the upper atmosphere. However, the first radio occultation measurements of the ionosphere made by Pioneer 11 on 1 September 1979 found a peak electron density an order of magnitude lower than predicted for moderate eddy mixing and a warm thermosphere (2). The discrepancy was not well understood at the time.
Voyager radio occultation measurements (3) supported the low peak ionospheric density and provided independent evidence that the low peak ionospheric density extended into the night side (4). It was suggested (5, 6) that water from the rings flowing into the atmosphere could chemically convert protons to molecular H3O+ ions at a rate that reproduced the peak electron densities observed by Pioneer and Voyager. The globally averaged water influx rate was estimated to be 4 × 10^7 cm^−2 s^−1, with localized influx as high as 2 × 10^9 cm^−2 s^−1 (6). This proposal of water influx was bolstered by further modeling (7, 8) and ground-based observations of H3+ in the Saturn ionosphere (9).
The 59 Cassini radio occultations carried out over the course of its prime and solstice missions provided additional data on the ionospheric structure (8, 10). Low latitudes—within 20° of the equator—have lower ionospheric density. Water influx from the rings is a strong candidate to explain the observed latitudinal variations; modeling (8) indicates that water influx with a Gaussian distribution about the equator and a peak flux of 5 × 10^6 cm^−2 s^−1 can match the observations obtained prior to the Grand Finale time period.
## Observations during the Cassini Grand Finale
The Grand Finale phase of the Cassini mission began on 22 April 2017 with a final close flyby of Titan that diverted the spacecraft to fly 22 times between the planet and its rings. The mission ended on 15 September 2017 after another gravitational deflection by Titan that sent the spacecraft plunging into Saturn’s dense atmosphere. Cassini’s trajectory between the innermost D ring and the planet allowed in situ coverage of the equatorial ring-atmosphere interaction. Flybys varied in altitude between 1360 and 4000 km above the atmosphere’s 1-bar pressure level in three groups (Fig. 1).
The Grand Finale objectives included measuring Saturn’s atmospheric H2 and He and searching for compounds such as water that might indicate an interaction of the upper atmosphere with the main rings. The Ion Neutral Mass Spectrometer (INMS) (11) measurements were made several hundred kilometers above the homopause, the level at which turbulent or eddy mixing and molecular diffusion are equal (Fig. 1). In this region, molecular diffusion produces a mass-dependent separation: Lighter compounds have a larger vertical extent due to Saturn’s gravity. Prior to the Grand Finale, models and occultation data (12, 13) suggested that H2, He, and HD were the only neutrals in the upper atmosphere that would be measurable by INMS. The heavier molecule methane was predicted to be present at the ~0.1% level in the well-mixed lower atmosphere (below 1000 km; Fig. 1), but below the detection limits of INMS at the high altitudes sampled by Cassini (12, 13).
### Atmospheric composition
INMS obtained measurements of neutral molecules using the Closed Source Neutral (CSN) mode (11). INMS operated in a survey mode, acquiring mass spectral data at every mass within its range [1 to 99 atomic mass units (u) at resolution of 1 u], with a repetition rate of 9.5 s (~300 km along track) or 4.5 s (~150 km) when ionospheric data were or were not being acquired sequentially, respectively.
Figure 1 indicates the three altitude regions between the atmosphere and the rings where measurements were made by INMS. Figure 2 shows data for the major atmospheric components H2, 4He, and HD + 3He, separated into those three altitude bands. HD and 3He have the same atomic mass, so they cannot be measured separately. Because of the geometry of the orbit (rev) for each altitude region, there is a strong correlation between altitude and latitude. The closest approach to Saturn occurred at 4° to 6°S and not at the equator. This allowed us to determine whether the observed compounds are better correlated with a neutral flux from the rings or with Saturn’s atmospheric structure (see below). All orbits occurred near local solar noon. This geometry provides complementary information to the Pioneer, Voyager, and Cassini radio occultations, which probed near the dusk and dawn terminators. The Cassini orbits covered a range of Saturn rotational longitudes (Fig. 2). For the low-altitude observations (Fig. 2C), the differences in the shape of the H2, HD + 3He, and 4He altitude profiles are due to changes in their respective abundances and indicate the effects of diffusive separation via gravity.
H2 and He data were examined to determine the variability due to altitude, latitude, and orbit, the latter representing temporal and/or longitudinal effects. For each orbit, raw count data were converted to densities (14) and an average altitude and density were calculated every 5° latitude using the 11 data points closest in latitude (Fig. 3). Local scale heights were then fitted at each latitude from surrounding data having densities within a factor of 1/(2e) higher or lower than the average density, one e-fold about the latitude of interest.
In addition to local scale heights, a common scale height was calculated using data across all observed altitudes, latitudes, and revs. This was facilitated because altitude and latitude are highly correlated and do not vary much from one orbit to the next. With the exception of 5°N latitude, all altitude/latitude bins are consistent with this common scale height (H), given by H = kT/(mg), where k is Boltzmann’s constant, T is the H2 gas temperature, g is the local acceleration due to gravity, and m is the mass of the gas molecule, H2 in this case. The common scale height represented in Fig. 3 corresponds to a temperature of 340 ± 20 K. The slopes of the local scale height at 5°N are steeper than the common scale height fit, suggesting a higher temperature. The scale height correspondence between the various altitudes/latitudes sampled, the times/longitudes sampled (on the different revs), and the locally determined scale heights calculated at each point show little evidence for major temporal or spatial atmospheric variation in the region sampled by Cassini during the Grand Finale (15).
The analysis of the INMS measurements indicates an atmosphere that has a more complex mass spectrum than predicted by models. There is evidence for both volatile compounds and fragments derived from nanograin impacts in the INMS antechamber. We conclude that the nanograins are sourced from the ring plane. Figure 4 shows the latitude distributions of mass 15 u and mass 28 u in the three distinct altitude ranges. Mass 15 u is the CH3+ fragment of methane (mass 16 u is not used because of interference from mass 16 oxygen from water electron impact dissociative ionization), whereas mass 28 u may be derived from N2, CO, and/or C2H4. Mass deconvolution for revs 290, 291, and 292 indicates that less than 10% of the 15 u peak is derived from dissociative fragments from ammonia or heavy organics. From deconvolution of these same spectra, we attribute the fractional contributions of the 28 u signal to be 28% to 58% (average 45%) CO, 41% to 50% (average 46%) N2, and 0% to 26% (average 9%) C2H4, an organic formed from fragmentation of nanograins.
Discrimination between volatile and nanograin-derived signals is achieved by comparing the data in Fig. 4 at the three separate altitude ranges. The uppermost altitude range shows that both mass 28 u and mass 15 u reach a maximum abundance at the equator rather than at the closest-approach latitude (~5°S) with a very large spike for mass 28 u at the equator, which we attribute to the C2H4 portion of the peak. The broad latitude extent of the distributions indicates a distributed volatile source in the ring plane. The mass 28 u signal is consistent with C2H4 derived from nanograins that have a very narrow Gaussian distribution peaked within 1° of the ring plane at this altitude (~3500 km), comparable to the nanograin distribution seen by the Magnetospheric Imaging Instrument (MIMI) investigation (16).
Both the mass 15 u and mass 28 u compounds also have a component that has begun to spread in latitude toward the closest-approach latitude (4° to 6°S) even at the highest altitudes measured. This is more obvious for mass 15 u than for mass 28 u. This spreading indicates that collisions between the volatile ring components and the saturnian atmosphere (H2 and He) are beginning to “diffusively couple” the volatiles into the atmosphere as they flow from the rings. At the measured atmospheric densities for this altitude range, diffusive coupling can only occur with molecular volatiles and not heavier nanograins. The latter undergo a smaller relative momentum exchange with the predominantly light hydrogen atmosphere. In the lower-altitude bands, the equatorial peaks disappear as both volatiles and nanograins are diffusively coupled into the atmosphere at the correspondingly higher atmospheric densities. This can be seen for the lower-altitude bands as the peak distributions of masses 28 u and 15 u shift into alignment with the closest-approach latitude near 5°S. The lowest-altitude band shows symmetric distributions of masses 15 u and 28 u. Their variation with latitude and altitude indicates that masses 15 u and 28 u are dominated by diffusive interaction with the H2 atmosphere and are free-falling at a terminal velocity, dictated by the increasing hydrogen density and its slowing of the gas. Thus, as the altitude decreases and the background atmosphere increases in density, the maximum diffusion velocity decreases and the infalling material density increases to maintain flux continuity (apart from small effects at this altitude due to chemical loss and latitudinal spreading). This process allows us to estimate the diffusion velocity and determine the material’s influx rate from the measured densities (see below).
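In symbols (our one-line paraphrase of this flux-continuity argument): in steady state the downward flux of an infalling species is conserved,

$$\Phi = n_1(z)\,v_{\mathrm{diff}}(z) \approx \mathrm{const},$$

so as the background H2 density rises and $v_{\mathrm{diff}}$ falls, the density $n_1$ of the infalling species must increase to compensate.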
Figure 4D reinforces the interpretation that the observed compounds originated in the rings. It indicates that for mass 15 u (CH3 from methane), mass 28 u (CO, N2, and C2H4), mass 44 u (CO2 and propane), and a surrogate heavier mass from nanograin fragments at 78 u (benzene), the relative abundances are highest at the highest altitudes and reach a near-constant relative abundance below 2500 km as the material is diffusively coupled into Saturn’s atmosphere.
Although the solar local time does not vary, the Saturn longitude does change from one orbit to the next. The abundance and distribution of methane, one of the major influx volatiles, varies by a factor of 3 about the average abundance (Fig. 4D, lowest-altitude values). These orbit-to-orbit variations indicate a spatial or temporal change of the volatile source on time scales that are faster than the horizontal diffusion time scale. However, there is no simple variability associated with Saturn’s corotational longitude. There is a tentative link to longitudinal variations in the D68 ringlet near the inner edge of the D ring, in which a series of bright clumps appeared in 2015 (17) (Fig. 5). An influx that is spatially concentrated near the clumps may in part explain the spatial and/or temporal variations.
### Impact fragmentation
The inflowing ring material impacted the INMS antechamber at 29 to 31 km s–1 (five times the speeds for which INMS was designed), leading to uncertainty in both absolute abundances and identification of the molecular species. At 31 km s–1, molecules and nanoparticles carry 5 eV of kinetic energy for each atomic mass unit, which in some cases would have been sufficient to dissociate incoming particles and molecules as they impacted the CSN antechamber. However, studies of surface-induced dissociation [e.g., (18)] indicate that only about 25% of the impact energy is converted to internal energy of the molecule in the collision process. For H2 this translates into ~2.5 eV, which is below its dissociation energy of 4.75 eV. Laboratory studies of dissociation in molecule-surface collisions (19) suggest that the interaction of H2 with passive surfaces such as Ag proceeds as a direct interaction with surface atoms. These atomic interactions have a strong angular dependence, with a 10-eV neutral beam having a probability of dissociation of 50% for collinear collisions and >50% for perpendicular collisions. Interaction with a raw Ti surface, such as the CSN antechamber wall after fresh material is exposed through a grain impact, can lead to chemical adsorption (chemisorption) that produces metal hydrides (20). Chemisorption processes can also play a role with active metal surfaces or with oxygen-bearing compounds. Therefore, for H2, CH4, NH3, and N2, the effects of chemisorption in the fragmentation are likely moderate in our case, whereas for H2O, CO, and CO2, chemisorption/fragmentation will likely affect our measurements. For example, some of the CO observed at mass 28 u may be an impact fragment from CO2, which does not affect our main conclusions. Impact fragmentation of CH4 to produce CH3 will likely result in reformation of CH4 on the surface due to the high content and rapid movement of H radicals on the surface in this hydrogen-rich environment. Similarly, CH3 terminal groups from larger organics may also add to the methane signature, but because heavy organic compounds are at least an order of magnitude less abundant, this can increase the derived CH4 value by at most 10%. Furthermore, this increase will be compensated by a decrease as a result of CH4 impact fragmentation that forms CH and CH2 fragments, which are lost from the antechamber before they can pick up two hydrogens in separate surface reactions because the surface coverage is relatively low (<30%). The net result is that we expect that the CH4 statistical uncertainty encompasses the additional systematic uncertainties from various fragmentation processes.
The possibility of particle fragmentation inside the closed source does not affect our main conclusions about Saturn's atmospheric structure. With regard to H2, the agreement of the outbound and inbound data for each orbit examined (290, 291, and 292) below 3000 km and the common scale heights observed at all altitudes, latitudes, and orbits (Fig. 3A) indicate that the effects of dissociation are limited and do not affect the derived atmospheric structure below 3500 km. However, H2 densities of 10^6 cm^−3 in an extended outbound region above 4000 km have a constant slope (with density decreasing versus time and altitude) that tracks with the decay of water, which is known to adsorb to the instrument walls (21). This correlation suggests that grains have sputtered some raw Ti into the antechamber, allowing hydride formation on the antechamber surface. This hydrogen may have been subsequently displaced by the oxygen from water molecules (TiO and TiO2 are more tightly bound than the metal hydride) clinging to the antechamber walls after the flyby, thereby creating a low level of H2 production. This precludes the use of values for H2 on the outbound phase above 4000 km, but does not affect our other results. However, the loss of water reacting with Ti vapor does suggest that the water abundance may be as much as 30% larger than reported in Table 1 and Fig. 6. Because the deuterium hydride has a lower activation barrier for formation than the H hydride (20), which led to H/D fractionation during the flyby, the present dataset is difficult to use for studying the H/D ratio of the atmosphere and rings.
Table 1 Mass influx values and composition of inflowing ring material.
Density and composition values are the average of revs 290, 291, and 292.
The strongest evidence that CH4 and N2 are not simply by-products of fragmentation of heavy compounds, but instead native volatile gases, comes from the latitude distributions of mass 15 u and mass 28 u at the highest altitudes (3500 to 4000 km; see Fig. 4 and related text). The mass 15 u (methane) and mass 28 u signals both show a broad latitude distribution running from 20°S to 10°N latitude but differ in peak location; this finding constitutes evidence for a latitudinal spread of the native volatile compounds (CO and N2), combined with C2H4 impact fragments of the organics nanograins from the peak near the ring plane. This leads us to conclude that methane, nitrogen, and carbon monoxide are native volatiles originating in the rings. However, we cannot rule out the possibility that some of the carbon monoxide is a fragment by-product of carbon dioxide, for which we also have evidence in the spectra. We have no corresponding evidence for the presence of water, carbon dioxide, or ammonia. They may be native volatiles or fragments from nanograins. Given our uncertainty of the fragmentation processes, it is fortunate that this rough classification between native volatiles and fragments derived from nanograins has no effect on the mass inflow flux we derive below, although it does affect how these compounds react chemically with the atmosphere and ionosphere.
### A plethora of organics
INMS spectra from Saturn’s exosphere include signal over the full range of neutral masses, up to 99 u (Fig. 6). Species with mass exceeding 46 u are present in all six of the final low-altitude orbits, consistent with a local source for this material. The count rate distributions are highest around 16 u, 28 u, 44 u, 56 u, and 78 u, indicating an organic-rich spectrum. These spectra are more complex than predicted by models, with contributions from many different chemical compounds.
Our assessment of the composition of the inflowing material observed on revs 290, 291, and 292 gives the following fractions by weight: methane, 16 ± 3%; ammonia, 2.4 ± 0.5%; water, 24 ± 5%; molecular nitrogen and carbon monoxide (CO/N2), 20 ± 3%; carbon dioxide, 0.5 ± 0.1%; and organic compounds, 37 ± 5%. The values reported are the mean of the orbits analyzed. To account for physical adsorption (physisorption) and chemisorption to the instrument walls (21), we generated integrated spectra (Fig. 6). For masses with a high tendency to interact with the walls of the antechamber (those in rev 290 with a ratio of outbound to inbound counts greater than 2 at 1750 km and with a maximum count rate greater than 40), the integrated spectra show the integrated signal at each mass over the full time period for which the signal at 18 u is above the background level. The remaining masses are integrated using the time window from 500 s before to 500 s after closest approach. We used a standard fitting procedure (22). Using data from calibrations of the INMS engineering model and the National Institute of Standards and Technology mass spectral library (23) to determine the dissociative fragmentation patterns and absolute calibration, we constrain the abundances of carbon dioxide, CO/N2, water, ammonia, methane, hydrogen, and helium for each integrated spectrum. The remaining counts at masses ≥12 u are attributed to organic species. Some signal may be due in part to inorganic S-bearing species such as H2S or SO2; however, the overall abundance of these compounds is consistently < 0.1% by mass of the inflowing material and less than our quoted uncertainties.
The ring particle composition (i.e., compounds other than H2 and He) is approximately 37 weight percent (wt %) organic compounds heavier than CH4. The other abundant ring particle compounds are water, CO/N2, methane, ammonia, and carbon dioxide. Signals at 12 u and 14 u constrain the abundances of CO and N2, respectively, and suggest that an inorganic component is likely present at 28 u. However, the value for CO reported is an upper limit, as the constraint from 12 u does not account for ionization fragments from organics or from impact fragmentation of CO2 internal to the instrument. The organic fraction itself is fitted well by hydrocarbons, but the presence of O- or N-bearing organics is not excluded. Water, 28 u, and methane are the most abundant volatiles with ratios relative to H2 on the order of 10^−4 for the integrated spectra. The spectra are also consistent with the presence of aromatic species, including the possible signal of benzene at 78 u. We estimate that the total organic mass density is on the order of 10^−16 g cm^−3. The organic compounds identified may be either indigenous compounds present in the atmosphere or the products of high-speed impacts with INMS (see above). In either case, these spectra suggest abundant native organic material. The mass of hydrocarbons detected by INMS is equivalent to the mass of ~10^6 cm^−3 nanoparticles with masses in the range detected by MIMI (16). Because MIMI detected far fewer particles, most of the mass measured by INMS appears to be below the 8000 u lower measurement limit of MIMI (24).
Masses above ~70 u are poorly fit by compounds with primary masses in the range of INMS. Because studies of heavy extraterrestrial organics indicate that these heavier organic compounds include aromatics [e.g., (25)], common aromatic molecules such as naphthalene were added to the spectral library as candidate parent species for this region. Compounds produced by ice irradiation experiments (26, 27) were also considered. We find that this mass region is consistent with the presence of aromatic compounds comprising ~5 to 10 wt % of organics. Aromatic material would be subject to impact dissociation that may yield aliphatic compounds, so this abundance of aromatic material is a lower limit.
### Estimating the mass influx in the equatorial region
We estimated the mass influx of material in the equatorial region by combining calculations of the material velocities and densities. From the mass deconvolution described above, we estimated the densities of methane, water, ammonia, nitrogen, carbon monoxide, carbon dioxide, and organic fragments measured at an altitude of 2000 km during revs 290, 291, and 292 (see Fig. 6). We used two different approaches to determine the appropriate downward diffusion velocity. One is based on the limiting flux equation (28):

$$w_1 = \frac{D_{12}}{H}\left(1 - \frac{m_1}{m}\right) \qquad (1)$$

where H is the scale height (we use a value of 150 km based on the INMS data), m1 is the mass of the minor species of interest, and m is the mean mass of the atmosphere (we use a value of 2 u). D12 is the coefficient for diffusion of species 1 in species 2. The alternative approach uses the diffusion coefficients calculated using a simple hydrostatic model adapted from previous work (29). The methane diffusion velocity at 2000 km is calculated using both methods for a range of influx values of methane. The maximum (1 × 10^4 cm s–1) and minimum (4.5 × 10^3 cm s–1) diffusion velocities are then taken as bounding cases. The diffusion velocities for the other materials are scaled by the ratio to methane of their diffusion coefficients flowing through H2 (30). The range of diffusion velocities used in this calculation is inclusive enough that assumptions about whether methane, ammonia, water, nitrogen, carbon monoxide, and carbon dioxide are present at 2000 km as volatiles or are derived as fragments of a larger organic moiety will have little effect on the diffusion velocity calculation. The equatorial latitude band used in the mass influx calculation is 8° wide, which is the half width of the low-altitude measurements in Fig. 4C. The difference between the maximum and minimum values is completely dominated by the uncertainty of determining the diffusion velocities, which is a systematic uncertainty that is constant from one orbit to the next. The uncertainty due to the densities of the components is ~1%, indicating that there is variation in the mass influx between rev 291 and the other two revs (290 and 292) we analyzed. However, the mass fraction of a given component is not statistically different from orbit to orbit. Calculated mass influx rates for revs 290, 291, and 292 and the average composition of material from Saturn’s atmosphere and from ring influx are reported in Table 1 and Fig. 6.
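As an order-of-magnitude cross-check (a sketch under stated assumptions, not the authors' pipeline), the influx carried by the organic component alone can be estimated from the ~10–16 g cm–3 organic mass density quoted above, the two bounding diffusion velocities, and the area of the 8°-wide equatorial band at 2000 km:

```cpp
#include <cmath>
#include <cstdio>

// Order-of-magnitude cross-check only, not the authors' pipeline. Assumed inputs:
// the ~1e-16 g cm^-3 organic mass density quoted earlier, the two bounding diffusion
// velocities, and the +/-4 degree equatorial band at 2000 km altitude.
int main() {
    const double pi = 3.141592653589793;
    const double R_cm = (60268.0 + 2000.0) * 1e5;               // band radius, cm
    const double area = 4.0 * pi * R_cm * R_cm * std::sin(4.0 * pi / 180.0);  // cm^2
    const double rho = 1e-16;                                   // g cm^-3, organics
    const double velocities[] = {4.5e3, 1.0e4};                 // cm s^-1
    for (double v : velocities) {
        double influx_kg_s = rho * v * area * 1e-3;             // g/s -> kg/s
        std::printf("v = %.1e cm/s -> organic influx ~ %.1e kg/s\n", v, influx_kg_s);
    }
    return 0;
}
```

Both bounds land at roughly 1.5 × 10^4 to 3.4 × 10^4 kg s–1 for the organic component alone, the same order of magnitude as the total influx range discussed below.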
### Ionospheric measurements
The open source on the Cassini INMS was designed with the primary purpose of measuring reactive neutrals and ambient ions in Titan’s ionosphere and upper atmosphere (11), where flyby velocities were in the range of 6 km s–1. However, the Cassini Grand Finale orbits resulted in spacecraft speeds relative to the atmosphere of ~31 km s–1. At these speeds, INMS ion measurements were limited to <8 u. Despite the limited mass range, INMS measurements of light ions can be combined with Radio Plasma Wave Spectrometer (RPWS) measurements of the free electron content (31, 32) to produce a more complete picture of the ionosphere. The INMS open source has a narrow field of view (< 2° cone) relative to the closed source (180° cone). Therefore, the spacecraft must be used to point the instrument into the ram direction to allow measurements of ions. This orientation occurred on a limited set of Grand Finale revs (283, 287, 288, and 292).
Figure 7 shows the ionospheric data obtained by INMS during the Cassini Grand Finale. Figure 7, A and B, shows the mid-altitude band (revs 283 and 287; refer to Fig. 1 for context). These time series indicate that for the altitudes and latitudes covered by these orbits, the free electron density is within a factor of 2 of the INMS measured light ion density. The asymmetry between north and south latitudes above 15° can be attributed to ring shadow effects during the autumnal solstice illumination (33). This reduces incoming solar ultraviolet flux, lowering the photoionization, although the effect is somewhat mitigated by the long lifetimes of the H+ and H3+ ions and the associated transport effects. Figure 7, C and D, shows revs 288 and 292 from the lowest set of altitudes, where the near-equatorial ionosphere shows a large difference between the free electron density and the light ion density, indicating the presence of heavier ions (>8 u) that must account for the bulk of the ionospheric density (more than 75% of the ions are >8 u). This heavy-ion region is asymmetric with respect to the equator, ranging in latitude from ~2°N to 12°S. However, it does roughly correlate with the inflow of volatiles and organic nanograins from the D ring discussed above, and closely matches MIMI Charge Energy Mass Spectrometer (CHEMS) measurements of nanoparticles (16). The molecular volatiles (water, methane, ammonia, carbon dioxide) can easily convert the long-lived protons of the ionosphere into molecular ions with shorter lifetimes, decreasing the overall electron density (6), as described below.
Figure 7E shows additional measurements of the ionosphere for rev 288. The individual light ion concentrations for H+, H2+, H3+, and He+ are indicated. The H+ and H3+ densities decrease in the near-equatorial heavy-ion region, consistent with an inflow of heavier molecules. The H2+ ions measured are the primary product of ionization in the Saturn ionosphere and are created by ionization of H2, the most abundant neutral, by solar extreme ultraviolet radiation (34). They are a good tracer of the ionization process. Figure 7E also shows the scaled value of the positive nanoparticles measured by CHEMS (16) added to the INMS light ion density. The second-order latitudinal structure of the CHEMS measurements of positive nanoparticles is closely correlated with the free electron density of the bulk ionosphere. Although the scaling factor is ~106 and these very large positive ions cannot themselves account for the secondary structure, they represent the positive member of a dusty plasma that contains both neutral nanoparticles and negative-ion nanoparticles, which appears to affect the recombination of the primary positive molecular ions that dominate the equatorial ionosphere.
## Implications for Saturn’s atmosphere and ionosphere
### Atmospheric structure
The analyses shown in Figs. 2 and 3 indicate a background atmospheric structure that is consistent with predictions shown in Fig. 1 for both hydrogen and helium. However, species heavier than helium are far more abundant than predicted. The differences between the predicted and observed atmosphere are largely confined to the excess volatiles that we have concluded flow in from the D ring, as discussed below.
The measured hydrogen and helium abundances are compared to the models (35) in Fig. 8. The vertical profiles of the helium and methane abundances calculated by the hydrostatic model (29), which was used to benchmark the nonhydrostatic Global Ionosphere-Thermosphere Model (36, 37), are shown. Methane abundances for altitudes below those observed by INMS are taken from the combination of Cassini Ultraviolet Imaging Spectrograph (UVIS) and Composite Infrared Spectrometer (CIRS) data (35). As can be seen in Fig. 8, the relatively large uncertainties in the methane abundance allow both approaches to reproduce the UVIS/CIRS methane data equivalently well, even though they use widely different versions of the eddy diffusion coefficient.
Also shown in Fig. 8 are two different scenarios for the deep-atmosphere He/H2 ratio. The purple curves adopt a He/H2 ratio of 0.03, consistent with a helium abundance of ~0.0291 reported from the Voyager measurements (38). We use the ratio of 0.03 to represent the most likely lower bound for the well-mixed atmosphere value for helium (35). The other set of helium curves represent a He/H2 ratio in the well-mixed lower-altitude region of the atmosphere of ~0.16 (an abundance of helium of ~0.1355); this value is more consistent with the recent analysis (35), which reports a homosphere abundance of ~0.11 ± 0.02 inferred from UVIS and CIRS data. The latter approach brackets the helium abundances obtained directly from the INMS measurements, whereas the Voyager-derived curves systematically fall below the data (Fig. 8). This comparison suggests that the INMS helium data are more consistent with a nearly jovian homosphere abundance of ~0.1355 (39). In contrast, the model that uses a lower He/H2 ratio systematically fails to reproduce the INMS data (15).
These results for the homospheric ratio of He/H2 have implications for understanding the internal structure and evolution of Saturn. The conventional explanation for the excess infrared luminosity of Saturn relative to the expected thermal emission is that cooling over time leads to the demixing of helium from hydrogen, with the heavier helium raining out into the deeper interior and generating heat (40). Our measurements and modeling permit a modest depletion of helium but are inconsistent with a strong depletion relative to the protosolar He/H2 ratio of 0.19 (41). As an example, the nominal INMS value for the He/H2 ratio is 0.16 (Fig. 8), which is similar to the value at Jupiter (0.157 ± 0.003) (42). However, some additional helium rain in Saturn beyond that in Jupiter is allowed, as the INMS data are consistent with a He/H2 ratio as low as 0.10. A range between 0.10 and 0.16 would maintain the viability of helium rain—a process that is consistent with He/H2 < 0.12 in the well-mixed atmosphere (43)—as the cause of excess luminosity. This range is also consistent with the most recently derived He/H2 ratio of 0.11 to 0.16 from Voyager (44) and with the Cassini UVIS-CIRS value of 0.09 to 0.13 (35). Overall, the homospheric helium abundance from INMS may be slightly higher than previous estimates, but the uncertainties are large.
### Ionospheric structure
The presence of the light-ion species observed by INMS in the ionosphere (H+, H2+, H3+, He+) was predicted almost 40 years ago by a model of a neutral atmosphere dominated by H2 and He (1). Ion and neutral measurements made by INMS in the ionosphere are consistent in that they both indicate the presence of an additional heavy molecular species (both neutral and ionized) in the equatorial upper atmosphere. Water-group neutral and ion species were predicted, but the present Cassini data indicate that the chemical composition of the material falling inward from the rings is concentrated at the equator, is chemically much more complex than predicted, and includes a substantial organic component, perhaps in the form of nanoparticles.
Dissociative and nondissociative photoionization of molecular hydrogen (and to a lesser degree He) by solar extreme ultraviolet radiation is the source of ionization in the equatorial ionosphere. The H2+ and H+ ions thus produced undergo a series of ion-neutral reactions, generating other ion species such as H3+ via the fast reaction H2+ + H2 → H3+ + H. The H2+ production rate along the spacecraft track can be determined empirically by multiplying the measured H2+ density by the measured H2 density (both shown for rev 288 in Fig. 7E) and by the rate coefficient of 2 × 10–9 cm3 s–1 (34). The production rate is ~8 cm–3 s–1 near closest approach. At these altitudes, the effects of approximately 50% opacity in the extreme ultraviolet are evident in the production rate, indicating that the spacecraft’s closest approach nearly reached the altitude of peak ion production. This effect is also evident as the dip near closest approach in the H2+ densities in Fig. 7.
Figure 7E also shows a broad gap near closest approach between the total light-ion densities measured by INMS and the electron densities measured by RPWS. Assuming quasi-neutrality (that is, the ion density approximately equals the electron density) and neglecting negative ions, this suggests the existence of an ion with a mass beyond the upper limit of the open source for these orbits (8 u). This ion, or collection of ions, is more abundant than light ions in the main ionospheric layer. To maintain consistency with the neutral composition provided by the closed source, the heavy ion cannot be solely a water-derived molecular ion. Simple ion chemistry for the light ions can put limits on the abundance of the heavy neutral ring ions with a large dissociative recombination rate coefficient. The low H+ and H3+ densities measured near closest approach require fast reactions with a molecular volatile at an abundance of approximately 10–4 (34), consistent with the INMS neutral data. This simple ion analysis does not indicate the identity of the molecular volatile, which likely includes methane, ammonia, water, and carbon dioxide as measured by INMS. These compositional changes are discussed in (45).
Simple photochemistry for a single major ion (i.e., H2+) states that the total ion production rate equals the ion-electron loss rate from dissociative recombination. This leads to an expression for the electron density Ne:

$$N_e = \sqrt{\frac{q_{\mathrm{H}_2^+}}{\alpha}} \qquad (2)$$

where $q_{\mathrm{H}_2^+}$ is the H2+ production rate and α ≈ 5 × 10–7 cm3 s–1 is a typical dissociative recombination rate coefficient (46). Using the peak production rate near closest approach from Fig. 7 yields a photochemical equilibrium value of Ne ≈ 10^4 cm–3, in good agreement with the peak electron density measured by RPWS (31, 32). This agreement suggests that the role of negative ions and/or particles in determining the charge balance may be relatively minor.
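Substituting the ~8 cm–3 s–1 peak production rate quoted above into Eq. 2 gives

$$N_e \approx \sqrt{\frac{8\ \mathrm{cm^{-3}\,s^{-1}}}{5\times10^{-7}\ \mathrm{cm^{3}\,s^{-1}}}} \approx 4\times10^{3}\ \mathrm{cm^{-3}},$$

which is indeed of the order of 10^4 cm–3.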
It is well known in planetary and terrestrial aeronomy (47) that chemistry dominates at lower altitudes in an ionosphere, whereas transport processes become important at higher altitudes where the collision frequency is low. We expect this to be true for Saturn’s ionosphere as well. The chemical lifetime of the major molecular compounds varies from ~200 s near closest approach up to ~2000 s (i.e., about 30 min) near the upper edge of the heavy-ion gap/layer. The H+ chemical lifetime should be controlled by the abundance of the heavy neutral compounds, which increases rapidly with altitude, via ion-neutral reactions. Plasma transport is largely constrained to proceed along the magnetic field (47), which in the equatorial region is almost horizontal. The H2+ production rate sharply falls off by a factor of ~10 in the shadow of the planet’s B ring (Fig. 7E), and the H3+ density also falls off rapidly. However, the H+ density, which is equivalent to Ne in this region, falls more slowly. This suggests that H+ is not in chemical equilibrium in the altitude region near 2000 km and above, but that H+ plasma is produced outside this region and flows into the shadowed area.
He+ ions (4 u) are created by the photoionization of atmospheric He, which falls with altitude more rapidly than H2 because of diffusive separation. Figure 7E shows that the He+ density decreases more rapidly with altitude than does the H2+ density. However, some of the 4 u signal is contributed by H2D+.
### Origin of volatiles in the thermosphere
Molecular hydrogen and helium, which are sourced from the well-mixed atmosphere via diffusive transport, are the most abundant neutral species in the upper atmosphere of Saturn. The next most abundant category of neutrals by mass is organics, followed by water, mass 28 u inorganics (CO and N2), and methane (Fig. 6). Methane is too heavy to diffuse upward from the homosphere, so the source of methane must be external (see above). The source of CH4 seems to be Saturn’s rings. One possibility is that an icy carrier of CH4 (e.g., clathrate hydrate) may be present inside the ring particles, which volatilizes when heated by sunlight or ablation in Saturn’s thermosphere. Any CH4 gas released would diffuse into Saturn’s atmosphere under gravity.
The volatile composition observed by INMS appears to be similar to material found in comets (48). This could be explained if Saturn’s rings were formed from unprocessed primordial ices, derived from a thermally primitive precursor body such as a small icy moon. Alternatively, the similarity may be coincidental if species such as CH4, NH3, and CO are major products of the thermal/ultraviolet degradation of complex organics in an H2-rich environment.
The mass of Saturn’s C ring is ~1018 kg (49), about 0.03 times the mass of Saturn’s moon Mimas. Therefore, if we use the mass influx inferred from the INMS measurements (4800 to 45,000 kg s–1), we calculate a lifetime of 700,000 to 7 million years for the C ring. Yet this only reflects today’s influx. The current influx is directly from the D ring rather than the C ring, which must be the ultimate supplier because the mass of the D ring [likely no more than 1% of the C ring mass (50)] can maintain current loss rates for only 7000 to 66,000 years—a very short amount of time in terms of solar history. It is unclear whether the C ring can lose 1% of its mass into the D ring by viscous spreading over that time period (51).
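These lifetimes follow directly from dividing the ring mass by the influx range:

$$\frac{10^{18}\ \mathrm{kg}}{4.5\times10^{4}\ \mathrm{kg\,s^{-1}}} \approx 2.2\times10^{13}\ \mathrm{s} \approx 7\times10^{5}\ \mathrm{yr}, \qquad \frac{10^{18}\ \mathrm{kg}}{4.8\times10^{3}\ \mathrm{kg\,s^{-1}}} \approx 2.1\times10^{14}\ \mathrm{s} \approx 6.6\times10^{6}\ \mathrm{yr}.$$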
Although viscous spreading of the C ring is likely not the cause of mass transfer to the D ring (51), occasional transfer of ~1% of the mass of the C ring into the D ring region via a large ring-tilting event is feasible. These ring-tilting events involve a stream of planet-orbiting rubble crossing the ring plane somewhere in the C or D rings. The C ring provides the ultimate source, containing enough mass to last (at current influx rates) about 5% of the time that the rings themselves have existed (~200 million years) (52–54). The D ring could be repopulated sporadically by large impact events such as those that tilted the D and C ring plane (55). Once enough small particles are brought into the D ring region, exospheric drag would quickly drain them into the planet, as observed by Cassini.
We conjecture that one or more transient events occurred in the recent past that disturbed the D ring, or changed its mass and particle size distributions so that tenuous gas drag can more quickly cause it to fall into the planet. The latest such event appears to have been the one that perturbed the D68 ringlet (17). However, the weak correlation shown in Fig. 5 is not compelling. Evidence of ionospheric depletion observed well before the formation of bright clumps in the D68 ringlet suggests that the material inflow may have been taking place for a longer period of time (2). Therefore, we examined the evidence for D ring perturbations over a longer time scale.
The D ring structure of irregularly spaced bands or belts has changed markedly since the Voyager flyby (56). A pattern within the ring has been interpreted as a vertical spiral ripple, possibly the result of ongoing wrapping by orbital evolution of particles in the initially tilted ring. From the wavelength of the wrap, which shortens with time, the event was dated to the early 1980s (56). The ripple was later found to extend through the C ring (55), indicating that the tilt was imposed by an impacting stream of rubble, perhaps from a disrupted comet 1 to 10 km in size with an extended node crossing the D and C rings. The event may have had two parts, separated by months (57). There is also evidence in Voyager data for two other disturbances that occurred in 1979 (58). All this sporadic disruption could plausibly have altered the properties of the D ring in such a way that today’s flux (and/or today’s D ring) are not necessarily typical of the last 100 million years. We expect these events to have happened at the same average rate going backward, producing variability in the D ring.
Our results for the delivery of ring materials have implications for the composition of Saturn’s deep atmosphere (stratosphere and troposphere). Previous modeling (7) has suggested that the delivery of oxygen, in the form of water, can explain the presence of CO seen in the stratosphere. In addition to water, we observe carbon monoxide and carbon dioxide influx that can contribute to the oxygen inventory (Fig. 6).
Because of the influx of CH4 and other sources of carbon, Saturn may have acquired an apparent methane enrichment (i.e., higher C/H ratio) relative to the protosolar value. Observations using the CIRS instrument onboard Cassini indicate that Saturn’s methane enrichment over the protosolar value is 2.25 ± 0.55 times the enrichment seen in Jupiter (59). Our INMS observations indicate an influx of methane between 3 × 10^28 and 2 × 10^29 molecules per second entering the equatorial atmosphere, and 2.5 times as much mass in the form of other organics. If we assume that inflowing methane spreads over the globe, this is equivalent to an influx of 7 × 10^11 to 4.8 × 10^12 m–2 s–1 throughout the atmosphere. By calculating the column density of methane in the thermosphere, stratosphere, and troposphere above an altitude of 50 km, where the contribution function of CIRS peaked (59), we can estimate how long the observed methane influx from the rings would need to be sustained to raise the enrichment to 2.25 times that of Jupiter. The estimated time is ~7 million to 110 million years, within approximately an order of magnitude of the estimated lifetime of the rings themselves (see above). This slow buildup occurs in the stratosphere and troposphere, because the predominant methane flux at this altitude is from the deep interior produced by recycled heavier hydrocarbons that photochemically formed in the stratosphere and later diffused down into the interior. Thus, the methane flux from above provides a very slow shift in the steady-state concentration that builds up over time in the stratosphere and troposphere. The organic carbon nanograin material, with a mass influx 2.5 times that of methane, could be chemically recycled deep in the atmosphere to increase the methane content in the deeper interior and drive a larger interior outflux of methane. However, our derived infalling material composition (Fig. 6) includes influx of NH3, and it is unclear whether prolonged, continuous delivery of ring-derived NH3 would be consistent with existing upper limits on the 15N/14N ratio in Saturn (60).
## CONCLUSIONS
The Cassini INMS measured in situ the atmospheric and ionospheric composition of Saturn’s equatorial atmosphere during a series of flybys between the atmosphere and the D ring in the Grand Finale phase of the mission. Water, methane, ammonia, carbon monoxide and/or molecular nitrogen, and carbon dioxide enter Saturn’s atmosphere from the D ring along the ring plane. This influx is expected to affect the equatorial ionospheric chemistry by converting the H+ and H3+ ions into heavier molecular ions, producing a depletion of ionospheric density previously observed in radio occultation observations (10). However, this may not explain the full extent of small-scale electron depletions observed by other Cassini instruments (33). INMS data include evidence for an influx of organic-rich nanoparticles that further modifies the composition and structure of the equatorial ionosphere and may circulate throughout the low- and mid-latitude thermosphere. Over long time scales, this infalling material may affect the carbon and oxygen content of the observed atmosphere.
## Methods
The data on the D68 ringlet brightness distribution (Fig. 5) are from a sequence of images obtained by the Imaging Science Subsystem (ISS) onboard the Cassini spacecraft on day 229 of 2017, during rev 289. All images were calibrated using the standard CISSCAL routines, which remove dark currents, apply flat-field corrections, and convert the observed brightness data to I/F, a standardized measure of reflectance that is unity for a surface illuminated and viewed at normal incidence (62, 63). These calibrated images were geometrically transformed with the appropriate SPICE kernels (64), and the pointing was refined on the basis of the observed locations of stars in the field of view (27). For each image, the brightness data were then reprojected onto regular grids of radii and inertial longitude (i.e., longitude measured relative to the ascending node of the rings in the J2000 coordinate system). Each column of the reprojected maps then provides a radial profile of D68 at a single inertial longitude. Radial profiles were co-added to generate longitudinal brightness profiles. Because the ring material orbits the planet, these profiles are constructed in a corotating longitude system with an assumed mean motion of 1751.7° per day and a reference epoch time of 300000000 TDB (Barycentric Dynamical Time), which is 2009-185T17:18:54 UTC (Coordinated Universal Time).
The Cassini images did not have sufficient spatial resolution to discern D68’s internal structure, so the ringlet’s brightness is quantified in terms of its equivalent width (EW), which is the radially integrated I/F of the ringlet over the radius range 67,550 to 67,700 km above a background level given by a linear fit to the signal levels on either side of the ringlet (67,000 to 67,500 km and 67,750 to 68,250 km, respectively). The estimates of the ringlet’s equivalent width are converted to normal equivalent width (NEW) by multiplying the EW values by the cosine of the ring’s emission angle. For features with low optical depth such as D68, NEW is independent of ring opening angle.
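A minimal sketch of this computation (not the ISS pipeline; the linear background parameters are assumed to come from the side-region fits described above):

```cpp
#include <cstddef>
#include <cmath>
#include <vector>

// Sketch only, not the ISS pipeline: equivalent width (EW) of D68 from a radial
// I/F profile, integrating above an assumed linear background (slope/intercept are
// taken to come from the fit to the side regions described above), then NEW.
double equivalent_width(const std::vector<double>& radius_km,
                        const std::vector<double>& i_over_f,
                        double bg_slope, double bg_intercept) {
    double ew = 0.0;
    for (std::size_t i = 1; i < radius_km.size(); ++i) {
        double r0 = radius_km[i - 1], r1 = radius_km[i];
        if (r1 < 67550.0 || r0 > 67700.0) continue;   // keep the ringlet range only
        double e0 = i_over_f[i - 1] - (bg_slope * r0 + bg_intercept);
        double e1 = i_over_f[i] - (bg_slope * r1 + bg_intercept);
        ew += 0.5 * (e0 + e1) * (r1 - r0);            // trapezoidal rule, in km
    }
    return ew;
}

// NEW = EW * cos(emission angle); independent of opening angle at low optical depth.
double normal_equivalent_width(double ew, double emission_angle_rad) {
    return ew * std::cos(emission_angle_rad);
}
```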
## References and Notes
1. Scaling the A ring viscosity ν from (61) by the ratio of C ring to A ring optical depths, and assuming a spreading time t = D²/ν, where ν is the viscosity and D is the width of the C ring, the spreading time is comparable to the age of the Solar System, much longer than the currently believed age of the rings.
Acknowledgments: We gratefully acknowledge the support of the Cassini project in the flawless execution of the Cassini Grand Finale phase; the late Hasso Niemann and his NASA GSFC team that built a superb mass spectrometer that produced beautiful data for many years; and the INMS operations team who operated the INMS equally flawlessly through most of the mission. Funding: Work performed by the Cassini INMS team was supported by NASA JPL subcontract (NASA contract NAS703001TONMO711123, JPL subcontract 1405853) and INMS science support grant NNX13AG63G (M.E.P.). J.-E.W., L.Z.H., and M.M. were supported by the Swedish National Space Board (SNSB) for work and data from the RPWS/LP instrument on board Cassini; research at the University of Iowa was supported by NASA through contract 1415150 with the Jet Propulsion Laboratory. J.C. was also supported by the Cassini project through his Interdisciplinary Scientist grant, NASA WBS 431924.02.01.02. D.G.M. was supported for his contribution by the NASA Office of Space Science under Task Order 003 of contract NAS5-97271 between NASA Goddard Space Flight Center and Johns Hopkins University. W.T. was supported by Taiwan Ministry of Science and Technology grant 106-2112-M-003-015. Author contributions: J.H.W., R.S.P., and M.E.P. planned the observations and discussed their value in the Cassini Grand Finale phase of the mission; R.S.P. led the instrument operations uplink and downlink process; R.S.P., M.E.P., K.E.M., T.E.C., R.Y., and J.H.W. contributed to the analysis of the dataset; J.H.W., R.S.P., K.E.M., T.E.C., J.B., C.R.G., and M.E.P. wrote the initial text; J.H.W., K.E.M., T.B., J.W., S.C., B.T., and J.G. contributed to modeling and/or experiments concerning the interaction of the gas with the INMS antechamber at high impact velocity (fragmentation, chemisorption, and physisorption processes); M.H., J.C., O.J.T., R.J., S.L., W.-H.I., and W.T. provided information about the rings and their associated atmosphere; D.G.M. and M.E.P. provided insight into the equatorial ice grains from the Cassini MIMI investigation; W.S.K., J.-E.W., L.Z.H., M.M., and A.P. provided complementary information about the ionosphere as measured by the Cassini RPWS investigation; T.E.C., A.N., and L.M. provided ionospheric modeling; and J.B. and R.Y. provided atmospheric modeling. All authors contributed to revising and editing of the text. Competing interests: The authors declare no competing interests. Data and materials availability: INMS data from the Grand Finale phase of the Cassini mission are available on NASA’s Planetary Data System at https://pds-ppi.igpp.ucla.edu/search/view/?f=yes&id=pds://PPI/CO-S-INMS-3-L1A-U-V1.0/DATA/SATURN/2017. We used data from 2017-111 to 2017-258.
https://www.sparrho.com/item/efficient-dollarell_qdollar-minimization-algorithms-for-compressive-sensing-based-on-proximity-operator/8e3338/ | # Efficient $\ell_q$ Minimization Algorithms for Compressive Sensing Based on Proximity Operator
Research paper by Fei Wen, Yuan Yang, Peilin Liu, Rendong Ying, Yipeng Liu
Indexed on: 14 Mar '16. Published on: 14 Mar '16. Published in: Computer Science - Information Theory
#### Abstract
This paper considers solving the unconstrained $\ell_q$-norm ($0\leq q<1$) regularized least squares ($\ell_q$-LS) problem for recovering sparse signals in compressive sensing. We propose two highly efficient first-order algorithms via incorporating the proximity operator for nonconvex $\ell_q$-norm functions into the fast iterative shrinkage/thresholding (FISTA) and the alternating direction method of multipliers (ADMM) frameworks, respectively. Furthermore, in solving the nonconvex $\ell_q$-LS problem, a sequential minimization strategy is adopted in the new algorithms to gain better global convergence performance. Unlike most existing $\ell_q$-minimization algorithms, the new algorithms solve the $\ell_q$-minimization problem without smoothing (approximating) the $\ell_q$-norm. Meanwhile, the new algorithms scale well for large-scale problems, as often encountered in image processing. We show that the proposed algorithms are the fastest methods in solving the nonconvex $\ell_q$-minimization problem, while offering competitive performance in recovering sparse signals and compressible images compared with several state-of-the-art algorithms.
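For illustration only, here is a minimal sketch of the proximity-operator step at the convex boundary case $q = 1$, where it reduces to soft-thresholding; the paper's contribution is instead to use proximity operators of the nonconvex $\ell_q$ norms ($0 \leq q < 1$) directly inside the FISTA and ADMM frameworks, for which the operator takes a different thresholding-like form:

```cpp
#include <cstddef>
#include <cmath>
#include <vector>

// Illustrative sketch only: the proximity operator at the convex boundary q = 1,
// i.e. soft-thresholding. The paper's algorithms instead plug proximity operators
// of the nonconvex lq norms (0 <= q < 1) into the FISTA and ADMM frameworks.
std::vector<double> prox_l1(const std::vector<double>& v, double tau) {
    std::vector<double> out(v.size());
    for (std::size_t i = 0; i < v.size(); ++i) {
        double m = std::fabs(v[i]) - tau;                  // shrink the magnitude by tau
        out[i] = (m > 0.0) ? std::copysign(m, v[i]) : 0.0; // zero out small entries
    }
    return out;
}
```

Within a FISTA-type iteration, such an operator is applied to the gradient step, e.g. $x_{k+1} = \operatorname{prox}_{\lambda/L}\big(y_k - \frac{1}{L}A^\top(Ay_k - b)\big)$.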
https://www.bartleby.com/solution-answer/chapter-19-problem-73e-precalculus-mathematics-for-calculus-6th-edition-6th-edition/9780840068071/ec918ef6-04c7-4004-8500-abbc2685e1b3 | # The graph of the given equation when x lies from 0 to 100
### Precalculus: Mathematics for Calculus
6th Edition
Stewart + 5 others
Publisher: Cengage Learning
ISBN: 9780840068071
#### Solutions
Chapter 1.9, Problem 73E
(a)
To determine
## To sketch: the graph of the given equation when x lies from 0 to 100.
The graph shows that the greater the height of the person above sea level, the farther the person can see.
### Explanation of Solution
Given:
$y=\sqrt{1.5x+(x/5280)^2}$.
Concept used:
Desmos graphing calculator is used here to plot the graph.
Calculation:
Work as shown below, follow the steps:
(a) Graph the equation $y=\sqrt{1.5x+(x/5280)^2}$ on a graphing calculator, as shown in the following picture: [Figure: graph of the equation for x from 0 to 100]
The graph shows that the greater the height of the person above sea level, the farther the person can see.
(b)
To determine
### To find: the height of the person above sea level when the person is able to see 10 mi.
The two graphs intersect at the point (66.667, 10), so for someone to be able to see 10 mi they must be x = 66.667 ft above sea level.
### Explanation of Solution
Given: $y=\sqrt{1.5x+(x/5280)^2}$.
Concept used:
Desmos graphing calculator is used here to plot the graph.
Calculation:
(b) In order to answer this question, we must find the value of x such that y = 10 mi.
In the graph of this equation above, graph the horizontal line y = 10, as shown in the following picture: [Figure: the two graphs and their intersection at (66.667, 10)]
The two graphs intersect at the point (66.667, 10), so for someone to be able to see 10 mi they must be x = 66.667 ft above sea level.
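A quick numerical check of this intersection (not part of the textbook solution) can be done by bisection:

```cpp
#include <cmath>
#include <cstdio>

// Quick numerical check, not part of the textbook solution: solve
// sqrt(1.5x + (x/5280)^2) = 10 for x by bisection on [0, 1000].
int main() {
    auto f = [](double x) {
        return std::sqrt(1.5 * x + std::pow(x / 5280.0, 2)) - 10.0;
    };
    double lo = 0.0, hi = 1000.0;        // f(lo) < 0 < f(hi), so a root is bracketed
    for (int i = 0; i < 60; ++i) {
        double mid = 0.5 * (lo + hi);
        (f(lo) * f(mid) <= 0.0 ? hi : lo) = mid;
    }
    std::printf("x ~ %.3f ft\n", 0.5 * (lo + hi));  // prints x ~ 66.666
    return 0;
}
```

This matches the intersection point (66.667, 10) read off the graph.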
https://www.aquemy.info/2021-01-05-numerical-errors-propagation-control.html | # On controlling the propagation of numerical errors
### 2021-01-05
#Computer Science
### Introduction
Numerical computations are always tainted by errors. A typical example is when scientists simulate a physical system by a numerical method for which a mathematical study gives a bound on the error depending on some parameters. For example, with the Euler method, the error compared to the real solution decreases with the time step. If we did the numerical computation by hand, a time step that goes towards $$0$$ would allow us to find the exact solution. Obviously, it is impossible, even for a machine, to have a time step of $$0$$, but one might think that reducing it as much as our time budget or computing power allows is a good thing. Absolutely not!
Indeed, a second category of errors, not connected to the method but much more general, is the representation error. The problem is far more basic, almost crude in the simplicity of its statement: it is impossible to represent an infinite quantity in a memory space of finite size.
This leads us to consider the fact that, whatever the representation we chose, i.e., the way of translating a real number for the hardware, there will always exist numbers for which the representation will not be possible. An army of engineers and researchers worked to find and formalize a practical and intelligent representation, as a compromise between precision and ease of handling. This standardization process resulted in the IEEE-754 standard followed by most of the world’s computer hardware.
Warning
While a general reminder on the real numbers representation is provided, it is advised to have some notions about the representation of floating point numbers and basic notions of probabilities, in particular, on the construction of confidence intervals, in order to approach the theoretical part.
A question that naturally arises is: can I numerically control the errors which are induced by the representation error and then propagated during the calculation? This is what we will try to answer positively thanks to the CESTAC method, which stands for STochastic Control and Estimation of Rounding of Calculations (fr.: Contrôle et Estimation STochastique des Arrondis de Calculs).
An implementation of the method described in this article can be found in the following repository:
### Some floating point arithmetic reminders
Let us first see how to represent a real number in scientific notation and in an arbitrary base.
We denote by $$b$$ the base of the arithmetic in which we will work, with generally $$b=2$$ or $$b = 16$$ for modern computation units. Then, any number $$x \in \mathbb{R}$$ can be written as:
$x = \pm mb^e$

With $$\frac{1}{b} \leq m < 1$$ and $$m$$ the significand, possibly having an infinite number of digits after the decimal point, and $$e$$ an exponent which is an integer expressed in base $$b$$.
We can rewrite the significand in base $$b$$ as $$m = \sum_{i=1}^{n} m_ib^{-i}$$, $$0 \leq m_i < b$$, with $$n \in \mathbb{N} \cup \{+\infty\}$$.
Example:
We consider $$x = 0.1_{10}$$ that we would like to express using this representation. It is enough to write $$x = 0.1 \times 10^0$$. Now, if we want to express $$x$$ using base $$2$$, things are more complicated because the significand does not have a finite number of digits! Indeed, by repeatedly multiplying the fractional part by $$2$$, we find that $$0.1_{10} = 0.000110011001100..._2$$!
As we mentioned in the introduction, since a computer has only a finite amount of memory, it is impossible for it to store an infinite amount of information. Worse, whatever the base $$b$$ chosen, there exists an infinite number of numbers whose representation includes a significand having an infinite number of digits1. In other words, it is impossible to perfectly represent the set of reals with a computer. Real numbers are generally approximated by numbers called floating point numbers.
Thus, for a machine, a real number $$x$$ is represented by a floating point number $$X$$ which can itself be written as follows:
$X = \pm Mb^E$
With $$\frac{1}{b} \leq M < 1$$ and $$M$$ the significand encoded on a finite number of digits $$n$$, and $$E$$ the exponent, also encoded on a finite number of digits. We can therefore write $$M$$ in base $$b$$ as $$\sum_{i=1}^n M_ib^{-i}, 0 \leq M_i < b$$, where this time the number of elements to be summed is always finite.
As these are only reminders, I am not going into all the intricacies of the IEEE-754 standard, and these explanations are sufficient to continue the article.
We consider the assignment operation ($$:=$$): $$\mathbb {R} \to \mathcal {F}$$, where $$\mathcal{F}$$ is the set of floating point numbers. That is to say the operation which associates to a real number its machine representation.
To concretely illustrate the banality of the thing via C++:
```cpp
double x = 0.1;
```
For a given real $$x$$, there exists a float $$X^+$$ and a float $$X^-$$ such that $$X^- \leq x \leq X^+$$ and such that there is no float $$Y$$ and $$Z$$ such that $$X^- < Y < x < Z < X^+$$. In other words, $$X ^ +$$ and $$X ^-$$ are the floats immediately greater than and less than $$x$$.
If $$x$$ is representable in an exact way, then we have equality between the three terms and the assignment operation $$X: = x$$ is unambiguously defined.
If this is not the case, as in the above example with $$0.1$$ in base 2, we must choose a representative between $$X^+$$ and $$X^-$$. At first sight, none of them are more legitimate to represent $$x$$.
This is where the IEEE-754 standard comes in to offer four rounding modes to remove ambiguity about the assignment operation. Here is a brief description:
• Rounding towards $$+ \infty$$ (or by excess): we return $$X ^ +$$ except if $$x = X ^ -$$.
• Rounding to $$- \infty$$ (or by default): we return $$X ^ -$$ unless $$x = X ^ +$$.
• Round to $$0$$: returns $$X ^ -$$ for positives and $$X ^ +$$ for negatives.
• Rounding to nearest: returns the machine number closest to $$x$$.
An essential property of the IEEE-754 standard is that it guarantees that the result of a floating point operation is equal to the result of the corresponding real operation to which the rounding mode is applied right after. In other words, if we choose a rounding mode $$\text{Arr}$$, $$a$$ and $$b$$ two real numbers whose floating point representations are $$A$$ and $$B$$, $$+$$ a real operation, and $$\oplus$$ its machine counterpart, then the standard guarantees us that $$A \oplus B = Arr(A + B)$$.
This property, known as correct rounding, is essential because it makes it possible to reason formally about a numerical algorithm and to obtain proven bounds for numerical results.
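To see the rounding modes in action, here is a small sketch (assuming an IEEE-754 platform; whether the dynamic rounding mode is honored can depend on compiler flags, e.g. GCC's -frounding-math):

```cpp
#include <cfenv>
#include <cstdio>

// Sketch: the same machine division under the four IEEE-754 rounding modes.
int main() {
    struct { int mode; const char* name; } modes[] = {
        {FE_TONEAREST, "to nearest"}, {FE_UPWARD, "towards +inf"},
        {FE_DOWNWARD, "towards -inf"}, {FE_TOWARDZERO, "towards 0"},
    };
    for (auto& m : modes) {
        std::fesetround(m.mode);
        volatile float one = 1.0f, ten = 10.0f;  // volatile: keep the division at run time
        std::printf("%-12s : %.10f\n", m.name, (double)(one / ten));
    }
    return 0;
}
```

Since $$0.1$$ is not representable in base 2, the modes rounding downwards or towards 0 print the float $$X^-$$ while the others print $$X^+$$.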
Finally, we can define the relative assignment error by the following formula: $$\alpha = \frac {X - x}{X}$$. It is precisely this initial error of representation which propagates during the computation.
### The different problems faced with numerical computation
As mentioned earlier, any algorithm that performs floating point computations gives an error-ridden result. When an algorithm is finite2, then the numerical error is the result of the propagation of rounding or truncation errors during floating point operations.
In the case of iterative algorithms, for example Newton’s method, it is also necessary to stop the algorithm after a certain number of iterations, optimal if possible, which is a problem considering that:
• if the algorithm is stopped too early, the solution obtained will not be a good approximation of the real solution. It is the rate of convergence which informs us about this, that is to say the mathematics behind a specific method;
• if the algorithm is stopped too late, additional steps will not bring more precision to the solution obtained, and worse, the propagation of errors can degrade the solution, until, for pathological cases, returning a result which has no meaning.
What is important here is that given an iterative numerical method, the objectives of the mathematician and the engineer are in a way opposed: the former would like to continue to iterate as much as possible (that is, as long as the time constraints allow it) because he knows that this leads to a better solution in theory, while the engineer tells us that we must stop at some point.
In reality, there are at least four interesting and central issues:
1. For the mathematician: given some hardware and a system of representation, how can I obtain a better approximation of the solution to my problem?
Answer: find algorithms with a higher convergence rate or methods to speed up convergence! In this regard, one might cite Aitken's Delta-2 method or the $$\epsilon$$-algorithm.
2. For the engineer: for a given algorithm AND some hardware with a system of representation, how can we get a better solution approximation?
Answer: reorganize the operations within the algorithm to limit the error propagation while not changing the convergence rate! A generic technique for reorganizing the terms of a sum is Kahan's summation algorithm (a sketch is given just after this list).
3. For everyone: for a given algorithm and its implementation, how can we get the best out of it?
Answers: Choose the most suited representation of real numbers (symbolic system, decimal32, etc. which generally requires better hardware performance) or increase the encoding size of reals (from simple to double precision or quadruple precision, etc. which only consists in increasing the number of bits to encode the significand and the exponent to represent real numbers, which again requires better hardware performance).
4. For the numericist: how to determine the optimal number of iterations to be performed by an algorithm, whatever the input data? How far is my numeric solution from its real equivalent?
Answer: Find methods to estimate the numerical precision of a result, which involves estimating the propagation of rounding errors!
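Here is the Kahan summation sketch mentioned in problem 2 (note that aggressive optimizations such as -ffast-math can defeat it, since the compensation term is algebraically zero):

```cpp
#include <vector>

// A minimal sketch of Kahan's compensated summation: the rounding error lost at
// each addition is estimated and fed back into the next term.
double kahan_sum(const std::vector<double>& values) {
    double sum = 0.0;
    double c = 0.0;                 // running compensation for lost low-order bits
    for (double x : values) {
        double y = x - c;           // apply the correction from the previous step
        double t = sum + y;         // big + small: low-order digits of y are lost here
        c = (t - sum) - y;          // algebraically zero; numerically, the lost part
        sum = t;
    }
    return sum;
}
```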
The CESTAC method attempts to solve the last problem and will be presented in the following section. However, as we will see below, we cannot do it without some knowledge about the other problems.
Question: Are you not exaggerating the computation errors a little and is it not ultimately just some considerations for researchers with long grey beards? Are mistakes so common and so important? From what I’ve read, the IEEE-754 standard allows precision to the order of $$10^{-15}$$ in double precision so my results are at least as good right?
No, yes, and no. Let us take a pathological case whose statement is extremely simple: $$x_n = ax_{n-1} - b$$. Here is also a C++ implementation with a particular initialization:
```cpp
#include <iostream>
#include <iomanip>
#include <limits>

int main() {
    using namespace std;
    double b = 4095.1;
    double a = b + 1;
    double x = 1;
    for (int i = 0; i < 100; ++i) {
        x = (a * x) - b;
        cout << "iter " << i << " - "
             << setprecision(numeric_limits<double>::max_digits10) << x << '\n';
    }
    return 0;
}
```
Which gives the output:

```
iter 0 - 1.0000000000004547
iter 1 - 1.0000000018630999
iter 2 - 1.0000076314440776
iter 3 - 1.0312591580864137
iter 4 - 129.04063743775941
iter 5 - 524468.25500880636
iter 6 - 2148270324.2415719
iter 7 - 8799530071030.8047
...
iter 88 - 3.519444240677161e+305
iter 89 - inf
```
While the expected mathematical result is $$1$$, constant for each iteration, we observe that after a few iterations on the machine there is a fast divergence towards infinity. After only 4 iterations, the number of significant digits shared between the exact result and its floating point counterpart is 0!
### Stochastic control and estimation of rounding
#### CESTAC core and error propagation
Floating point arithmetic on $\mathcal{F}$ is neither associative nor distributive. In other words, the order in which we perform our arithmetic operations has an impact on the result.
The correct rounding property ensures the commutativity.
From now on, consider $$f$$, a procedure acting on $$\mathbb {R}$$, and its image $$F$$, a procedure acting on $$\mathcal{F}$$. Because of the non-associativity, the image of $$f$$ is actually not unique and there are several procedures which mathematically transcribe $$f$$ exactly. And obviously, these procedures, due to the roundings, will not return the same float.
Example:
Let the following function be $$f(x) = x^2 + x +1$$. The most naive function on $$\mathcal{F}$$ would be $$F_1(X) = (X^2 + X) + 1$$ but we could also have $$F_2(X) = X^2 + (X + 1)$$ or even $$F_3 (X) = X (X + 1) + 1$$. It is obvious that on $$\mathbb{R}$$ all these procedures are exactly the same because they return exactly the same result thanks to the properties of associativity and distributivity. On the other hand, this is not the case if we work on floats because all the intermediate results will be rounded. Thus, for a fixed $$X$$, it is quite possible that $$F_1(X) \neq F_2(X) \neq F_3(X)$$.
Numeric Example:
Consider floating-point numbers with 6 digits of precision. Consider $$x = 1.23456 \times 10^{-3}$$, $$y = 1.00000 \times 10^{0}$$ and $$z = -y$$. If we perform the calculation $$(x + y) + z$$, we find $$1.23000 \times 10^{- 3}$$, however, the calculation $$x + (y + z)$$ will give $$1.23456 \times 10^{-3}$$. We can therefore see that the order of operations matters.
As in practice, the algorithms are a succession of small computation steps, as illustrated above on the evaluation of a polynomial, the computation will propagate the errors operation after operation. In optimistic scenarios, the errors compensate each other or are too small and the result is remarkably close to what the precision of the representation allows (however, it is impossible to exceed 16 significant digits in double precision, by definition!), but in the worst case scenario, the result can be totally irrelevant.
Example:
Propagation of the addition error. Let us consider $$x$$ and $$y$$ two reals and their machine representations $$X$$ and $$Y$$, respectively tainted with an error $$\epsilon_x$$ and $$\epsilon_y$$. What happens when we add them?
$X + Y = x + \epsilon_x + y + \epsilon_y = (x+y) + \epsilon_x + \epsilon_y$
The errors are added to each other and add to the exact result $$x + y$$. If we add a third float to this result, we will get a new error term, etc. The result obtained will be even further from the exact result as the sum of the errors will not be negligible compared to the exact terms (here $$x$$ and $$y$$).
In summary, from a procedure $$f$$ over the field of real numbers, there are several procedures $$(F_i)_{0 \leq i <K}$$ that we can obtain by permuting the elementary operations, which theoretically offer $$K$$ different results in the worst case. On top of that, the chosen rounding mode perturbs each result, which further exacerbates the worst case 3.
The main idea behind CESTAC is to take advantage of the great variability of the results that can be obtained by a sequence of numerical computations. For this, we use some perturbations on the result of an operation and some permutations of the operands, in order to estimate the number of significant figures of a numerical result. By propagating the numerical errors in different random ways, we will be able to distinguish the variable part (the part tainted with errors, i.e., not representative) from the fixed part (the exact part).
#### Finding the number of significant digits
If we have a procedure $$F$$ that we run $$N$$ times with a random perturbation and permutation each time, we get a sample $$R = (R_0, R_1, ..., R_{N-1})$$ of results. We can therefore consider $$F$$ as a random variable with values in $$\mathcal{F}$$, with an expected value $$\mu$$ and a standard deviation $$\sigma$$. The expected value $$\mu$$ can be interpreted as the expected result of the algorithm, i.e. the floating point number that encodes our real solution $$r$$. The error compared to this expectation, that is to say $$\alpha = |r - \mu|$$, is the loss of precision that one is entitled to expect from performing numerical computations in floating point. The problem is that $$\mu$$ is not known, and therefore, we have to estimate it.
In this context, with the hypothesis that the elements of $$R$$ come from a Gaussian distribution (which is verified in practice), the best estimator of $$\mu$$ is the mean of the sample $$R$$: $$\bar R = \frac{1}{N} \sum_{i=1}^{N} R_i$$
Similarly, the best estimator of $$\sigma^2$$ is given by: $$S^2 = \frac{1}{N-1} \sum_{i=1}^{N} (R_i - \bar R)^2$$
A classic use of the central limit theorem allows us to build a confidence interval for the exact value $$r$$ for a threshold $$p$$:
$\mathbb{P}\,(r \in [\bar R - t_{N-1}(p) \frac{S}{\sqrt{N}}; \bar R + t_{N-1}(p) \frac{S}{\sqrt{N}}]) = 1 - p$
Where $$t_{N-1}(p)$$ is the critical value (quantile) of Student's $$t$$ distribution for $$(N-1)$$ degrees of freedom and a threshold of $$p$$.
From this interval, it is possible to calculate the number of significant digits $$C$$ of our estimator $$\bar R$$:
$C_{\bar R} = \log_{10}(\frac {|\bar R|} {S}) - K(N, p)$
where $$K$$ depends only on $$N$$ and $$p$$, and such it that tends towards $$0$$ with $$N$$ increasing. The value of $$p$$ is fixed in practice at $$0.05$$, which makes it possible to obtain a confidence interval of $$95\%$$. Here is now the value of $$K$$ obtained as a function of $$N$$, for $$p = 0.05$$:
| $$N$$ | $$K$$ |
| --- | --- |
| 2 | 1.25 |
| 3 | 0.396 |
This may seem surprising, but using a sample of size $$N = 3$$ results in $$K$$ being less than $$1$$, i.e., on average, less than one significant digit is lost for the sample $$R$$. In fact, increasing this number is useless: because the length of the interval evolves in $$\frac{1}{\sqrt{N}}$$, obtaining an additional significant digit would require multiplying $$N$$ by 100 (because of the $$\log_{10}$$)!
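As a minimal sketch, the formula above translates directly into code (here hard-coding $$K = 0.396$$ for $$N = 3$$, $$p = 0.05$$, from the table above):

```cpp
#include <cmath>
#include <vector>

// Sketch: estimated number of exact significant digits of the mean of a CESTAC
// sample, for N = 3 and p = 0.05 (so K = 0.396).
double significant_digits(const std::vector<double>& R) {
    const double K = 0.396;
    double mean = 0.0;
    for (double r : R) mean += r;
    mean /= R.size();
    double s2 = 0.0;
    for (double r : R) s2 += (r - mean) * (r - mean);
    s2 /= (R.size() - 1);                 // unbiased estimator of sigma^2
    if (s2 == 0.0) return 15.9;           // identical samples: full double precision
    return std::log10(std::fabs(mean) / std::sqrt(s2)) - K;
}
```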
#### Constructing the sample $$R$$
Now that we have the theory, we need to know how to construct a sample of results $$R$$ that is as representative as possible of the multitude of results obtainable from our procedure $$F$$.
For that, we have a perturbation function, pert, which for a particular float $$X$$, returns a disturbed float $$X'$$ such that $$X'$$ is $$X$$ for which we modified the last bit of its significand in a random and uniform way. In other words, we add to the last bit of significand $$-1$$, $$0$$ or $$1$$ with a probability of $$\frac 1 3$$.
This perturbation consists in choosing randomly among $$X^+$$ and $$X^-$$, which we mentioned in the first part, in order to simulate the propagation of rounding errors.
We use pert whenever an assignment is made, whether it is an initial assignment as a floating variable declaration, or the result of multiple computations.
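A possible implementation of pert, using std::nextafter to reach the neighboring floats $$X^-$$ and $$X^+$$:

```cpp
#include <cmath>
#include <random>

// Sketch of pert: returns X^-, X or X^+ with probability 1/3 each, i.e. a random
// -1/0/+1 perturbation of the last bit of the significand.
double pert(double x) {
    static std::mt19937 gen{std::random_device{}()};
    static std::uniform_int_distribution<int> step(-1, 1);
    switch (step(gen)) {
        case -1: return std::nextafter(x, -INFINITY);  // X^-
        case +1: return std::nextafter(x, +INFINITY);  // X^+
        default: return x;
    }
}
```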
We also have a permutation operator, perm, which for each assignment operator will randomly modify the term on the right by performing one of the permutations authorized by associativity and distributivity. In other words, it is a question of choosing between $$F_1$$, $$F_2$$ or $$F_3$$ in the example above.
Remark:
In fact, in theory, we could go further by permuting all the independent operations between them, that is to say, by purely and simply reorganizing the algorithm as much as possible. In practice, it is not done, in particular because it is very complicated for a gain that is not very interesting.
Attention
In real life, we are aware of the various pitfalls posed by the stability of numerical computations and how to overcome them, in particular by correctly organizing our calculations (for example, adding numbers in ascending order limits the propagation of errors), or by using error compensation mechanisms (let us quote for example the Kahan summation algorithm). This is also the objective of problem 2. mentioned above. In fact, from the moment we consciously optimize the order of operations, the use of perm becomes unnecessary because it leads to a wrongly widened confidence interval, and therefore an overestimation of the errors (in addition to a significant additional computational cost).
Thus, we will not consider permutations in the following.
From there, there are two ways to use pert to create a sample $$R$$ and estimate the number of significant digits of a numeric result.
##### Asynchronous version
The asynchronous version consists in performing our perturbations at each assignment, and building our sample $$R$$ as the result of $$N$$ calls to the procedure $$F$$. In other words, the $$N$$ calls are independent, hence the name asynchronous. Once the sample is at our disposal, we compute the number of significant digits a posteriori.
Illustration of the asynchronous method with an iterative sequence defined by $$X_n = F(X_ {n-1})$$ and for initial term $$X_0$$ with $$N = 3$$:
$\begin{matrix} & \nearrow X^0_1 = \text{pert}(F(X_0)) & \to X^0_2 = \text{pert}(F(X^0_1)) & \to \dots \to & X^0_n = \text{pert}(F(X^0_{n-1})) & \searrow & \\ X_0 & \to X^1_1 = \text{pert}(F(X_0)) &\to X^1_2 = \text{pert}(F(X^1_1)) & \to \dots \to & X^1_n = \text{pert}(F(X^1_{n-1})) & \to & C((X_n^i)) \\ & \searrow X^2_1 = \text{pert}(F(X_0)) & \to X^2_2 = \text{pert}(F(X^2_1)) & \to \dots \to & X^2_n = \text{pert}(F(X^2_{n-1})) & \nearrow & \\ \end{matrix}$
Though apparently logical, this method runs into two major problems.
• Since rounding errors do not necessarily propagate in the same way across runs, the executions of the procedure may converge towards different real numbers, in which case the sample is inconsistent. This can happen, for example, when the problem to be solved admits several solutions.
• From one execution to another, since there are almost certainly conditional branches, two results can come from different series of branches because of rounding errors, and it is then not meaningful to compare them.
For these two reasons, the asynchronous version is generally inapplicable.
##### Synchronous version
Conversely, the synchronous version consists in modifying the sample at each assignment operation and using the empirical average as the value for conditional branches [4]. It is possible to give an estimate of the number of significant digits at any time, because the sample is available at all times during the execution. In fact, this answers the two problems of the asynchronous version:
• At each step the result is consistent with itself: the runs cannot drift towards different solutions, since a single value (the empirical average) is ever used for the conditional structures.
• The series of branches will necessarily be unique by construction, which makes the final result consistent.
Illustration of the synchronous method on the same example as before:
$\begin{matrix} & \nearrow X^0_1 = \text{pert}(F(X_0)) & \searrow & & \nearrow X^0_2 = \text{pert}(F(X^0_1)) \searrow & & \nearrow & X^0_n = \text{pert}(F(X^0_{n-1})) & \searrow & \\ X_0 & \to X^1_1 = \text{pert}(F(X_0))& \to & \bar{X_1} &\to X^1_2 = \text{pert}(F(X^1_1)) \to & \dots &\to & X^1_n = \text{pert}(F(X^1_{n-1})) & \to & C((X^i_n))\\ & \searrow X^2_1 = \text{pert}(F(X_0))& \nearrow & & \searrow X^2_2 = \text{pert}(F(X^2_1)) \nearrow & & \searrow & X^2_n = \text{pert}(F(X^2_{n-1})) & \nearrow &\\ \end{matrix}$
Notice the synchronization points after each step, hence the method name.
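As a rough illustration, a synchronous run over a scalar iteration might look as follows in Python (a sketch reusing the hypothetical pert and significant_digits helpers above):

```python
import statistics

def cestac_synchronous(F, x0, steps, N=3):
    """Run N perturbed copies of the iteration together, synchronising
    on the empirical mean after every step (minimal sketch)."""
    sample = [pert(x0) for _ in range(N)]
    for _ in range(steps):
        x_bar = statistics.fmean(sample)   # synchronisation point:
        # any conditional branch inside F should consult x_bar,
        # never an individual sample[i]
        sample = [pert(F(x)) for x in sample]
    return sample, significant_digits(sample)
```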
### CESTAC on an iterative algorithm¶
Now that we know how to determine the number of significant digits of a numerical result, we will focus on the optimal stopping problem. The exact solution to our problem is denoted $$x^*$$. We have an iterative algorithm which, at iteration $$k$$, gives the approximate solution $$x_k$$, and we know that this algorithm converges after a possibly infinite number of steps, that is to say that $$x_k \to x^*$$.
Finally, we have an implementation of our algorithm which at each iteration provides an approximate solution $$X_k$$ tainted by numerical errors.
A classic stopping criterion for iterative algorithms at a given precision is the test $$|| x_k - x_{k-1}|| <\epsilon$$, where $$\epsilon$$ controls the precision. A variant is given by $$|| x_k - x_{k-1} || < || x_k || \epsilon$$. This test is extremely robust in exact arithmetic of infinite precision, since it detects when an iteration no longer brings any significant gain in precision. Conversely, in floating-point arithmetic, since all the $$X_k$$ are tainted with errors, the subtraction of very close terms leads to small values which may not be significant at all.
The worst possible scenario is the following: $$\epsilon$$ is chosen too small and the accumulated computational errors are too large compared to $$\epsilon$$, so the stopping test is never satisfied; the solution then degrades, either converging towards an inconsistent value or diverging outright!
Definition (machine zero):
A value $$X \in \mathcal{F}$$, the result of a numerical computation, with a number $$C$$ of significant digits, is a machine zero if $$X = 0$$ and $$C > 1$$, or if $$X$$ is arbitrary but $$C = 0$$. We denote a machine zero $$\bar 0$$.
Warning
The notion of machine zero should not be confused with the machine epsilon, nor with zero as represented in the IEEE 754 standard.
Since the purpose of CESTAC is to determine the number of significant digits of a result, we can use it to find the machine zeros and modify our stopping test accordingly. At iteration $$k$$ it becomes the following (a minimal code sketch follows the list):
1. If $$C(X_k) = 0$$ and $$X_k \neq 0$$, then the result is tainted with an error greater than its own value and there is no point in continuing: we stop the algorithm.
2. If $$|| X_k - X_{k-1} || = \bar 0$$, we stop the algorithm, since the difference between two iterations only represents computation errors.
3. If the number of iterations exceeds a certain limit $$K$$, the sequence is considered as non-convergent and the algorithm is stopped.
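A minimal Python sketch of this test (building on the helpers sketched above; the budget K and the scalar setting are simplifying assumptions):

```python
import statistics

def should_stop(sample_k, sample_prev, k, K=10_000):
    """CESTAC stopping test at iteration k for a scalar iterate."""
    x_k = statistics.fmean(sample_k)
    if x_k != 0.0 and significant_digits(sample_k) <= 0:
        return True        # 1. result no longer carries significant digits
    diffs = [a - b for a, b in zip(sample_k, sample_prev)]
    if statistics.fmean(diffs) == 0.0 or significant_digits(diffs) <= 0:
        return True        # 2. the increment is a machine zero
    return k >= K          # 3. considered non-convergent past K iterations
```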
### Going further: discussions on the validity of CESTAC¶
Note: This section is intended to discuss in a more advanced way the validity hypothesis and can therefore be put aside for a first reading, especially as it can turn out to be a little more technical.
There are several hypotheses which were formulated in order to arrive at the formula for the number of significant digits, and which lead to the following question: can we reasonably use a Student's test in order to obtain the confidence interval? This is equivalent to asking whether the estimator $$\bar X$$ used is biased or not.
#### The estimator bias¶
Without making a proper demonstration (we refer the reader to the studies by the creators of the CESTAC method), we give the broad outlines justifying that the mean estimator is unbiased.
Theorem:
A result $$X$$ of a perturbated procedure $$F$$ can be written:
$X = x + \sum_{i = 1}^n d_i 2^{-t}(\alpha_i - h_i) + O(2^{-2t})$
Where $$x$$ is the exact result, $$n$$ the total number of assignments and arithmetic operations performed by $$F$$, $$d_i$$ quantities depending only on the data and the procedure $$F$$, $$(\alpha_i)_i$$ the rounding or truncation errors and $$(h_i)_i$$ the perturbations performed.
The bias of the estimator $$\bar X$$ is the quantity $$E[\bar X] - x$$. Assuming that the $$(\alpha_i)_i$$ follow a uniform distribution over the “proper” interval [5], it is enough to correctly choose the $$(h_i)_i$$ to re-center the $$(\alpha_i)_i$$ and thus obtain the following result, neglecting higher-order terms:
$X' = x + \sum_{i = 1}^n d_i 2^{-t} z_i$
Where the $$z_i$$ are identically distributed and centered variables, so that $$E(X') = x$$, i.e. the estimator is unbiased.
The hypothesis on the distribution of the $$\alpha_i$$ is validated when there are enough operations in the procedure $$F$$, in other words when $$n$$ is large enough. In fact, the bias is not exactly zero, but one can show that it is of the order of a few $$\sigma$$, which skews the final estimate by less than one significant digit.
#### Validity of the Student test¶
As we have seen, the hypothesis about the distribution of $$\alpha_i$$ is satisfied in theory and in practice. But on the other hand, to conclude that the estimator is unbiased, we made an additional assumption: the higher order terms are negligible.
However, while it is easy to see that addition or subtraction does not create an error of second order, this is not the case for multiplication or division, since by considering $$X_1 = x_1 + \epsilon_1$$ and $$X_2 = x_2 + \epsilon_2$$, these operators are written:
$X_1X_2 = x_1x_2 + x_1 \epsilon_2 + x_2 \epsilon_1 + \epsilon_1 \epsilon_2$
$\frac{X_1}{X_2} = \frac{x_1}{x_2} + \frac{\epsilon_1}{x_2} - \frac{x_1 \epsilon_2}{x_2^2} + O(\epsilon^2)$
The problem for multiplication is that if the respective errors $$\epsilon$$ of the two operands dominate the exact values $$x$$, then the second-order term becomes dominant. For the division, the higher-order terms become dominant if $$\epsilon_2$$ dominates $$x_2$$.
Consequently, there are two complementary ways to ensure that the hypotheses behind CESTAC are valid:
• Increase the precision of the representation, i.e., encode each real on more bits, which will reduce the $$\epsilon$$. In other words, the answer to problem 3.
• Limit the propagation of calculation errors as much as possible, i.e., apply the recipes in response to problem 2.
One possible systematic approach is to include dynamic control over the outcome of multiplication or division operations, at a significant cost.
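A toy Python illustration of the first remedy, increasing the working precision (the numbers are made up):

```python
import numpy as np

# The same ill-conditioned sum evaluated at two precisions:
x32 = np.float32(1e8) + np.float32(1.0) - np.float32(1e8)
x64 = np.float64(1e8) + np.float64(1.0) - np.float64(1e8)
print(x32, x64)   # 0.0 vs 1.0: the extra significand bits absorb the error
```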
### Conclusion: CESTAC, but for what and for who?¶
We have seen the different problems that appear when we perform floating point computations and we have given a robust technique to control the propagation of errors induced by these computations. After having presented the foundations of the method and the modalities of use, we applied CESTAC to the optimal stopping problem of an iterative algorithm. Finally, a last part was devoted to the discussion on the validity of the method and a sketch of proof.
The only question that has not been addressed, and which will serve as a conclusion, is: when to use CESTAC? It is obvious that the method presents a significant cost in terms of computation time. Therefore, it must be justified by a gain at least as important. The need for evaluation and error control is crucial, for example, for critical systems such as airplanes. For this reason, the method is widely used in the field of aeronautics, to control both simulations but also directly computations within on-board systems.
### Bibliography¶
1. Let us give a succinct proof. By definition, a normal number is a number such that any finite sequence of bits occurs an infinite number of times in the expansion of this number, and all sequences of a given length occur with the same frequency. It is said to be normal in any base if it is normal whatever the base. Thanks to the Borel-Cantelli lemma, one proves that the set of numbers that are not normal in every base has null measure. Therefore, whatever the base, the probability that a number drawn at random on $$\mathbb{R}$$ is normal is 1. $$\square$$
2. where finite is understood as a finite number of steps to find the solution to a problem.
3. I stress that this is the worst case scenario, which is far from being the common practical scenario. However, standard deterministic studies reason mainly about the worst case scenario, which leads to an overestimation of the computation errors, sometimes rendering these methods obsolete. On the contrary, CESTAC makes it possible not to fall into this trap due to the very construction of the method.
4. In practice, one can also systematically use $$X_i$$ for a given $$i$$, which avoids having to calculate the average each time.
5. This interval depends on whether we are in truncation or rounding mode.
https://questions.examside.com/past-years/jee/question/the-radiation-corresponding-to-3-to-2-transition-of-hyd-2014-marks-4-grstpouey8l8ezaw.htm | 1
JEE Main 2014 (Offline)
+4
-1
The radiation corresponding to $$3 \to 2$$ transition of hydrogen atom falls on a metal surface to produce photoelectrons. These electrons are made to enter a magnetic field $$3 \times {10^{ - 4}}\,T.$$ If the radius of the larger circular path followed by these electrons is $$10.0$$ $$mm$$, the work function of the metal is close to:
A
$$1.8$$ $$eV$$
B
$$1.1$$ $$eV$$
C
$$0.8$$ $$eV$$
D
$$1.6$$ $$eV$$
2
JEE Main 2014 (Offline)
+4
-1
Hydrogen $$\left( {{}_1H^1} \right)$$, Deuterium $$\left( {{}_1H^2} \right)$$, singly ionised Helium $${\left( {{}_2He^4} \right)^ + }$$ and doubly ionised lithium $${\left( {{}_3Li^6} \right)^{ + + }}$$ all have one electron around the nucleus. Consider an electron transition from $$n=2$$ to $$n=1.$$ If the wavelengths of emitted radiation are $${\lambda _1},{\lambda _2},{\lambda _3}$$ and $${\lambda _4}$$ respectively then approximately which one of the following is correct?
A
$$4{\lambda _1} = 2{\lambda _2} = 2{\lambda _3} = {\lambda _4}$$
B
$${\lambda _1} = 2{\lambda _2} = 2{\lambda _3} = {\lambda _4}$$
C
$${\lambda _1} = {\lambda _2} = 4{\lambda _3} = 9{\lambda _4}$$
D
$${\lambda _1} = 2{\lambda _2} = 3{\lambda _3} = 4{\lambda _4}$$
3
JEE Main 2013 (Offline)
+4
-1
In a hydrogen-like atom, an electron makes a transition from an energy level with quantum number $$n$$ to another with quantum number $$\left( {n - 1} \right)$$. If $$n > > 1,$$ the frequency of radiation emitted is proportional to :
A
$${1 \over n}$$
B
$${1 \over {{n^2}}}$$
C
$${1 \over {{n^{{3 \over 2}}}}}$$
D
$${1 \over {{n^3}}}$$
4
AIEEE 2012
+4
-1
Hydrogen atom is excited from ground state to another state with principal quantum number equal to $$4.$$ Then the number of spectral lines in the emission spectra will be :
A
$$2$$
B
$$3$$
C
$$5$$
D
$$6$$
http://eprints.iisc.ernet.in/8132/ | # Raman study of the doped fullerene $C_{60}\cdot{TDAE}$
Muthu, DVS and Shashikala, MN and Sood, AK and Seshadri, Ram and Rao, CNR (1994) Raman study of the doped fullerene $C_{60}\cdot{TDAE}$. In: Chemical Physics Letters, 217 (1-2). pp. 146-151.
## Abstract
Raman studies on powder samples of $C_{60}\cdot{TDAE}$ are reported in the temperature range of 296 to 14 K. The strongest line $A_g(2)$ shows a doublet at 1454 and 1463 $cm^{-1}$ whose relative intensities are strongly temperature dependent. The relative intensities of the $H_g(7)$ and $H_g(8)$ modes at 296 K are much higher than those in pure $C_{60}$ and these intensities increase at low temperatures. The phonon softening and broadening are compared with the results on alkali-doped $C_{60}$ and the theoretical calculations.
Item Type: Journal Article
The copyright of this article belongs to Elsevier.
Division of Chemical Sciences > Solid State & Structural Chemistry Unit; Division of Physical & Mathematical Sciences > Physics
07 Sep 2006 - 19 Sep 2010 04:30
http://eprints.iisc.ernet.in/id/eprint/8132
http://tex.stackexchange.com/questions/25510/overbrace-split-accross-multiple-lines | # \overbrace split accross multiple lines
Is there a way to split the overbrace so that it appears in two halves over a multi-line equation. Something like
/-----------Overbrace Label----
Equation over which to place
----------------------\
the overbrace.
-
One approach would be to use the adjustbox package from Martin Scharrer. It provides a \clipbox{<llx> <lly> <urx> <ury>}{<text>} macro for trimming and clipping output. Both \overbraces in each line is clipped - the top from the right by specifying a value for <urx> and bottom from the left by specifying a value for <llx>. The following minimal example highlights this proof of concept:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{multline*}
f(x)=a_0+a_1x+a_2x^2+\clipbox{-2 0 5 0}{$\overbrace{a_3x^3+a_4x^4+\cdots+a_{i-1}x^{i-1}+\hspace{1em}}^{\text{some text}}$} \\[\jot]
\clipbox{100 0 -2 0}{$\overbrace{\phantom{\hspace{10em}}a_ix^i+a_{i+1}x^{i+1}}$}+\cdots+a_{n-1}x^{n-1}
\end{multline*}
\end{document}

Specifying a value of -2 for either <llx> or <urx> in the above code makes sure that the brace end left in view is not potentially trimmed ever so slightly. The other thing to make sure is that the braces are adjusted for the correct clipping; otherwise, for example, the bottom brace will still show a cusp. I'm sure some manual tweaking will provide you with the flavour of multi-line overbrace that you want.

For completeness, here is a description of how to achieve overlapping under-/overbraces if you have an equation on a single line. Reprinting parts of the equation using \phantom can be used so that the horizontal placement is accurate. Here is a minimal working example demonstrating the principle:

\documentclass{article}
\usepackage{mathtools}% For \mathclap, \mathllap, \mathrlap
\begin{document}
\[
f(x)=\mathrlap{\underbrace{\vphantom{a_{n-1}}\phantom{a_0+a_1x+a_2x^2}}_{\text{some text}}}
\mathrlap{\phantom{a_0+a_1x+}\overbrace{\phantom{\:a_2x^2+a_3x^3+\cdots}}^{\text{some other text}}}
a_0+a_1x+a_2x^2+a_3x^3+\underbrace{\cdots+a_{n-1}x^{n-1}}_{\text{some more text}}+a_nx^n
\]
\end{document}
The mathtools package provides similar math mode overlapping macros to the textual \rlap, \clap and \llap counterparts. Respectively, these macros allow right r, center c and left l overlap within math mode. The use of \phantom{...} allows for correct horizontal placement, while \vphantom{...} ensures correct vertical placement across \underbraces.
-
@Caramdir: You're right. I've updated my answer with an attempt at obtaining the OP's request. – Werner Aug 11 '11 at 22:07
I think the question is about splitting the brace over two line and not about meshing over- and underbraces. – Caramdir Aug 12 '11 at 2:49
How did you manage to post that comment 6 hours ago? Time-traveling? :) – Caramdir Aug 12 '11 at 4:13
True. I saw that. I deleted the older "Oooo..." comment and immediately reposted with the update. Don't know what happened. The StackLords must have been at work. – Werner Aug 12 '11 at 4:16
Actually, the comment is older than the question. :) (good answer, btw). – Caramdir Aug 12 '11 at 4:18
https://lavelle.chem.ucla.edu/forum/viewtopic.php?t=59377 | ## Cell Potential
MaryBanh_2K
Posts: 101
Joined: Wed Sep 18, 2019 12:21 am
### Cell Potential
The textbook's definition of cell potential is vague. Is it a measure of how much voltage or electrical energy a cell has? How does this relate to the half-reactions of redox reactions and oxidation? Is the electrical energy caused by the movement or the difference in electrons on both sides?
Fiona Latifi 1A
Posts: 102
Joined: Sat Sep 14, 2019 12:16 am
### Re: Cell Potential
Cell potential is also known as cell voltage. It measures the voltage difference between the two halves of a cell.
Ipsita Srinivas 1K
Posts: 50
Joined: Mon Jun 17, 2019 7:24 am
### Re: Cell Potential
The half redox reactions are related to the cell voltage through the net loss and gain of electrons at the two electrodes. In an electrochemical cell, each electrode has the 'potential' to gain electrons (its reduction potential); the difference in these potentials determines which one will be the anode and which the cathode, and this difference is also the cell potential, measured in volts. Physically, this corresponds to the loss of electrons at the anode (oxidation) and the gain of electrons at the cathode (reduction), which are the two half-reactions.
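For reference (a standard convention, not stated in the thread): with both electrode potentials written as reduction potentials, E°(cell) = E°(cathode) - E°(anode), and a positive E°(cell) corresponds to a spontaneous cell reaction.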
Ipsita Srinivas 1K
Posts: 50
Joined: Mon Jun 17, 2019 7:24 am
### Re: Cell Potential
Electrical energy, or here current flow, is caused by the motion of electrons. The potential is determined by the difference in the electrodes' tendencies to gain electrons.
Mitchell Koss 4G
Posts: 128
Joined: Sat Jul 20, 2019 12:17 am
### Re: Cell Potential
And the potential will depend on the reaction, just as gravitational potential energy depends on mass and distance.
https://math-sciences.org/event/haidar-al-naseri-umea-university-tbc/ | Haidar Al-Naseri (Umeå University) – Collective plasma effects and Schwinger pair creation
October 12, 2022 @ 3:00 pm - 4:00 pm UTC+0
Strong-field physics close to or above the Schwinger limit is typically studied with the vacuum as initial condition [1, 2], or by considering test-particle dynamics. However, with a plasma present initially, quantum relativistic mechanisms such as Schwinger pair creation are complemented by classical plasma nonlinearities.
In this work we use the Dirac-Heisenberg-Wigner formalism [3] to study the interplay between classical and quantum mechanical mechanisms for ultra-strong electric fields. In particular, the effects of initial density and temperature on the plasma oscillation dynamics are determined. In comparison with previous works [4], the physical value of the fine-structure constant has been used.
Venue
Room 101
2-5 Kirkby Place
Plymouth, PL4 6DT United Kingdom
James Edwards
http://mathoverflow.net/questions/105101/estimating-parameters-of-a-mixture-of-normal-distributions/105121 | # Estimating parameters of a mixture of normal distributions.
I want to estimate the parameters $\mu_i$ and $\sigma^2_i$ of a countable mixture of Gaussians with assumed equal weights, equal variance and identically spaced means. I initially thought that the Fourier transform of an iid sample from the mixture mentioned above would give another Gaussian in phase space, but when computed in MATLAB I get a sharply peaked spectral distribution. I used a Gaussian window function, so I don't understand why the power spectral density tends to be unbounded at zero phase.
-
I do not really know whether this applies here, but some years ago I've come across some free software which was called "emmix", which was announced to solve exactly this problem (decomposition of a mix of normal distributions). It came with a good/lot of explanative material, so possibly, if this is still available, it might help to understand the method as well. – Gottfried Helms Aug 20 '12 at 17:30
A countable mixture with equal weights? Unless this "countable" is finite, no such thing exists in the realm of probability measures. – Robert Israel Aug 20 '12 at 18:55
If the Gaussians are constrained to be of equal shapes and distances, then you have a scaled theta function or the convolution of a Gaussian with a sha (Ш) function or scaled Dirac comb. Ш is a sum of equally spaced delta functions, and the Fourier transform of Ш is another Ш function, usually with different spacings and amplitudes depending on your conventions. Since the Fourier transform of a Gaussian is a Gaussian, the Fourier transform of your function is a sum of equally spaced delta functions whose amplitudes sample a Gaussian density, something like $$\beta \sum_{n=-\infty}^\infty e^{-\pi^2 z^2/\sigma - 2 \pi i x_0 z}\delta(z-n/\alpha)$$
where the parameters $\alpha, \beta, \sigma,$ and $x_0$ are determined by the parameters of your distribution and your Fourier conventions.
In particular, there is a delta function at $0$. You should expect this to happen when you have a periodic function whose average over a period isn't $0$.
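As an illustrative numerical check (all parameters below are made up for the sketch), sampling such a mixture and taking the FFT of the estimated density shows the dominant component at zero frequency:

```python
import numpy as np

rng = np.random.default_rng(0)
K, spacing, sigma = 7, 1.0, 0.15                  # hypothetical mixture
means = spacing * (np.arange(K) - K // 2)
x = rng.normal(means[rng.integers(0, K, size=100_000)], sigma)

dens, edges = np.histogram(x, bins=1024, range=(-8.0, 8.0), density=True)
spectrum = np.abs(np.fft.rfft(dens))
print(spectrum[:4])                               # spectrum[0] dominates
```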
https://indico.math.cnrs.fr/event/764/ | Trimestre "Le Monde Quantique"
# Quantum measurements, probabilities and reversibility: some naïve remarks and questions
## by Prof. François David (IPhT, CEA-Saclay)
Europe/Paris
Amphithéâtre Léon Motchane (IHES)
Le Bois-Marie 35, route de Chartres 91440 Bures-sur-Yvette
Description
It is well known that ideal projective measurements are paradigmatic non-deterministic and irreversible processes in Quantum Mechanics. Nevertheless it is also known that the associated probabilities satisfy a time-symmetry property: the conditional probabilities for prediction and retrodiction take the same form. I shall argue that this feature of Quantum Mechanics may be used to discuss it in a more natural way and to present it as a less mysterious theory than is usually done. This will be shown for the Algebraic Formulation and the Quantum Logic Formulation. If time permits, I shall ask some naïve questions about the Quantum Information Formulations and Quantum Gravity.
http://timescalewiki.org/index.php?title=Delta_integral&diff=1363&oldid=1338 | Difference between revisions of "Delta integral"
Let $\mathbb{T}$ be a time scale. Delta integration is defined as the inverse operation of delta differentiation in the sense that if $F^{\Delta}(t)=f(t)$, then $$\displaystyle\int_s^t f(\tau) \Delta \tau = F(t)-F(s).$$
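For example, on the time scale $\mathbb{T}=\mathbb{Z}$ one has $F^{\Delta}(t)=F(t+1)-F(t)$, so for integers $s<t$ the delta integral reduces to a plain sum, $$\displaystyle\int_s^t f(\tau) \Delta \tau = \sum_{\tau=s}^{t-1} f(\tau),$$ which telescopes back to $F(t)-F(s)$.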
https://www.physicsforums.com/threads/two-dimensional-oscillators.711953/ | # Two-dimensional oscillators
1. Sep 22, 2013
### forestmine
1. The problem statement, all variables and given/known data
A puck with mass m sits on a horizontal, frictionless table attached to four identical springs (constant k and unstretched length $l_0$). The initial lengths a of the springs are not equal to the unstretched lengths. Find the potential for small displacements x, y and show that it has the form $\frac{1}{2}k'r^{2}$ with $r^{2} = x^{2} + y^{2}$.
2. Relevant equations
3. The attempt at a solution
I'm honestly at a loss with this problem. I know that the total force is $F=-kr$, where $F_{x} = -kx$ and $F_{y} = -ky$.
I also know that my potential is minus the gradient of the force. If I were to take the gradient of F, where F = -kx(i) -ky(j), I get F= -k(i) -k(j). Not really sure where to go from here, or if I'm on the right track for that matter.
I'm obviously not looking for the answer, just some help in the right direction. I don't think I fully understand it conceptually to be able to work it analytically. Any tips would be greatly appreciated!
EDIT// Here's some work I've done since posting. Still unsure of how to continue.
To account for all possible positions of the spring,
r1^2=x^2 + (a-y)^2
r2^2=x^2 + (a+y)^2
r3^2=(x+a)^2 + y^2
r4^2=(x-a)^2 + y^2
Now, r will be some variation of the above, so I summed the above 4 equations. I would then think to use U=1/2 * k * r^2, where r^2 is the sum of the above equations. I feel like I'm missing something, though.
EDIT 2// In fact, I definitely don't feel as though that's suitable, because it says that the potential is not at all dependent on l, which of course is false.
Last edited: Sep 22, 2013
2. Sep 22, 2013
### fzero
You've got things backwards: the force is the gradient of the potential, $\mathbf{F} = - \nabla U$. Also, the forces should involve the quantity $a-l_0$.
3. Sep 22, 2013
### forestmine
Whoops, that was a bad mistake. Thanks for catching that.
Though I'm still not really sure what to do. My potential can then be written in terms of x and y components, right?
For instance,
U_x = 1/2 * k * rx^2, but I don't really see what r ought to be. I understand that there should be some dependence on l_0, but I honestly have no idea how. Unless we can essentially rewrite the equations for r_n, with (a-l_0)^2, (a+l_0)^2, etc.
Sorry, I'm so lost on this problem. I know that there are two methods, one of which involves a taylor expansion, and the other, a second derivative of the potential function, but again, I don't even know what my potential function looks like in this case.
4. Sep 23, 2013
### vela
Staff Emeritus
Set up the coordinate system so that the mass is sitting at the origin when it's in equilibrium and the springs lie along the axes. The end of the springs lying on the +x axis will be at the point (a,0). What's the potential energy in the spring when the mass is at (0,0)? What's the potential energy in the spring when the mass is at the point (x,y)?
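A quick symbolic check of this hint, sketched with sympy (the spring layout, with fixed ends at (±a, 0) and (0, ±a), is taken from the problem statement; everything else is illustrative):

```python
import sympy as sp

x, y, a, l0, k, eps = sp.symbols("x y a l0 k eps", positive=True)
lengths = [sp.sqrt((a - x)**2 + y**2), sp.sqrt((a + x)**2 + y**2),
           sp.sqrt(x**2 + (a - y)**2), sp.sqrt(x**2 + (a + y)**2)]
U = k / 2 * sum((L - l0)**2 for L in lengths)

# Expand about the equilibrium (0, 0) to second order in the displacements:
U2 = sp.series(U.subs({x: eps * x, y: eps * y}), eps, 0, 3).removeO().subs(eps, 1)
print(sp.expand(sp.simplify(U2)))   # a constant plus a term in x**2 + y**2
```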
http://www.clayford.net/statistics/unconditional-multilevel-models-for-change-ch-4-of-alda/ | # Unconditional Multilevel Models for Change (Ch 4 of ALDA)
In Chapter 4 (section 4.4) of Applied Longitudinal Data Analysis (ALDA), Singer and Willett recommend fitting two simple unconditional models before you begin multilevel model building in earnest. These two models “allow you to establish: (1) whether there is systematic variation in your outcome that is worth exploring; and (2) where that variation resides (within or between people).” (p. 92) This is a great idea. Why start building models if there is no variation to explain? In this post I want to summarize these two models for reference purposes.
Model 1: The Unconditional Means Model
• The keyword here is “means”. This model has one fixed effect that estimates the grand mean of the response across all occasions and individuals.
• The main reason to fit this model is to examine the random effects (i.e., the within-person and between-person variance components). This tells us the amount of variation that exists at the within-person level and the between-person level.
• Model specification: $Y_{ij} = \gamma_{00} + \zeta_{0i} + \epsilon_{ij}$
• $\gamma_{00}$ = grand mean (fixed effect)
• $\zeta_{0i}$ = the amount person i’s mean deviates from the population mean (between-person)
• $\epsilon_{ij}$ = the amount the response on occasion j deviates from person i’s mean (within-person)
• $\epsilon_{ij} \sim N(0,\sigma_{\epsilon}^{2})$
• $\zeta_{0i} \sim N(0, \sigma_{0}^{2})$
• Use the intraclass correlation coefficient to describe the proportion of the total outcome variation that lies “between” people: $\rho = \sigma_{0}^{2} / (\sigma_{0}^{2} + \sigma_{\epsilon}^{2})$
• In the unconditional means model the intraclass correlation coefficient is also the “error autocorrelation coefficient”, which estimates the average correlation between any pair of composite residuals: $\zeta_{0i} + \epsilon_{ij}$
• Sample R code for fitting the unconditional means model (where “id” = person-level grouping indicator):
library(nlme)
lme(response ~ 1, data=dataset, random= ~ 1 | id)
Or this:
library(lme4)
lmer(response ~ 1 + (1 | id), dataset)
To replicate the Unconditional Means Model example in ALDA, the UCLA stats page suggests the following:
alcohol1 <- read.table("http://www.ats.ucla.edu/stat/r/examples/alda/data/alcohol1_pp.txt",
header = TRUE, sep = ",") # header/sep arguments assumed; the original call was truncated
library(nlme)
model.a <- lme(alcuse~ 1, alcohol1, random= ~1 |id)
summary(model.a)
This works OK, but returns slightly different results because it fits the model using REML (Restricted Maximum Likelihood) instead of ML (Maximum Likelihood). It also does not return the estimated between-person variance $\sigma_{0}^{2}$. We can "fix" the first issue by including the argument method="ML". There doesn't appear to be anything we can do about the second. However, the lmer() function allows us to replicate the example and obtain the same results presented in the book, as follows (notice we have to specify ML implicitly with the argument REML = FALSE):
model.a1 <- lmer(alcuse ~ 1 + (1 | id), alcohol1, REML = FALSE)
summary(model.a1)
The output provides the values discussed in the book in the "Random effects" section under the variance column:
> summary(model.a1)
Linear mixed model fit by maximum likelihood
Formula: alcuse ~ 1 + (1 | id)
Data: alcohol1
AIC BIC logLik deviance REMLdev
676.2 686.7 -335.1 670.2 673
Random effects:
Groups Name Variance Std.Dev.
id (Intercept) 0.56386 0.75091
Residual 0.56175 0.74950
Number of obs: 246, groups: id, 82
Fixed effects:
Estimate Std. Error t value
(Intercept) 0.9220 0.0957 9.633
The "Random effect" id has variance = 0.564. That's the between-person variance. The "Random effect" Residual has variance = 0.562. That's the within-person variance. We can access these values using "summary(model.a1)@REmat" and calculate the intraclass correlation coefficient like so:
icc_n <- as.numeric(summary(model.a1)@REmat[1,3])
icc_d <- as.numeric(summary(model.a1)@REmat[1,3]) +
as.numeric(summary(model.a1)@REmat[2,3])
icc_n / icc_d
[1] 0.5009373
Model 2: The Unconditional Growth Model
• This model partitions and quantifies variance across people and time.
• The fixed effects estimate the starting point and slope of the population average change trajectory.
• Model specification: $Y_{ij} = \gamma_{00} + \gamma_{10}*time_{ij} + \zeta_{0i} + \zeta_{1i}*time_{ij} + \epsilon_{ij}$
• $\gamma_{00}$ = average intercept (fixed effect)
• $\gamma_{10}$ = average slope (fixed effect)
• $\zeta_{0i}$ = the amount person i's intercept deviates from the population intercept
• $\zeta_{1i}$ = the amount person i's slope deviates from the population slope
• $\epsilon_{ij}$ = the amount the response on occasion j deviates from person i's true change trajectory
• $\epsilon_{ij} \sim N(0,\sigma_{\epsilon}^{2})$
• $\zeta_{0i} \sim N(0, \sigma_{0}^{2})$
• $\zeta_{1i} \sim N(0, \sigma_{1}^{2})$
• $\zeta_{0i}$ and $\zeta_{1i}$ have covariance $\sigma_{01}$
• The residual variance $\sigma_{\epsilon}^{2}$ summarizes the average scatter of an individual's observed outcome values around his/her own true change trajectory. Compare this to the same value in the unconditional means model to see if within-person variation is systematically associated with linear time.
• The level-2 variance components, $\sigma_{0}^{2}$ and $\sigma_{1}^{2}$ quantify the unpredicted variability in the intercept and slope of individuals. That is, they assess the scatter of a person's intercept and slope about the population average change trajectory. DO NOT compare to the same values in the unconditional means model since they have a different interpretation.
• The level-2 covariance $\sigma_{01}$ quantifies the population covariance between the true initial status (intercept) and true change (slope). Interpretation is easier if we re-express the covariance as a correlation coefficient: $\hat{\rho}_{01} = \hat{\sigma}_{01} / \sqrt{\hat{\sigma}_{0}^{2}\hat{\sigma}_{1}^{2}}$
• Sample R code for fitting the unconditional growth model (where "id" = person-level grouping indicator):
lme(response ~ time , data=dataset, random= ~ time | id)
Or this:
lmer(alcuse ~ time + (time | id), dataset)
To replicate the Unconditional Growth Model example in ALDA, the UCLA stats page suggests the following:
alcohol1 <- read.table("http://www.ats.ucla.edu/stat/r/examples/alda/data/alcohol1_pp.txt",
header = TRUE, sep = ",") # header/sep arguments assumed; the original call was truncated
library(nlme)
model.b <- lme(alcuse ~ age_14 , data=alcohol1, random= ~ age_14 | id, method="ML")
summary(model.b)
However I think the following is better as it gives the same values in the book:
model.b1 <- lmer(alcuse ~ age_14 + (age_14 | id), alcohol1, REML = FALSE)
summary(model.b1)
For instance it provides variance values instead of standard deviation values. It doesn't really matter in the long run, but it makes it easier to quickly check your work against the book. Here's the output:
> summary(model.b1)
Linear mixed model fit by maximum likelihood
Formula: alcuse ~ age_14 + (age_14 | id)
Data: alcohol1
AIC BIC logLik deviance REMLdev
648.6 669.6 -318.3 636.6 643.2
Random effects:
Groups Name Variance Std.Dev. Corr
id (Intercept) 0.62436 0.79017
age_14 0.15120 0.38885 -0.223
Residual 0.33729 0.58077
Number of obs: 246, groups: id, 82
Fixed effects:
Estimate Std. Error t value
(Intercept) 0.65130 0.10508 6.198
age_14 0.27065 0.06246 4.334
Correlation of Fixed Effects:
(Intr)
age_14 -0.441
Again the main section to review is the "Random effects". The Residual variance (within-person) has decreased to 0.337 from 0.562. We can conclude that $(0.562 - 0.337)/0.562 = 0.40$ (i.e., 40%) of the within-person variation in the response is systematically associated with linear time. We also see the negative correlation (-0.223) between the true initial status (intercept) and true change (slope). However, the book notes this correlation is not statistically significant. As you can see this is not something the output of the lmer object reports. The book mentions in chapter 3 (p. 73) that statisticians disagree about the effectiveness of such significance tests on variance components, and I can only assume the authors of the lme4 package question their use. Finally, we notice the level-2 variance components: 0.624 and 0.151. These provide a benchmark for predictors' effects as the authors proceed to build models.
http://doc.rero.ch/record/28492 | Faculté des sciences
## Fourier analysis of Ramsey fringes observed in a continuous atomic fountain for in situ magnetometry
### In: The European Physical Journal Applied Physics, 2011, vol. 56, no. 11001, p. 1-9
Ramsey fringes observed in an atomic fountain are formed by the superposition of the individual atomic signals. Due to the atomic beam residual temperature, the atoms have slightly different trajectories and thus are exposed to a different average magnetic field, and a velocity-dependent Ramsey interaction time. As a consequence, both the velocity distribution and magnetic field profile are...
https://www.physicsforums.com/threads/unintegrateable-fn-s.218204/ | # Unintegrateable fn.[s]
1. Feb 26, 2008
### rock.freak667
Consider
$$\int e^{-x^2} dx$$
if that can't be expressed in terms of elementary functions, how did they compute
$$\int ^{\infty} _{- \infty} e^{-x^2} dx =\sqrt{\pi}$$
(I think I have the limits wrong, but I know it has $\infty$ as the upper or lower limit)
2. Feb 26, 2008
### sutupidmath
Well, one way of doing so is, I guess, expanding $$e^{-x^{2}}$$ as a power series, using Taylor series. But I also think one can compute it using double integrals. I have just heard about this though, since I have no idea how to deal with double integrals yet!
3. Feb 26, 2008
$$I = \int_{-\infty}^{\infty}e^{-x^2}dx$$
$$I^2 = \int_{-\infty}^{\infty}e^{-x^2} dx\times \int_{-\infty}^{\infty} e^{-y^2}dy$$
$$I^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}dxdy$$
$$I^2 = \int_{0}^{2\pi} \int_{0}^{\infty} e^{-(r^2)}rdrd\theta$$
$$I^2 = \pi - \pi e^{-\infty} = \pi \quad\Rightarrow\quad I = \sqrt{\pi}$$
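A quick numerical sanity check of the result (a sketch; SciPy's quad accepts infinite limits):

```python
import numpy as np
from scipy import integrate

val, err = integrate.quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(val, np.sqrt(np.pi))   # both about 1.7724538509
```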
Last edited: Feb 26, 2008
4. Feb 26, 2008
### sutupidmath
This looks cute, although I do not understand a damn thing of what you did! I mean I haven't yet dealt with double integrals!
5. Feb 26, 2008
### D H
Staff Emeritus
This is the Gaussian integral. The wikipedia article (http://en.wikipedia.org/wiki/Gaussian_integral) on this integral goes over the derivation of this integral and does a rigorous job (check out the "careful" proof of the identity).
Last edited by a moderator: May 3, 2017
6. Feb 26, 2008
EDIT: Note: Read the article D H posted, and not the gibberish below.
Honestly, I can't believe I remembered how to do it. I've seen it a couple of times in class. To me it is a trick.
But here's the gist of it. That x^2 looks like a beast, and integrating from -infinity to infinity seems like a problem.
We know that if we multiply two exponentials e^u*e^y we get e^(u+y). So when we multiply the two integrals together and get x^2+y^2 this should be screaming, convert me into polar coordinates.
So we multiply the two integrals together, and convert the x^2+y^2 into r^2.
First though, why is that even possible? Well remember that when you integrate with "numbers" you get a number. What I mean by this is the following.
If we integrate $$\int_0^1 x dx$$ we get a number right? What about when we integrate $$\int_0^u x dx$$? Well the second case returns a function dependent on $u$.
So in the first case, $$\int_0^1 x dx$$, why not just call this a number, how about $I$. So this makes sense to be able to multiply two numbers together, e.g. $I \times I = \int_0^1 x dx \times \int_0^1 x dx$. Think about why we can "push" them together.
I think the most interesting part about it, was changing to polar coordinates. The part where we change from sweeping out -infinity to infinity in the x and y direction in rectangular coordinates to sweeping out all values by rotating from 0 to 2pi and extending the "arm" from 0 to infinity.
Last edited: Feb 26, 2008
7. Feb 26, 2008
### John Creighto
There are other ways to solve the above as well. You will learn these if you take a course in complex variables.
8. Feb 27, 2008
### Big-T
You will also need Fubini's theorem in order to justify that you may in this case convert the product of two integrals into a double integral.
http://bootmath.com/number-of-integer-solutions-of-the-following-equation.html | # Number of integer solutions of the following equation
Consider the equation $a^2+ab+b^2=1$. How many integer pairs are solutions to this?
I found 4 pairs: $a=-1, b=0$; $a=0, b=-1$; $a=1,b=0$; $a=0, b=1$.
But the solution says the answer is 6. Which other possibilities am I missing? Please post the answer with solution.
#### Solutions
Note that
$$a^2+ab+b^2=1\iff4a^2+4ab+4b^2=4\iff(2a+b)^2+3b^2=4.$$
If $b=0$ then it gives $a=\pm1$. If $b=1$ then $a=0$ or $a=-1$ and if $b=-1$ then $a=0$ or $a=1$. Otherwise, you will have
$$4=(2a+b)^2+3b^2\ge (2a+b)^2+12$$
which is impossible.
So, the solutions are
$$(-1,0),(1,0),(0,1),(-1,1),(0,-1),(1,-1)$$
(there are 6 solutions as you wanted)
Note that you can complete the square in $a^2+ab+b^2=1$ to get $$\left(a+\frac b2\right)^2+\frac 34b^2=1$$
Now you can see that the sum of the two positive terms can only be equal to $1$ if the terms themselves are less than or equal to $1$. This means that $|b|\leq 1$ and by symmetry $|a|\leq 1$.
If $a=0$ then $b=\pm 1$ and if $b=0$ then $a=\pm 1$. What happens if neither $a$ nor $b$ is zero – say $a=1$ – what solutions do you find then?
http://www.chegg.com/homework-help/questions-and-answers/plane-monochromatic-electromagnetic-wave-wavelength-u03bb-46-cm-propagates-vacuum-magnetic-q2605361 | A plane monochromatic electromagnetic wave with wavelength %u03BB = 4.6 cm, propagates through a vacuum. Its magnetic field is described by $$B = (B_{x}i + B_{y}l) cos(kz+ \omega t) where B_{x} = 1.4*10^{-6} T, B_{y} = 5.2*10^{-6} T$$
and i-hat and j-hat are the unit vectors in the +x and +y directions, respectively.
1) What is I, the intensity of this wave?
2) What is Sz, the z-component of the Poynting vector at (x = 0, y = 0, z = 0) at t = 0?
3) What is Ex, the x-component of the electric field at (x = 0, y = 0, z = 0) at t = 0?
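For part 1, a sketch of the arithmetic, assuming the standard plane-wave relation I = c B0^2 / (2 mu0) with B0 the magnitude of the magnetic amplitude (this relation is not given in the excerpt):

```python
import math

c, mu0 = 2.998e8, 4e-7 * math.pi
Bx, By = 1.4e-6, 5.2e-6
B0 = math.hypot(Bx, By)      # magnitude of the magnetic amplitude
I = c * B0**2 / (2 * mu0)
print(I)                     # ~3.5e3 W/m^2
```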
http://mathhelpforum.com/differential-equations/174515-solving-non-homogeneous-pde.html

# Math Help - Solving a non-homogeneous PDE
1. ## Solving a non-homogeneous PDE
Does anyone know a method to solve this PDE?
$\frac{\partial^2y}{\partial x^2}+f_1(x,t)\frac{\partial y}{\partial x}=f_2(x,t)\frac{\partial y}{\partial t}+f_3(x,t)$
$f_1$, $f_2$ and $f_3$ are three functions of $x$ and $t$. I wanted to try the Laplace transform, but it becomes very complicated due to the presence of the $f$ functions. If initial and boundary conditions are required, I can provide them.
Thank you,
2. What do the functions $f_1, f_2$ and $f_3$ look like? That would really make a difference as to what to do next.
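Absent more structure in the coefficients, a generic fallback (my own sketch, not from the thread) is to march the equation numerically. Assuming $f_2>0$ everywhere so the equation is parabolic, rewrite it as $y_t = (y_{xx} + f_1 y_x - f_3)/f_2$ and use an explicit finite-difference scheme:

```python
# Explicit finite-difference sketch for y_t = (y_xx + f1*y_x - f3)/f2,
# with Dirichlet boundaries; dt must respect the usual stability limit.
import numpy as np

def step(y, x, t, dt, f1, f2, f3):
    dx = x[1] - x[0]
    y_x  = (y[2:] - y[:-2]) / (2 * dx)             # central 1st derivative
    y_xx = (y[2:] - 2 * y[1:-1] + y[:-2]) / dx**2  # central 2nd derivative
    rhs = (y_xx + f1(x[1:-1], t) * y_x - f3(x[1:-1], t)) / f2(x[1:-1], t)
    y_new = y.copy()
    y_new[1:-1] = y[1:-1] + dt * rhs               # boundaries held fixed
    return y_new

# Placeholder coefficients, purely for illustration:
f1 = lambda x, t: np.sin(x)
f2 = lambda x, t: 1.0 + 0 * x
f3 = lambda x, t: 0 * x

x = np.linspace(0, 1, 101)
y = np.exp(-100 * (x - 0.5) ** 2)  # assumed initial condition
for n in range(1000):
    y = step(y, x, n * 1e-5, 1e-5, f1, f2, f3)
```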
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-12th-edition/chapter-2-section-2-3-writing-equations-of-lines-2-3-exercises-page-172/7

## Intermediate Algebra (12th Edition)
Using $y=mx+b$, the slope-intercept form of a linear equation where $b$ is the $y$-intercept and $m$ is the slope, the given equation $y=2x+3$ has $y$-intercept $3$ and slope $2$. The $y$-intercept is located on the positive $y$-axis. Since the slope is positive, the line is inclined to the right. These characteristics best exemplify Graph A.
http://mathhelpforum.com/calculus/128590-fairly-simple-limit-but-tricky-variables.html

# Math Help - Fairly simple limit, but tricky variables
1. ## Fairly simple limit, but tricky variables
Hey everyone. I need help evaluating the limit of (1 + p/q)^(qx) as x approaches infinity. I know I need to use L'Hospital's rule, but the p and q are throwing me off and I can't find the right answer. Any help would be greatly appreciated.
2. Are you sure that limit exists? Maybe you made a mistake typing.
Do you know this limit: $\lim_{x \to \infty } \left( 1 + \frac{1}{x} \right)^x = e$ ?
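Presumably the intended expression is $(1+p/x)^{qx}$ rather than $(1+p/q)^{qx}$; this is my assumption, since otherwise the base is constant and the limit is trivial. With that reading, the quoted limit settles it without L'Hospital:
$$\lim_{x \to \infty}\left(1+\frac{p}{x}\right)^{qx}=\lim_{x \to \infty}\left[\left(1+\frac{p}{x}\right)^{x/p}\right]^{pq}=e^{pq}.$$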
http://mathhelpforum.com/calculus/16853-limit-help-print.html

# limit help
• Jul 14th 2007, 12:38 PM
davecs77
limit help
I have the limit as x goes to 1 of (1/ln(x) - x/(x-1)).
I am seeing a lot of these limits with two different fractions. Is a good strategy finding a common denominator and making it into one fraction, then using L'Hospital's rule if it applies to find the limit? That is what I'm trying to do, but I am not getting the answer in the book.
• Jul 14th 2007, 01:19 PM
galactus
$\lim_{x\rightarrow{1}}[\frac{1}{ln(x)}-\frac{x}{x-1}]$
Rewrite as:
$\lim_{x\rightarrow{1}}\frac{-xln(x)+x-1}{xln(x)-ln(x)}$
L'Hopital gives:
$\lim_{x\rightarrow{1}}\frac{-xln(x)}{x-1+xln(x)}$
L'Hopital again:
$\lim_{x\rightarrow{1}}\frac{-ln(x)-1}{ln(x)+2}=\frac{-1}{2}$
• Jul 14th 2007, 01:44 PM
davecs77
Quote:
Originally Posted by galactus
[the solution above]
How did you rewrite that fraction? I don't see that.
• Jul 14th 2007, 02:03 PM
galactus
That's just what you would do if you were subtracting. Basic algebra.
$\frac{a}{b}-\frac{c}{d}=\frac{ad-bc}{bd}$
• Jul 14th 2007, 02:15 PM
davecs77
Quote:
Originally Posted by galactus
[the solution above]
When you take the derivative of (-x ln(x) + x - 1)/(x ln(x) - ln(x)) I am getting
(-x - ln(x))/(x + ln(x) - (1/x)).
What am I doing wrong? EDIT: NVM, I forgot a 1, lol.
• Jul 14th 2007, 02:27 PM
galactus
Yeah, the biggest trouble with calc is the algebra. It can be tricky and easy to 'flub up'. :)
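If you want to double-check the value numerically, a computer algebra system agrees; for instance this SymPy snippet (assuming SymPy is available) returns $-1/2$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
expr = 1 / sp.log(x) - x / (x - 1)
print(sp.limit(expr, x, 1))  # -1/2
```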
http://existentialtype.wordpress.com/2012/08/09/churchs-law/

## Church’s Law
A new feature of this year’s summer school was a reduction in the number of lectures, and an addition of daily open problem sessions for reviewing the day’s material. This turned out to be a great idea for everyone, because it gave us more informal time together, and gave the students a better chance at digesting a mountain of material. It also turned out to be a bit of an embarrassment for me, because I posed a question off the top of my head for which I thought I had two proofs, neither of which turned out to be valid. The claimed theorem is, in fact, true, and one of my proofs is easily corrected to resolve the matter (the other, curiously, remains irredeemable for reasons I’ll explain shortly). The whole episode is rather interesting, so let me recount a version of it here for your enjoyment.
The context of the discussion was extensional type theory, or ETT, which is characterized by an identification of judgemental with propositional equality: if you can prove that two objects are equal, then they are interchangeable for all purposes without specific arrangement. The alternative, intensional type theory, or ITT, regards judgemental equality as definitional equality (symbolic evaluation), and gives computational meaning to proofs of equality of objects of a type, allowing in particular transport across two instances of a family whose indices are equal. NuPRL is an example of an ETT; CiC is an example of an ITT.
Within the framework of ETT, the principle of function extensionality comes “for free”, because you can prove it to hold within the theory. Function extensionality states that $f=g:A\to B$ whenever $x:A\vdash f(x)=g(x):B$. That is, two functions are equal if they are equal on all arguments (and, implicitly, respect equality of arguments). Function extensionality holds definitionally if your definitional equivalence includes the $\eta$ and $\xi$ rules, but in any case does not have the same force as extensional equality. Function extensionality as a principle of equality cannot be derived in ITT, but must be added as an additional postulate (or derived from a stronger postulate, such as univalence or the existence of a one-dimensional interval type).
Regardless of whether we are working in an extensional or an intensional theory, it is easy to see that all functions of type $N\to N$ definable in type theory are computable. For example, we may show that all such functions may be encoded as recursive functions in the sense of Kleene, or in a more modern formulation we may give a structural operational semantics that provides a deterministic execution model for such functions (given $n:N$, run $f:N\to N$ on $n$ until it stops, and yield that as result). Of course the proof relies on some fairly involved meta-theory, but it is all constructively valid (in an informal sense) and hence provides a legitimate computational interpretation of the theory. Another way to say the same thing is to say that the comprehension principles of type theory are such that every object deemed to exist has a well-defined computational meaning, so it follows that all functions defined within it are going to be computable.
This is all just another instance of Church’s Law, the scientific law stating that any formalism for defining computable functions will turn out to be equivalent to, say, the λ-calculus when it comes to definability of number-theoretic functions. (Ordinarily Church’s Law is called Church’s Thesis, but for reasons given in my Practical Foundations book, I prefer to give it the full status of a scientific law.) Type theory is, in this respect, no better than any other formalism for defining computable functions. By now we have such faith in Church’s Law that this remark is completely unsurprising, even boring to state explicitly.
So it may come as a surprise to learn that Church’s Law is false. I’m being provocative here, so let me explain what I mean before I get flamed to death on the internet. The point I wish to make is that there is an important distinction between the external and the internal properties of a theory. For example, in first-order logic the Löwenheim-Skolem Theorem tells us that if a first-order theory has an infinite model, then it has a countable model. This implies that, externally to ZF set theory, there are only countably many sets, even though internally to ZF set theory we can carry out Cantor’s argument to show that the powerset operation takes us to exponentially higher cardinalities far beyond the countable. One may say that the “reason” is that the evidence for the countability of sets is a bijection that is not definable within the theory, so that it cannot “understand” its own limitations. This is a good thing.
The situation with Church’s Law in type theory is similar. Externally we know that every function on the natural numbers is computable. But what about internally? The internal statement of Church’s Law is this: $\Pi f:N\to N.\Sigma n:N. n\Vdash f$, where the notation $n\Vdash f$ means, informally, that $n$ is the code of a program that, when executed on input $m:N$, evaluates to $f(m)$. In Kleene’s original notation this would be rendered as $\Pi m:N.\Sigma p:N.T(n,m,p)\wedge Id(U(p),f(m))$, where the $T$ predicate encodes the operational semantics, and the $U$ predicate extracts the answer from a successful computation. Note that the expansion makes use of the identity type at the type $N$. The claim is that Church’s Law, stated as a type (proposition) within ETT, is false, which is to say that it entails a contradiction.
When I posed this as an exercise at the summer school, I had in mind two different proofs, which I will now sketch. Neither is valid, but there is a valid proof that I’ll come to afterwards.
Both proofs begin by applying the so-called Axiom of Choice. For those not familiar with type theory, the “axiom” of choice is in fact a theorem, stating that every total binary relation contains a function. Explicitly,
$(\Pi x:A.\Sigma y:B.R(x,y)) \to \Sigma f:A\to B.\Pi x:A.R(x,f(x)).$
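(A minimal sketch of why this is a theorem, for readers seeing it for the first time; this is my gloss, not the post's. Given $h$ witnessing the premise, the two projections do the job,
$$\lambda h.\,\langle \lambda x.\,\pi_1(h\,x),\; \lambda x.\,\pi_2(h\,x)\rangle,$$
where the first projection supplies the function and the second certifies that it satisfies $R$.)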
The function $f$ is the “choice function” that associates a witness to the totality of $R$ to each argument $x$. In the present case if we postulate Church’s Law, then by the axiom of choice we have
$\Sigma F:(N\to N)\to N.\Pi f:N\to N. F(f)\Vdash f$.
That is, the functional $F$ picks out, for each function $f$ in $N\to N$, a (code for a) program that witnesses the computability of $f$. This should already seem suspicious, because by function extensionality the functional $F$ must assign the same program to any two extensionally equal functions.
We may easily see that $F$ is injective, for if $F(f)$ is $F(g)$, then this common code tracks both $f$ and $g$, and hence $f$ and $g$ are (extensionally) equal. Thus we have an injection from $N\to N$ into $N$, which seems “impossible” … except that it is not! Let’s try the proof that this is impossible, and see where it breaks down. Suppose that $i:(N\to N)\to N$ is injective. Define $d(x)=i^{-1}(x)(x)+1$, and consider $d(i(d))=i^{-1}(i(d))(i(d))+1=d(i(d))+1$ so $0=1$ and we are done. Not so fast! Since $i$ is only injective, and not necessarily surjective, it is not clear how to define $i^{-1}$. The obvious idea is to send $x=i(f)$ to $f$, and any $x$ outside the image of $i$ to, say, the identity. But there is no reason to suppose that the image of $i$ is decidable, so the attempted definition breaks down. I hacked around with this for a while, trying to exploit properties of $F$ to repair the proof (rather than work with a general injection, focus on the specific functional $F$), but failed. Andrej Bauer pointed out to me, to my surprise, that there is a model of ETT (which he constructed) that contains an injection of $N\to N$ into $N$! So there is no possibility of rescuing this line of argument.
(Incidentally, we can show within ETT that there is no bijection between $N$ and $N\to N$, using surjectivity to rescue the proof attempt above. Curiously, Lawvere has shown that there can be no surjection from $N$ onto $N\to N$, but this does not seem to help in the present situation. This shows that the concept of countability is more subtle in the constructive setting than in the classical setting.)
But I had another argument in mind, so I was not worried. The functional $F$ provides a decision procedure for equality for the type $N\to N$: given $f,g:N\to N$, compare $F(f)$ with $F(g)$. Surely this is impossible! But one cannot prove within type theory that $\textrm{Id}_{N\to N}(-,-)$ is undecidable, because type theory is consistent with the law of the excluded middle, which states that every proposition is decidable. (Indeed, type theory proves that excluded middle is irrefutable for any particular proposition $P$: $\neg\neg(P\vee\neg P)$.) So this proof also fails!
At this point it started to seem as though Church’s Law could be independent of ETT, as startling as that sounds. For ITT it is more plausible: equality of functions is definitional, so one could imagine associating an index with each function without disrupting anything. But for ETT this seemed implausible to me. Andrej pointed me to a paper by Maietti and Sambin that states that Church’s Law is incompatible with function extensionality and choice. So there must be another proof that refutes Church’s Law, and indeed there is one based on the aforementioned decidability of function equivalence (but with a slightly different line of reasoning than the one I suggested).
First, note that we can use the equality test for functions in $N\to N$ to check for halting. Using the $T$ predicate described above, we can define a function that is constantly $0$ iff a given (code of a) program never halts on given input. We may then use the above-mentioned equality test to check for halting. So it suffices to show that the halting problem for (codes of) functions and inputs is not computable to complete the refutation of the internal form of Church’s Law.
Specifically, assume given $h:N\times N\to N$ that, given a code for a function and an input, yields $0$ or $1$ according to whether or not that function halts when applied to that input. Define $d:N\to N$ by $\lambda x:N.\neg h(x,x)$, the usual diagonal function. Now apply the functional $F$ obtained from Church’s Law using the Axiom of Choice to obtain $n=F(d)$, the code for the function $d$, and consider $h(n,n)$ to derive the needed contradiction. Notice that we have used Church’s Law here to obtain a code for the type-theoretic diagonal function, which is then passed to the halting tester in the usual way.
As you can see, the revised argument follows along lines similar to what I had originally envisioned (in the second version), but requires a bit more effort to push through the proof properly. (Incidentally, I don’t think the argument can be made to work in pure ITT, but perhaps it would go through for ITT enriched with function extensionality.)
Thus, Church’s Law is false internally to extensional type theory, even though it is evidently true externally for that theory. You can see the similarity to the situation in first-order logic described earlier. Even though all functions of type $N\to N$ are computable, type theory itself is not capable of recognizing this fact (at least, not in the extensional case). And this is a good thing, not a bad thing! The whole beauty of constructive mathematics lies in the fact that it is just mathematics, free of any self-conscious recognition that we are writing programs when proving theorems constructively. We never have to reason about machine indices or any such nonsense, we just do mathematics under the discipline of not assuming that every proposition is decidable. One benefit is that the same mathematics admits interpretation not only in terms of computability, but also in terms of continuity in topological spaces, establishing a deep connection between two seemingly disparate topics.
(Hat tip to Andrej Bauer for help in sorting all this out. Here’s a link to a talk and a paper about the construction of a model of ETT in which there is an injection from $N\to N$ to $N$.)
Update: word-smithing.
### 10 Responses to Church’s Law
1. Mike Shulman says:
Reading this together with your next post has confused me about what you mean by the word “proposition”. In the next post, you made the point that not every construction is a proof, i.e. not every type is a proposition; but you didn’t say how you want to decide whether a given type is a proposition.
In my limited experience, it seems that the most common answer is to regard as propositions those types that are subsingletons, a.k.a. proof-irrelevant, a.k.a. (-1)-truncated or h-level 1 (in the language of homotopy type theory). But in that case, the quantifier “there exists” has to be interpreted not by a $\Sigma$-type but by a squashed version of it. Then the axiom of choice is no longer a theorem and can be false, while Church’s Law would be $\Pi f : N\to N. [\Sigma n:N . n \Vdash f]$ (where $[-]$ denotes a squash type) and can be true (as it is in the effective topos). (Please correct me if my understanding of any of this is wrong.)
Since in this post you say that AC is a theorem and CL is false, using the non-squashed versions of both, I gather that this is not the meaning of “proposition” you prefer. So what, for you, makes a type into a proposition? Is it just the intent to regard it as such?
• Carlo Angiuli says:
My two cents: It’s the intent. If we had to specify which types are “propositions” in a formal sense, it would make most sense to reserve this term for subsingleton ((-1)-truncated) types. But as we know, to truncate types is to throw out a lot of useful homotopical — and even 0-dimensional — information.
I think Bob is drawing an informal distinction between types which can be regarded meaningfully as theorems in a logic, and types which only express constructions (whatever that means). When people first learn about props-as-types, they are told that every program proves a theorem; they often respond with a question like, “What theorem does fib :: Int -> Int prove?” The answer is of course that it’s a useless theorem (\x. 0 or \x. x are suitable proofs) but a perfectly sensible construction, about which we might perhaps wish to prove interesting theorems.
• What Carlo says.
• Mike Shulman says:
Okay, thanks! I don’t like it, but at least now I understand it. (-:
2. Derek Dreyer says:
Cool post, Bob. Two typos in the paragraph where you complete the proof: H(x,x) should be h(x,x), and h(d,d) should be h(n,n).
3. @andrej: It is exactly this (perhaps somewhat boring) interpretation of Church’s Law as “quote” that I am curious about. Perhaps I have drunk too much intensional-equality kool-aid, but one of the promised benefits of intensional equality is that we are not forced to identify operations of different complexity as equal (e.g. bogosort and quicksort). As far as extensional type theory is concerned, whether we define our sorting function as quicksort or bogosort does not matter: there is only one true sorting function, and all sorting functions are equal to it.
On the other hand, if we can safely add a “quote” function to intensional type theory, then we can distinguish between the two sorts, and then we can (hopefully) prove that quicksort is efficient under a chosen evaluation scheme. We could still perhaps keep a notion of extensional equality for functions as equality-of-function-application — not as convenient, maybe, but not necessarily impossible.
Why does abandoning this idea in favor of e.g. extensional type theory trouble me? Because then, if we have a proof the correctness of our tree implementation in (dependent type theory based) TheoremProver, and we want to prove that it also has the right complexity guarantees, then it seems we have two choices: 1. analyze the complexity outside of TheoremProver or 2. build a new theory of computation within TheoremProver and rewrite our tree in that embedded language. Certainly (2) works, but then is there a strong reason to prefer TheoremProver to e.g. HOL? (perhaps an affirmative answer to that question would be comforting.)
4. andrejbauer says:
@Frederic: typically Church’s Law does not compute anything. For example, in the effective topos it is realized by the (code of) identity function. The reason is simple: since everything is represented by Gödel codes anyhow, it is trivially the case that every function has a code. I do not know of a model in which Church’s Law has an interesting computational meaning. In a programming language it corresponds to “unquote” or “disassemble”: given a value of functional type, it returns its source code (given as an abstract syntax tree, or just as a pointer to a block of machine code).
5. andrejbauer says:
The internal Church’s Law (I love calling it a “law”!) is a very extreme axiom which fails in most realizability models. It says something like “everything we see is not only made of Gödel codes, these codes are visible to us”. Church’s Law in our universe would say not only that we’re being simulated by a computer, but also that we have access to God’s source code. An extreme position indeed.
6. Very thought provoking! I wonder, though, if we aren’t potentially losing something by making functions so extensional. If we stick with intensional type theory and add a version of Church’s Law that computes, we might be able to prove interesting things: e.g. that our definition of mergesort is polynomial time computable. With function extensionality, even though we only have computable functions we are completely forbidden to mention how they are computed, as your example shows — and so if we want to prove anything about their intension (like that they terminate within a reasonable amount of time) within our extensional theorem prover, we have to build a whole internal theory and rewrite all of our functions within it. It just seems a bit unsavory that within type theory, we are already going to a lot of effort to show that our functions compute in bounded time and yet cannot use this information directly.
http://physics.stackexchange.com/questions/96240/energy-of-an-inductor

# Energy of an inductor
I know that for an inductor with self-inductance $L$, the energy stored in its steady state once a current $I$ has been established is given by $U = \frac{LI^2}{2}$.
But after this current has been established, if we suddenly cut the wires attaching the inductor to the potential source or short the circuit, what happens to the energy?
It must not be stored anymore as $\frac{LI^2}{2}$, as there can be no $I$; and it could not have decayed as heat, because we cut off the wires and did not have any circuit which would have allowed a reverse flow of current.
I have one thought that it might have gone as EM radiation but I am not sure.
-
The place where you cut the wire acts as a temporary capacitor where a huge potential difference is formed. This potential difference causes an intense electric field to develop, which is where the energy is initially stored. If the potential difference developed exceeds the dielectric breakdown voltage of the intervening medium, the charges are lost as a spark discharge which dissipates energy as EM waves and heat.
But usually the current is never cut down this abruptly, providing enough time for the energy to dissipate as the normal safe resistive heating. If not, then the energy will be lost by the aforementioned discharge which is very intense and might damage the equipment under consideration. Hence the use of a parallel capacitor with a large inductor, which allows slow dissipation of energy as LC oscillations (EM waves) and normal resistive heating.
EDIT
The said capacitance ceases to exist only if a spark discharge dissipates the gathered charge or, the instantaneous back emf is slowly reduced by resistive heating (the circuit is not cut-off). (i.e. if we assume the cessation of current occurred instantaneously, the developed field would exceed the breakdown field leading to a spark, or if we assume that the change is slow enough so that no spark is developed, then the finite time it takes for the current to die down, the resistances of the circuit would dissipate the energy in that time). The sudden stopping of the current is only an ideal occurrence and does not occur in practice.
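To get a feel for the numbers, here is a rough sketch with made-up values (mine, not from the question): if all the magnetic energy were to swing into a stray capacitance $C$, then $\frac12 LI^2 = \frac12 CV^2$ gives a peak voltage $V = I\sqrt{L/C}$.

```python
import math

L = 1.0      # inductance in henry (assumed value)
I = 1.0      # current at the instant of interruption, ampere (assumed)
C = 100e-12  # stray capacitance of the open gap, farad (assumed)

U = 0.5 * L * I**2             # stored magnetic energy, J
V_peak = I * math.sqrt(L / C)  # peak voltage if all of U moves into C
print(f"U = {U:.2f} J, V_peak = {V_peak/1e3:.0f} kV")  # 0.50 J, 100 kV
```

A hundred kilovolts across a small gap is far beyond the breakdown field of air, which is why a spark is the generic outcome.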
-
The mere development of a high electric field does not assert that all the energy has transferred from the magnetic to the electric field. At best you can treat it as an LC circuit for an instant; it does not explain why, just after this instant, you cannot consider the said capacitance. – Rijul Gupta Feb 1 '14 at 12:18
@rijulgupta see the edit in my answer. Ask if you need any further clarifications. – Satwik Pasani Feb 1 '14 at 13:19
Satwik's answer is correct, but I want to add a practical example. When we switch off some electric device (say a DC motor), a spark is produced; this is where the energy is getting lost. This spark is so intense that it can be seen even with naked eyes at the switch.
PS: Do this experiment in vacuum; then part of the energy will be dissipated as heat and the remainder will be stored in setting up an $\vec E$ field in the wire, causing the electrons to accumulate at the surface.
The capacitive energy will be $\dfrac{1}{2}CV^2$ ($C$ is the stray capacitance of the wire) and the remainder will appear as heat produced by the movement of electrons from the body of the conductor to the surface at a high current in a short time. Energy will not be emitted as EM waves in vacuum.
EM waves are generated only by corona discharge, which requires some medium.
-
@rijul Disconnection of the current means the magnetic field has to decrease instantly, which causes an electric field of high intensity; this is what Satwik's answer says at its core. – user31782 Feb 1 '14 at 12:46
https://www.physicsforums.com/threads/differences-between-rnormalizable-and-non-renormalizable.12620/

# Differences between renormalizable and non-renormalizable
1. Jan 14, 2004
### eljose79
I would like to know the differences between a renormalizable and a non-renormalizable theory. How is it possible that one gives finite results and the other infinite results? Why does that happen? In fact I suppose that the divergences in both theories go as
$\int_0^\infty d^n p\; p^n$, so why can they be absorbed in one theory whereas in the other they cannot?
2. Jan 14, 2004
### Yustas
There are infinities in both types of theories. However, there is a finite number of them in renormalizable theories and an infinite number in nonrenormalizable ones. The trick with predictions here works because in renormalizable theories you can redefine some of your basic parameters (mass, charge, etc.) to absorb those divergencies. This procedure of redefinition is fine, since the integrals that come out divergent in perturbation theory are divergent in the ultraviolet, i.e. probe very short distances at which you don't know anything about real physics anyway. So one redefines those parameters and extracts their values from experiment...
BTW, nonrenormalizable theories are not a waste either. There are a number of well-defined NR theories -- effective field theories -- such as chiral perturbation theory, heavy quark effective theory or even gravity in the post-Newtonian limit -- which produce a wealth of very useful predictions. They just have more parameters to fit order by order in the small parameter expansion...
3. Jan 22, 2004
### eljose79
I read in the Peskin–Schroeder book a thing, but I do not know if it has to do with renormalization: they said that the Green function at all orders could be calculated because it solved a differential equation, so you could solve it to get the Green function to all orders... Sorry if I am wrong, but could this be applied to non-renormalizable theories to get the Green functions? By the way, does knowing the Green function allow you to solve the renormalization problem? Thanks.
P.D.: I am ignorant in this matter; could someone provide a link to a good introduction (math included) to renormalization? Thanks.
4. Jan 23, 2004
### arivero
If picking up small random papers is your style of learning, check the ones at http://web.mit.edu/redingtn/www/netadv/Xrenormali.html
If you prefer a book path, Peskin is a good option, but you might want to try first the 3-volume, out-of-print series by Bjorken and Drell, as Peskin–Schroeder builds upon it.
Last, any new (post-1990) book using the R-operation of Bogoliubov must do the finishing touch.
5. Jan 23, 2004
### arivero
On the other hand, let me provide a fast introduction to renormalization.
Take a function f(x). The quantity f(x)/x is clearly infinite at 0. But if you subtract the infinite f(0)/x, you will get a finite quantity, which we call f'(0).
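In symbols (my gloss on the analogy), the difference of two separately divergent quantities is finite:
$$\lim_{x\to 0}\left(\frac{f(x)}{x}-\frac{f(0)}{x}\right)=\lim_{x\to 0}\frac{f(x)-f(0)}{x}=f'(0).$$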
6. Jan 23, 2004
### jeff
Are you referring to Ward identities?
http://www.physicsforums.com/showthread.php?t=183697

# Incline Plane with Friction with one object hanging?
by myelevatorbeat
Tags: friction, incline, object, plane
P: 55

1. The problem statement, all variables and given/known data
Find the acceleration reached by each of the two objects shown in Figure P4.49 if the coefficient of kinetic friction between the 7.00 kg object and the plane is 0.250. Here is a picture of the problem: http://a496.ac-images.myspacecdn.com...bcabca499f.jpg

2. Relevant equations
F = ma

3. The attempt at a solution
I'm calling the 7.00 kg object M1 and the 12.0 kg object M2.

For M1:
(Fnet)x = ma: T + mg sin37° - fk = m1*a
(Fnet)y = ma (a = 0): n1 - mg cos37° = 0, so n1 = mg cos37°

For M2:
(Fnet)y = m2*a: T - m2*g = m2*a, so T = m2*a + m2*g

I take this equation and put it in the (Fnet)x equation I got for M1:
T + mg sin37° - fk = m1*a
m2*a + m2*g + mg sin37° - fk = m1*a
m2*a + m2*g + mg sin37° - (0.250) mg cos37° = m1*a
m2*g + mg sin37° - (0.250) mg cos37° = m1*a - m2*a
a = [m2*g + mg sin37° - (0.250) mg cos37°] / (m1 - m2)
a = 7.64 m/s^2

Now, my question is: it says "Find the acceleration reached by each of the two objects." Now, I know with an ideal pulley the tensions on both sides are equal, but I'm not sure if the acceleration is. I would assume so because it's one rope, and one rope can't move at two different speeds at once, but I could be wrong. So, is that the answer to the acceleration of both blocks, or merely the acceleration of the 7.00 kg object, with further work to be done to find the acceleration of the 12.0 kg object?
HW Helper
P: 4,125
You can assume accelerations are the same... But careful about signs... if you assume the acceleration of M1 is a acting upward to the right... then you must assume that the acceleration of M2 is a downward... so this will affect the signs in your equations.
also here:
For M1 (Fnet)x=ma T+mgsin37degrees-fk=m1a
I think you should have T - m1gsin37 - fk = m1a. (minus instead of plus)
And since you've assumed that a is upward to the right for M1... it is downward for M2... so
T - m2g = m2(-a)
Mentor
P: 41,304
Quote by myelevatorbeat For M1 (Fnet)x=ma T+mgsin37degrees-fk=m1a
The tension and the component of the weight act in different directions.
In calling the acceleration "a", be sure to use a consistent sign convention for M1 & M2: If M1 accelerates up the incline, M2 must accelerate down.
Now, I know with an ideal pulley the tensions on both sides are equal, but I'm not sure if the acceleration is. I would assume so because it's one rope and one rope can't move at two different speeds at once, but I could be wrong.
That is correct and is essential to solving the problem.
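A quick numerical sketch of the corrected setup (my own check, assuming M2 descends while M1 slides up the incline, with g = 9.8 m/s^2):

```python
import math

g = 9.8             # m/s^2
m1, m2 = 7.0, 12.0  # kg: block on the incline, hanging block
mu = 0.25           # coefficient of kinetic friction
theta = math.radians(37)

# M1 slides up, so kinetic friction on M1 points down the incline:
#   M1:  T - m1*g*sin(theta) - mu*m1*g*cos(theta) = m1*a
#   M2:  m2*g - T = m2*a
a = (m2*g - m1*g*math.sin(theta) - mu*m1*g*math.cos(theta)) / (m1 + m2)
T = m2 * (g - a)
print(f"a = {a:.2f} m/s^2, T = {T:.1f} N")  # a ~ 3.30 m/s^2
```

Both blocks share this magnitude of acceleration, exactly as Doc Al says.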
https://research.nsu.ru/ru/publications/search-for-vectorlike-leptons-in-multilepton-final-states-in-prot

# Search for vectorlike leptons in multilepton final states in proton-proton collisions at √s = 13 TeV
The CMS collaboration, Александр Юрьевич Барняков, Владимир Евгеньевич Блинов, Юрий Иванович Сковпень
Research output: Scientific publications in periodicals › article › peer review
13 Citations (Scopus)
## Abstract
A search for vectorlike leptons in multilepton final states is presented. The data sample corresponds to an integrated luminosity of 77.4 fb-1 of proton-proton collisions at a center-of-mass energy of 13 TeV collected by the CMS experiment at the LHC in 2016 and 2017. Events are categorized by the multiplicity of electrons, muons, and hadronically decaying τ leptons. The missing transverse momentum and the scalar sum of the lepton transverse momenta are used to distinguish the signal from background. The observed results are consistent with the expectations from the standard model hypothesis. The existence of a vectorlike lepton doublet, coupling to the third-generation standard model leptons in the mass range of 120-790 GeV, is excluded at 95% confidence level. These are the most stringent limits yet on the production of a vectorlike lepton doublet, coupling to the third-generation standard model leptons.
Original language: English. Article number: 052003 (22 pages). Journal: Physical Review D, volume 100, issue 5. DOI: https://doi.org/10.1103/PhysRevD.100.052003. Published: 6 Sep 2019.
http://www.koreascience.or.kr/article/JAKO200413842029119.page

# CONFORMAL CHANGE OF THE TENSOR ${U^{v}}_{\omega\mu}$ IN 7-DIMENSIONAL g-UFT
• Published : 2004.11.01
#### Abstract
We investigate the change of the tensor ${U^{v}}_{\omega\mu}$ induced by a conformal change in 7-dimensional g-unified field theory. These topics will be studied for the second class with the first category in the 7-dimensional case.
#### References
1. C. H. Cho, Conformal change of the connection in 3- and 5-dimensional $^{\ast}g^{{\lambda}{\nu}}$ - unified Field Theory, BIBS, Inha Univ. 13 (1992), 11–19
2. C. H. Cho, Conformal change of the tensor $S_{{\lambda}{\mu}}\;^{\nu}$ in 7-dimensional g-UFT, Bull. Korean Math. Soc. 38 (2001), 197–203
3. K. T. Chung, Three- and Five-dimensional considerations of the geometry of Einstein's $^{\ast}$g-unified field theory, Internat. J. Theoret. Phys. 27(9) (1988), 1105–1136
4. K. T. Chung, Conformal change in Einstein's $^{\ast}g^{{\lambda}{\nu}}$ -unified field theory, Nuovo Cimento (X) 58B (1968)
5. K. T. Chung, Einstein's connection in terms of $^{\ast}g^{{\lambda}{\nu}}$, Nuovo Cimento (X) 27 (1963)
6. M. A. Kim, K. S. So, C. H. Cho and K. T. Chung, Seven-dimensional consideration of Einstein's connection. I. The recurrence relations of the first kind in n-g-UFT, Internat. J. Math. & Math. Sci. 12 (2003), 777–787. Hindawi
7. M. A. Kim, K. S. So, C. H. Cho and K. T. Chung, Seven-dimensional consideration of Einstein's connection. II. The recurrence relations of the second and third kind in ${\gamma}$-g-UFT, Internat. J. Math. & Math. Sci. 14 (2003), 895–902. Hindawi
8. M. A. Kim, K. S. So, C. H. Cho and K. T. Chung, Seven-dimensional consideration of Einstein's connection. III. The seven-dimensional Einstein's connection, Internat. J. Math. & Math. Sci. 15 (2003), 947–958. Hindawi
9. V. Hlavaty, Geometry of Einstein’s unified field theory, P. Noordhoff Ltd (1957)
https://www.physicsoverflow.org/30359/em-wave-function-%26-photon-wavefunction

# EM wave function & photon wavefunction
+ 7 like - 0 dislike
According to this review
Photon wave function. Iwo Bialynicki-Birula. Progress in Optics 36 V (1996), pp. 245-294. arXiv:quant-ph/0508202,
a classical EM plane wavefunction is a wavefunction (in Hilbert space) of a single photon with definite momentum (cf. Section 1.4), although a naive probabilistic interpretation is not applicable. However, what I've learned in some other sources (e.g. Sakurai's Advanced QM, chap. 2) is that the classical EM field is obtained by taking the expectation value of the field operator. Then according to Sakurai, the classical $E$ or $B$ field of a single-photon state with definite momentum $p$ is given by $\langle p|\hat{E}|p\rangle$ (or the same with $\hat{B}$), which is $0$ in the whole space. This seems to contradict the first view, but both views make equally good sense to me by their own reasonings, so how do I reconcile them?
This post imported from StackExchange Physics at 2015-04-22 11:16 (UTC), posted by SE-user Jia Yiyang
asked May 19, 2012
The photon doesn't have a non-relativistic wavefunction because it is never slowly moving. The vector potential can be interpreted in certain ways as a relativistic wavefunction for a four-dimensional photon, but this requires understanding four-dimensional propagators in a particle view. It is probably best to follow Sakurai and ignore the concept of the wavefunction of the photon until you study the path integral for particle paths.
This post imported from StackExchange Physics at 2015-04-22 11:16 (UTC), posted by SE-user Ron Maimon
I did study some path integral, but I guess you mean something else by " path integral for particle paths"? Anyway I do want to get some clarifications on the issue now. Sakurai's view gives me some confusions too: spectrum of one photon state is determined by the classical Maxwell's equations, yet the one-photon state vectors do not correspond to the eigensolutions of Maxwell according to Sakurai's prescription. On the contrary, for Dirac particles it seems to be universally agreed that the one-particle states are the solutions of Dirac equation.
This post imported from StackExchange Physics at 2015-04-22 11:16 (UTC), posted by SE-user Jia Yiyang
It's not much different for Dirac equation--- the antiparticles are there, so you either describe the particle zig-zagging in time (which is all I meant by particle-path path-integral, Feynman propagators), or you fail to describe single particle. The Dirac equation is wrong as a single particle equation, this is Klein's paradox. The fields never correspond directly to the particle wavefunction, only in a Feynman description, and then the particle wavefunction in four dimensions (zig-zagging in time) integrated over all internal proper times obeys the linearized field equation.
This post imported from StackExchange Physics at 2015-04-22 11:16 (UTC), posted by SE-user Ron Maimon
This reply will be more or less the same as the one I gave Lubos: one takes solutions of the Dirac equation as the one-particle space and builds up a Fock space based on it (cf. Thaller, B., "The Dirac Equation", chap. 10), or else it is hard to explain the success of applying Dirac's equation to the hydrogen atom. The correspondence between state kets and solutions of the Dirac equation is given by $\psi(x)=\langle 0|\hat{\psi}(x)|p\rangle$ (Sakurai, Weinberg)
This post imported from StackExchange Physics at 2015-04-22 11:16 (UTC), posted by SE-user Jia Yiyang
Dear Jia, a one-photon wave function is a nontrivial solution of Maxwell's equations and and a one-electron wave function is a nontrivial solution of Dirac's equation. There is no difference here. Note that the classical field you calculated from Dirac above wasn't the expectation value in the same state of momentum $p$: it was the matrix element between $p$ and the vacuum. When you do the same thing for the Maxwell field, it will work just as well. That's what my answer was about: it's about the mixtures from bras and kets with different occupation numbers. What's the problem?
This post imported from StackExchange Physics at 2015-04-22 11:16 (UTC), posted by SE-user Luboš Motl
@JiaYiyang: It is not hard to explain the success of the Dirac equation in an external potential--- this is a case where the eigenstates of the equation have a particle and field interpretation simultaneously. I am writing a long answer, because this is never properly explained in the literature, at least not in one place.
This post imported from StackExchange Physics at 2015-04-22 11:16 (UTC), posted by SE-user Ron Maimon
@RonMaimon: I'd be very interested to read such a thing if you have the time to put something together (even if brief)
This post imported from StackExchange Physics at 2015-04-22 11:16 (UTC), posted by SE-user twistor59
+ 5 like - 0 dislike
As explained by Iwo Bialynicki-Birula in the paper quoted, the Maxwell equations are relativistic equations for a single photon, fully analogous to the Dirac equations for a single electron. By restricting to the positive energy solutions, one gets in both cases an irreducible unitary representation of the full Poincare group, and hence the space of modes of a photon or electron in quantum electrodynamics.
Classical fields are expectation values of quantum fields; but the classically relevant states are the coherent states. Indeed, for a photon, one can associate to each mode a coherent state, and in this state, the expectation value of the e/m field results in the value of the field given by the mode.
For more details, see my lectures
http://www.mat.univie.ac.at/~neum/ms/lightslides.pdf
http://www.mat.univie.ac.at/~neum/ms/optslides.pdf
and Chapter B2: Photons and Electrons of my theoretical physics FAQ.
This post imported from StackExchange Physics at 2015-04-22 11:17 (UTC), posted by SE-user Arnold Neumaier
answered May 21, 2012 by (15,468 points)
I see, so you (and Luboš) mean that if we just look at the c-number solutions of the Maxwell equations without other specification, the interpretation is two-fold: they can either be classical EM fields or quantum wavefunctions, and we must specify which one it is.
This post imported from StackExchange Physics at 2015-04-22 11:17 (UTC), posted by SE-user Jia Yiyang
@WetSavannaAnimalakaRodVance: I don't agree that you can measure a one-photon state using the same apparatus and setups that measure classical EM fields. What we measure classically is the expectation value $\langle \alpha|\mathbf{E}|\alpha\rangle$, and if $|\alpha\rangle$ is any 1-photon state, the expectation is 0 everywhere in space. In this post (me, Arnold, Lubos) the conclusion reached is that given a (positive-frequency) solution to Maxwell's equation, without further clarification, it can be a classical EM wave, or the wavefunction of a 1-photon state, not both.
This post imported from StackExchange Physics at 2015-04-22 11:17 (UTC), posted by SE-user Jia Yiyang
@JiaYiyang your last sentence is what I mean - there is an isomorphism between what you're probing with classical apparatus and the corresponding one-photon state. Of course they're not the same thing, indeed photons are almost impossible to detect without destroying them, let alone glean their detailed state - that's what I meant when I said "we are probing exact models of one-photon states". But the model is in all other respects exact, so I still think the idea is pretty neat.
This post imported from StackExchange Physics at 2015-04-22 11:17 (UTC), posted by SE-user WetSavannaAnimal aka Rod Vance
@JiaYiyang Also, for the purposes of the link I gave, I argue that one can in principle copy a boson repeatedly and thus amplify its state to be classically probed: in practice the no-cloning theorem will stop you doing this for an arbitrary state, but if you can come up with a preparation procedure to realise a particular one photon state repeatedly, then you can do it in practice too - e.g. in a laser. So therefore the phase of the one photon wavefunction is meaningful and not amenable to being "gauged to something else" as the electron's wavefunction's phase is.
This post imported from StackExchange Physics at 2015-04-22 11:17 (UTC), posted by SE-user WetSavannaAnimal aka Rod Vance
@WetSavannaAnimalakaRodVance: Ok, I see what you mean. I read the first part(will read the rest later) your linked post. It indeed seems to be some content in this isomorphism, but I'm not convinced it's strong enough to differentiate the physical significance of photon and electron wave function in the way you suggested, even for electron $|\psi|^2$ is not the only thing we can measure, clearly we can reveal more information about $\psi$ itself by measuring some more observables, that is, some $\psi$'s can have identical $|\psi|^2$, but not identical $\langle\psi|A|\psi\rangle$ for all A's.
This post imported from StackExchange Physics at 2015-04-22 11:17 (UTC), posted by SE-user Jia Yiyang
@JiaYiyang: The relevant formula is ⟨α|F(x)|α⟩=α(x), where F(x) is the operator version of the Silberstein vector, α is a solution of the homogeneous Maxwell equations in Silberstein form and |α⟩ is the corresponding coherent state.
This post imported from StackExchange Physics at 2015-04-22 11:17 (UTC), posted by SE-user Arnold Neumaier
@JiaYiyang: Useful interactions can be found in Weinberg's 1964 paper on massless fields.
This post imported from StackExchange Physics at 2015-04-22 11:17 (UTC), posted by SE-user Arnold Neumaier
+ 1 like - 0 dislike
The expectation values $$\langle p | \vec E(\vec x) | p\rangle$$ and similarly for $\vec B(\vec x)$ vanish for a simple reason: the state $|p\rangle$ is by definition translationally symmetric (translation only changes the phase of the state, the overall normalization), so the expectation value of any field in this state has to be translationally symmetric, too (the phase cancels between the ket and the bra).
So if you expect to see classical waves in expectation values in such momentum eigenstates, you are unsurprisingly disappointed. Incidentally, the same thing holds for any other field including the Dirac field (in contrast with the OP's assertion). If you compute the expectation value of the Dirac field $\Psi(\vec x)$ in a one-particle momentum eigenstate with one electron, this expectation value also vanishes. In this Dirac case, it's much easier to prove so because the expectation values of all fermionic operators (to the first or another odd power) vanish because of the Grassmann grading.
The vanishing of the expectation values of fields (those that can have both signs, namely the linear functions of the "basic" fields connected with the given particle) would be true for any momentum eigenstates, even multiparticle states which are momentum eigenstates simply because the argument above holds universally. You may think that this vanishing is because the one-particle momentum eigenstate is some mixture of infinitesimal electromagnetic waves that are allowed to be in any "phase" and these phases therefore cancel.
However, the formal relationship between the classical fields and the one-particle states still holds if one is more careful. In particular, one may construct "coherent states" which are multiparticle states with an uncertain number of particles which are the closest approximations of a classical configuration. You may think of coherent states as the ground states of a harmonic oscillator (and a quantum field is an infinite-dimensional harmonic oscillator) which are shifted in the position directions and/or momentum directions, i.e. states $$|a\rangle = C_\alpha \cdot \exp(\alpha\cdot a^\dagger) |0\rangle$$ This expression may be Taylor-expanded to see the components with individual numbers of excitations, $N=0,1,2,3,\dots$ The $C_\alpha$ coefficient is just a normalization factor that doesn't affect physics of a single coherent state.
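For concreteness, the standard single-mode formulas behind this (not spelled out in the original answer; $a$, $a^\dagger$, and $|N\rangle$ are the usual ladder operators and number states) read

$$|\alpha\rangle = e^{-|\alpha|^2/2}\sum_{N=0}^{\infty}\frac{\alpha^N}{\sqrt{N!}}\,|N\rangle, \qquad \langle\alpha|a|\alpha\rangle = \alpha, \qquad \langle N|a|N\rangle = 0,$$

so a nonzero field expectation value requires interference between components with neighboring particle numbers, exactly as described below.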
With a good choice of $\alpha$ for each value of the classical field (there are many independent $a^\dagger(k,\lambda)$ operators for a quantum field and each of them has its $\alpha(k,\lambda)$), such a coherent state may be constructed for any classical configuration. The expectation values of the classical fields $\vec B,\vec E$ in these coherent states will be what you want.
Now, with the coherent state toolkit, you may get a more detailed understanding of why the momentum eigenstates which are also eigenstates of the number of particles have vanishing eigenvalues. The coherent state is something like the wave function $$\exp(-(x-x_S)^2/2)$$ which is the Gaussian shifted to $x_S$ so $x_S$ is the expectation value of $x$ in it. Such a coherent state may be obtained by an exponential operator acting on the vacuum. The initial term in the Taylor-expansion is the vacuum itself; the next term is a one-particle state that knows about the structure of the coherent state – because the remaining terms in the Taylor expansions are just gotten from the same linear piece that acts many times, recall the $Y^k/k!$ form of the terms in the Taylor expansion of $\exp(Y)$: here, $Y$ is the only thing you need to know.
On the other hand, the expectation value of $x$ in the one-particle state is of course zero. It's because the wave function of a one-particle state is an odd function such as $$x\cdot \exp(-x^2/2)$$ whose probability density is symmetric (even) in $x$ so of course that the expectation value has to be zero. If you look at the structure of the coherent state and you imagine that the $\alpha$ coefficients are very small so that multiparticle states may be neglected for the sake of simplicity, you will realize that the nonzero expectation value of $x$ in the shifted state (the coherent state) boils down to some interference between the vacuum state and the one-particle state; it is not a property of the one-particle state itself! More generally, the nonzero expectation values of fields at particular points of the spacetime prove some interference between components of the state that have different numbers of the particle excitations in them.
The latter statement should be unsurprising from another viewpoint. If you consider something like the matrix element $$\langle n | a^\dagger | m \rangle$$ where the bra and ket vectors are eigenstates of a harmonic oscillator with some number of excitations, it's clear that it's nonzero only if $m=n\pm 1$. In particular, $m$ and $n$ cannot be equal. If you consider the expectation value of $a^\dagger$ in a particle-number eigenstate $|n\rangle$, it's obvious that the expectation value vanishes because $a$ and $a^\dagger$ (which are just a different way of writing linear combinations of $\vec B(\vec x)$ or $\vec E(\vec x)$) are operators that change the number of particle excitations by one or minus one (the same for all other fields including the Dirac fields).
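A quick numerical illustration of this point, for a single bosonic mode in a truncated Fock basis (a numpy sketch; the truncation size and the value of $\alpha$ are arbitrary):

```python
import numpy as np
from math import factorial

N = 40                                      # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator: a|n> = sqrt(n)|n-1>

# Number eigenstate |n>: <n|a|n> = 0, so field expectation values vanish.
n = 3
fock = np.zeros(N)
fock[n] = 1.0
print(fock @ a @ fock)                      # 0.0

# Normalized coherent state exp(alpha a^dagger)|0>: <alpha|a|alpha> = alpha.
alpha = 0.7 + 0.2j
coh = np.exp(-abs(alpha)**2 / 2) * np.array(
    [alpha**k / np.sqrt(factorial(k)) for k in range(N)])
print(coh.conj() @ a @ coh)                 # ~ (0.7+0.2j)
```

The number eigenstate gives exactly zero, while the coherent state returns its own $\alpha$.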
So if you want to mimic a classical field or classical wave with nonzero expectation values of the fields, of course you need to consider superpositions of states with different numbers of particle excitations! But it's still true that all these expectation values are already encoded in the one-particle states. Let me summarize it: the right states that mimic the classical configurations are $\exp(Y)|0\rangle$ where $Y$ is a linear combination of creation operators (you may add the annihilation ones but they won't make a difference, except for the overall normalization, because annihilation operators annihilate the vacuum). Such coherent exponential-shaped states have nonzero vevs of any classically allowed form that you may want. At the same moment, the exponential may be Taylor-expanded to $(1+Y+\dots)$ and the linear term $Y$ produces a one-particle state that is the ultimate "building block" of the classical configuration. But if you actually want to calculate the vevs of the fields, you can't drop the term $1$ or others, either: you need to include the contributions of the matrix elements between states with different numbers of the particle excitations.
answered May 20, 2012 by Luboš Motl (10,278 points)
Thanks for the reply, but I'm aware of this, and this is exactly why I'm confused: the first reference suggests the one-photon wave function is a nontrivial solution of Maxwell's equations. As for Dirac particles, the situation is different because the relation between a one-particle state vector and the solution of the Dirac equation is given by (cf. Sakurai chap. 3-10; Weinberg chap. 14.1): $\psi(x)=\langle 0|\hat{\psi}(x)|p\rangle$; or, from a second-quantization point of view, people seem to agree on taking solutions of the Dirac equation as the one-particle space (cf. Thaller, B., chap. 10).
Dear Jia, a one-photon wave function is a nontrivial solution of Maxwell's equations and a one-electron wave function is a nontrivial solution of Dirac's equation. There is no difference here. Note that the classical field you calculated from Dirac wasn't the expectation value in the same state of momentum $p$: it was the matrix element between $p$ and the vacuum. When you do the same thing for the Maxwell field, it will work just as well. That's what my answer was about: it's about the mixtures from bras and kets with different occupation numbers. What's the problem?
Let me put it this way: if we take the c-number solutions of the field equations as classical fields, why different rules of assigning kets to classical fields (for Dirac and EM field)?
There is absolutely no difference in this respect between the Dirac and Maxwell field. I have already demonstrated it about three times.
Let me try to phrase my confusion more clearly: I understand single-particle expectation values are 0 for both the Dirac field and the EM field, and I understand the matrix element $\langle 0|\,\text{field operator}\,|p\rangle$ gives a plane wavefunction in both cases. I guess my confusion comes from the following content of quite a few textbooks: (1) relativistic wave equations are understood as field equations (EM, Dirac, etc.); (2) c-number solutions of field equations are understood as classical fields (this is usually mentioned for the EM field, but I presume it is also the case for Dirac, since they are both field equations).
It is not true you won't find a textbook that discusses $\langle 0|E(x)|p\rangle$, and even if it were true, I don't understand why it would be relevant. It is not true that all things that may be discussed are inevitably written in some textbooks. One may ask thousands of questions that are not discussed in the textbooks; this doesn't imply any contradiction. On the other hand, there's a simple reason why the expectation value of a Dirac field in the same state isn't discussed, and I have already explained it: it is identically zero because of the Grassmann grading.
http://bkanuka.com/articles/native-latex-plots/

### Bennett Kanuka
Using math and software to solve real world problems
# Native Looking matplotlib Plots in LaTeX
I write most of my math/numerical analysis scripts in Python, and I tend to use matplotlib for plotting. When including a matplotlib plot in LaTeX, I got the highest-quality results by saving the plot as a PDF and using \includegraphics{plot.pdf} in LaTeX. However, it bothered me that the plot had different fonts and font sizes than the rest of the document. Here's how I fixed that.
## Figure Width
I always choose the size of my plots as a percentage of the text width, for example width=0.6\textwidth. This allows me to use 0.3\textwidth for images that are going to be side-by-side and not worry about absolute sizes. We want matplotlib to output a plot of the right size, so we need to find out exactly what the text width is and tell matplotlib. Do this by writing \the\textwidth inside your LaTeX document (inside the document, not the preamble) and running it through pdflatex or whatever LaTeX engine you use. You'll find that LaTeX will replace the command with some number. Record this number.
## Generate Figures
For every LaTeX document that has plots, I write a script figures.py which creates all the plots. Copy the following script into figures.py and save it into the same folder as your LaTeX document. Replace fig_width_pt with whatever number you got from above.
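A minimal sketch of such a script, assuming a measured text width of 390 pt (replace fig_width_pt with the number you recorded; the EMA demo data are stand-ins):

```python
import numpy as np
import matplotlib

# rc changes must come before importing matplotlib.pyplot
fig_width_pt = 390.0                       # value reported by \the\textwidth
inches_per_pt = 1.0 / 72.27                # convert TeX pt to inches
golden_ratio = (np.sqrt(5.0) - 1.0) / 2.0  # aesthetic height/width ratio
fig_width = fig_width_pt * inches_per_pt
fig_height = fig_width * golden_ratio

matplotlib.use("pgf")
matplotlib.rcParams.update({
    "figure.figsize": (fig_width, fig_height),
    "font.family": "serif",
    "text.usetex": True,
    "pgf.rcfonts": False,  # inherit fonts from the LaTeX document
    "font.size": 10,
})

import matplotlib.pyplot as plt

# Demo data: an exponential moving average (EMA) of a noisy series
x = np.random.randn(500).cumsum()
ema = np.empty_like(x)
ema[0] = x[0]
for i in range(1, len(x)):
    ema[i] = 0.9 * ema[i - 1] + 0.1 * x[i]

plt.plot(x, alpha=0.4, label="raw")
plt.plot(ema, label="EMA")
plt.legend()
plt.tight_layout()
plt.savefig("ema.pgf")
plt.savefig("ema.pdf")
```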
You must import matplotlib and make any rc changes before importing matplotlib.pyplot. matplotlib expresses sizes in inches, while LaTeX likes sizes to be in pt, so the first part of this script sets up sizes in matplotlib properly. The figure height is determined by the golden ratio, which is a highly aesthetic ratio (it's a good default).
## LaTeX
Running the above with python figures.py produces two files: ema.pdf and ema.pgf. The PDF file is used just to have a stand-alone version of the plot and make sure everything looks right.
To incorporate the plot into LaTeX, put \usepackage{pgf} in the preamble and insert using \input{ema.pgf}. For example:
```latex
\documentclass{article}
\usepackage{pgf}
\begin{document}
\begin{figure}
  \caption{A simple EMA plot.\label{fig:ema1}}
  \centering
  \input{ema.pgf}
\end{figure}
\end{document}
```
http://mathhelpforum.com/advanced-algebra/212869-solution-infinite-matrix-equation.html

## solution to infinite matrix equation
Let $b_{i,j}\in\mathbb{C}$, and suppose that for each $i$ we have
$\sum_{j=1}^\infty|b_{i,j}|<\infty$ and $\sum_{j=1}^\infty|b_{i,j}|\leq\sum_{j=1}^\infty|b_{i+1,j}|$.
I seek to determine whether a solution $X$ to the equation $AX=B$ exists, where $A,B,X$ are infinite matrices and $a_{i,j}=b_{i+1,j}$. In other words, I seek to show that there exists some $X=(x_{i,j})$ with complex entries satisfying the following equation:
$\begin{bmatrix}b_{2,1}&b_{2,2}&b_{2,3}&\cdots\\b_{3,1}&b_{3,2}&b_{3,3}&\cdots\\b_{4,1}&b_{4,2}&b_{4,3}&\cdots\\\vdots&\vdots&\vdots&\end{bmatrix}\begin{bmatrix}x_{1,1}&x_{1,2}&x_{1,3}&\cdots\\x_{2,1}&x_{2,2}&x_{2,3}&\cdots\\x_{3,1}&x_{3,2}&x_{3,3}&\cdots\\\vdots&\vdots&\vdots&\end{bmatrix}=\begin{bmatrix}b_{1,1}&b_{1,2}&b_{1,3}&\cdots\\b_{2,1}&b_{2,2}&b_{2,3}&\cdots\\b_{3,1}&b_{3,2}&b_{3,3}&\cdots\\\vdots&\vdots&\vdots&\end{bmatrix}$
What tools do I use to deal with a problem like this? Clearly, if $A$ is invertible then $X=A^{-1}B$ exists. But how do I show that an infinite matrix is invertible? If $A$ is not invertible, do I have other options?
Any help would be much appreciated, thanks!
https://www.percentagecal.com/answer/112500-is-what-percent-of-150000

Solution for "112500 is what percent of 150000":

$112500 : 150000 \cdot 100 = (112500 \cdot 100) : 150000 = 11250000 : 150000 = 75$

Now we have: 112500 is 75 percent of 150000.

Question: 112500 is what percent of 150000?

Percentage solution with steps:

Step 1: We make the assumption that 150000 is 100%, since it is our output value.

Step 2: We next represent the value we seek with $x$.

Step 3: From step 1, it follows that $100\% = 150000$.

Step 4: In the same vein, $x\% = 112500$.

Step 5: This gives us a pair of simple equations:

$100\% = 150000 \quad (1)$

$x\% = 112500 \quad (2)$

Step 6: Dividing equation (1) by equation (2), and noting that the left-hand sides of both equations have the same unit (%), we have

$\frac{100\%}{x\%} = \frac{150000}{112500}$

Step 7: Taking the inverse (or reciprocal) of both sides yields

$\frac{x\%}{100\%} = \frac{112500}{150000} \Rightarrow x = 75\%$

Therefore, $112500$ is $75\%$ of $150000$.
Solution for "150000 is what percent of 112500":

$150000 : 112500 \cdot 100 = (150000 \cdot 100) : 112500 = 15000000 : 112500 = 133.33$

Now we have: 150000 is 133.33 percent of 112500.

Question: 150000 is what percent of 112500?

Percentage solution with steps:

Step 1: We make the assumption that 112500 is 100%, since it is our output value.

Step 2: We next represent the value we seek with $x$.

Step 3: From step 1, it follows that $100\% = 112500$.

Step 4: In the same vein, $x\% = 150000$.

Step 5: This gives us a pair of simple equations:

$100\% = 112500 \quad (1)$

$x\% = 150000 \quad (2)$

Step 6: Dividing equation (1) by equation (2), and noting that the left-hand sides of both equations have the same unit (%), we have

$\frac{100\%}{x\%} = \frac{112500}{150000}$

Step 7: Taking the inverse (or reciprocal) of both sides yields

$\frac{x\%}{100\%} = \frac{150000}{112500} \Rightarrow x = 133.33\%$

Therefore, $150000$ is $133.33\%$ of $112500$.
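As a quick cross-check, the same computations in a few lines of Python:

```python
def percent_of(part, whole):
    """Return what percent `part` is of `whole`."""
    return part / whole * 100

print(percent_of(112500, 150000))            # 75.0
print(round(percent_of(150000, 112500), 2))  # 133.33
```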
https://econsa.readthedocs.io/en/latest/tutorials/uncertainty-propagation.html
# Uncertainty propagation
We show how to conduct uncertainty propagation for the EOQ model. We can simply import the core function from temfpy.
[1]:
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
import chaospy as cp
from temfpy.uncertainty_quantification import eoq_model
from econsa.correlation import gc_correlation
## Setup
We specify a uniform distribution centered around $$\mathbf{x^0}=(M, C, S) = (1230, 0.0135, 2.15)$$ and spread the support 10% above and below the center.
[2]:
marginals = list()
for center in [1230, 0.0135, 2.15]:
    lower, upper = 0.9 * center, 1.1 * center
    marginals.append(cp.Uniform(lower, upper))
## Independent parameters
We now construct a joint distribution for the independent input parameters and draw a sample of $$10,000$$ random draws.
[3]:
distribution = cp.J(*marginals)
sample = distribution.sample(10000, rule="random")
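The notebook's plot_joint helper (cell [4]) is reconstructed here as a minimal sketch; the seaborn call is an assumption:

[4]:

def plot_joint(sample):
    # Joint scatter of the first two parameters, M and C
    sns.jointplot(x=sample[0], y=sample[1])
    plt.show()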
We briefly inspect the joint distribution of $$M$$ and $$C$$.
[5]:
plot_joint(sample)
We are now ready to compute the optimal economic order quantity for each draw.
[6]:
y = eoq_model(sample)
This results in the following distribution $$f_{Y}$$.
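The plot_quantity helper (cell [7]) is likewise reconstructed as a minimal sketch; the seaborn call is an assumption:

[7]:

def plot_quantity(y):
    # Distribution f_Y of the economic order quantity
    sns.histplot(y, kde=True, stat="density")
    plt.xlabel("$y$")
    plt.show()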
[8]:
plot_quantity(y)
## Dependent parameters
We now consider dependent parameters with the following correlation matrix.
[9]:
corr = [[1.0, 0.6, 0.2], [0.6, 1.0, 0.0], [0.2, 0.0, 1.0]]
We approximate their joint distribution using a Gaussian copula. This requires us to map the correlation matrix of the parameters to the correlation matrix of the copula.
[10]:
corr_copula = gc_correlation(marginals, corr)
copula = cp.Nataf(distribution, corr_copula)
We are ready to sample from the distribution.
[11]:
sample = copula.sample(10000, rule="random")
Again, we briefly inspect the joint distribution which now clearly shows a dependence pattern.
[12]:
plot_joint(sample)
[13]:
y = eoq_model(sample)
This now results in a distribution of $$f_{Y}$$ where the peak is flattened out.
[14]:
plot_quantity(y)
http://nanosun.net/eadrjt8m/5vqj0.php?tag=integral-symbol-copy-%26-paste-d68fba

Integral symbol to copy and paste: ∫

To enter a numeral or symbol by its Alt code, press and hold the ALT key and type the number of the symbol you want.

How to type Integral With Overbar (or Integral With Underbar) in Word: copy the character from the table above (it can be copied with a mouse click) and paste it into Word; or select the Insert tab, select Symbol and then More Symbols, and choose the Integral With Overbar (or Integral With Underbar) tab in the Symbol window.

Alternatively, enable Math AutoCorrect: go to Word Options, open the Proofing tab, click AutoCorrect Options, switch to the Math AutoCorrect tab, and tick "Use Math AutoCorrect". You can then type \int to get the integral sign.

To add an integral form of Gauss's law: under Equation Tools, on the Design tab, in the Structures group, click the Integral button. (On adding an equation to your document, see Working with Microsoft Equation.)

The Square root symbol can likewise be copied and pasted: select the character with your mouse as plain text (click and drag to highlight, or double-click the symbol), copy it, and paste it into your work. This works irrespective of the software you are using.

Infinity symbol to copy and paste: ∞. Infinity is something we are introduced to in our math classes; it describes objects or concepts that have no limits or size, and it is also used in physics, philosophy, and the social sciences.

On its origins: the symbol ∫ is used to denote the integral in mathematics. The notation was introduced by the German mathematician Gottfried Wilhelm Leibniz towards the end of the 17th century.

The letter σ, also covered here, can denote: the Stefan–Boltzmann constant (physics); standard deviation (mathematics, statistics); the sum of divisors (mathematics); a braid group algebra (mathematics); a syllable (linguistics, phonology); a cross section (physics, scattering); the select operation (spatial databases); or a shielding constant.
http://mathhelpforum.com/advanced-algebra/109847-coordinate-vector-help.html

1. coordinate vector help
Let $T:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}$ be the linear transformation defined by $T(x,y,z)=(2x+y,x+2z,x+y+z)$.
Q: Let $\beta=\{(1,0,1),(1,1,0),(0,1,1)\}$. Find the coordinate vectors of $e_{1},e_{2},e_{3}$ relative to $\beta$
Not sure how to put everything together, some guidance would be greatly appreciated.
2. Originally Posted by LexyLeia
Let $T:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3}$ be the linear transformation defined by $T(x,y,z)=(2x+y,x+2z,x+y+z)$.
Q: Let $\beta=\{(1,0,1),(1,1,0),(0,1,1)\}$. Find the coordinate vectors of $e_{1},e_{2},e_{3}$ relative to $\beta$
Not sure how to put everything together, some guidance would be greatly appreciated.
The coordinate vector of any vector v is the ordered triple (a, b, c) such that a(1,0,1) + b(1,1,0) + c(0,1,1) = v.
So you need (a, b, c) such that a(1,0,1) + b(1,1,0) + c(0,1,1) = (1, 0, 0).
That is the same as saying (a, 0, a) + (b, b, 0) + (0, c, c) = (a+b, b+c, a+c) = (1, 0, 0), i.e. a+b = 1, b+c = 0, a+c = 0. Solve those equations for a, b, and c.
"T" has nothing to do with this question!
3. Originally Posted by HallsofIvy
So you need (a, b, c) such that a(1,0,1) + b(1,1,0) + c(0,1,1) = (1, 0, 0).
Thanks for the response. How did you get the vector v=(1,0,0)?
4. You said "find the coordinate vector of $e_1$, $e_2$, and $e_3$", and those are the usual notations for the standard basis vectors of $R^3$: (1, 0, 0), (0, 1, 0), and (0, 0, 1), respectively.
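As a quick numerical check with numpy: the coordinate vectors of $e_1$, $e_2$, $e_3$ relative to $\beta$ are the columns of $B^{-1}$, where $B$ has the vectors of $\beta$ as its columns.

```python
import numpy as np

# Columns of B are the basis vectors of beta
B = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)

coords = np.linalg.inv(B)  # column j is the coordinate vector of e_{j+1}
print(coords)
# e.g. e_1 = 1/2*(1,0,1) + 1/2*(1,1,0) - 1/2*(0,1,1)
```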
https://www.albert.io/ie/algebra/horizontal-asymptotes-for-rational-functions-9
Easy
# Horizontal Asymptotes for Rational Functions 9
ALGEBR-J14AIG
Consider the dashed lines shown in the graphs.
Which best represents the graph of a horizontal asymptote for the rational function?
[Answer choices A–D were graphs; the images are not preserved here.]
https://www.stevenabbott.co.uk/practical-adhesion/butt-test.php

## Butt Test
### Quick Start
The butt joint is a great example of how NOT to think about adhesion. Countless papers report a force F needed to break an area A and so quote a value of F/A in MPa. Butt joints do not fail evenly across the area. They fail with forces concentrated at the edges, and small defects, or small deviations from a 90° pull, can cause large concentrations of forces and earlier failure. So the quoted MPa values are meaningless.
Playing with surface energy W, A and modulus E may not cause much surprise, but the thickness of the adhesive layer, d, has an unexpected effect: the thinner it is the better.
### Butt Test
The Butt joint is not the greatest of joints and the Butt test is not the greatest of tests. The idealised version of it from Kendall's The Sticky Universe is included here because it is another reminder that testing doesn't test what you think it's testing. A related, but different, pull (Butt) test is described on the Weak and Strong page.
Take an adhesive of modulus E (and therefore a bulk modulus $K = E/(3(1-2\nu))$, where ν is the Poisson ratio, taken here to be 0.33), a Work of Adhesion W and thickness of d. Assume the joint has a cross-sectional area of A (i.e. $\pi a^2$). Then the force needed to pull it apart is given by:
$$F = A\sqrt{\frac{KW}{d}}$$
Famously this shows that an adhesive of zero thickness requires an infinite force, so not only do you save on cost of adhesive but you get a very strong joint. Clearly the assumptions behind the model break down at very low values of d. But for small thicknesses, such joints can be amazingly strong - provided you stay strictly in pull mode. If you pull with a slight tilt then the bond will break (in this idealised system) with a peel force of $\sim W A^{1/2}$.
For guidance, a typical strong adhesive (such as an epoxy) has a modulus of ~1 GPa.
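To get a feel for the numbers, a small calculation with illustrative values (the W, d, and A values below are assumptions, not from this page):

```python
import math

def butt_force(E, nu, W, d, A):
    """Idealised butt-joint pull-off force F = A*sqrt(K*W/d)."""
    K = E / (3 * (1 - 2 * nu))  # bulk modulus from E and the Poisson ratio
    return A * math.sqrt(K * W / d)

# Epoxy-like adhesive: E = 1 GPa, nu = 0.33, W = 0.1 J/m^2,
# glue line d = 100 um, joint area A = 1 cm^2
F = butt_force(E=1e9, nu=0.33, W=0.1, d=100e-6, A=1e-4)
print(f"{F:.0f} N")  # ~ 99 N
```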
https://www.marbk-40h.win/w/index.php?title=Zero-width_non-joiner&action=edit&section=6

# Optimal control
Optimal control theory is a branch of applied mathematics that deals with finding a control law for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in both science and engineering. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure.[1] Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy.
Optimal control is an extension of the calculus of variations, and is a mathematical optimization method for deriving control policies.[2] The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to calculus of variations by Edward J. McShane.[3] Optimal control can be seen as a control strategy in control theory.
## General method
Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of state and control variables. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost function. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition also known as Pontryagin's minimum principle or simply Pontryagin's Principle),[4] or by solving the Hamilton–Jacobi–Bellman equation (a sufficient condition).
We begin with a simple example. Consider a car traveling in a straight line on a hilly road. The question is, how should the driver press the accelerator pedal in order to minimize the total traveling time? In this example, the term control law refers specifically to the way in which the driver presses the accelerator and shifts the gears. The system consists of both the car and the road, and the optimality criterion is the minimization of the total traveling time. Control problems usually include ancillary constraints. For example, the amount of available fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, speed limits, etc.
A proper cost function will be a mathematical expression giving the traveling time as a function of the speed, geometrical considerations, and initial conditions of the system. Constraints are often interchangeable with the cost function.
Another related optimal control problem may be to find the way to drive the car so as to minimize its fuel consumption, given that it must complete a given course in a time not exceeding some amount. Yet another related control problem may be to minimize the total monetary cost of completing the trip, given assumed monetary prices for time and fuel.
A more abstract framework goes as follows. Minimize the continuous-time cost functional
$$J = \Phi\big[\mathbf{x}(t_0), t_0, \mathbf{x}(t_f), t_f\big] + \int_{t_0}^{t_f} \mathcal{L}\big[\mathbf{x}(t), \mathbf{u}(t), t\big]\,\mathrm{d}t$$

subject to the first-order dynamic constraints (the state equation)

$$\dot{\mathbf{x}}(t) = \mathbf{a}\big[\mathbf{x}(t), \mathbf{u}(t), t\big],$$

the algebraic path constraints

$$\mathbf{b}\big[\mathbf{x}(t), \mathbf{u}(t), t\big] \leq \mathbf{0},$$

and the boundary conditions

$$\boldsymbol{\phi}\big[\mathbf{x}(t_0), t_0, \mathbf{x}(t_f), t_f\big] = 0$$

where $\mathbf{x}(t)$ is the state, $\mathbf{u}(t)$ is the control, $t$ is the independent variable (generally speaking, time), $t_0$ is the initial time, and $t_f$ is the terminal time. The terms $\Phi$ and $\mathcal{L}$ are called the endpoint cost and Lagrangian, respectively. Furthermore, it is noted that the path constraints are in general inequality constraints and thus may not be active (i.e., equal to zero) at the optimal solution. It is also noted that the optimal control problem as stated above may have multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any solution $[\mathbf{x}^*(t^*), \mathbf{u}^*(t^*), t^*]$ to the optimal control problem is locally minimizing.
## Linear quadratic control

A special case of the general nonlinear optimal control problem given in the previous section is the linear quadratic (LQ) optimal control problem. The LQ problem is stated as follows. Minimize the quadratic continuous-time cost functional

$$J = \tfrac{1}{2}\mathbf{x}^{\mathsf{T}}(t_f)\mathbf{S}_f\mathbf{x}(t_f) + \tfrac{1}{2}\int_{t_0}^{t_f} \big[\,\mathbf{x}^{\mathsf{T}}(t)\mathbf{Q}(t)\mathbf{x}(t) + \mathbf{u}^{\mathsf{T}}(t)\mathbf{R}(t)\mathbf{u}(t)\,\big]\,\mathrm{d}t$$

subject to the linear first-order dynamic constraints

$$\dot{\mathbf{x}}(t) = \mathbf{A}(t)\mathbf{x}(t) + \mathbf{B}(t)\mathbf{u}(t),$$

and the initial condition

$$\mathbf{x}(t_0) = \mathbf{x}_0$$

A particular form of the LQ problem that arises in many control system problems is that of the linear quadratic regulator (LQR) where all of the matrices (i.e., $\mathbf{A}$, $\mathbf{B}$, $\mathbf{Q}$, and $\mathbf{R}$) are constant, the initial time is arbitrarily set to zero, and the terminal time is taken in the limit $t_f \rightarrow \infty$ (this last assumption is what is known as infinite horizon). The LQR problem is stated as follows. Minimize the infinite horizon quadratic continuous-time cost functional

$$J = \tfrac{1}{2}\int_{0}^{\infty} \big[\,\mathbf{x}^{\mathsf{T}}(t)\mathbf{Q}\mathbf{x}(t) + \mathbf{u}^{\mathsf{T}}(t)\mathbf{R}\mathbf{u}(t)\,\big]\,\mathrm{d}t$$

subject to the linear time-invariant first-order dynamic constraints

$$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}\mathbf{u}(t),$$

and the initial condition

$$\mathbf{x}(t_0) = \mathbf{x}_0$$

In the finite-horizon case the matrices are restricted in that $\mathbf{Q}$ and $\mathbf{R}$ are positive semi-definite and positive definite, respectively. In the infinite-horizon case, however, the matrices $\mathbf{Q}$ and $\mathbf{R}$ are not only positive-semidefinite and positive-definite, respectively, but are also constant. These additional restrictions on $\mathbf{Q}$ and $\mathbf{R}$ in the infinite-horizon case are enforced to ensure that the cost functional remains positive. Furthermore, in order to ensure that the cost function is bounded, the additional restriction is imposed that the pair $(\mathbf{A}, \mathbf{B})$ is controllable. Note that the LQ or LQR cost functional can be thought of physically as attempting to minimize the control energy (measured as a quadratic form).

The infinite horizon problem (i.e., LQR) may seem overly restrictive and essentially useless because it assumes that the operator is driving the system to zero-state and hence driving the output of the system to zero. This is indeed correct. However, the problem of driving the output to a desired nonzero level can be solved after the zero output one is. In fact, it can be proved that this secondary LQR problem can be solved in a very straightforward manner. It has been shown in classical optimal control theory that the LQ (or LQR) optimal control has the feedback form
$$\mathbf{u}(t) = -\mathbf{K}(t)\mathbf{x}(t)$$

where $\mathbf{K}(t)$ is a properly dimensioned matrix, given as

$$\mathbf{K}(t) = \mathbf{R}^{-1}\mathbf{B}^{\mathsf{T}}\mathbf{S}(t),$$

and $\mathbf{S}(t)$ is the solution of the differential Riccati equation. The differential Riccati equation is given as

$$\dot{\mathbf{S}}(t) = -\mathbf{S}(t)\mathbf{A} - \mathbf{A}^{\mathsf{T}}\mathbf{S}(t) + \mathbf{S}(t)\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^{\mathsf{T}}\mathbf{S}(t) - \mathbf{Q}$$

For the finite horizon LQ problem, the Riccati equation is integrated backward in time using the terminal boundary condition

$$\mathbf{S}(t_f) = \mathbf{S}_f$$

For the infinite horizon LQR problem, the differential Riccati equation is replaced with the algebraic Riccati equation (ARE) given as

$$\mathbf{0} = -\mathbf{S}\mathbf{A} - \mathbf{A}^{\mathsf{T}}\mathbf{S} + \mathbf{S}\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^{\mathsf{T}}\mathbf{S} - \mathbf{Q}$$

Understanding that the ARE arises from the infinite horizon problem, the matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{Q}$, and $\mathbf{R}$ are all constant. It is noted that there are in general multiple solutions to the algebraic Riccati equation and the positive definite (or positive semi-definite) solution is the one that is used to compute the feedback gain. The LQ (LQR) problem was elegantly solved by Rudolf Kalman.[5]
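In practice, the ARE solution and the LQR gain take a few lines with scipy; a sketch for an illustrative double-integrator system (the matrices here are stand-ins, not from the article):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: x1' = x2, x2' = u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

S = solve_continuous_are(A, B, Q, R)  # positive-definite ARE solution
K = np.linalg.solve(R, B.T @ S)       # feedback gain K = R^{-1} B^T S
print(K)                              # [[1.0, 1.732...]]
print(np.linalg.eigvals(A - B @ K))   # closed-loop poles in the left half-plane
```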
## Numerical methods for optimal control
Optimal control problems are generally nonlinear and therefore generally do not have analytic solutions (unlike, e.g., the linear-quadratic optimal control problem). As a result, it is necessary to employ numerical methods to solve optimal control problems. In the early years of optimal control (c. 1950s to 1980s) the favored approach for solving optimal control problems was that of indirect methods. In an indirect method, the calculus of variations is employed to obtain the first-order optimality conditions. These conditions result in a two-point (or, in the case of a complex problem, a multi-point) boundary-value problem. This boundary-value problem actually has a special structure because it arises from taking the derivative of a Hamiltonian. Thus, the resulting dynamical system is a Hamiltonian system of the form
$$\begin{array}{lcl}\dot{\mathbf{x}} &=& \partial H/\partial \boldsymbol{\lambda} \\ \dot{\boldsymbol{\lambda}} &=& -\partial H/\partial \mathbf{x}\end{array}$$

where

$$H = \mathcal{L} + \boldsymbol{\lambda}^{\mathsf{T}}\mathbf{a} - \boldsymbol{\mu}^{\mathsf{T}}\mathbf{b}$$

is the augmented Hamiltonian, and in an indirect method the boundary-value problem is solved (using the appropriate boundary or transversality conditions). The beauty of using an indirect method is that the state and adjoint (i.e., $\boldsymbol{\lambda}$) are solved for, and the resulting solution is readily verified to be an extremal trajectory. The disadvantage of indirect methods is that the boundary-value problem is often extremely difficult to solve (particularly for problems that span large time intervals or problems with interior point constraints). A well-known software program that implements indirect methods is BNDSCO.[6]
The approach that has risen to prominence in numerical optimal control since the 1980s is that of so-called direct methods. In a direct method, the state and/or control are approximated using an appropriate function approximation (e.g., polynomial approximation or piecewise constant parameterization). Simultaneously, the cost functional is approximated as a cost function. Then, the coefficients of the function approximations are treated as optimization variables and the problem is "transcribed" to a nonlinear optimization problem of the form:
Minimize

$$F(\mathbf{z})$$

subject to the algebraic constraints

$$\begin{array}{lcl}\mathbf{g}(\mathbf{z}) &=& \mathbf{0} \\ \mathbf{h}(\mathbf{z}) &\leq& \mathbf{0}\end{array}$$
Depending upon the type of direct method employed, the size of the nonlinear optimization problem can be quite small (e.g., as in a direct shooting or quasilinearization method), moderate (e.g., pseudospectral optimal control[7]) or may be quite large (e.g., a direct collocation method[8]). In the latter case (i.e., a collocation method), the nonlinear optimization problem may be literally thousands to tens of thousands of variables and constraints. Given the size of many NLPs arising from a direct method, it may appear somewhat counter-intuitive that solving the nonlinear optimization problem is easier than solving the boundary-value problem. It is, however, a fact that the NLP is easier to solve than the boundary-value problem. The reason for the relative ease of computation, particularly of a direct collocation method, is that the NLP is sparse and many well-known software programs exist (e.g., SNOPT[9]) to solve large sparse NLPs. As a result, the range of problems that can be solved via direct methods (particularly direct collocation methods, which are very popular these days) is significantly larger than the range of problems that can be solved via indirect methods. In fact, direct methods have become so popular these days that many people have written elaborate software programs that employ these methods. In particular, many such programs include DIRCOL,[10] SOCS,[11] OTIS,[12] GESOP/ASTOS,[13] DITAN,[14] and PyGMO/PyKEP.[15] In recent years, due to the advent of the MATLAB programming language, optimal control software in MATLAB has become more common. Examples of academically developed MATLAB software tools implementing direct methods include RIOTS,[16] DIDO,[17] DIRECT,[18] FALCON.m,[19] and GPOPS,[20] while an example of an industry-developed MATLAB tool is PROPT.[21] These software tools have increased significantly the opportunity for people to explore complex optimal control problems both for academic research and industrial problems. Finally, it is noted that general-purpose MATLAB optimization environments such as TOMLAB have made coding complex optimal control problems significantly easier than was previously possible in languages such as C and FORTRAN.
## Discrete-time optimal control
The examples thus far have shown continuous time systems and control solutions. In fact, as optimal control solutions are now often implemented digitally, contemporary control theory is now primarily concerned with discrete time systems and solutions. The Theory of Consistent Approximations[22] provides conditions under which solutions to a series of increasingly accurate discretized optimal control problems converge to the solution of the original, continuous-time problem. Not all discretization methods have this property, even seemingly obvious ones. For instance, using a variable step-size routine to integrate the problem's dynamic equations may generate a gradient which does not converge to zero (or point in the right direction) as the solution is approached. The direct method RIOTS is based on the Theory of Consistent Approximation.
## Examples
A common solution strategy in many optimal control problems is to solve for the costate (sometimes called the shadow price) ${\displaystyle \lambda (t)}$. The costate summarizes in one number the marginal value of expanding or contracting the state variable next turn. The marginal value is not only the gains accruing next turn but also all the future gains associated with the remaining duration of the program. It is nice when ${\displaystyle \lambda (t)}$ can be solved analytically, but usually the most one can do is describe it sufficiently well that the intuition can grasp the character of the solution and an equation solver can solve numerically for the values.
Having obtained ${\displaystyle \lambda (t)}$, the turn-t optimal value of the control can usually be solved from the first-order conditions, given knowledge of ${\displaystyle \lambda (t)}$. Again it is infrequent, especially in continuous-time problems, that one obtains the value of the control or the state explicitly. Usually, the strategy is to solve for thresholds and regions that characterize the optimal control and use a numerical solver to isolate the actual choice values in time.
### Finite time
Consider the problem of a mine owner who must decide at what rate to extract ore from their mine. They own rights to the ore from date ${\displaystyle 0}$ to date ${\displaystyle T}$. At date ${\displaystyle 0}$ there is ${\displaystyle x_{0}}$ ore in the ground, and the time-dependent amount of ore ${\displaystyle x(t)}$ left in the ground declines at the rate ${\displaystyle u(t)}$ at which the mine owner extracts it. The mine owner extracts ore at cost ${\displaystyle u(t)^{2}/x(t)}$ (the cost of extraction increasing with the square of the extraction speed and the inverse of the amount of ore left) and sells ore at a constant price ${\displaystyle p}$. Any ore left in the ground at time ${\displaystyle T}$ cannot be sold and has no value (there is no "scrap value"). The owner chooses the rate of extraction varying with time ${\displaystyle u(t)}$ to maximize profits over the period of ownership with no time discounting.
1. Discrete-time version

The manager maximizes profit ${\displaystyle \Pi }$:

${\displaystyle \Pi =\sum \limits _{t=0}^{T-1}\left[pu_{t}-{\frac {u_{t}^{2}}{x_{t}}}\right]}$

subject to the law of evolution for the state variable ${\displaystyle x_{t}}$

${\displaystyle x_{t+1}-x_{t}=-u_{t}\!}$

Form the Hamiltonian and differentiate:

${\displaystyle H=pu_{t}-{\frac {u_{t}^{2}}{x_{t}}}-\lambda _{t+1}u_{t}}$

${\displaystyle {\frac {\partial H}{\partial u_{t}}}=p-\lambda _{t+1}-2{\frac {u_{t}}{x_{t}}}=0}$

${\displaystyle \lambda _{t+1}-\lambda _{t}=-{\frac {\partial H}{\partial x_{t}}}=-\left({\frac {u_{t}}{x_{t}}}\right)^{2}}$

As the mine owner does not value the ore remaining at time ${\displaystyle T}$,

${\displaystyle \lambda _{T}=0\!}$

Using the above equations, it is easy to solve for the ${\displaystyle x_{t}}$ and ${\displaystyle \lambda _{t}}$ series

${\displaystyle \lambda _{t}=\lambda _{t+1}+{\frac {(p-\lambda _{t+1})^{2}}{4}}}$

${\displaystyle x_{t+1}=x_{t}{\frac {2-p+\lambda _{t+1}}{2}}}$

and using the initial and turn-T conditions, the ${\displaystyle x_{t}}$ series can be solved explicitly, giving ${\displaystyle u_{t}}$.

2. Continuous-time version

The manager maximizes profit ${\displaystyle \Pi }$:

${\displaystyle \Pi =\int \limits _{0}^{T}\left[pu(t)-{\frac {u(t)^{2}}{x(t)}}\right]dt}$

where the state variable ${\displaystyle x(t)}$ evolves as follows:

${\displaystyle {\dot {x}}(t)=-u(t)}$

Form the Hamiltonian and differentiate:

${\displaystyle H=pu(t)-{\frac {u(t)^{2}}{x(t)}}-\lambda (t)u(t)}$

${\displaystyle {\frac {\partial H}{\partial u}}=p-\lambda (t)-2{\frac {u(t)}{x(t)}}=0}$

${\displaystyle {\dot {\lambda }}(t)=-{\frac {\partial H}{\partial x}}=-\left({\frac {u(t)}{x(t)}}\right)^{2}}$

As the mine owner does not value the ore remaining at time ${\displaystyle T}$,

${\displaystyle \lambda (T)=0}$

Using the above equations, it is easy to solve for the differential equations governing ${\displaystyle u(t)}$ and ${\displaystyle \lambda (t)}$

${\displaystyle {\dot {\lambda }}(t)=-{\frac {(p-\lambda (t))^{2}}{4}}}$

${\displaystyle u(t)=x(t){\frac {p-\lambda (t)}{2}}}$

and using the initial and turn-T conditions, the functions can be solved to yield

${\displaystyle x(t)={\frac {(4-pt+pT)^{2}}{(4+pT)^{2}}}x_{0}}$
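The discrete-time recursions above are easy to check numerically. A quick sketch (added here; the values of ${\displaystyle p}$, ${\displaystyle T}$, and ${\displaystyle x_{0}}$ are illustrative assumptions):

```python
# Backward pass for the costate, forward pass for the state and control,
# following the recursions derived in the discrete-time version above.
p, T, x0 = 1.0, 10, 100.0        # illustrative assumptions

# Backward: lambda_T = 0 and lambda_t = lambda_{t+1} + (p - lambda_{t+1})^2 / 4
lam = [0.0] * (T + 1)
for t in range(T - 1, -1, -1):
    lam[t] = lam[t + 1] + (p - lam[t + 1]) ** 2 / 4.0

# Forward: u_t = x_t (p - lambda_{t+1}) / 2 and x_{t+1} = x_t - u_t
x, u = [x0], []
for t in range(T):
    u.append(x[t] * (p - lam[t + 1]) / 2.0)
    x.append(x[t] - u[t])

profit = sum(p * u[t] - u[t] ** 2 / x[t] for t in range(T))
print(profit, x[-1])   # total profit and the ore left unsold at time T
```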
## References
1. ^ Luenberger, David G. (1979). "Optimal Control". Introduction to Dynamic Systems. New York: John Wiley & Sons. pp. 393–435. ISBN 0-471-02594-1.
2. ^ Sargent, R. W. H. (2000). "Optimal Control". Journal of Computational and Applied Mathematics. 124 (1–2): 361–371. Bibcode:2000JCoAM.124..361S. doi:10.1016/S0377-0427(00)00418-0.
3. ^ Bryson, A. E. (1996). "Optimal Control—1950 to 1985". IEEE Control Systems Magazine. 16 (3): 26–33. doi:10.1109/37.506395.
4. ^ Ross, I. M. (2009). A Primer on Pontryagin's Principle in Optimal Control. Collegiate Publishers. ISBN 978-0-9843571-0-9.
5. ^ Kalman, Rudolf. A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal of Basic Engineering, 82:34–45, 1960
6. ^ Oberle, H. J. and Grimm, W., "BNDSCO-A Program for the Numerical Solution of Optimal Control Problems," Institute for Flight Systems Dynamics, DLR, Oberpfaffenhofen, 1989
7. ^ Ross, I. M.; Karpenko, M. (2012). "A Review of Pseudospectral Optimal Control: From Theory to Flight". Annual Reviews in Control. 36 (2): 182–197. doi:10.1016/j.arcontrol.2012.09.002.
8. ^ Betts, J. T. (2010). Practical Methods for Optimal Control Using Nonlinear Programming (2nd ed.). Philadelphia, Pennsylvania: SIAM Press. ISBN 978-0-89871-688-7.
9. ^ Gill, P. E., Murray, W. M., and Saunders, M. A., User's Manual for SNOPT Version 7: Software for Large-Scale Nonlinear Programming, University of California, San Diego Report, 24 April 2007
10. ^ von Stryk, O., User's Guide for DIRCOL (version 2.1): A Direct Collocation Method for the Numerical Solution of Optimal Control Problems, Fachgebiet Simulation und Systemoptimierung (SIM), Technische Universität Darmstadt (2000, Version of November 1999).
11. ^ Betts, J.T. and Huffman, W. P., Sparse Optimal Control Software, SOCS, Boeing Information and Support Services, Seattle, Washington, July 1997
12. ^ Hargraves, C. R.; Paris, S. W. (1987). "Direct Trajectory Optimization Using Nonlinear Programming and Collocation". Journal of Guidance, Control, and Dynamics. 10 (4): 338–342. Bibcode:1987JGCD...10..338H. doi:10.2514/3.20223.
13. ^ Gath, P.F., Well, K.H., "Trajectory Optimization Using a Combination of Direct Multiple Shooting and Collocation", AIAA 2001–4047, AIAA Guidance, Navigation, and Control Conference, Montréal, Québec, Canada, 6–9 August 2001
14. ^ Vasile M., Bernelli-Zazzera F., Fornasari N., Masarati P., "Design of Interplanetary and Lunar Missions Combining Low-Thrust and Gravity Assists", Final Report of the ESA/ESOC Study Contract No. 14126/00/D/CS, September 2002
15. ^ Izzo, Dario. "PyGMO and PyKEP: open source tools for massively parallel optimization in astrodynamics (the case of interplanetary trajectory optimization)." Proceed. Fifth International Conf. Astrodynam. Tools and Techniques, ICATT. 2012.
16. ^ RIOTS, based on Schwartz, Adam (1996). Theory and Implementation of Methods based on Runge–Kutta Integration for Solving Optimal Control Problems (Ph.D.). University of California at Berkeley. OCLC 35140322.
17. ^ Ross, I. M. and Fahroo, F., User's Manual for DIDO: A MATLAB Package for Dynamic Optimization, Dept. of Aeronautics and Astronautics, Naval Postgraduate School Technical Report, 2002
18. ^ Williams, P., User's Guide to DIRECT, Version 2.00, Melbourne, Australia, 2008
19. ^ FALCON.m, described in Rieck, M., Bittner, M., Grüter, B., Diepolder, J., and Piprek, P., FALCON.m - User Guide, Institute of Flight System Dynamics, Technical University of Munich, October 2019
20. ^ GPOPS, described in Rao, A. V., Benson, D. A., Huntington, G. T., Francolin, C., Darby, C. L., and Patterson, M. A., User's Manual for GPOPS: A MATLAB Package for Dynamic Optimization Using the Gauss Pseudospectral Method, University of Florida Report, August 2008.
21. ^ Rutquist, P. and Edvall, M. M, PROPT – MATLAB Optimal Control Software," 1260 S.E. Bishop Blvd Ste E, Pullman, WA 99163, USA: Tomlab Optimization, Inc.
22. ^ E. Polak, On the use of consistent approximations in the solution of semi-infinite optimization and optimal control problems Math. Prog. 62 pp. 385–415 (1993). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 89, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8217273354530334, "perplexity": 729.5310055403088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505550.17/warc/CC-MAIN-20200401065031-20200401095031-00134.warc.gz"} |
https://brilliant.org/practice/functions-level-5-challenges/?subtopic=functions&chapter=functions | Algebra
# Functions: Level 5 Challenges
Let $$f(x)$$ be a cubic polynomial such that $$f(1) = 5, f(2) = 20, f(3) = 45$$.
Then find the product of roots of the equation below.
$\large [f(x)]^{2} + 3x \ f(x) + 2x^{2} = 0$
Let $$f$$ be a function from the integers to the real numbers such that $f(x) = f(x-1) \cdot f(x+1).$
What is the maximum number of distinct values of $$f(x)$$?
Let $$f(x)$$ be a polynomial. It is known that for all $$x$$,
$\large f(x)f(2x^2) = f(2x^3+x)$
If $$f(0)=1$$ and $$f(2)+f(3)=125$$, find $$f(5)$$.
Given a function $$f$$ for which $f(x) = f(398 - x) = f(2158 - x) = f(3214 - x)$ holds for all real $$x$$, what is the largest number of different values that can appear in the list $$f(0),f(1),f(2),\ldots,f(999)?$$
The functions $$f(x)$$ and $$g(x)$$ are defined $$\mathbb {R^+ \to R}$$ such that $f(x)=\begin{cases} 1-\sqrt{x}\quad \text{x is rational} \\ \quad x^2\quad\quad\text{x is irrational}\end{cases}\\g(x)=\begin{cases} \quad x\quad\quad~~~ \text{x is rational} \\ 1-x\quad\quad\text{x is irrational}\end{cases}$
The composite function $$f \circ g(x)$$ is
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8210959434509277, "perplexity": 62.26151277101057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530060.34/warc/CC-MAIN-20190420220657-20190421002657-00519.warc.gz"}
http://cejsh.icm.edu.pl/cejsh/element/bwmeta1.element.4f4d9629-e84f-325d-a961-894d5aaef0c4 | Journal
## Contemporary Economics
2010 | 4(16) | 25-39
Article title
### Konwergencja i rozklad dochodow w duzych gospodarkach europejskich
Title variants
EN
CONVERGENCE AND DISTRIBUTIONS OF INCOME IN LARGE EUROPEAN ECONOMIES
Languages of publication
PL
Abstracts
EN
The aim of this paper is an empirical analysis of the convergence process in the years 1993-2008 and of the impact of economic growth on income distribution in selected European Union countries; in that sense, the research was conducted from the perspective of EU citizens. The crucial hypothesis of this paper is that convergence measured across entire economies gives a different picture than convergence seen from the perspective of a single citizen of a given country. The analysis was carried out in several stages. Initially, the authors referred to the classical convergence hypotheses (unconditional 'beta' and 'sigma' convergence) within the EU-27; then the same assumptions were examined using population-weighted indicators. The main aim of the research undertaken in this study, however, was to investigate the individual within-country distribution of income for the initial and final period, which allowed the authors to answer the question of whether the faster growth of the 'new EU' was accompanied by a reduction of inequalities within the analyzed economic systems.
Document type
ARTICLE
Contributors
• Puziak Marcin, Uniwersytet Ekonomiczny, Al. Niepodleglosci 10, 61-875 Poznan, Poland
Identifiers
CEJSH db identifier
11PLAAAA091610 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8242519497871399, "perplexity": 3191.392204777067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864019.29/warc/CC-MAIN-20180621020632-20180621040632-00004.warc.gz"} |
http://mathhelpforum.com/statistics/225992-combinations.html | 1. ## combinations
there are 20 teachers: of these, 8 are maths teachers, 6 history teachers, 4 physics teachers and 2 geography teachers.
how many ways can the teachers be chosen if there are to be at least 2 maths teachers?
I know the correct way to do this is: total number of combinations - combinations if there are no maths teachers - combinations if there is 1 maths teacher
or even: combination if there are 2 maths teachers + combinations if there are 3 maths teachers + combinations if there are 4 maths teachers
But what's wrong with the following approach? It seems to make sense: first choose 2 from the 8 maths teachers, and then choose the other 2 from the remaining 18 teachers:
${}^{8}C_{2}\cdot {}^{18}C_{2}$
This gives a completely different answer.
(Edited something i completely mistakenly typed in)
2. ## Re: combinations
Counting combinations means that the order in which the elements are selected doesn't matter. However, by choosing two math teachers first, then 2 teachers (from a pool that include other math teachers), you are adding an order. First you chose two, then you possibly choose more. By adding an ordering to the choices, you are no longer counting combinations.
3. ## Re: combinations
Originally Posted by SlipEternal
Counting combinations means that the order in which the elements are selected doesn't matter. However, by choosing two math teachers first, then 2 teachers (from a pool that include other math teachers), you are adding an order. First you chose two, then you possibly choose more. By adding an ordering to the choices, you are no longer counting combinations.
Thanks! I wasn't able to see that until I wrote down some of the possible outcomes of using this approach with a smaller sample space.
To be clear (this is mostly just for my own benefit lol), what happens is that in the first part I may have chosen maths teacher A and maths teacher B, then in the second part I chose maths teacher C and geography teacher A. In another draw, I choose maths teachers B and C in the first part, then choose maths teacher A and geography teacher A. These 2 draws are the same but are counted as different, because of the arrangement, as SlipEternal said.
4. ## Re: combinations
Originally Posted by muddywaters
there are 20 teachers, of these are 8 maths teachers, 6 history teachers, 4 physics teachers and 2 geography teachers. How many ways can the teachers be chosen if there are to be at least 2 maths teachers?
You must tell how many are to be chosen.
5. ## Re: combinations
Originally Posted by Plato
You must tell how many are to be chosen.
Oh yes sorry it was 4, missed that out.
6. ## Re: combinations
Originally Posted by muddywaters
there are 20 teachers, of these are 8 maths teachers, 6 history teachers, 4 physics teachers and 2 geography teachers. How many ways can the (four) teachers be chosen if there are to be at least 2 maths teachers?
$\binom{20}{4}-\left(\binom{12}{4}+8\binom{11}{3}\right)$ WHY?
7. ## Re: combinations
Originally Posted by Plato
$\binom{20}{4}-\left(\binom{12}{4}+8\binom{11}{3}\right)$ WHY?
Rather, $\binom{20}{4}-\left(\binom{12}{4}+8\binom{12}{3}\right)$.
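(A quick enumeration check of this corrected count follows; it is an added sketch, not part of the original thread.)

```python
# Brute-force check of "at least 2 maths teachers among 4 chosen from 20".
from itertools import combinations
from math import comb

teachers = ["M"] * 8 + ["H"] * 6 + ["P"] * 4 + ["G"] * 2
brute = sum(1 for c in combinations(range(20), 4)
            if sum(teachers[i] == "M" for i in c) >= 2)
formula = comb(20, 4) - (comb(12, 4) + 8 * comb(12, 3))
print(brute, formula)   # both give 2590
```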
Because,
Originally Posted by muddywaters
total number of combinations - combinations if there are no maths teachers - combinations if there is 1 math teacher | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9472745060920715, "perplexity": 1121.4311781474723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982924605.44/warc/CC-MAIN-20160823200844-00250-ip-10-153-172-175.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/1829342/formula-for-sum-i-geq-0-in-choose-2i | Formula for $\sum_{i\geq 0} i{n \choose 2i}$?
So I know that $\sum_{i\geq 0}{n \choose 2i}=2^{n-1}=\sum_{i\geq 0}{n \choose 2i-1}$. However, I need formulas for $\sum_{i\geq 0}i{n \choose 2i}$ and $\sum_{i\geq 0}i{n \choose 2i-1}$. Can anyone point me to a formula with proof for these two sums? My searches thus far have only turned up those first two sums without the $i$ coefficient in the summand. Thanks!
• Hints: Do you know the formula for the generating polynomial $\sum_i {n \choose 2i} x^i$? Do you know how to get at the sum of $i$ times the $x^i$ coefficient of a polynomial? – Noam D. Elkies Jun 17 '16 at 3:27
• $\sum_{i\ \geq\ 0}{n \choose 2i} = 2^{n - 1} + {1 \over 2}\,\delta_{n0}$ to agree with the case $n = 0$. – Felix Marin Jun 19 '16 at 8:36
3 Answers
Here’s a solution not using generating functions. Let
$$a_n=\sum_kk\binom{n}{2k}\;,$$
the first of your two sums. Suppose that you have a pool of players numbered $1$ through $n$; then $k\binom{n}{2k}$ is the number of ways to choose $2k$ players from the pool to form a team and then designate one of the lowest-numbered $k$ on the team as the captain. Thus, $a_n$ is the number of ways to pick a team with an even number of members and designate one member of the lower-numbered half of the team to be the captain. Note that $k=0$ contributes $0$ to $a_n$, so we may as well consider only $k\ge 1$.
We can choose team and captain in a different way, however. We first pick the player who will be the highest-numbered player in the lower half; if that player’s number is $\ell$, we must have $1\le\ell\le n-1$. For some $k$ between $1$ and $n-\ell$ inclusive we then pick $k$ players numbered above $\ell$ and $k-1$ players numbered below $\ell$. Finally, we pick one of the $k$ chosen players numbered $\ell$ or lower to be the captain. For a given $\ell$ and $k$ this can be done in $k\binom{n-\ell}k\binom{\ell-1}{k-1}$ different ways. Thus,
\begin{align*} a_n&=\sum_{\ell=1}^{n-1}\sum_{k=1}^{n-\ell}k\binom{n-\ell}{k}\binom{\ell-1}{k-1}\\ &=\sum_{\ell=1}^{n-1}(n-\ell)\sum_k\binom{n-1-\ell}{k}\binom{\ell-1}k\tag{1}\\ &=\sum_{\ell=1}^{n-1}(n-\ell)\sum_k\binom{n-1-\ell}{k}\binom{\ell-1}{\ell-1-k}\\ &=\sum_{\ell=1}^{n-1}(n-\ell)\binom{n-2}{\ell-1}\tag{2}\\ &=\sum_{\ell=0}^{n-2}(n-1-\ell)\binom{n-2}\ell\\ &=(n-1)2^{n-2}-\sum_\ell\ell\binom{n-2}\ell\\ &=(n-1)2^{n-2}-(n-2)2^{n-3}\\ &=n2^{n-3}\;. \end{align*}
To get $(1)$ I used the identity $m\binom{n}m=n\binom{n-1}{m-1}$ and shifted the index $k$ by one; there’s no need to specify limits on the inner summation, because examination shows that it’s over all values of $k$ that yield non-zero terms. $(2)$ follows from the Vandermonde identity.
The formula $a_n=n2^{n-3}$ is valid for $n\ge 2$, and clearly $a_0=a_1=0$.
Your second sum is
$$b_n=\sum_{k\ge 0}k\binom{n}{2k-1}=\sum_{k\ge 1}k\binom{n}{2k-1}=\sum_{k\ge 0}(k+1)\binom{n}{2k+1}\;.$$
This corresponds to choosing an odd number $2k+1$ of players for your team and naming a captain from the lowest-numbered $k+1$ members of the team. The alternative calculation is almost the same as before: $\ell$ is the number of the player in the middle of the team when it’s arranged by number, which can be anything from $1$ to $n$ inclusive, and
\begin{align*} b_n&=\sum_{\ell=1}^n\sum_{k=0}^{n-\ell}(k+1)\binom{n-\ell}{k}\binom{\ell-1}k\\ &=\sum_{\ell=1}^n\sum_{k=0}^{n-\ell}k\binom{n-\ell}k\binom{\ell-1}k+\sum_{\ell=1}^n\sum_{k=0}^{n-\ell}\binom{n-\ell}k\binom{\ell-1}k\\ &=\sum_{\ell=1}^n(n-\ell)\sum_k\binom{n-1-\ell}{k-1}\binom{\ell-1}k+\sum_{\ell=1}^n\binom{n-1}{\ell-1}\\ &=\sum_{\ell=1}^n(n-\ell)\sum_k\binom{n-1-\ell}k\binom{\ell-1}{k+1}+2^{n-1}\\ &=\sum_{\ell=1}^n(n-\ell)\sum_k\binom{n-1-\ell}k\binom{\ell-1}{\ell-2-k}+2^{n-1}\\ &=\sum_{\ell=1}^n(n-\ell)\binom{n-2}{\ell-2}+2^{n-1}\\ &=n\sum_\ell\binom{n-2}\ell-\sum_{\ell=1}^n\ell\binom{n-2}{\ell-2}+2^{n-1}\\ &=n2^{n-2}-\sum_\ell(\ell+2)\binom{n-2}\ell+2^{n-1}\\ &=n2^{n-2}-2\sum_\ell\binom{n-2}\ell-\sum_\ell\ell\binom{n-2}\ell+2^{n-1}\\ &=n2^{n-2}-2^{n-1}-(n-2)\sum_\ell\binom{n-3}{\ell-1}+2^{n-1}\\ &=n2^{n-2}-(n-2)2^{n-3}\\ &=(n+2)2^{n-3}\;, \end{align*}
valid for $n\ge 2$. Clearly $b_1=1$.
As a matter of possible interest, these sequences are essentially OEIS A001792 and OEIS A045623, though with different starting points in each case. Thus, $a_n$ turns out to be (among many other things) the number of parts in all compositions of $n-1$, and $b_n$ to be the number of ones in all compositions of $n$. Many other interpretations and many references can be found at the OEIS links.
It’s also not hard to verify that $\sum_{k=1}^nb_k=a_{n+1}$ for $n\ge 1$.
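A quick numerical check of both closed forms (an added sketch, not part of the answer):

```python
# Verify a_n = n * 2^(n-3) and b_n = (n+2) * 2^(n-3) for small n.
from math import comb

def a(n):  # sum over k of k * C(n, 2k)
    return sum(k * comb(n, 2 * k) for k in range(n + 1))

def b(n):  # sum over k of k * C(n, 2k - 1); comb returns 0 when 2k-1 > n
    return sum(k * comb(n, 2 * k - 1) for k in range(1, n + 2))

for n in range(3, 12):
    assert a(n) == n * 2 ** (n - 3)
    assert b(n) == (n + 2) * 2 ** (n - 3)
print("closed forms agree for n = 3..11")
```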
So for $n$ even you can use ${n \choose 2i} = {n \choose n-2i}$ to rewrite the sum as $\sum_i i {n \choose n-2i}=\sum_j \left(\tfrac n2 - j\right){n \choose 2j}$; adding this to the original sum gives $2\sum_i i{n \choose 2i} = \tfrac n2\sum_j {n \choose 2j} = \tfrac n2\, 2^{n-1}$, i.e. $\sum_i i{n \choose 2i} = n2^{n-3}$.
$\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Leftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
Just one !!!. The other ones are similar to the present one.
\begin{align} \color{#f00}{\sum_{i\ \geq\ 0}i{n \choose 2i}} & = \half\sum_{i\ \geq\ 1}2i{n \choose 2i} = \half\sum_{i\ \geq\ 1}i{n \choose i}{1 + \pars{-1}^{i} \over 2} \\[3mm] & = \left.{1 \over 4}\,\partiald{}{x}\sum_{i\ \geq\ 1}{n \choose i}x^{i} \right\vert_{\ x\ =\ 1} - \left.{1 \over 4}\,\partiald{}{x}\sum_{i\ \geq\ 1}{n \choose i}x^{i} \right\vert_{\ x\ =\ -1} \\[3mm] & = \left.{1 \over 4}\,\partiald{}{x}\bracks{\pars{1 + x}^{n} - 1} \right\vert_{\ x\ =\ 1} - \left.{1 \over 4}\,\partiald{}{x}\bracks{\pars{1 + x}^{n} - 1} \right\vert_{\ x\ =\ -1} \\[3mm] & = {1 \over 4}\,n\,2^{n - 1} - {1 \over 4}\,n\,\delta_{n1} = \color{#f00}{{1 \over 4}\pars{2^{n - 1}n - \delta_{n1}}} \end{align} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9581594467163086, "perplexity": 468.591290843729}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250616186.38/warc/CC-MAIN-20200124070934-20200124095934-00514.warc.gz"} |
https://engineering.stackexchange.com/questions/18190/what-is-the-advantage-of-statically-indeterminate-structures/18192 | # What is the advantage of statically indeterminate structures?
Most of the real life structures are statically indeterminate.
What is the benefit of designing a statically indeterminate structure instead of a statically determinate one?
I don't mean only beams and trusses.Per example, the shaft of a merchant vessel is considered as a typical statically indeterminate structure.
• Possible duplicate of Why the design of continuous beam is always more economical than beam with supports? – Wasabi Nov 26 '17 at 19:11
• @Wasabi Sorry I don't know much about statics, I can't understand why this question is a duplicate of the nice link you give – veronika Nov 26 '17 at 20:33
• You'll see the question states that continuous beams are always more economical than isostatic (statically determinate) ones, and the answers mostly agree with that statement and explain why that happens. That's the advantage of statically indeterminate structures: they're cheaper (or, put another way, stronger for the "same price"). – Wasabi Nov 27 '17 at 1:34
• If stress is the limit you go with statically determinant, and if deflection/stiffness is the limit then you go with indeterminant. – ja72 Nov 27 '17 at 3:07
• @ja72 You could write a nice answer – veronika Nov 27 '17 at 5:43
For example, take a look at the following static systems. Assume they have the same length and the same (constant) cross-section. Thus an equal allowed bending moment $M_u$.
The first system is statically determinate, as it is supported only by simple supports. The maximum moment developing within the beam is $M=\frac{QL}{4}$, thus the load under which the beam fails is $$Q_u=4\frac{M_u}{L}$$ This coincides with the formation of a plastic hinge at point B, which leaves the system statically under-determined, i.e. a mechanism.
In the second system, the beam is supported by two clamped supports, which both reduce the maximum moment at the point where the load is applied. If you determine the static forces within the beam you will find that, neglecting residual stresses, the maximum moment $M=\frac{QL}{8}$. Thus, $$Q_u=8\frac{M_u}{L}$$ To turn into a mechanism, three plastic hinges have to be formed, which requires more work. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8055634498596191, "perplexity": 892.5813003721928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316785.68/warc/CC-MAIN-20190822064205-20190822090205-00139.warc.gz"} |
https://iris.unime.it/handle/11570/1433887 | A modal approach for the evaluation of the response sensitivity of structural systems subjected to non-stationary random process
Abstract
A method for the evaluation of the response sensitivity of both classically and non-classically damped discrete linear structural systems under stochastic actions is presented. The proposed approach requires the following items: (a) a suitable modal expansion of the response; (b) the derivation in analytical form of the equations governing the evolution of the derivatives of the response (the so-called sensitivity equations) with respect to the parameters that define the structural model; (c) an extensive use of the Kronecker algebra for determining the analytical expressions of the sensitivity of the structural response statistics to non-stationary random input processes. Moreover, a step-by-step integration scheme able to solve the sensitivity equations is also studied. Handy expressions for the cross-correlations between the input process and the response sensitivities are also derived. A numerical application shows that the proposed approach is suitable to cope with practical problems of engineering interest.
Use this identifier to cite or create a link to this document: `https://hdl.handle.net/11570/1433887`
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9126060009002686, "perplexity": 1917.0095929850602}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494852.95/warc/CC-MAIN-20230127001911-20230127031911-00751.warc.gz"}
http://stats.stackexchange.com/questions/16985/how-to-report-general-precision-in-estimating-correlations-within-a-context-of-j | # How to report general precision in estimating correlations within a context of justifying sample size?
I have a study with 130 participants where most of my analyses involve examining the relative size of correlations between various psychological variables. I'm trying to comply with recommendations that request authors to say something about the statistical power of their study. Statistical power focuses on whether the correlation is significantly different to zero, but the null hypothesis is not particularly interesting in my research. Everything is correlated with everything. It's the relative amount that matters. I'd like to say something in the method section of my report about the precision of estimating the true correlations.
• Is it a good approach to present the 95% confidence intervals on correlations when the correlation is zero, or perhaps some other focal value?
• Or is there a better approach to indicating the precision of estimating correlations in a given sample?
"Statistical power focuses on whether the correlation is significantly different to zero". This is not true. To be able to speak of power, you have to assume point value for the alternative hypothesis. E.g what is the power to detect, on 0.05 alpha level, correlation of magnitude 0.2 or greater? – ttnphns Oct 14 '11 at 8:31
I do think constructing confidence intervals around parameter estimates (such as correlations, among other kinds) is a good thing to do. I strongly recommend it. Moreover, I don't think it should matter if the observed value is 0 or any other focal value. If someone claims that they did a study and found that the correlation was 0, you would want to know something about how confident we can be regarding that answer, and confidence intervals help to provide that information. There is a big difference between $r=0\pm.5$ and $r=0\pm.05$.
The concept of statistical power is defined within the Neyman-Pearson framework. (The Neyman-Pearson framework is often easiest to understand when contrasted with the Fisherian approach; you can find a nice, quick overview of NP from that perspective here.) If you can specify a type I error rate ($\alpha$), a sample size ($N$), and a candidate effect size ($r$), you can calculate the probability of making a type II error ($\beta$) or the probability of correctly rejecting the null hypothesis ($1-\beta$). But if you're not interested in significance testing, I recognize that this conception of power does become less appealing.
However, I gather your criticism is that $r$ will never truly equal 0 within your domain. That is pretty common with observational research, as Meehl (1990) famously pointed out. Thus, testing to see if $r=0$ is testing if the underlying network of causal forces is perfectly balanced, which is typically very unlikely (but see here and here for some counterexamples). Nonetheless, you can take any point value as your null (although this almost never happens in practice). For example, you could do a one-tailed test to see if $r>.3$ (or $< -.3$), which Meehl guesstimated is the level of "ambient correlational noise".
I say these things for the sake of completeness; I'm not trying to push you towards significance testing. You state that "[i]t's the relative amount that matters", a sentiment with which I agree wholeheartedly (see my answer here, for instance). There is another concept, related to power, that would better suit your needs. The framework you are looking for is known as Accuracy in Parameter Estimation, or AIPE (Maxwell et al., 2008), which I recommend. You will want to take a look at the work by Ken Kelley; he describes AIPE thusly:
Accuracy in parameter estimation in this sense is operationalized by obtaining confidence intervals that are sufficiently narrow. A narrow confidence interval provides more information about the population parameter of interest than does a wide interval or a null hypothesis significance test, as the interval reveals whether or not some null value (generally zero) can be rejected and it defines the range of plausible values for the parameter at some specified level of confidence.
References:

Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology, 59, 537-563.

Meehl, P. E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66, 195-244.
+1 Nicely explained and documented. – whuber Jun 12 '12 at 14:31
You can construct a confidence interval for the correlation; see here and here.
I don't understand what you mean by "... 95% confidence intervals on correlations when the correlation is zero, or perhaps some other focal value."
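A minimal added sketch of the usual Fisher z-transform interval (the observed $r$ below is an illustrative assumption; $n=130$ matches the study in the question):

```python
# Fisher z confidence interval for a correlation coefficient.
import numpy as np
from scipy import stats

r, n, conf = 0.30, 130, 0.95
z = np.arctanh(r)                        # Fisher z-transform of r
se = 1.0 / np.sqrt(n - 3)                # approximate standard error of z
zc = stats.norm.ppf(1 - (1 - conf) / 2)  # 1.96 for a 95% interval
lo, hi = np.tanh(z - zc * se), np.tanh(z + zc * se)
print(f"r = {r}, {conf:.0%} CI: [{lo:.3f}, {hi:.3f}]")
```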
What do you think of the merits of the confidence interval approach to talking about precision in estimation? – mycat Oct 14 '11 at 5:39
@mycat - I like to use CIs to indicate how precisely a parameter has been estimated. – Karl Oct 14 '11 at 5:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8038747906684875, "perplexity": 464.32364579437206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375094924.48/warc/CC-MAIN-20150627031814-00243-ip-10-179-60-89.ec2.internal.warc.gz"} |
https://www.joshbevan.com/category/math/ | # Non-linear ODEs
Came across this in my applied math grad course. The correct answer is incredibly simple: the equation of a circle. But in order to arrive at that solution, a fair bit of geometry, algebra, and the solution of a non-linear ODE are required.
A curve passing through (1,2) has the property that the length of a line drawn between the origin and intersecting perpendicular with the normal extending from any point along the curve is always numerically equal to the ordinate of that point on the curve. Find the equation of the curve.
Put another way: “Any curvature has a normal and a tangent. Draw the normal out and draw a line passing from the origin that is perpendicular to that (this line will be parallel to the tangent). The length of this line is equal to the ‘y’ of the point from which the normal emanates.”
Here’s a crude diagram:
Solution strategy is as follows:
We have some function y=f(x). We pick any point on f, and call that point (x,y) and draw a normal to the function at that point. We then draw a line from the origin such that it intersects with the normal perpendicularly at a point (a,b). The length of the vector (a,b) is equal to the ordinate of the point on the function from which the normal emanates, ‘y’. So that |(a,b)| = y.
What function satisfies this condition, and also passes through the point (1,2)?
Here’s the completed solution: BLAM!
Note the necessity of solving a non-linear (!) first order ODE to get the function! This is a special case ODE that has a ready analytic solution and is referred to as a Bernoulli ODE. It takes the general form dy/dx + P(x)y = Q(x)*(y^n).
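An added numerical check: assuming the resulting circle is x² + y² = 5x (which does pass through (1,2)), one can verify the defining property, namely that the distance from the origin to the normal line equals the ordinate of the point:

```python
# Check that dist(origin, normal line at (x, y)) == y on the circle x^2 + y^2 = 5x.
import numpy as np

xs = np.linspace(0.5, 4.5, 9)
ys = np.sqrt(5 * xs - xs**2)       # upper half of the circle
yp = (5 - 2 * xs) / (2 * ys)       # dy/dx by implicit differentiation
# The normal at (x, y) is the line X + yp*Y = x + yp*y, so the distance
# from the origin to it is |x + yp*y| / sqrt(1 + yp^2):
d = np.abs(xs + yp * ys) / np.sqrt(1 + yp**2)
print(np.max(np.abs(d - ys)))      # ~1e-16: the distance equals the ordinate
```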
Here’s a plot of the solution function:
Neato!
Source: Schaum’s Advanced Mathematics for Engineers and Scientists, Ch2 Prob 83
# Orthogonal Functions
After consulting numerous sources I finally found something that clearly and satisfactorily explains the details of orthogonal functions. The concept is the easy part: Two vectors are considered orthogonal if their dot product is zero.
The tough part is the details: This can be generalized to functions, with an inner product of a vector space taking the place of the dot product. How you define this inner product defines the orthogonality conditions and is dependent on the vector space. This is where the useful source comes in:
I have also attached the PowerPoint file should it be lost to the sands of time.
What made it click for me is him drawing direct analogues between the various pieces of the dot product and the inner product. Take Legendre polynomials as an example: instead of Cartesian 3-space directions (i, j, k) your basis is the monomial powers (1, x, x²) in polynomial "space". In general for real functions over the domain [a,b] the inner product is $\langle f, g\rangle = \int_a^b f(x)\,g(x)\,dx$.
Restricting the domain to [-1,1] yields the Legendre polynomials.
These ruminations stem from attempting to solve problem 7.42 in Schaum’s Advanced Mathematics.
`Given the functions $a_0$, $a_1+a_2x$, $a_3+a_4x+a_5x^2$, where $a_0,\ldots,a_5$ are constants, determine the constants so that these functions are mutually orthogonal in (-1,1) and thus obtain the functions.`
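An added sketch of this problem in sympy, assuming the inner product described above, i.e. the integral of the product over (-1, 1):

```python
# Solve the mutual-orthogonality conditions for the three polynomials.
import sympy as sp

x = sp.symbols("x")
a0, a1, a2, a3, a4, a5 = sp.symbols("a0:6")
f1, f2, f3 = a0, a1 + a2 * x, a3 + a4 * x + a5 * x**2

def inner(f, g):
    return sp.integrate(f * g, (x, -1, 1))

sol = sp.solve([inner(f1, f2), inner(f1, f3), inner(f2, f3)],
               [a1, a3, a4], dict=True)
print(sol)  # a1 = 0, a4 = 0, a3 = -a5/3, i.e. f = a0, a2*x, a5*(x**2 - 1/3)
```

Up to scale these are the first three Legendre polynomials, as the restriction to [-1, 1] suggests.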
Attached: Orthogonal | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9287461638450623, "perplexity": 365.5441219887307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360853.31/warc/CC-MAIN-20210228115201-20210228145201-00243.warc.gz"} |
https://physics.stackexchange.com/questions/214179/torque-in-electric-dipole-placed-in-nonuniform-electric-field | # torque in electric dipole placed in nonuniform electric field
In the uniform electric field case, we know the rotational axis is at the midpoint between the positive charge and the negative charge.
However, suppose the electric field is non-uniform; that is, the forces (perpendicular to the electric dipole moment) acting on the positive charge and on the negative charge are not the same.
I guess the rotational axis is no longer in the middle of the two charges, but can I obtain the "new rotational axis", or do we even have such a thing? So how can we obtain the electric dipole moment?
Thank you!
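An added numeric sketch of the situation (the field and all values below are illustrative assumptions, not part of the question):

```python
# With a nonuniform field the two charges feel unequal forces, so the dipole
# experiences a net force as well as a torque.
import numpy as np

q, d, k = 1.0, 0.2, 3.0
center = np.array([1.0, 0.0, 0.0])
r_pos = center + np.array([d / 2, 0.0, 0.0])   # positive charge
r_neg = center - np.array([d / 2, 0.0, 0.0])   # negative charge

def E(r):
    # An assumed curl-free, nonuniform field: E = k*(y, x, 0) = grad(k*x*y)
    return k * np.array([r[1], r[0], 0.0])

F_pos = q * E(r_pos)
F_neg = -q * E(r_neg)
print("net force:", F_pos + F_neg)             # nonzero: the field is nonuniform

tau = np.cross(r_pos - center, F_pos) + np.cross(r_neg - center, F_neg)
p = q * (r_pos - r_neg)                        # dipole moment
print("torque:", tau, "p x E(center):", np.cross(p, E(center)))
# The two torque values agree here because the assumed field is linear in position.
```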
• the field being non uniform the dipole can experience a net force in the direction of larger field as well as a torque . – drvrm Jul 2 '16 at 15:03
• If you put a dipole of moment ${\bf p}$ in a uniform electric field ${\bf E}$, the torque on the dipole is ${\bf \tau} = {\bf p} \times {\bf E}$? – jim Aug 4 '16 at 18:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9535471200942993, "perplexity": 233.93247511805887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736902.24/warc/CC-MAIN-20200806091418-20200806121418-00348.warc.gz"} |
http://mathoverflow.net/questions/61820/nuclearity-of-certain-semigroup-crossed-product-c-algebras/61830 | # Nuclearity of certain semigroup crossed product C*-algebras
This question is related to this question (link).
Suppose we have an (abelian) semigroup $S$ acting by endomorphisms on a $C^*$-algebra $A$, giving rise to a semigroup crossed product $B = A\rtimes S$. Are there nice criteria known which ensure that $B$ is nuclear?
I am most interested in the case where $S$ is abelian and $A$ is abelian and unital.
Of course, when $S$ is actually a group then the case I'm interested in is well known to be nuclear, but because in general sub $C^*$-algebras of nuclear ones don't have to be nuclear, one has to be a little bit careful.
Thanks!
If one is happy to use the (deep) equivalence of nuclearity and amenability for C*-algebras, then Theorem 3 of Rosenberg's paper "Amenability of crossed products of C*-algebras" (Comm Math Phys 1977) has some results, at least when $S$ is the positive integers. – Yemon Choi Apr 15 '11 at 19:07
At least in the case that $S$ is the positive integers, this is discussed in the paper by G. Murphy, "Crossed products of $C^\ast$-algebras by endomorphisms", Int. Eq. and Operator Th. Volume 24, Number 3, 298-319, DOI: 10.1007/BF01204603. His result is that the crossed product is nuclear iff $A$ is nuclear. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8657019138336182, "perplexity": 405.46282954301887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064538.31/warc/CC-MAIN-20150827025424-00075-ip-10-171-96-226.ec2.internal.warc.gz"} |
http://bootmath.com/proof-of-vector-addition-formula.html | # Proof of vector addition formula
Two vectors of lengths $a$ and $b$ make an angle $\theta$ with each other when placed tail to tail. Show that the magnitude of their resultant is :
$$r = \sqrt{ a^2 + b^2 +2ab\cos(\theta)}.$$
I understand that if we placed
the two vectors head-to-tail instead of tail-to-tail, the Law of Cosines dictates that the resultant would be:
$$\sqrt{ a^2 + b^2 -2ab\cos(\theta)}$$
However, in the situation actually described, the direction of vector $a$ has been reversed, which changes the sign of the $2ab$ term without changing the sign of $a^2$. But how do I prove that mathematically?
#### Solutions
You got everything that you need.
Theorem: Given vectors $a$ and $b$ enclosing an angle $\theta$. Then the magnitude of the sum, $|a + b|$, is given by $\sqrt{ a^2 + b^2 +2ab\cos(\theta)}$.
Proof: Assuming that the Law of Cosines works for a case like the following, where $a$ and $b$ are the thick lines. The thin lines are just mirrors of the vectors.
If our vectors are defined like I just stated, this holds:

$$|a + b| = \sqrt{ a^2 + b^2 - 2ab\cos(\theta)}$$
Now we position the vector $b$ at the head of $a$. The interior angle of the resulting triangle is then

$$\theta' = \pi - \theta$$

where $\theta$ is the tail-to-tail angle. And with $\cos(\pi - \theta) = -\cos(\theta)$, the sign of the $2ab$ term flips and we obtain the formula
$$|a + b| = \sqrt{ a^2 + b^2 + 2ab\cos(\theta)}$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8980373740196228, "perplexity": 350.84731294739834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593586.54/warc/CC-MAIN-20180722194125-20180722214125-00310.warc.gz"}
https://www.scientificamerican.com/article/weighty-matters/ | In an age when technologies typically grow obsolete in a few years, it is ironic that almost all the world's measurements of mass (and related phenomena such as energy) depend on a 117-year-old object stored in the vaults of a small laboratory outside Paris, the International Bureau of Weights and Measures. According to the International System of Units (SI), often referred to as the metric system, the kilogram is equal to the mass of this "international prototype of the kilogram" (or IPK)--a precision-fabricated cylinder of platinum-iridium alloy that stands 39 millimeters high and is the same in diameter.
The SI is administered by the General Conference on Weights and Measures and the International Committee for Weights and Measures. During the past several decades the conference has redefined other base SI units (those set by convention and from which all other quantities are derived) to vastly improve their accuracy and thus keep them in step with the advancement of scientific and technological understanding. The standards for the meter and the second, for example, are now founded on natural phenomena. The meter is tied to the speed of light, whereas the second has been related to the frequency of microwaves emitted by a specific element during a certain transition between energy states. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8773828148841858, "perplexity": 704.4817176936149}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321938.75/warc/CC-MAIN-20170627221726-20170628001726-00686.warc.gz"} |
http://www.mathcaptain.com/geometry/arc-length.html | An arc is the part of a circle between two points. Arcs are generally associated with the central angles they subtend, and their measure is expressed as an angular measure, namely the measure of that central angle.
Arc length is a linear measure of an arc measured along the circumference of the circle.
As an arc is a part of the circle, its measure is a fraction of the circumference of the circle (a small numeric sketch of this follows the steps below). Arc length is generally indicated by the letter 'l'.
## Arc Length Definition
Arc length is a linear measure of the arc measured along the circle.
It can be understood that the arc length is a fraction of the circumference of the circle.
To measure the arc length physically:
1. Place a thin wire or thread carefully along the circle.
2. Mark the end points of the arc on the wire or thread.
3. Stretch the wire/thread and measure the distance between the marked points using a straight edge.
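The fraction idea can be written as l = (θ/360) × 2πr for a central angle of θ degrees. A tiny added sketch (the radius and angle are illustrative assumptions):

```python
# Arc length as a fraction of the circumference.
import math

r = 5.0                  # radius
theta_deg = 60.0         # central angle in degrees
l = (theta_deg / 360.0) * (2 * math.pi * r)
print(l)                 # ~5.236
```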
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9820272326469421, "perplexity": 405.8256683892029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507452681.5/warc/CC-MAIN-20141017005732-00362-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/ball-coordinates-to-cartesian-coordinates.35860/ | # Ball coordinates to cartesian coordinates
1. Jul 20, 2004
### martine
I am struggling with the following problem:
give the x, y, z coordinates of the following ball (spherical) points/vectors
1. (r, theta, phi) = (sqrt3, 3/4pi, 3/4pi)
2. (r, theta, phi) = (1, 1/6pi, 1 1/6pi)
the solutions I found in my reader are as follows:
1. (x, y, z) = (-1/2 sqrt3, 1/2 sqrt3, -sqrt3/sqrt2)
2. (x, y, z) = (1/4 sqrt3, -1/4, 1/2 sqrt3)
can someone explain to me what was actually done here? I understand the conversion from Cartesian coordinates to ball and cylinder coordinates, but I can't seem to find the solution for the other way around. Thanks a lot.
2. Jul 20, 2004
### Muzza
These equations might be of some use...
3. Jul 20, 2004
### Galileo
Yep. It seems the angles $$\theta$$ and $$\phi$$ are interchanged though.
It's funny. In my physics books the azimuthal angle is always $$\phi$$ and in most of my mathematics books it's $$\theta$$.
Oh well, guess it doesn't matter as long as you're aware of it.
4. Jul 20, 2004
### ahrkron
Staff Emeritus
I would suggest that, instead of plugging this into a set of "conversion equations", you draw the situation (or even build a little model with a box) so that you see how the quantities are related. Once you do this with one problem, the second will be much easier.
5. Jul 20, 2004
### Muzza
It brings this up.
:P
6. Jul 20, 2004
### MiGUi
That's because notation is not as important as meaning, but we must always specify.
Using astronomy language, I always used $$\theta$$ for "declination" (angle from the vertical axis) and $$\phi$$ for "right ascension" (angle from the horizontal axis, from left to right)
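(As a cross-check of the thread's answers, here is a small sketch of the conversion, using the physics convention in which $$\theta$$ is measured from the z-axis and $$\phi$$ is the azimuth; as noted above, the reader's solutions use the two angles interchanged.)

```python
import math

def spherical_to_cartesian(r, theta, phi):
    # physics convention: theta from the z-axis, phi the azimuthal angle
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

# Problem 1: both angles are 3*pi/4, so the angle swap does not matter here
print(spherical_to_cartesian(math.sqrt(3), 3 * math.pi / 4, 3 * math.pi / 4))
# ~(-0.866, 0.866, -1.2247), i.e. (-sqrt(3)/2, sqrt(3)/2, -sqrt(3)/sqrt(2))
```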
Similar Discussions: Ball coordinates to cartesian coordinates | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8879912495613098, "perplexity": 2667.865432085865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103891.56/warc/CC-MAIN-20170817170613-20170817190613-00249.warc.gz"} |
http://scicomp.stackexchange.com/questions/3381/convergence-of-step-length-in-a-globally-convergent-newton-line-search-method-wi | # Convergence of step-length in a globally-convergent newton line search method with non-degenerate Jacobian
I'm working on a problem given in Nocedal & Wright's Numerical Optimization 2nd Edition, pg 303 Exercise 11.7:
Consider a line-search Newton method in which the step length $\alpha_k$ is chosen to be the exact minimizer of the merit function $f(\cdot)$; that is,
$$\alpha_k = \operatorname{argmin}_\alpha [f(x_k-\alpha J^{-1}(x_k)r(x_k))]$$
Show that if $J(x)$ is non-singular at the solution $x^*$, then $\alpha_k\rightarrow 1$ as $x_k\rightarrow x^*$.
In this problem, we seek $x^*$ such that $r(x^*)=0$, where $r:R^n\rightarrow R^n$ and use a merit function $f(x)$ to globalize the convergence of newton's method with a line search.
I'm trying to work this proof out on my own, but I've run into some difficulties. First of all, I understand that the Newton direction $\Delta x_k =-J^{-1}(x_k)r(x_k)$ is a descent direction. Also, if we (theoretically) choose the step length $\alpha_k$ as above, then the new step should satisfy the sufficient decrease condition (Armijo condition):
$$f(x_k+\alpha_k\Delta x)\le f(x_k)+c\alpha_k\nabla f(x_k)^T\Delta x$$ for some $0<c\le1$.
I understand that in practice, we try using the full Newton step $\alpha_k=1$ first. If the Newton step doesn't produce sufficient decrease, then we search back until sufficient decrease is met. I'm thinking that the fact that the Jacobian is non-degenerate implies that there exists a ball around $x^*$ such that the full Newton step always satisfies Armijo's condition. Thus, as we get closer to the root, the Newton step is enough to ensure sufficient decrease. However, I'm not entirely sure that such a ball exists.
I've also tried assuming that $\alpha_k\rightarrow a\ne1$ as $x_k\rightarrow x^*$, but the search for a contradiction is somewhat mysterious to me.
Any help with this would be greatly appreciated! :)
-
You misread the question. It isn't talking about any Armijo or sufficient decrease condition. It says assume that we compute $\alpha$ as in the formula. Then show that if the iteration converges, i.e., $x_k\rightarrow x^\ast$ that necessarily $\alpha\rightarrow 1$.
The proof of this statement is a simple application of Taylor expansion. If $x_k\rightarrow x^\ast$ then $f(x)$ is approximated better and better in a neighborhood of $x_k$ by a quadratic Taylor approximation whose minimum is attained at a point that corresponds to exactly $\alpha=1$. Simply form the Taylor expansion of $f(x_k+\alpha p_k)$ around $x_k$ with the given $p_k$ and see what happens.
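(A quick numeric illustration of the claim; the toy residual, the starting point, and the line-search bounds below are arbitrary choices of mine, not part of the exercise.)

```python
import numpy as np
from scipy.optimize import minimize_scalar

def r(x):   # toy residual with nonsingular Jacobian at the root (1, 1)
    return np.array([x[0]**2 - 1.0, x[0] * x[1] - 1.0])

def J(x):   # Jacobian of r
    return np.array([[2.0 * x[0], 0.0],
                     [x[1],       x[0]]])

def f(x):   # merit function 0.5 * ||r(x)||^2
    return 0.5 * np.dot(r(x), r(x))

x = np.array([3.0, -2.0])
for k in range(8):
    p = np.linalg.solve(J(x), -r(x))                      # Newton direction
    alpha = minimize_scalar(lambda t: f(x + t * p),
                            bounds=(0.0, 2.0), method="bounded").x
    x = x + alpha * p
    print(k, alpha, np.linalg.norm(r(x)))                 # alpha -> 1
```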
• The quadratic Taylor expansion is $f(x_k+\alpha p_k)=f(x_k)+\alpha\nabla f(x_k)\cdot p_k +\frac{\alpha^2}{2}p_k\cdot\nabla^2 f(x_k) \cdot p_k$. Since $\alpha$ is chosen as the minimum, $\alpha = -\frac{\nabla f(x_k)\cdot p_k}{p_k\cdot\nabla^2 f(x_k) \cdot p_k}$. But as $x_k\rightarrow x^*$, we get an indeterminate form $\frac{0}{0}$. Should I use l'Hôpital's rule at this point? It seems a bit too complicated for that... there must be a simpler way... – Paul♦ Sep 29 '12 at 0:13
• Think about what $p_k$ is and plug that into your formula. – Wolfgang Bangerth Sep 29 '12 at 12:50
• I can see that $p_k$ is simply $-J^{-1}(x_k)r(x_k)$. So when I substitute, I obtain $\alpha = \frac{\nabla f(x_k)\cdot J^{-1}(x_k)r(x_k)}{r^T(x_k)J^{-T}(x_k)\cdot\nabla^2 f(x_k) \cdot J^{-1}(x_k)r(x_k)}$. But as $x_k\rightarrow x^*$, $r(x_k)\rightarrow 0$ (yielding an indeterminate form). Unless I can somehow cancel terms from the numerator & denominator or use l'Hôpital's rule, I can't see how else to proceed. – Paul♦ Sep 29 '12 at 14:53
• Remember that $r(x_k)=\nabla f(x_k)$ and $J(x_k)=\nabla^2 f(x_k)$. – Wolfgang Bangerth Sep 29 '12 at 15:04
• Aha! That's the detail I was missing... By substituting these terms into the equation, everything simplifies to unity! :) Thank you so much, Wolfgang! :) – Paul♦ Sep 29 '12 at 18:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9772781729698181, "perplexity": 174.648048895857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704933573/warc/CC-MAIN-20130516114853-00067-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.libretexts.org/Courses/Monroe_Community_College/MTH_225_Differential_Equations/5%3A_Linear_Second_Order_Equations/5.4%3A_The_Method_of_Undetermined_Coefficients_I | $$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$
# 5.4: The Method of Undetermined Coefficients I
In this section we consider the constant coefficient equation
$\label{eq:5.4.1} ay''+by'+cy=e^{\alpha x}G(x),$
where $$\alpha$$ is a constant and $$G$$ is a polynomial.
From Theorem 5.3.2, the general solution of Equation \ref{eq:5.4.1} is $$y=y_p+c_1y_1+c_2y_2$$, where $$y_p$$ is a particular solution of Equation \ref{eq:5.4.1} and $$\{y_1,y_2\}$$ is a fundamental set of solutions of the complementary equation
$ay''+by'+cy=0. \nonumber$
In Section 5.2 we showed how to find $$\{y_1,y_2\}$$. In this section we’ll show how to find $$y_p$$. The procedure that we’ll use is called the method of undetermined coefficients. Our first example is similar to Exercises 5.3.16-5.3.21.
##### Example 5.4.1
Find a particular solution of
$\label{eq:5.4.2} y''-7y'+12y=4e^{2x}.$
Then find the general solution.
Solution
Substituting $$y_p=Ae^{2x}$$ for $$y$$ in Equation \ref{eq:5.4.2} will produce a constant multiple of $$Ae^{2x}$$ on the left side of Equation \ref{eq:5.4.2}, so it may be possible to choose $$A$$ so that $$y_p$$ is a solution of Equation \ref{eq:5.4.2}. Let’s try it; if $$y_p=Ae^{2x}$$ then
$y_p''-7y_p'+12y_p=4Ae^{2x}-14Ae^{2x}+12Ae^{2x}=2Ae^{2x}=4e^{2x} \nonumber$
if $$A=2$$. Therefore $$y_p=2e^{2x}$$ is a particular solution of Equation \ref{eq:5.4.2}. To find the general solution, we note that the characteristic polynomial of the complementary equation
$\label{eq:5.4.3} y''-7y'+12y=0$
is $$p(r)=r^2-7r+12=(r-3)(r-4)$$, so $$\{e^{3x},e^{4x}\}$$ is a fundamental set of solutions of Equation \ref{eq:5.4.3}. Therefore the general solution of Equation \ref{eq:5.4.2} is
$y=2e^{2x}+c_1e^{3x}+c_2e^{4x}. \nonumber$
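As a quick sanity check (using sympy here; any CAS would do), substituting the general solution back into the left side recovers the forcing term:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = 2*sp.exp(2*x) + c1*sp.exp(3*x) + c2*sp.exp(4*x)

# y'' - 7y' + 12y should reduce to the forcing term 4e^{2x}
lhs = sp.diff(y, x, 2) - 7*sp.diff(y, x) + 12*y
print(sp.simplify(lhs))   # 4*exp(2*x)
```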
##### Example 5.4.2
Find a particular solution of
$\label{eq:5.4.4} y''-7y'+12y=5e^{4x}.$
Then find the general solution.
Solution
Fresh from our success in finding a particular solution of Equation \ref{eq:5.4.2} — where we chose $$y_p=Ae^{2x}$$ because the right side of Equation \ref{eq:5.4.2} is a constant multiple of $$e^{2x}$$ — it may seem reasonable to try $$y_p=Ae^{4x}$$ as a particular solution of Equation \ref{eq:5.4.4}. However, this will not work, since we saw in Example 5.4.1 that $$e^{4x}$$ is a solution of the complementary equation \ref{eq:5.4.3}, so substituting $$y_p=Ae^{4x}$$ into the left side of Equation \ref{eq:5.4.4} produces zero on the left, no matter how we choose $$A$$. To discover a suitable form for $$y_p$$, we use the same approach that we used in Section 5.2 to find a second solution of
$ay''+by'+cy=0 \nonumber$
in the case where the characteristic equation has a repeated real root: we look for solutions of Equation \ref{eq:5.4.4} in the form $$y=ue^{4x}$$, where $$u$$ is a function to be determined. Substituting
$\label{eq:5.4.5} y=ue^{4x},\quad y'=u'e^{4x}+4ue^{4x},\quad \text{and} \quad y''=u''e^{4x}+8u'e^{4x}+16ue^{4x}$
into Equation \ref{eq:5.4.4} and canceling the common factor $$e^{4x}$$ yields
$(u''+8u'+16u)-7(u'+4u)+12u=5, \nonumber$
or
$u''+u'=5. \nonumber$
By inspection we see that $$u_p=5x$$ is a particular solution of this equation, so $$y_p=5xe^{4x}$$ is a particular solution of Equation \ref{eq:5.4.4}. Therefore
$y=5xe^{4x}+c_1e^{3x}+c_2e^{4x} \nonumber$
is the general solution.
##### Example 5.4.3
Find a particular solution of
$\label{eq:5.4.6} y''-8y'+16y=2e^{4x}.$
Solution
Since the characteristic polynomial of the complementary equation
$\label{eq:5.4.7} y''-8y'+16y=0$
is $$p(r)=r^2-8r+16=(r-4)^2$$, both $$y_1=e^{4x}$$ and $$y_2=xe^{4x}$$ are solutions of Equation \ref{eq:5.4.7}. Therefore Equation \ref{eq:5.4.6} does not have a solution of the form $$y_p=Ae^{4x}$$ or $$y_p=Axe^{4x}$$. As in Example 5.4.2, we look for solutions of Equation \ref{eq:5.4.6} in the form $$y=ue^{4x}$$, where $$u$$ is a function to be determined. Substituting from Equation \ref{eq:5.4.5} into Equation \ref{eq:5.4.6} and canceling the common factor $$e^{4x}$$ yields
$(u''+8u'+16u)-8(u'+4u)+16u=2, \nonumber$
or
$u''=2. \nonumber$
Integrating twice and taking the constants of integration to be zero shows that $$u_p=x^2$$ is a particular solution of this equation, so $$y_p=x^2e^{4x}$$ is a particular solution of Equation \ref{eq:5.4.6}. Therefore
$y=e^{4x}(x^2+c_1+c_2x) \nonumber$
is the general solution.
The preceding examples illustrate the following facts concerning the form of a particular solution $$y_p$$ of a constant coefficent equation
$ay''+by'+cy=ke^{\alpha x}, \nonumber$
where $$k$$ is a nonzero constant:
1. If $$e^{\alpha x}$$ isn’t a solution of the complementary equation $\label{eq:5.4.8} ay''+by'+cy=0,$ then $$y_p=Ae^{\alpha x}$$, where $$A$$ is a constant. (See Example 5.4.1 ).
2. If $$e^{\alpha x}$$ is a solution of Equation \ref{eq:5.4.8} but $$xe^{\alpha x}$$ is not, then $$y_p=Axe^{\alpha x}$$, where $$A$$ is a constant. (See Example 5.4.2 .)
3. If both $$e^{\alpha x}$$ and $$xe^{\alpha x}$$ are solutions of Equation \ref{eq:5.4.8}, then $$y_p=Ax^2e^{\alpha x}$$, where $$A$$ is a constant. (See Example 5.4.3 .)
See Exercise 5.4.30 for the proofs of these facts.
In all three cases you can just substitute the appropriate form for $$y_p$$ and its derivatives directly into
$ay_p''+by_p'+cy_p=ke^{\alpha x},\nonumber$
and solve for the constant $$A$$, as we did in Example 5.4.1 . (See Exercises 5.4.31-5.4.33.) However, if the equation is
$ay''+by'+cy=k e^{\alpha x}G(x), \nonumber$
where $$G$$ is a polynomial of degree greater than zero, we recommend that you use the substitution $$y=ue^{\alpha x}$$ as we did in Examples 5.4.2 and 5.4.3 . The equation for $$u$$ will turn out to be
$\label{eq:5.4.9} au''+p'(\alpha)u'+p(\alpha)u=G(x),$
where $$p(r)=ar^2+br+c$$ is the characteristic polynomial of the complementary equation and $$p'(r)=2ar+b$$ (Exercise 5.4.30); however, you shouldn’t memorize this since it is easy to derive the equation for $$u$$ in any particular case. Note, however, that if $$e^{\alpha x}$$ is a solution of the complementary equation then $$p(\alpha)=0$$, so Equation \ref{eq:5.4.9} reduces to
$au''+p'(\alpha)u'=G(x), \nonumber$
while if both $$e^{\alpha x}$$ and $$xe^{\alpha x}$$ are solutions of the complementary equation then $$p(r)=a(r-\alpha)^2$$ and $$p'(r)=2a(r-\alpha)$$, so $$p(\alpha)=p'(\alpha)=0$$ and Equation \ref{eq:5.4.9} reduces to
$au''=G(x). \nonumber$
##### Example 5.4.4
Find a particular solution of
$\label{eq:5.4.10} y''-3y'+2y=e^{3x}(-1+2x+x^2).$
Solution
Substituting
$y=ue^{3x},\quad y'=u'e^{3x}+3ue^{3x},\quad \text{and} \quad y''=u''e^{3x}+6u'e^{3x}+9ue^{3x}\nonumber$
into Equation \ref{eq:5.4.10} and canceling $$e^{3x}$$ yields
$(u''+6u'+9u)-3(u'+3u)+2u=-1+2x+x^2, \nonumber$
or
$\label{eq:5.4.11} u''+3u'+2u=-1+2x+x^2.$
As in Example 5.3.2, in order to guess a form for a particular solution of Equation \ref{eq:5.4.11}, we note that substituting a second degree polynomial $$u_p=A+Bx+Cx^2$$ for $$u$$ in the left side of Equation \ref{eq:5.4.11} produces another second degree polynomial with coefficients that depend upon $$A$$, $$B$$, and $$C$$; thus,
$\text{if} \quad u_p=A+Bx+Cx^2\quad \text{then} \quad u_p'=B+2Cx\quad \text{and} \quad u_p''=2C. \nonumber$
If $$u_p$$ is to satisfy Equation \ref{eq:5.4.11}, we must have
\begin{aligned} u_p''+3u_p'+2u_p&=2C+3(B+2Cx)+2(A+Bx+Cx^2)\\ &=(2C+3B+2A)+(6C+2B)x+2Cx^2=-1+2x+x^2.\end{aligned}\nonumber
Equating coefficients of like powers of $$x$$ on the two sides of the last equality yields
$\begin{array}{rcr} 2C&=1\phantom{.}\\ 2B+6C&=2\phantom{.}\\ 2A+3B+2C&= -1. \end{array}\nonumber$
Solving these equations for $$C$$, $$B$$, and $$A$$ (in that order) yields $$C=1/2,B=-1/2,A=-1/4$$. Therefore
$u_p=-{1\over4}(1+2x-2x^2) \nonumber$
is a particular solution of Equation \ref{eq:5.4.11}, and
$y_p=u_pe^{3x}=-{e^{3x}\over4}(1+2x-2x^2) \nonumber$
is a particular solution of Equation \ref{eq:5.4.10}.
##### Example 5.4.5
Find a particular solution of
$\label{eq:5.4.12} y''-4y'+3y=e^{3x}(6+8x+12x^2).$
Solution
Substituting
$y=ue^{3x},\quad y'=u'e^{3x}+3ue^{3x},\quad \text{and } y''=u''e^{3x}+6u'e^{3x}+9ue^{3x} \nonumber$
into Equation \ref{eq:5.4.12} and canceling $$e^{3x}$$ yields
$(u''+6u'+9u)-4(u'+3u)+3u=6+8x+12x^2, \nonumber$
or
$\label{eq:5.4.13} u''+2u'=6+8x+12x^2.$
There’s no $$u$$ term in this equation, since $$e^{3x}$$ is a solution of the complementary equation for Equation \ref{eq:5.4.12}. (See Exercise 5.4.30.) Therefore Equation \ref{eq:5.4.13} does not have a particular solution of the form $$u_p=A+Bx+Cx^2$$ that we used successfully in Example 5.4.4, since with this choice of $$u_p$$,
$u_p''+2u_p'=2C+(B+2Cx) \nonumber$
can’t contain the last term ($$12x^2$$) on the right side of Equation \ref{eq:5.4.13}. Instead, let’s try $$u_p=Ax+Bx^2+Cx^3$$ on the grounds that
$u_p'=A+2Bx+3Cx^2\quad \text{and} \quad u_p''=2B+6Cx\nonumber$
together contain all the powers of $$x$$ that appear on the right side of Equation \ref{eq:5.4.13}.
Substituting these expressions in place of $$u'$$ and $$u''$$ in Equation \ref{eq:5.4.13} yields
$(2B+6Cx)+2(A+2Bx+3Cx^2)=(2B+2A)+(6C+4B)x+6Cx^2=6+8x+12x^2. \nonumber$
Comparing coefficients of like powers of $$x$$ on the two sides of the last equality shows that $$u_p$$ satisfies Equation \ref{eq:5.4.13} if
$\begin{array}{rcr} 6C&=12\phantom{.}\\ 4B+6C&=8\phantom{.}\\ 2A+2B\phantom{+6u_2}&=6. \end{array}\nonumber$
Solving these equations successively yields $$C=2$$, $$B=-1$$, and $$A=4$$. Therefore
$u_p=x(4-x+2x^2) \nonumber$
is a particular solution of Equation \ref{eq:5.4.13}, and
$y_p=u_pe^{3x}=xe^{3x}(4-x+2x^2) \nonumber$
is a particular solution of Equation \ref{eq:5.4.12}.
##### Example 5.4.6
Find a particular solution of
$\label{eq:5.4.14} 4y''+4y'+y=e^{-x/2}(-8+48x+144x^2).$
Solution
Substituting
$y=ue^{-x/2},\quad y'=u'e^{-x/2}-{1\over2}ue^{-x/2},\quad \text{and} \quad y''=u''e^{-x/2}-u'e^{-x/2}+{1\over4}ue^{-x/2} \nonumber$
into Equation \ref{eq:5.4.14} and canceling $$e^{-x/2}$$ yields
$4\left(u''-u'+{u\over4}\right)+4\left(u'-{u\over2}\right)+u=4u''=-8+48x+144x^2, \nonumber$
or
$\label{eq:5.4.15} u''=-2+12x+36x^2,$
which does not contain $$u$$ or $$u'$$ because $$e^{-x/2}$$ and $$xe^{-x/2}$$ are both solutions of the complementary equation. (See Exercise 5.4.30.) To obtain a particular solution of Equation \ref{eq:5.4.15} we integrate twice, taking the constants of integration to be zero; thus,
$u_p'=-2x+6x^2+12x^3\quad \text{and} \quad u_p=-x^2+2x^3+3x^4=x^2(-1+2x+3x^2).\nonumber$
Therefore
$y_p=u_pe^{-x/2}=x^2e^{-x/2}(-1+2x+3x^2)\nonumber$
is a particular solution of Equation \ref{eq:5.4.14}.
## Summary
The preceding examples illustrate the following facts concerning particular solutions of a constant coefficent equation of the form
$ay''+by'+cy=e^{\alpha x}G(x),\nonumber$
where $$G$$ is a polynomial (see Exercise 5.4.30):
1. If $$e^{\alpha x}$$ isn’t a solution of the complementary equation $\label{eq:5.4.16} ay''+by'+cy=0,$ then $$y_p=e^{\alpha x}Q(x)$$, where $$Q$$ is a polynomial of the same degree as $$G$$. (See Example 5.4.4 ).
2. If $$e^{\alpha x}$$ is a solution of Equation \ref{eq:5.4.16} but $$xe^{\alpha x}$$ is not, then $$y_p=xe^{\alpha x}Q(x)$$, where $$Q$$ is a polynomial of the same degree as $$G$$. (See Example 5.4.5 .)
3. If both $$e^{\alpha x}$$ and $$xe^{\alpha x}$$ are solutions of Equation \ref{eq:5.4.16}, then $$y_p=x^2e^{\alpha x}Q(x)$$, where $$Q$$ is a polynomial of the same degree as $$G$$. (See Example 5.4.6 .)
In all three cases, you can just substitute the appropriate form for $$y_p$$ and its derivatives directly into
$ay_p''+by_p'+cy_p=e^{\alpha x}G(x), \nonumber$
and solve for the coefficients of the polynomial $$Q$$. However, if you try this you will see that the computations are more tedious than those that you encounter by making the substitution $$y=ue^{\alpha x}$$ and finding a particular solution of the resulting equation for $$u$$. (See Exercises 5.4.34-5.4.36.) In Case (a) the equation for $$u$$ will be of the form
$au''+p'(\alpha)u'+p(\alpha)u=G(x), \nonumber$
with a particular solution of the form $$u_p=Q(x)$$, a polynomial of the same degree as $$G$$, whose coefficients can be found by the method used in Example 5.4.4 . In Case (b) the equation for $$u$$ will be of the form
$au''+p'(\alpha)u'=G(x) \nonumber$
(no $$u$$ term on the left), with a particular solution of the form $$u_p=xQ(x)$$, where $$Q$$ is a polynomial of the same degree as $$G$$ whose coefficients can be found by the method used in Example 5.4.5. In Case (c), the equation for $$u$$ will be of the form
$au''=G(x) \nonumber$
with a particular solution of the form $$u_p=x^2Q(x)$$ that can be obtained by integrating $$G(x)/a$$ twice and taking the constants of integration to be zero, as in Example 5.4.6 .
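These cases are easy to sanity-check with a CAS. For instance, the Case (b) solution found in Example 5.4.5 satisfies its equation; in the sympy sketch below, the residual simplifies to zero:

```python
import sympy as sp

x = sp.symbols('x')
yp = x*sp.exp(3*x)*(4 - x + 2*x**2)    # Case (b): y_p = x e^{3x} Q(x)

# residual of y'' - 4y' + 3y - e^{3x}(6 + 8x + 12x^2) at y = y_p
residual = (sp.diff(yp, x, 2) - 4*sp.diff(yp, x) + 3*yp
            - sp.exp(3*x)*(6 + 8*x + 12*x**2))
print(sp.simplify(residual))           # 0
```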
## Using the Principle of Superposition
The next example shows how to combine the method of undetermined coefficients and Theorem 5.3.3, the principle of superposition.
##### Example 5.4.7
Find a particular solution of
$\label{eq:5.4.17} y''-7y'+12y=4e^{2x}+5e^{4x}.$
Solution
In Example 5.4.1 we found that $$y_{p_1}=2e^{2x}$$ is a particular solution of
$y''-7y'+12y=4e^{2x}, \nonumber$
and in Example 5.4.2 we found that $$y_{p_2}=5xe^{4x}$$ is a particular solution of
$y''-7y'+12y=5e^{4x}. \nonumber$
Therefore the principle of superposition implies that $$y_p=2e^{2x}+5xe^{4x}$$ is a particular solution of Equation \ref{eq:5.4.17}). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9878854155540466, "perplexity": 159.15666593204256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363135.71/warc/CC-MAIN-20211205035505-20211205065505-00618.warc.gz"} |
https://tex.stackexchange.com/questions/128723/algorithm-as-figure-and-without-italic-and-bold-formatting | # Algorithm as figure and without italic and bold formatting?
I want to include an algorithm in my latex document, however without printing most of it in italic and all the keywords in bold font like it's done by default by e.g. the algorithmicx package. I like this simple style:
(screenshot of a part of page 3 of http://research.microsoft.com/pubs/68869/naacl2k-proc-rev.pdf)
The only thing I'd like to add to this style are line numbers. Can anyone help me, how I get the formatting of the screenshot and the line numbers? Thanks :-)
So, here is what I have until now:
\documentclass{article}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{algpascal}
\begin{document}
\alglanguage{pascal}
\begin{algorithm}
\caption{Paull's algorithm}
\begin{algorithmic}[1]
\State Assign an ordering $A_{1}, ..., A_{n}$ to the nonterminals of the grammar.
\For{i = 1}{n}
\Begin
\For{j = 1}{i-1}
\Begin
\State for each production of the form $A_{i} \rightarrow A_{j} \alpha$
\End
\End
\end{algorithmic}
\end{algorithm}
\end{document}
This ends up as
Based on this I want the following changes:
• do and begin shall be on the same line
• end shall be vertically aligned to its associated for (see first screenshot of this post).
• Integration as a figure or at least without a black border and with a caption below the algorithm would be prefered
• bold formatting for keywords should be turned off
Is this what you are trying to achieve?
This is the MWE:
\documentclass{article}
\usepackage[plain]{algorithm}
\usepackage{algpascal}
\begin{document}
\algrenewcommand\textkeyword{\textrm}
\algdef{SE}{For}{End}[2]{%
\textkeyword{for} $#1$ \textkeyword{to} $#2$ \textkeyword{do begin}}{%
\textkeyword{end}}
\begin{algorithm}
\begin{algorithmic}[1]
\State Assign an ordering $A_{1}, ..., A_{n}$ to the nonterminals of the grammar.
\For{i = 1}{n}
\For{j = 1}{i-1}
\State for each production of the form $A_{i} \rightarrow A_{j} \alpha$
\End
\End
\end{algorithmic}
\caption{Paull's algorithm}
\end{algorithm}
\end{document}
## Explanation
1. To match your first two requests, I've redefined the behavior of for to have an end statement by adding the lines:
\algdef{SE}{For}{End}[2]{%
\textkeyword{for} $#1$ \textkeyword{to} $#2$ \textkeyword{do begin}}{%
\textkeyword{end}}
2. To match your last request, it suffices to add the line:
\algrenewcommand\textkeyword{\textrm}
which redefines the font for keywords to be \textrm instead of \textbf.
3. In regards of your 3rd request, there are two ways.
• If you want the algorithm to behave as an algorithm, simply load the algorithm package with the option plain as in the above MWE:
\usepackage[plain]{algorithm}
• If you want the algorithm to behave as a figure, there is no need to load the algorithm package, simply insert the algorithmic environment inside a figure, i.e. replace the lines
\begin{algorithm}
\begin{algorithmic}[1]
...
\end{algorithmic}
\caption{Paull's algorithm}
\end{algorithm}
with
\begin{figure}
\begin{algorithmic}[1]
...
\end{algorithmic}
\caption{Paull's algorithm}
\end{figure}
and you will have
This is the complete implementation of the algorithm in the figure:
\documentclass{article}
\usepackage[plain]{algorithm}
\usepackage{algpascal}
\begin{document}
\algrenewcommand\textkeyword{\textrm}
\algdef{SE}{For}{End}[2]{%
\textkeyword{for} $#1$ \textkeyword{to} $#2$ \textkeyword{do begin}}{%
\textkeyword{end}}
\algdef{SE}{ForEach}{End}[1]{%
\textkeyword{for each} #1 \textkeyword{do begin}}{%
\textkeyword{end}}
\begin{algorithm}
\begin{algorithmic}[1]
\State Assign an ordering $A_{1}, \dots, A_{n}$ to the nonterminals of the grammar.
\For{i := 1}{n}
\For{j := 1}{i-1}
\ForEach{production of the form $A_{i} \rightarrow A_{j} \alpha$}
\State remove $A_{i} \rightarrow A_{j} \alpha$ from the grammar
\ForEach{production of the form $A_{j} \rightarrow \beta$}
\State add $A_{i} \rightarrow \beta\alpha$ to the grammar
\End
\End
\End
\State transform the $A_{i}$-productions to eliminate direct left recursion
\End
\end{algorithmic}
\caption{Paull's algorithm}
\end{algorithm}
\end{document}
There is the need to define a new command \ForEach:
\algdef{SE}{ForEach}{End}[1]{%
\textkeyword{for each} #1 \textkeyword{do begin}}{%
\textkeyword{end}}
Note that I've defined \ForEach so to take one "text" argument, because it seemed to me the best way to define it.
If you want it to take a "math" argument, then define it as
\algdef{SE}{ForEach}{End}[1]{%
\textkeyword{for each} $#1$ \textkeyword{do begin}}{%
\textkeyword{end}}
and use it as follows (amsmath is needed for the command \text):
\ForEach{\text{production of the form }A_{i} \rightarrow A_{j} \alpha}
• wow, thank you so much for that detailed reply! Unfortunately there is one problem with this: multiple lines/states in a for loop won't work. I've added an example at the end of my original post. Would be glad if you could have a look at this, thanks! – stefan.at.wpf Aug 18 '13 at 9:28
• @stefan.at.wpf You're welcome! – karlkoeller Aug 18 '13 at 10:00
• thank you karl, I solved it. I am just wondering, if it's possible to have a "for each" where the item (fo each item in items) isn't printed in italic (while in a "for" loop the "i" should remain printed in italic). reconfiguring textrm would change it everywhere. maybe you can help me with this instead, would be great :-) – stefan.at.wpf Aug 19 '13 at 19:43
• @stefan.at.wpf Yes, I'll help you, but please wait for tomorrow. – karlkoeller Aug 19 '13 at 19:45
• yes, of course :-) – stefan.at.wpf Aug 19 '13 at 19:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9249095916748047, "perplexity": 1936.9438400670008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519784.35/warc/CC-MAIN-20210119201033-20210119231033-00741.warc.gz"} |
https://www.physicsforums.com/threads/special-relativity-four-current-and-diracs-delta.818953/ | # Special relativity - four current and dirac's delta
1. Jun 13, 2015
### luxux
1. The problem statement, all variables and given/known data
Hello,
in my special relativity course we are promoting the (electric charge) current to a four-current. I have serious problems understanding the following:
http://imageshack.com/a/img540/2046/8IYpxu.png [Broken]
In particular, in the last equation on line 2, I don't understand why it is implicit (as the professor said) that there is an integral over 3d space (formally, since we are working with distributions) that, together with an implicit test function Φ(x), selects one spatial position (which one?!). That integral should be implicit both in the last term and in the one preceding it.
I have no problems with the calculations, but to me the fact that there is an implicit integration makes no sense:
How can I have on line 1 a j0(x) that, after the calculations, is equal to cρ(x), which should then be integrated, thus giving cρ(z)?
j0(x)=...=cρ(x)=c∫ρ(x-z(x0))δ(3)(x-z(x0))Φ(x)d(3)x=cρ(z(x0)). j should be general (depends on coordinates x) and it is equal to ρ in point z?!
I asked to my fellow students and the professor both, but without success.
Could anyone explain me what is the meaning of that formal integral, why is it implicit and what would the physical meaning (if any) of the test function (implicit) be?
I think I have a good understanding of Dirac's delta (at least I could do all the calculations), but when it comes to applying it to physics, I am missing something.
x are the coordinates of the IRF, z are the coordinates of the charged particle, s is a parameter that describes the worldline of the particle, c is the speed of light and e is the charge of the particle, ρ is the charge density.
x0, z0 are the temporal coordinates. j0 is the temporal component of the four-current.
2. Relevant equations
I guess the same problem applies here, when we define the charge density as:
http://imageshack.com/a/img540/9453/mydsBa.png [Broken]
Also here there should be the formal integration.
If I think about this particle in spacetime (since it always existed and will always exist), I imagine a 2d plane (which represents the 3d space), described by the coordinates x, with the value of the charge density on the vertical axis. It's a very sharp peak, indeed an extremely sharp one. It is centered on the point of coordinates z (so x=z) and looks like a sharp peak (as we imagine a Dirac delta); in the usual limit, it is a line perpendicular to the plane.
I don't understand why I need the integral.
ρ is a function of x. It means that I'm not selecting any point of space-time. Indeed, the only place where ρ would be different from zero is where the particle is, that is z(x0), at the instant x0. If I put the integration over x, thus selecting z(x0), I get the value of ρ in z(x0).
Sorry for my bad English and the confusion: I am really confused.
EDIT: is it just saying that, a priori, j could depend on any point of spacetime, and after the calculations we discover that, by integrating, it only depends on the position z of the particle? So it also works that if I have xA and xB, they will both have j equal to zero unless they are part of the worldline of the particle.
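(One way to see the formal integral concretely, as a numerical sketch of my own: stand in a narrow Gaussian for the delta, with one space dimension for brevity. Integrating the smeared delta against a test function Φ simply evaluates Φ at the particle position z.)

```python
import numpy as np

def delta_approx(x, z, eps=1e-3):
    # narrow normalized Gaussian standing in for the Dirac delta at z
    return np.exp(-(x - z)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x, dx = np.linspace(-5.0, 5.0, 1_000_001, retstep=True)
z = 1.7                      # particle position at this instant
phi = np.cos                 # an arbitrary smooth test function

integral = np.sum(delta_approx(x, z) * phi(x)) * dx   # Riemann sum
print(integral, phi(z))      # both ~ cos(1.7): the integral samples phi at z
```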
Last edited by a moderator: May 7, 2017
2. Jun 19, 2015
### Greg Bernhardt
Thanks for the post! This is an automated courtesy bump. Sorry you aren't generating responses at the moment. Do you have any further information, come to any new conclusions or is it possible to reword the post?
Draft saved Draft deleted
Similar Discussions: Special relativity - four current and dirac's delta | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9561499953269958, "perplexity": 527.633399241699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825510.59/warc/CC-MAIN-20171023020721-20171023040721-00501.warc.gz"} |
http://bth.diva-portal.org/smash/record.jsf?faces-redirect=true&language=sv&searchType=SIMPLE&query=&af=%5B%5D&aq=%5B%5B%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&pid=diva2%3A1304134&noOfRows=50&sortOrder=author_sort_asc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all |
Chain conditions for epsilon-strongly graded rings with applications to Leavitt path algebras
Blekinge Tekniska Högskola, Fakulteten för teknikvetenskaper, Institutionen för matematik och naturvetenskap. ORCID iD: 0000-0001-8445-3936
(English) Manuscript (preprint) (Other academic)
Abstract [en]
Let $G$ be a group with neutral element $e$ and let $S=\bigoplus_{g \in G}S_g$ be a $G$-graded ring. A necessary condition for $S$ to be noetherian is that the principal component $S_e$ is noetherian. The following partial converse is well-known: If $S$ is strongly-graded and $G$ is a polycyclic-by-finite group, then $S_e$ being noetherian implies that $S$ is noetherian. We will generalize the noetherianity result to the recently introduced class of epsilon-strongly graded rings. We will also provide results on the artinianity of epsilon-strongly graded rings.
As our main application we obtain characterizations of noetherian and artinian Leavitt path algebras with coefficients in a general unital ring. This extends a recent characterization by Steinberg for Leavitt path algebras with coefficients in a commutative unital ring and previous characterizations by Abrams, Aranda Pino and Siles Molina for Leavitt path algebras with coefficients in a field. Secondly, we obtain characterizations of noetherian and artinian unital partial crossed products.
Keywords [en]
group graded ring, epsilon-strongly graded ring, chain conditions, Leavitt path algebra, partial crossed product.
National subject category
Algebra and Logic
Identifiers
OAI: oai:DiVA.org:bth-17807DiVA, id: diva2:1304134
Part of thesis
1. The structure of epsilon-strongly graded rings with applications to Leavitt path algebras and Cuntz-Pimsner rings
2019 (English) Licentiate thesis, compilation (Other academic)
Abstract [en]
The research field of graded ring theory is a rich area of mathematics with many connections to e.g. the field of operator algebras. In the last 15 years, algebraists and operator algebraists have defined algebraic analogues of important operator algebras. Some of those analogues are rings that come equipped with a group grading. We want to reach a better understanding of the graded structure of those analogue rings. Among group graded rings, the strongly graded rings stand out as being especially well-behaved. The development of the general theory of strongly graded rings was initiated by Dade in the 1980s and since then numerous structural results have been established for strongly graded rings.
In this thesis, we study the class of epsilon-strongly graded rings which was recently introduced by Nystedt, Öinert and Pinedo. This class is a natural generalization of the well-studied class of unital strongly graded rings. Our aim is to lay the foundation for a general theory of epsilon-strongly graded rings generalizing the theory of strongly graded rings. This thesis is based on three articles. The first two articles mainly concern structural properties of epsilon-strongly graded rings. In the first article, we investigate a functorial construction called the induced quotient group grading. In the second article, using results from the first article, we generalize the Hilbert Basis Theorem for strongly graded rings to epsilon-strongly graded rings and apply it to Leavitt path algebras. In the third article, we study the graded structure of algebraic Cuntz-Pimsner rings. In particular, we obtain a partial classification of unital strongly, epsilon-strongly and nearly epsilon-strongly graded Cuntz-Pimsner rings up to graded isomorphism.
Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2019
Series
Blekinge Institute of Technology Licentiate Dissertation Series, ISSN 1650-2140 ; 7
Keywords
group graded ring, epsilon-strongly graded ring, chain conditions, Leavitt path algebra, partial crossed product, Cuntz-Pimsner rings
National subject category
Algebra and Logic
Identifiers
urn:nbn:se:bth-17809 (URN)978-91-7295-376-5 (ISBN)
Presentation
2019-05-15, G340, Valhallavägen 1, Karlskrona, 14:35 (English)
Open Access in DiVA
Full text not available in DiVA
Lännström, Daniel
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.857067883014679, "perplexity": 3851.641479766875}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822098.86/warc/CC-MAIN-20191022132135-20191022155635-00097.warc.gz"}
https://jp.maplesoft.com/support/help/maplesim/view.aspx?path=VectorCalculus%2FSpaceCurve | SpaceCurve - Maple Help
VectorCalculus
SpaceCurve
plot a space curve in $\mathbb{R}^{2}$ or $\mathbb{R}^{3}$
Calling Sequence
SpaceCurve(C, r)
Parameters
C - Vector(algebraic); the free or position Vector representing a curve
r - name=range; the range of the parameter of the curve
Description
• The SpaceCurve(C, t=a..b) calling sequence plots a space curve in $\mathbb{R}^{2}$ or $\mathbb{R}^{3}$. The plot is displayed in Cartesian coordinates.
• Alternatively, a curve can be defined using a position Vector and it can be visualized with PlotPositionVector.
Examples
> with(VectorCalculus):
The commands to create the plots in the Plotting Guide are
> SpaceCurve(<exp(-t)*cos(t), exp(-t)*sin(t)>, t = 4..8)
> SpaceCurve(<cos(t), sin(t), t>, t = 1..9)
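(For readers without Maple, roughly the same pictures can be reproduced with matplotlib; the sketch below is an analogue of mine, not part of the Maple documentation.)

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(4, 8, 400)
plt.figure()
plt.plot(np.exp(-t) * np.cos(t), np.exp(-t) * np.sin(t))   # planar spiral

t = np.linspace(1, 9, 400)
ax = plt.figure().add_subplot(projection='3d')
ax.plot(np.cos(t), np.sin(t), t)                           # helix
plt.show()
```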
Another example
> SetCoordinates(polar[r, t])
polar[r, t]   (1)
> SpaceCurve(PositionVector([1, t]), t = 0..2*Pi) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9805998802185059, "perplexity": 2074.425560000653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359037.96/warc/CC-MAIN-20211130141247-20211130171247-00237.warc.gz"}
http://math.stackexchange.com/questions/246337/showing-that-lim-x-rightarrow-0-frac1x-int-0x-sin1-y-mathrmd-y | # Showing that $\lim_{x\rightarrow 0} \frac{1}{x}\int_0^x |\sin(1/y)| \mathrm{d} y \not=0$
How to show that:
$$\lim_{x\rightarrow 0} \frac{1}{x}\int_0^x |\sin(1/y)| \mathrm{d} y \not=0$$
It seems like a easy example of illustrating 0 is not in the Lebesgue set of $g(x)$ where $g(x)=\sin(1/x)$ if $x\neq 0$ and $g(0)=0$. But I fail to see why the above integral is true.
I tried looking at the intervals such that $\sin(1/y)$ is greater or equal to some constant (for example, $\left[\frac{1}{k\pi+\pi/6}, \frac{1}{k\pi+5\pi/6}\right]$ such that $\sin(1/y)\geq \frac{1}{2}$), however, $$\sum_{k \text{ large}} \left(\frac{1}{k\pi+\pi/6}-\frac{1}{k\pi+5\pi/6}\right)$$ converges, which is not strong enough to prove the claim. Any thoughts? Thanks in advance.
-
Let $y= 1/t$. We then get that $$I(x) = \dfrac1x \int_0^x \left \vert \sin(1/y) \right \vert dy = \dfrac1x \int_{\infty}^{1/x} \left \vert \sin(t) \right \vert \dfrac{-dt}{t^2} = \dfrac1x \int_{1/x}^{\infty} \dfrac{\left \vert \sin(t) \right \vert}{t^2} dt$$ Let $x=\dfrac1{n \pi}$. Hence, \begin{align} I_n & = n \pi \int_{n \pi}^{\infty} \dfrac{\vert \sin(t) \vert}{t^2} dt\\ & =n \pi \left(\sum_{k=n}^{\infty} \int_{k \pi}^{(k+1) \pi} \dfrac{\vert \sin(t) \vert}{t^2} dt \right)\\ & \geq n \pi \left(\sum_{k=n}^{\infty} \int_{k \pi}^{(k+1) \pi} \dfrac{\vert \sin(t) \vert}{(k+1)^2 \pi^2} dt \right)\\ & = n \pi \sum_{k=n}^{\infty} \left(\dfrac{1}{(k+1)^2 \pi^2}\displaystyle \int_{k \pi}^{(k+1) \pi} \vert \sin(t) \vert dt \right)\\ & = n \pi \sum_{k=n}^{\infty} \dfrac2{(k+1)^2 \pi^2}\\ & = \dfrac{2n}{\pi} \sum_{k=n}^{\infty} \dfrac1{(k+1)^2}\\ & > \dfrac{2n}{\pi} \int_{n+1}^{\infty} \dfrac{dt}{t^2}\\ & = \dfrac{2n}{\pi(n+1)} \end{align} Hence, if we let $x = \dfrac1{n \pi}$, then we get that $$I\left( \dfrac1{n \pi} \right) = I_n \geq \dfrac{2n}{\pi(n+1)}$$ Hence, $$\lim_{n \to \infty} I\left( \dfrac1{n \pi} \right) \geq \dfrac{2}{\pi}$$
-
Wouldn't the limit be exactly $\frac{2}{\pi}$? – susan Nov 28 '12 at 8:04
@susan: no, notice the > sign on the second to last line of the long list of steps. – Willie Wong Nov 28 '12 at 8:20
@willie wong: Yes, but what if we use the lower limit of the integral, namely $k\pi$ to obtain the first inequality(the reverse in this case). Then, wouldn't the integral be bounded by $2/\pi$ above as well? – solea Nov 28 '12 at 8:35
@susan The error induced in the second inequality goes as $\dfrac1n$ and hence the error goes to $0$ as $n \to \infty$ i.e. $$\lim_{n \to \infty} n \sum_{k=n+1}^{\infty} \dfrac1{k^2} = \lim_{n \to \infty}(1+\mathcal{O}(1/n)) = 1$$ However, there is also the first inequality, whose error must also be proved to go to $0$ as $n \to \infty$. – user17762 Nov 28 '12 at 8:47
You had the right idea: $$\int_{1/((k+1) \pi)}^{1/(k\pi)} |\sin(1/y)| \ dy \ge \frac{1}{2\pi} \left(\frac{1}{k+1/6} - \frac{1}{k+5/6}\right) \ge \frac{C}{(k+1)(k+2)} = \frac{C}{k+1} - \frac{C}{k+2}$$ for some positive constant $C$, so if $n \pi \le 1/x < (n+1)\pi$ $$\int_{0}^{x} |\sin(1/y)|\ dy \ge \int_{0}^{1/(n \pi)} |\sin(1/y)|\ dy \ge \sum_{k=n}^\infty \frac{C}{k+1} - \frac{C}{k+2} = \frac{C}{n+1}$$ and $$\frac{1}{x} \int_0^x |\sin(1/y)|\ dy \ge \ldots$$
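(A numeric cross-check of my own, not part of either answer: after the substitution $t=1/y$ used above, summing $|\sin t|/t^2$ period by period gives values consistent with the limiting value $2/\pi$ discussed in the comments.)

```python
import numpy as np
from scipy.integrate import quad

def I(x, periods=50_000):
    # t = 1/y turns (1/x)*int_0^x |sin(1/y)| dy into (1/x)*int_{1/x}^inf |sin t|/t^2 dt
    a = 1.0 / x
    k0 = int(np.ceil(a / np.pi))
    total = quad(lambda t: abs(np.sin(t)) / t**2, a, k0 * np.pi)[0]
    for k in range(k0, k0 + periods):
        total += quad(lambda t: abs(np.sin(t)) / t**2, k * np.pi, (k + 1) * np.pi)[0]
    return total / x   # neglected tail is of order 2/(pi*T) with T = (k0+periods)*pi

for x in [1e-1, 1e-2]:
    print(x, I(x))     # both close to 0.6366...
print("2/pi =", 2 / np.pi)
```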
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9981639981269836, "perplexity": 202.42508877797647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657133564.63/warc/CC-MAIN-20140914011213-00017-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |