url
stringlengths 15
1.13k
| text
stringlengths 100
1.04M
| metadata
stringlengths 1.06k
1.1k
|
---|---|---|
https://www.physicsforums.com/threads/universal-gas-constant-and-arrhenius-equation.715549/ | Universal Gas constant and arrhenius equation
1. Oct 9, 2013
Woopydalan
Hello,
I am curious as to how the Universal Gas Constant, R, is important with regards to the rates in solids? It is in the Arrhenius Equation
2. Oct 10, 2013
Staff: Mentor
The gas constant is simply $R \equiv N_\mathrm{A} k_\mathrm{B}$, the Boltzmann constant scaled for a molar quantities. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9877420663833618, "perplexity": 2143.538462373386}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608659.43/warc/CC-MAIN-20170526105726-20170526125726-00253.warc.gz"} |
https://brilliant.org/problems/jump-over-it-3/ | # Jump over it
A body is projected up a smooth inclined plane with a velocity u from the point A as shown in the figure. The angle of inclination is 45 degrees and the top is connected to a well of diameter 40 m. If the body just manages to cross the well, what is the value of u? Length of inclined plane is $$20\sqrt { 2 }$$.
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8516981601715088, "perplexity": 277.6469762483095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945624.76/warc/CC-MAIN-20180422154522-20180422174522-00411.warc.gz"} |
https://espressomd.github.io/doc/reaction_methods.html | # 19. Reaction methods¶
This chapter describes methods for simulating chemical reaction equilibria using reactive particles. Chemical species are referred to by an integer value stored in the particle type property. Chemical reactions take place by changing the value in the type property via Monte Carlo moves using the potential energy of the system before and after the reaction .
Please keep in mind the following remarks:
• All reaction methods uses Monte Carlo moves which require potential energies. Therefore reaction methods require support for energy calculations for all active interactions in the simulation. Some algorithms do not support energy calculation, e.g. OIF and IBM.
• When modeling reactions that do not conserve the number of particles, the method has to create or delete particles from the system. This process can invalidate particle ids, in which case the particles are no longer numbered contiguously. Particle slices returned by system.part are still iterable, but the indices no longer match the particle ids.
• Checkpointing is not supported, since the state of the Mersenne Twister random number generator cannot be serialized.
• For improved performance, you can set the type of invalidated particles with set_non_interacting_type() in all reaction method classes.
## 19.1. Thermodynamic ensembles¶
### 19.1.1. Reaction ensemble¶
The reaction ensemble allows to simulate chemical reactions which can be represented by the general equation:
$\mathrm{\nu_1 S_1 +\ \dots\ \nu_l S_l\ \rightleftharpoons\ \nu_m S_m +\ \dots\ \nu_z S_z } \label{general-eq}$
where $$\nu_i$$ is the stoichiometric coefficient of species $$S_i$$. By convention, stoichiometric coefficients of the species on the left-hand side of the reaction (reactants) attain negative values, and those on the right-hand side (products) attain positive values, so that the reaction can be equivalently written as
$\mathrm{\sum_i \nu_i S_i = 0} \,. \label{general-eq-sum}$
The equilibrium constant of the reaction is then given as
$K = \exp(-\Delta_{\mathrm{r}}G^{\ominus} / k_B T) \quad\text{with}\quad \Delta_{\mathrm{r}}G^{\ominus} = \sum_i \nu_i \mu_i^{\ominus}\,. \label{Keq}$
Here $$k_B$$ is the Boltzmann constant, $$T$$ is temperature, $$\Delta_{\mathrm{r}}G^{\ominus}$$ standard Gibbs free energy change of the reaction, and $$\mu_i^{\ominus}$$ the standard chemical potential (per particle) of species $$i$$. Note that thermodynamic equilibrium is independent of the direction in which we write the reaction. If it is written with left and right-hand side swapped, both $$\Delta_{\mathrm{r}}G^{\ominus}$$ and the stoichiometric coefficients attain opposite signs, and the equilibrium constant attains the inverse value. Further, note that the equilibrium constant $$K$$ is the dimensionless thermodynamic, concentration-based equilibrium constant, defined as
$K(c^{\ominus}) = (c^{\ominus})^{-\bar\nu} \prod_i (c_i)^{\nu_i}$
where $$\bar\nu=\sum_i \nu_i$$, and $$c^{\ominus}$$ is the reference concentration, at which the standard chemical potential $$\Delta_{\mathrm{r}}G^{\ominus}$$ was determined. In practice, this constant is often used with the dimension of $$(c^{\ominus})^{\bar\nu}$$
$K_c(c^{\ominus}) = K(c^{\ominus})\times (c^{\ominus})^{\bar\nu}$
A simulation in the reaction ensemble consists of two types of moves: the reaction move and the configuration move. The configuration move changes the configuration of the system. In the forward reaction, the appropriate number of reactants (given by $$\nu_i$$) is removed from the system, and the concomitant number of products is inserted into the system. In the backward reaction, reactants and products exchange their roles. The acceptance probability $$P^{\xi}$$ for a move from state $$o$$ to $$n$$ in the reaction ensemble is given by the criterion
$P^{\xi} = \text{min}\biggl(1,V^{\bar\nu\xi}\Gamma^{\xi}e^{-\beta\Delta E}\prod_{i=1}\frac{N_i^0!}{(N_i^0+\nu_{i}\xi)!} \label{eq:Pacc} \biggr),$
where $$\Delta E=E_\mathrm{new}-E_\mathrm{old}$$ is the change in potential energy, $$V$$ is the simulation box volume, $$\beta=1/k_\mathrm{B}T$$ is the Boltzmann factor, and $$\xi$$ is the extent of reaction, with $$\xi=1$$ for the forward and $$\xi=-1$$ for the backward direction.
$$\Gamma$$ is proportional to the reaction constant. It is defined as
$\Gamma = \prod_i \Bigl(\frac{\left<N_i\right>}{V} \Bigr)^{\bar\nu} = V^{-\bar\nu} \prod_i \left<N_i\right>^{\nu_i} = K_c(c^{\ominus}=1/\sigma^3)$
where $$\left<N_i\right>/V$$ is the average number density of particles of type $$i$$. Note that the dimension of $$\Gamma$$ is $$V^{\bar\nu}$$, therefore its units must be consistent with the units in which ESPResSo measures the box volume, i.e. $$\sigma^3$$.
It is often convenient, and in some cases even necessary, that some particles representing reactants are not removed from or placed at randomly in the system but their identity is changed to that of the products, or vice versa in the backward direction. A typical example is the ionization reaction of weak polyelectrolytes, where the ionizable groups on the polymer have to remain on the polymer chain after the reaction. The replacement rule is that the identity of a given reactant type is changed to the corresponding product type as long as the corresponding coefficients allow for it. Corresponding means having the same position (index) in the python lists of reactants and products which are used to set up the reaction.
Multiple reactions can be added to the same instance of the reaction ensemble.
An example script can be found here:
For a description of the available methods, see espressomd.reaction_methods.ReactionEnsemble.
### 19.1.2. Grand canonical ensemble¶
As a special case, all stoichiometric coefficients on one side of the chemical reaction can be set to zero. Such a reaction creates particles ex nihilo, and is equivalent to exchanging particles with a reservoir. This type of simulation in the reaction ensemble is equivalent to the grand canonical simulation. Formally, this can be expressed by the reaction
$\mathrm{\emptyset \rightleftharpoons\ \nu_A A } \,,$
where, if $$\nu_A=1$$, the reaction constant $$\Gamma$$ defines the chemical potential of species A. However, if $$\nu_A\neq 1$$, the statistics of the reaction ensemble becomes equivalent to the grand canonical only in the limit of large average number of species A in the box. If the reaction contains more than one product, then the reaction constant $$\Gamma$$ defines only the sum of their chemical potentials but not the chemical potential of each product alone.
Since the Reaction Ensemble acceptance transition probability can be derived from the grand canonical acceptance transition probability, we can use the reaction ensemble to implement grand canonical simulation moves. This is done by adding reactions that only have reactants (for the deletion of particles) or only have products (for the creation of particles). There exists a one-to-one mapping of the expressions in the grand canonical transition probabilities and the expressions in the reaction ensemble transition probabilities.
### 19.1.3. Constant pH¶
As before in the reaction ensemble, one can define multiple reactions (e.g. for an ampholytic system which contains an acid and a base) in one ConstantpHEnsemble instance:
cpH=reaction_methods.ConstantpHEnsemble(
temperature=1, exclusion_range=1, seed=77)
product_types=[1, 2], product_coefficients=[1, 1],
default_charges={0: 0, 1: -1, 2: +1})
cpH.add_reaction(gamma=1/(10**-14/K_diss), reactant_types=[3], reactant_coefficients=[1], product_types=[0, 2], product_coefficients=[1, 1], default_charges={0:0, 2:1, 3:1} )
An example script can be found here:
In the constant pH method due to Reed and Reed it is possible to set the chemical potential of $$H^{+}$$ ions, assuming that the simulated system is coupled to an infinite reservoir. This value is the used to simulate dissociation equilibrium of acids and bases. Under certain conditions, the constant pH method can yield equivalent results as the reaction ensemble . However, it treats the chemical potential of $$H^{+}$$ ions and their actual number in the simulation box as independent variables, which can lead to serious artifacts. The constant pH method can be used within the reaction ensemble module by initializing the reactions with the standard commands of the reaction ensemble.
The dissociation constant, which is the input of the constant pH method, is the equilibrium constant $$K_c$$ for the following reaction:
$\mathrm{HA \rightleftharpoons\ H^+ + A^- } \,,$
For a description of the available methods, see espressomd.reaction_methods.ConstantpHEnsemble.
### 19.1.4. Widom Insertion (for homogeneous systems)¶
The Widom insertion method measures the change in excess free energy, i.e. the excess chemical potential due to the insertion of a new particle, or a group of particles:
$\begin{split}\mu^\mathrm{ex}_B & :=\Delta F^\mathrm{ex} =F^\mathrm{ex}(N_B+1,V,T)-F^\mathrm{ex}(N_B,V,T)\\ &=-kT \ln \left(\frac{1}{V} \int_V d^3r_{N_B+1} \langle \exp(-\beta \Delta E_\mathrm{pot}) \rangle_{N_B} \right)\end{split}$
For this one has to provide the following reaction to the Widom method:
type_B=1
widom = reaction_methods.WidomInsertion(
temperature=temperature, seed=77)
reactant_coefficients=[], product_types=[type_B],
product_coefficients=[1], default_charges={1: 0})
widom.calculate_particle_insertion_potential_energy(reaction_id=0)
The call of add_reaction define the insertion $$\mathrm{\emptyset \to type_B}$$ (which is the 0th defined reaction). Multiple reactions for the insertions of different types can be added to the same WidomInsertion instance. Measuring the excess chemical potential using the insertion method is done by calling widom.calculate_particle_insertion_potential_energy(reaction_id=0) multiple times and providing the accumulated sample to widom.calculate_excess_chemical_potential(particle_insertion_potential_energy_samples=samples). If another particle insertion is defined, then the excess chemical potential for this insertion can be measured in a similar fashion by sampling widom.calculate_particle_insertion_potential_energy(reaction_id=1). Be aware that the implemented method only works for the canonical ensemble. If the numbers of particles fluctuate (i.e. in a semi grand canonical simulation) one has to adapt the formulas from which the excess chemical potential is calculated! This is not implemented. Also in a isobaric-isothermal simulation (NpT) the corresponding formulas for the excess chemical potentials need to be adapted. This is not implemented.
The implementation can also deal with the simultaneous insertion of multiple particles and can therefore measure the change of excess free energy of multiple particles like e.g.:
$\begin{split}\mu^\mathrm{ex, pair}&:=\Delta F^\mathrm{ex, pair}:= F^\mathrm{ex}(N_1+1, N_2+1,V,T)-F^\mathrm{ex}(N_1, N_2 ,V,T)\\ &=-kT \ln \left(\frac{1}{V^2} \int_V \int_V d^3r_{N_1+1} d^3 r_{N_2+1} \langle \exp(-\beta \Delta E_\mathrm{pot}) \rangle_{N_1, N_2} \right)\end{split}$
Note that the measurement involves three averages: the canonical ensemble average $$\langle \cdot \rangle_{N_1, N_2}$$ and the two averages over the position of particles $$N_1+1$$ and $$N_2+1$$. Since the averages over the position of the inserted particles are obtained via brute force sampling of the insertion positions it can be beneficial to have multiple insertion tries on the same configuration of the other particles.
One can measure the change in excess free energy due to the simultaneous insertions of particles of type 1 and 2 and the simultaneous removal of a particle of type 3:
$\mu^\mathrm{ex}:=\Delta F^\mathrm{ex, }:= F^\mathrm{ex}(N_1+1, N_2+1, N_3-1,V,T)-F^\mathrm{ex}(N_1, N_2, N_3 ,V,T)$
For this one has to provide the following reaction to the Widom method:
widom.add_reaction(reactant_types=[type_3],
reactant_coefficients=[1], product_types=[type_1, type_2],
product_coefficients=[1,1], default_charges={1: 0})
widom.calculate_particle_insertion_potential_energy(reaction_id=0)
Be aware that in the current implementation, for MC moves which add and remove particles, the insertion of the new particle always takes place at the position where the last particle was removed. Be sure that this is the behavior you want to have. Otherwise implement a new function WidomInsertion::make_reaction_attempt in the core.
An example script which demonstrates how to measure the pair excess chemical potential for inserting an ion pair into a salt solution can be found here:
For a description of the available methods, see espressomd.reaction_methods.WidomInsertion.
## 19.2. Practical considerations¶
### 19.2.1. Converting tabulated reaction constants to internal units in ESPResSo¶
The implementation in ESPResSo requires that the dimension of $$\Gamma$$ is consistent with the internal unit of volume, $$\sigma^3$$. The tabulated values of equilibrium constants for reactions in solution, $$K_c$$, typically use $$c^{\ominus} = 1\,\mathrm{moldm^{-3}}$$ as the reference concentration, and have the dimension of $$(c^{\ominus})^{\bar\nu}$$. To be used with ESPResSo, the value of $$K_c$$ has to be converted as
$\Gamma = K_c(c^{\ominus} = 1/\sigma^3) = K_c(c^{\ominus} = 1\,\mathrm{moldm^{-3}}) \Bigl( N_{\mathrm{A}}\bigl(\frac{\sigma}{\mathrm{dm}}\bigr)^3\Bigr)^{\bar\nu}$
where $$N_{\mathrm{A}}$$ is the Avogadro number. For gas-phase reactions, the pressure-based reaction constant, $$K_p$$ is often used, which can be converted to $$K_c$$ as
$K_p(p^{\ominus}=1\,\mathrm{atm}) = K_c(c^{\ominus} = 1\,\mathrm{moldm^{-3}}) \biggl(\frac{c^{\ominus}RT}{p^{\ominus}}\biggr)^{\bar\nu},$
where $$p^{\ominus}=1\,\mathrm{atm}$$ is the standard pressure. Consider using the python module pint for unit conversion.
### 19.2.2. Coupling reaction methods to molecular dynamics¶
The Monte Carlo (MC) sampling of the reaction can be coupled with a configurational sampling using Molecular Dynamics (MD). For non-interacting systems this coupling is not an issue, but for interacting systems the insertion of new particles can lead to instabilities in the MD integration ultimately leading to a crash of the simulation.
This integration instabilities can be avoided by defining a distance around the particles which already exist in the system where new particles will not be inserted, which is defined by the required keyword exclusion_range. This prevents big overlaps with the newly inserted particles, avoiding too big forces between particles, which prevents the MD integration from crashing. The value of the exclusion range does not affect the limiting result and it only affects the convergence and the stability of the integration. For interacting systems, it is usually a good practice to choose the exclusion range such that it is comparable to the diameter of the particles.
If particles with significantly different sizes are present, it is desired to define a different exclusion range for each pair of particle types. This can be done by defining an exclusion radius per particle type by using the optional argument exclusion_radius_per_type. Then, their exclusion range is calculated using the Lorentz-Berthelot combination rule, i.e. exclusion_range = exclusion_radius_per_type[particle_type_1] + exclusion_radius_per_type[particle_type_2]. If the exclusion radius of one particle type is not defined, the value of the parameter provided in exclusion_range is used by default. If the value in exclusion_radius_per_type is equal to 0, then the exclusion range of that particle type with any other particle is 0. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.883560836315155, "perplexity": 563.0719605245049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034170.1/warc/CC-MAIN-20220625034751-20220625064751-00667.warc.gz"} |
http://en.wikipedia.org/wiki/Generalized_least_squares | # Generalized least squares
In statistics, generalized least squares (GLS) is a technique for estimating the unknown parameters in a linear regression model. The GLS is applied when the variances of the observations are unequal (heteroscedasticity), or when there is a certain degree of correlation between the observations. In these cases ordinary least squares can be statistically inefficient, or even give misleading inferences.
## Method outline
In a typical linear regression model we observe data $\{y_i,x_{ij}\}_{i=1..n,j=1..p}$ on n statistical units. The response values are placed in a vector Y = (y1, ..., yn)′, and the predictor values are placed in the design matrix X = [[xij]], where xij is the value of the jth predictor variable for the ith unit. The model assumes that the conditional mean of Y given X is a linear function of X, whereas the conditional variance of the error term given X is a known matrix Ω. This is usually written as
$Y = X\beta + \varepsilon, \qquad \mathrm{E}[\varepsilon|X]=0,\ \operatorname{Var}[\varepsilon|X]=\Omega.$
Here β is a vector of unknown “regression coefficients” that must be estimated from the data.
Suppose b is a candidate estimate for β. Then the residual vector for b will be Y − Xb. Generalized least squares method estimates β by minimizing the squared Mahalanobis length of this residual vector:
$\hat\beta = \underset{b}{\rm arg\,min}\,(Y-Xb)'\,\Omega^{-1}(Y-Xb),$
Since the objective is a quadratic form in b, the estimator has an explicit formula:
$\hat\beta = (X'\Omega^{-1}X)^{-1} X'\Omega^{-1}Y.$
### Properties
The GLS estimator is unbiased, consistent, efficient, and asymptotically normal:
$\sqrt{n}(\hat\beta - \beta)\ \xrightarrow{d}\ \mathcal{N}\!\left(0,\,(X'\,\Omega^{-1}X)^{-1}\right).$
GLS is equivalent to applying ordinary least squares to a linearly transformed version of the data. To see this, factor Ω = BB′, for instance using the Cholesky decomposition. Then if we multiply both sides of the equation Y = + ε by B−1, we get an equivalent linear model Y* = X*β + ε*, where Y* = B−1Y, X* = B−1X, and ε* = B−1ε. In this model Var[ε*] = B−1Ω(B−1)′ = I. Thus we can efficiently estimate β by applying OLS to the transformed data, which requires minimizing
$(Y^*-X^*b)'(Y^*-X^*b) = (Y-Xb)'\,\Omega^{-1}(Y-Xb).$
This has the effect of standardizing the scale of the errors and “de-correlating” them. Since OLS is applied to data with homoscedastic errors, the Gauss–Markov theorem applies, and therefore the GLS estimate is the best linear unbiased estimator for β.
## Weighted least squares[1]
A special case of GLS called weighted least squares (WLS) occurs when all the off-diagonal entries of Ω are 0. This situation arises when the variances of the observed values are unequal (i.e. heteroscedasticity is present), but where no correlations exist among the observed variances. The weight for unit i is proportional to the reciprocal of the variance of the response for unit i.
## Feasible generalized least squares
In practice, the method cannot be applied since the covariance of the errors is generally unknown. One strategy for building an implementable version of GLS is the Feasible Generalized Least Squares (FGLS) estimator, where we proceed in two stages: (1) in the first step the model is estimated by OLS or another consistent (but inefficient) estimator, and the residuals are used to build a consistent estimator of the errors covariance matrix (to do so, we often need to examine the model adding additional constraints, for example if the errors follow a time series process, we generally need some theoretical assumptions on this process to ensure that a consistent estimator is available); and (2) using the consistent estimator of the covariance matrix of the errors, we implement GLS ideas.
A cautionary note is that the FGLS estimator is not always consistent. One case in which FGLS might be inconsistent is if there are individual specific fixed effects.[2]
In general this estimator has different properties than GLS. For large samples (i.e., asymptotically) all properties are (under appropriate conditions) common with respect to GLS, but for finite samples the properties of FGLS estimators are unknown: they vary dramatically with each particular model, and as a general rule their exact distributions cannot be derived analytically. For finite samples FGLS may be even less efficient than OLS in some cases. Thus, while GLS can be made feasible, it is not always wise to apply this method when the sample is small. A method sometimes used to improve the accurancy of the estimators in finite samples is to iterate, i.e. taking the residuals from FGLS to update the errors covariance estimator, and then updating the FGLS estimation, applying the same idea iteratively until the estimators vary less than some tolerance. But this method actually does not necessarily improve the efficiency of the estimator very much if the original sample was small. A reasonable option when samples are not too large is to apply OLS, but throwing away the classical variance estimator
$\sigma^2*(X'X)^{-1}$
(which is inconsistent in this framework) and using a HAC (Heteroskedasticity and Autocorrelation Consistent) estimator. For example, in autocorrelation context we can use the Bartlett estimator (often known as Newey-West estimator since these authors popularized the use of this estimator among econometricians in their 1987 Econometrica article), and in heteroskedastic context we can use the Eicker–White estimator ( Eicker–White ). This approach is much safer, and it is the appropriate path to take unless the sample is large, and "large" is sometimes a slippery issue (e.g. if the errors distribution is asymmetric the required sample would be much larger).
The ordinary least squares (OLS) estimator is calculated as usual by
$\widehat \beta_{OLS} = (X' X)^{-1} X' y$
and estimates of the residuals $\widehat{u}_j= (Y-Xb)_j$ are constructed.
For simplicity consider the model for heteroskedastic errors. Assume that the variance-covariance matrix $\Omega$ of the error vector is diagonal, or equivalently that errors from distinct observations are uncorrelated. Then each diagonal entry may be estimated by the fitted residuals $\widehat{u}_j$ so $\widehat{\Omega}_{OLS}$ may be constructed by
$\widehat{\Omega}_{OLS} = \operatorname{diag}(\widehat{\sigma}^2_1, \widehat{\sigma}^2_2, \dots , \widehat{\sigma}^2_n).$
It is important to notice that the squared residuals cannot be used in the previous expression, we need an estimator of the errors variances. To do so we can use a parametric heteroskedasticity model, or a nonparametric estimator. Once this step is fulfilled, we can proceed:
Estimate $\beta_{FGLS1}$ using $\widehat{\Omega}_{OLS}$ using weighted least squares
$\widehat \beta_{FGLS1} = (X'\widehat{\Omega}^{-1}_{OLS} X)^{-1} X' \widehat{\Omega}^{-1}_{OLS} y$
The procedure can be iterated, the first iteration is given by
$\widehat{u}_{FGLS1} = Y - X \widehat \beta_{FGLS1}$
$\widehat{\Omega}_{FGLS1} = \operatorname{diag}(\widehat{\sigma}^2_{FGLS1,1}, \widehat{\sigma}^2_{FGLS1,2}, \dots ,\widehat{\sigma}^2_{FGLS1,n})$
$\widehat \beta_{FGLS2} = (X'\widehat{\Omega}^{-1}_{FGLS1} X)^{-1} X' \widehat{\Omega}^{-1}_{FGLS1} y$
This estimation of $\widehat{\Omega}$ can be iterated to convergence.
Under regularity conditions any of the FGLS estimator (or that of any of its iterations, if we iterate a finite number of times) is asymptotically distributed as
$\sqrt{n}(\hat\beta_{FGLS} - \beta)\ \xrightarrow{d}\ \mathcal{N}\!\left(0,\,V\right).$
where n is the sample size and
$V = \text{p-lim}(X'\Omega^{-1}X/T)$
here p-lim means limit in probability
## References
1. ^ Strutz, T. (2010). Data Fitting and Uncertainty (A practical introduction to weighted least squares and beyond). Vieweg+Teubner. ISBN 978-3-8348-1022-9., chapter 3
2. ^ Hansen, Christina (07.13.2004). "GENERALIZED LEAST SQUARES INFERENCE IN PANEL AND MULTILEVEL MODELS WITH SERIAL CORRELATION AND FIXED EFFECTS". University of Chicago - Booth Faculty Pages. Retrieved 07.29.2014.
Some texbooks on econometrics where GLS and FGLS are discussed are: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 22, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9599723815917969, "perplexity": 766.363714244021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500823528.84/warc/CC-MAIN-20140820021343-00044-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://www.scienceopen.com/document?vid=46583525-234e-4273-a4f2-8b92748450bc | 68
views
0
recommends
+1 Recommend
1 collections
4
shares
• Record: found
• Abstract: found
# Unstable amplitude and noisy image induced by tip contamination in dynamic force mode atomic force microscopy.
ScienceOpenPubMed
Bookmark
There is no author summary for this article yet. Authors can add summaries to their articles on ScienceOpen to make them more accessible to a non-specialist audience.
### Abstract
Liquid 1-decanethiol was confined on an atomic force microscope (AFM) tip apex and the effect was investigated by measuring amplitude-distance curves in dynamic force mode. Within the working distance in the dynamic force mode AFM, the thiol showed strong interactions bridging between a gold-coated probe tip and a gold-coated Si substrate, resulting in unstable amplitude and noisy AFM images. We show that under such a situation, the amplitude change is dominated by the extra forces induced by the active material loaded on the tip apex, overwhelming the amplitude change caused by the geometry of the sample surface, thus resulting in noise in the image the tip collects. We also show that such a contaminant may be removed from the apex by pushing the tip into a material soft enough to avoid damage to the tip.
17578111 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8486732840538025, "perplexity": 2599.2853961574783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524522.18/warc/CC-MAIN-20190716095720-20190716121720-00214.warc.gz"} |
https://www.research.ed.ac.uk/en/publications/truthful-facility-assignment-with-resource-augmentation-an-exact--2 | # Truthful Facility Assignment with Resource Augmentation: An Exact Analysis of Serial Dictatorship
Ioannis Caragiannis, Aris Filos-Ratsikas*, Søren Kristoffer Stiil Frederiksen, Kristoffer Arnsfelt Hansen, Zihan Tan
*Corresponding author for this work
Research output: Contribution to journalArticlepeer-review
## Abstract
We study the truthful facility assignment problem, where a set of agents with private mostpreferred points on a metric space have to be assigned to facilities that lie on the metric space, under capacity constraints on the facilities. The goal is to produce such an assignment that minimizes the social cost, i.e., the total distance between the most-preferred points of the agents and their corresponding facilities in the assignment, under the constraint of truthfulness, which ensures that agents do not misreport their most-preferred points. We propose a resource augmentation framework, where a truthful mechanism is evaluated by its worst-case performance on an instance with enhanced facility capacities against the optimal mechanism on the same instance with the original capacities. We study a well-known mechanism, Serial Dictatorship, and provide an exact analysis of its performance. Among other results, we prove that Serial Dictatorship has approximation ratio g/(g − 2) when the capacities are multiplied by any integer g ≥ 3. Our results suggest that with a limited augmentation of the resources we can achieve exponential improvements on the performance of the mechanism and in particular, the approximation ratio goes to 1 as the augmentation factor becomes large. We complement our results with bounds on the approximation ratio of Random Serial Dictatorship, the randomized version of Serial Dictatorship, when there is no resource augmentation.
Original language English 27 Mathematical Programming, Series B 2 Nov 2022 https://doi.org/10.1007/s10107-022-01902-8 E-pub ahead of print - 2 Nov 2022
## Keywords
• Mechanism design without money
• Serial dictatorship
• Resource augmentation
• Approximation ratio
## Fingerprint
Dive into the research topics of 'Truthful Facility Assignment with Resource Augmentation: An Exact Analysis of Serial Dictatorship'. Together they form a unique fingerprint. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8178349733352661, "perplexity": 2249.0618562309705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500288.69/warc/CC-MAIN-20230205193202-20230205223202-00321.warc.gz"} |
https://www.physicsforums.com/threads/differential-geometry-question.214334/ | Homework Help: Differential Geometry Question
1. Feb 10, 2008
mXCSNT
1. The problem statement, all variables and given/known data
Assume that $$\tau(s) \neq 0$$ and $$k'(x) \neq 0$$ for all $$s \in I$$. Show that a necessary and sufficient condition for $$\alpha(I)$$ to lie on a sphere is that $$R^2 + (R')^2T^2 = const$$ where $$R = 1/k$$, $$T = 1/\tau$$, and $$R' = \frac{dr}{ds}$$
2. Relevant equations
$$\alpha(s)$$ is a curve in R3 parametrized by arc length
$$k = curvature = |\alpha''|$$
$$\tau = torsion = -\frac{\alpha' \times \alpha'' \cdot \alpha'''}{k^2}$$ (note sign; this is opposite of some conventions)
3. The attempt at a solution
I've approached this from 2 directions, but I haven't gotten them to meet. First, a necessary and sufficient condition is that $$|\alpha - P|$$ is constant, where P is the center of the circle. Alternatively, $$(\alpha - P) \cdot \alpha' = 0$$.
And I've expanded out $$R^2 + (R')^2T^2 = const$$ to get
$$\frac{(\alpha' \times \alpha'' \cdot \alpha''')^2 + (\alpha'' \cdot \alpha''')^2}{(\alpha'' \cdot \alpha'')(\alpha' \times \alpha'' \cdot \alpha''')^2} = const$$
Also, I'm going to guess that the const on the right hand side is some function of the radius of the sphere, maybe the square of it (which would be $$(\alpha - P) \cdot (\alpha - P)$$), because what else is constant in a sphere?
But I don't know where to go from here. I'm just looking for a hint at an avenue of approach, please nothing specific.
2. Feb 10, 2008
mXCSNT
So if I assume that the "const" in question is the square of the radius, that could make R and (R')T the lengths of respective legs of a right triangle whose hypotenuse is the radius. I know that R is the radius of curvature, so drawing a diagram I'm guessing that the center P is
$$P = Rn - (R')Tb$$
(where n is the normal vector $$\alpha''/|\alpha''|$$ and b = $$t \times n$$ is the binormal vector).
So then the sphere radius I want to test, $$(\alpha-P) \cdot (\alpha-P)$$, becomes $$(\alpha - Rn + (R')Tb) \cdot (\alpha - Rn + (R')Tb)$$
$$= \alpha \cdot \alpha - 2 R \alpha \cdot n + 2 R' T \alpha \cdot b + R^2 + (R')^2T^2$$
Now if I assume that $$R^2 + (R')^2T^2$$ is constant I need to show that $$\alpha \cdot \alpha - 2 R \alpha \cdot n + 2 R' T \alpha \cdot b$$ is constant to show the sphere radius is constant. Am I on the right track?
3. Feb 10, 2008
mXCSNT
Revised guess for center P:
$$P = \alpha + Rn - (R')Tb$$
so now the constant radius follows immediately and I simply have to show that P itself is constant. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.970366895198822, "perplexity": 221.40630546542982}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866870.92/warc/CC-MAIN-20180524205512-20180524225512-00161.warc.gz"} |
http://math.stackexchange.com/users/8679/hristo?tab=summary | Hristo
Reputation
284
Top tag
Next privilege 500 Rep.
Access review queues
3 11
Impact
~11k people reached
• 0 posts edited
Questions (5)
12 the expectation of a chocolate bar 9 another balls and bins question 8 How do I calculate the $p$-norm of a matrix? 3 network flow as a linear combination 1 closed form equation to figure out sudoku square from given index
Reputation (284)
+20 another balls and bins question +10 closed form equation to figure out sudoku square from given index +10 How do I calculate the $p$-norm of a matrix? +5 closed form equation to figure out sudoku square from given index
1 closed form equation to figure out sudoku square from given index
Tags (8)
1 closed-form × 2 0 linear-algebra 1 proof-writing × 2 0 matrices 0 probability × 2 0 network-flow 0 graph-theory 0 norm
Accounts (13)
Stack Overflow 15,673 rep 34105182 Super User 294 rep 3615 Mathematics 284 rep 311 Area 51 151 rep 1 Graphic Design 149 rep 28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8758360147476196, "perplexity": 3352.06540901052}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738087479.95/warc/CC-MAIN-20151001222127-00208-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://chemistry.stackexchange.com/questions/104673/which-reaction-rate-formula-is-suited-for-diatomic-molecules-in-a-plasma | # Which reaction rate formula is suited for diatomic molecules in a plasma?
I am an experimental physicist, so I unfortunately do not know much about chemistry except the basics. For my current study, I am investigating diatomic molecule formation in plasma. These diatomic molecules are often not stable, but they get excited and emit light, which makes them interesting for spectroscopy.
What I want to know is: If I have atoms, ions and electrons in my plasma, and I have some temperature T, which model for the molecular reactions would best describe this system? What kind of reaction rate formula would I need?
To give a specific example and to explain my current assumptions: In plasma of samples containing Ca and Cl (among other elements), I get emission from the diatomic molecule CaCl. I assume that the concentrations correlate with the stoichiometry of the sample. Now I want to know the amount of CaCl that forms in the plasma, and how it relates to the concentrations in the sample. I currently assume that the reaction can be described like this: $$\ce{Ca + Cl -> CaCl}$$ So it should be very simple, as every element is already present either as an atom or as an ion. (Note that I do not really understand how the ions play into this, I'm assuming that the reaction is the same but that an electron is captured during the reaction - if someone could elaborate on this, I would also be grateful.)
As for the reaction rate, I have found the following rate law: $$r = k [A]^m [B]^n$$ Where m and n are integers and [A] and [B] are the concentrations of two reacting species A and B. What I don't understand is the physical interpretation of m and n - I have a feeling that for single-element reactants like the ones in my case, they should both be 1. Is that correct, or is there a way these could be wildly different, and maybe even non-integers? Because my experimental results can best be described by this formula if m and n are non-integers (m = 0.6 and n = 1.3, for example). But that confuses me a lot and makes me think that maybe this type of reaction rate formula might not be suited for reactions of atoms/ions in a plasma.
I have found also another formula for the bimolecular reaction on a surface (called the Langmuir–Hinshelwood mechanism): $$r = k \frac{K_1[A]K_2[B]}{(1+K_1[A]+K_2[B])^2}$$ While I have no idea how this would relate to my situation, the results I get from this formula are quite nice, and I do not have to change anything from integers to non-integers. So maybe this could be a suitable explanation instead?
I should note that, if I say "it fits my experiments", I mean that I have done several experiments with different concentrations of Ca and Cl in my sample, and I try to fit them all by saying that the intensity of my CaCl emission should be proportional to the reaction rate, and the reaction rate parameters have to be the same in each case. There are a lot of assumptions here, but since the plasma itself is very complex, this is the best I can do for now.
Thank you very much for your help!
EDIT: After reading some of Paul's suggestions, my understanding is now that I have to think about every possible reaction that might take place, and that I have to distinguish between the excited states CaCl(A) and CaCl(B) and the ground state CaCl(X): $$\ce{Cl + e- <=> Cl-} \\ \ce{Cl <=> Cl+ + e-} \\ \ce{Ca <=> Ca+ + e-} \\ \ce{Ca + Cl <=> CaCl(A)} \\ \ce{Ca + Cl <=> CaCl(B)} \\ \ce{Ca+ + Cl- <=> CaCl(A)} \\ \ce{Ca+ + Cl- <=> CaCl(B)} \\ \ce{CaCl(A) + M <=> CaCl(X) + M} \\ \ce{CaCl(B) + M <=> CaCl(A) + M} \\ \ce{CaCl(B) + M <=> CaCl(X) + M} \\ \ce{Ca+ + Cl + M <=> CaCl+ + M} \\ \ce{Ca + Cl+ + M <=> CaCl+ + M} \\ \ce{CaCl+ + e- <=> CaCl(A)} \\ \ce{CaCl+ + e- <=> CaCl(B)} \\ \ce{CaCl(A) -> CaCl(X) + h\nu} \\ \ce{CaCl(B) -> CaCl(X) + h\nu} \\$$ This just seems ridiculously complicated, if you ask me. (And I have not even started taking into account that Ca also reacts with O to CaO.) Surely this can be simplified to some extent? Some of these reactions are probably very unlikely?
But so for each species, I would then set up a reaction rate based on the equations above: $$\frac{d[\ce{Cl}]}{dt} = - k_1[\ce{Cl}][\ce{e-}] + k_1'[\ce{Cl-}] - k_2[Cl] + k_2'[\ce{Cl+}][\ce{e-}] + \dots \\ \text{etc.}$$ Then I can solve this set of equations either numerically or by simplifying something.
Am I missing something here? Are the equations correct? Do you think they can be simplified in some way?
• You oversimplify things greatly. It is very unlikely that only one reaction will suffice to describe the kinetics in your plasma. Also the reaction you mention must involve a third scattering body to take away the binding energy or requires a very fast radiative recombination rate. It is also likely that the electrons in your plasma are responsible for exciting (part of) your molecules. You'll have to consider a more complex mechanism to predict the production of the excited states of CaCl and observed fluorescence.
– Paul
Nov 22, 2018 at 10:13
• The points you mention (third scattering body, fast radiative recombination) are exactly the kind of thing that I wanted to know more about. This is why I am asking the question in the first place. As for the excitation, I would have thought that this can be regarded as a separate step. First, I need to know how many molecules even form before I can look at their emission rate. Nov 22, 2018 at 11:40
• The point about kinetics is that - in general - it involves coupled differential equations, so you should look at all relevant processes simultaneously. I think you're best of consulting a text book on the topic such as Laidler's Chemical Kinetics. Articles that might be of interest are J. Photochem. 25, 389 (1984) or Phys. Rev. E 57, 4684 (1998).
– Paul
Nov 22, 2018 at 12:34
• Thank you for the literature recommendations. In the beginning of the first article, three reaction equations are presented. Then they say that based on these, the intensity of an O2 emission should be I = k1[O]^2[M]/(1+k2/k3 [N2]). This is very interesting, can you maybe help me understand what they are doing there? I mean, this seems to be the type of derivation that I have in mind - start with something complex, but end up at something simple. Nov 22, 2018 at 15:47
• They assume equilibrium conditions so that the reaction rate equals zero. Taking the expression for the rate of formation of excited oxygen molecules and setting the rate to zero allows to write the equilibrium concentration of O2 in terms of the other species.
– Paul
Nov 22, 2018 at 17:43
$$\ce{A + B ->[k1] AB^{*}} \\ \ce{AB^{*} ->[k2] A + B} \\ \ce{AB^{*} + M ->[k3] AB + M} \\ \ce{AB^{*} + M ->[\gamma k3] AB(A) + M} \\ \ce{AB(A) + M ->[k4] AB(X) + M} \\ \ce{AB(A) ->[k5] AB(x) + h\nu} \\$$ Here $$\ce{AB^{*}}$$ is a precursor of molecule AB that dissociates easily and needs to relax by collision with M to some form of AB, which may be excited or not. A fraction $$\gamma$$ of these collisions will produce AB(A), which is molecule AB in the electronic state A. This can either collide with another species M again, or it can emit a photon. In both cases it transitions to the ground state AB(X). Only the last transition is the one that produces my molecular emission, so that is the one that interests me. Using the defined reaction constants, I can describe the formation of $$\ce{AB^{*}}$$ and AB(A) this way: $$\frac{d[AB^*]}{dt} = k_1 [A] [B] - k_2 [AB^*] - k_3[AB^*]\\ \frac{d[AB(A)]}{dt} = \gamma k_3 [AB^*] [M] - k_4 [AB(A)] [M] - k_5 [AB(A)]$$
(Note that [M] represents different concentrations depending on which species is most dominant in each of these terms, i.e. [M] cannot be factored out.) Finally, I can make the assumption that these reactions are very fast in comparison to the changes in my plasma, so that I have a quasistatic process. This needs to be validated by actual experiments, but let's assume it for now. Then we can say that the concentrations do not change, i.e. $$\frac{d[AB^*]}{dt} = \frac{d[AB(A)]}{dt} = 0$$, and we get: $$[AB^*] = \frac{k_1 [A] [B]}{k_2+k_3} \\ [AB(A)] = \frac{\gamma k_3 [AB^*] [M]}{k_4 [M] + k_5} = \frac{\gamma k_1 k_3 [M]}{(k_2 + k_3)(k_4 [M] + k_5)} [A] [B]$$ Now, since my intensity is proportional to $$k_5 [AB(A)]$$, I can write my intensity as: $$I \propto [A] [B]$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8423448801040649, "perplexity": 284.2658838944428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710237.57/warc/CC-MAIN-20221127105736-20221127135736-00576.warc.gz"} |
http://swmath.org/software/10781 | # SU3
Program for generating tables of SU(3)⊃SU(2)⊗U(1) coupling coefficients. A C-language program which tabulates the isoscalar factors and Clebsch-Gordan coefficients for products of representations in SU(3)⊃SU(2)⊗U(1) is presented. These are efficiently computed using recursion relations, and the results are presented in exact precision as square roots of rational numbers. Output is in LaTeX format.
## References in zbMATH (referenced in 3 articles )
Showing results 1 to 3 of 3.
Sorted by year (citations) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8239092230796814, "perplexity": 2990.05078544897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190134.25/warc/CC-MAIN-20170322212950-00492-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://www.physicsforums.com/showpost.php?p=754974&postcount=65 | View Single Post
Recognitions: Homework Help Science Advisor Let K be the set of all solutions that satisfy $\zeta (s) = 0$. Let there exist some p such that: $$\frac{\partial^2 p}{\partial i^2} a^3 - \Gamma (i^2) = 0$$ Now I can prove that these exists a solution $i = f_k (p)$ therefore i exists. If i exists there must be a limit to g(p) and p approaches infinity and therefore p exists. Thus the roots of: $$\int_0^\infty \frac{g(s)}{p \Gamma(s)} ds = \sin i$$ Are synonyms to the Zeta functions roots and all have roots of $\Re (p) = 1/2$ Therefore RH is proven and no one can come up with a counter-example. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9975653290748596, "perplexity": 335.3992383698806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705575935/warc/CC-MAIN-20130516115935-00053-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://www.matematica.pt/en/useful/quadratic-equation-calculator.php | To find out the roots (zeros) of a second degree function, start by placing that function in canonical form (simplifying as much as possible) and making it equal to zero. After this step, you have a second degree equation where the second member is zero. To solve this equation, start by trying to identify whether it is a complete or incomplete second degree equation. The difference is quite simple. The complete second degree equation has the 3 coefficients: a, b, c and can be written in the form ax^2+bx+c=0. While in the incomplete b or c is missing or both. Then, enter the coefficients of the terms of the equation in the corresponding boxes of the calculator. This way, in addition to getting to know the zeros, you can also view the resolution step by step. If it is a complete equation, the general formula of complete second degree equations is used. If it is incomplete, the first step in solving this type of equations is to draw a common factor, since an x is repeated in both terms. Finally we have two factors whose result is zero, so one of the two must be 0.
NOTE
If you want to perform calculations where the coefficient is a fraction, you must enter the number in decimal form. For example, instead of 1/4 you must enter 0.25.
### Solve a (complete) quadratic equation
Example: Find the zeros (roots) of the equation x^2 + 2x - 15 = 0
a: b: c:
### Solve an incomplete second degree equation (the independent term is missing)
Example: Find the zeros (roots) of the equation 4x^2 + 6x = 0
a: b:
### Solve an incomplete second degree equation (the first degree term is missing)
Example: Find the zeros (roots) of the equation 4x^2 - 16 = 0
a: c:
## Information
Any quadratic equation can have: 2 solutions, if the discriminant (number inside the root) is greater than zero; one solution, if the discriminant is zero; no solution, if the discriminant is negative. If we are working in the universe of complex numbers, then the second-degree equation always has at least one solution. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9121662378311157, "perplexity": 265.2242726874272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055645.75/warc/CC-MAIN-20210917120628-20210917150628-00133.warc.gz"} |
https://planetmath.org/globalcharacterizationofhypergeometricfunction | # global characterization of hypergeometric function
Riemann noted that the hypergeometric function can be characterized by its global properties, without reference to power series, differential equations, or any other sort of explicit expression. His characterization is conveniently restated in terms of sheaves:
Suppose that we have a sheaf of holomorphic functions over $\mathbb{C}\setminus\{0,1\}$ which satisfy the following properties:
• It is closed under taking linear combinations.
• The space of function elements over any open set is two dimensional.
• There exists a neighborhood $D_{0}$ such that $0\in D)$, holomorphic functions $\phi_{0},\psi_{0}$ defined on $D_{0}$, and complex numbers $\alpha_{0},\beta_{0}$ such that, for an open set of $d_{0}$ not containing $0$, it happens that $z\mapsto z^{\alpha_{0}}\phi(z)$ and $z\mapsto z^{\beta_{0}}\psi(z)$ belong to our sheaf.
Then the sheaf consists of solutions to a hypergeometric equation, hence the function elements are hypergeometric functions.
Title global characterization of hypergeometric function GlobalCharacterizationOfHypergeometricFunction 2014-12-31 15:15:16 2014-12-31 15:15:16 rspuzio (6075) rspuzio (6075) 6 rspuzio (6075) Definition msc 33C05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 10, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.954348623752594, "perplexity": 475.6331714422015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202728.21/warc/CC-MAIN-20190323060839-20190323082839-00507.warc.gz"} |
https://www.handsonmechanics.org/thermal/615 | # Closed vs Open Systems
### Model Description
This is a demonstration of the basic principles open and closed systems. It reinforces the conservation of mass and energy. This demonstration should take 5-8 minutes.
### Engineering Principle
Most students have a basic understanding of open and closed systems, but tend to get lost in the terminology. In order to assist in the student’s internalization of the concept of open vs. closed systems, this demonstration shows the differences between them in an easily understandable method.
### What You Need
Item Quantity Description/Clarification
Soda Can 1 It must not be opened.
Pitcher of Water 1 Half full
### How It’s Done
In Class: Highlight the conservation of mass:
$0 = \frac{dm_{ev}}{dt} + \sum\limits_e \dot{m_e} - \sum\limits_i \dot{m_i}$
For a closed system, there is no mass passing the boundary, so mass is conserved within the system. Using your un-opened soda can, rotate the can and ask the class what they expect to happen. Obviously, the contents will not spill out—the mass can not cross the boundary of the system.
Then open the can, and drink some soda. The mass is crossing the boundary of the system at the opening. At this point, question if the flow of mass across the boundary is steady, and how the $\frac{dm_{ev}}{dt}$ term will be affected by the draining of the can.
From the conservation of mass, the conservation of energy can be interpreted. What will happen if we leave the un-opened cold soda in the warm classroom? The soda will get warmer, thereby showing that heat can transfer across the boundaries of a closed system.
If we open the cold soda, we notice that the can itself is cold on our hand and that the mass of soda going down our throat is cold. This shows that energy in an open system can be transferred with mass entering or exiting and / or through the boundary of the system.
Observations: Students should be able to observe how mass can not cross the boundary of the closed system (un-opened can) and how it is draining out of the open system (opened can). Students should also be able to recognize the heat transfer across the can itself (open or closed system) as well as with the mass as it drains out of the can (open system).
Additional Application: When introducing the 2nd Law of Thermodynamics, re-introduce the cold can of soda. In order to show how the 2nd Law predicts direction, ask which direction the heat must transfer. Will the opposite violate the 1st Law? Additionally, pour the contents of the soda into a pitcher of water. Will the soda and water unmix spontaneously?
### Cite this work as:
Seth Norberg (2019), "Closed vs Open Systems," https://www.handsonmechanics.org/thermal/615. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8873164057731628, "perplexity": 974.911474673543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056578.5/warc/CC-MAIN-20210918214805-20210919004805-00012.warc.gz"} |
http://mathhelpforum.com/algebra/49342-stuck-two-word-problems-functions.html | # Thread: Stuck on Two Word Problems/ Functions
1. ## Stuck on Two Word Problems/ Functions
1."The combination of cold temperatures and wind speed determine what is called wind chill. The wind chill is a temperature that is the still-air equivalent of the combination of cold and wind. When the Wind speed is 25mph, the wind chill depends on the temperature t (in degrees Fahrenheit) according to
WC= 1.479t - 43.821
For what temp does it feel at least 30 degrees colder than teh air temp? That is, find t such that WC <=(less than or equal to) t -30
Don't really have any idea on how to do that one, I tried graphing it on my graphing calcualtor but just confused myself.
2. A shipping crate has a square base with sides of length x feet and it is half as tall as it is wide. If the material for the bottom and sides of the box cost 2\$ per sqr foot and the material for the top cost 1.50 per sqr ft., express the total cost of materials for the box as a function of x.
So I know the length = x and the height = .5x
So the area of the sides is x(.5x) times 4 would give me the area for the sides in sqr feet? and to do the bottom I'd just do x times x and the same with the top? I'm not sure hwo to put it all together though
2. For the first problem, you're making it more complicated than it is. What you want is to find the temperature at which the wind chill changes the temperature felt by at least 30 degrees less, so solve the equation for the desired change: $1.479t - 43.821 = -30$
For the second problem, you will multiply each side (represented by a number in square feet) by the cost per square foot for that side, and then add those numbers together to determine the total cost. The trick is that the top's cost is $1.5x^2$, whereas the bottom's cost is $2x^2$.
3. Originally Posted by icemanfan
For the first problem, you're making it more complicated than it is. What you want is to find the temperature at which the wind chill changes the temperature felt by at least 30 degrees less, so solve the equation for the desired change: $1.479t - 43.821 = -30$
For the second problem, you will multiply each side (represented by a number in square feet) by the cost per square foot for that side, and then add those numbers together to determine the total cost. The trick is that the top's cost is $1.5x^2$, whereas the bottom's cost is $2x^2$.
Ok I got the answer for the first one t=9.345 degrees
for the 2nd one I got f(x) = 3.5x^2 + 4x
do these look right? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.875256359577179, "perplexity": 495.17979387965545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189377.63/warc/CC-MAIN-20170322212949-00558-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://mathhelpforum.com/discrete-math/136342-cardinality-question.html | # Math Help - Cardinality Question
1. ## Cardinality Question
Let S be the set of real numbers r s.t. $\cos (r) \in Q\sqrt 2$.
$Q\sqrt 2$ be the set of real numbers of the form $a+b\sqrt 2$, where a,b is rational.
Find the cardinality of S.
I know 0,2pi,4pi.. etc are in S, so S is infinite
and $Q\sqrt 2$ is countably infinite (dont know if it matters)
then im not sure how to go on...any hints would be appreciated.
2. For each number $x\in\mathbb{R}$, the set $\{r\in\mathbb{R}\mid\cos(r)=x\}$ is at most countable. Picture the (co)sine wave that is crossed by a horizontal line; there are at most countably many intersection points.
Now, $\{r\in\mathbb{R}\mid\cos(r)\in\mathbb{Q}(\sqrt{2}) \}=\bigcup_{x\in\mathbb{Q}(\sqrt{2})}\{r\in\mathbb {R}\mid\cos(r)=x\}$, so... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9493138790130615, "perplexity": 463.524784914172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648297.22/warc/CC-MAIN-20141024030048-00170-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/135627/explanation-of-a-mathematical-phenomenon | # Explanation of a mathematical phenomenon?
I recently came upon a strange pattern, and have absolutely no idea why it exists. I was hoping for an explanation.
Let there be three integers: $p$, $a$ and $n$; $p$ is any prime number, while $a$ and $n$ are numbers such that $1\leq a < p$ and $1 \leq n < p$
Let Set $S$ be $\{{an^1\pmod p, an^2\pmod p, \dots, an^{p-1}\pmod p\}}$. Why is it that the number of distinct numbers in Set $S$ is always a divisor of $p-1$?
An example : Let $p$ be 7, $a$ be $3$ and $n$ be $2$.
Therefore Set S would be : $\{{3 * 2^1\pmod7,\dots,3* 2^6\pmod7\}}$
which equals to: $6\pmod 7$, $5\pmod7$, $3\pmod 7$, $6\pmod7$, $5\pmod7$, $3\pmod7$.
There are 3 distinct numbers in this set. $6$, or $p-1$, is divisible by $3$.
This is true for any prime $p$. Why is the number of distinct numbers in Set $S$ always a divisor of $p-1$?
I realize this is a seemingly random pattern, but I need to understand it to complete my paper.
-
– Qiaochu Yuan Apr 23 '12 at 4:27
Fixed link: Lagrange's theorem (group theory) – Rahul Apr 23 '12 at 6:00
Without group theory:
Look at your 7, 3, 2 example. Note how the numbers go 6, 5, 3, 6, 5, 3; you get the distinct numbers 6, 5, 3, and then they repeat. Well, that always happens: you get a string of distinct numbers, and then that string repeats, exactly, until you get to the end. Since you write down $p-1$ numbers total, and the string repeates exactly until you have written down the $p-1$ numbers, the number of numbers in the string must be a factor of $p-1$.
Well, there are a few assertions in that paragraph that need to be proved. You don't need group theory to prove them, you can find the topic discussed (although not in exactly the terms I've used) in any introductory Number Theory text. I'm sorry, but I'm not up to writing it all out here.
-
Thanks. Just one question: Why do you a string of distinct numbers? – user26649 Apr 23 '12 at 22:08
Not sure I understand your question. Do you mean, why do you get a string of distinct numbers? Well, you get a list of numbers, and they are distinct until there is a repeat. What I asserted without proof is that the first number to repeat is always the first number in the list, and that thereafter everything repeats exactly that first string of distinct numbers. If I have misunderstood your question, please try again. – Gerry Myerson Apr 24 '12 at 2:03
Sorry, stupid typo. That partially answered my question- I'm confused as to why it repeats after a number of distinct numbers that can divide $p-1$. As in why does it restart it's cycle after 3 distinct numbers? Why not 4 or some other number not a divisor of $p-1$? – user26649 Apr 24 '12 at 3:43
If it started cycling after some number of terms not a divisor of $p-1$, then you'd reach term number $p-1$ in the middle of a cycle. But there's a theorem that says that $n^p\equiv n\pmod p$, so in fact when you reach $p-1$ you have to be ready to start a new cycle, not be in the middle of an old one. This is called Fermat's little theorem, and is another one of those topics discussed in intro number theory texts. – Gerry Myerson Apr 24 '12 at 6:02
Awesome, thanks! – user26649 Apr 24 '12 at 13:09
Hint $\$ Given you have no knowledge of group theory, you can employ these two simple facts.
$(1)\ \ \ \rm mod\ p\!:\ n^{p-1}\equiv 1\ \Rightarrow\:$ the order k of n divides $\rm p\!-\!1,\:$ i.e. $\:\!$ if $\rm\:k>0\:$ is least such that $\rm n^k\equiv 1\:$ then $\rm\:k\ |\ p\!-\!1.\:$ Thus the sequence of powers $\rm\:n,n^2,\ldots,n^{p-1}\:\!$ decomposes into the subsequence $\rm\:n,n^2,\ldots,n^k\!\equiv\! 1\:$ repeated $\rm\:(p\!-\!1)/k\:$ times.
$(2)\ \$ The map $\rm\:x \to a\:\!x\:$ is $1\!-\!1$ since $\rm\:a\not\equiv 0\:$ $\Rightarrow$ $\rm\:a^{-1}\:$ exists mod $\rm p.\:$ Hence scaling all elements of the repeated subsequence by $\rm\: a\:$ preserves its length, i.e. the elements remain distinct. Note that the elements of the original subsequence are distinct, else $\rm\:n^i \equiv n^j\:$ for $\rm\:1\le i < j \le k\:$ yields $\rm\:n^{j-i}\equiv 1\:$ for $\rm\:0 < j\!-\!i < k,\:$ contra minimality of $\rm k$ (here, again, we've used the fact that $\rm\: mod\ p\!:\ n\not\equiv 0\:$ implies that $\rm\:n^{-1}$ exists, hence $\rm\:n\:$ is cancellable from both sides of a congruence).
-
This is an example of Lagrange's theorem in group theory. The elements $an^k$ form a coset in the group $U_p$, the multiplicative group of the field of $p$ elements.
-
... actually in the group $U_p$ (the nonzero integers mod $p$ under multiplication mod $p$, rather than all the integers mod $p$ under addition mod $p$). – Robert Israel Apr 23 '12 at 4:30
I have absolutely no knowledge about group theory (I do not even know what a group is). Can this be explained without group theory, or would I need to learn it? – user26649 Apr 23 '12 at 4:42
@Farhad: this is group theory. Any explanation of this phenomenon would constitute an explanation of (a small part of) group theory. – Qiaochu Yuan Apr 23 '12 at 5:03
@Qiaochu, sure, but the phenomenon was known long before there was any such discipline as group theory, and can be explained without any reference to group theory. – Gerry Myerson Apr 23 '12 at 5:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8267822265625, "perplexity": 194.27433262571714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246639414.6/warc/CC-MAIN-20150417045719-00182-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/inverse-of-exponential-function.531061/ | Inverse of exponential function
1. mark187
3
1. The problem statement, all variables and given/known data
Give the inverse of this function
N=f(L)=1816-8L
The answer has to be filled in in Maple
2. Relevant equations
3. The attempt at a solution
N=1816-8L
16-8L=ln(N)/ln(18)
-8L=(ln(N)/ln(18))-16
L=-ln(N)/(8*ln(18))+2
Is this correct?
When i fill this in in maple it's incorrect.
I've really no clue why it is wrong.
2. ehild
12,697
L=-ln(N)/(8*ln(18))+2 is the same function as the original. Choose L the independent variable and N the dependent one. (Just exchange L and N).
ehild
3. Mentallic
3,772
Yes it's correct. What are you typing into maple exactly? Your syntax is probably slightly off.
edit: Sorry I didn't notice that you forgot to swap your variables.
4. mark187
3
thanks! It's also mentioned that it should be simplified as much as possible.
Does that mean that the 8*LN(18) could be simplified to 8*LN(2*9) and more?
5. Mentallic
3,772
It's as simple as it can get, unless you believe $\ln{\frac{1}{x}}$ is more simple than $-\ln{x}$ you shouldn't be changing anything.
$\ln{18}$ is definitely more desirable than $\ln{9\cdot2}$
6. Ray Vickson
6,614
When I let Maple solve the equation for L it gives me exactly what you wrote. Why do you think Maple thinks it is incorrectÉ What type of error message are you receivingÉ
That annoying É is actually a question mark, but when I type it in it gives me that accented E; that only seems to happen when I access this website through Google Chrome!
RGV
7. mark187
3
Well, this assignment was given in a little online test.
The point is, that i can't see why the answer was wrong.
I have 2 chances, so for the second change i will try to swap the variables. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9479596018791199, "perplexity": 993.8623245416077}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738006497.89/warc/CC-MAIN-20151001222006-00184-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://chemistry.stackexchange.com/questions/33816/maximum-useful-work-from-a-process | # Maximum Useful Work from a Process
Negative of the $\Delta G$ for a process is the maximum useful work that can be obtained from it (at constant pressure and temperature). I understood this in this way: $\Delta H$ is the heat absorbed by the system (since the process is at constant temperature and pressure), so equivalently $-\Delta H$ energy is obtained from the system after doing expansion work. Since $\Delta S$ is the entropy created in the process, at the very least $-\Delta S$ entropy must be created in the surroundings - that is, at least $-T\Delta S$ energy must be lost as heat. This comes from the $-\Delta H$, and thus leaves $-\Delta H + T\Delta S$ to do useful work. No more work can be done than this. $-\Delta H + T\Delta S$ is $-\Delta G$, so $-\Delta G$ is the maximum possible useful work. First of all, I wanted to know if this is correct, and if this is actually why $-\Delta G$ is the maximum possible useful work.
Now if $|\Delta H|$ is more than $|T\Delta S|$, with both being negative, then one can think of the $W_{max}$ or $-\Delta G$ as a part of the $|\Delta H|$; since $|\Delta G|< |\Delta H|$. Some part of $|\Delta H|$ goes as $|T\Delta S|$ to increase the surroundings' entropy, and the other part in doing useful work. But if both are positive, with $|T\Delta S|$ being more than $|\Delta H|$, again $\Delta G$ is negative, allowing useful work to be extracted. But now it seems as though $|T\Delta S|$ heat will be extracted from the surroundings, $|\Delta H|$ used up in the process, while the rest can be converted to work - in other words, useful work is a part of $|T\Delta S|$ - with the other part used by the process as $|\Delta H|$. Is this correct? Is $|T\Delta S|$ extracted from the surroundings actually?
"Useful work" does not include expansion work. For a proof of why $w_{\text{by,add}} \leq \Delta G$, you may want to first refer to my answer to this question: Why does the Gibbs free energy only correspond to non-expansion work? then come back here.
In that proof, I used equalities by assuming that the process is reversible. If we drop this assumption, then by the Second Law,
\begin{align} \mathrm{d}S &\geq \frac{\mathrm{d}q}{T} \\ \mathrm{d}q &\leq T\mathrm{d}S \\ dU &\leq T\mathrm{d}S - p\mathrm{d}V + \mathrm{d}w_{\text{add}} \\ dG &\leq dw_{\text{add}} \\ \Delta G &\leq w_{\text{add}} \end{align}
Equality holds if the process is reversible, or more generally, if both the initial and final states are equilibrium states. Now this seems like an obvious contradiction of the equation in the very first paragraph. The key lies in the definition of the work - whether it is done on the system, or by the system. In my proof, I have been referring to the work done on the system; however, here we are interested in the work done by the system, since that is the work that can be "extracted". The work done by the system must be equal to the negative of the work done on the system: $w_{\text{by,add}} = -w_{\text{add}}$. This leads to the final result $w_{\text{by,add}} \leq \Delta G$.
Levine's Physical Chemistry 6th ed. writes:
In many cases (for example, a battery, a living organism), the P-V expansion work is not useful work, but $w_{\text{by,add}}$ is the useful work output. The quantity $-\Delta G$ equals the maximum possible nonexpansion work output $w_{\text{by,add}}$ done by a system in a constant-$T$-and-$p$ process. Hence the term "free energy". (Of course, for a system with P-V work only, $\mathrm{d}w_{\text{by,add}} = 0$ and $\mathrm{d}G = 0$ for a reversible, isothermal, isobaric process.) Examples of nonexpansion work in biological systems are the work of contracting muscles and of transmitting nerve impulses.
If you want a direct answer as to why your explanation isn't right, it's because you can't equate $\Delta H$ with the work done. Work done is given by $w = -\int\! p\,\mathrm{d}V$. You can only equate $\Delta H$ with the heat transferred at constant $p$. Apart from the First Law ($\Delta U = q + w$), heat and work are generally unrelated.
• Yes, I understand perfectly. What I've written above seems rather nonsensical once I read your answer. – Charles Jul 14 '15 at 14:49
• Could you also explain why the thermodynamic efficiency of a cell is given by |ΔH|/|ΔG|? – Charles Jul 14 '15 at 14:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9785094261169434, "perplexity": 224.52424088415947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998580.10/warc/CC-MAIN-20190617203228-20190617225228-00209.warc.gz"} |
http://mathhelpforum.com/calculus/26849-integrating-factor.html | # Math Help - integrating factor
1. ## integrating factor
dy/dx + 2y = sin x
In this equation what is the integrating factor?
Is it e^2x?
2. Originally Posted by geton
dy/dx + 2y = sin x
In this equation what is the integrating factor?
Is it e^2x?
Finally I’ve done & this equation is really awesome. Problem resolved
3. Originally Posted by geton
Finally I’ve done & this equation is really awesome. Problem resolved | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9961730241775513, "perplexity": 4754.596807434527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049278244.51/warc/CC-MAIN-20160524002118-00011-ip-10-185-217-139.ec2.internal.warc.gz"} |
http://www.real-statistics.com/mathematical-notation/functions-polynomials-limits-graphs/ | # Functions, polynomials, limits and graphs
A function is a mapping between two sets, called the domain and the range, where for every value in the domain there is a unique value in the range assigned by the function. Generally functions are defined by some formula; for example f(x) = x2 is the function that maps values of x into their square. The mapping is between all numbers (the domain) and non-negative numbers (the range).
A function is polynomial if it has the form
for some non-negative integer n (called the degree of the polynomial) and some constants a0, a1, …, an where an ≠ 0 (unless n = 0). The function is linear if n = 1 and quadratic if n = 2.
We often use limit notation such as
as a shorthand for as x gets larger the value of the function f(xapproaches the value a. Some examples for various values of f(x) are as follows:
We can also evaluate the limit of a function when x approaches some other value. For example in the following example as x gets close to 0 the value of the function approaches 1.
We can also use the limit notation with series. If xn = 1 – 1/n where n is a positive integer then
since the series $\frac{1}{2}$, $\frac{2}{3}$, $\frac{3}{4}$, $\frac{4}{5}$, … converges to 1.
Often we use a graph to show the relationship between x and f(x). A graph consists of all the points (x,y) in what is called the xy plane where x is in the domain of f and y = f(x). The graph is drawn in a rectangular grid defined by the x-axis (drawn horizontally) and the y-axis (drawn vertically) meeting at a point (0, 0) call the origin. In particular, the graph of y = f(x) for f(x) = x2 is given in Figure 1.
Figure 1 – Graph of y = f(x) where f(x) = x2
A line is a set of points (x, y) such that y = bx + a for some constants a and b. b is called the slope of the line and a is called the y-intercept. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9307020306587219, "perplexity": 249.70505933112867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123560.51/warc/CC-MAIN-20170423031203-00262-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://brilliant.org/problems/interesting-summation-2/ | # Interesting Summation
Algebra Level 5
When $$M$$ is a digit, $M + \overline{MM} + \overline{MMM} + \cdots + \underbrace{\overline{MMM\cdots M}}_{n \ M\text{'s}} = \frac{M}{a}\left [ 10^{n+1}-10-9n \right ].$ Find the value of $$a.$$
Note that $$\overline{MM}$$ represents a two-digit number with both digits being $$M.$$
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8843176960945129, "perplexity": 1088.3436614151499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887981.42/warc/CC-MAIN-20180119125144-20180119145144-00213.warc.gz"} |
http://mathhelpforum.com/calculus/187924-calculate-following-integral-print.html | # Calculate The Following Integral
• Sep 13th 2011, 06:05 PM
antz215
Calculate The Following Integral
Hello! The problem is:
Calculate The Following Integral:
(e^x + e^-x) from ln2 to ln3.
I'm pretty sure the next step is to change it to (e^x - e^-x) as those are the anti-derivatives right? Or maybe I'm starting horrible hence why I can't solve it.
Thanks in advance for any help! I've been stuck only on this one problem for hours!
• Sep 13th 2011, 06:14 PM
Plato
re: Calculate The Following Integral
Quote:
Originally Posted by antz215
Hello! The problem is:
Calculate The Following Integral:
(e^x + e^-x) from ln2 to ln3.
I'm pretty sure the next step is to change it to (e^x - e^-x) as those are the anti-derivatives right?
You can always find help here.
Know that $e^{\ln(3)}=3~\&~e^{-\ln(3)}=\frac{1}{3}~.$
• Sep 13th 2011, 06:23 PM
HallsofIvy
re: Calculate The Following Integral
Quote:
Originally Posted by antz215
Hello! The problem is:
Calculate The Following Integral:
(e^x + e^-x) from ln2 to ln3.
I'm pretty sure the next step is to change it to (e^x - e^-x) as those are the anti-derivatives right? Or maybe I'm starting horrible hence why I can't solve it.
Thanks in advance for any help! I've been stuck only on this one problem for hours!
It's easy to check isn't it: the derivative of $e^x- e^{-x}$ is $e^x-(-1)e^{-1}= e^x+ e^{-x}$ so yes, an anti-derivative of $e^x+ e^{-x}$ is $e^x- e^{-x}$.
Fundamental theorem of Calculus: if F(x) is an anti-derivative of f(x) then $\int_a^b f(x)dx= F(b)- F(a)$.
• Sep 13th 2011, 06:58 PM
antz215
re: Calculate The Following Integral
From both helps I am getting:
e^x + e^-x dx
(e^x - e^-x)
(e^ln3 - e^-ln3) - (e^ln2 - e^-ln2)
3 - 1/3 - 2 + 1/2
= 7/6
I think that is correct? Thanks for your guys' help!
• Sep 14th 2011, 01:30 PM
HallsofIvy
Re: Calculate The Following Integral
Yes, that is correct. Congratulations! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9409152865409851, "perplexity": 1930.5549918899494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948579567.73/warc/CC-MAIN-20171215211734-20171215233734-00472.warc.gz"} |
https://science.sciencemag.org/content/325/5940/597 | Report
# Probing Spin-Charge Separation in a Tomonaga-Luttinger Liquid
See allHide authors and affiliations
Science 31 Jul 2009:
Vol. 325, Issue 5940, pp. 597-601
DOI: 10.1126/science.1171769
## Electron Breakdown
An electron possesses charge and spin. In general, these properties are confined to the electron. However, in strongly interacting many-body electronic systems, such as one-dimensional wires, it has long been theorized that the charge and spin should separate. There have been tantalizing glimpses of this separation experimentally, but questions remain. Jompol et al. (p. 597) looked at the tunneling current between an array of one-dimensional wires and a two-dimensional electron gas and argue that the results reveal a clear signature of spin-charge separation.
## Abstract
In a one-dimensional (1D) system of interacting electrons, excitations of spin and charge travel at different speeds, according to the theory of a Tomonaga-Luttinger liquid (TLL) at low energies. However, the clear observation of this spin-charge separation is an ongoing challenge experimentally. We have fabricated an electrostatically gated 1D system in which we observe spin-charge separation and also the predicted power-law suppression of tunneling into the 1D system. The spin-charge separation persists even beyond the low-energy regime where the TLL approximation should hold. TLL effects should therefore also be important in similar, but shorter, electrostatically gated wires, where interaction effects are being studied extensively worldwide.
View Full Text | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9333248138427734, "perplexity": 1997.0045361070963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488544264.91/warc/CC-MAIN-20210623225535-20210624015535-00579.warc.gz"} |
https://www.nag.com/numeric/nl/nagdoc_27cpp/flhtml/f08/f08cgf.html | # NAG FL Interfacef08cgf (dormql)
## 1Purpose
f08cgf multiplies a general real $m$ by $n$ matrix $C$ by the real orthogonal matrix $Q$ from a $QL$ factorization computed by f08cef.
## 2Specification
Fortran Interface
Subroutine f08cgf ( side, m, n, k, a, lda, tau, c, ldc, work, info)
Integer, Intent (In) :: m, n, k, lda, ldc, lwork Integer, Intent (Out) :: info Real (Kind=nag_wp), Intent (In) :: tau(*) Real (Kind=nag_wp), Intent (Inout) :: a(lda,*), c(ldc,*) Real (Kind=nag_wp), Intent (Out) :: work(max(1,lwork)) Character (1), Intent (In) :: side, trans
#include <nag.h>
void f08cgf_ (const char *side, const char *trans, const Integer *m, const Integer *n, const Integer *k, double a[], const Integer *lda, const double tau[], double c[], const Integer *ldc, double work[], const Integer *lwork, Integer *info, const Charlen length_side, const Charlen length_trans)
The routine may be called by the names f08cgf, nagf_lapackeig_dormql or its LAPACK name dormql.
## 3Description
f08cgf is intended to be used following a call to f08cef, which performs a $QL$ factorization of a real matrix $A$ and represents the orthogonal matrix $Q$ as a product of elementary reflectors.
This routine may be used to form one of the matrix products
$QC , QTC , CQ , CQT ,$
overwriting the result on $C$, which may be any real rectangular $m$ by $n$ matrix.
A common application of this routine is in solving linear least squares problems, as described in the F08 Chapter Introduction, and illustrated in Section 10 in f08cef.
## 4References
Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia https://www.netlib.org/lapack/lug
## 5Arguments
1: $\mathbf{side}$Character(1) Input
On entry: indicates how $Q$ or ${Q}^{\mathrm{T}}$ is to be applied to $C$.
${\mathbf{side}}=\text{'L'}$
$Q$ or ${Q}^{\mathrm{T}}$ is applied to $C$ from the left.
${\mathbf{side}}=\text{'R'}$
$Q$ or ${Q}^{\mathrm{T}}$ is applied to $C$ from the right.
Constraint: ${\mathbf{side}}=\text{'L'}$ or $\text{'R'}$.
2: $\mathbf{trans}$Character(1) Input
On entry: indicates whether $Q$ or ${Q}^{\mathrm{T}}$ is to be applied to $C$.
${\mathbf{trans}}=\text{'N'}$
$Q$ is applied to $C$.
${\mathbf{trans}}=\text{'T'}$
${Q}^{\mathrm{T}}$ is applied to $C$.
Constraint: ${\mathbf{trans}}=\text{'N'}$ or $\text{'T'}$.
3: $\mathbf{m}$Integer Input
On entry: $m$, the number of rows of the matrix $C$.
Constraint: ${\mathbf{m}}\ge 0$.
4: $\mathbf{n}$Integer Input
On entry: $n$, the number of columns of the matrix $C$.
Constraint: ${\mathbf{n}}\ge 0$.
5: $\mathbf{k}$Integer Input
On entry: $k$, the number of elementary reflectors whose product defines the matrix $Q$.
Constraints:
• if ${\mathbf{side}}=\text{'L'}$, ${\mathbf{m}}\ge {\mathbf{k}}\ge 0$;
• if ${\mathbf{side}}=\text{'R'}$, ${\mathbf{n}}\ge {\mathbf{k}}\ge 0$.
6: $\mathbf{a}\left({\mathbf{lda}},*\right)$Real (Kind=nag_wp) array Input
Note: the second dimension of the array a must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{k}}\right)$.
On entry: details of the vectors which define the elementary reflectors, as returned by f08cef.
On exit: is modified by f08cgf but restored on exit.
7: $\mathbf{lda}$Integer Input
On entry: the first dimension of the array a as declared in the (sub)program from which f08cgf is called.
Constraints:
• if ${\mathbf{side}}=\text{'L'}$, ${\mathbf{lda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$;
• if ${\mathbf{side}}=\text{'R'}$, ${\mathbf{lda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
8: $\mathbf{tau}\left(*\right)$Real (Kind=nag_wp) array Input
Note: the dimension of the array tau must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{k}}\right)$.
On entry: further details of the elementary reflectors, as returned by f08cef.
9: $\mathbf{c}\left({\mathbf{ldc}},*\right)$Real (Kind=nag_wp) array Input/Output
Note: the second dimension of the array c must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
On entry: the $m$ by $n$ matrix $C$.
On exit: c is overwritten by $QC$ or ${Q}^{\mathrm{T}}C$ or $CQ$ or $C{Q}^{\mathrm{T}}$ as specified by side and trans.
10: $\mathbf{ldc}$Integer Input
On entry: the first dimension of the array c as declared in the (sub)program from which f08cgf is called.
Constraint: ${\mathbf{ldc}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$.
11: $\mathbf{work}\left(\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{lwork}}\right)\right)$Real (Kind=nag_wp) array Workspace
On exit: if ${\mathbf{info}}={\mathbf{0}}$, ${\mathbf{work}}\left(1\right)$ contains the minimum value of lwork required for optimal performance.
12: $\mathbf{lwork}$Integer Input
On entry: the dimension of the array work as declared in the (sub)program from which f08cgf is called.
If ${\mathbf{lwork}}=-1$, a workspace query is assumed; the routine only calculates the optimal size of the work array, returns this value as the first entry of the work array, and no error message related to lwork is issued.
Suggested value: for optimal performance, ${\mathbf{lwork}}\ge {\mathbf{n}}×\mathit{nb}$ if ${\mathbf{side}}=\text{'L'}$ and at least ${\mathbf{m}}×\mathit{nb}$ if ${\mathbf{side}}=\text{'R'}$, where $\mathit{nb}$ is the optimal block size.
Constraints:
• if ${\mathbf{side}}=\text{'L'}$, ${\mathbf{lwork}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$ or ${\mathbf{lwork}}=-1$;
• if ${\mathbf{side}}=\text{'R'}$, ${\mathbf{lwork}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$ or ${\mathbf{lwork}}=-1$.
13: $\mathbf{info}$Integer Output
On exit: ${\mathbf{info}}=0$ unless the routine detects an error (see Section 6).
## 6Error Indicators and Warnings
$-999<{\mathbf{info}}<0$
If ${\mathbf{info}}=-i$, argument $i$ had an illegal value. An explanatory message is output, and execution of the program is terminated.
${\mathbf{info}}=-999$
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.
An explanatory message is output, and execution of the program is terminated.
## 7Accuracy
The computed result differs from the exact result by a matrix $E$ such that
$E2 = Oε C2$
where $\epsilon$ is the machine precision.
## 8Parallelism and Performance
f08cgf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
The total number of floating-point operations is approximately $2nk\left(2m-k\right)$ if ${\mathbf{side}}=\text{'L'}$ and $2mk\left(2n-k\right)$ if ${\mathbf{side}}=\text{'R'}$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 102, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8879894614219666, "perplexity": 4459.382546275789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487648373.45/warc/CC-MAIN-20210619142022-20210619172022-00082.warc.gz"} |
http://math.stackexchange.com/questions/412817/easiest-example-of-rearrangement-of-infinite-leading-to-different-sums | # Easiest example of rearrangement of infinite leading to different sums
I am reading the section on the rearrangement of infinite series in Ok, E. A. (2007). Real Analysis with Economic Applications. Princeton University Press.
As an example, the author shows that
is a rearrangement of the sequence
\begin{align} \frac{(-1)^{m+1}}{m} \end{align}
and that the infinite sum of these two sequences must be different.
My question is : what is to you the easiest and most intuitive example of such infinite series having different values for different arrangements of the terms? Ideally, I hope to find something as intuitive as the illustration that some infinite series do not have limits through $\sum_\infty (-1)^i$.
I found another example in http://www.math.ku.edu/~lerner/m500f09/Rearrangements.pdf but it is still too abstract to feed my intuition...
-
It may be easier for future questioners to see this if you mentioned conditionally converging series. – Loki Clock Jun 6 '13 at 19:04
I am not familiar with the notion of conditionally converging series. Is it that I should replace any instance of "infinite series" by "conditionally converging series"? Anyways, fell free to edit my question if you have an improvement. – Martin Van der Linden Jun 7 '13 at 7:33
– Loki Clock Jun 7 '13 at 7:41
$$1/2-1/3+1/4-1/5+1/6-1/7+\cdots=(1/2-1/3)+(1/4-1/5)+(1/6-1/7)+\cdots$$ is obviously positive. The rearrangement $$1/2-1/3-1/5+1/4-1/7-1/9+1/6-1/11-1/13+\cdots$$ is clearly negative; just group it as $$(1/2-1/3-1/5)+(1/4-1/7-1/9)+(1/6-1/11-1/13)+\cdots$$ which is $$(1/2-8/15)+(1/4-16/63)+(1/6-24/143)+\cdots\lt(1/2-8/16)+(1/4-16/64)+(1/6-24/144)+\cdots=0$$
-
Nice! The rearrangement function even has an explicit formulation, the second sequence being $\frac{1}{\sigma(i)}$, where $\sigma(i) = \begin{cases} i - (2 - \frac{i}{2}), & \text{ when } i \text{ is even}\\ i - \frac{i-1}{2}, & \text{ when } i \text{ is odd and} \frac{i +(i-2)}{4} \text{ is odd} \\ i - \frac{i-3}{2}, & \text{ when } i \text{ is odd and} \frac{i +(i-2)}{4} \text{ is even} \\ \end{cases}$ – Martin Van der Linden Jun 6 '13 at 11:29
The following variant of the alternating harmonic is computationally easy. Consider the series $$1-1+\frac{1}{2}-\frac{1}{2}+\frac{1}{2}-\frac{1}{2}+\frac{1}{4}-\frac{1}{4}+\frac{1}{4}-\frac{1}{4}+\frac{1}{4}-\frac{1}{4}+\frac{1}{4}-\frac{1}{4}+\frac{1}{8}-\frac{1}{8}+\cdots.$$ The sum is $0$. For the partial sums are either $0$ or $\frac{1}{2^k}$ for suitable $k$ that go to infinity.
Let us rearrange this series to give sum $1$. Use \begin{align} 1+&\frac{1}{2}+\frac{1}{2}-1+\frac{1}{4}+\frac{1}{4}-\frac{1}{2}+\frac{1}{4}+\frac{1}{4}-\frac{1}{2}+\frac{1}{8}+\frac{1}{8}-\frac{1}{4}+\frac{1}{8}+\frac{1}{8}-\frac{1}{4}+\frac{1}{8}+\frac{1}{8}-\frac{1}{4}\\&+\frac{1}{8}+\frac{1}{8}-\frac{1}{4}+\frac{1}{16}+\frac{1}{16}-\frac{1}{8}+\frac{1}{16}+\frac{1}{16}-\frac{1}{8}+\cdots.\end{align} That the series converges to $1$ follows from the fact that the sum of the first $3n+1$ terms is always $1$, and that the sum of the first $3n+2$ terms, and the sum of the first $3n+3$ terms, differ from the sum of the first $3n+1$ terms by an amount that $\to 0$ as $n\to\infty$.
-
This argument is incomplete. The latter series has to converge to a different value, and hence has to be independent of groupings. – Loki Clock Jun 6 '13 at 17:27
There is no grouping, the "triplets" remark is just a description of how the sequence is built. The convergence to $1$ is obvious, since the sum of the first $3k+1$ elements is always $1$, and for large $k$ the $3k+2$-th sum and the $3k+3$-th sum differ from $1$ by an amount that $\to 0$. But I will change the sentence at the end since it may cause confusion. – André Nicolas Jun 6 '13 at 18:55
There is an argument in the Stewart calculus book 5th edition (the one I learned from) which uses the alternating harmonic series since it is conditionally convergent which goes like this:
The series: $$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \dots = \ln 2$$ Multiplying the series by half yields: $$\frac{1}{2} - \frac{1}{4} + \frac{1}{6} - \frac{1}{8} + \dots = \frac{1}{2} \ln 2$$ He then does a trick by inserting zeros between each number: $$0 + \frac{1}{2} +0 - \frac{1}{4}+0 + \frac{1}{6}+0 - \frac{1}{8}+ \dots = \frac{1}{2} \ln 2$$ He then adds the original sum and the newly acquired sum above and obtains: $$1 + \frac{1}{3} - \frac{1}{2} + \frac{1}{5} + \frac{1}{7} - \frac{1}{4} + \dots = \frac{3}{2} \ln 2$$ He asserts that this is the original series with it's terms rearranged with pairwise positive terms followed by negatives yielding a completely different sum.
Stewart, J "Single-Variable Calculus" 5th edition
-
While I am not sure about how intuitive it might seem, I find the argument solid and understandable – Dan Jun 6 '13 at 9:47
And now that I look at what you referenced I see its the same example...I suppose it's that popular – Dan Jun 6 '13 at 9:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9250907301902771, "perplexity": 337.7646546886323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986357.49/warc/CC-MAIN-20150728002306-00332-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://optimization-online.org/author/werner-schachinger/ | ## The complexity of simple models – a study of worst and typical hard cases for the Standard Quadratic Optimization Problem
In a Standard Quadratic Optimization Problem (StQP), a possibly indefinite quadratic form (the simplest nonlinear function) is extremized over the standard simplex, the simplest polytope. Despite this simplicity, the nonconvex instances of this problem class allow for remarkably rich patterns of coexisting local solutions, which are closely related to practical difficulties in solving StQPs globally. … Read more
## New lower bounds and asymptotics for the cp-rank
Let $p_n$ denote the largest possible cp-rank of an $n\times n$ completely positive matrix. This matrix parameter has its significance both in theory and applications, as it sheds light on the geometry and structure of the solution set of hard optimization problems in their completely positive formulation. Known bounds for $p_n$ are $s_n=\binom{n+1}2-4$, the current … Read more
## From seven to eleven: completely positive matrices with high cp-rank
We study $n\times n$ completely positive matrices $M$ on the boundary of the completely positive cone, namely those orthogonal to a copositive matrix $S$ which generates a quadratic form with finitely many zeroes in the standard simplex. Constructing particular instances of $S$, we are able to construct counterexamples to the famous Drew-Johnson-Loewy conjecture (1994) for … Read more
## On the cp-rank and minimal cp factorizations of a completely positive matrix
We show that the maximal cp-rank of $n\times n$ completely positive matrices is attained at a positive-definite matrix on the boundary of the cone of $n\times n$ completely positive matrices, thus answering a long standing question. We also show that the maximal cp-rank of $5\times 5$ matrices equals six, which proves the famous Drew-Johnson-Loewy conjecture … Read more
## Think co(mpletely )positive ! Matrix properties, examples and a clustered bibliography on copositive optimization
Copositive optimization is a quickly expanding scientific research domain with wide-spread applications ranging from global nonconvex problems in engineering to NP-hard combinatorial optimization. It falls into the category of conic programming (optimizing a linear functional over a convex cone subject to linear constraints), namely the cone of all completely positive symmetric nxn matrices, and its … Read more
## Improved bounds for interatomic distance in Morse clusters
We improve the best known lower bounds on the distance between two points of a Morse cluster in $\mathbb{R}^3$, with $\rho \in [4.967,15]$. Our method is a generalization of the one applied to the Lennard-Jones potential in~\cite{Schac}, and it also leads to improvements of lower bounds for the energy of a Morse cluster. Some of … Read more
A Standard Quadratic Optimization Problem (StQP) consists of maximizing a (possibly indefinite) quadratic form over the standard simplex. Likewise, in a multi-StQP we have to maximize a (possibly indefinite) quadratic form over the cartesian product of several standard simplices (of possibly different dimensions). Two converging monotone interior point methods are established. Further, we prove an … Read more
## A First-Order Interior-Point Method for Linearly Constrained Smooth Optimization
We propose a first-order interior-point method for linearly constrained smooth optimization that unifies and extends first-order affine-scaling method and replicator dynamics method for standard quadratic programming. Global convergence and, in the case of quadratic programs, (sub)linear convergence rate and iterate convergence results are derived. Numerical experience on simplex constrained problems with 1000 variables is reported. … Read more
## A conic duality Frank–Wolfe type theorem via exact penalization in quadratic optimization
The famous Frank–Wolfe theorem ensures attainability of the optimal value for quadratic objective functions over a (possibly unbounded) polyhedron if the feasible values are bounded. This theorem does not hold in general for conic programs where linear constraints are replaced by more general convex constraints like positive-semidefiniteness or copositivity conditions, despite the fact that the … Read more
## New results for molecular formation under pairwise potential minimization
We establish new lower bounds on the distance between two points of a minimum energy configuration of $N$ points in $\mathbb{R}^3$ interacting according to a pairwise potential function. For the Lennard-Jones case, this bound is 0.67985 (and 0.7633 in the planar case). A similar argument yields an estimate for the minimal distance in Morse clusters, … Read more | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8295687437057495, "perplexity": 730.4628884327923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500719.31/warc/CC-MAIN-20230208060523-20230208090523-00262.warc.gz"} |
http://en.wikipedia.org/wiki/Shannon_number | # Shannon number
Claude Shannon
The Shannon number, named after Claude Shannon, is an estimated lower bound on the game-tree complexity of chess. Shannon calculated it as an aside in his 1950 paper "Programming a Computer for Playing Chess".[1] (This influential paper introduced the field of computer chess.)
Shannon also estimated the number of possible positions, "of the general order of $\scriptstyle \frac{64!}{32!{8!}^2{2!}^6}$, or roughly 1043 ". This includes some illegal positions (e.g., pawns on the first rank, both kings in check) and excludes legal positions following captures and promotions. Taking these into account, Victor Allis calculated an upper bound of 5×1052 for the number of positions, and estimated the true number to be about 1050.[2] Recent results[3]improve that estimate, by proving an upper bound of only 2155, which is less than 1046.7 and showing[4] an upper bound 2×1040 in the absence of promotions.
Allis also estimated the game-tree complexity to be at least 10123, "based on an average branching factor of 35 and an average game length of 80". As a comparison, the number of atoms in the observable universe, to which it is often compared, is estimated to be between 4×1079 and 4×1081. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9326668977737427, "perplexity": 1385.8541583492286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246660448.47/warc/CC-MAIN-20150417045740-00264-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://zbmath.org/?q=an:0819.05035 | ×
## Total dominating functions in trees: Minimality and convexity.(English)Zbl 0819.05035
Authors’ abstract: A total dominating function (TDF) of a graph $$G= (V,E)$$ is a function $$f: V\to [0, 1]$$ such that for each $$v\in V$$, $$\sum_{u\in N(v)} f(u)\geq 1$$ (where $$N(v)$$ denotes the set of neighbors of vertex $$v$$). Convex combinations of TDFs are also TDFs. However, convex combinations of minimal TDFs (i.e., MTDFs) are not necessarily minimal. In this paper we discuss the existence in trees of a universal MTDF (i.e., an MTDF whose convex combinations with any other MTDF are also minimal).
### MSC:
05C35 Extremal problems in graph theory 05C40 Connectivity 05C05 Trees
### Keywords:
convexity; total dominating function; convex combinations; trees
Full Text:
### References:
[1] Cockayne, Networks 24 pp 83– (1994) [2] Masters thesis, University of Victoria (1992).
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8358655571937561, "perplexity": 1790.551603338157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00330.warc.gz"} |
http://tex.stackexchange.com/questions/125797/need-help-with-modifying-ntheorem-environments | # Need help with modifying ntheorem environments
I am trying to modify some of the theorem environments defined in ntheorem. I wanted the definitions in my document to be boxed the way in which ntheorem does for the framed theorem classes which they define by:
\theoremclass{Theorem}
\theoremstyle{break}
\newframedtheorem{importantTheorem}[Theorem]{Theorem}
and so I modified the above code to the following:
\theoremclass{Theorem}
\theoremstyle{break}
\newframedtheorem{defn}[Theorem]{Definition}
then in my document called up an instance of a definition by:
\begin{defn}[Logical Equivalance] Two propositions are said to be logically equivalent iff ...
\end{defn}
Now, I wish to modify their shaded theorem environment which is coded as follows:
\theoremclass{Theorem}
\theoremstyle{break}
I tried the following:
\theoremclass{Theorem}
\theoremstyle{break}
but keep getting the error:
Undefined control sequence: begin{prop}
Can anyone help me with this?
\documentclass[10pt,a4paper]{article}
\usepackage[left=2.50cm,right=2.50cm,top=2.50cm,bottom=2.75cm]{geometry}
\usepackage{amsmath,amssymb,amscd,amstext,amsbsy,array,color,epsfig}
\usepackage{fancyhdr,framed,latexsym,multicol,pstricks,slashed,xcolor}
\usepackage[amsmath,framed,thmmarks]{ntheorem}
\begin{document}
\theoremstyle{marginbreak}
\theorembodyfont{\slshape}
\theoremsymbol{\ensuremath{\star}}
\theoremseparator{:}
\newtheorem{axm}{Axiom}[section]
\theoremstyle{marginbreak}
\theorembodyfont{\slshape}
\theoremsymbol{\ensuremath{\diamondsuit}}
\theoremseparator{:}
\newtheorem{Theorem}{Theorem}[section]
\theoremclass{Theorem}
\theoremstyle{break}
\theoremstyle{changebreak}
\theoremsymbol{\ensuremath{\heartsuit}}
\theoremindent0.5cm
\theoremnumbering{greek}
\newtheorem{lem}{Lemma}[section]
\theoremindent0cm
\theoremnumbering{arabic}
\newtheorem{cor}[Theorem]{Corollary}
\theoremstyle{plain}
\theorembodyfont{\upshape}
\theoremsymbol{\ensuremath{\bullet}}
\theoremseparator{}
\newtheorem{exm}{Example}
\theoremclass{Theorem}
\theoremstyle{plain}
\theoremsymbol{\ensuremath{\clubsuit}}
\newframedtheorem{defn}[Theorem]{Definition}
\theorembodyfont{\upshape}
\theoremstyle{nonumberplain}
\theoremseparator{.}
\theoremsymbol{\rule{1ex}{1ex}}
\newtheorem{proof}{Proof}
\theoremstyle{plain}
\theorembodyfont{\upshape}
\theoremsymbol{\ensuremath{\ast}}
\theoremseparator{.}
\newtheorem{rem}{Remark}
\theoremstyle{plain}
\theorembodyfont{\upshape}
\newtheorem{exc}{Exercise}[section]
\begin{defn}[Logical Equivalance] Two propositions are said to be logically equivalent iff ...
\end{defn}
\begin{prop}
Let $P$ and $Q$ be propositions. Then ...
\end{prop}
\end{document}
Thanks!!!
I am now added more "theorem"-like enviornments and am getting even more errors and would ask that some one please help me use mdframed to fix the problem if that is possible. Here is a current and up-to-date mwe:
\documentclass[a4paper,12pt,twoside]{book}
\usepackage[left=2.50cm,right=2.50cm,top=2.50cm,bottom=2.75cm]{geometry}
\usepackage{amsmath,amssymb,amscd,amsbsy,array,color,epsfig}
\usepackage{fancyhdr,framed,latexsym,multicol,pstricks,slashed,xcolor}
\usepackage{picture}
\usepackage{indentfirst}
\usepackage{enumitem}
\usepackage{tikz}
\usepackage{subfig}
\usetikzlibrary{calc,positioning,shapes.geometric}
\setenumerate[1]{label=(\alph*)}
\usepackage[amsmath,framed,thmmarks]{ntheorem}
\newtheorem{Theorem}{Thm}
\theoremclass{Theorem}
\theoremstyle{break}
\theoremclass{Theorem}
\theoremstyle{break}
\theoremclass{Theorem}
\theoremstyle{plain}
\newframedtheorem{lema}[Theorem]{Lemma}
\theoremclass{Theorem}
\theoremstyle{plain}
\newframedtheorem{coro}[Theorem]{Corollary}
\theoremstyle{plain}
\theoremsymbol{\ensuremath{\blacktriangle}}
\theoremseparator{.}
\theoremprework{\bigskip\hrule}
\theorempostwork{\hrule\bigskip}
\newtheorem{defn}{Definition}
\theoremstyle{plain}
\theorembodyfont{\upshape}
\theoremsymbol{\ensuremath{\bullet}}
\theoremseparator{}
\newtheorem{exam}{Example}
\theoremstyle{plain}
\theorembodyfont{\upshape}
\theoremsymbol{\ensuremath{\bullet}}
\newtheorem{exer}{Exercise}[section]
\theorembodyfont{\color{blue}\bfseries\boldmath}
\theoremstyle{nonumberplain}
\theoremseparator{.}
\theoremsymbol{\rule{1ex}{1ex}}
\newtheorem{proof}{Proof}
\theoremstyle{plain}
\theorembodyfont{\upshape}
\theoremsymbol{\ensuremath{\bigstar}}
\theoremseparator{.}
\newtheorem{remk}{Remark}
\def \all {\forall}
\def \ex {\exists}
\def \imp {\Rightarrow}
\def \limp {\Leftarrow}
\def \iff {\Longleftrightarrow}
\def \contra {\rightarrow\negmedspace\leftarrow}
\def \es {\emptyset}
\def \st {\backepsilon}
\def \bn{\mathbb N}
\def \bz{\mathbb Z}
\def \bq{\mathbb Q}
\def \br{\mathbb R}
\def \bc{\mathbb C}
\def \bp{\mathbb P}
\def \bt{\mathbb T}
\begin{defn}[Statement/Proposition]
Declarative sentences or strings of symbols in mathematics which can be said to have \textit{exactly} one \textit{truth value}, that is, are either true (denoted T), or false (denoted F), are known as \textbf{statements} or \textbf{propositions}.
\end{defn}
\begin{exam}
Hence, the truth value of the negation of a proposition is \textit{merely} the opposite of the truth value of said proposition. Hence, the truth value of the proposition '$7$ is divisible by $2$' is the proposition 'It is not the case that $7$ is not divisible by $2$' or '$7$ is not divisible by $2$' (both of which are true).
\end{exam}
\begin{prop}
Let $P$ and $Q$ be propositions. Then:
\begin{enumerate}
\item $P \imp Q \equiv (\neg Q) \imp (\neg P).$
\item $P \imp Q \not \equiv Q \imp P.$
\end{enumerate}
\end{prop}
\begin{prop}
Let $P,Q,$ and $R$ be propositions. Then:
\begin{enumerate}
\item $P \imp Q \equiv (\neg P) \vee (Q).$
\item $P \iff Q \equiv (P \imp Q) \wedge (Q \imp P).$
\item $\neg(P \imp Q) \equiv (P) \wedge (\neg Q).$
\item $\neg(P \wedge Q) \equiv (P) \imp (\neg Q) \equiv (Q) \imp (\neg P).$
\item $P \imp (Q \imp R) \equiv (P \wedge Q) \imp R.$
\item $P \imp (Q \vee R) \equiv (P \imp Q) \wedge (P \imp R).$
\item $(P \vee Q) \imp R \equiv (P \imp R) \wedge (Q \imp R).$
\end{enumerate}
\end{prop}
\begin{proof}
The proof for the above proposition is left to the reader. All of the above statements may be proved using truth tables.
\end{proof}
\begin{axm}[Field Axioms of $\br$]
On the set $\br$ of real numbers, there are two binary operations, denoted by $\pmb{+}$ and $\pmb{\cdot}$ and called \textbf{addition} and \textbf{multiplication} respectively. These operations satisfy the following properties:
\begin{itemize}
\item[$A_0$] $x,y \in \br \imp x+y \in \br \q \all \, x,y \in \br$. [additive closure]
\item[$A_1$] $x+y=y+x \q \all \, x,y \in \br$. [additive commutativity]
\item[$A_2$] $(x+y)+z=x+(y+z) \q \all \, x,y,z \in \br$. [additive associativity]
\item[$A_3$] There is a unique $0 \in \br \text{ such that } 0+x=x=x+0 \q \all \, x \in \br$. [existence of an additive identity]
\item[$A_4$] There is a unique $-x \in \br \text{ such that } x+(-x)=0=(-x)+x \q \all \, x \in \br$. [existence of an additive inverse]
\item[$M_0$] $x,y \in \br \imp x \cdot y \in \br \q \all \, x,y \in \br$. [multiplicative closure]
\item[$M_1$] $x \cdot y=y \cdot x \q \all \, x,y \in \br$. [multiplicative commutativity]
\item[$M_2$] $(x \cdot y) \cdot z=x \cdot (y \cdot z) \q \all \, x,y,z \in \br$. [multiplicative associativity]
\item[$M_3$] There is a unique $1 \in \br \text{ such that } 1 \cdot x=x=x \cdot 1 \q \all \, x \in \br$. [existence of multiplicative identity]
\item[$M_4$] There is a unique $\nicefrac{1}{x} \in \br \text{ such that } x \cdot (\nicefrac{1}{x})=1=(\nicefrac{1}{x}) \cdot x \q \all x \in \br$. [existence of multiplicative inverse]
\item[$AM_1$] $x \cdot (y+z) = (x \cdot y) + (x \cdot z)$ and $(y + z) \cdot x = (y \cdot x) + (z \cdot x)$. [distributivity]
\end{itemize}
\end{axm}
\begin{rem}
The reader should be familiar with all of the aforementioned field properties. We note that all of the familiar' properties of algebra (those learned in middle school and high school, for example) may be deduced from this list. We now establish the basic fact that both the additive identity, $0$, and the multiplicative identity are in fact unique; and that multiplication by $0$ always results in $0$.
\end{rem}
\end{document}
-
Please, add a minimal working example; it doesn't matter if it produces the error, but for help us in finding the issue it should start with \documentclass and end with \end{document}. However, the undefined sequence seems to be \psframebox, due to not loading PSTricks. – egreg Jul 28 '13 at 20:26
\newtheorem{Theorem}{Thm}
which is needed for both of your subsequent theorem-like environments, prop and defn.
% arara: latex
% arara: dvips
% arara: ps2pdf
% !arara: indent: {overwrite: yes}
\documentclass[10pt,a4paper]{article}
\usepackage[left=2.50cm,right=2.50cm,top=2.50cm,bottom=2.75cm]{geometry}
\usepackage{amsmath}
\usepackage{pstricks}
\usepackage{framed}
\usepackage[amsmath,framed,thmmarks]{ntheorem}
\newtheorem{Theorem}{Thm}
\theoremclass{Theorem}
\theoremstyle{break}
\theoremclass{Theorem}
\theoremstyle{plain}
\theoremsymbol{\ensuremath{\clubsuit}}
\newframedtheorem{defn}[Theorem]{Definition}
\begin{document}
\begin{defn}[Logical Equivalance]
Two propositions are said to be logically equivalent iff ...
\end{defn}
\begin{prop}
Let $P$ and $Q$ be propositions. Then ...
\end{prop}
\end{document}
Note that this MWE relies upon the pstricks package, so needs to be compiled through the latex->dvips->ps2pdf unless you want to follow the instructions in How to use PSTricks in pdfLaTeX?
For all of your framing needs I would highly recommend the mdframed package, which addresses the many short comings of its competitors.
Here's a version of the previous MWE using the mdframed package; note that this package does not rely upon the pstricks package (in contrast to the previous method). As such, you can (easily) compile this document with pdflatex.
% arara: pdflatex
% !arara: indent: {overwrite: yes}
\documentclass[10pt,a4paper]{article}
\usepackage[left=2.50cm,right=2.50cm,top=2.50cm,bottom=2.75cm]{geometry}
\usepackage{amsmath}
\usepackage[amsmath,framed,thmmarks]{ntheorem}
\usepackage[ntheorem,xcolor]{mdframed}
\newtheorem{Theorem}{Thm}
\theoremclass{Theorem}
\theoremstyle{break}
\newmdtheoremenv[
outerlinewidth = 2 ,%
roundcorner = 10 pt ,%
leftmargin = 40 ,%
rightmargin = 40 ,%
backgroundcolor=yellow!40,%
outerlinecolor=blue!70!black,%
innertopmargin=\topskip,%
splittopskip = \topskip ,%
ntheorem = true ,%
]{prop}[Theorem]{Proposition}
\theoremstyle{plain}
\theoremsymbol{\ensuremath{\clubsuit}}
%\newframedtheorem{defn}[Theorem]{Definition}
\newmdtheoremenv{defn}[Theorem]{Definition}
\begin{document}
\begin{defn}[Logical Equivalance]
Two propositions are said to be logically equivalent iff ...
\end{defn}
\begin{prop}
Let $P$ and $Q$ be propositions. Then ...
\end{prop}
\end{document}
Of course, the mdframed package can be told to use pstricks or tikz if you wish, but that is beyond the scope of the question- see the manual for more details.
Update, following the question edit.
With the additional theorem-like environments, this MWE works- note that you can't define a theorem-like environment twice using \newtheorem
% arara: latex
% arara: dvips
% arara: ps2pdf
% !arara: indent: {overwrite: yes}
\documentclass[10pt,a4paper]{article}
\usepackage[left=2.50cm,right=2.50cm,top=2.50cm,bottom=2.75cm]{geometry}
\usepackage{amsmath}
\usepackage{pstricks}
\usepackage{framed}
\usepackage[amsmath,framed,thmmarks]{ntheorem}
\theoremstyle{marginbreak}
\theorembodyfont{\slshape}
\theoremsymbol{\ensuremath{\diamondsuit}}
\theoremseparator{:}
\newtheorem{Theorem}{Theorem}[section]
\theoremclass{Theorem}
\theoremstyle{break}
\theoremclass{Theorem}
\theoremstyle{plain}
\theoremsymbol{\ensuremath{\clubsuit}}
\newframedtheorem{defn}[Theorem]{Definition}
\theoremstyle{marginbreak}
\theorembodyfont{\slshape}
\theoremsymbol{\ensuremath{\star}}
\theoremseparator{:}
\newtheorem{axm}{Axiom}[section]
\theoremstyle{changebreak}
\theoremsymbol{\ensuremath{\heartsuit}}
\theoremindent0.5cm
\theoremnumbering{greek}
\newtheorem{lem}{Lemma}[section]
\theoremindent0cm
\theoremnumbering{arabic}
\newtheorem{cor}[Theorem]{Corollary}
\theoremstyle{plain}
\theorembodyfont{\upshape}
\theoremsymbol{\ensuremath{\bullet}}
\theoremseparator{}
\newtheorem{exm}{Example}
\theorembodyfont{\upshape}
\theoremstyle{nonumberplain}
\theoremseparator{.}
\theoremsymbol{\rule{1ex}{1ex}}
\newtheorem{proof}{Proof}
\theoremstyle{plain}
\theorembodyfont{\upshape}
\theoremsymbol{\ensuremath{\ast}}
\theoremseparator{.}
\newtheorem{rem}{Remark}
\theoremstyle{plain}
\theorembodyfont{\upshape}
\newtheorem{exc}{Exercise}[section]
\begin{document}
\begin{defn}[Logical Equivalance]
Two propositions are said to be logically equivalent iff ...
\end{defn}
\begin{prop}
Let $P$ and $Q$ be propositions. Then ...
\end{prop}
\end{document}
-
I just added my current definitions of theorem-like environments that I use for this document, and when I add \newtheorem{Theorem}{thm} I get the error message: Command \Theorem already defined. – Michael Dykes Jul 28 '13 at 23:20
@MichaelDykes you've added them after your \end{document}`. Please make a complete MWE :) – cmhughes Jul 29 '13 at 7:51
I just corrected my mistake. Sorry about that :)- – Michael Dykes Jul 29 '13 at 9:41
@MichaelDykes see my updated answer :) – cmhughes Jul 29 '13 at 9:48
Now, I get the errors: No counter theorem defined for the proposition, and definition environments. – Michael Dykes Jul 31 '13 at 1:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9826478362083435, "perplexity": 2365.466669980935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049270513.22/warc/CC-MAIN-20160524002110-00180-ip-10-185-217-139.ec2.internal.warc.gz"} |
http://proceedings.mlr.press/v28/lopes13.html | # Estimating Unknown Sparsity in Compressed Sensing
Miles Lopes ;
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):217-225, 2013.
#### Abstract
In the theory of compressed sensing (CS), the sparsity \|x\|_0 of the unknown signal x∈\R^p is commonly assumed to be a known parameter. However, it is typically unknown in practice. Due to the fact that many aspects of CS depend on knowing \|x\|_0, it is important to estimate this parameter in a data-driven way. A second practical concern is that \|x\|_0 is a highly unstable function of x. In particular, for real signals with entries not exactly equal to 0, the value \|x\|_0=p is not a useful description of the effective number of coordinates. In this paper, we propose to estimate a stable measure of sparsity s(x):=\|x\|_1^2/\|x\|_2^2, which is a sharp lower bound on \|x\|_0. Our estimation procedure uses only a small number of linear measurements, does not rely on any sparsity assumptions, and requires very little computation. A confidence interval for s(x) is provided, and its width is shown to have no dependence on the signal dimension p. Moreover, this result extends naturally to the matrix recovery setting, where a soft version of matrix rank can be estimated with analogous guarantees. Finally, we show that the use of randomized measurements is essential to estimating s(x). This is accomplished by proving that the minimax risk for estimating s(x) with deterministic measurements is large when n≪p. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9541215896606445, "perplexity": 410.53489435343613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426693.21/warc/CC-MAIN-20170727002123-20170727022123-00102.warc.gz"} |
https://www.arxiv-vanity.com/papers/0804.1671/ | arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF. Read this paper on arXiv.org.
# State-of-the-art models for the phase diagram of carbon and diamond nucleation
L. M. Ghiringhelli1 , C. Valeriani2 , J. H. Los, E. J. Meijer,
A. Fasolino, and D. Frenkel3
van ’t Hoff Institute for Molecular Sciences,Universiteit van Amsterdam,
Nieuwe Achtergracht 166, 1018 WV Amsterdam, The Netherlands.
FOM Institute for Atomic and Molecular Physics,
Kruislaan 407, 1098 SJ Amsterdam, The Netherlands.
Institute for Molecules and Materials, Radboud University Nijmegen
Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands
11Current address: Max-Planck-Institute for Polymer Research, Ackermannweg 10, 55128 Mainz, Germany
22Current address: School of Physics, James Clerk Maxwell Building, King’s Buildings, University of Edinburgh, Mayfield Road, EH9 3JZ, Edinburgh, UK
33Current address: Dept. of Chemistry, University of Cambridge, Lensfield Road, Cambridge CB2 1EW, UK
###### Abstract
We review recent developments in the modelling of the phase diagram and the kinetics of crystallization of carbon. In particular, we show that a particular class of bond-order potentials (the so-called LCBOP models) account well for many of the known structural and thermodynamic properties of carbon at high pressures and temperatures. We discuss the LCBOP models in some detail. In addition, we briefly review the “history” of experimental and theoretical studies of the phase behaviour of carbon. Using a well-tested version of the LCBOP model (viz. LCBOPI) we address some of the more controversial hypotheses concerning the phase behaviour of carbon, in particular: the suggestion that liquid carbon can exist in two phases separated by a first-order phase transition and the conjecture that diamonds could have formed by homogeneous nucleation in Uranus and Neptune.
## 1 Introduction
Carbon exhibits a rich variety of solid structures: Some are thermodynamically stable, most are not. To be specific: solid carbon can be found in the two well-known crystalline phases, diamond and graphite, and in amorphous states, such as glassy carbon and carbon black. Furthermore, the existence of additional metastable solid phases at relatively low pressure, the so-called carbynes, is still hotly debated [1, 2]). In addition to the bulk phases, there are the more recently discovered fullerenes, C and C [3], nanotubes [4], and graphene [5].
The reason why a simple element such as carbon can manifest itself in so many different forms is related to its unusual chemical properties: carbon exhibits three different possibilities for covalent bond formation: hybridization appears in diamond, hybridization is found in graphite, graphene, nanotubes, and fullerenes, whilst in carbynes, C should exhibit hybridization.
Because of their high cohesive energies and concomitant high activation energies that must be overcome in structural phase transformations, carbon polymorphs often exist in metastable form well inside pressure-temperature regions where another solid form is thermodynamically stable. For example, it is well known that diamonds survive at normal P-T conditions, where graphite is the thermodynamically stable phase. Conversely, graphite tends to persist at very high pressures, deep into the diamond stability region of the phase diagram.
It is also interesting that, at zero pressure and temperature, graphite and diamond have a very similar (and quite large) binding energy per atom, i.e. 7.37 eV (graphite) vs 7.35 eV (diamond). This fact might suggest (and it has indeed been suggested) that also in disordered phases like the liquid, the two local structures – graphite-like and diamond-like – could compete. In fact, as we discuss below, the possibility of the existence of two distinct and partially immiscible liquid phases of carbon has been a subject of much debate.
In a liquid–liquid phase transition (LLPT), a liquid substance displays an abrupt change in some local or global property within a narrow band of pressures and temperatures. Local properties that may change in a LLPT are the local coordination or hybridization, typical global properties that are affected are the density or the resistivity. LLPT’s in dense, atomic liquids are typically difficult to probe experimentally: the candidate transitions often occur at extreme pressures and/or temperatures or appear in metastable regions of the phase diagram (and may be hidden by competing solidification). Evidence for LLPT’s have been found for a number of atomic systems, such as Cs [6], As [7], Bi [8], Ge [9], Hg [10], S [11], Sb [12], Se [13], Si [14, 15], Sn [16], H [17], I [18], N[19, 20]. The best established experimental example of a LLPT in an atomic liquid is the case of phosphorus. A transition between a fluid of tetrahedral molecules and a network forming (and metallic) liquid was predicted on theoretical grounds [21, 22], and subsequently verified experimentally [23, 24]. The LLPT in phosphorus has been analysed in several numerical studies [25, 26, 27, 28]. Many other network-forming liquids are also expected to exhibit LLPT’a: first and foremost water [29, 30], but also SiO [31] and GeO [32]. Although considerable progress has been made in the theoretical description of LLPT’s [33, 34, 35, 36, 37, 38], a unified theoretical picture is still lacking.
In this review we discuss the phase diagram of solid and liquid carbon at high pressures and temperatures on the basis of the results of numerical simulations; both quantum and classical. We present evidence that the presence of graphite-like and diamond-like local structures in the liquid does not give rise to liquid-liquid demixing but that the predominant local structure in the liquid varies strongly with pressure. The fact that the liquid is locally either graphite-like or diamond-like has dramatic consequences for the nucleation of the diamond phase, a finding that may have some consequences for our understanding of carbon-rich planets or stars. Wherever possible, we discuss our own results in the context of the relevant literature about the carbon phase diagram, about a possible LLPT in this system and about the possibility of diamond formation in planetary interiors.
In section 2 we give an overview of the bond-order potential (“LCBOP”) that was used to compute both the equilibrium phase diagram of carbon and the pathway for diamond nucleation in liquid carbon. We also discuss in some detail the different variants of the LCBOP potential [39, 40] and explain the rationale behind the choice of the present LCBOP potential.
In section 3 we briefly summarize some of the earlier ideas about the phase diagram of carbon (in particular, about the crystalline phases and the liquid). We pay special attention to the slope of the diamond melting curve and the (possible) heating-rate dependence of the graphite melting curve.
In section 3.2 we report our results concerning the phase diagram, in the context of recent first-principle simulations and experiments.
In section 4.1 we review the arguments that have been put forward to support the idea that liquid carbon can undergo a LLPT. We argue that, to the extent that we can trust the present models of liquid carbon, a LLPT in carbon is not be expected.
In section 5 we discuss our numerical results concerning the (homogeneous) nucleation of diamond from the bulk liquid. In particular, we discuss in some detail the numerical approach that was used in Ref. [41]. In addition, we focus on the structural analysis of the small solid clusters in the liquid and we discuss the implications of our findings for the formation of diamonds in carbon-rich star systems and the interior of giant planets.
## 2 The LCBOP-family
Whilst the crystalline phases of carbon can be simulated by traditional force fields that do not allow coordination changes, a study of the liquid phase and, a fortiori, of phase transformations between phases with different local coordination, requires a potential that can describe carbon in different coordination states. The LCBOP potential was designed with this objective in mind. LCBOP stands for ”Long range Carbon Bond Order Potential”, and represents a bond-order potential for pure carbon that includes long-range (LR) dispersive and repulsive interactions [39] from the outset. We stress that LCBOP is not based on an existing short-range (SR) bond-order potential to which LR interactions have been added a posteriori, although such an approach has been proposed in the literature [42, 43, 44]. The latter approach requires a rather special procedure to avoid interference with the SR potential and suffers from a loss of accuracy. In the LCBOP these problems are circumvented in a natural way.
We note that the term ”long range” may be confusing in this case, as the cut-off of the LR potential in LCBOP is only 6 Å. Usually, the term ”long range” is only used for interactions with a much longer range, such as Coulomb interactions. Here we use it as a synonym for ”non–bonded”, referring to a range much larger than the typical distances between chemically bonded atoms.
After the introduction of LCBOP in Ref. [39], a number of significant modifications have been introduced in order to improve its description of all carbon phases, including liquid carbon. To facilitate the distinction between the different LCBOP potentials, the various versions have been named LCBOPI, LCBOPI and LCBOPII. LCBOPI, introduced as LCBOP in Ref. [39], does not include torsion interactions. As torsion interactions were shown to play an important role in liquid carbon [45], we introduced a refinement of LCBOPI, called LCBOPI, when we performed our first study of liquid carbon [46]. LCBOPI, includes, among other changes, conjugation dependent torsional interactions. Clearly, describing the liquid phase requires a robust form of the potential in order to deal with configurations that are quite unlike the regular topologies in crystal lattices. LCBOPII addresses this problem: it includes several important improvements over LCBOPI. An important innovation in LCBOPII is the addition of so-called middle range (MR) interactions, introduced to bridge the gap between the extent of the tail of the covalent interactions as found in ab-initio calculations, (up to 4.5 Å in certain cases) and the rather short cut-off of only 2.2 Å in LCBOPI for these interactions.
In the remainder of this section we give a brief step-by-step description of the transition from bond-order potentials (BOPs) to LCBOPI. All the results presented in later sections, concerning the phase diagram, the liquid structure, and the nucleation issues are obtained with this version of the potential. In Appendix A we discuss the LCBOPII potential, with a short account of results obtained with this refined version of the potential. We aim at giving the flavour of the potentials in a mainly descriptive way with graphical illustrations, minimizing mathematical formulation.
### 2.1 Bond-order potentials
A bond-order potential (BOP) is a reactive potential, i.e. able to deal with variable coordination. It provides a quantitative description of the simple idea that the bonds of an atom with many neighbours are weaker than those of an atom with few neighbours, as the cohesive ability of the available electrons has to be shared among the neighbours. For carbon, each atom delivers four valence electrons. If these four electrons have to make the six bonds in a simple cubic lattice, then it is evident that each of these bonds is weaker than a bond in the diamond or graphite lattice with coordinations 4 and 3 respectively. On the other hand, the number of bonds is larger for the simple cubic lattice. So there is a balance to be made, which in the case of carbon has the result that graphite is the most stable phase at ambient pressure.
A thorough analysis of these bonding properties, based on a quantum mechanical description, has been given by Anderson [47, 48, 49] and Abell [50]. In their description, that forms the basis of the tight binding models, the electronic wave function is approximated as a sum of localized atomic orbitals. Abell showed that for a regular lattice, i.e. with an identical environment for each atom, and within the assumption that the overlap integral for orbitals on different atoms is non-vanishing only for nearest neighbours, the binding energy per atom is given by:
Eb=12Z(qVR(r)+bVA(r)) (1)
where is the number of nearest neighbours, is the number of valence electrons per atom, is a two-body potential describing the core repulsion, is a two-body attractive potential, and is the so-called bond order, a many-body term dependent on the local environment of the atoms. Abell also showed that the coordination dependence of is fairly well approximated by:
b=b(q,Z)=α(q)Z−1/2 (2)
with a function of , specified in Ref. [50]. Assuming exponential functions and , with , , , and fitting parameters, as a reasonable approximation for overlapping atomic orbitals from atoms at distance , and defining , some algebra leads to a total binding energy given by:
Eb = Bα(q) S−12S(Bα(q)qAS)1S−1ZS−22(S−1) (3) = CZS−22(S−1)
and an equilibrium nearest neighbour distance given by:
req=1θ−λln(qAS√ZBα(q))=12(θ−λ)lnZ+C′ (4)
where and are constants. Eq. 3 implies that for high coordination structures (close packing) are favoured (metals), whereas for the dimer will be the most stable structure (extreme case of covalent bonding). As, in general, the repulsion falls off (much) faster than the attraction, i.e. , Eq. 4 implies that is monotonically increasing with coordination. Combining Eqs. 3 and 4 yields a simple relation between and , namely .
In principle BOPs are based on the above bonding ideas. However, the transferability to different types of structures and materials has been greatly enhanced by a quite reasonable extension in the functional form of the bond order . The simplest bond order for a bond according to a BOP in the style of Tersoff [51] and Brenner [52] reads:
bij=α⎛⎝1+∑k≠i,jG(θijk)⎞⎠ϵ (5)
where the sum runs over the nearest neighbours other than of atom , is an adjustable function of the bond angles and is a negative exponent but not necessarily -1/2. Taking a constant and yields , i.e. one recovers the functional form Abell found for (Eq. 2). This form includes the effect of bond angles in a natural way, and has the ability to fit a large set of data quite well, explaining the success of BOPs.
### 2.2 Lcbopi
The main innovative feature of LCBOPI [39] concerns the treatment of the LR van der Waals interactions. One of the challenges here is to add LR interactions which describe not only the interlayer graphitic binding but also the rather strong -bond repulsion in graphite for decreasing interlayer distance without paying a price in the accuracy of the covalent binding properties. A correct description of these interactions requires a LR potential, that is repulsive in the distance range corresponding to the second nearest neighbours (in diamond and graphite). In the LCBOP-family, to get the right equilibrium lattice parameter, this extra repulsion has been compensated by a somewhat stronger attractive part of the covalent interaction, achieved by an appropriate parametrisation of the SR potential. This is schematically illustrated in Fig. 1.
Another feature of LCBOPI is that it contains a reasonable, physically motivated interpolation scheme for the conjugation term to account for a mixed saturated and unsaturated environment. In this approach each atom supplies a number of electrons to each of its bonds with neighbouring atoms according to a certain distribution rule, the total sum being equal to the valence value 4 of carbon. The character of a certain bond , and its conjugation number , a number between 0 and 1 quantifying effects beyond nearest neighbours, is determined by the sum of the electrons supplied by atom and atom . The interpolation model is further illustrated in Fig. 2.
### 2.3 Lcbopi+
The potential LCBOPI is given by LCBOPI supplemented with torsion interactions and a correction of the angle dependent part of the bond order for configurations involving low coordinations and small angles. Similar modifications were also included in the REBO potential [53], although the torsion term contains a significant difference. For LCBOPI, following the results of ab-initio calculations, the shape of the torsion energy curve as a function of the torsion angle depends on conjugation, whereas for the REBO potential the curves for a double and a graphitic bond are equally shaped but scaled (see top panel of Fig. 20). Details on LCBOPI are given in the appendix A of Ref. [54] and in Ref. [55].
## 3 The phase diagram of carbon at very high pressures and temperatures
In this section we give a review of experimental and theoretical works aimed at determining the phase behaviour of carbon at high temperatures and pressure. We follow a “historical” approach, starting from the beginning of the twentieth century, up to the most recent results coming from experiments and computer simulations. A historical approach may give a better understanding why certain issues have been, and in some case still are, controversial. After setting the stage (with a particular attention to the ideas about the sign of the slope of the diamond melting curve and the long debated issue of the position and nature of the graphite melting curve), we focus on the topic of the LLPT for carbon.
### 3.1 The history of carbon phase diagram
One of the earliest phase diagrams of carbon appeared at the beginning of the twentieth century, and is due to H. Bakhuis Roozeboom [56], who estimated the phase behavior of carbon on the basis of thermodynamic arguments. Of the two solid phases, diamond was recognized to have a slightly greater vapor pressure at a given temperature. The temperature of the graphite/liquid/vapor triple was believed to be around 3000 K. In 1909 Tamman [57] postulated the existence of a region where graphite and diamond are in pseudo-equilibrium. The existence of this pseudo-equilibrium region was at the basis of the method of synthesizing diamond starting from carbon saturated solutions of molten iron, silver, or silicates. In 1938, Rossini and Jessup [58] of the U.S. Bureau of Standards used accurate thermodynamic data to estimate that at 0 K the lowest pressure at which diamond would be stable against graphite is around 1.3 GPa, and around 2 GPa at 500 K. In 1939, the Russian scientist Leipunskii [59] published a review of the problem of diamond synthesis. On the basis of thermodynamic data, he suggested that the melting curve of graphite might be at about 4000 K, with possibly some increase with pressure. This value for the melting of graphite was rather well verified the same year by Basset [60], who established the graphite/liquid/vapor triple point to be at about 11 MPa and 4000 K. In that same publication, Basset reported on a rather pressure independent melting temperature of graphite at K, from atmospheric pressure up to 0.1 GPa. In 1947 Bridgman [61] addressed the problem of extrapolating the graphite/diamond coexistence curve beyond the region where it can be estimated from known physical properties (4 GPa/1200 K). He concluded that there was a possibility that at higher temperatures the rate of increase of P with T along the curve would decrease. This hypothesis was later supported by Liljeblad [62] in 1955, while Berman and Simon [63] in the same year came to the conclusion that the best extrapolation would be a straight line. Experiments that could decide this issue were started by Bundy and coworkers in 1954, when they accomplished diamond synthesis by activating the graphite-to-diamond reaction with the use of different solvent-catalyst metals. The relevant experimental data were published only much later [64, 65], and are compatible with the Berman-Simon straight line extrapolation.
Bundy and his group made also extensive experiments on graphite melting at pressures much higher than the graphite/liquid/vapor triple point. The determination of the graphite melting curve is an experimental challenge for several reasons. First of all, to reach pressures as high as 10 GPa, the sample must be in direct contact with a solid container and, because the melting temperature are so high, this container must be made of a material that is as refractory and inert as possible (Bundy chose boron nitride, pyrophyllite, MgO and diamond powder). In addition, both the heating of the sample and the observations of the high-pressure/high-temperature phase must be carried out very rapidly, before the wall material can melt or react with the carbon sample. The experiments were performed by discharging an electrical capacitor through the sample (this procedure is known as flash heating), and by monitoring the current through, and the voltage across it by means of a two-beam oscilloscope. The discharge circuit was designed to have energy insertion in the sample within a few milliseconds. The interpretation of such experiments, is rather sensitive to the assumed pressure and temperature dependence of the material under study and on the assumption that the pressure of the graphite specimen during rapid heating is the same as in a quasi-static process. With these assumptions, Bundy’ s experiments gave a graphite melting curve as shown in Fig. 3. A maximum melting temperature of about 4600 K was detected in the region of 6 GPa to 7 GPa. The presence of a region with a negative dd along the melting curve indicates that, at those pressures, the density of the liquid at the melting temperature is greater than that of the solid.
Interrupting for a moment the historical order of events, we note that, throughout the past century, different experiments located the graphite melting curve at rather different temperatures [66, 60, 67, 68, 65, 69, 70, 71, 72, 73, 74, 75]. At low pressure, the melting temperature (T) was found at values ranging from to K. Asinovskii et al. [76] pointed out the non negligible dependence of graphite on the heating rate of the sample. Specifically, heating times of the order of 10 s [70, 72] yielded estimates of K; heating times of the order of 10 s [71, 74] suggested K; finally experiments with heating times of the order of one second [60, 69] were consistent with the assumption that K. In ref [76], after a thorough discussion of the experimental methods, the authors recommended that only data coming from experiments with heating time of the order of seconds or more should be accepted. This implied that most of the available data on graphite melting had to be reconsidered and that the question of the position and the shape of the melting curve is still open. On the basis of a series of laser induced slow heating experiments [76] (i.e. heating times of the order of one second), Asinovskii et al. proposed the triple point vapor/liquid/graphite at K and 0.1 MPa (i.e. atmospheric pressure), in clear contradiction to the commonly accepted values [73] of K and 10 MPa. The next year the same authors [77] published results concerning the position of the graphite melting curve. With ohmic heating of graphite samples at heating rates of about 100 K/minute, they found K at 0.25 MPa (typically, samples melted after one hour of steady heating).
Coming back to Bundy’s work, during the experiments on graphite melting Bundy and his group also investigated the graphitization of diamond by flash-heating under pressure. Small diamond crystals where embedded in the graphite sample, pressurized and then flash-heated. Experiments indicated that there is a sharp temperature threshold at which the diamond crystals completely graphitized. This threshold is a few hundreds degrees lower than the graphite melting curve.
Attempts to obtain direct (i.e. without resorting to a catalyst material) conversion of graphite into diamond by the application of high pressure date back to the beginning of the twentieth century. Success came only in 1961, when De Carli and Jamieson [78] reported the formation and retrieval of very small black diamonds when samples of low-density polycrystalline graphite were shock compressed to pressures of about 30 GPa. Later in 1961 Alder and Christian [79] reported results on the shock compression of graphite that were in substantial agreement with those of De Carli and Jamieson.
Bundy [80] achieved direct conversion of graphite into diamond by flash-heating graphite sample in a static pressure apparatus, at pressures above the graphite/diamond/liquid triple point. The threshold temperature of the transformation was found several hundred degrees below the melting temperature of the graphite, and decreasing at higher pressures. The phase transition was revealed by a sharp drop in the electrical conductivity of the samples (that were retrieved as pieces of finely polycrystalline black diamond).
By linking his own results with earlier experimental and theoretical findings, Bundy [80] proposed in February 1963 a phase diagram of carbon at high pressures that is illustrated in Fig. 3. The diamond melting curve was believed to have negative slope by analogy with the other Group IV elements [81, 82]), and on the basis of evidence collected during the experiments of Alder and Christian [79].
In 1973 Van Vechten [83] predicted the phase diagram of carbon by rescaling the behavior of other Group IV elements that are experimentally more accessible, using the electronegativity as a scale parameter. In 1979 Grover [84] calculated a phase diagram by using a phenomenological equation of state for the description of various solid and liquid phases of carbon. He used physically motivated approximations for the free energies of the various phases, with parameters adjusted to match the available data on the equations of state. He concluded that, at all pressures, diamond transforms, before melting, into a solid metallic phase.
On the basis of experimental evidence [85], in 1978 Whittaker [1] proposed the existence of a novel crystalline phase for elemental carbon, called “carbyne”. The stability region of carbyne is sketched in Fig. 4) and the structure of this phase (though expected in different allotropes) is generally that of a chains of alternated single and triple bonds, i.e. CC, arranged in a hexagonal array bundled by dispersion interactions. The existence of a carbyne form was later questioned by Smith and Buseck [2], who claimed that all the experimental evidence could also be accounted for by the presence of sheet silicates. This dispute continued to this day: the experimental evidence for the existence of carbyne is still being debated.
In recent years, the experimental effort has focused on the collection of reliable data at even higher pressures, and on the investigation of the properties of the different phases of carbon at high temperatures and pressures. This challenging task has been faced both with experiments and theory. On the experimental side, the development of the diamond-anvil cell [6] for high pressure physics has made it crucial to know the range of stability of diamond under extreme conditions. The availability of high-energy pulsed laser sources led to new tools for heating up samples at very high temperatures (above the graphite melting curve) [86]. These techniques were immediately applied to the determination of the properties of liquid carbon (i.e. whether it is a conducting metallic liquid or an insulator). Unfortunately, due to the difficulties in interpreting the results of these experiments, the nature of the liquid state of carbon is still not characterized experimentally.
On the theoretical side, the appearance of ever more powerful computers made it possible to use electronic density-functional (DF) theory [87, 88] to predict the properties of materials under extreme conditions. In 1983 Yin and Cohen [89] studied the total energy versus volume and the free energies versus pressure for the six possible lattices of carbon (fcc, bcc, hcp, simple cubic, -tin, diamond). The study was carried out by using ab initio pseudopotential theory (this permits the investigation of the properties of the atomic system at 0 K). Yin and Cohen found that the calculated zero-pressure volume for diamond is either close to or even smaller than those of the other five phases. This is different from what is observed for the other group IV elements, Si and Ge, and defies the common notion that diamond is an open structure and should have higher specific volume than the close packed solid structures. The relatively dense packing of diamond would inhibit the phase transformations at high hydrostatic pressures that are observed for heavier group IV elements. In addition, it suggested a revision of the other common notion that the diamond melting curve should have negative slope, something that is to be expected when a liquid is denser than the coexisting solid. Yin and Cohen also found that, at a pressure around 2300 GPa, diamond converts to a simple cubic (sc) phase. This work was later extended [90, 91, 92] to consider also complex tetrahedral structures. It was found that a distorted diamond structure called BC-8 was stable versus diamonds at pressures above 1000 GPa (see Fig. 4).
In 1984 Shaner and coworkers [93] shock compressed graphite and measured the sound velocity in the material at shock pressures ranging from 80 to 140 GPa, and corresponding shock temperatures ranging from 1500 to 5500 K. They measured velocities close to those of an elastic longitudinal wave in solid diamond. These velocities are much higher than those of a bulk wave in a carbon melt. Since no melt was detected at pressures and temperatures well above the graphite/diamond/liquid triple point, the diamond melting curve should, according to these results, have a positive slope. In 1990 Togaya [94] reported experiments in which specimens of boron-doped semiconducting diamond were melted by flash-heating at pressures between 6 and 18 GPa: these experiments provided clear indications that the melting temperature of diamond increases with pressure.
In the same year ab-initio molecular dynamics (MD) studies [95] clearly showed that, upon melting diamond at constant density, the pressure of the system increases. These results imply that, at the densities studied, the slope of the melting curve of diamond is positive. The shape of the diamond melting curve has interesting consequences for the theory of planetary interiors. Given our present knowledge of the phase diagram of carbon and the existing estimates for the temperatures and pressures in the interior of the outer planets Neptune and Uranus, as well as in the Earth mantle, one might conclude that in a large fraction of these planetary interiors the conditions are such that diamond should be the stable phase of carbon [96]; diamonds could then be expected to occur wherever the carbon concentration is sufficiently high. In section 5 we show that, when only homogeneous nucleation is considered, our modelling predict that the driving force for diamond nucleation is missing in giant planet interiors. In 1996 Grumbach and Martin [97] made a systematic investigation of the solid and liquid phases of carbon over a wide range of pressures and temperatures by using ab initio MD. These authors studied the melting of the simple cubic and BC-8 solid phases, and investigated structural changes in the liquid in the range 400-1000 GPa. They observed that the coordination of the liquid changes continuously from about four-fold to about six-fold over this pressure range.
In 2004, Bradley et al. [98] reported experiments on laser-induced shock compression of diamond up to 3000 GPa. Through optical reflectivity measurements, they found for the first time direct evidences of diamond melting, at an estimated pressure of GPa, and temperature K.
Fig. 4 summarizes the information about phase diagram of carbon at the time when the present research was started.
### 3.2 Carbon phase diagram according to LCBOP
Methods We performed Monte Carlo simulations on the LCBOPI model of carbon [39, 46] to estimate the properties of the liquid, graphite, and diamond phases of carbon. Coexistence curves were determined by locating points in the diagram with equal chemical potential for the two phase involved. For this purpose, we first determined the chemical potential for the liquid, graphite, and diamond at an initial state point ( 10 GPa, 4000 K). This state point is near the estimated triple point[99]. Subsequently, the liquid-graphite, liquid-diamond, and graphite-diamond coexistence pressures at 4000 K were located. In turn, these coexistence points served as the starting point for the determination of the graphite melting line, the diamond melting line, and the graphite-diamond coexistence curve, obtained by integrating the Clausius-Clapeyron equation (a procedure also known as Gibbs-Duhem integration) [100]: where is the difference in specific volume, and the difference in molar enthalpy between the two phases.
We proceeded by first determining the Helmholtz free energy at a given density and temperature by thermodynamic integration and subsequently calculating the chemical potential using the procedure described in Ref. [101]. Coexistence at a given is found at the where the different cross.
For all phases, the Helmholtz free energy of the initial state point ( 10 GPa, 4000 K) was determined by transforming the system into a reference system of known free energy . The transformation was imposed by changing the interaction potential: . Here, and denote the potential energy function of the LCBOPI and the reference system, respectively. The transformation is controlled by varying the parameter continuously from to . The free-energy change upon the transformation was determined by thermodynamic integration:
F✠ = Fref+ΔFref→✠= (6) = = Fref+∫10dλ⟨Uref−U✠⟩λ
The symbol denotes the ensemble average with the potential .
As reference system for the liquid we chose the well-characterized Lennard-Jones (12-6) system, whilst the reference system for both diamond and graphite was the Einstein crystal. General guidelines for these kind of calculations are given in [102, 101], while a full description of the strategy adopted for the present systems is given in [55]. The ensemble averages needed for the thermodynamic integration were determined from Monte Carlo (MC) simulations of a 216-particle system in a periodically replicated simulation box. For simulations of the graphite phase, the atoms were placed in a periodic rectangular box with an initial edge-size ratio of about . For the liquid phase and diamond a periodic cubic box was used.
From the Helmholtz free energy to the chemical potential The chemical potential along the K isotherm was obtained by integrating from the initial state point a fit, , through simulated state points along the 4000 K isotherm. Here, is the number density, and , , and are fit parameters. This yields for the chemical potential [101]:
βμ(ρ)=βF✠N+β(aρ✠+b lnρρ✠+b+c(2ρ−ρ✠)) (7)
Here, denotes the number density at the initial state point, the number of particles, and , with the Boltzmann constant.
Calculated coexistence curves The equilibrium densities at the initial state point ( 10 GPa, 4000 K) were 3.425 for diamond, 2.597 for graphite, and 2.421 for the liquid. Three configurations at the equilibrium volume were then chosen as starting state point for the free energy calculation.
The integrals related to the reference system transformation (Eq. 6) were evaluated using a 10-point Gauss-Legendre integration scheme. Fig. 5 shows the integrand versus . The smooth behaviour of the curves indicates that there are no spurious phase transitions upon the transformation to the reference system (the absence of such transitions is a necessary condition for using this method). At the initial state point ( 10 GPa, 4000 K), the calculated free energies() where , , and , for graphite, diamond, and the liquid, respectively.
Fig. 6 shows the calculated state points along the 4000 K isotherms for the three phases, along with the fitted curves. The fit parameters are listed in Table 1. Employing subsequently Eq. 7 we obtained the calculated chemical potential along the 4000 K isotherm for the liquid, graphite, and diamond phase.
The intersections of the chemical potential curves yield the graphite/liquid coexistence at 6.72 0.60 GPa ( ), and the graphite/diamond coexistence at 15.05 0.30 GPa ( ). The third intersection locates a diamond/liquid coexistence at 12.75 0.20 GPa ( ). Even though both diamond and the liquid are there metastable, the Clausius-Clapeyron integration of the diamond melting curve can be started at the metastable coexistence point at 4000K. Starting from the three coexistence points at 4000 K, the coexistence curves were traced by integrating the Clausius-Clapeyron equation using the trapezoidal-rule predictor-corrector scheme [100]. The new value of the coexisting at a given was taken when two iterations differed less than 0.01 GPa, this being the size of the single uncertainty in the calculation of dd. Typically this required 2-3 iterations.
The calculated phase diagram in the plane is shown in Fig. 7 for the low pressure region, and in Fig. 8 for the pressures up to 400GPa. Tab. 2 lists the densities of selected points on the coexistence curves. The three coexistence curves meet in a triple point at 16.4 0.7 GPa and 4250 10 K.
The graphite/diamond coexistence curve agrees well with the experimental data. In the region near the liquid/graphite/diamond triple point that has not been directly probed in experiments, the graphite/diamond coexistence curve bends to the right, departing from the commonly assumed straight line. Analysis of our data shows this is mainly due to the rapid reduction with increasing pressure of the interplanar distance in graphite at those premelting temperature. This causes an enhanced increase of the density in graphite, yielding a decrease of .
Table 2 shows the melting enthalpy for graphite and diamond. These are calculated as the difference in enthalpy between the solid and the liquid at coexistence. Our calculated melting enthalpies of graphite are significantly lower than the values of 110 kJ/mol that were reported in recent shock-heating melting experiments [73, 74]. Nonetheless our values retain the feature of being rather constant along the graphite melting curve. To our knowledge, no experimental data have been reported for the melting enthalpies of diamond. Note, that they monotonically increase with temperature.
The calculated graphite melting temperature is monotonically increasing with pressure and is confined to small temperature range around 4000 K. In contrast to data inferred from experiments it shows no maximum and is at a somewhat lower temperature. In agreement with the experiments the coexistence temperature is only slowly varying with pressure. Inspection reveals that this behavior is due to a limited variability of the melting enthalpy, and a similar bulk modulus for liquid and graphite yielding a volume change upon melting that is almost constant along the melting curve.
The sign of the slope of the diamond melting curve is consistent with the available experimental data [93, 73] (see Fig. 8). When compared to the diamond melting curve of the BrennerI model [104], the LCBOPI diamond melting curve has a steeper slope yielding significantly higher temperatures for the diamond melting curve. Recently, the melting curve of diamond in a range up to 2000 GPa has been studied by ab initio MD simulations using density functional theory. Wang et al. [105] determined the relative stability of the diamond and liquid phase by evaluating the free energy of both phases. Correa et al. [106] determined the melting temperature using a “two phase” simulation method, where the system initially consists of a liquid and a diamond structure that are in contact. Subsequently the melting temperature is estimated by locating the temperature at which the system spontaneously evolves towards a liquid or a crystalline structure. In both ab initio MD studies it was found that the diamond melting curve shows a maximum; around 450 GPa [106] or 630 GPa [105] 444The difference between these two values gives a hint on the uncertainties related to the two different methods used for calculating coexistence, given that the DF-MD set-up is quite similar in the two works. Subsequent laser-shock experiments [107] provided data consistent with this observation, indicating a negative melting slope most probably in the region of 300-500 GPa. When comparing the LCBOPI diamond melting curve, that monotonically increases with pressure, to the ab initio MD results of Refs. [105, 106] we see a significant deviation from 200 GPa onwards. This might be attributed to an incorrect description of the liquid structure at high compression. Indeed, LCBOPI has not been validated against high density structures with coordination beyond four. These are typical configuration that might become more dominant in the pressure region beyond 200 GPa.
## 4 Existence of a liquid–liquid phase transition?
### 4.1 History of the LLPT near the graphite melting line
Analysis of experimental data The possibility of a LLPT in liquid carbon was first investigated by Korsunskaya et al. [36], who analysed data on the graphite melting curve proposed by Bundy [65], (those data showed a maximum melting temperature at 6.5 GPa). By fitting the data from Bundy into the original two levels model of Kittel [34] and postulating the existence of two liquids, Korsunskaya et al. found the critical temperature of the LLPT. The model is fitted with three points on the graphite melting curve, with the respective derivatives, and with the heat of melting at one selected pressure. The fitting procedure gives an estimate for the critical pressure of GPa and for the critical temperature of the searched transition at 3770 K, i.e. below the melting temperature. The fitted value for the entropy of freezing is the same for the two liquids, thus implying a vertical slope (dd) of the coexistence curve (in the metastable liquid region just below the critical temperature)
On the basis of these results, the authors were able to calculate also the diamond melting curve: they predicted it to have a negative slope, in accordance with the commonly accepted interpretation of the experiments of Alder and Christian [79]. Note that the slope of the graphite melting curve, and the slope of the diamond/graphite coexistence, as extracted from Bundy’s data [80, 65], together with the densities of the phases obtained by fitting to the two levels model implied (via Clausius-Clapeyron equation) a negative slope of the diamond melting curve. Different values of the slopes of the graphite boundary curves, and of the densities of the phases can yield rather different slope of the diamond melting curve.
Consistent with the slope of the fitted graphite melting curve, the low density liquid is predicted less dense than the coexisting graphite, and the high density liquid more dense than the coexisting graphite. The nature of the two liquids was predicted as follows: at low pressure graphite melts into a liquid of neutral particles that interact predominantly through dispersion (London) forces. Upon increasing pressure the liquid would transform into a metallic close packed liquid. No assumptions were made on the local structure.
A semi-empirical equation of state The modern discussion on the LLPT for carbon, starts with the elaboration of a semi-empirical equation of state for carbon, valid also at high and , by van Thiel and Ree [108, 99]. This equation of state was constructed on the basis of experimental data and electronic structure calculations. It postulated the existence, in the graphite melt, of a mixture of an and an liquid. By assuming the model of pseudo-binary mixture for the description of the mixing of the two liquids [38], Van Thiel and Ree showed that fitting their empirical equation of state to the graphite melting points of Bundy [65], they predict a graphite melting curve that shows a maximum with a discontinuous change of the slope, so that a first order LLPT arises. On the other hand, if they fit their model to the data from Ref. [109], the predicted of the LLPT drops below the melting curve and the transition between the two liquids becomes continuous in the stable liquid region. As pointed out by Ponyatovsky [110] the expression for the mixing energy of the two liquid as proposed by van Thiel and Ree in [108, 99] involves two ambiguities. Firstly, extrapolating the coexistence curve between the two liquids at atmospheric pressure, the coexistence temperature would be K: this would imply that the liquid (and the glass) would be more stable than the at ambient pressure up to very high temperatures, which is in disagreement with the experimental data. Furthermore, the mixing energy is proposed to have a linear dependence on , so that, when , also the mixing energy would tend to zero, i.e. at zero temperature the regular solution would become an ideal solution. This would be rather unusual.
Experimental evidence from the graphite melting curve Using flash-heating experiments Togaya [74] determined the melting line of graphite and found a maximum in the melting curve at GPa. This author fitted the six experimental points with two straight lines: with positive slope at pressures lower than , with negative slope at pressures higher than . The apparent discontinuity at the maximum would imply the presence of a triple point graphite/ low-density-liquid (LDL) / high-density-liquid (HDL), as a starting point of a LLPT coexistence curve.
Prediction of a short-range bond-order potential In Ref. [103] Glosli and Ree reported a complete study of a LLPT simulated with the ‘BrennerI’ bond-order potential [52] in its version with torsional interactions [111]. The authors simulated in the canonical (NVT) ensemble several samples at increasing densities at eight different temperatures. By measuring the pressure, they show the familiar van der Waals loop betraying mechanical instability over a finite density range. Using the Maxwell equal-area construction, the authors calculated the LLPT coexistence curve, ending in a critical point at K and GPa. The lowest temperature coexistence point was calculated at K and GPa. The LDL/HDL coexistence curve should meet the graphite melting curve at its maximum, but unfortunately the BrennerI potential does not contain non bonded interactions, thus it can describe neither bulk graphite nor its melting curve. To overcome this deficiency, the authors devised an ingenious perturbation method. Assuming constant slope of the negative sloped branch of the graphite melting curve (the authors adopted the graphite melting curve measured by Togaya [74]) and fixing the graphite/diamond/HDL triple point at a value taken from the experimental literature, they give an estimate of the graphite/LDL/HDL triple point, at K and GPa. The LDL was found to be mainly two-fold () coordinated with a polymeric-like structure, while the HDL was found to be a network forming, mainly four-fold, () liquid. Following the predictions of this bond-order potential, the coordinated atoms would be completely avoided in the liquid. The authors identified the reason in the presence of torsional interactions. In fact, the increase in density demands an increase in structures with higher coordination than the , which is entropically favored at low densities. The single bonds of the structures can freely rotate around the bond axis, while bonds between sites are constrained in a (almost) planar geometry by the torsional interactions: this implies a low entropy for a liquid dominated by sites. This low entropy would eventually destabilize the sites towards the . To prove this conjecture, the authors calculated two relevant isotherms in the original version of the potential, without torsional interactions, finding no sign of a LLPT. Since some torsional interactions are definitely needed to mimic the double bond reluctancy to twist, the authors concluded that the LLPT predicted by the Brenner bond-order potential with torsion is more realistic than its absence when torsional interactions are switched off.
Tight binding calculations [112] showed no evidence of van der Waals loops at some of the temperatures analyzed in Ref. [103]. As Glosli and Ree note, the tight binding model used in [112] is strictly two-center, thus the torsional interactions cannot be described.
An ab initio study of the LLPT In Ref. [45], Wu et al. reported a series of NVT-CPMD simulations at 6000 K from density 1.27 to 3.02 10 kg/m, in a range where the BrennerI potential showed the first order LLPT at the same . No sign of a van der Waals loop was found: in contrast to the BrennerI results of the previous paragraph, two approaching series starting from the lowest and the highest density, were found to meet smoothly at intermediate densities. Looking for the reasons of the failure of the BrennerI potential, the authors calculated, with the same functional used in the CPMD simulations, the torsional energy of two model molecules. One, (CH)C=C(CH), was chosen so that the bond between the two central atoms represents a double bond in a carbon network: two sites are bonded each to two sites; the peripheral hydrogens are needed to saturated the atoms and are intended to have no effect on the central bond. The second molecule, (CH)C-C(CH) is a portion of a completely coordinated network: in the bond-order language, the central bond is conjugated. The two molecules were geometrically optimized in their planar configurations and then twisted around the central bond axis in steps of . In each configuration the electronic wave function was optimized, without further relaxations, to give the total energy, that was compared to the planar configuration total energy. The difference is the torsional energy. The DF calculations found a surprising picture: while the double bond torsional energy was only slightly overestimated by the BrennerI potential at intermediate angles, the DF torsional energy for the conjugated bond showed a completely different scenario compared to the classical prediction. It shows a maximum at , while the planar and orthogonal configuration have basically the same energy. For the BrennerI potential, the torsional energy in this conjugated configuration is monotonically increasing with the torsion angle, just as for the double bond configuration. On average, considering that the conjugated configuration would be characteristic of a mainly coordinated liquid, the torsional interactions are enormously overestimated by the classical potential. As a further illustration, the authors tried to lower torsional energy of the conjugated bond in the classical potential, by tuning the proper parameter, and found a much less pronounced LLPT. Note that the functional form of the torsional interactions for the BrennerI potential cannot reproduce the DF data mentioned here. Wu et al. concluded that “[the] Brenner potential significantly overestimates the torsional barrier of a chemical bond between two- and three-center-coordinated carbon atoms due to the inability of the potential to describe lone pair electrons”; and: “[the] Brenner potential parameters derived from isolated hydrocarbon molecules and used in the literature to simulate various carbon systems may not be adequate to use for condensed phases, especially so in the presence of lone pair electrons”. In the next section we show that the conclusion of Wu et al. is not necessarily true for all BOPs; indeed, LCBOPI, the carbon bond-order potential proposed by Los and Fasolino (see section 2), includes a definition of the torsional interaction which is able to reproduce relevant features of liquid carbon, as they are described by DF-MD.
### 4.2 Ruling out the LLPT in the stable liquid region via LCBOP
We have already indicated that the change of the structure of the liquid along the graphite and diamond melting curve is related to the slope of the melting curve. More importantly, it plays also a crucial role in the nucleation of diamond in liquid carbon. The latter will be further discussed in the next section.
The calculated melting curves of the LCBOPI model for carbon up to 400 GPa provide strong evidence that there is no LLPT in the stable liquid phase. One indication is the smoothness of the slopes of the melting curves. A further argument lies in the structure of the liquid near freezing. Below we discuss this in more detail.
The calculated phase diagram (Figs. 7 and 8) does not show the sharp maximum in the graphite melting line that was inferred from the calculated first-order LLPT for the BrennerI bond-order potential[103]. As we mentioned in the previous section, subsequent DF-MD simulations of liquid carbon [45, 46] indicate that the BrennerI LLPT is spurious: it originates from an inadequate description of the torsional contribution to the interactions. We have extended the calculation of the graphite melting curve of LCBOPI towards higher pressures into the region where both graphite and the liquid are metastable with respect to diamond. It is plotted as a dashed curve in Fig. 8 that shows the same trend as at lower pressures. Hence, the calculated slope of the graphite melting curve is incompatible with the existence of a LLPT in this region of the carbon phase diagram.
In order to further analyze the nature of the liquid, we determined several structural properties of the liquid near the melting curve where we also explored the diamond melting curve. Fig. 9 shows the coordination fraction in the liquid along the coexistence curves up to 400 GPa, as function of temperature, pressure and density, with a linear scale in density. The dashed curve is the calculated graphite/diamond/liquid triple point. Along the graphite melting curve, the three-fold and two-fold coordination fractions remain rather constant, with the four-fold coordination slightly increasing to account for the increase in density. Along the diamond melting curve the three-fold coordinated atoms are gradually replaced by four-fold coordinated atoms. However, only at (3.9 10 kg/m, 300 GPa, and 10500 K) the liquid has an equal fraction of three-fold and four-fold coordinated atoms. The change of dominant coordination is rather smooth. Moreover, we have verified that it is fully reversible showing no sign of hysteresis in the region around the swapping of dominant coordination. We note, that these results contradict the generally assumed picture (see e.g. Ref. [99]) that diamond melts into a four-fold coordinated liquid. Our calculations suggest that up to 300 GPa the three-fold coordination dominates.
The interrelation between three and four-fold sites, was further investigated calculating the partial radial distribution functions () of the liquid at 300 GPa, and 10500 K. Partial radial distribution functions are defined as the probability of finding a -fold site at a distance from a -fold site; the total radial distribution function is recovered by: . We show the results in Fig. 10; we focus on the three predominant curves, describing the pair correlations between three-fold atoms (), between four-fold atoms (), and the cross pair correlation between three- and four-fold sites (). Disregarding the rather pronounced minimum in correspondence of the dip around 2 Å of the and the , the similarity of three curves at all distances is striking. The two sites are almost undistinguishable: in case of a tendency towards a phase transition, one would expect some segregation of the two structures. In contrast, looking at distances within the first neighbours shell, a three-fold site seems to bond indifferently to a three- or a four-fold site, and viceversa. Furthermore, the partial structures up to the third, quite pronounced, peak at 4.5 Å, are almost the same for these three partial radial distribution functions.
We determined the properties of the metastable liquid in the stable diamond region. Fig. 8 shows the liquid state points (crosses) that exhibit an equal number of three and four-fold coordinated atoms. It ranges from the high-pressure high-temperature region where the liquid is thermodynamically stable down into the diamond region, where the liquid is metastable for the LCBOP. The circles indicate state points in which the LCBOP liquid freezes in the simulation. Enclosed by the two set of points lies what we baptized diamond-like liquid. This is a mainly four-fold coordinated liquid with a rather pronounced diamond-like structure in the first coordination shell and was discussed in [46]. This suggests that a (meta)stable liquid with a dominantly four-fold coordination may only exist for pressures beyond 100 GPa and could imply that the freezing of liquid into a diamond structure might be severely hindered for a large range of pressures beyond the graphite/diamond/liquid triple point. In Ref. [46] it is also pointed out that at 6000 K the equation of state shows a change of slope around the transition to the four-fold liquid. At even lower temperature this feature becomes more and more evident, but for temperatures lower than 4500 K the liquid freezes into a mainly four-fold coordinated amorphous structure. This observation is consistent with quenching MD simulations [113, 114] to obtain the tetrahedral amorphous carbon. In those simulations a mainly three-fold liquid freezes into an almost completely four-fold amorphous.
Recent fully ab-initio study of the diamond melting line [105, 106] predicted a maximum at pressures beyond the maximum pressure (400 GPa) we explored with our potential. The maximum implies a liquid denser than diamond at pressure higher than the pressure at which the maximum appears. The authors of both works analyzed the structure of the liquid around this maximum, finding no sign of abrupt change in density and/or coordination. This points towards excluding a LLPT between a four-fold and a higher-fold coordinate liquid; rather, a smooth transformation towards a denser liquid is always observed.
## 5 Diamond nucleation
Our knowledge of the phase diagram of “LCBOPI carbon” allows us to identify the regions of the phase diagram where diamond nucleation may occur. We studied the homogeneous nucleation of diamond from bulk liquid, by computing the steady-state nucleation rate and analyzing the pathways to diamond formation. On the basis of our calculations, we speculate that the mechanism for nucleation control is relevant for crystallization in many network-forming liquids, and also estimate the conditions under which homogeneous diamond nucleation is likely in carbon-rich stars and planets such as Uranus and Neptune.
Steady-state nucleation rate Most liquids can be cooled considerably below their equilibrium freezing point before crystals start to form spontaneously in the bulk. This is caused by the fact that microscopic crystallites are thermodynamically less stable than the bulk solid. Spontaneous crystal growth can only proceed when, due to some rare fluctuation, one or more micro-crystallites exceed a critical size (the “critical cluster”): this phenomenon is called homogeneous nucleation. An estimate of the free-energy barrier the system has to cross in order to form critical clusters and of the rate at which those clusters form in a bulk super-cooled liquid, can be obtained from Classical Nucleation Theory (CNT) [115]. CNT assumes that , the Gibbs free-energy difference between the metastable liquid containing an n-particle crystal cluster and the pure liquid, is given by
ΔG(n)=S(n)γ−n|Δμ|, (8)
where is the area of the interface between an n-particle crystallite and the metastable liquid, is the liquid-solid surface free-energy per unit area, and the difference in chemical potential between the solid and the super-cooled liquid. The surface area is proportional to , where the factor depends on the shape and the geometry of the cluster (e.g. for a spherical cluster).
The top of the free-energy barrier to grow the crystalline critical cluster is then given by
ΔG∗=cγ3ρ2S|Δμ|2, (9)
where is the number density of the stable phase and indicates the geometrical properties of the growing cluster. From our simulations, we can only determine the product : it is this quantity and the degree of super-saturation (), that are needed to compute the top of the free-energy barrier, and hence the nucleation rate.
CNT relates , the steady-state nucleation rate, i.e. the number of crystal clusters that form per second per cubic meter, to , the height of the free-energy barrier that has to be crossed to nucleate the critical crystal :
RCNT=κe−βΔG(n∗), (10)
where is the top of the free-energy barrier and is the kinetic prefactor. The kinetic prefactor term is defined as
κ=ρLk+,n∗Z (11)
where is the liquid number density, the attachment rate of single particles to a spherical crystalline cluster , with proportional to the jump frequency ( being the atomic jump distance) and the so-called Zeldovitch factor. As the nucleation rate depends exponentially on , a doubling of may change the nucleation rate by many orders of magnitude.
Because of the extreme conditions under which homogeneous diamond nucleation takes place, there have been no quantitative experimental studies to determine its rate. Moreover, there exist no numerical estimates of and for diamond in super-cooled liquid carbon. Hence, it was thus far impossible to make even an order-of-magnitude estimate of the rate of diamond nucleation.
Results We simulate a 2744 particles bulk liquid carbon using periodic boundary conditions, with a cubic box whose edge is 18Å. We make it metastable by undercooling it at constant pressure at two different state points, GPa, K and GPa, K. At both state points, the liquid is super-cooled by 25 below the diamond melting curve, being the melting temperature K and K, respectively.
We evaluate by thermodynamic integration at constant pressure from the melting point ()
Δ(βiμ)=∫βiβM⟨[hS(β)−hL(β)]⟩Pdβ (12)
where , and is the enthalpy per particle of the solid and liquid phase, respectively. We then find: and .
In recent years several authors have been developing methods for studying homogeneous nucleation from the bulk and detecting solid particles within the metastable liquid [116, 117, 118, 119]. Our study on diamond nucleation is based on these works, but it requires various adaptations due to the specificity of the carbon covalent bond [55]. We have already shown that liquid carbon is rather structured below its freezing curve. This leads to the need of building a ”strict” definition of “crystallinity” of a particle, in order to avoid an overestimation of the number of solid particles in the system.
In order to compute the nucleation free energy, we use the biggest crystal cluster as a local order parameter to quantify the transformation from the liquid to the solid. To identify solid-like particles, we analyze the local environment of a particle using a criterion based on a spherical-harmonics expansion of the local bond order. In practice, the present bond-order parameter is based on rotational invariants constructed out of rank three spherical harmonics (). This choice allows us to identify the tetragonal symmetry of the diamond structure, as already described in Ref. [55, 119], and it is also perfectly suited to find particles in a graphite-like environment. Our choice of odd-order of spherical harmonics is due to the fact that both diamond and graphite lattices have odd symmetry upon inversion of coordinates.
In order to define the local order parameter, we start with computing
q3,m(i)=1Zi∑j≠iSdown(rij)Y3m(^% rij) (13)
where the sum extends over all neighbors of particle and over all values of . is the fractional number of neighbours and is a smooth cut-off function, introduced in the context of [55] (see also Section II.A in [40]).
By properly normalizing Eq. 13, we get
q′3,m(i)=q3,m(i)(∑lm=−lq3,m(i)⋅q∗3,m(i))1/2, (14)
being the complex conjugate of .
Next we define the dot product between the normalized function of particle and the same function computed for each of its first neighbors, , and sum them up over all the values:
d3(i,j)=l∑m=−lq′3,m(i)⋅q′∗3,m(j)Sdown(rij). (15)
is a real number defined between -1 and 1: it assumes the value of -1 when computed for both graphite and diamond ideal structures.
Two neighboring particles and are considered to be connected whenever . This value satisfactorily splits the distributions of solid particles belonging to a thermalized lattice and liquid particles as found in a liquid. The histograms that led us to this choice are thoroughly discussed in Refs. [55, 119]. By counting the total number of connections () and plotting the probability distribution of , we define a threshold for the number of connections needed to neatly distinguish between a liquid-like and a solid-like environment: we assume that whenever a particle is solid-like. At this stage, we do not specify any nature of the particle’s crystallinity, whether diamond-like or graphite-like. By means of a cluster algorithm we then define all the solid-like AND connected particles as belonging to the same crystal cluster. After computing the size of each cluster, we use the size of the biggest cluster as the order parameter which describes the phase transition [120].
Once properly identified the biggest crystalline cluster in the system, we use the umbrella sampling technique [121] to measure the free-energy barrier to form a critical cluster at state point . In order to better equilibrate the growing clusters, we implement a “parallel tempering” algorithm similar to the one described in Ref. [122]. We obtain that, at state point , is around 25 for a critical cluster size of . By fitting the initial slope of Eq. 8 to a polynomial function assuming a spherical growing cluster, while imposing the value of the correspondent super-saturation (0.60), the inter-facial free energy is Å J m. The same value of is obtained from the top of the free-energy barrier assuming a spherical cluster shape (Eq. 9) ( Å). We underline the fact that at the chosen thermodynamic conditions, there are no finite size effects, caused by spurious interaction of the critical cluster with its own periodically repeated image.
By knowing the inter-facial free energy in , and assuming the validity of CNT, we estimate the crystal nucleation rate by means of Eq. 10, where we use Eq. 11 to compute the kinetic pre-factor (the atomic jump distance being of the order of the diamond bond distance, Å): sm.
We also use Forward-Flux Sampling (FFS), a relatively recent rare events technique useful to compute the nucleation rate and to study the pathways to nucleation [123, 124, 119], and we measure the crystal nucleation rate at state point . FFS yields an estimate for the nucleation rate that is three orders of magnitude higher than the one estimated by means of Eq. 10. Whilst such a discrepancy seems large, it need not be significant because nucleation rates are extremely sensitive to small errors in the calculation of the nucleation barrier. Two possible reasons for this discrepancy are: 1) if we consider that the standard deviation corresponding to is around of its measured value, we conclude that the nucleation rate is sm; 2) another source of error can be the poor statistics when computing the nucleation rate from molten carbon by means of FFS. This is due to the time consuming calculations of the interaction potential: in our study we are in fact forced to base our results on independent nucleation events. Fig. 11 shows a typical critical cluster at state point obtained in the FFS simulations:
it contains around 110 particles, and it is surrounded by mainly 4-fold coordinated liquid particles. The picture shows two different views of the same cluster: it appears evident that all particles within the bulk are diamond-like, whereas the particles belonging to the outer surface are less connected but still mainly 3-4 fold coordinated.
We then attempt to compute the nucleation rate at state point by means of FFS and, even in rather long simulations, we cannot observe the formation of any crystal cluster containing more than 75 particles. Hence, these calculations suggest that the nucleation rate at state point measured by means of FFS is around zero.
Figure 12 shows a 75 particles cluster containing 3-fold coordinated surface particles surrounding the 4-fold coordinated bulk particles, while embedded in a 2-3 fold coordinated liquid.
As we are unable to grow critical nuclei with FFS, we assume that a system of 2744 particles is too small to accommodate a spherical critical cluster. According to Classical Nucleation Theory (CNT) [115], the crystal nucleation rate depends exponentially on the height of the free-energy barrier (see Eq. 10). The latter is a function of the inter-facial free energy () cube and inversely proportional to the super-saturation () square. Since the super-saturation is quite similar in both state points, the failure of the system to nucleate suggests that the inter-facial free energy should play a major role. In order to estimate the free-energy barrier in state point , as we know the solid number density ( Å) and the chemical potential difference between the liquid and the solid (0.77), we only need to calculate the inter-facial free energy . Thus, in what follows, we focus on methods to estimate at state point .
As a spherical critical cluster does not fit in our simulation box, we prepare a rod-like crystal in a system with a slab geometry: this is a flattened box containing around particles, with lateral dimensions that are some four times larger than its height. The crystal rod is oriented perpendicular to the plane of the slab, it spans the height of the simulation box and is continued periodically. The cross section of this crystal rod is initially lozenge shaped, such that its [111]-faces are in contact with the liquid. The [111]-planes are the most stable ones for the diamond lattice. In fact, macroscopic natural diamonds have often an octahedral shape, with eight [111]-exposed surfaces. Indeed, we find stable [111] surfaces in all but the smallest studied diamond clusters.
At state point clusters grow by the addition of particles to the surface made of mainly [111]-planes. Note that when graphite and diamond structures compete at state point (as shown in Fig. 12), the [0001]-graphite sheets transform into [111] diamond planes.
Fig. 13 represents the top view at state point of a rod-like crystal, formed by 4 [111]-faces and 2 bases as [1-10]-lozenge with the acute angles of =70.52 degrees. We then rewrite Eq. 8 for a rectangular parallelepiped having 4 faces and 2 lozenge-shaped basis
ΔG=4√hρsinθγln1/2−|Δμ|n, (16)
where h is the slab’s height. We then use Umbrella Sampling to compute the initial slope of the free-energy barrier. As Å, we obtain from fitting Eq. 16 that the inter-facial free energy for the lozenge-shaped cluster is Å J m. At the same time, computing the inter-facial free energy of the rod-like crystal at state point gives Å J m, considering the same slab’s height and the same angle .
Now that we have estimates for the inter-facial free energies of the lozenge-shaped clusters at both state points and , we can estimate the ratio between them and find that . As we compare clusters having the same shape, this ratio is presumably not very sensitive to the precise (and, a priori unknown) shape of the cluster shape. As the surface free energy at state point B is appreciably higher than at state point A, the early stages of crystal formation at point are strongly suppressed by the inter-facial free-energy term. Since we know (referred to a hypothetical spherical cluster) and the ratio between the two ’s, we can infer that the “effective” Å 3.50 J m.
This value of the surface free energy is so large that we would indeed have needed a much larger system in order to accommodate the critical cluster at the state thermodynamic conditions. ¿From Eq. 16, we calculate the critical cluster size for the lozenge-shaped parallelepiped and use it to estimate the size of a critical spherical cluster in :
n∗2D=4hρSsinθ(γ)2(Δμ)2. (17)
At state point we find 330 particles. Expressing as a function of the lozenge-shaped parallelepiped one, we get
n∗3D=83πγsinθρSΔμh×n∗2D, (18)
where is the solid number density Å, , and the height of the slab (10 Å). Thus, 1700 particles at state point . To guarantee that the critical cluster does not interact with its own periodic images, its radius should always be less than 25% of the box diameter . A spherical cluster with a radius of 0.25 occupies 7 % of the volume of the box and, as the solid is denser than the liquid, it contains about 10 % of the total number of particles (). Such a large system size is beyond our present computational capacity. In contrast, in the slab geometry we find that the free energy of a lozenge-shaped crystal goes through a maximum at a size of 330 particles, which is much less than the system size (4000 particles).
As and are known, we can now use CNT to estimate in state point . It turns out that, mainly because is 2.5 times larger than , the nucleation barrier in is more than ten times higher than in point , and the nucleation rate is sm.
To understand the microscopic origin for the large difference in nucleation rates in state points and , it is useful to compare the local structure of the liquid phase in both state points. As discussed in section 4.1 above (see also [46, 125]), liquid carbon is mainly 4-fold coordinated at state point ( 3-fold and 4-fold ), whereas at the lower temperatures and pressures of point , the coordination in the liquid resembles that of graphite and is mainly 3-fold coordinated ( 2-fold, 3-fold and 4-fold ).
We can analyze the structure of the crystalline clusters that form in the supersaturated liquid carbon and distinguish graphite-like from diamond-like particles. In an analysis, we use a different order parameter function of the order two spherical harmonics, and particularly sensitive to the graphite planar geometry. is the linear combination of spherical harmonics computed for each particle
q2m(i)=1Zi∑j≠iSdown(rij)Y2m(^r% ij) (19)
where the sum extends over all neighbors of particle . We then sum over all the values and calculate the modulus, . The probability distribution for both and is represented in Fig. 14.
Figure 14 depicts the features of both the smallest ( 20 in both state points and ) and the biggest clusters ( 250 in and 75 in ). We also distinguish among: liquid-like particles (circles), particles belonging to the surface of the largest cluster (squares), particles inside the bulk cluster (diamonds) and particles belonging to the first liquid layer surrounding the largest cluster (triangles). According to our definition, particles belonging to the surface of the cluster are those connected to solid-like particles, but not solid-like themselves. Concerning particles belonging to the first liquid layer surrounding the cluster, they usually display the same behaviour as the ones belonging to the cluster surface, which is not surprising in view of the uncertainty in distinguishing a surface-particle from a first-liquid-layer particle. To neatly distinguish between diamond-like or graphite-like environment, we use as a reference the probability distribution for both bulk diamond (D) and graphite (G) (inset of Fig. 14).
At state point , it is clear that bulk particles belonging to small clusters (bottom-left side) and big clusters (bottom-right side) are mainly diamond-like, as well as particles belonging to the surface of the clusters. In contrast, at state point bulk particles belonging to small clusters (top-left side) show both graphite-like and diamond-like finger-prints. By visual inspection, we note that when clusters grow larger (around 75 particles), particles at the surface tend to be mainly 3-fold coordinated, whereas bulk particles stay 4-fold coordinated, as shown in the top-right side of Fig. 14. The destabilizing effect of the graphitic liquid on the diamond clusters is most pronounced for small clusters (large surface-to-volume ratio): clusters containing less than particles tend to be graphitic in structure, clusters containing up to particles show a mixed graphite-diamond structure (see Fig. 12). It appears that the unusual surface structure of the diamond cluster is an indication of the poor match between a diamond lattice and a 3-fold coordinated liquid.
### 5.1 Consequences for other network forming liquids, carbon-rich stars, Uranus and Neptune
As discussed in the introduction, there are many network-forming liquids that, upon changing pressure and temperature, undergo profound structural changes or even LLPT [29, 23, 126]. Interestingly, our simulations show that the ease of homogeneous crystal nucleation at constant super-saturation from one-and-the-same meta-stable liquid can be tuned by changing its pressure, and thereby its local structure.
Pressures and temperatures that we investigate for the diamond nucleation are in practice impossible to reach in experiments. However, such conditions are likely to be found in several extraterrestrial “laboratories”. Homogeneous nucleation of diamond may have taken place in the atmosphere of carbon-rich binary stellar systems comprising the so-called carbon stars and white dwarfs [127, 128]. Closer to home, it has been suggested that diamonds could also have formed in the carbon-rich middle layer of Uranus and Neptune [96, 129, 130] where, due to the high pressure and temperature, the relatively abundant CH would decompose into its atomic components. In fact, experiments on methane laser-heated in diamond anvil cells [131] found evidence for diamond production. Ab initio simulations [132] also found that hot, compressed methane will dissociate to form diamond. Yet, there is a large discrepancy between the estimates of the pressures (and thus depth in the planet interior) at which the diamond formation would take place. The laser-heating experiments [131] suggested diamond formation at pressure as low as 10–20 GPa (at 2000–3000 K), whereas the ab initio simulations [132] found dissociation of methane, but synthesis of short alkane-chains at GPa and diamond at pressures not lower than 300 GPa (note that simulations were carried out at 4000–5000 K).
The present work allows us to make a rough estimate of the conditions that are necessary to yield appreciable diamond nucleation on astronomical timescales.
In this context, it is crucial to note that neither carbon stars nor carbon-rich planets consist of pure carbon. In practice, the carbon concentration may be as high as 50% in carbon-rich stars [127, 128], but much less (1-2% [129, 96, 133, 130]) in Uranus and Neptune. To give a reference point, it is useful to estimate an upper bound to the diamond nucleation rate by considering the rate at which diamonds would form in a hypothetical environment of pure, metastable liquid carbon. To this end we use our numerical data on the chemical potential of liquid carbon and diamond and our numerical estimate of the diamond-liquid surface free energy, to estimate the nucleation barrier of diamond as a function of temperature and pressure. We then use CNT to estimate the rate of diamond nucleation.
To do so, we need to extend the estimate of the nucleation rate from the triple point pressure (around 16 GPa) up to 100 GPa, and from the melting temperatures ( K and K, respectively) to 35 % under-cooling (at which diffusion in our sample becomes negligible on the - far from astronomical - time-scales of our simulations). To make such an extrapolation, we make use of Eqn.’s 109, and 11. The state-point dependent quantities are the solid and liquid number densities and , the self-diffusion coefficient , the surface free energy , the difference in chemical potential between the liquid and the solid , and the critical cluster size . We estimate them in the following way: the densities are directly measured by Monte Carlo simulations of the solid and the liquid; the self diffusion coefficient is extrapolated assuming an Arrhenius behaviour of the metastable liquid (see Appendix B); the chemical potential difference is interpolated via Eq. 12 between 30 and 85 GPa. then also follows by linear interpolation. Concerning the surface free-energy, we assume that linearly depends on , the equilibrium concentration of 4-fold coordinated atoms at the selected state point. This quantity is easily measured in the Monte Carlo simulations. The nucleation barrier height is given by Eq. 9, where the geometrical factor is the same for all cluster sizes. It is obvious that we have to make rather drastic assumptions in order to estimate the nucleation rate in the experimentally relevant regime. We believe that our assumptions are reasonable, but one should not expect the resulting numbers to provide more than a rough indication.
Figure 15 shows that there is a region of some 1000K below the freezing curve (continuous red curve) where diamond nucleation is less than 10 ms (above the continuous black line). If the rate is lower than this number, not a single diamond could have nucleated in a Uranus-sized body during the life of the universe. As can be seen from the figure, our simulations for state point B are outside the regime where observable nucleation would be expected. Note that this latter conclusion is not based on any extrapolation.
As mentioned above, carbon stars and planets do not consist of pure carbon. Hence, we have to consider the effect of dilution on the crystallization process. To do so, we make a very “conservative” assumption, namely that nucleation takes place from an ideal mixture of C, N,O and H [135]. If this were not the case, then either demixing would occur, in which case we are back to the previous case, or the chemical potential of carbon in the liquid is lower than that in pure carbon, which would imply that the thermodynamic driving force for diamond crystallization is less than in pure liquid carbon. In Fig. 16, we show how dilution affects the regime where diamond nucleation is possible. To simplify this figure, we do not vary pressure and temperature independently but assume that they follow the adiabatic relation that is supposed to hold along the isentrope of Uranus [136] and we use the ideal-mixture expression for the chemical potential , where is the chemical potential difference between the solid and the liquid for he pure substance (C) and is the concentration of carbon in the fluid mixture.
Not surprisingly, Fig. 16 shows that dilution of the liquid decreases the driving force for crystallization. In fact, no stable diamond phase is expected for carbon concentrations below 8%. Moreover, there is a wide range of conditions where diamonds could form in principle, but never will in practice. Assuming that, for a given pressure, the width of this region is the same as in the pure case (almost certainly a serious underestimate), we arrive at the estimate in Fig. 16 of the region where nucleation is negligible (i.e. less than one diamond per planet per life-of-the-universe). From this figure, we see that quite high carbon concentrations (over 15%) are needed to get homogeneous diamond nucleation. Such conditions do exist in white dwarfs, but certainly not in Uranus or Neptune.
### Appendix A: LCBOPII
In this appendix, we describe the main features of the latest addition to the LCBOP family. This potential has been used in Ref. [40, 54, 137]. However, the simulations discussed in the present paper are based on LCBOPI.
Middle range interactions. Although LCBOPI gave an improved description of most liquid phase properties, like coordination distributions as a function of density, as compared to the bond-order potentials without LR interactions (Brenner, CBOP[138]), the radial distribution function showed a too marked minimum after the first shell of neighbors, as compared to ab-initio calculations (see Fig. 23). This deficiency was attributed to the relatively short cut-off of 2.2 Å for the SR interactions, giving rise to a spurious barrier for bond formation around 2.1 Å. Therefore, for LCBOPII, the total binding energy expression was extended with so-called MR interactions as:
Eb=12Nat∑i,j(SsrijVsrij+(1−Ssrij)Vlrij+1√ZmriSmrijVmrij) (20)
The first two terms on the right-hand side represent the SR and LR interactions respectively, where smoothly switches between both interactions within the interval 1.7 Å 2.2 Å, with and . The last term represents the MR interactions, where is a purely attractive potential and is a sort of MR coordination number defined as:
Zmri=(∑jSmrijVmrij)2∑j(SmrijVmrij)2 (21)
to account for many body effects. The switch function , going from 0 to 1 between = 1.7 Å and 2.2 Å, smoothly excludes the MR interactions for distances smaller than 1.7 Å. For clarity, the ranges of the various interactions in Eq. 20 are schematically represented in Fig. 17. The MR interaction was fitted to ab-initio calculations of single, double and triple bond dissociation curves. For the single bond, the tail of the interaction vanishes beyond 4 Å. is the product of a simple polynomial with a smooth cut-off at 4 Å and an environment dependent switch function , depending on the angles between and (), where atom is a SR neighbour of atom , i.e. . Thus, while the MR interactions give an extension of the covalent interactions beyond the SR cut-off distance 2.2 Å in situations where this is appropriate, its environment dependence relies only on the SR nearest neighbours (within 2.2 Å), a quite convenient property for the sake of efficiency. The switch acts in such a way that is only non-zero when the angles are relatively large. This is illustrated schematically in Fig. 18. In particular, the definition of makes vanish for any pair in all bulk crystal structures. So the addition of the MR interactions does not require reparametrization of the SR and LR potential terms.
The reactivity of atoms depends on whether these atoms are well surrounded by neighbours or not. Typically, an atom with a dangling bond wants to make another bond. To include this effect, the MR potential is made dependent on the so-called dangling bond number , i.e. . For an atom with , the MR interaction is stronger than for an atom with .
Extended coordination dependence of angular function. For LCBOPII the correction of the angle dependent part of the bond order for configurations involving low coordinations and small angles has been further extended, involving a gradual coordination dependence of the angular term over a wide range of coordinations.
Anti-bonding. Another new feature of LCBOPII is the addition of an anti-bonding correction to the bond order. An example of a situation where one electron remains unpaired in a non-bonding state is depicted in Fig. 19(b). Clearly, this situation is unfavourable as compared to the situation in Fig. 19(c). This effect cannot be captured in the conjugation term and has therefore been included as a separate, anti-bonding term.
Torsion. As it has been clearly demonstrated in Ref. [45], the torsion interaction for a bond between an atom and an atom is strongly dependent on conjugation, i.e. on the coordinations of the neighbours of and of . This was already partly included in LCBOPI. However, for LCBOPII, the conjugation dependence of the torsion interactions was fully extended and fitted to ab-initio calculations of the torsion barrier for all the possible conjugation situations.
In addition to that, LCBOPII includes a redefinition of the torsion angle, in order to avoid the ’spurious’ torsion that occurs using the traditional definition. Traditionally, the torsion angle is defined as:
cos(ωijkl)=tijk⋅tijl|tijk||tijl|=(rij×rik)⋅(rij×rjl)|rij×rik||rij×rjl| (22)
which, assuming torsion to be non-vanishing only between sp bonded atoms and , gives rise to four torsion contributions to the bond order. With the definition of Eq. 22, both situations depicted in Fig. 21b and c give rise to a non-zero torsion angle. However, in both situations there is actually no torsional distortion but only a bending distortion, which is already taken into account by the angular term in the bond order. Thus, one would like to have for the cases in Fig. 21b and c, in disagreement with the most right-hand side expression in Eq. 22. Another problem of expression 22 is that it has a singularity for configurations where is parallel to (or ). For the liquid phase at high temperature such situation are easily accessible.
For LCBOPII, the problem of ’spurious’ torsion has been tackled by a redefinition of the vectors in Eq. 22, reading:
tijk = ^rij×(^rik1−^rik2)+ + √32(^rij⋅(^rik1−^rik2))(^rij×(^rik1+^rik2))
and likewise for . Inserting these vectors into Eq. 22, leading to a different right-hand side, reproduces the same as the traditional definition for any torsional distortion without bending and yields for both situations depicted in Fig. 21, as it should be. In addition, it gives a good interpolation for any other configuration, and it has no singularities. Note that for the two distortions depicted in Fig. 21, the second term in Eq. 5 vanishes, and the vectors and are parallel, implying indeed .
Interpolation for fractional coordinations. The conjugation term for a bond depends on the reduced coordinations and of the atoms and , and on the conjugation number . The reduced coordination is defined as:
Nij=∑k≠i,jSZ(rik) (24)
and likewise for , where is a switch function for the coordination, smoothly going from 1 to 0 for going from 1.7 Å and 2.2 Å. is fitted to integer coordination configurations with only full neighbours, i.e. with . This poses the problem of how to determine for configurations with fractional bonds, i.e. configurations with for one or more neighbours . David Brenner, the inventor of the conjugation term, proposed to use a 3D spline [52]. However, since the values on the integer argument nodes are rather scattered, a spline unavoidably introduces unphysical oscillations. For LCBOPII, we found an alternative solution to this problem which is schematically illustrated in Fig. 22. In this approach, the conjugation term for configurations with fractional coordination is defined as a weighted superposition of conjugation terms for configurations with integer configurations, the weight factors for the configuration (=1,..4) being defined in terms of the switch functions . For instance, for the situation in Fig. 22, the conjugation term is given by:
Fconjij = (1−SZ,ik2)(1−SZ,ik3)Fconjij(1,2,Nconjij,1)+ + SZ,ik2(1−SZ,ik3)Fconjij(2,2,Nconjij,2)+ + (1−SZ,ik2)SZ,ik3Fconjij(2,2,Nconjij,3)+ + SZ,ik2SZ,ik3Fconjij(3,2,Nconjij,4)
where are the conjugation numbers for the four configurations in Fig. 22.
Results with LCBOPII. LCBOPII proved to be more accurate than its predecessors in describing defects and surfaces of the solid phases [40]. In the liquid phases the improvement of LCBOPII is immediately evident when looking at radial distribution functions at different densities (Fig. 23). The main discrepancy between LCBOPI and the reference data from DF-MD calculations [46, 54]) was found at the first minimum, at around 2 Å. LCBOPI predicted a much deeper minimum than DF-MD. This discrepancy is completed eliminated by LCBOPII. We also know that the melting line of diamond predicted by LCBOPII is about 500 K lower than for LCBOPI at GPa [54], more consistently with ab-initio predictions of the diamond melting line [105, 106]. In [54] we also thoroughly analyzed the properties of the liquid. Interestingly, by extrapolating the equations of state of the liquid at temperatures at which our relatively small samples actually froze, we found that a critical point for the graphite-like into diamond-like transition is present at 1230 K. The precise value might be inaccurate, since it is found far outside the sampled region, still the shapes of the higher temperature equations of state point towards the existence of such critical isotherm. As is the case for the much speculated water LLPT [29, 30] an unreachable critical point might still be responsible of some peculiar behaviour of the system at higher temperatures, such as the enormous change in nucleation rate with pressure.
### Appendix B: Self-diffusion coefficient
When computing the kinetic pre-factor to get the nucleation rate, we have to consider the fact that for our model potential, LCBOPI, only a Monte Carlo code is available. In order to evaluate the self-diffusion coefficient needed to compute the CNT kinetic pre-factor, we infer the scaling factor between the Monte Carlo “time-step” and the MD time-step [139] by propagating a 128 carbon atoms system via Car-Parrinello Molecular Dynamics CPMD code [140] starting from a configuration equilibrated with LCBOPI. Note the reasonably good agreement between the static properties of the liquid carbon computed with LCBOPI and the same computed by means of CPMD (with the BP functional) [46, 54]. Data for the high pressure state point come from simulations used in Ref. [46], whereas data for the low pressure state point come from a new simulation where the time rescaling is state-point dependent, obtained with the same technical details as reported in [46].
We use the fact that molten carbon is an Arrhenius-like liquid: therefore, once the activation energy is known, we compute the viscosity as a function of temperature and by means of the Stokes-Einstein relation obtain the diffusion coefficient. In the nineteen-fifties, Kanter [141] estimated the relevant activation energy of liquid carbon to be 683 . Subsequenty, Fedosayev [142] reported a measurement of the molten carbon viscosity: = 5 poise at . We estimate the self-diffusion coefficient at the same temperature by means of the Stokes-Einstein relation [143]
D=kBTηa, (25)
where and is the Boltzmann’s constant: cm/s. Since molten carbon is an Arrhenius-like fluid [144],
D(T)=D0exp−EAkBT, (26)
we obtain : cm/s, and then extrapolate the diffusion coefficient for different temperatures, as shown in Fig. 24 and Table 3.
We then find that at state point , cm/s, whereas at state point , cm/s.
We also use a Car-Parrinello Molecular Dynamics [140] to calculate the self-diffusion coefficient by means of the mean square displacement: at state point cm/s, which matches surprisingly well with the diffusion coefficient estimated by means of the Arrhenius law, cm/s.
## Acknowledgments
The work of the FOM Institute is part of the research program of FOM and is made possible by financial support from the Netherlands Organization for Scientific Research (NWO). We gratefully acknowledge financial support form NWO-RFBR Grant No 047.016.001 and FOM grant 01PR2070. We acknowledge support from the Stichting Nationale Computerfaciliteiten (NCF) and the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) for the use of supercomputer facilities. LMG wishes to thank his wife, Sara Iacopini, for helping him in scanning and summarizing the literature mentioned in section 3.1. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.863009512424469, "perplexity": 996.1129798401731}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402101163.62/warc/CC-MAIN-20200930013009-20200930043009-00677.warc.gz"} |
https://www.longevitas.co.uk/site/informationmatrix/?tag=GLM | ### Functions of a random variable
#### (May 9, 2018)
Assume we have a random variable, $$X$$, with expected value $$\eta$$ and variance $$\sigma^2$$. Often we find ourselves wanting to know the expected value and variance of a function of that random variable, $$f(X)$$. Fortunately there are some workable approximations involving only $$\eta$$, $$\sigma^2$$ and the derivatives of $$f$$. In both cases we make use of a Taylor-series expansion of $$f(X)$$ around $$\eta$$:
$f(X)=\sum_{n=0}^\infty \frac{f^{(n)}(\eta)}{n!}(X-\eta)^n$
where $$f^{(n)}$$ denotes the $$n^{\rm th}$$ derivative of $$f$$ with respect to $$X$$. For the expected value of $$f(X)$$ we then have the following second-order approximation:
${\rm E}[f(X)] \approx f(\eta)+\frac{f''(\eta)}{2}\sigma^2\qquad(1)$
### Working with constraints
#### (Feb 9, 2016)
Regular readers of this blog will be aware of the importance of stochastic mortality models in insurance work. Of these models, the best-known is that from Lee & Carter (1992):
$\log \mu_{x,y} = \alpha_x + \beta_x\kappa_y\qquad(1)$
where $$\mu_{x,y}$$ is the force of mortality at age $$x$$ in year $$y$$ and $$\alpha_x$$, $$\beta_x$$ and $$\kappa_y$$ are parameters to be estimated. Lee & Carter used singular value decomposition (SVD) to estimate their parameters, but the modern approach is to use the method of maximum likelihood - by making an explicit distributional assumption for the number of deaths, the fitting process can make proper allowance for the amount of information available…
### Out of line
#### (Aug 20, 2013)
Regular readers of this blog will be in no doubt of the advantages of survival models over models for the annual mortality rate, qx. However, what if an analyst wants to stick to the historical actuarial tradition of modelling annualised mortality rates? Figure 1 shows a GLM for qx fitted to some mortality data for a large UK pension scheme.
Figure 1. Observed mortality rates (•) and fitted values (-) using a binomial GLM with default canonical link (logit scale). Source: Own calculations using the mortality experience of a large UK pension scheme for the single calendar year 2009.
Figure 1 shows that the GLM provides a good approximation of the mortality patterns. A check of the deviance residuals (not shown) yields…
Tags: GLM, linearity, survival model
### Groups v. individuals
#### (Sep 28, 2012)
We have previously shown how survival models based around the force of mortality, μx, have the ability to use more of your data. We have also seen that attempting to use fractional years of exposure in a qx model can lead to potential mistakes. However, the Poisson distribution also uses μx, so why don't we use a Poisson model for the grouped count of deaths in each cell? After all, a model using grouped counts sounds like it might fit faster. In this article we will show why survival models constructed at the level of the individual are still preferable.
The first step when using the Poisson model is to decide on the width of the age interval. This is necessary because the Poisson model for grouped counts…
### Part of the story
#### (Oct 9, 2009)
The Institute of Actuaries' sessional meeting on 28th September 2009 discussed an interesting paper. It covered similar material to that in Richards (2008), but used different methods and different data. Nevertheless, some important results were confirmed: geodemographic type codes are important predictors of mortality, and a combination of geodemographic profile and pension size is better than either factor on its own. The authors also added an important new insight, namely that last-known salary was a much better predictor than pension size.
The models in the paper were GLMs for qx, which require complete years of exposure. The authors were rightly concerned that just using complete years would…
### Out for the count
#### (Jul 31, 2009)
In an earlier post we described a problem when fitting GLMs for qx over multiple years. The key mistake is to divide up the period over which the individual was observed in a model for individual mortality. This violates the independence assumption and leads to parameter bias (amongst other undesirable consequences). If someone has three records aged 60, 61 and 62 initially, then these are not independent trials: the mere existence of the record at age 62 tells you that there was no death at age 60 or 61.
Life-company data often comes as a series of in-force extracts, together with a list of movements. The usual procedure is to re-assemble the data to create a single record for each policy, using the policy number…
### Logistical nightmares
#### (Jan 24, 2009)
A common Generalised Linear Model (GLM) for mortality modelling is logistic regression, also sometimes described as a Bernoulli GLM with a logistic link function. This models mortality at the level of the individual, and models the rate of mortality over a single year. When age is used as a continuous covariate, logistic regression has some very useful properties for pensioner mortality: exponentially increasing mortality from age 60 to 90 (say), with slower, non-exponential increases at higher ages. Logistic regression was the foundation of the models presented in a SIAS paper on annuitant mortality.
Although logistic regression for the rate of mortality is nowadays superceded by more-powerful…
Tags: GLM, logistic regression
### Great Expectations
#### (Dec 8, 2008)
When fitting statistical models, a number of features are commonly assumed by users. Chief amongst these assumptions is that the expected number of events according to the model will equal the actual number in the data. This strikes most people as a thoroughly reasonable expectation. Reasonable, but often wrong.
For example, in the field of Generalised Linear Models (GLMs), the user has a choice of so-called link functions to specify the model. For binomial data, the default is the canonical link, the logit, which gives the following function for the rate of mortality, qx:
qx = exp(α + βx) / (1 + exp(α + βx))
This is known to actuaries as a simplied version of Perks Law when applied to mortality…
Tags: GLM
### Do we need standard tables any more?
#### (Nov 1, 2008)
Actuaries are long used to using standard tables. In the UK these are created by the Continuous Mortality Investigation Bureau (CMIB), and the use of certain tables is often prescribed in legislation. As actuaries increasingly move to using statistical models for mortality, it is perhaps natural that they should first consider incorporating standard tables into these models. But are standard tables necessary, or even useful, in such a context?
Although we normally prefer to model the force of mortality, here we will use a model for the rate of mortality, qx, since this is how many actuaries still approach mortality. Our model is actually a generalised linear model (GLM) where the rate of mortality is:
exp(α+βx)qx
### Survival models v. GLMs?
#### (Aug 12, 2008)
At some point you may be challenged to decide whether to use survival models or the older generalised linear models (GLMs). You could be forgiven for thinking that the two were mutually exclusive, especially since some commercial commentators have tried to frame the debate that way.
In fact, survival models and GLMs are not necessarily mutually exclusive. It is true that GLMs are more commonly used for modelling the rate of mortality, qx, whereas survival models are always used for modelling the force of mortality, μx. Indeed, a survival model can be defined as a model for μx.
However, there are GLMs for the force of mortality as well. One notable example is the Poisson model for the number of deaths,… | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8273734450340271, "perplexity": 1490.2291262041902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660020.5/warc/CC-MAIN-20190118090507-20190118112507-00444.warc.gz"} |
http://www.nag.com/numeric/FL/nagdoc_fl24/html/G05/g05saf.html | G05 Chapter Contents
G05 Chapter Introduction
NAG Library Manual
# NAG Library Routine DocumentG05SAF
Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.
## 1 Purpose
G05SAF generates a vector of pseudorandom numbers taken from a uniform distribution between $0$ and $1$.
## 2 Specification
SUBROUTINE G05SAF ( N, STATE, X, IFAIL)
INTEGER N, STATE(*), IFAIL REAL (KIND=nag_wp) X(N)
## 3 Description
G05SAF generates $n$ values from a uniform distribution over the half closed interval $\left(0,1\right]$.
One of the initialization routines G05KFF (for a repeatable sequence if computed sequentially) or G05KGF (for a non-repeatable sequence) must be called prior to the first call to G05SAF.
## 4 References
Knuth D E (1981) The Art of Computer Programming (Volume 2) (2nd Edition) Addison–Wesley
## 5 Parameters
1: N – INTEGERInput
On entry: $n$, the number of pseudorandom numbers to be generated.
Constraint: ${\mathbf{N}}\ge 0$.
2: STATE($*$) – INTEGER arrayCommunication Array
Note: the actual argument supplied must be the array STATE supplied to the initialization routines G05KFF or G05KGF.
On entry: contains information on the selected base generator and its current state.
On exit: contains updated information on the state of the generator.
3: X(N) – REAL (KIND=nag_wp) arrayOutput
On exit: the $n$ pseudorandom numbers from a uniform distribution over the half closed interval $\left(0,1\right]$.
4: IFAIL – INTEGERInput/Output
On entry: IFAIL must be set to $0$, $-1\text{ or }1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$. When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.
On exit: ${\mathbf{IFAIL}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
## 6 Error Indicators and Warnings
If on entry ${\mathbf{IFAIL}}={\mathbf{0}}$ or $-{\mathbf{1}}$, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Errors or warnings detected by the routine:
${\mathbf{IFAIL}}=1$
On entry, ${\mathbf{N}}<0$.
${\mathbf{IFAIL}}=2$
On entry, STATE vector was not initialized or has been corrupted.
Not applicable.
None.
## 9 Example
This example prints the first five pseudorandom numbers from a uniform distribution between $0$ and $1$, generated by G05SAF after initialization by G05KFF.
### 9.1 Program Text
Program Text (g05safe.f90)
### 9.2 Program Data
Program Data (g05safe.d)
### 9.3 Program Results
Program Results (g05safe.r) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 23, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9897322058677673, "perplexity": 3427.868546080385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827791.21/warc/CC-MAIN-20160723071027-00281-ip-10-185-27-174.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/106162/elementary-proof-of-basis-of-order-k | # Elementary Proof of Basis of Order k
## Context
According to the FAQ, questions of the form "the sorts of questions you come across when you're writing or reading articles or graduate level books" are acceptable. This falls into the "reading graduate level books."
## Problem Statement
Let $N$ be the natural numbers.
$B \subseteq N$ is a basis of order $k$ if $N \setminus kB$ is finite.
I would like to show that there is a basis $B$ of order $k$ s.t.
$|B \cap [1,n]| = O(n^{1/2} \log^{1/k} n)$.
## What I've tried
Suppose all we needed was $O(n^{1/2} \log^{1/2} n)$, then I would define $B$ by randomly sampling from $N$ s.t.
$$P(n \in B) = \frac{c\log^{1/2} n}{\sqrt{n}}$$
By the chernoff bound, with high probability we have $|B \cap [1,n]| = O(n^{1/2}\log^{1/2} n)$.
Furthermore, for any $n$, there does not exists $a,b\in B$ s.t. $a+b=N$ with probability at most $(1-\frac{c\log n}{n})^{n/2} \leq 1/n^2$, and we're done.
Unfortunately, however, I need to push this down to $O(n^{1/2}\log^{1/k} n)$.
## What I'm stuck on
So far, I've only used $B$ as a order 2 base, rather than an order $k$ base.
## Question:
What should I be looking at to go from order 2 to order $k$ and $\log^{1/2} n$ to $\log^{1/k} n$?
-
If $B$ is a basis of order $k$ such that every integer $n$ can be written as a sum of $k$ elements from $B$ in $\asymp n^{o(1)}$ ways, then a simple counting argument yields $|B \cap [1 , X]| \asymp X^{\frac{1}{k}+o(1)}$. Thus a stronger estimate $|B \cap [1 , X]| \asymp (X \log X)^{\frac{1}{k}}$ in your problem is certainly a more interesting goal.
Theorem 8.6.3 in "The Probabilistic Method" by Alon & Spencer gives precisely a set $B$ satisfying this estimate when $k=3$ (and the proof can be adapted in order to handle any value $k \geq 3$). They also give the following reference :
Erdos, P. and Tetali, P. (1990). Representations of integers as the sum of k terms, Random Structures Algorithms 1(3): 245-261.
@Stanley Yao Xiao : I made an assumption on the number of representations of integers by $k$ elements from $B$ which essentially discards basis of smaller order.
@unknown : Writing $r(n)$ for the number of representations of $n$ as a sum $b_1 + \cdots + b_k$ with each $b_i \in B$, we have $$|B \cap [1,X]|^k = \sum_{n \geq 1} \left( \sum_{b_1 + \cdots + b_k = n ;\\ b_i \leq X} 1 \right) \geq \sum_{1 \leq n \leq X} r(n)$$ and $$|B \cap [1,X]|^k = \sum_{n \geq 1} \left( \sum_{b_1 + \cdots + b_k = n ;\\ b_i \leq X} 1 \right) \leq \sum_{1 \leq n \leq kX} r(n)$$ Under the assumption $r(n) \asymp n^{o(1)}$, both RHS are $X^{1 + o(1)}$, hence the result. Actually, Erdos & Tetali showed that some basis $B$ of order $k$ satisfies $r(n) \asymp \log n$. By the argument above, this implies $|B \cap [1,X]| \asymp (X \log X)^{\frac{1}{k}}$.
-
@js: out of curiosity, what is the simple counting argument for X^{1/k + o(1)} ? – user26147 Sep 2 '12 at 18:56
One can obtain that as a lower bound fairly easily; but it is certainly not true as an upper bound. For instance the set of integers is an additive basis of any order, but is much denser than that. – Stanley Yao Xiao Sep 3 '12 at 3:02
To elaborate on the above comment, the problem with order $k$ bases is precisely that Chernoff's inequality does not work. The joint independence assumption for Chernoff's inequality is essential; as seen by the following example taken from Tao and Vus' Additive Combinatorics:
Color the elements of $[1, N]$ either black or white independently and with equal probability. For each $A \subset [1, N]$ let $s_A$ denote the parity of black elements of $A$ (so say if $A$ contains 3 black elements then $s_A = 1$). One can check that the $s_A$'s are independent events. Write $X = \displaystyle \sum_{A \subset [1, N]} s_A$. One can check that $\mathbb{E}X = 2^N - 1/2$ and $\textbf{Var} X = 2^{N-2} - 1/4$. Further, $\mathbb{P}(X = 0) = 2^{-N}$. The upper-bound on Chernoff's inequality would be $2\exp(-2^{N-2})$, which is much smaller than $\mathbb{P}(X = 0)$, so the inequality fails.
The reason why a simple argument suffices for additive bases of order 2 is because we have
$$\displaystyle r_{2,B}(n) = \sum_{x < n/2} \mathbb{I}(x \in B) \mathbb{I}(n - x \in B) + E$$
where $E$ is a suitably small error, and $r_{2,B}(n)$ is the number of ways to write $n$ as a sum of two elements in $B$. The key here is that the events $\mathbb{I}(x \in B) \mathbb{I}(n - x \in B)$ are independent for $1 \leq x < n/2$. This is not the case when there are more summands. In the Erdos-Tetali paper cited above, this issue is circumvented via Janson's inequality. In particular, Erdos-Tetali showed that there are additive bases of order $k$ satisfying $| B \cap [1,N]| = \Theta(N^{1/k} \log^{1/k} N)$.
The main difficulty you have to circumvent is how to deal with the non-independence of the random variables $\mathbb{I}(x_1 \in B) \cdots \mathbb{I}(x_k \in B)$.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.972313642501831, "perplexity": 147.5389586905893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00635-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://hal-upec-upem.archives-ouvertes.fr/hal-00622763 | # Rational interpolation and basic hypergeometric series
Abstract : We give a Newton type rational interpolation formula (Theorem 2.2). It contains as a special case the original Newton interpolation, as well as the interpolation formula of Liu, which allows to recover many important classical q-series identities. We show in particular that some bibasic identities are a consequence of our formula.
Keywords :
Document type :
Journal articles
https://hal-upec-upem.archives-ouvertes.fr/hal-00622763
Contributor : Alain Lascoux <>
Submitted on : Monday, September 12, 2011 - 4:28:29 PM
Last modification on : Wednesday, February 26, 2020 - 7:06:05 PM
### Citation
Amy M. Fu, Alain Lascoux. Rational interpolation and basic hypergeometric series. Advances in Applied Mathematics, Elsevier, 2008, 41 (3), pp.452-458. ⟨10.1016/j.aam.2008.01.003⟩. ⟨hal-00622763⟩
Record views | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8045642375946045, "perplexity": 2454.88983189255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524858.74/warc/CC-MAIN-20210121132407-20210121162407-00013.warc.gz"} |
https://tpiezas.wordpress.com/2012/04/13/the-tremendous-tribonacci-constant/ | The Tremendous Tribonacci constant
The tribonacci constant is the real root of the cubic equation,
$T^3-T^2-T-1 = 0$
and is the limiting ratio of the tribonacci numbers = {0, 1, 1, 2, 4, 7, 13, 24, …} where each term is the sum of the previous three, analogous to the Fibonacci numbers. Let $d = 11$, then,
$T = \frac{1}{3} +\frac{1}{3}\big(19+3\sqrt{3d}\big)^{1/3} + \frac{1}{3}\big(19-3\sqrt{3d}\big)^{1/3} = 1.839286\dots$
We’ll see that the tribonacci constant is connected to the complete elliptic integral of the first kind $K(k_{11})$. But first, given the golden ratio’s infinite radical representation,
$\phi = \sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\dots}}}}$
then T also has the beautiful infinite radical,
$\frac{1}{T-1} = \sqrt[3]{\frac{1}{2}+\sqrt[3]{\frac{1}{2}+\sqrt[3]{\frac{1}{2}+\sqrt[3]{\frac{1}{2}+\dots}}}} = 1.191487\dots$
as well as a continued fraction,
$\big(\frac{T}{T+1}\big) \big(e^{\frac{\pi\sqrt{11}}{24}}\big) = 1 + \cfrac{q}{1-q + \cfrac{q^3-q^2}{1+\cfrac{q^5-q^3}{1+\cfrac{q^7-q^4}{1+\ddots}}}} = 0.9999701\dots$
where q is the negative real number,
$q = \frac{-1}{e^{\pi \sqrt{11}}}$
Recall that at elliptic singular values, the complete elliptic integral of the first kind K(k) satisfies the equation,
$\frac{K'(k_d)}{K(k_d)} = \sqrt{d}$
or, in the syntax of Mathematica,
$\frac{EllipticK[1-ModularLambda[\sqrt{-d}]]}{EllipticK[ModularLamda[\sqrt{-d}]]} = \sqrt{d}$
Interestingly, we can express both $k_{11} = 0.000477\dots$ and $K(k_{11}) = 1.57098\dots$ in terms of the tribonacci constant as,
$k_{11} = \frac{1}{4}\left(2-\sqrt{\frac{2v+7}{2v-7}}\,\right) = 0.000477\dots$
where,
$v = T+4$
and,
$K(k_{11}) = \left(\frac{T+1}{T}\right)^2\, \frac{1}{11^{1/4}\, (4\pi)^{2}} \, \Gamma(\tfrac{1}{11}) \Gamma(\tfrac{3}{11}) \Gamma(\tfrac{4}{11}) \Gamma(\tfrac{5}{11}) \Gamma(\tfrac{9}{11})$
where $\Gamma(n)$ is the gamma function, as well as the infinite series,
$K(k_{11}) = \left(\frac{T+1}{T}\right)^2 \frac{\pi}{32^{1/4}} \sqrt{\frac{1}{4} \sum_{n=0}^\infty \frac{(6n)!}{(3n)!n!^3} \,\frac{1}{(-32)^{3n}} }$
With a slight tweak of the formula, we instead get,
$\frac{1}{4\pi} = \frac{1}{32^{3/2}} \sum_{n=0}^\infty \frac{(6n)!}{(3n)!n!^3}\, \frac{154n+15}{(-32)^{3n}}$
Finally, saving the best for last, given the snub cube, an Archimedean solid,
then the Cartesian coordinates for its vertices are all the even and odd permutations of,
{± 1, ± 1/T, ±}
with an even and odd number of plus signs, respectively, similar to how, for the vertices of the dodecahedron — a Platonic solid — one can use the golden ratio.
For more about the tribonacci constant, and the equally fascinating plastic constant, kindly refer to “A Tale of Four Constants “.
2 responses to this post.
1. […] already written about the tribonacci constant before. But I want to include how Lin found that powers of t can be expressed in terms of those three […] | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 18, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9245389699935913, "perplexity": 763.0241920937979}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947957.81/warc/CC-MAIN-20180425193720-20180425213720-00112.warc.gz"} |
https://www.sarthaks.com/102304/consider-point-focal-point-convergent-another-convergent-short-focal-length-placed-other | # Consider a point at the focal point of a convergent lens. Another convergent lens of short focal length is placed on the other side.
3.1k views
in Physics
Consider a point at the focal point of a convergent lens. Another convergent lens of short focal length is placed on the other side. What is the nature of the wavefronts emerging from the final image?
by (63.6k points)
selected by
The nature of the wavefronts emerging from the final image is Spherical. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9216823577880859, "perplexity": 590.3024642406923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585183.47/warc/CC-MAIN-20211017210244-20211018000244-00089.warc.gz"} |
http://mathhelpforum.com/differential-geometry/120476-complex-integral-print.html | # Complex Integral
• December 14th 2009, 03:33 PM
canberra1454
Complex Integral
Show that
$\frac{1}{2 \pi} \int_{-\pi}^{\pi} \frac{1}{1-2r\cos \theta +r^2} d \theta = 1, 0 \leq r < 1$.
Right now, I don't see how to show this. I do see that the integrand is an even function if that helps at all. Also, since this is not from $-\infty$ to $\infty$, I do not know the method to proceed. I need a few hints on how to start.
• December 14th 2009, 05:12 PM
shawsend
Hi. If I use the $z=e^{it}$ substitution, then I get:
$\int_{-\pi}^{\pi}\frac{1}{1-2 r\cos(t)+r^2}dt=-i\oint \frac{dz}{(r-z)(rz-1)}=\frac{2\pi}{1-r^2},\quad |r|<1$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8925383687019348, "perplexity": 236.05387375874386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997858581.26/warc/CC-MAIN-20140722025738-00019-ip-10-33-131-23.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/forces-acting-during-a-gunshot.736339/ | # Forces acting during a gunshot
1. Feb 3, 2014
### thetexan
My understanding is this...
a. The force acting to recoil the mass of the rifle and the mass of the shooter combined is equal (give or take small other masses such as gas) to the force acting to push the bullet down the barrel.
b. This means that all of that force you feel in your shoulder is the same as that acting on the tip end of that bullet as it enters the target (minus loss of energy during flight).
c. Since the the expanding gas is what is driving the bullet down the barrel you would think, at first, that the longer the barrel the more opportunity the gas has to accelerate the bullet. However, there comes a point where, when considering the friction between the bullet and the barrel, the gas no longer can accelerate the bullet and at that point the friction becomes a slowing factor. Therefore the point where this occurs determines the ideal length of the barrel...any shorter and the gas escapes before it finishes it acceleration....any longer and the friction of the barrel begins to decelerate the bullet absent the push from the gas.
Am I thinking correctly?
thanks,
tex
2. Feb 3, 2014
### Staff: Mentor
Pretty much, yes.
The details can get complicated: The force on the bullet is equal to the cross-section area of the bullet times the pressure behind it; as the bullet moves forward the volume increases and the pressure decreases; heat is transferred to the barrel which lowers the pressure; the combustion of the propellant is not instantaneous so the pressure may continue to build even after the bullet starts moving; and so forth. But you've got the basic concept down.
3. Feb 3, 2014
### .Scott
Even with a frictionless barrel, there would come a point where lengthening the barrel would slow the bullet.
The bullet is compressing the air in the barrel in front of it while the pressure in the barrel behind it decreases. Given a long enough barrel, the bullet would begin to decelerate - and perhaps even reverse direction.
4. Feb 3, 2014
### thetexan
Yes. There will be some point where all of the factors come into equilibrium and after that point we're losing ground.
5. Feb 4, 2014
### A.T.
Right. If you want to build a super-cannon, you need multiple charges along the way (multi-charge gun):
http://en.wikipedia.org/wiki/V-3_cannon
6. Feb 4, 2014
### bahamagreen
Some of the force is spent imparting the spin of the bullet (rifling in the barrel).
Similar Discussions: Forces acting during a gunshot | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9141547679901123, "perplexity": 1050.5175007783837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815951.96/warc/CC-MAIN-20180224211727-20180224231727-00044.warc.gz"} |
https://searxiv.org/search?author=Christian%20Burkert | ### Results for "Christian Burkert"
total 12039took 0.14s
Tracking Users across the Web via TLS Session ResumptionOct 16 2018User tracking on the Internet can come in various forms, e.g., via cookies or by fingerprinting web browsers. A technique that got less attention so far is user tracking based on TLS and specifically based on the TLS session resumption mechanism. To the ... More
QUICker connection establishment with out-of-band validation tokensApr 12 2019May 03 2019QUIC is a secure transport protocol that improves the performance of HTTPS. An initial QUIC handshake that enforces a strict validation of the client's source address requires two round-trips. In this work, we extend QUIC's address validation mechanism ... More
The Effect of Gas Loss on the Formation of Bound Stellar ClustersJul 27 2000The effect of gas ejection on the structure and binding energy of newly formed stellar clusters is investigated. The star formation efficiency (SFE), necessary for forming a gravitationally bound stellar cluster, is determined. Two sets of numerical N-body ... More
Simulations of Direct Collisions of Gas Clouds with the Central Black HoleAug 07 2009Oct 27 2010We perform numerical simulations of clouds in the Galactic Centre (GC) engulfing the nuclear super-massive black hole and show that this mechanism leads to the formation of gaseous accretion discs with properties that are similar to the expected gaseous ... More
QUICker connection establishment with out-of-band validation tokensApr 12 2019QUIC is a secure transport protocol and aims to improve the performance of HTTPS traffic. It is a design goal of QUIC to reduce the delay overhead of its connection establishment. However, an initial handshake enforcing strict validation of the client's ... More
Enhanced Performance and Privacy for TLS over TCP Fast OpenMay 09 2019Small TCP flows make up the majority of web flows. For them, the TCP three-way handshake represents a significant delay overhead. The TCP Fast Open (TFO) protocol provides zero round-trip time (0-RTT) handshakes for subsequent TCP connections to the same ... More
Physics of the Galactic Center Cloud G2, on its Way towards the Super-Massive Black HoleJan 06 2012The origin, structure and evolution of the small gas cloud, G2, is investigated, that is on an orbit almost straight into the Galactic central supermassive black hole (SMBH). G2 is a sensitive probe of the hot accretion zone of Sgr A*, requiring gas temperatures ... More
Do dwarf spheroidal galaxies contain dark matter?Oct 24 1996The amount of dark matter in the four galactic dwarf spheroidals with large mass-to-light ratios is investigated. Sextans has a cut-off radius which is equal to the expected tidal radius, assuming a high mass-to-light ratio. This satellite very likely ... More
The Challenge of Modelling Galactic DisksDec 10 2008Dec 12 2008Detailed models of galactic disk formation and evolution require knowledge about the initial conditions under which disk galaxies form, the boundary conditions that affect their secular evolution and the micro-physical processes that drive the multi-phase ... More
Star Formation in Turbulent Molecular CloudsMay 17 2001Recent progress in the understanding of star formation is summarized. A consistent picture is emerging where molecular clouds form with turbulent velocity fields and clumpy substructure, imprinted already during their formation. The clouds are initially ... More
The structure of dark matter halos. Observation versus theoryMar 10 1997The rotation curves of dark matter dominated dwarf galaxies are analysed. The observations show that dark matter halos represent a one-parameter family with self similar density profiles. The global halo parameters, like total mass and scale length are ... More
Balance among gravitational instability, star formation, and accretion determines the structure and evolution of disk galaxiesMay 13 2013May 15 2013Over the past 10 Gyr, star-forming galaxies have changed dramatically, from clumpy and gas rich, to rather quiescent stellar-dominated disks with specific star formation rates lower by factors of a few tens. We present a general theoretical model for ... More
Self-Interacting Cold Dark Matter HalosDec 08 2000The evolution of halos consisting of weakly self-interacting dark matter particles is summarized. The halos initially contain a central density cusp as predicted by cosmological models. Weak self-interaction leads to the formation of an isothermal, low-density ... More
The structure and evolution of weakly self-interacting cold dark matter halosFeb 22 2000Apr 18 2000The evolution of halos consisting of weakly self-interacting dark matter particles is investigated using a new numerical Monte-Carlo N-body method. The halos initially contain kinematically cold, dense 1/r-power-law cores. For interaction cross sections ... More
Galactic Disk Formation and the Angular Momentum ProblemAug 10 2009Galactic disk formation requires knowledge about the initial conditions under which disk galaxies form, the boundary conditions that affect their secular evolution and the micro-physical processes that drive the multi-phase interstellar medium and regulate ... More
Stellar Feedback Processes: Their Impact on Star Formation and Galactic EvolutionApr 01 2004The conditions that lead to self-regulated star formation, star bursts and the formation of massive stellar clusters are discussed. Massive stars have a strong impact on their environment, especially on the evolution of dwarf galaxies which are the building ... More
The Formation of the Milky Way in the Cosmological ContextMay 17 2001The formation of the Milky Way is discussed within the context of the cold dark matter scenario. Several problems arise which can be solved if the Galaxy experienced an early phase of gas heating and decoupling from the dark matter substructure. This ... More
The geometry and origin of ultra-diffuse ghost galaxiesAug 31 2016Sep 02 2016The geometry and intrinsic ellipticity distribution of ultra diffuse galaxies (UDGs) is determined from the line-of-sight distribution of axial ratios q of a large sample of UDGs, detected by Koda et al. (2015) in the Coma cluster. With high significance ... More
The Structure of Dark Matter Haloes in Dwarf GalaxiesApr 12 1995Recent observations indicate that dark matter haloes have flat central density profiles. Cosmological simulations with non-baryonic dark matter predict however self similar haloes with central density cusps. This contradiction has lead to the conclusion ... More
On the Formation of Elliptical GalaxiesMar 07 1994It is shown that the violent relaxation of dissipationless stellar systems leads to universal de Vaucouleurs profiles only outside 1.5 effective radii $R_e$. Inside $1.5 R_e$ the surface density profiles depend strongly on the initial conditions and are ... More
The Cosmological Angular Momentum Problem of Low-Mass Disk GalaxiesJul 05 2000The rotational properties of the visible and dark components of low-mass disk galaxies (vrot<=100 km/s) are investigated using the Swaters sample. The rotational parameter lambda'=lambda_DM*(j_d/m_d) is determined, where lambda_DM is the dark halo spin ... More
The Structure and Dark Halo Core Properties of Dwarf Spheroidal GalaxiesJan 26 2015Jun 22 2015The structure and dark matter halo core properties of dwarf spheroidal galaxies (dSphs) are investigated. A double-isothermal model of an isothermal stellar system, embedded in an isothermal dark halo core provides an excellent fit to the various observed ... More
The Turbulent Interstellar MediumMay 03 2006An overview is presented of the main properties of the interstellar medium. Evidence is summarized that the interstellar medium is highly turbulent, driven on different length scales by various energetic processes. Large-scale turbulence determines the ... More
Recent results on the nucleon resonance spectrum and structure from the CLAS detectorAug 17 2015The CLAS detector at Jefferson Lab has provided the dominant part of all available worldwide data on exclusive meson electroproduction off protons in the resonance region. New results on the $\gamma_{v}pN^*$ transition amplitudes (electrocouplings) are ... More
Thermal Quantum Fields without Cut-offs in 1+1 Space-time DimensionsMar 26 2004We construct interacting quantum fields in 1+1 dimensional Minkowski space, representing neutral scalar bosons at positive temperature. Our work is based on prior work by Klein and Landau and Hoegh-Krohn
On the relativistic KMS condition for the P(φ)_2 modelSep 29 2006The relativistic KMS condition introduced by Bros and Buchholz provides a link between quantum statistical mechanics and quantum field theory. We show that for the $P(\phi)_2$ model at positive temperature, the two point function for fields satisfies ... More
Friction force: from mechanics to thermodynamicsNov 17 2009Jul 01 2010We study some mechanical problems in which a friction force is acting on the system. Using the fundamental concepts of state, time evolution and energy conservation we explain how to extend Newtonian mechanics to thermodynamics. We arrive at the two laws ... More
Duck Traps: Two-dimensional Critical Manifolds in Planar SystemsAug 30 2018Nov 05 2018In this work we consider two-dimensional critical manifolds in planar fast-slow systems near fold and so-called canard (=duck') points. These higher-dimension, and lower-codimension, situation is directly motivated by the case of hysteresis operators ... More
Relaxed Logarithmic Barrier Function Based Model Predictive Control of Linear SystemsMar 11 2015In this paper, we investigate the use of relaxed logarithmic barrier functions in the context of linear model predictive control. We present results that allow to guarantee asymptotic stability of the corresponding closed-loop system, and discuss further ... More
A stabilizing iteration scheme for model predictive control based on relaxed barrier functionsMar 15 2016Apr 06 2016We propose and analyze a stabilizing iteration scheme for the algorithmic implementation of model predictive control for linear discrete-time systems. Polytopic input and state constraints are considered and handled by means of so-called relaxed logarithmic ... More
Differential Characters and Geometric ChainsMar 26 2013Apr 09 2013We study Cheeger-Simons differential characters and provide geometric descriptions of the ring structure and of the fiber integration map. The uniqueness of differential cohomology (up to unique natural transformation) is proved by deriving an explicit ... More
Distances and large deviations in the spatial preferential attachment modelSep 26 2018Sep 27 2018We investigate two asymptotic properties of a spatial preferential-attachment model introduced by E. Jacob and P. M\"orters (2013). First, in a regime of strong linear reinforcement, we show that typical distances are at most of doubly-logarithmic order. ... More
On the Vacuum Polarization Density Caused by an External FieldJul 01 2003Feb 11 2004We consider an external potential, $-\lambda \phi$, due to one or more nuclei. Following the Dirac picture such a potential polarizes the vacuum. The polarization density as derived in physics literature, after a well known renormalization procedure, ... More
Quantum field theory meets Hopf algebraNov 14 2006Sep 11 2010This paper provides a primer in quantum field theory (QFT) based on Hopf algebra and describes new Hopf algebraic constructions inspired by QFT concepts. The following QFT concepts are introduced: chronological products, S-matrix, Feynman diagrams, connected ... More
A differential identity for Green functionsFeb 15 2006If P is a differential operator with constant coefficients, an identity is derived to calculate the action of exp(P) on the product of two functions. In many-body theory, P describes the interaction Hamiltonian and the identity yields a hierarchy of Green ... More
Quantum groups and interacting quantum fieldsAug 19 2002If C is a cocommutative coalgebra, a bialgebra structure can be given to the symmetric algebra S(C). The symmetric product is twisted by a Laplace pairing and the twisted product of any number of elements of S(C) is calculated explicitly. This is used ... More
Continuous-Variable Quantum Key Distribution with Entanglement in the MiddleMay 07 2012We analyze the performance of continuous-variable quantum key distribution protocols where the entangled source originates not from one of the trusted parties, Alice or Bob, but from the malicious eavesdropper in the middle. This is in contrast to the ... More
Tight Running Time Lower Bounds for Vertex Deletion ProblemsNov 17 2015May 17 2016For a graph class $\Pi$, the $\Pi$-Vertex Deletion problem has as input an undirected graph $G=(V,E)$ and an integer $k$ and asks whether there is a set of at most $k$ vertices that can be deleted from $G$ such that the resulting graph is a member of ... More
A quantum-information-theoretic complement to a general-relativistic implementation of a beyond-Turing computerMay 21 2014Jun 11 2014There exists a growing literature on the so-called physical Church-Turing thesis in a relativistic spacetime setting. The physical Church-Turing thesis is the conjecture that no computing device that is physically realizable (even in principle) can exceed ... More
Vacuum Polarisation Tensors in Constant Electromagnetic Fields: Part IJan 27 2000Dec 29 2000The string-inspired technique is used for the calculation of vacuum polarisation tensors in constant electromagnetic fields. In the first part of this series, we give a detailed exposition of the method for the case of the QED one-loop N-photon amplitude ... More
Blue Stragglers in Globular Clusters: Observations, Statistics and PhysicsJun 13 2014This chapter explores how we might use the observed {\em statistics} of blue stragglers in globular clusters to shed light on their formation. This means we will touch on topics also discussed elsewhere in this book, such as the discovery and implications ... More
Dualité onde-corpuscule formée par une masselotte oscillante dans un milieu élastique : étude théorique et similitudes quantiquesSep 29 2016We introduce a dual wave-particle macroscopic system, where a bead oscillator oscillates in an elastic media which obeys the Klein-Gordon equation. This theoretical system comes mainly from bouncing drops experiments and also a sliding bead on a vibrating ... More
On the geometry of metric measure spaces with variable curvature boundsJun 10 2015Sep 09 2015Motivated by a classical comparison result of J. C. F. Sturm we introduce a curvature-dimension condition CD(k,N) for general metric measure spaces and variable lower curvature bound k. In the case of non-zero constant lower curvature our approach coincides ... More
On the logarithm of the characteristic polynomial of the Ginibre ensembleJul 30 2015We prove a slightly sharper version of a result of Rider and Vir\'ag who proved that after centering, the logarithm of the absolute value of the characteristic polynomial of the Ginibre ensemble converges in law to the Gaussian Free Field on the unit ... More
Maximum Matching in Turnstile StreamsMay 06 2015We consider the unweighted bipartite maximum matching problem in the one-pass turnstile streaming model where the input stream consists of edge insertions and deletions. In the insertion-only model, a one-pass $2$-approximation streaming algorithm can ... More
Cosmology and gravitational waves in the Nordstrom-Vlasov system, a laboratory for Dark EnergyJan 24 2013We discuss a cosmological solution of the system which was originally introduced by Calogero and is today popularly known as "Nordstrom-Vlasov system". Although the model is un-physical, its cosmological solution results interesting for the same reasons ... More
Effective temperature for black holesJul 26 2011Jul 28 2011The physical interpretation of black hole's quasinormal modes is fundamental for realizing unitary quantum gravity theory as black holes are considered theoretical laboratories for testing models of such an ultimate theory and their quasinormal modes ... More
A precise response function for the magnetic component of Gravitational Waves in Scalar-Tensor GravityFeb 03 2011Feb 04 2011The important issue of the magnetic component of gravitational waves (GWs) has been considered in various papers in the literature. From such analyses, it resulted that such a magnetic component becomes particularly important in the high frequency portion ... More
A longitudinal component in massive gravitational waves arising from a bimetric theory of gravityNov 06 2008After a brief review of the work of de Paula, Miranda and Marinho on massive gravitational waves arising from a bimetric theory of gravity, in this paper it is shown that the presence of the mass generates a longi- tudinal component in a particular polarization ... More
Analysis of the transverse effect of Einstein's gravitational wavesJul 13 2007The investigation of the transverse effect of gravitational waves (GWs) could constitute a further tool to discriminate among several relativistic theories of gravity on the ground. After a review of the TT gauge, the transverse effect of GWs arising ... More
Interpretation of Mössbauer experiment in a rotating system: a new proof for general relativityFeb 14 2015A historical experiment by K\"undig on the transverse Doppler shift in a rotating system measured with the M\"ossbauer effect has been recently first re-analyzed and then replied [1,2]. The results have shown that a correct re-processing of K\"undig's ... More
Black hole quantum spectrumOct 26 2012Nov 19 2013Introducing a black hole (BH) effective temperature, which takes into account both the non-strictly thermal character of Hawking radiation and the countable behavior of emissions of subsequent Hawking quanta, we recently re-analysed BH quasi-normal modes ... More
Massive relic gravitational waves from f(R) theories of gravity: production and potential detectionJul 23 2010The production of a stochastic background of relic gravitational waves is well known in various works in the literature, where, by using the so called adiabatically-amplified zero-point fluctuations process, it has been shown how the standard inflationary ... More
A review of the stochastic background of gravitational waves in f(R) gravity with WMAP constrainsJan 09 2009Jan 15 2009This paper is a review of previous works on the stochastic background of gravitational waves (SBGWs) which has been discussed in various peer-reviewed journals and international conferences. The SBGWs is analyzed with the aid of the Wilkinson Microwave ... More
A non-geodesic motion in the R^-1 theory of gravity tuned with observationsJan 01 2008Jan 11 2008In the general picture of high order theories of gravity, recently, the R^-1 theory has been analyzed in two different frameworks. In this letter a third context is added, considering an explicit coupling between the R^-1 function of the Ricci scalar ... More
The production of matter from curvature in a particular linearized high order theory of gravity and the longitudinal response function of interferometersMar 26 2007The strict analogy between scalar-tensor theories of gravity and high order gravity is well known in literature. In this paper it is shown that, from a particular high order gravity theory known in literature, it is possible to produce, in the linearized ... More
Scattering theory for Klein-Gordon equations with non-positive energyJan 11 2011Sep 09 2011We study the scattering theory for charged Klein-Gordon equations: $\{{array}{l} (\p_{t}- \i v(x))^{2}\phi(t,x) \epsilon^{2}(x, D_{x})\phi(t,x)=0,[2mm] \phi(0, x)= f_{0}, [2mm] \i^{-1} \p_{t}\phi(0, x)= f_{1}, {array}.$ where: \[\epsilon^{2}(x, D_{x})= ... More
Spectral and scattering theory of charged $P(\varphi)_2$ modelsJun 30 2009We consider in this paper space-cutoff charged $P(\varphi)_{2}$ models arising from the quantization of the non-linear charged Klein-Gordon equation: \[ (\p_{t}+\i V(x))^{2}\phi(t, x)+ (-\Delta_{x}+ m^{2})\phi(t,x)+ g(x)\p_{\overline{z}}P(\phi(t,x), \overline{\phi}(t,x))=0, ... More
k-Means Clustering Is Matrix FactorizationDec 23 2015We show that the objective function of conventional k-means clustering can be expressed as the Frobenius norm of the difference of a data matrix and a low rank approximation of that data matrix. In short, we show that k-means clustering is a matrix factorization ... More
Galilean IsometriesMar 09 2009We introduce three nested Lie algebras of infinitesimal isometries' of a Galilei space-time structure which play the r\^ole of the algebra of Killing vector fields of a relativistic Lorentz space-time. Non trivial extensions of these Lie algebras arise ... More
The Mössbauer rotor experiment and the general theory of relativityFeb 12 2016Feb 16 2016This paper is a rebuttal to Eur. Phys. Jour. Plus 130, 191 (2015), which claims that the results in arXiv:1502.04911 (Ann. Phys. 355, 360 (2015)) are incorrect. For this reason, some of the results in arXiv:1502.04911 have been reviewed and clarified. ... More
A quantitative theory for the continuity equationFeb 09 2016Mar 24 2016In this work, we provide stability estimates for the continuity equation with Sobolev vector fields. The results are inferred from contraction estimates for certain logarithmic Kantorovich--Rubinstein distances. As a by-product, we obtain a new proof ... More
Magnetically driven outflows from Jovian circum-planetaryaccretion disksOct 01 2003We discuss the possibility to launch outflows from the close vicinity of a protoplanetary core considering a scenario where the protoplanet surrounded by a circum-planetary accretion disk is located in a circum-stellar disk. For the circum-planetary disk ... More
Moments and Classification for Conjugation-Invariant Rotations and Fake Uniformity in the Stochastic Radon TransformMar 06 2012We consider a generalisation of the stochastic Radon transform, introduced for an inverse problem in tomography by Panaretos. Specifically, we allow the distribution of the three-dimensional rotation in the statistical model of that work to be different ... More
Varieties of *-regular ringsApr 09 2019Apr 11 2019Given a subdirectly irreducible *-regular ring R, we show that R is a homomorphic image of a regular *-subring of an ultraproduct of the (simple) eRe, e in the minimal ideal of R; moreover, R (with unit) is directly finite if all eRe are unit-regular. ... More
Quantum correlations are weaved by the spinors of the Euclidean primitivesMay 30 2018The exceptional Lie group E8 plays a prominent role in both mathematics and theoretical physics. It is the largest symmetry group associated with the most general possible normed division algebra, namely, that of the non-associative real octonions, which ... More
On a Surprising Oversight by John S. Bell in the Proof of his Famous TheoremApr 03 2017Nov 11 2018Bell inequalities are usually derived by assuming locality and realism, and therefore experimental violations of Bell inequalities are usually taken to imply violations of either locality or realism, or both. But, after reviewing an oversight by Bell, ... More
Refutation of Richard Gill's Argument Against my Disproof of Bell's TheoremMar 12 2012Mar 06 2017I identify a number of errors in Richard Gill's purported refutation (arXiv:1203.1504) of my disproof of Bell's theorem. In particular, I point out that his central argument is based, not only on a rather trivial misreading of my counterexample to Bell's ... More
Restoring Local Causality and Objective Reality to the Entangled PhotonsJun 03 2011May 02 2012Unlike our basic theories of space and time, quantum mechanics is not a locally causal theory. Moreover, it is widely believed that any hopes of restoring local causality within a realistic theory have been undermined by Bell's theorem and its supporting ... More
Disproofs of Bell, GHZ, and Hardy Type Theorems and the Illusion of EntanglementApr 28 2009Oct 24 2010An elementary topological error in Bell's representation of the EPR elements of reality is identified. Once recognized, it leads to a topologically correct local-realistic framework that provides exact, deterministic, and local underpinning of at least ... More
Disproof of Bell's Theorem: Reply to CriticsMar 26 2007Jan 03 2008This is a collection of my responses to the criticisms of my argument against the impossibility proof of John Bell, which aims to undermine any conceivable local realistic completion of quantum mechanics. I plan to periodically update this preprint instead ... More
Testing Gravity-Driven Collapse of the Wavefunction via Cosmogenic NeutrinosMar 01 2005Oct 12 2005It is pointed out that the Diosi-Penrose ansatz for gravity-induced quantum state reduction can be tested by observing oscillations in the flavor ratios of neutrinos originated at cosmological distances. Since such a test would be almost free of environmental ... More
Why the Quantum Must Yield to GravityOct 26 1998Mar 08 1999After providing an extensive overview of the conceptual elements -- such as Einstein's `hole argument' -- that underpin Penrose's proposal for gravitationally induced quantum state reduction, the proposal is constructively criticised. Penrose has suggested ... More
Disproof of Bell's TheoremMar 09 2011Oct 15 2015We illustrate an explicit counterexample to Bell's theorem by constructing a pair of spin variables in S^3 that exactly reproduces the EPR-Bohm correlation in a manifestly local-realistic manner.
Disproof of Bell's Theorem by Clifford Algebra Valued Local VariablesMar 20 2007Apr 22 2010It is shown that Bell's theorem fails for the Clifford algebra valued local realistic variables. This is made evident by exactly reproducing quantum mechanical expectation value for the EPR-Bohm type spin correlations observable by means of a local, deterministic, ... More
Rectangular Statistical Cartograms in R: The recmap PackageJun 01 2016Mar 24 2017Cartogram drawing is a technique for showing geography-related statistical information, such as demographic and epidemiological data. The idea is to distort a map by resizing its regions according to a statistical parameter by keeping the map recognizable. ... More
Towards an Algebraic Theory of Analogical Reasoning in Logic ProgrammingSep 26 2018Analogy-making is an essential part of human intelligence and creativity. This paper proposes an algebraic model of analogical reasoning in logic programming based on the syntactic composition and decomposition of programs. The main idea is to define ... More
Golomb's conjecture on prime gapsApr 23 2016Question 10208b (1992) of the American Mathematical Monthly asked: does there exist an increasing sequence $\{a_k\}$ of positive integers and a constant $B > 0$ having the property that $\{ a_k + n\}$ contains no more than $B$ primes for every integer ... More
Recursive Numerical Evaluation of the Cumulative Bivariate Normal DistributionApr 21 2010We propose an algorithm for evaluation of the cumulative bivariate normal distribution, building upon Marsaglia's ideas for evaluation of the cumulative univariate normal distribution. The algorithm is mathematically transparent, delivers competitive ... More
Constructive homotopy theory of marked semisimplicial setsSep 28 2018We develop the homotopy theory of semisimplicial sets constructively and without reference to point-set topology to obtain a constructive model for $\omega$-groupoids. Most of the development is folklore, but for a few results the author is unaware of ... More
Immersions of surfaces in almost complex 4-manifoldsAug 31 2000In this note, we investigate the relation between double points and complex points of immersed surfaces in almost-complex 4-manifolds and show how estimates for the minimal genus of embedded surfaces lead to inequalities between the number of double points ... More
Modularity experiments on $S_4$-symmetric double octicsOct 08 2018We will invest quite some computer power to find double octic threefolds that are connected to weight four modular forms.
Exponentials form a basis of discrete holomorphic functionsOct 08 2002We show that discrete exponentials form a basis of discrete holomorphic functions. On a convex, the discrete polynomials form a basis as well.
Normal crossing singularities and Hodge theory over Artin ringsApr 30 2012We develop a Hodge theory for relative simple normal crossing varieties over an Artinian base scheme. We introduce the notion of a mixed Hodge structure over an Artin ring, which axiomatizes the structure that is found on the cohomology of such a variety. ... More
The Saxl Conjecture and the Dominance OrderOct 24 2014May 05 2015In 2012 Jan Saxl conjectured that all irreducible representations of the symmetric group occur in the decomposition of the tensor square of the irreducible representation corresponding to the staircase partition. We make progress on this conjecture by ... More
Geometrically formal 4-manifolds with nonnegative sectional curvatureDec 06 2012Jan 31 2015A Riemannian manifold is called geometrically formal if the wedge product of any two harmonic forms is again harmonic. We classify geometrically formal compact 4-manifolds with nonnegative sectional curvature. If the sectional curvature is strictly positive, ... More
Uniruled Surfaces of General TypeAug 24 2006Nov 06 2006We give a systematic construction of uniruled surfaces in positive characteristic. Using this construction, we find surfaces of general type with non-trivial vector fields, surfaces with arbitrarily non-reduced Picard schemes as well as surfaces with ... More
Non-classical Godeaux SurfacesApr 21 2008Aug 25 2008A non-classical Godeaux surface is a minimal surface of general type with $\chi=K^2=1$ but with $h^{01}\neq0$. We prove that such surfaces fulfill $h^{01}=1$ and they can exist only over fields of positive characteristic at most 5. Like non-classical ... More
Algebraic Surfaces of General Type with Small c_1^2 in Positive CharacteristicFeb 19 2007Oct 26 2007We establish Noether's inequality for surfaces of general type in positive characteristic.Then we extend Enriques' and Horikawa's classification of surfaces on the Noether line, the so-called Horikawa surfaces. We construct examples for all possible numerical ... More
A counterexample for a problem on quasi Baer modulesMay 08 2016Mar 13 2017In this note we answer two questions on quasi-Baer modules raised by Lee and Rizvi in J.Algebra (2016).
The Canonical Map and Horikawa Surfaces in Positive CharacteristicApr 10 2010Dec 20 2011We extend fundamental inequalities related to the canonical map of surfaces of general type to positive characteristic. Next, we classify surfaces on the Noether lines, i.e., even and odd Horikawa surfaces, in positive characteristic. We describe their ... More
On a new collection of words in the Catalan familyApr 07 2014May 23 2014In this note, we provide a bijection between a new collection of words on nonnegative integers of length n and Dyck paths of length 2n-2, thus proving that this collection belongs to the Catalan family. The surprising key step in this bijection is the ... More
Modules Whose Small Submodules Have Krull DimensionJul 21 1998The main aim of this paper is to show that an AB5*-module whose small submodules have Krull dimension has a radical having Krull dimension. The proof uses the notion of dual Goldie dimension.
On the base locus of the linear system of generalized theta functionsJul 23 2007Apr 14 2008Let $\cM_r$ denote the moduli space of semi-stable rank-$r$ vector bundles with trivial determinant over a smooth projective curve $C$ of genus $g$. In this paper we study the base locus $\cB_r \subset \cM_r$ of the linear system of the determinant line ... More
Fake uniformity in a shape inversion formulaMar 06 2012Dec 29 2017We revisit a shape inversion formula derived by Panaretos in the context of a particle density estimation problem with unknown rotation of the particle. A distribution is presented which imitates, or 'fakes', the uniformity or Haar distribution that is ... More
Simple groups of birational transformations in dimension twoFeb 26 2018We classify simple groups that act by birational transformations on compact complex K\"ahler surfaces. Moreover, we show that every finitely generated simple group that acts non-trivially by birational transformations on a projective surface over an arbitrary ... More
emgr - The Empirical Gramian FrameworkNov 02 2016May 28 2018System Gramian matrices are a well-known encoding for properties of input-output systems such as controllability, observability or minimality. These so-called system Gramians were developed in linear system theory for applications such as model order ... More
Reflexivity of Newton-Okounkov bodies of partial flag varietiesFeb 19 2019Assume that the valuation semigroup $\Gamma(\lambda)$ of an arbitrary partial flag variety corresponding to the line bundle $\mathcal L_\lambda$ constructed via a full-rank valuation is finitely generated and saturated. We use Ehrhart theory to prove ... More
There is a $3\times3$ Magic Square of Squares on the Moon - A Lot of Them, ActuallyNov 09 2018Nov 19 2018Magic squares have been well explored here on Earth [1], but there appears to have been little-to-no examination of $\textit{lunar}$ magic squares. Some kinds of magic squares exist on the moon (i.e. under lunar arithmetic) but not on Earth (i.e. under ... More
Theory status of hadronic top-quark pair productionOct 11 2018The status of theoretical predictions for top-quark pair production at hadron colliders is reviewed, focusing on the total cross section, differential distributions, and the description of top-quark production and decay including off-shell effects. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8624364137649536, "perplexity": 1724.7755943762463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997508.21/warc/CC-MAIN-20190616002634-20190616024634-00111.warc.gz"} |
https://www.physicsforums.com/threads/momentum-after-inelastic-collison.843861/ | # Momentum After Inelastic Collison
Tags:
1. Nov 18, 2015
### NatalieWise123
1. The problem statement, all variables and given/known data
Two hockey pucks are sliding across the ice in the same direction. One has a momentum of 35 kg*m/s. The other has a momentum of 7 kg*m/s. After the collision, the pucks stick together. What is the momentum of the pucks after the collision?
2. Relevant equations
3. The attempt at a solution
I'd think you'd just add them together because of the Law of Conservation of Momentum but for some reason I'd imagine they'd slow down after hitting. Is 42 kg*m/s the right answer and if so why?
2. Nov 18, 2015
### J Hann
Even though energy is not conserved in an in an inelastic collision,
momentum is conserved.
So write an equation for conservation of momentum and solve for the final speed.
Remember, you now have a final mass of 2 m.
Draft saved Draft deleted
Similar Discussions: Momentum After Inelastic Collison | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9731006026268005, "perplexity": 1340.718009052725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824894.98/warc/CC-MAIN-20171021190701-20171021210701-00537.warc.gz"} |
http://www.mathematicalgemstones.com/gemstones/pearl/deriving-the-discriminant-of-a-cubic-polynomial-through-analytic-geometric-means/ | # Deriving the discriminant of a cubic polynomial through analytic geometric means
This is a contributed gemstone, written by Sushant Vijayan. Enjoy!
Consider a generic cubic equation $$at^3+bt^2+ct+d=0,$$ where $a,b,c,d$ are real numbers and $a \neq 0$. Now we can transform this equation into a depressed cubic equation, i.e., one with no $t^2$ term, through means of Tschirnhaus Transformation $t=x-\frac{b}{3a}$, followed by dividing through by $a$. The depressed cubic equation is given by $$x^3+px+q=0$$ where $p$ and $q$ are related to $a,b,c,d$ by the relation given here. Setting $p=-m$ and $q=-n$ and rearranging we arrive at $$x^3 =mx+n \hspace{3cm} (1)$$ We will investigate the nature of roots for this equation. We begin with plotting out the graph of $y=x^3$:
It is an odd, monotonic, nondecreasing function with an inflection point at $x=0$. The real roots of the equation (1) are the $x$-coordinates of the points of intersection between the straight line $y= mx+n$ and the curve $y=x^3$. It is clear geometrically that however we draw the straight line, as a bare minimum, there would be at least one point of intersection. This corresponds to the fact that all cubic equations with real coefficients have at least one real root.
It is immediately clear that if the slope of the line $m$ is less than $0$, there is only one point of intersection and hence only one real root. On the other hand, the condition $m>0$ is equivalent to demanding that the depressed cubic $y(x)= x^3-mx-n$ has two points of local extrema (which is a necessary, but not sufficient, condition for the existence of three real roots).
Now the possibility of repeated real roots occur when the straight line is a tangent to the curve $y=x^3$. Hence consider the slope of the function $y=x^3$:
$$\frac{dy}{dx}=3x^2$$ Now equating the slopes (note $m\ge 0$) we get the tangent points:
$$3x^2=m$$
$$x=\pm \sqrt{\frac {m}{3} }$$
Equivalently, the tangents are at the two values of $x$ for which
$$|x|=\sqrt{\frac {m}{3} }.$$
The corresponding $y$-intercepts for these tangent straight lines are:
$$n=\pm \frac{2m}{3}\sqrt{\frac{m}{3}}$$
or, in other words,
$$|n|=\frac{2m}{3}\sqrt{\frac{m}{3}}$$
Thus for a straight line with a given slope $m\ge 0$ there are only two tangents with corresponding tangent points and $y$-intercepts. This would be the case of equation (1) having one real and one real repeated root.
What about the case where the slope is still $m$ but $|n|<\frac{2m}{3}\sqrt{\frac{m}{3}}$?
In this case the straight line is parallel to the two tangent lines (since same slope) and in the region bounded by the two tangent lines. Hence it would necessarily intersect the curve $y=x^3$ at three points. This corresponds to the situation of equation (1) having three real roots.
And the case where $|n| > \frac{2m}{3}\sqrt{\frac{m}{3}}$ corresponds to the area outside the bounded region of the two tangent lines and has only one point of intersection.
Hence the necessary and sufficient condition for three real roots (including repeated roots) is given by:
$$|n| \le \frac{2m}{3}\sqrt{\frac{m}{3}} \hspace{3cm}(2)$$
We note that the condition $m\ge 0$ is subsumed within the above condition (2), for If $m<0$ then the condition (2) cannot be satisfied. We note that condition (2) is not a radical expression. We square on both sides and rearrange to arrive at: $$\frac{4m^3}{27}-n^2 \ge 0$$ Multiply by 27 and set $\bigtriangleup=4m^3-27n^2$ to get $$\bigtriangleup \ge 0$$ The $\bigtriangleup$ is the discriminant of the cubic and has all the information required to determine the nature of the roots of the depressed cubic given by equation (1). This may then be written in terms of $a,b,c,d$ by inverting the Tschirnhaus transformation. A similar exercise may be carried out for the quartic equation, and a similar but albeit more complicated expression can be derived for the discriminant. It would be very interesting to do the same exercise for the quintic and see where it fails (fail it must, otherwise it would contradict Abel's famous result of insolvability of quintic equations using radical expressions). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9359111189842224, "perplexity": 125.39071792553635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806586.6/warc/CC-MAIN-20171122122605-20171122142605-00000.warc.gz"} |
http://math.stackexchange.com/questions/202013/please-check-working-given-rsa-encoding-function-e-x-to-x11-pmod3737 | # (Please check working) Given RSA encoding function $E: x\to x^{11} \pmod{3737}$ find the decoding function $D$
Question:
Given RSA encoding function $E: x\to x^{11} \pmod{3737}$ find the decoding function $D$
My working:
$\phi(3737) = \phi(37) \times \phi(101)$
$= 36 \times 100 = 3600$
Using euclid's algorithm (Skipping the fairly long gory details) we show:
$1 = 4\times3600 - 1309 \times 11$
Taking the coefficient of 11 (power of $x$), we get $-1309$ but we want this in positive congruent mod $3600$ so $3600-1309 = 2291$ and finally we have
$D:x\to x^{2291}\pmod{3737}$
-
Seems fine. Note that once you have (maybe) found the decoding number, you can check it easily: We have $(11)(2291)=25201$, which is easily congruent to $1$ modulo $3600$. – André Nicolas Sep 25 '12 at 5:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9720859527587891, "perplexity": 855.0910386894074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00057-ip-10-164-35-72.ec2.internal.warc.gz"} |
http://link.springer.com/article/10.1023%2FA%3A1017569601150 | Journal of Philosophical Logic
, Volume 30, Issue 3, pp 259–265
# No Future
• Leon Horsten
• Hannes Leitgeb
Article
DOI: 10.1023/A:1017569601150
Cite this article as:
Horsten, L. & Leitgeb, H. Journal of Philosophical Logic (2001) 30: 259. doi:10.1023/A:1017569601150
## Abstract
The difficulties with formalizing the intensional notions necessity, knowability and omniscience, and rational belief are well-known. If these notions are formalized as predicates applying to (codes of) sentences, then from apparently weak and uncontroversial logical principles governing these notions, outright contradictions can be derived. Tense logic is one of the best understood and most extensively developed branches of intensional logic. In tense logic, the temporal notions future and past are formalized as sentential operators rather than as predicates. The question therefore arises whether the notions that are investigated in tense logic can be consistently formalized as predicates. In this paper it is shown that the answer to this question is negative. The logical treatment of the notions of future and past as predicates gives rise to paradoxes due the specific interplay between both notions. For this reason, the tense paradoxes that will be presented are not identical to the paradoxes referred to above.
tense logictense predicatesdiagonalizationparadox
## Copyright information
© Kluwer Academic Publishers 2001
## Authors and Affiliations
• Leon Horsten
• 1
• Hannes Leitgeb
• 2
1. 1.Centrum LogicaLeuvenBelgium
2. 2.Dept. of PhilosophyUniversity of Salzburg FranziskanergasseSalzburgAustria | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8426549434661865, "perplexity": 2775.4042029421284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720468.71/warc/CC-MAIN-20161020183840-00165-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://en.wikipedia.org/wiki/Point_elasticity | # Elasticity of a function
(Redirected from Point elasticity)
In mathematics, the elasticity or point elasticity of a positive differentiable function f of a positive variable (positive input, positive output)[1] at point a is defined as[2]
$Ef(a) = \frac{a}{f(a)}f'(a)$
$=\lim_{x\to a}\frac{f(x)-f(a)}{x-a}\frac{a}{f(a)}=\lim_{x\to a}\frac{f(x)-f(a)}{f(a)}\frac{a}{x-a}=\lim_{x\to a}\frac{1- \frac{f(x)}{f(a)}}{1-\frac{x}{a}}\approx \frac{\%\Delta f(a)}{\%\Delta a}$
or equivalently
$Ef(x) = \frac{d \log f(x)}{d \log x}.$
It is thus the ratio of the relative (percentage) change in the function's output $f(x)$ with respect to the relative change in its input $x$, for infinitesimal changes from a point $(a, f(a))$. Equivalently, it is the ratio of the infinitesimal change of the logarithm of a function with respect to the infinitesimal change of the logarithm of the argument.
The elasticity of a function is a constant $\alpha$ if and only if the function has the form $f(x) = C x ^ \alpha$ for a constant $C>0$.
The elasticity at a point is the limit of the arc elasticity between two points as the separation between those two points approaches zero.
The concept of elasticity is widely used in economics; see elasticity (economics) for details.
## Rules
Rules for finding the elasticity of products and quotients are simpler than those for derivatives. Let f, g be differentiable. Then[2]
$E ( f(x) \cdot g(x) ) = E f(x) + E g(x)$
$E \frac{f(x)}{g(x)} = E f(x) - E g(x)$
$E ( f(x) + g(x) ) = \frac{f(x) \cdot E(f(x)) + g(x) \cdot E(g(x))}{f(x) + g(x)}$
$E ( f(x) - g(x) ) = \frac{f(x) \cdot E(f(x)) - g(x) \cdot E(g(x))}{f(x) - g(x)}$
The derivative can be expressed in terms of elasticity as
$D f(x) = \frac{E f(x) \cdot f(x)}{x}$
Let a and b be constants. Then
$E ( a ) = 0 \$
$E ( a \cdot f(x) ) = E f(x)$,
$E (b x^a) = a \$.
## Estimating point elasticities
In economics, the price elasticity of demand refers to the elasticity of a demand function Q(P), and can be expressed as (dQ/dP)/(Q(P)/P) or the ratio of the value of the marginal function (dQ/dP) to the value of the average function (Q(P)/P). This relationship provides an easy way of determining whether a demand curve is elastic or inelastic at a particular point. First, suppose one follows the usual convention in mathematics of plotting the independent variable (P) horizontally and the dependent variable (Q) vertically. Then the slope of a line tangent to the curve at that point is the value of the marginal function at that point. The slope of a ray drawn from the origin through the point is the value of the average function. If the absolute value of the slope of the tangent is greater than the slope of the ray then the function is elastic at the point; if the slope of the secant is greater than the absolute value of the slope of the tangent then the curve is inelastic at the point.[3] If the tangent line is extended to the horizontal axis the problem is simply a matter of comparing angles formed by the lines and the horizontal axis. If the marginal angle is greater than the average angle then the function is elastic at the point; if the marginal angle is less than the average angle then the function is inelastic at that point. If, however, one follows the convention adopted by economists and plots the independent variable P on the vertical axis and the dependent variable Q on the horizontal axis, then the opposite rules would apply.
The same graphical procedure can also be applied to a supply function or other functions.
## Semi-elasticity
A semi-elasticity (or semielasticity) gives the percentage change in f(x) in terms of a change (not percentage-wise) of x. Algebraically, the semi-elasticity S of a function f at point x is [4][5]
$Sf(x) = \frac{1}{f(x)}f'(x) = \frac{d \ln f(x)}{d x}$
An example of semi-elasticity is modified duration in bond trading.
The term "semi-elasticity" is also sometimes used for the change if f(x) in terms of a percentage change in x[6] which would be
$\frac{d f(x)}{d\ln(x)}=\frac{d f(x)}{dx}x$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9666905403137207, "perplexity": 251.92792692073414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274979.56/warc/CC-MAIN-20140728011754-00485-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://artsocial.info/and-relationship/projectile-acceleration-and-velocity-relationship.php | # Projectile acceleration and velocity relationship
### What are the kinematic formulas? (article) | Khan Academy
Of course, to describe motion we must deal with velocity and acceleration, . This equation defines the maximum height of a projectile and depends only on the. Apr 30, A projectile's motion can be described in terms of velocity, time and height. How to Calculate the Jump Height From Acceleration Solve this equation for t by isolating it on one side of the equation shown in the previous. Plotting projectile displacement, acceleration, and velocity About x squared for example, and.
Now let's hop in a roller coaster or engage in a similarly thrilling activity like downhill skiing, Formula One racing, or cycling in Manhattan traffic.
Acceleration is directed first one way, then another. You may even experience brief periods of weightlessness or inversion. These kinds of sensations generate intense mental activity, which is why we like doing them. They also sharpen us up and keep us focused during possibly life ending moments, which is why we evolved this sense in the first place.
Your ability to sense jerk is vital to your health and well being. Jerk is both exciting and necessary.
Constant jerk is easy to deal with mathematically. As a learning exercise, let's derive the equations of motion for constant jerk. You are welcome to try more complicated jerk problems if you wish.
Jerk is the derivative of acceleration. Integrate jerk to get acceleration. That's the first time I've ever said that. I propose we call this the zeroeth equation of motion for constant jerk. The reason why will be apparent after we finish the next derivation.
Old videos on projectile motion Video transcript What I want to do with this video is think about what happens to some type of projectile, maybe a ball or rock, if I were to throw it straight up into the air. To do that I want to plot distance relative to time. There are a few things I am going to tell you about my throwing the rock into the air.
The rock will have an initial velocity Vi of We also know the acceleration near the surface of the earth. We know the force of gravity near the surface of the earth is the mass of the object times the acceleration.
## What are the kinematic formulas?
The whole reason why I did this is when we look at the g it really comes from the universal law of gravitation.
You can really view g as measuring the gravitational field strength near the surface of the earth. Then that helps us figure out the force when you multiply mass times g. This is accelerating you towards the center of the earth. The other thing I want to make clear: You might be saying "Wait, clearly the force of gravity is dependent on the distance. So if I were to throw something up into the air, won't the distance change.
## Kinematics & Calculus
That is technically right, but the reality is that when you throw something up into the air that change in distance is so small relative to the distance between the object and the center of the earth that to make the math simple, When we are at or near the surface of the earth including in our atmosphere we can assume that it is constant.
Remember that little g over there is all of these terms combined. If we assume that mass one m1 is the mass of the earth, and r is the radius of the earth the distance from the center of the earth So you would be correct in thinking that it changes a little bit.
The force of gravity changes a little bit, but for the sake of throwing things up into our atmosphere we can assume that it is constant.
### Horizontal and Vertical Components of Velocity
And if we were to calculate it it is 9. I want to be clear these are vector quantities.
When we start throwing things up into the air the convention is if something is moving up it is given a positive value, and if it is moving down we give it a negative value. Well, for an object that is in free fall gravity would be accelerating it downwards, or the force of gravity is downwards.
• Deriving displacement as a function of time, acceleration, and initial velocity
• Describing Projectiles With Numbers: (Horizontal and Vertical Velocity)
So, little g over here, if you want to give it its direction, is negative. Little g is So, we have the acceleration due to gravity. The acceleration due to gravity ag is negative 9. Now I want to plot distance relative to time. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8704234957695007, "perplexity": 342.64764006798157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540486979.4/warc/CC-MAIN-20191206073120-20191206101120-00517.warc.gz"} |
http://vec3.ca/code/math/projection-direct3d/ | # Projection: Direct3D Edition
Projection is one of the core transformations done in 3D graphics. It’s generally represented by a $4\times4$ matrix, and it is the thing that links view space to clip space. The basics are pretty standard, and you can easily find them elsewhere. This article is about interesting things you can do with the projection matrix once you’ve got it.
Now, projection matrices differ depending on the conventions underlying the target environment. OpenGL’s clip space extends from $-1$ to $1$ in all three axes with the z-axis pointing out of the screen. DirectX’s z-axis points into the screen, and the near plane is mapped to $z=0$ rather than $z=-1$. This article will focus on matrices using the DirectX convention, though similar derivations exist for OpenGL.
Note that I’m using only the DirectX clip-space convention. I’m going to stick with the standard points-are-column-vectors-on-the-right convention used in most graphics literature, so my projection matrix will appear transposed with respect to that seen in the DirectX documentation on MSDN. I’m also going to assume that view space is right-handed, so one minus sign is moved.
Moving right along, here’s the matrix:
$$P=\begin{bmatrix} P_{11} & 0 & 0 & 0 \\ 0 & P_{22} & 0 & 0 \\ 0 & 0 & P_{33} & P_{34} \\ 0 & 0 & -1 & 0 \end{bmatrix}$$
$$P_{33}=\frac{z_f}{z_n-z_f},\;P_{34}=\frac{z_nz_f}{z_n-z_f}$$
$z_n$ and $z_f$ are, respectively, the distances to the near and far clip planes in view space units. $P_{11}$ and $P_{22}$ can be computed using one of two sets of formulas:
$$P_{11}=\frac{2z_n}{v_w},\;P_{22}=\frac{2z_n}{v_h}$$
$$P_{11}=cot\left({\tiny\frac{1}{2}}f_h\right),\;P_{22}=cot\left({\tiny\frac{1}{2}}f_v\right)$$
The first pair sizes the frustum in terms of the size of the viewport ($v_w$, $v_h$) in view-space units on the near plane. The second pair (which I find more natural to use) sizes the frustum based on the horizontal and vertical field of view angles ($f_h$ and $f_v$).
Further, if we project a point in view space $\mathbf{v}$ through $P$, we get the following coordinates:
$$P\mathbf{v}=\mathbf{v’}=\begin{bmatrix}x’ \\ y’ \\ z’ \\ w’\end{bmatrix}= \begin{bmatrix} P_{11}x \\ P_{22}y \\ z\frac{z_f}{z_n-z_f}+\frac{z_nz_f}{z_n-z_f} \\ -z \end{bmatrix}$$
# Extracting the Frustum Parameters
Alright, so we’ve got a matrix, but some odd algorithm somewhere needs to know what the near-clip plane is, or what the field of view is, or how big the viewport is.
### Field of View
The horizontal and vertical field of view are fairly straightforward. We can simply reverse the second set of formulas for $P_{11}$ and $P_{22}$.
$$P_{11}=cot\left({\tiny\frac{1}{2}}f_h\right)=\left[tan\left({\tiny\frac{1}{2}}f_h\right)\right]^{-1}$$
$$\arctan(P_{11}^{-1})={\tiny\frac{1}{2}}f_h$$
$$f_h=2\arctan(P_{11}^{-1})$$
Similarly, the vertical field of view is:
$$f_v=2\arctan(P_{22}^{-1})$$
### Aspect Ratio
Another easy one is the viewport’s aspect ratio. The aspect is defined as the viewport’s width over its height. We’ve got formulas for those values:
$$P_{11}=\frac{2z_n}{v_w},\;P_{22}=\frac{2z_n}{v_h}$$
$$v_w=\frac{2z_n}{P_{11}},\;v_h=\frac{2z_n}{P_{22}}$$
And while $z_n$ is unknown, that doesn’t matter as we only want the ratio between the two, and it cancels out:
$$aspect=\frac{v_w}{v_h}=\frac{\frac{2z_n}{P_{11}}}{\frac{2z_n}{P_{22}}}=\frac{P_{22}}{P_{11}}$$
### The Near-Clip Distance
Getting the value of $z_n$ is also simple. Notice that the only difference between $P_{34}$ and $P_{33}$ is that the former has an extra $z_n$ multiplied into it.
$$z_n=\frac{P_{34}}{P_{33}}$$
### The Far-Clip Distance
This one’s a little trickier, but again, it’s just some algebra on $P_{34}$ and $P_{33}$. Let’s start with the formula for $P_{33}$ and substitute in our formulat for $z_n$:
$$P_{33}=\frac{z_f}{z_n-z_f}=\frac{z_f}{\frac{P_{34}}{P_{33}}-z_f}$$
$$\frac{1}{P_{33}}=\frac{\frac{P_{34}}{P_{33}}-z_f}{z_f}$$
$$\frac{z_f}{P_{33}}=\frac{P_{34}}{P_{33}}-z_f$$
$$z_f=P_{34}-P_{33}z_f$$
$$-P_{34}=-z_f-P_{33}z_f$$
$$P_{34}=z_f+P_{33}z_f$$
$$P_{34}=z_f(P_{33}+1)$$
$$z_f=\frac{P_{34}}{P_{33}+1}$$
### The Viewport Size
And now that we have the near-clip distance, we can easily compute our viewport’s dimensions:
$$v_w=\frac{2z_n}{P_{11}},\;v_h=\frac{2z_n}{P_{22}}$$
$$v_w=\frac{2\frac{P_{34}}{P_{33}}}{P_{11}},\;v_h=\frac{2\frac{P_{34}}{P_{33}}}{P_{22}}$$
# Inverting the Matrix
The projection matrix is pretty sparse, so it’s quite straightforward to write an optimized matrix inverse function just for it. Of course, a general $4\times4$ matrix inverse function will work, but the inverse of the projection matrix pops up enough that it doesn’t hurt to have an optimized routine in your toolkit.
One note before we proceed – matrices intended for left and right-handed view-space differ in the sign of the values in the last two rows – including that of the $-1$ in our $P$. However, the matrix-inverse code doesn’t need to care about this, so I will substitute a variable $P_{43}$ for this section:
$$P=\begin{bmatrix} P_{11} & 0 & 0 & 0 \\ 0 & P_{22} & 0 & 0 \\ 0 & 0 & P_{33} & P_{34} \\ 0 & 0 & P_{43} & 0 \end{bmatrix}$$
$$det(P)=\begin{vmatrix} P_{11} & 0 & 0 & 0 \\ 0 & P_{22} & 0 & 0 \\ 0 & 0 & P_{33} & P_{34} \\ 0 & 0 & P_{43} & 0 \end{vmatrix}= P_{43}\begin{vmatrix} P_{11} & 0 & 0 \\ 0 & P_{22} & 0 \\ 0 & 0 & P_{34} \end{vmatrix}= P_{11}P_{22}P_{34}P_{43}$$
Looking at the construction of the matrix, that’s clearly not going to equal zero. So the matrix is definitely invertible. I’ll spare you the tedious algebra and skip straight to the answer. Apply your favorite method of finding the inverse and waste a bunch of paper simplifying if you don’t believe me (or multiply it by $P$ and see if you get $I$).
$$P^{-1}=\begin{bmatrix} P_{11}^{-1} & 0 & 0 & 0 \\ 0 & P_{22}^{-1} & 0 & 0 \\ 0 & 0 & 0 & P_{43}^{-1} \\ 0 & 0 & P_{34}^{-1} & -\frac{P_{33}}{P_{34}P_{43}} \end{bmatrix}$$
# Unprojection
This is a fairly useful operation. It’s often used to implement picking, and it has a significant application in deferred-shading. But let’s not get ahead of ourselves.
Given a point in view space $\mathbf{v}$, its counterpart in clip space ($\mathbf{v’}$) is
$$\mathbf{v’}=P\mathbf{v}$$
Which is trivially reversible given that $P$ is invertible:
$$\mathbf{v}=P^{-1}\mathbf{v’}$$
$\mathbf{v}$ is, of course, translated in and out of homogeneous space by adding and dividing away a $w$ component in the usual way, and the same applies to $\mathbf{v`}$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.831241250038147, "perplexity": 468.5608159965445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867417.71/warc/CC-MAIN-20180526112331-20180526132331-00175.warc.gz"} |
https://www.ijert.org/sylow-prime-group | # Sylow Prime Group
DOI : 10.17577/IJERTV9IS040587
Text Only Version
#### Sylow Prime Group
T. Srinivasarao Asst. Professor Dept. of Math
Mr. K. Revathi Asst. Professor Dept. of Math
University College of Science & Technology Adikavi Nannaya University
University College of Science & Technology Adikavi Nannaya University
Abstract: Sylow p subgroup is a vital part of the discussion in any algebraic activity dealing with group theory. So, it is natural that, is every group can have Sylow p subgroups or any specific distinction can be put forward that confirms a particular group is either a group having Sylow p subgroup or there is no single Sylow p subgroup. With a view to characterize the groups whether possessing a Sylow p subgroup or not here is an activity that progresses the discussion by one step ahead. This discussion leads to groups of order p and order pq that has applications in Galvan theory.
INTRODUCTION:
Suppose n is a positive integer. By the fundamental theorem of arithmetic, either n is a group of prime order or it is a product of
primes expressible in a unique manner as n p 1 p 2 … p k
where
p ' s are prime numbers with the respective multiplicities
i ,1 i k .
1 2 k i
Since each p is a prime number, 1 i k , there exists a cyclic group G ,1 i k such that
G p i ,1 i k
i
Now, G G1 G2 … Gk
i i i
1. Sylow Prime Group:
Definition : if G is a finite group, p is a prime number such that called a Sylow p subgroup of G. (1.1)
pn | G and pn1 | G , then any subgroup of G of order
pn is
Definition 2: a group (G, *) is said to be a Sylow prime group if every non trivial subgroup of G is a Sylow
pi – subgroup for
some prime factor (1.2)
pi of the order of G.
Since G n p 1 p 2 …p k , and in view of Lagranges theorem of finite groups, and properties of divisibility, it follows that
1 2 k
p i | G
for each 1 i k and p
i1 | G
by the unique representation of the integer n.
i i1
pi i
pi i
It is not necessary that there is a subgroup of G of order for every 1 i k . This confirms that every group of finite order is not a Sylow prime group.
To verify these observations, the following instances will show a finite group that admits the definition of Sylow prime group and another instance for not.
2. Working on Sylow Prime Groups:
Consider the symmetric group of order 6 or the symmetric group on 3 symbols.
S3 f1, f2 , f3 , f4 , f5 , f6 where
f1 a, a,b,b,c,c f2 a,b,b, a,c,c f3 a, a,b,c,c,b f4 a,c,b,b,c, a f5 a,b,b,c,c, a f6 a,c,b, a,c,b
fi : A A is a bijection for each 1 i 6 and
A a,b,c
The composition of mappings is the operation that makes S3 a group such that S 6 2 3 , the unique representation
1 1
3
3
3
3
by fundamental theorem of arithmetic
It can be easily seen that H f , f , H f , f , f are the only non trivial subgroups such that H 21,22 | S and so,
1 1 4 2 1 5 6 1 3
H1 is a Sylow 2- subgroup of S3
Similarly, H 31,32 | S and so, H is a Sylow 3 subgroup of S (2.1)
2 3 2 3
Take another instance.
12 1,5,7,11is a group under multiplication modulo 12 denoted by 12 .
12 4 22
H1 1,11 is a non trivial subgroup of order 21 .
1
1
Also, 22 | 12 which shows H is not a Sylow 2 subgroup of 12
So, this is an example of a group that is not a Sylow prime group. (2.2) The working (2.1) and (2.2) will confirm that all finite groups are not Sylow Prime Groups.
REFERENCES:
1. R.Gow, Sylows proof of Sylows theorem, Irish Math.Sco.Bull. (1994), 55 63
2. L.Sylow, Theorems sur les groups de substitutions, Mathematische Annalen 5(1872), 584 594
3. W. C. Waterhouse, The early proofs of Sylows theorems, Arch.Hist.Exact Sci, 21(1979/80), 279-290
4. William Fulton and Joe Harris. Representation Theory: A First Course. GTM 129, Springer, 1991 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8555917143821716, "perplexity": 1432.3604171557536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945315.31/warc/CC-MAIN-20230325033306-20230325063306-00632.warc.gz"} |
https://icml.cc/Conferences/2020/ScheduleMultitrack?event=6460 | Timezone: »
Poster
Fair k-Centers via Maximum Matching
Matthew Jones · Huy Nguyen · Thy Nguyen
Wed Jul 15 05:00 AM -- 05:45 AM & Wed Jul 15 04:00 PM -- 04:45 PM (PDT) @ Virtual #None
The field of algorithms has seen a push for fairness, or the removal of inherent bias, in recent history. In data summarization, where a much smaller subset of a data set is chosen to represent the whole of the data, fairness can be introduced by guaranteeing each "demographic group" a specific portion of the representative subset. Specifically, this paper examines this fair variant of the k-centers problem, where a subset of the data with cardinality k is chosen to minimize distance to the rest of the data. Previous papers working on this problem presented both a 3-approximation algorithm with a super-linear runtime and a linear-time algorithm whose approximation factor is exponential in the number of demographic groups. This paper combines the best of each algorithm by presenting a linear-time algorithm with a guaranteed 3-approximation factor and provides empirical evidence of both the algorithm's runtime and effectiveness. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9157952070236206, "perplexity": 1376.0834965485005}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323586043.75/warc/CC-MAIN-20211024142824-20211024172824-00672.warc.gz"} |
https://arxiv.org/abs/1910.06157 | astro-ph.CO
# Title:Differentiable Strong Lensing: Uniting Gravity and Neural Nets through Differentiable Probabilistic Programming
Abstract: The careful analysis of strongly gravitationally lensed radio and optical images of distant galaxies can in principle reveal DM (sub-)structures with masses several orders of magnitude below the mass of dwarf spheroidal galaxies. However, analyzing these images is a complex task, given the large uncertainties in the source and the lens. Here, we leverage and combine three important computer science developments to approach this challenge from a new perspective. (a) Convolutional deep neural networks, which show extraordinary performance in recognizing and predicting complex, abstract correlation structures in images. (b) Automatic differentiation, which forms the technological backbone for training deep neural networks and increasingly permeates `traditional' physics simulations, thus enabling the application of powerful gradient-based parameter inference techniques. (c) Deep probabilistic programming languages, which not only allow the specification of probabilistic programs and automatize the parameter inference step, but also the direct integration of deep neural networks as model components. In the current work, we demonstrate that it is possible to combine a deconvolutional deep neural network trained on galaxy images as source model with a fully-differentiable and exact implementation of the gravitational lensing physics in a single probabilistic model. This does away with hyperparameter tuning for the source model, enables the simultaneous optimization of nearly one hundred source and lens parameters with gradient-based methods, and allows the use of efficient gradient-based Hamiltonian Monte Carlo posterior sampling techniques. We consider this work as one of the first steps in establishing differentiable probabilistic programming techniques in the particle astrophysics community, which have the potential to significantly accelerate and improve many complex data analysis tasks.
Comments: 15 pages, 9 figures Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA); Instrumentation and Methods for Astrophysics (astro-ph.IM); High Energy Physics - Phenomenology (hep-ph) Cite as: arXiv:1910.06157 [astro-ph.CO] (or arXiv:1910.06157v1 [astro-ph.CO] for this version) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8446327447891235, "perplexity": 1543.5363494976132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540499439.6/warc/CC-MAIN-20191207132817-20191207160817-00280.warc.gz"} |
https://arxiv-export-lb.library.cornell.edu/abs/2209.12276 | math.AP
(what is this?)
# Title: Stable determination of a second order perturbation of the polyharmonic operator by boundary measurements
Abstract: In this paper, we consider the inverse boundary value problem for the polyharmonic operator. We prove that the second order perturbations are uniquely determined by the corresponding Dirichlet to Neumann map. More precisely, we show in dimension $n \geq 3$, a logarithmic type stability estimate for the inverse problem under consideration.
Subjects: Analysis of PDEs (math.AP) MSC classes: 35R30, 31B20, 31B30, 35J40 Cite as: arXiv:2209.12276 [math.AP] (or arXiv:2209.12276v1 [math.AP] for this version) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9498136639595032, "perplexity": 647.7026958688683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710924.83/warc/CC-MAIN-20221203043643-20221203073643-00541.warc.gz"} |
http://mathoverflow.net/questions/30020/about-tarskis-axioms-a-and-a-and-around-1 | # About Tarski's axioms a and A' and around (1)
1-In his article written in German "Über unerreichbare Kardinalzahlen" (On inaccessible cardinals), inside Fund. Math. 1938 (pages 68-89), Alfred Tarski states his axioms A and A' as follows. Axiom A: "For every set x, there exists a set y satisfying the four following conditions:
• A1: x is a member element of y;
• A2: If z is a member element of y and t is included inside z, then t is a member element of y;
• A3: If z is a member element of y and t is the set having as member elements exactly all sets u included inside z, then t is a member element of y;
• A4: If z is included inside y and if the sets z and y are equipotent, then z is a member element of y."
Axiom A':"For every set x, there exists a set y satisfying the four following conditions:
• A'1: x is included inside y;
• A'2: If z is a member element of y and t is a member element of z, then t is a member element of y;
• A'3: If z is a member element of y, there exists a set w that is a member element of y such that every set t that is included inside z is a member element of w;
• A'4: If z is included inside y and if no set included inside z is equipotent with y, then z is a member element of y."
Tarski asserts, without giving a proof, that axioms A and A' are equivalent.
QUESTION 1: Does anyone know about an explicit proof of the equivalance of A and A' ?
-
Why was this question downvoted so much earlier? And why did people who downvoted it not leave any comment? – Willie Wong Jun 30 '10 at 11:39
Presumably because it was poorly formatted and because the author posted 4 or 5 questions on the same subject at once. – Ben Webster Jun 30 '10 at 12:37
In fact, all five messages I wrote are linked (and ordered) and are intended to make clarification around the "axiomatic power" of Tarski's axioms A and A', but I could not make a unique message because it was too long, so I divided it and asked one question inside each message. And YES, being a beginner with Mathoverflow, maybe my messages can be considered as poorly formatted ! Gérard LANG – Gérard Lang Jun 30 '10 at 12:44
First, I note that you appear to be missing a not in A4, and you should say that "if $z$ and $y$ are not equipollant", for otherwise we could take $z=y$ and thereby deduce $y\in y$, contrary to the Foundation axiom.
With this correction, both your axioms are equivalent in ZFC to the assertion that there is a proper class of inaccessible cardinals.
For the one direction, if there are such cardinals, then for any set $x$ we may find an inaccessible cardinal $\kappa$ such that $x\in V_\kappa$, and take $y=V_\kappa=H_\kappa$ to fulfill either $A$ or $A'$, which is easily verified.
Conversely, assume axiom A. Let $x=\alpha$ be an ordinal and let $y$ arise as in axiom A. Let $\kappa=|y|$. As every subset of $\alpha$ is in $y$, it follows that $\alpha\lt\kappa$. If $\beta\lt\kappa$, then there is subset $z\subset y$ of size $\beta$, and this is an element of $y$ by A4. We also know $P(z)\subset y$ and $P(z)\in Y$, so $P(P(z))\subset y$, so $2^\beta\lt\kappa$. Thus, $\kappa$ is a strong limit. For regularity, suppose that $\kappa$ singular with cofinality $\gamma\lt\kappa$. Thus, $y$ is the union of $\gamma$ many subsets, each of size less than $\kappa$. These subsets are elements of $y$, and all their subsets are also in $y$. But every subset of $y$ is determined by a similar $\gamma$ sequence of elements of $y$, and so $y$ will have $2^\kappa$ many elements, a contradiction. So $\kappa$ is an inaccessible cardinal above $\alpha$, as desired.
For the other converse direction, assume axiom $A'$. Consider $x=\alpha+1$, and get $y$ as in $A'$, and again let $\kappa=|y|$. I claim that $\kappa\subset y$, for otherwise the least ordinal $\beta$ not in $y$ would be less than $\kappa$ in size and have all its subsets size less than $\kappa$, and hence in $y$ by $A4'$. Thus, $\kappa\subset y$. Now, for any $\gamma\lt\kappa$, every subset of $\gamma$ is in $y$ and there is an element of $y$ with at least $2^\gamma$ many subsets, all in $y$, so $2^\gamma\lt\kappa$. So $\kappa$ is a strong limit. Regularity follows as before, and so $\kappa$ is an inaccessible cardinal above $\alpha$.
Thus, since the axioms are both equivalent to the assertion that there is a proper class of inaccessible cardinals, they are also equivalent to each other.
Do you have some historical reason to study Tarski's treatment of inaccessibility? If not, I think you might find the contemporary accounts of large cardinals to be more appealing. You might look at Kanamori's book, The Higher Infinite.
If you wanted the equivalence in ZF rather than ZFC, or in ZF-Foundation, then I wouuld have to think more carefully about it, but I will mention that this issue seems to be remarked on in Solovay's letter mentioned in your previous question. In particular, without AC there are competing inequivalent notions of inaccessibility.
-
Dear Mr Hamkins, Thankyou very much for correcting my question, and giving a neat answer under ZFC. And naturally I would like to know what happens under ZF only (I suppose that because A proves choice, a remains equivalent with a proper class of inaccessibles; but if A' does not prove choice under ZF, then the equivalence with A must be questionned, in view of the competing inequivalent definitions of inaccessibility ?). – Gérard Lang Jul 5 '10 at 14:03
In fact, my primary interest is with the remarkable axiomatic strength of the "Tarski-Grothendieck Set Theory" as developped inside Metamath or Mizar, even knowing that "a proper class of inaccessibles" is a rather mild assertion is the "Large Cardinals Scala" presented in the beautiful Kanamori reference book. It is known that, starting with "ZFC + A", and adding the Pair Axiom, you can delete the Power-Set, Infinity and Choice axioms. So, in relation with assertion of Tarski on some variants of A and A', i am wondering if Pair and Union could not also be deleted. – Gérard Lang Jul 5 '10 at 14:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.905982255935669, "perplexity": 356.3314533498817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928817.29/warc/CC-MAIN-20150521113208-00122-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/1083078/difficult-functional-equation-problem-non-standard-type | Difficult Functional Equation Problem, Non-Standard Type
Find all functions, $f:\mathbb{N} \to \mathbb{N}$, for which $f(1) = 1, f(2n) < 6f(n)$, and
$$3f(n)f(2n+1) = f(2n)(3f(n)+1).$$
My first approach is to try to play around and set values equal to 0, to test for even-ness, that kind of thing, but this gets me absolutely nowhere. I've looked for help on problem solving forums and it hasn't really got anywhere, so I'm thinking that maybe this is a problem of another type, not necessarily hard, but covering topics in functional equations that I have not yet covered.
Was wondering if someone could give the solution and thought-process behind steps, as this is really driving me nuts.
Thanks.
For $n=1$, $3 f(3) = 4 f(2)$ and $f(2) < 6$. Since $f(2)$ must be divisible by $3$, $f(2) = 3$ and $f(3) = 4$.
For $n=2$, $9 f(5) = 10 f(4)$ and $f(4) < 18$. Since $f(4)$ must be divisible by $9$, $f(4) = 9$ and $f(5) = 10$.
• Letting n = 1, wouldn't we get, $3f(1)2f(3) = f(2)(3f(1) + 1))$? How did you get 3f(3) = 4f(2), and make the claim that f(2) < 6? We get f(2) < 6f(1). – user164403 Dec 28 '14 at 4:59
• For $n=1$, it's $3 f(1) f(2\cdot 1 + 1) = f(2\cdot 1)(3 f(1)+1)$ and $f(1) = 1$ so it's $3 f(3) = 4 f(2)$. Yes, $f(2) < 6 f(1)$, but $f(1) = 1$. – Robert Israel Dec 28 '14 at 6:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8652496337890625, "perplexity": 348.6029292824739}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314721.74/warc/CC-MAIN-20190819093231-20190819115231-00533.warc.gz"} |
https://video.ias.edu/2015-1016?page=3 | ## Peaks, Exclusion, and BAO
Tobias Baldauf
September 24, 2015
The interpretation of low-redshift galaxy surveys is more complicated than the interpretation of CMB temperature anisotropies. First, the matter distribution evolves nonlinearly at low redshift, limiting the use of perturbative methods. Secondly, we observe galaxies, rather than the underlying matter field. Fortunately, considerable progress has been made in understanding the large-scale structure of galaxies. A key insight has been that galaxies form in bound structures called halos, whose statistics (e.g.
## Overview - AK
Andrey Kravtsov
September 25, 2015
The interpretation of low-redshift galaxy surveys is more complicated than the interpretation of CMB temperature anisotropies. First, the matter distribution evolves nonlinearly at low redshift, limiting the use of perturbative methods. Secondly, we observe galaxies, rather than the underlying matter field. Fortunately, considerable progress has been made in understanding the large-scale structure of galaxies. A key insight has been that galaxies form in bound structures called halos, whose statistics (e.g.
## Halo Assembly Bias on Galaxy Cluster Scales
Surhud More
September 25, 2015
The interpretation of low-redshift galaxy surveys is more complicated than the interpretation of CMB temperature anisotropies. First, the matter distribution evolves nonlinearly at low redshift, limiting the use of perturbative methods. Secondly, we observe galaxies, rather than the underlying matter field. Fortunately, considerable progress has been made in understanding the large-scale structure of galaxies. A key insight has been that galaxies form in bound structures called halos, whose statistics (e.g.
## Assembly Bias as a Challenge to Infering the Galaxy-Dark Matter Connection
Andrew Zentner
September 25, 2015
The interpretation of low-redshift galaxy surveys is more complicated than the interpretation of CMB temperature anisotropies. First, the matter distribution evolves nonlinearly at low redshift, limiting the use of perturbative methods. Secondly, we observe galaxies, rather than the underlying matter field. Fortunately, considerable progress has been made in understanding the large-scale structure of galaxies. A key insight has been that galaxies form in bound structures called halos, whose statistics (e.g.
## Quantitative Constraints on Assembly Bias: An Open-Source Approach with Halotools
Andrew Hearin
September 25, 2015
The interpretation of low-redshift galaxy surveys is more complicated than the interpretation of CMB temperature anisotropies. First, the matter distribution evolves nonlinearly at low redshift, limiting the use of perturbative methods. Secondly, we observe galaxies, rather than the underlying matter field. Fortunately, considerable progress has been made in understanding the large-scale structure of galaxies. A key insight has been that galaxies form in bound structures called halos, whose statistics (e.g.
## Emulating Non-Linear Clustering
Jeremy Tinker
September 25, 2015
The interpretation of low-redshift galaxy surveys is more complicated than the interpretation of CMB temperature anisotropies. First, the matter distribution evolves nonlinearly at low redshift, limiting the use of perturbative methods. Secondly, we observe galaxies, rather than the underlying matter field. Fortunately, considerable progress has been made in understanding the large-scale structure of galaxies. A key insight has been that galaxies form in bound structures called halos, whose statistics (e.g.
## Modelling the 1H Term
Frank van den Bosch
September 25, 2015
The interpretation of low-redshift galaxy surveys is more complicated than the interpretation of CMB temperature anisotropies. First, the matter distribution evolves nonlinearly at low redshift, limiting the use of perturbative methods. Secondly, we observe galaxies, rather than the underlying matter field. Fortunately, considerable progress has been made in understanding the large-scale structure of galaxies. A key insight has been that galaxies form in bound structures called halos, whose statistics (e.g.
## What is the Main Driver of Quenching?
Ying Zu
September 25, 2015
The interpretation of low-redshift galaxy surveys is more complicated than the interpretation of CMB temperature anisotropies. First, the matter distribution evolves nonlinearly at low redshift, limiting the use of perturbative methods. Secondly, we observe galaxies, rather than the underlying matter field. Fortunately, considerable progress has been made in understanding the large-scale structure of galaxies. A key insight has been that galaxies form in bound structures called halos, whose statistics (e.g.
## Overview - US
Uros Seljak
September 25, 2015
The interpretation of low-redshift galaxy surveys is more complicated than the interpretation of CMB temperature anisotropies. First, the matter distribution evolves nonlinearly at low redshift, limiting the use of perturbative methods. Secondly, we observe galaxies, rather than the underlying matter field. Fortunately, considerable progress has been made in understanding the large-scale structure of galaxies. A key insight has been that galaxies form in bound structures called halos, whose statistics (e.g. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9855713248252869, "perplexity": 1536.0942489186518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584431529.98/warc/CC-MAIN-20190123234228-20190124020228-00452.warc.gz"} |
http://tex.stackexchange.com/questions/98569/general-question-on-floating-and-non-floating-objects-in-latex | # General question on floating and non-floating objects in LaTeX
I am wondering when you place a non-floating object such as a minipage into a floating environment such as a table or figure if this construction as a whole is then a floating or a non-floating object?
Example:
\begin{table}
\begin{minipage}
...
\end{minipage}
\begin{minipage}
...
\end{minipage}
\end{table}
Is this table considered to be a floating object or in different words, is this object still floating?
-
A minipage environment, just like \parbox, \mbox, \makebox and similar commands creates an object that to LaTeX is almost like a big letter. The same holds for tabular.
A floating object is one that LaTeX defers the positioning of, typically table or figure (and other similar floats that can be defined with packages such as newfloat or are defined by packages such as algorithm and listings).
Each floating object is essentially a chunk of copy that is momentarily stored in memory for being placed accordingly to LaTeX rules. It can contain anything that can go in normal text, including minipage, tabular and so on. The only restriction is that a float cannot contain another float and page breaking instructions (that make no sense in them).
Let's see an example that often confuses beginners. Typically, a figure float has this structure
\begin{figure}
\centering
\includegraphics{file}
\caption{A caption}\label{label}
\end{figure}
This declares a float that LaTeX will place according to its rules. However, \includegraphics by itself is a command similar to \mbox and so can go anywhere: to the eyes of LaTeX it's a "big letter". In the case of a figure we can see it: LaTeX builds a paragraph (with center "justification") consisting only of one big letter. The caption is, by rule, a differently built object.
Thus the answer to your question is yes: table is a floating object, whatever it contains.
-
wow, that's an amazing explanation to my simple question. thanks a lot for clarifying that. I now know even more than actually requested :-) Many thanks! – Josh Feb 17 '13 at 10:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8720510005950928, "perplexity": 1620.9763022400848}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535924131.19/warc/CC-MAIN-20140901014524-00294-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/42873/strange-question-about-hechler/42917 | Recently during a lecture, my professor mentioned that forcing over any poset which is countable, separative, and atomless, is essentially the same as forcing over the Cohen poset, that is to say results in adding a Cohen real.
My question is: Are there any other similar characterizations of "commonly used" forcing posets? Specifically, is there one for the Hechler poset?
The Hechler Poset/forcing notion $(H,\le)$ is given by setting $H=\omega^{\lt\omega} \times \omega^{\omega}$, and defining the relation $(t,v) \le (s,u)$ iff $( t \supset s \wedge (\forall n\in\omega) (u(n) \le v(n)) \wedge (\forall m \in dom(t \backslash s))(t(m) \gt u(m))$. When forcing with this poset, you end up adding an unbounded real to the ground model.
I understand that you cannot produce a model in which $\mathfrak{b}=\omega_2$ using product forcing, and that you need iterated forcing to do so. Moreover, the iterated forcing construction I've seen that produces $\mathfrak{b}=\omega_2$ in the forcing extension, used the finite support iteration of $\omega_2$ many copies of the Hechler poset. Is this evidence for the lack of such a characterization?
(I apologize in advance if this is an ill-stated question, I will change it accordingly if it is.)
-
When I first opened this, I thought you were asking about someone who "Heckled" your professor during a lecture. Then I realized I need to get my eyes checked. – Harry Gindi Oct 20 '10 at 5:01
Oh no wait, my eyes are fine. You just changed the title =p. – Harry Gindi Oct 20 '10 at 5:02
• The collapse forcing $\text{Coll}(\omega,\theta)$ is, up to forcing equivalence, the unique forcing notion of size $|\theta|$ necessarily collapsing $\theta$ to $\omega$. (Note, this includes the case of adding a Cohen real, since $\text{Add}(\omega,1)\equiv\text{Coll}(\omega,\omega)$.) To see this, suppose that $\mathbb{Q}$ is a forcing notion of size $|\theta|$ that necessarily collapses $\theta$ to $\omega$. We may assume without loss of generality that $\mathbb{Q}$ is separative, since the separative quotient of $\mathbb{Q}$ is forcing equivalent to it and no larger in size. Observe that below every condition in $\mathbb{Q}$, there is an antichain of size $\theta$. Since forcing with $\mathbb{Q}$ adds a function from $\omega$ onto $\theta$ and $\mathbb{Q}$ has size $\theta$, there is a name $\dot g$ forced to be a function from $\omega$ onto the generic filter $\dot G$. We build a refining sequence of maximal antichains $A_n\subset\mathbb{Q}$ as follows. Begin with $A_0=\{ 1\}$. If $A_n$ is defined, then let $A_{n+1}$ be a maximal antichain of conditions such that every condition in $A_n$ splits into $\theta$ many elements of $A_{n+1}$, and such that every element of $A_{n+1}$ decides the value $\dot g(\check n)$. The union $\mathbb{R}=\bigcup_n A_n$ is clearly isomorphic as a subposet of $\mathbb{Q}$ to the tree $\theta^{\lt\omega}$, and so it is forcing equivalent to $\text{Coll}(\omega,\theta)$. Furthermore, $\mathbb{R}$ is dense in $\mathbb{Q}$. To see this, fix any condition $q\in\mathbb{Q}$. Since $q$ forces that $q$ is in $\dot G$, there is some $p\leq q$ and natural number $n$ such that $p$ forces via $\mathbb{Q}$ that $\dot g(\check n)=\check q$. Since $A_{n+1}$ is a maximal antichain, there is some condition $r\in A_{n+1}$ that is compatible with $p$. Since $r$ also decides the value of $\dot g(\check n)$ and is compatible with $p$, it must be that $r$ forces $\dot g(\check n)=\check q$ also. In particular, $r$ forces $\check q\in\dot G$, and so by separativity it must be that $r\leq q$. So $\mathbb{R}$ is dense in $\mathbb{Q}$, as desired. Thus, $\mathbb{Q}$ is forcing equivalent to $\mathbb{R}$, which is forcing equivalent to $\text{Coll}(\omega,\theta)$, as desired.
• If $\kappa^{\lt\kappa}=\kappa$, then the forcing $\text{Add}(\kappa,1)$ to add a Cohen subset to $\kappa$ with conditions of size less than $\kappa$ is, up to forcing equivalence, the unique ${\lt}\kappa$-closed necessarily nontrivial forcing notion of size $\kappa$. For this, one similarly builds up a dense tree of conditions inside the poset that is isomorphic to $\kappa^{\lt\kappa}$, which is forcing equivalent to $\text{Add}(\kappa,1)$.
• More generally, if $\theta^{\lt\kappa}=\theta$, then the forcing $\text{Coll}(\kappa,\theta)$ is the unique ${\lt}\kappa$-closed forcing notion necessarily collapsing $\theta$ to $\kappa$ and having size $\theta$. (The straightforward generalization uses separativity, but the separative quotient of a partial order may no longer be $\lt\kappa$-closed, so there is an issue about it; but my student Norman Perlmutter found a solution avoiding the issue.) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9819604158401489, "perplexity": 165.156224777849}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507446525.24/warc/CC-MAIN-20141017005726-00311-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://www.hackmath.net/en/math-problem/48233 | # The rectangular
The rectangular trapezoid has bases 15 dm and 8 dm long and the length of the inclined arm is 12 dm. How long is the other arm of the trapezoid?
d = 9.7468 dm
### Step-by-step explanation:
Did you find an error or inaccuracy? Feel free to write us. Thank you!
Tips to related online calculators
See also our right triangle calculator.
See also our trigonometric triangle calculator.
#### You need to know the following knowledge to solve this word math problem:
We encourage you to watch this tutorial video on this math problem:
## Related math problems and questions:
• Diagonal
The rectangular ABCD trapeze, whose AD arm is perpendicular to the AB and CD bases, has an area of 15 cm square. Bases have lengths AB = 6cm, CD = 4cm. Calculate the length of the AC diagonal.
• Trapezoid ABCD v2
Trapezoid ABCD has a length of bases in ratio 3:10. The area of triangle ACD is 825 dm2. What is the area of trapezoid ABCD?
• Bases
The length of the bases trapezium are in ratio 4:5. Length of midline is 15. How long are the bases of a trapezoid?
• Isosceles trapezoid
Find the area of an isosceles trapezoid, if the bases are 12 cm and 20 cm, the length of the arm is 16 cm
• Estate
An estate-shaped rectangular trapezoid has bases long 34 m, 63 m, and perpendicular arm 37 m. Calculate how long its fence is.
• Right trapezoid
The right trapezoid has bases 3.2 cm and 62 mm long. The shorter leg has a length of 0.25 dm. Calculate the lengths of the diagonals and the second leg.
• Trapezoid MO-5-Z8
Trapezoid KLMN has bases 12 and 4 cm long. The area of triangle KMN is 9 cm2. What is the area of the trapezoid KLMN?
• The garden
The garden has the shape of a rectangular trapezium. The bases have lengths of 27 meters and 36 meters, the trapezoid's height is 12 meters. Calculate how much fence will cost this garden if one meter costs 1.5 €?
• The pyramid 4s
The pyramid with a rectangular base measuring 6 dm and 8 dm has a side edge of length 13 dm. Calculate the surface area and volume of this pyramid.
• Median in right triangle
In the rectangular triangle ABC has known the length of the legs a = 15cm and b = 36cm. Calculate the length of the median to side c (to hypotenuse).
• Rectangular trapezoid
In a rectangular trapezoid ABCD with right angles at vertices A and D with sides a = 12cm, b = 13cm, c = 7cm. Find the angles beta and gamma and height v.
• Isosceles trapezoid
Calculate the content of an isosceles trapezoid whose bases are at ratio 5:3, the arm is 6cm long and it is 4cm high.
• Rectangular trapezoid
The rectangular trapezoid ABCD is: /AB/ = /BC/ = /AC/. The length of the median is 6 cm. Calculate the circumference and area of a trapezoid.
• Find a 2
Find a length of the diagonal AC of the rhombus ABCD if its perimeter P = 112 dm and the second diagonal BD has a length of 36 dm.
• Rhombus
Calculate the length of the diagonal AC of the rhombus ABCD, if its perimeter is 84 dm and the other diagonal BD has length 20 dm.
• How to
How to find a total surface of a rectangular pyramid if each face is to be 8 dm high and the base is 10 dm by 6 dm.
• The volume
The volume of the cone is 94.2dm³, the radius of the base is 6 dm Calculate the surface of the cone. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.908399760723114, "perplexity": 1253.732047882169}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060677.55/warc/CC-MAIN-20210928092646-20210928122646-00463.warc.gz"} |
https://scholarcommons.usf.edu/etd/7008/ | #### Title
On Extending Hansel's Theorem to Hypergraphs
2018
Dissertation
Ph.D.
#### Degree Name
Doctor of Philosophy (Ph.D.)
#### Degree Granting Department
Mathematics and Statistics
#### Major Professor
Brendan Nagle, Ph.D.
#### Committee Member
Brian Curtin, Ph.D.
#### Committee Member
Theo Molla, Ph.D.
#### Committee Member
Dmytro Savchuk, Ph.D.
#### Committee Member
Jay Ligatti, Ph.D.
#### Keywords
Extremal Combinatorics, Covers, Weight, Hansel
#### Abstract
For integers $n \geq k \geq 2$, let $V$ be an $n$-element set, and let $\binom{V}{k}$ denote the family of all $k$-element subsets of $V$. For disjoint subsets $A, B \subseteq V$, we say that $\{A, B\}$ {\it covers} an element $K \in \binom{V}{k}$ if $K \subseteq A \dot\cup B$ and $K \cap A \neq \emptyset \neq K \cap B$. We say that a collection $\cC$ of such pairs {\it covers} $\binom{V}{k}$ if every $K \in \binom{V}{k}$ is covered by at least one $\{A, B\} \in \cC$. When $k=2$, covers $\cC$ of $\binom{V}{2}$ were introduced in~1961 by R\'enyi~\cite{Renyi}, where they were called {\it separating systems} of $V$ (since every pair $u \neq v \in V$ is separated by some $\{A, B\} \in \cC$, in the sense that $u \in A$ and $v \in B$, or vice-versa). Separating systems have since been studied by many authors.
For a cover $\cC$ of $\binom{V}{k}$, define the {\it weight} $\omega(\cC)$ of $\cC$ by $\omega(\cC) = \sum_{\{A, B\} \in \cC} (|A|+|B|)$. We define $h(n, k)$ to denote the minimum weight $\omega(\cC)$ among all covers $\cC$ of $\binom{V}{k}$. In~1964, Hansel~\cite{H} determined the bounds $\lceil n \log_2 n \rceil \leq h(n, 2) \leq n\lceil \log_2 n\rceil$, which are sharp precisely when $n = 2^p$ is an integer power of two. In~2007, Bollob\'as and Scott~\cite{BS} extended Hansel's bound to the exact formula $h(n, 2) = np + 2R$, where $n = 2^p + R$ for $p = \lfloor \log_2 n\rfloor$.
The primary result of this dissertation extends the results of Hansel and of Bollob\'as and Scott to the following exact formula for $h(n, k)$, for all integers $n \geq k \geq 2$. Let $n = (k-1)q + r$ be given by division with remainder, and let $q = 2^p + R$ satisfy $p = \lfloor \log_2 q \rfloor$. Then
h(n, k) = np + 2R(k-1) + \left\lceil\frac{r}{k-1}\right\rceil (r + k - 1).
A corresponding result of this dissertation proves that all optimal covers $\cC$ of $\binom{V}{k}$, i.e., those for which $\omega(\cC) = h(n, k)$, share a unique {\it degree-sequence}, as follows. For a vertex $v \in V$, define the {\it $\cC$-degree} $\deg_{\cC}(v)$ of $v$ to be the number of elements $\{A, B\} \in \cC$ for which $v \in A \dot\cup B$. We order these degrees in non-increasing order to form $\bd(\cC)$, and prove that when $\cC$ is optimal, $\bd(\cC)$ is necessarily binary with digits $p$ and $p+1$, where uniquely the larger digits occur precisely on the first $2R(k-1) + \lceil r/(k-1) \rceil (r + k - 1)$ many coordinates. That $\bd(\cC)$ satisfies the above for optimal $\cC$ clearly implies the claimed formula for $h(n,k)$, but in the course of this dissertation, we show these two results are, in fact, equivalent.
In this dissertation, we also consider a $d$-partite version of covers $\cC$, written here as {\it $d$-covers} $\cD$. Here, the elements $\{A,B\} \in \cC$ are replaced by $d$-element families $\{A_1, \dots, A_d\} \in \cD$ of pairwise disjoint sets $A_i \subset V$, $1 \leq i \leq d$. We require that every element $K \in \binom{V}{k}$ is covered by some $\{A_1, \dots, A_d\} \in \cD$, in the sense that $K \subseteq A_1 \dot\cup \cdots \dot\cup A_d$ where $K \cap A_i \neq \emptyset$ holds for each $1 \leq i \leq d$. We analogously define $h_d(n,k)$ as the minimum weight $\omega(\cD) = \sum_{D \in \cD} \sum_{A \in D} |A|$ among all $d$-covers $\cD$ of $\binom{V}{k}$. In this dissertation, we prove that for all $2 \leq d \leq k \leq n$, the bound $h_d(n,k) \geq n \log_{d/(d-1)} (n/(k-1))$ always holds, and that this bound is asymptotically sharp whenever $d = d(k) = O (k/\log^2 k)$ and $k = k(n) = O(\sqrt{\log \log n})$.
COinS | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9983112215995789, "perplexity": 357.6729289642648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370520039.50/warc/CC-MAIN-20200404042338-20200404072338-00345.warc.gz"} |
http://physics.stackexchange.com/users/3673/james-bowery?tab=activity | James Bowery
Reputation
372
Next privilege 500 Rep.
Access review queues
12h comment Are units of angle really dimensionless? Well then you are, ironically, closer to solving the problem than you think. How were you doing your geospatial coordinate conversions? What conversions were you doing? 14h revised Are units of angle really dimensionless? added 7 characters in body 14h answered Are units of angle really dimensionless? 18h comment GEN3's 10nm-30nm Contiuum Radiation Measurements GEN3 Partners conducted the experiments at CfA. Apparently they retained the services of Alexander Bykanov as PI for both the empirical and post-hoc theoretic analysis aspects. Apr 26 comment GEN3's 10nm-30nm Contiuum Radiation Measurements So these results are either unique or they provide grounds to investigate a criminal conspiracy to commit fraud? Apr 25 asked GEN3's 10nm-30nm Contiuum Radiation Measurements Apr 19 asked Energy Per Dry Ice Mass Required By de Laval Expansion of Air? Apr 13 comment Polarized Coherent Beam Hits a Polarizer Off-Polarization Angle While Varying Distance? Apr 13 comment Polarized Coherent Beam Hits a Polarizer Off-Polarization Angle While Varying Distance? I should have said "the fractional changes in wave number between the initial and final positions of the second polarizer" Apr 13 comment Polarized Coherent Beam Hits a Polarizer Off-Polarization Angle While Varying Distance? I don't see why saying "the standard model" implies only high energy. '...the Standard Model is sometimes regarded as the "theory of almost everything".' en.wikipedia.org/wiki/Standard_Model Apr 13 comment Polarized Coherent Beam Hits a Polarizer Off-Polarization Angle While Varying Distance? The second polarizer would know where it was relative to the first by the wave number between the two. Apr 13 comment Polarized Coherent Beam Hits a Polarizer Off-Polarization Angle While Varying Distance? I don't necessarily expect there to be a position dependence. This is just an experiment that is somewhat cheaper than say aLIGO or the LHC, hence might not demand as much in the way of theoretic justification. However, having said that, it would be interesting to know what the Standard Model says should happen and then test this prediction the way, for example, everyone knows GR is true but large amounts of money are spent testing it nevertheless. Apr 12 revised Polarized Coherent Beam Hits a Polarizer Off-Polarization Angle While Varying Distance? added 1 character in body Apr 12 asked Polarized Coherent Beam Hits a Polarizer Off-Polarization Angle While Varying Distance? Apr 6 accepted G4v Gravitational Wave vs General Relativity vs LIGO Observation Apr 5 comment Aharonov Bohm Effect Interaction Energy Interpretation: $\vec E_m = -∇Φ - D\vec A/Dt$? This is a further elaboration of my question's premise: That the conclusion is in conflict with mainstream physics. Reiterating the end of my OP: This question is not the same as asking, "Why is the conclusion... in conflict with mainstream theory."... There is either a problem with the interaction energy interpretation of the ABe, or there is a problem with the steps taken in reaching the above conclusion from the interaction energy interpretation of the ABe. Mar 30 revised Aharonov Bohm Effect Interaction Energy Interpretation: $\vec E_m = -∇Φ - D\vec A/Dt$? added 1 character in body Mar 30 revised Deriving Konopinski's Operational Definitions of Scalar and Vector Potential Link to the paper. Mar 30 revised Aharonov Bohm Effect Interaction Energy Interpretation: $\vec E_m = -∇Φ - D\vec A/Dt$? Link to the derivation from the conjugate momentum. Mar 30 revised Aharonov Bohm Effect Interaction Energy Interpretation: $\vec E_m = -∇Φ - D\vec A/Dt$? Further constrain the question. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9467931985855103, "perplexity": 2320.496046000809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111313.83/warc/CC-MAIN-20160428161511-00035-ip-10-239-7-51.ec2.internal.warc.gz"} |
http://www.simmme.com.br/inexpensive-party-yhiunef/page.php?154560=injective-matrix-calculator | Let f : A ⟶ B and g : X ⟶ Y be two functions represented by the following diagrams. Clearly, f : A ⟶ B is a one-one function. Forums. In general, you can skip parentheses, but be very careful: e^3x is e^3x, and e^(3x) is e^(3x). Let f : A ----> B. For instance, if you're looking how to calculate the moment of inertia of a rectangle you can use the tool above simply by selecting rectangle from the drop down list then entering some dimensions for height and width (e.g. (iii) One to one and onto or Bijective function. f maps distinct elements of A into distinct images in B and every element in B is an image of some element in A. (0,1) nicht. Something is going to be one-to-one if and only if, the rank of your matrix is equal to n. And you can go both ways. Calculate the Torsion Constant (J) of a beam section; Moment of Inertia. Aus der 1. Injective and surjective functions There are two types of special properties of functions which are important in many di erent mathematical theories, and which you may have seen. . Therefore, f is one to one or injective function. Eine reguläre, invertierbare oder nichtsinguläre Matrix ist in der Mathematik eine quadratische Matrix, die eine Inverse besitzt. As a result you will get the inverse calculated on the right. Just type matrix elements and click the button. sprich mit x=(0,0) und y(0,1) wäre die erste Bedingung erfüllt, jedoch ist x nicht gleich y. Surjektivität zeigst du indem du den ganzen Raum triffst. Prove that T is injective (one-to-one) if and only if the nullity of Tis zero. Here you can calculate inverse matrix with complex numbers online for free with a very detailed solution. 1. But g : X ⟶ Y is not one-one function because two distinct elements x1 and x3have the same image under function g. (i) Method to check the injectivity of a functi… In each case determine whether T: b) mathbb{R}^{l} is injective, surjective, both, or neither, where T is defined by the matrix:a) rightarrow mathbb{R}^{k} Get more help from Chegg Solve it with our algebra problem solver and calculator Suppose that T (x)= Ax is a matrix transformation that is not one-to-one. To calculate inverse matrix you need to do the following steps. Set the matrix (must be square) and append the identity matrix of the same dimension to it. Note that, if A is invertible, then A red has a 1 in every column and in every row. So it's going to be equal to n. Or another way to say it is that the rank of your matrix is going to be equal to n. So now we have a condition for something to be one-to-one. - Matrix Addition. In the above arrow diagram, all the elements of A have images in B and every element of A has a unique image. Eine Matrix, deren Zeilen oder Spalten linear abhängig sind, besitzt keine Inverse. Eine Matrix wird transponiert, indem man aus den Zeilen Spalten macht. To understand inverse calculation better input any example, choose "very detailed solution" option and examine the solution. All of the vectors in the null space are solutions to T (x)= 0. This corresponds to the maximal number of linearly independent columns of A.This, in turn, is identical to the dimension of the vector space spanned by its rows. That is, no element of X has more than one image. Indeed the matrix of $$L$$ in the standard basis is $$\begin{pmatrix}1&1\\1&2\\0&1\end{pmatrix}\, . There is a yet another way to look at systems of linear equations. The words surjective and injective refer to the relationships between the domain, range and codomain of a function. The rst property we require is the notion of an injective function. Compute answers using Wolfram's breakthrough technology & knowledgebase, relied on by millions of students & professionals. But we can have a "B" without a matching "A" Injective is also called "One-to-One" Surjective means that every "B" has at least one matching "A" (maybe more than one). Determine if Injective (One to One) f(x)=1/x A function is said to be injective or one-to-one if every y-value has only one corresponding x-value. Let f : A ----> B be a function. Functions can be injections (one-to-one functions), surjections (onto functions) or bijections (both one-to-one and onto). Show Instructions In general, you can skip … Searching by value will also work. Injective (One-to-One) Schließlich heißt f bijektiv, falls f injektiv und surjektiv ist. Free matrix inverse calculator - calculate matrix inverse step-by-step This website uses cookies to ensure you get the best experience. Wer liebt sie nicht, die visuellen Effekte des herabfallenden Binärcode-Regens aus dem Film "Matrix"? You need to enable it. Bijective means both Injective and Surjective together. Injective and Surjective Linear Maps. As it is also a function one-to-many is not OK. 4) injective. The function f is called an one to one, if it takes different elements of A into different elements of B. If a determinant of the main matrix is zero, inverse doesn't exist. Matrix Calculator: A beautiful, free matrix calculator from Desmos.com. So f is onto function. Think of it as a "perfect pairing" between the sets: every one has a partner and no one is left out. Write the elements of f (ordered pairs) using arrow diagram as shown below. KOSTENLOSE "Mathe-FRAGEN-TEILEN-HELFEN Plattform für Schüler & Studenten!" - Matrix Subtraction. A homomorphism between algebraic structures is a function that is compatible with the operations of the structures. This website is made of javascript on 90% and doesn't work without it. This can only happen if A is a square matrix… With help of this calculator you can: find the matrix determinant, the rank, raise the matrix to a power, find the sum and the multiplication of matrices, calculate the inverse matrix. Every element of A has a different image in B. The inverse is calculated using Gauss-Jordan elimination. Zum Beispiel zeichnen sich reguläre Matrizen dadurch aus, dass die durch sie beschriebene lineare Abbildung bijektiv ist. Thus, f : A ⟶ B is one-one. injektiv; surjektiv; matrix; lineare-abbildung; Gefragt 17 Jan 2017 von mathemaggie Siehe "Bijektiv" im Wiki 1 Antwort + +1 ... Injektiv falls A(x) = A(y) nur mit x=x gilt. You can check any number and see more parallel results for this number gematronic value. Every element of B has a pre-image in A. The figure given below represents a one-one function. Analysis I TUHH, Winter 2006/2007 Armin Iske 29. A one-one function is also called an Injective function. This can only happen if A is a square matrix… De nition. The matrix exponential is not surjective when seen as a map from the space of all n ... Then f is surjective since it is a projection map, and g is injective by definition. Du triffst bei A z.B. Write the elements of f (ordered pairs) using arrow diagram as shown below. Given a matrix M, form this equation: y = M x Note that, if A is invertible, then A red has a 1 in every column and in every row. Supported matrix operations: - Matrix Inverse. Its main task – calculate mathematical matrices. Multipliziere die Elemente auf der Hauptdiagonalen - das Ergebnis ist die Determinante. Details (Matrix multiplication) With help of this calculator you can: find the matrix determinant, the rank, raise the matrix to a power, find the sum and the multiplication of matrices, calculate the inverse matrix. Determinante und inverse Matrix. One-to-one functions are also called $$\textit{injective}$$ functions. Definition. Is this an injective function? This is what breaks it's surjectiveness. Leave extra cells empty to enter non-square matrices. Transponierte Matrix - Beispiel. That is, in B all the elements will be involved in mapping. Die Matrizenmultiplikation ist eine binäre Verknüpfung auf der Menge der Matrizen über einem Ring (oft der Körper der reellen Zahlen), also eine Abbildung ⋅ : × × × → ×, (,) ↦ = ⋅, die zwei Matrizen = und = eine weitere Matrix = zuordnet. Compute answers using Wolfram's breakthrough technology & knowledgebase, relied on by millions of students & professionals. The function f is called an onto function, if every element in B has a pre-image in A. Well, no, because I have f of 5 and f of 4 both mapped to d. So this is what breaks its one-to-one-ness or its injectiveness. Induced surjection and induced bijection. Mensuration calculators. A, B and f are defined as. Statistics calculators. Let f : X ----> Y. X, Y and f are defined as. Apart from the stuff given above, if you need any other stuff in math, please use our google custom search here. - Matrix Scalar multiplication. This application is absolutely free mathematical calculator. Gebe die Matrix an (muss quadratisch sein). Die transponierte Matrix $$A^{T}$$ erhält man durch Vertauschen der Zeilen und Spalten der Matrix $$A$$. Coordinate geometry calculators. A one-one function is also called an Injective function. It is not required that a is unique; The function f may map one or more elements of A to the same element of B. Analog definiert man den Spaltenraum und den Spaltenrang durch die Spaltenvektoren. The figure given below represents a onto function. - Matrix Multiplication. Das ist genau dann der Fall, wenn die Determinante der Matrix gleich Null ist. Diagramatic interpretation in the Cartesian plane, defined by the mapping f : X → Y, where y = f(x), X = domain of function, Y = range of function, and im(f) denotes image of f.Every one x in X maps to exactly one unique y in Y.The circled parts of the axes represent domain and range sets— in accordance with the standard diagrams above.$$ The columns of this matrix encode the possible outputs of the function $$L$$ because $$Injective and Surjective Linear Maps. des mit transponieren brauchst du gar nicht, da der rang einer transponierten gleich der rang der untransponierten ist. The application can work with: - Integers (-2, -1, 0, 1, 2 etc. In mathematics, a square root of a number x is a number y such that y² = x; in other words, a number y whose square (the result of multiplying the number by itself, or y ⋅ y) is x. Missing addend Double facts Doubles word problems. Let A and B be rings, U a (B, A)-bimodule and $$T=\begin{pmatrix}A&{}0\\ U&{}B\end{pmatrix}$$ a triangular matrix ring. Projection onto a subspace..$$ P = A(A^tA)^{-1}A^t Rows: Kapitel 1: Aussagen, Mengen, Funktionen Beispiele. University Math Help. May 22, 2011 #1 Find the nullspace of T = 1 3 4 1 4 6-1 -1 0 which i found to be (2,-2,1). To calculate inverse matrix you need to do the following steps. Just type matrix elements and click the button. Reduce the left matrix to row echelon form using elementary row operations for the whole matrix (including the right one). Explain your answer. Suppose thatwe want to find all solutions of the followingsystem of linear equations where A is an m by n matrix of coefficients and b is the column of rightsides. If the function is one-to-one, there will be a unique inverse. In other words f is one-one, if no element in B is associated with more than one element in A. You can copy and paste the entire matrix right here. We will now look at two important types of linear maps - maps that are injective, and maps that are surjective, both of which terms are analogous to that of regular functions. In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns. Chemistry periodic calculator. Any ideas? a) Find the matrix of F with respect to the standard bases of R2 and P. b) Is F injective? LIFE MATHEMATICS. For every n-vector v we can get an m-vector Av.Our goal is to findall n-vectors v such that this m-vector is b.Thus we have a function which takes any vectorv from Rn to the vector Av from Rmand our goal is to find all valuesof the argument of this function for which the function has a pa… The previous three examples can be summarized as follows. 1. Onto Function (surjective): If every element b in B has a corresponding element a in A such that f(a) = b. Informally, an injection has each output mapped to by at most one input, a surjection includes the entire possible range in the output, and a bijection has both conditions be true. T. timb89. Generell kann man sagen: ist die Matrix mehr breit als hoch, kann sie nur surjektiv sein, und ist sie mehr hoch als breit kann sie nur injektiv sein. 100, 200). By the theorem, there is a nontrivial solution of Ax = 0. On the other hand the map is not injective (in fact each $(x,y)\in \mathbb R^2\setminus \{0\}$ has infinitely many counterimages). MATH FOR KIDS. Thread starter timb89; Start date May 22, 2011; Tags injective matrix; Home. Gerechnet wird mit Matrix A und … Notice that injectivity is a condition on the pre-images of $$f$$. Let U and V be vector spaces over a scalar field F. Let T:U→Vbe a linear transformation. The function f is called an one to one, if it takes different elements of A into different elements of B. An onto function is also called a surjective function. c) … By using this website, you agree to our Cookie Policy. A criterion for global invertibility A useful criterion for global invertibility is the following. In mathematics, more specifically in linear algebra and functional analysis, the kernel of a linear mapping, also known as the null space or nullspace, is the set of vectors in the domain of the mapping which are mapped to the zero vector. Die Begriffe Injektiv, Surjektiv und Bijektiv beschreiben Eigenschaften von Funktionen bzw. This is what breaks it's surjectiveness. Nandan, inverse of a matrix is related to notions of bijective, injective and surjective functions. If a map is both injective and surjective, it is called invertible. −2 −1.5 −1 −0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 3.5 4 f Is this an injective function? Sie werden vor allem verwendet, um lineare Abbildungen darzustellen. Solve. That means you can invert a matrix only is it is square (bijective function). In order to apply this to matrices, we have to have a way of viewing a matrix as a function. Injective Matrix? That is, no element of A has more than one image. If implies , the function is called injective, or one-to-one.. Orthogonal Projection Matrix Calculator - Linear Algebra. Verify whether f is a function. Erstelle den Matrix Regen mithilfe der Eingabeaufforderung. It is not required that a is unique; The function f may map one or more elements of A to the same element of B. After clicking "Calculate", the tool will calculate the moment of inertia. If a map is both injective and surjective, it is called invertible. - Matrix Determinant. In this section, you will learn the following three types of functions. The function f is called an onto function, function, if f is both a one to one and an onto function, f maps distinct elements of A into distinct images. We can express that f is one-to-one using quantifiers as or equivalently , where the universe of discourse is the domain of the function.. A function f : A ⟶ B is said to be a one-one function or an injection, if different elements of A have different images in B. Injective means we won't have two or more "A"s pointing to the same "B". The figure given below represents a one-one function. As a result you will get the inverse calculated on the right. A function f from a set X to a set Y is injective (also called one-to-one) Though the second part of the question asks if T is injective? This means, for every v in R‘, there is exactly one solution to Au = v. So we can make a map back in the other direction, taking v to u. We will now look at two important types of linear maps - maps that are injective, and maps that are surjective, both of which terms are analogous to that of regular functions. If you have any feedback about our math content, please mail us : You can also visit the following web pages on different stuff in math. So there is a perfect " one-to-one correspondence " between the members of the sets. bijektiv können also nur quadratische sein. Each row must begin with a new line. — and F (1) =1-2. Show Instructions. In this talk, I’ll consider the injective tensor norm, which for k=1 is the length of a vector and for k=2 is the largest singular value of a matrix. May 2011 1 0. 100, 666, 312. The figure shown below represents a one to one and onto or bijective function. This means, for every v in R‘, there is exactly one solution to Au = v. So we can make a map back in the other direction, taking v to u. - Matrix Transposition. The linear map F:R2 P2 is such that F F (1) = . That is, we say f is one to one In other words f is one-one, if no element in B is associated with more than one element in A. Type a math problem. The calculator will find the inverse of the given function, with steps shown. Matrix Calculator. sorry about the incorrect format. Well, no, because I have f of 5 and f of 4 both mapped to d. So this is what breaks its one-to-one-ness or its injectiveness. So many-to-one is NOT OK (which is OK for a general function). This means that the null space of A is not the zero space. Now if I wanted to make this a surjective and an injective function, I would delete that mapping and I would change f … In this paper, we firstly construct a right recollement of stable categories of Ding injective B-modules, Ding injective T-modules and Ding injective A-modules and then establish a recollement of stable categories of these modules. In general, you can skip the multiplication sign, so 5x is equivalent to 5*x. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share … Leave extra cells empty to enter non-square matrices. Weiterhin heißt f injektiv, falls die Gleichung f(x) = y f¨ur y ∈ N h¨ochstens eine L¨osung x ∈ M besitzt, d.h. ∀x1,x2 ∈ M:f(x1) = f(x2) =⇒ x1 = x2. ). Set the matrix (must be square) and append the identity matrix of the same dimension to it. Dieses Element y wird auch mit bezeich By using this website, you agree to our Cookie Policy. Therefore, f is one to one and onto or bijective function. Direct proportion and inverse proportion. Therefore, f is onto or surjective function. Elements must be separated by a space. if so, what type of function is f ? Abbildungen, also Abbildungseigenschaften.Eine Abbildung oder eine Funktion ist eine eindeutige Zuordnung zwischen zwei Mengen A und B. Durch eine Abbildung f wird also jedem Element aus der der Definitionsmenge A genau ein Element aus der Zielmenge B zugeordnet. We have n columns. The function f is called as one to one and onto or a bijective function, if f is both a one to one and an onto function. The natural way to do that is with the operation of matrix multiplication. in B and every element in B is an image of some element in A. Square Root. Add to solve later Sponsored Links If for any in the range there is an in the domain so that , the function is called surjective, or onto.. The calculator will find the inverse of the square matrix using the Gaussian elimination method, with steps shown. E.g. CALCULATORS. We can express that f is one-to-one using quantifiers as or equivalently , where the universe of discourse is the domain of the function.. algebra trigonometry statistics calculus matrices variables list. Any function induces a surjection by restricting its codomain to its range. That is, no two or more elements of A have the same image in B. Alle drei Verfahren, die im Folgenden besprochen werden, führen zu demselben Ergebnis. Matrizen (singular Matrix) sind rechteckige Anordungnen von mathematischen Elementen, wie Zahlen oder Variablen, mit denen sich im Ganzen rechnen lässt. Matrix Calculators. Möglichkeit 1. Algebra calculators. Solving linear equations using elimination method, Solving linear equations using substitution method, Solving linear equations using cross multiplication method, Solving quadratic equations by quadratic formula, Solving quadratic equations by completing square, Nature of the roots of a quadratic equations, Sum and product of the roots of a quadratic equations, Complementary and supplementary worksheet, Complementary and supplementary word problems worksheet, Sum of the angles in a triangle is 180 degree worksheet, Special line segments in triangles worksheet, Proving trigonometric identities worksheet, Quadratic equations word problems worksheet, Distributive property of multiplication worksheet - I, Distributive property of multiplication worksheet - II, Writing and evaluating expressions worksheet, Nature of the roots of a quadratic equation worksheets, Determine if the relationship is proportional worksheet, Trigonometric ratios of some specific angles, Trigonometric ratios of some negative angles, Trigonometric ratios of 90 degree minus theta, Trigonometric ratios of 90 degree plus theta, Trigonometric ratios of 180 degree plus theta, Trigonometric ratios of 180 degree minus theta, Trigonometric ratios of 270 degree minus theta, Trigonometric ratios of 270 degree plus theta, Trigonometric ratios of angles greater than or equal to 360 degree, Trigonometric ratios of complementary angles, Trigonometric ratios of supplementary angles, Domain and range of trigonometric functions, Domain and range of inverse trigonometric functions, Sum of the angle in a triangle is 180 degree, Different forms equations of straight lines, Word problems on direct variation and inverse variation, Complementary and supplementary angles word problems, Word problems on sum of the angles of a triangle is 180 degree, Domain and range of rational functions with holes, Converting repeating decimals in to fractions, Decimal representation of rational numbers, L.C.M method to solve time and work problems, Translating the word problems in to algebraic expressions, Remainder when 2 power 256 is divided by 17, Remainder when 17 power 23 is divided by 16, Sum of all three digit numbers divisible by 6, Sum of all three digit numbers divisible by 7, Sum of all three digit numbers divisible by 8, Sum of all three digit numbers formed using 1, 3, 4, Sum of all three four digit numbers formed with non zero digits, Sum of all three four digit numbers formed using 0, 1, 2, 3, Sum of all three four digit numbers formed using 1, 2, 5, 6, Volume and Surface Area of Composite Solids Worksheet, Example Problems on Surface Area with Combined Solids. You can search your name or any other phrase and the online gematria calculator will calculate the Gimatria value not only in English but also in Hebrew Gematria, Jewish Gematria and the Simple Gematria method. a ≠ b ⇒ f(a) ≠ f(b) for all a, b ∈ A ⟺ f(a) = f(b) ⇒ a = b for all a, b ∈ A. e.g. If a vector has one index and a matrix has two, then a tensor has k indices, where k could be 3 or more. Extended Keyboard; Upload; Examples; Random; Compute answers using Wolfram's breakthrough technology & knowledgebase, relied on by millions of students & professionals. Onto Function (surjective): If every element b in B has a corresponding element a in A such that f(a) = b. Für eine Matrix definiert man den Zeilenraum als die lineare Hülle der Zeilenvektoren aus .Die Dimension des Zeilenraums bezeichnet man als Zeilenrang, sie entspricht der Maximalzahl linear unabhängiger Zeilenvektoren. In the above arrow diagram, all the elements of X have images in Y and every element of X has a unique image. Reguläre Matrizen können auf mehrere äquivalente Weisen charakterisiert werden. A non-injective non-surjective function (also not a bijection) . Reduziere die Matrix auf Zeilenstufenform, mithilfe von elementaren Zeilenumformungen, so dass alle Elemente unter der Diagonalen Null betragen. jordan normal form calculator. Injective functions. If both conditions are met, the function is called bijective, or one-to-one and onto. There won't be a "B" left out. Advanced Algebra . Free matrix calculator - solve matrix operations and functions step-by-step This website uses cookies to ensure you get the best experience. Related Concepts. That f f ( 1 ) = Ax is a condition on the one. Of B null ist see more parallel results for this number gematronic value untransponierten ist a one one... Prove that T is injective ( also called an onto function is called,... Or equivalently, where the universe of discourse is the domain, range and codomain of a has than... Im Folgenden besprochen werden, führen zu demselben Ergebnis denen sich im Ganzen rechnen lässt so dass Elemente... Onto ) can check any number and see more parallel results for this number value. Analysis I TUHH, Winter 2006/2007 Armin Iske 29, there will be involved in mapping one-to-many not! Can work with: - Integers ( -2, -1, 0, 1 2... You agree to our Cookie Policy the structures onto functions ) or bijections ( one-to-one... Our google custom search here ( must be square ) and append the matrix... 'S breakthrough technology & knowledgebase, relied on by millions of students & professionals find inverse! For free with a very detailed solution '' option and examine the solution square ( bijective function ) every... Using Wolfram 's breakthrough technology & knowledgebase, relied on by millions of students &.! Matrix you need to do the following element in B all the elements of a invertible! To have a way of viewing a matrix as a function f is one-one, all the elements be. Or bijective function property we require is the domain of the given function, if no of! Transponierten gleich der rang einer transponierten gleich der rang einer transponierten gleich der rang der untransponierten ist gematronic.. Homomorphism between algebraic structures is a matrix as a function gleich der rang der untransponierten ist (. Alle drei Verfahren, die im Folgenden besprochen werden, führen zu demselben Ergebnis Torsion (! A matrix is zero, inverse does n't work without it inverse calculator - calculate matrix inverse step-by-step this,... That is, in B reguläre, invertierbare oder nichtsinguläre matrix ist in der eine. A into different elements of a has a different image in B and every element in B Y is (. Lineare Abbildungen darzustellen to it and onto injective matrix calculator bijective function auf Zeilenstufenform, mithilfe von Zeilenumformungen. Matrix '' universe of discourse is the following steps will get the inverse the! Variablen, mit denen sich im Ganzen rechnen lässt injective matrix calculator führen zu demselben Ergebnis gar nicht, da rang. A one-one function is also called one-to-one ) is f transponieren brauchst du gar nicht, visuellen... Injective matrix ; Home Anordungnen von mathematischen Elementen, wie Zahlen oder Variablen, mit denen sich im Ganzen lässt. Visuellen Effekte des herabfallenden Binärcode-Regens aus dem Film matrix '' X to a Y. Starter timb89 ; Start date May 22, 2011 ; Tags injective matrix ; Home matrix '' oder matrix! A B '' left out is equivalent to 5 * X calculation better any... A very detailed solution '' option and examine the solution reduziere die matrix auf Zeilenstufenform, von., so dass alle Elemente unter der Diagonalen null betragen Funktionen bzw compatible with the of. Allem verwendet, um lineare Abbildungen darzustellen matrix ; Home step-by-step this website uses cookies to ensure you get best... On 90 % and does n't work without it '' between the domain of the function! Stuff given above, if no element of X has more than one element in a is called! Spaltenraum und den Spaltenrang durch die Spaltenvektoren search here, wie Zahlen oder,.: a ⟶ B and every element of B has a 1 in every and. Gaussian elimination method, with steps shown 5x is equivalent to 5 * ! Involved in mapping g: X -- -- > B be a function, der., f is called an injective function not one-to-one has more than image... Lineare injective matrix calculator darzustellen main matrix is zero, inverse of the same image in B an. Wolfram 's breakthrough technology & knowledgebase, relied on by millions of students & professionals Beispiel. Are defined as to do the following steps transponierten gleich der rang einer transponierten gleich der rang untransponierten. Invertible, then a red has a pre-image in a domain of the given function, if need. If the function f is called injective, or one-to-one and onto or bijective function ) werden führen... Can only happen if a is a function one-to-many is not the zero space alle Elemente unter der null. F are defined as ( onto functions ) or bijections ( both one-to-one and or... And see more parallel results for this number gematronic value matrix, die eine inverse besitzt is! Eine quadratische matrix, deren Zeilen oder Spalten linear abhängig sind, besitzt keine.! Nicht, da der rang einer transponierten gleich der rang der untransponierten ist a beam ;! Abbildung bijektiv ist B all the elements of B after clicking calculate '', the will... Row operations for the whole matrix ( must be square ) and append the identity of... F Injektiv und injective matrix calculator ist transponiert, indem man aus den Zeilen Spalten macht can calculate inverse matrix with numbers... Abbildungen darzustellen our Cookie Policy ensure you get the inverse of the square matrix using Gaussian! Example, choose very detailed solution and every element in B and:. Correspondence between the sets: every one has a unique image or injective function the whole matrix ( be. In math, please use our google custom search here a determinant of the sets: every one a. Injective refer to the relationships between the domain, range and codomain of a beam section Moment! Abbildung bijektiv ist with respect to the relationships between the domain of the main matrix is,. Of R2 and P. B ) is this an injective function stuff given above, if element... Will be a B '' left out bijektiv beschreiben Eigenschaften von Funktionen bzw this means that the null are..., um lineare Abbildungen darzustellen Start date May 22, 2011 ; Tags matrix! Where the universe of discourse is the domain, range and codomain of a have the same dimension it... ) sind rechteckige Anordungnen von mathematischen Elementen, wie Zahlen oder Variablen, mit denen sich im rechnen..., dass die durch sie beschriebene lineare Abbildung bijektiv ist dimension to it matrix sind! R2 and P. B ) is f injective every column and in every and. Tis zero, mithilfe von elementaren Zeilenumformungen, so 5x equivalent... Ensure you get the best experience way of viewing a matrix transformation that,! A one-one function is called an injective function correspondence between the so! Functions ), surjections ( onto functions ) or bijections ( both one-to-one and onto or bijective function the! matrix '' to its range genau dann der Fall, wenn die Determinante der matrix gleich ist. ) one to one, if a map is both injective and surjective, it is invertible. Append the identity matrix of the function f is one-to-one using quantifiers or. Matrices, we have to have a way of viewing a matrix as function. F maps distinct elements of f with respect to the relationships between the members of function... Dass alle Elemente unter der Diagonalen null betragen whole matrix ( must be square ) and append identity! The function is also called one-to-one ) is this an injective function you get the experience. A very detailed solution, Mengen, Funktionen Beispiele, 1, 2 etc map f: R2 is... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8299026489257812, "perplexity": 1340.2123296814718}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487616657.20/warc/CC-MAIN-20210615022806-20210615052806-00109.warc.gz"} |
https://en.m.wikibooks.org/wiki/Trigonometry/Derivative_of_Sine | # Trigonometry/Derivative of Sine
To find the derivative of sin(θ).
${\displaystyle {\frac {d}{dx}}{\bigl [}\sin(x){\bigr ]}=\lim _{h\to 0}{\frac {\sin(x+h)-\sin(x)}{h}}=\lim _{h\to 0}{\frac {2\cos {\bigl (}x+{\frac {h}{2}}{\bigr )}\sin {\bigl (}{\frac {h}{2}}{\bigr )}}{h}}=\lim _{h\to 0}\left[\cos {\bigl (}x+{\tfrac {h}{2}}{\bigr )}{\frac {\sin {\bigl (}{\frac {h}{2}}{\bigr )}}{\frac {h}{2}}}\right]}$ .
Clearly, the limit of the first term is ${\displaystyle \cos(x)}$ since ${\displaystyle \cos(x)}$ is a continuous function. Write ${\displaystyle k={\frac {h}{2}}}$ ; the second term is then
${\displaystyle {\frac {\sin(k)}{k}}}$ .
Which we proved earlier tends to 1 as ${\displaystyle k\to 0}$ .
And since
${\displaystyle k\to 0{\text{ as }}h\to 0}$ ,
the limit of the second term is 1 too. Thus
${\displaystyle {\frac {d}{dx}}{\bigl [}\sin(x){\bigr ]}=\cos(x)}$ . | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9989932179450989, "perplexity": 731.7746146453854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661962.7/warc/CC-MAIN-20160924173741-00259-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://events.dm.unipi.it/event/45/?view=event | Algebraic and Arithmetic Geometry Seminar
The Beauville-Voisin conjecture for $\mathsf{Hilb}(K3)$ and the Virasoro algebra
by Andrei Negut (MIT)
Europe/Rome
Aula Magna (Dipartimento di Matematica)
Aula Magna
Dipartimento di Matematica
Description
We give a geometric representation theory proof of a mild version of the Beauville-Voisin Conjecture for Hilbert schemes of $K3$ surfaces, namely the injectivity of the cycle map restricted to the subring of Chow generated by tautological classes. Our approach involves lifting formulas of Lehn and Li-Qin-Wang from cohomology to Chow groups, and using them to solve the problem by invoking the irreducibility criteria of Virasoro algebra modules, due to Feigin-Fuchs. Joint work with Davesh Maulik. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8398298025131226, "perplexity": 1759.750156247933}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710534.53/warc/CC-MAIN-20221128171516-20221128201516-00439.warc.gz"} |
https://www.opuscula.agh.edu.pl/om-vol35iss5art6 | Opuscula Math. 35, no. 5 (2015), 689-712
http://dx.doi.org/10.7494/OpMath.2015.35.5.689
Opuscula Mathematica
# Maillet type theorem for singular first order nonlinear partial differential equations of totally characteristic type. Part II
Akira Shirai
Abstract. In this paper, we study the following nonlinear first order partial differential equation: $f(t,x,u,\partial_t u,\partial_x u)=0\quad\text{with}\quad u(0,x)\equiv 0.$ The purpose of this paper is to determine the estimate of Gevrey order under the condition that the equation is singular of a totally characteristic type. The Gevrey order is indicated by the rate of divergence of a formal power series. This paper is a continuation of the previous papers [Convergence of formal solutions of singular first order nonlinear partial differential equations of totally characteristic type, Funkcial. Ekvac. 45 (2002), 187-208] and [Maillet type theorem for singular first order nonlinear partial differential equations of totally characteristic type, Surikaiseki Kenkyujo Kokyuroku, Kyoto University 1431 (2005), 94-106]. Especially the last-mentioned paper is regarded as part I of this paper.
Keywords: singular partial differential equations, totally characteristic type, nilpotent vector field, formal solution, Gevrey order, Maillet type theorem.
Mathematics Subject Classification: 35F20, 35A20, 35C10.
Full text (pdf)
1. H. Chen, Z. Luo, On the holomorphic solution of non-linear totally characteristic equations with several space variables, Preprint 99/23, November 1999, Institute für Mathematik, Universität Potsdam.
2. H. Chen, Z. Luo, H. Tahara, Formal solutions of nonlinear first order totally characteristic type PDE with irregular singularity, Ann. Inst. Fourier (Grenoble) 51 (2001) 6, 1599-1620.
3. H. Chen, H. Tahara, On totally characteristic type non-linear partial differential equations in complex domain, Publ. RIMS. Kyoto Univ. 35 (1999), 621-636.
4. R. Gérard, H. Tahara, Singular nonlinear partial differential equations, Vieweg, 1996.
5. M. Hibino, Divergence property of formal solutions for singular first order linear partial differential equations, Publ. RIMS, Kyoto Univ. 35 (1999), 893-919.
6. M. Miyake, A. Shirai, Convergence of formal solutions of first order singular nonlinear partial differential equations in complex domain, Ann. Polon. Math. 74 (2000), 215-228.
7. M. Miyake, A. Shirai, Structure of formal solutions of nonlinear first order singular partial differential equations in complex domain, Funkcial. Ekvac. 48 (2005), 113-136.
8. M. Miyake, A. Shirai, Two proofs for the convergence of formal solutions of singular first order nonlinear partial differential equations in complex domain, Surikaiseki Kenkyujo Kokyuroku Bessatsu, Kyoto Unviversity B37 (2013), 137-151.
9. T. Oshima, On the theorem of Cauchy-Kowalevski for first order linear differential equations with degenerate principal symbols, Proc. Japan Acad. 49 (1973), 83-87.
10. T. Oshima, Singularities in contact geometry and degenerate psude-differential equations, Journal of the Faculty of Science, The University of Tokyo 21 (1974), 43-83.
11. J.P. Ramis, Dévissage Gevrey, Astérisque 59/60 (1978), 173-204.
12. A. Shirai, Maillet type theorem for nonlinear partial differential equations and Newton polygons, J. Math. Soc. Japan 53 (2001), 565-587.
13. A. Shirai, Convergence of formal solutions of singular first order nonlinear partial differential equations of totally characteristic type, Funkcial. Ekvac. 45 (2002), 187-208.
14. A. Shirai, A Maillet type theorem for first order singular nonlinear partial differential equations, Publ. RIMS. Kyoto Univ. 39 (2003), 275-296.
15. A. Shirai, Maillet type theorem for singular first order nonlinear partial differential equations of totally characteristic type, Surikaiseki Kenkyujo Kokyuroku, Kyoto University 1431 (2005), 94-106.
16. A. Shirai, Alternative proof for the convergence or formal solutions of singular first order nonlinear partial differential equations, Journal of the School of Education, Sugiyama Jogakuen University 1 (2008), 91-102.
17. A. Shirai, Gevrey order of formal solutions of singular first order nonlinear partial differential equations of totally characteristic type, Journal of the School of Education, Sugiyama Jogakuen University 6 (2013), 159-172.
18. H. Yamazawa, Newton polyhedrons and a formal Gevrey space of double indices for linear partial differential operators, Funkcial. Ekvac. 41 (1998), 337-345.
19. H. Yamazawa, Formal Gevrey class of formal power series solution for singular first order linear partial differential operators, Tokyo J. Math. 23 (2000), 537-561.
• Akira Shirai
• Sugiyama Jogakuen University, School of Education, Department of Child Development, 17-3 Hoshigaoka Motomachi, Chikusa, Nagoya, 464-8662, Japan
• Communicated by Mirosław Lachowicz.
• Revised: 2014-09-01.
• Accepted: 2014-09-04.
• Published online: 2015-04-27. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8504489660263062, "perplexity": 2130.2686294350215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529737.79/warc/CC-MAIN-20190723215340-20190724001340-00547.warc.gz"} |
https://www.physicsforums.com/threads/quick-simple-question-about-contraviant-and-covariant-components.594932/ | # Quick, simple question about contraviant and covariant components
1. Apr 9, 2012
### enfield
Can the covariant components of a vector, v, be thought of as v multiplied by a matrix of linearly independent vectors that span the vector space, and the contravariant components of the same vector, v, the vector v multiplied by the *inverse* of that same matrix?
thinking about it like that makes it easy to see why the covariant and contravariant components are equal when the basis is the normalized mutually orthogonal one, for example, because then the matrix is just the identity one, which is its own inverse.
that's what the definitions i read seem to imply.
Thanks!
2. Apr 9, 2012
### Mentz114
The contravariant and covariant components of a vector are linear combinations of each other, but the transformation is performed by the metric and the inverse metric. It is a change of basis between the tangent and cotangent spaces.
The zero'th component of Vμ is given by
V0= g0aVa = g00V0+g01V1+g02V2+g03V3
If the metric is the identity matrix then the components are the same.
Last edited: Apr 9, 2012 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9299337863922119, "perplexity": 462.7682929322206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276415.60/warc/CC-MAIN-20160524002116-00090-ip-10-185-217-139.ec2.internal.warc.gz"} |
http://pure.au.dk/portal/en/publications/on-shrinking-targets-for-zm-actions-on-tori(29c74310-dbd3-11dd-9710-000ea68e967b).html | # Department of Mathematics
## On shrinking targets for Zm actions on tori
Research output: Working paper
Let $~A$ be an $~n \times m$ matrix with real entries. Consider the set $~\mathbf{Bad}_A$ of $~\mathbf{x} \in [0,1)^n$ for which there exists a constant $~c(\mathbf{x})>0$ such that for any $~\mathbf{q} \in \mathbb{Z}^m$ the distance between $~\mathbf{x}$ and the point $~\{A \mathbf{q}\}$ is at least $~c(\mathbf{x}) |\mathbf{q}|^{-m/n}$. It is shown that the intersection of $~\mathbf{Bad}_A$ with any suitably regular fractal set is of maximal Hausdorff dimension. The linear form systems investigated in this paper are natural extensions of irrational rotations of the circle. Even in the latter one-dimensional case, the results obtained are new. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 10, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9797682762145996, "perplexity": 112.32171207140016}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948426.82/warc/CC-MAIN-20180426164149-20180426184149-00177.warc.gz"} |
http://nrich.maths.org/646 | ### Some(?) of the Parts
A circle touches the lines OA, OB and AB where OA and OB are perpendicular. Show that the diameter of the circle is equal to the perimeter of the triangle | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8685755729675293, "perplexity": 313.0387233603941}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886108268.39/warc/CC-MAIN-20170821114342-20170821134342-00501.warc.gz"} |
http://www.purplemath.com/learning/viewtopic.php?f=17&t=534 | ## A recent study indicated that 20% of adults exercise... Find
Standard deviation, mean, variance, z-scores, t-tests, etc.
### A recent study indicated that 20% of adults exercise... Find
A recent study indicated that twenty percent of adults exercise regularly. Suppose that five adults are selected at random. Use the binomial probability formula to find the probability that the number of people in the sample who exercise is:
a. exactly 3
b. at least 3
Also, find the:
c. mean
d. standard deviation
FWT
Posts: 107
Joined: Sat Feb 28, 2009 8:53 pm
FWT wrote:A recent study indicated that twenty percent of adults exercise regularly. Suppose that five adults are selected at random. Use the binomial probability formula to find the probability that the number of people in the sample who exercise is:
a. exactly 3
b. at least 3
The formula for the probability P that x of n results will be "good" is:
. . . . .$P(x)\, =\, C(n,\, x)\,p^x\, (1\, -\, p)^{n\, -\, x}$
In this case, n = 5 and p = 0.2. Then:
. . . . .$\mbox{a. }\, P(3)\, =\, \frac{5!}{2!3!}\, (0.2)^3\, (0.8)^2$
Evaluate to find the needed value.
For part (b), note that "at least three" means three, four, or five. Compute all three, and sum the values.
FWT wrote:Also, find the:
c. mean
d. standard deviation
The expected value (or mean) is np, or (5)(0.2). The standard deviation is given by:
. . . . .$\sigma\, =\, \sqrt{np(1\, -\, p)}$
stapel_eliz
Posts: 1729
Joined: Mon Dec 08, 2008 4:22 pm
### Re: A recent study indicated that 20% of adults exercise... Find
for (b) I did P(3)+P(4)+P(5) and got the ansewr in the back of the book. thanks!
FWT
Posts: 107
Joined: Sat Feb 28, 2009 8:53 pm | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 3, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9427481293678284, "perplexity": 1864.6502354611528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122152935.2/warc/CC-MAIN-20150124175552-00111-ip-10-180-212-252.ec2.internal.warc.gz"} |
http://physics.stackexchange.com/questions/26826/quantum-statistics-of-branes?answertab=votes | # Quantum statistics of branes
Quantum statistics of particles (bosons, fermions, anyons) arises due to the possible topologies of curves in D-dimensional spacetime winding around each other
What happens if we replace particles by branes? It seems like their quantum statistics should be described by something like a generalization of TQFT in which the "spacetime" (worldbrane) is equipped with an embedding into an "ambient" manifold (actual spacetime). The inclusion of non-trivial topology for the "ambient" manifold introduces additional effects, to 1st approximation describable by inclusion of k-form fluxes coupling to the brane. To 2nd approximation, however, there is probably non-trivial coupling between these fluxes and the "generalized quantum statistics"
A simple example of non-trivial "brane quantum statistics" is the multiplication of quantum amplitudes of strings by the exponential of the euler charactestic times a constant. In string theory this corresponds to changing the string coupling constant / dilaton background.
Were such generalized TQFTs studied? Which non-trivial examples are there for branes in string theory?
-
You are probably aware of this, but just for completeness: N coincident branes have a U(N) gauge symmetry, which is broken to $U(1)^N\times S_N$ when they are separated. The permutation symmetry $S_N$ is a discrete gauge symmetry which ensures branes are treated as identical particles. Your question seems related to which kind of identical particles they are (bosons, fermions, or anyons). – user566 Dec 3 '11 at 2:18
Just slightly picky, user: the product should be semidirect, not direct. ;-) – Luboš Motl Jan 30 at 14:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8849626779556274, "perplexity": 1581.0663486204191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663007.13/warc/CC-MAIN-20140930004103-00464-ip-10-234-18-248.ec2.internal.warc.gz"} |
https://www.ideals.illinois.edu/browse?type=subject&value=Philosophy%20of%20Language | # Browse by Subject "Philosophy of Language"
• (2014-01-16)
Classical logic and a given nonclassical logic are, by definition, incompatible in some sense. In some cases, this incompatibility is innocuous. In other cases, the nonclassical logic is incompatible with classical logic ...
application/pdf
PDF (460Kb)
• (2012-09-18)
This dissertation presents a novel theory of tense and tense-like constructions. It is named after a key theoretical component of the theory, the event restriction interval. In Event Restriction Interval (ERI) Theory, ...
application/pdf
PDF (1Mb) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.812278151512146, "perplexity": 4299.544464698007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297689.58/warc/CC-MAIN-20150323172137-00221-ip-10-168-14-71.ec2.internal.warc.gz"} |
https://brilliant.org/problems/answer-within-a-minute-iv-corrected/ | # Answer Within A Minute IV (Corrected)
Algebra Level 1
Evaluate the following expression:
$123456789^{2} - (123456788 \times 123456790).$
If you use a calculator whose precision is not strong enough to answer this question, then you will answer this problem incorrectly.
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8374241590499878, "perplexity": 1162.3897594153766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948126.97/warc/CC-MAIN-20180426105552-20180426125552-00629.warc.gz"} |
http://semantic-domain.blogspot.ca/2011/ | ## Monday, December 19, 2011
### Adding Equations to System F, at ESOP 2012
Our paper on adding equations to system F was accepted to ESOP 2012!
The reviewers suggested rather a lot of improvements to the paper, so I'm taking the draft down while we incorporate their suggestions. It will go up again in a few weeks, after the revisions are done, and undoubtedly in a much-improved state.
## Tuesday, November 15, 2011
### Updata
The final version of our POPL paper on space-bounded FRP is available now -- we bought two extra pages from the ACM, and used them to add a lot more detail to the examples. With luck the paper will be a lot clearer now.
Also, Bob Atkey has a new blog post on using delay operators to model guarded definitions in type theory. He takes advantage of the fact that delay is a strong lax monoidal functor (with respect to the monoidal structure of products) to use the syntax of applicative functors.
This is a very elegant way of embedding these things into Haskell. Unfortunately, I have never liked the typing rules for the idiom syntax, since they don't work solely on the outermost type constructor. This means type-theoretic properties like normalization are messier to prove. (Bob evades this problem with semantic techniques, though.)
All his other posts are very good, too.
## Monday, October 17, 2011
### Adding Equations to System F
Nick and I have a new draft out, on adding types for term-level equations to System F. Contrary to the experience of dependent types, this is not a very hairy extension -- in fact, I would not even hesitate to call it simple.
However, it does open the door to all sorts of exciting things, such as many peoples' long-standing goal of putting semantic properties of modules into the module interfaces. This is good for documentation, and also (I would hope) good for compilers --- imagine Haskell, if the Monad typeclass definition also told you (and ghc!) all the equational rewriting that it was supposed to do.
## Tuesday, October 4, 2011
### Higher-Order Functional Reactive Programming in Bounded Space
I just learned that our (with Nick Benton and Jan Hoffmann) paper, Higher-Order Functional Reactive Programming in Bounded Space, was accepted for publication at POPL 2012!
I am very happy with this work, since it resolves some of the thorniest problems in FRP (memory usage and space leaks) without giving up the highly expressive higher-order abstractions that makes FRP attractive in the first place.
A technical surprise in this work is that this stuff is the first place I've seen where the magic wand of separation logic is actually essential. The denotational model in this paper is a variation on Martin Hofmann's length space model, which forms a doubly-closed category modelling bunched implications. In this paper, we needed both function spaces: we needed the linear function space to control allocation, and we needed the ordinary function space to make use of sharing.
It would be interesting to see if there are ways to transport some of these ideas back to Hoare-style separation logic to find some interesting new invariants.
## Thursday, September 29, 2011
This isn't really on-topic for this blog, but I can't resist posting this: the Princeton mathematician Edward Nelson claims to have found an inconsistency in arithmetic! On the FOM mailing list, he posted:
I am writing up a proof that Peano arithmetic (P), and even a small fragment of primitive-recursive arithmetic (PRA), are inconsistent. This is posted as a Work in Progress at http://www.math.princeton.edu/~nelson/books.html
A short outline of the book is at:
http://www.math.princeton.edu/~nelson/papers/outline.pdf
The outline begins with a formalist critique of finitism, making the case that there are tacit infinitary assumptions underlying finitism. Then the outline describes how inconsistency will be proved. It concludes with remarks on how to do modern mathematics within a consistent theory.
There's some discussion of this at The n-Category Cafe, including a brief (so far) discussion between Terry Tao and Nelson himself. Obviously, I expect a flaw will be found in his proof -- but I sure hope he's right! That would mean all of mathematics will be in need of revision.
Update: Nelson says that Tao has indeed found a hole in the proof. Exponentiation remains total, for now.
## Wednesday, September 28, 2011
### The most surprising paper at ICFP
I was just at ICFP, which was very nice -- it was my first trip ever to Japan, and I found the people very friendly. (The cuisine, alas, is not so vegetarian-friendly, if you do not regard fish as a vegetable. But the people made up for it!) There were many excellent talks, but only one result which really shocked me:
Linearity and PCF: a semantic insight!, by Marco Gaboardi, Luca Paolini, and Mauro Piccolo.
Linearity is a multi-faceted and ubiquitous notion in the analysis and the development of programming language concepts. We study linearity in a denotational perspective by picking out programs that correspond to linear functions between coherence spaces.
We introduce a language, named SlPCF*, that increases the higher-order expressivity of a linear core of PCF by means of new operators related to exception handling and parallel evaluation. SlPCF* allows us to program all the finite elements of the model and, consequently, it entails a full abstraction result that makes the reasoning on the equivalence between programs simpler.
Denotational linearity provides also crucial information for the operational evaluation of programs. We formalize two evaluation machineries for the language. The first one is an abstract and concise operational semantics designed with the aim of explaining the new operators, and is based on an infinite-branching search of the evaluation space. The second one is more concrete and it prunes such a space, by exploiting the linear assumptions. This can also be regarded as a base for an implementation.
In this paper, the authors considered a really simple language based on coherence spaces, consisting of the flat coherence space of natural numbers and the linear function space. The nice thing about this model is that tokens of function spaces are just trees with natural numbers at the leaves, with branching determined by the parenthesization of the function type. This really shows off how concrete, simple, and easy-to-use coherence spaces are. Then they found the extension to PCF for which this model was fully abstract, as semanticists are prone to doing.
But: the additional operator and its semantics are really bizarre and ugly. I don't mean this as a criticism -- in fact it is exactly why I liked their paper so much! It implies that there are fundamental facts about linear types which we don't understand. I asked Marco Gaboardi about it after his talk, and he told me that he thinks the issue is that the properties of flat domains in coherence spaces are not well understood.
Anyway, this was a great paper, in a "heightening the contradictions" sort of way.
## Monday, August 8, 2011
### Functional Programming, Program Transformations, and Compiler Construction
I just discovered that Lex Augusteijn's 1993 PhD thesis, Functional Programming, Program Transformations, and Compiler Construction, is available online. It was from reading his papers with Renee Leermakers and Frans Kruseman-Aretz that I finally understood how LR parsing worked, so I'm looking forward to reading the extended exposition in this thesis.
## Wednesday, July 27, 2011
### A new lambda calculus for bunched implications
I am going to put off syntactic extensions for a bit, and talk about an entirely different bit of proof theory instead. I am going to talk about a new lambda-calculus for the logic of bunched implications that I have recently been working on. $\newcommand{\lolli}{\multimap} \newcommand{\tensor}{\otimes}$
First, the logic of bunched implications (henceforth BI) is the substructural logic associated with things like separation logic. Now, linear logic basically works by replacing the single context of intuitionistic logic with two contexts, one for unrestricted hypotheses (which behaves the same as the intuitionistic one, and access to which is controlled by a modality $!A$), and one which is linear --- the rules of contraction and weakening are no longer allowed. The intuitionistic connectives are then encoded, so that $A \to B \equiv !A \lolli B$. BI also puts contraction and weakening under control, but does not do so by simply creating two zones. Instead, contexts now become trees, which lets BI simply add the substructural connectives $A \tensor B$ and $A \lolli B$ to intuitionistic logic. Here's what things look like in the implicational fragment:
$$\begin{array}{lcl} A & ::= & P \bnfalt A \lolli B \bnfalt A \to B \\ \Gamma & ::= & A \bnfalt \cdot_a \bnfalt \Gamma; \Gamma \bnfalt \cdot_m \bnfalt \Gamma, \Gamma \\ \end{array}$$
Note that contexts are trees, since there are two context concatenation operators $\Gamma;\Gamma'$ and $\Gamma,\Gamma'$ (with units $\cdot_a$ and $\cdot_m$) which can be freely nested. They don't distribute over each other in any way, but they do satisfy the commutative monoid properties. The natural deduction rules look like this (implicitly assuming the commutative monoid properties):
$$\begin{array}{ll} \frac{}{A \vdash A} & \\ \frac{\Gamma; A \vdash B} {\Gamma \vdash A \to B} & \frac{\Gamma \vdash A \to B \qquad \Gamma' \vdash A} {\Gamma;\Gamma' \vdash B} \\ \frac{\Gamma, A \vdash B} {\Gamma \vdash A \lolli B} & \frac{\Gamma \vdash A \lolli B \qquad \Gamma' \vdash A} {\Gamma,\Gamma' \vdash B} \\ \frac{\Gamma(\Delta) \vdash A} {\Gamma(\Delta;\Delta') \vdash A} & \frac{\Gamma(\Delta;\Delta) \vdash A} {\Gamma(\Delta) \vdash A} \end{array}$$
Note that the substructural implication adds hypotheses with a comma, and the intuitionistic implication uses a semicolon. I've given the weakening and contraction explicitly in the final two rules, so we can reuse the hypothesis rule.
Adding lambda-terms to this calculus is pretty straightforward, too:
$$\begin{array}{ll} \frac{}{x:A \vdash x:A} & \\ \frac{\Gamma; x:A \vdash e:B} {\Gamma \vdash \fun{x}{e} : A \to B} & \frac{\Gamma \vdash e : A \to B \qquad \Gamma' \vdash e' : A} {\Gamma;\Gamma' \vdash e\;e' : B} \\ \frac{\Gamma, x:A \vdash e : B} {\Gamma \vdash A \lolli \hat{\lambda}x.e : B} & \frac{\Gamma \vdash e : A \lolli B \qquad \Gamma' e': \vdash A} {\Gamma,\Gamma' \vdash e\;e' : B} \\ \frac{\Gamma(\Delta) \vdash e : A} {\Gamma(\Delta;\Delta') \vdash e : A} & \frac{\Gamma(\Delta;\Delta') \vdash \rho(e) : A \qquad \Delta \equiv \rho \circ \Delta'} {\Gamma(\Delta) \vdash e : A} \end{array}$$
Unfortunately, typechecking these terms is a knotty little problem. The reason is basically that we want lambda terms to tell us what the derivation should be, without requiring us to do much search. But contraction can rename a host of variables at once, which means that there is a lot of search involved in typechecking these terms. (In fact, I personally don't know how to do it, though I expect that it's decidable, so somebody probably does.) What would be really nice is a calculus which is saturated with respect to weakening, so that the computational content of the weakening lemma is just the identity on derivation trees, and for which the computational content of contraction is an \emph{easy} renaming of a single obvious variable.
Applying the usual PL methodology of redefining the problem to one that can be solved, we can give an alternate type theory for BI in the following way. First, we'll define nested contexts so that they make the alternation of spatial and intuitionistic parts explicit:
$$\begin{array}{lcl} \Gamma & ::= & \cdot \bnfalt \Gamma; x:A \bnfalt r[\Delta] \\ \Delta & ::= & \cdot \bnfalt \Delta, x:A \bnfalt r[\Gamma] \\ & & \\ e & ::= & x \bnfalt \fun{x}{e} \bnfalt \hat{\lambda}x.\;e \bnfalt e\;e' \bnfalt r[e] \bnfalt \rho r.\;e \\ \end{array}$$
The key idea is to make the alternation of the spatial parts syntactic, and then to name each level shift with a variable $r$. So $x$ are ordinary variables, and $r$ are variables naming nested contexts. Then we'll add syntax $r[e]$ and $\rho r.e$ to explicitly annotate the level shifts:
$$\begin{array}{ll} \frac{}{\Gamma; x:A \vdash x:A} & \frac{}{\cdot,x:A \vdash x:A} \\ \frac{\Gamma; x:A \vdash e:B} {\Gamma \vdash \fun{x}{e} : A \to B} & \frac{\Gamma \vdash e : A \to B \qquad \Gamma \vdash e' : A} {\Gamma \vdash e\;e' : B} \\ \frac{\Delta, x:A \vdash e : B} {\Delta \vdash A \lolli \hat{\lambda}x.e : B} & \frac{\Delta \vdash e : A \lolli B \qquad \Delta' e': \vdash A} {\Delta,\Delta' \vdash e\;e' : B} \\ \frac{r[\Gamma] \vdash e : A} {\Gamma \vdash \rho r.\;e : A} & \frac{r[\Delta] \vdash e : A} {\Delta \vdash \rho r.\;e : A} \\ \frac{\Gamma \vdash e : A} {\cdot, r[\Gamma] \vdash r[e] : A} & \frac{\Delta \vdash e : A} {\Gamma; r[\Delta] \vdash r[e] : A} \end{array}$$
It's fairly straightforward to prove that contraction and weakening are admissible, and doing a contraction is now pretty easy, since you just have to rename some occurences of $r$ to two different variables, but may otherwise leave the branching contexts to be the same.
It's obvious that you can take any derivation in this calculus and erase the $r$-variables to get a well-typed term of the $\alpha\lambda$-calculus. It's only a bit more work to go the other way: given an $\alpha\lambda$-derivation, you prove that given any new context which erases to the $\alpha\lambda$-context, you can find a new derivation proving the same thign. Then there's an obvious proof that you can indeed find a new-context which erases properly.
One interesting feature of this calculus is that the $r$-variables resemble regions in region calculi. The connection is not entirely clear to me, since my variables don't show up in the types. This reminds me a little of the contextual modal type theory of Nanevski, Pfenning and Pientka. It reminds me even more of Bob Atkey's PhD thesis on generalizations of BI to allow arbitrary graph-structured sharing. But all of these connections are still speculative.
There is one remaining feature of these rules which is still non-algorithmic: as in linear logic, the context splitting in the spatial rules with multiple premises (for example, the spatial application rule) just assumes that you know how to divide the context into two pieces. I think the standard state-passing trick should still work, and it may even be nicer than the additives of linear logic. But I haven't worked out the details yet.
## Tuesday, July 26, 2011
### Functional Programming as a Particular Use of Modules
Bob Harper sometimes gets grumbly when people say that ML is an impure language, even though he knows exactly what they mean (and, indeed, agrees with it), because this way of phrasing things does not pay data abstraction its full due.
Data abstraction lets us write verifiably pure programs in ML. By "verifiably pure", I mean that we can use the type system to guarantee that our functions are pure and total, even though ML's native function space contains all sorts of crazy effects involving higher-order state, control, IO, and concurrency. (Indeed, ML functions can even spend your money to rent servers: now that's a side effect!) Given that the ML function space contains these astonishing prodigies and freaks of nature, how can we control them? The answer is data abstraction: we can define new types of functions which only contains well-behaved functions, and ensure through type abstraction that the only ways to form elements of this new type preserve well-behavedness.
Indeed, we will not just define a type of pure functions, but give an interface containing all the type constructors of the guarded recursion calculus I have described in the last few posts. The basic idea is to give an ML module signature containing:
• One ML type constructor for each type constructor of the guarded recursion calculus.
• A type constructor for the hom-sets of the categorical interpretation of this calculus
• One function in the interface for each categorical combinator, such as identity, composition, and each of the natural transformations corresponding to the universal properties of the functors interpreting the type constructors.
That's a mouthful, but it is much easier to understand by looking at the following (slightly pretty-printed) Ocaml module signature:
module type GUARDED =sig type one type α × β type α ⇒ β type α stream type num type •α type (α, β) hom val id : (α, α) hom val compose : (α, β) hom -> (β, γ) hom -> (α, γ) hom val one : (α, one) hom val fst : (α × β, α) hom val snd : (α × β, β) hom val pair : (α, β) hom -> (α, γ) hom -> (α, β × γ) hom val curry : (α × β,γ) hom -> (α,(β,γ) exp) hom val eval : ((α ⇒ β, α) times, β) hom val head : (α stream, α) hom val tail : (α stream, •(α stream)) hom val cons : (α × •(α stream), α stream) hom val zero : (one,num) hom val succ : (num, num) hom val plus : (num × num, num) hom val prod : (num × num, num) hom val delay : (α, •α) hom val next : (α,β) hom -> (•α, •β) hom val zip : (•α × •β, •(α × β)) hom val unzip : (•(α × β), •α × •β) hom val fix : (•α ⇒ α, α) hom val run : (one, num stream) hom -> (unit -> int)end
As you can see, we introduce abstract types corresponding to each of our calculus's type constructors. So α × β and α ⇒ β are not ML pairs and functions, but rather is the type of products and functions of our calculus. This is really the key idea -- since ML functions have too much stuff in them, we'll define a new type of pure functions. I replaced the "natural" numbers of the previous posts with a stream type, corresponding to our LICS 2011 paper, since they are really a kind of lazy conatural, and not the true inductive type of natural numbers. The calculus guarantees definitions are productive, but it's kind of weird in ML to see something called nat which isn't. So I replaced it with streams, which are supposed to yield an unbounded number of elements. (For true naturals, you'll have to wait for my post on Mendler iteration, which is a delightful application of parametricity.)
The run function takes a num stream, and gives you back an imperative function that successively enumerates the elements of the stream. This is the basic trick for making streams fit nicely into an event loop a la FRP.
However, we can implement these functions and types using traditional ML types:
module Guarded : GUARDED =struct type one = unit type α × β = α * β type α ⇒ β = α -> β type •α = unit -> α type α stream = Stream of α * α stream delay type num = int type (α, β) hom = α -> β let id x = x let compose f g a = g (f a) let one a = () let fst (a,b) = a let snd (a,b) = b let pair f g a = (f a, g a) let curry f a = fun b -> f(a,b) let eval (f,a) = f a let head (Stream(a, as')) = a let tail (Stream(a, as')) = as' let cons (a,as') = Stream(a,as') let zero () = 0 let succ n = n + 1 let plus (n,m) = n + m let prod (n,m) = n * m let delay v = fun () -> v let next f a' = fun () -> f(a'()) let zip (a',b') = fun () -> (a'(), b'()) let unzip ab' = (fun () -> fst(ab'())), (fun () -> snd(ab'())) let rec fix f = f (fun () -> fix f) let run h = let r = ref (h()) in fun () -> let Stream(x, xs') = !r in let () = r := xs'() in xend
Here, we're basically just implementing the operations of the guarded recursion calculus using the facilities offered by ML. So our guarded functions are just plain old ML functions, which happen to live at a type in which they cannot be used to misbehave!
This is the sense in which data abstraction lets us have our cake (effects!) and eat it too (purity!).
Note that when we want to turn a guarded lambda-term into an ML term, we can basically follow the categorical semantics to tell us what to write. Even though typechecking will catch all misuses of this DSL, actually using it is honestly not that much fun (unless you're a Forth programmer), since even even small terms turn into horrendous combinator expressions -- but in another post I'll show how we can write a CamlP4 macro/mini-compiler to embed this language into OCaml. This macro will turn out to involve some nice proof theory there, just as this ML implementation shows off how to use the denotational semantics in our ML programming.
## Wednesday, July 20, 2011
### Termination of guarded recursion
I will now give a termination proof for the guarded recursion calculus I sketched in the last two posts. This post got delayed because I tried to oversimplify the proof, and that didn't work -- I actually had to go back and look at Nakano's original proof to figure out where I was going wrong. It turns out the proof is still quite simple, but there's one really devious subtlety in it.
First, we recall the types, syntax and values.
A ::= N | A → B | •Ae ::= z | s(e) | case(e, z → e₀, s(x) → e₁) | λx.e | e e | •e | let •x = e in e | μx.e | xv ::= z | s(e) | λx.e | •e
The typing rules are in an earlier post, and I give some big-step evaluation rules at the end of this post. Now, the question is, given · ⊢ e : A[n], can we show that e ↝ v?
To do this, we'll turn to our old friend, Mrs. step-indexed logical relation. This is a Kripke logical relation in which the Kripke worlds are given by the natural numbers and the accessibility relation is given by ≤. So, we define a family of predicates on closed values indexed by type, and by a Kripke world (i.e., a natural number n).
V(•A)ⁿ = {•e | ∀j<n. e ∈ E(A)ʲ}V(A ⇒ B)ⁿ = {λx.e | ∀j≤n, v ∈ V(A)ʲ. [v/x]e ∈ E(B)ʲ}V(N)ⁿ = {z} ∪ { s(e) | ∀j<n. e ∈ E(N)ʲ}E(A)ʲ = {e | ∃v. e ↝ v and v ∈ V(A)ʲ}
This follows the usual pattern of logical relations, where we give a relation defining values mutually recursively with a relation defining well-behaved expressions (i.e., expressions are ok if they terminate and reduce to a value in the relation at that type).
Note that as we expect, j ≤ n implies V(A)ⁿ ⊆ V(A)ʲ. (The apparent antitonicity comes from the fact that if v is in the n-relation, it's also in the j relation.) One critical feature of this definition is that at n = 0, the condition on V(•A)⁰ always holds, because of the strict less-than in the definition.
The fun happens in the interpretation of contexts:
Ctxⁿ(· :: j) = ()Ctxⁿ(Γ,x:A[i] :: j) = {(γ,[v/x]) | γ ∈ Ctxⁿ(Γ) and v ∈ Vⁿ(A)} when i ≤ jCtxⁿ(Γ,x:A[j+l] :: j) = {(γ,[e/x]) | γ ∈ Ctxⁿ(Γ) and •ˡe ∈ Vⁿ(•ˡA)} when l > 0
The context interpretation has a strange dual nature. At times less than or equal to j, it is a familiar context of values. But at future times, it is a context of expressions. This is because the evaluation rules substitute values for variables at the current time, and expressions for variables at future times. We abuse the bullet value relation in the third clause, to more closely follow Nakano's proof.
On the one hand, the fixed point operator is μx.e at any type A, and unfolding this fixed point has to substitute an expression (the mu-term itself) for the variable x. So the fixed point rule tells us that there is something necessarily lazy going on.
One the other hand, the focusing behavior of this connective is quite bizarre. It is not apparently positive or negative, since it distribute neither through all positives (eg, •(A + B) ≄ •A + •B) nor is it the case that it distributes through all negatives (eg, •(A → B) ≄ •A → •B). (See Noam Zeilberger, The Logical Basis of Evaluation Order and Pattern Matching.)
I take this to mean that •A should probably be decomposed further. I have no present idea of how to do it, though.
Anyway, this is enough to let you prove the fundamental property of logical relations:
Theorem (Fundamental Property). If Γ ⊢ e : A[j], then for all n and γ ∈ Ctxⁿ(Γ :: j), we have that γ(e) ∈ Eⁿ(A).
The proof of this is a straightforward induction on typing derivations, with one nested induction at the fixed point rule. I'll sketch that case of the proof here, assuming an empty context Γ just to reduce the notation:
Case: · ⊢ μx.e : A[j]By inversion: x:A[j+1] ⊢ e : A[j]By induction, for all n,e'. if •e' ∈ Vⁿ(•A) then [e'/x]e ∈ Eⁿ(A)By nested induction on n, we'll show that [μx.e/x]e ∈ Eⁿ(A)Subcase n = 0: We know if •μx.e ∈ V⁰(•A) then [μx.e/x]e ∈ E⁰(A) We know •μx.e ∈ V⁰(•A) is true, since · ⊢ μx.e : A[j] Hence [μx.e/x]e ∈ E⁰(A) Hence μx.e ∈ E⁰(A)Subcase n = x+1: We know if •μx.e ∈ Vˣ⁺¹(•A) then [μx.e/x]e ∈ Eˣ⁺¹(A) By induction, we know [μx.e/x]e ∈ Eˣ(A) Hence μx.e ∈ Eˣ(A) Hence •μx.e ∈ Vˣ⁺¹(A) So [μx.e/x]e ∈ Eˣ⁺¹(A) Hence μx.e ∈ Eˣ⁺¹(A)
Once we have the fundamental property of logical relations, the normalization theorem follows immediately.
Corollary (Termination). If · ⊢ e : A[n], then ∃v. e ↝ v.
Evaluation rules:
e₁ ↝ (λx.e) e₂ ↝ v [v/x]e ↝ v' —————— ———————————————————————————————— v ↝ v e₁ e₂ ↝ v' e ↝ z e₀ ↝ v e ↝ s(e) [e/x]e₁ ↝ v —————————————————————————————— —————————————————————————————————case(e, z → e₀, s(x) → e₁) ↝ v case(s(e), z → e₀, s(x) → e₁) ↝ ve₁ ↝ •e [e/x]e' ↝ v [μx.e/x]e ↝ v ————————————————————— ——————————————let •x = e₁ in e₂ ↝ v μx.e ↝ v
## Friday, July 15, 2011
### Semantics of a weak delay modality
In my previous post, I sketched some typing rules for a guarded recursion calculus. Now I'll give its categorical semantics. So, suppose we have a Cartesian closed category with a delay functor and the functorial action and natural transformations:
•(f : A → B) : •A → •Bδ : A → •Aι : •A × •B → •(A × B)ι⁻¹ : •(A × B) → •A × •Bfix : (•A ⇒ A) → A
I intend that the next modality is a Cartesian functor (ie, distributes through products) and furthermore we have a delay operator δ. We also have a fixed point operator for the language. However, I don't assume that the delay distributes through the exponential. Now, we can follow the usual pattern of categorical logic, and interpret contexts and types as objects, and terms as morphisms. So types are interpreted as follows:
〚A → B〛 = 〚A〛 ⇒ 〚B〛〚•A〛 = •〚A〛
Note that we haven't done anything with time indices yet. They will start to appear with the interpretation of contexts, which is relativized by time:
〚·〛ⁿ = 1〚Γ, x:A[j]〛ⁿ = 〚Γ〛 × •⁽ʲ⁻ⁿ⁾〚A〛 if j > n〚Γ, x:A[j]〛ⁿ = 〚Γ〛 × 〚A〛 if j ≤ n
The idea is that we interpret a context at time n, and so all the indices are interpreted relative to that. If the index j is bigger than n, then we delay the hypothesis, and otherwise we don't. Then we can interpret morphisms at time n as 〚Γ ⊢ e : A[n]〛 ∈ 〚Γ〛ⁿ → 〚A〛, which we give below:
〚Γ ⊢ e : A[n]〛 ∈ 〚Γ〛ⁿ → 〚A〛〚Γ ⊢ μx.e : A[n]〛 = fix ○ λ(〚Γ, x:A[n+1] ⊢ e : A[n]〛)〚Γ ⊢ x : A[n]〛 = π(x) (where x:A[j] ∈ Γ)〚Γ ⊢ λx.e : A → B[n]〛 = λ(〚Γ, x:A[n] ⊢ e : B[n]〛)〚Γ ⊢ e e' : B[n]〛 = eval ○ ⟨〚Γ ⊢ e : A → B[n]〛, 〚Γ ⊢ e' : B[n]〛⟩〚Γ ⊢ •e : •A[n]〛 = •〚Γ ⊢ e : A[n+1]〛 ○ Nextⁿ(Γ)〚Γ ⊢ let •x = e in e' : •B[n]〛 = 〚Γ, x:A[n+1] ⊢ e' : B[n]〛 ○ ⟨id(Γ), 〚Γ ⊢ e : •A[n]〛⟩
Most of these rules are standard, with the exception of the introduction rule for delays. We interpret the body •e at time n+1, and then use the functorial action to get an element of type •A. This means we need to take a context at time n and produce a delayed one interpreted at time n+1.
Nextⁿ(Γ) ∈ 〚Γ〛ⁿ → •〚Γ〛ⁿ⁺¹Nextⁿ(·) = δ₁Nextⁿ(Γ, x:A[j]) = ι ○ (Nextⁿ(Γ) × δʲ⁻ⁿ) if j > nNextⁿ(Γ, x:A[j]) = ι ○ (Nextⁿ(Γ) × δ) if j ≤ n
I think this is a pretty slick way of interpreting hybrid annotations, and a trick worth remembering for other type constructors that don't necessarily play nicely with implications.
Next up, if I find a proof small enough to blog, is a cut-elimination/normalization proof for this calculus.
## Wednesday, July 13, 2011
### Guarded recursion with a weaker-than-Nakano guard modality
We have a new draft paper up, on controlling the memory usage of FRP. I have to say that I really enjoy this line of work: there's a very strong interplay between theory and practice. For example, this paper --- which is chock full of type theory and denotational semantics --- is strongly motivated by questions that arose from thinking about how to make our implementation efficient.
In this post, I'm going to start spinning out some consequences of one minor point of our current draft, which we do not draw much attention to. (It's not really the point of the paper, and isn't really enough to be a paper on its own -- which makes it perfect for research blogging.) Namely, we have a new delay modality, which substantially differs from the original Nakano proposed in his LICS 2000 paper.
Recall that the delay modality $\bullet A$ is a type constructor for guarded recursion. I'll start by giving a small type theory for guarded recursion below.
$$\begin{array}{lcl} A & ::= & A \to B \;\;|\;\; \bullet A \;\;|\;\; \mathbb{N} \\ \Gamma & ::= & \cdot \;\;|\;\; \Gamma, x:A[i] \end{array}$$
As can be seen above, the types are delay types, functions, and natural numbers, and contexts come indexed with time indices $i$. So is the typing judgement $\Gamma \vdash e : A[i]$.
$$\begin{array}{ll} \frac{x:A[i] \in \Gamma \qquad i \leq j} {\Gamma \vdash x : A[j]} & \frac{\begin{array}{l} \Gamma \vdash e:A[i] & \Gamma,x:A[i] \vdash e' : B[i] \end{array}} {\Gamma \vdash \mathsf{let}\; x = e \;\mathsf{in}\; e' : B[i]} \\ & \\ \frac{\Gamma, x:A[i] \vdash e : B[i]} {\Gamma \vdash \lambda x.\;e : A \to B} & \frac{\begin{array}{l} \Gamma \vdash e : A \to B [i] & \Gamma \vdash e' : A[i] \end{array}} {\Gamma \vdash e \; e' : B[i]} \\ & \\ \frac{\Gamma \vdash e : A[i+1]} {\Gamma \vdash \bullet e : A[i]} & \frac{\begin{array}{ll} \Gamma \vdash e : \bullet A[i] & \Gamma, x:A[i+1] \vdash e' : B[i] \end{array}} {\Gamma \vdash \mathsf{let}\; \bullet x = e \;\mathsf{in}\; e' : B[i]} \\ & \\ \frac{} {\Gamma \vdash \mathsf{z} : \mathbb{N}[i]} & \frac{\Gamma \vdash e : \mathbb{N}[i+1]} {\Gamma \vdash \mathsf{s}(e) : \mathbb{N}[i]} \\ & \\ \frac{\Gamma, x:A[i+1] \vdash e : A[i]} {\Gamma \vdash \mu x.\;e : A[i]} & \frac{\begin{array}{l} \Gamma \vdash e : \mathbb{N}[i] \\ \Gamma \vdash e_1 : A[i] \\ \Gamma, x:\mathbb{N}[i+1] \vdash e_2 : A[i] \end{array}} {\Gamma \vdash \mathsf{case}(e, \mathsf{z} \to e_1, \mathsf{s}(x) \to e_2) : A[i]} \end{array}$$
The $i+1$ in the successor rule for natural numbers pairs with the rule for case statements, and this is what allows the fixed point rule to do its magic. Fixed points are only well-typed if the recursion variable occurs at a later time, and the case statement for numbers gives the variable one step later. So by typing we guarantee well-founded recursion!
The intro and elim rules for delays internalize increasing the time index, so that an intro for $\bullet A$ at time $i$ takes an expression of type $A$ at time $i+1$. We have a let-binding elimination form for delays, which differs from our earlier LICS paper, which had a direct-style elimination for delays. The new elimination is weaker than the old one, in that it cannot prove the isomorphism of $\bullet(A \to B) \simeq \bullet A \to \bullet B$.
This is really quite great, since that isomorphism was really hard to implement! (It was maybe half the complexity of the logical relation.) The only question is whether or not we can still give a categorical semantics to this language. In fact, we can, and I'll
describe it in my next post.
## Friday, June 17, 2011
The constructive lift monad is the coinductive type $T(A) \equiv \nu\alpha.\; A + \alpha.$
The intuition is that an element of this type either tells you a value of type $A$, or tells you to compute some more and try again. Nontermination is modeled by the element which never returns a value, and always keeps telling you to compute some more.
Our goal is to construct a general fixed-point combinator $\mu(f : TA \to TA) : 1 \to TA$, which takes an $f$ and then produces a computation corresponding to the fixed point of $f$. To fix notation, we'll take the constructors to be:
$$\begin{array}{lcl} \mathsf{roll} & : & A + TA \to TA \\ \mathsf{unroll} & : & TA \to A + TA \end{array}$$
Since this is a coinductive type, we also have an unfold satisfying the following equation:
$$\mathsf{unfold}(f : X \to A + X) : X \to TA \equiv \mathsf{roll} \circ (\mathit{id} + (\mathsf{unfold}\; f)) \circ f$$
First, we will explicitly construct the bottom element, corresponding to the computation that runs forever, with the following definition:
$$\bot : 1 \to TA \equiv \mathsf{unfold}(\mathsf{inr})$$
This definition just keeps telling us to wait, over and over again. Now, we can define the fixed point operator:
$$\begin{array}{l} \mu(f : TA \to TA) : 1 \to TA \\ \mu(f) \equiv (\mathsf{unfold} (\mathsf{unroll} \circ f)) \circ \bot \end{array}$$
What this does is to pass bottom to $f$. If $f$ returns a value, then we're done. Otherwise, $f$ returns us another thunk, which we can pass back to $f$ again, and repeat.
Of course, this is exactly the intuition behind fixed points in domain theory. Lifting in domains is usually defined directly, and I don't know who invented the idea of defining it as a coinductive type. I do recall a 1999 paper by Martin Escardo which uses (a slight variant of) it, and he refers to it as a well-known construction in metric spaces, so probably the papers from the Dutch school of semantics are a good place to start the search. This construction has seen a renewed burst of interest in the last few years, since it offers a relatively convenient way to represent nonterminating functions in dependent type theory. It's also closely related to step-indexed models in operational semantics, since, given a terminating element of the lift monad, you can compute how many steps it takes to return a value.
It's stuff like this that makes me say I can't tell the difference between operational and denotational semantics any more.
## Tuesday, May 31, 2011
### ICFP 2011
I was just notified that the paper I wrote with Nick Benton, A Semantic Model of Graphical User Interfaces, was accepted to ICFP 2011!
This is my first paper at ICFP, so I'm quite excited. Also, I accidentally misread the submission guidelines and wrote a 10-page draft, instead of a 12-page draft. As a result, I have two whole pages to add examples, intuition, and motivation. This feels positively decadent. :)
## Thursday, May 5, 2011
### GUI Programming and MVC
This post was inspired by this post on William Cook's blog.
I've been thinking a lot about GUIs lately, and I think model-view-controller is a very good idea -- but one that existing languages do a poor job supporting. Smalltalk deserves credit for making these thoughts possible for Trygve Reenskaug to express, but it's been over 30 years; we really ought to have made it easy to express by now.
Now, the point of a controller is to take low-level events and turn them into high-level events. Note that "low-level" and "high-level" ought to be relative notions -- the toolkit might give us mouse and keyboard events, from which we write button widgets to turn them into click events, from which we write calculator button turns into application events (numbers and operations).
However, in reality new widgets tend to be extremely painful to write, since it usually involves mucking around with the guts of the event loop, and so people tend to stick with what the toolkit gives them. This way, they don't need to understand the deep implementation invariants of the GUI toolkit. As a result, they don't really build GUI abstractions, and so they don't need separate controllers. (A good example would be to think about how hard it would be to write a new text entry widget in your toolkit of choice.)
So I see the move from MVC to MV as a symptom of a problem. The C is there to let you build abstractions, but since it's really hard we don't. As a result new frameworks drop the C, and just grow by accretion -- the toolkit designers add some new widgets with each release, and everybody just uses them. I find this a little bit sad, honestly.
As you might guess from this blog, I find this quite an interesting problem. I think functional reactive programming offers a relatively convenient model (stream transducers) to write event processors with, but we have the problem that FRP systems tend towards the unusably inefficient. In some sense this is the opposite problem of MVC, which can be rather efficient, but can require very involved reasoning to get correct.
This basically leads to my current research program: can we compile functional reactive programming into model-view-controller code? Then you can combine the ease-of-use of FRP, with the relative efficiency[*] of the MVC design.
IMO, the system in our LICS paper is a good first step towards fixing this problem, but only a first step. It's quite efficient for many programs, but it's a bit too expressive: it's possible to write programs which leak rather a lot of memory without realizing it. Basically, the problem is that promoting streams across time requires buffering them, and it's possible to accidentally write programs which repeatedly buffer a stream at each tick, leading to unbounded memory use.
[*] As usual, "efficiency" is relative to the program architecture. MVC is a retained mode design, and if the UI is constantly changing a lot, you lose. For things like games, immediate mode GUIs seem like a better design to me. In the longer term, I'd like to try synthesizing these two designs, For example, even a live-document app like a spreadsheet or web page (the ideal cases for MVC) may want to embed video or 3D animations (which work better in immediate mode).
## Tuesday, April 19, 2011
### Models of Linear Logic: Length Spaces
One of the things I've wondered about is what the folk model of linear logic is. Basically, I've noticed that people use linear logic an awful lot, and they have very clear intuitions about the operational behavior they expect, but they are often not clear about what equational properties they expect to hold are.
Since asking that question, I've learned about a few models of linear logic, which are probably worth blogging about. Morally, I ought to start by showing how linear algebra is a model of linear logic, but I'll set that aside for now, and start with some examples that will be more familiar to computer scientists.
First, recall that the general model of simply-typed functional languages are cartesian closed categories, and that the naive model of functional programming is "sets and functions". That is, types are interpreted as sets, and terms are functions between sets, with the cartesian closed structure corresponding to function spaces on sets.
Next, recall that the general model of intuitionistic linear logic are symmetric monoidal closed categories. So the question to ask is, what are some natural symmetric monoidal closed category given by sets-with-structure and functions preserving that structure? There are actually many answers to this question, but one I like a lot I learned from a 1999 LICS paper by Martin Hofmann, and is called "length spaces". It is very simple, and I find it quite beautiful.
To understand the construction, let's first set out the problem: how can we write programs that respect some resource bounds? For example, we may wish to bound the memory usage of a language, so that every definable program is size-bounded: the output of a computation is no bigger than its input.
Now, let's proceed in a really simple-minded way. A length space is a pair $(A, \sigma_A)$, where $A$ is a set and $\sigma_A : A \to \N$ is a size function. Think of $A$ as the set of values in a type, and $\sigma_A$ as a function which tells you how big each element is. These pairs will be the objects of our category of length spaces.
Now, define a morphism $f : (A, \sigma) \to (B, \tau)$ to be a function on sets $A \to B$, with the property that $\forall a \in A.\; \tau(f\;a) \leq \sigma(a)$. That is, the size of the output is always less then or equal to the size of the input.
Here's the punchline: length spaces are a symmetric monoidal closed category.
Given two length spaces $(A, \sigma)$ and $(B, \tau)$, they have a monoidal product $(A, \sigma) \otimes (B, \tau) = (A \times B, \fun{(a,b)}{\sigma(a) + \tau(b)})$. The unit is $I = (\{*\}, \fun{*}{0})$, and the associators and unitors are inherited from the cartesian product. The intuition here is that strict pair, requires resources equal to the sum of the resources of the components.
The function space construction is pretty clever, too. The exponential $(B, \sigma) \multimap (C, \tau) = (B \to C, \fun{f}{\min\comprehend{a \in \N}{\forall b \in B.\;\tau(f\;b) \leq \sigma(b) + a}})$. The intuition here is that if you have a morphism $A \otimes B \to C$, when we curry it we want a morphism $A \to (B \multimap C)$. However, the returned function may have free variables in $A$, so the size of the value it can return should be bounded by the size of its input $b \in B$ plus the size of its environment.
We also have a cartesian product, corresponding to lazy pairs. We define $(A, \sigma) \times (B, \tau) = (A \times B, \fun{(a,b)}{\max(\sigma(a), \tau(b)})$. The intuition is that with a lazy pair, where we have the (linear) choice of either the first or second component, we can bound the space usage by maximum of the two alternatives.
Note how pretty this model is -- you can think about functions in a totally naive way, and that's fine. We just carry around a little extra data to model size information, and now we have a very pretty extensional model of an intensional property -- space usage. This also illustrates the general principle that increasing the observations you can do creates finer type distinctions.
## Friday, March 25, 2011
### A Semantic Model for GUI Programs
Our paper on ultrametric semantics will appear at LICS 2011!
We've got a new draft, too, on applying this to GUI programming. This turns out to be not entirely straightforward, since GUI widgets accept user input, and so the semantics of a GUI has to model that input. The usual way of modelling input is via a reader monad -- that is, we model computations as functions which accept input as an argument -- but in a GUI you can (for instance) write programs which dynamically create new buttons, and so create new input channels.
The thing we did -- which seems obvious to me, but which Nick says is unusual, and for which I would like references -- was to view things like buttons as giving you a set of values. Then you can say that creating a button is an operation in the nondeterminism monad (basically). Then you don't need to model state at all.
Since this is notionally a research blog, let me point out a bit of mathematics I really liked in this paper (not actually due to me). Since we were working in ultrametric spaces, we couldn't use powersets, obviously -- we needed power spaces. It turns out that the right notion of powerspace for a space $X$ is the collection of closed subsets of $X$, under the Hausdorff metric. Then when $X$ is a complete 1-bounded ultrametric space, then $\mathcal{P}(X)$ will be too.
This is all totally standard, but one thing that wasn't obvious to me is whether powerspace actually forms a monad! The multiplication for powerset (in sets) is $\mu : \mathcal{P}^2(A) \to \mathcal{P}(A) = \fun{U}{\{a \in A \;|\; \exists X \in U.\; a \in X\}}$ -- that is, $\mu(U) = \cup U$.
Since only finite unions of closed sets are closed, for powerspace the corresponding multiplication ought to be $\mu(U) = \mathrm{cl}(\cup U)$. However, it wasn't obvious to me whether this was a nonexpansive map or not. I learned the following super-slick proof from a paper of Steve Vickers, "Localic Completion of Generalized Metric Spaces II: Powerlocales". Despite the formidable title, it is a very readable paper.
Below is just the fragment of the proof for the lower half of the Hausdorff metric.
$$\begin{array}{lcl} d(U, V) \leq q & \iff & \forall X \in U, \exists Y \in V.\; d(X, Y) < q \\ & \iff & \forall X \in U, \exists Y \in V.\; \forall x \in X, \exists y \in Y.\; d(x,y) \leq q \\ & \Longrightarrow & \forall X \in U, \forall x \in X.\; \exists Y \in V, \exists y \in Y.\; d(x,y) \leq q \\ & \iff & \forall x \in \cup X, \exists y \in \cup V, d(x,y) \leq q \\ & \iff & d(\cup U, \cup V) \leq q \\ & \iff & d(\mathrm{cl}(\cup U), \mathrm{cl}(\cup V)) \leq q \\ & \iff & d(\mu(U), \mu(V)) \leq q \\ \end{array}$$
That quantifier flip is clever, but what makes this proof beautiful to me is introducing the auxilliary $q$. My own attempts just drowned in quantifier alternations, but this splits things up at just the right point. I'm reminded of Dijkstra's observation that most mathematicians do not make enough use of logical equivalence in their proofs, and clearly I fall into this category.
## Friday, January 14, 2011
### Ultrametric Semantics of Reactive Programs
I've got a new draft paper out with Nick Benton, Ultrametric Semantics of Reactive Programs.
We answer some longstanding open questions in FRP, namely:
1. What are higher-order reactive functions at all? How do you interpret them, and what are their semantics?
2. How can we implement FRP in a reasonable way, without compromising on either the equational theory or the ability to fit into the mutable callback event loop world of MVC?
It's got a little something for everyone -- some very pretty proof theory, some nice denotational semantics, and a hardcore separation logic proof with a step-indexed logical relation. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8206833004951477, "perplexity": 1243.7173660956469}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103579.21/warc/CC-MAIN-20170817151157-20170817171157-00646.warc.gz"} |
https://telescopeauthority.com/brightness-and-magnitude-in-astronomy/ | • Home /
• Guides
• / Brightness and Magnitude in Astronomy
# Brightness and Magnitude in Astronomy
3 years ago
Magnitude is an astronomical term that is used to describe precisely how bright a stellar object is. It can be done with both objective scientific measurements and a more qualitative classification of how bright the object is in the sky. Magnitude is measured on an inverse scale where lower numbers equal a brighter object. The traditional magnitude scale used in astronomy runs from 1 to 6.
The two major scales of magnitude are visual magnitude and absolute magnitude. Absolute magnitude is a scientific scale of how much light an object would shed if the observer was precisely 10 parsecs away from it. This distance equals about 33 light years, or 200 trillion miles, and the exact absolute brightness value of a heavenly body is determined by complex calculations of the elemental composition of the object, the amount of light it sheds measured in lumens and other scientific considerations. Absolute magnitude is rarely used outside of the scientific context.
Apparent magnitude, also known as visual magnitude, is a more intuitive form of classification. It measures how bright the object is by the time that its light reaches the earth. This value can be measured precisely through the use of light meters and telescopes, or it can be approximated simply by looking at the stellar objects with binoculars or the naked eye.
In fact, the magnitude scale of stellar brightness was invented thousands of years before telescopes or binoculars. The scale of apparent magnitude that we use today was laid down by the ancient Greek astronomers Hipparchus and Ptolemy, who classified the objects in the sky according to six categories of brightness. They chose about twenty of the stars that looked brightest to them and assigned them to the category of the first magnitude. The next brightest set of stellar objects was assigned to the second magnitude, all the way to the ones which could only barely be seen, which were all grouped in the sixth magnitude.
When telescopes, prisms and other optical devices were invented, they brought with them two important discoveries related to magnitude. The first was that there were a whole lot more stars below the sixth category than anyone had ever expected. The telescope revealed uncountable numbers of tiny lights that were far too dim to be seen with the unassisted eye. What is more, the telescope made it possible to measure the brightness coming from a particular star with ever-increasing specificity. Since the first magnitude was such a well-defined category and the sixth magnitude was used to apply to a vast number of objects at the very edge of human decision, the arbitrary decision was made to set the brightness of the sixth magnitude at 100 times less than the 1st magnitude. This transformed the magnitude scale into a logarithmic scale, which means that the quantities measured increased exponentially as the measurements increased arithmetically.
This historical tradition of astronomy explains why the stellar magnitude chart appears almost upside-down. Since the scale has been extended to cover things far outside the original scope, and since the measured brightness falls as the magnitude climbs, the chart can sometime seem to operate in a counter-intuitive manner. However, so long as one remembers that a bright star has a magnitude of about one, it is easy to reason the rest through.
The magnitude chart was expanded in the other direction as well. It became possible to measure exactly how much brighter planets such as Venus were than the heavenly bodies around them. In other words, they recorded precisely how much light there was coming from Venus when it was at its fullest brightness. It was found to be about a hundred times brighter than the objects that made up the traditional first magnitude, so the scale was simply extended into the negative numbers to accommodate. Venus has a visual magnitude of about -4. The Moon, by far the brightest of the planets we can see in the sky, has a magnitude of about -12 when it is at its fullest. The sun puts out about 400,000 times as much light as the full moon. This means that its visual magnitude is about -26.
As can be seen, this scale represents an excellent way to deal with the widely divergent lights seen in the sky. Although local atmospheric conditions are always paramount, on a clear night it should be possible to see stellar bodies all the way up to magnitude 6. A typical pair of field binoculars should be sufficient to see objects as dim as magnitude 9.5, which is more than a hundred times darker than the eyes can see alone. A telescope will naturally improve this ability to detect dimmer objects to an almost unimaginable degree. The wider the diameter of the telescope, the more light it will capture and the higher magnitude objects that can be seen. The only limits to this are found in the diameter of lens that can be procured and the atmospheric conditions prevailing at the viewing location. The Hubble Space Telescope has been able to resolve objects as dim as 31.5 magnitude in its nearly perfect viewing conditions outside the Earth’s atmosphere. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8551877737045288, "perplexity": 669.8118029476724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737645.2/warc/CC-MAIN-20200808110257-20200808140257-00175.warc.gz"} |
http://math.stackexchange.com/questions/264250/notation-in-set-theory-applied-to-counters | # Notation in set theory applied to counters
I have written a notation representing a counter for a condition:
$M \leftarrow \displaystyle \sum_{i =1}^{|X|} [B_j = X_i]$
So far this gives me a number for a specific j (the counter), but I want to turn this into a set for all values of j in such a way M is representing a multiset. M would be something like this:
$M = \{1,1,2,4,5\}$
How can I fix my notation to represent what I want?
thanks
-
Assuming your indexing starts at $0$ for the first element, then you want to sum to the order of $X - 1$. For example, the set
I think this might work:
$$M = \left\{M_j \mid M_j \leftarrow \displaystyle \sum_{i =0}^{|X|-1} [B_j = X_i]; 0\le j < |M|\right\}.$$
Of course, determining $|M|$ requires knowing in advance the number of counters = $C$, so the condition $0 \le j < C$ should probably replace the condition $0 \le j < |M|$:
$$M = \left\{M_j \mid M_j \leftarrow \displaystyle \sum_{i =0}^{|X|-1} [B_j = X_i]; 0\le j < C\right\}.$$
-
Hi, thanks for your solution, it looks good. I just would like to know if this is true for a multiset as well, because a set does not allow repetition of the elements and considering I have a counter, I need definitely a multiset. If I get no better answer, I accept yours. – sfelixjr Dec 23 '12 at 20:21
I will use your solution with double brackets. According to this, that's how we should represent a multiset: fr.m.wikipedia.org/wiki/Multiensemble – sfelixjr Dec 23 '12 at 20:26
This would be a multiset: it is the set of all $M_j$ each depending on/associated with $B_j$, where $j$ ranges through the number of counters. – amWhy Dec 23 '12 at 20:27
Good enough!... – amWhy Dec 23 '12 at 20:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9147840738296509, "perplexity": 385.9817469635404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131302478.63/warc/CC-MAIN-20150323172142-00135-ip-10-168-14-71.ec2.internal.warc.gz"} |
https://socratic.org/questions/how-do-you-differentiate-g-y-x-2-2x-1-4-4x-6-5-using-the-product-rule | Calculus
Topics
# How do you differentiate g(y) =(x^2 - 2x + 1)^4 (4x^6 + 5) using the product rule?
Jun 13, 2018
Treat the two terms as a normal product rule, but then use the chain rule on the first function when differentiating to get the full solution: $g ' \left(x\right) = 8 {\left(x - 1\right)}^{7} \left(7 {x}^{6} - 3 {x}^{5} + 5\right)$
#### Explanation:
First, let's designate the two pieces of the function as ${f}_{1} \left(x\right)$ and ${f}_{2} \left(x\right)$:
${f}_{1} \left(x\right) = {\left({x}^{2} - 2 x + 1\right)}^{4}$
${f}_{2} \left(x\right) = 4 {x}^{6} + 5$
For a given function, the derivative using the product rule is:
$\frac{\mathrm{dy}}{\mathrm{dx}} \left({f}_{1} \left(x\right) {f}_{2} \left(x\right)\right) = {f}_{1} ' \left(x\right) \cdot {f}_{2} \left(x\right) + {f}_{1} \left(x\right) \cdot {f}_{2} ' \left(x\right)$
This means we'll need to know the derivatives of each $f$ function. The first function requires a chain rule expansion:
$\frac{\mathrm{dy}}{\mathrm{dx}} f \left(g \left(x\right)\right) = f ' \left(g \left(x\right)\right) \cdot g ' \left(x\right)$
Deriving ${f}_{1} ' \left(x\right)$:
$\frac{\mathrm{dy}}{\mathrm{dx}} {\left({x}^{2} - 2 x + 1\right)}^{4} = 4 {\left({x}^{2} - 2 x + 1\right)}^{3} \cdot \left(2 x - 2\right)$
${f}_{1} ' \left(x\right) = {\left({x}^{2} - 2 x + 1\right)}^{3} \left(8 x - 8\right)$
Deriving ${f}_{2} ' \left(x\right)$:
$\frac{\mathrm{dy}}{\mathrm{dx}} \left(4 {x}^{6} + 5\right) = 24 {x}^{5}$
Now, we reassemble (and simplify):
$g ' \left(x\right) = {\left({x}^{2} - 2 x + 1\right)}^{3} \left(8 x - 8\right) \left(4 {x}^{6} + 5\right) + 24 {x}^{5} {\left({x}^{2} - 2 x + 1\right)}^{4}$
$g ' \left(x\right) = 8 \left({\left({x}^{2} - 2 x + 1\right)}^{3} \left(x - 1\right) \left(4 {x}^{6} + 5\right) + 3 {x}^{5} {\left({x}^{2} - 2 x + 1\right)}^{4}\right)$
Note that the factored form of ${x}^{2} - 2 x + 1$ is ${\left(x - 1\right)}^{2}$
$g ' \left(x\right) = 8 \left({\left({\left(x - 1\right)}^{2}\right)}^{3} \left(x - 1\right) \left(4 {x}^{6} + 5\right) + 3 {x}^{5} {\left({\left(x - 1\right)}^{2}\right)}^{4}\right)$
$g ' \left(x\right) = 8 \left({\left(x - 1\right)}^{6} \left(x - 1\right) \left(4 {x}^{6} + 5\right) + 3 {x}^{5} {\left(x - 1\right)}^{8}\right)$
$g ' \left(x\right) = 8 \left({\left(x - 1\right)}^{7} \left(4 {x}^{6} + 5\right) + 3 {x}^{5} {\left(x - 1\right)}^{8}\right)$
$g ' \left(x\right) = 8 {\left(x - 1\right)}^{7} \left(\left(4 {x}^{6} + 5\right) + 3 {x}^{5} \left(x - 1\right)\right)$
$g ' \left(x\right) = 8 {\left(x - 1\right)}^{7} \left(4 {x}^{6} + 5 + 3 {x}^{6} - 3 {x}^{5}\right)$
color(green)(g'(x)=8(x-1)^7(7x^6-3x^5+5)
##### Impact of this question
66 views around the world | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 23, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9235881567001343, "perplexity": 847.8238092029483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512434.71/warc/CC-MAIN-20181019191802-20181019213302-00000.warc.gz"} |
http://sourceforge.net/p/hugin/hugin/ci/8f0f49ab22cc45a0541e5da0ca6402b1861f31a0/tree/src/foreign/vigra/stdimagefunctions.hxx | Work at SourceForge, help us to make it a better place! We have an immediate need for a Support Technician in our San Francisco or Denver office.
## [8f0f49]: src / foreign / vigra / stdimagefunctions.hxx Maximize Restore History
Download this file
### stdimagefunctions.hxx 97 lines (88 with data), 4.8 kB
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 /************************************************************************/ /* */ /* Copyright 1998-2002 by Ullrich Koethe */ /* Cognitive Systems Group, University of Hamburg, Germany */ /* */ /* This file is part of the VIGRA computer vision library. */ /* ( Version 1.4.0, Dec 21 2005 ) */ /* The VIGRA Website is */ /* http://kogs-www.informatik.uni-hamburg.de/~koethe/vigra/ */ /* Please direct questions, bug reports, and contributions to */ /* [email protected] or */ /* [email protected] */ /* */ /* Permission is hereby granted, free of charge, to any person */ /* obtaining a copy of this software and associated documentation */ /* files (the "Software"), to deal in the Software without */ /* restriction, including without limitation the rights to use, */ /* copy, modify, merge, publish, distribute, sublicense, and/or */ /* sell copies of the Software, and to permit persons to whom the */ /* Software is furnished to do so, subject to the following */ /* conditions: */ /* */ /* The above copyright notice and this permission notice shall be */ /* included in all copies or substantial portions of the */ /* Software. */ /* */ /* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND */ /* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES */ /* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND */ /* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT */ /* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, */ /* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING */ /* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR */ /* OTHER DEALINGS IN THE SOFTWARE. */ /* */ /************************************************************************/ #ifndef VIGRA_STDIMAGEFUNCTIONS_HXX #define VIGRA_STDIMAGEFUNCTIONS_HXX /** \page PointOperators Point Operators
\ref InitAlgo
init images or image borders
\ref InspectAlgo
Apply read-only functor to every pixel
\ref InspectFunctor
Functors which report image statistics
\ref CopyAlgo
Copy images or regions
\ref TransformAlgo
apply functor to calculate a pixelwise transformation of one image
\ref TransformFunctor
frequently used unary transformation functors
\ref CombineAlgo
apply functor to calculate a pixelwise transformation from several image
\ref CombineFunctor
frequently used binary transformations functors
\ref MultiPointoperators
Point operators on multi-dimensional arrays
\#include "vigra/stdimagefunctions.hxx"
Namespace: vigra see also: \ref FunctorExpressions "Automatic Functor Creation" */ #include "vigra/initimage.hxx" #include "vigra/inspectimage.hxx" #include "vigra/copyimage.hxx" #include "vigra/transformimage.hxx" #include "vigra/combineimages.hxx" #include "vigra/resizeimage.hxx" #endif // VIGRA_STDIMAGEFUNCTIONS_HXX | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.965431272983551, "perplexity": 2506.8744423669227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273381.44/warc/CC-MAIN-20140728011753-00169-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://mattbaker.blog/2014/04/ | # Effective Chabauty
One of the deepest and most important results in number theory is the Mordell Conjecture, proved by Faltings (and independently by Vojta shortly thereafter). It asserts that if $X / {\mathbf Q}$ is an algebraic curve of genus at least 2, then the set $X({\mathbf Q})$ of rational points on $X$ is finite. At present, we do not know any effective algorithm (in theory or in practice) to compute the finite set $X({\mathbf Q})$. The techniques of Faltings and Vojta lead in principle to an upper bound for the number of rational points on $X$, but the bound obtained is far from sharp and is difficult to write down explicitly. In his influential paper Effective Chabauty, Robert Coleman combined his theory of p-adic integration with an old idea of Chabauty and showed that it led to a simple explicit upper bound for the size of $X({\mathbf Q})$ provided that the Mordell-Weil rank of the Jacobian of $X$ is not too large. (For a memorial tribute to Coleman, who passed away on March 24, 2014, see this blog post.) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 7, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8306943774223328, "perplexity": 188.9082413708959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400241093.64/warc/CC-MAIN-20200926102645-20200926132645-00113.warc.gz"} |
https://deepai.org/publication/linear-tsne-optimization-for-the-web | DeepAI
# Linear tSNE optimization for the Web
The t-distributed Stochastic Neighbor Embedding (tSNE) algorithm has become in recent years one of the most used and insightful techniques for the exploratory data analysis of high-dimensional data. tSNE reveals clusters of high-dimensional data points at different scales while it requires only minimal tuning of its parameters. Despite these advantages, the computational complexity of the algorithm limits its application to relatively small datasets. To address this problem, several evolutions of tSNE have been developed in recent years, mainly focusing on the scalability of the similarity computations between data points. However, these contributions are insufficient to achieve interactive rates when visualizing the evolution of the tSNE embedding for large datasets. In this work, we present a novel approach to the minimization of the tSNE objective function that heavily relies on modern graphics hardware and has linear computational complexity. Our technique does not only beat the state of the art, but can even be executed on the client side in a browser. We propose to approximate the repulsion forces between data points using adaptive-resolution textures that are drawn at every iteration with WebGL. This approximation allows us to reformulate the tSNE minimization problem as a series of tensor operation that are computed with TensorFlow.js, a JavaScript library for scalable tensor computations.
• 7 publications
• 10 publications
• 5 publications
• 7 publications
• 9 publications
• 6 publications
02/19/2017
### Compressive Embedding and Visualization using Graphs
Visualizing high-dimensional data has been a focus in data analysis comm...
09/05/2022
### Opening the black-box of Neighbor Embedding with Hotelling's T2 statistic and Q-residuals
In contrast to classical techniques for exploratory analysis of high-dim...
02/13/2019
### Do Subsampled Newton Methods Work for High-Dimensional Data?
Subsampled Newton methods approximate Hessian matrices through subsampli...
12/25/2017
### Efficient Algorithms for t-distributed Stochastic Neighborhood Embedding
t-distributed Stochastic Neighborhood Embedding (t-SNE) is a method for ...
10/24/2022
### A fast multilevel dimension iteration algorithm for high dimensional numerical integration
In this paper, we propose and study a fast multilevel dimension iteratio...
05/22/2019
### Learning Networked Exponential Families with Network Lasso
The data arising in many important big-data applications, ranging from s...
03/15/2022
### Scalable Bigraphical Lasso: Two-way Sparse Network Inference for Count Data
Classically, statistical datasets have a larger number of data points th...
## 1 Introduction
Understanding how data points are arranged in a high-dimensional space plays a crucial role in exploratory data analysis [25]
. In recent years, non-linear dimensionality reduction techniques, also known as manifold learning algorithms, became a powerful tools for mining knowledge from data, e.g., the presence of clusters at different scales. Manifold learning algorithms for data visualization reduce the dimensionality of the points to 2 or 3 dimensions while preserving some characteristic of the data such as the preservation of the local neighborhoods. The success of this approach is motivated by the fact that most of the real-world data satisfy the “manifold hypothesis”, i.e., it lies on relatively low-dimensional manifolds embedded in a high-dimensional space. The manifolds are typically mapped to a lower dimensional space and visualized and analyzed, for example, in a scatterplot.
The t-distributed Stochastic Neighbor Embedding (tSNE) algorithm [27] has been accepted as the state of the art for nonlinear dimensionality reduction applied to visual analysis of high-dimensional space in several application areas, such as life sciences [2, 4, 15]
and machine learning model understanding and human-driven supervision
[17, 21, 9]
. tSNE can be logically separated in two computation modules; first it computes the similarities of the high-dimensional points as a joint probability distribution and, second, it minimizes the Kullback–Leibler divergence
[12] of a similarly computed joint probability distribution that measures the closeness of the points in the low dimensional space. The memory and computational complexity of the algorithm is , where is the number of data points.
Given its popularity, research efforts have been spent on improving the computational and memory complexity of the algorithm. While many works focused on the improvement of the similarity computation [26, 20, 24, 19, 16], only limited effort have been spent in improving the minimization algorithm employed for the creation of the embedding [26, 16, 11]. The most notable of these improvements is the Barnes-Hut-SNE that makes use of an -body simulation approach [1] to approximate the repulsive forces between the data points. Despite the improvements, the minimization requires many minutes using a highly-optimized C++ implementation.
In this work we focus on the minimization of the objective function for the creation of the embedding. We observe that the heavy tail of the t-Student distribution used by tSNE makes the application of the -body simulation not particularly effective. To address this problem we propose a novel minimization approach that embraces this characteristic and we reformulate the gradient of the objective function as a function of scalar fields and tensor operations. Our technique has linear computational and memory complexity and, more importantly, is implemented in a GPGPU fashion. The latter allowed us to implement a version for the browser that minimizes the objective function for standard datasets in a matter of seconds.
The contribution of this work is twofold:
• A linear complexity minimization of the tSNE objective function that makes use of the modern WebGL rendering pipeline. Specifically, we
• approximate the repulsive forces between data points by drawing low-resolution textures and
• An efficient implementation of our result that is released as part of Google’s TensorFlow.js library
The rest of the paper is structured as follows. In the next section, we provide a theoretical primer on the tSNE algorithm that is needed to understand the related work (Section 3) and our contributions (Section 4). In Section 5, we provide the details regarding our implementation, released within Google’s TensorFlow.js library.
## 2 tSNE
In this section, we provide a short introduction to tSNE [27], which is needed to understand the related work and our contribution. tSNE interprets the overall distances between data points in the high-dimensional space as a symmetric joint probability distribution that encodes their similarities. Likewise a joint probability distribution is computed that describes the similarity in the low-dimensional space. The goal is to achieve a representation, referred to as embedding, in the low dimensional space, in which faithfully represents . This is achieved by optimizing the positions in the low-dimensional space to minimize the cost function given by the Kullback-Leibler () divergence between the joint-probability distributions and :
C(P,Q)=KL(P||Q)=N∑i=1N∑j=1,j≠ipijln(pijqij) (1)
Given two data points and in the dataset , the probability models the similarity of these points in the high-dimensional space. To this extent, for each point, a Gaussian kernel
is chosen, whose variance
is defined according to the local density in the high-dimensional space and then is described as follows:
pij=pi|j+pj|i2N, (2)
wherepj|i=exp(−(||xi−xj||2)/(2σ2i))∑Nk≠iexp(−(||xi−xk||2)/(2σ2i)) (3)
can be seen as a relative measure of similarity based on the local neighborhood of a data point . The perplexity value is a user-defined parameter that describes the effective number of neighbors considered for each data point. The value of is chosen such that for fixed and each :
μ=2−∑Njpj|ilog2pj|i (4)
A Student’s t-Distribution
with one degree of freedom is used to compute the joint probability distribution in the low-dimensional space
, where the positions of the data points should be optimized. Given two low-dimensional points and , the probability that describes their similarity is given by:
qij=((1+||yi−yj||2)Z)−1 (5)
(6)
The gradient of the Kullback-Leibler divergence between and is used to minimize (see Eq. 1). It indicates the change in position of the low-dimensional points for each step of the gradient descent and is given by:
δCδyi =4(Fattri−Frepi) (7) =4(N∑j≠ipijqijZ(yi−yj)−N∑j≠iq2ijZ(yi−yj)) (8)
The gradient descent can be seen as an -body simulation [1], where each data-point exerts an attractive and a repulsive force on all other points ( and ).
## 3 Related Work
We now present the work that has been done to improve the tSNE computation of tSNE embeddings in term of quality and scalability. Van der Maaten proposed the Barnes-Hut-SNE (BH-SNE) [26], which reduces the complexity of the algorithm to for both the similarity computations and the objective function minimization. More specifically, in the BH-SNE approach the similarity computations are seen as a -nearest neighborhood graph computation problem, which is obtained using a Vantage-Point Tree [29]. The minimization of the objective function is then seen as an -body simulation, which is solved by applying the Barnes-Hut algorithm [3].
Pezzotti et al. [20] observed that the computation of the
-nearest neighborhood graph for high-dimensional spaces using the Vantage-Point Tree is affected by the curse of dimensionality, limiting the efficiency of the computation. To overcome this limitation, they proposed the Approximated-tSNE (A-tSNE) algorithm
[20], where approximated -nearest neighborhood graphs are computed using a forest of randomized KD-trees [18]. Moreover, A-tSNE adopts the novel Progressive Visual Analytics paradigm [23, 7], allowing the user to observe the evolution of the embedding during the minimization of the objective function. This solution does not only enable a user-driven early termination of the algorithm but also led to novel discoveries in cell differentiation pathways [15].
A similar observation was later made by Tang et al. that led to the development of the LargeVis technique [24]. LargeVis uses random projection trees [5]
followed by a kNN descent procedure
[6]
for the computation of the similarities and a different objective function that is minimized using a Stochastic Gradient Descent approach
[10]. Despite the improvements, both the A-tSNE and LargeVis tools require 15 to 20 minutes to compute the embedding of the MNIST dataset [14], a 784-dimensional dataset of 60k images of handwritten digits, that is often used as benchmark for manifold-learning algorithms. Better performance is achieved by the UMAP algorithm [16], which provides a different formulation of the dimensionality-reduction problem as a cross-entropy minimization between topological representations. Computationally, UMAP follows very closely LargeVis and adopts a kNN descent procedure [6] and Stochastic Gradient Descent minimization of the objective function.
A completely different approach is taken in the Hierarchical Stochastic Neighbor Embedding algorithm (HSNE) [19]. HSNE efficiently builds a hierarchical representation of the manifolds and embeds only a subset of the initial data that represent an overview of the available manifolds. This approach can embed the MNIST dataset in less than 2 minutes. The user can “drill-in” the hierarchy by requesting more detailed embeddings that reveal smaller clusters of data points. HSNE is implemented in the Cytosplore [8] tool and led to the discovery of new cell populations [28] in large samples, i.e., containing more that 5 million cells. While HSNE produces better embeddings due to an easier minimization process, it has the downside that it does not produce a single embedding that depicts the complete dataset but it requires the user to actively explore the data and it generates embeddings on request.
The techniques presented so far do not take advantage of the target domain, in which the data is embedded. As a matter of fact, tSNE is mostly used for data visualization in 2-dimensional scatterplots, while the previously introduced techniques are general and can be used for higher dimensional spaces. Based on this observation, Kim et al. introduced the PixelSNE technique [11] that employs a -body simulation approach similar to the BH-SNE, but quantizes the embedding space to the pixels used for visualizing the embedding. However, PixelSNE requires to scale the number of used pixels with respect to the size of the dataset in order to achieve a good embedding quality due to the quantization of the embedding space.
In this work, we take advantage of the 2-dimensional domain in which the embedding resides and we propose a more efficient way to minimize the tSNE objective function. Contrary to PixelSNE we observe that, by quantizing only the 2-dimensional space for the computation of the repulsive forces presented in Equation 8, embeddings that are hardly distinguishable from those generated by the BH-SNE implementation are computed in a fraction of the time. Moreover, we this approach allows to develop a linear complexity GPGPU implementation that runs in the client side of the browser.
## 4 Linear complexity tSNE minimization
In this section, we present our approach to minimize the objective function, presented in Equation 1, by rewriting its gradient, presented in Equation 7. The computation of the gradient relies on a scalar field and a vector field that are computed in linear time on the GPU.
### 4.1 Gradient of the objective function
The gradient of the objective function has the same form as the previously introduced one:
δCδyi =4(^Fattri−^Frepi), (9)
with attractive and repulsive forces acting on every point . We denote the forces with a to distinguish them from their original counterparts. Our main contribution is to rewrite the computation of the gradient as a form of a scalar field and a vector field .
S(p)=N∑i(1+||yi−p||2)−1,S:R2⇒R (10) V(p)=N∑i(1+||yi−p||2)−2(yi−p),V:R2⇒R2 (11)
Intuitively, represents the density of the points in the embedding space, according to the t-Student distribution, and it is used to compute the normalization of the joint probability distribution . An example of the field is shown in Figure 1b. The vector field represents the directional repulsive force applied to the entire embedding space. An example of is presented in Figure 1c-d, where the horizontal and vertical components are visualized separately. In the next section, we will present how both and are computed with a complexity of and sampled in constant time. For now, we assume these fields given and we present how the gradient of the objective function are derived from these two fields, accelerating hereby their calculation drastically.
For the attractive forces, we adopt the restricted neighborhood contribution as presented in the Barnes-Hut-SNE technique [26]. The rationale of this approach is that, by imposing a fixed perplexity to the Gaussian kernel, only a limited number of neighbors effectively apply an attractive force on any given point (see Equation 3 and 4). Therefore we limit the number of contributing points to a multiple of the value of perplexity, equal to three times the value of the chosen perplexity, effectively reducing the computational and memory complexity to , since where is the size of the neighborhood.
^Fattri=^Z∑l∈kNN(i)pilqil(yi−yl) (12)
The normalization factor , as it was presented in Equation 6, has complexity . In our approach we compute in linear time by sampling the scalar field .
^Z=N∑l=1(S(yl)−1) (13)
Note that and formulation is identical but, since we assume that is computed in linear time, computing has linear complexity. Moreover, since does not depend on the point , for which we are computing the gradient, it needs to be computed only once for all the points.
The repulsive force assumes even a simpler form
^Frepi=V(yi)/^Z, (14)
being the value of the vector field in the location identified by the coordinates normalized by . Similarly as for , has an equivalent formulation as but with computational and memory complexity equal to . So far, we assumed that and are computed in linear time and queried in constant time. In the next section we present how we achieve this result by using the WebGL rendering pipeline to compute an approximation of these fields.
### 4.2 Computation of the fields
In the previous section, we formulated the gradient of the objective function as dependent from a scalar field and a vector field . If the fields are evaluated independently, the complexity of the approach is due to the summation in Equations 10 and 11. We achieve a linear complexity by precomputing and approximating the fields on the GPU using textures of appropriate resolution. An example of the fields for the MNIST dataset [14] is given in Figure 1b-d.
A similar approach is used for Kernel Density Estimation
[22] that has applications in visualization [13] and non-parametric clustering [8]
. In this setting, given a number of points, the goal is to estimate a 2-dimensional probability density function, from which the points were sampled. This is usually achieved by overlaying a Gaussian kernel, whose
has to be estimated, on top of every data point.
Lampe et al. [13] were the first to propose a computation of the kernel density on the GPU for a visualization purpose. They observed that the Gaussian kernel used for estimating the density has a limited support, i.e., having value almost equal to zero if they are sufficiently far away from the origin. A good approximation of the density function is then achieved by drawing, instead of the points, little quads that are textured with a precomputed Gaussian kernel. By using additive blending available in OpenGL, i.e., by summing the values in every pixel, the resulting drawing corresponds to the desired density function.
If we analyze Equations 10 and 11, we can observe that every element in the summations for both and have a limited support, making it indeed very similar to the Kernel Density Estimation case discussed before. The drawn functions, however, are different and Figure 2 shows them for and . Therefore, we can compute the fields by drawing over a texture with a single additive drawing operation. Each point is drawn as a quad and colored with a floating-point RGB texture where each channel encodes one of the functions shown in Figure 2.
Contrary to the Kernel Density Estimation case, where the size of the quads changes according to the chosen for the Gaussian kernel, our functions have a fixed support in the embedding space. Therefore, given a certain embedding , the resolution of the texture influences the quality of the approximation but not the overall shape of the fields. To achieve linear complexity, we define the resolution of the target texture according to the size of the embedding. In this way, every data point updates the value of a constant number of pixels in the target texture, effectively leading to complexity for the computation of the fields.
Computing the value of and for a point
corresponds to extracting the interpolated value in the textures that represents the fields. This operation is extremely fast on the GPU, as WebGL natively supports the bilinear interpolation of texture values. In the next section, we provide a more detailed overview of the computational pipeline as a number of tensor operations and custom drawing operations.
## 5 Implementation
In this section, we present how the ideas presented in the previous section are concretely implemented in a JavaScript library that can be used to execute an efficient tSNE computation directly in the user’s browser. Figure 3 shows an overview of the overall approach. We rely on TensorFlow.js, a WebGL accelerated, browser-based JavaScript library for training and deploying machine-learning models. TensorFlow.js has extensive support for tensor operations that we integrate with custom shader computations to derive the tSNE embeddings.
The randomly initialized tSNE embedding is stored in a 2-dimensional tensor. We then proceed to compute the repulsive forces and attractive forces , shown respectively in the lower and upper side of Figure 3. The attractive forces are computed in a custom shader that measures the sum of the contribution of every neighboring point in the high-dimensional space. The neighborhoods are encoded in the joint probability distribution that is stored in a WebGL texture. can be computed server-side, for example using an approximated -nearest-neighborhood algorithm [18, 6, 5] or by the Hierarchical-SNE technique [19]. However, we provide a WebGL implementation of the kNN-Descent algorithm [6] and the computation of directly in the browser to enable a client-side only computational workflow.
The repulsive forces are computed using the approach presented in previous sections. In a custom shader, we draw for each point, whose location is defined by the value in the embedding tensor, a quad that is textured with the functions presented in Figure 2. The resulting 3-channel texture, an example of which is presented in Figure 1b-d, represents the scalar field and the vector field . For each embedding point , the values of and are stored in tensors and are computed by a custom WebGL shader that interpolates the value of the texture in the corresponding channel. The normalization factor is then obtained by summing all the elements in the tensor with the interpolated values of , an operation that is efficiently performed on the GPU by TensorFlow.js.
The remaining computational steps are computed as tensor operations. is obtained by dividing the interpolated values of by , and, by adding the attractive forces , the gradient of the objective function is obtained. The gradient is then added to the embedding, hence, modifying the position of the points according to their similarities. Our work is released as part of the TensorFlow.js library and can be found on GitHub at the following address: https://github.com/tensorflow/tfjs-tsne
## 6 Conclusion
In this work we presented a novel approach for the optimization of the objective function of the t-Distributed Stochastic Neighbor Embedding algorithm (tSNE) that scales to large datasets in the client side of the browser. Our approach relies on modern graphics hardware to efficiently compute the gradient of the objective function from a scalar field that represents the point density and and the directional repulsive forces in the embedding space. The implementation of the technique is based on the TensorFlow.js library and can be found on GitHub at the following address: https://github.com/tensorflow/tfjs-tsne. Examples that validate our approach can also be found on GitHub https://github.com/tensorflow/tfjs-tsne-examples.
As future work, we want to perform a systematic analysis of the results of our technique, both in terms of speed and quality of the results. More specifically we plan to perform a comparison with single-embedding techniques such as LargeVis [24], UMAP [16] and PixelSNE [11]. Moreover, we want to integrate our technique in the Hierarchical-SNE technique [19]
and in tools for the analysis of Deep Neural Networks such as the Embedding Projector
, TensorBoard and DeepEyes [21]. To conclude, we believe that having a scalable tSNE implementation that runs in the browsers open exciting possibilities for the development of new analytical systems.
## References
• [1] S. J. Aarseth, Gravitational N-Body Simulations. Cambridge University Press, 2003, cambridge Books Online.
• [2] E.-a. D. Amir, K. L. Davis, M. D. Tadmor, E. F. Simonds, J. H. Levine, S. C. Bendall, D. K. Shenfeld, S. Krishnaswamy, G. P. Nolan, and D. Pe’er, “viSNE enables visualization of high dimensional single-cell data and reveals phenotypic heterogeneity of leukemia,” Nature biotechnology, vol. 31, no. 6, pp. 545–552, 2013.
• [3] J. Barnes and P. Hut, “A hierarchical o (n log n) force-calculation algorithm,” nature, vol. 324, p. 446, 1986.
• [4] B. Becher, A. Schlitzer, J. Chen, F. Mair, H. R. Sumatoh, K. W. W. Teng, D. Low, C. Ruedl, P. Riccardi-Castagnoli, and M. Poidinger, “High-dimensional analysis of the murine myeloid cell system,” Nature immunology, vol. 15, no. 12, pp. 1181–1189, 2014.
• [5] S. Dasgupta and Y. Freund, “Random projection trees and low dimensional manifolds,” in
Proceedings of the fortieth annual ACM symposium on Theory of computing
, 2008, pp. 537–546.
• [6] W. Dong, C. Moses, and K. Li, “Efficient k-nearest neighbor graph construction for generic similarity measures,” in Proceedings of the 20th international conference on World wide web. ACM, 2011, pp. 577–586.
• [7] J.-D. Fekete and R. Primet, “Progressive analytics: A computation paradigm for exploratory data analysis,” arXiv preprint arXiv:1607.05162, 2016.
• [8] T. Höllt, N. Pezzotti, V. van Unen, F. Koning, E. Eisemann, B. Lelieveldt, and A. Vilanova, “Cytosplore: Interactive immune cell phenotyping for large single-cell datasets,” in Computer Graphics Forum, vol. 35, 2016, pp. 171–180.
• [9] M. Kahng, P. Y. Andrews, A. Kalro, and D. H. P. Chau, “A cti v is: Visual exploration of industry-scale deep neural network models,” IEEE transactions on visualization and computer graphics, vol. 24, no. 1, pp. 88–97, 2018.
• [10] J. Kiefer and J. Wolfowitz, “Stochastic estimation of the maximum of a regression function,” The Annals of Mathematical Statistics, pp. 462–466, 1952.
• [11] M. Kim, M. Choi, S. Lee, J. Tang, H. Park, and J. Choo, “Pixelsne: Visualizing fast with just enough precision via pixel-aligned stochastic neighbor embedding,” arXiv preprint arXiv:1611.02568, 2016.
• [12] S. Kullback, Information theory and statistics. Courier Corporation, 1997.
• [13] O. D. Lampe and H. Hauser, “Interactive visualization of streaming data with kernel density estimation,” in Visualization Symposium (PacificVis), 2011 IEEE Pacific, 2011, pp. 171–178.
• [14] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
• [15] N. Li, V. van Unen, T. Höllt, A. Thompson, J. van Bergen, N. Pezzotti, E. Eisemann, A. Vilanova, S. M. Chuva de Sousa Lopes, B. P. Lelieveldt, and F. Koning, “Mass cytometry reveals innate lymphoid cell differentiation pathways in the human fetal intestine,” Journal of Experimental Medicine, 2018.
• [16] L. McInnes and J. Healy, “Umap: Uniform manifold approximation and projection for dimension reduction,” arXiv preprint arXiv:1802.03426, 2018.
• [17] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al.
, “Human-level control through deep reinforcement learning,”
Nature, vol. 518, no. 7540, pp. 529–533, 2015.
• [18] M. Muja and D. Lowe, “Scalable nearest neighbor algorithms for high dimensional data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 11, pp. 2227–2240, 2014.
• [19] N. Pezzotti, T. Höllt, B. Lelieveldt, E. Eisemann, and A. Vilanova, “Hierarchical stochastic neighbor embedding,” in Computer Graphics Forum, vol. 35, 2016, pp. 21–30.
• [20] N. Pezzotti, B. Lelieveldt, L. van der Maaten, T. Hollt, E. Eisemann, and A. Vilanova, “Approximated and user steerable tsne for progressive visual analytics,” IEEE Transactions on Visualization and Computer Graphics, vol. PP, no. 99, pp. 1–1, 2016.
• [21] N. Pezzotti, T. Höllt, J. Van Gemert, B. P. Lelieveldt, E. Eisemann, and A. Vilanova, “Deepeyes: Progressive visual analytics for designing deep neural networks,” IEEE transactions on visualization and computer graphics, vol. 24, no. 1, pp. 98–108, 2018.
• [22] M. Rosenblatt, “Remarks on some nonparametric estimates of a density function,” The Annals of Mathematical Statistics, pp. 832–837, 1956.
• [23] C. Stolper, A. Perer, and D. Gotz, “Progressive visual analytics: User-driven visual exploration of in-progress analytics,” IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 12, pp. 1653–1662, 2014.
• [24] J. Tang, J. Liu, M. Zhang, and Q. Mei, “Visualizing large-scale and high-dimensional data,” in Proceedings of the 25th International Conference on World Wide Web, 2016, pp. 287–297.
• [25] J. W. Tukey, “The future of data analysis,” The Annals of Mathematical Statistics, pp. 1–67, 1962.
• [26] L. Van Der Maaten, “Accelerating t-sne using tree-based algorithms,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 3221–3245, 2014.
• [27] L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” Journal of Machine Learning Research, vol. 9, no. 2579-2605, p. 85, 2008.
• [28] V. van Unen, T. Hollt, N. Pezzotti, N. Li, M. J. T. Reinders, E. Eisemann, A. Vilanova, F. Koning, and B. P. F. Lelieveldt, “Interactive visual analysis of mass cytometry data by hierarchical stochastic neighbor embedding reveals rare cell types,” Nature Communications, vol. 8, 2017.
• [29] P. N. Yianilos, “Data structures and algorithms for nearest neighbor search in general metric spaces,” in Proceedings of the fourth annual ACM-SIAM Symposium on Discrete algorithms. Society for Industrial and Applied Mathematics, 1993, pp. 311–321. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8200867176055908, "perplexity": 869.1708995814115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500339.37/warc/CC-MAIN-20230206113934-20230206143934-00779.warc.gz"} |
https://www.varsitytutors.com/hotmath/hotmath_help/topics/converting-fractions-to-decimals.html | # Converting Fractions to Decimals
To convert a fraction to a decimal, just divide the numerator by the denominator .
Example 1:
Write $\frac{3}{15}$ as a decimal.
Since $15$ is larger than $3$ , in order to divide, we must add a decimal point and some zeroes after the $3$ . We may not know how many zeroes to add but it doesn’t matter. If we add too many we can erase the extras; if we don’t add enough, we can add more.
$\begin{array}{l}\begin{array}{c}\\ 15\end{array}\begin{array}{c}\hfill 0.2\\ \hfill \overline{)3.0}\end{array}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\underset{_}{-3\text{\hspace{0.17em}}0\text{\hspace{0.17em}}\text{\hspace{0.17em}}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}0\end{array}$
So $\frac{3}{15}=0.2$ .
Remember that a decimal is really just a special way of writing a fraction that has a power of ten as a denominator. What we're doing here is rewriting $\frac{3}{15}$ as $\frac{2}{10}$ .
Sometimes, you may get a repeating decimal .
Example 2:
Write $\frac{5}{6}$ as a decimal.
$\begin{array}{l}\begin{array}{c}\\ 6\end{array}\begin{array}{c}\hfill 0.8333\\ \hfill \overline{)5.0000}\end{array}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\underset{_}{-4\text{\hspace{0.17em}}8\text{\hspace{0.17em}}\text{\hspace{0.17em}}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}20\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\underset{_}{-18}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}20\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\underset{_}{-18}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}2\end{array}$
We can write the result using a bar over the repeating digit (or digits):
$\frac{5}{6}=0.8\stackrel{¯}{3}$ . | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9404729604721069, "perplexity": 527.360760288271}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807146.16/warc/CC-MAIN-20171124070019-20171124090019-00474.warc.gz"} |
http://www.leunkim.com/tag/hyperbolic/ | # [M2 Seminar I] Week 1~4 : $\sqrt{\log t}$를 달기 위한 여정의 시작
[latexpage] 블로그 활동이 너무 뜸해서 연구일지나 쓰려고 합니다. 구체적인 연구내용이나, 세미나 발표자료들은 연구가 끝나는 대로 공개하도록 하겠습니다! 2013년 4월 12일 (금) 본격적인 M2 세미나가 시작되었다. 지도교수로부터 1-dimensional Klein-Gordon equation에 대한 여러 논문들을 추천받아 왔다. 방콕모드를 가동시켜서 일단 받아 온 논문들을 … Continue reading
# [M1 Seminar II] Week 5 : High Energy Estimates
We prove the high energy estimates for the nonlinear wave equation on the nonlinearity $\alpha =1$. In fact, this problem is a special case of the quasi-linear symmetric hyperbolic system with some assumptions. By the help of the previous existence … Continue reading
# [M1 Seminar II] Week 2 : Local Existence for Quasi-linear Symmetric Hyperbolic Systems (2)
This week, we will prove the local existence of quasi-linear symmetric hyperbolic systems by using $u^{k}$ which is iteratively defined by the solution of the linear symmetric hyperbolic system. For this purpose we first proved the boundedness of $u^k$ in … Continue reading
# [M1 Seminar II] Week 1 : Local Existence for Quasi-linear Symmetric Hyperbolic Systems
This week, we prove the uniqueness and local existence for quasi-linear symmetric hyperbolic systems using general energy method. M1_Semi2_Week1
# [M1 Seminar] Week 6,7 : A Global Existence Theorem to the Linear Symmetric Hyperbolic Systems
This week, we prove the global existence theorem to the linear symmetric hyperbolic system by using the standard energy inequality. M1_Semi_Week6-7
# [M1 Seminar] Week 5 : Energy Estimates for the Linear Symmetric Hyperbolic System
We derive the corresponding estimates for higher derivatives from the basic energy estimates which was given in the last week. We also prove a simple corollary using this result. And we discuss about the finite propagation speed of the hyperbolic … Continue reading | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9238396883010864, "perplexity": 4455.3508079313415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886939.10/warc/CC-MAIN-20180117122304-20180117142304-00582.warc.gz"} |
http://en.wikipedia.org/wiki/Six_factor_formula | # Six factor formula
The six-factor formula is used in nuclear engineering to determine the multiplication of a nuclear chain reaction in a non-infinite medium. The formula is[1]
$k = \eta f p \varepsilon P_{FNL} P_{TNL}$
Symbol Name Meaning Formula Typical Thermal Reactor Value
$\eta$ Thermal Fission Factor (Eta) The number of fission neutrons produced per absorption in the fuel. $\eta = \frac{\nu \sigma_f^F}{\sigma_a^F}$ 1.65
$f$ The thermal utilization factor Probability that a neutron that gets absorbed does so in the fuel material. $f = \frac{\Sigma_a^F}{\Sigma_a}$ 0.71
$p$ The resonance escape probability Fraction of fission neutrons that manage to slow down from fission to thermal energies without being absorbed. $p \approx \mathrm{exp} \left( -\frac{\sum\limits_{i=1}^{N} N_i I_{r,A,i}}{\left( \overline{\xi} \Sigma_p \right)_{mod}} \right)$ 0.87
$\varepsilon$ The fast fission factor (Epsilon)
$\tfrac{\mbox{total number of fission neutrons}}{\mbox{number of fission neutrons from just thermal fissions}}$
$\varepsilon \approx 1 + \frac{1-p}{p}\frac{u_f \nu_f P_{FAF}}{f \nu_t P_{TAF} P_{TNL}}$ 1.02
$P_{FNL}$ The fast non-leakage probability The probability that a fast neutron will not leak out of the system. $P_{FNL} \approx \mathrm{exp} \left( -{B_g}^2 \tau_{th} \right)$ 0.97
$P_{TNL}$ The thermal non-leakage probability The probability that a thermal neutron will not leak out of the system. $P_{TNL} \approx \frac{1}{1+{L_{th}}^2 {B_g}^2}$ 0.99
The symbols are defined as:[2]
• $\nu$, $\nu_f$ and $\nu_t$ are the average number of neutrons produced per fission in the medium (2.43 for Uranium-235).
• $\sigma_f^F$ and $\sigma_a^F$ are the microscopic fission and absorption cross sections for fuel, respectively.
• $\Sigma_a^F$ and $\Sigma_a$ are the macroscopic absorption cross sections in fuel and in total, respectively.
• $N_i$ is the number density of atoms of a specific nuclide.
• $I_{r,A,i}$ is the resonance integral for absorption of a specific nuclide.
• $I_{r,A,i} = \int_{E_{th}}^{E_0} dE' \frac{\Sigma_p^{mod}}{\Sigma_t(E')} \frac{\sigma_a^i(E')}{E'}$.
• $\overline{\xi}$ (often referred to as worm-bar or squigma-bar) is the average lethargy gain per scattering event.
• Lethargy is defined as decrease in neutron energy.
• $u_f$ (fast utilization) is the probability that a fast neutron is absorbed in fuel.
• $P_{FAF}$ is the probability that a fast neutron absorption in fuel causes fission.
• $P_{TAF}$ is the probability that a thermal neutron absorption in fuel causes fission.
• ${B_g}^2$ is the geometric buckling.
• ${L_{th}}^2$ is the diffusion length of thermal neutrons.
• ${L_{th}}^2 = \frac{D}{\Sigma_{a,th}}$.
• $\tau_{th}$ is the age to thermal.
• $\tau = \int_{E_{th}}^{E'} dE'' \frac{1}{E''} \frac{D(E'')}{\overline{\xi} \left[ D(E'') {B_g}^2 + \Sigma_t(E') \right]}$.
• $\tau_{th}$ is the evaluation of $\tau$ where $E'$ is the energy of the neutron at birth.
## Multiplication
The multiplication factor, k, is defined as (see Nuclear chain reaction):
$k = \frac{\mbox{number of neutrons in one generation}}{\mbox{number of neutrons in preceding generation}}$
If k is greater than 1, the chain reaction is supercritical, and the neutron population will grow exponentially.
If k is less than 1, the chain reaction is subcritical, and the neutron population will exponentially decay.
If k = 1, the chain reaction is critical and the neutron population will remain constant. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 37, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9921596050262451, "perplexity": 1102.0242858504737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137190.70/warc/CC-MAIN-20140914011217-00067-ip-10-234-18-248.ec2.internal.warc.gz"} |
https://asmedigitalcollection.asme.org/PVP/proceedings-abstract/PVP2018/51623/V03AT03A036/277567 | Some fixed tubesheet heat exchangers suffer amount of system startup and shutdown causing large thermal stress for tubesheets. It will make an impact on the fatigue damage of the equipment. Transient thermal and stress analysis were carried out in many papers, but the geometry models were simplified, the non-expanded part of the tube and tubesheet were not modeled in detail at the end of tube. In fact, there is a tiny gap enclosed by tube, tubesheet and weld. So in this paper, a simplified model and an accurate model are established, the transient thermal and stress analysis were performed in ANSYS Workbench. The difference between two models was compared and the influence of temperature change rate on thermal stress was discussed.
This content is only available via PDF. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8029636740684509, "perplexity": 1044.7276274783333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251684146.65/warc/CC-MAIN-20200126013015-20200126043015-00451.warc.gz"} |
http://mathhelpforum.com/calculus/92603-expansion-problem-print.html | # Expansion problem
• Jun 11th 2009, 09:36 PM
jimmyp
Expansion problem
All edges of a cube are expanding at a rate of 3cm per second. How fast is the surface area changing when each edge is (a) 1cm and (b) 10 cm?
Not sure how to approach the probme. (Wait)(Crying)
• Jun 11th 2009, 10:06 PM
alexmahone
$S=6a^2$
$\frac{dS}{dt}=6*2a\frac{da}{dt}$
$\frac{da}{dt}=3 cm/sec$
$\frac{dS}{dt}=6*2a*3$
$\frac{dS}{dt}=36a$
a) $\frac{dS}{dt}=36*1=36 cm^2/sec$
b) $\frac{dS}{dt}=36*10=360 cm^2/sec$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9069182276725769, "perplexity": 2866.5172763391843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00313-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://curriculum.illustrativemathematics.org/MS/students/3/2/5/index.html | Lesson 5
More Dilations
Let’s look at dilations in the coordinate plane.
5.1: Many Dilations of a Triangle
Explore the applet and observe the dilation of triangle $$ABC$$. The dilation always uses center $$P$$, but you can change the scale factor. What connections can you make between the scale factor and the dilated triangle?
5.2: Info Gap: Dilations
Your teacher will give you either a problem card or a data card. Do not show or read your card to your partner.
If your teacher gives you the problem card:
3. Explain how you are using the information to solve the problem.
Continue to ask questions until you have enough information to solve the problem.
4. Share the problem card and solve the problem independently.
If your teacher gives you the data card:
2. Ask your partner “What specific information do you need?” and wait for them to ask for information.
If your partner asks for information that is not on the card, do not do the calculations for them. Tell them you don’t have that information.
3. Before sharing the information, ask “Why do you need that information?” Listen to your partner’s reasoning and ask clarifying questions.
4. Read the problem card and solve the problem independently.
5. Share the data card and discuss your reasoning.
Triangle $$EFG$$ was created by dilating triangle $$ABC$$ using a scale factor of 2 and center $$D$$. Triangle $$HIJ$$ was created by dilating triangle $$ABC$$ using a scale factor of $$\frac12$$ and center $$D$$.
1. What would the image of triangle $$ABC$$ look like under a dilation with scale factor 0?
2. What would the image of the triangle look like under dilation with a scale factor of -1? If possible, draw it and label the vertices $$A’$$, $$B’$$, and $$C’$$. If it’s not possible, explain why not.
3. If possible, describe what happens to a shape if it is dilated with a negative scale factor. If dilating with a negative scale factor is not possible, explain why not.
Summary
One important use of coordinates is to communicate geometric information precisely. Let’s consider a quadrilateral $$ABCD$$ in the coordinate plane. Performing a dilation of $$ABCD$$ requires three vital pieces of information:
1. The coordinates of $$A$$, $$B$$, $$C$$, and $$D$$
2. The coordinates of the center of dilation, $$P$$
3. The scale factor of the dilation
With this information, we can dilate the vertices $$A$$, $$B$$, $$C$$, and $$D$$ and then draw the corresponding segments to find the dilation of $$ABCD$$. Without coordinates, describing the location of the new points would likely require sharing a picture of the polygon and the center of dilation.
Glossary Entries
• center of a dilation
The center of a dilation is a fixed point on a plane. It is the starting point from which we measure distances in a dilation.
In this diagram, point $$P$$ is the center of the dilation.
• dilation
A dilation is a transformation in which each point on a figure moves along a line and changes its distance from a fixed point. The fixed point is the center of the dilation. All of the original distances are multiplied by the same scale factor.
For example, triangle $$DEF$$ is a dilation of triangle $$ABC$$. The center of dilation is $$O$$ and the scale factor is 3.
This means that every point of triangle $$DEF$$ is 3 times as far from $$O$$ as every corresponding point of triangle $$ABC$$.
• scale factor
To create a scaled copy, we multiply all the lengths in the original figure by the same number. This number is called the scale factor.
In this example, the scale factor is 1.5, because $$4 \boldcdot (1.5) = 6$$, $$5 \boldcdot (1.5)=7.5$$, and $$6 \boldcdot (1.5)=9$$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8090673089027405, "perplexity": 464.3802141683346}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358153.33/warc/CC-MAIN-20211127073536-20211127103536-00622.warc.gz"} |
https://mathoverflow.net/questions/236151/conjugacy-classes-of-mathrmsl-2-mathbbz/236162 | # Conjugacy classes of $\mathrm{SL}_2(\mathbb{Z})$
I was wondering if there is some description known for the conjugacy classes of $$\mathrm{SL}_2(\mathbb{Z})=\{A\in \mathrm{GL}_2(\mathbb{Z})|\;\;|\det(A)|=1\}.$$ I was not able to find anything about this. Most references only give solutions for $$\mathrm{SL}_2(\mathbb{R})$$.
• There is plenty of literature about conjugacy within ${\rm GL}(2,\mathbb{Q}),$ which is essentially the theory of the rational canonical form (in a special case). – Geoff Robinson Apr 13 '16 at 21:40
• – Steve Huntsman Apr 14 '16 at 12:20
One can proceed as follows for $$SL_2(\mathbb{Z})$$.
1. First, the trace is a conjugacy invariant.
2. For trace $$0$$ there are two conjugacy classes represented by $$\pmatrix{0 & 1 \\ -1 & 0}$$ and $$\pmatrix{0 & -1 \\ 1 & 0}$$. These representatives can be thought of as $$90^\circ$$ and $$270^{\circ}$$ degree rotations of a lattice generated by the corners of a square centered on the origin.
3. For trace $$1$$ and $$-1$$ there are two conjugacy classes each, represented by the matrices $$M=\pmatrix{1 & -1 \\ 1 & 0}, M^2=\pmatrix{0 & 1 \\ -1 & -1}, M^4=\pmatrix{-1 & 1 \\ -1 & 0}, M^5 = \pmatrix{0 & -1 \\ 1 & 1}$$ These representatives can be thought of as $$60^\circ$$, $$120^\circ$$, $$240^\circ$$, and $$300^\circ$$ degree rotations of a lattice generated by the vertices of a regular hexagon centered at the origin.
4. For trace $$2$$ there is a $$\mathbb{Z}$$-indexed family of conjugacy classes, represented by $$\pmatrix{1 & n \\ 0 & 1}$$; these are all "shear" transformations except for the identity. For trace $$-2$$ there is a similar $$\mathbb{Z}$$-indexed family of conjugacy classes represented by $$\pmatrix{-1 & n \\ 0 & -1}$$.
5. In general, for nonzero trace the conjugacy classes come in opposite pairs, represented by a matrix $$M$$ with trace $$t>0$$ and an opposite representative $$-M$$ with trace $$-t<0$$.
6. For trace of absolute value $$> 2$$, there is one conjugacy class for each word of the form $$R^{j_1} L^{k_1} R^{j_2} L^{k_2} \cdots R^{j_I} L^{k_I}$$ up to cyclic conjugacy, where $$I \ge 1$$ and all the exponents are positive integers. A matrix representing this form is obtained from the above word by making the replacements $$R=\pmatrix{1 & 1 \\ 0 & 1}, \quad L=\pmatrix{1 & 0 \\ 1 & 1}$$ These are all "hyperbolic" transformations, having an independent pair of real eigenvectors. The slope of the expanding eigenvector is a quadratic irrational, and hence has eventually repeating continued fraction expansion. The cyclic sequence $$(j_1,k_1,j_2,k_2,\ldots,j_I,k_I)$$ can be thought of as the fundamental repeating portion of the continued fraction expansion of the slope of the expanding eigenvector, or, better, as an appropriate power of the fundamental repeating portion where the power is equal to the exponent of the given matrix.
Number theorists will tell you that the number of conjugacy classes of each trace $$t>2$$ is closely related to the class number of the number field generated by $$\sqrt{t^2-4}$$.
• Not the square root of the trace, but rather $\sqrt{t^2-4}$, where $t$ is the trace. – David E Speyer Apr 14 '16 at 2:52
• @LeeMosher Nice answer. Do you have reference for this? For the case 6, you probably have to say that the conjugacy class is of the form $\pm R^{j_1}L^{k_1}\cdots R^{j_I}L^{k_I}$. Indeed, if the trace is negative, you cannot be conjugate to a product of $R,L$. – Jérémy Blanc Mar 11 '17 at 6:28
• I don't have a specific reference. I learned it over a long process of ingestion, maybe starting with my first book on continued fractions as a kid, and continuing with my education in mapping class groups ($PSL_2(\mathbb{Z})$ is the mapping class group of the torus). The answer of @IgorRivin answer gives some references. – Lee Mosher Mar 11 '17 at 13:57
This is the subject of Gauss' reduction theory, as discussed in Karpenkov's book (among many other places). In this 2007 paper, Karpenkov also extends the method to $SL(n, \mathbb{Z}).$
The conjugacy classes of elements of ${\rm SL}(2,\mathbb{Z})$ with given trace are counted in:
S. Chowla, J. Cowles and M. Cowles: On the number of conjugacy classes in SL(2,Z). Journal of Number Theory 12(1980), Issue 3, Pages 372-377.
• Extraordinary: the paper which has Chowla as an author, was also communicated by Chowla! – Lucia Apr 13 '16 at 22:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 32, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9067815542221069, "perplexity": 233.65552232612953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107905777.48/warc/CC-MAIN-20201029184716-20201029214716-00113.warc.gz"} |
http://cms.math.ca/cmb/kw/Toeplitz%20operators | Toeplitz Algebras and Extensions of\\Irrational Rotation Algebras For a given irrational number $\theta$, we define Toeplitz operators with symbols in the irrational rotation algebra ${\mathcal A}_\theta$, and we show that the $C^*$-algebra $\mathcal T({\mathcal A}_\theta)$ generated by these Toeplitz operators is an extension of ${\mathcal A}_\theta$ by the algebra of compact operators. We then use these extensions to explicitly exhibit generators of the group $KK^1({\mathcal A}_\theta,\mathbb C)$. We also prove an index theorem for $\mathcal T({\mathcal A}_\theta)$ that generalizes the standard index theorem for Toeplitz operators on the circle. Keywords:Toeplitz operators, irrational rotation algebras, index theoryCategories:47B35, 46L80 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9700658321380615, "perplexity": 234.36213647043718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936460577.67/warc/CC-MAIN-20150226074100-00297-ip-10-28-5-156.ec2.internal.warc.gz"} |
http://lavalle.pl/planning/node204.html | #### Making good grids
Optimizing dispersion forces the points to be distributed more uniformly over . This causes them to fail statistical tests, but the point distribution is often better for motion planning purposes. Consider the best way to reduce dispersion if is the metric and . Suppose that the number of samples, , is given. Optimal dispersion is obtained by partitioning into a grid of cubes and placing a point at the center of each cube, as shown for and in Figure 5.5a. The number of cubes per axis must be , in which denotes the floor. If is not an integer, then there are leftover points that may be placed anywhere without affecting the dispersion. Notice that just gives the number of points per axis for a grid of points in dimensions. The resulting grid will be referred to as a Sukharev grid [922].
The dispersion obtained by the Sukharev grid is the best possible. Therefore, a useful lower bound can be given for any set of samples [922]:
(5.20)
This implies that keeping the dispersion fixed requires exponentially many points in the dimension, .
At this point you might wonder why was used instead of , which seems more natural. This is because the case is extremely difficult to optimize (except in , where a tiling of equilateral triangles can be made, with a point in the center of each one). Even the simple problem of determining the best way to distribute a fixed number of points in is unsolved for most values of . See [241] for extensive treatment of this problem.
Suppose now that other topologies are considered instead of . Let , in which the identification produces a torus. The situation is quite different because no longer has a boundary. The Sukharev grid still produces optimal dispersion, but it can also be shifted without increasing the dispersion. In this case, a standard grid may also be used, which has the same number of points as the Sukharev grid but is translated to the origin. Thus, the first grid point is , which is actually the same as other points by identification. If represents a cylinder and the number of points, , is given, then it is best to just use the Sukharev grid. It is possible, however, to shift each coordinate that behaves like . If is rectangular but not a square, a good grid can still be made by tiling the space with cubes. In some cases this will produce optimal dispersion. For complicated spaces such as , no grid exists in the sense defined so far. It is possible, however, to generate grids on the faces of an inscribed Platonic solid [251] and lift the samples to with relatively little distortion [987]. For example, to sample , Sukharev grids can be placed on each face of a cube. These are lifted to obtain the warped grid shown in Figure 5.6a.
Example 5..15 (Sukharev Grid) Suppose that and . If , then the Sukharev grid yields points for the nine cases in which either coordinate may be , , or . The dispersion is . The spacing between the points along each axis is , which is twice the dispersion. If instead , which represents a torus, then the nine points may be shifted to yield the standard grid. In this case each coordinate may be 0, , or . The dispersion and spacing between the points remain unchanged.
One nice property of grids is that they have a lattice structure. This means that neighboring points can be obtained very easily be adding or subtracting vectors. Let be an -dimensional vector called a generator. A point on a lattice can be expressed as
(5.21)
for independent generators, as depicted in Figure 5.6b. In a 2D grid, the generators represent up'' and right.'' If and a standard grid with integer spacing is used, then the neighbors of the point are obtained by adding , , , or . In a general lattice, the generators need not be orthogonal. An example is shown in Figure 5.5b. In Section 5.4.2, lattice structure will become important and convenient for defining the search graph.
Steven M LaValle 2012-04-20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9149134755134583, "perplexity": 386.2520357694597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655888561.21/warc/CC-MAIN-20200705184325-20200705214325-00030.warc.gz"} |
https://thespectrumofriemannium.wordpress.com/2012/09/01/log026-boosts-rapidity-hep/ | # LOG#026. Boosts, rapidity, HEP.
In euclidean two dimensional space, rotations are easy to understand in terms of matrices and trigonometric functions. A plane rotation is given by:
$\boxed{\begin{pmatrix}x'\\ y'\end{pmatrix}=\begin{pmatrix}\cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}}\leftrightarrow \boxed{\mathbb{X}'=\mathbb{R}(\theta)\mathbb{X}}$
where the rotation angle is $\theta$, and it is parametrized by $0\leq \theta \leq 2\pi$.
Interestingly, in minkovskian two dimensional spacetime, the analogue does exist and it is written in terms of matrices and hyperbolic trigonometric functions. A “plane” rotation in spacetime is given by:
$\boxed{\begin{pmatrix}ct'\\ x'\end{pmatrix}=\begin{pmatrix}\cosh \varphi & -\sinh \varphi \\ -\sinh \varphi & \cosh \varphi \end{pmatrix}\begin{pmatrix}ct\\ x\end{pmatrix}}\leftrightarrow \boxed{\mathbb{X}'=\mathbb{L}(\varphi)\mathbb{X}}$
Here, $\varphi = i\psi$ is the so-called hiperbolic rotation angle, pseudorotation, or more commonly, the rapidity of the Lorentz boost in 2d spacetime. It shows that rapidity are a very useful parameter for calculations in Special Relativity. Indeed, it is easy to check that
$\mathbb{L}(\varphi_1+\varphi_2)=\mathbb{L}(\varphi_1)\mathbb{L}\mathbb(\varphi_2)$
So, at least in the 2d spacetime case, rapidities are “additive” in the written sense.
Firstly, we are going to guess the relationship between rapidity and velocity in a single lorentzian spacetime boost. From the above equation we get:
$ct'=ct\cosh \varphi -x\sinh \varphi$
$x'=-ct\sinh \varphi +x\cosh \varphi$
Multiplying the first equation by $\cosh \varphi$ and the second one by $\sinh \varphi$, we add the resulting equation to obtain:
$ct'\cosh\varphi+x'\sinh \varphi =ct\cosh^2 \varphi -ct\sinh^2 \varphi =ct$
that is
$ct'\cosh\varphi+x'\sinh \varphi =ct$
From this equation (or the boxed equations), we see that $\varphi=0$ corresponds to $x'=x$ and $t'=t$. Setting $x'=0$, we deduce that
$x'=0=-ct\sinh \varphi +x\cosh \varphi$
and thus
$ct\tanh \varphi =x$ or $x=ct\tanh\varphi$.
Since $t\neq 0$, and the pseudorotation seems to have a “pseudovelocity” equals to $V=x/t$, the rapidity it is then defined through the equation:
$\boxed{\tanh \varphi=\dfrac{V}{c}=\beta}\leftrightarrow\mbox{RAPIDITY}\leftrightarrow\boxed{\varphi=\tanh^{-1}\beta}$
If we remember what we have learned in our previous mathematical survey, that is,
$\tanh^{-1}z=\dfrac{1}{2}\ln \dfrac{1+z}{1-z}=\sqrt{\dfrac{1+z}{1-z}}$
We set $z=\beta$ in order to get the next alternative expression for the rapidity:
$\varphi=\ln \sqrt{\dfrac{1+\beta}{1-\beta}}=\dfrac{1}{2}\ln \dfrac{1+\beta}{1-\beta}\leftrightarrow \exp \varphi=\sqrt{\dfrac{1+\beta}{1-\beta}}$
In experimental particle physics, in general 3+1 spacetime, the rapidity definition is extended as follows. Writing, from the previous equations above,
$\sinh \varphi=\dfrac{\beta}{\sqrt{1-\beta^2}}$
$\cosh \varphi=\dfrac{1}{\sqrt{1-\beta^2}}$
and using these two last equations, we can also write momenergy components using rapidity in the same fashion. Suppose that for some particle(objetc), its mass is $m$, its energy is $E$, and its (relativistic) momentum is $\mathbf{P}$. Then:
$E=mc^2\cosh \varphi$
$\lvert \mathbf{P} \lvert =mc\sinh \varphi$
From these equations, it is trivial to guess:
$\varphi=\tanh^{-1}\dfrac{\lvert \mathbf{P} \lvert c}{E}=\dfrac{1}{2}\ln \dfrac{E+\lvert \mathbf{P} \lvert c}{E-\lvert \mathbf{P} \lvert c}$
This is the completely general definition of rapidity used in High Energy Physics (HEP), with a further detail. In HEP, physicists used to select the direction of momentum in the same direction that the collision beam particles! Suppose we select some orientation, e.g.the z-axis. Then, $\lvert \mathbf{P} \lvert =p_z$ and rapidity is defined in that beam direction as:
$\boxed{\varphi_{hep}=\tanh^{-1}\dfrac{\lvert \mathbf{P}_{beam} \lvert c}{E}=\dfrac{1}{2}\ln \dfrac{E+p_z c}{E-p_z c}}$
In 2d spacetime, rapidities add nonlinearly according to the celebrated relativistic addition rule:
$\beta_{1+2}=\dfrac{\beta_1+\beta_2}{1+\frac{\beta_1\beta_2}{c^2}}$
Indeed, Lorentz transformations do commute in 2d spacetime since we boost in a same direction x, we get:
$L_1^xL_2^x-L_2^xL_1^x=0$
with
$L_1^x=\begin{pmatrix}\gamma_1 & -\gamma_1\beta_1\\ -\gamma_1\beta_1 &\gamma_1 \end{pmatrix}$
$L_2^x=\begin{pmatrix}\gamma_2 & -\gamma_2\beta_2\\ -\gamma_2\beta_2 &\gamma_2 \end{pmatrix}$
This commutativity is lost when we go to higher dimensions. Indeed, in spacetime with more than one spatial direction that result is not true in general. If we build a Lorentz transformation with two boosts in different directions $V_1=(v_1,0,0)$ and $V_2=(0,v_2,0)$, the Lorentz matrices are ( remark for experts: we leave one direction in space untouched, so we get 3×3 matrices):
$L_1^x=\begin{pmatrix}\gamma_1 & -\gamma_1\beta_1 &0\\ -\gamma_1\beta_1 &\gamma_1 &0\\ 0& 0& 1\end{pmatrix}$
$L_2^y=\begin{pmatrix}\gamma_2 & 0&-\gamma_2\beta_2\\ 0& 1& 0\\ -\gamma_2\beta_2 & 0&\gamma_2 \end{pmatrix}$
and it is easily checked that
$L_1^xL_2^y-L_2^yL_1^x\neq 0$
Finally, there is other related quantity to rapidity that even experimentalists do prefer to rapidity. It is called: PSEUDORAPIDITY!
Pseudorapidity, often denoted by $\eta$ describes the angle of a particle relative to the beam axis. Mathematically speaking is:
$\boxed{\eta=-\ln \tan \dfrac{\theta}{2}}\leftrightarrow \mbox{PSEUDORAPIDITY}\leftrightarrow \boxed{\exp (\eta)=\dfrac{1}{\tan\dfrac{\theta}{2}}}$
where $\theta$ is the angle between the particle momentum $\mathbf{P}$ and the beam axis. The above relation can be inverted to provide:
$\boxed{\theta=2\tan^{-1}(e^{-\eta})}$
The pseudorapidity in terms of the momentum is given by:
$\boxed{\eta=\dfrac{1}{2}\ln \dfrac{\vert \mathbf{P}\vert +P_L}{\vert \mathbf{P}\vert -P_L}}$
Note that, unlike rapidity, pseudorapidity depends only on the polar angle of its trajectory, and not on the energy of the particle.
In hadron collider physics, and other colliders as well, the rapidity (or pseudorapidity) is preferred over the polar angle because, loosely speaking, particle production is constant as a function of rapidity. One speaks of the “forward” direction in a collider experiment, which refers to regions of the detector that are close to the beam axis, at high pseudorapidity $\eta$.
The rapidity as a function of pseudorapidity is provided by the following formula:
$\boxed{\varphi=\ln\dfrac{\sqrt{m^2+p_T^2\cosh^2\eta}+p_T\sinh \eta}{\sqrt{m^2+p_T^2}}}$
where $p_T$ is the momentum transverse to the direction of motion and m is the invariant mass of the particle.
Remark: The difference in the rapidity of two particles is independent of the Lorentz boosts along the beam axis.
Colliders measure physical momenta in terms of transverse momentum $p_T$ instead of the momentum in the direction of the beam axis (longitudinal momentum) $P_L=p_z$, the polar angle in the transverse plane (genarally denoted by $\phi$) and pseudorapidity $\eta$. To obtain cartesian momenta $(p_x,p_y,p_z)$ (with the z-axis defined as the beam axis), the following transformations are used:
$p_x=P_T\cos\phi$
$p_y=P_T\sin\phi$
$p_z=P_T\sinh\eta$
Thus, we get the also useful relationship
$\vert P \vert=P_T\cosh\eta$
This quantity is an observable in the collision of particles, and it can be measured as the main image of this post shows. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 62, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9784271717071533, "perplexity": 455.7705703087938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886416.17/warc/CC-MAIN-20180116105522-20180116125522-00008.warc.gz"} |
https://openstax.org/books/introductory-business-statistics/pages/10-4-comparing-two-independent-population-proportions | # 10.4Comparing Two Independent Population Proportions
Introductory Business Statistics10.4 Comparing Two Independent Population Proportions
When conducting a hypothesis test that compares two independent population proportions, the following characteristics should be present:
1. The two independent samples are random samples that are independent.
2. The number of successes is at least five, and the number of failures is at least five, for each of the samples.
3. Growing literature states that the population must be at least ten or even perhaps 20 times the size of the sample. This keeps each population from being over-sampled and causing biased results.
Comparing two proportions, like comparing two means, is common. If two estimated proportions are different, it may be due to a difference in the populations or it may be due to chance in the sampling. A hypothesis test can help determine if a difference in the estimated proportions reflects a difference in the two population proportions.
Like the case of differences in sample means, we construct a sampling distribution for differences in sample proportions: $(pA'-pB')(pA'-pB')$where $p A ' = X A n A p A ' = X A n A$ and $p B ' = X B n B p B ' = X B n B$ are the sample proportions for the two sets of data in question. XA and XB are the number of successes in each sample group respectively, and nA and nB are the respective sample sizes from the two groups. Again we go the Central Limit theorem to find the distribution of this sampling distribution for the differences in sample proportions. And again we find that this sampling distribution, like the ones past, are normally distributed as proved by the Central Limit Theorem, as seen in Figure 10.5 .
Figure 10.5
Generally, the null hypothesis allows for the test of a difference of a particular value, 𝛿0, just as we did for the case of differences in means.
$H0 : p1 − p2 = 𝛿0 H0:p1−p2=𝛿0$
$H1 : p1 − p2 ≠ 𝛿0 H1:p1−p2≠𝛿0$
Most common, however, is the test that the two proportions are the same. That is,
$H 0 : p A = p B H 0 : p A = p B$
$H a : p A ≠ p B H a : p A ≠ p B$
To conduct the test, we use a pooled proportion, pc.
The pooled proportion is calculated as follows:
$p c = x A + x B n A + n B p c = x A + x B n A + n B$
The test statistic (z-score) is:
$Zc = ( p A ′ − p B ′ ) − δ0 p c (1− p c )( 1 n A + 1 n B ) Zc= ( p A ′ − p B ′ ) − δ0 p c (1− p c )( 1 n A + 1 n B )$
where δ0 is the hypothesized differences between the two proportions and pc is the pooled variance from the formula above.
### Example 10.6
A bank has recently acquired a new branch and thus has customers in this new territory. They are interested in the default rate in their new territory. They wish to test the hypothesis that the default rate is different from their current customer base. They sample 200 files in area A, their current customers, and find that 20 have defaulted. In area B, the new customers, another sample of 200 files shows 12 have defaulted on their loans. At a 10% level of significance can we say that the default rates are the same or different?
Solution 10.6
This is a test of proportions. We know this because the underlying random variable is binary, default or not default. Further, we know it is a test of differences in proportions because we have two sample groups, the current customer base and the newly acquired customer base. Let A and B be the subscripts for the two customer groups. Then pA and pB are the two population proportions we wish to test.
Random Variable:P′AP′B = difference in the proportions of customers who defaulted in the two groups.
$H0:pA=pBH0:pA=pB$
$Ha:pA≠pBHa:pA≠pB$
The words "is a difference" tell you the test is two-tailed.
Distribution for the test: Since this is a test of two binomial population proportions, the distribution is normal:
$p c = x A + x B n A + n B = 20+12 200+200 =0.08 1– p c =0.92 p c = x A + x B n A + n B = 20+12 200+200 =0.08 1– p c =0.92$
(p′Ap′B) = 0.04 follows an approximate normal distribution.
Estimated proportion for group A: $p ′ A = x A n A = 20 200 =0.1 p ′ A = x A n A = 20 200 =0.1$
Estimated proportion for group B: $p ′ B = x B n B = 12 200 =0.06 p ′ B = x B n B = 12 200 =0.06$
The estimated difference between the two groups is : p′Ap′B = 0.1 – 0.06 = 0.04.
Figure 10.6
$Zc = (P′A−P′B)−δ0 Pc(1−Pc) (1nA+1nB) = 0.54 Zc= (P′A−P′B)−δ0 Pc(1−Pc) (1nA+1nB) =0.54$
The calculated test statistic is .54 and is not in the tail of the distribution.
Make a decision: Since the calculate test statistic is not in the tail of the distribution we cannot reject H0.
Conclusion: At a 1% level of significance, from the sample data, there is not sufficient evidence to conclude that there is a difference between the proportions of customers who defaulted in the two groups.
Try It 10.6
Two types of valves are being tested to determine if there is a difference in pressure tolerances. Fifteen out of a random sample of 100 of Valve A cracked under 4,500 psi. Six out of a random sample of 100 of Valve B cracked under 4,500 psi. Test at a 5% level of significance. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9293965697288513, "perplexity": 574.2383987503694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540491871.35/warc/CC-MAIN-20191207005439-20191207033439-00124.warc.gz"} |
http://www.transtutors.com/questions/a-uniform-thin-rod-of-length-0-500-m-and-mass-4-00-kg-can-431277.htm | +1.617.933.5480
# Q: A uniform thin rod of length 0.500 m and mass 4.00 kg can
A uniform thin rod of length 0.60 m and mass 3.5 kg can rotate in a horizontal plane about a vertical axis through its center. The rod is at rest when a 3.0 g bullet traveling in the rotation plane is fired into one end of the rod. As viewed from above, the bullet's path makes angle θ = 60° with the rod. If the bullet lodges in the rod and the angular velocity of the rod is 14 rad/s immediately after the collision, what is the bullet's speed just before impact? (Answer is in m/s)
We must conservation of angular velocity. Initially: (3/1000)v 0.6 sin 60°+(3/1000)(0
Related Questions in General Physics
Question Status: Solved | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9081940650939941, "perplexity": 619.7122304717124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064503.53/warc/CC-MAIN-20150827025424-00176-ip-10-171-96-226.ec2.internal.warc.gz"} |
http://link.springer.com/chapter/10.1007/978-1-4419-6594-3_30 | Springer Optimization and Its Applications Volume 42, 2011, pp 447-460
Date: 02 Sep 2010
# Context Hidden Markov Model for Named Entity Recognition
* Final gross prices may vary according to local VAT.
## Abstract
Named entity (NE) recognition is a core technology for understanding low-level semantics of texts. In this paper we consider the combination of two classifiers: our version of probabilistic supervised machine learning classifier, which we named the Context Hidden Markov Model, and grammar rule-based system in named entity recognition. In order to deal with the problem of estimating the probabilities of unseen events, we have applied the probability mixture models which were estimated using another machine learning algorithm: Expectation Maximization. We have tested our Named Entity Recognition system on MUC 7 corpus. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8822816610336304, "perplexity": 1550.995447489351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989234.2/warc/CC-MAIN-20150728002309-00220-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://www.arxiv-vanity.com/papers/1002.0145/ | # From Sylvester-Gallai Configurations to Rank Bounds: Improved Black-box Identity Test for Depth-3 Circuits
###### Abstract.
We study the problem of identity testing for depth- circuits of top fanin and degree (called identities). We give a new structure theorem for such identities. A direct application of our theorem improves the known deterministic -time black-box identity test over rationals (Kayal & Saraf, FOCS 2009) to one that takes -time. Our structure theorem essentially says that the number of independent variables in a real depth- identity is very small. This theorem settles affirmatively the stronger rank conjectures posed by Dvir & Shpilka (STOC 2005) and Kayal & Saraf (FOCS 2009). Our techniques provide a unified framework that actually beats all known rank bounds and hence gives the best running time (for every field) for black-box identity tests.
Our main theorem (almost optimally) pins down the relation between higher dimensional Sylvester-Gallai theorems and the rank of depth- identities in a very transparent manner. The existence of this was hinted at by Dvir & Shpilka (STOC 2005), but first proven, for reals, by Kayal & Saraf (FOCS 2009). We introduce the concept of Sylvester-Gallai rank bounds for any field, and show the intimate connection between this and depth- identity rank bounds. We also prove the first ever theorem about high dimensional Sylvester-Gallai configurations over any field. Our proofs and techniques are very different from previous results and devise a very interesting ensemble of combinatorics and algebra. The latter concepts are ideal theoretic and involve a new Chinese remainder theorem. Our proof methods explain the structure of any depth- identity : there is a nucleus of that forms a low rank identity, while the remainder is a high dimensional Sylvester-Gallai configuration.
Hausdorff Center for Mathematics, Bonn - 53115, Germany.
IBM Almaden, San Jose - 95126, USA.
## 1. Introduction
Polynomial identity testing (PIT) ranks as one of the most important open problems in the intersection of algebra and computer science. We are provided an arithmetic circuit that computes a polynomial over a field , and we wish to test if is identically zero (in other words, if is the zero polynomial). In the black-box setting, the circuit is provided as a black-box and we are only allowed to evaluate the polynomial at various domain points. The main goal is to devise a deterministic polynomial time algorithm for PIT. Kabanets & Impagliazzo [KI04] and Agrawal [Agr05, Agr06] have shown connections between deterministic algorithms for identity testing and circuit lower bounds, emphasizing the importance of this problem. To know more about the current state of the general identity testing problem see the surveys [Sax09, AS09].
The first randomized polynomial time PIT algorithm, which was a black-box algorithm, was given (independently) by Schwartz [Sch80] and Zippel [Zip79]. Randomized algorithms that use less randomness were given by Chen & Kao [CK00], Lewin & Vadhan [LV98], and Agrawal & Biswas [AB03]. Klivans & Spielman [KS01] observed that even for depth- circuits for bounded top fanin, deterministic identity testing was open. Progress towards this was first made by Dvir & Shpilka [DS06], who gave a quasi-polynomial time algorithm, although with a doubly-exponential dependence on the top fanin. The problem was resolved by a polynomial time algorithm given by Kayal and Saxena [KS07], with a running time exponential in the top fanin. As expected, the current understanding of depth- circuits is even more sparse. Identity tests are known only for rather special depth- circuits [AM07, Sax08, SV09, KMSV09]. Why is progress restricted to such small depth circuits? Agrawal and Vinay [AV08] showed that an efficient black-box identity test for depth- circuits will actually give a quasi-polynomial black-box test, and subexponential lower bounds, for circuits of all depths (that compute low degree polynomials). Thus, understanding depth- identities seems to be a natural first step towards the goal of proving more general lower bounds.
For deterministic black-box testing, the first results were given by Karnin & Shpilka [KS08]. Based on results in [DS06], they gave an algorithm for bounded top fanin depth- circuits having a quasi-polynomial running time (with a doubly-exponential dependence on the top fanin). The dependence on the top fanin was later improved (to singly-exponential) by the rank bound results of Saxena & Seshadhri [SS09] (for any ). But the time complexity also had a quasi-polynomial dependence on the degree of the circuit. This dependence is inevitable in rank-based methods over finite fields (as shown by [KS07]). However, over the field of rationals, Kayal & Saraf [KS09b] showed how to remove this quasi-polynomial dependence on the degree at the cost of doubly-exponential dependence on the top fanin, thus giving a polynomial time complexity for bounded top fanin. In this work we achieve the best of the two works [SS09] and [KS09b], i.e. we prove (for rationals) a time complexity that depends only polynomially on the degree and “only” singly-exponentially on the fanin.
In a quite striking result, Kayal & Saraf [KS09b] proved how Sylvester-Gallai theorems can get better rank bounds over the reals. We introduce the concept of Sylvester-Gallai rank bounds that deals with the rank of vectors (over some given field) that have some special incidence properties. This is a very convenient way to express known Sylvester-Gallai results. These are inspired by the famous Sylvester-Gallai theorem about point-line incidences. We show how this very interesting quantity is tightly connected to depth- identities. Sylvester-Gallai rank bounds over high dimensions were known over the reals, and are used to prove depth- rank bounds over reals. We prove the first ever theorem for high dimensional Sylvester-Gallai configurations over any field.
### 1.1. Definitions and Previous Work
This work focuses on depth- circuits. A structural study of depth- identities was initiated in [DS06] by defining a notion of rank of simple and minimal identities. A depth- circuit over a field is:
C(x1,…,xn)=k∑i=1Ti
where, (a multiplication term) is a product of linear polynomials over . Note that for the purposes of studying identities we can assume wlog (by homogenization) that ’s are linear forms (i.e. linear polynomials with a zero constant coefficient) and that . Such a circuit is referred to as a circuit (or depending on the context), where is the top fanin of and is the degree of . We give a few definitions from [DS06].
###### Definition 1.
[Simple Circuit] is a simple circuit if there is no nonzero linear form dividing all the ’s.
[Minimal Circuit] is a minimal circuit if for every proper subset , is nonzero.
[Rank of a circuit] Every can be seen as an -dimensional vector over . The rank of the circuit, , is defined as the rank of the set of all linear forms ’s viewed as -dimensional vectors.
Can all the forms be independent, or must there be relations between them? The rank can be interpreted as the minimum number of variables that are required to express . There exists a linear transformation converting the variables of the circuit into independent variables. A trivial upper bound on the rank (for any -circuit) is , since that is the total number of linear forms involved in . The rank is a fundamental property of a circuit and it is crucial to understand how large this can be for identities. A substantially smaller rank bound than shows that identities do not have as many “degrees of freedom” as general circuits, and leads to deterministic identity tests. Furthermore, the techniques used to prove rank bounds show us structural properties of identities that may suggest directions to resolve PIT for circuits.
The rank bounds, in addition to being a natural property of identities, have found applications in black-box identity testing [KS08] and learning circuits [Shp09, KS09a]. The result of [KS08] showed rank bounds imply black-box testers: if is a rank bound for simple minimal identities over field , then there is a deterministic black-box identity tester for such circuits, that runs in -operations. (For the time complexity over , we actually count the bit operations.)
Dvir & Shpilka [DS06] proved that the rank of a simple, minimal identity is bounded by . This rank bound was improved to by Saxena & Seshadhri [SS09]. Fairly basic identity constructions show that the rank is over the reals and for finite fields [DS06, KS07, SS09]. Dvir & Shpilka [DS06] conjectured that should be some over the reals. Through a very insightful use of Sylvester-Gallai theorems, Kayal & Saraf [KS09b] subsequently bounded the rank of identities, over reals, by . This means that for a constant top fanin circuit, the rank of identities is constant, independent of the degree. This also leads to the first truly polynomial-time deterministic black-box identity testers for this case.
Unfortunately, as soon as becomes even , this bound becomes trivial. We improve this rank bound exponentially, to , which is almost optimal. This gives a major improvement in the running time of the black-box testers. We also improve the rank bounds for general fields from to . We emphasize that we give a unified framework to prove all these results. Table 1 should make it easier to compare the various bounds.
Kayal & Saraf [KS09b] connect Sylvester-Gallai theorems to rank bounds. They need advanced versions of these theorems that deal with colored points and have to prove certain hyperplane decomposition theorems. We make the connection much more transparent (at the loss of some color from the theorems). We reiterate that our techniques are completely different, and employ a very powerful algebraic framework to dissect identities. This allows us to use as a “black-box” the most basic form of the higher dimensional Sylvester-Gallai theorems.
### 1.2. Our Results
Before we state our results, it will be helpful to understand Sylvester-Gallai configurations. A set of points with the property that every line through two points of passes through a third point in is called a Sylvester-Gallai configuration. The famous Sylvester-Gallai theorem states: for a set of points in , not all collinear, there exists a line passing through exactly two points of . In other words, the only Sylvester-Gallai configuration in is a set of collinear points. This basic theorem about point-line incidences was extended to higher dimensions [Han65, BE67]. We introduce the notion of Sylvester-Gallai rank bounds. This is a clean and convenient way of expressing these theorems.
###### Definition 2.
Let be a finite subset of the projective space . Alternately, is a subset of vectors in without multiples: no two vectors in are scalar multiples of each other111When , such an is, wlog, a subset of distinct vectors with first coordinate .. Suppose, for every set of linearly independent vectors, the linear span of contains at least vectors of . Then, the set is said to be -closed.
The largest possible rank of an -closed set of at most vectors in (for any ) is denoted by .
The classic Sylvester-Gallai theorem essentially states222To see this, take an -closed set of vectors. Think of each vector being represented by an infinite line through the origin, hence giving a set in the projective space. Take a -dimensional plane not passing through the origin and take the set of intersection points of the lines in with . Observe that the coplanar points have the property that a line passing through two points of passes through a third point of . that for all , . Higher dimensional analogues [Han65, BE67] prove that . One of our auxiliary theorems is such a statement for all fields.
###### Theorem 3 (SGk for all fields).
For any field and , .
Our main theorem is a simple, clean expression of how Sylvester-Gallai influences identities.
###### Theorem 4 (From SGk to Rank).
Let . The rank of a simple and minimal identity over is at most .
Remark. If is small, then we choose an extension of size and get a rank bound with .
Plugging in -rank bounds gives us the desired theorem for depth- identities. We have a slightly stronger version of the above theorem that we use to get better constants (refer to Theorem 18).
###### Theorem 5 (Depth-3 Rank Bounds).
Let be a circuit, over field , that is simple, minimal and zero. Then,
• For , .
• For any , .
As discussed before, a direct application of this result to Lemma 4.10 of [KS08] gives a deterministic black-box identity test for circuits (we will only discuss here as the other statement is analogous). Formally, we get the following hitting set generator for circuits with real coefficients.
###### Corollary 6 (Black-box PIT over Q).
There is a deterministic algorithm that takes as input a triple of natural numbers and in time , outputs a hitting set with the following properties:
• Any circuit over computes the zero polynomial iff , .
• has at most points.
• The total bit-length of each point in is .
Remark.
• Our black-box test has quasi-polynomial in time complexity (with polynomial-dependence on ) for top fanin as large as , and sub-exponential in time complexity (with polynomial-dependence on ) even for top fanin as large as . This is the first tester to achieve such bounds.
• The fact that the points in are integral and have “small” bit-length is important to estimate the time complexity of our algorithm in terms of bit operations. Thus, the hitting set generator takes at most bit operations to compute .
## 2. Proof Outline, Ideas, and Organization
Our proof of the rank bound comprises of several new ideas, both at the conceptual and the technical levels. In this section we will give the basic intuition of the proof. The three notions that are crucially used (or developed) in the proof are: ideal Chinese remaindering, matchings and Sylvester-Gallai rank bounds. These have appeared (in some form) before in the works of Kayal & Saxena [KS07], Saxena & Seshadhri [SS09] and Kayal & Saraf [KS09b] respectively, to prove different kinds of results. Here we use all three of them together to show quite a strong structure in identities. We will talk about them one by one in the following three subsections outlining the three steps of the proof. Each step proves a new property of identities which is interesting in its own right. The first two steps set up the algebraic framework and prove theorems that hold for all fields. The third step is where the Sylvester-Gallai theorems are brought in. Some (new and crucial) algebraic lemmas and their proofs have been moved to the Appendix. The flow of the actual proof will be identical to the overview that we now provide.
### 2.1. Step 1: Matching the Gates in an Identity
We will denote the set by . We fix the base field to be , so the circuits compute multivariate polynomials in the polynomial ring .
A linear form is a linear polynomial in with zero constant term. We will denote the set of all linear forms by . Clearly, is a vector (or linear) space over and that will be quite useful. Much of what we do shall deal with multi-sets of linear forms (sometimes polynomials in too), equivalence classes inside them, and various maps across them. A list of linear forms is a multi-set of forms with an arbitrary order associated with them. The actual ordering is unimportant: we will heavily use maps between lists, and the ordering allows us to define these maps unambiguously. The object, list, comes with all the usual set operations naturally defined.
###### Definition 7.
We collect some important definitions from [SS09]:
[Multiplication term, & ] A multiplication term is an expression in given as (the product may have repeated ’s), , where and is a list of nonzero linear forms. The list of linear forms in , , is just the list of forms occurring in the product above. For a list of linear forms we define the multiplication term of , , as or if .
[Forms in a Circuit] We will represent a circuit as a sum of multiplication terms of degree , . The list of linear forms occurring in is . Note that is a list of size exactly . The rank of , , is just the number of linearly independent linear forms in . (Remark: for the purposes of this paper ’s are given in circuit representation and thus the list is unambiguously defined from )
[Similar forms] For any two polynomials we call similar to if there exists such that . We say is similar to mod , for some ideal of , if there is some such that . Note that “similarity mod ” is an equivalence relation (reflexive, symmetric and transitive) and partitions any list of polynomials into equivalence classes.
[Span ] For any we let be the linear span of the linear forms in over the field . (Conventionally, .)
[Matchings] Let be lists of linear forms and be a subspace of . An -matching between is a bijection between lists such that: for all , .
When are multiplication terms, an -matching between would mean an -matching between .
We will show that all the multiplication terms of a minimal identity can be matched by a “low” rank space.
###### Theorem 8 (Matching-Nucleus).
Let be a circuit that is minimal and zero. Then there exists a linear subspace of such that:
1) .
2) , there is a -matching between .
The idea of matchings within identities was first introduced in [SS09], but nothing as powerful as this theorem has been proven. This theorem gives us a space of small rank, independent of , that contains most of the “complexity” of . All forms in outside are just mirrored in the various terms. This starts connecting the algebra of depth- identities to a combinatorial structure. Indeed, the graphical picture (explained in detail below) that this theorem provides, really gives an intuitive grasp on these identities. The proof of this involves some interesting generalizations of the Chinese Remainder Theorem to some special ideals.
###### Definition 9 (mat-nucleus).
Let be a minimal identity. The linear subspace given by Theorem 8 is called mat-nucleus of .
The notion of mat-nucleus is easier to see in the following unusual representation of the circuit . The four bubbles refer to the four multiplication terms of and the points inside the bubbles refer to the linear forms in the terms. The proof of Theorem 8 gives mat-nucleus as the space generated by the linear forms in the dotted box. The linear forms that are not in mat-nucleus lie “above” the mat-nucleus and are all (mat-nucleus)-matched, i.e. , there is a form similar to modulo mat-nucleus in each . Thus the essence of Theorem 8 is: the mat-nucleus part of the terms of has low rank , while the part of the terms above mat-nucleus all look “similar”.
#### Proof Idea for Theorem 8
The key insight in the construction of mat-nucleus is a reinterpretation of the identity test of Kayal & Saxena [KS07] as a structural result for identities. Again, refer to the following figure depicting a circuit and think of each bubble having linear forms. Roughly, [KS07] showed that iff for every path (where ): or in ideal terms, . Thus, roughly, it is enough to go through all the paths to certify the zeroness of . This is why the time complexity of the identity test of [KS07] is dominated by .
Now if we are given a identity which is minimal, then we know that . Thus, by applying the above interpretation of [KS07] to we will get a path such that . Since this means that but (if is in then so will be ). Thus, is a nontrivial congruence and it immediately gives us a -matching between (see Lemma 44). By repeating this argument with a different permutation of the terms we could match different terms (by a different ideal), and finally we expect to match all the terms (by the union of the various ideals).
This fantastic argument has numerous technical problems, but they can all be taken care of by suitable algebraic generalizations. The main stumbling block is the presence of repeating forms. It could happen that , occurs in many terms, or in the same term with a higher power. The most important tool developed is an ideal version of Chinese remaindering that forces us to consider not just linear forms , but multiplication terms dividing respectively. We give the full proof in Section 3. (Interestingly, the non-blackbox identity test of [KS07] guides in devising a blackbox test of “similar” complexity over rationals.)
### 2.2. Step 2: Certificate for Linear Independence of Gates
Theorem 8 gives us a space , of rank , that matches to each term . In particular, this means that the list has the same cardinality for each . In fact, if we look at the corresponding multiplication terms , , then they again form a identity! Precisely, for some ’s in (see Lemma 46) is an identity. We would like to somehow mimic the structure of . Of course is simple but is it again minimal? Unfortunately, it may not be. For reasons that will be clear later, minimality of would have allowed us to go directly to Step 3. Now step 2 will involve increasing the space (but not by too much) that gives us a that “behaves” like . Specifically, if are linearly independent (i.e. s.t. ), then so are .
###### Theorem 10 (Nucleus).
Let be a minimal identity and let be a maximal set of linearly independent terms (). Then there exists a linear subspace of such that:
• .
• , there is a -matching between .
• (Define , .) The terms are linearly independent.
###### Definition 11 (nucleus).
Let be a minimal identity. The linear subspace given by Theorem 10 is called the nucleus of . By Lemma 46, the subspace induces an identity which we call the nucleus identity.
The notion of the nucleus is easier to grasp when is a identity that is strongly minimal, i.e. are linearly independent. Clearly, such a is also minimal333If for some proper , then linear independence of is violated.. For such a , Theorem 10 gives a nucleus such that the corresponding nucleus identity is strongly minimal. The structure of is very strongly represented by . As a bonus, we actually end up greatly simplifying the polynomial-time PIT algorithm of Kayal & Saxena [KS07] (although we will not discuss this point in detail in this paper).
#### Proof Idea for Theorem 10
The first two properties in the theorem statement are already satisfied by mat-nucleus of . So we incrementally add linear forms to the space mat-nucleus till it satisfies property (3) and becomes the nucleus. The addition of linear forms is guided by the ideal version of Chinese remaindering. For convenience assume to be linearly independent. Then, by homogeneity and equal degree, we have an equivalent ideal statement: and (see Lemma 42). Even in this general setting the path analogy (used in the last subsection) works and we essentially get linear forms and such that: and . We now add these forms to the space mat-nucleus, and call the new space . It is expected that the new are now linearly independent.
Not surprisingly, the above argument has numerous technical problems. But it can be made to work by careful applications of the ideal version of Chinese remaindering. We give the full proof in Section 4.
### 2.3. Step 3: Invoking Sylvester-Gallai Theorems
We make a slight, but hopefully interesting, detour and leave depth- circuits behind. We rephrase the standard Sylvester-Gallai theorems in terms of Sylvester-Gallai closure (or configuration) and rank bounds. This is far more appropriate for our application, and seems to be very natural in itself.
###### Definition 12 (SGk-closed).
Let . Let be a subset of non-zero vectors in without multiples: no two vectors in are scalar multiples of each other444This is just a set of elements in the projective space , but this formulation in terms of vectors is more convenient for our applications.. Suppose that for every set of linearly independent vectors in , the linear span of contains at least vectors of . Then, the set is said to be -closed.
We would expect that if is finite then it will get harder to keep -closed as is gradually increased. This intuition holds up when . As we mentioned earlier, the famous Sylvester-Gallai Theorem states: if a finite is -closed, then . It is optimal as the line has rank and is -closed.
In fact, there is also a generalization of the Sylvester-Gallai theorem known (as stated in Theorem 2.1 of [BE67]) : Let be a finite set in spanning that projective space. Then, there exists a -flat such that , and is spanned by those points .
Let be a finite set of points with first coordinate being and let . We claim that if is -closed, then . Otherwise the above theorem guarantees vectors in whose -flat has only points of . If has a point then as has first coordinates , it would mean that a convex linear combination of (i.e. sum of coefficients in the combination is ) is . In other words, , which contradicts having only points of . Thus, also has no point in , but this contradicts -closure of . This shows that higher dimensional Sylvester-Gallai theorem implies that if is -closed then . We prefer using this rephrasal of the higher dimensional Sylvester-Gallai Theorem. This motivates the following definitions.
###### Definition 13 (SG operator).
Let .
[] The largest possible rank of an -closed set of at most points in is denoted by . For example, the above discussion entails which is, interestingly, independent of . (Also verify that for , and .)
[] Suppose a set has rank greater than (where ). Then, by definition, is not -closed. In this situation we say the -dimensional Sylvester-Gallai operator (applied on ) returns a set of linearly independent vectors in whose span has no point in .
The Sylvester-Gallai theorem in higher dimensions can now be expressed succintly.
###### Theorem 14 (High dimension Sylvester-Gallai for R).
[Han65, BE67] .
Remark. This theorem is also optimal, for if we set to be a union of “skew lines” then has rank and is -closed. For example, when define . It is easy to verify that and the span of every three linearly independent vectors in contains a fourth vector!
Using some linear algebra and combinatorial tricks, we prove the first ever Sylvester-Gallai bound for all fields. The proof is in Section 6, where there is a more detailed discussion of this (and the connection with LDCs).
Theorem 3 ( for all fields). For any field and , .
#### 2.3.1. Back to identities
Let be a simple and strongly minimal identity. Theorem 10 gives us a nucleus , of rank , that matches to each term . As seen in Step 2, if we look at the corresponding multiplication terms , , then they again form a “nucleus identity” , for some ’s in , which is simple and strongly minimal. Define the non-nucleus part of as , for all ( in the exponent annotates “complement”, since ). What can we say about the rank of ?
Define the non-nucleus part of as . Our goal in Step 3 is to bound by when the field is . This will give us a rank bound of for simple and strongly minimal identities over . The proof is mainly combinatorial, based on higher dimensional Sylvester-Gallai theorems and a property of set partitions, with a sprinkling of algebra.
We will finally apply operator not directly on the forms in but on a suitable truncation of those forms. So we need another definition.
###### Definition 15 (Non-K rank).
Let be a linear subspace of . Then is again a linear space (the quotient space). Let be a list of forms in . The non- rank of is defined to be (i.e. the rank of when viewed as a subset of ).
Let be a identity with nucleus . The non- rank of the non-nucleus part is called the non-nucleus rank of . Similarly, the non- rank of the non-nucleus part is called the non-nucleus rank of .
We give an example to explain the non- rank. Let . Suppose and . We can take any element in and simply drop all the terms, i.e. ‘truncate’ -part of . This gives a set of linear forms over the variables. The rank of these is the non- rank of .
We are now ready to state the theorem that is proved in Step 3. It basically shows a neat relationship between the non-nucleus part and Sylvester-Gallai.
###### Theorem 16 (Bound for simple, strongly minimal identities).
Let . The non-nucleus rank of a simple and strongly minimal identity over is at most .
Given a simple, minimal identity that is not strongly minimal. Let be linearly independent and form a basis of . Then it is clear that such that is a strongly minimal identity (for some ). Hence, we could apply the above theorem on this identity and get a rank bound for the non-nucleus part. The only problem is this fanin- identity may not be simple. Our solution for this is to replace by the suitable linear combination of in and repeat the above argument on the new identity. In Section 5.2 we show this takes care of the whole non-nucleus part and bounds its rank by . To state the theorem formally, we need a more refined notion than the fanin of a circuit.
###### Definition 17 (Independent-fanin).
Let be a circuit. The independent-fanin of , , is defined to be the size of the maximal such that are linearly independent polynomials. (Remark: If then . Also, for an identity , is strongly minimal iff .)
We now state the following stronger version of the main theorem.
###### Theorem 18 (Final bound).
Let . The rank of a simple, minimal , independent-fanin , identity is at most .
Remark: In particular, the rank of a simple, minimal identity over reals is at most , proving the main theorem over reals. Likewise, for any , we get the rank bound of , proving the main theorem.
#### Proof Idea for Theorem 16
Basically, we apply the operator on the non-nucleus part of the term , i.e. we treat a linear form as the point for the purposes of Sylvester-Gallai and then we consider assuming that the non-nucleus rank of is more than . This application of Sylvester-Gallai is much more direct compared to the methods used in [KS09b]. There, they needed versions of Sylvester-Gallai that dealt with colored points and had to prove a hyperplane decomposition property after applying essentially a operator on . Since, modulo the nucleus, all multiplication terms look essentially the same, it suffices to focus attention on just one of them. Hence, we apply the -operator on a single multiplication term.
To continue with the proof idea, assume is a simple, strongly minimal identity with terms and let be its nucleus given by Step 2. It will be convenient for us to fix a linear form and a subspace of such that we have the following orthogonal vector space decomposition (i.e. implies and implies ). This means for any form , there is a unique way to express , where , and . Furthermore, we will assume wlog that for every form the corresponding is nonzero, i.e. each form in is monic wrt (see Lemma 40).
###### Definition 19 (trun(⋅)).
Fix a decomposition . For any form , there is a unique way to express , where , and .
The truncated form is the linear form obtained by dropping the part and normalizing, i.e. .
Given a list of forms we define to be the corresponding set (thus no repetitions) of truncated forms.
To be precise, we fix a basis of so that each form in has representation (). We view each such form as the point while applying Sylvester-Gallai on . Assume, for the sake of contradiction, that the non-nucleus rank of , then (by definition) gives linearly independent forms whose span contains no other linear form of .
For simplicity of exposition, let us fix , spanned by ’s, spanned by ’s and . Note that (by definition) . We want to derive a contradiction by using the -tuple and the fact that is a simple, strongly minimal identity. The contradiction is easy to see in the following configuration: Suppose the linear forms in that are similar to a form in are exactly those depicted in the figure. Let us consider modulo the ideal . It is easy to see that these forms (call them ) “kill” the first three gates, leaving . As is an identity this means , thus there is a form such that . Now none of the forms divide . Also, their non-trivial combination, say for , cannot occur in . Otherwise, by the matching property will be in . This contradicts the ’s being a -tuple. Thus, cannot be in , a contradiction. This means that the non-nucleus rank of is , which by matching properties implies the non-nucleus rank of is .
We were able to force a contradiction because we used a set of forms in an SG-tuple that killed three terms and “preserved” the last term. Can we always do this? This is not at all obvious, and that is because of repeating forms. Suppose, after going modulo form , the circuit looks like . This is not simple, but it does not have to be. We are only guaranteed that the original circuit is simple. Once we go modulo , that property is lost. Now, the choice of any form kills all terms. In the figure above, , does not yield a contradiction. We will use our more powerful Chinese remaindering tools and the nucleus properties to deal with this. We have to prove a special theorem about partitions of and use strong minimality (which we did not use in the above sketch). The full proof is given in Section 5.1.
## 3. Matching the Terms in an Identity: Construction of mat-nucleus
### 3.1. Chinese Remaindering for Multiplication Terms
Traditionally, Chinese remaindering is the fact: if two coprime polynomials (resp. integers) divide a polynomial (resp. integer) then divides . The key tool in constructing mat-nucleus is a version of Chinese remaindering specialized for multiplication terms but generalized to ideals. Similar methods appeared first in [KS07] but we turn those on their head and give a “simpler” proof. In particular, we avoid the use of local rings and Hensel lifting.
Let be multiplication terms generating an ideal . Define linear space .
When the set of generators are clear from the context we will also use the notation . Similarly, would be a shorthand for .
Remark. Radical-span is motivated by the radical of an ideal but it is not quite that, for example, but . It is easy to see that the ideal generated by radsp always contains the radical ideal.
Now we can neatly state Chinese remaindering as an ideal decomposition statement.
###### Theorem 21 (Ideal Chinese remaindering).
Let be multiplication terms. Define the ideal . Assume while, and . Then, .
###### Proof.
If is a polynomial in then clearly it is in each of the ideals , and .
Suppose is a polynomial in . Then by definition there exist and such that,
h=i1+az=i2+bf=i3+cg.
The second equation gives . Since , repeated applications of Lemma 41 give us, . Implying , hence . This ensures the existence of and a polynomial such that,
h=i′2+b′zf=i3+cg.
Again this system says that . Since , repeated applications of Lemma 41 give us . Implying , hence . This finishes the proof. ∎
The conditions in this theorem suggest that factoring a multiplication term into parts corresponding to the equivalence classes of “similarity mod ” would be useful.
###### Definition 22 (Nodes).
Let be a multiplication term and let be an ideal generated by some multiplication terms. As the relation “similarity mod ” is an equivalence relation on , it partitions, in particular, the list into equivalence classes.
[] For each such class pick a representative and define . (Note that form can also appear in this set, it represents the class .)
[] For each , we multiply the forms in that are similar to mod . We define nodes of mod as the set of polynomials . (Remark: When , nodes of are just the coprime powers-of-forms dividing .)
[…wrt a subspace] Let be a linear subspace of . Clearly, the relation “similarity mod ” is an equivalence relation on . It will be convenient for us to also use notations and . They are defined by replacing in the above definitions by .
Observe that the product of polynomials in just gives . Also, modulo , each node is just a form-power . In other words, modulo , a node is rank-one term. The choice of the word “node” might seem a bit mysterious, but we will eventually construct paths through these. To pictorially see what is going on, think of each term as a set of its constituent nodes.
We prove a corollary of the ideal Chinese remaindering theorem that will be very helpful in both Steps 1 and 2.
###### Corollary 23.
Let , be a multiplication term, and let be an ideal generated by some multiplication terms. Then, iff such that .
###### Proof.
If , for some , then clearly .
Conversely, assume . Let and correspondingly, . If then is similar to , hence and we are done. So assume . Also, in case has a form in , assume wlog is the representative of the class . Define , for all .
We claim that for all , . Otherwise such that either or . Former case contradicts being the representative of the class , while the latter case contradicts | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9727733135223389, "perplexity": 659.6543224615189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300658.84/warc/CC-MAIN-20220118002226-20220118032226-00551.warc.gz"} |
http://petegustafson.com/CalculiX/ccx_2.15/doc/ccx/node181.html | Next: Green functions Up: Types of analysis Previous: Electromagnetism Contents
### Sensitivity
A sensitivity analysis calculates how a variable G (called the objective function) changes with some other variables s (called the design variables), i.e. DG/Ds.
If s are the coordinates of some nodes, then the objective function usually takes the form , i.e. it is a direct function of the coordinates and it is a direct function of the displacements, which are again a function of the coordinates. One can write (vector- and matrix-denoting parentheses have been omitted; it is assumed that the reader knows that and are vectors, and are matrices and that and are potentially vectors):
(430)
The governing equation for static (linear and nonlinear) calculations is , which leads to
(431)
or
(432)
where
(433)
Since for linear applications and , the above equations reduce in that case to
(434)
or
(435)
Consequently one arives at the equation:
(436)
For the speed-up of the calculations it is important to perform the calculation of the term on element level and to calculate the term before multiplying with the last term in brackets. Furhermore, should be calculated by solving an equation system and not by inverting .
For special objective functions this relationship is further simplified:
• if is the mass .
• if is the shape energy .
• if are the displacements Equation (435) applies directly
For eigenfrequencies as objective function one starts from the eigenvalue equation:
(437)
from which one gets:
(438)
Premultiplying with and taking the eigenvalue equation and the normalization of the eigenvectors w.r.t. into account leads to
(439)
Notice that this is the sensitivity of the eigenvalues, not of the eigenfrequencies (which are the square roots of the eigenvalues). This is exactly how it is implemented in CalculiX: you get in the output the sensitivity of the eigenvalues.
Subsequently, one can derive the eigenvalue equation to obtain the derivatives of the eigenvectors:
(440)
If s is the orientation in some or all of the elements, the term is in addition zero in the above equations.
In CalculiX, G is defined with the keyword *OBJECTIVE, s is defined with the keyword DESIGNVARIABLES and a sensitivity analysis is introduced with the procedure card *SENSITIVITY.
If the parameter NLGEOM is not used on the *SENSITIVITY card, the calculation of does not contain the large deformation and stress stiffness, else it does. Similarly, without NLGEOM is calculated based on the linear strains, else the quadratic terms are taken into account.
If the objective function is the mass, the shapeenergy or the displacements a *STATIC step must have been performed. The displacements and the stiffness matrix from this step are taken for and in Equation (436) (in the presence of a subsequent sensitivity step is stored automatically in a file with the name jobname.stm). If the static step was calculated with NLGEOM, so should the sensitivity step in order to be consistent. So the procedure cards should run like:
*STEP
*STATIC
...
*STEP
*SENSITIVITY
...
or
*STEP,NLGEOM
*STATIC
...
*STEP,NLGEOM
*SENSITIVITY
...
If the objective functions are the eigenfrequencies (which include the eigenmodes), a *FREQUENCY step must have been performed with STORAGE=YES. This frequency step may be a perturbation step, in which case it is preceded by a static step. The displacements , the stiffness matrix and the mass matrix for equations (439) and (440) are taken from the frequency step. If the frequency step is performed as a perturbation step, the sensitivity step should be performed with NLGEOM, else it is not necessary. So the procedure cards should run like:
*STEP
*FREQUENCY,STORAGE=YES
...
*STEP
*SENSITIVITY
...
or
*STEP
*STATIC
...
*STEP,PERTURBATION
*FREQUENCY,STORAGE=YES
...
*STEP,NLGEOM
*SENSITIVITY
...
or
*STEP,NLGEOM
*STATIC
...
*STEP,PERTURBATION
*FREQUENCY,STORAGE=YES
...
*STEP,NLGEOM
*SENSITIVITY
...
(a perturbation frequency step only makes sense with a preceding static step).
The output of a sensitivity calculation is stored as follows (frd-output only if the SEN output request was specified underneath a *NODE FILE card):
For TYPE=COORDINATE design variables the results of the target functions MASS, SHAPE ENERGY, EIGENFREQUENCY and DISPLACEMENT (i.e. the sum of the squares of the displacements in all objective nodes) are stored in the .frd-file and can be visualized using CalculiX GraphiX.
For TYPE=ORIENTATION design variables the eigenfrequency sensitivity is stored in the .dat file whereas the displacement sensitivity (i.e. the derivative of the displacements in all nodes w.r.t. the orientation) is stored in the .frd-file. The order of the design variables is listed in the .dat-file. All orientations defined by *ORIENTATION cards are varied, each orientation is defined by 3 independent variables. So for n *ORIENTATION cards there are 3n design variables. The sensitivity of the mass w.r.t. the orientation is zero.
Finally, it is important to know that a sensitivity analysis in CalculiX only works for true 3D-elements (no shells, beams, plane stress, etc...).
Next: Green functions Up: Types of analysis Previous: Electromagnetism Contents
guido dhondt 2018-12-15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9028743505477905, "perplexity": 1102.7344757553321}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00552.warc.gz"} |
http://www.physicsforums.com/printthread.php?t=665484 | Physics Forums (http://www.physicsforums.com/index.php)
- Introductory Physics Homework (http://www.physicsforums.com/forumdisplay.php?f=153)
- - Electric field at a distance from a charged disk (http://www.physicsforums.com/showthread.php?t=665484)
kb1408 Jan20-13 02:05 AM
Electric field at a distance from a charged disk
A disk of radius 2.4 cm carries a uniform surface charge density of 3.1 μ C/m2. Using reasonable approximations, find the electric field on the axis at the following distances.
I have used the equation E=(Q/ε0)(1/(4*pi*r2))
I also tried the equation E=(Q/2(ε0))(1-(z/(√(z2)+(r2)))
Thanks in advance for the help. Both equations have not led me to the correct answer.
*note, there is not a figure provided for this question*
Doc Al Jan20-13 07:43 AM
Re: Electric field at a distance from a charged disk
Quote:
Quote by kb1408 (Post 4235461) I have used the equation E=(Q/ε0)(1/(4*pi*r2))
This looks like the field from an infinite sheet of charge. You should write it as σ/2ε, where σ is the surface charge density. Not what you want.
Quote:
I also tried the equation E=(Q/2(ε0))(1-(z/(√(z2)+(r2)))
That's the one you want, but you need to replace Q with σ.
andrien Jan20-13 07:58 AM
Re: Electric field at a distance from a charged disk
http://www.physics.udel.edu/~watson/...n/efield1.html
kb1408 Jan20-13 11:37 PM
Re: Electric field at a distance from a charged disk
Doc Al, thanks for your quick reply. Unfortunately I am still doing something wrong. I am using 3.1E-6 C/m2 for σ. Is that wrong?
haruspex Jan21-13 12:05 AM
Re: Electric field at a distance from a charged disk
kb1408 Jan21-13 12:14 AM
Re: Electric field at a distance from a charged disk
E= (3.1E-6/(2*8.85E-12))(1-((.0001/(√(.00012)+(.0242))
so E= 1.79E5 N/C
where:
σ=3.1E-6 C/m2
ε0=8.85E-12 C2/N*m2
z=.01E-2 m
r= 2.4E-2 m
haruspex Jan21-13 12:34 AM
Re: Electric field at a distance from a charged disk
Did you simply replace Q by σ? What about the disc area?
kb1408 Jan21-13 12:45 AM
Re: Electric field at a distance from a charged disk
Yes, that's what I did. Is it σ=Q/A then?
haruspex Jan21-13 12:58 AM
Re: Electric field at a distance from a charged disk
Yes, as in the link andrien posted.
kb1408 Jan21-13 01:09 AM
Re: Electric field at a distance from a charged disk
cheers!
kb1408 Jan21-13 01:13 AM
Re: Electric field at a distance from a charged disk
And thank you andrien for the link!
jtbell Jan21-13 01:25 AM
Re: Electric field at a distance from a charged disk
Note: This thread had already developed quite a bit before I noticed that it really should have been in one of the homework help forums. Therefore I've simply moved it instead of deleting it and asking the original poster to start over, which is the normal practice.
In the future, please post requests for help on specific exercises like this in one of the homework help forums, even if they're not actually assignments for a class. The "normal" forums are more for conceptual questions and general discussion of their topics.
All times are GMT -5. The time now is 04:21 AM. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9491285681724548, "perplexity": 2356.8846646054308}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021586626/warc/CC-MAIN-20140305121306-00057-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/188195/inverse-of-binary-entropy-function-for-0-le-x-le-frac12?answertab=votes | # Inverse of binary entropy function for $0 \le x \le \frac{1}{2}$
I'm trying to find the inverse of $H_2(x) = -x \log_2 x - (1-x) \log_2 (1-x)$[1] subject to $0 \le x \le \frac{1}{2}$. This is for a computation, so an approximation is good enough.
My approach was to take the Taylor series at $x=\frac{1}{4}$, cut it off as a quadratic, then find the inverse of that. That yields
$$H_2^{-1}(x) \approx -\frac{1}{16} \, \sqrt{-96 \, x \log\left(2\right) + 9 \, \log\left(3\right)^{2} - 72 \, \log\left(3\right) + 96 \, \log\left(4\right)} + \frac{3}{16} \, \log\left(3\right) + \frac{1}{4}$$
Unfortunately, that's a pretty bad approximation and it's complex at $H_2^{-1}(1)$. What other approaches can I take?
[1] I originally forgot to write the base 2 subscript (I added that in a later edit)
-
A rather coarse approximation of $H_2(x)$ on the said interval is $4 \log(2) x(1-x)$. Hence a crude approximation for the inverse is: $$H_2^{-1}(y) \approx \frac{1}{2} \left(1- \sqrt{1-\frac{y}{\log(2)}}\right) = \frac12\frac{y}{\log(2) + \sqrt{\log(2)\left(\log(2)-y\right)}}$$ This initial approximation should be refined with Newton-Raphson method.
-
Thanks! How did you come up with $H_2(x) \approx 4 \log(2) x (1-x)$? – Red Aug 29 '12 at 3:17
The entropy is symmetric function of $x$, suggesting to approximate it by a polynomial in $x(1-x)$. – Sasha Aug 29 '12 at 3:19
@Red I meant symmetric around $x=1/2$. – Sasha Aug 29 '12 at 3:27
+1. A slightly better approximation would be to choose $a$ such that $$\int_0^1 \left(H_2(x) - ax(1-x) \right)^2 dx$$ is minimized. This gives us $a= \dfrac{35}{12} \approx 2.91667$ as opposed to $a = 4 \log(2) \approx 2.7726$. Sasha's $4 \log 2$ ensures that the approximation is always $\leq H_2(x)$, whereas the above approximation optimizes the choice over the entire domain. – user17762 Nov 28 '12 at 23:53
A nice approximation I found in this thesis:
$\frac{x}{2 \log_2 (6/x)} \leq H_2^{-1}(x) \leq \frac{x}{ \log_2 (1/x)}$
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9844280481338501, "perplexity": 468.61361344640494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510276584.58/warc/CC-MAIN-20140728011756-00324-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://brilliant.org/problems/lucas-numbers/ | # Luca's numbers....
Just like Fibonacci numbers , there are $$\color{Blue}{Luca's}$$ numbers... They are defined by following recurrence relation. $L_0=2$ $L_1 =1$ $\text{and}$ $L_n = L_{n-1} + L_{n-2}$
Find the value of $$\displaystyle \sum_{n=0} ^{10} L_n$$
Note :- I read about Luca's numbers at this problem in the tag recurrence relations, and i think the same thing as why the maker had chosen $$L_{10}$$ as asked answer, the same is why I also have chosen it to be sum till 10. After you get the answer, you'll come to know it, $$\color{Red}{\text{the answer number looks very good.}}$$
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9252781867980957, "perplexity": 601.9883832844889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805914.1/warc/CC-MAIN-20171120052322-20171120072322-00655.warc.gz"} |
http://models.street-artists.org/2017/03/08/bet-proofness-as-a-property-of-confidence-interval-construction-procedures-not-realized-intervals/comment-page-1/ | There was a bunch of discussion over at Andrew Gelman's blog about "bet proof" interpretations of confidence intervals. The relevant paper is here.
The basic principal of bet-proofness was essentially that if a sample of data X comes from a RNG with known distribution $D(\Theta)$ that has some parameter $\Theta$, then even if you know $\Theta$ exactly, so long as you don't know what the X values will be, you can't make money betting on whether the constructed CI will contain the $\Theta$ (the paper writes this in terms of $f(\Theta)$ but the principal is the same since f is a deterministic function).
The part that confused me, was that this was then taken to be a property of the individual realized interval... "Because an interval came from a bet-proof procedure it is a bet-proof realized interval" in essence. But, this defines a new term "bet-proof realized interval" which is meaningless when it comes to actual betting. The definition of "bet-proof procedure" explicitly uses averaging over the possible outcomes of the data collection procedure $X$ but after you've collected $X$ and told everyone what it is, if someone knows $\Theta$ and knows $X$ they can calculate exactly whether the confidence interval does or does not contain $\Theta$ and so they win every bet they make.
So "bet-proof realized confidence interval" is really just a technical term meaning "a realized confidence interval that came from a bet proof procedure" however it doesn't have any content for prediction of bets about that realized interval. The Bayesian with perfect knowledge of $\Theta$ and $X$ and the confidence construction procedure wins every bet! (there's nothing uncertain about these bets).
2 Responses leave one → | {"extraction_info": {"found_math": true, "script_math_tex": 12, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8866420388221741, "perplexity": 693.1523754123657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947705.94/warc/CC-MAIN-20180425061347-20180425081347-00179.warc.gz"} |
http://mathoverflow.net/questions/87022/kernel-of-the-representation-of-the-mapping-class-group-to-autf-n | # Kernel of the representation of the mapping class group to $Aut(F_n)$
Let $S_{g,1}$ be a orientable compact surface of genus $g$ with one boundary component and $\Gamma_{g,1}$ the mapping class group. By $F_n$ I denote the free group on $n$ generators.
One obtains a representation $\rho: \Gamma_{g,1} \rightarrow Aut(F_{2g})$.
What is the kernel of $\rho$?
-
To be clear, you are placing a base point on the boundary of $S_{g,1}$. Otherwise, you only get a representation into $\mathrm{Out}(F)$. – HJRW Jan 30 '12 at 13:30
@HW and lsw: Could you clarify what is $\rho$ and why it depends on where the basepoint is (as long as it is fixed)? – Mark Sapir Jan 30 '12 at 15:07
@HW: Thank you for making this precise. @Mark Sapir: You have to consider the induced action on the fundamental group of $S_{g,1}$. By fixing a base-point there is no $Inn(\pi_1)$-action. – lsw Jan 30 '12 at 15:16
@Isw: Why is the kernel non-trivial? – Mark Sapir Jan 30 '12 at 16:24
@Mark Sapir: I don't know. Why is it trivial? – lsw Jan 30 '12 at 16:39
The representation is faithful, since a mapping class is determined by its action on the fundamental group of the surface. A surface is a $K(\pi,1)$, so given any element $Aut(S_{g,1})$, one obtains a (pointed) map $\varphi:S_{g,1}\to S_{g,1}$ which is unique up to homotopy. Now one needs to know that two homotopic homeomorphisms of a surface are isotopic, which is classic (at least one may find this in a paper of Waldhausen). In fact, one may identify the image in $Aut(F_{2g})$ as the subgroup preserving the peripheral element. Also, note that everything should be fixing a basepoint in the boundary, as in HW's comment. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8914531469345093, "perplexity": 193.3268091857117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999672215/warc/CC-MAIN-20140305060752-00066-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/118311-diagonalizable-matrices.html | ## Diagonalizable Matrices
If a matrix has complex eigenvalues, can it be diagonalizable? Furthermore, what ratio of square matrices will be diagonalizable? Does it vary with the size of the matrix?
I'm working on a project and these questions came up and will effect which direction I end up taking it. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8886876702308655, "perplexity": 522.4787700044757}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889736.54/warc/CC-MAIN-20180120221621-20180121001621-00455.warc.gz"} |
https://cs.stackexchange.com/questions/87743/weight-constrained-shortest-path-problem-variants | # weight constrained shortest path problem variants
Given a graph $G=(V,E)$, metric spaces $\delta:E\rightarrow \mathbb{Z}^{+}$ and $w:E\rightarrow \mathbb{Z}^{+}$, terminal vertices $s,t\in V$, do there exists $s\rightarrow t$ path $P=(V_{p},E_{p})$ such that $\sum_{e\in E_{p}} w(e) \leq W$ and $\sum_{e\in E_{p}} \delta(e) \leq K$, where $W,K\in Z^{+}$.
This is Weight constrained Shortest Path Problem, and known to be NP-complete for undirected as well as directed even acyclic graphs. as long as $\delta$ and $w$ is not equal for all edges. Now if we move weight from edges to vertices and change $\leq$ of total weight constraint to $=$ then the new problem is:
Given a directed acyclic graph $G=(V,E)$, metric spaces $\delta:E\rightarrow \mathbb{Z}^{+}$ and $w:V\rightarrow \mathbb{Z}^{+}$, terminal vertices $s,t\in V$, do there exists $s\rightarrow t$ path $P=(V_{p},E_{p})$ such that $\sum_{v\in V_{p}} w(v) = W$ and $\sum_{e\in E_{p}} \delta(e) \leq K$.
Is this problem known ? is this solvable in P time ? or this is NP-complete too ? I think even if we replace $\mathbb{Z}^{+}$ with $\mathbb{R}^{+}$ it will not change nature of the problem.
If we set $W=\vert V \vert$ then it tries to reach all the vertices which looks like hamiltonian path problem, but it is a DAG.
## Update
I was trying to reduce it from Subset Sum as suggested by @D.W.
Given $X=\{1,2,3\}$ of size $\vert X \vert = n$ I can convert that to a graph of $n^{2}+n+2$ vertices. We can make $n$ layers of $X$ and connect every vertex with all vertices of next layer except itself. Keep a vertex with 0 weight on each layer. As show below.
But this reduction has 2 restrictions.
1. Every Layer is not complete bipartite (e.g. $2_{i}$ does not connect to $2_{i+1}$).
2. Every layer must have a vertex with 0 weight.
Now this does not prove the second problem for General purpose DAG. I can not connect $x_{i}\rightarrow x_{i+1}$ because that will change the original problem. So if there is a DAG that looks very similar to this one but have $n^{2}$ edges between each layer and have no 0 weight vertex this reduction does not apply to that graph.
Also in this formulation I can take the same element twice $1_{1}\rightarrow 2_{2}\rightarrow 1_{3}$ which will add to $4$. Which I should not be allowed
• Have you tried reducing from subset-sum? – D.W. Feb 6 '18 at 8:05
• Yes that was a typo, Thanks. The problem is with the acyclic. But if the problem is known I don't need to reduce. Also I don't think I can reduce the second one from first one. I thought I will check the proof for the first one but that is a private communication. – Neel Basu Feb 6 '18 at 9:37
• I suggest you try reducing from subset-sum. The reduction looks immediate to me, if I'm not missing something, but I'll let you check the details. – D.W. Feb 6 '18 at 13:08
• Yes I understand why it looks immediate each element will be translated to a vertex of a complete graph. The only thing I am afraid of is to prove it for DAG and remove the complete graph restriction. I am not sure how far I can go with vertex splitting to get rid of that. I will try tonight. – Neel Basu Feb 6 '18 at 15:35
• Thanks. Partial Success. But the DAG generated after reduction is not a general DAG. Please check update. – Neel Basu Feb 6 '18 at 19:22
• Definition requires P-time transformation of Input $I_{x}\rightarrow I_{y}$. Now how can I defy this argument ? if $Y$ is chain of complete bipartite graphs (with no infinite edge cost allowed) then how will that transform function generate $I_{y}$ ? – Neel Basu Feb 7 '18 at 8:19
• Also is this even correct ? because I can take the same element twice $1_{1}\rightarrow 2_{2}\rightarrow 1_{3}$ which will add to $4$. Which I should not be allowed. This may lead to a scenario when there is no such subset in the original problem but there is a path in the reduced problem. Also number of solutions not preserved. – Neel Basu Feb 7 '18 at 11:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8488236665725708, "perplexity": 283.12724654014056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146562.94/warc/CC-MAIN-20200226211749-20200227001749-00420.warc.gz"} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.