http://etna.ricam.oeaw.ac.at/volumes/2011-2020/vol44/abstract.php?vol=44&pages=624-638

## Perturbation of partitioned linear response eigenvalue problems
Zhongming Teng, Linzhang Lu, and Ren-Cang Li
### Abstract
This paper is concerned with bounds for the linear response eigenvalue problem for $H=\begin{bmatrix} 0 & K \\ M & 0 \end{bmatrix}$, where $K$ and $M$ admit a $2\times 2$ block partitioning. Bounds on how its eigenvalues change when $K$ and $M$ are perturbed are obtained: they are of linear order with respect to the diagonal block perturbations and of quadratic order with respect to the off-diagonal block perturbations in $K$ and $M$. The result is helpful in understanding how the Ritz values move towards eigenvalues in some efficient numerical algorithms for the linear response eigenvalue problem. Numerical experiments are presented to support the analysis.
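The block structure of $H$ already forces a symmetric spectrum. A minimal numerical sketch (my own illustration, not from the paper), using randomly generated symmetric positive definite $K$ and $M$:

```python
import numpy as np

rng = np.random.default_rng(0)

def spd(n):
    # random symmetric positive definite block
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

n = 4
K, M = spd(n), spd(n)
H = np.block([[np.zeros((n, n)), K],
              [M, np.zeros((n, n))]])

# For symmetric positive definite K and M the spectrum of H is real
# and comes in +/- pairs (the squared eigenvalues are those of K @ M).
eig = np.sort(np.linalg.eigvals(H).real)
print(np.allclose(eig, -eig[::-1]))  # expect True
```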
### Key words
linear response eigenvalue problem, random phase approximation, perturbation, quadratic perturbation bound
15A42, 65F15
http://clay6.com/qa/962/the-number-of-arbitrary-constants-in-the-general-solution-of-a-differential
# The number of arbitrary constants in the general solution of a differential equation of fourth order are
$(A)\;0\qquad(B)\;2\qquad(C)\;3\qquad(D)\;4$
Toolbox:
• The number of arbitrary constants in a solution of a differential equation of order n is equal to its order.
The given differential equation is of fourth order, so the number of arbitrary constants in its general solution is 4.
Hence the correct answer is $D$.
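This can be checked symbolically. A small sketch (an illustration, not part of the original solution) using SymPy on the simplest fourth-order equation, $y''''=0$:

```python
from sympy import Function, dsolve, symbols

x = symbols('x')
y = Function('y')

# General solution of y'''' = 0; SymPy introduces one arbitrary
# constant per order of the equation.
sol = dsolve(y(x).diff(x, 4), y(x))
constants = sorted(s.name for s in sol.rhs.free_symbols if s.name.startswith('C'))
print(constants)  # expect four constants, C1 through C4
```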
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-3-equations-and-problem-solving-3-1-solving-first-degree-equations-problem-set-3-1-page-100/21

## Elementary Algebra
$b=0.27$
Using the properties of equality, the value of the variable that satisfies the given equation, $b+0.19=0.46,$ is \begin{array}{l}\require{cancel} b=0.46-0.19 \\\\ b=0.27 .\end{array}
http://poisotlab.io/EcologicalNetwork.jl/latest/community/nlinks/

# Number of links and connectance
### EcologicalNetwork.links — Function
Number of links in a network
links(N::EcoNetwork)
For all types of networks, this is the sum of the adjacency matrix. Note that for quantitative networks, this is the cumulative sum of link weights.
### EcologicalNetwork.link_number — Function
Number of links in a quantitative network
link_number(N::QuantitativeNetwork)
In quantitative networks only, returns the number of non-zero interactions.
### EcologicalNetwork.links_var — Function
Variance in the expected number of links
links_var(N::ProbabilisticNetwork)
Expected variance of the number of links for a probabilistic network.
### EcologicalNetwork.linkage_density — Function
linkage_density(N::DeterministicNetwork)
Number of links divided by species richness.
## Connectance
### EcologicalNetwork.connectance — Function
Connectance
connectance(N::EcoNetwork)
Number of links divided by the number of possible interactions. In unipartite networks, this is $L/S^2$. In bipartite networks, this is $L/(T \times B)$.
Connectance of a quantitative network
connectance(N::QuantitativeNetwork)
Connectance of a quantitative network – the information on link weight is ignored.
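In concrete terms (a hand-rolled sketch with NumPy in Python, not the package's Julia API), the two quantities for a binary unipartite network are:

```python
import numpy as np

# toy unipartite binary network over S = 4 species
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]])

L = int(A.sum())           # number of links = sum of the adjacency matrix
S = A.shape[0]
connectance = L / S**2     # links over possible interactions, L / S^2
print(L, connectance)      # expect 5 links, connectance 0.3125
```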
### EcologicalNetwork.connectance_var — Function
Variance in the expected connectance
connectance_var(N::ProbabilisticNetwork)
Expected variance of the connectance for a probabilistic matrix, measured as the variance of the number of links divided by the squared size of the matrix.
http://mathhelpforum.com/differential-equations/193253-homogeneous-ode-print.html

# Homogeneous ODE
• December 2nd 2011, 11:17 AM
losm1
Homogeneous ODE
Equation is $xy' = y \ln \frac{y}{x}$.
I have tried to substitute $v = \frac{y}{x}$ and I am stuck at $\frac{1}{dx}x = v (\ln v - 1)\frac{1}{dv}$
To be exact I do not know where to go from there in calculus sense.
• December 2nd 2011, 12:07 PM
Darkprince
Re: Homogeneous ODE
So you have (1/x)dx = (1/(v ln v - v))dv = (1/(v(ln v - 1)))dv.
Now the integral of 1/(v(ln v - 1)) with respect to v is ln(ln v - 1).
Then proceed accordingly, hope I helped :)
• December 2nd 2011, 12:08 PM
alexmahone
Re: Homogeneous ODE
Quote:
Originally Posted by losm1
Equation is $xy' = y \ln \frac{y}{x}$.
I have tried to substitute $v = \frac{y}{x}$ and I am stuck at $\frac{1}{dx}x = v (\ln v - 1)\frac{1}{dv}$
To be exact I do not know where to go from there in calculus sense.
$\int\frac{dv}{v(\ln v-1)}=\ln (\ln v-1)+C$
• December 3rd 2011, 03:19 AM
losm1
Re: Homogeneous ODE
In my example differentials dx and dv are in denominator:
$\frac{1}{dx}x = v (\ln v - 1)\frac{1}{dv}$
I'm having trouble applying your formula in this case. Can you please clarify further?
• December 3rd 2011, 04:13 AM
Prove It
Re: Homogeneous ODE
Quote:
Originally Posted by losm1
Equation is $xy' = y \ln \frac{y}{x}$.
I have tried to substitute $v = \frac{y}{x}$ and I am stuck at $\frac{1}{dx}x = v (\ln v - 1)\frac{1}{dv}$
To be exact I do not know where to go from there in calculus sense.
Make the substitution \displaystyle \begin{align*} v = \frac{y}{x} \implies y = v\,x \implies \frac{dy}{dx} = v + x\,\frac{dv}{dx} \end{align*} and the DE becomes
\displaystyle \begin{align*} x\,\frac{dy}{dx} &= y\ln{\left(\frac{y}{x}\right)} \\ x\left(v + x\,\frac{dv}{dx}\right) &= v\,x\ln{v} \\ v + x\,\frac{dv}{dx} &= v\ln{v} \\ x\,\frac{dv}{dx} &= v\ln{v} - v \\ x\,\frac{dv}{dx} &= v\left(\ln{v} - 1\right) \\ \frac{1}{v\left(\ln{v} - 1\right)}\,\frac{dv}{dx} &= \frac{1}{x} \\ \int{\frac{1}{v\left(\ln{v} - 1\right)}\,\frac{dv}{dx}\,dx} &= \int{\frac{1}{x}\,dx} \\ \int{\frac{1}{\ln{v} - 1}\,\frac{1}{v}\,dv} &= \ln{|x|} + C_1 \\ \int{\frac{1}{u}\,du} &= \ln{|x|} + C_1 \textrm{ after making the substitution }u = \ln{v} - 1 \implies du = \frac{1}{v}\,dv \\ \ln{|u|} + C_2 &= \ln{|x|} + C_1 \\ \ln{|u|} - \ln{|x|} &= C \textrm{ where }C = C_1 - C_2 \\ \ln{\left|\frac{u}{x}\right|} &= C \\ \ln{\left|\frac{\ln{v} - 1}{x}\right|} &= C \\ \ln{\left|\frac{\ln{\left(\frac{y}{x}\right)} - 1}{x}\right|} &= C \\ \frac{\ln{\left(\frac{y}{x}\right)} - 1}{x} &= A \textrm{ where } A = \pm e^C\end{align*}
You could get y in terms of x if you wanted to.
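The end result can be sanity-checked with a CAS. A sketch (mine, not from the thread) that substitutes the implied family $y = x\,e^{Ax+1}$ back into the original ODE:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
A = sp.symbols('A', real=True)

# (ln(y/x) - 1)/x = A  rearranges to  y = x*exp(A*x + 1)
y = x * sp.exp(A * x + 1)

# residual of the original ODE x*y' = y*ln(y/x); zero means y solves it
residual = sp.simplify(x * y.diff(x) - y * sp.log(y / x))
print(residual)  # expect 0
```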
• December 3rd 2011, 05:17 AM
Darkprince
Re: Homogeneous ODE
Quote:
Originally Posted by losm1
In my example differentials dx and dv are in denominator:
$\frac{1}{dx}x = v (\ln v - 1)\frac{1}{dv}$
I'm having trouble applying your formula in this case. Can you please clarify further?
So you have x/dx = v(lnv - 1)/dv implies xdv = v(lnv-1)dx implies (1/x)dx = (1/v(lnv-1)) dv
• December 3rd 2011, 09:47 AM
tom@ballooncalculus
Re: Homogeneous ODE
Yes, that is the idea. However, just in case an overview helps...
*(Chain-rule diagrams from ballooncalculus.org omitted; they illustrate the substitution and integration steps above.)*
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1535

## Theory and Numerics of Open System Thermodynamics
• A general framework for the thermodynamics of open systems is developed in the spatial and the material setting. Special emphasis is placed on the balance of mass which is enhanced by additional source and flux terms. Different solution strategies within the finite element technique are derived and compared. A number of numerical examples illustrates the features of the proposed approach.
• Theorie und Numerik der Thermodynamik Offener Systeme
https://www.arxiv-vanity.com/papers/hep-lat/9411067/

## Abstract
We study the valence approximation in lattice QCD of hadrons, in which the cloud quarks and antiquarks are deleted by truncating the backward time propagation (Z graphs) in the connected insertions, while the sea quarks are eliminated via the quenched approximation and by dropping the disconnected insertions. It is shown that the ratios of isovector to isoscalar matrix elements in the nucleon reproduce the SU(6) quark model predictions in a lattice QCD calculation. We also discuss how the hadron masses are affected.
UK/94-03
Oct. 1994
hep-lat/9411067
Quark Model From Lattice QCD
Keh-Fei Liu*** and Shao-Jing Dong
***Talk presented at Int. Conf. High Energy Phys., Glasgow, July 1994.
[0.5cm] Dept. of Physics and Astronomy
Univ. of Kentucky, Lexington, KY 40506
[1em]
## 1 Introduction
In addition to its classification scheme, the quark model is, by and large, quite successful in delineating the spectrum and structure of mesons and baryons. One often wonders what the nature of the approximation is, especially in view of the advent of quantum chromodynamics (QCD). In order to answer this question, we need to understand first where the quark model is successful and where it fails.
To begin with, we need to define what we mean by the quark model. We consider the simplest approach which includes the following ingredients:
• The Fock space is restricted to the valence quarks only.
• These valence quarks, be they the dressed constituent quarks or the bare quarks, are confined in a potential or a bag. To this zeroth order, the hadron wavefunctions involving u, d, and s quarks are classified by wavefunctions that are totally symmetric in the flavor-spin and orbital space and totally antisymmetric/symmetric in the color space for the baryons/mesons.
• The degeneracy within the multiplets is lifted by the different quark masses and the residual interaction between the quarks, which is weak compared to the confining potential. The one-gluon exchange potential is usually taken as this residual interaction to describe the hyper-fine and fine splittings of the hadron masses.
Given what we mean by the quark model, it is easier to understand where the quark model succeeds and fails. It is successful in describing hadron masses, relations of coupling and decay constants, magnetic moments, the Okubo-Zweig rule, etc. It is worthwhile noting that all these are based on the valence picture aided with group for its color-spin and space group. On the other hand, it fails to account for the U(1) anomaly (the mass), the proton spin crisis, and the term. All these problems involve large contributions from disconnected insertions involving sea-quarks [1]. It is natural not to expect the valence quark model to work there. There are other places where the valence quark model does not work well. These include , scatterings, current algebra relations, and the form factors of the nucleon, which are better described by meson effective theories with chiral symmetry taken into account. For example, the scattering is well described in the chiral perturbation theory; the scattering and the nucleon electromagnetic, axial, and pseudoscalar form factors (especially the neutron charge radius), and the Goldberger-Treiman relation are all quite well given in the skyrmion approach [2]. One common theme of these models is the chiral symmetry, which involves the meson cloud and hence the higher Fock space beyond the valence.
## 2 Valence Approximation
It is then clear that there are three ingredients in the classification of the quarks, i.e. the valence, the cloud, and the sea quarks. The question is how one defines them unambiguously and in a model independent way in QCD. It has been shown recently [3] that in evaluating the hadronic tensor in the deep inelastic scattering, the three topologically distinct contractions of quark fields lead to the three quark-line skeleton diagrams. The self-contraction of the current leading to a quark loop is separated from the quark lines joining the nucleon interpolating fields. This disconnected insertion (D.I.) refers to the quark lines which are of course connected by the gluon lines. This D.I. defines the sea-parton. One class of the connected insertion (C.I.) involves an anti-quark propagating backwards in time between the currents and is defined as the cloud anti-quark. Another class of the C.I. involves a quark propagating forward in time between the currents and is defined to be the sum of the valence and cloud quarks. Thus, in the parton model, the antiquark distribution should be written as
$$\bar{q}^i(x)=\bar{q}^i_c(x)+\bar{q}^i_s(x). \qquad (1)$$
to denote their respective origins for each flavor i. Similarly, the quark distribution is written as
$$q^i(x)=q^i_V(x)+q^i_c(x)+q^i_s(x) \qquad (2)$$
Since , we define so that will be responsible for the baryon number, i.e. and for the proton.
We can reveal the role of these quarks in the nucleon matrix elements which involve the three-point function with one current. The D.I. in the three-point function involves the sea-quark contribution to the m.e. It has been shown that this diagram has indeed large contributions for the flavor-singlet scalar and axial charges [4], so that the discrepancy between the valence quark model and the experiment in the term and the flavor-singlet can be understood. Thus we conclude that in order to simulate the valence quark model, the first step is to eliminate the quark loops. This can be done in the well-known quenched approximation by setting the fermion determinant to a constant.
In order to reveal the effect of the cloud degree of freedom, we have calculated the ratios of the isoscalar to isovector axial and scalar charges in a quenched lattice calculation. The ratio of the isoscalar (the C.I. part) to isovector axial charge can be written as
(3)
where is the polarized parton distribution of the u(d) quark and antiquark in the C.I. For the non-relativistic case, is 5/3 and for the C.I. is 1. Thus, the ratio should be 3/5. Our lattice results, based on quenched lattices with for the Wilson ranging from 0.154 to 0.105, which correspond to strange and twice the charm masses, are plotted in Fig. 1 as a function of the quark mass . We indeed find this ratio for the heavy quarks (i.e. or in Fig. 1). This is to be expected because the cloud antiquarks, which involve Z-graphs, are suppressed for non-relativistic quarks by . Interestingly, the ratio dips under 3/5 for light quarks. We interpret this as due to the cloud quark and antiquark, since in the relativistic valence quark models (i.e. no cloud nor sea quarks) the ratio remains 3/5. To verify that this is indeed caused by the cloud antiquarks from the backward time propagation, we perform the following approximation. In the Wilson lattice action, the backward time hopping is prescribed by the term . We shall amputate this term from the quark matrix in our calculation of the quark propagators. As a result, the quarks are limited to propagating forward in time and there will be no Z-graph and hence no cloud quarks and antiquarks. The Fock space is limited to 3 valence quarks. Thus we shall refer to this as the valence approximation, and we believe it simulates what the naive quark model is supposed to describe by design. After making this valence approximation for the light quarks with and 0.154 (the quark mass turns out to differ from before only at the perturbative one-loop order, i.e. , which is very small), we find that the ratio becomes 3/5 with errors less than the size of the circles in Fig. 1. Since the valence quark model prediction of is well reproduced by the valence approximation, we believe this proves our point that the deviation of from 3/5 in Fig. 1 is caused by the backward time propagation, i.e. the cloud quarks and antiquarks.
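For reference, the non-relativistic SU(6) numbers quoted above follow from the standard proton spin fractions $\Delta u = 4/3$ and $\Delta d = -1/3$ (textbook quark-model values; this is my own arithmetic, not a lattice result from the paper):

```python
from fractions import Fraction

# non-relativistic SU(6) quark-model spin content of the proton
du = Fraction(4, 3)    # Delta u
dd = Fraction(-1, 3)   # Delta d

g_isovector = du - dd             # isovector axial charge: 5/3
g_isoscalar = du + dd             # connected-insertion isoscalar charge: 1
ratio = g_isoscalar / g_isovector
print(g_isovector, g_isoscalar, ratio)  # expect 5/3, 1, 3/5
```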
Similar situation happens in the scalar matrix elements. In the parton model description of the forward m.e., the ratio of the isovector to isoscalar scalar charge of the proton for the C.I. is then approximated according to eqs. (1) and (2) as
$$R_S=\left.\frac{\langle p|\bar{u}u-\bar{d}d|p\rangle}{\langle p|\bar{u}u+\bar{d}d|p\rangle}\right|_{\rm C.I.}=\frac{1+2\int dx\,[\bar{u}_c(x)-\bar{d}_c(x)]}{3+2\int dx\,[\bar{u}_c(x)+\bar{d}_c(x)]} \qquad (4)$$
Since the quark/antiquark number is positive definite, we expect this ratio to be . For heavy quarks where the cloud antiquarks are suppressed, the ratio is indeed 1/3 (see Fig. 2). For quarks lighter than , we find that the ratio is in fact less than 1/3. The lattice results of the valence approximation for the light quarks, shown as the circles in Fig. 2, turn out to be 1/3. This shows that the deviation of from 1/3 is caused by the cloud quarks and antiquarks. With these findings, we obtain an upper-bound for the violation of GSR [3], i.e. . This clearly shows that is negative and is quite consistent with the experimental result .
To further explore the consequences of the valence approximation, we calculate the baryon masses. Plotted in fig. 3 are masses of , and as a function of the quark mass on our lattice with quenched approximation. We see that the hyper-fine splittings between the and N, and the and grow when the quark mass approaches the chiral limit as expected. However, it is surprising to learn that in the valence approximation, the and N become degenerate within errors, so do the and as shown in Fig. 4. Since the one-gluon exchange is not switched off in the valence approximation, the hyper-fine splitting is probably not due to the one-gluon exchange potential as commonly believed. Since this is a direct consequence of eliminating the cloud quark/antiquark degree of freedom, one can speculate that it has something to do with the cloud. It seems that a chiral soliton like the skyrmion might delineate a more accurate dynamical picture than the one-gluon exchange spin-spin interaction.
To conclude, we find that the valence approximation in QCD reproduces the SU(6) results of the valence quark model better than we anticipated. Especially in hadron masses, the results seem to indicate that there are no hyper-fine splittings, modulo the uncertainty due to the statistical and systematic errors.
Figure Captions:
Fig. 1 The ratio of eq. (3) as a function of the quark mass .
Fig. 2 The ratio of eq. (4) as a function of the quark mass ma.
Fig. 3 Masses of , and (in lattice units) as a function of the quark mass ma in the quenched approximation.
Fig. 4 The same as in Fig. 3 with the valence approximation.
https://www.universetoday.com/1848/gamma-ray-bursts-eject-matter-at-nearly-the-speed-of-light/?shared=email&msg=fail

# Gamma Ray Bursts Eject Matter at Nearly the Speed of Light
Gamma ray bursts are the most powerful explosions in the Universe, emitting more energy in an instant than our Sun can give off in its entire lifetime. But they don’t just blast out radiation, they also eject matter. And it turns out, they eject matter very very quickly – at 99.9997% the speed of light.
This discovery was made by a large group of European researchers. They targeted the European Southern Observatory’s robotic La Silla Observatory at two recent gamma ray burst explosions. The observatory receives its targets automatically from NASA’s Swift satellite, and it autonomously zeros in to capture as much data as possible during the first few seconds after the explosion is detected.
In two cases, La Silla observed the light curve of the explosion, and measured the peak. And measuring the peak is the key, since it allowed them to calculate the velocity of matter ejected from the explosion. In the case of these two explosions, the matter was calculated to be traveling 99.9997% the speed of light.
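To put a number on how fast that is (my own back-of-the-envelope arithmetic, not from the article), the Lorentz factor at 99.9997% of the speed of light is about 408:

```python
import math

beta = 0.999997                    # ejecta speed as a fraction of c
gamma = 1 / math.sqrt(1 - beta**2) # Lorentz factor
print(round(gamma))                # expect roughly 408
```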
That’s fast.
Original Source: ESO News Release
http://www.ck12.org/algebra/Simplifying-Rational-Expressions/lesson/Rational-Expression-Simplification-Honors/r4/ | <img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" />
# Simplifying Rational Expressions
## Factor numerator and denominator and cancel
Rational Expression Simplification
How could you use factoring to help simplify the following rational expression?
### Guidance
A rational number is any number of the form , where . A rational expression is any algebraic expression of the form , where . An example of a rational expression is: .
Consider that any number or expression divided by itself is equal to 1. For example, and . This fact allows you to simplify rational expressions that are in factored form by looking for "1's". Consider the following rational expression:
Factor both the numerator and denominator completely:
Notice that there is one factor of in both the numerator and denominator. These factors divide to make 1, so they "cancel out" (the second factor of in the denominator will remain there).
Also, the reduces to just . The simplified expression is:
Keep in mind that you cannot "cancel out" common factors until both the numerator and denominator have been factored.
A rational expression is like any other fraction in that it is said to be undefined if the denominator is equal to zero. Values of the variable that cause the denominator of a rational expression to be zero are referred to as restrictions and must be excluded from the set of possible values for the variable. For the original expression above, the restriction is because if then the denominator would be equal to zero. Note that to determine the restrictions you must look at the original expression before any common factors have been cancelled.
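A quick sketch with SymPy (my own example, not from the lesson) showing both steps: cancel the common factor, but read the restrictions off the *original* denominator:

```python
from sympy import symbols, cancel, solve, simplify

x = symbols('x')
num = x**2 - 9           # factors as (x - 3)(x + 3)
den = x**2 - 5*x + 6     # factors as (x - 2)(x - 3)

simplified = cancel(num / den)   # the common factor (x - 3) cancels
restrictions = solve(den, x)     # zeros of the original denominator
print(simplified, restrictions)  # expect (x + 3)/(x - 2) and roots 2, 3
```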
#### Example A
Simplify the following and state any restrictions on the denominator.
Solution: To begin, factor both the numerator and the denominator:
Cancel out the common factor of to create the simplified expression:
The restrictions are and because both of those values for would have made the denominator of the original expression equal to zero.
#### Example B
Simplify the following and state any restrictions on the denominator.
Solution: To begin, factor both the numerator and the denominator:
Cancel out the common factor of to create the simplified expression:
The restrictions are and because both of those values for would have made the denominator of the original expression equal to zero.
#### Example C
Simplify the following and state any restrictions on the denominator.
Solution: To begin, factor both the numerator and the denominator:
Cancel out the common factor of to create the simplified expression:
The restrictions are and because both of those values for would have made the denominator of the original expression equal to zero.
where and
### Vocabulary
Rational Expression
A rational expression is an algebraic expression that can be written in the form where .
Restriction
Any value of the variable in a rational expression that would result in a zero denominator is called a restriction on the denominator.
### Guided Practice
Simplify each of the following and state the restrictions.
1.
2.
3.
1. , ;
2. ,
3. , ;
### Practice
For each of the following rational expressions, state the restrictions.
Simplify each of the following rational expressions and state the restrictions.
### Vocabulary Language: English
Rational Expression
A rational expression is a fraction with polynomials in the numerator and the denominator.
Restriction
A restriction is a value of the variable at which the expression is not defined. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 1, "texerror": 0, "math_score": 0.9692960977554321, "perplexity": 604.1287777431647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157262.85/warc/CC-MAIN-20160205193917-00324-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://proofwiki.org/wiki/Definition:Set_Equality/Definition_1 | # Definition:Set Equality/Definition 1
## Definition
Let $S$ and $T$ be sets.
$S$ and $T$ are equal if and only if they have the same elements:
$S = T \iff \paren {\forall x: x \in S \iff x \in T}$
Otherwise, $S$ and $T$ are distinct, or unequal.
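Python's built-in sets compare by exactly this extensional rule, so the definition can be checked mechanically. This is a toy illustration, not part of the ProofWiki entry:

```python
# Set equality by extension: same elements <=> equal, regardless of how
# the sets were written down (order and repetition are irrelevant).
S = {3, 1, 2}
T = set([1, 1, 2, 2, 3])      # duplicates collapse, order does not matter
U = {1, 2}

# Definition 1, checked element-by-element over a small universe that
# contains every element in play:
def ext_equal(A, B, universe=range(-5, 6)):
    return all((x in A) == (x in B) for x in universe)

same = (S == T) and ext_equal(S, T)          # S and T are equal
distinct = (S != U) and not ext_equal(S, U)  # S and U are distinct
```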
## Equality of Classes
In the context of class theory, the same definition applies.
Let $A$ and $B$ be classes.
$A$ and $B$ are equal, denoted $A = B$, if and only if:
$\forall x: \paren {x \in A \iff x \in B}$
where $\in$ denotes class membership. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9648146629333496, "perplexity": 627.4659478596566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00192.warc.gz"} |
https://custom-scripts.sentinel-hub.com/custom-scripts/sentinel-2/ndwi/ | # custom-scripts
A repository of custom scripts that can be used with Sentinel-Hub services.
# NDWI Normalized Difference Water Index
## General description of the script
The NDWI is used to monitor changes related to water content in water bodies. As water bodies strongly absorb light in the visible-to-infrared electromagnetic spectrum, NDWI uses green and near-infrared bands to highlight water bodies. It is sensitive to built-up land, which can result in over-estimation of water bodies. The index was proposed by McFeeters, 1996.
Values description: Index values greater than 0.5 usually correspond to water bodies. Vegetation usually corresponds to much smaller values and built-up areas to values between zero and 0.2.
Note: NDWI index is often used synonymously with the NDMI index, often using NIR-SWIR combination as one of the two options. NDMI seems to be consistently described using NIR-SWIR combination. As the indices with these two combinations work very differently, with NIR-SWIR highlighting differences in water content of leaves, and GREEN-NIR highlighting differences in water content of water bodies, we have decided to separate the indices on our repository as NDMI using NIR-SWIR, and NDWI using GREEN-NIR.
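The McFeeters (1996) formula behind the script is NDWI = (GREEN − NIR) / (GREEN + NIR). A minimal Python sketch follows; note the repository's actual evalscripts are JavaScript, and the reflectance values below are made up for illustration:

```python
def ndwi(green, nir):
    """McFeeters (1996) NDWI = (GREEN - NIR) / (GREEN + NIR), in [-1, 1]."""
    denom = green + nir
    if denom == 0:
        return 0.0  # convention for no-data pixels; avoids division by zero
    return (green - nir) / denom

# Illustrative (made-up) reflectances: water reflects green and absorbs NIR,
# so it scores high; vegetation reflects NIR strongly, so it scores negative;
# built-up land tends to land between 0 and 0.2.
water      = ndwi(green=0.30, nir=0.05)   # > 0.5      -> likely a water body
vegetation = ndwi(green=0.10, nir=0.40)   # < 0        -> vegetation
built_up   = ndwi(green=0.22, nir=0.18)   # 0 to 0.2   -> built-up land
```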
## Description of representative images
NDWI of Italy. Acquired on 2020-08-01.
NDWI of Canadian lakes. Acquired on 2020-08-05.
## References
Source: https://en.wikipedia.org/wiki/Normalized_difference_water_index | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8179417252540588, "perplexity": 4872.447769660387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.38/warc/CC-MAIN-20210921131252-20210921161252-00038.warc.gz"} |
https://www.universetoday.com/114163/weird-x-rays-what-happens-when-eta-carinaes-massive-stars-get-close/ | Weird X-Rays: What Happens When Eta Carinae’s Massive Stars Get Close?
While the stars appear unchanging when you take a quick look at the night sky, there is so much variability out there that astronomers will be busy forever. One prominent example is Eta Carinae, a star system that erupted in the 19th century for about 20 years, becoming one of the brightest stars in the night sky. It is so volatile that it is a prime candidate for a supernova.
The two stars made their closest approach again this month, under the watchful eye of the Chandra X-Ray Observatory. The observations aim to explain a puzzling dip in X-ray emissions from Eta Carinae that occurs during every close encounter, including one observed in 2009.
The two stars orbit in a 5.5-year orbit, and even the lesser of them is massive — about 30 times the mass of the Sun. Winds are flowing rapidly from both of the stars, crashing into each other and creating a bow shock that makes the gas between the stars hotter. This is where the X-rays come from.
Here’s where things get interesting: as the stars orbit around each other, their distance changes by a factor of 20. This means that the wind crashes differently depending on how close the stars are to each other. Surprisingly, the X-rays drop off when the stars are at their closest approach, which was studied closely by Chandra when that last occurred in 2009.
“The study suggests that part of the reason for the dip at periastron is that X-rays from the apex are blocked by the dense wind from the more massive star in Eta Carinae, or perhaps by the surface of the star itself,” a Chandra press release stated.
“Another factor responsible for the X-ray dip is that the shock wave appears to be disrupted near periastron, possibly because of faster cooling of the gas due to increased density, and/or a decrease in the strength of the companion star’s wind because of extra ultraviolet radiation from the massive star reaching it.”
More observations are needed, so researchers are eagerly looking forward to finding out what Chandra dug up in the latest observations. A research paper on this was published earlier this year in the Astrophysical Journal, which you can also read in preprint version on Arxiv. The work was led by Kenji Hamaguchi, who is with NASA’s Goddard Space Flight Center in Maryland.
Source: Chandra X-Ray Observatory | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.834246814250946, "perplexity": 841.6433541760002}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986658566.9/warc/CC-MAIN-20191015104838-20191015132338-00304.warc.gz"} |
http://www.exampleproblems.com/wiki/index.php/Amplitude | # Amplitude
For the video game of the same name, see Amplitude (game).
Amplitude is a nonnegative scalar measure of a wave's magnitude of oscillation, that is, magnitude of the maximum disturbance in the medium during one wave cycle.
In the following diagram,
the distance y is the amplitude of the wave.
Sometimes this distance is called the "peak amplitude", distinguishing it from another concept of amplitude used especially in electrical engineering: the root mean square (RMS) amplitude, defined as the square root of the temporal mean of the square of the vertical distance of this graph from the horizontal axis. The use of peak amplitude is unambiguous for symmetric, periodic waves, like a sine wave, a square wave, or a triangular wave. For an unsymmetric wave (for example, periodic pulses in one direction), the peak amplitude becomes ambiguous: the value obtained differs depending on whether the maximum positive signal is measured relative to the mean, the maximum negative signal is measured relative to the mean, or the maximum positive signal is measured relative to the maximum negative signal and then divided by two.
For complex waveforms, especially non-repeating signals like noise, the RMS amplitude is usually used because it is unambiguous and because it has physical significance. For example, the power transmitted by an acoustic or electromagnetic wave or by an electrical signal is proportional to the square of the RMS amplitude (and not, in general, to the square of the peak amplitude).
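For a pure sine wave the two measures are related by RMS = A/√2 ≈ 0.707·A, which a short numeric check confirms:

```python
import math

# Sample one full cycle of y = A*sin(t) and compare the peak amplitude
# with the RMS amplitude; for a pure sine, RMS = A / sqrt(2).
A = 3.0
N = 100_000
samples = [A * math.sin(2 * math.pi * k / N) for k in range(N)]

peak = max(abs(s) for s in samples)               # ~ A
rms = math.sqrt(sum(s * s for s in samples) / N)  # ~ A / sqrt(2)
```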
There are a few ways to formalize amplitude:
In the simple wave equation
${\displaystyle y=A\sin(t-K)+b}$
A is the amplitude of the wave.
The units of the amplitude depends on the type of wave.
For waves on a string, or in medium such as water, the amplitude is a distance.
The amplitude of sound waves and audio signals conventionally refers to the amplitude of the air pressure in the wave, but sometimes the amplitude of the displacement (movements of the air or the diaphragm of a speaker) is described. Its logarithm is usually measured in dB, so a null amplitude corresponds to -infinity dB.
For electromagnetic radiation, the amplitude corresponds to the electric field of the wave. The square of the amplitude is termed the intensity of the wave.
The amplitude may be constant (in which case the wave is a continuous wave) or may vary with time and/or position. The form of the variation of amplitude is called the envelope of the wave. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9856535196304321, "perplexity": 391.55413107027806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141746033.87/warc/CC-MAIN-20201205013617-20201205043617-00167.warc.gz"} |
https://admin.clutchprep.com/physics/practice-problems/46601/as-shown-in-figure-1-a-beam-of-particles-is-fired-at-a-stationary-target-the-res | # Problem: As shown in figure 1, a beam of particles is fired at a stationary target. The resulting nuclei from this collision are highly unstable, and decay almost immediatebly into more stable daughter nuclei. During this decay, charged particles are emitted, which curve in the magnetic field within the detector (in this case, the field is pointing out of the page). Each of these decay particles are collected by the detector and their energies are measured, producing the graph shown in figure 2. What type of decay are the unstable nuclei undergoing? A) α decay B) β- decay C) β+ decay D) γ decay
###### Problem Details
As shown in figure 1, a beam of particles is fired at a stationary target. The resulting nuclei from this collision are highly unstable, and decay almost immediately into more stable daughter nuclei. During this decay, charged particles are emitted, which curve in the magnetic field within the detector (in this case, the field is pointing out of the page). Each of these decay particles is collected by the detector and its energy is measured, producing the graph shown in figure 2. What type of decay are the unstable nuclei undergoing?
A) α decay
B) β- decay
C) β+ decay
D) γ decay | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9779672622680664, "perplexity": 916.1390644011814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402123173.74/warc/CC-MAIN-20200930075754-20200930105754-00739.warc.gz"} |
https://www.ias.ac.in/listing/bibliography/pram/A_SAHA | • A SAHA
Articles written in Pramana – Journal of Physics
• Observational constraints on extended Chaplygin gas cosmologies
We investigate cosmological models with extended Chaplygin gas (ECG) as a candidate for dark energy and determine the equation-of-state parameters using observed data, namely observed Hubble data, baryon acoustic oscillation data and cosmic microwave background shift data. Cosmological models are investigated considering a cosmic fluid which is an extension of the Chaplygin gas; it reduces to modified Chaplygin gas (MCG) and also to generalized Chaplygin gas (GCG) in special cases. It is found that in the case of MCG and GCG, the best-fit values of all the parameters are positive. The distance modulus agrees quite well with the experimental Union2 data. The speed of sound obtained in the model is small, as is necessary for structure formation. We also determine the observational constraints on the constants of the ECG equation.
• # Pramana – Journal of Physics
Volume 96, 2022
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8868138194084167, "perplexity": 2258.438266733866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00668.warc.gz"} |
http://davidkader.blogspot.com/ | ## 24 November 2007
### 1st posting
under construction | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0,
"math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9977189302444458, "perplexity": 4391.779783853453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930895.88/warc/CC-MAIN-20150521113210-00066-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/991739/absolute-value-problem-x-y-y-x | # Absolute value problem $|x-y|=|y-x|$
My question is from Apostol's Vol. 1 One-variable calculus with introduction to linear algebra textbook.
Page 43. Problem 1 Prove each of the following properties of absolute values.
(c) $|x-y|=|y-x|$.
The attempt at a solution: I solved similar problem, which was this: $|x|-|y|\le|x-y|$, by manipulating triangle inequality, I guess this one might be similar but I don't see it. Please help.
So far I have proven the following properties:
$|x|=0$ if and only if $x=0$.
$|-x|=|x|$.
Also, the absolute value is defined in the following way: If $x$ is a real number, the absolute value of $x$ is a nonnegative real number denoted by $|x|$ and defined as follows: $|x|=\begin{cases} x, & \text{if } x\ge0, \\ -x, & \text{if } x\le0. \end{cases}$
• If you've shown that the absolute value is multiplicative, then you could say $|x-y|=|(-1)(y-x)|=|-1||y-x|=|y-x|$. – Hayden Oct 26 '14 at 13:31
• No, I have not done that yet, that's (f) part of problem 1. – George Apriashvili Oct 26 '14 at 13:33
• it might be helpful to include the properties you have proven, including, for example, how you're defining the absolute value (i.e. as either the square root of the square, or as a piecewise function, although these are clearly equivalent) – Hayden Oct 26 '14 at 13:34
• @Hayden Yes, Sorry for not being clear, I listed all that now. – George Apriashvili Oct 26 '14 at 13:42
$$|x-y|=|x-y|$$ $$|x-y|=|1|\cdot|x-y|$$ $$|x-y|=|-1|\cdot|x-y|$$ $$|x-y|=|-1\cdot(x-y)|$$ $$|x-y|=|y-x|$$
Without $|x||y|=|xy|$
If $x>y$
Since $y-x<0$ that means $|y-x|=-(y-x)=x-y$ $$|y-x|=x-y$$ Since $x-y>0$ that means $|x-y|=x-y$ $$|x-y|=x-y$$ Equality is transitive $$|x-y|=|y-x|$$
If $y>x$
Since $x-y<0$ that means $|x-y|=-(x-y)=y-x$ $$|x-y|=y-x$$ Since $y-x>0$ that means $|y-x|=y-x$ $$|y-x|=y-x$$ Equality is transitive $$|x-y|=|y-x|$$
The case of $x=y$ is left as an exercise for the reader.
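Not a proof, but the three cases above can be sanity-checked numerically by brute force (my own illustration, not part of the original answer):

```python
# Brute-force check of |x - y| == |y - x| over a grid covering all three
# cases (x > y, x < y, x == y). Exact equality holds even for floats,
# since x - y and y - x are exact negations of each other.
values = [-3.5, -1, 0, 0.5, 2, 7]
symmetric = all(abs(x - y) == abs(y - x) for x in values for y in values)
```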
• I have not proven property $|xy|=|x||y|$ yet, so is there any other way to achieve the result? – George Apriashvili Oct 26 '14 at 13:36
• @GeorgeDirac Look at my edit – Alice Ryhl Oct 26 '14 at 13:45
You say that you have proven that $|x|=|-x|$, then it immediately follows that
$$|x-y| = |-(x-y)| =|-x+y| = |y-x|.$$
• Yeah, thats true. Well I feel stupid now, that I didn't think of that, thanks for simple explanation. – George Apriashvili Oct 26 '14 at 13:54
• @GeorgeDirac You're welcome :-) – Eff Oct 26 '14 at 13:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8904375433921814, "perplexity": 276.880969294634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655937797.57/warc/CC-MAIN-20200711192914-20200711222914-00533.warc.gz"} |
http://math.stackexchange.com/questions/74682/injective-functions-also-surjective | # injective functions also surjective?
Is it true that for every set $M$, each injective function $f: M \rightarrow M$ is also surjective?
Can someone explain why it is true or not and give an example?
-
If the set $M$ is finite, then yes. – Mariano Suárez-Alvarez Oct 21 '11 at 21:19
I could swear that this question was asked 10 times before. I guess it's easier to write a one line answer than to find the duplicates... – Asaf Karagila Oct 21 '11 at 21:24
@Asaf In that case, it would also make sense to include this as a frequently asked question. :) – Srivatsan Oct 21 '11 at 21:28
@Srivatsan: Yes, it would very much make sense to do that. In fact it might be worth a while to add a few other elementary set theory questions there. – Asaf Karagila Oct 21 '11 at 21:29
This statement is true if $M$ is a finite set, and false if $M$ is infinite.
In fact, one definition of an infinite set is that a set $M$ is infinite iff there exists a bijection $g : M \to N$ where $N$ is a proper subset of $M$. Given such a function $g$, the function $f : M \to M$ defined by $f(x) = g(x)$ for all $x \in M$ is injective, but not surjective. Henning's answer illustrates this with an example when $M = \mathbb N$. To put that example in the context of my answer, let $E \subseteq \mathbb N$ be the set of positive even numbers, and consider the bijection $g: \mathbb N \to E$ given by $g(x) = 2x$ for all $x \in \mathbb N$.
On the other hand, if $M$ is finite and $f: M \to M$, then it is true that $f$ is injective iff it is surjective. Let $m = |M| < \infty$. Suppose $f$ is not surjective. Then $f(M)$ is a strict subset of $M$, and hence $|f(M)| < m$. Now, think of $x \in M$ as pigeons, and throw the pigeon $x$ in the hole $f(x)$ (also a member of $M$). Since the number of pigeons strictly exceeds the number of holes (both these numbers are finite), it follows from the pigeonhole principle that some two pigeons go into the same hole. That is, there exist distinct $x_1, x_2 \in M$ such that $f(x_1) = f(x_2)$, which shows that $f$ is not injective. (See if you can prove the other direction: if $f$ is surjective, then it is injective.)
Note that the pigeonhole principle itself needs a proof and that proof is a little elaborate (relying on the definition of a finite set, for instance). I ignore such complications in this answer.
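Both halves of the answer can be checked by brute force on small examples (an illustrative sketch, not part of the original answer):

```python
from itertools import product

# Finite case: enumerate all 3^3 = 27 functions f : M -> M on M = {0, 1, 2}
# (encoded as tuples with f[i] the image of i) and confirm that injectivity
# and surjectivity coincide on a finite set.
M = {0, 1, 2}
functions = list(product(sorted(M), repeat=len(M)))
equivalence_holds = all(
    (len(set(f)) == len(M)) == (set(f) == M)   # injective <=> surjective
    for f in functions
)

# Infinite case: n -> 2n on the naturals is injective, but its image misses
# every odd number, so it is not surjective (checked on a finite window).
image = {2 * n for n in range(100)}
odd_numbers_missing = all(k not in image for k in range(1, 199, 2))
```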
-
I didn't get why this is true for finite sets. What is the difference here between finite and infinite sets? – sschaef Oct 21 '11 at 21:44
@Antoras: For starters, infinite sets are not finite. They allow more room to move things around. – Asaf Karagila Oct 21 '11 at 22:02
Yes, that is true. But why do they make the injective function f not surjective? – sschaef Oct 21 '11 at 22:07
@Antoras: It does not mean that every injective function is not surjective. It just means that some injective functions are not surjective, and some surjective functions are not injective either. – Asaf Karagila Oct 21 '11 at 22:27
@Antoras: It appears that you believe a function is some universal object, but it is not. Different functions can have different domains (the set on which they operate). In fact the categorical approach defines a function along with its domain and codomain. This is exactly the point. If the function has a finite domain then injective is the same as surjective. If it has an infinite domain then this is no longer true. – Asaf Karagila Oct 22 '11 at 8:42
No. Consider $f:\mathbb N\to\mathbb N$ defined by $f(n)=2n$. It is injective but not surjective.
-
Or $f(n)=n+1$. Peano says that $f$ is not surjective. – lhf Oct 21 '11 at 22:47
Right. I wrote that first, but then changed it to $2n$ because I didn't want the confusion of choosing between saying "1 is not in the image" and "0 is not in the image". Then I forgot to add "the odd numbers are not in the image" to the answer anyway, so I might as well have left it at the successor function. – Henning Makholm Oct 21 '11 at 22:53
For finite sets, consider the two point set $\{a,b\}$ . If you have an injective function, $f(a)\neq f(b)$, so one has to be $a$ and one has to be $b$, so the function is surjective. The same idea works for sets of any finite size. If the size is $n$ and it is injective, then $n$ distinct elements are in the range, which is all of $M$, so it is surjective.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9493027925491333, "perplexity": 162.00090164984883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207926964.7/warc/CC-MAIN-20150521113206-00070-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://www.simscale.com/docs/content/simulation/analysis_types/heatTransferDescription.html?highlight=heat%20transfer | # Heat transfer¶
The simulation type Heat transfer allows the calculation of the temperature distribution and heat flux in a solid under thermal loads such as convection and radiation.
As a result you can analyse the temperature distribution in a steady state scenario as well as for example a transient heating process of a mechanical part. A negative heat flux over the borders of the domain illustrates the thermal power loss of e.g. a cooled device.
Thermal change in a PCB
In the following the different simulation settings you have to define are described in detail as well as the various options you can add.
## Analysis Type¶
You can choose if you want to calculate the steady-state behavior of the system comparable to the Static analysis or if you want to take time dependent effects into consideration in a transient analysis.
## Domain¶
In order to perform an analysis on a given geometrical domain, you have to discretize your model by creating a mesh from it. Details of CAD handling and Meshing are described in the Pre-processing section.
After you have assigned a mesh to the simulation you can add some optional domain-related settings and have a look at the mesh details. Please note that if you have an assembly of multiple bodies that are not fused together, you have to add Contacts if you want to build connections between those independent parts.
### Materials¶
In order to define the material properties of the whole domain, you have to assign exactly one material to every part and define the thermal properties of those. Note that the specific heat is only needed for transient analyses.
### Initial Conditions¶
For the time-dependent behaviour of a solid structure it is important to define the Initial Conditions carefully, since these values determine the solution of the analysis. If you choose to run a transient analysis, the temperature depends on time. It is set to room temperature (293.15 K) by default and is also provided for steady-state simulations for convergence reasons.
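As a toy illustration of the transient-to-steady-state behaviour described here (this is not SimScale's solver, just a one-dimensional explicit finite-difference sketch with assumed numbers): starting from the default 293.15 K initial condition with both ends held at prescribed temperatures, the interior relaxes toward the linear steady-state profile.

```python
# 1-D transient heat conduction, explicit (FTCS) scheme.
# r = alpha * dt / dx^2 must be <= 0.5 for stability.
n, r = 11, 0.25
T = [293.15] * n                 # initial condition: room temperature
T[0], T[-1] = 350.0, 300.0       # prescribed end temperatures (boundaries)

for _ in range(2000):            # march in time until the field stops changing
    T = [T[0]] + [T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
                  for i in range(1, n - 1)] + [T[-1]]

# The steady state of fixed-end conduction is a straight line between the ends.
steady = [350.0 + (300.0 - 350.0) * i / (n - 1) for i in range(n)]
max_error = max(abs(a - b) for a, b in zip(T, steady))
```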
### Boundary conditions¶
You can define temperature and thermal load boundary conditions. If you provide a temperature boundary condition on an entity, the temperature value of all contained nodes is set to the given prescribed value. Thermal load boundary conditions define the heat flux into or out of the domain via different mechanisms. Note that a negative heat flux indicates a heat loss to the environment. Since a temperature boundary condition prescribes the temperature value on a given part of the domain, it is not possible to simultaneously add a thermal load on that part, as it would be overconstrained in that case.
Temperature boundary condition types (Thermal Constraints)
Heat flux boundary condition types (Thermal Loads)
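The concepts above — transient conduction, a prescribed temperature on one boundary (thermal constraint), and a prescribed heat flux on another (thermal load) — can be illustrated with a toy 1D finite-difference model. This is a minimal sketch for intuition only, not the CalculiX or Code_Aster solver the platform actually invokes; the steel-like material values are hypothetical.

```python
import numpy as np

def simulate(n=51, length=0.1, k=50.0, rho=7800.0, cp=500.0,
             t_end=10.0, t_fixed=400.0, q_in=0.0, t_init=293.15):
    """Rod at t_init (293.15 K, the default noted above) with a prescribed
    temperature at x=0 (temperature BC) and a prescribed heat flux q_in
    at x=L (thermal load BC; q_in < 0 would model heat loss)."""
    alpha = k / (rho * cp)              # thermal diffusivity [m^2/s]
    dx = length / (n - 1)
    dt = 0.4 * dx**2 / alpha            # below the explicit stability limit
    T = np.full(n, t_init)
    for _ in range(int(t_end / dt)):
        Tn = T.copy()
        # interior nodes: explicit update of the heat equation
        T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2*Tn[1:-1] + Tn[:-2])
        T[0] = t_fixed                  # temperature boundary condition
        # flux BC via a ghost node: -k*dT/dx matches q_in at x=L
        T[-1] = Tn[-1] + alpha * dt / dx**2 * (2*Tn[-2] - 2*Tn[-1] + 2*dx*q_in/k)
    return T
```

Running the transient long enough with `q_in=0` would approach the steady-state (static-like) solution, which is the distinction the Analysis Type setting controls.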
## Numerics

Under Numerics you can set the equation solver of your simulation. This choice strongly influences the computational time and the required memory size of the simulation.
## Simulation Control

The Simulation Control settings define the overall process of the calculation, for example the timestepping interval and the maximum time you want your simulation to run before it is automatically cancelled.
### Solver

The described heat transfer analysis with the finite element code CalculiX CrunchiX (CCX) is only available via the solver perspective. You may also choose the finite element package Code_Aster for this analysis type (Heat transfer CA), either using the standard heat transfer analysis from the physics perspective or via the solver perspective with Code_Aster selected as solver. See our Third-party software section for further information.
https://bird.bcamath.org/handle/20.500.11824/10/browse?authority=7e3b49e4-7ec5-4851-8896-ffb2805d6e2e&type=author
#### Time-varying coefficient estimation in SURE models. Application to portfolio management.
(2017-01-01)
This paper provides a detailed analysis of the asymptotic properties of a kernel estimator for a Seemingly Unrelated Regression Equations model with time-varying coefficients (tv-SURE) under very general conditions. ...
http://mathoverflow.net/questions/162888/forcing-with-nontransitive-models

# Forcing with Nontransitive Models
A common approach to forcing is to use a countable transitive model $M \in V$ with $\mathbb{P} \in M$ and take an $M$-generic filter $G$ (which always exists, since $M$ is countable) to form a countable transitive model $M[G]$. Another approach takes $M$ to be countable such that $M \prec H_\theta$ for sufficiently large $\theta$ (and hence may not be transitive). For example, one definition of proper forcing considers such models.
Forcing with transitive models is quite convenient, since many absoluteness results can be used to transfer properties of $x \in M[G]$ which hold in $M[G]$ up to $V$. If $M \prec H_\theta$ is not transitive, then it is not clear what type of property that $M[G]$ proves about $x$ transfers to $V$. For instance, if $M[G] \models x \in {}^\omega\omega$, is $x \in {}^\omega\omega$ in $V$? Of course, one remedy could be to Mostowski collapse everything and then use the familiar absoluteness for transitive models. For $x \in {}^\omega\omega$, one could use the fact that $M \prec H_\theta$ implies $\omega \subseteq M$, so the Mostowski collapse of $M[G]$ would map each real to itself, and then use absoluteness to prove that $V \models x \in {}^\omega\omega$ as well. Is there a more direct way to prove this type of result without collapsing the forcing extension, which seems to suggest one should have collapsed $M$ before starting the forcing construction?
So my questions are:
1. First, if one chooses to work with countable $M \prec H_\theta$, are there any changes that need to be made to the forcing construction and the forcing theorem as they appear in Kunen or Jech? Of course, the definition of a generic filter should be changed to require meeting those dense sets that appear in $M$.
2. I am aware that if $G$ contains a master condition, then $M[G] \prec H_\theta[G]$. Is $H_\theta[G]$ just the forcing construction applied to $H_\theta$? As $G$ is not necessarily generic over $H_\theta$, it is not clear to me that the forcing theorem needs to apply to $H_\theta[G]$ (or a priori that $H_\theta[G]$ models any particular amount of $\text{ZF} - \text{P}$; but since $M[G] \prec H_\theta[G]$, actually $H_\theta[G]$ would model as much as $M[G]$). In general, without additional assumptions like master conditions, does the relation $M[G] \prec H_\theta[G]$ still hold?
Also, perhaps I am misunderstanding something, but since $\mathbb{P} \in M$, it appears that if $\theta$ is large enough, every $G \subseteq \mathbb{P}$ which is $\mathbb{P}$-generic over $M$ is already in $H_\theta$. Would this not imply that $H_\theta[G] = H_\theta$ and hence $M[G] \prec H_\theta$? Since $M \prec H_\theta$, $M$ and $M[G]$ would then model exactly the same sentences. This surely cannot happen.
Thanks for any help and clarification that can be provided.
For 2, properness seems to be sufficient. – Mohammad Golshani Apr 9 '14 at 12:29
For your last comment, you need $G$ to be $H_\theta$-generic to form $H_\theta[G]$ – Mohammad Golshani Apr 9 '14 at 12:32
All standard forcing machinery works when forcing over such $M$ because they satisfy a large enough fragment of $ZFC$, namely $ZFC$ without the powerset axiom. The purpose of forcing over such models is rarely to transfer results to $V$, although something like this can be done in the following way. Suppose that $M\prec H_\theta$ is countable with $\mathbb P\in M$ and for every $M$-generic $G$ in $V$, we have that $M[G]\models\varphi$. Then $M$ satisfies that $\varphi$ is forced by $\mathbb P$. But then by elementarity, $H_\theta$ satisfies that $\varphi$ is forced by $\mathbb P$ as well. Thus, $H_\theta[G]\models\varphi$ in every forcing extension $V[G]$. So in a way, we have transferred a property from $M[G]$ to $V[G]$. I recently encountered many such arguments when working with Schindler's remarkable cardinals and I have some notes written up here. In the case of remarkable cardinals, you use some properties of the transitive collapse of $M$ to argue that certain generic embeddings exist in its forcing extension by $Coll(\omega,<\kappa)$. Using the argument above you then conclude that such generic embeddings must exist in $H_\theta[G]$ where $G\subseteq Coll(\omega,<\kappa)$ is $V$-generic.
The argument that $M[G]\prec H_\theta[G]$ works only in the case that $G$ is both fully $H_\theta$-generic and also $M$-generic (meets every dense set of $M$ in $M$ itself). Indeed, in most situations where forcing over $M\prec H_\theta$ is used, as in say proper forcing, the arguments usually involve fully generic $G$. It seems that generally the purpose of such arguments is to use $M[G]$ to conclude that some property holds in $V[G]$ by reflecting down to countable objects. This is for instance how one can use the definition of proper posets, in terms of the existence of $M$-generic filters for countable $M\prec H_\theta$, to argue that they don't collapse $\omega_1$.
Vika, I think your claim that "The argument that $M[G]\prec H_\theta[G]$ works only in the case that $G$ is both fully $H_\theta$-generic and also $M$-generic," is not actually correct, in light of the theorem in my answer. You don't actually need that $G$ is $M$-generic for this conclusion. – Joel David Hamkins Apr 10 '14 at 3:58
To clarify: $(M[G],{\in})\prec (H_\theta[G],{\in})$ is indeed true for all $H_\theta$-generic $G$. But often we want to use an additional unary predicate for $V$ or $H_\theta$, so we are interested in $(M[G],{\in},M)\prec (H_\theta[G],{\in},H_\theta)$. For $H_\theta$-generic $G$, this property is equivalent to $M$-genericity. – Goldstern Mar 8 at 16:29
What I'd like to point out is that, contrary to what has been stated, one doesn't actually need to assume that $G$ is $M$-generic in order to conclude $M[G]\prec H_\theta[G]$; having $G\subset\mathbb{P}\in M$ being $H_\theta$-generic (that is, $V$-generic) is sufficient.
Let's begin by correcting, as Victoria does, your definition of what it means for $G\subset\mathbb{P}$ to be $M$-generic, in the case where $M\prec H_\theta$ is a possibly non-transitive elementary submodel of some $H_\theta$. You said to be generic means to meet every dense subset $D\subset \mathbb{P}$ with $D\in M$, but this is not the right definition. You want to say instead that $G$ meets every such dense set $D$ inside $M$. That is, that $G\cap D\cap M\neq\emptyset$. If we only have $G\cap D\neq\emptyset$, then $M$ will not have access to the conditions $p\in G\cap D$ that are useful when a filter meets a dense set. So it is the corrected definition that treats $\langle M,{\in^M}\rangle$ as a model of set theory in its own right, insisting that for every dense set in this structure, the filter meets it.
Proper forcing is of course concerned all about this, since we seek a condition $p\in\mathbb{P}$ forcing that whenever $G\subset\mathbb{P}$ is $V$-generic, then it is also $M$-generic in this sense.
But we may still form the extension $M[G]$ whether or not $G$ is $M$-generic in this sense, defining $M[G]=\{\tau_G\mid\tau\in M^{\mathbb{P}}\}$ to be the interpretation of all names in $M$ by the filter $G$. Now, it turns out that for $V$-generic filters $G$, we have that $G$ is $M$-generic just in case $M[G]\cap\text{Ord}=M\cap\text{Ord}$, which holds just in case $M[G]\cap V=M$. This is easy to see, since any name $\dot\alpha$ for an ordinal in $M$ gives rise to an antichain of possibilities in $M$, and so if $G$ is $M$-generic, then it will force $\dot\alpha$ to be an ordinal already in $M$. And for the other direction, given any maximal antichain $A$ in $M$, we may construct by the mixing lemma a name $\dot\alpha$ for an ordinal, which will be a new ordinal just in case $G$ does not meet $A\cap M$.
Assume $H_\theta$ satisfies a sufficiently large fragment of ZFC.
Theorem. If $M\prec H_\theta$ and $G\subset\mathbb{P}\in M$ is $H_\theta$-generic, then $M[G]\prec H_\theta[G]$.
Proof. Suppose that $M\prec H_\theta$ and $G\subset\mathbb{P}\in M$ is $H_\theta$-generic. We may still form $M[G]=\{\tau_G\mid \tau\in M^{\mathbb{P}}\}$ as the set of interpretations of names in $M$ using the filter $G$. Let $\bar M=M[G]\cap V$. This is larger than $M$, precisely when $G$ is not $M$-generic. I claim that $\bar M\prec H_\theta$, by verifying the Tarski-Vaught criterion, since if $H_\theta$ has a witness, then we may find a name in $M$ for such a ground-model object, and so we will find a witness in $\bar M$. And since $\bar M\subset \bar M[G]\cap V\subset M[G]\cap V=\bar M$, it follows that $\bar M[G]\cap V=\bar M$, and so $G$ is actually $\bar M$-generic. So $M[G]=\bar M[G]\prec H_\theta[G]$ by reducing to the case where we do have the extra genericity. QED
In regard to question 2, of course we want $G$ to be $H_\theta$-generic, since without this it is easy to make counterexamples to $M[G]\prec H_\theta[G]$. For example, if $M$ is countable we can easily find $M$-generic filters $G$ with $G\in H_\theta$, and in this case, if the forcing is nontrivial then $M[G]$ is definitely not an elementary substructure of $H_\theta[G]=H_\theta$. This is the argument of your last paragraph, and that is totally right; so the conclusion is that for this question we want to assume $G$ is $V$-generic.
Lastly, let me point out that one doesn't need countable models in order to undertake the forcing construction, and one can speak of the forcing extensions of any model of set theory, whether it is countable, transitive, uncountable, nonstandard, whatever. The most illuminating way to do this is via Boolean-valued models, and by taking the quotient, one arrives at the Boolean ultrapower construction. The basic situation is that if $V$ is a model of set theory containing a complete Boolean algebra $\mathbb{B}$, and $U\subset\mathbb{B}$ is an ultrafilter ($U\in V$ is completely fine), then one may form the quotient $V^{\mathbb{B}}/U$ of the $\mathbb{B}$-valued structure, and this is realized as a forcing extension of its ground model $\check V_U$, and furthermore there is an elementary embedding of $V$ into $\check V_U$, called the Boolean ultrapower map. So the entire composition $$V\overset{\prec}{\scriptsize\sim} \check V_U\subset \check V_U[G]=V^{\mathbb{B}}/U$$ lives inside $V$. There is no need for $V$ to be countable and no need for $U$ to be generic in any sense, yet $G$, which is the equivalence class of the name $\dot G$ by $U$, is still nevertheless $\check V_U$-generic. You can find fuller details in my paper with D. Seabold, Boolean ultrapowers as large cardinal embeddings.
Joel, for some reason I am suspicious of that argument every time I see it :). Can you say something more about the statement "if $H_\theta$ has a witness, then we may find a name in $M$ for such a ground model object..." I don't quite follow it. – Victoria Gitman Apr 10 '14 at 11:59
If $H_\theta\models\varphi(x,\tau_G)$, with $\tau\in M$ and $\tau_G\in H_\theta$, then there is an antichain of possible values of $\tau$, and for each possible $y\in H_\theta$ that it might be, we have an $x$ for which $H_\theta\models\varphi(x,y)$. Now, by mixing $\check x$ along the antichain, we find a name $\dot x$ such that $H_\theta\models\varphi(\dot x_G,\tau_G)$. By elementarity $M\prec H_\theta$, there is such a name $\dot x$ inside $M$. And so $M[G]$ has the witness $\dot x_G$, which is one of the $x$'s that we mixed. – Joel David Hamkins Apr 10 '14 at 12:17
Ok, great! I am convinced. – Victoria Gitman Apr 10 '14 at 12:49
https://www.science.gov/topicpages/a/acid-base+equilibrium+constants.html

#### Sample records for acid-base equilibrium constants
1. Conjugate Acid-Base Pairs, Free Energy, and the Equilibrium Constant
ERIC Educational Resources Information Center
Beach, Darrell H.
1969-01-01
Describes a method of calculating the equilibrium constant from free energy data. Values of the equilibrium constants of six Bronsted-Lowry reactions calculated by the author's method and by a conventional textbook method are compared. (LC)
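The record refers to the standard thermodynamic relation ΔG° = −RT ln K, so K = exp(−ΔG°/RT). A minimal sketch of that calculation (the numbers in the usage note are illustrative, not Beach's data):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def equilibrium_constant(dG_standard_kJ, T=298.15):
    """Equilibrium constant from the standard free-energy change (kJ/mol),
    via dG° = -R*T*ln(K)."""
    return math.exp(-dG_standard_kJ * 1000.0 / (R * T))
```

For example, a reaction with ΔG° ≈ −5.7 kJ/mol at 25 °C gives K ≈ 10, a handy rule of thumb when comparing tabulated and calculated constants.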
2. Distinguishing between keto-enol and acid-base forms of firefly oxyluciferin through calculation of excited-state equilibrium constants.
PubMed
Falklöf, Olle; Durbeej, Bo
2014-11-15
Although recent years have seen much progress in the elucidation of the mechanisms underlying the bioluminescence of fireflies, there is to date no consensus on the precise contributions to the light emission from the different possible forms of the chemiexcited oxyluciferin (OxyLH2) cofactor. Here, this problem is investigated by the calculation of excited-state equilibrium constants in aqueous solution for keto-enol and acid-base reactions connecting six neutral, monoanionic and dianionic forms of OxyLH2. Particularly, rather than relying on the standard Förster equation and the associated assumption that entropic effects are negligible, these equilibrium constants are for the first time calculated in terms of excited-state free energies of a Born-Haber cycle. Performing quantum chemical calculations with density functional theory methods and using a hybrid cluster-continuum approach to describe solvent effects, a suitable protocol for the modeling is first defined from benchmark calculations on phenol. Applying this protocol to the various OxyLH2 species and verifying that available experimental data (absorption shifts and ground-state equilibrium constants) are accurately reproduced, it is then found that the phenolate-keto-OxyLH(-) monoanion is intrinsically the preferred form of OxyLH2 in the excited state, which suggests a potential key role for this species in the bioluminescence of fireflies.
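The "standard Förster equation" the authors deliberately avoid relates ground- and excited-state pKa through the 0–0 transition energies of the acid and base forms, assuming the reaction entropy is unchanged on excitation. A hedged sketch of that textbook relation (the wavenumbers in the usage note are hypothetical, not oxyluciferin data):

```python
import math

H = 6.62607015e-34     # Planck constant, J*s
C = 2.99792458e10      # speed of light, cm/s
NA = 6.02214076e23     # Avogadro constant, 1/mol
R = 8.314462618        # gas constant, J/(mol*K)

def excited_state_pka(pka_ground, nu_acid_cm, nu_base_cm, T=298.15):
    """Foerster cycle: pKa* = pKa - NA*h*c*(nu_HA - nu_A) / (R*T*ln 10),
    with 0-0 transition wavenumbers in cm^-1."""
    dE = NA * H * C * (nu_acid_cm - nu_base_cm)   # J/mol
    return pka_ground - dE / (R * T * math.log(10))
```

With a hypothetical 5000 cm⁻¹ difference between the acid- and base-form transitions, the excited-state pKa drops by roughly 10 units, which illustrates why the excited-state acid-base speciation can differ sharply from the ground state — and why entropic corrections of the kind computed in this paper matter when such estimates are pushed further.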
3. Equilibrium Constants You Can Smell.
ERIC Educational Resources Information Center
Anderson, Michael; Buckley, Amy
1996-01-01
Presents a simple experiment involving the sense of smell that students can accomplish during a lecture. Illustrates the important concepts of equilibrium along with the acid/base properties of various ions. (JRH)
4. Theoretical calculations of homoconjugation equilibrium constants in systems modeling acid-base interactions in side chains of biomolecules using the potential of mean force.
PubMed
Makowska, Joanna; Makowski, Mariusz; Liwo, Adam; Chmurzyński, Lech
2005-02-01
The potentials of mean force (PMFs) were determined for systems forming cationic and anionic homocomplexes composed of acetic acid, phenol, isopropylamine, n-butylamine, imidazole, and 4(5)-methylimidazole, and their conjugated bases or acids, respectively, in three solvents with different polarity and hydrogen-bonding propensity: acetonitrile (AN), dimethyl sulfoxide (DMSO), and water (H(2)O). For each pair and each solvent a series of umbrella-sampling molecular dynamics simulations with the AMBER force field, explicit solvent, and counterions added to maintain a zero net charge of a system were carried out and the PMF was calculated by using the Weighted Histogram Analysis Method (WHAM). Subsequently, homoconjugation-equilibrium constants were calculated by numerical integration of the respective PMF profiles. In all cases but imidazole stable homocomplexes were found to form in solution, which was manifested as the presence of contact minima corresponding to hydrogen-bonded species in the PMF curves. The calculated homoconjugation constants were found to be greater for complexes with the OHO bridge (acetic acid and phenol) than with the NHN bridge and they were found to decrease with increasing polarity and hydrogen-bonding propensity of the solvent (i.e., in the series AN > DMSO > H(2)O), both facts being in agreement with the available experimental data. It was also found that interactions with counterions are manifested as the broadening of the contact minimum or appearance of additional minima in the PMF profiles of the acetic acid-acetate, phenol/phenolate system in acetonitrile, and the 4(5)-methylimidazole/4(5)-methylimidazole cation conjugated base system in dimethyl sulfoxide.
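"Numerical integration of the PMF profiles" can be sketched with the common estimate K ≈ 4π·N_A ∫₀^rc r² exp(−W(r)/kT) dr, integrating over the contact minimum up to a cutoff rc. The model PMF in the test is a made-up square well, not the AMBER/WHAM result of the paper, and the function names are hypothetical:

```python
import math

NA = 6.022e23          # 1/mol
KT = 2.479             # kT in kJ/mol at 298 K

def assoc_constant(pmf_kJ, r_grid_nm, r_cut_nm):
    """Trapezoidal integration of 4*pi*r^2*exp(-W/kT) up to r_cut.
    r in nm, W in kJ/mol; returns K in L/mol (1 nm^3 = 1e-24 L)."""
    total = 0.0
    for i in range(len(r_grid_nm) - 1):
        if r_grid_nm[i + 1] > r_cut_nm:
            break
        f0 = r_grid_nm[i] ** 2 * math.exp(-pmf_kJ[i] / KT)
        f1 = r_grid_nm[i + 1] ** 2 * math.exp(-pmf_kJ[i + 1] / KT)
        total += 0.5 * (f0 + f1) * (r_grid_nm[i + 1] - r_grid_nm[i])
    return 4.0 * math.pi * NA * total * 1e-24   # nm^3/molecule -> L/mol
```

A deep contact minimum dominates the Boltzmann-weighted integral, which is why the presence or absence of such a minimum in the PMF curve decides whether a stable homocomplex forms.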
5. Philicities, Fugalities, and Equilibrium Constants.
PubMed
Mayr, Herbert; Ofial, Armin R
2016-05-17
The mechanistic model of Organic Chemistry is based on relationships between rate and equilibrium constants. Thus, strong bases are generally considered to be good nucleophiles and poor nucleofuges. Exceptions to this rule have long been known, and the ability of iodide ions to catalyze nucleophilic substitutions, because they are good nucleophiles as well as good nucleofuges, is just a prominent example for exceptions from the general rule. In a reaction series, the Leffler-Hammond parameter α = δΔG‡/δΔG° describes the fraction of the change in the Gibbs energy of reaction, which is reflected in the change of the Gibbs energy of activation. It has long been considered as a measure for the position of the transition state; thus, an α value close to 0 was associated with an early transition state, while an α value close to 1 was considered to be indicative of a late transition state. Bordwell's observation in 1969 that substituent variation in phenylnitromethanes has a larger effect on the rates of deprotonation than on the corresponding equilibrium constants (nitroalkane anomaly) triggered the breakdown of this interpretation. In the past, most systematic investigations of the relationships between rates and equilibria of organic reactions have dealt with proton transfer reactions, because only for few other reaction series complementary kinetic and thermodynamic data have been available. In this Account we report on a more general investigation of the relationships between Lewis basicities, nucleophilicities, and nucleofugalities as well as between Lewis acidities, electrophilicities, and electrofugalities. Definitions of these terms are summarized, and it is suggested to replace the hybrid terms "kinetic basicity" and "kinetic acidity" by "protophilicity" and "protofugality", respectively; in this way, the terms "acidity" and "basicity" are exclusively assigned to thermodynamic properties, while "philicity" and "fugality" refer to kinetics
6. Acid Base Equilibrium in a Lipid/Water Gel
Streb, Kristina K.; Ilich, Predrag-Peter
2003-12-01
A new and original experiment in which partition of bromophenol blue dye between water and lipid/water gel causes a shift in the acid-base equilibrium of the dye is described. The dye-absorbing material is a monoglyceride food additive of plant origin that mixes freely with water to form a stable cubic phase gel; the nascent gel absorbs the dye from aqueous solution and converts it to the acidic form. There are three concurrent processes taking place in the experiment: (a) formation of the lipid/water gel, (b) absorption of the dye by the gel, and (c) protonation of the dye in the lipid/water gel environment. As the aqueous solution of the dye is a deep purple-blue color at neutral pH and yellow at acidic pH, the result of these processes is visually striking: the strongly green-yellow particles of lipid/water gel are suspended in purple-blue aqueous solution. The local acidity of the lipid/water gel is estimated by UV-vis spectrophotometry. This experiment is an example of host-guest (lipid/water gel-dye) interaction and is suitable for project-type biophysics, physical chemistry, or biochemistry labs. The experiment requires three, 3-hour lab sessions, two of which must not be separated by more than two days.
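Estimating local acidity spectrophotometrically, as this experiment does, typically runs through Henderson–Hasselbalch: the measured absorbance at the basic-form band fixes the ratio of base to acid forms, and pH = pKa + log([In⁻]/[HIn]). A hedged sketch (the absorbance values and limiting-absorbance convention are hypothetical, not taken from the paper; bromophenol blue's pKa is approximately 4):

```python
import math

def local_ph(A, A_acid, A_base, pKa=4.0):
    """Estimate pH from an indicator absorbance A at the basic-form band.
    A_acid / A_base: limiting absorbances of the pure acidic / basic forms
    at that wavelength."""
    ratio = (A - A_acid) / (A_base - A)   # [In-]/[HIn]
    return pKa + math.log10(ratio)
```

At the midpoint absorbance the two forms are equal and the estimate returns the pKa itself, which is the usual sanity check for this kind of measurement.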
7. Born energy, acid-base equilibrium, structure and interactions of end-grafted weak polyelectrolyte layers
SciTech Connect
Nap, R. J.; Tagliazucchi, M.; Szleifer, I.
2014-01-14
This work addresses the effect of the Born self-energy contribution in the modeling of the structural and thermodynamical properties of weak polyelectrolytes confined to planar and curved surfaces. The theoretical framework is based on a theory that explicitly includes the conformations, size, shape, and charge distribution of all molecular species and considers the acid-base equilibrium of the weak polyelectrolyte. Namely, the degree of charge in the polymers is not imposed but it is a local varying property that results from the minimization of the total free energy. Inclusion of the dielectric properties of the polyelectrolyte is important as the environment of a polymer layer is very different from that in the adjacent aqueous solution. The main effect of the Born energy contribution on the molecular organization of an end-grafted weak polyacid layer is uncharging the weak acid (or basic) groups and consequently decreasing the concentration of mobile ions within the layer. The magnitude of the effect increases with polymer density and, in the case of the average degree of charge, it is qualitatively equivalent to a small shift in the equilibrium constant for the acid-base equilibrium of the weak polyelectrolyte monomers. The degree of charge is established by the competition between electrostatic interactions, the polymer conformational entropy, the excluded volume interactions, the translational entropy of the counterions and the acid-base chemical equilibrium. Consideration of the Born energy introduces an additional energetic penalty to the presence of charged groups in the polyelectrolyte layer, whose effect is mitigated by down-regulating the amount of charge, i.e., by shifting the local-acid base equilibrium towards its uncharged state. Shifting of the local acid-base equilibrium and its effect on the properties of the polyelectrolyte layer, without considering the Born energy, have been theoretically predicted previously. Account of the Born energy leads
8. Born energy, acid-base equilibrium, structure and interactions of end-grafted weak polyelectrolyte layers.
PubMed
Nap, R J; Tagliazucchi, M; Szleifer, I
2014-01-14
This work addresses the effect of the Born self-energy contribution in the modeling of the structural and thermodynamical properties of weak polyelectrolytes confined to planar and curved surfaces. The theoretical framework is based on a theory that explicitly includes the conformations, size, shape, and charge distribution of all molecular species and considers the acid-base equilibrium of the weak polyelectrolyte. Namely, the degree of charge in the polymers is not imposed but it is a local varying property that results from the minimization of the total free energy. Inclusion of the dielectric properties of the polyelectrolyte is important as the environment of a polymer layer is very different from that in the adjacent aqueous solution. The main effect of the Born energy contribution on the molecular organization of an end-grafted weak polyacid layer is uncharging the weak acid (or basic) groups and consequently decreasing the concentration of mobile ions within the layer. The magnitude of the effect increases with polymer density and, in the case of the average degree of charge, it is qualitatively equivalent to a small shift in the equilibrium constant for the acid-base equilibrium of the weak polyelectrolyte monomers. The degree of charge is established by the competition between electrostatic interactions, the polymer conformational entropy, the excluded volume interactions, the translational entropy of the counterions and the acid-base chemical equilibrium. Consideration of the Born energy introduces an additional energetic penalty to the presence of charged groups in the polyelectrolyte layer, whose effect is mitigated by down-regulating the amount of charge, i.e., by shifting the local-acid base equilibrium towards its uncharged state. Shifting of the local acid-base equilibrium and its effect on the properties of the polyelectrolyte layer, without considering the Born energy, have been theoretically predicted previously. Account of the Born energy leads
9. An Intuitive and General Approach to Acid-Base Equilibrium Calculations.
ERIC Educational Resources Information Center
Felty, Wayne L.
1978-01-01
Describes the intuitive approach used in general chemistry and points out its pedagogical advantages. Explains how to extend it to acid-base equilibrium calculations without the need to introduce additional sophisticated concepts. (GA)
10. Using the Logarithmic Concentration Diagram, Log "C", to Teach Acid-Base Equilibrium
ERIC Educational Resources Information Center
Kovac, Jeffrey
2012-01-01
Acid-base equilibrium is one of the most important and most challenging topics in a typical general chemistry course. This article introduces an alternative to the algebraic approach generally used in textbooks, the graphical log "C" method. Log "C" diagrams provide conceptual insight into the behavior of aqueous acid-base systems and allow…
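The quantities a log C diagram plots are simple to compute: for a monoprotic weak acid of total concentration C and acidity constant Ka, [HA] = C·[H⁺]/([H⁺]+Ka) and [A⁻] = C·Ka/([H⁺]+Ka), and the diagram graphs log₁₀ of each against pH. A minimal sketch with acetic-acid-like illustrative numbers (not from the article):

```python
import math

def log_c(pH, pKa=4.76, C=0.10):
    """Return (log10[HA], log10[A-]) for a monoprotic weak acid at a given pH."""
    h = 10.0 ** (-pH)
    ka = 10.0 ** (-pKa)
    ha = C * h / (h + ka)
    a = C * ka / (h + ka)
    return math.log10(ha), math.log10(a)
```

The two lines cross at pH = pKa (each at log(C/2)), with HA dominating at lower pH and A⁻ at higher pH — the geometric fact the graphical method exploits to read off equilibrium compositions without solving polynomials.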
11. [Acid-base equilibrium in sportsmen during physical exercise].
PubMed
Brinzak, V P; Kalinskiĭ, M I; Val'tin, A I; Povzhitkova, M S
1983-01-01
Acid-base balance in the venous blood of basketball players was studied under specific loadings of various intensity by means of the micro-Astrup device. It is established that under acyclic loadings (throwing the ball into the basket) a state of metabolic acidosis develops in the sportsmen, and the more intensive the work, the greater the degree of metabolic acidosis. The efficiency of the subjects' actions was inversely related to the degree of metabolic disturbance, i.e., efficiency was lowest under the most profound acidosis.
12. [Dichotomizing method applied to calculating equilibrium constant of dimerization system].
PubMed
Cheng, Guo-zhong; Ye, Zhi-xiang
2002-06-01
The arbitrary trivariate algebraic equations are formed based on the combination principle. A univariate algebraic equation in the equilibrium constant κ of the dimerization system is obtained through a series of algebraic transformations, and whether the equation is solvable depends on the properties of monotonic functions. If the equation is solvable, the equilibrium constant of the dimerization system is obtained by dichotomy (bisection), and the final value is determined according to the principle of error of fitting. The equilibrium constants of trisulfophthalocyanine and biosulfophthalocyanine obtained with this method are 47,973.4 and 30,271.8, respectively. The results are much better than those reported previously.
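The "dichotomizing" (bisection) idea rests on monotonicity, as the abstract notes: for a dimerization 2M ⇌ M₂ with K = [M₂]/[M]², the predicted monomer fraction is a monotone decreasing function of K, so a measured fraction pins K down by interval halving. The sketch below uses made-up numbers and hypothetical function names, not the phthalocyanine data of the paper:

```python
import math

def monomer_fraction(K, c_total):
    """Solve c = [M] + 2K[M]^2 for [M] and return [M]/c (monotone decreasing in K)."""
    m = (-1.0 + math.sqrt(1.0 + 8.0 * K * c_total)) / (4.0 * K)
    return m / c_total

def solve_K(f_obs, c_total, lo=1e-6, hi=1e9, tol=1e-10):
    """Bisection on g(K) = monomer_fraction(K) - f_obs, halving on a log scale."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if monomer_fraction(mid, c_total) > f_obs:
            lo = mid            # true K is larger
        else:
            hi = mid
        if hi / lo < 1.0 + tol:
            break
    return math.sqrt(lo * hi)
```

Because the bracketing function is strictly monotone, the method converges unconditionally, which is the solvability criterion the abstract alludes to.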
13. Calculation of individual isotope equilibrium constants for geochemical reactions
USGS Publications Warehouse
Thorstenson, D.C.; Parkhurst, D.L.
2004-01-01
Theory is derived from the work of Urey (Urey H. C. [1947] The thermodynamic properties of isotopic substances. J. Chem. Soc. 562-581) to calculate equilibrium constants commonly used in geochemical equilibrium and reaction-transport models for reactions of individual isotopic species. Urey showed that equilibrium constants of isotope exchange reactions for molecules that contain two or more atoms of the same element in equivalent positions are related to isotope fractionation factors by α = (Kex)^(1/n), where n is the number of atoms exchanged. This relation is extended to include species containing multiple isotopes, for example 13C16O18O and 1H2H18O. The equilibrium constants of the isotope exchange reactions can be expressed as ratios of individual isotope equilibrium constants for geochemical reactions. Knowledge of the equilibrium constant for the dominant isotopic species can then be used to calculate the individual isotope equilibrium constants. Individual isotope equilibrium constants are calculated for the reaction CO2g = CO2aq for all species that can be formed from 12C, 13C, 16O, and 18O; for the reaction between 12C18O2aq and 1H218Ol; and among the various 1H, 2H, 16O, and 18O species of H2O. This is a subset of a larger number of equilibrium constants calculated elsewhere (Thorstenson D. C. and Parkhurst D. L. [2002] Calculation of individual isotope equilibrium constants for implementation in geochemical models. Water-Resources Investigation Report 02-4172. U.S. Geological Survey). Activity coefficients, activity-concentration conventions for the isotopic variants of H2O in the solvent 1H216Ol, and salt effects on isotope fractionation have been included in the derivations. The effects of nonideality are small because of the chemical similarity of different isotopic species of the same molecule or ion. The temperature dependence of the individual isotope equilibrium constants can be calculated from the temperature dependence of the fractionation
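The Urey relation quoted in the abstract, α = (Kex)^(1/n), is a one-liner: the fractionation factor is the n-th root of the exchange-reaction equilibrium constant when n equivalent atoms are exchanged. A minimal sketch with a made-up Kex value (the scaling helper is illustrative of the "ratios of individual isotope equilibrium constants" idea, not the USGS implementation):

```python
def fractionation_factor(K_ex, n):
    """Urey: alpha = K_ex^(1/n) for n equivalent exchanged atoms."""
    return K_ex ** (1.0 / n)

def individual_isotope_K(K_dominant, alpha):
    """Illustrative only: scale the dominant-species equilibrium constant by a
    fractionation factor to estimate the rare-isotope species constant, since
    exchange constants are ratios of individual-species constants."""
    return K_dominant * alpha
```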
14. Chemical Equilibrium, Unit 4: Equilibria in Acid-Base Systems. A Computer-Enriched Module for Introductory Chemistry. Student's Guide and Teacher's Guide.
ERIC Educational Resources Information Center
Settle, Frank A., Jr.
Presented are the teacher's guide and student materials for one of a series of self-instructional, computer-based learning modules for an introductory, undergraduate chemistry course. The student manual for this acid-base equilibria unit includes objectives, prerequisites, pretest, a discussion of equilibrium constants, and 20 problem sets.…
15. Effects of intravenous solutions on acid-base equilibrium: from crystalloids to colloids and blood components.
PubMed
Langer, Thomas; Ferrari, Michele; Zazzeron, Luca; Gattinoni, Luciano; Caironi, Pietro
2014-01-01
Intravenous fluid administration is a medical intervention performed worldwide on a daily basis. Nevertheless, only a few physicians are aware of the characteristics of intravenous fluids and their possible effects on plasma acid-base equilibrium. According to Stewart's theory, pH is independently regulated by three variables: partial pressure of carbon dioxide, strong ion difference (SID), and total amount of weak acids (ATOT). When fluids are infused, plasma SID and ATOT tend toward the SID and ATOT of the administered fluid. Depending on their composition, fluids can therefore lower, increase, or leave pH unchanged. As a general rule, crystalloids having a SID greater than plasma bicarbonate concentration (HCO₃-) cause an increase in plasma pH (alkalosis), those having a SID lower than HCO₃- cause a decrease in plasma pH (acidosis), while crystalloids with a SID equal to HCO₃- leave pH unchanged, regardless of the extent of the dilution. Colloids and blood components are composed of a crystalloid solution as solvent, and the abovementioned rules partially hold true also for these fluids. The scenario is however complicated by the possible presence of weak anions (albumin, phosphates and gelatins) and their effect on plasma pH. The present manuscript summarises the characteristics of crystalloids, colloids, buffer solutions and blood components and reviews their effect on acid-base equilibrium. Understanding the composition of intravenous fluids, along with the application of simple physicochemical rules best described by Stewart's approach, are pivotal steps to fully elucidate and predict alterations of plasma acid-base equilibrium induced by fluid therapy.
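The SID-versus-bicarbonate rule summarized in the abstract can be sketched as a trivial classifier; the function name and the 24 mEq/L plasma bicarbonate value below are illustrative assumptions, not taken from the review:

```python
def crystalloid_ph_effect(fluid_sid: float, plasma_hco3: float) -> str:
    """Stewart-style rule of thumb: a crystalloid whose SID exceeds
    plasma [HCO3-] tends to raise pH, a lower SID tends to lower it,
    and an equal SID leaves pH unchanged regardless of the extent of
    dilution (values in mEq/L)."""
    if fluid_sid > plasma_hco3:
        return "alkalosis"
    if fluid_sid < plasma_hco3:
        return "acidosis"
    return "unchanged"

# 0.9% saline has SID = 0 (equal Na+ and Cl-), so against a typical
# plasma [HCO3-] of 24 mEq/L it pushes toward dilutional acidosis.
print(crystalloid_ph_effect(0.0, 24.0))   # acidosis
print(crystalloid_ph_effect(28.0, 24.0))  # alkalosis
```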
16. Constant Entropy Properties for an Approximate Model of Equilibrium Air
NASA Technical Reports Server (NTRS)
Hansen, C. Frederick; Hodge, Marion E.
1961-01-01
Approximate analytic solutions for properties of equilibrium air up to 15,000 K have been programmed for machine computation. Temperature, compressibility, enthalpy, specific heats, and speed of sound are tabulated as constant entropy functions of temperature. The reciprocal of acoustic impedance and its integral with respect to pressure are also given for the purpose of evaluating the Riemann constants for one-dimensional, isentropic flow.
17. Microcomputer Calculation of Equilibrium Constants from Molecular Parameters of Gases.
ERIC Educational Resources Information Center
Venugopalan, Mundiyath
1989-01-01
Lists a BASIC program which computes the equilibrium constant as a function of temperature. Suggests use by undergraduates taking a one-year calculus-based physical chemistry course. Notes the program provides for up to four species, typically two reactants and two products. (MVL)
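The BASIC listing itself computes K from molecular parameters (partition functions) and is not reproduced in the abstract. As a much simpler stand-in, a Python sketch of the same qualitative behavior — K as a function of temperature — from hypothetical ΔH° and ΔS° via ΔG° = ΔH° − TΔS° = −RT ln K:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def equilibrium_constant(dH: float, dS: float, T: float) -> float:
    """K(T) from Delta-G = Delta-H - T*Delta-S and Delta-G = -R*T*ln K.
    dH in J/mol, dS in J/(mol K), T in kelvin; dH and dS are treated
    as temperature-independent."""
    dG = dH - T * dS
    return math.exp(-dG / (R * T))

# Hypothetical exothermic reaction, dH = -50 kJ/mol, dS = -100 J/(mol K):
# K falls as temperature rises, as expected for an exothermic process.
for T in (298.15, 500.0, 1000.0):
    print(T, equilibrium_constant(-50e3, -100.0, T))
```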
18. Acid-base equilibrium dynamics in methanol and dimethyl sulfoxide probed by two-dimensional infrared spectroscopy.
PubMed
Lee, Chiho; Son, Hyewon; Park, Sungnam
2015-07-21
Two-dimensional infrared (2DIR) spectroscopy, which has been proven to be an excellent experimental method for studying thermally-driven chemical processes, was successfully used to investigate the acid dissociation equilibrium of HN3 in methanol (CH3OH) and dimethyl sulfoxide (DMSO) for the first time. Our 2DIR experimental results indicate that the acid-base equilibrium occurs on picosecond timescales in CH3OH but that it occurs on much longer timescales in DMSO. Our results imply that the different timescales of the acid-base equilibrium originate from different proton transfer mechanisms between the acidic (HN3) and basic (N3(-)) species in CH3OH and DMSO. In CH3OH, the acid-base equilibrium is assisted by the surrounding CH3OH molecules which can directly donate H(+) to N3(-) and accept H(+) from HN3 and the proton migrates through the hydrogen-bonded chain of CH3OH. On the other hand, the acid-base equilibrium in DMSO occurs through the mutual diffusion of HN3 and N3(-) or direct proton transfer. Our 2DIR experimental results corroborate different proton transfer mechanisms in the acid-base equilibrium in protic (CH3OH) and aprotic (DMSO) solvents.
19. Acid-base titration curves for acids with very small ratios of successive dissociation constants.
PubMed
Campbell, B H; Meites, L
1974-02-01
The shapes of the potentiometric acid-base titration curves obtained in the neutralizations of polyfunctional acids or bases for which each successive dissociation constant is smaller than the following one are examined. In the region 0 < f < 1 (where f is the fraction of the equivalent volume of reagent that has been added) the slope of the titration curve decreases as the number j of acidic or basic sites increases. The difference between the pH-values at f = 0.75 and f = 0.25 has (1/j)log 9 as the lower limit of its maximum value.
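For a single acidic site the pH difference between 75% and 25% titrated is exactly log 9 ≈ 0.954, independent of pKa; assuming the quoted lower limit for j equivalent sites is (1/j)·log 9, a quick Henderson-Hasselbalch check:

```python
import math

def buffer_pH(pKa: float, f: float) -> float:
    """Henderson-Hasselbalch pH at titration fraction f (0 < f < 1)
    for a monoprotic weak acid."""
    return pKa + math.log10(f / (1.0 - f))

def pH_spread(pKa: float = 5.0) -> float:
    """pH(f = 0.75) - pH(f = 0.25); equals log 9 for a single site."""
    return buffer_pH(pKa, 0.75) - buffer_pH(pKa, 0.25)

print(round(pH_spread(), 3))        # log10(9) ~= 0.954
print(round(math.log10(9) / 3, 3))  # assumed lower limit for j = 3 sites
```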
20. Species-Specific Thiol-Disulfide Equilibrium Constant: A Tool To Characterize Redox Transitions of Biological Importance.
PubMed
Mirzahosseini, Arash; Somlyay, Máté; Noszál, Béla
2015-08-13
Microscopic redox equilibrium constants, a new species-specific type of physicochemical parameters, were introduced and determined to quantify thiol-disulfide equilibria of biological significance. The thiol-disulfide redox equilibria of glutathione with cysteamine, cysteine, and homocysteine were approached from both sides, and the equilibrium mixtures were analyzed by quantitative NMR methods to characterize the highly composite, co-dependent acid-base and redox equilibria. The directly obtained, pH-dependent, conditional constants were then decomposed by a new evaluation method, resulting in pH-independent, microscopic redox equilibrium constants for the first time. The 80 different, microscopic redox equilibrium constant values show close correlation with the respective thiolate basicities and provide sound means for the development of potent agents against oxidative stress.
1. Chromophore Structure of Photochromic Fluorescent Protein Dronpa: Acid-Base Equilibrium of Two Cis Configurations.
PubMed
Higashino, Asuka; Mizuno, Misao; Mizutani, Yasuhisa
2016-04-01
Dronpa is a novel photochromic fluorescent protein that exhibits fast response to light. The present article is the first report of the resonance and preresonance Raman spectra of Dronpa. We used the intensity and frequency of Raman bands to determine the structure of the Dronpa chromophore in two thermally stable photochromic states. The acid-base equilibrium in one photochromic state was observed by spectroscopic pH titration. The Raman spectra revealed that the chromophore in this state shows a protonation/deprotonation transition with a pKa of 5.2 ± 0.3 and maintains the cis configuration. The observed resonance Raman bands showed that the other photochromic state of the chromophore is in a trans configuration. The results demonstrate that Raman bands selectively enhanced for the chromophore yield valuable information on the molecular structure of the chromophore in photochromic fluorescent proteins after careful elimination of the fluorescence background. PMID:26991398
3. Complexation Constants of Ubiquinone-0 and Ubiquinone-10 with Nucleosides and Nucleic Acid Bases
Rahawi, Kassim Y.; Shanshal, Muthana
2008-02-01
UV spectrophotometric measurements were done on mixtures of the ubiquinones Ub-0 and Ub-10 in their monomeric form (c < 10^-5 mol/l) with the nucleosides adenosine, cytidine, 2'-desoxyadenosine, 2'-desoxyguanosine, guanosine and thymidine, as well as the nucleic acid bases adenine, cytosine, hypoxanthine, thymine and uracil. Applying the Liptay method, it was found that both ubiquinones form 1 : 1 interaction complexes with the nucleic acid components. The complexation constants were found to be in the order of 10^5 l mol^-1. The calculated ΔG values were negative (~ -7.0 kcal/mol), suggesting favoured hydrogen bridge formation. This is confirmed by the positive change of the entropy ΔS. The complexation enthalpies ΔH for all complexes are negative, suggesting exothermal interactions.
4. Effect of water content on the acid-base equilibrium of cyanidin-3-glucoside.
PubMed
Coutinho, Isabel B; Freitas, Adilson; Maçanita, António L; Lima, J C
2015-04-01
Laser Flash Photolysis was employed to measure the deprotonation and reprotonation rate constants of cyanidin 3-monoglucoside (kuromanin) in water/methanol mixtures. It was found that the deprotonation rate constant kd decreases with decreasing water content, reflecting the lack of free water molecules around kuromanin, which may accommodate and stabilize the outgoing protons. On the other hand, the reprotonation rate constant, kp, increases with the decrease in water concentration from a value of kp = 2 × 10(10) l mol(-1) s(-1) in water up to kp = 6 × 10(10) l mol(-1) s(-1) at 5.6M water concentration in the mixture. The higher value of kp at lower water concentrations reflects the fact that the proton is not freely escaping the solvation shell of the molecule. The deprotonation rate constant decreases with decreasing water content, reflecting the lack of free water molecules around kuromanin that can accommodate the outgoing protons. Overall, the acidity constant of the flavylium cation decreases with the decrease in water concentration from pKa values of 3.8 in water to approximately 4.8 in water-depleted media, thus shifting the equilibrium towards the red-coloured form, AH(+), at low water contents. The presence, or lack, of water, will affect the colour shade (red to blue) of kuromanin. This is relevant for its role as an intrinsic food component and as a food pigment additive (E163). PMID:25442581
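The reported pKa and the two rate constants are linked by Ka = kd/kp. A back-of-envelope check; the kd value below is inferred from the reported pKa ≈ 3.8 and kp = 2 × 10^10 l mol^-1 s^-1, not measured directly:

```python
import math

def pKa_from_rates(k_deprot: float, k_reprot: float) -> float:
    """Ka = kd / kp for AH+ <=> A + H+; kd in s^-1 and kp in
    L mol^-1 s^-1, so Ka comes out in mol/L."""
    return -math.log10(k_deprot / k_reprot)

# kd ~ 3.2e6 s^-1 reproduces the reported aqueous pKa of ~3.8:
print(round(pKa_from_rates(3.2e6, 2e10), 2))  # → 3.8
```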
6. Computational calculation of equilibrium constants: addition to carbonyl compounds.
PubMed
Gómez-Bombarelli, Rafael; González-Pérez, Marina; Pérez-Prior, María Teresa; Calle, Emilio; Casado, Julio
2009-10-22
Hydration reactions are relevant for understanding many organic mechanisms. Since the experimental determination of hydration and hemiacetalization equilibrium constants is fairly complex, computational calculations now offer a useful alternative to experimental measurements. In this work, carbonyl hydration and hemiacetalization constants were calculated from the free energy differences between compounds in solution, using absolute and relative approaches. The following conclusions can be drawn: (i) The use of a relative approach in the calculation of hydration and hemiacetalization constants allows compensation of systematic errors in the solvation energies. (ii) On average, the methodology proposed here can predict hydration constants within +/- 0.5 log K(hyd) units for aldehydes. (iii) Hydration constants can be calculated for ketones and carboxylic acid derivatives within less than +/- 1.0 log K(hyd), on average, at the CBS-Q level of theory. (iv) The proposed methodology can predict hemiacetal formation constants accurately at the MP2 6-31++G(d,p) level using a common reference. If group references are used, the results obtained using the much cheaper DFT-B3LYP 6-31++G(d,p) level are almost as accurate. (v) In general, the best results are obtained if a common reference for all compounds is used. The use of group references improves the results at the lower levels of theory, but at higher levels, this becomes unnecessary. PMID:19761202
8. Determination of acid-base dissociation constants of azahelicenes by capillary zone electrophoresis.
PubMed
Ehala, Sille; Mísek, Jirí; Stará, Irena G; Starý, Ivo; Kasicka, Václav
2008-08-01
CZE was employed to determine acid-base dissociation constants (pK(a)) of ionogenic groups of azahelicenes in methanol (MeOH). Azahelicenes are unique 3-D aromatic systems, which consist of ortho-fused benzene/pyridine units and exhibit helical chirality. The pK(a) values of pyridinium groups of the studied azahelicenes were determined from the dependence of their effective electrophoretic mobility on pH by a nonlinear regression analysis. The effective mobilities of azahelicenes were determined by CZE at pH range between 2.1 and 10.5. Thermodynamic pK(a) values of monobasic 1-aza[6]helicene and 2-aza[6]helicene in MeOH were determined to be 4.94 +/- 0.05 and 5.68 +/- 0.05, respectively, and pK(a) values of dibasic 1,14-diaza[5]helicene were found to be equal to 7.56 +/- 0.38 and 8.85 +/- 0.26. From these values, the aqueous pK(a) of these compounds was estimated.
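The mobility-versus-pH regression can be imitated with a toy model: only the protonated base migrates, and a naive grid search recovers the pKa from synthetic data. The cation mobility scale (20 units) is an arbitrary assumption; only the reported pKa of 4.94 comes from the abstract:

```python
def effective_mobility(mu_cation: float, pH: float, pKa: float) -> float:
    """CZE model for a monobasic analyte B: only protonated BH+ migrates,
    with ionized fraction 1 / (1 + 10**(pH - pKa))."""
    return mu_cation / (1.0 + 10.0 ** (pH - pKa))

def fit_pKa(points, mu_cation, grid=None):
    """Naive grid search for the pKa that minimizes squared residuals
    of the mobility-vs-pH model; points is a list of (pH, mobility)."""
    grid = grid or [i / 100.0 for i in range(200, 1100)]  # pKa 2.00-10.99
    def sse(pKa):
        return sum((m - effective_mobility(mu_cation, pH, pKa)) ** 2
                   for pH, m in points)
    return min(grid, key=sse)

# Synthetic data generated at the reported pKa of 4.94:
data = [(pH, effective_mobility(20.0, pH, 4.94)) for pH in (3, 4, 5, 6, 7)]
print(fit_pKa(data, 20.0))  # recovers 4.94
```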
9. Why and How To Teach Acid-Base Reactions without Equilibrium.
ERIC Educational Resources Information Center
Carlton, Terry S.
1997-01-01
Recommends an approach to the treatment of acid-base equilibria that involves treating each reaction as either going to completion or not occurring at all. Compares the method with the traditional approach step by step. (DDR)
10. Equilibrium constants and protonation site for N-methylbenzenesulfonamides
PubMed Central
Rosa da Costa, Ana M; García-Río, Luis; Pessêgo, Márcia
2011-01-01
The protonation equilibria of four substituted N-methylbenzenesulfonamides, X-MBS: X = 4-MeO (3a), 4-Me (3b), 4-Cl (3c) and 4-NO2 (3d), in aqueous sulfuric acid were studied at 25 °C by UV–vis spectroscopy. As expected, the values for the acidity constants are highly dependent on the electron-donor character of the substituent (the pK(BH+) values are −3.5 ± 0.2, −4.2 ± 0.2, −5.2 ± 0.3 and −6.0 ± 0.3 for 3a, 3b, 3c and 3d, respectively). The solvation parameter m* is always higher than 0.5 and points to a decrease in the importance of solvation on the cation stabilization as the electron-donor character of the substituent increases. Hammett plots of the equilibrium constants showed a better correlation with the σ+ substituent parameter than with σ, which indicates that the initial protonation site is the oxygen atom of the sulfonyl group. PMID:22238552
11. Using nonequilibrium capillary electrophoresis of equilibrium mixtures (NECEEM) for simultaneous determination of concentration and equilibrium constant.
PubMed
Kanoatov, Mirzo; Galievsky, Victor A; Krylova, Svetlana M; Cherney, Leonid T; Jankowski, Hanna K; Krylov, Sergey N
2015-03-01
Nonequilibrium capillary electrophoresis of equilibrium mixtures (NECEEM) is a versatile tool for studying affinity binding. Here we describe a NECEEM-based approach for simultaneous determination of both the equilibrium constant, K(d), and the unknown concentration of a binder that we call a target, T. In essence, NECEEM is used to measure the unbound equilibrium fraction, R, for the binder with a known concentration that we call a ligand, L. The first set of experiments is performed at varying concentrations of T, prepared by serial dilution of the stock solution, but at a constant concentration of L, which is as low as its reliable quantitation allows. The value of R is plotted as a function of the dilution coefficient, and dilution corresponding to R = 0.5 is determined. This dilution of T is used in the second set of experiments in which the concentration of T is fixed but the concentration of L is varied. The experimental dependence of R on the concentration of L is fitted with a function describing their theoretical dependence. Both K(d) and the concentration of T are used as fitting parameters, and their sought values are determined as the ones that generate the best fit. We have fully validated this approach in silico by using computer-simulated NECEEM electropherograms and then applied it to experimental determination of the unknown concentration of MutS protein and K(d) of its interactions with a DNA aptamer. The general approach described here is applicable not only to NECEEM but also to any other method that can determine a fraction of unbound molecules at equilibrium.
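The unbound equilibrium fraction R that NECEEM measures follows, for simple 1:1 binding, from the usual quadratic mass-action solution. A sketch; the concentration values are illustrative, not from the paper:

```python
import math

def unbound_fraction(L_total: float, T_total: float, Kd: float) -> float:
    """Equilibrium fraction R of ligand L left unbound for 1:1 binding
    L + T <=> LT with dissociation constant Kd (all concentrations in
    the same units), via the standard quadratic for the complex [LT]."""
    b = L_total + T_total + Kd
    LT = (b - math.sqrt(b * b - 4.0 * L_total * T_total)) / 2.0
    return (L_total - LT) / L_total

# With [L] << Kd and [T] = Kd, about half of L remains unbound,
# which is the R = 0.5 condition used to pick the target dilution:
print(round(unbound_fraction(1e-9, 1e-6, 1e-6), 3))  # → 0.5
```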
12. The Perils of Carbonic Acid and Equilibrium Constants.
ERIC Educational Resources Information Center
Jencks, William P.; Altura, Rachel A.
1988-01-01
Discusses the effects caused by small amounts of carbon dioxide usually present in water and acid-base equilibria of dilute solutions. Notes that dilute solutions of most weak acids and bases undergo significant dissociation or protonation. (MVL)
13. Calculation of individual isotope equilibrium constants for implementation in geochemical models
USGS Publications Warehouse
Thorstenson, Donald C.; Parkhurst, David L.
2002-01-01
Theory is derived from the work of Urey to calculate equilibrium constants commonly used in geochemical equilibrium and reaction-transport models for reactions of individual isotopic species. Urey showed that equilibrium constants of isotope exchange reactions for molecules that contain two or more atoms of the same element in equivalent positions are related to isotope fractionation factors by α = (Kex)^(1/n), where n is the number of atoms exchanged. This relation is extended to include species containing multiple isotopes, for example 13C16O18O and 1H2H18O, and to include the effects of nonideality. The equilibrium constants of the isotope exchange reactions provide a basis for calculating the individual isotope equilibrium constants for the geochemical modeling reactions. The temperature dependence of the individual isotope equilibrium constants can be calculated from the temperature dependence of the fractionation factors. Equilibrium constants are calculated for all species that can be formed from 12C, 13C, 16O, and 18O, and for selected species containing 1H and 2H, in the relevant molecules and ion pairs, where the subscripts g, aq, l, and s refer to gas, aqueous, liquid, and solid, respectively. These equilibrium constants are used in the geochemical model PHREEQC to produce an equilibrium and reaction-transport model that includes these isotopic species. Methods are presented for calculation of the individual isotope equilibrium constants for the asymmetric bicarbonate ion. An example calculates the equilibrium of multiple isotopes among multiple species and phases.
14. Effect of Acid-Base Equilibrium on Absorption Spectra of Humic acid in the Presence of Copper Ions
Lavrik, N. L.; Mulloev, N. U.
2014-03-01
The reaction between humic acid (HA, sample IHSS) and a metal ion (Cu2+) that was manifested as absorption bands in the range 210-350 nm was recorded using absorption spectroscopy. The reaction was found to be more effective as the pH increased. These data were interpreted in the framework of generally accepted concepts about the influence of acid-base equilibrium on the dissociation of salts, according to which increasing the solution pH increases the concentration of HA anions. It was suggested that [HA-Cu2+] complexes formed.
15. Measurement of both the equilibrium constant and rate constant for electronic energy transfer by control of the limiting kinetic regimes.
PubMed
Vagnini, Michael T; Rutledge, W Caleb; Wagenknecht, Paul S
2010-02-01
Electronic energy transfer can fall into two limiting cases. When the rate of the energy transfer back reaction is much faster than relaxation of the acceptor excited state, equilibrium between the donor and acceptor excited states is achieved and only the equilibrium constant for the energy transfer can be measured. When the rate of the back reaction is much slower than relaxation of the acceptor, the energy transfer is irreversible and only the forward rate constant can be measured. Herein, we demonstrate that with trans-[Cr(d(4)-cyclam)(CN)(2)](+) as the donor and either trans-[Cr([15]aneN(4))(CN)(2)](+) or trans-[Cr(cyclam)(CN)(2)](+) as the acceptor, both limits can be obtained by control of the donor concentration. The equilibrium constant and rate constant for the case in which trans-[Cr([15]aneN(4))(CN)(2)](+) is the acceptor are 0.66 and 1.7 x 10(7) M(-1) s(-1), respectively. The equilibrium constant is in good agreement with the value of 0.60 determined using the excited state energy gap between the donor and acceptor species. For the thermoneutral case in which trans-[Cr(cyclam)(CN)(2)](+) is the acceptor, an experimental equilibrium constant of 0.99 was reported previously, and the rate constant has now been measured as 4.0 x 10(7) M(-1) s(-1).
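The comparison with the energy-gap value amounts to a one-line Boltzmann estimate, K = exp(−ΔE/kT). The 106 cm^-1 gap below is back-calculated from the reported K ≈ 0.60, not taken from the paper:

```python
import math

K_B = 0.695  # Boltzmann constant in cm^-1 K^-1

def energy_transfer_K(delta_E_cm: float, T: float = 298.0) -> float:
    """Equilibrium constant for donor* + acceptor <=> donor + acceptor*
    from the excited-state energy gap dE = E(acceptor*) - E(donor*)
    in cm^-1: K = exp(-dE / kT)."""
    return math.exp(-delta_E_cm / (K_B * T))

# An uphill gap of ~106 cm^-1 reproduces K ~ 0.60 at room temperature;
# a thermoneutral pair (gap = 0) gives K = 1.
print(round(energy_transfer_K(106.0), 2))
print(energy_transfer_K(0.0))
```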
16. A Unified Kinetics and Equilibrium Experiment: Rate Law, Activation Energy, and Equilibrium Constant for the Dissociation of Ferroin
ERIC Educational Resources Information Center
Sattar, Simeen
2011-01-01
Tris(1,10-phenanthroline)iron(II) is the basis of a suite of four experiments spanning 5 weeks. Students determine the rate law, activation energy, and equilibrium constant for the dissociation of the complex ion in acid solution and base dissociation constant for phenanthroline. The focus on one chemical system simplifies a daunting set of…
17. The Rigorous Evaluation of Spectrophotometric Data to Obtain an Equilibrium Constant.
ERIC Educational Resources Information Center
Long, John R.; Drago, Russell S.
1982-01-01
Most students do not know how to determine the equilibrium constant and estimate the error in it from spectrophotometric data that contain experimental errors. This "dry-lab" experiment describes a method that may be used to determine the "best-fit" value of the 1:1 equilibrium constant to spectrophotometric data. (Author/JN)
18. Constants and thermodynamics of the acid-base equilibria of triglycine in water-ethanol solutions containing sodium perchlorate at 298 K
Pham Tkhi, L.; Usacheva, T. R.; Tukumova, N. V.; Koryshev, N. E.; Khrenova, T. M.; Sharnin, V. A.
2016-02-01
The acid-base equilibrium constants for glycyl-glycyl-glycine (triglycine) in water-ethanol solvents containing 0.0, 0.1, 0.3, and 0.5 mole fractions of ethanol are determined by potentiometric titration at 298.15 K and an ionic strength of 0.1, maintained with sodium perchlorate. It is established that an increase in the ethanol content in the solvent reduces the dissociation constant of the carboxyl group of triglycine (increases pK1) and increases the dissociation constant of the amino group of triglycine (decreases pK2). It is noted that the weakening of the acidic properties of a triglycinium ion upon an increase of the ethanol content in the solvent is due to the attenuation of the solvation shell of the zwitterionic form of triglycine, and to the increased solvation of triglycinium ions. It is concluded that the acid strength of triglycine increases along with a rise in the EtOH content in the solvent, due to the desolvation of the tripeptide zwitterion and the enhanced solvation of protons.
19. Acid-base equilibrium during capnoretroperitoneoscopic nephrectomy in patients with end-stage renal failure: a preliminary report.
PubMed
Demian, A D; Esmail, O M; Atallah, M M
2000-04-01
We have studied the acid-base equilibrium in 12 patients with end-stage renal failure (ESRF) during capnoretroperitoneoscopic nephrectomy. Bupivacaine (12 mL, 0.375%) and morphine (2mg) were given in the lumbar epidural space, and fentanyl (0.5 microg kg(-1)) and midazolam (50 microg kg(-1)) were given intravenously. Anaesthesia was induced by thiopental, maintained with halothane carried by oxygen enriched air (inspired oxygen fraction = 0.35), and ventilation was achieved with a tidal volume of 10 mL kg(-1) at a rate of 12 min(-1). This procedure resulted in a mild degree of respiratory acidosis that was cleared within 60 min. We conclude that capnoretroperitoneoscopic nephrectomy can be performed in patients with end-stage renal failure with minimal transient respiratory acidosis that can be avoided by increased ventilation.
1. Galvanic Cells and the Determination of Equilibrium Constants
ERIC Educational Resources Information Center
Brosmer, Jonathan L.; Peters, Dennis G.
2012-01-01
Readily assembled mini-galvanic cells can be employed to compare their observed voltages with those predicted from the Nernst equation and to determine solubility products for silver halides and overall formation constants for metal-ammonia complexes. Results obtained by students in both an honors-level first-year course in general chemistry and…
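The underlying arithmetic — converting a standard cell potential into an equilibrium constant via ΔG° = −nFE° = −RT ln K — is a one-liner. A sketch using the textbook Ag+/Ag and AgCl/Ag couples; this illustrates the method, not the article's specific cells:

```python
import math

R, F = 8.314, 96485.0  # gas constant (J mol^-1 K^-1), Faraday (C mol^-1)

def K_from_standard_potential(E_cell: float, n: int = 1,
                              T: float = 298.15) -> float:
    """K for the overall cell reaction from the standard cell potential,
    via Delta-G = -n*F*E = -R*T*ln K (E_cell in volts)."""
    return math.exp(n * F * E_cell / (R * T))

# AgCl(s) <=> Ag+ + Cl- built from the Ag+/Ag (E = +0.799 V) and
# AgCl/Ag,Cl- (E = +0.222 V) couples: E_cell = 0.222 - 0.799 = -0.577 V,
# giving Ksp ~ 1.8e-10, close to the tabulated solubility product.
print(K_from_standard_potential(0.222 - 0.799))
```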
2. Weak Acid Ionization Constants and the Determination of Weak Acid-Weak Base Reaction Equilibrium Constants in the General Chemistry Laboratory
ERIC Educational Resources Information Center
Nyasulu, Frazier; McMills, Lauren; Barlag, Rebecca
2013-01-01
A laboratory to determine the equilibrium constants of weak acid negative weak base reactions is described. The equilibrium constants of component reactions when multiplied together equal the numerical value of the equilibrium constant of the summative reaction. The component reactions are weak acid ionization reactions, weak base hydrolysis…
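The multiply-the-component-constants idea can be checked directly: for HA + B ⇌ A− + BH+, K = Ka·Kb/Kw. A sketch with textbook values for acetic acid and ammonia; these example species are assumptions, not necessarily the ones used in the described laboratory:

```python
def weak_acid_weak_base_K(Ka_acid: float, Kb_base: float,
                          Kw: float = 1.0e-14) -> float:
    """K for HA + B <=> A- + BH+ as the product of the component
    equilibria: HA ionization (Ka), B hydrolysis (Kb), and the reverse
    of water autoionization (1/Kw)."""
    return Ka_acid * Kb_base / Kw

# Acetic acid (Ka ~ 1.8e-5) + ammonia (Kb ~ 1.8e-5) gives K ~ 3.2e4,
# i.e. the reaction lies well to the right.
print(weak_acid_weak_base_K(1.8e-5, 1.8e-5))
```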
3. Profiles of equilibrium constants for self-association of aromatic molecules.
PubMed
Beshnova, Daria A; Lantushenko, Anastasia O; Davies, David B; Evstigneev, Maxim P
2009-04-28
Analysis of the noncovalent, noncooperative self-association of identical aromatic molecules assumes that the equilibrium self-association constants are either independent of the number of molecules (the EK-model) or change progressively with increasing aggregation (the AK-model). The dependence of the self-association constant on the number of molecules in the aggregate (i.e., the profile of the equilibrium constant) was empirically derived in the AK-model but, in order to provide some physical understanding of the profile, it is proposed that the sources for attenuation of the equilibrium constant are the loss of translational and rotational degrees of freedom, the ordering of molecules in the aggregates and the electrostatic contribution (for charged units). Expressions are derived for the profiles of the equilibrium constants for both neutral and charged molecules. Although the EK-model has been widely used in the analysis of experimental data, it is shown in this work that the derived equilibrium constant, K(EK), depends on the concentration range used and hence, on the experimental method employed. The relationship has also been demonstrated between the equilibrium constant K(EK) and the real dimerization constant, K(D), which shows that the value of K(EK) is always lower than K(D).
4. A one-term extrapolation method for estimating equilibrium constants of aqueous reactions at elevated temperatures
Gu, Y.; Gammons, C. H.; Bloom, M. S.
1994-09-01
A one-term method for extrapolating equilibrium constants for aqueous reactions is proposed which is based on the observation that the change in free energy of a well-balanced isocoulombic reaction is nearly independent of temperature. The current practice in extrapolating log K values for isocoulombic reactions is to omit the ΔCp term but include a ΔS term (i.e., the two-term extrapolation equation of LINDSAY, 1980). However, we observe that the ΔCp and ΔS terms for many isocoulombic reactions are not only small, but are often opposite in sign, and therefore tend to cancel one another. Thus, inclusion of an entropy term often yields estimates which are less accurate than omission of both terms. The one-term extrapolation technique is tested with literature data for a large number of isocoulombic reactions involving ion-ligand exchange, cation hydrolysis, acid-base neutralization, redox, and selected reactions involving solids. In most cases the extrapolated values are in excellent agreement with the experimental measurements, especially at higher temperatures where they are often more accurate than those obtained using the two-term equation of LINDSAY (1980). The results are also comparable to estimates obtained using the modified HKF model of TANGER and HELGESON (1988) and the density model of ANDERSON et al. (1991). It is also found to produce reasonable estimates for isocoulombic reactions at elevated pressure (up to P = 2 kb) and ionic strength (up to I = 1.0). The principal advantage of the one-term method is that accurate estimates of high temperature equilibrium constants may be obtained using only free energy data for the reaction of interest at one reference temperature. The principal disadvantage is that the accuracies of the estimates are somewhat dependent on the model reaction selected to balance the isocoulombic reaction. Satisfactory results are obtained for reactions that have minimal energetic, electrostatic, structural, and volumetric
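The one-term scheme reduces to a one-line calculation: if ΔG°r of the well-balanced isocoulombic reaction is taken as temperature independent, then from ΔG° = −RT ln K it follows that log K(T) = log K(T_ref) · T_ref/T. A minimal sketch (the reaction and numbers are hypothetical, for illustration only):

```python
def log_k_one_term(log_k_ref: float, t_ref: float, t: float) -> float:
    """One-term isocoulombic extrapolation: with delta-G_r assumed
    independent of temperature, log K scales as T_ref / T."""
    return log_k_ref * t_ref / t

# Hypothetical isocoulombic reaction with log K = 4.0 at 25 C,
# extrapolated to 300 C using only the single reference datum
log_k_573 = log_k_one_term(4.0, 298.15, 573.15)
```

This is the method's main advantage as stated above: one free-energy datum at a reference temperature suffices, with no ΔS or ΔCp terms to estimate.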
5. The Equilibrium Constant for Bromothymol Blue: A General Chemistry Laboratory Experiment Using Spectroscopy
ERIC Educational Resources Information Center
Klotz, Elsbeth; Doyle, Robert; Gross, Erin; Mattson, Bruce
2011-01-01
A simple, inexpensive, and environmentally friendly undergraduate laboratory experiment is described in which students use visible spectroscopy to determine a numerical value for an equilibrium constant, K[subscript c]. The experiment correlates well with the lecture topic of equilibrium even though the subject of the study is an acid-base…
6. STUDIES OF THE ACID-BASE EQUILIBRIUM IN DISEASE FROM THE POINT OF VIEW OF BLOOD GASES.
PubMed
Means, J H; Bock, A V; Woodwell, M N
1921-01-31
Carbon dioxide diagrams (Haggard and Henderson (9)) have been constructed for the blood of a series of hospital patients as a method of studying disturbances in their acid-base equilibrium. A diabetic with a low level of blood alkali, but with a normal blood reaction, a compensated acidosis in other words, showed a rapid return towards normal with no treatment but fasting and increased water and salt intake. A nephritic with a decompensated acidosis and a very low blood alkali was rapidly brought to a condition of decompensated alkalosis with a high blood alkali by the therapeutic administration of sodium bicarbonate. It is suggested that the therapeutic use of alkali in acidosis is probably only indicated in the decompensated variety, and that there it should be controlled carefully and the production of alkalosis avoided. The diagram obtained in three pneumonia patients suggested that they were suffering from a condition of carbonic acidosis, due perhaps to insufficient pulmonary ventilation. In two out of three cases of anemia the dissociation curve was found to lie at a higher level than normal. No explanation for this finding was offered. PMID:19868489
7. Determination of acid-base dissociation constants of very weak zwitterionic heterocyclic bases by capillary zone electrophoresis.
PubMed
Ehala, Sille; Grishina, Anastasiya A; Sheshenev, Andrey E; Lyapkalo, Ilya M; Kašička, Václav
2010-12-17
Thermodynamic acid-base dissociation (ionization) constants (pK(a)) of seven zwitterionic heterocyclic bases, the first representatives of a new heterocyclic family (2,3,5,7,8,9-hexahydro-1H-diimidazo[1,2-c:2',1'-f][1,3,2]diazaphosphinin-4-ium-5-olate 5-oxides), originally designed as chiral Lewis base catalysts for enantioselective reactions, were determined by capillary zone electrophoresis (CZE). The pK(a) values of the above very weak zwitterionic bases were determined from the dependence of their effective electrophoretic mobility on pH in strongly acidic background electrolytes (pH 0.85-2.80). Prior to pK(a) calculation by non-linear regression analysis, the CZE-measured effective mobilities were corrected to the reference temperature, 25°C, and constant ionic strength, 25 mM. Thermodynamic pK(a) values of the analyzed zwitterionic heterocyclic bases were found to be particularly low, in the range 0.04-0.32. Moreover, from the pH dependence of the effective mobility of the bases, some other relevant characteristics, such as actual and absolute ionic mobilities and hydrodynamic radii of the acidic cationic forms of the bases, were determined.
8. A Simple Method to Calculate the Temperature Dependence of the Gibbs Energy and Chemical Equilibrium Constants
ERIC Educational Resources Information Center
Vargas, Francisco M.
2014-01-01
The temperature dependence of the Gibbs energy and important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although, this is a well-known approach and traditionally covered as part of any physical chemistry course, the required…
9. Apparent equilibrium constants and standard transformed Gibbs energies of biochemical reactions involving carbon dioxide.
PubMed
Alberty, R A
1997-12-01
When carbon dioxide is produced in a biochemical reaction, the expression for the apparent equilibrium constant K' can be written in terms of the partial pressure of carbon dioxide in the gas phase or the total concentration of species containing CO2 in the aqueous phase, referred to here as [TotCO2]. The values of these two apparent equilibrium constants are different because they correspond to different ways of writing the biochemical equations. Their dependencies on pH and ionic strength are also different. The ratio of these two apparent equilibrium constants is equal to the apparent Henry's law constant K'H. This article provides derivations of equations for the calculation of the standard transformed Gibbs energies of formation of TotCO2 and values of the apparent Henry's law constant at various pH levels and ionic strengths. These equations involve the four equilibrium constants interconnecting the five species [CO2(g), CO2(aq), H2CO3, HCO3-, and CO3(2-)] of carbon dioxide. In the literature there are many errors in the treatment of equilibrium data on biochemical reactions involving carbon dioxide, and so several examples are discussed here, including calculation of standard transformed Gibbs energies of formation of reactants. This approach also applies to net reactions, and the net reaction for the oxidation of glucose to carbon dioxide and water is discussed.
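The ratio K'H = [TotCO2]/p_CO2 described above can be sketched directly from the species equilibria. This sketch uses textbook 25 °C, zero-ionic-strength values (Henry constant ≈ 0.034 M bar⁻¹ and composite first acidity constant pK1 ≈ 6.35, which lumps CO2(aq) and H2CO3, plus pK2 ≈ 10.33); these numbers are illustrative assumptions, not taken from Alberty's tables:

```python
KH = 0.034               # M/bar, Henry constant for CO2(aq) at 25 C (assumed)
PK1, PK2 = 6.35, 10.33   # composite carbonic-acid pKa values, 25 C, I = 0 (assumed)

def apparent_henry(pH: float) -> float:
    """K'_H = [TotCO2]/p_CO2.  TotCO2 lumps the aqueous carbon species,
    so K'_H grows with pH as HCO3- and CO3(2-) accumulate."""
    h = 10.0 ** (-pH)
    k1, k2 = 10.0 ** (-PK1), 10.0 ** (-PK2)
    return KH * (1.0 + k1 / h + k1 * k2 / h ** 2)
```

At pH 7 this gives about 0.19 M bar⁻¹, more than five times the pH-independent Henry constant — exactly the sort of pH dependence that must not be ignored when converting between the two apparent equilibrium constants.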
10. Determination of acid-base dissociation constants of amino- and guanidinopurine nucleotide analogs and related compounds by capillary zone electrophoresis.
PubMed
Solínová, Veronika; Kasicka, Václav; Koval, Dusan; Cesnek, Michal; Holý, Antonín
2006-03-01
CZE has been applied to the determination of acid-base dissociation constants (pKa) of ionogenic groups of newly synthesized amino- and (amino)guanidinopurine nucleotide analogs, such as acyclic nucleoside phosphonates, acyclic nucleoside phosphonate diesters, and other related compounds. These compounds bear characteristic pharmacophores contained in various important biologically active substances, such as cytostatics and antivirals. The pKa values of ionogenic groups of the above compounds were determined by nonlinear regression analysis of the experimentally measured pH dependence of their effective electrophoretic mobilities. The effective mobilities were measured by CZE performed in a series of BGEs in a broad pH range (3.50-11.25), at constant ionic strength (25 mM) and temperature (25 degrees C). pKa values were determined for the protonated guanidinyl group in (amino)guanidino 9-alkylpurines and in (amino)guanidinopurine nucleotide analogs, such as acyclic nucleoside phosphonates and acyclic nucleoside phosphonate diesters, for phosphonic acid to the second dissociation degree (-2) in acyclic nucleoside phosphonates of amino and (amino)guanidino 9-alkylpurines, and for the protonated nitrogen in position 1 (N1) of the purine moiety in acyclic nucleoside phosphonates of amino 9-alkylpurines. The thermodynamic pKa of the protonated guanidinyl group was estimated to be in the range 7.75-10.32, the pKa of phosphonic acid to the second dissociation degree achieved values of 6.64-7.46, and the pKa of the protonated nitrogen in position 1 of the purine was in the range 4.13-4.89, depending on the structure of the analyzed compounds.
11. Classical calculation of the equilibrium constants for true bound dimers using complete potential energy surface
SciTech Connect
Buryak, Ilya; Vigasin, Andrey A.
2015-12-21
The present paper aims at deriving classical expressions which permit calculation of the equilibrium constant for weakly interacting molecular pairs using a complete multidimensional potential energy surface. The latter is often available nowadays as a result of the more and more sophisticated and accurate ab initio calculations. The water dimer formation is considered as an example. It is shown that even in case of a rather strongly bound dimer the suggested expression permits obtaining quite reliable estimate for the equilibrium constant. The reliability of our obtained water dimer equilibrium constant is briefly discussed by comparison with the available data based on experimental observations, quantum calculations, and the use of RRHO approximation, provided the latter is restricted to formation of true bound states only.
14. Estimation of the initial equilibrium constants in the formation of tetragonal lysozyme nuclei
NASA Technical Reports Server (NTRS)
Pusey, Marc L.
1991-01-01
Results are presented from a study of the equilibria, kinetic rates, and the aggregation pathway leading from the lysozyme monomer to the tetragonal crystal, using dialyzed and recrystallized commercial hen egg-white lysozyme. Relative light scattering intensity measurements were used to estimate the initial equilibrium constants for undersaturated lysozyme solutions in the tetragonal regime. The K1 value was estimated to be (1-3) × 10(4) L/mol. Estimates of subsequent equilibrium constants depend on the crystal aggregation model chosen or determined. Experimental data suggest that the tetragonal lysozyme crystal grows by addition of aggregates preformed in the bulk solution, rather than by monomer addition.
15. Does the Addition of Inert Gases at Constant Volume and Temperature Affect Chemical Equilibrium?
ERIC Educational Resources Information Center
Paiva, Joao C. M.; Goncalves, Jorge; Fonseca, Susana
2008-01-01
In this article we examine three approaches, leading to different conclusions, for answering the question "Does the addition of inert gases at constant volume and temperature modify the state of equilibrium?" In the first approach, the answer is yes as a result of a common students' alternative conception; the second approach, valid only for ideal…
16. Revealing equilibrium and rate constants of weak and fast noncovalent interactions.
PubMed
Mironov, Gleb G; Okhonin, Victor; Gorelsky, Serge I; Berezovski, Maxim V
2011-03-15
Rate and equilibrium constants of weak noncovalent molecular interactions are extremely difficult to measure. Here, we introduced a homogeneous approach called equilibrium capillary electrophoresis of equilibrium mixtures (ECEEM) to determine k(on), k(off), and K(d) of weak (K(d) > 1 μM) and fast kinetics (relaxation time, τ < 0.1 s) in quasi-equilibrium for multiple unlabeled ligands simultaneously in one microreactor. Conceptually, an equilibrium mixture (EM) of a ligand (L), target (T), and a complex (C) is prepared. The mixture is introduced into the beginning of a capillary reactor with aspect ratio >1000 filled with T. Afterward, differential mobility of L, T, and C along the reactor is induced by an electric field. The combination of differential mobility of reactants and their interactions leads to a change of the EM peak shape. This change is a function of rate constants, so the rate and equilibrium constants can be directly determined from the analysis of the EM peak shape (width and symmetry) and propagation pattern along the reactor. We proved experimentally the use of ECEEM for multiplex determination of kinetic parameters describing weak (3 mM > K(d) > 80 μM) and fast (0.25 s ≥ τ ≥ 0.9 ms) noncovalent interactions between four small molecule drugs (ibuprofen, S-flurbiprofen, salicylic acid and phenylbutazone) and α- and β-cyclodextrins. The affinity of the drugs was significantly higher for β-cyclodextrin than α-cyclodextrin and mostly determined by the rate constant of complex formation.
17. Equilibrium constant for carbamate formation from monoethanolamine and its relationship with temperature
SciTech Connect
Aroua, M.K.; Benamor, A.; Haji-Sulaiman, M.Z.
1999-09-01
Removal of acid gases such as CO(2) and H(2)S using aqueous solutions of alkanolamines is an industrially important process. The equilibrium constant for the formation of carbamate from monoethanolamine was evaluated at temperatures of 298, 308, 318, and 328 K and ionic strengths up to 1.7 M. From the plot of log(10) K versus I(0.5), the variation of the thermodynamic constant with temperature follows the relationship log(10) K(1) = -0.934 + (0.671 × 10(3) K)/T.
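The reported temperature correlation for the monoethanolamine (MEA) carbamate constant is easy to exercise numerically; a minimal sketch (the function and variable names are mine):

```python
def log10_K1_mea(T: float) -> float:
    """Reported correlation for MEA carbamate formation:
    log10 K1 = -0.934 + (0.671e3 K) / T, with T in kelvin."""
    return -0.934 + 0.671e3 / T

# The constant drops as temperature rises, as expected for an
# exothermic carbamate-formation reaction
K_298 = 10.0 ** log10_K1_mea(298.0)
K_328 = 10.0 ** log10_K1_mea(328.0)
```

The sketch gives K ≈ 21 at 298 K falling to K ≈ 13 at 328 K.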
18. Spectrophotometric Determination of the Dissociation Constant of an Acid-Base Indicator Using a Mathematical Deconvolution Technique
ERIC Educational Resources Information Center
Alter, Krystyn P.; Molloy, John L.; Niemeyer, Emily D.
2005-01-01
A laboratory experiment reinforces the concept of acid-base equilibria while introducing a common application of spectrophotometry and can easily be completed within a standard four-hour laboratory period. It provides students with an opportunity to use advanced data analysis techniques like data smoothing and spectral deconvolution to…
19. Measuring Equilibrium Binding Constants for the WT1-DNA Interaction Using a Filter Binding Assay.
PubMed
Romaniuk, Paul J
2016-01-01
Equilibrium binding of WT1 to specific sites in DNA and potentially RNA molecules is central in mediating the regulatory roles of this protein. In order to understand the functional effects of mutations in the nucleic acid-binding domain of WT1 proteins and/or mutations in the DNA- or RNA-binding sites, it is necessary to measure the equilibrium constant for formation of the protein-nucleic acid complex. This chapter describes the use of a filter binding assay to make accurate measurements of the binding of the WT1 zinc finger domain to the consensus WT1-binding site in DNA. The method described is readily adapted to the measurement of the effects of mutations in either the WT1 zinc finger domain or the putative binding sites within a promoter element or cellular RNA.
20. Equilibrium and dynamic osmotic behaviour of aqueous solutions with varied concentration at constant and variable volume.
PubMed
Minkov, Ivan L; Manev, Emil D; Sazdanova, Svetla V; Kolikov, Kiril H
2013-01-01
Osmosis is essential for living organisms. In biological systems the process usually occurs in confined volumes and may express specific features. The osmotic pressure in aqueous solutions was studied here experimentally as a function of solute concentration (0.05-0.5 M) in two different regimes: constant and variable solution volume. Sucrose, a biologically active substance, was chosen as a reference solute for the complex tests. A custom-made osmotic cell was used. A novel operative experimental approach, employing limited variation of the solution volume, was developed and applied for the purpose. The established equilibrium values of the osmotic pressure are in agreement with the theoretical expectations and do not exhibit any evident differences between the two regimes. In contrast, the obtained kinetic dependences reveal a striking divergence in the rates of the process at constant and at varied solution volume for the respective solute concentrations. The rise of pressure is much faster at constant solution volume, while the solvent influx is many times greater in the regime of variable volume. The results obtained suggest a feasible mechanism for the way in which living cells rapidly achieve osmotic equilibrium upon changes in the environment.
1. Does the ligand-biopolymer equilibrium binding constant depend on the number of bound ligands?
PubMed
Beshnova, Daria A; Lantushenko, Anastasia O; Evstigneev, Maxim P
2010-11-01
Conventional methods, such as Scatchard or McGhee-von Hippel analyses, used to treat ligand-biopolymer interactions, indirectly make the assumption that the microscopic binding constant is independent of the number of ligands, i, already bound to the biopolymer. Recent results on the aggregation of aromatic molecules (Beshnova et al., J Chem Phys 2009, 130, 165105) indicated that the equilibrium constant of self-association depends intrinsically on the number of molecules in an aggregate due to loss of translational and rotational degrees of freedom on formation of the complex. The influence of these factors on the equilibrium binding constant for ligand-biopolymer complexation was analyzed in this work. It was shown that under the conditions of binding of "small" molecules, these factors can effectively be ignored and, hence, do not provide any hidden systematic error in such widely used approaches as the Scatchard or McGhee-von Hippel methods for analyzing ligand-biopolymer complexation. © 2010 Wiley Periodicals, Inc. Biopolymers 93: 932-935, 2010.
2. Anomalously slow cyanide binding to Glycera dibranchiata monomer methemoglobin component II: Implication for the equilibrium constant
SciTech Connect
Mintorovitch, J.; Satterlee, J.D. )
1988-10-18
In comparison to sperm whale metmyoglobin, metleghemoglobin α, methemoglobins, and heme peroxidases, the purified Glycera dibranchiata monomer methemoglobin component II exhibits anomalously slow cyanide ligation kinetics. For the component II monomer methemoglobin this reaction has been studied under pseudo-first-order conditions at pH 6.0, 7.0, 8.0, and 9.0, employing 100-250-fold mole excesses of potassium cyanide at each pH. The analysis shows that the concentration-independent bimolecular rate constant is small in comparison to those of the other heme proteins. Furthermore, the results show that the dissociation rate is extremely slow. Separation of the bimolecular rate constant into contributions from k(CN(-)) (the rate constant for CN(-) binding) and from k(HCN) (the rate constant for HCN binding) shows that the former is approximately 90 times greater. These results indicate that cyanide ligation reactions are not instantaneous for this protein, which is important for those attempting to study the ligand-binding equilibria. From the results presented here the authors estimate that the actual equilibrium dissociation constant (K(D)) for cyanide binding to this G. dibranchiata monomer methemoglobin has a numerical upper limit that is at least 2 orders of magnitude smaller than the value reported before the kinetic results were known.
3. Acid-base equilibria in ethylene glycol--III: selection of titration conditions in ethylene glycol medium, protolysis constants of alkaloids in ethylene glycol and its mixtures.
PubMed
Zikolov, P; Zikolova, T; Budevsky, O
1976-08-01
Theoretical titration curves are used for the selection of appropriate conditions for the acid-base volumetric determination of weak bases in ethylene glycol medium. The theoretical curves for titration of some alkaloids are deduced graphically on the basis of the logarithmic concentration diagram. The acid-base constants used for the construction of the theoretical titration curves were determined by potentiometric titration in a cell without liquid junction, equipped with a glass and a silver-silver chloride electrode. It is shown that the alkaloids investigated can be determined accurately by visual or potentiometric titration. The same approach for the selection of titration conditions seems to be applicable to other non-aqueous amphiprotic solvents.
4. Determination of the Equilibrium Constants of a Weak Acid: An Experiment for Analytical or Physical Chemistry
Bonham, Russell A.
1998-05-01
A simple experiment, utilizing readily available equipment and chemicals, is described. It allows students to explore the concepts of chemical equilibria, nonideal behavior of aqueous solutions, least squares with adjustment of nonlinear model parameters, and errors. The relationship between the pH of a solution of known initial concentration and volume of a weak acid as it is titrated by known volumes of a monohydroxy strong base is developed rigorously assuming ideal behavior. A distinctive feature of this work is a method that avoids dealing with the problems presented by equations with multiple roots. The volume of base added is calculated in terms of a known value of the pH and the equilibrium constants. The algebraic effort involved is nearly the same as the alternative of deriving a master equation for solving for the hydrogen ion concentration or activity and results in a more efficient computational algorithm. This approach offers two advantages over the use of computer software to solve directly for the hydrogen ion concentration. First, it avoids a potentially lengthy iterative procedure encountered when the polynomial exceeds third order in the hydrogen ion concentration; and second, it provides a means of obtaining results with a hand calculator that can prove useful in checking computer code. The approach is limited to weak solutions to avoid dealing with molalities and to insure that the Debye-Hückel limiting law is applicable. The nonlinear least squares algorithm Nonlinear Fit, found in the computational mathematics library Mathematica, is utilized to fit the measured volume of added base to the calculated value as a function of the measured pH subject to variation of all the equilibrium constants as parameters (including Kw). The experiment emphasizes both data collection and data analysis aspects of the problem. Data for the titration of phosphorous acid, H3PO3, by NaOH are used to illustrate the approach. Fits of the data without corrections
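The central trick described above — computing the volume of added base from the measured pH instead of solving a high-order polynomial for the hydrogen ion concentration — can be illustrated for the simpler monoprotic case (the paper treats diprotic H3PO3; ideal behavior, a hypothetical acid, and my own variable names are assumed here):

```python
def base_volume(pH: float, Ka: float, Ca: float, V0: float,
                Cb: float, Kw: float = 1.0e-14) -> float:
    """Volume (L) of strong monoprotic base (conc. Cb) needed to bring
    V0 L of a weak monoprotic acid (conc. Ca, constant Ka) to a given pH.
    From the charge balance:  Vb = V0*(Ca*alpha - d)/(Cb + d),
    where alpha = Ka/([H+] + Ka) and d = [H+] - [OH-]."""
    h = 10.0 ** (-pH)
    d = h - Kw / h
    alpha = Ka / (h + Ka)
    return V0 * (Ca * alpha - d) / (Cb + d)

# 25 mL of a hypothetical 0.1 M acid (Ka = 1.8e-5) titrated with 0.1 M NaOH
vb_half = base_volume(pH=4.74, Ka=1.8e-5, Ca=0.1, V0=0.025, Cb=0.1)
```

Near pH = pKa the computed volume is close to half the equivalence volume (about 12.4 of 25 mL), as expected. Fitting measured (pH, Vb) pairs to this forward model is what allows the equilibrium constants to be adjusted by nonlinear least squares without ever solving for [H+].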
5. The estimation of affinity constants for the binding of model peptides to DNA by equilibrium dialysis.
PubMed Central
Standke, K C; Brunnert, H
1975-01-01
The binding of lysine model peptides of the type Lys-X-Lys, Lys-X-X-Lys and Lys-X-X-X-Lys (X = different aliphatic and aromatic amino acids) has been studied by equilibrium dialysis. It was shown that the strong electrostatic binding forces generated by protonated amino groups of lysine can be distinguished from the weak forces stemming from neutral and aromatic spacer amino acids. The overall binding strength of the lysine model peptides is modified by these weak binding forces and the apparent binding constants are influenced more by the hydrophobic character of the spacer amino acid side chains than by the chainlength of the spacers. PMID:1187347
6. Equilibrium constant for the reversible reaction ClO + O2 ⇌ ClO-O2
NASA Technical Reports Server (NTRS)
Demore, W. B.
1990-01-01
It is shown here that the equilibrium constant for the reversible reaction ClO + O2 ⇌ ClO-O2 at stratospheric temperatures must be at least three orders of magnitude less than the current NASA upper limit. The new upper limit greatly diminishes the possible role of ClO-O2 in the chlorine-photosensitized decomposition of O3. Nevertheless, it does not preclude the possibility that it is a significant reservoir of ClO, as well as a possible reactant, at low temperatures characteristic of polar vortices.
7. Temperature dependency of the equilibrium constant for the formation of carbamate from diethanolamine
SciTech Connect
Aroua, M.K.; Amor, A.B.; Haji-Sulaiman, M.Z.
1997-07-01
Aqueous alkanolamine solutions are frequently used to remove acidic components such as H(2)S and CO(2) from process gas streams. The equilibrium constant for the formation of diethanolamine carbamate was determined experimentally at (303, 313, 323, and 331) K for ionic strengths up to 1.8 mol/dm(3), the inert electrolyte being NaClO(4). A linear relationship was found to hold between log K and I(0.5). The thermodynamic constant has been determined and expressed by the equation log K(1) = -5.12 + (1.781 × 10(3) K)/T.
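A correlation of the form log10 K = a + b/T carries the reaction enthalpy in its slope: by the van't Hoff equation, ΔH° = −R ln(10) b. A sketch applying this to the reported slope b = 1.781 × 10³ K (a second-law estimate, with ΔH° assumed temperature independent; the derivation is mine, not from the abstract):

```python
import math

R = 8.314  # J mol^-1 K^-1

def enthalpy_from_slope(b_kelvin: float) -> float:
    """For log10 K = a + b/T, d(ln K)/d(1/T) = ln(10)*b = -dH/R,
    hence dH = -R * ln(10) * b (van't Hoff, dH taken as constant)."""
    return -R * math.log(10.0) * b_kelvin

dH_carbamate = enthalpy_from_slope(1.781e3)   # J/mol
```

This gives roughly −34 kJ/mol: diethanolamine carbamate formation is moderately exothermic, consistent with the decrease of K as temperature rises.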
8. Surface-dependent chemical equilibrium constants and capacitances for bare and 3-cyanopropyldimethylchlorosilane coated silica nanochannels.
PubMed
Andersen, Mathias Bækbo; Frey, Jared; Pennathur, Sumita; Bruus, Henrik
2011-01-01
We present a combined theoretical and experimental analysis of the solid-liquid interface of fused-silica nanofabricated channels with and without a hydrophilic 3-cyanopropyldimethylchlorosilane (cyanosilane) coating. We develop a model that relaxes the assumption that the surface parameters C(1), C(2), and pK(+) are constant and independent of surface composition. Our theoretical model consists of three parts: (i) a chemical equilibrium model of the bare or coated wall, (ii) a chemical equilibrium model of the buffered bulk electrolyte, and (iii) a self-consistent Gouy-Chapman-Stern triple-layer model of the electrochemical double layer coupling these two equilibrium models. To validate our model, we used both pH-sensitive dye-based capillary filling experiments as well as electro-osmotic current-monitoring measurements. Using our model we predict the dependence of ζ potential, surface charge density, and capillary filling length ratio on ionic strength for different surface compositions, which can be difficult to achieve otherwise.
9. Determination of equilibrium constants for the reaction between acetone and HO2 using infrared kinetic spectroscopy.
PubMed
Grieman, Fred J; Noell, Aaron C; Davis-Van Atta, Casey; Okumura, Mitchio; Sander, Stanley P
2011-09-29
The reaction between the hydroperoxy radical, HO(2), and acetone may play an important role in acetone removal and the budget of HO(x) radicals in the upper troposphere. We measured the equilibrium constants of this reaction over the temperature range of 215-272 K at an overall pressure of 100 Torr using a flow tube apparatus and laser flash photolysis to produce HO(2). The HO(2) concentration was monitored as a function of time by near-IR diode laser wavelength modulation spectroscopy. The resulting [HO(2)] decay curves in the presence of acetone are characterized by an immediate decrease in initial [HO(2)] followed by subsequent decay. These curves are interpreted as a rapid (<100 μs) equilibrium reaction between acetone and the HO(2) radical that occurs on time scales faster than the time resolution of the apparatus, followed by subsequent reactions. This separation of time scales between the initial equilibrium and ensuing reactions enabled the determination of the equilibrium constant with values ranging from 4.0 × 10(-16) to 7.7 × 10(-18) cm(3) molecule(-1) for T = 215-272 K. Thermodynamic parameters for the reaction determined from a second-law fit of our van't Hoff plot were Δ(r)H°(245) = -35.4 ± 2.0 kJ mol(-1) and Δ(r)S°(245) = -88.2 ± 8.5 J mol(-1) K(-1). Recent ab initio calculations predict that the reaction proceeds through a prereactive hydrogen-bonded molecular complex (HO(2)-acetone) with subsequent isomerization to a hydroxy-peroxy radical, 2-hydroxyisopropylperoxy (2-HIPP). The calculations differ greatly in the energetics of the complex and the peroxy radical, as well as the transition state for isomerization, leading to significant differences in their predictions of the extent of this reaction at tropospheric temperatures. The current results are consistent with equilibrium formation of the hydrogen-bonded molecular complex on a short time scale (100 μs). Formation of the hydrogen-bonded complex will have a negligible impact on the
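As a consistency check, the second-law parameters quoted above reproduce the reported range of equilibrium constants once the standard-state value is converted to cm³ molecule⁻¹ units. A sketch under my own assumptions (a 1-bar standard state for the fit parameters, and constant ΔH°, ΔS° across the range — neither is stated in the abstract):

```python
import math

R  = 8.314           # J mol^-1 K^-1
KB = 1.380649e-23    # J K^-1, Boltzmann constant
P0 = 1.0e5           # Pa, assumed 1-bar standard state

DH = -35.4e3         # J mol^-1, second-law fit near 245 K (from the abstract)
DS = -88.2           # J mol^-1 K^-1

def K_cm3_per_molecule(T: float) -> float:
    """Dimensionless Kp from exp(-(dH - T*dS)/(R*T)), converted to a
    concentration-based constant via Kc = Kp * kB*T / p0 (m^3 -> cm^3)."""
    Kp = math.exp(-(DH - T * DS) / (R * T))
    return Kp * KB * T / P0 * 1.0e6

K_215 = K_cm3_per_molecule(215.0)
K_272 = K_cm3_per_molecule(272.0)
```

The sketch gives roughly 3 × 10⁻¹⁶ cm³ molecule⁻¹ at 215 K and 6 × 10⁻¹⁸ at 272 K, within about a factor of two of the reported endpoint values — reasonable agreement for a constant-ΔH approximation.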
11. Revealing model dependencies in "Assessing the RAFT equilibrium constant via model systems: an EPR study".
PubMed
Junkers, Thomas; Barner-Kowollik, Christopher; Coote, Michelle L
2011-12-01
In a recent article (W. Meiser, M. Buback, Assessing the RAFT Equilibrium Constant via Model Systems: An EPR Study, Macromol. Rapid Commun. 2011, 18, 1490-1494), it is claimed that evidence is found that unequivocally proves that quantum mechanical calculations assessing the equilibrium constant and fragmentation rate coefficients in dithiobenzoate-mediated reversible addition-fragmentation chain transfer (RAFT) systems are beset with a considerable uncertainty. In the present work, we show that these claims made by Meiser and Buback are beset with a model dependency, as a critical key parameter in their data analysis - the addition rate coefficient of the radicals attacking the C=S double bond in the dithiobenzoate - induces a model insensitivity into the data analysis. Contrary to the claims made by Meiser and Buback, their experimental results can be brought into agreement with the quantum chemical calculations if a lower addition rate coefficient of cyanoisopropyl radicals (CIP) to the CIP dithiobenzoate (CPDB) is assumed. To resolve the model dependency, the addition rate coefficient of CIP radicals to CPDB needs to be determined as a matter of priority.
12. Water dimers in the atmosphere III: equilibrium constant from a flexible potential.
PubMed
Scribano, Yohann; Goldman, Nir; Saykally, R J; Leforestier, Claude
2006-04-27
We present new results for the water dimer equilibrium constant K(p)(T) in the range 190-390 K, using a flexible potential energy surface fitted to spectroscopic data. The increased numerical complexity due to explicit consideration of the monomer vibrations is handled via an adiabatic (6 + 6)d decoupling between intra- and intermolecular modes. The convergence of the canonical partition function of the dimer is ensured by computing all energy levels up to dissociation for total angular momentum values J = 0-5 and using an extrapolation scheme to higher values. The newly calculated values for K(p)(T) are in very good agreement with available experimental data at room temperature. At higher temperatures, an analysis of the convergence of the partition function reveals that quasi-bound states are likely to contribute to the equilibrium constant. Additional thermodynamical quantities (ΔG, ΔH, ΔS, and C(p)) have also been determined and fit to quadratic expressions a + bT + cT(2).
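Once a thermodynamic quantity such as ΔG(T) has been fit to a quadratic a + bT + cT(2), the equilibrium constant follows from K(p) = exp(-ΔG/RT). A sketch with illustrative coefficients (not the paper's fitted values):

```python
import math

R = 8.314e-3  # kJ mol^-1 K^-1

def Kp_from_quadratic_dG(T, a, b, c):
    """K_p(T) from a quadratic fit dG(T) = a + b*T + c*T**2 (kJ/mol),
    via K_p = exp(-dG / (R*T))."""
    dG = a + b * T + c * T * T
    return math.exp(-dG / (R * T))
```

For a dimerization with ΔH < 0 (e.g. a = -15 kJ/mol enthalpy-like term, b = 0.05 kJ/mol/K entropy-like term, c = 0), K(p) decreases with temperature, as expected for water dimer formation.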
13. "Assessing the RAFT equilibrium constant via model systems: an EPR study"--response to a comment.
PubMed
Meiser, Wibke; Buback, Michael
2012-08-14
We have presented an EPR-based approach for deducing the RAFT equilibrium constant, K(eq), of a dithiobenzoate-mediated system [Meiser, W. and Buback M. Macromol. Rapid Commun. 2011, 32, 1490]. Our value is four orders of magnitude below the K(eq) from ab initio calculations for the identical monomer-free system. Junkers et al. [Macromol. Rapid Commun. 2011, 32, 1891] claim that our EPR approach would be model dependent and our data could be equally well fitted by assuming slow addition of radicals to the RAFT agent and slow fragmentation of the so-obtained intermediate radical as well as high cross-termination rate. By identification of all side products, our EPR-based method is shown to be model independent and to provide reliable K(eq) values, which demonstrate the validity of the intermediate radical termination model.
14. Assessing the RAFT equilibrium constant via model systems: an EPR study.
PubMed
Meiser, Wibke; Buback, Michael
2011-09-15
Reversible addition-fragmentation chain transfer (RAFT) equilibrium constants, K(eq), for the model system cyano-iso-propyl dithiobenzoate (CPDB) - cyano-iso-propyl radical (CIP) have been deduced via electron paramagnetic resonance (EPR) spectroscopy. The CIP species is produced by thermal decomposition of azobis-iso-butyronitrile (AIBN). In solution of toluene at 70 °C, K(eq) has been determined to be (9 ± 1) L · mol(-1). Measurement of K(eq) = k(ad)/k(β) between 60 and 100 °C yields ΔE(a) = (-28 ± 4) kJ · mol(-1) as the difference in the activation energies of k(ad) and k(β). The data measured on the model system are indicative of fast fragmentation of the intermediate radical produced by addition of CIP to CPDB.
15. Rough-to-smooth transition of an equilibrium neutral constant stress layer
NASA Technical Reports Server (NTRS)
Logan, E., Jr.; Fichtl, G. H.
1975-01-01
The purpose of this research on the rough-to-smooth transition of an equilibrium neutral constant-stress layer is to develop a model for low-level atmospheric flow over terrains of abruptly changing roughness, such as those occurring near the windward end of a landing strip, and to use the model to derive functions which define the extent of the region affected by the roughness change and allow adequate prediction of wind and shear stress profiles at all points within the region. A model consisting of two bounding logarithmic layers and an intermediate velocity defect layer is assumed, and dimensionless velocity and stress distribution functions which meet all boundary and matching conditions are hypothesized. The functions are used in an asymptotic form of the equation of motion to derive a relation which governs the growth of the internal boundary layer. The growth relation is used to predict variation of surface shear stress.
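The logarithmic building blocks of such a two-layer model can be sketched as below. Matching the upstream and downstream log profiles at the internal-boundary-layer height is only a crude illustration of the idea, not the paper's full velocity-defect formulation; the function names and numerical values are invented for the example:

```python
import math

KAPPA = 0.4  # von Karman constant

def log_wind_profile(z, u_star, z0):
    """Mean wind speed u(z) = (u*/kappa) * ln(z/z0) in a neutral
    constant-stress surface layer over roughness length z0."""
    return (u_star / KAPPA) * math.log(z / z0)

def downstream_u_star(u_star_up, z0_rough, z0_smooth, delta):
    """Friction velocity downstream of a rough-to-smooth change,
    from matching the two logarithmic profiles at the
    internal-boundary-layer height delta (a crude sketch)."""
    return u_star_up * math.log(delta / z0_rough) / math.log(delta / z0_smooth)
```

With the IBL at delta = 50 m, a rough-to-smooth change (z0 from 0.1 m to 0.001 m) reduces the surface friction velocity, consistent with the expected drop in surface shear stress downstream.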
16. Calculation of cooperativity and equilibrium constants of ligands binding to G-quadruplex DNA in solution.
PubMed
Kudrev, A G
2013-11-15
The equilibrium model of a ligand binding to a DNA oligomer has been considered as a process of small-molecule adsorption onto a lattice of multiple binding sites. An experimental example has been used to verify the assertion that, during saturation of the macromolecule by a ligand, one should expect an effect of cooperativity due to changes in DNA conformation or the mutual influence between bound ligands. Such a phenomenon cannot be entirely described by the classical stepwise complex formation model. To evaluate ligand binding affinity and the cooperativity of ligand-oligomer complex formation, a statistical approach has been proposed. This new computational approach is used to re-examine previously studied ligand binding to DNA quadruplex targets with multiple binding sites. The intrinsic equilibrium constants K1-3 of the meso-tetrakis-(N-methyl-4-pyridyl)-porphyrin (TMPyP4) binding with the [d(T4G4)]4 and with the [AG3(T2AG3)3] quadruplexes and the correction for the mutual influence between bound ligands (cooperativity parameters ω) were determined from the Job plots based upon a nonlinear least-squares fitting procedure. The re-examination of experimental curves reveals that the equilibrium is affected by the positive cooperative (ω>1) binding of the TMPyP4 ligand with tetramolecular [d(T4G4)]4. However, for an intramolecular antiparallel-parallel hybrid structure [AG3(T2AG3)3], weak anti-cooperativity of TMPyP4 accommodation (ω<1) onto two of three nonidentical sites was detected. PMID:24148442
17. The universal statistical distributions of the affinity, equilibrium constants, kinetics and specificity in biomolecular recognition.
PubMed
Zheng, Xiliang; Wang, Jin
2015-04-01
We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring the different ligands binding with a particular receptor. The results of the analytical studies are confirmed by the microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of the binding follow a log-normal distribution around the mean and a power law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding and the optimization of which becomes the maximization of the ratio of the free energy gap between the native state and the average of non-native states versus the roughness measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or equivalently specific ligand binding with different receptors. The elucidation of distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics. PMID:25885453
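The reported pairing of a Gaussian affinity distribution with a log-normal distribution of equilibrium constants follows directly from K = exp(-ΔG/RT): if ΔG is Gaussian, ln K is Gaussian. A quick numerical check with illustrative parameters (mean -20 kJ/mol, spread 5 kJ/mol; not values from the paper):

```python
import math
import random

random.seed(0)
RT = 2.5  # kJ/mol, roughly room temperature

# Gaussian binding free energies -> log-normal equilibrium constants
dGs = [random.gauss(-20.0, 5.0) for _ in range(100000)]
lnKs = [-dG / RT for dG in dGs]  # ln K = -dG/RT is then Gaussian

mean_lnK = sum(lnKs) / len(lnKs)            # expect -(-20)/2.5 = 8
var_lnK = sum((x - mean_lnK) ** 2 for x in lnKs) / len(lnKs)  # expect (5/2.5)^2 = 4
```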
20. Modulation and Salt-Induced Reverse Modulation of the Excited-State Proton-Transfer Process of Lysozymized Pyranine: The Contrasting Scenario of the Ground-State Acid-Base Equilibrium of the Photoacid.
PubMed
Das, Ishita; Panja, Sudipta; Halder, Mintu
2016-07-28
Here we report on the excited-state behavior in terms of the excited-state proton-transfer (ESPT) reaction as well as the ground-state acid-base property of pyranine [8-hydroxypyrene-1,3,6-trisulfonate (HPTS)] in the presence of an enzymatic protein, human lysozyme (LYZ). HPTS forms a 1:1 ground-state complex with LYZ having the binding constant KBH = (1.4 ± 0.05) × 10(4) M(-1), and its acid-base equilibrium gets shifted toward the deprotonated conjugate base (RO(-)), resulting in a downward shift in pKa. This suggests that the conjugate base (RO(-)) is thermodynamically more favored over the protonated (ROH) species inside the lysozyme matrix, resulting in an increased population of the deprotonated form. However, for the release of the proton from the excited photoacid, interestingly, the rate of proton transfer gets slowed down due to the "slow" acceptor biological water molecules present in the immediate vicinity of the fluorophore binding region inside the protein. The observed ESPT time constants, ∼140 and ∼750 ps, of protein-bound pyranine are slower than in bulk aqueous media (∼100 ps, single exponential). The molecular docking study predicts that the most probable binding location of the fluorophore is in a region near to the active site of the protein. Here we also report on the effect of external electrolyte (NaCl) on the reverse modulation of ground-state prototropy as well as the ESPT process of the protein-bound pyranine. It is found that there is a dominant role of electrostatic forces in the HPTS-LYZ interaction process, because an increase in ionic strength by the addition of NaCl dislodges the fluorophore from the protein pocket to the bulk again. The study shows a considerably different perspective of the perturbation offered by the model macromolecular host used, unlike the available literature reports on the concerned photoacid. PMID:27355857
2. Lysozyme adsorption in pH-responsive hydrogel thin-films: the non-trivial role of acid-base equilibrium.
PubMed
Narambuena, Claudio F; Longo, Gabriel S; Szleifer, Igal
2015-09-01
We develop and apply a molecular theory to study the adsorption of lysozyme on weak polyacid hydrogel films. The theory explicitly accounts for the conformation of the network, the structure of the proteins, the size and shape of all the molecular species, their interactions as well as the chemical equilibrium of each titratable unit of both the protein and the polymer network. The driving forces for adsorption are the electrostatic attractions between the negatively charged network and the positively charged protein. The adsorption is a non-monotonic function of the solution pH, with a maximum in the region between pH 8 and 9 depending on the salt concentration of the solution. The non-monotonic adsorption is the result of increasing negative charge of the network with pH, while the positive charge of the protein decreases. At low pH the network is roughly electroneutral, while at sufficiently high pH the protein is negatively charged. Upon adsorption, the acid-base equilibrium of the different amino acids of the protein shifts in a nontrivial fashion that depends critically on the particular kind of residue and solution composition. Thus, the proteins regulate their charge and enhance adsorption under a wide range of conditions. In particular, adsorption is predicted above the protein isoelectric point where both the solution lysozyme and the polymer network are negatively charged. This behavior occurs because the pH in the interior of the gel is significantly lower than that in the bulk solution and it is also regulated by the adsorption of the protein in order to optimize protein-gel interactions. Under high pH conditions we predict that the protein changes its charge from negative in the solution to positive within the gel. The change occurs within a few nanometers at the interface of the hydrogel film. Our predictions show the non-trivial interplay between acid-base equilibrium, physical interactions and molecular organization under nanoconfined conditions.
4. Discovering a Change in Equilibrium Constant with Change in Ionic Strength: An Empirical Laboratory Experiment for General Chemistry
Stolzberg, Richard J.
1999-05-01
Students are challenged to investigate the hypothesis that an equilibrium constant, Kc, measured as a product and quotient of molar concentrations, is constant at constant temperature. Spectrophotometric measurements of absorbance of a solution of Fe3+(aq) and SCN-(aq) treated with different amounts of KNO3 are made to determine Kc for the formation of FeSCN2+(aq). Students observe a regular decrease in the value of Kc as the concentration of added KNO3 is increased.
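The decrease in Kc with added KNO3 is what the Debye-Hückel limiting law predicts for Fe3+ + SCN- ⇌ FeSCN2+, because the squared reactant charges (9 + 1) outweigh the squared product charge (4). A sketch, valid only at low ionic strength and assuming water at 25 °C (the K value below is illustrative):

```python
import math

A_DH = 0.509  # Debye-Hueckel constant for water at 25 C (limiting law)

def log10_gamma(z, I):
    """Debye-Hueckel limiting-law activity coefficient (log10)
    for an ion of charge z at ionic strength I (mol/L)."""
    return -A_DH * z * z * math.sqrt(I)

def Kc_at_ionic_strength(K_thermo, I):
    """Concentration quotient Kc for Fe3+ + SCN- <=> FeSCN2+ at ionic
    strength I, from the thermodynamic (activity-based) constant:
    Kc = K * gamma(Fe3+) * gamma(SCN-) / gamma(FeSCN2+)."""
    lg = (math.log10(K_thermo)
          + log10_gamma(3, I) + log10_gamma(-1, I)   # reactants
          - log10_gamma(2, I))                       # product
    return 10 ** lg
```

The net exponent is -(9 + 1 - 4)·A·√I = -6·A·√I, so Kc falls steadily as KNO3 raises the ionic strength, matching the trend the students observe.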
5. Partition functions and equilibrium constants for diatomic molecules and atoms of astrophysical interest
Barklem, P. S.; Collet, R.
2016-04-01
Partition functions and dissociation equilibrium constants are presented for 291 diatomic molecules for temperatures in the range from near absolute zero to 10 000 K, thus providing data for many diatomic molecules of astrophysical interest at low temperature. The calculations are based on molecular spectroscopic data from the book of Huber & Herzberg (1979, Constants of Diatomic Molecules) with significant improvements from the literature, especially updated data for ground states of many of the most important molecules by Irikura (2007, J. Phys. Chem. Ref. Data, 36, 389). Dissociation energies are collated from compilations of experimental and theoretical values. Partition functions for 284 species of atoms for all elements from H to U are also presented based on data collected at NIST. The calculated data are expected to be useful for modelling a range of low density astrophysical environments, especially star-forming regions, protoplanetary disks, the interstellar medium, and planetary and cool stellar atmospheres. The input data, which will be made available electronically, also provides a possible foundation for future improvement by the community. Full Tables 1-8 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/588/A96
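A textbook rigid-rotor/harmonic-oscillator sketch shows the kind of quantity being tabulated; the paper itself uses full spectroscopic level data rather than this approximation, and such internal partition functions then enter the dissociation equilibrium constant via the ratio q(A)·q(B)/q(AB) weighted by exp(-D0/kT). The characteristic temperatures below are roughly those of CO:

```python
import math

def q_internal(T, theta_rot, theta_vib, sigma=1, g_el=1):
    """Rigid-rotor / harmonic-oscillator internal partition function of a
    diatomic (high-temperature rotor limit), with energies measured from
    the vibrational ground state. sigma is the symmetry number."""
    q_rot = T / (sigma * theta_rot)
    q_vib = 1.0 / (1.0 - math.exp(-theta_vib / T))
    return g_el * q_rot * q_vib

# Roughly CO: theta_rot ~ 2.78 K, theta_vib ~ 3122 K
q_300 = q_internal(300.0, 2.78, 3122.0)
```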
6. Non-Condon equilibrium Fermi's golden rule electronic transition rate constants via the linearized semiclassical method
Sun, Xiang; Geva, Eitan
2016-06-01
In this paper, we test the accuracy of the linearized semiclassical (LSC) expression for the equilibrium Fermi's golden rule rate constant for electronic transitions in the presence of non-Condon effects. We do so by performing a comparison with the exact quantum-mechanical result for a model where the donor and acceptor potential energy surfaces are parabolic and identical except for shifts in the equilibrium energy and geometry, and the coupling between them is linear in the nuclear coordinates. Since non-Condon effects may or may not give rise to conical intersections, both possibilities are examined by considering: (1) A modified Garg-Onuchic-Ambegaokar model for charge transfer in the condensed phase, where the donor-acceptor coupling is linear in the primary mode coordinate, and for which non-Condon effects do not give rise to a conical intersection; (2) the linear vibronic coupling model for electronic transitions in gas phase molecules, where non-Condon effects give rise to conical intersections. We also present a comprehensive comparison between the linearized semiclassical expression and a progression of more approximate expressions. The comparison is performed over a wide range of frictions and temperatures for model (1) and over a wide range of temperatures for model (2). The linearized semiclassical method is found to reproduce the exact quantum-mechanical result remarkably well for both models over the entire range of parameters under consideration. In contrast, more approximate expressions are observed to deviate considerably from the exact result in some regions of parameter space.
8. SARS CoV main proteinase: The monomer-dimer equilibrium dissociation constant.
PubMed
Graziano, Vito; McGrath, William J; Yang, Lin; Mangel, Walter F
2006-12-12
The SARS coronavirus main proteinase (SARS CoV main proteinase) is required for the replication of the severe acute respiratory syndrome coronavirus (SARS CoV), the virus that causes SARS. One function of the enzyme is to process viral polyproteins. The active form of the SARS CoV main proteinase is a homodimer. In the literature, estimates of the monomer-dimer equilibrium dissociation constant, KD, have varied more than 65,000-fold, from <1 nM to more than 200 μM. Because of these discrepancies and because compounds that interfere with activation of the enzyme by dimerization may be potential antiviral agents, we investigated the monomer-dimer equilibrium by three different techniques: small-angle X-ray scattering, chemical cross-linking, and enzyme kinetics. Analysis of small-angle X-ray scattering data from a series of measurements at different SARS CoV main proteinase concentrations yielded KD values of 5.8 ± 0.8 μM (obtained from the entire scattering curve), 6.5 ± 2.2 μM (obtained from the radii of gyration), and 6.8 ± 1.5 μM (obtained from the forward scattering). The KD from chemical cross-linking was 12.7 ± 1.1 μM, and from enzyme kinetics, it was 5.2 ± 0.4 μM. While each of these three techniques can present different, potential limitations, they all yielded similar KD values.
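For a monomer-dimer equilibrium M + M ⇌ M2 with KD = [M]²/[M2], the fraction of chains in the dimer at a given total chain concentration follows from a quadratic in [M]. A small sketch; a convenient check is that at C_total = KD exactly half the chains are dimeric:

```python
import math

def fraction_dimeric(C_total, KD):
    """Fraction of protein chains in the dimer for M + M <=> M2 with
    KD = [M]^2 / [M2]; C_total is the total chain concentration.
    Mass balance C = [M] + 2[M2] gives 2[M]^2 + KD*[M] - KD*C = 0."""
    M = (-KD + math.sqrt(KD * KD + 8.0 * KD * C_total)) / 4.0
    D = M * M / KD
    return 2.0 * D / C_total
```

With a KD near 6 μM, the enzyme is half dimeric at a total chain concentration of 6 μM, mostly monomeric well below it, and mostly dimeric well above it.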
9. SPECIES - EVALUATING THERMODYNAMIC PROPERTIES, TRANSPORT PROPERTIES & EQUILIBRIUM CONSTANTS OF AN 11-SPECIES AIR MODEL
NASA Technical Reports Server (NTRS)
Thompson, R. A.
1994-01-01
Accurate numerical prediction of high-temperature, chemically reacting flowfields requires a knowledge of the physical properties and reaction kinetics for the species involved in the reacting gas mixture. Assuming an 11-species air model at temperatures below 30,000 degrees Kelvin, SPECIES (Computer Codes for the Evaluation of Thermodynamic Properties, Transport Properties, and Equilibrium Constants of an 11-Species Air Model) computes values for the species thermodynamic and transport properties, diffusion coefficients and collision cross sections for any combination of the eleven species, and reaction rates for the twenty reactions normally occurring. The species represented in the model are diatomic nitrogen, diatomic oxygen, atomic nitrogen, atomic oxygen, nitric oxide, ionized nitric oxide, the free electron, ionized atomic nitrogen, ionized atomic oxygen, ionized diatomic nitrogen, and ionized diatomic oxygen. Sixteen subroutines compute the following properties for both a single species, interaction pair, or reaction, and an array of all species, pairs, or reactions: species specific heat and static enthalpy, species viscosity, species frozen thermal conductivity, diffusion coefficient, collision cross section (OMEGA 1,1), collision cross section (OMEGA 2,2), collision cross section ratio, and equilibrium constant. The program uses least squares polynomial curve-fits of the most accurate data believed available to provide the requested values more quickly than is possible with table look-up methods. The subroutines for computing transport coefficients and collision cross sections use additional code to correct for any electron pressure when working with ionic species. SPECIES was developed on a SUN 3/280 computer running the SunOS 3.5 operating system. It is written in standard FORTRAN 77 for use on any machine, and requires roughly 92K memory. The standard distribution medium for SPECIES is a 5.25 inch 360K MS-DOS format diskette. The contents of the
10. A rigorous multiple independent binding site model for determining cell-based equilibrium dissociation constants.
PubMed
Drake, Andrew W; Klakamp, Scott L
2007-01-10
A new 4-parameter nonlinear equation based on the standard multiple independent binding site model (MIBS) is presented for fitting cell-based ligand titration data in order to calculate the ligand/cell receptor equilibrium dissociation constant and the number of receptors/cell. The most commonly used linear (Scatchard Plot) or nonlinear 2-parameter model (a single binding site model found in commercial programs like Prism(R)) used for analysis of ligand/receptor binding data assumes only the K(D) influences the shape of the titration curve. We demonstrate using simulated data sets that, depending upon the cell surface receptor expression level, the number of cells titrated, and the magnitude of the K(D) being measured, this assumption of always being under K(D)-controlled conditions can be erroneous and can lead to unreliable estimates for the binding parameters. We also compare and contrast the fitting of simulated data sets to the commonly used cell-based binding equation versus our more rigorous 4-parameter nonlinear MIBS model. It is shown through these simulations that the new 4-parameter MIBS model, when used for cell-based titrations under optimal conditions, yields highly accurate estimates of all binding parameters and hence should be the preferred model to fit cell-based experimental nonlinear titration data. PMID:17141800
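The failure of the "K(D)-controlled" assumption discussed above is the usual ligand-depletion problem: when the receptor concentration is not negligible relative to K(D), the exact 1:1 bound concentration is the root of a quadratic. The standard depletion-corrected form below (not the authors' 4-parameter MIBS equation) illustrates the point:

```python
import math

def bound_complex(R_t, L_t, KD):
    """Exact bound-complex concentration for 1:1 binding with depletion:
    B^2 - B*(R_t + L_t + KD) + R_t*L_t = 0, taking the physical root.
    Reduces to R_t*L_t/(L_t + KD) when ligand is in vast excess."""
    b = R_t + L_t + KD
    return (b - math.sqrt(b * b - 4.0 * R_t * L_t)) / 2.0
```

In the excess-ligand limit the result matches the simple hyperbolic model, but when R_t is comparable to K(D) the titration curve shape is set largely by R_t, and fitting it with a K(D)-only model gives unreliable parameter estimates.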
11. Reversible inhibition of proton release activity and the anesthetic-induced acid-base equilibrium between the 480 and 570 nm forms of bacteriorhodopsin.
PubMed Central
Boucher, F; Taneva, S G; Elouatik, S; Déry, M; Messaoudi, S; Harvey-Girard, E; Beaudoin, N
1996-01-01
In purple membranes treated with general anesthetics, there exists an acid-base equilibrium between two spectral forms of the pigment: bR570 and bR480 (apparent pKa = 7.3). As the purple 570 nm bacteriorhodopsin is reversibly transformed into its red 480 nm form, the proton-pumping capability of the pigment reversibly decreases, as indicated by transient proton-release measurements and by proton-translocation action spectra of mixtures of both spectral forms. This occurs despite full photochemical activity in bR480, which is mostly characterized by a fast deprotonation step and a slow reprotonation step and which, under continuous illumination, bleaches with a yield comparable to that of bR570. This modified photochemical activity has a correlated, specific photoelectrical counterpart: a faster proton-extrusion current and a slower reprotonation current. The relative areas of all photocurrent phases are reduced in bR480, most likely because its photochemistry is accompanied by charge movements over shorter distances than in the native pigment, reflecting a reversible inhibition of the pumping activity. PMID:8789112
12. Using Electrophoretic Mobility Shift Assays to Measure Equilibrium Dissociation Constants: GAL4-p53 Binding DNA as a Model System
ERIC Educational Resources Information Center
Heffler, Michael A.; Walters, Ryan D.; Kugel, Jennifer F.
2012-01-01
An undergraduate biochemistry laboratory experiment is described that will teach students the practical and theoretical considerations for measuring the equilibrium dissociation constant (K[subscript D]) for a protein/DNA interaction using electrophoretic mobility shift assays (EMSAs). An EMSA monitors the migration of DNA through a native gel;…
13. Rate and Equilibrium Constants for an Enzyme Conformational Change during Catalysis by Orotidine 5'-Monophosphate Decarboxylase.
PubMed
Goryanova, Bogdana; Goldman, Lawrence M; Ming, Shonoi; Amyes, Tina L; Gerlt, John A; Richard, John P
2015-07-28
complex between FOMP and the open enzyme, that the tyrosyl phenol group stabilizes the closed form of ScOMPDC by hydrogen bonding to the substrate phosphodianion, and that the phenyl group of Y217 and F217 facilitates formation of the transition state for the rate-limiting conformational change. An analysis of kinetic data for mutant enzyme-catalyzed decarboxylation of OMP and FOMP provides estimates for the rate and equilibrium constants for the conformational change that traps FOMP at the enzyme active site.
14. The 'Densitometric Image Analysis Software' and its application to determine stepwise equilibrium constants from electrophoretic mobility shift assays.
PubMed
van Oeffelen, Liesbeth; Peeters, Eveline; Nguyen Le Minh, Phu; Charlier, Daniël
2014-01-01
Current software applications for densitometric analysis, such as ImageJ, QuantityOne (BioRad) and the Intelligent or Advanced Quantifier (Bio Image), do not make it possible to take the non-linearity of autoradiographic films into account during calibration. As a consequence, quantification of autoradiographs is often regarded as problematic, and phosphorimaging is the preferred alternative. However, the non-linear behaviour of autoradiographs can be described mathematically, so it can be accounted for. The 'Densitometric Image Analysis Software' has therefore been developed; it quantifies electrophoretic bands in autoradiographs, as well as in gels and phosphorimages, while providing optimized band-selection support to the user. Moreover, the program can determine protein-DNA binding constants from Electrophoretic Mobility Shift Assays (EMSAs). For this purpose, the software calculates a chosen stepwise equilibrium constant for each migration lane within the EMSA, and estimates the errors due to non-uniformity of the background noise, smear caused by complex dissociation or denaturation of double-stranded DNA, and technical errors such as pipetting inaccuracies. The program thereby helps the user to optimize experimental parameters and to choose the best lanes for estimating an average equilibrium constant. This process can reduce the inaccuracy of equilibrium constants from the usual factor of 2 to about 20%, which is particularly useful when determining position weight matrices and cooperative binding constants to predict genomic binding sites. The MATLAB source code, platform-dependent software and installation instructions are available via the website http://micr.vub.ac.be.
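For a single protein-DNA shift, the per-lane stepwise constant referred to above reduces to a one-liner in the trace-probe limit, where free protein is approximated by total protein (an assumption; the paper's software handles the general case and the error propagation):

```python
def stepwise_K(theta, protein_free):
    """Stepwise association constant K = [PD] / ([P][D]) for one EMSA lane,
    from the shifted DNA fraction theta and the free protein concentration."""
    if not 0.0 < theta < 1.0:
        raise ValueError("theta must lie strictly between 0 and 1")
    return theta / ((1.0 - theta) * protein_free)
```

At theta = 0.5 the free protein concentration equals 1/K, which is the usual half-shift reading of a titration gel.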
15. Equilibrium Fermi's Golden Rule Charge Transfer Rate Constants in the Condensed Phase: The Linearized Semiclassical Method vs Classical Marcus Theory.
PubMed
Sun, Xiang; Geva, Eitan
2016-05-19
In this article, we present a comprehensive comparison between the linearized semiclassical expression for the equilibrium Fermi's golden rule rate constant and the progression of more approximate expressions that lead to the classical Marcus expression. We do so within the context of the canonical Marcus model, where the donor and acceptor potential energy surface are parabolic and identical except for a shift in both the free energies and equilibrium geometries, and within the Condon region. The comparison is performed for two different spectral densities and over a wide range of frictions and temperatures, thereby providing a clear test for the validity, or lack thereof, of the more approximate expressions. We also comment on the computational cost and scaling associated with numerically calculating the linearized semiclassical expression for the rate constant and its dependence on the spectral density, temperature, and friction.
16. Rate and equilibrium constants for the addition of N-heterocyclic carbenes into benzaldehydes: a remarkable 2-substituent effect.
PubMed
Collett, Christopher J; Massey, Richard S; Taylor, James E; Maguire, Oliver R; O'Donoghue, AnnMarie C; Smith, Andrew D
2015-06-01
Rate and equilibrium constants for the reaction between N-aryl triazolium N-heterocyclic carbene (NHC) precatalysts and substituted benzaldehyde derivatives to form 3-(hydroxybenzyl)azolium adducts under both catalytic and stoichiometric conditions have been measured. Kinetic analysis and reaction profile fitting of both the forward and reverse reactions, plus onwards reaction to the Breslow intermediate, demonstrate the remarkable effect of the benzaldehyde 2-substituent in these reactions and provide insight into the chemoselectivity of cross-benzoin reactions.
18. A METHOD FOR THE MEASUREMENT OF SITE-SPECIFIC TAUTOMERIC AND ZWITTERIONIC MICROSPECIES EQUILIBRIUM CONSTANTS
EPA Science Inventory
We describe a method for the individual measurement of simultaneously occurring, unimolecular, site-specific "microequilibrium" constants as in, for example, prototropic tautomerism and zwitterionic equilibria. Our method represents an elaboration of that of Nygren et al. (Anal. ...
20. Experimental determination of equilibrium constant for the complexing reaction of nitric oxide with hexamminecobalt(II) in aqueous solution.
PubMed
Mao, Yan-Peng; Chen, Hua; Long, Xiang-Li; Xiao, Wen-de; Li, Wei; Yuan, Wei-Kang
2009-02-15
Ammonia solution can be used to scrub NO from flue gases when soluble cobalt(II) salts are added to the aqueous ammonia. The hexamminecobalt(II) ion, Co(NH3)6^2+, formed by ammonia binding to Co^2+, is the active constituent that eliminates NO from the flue gas streams; it combines with NO to form a complex. For the development of this process, equilibrium-constant data for the coordination between NO and Co(NH3)6^2+ over a range of temperatures are very important. Therefore, a series of experiments was performed in a bubble column to investigate the chemical equilibrium. The equilibrium constant was determined over the temperature range 30.0-80.0 degrees C under atmospheric pressure at pH 9.14. All experimental data fit the following equation well: [see text], where the enthalpy and entropy are ΔH° = -(44.559 ± 2.329) kJ mol^-1 and ΔS° = -(109.50 ± 7.126) J K^-1 mol^-1, respectively.
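The fitted enthalpy and entropy determine the equilibrium constant at any temperature through the van 't Hoff relation; a quick sketch evaluating it with the values quoted in this abstract (a standard calculation, not the authors' code):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def vant_hoff_K(T, dH, dS):
    """Equilibrium constant from ln K = -dH/(R*T) + dS/R.
    dH in J/mol, dS in J/(K*mol), T in kelvin."""
    return math.exp(-dH / (R * T) + dS / R)

# values from the abstract: dH = -44.559 kJ/mol, dS = -109.50 J/(K*mol)
K_30C = vant_hoff_K(303.15, -44559.0, -109.50)
K_80C = vant_hoff_K(353.15, -44559.0, -109.50)
```

Because the complexation is exothermic (ΔH° < 0), K falls as temperature rises, consistent with the experimental range the authors cover.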
1. A procedure to find thermodynamic equilibrium constants for CO2 and CH4 adsorption on activated carbon.
PubMed
Trinh, T T; van Erp, T S; Bedeaux, D; Kjelstrup, S; Grande, C A
2015-03-28
Thermodynamic equilibrium for adsorption means that the chemical potential of gas and adsorbed phase are equal. A precise knowledge of the chemical potential is, however, often lacking, because the activity coefficient of the adsorbate is not known. Adsorption isotherms are therefore commonly fitted to ideal models such as the Langmuir, Sips or Henry models. We propose here a new procedure to find the activity coefficient and the equilibrium constant for adsorption which uses the thermodynamic factor. Instead of fitting the data to a model, we calculate the thermodynamic factor and use this to find first the activity coefficient. We show, using published molecular simulation data, how this procedure gives the thermodynamic equilibrium constant and enthalpies of adsorption for CO2(g) on graphite. We also use published experimental data to find similar thermodynamic properties of CO2(g) and of CH4(g) adsorbed on activated carbon. The procedure gives a higher accuracy in the determination of enthalpies of adsorption than ideal models do.
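The central quantity in the proposed procedure, the thermodynamic factor Γ = d ln p / d ln q, can be evaluated numerically from isotherm data; for a Langmuir isotherm it should recover the analytic result Γ = 1/(1 − θ). A sketch with synthetic data (not the paper's CO2/CH4 measurements):

```python
import numpy as np

def thermodynamic_factor(p, q):
    """Gamma = d ln p / d ln q, evaluated pointwise on (pressure, loading) data."""
    return np.gradient(np.log(p), np.log(q))

# synthetic Langmuir isotherm q = qm*K*p/(1 + K*p) with qm = 1, K = 1
p = np.logspace(-2, 1, 400)
q = p / (1.0 + p)
gamma = thermodynamic_factor(p, q)   # should approach 1/(1 - theta) = 1 + p
```

Once Γ is known along the isotherm, the activity coefficient follows by integration, which is the step that lets the authors extract a true thermodynamic equilibrium constant instead of a model-dependent Henry or Langmuir parameter.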
2. A Virtual Mixture Approach to the Study of Multistate Equilibrium: Application to Constant pH Simulation in Explicit Water
PubMed Central
Wu, Xiongwu; Brooks, Bernard R.
2015-01-01
Chemical and thermodynamic equilibrium among multiple states is a fundamental phenomenon in biological systems and has been the focus of many experimental and computational studies. This work presents a simulation method to study such multistate equilibria directly. The method constructs a virtual mixture of multiple states (VMMS) to sample the conformational space of all chemical states simultaneously. The VMMS system consists of multiple subsystems, one for each state. Each subsystem contains a solute and a solvent environment. The solute molecules in all subsystems share the same conformation but have their own solvent environments. Transitions between states are reflected in the changes of their molar fractions. Simulation of a VMMS system allows efficient calculation of the relative free energies of all states, which in turn determine their equilibrium molar fractions. For systems with a large number of state-transition sites, an implicit-site approximation is introduced to minimize the cost of simulation. A direct application of the VMMS method is constant-pH simulation to study protonation equilibrium. Applying the VMMS method to a heptapeptide with 3 ionizable residues, we calculated the pKas of those residues both with all explicit states and with implicit sites and obtained consistent results. For mouse epidermal growth factor, which has 9 ionizable groups, our VMMS simulations with implicit sites produced pKas of all 9 groups, and the results agree qualitatively with NMR measurements. This example demonstrates that the VMMS method can be applied to systems with a large number of ionizable groups, with a computational cost that scales linearly in their number. For one of the most challenging systems in constant-pH calculation, SNase Δ+PHS/V66K, our VMMS simulation shows that it is state-dependent water penetration that causes the large deviation in lysine-66's pKa. PMID:26506245
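The bookkeeping step at the heart of the VMMS idea, converting relative free energies of states into equilibrium molar fractions, is a Boltzmann weighting; a minimal sketch (the kT value and kcal/mol units are assumptions for illustration):

```python
import math

def molar_fractions(free_energies, kT=0.593):
    """Equilibrium molar fractions x_i proportional to exp(-G_i / kT).

    free_energies: relative free energies of the states (kcal/mol assumed);
    kT defaults to ~0.593 kcal/mol, thermal energy near 298 K."""
    weights = [math.exp(-g / kT) for g in free_energies]
    Z = sum(weights)
    return [w / Z for w in weights]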
4. Dynamics of Equilibrium Folding and Unfolding Transitions of Titin Immunoglobulin Domain under Constant Forces
PubMed Central
Chen, Hu; Yuan, Guohua; Winardhi, Ricksen S.; Yao, Mingxi; Popa, Ionel; Fernandez, Julio M.; Yan, Jie
2015-01-01
The mechanical stability of force-bearing proteins is crucial for their functions. However, slow transition rates of complex protein domains have made it challenging to investigate their equilibrium force-dependent structural transitions. Using ultra stable magnetic tweezers, we report the first equilibrium single-molecule force manipulation study of the classic titin I27 immunoglobulin domain. We found that individual I27 in a tandem repeat unfold/fold independently. We obtained the force-dependent free energy difference between unfolded and folded I27 and determined the critical force (∼5.4 pN) at which unfolding and folding have equal probability. We also determined the force-dependent free energy landscape of unfolding/folding transitions based on measurement of the free energy cost of unfolding. In addition to providing insights into the force-dependent structural transitions of titin I27, our results suggest that the conformations of titin immunoglobulin domains can be significantly altered during low force, long duration muscle stretching. PMID:25726700
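The critical force reported here (the force at which unfolding and folding are equally probable) follows from a two-state Boltzmann picture with a force-tilted free energy gap; a sketch with hypothetical ΔG0 and Δx values, not the measured I27 parameters:

```python
import math

kT = 4.11  # thermal energy in pN*nm at ~298 K

def p_unfolded(F, dG0, dx):
    """Two-state unfolding probability under constant force F (pN).

    dG(F) = dG0 - F*dx is the unfolded-minus-folded free energy gap,
    tilted by the work F*dx done over the extension change dx (nm)."""
    return 1.0 / (1.0 + math.exp((dG0 - F * dx) / kT))

# hypothetical numbers: dx = 10 nm extension change, so that the
# equal-probability force dG0/dx comes out at 5.4 pN as in the abstract
dx = 10.0
dG0 = 5.4 * dx
```

At F = dG0/dx the probability is exactly 1/2; below it the folded state dominates, above it the unfolded state does, which is the equilibrium hopping regime the magnetic-tweezers experiment exploits.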
6. Toward Improving Atmospheric Models and Ozone Projections: Laboratory UV Absorption Cross Sections and Equilibrium Constant of ClOOCl
Wilmouth, D. M.; Klobas, J. E.; Anderson, J. G.
2015-12-01
Thirty years have now passed since the discovery of the Antarctic ozone hole, and despite comprehensive international agreements being in place to phase out CFCs and halons, polar ozone losses generally remain severe. The relevant halogen compounds have very long atmospheric lifetimes, which ensures that seasonal polar ozone depletion will likely continue for decades to come. Changes in the climate system can further impact stratospheric ozone abundance through changes in the temperature and water vapor structure of the atmosphere and through the potential initiation of solar radiation management efforts. In many ways, the rate at which climate is changing must now be considered fast relative to the slow removal of halogens from the atmosphere. Photochemical models of Earth's atmosphere play a critical role in understanding and projecting ozone levels, but in order for these models to be accurate, they must be built on a foundation of accurate laboratory data. ClOOCl is the centerpiece of the catalytic cycle that accounts for more than 50% of the chlorine-catalyzed ozone loss in the Arctic and Antarctic stratosphere every spring, and so uncertainties in the ultraviolet cross sections of ClOOCl are particularly important. Additionally, the equilibrium constant of the dimerization reaction of ClO merits further study, as there are important discrepancies between in situ measurements and lab-based models, and the JPL-11 recommended equilibrium constant includes high error bars at atmospherically relevant temperatures (~75% at 200 K). Here we analyze available data for the ClOOCl ultraviolet cross sections and equilibrium constant and present new laboratory spectroscopic results.
7. Computer codes for the evaluation of thermodynamic properties, transport properties, and equilibrium constants of an 11-species air model
NASA Technical Reports Server (NTRS)
Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.
1990-01-01
The computer codes developed provide data to 30,000 K for the thermodynamic and transport properties of individual species, and reaction rates for the prominent reactions occurring in an 11-species nonequilibrium air model. These properties and the reaction-rate data are computed through curve-fit relations that are functions of temperature (and of number density, for the equilibrium constant). The curve fits were made using the most accurate data believed available. A detailed review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1232.
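The curve-fit approach these codes rely on, least-squares polynomials in temperature, can be sketched in a few lines (synthetic data and a made-up trend, not the actual coefficients from NASA RP 1232):

```python
import numpy as np

# synthetic nondimensional specific-heat data on a coarse temperature grid
T = np.linspace(300.0, 3000.0, 40)
cp = 3.5 + 1.2e-4 * T + 2.0e-8 * T**2     # made-up smooth Cp/R trend

coeffs = np.polyfit(T, cp, deg=2)         # least-squares quadratic fit
max_err = np.max(np.abs(np.polyval(coeffs, T) - cp))
```

Evaluating the stored polynomial is far cheaper than a table lookup with interpolation, which is the speed advantage the abstract cites.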
8. An Experimental Evaluation of Programed Instruction as One of Two Review Techniques for Two-Year College Students Concerned with Solving Acid-Base Chemical Equilibrium Problems.
ERIC Educational Resources Information Center
Sharon, Jared Bear
The major purpose of this study was to design and evaluate a programed instructional unit for a first year college chemistry course. The topic of the unit was the categorization and solution of acid-base equilibria problems. The experimental programed instruction text was used by 41 students and the fifth edition of Schaum's Theory and Problems of…
9. Rate Constant in Far-from-Equilibrium States of a Replicating System with Mutually Catalyzing Chemicals
Kamimura, Atsushi; Yukawa, Satoshi; Ito, Nobuyasu
2006-02-01
As a first step to study reaction dynamics in far-from-equilibrium open systems, we propose a stochastic protocell model in which two mutually catalyzing chemicals are replicating depending on the external flow of energy resources J. This model exhibits an Arrhenius type reaction; furthermore, it produces a non-Arrhenius reaction that exhibits a power-law reaction rate with regard to the activation energy. These dependences are explained using the dynamics of J; the asymmetric random walk of J results in the Arrhenius equation and conservation of J results in a power-law dependence. Further, we find that the discreteness of molecules results in the power change. Effects of cell divisions are also discussed in our model.
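For reference, the Arrhenius baseline against which the model's power-law reaction rate is contrasted is simply:

```python
import math

def arrhenius(A, Ea, T, R=8.314):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T)).
    A: pre-exponential factor, Ea in J/mol, T in kelvin."""
    return A * math.exp(-Ea / (R * T))
```

The abstract's point is that conservation of the resource flow J replaces this exponential dependence on activation energy with a power law, a signature of the far-from-equilibrium regime.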
10. Equilibrium theory analysis of liquid chromatography with non-constant velocity.
PubMed
Ortner, Franziska; Joss, Lisa; Mazzotti, Marco
2014-12-19
In liquid chromatography, adsorption and desorption lead to velocity variations within the column if the adsorbing compounds make up a high volumetric ratio of the mobile phase and if there is a substantial difference in the adsorption capacities. An equilibrium theory model for binary systems accounting for these velocity changes is derived and solved analytically for competitive Langmuir isotherms. Characteristic properties of concentration and velocity profiles predicted by the derived model are illustrated by two exemplary systems. Applicability of the model equations for the estimation of isotherm parameters from experimental data is investigated, and accurate results are obtained for systems with one adsorbing and one inert compound, as well as for systems with two adsorbing compounds.
11. The polysiloxane cyclization equilibrium constant: a theoretical focus on small and intermediate size rings.
PubMed
Madeleine-Perdrillat, Claire; Delor-Jestin, Florence; de Sainte Claire, Pascal
2014-01-01
The nonlinear dependence of polysiloxane cyclization constants (log(K(x))) with ring size (log(x)) is explained by a thermodynamic model that treats specific torsional modes of the macromolecular chains with a classical coupled hindered rotor model. Several parameters such as the dependence of the internal rotation kinetic energy matrix with geometry, the effect of potential energy hindrance, anharmonicity, and the couplings between internal rotors were investigated. This behavior arises from the competing effects of local molecular entropy that is mainly driven by the intrinsic transformation of vibrations in small cycles into hindered rotations in larger cycles and configurational entropy.
12. A benchmark study of molecular structure by experimental and theoretical methods: Equilibrium structure of thymine from microwave rotational constants and coupled-cluster computations
Vogt, Natalja; Demaison, Jean; Ksenafontov, Denis N.; Rudolph, Heinz Dieter
2014-11-01
Accurate equilibrium (re) structures of thymine have been determined using two different, and to some extent complementary, techniques. The composite ab initio Born-Oppenheimer structural parameters, re(best ab initio), are obtained from all-electron CCSD(T) and MP2 geometry optimizations using Gaussian basis sets up to quadruple-zeta quality. The second technique is the semi-experimental mixed estimation method, in which internal coordinates are fitted concurrently to equilibrium rotational constants and to geometry parameters obtained from a high level of electronic structure theory. The equilibrium rotational constants are derived from experimental effective ground-state rotational constants and rovibrational corrections based on a quantum-chemical cubic force field. Equilibrium molecular structures accurate to 0.002 Å and 0.2° have been determined. This work is one of only a few accurate equilibrium structure determinations for large molecules. The poor behavior of Kraitchman's equations is discussed.
13. Theory for rates, equilibrium constants, and Brønsted slopes in F1-ATPase single molecule imaging experiments
PubMed Central
Volkán-Kacsó, Sándor; Marcus, Rudolph A.
2015-01-01
A theoretical model of elastically coupled reactions is proposed for single-molecule imaging and rotor-manipulation experiments on F1-ATPase. Stalling experiments are considered in which the rates of individual ligand binding, ligand release, and chemical reaction steps have an exponential dependence on rotor angle. These data are treated in terms of the effect of thermodynamic driving forces on reaction rates, and lead to equations relating rate constants and free energies to the stalling angle. These relations, in turn, are modeled using a formalism originally developed to treat electron and other transfer reactions. During stalling, the free energy profile of the enzymatic steps is altered by a work term due to elastic structural twisting. Using biochemical and single-molecule data, the dependence of the rate constant and equilibrium constant on the stall angle, as well as the Brønsted slope, are predicted and compared with experiment. Reasonable agreement is found with stalling experiments for ATP and GTP binding. The model can be applied to other torque-generating steps of reversible ligand binding, such as ADP and Pi release, when sufficient data become available. PMID:26483483
15. Optimization of Electrospray Ionization by Statistical Design of Experiments and Response Surface Methodology: Protein-Ligand Equilibrium Dissociation Constant Determinations
Pedro, Liliana; Van Voorhis, Wesley C.; Quinn, Ronald J.
2016-09-01
Electrospray ionization mass spectrometry (ESI-MS) binding studies between proteins and ligands under native conditions require that instrumental ESI source conditions are optimized if relative solution-phase equilibrium concentrations between the protein-ligand complex and free protein are to be retained. Instrumental ESI source conditions that simultaneously maximize the relative ionization efficiency of the protein-ligand complex over free protein and minimize the protein-ligand complex dissociation during the ESI process and the transfer from atmospheric pressure to vacuum are generally specific for each protein-ligand system and should be established when an accurate equilibrium dissociation constant (KD) is to be determined via titration. In this paper, a straightforward and systematic approach for ESI source optimization is presented. The method uses statistical design of experiments (DOE) in conjunction with response surface methodology (RSM) and is demonstrated for the complexes between Plasmodium vivax guanylate kinase (PvGK) and two ligands: 5'-guanosine monophosphate (GMP) and 5'-guanosine diphosphate (GDP). It was verified that even though the ligands are structurally similar, the most appropriate ESI conditions for KD determination by titration are different for each.
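A minimal illustration of the DOE/RSM workflow described: fit a full quadratic response-surface model over a coded factor grid, then locate its stationary point. The factors, grid, and response below are synthetic stand-ins, not the actual PvGK ESI source parameters:

```python
import numpy as np

# two coded ESI factors on a 5x5 grid (hypothetical, e.g. two source voltages)
x1, x2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1, x2 = x1.ravel(), x2.ravel()
y = 10.0 - 3.0*x1**2 - 2.0*x2**2 + 1.0*x1 + 0.5*x2   # synthetic response surface

# full quadratic RSM model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# stationary point of the fitted surface (here the cross term is ~0, so axes decouple)
x1_opt = -b[1] / (2.0 * b[3])
x2_opt = -b[2] / (2.0 * b[4])
```

With a concave fitted surface, the stationary point is the candidate optimum of the source settings; in practice one would confirm it experimentally, as the paper does for each protein-ligand pair.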
17. Equilibrium constant for the reaction ClO + ClO ↔ ClOOCl between 250 and 206 K.
PubMed
Hume, Kelly L; Bayes, Kyle D; Sander, Stanley P
2015-05-14
The chlorine peroxide molecule, ClOOCl, is an important participant in the chlorine-catalyzed destruction of ozone in the stratosphere. Very few laboratory measurements have been made for the partitioning between monomer ClO and dimer ClOOCl at temperatures lower than 250 K. This paper reports absorption spectra for both ClO and ClOOCl when they are in equilibrium at 1 atm and temperatures down to 206 K. The very low ClO concentrations involved require measuring and calibrating a differential cross section, ΔσClO, for the 10-0 band of ClO. A third-law fit of the new results gives Keq = (2.01 ± 0.17) x 10(-27) cm3 molecule(-1) exp[(8554 ± 21) K/T], where the error limits reflect the uncertainty in the entropy change. The resulting equilibrium constants are slightly lower than currently recommended. The slope of the van't Hoff plot yields a value for the enthalpy of formation of ClOOCl at 298 K, ΔHf°, of 129.8 ± 0.6 kJ mol(-1). Uncertainties in the absolute ultraviolet cross sections of ClOOCl and ClO appear to be the limiting factors in these measurements. The new Keq parameters are consistent with the measurements of Santee et al. in the stratosphere. PMID:25560546
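As an illustrative sketch (not part of the paper), the third-law fit reported above can be evaluated numerically; the prefactor and exponent are the values from the abstract, and the van't Hoff slope gives the standard enthalpy of the dimerization reaction:

```python
import math

# Third-law fit from the abstract (values as reported):
# Keq = (2.01e-27 cm^3 molecule^-1) * exp(8554 K / T)
A = 2.01e-27      # pre-exponential factor, cm^3 molecule^-1
B = 8554.0        # exponent coefficient, K

def keq_clo_dimer(T):
    """Equilibrium constant for ClO + ClO <-> ClOOCl at temperature T (K)."""
    return A * math.exp(B / T)

# van't Hoff: d(ln K)/d(1/T) = B  =>  standard reaction enthalpy = -R*B
R = 8.314                 # J mol^-1 K^-1
dH_rxn = -R * B           # J mol^-1, negative: the dimerization is exothermic
```

Because the reaction is exothermic, Keq grows steeply as the temperature drops from 250 to 206 K, which is why the dimer matters in the cold polar stratosphere.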
19. Fundamental and overtone vibrational spectroscopy, enthalpy of hydrogen bond formation and equilibrium constant determination of the methanol-dimethylamine complex.
PubMed
Du, Lin; Mackeprang, Kasper; Kjaergaard, Henrik G
2013-07-01
We have measured gas phase vibrational spectra of the bimolecular complex formed between methanol (MeOH) and dimethylamine (DMA) up to about 9800 cm(-1). In addition to the strong fundamental OH-stretching transition we have also detected the weak second overtone NH-stretching transition. The spectra of the complex are obtained by spectral subtraction of the monomer spectra from spectra recorded for the mixture. For comparison, we also measured the fundamental OH-stretching transition in the bimolecular complex between MeOH and trimethylamine (TMA). The enthalpies of hydrogen bond formation (ΔH) for the MeOH-DMA and MeOH-TMA complexes have been determined by measurements of the fundamental OH-stretching transition in the temperature range from 298 to 358 K. The enthalpy of formation is found to be -35.8 ± 3.9 and -38.2 ± 3.3 kJ mol(-1) for MeOH-DMA and MeOH-TMA, respectively, in the 298 to 358 K region. The equilibrium constant (Kp) for the formation of the MeOH-DMA complex has been determined from the measured and calculated transition intensities of the OH-stretching fundamental transition and the NH-stretching second overtone transition. The transition intensities were calculated using an anharmonic oscillator local mode model with dipole moment and potential energy curves calculated using explicitly correlated coupled cluster methods. The equilibrium constant for formation of the MeOH-DMA complex was determined to be 0.2 ± 0.1 atm(-1), corresponding to a ΔG value of about 4.0 kJ mol(-1).
20. Acid-base titrations of functional groups on the surface of the thermophilic bacterium Anoxybacillus flavithermus: comparing a chemical equilibrium model with ATR-IR spectroscopic data.
PubMed
Heinrich, Hannah T M; Bremer, Phil J; Daughney, Christopher J; McQuillan, A James
2007-02-27
Acid-base functional groups at the surface of Anoxybacillus flavithermus (AF) were assigned from the modeling of batch titration data of bacterial suspensions and compared with those determined from in situ infrared spectroscopic titration analysis. The computer program FITMOD was used to generate a two-site Donnan model (site 1: pKa = 3.26, wet concn = 2.46 x 10(-4) mol g(-1); site 2: pKa = 6.12, wet concn = 6.55 x 10(-5) mol g(-1)), which was able to describe data for whole exponential phase cells from both batch acid-base titrations at 0.01 M ionic strength and electrophoretic mobility measurements over a range of different pH values and ionic strengths. In agreement with information on the composition of bacterial cell walls and a considerable body of modeling literature, site 1 of the model was assigned to carboxyl groups, and site 2 was assigned to amino groups. pH difference IR spectra acquired by in situ attenuated total reflection infrared (ATR-IR) spectroscopy confirmed the presence of carboxyl groups. The spectra appear to show a carboxyl pKa in the 3.3-4.0 range. Further peaks were assigned to phosphodiester groups, which deprotonated at slightly lower pH. The presence of amino groups could not be confirmed or discounted by IR spectroscopy, but a positively charged group corresponding to site 2 was implicated by electrophoretic mobility data. Carboxyl group speciation over a pH range of 2.3-10.3 at two different ionic strengths was further compared to modeling predictions. While model predictions were strongly influenced by the ionic strength change, pH difference IR data showed no significant change. This meant that modeling predictions agreed reasonably well with the IR data for 0.5 M ionic strength but not for 0.01 M ionic strength.
1. Evaluation of equilibrium constants for the interaction of lactate dehydrogenase isoenzymes with reduced nicotinamide-adenine dinucleotide by affinity chromatography.
PubMed Central
Brinkworth, R I; Masters, C J; Winzor, D J
1975-01-01
Rabbit muscle lactate dehydrogenase was subjected to frontal affinity chromatography on Sepharose-oxamate in the presence of various concentrations of NADH and sodium phosphate buffer (0.05 M, pH 6.8) containing 0.5 M-NaCl. Quantitative interpretation of the results yields an intrinsic association constant of 9.0 x 10(4) M(-1) for the interaction of enzyme with NADH at 5 degrees C, a value that is confirmed by equilibrium-binding measurements. In a second series of experiments, zonal affinity chromatography of a mouse tissue extract under the same conditions was used to evaluate association constants of the order of 2 x 10(5) M(-1), 3 x 10(5) M(-1), 4 x 10(5) M(-1), 7 x 10(5) M(-1) and 2 x 10(6) M(-1) for the interaction of NADH with the M4, M3H, M2H2, MH3 and H4 isoenzymes respectively of lactate dehydrogenase. PMID:175784
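A Scatchard analysis of the kind used above can be sketched as follows; the binding data here are synthetic (generated from the reported intrinsic association constant), not the authors' measurements:

```python
import numpy as np

# Hypothetical illustration: single-site binding with the intrinsic
# association constant reported for muscle LDH + NADH at 5 degrees C.
Ka_true = 9.0e4   # M^-1
n_sites = 1.0     # binding sites per subunit (assumed for the demo)

free = np.logspace(-6, -4, 20)                          # free NADH, M
r = n_sites * Ka_true * free / (1.0 + Ka_true * free)   # bound per subunit

# Scatchard plot: r/[L] versus r is linear with slope -Ka and
# x-intercept n, so a linear fit recovers both parameters.
slope, intercept = np.polyfit(r, r / free, 1)
Ka_est = -slope
n_est = intercept / Ka_est
```

For single-site binding the Scatchard line is exact; with real (noisy) data the same regression gives least-squares estimates of Ka and n.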
2. Effect-compartment equilibrium rate constant (keo) for propofol during induction of anesthesia with a target-controlled infusion device.
PubMed
Lim, Thiam Aun; Wong, Wai Hong; Lim, Kin Yuee
2006-01-01
The effect-compartment concentration (C(e)) of a drug at a specific pharmacodynamic endpoint should be independent of the rate of drug injection. We used this assumption to derive an effect-compartment equilibrium rate constant (k(eo)) for propofol during induction of anesthesia, using a target-controlled infusion device (Diprifusor). Eighteen unpremedicated patients were induced with a target blood propofol concentration of 5 microg x ml(-1) (group 1), while another 18 were induced with a target concentration of 6 microg x ml(-1) (group 2). The time at loss of the eyelash reflex was recorded. Computer simulation was used to derive the rate constant (k(eo)) that resulted in the mean C(e) at loss of the eyelash reflex in group 1 being equal to that in group 2. Using this population technique, we found the k(eo) to be 0.57 min(-1). The mean (SD) effect-compartment concentration at loss of the eyelash reflex was 2.39 (0.70) microg x ml(-1). This means that to achieve a desired C(e) within 3 min of induction, the initial target blood concentration should be set at 1.67 times that of the desired C(e) for 1 min, after which it should revert to the desired concentration.
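The effect-compartment model behind k(eo) can be sketched as a one-compartment equilibration ODE, dC(e)/dt = k(eo)·(C(p) − C(e)). The code below (an illustration, not the Diprifusor implementation) integrates it for a constant blood concentration using the reported k(eo):

```python
import math

keo = 0.57   # min^-1, population estimate reported in the paper
Cp = 5.0     # blood propofol concentration held constant, microg/ml (group 1)

def ce_analytic(t):
    # With Cp constant and Ce(0) = 0, dCe/dt = keo*(Cp - Ce) solves to:
    return Cp * (1.0 - math.exp(-keo * t))

# Simple Euler integration of the same ODE for comparison
dt = 0.001                   # step, min
ce = 0.0
for _ in range(3000):        # integrate over 3 min
    ce += dt * keo * (Cp - ce)
```

After 3 min the effect-site concentration reaches about 82% of the blood target, which is why the paper suggests transiently overshooting the initial target to speed equilibration.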
3. On the Temperature Dependence of Intrinsic Surface Protonation Equilibrium Constants: An Extension of the Revised MUSIC Model.
PubMed
Machesky, Michael L.; Wesolowski, David J.; Palmer, Donald A.; Ridley, Moira K.
2001-07-15
The revised multisite complexation (MUSIC) model of T. Hiemstra et al. (J. Colloid Interface Sci. 184, 680 (1996)) is the most thoroughly developed approach to date that explicitly considers the protonation behavior of the various types of hydroxyl groups known to exist on mineral surfaces. We have extended their revised MUSIC model to temperatures other than 25 degrees C to help rationalize the adsorption data we have been collecting for various metal oxides, including rutile and magnetite to 300 degrees C. Temperature-corrected MUSIC model constants were calculated using a consistent set of solution protonation reactions with equilibrium constants that are reasonably well known as a function of temperature. A critical component of this approach was to incorporate an empirical correction factor that accounts for the observed decrease in cation hydration number with increasing temperature. This extension of the revised MUSIC model matches our experimentally determined pH of zero net proton charge (pH(znpc)) values for rutile to within 0.05 pH units between 25 and 250 degrees C and for magnetite within 0.2 pH units between 50 and 290 degrees C. Moreover, combining the MUSIC-model-derived surface protonation constants with the basic Stern description of electrical double-layer structure results in a good fit to our experimental rutile surface protonation data for all conditions investigated (25 to 250 degrees C, and 0.03 to 1.0 m NaCl or tetramethylammonium chloride media). Consequently, this approach should be useful in other instances where it is necessary to describe and/or predict the adsorption behavior of metal oxide surfaces over a wide temperature range. PMID:11426995
4. Effect of Temperature on Acidity and Hydration Equilibrium Constants of Delphinidin-3-O- and Cyanidin-3-O-sambubioside Calculated from Uni- and Multiwavelength Spectroscopic Data.
PubMed
Vidot, Kévin; Achir, Nawel; Mertz, Christian; Sinela, André; Rawat, Nadirah; Prades, Alexia; Dangles, Olivier; Fulcrand, Hélène; Dornier, Manuel
2016-05-25
Delphinidin-3-O-sambubioside and cyanidin-3-O-sambubioside are the main anthocyanins of Hibiscus sabdariffa calyces, traditionally used to make a bright red beverage by decoction in water. At natural pH, these anthocyanins are mainly in their flavylium form (red) in equilibrium with the quinonoid base (purple) and the hemiketal (colorless). For the first time, their acidity and hydration equilibrium constants were obtained from a pH-jump method followed by UV-vis spectroscopy as a function of temperature from 4 to 37 °C. Equilibrium constant determination was also performed by multivariate curve resolution (MCR). Acidity and hydration constants of cyanidin-3-O-sambubioside at 25 °C were 4.12 × 10(-5) and 7.74 × 10(-4), respectively, and were significantly higher for delphinidin-3-O-sambubioside (4.95 × 10(-5) and 1.21 × 10(-3), respectively). MCR enabled the concentration and spectrum of each form to be obtained but led to overestimated values for the equilibrium constants. However, both methods showed that formation of the quinonoid base and of the hemiketal is endothermic. Equilibrium constants of anthocyanins in the hibiscus extract showed values comparable to those of the isolated anthocyanins. PMID:27124576
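Given the acidity (Ka) and hydration (Kh) constants above, the pH-dependent distribution of the three forms follows directly from the two equilibria. A small sketch using the reported 25 °C values for cyanidin-3-O-sambubioside:

```python
# Equilibria relative to the flavylium cation AH+:
#   AH+        <-> quinonoid base A + H+   (Ka)
#   AH+ + H2O  <-> hemiketal B + H+        (Kh)
Ka = 4.12e-5   # acidity constant, 25 degrees C
Kh = 7.74e-4   # hydration constant, 25 degrees C

def speciation(pH):
    """Mole fractions of the three forms at a given pH."""
    h = 10.0 ** (-pH)
    a = Ka / h          # [A]/[AH+]
    b = Kh / h          # [B]/[AH+]
    total = 1.0 + a + b
    return {"flavylium": 1.0 / total,
            "quinonoid": a / total,
            "hemiketal": b / total}
```

At pH 1 the red flavylium cation accounts for essentially all of the pigment, while already near pH 3 the colorless hemiketal takes up a substantial fraction, which is why hibiscus beverages are brightest red at low pH.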
6. Colorimetric Determination of the Iron(III)-Thiocyanate Reaction Equilibrium Constant with Calibration and Equilibrium Solutions Prepared in a Cuvette by Sequential Additions of One Reagent to the Other
ERIC Educational Resources Information Center
Nyasulu, Frazier; Barlag, Rebecca
2011-01-01
The well-known colorimetric determination of the equilibrium constant of the iron(III)-thiocyanate complex is simplified by preparing solutions in a cuvette. For the calibration plot, 0.10 mL increments of 0.00100 M KSCN are added to 4.00 mL of 0.200 M Fe(NO3)3, and for the equilibrium solutions, 0.50 mL increments of…
7. Solubility of stibnite in hydrogen sulfide solutions, speciation, and equilibrium constants, from 25 to 350 °C
SciTech Connect
Krupp, R.E. )
1988-12-01
Solubility of stibnite (Sb2S3) was measured in aqueous hydrogen sulfide solutions as a function of pH and total free sulfur (TFS) concentrations at 25, 90, 200, 275, and 350 °C and at saturated vapor pressures. At 25 and 90 °C and TFS ~ 0.01 molal, solubility is controlled by the thioantimonite complexes H2Sb2S4(0), HSb2S4(-), and Sb2S4(2-). At higher temperatures the hydroxothioantimonite complex Sb2S2(OH)2(0) becomes dominant. Polymerization due to condensation reactions yields long chains made up of trigonal-pyramidal SbS3 groups. Equilibrium constants were derived for the dimers. The transition from thioantimonites to the hydroxothioantimonite species at approximately 120 °C is endothermic and is entirely driven by a gain in entropy. Stibnite solubilities calculated for some geothermal fluids indicate that these fluids are undersaturated in Sb if stibnite is the solid equilibrium phase. At high temperatures (>100 °C) precipitation of stibnite from ore fluids can occur in response to conductive cooling, while at low temperatures, where thioantimonites dominate, acidification of the fluid is the more likely mechanism. Precipitation of stibnite from fluids containing hydroxothioantimonite consumes H2S and may thus trigger precipitation of other metals carried as sulfide complexes, e.g. Au(HS)2(-).
8. 34S16O2: High-resolution analysis of the (030), (101), (111), (002) and (201) vibrational states; determination of equilibrium rotational constants for sulfur dioxide and anharmonic vibrational constants
SciTech Connect
Lafferty, Walter; Flaud, Jean-marie; Ngom, El Hadji A.; Sams, Robert L.
2009-01-02
High resolution Fourier transform spectra of a sample of sulfur dioxide, enriched in 34S (95.3%), were completely analyzed, leading to a large set of assigned lines. The experimental levels derived from this set of transitions were fit to within their experimental uncertainties using Watson-type Hamiltonians. Precise band centers, rotational and centrifugal distortion constants were determined. The following band centers in cm-1 were obtained: ν0(3ν2)=1538.720198(11), ν0(ν1+ν3)=2475.828004(29), ν0(ν1+ν2+ν3)=2982.118600(20), ν0(2ν3)=2679.800919(35), and ν0(2ν1+ν3)=3598.773915(38). The rotational constants obtained in this work have been fit together with the rotational constants of lower lying vibrational states [W.J. Lafferty, J.-M. Flaud, R.L. Sams and El Hadji A. Ngom, in press] to obtain equilibrium constants as well as vibration-rotation constants. These equilibrium constants have been fit together with those of 32S16O2 [J.-M. Flaud and W.J. Lafferty, J. Mol. Spectrosc. 16 (1993) 396-402] leading to an improved equilibrium structure. Finally, the observed band centers have been fit to obtain anharmonic vibrational constants.
9. Determination of acid/base dissociation constants based on a rapid detection of the half equivalence point by feedback-based flow ratiometry.
PubMed
Tanaka, Hideji; Tachibana, Takahiro
2004-06-01
Acid dissociation constants (Ka) were determined through the rapid detection of the half equivalence point (EP1/2) based on feedback-based flow ratiometry. A titrand, delivered at a constant flow rate, was merged with a titrant, whose flow rate was varied in response to a control voltage (Vc) from a controller. Downstream, the pH of the mixed solution was monitored. Initially, Vc was increased linearly. At the instant the detector sensed EP1/2, the ramp direction of Vc changed downward. When EP1/2 was sensed again, Vc was increased again. This series of processes was repeated automatically. The pH at EP1/2 was regarded as the pKa of the analyte after an activity correction. Satisfactory results were obtained for different acids in various matrices with good precision (RSD approximately 3%) at a throughput rate of 56 s/determination.
10. Using electrophoretic mobility shift assays to measure equilibrium dissociation constants: GAL4-p53 binding DNA as a model system.
PubMed
Heffler, Michael A; Walters, Ryan D; Kugel, Jennifer F
2012-01-01
An undergraduate biochemistry laboratory experiment is described that will teach students the practical and theoretical considerations for measuring the equilibrium dissociation constant (K(D)) for a protein/DNA interaction using electrophoretic mobility shift assays (EMSAs). An EMSA monitors the migration of DNA through a native gel; the DNA migrates more slowly when bound to a protein. To determine a K(D), the amount of unbound and protein-bound DNA in the gel is measured as the protein concentration increases. By performing this experiment, students will be introduced to making affinity measurements and gain experience in performing quantitative EMSAs. The experiment describes measuring the K(D) for the interaction between the chimeric protein GAL4-p53 and its DNA recognition site; however, the techniques are adaptable to other DNA binding proteins. In addition, the basic experiment described can be easily expanded to include additional inquiry-driven experimentation.
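The K(D) determination described above reduces to fitting the 1:1 binding isotherm, fraction bound = [P]/(K(D) + [P]), to the shifted-band intensities. A sketch with hypothetical numbers (not data from the GAL4-p53 experiment):

```python
import numpy as np

# Hypothetical 1:1 binding demo: fraction of DNA shifted versus
# protein concentration, fraction = [P] / (KD + [P]).
KD_true = 50e-9                                              # 50 nM, assumed
protein = np.array([5, 10, 25, 50, 100, 250, 500]) * 1e-9    # M
fraction_bound = protein / (KD_true + protein)

# Grid-search fit of KD by minimizing the squared residuals
grid = np.logspace(-9, -6, 2000)
sse = [np.sum((protein / (k + protein) - fraction_bound) ** 2) for k in grid]
KD_est = grid[int(np.argmin(sse))]
```

In a real EMSA the protein concentrations should bracket the K(D), and the fraction bound in each lane is quantified from the band intensities before fitting.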
11. Rate and Equilibrium Constants for an Enzyme Conformational Change during Catalysis by Orotidine 5′-Monophosphate Decarboxylase
PubMed Central
2016-01-01
from the complex between FOMP and the open enzyme, that the tyrosyl phenol group stabilizes the closed form of ScOMPDC by hydrogen bonding to the substrate phosphodianion, and that the phenyl group of Y217 and F217 facilitates formation of the transition state for the rate-limiting conformational change. An analysis of kinetic data for mutant enzyme-catalyzed decarboxylation of OMP and FOMP provides estimates for the rate and equilibrium constants for the conformational change that traps FOMP at the enzyme active site. PMID:26135041
12. Understanding Chemical Equilibrium Using Entropy Analysis: The Relationship between ΔStot(sys°) and the Equilibrium Constant
ERIC Educational Resources Information Center
Bindel, Thomas H.
2010-01-01
Entropy analyses as a function of the extent of reaction are presented for a number of physicochemical processes, including vaporization of a liquid, dimerization of nitrogen dioxide, and the autoionization of water. Graphs of the total entropy change versus the extent of reaction give a visual representation of chemical equilibrium and the second…
13. Norfloxacin Zn(II)-based complexes: acid base ionization constant determination, DNA and albumin binding properties and the biological effect against Trypanosoma cruzi.
PubMed
Gouvea, Ligiane R; Martins, Darliane A; Batista, Denise da Gama Jean; Soeiro, Maria de Nazaré C; Louro, Sonia R W; Barbeira, Paulo J S; Teixeira, Letícia R
2013-10-01
Zn(II) complexes with norfloxacin (NOR) in the absence or in the presence of 1,10-phenanthroline (phen) were obtained and characterized. In both complexes, the ligand NOR was coordinated through a keto and a carboxyl oxygen. Tetrahedral and octahedral geometries were proposed for [ZnCl2(NOR)]·H2O (1) and [ZnCl2(NOR)(phen)]·2H2O (2), respectively. Since the biological activity of the chemicals depends on the pH value, pH titrations of the Zn(II) complexes were performed. UV spectroscopic studies of the interaction of the complexes with calf-thymus DNA (CT DNA) have suggested that they can bind to CT DNA with moderate affinity in an intercalative mode. The interactions between the Zn(II) complexes and bovine serum albumin (BSA) were investigated by steady-state and time-resolved fluorescence spectroscopy at pH 7.4. The experimental data showed static quenching of BSA fluorescence, indicating that both complexes bind to BSA. A modified Stern-Volmer plot for the quenching by complex 2 demonstrated preferential binding near one of the two tryptophan residues of BSA. The binding constants obtained (Kb) showed that BSA had a two orders of magnitude higher affinity for complex 2 than for 1. The results also showed that the affinity of both complexes for BSA was much higher than for DNA. This preferential interaction with protein sites could be important to their biological mechanisms of action. The Zn(II) complexes and the corresponding ligand were assayed in vitro against Trypanosoma cruzi, the causative agent of Chagas disease, and the data showed that complex 2 was the most active against bloodstream trypomastigotes.
14. Analysis of responsive characteristics of ionic-strength-sensitive hydrogel with consideration of effect of equilibrium constant by a chemo-electro-mechanical model.
PubMed
Li, Hua; Lai, Fukun; Luo, Rongmo
2009-11-17
A multiphysics model, termed the multieffect-coupling ionic-strength stimulus (MECis) model, is presented in this paper for analysis of the influence of various equilibrium constants on a smart hydrogel responsive to the ionic strength of the environmental solution. The model is characterized by a set of partial differential governing equations by consideration of the mass and momentum conservations of the system and coupled chemical, electrical, and mechanical multienergy domains. The Nernst-Planck equations are derived by the mass conservation of the ionic species in both the interstitial fluid of the hydrogel and the surrounding solution. The binding reaction between the fixed charge groups of the hydrogel and the mobile ions in the solution is described by the fixed charge equation, which is based on the Langmuir monolayer theory. As an important effect for the binding reaction, the equilibrium constant is incorporated into the fixed charge equation. The kinetics of the hydrogel swelling/deswelling is illustrated by the mechanical equation, based on the law of momentum conservation for the solid polymeric network matrix within the hydrogel. The MECis model is examined by comparison of the numerical simulations and experiments from the open literature. The analysis of the influence of different equilibrium constants on the responsive characteristics of the ionic-strength-sensitive hydrogel is carried out with detailed discussion.
15. On the use of dynamic fluorescence measurements to determine equilibrium and kinetic constants. The inclusion of pyrene in β-cyclodextrin cavities
De Feyter, Steven; van Stam, Jan; Boens, Noël; De Schryver, Fans C.
1996-01-01
An analysis of the kinetic identifiability of two-state excited-state processes gives the conditions that have to be fulfilled to make it possible to estimate the ground-state equilibrium constant from dynamic fluorescence data. For the aqueous system β-cyclodextrin:pyrene it turns out that the only kinetic parameters which can be estimated are (i) the deactivation rate constant of pyrene dissolved in the aqueous bulk, (ii) the rate of formation of a β-cyclodextrin:pyrene inclusion complex in the excited state, which is negligibly slow, and (iii) the sum of the rate constants for deactivation to the ground state and for exclusion into the aqueous bulk of the excited pyrene participating in inclusion complex formation. This sum cannot be separated into its individual rate constant contributions, and it is impossible to determine the ground-state equilibrium constant for the formation of β-cyclodextrin:pyrene inclusion complexes solely from fluorescence decay data, a fact not taken into account in the literature.
16. Analytic calculation of physiological acid-base parameters in plasma.
PubMed
Wooten, E W
1999-01-01
Analytic expressions for plasma total titratable base, base excess (DeltaCB), strong-ion difference, change in strong-ion difference (DeltaSID), change in Van Slyke standard bicarbonate (DeltaVSSB), anion gap, and change in anion gap are derived as a function of pH, total buffer ion concentration, and conditional molar equilibrium constants. The behavior of these various parameters under respiratory and metabolic acid-base disturbances for constant and variable buffer ion concentrations is considered. For constant noncarbonate buffer concentrations, DeltaSID = DeltaCB = DeltaVSSB, whereas these equalities no longer hold under changes in noncarbonate buffer concentration. The equivalence is restored if the reference state is changed to include the new buffer concentrations.
17. Beyond transition state theory: accurate description of nuclear quantum effects on the rate and equilibrium constants of chemical reactions using Feynman path integrals.
PubMed
Vanícek, Jirí
2011-01-01
Nuclear tunneling and other nuclear quantum effects have been shown to play a significant role in molecules as large as enzymes even at physiological temperatures. I discuss how these quantum phenomena can be accounted for rigorously using Feynman path integrals in calculations of the equilibrium and kinetic isotope effects as well as of the temperature dependence of the rate constant. Because these calculations are extremely computationally demanding, special attention is devoted to increasing the computational efficiency by orders of magnitude by employing efficient path integral estimators.
18. Stability of equilibrium of a superconducting ring that levitates in the field of a fixed ring with constant current
Bishaev, A. M.; Bush, A. A.; Gavrikov, M. B.; Kamentsev, K. E.; Kozintseva, M. V.; Savel'ev, V. V.; Sigov, A. S.
2015-11-01
In order to develop a plasma trap with levitating superconducting magnetic coils, it is necessary to search for their stable levitating states. An analytical expression for the potential energy of a single superconducting ring that captures a fixed magnetic flux in the field of a fixed ring with constant current versus the coordinate of the free ring on the axis of the system, the deviation angle of its axis from the axis of the system, and the radial displacement of its plane is derived for a uniform gravity field in the thin-ring approximation. The calculated stable levitation states of the superconducting ring in the field of the ring with constant current are confirmed experimentally. The generalization of such an approach to the levitation of several rings makes it possible to search for stable levitation states of several coils that form a magnetic system of a multipole trap.
19. The determination of equilibrium constants, DeltaG, DeltaH and DeltaS for vapour interaction with a pharmaceutical drug, using gravimetric vapour sorption.
PubMed
Willson, Richard J; Beezer, Anthony E
2003-06-01
The application of gravimetric vapour sorption (GVS) to the characterisation of pharmaceutical drugs is often restricted to the study of gross behaviour such as a measure of hygroscopicity. Although useful in early development of a drug substance, for example, in salt selection screening exercises, such analyses may not contribute to a fundamental understanding of the properties of the material. This paper reports a new methodology for GVS experimentation that will allow specific sorption parameters to be calculated: the equilibrium constant (K), the van't Hoff enthalpy change (DeltaH(v)), the Gibbs free energy for sorption (DeltaG) and the entropy change for sorption (DeltaS). Unlike other reports of this type of analysis that require the application of a specific model, this method is model-free. The analysis does require that, over the narrow temperature range of the study, DeltaH(v) is constant and there is no change in interaction mechanism.
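The model-free quantities named above follow from K measured at two or more temperatures: DeltaH(v) from the van't Hoff relation, DeltaG = -RT ln K, and DeltaS = (DeltaH - DeltaG)/T. A minimal sketch with hypothetical K values (not data from the paper):

```python
import math

R = 8.314  # J mol^-1 K^-1

def vant_hoff(K1, T1, K2, T2):
    """Model-free dH, dG(T1), dS from equilibrium constants at two temperatures."""
    # ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1), rearranged for dH
    dH = R * math.log(K2 / K1) / (1.0 / T1 - 1.0 / T2)
    dG = -R * T1 * math.log(K1)
    dS = (dH - dG) / T1
    return dH, dG, dS
```

For exothermic sorption K decreases with temperature, so K1 > K2 for T1 < T2 yields a negative DeltaH, as in the hypothetical pair used in the check below.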
20. The equilibrium constant for N2O5 = NO2 + NO3 - Absolute determination by direct measurement from 243 to 397 K
NASA Technical Reports Server (NTRS)
Cantrell, C. A.; Davidson, J. A.; Mcdaniel, A. H.; Shetter, R. E.; Calvert, J. G.
1988-01-01
Direct determinations of the equilibrium constant for the reaction N2O5 = NO2 + NO3 were carried out by measuring NO2, NO3, and N2O5 using long-path visible and infrared absorption spectroscopy as a function of temperature from 243 to 397 K. The first-order decay rate constant of N2O5 was experimentally measured as a function of temperature. These results are in turn used to derive a value for the rate coefficient for the NO-forming channel in the reaction of NO3 with NO2. The implications of the results for atmospheric chemistry, the thermodynamics of NO3, and for laboratory kinetics studies are discussed.
1. Equilibrium and rate constants, and reaction mechanism of the HF dissociation in the HF(H2O)7 cluster by ab initio rare event simulations.
PubMed
Elena, Alin Marin; Meloni, Simone; Ciccotti, Giovanni
2013-12-12
We perform restrained hybrid Monte Carlo (MC) simulations to compute the equilibrium constant of the dissociation reaction of HF in HF(H2O)7. We find that HF is a stronger acid in the cluster than in the bulk, and its acidity is higher at lower T. The latter phenomenon has a vibrational entropic origin, resulting from a counterintuitive balance of intra- and intermolecular terms. We find also a temperature dependence of the reaction mechanism. At low T (≤225 K) the dissociation reaction follows a concerted path, with the H atoms belonging to the relevant hydrogen bond chain moving synchronously. At higher T (300 K), the first two hydrogen atoms move together, forming an intermediate metastable state having the structure of an Eigen ion (H9O4(+)), and then the third hydrogen migrates, completing the reaction. We also compute the dissociation rate constant, kRP. At very low T (≤75 K) kRP depends strongly on the temperature, whereas it becomes almost constant at higher T. With respect to the bulk, the HF dissociation in the HF(H2O)7 cluster is about 1 order of magnitude faster. This is due to a lower free energy barrier for the dissociation in the cluster.
2. Determination of the dissociation constant of valine from acetohydroxy acid synthase by equilibrium partition in an aqueous two-phase system.
PubMed
Engel, S; Vyazmensky, M; Barak, Z; Chipman, D M; Merchuk, J C
2000-06-23
An aqueous polyethylene glycol/salt two-phase system was used to estimate the dissociation constant, K(dis), of the Escherichia coli isoenzyme AHAS III regulatory subunit, ilvH protein, from the feedback inhibitor valine. The amounts of the bound and free radioactive valine in the system were determined. A Scatchard plot of the data revealed a 1:1 valine-protein binding ratio and a K(dis) of 133+/-14 microM. The protein did not bind leucine, and the ilvH protein isolated from a valine-resistant mutant showed no valine binding. This method is very simple and rapid, and requires only small amounts of protein compared with the equilibrium dialysis method presently in use.
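The Scatchard analysis mentioned above can be sketched numerically: for ideal 1:1 binding, a plot of bound/free against bound is a straight line with slope -1/K(dis) and x-intercept equal to the total site concentration. The sketch below uses synthetic data generated with the abstract's K(dis) of 133 microM; the protein concentration and free-ligand points are illustrative assumptions, not the paper's measurements.

```python
# Scatchard sketch: recover K_dis and the site count from synthetic
# 1:1 binding data. P_tot and the free-ligand points are assumed values.
K_dis = 133.0   # dissociation constant, uM (value reported in the abstract)
n_sites = 1.0   # binding sites per protein (1:1 binding)
P_tot = 50.0    # total protein, uM (illustrative assumption)

free = [10.0, 25.0, 50.0, 100.0, 200.0, 400.0]   # free valine, uM
# bound = n * P_tot * F / (K_dis + F) for simple 1:1 binding
bound = [n_sites * P_tot * F / (K_dis + F) for F in free]

# Scatchard plot: y = bound/free vs x = bound
# slope = -1/K_dis, x-intercept = n * P_tot
xs = bound
ys = [b / f for b, f in zip(bound, free)]
N = len(xs)
mx, my = sum(xs) / N, sum(ys) / N
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

K_dis_fit = -1.0 / slope                # recovered dissociation constant, uM
n_fit = -intercept / slope / P_tot      # recovered sites per protein
```

Because ideal 1:1 data are exactly linear in Scatchard coordinates, the regression recovers the input constants; real data scatter around the line and the fit gives the estimate and its uncertainty.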
3. Rate and equilibrium constant of the reaction of 1-methylvinoxy radicals with O2: CH3COCH2 + O2<--> CH3COCH2O2.
PubMed
Hassouna, Melynda; Delbos, Eric; Devolder, Pascal; Viskolcz, Bela; Fittschen, Christa
2006-06-01
The reaction of 1-methylvinoxy radicals, CH3COCH2, with molecular oxygen has been investigated by experimental and theoretical methods as a function of temperature (291-520 K) and pressure (0.042-10 bar He). Experiments have been performed by laser photolysis coupled to detection of 1-methylvinoxy radicals by laser-induced fluorescence (LIF). The potential energy surface calculations were performed using ab initio molecular orbital theory at the G3MP2B3 and CBSQB3 levels of theory based on density functional theory optimized geometries. Derived molecular properties of the characteristic points of the potential energy surface were used to describe the mechanism and kinetics of the reaction under investigation. At 295 K, no pressure dependence of the rate constant for the association reaction has been observed: k(1,298K) = (1.18 +/- 0.04) x 10(-12) cm3 s(-1). Biexponential decays have been observed in the temperature range 459-520 K and have been interpreted as an equilibrium reaction. The temperature-dependent equilibrium constants have been extracted from these decays and a standard reaction enthalpy of deltaH(r,298K) = -105.0 +/- 2.0 kJ mol(-1) and entropy of deltaS(r,298K) = -143.0 +/- 4.0 J mol(-1) K(-1) were derived, in excellent agreement with the theoretical results. Consistent heats of formation for the vinoxy and 1-methylvinoxy radicals as well as their O2 adducts are recommended based on our complementary experimental and theoretical study: deltaH(f,298K) = 13.0 +/- 2.0, -32.9 +/- 2.0, -85.9 +/- 4.0, and -142.1 +/- 4.0 kJ mol(-1) for CH2CHO, CH3COCH2 radicals, and their adducts, respectively.
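The extraction of the reaction enthalpy and entropy from temperature-dependent equilibrium constants follows the van't Hoff relation ln K = -dH/(RT) + dS/R, which is linear in 1/T. The sketch below generates synthetic K values from the abstract's reported dH and dS and recovers them by linear regression; it illustrates the procedure, not the paper's actual data reduction.

```python
# van't Hoff sketch: ln K = -dH/(R*T) + dS/R, so regressing ln K on 1/T
# gives dH from the slope and dS from the intercept. The K values below
# are synthetic, generated from the abstract's reported dH and dS.
R = 8.314        # J/(mol K)
dH = -105.0e3    # J/mol (reported standard reaction enthalpy)
dS = -143.0      # J/(mol K) (reported standard reaction entropy)

temps = [459.0, 475.0, 490.0, 505.0, 520.0]        # K, experimental range
lnK = [-dH / (R * T) + dS / R for T in temps]

xs = [1.0 / T for T in temps]
N = len(xs)
mx, my = sum(xs) / N, sum(lnK) / N
slope = sum((x - mx) * (y - my) for x, y in zip(xs, lnK)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

dH_fit = -slope * R       # J/mol, from the slope
dS_fit = intercept * R    # J/(mol K), from the intercept
```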
4. Basis for the equilibrium constant in the interconversion of l-lysine and l-beta-lysine by lysine 2,3-aminomutase.
PubMed
Chen, Dawei; Tanem, Justinn; Frey, Perry A
2007-02-01
l-beta-lysine and beta-glutamate are produced by the actions of lysine 2,3-aminomutase and glutamate 2,3-aminomutase, respectively. The pK(a) values have been titrimetrically measured and are for l-beta-lysine: pK(1)=3.25 (carboxyl), pK(2)=9.30 (beta-aminium), and pK(3)=10.5 (epsilon-aminium). For beta-glutamate the values are pK(1)=3.13 (carboxyl), pK(2)=3.73 (carboxyl), and pK(3)=10.1 (beta-aminium). The equilibrium constants for reactions of 2,3-aminomutases favor the beta-isomers. The pH and temperature dependencies of K(eq) have been measured for the reaction of lysine 2,3-aminomutase to determine the basis for preferential formation of beta-lysine. The value of K(eq) (8.5 at 37 degrees C) is independent of pH between pH 6 and pH 11, ruling out differences in pK-values as the basis for the equilibrium constant. The K(eq)-value is temperature-dependent and ranges from 10.9 at 4 degrees C to 6.8 at 65 degrees C. The linear van't Hoff plot shows the reaction to be enthalpy-driven, with DeltaH degrees =-1.4 kcal mol(-1) and DeltaS degrees =-0.25 cal deg(-1) mol(-1). Exothermicity is attributed to the greater strength of the bond C(beta)-N(beta) in l-beta-lysine than C(alpha)-N(alpha) in l-lysine, and this should hold for other amino acids.
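The reported thermodynamic parameters can be checked for internal consistency: K = exp(-(dH - T*dS)/(RT)) with the abstract's DeltaH and DeltaS should approximately reproduce the quoted K(eq) values at 4, 37, and 65 degrees C (only approximately, since the published parameters are rounded).

```python
import math

# Consistency check: the reported dH and dS should reproduce the reported
# K_eq values via K = exp(-(dH - T*dS)/(R*T)). Agreement is approximate
# because the published parameters are rounded.
R = 1.987      # cal/(mol K)
dH = -1400.0   # cal/mol (reported -1.4 kcal/mol)
dS = -0.25     # cal/(mol K)

def K_eq(T_celsius):
    T = T_celsius + 273.15
    dG = dH - T * dS
    return math.exp(-dG / (R * T))

K_4 = K_eq(4.0)     # reported: 10.9
K_37 = K_eq(37.0)   # reported: 8.5
K_65 = K_eq(65.0)   # reported: 6.8
```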
5. Equilibrium binding constants for Tl+ with gramicidins A, B and C in a lysophosphatidylcholine environment determined by 205Tl nuclear magnetic resonance spectroscopy.
PubMed Central
Hinton, J F; Koeppe, R E; Shungu, D; Whaley, W L; Paczkowski, J A; Millett, F S
1986-01-01
Nuclear Magnetic Resonance (NMR) 205Tl spectroscopy has been used to monitor the binding of Tl+ to gramicidins A, B, and C packaged in aqueous dispersions of lysophosphatidylcholine. For 5 mM gramicidin dimer in the presence of 100 mM lysophosphatidylcholine, only approximately 50% or less of the gramicidin appears to be accessible to Tl+. Analysis of the 205Tl chemical shift as a function of Tl+ concentration over the 0.65-50 mM range indicates that only one Tl+ ion can be bound by gramicidin A, B, or C under these experimental conditions. In this system, the Tl+ equilibrium binding constant is 582 +/- 20 M-1 for gramicidin A, 1949 +/- 100 M-1 for gramicidin B, and 390 +/- 20 M-1 for gramicidin C. Gramicidin B not only binds Tl+ more strongly but is also in a different conformational state than A and C, as shown by Circular Dichroism spectroscopy. The 205Tl NMR technique can now be extended to determinations of binding constants of other cations to gramicidin by competition studies using a 205Tl probe. PMID:2420383
6. Oligomer formation of the bacterial second messenger c-di-GMP: reaction rates and equilibrium constants indicate a monomeric state at physiological concentrations.
PubMed
Gentner, Martin; Allan, Martin G; Zaehringer, Franziska; Schirmer, Tilman; Grzesiek, Stephan
2012-01-18
Cyclic diguanosine-monophosphate (c-di-GMP) is a bacterial signaling molecule that triggers a switch from motile to sessile bacterial lifestyles. This mechanism is of considerable pharmaceutical interest, since it is related to bacterial virulence, biofilm formation, and persistence of infection. Previously, c-di-GMP has been reported to display a rich polymorphism of various oligomeric forms at millimolar concentrations, which differ in base stacking and G-quartet interactions. Here, we have analyzed the equilibrium and exchange kinetics between these various forms by NMR spectroscopy. We find that the association of the monomer into a dimeric form is in fast exchange, with an equilibrium constant of about 1 mM. At concentrations above 100 μM, higher oligomers are formed in the presence of cations. These are presumably tetramers and octamers, with octamers dominating above about 0.5 mM. Thus, at the low micromolar concentrations of the cellular environment and in the absence of additional compounds that stabilize oligomers, c-di-GMP should be predominantly monomeric. This finding has important implications for the understanding of c-di-GMP recognition by protein receptors. In contrast to the monomer/dimer exchange, formation and dissociation of higher oligomers occurs on a time scale of several hours to days. The time course can be described quantitatively by a simple kinetic model where tetramers are intermediates of octamer formation. The extremely slow oligomer dissociation may generate severe artifacts in biological experiments when c-di-GMP is diluted from concentrated stock solution. We present a simple method to quantify c-di-GMP monomers and oligomers from UV spectra and a procedure to dissolve the unwanted oligomers by an annealing step.
7. Constraining the chlorine monoxide (ClO)/chlorine peroxide (ClOOCl) equilibrium constant from Aura Microwave Limb Sounder measurements of nighttime ClO.
PubMed
Santee, Michelle L; Sander, Stanley P; Livesey, Nathaniel J; Froidevaux, Lucien
2010-04-13
The primary ozone loss process in the cold polar lower stratosphere hinges on chlorine monoxide (ClO) and one of its dimers, chlorine peroxide (ClOOCl). Recently, analyses of atmospheric observations have suggested that the equilibrium constant, K(eq), governing the balance between ClOOCl formation and thermal decomposition in darkness is lower than that in the current evaluation of kinetics data. Measurements of ClO at night, when ClOOCl is unaffected by photolysis, provide a useful means of testing quantitative understanding of the ClO/ClOOCl relationship. Here we analyze nighttime ClO measurements from the National Aeronautics and Space Administration Aura Microwave Limb Sounder (MLS) to infer an expression for K(eq). Although the observed temperature dependence of the nighttime ClO is in line with the theoretical ClO/ClOOCl equilibrium relationship, none of the previously published expressions for K(eq) consistently produces ClO abundances that match the MLS observations well under all conditions. Employing a standard expression for K(eq), A x exp(B/T), we constrain the parameter A to currently recommended values and estimate B using a nonlinear weighted least squares analysis of nighttime MLS ClO data. ClO measurements at multiple pressure levels throughout the periods of peak chlorine activation in three Arctic and four Antarctic winters are used to estimate B. Our derived B leads to values of K(eq) that are approximately 1.4 times smaller at stratospherically relevant temperatures than currently recommended, consistent with earlier studies. Our results are in better agreement with the newly updated (2009) kinetics evaluation than with the previous (2006) recommendation.
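With the pre-factor A held fixed, the model ln K(eq) = ln A + B/T is linear in the single parameter B, so the weighted least-squares estimate has a closed form. The sketch below shows that one-parameter fit on synthetic data; the A, B, and temperature values are illustrative stand-ins of the same form as published expressions, not the paper's numbers.

```python
import math

# One-parameter fit sketch: with A fixed, ln K = ln A + B/T is linear in B,
# so weighted least squares gives B = sum(w*x*y)/sum(w*x*x) with x = 1/T and
# y = ln K - ln A. All numerical values here are illustrative assumptions.
A = 9.3e-28       # pre-factor, cm^3 molecule^-1 (illustrative)
B_true = 8835.0   # K (illustrative)

temps = [190.0, 195.0, 200.0, 205.0, 210.0]          # K
K_obs = [A * math.exp(B_true / T) for T in temps]    # synthetic "observations"
weights = [1.0] * len(temps)                         # equal weights for the sketch

num = sum(w * (1.0 / T) * (math.log(K) - math.log(A))
          for w, T, K in zip(weights, temps, K_obs))
den = sum(w * (1.0 / T) ** 2 for w, T in zip(weights, temps))
B_fit = num / den
```

In the real analysis the weights come from the ClO measurement uncertainties, which is what makes the estimate a *weighted* least-squares one.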
8. Chemical Principles Revisited: Chemical Equilibrium.
ERIC Educational Resources Information Center
Mickey, Charles D.
1980-01-01
Describes: (1) Law of Mass Action; (2) equilibrium constant and ideal behavior; (3) general form of the equilibrium constant; (4) forward and reverse reactions; (5) factors influencing equilibrium; (6) Le Chatelier's principle; (7) effects of temperature, changing concentration, and pressure on equilibrium; and (8) catalysts and equilibrium. (JN)
9. Understanding Acid Base Disorders.
PubMed
Gomez, Hernando; Kellum, John A
2015-10-01
The concentration of hydrogen ions is regulated in biologic solutions. There are currently three recognized approaches to assessing changes in acid base status. The first is the traditional Henderson-Hasselbalch approach, also called the physiologic approach, which uses the relationship between HCO3(-) and Pco2; the second is the standard base excess approach, based on the Van Slyke equation; the third is the quantitative or Stewart approach, which uses the strong ion difference and the total weak acids. This article explores the origins of the current concepts framing the existing methods to analyze acid base balance.
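The physiologic approach rests on the Henderson-Hasselbalch equation, pH = pKa + log10([HCO3-]/(0.03 x Pco2)). A minimal sketch with the textbook constants (pKa = 6.1, CO2 solubility 0.03 mmol/L per mmHg):

```python
import math

# Henderson-Hasselbalch sketch: pH from plasma bicarbonate and PCO2,
# using the textbook constants pKa = 6.1 and 0.03 mmol/L per mmHg.
def ph_henderson_hasselbalch(hco3_mmol_l, pco2_mmhg):
    return 6.1 + math.log10(hco3_mmol_l / (0.03 * pco2_mmhg))

ph_normal = ph_henderson_hasselbalch(24.0, 40.0)    # normal values -> ~7.40
ph_acidosis = ph_henderson_hasselbalch(12.0, 40.0)  # low HCO3- -> ~7.10
```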
10. Evaluating the Equilibrium Association Constant between ArtinM Lectin and Myeloid Leukemia Cells by Impedimetric and Piezoelectric Label Free Approaches
PubMed Central
Carvalho, Fernanda C.; Martins, Denise C.; Santos, Adriano; Roque-Barreira, Maria-Cristina; Bueno, Paulo R.
2014-01-01
Label-free methods for evaluating lectin–cell binding have been developed to determine the lectin–carbohydrate interactions in the context of cell-surface oligosaccharides. In the present study, mass loading and electrochemical transducer signals were compared to characterize the interaction between lectin and cellular membranes by measuring the equilibrium association constant, Ka, between ArtinM lectin and the carbohydrate sites of NB4 leukemia cells. By functionalizing sensor interfaces with ArtinM, it was possible to determine Ka over a range of leukemia cell concentrations to construct analytical curves from impedimetric and/or mass-associated frequency shifts with analytical signals following a Langmuir pattern. Using the Langmuir isotherm-binding model, the Ka obtained were (8.9 ± 1.0) × 10−5 mL/cell and (1.05 ± 0.09) × 10−6 mL/cell with the electrochemical impedance spectroscopy (EIS) and quartz crystal microbalance (QCM) methods, respectively. The observed differences were attributed to the intrinsic characteristic sensitivity of each method in following Langmuir isotherm premises. PMID:25587428
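The Langmuir fit used above can be sketched as follows: the signal S = S_max * Ka * c / (1 + Ka * c) linearizes to 1/S = 1/S_max + (1/(S_max * Ka)) * (1/c), so Ka = intercept/slope of a double-reciprocal regression. The data below are synthetic, generated with a Ka of the same order as the abstract's EIS value; S_max and the cell concentrations are illustrative assumptions.

```python
# Langmuir-isotherm sketch: recover Ka from synthetic signal-vs-concentration
# data via the double-reciprocal linearization. S_max and the concentration
# series are assumed values, not the paper's measurements.
Ka_true = 8.9e-5   # mL/cell (order of the EIS value in the abstract)
S_max = 100.0      # saturation signal, arbitrary units (assumed)

cells = [5e3, 1e4, 2e4, 5e4, 1e5, 2e5]                       # cells/mL
signal = [S_max * Ka_true * c / (1 + Ka_true * c) for c in cells]

# 1/S = 1/S_max + (1/(S_max*Ka)) * (1/c): linear in 1/c
xs = [1.0 / c for c in cells]
ys = [1.0 / s for s in signal]
N = len(xs)
mx, my = sum(xs) / N, sum(ys) / N
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

Ka_fit = intercept / slope   # mL/cell
```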
12. Determination of equilibrium constant of amino carbamate adduct formation in sisomicin by a high pH based high performance liquid chromatography.
PubMed
Wlasichuk, Kenneth B; Tan, Li; Guo, Yushen; Hildebrandt, Darin J; Zhang, Hao; Karr, Dane E; Schmidt, Donald E
2015-01-01
Amino carbamate adduct formation from the amino group of an aminoglycoside and carbon dioxide has been postulated as a mechanism for reducing nephrotoxicity in the aminoglycoside class compounds. In this study, sisomicin was used as a model compound for amino carbamate analysis. A high-pH reversed-phase high performance liquid chromatography (RP-HPLC) method was used to separate the amino carbamate from sisomicin. The carbamate is stable because its breakdown is inhibited at high pH, and any reactive carbon dioxide is removed as carbonate. The amino carbamate was quantified and the molar fraction of amine as the carbamate of sisomicin was obtained from the HPLC peak areas. The equilibrium constant of carbamate formation, Kc, was determined to be 3.3 × 10(-6), and it was used to predict the fraction of carbamate over the pH range in typical biological systems. Based on these results, the fraction of amino carbamate at physiological pH values is less than 13%, so the postulated mechanism for nephrotoxicity protection is not valid. The same methodology is applicable to other aminoglycosides.
13. Acid-Base Homeostasis.
PubMed
Hamm, L Lee; Nakhoul, Nazih; Hering-Smith, Kathleen S
2015-12-01
Acid-base homeostasis and pH regulation are critical for both normal physiology and cell metabolism and function. The importance of this regulation is evidenced by a variety of physiologic derangements that occur when plasma pH is either high or low. The kidneys have the predominant role in regulating the systemic bicarbonate concentration and hence, the metabolic component of acid-base balance. This function of the kidneys has two components: reabsorption of virtually all of the filtered HCO3(-) and production of new bicarbonate to replace that consumed by normal or pathologic acids. This production or generation of new HCO3(-) is done by net acid excretion. Under normal conditions, approximately one-third to one-half of net acid excretion by the kidneys is in the form of titratable acid. The other one-half to two-thirds is the excretion of ammonium. The capacity to excrete ammonium under conditions of acid loads is quantitatively much greater than the capacity to increase titratable acid. Multiple, often redundant pathways and processes exist to regulate these renal functions. Derangements in acid-base homeostasis, however, are common in clinical medicine and can often be related to the systems involved in acid-base transport in the kidneys.
14. Estimating the plasma effect-site equilibrium rate constant (Ke₀) of propofol by fitting time of loss and recovery of consciousness.
PubMed
Wu, Qi; Sun, Baozhu; Wang, Shuqin; Zhao, Lianying; Qi, Feng
2013-01-01
The present paper proposes a new approach for fitting the plasma effect-site equilibrium rate constant (Ke0) of propofol to satisfy the condition that the effect-site concentration (Ce) is equal at the time of loss of consciousness (LOC) and recovery of consciousness (ROC). Forty patients receiving intravenous anesthesia were divided into 4 groups and injected with propofol 1.4, 1.6, 1.8, or 2 mg/kg at 1,200 mL/h. Durations from the start of injection to LOC and to ROC were recorded. LOC and ROC were defined as an observer's assessment of alertness and sedation scale change from 3 to 2 and from 2 to 3, respectively. Software implementing a bisection-method iteration algorithm was built. Then, the Ke0 satisfying the CeLOC=CeROC condition was estimated. The accuracy of the Ke0 estimated by our method was compared with the Diprifusor TCI Pump built-in Ke0 (0.26 min(-1)) and the Orchestra Workstation built-in Ke0 (1.21 min(-1)) in another group of 21 patients who were injected with propofol 1.4 to 2 mg/kg. Our results show that the population Ke0 of propofol was 0.53 ± 0.18 min(-1). The regression equation for adjustment by dose (mg/kg) and age was Ke0=1.42-0.30 × dose-0.0074 × age. Only the Ke0 adjusted by dose and age achieved the level of accuracy required for clinical applications. We conclude that the Ke0 estimated from clinical signs by the two-point fitting method significantly improved the ability of CeLOC to predict CeROC. However, only a Ke0 adjusted by dose and age, and not a fixed Ke0 value, can meet clinical requirements of accuracy.
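The core of the two-point fit, bisecting on Ke0 until Ce(t_LOC) = Ce(t_ROC), can be sketched with a deliberately simplified PK model. A one-compartment bolus model Cp(t) = C0*exp(-k*t) is assumed here purely for illustration; the study itself used observed clinical times with a full propofol pharmacokinetic model, and every numeric value below is an assumption.

```python
import math

# Bisection sketch: choose Ke0 so the effect-site concentration Ce is equal
# at t_LOC and t_ROC. One-compartment plasma model assumed for illustration.
k = 0.1       # plasma elimination rate, 1/min (assumed)
C0 = 10.0     # initial plasma concentration, ug/mL (assumed)
t_loc = 2.0   # time of loss of consciousness, min (assumed)
t_roc = 10.0  # time of recovery of consciousness, min (assumed)

def ce(t, ke0):
    # analytic solution of dCe/dt = ke0*(Cp - Ce) with Ce(0) = 0
    return C0 * ke0 / (ke0 - k) * (math.exp(-k * t) - math.exp(-ke0 * t))

def f(ke0):
    return ce(t_loc, ke0) - ce(t_roc, ke0)

# bracket chosen (above k, to avoid the removable singularity at ke0 = k)
# so that f(lo) < 0 < f(hi)
lo, hi = 0.15, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
ke0_fit = 0.5 * (lo + hi)   # Ce(t_LOC) == Ce(t_ROC) at this Ke0
```

Small Ke0 makes Ce lag plasma (so Ce is higher at recovery), large Ke0 makes Ce track plasma (higher at induction); the root in between equalizes the two, which is exactly the CeLOC = CeROC condition.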
15. History of medical understanding and misunderstanding of Acid base balance.
PubMed
Aiken, Christopher Geoffrey Alexander
2013-09-01
To establish how controversies in understanding acid base balance arose, the literature on acid base balance was reviewed from 1909, when Henderson described how the neutral reaction of blood is determined by carbonic and organic acids being in equilibrium with an excess of mineral bases over mineral acids. From 1914 to 1930, Van Slyke and others established our acid base principles. They recognised that carbonic acid converts into bicarbonate all non-volatile mineral bases not bound by mineral acids and determined therefore that bicarbonate represents the alkaline reserve of the body and should be a physiological constant. They showed that standard bicarbonate is a good measure of acidosis caused by increased production or decreased elimination of organic acids. However, they recognised that bicarbonate improved low plasma bicarbonate but not high urine acid excretion in diabetic ketoacidosis, and that increasing pCO2 caused chloride to shift into cells raising plasma titratable alkali. Both indicate that minerals influence pH. In 1945 Darrow showed that hyperchloraemic metabolic acidosis in preterm infants fed milk with 5.7 mmol of chloride and 2.0 mmol of sodium per 100 kcal was caused by retention of chloride in excess of sodium. Similar findings were made but not recognised in later studies of metabolic acidosis in preterm infants. Shohl in 1921 and Kildeberg in 1978 presented the theory that carbonic and organic acids are neutralised by mineral base, where mineral base is the excess of mineral cations over anions and organic acid is the difference between mineral base, bicarbonate and protein anion. The degree of metabolic acidosis measured as base excess is determined by deviation in both mineral base and organic acid from normal.
16. Acid-base properties of xanthosine 5'-monophosphate (XMP) and of some related nucleobase derivatives in aqueous solution: micro acidity constant evaluations of the (N1)H versus the (N3)H deprotonation ambiguity.
PubMed
Massoud, Salah S; Corfù, Nicolas A; Griesser, Rolf; Sigel, Helmut
2004-10-11
The first acidity constant of fully protonated xanthosine 5'-monophosphate, that is, of H3(XMP)+, was estimated by means of a micro acidity constant scheme and the following three deprotonations of the H2(XMP)+/- (pKa=0.97), H(XMP)- (5.30), and XMP2- (6.45) species were determined by potentiometric pH titrations; further deprotonation of (XMP-H)3- is possible only with pKa>12. The most important results are that the xanthine residue is deprotonated before the P(O)2(OH)- group loses its final proton; that is, twofold negatively charged XMP carries one negative charge in the pyrimidine ring and one at the phosphate group. Micro acidity constant evaluations reveal that this latter mentioned species occurs with a formation degree of 88 %, whereas its tautomer with a neutral xanthine moiety and a PO3(2-) group is formed only to 12 %; this distinguishes XMP from its related nucleoside 5'-monophosphates, like guanosine 5'-monophosphate. At the physiological pH of about 7.5 mainly (XMP-H)3- exists. The question, which of the purine sites, (N1)H or (N3)H, is deprotonated in this species cannot be answered unequivocally, though it appears that the (N3)H site is more acidic. By application of several methylated xanthine species intrinsic micro acidity constants are calculated and it is shown that, for example, for 7-methylxanthine the N1-deprotonated tautomer occurs with a formation degree of about 5 %; a small but significant amount that, as is discussed, may possibly be enhanced by metal ion coordination to N7, which is known to occur preferably to this site.
17. Three applications of path integrals: equilibrium and kinetic isotope effects, and the temperature dependence of the rate constant of the [1,5] sigmatropic hydrogen shift in (Z)-1,3-pentadiene.
PubMed
Zimmermann, Tomáš; Vaníček, Jiří
2010-11-01
Recent experiments have confirmed the importance of nuclear quantum effects even in large biomolecules at physiological temperature. Here we describe how the path integral formalism can be used to describe rigorously the nuclear quantum effects on equilibrium and kinetic properties of molecules. Specifically, we explain how path integrals can be employed to evaluate the equilibrium (EIE) and kinetic (KIE) isotope effects, and the temperature dependence of the rate constant. The methodology is applied to the [1,5] sigmatropic hydrogen shift in pentadiene. Both the KIE and the temperature dependence of the rate constant confirm the importance of tunneling and other nuclear quantum effects as well as of the anharmonicity of the potential energy surface. Moreover, previous results on the KIE were improved by using a combination of a high level electronic structure calculation within the harmonic approximation with a path integral anharmonicity correction using a lower level method.
18. Analysis of fast and slow acid dissociation equilibria of 3',3″,5',5″-tetrabromophenolphthalein and determination of its equilibrium constants by capillary zone electrophoresis.
PubMed
Takayanagi, Toshio
2013-01-01
Acid dissociation constants of 3',3″,5',5″-tetrabromophenolphthalein (TBPP) were determined in an aqueous solution by capillary zone electrophoresis at an ionic strength of 0.01 mol/L. Two steps of the fast acid-dissociation equilibria, including the precipitable species H2TBPP, were analyzed in the weakly acidic pH region by using the change in effective electrophoretic mobility of TBPP with the pH of the separation buffer. On the other hand, the acid-dissociation reaction of TBPP in the alkaline pH region was reversible but very slow to reach equilibrium; the two TBPP species concerned with the equilibrium were detected as distinct signals in the electropherograms. After reaching equilibrium, the acid-dissociation constant was determined from the signal height corresponding to the dianion form. Thus, three acid dissociation constants of TBPP were determined in aqueous solution: pKa1 = 5.29 ± 0.06, pKa2 = 6.35 ± 0.02, and pKa3 = 11.03 ± 0.04.
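The mobility-based analysis relies on the effective mobility being the mole-fraction-weighted sum of the species mobilities. A forward-model sketch for the two fast dissociation steps, using the pKa values reported in the abstract (the species mobilities themselves are assumed for illustration):

```python
# Effective-mobility sketch: mu_eff(pH) is the mole-fraction-weighted sum of
# species mobilities. pKa1 and pKa2 are the fast-equilibrium values from the
# abstract; the mobility values are illustrative assumptions.
pKa1, pKa2 = 5.29, 6.35
mu = {"H2A": 0.0, "HA-": -2.0e-4, "A2-": -3.5e-4}   # cm^2/(V s), assumed

def mu_eff(pH):
    Ka1, Ka2 = 10.0 ** -pKa1, 10.0 ** -pKa2
    h = 10.0 ** -pH
    denom = h * h + h * Ka1 + Ka1 * Ka2
    x_h2a = h * h / denom          # neutral, zero mobility
    x_ha = h * Ka1 / denom
    x_a = Ka1 * Ka2 / denom
    return x_h2a * mu["H2A"] + x_ha * mu["HA-"] + x_a * mu["A2-"]

mu_acidic = mu_eff(3.0)   # near zero: neutral H2A dominates
mu_basic = mu_eff(9.0)    # near the fully dissociated limit
```

Fitting this model to measured mu_eff(pH) is what yields the pKa values in practice.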
19. Assessment of acid-base balance. Stewart's approach.
PubMed
Fores-Novales, B; Diez-Fores, P; Aguilera-Celorrio, L J
2016-04-01
The study of acid-base equilibrium, its regulation and its interpretation have been a source of debate since the beginning of 20th century. Most accepted and commonly used analyses are based on pH, a notion first introduced by Sorensen in 1909, and on the Henderson-Hasselbalch equation (1916). Since then new concepts have been development in order to complete and make easier the understanding of acid-base disorders. In the early 1980's Peter Stewart brought the traditional interpretation of acid-base disturbances into question and proposed a new method. This innovative approach seems more suitable for studying acid-base abnormalities in critically ill patients. The aim of this paper is to update acid-base concepts, methods, limitations and applications.
20. Calculation of equilibrium constants from multiwavelength spectroscopic data--II: SPECFIT: two user-friendly programs in basic and standard FORTRAN 77.
PubMed
Gampp, H; Maeder, M; Meyer, C J; Zuberbühler, A D
1985-04-01
A new program (SPECFIT), written in HP BASIC or FORTRAN 77, for the calculation of stability constants from spectroscopic data, is presented. Stability constants have been successfully calculated from multiwavelength spectrophotometric and EPR data, but the program can be equally well applied to the numerical treatment of other spectroscopic measurements. The special features of SPECFIT that improve convergence, increase numerical reliability, and minimize memory as well as computing time requirements are (i) elimination of the linear parameters (i.e., molar absorptivities), (ii) the use of analytical instead of numerical derivatives, and (iii) factor analysis. Calculation of stability constants from spectroscopic data is then as straightforward as from potentiometric titration curves and gives results of analogous reproducibility. The spectroscopic method has proved, however, to be superior in discriminating between chemical models.
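The "elimination of the linear parameters" idea can be sketched on the smallest possible case: for each trial stability constant, the species concentrations follow from mass balance, the best-fit molar absorptivity is then a closed-form linear least-squares solution, and only the constant itself needs a nonlinear search. The sketch assumes 1:1 binding, a single wavelength, a grid search, and synthetic data; SPECFIT itself handles multiwavelength data with Marquardt-type refinement.

```python
import math

# Variable-projection sketch: eliminate the linear parameter (molar
# absorptivity) for each trial K, then grid-search K. All numbers assumed.
K_true = 1.0e4     # stability constant, 1/M (assumed)
eps_true = 500.0   # molar absorptivity, 1/(M cm) (assumed)
M_tot = 1.0e-4     # total metal, M
L_tots = [2e-5, 5e-5, 1e-4, 2e-4, 5e-4]   # total ligand series, M

def complex_conc(K, Lt):
    # [ML] from the mass-balance quadratic for M + L <-> ML
    b = M_tot + Lt + 1.0 / K
    return 0.5 * (b - math.sqrt(b * b - 4.0 * M_tot * Lt))

absorb = [eps_true * complex_conc(K_true, Lt) for Lt in L_tots]  # synthetic A

def residual(K):
    c = [complex_conc(K, Lt) for Lt in L_tots]
    # best eps for this K is the closed-form linear least-squares solution
    eps = sum(ci * ai for ci, ai in zip(c, absorb)) / sum(ci * ci for ci in c)
    return sum((ai - eps * ci) ** 2 for ci, ai in zip(c, absorb))

# nonlinear search over K only (coarse log10 grid from 1e3 to 1e5)
grid = [10.0 ** (3.0 + 0.01 * i) for i in range(201)]
K_fit = min(grid, key=residual)
```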
1. Thermodynamic and microscopic equilibrium constants of molecular species formed from pyridoxal 5'-phosphate and 2-amino-3-phosphonopropionic acid in aqueous and D/sub 2/O solution
SciTech Connect
Szpoganicz, B.; Martell, A.E.
1984-09-19
Schiff base formation between pyridoxal 5'-phosphate (PLP) and 2-amino-3-phosphonopropionic acid (APP) has been investigated by measurement of the corresponding NMR and electronic absorption spectra. A value of 0.26 was found for the formation constant of the completely deprotonated Schiff base species, and is much smaller than the values reported for pyridoxal-beta-chloroalanine and pyridoxal-O-phosphoserine. The protonation constants for the aldehyde and hydrate forms of PLP were determined in D2O by measurement of the variation of chemical shifts with pD (pH in D2O). The hydration constants of PLP were determined in a pD range 2-12, and species distributions were calculated. The protonation constants of the APP-PLP Schiff base determined by NMR in D2O were found to have the log values 12.54, 8.10, 6.70, and 5.95, and the species distributions were calculated for a range of pD values. Evidence is reported for hydrogen bonding involving the phosphate and phosphonate groups of the diprotonated Schiff base. The cis and trans forms of the Schiff bases were distinguished with the aid of the nuclear Overhauser effect. 43 references, 9 figures, 3 tables.
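Species distributions of the kind computed in this work follow directly from the stepwise protonation constants via cumulative formation constants beta_n = K1*K2*...*Kn. A sketch using the log K values quoted in the abstract:

```python
# Species-distribution sketch: mole fractions of the Schiff-base protonation
# states from the stepwise log K values quoted in the abstract.
logK = [12.54, 8.10, 6.70, 5.95]   # stepwise protonation constants, highest first

def fractions(pD):
    h = 10.0 ** -pD
    betas = [1.0]                   # beta_0 for the fully deprotonated form L
    for lk in logK:
        betas.append(betas[-1] * 10.0 ** lk)
    terms = [b * h ** n for n, b in enumerate(betas)]
    total = sum(terms)
    return [t / total for t in terms]   # fractions of [L, LH, LH2, LH3, LH4]

fr7 = fractions(7.0)   # distribution at pD 7: the diprotonated form dominates
```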
2. Students' Understanding of Acids/Bases in Organic Chemistry Contexts
ERIC Educational Resources Information Center
Cartrette, David P.; Mayo, Provi M.
2011-01-01
Understanding key foundational principles is vital to learning chemistry across different contexts. One such foundational principle is the acid/base behavior of molecules. In the general chemistry sequence, the Bronsted-Lowry theory is stressed, because it lends itself well to studying equilibrium and kinetics. However, the Lewis theory of…
3. Formation and reactivity of a porphyrin iridium hydride in water: acid dissociation constants and equilibrium thermodynamics relevant to Ir-H, Ir-OH, and Ir-CH2- bond dissociation energetics.
PubMed
2011-11-01
Aqueous solutions of group nine metal(III) (M = Co, Rh, Ir) complexes of tetra(3,5-disulfonatomesityl)porphyrin [(TMPS)M(III)] form an equilibrium distribution of aquo and hydroxo complexes ([(TMPS)M(III)(D(2)O)(2-n)(OD)(n)]((7+n)-)). Evaluation of acid dissociation constants for coordinated water shows that the extent of proton dissociation from water increases regularly on moving down the group from cobalt to iridium, which is consistent with the expected order of increasing metal-ligand bond strengths. Aqueous (D(2)O) solutions of [(TMPS)Ir(III)(D(2)O)(2)](7-) react with dihydrogen to form an iridium hydride complex ([(TMPS)Ir-D(D(2)O)](8-)) with an acid dissociation constant of 1.8(0.5) × 10(-12) (298 K), which is much smaller than that of the Rh-D derivative (4.3 (0.4) × 10(-8)), reflecting a stronger Ir-D bond. The iridium hydride complex reacts with ethene and acetaldehyde to form the organometallic derivatives [(TMPS)Ir-CH(2)CH(2)D(D(2)O)](8-) and [(TMPS)Ir-CH(OD)CH(3)(D(2)O)](8-). Only a six-coordinate carbonyl complex [(TMPS)Ir-D(CO)](8-) is observed for reaction of the Ir-D with CO (P(CO) = 0.2-2.0 atm), which contrasts with the (TMPS)Rh-D analog, which reacts with CO to produce an equilibrium with a rhodium formyl complex ([(TMPS)Rh-CDO(D(2)O)](8-)). Reactivity studies and equilibrium thermodynamic measurements were used to discuss the relative M-X bond energetics (M = Rh, Ir; X = H, OH, and CH(2)-) and the thermodynamically favorable oxidative addition of water with the (TMPS)Ir(II) derivatives.
4. Automated method for determination of dissolved organic carbon-water distribution constants of structurally diverse pollutants using pre-equilibrium solid-phase microextraction.
PubMed
Ripszam, Matyas; Haglund, Peter
2015-02-01
Dissolved organic carbon (DOC) plays a key role in determining the environmental fate of semivolatile organic environmental contaminants. The goal of the present study was to develop a method using commercially available hardware to rapidly characterize the sorption properties of DOC in water samples. The resulting method uses negligible-depletion direct immersion solid-phase microextraction (SPME) and gas chromatography-mass spectrometry. Its performance was evaluated using Nordic reference fulvic acid and 40 priority environmental contaminants that cover a wide range of physicochemical properties. Two SPME fibers had to be used to cope with the span of properties, one coated with polydimethylsiloxane and one with polystyrene divinylbenzene polydimethylsiloxane, for nonpolar and semipolar contaminants, respectively. The measured DOC-water distribution constants showed reasonably good reproducibility (standard deviation ≤ 0.32) and good correlation (R(2) = 0.80) with log octanol-water partition coefficients for nonpolar persistent organic pollutants. The sample pretreatment is limited to filtration, and the method is easy to adjust to different DOC concentrations. These experiments also utilized the latest SPME automation, which largely decreases total cycle time (to 20 min or shorter) and increases sample throughput. This is advantageous when many DOC samples must be characterized or when the determinations must be performed quickly, for example, to avoid precipitation, aggregation, and other changes of DOC structure and properties. The data generated by this method are valuable as a basis for transport and fate modeling studies.
5. An Acid-Base Chemistry Example: Conversion of Nicotine
Summerfield, John H.
1999-10-01
The current government interest in nicotine conversion by cigarette companies provides an example of acid-base chemistry that can be explained to students in the second semester of general chemistry. In particular, the conversion by ammonia of the +1 form of nicotine to the easier-to-assimilate free-base form illustrates the effect of pH on acid-base equilibrium. The part played by ammonia in tobacco smoke is analogous to what takes place when cocaine is "free-based".
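The pH effect described here is just the Henderson-Hasselbalch relation applied to a base: the fraction in the neutral free-base form is 1/(1 + 10**(pKa - pH)). The sketch below assumes a pKa of about 8.0 for the more basic (pyrrolidine) nitrogen of nicotine, used purely for illustration.

```python
# Free-base fraction sketch for a monoprotic base. The pKa value is an
# illustrative assumption for nicotine's more basic nitrogen.
PKA = 8.0

def free_base_fraction(pH):
    return 1.0 / (1.0 + 10.0 ** (PKA - pH))

f_acidic = free_base_fraction(6.0)   # mostly the protonated (+1) form
f_basic = free_base_fraction(9.0)    # ammonia-raised pH: mostly free base
```

Raising the pH by three units shifts the equilibrium from about 1% to about 91% free base, which is the conversion the abstract describes.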
6. Chemical Principles Revisited: Using the Equilibrium Concept.
ERIC Educational Resources Information Center
Mickey, Charles D., Ed.
1981-01-01
Discusses the concept of equilibrium in chemical systems, particularly in relation to predicting the position of equilibrium, predicting spontaneity of a reaction, quantitative applications of the equilibrium constant, heterogeneous equilibrium, determination of the solubility product constant, common-ion effect, and dissolution of precipitates.…
7. A study of pH-dependent photodegradation of amiloride by a multivariate curve resolution approach to combined kinetic and acid-base titration UV data.
PubMed
De Luca, Michele; Ioele, Giuseppina; Mas, Sílvia; Tauler, Romà; Ragno, Gaetano
2012-11-21
Amiloride photostability at different pH values was studied in depth by applying Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) to the UV spectrophotometric data from drug solutions exposed to stressing irradiation. Resolution of all degradation photoproducts was possible by simultaneous spectrophotometric analysis of kinetic photodegradation and acid-base titration experiments. Amiloride photodegradation was shown to be strongly dependent on pH. Two hard modelling constraints were sequentially used in MCR-ALS for the unambiguous resolution of all the species involved in the photodegradation process. An amiloride acid-base system was defined by using the equilibrium constraint, and the photodegradation pathway was modelled taking into account the kinetic constraint. The simultaneous analysis of photodegradation and titration experiments revealed the presence of eight different species, which were differently distributed according to pH and time. Concentration profiles of all the species as well as their pure spectra were resolved and kinetic rate constants were estimated. The values of the rate constants changed with pH, and under alkaline conditions the degradation pathway and photoproducts also changed. These results were compared to those obtained by LC-MS analysis from drug photodegradation experiments. MS analysis allowed the identification of up to five species and showed the simultaneous presence of more than one acid-base equilibrium.
8. Rapid-Equilibrium Enzyme Kinetics
ERIC Educational Resources Information Center
Alberty, Robert A.
2008-01-01
Rapid-equilibrium rate equations for enzyme-catalyzed reactions are especially useful because if experimental data can be fit by these simpler rate equations, the Michaelis constants can be interpreted as equilibrium constants. However, for some reactions it is necessary to use the more complicated steady-state rate equations. Thermodynamics is…
9. Renal acidification responses to respiratory acid-base disorders.
PubMed
2010-01-01
Respiratory acid-base disorders are those abnormalities in acid-base equilibrium that are expressed as primary changes in the arterial carbon dioxide tension (PaCO2). An increase in PaCO2 (hypercapnia) acidifies body fluids and initiates the acid-base disturbance known as respiratory acidosis. By contrast, a decrease in PaCO2 (hypocapnia) alkalinizes body fluids and initiates the acid-base disturbance known as respiratory alkalosis. The impact on systemic acidity of these primary changes in PaCO2 is ameliorated by secondary, directional changes in plasma [HCO3¯] that occur in 2 stages. Acutely, hypercapnia or hypocapnia yields relatively small changes in plasma [HCO3¯] that originate virtually exclusively from titration of the body's nonbicarbonate buffers. During sustained hypercapnia or hypocapnia, much larger changes in plasma [HCO3¯] occur that reflect adjustments in renal acidification mechanisms. Consequently, the deviation of systemic acidity from normal is smaller in the chronic forms of these disorders. Here we provide an overview of the renal acidification responses to respiratory acid-base disorders. We also identify gaps in knowledge that require further research.
PubMed
Erdey, L; Gimesi, O; Szabadváry, F
1969-03-01
Acid-base titrations can be performed with radiometric end-point detection by use of labelled metal salts (e.g., ZnCl(2), HgCl(2)). Owing to the formation or dissolution of the corresponding hydroxide after the equivalence point, the activity of the titrated solution linearly increases or decreases as excess of standard solution is added. The end-point of the titration is determined graphically.
11. Electroreduction and acid-base properties of dipyrrolylquinoxalines.
PubMed
Fu, Zhen; Zhang, Min; Zhu, Weihua; Karnas, Elizabeth; Mase, Kentaro; Ohkubo, Kei; Sessler, Jonathan L; Fukuzumi, Shunichi; Kadish, Karl M
2012-10-18
The electroreduction and acid-base properties of dipyrrolylquinoxalines of the form H(2)DPQ, H(2)DPQ(NO(2)), and H(2)DPQ(NO(2))(2) were investigated in benzonitrile (PhCN) containing 0.1 M tetra-n-butylammonium perchlorate (TBAP). This study focuses on elucidating the complete electrochemistry, spectroelectrochemistry, and acid-base properties of H(2)DPQ(NO(2))(n) (n = 0, 1, or 2) in PhCN before and after the addition of trifluoroacetic acid (TFA), tetra-n-butylammonium hydroxide (TBAOH), tetra-n-butylammonium fluoride (TBAF), or tetra-n-butylammonium acetate (TBAOAc) to solution. Electrochemical and spectroelectrochemical data provide support for the formation of a monodeprotonated anion after disproportionation of a dipyrrolylquinoxaline radical anion produced initially. The generated monoanion is then further reduced in two reversible one-electron-transfer steps at more negative potentials in the case of H(2)DPQ(NO(2)) and H(2)DPQ(NO(2))(2). Electrochemically monitored titrations of H(2)DPQ(NO(2))(n) with OH(-), F(-), or OAc(-) (in the form of TBA(+)X(-) salts) give rise to the same monodeprotonated H(2)DPQ(NO(2))(n) produced during electroreduction in PhCN. This latter anion can then be reduced in two additional one-electron-transfer steps in the case of H(2)DPQ(NO(2)) and H(2)DPQ(NO(2))(2). Spectroscopically monitored titrations of H(2)DPQ(NO(2))(n) with X(-) show a 1:2 stoichiometry and provide evidence for the production of both [H(2)DPQ(NO(2))(n)](-) and XHX(-). The spectroscopically measured equilibrium constants range from log β(2) = 5.3 for the reaction of H(2)DPQ with TBAOAc to log β(2) = 8.8 for the reaction of H(2)DPQ(NO(2))(2) with TBAOH. These results are consistent with a combined deprotonation and anion binding process. Equilibrium constants for the addition of one H(+) to each quinoxaline nitrogen of H(2)DPQ, H(2)DPQ(NO(2)), and H(2)DPQ(NO(2))(2) in PhCN containing 0.1 M TBAP were also determined via electrochemical and spectroscopic means
12. The Conceptual Change Approach to Teaching Chemical Equilibrium
ERIC Educational Resources Information Center
Canpolat, Nurtac; Pinarbasi, Tacettin; Bayrakceken, Samih; Geban, Omer
2006-01-01
This study investigates the effect of a conceptual change approach over traditional instruction on students' understanding of chemical equilibrium concepts (e.g. dynamic nature of equilibrium, definition of equilibrium constant, heterogeneous equilibrium, qualitative interpreting of equilibrium constant, changing the reaction conditions). This…
13. Use of lipophilic ion adsorption isotherms to determine the surface area and the monolayer capacity of a chromatographic packing, as well as the thermodynamic equilibrium constant for its adsorption.
PubMed
Cecchi, T
2005-04-29
A method that champions the approaches of two independent research groups, to quantitate the chromatographic stationary phase surface available for lipophilic ion adsorption, is presented. For the first time the non-approximated expression of the electrostatically modified Langmuir adsorption isotherm was used. The non approximated Gouy-Chapman (G-C) theory equation was used to give the rigorous surface potential. The method helps model makers, interested in ionic interactions, determine whether the potential modified Langmuir isotherm can be linearized, and, accordingly, whether simplified retention equations can be properly used. The theory cultivated here allows the estimates not only of the chromatographically accessible surface area, but also of the thermodynamic equilibrium constant for the adsorption of the amphiphile, the standard free energy of its adsorption, and the monolayer capacity of the packing. In addition, it establishes the limit between a theoretical and an empirical use of the Freundlich isotherm to determine the surface area. Estimates of the parameters characterising the chromatographic system are reliable from the physical point of view, and this greatly validates the present comprehensive approach.
14. Temperature dependence of the NO3 absorption cross-section above 298 K and determination of the equilibrium constant for NO3 + NO2 <--> N2O5 at atmospherically relevant conditions.
PubMed
Osthoff, Hans D; Pilling, Michael J; Ravishankara, A R; Brown, Steven S
2007-11-21
The reaction NO3 + NO2 <--> N2O5 was studied over the 278-323 K temperature range. Concentrations of NO3, N2O5, and NO2 were measured simultaneously in a 3-channel cavity ring-down spectrometer. Equilibrium constants were determined over atmospherically relevant concentration ranges of the three species in both synthetic samples in the laboratory and ambient air samples in the field. A fit to the laboratory data yielded Keq = (5.1 +/- 0.8) x 10(-27) x e((10871 +/- 46)/T) cm3 molecule(-1). The temperature dependence of the NO3 absorption cross-section at 662 nm was investigated over the 298-388 K temperature range. The line width was found to be independent of temperature, in agreement with previous results. New data for the peak cross section (662.2 nm, vacuum wavelength) were combined with previous measurements in the 200 K-298 K region. A least-squares fit to the combined data gave sigma = [(4.582 +/- 0.096) - (0.00796 +/- 0.00031) x T] x 10(-17) cm2 molecule(-1).
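For orientation, the fitted expression can be evaluated at a given temperature. This sketch assumes the denominator in the exponent is the absolute temperature T in kelvin (the standard van 't Hoff form for this equilibrium) and uses the central fit values without their uncertainties:

```python
import math

def keq_n2o5(T):
    """Equilibrium constant for NO3 + NO2 <--> N2O5 from the reported fit,
    Keq = 5.1e-27 * exp(10871 / T), in cm^3 molecule^-1 (T in kelvin)."""
    return 5.1e-27 * math.exp(10871.0 / T)

# Keq grows sharply as the temperature drops, favouring N2O5 at
# night-time tropospheric temperatures.
for T in (278.0, 298.0, 323.0):
    print(f"T = {T:.0f} K: Keq = {keq_n2o5(T):.2e} cm^3 molecule^-1")
```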
15. A chemical equilibrium model for metal adsorption onto bacterial surfaces
Fein, Jeremy B.; Daughney, Christopher J.; Yee, Nathan; Davis, Thomas A.
1997-08-01
This study quantifies metal adsorption onto cell wall surfaces of Bacillus subtilis by applying equilibrium thermodynamics to the specific chemical reactions that occur at the water-bacteria interface. We use acid/base titrations to determine deprotonation constants for the important surface functional groups, and we perform metal-bacteria adsorption experiments, using Cd, Cu, Pb, and Al, to yield site-specific stability constants for the important metal-bacteria surface complexes. The acid/base properties of the cell wall of B. subtilis can best be characterized by invoking three distinct types of surface organic acid functional groups, with pKa values of 4.82 ± 0.14, 6.9 ± 0.5, and 9.4 ± 0.6. These functional groups likely correspond to carboxyl, phosphate, and hydroxyl sites, respectively, that are displayed on the cell wall surface. The results of the metal adsorption experiments indicate that both the carboxyl sites and the phosphate sites contribute to metal uptake. The values of the log stability constants for metal-carboxyl surface complexes range from 3.4 for Cd, 4.2 for Pb, 4.3 for Cu, to 5.0 for Al. These results suggest that the stabilities of the metal-surface complexes are high enough for metal-bacterial interactions to affect metal mobilities in many aqueous systems, and this approach enables quantitative assessment of the effects of bacteria on metal mobilities.
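As a rough illustration of how the reported pKa values translate into site deprotonation (and hence metal-binding availability) across pH, a minimal monoprotic-site sketch; it deliberately neglects the surface electrostatic corrections that the study's surface complexation model includes:

```python
def deprotonated_fraction(pH, pKa):
    """Fraction of an acidic site in its deprotonated form,
    alpha = 1 / (1 + 10**(pKa - pH))  (monoprotic Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

# pKa values reported for the B. subtilis cell-wall sites.
sites = {"carboxyl": 4.82, "phosphate": 6.9, "hydroxyl": 9.4}
for name, pKa in sites.items():
    alpha = deprotonated_fraction(7.0, pKa)
    print(f"{name:9s} (pKa {pKa}): {alpha:.1%} deprotonated at pH 7")
```

At circum-neutral pH the carboxyl sites are almost fully deprotonated while the hydroxyl sites remain protonated, consistent with carboxyl and phosphate dominating metal uptake.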
16. Grinding kinetics and equilibrium states
NASA Technical Reports Server (NTRS)
1984-01-01
The temporary and permanent equilibrium occurring during the initial stage of cement grinding does not indicate the end of comminution, but rather an increased energy consumption during grinding. The constant dynamic equilibrium occurs after a long grinding period indicating the end of comminution for a given particle size. Grinding equilibrium curves can be constructed to show the stages of comminution and agglomeration for certain particle sizes.
17. Hemolymph acid-base balance of the crayfish Astacus leptodactylus as a function of the oxygenation and the acid-base balance of the ambient water.
PubMed
Dejours, P; Armand, J
1980-07-01
The acid-base balance of the prebranchial hemolymph of the crayfish Astacus leptodactylus was studied at various acid-base balances and levels of oxygenation of the ambient water at 13 degrees C. The water acid-base balance was controlled automatically by a pH-CO2-stat. Into water of constant titration alkalinity, TA, this device intermittently injects carbon dioxide to maintain the pH at a preset value. Water pH was reduced to the same value either by hypercapnia (at constant TA) or by adding HCl or H2SO4 to decrease the TA (at constant CO2 tension). Decrease of hemolymph pH and increase of hemolymph PCO2 were similar for the three acidic waters. Water oxygenation changes strongly affected hemolymph ABB. In crayfish living in hyperoxic water (PO2 congruent to 600 Torr) compared to those in hypoxic water (PO2 congruent to 40 Torr), hemolymph pH was 0.3 to 0.4 unit lower and hemolymph PCO2 several times higher, the exact values of pH and PCO2 depending on the controlled ambient acid-base balance. In any study of the hemolymph acid-base balance of the crayfish, it is as important to control the ambient water's acid-base balance and oxygenation as it is to control its temperature, a conclusion which probably holds true for studies on all water breathers.
18. Implementing an Equilibrium Law Teaching Sequence for Secondary School Students to Learn Chemical Equilibrium
ERIC Educational Resources Information Center
Ghirardi, Marco; Marchetti, Fabio; Pettinari, Claudio; Regis, Alberto; Roletto, Ezio
2015-01-01
A didactic sequence is proposed for the teaching of chemical equilibrium law. In this approach, we have avoided the kinetic derivation and the thermodynamic justification of the equilibrium constant. The equilibrium constant expression is established empirically by a trial-and-error approach. Additionally, students learn to use the criterion of…
19. Molten fatty acid based microemulsions.
PubMed
Noirjean, Cecile; Testard, Fabienne; Dejugnat, Christophe; Jestin, Jacques; Carriere, David
2016-06-21
We show that ternary mixtures of water (polar phase), myristic acid (MA, apolar phase) and cetyltrimethylammonium bromide (CTAB, cationic surfactant) studied above the melting point of myristic acid allow the preparation of microemulsions without adding a salt or a co-surfactant. The combination of SANS, SAXS/WAXS, DSC, and phase diagram determination allows a complete characterization of the structures and interactions between components in the molten fatty acid based microemulsions. For the different structures characterized (microemulsion, lamellar or hexagonal phases), a similar thermal behaviour is observed for all ternary MA/CTAB/water monophasic samples and for binary MA/CTAB mixtures without water: crystalline myristic acid melts at 52 °C, and a thermal transition at 70 °C is assigned to the breaking of hydrogen bonds inside the mixed myristic acid/CTAB complex (being the surfactant film in the ternary system). Water determines the film curvature, hence the structures observed at high temperature, but does not influence the thermal behaviour of the ternary system. Myristic acid is partitioned into two "species" that behave independently: pure myristic acid and myristic acid associated with CTAB to form an equimolar complex that plays the role of the surfactant film. We therefore show that myristic acid plays the role of both a solvent (oil) and a co-surfactant, allowing the fine tuning of the structure of oil and water mixtures. This solvosurfactant behaviour of long chain fatty acids opens the way for new formulations with a complex structure without the addition of any extra compound.
20. Surface properties of bacillus subtilis determined by acid/base titrations, and the implications for metal adsorption in fluid-rock systems
SciTech Connect
Fein, J.B.; Davis, T.A.
1996-10-01
Bacteria are ubiquitous in low temperature aqueous systems, but quantifying their effects on aqueous mass transport remains a problem. Numerous studies have qualitatively examined the metal binding capacity of bacterial cell walls. However, quantitative thermodynamic modeling of metal-bacteria-mineral systems requires a detailed knowledge of the surface properties of the bacterial functional groups. In this study, we have conducted acid/base titrations of suspensions of B. subtilis, a common subsurface species whose surface properties are largely controlled by carboxyl groups. Titrations were conducted between pH 2 and 11 at several ionic strengths. The data are analyzed using a constant capacitance model to account for the surface electric field effects on the acidity constant. The pKa value that best fits the titration data is 3.9 ± 0.3. This result represents the first step toward quantifying bacteria-metal and mineral-bacteria-metal interactions using equilibrium thermodynamics.
1. The Kidney and Acid-Base Regulation
ERIC Educational Resources Information Center
Koeppen, Bruce M.
2009-01-01
Since the topic of the role of the kidneys in the regulation of acid base balance was last reviewed from a teaching perspective (Koeppen BM. Renal regulation of acid-base balance. Adv Physiol Educ 20: 132-141, 1998), our understanding of the specific membrane transporters involved in H+, HCO3-, and NH4+ transport, and especially how these…
2. The Conjugate Acid-Base Chart.
ERIC Educational Resources Information Center
Treptow, Richard S.
1986-01-01
Discusses the difficulties that beginning chemistry students have in understanding acid-base chemistry. Describes the use of conjugate acid-base charts in helping students visualize the conjugate relationship. Addresses chart construction, metal ions, buffers and pH titrations, and the organic functional groups and nonaqueous solvents. (TW)
3. Acid-Base Balance in Uremic Rats with Vascular Calcification
PubMed Central
Peralta-Ramírez, Alan; Raya, Ana Isabel; Pineda, Carmen; Rodríguez, Mariano; Aguilera-Tejero, Escolástico; López, Ignacio
2014-01-01
Background/Aims Vascular calcification (VC), a major complication in humans and animals with chronic kidney disease (CKD), is influenced by changes in acid-base balance. The purpose of this study was to describe the acid-base balance in uremic rats with VC and to correlate the parameters that define acid-base equilibrium with VC. Methods Twenty-two rats with CKD induced by 5/6 nephrectomy (5/6 Nx) and 10 nonuremic control rats were studied. Results The 5/6 Nx rats showed extensive VC as evidenced by a high aortic calcium (9.2 ± 1.7 mg/g of tissue) and phosphorus (20.6 ± 4.9 mg/g of tissue) content. Uremic rats had an increased pH level (7.57 ± 0.03) as a consequence of both respiratory (PaCO2 = 28.4 ± 2.1 mm Hg) and, to a lesser degree, metabolic (base excess = 4.1 ± 1 mmol/l) derangements. A high positive correlation between both anion gap (AG) and strong ion difference (SID) with aortic calcium (AG: r = 0.604, p = 0.02; SID: r = 0.647, p = 0.01) and with aortic phosphorus (AG: r = 0.684, p = 0.007; SID: r = 0.785, p = 0.01) was detected. Conclusions In an experimental model of uremic rats, VC showed high positive correlation with AG and SID.
4. Acid-base properties of bentonite rocks with different origins.
PubMed
Nagy, Noémi M; Kónya, József
2006-03-01
Five bentonite samples (35-47% montmorillonite) from a Sarmatian sediment series with bentonite sites around Sajóbábony (Hungary) are studied. Some of these samples were tuffogenic bentonite (sedimentary); the others were bentonitized tuff of volcano-sedimentary origin. The acid-base properties of the edge sites were studied by potentiometric titrations and surface complexation modeling. It was found that the number and the ratio of silanol and aluminol sites, as well as the intrinsic stability constants, are different for the sedimentary bentonite and the bentonitized tuff. The characteristic properties of the edge sites depend on their origin. The acid-base properties are compared to those of other commercial and standard bentonites.
5. Potentiometric study of reaction between periodate and iodide as their tetrabutylammonium salts in chloroform. Application to the determination of iodide and potentiometric detection of end points in acid-base titrations in chloroform.
PubMed
1995-03-01
A potentiometric method for the titration of tetrabutylammonium iodide (TBAI) in chloroform using tetrabutylammonium periodate (TBAPI) as a strong and suitable oxidizing reagent is described. The potentiometric conditions were optimized and the equilibrium constants of the reactions occurring during the titration were determined. The method was used for the determination of iodide both in chloroform and aqueous solutions after extraction into chloroform as ion-association with tetraphenylarsonium. The reaction between TBAPI and TBAI was also used as acid indicator for the potentiometric detection of end points of acid-base titrations in chloroform.
6. Determination of Henry's constant, the dissociation constant, and the buffer capacity of the bicarbonate system in ruminal fluid.
PubMed
Hille, Katharina T; Hetz, Stefan K; Rosendahl, Julia; Braun, Hannah-Sophie; Pieper, Robert; Stumpff, Friederike
2016-01-01
Despite the clinical importance of ruminal acidosis, ruminal buffering continues to be poorly understood. In particular, the constants for the dissociation of H2CO3 and the solubility of CO2 (Henry's constant) have never been stringently determined for ruminal fluid. The pH was measured in parallel directly in the rumen and the reticulum in vivo, and in samples obtained via aspiration from 10 fistulated cows on hay- or concentrate-based diets. The equilibrium constants of the bicarbonate system were measured at 38°C both using the Astrup technique and a newly developed method with titration at 2 levels of partial pressure of CO2 (pCO2; 4.75 and 94.98 kPa), yielding mean values of 0.234 ± 0.005 mmol ∙ L(-1) ∙ kPa(-1) and 6.11 ± 0.02 for Henry's constant and the dissociation constant, respectively (n/n = 31/10). Both reticular pH and the pH of samples measured after removal were more alkalic than those measured in vivo in the rumen (by ΔpH = 0.87 ± 0.04 and 0.26 ± 0.04). The amount of acid or base required to shift the pH of ruminal samples to 6.4 or 5.8 (base excess) differed between the 2 feeding groups. Experimental results are compared with the mathematical predictions of an open 2-buffer Henderson-Hasselbalch equilibrium model. Because pCO2 has pronounced effects on ruminal pH and can decrease rapidly in samples removed from the rumen, introduction of a generally accepted protocol for determining the acid-base status of ruminal fluid with standard levels of pCO2 and measurement of base excess in addition to pH should be considered.
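Using the constants reported above, the open-system Henderson-Hasselbalch relation links ruminal pH, bicarbonate concentration, and pCO2. A minimal sketch; the bicarbonate value below is an illustrative assumption, not data from the study:

```python
import math

S_CO2 = 0.234  # Henry's constant for CO2 in ruminal fluid, mmol L^-1 kPa^-1
PKA = 6.11     # apparent dissociation constant of the bicarbonate system

def ruminal_ph(hco3_mmol_per_l, pco2_kpa):
    """Open-system Henderson-Hasselbalch:
    pH = pKa + log10([HCO3-] / (S * pCO2)), constants measured at 38 C."""
    return PKA + math.log10(hco3_mmol_per_l / (S_CO2 * pco2_kpa))

# Illustrative: 30 mmol/L bicarbonate at the two titration pCO2 levels
# used in the study; higher pCO2 gives a more acidic fluid.
for pco2 in (4.75, 94.98):
    print(f"pCO2 = {pco2} kPa: pH = {ruminal_ph(30.0, pco2):.2f}")
```

The strong dependence of the computed pH on pCO2 is why the authors argue for standardizing pCO2 when assessing the acid-base status of removed ruminal samples.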
8. Jammed acid-base reactions at interfaces.
PubMed
Gibbs-Davis, Julianne M; Kruk, Jennifer J; Konek, Christopher T; Scheidt, Karl A; Geiger, Franz M
2008-11-19
Using nonlinear optics, we show that acid-base chemistry at aqueous/solid interfaces tracks bulk pH changes at low salt concentrations. In the presence of 10 to 100 mM salt concentrations, however, the interfacial acid-base chemistry remains jammed for hours, until it finally occurs within minutes at a rate that follows the kinetic salt effect. For various alkali halide salts, the delay times increase with increasing anion polarizability and extent of cation hydration and lead to massive hysteresis in interfacial acid-base titrations. The resulting implications for pH cycling in these systems are that interfacial systems can spatially and temporally lag bulk acid-base chemistry when the Debye length approaches 1 nm.
9. Use of an Acid-Base Table.
ERIC Educational Resources Information Center
Willis, Grover; And Others
1986-01-01
Identifies several ways in which an acid-base table can provide students with information about chemical reactions. Cites examples of the chart's use and includes a table which indicates the strengths of some common acids and bases. (ML)
10. The comprehensive acid-base characterization of glutathione
Mirzahosseini, Arash; Somlyay, Máté; Noszál, Béla
2015-02-01
Glutathione in its thiol (GSH) and disulfide (GSSG) forms, and 4 related compounds were studied by 1H NMR-pH titrations and a case-tailored evaluation method. The resulting acid-base properties are quantified in terms of 128 microscopic protonation constants; the first complete set of such parameters for this vitally important pair of compounds. The concomitant 12 interactivity parameters were also determined. Since biological redox systems are regularly compared to the GSH-GSSG pair, the eight microscopic thiolate basicities determined this way are exclusive means for assessing subtle redox parameters in a wide pH range.
11. Are Fundamental Constants Really Constant?
ERIC Educational Resources Information Center
Swetman, T. P.
1972-01-01
Discusses Dirac's classical conclusions that the values of e2, M, and m are constants while the value of G decreases with time, which evoked considerable interest among researchers, and traces the historical development by which further experimental evidence indicates that both e and G are constant values. (PS)
12. [Kidney, Fluid, and Acid-Base Balance].
PubMed
Shioji, Naohiro; Hayashi, Masao; Morimatsu, Hiroshi
2016-05-01
Kidneys play an important role in maintaining human homeostasis. They contribute to maintaining body fluid, electrolytes, and acid-base balance. Especially in fluid control, we physicians can intervene in body fluid balance using fluid resuscitation and diuretics. In recent years, one type of resuscitation fluid, hydroxyethyl starch, has been extensively studied in the field of intensive care. Although its effects on fluid resuscitation are reasonable, serious complications such as kidney injury requiring renal replacement therapy occur frequently. Now we have to pay more attention to this important complication. Another topic of fluid management is tolvaptan, a selective vasopressin-2 receptor antagonist. A recent randomized trial suggested that tolvaptan has a similar supportive effect for fluid control and is more cost effective compared to carperitide. In recent years, the Stewart approach has been recognized as one important tool to assess acid-base balance in critically ill patients. This approach has great value, especially for understanding metabolic components in acid-base balance. Even for assessing the effects of the kidneys on acid-base balance, this approach gives us interesting insight. We should appropriately use this new approach to treat acid-base abnormalities in critically ill patients.
13. Estimation of medium effects on equilibrium constants in moderate and high ionic strength solutions at elevated temperatures by using specific interaction theory (SIT): interaction coefficients involving Cl, OH- and Ac- up to 200 degrees C and 400 bars.
PubMed
Xiong, Yongliang
2006-01-01
In this study, a series of interaction coefficients of the Brønsted-Guggenheim-Scatchard specific interaction theory (SIT) have been estimated up to 200 degrees C and 400 bars. The interaction coefficients involving Cl- estimated include epsilon(H+, Cl-), epsilon(Na+, Cl-), epsilon(Ag+, Cl-), epsilon(Na+, AgCl2 -), epsilon(Mg2+, Cl-), epsilon(Ca2+, Cl-), epsilon(Sr2+, Cl-), epsilon(Ba2+, Cl-), epsilon(Sm3+, Cl-), epsilon(Eu3+, Cl-), epsilon(Gd3+, Cl-), and epsilon(GdAc2+, Cl-). The interaction coefficients involving OH- estimated include epsilon(Li+, OH-), epsilon(K+, OH-), epsilon(Na+, OH-), epsilon(Cs+, OH-), epsilon(Sr2+, OH-), and epsilon(Ba2+, OH-). In addition, the interaction coefficients of epsilon(Na+, Ac-) and epsilon(Ca2+, Ac-) have also been estimated. The bulk of interaction coefficients presented in this study has been evaluated from the mean activity coefficients. A few of them have been estimated from the potentiometric and solubility studies. The above interaction coefficients are tested against both experimental mean activity coefficients and equilibrium quotients. Predicted mean activity coefficients are in satisfactory agreement with experimental data. Predicted equilibrium quotients are in very good agreement with experimental values. Based upon its relatively rapid attainment of equilibrium and the ease of determining magnesium concentrations, this study also proposes that the solubility of brucite can be used as a pH (pcH) buffer/sensor for experimental systems in NaCl solutions up to 200 degrees C by employing the predicted solubility quotients of brucite in conjunction with the dissociation quotients of water and the first hydrolysis quotients of Mg2+, all in NaCl solutions.
14. Rapid determination of the equivalence volume in potentiometric acid-base titrations to a preset pH-II Standardizing a solution of a strong base, graphic location of equivalence volume, determination of stability constants of acids and titration of a mixture of two weak acids.
PubMed
1974-06-01
A newly proposed method of titrating weak acids with strong bases is applied to standardize a solution of a strong base, to the graphic determination of the equivalence volume of acetic acid with an error of 0.2%, to calculate the stability constants of the hydroxylammonium ion, boric acid and the hydrogen ascorbate ion, and to analyse a mixture of acetic acid and ammonium ion with an error of 0.2-0.7%.
15. An arbitrary correction function for CO(2) evolution in acid-base titrations and its use in multiparametric refinement of data.
PubMed
Wozniak, M; Nowogrocki, G
1981-08-01
A great number of acid-base titrations are performed under an inert gas flow: in the procedure, a variable amount of CO(2)-from carbonated reactants-is carried away and thus prevents strict application of mass-balance equations. A function for the CO(2) evolution is proposed and introduced into the general expression for the volume of titrant. Use of this expression in multiparametric refinement yields, besides the usual values (concentrations, acidity constants...), a parameter characteristic of this departure of CO(2). Furthermore, a modified weighting factor is introduced to take into account the departure from equilibrium caused by the slow CO(2) evolution. The validity of these functions was successfully tested on three typical examples: neutralization of strong acid by sodium carbonate, of sodium carbonate by strong acid, and of a mixture of hydrochloric acid, 4-nitrophenol and phenol by carbonated potassium hydroxide.
16. Equilibrium Shaping
Izzo, Dario; Petazzi, Lorenzo
2006-08-01
We present a satellite path-planning technique able to make identical spacecraft acquire a given configuration. The technique exploits a behaviour-based approach to achieve autonomous and distributed control over the relative geometry, making use of limited sensory information. A desired velocity is defined for each satellite as a sum of different contributions coming from generic high-level behaviours: forcing the final desired configuration, the behaviours are further defined by an inverse dynamic calculation dubbed Equilibrium Shaping. We show how, considering only three different kinds of behaviours, it is possible to acquire a number of interesting formations, and we set down the theoretical framework to find the entire set. We find that, allowing a limited amount of communication, the technique may also be used to form complex lattice structures. Several control feedbacks able to track the desired velocities are introduced and discussed. Our results suggest that sliding-mode control is particularly appropriate in connection with the developed technique.
17. The physiological assessment of acid-base balance.
PubMed
Howorth, P J
1975-04-01
Acid-base terminology, including the use of SI units, is reviewed. The historical reasons why nomograms have been particularly used in acid-base work are discussed. The theoretical basis of the Henderson-Hasselbalch equation is considered. It is emphasized that the solubility of CO2 in plasma and the apparent first dissociation constant of carbonic acid are not chemical constants when applied to media of uncertain and varying composition such as blood plasma. The use of the Henderson-Hasselbalch equation in making hypothermia corrections for PCO2 is discussed. The Astrup system for the in vitro determination of blood gases and derived parameters is described and the theoretical weakness of the base excess concept stressed. A more clinically oriented approach to the assessment of acid-base problems is presented. Measurements of blood [H+] and PCO2 are considered to be primary data which should be recorded on a chart with in vivo CO2-titration lines (see below). Clinical information and results of other laboratory investigations such as plasma bicarbonate, PO2 and P50 are then to be considered together with the primary data. In order to interpret this combined information it is essential to take into account the known ventilatory response to metabolic acidosis and alkalosis, and the renal response to respiratory acidosis and alkalosis. The use is recommended of a chart showing the whole-body CO2-titration points obtained when patients with different initial levels of non-respiratory [H+] are ventilated. A number of examples are given of the use of this [H+] and PCO2 in vivo chart in the interpretation of acid-base data. The aetiology, prognosis and treatment of metabolic alkalosis is briefly reviewed. Treatment with intravenous acid is recommended for established cases. Attention is drawn to the possibility of iatrogenic production of metabolic alkalosis. Caution is expressed over the use of intravenous alkali in all but the severest cases of metabolic acidosis. The role of
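The Henderson-Hasselbalch relation discussed in this abstract can be illustrated with a short numeric sketch. The constants below (pK' = 6.1 and a CO2 solubility of 0.0301 mmol L-1 mmHg-1) are the conventional 37 °C plasma values, assumed here for illustration rather than taken from the review:

```python
import math

def henderson_hasselbalch(bicarbonate_mmol_l, pco2_mmhg,
                          pk_prime=6.1, co2_solubility=0.0301):
    """pH from plasma [HCO3-] and PCO2 via the Henderson-Hasselbalch equation.

    co2_solubility converts PCO2 (mmHg) to dissolved CO2 (mmol/L); as the
    abstract stresses, neither it nor pk_prime is a true chemical constant
    in a medium of varying composition such as plasma.
    """
    dissolved_co2 = co2_solubility * pco2_mmhg
    return pk_prime + math.log10(bicarbonate_mmol_l / dissolved_co2)

# Normal arterial blood: [HCO3-] = 24 mmol/L, PCO2 = 40 mmHg -> pH near 7.40
print(round(henderson_hasselbalch(24.0, 40.0), 2))
```

Doubling PCO2 at constant bicarbonate drops the computed pH by about 0.3, which is the respiratory-acidosis direction described in the text.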
18. Separation of Acids, Bases, and Neutral Compounds
Fujita, Megumi; Mah, Helen M.; Sgarbi, Paulo W. M.; Lall, Manjinder S.; Ly, Tai Wei; Browne, Lois M.
2003-01-01
Separation of Acids, Bases, and Neutral Compounds requires the following software, which is available for free download from the Internet: Netscape Navigator, version 4.75 or higher, or Microsoft Internet Explorer, version 5.0 or higher; Chime plug-in, version compatible with your OS and browser (available from MDL); and Flash player, version 5 or higher (available from Macromedia).
19. Jigsaw Cooperative Learning: Acid-Base Theories
ERIC Educational Resources Information Center
Tarhan, Leman; Sesen, Burcin Acar
2012-01-01
This study focused on investigating the effectiveness of jigsaw cooperative learning instruction on first-year undergraduates' understanding of acid-base theories. Undergraduates' opinions about jigsaw cooperative learning instruction were also investigated. The participants of this study were 38 first-year undergraduates in chemistry education…
20. The Magic Sign: Acids, Bases, and Indicators.
ERIC Educational Resources Information Center
Phillips, Donald B.
1986-01-01
Presents an approach that is used to introduce elementary and junior high students to a series of activities that will provide concrete experiences with acids, bases, and indicators. Provides instructions and listings of needed solutions and materials for developing this "magic sign" device. Includes background information and several student…
1. Potentiometric Measurement of Transition Ranges and Titration Errors for Acid/Base Indicators
Flowers, Paul A.
1997-07-01
Sophomore analytical chemistry courses typically devote a substantial amount of lecture time to acid/base equilibrium theory, and usually include at least one laboratory project employing potentiometric titrations. In an effort to provide students a laboratory experience that more directly supports their classroom discussions on this important topic, an experiment involving potentiometric measurement of transition ranges and titration errors for common acid/base indicators has been developed. The pH and visually-assessed color of a millimolar strong acid/base system are monitored as a function of added titrant volume, and the resultant data plotted to permit determination of the indicator's transition range and associated titration error. Student response is typically quite positive, and the measured quantities correlate reasonably well to literature values.
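The titration error measured in this experiment can be estimated in closed form for the simplest case, a strong acid titrated with a strong base. The millimolar concentrations and the pH-9 end point below are illustrative choices, not data from the article:

```python
def strong_acid_titration_error(end_ph, ca, va, cb, kw=1.0e-14):
    """Percent titration error when a strong acid (conc. ca, volume va)
    titrated with a strong base (conc. cb) is stopped at an indicator
    end-point pH instead of the true equivalence point (pH 7).

    Exact titrant volume from the charge balance:
        Vb = Va * (Ca - delta) / (Cb + delta),  delta = [H+] - [OH-].
    """
    def volume(ph):
        h = 10.0 ** (-ph)
        delta = h - kw / h
        return va * (ca - delta) / (cb + delta)

    v_equiv = volume(7.0)          # delta = 0 at the equivalence point
    return 100.0 * (volume(end_ph) - v_equiv) / v_equiv

# Millimolar titration, end point read at pH 9: error of roughly +2 %
print(round(strong_acid_titration_error(9.0, 0.001, 25.0, 0.001), 1))
```

The error grows as the analyte becomes more dilute, which is why a millimolar system makes the effect easy for students to measure.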
2. Thermodynamics and Kinetics of Chemical Equilibrium in Solution.
ERIC Educational Resources Information Center
Leenson, I. A.
1986-01-01
Discusses theory of thermodynamics of the equilibrium in solution and dissociation-dimerization kinetics. Describes experimental procedure including determination of molar absorptivity and equilibrium constant, reaction enthalpy, and kinetics of the dissociation-dimerization reaction. (JM)
3. On the Equilibrium States of Interconnected Bubbles or Balloons.
ERIC Educational Resources Information Center
Weinhaus, F.; Barker, W.
1978-01-01
Describes the equilibrium states of a system composed of two interconnected, air-filled spherical membranes of different sizes. The equilibrium configurations are determined by the method of minimization of the availability of the system at constant temperature. (GA)
4. Nuclear magnetic resonance as a tool for determining protonation constants of natural polyprotic bases in solution.
PubMed
Frassineti, C; Ghelli, S; Gans, P; Sabatini, A; Moruzzi, M S; Vacca, A
1995-11-01
The acid-base properties of the tetramine 1,5,10,14-tetraazatetradecane H2N(CH2)3NH(CH2)4NH(CH2)3NH2 (spermine) in deuterated water have been studied at 40 degrees C at various pD values by means of NMR spectroscopy. Both one-dimensional 13C[1H] spectra and two-dimensional 1H/13C heterocorrelation spectra with inverse detection have been recorded. A calculation procedure of general validity has been developed to unravel the effect of rapid exchange between the various species in equilibrium as a function of pD of the solution. The method of calculation used in this part of the new computer program, HYPNMR, is independent of the equilibrium model. HYPNMR has been used to obtain the basicity constants of spermine with respect to the D+ cation at 40 degrees C. Calculations have been performed using either 13C[1H] or 1H/13C data individually, or using both sets of data simultaneously. The results of the latter calculations were practically the same as the results obtained with the single data sets; the calculated errors on the refined parameters were a little smaller. After appropriate empirical corrections for temperature effects and for the presence of D+ in contrast to H+, the calculated constants are compared with spermine protonation constants which have been determined previously both from potentiometric and NMR data.
5. Surface Lewis acid-base properties of polymers measured by inverse gas chromatography.
PubMed
Shi, Baoli; Zhang, Qianru; Jia, Lina; Liu, Yang; Li, Bin
2007-05-18
Surface Lewis acid-base properties are significant for polymer materials. The acid constant, K(a), and base constant, K(b), of many polymers have been characterized by inverse gas chromatography (IGC) in recent years. In this paper, the surface acid-base constants K(a) and K(b) of 20 kinds of polymers measured by IGC in recent years are summarized and discussed, including seven polymers characterized in this work. After plotting K(b) versus K(a), it is found that the polymers can be enclosed by a triangle. They scatter in two regions of the triangle. Four polymers fall in region I; their K(b)/K(a) ratios are 1.4-2.1. The other polymers fall in region II. Most of the polymers are relatively basic materials.
6. Model for acid-base chemistry in nanoparticle growth (MABNAG)
Yli-Juuti, T.; Barsanti, K.; Hildebrandt Ruiz, L.; Kieloaho, A.-J.; Makkonen, U.; Petäjä, T.; Ruuskanen, T.; Kulmala, M.; Riipinen, I.
2013-12-01
Climatic effects of newly-formed atmospheric secondary aerosol particles are to a large extent determined by their condensational growth rates. However, not all the vapours condensing on atmospheric nanoparticles and growing them to climatically relevant sizes have been identified yet, and the effects of particle phase processes on particle growth rates are poorly known. Besides sulfuric acid, organic compounds are known to contribute significantly to atmospheric nanoparticle growth. In this study a particle growth model MABNAG (Model for Acid-Base chemistry in NAnoparticle Growth) was developed to study the effect of salt formation on nanoparticle growth, which has been proposed as a potential mechanism lowering the equilibrium vapour pressures of organic compounds through dissociation in the particle phase and thus preventing their evaporation. MABNAG is a model for monodisperse aqueous particles and it couples dynamics of condensation to particle phase chemistry. Non-zero equilibrium vapour pressures, with both size and composition dependence, are considered for condensation. The model was applied for atmospherically relevant systems with sulfuric acid, one organic acid, ammonia, one amine and water in the gas phase allowed to condense on 3-20 nm particles. The effect of dissociation of the organic acid was found to be small under ambient conditions typical for a boreal forest site, but considerable for base-rich environments (gas phase concentrations of about 10^10 cm^-3 for the sum of the bases). The contribution of the bases to particle mass decreased as particle size increased, except at very high gas phase concentrations of the bases. The relative importance of amine versus ammonia did not change significantly as a function of particle size. While our results give a reasonable first estimate on the maximum contribution of salt formation to nanoparticle growth, further studies on, e.g. the thermodynamic properties of the atmospheric organics, concentrations of low
7. Model for acid-base chemistry in nanoparticle growth (MABNAG)
Yli-Juuti, T.; Barsanti, K.; Hildebrandt Ruiz, L.; Kieloaho, A.-J.; Makkonen, U.; Petäjä, T.; Ruuskanen, T.; Kulmala, M.; Riipinen, I.
2013-03-01
Climatic effects of newly-formed atmospheric secondary aerosol particles are to a large extent determined by their condensational growth rates. However, not all the vapors condensing on atmospheric nanoparticles and growing them to climatically relevant sizes have been identified yet, and the effects of particle phase processes on particle growth rates are poorly known. Besides sulfuric acid, organic compounds are known to contribute significantly to atmospheric nanoparticle growth. In this study a particle growth model MABNAG (Model for Acid-Base chemistry in NAnoparticle Growth) was developed to study the effect of salt formation on nanoparticle growth, which has been proposed as a potential mechanism lowering the equilibrium vapor pressures of organic compounds through dissociation in the particle phase and thus preventing their evaporation. MABNAG is a model for monodisperse aqueous particles and it couples dynamics of condensation to particle phase chemistry. Non-zero equilibrium vapor pressures, with both size and composition dependence, are considered for condensation. The model was applied for atmospherically relevant systems with sulfuric acid, one organic acid, ammonia, one amine and water in the gas phase allowed to condense on 3-20 nm particles. The effect of dissociation of the organic acid was found to be small under ambient conditions typical for a boreal forest site, but considerable for base-rich environments (gas phase concentrations of about 10^10 cm^-3 for the sum of the bases). The contribution of the bases to particle mass decreased as particle size increased, except at very high gas phase concentrations of the bases. The relative importance of amine versus ammonia did not change significantly as a function of particle size. While our results give a reasonable first estimate on the maximum contribution of salt formation to nanoparticle growth, further studies on, e.g. the thermodynamic properties of the atmospheric organics, concentrations of low
8. Exploring Chemical Equilibrium with Poker Chips: A General Chemistry Laboratory Exercise
ERIC Educational Resources Information Center
Bindel, Thomas H.
2012-01-01
A hands-on laboratory exercise at the general chemistry level introduces students to chemical equilibrium through a simulation that uses poker chips and rate equations. More specifically, the exercise allows students to explore reaction tables, dynamic chemical equilibrium, equilibrium constant expressions, and the equilibrium constant based on…
9. Mathematical modeling of acid-base physiology
PubMed Central
Occhipinti, Rossana; Boron, Walter F.
2015-01-01
pH is one of the most important parameters in life, influencing virtually every biological process at the cellular, tissue, and whole-body level. Thus, for cells, it is critical to regulate intracellular pH (pHi) and, for multicellular organisms, to regulate extracellular pH (pHo). pHi regulation depends on the opposing actions of plasma-membrane transporters that tend to increase pHi, and others that tend to decrease pHi. In addition, passive fluxes of uncharged species (e.g., CO2, NH3) and charged species (e.g., HCO3−, NH4+) perturb pHi. These movements not only influence one another, but also perturb the equilibria of a multitude of intracellular and extracellular buffers. Thus, even at the level of a single cell, perturbations in acid-base reactions, diffusion, and transport are so complex that it is impossible to understand them without a quantitative model. Here we summarize some mathematical models developed to shed light onto the complex interconnected events triggered by acid-base movements. We then describe a mathematical model of a spherical cell–which to our knowledge is the first one capable of handling a multitude of buffer reactions–that our team has recently developed to simulate changes in pHi and pHo caused by movements of acid-base equivalents across the plasma membrane of a Xenopus oocyte. Finally, we extend our work to a consideration of the effects of simultaneous CO2 and HCO3− influx into a cell, and envision how future models might extend to other cell types (e.g., erythrocytes) or tissues (e.g., renal proximal-tubule epithelium) important for whole-body pH homeostasis. PMID:25617697
10. Mathematical modeling of acid-base physiology.
PubMed
Occhipinti, Rossana; Boron, Walter F
2015-01-01
pH is one of the most important parameters in life, influencing virtually every biological process at the cellular, tissue, and whole-body level. Thus, for cells, it is critical to regulate intracellular pH (pHi) and, for multicellular organisms, to regulate extracellular pH (pHo). pHi regulation depends on the opposing actions of plasma-membrane transporters that tend to increase pHi, and others that tend to decrease pHi. In addition, passive fluxes of uncharged species (e.g., CO2, NH3) and charged species (e.g., HCO3(-), NH4(+)) perturb pHi. These movements not only influence one another, but also perturb the equilibria of a multitude of intracellular and extracellular buffers. Thus, even at the level of a single cell, perturbations in acid-base reactions, diffusion, and transport are so complex that it is impossible to understand them without a quantitative model. Here we summarize some mathematical models developed to shed light onto the complex interconnected events triggered by acid-base movements. We then describe a mathematical model of a spherical cell-which to our knowledge is the first one capable of handling a multitude of buffer reactions-that our team has recently developed to simulate changes in pHi and pHo caused by movements of acid-base equivalents across the plasma membrane of a Xenopus oocyte. Finally, we extend our work to a consideration of the effects of simultaneous CO2 and HCO3(-) influx into a cell, and envision how future models might extend to other cell types (e.g., erythrocytes) or tissues (e.g., renal proximal-tubule epithelium) important for whole-body pH homeostasis.
11. Absorption, fluorescence, and acid-base equilibria of rhodamines in micellar media of sodium dodecyl sulfate.
PubMed
Obukhova, Elena N; Mchedlov-Petrossyan, Nikolay O; Vodolazkaya, Natalya A; Patsenker, Leonid D; Doroshenko, Andrey O; Marynin, Andriy I; Krasovitskii, Boris M
2017-01-01
Rhodamine dyes are widely used as molecular probes in different fields of science. The aim of this paper was to ascertain to what extent the structural peculiarities of the compounds influence their absorption, emission, and acid-base properties under unified conditions. The acid-base dissociation (HR(+)⇄R+H(+)) of a series of rhodamine dyes was studied in sodium n-dodecylsulfate micellar solutions. In this medium, the form R exists as a zwitterion R(±). The indices of apparent ionization constants of fifteen rhodamine cations HR(+) with different substituents in the xanthene moiety vary within the range of pKa(app)=5.04 to 5.53. The distinct dependence of emission of rhodamines bound to micelles on pH of bulk water opens the possibility of using them as fluorescent interfacial acid-base indicators.
12. Absorption, fluorescence, and acid-base equilibria of rhodamines in micellar media of sodium dodecyl sulfate.
PubMed
Obukhova, Elena N; Mchedlov-Petrossyan, Nikolay O; Vodolazkaya, Natalya A; Patsenker, Leonid D; Doroshenko, Andrey O; Marynin, Andriy I; Krasovitskii, Boris M
2017-01-01
Rhodamine dyes are widely used as molecular probes in different fields of science. The aim of this paper was to ascertain to what extent the structural peculiarities of the compounds influence their absorption, emission, and acid-base properties under unified conditions. The acid-base dissociation (HR(+)⇄R+H(+)) of a series of rhodamine dyes was studied in sodium n-dodecylsulfate micellar solutions. In this medium, the form R exists as a zwitterion R(±). The indices of apparent ionization constants of fifteen rhodamine cations HR(+) with different substituents in the xanthene moiety vary within the range of pKa(app)=5.04 to 5.53. The distinct dependence of emission of rhodamines bound to micelles on pH of bulk water opens the possibility of using them as fluorescent interfacial acid-base indicators. PMID:27423469
13. Extraction of electrolytes from aqueous solutions and their spectrophotometric determination by use of acid-base chromoionophores in lipophylic solvents.
PubMed
Barberi, Paola; Giannetto, Marco; Mori, Giovanni
2004-04-01
The formation of non-absorbing complexes in an organic phase has been exploited for the spectrophotometric determination of ionic analytes in aqueous solutions. The method is based on liquid-liquid extraction of the aqueous solution with lipophilic organic phases containing an acid-base chromoionophore, a neutral lipophilic ligand (neutral carrier) selective to the analyte, and a cationic (or anionic) exchanger. The method avoids all difficulties of the preparation of the very thin membranes used in optodes, so that it can advantageously be used to study the role of the physical-chemical parameters of the system in order to optimize them and to prepare, if necessary, an optimized optode. Two lipophilic derivatives of Nile Blue and 4',5-dibromofluorescein have been synthesized, in order to ensure their permanence within the organic phase. Two different neutral carriers previously characterized by us as ionophores for liquid-membrane Ion Selective Electrodes have been employed. Three different ionic exchangers have been tested. Furthermore, a model allowing the interpolation of experimental data and the determination of the thermodynamic constant of the ionic-exchange equilibrium has been developed and applied. PMID:15242090
14. Acid-base strength and acidochromism of some dimethylamino-azinium iodides. An integrated experimental and theoretical study.
PubMed
Benassi, Enrico; Carlotti, Benedetta; Fortuna, Cosimo G; Barone, Vincenzo; Elisei, Fausto; Spalletti, Anna
2015-01-15
The effects of pH on the spectral properties of stilbazolium salts bearing dimethylamino substituents, namely, trans isomers of the iodides of the dipolar E-[2-(4-dimethylamino)styryl]-1-methylpyridinium, its branched quadrupolar analogue E,E-[2,6-di-(p-dimethylamino)styryl]-1-methylpyridinium, and three analogues, chosen to investigate the effects of the stronger quinolinium acceptor, the longer butadiene π bridge, or both, were investigated through a joint experimental and computational approach. A noticeable acidochromism of the absorption spectra (interesting for applications) was observed, with the basic and protonated species giving intensely colored and transparent solutions, respectively. The acid–base equilibrium constants for the protonation of the dimethylamino group in the ground state (pKa) were experimentally derived. Theoretical calculations according to the thermodynamic Born-Haber cycle provided pKa values in good agreement with the experimental values. The very low fluorescence yield did not allow a direct investigation of the changes in the acid-base properties in the excited state (pKa*) by fluorimetric titrations. Their values were derived by quantum-mechanical calculations and estimated experimentally on the basis of the Förster cycle.
15. Equilibrium bond lengths, force constants and vibrational frequencies of MnF2, FeF2, CoF2, NiF2, and ZnF2 from least-squares analysis of gas-phase electron diffraction data
Vogt, Natalja
2001-08-01
The least-squares analysis of the electron diffraction data for MnF2, FeF2, CoF2, NiF2 and ZnF2 was carried out in terms of a cubic potential function. The obtained equilibrium bond lengths (in Å) are re(Mn-F)=1.797(6), re(Fe-F)=1.755(6), re(Co-F)=1.738(6), re(Ni-F)=1.715(7), and re(Zn-F)=1.729(7). The determined force constants and the corresponding vibrational frequencies are listed. The bond length re(Cu-F)=1.700(14) Å for CuF2 was estimated and the variations of bond lengths for the first-row transition metal difluorides were discussed in light of their electronic structure.
16. Modern quantitative acid-base chemistry.
PubMed
Stewart, P A
1983-12-01
Quantitative analysis of ionic solutions in terms of physical and chemical principles has been effectively prohibited in the past by the overwhelming amount of calculation it required, but computers have suddenly eliminated that prohibition. The result is an approach to acid-base which revolutionizes our ability to understand, predict, and control what happens to hydrogen ions in living systems. This review outlines that approach and suggests some of its most useful implications. Quantitative understanding requires distinctions between independent variables (in body fluids: pCO2, net strong ion charge, and total weak acid, usually protein), and dependent variables ([HCO3-], [HA], [A-], [CO3(2-)], [OH-], and [H+] (or pH)). Dependent variables are determined by independent variables, and can be calculated from the defining equations for the specific system. Hydrogen ion movements between solutions can not affect hydrogen ion concentration; only changes in independent variables can. Many current models for ion movements through membranes will require modification on the basis of this quantitative analysis. Whole body acid-base balance can be understood quantitatively in terms of the three independent variables and their physiological regulation by the lungs, kidneys, gut, and liver. Quantitative analysis also shows that body fluids interact mainly by strong ion movements through the membranes separating them.
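Stewart's scheme, in which the dependent variables are computed from the three independent ones, lends itself to a small numerical sketch. The snippet below solves the charge balance for [H+] by bisection given SID (net strong ion charge), PCO2 and total weak acid; the equilibrium constants are typical 37 °C plasma values assumed for illustration, not taken from this review:

```python
import math

def stewart_ph(sid, pco2, atot,
               kw=4.4e-14, kc=2.46e-11, k3=6.0e-11, ka=3.0e-7):
    """Solve Stewart's charge balance for [H+] by bisection.

    sid  : strong ion difference (Eq/L)   - independent variable
    pco2 : CO2 tension (mmHg)             - independent variable
    atot : total weak acid (mol/L)        - independent variable
    The equilibrium constants are illustrative plasma values at 37 C.
    """
    def charge_balance(h):
        hco3 = kc * pco2 / h          # [HCO3-]
        co3 = k3 * hco3 / h           # [CO3 2-]
        a = ka * atot / (ka + h)      # dissociated weak acid [A-]
        oh = kw / h                   # [OH-]
        return sid + h - hco3 - co3 - a - oh

    lo, hi = 1e-9, 1e-6               # bracket [H+] between pH 9 and pH 6
    for _ in range(100):
        mid = (lo + hi) / 2
        if charge_balance(mid) > 0:   # balance increases with [H+]
            hi = mid
        else:
            lo = mid
    return -math.log10((lo + hi) / 2)

# Normal values: SID = 42 mEq/L, PCO2 = 40 mmHg, Atot = 19 mmol/L -> pH near 7.4
print(round(stewart_ph(0.042, 40.0, 0.019), 2))
```

Changing only PCO2 (a respiratory disturbance) or only SID (a "metabolic" disturbance) moves the computed pH in the expected directions, which is the predictive power the review emphasizes.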
17. Bipolar Membranes for Acid Base Flow Batteries
Anthamatten, Mitchell; Roddecha, Supacharee; Jorne, Jacob; Coughlan, Anna
2011-03-01
Rechargeable batteries can provide grid-scale electricity storage to match power generation with consumption and promote renewable energy sources. Flow batteries offer modular and flexible design, low cost per kWh and high efficiencies. A novel flow battery concept will be presented based on acid-base neutralization where protons (H+) and hydroxyl (OH-) ions react electrochemically to produce water. The large free energy of this highly reversible reaction can be stored chemically, and, upon discharge, can be harvested as usable electricity. The acid-base flow battery concept avoids the use of a sluggish oxygen electrode and utilizes the highly reversible hydrogen electrode, thus eliminating the need for expensive noble metal catalysts. The proposed flow battery is a hybrid of a battery and a fuel cell---hydrogen gas storing chemical energy is produced at one electrode and is immediately consumed at the other electrode. The two electrodes are exposed to low and high pH solutions, and these solutions are separated by a hybrid membrane combining a cation exchange membrane and an anion exchange membrane (CEM/AEM). Membrane design will be discussed, along with ion-transport data for synthesized membranes.
18. A Constant Pressure Bomb
NASA Technical Reports Server (NTRS)
Stevens, F W
1924-01-01
This report describes a new optical method of unusual simplicity and of good accuracy suitable to study the kinetics of gaseous reactions. The device is the complement of the spherical bomb of constant volume, and extends the applicability of the relationship, pv=rt for gaseous equilibrium conditions, to the use of both factors p and v. The method substitutes for the mechanical complications of a manometer placed at some distance from the seat of reaction the possibility of allowing the radiant effects of reaction to record themselves directly upon a sensitive film. It is possible the device may be of use in the study of the photoelectric effects of radiation. The method makes possible a greater precision in the measurement of normal flame velocities than was previously possible. An approximate analysis shows that the increase of pressure and density ahead of the flame is negligible until the velocity of the flame approaches that of sound.
19. Teaching Acid/Base Physiology in the Laboratory
ERIC Educational Resources Information Center
Friis, Ulla G.; Plovsing, Ronni; Hansen, Klaus; Laursen, Bent G.; Wallstedt, Birgitta
2010-01-01
Acid/base homeostasis is one of the most difficult subdisciplines of physiology for medical students to master. A different approach, where theory and practice are linked, might help students develop a deeper understanding of acid/base homeostasis. We therefore set out to develop a laboratory exercise in acid/base physiology that would provide…
20. A General Simulator for Acid-Base Titrations
de Levie, Robert
1999-07-01
General formal expressions are provided to facilitate the automatic computer calculation of acid-base titration curves of arbitrary mixtures of acids, bases, and salts, without and with activity corrections based on the Davies equation. Explicit relations are also given for the buffer strength of mixtures of acids, bases, and salts.
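The closed-form approach described here (computing titrant volume as an explicit function of pH rather than the reverse) can be sketched for the simplest case, a single monoprotic weak acid titrated with a strong base. The Ka and concentrations below are illustrative, and the Davies-equation activity corrections mentioned in the abstract are omitted:

```python
def titrant_volume(ph, ca, va, cb, ka, kw=1.0e-14):
    """Volume of strong base Vb needed to bring a weak acid HA to a given pH.

    Uses the exact proton-balance expression
        Vb/Va = (Ca * alpha - delta) / (Cb + delta),
    with alpha = Ka/(Ka + [H+]) and delta = [H+] - [OH-].
    Activity corrections are omitted in this sketch.
    """
    h = 10.0 ** (-ph)
    delta = h - kw / h
    alpha = ka / (ka + h)
    return va * (ca * alpha - delta) / (cb + delta)

# 0.1 M acetic acid (25 mL, pKa 4.76) titrated with 0.1 M NaOH:
# at pH = pKa half the acid is neutralized, so Vb is close to 12.5 mL
print(round(titrant_volume(4.76, 0.1, 25.0, 0.1, 10 ** -4.76), 2))
```

Evaluating this function over a grid of pH values and plotting pH against Vb reproduces the full titration curve without any iterative root finding, which is the point of the simulator's formulation.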
1. Using Willie's Acid-Base Box for Blood Gas Analysis
ERIC Educational Resources Information Center
Dietz, John R.
2011-01-01
In this article, the author describes a method developed by Dr. William T. Lipscomb for teaching blood gas analysis of acid-base status and provides three examples using Willie's acid-base box. Willie's acid-base box is constructed using three of the parameters of standard arterial blood gas analysis: (1) pH; (2) bicarbonate; and (3) CO[subscript…
2. A comparative study of surface acid-base characteristics of natural illites from different origins
SciTech Connect
Liu, W.; Sun, Z.; Forsling, W.; Du, Q.; Tang, H.
1999-11-01
The acid-base characteristics of naturally occurring illites, collected from different locations, were investigated by potentiometric titrations. The experimental data were interpreted using the constant capacitance surface complexation model. Considerable release of Al and Si from illite samples and subsequent complexation or precipitation of hydroxyl aluminosilicates generated during the acidimetric forward titration and the alkalimetric back titration, respectively, were observed. In order to describe the acid-base chemistry of aqueous illite surfaces, two surface proton-reaction models, introducing the corresponding reactions between the dissolved aluminum species and silicic acid, as well as a surface Al-Si complex on homogeneous illite surface sites, were proposed. Optimization results indicated that both models could provide a good description of the titration behavior for all aqueous illite systems in this study. The intrinsic acidity constants for the different illites were similar in Model 1, showing some generalities in their acid-base properties. Model 1 may be considered as a simplification of Model 2, evident in the similarities between the corresponding constants. In addition, the formation constant for surface Al-Si species (complexes or precipitates) is relatively stable in this study.
3. Importance of acid-base equilibrium in electrocatalytic oxidation of formic acid on platinum.
PubMed
Joo, Jiyong; Uchida, Taro; Cuesta, Angel; Koper, Marc T M; Osawa, Masatoshi
2013-07-10
Electro-oxidation of formic acid on Pt in acid is one of the most fundamental model reactions in electrocatalysis. However, its reaction mechanism is still a matter of strong debate. Two different mechanisms, bridge-bonded adsorbed formate mechanism and direct HCOOH oxidation mechanism, have been proposed by assuming a priori that formic acid is the major reactant. Through systematic examination of the reaction over a wide pH range (0-12) by cyclic voltammetry and surface-enhanced infrared spectroscopy, we show that the formate ion is the major reactant over the whole pH range examined, even in strong acid. The performance of the reaction is maximal at a pH close to the pKa of formic acid. The experimental results are reasonably explained by a new mechanism in which formate ion is directly oxidized via a weakly adsorbed formate precursor. The reaction serves as a generic example illustrating the importance of pH variation in catalytic proton-coupled electron-transfer reactions.
4. Acid-base regulation during heating and cooling in the lizard, Varanus exanthematicus.
PubMed
Wood, S C; Johansen, K; Glass, M L; Hoyt, R W
1981-04-01
Current concepts of acid-base balance in ectothermic animals require that arterial pH vary inversely with body temperature in order to maintain a constant OH-/H+ ratio and a constant net charge on proteins. The present study evaluates acid-base regulation in Varanus exanthematicus under various regimes of heating and cooling between 15 and 38 degrees C. Arterial blood was sampled during heating and cooling at various rates, using restrained and unrestrained animals with and without face masks. Arterial pH was found to have a small temperature dependence, i.e., pH = 7.66 - 0.005(T). The slope (dpH/dT = -0.005), while significantly different from zero (P less than 0.05), is much smaller in magnitude than that required for a constant OH-/H+ ratio or a constant imidazole alphastat (dpH/dT ≈ -0.018). The physiological mechanism that distinguishes this species from most other ectotherms is the presence of a ventilatory response to temperature-induced changes in CO2 production and O2 uptake, i.e., VE/VO2 is constant. This results in a constant O2 extraction and arterial saturation (approx. 90%), which is adaptive to the high aerobic requirements of this species.
5. Effect of temperature on the acid-base properties of the alumina surface: microcalorimetry and acid-base titration experiments.
PubMed
Morel, Jean-Pierre; Marmier, Nicolas; Hurel, Charlotte; Morel-Desrosiers, Nicole
2006-06-15
Sorption reactions on natural or synthetic materials that can attenuate the migration of pollutants in the geosphere could be affected by temperature variations. Nevertheless, most of the theoretical models describing sorption reactions are formulated at 25 degrees C. To check these models at other temperatures, experimental data such as enthalpies of sorption are required. Highly sensitive microcalorimeters can now be used to determine the heat effects accompanying the sorption of radionuclides on oxide-water interfaces, but enthalpies of sorption cannot be extracted from microcalorimetric data without a clear knowledge of the thermodynamics of protonation and deprotonation of the oxide surface. However, the values reported in the literature show large discrepancies, and one must conclude that, amazingly, this fundamental problem of proton binding is not yet resolved. We have thus undertaken to measure by titration microcalorimetry the heat effects accompanying proton exchange at the alumina-water interface at 25 degrees C. Based on (i) the surface site speciation provided by a surface complexation model (built from acid-base titrations at 25 degrees C) and (ii) the results of the microcalorimetric experiments, calculations have been made to extract the enthalpy changes associated with the first and second deprotonations of the alumina surface, respectively. The values obtained are ΔH1 = 80 ± 10 kJ mol(-1) and ΔH2 = 5 ± 3 kJ mol(-1). In a second step, these enthalpy values were used to calculate the alumina surface acidity constants at 50 degrees C via the van't Hoff equation. A theoretical titration curve at 50 degrees C was then calculated and compared to the experimental alumina surface titration curve. Good agreement between the predicted acid-base titration curve and the experimental one was observed.
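The extrapolation step in this abstract rests on the integrated van't Hoff equation. The sketch below is illustrative (it assumes, as the abstract's approach implies, that ΔH is temperature-independent between 25 and 50 degrees C) and uses the reported enthalpy values.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def vant_hoff_log_shift(dH_J_per_mol: float, T1: float, T2: float) -> float:
    """Change in log10(K) going from T1 to T2 (in kelvin), assuming constant dH:
    ln(K2/K1) = (dH/R) * (1/T1 - 1/T2)."""
    ln_ratio = (dH_J_per_mol / R) * (1.0 / T1 - 1.0 / T2)
    return ln_ratio / math.log(10)

# First deprotonation, dH1 = 80 kJ/mol, 25 -> 50 C: shifts log K by ~ +1.08
dlog1 = vant_hoff_log_shift(80e3, 298.15, 323.15)
# Second deprotonation, dH2 = 5 kJ/mol: shifts log K by only ~ +0.07
dlog2 = vant_hoff_log_shift(5e3, 298.15, 323.15)
```

The contrast between the two shifts shows why the first deprotonation dominates the temperature sensitivity of the predicted 50 degrees C titration curve.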
6. Acid-base properties of 2-phenethyldithiocarbamoylacetic acid, an antitumor agent
Novozhilova, N. E.; Kutina, N. N.; Petukhova, O. A.; Kharitonov, Yu. Ya.
2013-07-01
The acid-base properties of the 2-phenethyldithiocarbamoylacetic acid (PET) substance, which belongs to the class of isothiocyanates and is capable of inhibiting the development of tumors in many experimental models, were studied. The acidity and hydrolysis constants of the PET substance in ethanol, acetone, aqueous ethanol, and aqueous acetone solutions were determined from the data of potentiometric (pH-metric) titration of ethanol and acetone solutions of PET with aqueous sodium hydroxide at room temperature.
7. Drug-induced acid-base disorders.
PubMed
Kitterer, Daniel; Schwab, Matthias; Alscher, M Dominik; Braun, Niko; Latus, Joerg
2015-09-01
The incidence of acid-base disorders (ABDs) is high, especially in hospitalized patients. ABDs are often indicators for severe systemic disorders. In everyday clinical practice, analysis of ABDs must be performed in a standardized manner. Highly sensitive diagnostic tools to distinguish the various ABDs include the anion gap and the serum osmolar gap. Drug-induced ABDs can be classified into five different categories in terms of their pathophysiology: (1) metabolic acidosis caused by acid overload, which may occur through accumulation of acids by endogenous (e.g., lactic acidosis by biguanides, propofol-related syndrome) or exogenous (e.g., glycol-dependant drugs, such as diazepam or salicylates) mechanisms or by decreased renal acid excretion (e.g., distal renal tubular acidosis by amphotericin B, nonsteroidal anti-inflammatory drugs, vitamin D); (2) base loss: proximal renal tubular acidosis by drugs (e.g., ifosfamide, aminoglycosides, carbonic anhydrase inhibitors, antiretrovirals, oxaliplatin or cisplatin) in the context of Fanconi syndrome; (3) alkalosis resulting from acid and/or chloride loss by renal (e.g., diuretics, penicillins, aminoglycosides) or extrarenal (e.g., laxative drugs) mechanisms; (4) exogenous bicarbonate loads: milk-alkali syndrome, overshoot alkalosis after bicarbonate therapy or citrate administration; and (5) respiratory acidosis or alkalosis resulting from drug-induced depression of the respiratory center or neuromuscular impairment (e.g., anesthetics, sedatives) or hyperventilation (e.g., salicylates, epinephrine, nicotine).
8. Semi-empirical proton binding constants for natural organic matter
Matynia, Anthony; Lenoir, Thomas; Causse, Benjamin; Spadini, Lorenzo; Jacquet, Thierry; Manceau, Alain
2010-03-01
Average proton binding constants ( KH,i) for structure models of humic (HA) and fulvic (FA) acids were estimated semi-empirically by breaking down the macromolecules into reactive structural units (RSUs), and calculating KH,i values of the RSUs using linear free energy relationships (LFER) of Hammett. Predicted log KH,COOH and log KH,Ph-OH are 3.73 ± 0.13 and 9.83 ± 0.23 for HA, and 3.80 ± 0.20 and 9.87 ± 0.31 for FA. The predicted constants for phenolic-type sites (Ph-OH) are generally higher than those derived from potentiometric titrations, but the difference may not be significant in view of the considerable uncertainty of the acidity constants determined from acid-base measurements at high pH. The predicted constants for carboxylic-type sites agree well with titration data analyzed with Model VI (4.10 ± 0.16 for HA, 3.20 ± 0.13 for FA; Tipping, 1998), the Impermeable Sphere model (3.50-4.50 for HA; Avena et al., 1999), and the Stockholm Humic Model (4.10 ± 0.20 for HA, 3.50 ± 0.40 for FA; Gustafsson, 2001), but differ by about one log unit from those obtained by Milne et al. (2001) with the NICA-Donnan model (3.09 ± 0.51 for HA, 2.65 ± 0.43 for FA), and used to derive recommended generic values. To clarify this ambiguity, 10 high-quality titration data from Milne et al. (2001) were re-analyzed with the new predicted equilibrium constants. The data are described equally well with the previous and new sets of values ( R2 ⩾ 0.98), not necessarily because the NICA-Donnan model is overparametrized, but because titration lacks the sensitivity needed to quantify the full binding properties of humic substances. Correlations between NICA-Donnan parameters are discussed, but general progress is impeded by the unknown number of independent parameters that can be varied during regression of a model fit to titration data. The high consistency between predicted and experimental KH,COOH values, excluding those of Milne et al. (2001), gives faith in the proposed
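The Hammett LFER underlying the predictions above has a simple arithmetic form. The sketch below is illustrative only: the substituents and sigma values are standard tabulated Hammett constants, and rho = 1.0 is the defining benzoic acid reference reaction; none of these specific numbers are taken from the paper, whose actual RSU decomposition is more elaborate.

```python
# Hammett-type LFER: pKa = pKa(parent) - rho * sum(sigma_i).
PKA_BENZOIC = 4.20  # parent benzoic acid, 25 C (literature value)
SIGMA = {"p-OH": -0.37, "p-NO2": 0.78, "m-Cl": 0.37}  # tabulated Hammett constants

def hammett_pKa(substituents, rho: float = 1.0, pKa_parent: float = PKA_BENZOIC) -> float:
    """Predicted pKa of a substituted benzoic-acid-like reactive structural unit."""
    return pKa_parent - rho * sum(SIGMA[s] for s in substituents)

# Electron-withdrawing NO2 lowers the pKa (stronger acid):
print(hammett_pKa(["p-NO2"]))  # 4.20 - 0.78 = 3.42
```

Electron-donating groups such as p-OH raise the predicted pKa, which is the direction of the effect exploited when breaking humic macromolecules into RSUs.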
9. Acid-base balance and plasma composition in the aestivating lungfish (Protopterus).
PubMed
DeLaney, R G; Lahiri, S; Hamilton, R; Fishman, P
1977-01-01
Upon entering into aestivation, Protopterus aethiopicus develops a respiratory acidosis. A slow compensatory increase in plasma bicarbonate suffices only to partially restore arterial pH toward normal. The cessation of water intake from the start of aestivation results in hemoconcentration and marked oliguria. The concentrations of most plasma constituents continue to increase progressively, and the electrolyte ratios change. The increase in urea concentration is disproportionately high for the degree of dehydration and constitutes an increasing fraction of total plasma osmolality. Acid-base and electrolyte balance do not reach a new equilibrium within 1 yr in the cocoon. PMID:13665
10. Teaching Chemical Equilibrium with the Jigsaw Technique
Doymus, Kemal
2008-03-01
This study investigates the effect of cooperative learning (jigsaw) versus individual learning methods on students' understanding of chemical equilibrium in a first-year general chemistry course. This study was carried out in two different classes in the department of primary science education during the 2005-2006 academic year. One of the classes was randomly assigned as the non-jigsaw group (control) and the other as the jigsaw group (cooperative). Students participating in the jigsaw group were divided into four "home groups" since the topic of chemical equilibrium is divided into four subtopics (Modules A, B, C and D). Each of these home groups contained four students. The groups were as follows: (1) Home Group A (HGA), representing the equilibrium state and quantitative aspects of equilibrium (Module A), (2) Home Group B (HGB), representing the equilibrium constant and relationships involving equilibrium constants (Module B), (3) Home Group C (HGC), representing altering equilibrium conditions: Le Chatelier's principle (Module C), and (4) Home Group D (HGD), representing calculations with equilibrium constants (Module D). The home groups then broke apart, like pieces of a jigsaw puzzle, and the students moved into jigsaw groups consisting of members from the other home groups who were assigned the same portion of the material. The jigsaw groups were then in charge of teaching their specific subtopic to the rest of the students in their learning group. The main data collection tool was a Chemical Equilibrium Achievement Test (CEAT), which was applied to both the jigsaw and non-jigsaw groups. The results indicated that the jigsaw group was more successful than the non-jigsaw group (individual learning method).
11. Using quantitative acid-base analysis in the ICU.
PubMed
Lloyd, P; Freebairn, R
2006-03-01
The quantitative acid-base 'Strong Ion' calculator is a practical application of quantitative acid-base chemistry, as developed by Peter Stewart and Peter Constable. It quantifies the three independent factors that control acidity, calculates the concentration and charge of unmeasured ions, produces a report based on these calculations and displays a Gamblegram depicting measured ionic species. Used together with the medical history, quantitative acid-base analysis has advantages over traditional approaches.
12. Kinetics of acid base catalyzed transesterification of Jatropha curcas oil.
PubMed
Jain, Siddharth; Sharma, M P
2010-10-01
Out of various non-edible oil resources, Jatropha curcas oil (JCO) is considered as future feedstock for biodiesel production in India. Limited work is reported on the kinetics of transesterification of high free fatty acids containing oil. The present study reports the results of kinetic study of two-step acid base catalyzed transesterification process carried out at an optimum temperature of 65 °C and 50 °C for esterification and transesterification respectively under the optimum methanol to oil ratio of 3:7 (v/v), catalyst concentration 1% (w/w) for H₂SO₄ and NaOH. The yield of methyl ester (ME) has been used to study the effect of different parameters. The results indicate that both esterification and transesterification reaction are of first order with reaction rate constant of 0.0031 min⁻¹ and 0.008 min⁻¹ respectively. The maximum yield of 21.2% of ME during esterification and 90.1% from transesterification of pretreated JCO has been obtained.
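Given the first-order rate constants reported for the two steps, conversion-time behaviour follows directly. This is a minimal sketch assuming simple irreversible first-order kinetics, x = 1 - exp(-k*t), consistent with the rate law stated in the abstract.

```python
import math

k_esterification = 0.0031      # min^-1, reported for the acid-catalyzed step
k_transesterification = 0.008  # min^-1, reported for the base-catalyzed step

def conversion(k: float, t_min: float) -> float:
    """Fractional conversion of an irreversible first-order reaction after t_min minutes."""
    return 1.0 - math.exp(-k * t_min)

# Half-life of the transesterification step: ln(2)/k, about 87 minutes.
half_life = math.log(2) / k_transesterification
```

The roughly 2.6-fold ratio of the two rate constants explains why the slower esterification pretreatment, not the transesterification itself, limits overall throughput.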
13. The effects of secular calcium and magnesium concentration changes on the thermodynamics of seawater acid/base chemistry: Implications for Eocene and Cretaceous ocean carbon chemistry and buffering
Hain, Mathis P.; Sigman, Daniel M.; Higgins, John A.; Haug, Gerald H.
2015-05-01
Reconstructed changes in seawater calcium and magnesium concentration ([Ca2+], [Mg2+]) predictably affect the ocean's acid/base and carbon chemistry. Yet inaccurate formulations of chemical equilibrium "constants" are currently in use to account for these changes. Here we develop an efficient implementation of the MIAMI Ionic Interaction Model to predict all chemical equilibrium constants required for carbon chemistry calculations under variable [Ca2+] and [Mg2+]. We investigate the impact of [Ca2+] and [Mg2+] on the relationships among the ocean's pH, CO2, dissolved inorganic carbon (DIC), saturation state of CaCO3 (Ω), and buffer capacity. Increasing [Ca2+] and/or [Mg2+] enhances "ion pairing," which increases seawater buffering by increasing the concentration ratio of total to "free" (uncomplexed) carbonate ion. An increase in [Ca2+], however, also causes a decline in carbonate ion to maintain a given Ω, thereby overwhelming the ion pairing effect and decreasing seawater buffering. Given the reconstructions of Eocene [Ca2+] and [Mg2+] ([Ca2+]~20 mM; [Mg2+]~30 mM), Eocene seawater would have required essentially the same DIC as today to simultaneously explain a similar-to-modern Ω and the estimated Eocene atmospheric CO2 of ~1000 ppm. During the Cretaceous, at ~4 times modern [Ca2+], ocean buffering would have been at a minimum. Overall, during times of high seawater [Ca2+], CaCO3 saturation, pH, and atmospheric CO2 were more susceptible to perturbations of the global carbon cycle. For example, given both Eocene and Cretaceous seawater [Ca2+] and [Mg2+], a doubling of atmospheric CO2 would require less carbon addition to the ocean/atmosphere system than under modern seawater composition. Moreover, increasing seawater buffering since the Cretaceous may have been a driver of evolution by raising energetic demands of biologically controlled calcification and CO2 concentration mechanisms that aid photosynthesis.
14. Magnetospheric equilibrium with anisotropic pressure
SciTech Connect
Cheng, C.Z.
1991-07-01
Self-consistent magnetospheric equilibrium with anisotropic pressure is obtained by employing an iterative metric method for solving the inverse equilibrium equation in an optimal flux coordinate system. A method of determining plasma parallel and perpendicular pressures from either an analytic particle distribution or a particle distribution measured along the satellite's path is presented. The numerical results of axisymmetric magnetospheric equilibrium including the effects of finite beta, pressure anisotropy, and boundary conditions are presented for a bi-Maxwellian particle distribution. For the isotropic pressure cases, the finite beta effect produces an outward expansion of the constant magnetic flux surfaces in relation to the dipole field lines, and along the magnetic field the toroidal ring current is maximum at the magnetic equator. The effect of pressure anisotropy is found to further expand the flux surfaces outward. Along the magnetic field lines the westward ring current can peak away from the equator due to an eastward current contribution resulting from pressure anisotropy. As pressure anisotropy increases, the peak westward current can become more singular. The outer boundary flux surface has a significant effect on the magnetospheric equilibrium. For an outer flux boundary resembling a dayside flux surface compressed by solar wind pressure, the deformation of the magnetic field can be quite different from that for an outer flux boundary resembling a tail-like surface. 23 refs., 17 figs.
15. Identification of acid-base catalytic residues of high-Mr thioredoxin reductase from Plasmodium falciparum.
PubMed
McMillan, Paul J; Arscott, L David; Ballou, David P; Becker, Katja; Williams, Charles H; Müller, Sylke
2006-11-01
High-M(r) thioredoxin reductase from the malaria parasite Plasmodium falciparum (PfTrxR) contains three redox active centers (FAD, Cys-88/Cys-93, and Cys-535/Cys-540) that are in redox communication. The catalytic mechanism of PfTrxR, which involves dithiol-disulfide interchanges requiring acid-base catalysis, was studied by steady-state kinetics, spectral analyses of anaerobic static titrations, and rapid kinetics analysis of wild-type enzyme and variants involving the His-509-Glu-514 dyad as the presumed acid-base catalyst. The dyad is conserved in all members of the enzyme family. Substitution of His-509 with glutamine and of Glu-514 with alanine led to TrxR with only 0.5 and 7% of wild-type activity, respectively, thus demonstrating the crucial roles of these residues for enzymatic activity. The H509Q variant had rate constants in both the reductive and oxidative half-reactions that were dramatically less than those of wild-type enzyme, and no thiolate-flavin charge-transfer complex was observed. Glu-514 was shown to be involved in dithiol-disulfide interchange between the Cys-88/Cys-93 and Cys-535/Cys-540 pairs. In addition, Glu-514 appears to greatly enhance the role of His-509 in acid-base catalysis. It can be concluded that the His-509-Glu-514 dyad, in analogy to those in related oxidoreductases, acts as the acid-base catalyst in PfTrxR.
16. Isodynamic axisymmetric equilibrium near the magnetic axis
SciTech Connect
Arsenin, V. V.
2013-08-15
Plasma equilibrium near the magnetic axis of an axisymmetric toroidal magnetic confinement system is described in orthogonal flux coordinates. For the case of a constant current density in the vicinity of the axis and magnetic surfaces with nearly circular cross sections, expressions for the poloidal and toroidal magnetic field components are obtained in these coordinates by using an expansion in the reciprocal of the aspect ratio. These expressions allow one to easily derive relationships between quantities in an isodynamic equilibrium, in which the absolute value of the magnetic field is constant along the magnetic surface (Palumbo's configuration).
17. Renal acid-base metabolism after ischemia.
PubMed
Holloway, J C; Phifer, T; Henderson, R; Welbourne, T C
1986-05-01
The response of the kidney to ischemia-induced cellular acidosis was followed over the immediate one hr post-ischemia reflow period. Clearance and extraction experiments as well as measurement of cortical intracellular pH (pHi) were performed on Inactin-anesthetized Sprague-Dawley rats. Arteriovenous concentration differences and para-aminohippurate extraction were obtained by cannulating the left renal vein. Base production was monitored as bicarbonate released into the renal vein and urine; net base production was related to the renal handling of glutamine and ammonia as well as to renal oxygen consumption and pHi. After a 15 min control period, the left renal artery was snared for one-half hr followed by release and four consecutive 15 min reflow periods. During the control period, cortical cell pHi measured by [14C]-5,5-Dimethyl-2,4-Oxazolidinedione distribution was 7.07 +/- 0.08, and Q-O2 was 14.1 +/- 2.2 micromoles/min; neither net glutamine utilization nor net bicarbonate generation occurred. After 30 min of ischemia, renal tissue pH fell to 6.6 +/- 0.15. However, within 45 min of reflow, cortical cell pH returned and exceeded the control value, 7.33 +/- 0.06 vs. 7.15 +/- 0.08. This increase in pHi was associated with a significant rise in cellular metabolic rate, Q-O2 increased to 20.3 +/- 6.4 micromoles/min. Corresponding with cellular alkalosis was a net production of bicarbonate and a net ammonia uptake and glutamine release; urinary acidification was abolished. These results are consistent with a nonexcretory renal metabolic base generating mechanism governing cellular acid base homeostasis following ischemia. PMID:3723929
18. What is the Ultimate Goal in Acid-Base Regulation?
ERIC Educational Resources Information Center
Balakrishnan, Selvakumar; Gopalakrishnan, Maya; Alagesan, Murali; Prakash, E. Sankaranarayanan
2007-01-01
It is common to see chapters on acid-base physiology state that the goal of acid-base regulatory mechanisms is to maintain the pH of arterial plasma and not arterial PCO2 (PaCO2).
19. A Closer Look at Acid-Base Olfactory Titrations
ERIC Educational Resources Information Center
Neppel, Kerry; Oliver-Hoyo, Maria T.; Queen, Connie; Reed, Nicole
2005-01-01
Olfactory titrations using raw onions and eugenol as acid-base indicators are reported. An in-depth investigation of olfactory titrations is presented, including requirements for potential olfactory indicators; protocols for using garlic, onions, and vanillin as acid-base olfactory indicators are tested.
20. A Modern Approach to Acid-Base Chemistry
ERIC Educational Resources Information Center
Drago, Russell S.
1974-01-01
Summarizes current status of our knowledge about acid-base interactions, including Lewis considerations, experimental design, data about donor-acceptor systems, common misconceptions, and hard-soft acid-base model. Indicates that there is the possibility of developing unifying concepts for chemical reactions of inorganic compounds. (CC)
1. Chiral shift reagent for amino acids based on resonance-assisted hydrogen bonding.
PubMed
Chin, Jik; Kim, Dong Chan; Kim, Hae-Jo; Panosyan, Francis B; Kim, Kwan Mook
2004-07-22
A chiral aldehyde that forms resonance-assisted hydrogen-bonded imines with amino acids has been developed. This hydrogen bond not only increases the equilibrium constant for imine formation but also provides a highly downfield-shifted NMR singlet for evaluating enantiomeric excess and absolute stereochemistry of amino acids. PMID:15255698
2. Influence of kinetics on the determination of the surface reactivity of oxide suspensions by acid-base titration.
PubMed
Duc, M; Adekola, F; Lefèvre, G; Fédoroff, M
2006-11-01
The effect of acid-base titration protocol and speed on pH measurement and surface charge calculation was studied on suspensions of gamma-alumina, hematite, goethite, and silica, whose size and porosity have been well characterized. The titration protocol has an important effect on surface charge calculation as well as on acid-base constants obtained by fitting of the titration curves. Variations of pH versus time after addition of acid or base to the suspension were interpreted as diffusion processes. Resulting apparent diffusion coefficients depend on the nature of the oxide and on its porosity.
3. Influence of dissolved organic carbon content on modelling natural organic matter acid-base properties.
PubMed
Garnier, Cédric; Mounier, Stéphane; Benaïm, Jean Yves
2004-10-01
Natural organic matter (NOM) behaviour towards protons is an important parameter for understanding NOM fate in the environment. Moreover, it is necessary to determine NOM acid-base properties before investigating trace metal complexation by natural organic matter. This work focuses on the possibility of determining these acid-base properties by accurate and simple titrations, even at low organic matter concentrations. The experiments were conducted on concentrated and diluted solutions of humic and fulvic acids extracted from the Laurentian River, on concentrated and diluted model solutions of well-known simple molecules (acetic and phenolic acids), and on natural samples from the Seine river (France) which were not pre-concentrated. Titration experiments were modelled with a discrete 6-acidic-site model, except for the model solutions. The modelling software used, called PROSECE (Programme d'Optimisation et de SpEciation Chimique dans l'Environnement), was developed in our laboratory and is based on mass balance equilibrium resolution. The results obtained on extracted organic matter and model solutions point out a threshold value for a confident determination of the studied organic matter's acid-base properties. They also show an aberrant decrease of the carboxylic/phenolic ratio with increasing sample dilution. This shift is neither due to any conformational effect, since it is also observed in model solutions, nor to variations in ionic strength, which was controlled in all experiments. On the other hand, it could be the result of an electrode malfunction occurring at basic pH values, whose effect is amplified at low total concentrations of acidic sites. So, under our conditions, the limit for correct modelling of NOM acid-base properties is defined as 0.04 meq of total analysed acidic site concentration. As for the analysed natural samples, due to their high acidic site content, it is possible to model their behaviour despite the low organic carbon concentration.
4. Sound speeds in suspensions in thermodynamic equilibrium
Temkin, S.
1992-11-01
This work considers sound propagation in suspensions of particles of constant mass in fluids, in both relaxed and frozen thermodynamic equilibrium. Treating suspensions as relaxing media, thermodynamic arguments are used to obtain their sound speeds in equilibrium conditions. The results for relaxed equilibrium, which is applicable in the limit of low frequencies, agree with existing theories for aerosols, but disagree with Wood's equation. It is shown that the latter is thermodynamically correct only in the exceptional case when the specific heat ratios of the fluid and of the particles are equal to unity. In all other cases discrepancies occur. These may be significant when one of the two phases in the suspension is a gas, as is the case in aerosols and in bubbly liquids. The paper also includes a brief discussion of the sound speed in frozen equilibrium.
5. Radiative-dynamical equilibrium states for Jupiter
NASA Technical Reports Server (NTRS)
Trafton, L. M.; Stone, P. H.
1974-01-01
In order to obtain accurate estimates of the radiative heating that drives motions in Jupiter's atmosphere, previous radiative equilibrium calculations are improved by including the NH3 opacities and updated results for the pressure-induced opacities. These additions increase the radiative lapse rate near the top of the statically unstable region and lead to a fairly constant radiative lapse rate below the tropopause. The radiative-convective equilibrium temperature structure consistent with these changes is calculated, but it differs only slightly from earlier calculations. The radiative equilibrium calculations are used to determine whether equilibrium states can occur on Jupiter that are similar to the baroclinic instability regimes on the Earth and Mars. The results show that Jupiter's dynamical regime cannot be of this kind, except possibly at very high latitudes, and that its regime must be a basically less stable one than this.
6. Study of monoprotic acid-base equilibria in aqueous micellar solutions of nonionic surfactants using spectrophotometry and chemometrics.
PubMed
2015-10-01
Many studies have shown the distribution of solutes between an aqueous phase and a micellar pseudo-phase in aqueous micellar solutions. However, spectrophotometric studies of acid-base equilibria in these media do not confirm such distribution because of the collinearity between the concentrations of chemical species in the two phases. The collinearity causes the number of detected species to be equal to the number of species in a homogeneous solution, which is easily misinterpreted as homogeneity of the micellar solution; therefore the collinearity is often neglected. This interpretation contradicts the distribution theory in micellar media and must be avoided. The acid-base equilibrium of an indicator was studied in aqueous micellar solutions of a nonionic surfactant to address the collinearity using UV/Visible spectrophotometry. Simultaneous analysis (matrix augmentation) of the equilibrium and solvation data was applied to eliminate the collinearity from the equilibrium data. A model was then suggested for the equilibrium and fitted to the augmented data to estimate distribution coefficients of the species between the two phases. Moreover, complete resolution of the concentration and spectral profiles of the species in each phase was achieved.
7. Equilibrium structure of gas phase o-benzyne
Groner, Peter; Kukolich, Stephen G.
2006-01-01
An equilibrium structure has been derived for o-benzyne from experimental rotational constants of seven isotopomers and vibration-rotation constants calculated from MP2(full)/6-31G(d) quadratic and cubic force fields. In the case of benzene, this method yields results that are in excellent agreement with those obtained from high-quality ab initio force fields. The ab initio-calculated vibrational averaging corrections were applied to the measured A0, B0 and C0 rotational constants, and the resulting experimental, near-equilibrium rotational constants were used in a least-squares fit to determine the approximate equilibrium structural parameters. The C-C bond lengths for this equilibrium structure of o-benzyne are, beginning with the formal triple bond (C1-C2): 1.255, 1.383, 1.403 and 1.405 Å. The bond angles obtained are in good agreement with most of the recent ab initio predictions.
8. An Olfactory Indicator for Acid-Base Titrations.
ERIC Educational Resources Information Center
Flair, Mark N.; Setzer, William N.
1990-01-01
The use of an olfactory acid-base indicator in titrations for visually impaired students is discussed. Potential olfactory indicators include eugenol, thymol, vanillin, and thiophenol. Titrations performed with each indicator with eugenol proved to be successful. (KR)
9. Biologist's Toolbox. Acid-base Balance: An Educational Computer Game.
ERIC Educational Resources Information Center
Boyle, Joseph, III; Robinson, Gloria
1987-01-01
Describes a microcomputer program that can be used in teaching the basic physiological aspects of acid-base (AB) balance. Explains how its game format and graphic approach can be applied in diagnostic and therapeutic exercises. (ML)
10. The Bronsted-Lowry Acid-Base Concept.
ERIC Educational Resources Information Center
Kauffman, George B.
1988-01-01
Gives the background history of the simultaneous discovery of acid-base relationships by Johannes Bronsted and Thomas Lowry. Provides a brief biographical sketch of each. Discusses their concept of acids and bases in some detail. (CW)
11. Getting Freshman in Equilibrium.
ERIC Educational Resources Information Center
Journal of Chemical Education, 1983
1983-01-01
Various aspects of chemical equilibrium were discussed in six papers presented at the Seventh Biennial Conference on Chemical Education (Stillwater, Oklahoma 1982). These include student problems in understanding hydrolysis, helping students discover/uncover topics, equilibrium demonstrations, instructional strategies, and flaws to kinetic…
12. Acid-base homeostasis in the human system
NASA Technical Reports Server (NTRS)
White, R. J.
1974-01-01
Acid-base regulation is a cooperative phenomena in vivo with body fluids, extracellular and intracellular buffers, lungs, and kidneys all playing important roles. The present account is much too brief to be considered a review of present knowledge of these regulatory systems, and should be viewed, instead, as a guide to the elements necessary to construct a simple model of the mutual interactions of the acid-base regulatory systems of the body.
13. Spectral and Acid-Base Properties of Hydroxyflavones in Micellar Solutions of Cationic Surfactants
Lipkovska, N. A.; Barvinchenko, V. N.; Fedyanina, T. V.; Rugal', A. A.
2014-09-01
It has been shown that the spectral characteristics (intensity, position of the absorption band) and the acid-base properties in a series of structurally similar hydroxyflavones depend on the concentration of the cationic surfactants miramistin and decamethoxin in aqueous solutions, and the extent of their changes is more pronounced for hydrophobic quercetin than for hydrophilic rutin. For the first time, we have determined the apparent dissociation constants of quercetin and rutin in solutions of these cationic surfactants (pKa1) over a broad concentration range and we have established that they decrease in the series water-decamethoxin-miramistin.
14. The species- and site-specific acid-base properties of penicillamine and its homodisulfide
Mirzahosseini, Arash; Szilvay, András; Noszál, Béla
2014-08-01
Penicillamine, penicillamine disulfide and 4 related compounds were studied by 1H NMR-pH titrations and case-tailored evaluation methods. The resulting acid-base properties are quantified in terms of 14 macroscopic and 28 microscopic protonation constants and the concomitant 7 interactivity parameters. The species- and site-specific basicities are interpreted by means of inductive and shielding effects through various intra- and intermolecular comparisons. The thiolate basicities determined this way are key parameters and exclusive means for the prediction of thiolate oxidizabilities and chelate forming properties in order to understand and influence chelation therapy and oxidative stress at the molecular level.
15. Stoichiometry and Formation Constant Determination by Linear Sweep Voltammetry.
ERIC Educational Resources Information Center
Schultz, Franklin A.
1979-01-01
In this paper an experiment is described in which the equilibrium constants necessary for determining the composition and distribution of lead (II)-oxalate species may be measured by linear sweep voltammetry. (Author/BB)
16. Determination of Acidity Constants by Gradient Flow-Injection Titration
ERIC Educational Resources Information Center
Conceicao, Antonio C. L.; Minas da Piedade, Manuel E.
2006-01-01
A three-hour laboratory experiment, designed for an advanced undergraduate course in instrumental analysis that illustrates the application of the gradient chamber flow-injection titration (GCFIT) method with spectrophotometric detection to determine acidity constants is presented. The procedure involves the use of an acid-base indicator to obtain…
17. A Better Way of Dealing with Chemical Equilibrium.
ERIC Educational Resources Information Center
Tykodi, Ralph J.
1986-01-01
Discusses how to address the concept of chemical equilibrium through the use of thermodynamic activities. Describes the advantages of setting up an equilibrium constant in terms of activities and demonstrates how to approximate those activities by practical measures such as partial pressures, mole fractions, and molar concentrations. (TW)
18. Formation of nitric acid hydrates - A chemical equilibrium approach
NASA Technical Reports Server (NTRS)
Smith, Roland H.
1990-01-01
Published data are used to calculate equilibrium constants for reactions of the formation of nitric acid hydrates over the temperature range 190 to 205 K. Standard enthalpies of formation and standard entropies are calculated for the tri- and mono-hydrates. These are shown to be in reasonable agreement with earlier calorimetric measurements. The formation of nitric acid trihydrate in the polar stratosphere is discussed in terms of these equilibrium constants.
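The thermodynamic bookkeeping described above, equilibrium constants over 190 to 205 K derived from standard enthalpies and entropies, follows from K = exp(-ΔG°/RT) with ΔG° = ΔH° - TΔS°. A minimal sketch; the ΔH° and ΔS° values below are hypothetical placeholders, not the paper's data:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def equilibrium_constant(dH, dS, T):
    """K from ln K = -(dH - T*dS)/(R*T); dH in J/mol, dS in J/(mol K), T in K."""
    dG = dH - T * dS  # standard Gibbs energy of reaction
    return math.exp(-dG / (R * T))

# Hypothetical values for illustration only (not the paper's data):
dH, dS = -200e3, -700.0
for T in (190, 195, 200, 205):
    print(T, equilibrium_constant(dH, dS, T))
```

For an exothermic hydrate-forming reaction like this, K falls steeply as the stratosphere warms, which is why hydrate formation is confined to the coldest temperatures.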
19. Carbonic anhydrase and acid-base regulation in fish.
PubMed
Gilmour, K M; Perry, S F
2009-06-01
Carbonic anhydrase (CA) is the zinc metalloenzyme that catalyses the reversible reactions of CO(2) with water. CA plays a crucial role in systemic acid-base regulation in fish by providing acid-base equivalents for exchange with the environment. Unlike air-breathing vertebrates, which frequently utilize alterations of breathing (respiratory compensation) to regulate acid-base status, acid-base balance in fish relies almost entirely upon the direct exchange of acid-base equivalents with the environment (metabolic compensation). The gill is the critical site of metabolic compensation, with the kidney playing a supporting role. At the gill, cytosolic CA catalyses the hydration of CO(2) to H(+) and HCO(3)(-) for export to the water. In the kidney, cytosolic and membrane-bound CA isoforms have been implicated in HCO(3)(-) reabsorption and urine acidification. In this review, the CA isoforms that have been identified to date in fish will be discussed together with their tissue localizations and roles in systemic acid-base regulation.
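The CO2 hydration equilibrium that CA catalyses is the same one that links pH, bicarbonate, and CO2 tension through the Henderson-Hasselbalch relation. A minimal sketch; the pK' and CO2-solubility values are the conventional human-plasma figures, used here only as illustrative assumptions (not fish-specific data):

```python
import math

def blood_ph(hco3_mM, pco2_torr, pK=6.1, s=0.03):
    """Henderson-Hasselbalch for the CO2/HCO3- pair.

    pK and s (CO2 solubility, mM/Torr) are conventional
    human-plasma values -- illustrative assumptions here.
    """
    return pK + math.log10(hco3_mM / (s * pco2_torr))

print(round(blood_ph(24.0, 40.0), 2))  # → 7.4
```

The form of the equation makes the review's point concrete: acid-base status can be adjusted either through PCO2 (respiratory compensation) or through HCO3- exchange (metabolic compensation, the dominant route in fish).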
20. The acid-base titration of montmorillonite
Bourg, I. C.; Sposito, G.; Bourg, A. C.
2003-12-01
Proton binding to clay minerals plays an important role in the chemical reactivity of soils (e.g., acidification, retention of nutrients or pollutants). It should also affect the performance of clay barriers for waste disposal. The surface acidity of clay minerals is commonly modelled empirically by assuming generic amphoteric surface sites (>SOH) on a flat surface, with fitted site densities and acidity constants. Current advances in experimental methods (notably spectroscopy) are rapidly improving our understanding of the structure and reactivity of the surface of clay minerals (arrangement of the particles, nature of the reactive surface sites, adsorption mechanisms). These developments are motivated by the difficulty of modelling the surface chemistry of mineral surfaces at the macro-scale (e.g., adsorption or titration) without a detailed (molecular-scale) picture of the mechanisms, and should be progressively incorporated into surface complexation models. In this view, we have combined recent estimates of montmorillonite surface properties (surface site density and structure, edge surface area, surface electrostatic potential) with surface site acidities obtained from the titration of alpha-Al2O3 and SiO2, and a novel method of accounting for the unknown initial net proton surface charge of the solid. The model predictions were compared to experimental titrations of SWy-1 montmorillonite and purified MX-80 bentonite in 0.1-0.5 mol/L NaClO4 and 0.005-0.5 mol/L NaNO3 background electrolytes, respectively. Most of the experimental data were appropriately described by the model after we adjusted a single parameter (silanol sites on the surface of montmorillonite were made to be slightly more acidic than those of silica). At low ionic strength and acidic pH the model underestimated the buffering capacity of the montmorillonite, perhaps due to clay swelling or to the interlayer adsorption of dissolved aluminum. The agreement between our model and the experimental
1. Temperature and acid-base balance in the American lobster Homarus americanus.
PubMed
Qadri, Syed Aman; Camacho, Joseph; Wang, Hongkun; Taylor, Josi R; Grosell, Martin; Worden, Mary Kate
2007-04-01
Lobsters (Homarus americanus) in the wild inhabit ocean waters where temperature can vary over a broad range (0-25 degrees C). To examine how environmental thermal variability might affect lobster physiology, we examine the effects of temperature and thermal change on the acid-base status of the lobster hemolymph. Total CO2, pH, PCO2 and HCO3- were measured in hemolymph sampled from lobsters acclimated to temperature in the laboratory as well as from lobsters acclimated to seasonal temperatures in the wild. Our results demonstrate that the change in hemolymph pH as a function of temperature follows the rule of constant relative alkalinity in lobsters acclimated to temperature over a period of weeks. However, thermal change can alter lobster acid-base status over a time course of minutes. Acute increases in temperature trigger a respiratory compensated metabolic acidosis of the hemolymph. Both the strength and frequency of the lobster heartbeat in vitro are modulated by changes in pH within the physiological range measured in vivo. These observations suggest that changes in acid-base status triggered by thermal variations in the environment might modulate lobster cardiac performance in vivo.
2. Acid-base chemistry of white wine: analytical characterisation and chemical modelling.
PubMed
Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe
2012-01-01
A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of their ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak one sensitive to the chemical equilibria). Coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the most relevant substances responsible of the acid-base equilibria of wine. The analytical concentration of carboxylic acids and of other acid-base active substances was used as input, with the total acidity, for the chemical modelling step of the study based on the contemporary treatment of overlapped protonation equilibria. New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid for mixed solvent (ethanol-water mixture), ionic strength, and temperature to ensure a thermodynamic level to the study. Validation of the chemical model optimized is achieved by way of conductometric measurements and using a synthetic "wine" especially adapted for testing.
4. A New Application for Radioimmunoassay: Measurement of Thermodynamic Constants.
ERIC Educational Resources Information Center
1983-01-01
Describes a laboratory experiment in which an equilibrium radioimmunoassay (RIA) is used to estimate thermodynamic parameters such as equilibrium constants. The experiment is simple and inexpensive, and it introduces a technique that is important in the clinical chemistry and research laboratory. Background information, procedures, and results are…
5. Chemical Equilibrium, Unit 3: Chemical Equilibrium Calculations. A Computer-Enriched Module for Introductory Chemistry. Student's Guide and Teacher's Guide.
ERIC Educational Resources Information Center
Jameson, Cynthia J.
Presented are the teacher's guide and student materials for one of a series of self-instructional, computer-based learning modules for an introductory, undergraduate chemistry course. The student manual for this unit on chemical equilibrium calculations includes objectives, prerequisites, a discussion of the equilibrium constant (K), and ten…
6. Microwave spectrum and equilibrium structure of o-xylene
Vogt, Natalja; Demaison, Jean; Geiger, Werner; Rudolph, Heinz Dieter
2013-06-01
Ground state rotational constants were determined for 14 isotopologues of o-xylene. These rotational constants have been corrected with the rovibrational constants calculated from a quantum chemical force field. It was found that the derived semiexperimental equilibrium rotational constants of the deuterated isotopologues are not fully compatible with those of the non-deuterated ones. To mitigate the consequences of this incompatibility, the semiexperimental equilibrium rotational constants of the non-deuterated isotopologues have been supplemented by structural parameters, in particular those for hydrogen atoms, from high level ab initio calculations. The combined data have been used in a weighted least-squares fit to determine an accurate equilibrium structure. It was shown, at least in the present case, that the empirical structures are not sufficiently accurate and are, therefore, hardly appropriate for large molecules with many hydrogen atoms.
7. Far-from-equilibrium kinetic processes
2015-12-01
We analyze the kinetics of activated processes that take place under far-from-equilibrium conditions, when the system is subjected to external driving forces or gradients or at high values of affinities. We use mesoscopic non-equilibrium thermodynamics to show that when a force is applied, the reaction rate depends on the force. In the case of a chemical reaction at high affinity values, the reaction rate is no longer constant but depends on affinity, which implies that the law of mass action is no longer valid. This result is in good agreement with the kinetic theory of reacting gases, which uses a Chapman-Enskog expansion of the probability distribution.
8. Acid-base properties of the Fe(CN)₆³⁻/Fe(CN)₆⁴⁻ redox couple in the presence of various background mineral acids and salts
SciTech Connect
Crozes, X.; Blanc, P.; Moisy, P.; Cote, G.
2012-04-15
The acid-base behavior of Fe(CN)₆⁴⁻ was investigated by measuring the formal potentials of the Fe(CN)₆³⁻/Fe(CN)₆⁴⁻ couple over a wide range of acidic and neutral solution compositions. The experimental data were fitted to a model taking into account the protonated forms of Fe(CN)₆⁴⁻ and using values of the activities of species in solution, calculated with a simple solution model and a series of binary data available in the literature. The fitting needed to take account of the protonated species HFe(CN)₆³⁻ and H₂Fe(CN)₆²⁻, already described in the literature, but also the species H₃Fe(CN)₆⁻ (associated with the acid-base equilibrium H₃Fe(CN)₆⁻ ↔ H₂Fe(CN)₆²⁻ + H⁺). The acidic dissociation constants of HFe(CN)₆³⁻, H₂Fe(CN)₆²⁻ and H₃Fe(CN)₆⁻ were found to be pK₁(II) = 3.9 ± 0.1, pK₂(II) = 2.0 ± 0.1, and pK₃(II) = 0.0 ± 0.1, respectively. These constants were determined by taking into account that the activities of the species are independent of the ionic strength. (authors)
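The three stepwise dissociation constants reported above fix the pH-dependent speciation of the hexacyanoferrate(II) acid. A sketch of that speciation calculation for a generic triprotic acid H3L; the default pKa values are the ones quoted in the abstract, listed in order of successive dissociations from the fully protonated form:

```python
def speciation(pH, pKas=(0.0, 2.0, 3.9)):
    """Fractions of [H3L, H2L, HL, L] for a triprotic acid.

    pKas are the successive dissociation constants starting from
    the fully protonated form; the defaults are those quoted in
    the abstract for the hexacyanoferrate(II) system.
    """
    h = 10.0 ** (-pH)
    Ks = [10.0 ** (-pK) for pK in pKas]
    # unnormalised weights: h^3, h^2*K1, h*K1*K2, K1*K2*K3
    w = [h ** 3, h ** 2 * Ks[0], h * Ks[0] * Ks[1], Ks[0] * Ks[1] * Ks[2]]
    total = sum(w)
    return [x / total for x in w]

# At pH equal to the last pKa, the HL and L fractions are equal:
print([round(f, 3) for f in speciation(3.9)])
```

A distribution of this kind is what lets the potentiometric data discriminate among the candidate protonated species over the acidic range studied.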
9. Acid base reactions, phosphate and arsenate complexation, and their competitive adsorption at the surface of goethite in 0.7 M NaCl solution
Gao, Yan; Mucci, Alfonso
2001-07-01
Potentiometric titrations of the goethite-water interface were carried out at 25°C in 0.1, 0.3 and 0.7 M NaCl solutions. The acid/base properties of goethite at pH > 4 in a 0.7 M NaCl solution can be reproduced successfully using either the Constant Capacitance (CCM), the Basic Stern (BSM) or the Triple Layer models (TLM) when two surface acidity constants are considered. Phosphate and arsenate complexation at the surface of goethite was studied in batch adsorption experiments. The experiments were conducted in 0.7 mol/L NaCl solutions at 25°C in the pH range of 3.0 to 10.0. Phosphate shows a strong affinity for the goethite surface and the amount of phosphate adsorbed decreases with increasing pH. Phosphate complexation is described using a model consisting of three monodentate surface complexes. Arsenate shows a similar adsorption pattern on goethite but a higher affinity than phosphate. A model including three surface complexation constants describes the arsenate adsorption at [AsO4]init = 23 and 34 μmol/L. The model prediction, however, overestimates arsenate adsorption at [AsO4]init = 8.8 μmol/L. The goethite surface acidity constants as well as the preceding phosphate and arsenate surface complexation constants were evaluated by the CCM and BSM with the aid of the computer program FITEQL, version 2.0. The experimental investigation of phosphate and arsenate competitive adsorption in 0.7 mol/L NaCl was performed at [PO4]/[AsO4] ratios of 1:1, 2.5:1 and 5:1 with [AsO4]init = 9.0 μmol/L and at a [PO4]/[AsO4] ratio of 1:1 with [AsO4]init = 22 μmol/L. The surface complexation of arsenate decreases significantly in competitive adsorption experiments and the decrease is proportional to the amount of phosphate present. Phosphate adsorption is also reduced but less drastically in competitive adsorption and is not affected significantly by incremental additions of arsenate at pH > 7. The equilibrium model derived by combining the single oxyanion
10. Response reactions: equilibrium coupling.
PubMed
Hoffmann, Eufrozina A; Nagypal, Istvan
2006-06-01
It is pointed out and illustrated in the present paper that if a homogeneous multiple equilibrium system containing k components and q species is composed of the reactants actually taken and their reactions contain only k + 1 species, then we have a unique representation with (q - k) stoichiometrically independent reactions (SIRs). We define these as coupling reactions. All the other possible combinations with k + 1 species are the coupled reactions that are in equilibrium when the (q - k) SIRs are in equilibrium. The response of the equilibrium state for perturbation is determined by the coupling and coupled equilibria. Depending on the circumstances and the actual thermodynamic data, the effect of coupled equilibria may overtake the effect of the coupling ones, leading to phenomena that are in apparent contradiction with Le Chatelier's principle. PMID:16722770
11. Approaches to the Treatment of Equilibrium Perturbations
Canagaratna, Sebastian G.
2003-10-01
Perturbations from equilibrium are treated in the textbooks by a combination of Le Châtelier's principle, the comparison of the equilibrium constant K with the reaction quotient Q, and the kinetic approach. Each of these methods is briefly reviewed. This is followed by derivations of the variation of the equilibrium value of the extent of reaction, ξeq, with various parameters on which it depends. Near equilibrium this relationship can be represented by a straight line. The equilibrium system can be regarded as moving on this line as the parameter is varied. The slope of the line depends on quantities like enthalpy of reaction, volume of reaction and so forth. The derivation shows that these quantities pertain to the equilibrium system, not the standard state. Also, the derivation makes clear what kind of assumptions underlie our conclusions. The derivation of these relations involves knowledge of thermodynamics that is well within the grasp of junior level physical chemistry students. The conclusions that follow from the derived relations are given as subsidiary rules in the form of the slope of ξeq with T, p, et cetera. The rules are used to develop a visual way of predicting the direction of shift of a perturbed system. This method can be used to supplement one of the other methods even at the introductory level.
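The Q-versus-K comparison reviewed above reduces to a simple decision rule. A minimal sketch:

```python
def shift_direction(Q, K):
    """Compare the reaction quotient Q with the equilibrium constant K."""
    if Q < K:
        return "forward"   # net reaction proceeds toward products
    if Q > K:
        return "reverse"   # net reaction proceeds toward reactants
    return "at equilibrium"

# Hypothetical numbers: removing product lowers Q below K, so the
# perturbed system shifts forward to restore equilibrium.
print(shift_direction(0.2, 1.5))  # forward
```

This is the algebraic counterpart of the graphical ξeq-versus-parameter picture developed in the article.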
12. Computing Equilibrium Chemical Compositions
NASA Technical Reports Server (NTRS)
Mcbride, Bonnie J.; Gordon, Sanford
1995-01-01
Chemical Equilibrium With Transport Properties, 1993 (CET93) computer program provides data on chemical-equilibrium compositions. Aids calculation of thermodynamic properties of chemical systems. Information essential in design and analysis of such equipment as compressors, turbines, nozzles, engines, shock tubes, heat exchangers, and chemical-processing equipment. CET93/PC is version of CET93 specifically designed to run within 640K memory limit of MS-DOS operating system. CET93/PC written in FORTRAN.
13. Acid-Base Titration of (S)-Aspartic Acid: A Circular Dichroism Spectrophotometry Experiment
Cavaleiro, Ana M. V.; Pedrosa de Jesus, Júlio D.
2000-09-01
The magnitude of the circular dichroism of (S)-aspartic acid in aqueous solutions at a fixed wavelength varies with the addition of strong base. This laboratory experiment consists of the circular dichroism spectrophotometric acid-base titration of (S)-aspartic acid in dilute aqueous solutions, and the use of the resulting data to determine the ionization constant of the protonated amino group. The work familiarizes students with circular dichroism and illustrates the possibility of performing titrations using a less usual instrumental method of following the course of a reaction. It shows the use of a chiroptical property in the determination of the concentration in solution of an optically active molecule, and exemplifies the use of a spectrophotometric titration in the determination of an ionization constant.
14. Chemical rescue, multiple ionizable groups, and general acid-base catalysis in the HDV genomic ribozyme.
PubMed
Perrotta, Anne T; Wadkins, Timothy S; Been, Michael D
2006-07-01
In the ribozyme from the hepatitis delta virus (HDV) genomic strand RNA, a cytosine side chain is proposed to facilitate proton transfer in the transition state of the reaction and, thus, act as a general acid-base catalyst. Mutation of this active-site cytosine (C75) reduced RNA cleavage rates by as much as one million-fold, but addition of exogenous cytosine and certain nucleobase or imidazole analogs can partially rescue activity in these mutants. However, pH-rate profiles for the rescued reactions were bell shaped, and only one leg of the pH-rate curve could be attributed to ionization of the exogenous nucleobase or buffer. When a second potential ionizable nucleobase (C41) was removed, one leg of the bell-shaped curve was eliminated in the chemical-rescue reaction. With this construct, the apparent pK(a) determined from the pH-rate profile correlated with the solution pK(a) of the buffer, and the contribution of the buffer to the rate enhancement could be directly evaluated in a free-energy or Brønsted plot. The free-energy relationship between the acid dissociation constant of the buffer and the rate constant for cleavage (Brønsted value, beta, = approximately 0.5) was consistent with a mechanism in which the buffer acted as a general acid-base catalyst. These data support the hypothesis that cytosine 75, in the intact ribozyme, acts as a general acid-base catalyst.
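A Brønsted plot of the kind described is a linear free-energy fit of log k against the buffer pKa, with the slope giving β. A sketch using synthetic data whose slope is deliberately set to 0.5 to mirror the reported β; these are illustrative numbers, not the paper's measurements:

```python
def bronsted_beta(pKas, log_ks):
    """Least-squares slope of log10(k) versus buffer pKa (Bronsted beta)."""
    n = len(pKas)
    mx = sum(pKas) / n
    my = sum(log_ks) / n
    num = sum((x - mx) * (y - my) for x, y in zip(pKas, log_ks))
    den = sum((x - mx) ** 2 for x in pKas)
    return num / den

# Synthetic buffer series with an imposed slope of 0.5:
pKas = [5.0, 6.0, 7.0, 8.0]
log_ks = [0.5 * p - 4.0 for p in pKas]
print(bronsted_beta(pKas, log_ks))  # 0.5
```

A β near 0.5 indicates that the proton is roughly half transferred in the transition state, the signature of general acid-base catalysis invoked in the abstract.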
15. Site-specific acid-base properties of pholcodine and related compounds.
PubMed
Kovács, Z; Hosztafi, S; Noszál, B
2006-11-01
The acid-base properties of pholcodine, a cough-depressant agent, and related compounds including metabolites were studied by 1H NMR-pH titrations, and are characterised in terms of macroscopic and microscopic protonation constants. New N-methylated derivatives were also synthesized in order to quantitate site- and nucleus-specific protonation shifts and to unravel microscopic acid-base equilibria. The piperidine nitrogen was found to be 38 and 400 times more basic than its morpholine counterpart in pholcodine and norpholcodine, respectively. The protonation data show that the molecule of pholcodine bears an average of positive charge of 1.07 at physiological pH, preventing it from entering the central nervous system, a plausible reason for its lack of analgesic or addictive properties. The protonation constants of pholcodine and its derivatives are interpreted by comparing with related molecules of pharmaceutical interest. The pH-dependent relative concentrations of the variously protonated forms of pholcodine and morphine are depicted in distribution diagrams.
17. Effects of temperature on acid-base balance and ventilation in desert iguanas.
PubMed
Bickler, P E
1981-08-01
The effects of constant and changing temperatures on blood acid-base status and pulmonary ventilation were studied in the eurythermal lizard Dipsosaurus dorsalis. Constant temperatures between 18 and 42 degrees C maintained for 24 h or more produced arterial pH changes of -0.0145 units per degree C. Arterial CO2 tension (PCO2) increased from 9.9 to 32 Torr; plasma [HCO3-] and total CO2 contents remained constant at near 19 and 22 mM, respectively. Under constant temperature conditions, ventilation-gas exchange ratios (VE/MCO2 and VE/MO2) were inversely related to temperature and can adequately explain the changes in arterial PCO2 and pH. During warming and cooling between 25 and 42 degrees C, arterial pH, PCO2, [HCO3-], and respiratory exchange ratios (MCO2/MO2) were similar to steady-state values. Warming and cooling each took about 2 h. During the temperature changes, rapid changes in lung ventilation following steady-state patterns were seen. Blood relative alkalinity changed slightly with steady-state or changing body temperatures, whereas calculated charge on protein histidine imidazole was closely conserved. Cooling to 17-18 degrees C resulted in a transient respiratory acidosis correlated with a decline in the ratio VE/MCO2. After 12-24 h at 17-18 degrees C, pH, PCO2, and VE returned to steady-state values. The importance of thermal history for patterns of acid-base regulation in reptiles is discussed.
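The constant slope of arterial pH with temperature reported above implies a linear steady-state pH-temperature relation. A sketch; the anchor point (pH_ref at T_ref) is an illustrative assumption, only the slope comes from the abstract:

```python
def arterial_ph(T, pH_ref=7.8, T_ref=18.0, slope=-0.0145):
    """Steady-state arterial pH at body temperature T (degrees C),
    using the -0.0145 pH unit per degree C slope from the abstract.
    pH_ref and T_ref are illustrative anchor values, not data."""
    return pH_ref + slope * (T - T_ref)

print(round(arterial_ph(42.0), 2))  # → 7.45
```

The negative slope is what keeps relative alkalinity (pH minus neutral-water pH, which also falls with temperature) roughly constant across the acclimation range.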
18. On the accuracy of acid-base determinations from potentiometric titrations using only a few points from the titration curve.
PubMed
Olin, A; Wallén, B
1977-05-01
There are several procedures which use only a few points on the titration curve for the calculation of equivalence volumes in acid-base titrations. The accuracy of such determinations will depend on the positions of the points on the titration curve. The effects of errors in the stability constants and in the pH measurements on the accuracy of the analysis have been considered, and the results are used to establish the conditions under which these errors are minimized.
19. Experimental determination of thermodynamic equilibrium in biocatalytic transamination.
PubMed
Tufvesson, Pär; Jensen, Jacob S; Kroutil, Wolfgang; Woodley, John M
2012-08-01
The equilibrium constant is a critical parameter for making rational design choices in biocatalytic transamination for the synthesis of chiral amines. However, very few reports are available in the scientific literature determining the equilibrium constant (K) for the transamination of ketones. Various methods for determining (or estimating) equilibrium have previously been suggested, both experimental as well as computational (based on group contribution methods). However, none of these were found suitable for determining the equilibrium constant for the transamination of ketones. Therefore, in this communication we suggest a simple experimental methodology which we hope will stimulate more accurate determination of thermodynamic equilibria when reporting the results of transaminase-catalyzed reactions in order to increase understanding of the relationship between substrate and product molecular structure on reaction thermodynamics.
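The experimental route suggested above comes down to measuring the equilibrium concentrations of all four species and forming the concentration quotient. A sketch with generic species names (placeholders, not the paper's substrates):

```python
def transamination_K(donor_amine, acceptor_ketone, product_amine, coproduct_ketone):
    """Apparent equilibrium constant from measured equilibrium
    concentrations (any one consistent unit). Argument names are
    generic placeholders for the four transamination species."""
    return (product_amine * coproduct_ketone) / (donor_amine * acceptor_ketone)

# Hypothetical equilibrium mixture (mM), illustrating an
# unfavourable amination equilibrium:
print(transamination_K(10.0, 10.0, 1.0, 1.0))  # 0.01
```

A K far below 1, as in this made-up example, is the situation that forces the process-design choices (excess donor, product removal) that make knowing K so important.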
20. Modelling of the acid base properties of two thermophilic bacteria at different growth times
Heinrich, Hannah T. M.; Bremer, Phil J.; McQuillan, A. James; Daughney, Christopher J.
2008-09-01
Acid-base titrations and electrophoretic mobility measurements were conducted on the thermophilic bacteria Anoxybacillus flavithermus and Geobacillus stearothermophilus at two different growth times corresponding to exponential and stationary/death phase. The data showed significant differences between the two investigated growth times for both bacterial species. In stationary/death phase samples, cells were disrupted and their buffering capacity was lower than that of exponential phase cells. For G. stearothermophilus the electrophoretic mobility profiles changed dramatically. Chemical equilibrium models were developed to simultaneously describe the data from the titrations and the electrophoretic mobility measurements. A simple approach was developed to determine confidence intervals for the overall variance between the model and the experimental data, in order to identify statistically significant changes in model fit and thereby select the simplest model that was able to adequately describe each data set. Exponential phase cells of the investigated thermophiles had a higher total site concentration than the average found for mesophilic bacteria (based on a previously published generalised model for the acid-base behaviour of mesophiles), whereas the opposite was true for cells in stationary/death phase. The results of this study indicate that growth phase is an important parameter that can affect ion binding by bacteria, that growth phase should be considered when developing or employing chemical models for bacteria-bearing systems.
1. Acid Base Titrations in Nonaqueous Solvents and Solvent Mixtures
Barcza, Lajos; Buvári-Barcza, Ágnes
2003-07-01
The acid-base determination of different substances by nonaqueous titrations is highly preferred in pharmaceutical analyses since the method is quantitative, exact, and reproducible. The modern interpretation of the reactions in nonaqueous solvents started in the last century, but several inconsistencies and unsolved problems can be found in the literature. The acid-base theories of Brønsted-Lowry and Lewis as well as the so-called solvent theory are outlined first, then the promoting (and leveling) and the differentiating effects are discussed on the basis of the hydrogen-bond concept. Emphasis is put on the properties of formic acid and acetic anhydride since their importance is increasing.
2. Equilibrium games in networks
Li, Angsheng; Zhang, Xiaohui; Pan, Yicheng; Peng, Pan
2014-12-01
It seems a universal phenomenon of networks that the attacks on a small number of nodes by an adversary player Alice may generate a global cascading failure of the networks. It has been shown (Li et al., 2013) that classic scale-free networks (Barabási and Albert, 1999, Barabási, 2009) are insecure against attacks of as small as O(logn) many nodes. This poses a natural and fundamental question: Can we introduce a second player Bob to prevent Alice from global cascading failure of the networks? We proposed a game in networks. We say that a network has an equilibrium game if the second player Bob has a strategy to balance the cascading influence of attacks by the adversary player Alice. It was shown that networks of the preferential attachment model (Barabási and Albert, 1999) fail to have equilibrium games, that random graphs of the Erdös-Rényi model (Erdös and Rényi, 1959, Erdös and Rényi, 1960) have, for which randomness is the mechanism, and that homophyly networks (Li et al., 2013) have equilibrium games, for which homophyly and preferential attachment are the underlying mechanisms. We found that some real networks have equilibrium games, but most real networks fail to have. We anticipate that our results lead to an interesting new direction of network theory, that is, equilibrium games in networks.
3. Immunity by equilibrium.
PubMed
Eberl, Gérard
2016-08-01
The classical model of immunity posits that the immune system reacts to pathogens and injury and restores homeostasis. Indeed, a century of research has uncovered the means and mechanisms by which the immune system recognizes danger and regulates its own activity. However, this classical model does not fully explain complex phenomena, such as tolerance, allergy, the increased prevalence of inflammatory pathologies in industrialized nations and immunity to multiple infections. In this Essay, I propose a model of immunity that is based on equilibrium, in which the healthy immune system is always active and in a state of dynamic equilibrium between antagonistic types of response. This equilibrium is regulated both by the internal milieu and by the microbial environment. As a result, alteration of the internal milieu or microbial environment leads to immune disequilibrium, which determines tolerance, protective immunity and inflammatory pathology.
4. Beyond Equilibrium Thermodynamics
Öttinger, Hans Christian
2005-01-01
Beyond Equilibrium Thermodynamics fills a niche in the market by providing a comprehensive introduction to a new, emerging topic in the field. The importance of non-equilibrium thermodynamics is addressed in order to fully understand how a system works, whether it is in a biological system like the brain or a system that develops plastic. In order to fully grasp the subject, the book clearly explains the physical concepts and mathematics involved, as well as presenting problems and solutions; over 200 exercises and answers are included. Engineers, scientists, and applied mathematicians can all use the book to address their problems in modelling, calculating, and understanding dynamic responses of materials.
5. Acid-base status in dietary treatment of phenylketonuria.
PubMed
Manz, F; Schmidt, H; Schärer, K; Bickel, H
1977-10-01
Blood acid-base status, serum electrolytes, and urine pH were examined in 64 infants and children with phenylketonuria (PKU) treated with three different low phenylalanine protein hydrolyzates (Aponti, Cymogran, AlbumaidXP) and two synthetic amino acid mixtures (Aminogran, PAM). The formulas caused significant differences in acid-base status, serum potassium, and chloride, and in urine pH. In acid-base balance studies in two patients with PKU, Aponti, PAM, and two modifications of PAM (P2 + P3) were given. We observed a change from mild alkalosis to increasing metabolic acidosis from Aponti (serum bicarbonate 25.8 meq/liter) to P3 (24.0), P2 (19.3), and PAM (17.0). Urine pH decreased and renal net acid excretion increased. In the formulas PAM, P2, and P3, differences in renal net acid excretion correlated with differences in the chloride and sulfur contents of the diets and of the urines. New modifications of AlbumaidXP and of PAM, prepared according to our recommendations, showed normal renal net acid excretion (1 mEq/kg/24 hr) in a balance study performed in one patient with PKU and normal acid-base status in 20 further patients.
6. Potentiometric Acid-Base Titrations with Activated Graphite Electrodes
Riyazuddin, P.; Devika, D.
1997-10-01
Dry cell graphite (DCG) electrodes activated with potassium permanganate are employed as potentiometric indicator electrodes for acid-base titrations. Special attention is given to an indicator probe comprising an activated DCG-non-activated DCG electrode couple. This combination also proves suitable for the titration of strong or weak acids.
7. Thymine, adenine and lipoamino acid based gene delivery systems.
PubMed
Skwarczynski, Mariusz; Ziora, Zyta M; Coles, Daniel J; Lin, I-Chun; Toth, Istvan
2010-05-14
A novel class of thymine, adenine and lipoamino acid based non-viral carriers for gene delivery has been developed. Their ability to bind to DNA by hydrogen bonding was confirmed by NMR diffusion, isothermal titration calorimetry and transmission electron microscopy experiments.
8. Soil Studies: Applying Acid-Base Chemistry to Environmental Analysis.
ERIC Educational Resources Information Center
West, Donna M.; Sterling, Donna R.
2001-01-01
Laboratory activities for chemistry students focus attention on the use of acid-base chemistry to examine environmental conditions. After using standard laboratory procedures to analyze soil and rainwater samples, students use web-based resources to interpret their findings. Uses CBL probes and graphing calculators to gather and analyze data and…
9. Acid-Base Disorders--A Computer Simulation.
ERIC Educational Resources Information Center
Maude, David L.
1985-01-01
Describes and lists a program for Apple Pascal Version 1.1 which investigates the behavior of the bicarbonate-carbon dioxide buffer system in acid-base disorders. Designed specifically for the preclinical medical student, the program has proven easy to use and enables students to use blood gas parameters to arrive at diagnoses. (DH)
10. Using Spreadsheets to Produce Acid-Base Titration Curves.
ERIC Educational Resources Information Center
Cawley, Martin James; Parkinson, John
1995-01-01
Describes two spreadsheets for producing acid-base titration curves, one uses relatively simple cell formulae that can be written into the spreadsheet by inexperienced students and the second uses more complex formulae that are best written by the teacher. (JRH)
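The calculation these spreadsheets implement is easily reproduced outside a spreadsheet. Below is a minimal sketch for the strong acid-strong base case, using the exact charge-balance quadratic rather than a piecewise approximation; all names and concentrations are illustrative, not taken from the article:

```python
import math

KW = 1.0e-14  # ionic product of water at 25 degrees C

def titration_ph(ca, va, cb, vb):
    """pH after adding vb mL of strong base (cb M) to va mL of strong acid (ca M).

    Charge balance gives [H+] - Kw/[H+] = (ca*va - cb*vb)/(va + vb),
    a quadratic in [H+] that is exact for strong acid/strong base and
    behaves smoothly through the equivalence point."""
    d = (ca * va - cb * vb) / (va + vb)  # net strong-acid excess, mol/L
    h = (d + math.sqrt(d * d + 4.0 * KW)) / 2.0
    return -math.log10(h)

# Curve for 25 mL of 0.1 M HCl titrated with 0.1 M NaOH
curve = [(vb, round(titration_ph(0.1, 25.0, 0.1, vb), 2))
         for vb in (0.0, 12.5, 24.9, 25.0, 25.1, 50.0)]
```

At the equivalence point the formula reduces to [H+] = sqrt(Kw), i.e. pH 7, with the familiar steep jump on either side.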
11. On the Khinchin Constant
NASA Technical Reports Server (NTRS)
Bailey, David H.; Borwein, Jonathan M.; Crandall, Richard E.; Craw, James M. (Technical Monitor)
1995-01-01
We prove known identities for the Khinchin constant and develop new identities for the more general Hoelder mean limits of continued fractions. Any of these constants can be developed as a rapidly converging series involving values of the Riemann zeta function and rational coefficients. Such identities allow for efficient numerical evaluation of the relevant constants. We present free-parameter, optimizable versions of the identities, and report numerical results.
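The rapidly converging zeta series alluded to in this abstract can be evaluated in a few lines. The sketch below uses the published series identity for ln K0 in terms of zeta(2n); the Euler-Maclaurin tail used to accelerate the zeta sums is an implementation convenience, not from the paper:

```python
import math

def zeta_minus_one(s, m_max=10000):
    """zeta(s) - 1 = sum_{m>=2} m**-s, with an Euler-Maclaurin tail correction."""
    head = sum(m ** -s for m in range(2, m_max + 1))
    tail = (m_max ** (1 - s) / (s - 1)
            - 0.5 * m_max ** -s
            + s * m_max ** -(s + 1) / 12.0)
    return head + tail

def khinchin_constant(n_terms=30):
    """Khinchin's constant K0 from the zeta-series identity
       ln K0 = (1/ln 2) * sum_{n>=1} (zeta(2n)-1)/n * sum_{k=1}^{2n-1} (-1)**(k+1)/k.
    Terms decay roughly like 4**-n, so ~30 terms give full double precision."""
    acc = 0.0
    for n in range(1, n_terms + 1):
        inner = sum((-1) ** (k + 1) / k for k in range(1, 2 * n))
        acc += zeta_minus_one(2 * n) / n * inner
    return math.exp(acc / math.log(2))
```

`khinchin_constant()` reproduces K0 = 2.685452001... to well beyond the accuracy needed for most purposes.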
12. Has Stewart approach improved our ability to diagnose acid-base disorders in critically ill patients?
PubMed
Masevicius, Fabio D; Dubin, Arnaldo
2015-02-01
The Stewart approach-the application of basic physical-chemical principles of aqueous solutions to blood-is an appealing method for analyzing acid-base disorders. These principles mainly dictate that pH is determined by three independent variables, which change primarily and independently of one another. In blood plasma in vivo these variables are: (1) the PCO2; (2) the strong ion difference (SID)-the difference between the sums of all the strong (i.e., fully dissociated, chemically nonreacting) cations and all the strong anions; and (3) the nonvolatile weak acids (Atot). Accordingly, the pH and the bicarbonate levels (dependent variables) are only altered when one or more of the independent variables change. Moreover, the source of H(+) is the dissociation of water to maintain electroneutrality when the independent variables are modified. The basic principles of the Stewart approach in blood, however, have been challenged in different ways. First, the presumed independent variables are actually interdependent, as occurs in situations such as: (1) the Hamburger effect (a chloride shift when CO2 is added to venous blood from the tissues); (2) the loss of Donnan equilibrium (a chloride shift from the interstitium to the intravascular compartment to balance the decrease of Atot secondary to capillary leak); and (3) the compensatory response to a primary disturbance in either independent variable. Second, the concept of water dissociation in response to changes in SID is controversial and lacks experimental evidence. In addition, the Stewart approach is not better than the conventional method for understanding acid-base disorders such as hyperchloremic metabolic acidosis secondary to a chloride-rich-fluid load. Finally, several attempts were made to demonstrate the clinical superiority of the Stewart approach. These studies, however, have severe methodological drawbacks. In contrast, the largest study on this issue indicated the interchangeability of the Stewart and
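The apparent strong ion difference at the heart of the Stewart approach is simple arithmetic over measured plasma ions. A minimal sketch, with illustrative textbook-style values rather than patient data:

```python
def strong_ion_difference(na, k, ca, mg, cl, lactate):
    """Apparent strong ion difference (SIDa) in meq/L from plasma strong ions.

    All inputs are in meq/L; the divalent ions (Ca2+, Mg2+) should be
    entered as charge-weighted concentrations."""
    return (na + k + ca + mg) - (cl + lactate)

# Illustrative (not patient-derived) normal plasma values, meq/L:
sid = strong_ion_difference(na=140, k=4.0, ca=2.5, mg=1.5, cl=105, lactate=1.0)
# An SIDa near 40-42 meq/L is conventionally taken as normal; a fall in SIDa
# (e.g. from hyperchloremia) is read as a metabolic acidosis in this framework.
```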
13. The hubble constant.
PubMed
Huchra, J P
1992-04-17
The Hubble constant is the constant of proportionality between recession velocity and distance in the expanding universe. It is a fundamental property of cosmology that sets both the scale and the expansion age of the universe. It is determined by measurement of galaxy distances. Despite the development of new techniques for the measurement of galaxy distances, both calibration uncertainties and debates over systematic errors remain. Current determinations still range over nearly a factor of 2; the higher values favored by most local measurements are not consistent with many theories of the origin of large-scale structure and stellar evolution. PMID:17743107
14. The cosmological constant
NASA Technical Reports Server (NTRS)
Carroll, Sean M.; Press, William H.; Turner, Edwin L.
1992-01-01
The cosmological constant problem is examined in the context of both astronomy and physics. Effects of a nonzero cosmological constant are discussed with reference to expansion dynamics, the age of the universe, distance measures, comoving density of objects, growth of linear perturbations, and gravitational lens probabilities. The observational status of the cosmological constant is reviewed, with attention given to the existence of high-redshift objects, age derivation from globular clusters and cosmic nuclear data, dynamical tests of Omega sub Lambda, quasar absorption line statistics, gravitational lensing, and astrophysics of distant objects. Finally, possible solutions to the physicist's cosmological constant problem are examined.
15. Decoupling the contribution of dispersive and acid-base components of surface energy on the cohesion of pharmaceutical powders.
PubMed
Shah, Umang V; Olusanmi, Dolapo; Narang, Ajit S; Hussain, Munir A; Tobyn, Michael J; Heng, Jerry Y Y
2014-11-20
This study reports an experimental approach to determine the contribution from two different components of surface energy on cohesion. A method to tailor the surface chemistry of mefenamic acid via silanization is established and the role of surface energy on cohesion is investigated. Silanization was used as a method to functionalize mefenamic acid surfaces with four different functional end groups resulting in an ascending order of the dispersive component of surface energy. Furthermore, four haloalkane functional end groups were grafted on to the surface of mefenamic acid, resulting in varying levels of acid-base component of surface energy, while maintaining constant dispersive component of surface energy. A proportional increase in cohesion was observed with increases in both dispersive as well as acid-base components of surface energy. Contributions from dispersive and acid-base surface energy on cohesion were determined using an iterative approach. Due to the contribution from acid-base surface energy, cohesion was found to increase ∼11.7× compared to the contribution from dispersive surface energy. Here, we provide an approach to deconvolute the contribution from two different components of surface energy on cohesion, which has the potential of predicting powder flow behavior and ultimately controlling powder cohesion.
16. Biochemical thermodynamics and rapid-equilibrium enzyme kinetics.
PubMed
Alberty, Robert A
2010-12-30
Biochemical thermodynamics is based on the chemical thermodynamics of aqueous solutions, but it is quite different because pH is used as an independent variable. A transformed Gibbs energy G' is used, and that leads to transformed enthalpies H' and transformed entropies S'. Equilibrium constants for enzyme-catalyzed reactions are referred to as apparent equilibrium constants K' to indicate that they are functions of pH in addition to temperature and ionic strength. Despite this, the most useful way to store basic thermodynamic data on enzyme-catalyzed reactions is to give standard Gibbs energies of formation, standard enthalpies of formation, electric charges, and numbers of hydrogen atoms in species of biochemical reactants like ATP. This makes it possible to calculate standard transformed Gibbs energies of formation, standard transformed enthalpies of formation of reactants (sums of species), and apparent equilibrium constants at desired temperatures, pHs, and ionic strengths. These calculations are complicated, and therefore, a mathematical application in a computer is needed. Rapid-equilibrium enzyme kinetics is based on biochemical thermodynamics because all reactions in the mechanism prior to the rate-determining reaction are at equilibrium. The expression for the equilibrium concentration of the enzyme-substrate complex that yields products can be derived by applying Solve in a computer to the expressions for the equilibrium constants in the mechanism and the conservation equation for enzymatic sites. In 1979, Duggleby pointed out that the minimum number of velocities of enzyme-catalyzed reactions required to estimate the values of the kinetic parameters is equal to the number of kinetic parameters. Solve can be used to do this with steady-state rate equations as well as rapid-equilibrium rate equations, provided that the rate equation is a polynomial. Rapid-equilibrium rate equations can be derived for complicated mechanisms that involve several reactants
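Duggleby's point cited in this abstract, that the minimum number of velocities equals the number of kinetic parameters, can be illustrated with the simplest rapid-equilibrium mechanism, v = Vmax*S/(Ks + S): two measured velocities determine both parameters exactly. A sketch with invented, noise-free data (the values are for illustration only):

```python
def fit_two_point_mm(s1, v1, s2, v2):
    """Estimate (Vmax, Ks) for v = Vmax*S/(Ks + S) from exactly two velocities.

    Rearranging each observation gives the linear system
        S_i * Vmax - v_i * Ks = v_i * S_i,
    solved here by Cramer's rule."""
    det = s1 * (-v2) - (-v1) * s2
    vmax = ((v1 * s1) * (-v2) - (-v1) * (v2 * s2)) / det
    ks = (s1 * (v2 * s2) - (v1 * s1) * s2) / det
    return vmax, ks

# Recover the parameters from synthetic data generated with Vmax=10, Ks=2:
vmax, ks = fit_two_point_mm(1.0, 10.0 * 1.0 / 3.0, 5.0, 10.0 * 5.0 / 7.0)
```

With noisy real data one would of course use more velocities and least squares, but the two-point case shows why the parameter count sets the floor on the number of measurements.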
17. Equilibrium CO bond lengths
Demaison, Jean; Császár, Attila G.
2012-09-01
Based on a sample of 38 molecules, 47 accurate equilibrium CO bond lengths have been collected and analyzed. These ultimate experimental (reEX), semiexperimental (reSE), and Born-Oppenheimer (reBO) equilibrium structures are compared to reBO estimates from two lower-level techniques of electronic structure theory, MP2(FC)/cc-pVQZ and B3LYP/6-311+G(3df,2pd). A linear relationship is found between the best equilibrium bond lengths and their MP2 or B3LYP estimates. These (and similar) linear relationships permit estimation of the CO bond length with an accuracy of 0.002 Å within the full range of 1.10-1.43 Å, corresponding to single, double, and triple CO bonds, for a large number of molecules. The variation of the CO bond length is qualitatively explained using the Atoms in Molecules method. In particular, a nice correlation is found between the CO bond length and the bond critical point density, and it appears that the CO bond is at the same time covalent and ionic. Conditions which permit the computation of an accurate ab initio Born-Oppenheimer equilibrium structure are discussed. In particular, the core-core and core-valence correlation is investigated and it is shown to roughly increase with the bond length.
18. An Updated Equilibrium Machine
ERIC Educational Resources Information Center
Schultz, Emeric
2008-01-01
A device that can demonstrate equilibrium, kinetic, and thermodynamic concepts is described. The device consists of a leaf blower attached to a plastic container divided into two chambers by a barrier of variable size and form. Styrofoam balls can be exchanged across the barrier when the leaf blower is turned on and various air pressures are…
19. Determination of the Vibrational Constants of Some Diatomic Molecules: A Combined Infrared Spectroscopic and Quantum Chemical Third Year Chemistry Project.
ERIC Educational Resources Information Center
Ford, T. A.
1979-01-01
In one option for this project, the rotation-vibration infrared spectra of a number of gaseous diatomic molecules were recorded, from which the fundamental vibrational wavenumber, the force constant, the rotation-vibration interaction constant, the equilibrium rotational constant, and the equilibrium internuclear distance were determined.…
20. Fundamental Physical Constants
National Institute of Standards and Technology Data Gateway
SRD 121 CODATA Fundamental Physical Constants (Web, free access) This site, developed in the Physics Laboratory at NIST, addresses three topics: fundamental physical constants, the International System of Units (SI), which is the modern metric system, and expressing the uncertainty of measurement results.
1. Calculation of magnetostriction constants
Tatebayashi, T.; Ohtsuka, S.; Ukai, T.; Mori, N.
1986-02-01
The magnetostriction constants h1 and h2 for Ni and Fe metals and the anisotropy constants K1 and K2 for Fe metal are calculated on the basis of the approximate d bands obtained by Deegan's prescription, by using Gilat-Raubenheimer's method. The obtained results are compared with the experimental ones.
2. A simplified strong ion model for acid-base equilibria: application to horse plasma.
PubMed
Constable, P D
1997-07-01
The Henderson-Hasselbalch equation and Stewart's strong ion model are currently used to describe mammalian acid-base equilibria. Anomalies exist when the Henderson-Hasselbalch equation is applied to plasma, whereas the strong ion model does not provide a practical method for determining the total plasma concentration of nonvolatile weak acids ([Atot]) and the effective dissociation constant for plasma weak acids (Ka). A simplified strong ion model, which was developed from the assumption that plasma ions act as strong ions, volatile buffer ions (HCO-3), or nonvolatile buffer ions, indicates that plasma pH is determined by five independent variables: PCO2, strong ion difference, concentration of individual nonvolatile plasma buffers (albumin, globulin, and phosphate), ionic strength, and temperature. The simplified strong ion model conveys on a fundamental level the mechanism for change in acid-base status, explains many of the anomalies when the Henderson-Hasselbalch equation is applied to plasma, is conceptually and algebraically simpler than Stewart's strong ion model, and provides a practical in vitro method for determining [Atot] and Ka of plasma. Application of the simplified strong ion model to CO2-tonometered horse plasma produced values for [Atot] (15.0 +/- 3.1 meq/l) and Ka (2.22 +/- 0.32 x 10(-7) eq/l) that were significantly different from the values commonly assumed for human plasma ([Atot] = 20.0 meq/l, Ka = 3.0 x 10(-7) eq/l). Moreover, application of the experimentally determined values for [Atot] and Ka to published data for the horse (known PCO2, strong ion difference, and plasma protein concentration) predicted plasma pH more accurately than the values for [Atot] and Ka commonly assumed for human plasma. Species-specific values for [Atot] and Ka should be experimentally determined when the simplified strong ion model (or strong ion model) is used to describe acid-base equilibria.
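After neglecting the tiny [OH-] and [H+] terms in the electroneutrality balance, Constable's simplified strong ion model reduces to one equation in [H+] that is easily solved numerically. The sketch below uses the [Atot] and Ka values reported in the abstract, but the SID, PCO2, CO2 solubility (s) and apparent pK1' are assumed textbook-style values, not taken from the paper:

```python
def strong_ion_ph(sid, pco2, atot, ka, s=0.0307, k1=10 ** -6.105):
    """Solve the simplified strong ion model for plasma pH by bisection.

    Electroneutrality (ignoring [H+] and [OH-]): SID = [HCO3-] + [A-], with
        [HCO3-] = s * K1' * PCO2 / [H+]   (s, K1': assumed CO2 solubility and
                                           apparent first dissociation constant)
        [A-]    = Ka * Atot / (Ka + [H+])
    sid and atot in meq/L; pco2 in mmHg."""
    lo, hi = 6.5, 8.0
    for _ in range(60):
        ph = (lo + hi) / 2.0
        h = 10.0 ** -ph
        residual = sid - s * k1 * pco2 / h - ka * atot / (ka + h)
        if residual > 0:   # too little anion charge at this pH -> pH must rise
            lo = ph
        else:
            hi = ph
    return ph

# Horse-plasma parameters from the abstract (Atot = 15.0 meq/L, Ka = 2.22e-7)
# with an assumed normal SID of 40 meq/L and PCO2 of 40 mmHg:
ph = strong_ion_ph(sid=40.0, pco2=40.0, atot=15.0, ka=2.22e-7)
```

The bisection converges to a physiologically sensible pH in the low 7.4s for these inputs, illustrating how pH falls directly out of the independent variables.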
3. Ion effects on the lac repressor--operator equilibrium.
PubMed
Barkley, M D; Lewis, P A; Sullivan, G E
1981-06-23
The effects of ions on the interaction of lac repressor protein and operator DNA have been studied by the membrane filter technique. The equilibrium association constant was determined as a function of monovalent and divalent cation concentrations, anions, and pH. The binding of repressor and operator is extremely sensitive to the ionic environment. The dependence of the observed equilibrium constant on salt concentration is analyzed according to the binding theory of Record et al. [Record, M. T., Jr., Lohman, T. M., & deHaseth, P. L. (1976) J. Mol. Biol. 107, 145]. The number of ionic interactions in the repressor-operator complex is deduced from the slopes of the linear log-log plots. About 11 ionic interactions are formed between repressor and DNA phosphates at pH 7.4 and about 9 ionic interactions at pH 8.0, in reasonable agreement with previous estimates. A favorable nonelectrostatic binding free energy of about 9-12 kcal/mol is estimated from the extrapolated equilibrium constants at the 1 M standard state. The values are in good accord with recent results for the salt-independent binding of repressor core and operator DNA. The effects of pH on the repressor-operator interaction are small, and probably result from titration of functional groups in the DNA-binding site of the protein. For monovalent salts, the equilibrium constant is slightly dependent on cation type and highly dependent on anion type. At constant salt concentration, the equilibrium constant decreases about 10000-fold in the order CH3CO2- ≥ F- > Cl- > Br- > NO3- > SCN- > I-. The wide range of accessible equilibrium constants provides a useful tool for in vitro studies of the repressor-operator interaction.
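The Record et al. analysis used here, counting ionic contacts from the slope of log K_obs versus log [salt], amounts to a least-squares fit on log-log data. A sketch with synthetic data constructed to match the roughly 11 contacts reported in the abstract (the data points themselves are invented):

```python
import math

def ionic_contacts(salt_conc, log10_kobs, psi=0.88):
    """Estimate the number of ionic contacts m' from the salt dependence of
    K_obs, following Record et al. (1976):
        d(log K_obs) / d(log [M+]) = -m' * psi
    where psi ~ 0.88 counterions are thermodynamically bound per phosphate
    of double-stranded DNA. The slope comes from an ordinary least-squares fit."""
    x = [math.log10(c) for c in salt_conc]
    y = log10_kobs
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -slope / psi

# Synthetic log-log data with a built-in slope of -9.7, which at psi = 0.88
# corresponds to about 11 ionic contacts:
salts = [0.10, 0.13, 0.16, 0.20]
logk = [12.0 - 9.7 * (math.log10(c) - math.log10(0.10)) for c in salts]
m_contacts = ionic_contacts(salts, logk)
```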
4. The glmS ribozyme cofactor is a general acid-base catalyst.
PubMed
2012-11-21
The glmS ribozyme is the first natural self-cleaving ribozyme known to require a cofactor. The d-glucosamine-6-phosphate (GlcN6P) cofactor has been proposed to serve as a general acid, but its role in the catalytic mechanism has not been established conclusively. We surveyed GlcN6P-like molecules for their ability to support self-cleavage of the glmS ribozyme and found a strong correlation between the pH dependence of the cleavage reaction and the intrinsic acidity of the cofactors. For cofactors with low binding affinities, the contribution to rate enhancement was proportional to their intrinsic acidity. This linear free-energy relationship between cofactor efficiency and acid dissociation constants is consistent with a mechanism in which the cofactors participate directly in the reaction as general acid-base catalysts. A high value for the Brønsted coefficient (β ~ 0.7) indicates that a significant amount of proton transfer has already occurred in the transition state. The glmS ribozyme is the first self-cleaving RNA to use an exogenous acid-base catalyst.
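The linear free-energy analysis described here extracts the Brønsted coefficient as the slope of log(rate) against cofactor pKa over a catalyst series. A sketch with a synthetic series constructed at beta = 0.7, the value the abstract reports; the sign convention and all numbers are illustrative, not from the paper:

```python
def bronsted_beta(pka, log10_k):
    """Brønsted coefficient from a linear free-energy relationship of the form
        log10(k) = -beta * pKa + C
    for a series of general acid-base catalysts (more acidic cofactor, faster
    cleavage). The slope is obtained by ordinary least squares."""
    n = len(pka)
    xbar = sum(pka) / n
    ybar = sum(log10_k) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(pka, log10_k))
             / sum((x - xbar) ** 2 for x in pka))
    return -slope

# Synthetic cofactor series generated with beta = 0.7:
pkas = [5.5, 6.5, 7.5, 8.5]
logks = [2.0 - 0.7 * p for p in pkas]
beta = bronsted_beta(pkas, logks)
```

A fitted beta near 0.7, as reported, indicates substantial proton transfer in the transition state.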
5. The glmS Ribozyme Cofactor is a General Acid-Base Catalyst
PubMed Central
2012-01-01
The glmS ribozyme is the first natural self-cleaving ribozyme known to require a cofactor. The D-glucosamine-6-phosphate (GlcN6P) cofactor has been proposed to serve as a general acid, but its role in the catalytic mechanism has not been established conclusively. We surveyed GlcN6P-like molecules for their ability to support self-cleavage of the glmS ribozyme and found a strong correlation between the pH dependence of the cleavage reaction and the intrinsic acidity of the cofactors. For cofactors with low binding affinities the contribution to rate enhancement was proportional to their intrinsic acidity. This linear free-energy relationship between cofactor efficiency and acid dissociation constants is consistent with a mechanism in which the cofactors participate directly in the reaction as general acid-base catalysts. A high value for the Brønsted coefficient (β ~ 0.7) indicates that a significant amount of proton transfer has already occurred in the transition state. The glmS ribozyme is the first self-cleaving RNA to use an exogenous acid-base catalyst. PMID:23113700
6. [Blood acid-base balance of sportsmen during physical activity].
PubMed
Petrushova, O P; Mikulyak, N I
2014-01-01
The aim of this study was to investigate the acid-base balance parameters in the blood of sportsmen during physical activity. Before exercise, lactate concentration in blood was normal, while carbon dioxide pressure (pCO2), bicarbonate concentration (HCO3-), and base excess (BE) were increased. Immediately after physical activity, lactate concentration increased, while pH, BE, HCO3-, and pCO2 decreased in the capillary blood of the sportsmen. These changes show the development of lactate acidosis, which is partly compensated by the bicarbonate buffering system and respiratory alkalosis. During post-exercise recovery, lactate concentration decreased, while pCO2, HCO3-, and BE increased. The results of this study can be used for the diagnosis of acid-base disorders and their medical treatment to preserve the physical capacity of sportsmen.
7. Evolution of the Acid-Base Status in Cardiac Arrest
PubMed Central
Carrasco G., Hugo A.; Oletta L., José F.
1973-01-01
In a study of the evolution of acid-base status in 26 patients who had cardiopulmonary arrest in the operating room, it appeared that: The determination of acid-base status within the first hour post-cardiac arrest is useful in differentiating final survivors from non-survivors. Respiratory or combined acidosis carries a poor prognosis not evidenced for metabolic acidosis. Late respiratory complications are more frequent in patients with initial combined acidosis. Treatment should be instituted on the basis of frequent determinations of acid-base status, since accurate diagnosis of the degree and type of acidosis cannot be made on clinical grounds alone. Recovery of consciousness is influenced by the type and severity of acidosis, less so by duration of arrest; and high pCO2 is associated frequently with unconsciousness after recovery of circulatory function. PMID:4709532
8. Absolute Equilibrium Entropy
NASA Technical Reports Server (NTRS)
Shebalin, John V.
1997-01-01
The entropy associated with absolute equilibrium ensemble theories of ideal, homogeneous, fluid and magneto-fluid turbulence is discussed and the three-dimensional fluid case is examined in detail. A sigma-function is defined, whose minimum value with respect to global parameters is the entropy. A comparison is made between the use of global functions sigma and phase functions H (associated with the development of various H-theorems of ideal turbulence). It is shown that the two approaches are complementary though conceptually different: H-theorems show that an isolated system tends to equilibrium, while sigma-functions allow the demonstration that entropy never decreases when two previously isolated systems are combined. This provides a more complete picture of entropy in the statistical mechanics of ideal fluids.
9. An Updated Equilibrium Machine
Schultz, Emeric
2008-08-01
A device that can demonstrate equilibrium, kinetic, and thermodynamic concepts is described. The device consists of a leaf blower attached to a plastic container divided into two chambers by a barrier of variable size and form. Styrofoam balls can be exchanged across the barrier when the leaf blower is turned on and various air pressures are applied. Equilibrium can be approached from different distributions of balls in the container under different conditions. The Le Châtelier principle can be demonstrated. Kinetic concepts can be demonstrated by changing the nature of the barrier, either changing the height or by having various sized holes in the barrier. Thermodynamic concepts can be demonstrated by taping over some or all of the openings and restricting air flow into container on either side of the barrier.
10. Space Shuttle astrodynamical constants
NASA Technical Reports Server (NTRS)
Cockrell, B. F.; Williamson, B.
1978-01-01
Basic space shuttle astrodynamic constants are reported for use in mission planning and construction of ground and onboard software input loads. The data included here are provided to facilitate the use of consistent numerical values throughout the project.
11. The cosmological constant problem
SciTech Connect
Dolgov, A.D.
1989-05-01
A review of the cosmological term problem is presented. Baby universe model and the compensating field model are discussed. The importance of more accurate data on the Hubble constant and the Universe age is stressed. 18 refs.
12. The species- and site-specific acid-base properties of biological thiols and their homodisulfides.
PubMed
Mirzahosseini, Arash; Noszál, Béla
2014-07-01
Cysteamine, cysteine, homocysteine, their homodisulfides and 9 related compounds were studied by ¹H NMR-pH titrations and case-tailored evaluation methods. The resulting acid-base properties are quantified in terms of 33 macroscopic and 62 microscopic protonation constants and the concomitant 16 interactivity parameters, providing thus the first complete microspeciation of this vitally important family of biomolecules. The species- and site-specific basicities are interpreted by means of inductive and hydrogen-bonding effects through various intra- and intermolecular comparisons. The pH-dependent distribution of the microspecies is depicted. The thiolate basicities determined this way provide exclusive means for the prediction of thiolate oxidizabilities, a key parameter to understand and influence oxidative stress at the molecular level.
13. Equilibrium and fluctuation analysis for ZTH electrical diagnostics
SciTech Connect
Miller, G.; Ingraham, J.C.
1988-12-01
Some of the rationale behind the electrical diagnostics proposed for the Los Alamos Confinement Physics Research Facility, ZTH, is discussed. The axisymmetric equilibrium measurements consist of a poloidal flux array and a toroidally averaged poloidal field array. The equilibrium quantities of interest, for example, the radial magnetic field causing displacement of the outer plasma magnetic surface, are obtained from the measurements by linear combination with constant coefficients. Some possible objectives for the nonaxisymmetric field measurements are discussed. 7 refs., 6 figs.
14. Constant potential pulse polarography
USGS Publications Warehouse
Christie, J.H.; Jackson, L.L.; Osteryoung, R.A.
1976-01-01
The new technique of constant potential pulse polarography, in which all pulses are to the same potential, is presented theoretically and evaluated experimentally. The response obtained is in the form of a faradaic current wave superimposed on a constant capacitative component. Results obtained with a computer-controlled system exhibit a capillary response current similar to that observed in normal pulse polarography. Calibration curves for Pb obtained using a modified commercial pulse polarographic instrument are in good accord with theoretical predictions.
15. Equivalence-point electromigration acid-base titration via moving neutralization boundary electrophoresis.
PubMed
Yang, Qing; Fan, Liu-Yin; Huang, Shan-Sheng; Zhang, Wei; Cao, Cheng-Xi
2011-04-01
In this paper, we developed a novel method of acid-base titration, viz. the electromigration acid-base titration (EABT), via a moving neutralization boundary (MNB). With HCl and NaOH as the model strong acid and base, respectively, we conducted the experiments on the EABT via the method of moving neutralization boundary for the first time. The experiments revealed that (i) the concentration of agarose gel, the voltage used and the content of background electrolyte (KCl) had an evident influence on the boundary movement; (ii) the movement length was a function of the running time under constant acid and base concentrations; and (iii) there was good linearity between the length and the natural logarithmic concentration of HCl under the optimized conditions, and this linearity could be used to detect the concentration of acid. The experiments further manifested that (i) the RSD values of intra-day and inter-day runs were less than 1.59 and 3.76%, respectively, indicating precision and stability similar to those of capillary electrophoresis or HPLC; (ii) indicators with different pK(a) values had no obvious effect on the EABT, in contrast to their strong influence on the judgment of the equivalence point in the classic titration; and (iii) a constant equivalence-point titration always existed in the EABT, rather than in the classic volumetric analysis. Additionally, the EABT could be put to good use for the determination of actual acid concentrations. The experimental results achieved herein offer new general guidance for the development of classic volumetric analysis and element (e.g. nitrogen) content analysis in protein chemistry.
16. The normal acid-base status of mice.
PubMed
Iversen, Nina K; Malte, Hans; Baatrup, Erik; Wang, Tobias
2012-03-15
Rodent models are commonly used for various physiological studies including acid-base regulation. Despite the widespread use of especially genetically modified mice, little attention has been paid to characterising the normal acid-base status of these animals in order to establish proper control values. Furthermore, several studies report blood gas values obtained in anaesthetised animals. We, therefore, decided to characterise the CO(2) binding characteristics of mouse blood in vitro and the normal acid-base status of conscious BALBc mice. In vitro CO(2) dissociation curves, performed on whole blood equilibrated to various PCO₂ levels in rotating tonometers, revealed a typical mammalian pK' (pK'=7.816-0.234 × pH (r=0.34)) and a non-bicarbonate buffer capacity (16.1 ± 2.6 slyke). To measure arterial acid-base status, small blood samples were taken from undisturbed mice with indwelling catheters in the carotid artery. In these animals, pH was 7.391 ± 0.026, plasma [HCO(3)(-)] 18.4 ± 0.83 mM, PCO₂ 30.3 ± 2.1 mm Hg and lactate concentration 4.6 ± 0.7 mM. Our study, therefore, shows that mice have an arterial pH that resembles other mammals, although arterial PCO₂ tends to be lower than in larger mammals. However, pH from arterial blood sampled from mice anaesthetised with isoflurane was significantly lower (pH 7.239 ± 0.021), while plasma [HCO(3)(-)] was 18.5 ± 1.4 mM, PCO₂ 41.9 ± 2.9 mm Hg and lactate concentration 4.48 ± 0.67 mM. Furthermore, we measured metabolism and ventilation (V(E)) in order to determine the ventilation requirements (VE/VO₂) and to answer whether small mammals tend to hyperventilate. We recommend, therefore, that studies on acid-base regulation in mice should be based on samples taken from indwelling catheters rather than cardiac puncture of terminally anaesthetised mice.
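The blood-gas numbers in this abstract can be cross-checked against the Henderson-Hasselbalch relation using the pH-dependent pK' regression the study reports. The CO2 solubility coefficient below is an assumed textbook value, not taken from the paper:

```python
def bicarbonate(ph, pco2, s=0.0307):
    """Plasma [HCO3-] (mM) from pH and PCO2 (mmHg) via Henderson-Hasselbalch:
        [HCO3-] = s * PCO2 * 10**(pH - pK')
    using the regression pK' = 7.816 - 0.234*pH reported for mouse blood and
    an assumed CO2 solubility s = 0.0307 mmol/(L*mmHg)."""
    pk = 7.816 - 0.234 * ph
    return s * pco2 * 10.0 ** (ph - pk)

# Conscious-mouse values from the abstract: pH 7.391, PCO2 30.3 mmHg.
hco3 = bicarbonate(7.391, 30.3)
# The result lands close to the directly reported plasma [HCO3-] of 18.4 mM,
# i.e. the three reported quantities are mutually consistent.
```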
17. Acid-base disorders in calves with chronic diarrhea.
PubMed
Bednarski, M; Kupczyński, R; Sobiech, P
2015-01-01
The aim of this study was to analyze disorders of acid-base balance in calves with chronic diarrhea caused by mixed viral, bacterial and Cryptosporidium parvum infection. We compared results obtained with the classic model (Henderson-Hasselbalch) and the strong ion approach (the Stewart model). The study included 36 calves aged between 14 and 21 days. The calves were allocated to three groups: group I - (control) non-diarrheic calves, group II - animals with compensated acid-base imbalance, and group III - calves with compensated acid-base disorders and hypoalbuminemia. Plasma concentrations of Na+, K+, Cl-, Ca2+, Mg2+, P, albumin and lactate were measured. In the classic model, acid-base balance was determined on the basis of blood pH, pCO2, HCO3-, BE and the anion gap. In the strong ion model, strong ion difference (SID), effective strong anion difference, total plasma concentration of nonvolatile buffers (A(Tot)) and strong ion gap (SIG) were measured. The control calves and the animals from groups II and III did not differ significantly in terms of their blood pH. The plasma concentrations of HCO3-, BE and partial pressure of CO2 in animals from the two groups with chronic diarrhea were significantly higher than those found in the controls. The highest BE (6.03 mmol/L) was documented in calves from group II. The animals from this group presented compensation resulting from activation of metabolic mechanisms. The calves with hypoalbuminemia (group III) showed lower plasma concentrations of albumin (15.37 g/L), Cl- (74.94 mmol/L), Mg2+ (0.53 mmol/L), P (1.41 mmol/L) and a higher anion gap (39.03 mmol/L). Group III also presented significantly higher SID3 (71.89 mmol/L), SID7 (72.92 mmol/L) and SIG (43.53 mmol/L) values than animals from the remaining groups (P < 0.01), whereas A(Tot) (6.82 mmol/L) was significantly lower. The main finding of the correlation study was the excellent relationship between AGcorr and SID3, SID7 and SIG. In conclusion, chronic diarrhea leads
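The two bookkeeping quantities compared in this abstract reduce to simple ion arithmetic: the apparent strong ion difference of the Stewart approach and the classic anion gap. The sketch below shows that arithmetic; the ion concentrations are illustrative round numbers, not the calves' actual data:

```python
def sid_apparent(na, k, ca, mg, cl, lactate):
    """Apparent SID (mEq/L): strong cations minus strong anions (Stewart model)."""
    return (na + k + ca + mg) - (cl + lactate)

def anion_gap(na, k, cl, hco3):
    """Classic anion gap (mEq/L) from the Henderson-Hasselbalch framework."""
    return (na + k) - (cl + hco3)

# illustrative plasma values (mEq/L), not data from the study
ions = dict(na=140.0, k=4.5, ca=2.5, mg=1.0, cl=100.0, lactate=2.0)
print(sid_apparent(**ions))                              # strong ion difference
print(anion_gap(na=140.0, k=4.5, cl=100.0, hco3=25.0))   # anion gap
```

The correlation the study reports between the corrected anion gap and SID/SIG is expected from these definitions, since both are linear combinations of the same measured ions.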
18. Equilibrium thermodynamics in modified gravitational theories
Bamba, Kazuharu; Geng, Chao-Qiang; Tsujikawa, Shinji
2010-04-01
We show that it is possible to obtain a picture of equilibrium thermodynamics on the apparent horizon in the expanding cosmological background for a wide class of modified gravity theories with the Lagrangian density f(R,ϕ,X), where R is the Ricci scalar and X is the kinetic energy of a scalar field ϕ. This comes from a suitable definition of an energy-momentum tensor of the “dark” component that respects local energy conservation in the Jordan frame. In this framework the horizon entropy S corresponding to equilibrium thermodynamics is equal to a quarter of the horizon area A in units of the gravitational constant G, as in Einstein gravity. For a flat cosmological background with a decreasing Hubble parameter, S globally increases with time, as happens for viable f(R) inflation and dark energy models. We also show that the equilibrium description in terms of the horizon entropy S is convenient because it takes into account the contributions of both the horizon entropy S' in non-equilibrium thermodynamics and an entropy production term.
19. Structural design using equilibrium programming
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1992-01-01
Multiple nonlinear programming methods are combined in the method of equilibrium programming. Equilibrium programming theory has been applied to problems in operations research, and in the present study it is investigated as a framework for solving structural design problems. Several existing formal methods for structural optimization are shown to actually be equilibrium programming methods. Additionally, the equilibrium programming framework is utilized to develop a new structural design method. Selected computational results are presented to demonstrate the methods.
20. Chemical equilibrium. [maximizing entropy of gas system to derive relations between thermodynamic variables
NASA Technical Reports Server (NTRS)
1976-01-01
The entropy of a gas system with the number of particles subject to external control is maximized to derive relations between the thermodynamic variables that obtain at equilibrium. These relations are described in terms of the chemical potential, defined as equivalent partial derivatives of entropy, energy, enthalpy, free energy, or free enthalpy. At equilibrium, the change in total chemical potential must vanish. This fact is used to derive the equilibrium constants for chemical reactions in terms of the partition functions of the species involved in the reaction. Thus the equilibrium constants can be determined accurately, just as other thermodynamic properties, from a knowledge of the energy levels and degeneracies for the gas species involved. These equilibrium constants permit one to calculate the equilibrium concentrations or partial pressures of chemically reacting species that occur in gas mixtures at any given condition of pressure and temperature or volume and temperature.
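As a concrete instance of the last step above, once Kp for a simple diatomic dissociation A2 ⇌ 2A is known (however obtained, e.g. from the partition functions described in the text), the equilibrium composition at total pressure p follows from elementary algebra: Kp = 4α²p/(1−α²), which inverts to α = √(Kp/(Kp+4p)). The Kp value below is illustrative rather than computed from energy levels:

```python
import math

def dissociation_fraction(kp, p):
    """Degree of dissociation alpha for A2 <-> 2A at total pressure p (same units as Kp)."""
    return math.sqrt(kp / (kp + 4.0 * p))

def partial_pressures(kp, p):
    """Partial pressures (p_A, p_A2): mole fractions are 2a/(1+a) and (1-a)/(1+a)."""
    a = dissociation_fraction(kp, p)
    return 2 * a / (1 + a) * p, (1 - a) / (1 + a) * p

# illustrative: Kp = 1 atm at p = 1 atm
alpha = dissociation_fraction(kp=1.0, p=1.0)
p_a, p_a2 = partial_pressures(kp=1.0, p=1.0)
print(round(alpha, 4), round(p_a, 4), round(p_a2, 4))
```

A quick self-check is that p_A²/p_A2 reproduces the input Kp, which the partial-pressure formulas guarantee by construction.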
1. A physicochemical model of crystalloid infusion on acid-base status.
PubMed
Omron, Edward M; Omron, Rodney M
2010-09-01
The objective of this study is to develop a physicochemical model of the projected change in standard base excess (SBE) consequent to the infused volume of crystalloid solutions in common use. A clinical simulation of modeled acid-base and fluid compartment parameters was conducted in a 70-kg test participant at standard physiologic state: pH = 7.40, partial pressure of carbon dioxide (PCO2) = 40 mm Hg, Henderson-Hasselbalch actual bicarbonate ([HCO3]HH) = 24.5 mEq/L, strong ion difference (SID) = 38.9 mEq/L, albumin = 4.40 g/dL, inorganic phosphate = 1.16 mmol/L, citrate total = 0.135 mmol/L, and SBE = 0.1 mEq/L. Simulations of multiple, sequential crystalloid infusions up to 10 L were conducted with normal saline (SID = 0), lactated Ringer's (SID = 28), plasmalyte 148 (SID = 50), one-half normal saline + 75 mEq/L sodium bicarbonate (NaHCO3; SID = 75), 0.15 mol/L NaHCO3 (SID = 150), and a hypothetical crystalloid solution whose SID = 24.5 mEq/L, respectively. Simulations were based on theoretical completion of steady-state equilibrium, and PCO2 was fixed at 40 mm Hg to assess nonrespiratory acid-base effects. A crystalloid SID equivalent to the standard-state actual bicarbonate (24.5 mEq/L) results in a neutral metabolic acid-base status for infusions up to 10 L. The 5 study solutions exhibited curvilinear relationships between SBE and crystalloid infusion volume in liters. Solutions whose SID was greater than 24.5 mEq/L demonstrated a progressive metabolic alkalosis, and those whose SID was less, a progressive metabolic acidosis. In a human model system, the effects of crystalloid infusion on SBE are a function of the crystalloid and plasma SID, volume infused, and nonvolatile plasma weak acid changes. A projection of the impact of a unit volume of various isotonic crystalloid solutions on SBE is presented. The model's validation, applications, and limitations are examined.
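The central mechanism can be caricatured with a single-compartment mixing sketch: infusing a crystalloid drags the plasma SID toward the fluid's SID, which is why a fluid whose SID equals the baseline bicarbonate (24.5 mEq/L) is metabolically neutral. The extracellular volume and one-compartment assumption below are illustrative simplifications; the paper's full model also tracks weak-acid dilution:

```python
def mixed_sid(plasma_sid, plasma_vol_l, fluid_sid, infused_l):
    """Volume-weighted SID after mixing an infusion into one compartment (mEq/L)."""
    return (plasma_sid * plasma_vol_l + fluid_sid * infused_l) / (plasma_vol_l + infused_l)

ecf = 15.0           # assumed extracellular fluid volume (L) for a 70-kg adult
baseline_sid = 38.9  # standard-state plasma SID from the abstract

# effect of 2 L of each fluid on the compartment SID
for fluid_sid, name in [(0, "normal saline"), (28, "lactated Ringer's"), (50, "plasmalyte 148")]:
    print(name, round(mixed_sid(baseline_sid, ecf, fluid_sid, 2.0), 1))
```

Low-SID saline pulls the mixed SID down (acidosis), while plasmalyte pushes it up (alkalosis), reproducing the direction, though not the magnitude, of the paper's curvilinear SBE results.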
2. Variation of Fundamental Constants
Flambaum, V. V.
2006-11-01
Theories unifying gravity with other interactions suggest temporal and spatial variation of the fundamental ``constants'' in the expanding Universe. The spatial variation can explain the fine tuning of the fundamental constants which allows humans (and any life) to appear. We appeared in the area of the Universe where the values of the fundamental constants are consistent with our existence. We present a review of recent works devoted to the variation of the fine structure constant α, the strong interaction and fundamental masses. There are some hints of variation in quasar absorption spectra, Big Bang nucleosynthesis, and Oklo natural nuclear reactor data. A very promising method to search for variation of the fundamental constants consists in comparison of different atomic clocks. Huge enhancement of the variation effects happens in transitions between accidentally degenerate atomic and molecular energy levels. A new idea is to build a ``nuclear'' clock based on the ultraviolet transition between a very low excited state and the ground state in the Thorium nucleus. This may allow the sensitivity to the variation to be improved by up to 10 orders of magnitude! Huge enhancement of the variation effects is also possible in cold atomic and molecular collisions near a Feshbach resonance.
3. Cosmic curvature from de Sitter equilibrium cosmology.
PubMed
Albrecht, Andreas
2011-10-01
I show that the de Sitter equilibrium cosmology generically predicts observable levels of curvature in the Universe today. The predicted value of the curvature, Ω(k), depends only on the ratio of the density of nonrelativistic matter to cosmological constant density ρ(m)(0)/ρ(Λ) and the value of the curvature from the initial bubble that starts the inflation, Ω(k)(B). The result is independent of the scale of inflation, the shape of the potential during inflation, and many other details of the cosmology. Future cosmological measurements of ρ(m)(0)/ρ(Λ) and Ω(k) will open up a window on the very beginning of our Universe and offer an opportunity to support or falsify the de Sitter equilibrium cosmology.
4. Elastic constants of calcite
USGS Publications Warehouse
Peselnick, L.; Robie, R.A.
1962-01-01
The recent measurements of the elastic constants of calcite by Reddy and Subrahmanyam (1960) disagree with the values obtained independently by Voigt (1910) and Bhimasenachar (1945). The present authors, using an ultrasonic pulse technique at 3 Mc/s and 25°C, determined the elastic constants of calcite using the exact equations governing the wave velocities in the single crystal. The results are C11 = 13.7, C33 = 8.11, C44 = 3.50, C12 = 4.82, C13 = 5.68, and C14 = -2.00, in units of 10^11 dyn/cm^2. Independent checks of several of the elastic constants were made employing other directions and polarizations of the wave velocities. With the exception of C13, these values substantially agree with the data of Voigt and Bhimasenachar. © 1962 The American Institute of Physics.
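A quick consistency check one can run on such constants: the longitudinal wave speed along the c-axis of a trigonal crystal is v = √(C33/ρ). The calcite density used below (2.71 g/cm³) is a standard handbook value assumed here, not taken from the abstract:

```python
import math

C33 = 8.11e11   # dyn/cm^2, from the abstract
rho = 2.71      # g/cm^3, standard calcite density (assumption)

# dyn/cm^2 divided by g/cm^3 gives cm^2/s^2, so v comes out in cm/s
v_cm_per_s = math.sqrt(C33 / rho)
print(round(v_cm_per_s / 1e5, 2), "km/s")  # roughly 5.5 km/s
```

The result is in the few-km/s range typical of ultrasonic velocities in minerals, which is consistent with the pulse technique described.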
5. The Hubble constant
NASA Technical Reports Server (NTRS)
Huchra, John P.
1992-01-01
The Hubble constant is the constant of proportionality between recession velocity and distance in the expanding universe. It is a fundamental property of cosmology that sets both the scale and the expansion age of the universe. It is determined by measurement of galaxy radial velocities and distances. Although there has been considerable progress in the development of new techniques for the measurements of galaxy distances, both calibration uncertainties and debates over systematic errors remain. Current determinations still range over nearly a factor of 2; the higher values favored by most local measurements are not consistent with many theories of the origin of large-scale structure and stellar evolution.
6. Functional nucleic-acid-based sensors for environmental monitoring.
PubMed
Sett, Arghya; Das, Suradip; Bora, Utpal
2014-10-01
Efforts to replace conventional chromatographic methods for environmental monitoring with cheaper and easy to use biosensors for precise detection and estimation of hazardous environmental toxicants, water or air borne pathogens as well as various other chemicals and biologics are gaining momentum. Out of the various types of biosensors classified according to their bio-recognition principle, nucleic-acid-based sensors have shown high potential in terms of cost, sensitivity, and specificity. The discovery of catalytic activities of RNA (ribozymes) and DNA (DNAzymes) which could be triggered by divalent metallic ions paved the way for their extensive use in detection of heavy metal contaminants in environment. This was followed with the invention of small oligonucleotide sequences called aptamers which can fold into specific 3D conformation under suitable conditions after binding to target molecules. Due to their high affinity, specificity, reusability, stability, and non-immunogenicity to vast array of targets like small and macromolecules from organic, inorganic, and biological origin, they can often be exploited as sensors in industrial waste management, pollution control, and environmental toxicology. Further, rational combination of the catalytic activity of DNAzymes and RNAzymes along with the sequence-specific binding ability of aptamers have given rise to the most advanced form of functional nucleic-acid-based sensors called aptazymes. Functional nucleic-acid-based sensors (FNASs) can be conjugated with fluorescent molecules, metallic nanoparticles, or quantum dots to aid in rapid detection of a variety of target molecules by target-induced structure switch (TISS) mode. Although intensive research is being carried out for further improvements of FNAs as sensors, challenges remain in integrating such bio-recognition element with advanced transduction platform to enable its use as a networked analytical system for tailor made analysis of environmental
7. Thermal equilibrium of goats.
PubMed
Maia, Alex S C; Nascimento, Sheila T; Nascimento, Carolina C N; Gebremedhin, Kifle G
2016-05-01
The effects of air temperature and relative humidity on the thermal equilibrium of goats in a tropical region were evaluated. Nine non-pregnant Anglo Nubian nanny goats were used in the study. An indirect calorimeter was designed and developed to measure oxygen consumption, carbon dioxide production, methane production and the water vapour pressure of the air exhaled from goats. Physiological parameters (rectal temperature, skin temperature, hair-coat temperature, expired air temperature, and respiratory rate and volume) as well as environmental parameters (air temperature, relative humidity and mean radiant temperature) were measured. The results show that respiratory rate and volume and latent heat loss did not change significantly for air temperatures between 22 and 26°C. In this temperature range, metabolic heat was lost mainly by convection and long-wave radiation. For temperatures greater than 30°C, the goats maintained thermal equilibrium mainly by evaporative heat loss. At the higher air temperatures, the respiratory and ventilation rates as well as body temperatures were significantly elevated. It can be concluded that for Anglo Nubian goats, the upper limit of air temperature for comfort is around 26°C when the goats are protected from direct solar radiation.
9. Gallic acid-based indanone derivatives as anticancer agents.
PubMed
Saxena, Hari Om; Faridi, Uzma; Srivastava, Suchita; Kumar, J K; Darokar, M P; Luqman, Suaib; Chanotiya, C S; Krishna, Vinay; Negi, Arvind S; Khanuja, S P S
2008-07-15
Gallic acid-based indanone derivatives have been synthesised. Some of the indanones showed very good anticancer activity in the MTT assay. Compounds 10, 11, 12 and 14 possessed potent anticancer activity against various human cancer cell lines. The most potent indanone (10, IC(50) = 2.2 microM) against MCF-7, a hormone-dependent breast cancer cell line, showed no toxicity to human erythrocytes even at higher concentrations (100 microg/ml, 258 microM), whereas indanones 11, 12 and 14 showed toxicity to erythrocytes at higher concentrations.
10. Acid-Base Homeostasis: Overview for Infusion Nurses.
PubMed
Masco, Natalie A
2016-01-01
Acid-base homeostasis is essential to normal function of the human body. Even slight alterations can significantly alter physiologic processes at the tissue and cellular levels. To optimally care for patients, nurses must be able to recognize signs and symptoms that indicate deviations from normal. Nurses who provide infusions to patients, whether in acute care, home care, or infusion center settings, have a responsibility to be able to recognize the laboratory value changes that occur with the imbalance and appreciate the treatment options, including intravenous infusions. PMID:27598068
11. A fully automatic system for acid-base coulometric titrations.
PubMed
Cladera, A; Caro, A; Estela, J M; Cerdà, V
1990-01-01
An automatic system for acid-base titrations by electrogeneration of H(+) and OH(-) ions, with potentiometric end-point detection, was developed. The system includes a PC-compatible computer for instrumental control, data acquisition and processing, which allows up to 13 samples to be analysed sequentially with no human intervention. The system performance was tested on the titration of standard solutions, which it carried out with low errors and RSD. It was subsequently applied to the analysis of various samples of environmental and nutritional interest, specifically waters, soft drinks and wines.
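In coulometric titration the "titrant" is generated electrolytically, so the amount of OH⁻ (or H⁺) delivered follows directly from Faraday's law, n = I·t/F, with no burette calibration involved. The current and time below are illustrative values, not parameters of the described instrument:

```python
F = 96485.0  # C/mol, Faraday constant

def moles_generated(current_a, time_s):
    """Moles of a singly charged ion produced by electrolysis at 100% current efficiency."""
    return current_a * time_s / F

# e.g. 10 mA applied for 100 s
n = moles_generated(0.010, 100.0)
print(f"{n:.3e} mol OH- generated")
```

Because current and time can be controlled far more precisely than small liquid volumes, this is the source of the low errors and RSD reported for the system.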
12. A Computer-Based Simulation of an Acid-Base Titration
ERIC Educational Resources Information Center
Boblick, John M.
1971-01-01
Reviews the advantages of computer simulated environments for experiments, referring in particular to acid-base titrations. Includes pre-lab instructions and a sample computer printout of a student's use of an acid-base simulation. Ten references. (PR)
13. Compassion is a constant.
PubMed
Scott, Tricia
2015-11-01
Compassion is a powerful word that describes an intense feeling of commiseration and a desire to help those struck by misfortune. Most people know intuitively how and when to offer compassion to relieve another person's suffering. In health care, compassion is a constant; it cannot be rationed because emergency nurses have limited time or resources to manage increasing demands.
14. XrayOpticsConstants
2005-06-20
This application (XrayOpticsConstants) is a tool for displaying X-ray and Optical properties for a given material, x-ray photon energy, and in the case of a gas, pressure. The display includes fields such as the photo-electric absorption attenuation length, density, material composition, index of refraction, and emission properties (for scintillator materials).
16. Potentiometric determination of the total acidity of humic acids by constant-current coulometry.
PubMed
Palladino, Giuseppe; Ferri, Diego; Manfredi, Carla; Vasca, Ermanno
2007-01-16
A straightforward method for both the quantitative and the equilibrium analysis of humic acids in solution, based on the combination of potentiometry with coulometry, is presented. The method is based on potentiometric titrations of alkaline solutions containing, besides the humic acid sample, also 1 M NaClO4; by means of constant-current coulometry the analytical acidity of the solutions is increased with high precision until the formation of a solid phase occurs. Hence, the total acid content of the macromolecules may be determined from the e.m.f. data by using modified Gran plots or least-squares minimization programs. It is proposed to use the pK(w) value in the ionic medium as a check of the correctness of each experiment; this datum may be readily obtained as a side result of each titration. Modelling of the acid-base equilibria of the HA samples analysed was also performed, on the basis of the buffer capacity variations occurring during each titration. The experimental data fit having the least standard deviation was obtained assuming a mixture of three monoprotic acids (HX, HY, HZ) of about the same analytical concentration, whose acid dissociation constants in 1 M NaClO4 at 25°C were pK(HX) = 3.9 ± 0.2, pK(HY) = 7.5 ± 0.3, and pK(HZ) = 9.5 ± 0.2, respectively. With the proposed method, the handling of alkaline HA solutions, titration with very dilute NaOH or HCl solutions, and the need for very small volumes of titrant added by microburettes may all be avoided.
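The Gran-plot idea mentioned above can be shown on synthetic data: past the equivalence point of a strong-base-with-strong-acid titration, G(V) = (V0 + V)·10^(−pH) is linear in added acid volume V, and its x-intercept recovers the equivalence volume. All concentrations and volumes below are invented for the demonstration (this is the classic Gran function, not the paper's modified variant):

```python
import math

V0 = 50.0   # mL of sample (assumed)
Ca = 0.100  # mol/L of titrant acid (assumed)
Ve = 10.0   # true equivalence volume used to synthesize the data

def ph_past_equivalence(v):
    """pH from the excess strong acid after the equivalence point (activity effects ignored)."""
    h = Ca * (v - Ve) / (V0 + v)
    return -math.log10(h)

vols = [11.0, 12.0, 13.0, 14.0, 15.0]
gran = [(V0 + v) * 10 ** (-ph_past_equivalence(v)) for v in vols]

# least-squares line through (V, G): G = slope*V + intercept; equivalence at V = -intercept/slope
n = len(vols)
sx, sy = sum(vols), sum(gran)
sxx = sum(v * v for v in vols)
sxy = sum(v * g for v, g in zip(vols, gran))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
print(round(-intercept / slope, 3))  # recovers Ve = 10.0
```

The same linearization works from raw e.m.f. data via the Nernst equation, which is why Gran plots pair naturally with the potentiometric detection described.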
17. 78 FR 36698 - Microbiology Devices; Reclassification of Nucleic Acid-Based Systems for Mycobacterium tuberculosis
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-19
... Nucleic Acid-Based Systems for Mycobacterium tuberculosis Complex in Respiratory Specimens AGENCY: Food...) is proposing to reclassify nucleic acid-based in vitro diagnostic devices for the detection of... Controls Guideline: Nucleic Acid-Based In Vitro Diagnostic Devices for the Detection of...
18. Phase equilibrium studies
SciTech Connect
Mathias, P.M.; Stein, F.P.
1983-09-01
A phase equilibrium model has been developed for the SRC-I process, as well as the other coal liquefaction processes. It is applicable to both vapor/liquid and liquid/liquid equilibria; it also provides an approximate but adequate description of aqueous mixtures where the volatile electrolyte components dissociate to form ionic species. This report completes the description of the model presented in an earlier report (Mathias and Stein, 1983a). Comparisons of the model to previously published data on coal-fluid mixtures are presented. Further, a preliminary analysis of new data on SRC-I coal fluids is presented. Finally, the current capabilities and deficiencies of the model are discussed. 25 references, 17 figures, 30 tables.
19. Statistical physics "Beyond equilibrium"
SciTech Connect
Ecke, Robert E
2009-01-01
The scientific challenges of the 21st century will increasingly involve competing interactions, geometric frustration, spatial and temporal intrinsic inhomogeneity, nanoscale structures, and interactions spanning many scales. We will focus on a broad class of emerging problems that will require new tools in non-equilibrium statistical physics and that will find application in new material functionality, in predicting complex spatial dynamics, and in understanding novel states of matter. Our work will encompass materials under extreme conditions involving elastic/plastic deformation, competing interactions, intrinsic inhomogeneity, frustration in condensed matter systems, scaling phenomena in disordered materials from glasses to granular matter, quantum chemistry applied to nano-scale materials, soft-matter materials, and spatio-temporal properties of both ordinary and complex fluids.
20. Stochastic acid-based quenching in chemically amplified photoresists: a simulation study
Mack, Chris A.; Biafore, John J.; Smith, Mark D.
2011-04-01
BACKGROUND: The stochastic nature of acid-base quenching in chemically amplified photoresists leads to variations in the resulting acid concentration during post-exposure bake, which leads to line-edge roughness (LER) of the resulting features. METHODS: Using a stochastic resist simulator, we predicted the mean and standard deviation of the acid concentration after post-exposure bake for an open-frame exposure and fit the results to empirical expressions. RESULTS: The mean acid concentration after quenching can be predicted using the reaction-limited rate equation and an effective rate constant. The effective quenching rate constant is predicted by an empirical expression. A second empirical expression for the standard deviation of the acid concentration matched the output of the PROLITH stochastic resist model to within a few percent. CONCLUSIONS: Predicting the stochastic uncertainty in acid concentration during post-exposure bake for 193-nm and extreme ultraviolet resists allows optimization of resist processing and formulations, and may form the basis of a comprehensive LER model.
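The "reaction-limited rate equation" for the mean acid concentration can be sketched as simple second-order kinetics: acid A and quencher base B react as dA/dt = −k·A·B, and since A − B is conserved (Δ = A0 − B0), the solution is A(t) = Δ / (1 − (B0/A0)·e^(−kΔt)) for A0 ≠ B0. The rate constant and concentrations below are illustrative, not fitted PROLITH parameters:

```python
import math

def mean_acid(a0, b0, k, t):
    """Mean acid concentration for second-order quenching dA/dt = -k*A*B (A0 != B0)."""
    delta = a0 - b0
    return delta / (1.0 - (b0 / a0) * math.exp(-k * delta * t))

# with excess acid, a long bake depletes the quencher and A approaches A0 - B0
a_inf = mean_acid(a0=0.30, b0=0.10, k=5.0, t=60.0)
print(round(a_inf, 4))
```

This deterministic mean is the baseline around which the paper's stochastic model adds the counting-statistics variation responsible for LER.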
1. Acid-base and catalytic properties of the products of oxidative thermolysis of double complex compounds
Pechenyuk, S. I.; Semushina, Yu. P.; Kuz'mich, L. F.; Ivanov, Yu. V.
2016-01-01
The acid-base properties of the products of thermal decomposition of [M(A)6]x[M1(L)6]y binary complexes (where M is Co, Cr, Cu, Ni; M1 is Fe, Cr, Co; A is NH3, 1/2 en, 1/2 pn, CO(NH2)2; and L is CN, 1/2 C2O4) in air, and their catalytic properties in the oxidation of ethanol with atmospheric oxygen, are studied. It is found that these thermolysis products are mixed oxides of the central atoms of the complexes, characterized by pH values of the zero charge point in the region of 4-9, OH-group sorption limits from 1 × 10⁻⁴ to 4.5 × 10⁻⁴ g-eq/g, OH-group surface concentrations of 10-50 nm⁻² in 0.1 M NaCl solutions, and S_sp from 3 to 95 m²/g. Their catalytic activity is estimated from the apparent rate constant of the conversion of ethanol into CO2. The values of the constants are (1-6.5) × 10⁻⁵ s⁻¹, depending on the gas flow rate and the S_sp value.
2. Equilibrium properties of chemically reacting gases
NASA Technical Reports Server (NTRS)
1976-01-01
The equilibrium energy, enthalpy, entropy, specific heat at constant volume and constant pressure, and the equation of state of the gas are all derived for chemically reacting gas mixtures in terms of the compressibility, the mol fractions, the thermodynamic properties of the pure gas components, and the change in zero point energy due to reaction. Results are illustrated for a simple diatomic dissociation reaction and nitrogen is used as an example. Next, a gas mixture resulting from combined diatomic dissociation and atomic ionization reactions is treated and, again, nitrogen is used as an example. A short discussion is given of the additional complexities involved when precise solutions for high-temperature air are desired, including effects caused by NO produced in shuffle reactions and by other trace species formed from CO2, H2O and Ar found in normal air.
3. Wall of fundamental constants
SciTech Connect
Olive, Keith A.; Peloso, Marco; Uzan, Jean-Philippe
2011-02-15
We consider the signatures of a domain wall produced in the spontaneous symmetry breaking involving a dilatonlike scalar field coupled to electromagnetism. Domains on either side of the wall exhibit slight differences in their respective values of the fine-structure constant, α. If such a wall is present within our Hubble volume, absorption spectra at large redshifts may or may not provide a variation in α relative to the terrestrial value, depending on our relative position with respect to the wall. This wall could resolve the contradiction between claims of a variation of α based on Keck/HIRES data and of the constancy of α based on Very Large Telescope data. We derive the properties of the wall and the parameters of the underlying microscopic model required to reproduce the possible spatial variation of α. We discuss the constraints on the existence of the low-energy domain wall and describe its observational implications concerning the variation of the fundamental constants.
4. A continuum model for flocking: Obstacle avoidance, equilibrium, and stability
Mecholsky, Nicholas Alexander
The modeling and investigation of the dynamics and configurations of animal groups is a subject of growing attention. In this dissertation, we present a partial-differential-equation based continuum model of flocking and use it to investigate several properties of group dynamics and equilibrium. We analyze the reaction of a flock to an obstacle or an attacking predator. We show that the flock response is in the form of density disturbances that resemble Mach cones whose configuration is determined by the anisotropic propagation of waves through the flock. We investigate the effect of a flock 'pressure' and pairwise repulsion on an equilibrium density distribution. We investigate both linear and nonlinear pressures, look at the convergence to a 'cold' (T → 0) equilibrium solution, and find regions of parameter space where different models produce the same equilibrium. Finally, we analyze the stability of an equilibrium density distribution to long-wavelength perturbations. Analytic results for the stability of a constant density solution as well as stability regimes for constant density solutions to the equilibrium equations are presented.
5. Varying constants quantum cosmology
SciTech Connect
Leszczyńska, Katarzyna; Balcerzak, Adam; Dabrowski, Mariusz P.
2015-02-01
We discuss minisuperspace models within the framework of varying physical constants theories including a Λ-term. In particular, we consider the varying speed of light (VSL) theory and the varying gravitational constant theory (VG), using specific ansätze for the variability of the constants: c(a) = c_0 a^n and G(a) = G_0 a^q. We find that most of the varying-c and varying-G minisuperspace potentials are of the tunneling type, which allows the use of the WKB approximation of quantum mechanics. Using this method we show that the probability of tunneling of the universe ''from nothing'' (a = 0) to a Friedmann geometry with the scale factor a_t is large for growing-c models and is strongly suppressed for diminishing-c models. As for varying G, the probability of tunneling is large for diminishing G, while it is small for increasing G. In general, both varying c and G change the probability of tunneling in comparison to universe models with the standard matter content (cosmological term, dust, radiation).
6. Absorption Spectroscopy Study of Acid-Base and Metal-Binding Properties of Flavanones
Shubina, V. S.; Shatalina, Yu. V.
2013-11-01
We have used absorption spectroscopy to study the acid-base and metal-binding properties of two structurally similar flavanones: taxifolin and naringenin. We have determined the acid dissociation constants for taxifolin (pKa1 = 7.10 ± 0.05, pKa2 = 8.60 ± 0.09, pKa3 = 8.59 ± 0.19, pKa4 = 11.82 ± 0.36) and naringenin (pKa1 = 7.05 ± 0.05, pKa2 = 8.85 ± 0.09, pKa3 = 12.01 ± 0.38). The appearance of new absorption bands in the visible wavelength region let us determine the stoichiometric composition of the iron (II) complexes of the flavanones. We show that at pH 5, in solution there is a mixture of complexes between taxifolin and iron (II) ions in stoichiometric ratio 2:1 and 1:2, while at pH 7.4 and pH 9, we detect a 1:1 taxifolin:Fe(II) complex. We established that at these pH values, naringenin forms a 2:1 complex with iron (II) ions. We propose structures for the complexes formed. Comprehensive study of the acid-base properties and the metal-binding capability of the two structurally similar flavanones let us determine the structure-properties relation and the conditions under which antioxidant activity of the polyphenols appears, via chelation of variable-valence metal ions.
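The spectrophotometrically determined pKa values translate directly into speciation: for a single ionizable group, the deprotonated fraction at a given pH follows from the Henderson-Hasselbalch relation, f = 1 / (1 + 10^(pKa − pH)). The sketch below applies it to taxifolin's first dissociation using pKa1 from the abstract:

```python
def deprotonated_fraction(pka, ph):
    """Fraction of a single ionizable group in the deprotonated form at the given pH."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# taxifolin pKa1 = 7.10 (from the abstract) at physiological pH 7.4
f = deprotonated_fraction(7.10, 7.4)
print(round(f, 3))
```

For overlapping pKa values like taxifolin's pKa2 and pKa3, the single-group formula is only an approximation; a full treatment would sum over all microstates of the polyprotic system.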
7. Acid-base and respiratory properties of a buffered bovine erythrocyte perfusion medium.
PubMed
Lindinger, M I; Heigenhauser, G J; Jones, N L
1986-05-01
Current research in organ physiology often utilizes in situ or isolated perfused tissues. We have characterized a perfusion medium associated with excellent performance characteristics in perfused mammalian skeletal muscle. The perfusion medium consisting of Krebs-Henseleit buffer, bovine serum albumin, and fresh bovine erythrocytes was studied with respect to its gas-carrying relationships and its response to manipulation of acid-base state. Equilibration of the perfusion medium at base excess of -10, -5, 0, 5, and 10 mmol·L⁻¹ to humidified gas mixtures varying in their CO2 and O2 content was followed by measurements of perfusate hematocrit, hemoglobin concentration, pH, PCO2, CCO2, PO2, and percent oxygen saturation. The oxygen dissociation curve was similar to that of mammalian bloods, having a P50 of 32 Torr (1 Torr = 133.3 Pa), Hill's constant n of 2.87 ± 0.15, and a Bohr factor of -0.47, showing the typical Bohr shifts with respect to CO2 and pH. The oxygen capacity was calculated to be 190 mL·L⁻¹ blood. The carbon dioxide dissociation curve was also similar to that of mammalian blood. The in vitro nonbicarbonate buffer capacity (Δ[HCO3⁻]·ΔpH⁻¹) at zero base excess was -24.6 and -29.9 mmol·L⁻¹·pH⁻¹ for the perfusate and buffer, respectively. The effects of reduced oxygen saturation on base excess and pH of the medium were quantified. The data were used to construct an acid-base alignment diagram for the medium, which may be used to quantify the flux of nonvolatile acid or base added to the venous effluent during tissue perfusions.
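The reported oxygen dissociation curve (P50 = 32 Torr, Hill's constant n = 2.87) can be sketched with the standard Hill equation; using this functional form is the usual assumption for such parameters, not an equation quoted from the paper:

```python
def o2_saturation(po2_torr, p50=32.0, n=2.87):
    """Fractional hemoglobin O2 saturation at a given PO2 (Torr),
    using the Hill equation S = P^n / (P50^n + P^n)."""
    return po2_torr ** n / (p50 ** n + po2_torr ** n)

# By construction the curve passes through 50% saturation at P = P50.
for po2 in (20, 32, 60, 100):
    print(po2, round(o2_saturation(po2), 3))
```

The sigmoidal shape (steep around P50, flattening at high PO2) is what makes the medium behave like mammalian blood in the working range.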
8. Acid-base titrations using microfluidic paper-based analytical devices.
PubMed
Karita, Shingo; Kaneta, Takashi
2014-12-16
9. Acid-base balance in the developing marsupial: from ectotherm to endotherm.
PubMed
Andrewartha, Sarah J; Cummings, Kevin J; Frappell, Peter B
2014-05-01
Marsupial joeys are born ectothermic and develop endothermy within their mother's thermally stable pouch. We hypothesized that Tammar wallaby joeys would switch from α-stat to pH-stat regulation during the transition from ectothermy to endothermy. To address this, we compared ventilation (Ve), metabolic rate (Vo2), and variables relevant to blood gas and acid-base regulation and oxygen transport including the ventilatory requirements (Ve/Vo2 and Ve/Vco2), partial pressures of oxygen (PaO2), carbon dioxide (PaCO2), pHa, and oxygen content (CaO2) during progressive hypothermia in ecto- and endothermic Tammar wallabies. We also measured the same variables in the well-studied endotherm, the Sprague-Dawley rat. Hypothermia was induced in unrestrained, unanesthetized joeys and rats by progressively dropping the ambient temperature (Ta). Rats were additionally exposed to helox (80% helium, 20% oxygen) to facilitate heat loss. Respiratory, metabolic, and blood-gas variables were measured over a large body temperature (Tb) range (∼15-16°C in both species). Ectothermic joeys displayed limited thermogenic ability during cooling: after an initial plateau, Vo2 decreased with the progressive drop in Tb. The Tb of endothermic joeys and rats fell despite Vo2 nearly doubling with the initiation of cold stress. In all three groups the changes in Vo2 were met by changes in Ve, resulting in constant Ve/Vo2 and Ve/Vco2, blood gases, and pHa. Thus, although thermogenic capability was nearly absent in ectothermic joeys, blood acid-base regulation was similar to endothermic joeys and rats. This suggests that unlike some reptiles, unanesthetized mammals protect arterial blood pH with changing Tb, irrespective of their thermogenic ability and/or stage of development.
10. The influence of dissolved organic matter on the acid-base system of the Baltic Sea
Kuliński, Karol; Schneider, Bernd; Hammer, Karoline; Machulik, Ulrike; Schulz-Bull, Detlef
2014-04-01
To assess the influence of dissolved organic matter (DOM) on the acid-base system of the Baltic Sea, 19 stations along the salinity gradient from Mecklenburg Bight to the Bothnian Bay were sampled in November 2011 for total alkalinity (AT), total inorganic carbon concentration (CT), partial pressure of CO2 (pCO2), and pH. Based on these data, an organic alkalinity contribution (Aorg) was determined, defined as the difference between measured AT and the inorganic alkalinity calculated from CT and pH and/or CT and pCO2. Aorg was in the range of 22-58 μmol kg⁻¹, corresponding to 1.5-3.5% of AT. The method to determine Aorg was validated in an experiment performed on DOM-enriched river water samples collected from the mouths of the Vistula and Oder Rivers in May 2012. The Aorg increase determined in that experiment correlated directly with the increased DOC concentration caused by enrichment of the > 1 kDa DOM fraction. To examine the effect of Aorg on calculations of the marine CO2 system, the pCO2 and pH values measured in Baltic Sea water were compared with calculated values that were based on the measured alkalinity and another variable of the CO2 system, but ignored the existence of Aorg. Large differences between measured and calculated pCO2 and pH were obtained when the computations were based on AT and CT. The calculated pCO2 was 27-56% lower than the measured value whereas the calculated pH was overestimated by more than 0.4 pH units. Since biogeochemical models are based on the transport and transformations of AT and CT, the acid-base properties of DOM should be included in calculations of the CO2 system in DOM-rich basins like the Baltic Sea. In view of our limited knowledge about the composition and acid/base properties of DOM, this is best achieved using a bulk dissociation constant, KDOM, that represents all weakly acidic functional groups present in DOM. Our preliminary results indicated that the bulk KDOM in the Baltic Sea is 2.94·10⁻⁸ mol kg⁻¹.
11. Semiexperimental equilibrium structure of the lower energy conformer of glycidol by the mixed estimation method.
PubMed
Demaison, Jean; Craig, Norman C; Conrad, Andrew R; Tubergen, Michael J; Rudolph, Heinz Dieter
2012-09-13
Rotational constants were determined for ¹⁸O-substituted isotopologues of the lower-energy conformer of glycidol, which has an intramolecular inner hydrogen bond from the hydroxyl group to the oxirane ring oxygen. Rotational constants were previously determined for the ¹³C and the OD species. These rotational constants have been corrected with the rovibrational constants calculated from an ab initio cubic force field. The derived semiexperimental equilibrium rotational constants have been supplemented by carefully chosen structural parameters, including those for hydrogen atoms, from medium-level ab initio calculations. The combined data have been used in a weighted least-squares fit to determine an equilibrium structure for the glycidol H-bond inner conformer. This work shows that the mixed estimation method allows us to determine a complete and reliable equilibrium structure for large molecules, even when the rotational constants of a number of isotopologues are unavailable.
12. A damped pendulum forced with a constant torque
Coullet, P.; Gilli, J. M.; Monticelli, M.; Vandenberghe, N.
2005-12-01
The dynamics of a damped pendulum driven by a constant torque is studied experimentally and theoretically. We use this simple device to demonstrate some generic dynamical behavior including the loss of equilibrium or saddle node bifurcation with or without hysteresis and the homoclinic bifurcation. A qualitative analysis is developed to emphasize the role of two dimensionless parameters corresponding to damping and forcing.
13. Henry's law constants for dimethylsulfide in freshwater and seawater
NASA Technical Reports Server (NTRS)
Dacey, J. W. H.; Wakeham, S. G.; Howes, B. L.
1984-01-01
Henry's law constants for dimethylsulfide were measured in distilled water and in several waters of varying salinity over a 0-32 °C temperature range. Values of the solubility parameters A and C are obtained for distilled water and seawater, supporting the conclusion that the concentration of dimethylsulfide in the atmosphere is far from equilibrium with seawater.
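Henry's law fixes the equilibrium ratio C_gas/C_aq at a constant H, so an air-sea disequilibrium shows up as a saturation ratio different from 1. A sketch with purely illustrative numbers (the dimensionless H and both concentrations are assumptions, not values from the paper):

```python
def saturation_ratio(c_gas, c_aq, h_dimensionless):
    """Ratio of the actual gas-phase concentration to the one Henry's law
    predicts at equilibrium; a value < 1 means the atmosphere is
    undersaturated relative to the water."""
    return c_gas / (h_dimensionless * c_aq)

# Illustrative DMS-like case: atmosphere far below equilibrium with seawater.
ratio = saturation_ratio(c_gas=1e-10, c_aq=3e-9, h_dimensionless=0.08)
print(round(ratio, 3))
```

A ratio well below 1 is what sustains a net sea-to-air flux of the gas.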
14. Fatty acid-based polyurethane films for wound dressing applications.
PubMed
Gultekin, Guncem; Atalay-Oral, Cigdem; Erkal, Sibel; Sahin, Fikret; Karastova, Djursun; Tantekin-Ersolmaz, S Birgul; Guner, F Seniha
2009-01-01
Fatty acid-based polyurethane films were prepared as a potential wound-dressing material. The polymerization reaction was carried out with or without catalyst, and polymer films were prepared by the casting-evaporation technique with or without a crosslinking catalyst. The film prepared from the uncatalyzed reaction product with a crosslinking catalyst gave a slightly higher crosslink density. Mechanical tests showed that the increase in tensile strength and the decrease in elongation at break are due to the increase in the degree of crosslinking. All films were flexible and resistant to acid solution. The films prepared without a crosslinking catalyst were more hydrophilic and absorbed more water; the highest permeability values were generally obtained for these films. Both the direct contact method and the MTT test were applied to determine the cytotoxicity of the polymer films, and the polyurethane film prepared from the uncatalyzed reaction product without a crosslinking catalyst showed the best biocompatibility, closest to the commercial product Opsite.
16. Ultrasonic and densimetric titration applied for acid-base reactions.
PubMed
Burakowski, Andrzej; Gliński, Jacek
2014-01-01
Classical acid-base titration was monitored acoustically using sound speed and density measurements. Plots of these parameters, as well as of the adiabatic compressibility coefficient calculated from them, exhibit changes with the volume of added titrant. The compressibility changes can be explained and quantitatively predicted in terms of the Pasynski theory of non-compressible hydrates combined with the additivity of hydration numbers over the amounts and types of ions and molecules present in solution. This development could also be applied in chemical engineering for monitoring the course of chemical processes, since the experimental methods can be carried out almost independently of the medium under test (harmful, aggressive, etc.).
17. Micellar acid-base potentiometric titrations of weak acidic and/or insoluble drugs.
PubMed
Gerakis, A M; Koupparis, M A; Efstathiou, C E
1993-01-01
The effect of various surfactants [the cationics cetyl trimethyl ammonium bromide (CTAB) and cetyl pyridinium chloride (CPC), the anionic sodium dodecyl sulphate (SDS), and the nonionic polysorbate 80 (Tween 80)] on the solubility and ionization constant of some sparingly soluble weak acids of pharmaceutical interest was studied. Benzoic acid (and its 3-methyl-, 3-nitro-, and 4-tert-butyl derivatives), acetylsalicylic acid, naproxen, and iopanoic acid were chosen as model examples. Precise and accurate acid-base titrations in micellar systems were made feasible using a microcomputer-controlled titrator. The response curve, response time, and potential drift of the glass electrode in the micellar systems were examined. The cationics CTAB and CPC were found to considerably increase the ionization constants of the weak acids (ΔpKa ranged from -0.21 to -3.57), while the anionic SDS showed a negligible effect and the nonionic Tween 80 generally decreased the ionization constants. The solubility of the acids in aqueous micellar and acidified micellar solutions was studied spectrophotometrically and was found to be increased in all cases. Acetylsalicylic acid, naproxen, benzoic acid, and iopanoic acid could be easily determined in raw material, and some of them in pharmaceutical preparations, by direct titration in the CTAB micellar system instead of the traditional non-aqueous or back titrimetry. Precisions of 0.3-4.3% RSD and good correlation with the official, tedious methods were obtained. An interference study of some excipients showed that a preliminary test should be carried out before the assay of formulations.
18. Nucleic acid-based tissue biomarkers of urologic malignancies.
PubMed
Dietrich, Dimo; Meller, Sebastian; Uhl, Barbara; Ralla, Bernhard; Stephan, Carsten; Jung, Klaus; Ellinger, Jörg; Kristiansen, Glen
2014-08-01
Molecular biomarkers play an important role in the clinical management of cancer patients. Biomarkers allow estimation of the risk of developing cancer; help to diagnose a tumor, ideally at an early stage when cure is still possible; and aid in monitoring disease progression. Furthermore, they hold the potential to predict the outcome of the disease (prognostic biomarkers) and the response to therapy (predictive biomarkers). Altogether, biomarkers will help to avoid tumor-related deaths and reduce overtreatment, and will contribute to increased survival and quality of life in cancer patients due to personalized treatments. It is well established that the process of carcinogenesis is a complex interplay between genomic predisposition, acquired somatic mutations, epigenetic changes and genomic aberrations. Within this complex interplay, nucleic acids, i.e. RNA and DNA, play a fundamental role and therefore represent ideal candidates for biomarkers. They are particularly promising candidates because sequence-specific hybridization and amplification technologies allow highly accurate and sensitive assessment of these biomarker levels over a broad dynamic range. This article provides an overview of nucleic acid-based biomarkers in tissues for the management of urologic malignancies, i.e. tumors of the prostate, testis, kidney, penis, urinary bladder, renal pelvis, ureter and other urinary organs. Special emphasis is put on genomic, transcriptomic and epigenomic biomarkers (SNPs, mutations [genomic and mitochondrial], microsatellite instabilities, viral and bacterial DNA, DNA methylation and hydroxymethylation, mRNA expression, and non-coding RNAs [lncRNA, miRNA, siRNA, piRNA, snRNA, snoRNA]). Due to the multitude of published biomarker candidates, special focus is given to the general applicability of different molecular classes as biomarkers and some particularly promising nucleic acid biomarkers. Furthermore, specific challenges regarding the development and clinical
19. Napoleon Is in Equilibrium
PubMed Central
Phillips, Rob
2016-01-01
It has been said that the cell is the test tube of the twenty-first century. If so, the theoretical tools needed to quantitatively and predictively describe what goes on in such test tubes lag sorely behind the stunning experimental advances in biology seen in the decades since the molecular biology revolution began. Perhaps surprisingly, one of the theoretical tools that has been used with great success on problems ranging from how cells communicate with their environment and each other to the nature of the organization of proteins and lipids within the cell membrane is statistical mechanics. A knee-jerk reaction to the use of statistical mechanics in the description of cellular processes is that living organisms are so far from equilibrium that one has no business even thinking about it. But such reactions are probably too hasty given that there are many regimes in which, because of a separation of timescales, for example, such an approach can be a useful first step. In this article, we explore the power of statistical mechanical thinking in the biological setting, with special emphasis on cell signaling and regulation. We show how such models are used to make predictions and describe some recent experiments designed to test them. We also consider the limits of such models based on the relative timescales of the processes of interest. PMID:27429713
20. Copolymer Crystallization: Approaching Equilibrium
Crist, Buckley; Finerman, Terry
2002-03-01
Random ethylene-butene copolymers of uniform chemical composition and degree of polymerization are crystallized by evaporation of thin films (1 μm-5 μm) from solution. Macroscopic films (∼100 μm) formed by sequential layer deposition are characterized by density, calorimetry, and X-ray techniques. Most notable is the density, which in some cases implies a crystalline fraction nearly 90% of the equilibrium value calculated from Flory theory. The melting temperature of these solution-deposited layers is increased by as much as 8 °C over Tm for the same polymer crystallized from the melt. Small-angle X-ray scattering indicates that the amorphous layer thickness is strongly reduced by this layered crystallization process. X-ray diffraction shows a pronounced orientation of chain axes and lamellar normals parallel to the normal of the macroscopic film. It is clear that solvent enhances chain mobility, permitting proper sequences to aggregate and crystallize in a manner that is never achieved in the melt.
2. Change is a Constant.
PubMed
Lubowitz, James H; Provencher, Matthew T; Brand, Jefferson C; Rossi, Michael J; Poehling, Gary G
2015-06-01
In 2015, Henry P. Hackett, Managing Editor, Arthroscopy, retires, and Edward A. Goss, Executive Director, Arthroscopy Association of North America (AANA), retires. Association is a positive constant, in a time of change. With change comes a need for continuing education, research, and sharing of ideas. While the quality of education at AANA and ISAKOS is superior and most relevant, the unique reason to travel and meet is the opportunity to interact with innovative colleagues. Personal interaction best stimulates new ideas to improve patient care, research, and teaching. Through our network, we best create innovation.
3. Cosmology with varying constants.
PubMed
Martins, Carlos J A P
2002-12-15
The idea of possible time or space variations of the 'fundamental' constants of nature, although not new, is only now beginning to be actively considered by large numbers of researchers in the particle physics, cosmology and astrophysics communities. This revival is mostly due to the claims of possible detection of such variations, in various different contexts and by several groups. I present the current theoretical motivations and expectations for such variations, review the current observational status and discuss the impact of a possible confirmation of these results in our views of cosmology and physics as a whole.
4. Transition State Charge Stabilization and Acid-Base Catalysis of mRNA Cleavage by the Endoribonuclease RelE.
PubMed
Dunican, Brian F; Hiller, David A; Strobel, Scott A
2015-12-01
The bacterial toxin RelE is a ribosome-dependent endoribonuclease. It is part of a type II toxin-antitoxin system that contributes to antibiotic resistance and biofilm formation. During amino acid starvation, RelE cleaves mRNA in the ribosomal A-site, globally inhibiting protein translation. RelE is structurally similar to microbial RNases that employ general acid-base catalysis to facilitate RNA cleavage. The RelE active site is atypical for acid-base catalysis, in that it is enriched with positively charged residues and lacks the prototypical histidine-glutamate catalytic pair, making the mechanism of mRNA cleavage unclear. In this study, we use a single-turnover kinetic analysis to measure the effect of pH and phosphorothioate substitution on the rate constant for cleavage of mRNA by wild-type RelE and seven active-site mutants. Mutation and thio effects indicate a major role for stabilization of increased negative charge in the transition state by arginine 61. The wild-type RelE cleavage rate constant is pH-independent, but the reaction catalyzed by many of the mutants is strongly dependent on pH, suggestive of general acid-base catalysis. pH-rate curves indicate that wild-type RelE operates with the pKa of at least one catalytic residue significantly downshifted by the local environment. Mutation of any single active-site residue is sufficient to disrupt this microenvironment and revert the shifted pKa back above neutrality. pH-rate curves are consistent with K54 functioning as a general base and R81 as a general acid. The capacity of RelE to effect a large pKa shift and facilitate a common catalytic mechanism by uncommon means furthers our understanding of other atypical enzymatic active sites.
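A bell-shaped pH-rate profile of the kind used to assign a general base (K54) and a general acid (R81) can be sketched with the textbook two-ionization model; the functional form, the `k_obs` helper, and the pKa values below are generic illustrations, not fitted RelE parameters:

```python
def k_obs(ph, k_max, pka_base, pka_acid):
    """Observed rate constant when catalysis requires the general base
    deprotonated and the general acid protonated (bell-shaped profile)."""
    f_base = 1.0 / (1.0 + 10.0 ** (pka_base - ph))  # fraction of base deprotonated
    f_acid = 1.0 / (1.0 + 10.0 ** (ph - pka_acid))  # fraction of acid protonated
    return k_max * f_base * f_acid

# The rate is maximal between the two pKa values and falls off on either side.
for ph in (5.0, 7.5, 10.0):
    print(ph, round(k_obs(ph, k_max=1.0, pka_base=6.0, pka_acid=9.0), 3))
```

A pH-independent profile, as seen for wild-type RelE, is what you get when both pKa values are shifted out of the measured pH window.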
5. Local thermodynamic equilibrium for globally disequilibrium open systems under stress
2016-04-01
Predictive modeling of far- and near-equilibrium processes is essential for understanding pattern formation and for quantifying natural processes that are never in global equilibrium. Methods of both equilibrium and non-equilibrium thermodynamics are needed and have to be combined. For example, predicting temperature evolution due to heat conduction requires the simultaneous use of an equilibrium relationship between internal energy and temperature via the heat capacity (the caloric equation of state) and a disequilibrium relationship between heat flux and temperature gradient. Similarly, modeling rocks deforming under stress, reactions in systems open to porous fluid flow, or kinetic overstepping of an equilibrium reaction boundary necessarily requires both equilibrium and disequilibrium material properties measured under fundamentally different laboratory conditions. Classical irreversible thermodynamics (CIT) is a well-developed discipline providing working recipes for the combined application of mutually exclusive experimental data, such as density and chemical potential at rest under constant pressure and temperature, and viscosity of flow under stress. Several examples will be presented.
6. Equilibrium and non-equilibrium cluster phases in colloids with competing interactions.
PubMed
Mani, Ethayaraja; Lechner, Wolfgang; Kegel, Willem K; Bolhuis, Peter G
2014-07-01
The phase behavior of colloids that interact via competing interactions - short-range attraction and long-range repulsion - is studied by computer simulation. In particular, for a fixed strength and range of repulsion, the effect of the strength of an attractive interaction (ε) on the phase behavior is investigated at various colloid densities (ρ). A thermodynamically stable equilibrium colloidal cluster phase, consisting of compact crystalline clusters, is found below the fluid-solid coexistence line in the ε-ρ parameter space. The mean cluster size is found to linearly increase with the colloid density. At large ε and low densities, and at small ε and high densities, a non-equilibrium cluster phase, consisting of elongated Bernal spiral-like clusters, is observed. Although gelation can be induced either by increasing ε at constant density or vice versa, the gelation mechanism is different in either route. While in the ρ route gelation occurs via a glass transition of compact clusters, gelation in the ε route is characterized by percolation of elongated clusters. This study both provides the location of equilibrium and non-equilibrium cluster phases with respect to the fluid-solid coexistence, and reveals the dependencies of the gelation mechanism on the preparation route.
7. Compilation of Henry's law constants, version 3.99
Sander, R.
2014-11-01
Many atmospheric chemicals occur in the gas phase as well as in liquid cloud droplets and aerosol particles. Therefore, it is necessary to understand the distribution between the phases. According to Henry's law, the equilibrium ratio between the abundances in the gas phase and in the aqueous phase is constant for a dilute solution. Henry's law constants of trace gases of potential importance in environmental chemistry have been collected and converted into a uniform format. The compilation contains 14775 values of Henry's law constants for 3214 species, collected from 639 references. It is also available on the internet at http://www.henrys-law.org.
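Compilations like this typically tabulate the solubility form Hcp (aqueous concentration per partial pressure, mol m⁻³ Pa⁻¹); converting it to the dimensionless water/air concentration ratio only needs the ideal gas law. The CO2 value below is a rounded literature figure used for illustration:

```python
R = 8.314  # gas constant, J mol^-1 K^-1

def dimensionless_from_hcp(hcp, temp_k=298.15):
    """Dimensionless Henry constant C_aq / C_gas from Hcp (mol m^-3 Pa^-1),
    using C_gas = p / (R * T) for an ideal gas."""
    return hcp * R * temp_k

# CO2 at 298 K: Hcp ~ 3.3e-4 mol m^-3 Pa^-1 gives a water/air ratio near 0.8.
print(round(dimensionless_from_hcp(3.3e-4), 2))
```

Keeping the convention explicit matters because "Henry's law constant" is reported in at least half a dozen mutually inverse unit systems.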
8. The spectroscopic constants and anharmonic force field of AgSH: An ab initio study
Zhao, Yanliang; Wang, Meishan; Yang, Chuanlu; Ma, Xiaoguang; Zhu, Ziliang
2016-07-01
The equilibrium structure, spectroscopic constants, and anharmonic force field of silver hydrosulfide (AgSH) have been calculated with the B3P86, B3PW91, and MP2 methods employing two basis sets, TZP and QZP. The calculated geometries, ground-state rotational constants, harmonic vibrational wavenumbers, and quartic and sextic centrifugal distortion constants are compared with the available experimental and theoretical data. The equilibrium rotational constants, fundamental frequencies, anharmonicity constants, vibration-rotation interaction constants, Coriolis coupling constants, and cubic and quartic force constants are predicted. The results show that the MP2/TZP values are in good agreement with experimental observations, making MP2/TZP an advisable choice for studying the anharmonic force field of AgSH.
9. Comparison of the acid-base properties of ribose and 2'-deoxyribose nucleotides.
PubMed
Mucha, Ariel; Knobloch, Bernd; Jezowska-Bojczuk, Małgorzata; Kozłowski, Henryk; Sigel, Roland K O
2008-01-01
The extent to which the replacement of a ribose unit by a 2'-deoxyribose unit influences the acid-base properties of nucleotides has not hitherto been determined in detail. In this study, by potentiometric pH titrations in aqueous solution, we have measured the acidity constants of the 5'-di- and 5'-triphosphates of 2'-deoxyguanosine [i.e., of H₂(dGDP)⁻ and H₂(dGTP)²⁻] as well as of the 5'-mono-, 5'-di-, and 5'-triphosphates of 2'-deoxyadenosine [i.e., of H₂(dAMP)±, H₂(dADP)⁻, and H₂(dATP)²⁻]. These 12 acidity constants (of the 56 that are listed) are compared with those of the corresponding ribose derivatives (published data) measured under the same experimental conditions. The results show that all protonation sites in the 2'-deoxynucleotides are more basic than those in their ribose counterparts. The influence of the 2'-OH group is dependent on the number of 5'-phosphate groups as well as on the nature of the purine nucleobase. The basicity of N7 in guanine nucleotides is most significantly enhanced (by about 0.2 pK units), while the effect on the phosphate groups and the N1H or N1H⁺ sites is less pronounced but clearly present. In addition, ¹H NMR chemical shift change studies in dependence on pD in D₂O have been carried out for the dAMP, dADP, and dATP systems, which confirmed the results from the potentiometric pH titrations and showed the nucleotides to be in their anti conformations. Overall, our results are not only of relevance for metal ion binding to nucleotides or nucleic acids, but also constitute an exact basis for the calculation, determination, and understanding of perturbed pKa values in DNAzymes and ribozymes, as needed for the delineation of acid-base mechanisms in catalysis.
10. Equilibrium Shape of Colloidal Crystals.
PubMed
Sehgal, Ray M; Maroudas, Dimitrios
2015-10-27
Assembling colloidal particles into highly ordered configurations, such as photonic crystals, has significant potential for enabling a broad range of new technologies. Facilitating the nucleation of colloidal crystals and developing successful crystal growth strategies require a fundamental understanding of the equilibrium structure and morphology of small colloidal assemblies. Here, we report the results of a novel computational approach to determine the equilibrium shape of assemblies of colloidal particles that interact via an experimentally validated pair potential. While the well-known Wulff construction can accurately capture the equilibrium shape of large colloidal assemblies, containing O(10⁴) or more particles, determining the equilibrium shape of small colloidal assemblies of O(10) particles requires a generalized Wulff construction technique which we have developed for a proper description of equilibrium structure and morphology of small crystals. We identify and characterize fully several "magic" clusters which are significantly more stable than other similarly sized clusters.
11. A Simple Method for the Consecutive Determination of Protonation Constants through Evaluation of Formation Curves
ERIC Educational Resources Information Center
Hurek, Jozef; Nackiewicz, Joanna
2013-01-01
A simple method is presented for the consecutive determination of protonation constants of polyprotic acids based on their formation curves. The procedure is based on generally known equations that describe dissociation equilibria. It has been demonstrated through simulation that the values obtained through the proposed method are sufficiently…
12. Acid-base property of N-methylimidazolium-based protic ionic liquids depending on anion.
PubMed
Kanzaki, Ryo; Doi, Hiroyuki; Song, Xuedan; Hara, Shota; Ishiguro, Shin-ichi; Umebayashi, Yasuhiro
2012-12-01
Proton-donating and ionization properties of several protic ionic liquids (PILs) made from N-methylimidazole (Mim) and a series of acids (HA) have been assessed by means of potentiometric and calorimetric titrations. With regard to the strong acids bis(trifluoromethanesulfonyl)amide (Tf₂NH) and trifluoromethanesulfonic acid (TfOH), it was elucidated that the two equimolar mixtures with Mim consist almost entirely of ionic species, HMim⁺ and A⁻, and that the proton transfer equilibrium corresponding to autoprotolysis in ordinary molecular liquids is established. The respective autoprotolysis constants were successfully evaluated; they indicate that the proton-donating abilities of TfOH and Tf₂NH in the respective PILs are similar. In the case of trifluoroacetic acid, the proton-donating ability of CF₃COOH is much weaker than those of TfOH and Tf₂NH, while ions are still the predominant species. On the other hand, with regard to formic acid and acetic acid, the protons of these acids are suggested not to transfer to Mim sufficiently. From calorimetric titrations, at most about half of the Mim is estimated to be proton-attached in the CH₃COOH-Mim equimolar mixture. In such a mixture, hydrogen-bonding adduct formation has been suggested. The autoprotolysis constants of the present PILs show a good linear correlation with the dissociation constants of the constituent acids in an aqueous phase.
Blichert-Toft, J.; Albarede, F.
2011-12-01
When only modern isotope compositions are concerned, the choice of normalization values is inconsequential provided that their values are universally accepted. No harm is done as long as large amounts of standard reference material with known isotopic differences with respect to the reference value ('anchor point') can be maintained under controlled conditions. For over five decades, the scientific community has been referring to an essentially unavailable SMOW for stable O and H isotopes and to a long-gone belemnite sample for carbon. For radiogenic isotopes, the isotope composition of the daughter element, the parent-daughter ratio, and a particular value of the decay constant are all part of the reference. For the Lu-Hf system, for which the physical measurements of the decay constant have been particularly defective, the reference includes the isotope composition of Hf and the Lu/Hf ratio of an unfortunately heterogeneous chondrite mix that has been successively refined by Patchett and Tatsumoto (1981), Blichert-Toft and Albarede (1997, BTA), and Bouvier et al. (2008, BVP). The εHf(T) difference created by using BTA and BVP is nearly within error (+0.45 epsilon units today and -0.36 at 3 Ga) and therefore of little or no consequence. A more serious issue arises when the chondritic reference is taken to represent the Hf isotope evolution of the Bulk Silicate Earth (BSE): the initial isotope composition of the Solar System, as determined by the indistinguishable intercepts of the external eucrite isochron (Blichert-Toft et al., 2002) and the internal angrite SAH99555 isochron (Thrane et al., 2010), differs from the chondrite value of BTA and BVP extrapolated to 4.56 Ga by ~5 epsilon units. This difference, and the overestimated value of the 176Lu decay constant derived from the slopes of these isochrons, have been interpreted as reflecting irradiation of the solar nebula by either gamma (Albarede et al., 2006) or cosmic rays (Thrane et al., 2010) during
14. Measurement of the solar constant
NASA Technical Reports Server (NTRS)
Crommelynck, D.
1981-01-01
The absolute value of the solar constant and its long-term variations were measured. The solar constant is the total irradiance of the Sun at a distance of one astronomical unit. An absolute radiometer removed from the effects of the atmosphere, with its calibration tested in situ, was used for the measurement. The importance of an accurate knowledge of the solar constant is emphasized.
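The definition quoted above (total irradiance at one astronomical unit) implies a simple inverse-square scaling with distance. A sketch, assuming the modern reference value of about 1361 W/m² rather than any figure from this report:

```python
def irradiance(d_au, S0=1361.0):
    """Total solar irradiance at distance d_au (in astronomical units),
    from the inverse-square law. S0 is the solar constant at 1 AU in
    W/m^2 (modern accepted value, an assumption not from the abstract)."""
    return S0 / d_au ** 2
```

For example, at Mars' mean distance of about 1.52 AU the irradiance drops to roughly 59% of the 1 AU value.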
15. Triprotic acid-base microequilibria and pharmacokinetic sequelae of cetirizine.
PubMed
Marosi, Attila; Kovács, Zsuzsanna; Béni, Szabolcs; Kökösi, József; Noszál, Béla
2009-06-28
(1)H NMR-pH titrations of cetirizine, the widely used antihistamine, and four related compounds were carried out, and the related 11 macroscopic protonation constants were determined. The interactivity parameter between the two piperazine amine groups was obtained from two symmetric piperazine derivatives. Combining these two types of datasets, all 12 microconstants and derived tautomeric constants of cetirizine were calculated. On this basis, the conflicting literature data of cetirizine microspeciation were clarified, and the pharmacokinetic absorption-distribution properties could be interpreted. The pH-dependent distribution of the microspecies is provided.
16. The Hubble constant.
PubMed
Tully, R B
1993-06-01
Five methods of estimating distances have demonstrated internal reproducibility at the level of 5-20% rms accuracy. The best of these are the cepheid (and RR Lyrae), planetary nebulae, and surface-brightness fluctuation techniques. Luminosity-line width and Dn-sigma methods are less accurate for an individual case but can be applied to large numbers of galaxies. The agreement is excellent between these five procedures. It is determined that Hubble constant H0 = 90 +/- 10 km.s-1.Mpc-1 [1 parsec (pc) = 3.09 x 10(16) m]. It is difficult to reconcile this value with the preferred world model even in the low-density case. The standard model with Omega = 1 may be excluded unless there is something totally misunderstood about the foundation of the distance scale or the ages of stars. PMID:11607391
17. When constants are important
SciTech Connect
Beiu, V.
1997-04-01
In this paper the authors discuss several complexity aspects pertaining to neural networks, commonly known as the curse of dimensionality. The focus will be on: (1) size complexity and depth-size tradeoffs; (2) complexity of learning; and (3) precision and limited interconnectivity. Results have been obtained for each of these problems when dealt with separately, but few things are known as to the links among them. They start by presenting known results and try to establish connections between them. These show that they are facing very difficult problems--exponential growth in either space (i.e. precision and size) and/or time (i.e., learning and depth)--when resorting to neural networks for solving general problems. The paper will present a solution for lowering some constants, by playing on the depth-size tradeoff.
18. The Hubble constant.
PubMed Central
Tully, R B
1993-01-01
Five methods of estimating distances have demonstrated internal reproducibility at the level of 5-20% rms accuracy. The best of these are the cepheid (and RR Lyrae), planetary nebulae, and surface-brightness fluctuation techniques. Luminosity-line width and Dn-sigma methods are less accurate for an individual case but can be applied to large numbers of galaxies. The agreement is excellent between these five procedures. It is determined that Hubble constant H0 = 90 +/- 10 km.s-1.Mpc-1 [1 parsec (pc) = 3.09 x 10(16) m]. It is difficult to reconcile this value with the preferred world model even in the low-density case. The standard model with Omega = 1 may be excluded unless there is something totally misunderstood about the foundation of the distance scale or the ages of stars. PMID:11607391
19. Uniaxial constant velocity microactuator
DOEpatents
McIntyre, Timothy J.
1994-01-01
A uniaxial drive system or microactuator capable of operating in an ultra-high vacuum environment. The mechanism includes a flexible coupling having a bore therethrough, and two clamp/pusher assemblies mounted in axial ends of the coupling. The clamp/pusher assemblies are energized by voltage-operated piezoelectrics therewithin to operatively engage the shaft and coupling causing the shaft to move along its rotational axis through the bore. The microactuator is capable of repeatably positioning to sub-nanometer accuracy while affording a scan range in excess of 5 centimeters. Moreover, the microactuator generates smooth, constant velocity motion profiles while producing a drive thrust of greater than 10 pounds. The system is remotely controlled and piezoelectrically driven, hence minimal thermal loading, vibrational excitation, or outgassing is introduced to the operating environment.
20. Uniaxial constant velocity microactuator
DOEpatents
McIntyre, T.J.
1994-06-07
A uniaxial drive system or microactuator capable of operating in an ultra-high vacuum environment is disclosed. The mechanism includes a flexible coupling having a bore therethrough, and two clamp/pusher assemblies mounted in axial ends of the coupling. The clamp/pusher assemblies are energized by voltage-operated piezoelectrics therewithin to operatively engage the shaft and coupling causing the shaft to move along its rotational axis through the bore. The microactuator is capable of repeatably positioning to sub-nanometer accuracy while affording a scan range in excess of 5 centimeters. Moreover, the microactuator generates smooth, constant velocity motion profiles while producing a drive thrust of greater than 10 pounds. The system is remotely controlled and piezoelectrically driven, hence minimal thermal loading, vibrational excitation, or outgassing is introduced to the operating environment. 10 figs.
1. Constant attitude orbit transfer
Cress, Peter; Evans, Michael
A two-impulse orbital transfer technique is described in which the spacecraft attitude remains constant for both burns, eliminating the need for attitude maneuvers between the burns. This can lead to significant savings in vehicle weight, cost and complexity. Analysis is provided for a restricted class of applications of this transfer between circular orbits. For those transfers with a plane change less than 30 deg, the total velocity cost of the maneuver is less than twelve percent greater than that of an optimum plane split Hohmann transfer. While this maneuver does not minimize velocity requirement, it does provide a means of achieving necessary transfer while substantially reducing the cost and complexity of the spacecraft.
2. Effect of acid-base alterations on hepatic lactate utilization
PubMed Central
Goldstein, Philip J.; Simmons, Daniel H.; Tashkin, Donald P.
1972-01-01
1. The effect of acid-base changes on hepatic lactate utilization was investigated in anaesthetized, mechanically ventilated dogs. 2. Portal vein flow and hepatic artery flow were measured with electromagnetic flowmeters, lactate concentration of portal vein, arterial and mixed hepatic venous blood was determined by an enzymatic technique, and hepatic lactate uptake was calculated using the Fick principle. 3. Respiratory alkalosis (Δ pH 0·25 ± 0·02) in four dogs resulted in a significant fall in total hepatic blood flow (-22 ± 4%) and a significant rise in both arterial lactate concentration (2·18 ± 0·32 m-mole/l.) and hepatic lactate utilization (3·9 ± 1·2 μmole/min.kg). 4. 0·6 M-Tris buffer infusion (Δ pH 0·21 ± 0·02) in four dogs produced no significant changes in liver blood flow, arterial lactate concentration or hepatic lactate uptake. 5. Respiratory acidosis (Δ pH -0·20 ± 0·03) in six dogs and metabolic acidosis (Δ pH -0·20 ± 0·02) in four dogs produced no significant changes in liver blood flow, decreases in arterial lactate concentration of 0·38 ± 0·09 m-mole/l. (P < 0·05) and 0·13 ± 0·13 m-mole/l., respectively, and no significant changes in hepatic lactate uptake. 6. A significant correlation (r = 0·63; P < 0·01) was found between hepatic lactate utilization and arterial lactate concentration during the hyperlactataemia associated with respiratory alkalosis. 7. Hyperlactataemia induced in four dogs by infusion of buffered sodium lactate (Δ pH 0·05 ± 0·01;% Δ liver blood flow 29 ± 7%) was also significantly correlated with hepatic lactate utilization (r = 0·70; P < 0·01) and the slope of the regression was similar to that during respiratory alkalosis. 8. These data suggest that the hyperlactataemia of alkalosis is not due to impaired hepatic utilization of lactate and that the principal determinant of hepatic lactate uptake during alkalosis or lactate infusion is blood lactate concentration, rather than liver
3. [Rigorous algorithms for calculating the exact concentrations and activity levels of all the different species during acid-base titrations in water].
PubMed
Burgot, G; Burgot, J L
2000-10-01
The principles of two algorithms allowing the calculations of the concentration and activity levels of the different species during acid-base titrations in water are described. They simulate titrations at constant and variable ionic strengths respectively. They are designed so acid and base strengths, their concentrations and the titrant volume added can be chosen freely. The calculations are based on rigorous equations with a general scope. They are sufficiently compact to be processed on pocket calculators. The algorithms can easily simulate pH-metric, spectrophotometric, conductometric and calorimetric titrations, and hence allow determining concentrations and some physico-chemical constants related to the occurring chemical systems.
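Titration simulations of the kind these algorithms perform can be sketched compactly: for a monoprotic weak acid titrated with strong base, the exact charge/mass balance is solved for pH at each titrant volume. A hedged Python version (bisection is used here for simplicity; the authors' actual algorithms, which also handle variable ionic strength, are not specified in the abstract, and all names are illustrative):

```python
def titration_pH(Ca, Va, Cb, Vb, Ka, Kw=1e-14):
    """pH when Vb mL of strong base (concentration Cb) has been added
    to Va mL of a monoprotic weak acid (concentration Ca, dissociation
    constant Ka). Solves the exact charge balance
        [H+] + [Na+] = [OH-] + [A-]
    by bisection on pH, assuming constant ionic strength (activities
    taken equal to concentrations)."""
    V = Va + Vb
    CA, CB = Ca * Va / V, Cb * Vb / V  # diluted analytical concentrations

    def charge_imbalance(pH):
        h = 10.0 ** (-pH)
        oh = Kw / h
        a_minus = CA * Ka / (Ka + h)  # [A-] from the mass-action law
        return h + CB - oh - a_minus  # zero at the equilibrium pH

    lo, hi = 0.0, 14.0  # imbalance is positive at low pH, negative at high pH
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if charge_imbalance(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At half-neutralization the computed pH reproduces the familiar pH ≈ pKa result, and at zero titrant volume it matches the weak-acid approximation pH ≈ -log10(sqrt(Ka·Ca)).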
4. Capillary zone electrophoresis of basic analytes in methanol as non-aqueous solvent mobility and ionisation constant.
PubMed
Porras, S P; Riekkola, M L; Kenndler, E
2001-01-01
The electrophoretically relevant properties of 21 monoacidic bases (including common drugs) containing aliphatic or aromatic amino groups were determined in methanol as solvent. These properties are the actual mobilities (those of the fully ionised weak bases) and their pKa values. Actual mobilities were measured in acidic methanolic solutions containing perchloric acid. The ionisation constants of the amines were derived from the dependence of the ionic mobilities on the pH of the background electrolyte solution. The pH scale in methanol was established from acids with known conventional pK*a values in this solvent used as buffers, thus avoiding further adjustment with a pH-sensitive electrode that might bias the scale. Actual mobilities in methanol were found to be larger than in water, and do not correlate well with the solvent's viscosity. The pK*a values of the cation acids, HB+, the corresponding form of the base, B, are higher in methanol, whereas a less pronounced shift was found than for neutral acids of type HA. The mean increase (compared to pure aqueous solution) for aliphatic ammonium type analytes is 1.8, for substituted anilinium 1.1, and for aromatic ammonium of pyridinium type 0.5 units. The interpretation of this shift was undertaken with the concept of the medium effect on the particles involved in the acid-base equilibrium: the proton, the molecular base, B, and the cation HB+. PMID:11206793
5. Acid-base metabolism: implications for kidney stones formation.
PubMed
Hess, Bernhard
2006-04-01
The physiology and pathophysiology of renal H+ ion excretion and urinary buffer systems are reviewed. The main focus is on the two major conditions related to acid-base metabolism that cause kidney stone formation, i.e., distal renal tubular acidosis (dRTA) and abnormally low urine pH with subsequent uric acid stone formation. Both the entities can be seen on the background of disturbances of the major urinary buffer system, NH3+ <--> NH4+. On the one hand, reduced distal tubular secretion of H+ ions results in an abnormally high urinary pH and either incomplete or complete dRTA. On the other hand, reduced production/availability of NH4+ is the cause of an abnormally low urinary pH, which predisposes to uric acid stone formation. Most recent research indicates that the latter abnormality may be a renal manifestation of the increasingly prevalent metabolic syndrome. Despite opposite deviations from normal urinary pH values, both the dRTA and uric acid stone formation due to low urinary pH require the same treatment, i.e., alkali. In the dRTA, alkali is needed for improving the body's buffer capacity, whereas the goal of alkali treatment in uric acid stone formers is to increase the urinary pH to 6.2-6.8 in order to minimize uric acid crystallization.
6. Solution influence on biomolecular equilibria - Nucleic acid base associations
NASA Technical Reports Server (NTRS)
Pohorille, A.; Pratt, L. R.; Burt, S. K.; Macelroy, R. D.
1984-01-01
Various attempts to construct an understanding of the influence of solution environment on biomolecular equilibria at the molecular level using computer simulation are discussed. First, the application of the formal statistical thermodynamic program for investigating biomolecular equilibria in solution is presented, addressing modeling and conceptual simplifications such as perturbative methods, long-range interaction approximations, surface thermodynamics, and hydration shell. Then, Monte Carlo calculations on the associations of nucleic acid bases in both polar and nonpolar solvents such as water and carbon tetrachloride are carried out. The solvent contribution to the enthalpy of base association is positive (destabilizing) in both polar and nonpolar solvents while negative enthalpies for stacked complexes are obtained only when the solute-solute in vacuo energy is added to the total energy. The release upon association of solvent molecules from the first hydration layer around a solute to the bulk is accompanied by an increase in solute-solvent energy and decrease in solvent-solvent energy. The techniques presented are expected to displace less molecular and more heuristic modeling of biomolecular equilibria in solution.
7. Acid-base transport by the renal proximal tubule
PubMed Central
Skelton, Lara A.; Boron, Walter F.; Zhou, Yuehan
2015-01-01
Each day, the kidneys filter 180 L of blood plasma, equating to some 4,300 mmol of the major blood buffer, bicarbonate (HCO3−). The glomerular filtrate enters the lumen of the proximal tubule (PT), and the majority of filtered HCO3− is reclaimed along the early (S1) and convoluted (S2) portions of the PT in a manner coupled to the secretion of H+ into the lumen. The PT also uses the secreted H+ to titrate non-HCO3− buffers in the lumen, in the process creating “new HCO3−” for transport into the blood. Thus, the PT – along with more distal renal segments – is largely responsible for regulating plasma [HCO3−]. In this review we first focus on the milestone discoveries over the past 50+ years that define the mechanism and regulation of acid-base transport by the proximal tubule. Further on in the review, we will summarize research still in progress from our laboratory, work that addresses the problem of how the PT is able to finely adapt to acid–base disturbances by rapidly sensing changes in basolateral levels of HCO3− and CO2 (but not pH), and thereby to exert tight control over the acid–base composition of the blood plasma. PMID:21170887
8. Acid/base account and minesoils: A review
SciTech Connect
Hossner, L.R.; Brandt, J.E.
1997-12-31
Generation of acidity from the oxidation of iron sulfides (FeS2) is a common feature of geological materials exposed to the atmosphere by mining activities. Acid/base accounting (ABA) has been the primary method to evaluate the acid- or alkaline-potential of geological materials and to predict if weathering of these materials will have an adverse effect on terrestrial and aquatic environments. The ABA procedure has also been used to evaluate minesoils at different stages of weathering and, in some cases, to estimate lime requirements. Conflicting assessments of the methodology have been reported in the literature. The ABA is the fastest and easiest way to evaluate the acid-forming characteristics of overburden materials; however, accurate evaluations sometimes require that ABA data be examined in conjunction with additional sample information and results from other analytical procedures. The end use of ABA data, whether it be for minesoil evaluation or water quality prediction, will dictate the method's interpretive criteria. Reaction kinetics and stoichiometry may vary and are not clearly defined for all situations. There is an increasing awareness of the potential for interfering compounds, particularly siderite (FeCO3), to be present in geological materials associated with coal mines. Hardrock mines, with possible mixed sulfide mineralogy, offer a challenge to the ABA, since acid generation may be caused by minerals other than pyrite. A combination of methods, static and kinetic, is appropriate to properly evaluate the presence of acid-forming materials.
9. [Development of Nucleic Acid-Based Adjuvant for Cancer Immunotherapy].
PubMed
Kobiyama, Kouji; Ishii, Ken J
2015-09-01
Since the discovery of the human T cell-defined tumor antigen, the cancer immunotherapy field has rapidly progressed, with the research and development of cancer immunotherapy, including cancer vaccines, being conducted actively. However, the disadvantages of most cancer vaccines include relatively weak immunogenicity and immune escape or exhaustion. Adjuvants with innate immunostimulatory activities have been used to overcome these issues, and these agents have been shown to enhance the immunogenicity of cancer vaccines and to act as mono-therapeutic anti-tumor agents. CpG ODN, an agonist for TLR9, is one of the promising nucleic acid-based adjuvants, and it is a potent inducer of innate immune effector functions. CpG ODN suppresses tumor growth in the absence of tumor antigens and peptide administration. Therefore, CpG ODN is expected to be useful as a cancer vaccine adjuvant as well as a cancer immunotherapy agent. In this review, we discuss the potential therapeutic applications and mechanisms of CpG ODN for cancer immunotherapy.
10. Environmental applications of poly(amic acid)-based nanomaterials.
PubMed
Okello, Veronica A; Du, Nian; Deng, Boling; Sadik, Omowunmi A
2011-05-01
Nanoscale materials offer new possibilities for the development of novel remediation and environmental monitoring technologies. Different nanoscale materials have been exploited for preventing environmental degradation and pollutant transformation. However, the rapid self-aggregation of nanoparticles or their association with suspended solids or sediments where they could bioaccumulate supports the need for polymeric coatings to improve mobility, allow faster site cleanups and reduce remediation cost. The ideal material must be able to coordinate different nanomaterials functionalities and exhibit the potential for reusability. We hereby describe two novel environmental applications of nanostructured poly(amic acid)-based (nPAA) materials. In the first application, nPAA was used as both reductant and stabilizer during the in situ chemical reduction of chromium(VI) to chromium(III). Results showed that Cr(VI) species were rapidly reduced within the concentration range of 10(-1) to 10(2) mM with efficiency of 99.9% at 40 °C in water samples and 90% at 40 °C in soil samples respectively. Furthermore, the presence of PdNPs on the PAA-Au electrode was found to significantly enhance the rate of reduction. In the second application, nPAA membranes were tested as filters to capture, isolate and detect nanosilver. Preliminary results demonstrate the capability of the nPAA membranes to quantitatively capture nanoparticles from suspension and quantify their abundance on the membranes. Silver nanoparticles detection at concentrations near the toxic threshold of silver was also demonstrated.
11. Acid-base transport in pancreas—new challenges
PubMed Central
Novak, Ivana; Haanes, Kristian A.; Wang, Jing
2013-01-01
Along the gastrointestinal tract a number of epithelia contribute with acid or basic secretions in order to aid digestive processes. The stomach and pancreas are the most extreme examples of acid (H+) and base (HCO3−) transporters, respectively. Nevertheless, they share the same challenges of transporting acids and bases across epithelia and effectively regulating their intracellular pH. In this review, we will make use of comparative physiology to enlighten the cellular mechanisms of pancreatic HCO3− and fluid secretion, which is still challenging physiologists. Some of the novel transporters to consider in pancreas are the proton pumps (H+-K+-ATPases), as well as the calcium-activated K+ and Cl− channels, such as KCa3.1 and TMEM16A/ANO1. Local regulators, such as purinergic signaling, fine-tune and coordinate pancreatic secretion. Lastly, we speculate whether dysregulation of acid-base transport contributes to pancreatic diseases including cystic fibrosis, pancreatitis, and cancer. PMID:24391597
12. DEOXYRIBONUCLEIC ACID BASE COMPOSITION OF PROTEUS AND PROVIDENCE ORGANISMS
PubMed Central
Falkow, Stanley; Ryman, I. R.; Washington, O.
1962-01-01
Falkow, Stanley (Walter Reed Army Institute of Research, Washington D.C.), I. R. Ryman, and O. Washington. Deoxyribonucleic acid base composition of Proteus and Providence organisms. J. Bacteriol. 83:1318–1321. 1962.—Deoxyribonucleic acids (DNA) from various species of Proteus and of Providence bacteria have been examined for their guanine + cytosine (GC) content. P. vulgaris, P. mirabilis, and P. rettgeri possess essentially identical mean GC contents of 39%, and Providence DNA has a GC content of 41.5%. In marked contrast, P. morganii DNA was found to contain 50% GC. The base composition of P. morganii is only slightly lower than those observed for representatives of the Escherichia, Shigella, and Salmonella groups. Aerobacter and Serratia differ significantly from the other members of the family by their relatively high GC content. Since a minimal requirement for genetic compatibility among different species appears to be similarity of their DNA base composition, it is suggested that P. morganii is distinct genetically from the other species of Proteus as well as Providence strains. The determination of the DNA base composition of microorganisms is important for its predictive information. This information should prove of considerable value in investigating genetic and taxonomic relationships among bacteria. PMID:13891463
13. Nucleic acid-based nanoengineering: novel structures for biomedical applications
PubMed Central
Li, Hanying; LaBean, Thomas H.; Leong, Kam W.
2011-01-01
Nanoengineering exploits the interactions of materials at the nanometre scale to create functional nanostructures. It relies on the precise organization of nanomaterials to achieve unique functionality. There are no interactions more elegant than those governing nucleic acids via Watson–Crick base-pairing rules. The infinite combinations of DNA/RNA base pairs and their remarkable molecular recognition capability can give rise to interesting nanostructures that are only limited by our imagination. Over the past years, creative assembly of nucleic acids has fashioned a plethora of two-dimensional and three-dimensional nanostructures with precisely controlled size, shape and spatial functionalization. These nanostructures have been precisely patterned with molecules, proteins and gold nanoparticles for the observation of chemical reactions at the single molecule level, activation of enzymatic cascade and novel modality of photonic detection, respectively. Recently, they have also been engineered to encapsulate and release bioactive agents in a stimulus-responsive manner for therapeutic applications. The future of nucleic acid-based nanoengineering is bright and exciting. In this review, we will discuss the strategies to control the assembly of nucleic acids and highlight the recent efforts to build functional nucleic acid nanodevices for nanomedicine. PMID:23050076
14. [Development of Nucleic Acid-Based Adjuvant for Cancer Immunotherapy].
PubMed
Kobiyama, Kouji; Ishii, Ken J
2015-09-01
Since the discovery of the human T cell-defined tumor antigen, the cancer immunotherapy field has rapidly progressed, with the research and development of cancer immunotherapy, including cancer vaccines, being conducted actively. However, the disadvantages of most cancer vaccines include relatively weak immunogenicity and immune escape or exhaustion. Adjuvants with innate immunostimulatory activities have been used to overcome these issues, and these agents have been shown to enhance the immunogenicity of cancer vaccines and to act as mono-therapeutic anti-tumor agents. CpG ODN, an agonist for TLR9, is one of the promising nucleic acid-based adjuvants, and it is a potent inducer of innate immune effector functions. CpG ODN suppresses tumor growth in the absence of tumor antigens and peptide administration. Therefore, CpG ODN is expected to be useful as a cancer vaccine adjuvant as well as a cancer immunotherapy agent. In this review, we discuss the potential therapeutic applications and mechanisms of CpG ODN for cancer immunotherapy. PMID:26469159
15. Water-wire catalysis in photoinduced acid-base reactions.
PubMed
Kwon, Oh-Hoon; Mohammed, Omar F
2012-07-01
The pronounced ability of water to form a hyperdense hydrogen (H)-bond network among itself is at the heart of its exceptional properties. Due to the unique H-bonding capability and amphoteric nature, water is not only a passive medium, but also behaves as an active participant in many chemical and biological reactions. Here, we reveal the catalytic role of a short water wire, composed of two (or three) water molecules, in model aqueous acid-base reactions synthesizing 7-hydroxyquinoline derivatives. Utilizing femtosecond-resolved fluorescence spectroscopy, we tracked the trajectories of excited-state proton transfer and discovered that proton hopping along the water wire accomplishes the reaction more efficiently compared to the transfer occurring with bulk water clusters. Our finding suggests that the directionality of the proton movements along the charge-gradient H-bond network may be a key element for long-distance proton translocation in biological systems, as the H-bond networks wiring acidic and basic sites distal to each other can provide a shortcut for a proton in searching a global minimum on a complex energy landscape to its destination.
16. Tuning, ergodicity, equilibrium, and cosmology
Albrecht, Andreas
2015-05-01
I explore the possibility that the cosmos is fundamentally an equilibrium system and review the attractive features of such theories. Equilibrium cosmologies are commonly thought to fail due to the "Boltzmann brain" problem. I show that it is possible to evade the Boltzmann brain problem if there is a suitable coarse-grained relationship between the fundamental degrees of freedom and the cosmological observables. I make my main points with simple toy models and then review the de Sitter equilibrium model as an illustration.
17. Understanding thermal equilibrium through activities
2015-03-01
Thermal equilibrium is a basic concept in thermodynamics. In India, this concept is generally introduced at the first year of undergraduate education in physics and chemistry. In our earlier studies (Pathare and Pradhan 2011 Proc. episteme-4 Int. Conf. to Review Research on Science Technology and Mathematics Education pp 169-72) we found that students in India have a rather unsatisfactory understanding of thermal equilibrium. We have designed and developed a module of five activities, which are presented in succession to the students. These activities address the students’ alternative conceptions that underlie their lack of understanding of thermal equilibrium and aim at enhancing their understanding of the concept.
18. Computer Assisted Instruction for Equilibrium.
ERIC Educational Resources Information Center
Berry, Gifford L.
1988-01-01
Describes two computer assisted tutorials, one on acid ionization constants (Ka), and the other on solubility product constants (Ksp). Discusses framework to be used in writing computer assisted instruction programs. Lists topics covered in the programs. (MVL)
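The two equilibrium constants covered by these tutorials reduce to short calculations. A sketch of the standard textbook forms (the function names are illustrative, not taken from the programs described):

```python
def percent_ionization(Ka, C):
    """Percent ionization of a weak monoprotic acid HA at analytical
    concentration C, from the exact quadratic x^2 / (C - x) = Ka."""
    x = (-Ka + (Ka ** 2 + 4 * Ka * C) ** 0.5) / 2  # [H+] at equilibrium
    return 100.0 * x / C

def molar_solubility(Ksp, cations=1, anions=1):
    """Molar solubility s of a sparingly soluble salt M_m X_n,
    from Ksp = (m*s)^m * (n*s)^n."""
    m, n = cations, anions
    return (Ksp / (m ** m * n ** n)) ** (1.0 / (m + n))
```

For acetic acid (Ka = 1.8e-5) at 0.1 M this gives about 1.3% ionization; for a 1:1 salt like BaSO4, the solubility is simply sqrt(Ksp).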
19. Lewis Acid Based Sorption of Trace Amounts of RuCl3 by Polyaniline.
PubMed
Harbottle, Allison M; Hira, Steven M; Josowicz, Mira; Janata, Jiří
2016-08-23
A sorption process of RuCl3 in phosphate buffer by polyaniline (PANI) powder chemically synthesized from phosphoric acid was spectrophotometrically monitored as a function of time. It was determined that the sorption process follows the Langmuir and Freundlich isotherms, and their constants were evaluated. It was determined that chemisorption was the rate-controlling step. By conducting detailed studies, we assigned the chemisorption to Lewis acid based interactions of the sorbent electron pair localized at the benzenoid amine (-NH2) and quinoid imine (═NH) groups, with the sorbate, RuCl3, as the electron acceptor. The stability of the interaction over a period of ∼1 week showed that the presence of the Ru(III) in the PANI matrix reverses its state from emeraldine base to emeraldine salt, resulting in a change of conductivity. The partial electron donor based charge transfer is a slow process as compared to the sorption process involving Brønsted acid doping. PMID:27479848
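Langmuir and Freundlich constants of the kind evaluated in this study are commonly obtained from linearized forms of the isotherms. A generic least-squares sketch with synthetic data (nothing below is taken from the study itself):

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def langmuir_constants(Ce, qe):
    """Fit the linearized Langmuir isotherm
    Ce/qe = Ce/qmax + 1/(KL*qmax); returns (qmax, KL)."""
    slope, intercept = linfit(Ce, [c / q for c, q in zip(Ce, qe)])
    qmax = 1.0 / slope
    return qmax, 1.0 / (intercept * qmax)

def freundlich_constants(Ce, qe):
    """Fit the linearized Freundlich isotherm
    ln qe = ln KF + (1/n) ln Ce; returns (KF, n)."""
    slope, intercept = linfit([math.log(c) for c in Ce],
                              [math.log(q) for q in qe])
    return math.exp(intercept), 1.0 / slope
```

With noise-free synthetic data both fits recover the generating constants exactly, which is a convenient sanity check before applying them to measured equilibrium concentrations.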
20. Acid-base titration of melanocortin peptides: evidence of Trp rotational conformers interconversion.
PubMed
Fernandez, Roberto M; Vieira, Renata F F; Nakaie, Clóvis R; Lamy, M Teresa; Ito, Amando S
2005-01-01
Tryptophan time-resolved fluorescence was used to monitor acid-base titration properties of alpha-melanocyte stimulating hormone (alpha-MSH) and the biologically more potent analog [Nle4, D-Phe7]alpha-MSH (NDP-MSH), labeled or not with the paramagnetic amino acid probe 2,2,6,6-tetramethylpiperidine-N-oxyl-4-amino-4-carboxylic acid (Toac). Global analysis of fluorescence decay profiles measured in the pH range between 2.0 and 11.0 showed that, for each peptide, the data could be well fitted to three lifetimes whose values remained constant. The less populated short lifetime component changed little with pH and was ascribed to the Trp g+ chi1 rotamer, in which electron transfer deactivation predominates over fluorescence. The long and intermediate lifetime preexponential factors interconverted along that pH interval and the result was interpreted as due to interconversion between Trp g- and trans chi1 rotamers, driven by conformational changes promoted by modifications in the ionization state of side-chain residues. The differences in the extent of interconversion in alpha-MSH and NDP-MSH are indicative of structural differences between the peptides, while titration curves suggest structural similarities between each peptide and its Toac-labeled species, in aqueous solution. Though less sensitive than fluorescence, the Toac electron spin resonance (ESR) isotropic hyperfine splitting parameter can also monitor the titration of side-chain residues located relatively far from the probe.
1. Temperature lapse rates at restricted thermodynamic equilibrium in the Earth system
Björnbom, Pehr
2015-03-01
Equilibrium temperature profiles obtained by maximizing the entropy of a column of fluid with a given height and volume under the influence of gravity are discussed by using numerical experiments. Calculations are made both for the case of an ideal gas and for a liquid with constant isobaric heat capacity, constant compressibility and constant thermal expansion coefficient representing idealized conditions corresponding to atmosphere and ocean. Calculations confirm the classical equilibrium condition by Gibbs that an isothermal temperature profile gives a maximum in entropy constrained by a constant mass and a constant sum of internal and potential energy. However, it was also found that an isentropic profile gives a maximum in entropy constrained by a constant mass and a constant internal energy of the fluid column. On the basis of this result a hypothesis is suggested that the adiabatic lapse rate represents a restricted or transitory and metastable equilibrium state, which has a maximum in entropy with lower value than the maximum in the state with an isothermal lapse rate. This transitory equilibrium state is maintained by passive forces, preventing or slowing down the transition of the system to the final or ultimate equilibrium state.
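For readers unfamiliar with the two profiles being compared, this small Python sketch evaluates the textbook dry adiabatic (isentropic) lapse rate, Gamma = g/c_p, against the isothermal case discussed as the classical Gibbs equilibrium. The constants are standard values for dry air, not parameters taken from the paper.

```python
# Sketch: isentropic vs. isothermal temperature profiles for an
# ideal-gas column under gravity. Standard textbook constants.

g = 9.81      # m s^-2, gravitational acceleration
c_p = 1004.0  # J kg^-1 K^-1, isobaric heat capacity of dry air

gamma_adiabatic = g / c_p       # K per metre
print(gamma_adiabatic * 1000)   # ~9.8 K per kilometre

def T_isentropic(z, T0=288.0):
    """Temperature (K) at height z (m) on an isentropic (adiabatic) profile."""
    return T0 - gamma_adiabatic * z

def T_isothermal(z, T0=288.0):
    """Isothermal profile: same temperature at every height."""
    return T0
```

The hypothesis in the abstract amounts to asking which of these two profiles corresponds to the entropy maximum under a given set of constraints.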
2. Effects of anaesthesia on blood gases, acid-base status and ions in the toad Bufo marinus.
PubMed
Andersen, Johnnie Bremholm; Wang, Tobias
2002-03-01
It is common practice to chronically implant catheters for subsequent blood sampling from conscious and undisturbed animals. This method reduces stress associated with blood sampling, but anaesthesia per se can also be a source of stress in animals. Therefore, it is imperative to evaluate the time required for physiological parameters (e.g. blood gases, acid-base status, plasma ions, heart rate and blood pressure) to stabilise following surgery. Here, we report physiological parameters during and after anaesthesia in the toad Bufo marinus. For anaesthesia, toads were immersed in benzocaine (1 g l(-1)) for 15 min or until the corneal reflex disappeared, and the femoral artery was cannulated. A 1-ml blood sample was taken immediately after surgery and subsequently after 2, 5, 24 and 48 h. Breathing ceased during anaesthesia, which resulted in arterial Po(2) values below 30 mmHg, and respiratory acidosis developed, with arterial Pco(2) levels reaching 19.5+/-2 mmHg and pH 7.64+/-0.04. The animals resumed pulmonary ventilation shortly after the operation, and oxygen levels increased to a constant level within 2 h. Acid-base status, however, did not stabilise until 24 h after anaesthesia. Haematocrit doubled immediately after cannulation (26+/-1%), but reached a constant level of 13% within 24 h. Blood pressure and heart rate were elevated for the first 5 h, but decreased after 24 h to a constant level of approximately 30 cm H2O and 35 beats min(-1), respectively. There were no changes following anaesthesia in mean cellular haemoglobin concentration, [K+], [Cl-], [Na+], [lactate] or osmolarity. Toads fully recovered from anaesthesia after 24 h.
3. Equilibrium and Orientation in Cephalopods.
ERIC Educational Resources Information Center
Budelmann, Bernd-Ulrich
1980-01-01
Describes the structure of the equilibrium receptor system in cephalopods, comparing it to the vertebrate counterpart--the vestibular system. Relates the evolution of this complex system to the competition of cephalopods with fishes. (CS)
4. Model studies of intracellular acid-base temperature responses in ectotherms.
PubMed
Reeves, R B; Malan, A
1976-10-01
Measurements of intracellular pH (pHi) in air-breathing ectotherms have only been made in the steady state; these pHi indicate that protein charge state, measured as alpha imidazole (alphaIM), the fractional dissociation of protein histidine imidazole groups, is preserved when ectotherm tissues change temperature in vivo, with related changes in pHi and PCO2. In partial answer to the question of how such tissues are able to avoid disrupting transients to functions sensitive to protein charge states, model studies were carried out to assess the passive intracellular buffer system response to a combined change in body temperature and CO2 partial pressure as occurs in vivo in these species. The cell compartment was modeled as a closed volume of ternary buffer solution, containing protein imidazole (50 mM/1); phosphate (15 mM/1) and CO2-bicarbonate buffer components, permeable only to CO2 and permitted no change in buffer base. Excursions from a steady-state non-equilibrium pHi were computed to a step-change in temperature/PCO2. Computations for frog (Rana catesbeiana) striated muscle show that the calculated pHi response on the basis of estimated composition and concentration of cell buffer components, moves along the curve describing the steady-state temperature relationship. No transient away from steady-state alphaIM and carbon dioxide content need be postulated. Applications to turtle (Pseudemys scripta) striated muscle are also explored. These calculations show that ectotherm cells may be capable of responding without appreciable time for adaptation to intracellular acid-base state changes incurred by sudden alteration of body temperature in vivo, given the observed adjustments of blood PCO2 with temperature.
5. Solution properties and emulsification properties of amino acid-based gemini surfactants derived from cysteine.
PubMed
Yoshimura, Tomokazu; Sakato, Ayako; Esumi, Kunio
2013-01-01
Amino acid-based anionic gemini surfactants (2C(n)diCys, where n represents an alkyl chain with a length of 10, 12, or 14 carbons and "di" and "Cys" indicate adipoyl and cysteine, respectively) were synthesized using the amino acid cysteine. Biodegradability, equilibrium surface tension, and dynamic light scattering were used to characterize the properties of gemini surfactants. Additionally, the effects of alkyl chain length, number of chains, and structure on these properties were evaluated by comparing previously reported gemini surfactants derived from cystine (2C(n)Cys) and monomeric surfactants (C(n)Cys). 2C(n)diCys shows relatively higher biodegradability than does C(n)Cys and previously reported sugar-based gemini surfactants. Both critical micelle concentration (CMC) and surface tension decrease when alkyl chain length is increased from 10 to 12, while a further increase in chain length to 14 results in increased CMC and surface tension. This indicates that long-chain gemini surfactants have a decreased aggregation tendency due to the steric hindrance of the bulky spacer as well as premicelle formation at concentrations below the CMC and are poorly packed at the air/water interface. Formation of micelles (measuring 2 to 5 nm in solution) from 2C(n)diCys shows no dependence on alkyl chain length. Further, shaking the mixtures of aqueous 2C(n)diCys surfactant solutions and squalane results in the formation of oil-in-water type emulsions. The highly stable emulsions are formed using 2C₁₂diCys or 2C₁₄diCys solution and squalane in a 1:1 or 2:1 volume ratio.
6. Strongly Non-equilibrium Dynamics of Nanochannel Confined DNA
Reisner, Walter
Nanoconfined DNA exhibits a wide range of fascinating transient and steady-state non-equilibrium phenomena. Yet, while experiment, simulation and scaling analytics are converging on a comprehensive picture regarding the equilibrium behavior of nanochannel confined DNA, non-equilibrium behavior remains largely unexplored. In particular, while the DNA extension along the nanochannel is the key observable in equilibrium experiments, in the non-equilibrium case it is necessary to measure and model not just the extension but the molecule's full time-dependent one-dimensional concentration profile. Here, we apply controlled compressive forces to a nanochannel confined molecule via a nanodozer assay, whereby an optically trapped bead is slid down the channel at a constant speed. Upon contact with the molecule, a propagating concentration "shockwave" develops near the bead and the molecule is dynamically compressed. This experiment, a single-molecule implementation of a macroscopic cylinder-piston apparatus, can be used to observe the molecule's response over a range of forcings and benchmark theoretical descriptions of non-equilibrium behavior. We show that the dynamic concentration profiles, including both transient and steady-state response, can be modelled via a partial differential evolution equation combining nonlinear diffusion and convection. Lastly, we present preliminary results for dynamic compression of multiple confined molecules to explore regimes of segregation and mixing for multiple chains in confinement.
7. A search for equilibrium states
NASA Technical Reports Server (NTRS)
Zeleznik, F. J.
1982-01-01
An efficient search algorithm is described for the location of equilibrium states in a search set of states which differ from one another only by the choice of pure phases. The algorithm has three important characteristics: (1) it ignores states which have little prospect for being an improved approximation to the true equilibrium state; (2) it avoids states which lead to singular iteration equations; (3) it furnishes a search history which can provide clues to alternative search paths.
8. Edge equilibrium code for tokamaks
SciTech Connect
2014-01-15
The edge equilibrium code (EEC) described in this paper is developed for simulations of the near edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
9. Equilibrium and non-equilibrium properties of finite-volume crystallites
Degawa, Masashi
Finite volume effects on equilibrium and non-equilibrium properties of nano-crystallites are studied theoretically and compared to both experiment and simulation. When a system is isolated or its size is small compared to the correlation length, all equilibrium and close-to-equilibrium properties will depend on the system boundary condition. Specifically for solid nano-crystallites, their finite size introduces global curvature to the system, which alters its equilibrium properties compared to the thermodynamic limit. Such global curvature also leads to capillary-induced morphology changes of the surface. Interesting dynamics can arise when the crystallite is supported on a substrate, with crossovers of the dominant driving force between the capillary force and crystallite-substrate interactions. To address these questions, we introduce thermodynamic functions for the boundary conditions, which can be derived from microscopic models. For nano-crystallites, the boundary is the surface (including interfaces), the thermodynamic description is based on the steps that define the shape of the surface, and the underlying microscopic model includes kinks. The global curvature of the surface introduces metastable states with different shapes governed by a constant of integration of the extra boundary condition, which we call the shape parameter c. The discrete height of the steps introduces transition states in between the metastable states, and the lowest-energy accessible structure (energy barrier less than 10 kBT) as a function of the volume has been determined. The dynamics of nano-crystallites as they relax from a non-equilibrium structure is described quantitatively in terms of the motion of steps in both capillary-induced and interface-boundary-induced regimes. The step-edge fluctuations of the top facet are also influenced by global curvature and volume conservation, and the effect yields different dynamic scaling exponents from a pure 1D system. Theoretical results are
10. Kinetic and equilibrium studies of acrylonitrile binding to cytochrome c peroxidase and oxidation of acrylonitrile by cytochrome c peroxidase compound I.
PubMed
Chinchilla, Diana; Kilheeney, Heather; Vitello, Lidia B; Erman, James E
2014-01-01
Ferric heme proteins bind weakly basic ligands and the binding affinity is often pH dependent due to protonation of the ligand as well as the protein. In an effort to find a small, neutral ligand without significant acid/base properties to probe ligand binding reactions in ferric heme proteins we were led to consider the organonitriles. Although organonitriles are known to bind to transition metals, we have been unable to find any prior studies of nitrile binding to heme proteins. In this communication we report on the equilibrium and kinetic properties of acrylonitrile binding to cytochrome c peroxidase (CcP) as well as the oxidation of acrylonitrile by CcP compound I. Acrylonitrile binding to CcP is independent of pH between pH 4 and 8. The association and dissociation rate constants are 0.32±0.16 M(-1) s(-1) and 0.34±0.15 s(-1), respectively, and the independently measured equilibrium dissociation constant for the complex is 1.1±0.2 M. We have demonstrated for the first time that acrylonitrile can bind to a ferric heme protein. The binding mechanism appears to be a simple, one-step association of the ligand with the heme iron. We have also demonstrated that CcP can catalyze the oxidation of acrylonitrile, most likely to 2-cyanoethylene oxide in a "peroxygenase"-type reaction, with rates that are similar to rat liver microsomal cytochrome P450-catalyzed oxidation of acrylonitrile in the monooxygenase reaction. CcP compound I oxidizes acrylonitrile with a maximum turnover number of 0.61 min(-1) at pH 6.0. PMID:24291498
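A quick consistency check, sketched below in Python, shows that the kinetic ratio k_off/k_on reproduces the independently measured dissociation constant within the stated uncertainty. The rate constants are taken from the abstract; the comparison logic itself is ours.

```python
# Sketch: checking that the kinetically derived equilibrium
# dissociation constant K_d = k_off / k_on agrees with the
# independently measured value of 1.1 +/- 0.2 M.

k_on = 0.32    # M^-1 s^-1, association rate constant
k_off = 0.34   # s^-1, dissociation rate constant

K_d_kinetic = k_off / k_on   # M
K_d_measured = 1.1           # M, independently measured
tolerance = 0.2              # M, reported uncertainty

print(round(K_d_kinetic, 3))  # ~1.06 M, within the reported error
assert abs(K_d_kinetic - K_d_measured) <= tolerance
```

Agreement of this kind supports the simple one-step binding mechanism proposed in the paper.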
13. Adansonian Analysis and Deoxyribonucleic Acid Base Composition of Serratia marcescens
PubMed Central
Colwell, R. R.; Mandel, M.
1965-01-01
Colwell, R. R. (Georgetown University, Washington, D.C.), and M. Mandel. Adansonian analysis and deoxyribonucleic acid base composition of Serratia marcescens. J. Bacteriol. 89:454–461. 1965.—A total of 33 strains of Serratia marcescens were subjected to Adansonian analysis for which more than 200 coded features for each of the organisms were included. In addition, the base composition [expressed as moles per cent guanine + cytosine (G + C)] of the deoxyribonucleic acid (DNA) prepared from each of the strains was determined. Except for four strains which were intermediate between Serratia and the Hafnia and Aerobacter group C of Edwards and Ewing, the S. marcescens species group proved to be extremely homogeneous, and the different strains showed high affinities for each other (mean similarity, S̄ = 77%). The G + C ratio of the DNA from the Serratia strains ranged from 56.2 to 58.4% G + C. Many species names have been listed for the genus, but only a single clustering of the strains was obtained at the species level, for which the species name S. marcescens was retained. S. kiliensis, S. indica, S. plymuthica, and S. marinorubra could not be distinguished from S. marcescens; it was concluded, therefore, that there is only a single species in the genus. The variety designation kiliensis does not appear to be valid, since no subspecies clustering of strains with negative Voges-Proskauer reactions could be detected. The characteristics of the species are listed, and a description of S. marcescens is presented. PMID:14255714
14. Acid-base chemical mechanism of aspartase from Hafnia alvei.
PubMed
Yoon, M Y; Thayer-Cook, K A; Berdis, A J; Karsten, W E; Schnackerz, K D; Cook, P F
1995-06-20
An acid-base chemical mechanism is proposed for Hafnia alvei aspartase in which a proton is abstracted from C-3 of the monoanionic form of L-aspartate by an enzyme general base with a pK of 6.3-6.6 in the absence and presence of Mg2+. The resulting carbanion is presumably stabilized by delocalization of electrons into the beta-carboxyl with the assistance of a protonated enzyme group in the vicinity of the beta-carboxyl. Ammonia is then expelled with the assistance of a general acid group that traps an initially expelled NH3 as the final NH4+ product. In agreement with the function of the general acid group, potassium, an analog of NH4+, binds optimally when the group is unprotonated. The pK for the general acid is about 7 in the absence of Mg2+, but is increased by about a pH unit in the presence of Mg2+. Since the same pK values are observed in the pKi(succinate) and V/K pH profile, both enzyme groups must be in their optimum protonation state for efficient binding of reactant in the presence of Mg2+. At the end of a catalytic cycle, both the general base and general acid groups are in a protonation state opposite that in which they started when aspartate was bound. The presence of Mg2+ causes a pH-dependent activation of aspartase exhibited as a partial change in the V and V/Kasp pH profiles. When the aspartase reaction is run in D2O to greater than 50% completion no deuterium is found in the remaining aspartate, indicating that the site is inaccessible to solvent during the catalytic cycle.
15. Beyond the Hubble Constant
1995-08-01
about the distances to galaxies and thereby about the expansion rate of the Universe. A simple way to determine the distance to a remote galaxy is to measure its redshift, calculate its velocity from the redshift and divide this by the Hubble constant, H0. For instance, the measured redshift of the parent galaxy of SN 1995K (0.478) yields a velocity of 116,000 km/sec, somewhat more than one-third of the speed of light (300,000 km/sec). From the universal expansion rate, described by the Hubble constant (H0 = 20 km/sec per million lightyears as found by some studies), this velocity would indicate a distance to the supernova and its parent galaxy of about 5,800 million lightyears. The explosion of the supernova would thus have taken place 5,800 million years ago, i.e. about 1,000 million years before the solar system was formed. However, such a simple calculation works only for relatively "nearby" objects, perhaps out to some hundred million lightyears. When we look much further into space, we also look far back in time, and it is not excluded that the universal expansion rate, i.e. the Hubble constant, may have been different at earlier epochs. This means that unless we know the change of the Hubble constant with time, we cannot determine reliable distances of distant galaxies from their measured redshifts and velocities. At the same time, knowledge about such change, or the lack of it, will provide unique information about the time elapsed since the Universe began to expand (the "Big Bang"), that is, the age of the Universe and also its ultimate fate.
The Deceleration Parameter q0
Cosmologists are therefore eager to determine not only the current expansion rate (i.e., the Hubble constant, H0) but also its possible change with time (known as the deceleration parameter, q0). Although a highly accurate value of H0 has still not become available, increasing attention is now given to the observational determination of the second parameter, cf.
also the Appendix at the
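The simple redshift-distance estimate quoted above for SN 1995K can be reproduced in a few lines of Python. As the abstract itself stresses, this linear Hubble-law arithmetic is only a rough guide at such high redshift.

```python
# Sketch: the naive Hubble-law distance estimate for SN 1995K,
# using the velocity and H0 value quoted in the text.

v = 116_000.0   # km/s, recession velocity inferred from z = 0.478
H0 = 20.0       # km/s per million light-years (value used in the text)

distance_Mly = v / H0
print(distance_Mly)   # 5800.0 million light-years, matching the text
```

For high-redshift objects a proper calculation must integrate over the expansion history, which is exactly why the deceleration parameter q0 matters.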
16. Anisotropic pressure tokamak equilibrium and stability considerations
SciTech Connect
Salberta, E.R.; Grimm, R.C.; Johnson, J.L.; Manickam, J.; Tang, W.M.
1987-02-01
Investigation of the effect of pressure anisotropy on tokamak equilibrium and stability is made with an MHD model. Realistic perpendicular and parallel pressure distributions, P⊥(ψ,B) and P∥(ψ,B), are obtained by solving a one-dimensional Fokker-Planck equation for neutral beam injection to find a distribution function f(E, v∥/v) at the position of minimum field on each magnetic surface and then using invariance of the magnetic moment to determine its value at each point on the surface. The shift of the surfaces of constant perpendicular and parallel pressure from the flux surfaces depends strongly on the angle of injection. This shift explains the observed increase or decrease in the stability conditions. Estimates of the stabilizing effect of hot trapped ions indicate that a large fraction must be nonresonant and thus decoupled from the bad curvature before it becomes important.
17. Radiative equilibrium model of Titan's atmosphere
NASA Technical Reports Server (NTRS)
Samuelson, R. E.
1983-01-01
The present global radiative equilibrium model for the Saturn satellite Titan is restricted to the two-stream approximation, is vertically homogeneous in its scattering properties, and is spectrally divided into one thermal and two solar channels. Between 13 and 33% of the total incident solar radiation is absorbed at the planetary surface, and the 30-60 ratio of violet to thermal IR absorption cross sections in the stratosphere leads to the large temperature inversion observed there. The spectrally integrated mass absorption coefficient at thermal wavelengths is approximately constant throughout the stratosphere, and approximately linear with pressure in the troposphere, implying the presence of a uniformly mixed aerosol in the stratosphere. There also appear to be two regions of enhanced opacity near 30 and 500 mbar.
18. The effect of heating insufflation gas on acid-base alterations and core temperature during laparoscopic major abdominal surgery
PubMed Central
Lee, Kyung-Cheon; Kim, Ji Young; Lee, Hee-Dong; Kwon, Il Won
2011-01-01
Background Carbon dioxide (CO2) has different biophysical properties under different thermal conditions, which may affect its rate of absorption in the blood and the related adverse events. The present study aimed to investigate the effects of heating CO2 on acid-base balance, using Stewart's physiochemical approach, and on body temperature during laparoscopy. Methods Thirty adult patients undergoing laparoscopic major abdominal surgery were randomized to receive either room temperature CO2 (control group, n = 15) or heated CO2 (heated group, n = 15). The acid-base parameters were measured 10 min after the induction of anesthesia (T1), 40 min after pneumoperitoneum (T2), at the end of surgery (T3) and 1 h after surgery (T4). Body temperature was measured at 15-min intervals until the end of the surgery. Results There were no significant differences in pH, PaCO2, the apparent strong ion difference, the strong ion gap, bicarbonate ion, or lactate between the two groups throughout the whole investigation period. At T2, pH was decreased whereas PaCO2 was increased in both groups compared with T1, but these changes were not significantly different. Body temperatures in the heated group were significantly higher than those in the control group from 30 to 90 min after pneumoperitoneum. Conclusions The heating of insufflated CO2 did not affect changes in the acid-base status and PaCO2 in patients undergoing laparoscopic abdominal surgery when the ventilator was set to maintain constant end-tidal CO2. However, the heated CO2 reduced the decrease in the core body temperature 30 min after the pneumoperitoneum. PMID:22110878
19. Shape characteristics of equilibrium and non-equilibrium fractal clusters.
PubMed
Mansfield, Marc L; Douglas, Jack F
2013-07-28
It is often difficult in practice to discriminate between equilibrium and non-equilibrium nanoparticle or colloidal-particle clusters that form through aggregation in gas or solution phases. Scattering studies often permit the determination of an apparent fractal dimension, but both equilibrium and non-equilibrium clusters in three dimensions frequently have fractal dimensions near 2, so that it is often not possible to discriminate on the basis of this geometrical property. A survey of the anisotropy of a wide variety of polymeric structures (linear and ring random and self-avoiding random walks, percolation clusters, lattice animals, diffusion-limited aggregates, and Eden clusters) based on the principal components of both the radius of gyration and electric polarizability tensor indicates, perhaps counter-intuitively, that self-similar equilibrium clusters tend to be intrinsically anisotropic at all sizes, while non-equilibrium processes such as diffusion-limited aggregation or Eden growth tend to be isotropic in the large-mass limit, providing a potential means of discriminating these clusters experimentally if anisotropy could be determined along with the fractal dimension. Equilibrium polymer structures, such as flexible polymer chains, are normally self-similar due to the existence of only a single relevant length scale, and are thus anisotropic at all length scales, while non-equilibrium polymer structures that grow irreversibly in time eventually become isotropic if there is no difference in the average growth rates in different directions. There is apparently no proof of these general trends and little theoretical insight into what controls the universal anisotropy in equilibrium polymer structures of various kinds. This is an obvious topic of theoretical investigation, as well as a matter of practical interest. To address this general problem, we consider two experimentally accessible ratios, one between the hydrodynamic and gyration radii, the other
2. The empirical equilibrium structure of diacetylene
Thorwirth, Sven; Harding, Michael E.; Muders, Dirk; Gauss, Jürgen
2008-09-01
High-level quantum-chemical calculations are reported at the MP2 and CCSD(T) levels of theory for the equilibrium structure and the harmonic and anharmonic force fields of diacetylene, H-C≡C-C≡C-H. The calculations were performed employing Dunning's hierarchy of correlation-consistent basis sets cc-pVXZ, cc-pCVXZ, and cc-pwCVXZ, as well as the ANO2 basis set of Almlöf and Taylor. An empirical equilibrium structure based on experimental rotational constants for 13 isotopic species of diacetylene and computed zero-point vibrational corrections is determined (r_e^emp: r = 1.0615 Å, r = 1.2085 Å, r = 1.3727 Å) and is in good agreement with the best theoretical structure (CCSD(T)/cc-pCV5Z: r = 1.0617 Å, r = 1.2083 Å, r = 1.3737 Å). In addition, the computed fundamental vibrational frequencies are compared with the available experimental data and found to be in satisfactory agreement.
3. Achieving Chemical Equilibrium: The Role of Imposed Conditions in the Ammonia Formation Reaction
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2006-01-01
Under conditions of constant temperature T and pressure P, chemical equilibrium occurs in a closed system (fixed mass) when the Gibbs free energy G of the reaction mixture is minimized. However, when chemical reactions occur under other conditions, other thermodynamic functions are minimized or maximized. For processes at constant T and volume V,…
4. Equilibrium econophysics: A unified formalism for neoclassical economics and equilibrium thermodynamics
Sousa, Tânia; Domingos, Tiago
2006-11-01
We develop a unified conceptual and mathematical structure for equilibrium econophysics, i.e., the use of concepts and tools of equilibrium thermodynamics in neoclassical microeconomics and vice versa. Within this conceptual structure the results obtained in microeconomic theory are: (1) the definition of irreversibility in economic behavior; (2) the clarification that the Engel curve and the offer curve are not descriptions of real processes dictated by the maximization of utility at constant endowment; (3) the derivation of a relation between elasticities proving that economic elasticities are not all independent; (4) the proof that Giffen goods do not exist in a stable equilibrium; (5) the derivation that ‘economic integrability’ is equivalent to the generalized Le Chatelier principle and (6) the definition of a first order phase transition, i.e., a transition between separate points in the utility function. In thermodynamics the results obtained are: (1) a relation between the non-dimensional isothermal and adiabatic compressibilities and the increase or decrease in the thermodynamic potentials; (2) the distinction between mathematical integrability and optimization behavior and (3) the generalization of the Clapeyron equation.
5. The acid-base resistant zone in three dentin bonding systems.
PubMed
Inoue, Go; Nikaido, Toru; Foxton, Richard M; Tagami, Junji
2009-11-01
An acid-base resistant zone has been found to exist after acid-base challenge adjacent to the hybrid layer using SEM. The aim of this study was to examine the acid-base resistant zone using three different bonding systems. Dentin disks were applied with three different bonding systems, and then a resin composite was light-cured to make dentin disk sandwiches. After acid-base challenge, the polished surfaces were observed using SEM. For both one- and two-step self-etching primer systems, an acid-base resistant zone was clearly observed adjacent to the hybrid layer - but with differing appearances. For the wet bonding system, the presence of an acid-base resistant zone was unclear. This was because the self-etching primer systems etched the dentin surface mildly, such that the remaining mineral phase of dentin and the bonding agent yielded clear acid-base resistant zones. In conclusion, the acid-base resistant zone was clearly observed when self-etching primer systems were used, but not so for the wet bonding system.
6. Thai Grade 11 Students' Alternative Conceptions for Acid-Base Chemistry
ERIC Educational Resources Information Center
Artdej, Romklao; Ratanaroutai, Thasaneeya; Coll, Richard Kevin; Thongpanchang, Tienthong
2010-01-01
This study involved the development of a two-tier diagnostic instrument to assess Thai high school students' understanding of acid-base chemistry. The acid-base diagnostic test (ABDT) comprising 18 items was administered to 55 Grade 11 students in a science and mathematics programme during the second semester of the 2008 academic year. Analysis of…
7. A Comparative Study of French and Turkish Students' Ideas on Acid-Base Reactions
ERIC Educational Resources Information Center
Cokelez, Aytekin
2010-01-01
The goal of this comparative study was to determine the knowledge that French and Turkish upper secondary-school students (grades 11 and 12) acquire on the concept of acid-base reactions. Following an examination of the relevant curricula and textbooks in the two countries, 528 students answered six written questions about the acid-base concept.…
8. High School Students' Understanding of Acid-Base Concepts: An Ongoing Challenge for Teachers
ERIC Educational Resources Information Center
Damanhuri, Muhd Ibrahim Muhamad; Treagust, David F.; Won, Mihye; Chandrasegaran, A. L.
2016-01-01
Using a quantitative case study design, the "Acids-Bases Chemistry Achievement Test" ("ABCAT") was developed to evaluate the extent to which students in Malaysian secondary schools achieved the intended curriculum on acid-base concepts. Responses were obtained from 260 Form 5 (Grade 11) students from five schools to initially…
9. Modeling description and spectroscopic evidence of surface acid-base properties of natural illites.
PubMed
Liu, W
2001-12-01
The acid-base properties of natural illites from different areas were studied by potentiometric titrations. The acidimetric supernatant was regarded as the system blank to calculate the surface site concentration due to consideration of substrate dissolution during the prolonged acidic titration. The following surface complexation model could give a good interpretation of the surface acid-base reactions of the aqueous illites:
10. Collaborative Strategies for Teaching Common Acid-Base Disorders to Medical Students
ERIC Educational Resources Information Center
Petersen, Marie Warrer; Toksvang, Linea Natalie; Plovsing, Ronni R.; Berg, Ronan M. G.
2014-01-01
The ability to recognize and diagnose acid-base disorders is of the utmost importance in the clinical setting. However, it has been the experience of the authors that medical students often have difficulties learning the basic principles of acid-base physiology in the respiratory physiology curriculum, particularly when applying this knowledge to…
11. Canonical Pedagogical Content Knowledge by Cores for Teaching Acid-Base Chemistry at High School
ERIC Educational Resources Information Center
2015-01-01
The topic of acid-base chemistry is one of the oldest in general chemistry courses and it has been almost continuously in academic discussion. The central purpose of documenting the knowledge and beliefs of a group of ten Mexican teachers with experience in teaching acid-base chemistry in high school was to know how they design, prepare and…
12. [Dynamics of blood gases and acid-base balance in patients with carbon monoxide acute poisoning].
PubMed
Polozova, E V; Shilov, V V; Bogachova, A S; Davydova, E V
2015-01-01
Blood gases and acid-base balance were evaluated in patients with acute carbon monoxide poisoning, according to the presence of inhalation trauma. The evidence is that thermochemical injury of the respiratory tract induced a severe acid-base imbalance that remained decompensated for a long time despite treatment.
13. Neutral and charged matter in equilibrium with black holes
Bronnikov, K. A.; Zaslavskii, O. B.
2011-10-01
We study the conditions of a possible static equilibrium between spherically symmetric, electrically charged or neutral black holes and ambient matter. The following kinds of matter are considered: (1) neutral and charged matter with a linear equation of state p_r = wρ (for neutral matter the results of our previous work are reproduced), (2) neutral and charged matter with p_r ∝ ρ^m, m > 1, and (3) the possible presence of a "vacuum fluid" (the cosmological constant or, more generally, anything that satisfies the equality T_0^0 = T_1^1 at least at the horizon). We find a number of new cases of such an equilibrium, including those generalizing the well-known Majumdar-Papapetrou conditions for charged dust. It turns out, in particular, that ultraextremal black holes cannot be in equilibrium with any matter in the absence of a vacuum fluid; meanwhile, matter with w > 0, if it is properly charged, can surround an extremal charged black hole.
14. Uncertainty of mantle geophysical properties computed from phase equilibrium models
Connolly, J. A. D.; Khan, A.
2016-05-01
Phase equilibrium models are used routinely to predict geophysically relevant mantle properties. A limitation of this approach is that nonlinearity of the phase equilibrium problem precludes direct assessment of the resultant uncertainties. To overcome this obstacle, we stochastically assess uncertainties along self-consistent mantle adiabats for pyrolitic and basaltic bulk compositions to 2000 km depth. The dominant components of the uncertainty are the identity, composition and elastic properties of the minerals. For P wave speed and density, the latter components vary little, whereas the first is confined to the upper mantle. Consequently, P wave speeds, densities, and adiabatic temperatures and pressures predicted by phase equilibrium models are more uncertain in the upper mantle than in the lower mantle. In contrast, uncertainties in S wave speeds are dominated by the uncertainty in shear moduli and are approximately constant throughout the model depth range.
15. Chemical-equilibrium calculations for aqueous geothermal brines
SciTech Connect
Kerrisk, J.F.
1981-05-01
Results from four chemical-equilibrium computer programs, REDEQL.EPAK, GEOCHEM, WATEQF, and SENECA2, have been compared with experimental solubility data for some simple systems of interest with geothermal brines. Seven test cases involving solubilities of CaCO3, amorphous SiO2, CaSO4, and BaSO4 at various temperatures from 25 to 300 °C and in NaCl or HCl solutions of 0 to 4 molal have been examined. Significant differences between calculated results and experimental data occurred in some cases. These differences were traced to inaccuracies in free-energy or equilibrium-constant data and in activity coefficients used by the programs. Although currently available chemical-equilibrium programs can give reasonable results for these calculations, considerable care must be taken in the selection of free-energy data and methods of calculating activity coefficients.
16. Spectral Quasi-Equilibrium Manifold for Chemical Kinetics.
PubMed
Kooshkbaghi, Mahdi; Frouzakis, Christos E; Boulouchos, Konstantinos; Karlin, Iliya V
2016-05-26
The Spectral Quasi-Equilibrium Manifold (SQEM) method is a model reduction technique for chemical kinetics based on entropy maximization under constraints built by the slowest eigenvectors at equilibrium. The method is revisited here, discussed, and validated through the Michaelis-Menten kinetic scheme, and the quality of the reduction is related to the temporal evolution and the gap between eigenvalues. SQEM is then applied to detailed reaction mechanisms for the homogeneous combustion of hydrogen, syngas, and methane mixtures with air in adiabatic constant pressure reactors. The system states computed using SQEM are compared with those obtained by direct integration of the detailed mechanism, and good agreement between the reduced and the detailed descriptions is demonstrated. The SQEM reduced model of hydrogen/air combustion is also compared with another similar technique, the Rate-Controlled Constrained-Equilibrium (RCCE). For the same number of representative variables, SQEM is found to provide a more accurate description.
17. Effective Torsion and Spring Constants in a Hybrid Translational-Rotational Oscillator
ERIC Educational Resources Information Center
Nakhoda, Zein; Taylor, Ken
2011-01-01
A torsion oscillator is a vibrating system that experiences a restoring torque given by τ = -κθ when it experiences a rotational displacement θ from its equilibrium position. The torsion constant κ (kappa) is analogous to the spring constant "k" for the traditional translational oscillator (for which the restoring force…
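The rotational-translational analogy in this abstract can be sketched numerically. The formulas are the standard ones (ω = √(κ/I) and ω = √(k/m)); the numerical values below are hypothetical, chosen only to illustrate the parallel, and are not from the article.

```python
import math

# Torsion oscillator: restoring torque tau = -kappa * theta gives
# angular frequency omega = sqrt(kappa / I), where I is the moment of inertia.
def omega_torsion(kappa, inertia):
    return math.sqrt(kappa / inertia)

# Translational oscillator: restoring force F = -k * x gives
# omega = sqrt(k / m), the familiar mass-spring result.
def omega_translation(k, m):
    return math.sqrt(k / m)

# Hypothetical values: kappa in N*m/rad, I in kg*m^2, k in N/m, m in kg.
w_rot = omega_torsion(kappa=2.0e-3, inertia=5.0e-4)
w_trans = omega_translation(k=8.0, m=2.0)
```

Both calls return the same angular frequency here, underscoring that the two oscillators share identical mathematics once κ↔k and I↔m are swapped.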
18. Equilibrium studies of copper ion adsorption onto palm kernel fibre.
PubMed
Ofomaja, Augustine E
2010-07-01
The equilibrium sorption of copper ions from aqueous solution using a new adsorbent, palm kernel fibre, has been studied. Palm kernel fibre is obtained in large amounts as a waste product of palm oil production. Batch equilibrium studies were carried out and system variables such as solution pH, sorbent dose, and sorption temperature were varied. The equilibrium sorption data were then analyzed using the Langmuir, Freundlich, Dubinin-Radushkevich (D-R) and Temkin isotherms. The fit of these isotherm models to the equilibrium sorption data was determined using the linear coefficient of determination, r^2, and the non-linear chi-square, chi^2, error analysis. The results revealed that sorption was pH dependent and increased with increasing solution pH above the pH_PZC of the palm kernel fibre, with an optimum dose of 10 g/dm^3. The equilibrium data were found to fit the Langmuir isotherm model best, with a monolayer capacity of 3.17 x 10^-4 mol/g at 339 K. The sorption equilibrium constant, K_a, increased with increasing temperature, indicating that the bond strength between sorbate and sorbent increased with temperature and sorption was endothermic. This was confirmed by the increase in the values of the Temkin isotherm constant, B_1, with increasing temperature. The Dubinin-Radushkevich (D-R) isotherm parameter, the free energy E, was in the range of 15.7-16.7 kJ/mol, suggesting that the sorption mechanism was ion exchange. Desorption studies showed that a high percentage of the copper was desorbed from the adsorbent using acid solutions (HCl, HNO3 and CH3COOH) and the desorption percentage increased with acid concentration. The thermodynamics of the copper ions/palm kernel fibre system indicate that the process is spontaneous and endothermic. PMID: 20346574
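The Langmuir analysis described above can be sketched as follows. This is not the authors' code: the fitting routine uses the common linearized form Ce/qe = Ce/qm + 1/(Ka·qm), the monolayer capacity qm is taken from the abstract, and the equilibrium constant Ka and concentration grid are invented purely for illustration.

```python
# Fit the linearized Langmuir isotherm Ce/qe = Ce/qm + 1/(Ka*qm) by
# ordinary least squares, returning qm, Ka and the linear coefficient
# of determination r^2 (the goodness-of-fit measure named in the abstract).
def fit_langmuir(Ce, qe):
    y = [c / q for c, q in zip(Ce, qe)]
    n = len(Ce)
    mx, my = sum(Ce) / n, sum(y) / n
    sxx = sum((c - mx) ** 2 for c in Ce)
    sxy = sum((c - mx) * (v - my) for c, v in zip(Ce, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    qm = 1.0 / slope            # monolayer capacity (mol/g)
    Ka = slope / intercept      # sorption equilibrium constant
    ss_res = sum((v - (slope * c + intercept)) ** 2 for c, v in zip(Ce, y))
    ss_tot = sum((v - my) ** 2 for v in y)
    r2 = 1.0 - ss_res / ss_tot
    return qm, Ka, r2

# Synthetic isotherm from assumed parameters: qm from the abstract,
# Ka and the Ce grid are hypothetical.
qm_true, Ka_true = 3.17e-4, 5.0e3           # mol/g, dm^3/mol
Ce = [1e-4 * (i + 1) for i in range(20)]    # equilibrium concentrations
qe = [qm_true * Ka_true * c / (1.0 + Ka_true * c) for c in Ce]
qm_fit, Ka_fit, r2 = fit_langmuir(Ce, qe)
```

Because the synthetic data follow the Langmuir form exactly, the fit recovers the input parameters with r^2 ≈ 1; with real batch data the r^2 and chi^2 comparisons across isotherm models would discriminate between them as the study does.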
20. New Quasar Studies Keep Fundamental Physical Constant Constant
2004-03-01
Very Large Telescope sets stringent limit on possible variation of the fine-structure constant over cosmological time. Summary: Detecting or constraining the possible time variations of fundamental physical constants is an important step toward a complete understanding of basic physics and hence the world in which we live, a step in which astrophysics proves most useful. Previous astronomical measurements of the fine structure constant, the dimensionless number that determines the strength of interactions between charged particles and electromagnetic fields, suggested that this particular constant is increasing very slightly with time. If confirmed, this would have very profound implications for our understanding of fundamental physics. New studies, conducted using the UVES spectrograph on Kueyen, one of the 8.2-m telescopes of ESO's Very Large Telescope array at Paranal (Chile), secured new data with unprecedented quality. These data, combined with a very careful analysis, have provided the strongest astronomical constraints to date on the possible variation of the fine structure constant. They show that, contrary to previous claims, no evidence exists for assuming a time variation of this fundamental constant. PR Photo 07/04: Relative Changes with Redshift of the Fine Structure Constant (VLT/UVES). A fine constant: To explain the Universe and to represent it mathematically, scientists rely on so-called fundamental constants or fixed numbers. The fundamental laws of physics, as we presently understand them, depend on about 25 such constants. Well-known examples are the gravitational constant, which defines the strength of the force acting between two bodies, such as the Earth and the Moon, and the speed of light. One of these constants is the so-called "fine structure constant", alpha = 1/137.03599958, a combination of the electrical charge of the electron, the Planck constant and the speed of light. The fine structure constant describes how electromagnetic forces hold
1. Interactions of Virus Like Particles in Equilibrium and Non-equilibrium Systems
Lin, Hsiang-Ku
This thesis summarizes my Ph.D. research on the interactions of virus-like particles in equilibrium and non-equilibrium biological systems. In the equilibrium system, we studied the fluctuation-induced forces between inclusions in a fluid membrane. We developed an exact method to calculate thermal Casimir forces between inclusions of arbitrary shapes and separation, embedded in a fluid membrane whose fluctuations are governed by the combined action of surface tension, bending modulus, and Gaussian rigidity. Each object's shape and mechanical properties enter only through a characteristic matrix, a static analog of the scattering matrix. We calculate the Casimir interaction between two elastic disks embedded in a membrane. In particular, we find that at short separations the interaction is strong and independent of surface tension. In the non-equilibrium system, we studied the transport and deposition dynamics of colloids in saturated porous media under unfavorable filtering conditions. As an alternative to traditional convection-diffusion or more detailed numerical models, we consider a mean-field description in which the attachment and detachment processes are characterized by an entire spectrum of rate constants, ranging from shallow traps which mostly account for hydrodynamic dispersivity, all the way to the permanent traps associated with physical straining. The model has an analytical solution which allows analysis of its properties, including the long-time asymptotic behavior and the profile of the deposition curves. Furthermore, the model gives rise to a filtering front whose structure, stability and propagation velocity are examined. Based on these results, we propose an experimental protocol to determine the parameters of the model.
2. Equilibrium and dynamic design principles for binding molecules engineered for reagentless biosensors.
PubMed
de Picciotto, Seymour; Imperiali, Barbara; Griffith, Linda G; Wittrup, K Dane
2014-09-01
Reagentless biosensors rely on the interaction of a binding partner and its target to generate a change in fluorescent signal using an environment-sensitive fluorophore or Förster resonance energy transfer. Binding affinity can exert a significant influence on both the equilibrium and the dynamic response characteristics of such a biosensor. We here develop a kinetic model for the dynamic performance of a reagentless biosensor. Using a sinusoidal signal for ligand concentration, our findings suggest that it is optimal to use a binding moiety whose equilibrium dissociation constant matches that of the average predicted input signal, while maximizing both the association rate constant and the dissociation rate constant at the necessary ratio to create the desired equilibrium constant. Although practical limitations constrain the attainment of these objectives, the derivation of these design principles provides guidance for improved reagentless biosensor performance and metrics for quality standards in the development of biosensors. These concepts are broadly relevant to reagentless biosensor modalities.
3. A mathematical model of pH, based on the total stoichiometric concentration of acids, bases and ampholytes dissolved in water.
PubMed
Mioni, Roberto; Mioni, Giuseppe
2015-10-01
In chemistry and in acid-base physiology, the Henderson-Hasselbalch equation plays a pivotal role in studying the behaviour of the buffer solutions. However, it seems that the general function to calculate the valence of acids, bases and ampholytes, N = f(pH), at any pH, has only been provided by Kildeberg. This equation can be applied to strong acids and bases, pluriprotic weak acids, bases and ampholytes, with an arbitrary number of acid strength constants, pKA, including water. By differentiating this function with respect to pH, we obtain the general equation for the buffer value. In addition, by integrating the titration curve, TA, proposed by Kildeberg, and calculating its Legendre transform, we obtain the Gibbs free energy of pH (or pOH)-dependent titratable acid. Starting from the law of electroneutrality and applying suitable simplifications, it is possible to calculate the pH of the buffer solutions by numerical methods, available in software packages such as Excel. The concept of buffer capacity has also been clarified by Urbansky, but, at variance with our approach, not in an organic manner. In fact, for each set of monobasic, dibasic, tribasic acids, etc., various equations are presented which independently fit each individual acid-base category. Consequently, with the increase in acid groups (pKA), the equations become more and more difficult, both in practice and in theory. Some examples are proposed to highlight the boundary that exists between acid-base physiology and the thermodynamic concepts of energy, chemical potential, amount of substance and acid resistance. PMID:26059505
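The abstract's closing point, that the electroneutrality condition plus suitable simplifications lets the pH of a solution be found numerically, can be illustrated with a minimal sketch. This is not the authors' Kildeberg-based model: it handles only a single monoprotic weak acid, and the acetic-acid numbers at the end are an assumed example, not data from the paper.

```python
# Electroneutrality for a monoprotic weak acid HA at total concentration C:
#   [H+] = [A-] + [OH-],  with  [A-] = C*Ka/(Ka + [H+])  and  [OH-] = Kw/[H+].
# The residual of this balance is monotonic in pH, so bisection suffices.
def buffer_ph(C, Ka, Kw=1e-14, tol=1e-10):
    def charge_imbalance(pH):
        h = 10.0 ** (-pH)
        return h - C * Ka / (Ka + h) - Kw / h
    lo, hi = 0.0, 14.0   # residual is positive at lo, negative at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if charge_imbalance(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Assumed example: acetic acid at 0.1 mol/dm^3 with Ka = 1.8e-5.
pH = buffer_ph(0.1, 1.8e-5)
```

Setting C = 0 reduces the balance to [H+] = [OH-] and the solver returns neutral water, a quick sanity check that the electroneutrality bookkeeping is right.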
5. A beta-D-allopyranoside-grafted Ru(II) complex: synthesis and acid-base and DNA-binding properties.
PubMed
Ma, Yan-Zi; Yin, Hong-Ju; Wang, Ke-Zhi
2009-08-01
A new ruthenium(II) complex grafted with beta-D-allopyranoside, Ru(bpy)2(Happip)(ClO4)2 (where bpy = 2,2'-bipyridine; Happip = 2-(4-(beta-D-allopyranoside)phenyl)imidazo[4,5-f][1,10]phenanthroline), has been synthesized and characterized by elemental analysis, 1H NMR spectroscopy, and mass spectrometry. The acid-base properties of the complex have been studied by UV-visible and luminescence spectrophotometric pH titrations, and ground- and excited-state ionization constants have been derived. The Ru(II) complex functions as a DNA intercalator as revealed by UV-visible and emission titrations, salt effects, steady-state emission quenching by [Fe(CN)6]^4-, DNA competitive binding with ethidium bromide, DNA melting experiment, and viscosity measurements.
6. Tuning universality far from equilibrium
PubMed Central
Karl, Markus; Nowak, Boris; Gasenzer, Thomas
2013-01-01
Possible universal dynamics of a many-body system far from thermal equilibrium are explored. A focus is set on meta-stable non-thermal states exhibiting critical properties such as self-similarity and independence of the details of how the respective state has been reached. It is proposed that universal dynamics far from equilibrium can be tuned to exhibit a dynamical transition where these critical properties change qualitatively. This is demonstrated for the case of a superfluid two-component Bose gas exhibiting different types of long-lived but non-thermal critical order. Scaling exponents controlled by the ratio of experimentally tuneable coupling parameters offer themselves as natural smoking guns. The results shed light on the wealth of universal phenomena expected to exist in the far-from-equilibrium realm. PMID:23928853
7. Phase coexistence far from equilibrium
Dickman, Ronald
2016-04-01
Investigation of simple far-from-equilibrium systems exhibiting phase separation leads to the conclusion that phase coexistence is not well defined in this context. This is because the properties of the coexisting nonequilibrium systems depend on how they are placed in contact, as verified in the driven lattice gas with attractive interactions, and in the two-temperature lattice gas, under (a) weak global exchange between uniform systems, and (b) phase-separated (nonuniform) systems. Thus, far from equilibrium, the notions of universality of phase coexistence (i.e., independence of how systems exchange particles and/or energy), and of phases with intrinsic properties (independent of their environment) are lost.
8. Toroidal plasma equilibrium with gravity
SciTech Connect
Yoshikawa, S.
1980-09-01
Toroidal magnetic field configuration in a gravitational field is calculated both from a simple force balance and from a calculation using magnetic surfaces. A configuration is found which is positionally stable in a star. The vibrational frequency near the equilibrium point is proportional to the hydrostatic frequency of the star multiplied by the ratio (W_B/W_M)^(1/2), where W_B is the magnetic field energy density and W_M is the material pressure at the equilibrium point. It is proposed that this frequency may account for the observed solar spot cycles.
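The quoted scaling is simple enough to state as a one-line computation. The numbers below are hypothetical placeholders (the abstract gives no values); only the functional form, hydrostatic frequency times the square root of the energy-density ratio, comes from the text.

```python
import math

# Vibration frequency near the equilibrium point:
#   f = f_hydro * sqrt(W_B / W_M)
# where W_B is the magnetic field energy density and W_M the material
# pressure at the equilibrium point (same units, so the ratio is dimensionless).
def vibration_frequency(f_hydro, W_B, W_M):
    return f_hydro * math.sqrt(W_B / W_M)

# Hypothetical inputs: f_hydro in Hz, W_B and W_M in J/m^3.
f = vibration_frequency(f_hydro=3.0e-3, W_B=1.0, W_M=4.0)
```

Since W_B < W_M for the weak fields envisaged, the predicted oscillation is slower than the star's hydrostatic frequency, consistent with the suggestion that it could match long solar-spot cycles.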
9. Adiabatic evolution of plasma equilibrium
PubMed Central
Grad, H.; Hu, P. N.; Stevens, D. C.
1975-01-01
A new theory of plasma equilibrium is introduced in which adiabatic constraints are specified. This leads to a mathematically nonstandard structure, as compared to the usual equilibrium theory, in which prescription of pressure and current profiles leads to an elliptic partial differential equation. Topologically complex configurations require further generalization of the concept of adiabaticity to allow irreversible mixing of plasma and magnetic flux among islands. Matching conditions across a boundary layer at the separatrix are obtained from appropriate conservation laws. Applications are made to configurations with planned islands (as in Doublet) and accidental islands (as in Tokamaks). Two-dimensional, axially symmetric, helically symmetric, and closed line equilibria are included. PMID:16578729
10. Novel mapping in non-equilibrium stochastic processes
Heseltine, James; Kim, Eun-jin
2016-04-01
We investigate the time-evolution of a non-equilibrium system in view of the change in information and provide a novel mapping relation which quantifies the change in information far from equilibrium and the proximity of a non-equilibrium state to the attractor. Specifically, we utilize a nonlinear stochastic model where the stochastic noise plays the role of incoherent regulation of the dynamical variable x and analytically compute the rate of change in information (information velocity) from the time-dependent probability distribution function. From this, we quantify the total change in information in terms of the information length L and the associated action J, where L represents the distance that the system travels in the fluctuation-based, statistical metric space parameterized by time. As the initial probability density function's mean position (μ) is decreased from the final equilibrium value μ* (the carrying capacity), L and J increase monotonically with interesting power-law mapping relations. In comparison, as μ is increased from μ*, L and J increase slowly until they level off to a constant value. This manifests the proximity of the state to the attractor caused by a strong correlation for large μ through large fluctuations. Our proposed mapping relation provides a new way of understanding the progression of complexity in a non-equilibrium system in view of information change and the structure of the underlying attractor.
11. Modeling Bacteria Surface Acid-Base Properties: The Overprint Of Biology
Amores, D. R.; Smith, S.; Warren, L. A.
2009-05-01
Bacteria are ubiquitous in the environment and are important repositories for metals as well as nucleation templates for a myriad of secondary minerals due to an abundance of reactive surface binding sites. Model elucidation of whole-cell surface reactivity simplifies bacteria as viable but static, i.e., no metabolic activity, to enable fits of microbial data sets from models derived from mineral surfaces. Here we investigate the surface proton charging behavior of live and dead whole-cell cyanobacteria (Synechococcus sp.) harvested from a single parent culture by acid-base titration using a Fully Optimized ContinUouS (FOCUS) pKa spectrum method. Viability of live cells was verified by successful recultivation post experimentation, whereas dead cells were consistently non-recultivable. Surface site identities derived from binding constants determined for both the live and dead cells are consistent with molecular analogs for organic functional groups known to occur on microbial surfaces: carboxylic (pKa = 2.87-3.11), phosphoryl (pKa = 6.01-6.92) and amine/hydroxyl groups (pKa = 9.56-9.99). However, variability in total ligand concentration among the live cells is greater than that between the live and dead cells. The total ligand concentrations (LT, mol·mg⁻¹ dry solid) derived from the live cell titrations (n = 12) clustered into two sub-populations: high (LT = 24.4) and low (LT = 5.8), compared to the single concentration for the dead cell titrations (LT = 18.8; n = 5). We infer from these results that metabolic activity can substantively impact surface reactivity of morphologically identical cells. These results and their modeling implications for bacteria surface reactivities will be discussed.
12. Interpretation of pH-activity profiles for acid-base catalysis from molecular simulations.
PubMed
Dissanayake, Thakshila; Swails, Jason M; Harris, Michael E; Roitberg, Adrian E; York, Darrin M
2015-02-17
The measurement of reaction rate as a function of pH provides essential information about mechanism. These rates are sensitive to the pKa values of amino acids directly involved in catalysis that are often shifted by the enzyme active site environment. Experimentally observed pH-rate profiles are usually interpreted using simple kinetic models that allow estimation of "apparent pKa" values of presumed general acid and base catalysts. One of the underlying assumptions in these models is that the protonation states are uncorrelated. In this work, we introduce the use of constant pH molecular dynamics simulations in explicit solvent (CpHMD) with replica exchange in the pH-dimension (pH-REMD) as a tool to aid in the interpretation of pH-activity data of enzymes and to test the validity of different kinetic models. We apply the methods to RNase A, a prototype acid-base catalyst, to predict the macroscopic and microscopic pKa values, as well as the shape of the pH-rate profile. Results for apo and cCMP-bound RNase A agree well with available experimental data and suggest that deprotonation of the general acid and protonation of the general base are not strongly coupled in transphosphorylation and hydrolysis steps. Stronger coupling, however, is predicted for the Lys41 and His119 protonation states in apo RNase A, leading to the requirement for a microscopic kinetic model. This type of analysis may be important for other catalytic systems where the active forms of the implicated general acid and base are oppositely charged and more highly correlated. These results suggest a new way for CpHMD/pH-REMD simulations to bridge the gap with experiments to provide a molecular-level interpretation of pH-activity data in studies of enzyme mechanisms.
13. Understanding Thermal Equilibrium through Activities
ERIC Educational Resources Information Center
2015-01-01
Thermal equilibrium is a basic concept in thermodynamics. In India, this concept is generally introduced at the first year of undergraduate education in physics and chemistry. In our earlier studies (Pathare and Pradhan 2011 "Proc. episteme-4 Int. Conf. to Review Research on Science Technology and Mathematics Education" pp 169-72) we…
14. An investigation of equilibrium concepts
NASA Technical Reports Server (NTRS)
Prozan, R. J.
1982-01-01
A different approach to modeling of the thermochemistry of rocket engine combustion phenomena is presented. The methodology described is based on the hypothesis of a new variational principle applicable to compressible fluid mechanics. This hypothesis is extended to treat the thermochemical behavior of a reacting (equilibrium) gas in an open system.
15. A Simplified Undergraduate Laboratory Experiment to Evaluate the Effect of the Ionic Strength on the Equilibrium Concentration Quotient of the Bromcresol Green Dye
ERIC Educational Resources Information Center
Rodriguez, Hernan B.; Mirenda, Martin
2012-01-01
A modified laboratory experiment for undergraduate students is presented to evaluate the effects of the ionic strength, "I", on the equilibrium concentration quotient, K[subscript c], of the acid-base indicator bromcresol green (BCG). The two-step deprotonation of the acidic form of the dye (sultone form), as it is dissolved in water, yields…
16. Constant-Pressure Hydraulic Pump
NASA Technical Reports Server (NTRS)
Galloway, C. W.
1982-01-01
Constant output pressure in gas-driven hydraulic pump would be assured in new design for gas-to-hydraulic power converter. With a force-multiplying ring attached to gas piston, expanding gas would apply constant force on hydraulic piston even though gas pressure drops. As a result, pressure of hydraulic fluid remains steady, and power output of the pump does not vary.
17. Improving pharmacy students' understanding and long-term retention of acid-base chemistry.
PubMed
Roche, Victoria F
2007-12-15
Despite repeated exposure to the principles underlying the behavior of organic acids and bases in aqueous solution, some pharmacy students remain confused about the topic of acid-base chemistry. Since a majority of organic drug molecules have acid-base character, the ability to predict their reactivity and the extent to which they will ionize in a given medium is paramount to students' understanding of essentially all aspects of drug action in vivo and in vitro. This manuscript presents a medicinal chemistry lesson in the fundamentals of acid-base chemistry that many pharmacy students have found enlightening and clarifying.
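The prediction of ionization extent that this lesson centers on follows directly from the Henderson-Hasselbalch equation. The sketch below is a minimal illustration of that standard relation (the pKa and pH values are illustrative, not taken from the article): for a monoprotic acid HA, pH − pKa = log₁₀([A⁻]/[HA]), so the ionized fraction is 1/(1 + 10^(pKa − pH)).

```python
# Minimal sketch of the standard Henderson-Hasselbalch reasoning behind
# predicting the extent of ionization of a weak monoprotic acid at a given pH.
# pH - pKa = log10([A-]/[HA])  =>  ionized fraction = 1 / (1 + 10**(pKa - pH)).

def fraction_ionized_acid(pKa, pH):
    """Fraction of a weak monoprotic acid present as its conjugate base A-."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# Aspirin-like acid with pKa ~ 3.5 (illustrative values): almost fully
# ionized at plasma pH 7.4, largely un-ionized in gastric fluid at pH 1.5.
print(round(fraction_ionized_acid(3.5, 7.4), 4))  # 0.9999
print(round(fraction_ionized_acid(3.5, 1.5), 4))  # 0.0099
```

The same one-line relation, with pKa and pH swapped in sign, gives the protonated (ionized) fraction of a weak base, which is the symmetry students most often confuse.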
18. [Practical diagnostics of acid-base disorders: part I: differentiation between respiratory and metabolic disturbances].
PubMed
Deetjen, P; Lichtwarck-Aschoff, M
2012-11-01
The first part of this overview on diagnostic tools for acid-base disorders focuses on basic knowledge for distinguishing between respiratory and metabolic causes of a particular disturbance. Rather than taking sides in the great transatlantic or traditional-modern debate on the best theoretical model for understanding acid-base physiology, this article tries to extract what is most relevant for everyday clinical practice from the three schools involved in these keen debates: the Copenhagen, the Boston and the Stewart schools. Each school is particularly strong in a specific diagnostic or therapeutic field. Appreciating these various strengths, a unifying, simplified algorithm together with an acid-base calculator will be discussed.
19. Acid-base and chelatometric photo-titrations with photosensors and membrane photosensors.
PubMed
Matsuo, T; Masuda, Y; Sekido, E
1986-08-01
Photosensors (PS) and membrane photosensors (MPS), which can be immersed in the test solution and facilitate the measurement of concentration, have been developed by miniaturizing an optical system consisting of a light source and a photocell. For use in acid-base or complexometric titrations a poly(vinyl chloride) membrane containing an acid-base or metallochromic indicator can be applied as a coating to the photocell. Spectrophotometric determination of copper(II), and photometric acid-base and chelatometric titrations have been performed with the PS and MPS systems.
20. Water dimer equilibrium constant calculation: a quantum formulation including metastable states.
PubMed
Leforestier, Claude
2014-02-21
We present a full quantum evaluation of the water second virial coefficient B(T) based on the Takahashi-Imada second-order approximation. As the associated trace Tr[exp(−βH_AB) − exp(−βH⁰_AB)] is performed in the coordinate representation, it also includes contributions from the whole continuum, i.e., resonances and collision pairs of monomers. This approach is compared to a Path Integral Monte Carlo evaluation of this coefficient by Schenter [J. Chem. Phys. 117, 6573 (2002)] for the TIP4P potential and shown to give extremely close results in the low-temperature range (250-450 K) reported. Using a recent ab initio flexible potential for the water dimer, this new formulation leads to very good agreement with experimental values over the whole range of temperatures available. The virial coefficient is then used in the well-known relation Kp(T) = −(B(T) − bM)/RT, where the excluded volume bM is assimilated to the second virial coefficient of pure water monomer vapor and approximated from the inner repulsive part of the interaction potential. This definition, which renders bM temperature dependent, allows us to retrieve the 38 cm³ mol⁻¹ value commonly used at room temperature. The resulting values for Kp(T) are in agreement with available experimental data obtained from infrared absorption spectra of water vapor.
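The quoted relation Kp(T) = −(B(T) − bM)/RT is simple enough to evaluate directly once B(T) and bM are known. The sketch below does so with an illustrative placeholder for B(T) (the strongly negative magnitude is typical for water vapor near room temperature, but it is not a value from the paper); only bM = 38 cm³ mol⁻¹ is taken from the abstract.

```python
# Sketch of the dimerization equilibrium constant from the second virial
# coefficient: Kp(T) = -(B(T) - bM) / (R T).  B(T) below is an illustrative
# placeholder, NOT data from the paper; bM = 38 cm^3/mol is the
# room-temperature excluded volume cited in the abstract.

R = 82.057  # gas constant in cm^3 atm mol^-1 K^-1, so Kp comes out in atm^-1

def dimer_Kp(B_cm3_per_mol, bM_cm3_per_mol, T_kelvin):
    return -(B_cm3_per_mol - bM_cm3_per_mol) / (R * T_kelvin)

# Placeholder B(T) of roughly the right order for water vapour at 298 K:
Kp = dimer_Kp(B_cm3_per_mol=-1150.0, bM_cm3_per_mol=38.0, T_kelvin=298.15)
print(Kp)  # ~0.049 atm^-1
```

Note the unit bookkeeping: with B and bM in cm³ mol⁻¹ and R in cm³ atm mol⁻¹ K⁻¹, Kp emerges in atm⁻¹, the conventional unit for this pressure-based equilibrium constant.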
1. Non-Equilibrium Properties from Equilibrium Free Energy Calculations
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; Wilson, Michael A.
2012-01-01
Calculating free energy in computer simulations is of central importance in statistical mechanics of condensed media and its applications to chemistry and biology, not only because it is the most comprehensive and informative quantity that characterizes the equilibrium state, but also because it often provides an efficient route to access dynamic and kinetic properties of a system. Most applications of equilibrium free energy calculations to non-equilibrium processes rely on a description in which a molecule or an ion diffuses in the potential of mean force. In the general case this description is a simplification, but it might be satisfactorily accurate in many instances of practical interest. This hypothesis has been tested on the example of the electrodiffusion equation. Conductance of model ion channels has been calculated directly by counting the number of ion-crossing events observed during long molecular dynamics simulations and has been compared with the conductance obtained from solving the generalized Nernst-Planck equation. It has been shown that under relatively modest conditions the agreement between these two approaches is excellent, thus demonstrating that the assumptions underlying the diffusion equation are fulfilled. Under these conditions the electrodiffusion equation provides an efficient approach to calculating the full voltage-current dependence routinely measured in electrophysiological experiments.
2. Electrospun poly(lactic acid) based conducting nanofibrous networks
Patra, S. N.; Bhattacharyya, D.; Ray, S.; Easteal, A. J.
2009-08-01
Multi-functionalised micro/nanostructures of conducting polymers in neat or blended forms have received much attention because of their unique properties and technological applications in electrical, magnetic and biomedical devices. Biopolymer-based conducting fibrous mats are of special interest for tissue engineering because they not only physically support tissue growth but also are electrically conductive, and thus are able to stimulate specific cell functions or trigger cell responses. They are effective for carrying current in biological environments and can thus be considered for delivering local electrical stimuli at the site of damaged tissue to promote wound healing. Electrospinning is an established way to process polymer solutions or melts into continuous fibres with diameter often in the nanometre range. This process primarily depends on a number of parameters, including the type of polymer, solution viscosity, polarity and surface tension of the solvent, electric field strength and the distance between the spinneret and the collector. The present research has included polyaniline (PANi) as the conducting polymer and poly(L-lactic acid) (PLLA) as the biopolymer. Dodecylbenzene sulphonic acid (DBSA) doped PANi and PLLA have been dissolved in a common solvent (mixtures of chloroform and dimethyl formamide (DMF)), and the solutions successfully electrospun. DMF enhanced the dielectric constant of the solvent, and tetra butyl ammonium bromide (TBAB) was used as an additive to increase the conductivity of the solution. DBSA-doped PANi/PLLA mat exhibits an almost bead-free network of nanofibres that have extraordinarily smooth surface and diameters in the range 75 to 100 nm.
3. Envisioning an enzymatic Diels-Alder reaction by in situ acid-base catalyzed diene generation.
PubMed
Linder, Mats; Johansson, Adam Johannes; Manta, Bianca; Olsson, Philip; Brinck, Tore
2012-06-01
We present and evaluate a new and potentially efficient route for enzyme-mediated Diels-Alder reactions, utilizing general acid-base catalysis. The viability of employing the active site of ketosteroid isomerase is demonstrated.
4. Going Beyond, Going Further: The Preparation of Acid-Base Titration Curves.
ERIC Educational Resources Information Center
McClendon, Michael
1984-01-01
Background information, list of materials needed, and procedures used are provided for a simple technique for generating mechanically plotted acid-base titration curves. The method is suitable for second-year high school chemistry students. (JN)
5. Ultrastructural observation of the acid-base resistant zone of all-in-one adhesives using three different acid-base challenges.
PubMed
Tsujimoto, Miho; Nikaido, Toru; Inoue, Go; Sadr, Alireza; Tagami, Junji
2010-11-01
The aim of this study was to analyze the ultrastructure of the dentin-adhesive interface using two all-in-one adhesive systems (Clearfil Tri-S Bond, TB; Tokuyama Bond Force, BF) after different acid-base challenges. Three solutions were used as acidic solutions for the acid-base challenges: a demineralizing solution (DS), a phosphoric acid solution (PA), and a hydrochloric acid solution (HCl). After the acid-base challenges, the bonded interfaces were examined by scanning electron microscopy. Thickness of the acid-base resistant zone (ABRZ) created in PA and HCl was thinner than in DS for both adhesive systems. For BF adhesive, an eroded area was observed beneath the ABRZ after immersion in PA and HCl, but not in DS. Conversely for TB adhesive, the eroded area was observed only after immersion in PA. In conclusion, although the ABRZ was observed for both all-in-one adhesive systems, its morphological features were influenced by the ingredients of both the adhesive material and acidic solution.
6. Constants and Variables of Nature
SciTech Connect
Sean Carroll
2009-04-03
It is conventional to imagine that the various parameters which characterize our physical theories, such as the fine structure constant or Newton’s gravitational constant, are truly “constant”, in the sense that they do not change from place to place or time to time. Recent developments in both theory and observation have led us to re-examine this assumption, and to take seriously the possibility that our supposed constants are actually gradually changing. I will discuss why we might expect these parameters to vary, and what observation and experiment have to say about the issue.
7. Enthalpies of formation of rare earths and actinide(III) hydroxides: Their acid-base relationships and estimation of their thermodynamic properties
SciTech Connect
1991-12-31
This paper reviews the literature on rare earth(III) and actinide(III) hydroxide thermodynamics, in particular the determination of their enthalpies of formation at 25 °C. The hydroxide unit-cell volumes, lanthanide/actinide ion sizes, and solid-solution stability trends have been correlated with a generalized acid-base strength model for oxides to estimate properties for heterogeneous equilibria that are relevant to nuclear waste modeling and to characterization of potential actinide environmental interactions. Enthalpies of formation and solubility-product constants of actinide(III) hydroxides are estimated.
8. Enthalpies of formation of rare earths and actinide(III) hydroxides: Their acid-base relationships and estimation of their thermodynamic properties
SciTech Connect
1991-01-01
This paper reviews the literature on rare earth(III) and actinide(III) hydroxide thermodynamics, in particular the determination of their enthalpies of formation at 25 °C. The hydroxide unit-cell volumes, lanthanide/actinide ion sizes, and solid-solution stability trends have been correlated with a generalized acid-base strength model for oxides to estimate properties for heterogeneous equilibria that are relevant to nuclear waste modeling and to characterization of potential actinide environmental interactions. Enthalpies of formation and solubility-product constants of actinide(III) hydroxides are estimated.
9. Near equilibrium distributions for beams with space charge in linear and nonlinear periodic focusing systems
SciTech Connect
Sonnad, Kiran G.; Cary, John R.
2015-04-15
A procedure to obtain a near equilibrium phase space distribution function has been derived for beams with space charge effects in a generalized periodic focusing transport channel. The method utilizes the Lie transform perturbation theory to canonically transform to slowly oscillating phase space coordinates. The procedure results in transforming the periodic focusing system to a constant focusing one, where equilibrium distributions can be found. Transforming back to the original phase space coordinates yields an equilibrium distribution function corresponding to a constant focusing system along with perturbations resulting from the periodicity in the focusing. Examples used here include linear and nonlinear alternating gradient focusing systems. It is shown that the nonlinear focusing components can be chosen such that the system is close to integrability. The equilibrium distribution functions are numerically calculated, and their properties associated with the corresponding focusing system are discussed.
10. Equilibrium & Nonequilibrium Fluctuation Effects in Biopolymer Networks
Kachan, Devin Michael
Fluctuation-induced interactions are an important organizing principle in a variety of soft matter systems. In this dissertation, I explore the role of both thermal and active fluctuations within cross-linked polymer networks. The systems I study are in large part inspired by the amazing physics found within the cytoskeleton of eukaryotic cells. I first predict and verify the existence of a thermal Casimir force between cross-linkers bound to a semi-flexible polymer. The calculation is complicated by the appearance of second order derivatives in the bending Hamiltonian for such polymers, which requires a careful evaluation of the path integral formulation of the partition function in order to arrive at the physically correct continuum limit and properly address ultraviolet divergences. I find that cross linkers interact along a filament with an attractive logarithmic potential proportional to thermal energy. The proportionality constant depends on whether and how the cross linkers constrain the relative angle between the two filaments to which they are bound. The interaction has important implications for the synthesis of biopolymer bundles within cells. I model the cross-linkers as existing in two phases: bound to the bundle and free in solution. When the cross-linkers are bound, they behave as a one-dimensional gas of particles interacting with the Casimir force, while the free phase is a simple ideal gas. Demanding equilibrium between the two phases, I find a discontinuous transition between a sparsely and a densely bound bundle. This discontinuous condensation transition induced by the long-ranged nature of the Casimir interaction allows for a similarly abrupt structural transition in semiflexible filament networks between a low cross linker density isotropic phase and a higher cross link density bundle network. This work is supported by the results of finite element Brownian dynamics simulations of semiflexible filaments and transient cross-linkers. I
11. Phonon Mapping in Flowing Equilibrium
Ruff, J. P. C.
2015-03-01
When a material conducts heat, a modification of the phonon population occurs. The equilibrium Bose-Einstein distribution is perturbed towards flowing-equilibrium, for which the distribution function is not analytically known. Here I argue that the altered phonon population can be efficiently mapped over broad regions of reciprocal space, via diffuse x-ray scattering or time-of-flight neutron scattering, while a thermal gradient is applied across a single crystal sample. When compared to traditional transport measurements, this technique offers a superior, information-rich new perspective on lattice thermal conductivity, wherein the band and momentum dependences of the phonon thermal current are directly resolved. The proposed method is benchmarked using x-ray thermal diffuse scattering measurements of single crystal diamond under transport conditions. CHESS is supported by the NSF & NIH/NIGMS via NSF Award DMR-1332208.
12. Punctuated equilibrium comes of age
Gould, Stephen Jay; Eldredge, Niles
1993-11-01
The intense controversies that surrounded the youth of punctuated equilibrium have helped it mature to a useful extension of evolutionary theory. As a complement to phyletic gradualism, its most important implications remain the recognition of stasis as a meaningful and predominant pattern within the history of species, and in the recasting of macroevolution as the differential success of certain species (and their descendants) within clades.
13. Thermodynamic equilibrium at heterogeneous pressure
Vrijmoed, J. C.; Podladchikov, Y. Y.
2015-07-01
Recent advances in metamorphic petrology point out the importance of grain-scale pressure variations in high-temperature metamorphic rocks. Pressure derived from chemical zonation using unconventional geobarometry based on equal chemical potentials fits mechanically feasible pressure variations. Here, a thermodynamic equilibrium method is presented that predicts chemical zoning as a result of pressure variations by Gibbs energy minimization. Equilibrium thermodynamic prediction of the chemical zoning in the case of pressure heterogeneity is done by constrained Gibbs minimization using linear programming techniques. In addition to constraining the system composition, a certain proportion of the system is constrained at a specified pressure. Input pressure variations need to be discretized, and each discrete pressure defines an additional constraint for the minimization. The Gibbs minimization method provides identical results to a geobarometry approach based on chemical potentials, thus validating the inferred pressure gradient. The thermodynamic consistency of the calculation is supported by the similar result obtained from two different approaches. In addition, the method can be used for multi-component, multi-phase systems, for which several applications are given. A good fit to natural observations in multi-phase, multi-component systems demonstrates the possibility to explain phase assemblages and zoning by spatial pressure variations at equilibrium as an alternative to pressure variation in time due to disequilibrium.
14. Long-term equilibrium tides
Shaffer, John A.; Cerveny, Randall S.
1998-08-01
Extreme equilibrium tides, or "hypertides," are computed in a new equilibrium tidal model combining algorithms of a version of the Chapront ELP-2000/82 Lunar Theory with the BER78 Milankovitch astronomical expansions. For the recent past, a high correspondence exists between computed semidiurnal tide levels and a record of coastal flooding demonstrating that astronomical alignment is a potential influence on such flooding. For the Holocene and near future, maximum tides demonstrate cyclic variations with peaks at near 5000 B.P. and 4000 A.P. On the late Quaternary timescale, variations in maximum equilibrium tide level display oscillations with periods of approximately 10,000, 100,000 and 400,000 years, because of precessional shifts in tidal maxima between vernal and autumnal equinoxes. While flooding occurs under the combined effects of tides and storms via "storm surges," the most extensive flooding will occur with the coincidence of storms and the rarer hypertides and is thus primarily influenced by hypertides. Therefore we suggest that astronomical alignment's relationship to coastal flooding is probabilistic rather than deterministic. Data derived from this model are applicable to (1) archaeological and paleoclimatic coastal reconstructions, (2) long-term planning, for example, radioactive waste site selection, (3) sea-level change and paleoestuarine studies or (4) ocean-meteorological interactions.
15. Radioligand Binding Assays for Determining Dissociation Constants of Phytohormone Receptors.
PubMed
Hellmuth, Antje; Calderón Villalobos, Luz Irina A
2016-01-01
In receptor-ligand interactions, dissociation constants provide a key parameter for characterizing binding. Here, we describe filter-based radioligand binding assays at equilibrium, either varying ligand concentrations up to receptor saturation or outcompeting ligand from its receptor with increasing concentrations of ligand analogue. Using the auxin coreceptor system, we illustrate how to use a saturation binding assay to determine the apparent dissociation constant (K_D′) for the formation of a ternary TIR1-auxin-AUX/IAA complex. Also, we show how to determine the inhibitory constant (K_i) for auxin binding by the coreceptor complex via a competition binding assay. These assays can be applied broadly to characterize a one-site binding reaction of a hormone to its receptor. PMID:27424743
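The saturation assay described here rests on the standard one-site binding isotherm, B = Bmax·[L]/(K_D + [L]). The sketch below is an illustrative fit to noiseless synthetic data (Bmax, K_D, and the concentrations are invented for the example, not values from the chapter), using the classical double-reciprocal linearization to recover the constants.

```python
import numpy as np

# One-site saturation binding sketch: B = Bmax * L / (KD + L).
# Bmax, KD, and the ligand concentrations are illustrative placeholders.

def specific_binding(L, Bmax, KD):
    return Bmax * L / (KD + L)

Bmax_true, KD_true = 100.0, 2.5                       # arbitrary units, nM
conc = np.array([0.1, 0.5, 1.0, 2.5, 5.0, 10.0, 50.0, 200.0])
bound = specific_binding(conc, Bmax_true, KD_true)    # noiseless "data"

# Double-reciprocal linearization: 1/B = (KD/Bmax)*(1/L) + 1/Bmax
slope, intercept = np.polyfit(1.0 / conc, 1.0 / bound, 1)
Bmax_est = 1.0 / intercept
KD_est = slope * Bmax_est
print(round(Bmax_est, 2), round(KD_est, 2))  # 100.0 2.5
```

With real (noisy) counts, nonlinear least-squares on the untransformed isotherm is preferred over the double-reciprocal fit, since the reciprocal transform amplifies error at low ligand concentrations; the linearization is shown here only because it makes the parameter recovery transparent.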
16. A Computationally Efficient Multicomponent Equilibrium Solver for Aerosols (MESA)
SciTech Connect
Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.
2005-12-23
This paper describes the development and application of a new multicomponent equilibrium solver for aerosol-phase (MESA) to predict the complex solid-liquid partitioning in atmospheric particles containing H⁺, NH₄⁺, Na⁺, Ca²⁺, SO₄²⁻, HSO₄⁻, NO₃⁻, and Cl⁻ ions. The algorithm of MESA involves integrating the set of ordinary differential equations describing the transient precipitation and dissolution reactions for each salt until the system satisfies the equilibrium or mass convergence criteria. Arbitrary values are chosen for the dissolution and precipitation rate constants such that their ratio is equal to the equilibrium constant. Numerically, this approach is equivalent to iterating all the equilibrium reactions simultaneously with a single iteration loop. Because CaSO₄ is sparingly soluble, it is assumed to exist as a solid over the entire RH range to simplify the algorithm for calcium-containing particles. Temperature-dependent mutual deliquescence relative humidity polynomials (valid from 240 to 310 K) for all the possible salt mixtures were constructed using the comprehensive Pitzer-Simonson-Clegg (PSC) activity coefficient model at 298.15 K and temperature-dependent equilibrium constants in MESA. Performance of MESA is evaluated for 16 representative mixed-electrolyte systems commonly found in tropospheric aerosols using PSC and two other multicomponent activity coefficient methods – the Multicomponent Taylor Expansion Method (MTEM) of Zaveri et al. [2004] and the widely used Kusik and Meissner method (KM) – and the results are compared against the predictions of the Web-based AIM Model III or available experimental data. Excellent agreement was found between AIM, MESA-PSC, and MESA-MTEM predictions of the multistage deliquescence growth as a function of RH. On the other hand, MESA-KM displayed up to 20% deviations in the mass growth factors for common salt mixtures in the sulfate-poor cases while significant discrepancies were found in the predicted multistage
17. Thermodynamics of sodium dodecyl sulphate-salicylic acid based micellar systems and their potential use in fruits postharvest.
PubMed
Cid, A; Morales, J; Mejuto, J C; Briz-Cid, N; Rial-Otero, R; Simal-Gándara, J
2014-05-15
Micellar systems have excellent food applications due to their capability to solubilise a large range of hydrophilic and hydrophobic substances. In this work, the mixed micelle formation between the ionic surfactant sodium dodecyl sulphate (SDS) and the phenolic acid salicylic acid has been studied at several temperatures in aqueous solution. The critical micelle concentration and the micellization degree were determined by conductometric techniques and the experimental data used to calculate several useful thermodynamic parameters, such as the standard free energy, enthalpy and entropy of micelle formation. Salicylic acid helps the micellization of SDS, both by increasing the additive concentration at a constant temperature and by increasing temperature at a constant concentration of additive. The formation of micelles of SDS in the presence of salicylic acid was a thermodynamically spontaneous process, and is also entropically controlled. Salicylic acid plays the role of a stabilizer, and gives a pathway to control the three-dimensional water matrix structure. The driving force of the micellization process is provided by the hydrophobic interactions. The isostructural temperature was found to be 307.5 K for the mixed micellar system. This article explores the use of SDS-salicylic acid based micellar systems for their potential use in fruits postharvest.
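The spontaneity claim in this abstract typically follows from the textbook pseudo-phase relation for ionic surfactants, ΔG°_mic = (2 − α)·RT·ln(x_cmc), where α is the micellar ionization degree and x_cmc the CMC expressed as a mole fraction. The sketch below applies that standard relation with SDS-like illustrative inputs; none of the numbers are taken from the paper.

```python
import math

# Textbook pseudo-phase sketch (NOT the paper's fit): standard Gibbs energy
# of micellization of an ionic surfactant on the mole-fraction scale,
#   dG_mic = (2 - alpha) * R * T * ln(x_cmc),
# where alpha is the micellar ionization degree and x_cmc = cmc / 55.4
# converts a molar CMC to a mole fraction in water.

R = 8.314  # J mol^-1 K^-1

def gibbs_micellization(cmc_mol_per_L, alpha, T):
    x_cmc = cmc_mol_per_L / 55.4  # mole fraction of surfactant at the CMC
    return (2.0 - alpha) * R * T * math.log(x_cmc)

# SDS-like illustrative inputs: CMC ~ 8 mM, ionization degree ~ 0.3, 298.15 K
dG = gibbs_micellization(8.0e-3, 0.3, 298.15)
print(round(dG / 1000.0, 1))  # ~ -37.3 kJ/mol: negative, i.e. spontaneous
```

A strongly negative ΔG°_mic together with a positive entropy term is the usual signature of the hydrophobically driven, entropy-controlled micellization the abstract reports.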
18. The Importance of the Ionic Product for Water to Understand the Physiology of the Acid-Base Balance in Humans
PubMed Central
Adeva-Andany, María M.; Carneiro-Freire, Natalia; Donapetry-García, Cristóbal; Rañal-Muíño, Eva; López-Pereiro, Yosua
2014-01-01
Human plasma is an aqueous solution that has to abide by chemical rules such as the principle of electrical neutrality and the constancy of the ionic product for water. These rules define the acid-base balance in the human body. According to the electroneutrality principle, plasma has to be electrically neutral and the sum of its cations equals the sum of its anions. In addition, the ionic product for water has to be constant. Therefore, the plasma concentration of hydrogen ions depends on the plasma ionic composition. Variations in the concentration of plasma ions that alter the relative proportion of anions and cations predictably lead to a change in the plasma concentration of hydrogen ions by driving adaptive adjustments in water ionization that allow plasma electroneutrality while maintaining constant the ionic product for water. The accumulation of plasma anions out of proportion of cations induces an electrical imbalance compensated by a fall of hydroxide ions that brings about a rise in hydrogen ions (acidosis). By contrast, the deficiency of chloride relative to sodium generates plasma alkalosis by increasing hydroxide ions. The adjustment of plasma bicarbonate concentration to these changes is an important compensatory mechanism that protects plasma pH from severe deviations. PMID:24877130
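The abstract's core argument, that [H⁺] follows from electroneutrality plus the constancy of the ionic product for water, can be made concrete for a simplified solution of fully dissociated strong ions. The sketch below is an illustrative reduction (it omits plasma buffers such as bicarbonate and albumin): with SID the strong cations minus strong anions, electroneutrality gives SID + [H⁺] − [OH⁻] = 0, and eliminating [OH⁻] via Kw = [H⁺][OH⁻] yields a quadratic in [H⁺].

```python
import math

# Simplified strong-ion sketch of the abstract's reasoning (buffers omitted):
# electroneutrality: SID + [H+] - [OH-] = 0, with SID = strong cations - anions
# ionic product:     Kw = [H+] * [OH-]
# => [H+]^2 + SID*[H+] - Kw = 0, positive root taken below.

KW = 1.0e-14  # ionic product for water at 25 C, (mol/L)^2

def hydrogen_ion(sid_mol_per_L):
    return (-sid_mol_per_L + math.sqrt(sid_mol_per_L**2 + 4.0 * KW)) / 2.0

print(-math.log10(hydrogen_ion(0.0)))      # pH 7.0: balanced ions, neutral
print(-math.log10(hydrogen_ion(-1.0e-4)))  # anion excess -> H+ rises (acidosis)
print(-math.log10(hydrogen_ion(1.0e-4)))   # cation excess -> H+ falls (alkalosis)
```

This reproduces the abstract's qualitative claims: an anion excess forces hydroxide down and hydrogen ions up, a relative chloride deficit does the opposite, and in real plasma the bicarbonate system buffers these excursions.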
20. Varying Constants, Gravitation and Cosmology
Uzan, Jean-Philippe
2011-12-01
Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This will induce a violation of the universality of free fall. Thus, it is of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We detail the relations between the constants, the tests of the local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence with respect to the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying and we focus on the unification mechanisms and the relations between the variation of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with.
1. Equilibrium and kinetics in metamorphism
Pattison, D. R.
2012-12-01
The equilibrium model for metamorphism is founded on the metamorphic facies principle, the repeated association of the same mineral assemblages in rocks of different bulk composition that have been metamorphosed together. Yet, for any metamorphic process to occur, there must be some degree of reaction overstepping (disequilibrium) to initiate reaction. The magnitude and variability of overstepping, and the degree to which it is either a relatively minor wrinkle or a more substantive challenge to the interpretation of metamorphic rocks using the equilibrium model, is an active area of current research. Kinetic barriers to reaction generally diminish with rising temperature due to the Arrhenius relation. In contrast, the rate of build-up of the macroscopic energetic driving force needed to overcome kinetic barriers to reaction, reaction affinity, does not vary uniformly with temperature, instead varying from reaction to reaction. High-entropy reactions that release large quantities of H2O build up reaction affinity more rapidly than low-entropy reactions that release little or no H2O, such that the former are expected to be overstepped less than the latter. Some consequences include: (1) metamorphic reaction intervals may be discrete rather than continuous, initiating at the point that sufficient reaction affinity has built up to overcome kinetic barriers; (2) metamorphic reaction intervals may not correspond in a simple way to reaction boundaries in an equilibrium phase diagram; (3) metamorphic reactions may involve metastable reactions; (4) metamorphic 'cascades' are possible, in which stable and metastable reactions involving the same reactant phases may proceed simultaneously; and (5) fluid generation, and possibly fluid presence in general, may be episodic rather than continuous, corresponding to discrete intervals of reaction. These considerations bear on the interpretation of P-T-t paths from metamorphic mineral assemblages and textures. The success of the
2. Sorption: Equilibrium partitioning and QSAR development using molecular predictors
SciTech Connect
Means, J.C.
1994-12-31
Sorption of chemical contaminants to sediments and soils has long been a subject of intensive investigation and QSAR development. Progress in the development of organic carbon-normalized equilibrium partition constants (Koc) has greatly advanced the prediction of environmental fate. Integration of observed experimental results with thermodynamic modeling of compound behavior, based upon concepts of phase activities and fugacity, has placed these QSARs on a firm theoretical base. An increasing spectrum of compound properties such as solubility, chemical activity, molecular surface area and other molecular topological indices have been evaluated for their utility as predictors of sorption properties. Questions concerning the effects of nonequilibrium states, hysteresis or irreversibility in desorption kinetics and equilibria, and particle-concentration effects upon equilibrium constants as they affect fate predictions remain areas of contemporary investigation. These phenomena are considered and reviewed. Modifying factors such as salinity or the presence of co-solvents may alter the predicted fate of a compound. Competitive sorption with mobile microparticulate or colloidal phases may also impact QSAR predictions. Research on the role of both inorganic and organic-rich colloidal phases as a modifying influence on soil/sediment equilibrium partitioning theory is summarized.
3. Synthesis of crystalline americium hydroxide, Am(OH)₃, and determination of its enthalpy of formation; estimation of the solubility-product constants of actinide(III) hydroxides
SciTech Connect
1993-12-31
This paper reports a new synthesis of pure, microcrystalline Am(OH)₃, its characterization by x-ray powder diffraction and infrared spectroscopy, and the calorimetric determination of its enthalpy of solution in dilute hydrochloric acid. From the enthalpy of solution the enthalpy of formation of Am(OH)₃ has been calculated to be −1371.2 ± 7.9 kJ·mol⁻¹, which represents the first experimental determination of an enthalpy of formation of any actinide hydroxide. The free energy of formation and solubility product constant of Am(OH)₃ (Ksp = 7 × 10⁻³¹) have been calculated from our enthalpy of formation and entropy estimates and are compared with literature measurements under near-equilibrium conditions. Since many properties of the tripositive lanthanide and actinide ions (e.g., hydrolysis, complex-ion formation, and thermochemistry) change in a regular manner, these properties can be interpreted systematically in terms of ionic size. This paper compares the thermochemistry of Am(OH)₃ with thermochemical studies of lanthanide hydroxides. A combined structural and acid-base model is used to explain the systematic differences in enthalpies of solution between the oxides and hydroxides of the 4fⁿ and 5fⁿ subgroups and to predict solubility-product constants for the actinide(III) hydroxides of Pu through Cf.
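The reported Ksp translates directly into a molar solubility. The sketch below deliberately ignores hydrolysis, complexation, and activity corrections, so it is an idealized illustration rather than a reproduction of the paper's analysis:

```python
# Dissolution: Am(OH)3(s) <-> Am3+ + 3 OH-
# With molar solubility s: Ksp = [Am3+][OH-]^3 = s * (3s)^3 = 27 * s^4
Ksp = 7e-31  # solubility product reported in the abstract

s = (Ksp / 27) ** 0.25  # molar solubility, mol/L
print(f"molar solubility = {s:.2e} mol/L")  # on the order of 1e-8 mol/L
```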
4. Synthesis, structure and study of azo-hydrazone tautomeric equilibrium of 1,3-dimethyl-5-(arylazo)-6-amino-uracil derivatives
Debnath, Diptanu; Roy, Subhadip; Li, Bing-Han; Lin, Chia-Her; Misra, Tarun Kumar
2015-04-01
Azo dyes, 1,3-dimethyl-5-(arylazo)-6-aminouracil (aryl = -C6H5 (1), -p-CH3C6H4 (2), -p-ClC6H4 (3), -p-NO2C6H4 (4)), were prepared and characterized by UV-vis, FT-IR, 1H NMR and 13C NMR spectroscopic techniques and by single-crystal X-ray crystallographic analysis. The spectroscopic analysis shows that, of the tautomeric forms, the azo-enamine-keto (A) form predominates in the solid state, whereas in solution it is the hydrazone-imine-keto (B) form. The study also reveals that the hydrazone-imine-keto (B) form exists in an equilibrium mixture with its anionic form in various organic solvents. The solvatochromic and photophysical properties of the dyes in solvents with different hydrogen-bonding parameters were investigated. The dyes exhibit positive solvatochromism on moving from polar protic to polar aprotic solvents. They are fluorescence-active molecules and exhibit a high-intensity fluorescence peak in some solvents such as DMSO and DMF; it has been demonstrated that the anionic form of the hydrazone-imine tautomer is responsible for this peak. In addition, the acid-base equilibrium between the neutral and anionic forms of the hydrazone-imine tautomer in buffer solutions of varying pH was investigated, and the pKa values of the dyes were evaluated by UV-vis spectroscopic methods. The determined acid dissociation constant (pKa) values increase in the sequence 2 > 1 > 3 > 4.
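The pH-dependent equilibrium between the neutral dye and its anion follows the usual Henderson-Hasselbalch relation. A small sketch, using a hypothetical pKa since the abstract reports only the ordering of the values, not the numbers:

```python
def fraction_anionic(pH, pKa):
    """Fraction of the dye present as the anion at a given pH."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

pKa = 8.0  # hypothetical value, for illustration only
for pH in (6.0, 8.0, 10.0):
    print(f"pH {pH}: fraction anionic = {fraction_anionic(pH, pKa):.3f}")
```

At pH = pKa the two forms are equally populated, which is how pKa values are read off UV-vis titration curves.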
6. Oxygen affinity of haemoglobin and red cell acid-base status in patients with severe chronic obstructive lung disease.
PubMed
Huckauf, H; Schäfer, J H; Kollo, D
1976-01-01
The oxygen affinity of hemoglobin and the factors determining the position of the oxygen dissociation curve were investigated in twenty-five patients with severe chronic obstructive lung disease. Patients have been separated into three groups: group I showed a normal or mild decrease of PaO2, group II a moderate fall in arterial oxygen pressure, and group III a severe hypoxia with balanced acid-base equilibrium and hypercapnia. Blood hemoglobin exhibited a significant increase in all groups, indicating an improved oxygen transport. In most patients a leftward shifting of the oxygen dissociation curve occurred. It is discussed that the tendency to left shifting is based upon alkalosis inside the red cells, evidently demonstrated in all groups studied. 2,3-diphosphoglycerate showed no close relation to evaluated oxygen affinity of hemoglobin. The evidence for an increased oxygen affinity may reveal a further compensatory mechanism in oxygen transport in patients with pulmonary disorders. Additionally the alkalosis inside the cells may counterbalance too great a right shifting of oxygen dissociation curve in vivo when severe hypoxia and hypercapnia occur. PMID:13884
7. Constant fields and constant gradients in open ionic channels.
PubMed
Chen, D P; Barcilon, V; Eisenberg, R S
1992-05-01
Ions enter cells through pores in proteins that are holes in dielectrics. The energy of interaction between ion and charge induced on the dielectric is many kT, and so the dielectric properties of channel and pore are important. We describe ionic movement by (three-dimensional) Nernst-Planck equations (including flux and net charge). Potential is described by Poisson's equation in the pore and Laplace's equation in the channel wall, allowing induced but not permanent charge. Asymptotic expansions are constructed exploiting the long narrow shape of the pore and the relatively high dielectric constant of the pore's contents. The resulting one-dimensional equations can be integrated numerically; they can be analyzed when channels are short or long (compared with the Debye length). Traditional constant field equations are derived if the induced charge is small, e.g., if the channel is short or if the total concentration gradient is zero. A constant gradient of concentration is derived if the channel is long. Plots directly comparable to experiments are given of current vs voltage, reversal potential vs. concentration, and slope conductance vs. concentration. This dielectric theory can easily be tested: its parameters can be determined by traditional constant field measurements. The dielectric theory then predicts current-voltage relations quite different from constant field, usually more linear, when gradients of total concentration are imposed. Numerical analysis shows that the interaction of ion and channel can be described by a mean potential if, but only if, the induced charge is negligible, that is to say, the electric field is spatially constant.
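The "traditional constant field equations" recovered in the small-induced-charge limit are the Goldman-Hodgkin-Katz form. Below is a sketch of the GHK current equation for a single ion species; all parameter values are illustrative and not taken from the paper, and units are left arbitrary:

```python
import math

F = 96485.0  # Faraday constant, C/mol
R = 8.314    # gas constant, J/(mol K)
T = 293.0    # temperature, K

def ghk_current(V, P, z, c_in, c_out):
    """GHK current for one ion species under the constant-field assumption.

    V in volts; P, c_in, c_out in consistent (here arbitrary) units.
    """
    if abs(V) < 1e-9:                # well-defined limit as V -> 0
        return P * z * F * (c_in - c_out)
    u = z * F * V / (R * T)
    return P * z * F * u * (c_in - c_out * math.exp(-u)) / (1.0 - math.exp(-u))

# A K+-like gradient: the current reverses near the Nernst potential (~-84 mV)
for mV in (-80, -40, 0, 40):
    print(mV, "mV ->", ghk_current(mV / 1000.0, 1e-7, +1, 140.0, 5.0))
```

The current vanishes exactly at the Nernst potential, which is the standard sanity check for this formula.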
9. Torque equilibrium attitude control for Skylab reentry
NASA Technical Reports Server (NTRS)
Glaese, J. R.; Kennel, H. F.
1979-01-01
All the available torque equilibrium attitudes (most were useless owing to lack of electrical power) and the equilibrium seeking method are presented, as well as the actual successful application during the 3 weeks prior to Skylab reentry.
10. GEOMETRIC PROGRAMMING, CHEMICAL EQUILIBRIUM, AND THE ANTI-ENTROPY FUNCTION*
PubMed Central
Duffin, R. J.; Zener, C.
1969-01-01
The culmination of this paper is the following duality principle of thermodynamics: maximum S = minimum S*. (1) The left side of relation (1) is the classical characterization of equilibrium. It says to maximize the entropy function S with respect to extensive variables which are subject to certain constraints. The right side of (1) is a new characterization of equilibrium and concerns minimization of an anti-entropy function S* with respect to intensive variables. Relation (1) is applied to the chemical equilibrium of a mixture of gases at constant temperature and volume. Then (1) specializes to minimum F = maximum F*, (2) where F is the Helmholtz function for free energy and F* is an anti-Helmholtz function. The right-side of (2) is an unconstrained maximization problem and gives a simplified practical procedure for calculating equilibrium concentrations. We also give a direct proof of (2) by the duality theorem of geometric programming. The duality theorem of geometric programming states that minimum cost = maximum anti-cost. (30) PMID:16591769
11. Full characterization of GPCR monomer-dimer dynamic equilibrium by single molecule imaging.
PubMed
Kasai, Rinshi S; Suzuki, Kenichi G N; Prossnitz, Eric R; Koyama-Honda, Ikuko; Nakada, Chieko; Fujiwara, Takahiro K; Kusumi, Akihiro
2011-02-01
Receptor dimerization is important for many signaling pathways. However, the monomer-dimer equilibrium has never been fully characterized for any receptor with a 2D equilibrium constant as well as association/dissociation rate constants (termed super-quantification). Here, we determined the dynamic equilibrium for the N-formyl peptide receptor (FPR), a chemoattractant G protein-coupled receptor (GPCR), in live cells at 37°C by developing a single fluorescent-molecule imaging method. Both before and after liganding, the dimer-monomer 2D equilibrium is unchanged, giving an equilibrium constant of 3.6 copies/µm(2), with a dissociation and 2D association rate constant of 11.0 s(-1) and 3.1 copies/µm(2)s(-1), respectively. At physiological expression levels of ∼2.1 receptor copies/µm(2) (∼6,000 copies/cell), monomers continually convert into dimers every 150 ms, dimers dissociate into monomers in 91 ms, and at any moment, 2,500 and 3,500 receptor molecules participate in transient dimers and monomers, respectively. Not only do FPR dimers fall apart rapidly, but FPR monomers also convert into dimers very quickly.
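The reported kinetics are internally consistent, which is easy to verify: the 2D dissociation constant should equal k_off/k_on, and the mean dimer lifetime should equal 1/k_off. A quick check using only the numbers quoted in the abstract:

```python
# Rate constants reported in the abstract for the FPR monomer-dimer equilibrium
k_off = 11.0  # s^-1, dimer dissociation rate constant
k_on = 3.1    # copies/(um^2 s), 2D association rate constant (units as reported)

K_d = k_off / k_on                # ~3.5 copies/um^2, vs. 3.6 reported
dimer_lifetime_ms = 1e3 / k_off   # ~91 ms, matching the abstract

print(f"K_d = {K_d:.1f} copies/um^2, dimer lifetime = {dimer_lifetime_ms:.0f} ms")
```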
12. Influence of substituent on equilibrium of benzoxazine synthesis from Mannich base and formaldehyde.
PubMed
Deng, Yuyuan; Zhang, Qin; Zhou, Qianhao; Zhang, Chengxi; Zhu, Rongqi; Gu, Yi
2014-09-14
N-Substituted aminomethylphenol (Mannich base) and 3,4-dihydro-2H-3-substituted 1,3-benzoxazine (benzoxazine) were synthesized from substituted phenols (p-cresol, phenol, p-chlorophenol), substituted anilines (p-toluidine, aniline, p-chloroaniline) and formaldehyde to study the influence of the substituent on the equilibrium of benzoxazine synthesis from Mannich base and formaldehyde. (1)H-NMR and the charges of the nitrogen and oxygen atoms illustrate the effect of the substituent on the reactivity of the Mannich base, while oxazine ring stability is characterized by differential scanning calorimetry (DSC) and C-O bond order. Equilibrium constants were measured from 50 °C to 80 °C, and the results show that a substituent attached to the phenol or the aniline has the same impact on the reactivity of the Mannich base; however, it has opposite influences on oxazine ring stability and the equilibrium constant. Compared with the phenol-aniline system, an electron-donating methyl group on the phenol or the aniline increases the charge of the nitrogen and oxygen atoms in the Mannich base. When the methyl group is located at the para position of the phenol, oxazine ring stability increases and the equilibrium constant climbs, whereas when the methyl group is located at the para position of the aniline, oxazine ring stability decreases, benzoxazine hydrolysis tends to occur, and the equilibrium constant is significantly lower.
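Equilibrium constants measured at two temperatures give a two-point van't Hoff estimate of the reaction enthalpy, ΔH ≈ R ln(K2/K1) / (1/T1 − 1/T2). A sketch with hypothetical K values, since the abstract reports the trend over 50-80 °C but not the numbers themselves:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def vant_hoff_dH(K1, T1, K2, T2):
    """Two-point van't Hoff estimate of the reaction enthalpy, J/mol."""
    return R * math.log(K2 / K1) / (1.0 / T1 - 1.0 / T2)

# Hypothetical equilibrium constants at 50 C and 80 C, for illustration only
dH = vant_hoff_dH(K1=0.8, T1=323.15, K2=2.0, T2=353.15)
print(f"dH = {dH / 1000:.1f} kJ/mol")  # positive: K grows with temperature
```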
13. Effective cosmological constant induced by stochastic fluctuations of Newton's constant
de Cesare, Marco; Lizzi, Fedele; Sakellariadou, Mairi
2016-09-01
We consider implications of the microscopic dynamics of spacetime for the evolution of cosmological models. We argue that quantum geometry effects may lead to stochastic fluctuations of the gravitational constant, which is thus considered as a macroscopic effective dynamical quantity. Consistency with Riemannian geometry entails the presence of a time-dependent dark energy term in the modified field equations, which can be expressed in terms of the dynamical gravitational constant. We suggest that the late-time accelerated expansion of the Universe may be ascribed to quantum fluctuations in the geometry of spacetime rather than the vacuum energy from the matter sector.
14. The influence of dissolved organic matter on the acid-base system of the Baltic Sea: A pilot study
Kulinski, Karol; Schneider, Bernd; Hammer, Karoline; Schulz-Bull, Detlef
2015-04-01
To assess the influence of dissolved organic matter (DOM) on the acid-base system of the Baltic Sea, 19 stations along the salinity gradient from Mecklenburg Bight to the Bothnian Bay were sampled in November 2011 for total alkalinity (AT), total inorganic carbon concentration (CT), partial pressure of CO2 (pCO2), and pH. Based on these data, an organic alkalinity contribution (Aorg) was determined, defined as the difference between measured AT and the inorganic alkalinity calculated from CT and pH and/or CT and pCO2. Aorg was in the range of 22-58 µmol kg⁻¹, corresponding to 1.5-3.5% of AT. The method to determine Aorg was validated in an experiment performed on DOM-enriched river water samples collected from the mouths of the Vistula and Oder Rivers in May 2012. The Aorg increase determined in that experiment correlated directly with the increase of DOC concentration caused by enrichment of the >1 kDa DOM fraction. To examine the effect of Aorg on calculations of the marine CO2 system, the pCO2 and pH values measured in Baltic Sea water were compared with calculated values that were based on the measured alkalinity and another variable of the CO2 system, but ignored the existence of Aorg. Large differences between measured and calculated pCO2 and pH were obtained when the computations were based on AT and CT. The calculated pCO2 was 27-56% lower than the measured values whereas the calculated pH was overestimated by more than 0.4 pH units. Since biogeochemical models are based on the transport and transformations of AT and CT, the acid-base properties of DOM should be included in calculations of the CO2 system in DOM-rich basins like the Baltic Sea. In view of our limited knowledge about the composition and acid/base properties of DOM, this is best achieved using a bulk dissociation constant, KDOM, that represents all weakly acidic functional groups present in DOM. Our preliminary results indicated that the bulk KDOM in the Baltic Sea is 2.94 × 10⁻⁸ mol kg⁻¹
15. Particle orbits in two-dimensional equilibrium models for the magnetotail
NASA Technical Reports Server (NTRS)
Karimabadi, H.; Pritchett, P. L.; Coroniti, F. V.
1990-01-01
Assuming that there exist an equilibrium state for the magnetotail, particle orbits are investigated in two-dimensional kinetic equilibrium models for the magnetotail. Particle orbits in the equilibrium field are compared with those calculated earlier with one-dimensional models, where the main component of the magnetic field (Bx) was approximated as either a hyperbolic tangent or a linear function of z with the normal field (Bz) assumed to be a constant. It was found that the particle orbits calculated with the two types of models are significantly different, mainly due to the neglect of the variation of Bx with x in the one-dimensional fields.
16. Resonant behaviour of MHD waves on magnetic flux tubes. III - Effect of equilibrium flow
NASA Technical Reports Server (NTRS)
Goossens, Marcel; Hollweg, Joseph V.; Sakurai, Takashi
1992-01-01
The Hollweg et al. (1990) analysis of MHD surface waves in a stationary equilibrium is extended. The conservation laws and jump conditions at Alfven and slow resonance points obtained by Sakurai et al. (1990) are generalized to include an equilibrium flow, and the assumption that the Eulerian perturbation of total pressure is constant is recovered as the special case of the conservation law for an equilibrium with straight magnetic field lines and flow along the magnetic field lines. It is shown that the conclusions formulated by Hollweg et al. are still valid for the straight cylindrical case. The effect of curvature is examined.
17. Accelerating Multiagent Reinforcement Learning by Equilibrium Transfer.
PubMed
Hu, Yujing; Gao, Yang; An, Bo
2015-07-01
An important approach in multiagent reinforcement learning (MARL) is equilibrium-based MARL, which adopts equilibrium solution concepts in game theory and requires agents to play equilibrium strategies at each state. However, most existing equilibrium-based MARL algorithms cannot scale due to a large number of computationally expensive equilibrium computations (e.g., computing Nash equilibria is PPAD-hard) during learning. For the first time, this paper finds that during the learning process of equilibrium-based MARL, the one-shot games corresponding to each state's successive visits often have the same or similar equilibria (for some states more than 90% of games corresponding to successive visits have similar equilibria). Inspired by this observation, this paper proposes to use equilibrium transfer to accelerate equilibrium-based MARL. The key idea of equilibrium transfer is to reuse previously computed equilibria when each agent has a small incentive to deviate. By introducing transfer loss and transfer condition, a novel framework called equilibrium transfer-based MARL is proposed. We prove that although equilibrium transfer brings transfer loss, equilibrium-based MARL algorithms can still converge to an equilibrium policy under certain assumptions. Experimental results in widely used benchmarks (e.g., grid world game, soccer game, and wall game) show that the proposed framework: 1) not only significantly accelerates equilibrium-based MARL (up to 96.7% reduction in learning time), but also achieves higher average rewards than algorithms without equilibrium transfer and 2) scales significantly better than algorithms without equilibrium transfer when the state/action space grows and the number of agents increases.
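The core reuse test in equilibrium transfer, keeping a previously computed joint strategy as long as no agent can gain more than some ε by deviating, can be sketched for a two-player matrix game. This is a simplified illustration with names of our own choosing; the paper's transfer-loss and transfer-condition definitions are more involved:

```python
def is_eps_equilibrium(payoffs, profile, eps):
    """Return True if no agent gains more than eps by a unilateral deviation.

    payoffs[i][a0][a1] is agent i's payoff under joint action (a0, a1);
    profile is a pure joint action (a0, a1). Two agents, for simplicity.
    """
    n_actions = len(payoffs[0])
    a0, a1 = profile
    best0 = max(payoffs[0][d][a1] for d in range(n_actions))  # agent 0 deviates
    best1 = max(payoffs[1][a0][d] for d in range(n_actions))  # agent 1 deviates
    return (best0 - payoffs[0][a0][a1] <= eps and
            best1 - payoffs[1][a0][a1] <= eps)

# A coordination game: both agents get 2 at (0, 0) and 1 at (1, 1).
coord = [[[2, 0], [0, 1]],   # agent 0's payoffs
         [[2, 0], [0, 1]]]   # agent 1's payoffs
print(is_eps_equilibrium(coord, (0, 0), eps=0.0))  # True: reuse the equilibrium
print(is_eps_equilibrium(coord, (0, 1), eps=0.0))  # False: recompute
```

When successive visits to a state yield similar stage games, this cheap check replaces an expensive equilibrium computation, which is the speedup the paper reports.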
18. Clinical assessment of acid-base status. Strong ion difference theory.
PubMed
Constable, P D
1999-11-01
The traditional approach to evaluating acid-base balance uses the Henderson-Hasselbalch equation to categorize four primary acid-base disturbances: respiratory acidosis (increased PCO2), respiratory alkalosis (decreased PCO2), metabolic acidosis (decreased extracellular base excess), or metabolic alkalosis (increased extracellular base excess). The anion gap is calculated to detect the presence of unidentified anions in plasma. This approach works well clinically and is recommended for use whenever serum total protein, albumin, and phosphate concentrations are approximately normal; however, when their concentrations are markedly abnormal, the Henderson-Hasselbalch equation frequently provides erroneous conclusions as to the cause of an acid-base disturbance. Moreover, the Henderson-Hasselbalch approach is more descriptive than mechanistic. The new approach to evaluating acid-base balance uses the simplified strong ion model to categorize eight primary acid-base disturbances: respiratory acidosis (increased PCO2), respiratory alkalosis (decreased PCO2), strong ion acidosis (decreased [SID+]) or strong ion alkalosis (increased [SID+]), nonvolatile buffer ion acidosis (increased [ATOT]) or nonvolatile buffer ion alkalosis (decreased [ATOT]), and temperature acidosis (increased body temperature) or temperature alkalosis (decreased body temperature). The strong ion gap is calculated to detect the presence of unidentified anions in plasma. This simplified strong ion approach works well clinically and is recommended for use whenever serum total protein, albumin, and phosphate concentrations are markedly abnormal. The simplified strong ion approach is mechanistic and is therefore well suited for describing the cause of any acid-base disturbance. The new approach should therefore be valuable in a clinical setting and in research studies investigating acid-base balance. The presence of unmeasured strong ions in plasma or serum (such as lactate, ketoacids, and uremic anions
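The eight-category scheme lends itself to a simple rule-based sketch. The reference ranges below are illustrative placeholders chosen for the example, not values taken from the article:

```python
def classify(pco2, sid, a_tot, temp,
             pco2_ref=(35.0, 45.0), sid_ref=(38.0, 42.0),
             a_tot_ref=(15.0, 19.0), temp_ref=(36.5, 37.5)):
    """List primary disturbances under the simplified strong ion model.

    All reference ranges are illustrative placeholders.
    """
    findings = []
    if pco2 > pco2_ref[1]: findings.append("respiratory acidosis")
    if pco2 < pco2_ref[0]: findings.append("respiratory alkalosis")
    if sid < sid_ref[0]:   findings.append("strong ion acidosis")
    if sid > sid_ref[1]:   findings.append("strong ion alkalosis")
    if a_tot > a_tot_ref[1]: findings.append("nonvolatile buffer ion acidosis")
    if a_tot < a_tot_ref[0]: findings.append("nonvolatile buffer ion alkalosis")
    if temp > temp_ref[1]: findings.append("temperature acidosis")
    if temp < temp_ref[0]: findings.append("temperature alkalosis")
    return findings or ["no primary disturbance"]

print(classify(pco2=50.0, sid=35.0, a_tot=17.0, temp=37.0))
```

A real implementation would work from measured serum concentrations and handle mixed and compensated disturbances; the point here is only the branching structure of the eight categories.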
19. Out-of-equilibrium relaxation of the thermal Casimir effect in a model polarizable material.
PubMed
Dean, David S; Démery, Vincent; Parsegian, V Adrian; Podgornik, Rudolf
2012-03-01
Relaxation of the thermal Casimir or van der Waals force (the high temperature limit of the Casimir force) for a model dielectric medium is investigated. We start with a model of interacting polarization fields with a dynamics that leads to a frequency dependent dielectric constant of the Debye form. In the static limit, the usual zero frequency Matsubara mode component of the Casimir force is recovered. We then consider the out-of-equilibrium relaxation of the van der Waals force to its equilibrium value when two initially uncorrelated dielectric bodies are brought into sudden proximity. For the interaction between dielectric slabs, it is found that the spatial dependence of the out-of-equilibrium force is the same as the equilibrium one, but it has a time dependent amplitude, or Hamaker coefficient, which increases in time to its equilibrium value. The final relaxation of the force to its equilibrium value is exponential in systems with a single or finite number of polarization field relaxation times. However, in systems, such as those described by the Havriliak-Negami dielectric constant with a broad distribution of relaxation times, we observe a much slower power law decay to the equilibrium value.
20. Optical constants of solid methane
NASA Technical Reports Server (NTRS)
Khare, Bishun N.; Thompson, W. R.; Sagan, C.; Arakawa, E. T.; Bruel, C.; Judish, J. P.; Khanna, R. K.; Pollack, J. B.
1989-01-01
Methane is the most abundant simple organic molecule in the outer solar system bodies. In addition to being a gaseous constituent of the atmospheres of the Jovian planets and Titan, it is present in the solid form as a constituent of icy surfaces such as those of Triton and Pluto, and as cloud condensate in the atmospheres of Titan, Uranus, and Neptune. It is expected in the liquid form as a constituent of the ocean of Titan. Cometary ices also contain solid methane. The optical constants for both solid and liquid phases of CH4 for a wide temperature range are needed for radiative transfer calculations, for studies of reflection from surfaces, and for modeling of emission in the far infrared and microwave regions. The astronomically important visual to near infrared measurements of solid methane optical constants are conspicuously absent from the literature. Preliminary results are presented of the optical constants of solid methane for the 0.4 to 2.6 micron region. K is reported for both the amorphous and the crystalline (annealed) states. Using the previously measured values of the real part of the refractive index, n, of liquid methane at 110 K n is computed for solid methane using the Lorentz-Lorentz relationship. Work is in progress to extend the measurements of optical constants n and k for liquid and solid to both shorter and longer wavelengths, eventually providing a complete optical constants database for condensed CH4. 
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8109467029571533, "perplexity": 4250.613256849287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612537.91/warc/CC-MAIN-20170529184559-20170529204559-00385.warc.gz"} |
http://crypto.stackexchange.com/questions/8437/what-is-the-fastest-elliptic-curve-operation-fp-in-affine-coordinates-such-tha?answertab=oldest | # What is the fastest elliptic curve operation f(P) in affine coordinates such that f^n(P)=P only if n is large?
I'm working with the affine representations of points of the Secp256k1 elliptic curve (from Bitcoin).
I've read many papers that show that computing some functions, like $f(P)=3P$, can be done faster than the standard way. Other papers say that with some pre-computation, the field inversion can be amortized if $F^1(P) \ldots F^k(P)$ must be computed.
I need the fastest function $F(P)$ that, when applied iteratively to the last result, generates a sequence of points whose average period is large (I don't need any proof; it can just be large in practice). To be fast, I suppose it should be computed without field inversions. I don't mind pre-computing some values.
For example, it could be $F(P) = 1.5P+4Q$ for a fixed $Q$. It doesn't matter which function it is, because I need it to generate random points on the curve. The probability distribution doesn't matter either. (Notation: $1.5P$ means the point halving of $3P$.)
Motivation: Solutions to this problem may be helpful for generating vanity addresses.
The standard way to generate random points is to select a random value for X, check to see if there's a solution for the elliptic curve equation with that value, and if these is, pick one of the two possible values for Y. Or, do you need random values with known relationships, or for which you can compute output number N+1 given output number N? – poncho May 23 '13 at 16:03
Yes, I need a way to track a point back to source points P1, P2, Pn (with a known relation) and that's why I had though about a linear function F on the previous points. – Richard May 23 '13 at 18:07
I bet this is a Bitcoin-related question, in which case although people say it uses a Koblitz curve it is in fact not one. I think I have a good candidate solution for your problem but it works for a composite modulus and only if the group order is kept secret. If that's useful then let me know. If you're working in affine coordinates and you want to generate new points without inversions then you're probably limited to the Frobenius endomorphism. – Barack Obama May 24 '13 at 0:04
Can you describe what problem you actually want to solve? – CodesInChaos May 24 '13 at 5:47
I have some experience with Bitcoin's curve and I'm very confident that you will be unable to avoid an inversion for your problem as stated. I'm also quite confident that the restrictions you have specified above are more restrictive than are really necessary. Perhaps you can let us know whether you are a) trying to break the curve, b) generate vanity addresses, c) implementing some deterministic wallet scheme or d) implementing transactions which third parties can't link to an address. – Barack Obama May 24 '13 at 23:20
With your curve, you can use the Gallant-Lambert-Vanstone (GLV) method to answer your question. Indeed, the equation of your curve is: $$y^2=x^3+7.$$ Since $p$ is congruent to $1$ modulo $3$, there are cube roots of unity modulo $p$. Let: $$j=55594575648329892869085402983802832744385952214688224221778511981742606582254 \pmod{p}.$$ You can check that $j^3\equiv 1\pmod{p}$. The complex multiplication by $j$ sends $P=(X_P,Y_P)$ to $P'=(jX_P,Y_P)$.
Moreover, $P'=J\cdot P,$ where $$J=37718080363155996902926221483475020450927657555482586988616620542887997980018.$$
Finally, multiplication by $J-1$ can be performed efficiently (one application of complex multiplication and one addition) and has high order. Don't use $J+1$: it has order $6$.
EDIT $J^3$ is $1$ modulo the order of the curve, while $j^3$ is $1$ mod $p$.
This endomorphism of the curve is the projection of the complex multiplication of the curve $y^2=x^3+7$ over the rationals to the curve reduced mod $p$. This is why it is usually called the complex multiplication.
All in all, this gives a reasonably fast way to generate random looking multiples of $P$.
The full GLV method is much more than that since it speeds up multiplication by an arbitrary constant compared to regular double and add, but its basic idea relies on having an endomorphism that can be computed quickly.
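As a quick numerical sanity check of these constants, the sketch below verifies that both are nontrivial cube roots of unity. Note that `p` and `n` here are the standard secp256k1 field prime and group order — assumptions on my part, since the answer does not state them explicitly:

```python
p = 2**256 - 2**32 - 977   # secp256k1 field prime (standard parameter)
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
j = 55594575648329892869085402983802832744385952214688224221778511981742606582254
J = 37718080363155996902926221483475020450927657555482586988616620542887997980018

# The endomorphism (x, y) -> (j*x, y) corresponds to scalar multiplication by J,
# so j must be a cube root of unity mod p and J a cube root of unity mod n.
j_is_cube_root = pow(j, 3, p) == 1 and j != 1
J_is_cube_root = pow(J, 3, n) == 1 and J != 1
```

This is consistent with the edit below: $J^3 \equiv 1$ modulo the group order, while $j^3 \equiv 1$ modulo $p$.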
This is not the GLV method - it's just using an efficiently computable endomorphism which is well known. Also, the solution described does not avoid the inversion required to produce an affine point. Finally, I'm not sure what's so "complex" about the multiplication by j! – Barack Obama Jul 11 '13 at 0:00
Depending on what you actually want to do, it might be possible to speed this up using a batch inversion, instead of inverting each denominator individually.
1. Use some form of extended coordinates
2. Compute a few hundred new points in extended coordinates, with known relation to the original point.
3. Multiply all denominators together and invert it.
4. Use multiplications of the combined denominator with the existing denominators to compute the individual denominators.
AFAIK steps 3+4 have a cost of 3 field multiplications per point, which is much cheaper than the roughly 200 multiplications required for an inversion.
One way to implement steps 2 and 3 is:
Given the denominators $z_1 ... z_n$:
• Define $r_i = \Pi_{j=i+1}^n z_j$; compute it iteratively as $r_n=1$, $r_i=r_{i+1} \cdot z_{i+1}$ for $i=n-1, \ldots, 0$ and store it in an array.
• Compute $r_0^{-1}$ using a field inversion.
• Define $l_i = r_0^{-1} \cdot \Pi_{j=1}^{i-1} z_j$ and compute it iteratively as $l_1 = r_0^{-1}$ and then $l_i = l_{i-1}\cdot z_{i-1}$ for $i = 2, \ldots, n$.
• $z_i^{-1}=l_i \cdot r_i$. Given $z_i^{-1}$, the affine coordinate can be obtained by multiplying it with the numerator.
Why does this work?
$z_i^{-1} =\\ = (\Pi_{j=1}^n z_j) \cdot (\Pi_{j=1}^n z_j)^{-1} \cdot z_i^{-1} \\ = (\Pi_{j=1}^{i-1} z_j \cdot z_i \cdot \Pi_{j=i+1}^n z_j)\cdot (\Pi_{j=1}^n z_j)^{-1} \cdot z_i^{-1}\\ = ((\Pi_{j=1}^n z_j)^{-1}\cdot\Pi_{j=1}^{i-1} z_j) \cdot \Pi_{j=i+1}^n z_j\\ = l_i \cdot r_i$
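This scheme is often called Montgomery's batch-inversion trick. A minimal Python sketch over a generic prime field follows; the single inversion is done with Fermat's little theorem, and the particular prime used in the test is illustrative:

```python
def batch_invert(zs, p):
    """Invert every (nonzero) element of zs modulo prime p using
    a single field inversion (Montgomery's batch-inversion trick)."""
    n = len(zs)
    # prefix[i] = z_0 * z_1 * ... * z_{i-1} mod p  (prefix[0] = 1)
    prefix = [1] * (n + 1)
    for i, z in enumerate(zs):
        prefix[i + 1] = prefix[i] * z % p
    # One inversion of the combined product (Fermat's little theorem).
    inv = pow(prefix[n], p - 2, p)
    invs = [0] * n
    # Walk backwards, peeling off one factor at a time.
    for i in range(n - 1, -1, -1):
        invs[i] = prefix[i] * inv % p   # = (z_0...z_{i-1}) * (z_0...z_i)^{-1} * ...
        inv = inv * zs[i] % p           # drop z_i from the running inverse
    return invs
```

Each element costs three multiplications beyond the shared inversion, matching the cost estimate above.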
http://support.sas.com/documentation/cdl/en/statug/67523/HTML/default/statug_surveylogistic_details06.htm | # The SURVEYLOGISTIC Procedure
### Model Fitting
Subsections:
#### Determining Observations for Likelihood Contributions
If you use the events/trials syntax, each observation is split into two observations. One has the response value 1 with a frequency equal to the value of the events variable. The other observation has the response value 2 and a frequency equal to the value of (trials – events). These two observations have the same explanatory variable values and the same WEIGHT values as the original observation.
For either the single-trial or the events/trials syntax, let j index all observations. In other words, for the single-trial syntax, j indexes the actual observations. And, for the events/trials syntax, j indexes the observations after splitting (as described previously). If your data set has 30 observations and you use the single-trial syntax, j has values from 1 to 30; if you use the events/trials syntax, j has values from 1 to 60.
Suppose the response variable in a cumulative response model can take on the ordered values $1, \ldots, k+1$, where $k$ is an integer $\ge 1$. The likelihood for the jth observation with ordered response value $y_j$ and explanatory variables row vector $\mathbf{x}_j$ is given by
$L_j = \begin{cases} F(\alpha_1 + \mathbf{x}_j \boldsymbol{\beta}) & y_j = 1 \\ F(\alpha_i + \mathbf{x}_j \boldsymbol{\beta}) - F(\alpha_{i-1} + \mathbf{x}_j \boldsymbol{\beta}) & 1 < y_j = i \le k \\ 1 - F(\alpha_k + \mathbf{x}_j \boldsymbol{\beta}) & y_j = k+1 \end{cases}$
where $F(\cdot)$ is the logistic, normal, or extreme-value distribution function; $\alpha_1 < \cdots < \alpha_k$ are ordered intercept parameters; and $\boldsymbol{\beta}$ is the slope parameter vector.
For the generalized logit model, letting the $(k+1)$st level be the reference level, the intercepts $\alpha_1, \ldots, \alpha_k$ are unordered and the slope vector $\boldsymbol{\beta}_i$ varies with each logit. The likelihood for the jth observation with response value $y_j$ and explanatory variables row vector $\mathbf{x}_j$ is given by
$L_j = \Pr(Y = y_j) = \begin{cases} \dfrac{e^{\alpha_i + \mathbf{x}_j \boldsymbol{\beta}_i}}{1 + \sum_{m=1}^{k} e^{\alpha_m + \mathbf{x}_j \boldsymbol{\beta}_m}} & y_j = i \le k \\ \dfrac{1}{1 + \sum_{m=1}^{k} e^{\alpha_m + \mathbf{x}_j \boldsymbol{\beta}_m}} & y_j = k+1 \end{cases}$
#### Iterative Algorithms for Model Fitting
Two iterative maximum likelihood algorithms are available in PROC SURVEYLOGISTIC to obtain the pseudo-estimate $\hat{\boldsymbol{\theta}}$ of the model parameter $\boldsymbol{\theta}$. The default is the Fisher scoring method, which is equivalent to fitting by iteratively reweighted least squares. The alternative algorithm is the Newton-Raphson method. Both algorithms give the same parameter estimates; the covariance matrix of $\hat{\boldsymbol{\theta}}$ is estimated in the section Variance Estimation. For a generalized logit model, only the Newton-Raphson technique is available. You can use the TECHNIQUE= option in the MODEL statement to select a fitting algorithm.
##### Iteratively Reweighted Least Squares Algorithm (Fisher Scoring)
Let Y be the response variable that takes values $1, \ldots, k+1$. Let $j$ index all observations and $Y_j$ be the value of the response for the jth observation. Consider the multinomial variable $\mathbf{Z}_j = (Z_{1j}, \ldots, Z_{kj})'$ such that
$Z_{ij} = 1$ if $Y_j = i$, and $Z_{ij} = 0$ otherwise,
and $Z_{(k+1)j} = 1 - \sum_{i=1}^{k} Z_{ij}$. With $\pi_{ij}$ denoting the probability that the jth observation has response value i, the expected value of $\mathbf{Z}_j$ is $\boldsymbol{\pi}_j = (\pi_{1j}, \ldots, \pi_{kj})'$, and $\pi_{(k+1)j} = 1 - \sum_{i=1}^{k} \pi_{ij}$. The covariance matrix of $\mathbf{Z}_j$ is $\mathbf{V}_j = \operatorname{diag}(\boldsymbol{\pi}_j) - \boldsymbol{\pi}_j \boldsymbol{\pi}_j'$, which is the covariance matrix of a multinomial random variable for one trial with parameter vector $\boldsymbol{\pi}_j$. Let $\boldsymbol{\theta}$ be the vector of regression parameters—for example, $\boldsymbol{\theta} = (\alpha_1, \ldots, \alpha_k, \boldsymbol{\beta}')'$ for the cumulative logit model. Let $\mathbf{D}_j$ be the matrix of partial derivatives of $\boldsymbol{\pi}_j$ with respect to $\boldsymbol{\theta}$. The estimating equation for the regression parameters is
$\sum_j \mathbf{D}_j' \mathbf{W}_j (\mathbf{Z}_j - \boldsymbol{\pi}_j) = \mathbf{0}$
where $\mathbf{W}_j = w_j f_j \mathbf{V}_j^{-1}$, and $w_j$ and $f_j$ are the WEIGHT and FREQ values of the jth observation.
With a starting value of $\boldsymbol{\theta}^{(0)}$, the pseudo-estimate of $\boldsymbol{\theta}$ is obtained iteratively as
$\boldsymbol{\theta}^{(i+1)} = \boldsymbol{\theta}^{(i)} + \left( \sum_j \mathbf{D}_j' \mathbf{W}_j \mathbf{D}_j \right)^{-1} \sum_j \mathbf{D}_j' \mathbf{W}_j (\mathbf{Z}_j - \boldsymbol{\pi}_j)$
where $\mathbf{D}_j$, $\mathbf{W}_j$, and $\boldsymbol{\pi}_j$ are evaluated at the ith iteration $\boldsymbol{\theta}^{(i)}$. The expression after the plus sign is the step size. If the log likelihood evaluated at $\boldsymbol{\theta}^{(i+1)}$ is less than that evaluated at $\boldsymbol{\theta}^{(i)}$, then $\boldsymbol{\theta}^{(i+1)}$ is recomputed by step-halving or ridging. The iterative scheme continues until convergence is obtained—that is, until $\boldsymbol{\theta}^{(i+1)}$ is sufficiently close to $\boldsymbol{\theta}^{(i)}$. Then the maximum likelihood estimate of $\boldsymbol{\theta}$ is $\hat{\boldsymbol{\theta}} = \boldsymbol{\theta}^{(i+1)}$.
By default, starting values are zero for the slope parameters, and starting values are the observed cumulative logits (that is, logits of the observed cumulative proportions of response) for the intercept parameters. Alternatively, the starting values can be specified with the INEST= option in the PROC SURVEYLOGISTIC statement.
##### Newton-Raphson Algorithm
Let
$\mathbf{g}(\boldsymbol{\theta}) = \sum_j w_j f_j \frac{\partial l_j}{\partial \boldsymbol{\theta}}, \qquad \mathbf{H}(\boldsymbol{\theta}) = -\sum_j w_j f_j \frac{\partial^2 l_j}{\partial \boldsymbol{\theta}\,\partial \boldsymbol{\theta}'}$
be the gradient vector and the Hessian matrix, where $l_j = \log L_j$ is the log likelihood for the jth observation. With a starting value of $\boldsymbol{\theta}^{(0)}$, the pseudo-estimate of $\boldsymbol{\theta}$ is obtained iteratively until convergence is obtained:
$\boldsymbol{\theta}^{(i+1)} = \boldsymbol{\theta}^{(i)} + \mathbf{H}^{-1}(\boldsymbol{\theta}^{(i)})\,\mathbf{g}(\boldsymbol{\theta}^{(i)})$
where $\mathbf{H}$ and $\mathbf{g}$ are evaluated at the ith iteration $\boldsymbol{\theta}^{(i)}$. If the log likelihood evaluated at $\boldsymbol{\theta}^{(i+1)}$ is less than that evaluated at $\boldsymbol{\theta}^{(i)}$, then $\boldsymbol{\theta}^{(i+1)}$ is recomputed by step-halving or ridging. The iterative scheme continues until convergence is obtained—that is, until $\boldsymbol{\theta}^{(i+1)}$ is sufficiently close to $\boldsymbol{\theta}^{(i)}$. Then the maximum likelihood estimate of $\boldsymbol{\theta}$ is $\hat{\boldsymbol{\theta}} = \boldsymbol{\theta}^{(i+1)}$.
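For intuition only, here is a sketch of the Newton-Raphson iteration for an unweighted binary logistic model in NumPy. This is not the SURVEYLOGISTIC implementation — it omits survey weights, frequencies, step-halving, and ridging — but it shows the shape of the update:

```python
import numpy as np

def logistic_newton(X, y, tol=1e-8, max_iter=50):
    """Newton-Raphson for an unweighted binary logistic model.
    X: (n, d) feature matrix; y: (n,) array of 0/1 responses."""
    Xd = np.column_stack([np.ones(len(y)), X])      # prepend intercept column
    beta = np.zeros(Xd.shape[1])                    # zero starting values, as in the text
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))        # fitted probabilities
        g = Xd.T @ (y - p)                          # gradient of the log likelihood
        H = Xd.T @ (Xd * (p * (1.0 - p))[:, None])  # negative Hessian
        step = np.linalg.solve(H, g)                # Newton step H^{-1} g
        beta += step
        if np.max(np.abs(step)) < tol:              # XCONV-style stopping rule
            break
    return beta
```

For the logit link the Fisher information equals the negative Hessian, so in this binary case Fisher scoring and Newton-Raphson produce identical iterates.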
#### Convergence Criteria
Four convergence criteria are allowed: ABSFCONV=, FCONV=, GCONV=, and XCONV=. If you specify more than one convergence criterion, the optimization is terminated as soon as one of the criteria is satisfied. If none of the criteria is specified, the default is GCONV=1E–8.
#### Existence of Maximum Likelihood Estimates
The likelihood equation for a logistic regression model does not always have a finite solution. Sometimes there is a nonunique maximum on the boundary of the parameter space, at infinity. The existence, finiteness, and uniqueness of pseudo-estimates for the logistic regression model depend on the patterns of data points in the observation space (Albert and Anderson, 1984; Santner and Duffy, 1986).
Consider a binary response model. Let $y_i$ be the response of the ith subject, and let $\mathbf{x}_i$ be the row vector of explanatory variables (including the constant 1 associated with the intercept). There are three mutually exclusive and exhaustive types of data configurations: complete separation, quasi-complete separation, and overlap.
Complete separation
There is a complete separation of data points if there exists a vector $\mathbf{b}$ that correctly allocates all observations to their response groups; that is,
$\mathbf{x}_i \mathbf{b} > 0$ for all $i$ with $y_i = 1$, and $\mathbf{x}_i \mathbf{b} < 0$ for all $i$ with $y_i = 2$.
This configuration gives nonunique infinite estimates. If the iterative process of maximizing the likelihood function is allowed to continue, the log likelihood diminishes to zero, and the dispersion matrix becomes unbounded.
Quasi-complete separation
The data are not completely separable, but there is a vector $\mathbf{b}$ such that
$\mathbf{x}_i \mathbf{b} \ge 0$ for all $i$ with $y_i = 1$, and $\mathbf{x}_i \mathbf{b} \le 0$ for all $i$ with $y_i = 2$,
and equality holds for at least one subject in each response group. This configuration also yields nonunique infinite estimates. If the iterative process of maximizing the likelihood function is allowed to continue, the dispersion matrix becomes unbounded and the log likelihood diminishes to a nonzero constant.
Overlap
If neither complete nor quasi-complete separation exists in the sample points, there is an overlap of sample points. In this configuration, the pseudo-estimates exist and are unique.
Complete separation and quasi-complete separation are problems typically encountered with small data sets. Although complete separation can occur with any type of data, quasi-complete separation is not likely with truly continuous explanatory variables.
The SURVEYLOGISTIC procedure uses a simple empirical approach to recognize the data configurations that lead to infinite parameter estimates. The basis of this approach is that any convergence method of maximizing the log likelihood must yield a solution that gives complete separation, if such a solution exists. In maximizing the log likelihood, there is no checking for complete or quasi-complete separation if convergence is attained in eight or fewer iterations. Subsequent to the eighth iteration, the probability of the observed response is computed for each observation. If the probability of the observed response is one for all observations, there is a complete separation of data points and the iteration process is stopped. If the complete separation of data has not been determined and an observation is identified to have an extremely large probability (0.95) of the observed response, there are two possible situations. First, there is overlap in the data set, and the observation is an atypical observation of its own group. The iterative process, if allowed to continue, stops when a maximum is reached. Second, there is quasi-complete separation in the data set, and the asymptotic dispersion matrix is unbounded. If any of the diagonal elements of the dispersion matrix for the standardized observations vectors (all explanatory variables standardized to zero mean and unit variance) exceeds 5,000, quasi-complete separation is declared and the iterative process is stopped. If either complete separation or quasi-complete separation is detected, a warning message is displayed in the procedure output.
Checking for quasi-complete separation is less foolproof than checking for complete separation. The NOCHECK option in the MODEL statement turns off the process of checking for infinite parameter estimates. In cases of complete or quasi-complete separation, turning off the checking process typically results in the procedure failing to converge.
#### Model Fitting Statistics
Suppose the model contains s explanatory effects. For the jth observation, let $\hat{p}_j$ be the estimated probability of the observed response. The three criteria displayed by the SURVEYLOGISTIC procedure are calculated as follows:
• –2 log likelihood:
$-2 \log L = -2 \sum_j w_j f_j \log(\hat{p}_j)$
where $w_j$ and $f_j$ are the weight and frequency values, respectively, of the jth observation. For binary response models that use the events/trials syntax, this is equivalent to
$-2 \log L = -2 \sum_j w_j f_j \left[ r_j \log(\hat{p}_j) + (n_j - r_j) \log(1 - \hat{p}_j) \right]$
where $r_j$ is the number of events, $n_j$ is the number of trials, and $\hat{p}_j$ is the estimated event probability.
• Akaike information criterion:
$AIC = -2 \log L + 2p$
where p is the number of parameters in the model. For cumulative response models, $p = k + s$, where k is the total number of response levels minus one, and s is the number of explanatory effects. For the generalized logit model, $p = k(s+1)$.
• Schwarz criterion:
$SC = -2 \log L + p \log\left(\sum_j f_j\right)$
where p is the number of parameters in the model. For cumulative response models, $p = k + s$, where k is the total number of response levels minus one, and s is the number of explanatory effects. For the generalized logit model, $p = k(s+1)$.
The –2 log likelihood statistic has a chi-square distribution under the null hypothesis (that all the explanatory effects in the model are zero), and the procedure produces a p-value for this statistic. The AIC and SC statistics give two different ways of adjusting the –2 log likelihood statistic for the number of terms in the model and the number of observations used.
#### Generalized Coefficient of Determination
Cox and Snell (1989, pp. 208–209) propose the following generalization of the coefficient of determination to a more general linear model:
$R^2 = 1 - \left\{ \frac{L(\mathbf{0})}{L(\hat{\boldsymbol{\theta}})} \right\}^{\frac{2}{n}}$
where $L(\mathbf{0})$ is the likelihood of the intercept-only model, $L(\hat{\boldsymbol{\theta}})$ is the likelihood of the specified model, and n is the sample size. The quantity $R^2$ achieves a maximum of less than 1 for discrete models, where the maximum is given by
$R^2_{\max} = 1 - \{ L(\mathbf{0}) \}^{\frac{2}{n}}$
Nagelkerke (1991) proposes the following adjusted coefficient, which can achieve a maximum value of 1:
$\tilde{R}^2 = \frac{R^2}{R^2_{\max}}$
Properties and interpretation of $R^2$ and $\tilde{R}^2$ are provided in Nagelkerke (1991). In the "Testing Global Null Hypothesis: BETA=0" table, $R^2$ is labeled as "RSquare" and $\tilde{R}^2$ is labeled as "Max-rescaled RSquare." Use the RSQUARE option to request $R^2$ and $\tilde{R}^2$.
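Both coefficients follow directly from the two log likelihoods and the sample size; a small Python sketch (the function name is mine):

```python
import numpy as np

def cox_snell_nagelkerke(loglik_null, loglik_model, n):
    """Cox & Snell generalized R^2 and Nagelkerke's max-rescaled version,
    computed from the intercept-only and full-model log likelihoods."""
    # R^2 = 1 - (L0/L1)^(2/n), written in log form for numerical stability
    r2 = 1.0 - np.exp((2.0 / n) * (loglik_null - loglik_model))
    # Maximum attainable value: 1 - L0^(2/n)
    r2_max = 1.0 - np.exp((2.0 / n) * loglik_null)
    return r2, r2 / r2_max
```

Working in log likelihoods avoids underflow: the raw likelihoods of even moderately sized discrete models are far below floating-point range.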
#### INEST= Data Set
You can specify starting values for the iterative algorithm in the INEST= data set.
The INEST= data set contains one observation for each BY group. The INEST= data set must contain the intercept variables (named Intercept for binary response models and Intercept, Intercept2, Intercept3, and so forth, for ordinal response models) and all explanatory variables in the MODEL statement. If BY processing is used, the INEST= data set should also include the BY variables, and there must be one observation for each BY group. If the INEST= data set also contains the _TYPE_ variable, only observations with _TYPE_ value 'PARMS' are used as starting values. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9592205882072449, "perplexity": 853.2720503959132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187821088.8/warc/CC-MAIN-20171017110249-20171017130249-00894.warc.gz"} |
http://mrandrewandrade.com/blog/2015/10/22/modeling-thermistor-using-data-science.html | In the last blog post, I talked about voltage dividers and how they can be used to limit voltage to eventually connect to an ADC to measure voltage on a BeagleBone. Today I am going to build on the voltage divider and pair hardware engineering with data science. The next couple of posts will be pieces of battery testing rig, and then I will put it all togeather to explain how the full system works. First things first, what temperature sensor should we use?
## Selecting a Thermal Sensor
Half due to availability, and the other half due to sheer laziness (of not wanting to buy sensors), I decided we are going to use thermistors as our temperature sensor of choice. A friend had a bunch, so why not use them?
Specifically, he gave me a bunch of 100k thermistors (Part No. HT100K3950-1), each individually connected to a 1m cable.
## What is a thermistor?
A thermistor can be simply defined as a special type of resistor whose resistance is highly dependent on temperature. Therefore, if you are able to measure the resistance of the thermistor, you can easily determine the temperature if you know how the thermistor behaves. Simple enough, right?
If you go to the site which sells them, they give a bit more information:
Hotend Thermistor 100K Glass-sealed Thermistor 3950 1% thermistor 1.8mm glass head
1M long cables 2 wires
PTFE Tube 0.6*1mm to protect from the thermistor
Shrink wrap between thermistor and the cables.
The most important piece of information needed to get started using the thermistor is its behaviour in response to temperature. Luckily for me, my friend sent me the data which was on the site. When you have a table or chart which maps resistance to temperature, you can simply measure the resistance with a multimeter and look up the temperature on the chart, or you can again use a voltage divider and connect it to an ADC (more on that later).
## Mapping Resistance to Temperature using Curve Fitting
The site included a weird word document with a bunch of numbers. It really confuses me why one would have tabular data in a word document; a spreadsheet serves that purpose. Anyway, the information was easily extractable and I was able to put it into a spreadsheet and save it as a .CSV file. If you want to follow along, you can download the document here.
Once you open the file in a spreadsheet program or in your text editor of choice, you can see there are four columns: temperature in Celsius, maximum resistance in $k\Omega$, normal (average) resistance in $k\Omega$, and minimum resistance in $k\Omega$. If we were measuring the resistance by hand, we could simply look up (and eyeball) the closest resistance value and read off the temperature. We could be more fancy and use linear interpolation as an alternative to eyeballing.
That is all great, but our goal is to use a microcontroller (or computer) to store, measure and use the temperature readings. This means we have to mathematically model how the thermistor behaves. Since we have data provided from the manufacturer, we do this by plotting the data which was provided:
Now we can see that it does not have a linear relation. Actually, it has an inverse relation, or most probably an $x^{n}$ where $n<0$. Since I am curious, I plotted the min and the max resistance to get a better feel for the error in the temperature reading. My intuition tells me that that is the range the thermistor operates in.
Now, the top graph, isn’t that useful. All it shows is that the range is very small, and is wider (there is more error) when temperatures are below 0 degrees. To see if we can do better, lets limit the range (with contingency) of the temperatures we will be dealing with on the project: 0-100 degrees.
The plot is a bit clearer but not perfect, let’s try and be more fancy and reprensent the error with error bars like they do in stats 101.
Great! A bit better, but it is still hard to read. Let’s try plotting the error on the same axis as the expected (normal) resistance.
From this figure it is quite clear that as temperature decreases, there is more error in the thermistor reading. This figure also shows that readings taken above 20 °C should have good accuracy. We can take this even further with one more plot.
This figure shows the upper and lower bounds of the error (around $\pm 1.5\,k\Omega$ at $125\,k\Omega$, since the expected reading would be around $125\,k\Omega$ at 20 degrees C). Knowing the smallest resistance within our operating range will occur at 100 degrees (around $6720\,\Omega$), R_1 can be calculated to be around $1200\,\Omega$ using the voltage divider presented in the previous post. Now the largest possible error can be calculated and used as a very conservative estimate of the temperature reading resolution. Before we do that, let us fit a curve to the data. Using grade 11 math, we can estimate that the function which describes the inverse curve would look something like $resistance = a \times e^{-b \times temperature} + c$. We can then use SciPy's curve_fit to determine the fit parameters and the covariance matrix.
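The fit itself is only a few lines of SciPy. In the sketch below, synthetic data generated from nominal constants stands in for the CSV table (an assumption on my part, since the raw data file isn't reproduced here); the fitting call is the same either way:

```python
import numpy as np
from scipy.optimize import curve_fit

def thermistor_model(t, a, b, c):
    """Exponential decay model: resistance (kohm) as a function of temperature (C)."""
    return a * np.exp(-b * t) + c

# Stand-in for the manufacturer's table: exact samples from nominal constants.
t_data = np.linspace(0, 100, 101)
r_data = thermistor_model(t_data, 322.0, 0.0552, 4.56)

# p0 gives the solver a rough starting guess in the right ballpark.
popt, pcov = curve_fit(thermistor_model, t_data, r_data, p0=(300.0, 0.05, 5.0))
perr = np.sqrt(np.diag(pcov))   # one-sigma uncertainty of each fit parameter
```

The square roots of the covariance matrix's diagonal give the per-parameter standard deviations plotted later in the post.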
full temperature fit coefficients:
[ 3.22248984e+02 5.51886907e-02 4.56056442e+00]
Covariance matrix:
[[ 1.82147996e+00 -2.16345654e-04 -2.82466975e-01]
[ -2.16345654e-04 2.97792772e-08 2.87522862e-05]
[ -2.82466975e-01 2.87522862e-05 2.25733939e-01]]
The fit coefficients are now known! This means that the following equation approximates the behaviour of the thermistor: $resistance = 322 \times e^{-0.055 \times temperature} + 4.56$
We can also determine the standard deviation of the fit from the diagonal of the covariance matrix, and plot it for each parameter.
As we can see, the standard deviation is very small and thus results in a good fit across the full range of temperature, as shown in the three figures below:
While the model across the full temperature range is useful, we can improve it by curve fitting only in the temperature range we are interested in. This prevents the algorithm from compensating for a set of data which is irrelevant.
fit coefficients:
[ 3.16643400e+02 4.84933743e-02 6.53548105e+00]
Covariance matrix:
[[ 3.29405526e-01 4.07305280e-05 -1.67742297e-02]
[ 4.07305280e-05 3.89687128e-08 4.14707796e-05]
[ -1.67742297e-02 4.14707796e-05 7.51633680e-02]]
We can see the difference by comparing both of the curve fit models on the interested temperature range:
While the testing spec we developed states we should have the capability of measuring from 0-100 degrees C, the average range of operation is actually between 20-80 degrees C, so we can change the range to match the standard operating range.
fit coefficients:
[ 2.94311453e+02 4.51009053e-02 5.05438839e+00]
Covariance matrix:
[[ 6.57786227e-01 1.12143572e-04 8.24269669e-02]
[ 1.12143572e-04 2.18837795e-08 1.84056321e-05]
[ 8.24269669e-02 1.84056321e-05 1.81122519e-02]]
Residual mean (Kohm):
9.99334067349e-10
Residual std dev (Kohm):
0.2302444024
The results of the curve fit within the standard operating temperature range are much better. Not only is the residual error mean essentially zero (9.99e-10 kohm) with a relatively small standard deviation (0.2302 kohm), but the residual errors have the general appearance of being normally distributed (unlike the previous curve fits). What this means is that the model will predict the resistance very well (have a very low error) for the standard operating temperatures, but will perform more poorly outside them. Luckily for us, our batteries will not be operating below 20 degrees C or above 80 degrees C.
### Curve fit model:
The model we created can now be summarized in the following equation:
$R_2 = a e^{-b \times temperature} + c$ where $R_2$ is measured in $k\Omega$, temperature is measured in degrees Celsius, and the constants a, b, and c were found (through curve fitting) to be:
$a = 2.94311453 \times 10^2$
$b = 4.51009053 \times 10 ^{-02}$
$c = 5.054388390$
In addition, the curve-fit error has a mean of 9.99334067349e-10 kohm and a standard deviation of 0.2302444024 kohm. This error, while small, can later be used in filtering algorithms when we are monitoring the temperature. I will touch on this in a later post, but this data about statistical noise in the reading can aid in estimating temperature through more advanced software (such as Kalman filtering).
## Writing software to convert voltage into temperature readings
If we think of the system as a whole, the input to the system is Vo which is measured on an analog input pin of the BBB. Based on this voltage reading, we have to write software which estimates the temperature.
To begin, we know that the thermistor (R_2) changes resistance depending on temperature. We can then use the voltage divider to map this relation:
Vin in this case will be 3.3 V, which is provided from the BBB. As noted in the last post, the spec on the BBB states that the analog input pin can only take in voltage in the range of 0 to 1.8 V. This means we can set the max Vout = 1.8 V. Finally, based on the resistance-to-temperature data, we know that resistance increases as the temperature decreases. This means that R_2 (the resistance of the thermistor) will be greatest when T = 0 degrees (around 327 kohm). We can then use this in our voltage divider equation and solve for R_1. The solution is R_1 = 272700 ohms. Because resistors come in standard sizes, 274k ohm is the standard 1% error resistor to use. Technically, in this case we should go to the next highest resistor value (to limit the voltage to 1.8 V), but this isn't strictly necessary since we will not be cooling the batteries lower than room temperature. While I recommend using the 274k ohm resistor (with 1% error), one can use a 270k ohm (with 5% error) without much consequence if they ensure that the temperature does not fall below 5 degrees. Even if it does, the BeagleBone has some circuitry to help prevent damage from a slightly larger analog input voltage.
We can use algebra to solve for R_2 in terms of the other variables as the following:
$R_2 = \frac{R_1 V_o}{V_{in} - V_o}$ where $V_{in} \neq V_o$
In this equation, R_2 can be solved from R_1 (set by us), V_o (measured), and V_in (set by us).
Next we can use the previous curve fit equation, and use algebra solve for temperature:
$temperature = - \frac{\ln\left(\frac{R_2 - c}{a}\right)}{b}$ where $R_2 > c$.

We can then substitute our relation of $R_2$ to the measured $V_o$ to get the following equation:

$temperature = - \frac{\ln\left(\frac{1}{a}\left(\frac{R_1 V_o}{V_{in} - V_o} - c\right)\right)}{b}$ where $R_2 > c$ and $V_{in} \neq V_o$
Before we can use this, we have to ensure the conditions hold. $V_o$ only equals $V_{in}$ in the limit where $R_2$ grows without bound, and we must also ensure $R_2 > c$ (about 5 kohm). If we use the data found in the CSV, we see that the smallest resistance in our operating temperature range (at 100 °C) is 6.7100 kohm, so we are safe for both conditions.
# Results!
We can now write a simple function which takes a voltage as a parameter and returns the temperature.
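A minimal sketch of such a function (the constants are the 20-80 °C fit values from above, `R1` and `VIN` match the divider design, and the names are mine):

```python
import numpy as np

# Fit constants from the 20-80 degree C curve fit above
A = 2.94311453e2    # kohm
B = 4.51009053e-2   # per degree C
C = 5.05438839      # kohm
R1 = 274.0          # divider resistor, kohm
VIN = 3.3           # supply voltage, V

def voltage_to_temperature(v_out):
    """Invert the voltage divider, then the exponential fit, to get degrees C."""
    r2 = R1 * v_out / (VIN - v_out)    # thermistor resistance, kohm
    return -np.log((r2 - C) / A) / B
```

Feeding it a full-scale 1.8 V reading yields roughly -2.1 °C, consistent with the value printed below.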
-2.11347171107
We can now use this function and plot for the full range of input voltages:
Using this chart we can now test the system and measure temperature! The next post will be about combining all the pieces and doing the testing!
https://en.wikipedia.org/wiki/Jacobi_sum | Jacobi sum
In mathematics, a Jacobi sum is a type of character sum formed with Dirichlet characters. Simple examples would be Jacobi sums J(χ, ψ) for Dirichlet characters χ, ψ modulo a prime number p, defined by
${\displaystyle J(\chi ,\psi )=\sum \chi (a)\psi (1-a)\,,}$
where the summation runs over all residues a = 2, 3, ..., p − 1 mod p (for which neither a nor 1 − a is 0). Jacobi sums are the analogues for finite fields of the beta function. Such sums were introduced by C. G. J. Jacobi early in the nineteenth century in connection with the theory of cyclotomy. Jacobi sums J can be factored generically into products of powers of Gauss sums g. For example, when the character χψ is nontrivial,
${\displaystyle J(\chi ,\psi )={\frac {g(\chi )g(\psi )}{g(\chi \psi )}}\,,}$
analogous to the formula for the beta function in terms of gamma functions. Since the nontrivial Gauss sums g have absolute value p^(1/2), it follows that J(χ, ψ) also has absolute value p^(1/2) when the characters χψ, χ, ψ are nontrivial. Jacobi sums J lie in smaller cyclotomic fields than do the nontrivial Gauss sums g. The summands of J(χ, ψ) for example involve no pth root of unity, but rather involve just values which lie in the cyclotomic field of (p − 1)th roots of unity. Like Gauss sums, Jacobi sums have known prime ideal factorisations in their cyclotomic fields; see Stickelberger's theorem.
When χ is the Legendre symbol,
${\displaystyle J(\chi ,\chi )=-\chi (-1)=(-1)^{\frac {p+1}{2}}\,.}$
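These identities are easy to check numerically. The sketch below (not from the article) computes J(χ, χ) for χ the Legendre symbol, evaluated via Euler's criterion, and verifies the formula above for a few small odd primes:

```python
def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion; returns 0 if p divides a."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def jacobi_sum(p):
    """J(chi, chi) = sum of chi(a) * chi(1 - a) over a = 2, ..., p - 1."""
    return sum(legendre(a, p) * legendre(1 - a, p) for a in range(2, p))

# Check J(chi, chi) = -chi(-1) = (-1)^((p + 1)/2) for several odd primes p.
for p in (5, 7, 11, 13, 17):
    assert jacobi_sum(p) == (-1) ** ((p + 1) // 2)
```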
In general the values of Jacobi sums occur in relation with the local zeta-functions of diagonal forms. The result on the Legendre symbol amounts to the formula p + 1 for the number of points on a conic section that is a projective line over the field of p elements. A paper of André Weil from 1949 very much revived the subject. Indeed, through the Hasse–Davenport relation of the late 20th century, the formal properties of powers of Gauss sums had become current once more.
As well as pointing out the possibility of writing down local zeta-functions for diagonal hypersurfaces by means of general Jacobi sums, Weil (1952) demonstrated the properties of Jacobi sums as Hecke characters. This was to become important once the complex multiplication of abelian varieties became established. The Hecke characters in question were exactly those one needs to express the Hasse–Weil L-functions of the Fermat curves, for example. The exact conductors of these characters, a question Weil had left open, were determined in later work.
References
• Berndt, B. C.; Evans, R. J.; Williams, K. S. (1998). Gauss and Jacobi Sums. Wiley.
• Lang, S. (1978). Cyclotomic fields. Graduate Texts in Mathematics. 59. Springer Verlag. ch. 1. ISBN 0-387-90307-0.
• Weil, André (1949). "Numbers of solutions of equations in finite fields". Bull. Amer. Math. Soc. 55: 497–508. doi:10.1090/s0002-9904-1949-09219-4.
• Weil, André (1952). "Jacobi sums as Grössencharaktere". Trans. Amer. Math. Soc. 73: 487–495. doi:10.1090/s0002-9947-1952-0051263-0. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9738745093345642, "perplexity": 943.6024190572018}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746171.27/warc/CC-MAIN-20181119233342-20181120015342-00094.warc.gz"} |
https://www.physicsforums.com/threads/intersection-of-subspaces.87626/ | # Intersection of subspaces
1. Sep 5, 2005
### loli12
I have 2 subspaces U and V of R^3 which
U = {(a1, a2, a3) in R^3: a1 = 3(a2) and a3 = -a2}
V = {(a1, a2, a3) in R^3: a1 - 4(a2) - a3 = 0}
I used the information in U and substituted it into the equation in V and I got 0 = 0. So, does it mean that the intersection of U and V is the whole R^3 which has no restrictions on a1, a2 and a3 (they are free)? Or do the original restrictions on both the original subspaces still being applied to the intersection?
2. Sep 5, 2005
### AKG
The intersection of U and V cannot possibly be all of R³. How could the intersection of two sets be bigger than both of the sets? U is 1-dimensional (a line) and V is 2-dimensional (a plane), so their intersection is either 1-dimensional or 0-dimensional. Can you find a non-zero point that is in both U and V? If so, then U is contained in V, and the intersection of U and V is all of U. A point in U takes the form (x, x/3, -x/3). Would such a point be in V?
x - 4(x/3) - (-x/3) = x - (4/3)x + (1/3)x = 0 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8247607350349426, "perplexity": 829.7203638657429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720967.29/warc/CC-MAIN-20161020183840-00099-ip-10-171-6-4.ec2.internal.warc.gz"} |
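That check can be phrased as a tiny script: a generic point of U is (3t, t, -t), and plugging it into V's defining equation always gives zero, so U lies inside V and the intersection is U itself.

```python
# U is the line spanned by (3, 1, -1); V is the plane a1 - 4*a2 - a3 = 0.
def in_V(a1, a2, a3):
    return a1 - 4 * a2 - a3 == 0

# Every point of U has the form (3t, t, -t); check several values of t.
assert all(in_V(3 * t, t, -t) for t in range(-5, 6))

# V also contains points outside U, e.g. (4, 1, 0) — so U is a proper
# subspace of V, and U ∩ V = U.
assert in_V(4, 1, 0)
```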
https://www.physicsforums.com/threads/god-hypothesis.75695/ | # God Hypothesis
1. May 15, 2005
### the_truth
Scientific Method.
1. Observation and description of a phenomenon or group of phenomena.
2. Formulation of an hypothesis to explain the phenomena. In physics, the hypothesis often takes the form of a causal mechanism or a mathematical relation.
3. Use of the hypothesis to predict the existence of other phenomena, or to predict quantitatively the results of new observations.
4. Performance of experimental tests of the predictions by several independent experimenters and properly performed experiments.
God.
This is a proper hypothesis which remains unproven, which is a step forward from the seemingly out-of-the-blue hypothesis of god. The aim of this hypothesis is also to provoke discussion on how you choose a hypothesis, that most malleable element of the scientific method and one which is very relevant to today's physics. Possibly also the element which Einstein refused to work with, which led to his stagnation.
1:
Observation 1.
You cannot measure anything with certainty, due to the Heisenberg uncertainty principle and also because you cannot measure anything with perfect precision. You cannot, for instance, say a ruler is exactly 30 cm long, as the chances are it could very well be
30.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001
cms long and you cannot measure with such precision and if you could measure with such precision you still wouldn't know whether the ruler is
30.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001
cms long or not.
Observation 2.
You would need to be able to measure things to an infinite degree of precision in order to know the exact length of the ruler.
Observation 3.
If there was an omnipotent sentient being, he would have the ability to measure things to an infinite degree of precision and thus, with the laws of the universe, be able to predict the entire universe. God is also credited with being the creator of the universe, and with the idea that the laws of the universe exist because he is a watchmaker god, who does not externally influence the universe after it has been set in motion.
2:
Hypothesis.
The relationship between observations 1 and 2 and observation 3 is not a coincidence. Bear in mind that observation 3 is an observation of irrational opinions.
3:
Evidence.
The possibility of the relationship being a coincidence is unknown. There is also the possibility that the idea of god has caused me to introduce it into my observations, which would be circular. However, it is an observation, allowed by the scientific method, and so should not be ignored on that basis. More scientific observations which correlate with ancient ideas of god are required before this relationship can be considered more than a coincidence.
2. May 15, 2005
### <<<GUILLE>>>
You can't measure infinites. I like your speech. I'm only posting a conversation from about 200 years ago, and that's all, because I think it says everything I need/think:
Napoleon - I have heard that you haven't included god in your explanation of the universe?
Laplace - No; I didn't require that hypothesis.
Napoleon - Oh, it's a very good theory, it explains many things. :rofl: :rofl:
Poor Napoleon, he was very intelligent at strategy though.
3. May 27, 2005
### the_truth
Yeah.. Shuffle 500000 men into Siberia, great idea.
https://www2.physics.ox.ac.uk/contacts/people/devriendt/publications?page=3 | # Publications by Julien Devriendt
## Mergers drive spin swings along the cosmic web
Monthly Notices of the Royal Astronomical Society Oxford University Press 445 (2014) L46-L50
C Welker, J Devriendt, Y Dubois, C Pichon, S Peirani
The close relationship between mergers and the reorientation of the spin for galaxies and their host dark haloes is investigated using a cosmological hydrodynamical simulation (Horizon-AGN). Through a statistical analysis of merger trees, we show that spin swings are mainly driven by mergers along the filamentary structure of the cosmic web, and that these events account for the preferred perpendicular orientation of massive galaxies with respect to their nearest filament. By contrast, low-mass galaxies (M_s < 10^10 M_⊙ at redshift 1.5) having undergone very few mergers, if at all, tend to possess a spin well aligned with their filament. Haloes follow the same trend as galaxies but display a greater sensitivity to smooth anisotropic accretion. The relative effect of mergers on magnitude is qualitatively different for minor and major mergers: mergers (and diffuse accretion) generally increase the magnitude of the specific angular momentum, but major mergers also give rise to a population of objects with less specific angular momentum left. Without mergers, secular accretion builds up the specific angular momentum of galaxies but not that of haloes. It also (re)aligns galaxies with their filament.
## Integral field spectroscopy of high redshift galaxies with the HARMONI spectrograph on the European Extremely Large Telescope
GROUND-BASED AND AIRBORNE INSTRUMENTATION FOR ASTRONOMY V 9147 (2014) ARTN 91478Z
S Kendrew, S Zieleniewski, N Thatte, J Devriendt, R Houghton, T Fusco, M Tecza, F Clarke, K O'Brien
## Satellite Survival in Highly Resolved Milky Way Class Halos
Monthly Notices of the Royal Astronomical Society 429 (2012) 633-651
S Geen, A Slyz, J Devriendt
Surprisingly little is known about the origin and evolution of the Milky Way's satellite galaxy companions. UV photoionisation, supernova feedback and interactions with the larger host halo are all thought to play a role in shaping the population of satellites that we observe today, but there is still no consensus as to which of these effects, if any, dominates. In this paper, we revisit the issue by re-simulating a Milky Way class dark matter (DM) halo with unprecedented resolution. Our set of cosmological hydrodynamic Adaptive Mesh Refinement (AMR) simulations, called the Nut suite, allows us to investigate the effect of supernova feedback and UV photoionisation at high redshift with sub-parsec resolution. We subsequently follow the effect of interactions with the Milky Way-like halo using a lower spatial resolution (50pc) version of the simulation down to z=0. This latter produces a population of simulated satellites that we compare to the observed satellites of the Milky Way and M31. We find that supernova feedback reduces star formation in the least massive satellites but enhances it in the more massive ones. Photoionisation appears to play a very minor role in suppressing star and galaxy formation in all progenitors of satellite halos. By far the largest effect on the satellite population is found to be the mass of the host and whether gas cooling is included in the simulation or not. Indeed, inclusion of gas cooling dramatically reduces the number of satellites captured at high redshift which survive down to z=0.
## Constraining stellar assembly and active galactic nucleus feedback at the peak epoch of star formation
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY 425 (2012) L96-L100
T Kimm, S Kaviraj, JEG Devriendt, SH Cohen, RA Windhorst, Y Dubois, A Slyz, NP Hathi, RRE Jr, RW O'Connell, MA Dopita, J Silk
## Self-regulated growth of supermassive black holes by a dual jet-heating active galactic nucleus feedback mechanism: Methods, tests and implications for cosmological simulations
Monthly Notices of the Royal Astronomical Society 420 (2012) 2662-2683
Y Dubois, J Devriendt, A Slyz, R Teyssier
We develop a subgrid model for the growth of supermassive black holes (BHs) and their associated active galactic nucleus (AGN) feedback in hydrodynamical cosmological simulations. This model transposes previous attempts to describe BH accretion and AGN feedback with the smoothed particle hydrodynamics (SPH) technique to the adaptive mesh refinement framework. It also furthers their development by implementing a new jet-like outflow treatment of the AGN feedback which we combine with the heating mode traditionally used in the SPH approach. Thus, our approach allows one to test the robustness of the conclusions derived from simulating the impact of self-regulated AGN feedback on galaxy formation vis-à-vis the numerical method. Assuming that BHs are created in the early stages of galaxy formation, they grow by mergers and accretion of gas at a Eddington-limited Bondi accretion rate. However this growth is regulated by AGN feedback which we model using two different modes: a quasar-heating mode when accretion rates on to the BHs are comparable to the Eddington rate, and a radio-jet mode at lower accretion rates which not only deposits energy, but also deposits mass and momentum on the grid. In other words, our feedback model deposits energy as a succession of thermal bursts and jet outflows depending on the properties of the gas surrounding the BHs. We assess the plausibility of such a model by comparing our results to observational measurements of the co-evolution of BHs and their host galaxy properties, and check their robustness with respect to numerical resolution. We show that AGN feedback must be a crucial physical ingredient for the formation of massive galaxies as it appears to be able to efficiently prevent the accumulation of and/or expel cold gas out of haloes/galaxies and significantly suppress star formation. 
Our model predicts that the relationship between BHs and their host galaxy mass evolves as a function of redshift, because of the vigorous accretion of cold material in the early Universe that drives Eddington-limited accretion on to BHs. Quasar activity is also enhanced at high redshift. However, as structures grow in mass and lose their cold material through star formation and efficient BH feedback ejection, the AGN activity in the low-redshift Universe becomes more and more dominated by the radio mode, which powers jets through the hot circumgalactic medium. © 2012 The Authors Monthly Notices of the Royal Astronomical Society © 2012 RAS.
## THE EPOCH OF DISK SETTLING: z similar to 1 TO NOW
ASTROPHYSICAL JOURNAL 758 (2012) ARTN 106
SA Kassin, BJ Weiner, SM Faber, JP Gardner, CNA Willmer, AL Coil, MC Cooper, J Devriendt, AA Dutton, P Guhathakurta, DC Koo, AJ Metevier, KG Noeske, JR Primack
## The radius of baryonic collapse in disc galaxy formation
Monthly Notices of the Royal Astronomical Society 424 (2012) 502-507
SA Kassin, J Devriendt, SM Fall, RS de Jong, B Allgood, JR Primack
In the standard picture of disc galaxy formation, baryons and dark matter receive the same tidal torques, and therefore approximately the same initial specific angular momentum. However, observations indicate that disc galaxies typically have only about half as much specific angular momentum as their dark matter haloes. We argue this does not necessarily imply that baryons lose this much specific angular momentum as they form galaxies. It may instead indicate that galaxies are most directly related to the inner regions of their host haloes, as may be expected in a scenario where baryons in the inner parts of haloes collapse first. A limiting case is examined under the idealized assumption of perfect angular momentum conservation. Namely, we determine the density contrast Δ, with respect to the critical density of the Universe, by which dark matter haloes need to be defined in order to have the same average specific angular momentum as the galaxies they host. Under the assumption that galaxies are related to haloes via their characteristic rotation velocities, the necessary Δ is ∼600. This Δ corresponds to an average halo radius and mass which are ∼60 per cent and ∼75 per cent, respectively, of the virial values (i.e. for Δ = 200). We refer to this radius as the radius of baryonic collapse R_BC, since if specific angular momentum is conserved perfectly, baryons would come from within it. It is not likely a simple step function due to the complex gastrophysics involved; therefore, we regard it as an effective radius. In summary, the difference between the predicted initial and the observed final specific angular momentum of galaxies, which is conventionally attributed solely to angular momentum loss, can more naturally be explained by a preference for collapse of baryons within R_BC, with possibly some later angular momentum transfer. © 2012 The Authors Monthly Notices of the Royal Astronomical Society © 2012 RAS.
## Feeding compact bulges and supermassive black holes with low angular momentum cosmic gas at high redshift
Monthly Notices of the Royal Astronomical Society 423 (2012) 3616-3630
Y Dubois, C Pichon, M Haehnelt, T Kimm, A Slyz, J Devriendt, D Pogosyan
We use cosmological hydrodynamical simulations to show that a significant fraction of the gas in high redshift rare massive haloes falls nearly radially to their very centre on extremely short time-scales. This process results in the formation of very compact bulges with specific angular momentum a factor of 5-30 smaller than the average angular momentum of the baryons in the whole halo. Such low angular momentum originates from both segregation and effective cancellation when the gas flows to the centre of the halo along well-defined cold filamentary streams. These filaments penetrate deep inside the halo and connect to the bulge from multiple rapidly changing directions. Structures falling in along the filaments (satellite galaxies) or formed by gravitational instabilities triggered by the inflow (star clusters) further reduce the angular momentum of the gas in the bulge. Finally, the fraction of gas radially falling to the centre appears to increase with the mass of the halo; we argue that this is most likely due to an enhanced cancellation of angular momentum in rarer haloes which are fed by more isotropically distributed cold streams. Such an increasingly efficient funnelling of low angular momentum gas to the centre of very massive haloes at high redshift may account for the rapid pace at which the most massive supermassive black holes grow to reach observed masses around 10^9 M_⊙ at an epoch when the Universe is barely 1 Gyr old. © 2012 The Authors Monthly Notices of the Royal Astronomical Society © 2012 RAS.
## The environment and redshift dependence of accretion on to dark matter haloes and subhaloes
Monthly Notices of the Royal Astronomical Society 417 (2011) 666-680
H Tillson, L Miller, J Devriendt
A dark-matter-only Horizon Project simulation is used to investigate the environment and redshift dependences of accretion on to both haloes and subhaloes. These objects grow in the simulation via mergers and via accretion of diffuse non-halo material, and we measure the combined signal from these two modes of accretion. It is found that the halo accretion rate varies less strongly with redshift than predicted by the Extended Press-Schechter formalism and is dominated by minor merger and diffuse accretion events at z= 0, for all haloes. These latter growth mechanisms may be able to drive the radio-mode feedback hypothesised for recent galaxy-formation models, and have both the correct accretion rate and the form of cosmological evolution. The low-redshift subhalo accretors in the simulation form a mass-selected subsample safely above the mass resolution limit that reside in the outer regions of their host, with ∼70 per cent beyond their host's virial radius, where they are probably not being significantly stripped of mass. These subhaloes accrete, on average, at higher rates than haloes at low redshift and we argue that this is due to their enhanced clustering at small scales. At cluster scales, the mass accretion rate on to haloes and subhaloes at low redshift is found to be only weakly dependent on environment, and we confirm that at z∼ 2 haloes accrete independently of their environment at all scales, as reported by other authors. By comparing our results with an observational study of black hole growth, we support previous suggestions that at z > 1, dark matter haloes and their associated central black holes grew coevally, but show that by the present-day, dark matter haloes could be accreting at fractional rates that are up to a factor of 3 - 4 higher than their associated black holes. © 2011 The Authors Monthly Notices of the Royal Astronomical Society © 2011 RAS.
## Rigging dark haloes: Why is hierarchical galaxy formation consistent with the inside-out build-up of thin discs?
Monthly Notices of the Royal Astronomical Society 418 (2011) 2493-2507
C Pichon, D Pogosyan, T Kimm, A Slyz, J Devriendt, Y Dubois
State-of-the-art hydrodynamical simulations show that gas inflow through the virial sphere of dark matter haloes is focused (i.e. has a preferred inflow direction), consistent (i.e. its orientation is steady in time) and amplified (i.e. the amplitude of its advected specific angular momentum increases with time). We explain this to be a consequence of the dynamics of the cosmic web within the neighbourhood of the halo, which produces steady, angular momentum rich, filamentary inflow of cold gas. On large scales, the dynamics within neighbouring patches drives matter out of the surrounding voids, into walls and filaments before it finally gets accreted on to virialized dark matter haloes. As these walls/filaments constitute the boundaries of asymmetric voids, they acquire a net transverse motion, which explains the angular momentum rich nature of the later infall which comes from further away. We conjecture that this large-scale driven consistency explains why cold flows are so efficient at building up high-redshift thin discs inside out. © 2011 The Authors Monthly Notices of the Royal Astronomical Society © 2011 RAS.
## Galactic star formation in parsec-scale resolution simulations
Proceedings of the IAU (2011)
LC Powell, F Bournaud, D Chapon, J Devriendt, A Slyz, R Teyssier
The interstellar medium (ISM) in galaxies is multiphase and cloudy, with stars forming in the very dense, cold gas found in Giant Molecular Clouds (GMCs). Simulating the evolution of an entire galaxy, however, is a computational problem which covers many orders of magnitude, so many simulations cannot reach densities high enough or temperatures low enough to resolve this multiphase nature. Therefore, the formation of GMCs is not captured and the resulting gas distribution is smooth, contrary to observations. We investigate how star formation (SF) proceeds in simulated galaxies when we obtain parsec-scale resolution and more successfully capture the multiphase ISM. Both major mergers and the accretion of cold gas via filaments are dominant contributors to a galaxy's total stellar budget and we examine SF at high resolution in both of these contexts.
## The impact of ISM turbulence, clustered star formation and feedback on galaxy mass assembly through cold flows and mergers
Proceedings of the IAU (2011)
LC Powell, F Bournaud, D Chapon, J Devriendt, A Slyz, R Teyssier
Two of the dominant channels for galaxy mass assembly are cold flows (cold gas supplied via the filaments of the cosmic web) and mergers. How these processes combine in a cosmological setting, at both low and high redshift, to produce the whole zoo of galaxies we observe is largely unknown. Indeed there is still much to understand about the detailed physics of each process in isolation. While these formation channels have been studied using hydrodynamical simulations, here we study their impact on gas properties and star formation (SF) with some of the first simulations that capture the multiphase, cloudy nature of the interstellar medium (ISM), by virtue of their high spatial resolution (and corresponding low temperature threshold). In this regime, we examine the competition between cold flows and a supernovae (SNe)-driven outflow in a very high-redshift galaxy (z ≈ 9) and study the evolution of equal-mass galaxy mergers at low and high redshift, focusing on the induced SF. We find that SNe-driven outflows cannot reduce the cold accretion at z ≈ 9 and that SF is actually enhanced due to the ensuing metal enrichment. We demonstrate how several recent observational results on galaxy populations (e.g. enhanced HCN/CO ratios in ULIRGs, a separate Kennicutt–Schmidt (KS) sequence for starbursts and the population of compact early type galaxies (ETGs) at high redshift) can be explained with mechanisms captured in galaxy merger simulations, provided that the multiphase nature of the ISM is resolved.
## How active galactic nucleus feedback and metal cooling shape cluster entropy profiles
Monthly Notices of the Royal Astronomical Society 417 (2011) 1853-1870
Y Dubois, J Devriendt, R Teyssier, A Slyz
Observed clusters of galaxies essentially come in two flavours: non-cool-core clusters characterized by an isothermal temperature profile and a central entropy floor, and cool-core clusters where temperature and entropy in the central region are increasing with radius. Using cosmological resimulations of a galaxy cluster, we study the evolution of its intracluster medium (ICM) gas properties, and through them we assess the effect of different (subgrid) modelling of the physical processes at play, namely gas cooling, star formation, feedback from supernovae and active galactic nuclei (AGNs). More specifically, we show that AGN feedback plays a major role in the pre-heating of the protocluster as it prevents a high concentration of mass from collecting in the centre of the future galaxy cluster at early times. However, AGN activity during the cluster's later evolution is also required to regulate the mass flow into its core and prevent runaway star formation in the central galaxy. Whereas the energy deposited by supernovae alone is insufficient to prevent an overcooling catastrophe, supernovae are responsible for spreading a large amount of metals at high redshift, enhancing the cooling efficiency of the ICM gas. As the AGN energy release depends on the accretion rate of gas on to its central black hole engine, the AGNs respond to this supernova-enhanced gas accretion by injecting more energy into the surrounding gas, and as a result increase the amount of early pre-heating. We demonstrate that the interaction between an AGN jet and the ICM gas that regulates the growth of the AGN's black hole can naturally produce cool-core clusters if we neglect metals. However, as soon as metals are allowed to contribute to the radiative cooling, only the non-cool-core solution is produced. © 2011 The Authors. Monthly Notices of the Royal Astronomical Society © 2011 RAS.
## Extreme value statistics of smooth Gaussian random fields
Monthly Notices of the Royal Astronomical Society (2011)
S Colombi, O Davis, J Devriendt, S Prunet, J Silk
We consider the Gumbel or extreme value statistics describing the distribution function p_G(ν_max) of the maximum values of a random field ν within patches of fixed size. We present, for smooth Gaussian random fields in two and three dimensions, an analytical estimate of p_G which is expected to hold in a regime where local maxima of the field are moderately high and weakly clustered. When the patch size becomes sufficiently large, the negative of the logarithm of the cumulative extreme value distribution is simply equal to the average of the Euler characteristic of the field in the excursion ν ≥ ν_max inside the patches. The Gumbel statistics therefore represents an interesting alternative probe of the genus as a test of non-Gaussianity, e.g. in cosmic microwave background temperature maps or in 3D galaxy catalogues. It can be approximated, except in the remote positive tail, by a negative Weibull-type form, converging slowly to the expected Gumbel-type form for infinitely large patch size. Convergence is facilitated when large-scale correlations are weaker. We compare the analytic predictions to numerical experiments for the case of a scale-free Gaussian field in two dimensions, achieving impressive agreement between approximate theory and measurements. We also discuss the generalization of our formalism to non-Gaussian fields. © 2011 The Authors. Monthly Notices of the Royal Astronomical Society © 2011 RAS.
## Galactic star formation in parsec-scale resolution simulations
Proceedings of the International Astronomical Union 6 (2011) 487-490
LC Powell, F Bournaud, D Chapon, J Devriendt, A Slyz, R Teyssier
The interstellar medium (ISM) in galaxies is multiphase and cloudy, with stars forming in the very dense, cold gas found in Giant Molecular Clouds (GMCs). Simulating the evolution of an entire galaxy, however, is a computational problem which covers many orders of magnitude, so many simulations cannot reach densities high enough or temperatures low enough to resolve this multiphase nature. Therefore, the formation of GMCs is not captured and the resulting gas distribution is smooth, contrary to observations. We investigate how star formation (SF) proceeds in simulated galaxies when we obtain parsec-scale resolution and more successfully capture the multiphase ISM. Both major mergers and the accretion of cold gas via filaments are dominant contributors to a galaxy's total stellar budget and we examine SF at high resolution in both of these contexts. © 2011 International Astronomical Union.
## The origin and evolution of the mass-metallicity relation at high redshift using galics
Monthly Notices of the Royal Astronomical Society 410 (2011) 2203-2216
J Sakstein, A Pipino, JEG Devriendt, R Maiolino
The Galaxies in Cosmological Simulations (galics) semi-analytical model of hierarchical galaxy formation is used to investigate the effects of different galactic properties, including star formation rate (SFR) and outflows, on the shape of the mass-metallicity relation and to predict the relation for galaxies at redshift z= 2.27 and 3.54. Our version of galics has the chemical evolution implemented in great detail and is less heavily reliant on approximations, such as instantaneous recycling. We vary the model parameters controlling both the efficiency and redshift dependence of the SFR as well as the efficiency of supernova feedback. We find that the factors controlling the SFR influence the relation significantly at all redshifts and require a strong redshift dependence, proportional to 1 +z, in order to reproduce the observed relation at the low-mass end. Indeed, at any redshift, the predicted relation flattens out at the high-mass end resulting in a poorer agreement with observations in this regime. We also find that variation in the parameters associated with outflows has a minimal effect on the relation at high redshift but does serve to alter its shape in the more recent past. We thus conclude that the relation is one between the SFR and mass and that outflows are only important in shaping the relation at late times. When the relation is stratified by the SFR, it is apparent that the predicted galaxies with increasing stellar masses have higher SFRs, supporting the view that galaxy downsizing is the origin of the relation. Attempting to reproduce the observed relation, we vary the parameters controlling the efficiency of star formation and its redshift dependence and compare the predicted relations with those of Erb et al. at z= 2.27 and Maiolino et al. at z= 3.54 in order to find the best-fitting parameters. 
We succeed in fitting the relation at z= 3.54 reasonably well; however, we fail at z= 2.27, our relation lying on average below the observed one at the one standard deviation level. We do, however, predict the observed evolution between z= 3.54 and 0. Finally, we discuss the reasons for the above failure and the flattening at high masses, with regards to both the comparability of our predictions with observations and the possible lack of underlying physics. Several of these problems are common to many semi-analytic/hybrid models and so we discuss possible improvements and set the stage for future work by considering how the predictions and physics in these models can be made more robust in light of our results. © 2010 The Authors Monthly Notices of the Royal Astronomical Society © 2010 RAS.
## The impact of ISM turbulence, clustered star formation and feedback on galaxy mass assembly through cold flows and mergers
Proceedings of the International Astronomical Union 6 (2010) 234-237
LC Powell, F Bournaud, D Chapon, J Devriendt, A Slyz, R Teyssier
Two of the dominant channels for galaxy mass assembly are cold flows (cold gas supplied via the filaments of the cosmic web) and mergers. How these processes combine in a cosmological setting, at both low and high redshift, to produce the whole zoo of galaxies we observe is largely unknown. Indeed there is still much to understand about the detailed physics of each process in isolation. While these formation channels have been studied using hydrodynamical simulations, here we study their impact on gas properties and star formation (SF) with some of the first simulations that capture the multiphase, cloudy nature of the interstellar medium (ISM), by virtue of their high spatial resolution (and corresponding low temperature threshold). In this regime, we examine the competition between cold flows and a supernovae (SNe)-driven outflow in a very high-redshift galaxy (z ≈ 9) and study the evolution of equal-mass galaxy mergers at low and high redshift, focusing on the induced SF. We find that SNe-driven outflows cannot reduce the cold accretion at z ≈ 9 and that SF is actually enhanced due to the ensuing metal enrichment. We demonstrate how several recent observational results on galaxy populations (e.g. enhanced HCN/CO ratios in ULIRGs, a separate Kennicutt-Schmidt (KS) sequence for starbursts and the population of compact early-type galaxies (ETGs) at high redshift) can be explained with mechanisms captured in galaxy merger simulations, provided that the multiphase nature of the ISM is resolved. © Copyright International Astronomical Union 2011.
## The skeleton: Connecting large scale structures to galaxy formation
AIP Conference Proceedings 1241 (2010) 1108-1117
C Pichon, C Gay, D Pogosyan, S Prunet, T Sousbie, S Colombi, A Slyz, J Devriendt
We report on two quantitative, morphological estimators of the filamentary structure of the Cosmic Web, the so-called global and local skeletons. The first, based on a global study of the matter density gradient flow, allows us to study the connectivity between a density peak and its surroundings, with direct relevance to the anisotropic accretion via cold flows on galactic halos. From the second, based on a local constraint equation involving the derivatives of the field, we can derive predictions for powerful statistics, such as the differential length and the relative saddle to extrema counts of the Cosmic web as a function of density threshold (with application to percolation of structures and connectivity), as well as a theoretical framework to study their cosmic evolution through the onset of gravity-induced non-linearities. © 2010 American Institute of Physics. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8397698402404785, "perplexity": 2005.5795382927508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738723.55/warc/CC-MAIN-20200810235513-20200811025513-00103.warc.gz"} |
## The lower the climate sensitivity the better – but what we need is zero carbon
https://granthaminstitute.com/2018/01/23/the-lower-the-climate-sensitivity-the-better-but-what-we-need-is-zero-carbon/
Following the publication of a paper presenting a new narrower estimate of “equilibrium climate sensitivity” – a measure of how future greenhouse gas emissions could alter the climate – Professor Joanna Haigh, co-director of the Grantham Institute, explains the implications of climate sensitivity and why it should be interpreted carefully.
What concerns me about a recent paper published in Nature is the interpretation of its results by some commentators. The findings have been pounced on by some as an indication that climate scientists have been exaggerating the risk associated with greenhouse gas increases. Even some climate scientists have concluded that “the risk of very high surface temperature changes occurring in the future will decrease”. But this can only be the case if carbon dioxide (CO2) emissions cease.
When we think about the future of the Earth’s climate the first thing to consider is how concentrations of atmospheric greenhouse gases will change; the next is how the climate will respond – aka the climate sensitivity. Estimating climate sensitivity is difficult – we need to know not only the direct impact of greenhouse gases trapping heat radiation, but also the impact of knock-on effects such as changes in humidity, cloud, ice, and the broader carbon cycle, including plant species and cover.
‘Equilibrium Climate Sensitivity’ is an estimate of the increase in average global temperatures that would occur when the Earth has fully adjusted to atmospheric CO2 doubling in concentration from pre-industrial levels. A range of different methods have been employed to calculate ECS, using observational records of CO2 concentration and temperature. The models used range from simple energy balance considerations to complex computer simulations of the whole climate system, but all methods need to include assumptions of one type or another.
The 2013 report of the Intergovernmental Panel on Climate Change suggested that ECS lies between 1.5 and 4.5°C. However, the paper published last week suggests a narrowing of this range to between 2.2 and 3.4°C. It is not for me here to discuss the merits of that study, though I note its range still lies within that of the IPCC – and it is certainly not the last word on this issue.
What is important to note is that, while ECS gives an indication of climate sensitivity to increasing greenhouse gases, it is not very useful as a predictor of actual temperature. Firstly, adjustment is very slow, and surface temperatures will continue to rise well after the date of the doubling. Secondly, it assumes the concentrations of greenhouse gases have stabilised. An easier-to-visualise perspective is given by the idea of burnable carbon: there is a limit to the amount of CO2 that we can allow to accumulate in the atmosphere if we wish to avoid dangerous levels of warming. The greater the rate of CO2 emissions, the sooner that threshold will be reached.
Warming can only be halted if CO2 emissions cease. Of course, a lower ECS means that warming is slower, but it must not be interpreted as a maximum possible temperature increase. As long as we go on pumping greenhouse gases into the atmosphere, the temperature will rise and rise inexorably. We need to stop.
Find out more about Grantham Institute research on low-carbon pathways here. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8049954771995544, "perplexity": 975.7609655888183}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195687.51/warc/CC-MAIN-20201128155305-20201128185305-00119.warc.gz"} |
## Video From 17w5030: Splitting Algorithms, Modern Operator Theory, and Applications
https://www.birs.ca/events/2017/5-day-workshops/17w5030/videos/watch/201709211227-Yuan.html
Thursday, September 21, 2017 12:27 - 13:01
Partial error bound conditions and the linear convergence rate of ADMM | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9095030426979065, "perplexity": 2927.573708927696}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301720.45/warc/CC-MAIN-20220120035934-20220120065934-00339.warc.gz"} |
# The scope and limits of simulation in automated reasoning
https://nyuscholars.nyu.edu/en/publications/the-scope-and-limits-of-simulation-in-automated-reasoning-3
Ernest Davis, Gary Marcus
Research output: Contribution to journal › Review article › peer-review
## Abstract
In scientific computing and in realistic graphic animation, simulation - that is, step-by-step calculation of the complete trajectory of a physical system - is one of the most common and important modes of calculation. In this article, we address the scope and limits of the use of simulation, with respect to AI tasks that involve high-level physical reasoning. We argue that, in many cases, simulation can play at most a limited role. Simulation is most effective when the task is prediction, when complete information is available, when a reasonably high quality theory is available, and when the range of scales involved, both temporal and spatial, is not extreme. When these conditions do not hold, simulation is less effective or entirely inappropriate. We discuss twelve features of physical reasoning problems that pose challenges for simulation-based reasoning. We briefly survey alternative techniques for physical reasoning that do not rely on simulation.
Original language: English (US)
Pages: 60-72
Number of pages: 13
Journal: Artificial Intelligence
Volume: 233
DOI: https://doi.org/10.1016/j.artint.2015.12.003
State: Published - Apr 2016
## Keywords
• Physical reasoning
• Simulation
## ASJC Scopus subject areas
• Language and Linguistics
• Linguistics and Language
• Artificial Intelligence
http://www.math.mtu.edu/graduate/comp/node12.html | Next: Algebra Up: Optimization/Numerical Linear Algebra Previous: Outline
### Sample questions
1. Compare and contrast the line search and trust region methods of globalizing a quasi-Newton method. Your discussion should touch on the following points:
(a) The cost of taking a step (that is, of solving the line search subproblem and trust region subproblem), and the complexity of the algorithms.
(b) Dealing with nonconvexity.
(c) Convergence theorems.
2. Derive the BFGS update used for approximating the Hessian of the objective function in an unconstrained minimization problem. Explain the rationale for the steps in the derivation.
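As a companion to the derivation asked for here, the sketch below implements the standard rank-two BFGS update of a Hessian approximation (`bfgs_update` is just an illustrative name), together with a numerical check of the secant equation it is built to satisfy.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Rank-two BFGS update of a Hessian approximation B, with
    s = x_{k+1} - x_k and y = grad f(x_{k+1}) - grad f(x_k).
    Enforces the secant equation B_new @ s = y, and preserves
    symmetry and positive definiteness when y @ s > 0."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# Quick check on data satisfying the curvature condition y @ s > 0.
rng = np.random.default_rng(1)
B = np.eye(5)
s = rng.standard_normal(5)
y = s + 0.1 * rng.standard_normal(5)   # close to s, so y @ s > 0
B_new = bfgs_update(B, s, y)
```

The update subtracts the old curvature along `s` and replaces it with the observed curvature `y`, which is exactly the structure the derivation is meant to justify.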
3.
(a) State carefully and prove the first-order necessary condition for $f : \mathbb{R}^n \to \mathbb{R}$ to have a local minimum at $x = x^*$.
(b) Give an example to show that the first-order condition is only necessary, not sufficient.
(c) State carefully and prove the second-order necessary condition for $f$ to have a local minimum at $x = x^*$.
(d) Give an example to show that the second-order condition is only necessary, not sufficient.
4.
(a) Let $\{x_k\}$ be a sequence in $\mathbb{R}^n$, and suppose $x_k \rightarrow x^*$. Define "$x_k \rightarrow x^*$ q-quadratically."
(b) Let $f : \mathbb{R}^n \to \mathbb{R}$ be smooth. Newton's method for minimizing $f$ is locally q-quadratically convergent. State carefully and prove this theorem.
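A quick numerical illustration of q-quadratic convergence (my own toy example, not part of the question): Newton's method applied to a one-dimensional minimization problem with a nondegenerate minimizer, where the error is roughly squared at every step.

```python
# Minimize f(x) = x**4 / 4 - x.  Here f'(x) = x**3 - 1 and
# f''(x) = 3 * x**2, so the unique minimizer is x* = 1 with
# f''(x*) = 3 > 0.  Newton's iteration on f' squares the error.
errors = []
x = 2.0
for _ in range(8):
    x -= (x**3 - 1.0) / (3.0 * x**2)   # Newton step on f'
    errors.append(abs(x - 1.0))
```

Printing `errors` shows the number of correct digits roughly doubling each step once the iterate is near $x^*$.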
5. Suppose $f : \mathbb{R}^n \to \mathbb{R}$ is convex.
(a) State and prove a theorem indicating that the usual first-order necessary condition is, in this case, a sufficient condition.
(b) Prove that every local minimum of $f$ is, in fact, a global minimum.
6. Consider the equality-constrained problem
$$\min_{x} f(x) \quad \text{subject to} \quad g(x) = 0, \qquad (2)$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $g : \mathbb{R}^n \to \mathbb{R}^m$ are smooth functions.
(a) Explain how to apply the quadratic penalty method to (2). How does one obtain an estimate of the Lagrange multiplier?
(b) Explain how to apply the augmented Lagrangian method to (2). How does one obtain an estimate of the Lagrange multiplier?
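For part (a), here is a minimal numerical sketch of the quadratic penalty method on a toy problem of my own (with a crude fixed-step gradient-descent inner solver): minimize $x_1+x_2$ subject to $x_1^2+x_2^2=2$, whose solution is $x^*=(-1,-1)$ with Lagrange multiplier $\lambda^*=1/2$ for the Lagrangian $f+\lambda g$. The multiplier estimate is recovered as $\mu\,g(x_\mu)$.

```python
import numpy as np

# minimize f(x) = x1 + x2  subject to  g(x) = x1^2 + x2^2 - 2 = 0.
# Exact solution: x* = (-1, -1), Lagrange multiplier lambda* = 1/2.
def grad_penalty(x, mu):
    g = x @ x - 2.0
    return np.array([1.0, 1.0]) + mu * g * 2.0 * x  # grad of f + (mu/2) g^2

x = np.zeros(2)
for mu in (1.0, 10.0, 100.0):         # increasing penalty parameter
    lr = 1.0 / (40.0 * mu)            # crude fixed step for the inner solve
    for _ in range(20000):
        x = x - lr * grad_penalty(x, mu)
    lam_est = mu * (x @ x - 2.0)      # multiplier estimate from the penalty
```

As $\mu$ grows, the minimizer of the penalized problem drifts toward the constrained solution and $\mu\,g(x_\mu)$ converges to the multiplier, which is exactly the estimate the question asks about.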
7. Recall the definition of the norm $\|x\|$ of a vector $x \in \mathbb{R}^n$. Derive the formula for the corresponding induced matrix norm of $A \in \mathbb{R}^{n \times n}$, and prove that it is correct.
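As a concrete instance (the particular norm in the original problem statement may differ), for the $\infty$-norm the induced matrix norm works out to the maximum absolute row sum, attained at a sign vector. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))

# Induced infinity-norm: ||A||_inf = max_i sum_j |a_ij| (max row sum).
row_sum = np.abs(A).sum(axis=1).max()

# The supremum of ||A x||_inf over ||x||_inf = 1 is attained at the
# sign pattern of the worst row, verifying the formula numerically.
worst = np.abs(A).sum(axis=1).argmax()
x = np.sign(A[worst])
attained = np.abs(A @ x).max()
```

The sign-vector construction is also the heart of the "prove that it is correct" part: it exhibits a unit vector achieving the claimed supremum.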
8.
(a) What is the condition number of an invertible matrix $A \in \mathbb{R}^{n \times n}$?
(b) Explain how this condition number is related to the problem of computing the solution $x$ to $Ax=b$, where $b$ is regarded as the data of the problem, and $A$ is regarded as being known exactly.
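A small numerical illustration of part (b), using a toy example and the 2-norm: perturbing the data $b$ changes the computed solution by at most $\kappa(A)$ times the relative perturbation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

# Perturb only the data b (A known exactly); the relative error in x
# is bounded by kappa(A) times the relative perturbation in b.
db = 1e-8 * rng.standard_normal(n)
x_pert = np.linalg.solve(A, b + db)

rel_err = np.linalg.norm(x_pert - x_true) / np.linalg.norm(x_true)
bound = np.linalg.cond(A) * np.linalg.norm(db) / np.linalg.norm(b)
```

The bound follows from $\delta x = A^{-1}\delta b$ together with $\|b\| \le \|A\|\,\|x\|$.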
9. Consider the least-squares problem $Ax=b$, where $A \in \mathbb{R}^{m \times n}$ with $m>n$ and $b \in \mathbb{R}^m$ are given, and $x \in \mathbb{R}^n$ is to be determined.
(a) Assume that $A$ has full rank. Explain how to solve the least-squares problem using:
i. the normal equations;
ii. the QR factorization of $A$;
iii. the SVD of $A$.
In each case, your explanation must include a justification that the algorithm leads to the solution of the least-squares problem (e.g. explain why the solution of the normal equations is the solution of the least-squares problem).
(b) Discuss the advantages and disadvantages of each of the above methods. Which is the method of choice for the full rank least-squares problem?
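The three approaches in part (a) can be compared in a few lines (a sketch using NumPy; for a well-conditioned full-rank problem all three agree to machine precision):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 20, 5
A = rng.standard_normal((m, n))       # full rank with probability one
b = rng.standard_normal(m)

# (i) normal equations:  A^T A x = A^T b
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# (ii) reduced QR:  A = QR, then solve R x = Q^T b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# (iii) SVD:  A = U diag(s) V^T, then x = V diag(1/s) U^T b
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)
```

The differences the question asks about show up only for ill-conditioned $A$: the normal equations square the condition number, while QR and SVD work with $A$ directly.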
10.
(a) Give a simple example to show that Gaussian elimination without partial pivoting is unstable in finite-precision arithmetic. (Hint: The example can be as small as $2 \times 2$.)
(b) Using the concept of backward error analysis, explain the conditions under which Gaussian elimination with partial pivoting can be unstable in finite-precision arithmetic. (Note: This question does not ask you to perform a backward error analysis. Rather, you can quote standard results in your explanation.)
(c) Give an example to show that Gaussian elimination with partial pivoting can be unstable.
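A sketch of the classic tiny-pivot $2\times 2$ example for part (a) (`gauss_solve_2x2` is an ad hoc demo helper): without the row interchange the computed $x_1$ is completely wrong, while partial pivoting gives the right answer.

```python
import numpy as np

def gauss_solve_2x2(A, b, pivot):
    """Gaussian elimination on a 2x2 system, optionally with the
    row interchange of partial pivoting (ad hoc demo helper)."""
    A = A.astype(float)                 # astype copies, inputs untouched
    b = b.astype(float)
    if pivot and abs(A[1, 0]) > abs(A[0, 0]):
        A[[0, 1]] = A[[1, 0]]
        b[[0, 1]] = b[[1, 0]]
    m = A[1, 0] / A[0, 0]
    A[1] -= m * A[0]
    b[1] -= m * b[0]
    x1 = b[1] / A[1, 1]
    x0 = (b[0] - A[0, 1] * x1) / A[0, 0]
    return np.array([x0, x1])

# Tiny pivot: exact solution is approximately (1, 1).
A = np.array([[1e-20, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])
x_no = gauss_solve_2x2(A, b, pivot=False)   # first component lost
x_pp = gauss_solve_2x2(A, b, pivot=True)    # accurate
```

Without pivoting the multiplier $10^{20}$ swamps the entry $1$ in the update $1 - 10^{20}$, and the information needed to recover $x_1$ is destroyed.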
11.
(a) Suppose that $A \in \mathbb{R}^{n \times n}$ has eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ with $|\lambda_1| > |\lambda_2| \ge \cdots \ge |\lambda_n|$. Explain how to perform the power method, and under what conditions it converges to an eigenvalue.
(b) Explain the idea of simultaneous iteration.
(c) Explain the QR algorithm and its relationship to simultaneous iteration.
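A minimal power-method sketch for part (a), using a toy matrix with a known, well-separated dominant eigenvalue so that the convergence condition $|\lambda_1| > |\lambda_2|$ is guaranteed:

```python
import numpy as np

rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = Q @ np.diag([5.0, -3.0, 2.0, 1.0, -1.0, 0.5]) @ Q.T   # known spectrum

# Power method: repeated multiplication by A aligns the iterate with
# the dominant eigenvector, provided the start has a component along
# it; normalisation prevents overflow.
v = rng.standard_normal(6)
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v)
lam = v @ A @ v                     # Rayleigh quotient estimate of lambda_1
```

The error in the eigenvector shrinks like $(|\lambda_2|/|\lambda_1|)^k$, here $(3/5)^k$, which is the dependence on the eigenvalue gap that the explanation should bring out.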
12. Suppose that $A \in \mathbb{R}^{n \times n}$ is invertible, $B$ is an estimate of $A^{-1}$, and $AB = I+E$. Show that the relative error in $B$ is bounded by $\|E\|$ (using an arbitrary induced matrix norm).
13. Show that if $A$ is symmetric positive definite and banded, say $a_{ij} = 0$ for $|i-j| > p$, then the Cholesky factor $B$ of $A$ satisfies $b_{ij} = 0$ for $j > i$ or $j < i-p$.
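The band-preservation property in this question can be observed directly (an illustration, not a proof; the claim is that no fill-in occurs outside the band):

```python
import numpy as np

# Banded SPD matrix with bandwidth p: a_ij = 0 for |i - j| > p.
n, p = 10, 2
i, j = np.indices((n, n))
A = np.where(np.abs(i - j) <= p, 1.0 / (1.0 + np.abs(i - j)), 0.0)
A += n * np.eye(n)                  # diagonal dominance guarantees SPD

# The Cholesky factor inherits the band: zero below the p-th subdiagonal.
L = np.linalg.cholesky(A)           # A = L L^T, L lower triangular
```

In the notation of the question, `L` plays the role of $B$: every entry with $j > i$ or $j < i - p$ vanishes.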
14. Suppose that Gaussian elimination (without partial pivoting) is applied to a symmetric positive definite matrix $A \in \mathbb{R}^{n \times n}$. Write
$$E_{n-1} E_{n-2} \cdots E_1 A = U,$$
where $U$ is upper triangular and each $E_j$ is an elementary (lower triangular) matrix (left-multiplication by $E_j$ accomplishes the $j$th step of Gaussian elimination). None of the $E_j$s is a permutation matrix, that is, no row interchanges are performed. The purpose of this exercise is to prove that this is possible (i.e. that Gaussian elimination can be applied without row interchanges) and to prove the following inequality:
$$\max_{i,j} \left| \left( E_k \cdots E_1 A \right)_{ij} \right| \le \max_{i,j} |a_{ij}|, \qquad k = 1, \ldots, n-1.$$
Do this by proving the following three lemmas:
(a) Let $B$ be a symmetric positive definite matrix. Then $b_{ii} > 0$ for $i = 1, \ldots, n$, and the largest entry of $B$ (in magnitude) occurs on the diagonal.
(b) Let $A$ be a symmetric positive definite matrix, written in block form as
$$A = \begin{bmatrix} a_{11} & w^T \\ w & K \end{bmatrix},$$
and suppose one step of Gaussian elimination is applied to $A$ to obtain
$$\begin{bmatrix} a_{11} & w^T \\ 0 & \tilde{A} \end{bmatrix}, \qquad \tilde{A} = K - \frac{w w^T}{a_{11}}.$$
Then $\tilde{A}$ is also symmetric positive definite.
(c) Using the notation of the previous lemma, $\max_{i,j} |\tilde{a}_{ij}| \le \max_{i,j} |a_{ij}|$.
Now complete the proof by induction. (Note that this result both proves that no partial pivoting is required for a symmetric positive definite matrix, and also that Gaussian elimination is perfectly stable when applied to such a matrix.)
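The no-growth conclusion can be checked numerically on a random SPD matrix by tracking the largest entry through the elimination (an illustration only, not a substitute for the proof):

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((6, 6))
A = M @ M.T + 1e-3 * np.eye(6)          # symmetric positive definite

# Gaussian elimination without pivoting; record the largest entry
# (in magnitude) of every intermediate matrix.
U = A.copy()
max_entries = [np.abs(U).max()]
for k in range(5):
    for i in range(k + 1, 6):
        U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    max_entries.append(np.abs(U).max())
```

Every pivot stays positive (so no row interchanges are ever needed) and the maximal entry never grows, which is the floating-point stability statement in the parenthetical note.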
15. Let $A \in \mathbb{R}^{m \times n}$ have SVD $A=USV^T$, where $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthogonal and $S \in \mathbb{R}^{m \times n}$ is diagonal ($S_{ij}=0$ if $i \neq j$), with diagonal entries $\sigma_1 \ge \sigma_2 \ge \cdots \ge 0$. Define $S^{(k)}$ by
$$S^{(k)}_{ij} = \begin{cases} S_{ij}, & i = j \le k, \\ 0, & \text{otherwise,} \end{cases}$$
and define $A^{(k)}$ by
$$A^{(k)}=US^{(k)}V^T.$$
What is $\|A - A^{(k)}\|_2$, where $\|\cdot\|_2$ is the matrix norm induced by the Euclidean ($\ell_2$) vector norm? Prove your answer.
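The quantity in question equals the first discarded singular value, $\sigma_{k+1}$; a quick numerical spot check (sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((8, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]     # A^(k): keep k singular values

# The induced 2-norm of the difference is the first discarded
# singular value sigma_{k+1} (here s[k] in 0-based indexing).
err = np.linalg.norm(A - A_k, 2)
```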
Math Dept Webmaster
2003-08-28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.97694993019104, "perplexity": 556.2359378529358}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292887.6/warc/CC-MAIN-20160823195812-00011-ip-10-153-172-175.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/66014/why-probability-measures-in-ergodic-theory | # Why probability measures in ergodic theory?
I just had a look at Walters' introductory book on ergodic theory and was struck that the book always sticks to probability measures. Why is it the case that ergodic theory mainly considers probability measures? Is it that the important theorems, for example Birkhoff's ergodic theorem, are true only for probability measures? Or is it because of the relation with concepts from thermodynamics such as entropy?
I also wish to ask one more doubt; this one slightly more technical. Probability theory always works with the Borel sigma algebra; it is rarely the case that the sigma algebra is enlarged to the Lebesgue sigma algebra for the case of the real numbers (for defining random variables) or the unit circle, for instance. In ergodic theory, do we go by this restriction, or not? That is, when ignoring sets of measure zero, do we have that subsets of measure-zero sets are measurable?
Everything that works for probability measures should also probably work for finite measures (by mere normalization). As for infinite measure spaces, there is a well-developed theory in that case too. See Aaronson's monograph: amazon.com/… – Mark Sep 20 '11 at 10:58
The second question is addressed in this thread: mathoverflow.net/questions/31603/… – user18297 Dec 19 '11 at 20:57
The question isn't really about probability spaces, it's about finite measure. Usually the theory of classic ergodic theory (by classic I mean on finite measure spaces) is developed on probability spaces, but it also works on any finite measure space: just normalize the measure and everything will work fine. This hypothesis is really needed; some theorems don't work on spaces that don't have finite measure, e.g., the Poincaré Recurrence Theorem is no longer true if you open this possibility. (Just take the transformation defined on the real line by $T(x)=x+1$. It is measure preserving but it is not recurrent.)
Specifically on the Birkhoff Theorem: it is still valid on $\sigma$-finite spaces, but it doesn't give you much information about the limit. In fact, the Birkhoff averages converge to 0.
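The remark that Birkhoff averages tend to $0$ in infinite measure can already be seen for $T(x)=x+1$ on the real line with Lebesgue measure (my own toy illustration): for $f$ the indicator of $[0,1]$, the orbit meets $[0,1]$ at most twice, so the averages decay like $1/n$.

```python
# T(x) = x + 1 preserves Lebesgue measure on the real line (infinite
# total measure).  For f = indicator of [0, 1], the Birkhoff average
# (1/n) * sum_{k<n} f(T^k x) tends to 0: the orbit visits [0, 1] at
# most twice, so the average is O(1/n).
def birkhoff_average(x, n):
    hits = sum(1 for k in range(n) if 0.0 <= x + k <= 1.0)
    return hits / n

avg = [birkhoff_average(0.5, n) for n in (10, 100, 1000)]
```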
But there's a nice theory going on for $\sigma$-finite spaces with full measure infinity. Actually there is a nice book by Aaronson about infinite ergodic theory and some really good notes by Zweimüller. Things here change a bit, e.g., you don't have the property given by Poincaré recurrence (you have to ask for it as a definition). Some of the results try to change how you form the Birkhoff sum in order to get some additional information, and can be applied to computations with Markov chains. Another nice example that was the object of recent study is Boole's transformation, defined by \begin{eqnarray*} B: \mathbb{R} &\rightarrow& \mathbb{R} \\ x &\mapsto& \dfrac{x^2-1}{x} \end{eqnarray*}
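One can check numerically that Boole's transformation preserves Lebesgue measure: writing $B(x) = x - 1/x$, each $y$ has two preimages $x_\pm = (y \pm \sqrt{y^2+4})/2$, and the transfer-operator condition $\sum_\pm 1/|B'(x_\pm)| = 1$ holds identically (small sketch; `preimage_weight_sum` is just an illustrative name).

```python
import math

# Boole's transformation B(x) = (x^2 - 1)/x = x - 1/x preserves
# Lebesgue measure on R.  Transfer-operator criterion: for every y,
# the two preimages x_pm = (y pm sqrt(y^2 + 4))/2 satisfy
#   sum over preimages of 1/B'(x) = 1,  where B'(x) = 1 + 1/x^2 > 0.
def preimage_weight_sum(y):
    r = math.sqrt(y * y + 4.0)
    return sum(1.0 / (1.0 + 1.0 / (x * x))
               for x in ((y + r) / 2.0, (y - r) / 2.0))

checks = [preimage_weight_sum(y) for y in (-3.0, -0.7, 0.0, 1.2, 10.0)]
```

The identity follows from the product of the two roots being $-1$, so the two weights are $x^2/(x^2+1)$ and $1/(x^2+1)$ for the same $x$.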
I don't know if I made myself very clear, but I recommend those texts. You should try them; they develop this theory and address this kind of question.
Aaronson, J. - An Introduction to Infinite Ergodic Theory. Mathematical Surveys and Monographs, AMS, 1997.
Zweimüller, R. - Surrey Notes on Infinite Ergodic Theory. You can get it here
# Tree-level lepton universality violation in the presence of sterile neutrinos: impact for $R_K$ and $R_\pi$
https://hal.archives-ouvertes.fr/hal-00752375
Abstract : We consider a tree-level enhancement to the violation of lepton flavour universality in light meson decays arising from modified $W \ell \nu$ couplings in the standard model minimally extended by sterile neutrinos. Due to the presence of additional mixings between the active (left-handed) neutrinos and the new sterile states, the deviation from unitarity of the leptonic mixing matrix intervening in charged currents might lead to a tree-level enhancement of $R_{P} = \Gamma (P \to e \nu) / \Gamma (P \to \mu \nu)$, with $P=K, \pi$. We illustrate these enhancements in the case of the inverse seesaw model, showing that one can saturate the current experimental bounds on $\Delta r_{K}$ (and $\Delta r_{\pi}$), while in agreement with the different experimental and observational constraints.
Document type: Journal article
Journal of High Energy Physics, Springer, 2013, 1302, pp.048. 〈10.1007/JHEP02(2013)048〉
Contributor: Responsable Bibliotheque <>
Submitted on: Thursday, November 15, 2012 - 15:29:29
Last modified on: Thursday, March 15, 2018 - 09:44:05
### Citation
A. Abada, D. Das, A. M. Teixeira, A. Vicente, C. Weiland. Tree-level lepton universality violation in the presence of sterile neutrinos: impact for $R_K$ and $R_\pi$. Journal of High Energy Physics, Springer, 2013, 1302, pp.048. 〈10.1007/JHEP02(2013)048〉. 〈hal-00752375〉
# Every Group of Order 12 Has a Normal Subgroup of Order 3 or 4
https://yutsumura.com/every-group-of-order-12-has-a-normal-subgroup-of-order-3-or-4/
Problem 566
Let $G$ be a group of order $12$. Prove that $G$ has a normal subgroup of order $3$ or $4$.
Hint.
Use Sylow’s theorem.
(See Sylow’s Theorem (Summary) for a review of Sylow’s theorem.)
Recall that if there is a unique Sylow $p$-subgroup in a group $G$, then it is a normal subgroup in $G$.
Proof.
Since $12=2^2\cdot 3$, a Sylow $2$-subgroup of $G$ has order $4$ and a Sylow $3$-subgroup of $G$ has order $3$.
Let $n_p$ be the number of Sylow $p$-subgroups in $G$, where $p=2, 3$.
Recall that if $n_p=1$, then the unique Sylow $p$-subgroup is normal in $G$.
By Sylow’s theorem, we know that $n_2\mid 3$, hence $n_p=1, 3$.
Also by Sylow’s theorem, $n_3 \equiv 1 \pmod{3}$ and $n_3\mid 4$.
It follows that $n_3=1, 4$.
If $n_3=1$, then the unique Sylow $3$-subgroup is a normal subgroup of order $3$.
Suppose that $n_3=4$. Then there are four Sylow $3$-subgroups in $G$.
The order of each Sylow $3$-subgroup is $3$, and two distinct Sylow $3$-subgroups intersect trivially (the intersection consists of the identity element alone), since a group of order $3$ has no proper nontrivial subgroups.
Hence the two elements of order $3$ in each Sylow $3$-subgroup are not contained in any other Sylow $3$-subgroup.
Thus, there are $4\cdot 2=8$ elements of order $3$ in $G$ in total.
Since $|G|=12$, there are $12-8=4$ elements whose order is not $3$.
A Sylow $2$-subgroup contains four elements, none of which has order $3$, so it must consist exactly of these four remaining elements.
Hence there is just one Sylow $2$-subgroup, and it is therefore a normal subgroup of order $4$.
In either case, the group $G$ has a normal subgroup of order $3$ or $4$.
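The $n_3=4$ branch of the proof is realized concretely by the alternating group $A_4$: it has eight elements of order $3$, and the remaining four elements form the unique, hence normal, Sylow $2$-subgroup. A brute-force check (illustrative code, all names made up):

```python
from itertools import permutations

def compose(p, q):                     # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):                           # parity via inversion count
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
                     if p[i] > p[j])
    return (-1) ** inversions

def order(p):
    e, q, k = tuple(range(len(p))), p, 1
    while q != e:
        q, k = compose(q, p), k + 1
    return k

A4 = [p for p in permutations(range(4)) if sign(p) == 1]   # order 12
order3 = [p for p in A4 if order(p) == 3]      # the 4 * 2 = 8 elements
V = [p for p in A4 if order(p) != 3]           # identity + 3 double transpositions

# V is closed under composition and invariant under conjugation.
closed = all(compose(a, b) in V for a in V for b in V)
normal = all(compose(compose(g, v), inverse(g)) in V for g in A4 for v in V)
```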
http://mathoverflow.net/questions/67225/induced-fibration-of-eilenberg-maclane-spaces | # Induced fibration of Eilenberg-MacLane spaces
How does the inclusion $\mathbb Z\rightarrow \mathbb Q$ induce a fibration $K(\mathbb Z,n)\rightarrow K(\mathbb Q,n)$ with fibre $\Omega K(\mathbb Q/\mathbb Z,n)$?
This really isn't a great question, since it is not at all clear where your difficulty lies. One can define a functor $K(-, n)$ (as in my answer), and that's really all there is to it. (I had trouble deciding whether to answer at all, and whether this question would be better suited for math.stackexchange.com. It's definitely not a research-level question; see the faq.) – Todd Trimble Jun 8 '11 at 9:48
This is a very simple exercise. – Fernando Muro Jun 8 '11 at 9:57
I have now deleted my answer, since the question has been changed. You have to choose your models correctly to get this fiber in a point-set topology sense, but it isn't hard. – Todd Trimble Jun 8 '11 at 10:36
It's still the same very simple exercise I used to solve as an undergraduate student. – Fernando Muro Jun 8 '11 at 11:11
@Fernando Muro : very simple?? :) thanks anyway fernando. – palio Jun 8 '11 at 11:19
## 1 Answer
Probably the most functorial approach is to use the Dold-Kan equivalence $$F:\{\text{chain complexes}\} \to \{\text{simplicial abelian groups}\}.$$ Let $A_{\ast}$ denote the chain complex with just $\mathbb{Q}/\mathbb{Z}$ in dimension $n-1$, let $B_{\ast}$ be the one with a surjective differential from $\mathbb{Q}$ in dimension $n$ to $\mathbb{Q}/\mathbb{Z}$ in dimension $n-1$, and let $C_{\ast}$ be the one with just a $\mathbb{Q}$ in dimension $n$. There is an evident short exact sequence (and therefore fibration) $A_{\ast}\to B_{\ast}\to C_{\ast}$, which gives a fibration $|FA_{\ast}|\to |FB_{\ast}|\to |FC_{\ast}|$ of topological abelian groups. Here $|FA_{\ast}|$ and $|FC_{\ast}|$ are $K(\mathbb{Q}/\mathbb{Z},n-1)$ and $K(\mathbb{Q},n)$ essentially by definition, and it is easy to produce a weak equivalence from the corresponding model for $K(\mathbb{Z},n)$ to $|FB_{\ast}|$.
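For reference (our restatement, not part of the original answer), the coefficient sequence and the resulting fiber sequence can be written out explicitly:

```latex
% The short exact sequence of coefficient groups
\[
  0 \longrightarrow \mathbb{Z} \longrightarrow \mathbb{Q}
    \longrightarrow \mathbb{Q}/\mathbb{Z} \longrightarrow 0
\]
% induces the fiber sequence of Eilenberg--MacLane spaces
\[
  K(\mathbb{Q}/\mathbb{Z},\,n-1) \simeq \Omega K(\mathbb{Q}/\mathbb{Z},\,n)
    \longrightarrow K(\mathbb{Z},\,n) \longrightarrow K(\mathbb{Q},\,n),
\]
% whose long exact sequence in homotopy reproduces the coefficient
% sequence in degree n.
```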
https://www.authorea.com/users/9090/articles/9204-distinguishing-disorder-from-order-in-irreversible-decay-processes/_show_article | 02/14/2015
# Jonathan W. Nichols, Shane W. Flynn, William E. Fatherley, Jason R. Green Department of Chemistry, University of Massachusetts Boston
Abstract
Fluctuating rate coefficients are necessary when modeling disordered kinetic processes with mass-action rate equations. However, measuring the fluctuations of rate coefficients is a challenge, particularly for nonlinear rate equations. Here we present a measure of the total disorder in irreversible decay $$i\,A\to \textrm{products}$$, $$i=1,2,3,\ldots n$$ governed by (non)linear rate equations – the inequality between the time-integrated square of the rate coefficient (multiplied by the time interval of interest) and the square of the time-integrated rate coefficient. We apply the inequality to empirical models for statically and dynamically disordered kinetics with $$i\geq 2$$. These models serve to demonstrate that the inequality quantifies the cumulative variations in a rate coefficient, and the equality is a bound only satisfied when the rate coefficients are constant in time.
# Introduction
Rates are a way to infer the mechanism of kinetic processes, such as chemical reactions. They typically obey the empirical mass-action rate laws when the reaction system is homogeneous, with uniform concentration(s) throughout. Deviations from traditional rate laws are possible when the system is heterogeneous and there are fluctuations in structure, energetics, or concentrations. When traditional kinetic descriptions break down [insert citation], the process is statically and/or dynamically disordered [insert Zwanzig citation], and it is necessary to replace the rate constant in the rate equation with a time-dependent rate coefficient. Measuring the variation of time-dependent rate coefficients is a means of quantifying the fidelity of a rate coefficient and rate law.
In our previous work, a theory was developed for analyzing first-order irreversible decay kinetics through an inequality [insert citation]. The usefulness of this inequality lies in its ability to quantify disorder, with the unique property of becoming an equality only when the system is disorder-free and therefore described by chemical kinetics in its classical formulation. The next problem that should be addressed is that of higher-order kinetics: physical systems with more complex kinetic schemes require a modified theoretical framework for analysis. To motivate this type of development, systems such as ...... are known to proceed through higher-order kinetics, and these systems possess unique and interesting applications; a more complete kinetic description of them should therefore be pursued [insert citations].
Static and dynamic disorder lead to an observed rate coefficient that depends on time $$k(t)$$. The main result here, and in Reference[cite], is an inequality $\mathcal{L}(\Delta{t})^2 \leq \mathcal{J}(\Delta{t})$ between the statistical length (squared) $\mathcal{L}(\Delta{t})^2 \equiv \left[\int_{t_i}^{t_f}k(t)dt\right]^2$ and the divergence $\frac{\mathcal{J}(\Delta{t})}{\Delta{t}} \equiv \int_{t_i}^{t_f}k(t)^{2}dt$ over a time interval $$\Delta t = t_f - t_i$$. Both $$\mathcal{L}$$ and $$\mathcal{J}$$ are functions of a possibly time-dependent rate coefficient, originally motivated by an adapted form of the Fisher information[cite]. Reference 1 showed that the difference $$\mathcal{J}(\Delta t)-\mathcal{L}(\Delta t)^2$$ is a measure of the variation in the rate coefficient, due to static or dynamic disorder, for decay kinetics with a first-order rate law. The lower bound holds only when the rate coefficient is constant in first-order irreversible decay. Here we extend this result to irreversible decay processes with “order” higher than one. We show $$\mathcal{J}-\mathcal{L}^2=0$$ is a condition for a constant rate coefficient for any $$i$$. Accomplishing this end requires reformulating the definition of the time-dependent rate coefficient.
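As a quick numerical illustration of this bound (our own sketch, not from the paper), the difference $$\mathcal{J}-\mathcal{L}^2$$ vanishes for a constant rate coefficient and is strictly positive for a time-varying one:

```python
import numpy as np

def trapezoid(y, t):
    """Simple trapezoidal quadrature of y(t) on the grid t."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def length_sq_and_divergence(k, t):
    """L^2 = (int k dt)^2 and J = (t_f - t_i) * int k^2 dt."""
    L2 = trapezoid(k, t) ** 2
    J = (t[-1] - t[0]) * trapezoid(k ** 2, t)
    return L2, J

t = np.linspace(0.0, 5.0, 2001)

# Constant k(t): the bound is saturated, J - L^2 = 0 (up to round-off).
L2, J = length_sq_and_divergence(np.full_like(t, 0.7), t)
print(J - L2)

# Time-dependent (disordered) k(t): strict inequality, J - L^2 > 0.
L2, J = length_sq_and_divergence(0.7 * np.exp(-t), t)
print(J - L2)
```

The gap is the Cauchy-Schwarz defect of $$k(t)$$ against a constant function on the interval, which is why it closes only for a constant rate coefficient.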
In this work we extend the application of this inequality to measure disorder in irreversible decay kinetics with nonlinear rate laws (i.e., kinetics with total “order” greater than unity). We illustrate this framework with proof-of-principle analyses of second-order irreversible decay phenomena. We also connect this theory to previous work on first-order kinetics, showing how the model simplifies consistently for first-order models.
# Disordered and nonlinear irreversible kinetics
We consider the irreversible reaction types $i\,A \to \mathrm{products}\quad\quad\textrm{for}\quad i=1,2,3,\ldots,n$ with the nonlinear differential rate laws $\frac{dC_i(t)}{dt} = -k_i(t)\left[C_i(t)\right]^i.$ Experimental data is typically a concentration profile corresponding to the integrated rate law. If the concentration profile is normalized, by dividing the concentration at time $$t$$ by the initial concentration, it is called the survival function $S_i(t) = \frac{C_i(t)}{C_i(0)},$ the input to our theory. Namely, we define the effective rate coefficient, $$k_i(t)$$, through an appropriate time derivative of the survival function that depends on the order $$i$$ of the reaction: $k_i(t) \equiv \begin{cases} \displaystyle -\frac{d}{dt}\ln S_1(t) & \text{if } i = 1 \\[10pt] \displaystyle +\frac{d}{dt}\frac{1}{S_i(t)^{i-1}} & \text{if } i \geq 2. \end{cases}$
## Bound for rate constants
These forms of $$k(t)$$ satisfy the bound $$\mathcal{J}-\mathcal{L}^2 = 0$$ in the absence of disorder, when $$k_i(t)\to\omega_i$$. This is straightforward to show for the case of an $$i^{th}$$-order reaction ($$i\geq 2$$), with the traditional integrated rate law $\frac{1}{C_i(t)^{i-1}} = \frac{1}{C_i(0)^{i-1}}+(i-1)\omega_i t.$ and associated survival function $S_i(t) = \sqrt[i-1]{\frac{1}{1+(i-1)\omega_i tC_i(0)^{i-1}}}.$ In traditional kinetics, the rate coefficient of irreversible decay is assumed to be constant, in which case $$k(t)\to\omega_i$$, but this will not be the case when the kinetics are statically or dynamically disordered. In these cases, we will use the above definitions of $$k(t)$$.
The statistical length and divergence can also be derived for these irreversible decay reactions. The time-dependent rate coefficient is $k_i(t) \equiv \frac{d}{dt}\frac{1}{S_i(t)^{i-1}} = (i-1)\omega_i C_i(0)^{i-1}$ The statistical length $$\mathcal{L}_i$$ is the integral of the cumulative time-dependent rate coefficient over a period of time $$\Delta{t}$$, and the divergence is the cumulative square of the rate coefficient, multiplied by the time interval. For the equations governing traditional kinetics, both the statistical length squared and the divergence are $$(i-1)^2\omega_i^2\left(C_i^{i-1}(0)\right)^2\Delta t^2$$: the bound holds when there is no static or dynamic disorder, and a single rate coefficient $$\omega_i$$ is sufficient to characterize irreversible decay.
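The constancy of $$k_i(t)$$ for the classical survival function can be verified numerically; the following sketch (our own, with arbitrary illustrative values of $$\omega$$ and $$C(0)$$) differentiates the second-order survival function and recovers the constant $$(i-1)\omega_i C_i(0)^{i-1}$$:

```python
import numpy as np

omega, C0, i = 0.3, 2.0, 2                 # arbitrary illustrative values
t = np.linspace(0.0, 10.0, 1001)
# classical survival function for an i-th order irreversible decay
S = (1.0 + (i - 1) * omega * t * C0 ** (i - 1)) ** (-1.0 / (i - 1))

k = np.gradient(S ** -(i - 1), t)          # k_i(t) = d/dt S_i(t)^{-(i-1)}
print(np.allclose(k, (i - 1) * omega * C0 ** (i - 1)))  # True: k is constant
```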
The nonlinearity of the rate law leads to solutions that depend on concentration. This concentration dependence is also present in both $$\mathcal{J}$$ and $$\mathcal{L}$$.
http://physics.stackexchange.com/questions/14138/why-exactly-does-current-carrying-two-current-wires-attract-repel | # Why exactly does current carrying two current wires attract/repel?
When two parallel wires carry currents I1 and I2 in the same direction, they attract each other.
http://www.youtube.com/watch?v=43AeuDvWc0k this video demonstrates that effect.
My question is, why exactly does this happen?
I know the reason, but I'm not convinced by it. One wire generates a magnetic field that points into the plane at the location of the other wire. The electrons moving in that wire experience the Lorentz force F = q(v × B).
My arguments are,
1. this force is experienced by the electrons, not the nuclei. And the electrons that are in motion are the "free electrons". So, when they experience a force, they alone should drift towards/away from the wire, not the entire atom(s).
2. The only force binding an electron to the material/matter is the Coulomb attraction from the nucleus. If the Lorentz force is sufficiently large, then it should be able to remove electrons from atoms. In other words, they should come out of the material.
But I have never heard/read of anything like that happening. Why doesn't this happen?
In any case, if atoms must not experience any force, then why is it that the entire wire experiences a force of i(L × B)?
You are right in both arguments. The thing is just that this "only force, ... the Coulomb attraction" is very much stronger than the Lorentz force due to the magnetic field of a single wire carrying current in the same direction.
As for "In any case, atoms must not experience any force", this is obviously wrong, as can be seen very plainly when you think of Newton's third law and the fact that the Coulomb attraction occurs between the electrons and nuclei in the wire in question.
Your question assumes that the electrons are weakly interacting with the nucleus. The interaction with the nucleus is extremely strong. It is better to ask instead why we have conductivity at all. Electrons are so tightly bound to the nuclei of atoms; why should a tiny external electric field get them moving?
The answer is that quantum mechanical effects can spread out electrons over many atoms. This is responsible for chemical bonding. In metals, the electrons have a spread out wavefunction, and the energy-band of spread-out electron states is only partly filled, so it only takes a little bit of energy to push an electron into motion.
But for your original question, there is an easy way to see the answer. Consider two infinite charged wires 1 cm apart. You know that they repel, so they move apart. Now boost to a frame moving along the wires at a huge speed, near the speed of light. Relativistic time dilation slows down the rate at which they move apart. But the charge density has gone up in this frame, because of the length contraction. So there must be an additional attractive force due to the currents in the wires. In the limit that you are moving at the speed of light, the attractive like-current force must exactly cancel the repulsive like-charge electrostatic force.
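The cancellation can be made quantitative with a short calculation (our own sketch, not from the answer): for two parallel line charges λ moving lengthwise at speed v, each carries current I = λv, and the ratio of magnetic attraction to electrostatic repulsion per unit length comes out exactly v²/c²:

```python
import math

eps0 = 8.8541878128e-12          # vacuum permittivity, F/m
mu0 = 4e-7 * math.pi             # vacuum permeability, H/m
c = 1.0 / math.sqrt(mu0 * eps0)  # speed of light

lam, d = 1e-6, 0.01              # line charge density (C/m), separation (m)
for v in (0.1 * c, 0.5 * c, 0.9 * c):
    I = lam * v
    F_B = mu0 * I**2 / (2 * math.pi * d)     # attraction between parallel currents
    F_E = lam**2 / (2 * math.pi * eps0 * d)  # repulsion between line charges
    print(round(F_B / F_E, 6), round((v / c) ** 2, 6))  # the two columns agree
```

Since F_B/F_E = μ0ε0v² = v²/c², the magnetic attraction approaches the electrostatic repulsion only as v → c, matching the boosted-frame argument above.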
Consider two infinite charged wires 1 cm apart, held together by a series of springs spaced 1 cm apart along the wire (1 spring per cm). Now boost to a frame moving along the wires at .866 c. The Lorentz factor is 2, so charge density doubles in the new frame because of length contraction. The spring density also doubles (2 springs per cm) to exactly cancel the increase in the repulsive electrostatic force. If there is an additional attractive force-per-meter due to the current in this frame, there must also be an additional repulsive force-per-meter we haven't considered. Right? – Nick Jan 7 at 10:23
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3568022/?tool=pubmed | BMC Syst Biol. 2012; 6: 142.
Published online Nov 21, 2012.
PMCID: PMC3568022
# Incremental parameter estimation of kinetic metabolic network models
## Abstract
### Background
An efficient and reliable parameter estimation method is essential for the creation of biological models using ordinary differential equation (ODE). Most of the existing estimation methods involve finding the global minimum of data fitting residuals over the entire parameter space simultaneously. Unfortunately, the associated computational requirement often becomes prohibitively high due to the large number of parameters and the lack of complete parameter identifiability (i.e. not all parameters can be uniquely identified).
### Results
In this work, an incremental approach was applied to the parameter estimation of ODE models from concentration time profiles. Particularly, the method was developed to address a commonly encountered circumstance in the modeling of metabolic networks, where the number of metabolic fluxes (reaction rates) exceeds that of metabolites (chemical species). Here, the minimization of model residuals was performed over a subset of the parameter space that is associated with the degrees of freedom in the dynamic flux estimation from the concentration time-slopes. The efficacy of this method was demonstrated using two generalized mass action (GMA) models, where the method significantly outperformed single-step estimations. In addition, an extension of the estimation method to handle missing data is also presented.
### Conclusions
The proposed incremental estimation method is able to tackle the issue on the lack of complete parameter identifiability and to significantly reduce the computational efforts in estimating model parameters, which will facilitate kinetic modeling of genome-scale cellular metabolism in the future.
Keywords: Incremental parameter estimation, Kinetic modeling, Metabolic network, GMA model
## Background
The estimation of unknown kinetic parameters from time-series measurements of biological molecules is a major bottleneck in the ODE model building process in systems biology and metabolic engineering [1]. The majority of current estimation methods involve simultaneous (single-step) parameter identification, where model prediction errors are minimized over the entire parameter space. These methods often rely on global optimization methods, such as simulated annealing, genetic algorithms and other evolutionary approaches [1-3]. The problem of obtaining the best-fit parameter estimates however, is typically ill-posed due to issues related with data informativeness, problem formulation and parameter correlation, all of which contribute to the lack of complete parameter identifiability. Not to mention, finding the global minimum of model residuals over highly multidimensional parameter space is challenging and can become prohibitively expensive to perform on a computer workstation, even for tens of parameters.
Here, we consider the modeling of cellular metabolism using the canonical power-law formalism, specifically the generalized mass action (GMA) systems [4,5]. The power-law formalism has many advantages, which have been detailed elsewhere [1,6]. Notably, power laws have a relatively simple structure that permits algebraic manipulation in the logarithmic scale, but nonetheless is capable of describing essentially any nonlinearity. Regulatory interactions among metabolites can also be described straightforwardly through the kinetic order parameters, establishing an equivalence between structural identification and parametric estimation. However, the number of parameters increases proportionally with the number of metabolites and fluxes, leading to a large-scale parameter identification problem, one where single-step estimation methods often struggle to converge.
The integration of ODE often constitutes a major part of the computational cost in the parameter estimation, especially when the ODE model is stiff [7]. While stiffness can genuinely arise due to a large time scale separation of the reaction kinetics in the real system, stiff ODEs could also result from unrealistic combinations of parameter values during the parameter optimization procedure, especially when a global optimizer is used. The parameter estimation of ODE models using power-law kinetics is particularly prone to stiffness problem since many of the unknown parameters are the exponents of the concentrations. For this reason, alternative formulations have been proposed that avoid these ODE integrations either completely [7,8] or partially [9-11]. Particularly, computational cost could be significantly reduced by decomposing the estimation problem into two phases, starting with the calculation of dynamic reaction rates or fluxes from the slopes of concentration data, followed by the least square regressions of kinetic parameters [12-14]. In this case, the final parameter estimation is done one flux at a time, each involving only a handful of parameters and thus, the global minimum solution can be either computed analytically (for example, when using log-linear power-law flux functions) or determined efficiently. Moreover, as the first estimation phase (flux estimation) depends only on the assumption of the topology of the metabolic network, the flux estimates can subsequently be used to guide the selection of the most appropriate flux functions for the second phase or to detect inconsistencies in the assumed topology of the network separately from the flux equations [14]. However, the application of this method requires the number of metabolites to be equal to or larger than that of fluxes, so that the flux estimation can result in a unique solution. 
Since the reverse situation is more commonly encountered in the typical metabolic networks, a generalization of this incremental estimation approach becomes the main focus in this study.
As noted above, the new parameter estimation method in this work is built on the concept of incremental identification [12,13] or dynamical flux estimation (DFE) method [14,15]. The proposed method provides two new contributions: (1) an ability to handle the more general scenario, where the number of reactions exceeds that of the metabolites and (2) high numerical efficiency through the reduction of the parameter search space. Specifically, two parameter estimation formulations are proposed with objective functions that depend on model prediction errors of metabolite concentrations and of concentration time-slopes. An extension of this strategy to circumstances where concentration data of some metabolites are missing is also presented. The proposed method is applied to two previously published GMA models and compared with single-step estimation methods, in order to demonstrate its efficacy.
## Methods
The generalized mass action model of cellular metabolism describes the mass balance of metabolites, taking into account all metabolic influxes and effluxes and their stoichiometric ratios, as follows:
$\frac{d\mathbf{X}(t,\mathbf{p})}{dt} = \dot{\mathbf{X}}(t,\mathbf{p}) = \mathbf{S}\,\mathbf{v}(\mathbf{X},\mathbf{p}),$
(1)
where $\mathbf{X}(t,\mathbf{p})$ is the vector of metabolic concentration time profiles, $\mathbf{S}\in\mathbb{R}^{m\times n}$ is the stoichiometric matrix for $m$ metabolites that participate in $n$ reactions, and $\mathbf{v}(\mathbf{X},\mathbf{p})$ denotes the vector of metabolic fluxes (i.e. reaction rates). Here, each flux is described by a power-law equation:
$v_j(\mathbf{X},\mathbf{p}) = \gamma_j \prod_i X_i^{f_{ji}},$
(2)
where $\gamma_j$ is the rate constant of the $j$-th flux and $f_{ji}$ is the kinetic order parameter, representing the influence of metabolite $X_i$ on the $j$-th flux (positive: $X_i$ is an activating factor or a substrate; negative: $X_i$ is an inhibiting factor). In incremental parameter identification, a data pre-processing step (e.g. smoothing or filtering) is usually applied to the noisy time-course concentration data $\mathbf{X}_m(t_k)$, in order to improve the time-slope estimates $\dot{\mathbf{X}}_m(t_k)$. Subsequently, the dynamic metabolic fluxes $\mathbf{v}(t_k)$ are estimated from Equation (1) by substituting $\dot{\mathbf{X}}(t)$ with $\dot{\mathbf{X}}_m(t_k)$. Finally, the kinetic parameters associated with the $j$-th flux (i.e. $\gamma_j$ and the $f_{ji}$'s) can be calculated using a least square regression of the power-law flux function in Equation (2) against the estimated $v_j(t_k)$. Note that for GMA models, the least square parameter regressions in the last step are linear in the logarithmic scale and thus can be performed very efficiently.
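As an illustration of that final regression step (a self-contained sketch with synthetic, noise-free data; this is not the paper's code), fitting $\gamma_j$ and the $f_{ji}$'s reduces to ordinary least squares after taking logarithms:

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_met = 50, 2
X = rng.uniform(0.5, 2.0, size=(K, n_met))     # concentration data at K time points
gamma_true, f_true = 1.5, np.array([0.8, -0.4])
v = gamma_true * np.prod(X ** f_true, axis=1)  # "estimated" dynamic flux v_j(t_k)

# ln v_j = ln gamma_j + sum_i f_ji ln X_i  ->  linear least squares
A = np.column_stack([np.ones(K), np.log(X)])
coef, *_ = np.linalg.lstsq(A, np.log(v), rcond=None)
gamma_hat, f_hat = np.exp(coef[0]), coef[1:]
print(gamma_hat, f_hat)  # recovers ~1.5 and [0.8, -0.4] on noise-free data
```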
A unique set of dynamic flux values $\mathbf{v}(t_k)$ can only be computed from $\dot{\mathbf{X}}_m(t_k)=\mathbf{S}\mathbf{v}(t_k)$ when the number of metabolites exceeds that of fluxes. However, a metabolite in general can participate in more than one metabolic flux ($m<n$). In such a situation, there exist an infinite number of dynamic flux combinations $\mathbf{v}(t_k)$ that satisfy $\dot{\mathbf{X}}_m(t_k)=\mathbf{S}\mathbf{v}(t_k)$. The dimensionality of the set of flux solutions is equal to the degree of freedom (DOF), given by the difference between the number of fluxes and the number of metabolites: $n_{DOF}=n-m>0$ (assuming $\mathbf{S}$ has full row rank, i.e. there is no redundant ODE in Equation (1)). The positive DOF means that the values of $n_{DOF}$ selected fluxes can be independently set, from which the remaining fluxes can be computed. This relationship forms the basis of the proposed estimation method, in which the model goodness of fit to data is optimized by adjusting only the subset of parameters associated with the independent fluxes above.
Specifically, we start by decomposing the fluxes into two groups: $\mathbf{v}(t_k)=[\,\mathbf{v}_I(t_k)^T\;\mathbf{v}_D(t_k)^T\,]^T$, where the subscripts $I$ and $D$ denote the independent and dependent subsets, respectively. Then, the parameter vector $\mathbf{p}$ and the stoichiometric matrix $\mathbf{S}$ can be structured correspondingly as $\mathbf{p}=[\,\mathbf{p}_I\;\mathbf{p}_D\,]$ and $\mathbf{S}=[\,\mathbf{S}_I\;\mathbf{S}_D\,]$. The relationship between the independent and dependent fluxes can be formulated by rearranging $\dot{\mathbf{X}}_m(t_k)=\mathbf{S}\mathbf{v}(t_k)$ into:
$\mathbf{v}_D(t_k) = \mathbf{S}_D^{-1}\left[\dot{\mathbf{X}}_m(t_k) - \mathbf{S}_I\,\mathbf{v}_I\!\left(\mathbf{X}_m(t_k),\mathbf{p}_I\right)\right].$
(3)
In this case, given $\mathbf{p}_I$, one can compute the independent fluxes $\mathbf{v}_I(\mathbf{X}_m(t_k),\mathbf{p}_I)$ using the concentration data $\mathbf{X}_m(t_k)$, and subsequently obtain $\mathbf{v}_D(t_k)$ from Equation (3). Finally, $\mathbf{p}_D$ can be estimated by a simple least square fitting of $\mathbf{v}_D(\mathbf{X}_m(t_k),\mathbf{p}_D)$ to the computed $\mathbf{v}_D(t_k)$, one flux at a time, when there are more time points than the number of parameters in each flux.
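A minimal numerical sketch of this incremental step on a toy linear pathway (our own example; the network, values, and variable names are illustrative, not from the paper):

```python
import numpy as np

# Toy network: 2 metabolites, 3 fluxes (influx -> X1 -> X2 -> efflux).
# Columns of S correspond to fluxes, rows to metabolites.
S = np.array([[ 1.0, -1.0,  0.0],
              [ 0.0,  1.0, -1.0]])
I_idx, D_idx = [0], [1, 2]          # choose flux 0 as the independent flux
S_I, S_D = S[:, I_idx], S[:, D_idx]

x_dot = np.array([0.2, -0.1])       # measured concentration slopes at one time point
v_I = np.array([0.5])               # candidate value of the independent flux
v_D = np.linalg.solve(S_D, x_dot - S_I @ v_I)   # Equation (3)
print(v_D)                          # ~ [0.3, 0.4]

# The full flux vector reproduces the measured slopes exactly:
v = np.empty(3); v[I_idx] = v_I; v[D_idx] = v_D
assert np.allclose(S @ v, x_dot)
```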
In this study, two formulations of the parameter estimation of ODE models in Equation (1) are investigated, involving the minimization of concentration and slope errors. The objective function for the concentration error is given by
$\Phi_C(\mathbf{p},\mathbf{X}) = \frac{1}{mK}\sum_{k=1}^{K}\left[\mathbf{X}_m(t_k)-\mathbf{X}(t_k,\mathbf{p})\right]^T\left[\mathbf{X}_m(t_k)-\mathbf{X}(t_k,\mathbf{p})\right]$
(4)
and that for the slope error is given by
$\Phi_S(\mathbf{p},\mathbf{X}) = \frac{1}{mK}\sum_{k=1}^{K}\left[\dot{\mathbf{X}}_m(t_k)-\mathbf{S}\mathbf{v}(\mathbf{X}_m(t_k),\mathbf{p})\right]^T\left[\dot{\mathbf{X}}_m(t_k)-\mathbf{S}\mathbf{v}(\mathbf{X}_m(t_k),\mathbf{p})\right],$
(5)
where $K$ denotes the total number of measurement time points and $\mathbf{X}(t_k,\mathbf{p})$ is the concentration prediction (i.e. the solution of the ODE model in Equation (1)). Figure 1 describes the formulation of the incremental parameter estimation and the procedure for computing the objective functions. Note that the computation of $\Phi_C$ requires an integration of the ODE model and thus, the estimation using this objective function is expected to be computationally costlier than that using $\Phi_S$. On the other hand, metabolic mass balance is only approximately satisfied at discrete time points $t_k$ during the parameter estimation using $\Phi_S$, as the ODE model is not integrated.
Figure 1. Flowchart of the incremental parameter estimation.
There are several important practical considerations in the implementation of the proposed method. The first consideration is the selection of the independent fluxes. Here, the set of these fluxes is selected such that (i) the $m\times m$ submatrix $\mathbf{S}_D$ is invertible, (ii) the total number of the independent parameters $\mathbf{p}_I$ is small, and (iii) the prior knowledge of the corresponding $\mathbf{p}_I$ is maximized. The last two aspects should lead to a reduction in the parameter search space and in the cost of finding the global optimal solution of the minimization problem in Figure 1. The second consideration concerns constraints in the parameter estimation. Biologically relevant values of parameters are often available, providing lower and/or upper bounds for the parameter estimates. In addition, enzymatic reactions in the ODE model are often assumed to be irreversible and thus, dynamic flux estimates are constrained to be positive. Hence, the parameter estimation involves a constrained minimization problem, for which many global optimization algorithms exist.
So far, we have assumed that the time-course concentration data are available for all metabolites. However, the method above can be modified to accommodate more general circumstances, in which data for one or several metabolites are missing. In this case, the ODE model is first rewritten to separate the mass balances associated with measured and unmeasured metabolites, such that
$\dot{X}(t, p) = \begin{bmatrix} \dot{X}_M \\ \dot{X}_U \end{bmatrix}(t, p) = \begin{bmatrix} S_M \\ S_U \end{bmatrix} v(X_M, X_U, p)$
(6)
where the subscripts M and U refer to components that correspond to measured and unmeasured metabolites, respectively. Again, if the fluxes are split into two categories vI and vD as above, the following relationship still applies for the measured metabolites:
$v_D(t_k) = S_{D,M}^{-1} \left[ \dot{X}_M(t_k) - S_{I,M} v_I(t_k) \right]$
(7)
Naturally, the number of degrees of freedom associated with the dynamic flux estimation is higher than before, by the number of components in XU. Figure 2 presents a modification of the parameter estimation procedure in Figure 1 to handle the case of missing data, in which an additional step involving the simulation of the unmeasured metabolites, $\dot{X}_U = S_U v(X_M, X_U, p)$, is performed. In this integration, XM is treated as an external variable, whose time profiles are interpolated from the measured concentrations. The set of independent fluxes vI is now selected to include all fluxes that appear in $\dot{X}_U$ as well as those that give SD,M full column rank. If SD,M is a non-square matrix, then a pseudo-inverse is used in Equation (7). Of course, the same considerations mentioned above are equally relevant in this case. Note that the initial conditions of XU also need to be estimated.
Flowchart of the incremental parameter estimation when metabolites are not completely measured.
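The Equation (7) computation for the missing-metabolite case can be sketched with a pseudo-inverse, which also covers a non-square SD,M. The 3-metabolite chain and all numbers below are illustrative assumptions:

```python
import numpy as np

# Toy chain with 3 metabolites and 4 fluxes; suppose X3 is unmeasured.
S = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0, 0.0],
              [0.0, 0.0, 1.0, -1.0]])

measured = [0, 1]   # rows for X1, X2
indep = [2, 3]      # v3 and v4 appear in the X3 balance -> put in v_I
dep = [0, 1]

S_M = S[measured, :]
SD_M = S_M[:, dep]
SI_M = S_M[:, indep]

def dependent_fluxes_missing(Xdot_M, vI):
    """Equation (7) with a pseudo-inverse:
    v_D = pinv(S_{D,M}) (Xdot_M - S_{I,M} v_I)."""
    return np.linalg.pinv(SD_M) @ (Xdot_M - SI_M @ vI)

# Steady-state check: all fluxes equal to 1 give zero measured slopes,
# so v_D should be recovered as [1, 1].
vD = dependent_fluxes_missing(np.zeros(2), np.array([1.0, 1.0]))
```

Note how the independent set is chosen exactly as the text prescribes: it contains every flux that appears in the unmeasured balance, and the remaining columns of S_M have full column rank.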
## Results
Two case studies, a generic branched pathway [7] and the glycolytic pathway of L. lactis [16], were used to evaluate the performance of the proposed estimation method. In addition, simultaneous estimation methods employing the same objective functions in Equations (4) and (5) were applied to these case studies, to gauge the reduction in computational cost from using the proposed strategy. In order to alleviate the ODE stiffness issue, parameter combinations that led to a violation of the MATLAB (ode15s) integration time-step criterion were assigned a large error value (ΦC = 10^3 for the branched pathway and 10^5 for the glycolytic pathway). Alternatively, one could also set a maximum allowable integration time and penalize the associated parameter values upon violation, as described above. In this study, the optimization problems were solved in MATLAB using the publicly available eSSM GO (Enhanced Scatter Search Method for Global Optimization) toolbox, a population-based metaheuristic global optimization method incorporating probabilistic and deterministic strategies [17,18]. The MATLAB codes of the case studies below are available in Additional file 1. Each parameter estimation was repeated five times to ensure the reliability of the global optimal solution. Unless noted otherwise, the iterations of the optimization algorithm were terminated when the values of the objective functions improved by less than 0.01% or the runtime exceeded the maximum duration (5 days).
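The stiffness workaround described above can be sketched as a wrapper that returns the penalty value whenever the integrator fails to reach the final time point. The wrapper accepts any right-hand-side function; the blow-up example used to trigger a failure is an illustrative assumption:

```python
import numpy as np
from scipy.integrate import solve_ivp

PENALTY = 1.0e3  # the value used for the branched pathway in the text

def safe_phi_C(rhs, p, t_meas, X_meas, X0):
    """Concentration-error objective with the stiffness penalty: if the
    solver cannot reach the final time point, return a large constant
    instead of a (meaningless) residual."""
    sol = solve_ivp(rhs, (t_meas[0], t_meas[-1]), X0, args=(p,),
                    t_eval=t_meas)
    if not sol.success or sol.y.shape[1] != t_meas.size:
        return PENALTY
    resid = X_meas - sol.y.T
    return np.sum(resid ** 2) / X_meas.size

t_meas = np.linspace(0.0, 2.0, 5)
X_meas = np.exp(-t_meas)[:, None]

# Benign case: X' = -p X integrates cleanly.
ok = safe_phi_C(lambda t, X, p: -p * X, 1.0, t_meas, X_meas,
                np.array([1.0]))

# Pathological case: X' = p X^2 blows up at t = 1, before the end of
# the integration window, so the solver fails and the penalty returns.
bad = safe_phi_C(lambda t, X, p: p * X ** 2, 1.0, t_meas, X_meas,
                 np.array([1.0]))
```

The same idea extends to the alternative mentioned in the text (a wall-clock limit on the integration), by timing the `solve_ivp` call and returning the penalty on timeout.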
### A generic branched pathway
The generic branched pathway in this example consists of four metabolites and six fluxes describing the transformations among the metabolites (double-line arrows), with feedback activation and inhibition (dashed arrows with plus or minus signs, respectively), as shown in Figure 3A. The GMA model of this pathway is given in Figure 3B, containing a total of thirteen rate constants and kinetic orders. This model, with the parameter values and initial conditions reported previously [7], was used to generate noise-free and noisy time-course concentration data (i.i.d. additive noise from a Gaussian distribution with 10% coefficient of variation). The noisy data were smoothened using a 6th-order polynomial, which provided the best relative goodness of fit among polynomials according to the Akaike Information Criterion (AIC) [19] and adjusted R2 [20]. Subsequently, time slopes of the noise-free and smoothened noisy data were computed using the central finite difference approximation.
A generic branched pathway. (A) Metabolic pathway map and (B) the GMA model equations [7].
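The smoothing-and-differentiation step can be sketched as follows. The decaying-exponential test signal and the particular AIC variant (Gaussian residuals, up to an additive constant) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for one metabolite's time course.
t = np.linspace(0.0, 5.0, 41)
x_true = 2.0 * np.exp(-0.8 * t) + 0.5
x_noisy = x_true * (1.0 + 0.10 * rng.standard_normal(t.size))  # 10% CV

def aic(y, yhat, n_params):
    """AIC for Gaussian residuals, up to an additive constant."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_params

def poly_smooth(y, orders=range(2, 9)):
    """Fit polynomials of several orders and keep the one with the
    lowest AIC, mirroring the order selection described in the text."""
    best = min(orders,
               key=lambda d: aic(y, np.polyval(np.polyfit(t, y, d), t),
                                 d + 1))
    return np.polyval(np.polyfit(t, y, best), t), best

x_smooth, order = poly_smooth(x_noisy)
xdot = np.gradient(x_smooth, t)  # central differences (one-sided at ends)
```

Differentiating the smoothed curve rather than the raw data is the point of the exercise: the polynomial absorbs most of the measurement noise that finite differences would otherwise amplify.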
Here, v1 and v6 were chosen as the independent fluxes as they comprise the least number of kinetic parameters and lead to an invertible SD. The two rate constants and two kinetic orders were constrained to within [0,25] and [0,2], respectively. In addition, all the reactions are assumed to be irreversible.
Table 1 compares simultaneous and incremental parameter estimation runs using noise-free data, employing the two objective functions above. Regardless of the objective function, the proposed incremental approach significantly outperformed the simultaneous estimation. When using the concentration-error minimization, the simultaneous optimization had great difficulty converging due to stiff ODE integrations. Only one out of five repeated runs could complete after relaxing the convergence criterion of the objective function to 1%, while the others were prematurely terminated after the prescribed maximum runtime of 5 days. In contrast, the proposed incremental estimation was able to find a minimum of ΦC in less than 96 seconds on average, with good concentration fit and parameter accuracy (see Figure 4A and Table 1). By avoiding ODE integrations using ΦS, the simultaneous estimation of parameters could be completed in roughly 10 minutes, but this was still much slower than the incremental estimation using ΦC. In this case, the incremental method was able to converge in under 2 seconds, or over 250 times faster. The goodness of fit to the concentration data and the accuracy of the parameter estimates were roughly equal for all three completed estimations (see Figure 4B and Table 1). The parameter inaccuracy in this case was mainly due to the polynomial smoothing of the concentration data, since the same estimations using the analytical values of the slopes (obtained by evaluating the right-hand side of the ODE model in Equation (1)) gave accurate parameter estimates (see Additional file 2: Table S1).
Parameter estimations of the branched pathway model using noise-free data
Simultaneous and incremental estimation of the branched pathway using in silico noise-free data (×). (A) concentration predictions using parameter estimates from incremental method by ΦC minimization (–––); (B) ...
Table 2 provides the results of the same estimation procedures as above using noisy data. Data noise led to a loss of information and an expected decline in parameter accuracy. As before, the simultaneous estimation using ΦC encountered the stiffness problem, and three out of five runs did not finish within the five-day time limit. The incremental approach using either objective function offered a significant reduction in computational time over the simultaneous estimation using ΦS, while providing comparable parameter accuracy and concentration and slope fits (see Figure 5 and Table 2). In this example, data noise did not affect the computational cost of obtaining the (global) minimum of the objective functions.
Parameter estimations of the branched pathway model using noisy data
Simultaneous and incremental estimation of the branched pathway using in silico noisy data (×). (A) concentration predictions using parameter estimates from incremental method by ΦC minimization (–––); (B) concentration ...
Finally, the estimation strategy described in Figure 2 was applied to this example using noise-free data and assuming the X3 data were missing. Fluxes v3 and v4, which appear in $\dot{X}_3$, were chosen to be among the independent fluxes, and flux v1 was also added to the set so that the dependent fluxes could be uniquely determined from Equation (7). In addition to the parameters associated with the aforementioned fluxes, the initial condition X3(t0) was also estimated. The bounds for the rate constants and kinetic orders were kept the same as above, while the initial concentration was bounded within [0, 5].
Table 3 summarizes the parameter estimation results. Four out of five repeated runs of the ΦC simultaneous optimization were again prematurely terminated after 5 days. Meanwhile, the remaining estimations provided reasonably good data fitting, with the expected exception of the fit to the X3 data (see Figure 6). Like data noise, missing data led to increased inaccuracy of the parameter estimates, regardless of the estimation method. Finally, the computational speedup from using the incremental over the simultaneous estimation was significant, but lower than in the previous runs due to the additional integration of XU and the larger number of independent parameters. The detailed values of the parameter estimates in this case study can be found in Additional file 2: Tables S2 and S3.
Parameter estimations of the branched pathway model using noise-free data with X3 missing
Simultaneous and incremental estimation of the branched pathway with missing X3: in silico noisy-free data (×). (A) concentration predictions using parameter estimates from incremental method by ΦC minimization (---); (B) concentration ...
### The glycolytic pathway in Lactococcus lactis
The second case study was taken from the GMA modeling of the glycolytic pathway in L. lactis [16], involving six internal metabolites: glucose 6-phosphate (G6P) – X1, fructose 1,6-bisphosphate (FBP) – X2, 3-phosphoglycerate (3-PGA) – X3, phosphoenolpyruvate (PEP) – X4, Pyruvate – X5, Lactate – X6, and nine metabolic fluxes. In addition, external glucose (Glu), ATP, and Pi are treated as off-line variables, whose values were interpolated from measurement data. The pathway connectivity is given in Figure 7A, while the model equations are provided in Figure 7B.
L. lactis glycolytic pathway. (A) Metabolic pathway map (Double-lined arrows: flow of material; dashed arrows with plus or minus signs: activation or inhibition, respectively) and (B) the GMA model equations [16].
The time-course concentration data of all metabolites were measured using in vivo NMR [21,22], and the smoothened data used for the parameter estimations below are shown in Figure 8. The raw data had been filtered previously [16], and the smoothened data for all metabolites but X6 were directly used for the concentration slope calculation in this case study. In the case of X6, a saturating Hill-type equation, k1 t^n / (k2 + t^n), where t is time and the constants k1, k2, and n are smoothing parameters, was fitted to the filtered data to remove unrealistic fluctuations. The central difference approximation was again used to obtain the time-slope data.
Incremental estimation of the L. lactis model: Experimental data (×) compared with model predictions using parameters from concentration error minimization (–––) and slope error minimization (---).
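The Hill-type smoothing of the X6 (lactate) data can be sketched as a nonlinear least-squares fit; the synthetic time course, parameter values, and starting guesses below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(t, k1, k2, n):
    """Saturating Hill-type smoother k1 * t^n / (k2 + t^n)."""
    return k1 * t ** n / (k2 + t ** n)

# Synthetic stand-in for the filtered lactate time course.
rng = np.random.default_rng(1)
t = np.linspace(0.5, 30.0, 60)
y = hill(t, 40.0, 50.0, 2.0) + 0.3 * rng.standard_normal(t.size)

popt, _ = curve_fit(hill, t, y, p0=[30.0, 30.0, 1.5], maxfev=10000)
y_smooth = hill(t, *popt)
xdot = np.gradient(y_smooth, t)  # central-difference slopes
```

Because the fitted curve is monotone and saturating by construction, the differentiated result is free of the unrealistic fluctuations that motivated this choice of smoother in the text.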
Fluxes v4, v7 and v9 were selected as the DOF, again to give the least number of pI and to ensure that SD is invertible. All rate constants were constrained to within [0, 50], while the independent and dependent kinetic orders were allowed within [0, 5] and [-5, 5], respectively. The different bounds for the independent and dependent kinetic orders were chosen deliberately, to simulate a scenario in which the signs of the independent kinetic orders are known a priori.
Table 4 reports the outcome of the single-step and incremental parameter estimation runs using ΦC and ΦS. The values of the parameter estimates are given in Additional file 2: Table S4. As in the previous case study, there was a significant reduction in estimation runtime from using the proposed method over the simultaneous estimation, with comparable goodness of fit in concentration and slope. None of the five repeats of the ΦC simultaneous minimization converged within the five-day time limit, even after relaxing the convergence criterion of the objective function to 1%. On the other hand, the incremental estimation using ΦC not only converged, but was also faster than the simultaneous estimation of ΦS, which did not require any ODE integration. The incremental estimation using ΦC provided the parameters with the best overall concentration fit (see Figure 8), despite having a large slope error. Finally, minimizing ΦS does not guarantee that the resulting ODE model is numerically solvable, as happened here for the simultaneous estimation due to numerical stiffness; the incremental parameter estimation from minimizing ΦS, however, produced solvable ODEs with good concentration and slope fits.
Parameter estimations of the L. lactis model
## Discussion
In this study, an incremental strategy is used to develop a computationally efficient method for the parameter estimation of ODE models. Unlike most commonly used methods, where the parameter estimation is performed to minimize model residuals over the entire parameter space simultaneously, here the estimation is done in two incremental steps, involving the estimation of dynamic reaction rates or fluxes and flux-based parameter regressions. Importantly, the proposed strategy is designed to handle systems in which there exist extra degrees of freedom in the dynamic flux estimation, when the number of metabolic fluxes exceeds that of metabolites. The positive DOF means that there exist infinitely many solutions to the dynamic flux estimation, which is one of the factors underlying the parameter identifiability issues plaguing many estimation problems in systems biology [23,24].
The main premise of the new method is the recognition that, while many equivalent solutions exist for the dynamic flux estimation, the subsequent flux-based regression will give parameter values with different goodness of fit, as measured by ΦC or ΦS. In other words, given any two dynamic flux vectors v(tk) satisfying $\dot{X}_m(t_k) = S v(t_k)$, the associated parameter pairs (pI, pD) may not predict the slope or concentration data equally well, due to differences in the quality of the parameter regression for each v(tk). Also, because of the DOF, the minimization of model residuals needs to be done only over the subset of parameters associated with the flux degrees of freedom, resulting in a much reduced parameter search space and correspondingly much faster convergence to the (global) optimal solution. The superior performance of the proposed method over simultaneous estimation was convincingly demonstrated in the two GMA modeling case studies in the previous section. The minimization of slope error, also known as the slope-estimation-decoupling strategy [7], is arguably one of the most computationally efficient simultaneous methods. In this strategy, the parameter fitting essentially constitutes a zero-finding problem, and the estimation can be done without having to integrate the ODEs. Yet, the incremental estimation could offer more than two orders of magnitude reduction in computational time over this strategy.
There are many factors, including data-related, model-related, computational and mathematical issues, which contribute to the difficulty in estimating kinetic parameters of ODE models from time-course concentration data [1]. Each of these factors has been addressed to a certain degree by the incremental identification strategy presented in this work. For example, among the data-related issues, the proposed method can be modified to handle the absence of concentration data for some metabolites, as shown in Figure 2. Nevertheless, the method is neither able nor expected to resolve the lack of complete parameter identifiability due to insufficient (dynamical) information contained in the data [23,24]. As illustrated in the first case study, the single-step and incremental approaches provided parameter estimates with similar accuracies, which expectedly deteriorated with noise contamination and loss of data.
The appropriateness of using a particular mathematical formulation, like power law, is an example of model-related issues. As discussed above, this issue can be addressed after the dynamic fluxes are estimated, where the chosen functional dependence of the fluxes on a specific set of metabolite concentrations can be tested prior to the parameter regression [14]. Next, the computational issues associated with performing a global optimization over a large number of variables and the need to integrate ODEs have been mitigated in the proposed method by performing optimization only over the independent parameter subset and using a minimization of slope error, respectively. Finally, in this work, we have also addressed a mathematical issue related to the degrees of freedom that exist during the inference of dynamic fluxes from slopes of concentration data. However, extra degrees of freedom (mathematical redundancies) are also expected to influence the second step of the method, i.e. one-flux-at-a-time parameter estimation. For (log)linear regression of parameters in GMA models, such redundancy will lead to a lack of full column rank of the matrix containing the logarithms of concentration data Xm(tk) and thus, can be straightforwardly detected.
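The flux-based regression step for a GMA rate law, together with the column-rank check mentioned above, can be sketched in log space. The synthetic flux data and function names below are illustrative assumptions:

```python
import numpy as np

def fit_power_law(V, X):
    """One-flux-at-a-time regression of a GMA rate law
    v = k * prod_j X_j^{f_j}, linearized as
    log v = log k + sum_j f_j log X_j.
    Returns (k, f, rank); a rank below X.shape[1] + 1 flags the
    redundancy in the log-concentration matrix discussed above."""
    A = np.column_stack([np.ones(len(V)), np.log(X)])
    coef, _, rank, _ = np.linalg.lstsq(A, np.log(V), rcond=None)
    return np.exp(coef[0]), coef[1:], rank

# Synthetic fluxes from a known law v = 2 * X1^0.5 * X2^1.5.
rng = np.random.default_rng(2)
X = rng.uniform(0.5, 3.0, size=(50, 2))
V = 2.0 * X[:, 0] ** 0.5 * X[:, 1] ** 1.5
k, f, rank = fit_power_law(V, X)
```

Each flux is fitted independently against its own set of influencing concentrations, which is what makes the second step of the incremental method a collection of small linear regressions rather than one large nonlinear problem.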
The proposed estimation method has several weaknesses that are common among incremental estimation methods. As demonstrated in the first case study, the accuracy of the identified parameters relies on the ability to obtain good estimates of the concentration slopes. Direct slope estimation from the raw data, for example using the central finite difference approximation, is usually not advisable due to the high degree of noise in typical biological data. Hence, pre-smoothing of the time-course data is often required, as done in this study. Many algorithms are available for this purpose, from simplistic polynomial regression and splines to more advanced artificial neural networks [7,25] and the Whittaker-Eilers smoother [26,27]. If reliable concentration slope estimates are not available, but bounds for the slope values can be obtained, then one can use interval arithmetic to derive upper and lower limits for the dependent fluxes and parameters using Equation (3) (or Equation (7)) [28]. When the objective function involves integrating the model, validated solution of ODEs with interval parameters can be used to produce the corresponding upper and lower bounds of the concentration predictions [29]. Finally, the estimation can be reformulated, for example by minimizing the upper bound of the objective.
In addition to the drawbacks discussed above, the proposed strategy requires a priori knowledge of the topology of the network. For cellular metabolism, such information has become more readily available, as genome-scale metabolic networks of many important organisms, including human, E. coli and S. cerevisiae, have been and are continuously being reconstructed [30]. For other networks, many algorithms also exist for the estimation of network topology from time-series concentration data, including Bayesian network inference, transfer entropy, and Granger causality [31-33].
## Conclusions
The estimation of kinetic parameters of ODE models from time-course concentration data remains a key bottleneck in model building in systems biology. The lack of complete parameter identifiability has been blamed as the root cause of the difficulty in such estimation. In this study, a new incremental estimation method is proposed that is able to overcome the existence of extra degrees of freedom in the dynamic flux estimation from concentration slopes and to significantly reduce the computational requirements in finding parameter estimates. The method can also be applied, after minor modifications, to circumstances where concentration data for a few molecules are missing. While the present work concerns the GMA modeling of metabolic networks, the estimation strategies discussed here are generally applicable to any kinetic model that can be written as $\dot{X}(t_k) = S v(t_k)$. The creation of computationally efficient parameter estimation methods, such as the one presented here, represents an important step toward genome-scale kinetic modeling of cellular metabolism.
## Competing interest
The authors declare that they have no competing interests.
## Authors’ contributions
GJ conceived of the study, carried out the parameter estimation and wrote the manuscript. GS participated in the design of the study. RG conceived and guided the study and wrote the manuscript. All authors have read and approved the final manuscript.
## Funding
Singapore-MIT Alliance and ETH Zurich.
## Supplementary Material
Incremental Estimation Code. Additional file 1 contains MATLAB codes for the parameter estimations in the two case studies: branched pathway model and L. lactis pathway model.
Supplementary Tables. Additional file 2 contains the parameter estimation results of the branched pathway model using noise-free data and analytical slopes, the parameter estimates of the two case studies, and the parameter estimation results of five repeated runs.
## References
• Chou IC, Voit EO. Recent developments in parameter estimation and structure identification of biochemical and genomic systems. Math Biosci. 2009;219(2):57–83. [PubMed]
• Mendes P, Kell D. Non-linear optimization of biochemical pathways: applications to metabolic engineering and parameter estimation. Bioinformatics. 1998;14(10):869–883. [PubMed]
• Moles CG, Mendes P, Banga JR. Parameter estimation in biochemical pathways: a comparison of global optimization methods. Genome Res. 2003;13(11):2467–2474. [PubMed]
• Savageau MA. Biochemical systems analysis. I. Some mathematical properties of the rate law for the component enzymatic reactions. J Theor Biol. 1969;25(3):365–369. [PubMed]
• Savageau MA. Biochemical systems analysis. II. The steady-state solutions for an n-pool system using a power-law approximation. J Theor Biol. 1969;25(3):370–379. [PubMed]
• Voit EO. Computational analysis of biochemical systems: a practical guide for biochemists and molecular biologists. New York: Cambridge University Press; 2000.
• Voit EO, Almeida J. Decoupling dynamical systems for pathway identification from metabolic profiles. Bioinformatics. 2004;20(11):1670–1681. [PubMed]
• Tsai KY, Wang FS. Evolutionary optimization with data collocation for reverse engineering of biological networks. Bioinformatics. 2005;21(7):1180–1188. [PubMed]
• Kimura S, Ide K, Kashihara A, Kano M, Hatakeyama M, Masui R, Nakagawa N, Yokoyama S, Kuramitsu S, Konagaya A. Inference of S-system models of genetic networks using a cooperative coevolutionary algorithm. Bioinformatics. 2005;21(7):1154–1163. [PubMed]
• Maki Y, Ueda T, Masahiro O, Naoya U, Kentaro I, Uchida K. Inference of genetic network using the expression profile time course data of mouse P19 cells. Genome Inform. 2002;13:382–383.
• Jia G, Stephanopoulos G, Gunawan R. Parameter estimation of kinetic models from metabolic profiles: two-phase dynamic decoupling method. Bioinformatics. 2011;27(14):1964–1970. [PubMed]
• Bardow A, Marquardt W. Incremental and simultaneous identification of reaction kinetics: methods and comparison. Chem Eng Sci. 2004;59(13):2673–2684.
• Marquardt W, Brendel M, Bonvin D. Incremental identification of kinetic models for homogeneous reaction systems. Chem Eng Sci. 2006;61(16):5404–5420.
• Goel G, Chou IC, Voit EO. System estimation from metabolic time-series data. Bioinformatics. 2008;24(21):2505–2511. [PubMed]
• Voit EO, Goel G, Chou IC, Fonseca LL. Estimation of metabolic pathway systems from different data sources. IET Syst Biol. 2009;3(6):513–522. [PubMed]
• Voit EO, Almeida J, Marino S, Lall R, Goel G, Neves AR, Santos H. Regulation of glycolysis in Lactococcus lactis: an unfinished systems biological case study. Syst Biol (Stevenage) 2006;153(4):286–298. [PubMed]
• Egea JA, Rodriguez-Fernandez M, Banga JR, Marti R. Scatter search for chemical and bio-process optimization. J Global Optimization. 2007;37(3):481–503.
• Rodriguez-Fernandez M, Egea JA, Banga JR. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems. BMC Bioinformatics. 2006;7:483. [PubMed]
• Akaike H. New Look at Statistical-Model Identification. IEEE T Automat Contr. 1974;Ac19(6):716–723.
• Montgomery DC, Runger GC. Applied statistics and probability for engineers. 4. Hoboken, NJ: Wiley; 2007.
• Neves AR, Ramos A, Costa H, van Swam II, Hugenholtz J, Kleerebezem M, de Vos W, Santos H. Effect of different NADH oxidase levels on glucose metabolism by Lactococcus lactis: kinetics of intracellular metabolite pools determined by in vivo nuclear magnetic resonance. Appl Environ Microbiol. 2002;68(12):6332–6342. [PubMed]
• Neves AR, Ramos A, Nunes MC, Kleerebezem M, Hugenholtz J, de Vos WM, Almeida J, Santos H. In vivo nuclear magnetic resonance studies of glycolytic kinetics in Lactococcus lactis. Biotechnol Bioeng. 1999;64(2):200–212. [PubMed]
• Raue A, Kreutz C, Maiwald T, Bachmann J, Schilling M, Klingmuller U, Timmer J. Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood. Bioinformatics. 2009;25(15):1923–1929. [PubMed]
• Srinath S, Gunawan R. Parameter identifiability of power-law biochemical system models. J Biotechnol. 2010;149(3):132–140. [PubMed]
• Almeida JS. Predictive non-linear modeling of complex data by artificial neural networks. Curr Opin Biotechnol. 2002;13(1):72–76. [PubMed]
• Eilers PH. A perfect smoother. Anal Chem. 2003;75(14):3631–3636. [PubMed]
• Vilela M, Borges CC, Vinga S, Vasconcelos AT, Santos H, Voit EO, Almeida JS. Automated smoother for the numerical decoupling of dynamics models. BMC Bioinformatics. 2007;8:305. [PubMed]
• Jaulin L, Kieffer M, Didrit O, Walter E. Applied interval analysis: with examples in parameter and state estimation, robust control and robotics. London: Springer; 2001.
• Lin YD, Stadtherr MA. Validated solution of ODEs with parametric uncertainties. 16th European Symposium on Computer Aided Process Engineering and 9th International Symposium on Process Systems Engineering. 2006;21:167–172.
• Latendresse M, Paley S, Karp PD. Browsing metabolic and regulatory networks with BioCyc. Methods Mol Biol. 2012;804:197–216. [PubMed]
• Imoto S, Kim S, Goto T, Miyano S, Aburatani S, Tashiro K, Kuhara S. Bayesian network and nonparametric heteroscedastic regression for nonlinear modeling of genetic network. J Bioinform Comput Biol. 2003;1(2):231–252. [PubMed]
• Nagarajan R, Upreti M. Comment on causality and pathway search in microarray time series experiment. Bioinformatics. 2008;24(7):1029–1032. [PubMed]
• Tung TQ, Ryu T, Lee KH, Lee D. In: Proceedings of the Twentieth IEEE International Symposium on Computer-Based Medical Systems:20-22 June 2007; Maribor, Slovenia. Kokol P, Los A, editor. Los Alamitos: IEEE Computer Society; 2007. Inferring gene regulatory networks from microarray time series data using transfer entropy; pp. 383–388.
Articles from BMC Systems Biology are provided here courtesy of BioMed Central
http://alexanderpruss.blogspot.com/2008/11/tense-and-action.html | ## Thursday, November 13, 2008
### Tense and action
Consider a version of John Perry's argument that action needs tense. You promised to call a friend precisely between 12:10 and 12:15 and no later. When it is between 12:10 and 12:15, and you know what time it is, this knowledge, together with the promise, gives you reason to call your friend. But if this knowledge is tenseless, then you could have it at 12:30, say. Thus, absurdly, at 12:30 you could have knowledge that gives you just as good a reason to call your friend.[note 1]
Here, however, is a tenseless proposal. Suppose it is 12:12, and I am deliberating whether to call my friend. I think the following thought-token, with all the verbs in a timeless tense:
1. A phone call flowing from this deliberative process would occur between 12:10 and 12:15, and hence fulfill the promise, so I have reason that this deliberative process should conclude in a phone call to the friend.
And so I call. Let's see how the Perry-inspired argument fares in this case. I knew the propositions in (1) at 12:12, and I could likewise know these propositions at 12:30, though if I were to express that knowledge then, I would have to replace both occurrences of the phrase "this deliberative process" in (1) by the phrase "that deliberative process." However, this fact is in no way damaging.
For suppose that at 12:30, I am again deliberating whether to call my friend. I have, on this tenseless proposal, the very same beliefs that at 12:12 were expressed by (1). It would seem that where I have the same beliefs and the same knowledge, I have the same reasons. If this principle is not true, the Perry argument fails, since then one can simply affirm that one has the same beliefs and knowledge at 12:30 as one did at 12:12, but at 12:30 these beliefs and knowledge are not a reason for acting, while they are a reason for acting at 12:12. But I can affirm the principle, and I am still not harmed by the argument. For what is it that I conclude at 12:30 that I have (tenseless) reason to do? There is reason that the deliberative process should conclude in a call to the friend. But the relevant referent of "the deliberative process" is not the deliberative process that occurs at 12:30, call it D12:30, but the deliberative process that occurs at 12:12, call it D12:12. For (1) is not about the 12:30 deliberative process, but about the 12:12 one.
The principle that the same beliefs and knowledge gives rise to the very same reasons may be true—but the reason given rise to is a reason for the 12:12 deliberative process to conclude in a phone call. But that is not what I am deliberating about at 12:30. At 12:30, I am deliberating whether this new deliberative process, D12:30, should result in a phone call to the friend. That I can easily conclude that D12:12 should result in a phone call to the friend is simply irrelevant.
There is an awkwardness about the solution as I have formulated it. It makes deliberative processes inextricably self-referential. What I am deliberating about is whether this very deliberation should result in this or that action. But I think this is indeed a plausible way to understand a deliberation. When a nation votes for president, the nation votes not just for who should be president, but for who should result as president from this very election. (These two are actually subtly different questions. There could be cases where it is better that X be president, but it is better that Y result as president from this very election. Maybe X promised not to run in this election.)
[I made some minor revisions to this post, the most important of which was to emphasize that (1) is a token.]
#### 1 comment:
Alexander R Pruss said...
One may also need to deliberate about how long this deliberation process should last (if it starts at 12:12, it shouldn't last more than three minutes!)
# Penrose's New Argument and Paradox
Research output: Chapter in Book/Report/Conference proceeding › Chapter in a book
## Abstract
In this paper we take a closer look at Penrose's New Argument for the claim that the human mind cannot be mechanized and investigate whether the argument can be formalized in a sound and coherent way using a theory of truth and absolute provability. Our findings are negative; we can show that there will be no consistent theory that allows for a formalization of Penrose's argument in a straightforward way. In a second step we consider Penrose's overall strategy for arguing for his view and provide a reasonable theory of truth and absolute provability in which this strategy leads to a sound argument for the claim that the human mind cannot be mechanized. However, we argue that the argument is intuitively implausible since it relies on a pathological feature of the proposed theory.
- Original language: English
- Title of host publication: Truth, Existence, and Explanation: FilMat Studies in the Philosophy of Mathematics
- Publisher: Springer
- Publication status: Published - 2018
### Publication series
Name: Boston Studies in the History and Philosophy of Science
# Interaction information
The interaction information (McGill 1954) or co-information (Bell 2003) is one of several generalizations of the mutual information, and expresses the amount of information (redundancy or synergy) bound up in a set of variables, beyond that which is present in any subset of those variables. Unlike the mutual information, the interaction information can be either positive or negative. This confusing property has likely retarded its wider adoption as an information measure in machine learning and cognitive science.
## The Three-Variable Case
For three variables $\{X,Y,Z\}$, the interaction information $I(X;Y;Z)$ is given by
$\begin{matrix} I(X;Y;Z) & = & I(X;Y|Z)-I(X;Y) \\ \ & = & I(X;Z|Y)-I(X;Z) \\ \ & = & I(Y;Z|X)-I(Y;Z) \end{matrix}$
where, for example, $I(X;Y)$ is the mutual information between variables $X$ and $Y$, and $I(X;Y|Z)$ is the conditional mutual information between variables $X$ and $Y$ given $Z$. Formally,
$\begin{matrix} I(X;Y|Z) & = & H(X|Z) + H(Y|Z) - H(X,Y|Z) \\ \ & = & H(X|Z)-H(X|Y,Z) \end{matrix}$
For the three-variable case, the interaction information $I(X;Y;Z)$ is the difference between the information shared by $\{Y,X\}$ when $Z$ has been fixed and when $Z$ has not been fixed. (See also Fano's 1961 textbook.) Interaction information measures the influence of a variable $Z$ on the amount of information shared between $\{Y,X\}$. Because the term $I(X;Y|Z)$ can be zero (for example, when the dependency between $\{X,Y\}$ is due entirely to the influence of a common cause $Z$), the interaction information can be negative as well as positive. Negative interaction information indicates that variable $Z$ inhibits (i.e., accounts for or explains some of) the correlation between $\{Y,X\}$, whereas positive interaction information indicates that variable $Z$ facilitates or enhances the correlation between $\{Y,X\}$.
Interaction information is bounded. In the three variable case, it is bounded by
$-\min\{ I(X;Y),\; I(Y;Z),\; I(X;Z) \} \leq I(X;Y;Z) \leq \min\{ I(X;Y|Z),\; I(Y;Z|X),\; I(X;Z|Y) \}$
### Example of Negative Interaction Information
Negative interaction information seems much more natural than positive interaction information in the sense that such explanatory effects are typical of common-cause structures. For example, clouds cause rain and also block the sun; therefore, the correlation between rain and darkness is partly accounted for by the presence of clouds, $I(rain;dark|cloud) \leq I(rain;dark)$. The result is negative interaction information $I(rain;dark;cloud)$.
### Example of Positive Interaction Information
The case of positive interaction information seems a bit less natural. A prototypical example of positive $I(X;Y;Z)$ has $X$ as the output of an XOR gate to which $Y$ and $Z$ are the independent random inputs. In this case $I(Y;Z)$ will be zero, but $I(Y;Z|X)$ will be positive (1 bit) since once output $X$ is known, the value on input $Y$ completely determines the value on input $Z$. Since $I(Y;Z|X)>I(Y;Z)$, the result is positive interaction information $I(X;Y;Z)$. It may seem that this example relies on a peculiar ordering of $X,Y,Z$ to obtain the positive interaction, but the symmetry of the definition for $I(X;Y;Z)$ indicates that the same positive interaction information results regardless of which variable we consider as the interloper or conditioning variable. For example, input $Y$ and output $X$ are also independent until input $Z$ is fixed, at which time they are totally dependent (obviously), and we have the same positive interaction information as before, $I(X;Y;Z)=I(X;Y|Z)-I(X;Y)$.
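The XOR case is easy to verify numerically from the entropy definitions. The following Python sketch (the helper names are mine, not from any standard library) builds the joint distribution of the gate and recovers the $+1$ bit of interaction information:

```python
from itertools import product
from math import log2

def H(joint, idx):
    """Entropy in bits of the marginal over the outcome coordinates in idx."""
    marg = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

# XOR gate: Y and Z are independent fair bits, X = Y xor Z.
# Outcomes are tuples (x, y, z); each of the four input pairs has p = 1/4.
joint = {(y ^ z, y, z): 0.25 for y, z in product((0, 1), repeat=2)}
X, Y, Z = 0, 1, 2

def I2(a, b):        # mutual information I(a;b)
    return H(joint, [a]) + H(joint, [b]) - H(joint, [a, b])

def I2c(a, b, c):    # conditional mutual information I(a;b|c)
    return H(joint, [a, c]) + H(joint, [b, c]) - H(joint, [a, b, c]) - H(joint, [c])

print(I2(Y, Z))                  # 0.0 -- the inputs are independent
print(I2c(Y, Z, X))              # 1.0 -- fully dependent given the output
print(I2c(Y, Z, X) - I2(Y, Z))   # 1.0 -- positive interaction information
```

By the symmetry noted above, swapping which variable plays the role of the conditioning variable leaves the final number unchanged.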
This situation is an instance where fixing the common effect $X$ of causes $Y$ and $Z$ induces a dependency among the causes that did not formerly exist. This behavior is colloquially referred to as explaining away and is thoroughly discussed in the Bayesian Network literature (e.g., Pearl 1988). Pearl's example is auto diagnostics: A car's engine can fail to start $(X)$ due either to a dead battery $(Y)$ or due to a blocked fuel pump $(Z)$. Ordinarily, we assume that battery death and fuel pump blockage are independent events, because of the essential modularity of such automotive systems. Thus, in the absence of other information, knowing whether or not the battery is dead gives us no information about whether or not the fuel pump is blocked. However, if we happen to know that the car fails to start (i.e., we fix common effect $X$), this information induces a dependency between the two causes battery death and fuel blockage. Thus, knowing that the car fails to start, if an inspection shows the battery to be in good health, we can conclude that the fuel pump must be blocked.
Battery death and fuel blockage are thus dependent, conditional on their common effect car starting. What the foregoing discussion indicates is that the obvious directionality in the common-effect graph belies a deep informational symmetry: If conditioning on a common effect increases the dependency between its two parent causes, then conditioning on one of the causes must create the same increase in dependency between the second cause and the common effect. In Pearl's automotive example, if conditioning on car starts induces $I(X;Y;Z)$ bits of dependency between the two causes battery dead and fuel blocked, then conditioning on fuel blocked must induce $I(X;Y;Z)$ bits of dependency between battery dead and car starts. This may seem odd because battery dead and car starts are already governed by the implication battery dead $\rightarrow$ car doesn't start. However, these variables are still not totally correlated because the converse is not true. Conditioning on fuel blocked removes the major alternate cause of failure to start, and strengthens the converse relation and therefore the association between battery dead and car starts. A paper by Tsujishita (1995) focuses in greater depth on the third-order mutual information.
## The Four-Variable Case
One can recursively define the n-dimensional interaction information in terms of the $(n-1)$-dimensional interaction information. For example, the four-dimensional interaction information can be defined as
$\begin{matrix} I(W;X;Y;Z) & = & I(X;Y;Z|W)-I(X;Y;Z) \\ \ & = & I(X;Y|Z,W)-I(X;Y|W)-I(X;Y|Z)+I(X;Y) \end{matrix}$
or, equivalently,
$\begin{matrix} I(W;X;Y;Z)& = & H(W)+H(X)+H(Y)+H(Z) \\ \ & - & H(W,X)-H(W,Y)-H(W,Z)-H(X,Y)-H(X,Z)-H(Y,Z) \\ \ & + & H(W,X,Y)+H(W,X,Z)+H(W,Y,Z)+H(X,Y,Z)-H(W,X,Y,Z) \end{matrix}$
## The n-Variable Case
It is possible to extend all of these results to an arbitrary number of dimensions. The general expression for interaction information on variable set $\mathcal{V}=\{X_{1},X_{2},\ldots ,X_{n}\}$ in terms of the marginal entropies is given by Jakulin & Bratko (2003).
$I(\mathcal{V})\equiv -\sum_{\mathcal{T}\subseteq \mathcal{V}}(-1)^{\left\vert\mathcal{V}\right\vert -\left\vert \mathcal{T}\right\vert}H(\mathcal{T})$
which is an alternating (inclusion-exclusion) sum over all subsets $\mathcal{T}\subseteq \mathcal{V}$, where $\left\vert \mathcal{V}\right\vert =n$. Note that this is the information-theoretic analog to the Kirkwood approximation.
## Difficulties Interpreting Interaction Information
The possible negativity of interaction information can be the source of some confusion (Bell 2003). As an example of this confusion, consider a set of eight independent binary variables $\{X_{1},X_{2},X_{3},X_{4},X_{5},X_{6},X_{7},X_{8}\}$. Agglomerate these variables as follows:
$\begin{matrix} Y_{1} &=&\{X_{1},X_{2},X_{3},X_{4},X_{5},X_{6},X_{7}\} \\ Y_{2} &=&\{X_{4},X_{5},X_{6},X_{7}\} \\ Y_{3} &=&\{X_{5},X_{6},X_{7},X_{8}\} \end{matrix}$
Because the $Y_{i}$'s overlap each other (are redundant) on the three binary variables $\{X_{5},X_{6},X_{7}\}$, we would expect the interaction information $I(Y_{1};Y_{2};Y_{3})$ to equal $-3$ bits, which it does. However, consider now the agglomerated variables
$\begin{matrix} Y_{1} &=&\{X_{1},X_{2},X_{3},X_{4},X_{5},X_{6},X_{7}\} \\ Y_{2} &=&\{X_{4},X_{5},X_{6},X_{7}\} \\ Y_{3} &=&\{X_{5},X_{6},X_{7},X_{8}\} \\ Y_{4} &=&\{X_{7},X_{8}\} \end{matrix}$
These are the same variables as before with the addition of $Y_{4}=\{X_{7},X_{8}\}$. Because the $Y_{i}$'s now overlap each other (are redundant) on only one binary variable $\{X_{7}\}$, we would expect the interaction information $I(Y_{1};Y_{2};Y_{3};Y_{4})$ to equal $-1$ bit. However, $I(Y_{1};Y_{2};Y_{3};Y_{4})$ in this case is actually equal to $+1$ bit, indicating a synergy rather than a redundancy. This is correct in the sense that
$\begin{matrix} I(Y_{1};Y_{2};Y_{3};Y_{4}) & = & I(Y_{1};Y_{2};Y_{3}|Y_{4})-I(Y_{1};Y_{2};Y_{3}) \\ \ & = & -2+3 \\ \ & = & 1 \end{matrix}$
but it remains difficult to interpret.
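Because the underlying $X_i$ are independent fair bits, the joint entropy of any collection of the $Y_i$ is simply the number of distinct bits it covers, which makes the alternating sum from the $n$-variable definition easy to evaluate directly. A small Python sketch (my own helper, not library code) reproduces both numbers:

```python
from itertools import combinations

# Index sets of the underlying independent fair bits X1..X8 covered by each Y.
Y = {
    1: {1, 2, 3, 4, 5, 6, 7},
    2: {4, 5, 6, 7},
    3: {5, 6, 7, 8},
    4: {7, 8},
}

def co_information(names):
    """Alternating (inclusion-exclusion) sum -sum_T (-1)^(|V|-|T|) H(T),
    where H(T) is the number of distinct bits covered by the Y's in T."""
    n = len(names)
    total = 0
    for k in range(1, n + 1):          # the empty subset contributes H = 0
        for sub in combinations(names, k):
            H = len(set().union(*(Y[i] for i in sub)))  # entropy in bits
            total -= (-1) ** (n - k) * H
    return total

print(co_information([1, 2, 3]))     # -3 : three redundant bits
print(co_information([1, 2, 3, 4]))  #  1 : apparent synergy after adding Y4
```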
## Uses of Interaction Information
• Jakulin and Bratko (2003b) provide a machine learning algorithm which uses interaction information.
• Killian, Kravitz and Gilson (2007) use mutual information expansion to extract entropy estimates from molecular simulations.
• Moore et al. (2006), Chanda P, Zhang A, Brazeau D, Sucheston L, Freudenheim JL, Ambrosone C, Ramanathan M. (2007) and Chanda P, Sucheston L, Zhang A, Brazeau D, Freudenheim JL, Ambrosone C, Ramanathan M. (2008) demonstrate the use of interaction information for analyzing gene-gene and gene-environmental interactions associated with complex diseases.
## References
• Bell, A J (2003), The co-information lattice [1]
• Fano, R M (1961), Transmission of Information: A Statistical Theory of Communications, MIT Press, Cambridge, MA.
• Garner W R (1962). Uncertainty and Structure as Psychological Concepts, John Wiley & Sons, New York.
• Han T S (1978). Nonnegative entropy measures of multivariate symmetric correlations, Information and Control 36, 133-156.
• Han T S (1980). Multiple mutual information and multiple interactions in frequency data, Information and Control 46, 26-45.
• Jakulin A & Bratko I (2003a). Analyzing Attribute Dependencies, in N Lavrač, D Gamberger, L Todorovski & H Blockeel, eds, Proceedings of the 7th European Conference on Principles and Practice of Knowledge Discovery in Databases, Springer, Cavtat-Dubrovnik, Croatia, pp. 229–240.
• Jakulin A & Bratko I (2003b). Quantifying and visualizing attribute interactions [2].
• Margolin A, Wang K, Califano A, & Nemenman I (2010). Multivariate dependence and genetic networks inference. IET Syst Biol 4, 428.
• McGill W J (1954). Multivariate information transmission, Psychometrika 19, 97-116.
• Moore JH, Gilbert JC, Tsai CT, Chiang FT, Holden T, Barney N, White BC (2006). A flexible computational framework for detecting, characterizing, and interpreting statistical patterns of epistasis in genetic studies of human disease susceptibility, Journal of Theoretical Biology 241, 252-261. [3]
• Nemenman I (2004). Information theory, multivariate dependence, and genetic network inference [4].
• Pearl, J (1988), Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann, San Mateo, CA.
• Tsujishita, T (1995), On triple mutual information, Advances in applied mathematics 16, 269-274.
• Chanda P, Sucheston L, Zhang A, Brazeau D, Freudenheim JL, Ambrosone C, Ramanathan M. (2008). AMBIENCE: a novel approach and efficient algorithm for identifying informative genetic and environmental associations with complex phenotypes. Genetics. 2008 Oct;180(2):1191-210. PMID 17924337. http://www.genetics.org/cgi/content/full/180/2/1191
• Killian B J, Kravitz J Y & Gilson M K (2007) Extraction of configurational entropy from molecular simulations via an expansion approximation. J. Chem. Phys., 127, 024107.
# Shared Concepts and Topics
### Splines and Spline Bases
Subsections:
This section provides details about the construction of spline bases with the EFFECT statement. A spline function is a piecewise polynomial function in which the individual polynomials have the same degree and connect smoothly at join points whose abscissa values, referred to as knots, are prespecified. You can use spline functions to fit curves to a wide variety of data.
A spline of degree 0 is a step function with steps located at the knots. A spline of degree 1 is a piecewise linear function where the lines connect at the knots. A spline of degree 2 is a piecewise quadratic curve whose values and slopes coincide at the knots. A spline of degree 3 is a piecewise cubic curve whose values, slopes, and curvature coincide at the knots. Visually, a cubic spline is a smooth curve, and it is the most commonly used spline when a smooth fit is desired. Note that when no knots are used, splines of degree d are simply polynomials of degree d.
More formally, suppose you specify $n$ knots $t_1 < t_2 < \cdots < t_n$. Then a spline of degree $d$ is a function $s(x)$ with $d-1$ continuous derivatives such that $s(x) = P_i(x)$ on each interval $t_i \le x < t_{i+1}$ (taking $t_0 = -\infty$ and $t_{n+1} = \infty$), where each $P_i(x)$ is a polynomial of degree $d$. The requirement that $s(x)$ has $d-1$ continuous derivatives is satisfied by requiring that the function values and all derivatives up to order $d-1$ of the adjacent polynomials at each knot match.
A counting argument yields the number of parameters that define a spline with $n$ knots. There are $n+1$ polynomials of degree $d$, giving $(n+1)(d+1)$ coefficients. However, there are $d$ restrictions at each of the $n$ knots, so the number of free parameters is $(n+1)(d+1) - nd = n + d + 1$. In mathematical terminology this says that the dimension of the vector space of splines of degree $d$ on $n$ distinct knots is $n + d + 1$. If you have $n + d + 1$ basis vectors, then you can fit a curve to your data by regressing your dependent variable by using this basis for the corresponding design matrix columns. In this context, such a spline is known as a regression spline. The EFFECT statement provides a simple mechanism for obtaining such a basis.
If you remove the restriction that the knots of a spline must be distinct and allow repeated knots, then you can obtain functions with less smoothness and even discontinuities at the repeated knot location. For a spline of degree $d$ and a repeated knot with multiplicity $m$, the piecewise polynomials that join at such a knot are required to have only $d-m$ matching derivatives. Note that this increases the number of free parameters by $m-1$ but also decreases the number of distinct knots by $m-1$. Hence the dimension of the vector space of splines of degree $d$ with $n$ knots is still $n + d + 1$, provided that any repeated knot has a multiplicity less than or equal to $d$.
The EFFECT statement provides support for the commonly used truncated power function basis and B-spline basis. With exact arithmetic and by using the complete basis, you obtain the same fit with either of these bases. The following sections provide details about constructing spline bases for the space of splines of degree $d$ with $n$ knots $t_1 \le t_2 \le \cdots \le t_n$.
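The counting argument can be illustrated outside of SAS. The truncated power basis of degree $d$ with $n$ knots has exactly $n+d+1$ columns: $1, x, \ldots, x^d$ plus one truncated power $(x-t_i)_+^d$ per knot. The following Python sketch (not SAS code; the data and knot placement are invented for illustration) builds such a basis and fits a regression spline by least squares:

```python
import numpy as np

def truncated_power_basis(x, knots, degree):
    """Design-matrix columns 1, x, ..., x^d, (x-t_1)_+^d, ..., (x-t_n)_+^d.
    The column count is n + d + 1, matching the dimension of the spline space."""
    cols = [x ** j for j in range(degree + 1)]
    cols += [np.clip(x - t, 0.0, None) ** degree for t in knots]
    return np.column_stack(cols)

# Hypothetical example: fit noisy sine data with a cubic regression spline.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

B = truncated_power_basis(x, knots=[0.25, 0.5, 0.75], degree=3)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fit = B @ coef          # smooth piecewise-cubic fit evaluated at x

print(B.shape)          # (200, 7) -> n + d + 1 = 3 + 3 + 1 columns
```

The truncated power basis is simple to write down but can be numerically ill-conditioned for many knots; that is one reason the B-spline basis is also offered.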
# Normal subgroup of prime index and another subgroup
Suppose that $N$ is a normal subgroup of a finite group $G$, and $H$ is a subgroup of $G$. If $|G/N| = p$ for some prime $p$, then show that $H$ is contained in $N$ or that $NH = G$.
I imagine this is related to the fact that $|NH| = |N||H|/|N \cap H|$, but this is not really helping me. I considered the fact that since $N$ is normal, we get that $NH \leq G$, and I then used Lagrange, but I'm stuck, and some help would be nice.
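The Lagrange route can indeed be made to work; as a sketch:

```latex
N \trianglelefteq G \implies NH \le G, \qquad N \le NH \le G
% Lagrange on the tower N \le NH \le G:
p = [G:N] = [G:NH]\,[NH:N] \implies [G:NH] \in \{1,\, p\}
% [G:NH] = 1 gives NH = G; [G:NH] = p forces [NH:N] = 1,
% i.e. NH = N, which holds exactly when H \subseteq N.
```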
Consider the homomorphism $\phi:G\rightarrow G/N$ that sends $x$ to $xN$. Since $\phi(H)$ is a subgroup of $G/N$, Lagrange gives $|\phi(H)| \mid |G/N|$. Hence $|\phi(H)|=1$ or $|\phi(H)|=p=|G/N|$. In the first case we get $\phi(H)=\{N\}$, thus $H\leq N$. In the other case $\phi(H)=G/N$, and we deduce that $\forall x\in G\ \exists h\in H\,[xN=hN]$; it is easy to show that this implies that $NH=G$.
(Note that we don't need $G$ to be finite.)
Recall that when we mod out by a normal subgroup $N$, there is a one-to-one correspondence between subgroups of $G/N$ and subgroups of $G$ containing $N$. Since the order of $G/N$ is prime, $G/N$ has no proper nontrivial subgroups (by Lagrange's Theorem), so no proper subgroup of $G$ properly contains $N$. Now $NH$ is a subgroup containing $N$ (it is a subgroup because $N$ is normal), so either $NH = G$ or $NH = N$, and the latter holds exactly when $H\subseteq N$.
Well, if you've got some $h\notin N$, then $|N\langle h\rangle|>|N|$. This is a group since $N$ is normal. Hence if $[G:N]$ is prime, then by Lagrange $N\langle h\rangle=G$.
# Exponential integral
Not to be confused with other integrals of exponential functions.
Plot of E1 function (top) and Ei function (bottom).
In mathematics, the exponential integral Ei is a special function on the complex plane. It is defined as one particular definite integral of the ratio between an exponential function and its argument.
## Definitions
For real nonzero values of x, the exponential integral Ei(x) is defined as
$\operatorname{Ei}(x)=-\int_{-x}^{\infty}\frac{e^{-t}}t\,dt.\,$
The Risch algorithm shows that Ei is not an elementary function. The definition above can be used for positive values of x, but the integral has to be understood in terms of the Cauchy principal value due to the singularity of the integrand at zero.
For complex values of the argument, the definition becomes ambiguous due to branch points at 0 and $\infty$.[1] Instead of Ei, the following notation is used,[2]
$\mathrm{E}_1(z) = \int_z^\infty \frac{e^{-t}}{t}\, dt,\qquad|{\rm Arg}(z)|<\pi$
In general, a branch cut is taken on the negative real axis and E1 can be defined by analytic continuation elsewhere on the complex plane.
For positive values of the real part of $z$, this can be written[3]
$\mathrm{E}_1(z) = \int_1^\infty \frac{e^{-tz}}{t}\, dt = \int_0^1 \frac{e^{-z/u}}{u}\, du ,\qquad \Re(z) \ge 0.$
The behaviour of E1 near the branch cut can be seen by the following relation:[4]
$\lim_{\delta\to0+}\mathrm{E_1}(-x \pm i\delta) = -\mathrm{Ei}(x) \mp i\pi,\qquad x>0,$
## Properties
Several properties of the exponential integral below, in certain cases, allow one to avoid its explicit evaluation through the definition above.
### Convergent series
Integrating the Taylor series for $e^{-t}/t$, and extracting the logarithmic singularity, we can derive the following series representation for $\mathrm{Ei}(x)$ for real $x$:[5]
$\mathrm{Ei}(x) = \gamma+\ln |x| + \sum_{k=1}^{\infty} \frac{x^k}{k\; k!} \qquad x \neq 0$
For complex arguments off the negative real axis, this generalises to[6]
$\mathrm{E_1}(z) =-\gamma-\ln z-\sum_{k=1}^{\infty}\frac{(-z)^k}{k\; k!} \qquad (|\mathrm{Arg}(z)| < \pi)$
where $\gamma$ is the Euler–Mascheroni constant. The sum converges for all complex $z$, and we take the usual value of the complex logarithm having a branch cut along the negative real axis.
This formula can be used to compute $\mathrm{E_1}(x)$ with floating point operations for real $x$ between 0 and 2.5. For $x > 2.5$, the result is inaccurate due to cancellation.
A faster converging series was found by Ramanujan:
${\rm Ei} (x) = \gamma + \ln x + \exp{(x/2)} \sum_{n=1}^\infty \frac{ (-1)^{n-1} x^n} {n! \, 2^{n-1}} \sum_{k=0}^{\lfloor (n-1)/2 \rfloor} \frac{1}{2k+1}$
### Asymptotic (divergent) series
Relative error of the asymptotic approximation for different numbers $N$ of terms in the truncated sum
Unfortunately, the convergence of the series above is slow for arguments of larger modulus. For example, for x = 10 more than 40 terms are required to get an answer correct to three significant figures.[7] However, there is a divergent series approximation that can be obtained by integrating $ze^z\mathrm{E_1}(z)$ by parts:[8]
$\mathrm{E_1}(z)=\frac{\exp(-z)}{z}\sum_{n=0}^{N-1} \frac{n!}{(-z)^n}$
which has error of order $O(N!z^{-N})$ and is valid for large values of $\mathrm{Re}(z)$. The relative error of the approximation above is plotted on the figure to the right for various values of $N$, the number of terms in the truncated sum ($N=1$ in red, $N=5$ in pink).
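The truncation trade-off is easy to see numerically. The sketch below (plain Python; the helper functions are mine) evaluates the divergent expansion at $x=10$ against the convergent series from the previous subsection, which, despite some cancellation, still carries enough digits at this argument in double precision to serve as a reference:

```python
from math import exp, factorial, log

EULER_GAMMA = 0.5772156649015329

def e1_series(x, terms=60):
    """Convergent series E1(x) = -gamma - ln x - sum_k (-x)^k / (k * k!)."""
    s = sum((-x) ** k / (k * factorial(k)) for k in range(1, terms))
    return -EULER_GAMMA - log(x) - s

def e1_asymptotic(x, N):
    """Truncated divergent expansion e^{-x}/x * sum_{n<N} n! / (-x)^n."""
    return exp(-x) / x * sum(factorial(n) / (-x) ** n for n in range(N))

x = 10.0
ref = e1_series(x)   # about 4.15697e-6
for N in (1, 5, 10, 25):
    err = abs(e1_asymptotic(x, N) - ref) / ref
    print(f"N={N:2d}  relative error {err:.1e}")
```

The printed errors first shrink (the optimal truncation point is near $N \approx x$) and then blow up again as the factorials in the numerators take over, which is the characteristic behavior of an asymptotic series.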
### Exponential and logarithmic behavior: bracketing
Bracketing of $\mathrm{E_1}$ by elementary functions
From the two series suggested in previous subsections, it follows that $\mathrm{E_1}$ behaves like a negative exponential for large values of the argument and like a logarithm for small values. For positive real values of the argument, $\mathrm{E_1}$ can be bracketed by elementary functions as follows:[9]
$\frac{1}{2}e^{-x}\,\ln\!\left( 1+\frac{2}{x} \right) < \mathrm{E_1}(x) < e^{-x}\,\ln\!\left( 1+\frac{1}{x} \right) \qquad x>0$
The left-hand side of this inequality is shown in the graph to the left in blue; the central part $\mathrm{E_1}(x)$ is shown in black and the right-hand side is shown in red.
### Definition by Ein
Both $\mathrm{Ei}$ and $\mathrm{E_1}$ can be written more simply using the entire function $\mathrm{Ein}$[10] defined as
$\mathrm{Ein}(z) = \int_0^z (1-e^{-t})\frac{dt}{t} = \sum_{k=1}^\infty \frac{(-1)^{k+1}z^k}{k\; k!}$
(note that this is just the alternating series in the above definition of $\mathrm{E_1}$). Then we have
$\mathrm{E_1}(z) \,=\, -\gamma-\ln z + {\rm Ein}(z) \qquad |\mathrm{Arg}(z)| < \pi$
$\mathrm{Ei}(x) \,=\, \gamma+\ln x - \mathrm{Ein}(-x) \qquad x>0$
### Relation with other functions
The exponential integral is closely related to the logarithmic integral function li(x) by the formula
$\mathrm{li}(x) = \mathrm{Ei}(\ln x)\,$
for positive real values of $x$.
The exponential integral may also be generalized to
${\rm E}_n(x) = \int_1^\infty \frac{e^{-xt}}{t^n}\, dt,$
which can be written as a special case of the incomplete gamma function:[11]
${\rm E}_n(x) =x^{n-1}\Gamma(1-n,x).\,$
The generalized form is sometimes called the Misra function[12] $\varphi_m(x)$, defined as
$\varphi_m(x)={\rm E}_{-m}(x).\,$
Including a logarithm defines the generalized integro-exponential function[13]
$E_s^j(z)= \frac{1}{\Gamma(j+1)}\int_1^\infty (\log t)^j \frac{e^{-zt}}{t^s}\,dt$.
The indefinite integral:
$\mathrm{Ei}(a \cdot b) = \iint e^{a b} \, da \, db$
is similar in form to the ordinary generating function for $d(n)$, the number of divisors of $n$:
$\sum\limits_{n=1}^{\infty} d(n)x^{n} = \sum\limits_{a=1}^{\infty} \sum\limits_{b=1}^{\infty} x^{a b}$
### Derivatives
The derivatives of the generalised functions $\mathrm{E_n}$ can be calculated by means of the formula [14]
$\mathrm{E_n}'(z) = -\mathrm{E_{n-1}}(z) \qquad (n=1,2,3,\ldots)$
Note that the function $\mathrm{E_0}$ is easy to evaluate (making this recursion useful), since it is just $e^{-z}/z$.[15]
### Exponential integral of imaginary argument
$\mathrm{E_1}(ix)$ against $x$; real part black, imaginary part red.
If $z$ is imaginary, it has a nonnegative real part, so we can use the formula
$\mathrm{E_1}(z) = \int_1^\infty \frac{e^{-tz}}{t} dt$
to get a relation with the trigonometric integrals $\mathrm{Si}$ and $\mathrm{Ci}$:
$\mathrm{E_1}(ix) = i\left(-\tfrac{1}{2}\pi + \mathrm{Si}(x)\right) - \mathrm{Ci}(x) \qquad (x>0)$
The real and imaginary parts of $\mathrm{E_1}(x)$ are plotted in the figure to the right with black and red curves.
## Applications
• Time-dependent heat transfer
• Nonequilibrium groundwater flow in the Theis solution (called a well function)
• Radiative transfer in stellar atmospheres
• Radial diffusivity equation for transient or unsteady state flow with line sources and sinks
• Solutions to the neutron transport equation in simplified 1-D geometries.[16]
See also: Goodwin–Staton integral
## Notes
1. ^ Abramowitz and Stegun, p. 228
2. ^ Abramowitz and Stegun, p. 228, 5.1.1
3. ^ Abramowitz and Stegun, p. 228, 5.1.4 with n = 1
4. ^ Abramowitz and Stegun, p. 228, 5.1.7
5. ^ For a derivation, see Bender and Orszag, p253
6. ^ Abramowitz and Stegun, p. 229, 5.1.11
7. ^ Bleistein and Handelsman, p. 2
8. ^ Bleistein and Handelsman, p. 3
9. ^ Abramowitz and Stegun, p. 229, 5.1.20
10. ^ Abramowitz and Stegun, p. 228, see footnote 3.
11. ^ Abramowitz and Stegun, p. 230, 5.1.45
12. ^ After Misra (1940), p. 178
13. ^ Milgram (1985)
14. ^ Abramowitz and Stegun, p. 230, 5.1.26
15. ^ Abramowitz and Stegun, p. 229, 5.1.24
16. ^ George I. Bell; Samuel Glasstone (1970). Nuclear Reactor Theory. Van Nostrand Reinhold Company.
https://www.physicsforums.com/threads/higgs-boson-mass-consequences.923971/ | # Higgs Boson Mass Consequences
• Thread starter alejandromeira
• #1
What are the consequences of the experimental value of the Higgs boson mass for theories of multiverse and supersymmetry?
## Answers and Replies
• #2
The Higgs boson mass significantly constrains the available parameter space of many supersymmetry theories. But, there isn't a really good compact way of describing that impact because there are so many versions of SUSY and so many free parameters in the theory.
It really has no obvious impact on theories of the multiverse, which don't really deserve the title of "theories" anyway.
• #3
Ok. Thank you so much. I still have a lot to study.
• #4
The most interesting (for me, can't speak for others) consequence of the measured Higgs boson mass is a few unexplained correlations:
• With the measured top and Higgs masses, the SM sits right on the vacuum stability/metastability line.
• The sum of the squares of all SM boson masses is equal to half the square of the Higgs VEV to within 0.35%.
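The second correlation is easy to check numerically. Below is a quick sketch; the mass and VEV values are assumed PDG-style figures, not taken from the thread:

```python
# Assumed approximate masses in GeV (PDG-style values)
m_W, m_Z, m_H = 80.379, 91.188, 125.09
v = 246.22  # Higgs vacuum expectation value in GeV (assumed)

# Photon and gluons are massless, so "all SM bosons" reduces to W, Z, H
sum_sq = m_W**2 + m_Z**2 + m_H**2
half_v_sq = v**2 / 2

deviation = sum_sq / half_v_sq - 1
print(f"relative deviation: {deviation:.2%}")  # under half a percent
```

With slightly different input masses the quoted 0.35% figure is reproduced; either way the agreement is well under a percent.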
http://math.stackexchange.com/questions/11081/calculus-find-the-limit-exp-vs-power | # Calculus, find the limit, Exp vs Power?
$\lim_{x\to\infty} \frac{e^x}{x^n}$
n is any natural number.
Using L'hopital doesn't make much sense to me. I did find this in the book:
"In a struggle between a power and an exp, the exp wins."
Can I use that line as an answer? If the fraction were flipped, then the limit would be zero. But in this case the limit is actually $\infty$.
-
That statement you encountered is a nonrigorous version of a statement on growth rates; briefly, no matter how high you take $n$ in $x^n$, there is a value $x$ such that beyond it, $\exp(x)>x^n$. Now use this to see if a limit exists. – J. M. Nov 20 '10 at 14:06
Repeated use of L'Hôpital's rule ($n$ times):
$$\lim_{x\rightarrow \infty }\dfrac{e^{x}}{x^{n}}=\lim_{x\rightarrow \infty }\dfrac{e^{x}}{nx^{n-1}}=\lim_{x\rightarrow \infty }\dfrac{e^{x}}{n(n-1)x^{n-2}}=\cdots =\lim_{x\rightarrow \infty }\dfrac{e^{x}}{n(n-1)\cdots 3\cdot 2x}=\lim_{x\rightarrow \infty }\dfrac{e^{x}}{n!}=\infty$$
To convince yourself: If you had $\underset{x\rightarrow \infty }{\lim }\dfrac{e^{x}}{x^{10}}$ you would have to apply L'Hôpital's rule ten times.
Added 2: Plot of $\dfrac{e^{x}}{x^{3}}$
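As a numerical sanity check (my addition, not part of the original answer), the ratio dips well below 1 for moderate $x$ but eventually dominates:

```python
import math

n = 10
ratios = {x: math.exp(x) / x**n for x in (10, 50, 100, 200)}
for x, r in ratios.items():
    print(f"x = {x:3d}:  e^x / x^{n} = {r:.3e}")
```

The values climb from about 2e-6 at x = 10 to about 7e63 at x = 200, illustrating that the exponential wins for every fixed n.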
-
Use the fact that $e^x \ge \left( 1 + \frac{x}{n} \right)^n$ for any $n > 0$.
-
HINT: One way of looking at this would be: $$\frac{1}{x^{n}} \biggl[ \biggl(1 + \frac{x}{1!} + \frac{x^{2}}{2!} + \cdots + \frac{x^{n}}{n!}\biggr) + \frac{x^{n+1}}{(n+1)!} + \cdots \biggr]$$
I hope you understand why I put the brackets around those terms.
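To finish the hint (my completion, not the original poster's): for $x>0$, discarding everything except the term just after the bracket gives $$\frac{e^{x}}{x^{n}} > \frac{1}{x^{n}}\cdot\frac{x^{n+1}}{(n+1)!} = \frac{x}{(n+1)!} \to \infty \quad (x\to\infty),$$ so the limit is $\infty$.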
-
Why isn't L'Hôpital a good solution? Just use induction.
-
L'Hôpital is a thing whose use should be avoided, as usually the alternatives are much more instructive. – J. M. Nov 20 '10 at 14:16
Considering the boy is just doing homework, I don't think a highbrow mathematical approach is needed here. Besides, I don't know what is so bad about L'Hôpital. Particularly in this case. – Raskolnikov Nov 20 '10 at 14:18
I believe the problem is tailor-made for repeated application of L'Hopital's Rule, but here are some thoughts ...
You could note that $e^{x} = (e^{x/n})^n$, and consider $\left( \lim \frac{e^{x/n}}{x}\right)^n$, so that you are comparing an exponential to a single power of $x$, which might be a bit less daunting for you.
A bit more cleanly, and to make the numerator and denominator match better, define $y := \frac{x}{n}$. Then $$\frac{e^{x}}{x^n}=\frac{e^{ny}}{(ny)^n}=\frac{\left(e^{y}\right)^n}{n^n y^n}=\frac{1}{n^n}\frac{\left(e^{y}\right)^n}{y^{n}}=\frac{1}{n^n}\left(\frac{e^y}{y}\right)^n$$
Since $n$ is a constant, you can direct your limiting attention to $\frac{e^y}{y}$ (as $y \to \infty$, of course).
-
Let's consider the limit to infinity when taking log of that expression:
$$\lim_{x\rightarrow\infty} \ln{\frac{e^x}{x^n}} = \lim_{x\rightarrow\infty}x-n\ln(x)=\lim_{x\rightarrow\infty}x(1-\frac{n\ln(x)}{x})=\infty$$
Therefore, the limit is $\infty$.
The proof is complete.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9500168561935425, "perplexity": 428.72885259677605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246639121.73/warc/CC-MAIN-20150417045719-00060-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/relation-strength-interaction-and-decay-time.170906/ | Relation strength interaction and decay time
1. May 19, 2007
da_willem
There is this characteristic time associated with the decay of particles; ~10^-16s for electromagnetic decay, ~10^-23s for strong decay and >10^-13s for weak decay. Now I know that the decay time is to first order inversely proportional to the coupling constant squared (from a first order Feynman diagram with only a vertex contribution). So from this point of view I 'understand' why decay via strong interactions go faster than via weak interactions, but how can one see this physically?
Short times for virtual particles correspond to high energies by the Heisenberg uncertainty principle, and I've seen the relation between the virtual particle mass and the interaction range, but why do interactions with exchange of massless virtual gluons go faster than those with exchange of photons, which in turn go faster than those with exchange of massive intermediate vector bosons?!
Last edited: May 19, 2007
2. May 21, 2007
Meir Achuz
1. $$\alpha(EM)$$, and $$\alpha(QCD)$$ each vary with energy.
At energies for typical decays (~100 MeV)
$$\alpha(QCD)\sim 100\alpha(EM)$$.
2. The effective weak coupling for typical decays
$$\sim \alpha(EM)(M_p/M_W)^2$$.
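Reply #2's numbers can be turned into rough lifetime ratios (a sketch with assumed textbook values, not from the post; real decays involve phase space and other factors):

```python
# Sketch: decay time scales as 1/alpha^2 at leading order
alpha_em = 1 / 137.0
alpha_qcd = 100 * alpha_em          # per the estimate above, at ~100 MeV
m_p, m_w = 0.938, 80.4              # proton and W masses in GeV (assumed)

alpha_weak = alpha_em * (m_p / m_w) ** 2   # effective weak coupling

tau_strong_over_em = (alpha_em / alpha_qcd) ** 2   # ~1e-4
tau_weak_over_em = (alpha_em / alpha_weak) ** 2    # ~5e7
print(tau_strong_over_em, tau_weak_over_em)
```

These ratios reproduce the ordering of the characteristic times quoted in the question, though not their exact magnitudes.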
Last edited: May 21, 2007
http://mathhelpforum.com/advanced-algebra/195117-automorphisms-group-g.html | # Thread: Automorphisms of a group G
1. ## Automorphisms of a group G
Dummit and Foote Section 4.4 Automorphisms Exercise 1 reads as follows:
Let $\sigma \in Aut(G)$ and let $\phi_g$ be conjugation by $g$. Prove that $\sigma \phi_g \sigma^{-1} = \phi_{\sigma (g)}$.
A start to the proof is as follows:
$(\sigma \phi_g \sigma^{-1})(x)$
= $\sigma (\phi_g (\sigma^{-1} (x)))$
= $\sigma (g \cdot \sigma^{-1} (x) \cdot g^{-1} )$
= $\sigma (g) \cdot x \cdot \sigma (g^{-1} )$
Now we have completed the proof if $\sigma (g^{-1}) = ( {\sigma (g) )}^{-1}$
But why is $\sigma (g^{-1}) = ( {\sigma (g) )}^{-1}$ in this case??
Peter
2. ## Re: Automorphisms of a group G
For an automorphism we have $\phi (e) = e$
Thus $\phi (e) = \phi (g g^{-1}) = \phi (g) \phi( g^{-1} ) = e$
and from this it follows that $\phi (g^{-1}) = {[\phi (g)]}^{-1}$
Am I correct?
Peter
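As a concrete sanity check (my addition, not part of the thread), the identity $\sigma \phi_g \sigma^{-1} = \phi_{\sigma(g)}$ can be verified exhaustively on the small group $S_3$, taking $\sigma$ to be an inner automorphism $\phi_h$:

```python
from itertools import permutations

# Elements of S3 as tuples p with p[i] = image of i
S3 = [tuple(p) for p in permutations(range(3))]

def compose(p, q):
    """(p o q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conj(a, x):
    """phi_a(x) = a x a^-1, conjugation by a."""
    return compose(compose(a, x), inverse(a))

h = (1, 2, 0)   # sigma = phi_h, an inner automorphism
g = (1, 0, 2)   # the fixed element g

sigma = lambda x: conj(h, x)
sigma_inv = lambda x: conj(inverse(h), x)

# sigma(phi_g(sigma^-1(x))) == phi_{sigma(g)}(x) for every x in S3
ok = all(sigma(conj(g, sigma_inv(x))) == conj(sigma(g), x) for x in S3)
print(ok)  # True
```

The check succeeds for any choice of h and g, since the derivation above uses only the homomorphism property of sigma.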
3. ## Re: Automorphisms of a group G
Originally Posted by Bernhard
For an automorphism we have $\phi (e) = e$
Thus $\phi (e) = \phi (g g^{-1}) = \phi (g) \phi( g^{-1} ) = e$
and from this it follows that $\phi (g^{-1}) = {[\phi (g)]}^{-1}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9910903573036194, "perplexity": 846.529163986572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806327.92/warc/CC-MAIN-20171121074123-20171121094123-00554.warc.gz"} |
http://aas.org/archives/BAAS/v26n4/aas185/abs/S5011.html | An X-ray study of the supernova remnant W44
Session 50 -- Supernova Remnants
Display presentation, Tuesday, 10, 1995, 9:20am - 6:30pm
## [50.11] An X-ray study of the supernova remnant W44
Ilana Harrus, John P.~Hughes (SAO)
We report results from the analysis and modeling of data for the supernova remnant (SNR) W44. Spectral analysis of archival data from the Einstein Solid State Spectrometer, the ROSAT Position Sensitive Proportional Counter, and the Large Area Counters on {\it Ginga}, covering an energy range from 0.3 to 8~keV, indicates that the SNR can be described well using a nonequilibrium ionization model with temperature $\sim$0.8 keV, ionization timescale $\sim$9000 cm$^{-3}$ years, and elemental abundances close to the solar ratios. The column density toward the SNR is high: greater than 10$^{22}$ atoms cm$^{-2}$.
As has been known for some time, W44 presents a centrally peaked surface brightness distribution in the soft X-ray band while at radio wavelengths it shows a limb-brightened shell morphology, in contradiction to predictions of standard models (e.g., Sedov) for SNR evolution. We have investigated two different evolutionary scenarios which can explain the centered X-ray morphology of the remnant: (1) the White and Long (1991) model involving the slow thermal evaporation of clouds engulfed by the supernova blast wave as it propagates through a clumpy interstellar medium (ISM), and (2) a hydrodynamical simulation of a blast wave propagating through a homogeneous ISM, including the effects of radiative cooling. Both models can have their respective parameters tuned to reproduce approximately the morphology of the SNR. We find that, for the case of the radiative-phase shock model, the best agreement is obtained for an initial explosion energy in the range $(0.5 - 0.6) \times 10^{51}$~ergs and an ambient ISM density of between 1.5 and 2 cm$^{-3}$.
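For scale (a back-of-the-envelope sketch, not part of the abstract: the remnant age, mean molecular weight, and Sedov constant are assumed, and the abstract itself argues W44 is better described by a radiative-phase model), the Sedov relation $R \approx 1.15\,(E t^2/\rho)^{1/5}$ with the fitted parameter ranges gives a plausible remnant radius:

```python
# Sedov-Taylor radius sketch (constant 1.15 assumes gamma = 5/3)
E = 0.55e51              # explosion energy in erg (middle of fitted range)
n = 1.75                 # ambient density in cm^-3 (middle of fitted range)
rho = n * 1.4 * 1.67e-24 # mass density in g/cm^3 (assumed mu = 1.4)
t = 1.0e4 * 3.156e7      # assumed age of 10,000 yr, in seconds

R_cm = 1.15 * (E * t**2 / rho) ** 0.2
R_pc = R_cm / 3.086e18
print(f"R ~ {R_pc:.1f} pc")
```

The result, of order 10 pc, is consistent with the observed size of W44.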
https://homework.cpm.org/category/CON_FOUND/textbook/mc2/chapter/10/lesson/10.2.4/problem/10-120 | ### Home > MC2 > Chapter 10 > Lesson 10.2.4 > Problem10-120
10-120.
If the area of the triangle at right is $132$ cm$^{2}$, what is the height?
Use the equation for finding the area of a triangle.
$\text{Area} = \frac{1}{2} (\text{base})(\text{height})$
Substitute all the values that are known.
$132 = \frac{1}{2} (16)(\text{height})$
Now simplify and solve for height. | {"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8982581496238708, "perplexity": 1728.529792244494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500215.91/warc/CC-MAIN-20230205032040-20230205062040-00294.warc.gz"} |
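The remaining arithmetic can be checked with a short computation (an illustration, not part of the original hint):

```python
area = 132   # cm^2
base = 16    # cm

# From Area = (1/2)(base)(height), solve for height:
height = 2 * area / base
print(height)  # 16.5
```

So the height of the triangle is 16.5 cm.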
http://docplayer.net/1768686-Ion-exchange-reactions-of-clays.html | # ION EXCHANGE REACTIONS OF CLAYS
ABSTRACT

By D. R. Lewis**

It has been recognized for many years that many aspects of clay technology, including soil treatment and drilling mud treatment, must remain in an essentially empirical state until a basis for the understanding of ion exchange reactions is established. Much of the work on ion exchange reactions of clays in the past has been directed toward establishing total exchange capacities or determining the ionic distribution empirically. This information in general is not suitable for the evaluation of hypotheses designed to provide a basis for understanding the exchange reaction. When the techniques for characterizing the various clay minerals offered the possibility of quantitative study, the solution and exchanger phase contributions to the ionic distribution could be experimentally evaluated in principle. The particular experimental techniques which have been used to measure ionic distribution, however, frequently neglected observations which are essential if the data are to be used for testing and developing theories of ion exchange. It is now well recognized that molecular adsorption, complex ion formation in solution, and ion-pair formation between a mobile solution ion and a fixed exchanger group may occur in addition to the ion exchange reaction. Therefore, if the data are to be useful to develop theories of ion exchange, the whole system must be selected to minimize such extraneous contributions. On the basis of recent theoretical work, various experimental techniques are evaluated from the point of view of their suitability for equilibrium ion distribution studies. The mass action, adsorption isotherm, and Gibbs-Donnan equilibrium formulations of the ion exchange theory are discussed as they may apply to clay systems.
Recent progress is summarized in (1) solution thermodynamics of mixed electrolytes as it is relevant to ion exchange processes of clays, (2) the contributions of non-ideality of the clay exchanger phase, and (3) the work of swelling of clays which affects the ionic distributions in ion exchange reactions. It is concluded that the parameters which relate to the solid phase of the exchanger and those which relate to the solution are now sufficiently well recognized that future experiments can be planned which may more realistically provide an experimental basis for understanding the process of equilibrium ion exchange distributions in aqueous clay-electrolyte systems.

INTRODUCTION

In principle, all of the answers to questions involving the interaction of matter are calculable from relatively few basic concepts. In this sense, it has been pointed out that all of chemistry is now reducible to applied mathematics. It appears unlikely, however, that entirely theoretical computations will soon displace the experimental aspects of chemistry. At the other extreme, experimental work which is without the guidance offered by a coherent body of theory frequently lacks the integration and direction necessary to achieve useful results. The experimental work concerned with the distribution of ions that will be reached when a clay mineral is placed in a solution of electrolytes originally was without such a guide, and only quite recently has there been any adequate theoretical body on which experimental studies might be planned. There have been some excellent systematic experimental studies which are outstanding examples of intelligently planned work (Schachtschabel, no date; Wiklander, 1950), but the colloidal nature of the clays and the great number of variations provided by the different members of each of the major clay mineral groups add many complexities which must be separated and measured if the results of the experiments are to be useful for any system other than the specific one which has been investigated. Accordingly, this discussion of the basis of ion exchange behavior will exclude non-exchange phenomena. We will define an ion exchange reaction as a thermodynamically reversible interaction between the ions originally occupying a fixed number of reacting sites of the insoluble exchanger with the several ionic species in the solution. This definition eliminates from this discussion such interesting and important topics as the irreversible fixation of ions such as potassium (Wiklander, 1950), ammonium (Barshad, 1951; Joffe and Levine, 1947), zinc (Elgabaly, 1950; Elgabaly and Jenny, 1943), and lithium (Hoffmann and Klemen, 1950). Neither will this discussion concern itself with reactions resulting in covalent bonds between certain clays and hydrogen, or with molecular adsorption from solution. By restricting our discussion to the clays, moreover, we have eliminated discussion of the reactions of the organic ion exchangers and of the inorganic zeolites. The early history of ion exchange studies, starting with the systematic studies by Thompson and Way a century ago, has recently been summarized by Deuel and Hostettler (1950), Duncan and Lister (1948), Kelley (1948), and Kunin and Myers (1950). When a clay mineral is placed in a solution containing several dissolved salts, the whole assembly will in time reach a steady-state condition of ionic distribution between the clay and the solution which will persist for a very long period of time. It is important to know how this equilibrium distribution depends upon the nature of the exchanger and its physical condition and how it depends on the nature of the solution.

* Publication No. 26, Exploration and Production Research Division, Shell Development Co., Houston, Texas.
** Senior chemist, Exploration and Production Research Division, Shell Development Co., Houston 25, Texas.
In general, however, it is to be expected that a variety of processes, including the ion exchange reaction itself, may determine this distribution. Such processes as molecular adsorption, formation of complex solution ions, formation of difficultly soluble salts, or formation of complexes with the exchanger phase may be superimposed on the ion exchange reaction itself in a given system (Bonner, Argersinger, and Davidson, 1952). In the present discussion of the ion exchange reaction, attention will be directed toward those systems in which the distribution of ions arises primarily from the ion exchange reaction itself.

ION EXCHANGE PROPERTIES OF THE CLAY MINERAL GROUPS

It is convenient to consider clays as multivalent polyelectrolytes in ion exchange reactions. For each of the major crystal structure groups, however, it is important to take into consideration the effect of the distribution of charges in the lattice. The relationship between the crystalline structure of the silicate minerals and their ion-exchanging properties has been discussed in considerable detail by Bagchi (1949).

Kaolin Group. Many members of the kaolin group of clay minerals exhibit an almost complete freedom from isomorphous substitution yet have a small but definite ion exchange capacity. The sites of the exchange reactivity of kaolinite are generally agreed to be associated with the structural OH groups on the exposed clay surfaces. Because of the differences in the balance of electrical charges of those hydroxyl ions along the lateral surfaces and those formed by the hydration of silica at the broken edges of the crystals, there may well be more than one class of exchange sites on kaolinite. This picture of the exchange activity arising from the dissociation of the surface hydroxyl protons is consistent with the low magnitude of the total exchange capacities of minerals of this group.

Attapulgite Group. The fibrous clay group typified by attapulgite exhibits a very different geometry from the platy minerals and, accordingly, a different distribution of the charges on the surface ions. In attapulgite itself a small amount of the silicon is frequently replaced by aluminum ions which give rise to the charge deficiency causing the ion exchange activity of attapulgite (Marshall, 1949). Because of its fibrous structure and the presence of channels parallel to the long axis of the crystals in which many of the mobile exchange ions are found, the rate of the ion exchange reaction in attapulgite minerals may be much slower than in platy minerals. This would be expected if the ions along the channel must diffuse into the solution phase to reach an equilibrium.

Illite Group. The illite group of clay minerals are small particle size, plate-shaped clay minerals distinguished by their ability to fix potassium irreversibly. The ion exchange activity for the illites is attributed to isomorphous substitution occurring largely in the surface tetrahedral silica layers.
This gives rise both to a more favorable geometric configuration for microscopic counter-balancing of the unbalance in electrical charge and also to the possibility of formation of covalent linkages. Either condition is likely to produce an irreversible reaction.

Montmorillonite Group. The most active clay group in terms of amount of ion exchange reactivity per unit weight of clay is the montmorillonite family. The high degree of their base exchange capacity and the rapidity of their reactions have long been recognized as outstanding attributes of this class of clay minerals. Minerals of this group are plate-shaped, three-layer lattice minerals with a very high degree of isomorphous substitution, distributed both in the octahedral positions, in which chiefly magnesium substitutes for aluminum, and in the tetrahedral coordination, in which predominantly aluminum substitutes for silicon (Harry, 1950; Hendricks, 1945; Ross and Hendricks, 1945). Because of both the large base-exchange capacity and the widespread occurrence and economic importance of this group of minerals, a great deal of the experimental work has been done (Hauser, 1951). As there are these marked differences in the structure, both geometrically and in electrical charge density, of the principal groups of clay minerals, there will be large variations in the relative contributions of reversible ion exchange reactions, the degree of amphoteric nature of the clay minerals, and physical adsorption to the equilibrium distribution of ions in an aqueous clay-electrolyte system.

EXPERIMENTAL TECHNIQUES

Methods of Preparing Hydrogen-Clay.
Although this discussion is more directly concerned with the interpretation of the data having to do with ion exchange properties of clays than with the determination of the exchange properties themselves, the usefulness of the data is frequently affected considerably by the exact details of the method of determination of the exchange properties, and, accordingly, some attention must be given to the limitations of various techniques. One group of techniques which are commonly employed involves the preparation of the hydrogen form of the clay either by dialysis or electrodialysis or by direct action of a solution of a mineral acid. The acid form of the clay is then treated with the base of the desired salt form and the equilibrium distribution determined from the degree of conversion (often measured by the change in pH of the suspension system), or the inflection in the titration curve is used to determine the total exchange capacity. The difficulties of interpretation of the titration curves of acid clays by either inorganic or organic bases are widely recognized (Marshall, 1949; Mitra and Rajagopalan, 1948, 1948a; Mukherjee and Mitra, 1946). In the first place, there is no general agreement about the nature of the exchange titration curve. The results of various researchers have varied from the production of definitely diprotic (Slabaugh and Culbertson, 1951) character in the titration curves to curves which have a very broad inflection or none at all and in which the establishment of an end-point corresponding to the completion of a reaction is very difficult even to estimate. Some investigators have titrated to an arbitrary pH which they considered to be an end-point for the reaction, assuming that the distribution of proton activity of all the clays in the samples being titrated is the same, and that legitimate and reproducible conditions for measuring cell potentials in suspension are established in each suspension.
The colloidal nature of the system complicates both the measurement of potentials and the interpretation of the potentials in terms of hydrogen ion activities (Mysels, 1951). Moreover, the anomalous behavior of the hydrogen ion in its reactions with clays has long been known, and recently the behavior of hydrogen ions in ion exchange reactions of clays has been found to exhibit a pattern that suggests that these ions are held to many clays partly by covalent bonds (Krishnamoorthy and Overstreet, 1950, 1950b). It is likely that studies of the equilibrium distribution of ions on clays should not involve the preparation of the hydrogen form as a necessary step (Glaeser, 1946a; Vendl and Hung, 1943). A great deal of useful information concerning the polyelectrolyte nature of the clays can probably be derived ultimately from the studies on the titration behavior of the hydrogen form of the clays, but such information is not a necessary and integral part of the study of the exchange behavior of the clays.

Method for Preparing Ammonium-Clay. The most satisfactory experimental technique to employ in a given set of experiments will depend to some extent on the intention of the application of the data. For example, for the determination of the total exchange capacity of the clay minerals, a variety of satisfactory procedures employing either ammonium acetate or ammonium chloride solutions neutralized with ammonium hydroxide have been described which differ only in the details of the preparation and manipulation of the sample (Bray, 1942; Glaeser, 1946; Graham and Sullivan, 1938; Lewis, 1951). The ammonium ions retained by the clay may either be determined directly on the clay or eluted and determined separately. For the determination of the total exchange capacity of a number of clays, the use of an ammonium-form ion exchange resin of suitable character has proved very satisfactory (Lewis, 1952; Wiklander, 1949, 1951).

Experimental Techniques. Experimental techniques may be adapted to micro quantities of clay, or methods may be used that permit the colorimetric determination of the exchange cations rapidly and easily, if less accurately. If the equilibrium distribution of ions between a clay phase and a solution phase is to be determined, the most direct method involves placing a clay with a known ion population in an electrolyte solution of known composition. After a suitable length of time both phases of this system are analyzed to determine the distribution of ions at equilibrium. This method, so direct in principle, is replete with pitfalls. It may be convenient to analyze chemically only the solution phase before and after the reaction, to determine the distribution of ions accomplished by the exchanger phase, thus requiring that the analytical procedure be very accurate in determining a small difference between two large numbers.
Moreover, because the equilibrium water content of the clay depends strongly on its ion form, the concentration of the external solution changes as the ionic composition of the clay changes, and the degree of exclusion of molecular salts by the Donnan mechanism from the hydrated clay changes as the ionic form of the clay changes. The effect of the change in equilibrium water content with change in the ionic form of the exchanger may be so great that failure to consider it may so distort the results that ion exchange in its ordinary sense does not appear to take place (Lowen, Stoener, and Argersinger, 1951). For the most accurate equilibrium determination, the solution phase and the exchanger phase should be physically separated in a manner which does not disturb the ionic equilibrium already established. For accurate work it is desirable to bring the clay to equilibrium with a given composition of electrolyte solution, separate the clay phase and repeatedly bring the clay to equilibrium with successive portions of the same solution. In this way the composition of the electrolyte solution is not altered by the contribution of the displaced cations from the exchanger phase, so that the composition of the equilibrium solution phase may be determined accurately either from an accurately prepared composition of the equilibrium solution or an accurate analysis of the initial solution. The clay phase should finally be separated and analyzed directly for the distribution of the ions participating in the exchange reaction or the exchanging pair displaced by a third cation and analyzed in the elution product.
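The mass-action formulation mentioned in the abstract can be sketched for the simplest case. The following is an illustration with an assumed selectivity coefficient; it ignores the activity corrections and the swelling and Donnan effects discussed above. For a homovalent exchange A+(clay) + B+(aq) <-> B+(clay) + A+(aq) with selectivity K = (x_B c_A)/(x_A c_B) and x_A + x_B = 1, the equilibrium fraction of B on the exchanger follows directly:

```python
def exchanger_fraction_B(K, c_A, c_B):
    """Equivalent fraction of cation B on the exchanger for the homovalent
    exchange A+(clay) + B+(aq) <-> B+(clay) + A+(aq), from the mass-action
    selectivity K = (x_B * c_A) / (x_A * c_B) with x_A + x_B = 1.
    Concentrations stand in for activities (ideal-solution assumption)."""
    r = K * c_B / c_A
    return r / (1.0 + r)

# With K = 1 the exchanger mirrors the solution composition:
print(exchanger_fraction_B(1.0, 0.1, 0.1))  # 0.5
# A selectivity of 4 for B enriches the exchanger in B:
print(exchanger_fraction_B(4.0, 0.1, 0.1))  # 0.8
```

A measured drift of the apparent K across compositions is one way the non-ideality of the exchanger phase, noted in the abstract, shows up.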
The direct experimental determination of equilibrium ionic distribution can be successful for studies of ion exchange if careful attention is paid to the details of the experiment, with suitable attention to analytical accuracy and proper manipulation of the sample, so that the final data provide an accurate picture of the equilibrium partition of electrolyte ions between the solution phase and the exchanger phase.

Clay Chromatographic Methods. A modification of the column chromatographic technique has been used recently in determining the exchange isotherms for clays. This technique involves the preparation of a column consisting of the clay in an inert matrix (asbestos) that provides suitable flow properties for the column. The exchange isotherm is obtained by measuring the composition of the solution passing through the column as one exchange cation on the clay is displaced by another. This technique in principle possesses the virtues of greatly reducing the amount of analytical work required and of having inherent in the process the separation of the clay and electrolyte phases. If radioactive isotopes are used as tracers for following changes in composition of the eluted solution, the whole process can be put on an essentially automatic basis. The recently reported determination of the cesium-sodium isotherm at room temperature on a montmorillonite from Chambers, Arizona (API 23), indicated considerable promise for this technique with the clay minerals (Faucher, Southworth, and Thomas, 1952). The colloidal character of the clay minerals, however, may cause mechanical difficulties in the preparation of suitable chromatographic columns unless the columns are always operated with solutions having relatively high ionic strengths.

Clay-Resin Reaction Methods.
For the determination of the ionic distribution on clay particles at low solution concentrations, monofunctional sulfonic acid resins may be used by bringing an electrolyte solution and resin to equilibrium with the clay. After equilibrium is reached, it is possible, from only a material balance and an analysis of the washed resin phase, to determine the equilibrium distribution of ions on the clay in equilibrium with the electrolyte solution. It has been demonstrated that the distribution of ions between a clay and a solution is independent of the presence of the exchanging resin.

EXPERIMENTAL CONSIDERATIONS

There are two major classes of objectives in the examination of the data which are obtained in the study of ion exchange reactions. The first of these requires only that sufficient data be accumulated so that a working equation or graph can describe the data and permit interpolation and extrapolation of the behavior of this system to conditions not precisely covered by the experiments. This method permits considerable latitude in the type of parameters and the manner of the mathematical combination used to provide a description of the actual behavior of the particular process. With such a description, the behavior of the distribution of calcium ions and sodium ions on a specified clay, for example, could be summarized at the temperature and solution strength of the experiments over a relatively wide range of compositions of the exchanger and solution phases. Such descriptions of behavior serve a useful practical purpose. On the other hand, such descriptions in themselves provide no clues which suggest either the magnitude or direction of changes in selectivity of sodium with respect to calcium as the temperature, the total strength of the solution, or the mineral species should change.

The other objective is that of establishing a sound theoretical basis for understanding the different selectivities of the various ions when reacting with different exchanger phases. The mathematical expression of these theoretical views would provide not only a description of the process, but also a basis for prediction of changes in the nature of the distribution with changes in a wide variety of parameters which enter either explicitly or implicitly into the equations. Good experimental data obtained from well-characterized solutions of electrolytes interacting with well-defined clay mineral species are necessary for either of these considerations. At present there is a great need for more experimental information on the ion exchange behavior of clays under circumstances which permit the examination of the data with a view to testing various hypotheses and theories which have been offered as a basis for the ionic selectivity in ion exchangers. Although the nature of the experimental work which is needed from both practical and theoretical standpoints in the study of ion exchange of clays was clearly pointed out by Bray in 1942, the present need for these data in clay systems is as great as it was at that time. Both the theoretical and the experimental studies designed to establish the contribution of the several conceivable parameters to the actual selectivity of an exchanger for ions in solution have proceeded at a greatly accelerated rate in systems involving synthetic organic resin exchangers. The intensity of activity in the investigation of clay systems is increasing at present.
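The material balance underlying the clay-resin method described above amounts to simple bookkeeping, and can be sketched numerically; the cation pair and all quantities below are hypothetical, chosen only to show the arithmetic.

```python
# A sketch (with invented quantities) of the material balance used in the
# clay-resin method: meq of each cation left on the clay equals meq added
# minus meq found on the washed resin minus meq left in solution.

def clay_ion_fractions(added_meq, resin_meq, solution_meq):
    """Each argument is a dict mapping cation name to milliequivalents.
    Returns the equivalent fraction of each cation on the clay."""
    on_clay = {ion: added_meq[ion] - resin_meq[ion] - solution_meq[ion]
               for ion in added_meq}
    total = sum(on_clay.values())
    return {ion: meq / total for ion, meq in on_clay.items()}

fractions = clay_ion_fractions(
    added_meq={"Na": 2.0, "Ca": 2.0},
    resin_meq={"Na": 0.8, "Ca": 0.5},
    solution_meq={"Na": 0.4, "Ca": 0.3},
)
# Na on clay: 2.0 - 0.8 - 0.4 = 0.8 meq; Ca: 2.0 - 0.5 - 0.3 = 1.2 meq
```

Because only the resin and solution phases need be analyzed, the clay itself is never disturbed by a direct determination.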
Those aspects which Bray pointed out as much-needed extensions of the experimental effort involve leaving the range of ion distributions that is convenient from the standpoint of analytical techniques and extending these studies to very wide ranges of composition of the exchanger phase and over wide ranges of total concentrations of the solution as well. While both of these directions are now being actively pursued by investigators of resin-electrolyte systems, similar progress has not been made in clay investigations. Another aspect on which Bray felt that considerably more work should be done is that of greatly increasing the number of different ions present in a system. From a practical standpoint, particularly in connection with soils, the need for such investigations is undoubtedly great. From the standpoint of theory, however, our knowledge of the specific interactions between ions in solution and in the exchanger phase is much too inadequate to enable us to apply this information theoretically at present.

In his recent review of the theoretical progress being made in the elucidation of the mechanism of ion exchange reactions, Boyd summarized the current status of ion exchange equilibrium theory as being somewhat confused, with the disagreements in the literature far more numerous than agreements (1951). This sentiment echoed the conclusions expressed by Marshall in his discussion of the ion exchange reactions of clays when he reported that the only certain conclusion one can draw at present is that better experiments are needed (1949). The various approaches which are presently being made to establish the principal mechanisms by which the solid exchanger phase controls the distribution of exchangeable cations among its available ion exchange sites when in equilibrium with a solution of a given composition may be classified into several broad groups.
The ion exchange equilibrium has been considered (1) as a class of reversible double-decomposition reaction which may be described by the principles of the law of mass action, (2) as an ionic adsorption reaction, the behavior of which may be described by a suitable isotherm equation for a mixture of electrolytes, (3) as a Gibbs-Donnan distribution between two phases, and (4) as reflecting the behavior of solution ions under the influence of a heteropolar ionic solid surface. Most investigators have preferred either the mass action or the adsorption description of the exchange process.

In general, there are a number of changes which accompany the redistribution of ions in the ion exchange reaction. These variables must be considered when designing experiments to test the various hypotheses of the equilibrium distribution of ions in ion exchange reactions. They include the following processes which compete with the exchange reaction or accompany it:

A. Ion-pair formation between solution ions and exchangers.
B. Molecular adsorption of partially dissociated electrolytes.
C. Complex ion formation in solution.
D. Change in distribution of ion species with changes in concentration of electrolytes.

In addition to these processes, the solution concentration and composition may change during the ion exchange reaction because of the following factors, which must be evaluated to permit calculation of the equilibrium distribution:

A. Variation of equilibrium water content of exchanger with change in ion composition.
B. Change in solution volume resulting from exchange of electrolytes having different partial molar volumes.
MASS-ACTION DESCRIPTION OF ION EXCHANGE REACTION

If we consider a reversible reaction of the following form between monovalent cations A⁺ and B⁺ in solution and an exchange phase Z,

A⁺ + BZ ⇌ B⁺ + AZ   (1)

the law of mass action describes the equilibrium distribution in terms of a product

K = (B⁺)(AZ) / [(A⁺)(BZ)]   (2)

In this expression the quantities in parentheses represent the activities of the various species. The activity of each species is a quantity a_i such that

μ_i = μ_i° + RT ln a_i   (3)

where μ_i is the chemical potential of the species i, μ_i° its chemical potential in some arbitrary standard state, R the universal gas constant, and T the absolute temperature. If the ion exchange reaction is truly reversible and if the activities a_i can be evaluated, at constant temperature and pressure the constant K can be calculated and the Gibbs free energy of the reaction computed from its value.
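The relations above can be sketched in a few lines of Python; the activity values are illustrative inventions, not data from the text.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def mass_action_K(a_B, a_AZ, a_A, a_BZ):
    # Equation (2): K = (B+)(AZ) / [(A+)(BZ)], all quantities activities.
    return (a_B * a_AZ) / (a_A * a_BZ)

def gibbs_free_energy(K, T):
    # Standard Gibbs free energy of the exchange reaction from its
    # equilibrium constant: delta G = -R T ln K.
    return -R * T * math.log(K)

# Illustrative (invented) activities for the four species of equation (1):
K = mass_action_K(a_B=0.02, a_AZ=0.6, a_A=0.01, a_BZ=0.4)   # K = 3.0
dG = gibbs_free_energy(K, T=298.15)   # about -2.7 kJ/mol; negative,
# so the exchange as written (A+ displacing B+ from BZ) is favored.
```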
As a first approximation, the concentrations of the ions in solution and in the exchanger phase have been substituted for the activities. In this form, the value of K is a mass-law concentration product which is not expected to remain constant. For practical purposes a closely related quantity, the selectivity coefficient D, is frequently calculated as

D = (AZ/BZ) (C_B⁺/C_A⁺)   (4)

The type of variation of the equilibrium mass-law product is illustrated in figure 1.

Figure 1. Variation in mass-law product for the reaction BZ → AZ + B⁺, plotted against log C_B⁺/C_A⁺.

Both the mass-law concentration product and the selectivity coefficient are without direct theoretical utility themselves, although they are useful working quantities which differ from the thermodynamic quantities by suitable functions to convert the concentrations of ions to activities. The evaluation of the activities in both the solution and exchanger phases, however, involves several uncertainties at the present stage of our knowledge of these reactions. A number of approaches have been employed to evaluate the activities of the ions which are reacting both in the solution phase and in the clay phase. For the solution phase the basic data required are the activity coefficients of the electrolytes in mixed-ion solutions over the concentration and composition ranges employed in the reactions. In general, this information is not available, although Harned and Owen (1950) have summarized the available data and some rules for computing estimates of activities of electrolytes. The approximation is frequently made that the ion activity is that of the single electrolyte at the total ionic strength of the reacting mixed solution. There is the possible objection to all these methods that the activity of the dilute mixed electrolyte solution may not be the correct activity to use on the
grounds that the exchange reaction occurs only in the immediate vicinity of the highly ionic crystalline clay exchanger, where its activity would be expected to be significantly different from that in the dilute solution, both because of the change of dielectric constant of the solvent and because of the potential energy of the ion in this environment (Davis and Rideal, 1948; Greyhume, 1951; Grimley, 1950; Weyl, 1950). Since the over-all process is the transfer of ions from the dilute solution to the exchanger, however, and since at equilibrium the chemical potential of ions of any species is the same throughout the system, the solution thermodynamic activities should be suitable when they are known.

Figure 2. Activity coefficients for 0.01 m HCl in electrolyte solutions.

The equilibrium constant for reaction (1) can be written for the mono-monovalent exchange as

K = m±²(B) γ±²(B) (AZ) / [m±²(A) γ±²(A) (BZ)]   (5)

where m±(B) is the mean ionic molality of the cation B⁺ with the solution anion, and γ±(B) is the mean ionic activity coefficient for this electrolyte. These quantities are defined in terms of the molalities of the cation m₊ and anion m₋ as

m±^ν = m₊^ν⁺ m₋^ν⁻   (6)

In this expression ν₊ is the valence of the cation, ν₋ the valence of the anion, and

ν = ν₊ + ν₋   (7)

Analogously, the mean ionic activity coefficients are

γ±^ν = γ₊^ν⁺ γ₋^ν⁻   (8)

The mean ionic activities of ions are influenced by the presence of dissimilar ions. The values of the mean ionic activity coefficients for electrolytes have been determined by emf measurements in suitable cells. The effect of
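The mean ionic definitions of equations (6)-(8) can be sketched numerically; the electrolyte and molality below are chosen only for illustration.

```python
def mean_ionic(x_plus, x_minus, nu_plus, nu_minus):
    # Equations (6)-(8): the mean ionic quantity x± satisfies
    # x± ** nu = (x+ ** nu+) * (x- ** nu-), with nu = nu+ + nu-
    # (equation 7).  The same form serves for molalities (6) and for
    # activity coefficients (8).
    nu = nu_plus + nu_minus
    return (x_plus ** nu_plus * x_minus ** nu_minus) ** (1.0 / nu)

# 0.1 molal CaCl2 (illustrative): m+ = 0.1, m- = 0.2, nu+ = 1, nu- = 2.
m_pm = mean_ionic(0.1, 0.2, 1, 2)   # about 0.159 molal

# For a 1-1 electrolyte the mean ionic molality equals the molality itself:
m_pm_11 = mean_ionic(0.1, 0.1, 1, 1)   # 0.1
```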
### Chem 1B Saddleback College Dr. White 1. Experiment 8 Titration Curve for a Monoprotic Acid
Chem 1B Saddleback College Dr. White 1 Experiment 8 Titration Curve for a Monoprotic Acid Objectives To learn the difference between titration curves involving a strong acid with a strong base and a weak
### CHEM 102: Sample Test 5
CHEM 102: Sample Test 5 CHAPTER 17 1. When H 2 SO 4 is dissolved in water, which species would be found in the water at equilibrium in measurable amounts? a. H 2 SO 4 b. H 3 SO + 4 c. HSO 4 d. SO 2 4 e.
### Chapter 14 - Acids and Bases
Chapter 14 - Acids and Bases 14.1 The Nature of Acids and Bases A. Arrhenius Model 1. Acids produce hydrogen ions in aqueous solutions 2. Bases produce hydroxide ions in aqueous solutions B. Bronsted-Lowry
### Physical pharmacy. dr basam al zayady
Physical pharmacy Lec 7 dr basam al zayady Ideal Solutions and Raoult's Law In an ideal solution of two volatile liquids, the partial vapor pressure of each volatile constituent is equal to the vapor pressure
### INTRODUCTORY CHEMISTRY Concepts and Critical Thinking
INTRODUCTORY CHEMISTRY Concepts and Critical Thinking Sixth Edition by Charles H. Corwin Chapter 13 Liquids and Solids by Christopher Hamaker 1 Chapter 13 Properties of Liquids Unlike gases, liquids do
### Q.1 Classify the following according to Lewis theory and Brønsted-Lowry theory.
Acid-base 2816 1 Acid-base theories ACIDS & BASES - IONIC EQUILIBRIA LEWIS acid electron pair acceptor H +, AlCl 3 base electron pair donor NH 3, H 2 O, C 2 H 5 OH, OH e.g. H 3 N: -> BF 3 > H 3 N + BF
### Chapter 13 & 14 Practice Exam
Name: Class: Date: Chapter 13 & 14 Practice Exam Multiple Choice Identify the choice that best completes the statement or answers the question. 1. Acids generally release H 2 gas when they react with a.
### Chapter 17. How are acids different from bases? Acid Physical properties. Base. Explaining the difference in properties of acids and bases
Chapter 17 Acids and Bases How are acids different from bases? Acid Physical properties Base Physical properties Tastes sour Tastes bitter Feels slippery or slimy Chemical properties Chemical properties
### Science 20. Unit A: Chemical Change. Assignment Booklet A1
Science 20 Unit A: Chemical Change Assignment Booklet A FOR TEACHER S USE ONLY Summary Teacher s Comments Chapter Assignment Total Possible Marks 79 Your Mark Science 20 Unit A: Chemical Change Assignment
### Chemical equilibria Buffer solutions
Chemical equilibria Buffer solutions Definition The buffer solutions have the ability to resist changes in ph when smaller amounts of acid or base is added. Importance They are applied in the chemical
### Equilibrium, Acids and Bases Unit Summary:
Equilibrium, Acids and Bases Unit Summary: Prerequisite Skills and Knowledge Understand concepts of concentration, solubility, saturation point, pressure, density, viscosity, flow rate, and temperature
### Acids and Bases. Chapter 16
Acids and Bases Chapter 16 The Arrhenius Model An acid is any substance that produces hydrogen ions, H +, in an aqueous solution. Example: when hydrogen chloride gas is dissolved in water, the following
### Solutions. ... the components of a mixture are uniformly intermingled (the mixture is homogeneous). Solution Composition. Mass percentage of solute
Solutions Properties of Solutions... the components of a mixture are uniformly intermingled (the mixture is homogeneous). Solution Composition: 1. Molarity (M) = moles of solute / liters of solution. 4. Molality (m) = moles of solute / kilograms of solvent.
### Problems you need to KNOW to be successful in the upcoming AP Chemistry exam.
Problems you need to KNOW to be successful in the upcoming AP Chemistry exam. Problem 1 The formula and the molecular weight of an unknown hydrocarbon compound are to be determined by elemental analysis
### Boyle s law - For calculating changes in pressure or volume: P 1 V 1 = P 2 V 2. Charles law - For calculating temperature or volume changes: V 1 T 1
Common Equations Used in Chemistry. Equation for density: d = m/V. Converting °F to °C: °C = (°F - 32) x 5/9. Converting °C to °F: °F = °C x 9/5 + 32. Converting °C to K: K = °C + 273.15. n x molar mass of element
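A minimal sketch (mine, not part of the handout) of the temperature conversions listed above; the helper names are made up for illustration, but the formulas are the standard ones quoted in the snippet.

```python
def f_to_c(f):
    """Fahrenheit to Celsius: C = (F - 32) * 5/9."""
    return (f - 32) * 5 / 9

def c_to_f(c):
    """Celsius to Fahrenheit: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

def c_to_k(c):
    """Celsius to Kelvin: K = C + 273.15."""
    return c + 273.15

assert f_to_c(212) == 100.0              # water boils at 212 °F = 100 °C
assert c_to_f(100) == 212.0
assert abs(c_to_k(25) - 298.15) < 1e-12  # room temperature
```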
### 7.4. Using the Bohr Theory KNOW? Using the Bohr Theory to Describe Atoms and Ions
7.4 Using the Bohr Theory LEARNING TIP Models such as Figures 1 to 4, on pages 218 and 219, help you visualize scientific explanations. As you examine Figures 1 to 4, look back and forth between the diagrams
### Q.1 Classify the following according to Lewis theory and Brønsted-Lowry theory.
Acid-base A4 1 Acid-base theories ACIDS & BASES - IONIC EQUILIBRIA 1. LEWIS acid electron pair acceptor H, AlCl 3 base electron pair donor NH 3, H 2 O, C 2 H 5 OH, OH e.g. H 3 N: -> BF 3 > H 3 N BF 3 see
### Chapter 2 The Chemical Context of Life
Chapter 2 The Chemical Context of Life Multiple-Choice Questions 1) About 25 of the 92 natural elements are known to be essential to life. Which four of these 25 elements make up approximately 96% of living
### ION EXCHANGE FOR DUMMIES. An introduction
ION EXCHANGE FOR DUMMIES An introduction Water Water is a liquid. Water is made of water molecules (formula H 2 O). All natural waters contain some foreign substances, usually in small amounts. The water
### Paper 1 (7404/1): Inorganic and Physical Chemistry Mark scheme
AQA Qualifications AS Chemistry Paper (7404/): Inorganic and Physical Chemistry Mark scheme 7404 Specimen paper Version 0.6 MARK SCHEME AS Chemistry Specimen paper Section A 0. s 2 2s 2 2p 6 3s 2 3p 6
### Solutions CHAPTER Specific answers depend on student choices.
CHAPTER 15 1. Specific answers depend on student choices.. A heterogeneous mixture does not have a uniform composition: the composition varies in different places within the mixture. Examples of non homogeneous
http://math.stackexchange.com/questions/71596/mechanical-definition-of-ordinals | Mechanical definition of ordinals
It seems that one can construct ordinals from bottom up by successively introducing a new symbol each time a limit is taken: $$1,\ 2,\ \ldots,\ \omega,\ \omega +1,\ \omega +2,\ \ldots,\ \omega\cdot 2,\ \omega\cdot 2 +1,\ \ldots,\ \omega^{2},\ \ldots,\ \omega^{3},\ \ldots\ \omega^{\omega},\ \ldots,\ \omega^{\omega^{\omega}},\ \ldots, \epsilon_{0},\ \ldots$$ Can this be taken as a (mechanical) definition of ordinals? More abstract definitions like "an ordinal is a transitive well-ordered set satisfying certain properties" are much more appealing to me. Is this mechanical definition sufficient to prove things like "each well-ordered set is order isomorphic to exactly one ordinal?"
What do you do when you reach an uncountable ordinal - do you have an uncountable number of symbols to choose from? What if we just use each ordinal as a symbol for itself? – Carl Mummert Oct 11 '11 at 1:37
Worse yet, the first uncountable ordinal $\omega_1$ cannot be reached as the limit of a countable sequence of smaller ordinals. So your process will give you at most the countable ordninals. – Henning Makholm Oct 11 '11 at 1:44
@Henning: this is an argument in favor of taking each ordinal as a symbol for itself. – Carl Mummert Oct 11 '11 at 1:50
@Carl, they are awfully hard to write down on paper (except by use of other symbols, and even then we don't get most of them), which strikes me as a rather basic requirement for symbols. – Henning Makholm Oct 11 '11 at 1:53
In fact, the intent of the OP's method won't even get you to a nonrecursive ordinal, but it will get you to things that make $\varepsilon_{0}$ pale into insignificance (e.g. $\Gamma_{0}$, $\Gamma_{\varepsilon_{0}}$, the Bachmann-Howard ordinal, etc.). See en.wikipedia.org/wiki/Recursive_ordinal and en.wikipedia.org/wiki/Large_countable_ordinal – Dave L. Renfro Oct 11 '11 at 14:52
As remarked in the comments, this is far from sufficient to cover even the countable ordinals. Personally, I see the problem with the three dots at the end, which imply both an undefined idea of continuing this sequence and something that will terminate after at most $\omega_1$ many steps.
I imagine you might get to some large countable ordinals, perhaps $\epsilon_{\epsilon_0}$ or even higher. However this will terminate long before $\omega_1$.
Why is that a problem? Well, of course we know about well-ordered sets whose order type is uncountable. But think of this reason: $\mu_0=\{\text{all those ordinals you wrote above}\}$, ordered by $\in$, would be a transitive and well-ordered set. However, it is not isomorphic to any of its members.
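To make the "mechanical" flavour of the bottom-up construction concrete, here is a small sketch (my own, not from the thread) that encodes ordinals below $\omega^\omega$ in Cantor normal form as coefficient tuples and compares them lexicographically. Note how far short of $\omega_1$ (or even $\epsilon_0$) such a notation system stops.

```python
# Ordinals below omega^omega in Cantor normal form
#   omega^k * c_k + ... + omega * c_1 + c_0
# encoded as coefficient tuples (c_k, ..., c_1, c_0); comparison is
# lexicographic after padding, which is exactly the "introduce a new
# symbol at each limit" bookkeeping from the question.

def ordinal(*coeffs):
    """Canonical encoding: strip leading zero coefficients."""
    i = 0
    while i < len(coeffs) - 1 and coeffs[i] == 0:
        i += 1
    return coeffs[i:]

def less(a, b):
    """Compare two encoded ordinals: pad to equal length, then lexicographic."""
    n = max(len(a), len(b))
    pa = (0,) * (n - len(a)) + a
    pb = (0,) * (n - len(b)) + b
    return pa < pb

omega    = ordinal(1, 0)      # omega
omega2p1 = ordinal(2, 1)      # omega*2 + 1
omega_sq = ordinal(1, 0, 0)   # omega^2

assert less(ordinal(41), omega)       # every finite ordinal < omega
assert less(ordinal(2, 0), omega2p1)  # omega*2 < omega*2 + 1
assert less(omega2p1, omega_sq)       # omega*2 + 1 < omega^2
```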
https://www.physicsforums.com/threads/perturbation-theory-qualitative-question.603217/ | # Perturbation theory (qualitative question)
• Thread starter LogicX
## Homework Statement
How does the energy change (negative, positive or no change) in the HOMO-LUMO transition of a conjugated polyene where there are 5 double bonds when a nitrogen is substituted in the center of the chain? The substitution lowers the potential energy in the center of the box (everywhere else V(x)=0 for particle in a box).
When there are 6 double bonds, the opposite change happens. Why?
## Homework Equations
E1 = ⟨ψ0|H1|ψ0⟩
E(perturbed) = E0 + λE1
## The Attempt at a Solution
Ok, so if you look at the particle in a box ψ*ψ for n=5 and for n=6, the center of the n=5 is at the top of a peak, while for n=6 it is at a node (i.e. where the probability=0). I'm not sure how to use this info to say how the excitation energy would change.
I think it means that for n=6 there is no change because there is no probability of an electron being there so the substitution does not change the excitation energy. And for n=5 there is a decrease in potential energy so E1 is more negative and the gap would be larger? (or would a decrease in V(x) mean that the gap is smaller?)
Does any of that make sense? Again, I just need a qualitative answer, and it basically boils down to how E1 changes with the substitution.
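A numeric sketch (my own, not from the coursework) of the argument above: model the nitrogen substitution as a narrow attractive well of depth V0 centred in the box, and evaluate the first-order shift E1 = ⟨ψn|H1|ψn⟩ for particle-in-a-box states. With 5 double bonds (10 π electrons) the HOMO→LUMO pair is n = 5 → 6; with 6 double bonds it is n = 6 → 7. The well depth, width, and box length below are made-up illustration values.

```python
import math

# psi_n(x) = sqrt(2/L) sin(n pi x / L); the perturbation is modelled as
# a narrow well V(x) = -V0 on [L/2 - a/2, L/2 + a/2], integrated by
# midpoint rule.  All parameter values are arbitrary illustrations.
def first_order_shift(n, L=1.0, V0=1.0, a=1e-3, steps=2000):
    lo, hi = L/2 - a/2, L/2 + a/2
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        psi2 = (2.0 / L) * math.sin(n * math.pi * x / L) ** 2
        total += -V0 * psi2 * dx
    return total

# 5 double bonds -> 10 pi electrons -> HOMO n=5, LUMO n=6
gap_change_5 = first_order_shift(6) - first_order_shift(5)
# 6 double bonds -> 12 pi electrons -> HOMO n=6, LUMO n=7
gap_change_6 = first_order_shift(7) - first_order_shift(6)

assert gap_change_5 > 0  # odd-n HOMO sits at a peak, is lowered: gap widens
assert gap_change_6 < 0  # even-n HOMO sits at a node, odd-n LUMO is lowered: gap narrows
```

So the sign of the excitation-energy change flips between the two chain lengths, which matches the qualitative reasoning about peaks versus nodes at the centre of the box.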
EDIT: I noticed this thread seems to be related but I'm still not quite sure of the answer.
http://mathhelpforum.com/trigonometry/197807-converting-b-w-polar-equations-rectangular-equations.html | # Math Help - CONVERTING b/w POLAR equations & RECTANGULAR equations
1. ## CONVERTING b/w POLAR equations & RECTANGULAR equations
I 'd like to see the steps it takes to solve each of these problems. Thanks!
Polar to Rectangular:
1.) r² = sin 2θ
2.) r = 2 sec θ (I got x = 2, not sure if it's right)
3.) r = 6/(2 cos θ - 3 sin θ)
Rectangular to Polar:
1.) 2xy = 1
2.) y² - 8y - 16 = 0
3.) x² + y² - 2ay = 0
4.) x² = y³ (my final answer is r = cot²θ cscθ)
2. ## Re: CONVERTING b/w POLAR equations & RECTANGULAR equations
Hello, Crysland!
Polar to Rectangular:
$(1)\; r^2\:=\:\sin2\theta$
We have: . . . . . . . $r^2 \:=\:2\sin\theta\cos\theta$
Multiply by $r^2\!:\qquad\;\; r^4 \:=\:2r^2\sin\theta\cos\theta$
. . . . . . . . . . . . . $(r^2)^2 \:=\:2(r\cos\theta)(r\sin\theta)$
Substitute: . $(x^2+y^2)^2 \:=\:2xy$
$(2)\; r \:=\:2\sec\theta$
( I got $x=2.$) . Right!
$\text{We have: }\:r \:=\:\dfrac{2}{\cos\theta} \quad\Rightarrow\quad r\cos\theta \:=\:2 \quad\Rightarrow\quad x \:=\:2$
$(3)\; r \:=\:\dfrac{6}{2\cos\theta - 3\sin\theta}$
$\begin{array}{ccc}\text{We have:} & r(2\cos\theta - 3\sin\theta) \:=\:6 \\ \\ & 2r\cos\theta - 3r\sin\theta \:=\:6 \\ \\ & 2x - 3y \:=\:6 \end{array}$
Rectangular to Polar:
$(1)\; 2xy\:=\:1$
$2(r\cos\theta)(r\sin\theta) \:=\:1 \quad\Rightarrow\quad 2r^2 \:=\:\frac{1}{\sin\theta\cos\theta} \quad\Rightarrow\quad r^2 \:=\:\frac{1}{2\sin\theta\cos\theta}$
. . . . . . . . . . . . . . . . . $\Rightarrow\quad r^2 \:=\:\frac{1}{\sin2\theta} \quad\Rightarrow\quad r^2 \:=\:\csc2\theta$
$(2)\; y^2- 8y - 16 \:=\: 0$
$(r\sin\theta)^2 - 8(r\sin\theta) - 16 \:=\:0 \quad\Rightarrow\quad r^2\sin^2\theta - 8r\sin\theta - 16 \:=\:0$
. . $r \;=\;\dfrac{8\sin\theta \pm \sqrt{64\sin^2\theta + 64\sin^2\theta}}{2\sin^2\theta} \;=\;\dfrac{8\sin\theta \pm\sqrt{128\sin^2\theta}}{2\sin^2\theta}$
. . $r \;=\;\dfrac{8\sin\theta \pm 8\sqrt{2}\sin\theta}{2\sin^2\theta} \;=\;\dfrac{8\sin\theta(1 \pm\sqrt{2})}{2\sin^2\theta} \;=\;\dfrac{4(1\pm\sqrt{2})}{\sin\theta}$
. . $r \;=\;4(1\pm\sqrt{2})\csc\theta$
$(3)\; x^2 + y^2 - 2ay \:=\: 0$
We have: . $r^2 - 2ar\sin\theta \:=\:0 \quad\Rightarrow\quad r(r - 2a\sin\theta) \:=\:0$
Then: . $r-2a\sin\theta \:=\:0 \quad\Rightarrow\quad r \;=\;2a\sin\theta$
(We can disregard $r = 0.$)
$(4)\;x^2 \:=\: y^3$
(My final answer is: $r\:=\:\cot^2\theta\csc\theta$.) Yes!
We have: . $(r\cos\theta)^2 \;=\;(r\sin\theta)^3 \quad\Rightarrow\quad r^2\cos^2\theta \;=\;r^3\sin^3\theta$
. . . . . . . . . $\cos^2\theta \;=\;r\sin^3\theta \quad\Rightarrow\quad \dfrac{\cos^2\theta}{\sin^3\theta} \;=\;r$
. . . . . . . . . . $r \;=\;\frac{\cos^2\theta}{\sin^2\theta}\cdot\frac{1}{\sin\theta} \quad\Rightarrow\quad r \;=\;\cot^2\theta\csc\theta$
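As a quick sanity check (mine, not part of the thread), sampling a few angles confirms numerically that two of the pairs above describe the same curve.

```python
import math

def xy(r, theta):
    """Convert a polar point (r, theta) to Cartesian (x, y)."""
    return r * math.cos(theta), r * math.sin(theta)

for theta in [0.3, 0.9, 2.0, 4.0]:
    # (3) polar -> rectangular: r = 6/(2 cos t - 3 sin t)  <=>  2x - 3y = 6
    denom = 2 * math.cos(theta) - 3 * math.sin(theta)
    if abs(denom) > 1e-6:
        x, y = xy(6 / denom, theta)
        assert abs(2 * x - 3 * y - 6) < 1e-9

    # (4) rectangular -> polar: x^2 = y^3  <=>  r = cot^2 t * csc t
    r = (math.cos(theta) ** 2 / math.sin(theta) ** 2) * (1 / math.sin(theta))
    x, y = xy(r, theta)
    assert abs(x ** 2 - y ** 3) < 1e-6
```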
http://www.ck12.org/algebra/Checking-for-Solutions-to-Systems-of-Linear-Inequalities/lecture/Testing-Solutions-for-a-System-of-Inequalities/r1/ | <meta http-equiv="refresh" content="1; url=/nojavascript/"> Checking for Solutions to Systems of Linear Inequalities ( Video ) | Algebra | CK-12 Foundation
# Checking for Solutions to Systems of Linear Inequalities
Testing Solutions for a System of Inequalities
Shows an example to demonstrate how testing solutions to a system of inequalities works.
https://artofproblemsolving.com/wiki/index.php?title=2011_AIME_II_Problems/Problem_9&oldid=133271 | # 2011 AIME II Problems/Problem 9
## Problem 9
Let be non-negative real numbers such that , and . Let and be positive relatively prime integers such that is the maximum possible value of . Find .
## Solution
Note that neither the constraint nor the expression we need to maximize involves products with . Factoring out say and we see that the constraint is , while the expression we want to maximize is . Adding the left side of the constraint to the expression, we get: . This new expression is the product of three non-negative terms whose sum is equal to 1. By AM-GM this product is at most . Since we have added at least the desired maximum is at most . It is easy to see that this upper bound can in fact be achieved by ensuring that the constraint expression is equal to with —for example, by choosing and small enough—so our answer is
An example is:
Another example is
## Solution 2 (Not legit)
There's a symmetry between and . Therefore, a good guess is that and , at which point we know that , , and we are trying to maximize . Then,
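As a side note (not part of the wiki page), the AM-GM step used in Solution 1, that three non-negative numbers summing to 1 have product at most (1/3)^3 = 1/27, can be spot-checked numerically.

```python
import random

# Random non-negative triples (x, y, z) with x + y + z = 1: the largest
# product found should stay at or below 1/27, the AM-GM bound.
random.seed(0)
best = 0.0
for _ in range(10_000):
    a, b = sorted(random.random() for _ in range(2))
    x, y, z = a, b - a, 1 - b          # non-negative, sum exactly 1
    best = max(best, x * y * z)

assert best <= 1/27 + 1e-12
assert abs((1/3) * (1/3) * (1/3) - 1/27) < 1e-12  # equality case x = y = z = 1/3
```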
https://superphysics.org/research/einstein/relativity/section-21/ | Section 21
# The Foundations of Classical Mechanics and Special Relativity are Unsatisfactory
March 22, 2022
Newton’s First Law is only valid for non-moving `K` which:
• have unique states of motion, and
• are in uniform translational motion relative to each other.
Relative to other reference-bodies `K`, the law is not valid.
Both in classical mechanics and in special relativity, we differentiate between:
• viewpoints `K` [man outside the box] where the laws of nature can hold relatively [affected by c]
• viewpoints `K` [man inside the box] where those laws cannot hold relatively [insignificant to c]
But why are relativistic viewpoints more important than non-relativistic viewpoints? *
*Superphysics Note: No, they are not more important or have more priority to Nature. All reference-bodies or viewpoints are of equal importance!
A gas range has two identical pots with water. Steam is being emitted continuously from Pot A which is on a flame, but not from Pot B which has no flame. I can see that the flame causes steam. If both have no flame but Pot A still gives steam, then I will be puzzled.
Similarly, I seek in vain for a real something in classical mechanics or special relativity which causes gravity and thereby creates the different behaviour of bodies from viewpoints `K` [inside the box] and `K'`* [outside the box].
**Einstein Note: The objection is most important when the motion of the viewpoint is inherent e.g. when the viewpoint is rotating uniformly.
Newton saw this objection and attempted to invalidate it, but without success*.
*Superphysics Note: Here, Einstein explains that he invents inertial mass (and therefore the preference to relativistic spacetime) simply because he couldn’t find the cause for gravity. So he sources it from Newton’s Second Law in a spacetime that is in perpetual movement. This is why gravity in his General Relativity is not a force that acts from afar, but a warping of spacetime that changes movements of perpetually-moving objects. The cause of gravity has already been identified by Descartes as aethereal vortices. Newton discarded Descartes and so he couldn’t identify the cause of gravitation.
But E. Mach recognised it most clearly. He claimed that mechanics must be placed on a new basis. It can only be solved by GR since its equations hold for every body of reference, whatever may be its state of motion.
http://mathhelpforum.com/pre-calculus/118490-complex-numbers.html | 1. ## complex numbers
I am having trouble with these 2 questions
Find the complex number z such that (5+2i)+ ((−3−2i)/z)=5i
and
Find the complex number z such that (2−2i)z+(1−4i)z bar=4+5i
any help would be great
thanks
2. Originally Posted by kblythe
I am having trouble with these 2 questions
Find the complex number z such that (5+2i)+ ((−3−2i)/z)=5i
and
Find the complex number z such that (2−2i)z+(1−4i)z bar=4+5i
any help would be great
thanks
1) $\frac{-3 - 2i}{z} = 3i - 5 \Rightarrow \frac{z}{-3 - 2i} = \frac{1}{-5 + 3i} \Rightarrow z = \frac{-3 - 2i}{-5 + 3i}$. Your job is to express this answer in cartesian form.
2) Let $z = x + iy$:
$(2 - 2i)(x + iy) + (1 - 4i)(x - iy) = 4 + 5i$.
Expand and equate the real and imaginary parts on each side. This will give you two simultaneous equations that you must solve for x and y.
If you need more help, please show all your work and say where you're stuck.
https://byjus.com/ncert-solutions-class-10-maths/chapter-11-constructions/ | NCERT Solutions for Class 10 Maths Chapter 11- Constructions
NCERT Solutions for Class 10 Maths Chapter 11 Constructions are provided in a detailed manner, with a step-by-step solution to every question for quick revision. The solutions for the 11th chapter of NCERT Class 10 Maths are prepared by subject experts under the guidelines of NCERT to assist students in their exam preparation. Get free NCERT Solutions for Class 10 Maths, Chapter 11 – Constructions at BYJU'S. All the questions of the NCERT exercises are solved using diagrams with a step-by-step construction procedure; these solutions help students strengthen their concepts and clear doubts.
Access Answers of Maths NCERT Chapter 11 – Constructions
Exercise 11.1 Page: 220
In each of the following, give the justification of the construction also:
1. Draw a line segment of length 7.6 cm and divide it in the ratio 5 : 8. Measure the two parts.
Construction Procedure:
A line segment with a measure of 7.6 cm length is divided in the ratio of 5:8 as follows.
1. Draw line segment AB with the length measure of 7.6 cm
2. Draw a ray AX that makes an acute angle with line segment AB.
3. Locate 13 (= 5 + 8) points A1, A2, A3, A4, …, A13 on the ray AX such that AA1 = A1A2 = A2A3 = … = A12A13.
4. Join the line segment and the ray, BA13.
5. Through the point A5, draw a line parallel to BA13 (making an angle equal to ∠AA13B).
6. This line through A5 intersects the line segment AB at point C.
7. C is the point that divides the line segment AB of 7.6 cm in the required ratio of 5:8.
8. Now, measure the lengths of the line AC and CB. It comes out to the measure of 2.9 cm and 4.7 cm respectively.
Justification:
The construction of the given problem can be justified by proving that
AC/CB = 5/ 8
By construction, we have A5C || A13B. By the Basic Proportionality Theorem in the triangle AA13B, we get
AC/CB = AA5/A5A13 … (1)
From the figure constructed, it is observed that AA5 and A5A13 contain 5 and 8 equal divisions of line segments respectively.
Therefore, it becomes
AA5/A5A13=5/8… (2)
Compare the equations (1) and (2), we obtain
AC/CB = 5/ 8
Hence, Justified.
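The construction and its justification can also be checked numerically. The sketch below (my own, with an arbitrary 40° ray angle) places the 13 equally spaced points on the ray, draws the parallel through A5, and verifies that the intersection C divides AB in the ratio 5:8.

```python
import math

A, B = (0.0, 0.0), (7.6, 0.0)
t = math.radians(40)                      # any acute angle works
d = (math.cos(t), math.sin(t))            # unit direction of ray AX
A5  = (5 * d[0], 5 * d[1])
A13 = (13 * d[0], 13 * d[1])

# Line through A5 parallel to B-A13, intersected with the x-axis (line AB).
v = (A13[0] - B[0], A13[1] - B[1])
s = -A5[1] / v[1]
Cx = A5[0] + s * v[0]

AC, CB = Cx, B[0] - Cx
assert abs(AC / CB - 5 / 8) < 1e-9
assert abs(AC - 7.6 * 5 / 13) < 1e-9      # AC ≈ 2.92 cm, CB ≈ 4.68 cm
```

The computed parts, about 2.92 cm and 4.68 cm, agree with the measured values of 2.9 cm and 4.7 cm quoted in step 8.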
2. Construct a triangle of sides 4 cm, 5 cm and 6 cm and then a triangle similar to it whose sides are 2/3 of
the corresponding sides of the first triangle.
Construction Procedure:
1. Draw a line segment AB which measures 4 cm, i.e., AB = 4 cm.
2. Take the point A as centre, and draw an arc of radius 5 cm.
3. Similarly, take the point B as its centre, and draw an arc of radius 6 cm.
4. The arcs drawn will intersect each other at point C.
5. Now, we obtained AC = 5 cm and BC = 6 cm and therefore ΔABC is the required triangle.
6. Draw a ray AX which makes an acute angle with the line segment AB on the opposite side of vertex C.
7. Locate 3 points such as A1, A2, A3 (as 3 is greater between 2 and 3) on line AX such that it becomes AA1= A1A2 = A2A3.
8. Join B to A3 and draw a line through A2 which is parallel to the line BA3 and intersects AB at point B’.
9. Through the point B’, draw a line parallel to the line BC that intersects the line AC at C’.
10. Therefore, ΔAB’C’ is the required triangle.
Justification:
The construction of the given problem can be justified by proving that
AB’ = (2/3)AB
B’C’ = (2/3)BC
AC’= (2/3)AC
From the construction, we get B’C’ || BC
∴ ∠AB’C’ = ∠ABC (Corresponding angles)
In ΔAB’C’ and ΔABC,
∠ABC = ∠AB’C’ (Proved above)
∠BAC = ∠B’AC’ (Common)
∴ ΔAB’C’ ∼ ΔABC (From AA similarity criterion)
Therefore, AB’/AB = B’C’/BC = AC’/AC …. (1)
In ΔAA2B’ and ΔAA3B,
∠A2AB’ = ∠A3AB (Common)
From the corresponding angles, we get
∠AA2B’ = ∠AA3B
Therefore, from the AA similarity criterion, we obtain
ΔAA2B’ ∼ ΔAA3B
So, AB’/AB = AA2/AA3
Therefore, AB’/AB = 2/3 ……. (2)
From the equations (1) and (2), we get
AB’/AB = B’C’/BC = AC’/AC = 2/3
This can be written as
AB’ = (2/3)AB
B’C’ = (2/3)BC
AC’= (2/3)AC
Hence, justified.
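The scale factor can also be checked numerically: multiplying each side of ΔABC by 2/3 gives the sides of ΔAB’C’, and every ratio of corresponding sides comes out the same, which is exactly what the AA-similarity justification establishes. A small Python sketch (the dictionary layout is my own):

```python
from fractions import Fraction

# Sides of the original triangle ABC (in cm) and the scale factor from the construction.
sides = {'AB': 4, 'AC': 5, 'BC': 6}
k = Fraction(2, 3)

# Sides of the similar triangle AB'C': each corresponding side is scaled by the same factor.
scaled = {name: k * s for name, s in sides.items()}
print(scaled)   # {'AB': Fraction(8, 3), 'AC': Fraction(10, 3), 'BC': Fraction(4, 1)}

# Every ratio of corresponding sides equals 2/3.
assert all(scaled[name] / sides[name] == k for name in sides)
```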
3. Construct a triangle with sides 5 cm, 6 cm and 7 cm and then another triangle whose sides are 7/5 of the corresponding sides of the first triangle.
Construction Procedure:
1. Draw a line segment AB =5 cm.
2. Take A and B as centre, and draw the arcs of radius 6 cm and 7 cm respectively.
3. These arcs will intersect each other at point C and therefore ΔABC is the required triangle with the length of sides as 5 cm, 6 cm, and 7 cm respectively.
4. Draw a ray AX which makes an acute angle with the line segment AB on the opposite side of vertex C.
5. Locate 7 points A1, A2, A3, A4, A5, A6, A7 (as 7 is the greater of 5 and 7) on the ray AX such that AA1 = A1A2 = A2A3 = A3A4 = A4A5 = A5A6 = A6A7.
6. Join B to A5 and draw a line through A7 parallel to the line BA5, intersecting the extended line segment AB at point B’.
7. Through B’, draw a line parallel to the line BC, intersecting the extended line segment AC at C’.
8. Therefore, ΔAB’C’ is the required triangle.
Justification:
The construction of the given problem can be justified by proving that
AB’ = (7/5)AB
B’C’ = (7/5)BC
AC’= (7/5)AC
From the construction, we get B’C’ || BC
∴ ∠AB’C’ = ∠ABC (Corresponding angles)
In ΔAB’C’ and ΔABC,
∠ABC = ∠AB’C’ (Proved above)
∠BAC = ∠B’AC’ (Common)
∴ ΔAB’C’ ∼ ΔABC (From AA similarity criterion)
Therefore, AB’/AB = B’C’/BC = AC’/AC …. (1)
In ΔAA7B’ and ΔAA5B,
∠A7AB’ = ∠A5AB (Common)
From the corresponding angles, we get
∠AA7B’ = ∠AA5B
Therefore, from the AA similarity criterion, we obtain
ΔAA7B’ ∼ ΔAA5B
So, AB’/AB = AA7/AA5
Therefore, AB’/AB = 7/5 ……. (2)
From the equations (1) and (2), we get
AB’/AB = B’C’/BC = AC’/AC = 7/5
This can be written as
AB’ = (7/5)AB
B’C’ = (7/5)BC
AC’= (7/5)AC
Hence, justified.
4. Construct an isosceles triangle whose base is 8 cm and altitude 4 cm and then another triangle whose sides are 3/2 times the corresponding sides of the isosceles triangle.
Construction Procedure:
1. Draw a line segment BC with the measure of 8 cm.
2. Now draw the perpendicular bisector of the line segment BC, intersecting BC at the point D.
3. Take the point D as centre and draw an arc of radius 4 cm which intersects the perpendicular bisector at the point A.
4. Now join AB and AC; ΔABC is the required triangle.
5. Draw a ray BX which makes an acute angle with the line BC on the side opposite to the vertex A.
6. Locate the 3 points B1, B2 and B3 on the ray BX such that BB1 = B1B2 = B2B3
7. Join B2 to C and draw a line through B3 parallel to the line B2C, intersecting the extended line segment BC at point C’.
8. Through C’, draw a line parallel to the line AC, intersecting the extended line segment BA at A’.
9. Therefore, ΔA’BC’ is the required triangle.
Justification:
The construction of the given problem can be justified by proving that
A’B = (3/2)AB
BC’ = (3/2)BC
A’C’= (3/2)AC
From the construction, we get A’C’ || AC
∴ ∠ A’C’B = ∠ACB (Corresponding angles)
In ΔA’BC’ and ΔABC,
∠B = ∠B (common)
∠A’C’B = ∠ACB (Proved above)
∴ ΔA’BC’ ∼ ΔABC (From AA similarity criterion)
Therefore, A’B/AB = BC’/BC = A’C’/AC
Since the corresponding sides of the similar triangles are in the same ratio, it becomes
A’B/AB = BC’/BC = A’C’/AC = 3/2
Hence, justified.
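For the construction above (base BC = 8 cm, altitude AD = 4 cm, scale factor 3/2), the equal sides of the isosceles triangle follow from the Pythagoras theorem applied to the right triangle with legs BD = 4 cm and AD = 4 cm. A short Python check (values rounded; not part of the compass work):

```python
import math

# D is the midpoint of BC, so BD = 4 cm; AD = 4 cm is the altitude.
# Each equal side is the hypotenuse of the right triangle with legs BD and AD.
base, altitude = 8.0, 4.0
equal_side = math.hypot(base / 2, altitude)
print(round(equal_side, 3))   # 5.657 (cm)

# Sides of the enlarged triangle, scale factor 3/2 from the construction.
k = 3 / 2
print(k * base, round(k * equal_side, 3))   # 12.0 and 8.485 (cm)
```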
5. Draw a triangle ABC with side BC = 6 cm, AB = 5 cm and ∠ABC = 60°. Then construct a triangle whose sides are 3/4 of the corresponding sides of the triangle ABC.
Construction Procedure:
1. Draw a ΔABC with base side BC = 6 cm, and AB = 5 cm and ∠ABC = 60°.
2. Draw a ray BX which makes an acute angle with BC on the opposite side of vertex A.
3. Locate 4 points B1, B2, B3, B4 (as 4 is the greater of 3 and 4) on the ray BX such that BB1 = B1B2 = B2B3 = B3B4.
4. Join B4 to C and also draw a line through B3, parallel to B4C, intersecting the line segment BC at C’.
5. Draw a line through C’ parallel to the line AC which intersects the line AB at A’.
6. Therefore, ΔA’BC’ is the required triangle.
Justification:
The construction of the given problem can be justified by proving that
Since the scale factor is 3/4 , we need to prove
A’B = (3/4)AB
BC’ = (3/4)BC
A’C’= (3/4)AC
From the construction, we get A’C’ || AC
In ΔA’BC’ and ΔABC,
∴ ∠ A’C’B = ∠ACB (Corresponding angles)
∠B = ∠B (common)
∴ ΔA’BC’ ∼ ΔABC (From AA similarity criterion)
Since the corresponding sides of the similar triangle are in the same ratio, it becomes
Therefore, A’B/AB = BC’/BC = A’C’/AC
So, A’B/AB = BC’/BC = A’C’/AC = 3/4
Hence, justified.
6. Draw a triangle ABC with side BC = 7 cm, ∠ B = 45°, ∠ A = 105°. Then, construct a triangle whose sides are 4/3 times the corresponding sides of ∆ ABC.
To find ∠C:
Given:
∠B = 45°, ∠A = 105°
We know that,
Sum of all interior angles in a triangle is 180°.
∠A+∠B +∠C = 180°
105°+45°+∠C = 180°
∠C = 180° − 150°
∠C = 30°
So, from the property of triangle, we get ∠C = 30°
Construction Procedure:
The required triangle can be drawn as follows.
1. Draw a ΔABC with side measures of base BC = 7 cm, ∠B = 45°, and ∠C = 30°.
2. Draw a ray BX that makes an acute angle with BC on the opposite side of vertex A.
3. Locate 4 points B1, B2, B3, B4 (as 4 is the greater of 4 and 3) on the ray BX such that BB1 = B1B2 = B2B3 = B3B4.
4. Join B3 to C.
5. Draw a line through B4 parallel to B3C which intersects the extended line BC at C’.
6. Through C’, draw a line parallel to the line AC that intersects the extended line segment BA at A’.
7. Therefore, ΔA’BC’ is the required triangle.
Justification:
The construction of the given problem can be justified by proving that
Since the scale factor is 4/3, we need to prove
A’B = (4/3)AB
BC’ = (4/3)BC
A’C’= (4/3)AC
From the construction, we get A’C’ || AC
In ΔA’BC’ and ΔABC,
∴ ∠A’C’B = ∠ACB (Corresponding angles)
∠B = ∠B (common)
∴ ΔA’BC’ ∼ ΔABC (From AA similarity criterion)
Since the corresponding sides of the similar triangle are in the same ratio, it becomes
Therefore, A’B/AB = BC’/BC = A’C’/AC
So, it becomes A’B/AB = BC’/BC = A’C’/AC = 4/3
Hence, justified.
7. Draw a right triangle in which the sides (other than hypotenuse) are of lengths 4 cm and 3 cm. Then construct another triangle whose sides are 5/3 times the corresponding sides of the given triangle.
Given:
The sides other than the hypotenuse are of lengths 4 cm and 3 cm; this means these two sides (the legs) are perpendicular to each other.
Construction Procedure:
The required triangle can be drawn as follows.
1. Draw a line segment BC =3 cm.
2. Now, at the point B, measure and draw an angle of 90°.
3. Take B as centre and draw an arc of radius 4 cm that intersects the ray at the point A.
4. Now join AC; the triangle ABC is the required triangle.
5. Draw a ray BX that makes an acute angle with BC on the opposite side of vertex A.
6. Locate 5 points B1, B2, B3, B4, B5 on the ray BX such that BB1 = B1B2 = B2B3 = B3B4 = B4B5.
7. Join the points B3C.
8. Draw a line through B5 parallel to B3C which intersects the extended line BC at C’.
9. Through C’, draw a line parallel to the line AC that intersects the extended line AB at A’.
10. Therefore, ΔA’BC’ is the required triangle.
Justification:
The construction of the given problem can be justified by proving that
Since the scale factor is 5/3, we need to prove
A’B = (5/3)AB
BC’ = (5/3)BC
A’C’= (5/3)AC
From the construction, we get A’C’ || AC
In ΔA’BC’ and ΔABC,
∴ ∠ A’C’B = ∠ACB (Corresponding angles)
∠B = ∠B (common)
∴ ΔA’BC’ ∼ ΔABC (From AA similarity criterion)
Since the corresponding sides of the similar triangle are in the same ratio, it becomes
Therefore, A’B/AB = BC’/BC = A’C’/AC
So, it becomes A’B/AB = BC’/BC = A’C’/AC = 5/3
Hence, justified.
Exercise 11.2 Page: 221
In each of the following, give the justification of the construction also:
1. Draw a circle of radius 6 cm. From a point 10 cm away from its centre, construct the pair of tangents to the circle and measure their lengths.
Construction Procedure:
The construction to draw a pair of tangents to the given circle is as follows.
1. Draw a circle with radius = 6 cm with centre O.
2. Locate a point P, which is 10 cm away from O.
3. Join the points O and P by a line segment.
4. Draw the perpendicular bisector of the line OP.
5. Let M be the mid-point of the line PO.
6. Take M as centre and measure the length of MO.
7. With MO as radius, draw a circle centred at M.
8. The circle drawn with radius MO intersects the given circle at the points Q and R.
9. Join PQ and PR.
10. Therefore, PQ and PR are the required tangents.
Justification:
The construction of the given problem can be justified by proving that PQ and PR are the tangents to the circle of radius 6cm with centre O.
To prove this, join OQ and OR represented in dotted lines.
From the construction,
∠PQO is an angle in the semi-circle.
We know that angle in a semi-circle is a right angle, so it becomes,
∴ ∠PQO = 90°
Such that
⇒ OQ ⊥ PQ
Since OQ is the radius of the circle with radius 6 cm, PQ must be a tangent of the circle. Similarly, we can prove that PR is a tangent of the circle.
Hence, justified.
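Step 10 of the construction asks to measure the tangent lengths. Since OQ ⊥ PQ, the length follows from the Pythagoras theorem in the right triangle OQP, so the measurement can be checked arithmetically (a Python sketch, with my own variable names):

```python
import math

# OQ is perpendicular to PQ (radius is perpendicular to tangent), so triangle OQP
# is right-angled at Q and PQ = sqrt(OP^2 - OQ^2) by the Pythagoras theorem.
OP, r = 10.0, 6.0   # distance of P from the centre O, and the radius
tangent_length = math.sqrt(OP ** 2 - r ** 2)
print(tangent_length)   # 8.0, so each tangent should measure 8 cm
```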
2. Construct a tangent to a circle of radius 4 cm from a point on the concentric circle of radius 6 cm and measure its length. Also verify the measurement by actual calculation.
Construction Procedure:
For the given circle, the tangent can be drawn as follows.
1. Draw a circle of 4 cm radius with centre “O”.
2. Again, take O as centre draw a circle of radius 6 cm.
3. Locate a point P on this circle
4. Join the points O and P to form the line segment OP.
5. Draw the perpendicular bisector of the line OP.
6. Let M be the mid-point of PO.
7. Draw a circle with M as its centre and MO as its radius
8. The circle drawn with radius OM intersects the given circle at the points Q and R.
9. Join PQ and PR.
10. PQ and PR are the required tangents.
From the construction, it is observed that PQ and PR are of length 4.47 cm each.
It can be calculated manually as follows
In ∆PQO,
Since PQ is a tangent,
∠PQO = 90°, PO = 6 cm and QO = 4 cm
Applying the Pythagoras theorem in ∆PQO, we obtain PQ² + QO² = PO²
PQ² + (4)² = (6)²
PQ² + 16 = 36
PQ² = 36 − 16
PQ² = 20
PQ = 2√5
PQ = 4.47 cm (approximately)
Therefore, the tangent length PQ ≈ 4.47 cm
Justification:
The construction of the given problem can be justified by proving that PQ and PR are the tangents to the circle of radius 4 cm with centre O.
To prove this, join OQ and OR represented in dotted lines.
From the construction,
∠PQO is an angle in the semi-circle.
We know that angle in a semi-circle is a right angle, so it becomes,
∴ ∠PQO = 90°
Such that
⇒ OQ ⊥ PQ
Since OQ is the radius of the circle with radius 4 cm, PQ must be a tangent of the circle. Similarly, we can prove that PR is a tangent of the circle.
Hence, justified.
3. Draw a circle of radius 3 cm. Take two points P and Q on one of its extended diameter each at a distance of 7 cm from its centre. Draw tangents to the circle from these two points P and Q
Construction Procedure:
The tangent for the given circle can be constructed as follows.
1. Draw a circle with a radius of 3cm with centre “O”.
2. Draw a diameter of the circle, extend it on both sides to 7 cm from the centre, and mark the end points as P and Q.
3. Draw the perpendicular bisector of the line PO and mark the midpoint as M.
4. Draw a circle with M as centre and MO as radius
5. Mark the points A and B where the circle with radius MO intersects the circle of radius 3 cm, and join PA and PB.
6. Now PA and PB are the required tangents.
7. Similarly, from the point Q, we can draw the tangents.
8. From that, QC and QD are the required tangents.
Justification:
The construction of the given problem can be justified by proving that PA and PB are the tangents to the circle of radius 3 cm with centre O.
To prove this, join OA and OB.
From the construction,
∠PAO is an angle in the semi-circle.
We know that angle in a semi-circle is a right angle, so it becomes,
∴ ∠PAO = 90°
Such that
⇒ OA ⊥ PA
Since OA is the radius of the circle with radius 3 cm, PA must be a tangent of the circle. Similarly, we can prove that PB, QC and QD are tangents of the circle.
Hence, justified.
4. Draw a pair of tangents to a circle of radius 5 cm which are inclined to each other at an angle of 60°
Construction Procedure:
The tangents can be constructed in the following manner:
1. Draw a circle of radius 5 cm and with centre as O.
2. Take a point Q on the circumference of the circle and join OQ.
3. Draw a line perpendicular to OQ at the point Q.
4. Draw a radius OR, making an angle of 120° (i.e. 180° − 60°) with OQ.
5. Draw a line perpendicular to OR at the point R.
6. Now both the perpendiculars intersect at point P.
7. Therefore, PQ and PR are the required tangents at an angle of 60°.
Justification:
The construction can be justified by proving that ∠QPR = 60°
By our construction
∠OQP = 90°
∠ORP = 90°
And ∠QOR = 120°
We know that the sum of all interior angles of a quadrilateral = 360°
∠OQP + ∠QOR + ∠ORP + ∠QPR = 360°
90° + 120° + 90° + ∠QPR = 360°
Therefore, ∠QPR = 60°
Hence, justified.
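The 120° used in step 4 is exactly the supplement of the desired 60°, which the quadrilateral angle sum forces. The bookkeeping can be checked in a few lines of Python (a trivial arithmetic sketch):

```python
# In quadrilateral OQPR, the two tangent-radius angles are 90° each, so the
# central angle must be the supplement of the desired angle between the tangents.
desired = 60
central = 180 - desired   # the 120° drawn in step 4
assert 90 + central + 90 + desired == 360   # interior angles of quadrilateral OQPR
print(central)   # 120
```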
5. Draw a line segment AB of length 8 cm. Taking A as centre, draw a circle of radius 4 cm and taking B as centre, draw another circle of radius 3 cm. Construct tangents to each circle from the centre of the other circle.
Construction Procedure:
The tangent for the given circle can be constructed as follows.
1. Draw a line segment AB = 8 cm.
2. Take A as centre and draw a circle of radius 4 cm
3. Take B as centre, draw a circle of radius 3 cm
4. Draw the perpendicular bisector of the line AB and the midpoint is taken as M.
5. Now, take M as centre and draw a circle with MA (= MB) as radius; it intersects the two circles at the points P, Q, R and S.
6. Now join AR, AS, BP and BQ
7. Therefore, the required tangents are AR, AS, BP and BQ
Justification:
The construction can be justified by proving that AS and AR are the tangents of the circle (whose centre is B with radius is 3 cm) and BP and BQ are the tangents of the circle (whose centre is A and radius is 4 cm).
From the construction, to prove this, join AP, AQ, BS, and BR.
∠ASB is an angle in the semi-circle. We know that an angle in a semi-circle is a right angle.
∴ ∠ASB = 90°
⇒ BS ⊥ AS
Since BS is the radius of the circle, AS must be a tangent of the circle.
Similarly, AR, BP, and BQ are the required tangents of the given circle.
6. Let ABC be a right triangle in which AB = 6 cm, BC = 8 cm and ∠ B = 90°. BD is the perpendicular from B on AC. The circle through B, C, D is drawn. Construct the tangents from A to this circle.
Construction Procedure:
The tangent for the given circle can be constructed as follows
1. Draw a line segment BC = 8 cm.
2. At the point B, construct an angle of 90°, so that ∠B = 90°.
3. Take B as centre and draw an arc of radius 6 cm.
4. Let the point be A where the arc intersects the ray.
5. Join the line AC.
6. Therefore, ABC be the required triangle.
7. Now, draw the perpendicular bisector of the line BC and mark the midpoint as E.
8. Take E as centre and BE (= EC) as radius, and draw a circle.
9. Join A to the point E, the centre of the circle.
10. Now, again draw the perpendicular bisector of the line AE and mark the midpoint as M.
11. Take M as centre and AM (= ME) as radius, and draw a circle.
12. This circle intersects the previous circle at the points B and Q
13. Join the points A and Q
14. Therefore, AB and AQ are the required tangents
Justification:
The construction can be justified by proving that AQ and AB are the tangents to the circle.
From the construction, join EQ.
∠AQE is an angle in the semi-circle. We know that an angle in a semi-circle is a right angle.
∴ ∠AQE = 90°
⇒ EQ⊥ AQ
Since EQ is the radius of the circle, AQ has to be a tangent of the circle. Similarly, ∠B = 90°
⇒ AB ⊥ BE
Since BE is the radius of the circle, AB has to be a tangent of the circle.
Hence, justified.
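As a coordinate-geometry cross-check (assuming B at the origin and BC along the x-axis, which matches the construction), the tangent length from A to the circle through B, C, D comes out equal to AB = 6 cm, consistent with AB itself being one of the two tangents:

```python
import math

# Coordinates matching the construction: AB = 6 cm, BC = 8 cm, angle B = 90 degrees.
B, C, A = (0.0, 0.0), (8.0, 0.0), (0.0, 6.0)

# BD is perpendicular to AC, so angle BDC = 90 degrees and D lies on the circle
# with diameter BC: centre E = midpoint of BC, radius BC/2 = 4.
E = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)
r = 4.0

AE_sq = (A[0] - E[0]) ** 2 + (A[1] - E[1]) ** 2   # = 52
tangent_from_A = math.sqrt(AE_sq - r ** 2)        # sqrt(36)
print(tangent_from_A)   # 6.0, equal to AB: AB is perpendicular to BE, so AB is itself a tangent
```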
7. Draw a circle with the help of a bangle. Take a point outside the circle. Construct the pair of tangents from this point to the circle.
Construction Procedure:
The required tangents can be constructed on the given circle as follows.
1. Draw a circle with the help of a bangle.
2. Draw two non-parallel chords such as AB and CD
3. Draw the perpendicular bisector of AB and CD
4. Mark the point where the two perpendicular bisectors intersect as O; this is the centre of the circle.
5. To draw the tangents, take a point P outside the circle.
6. Join the points O and P.
7. Now draw the perpendicular bisector of the line PO and midpoint is taken as M
8. Take M as centre and MO as radius draw a circle.
9. Let this circle intersect the given circle at the points Q and R.
10. Now join PQ and PR
11. Therefore, PQ and PR are the required tangents.
Justification:
The construction can be justified by proving that PQ and PR are the tangents to the circle.
We know that the perpendicular bisector of a chord passes through the centre; since O is the intersection point of the perpendicular bisectors of the two chords, O is the centre of the circle.
Now, join the points OQ and OR.
∠PQO is an angle in the semi-circle. We know that an angle in a semi-circle is a right angle.
∴ ∠PQO = 90°
⇒ OQ⊥ PQ
Since OQ is the radius of the circle, PQ has to be a tangent of the circle. Similarly,
∴ ∠PRO = 90°
⇒ OR ⊥ PR
Since OR is the radius of the circle, PR has to be a tangent of the circle.
Therefore, PQ and PR are the required tangents of a circle.
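The centre-finding part of this construction (steps 2 to 4) has a direct numerical analogue: the centre is where the perpendicular bisectors of two chords meet, and with three points on the circle the two bisector conditions are linear equations that can be solved directly. A Python sketch, using an example circle of radius 5 centred at (2, 3):

```python
# Each perpendicular-bisector condition |P - p_i|^2 = |P - p_j|^2 is linear in (x, y),
# so the centre is found by solving a 2x2 linear system.
def circumcenter(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = x2**2 + y2**2 - x1**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = x3**2 + y3**2 - x2**2 - y2**2
    d = a1 * b2 - a2 * b1          # zero exactly when the three points are collinear
    return ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

# Three points on the circle of radius 5 centred at (2, 3):
O = circumcenter((7, 3), (2, 8), (-3, 3))
print(O)   # (2.0, 3.0)
```

This mirrors the compass construction: two chords determine two bisectors, and their intersection is the centre.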
NCERT Solutions for Class 10 Maths Chapter 11 Constructions
Topics present in NCERT Solutions for Class 10 Maths Chapter 11 include the division of a line segment, construction of tangents to a circle, line segment bisectors and many more. Students in Class 9 study some basics of constructions, like drawing the perpendicular bisector of a line segment, bisecting an angle, triangle construction etc. Using Class 9 concepts, students in Class 10 will learn about some more constructions along with the reasoning behind why they work.
NCERT Class 10 Chapter 11, Constructions, is a part of Geometry. Over the past few years, geometry has carried a total weightage of 15 marks in the final exams. Constructions is a scoring chapter of the geometry section. In the previous year's exam, one question of 4 marks was asked from this chapter.
List of Exercises in class 10 Maths Chapter 11
Exercise 11.1 Solutions (7 Questions)
Exercise 11.2 Solutions (7 Questions)
The NCERT Solutions for the 11th chapter of Class 10 Maths are all about the construction of line segments, the division of a line segment and the construction of tangents to a circle, using an analytical approach. Students also have to provide a justification for each answer.
The topics covered in Maths Chapter 11 Constructions are:
11.1: Introduction
11.2: Division of a Line Segment
11.3: Construction of Tangents to a Circle
11.4: Summary
Some of the ideas applied in this chapter:
1. The locus of a point that moves at an equal distance from two points is the perpendicular bisector of the line segment joining the two points.
2. Perpendicular (or normal) means at right angles, whereas a bisector cuts a line segment into two halves.
3. The construction of different shapes using a pair of compasses and a straightedge or ruler.
Key Features of NCERT Solutions for Class 10 Maths Chapter 11 Constructions
• NCERT solutions can also prove to be of valuable help to students in their assignments and preparation of CBSE term-wise and competitive exams.
• Each question is explained using diagrams which makes learning more interactive.
• Easy and understandable language is used in the NCERT Solutions.
• Detailed solutions are provided using an analytical approach.
Frequently Asked Questions on NCERT Solutions for Class 10 Maths Chapter 11
What is the use of practising NCERT Solutions for Class 10 Maths Chapter 11?
Practising NCERT Solutions for Class 10 Maths Chapter 11 provides you with an idea about the sample of questions that will be asked in the second term exam, which would help students prepare competently. These solutions are useful resources, which can provide them with all the vital information in the most precise form. These solutions cover all topics included in the NCERT syllabus, prescribed by the CBSE board.
What are the topics covered in NCERT Solutions for Class 10 Maths Chapter 11?
The topics covered in NCERT Solutions for Class 10 Maths Chapter 11 Constructions are an introduction to constructions, the division of a line segment and the construction of tangents to a circle; finally, it gives a summary of all the concepts in the whole chapter. By referring to these solutions, you can clear your doubts and also practise additional questions.
Can NCERT Solutions for Class 10 Maths Chapter 11 be viewed only online?
For the ease of learning, the solutions have also been provided in PDF format, so that the students can download them for free and refer to the solutions offline as well. These NCERT Solutions for Class 10 Maths Chapter 11 can be viewed online.
http://math.stackexchange.com/questions/278425/is-it-possible-to-prove-a-mathematical-statement-by-proving-that-a-proof-exists/281615 | # Is it possible to prove a mathematical statement by proving that a proof exists?
I'm sure there are easy ways of proving things using, well... any other method besides this! But still, I'm curious to know whether it would be acceptable/if it has been done before?
Sure. For certain statements, you can even prove them by showing that there is no proof of their negation. – Andres Caicedo Jan 14 '13 at 6:52
@AndresCaicedo not true. If you know you can't disprove something, then it's consistent, not proven. AC and ~AC are both consistent with ZF – Jan Dvorak Jan 14 '13 at 6:54
I'm wondering how would you non-constructively prove that a proof exists. The proof of a proof would then count as a proof of the original concept. – Jan Dvorak Jan 14 '13 at 6:57
@Jan Dvorak I understand your point. The interesting question is: "Are there any known theorems that use this proof-strategy in their proof?" – Amr Jan 14 '13 at 7:03
@JanDvorak I am well aware of these issues, of course. The statement I wrote can be made precise, and the precise versions are true. For example, it is a theorem of ZF that any $\Pi^0_1$ statement about the natural numbers that is not refutable in PA is true. – Andres Caicedo Jan 14 '13 at 7:05
There is a disappointing way of answering your question affirmatively: If $\phi$ is a statement such that First order Peano Arithmetic $\mathsf{PA}$ proves "$\phi$ is provable", then in fact $\mathsf{PA}$ also proves $\phi$. You can replace here $\mathsf{PA}$ with $\mathsf{ZF}$ (Zermelo Fraenkel set theory) or your usual or favorite first order formalization of mathematics. In a sense, this is exactly what you were asking: If we can prove that there is a proof, then there is a proof. On the other hand, this is actually unsatisfactory because there are no known natural examples of statements $\phi$ for which it is actually easier to prove that there is a proof rather than actually finding it.
(The above has a neat formal counterpart, Löb's theorem, that states that if $\mathsf{PA}$ can prove "If $\phi$ is provable, then $\phi$", then in fact $\mathsf{PA}$ can prove $\phi$.)
There are other ways of answering affirmatively your question. For example, it is a theorem of $\mathsf{ZF}$ that if $\phi$ is a $\Pi^0_1$ statement and $\mathsf{PA}$ does not prove its negation, then $\phi$ is true. To be $\Pi^0_1$ means that $\phi$ is of the form "For all natural numbers $n$, $R(n)$", where $R$ is a recursive statement (that is, there is an algorithm that, for each input $n$, returns in a finite amount of time whether $R(n)$ is true or false). Many natural and interesting statements are $\Pi^0_1$: The Riemann hypothesis, the Goldbach conjecture, etc. It would be fantastic to verify some such $\phi$ this way. On the other hand, there is no scenario for achieving anything like this.
The key to the results above is that $\mathsf{PA}$, and $\mathsf{ZF}$, and any reasonable formalization of mathematics, are arithmetically sound, meaning that their theorems about natural numbers are actually true in the standard model of arithmetic. The first paragraph is a consequence of arithmetic soundness. The third paragraph is a consequence of the fact that $\mathsf{PA}$ proves all true $\Sigma^0_1$-statements. (Much less than $\mathsf{PA}$ suffices here, usually one refers to Robinson's arithmetic $Q$.) I do not recall whether this property has a standard name.
Good answer! Also, if we were to name this proof technique, what do you think would be appropriate? – chubbycantorset Jan 14 '13 at 18:04
(I've moved a comment answering the follow-up question above to the body of the answer, and added some references.) – Andres Caicedo Sep 24 '13 at 15:41
A sort of 'flip' of this, of course (and one catch with the purported approach to e.g. Goldbach, which Andres is certainly well aware of), is that there is (almost certainly) no statement $\phi$ for which we can prove that e.g. PA doesn't prove $\phi$! This is because if PA is inconsistent then it proves everything, so proving that there's a statement that PA doesn't prove is tantamount to a proof of the consistency of PA, and as such (by Godel) is impossible within PA itself unless the theory is inconsistent. (Note: this doesn't rule out proofs from outside PA,a la Goodstein...) – Steven Stadnicki Sep 24 '13 at 16:04
I'd say the model-theoretic proof of the Ax-Grothendieck theorem falls into this category. There may be other ways of proving it, but this is the only proof I saw in grad school, and it's pretty natural if you know model theory.
The theorem states that for any polynomial map $f:\mathbb{C}^n \to\mathbb{C}^n$, if $f$ is injective (one-to-one), then it is surjective (onto). The theorem uses several results in model theory, and the argument goes roughly as follows.
Let $ACL_p$ denote the theory of algebraically closed fields of characteristic $p$. $ACL_0$ is axiomatized by the axioms of an algebraically closed field and the axiom scheme $\psi_2, \psi_3, \psi_4,\ldots$, where $\psi_k$ is the statement "for all $x \neq 0$, $k x \neq 0$". Note that all $\psi_k$ are also proved by $ACL_p$, if $p$ does not divide $k$.
1. The theorem is true in $ACL_p$, $p>0$. This can be shown by contradiction for the algebraic closure $\overline{\mathbb{F}_p}$ (which suffices, by the completeness in (2)): assume a counterexample, and let $F_0$ be the subfield generated by the coefficients of $f$ together with the elements appearing in the counterexample. Every finitely generated subfield of $\overline{\mathbb{F}_p}$ is finite, so $F_0$ is a finite field. Since $F_0^n$ is finite and $f$ maps it injectively into itself, the map must be surjective on $F_0^n$ as well, a contradiction.
2. The theory of algebraically closed fields in characteristic $p$ is complete (i.e. the standard axioms prove or disprove all statements expressible in the first order language of rings).
3. For each degree $d$ and dimension $n$, restrict Ax-Grothendieck to a statement $\phi_{d,n}$, which is expressible as a statement in the first order language of rings. Then $\phi_{d,n}$ is provable in $ACL_p$ for all characteristics $p > 0$.
4. Assume that $\phi_{d,n}$ is false for $p=0$. Then by completeness, there is a proof $P$ of $\neg \phi_{d,n}$ in $ACL_0$. By the finiteness of proofs, there exists a finite subset of axioms for $ACL_0$ which are used in this proof. If none of the $\psi_k$ are used in $P$, then $\neg \phi_{d,n}$ is true of all algebraically closed fields, which cannot be the case by (2). Let $k_0,\ldots, k_m$ be the collection of indices of the $\psi_k$ used in $P$. Pick a prime $p_0$ which does not divide any of $k_0,\ldots,k_m$. Then all of the axioms used in $P$ are also true of $ACL_{p_0}$. Thus $ACL_{p_0}$ also proves $\neg \phi_{d,n}$, also contradicting (2). Contradiction. Therefore there is a proof of $\phi_{d,n}$ in $ACL_0$.
So the proof is actually along the lines of "for each degree $d$ and dimension $n$ there is a proof of the Ax-Grothendieck theorem restricted to that degree and dimension." What any of those proofs are, I have no clue.
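Step (1) of the argument can be illustrated concretely by brute force over a small finite field (a toy illustration of the pigeonhole step, not part of the model-theoretic proof itself): any injective polynomial map on a finite set is automatically surjective. Here is a Python sketch over $\mathbb{F}_5$; the particular map is my own example.

```python
from itertools import product

p = 5
points = list(product(range(p), repeat=2))

def f(x, y):
    # An example polynomial map F_5^2 -> F_5^2; x -> x^3 is a bijection on F_5
    # because gcd(3, 5 - 1) = 1, so f is injective.
    return ((x**3 + y) % p, (y + 1) % p)

values = [f(x, y) for x, y in points]
injective = len(set(values)) == len(points)
surjective = set(values) == set(points)
print(injective, surjective)   # True True: injective on a finite set forces surjective
```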
Hi. Do you see how to extend the argument to prove that the inverse should also be polynomial? – Andres Caicedo Jan 18 '13 at 21:09
Not off the top of my head. I'm guessing it goes something like this: Since every function on finite fields is a polynomial function, there should be an upper bound $U(n, d, p)$ on the degree of the inverse for every $n$. If that function can be made independent of $p$, then just use "$\phi_{d,n}$ AND there is a polynomial of degree at most $U(n,d)$ which is an inverse of $f$" instead of just $\phi_{d,n}$. The proof would go the same. I don't know how to make the upper bound on the degree independent of $p$, however. (Is it possible?) – RecursivelyIronic Jan 18 '13 at 23:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9327898025512695, "perplexity": 169.29424491233377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443062.21/warc/CC-MAIN-20141017005723-00273-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://images.planetmath.org/quantumoperatoralgebrasinquantumfieldtheories | # quantum operator algebras in quantum field theories
## 0.1 Introduction
This is a topic entry that introduces quantum operator algebras and presents concisely the important roles they play in quantum field theories.
###### Definition 0.1.
Quantum operator algebras (QOA) in quantum field theories are defined as the algebras of observable operators, and as such they are closely related to von Neumann algebras; quantum operators are usually defined on Hilbert spaces, or in some QFTs on Hilbert space bundles or other similar families of spaces.
###### Remark 0.1.
Representations of Banach $*$-algebras (that are defined on Hilbert spaces) are closely related to C*-algebra representations, which provide a useful approach to defining quantum space-times.
## 0.2 Quantum operator algebras in quantum field theories: QOA Role in QFTs
Important examples of quantum operators are: the Hamiltonian operator (or Schrödinger operator), the position and momentum operators, Casimir operators, unitary operators and spin operators. The observable operators are also self-adjoint. More general operators were recently defined, such as Prigogine's superoperators.
Another development in quantum theories was the introduction of Fréchet nuclear spaces or ‘rigged’ Hilbert spaces (Hilbert bundles). The following sections define several types of quantum operator algebras that provide the foundation of modern quantum field theories in mathematical physics.
### 0.2.1 Quantum groups; quantum operator algebras and related symmetries.
Quantum theories adopted a new lease of life post 1955 when von Neumann beautifully re-formulated quantum mechanics (QM) and quantum theories (QT) in the mathematically rigorous context of Hilbert spaces and operator algebras defined over such spaces. From a current physics perspective, von Neumann's approach to quantum mechanics has however done much more: it has not only paved the way to expanding the role of symmetry in physics, as for example with the Wigner-Eckart theorem and its applications, but also revealed the fundamental importance in quantum physics of the state space geometry of quantum operator algebras.
## 0.3 Basic mathematical definitions in QOA:
### 0.3.1 Von Neumann algebra
Let $\mathcal{H}$ denote a complex (separable) Hilbert space. A von Neumann algebra $\mathcal{A}$ acting on $\mathcal{H}$ is a subset of the algebra of all bounded operators $\mathcal{L}(\mathcal{H})$ such that:
• (i) $\mathcal{A}$ is closed under the adjoint operation (with the adjoint of an element $T$ denoted by $T^{*}$).
• (ii) $\mathcal{A}$ equals its bicommutant, namely:
$\mathcal{A}=\{A\in\mathcal{L}(\mathcal{H}) : \forall B\in\mathcal{L}(\mathcal{H}),\ \forall C\in\mathcal{A},\ (BC=CB)\Rightarrow(AB=BA)\}\,.$ (0.1)
If one calls a commutant of a set $\mathcal{A}$ the special set of bounded operators on $\mathcal{L}(\mathcal{H})$ which commute with all elements in $\mathcal{A}$, then this second condition implies that the commutant of the commutant of $\mathcal{A}$ is again the set $\mathcal{A}$.
On the other hand, a von Neumann algebra $\mathcal{A}$ inherits a unital subalgebra from $\mathcal{L}(\mathcal{H})$, and according to the first condition in its definition, $\mathcal{A}$ does indeed inherit a $*$-subalgebra structure, as further explained in the next section on C*-algebras. Furthermore, one also has available a notable bicommutant theorem, which states that: "$\mathcal{A}$ is a von Neumann algebra if and only if $\mathcal{A}$ is a $*$-subalgebra of $\mathcal{L}(\mathcal{H})$, closed for the smallest topology for which the maps $A\longmapsto\langle A\xi,\eta\rangle$ are continuous for all $\xi,\eta\in\mathcal{H}$, where $\langle\cdot,\cdot\rangle$ denotes the inner product defined on $\mathcal{H}$".
For a well-presented treatment of the geometry of the state spaces of quantum operator algebras, the reader is referred to Alfsen and Shultz (2003; [AS2k3]).
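As a toy finite-dimensional illustration of the commutant (not of the full von Neumann machinery, which concerns bounded operators on infinite-dimensional Hilbert spaces), one can check by hand which elementary matrices in $M_2(\mathbb{C})$ commute with a fixed diagonal operator; the matrices below are my own illustrative choices:

```python
# Toy commutant check in M_2: which elementary matrices E_ij commute
# with A = diag(1, 2)?  The commutant of A is the diagonal subalgebra.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [0, 2]]

def E(i, j):
    """Elementary matrix with a single 1 in position (i, j)."""
    M = [[0, 0], [0, 0]]
    M[i][j] = 1
    return M

commuting = [(i, j) for i in range(2) for j in range(2)
             if matmul(A, E(i, j)) == matmul(E(i, j), A)]
assert commuting == [(0, 0), (1, 1)]   # exactly the diagonal units commute
```

Taking the commutant once more recovers the diagonal algebra itself, a (trivial) finite-dimensional instance of the bicommutant identity $\mathcal{A}''=\mathcal{A}$.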
### 0.3.2 Hopf algebra
First, a unital associative algebra consists of a linear space $A$ together with two linear maps:
$\displaystyle m : A\otimes A\longrightarrow A~~(\text{multiplication}), \qquad \eta : \mathbb{C}\longrightarrow A~~(\text{unity})$ (0.2)
satisfying the conditions
$\displaystyle m(m\otimes\mathbf{1}) = m(\mathbf{1}\otimes m), \qquad m(\mathbf{1}\otimes\eta) = m(\eta\otimes\mathbf{1}) = {\rm id}\,.$ (0.3)
This first condition can be seen in terms of a commuting diagram :
$\begin{CD}A\otimes A\otimes A@>{m\otimes{\rm id}}>>A\otimes A\\ @V{{\rm id}\otimes m}VV@VV{m}V\\ A\otimes A@>{m}>>A\end{CD}$ (0.4)
Next suppose we consider ‘reversing the arrows’, and take an algebra $A$ equipped with a linear homomorphism $\Delta:A{\longrightarrow}A\otimes A$, satisfying, for $a,b\in A$:
$\displaystyle\Delta(ab)$ $\displaystyle=\Delta(a)\Delta(b)$ (0.5) $\displaystyle(\Delta\otimes{\rm id})\Delta$ $\displaystyle=({\rm id}\otimes\Delta)\Delta~{}.$
We call $\Delta$ a comultiplication, which is said to be coassociative insofar as the following diagram commutes:
$\begin{CD}A\otimes A\otimes A@<{\Delta\otimes{\rm id}}<<A\otimes A\\ @A{{\rm id}\otimes\Delta}AA@AA{\Delta}A\\ A\otimes A@<{\Delta}<<A\end{CD}$ (0.6)
There is also a counterpart to $\eta$, the counit map $\varepsilon:A{\longrightarrow}\mathbb{C}$, satisfying
$({\rm id}\otimes\varepsilon)\circ\Delta=(\varepsilon\otimes{\rm id})\circ\Delta={\rm id}\,.$ (0.7)
A bialgebra $(A,m,\Delta,\eta,\varepsilon)$ is a linear space $A$ with maps $m,\Delta,\eta,\varepsilon$ satisfying the above properties.
Now to recover anything resembling a group structure, we must equip such a bialgebra with an antihomomorphism $S:A{\longrightarrow}A$, satisfying $S(ab)=S(b)S(a)$ for $a,b\in A$. This map is defined implicitly via the property:
$m(S\otimes{\rm id})\circ\Delta=m({\rm id}\otimes S)\circ\Delta=\eta\circ\varepsilon\,.$ (0.8)
We call $S$ the antipode map.
A Hopf algebra is then a bialgebra $(A,m,\eta,\Delta,\varepsilon)$ equipped with an antipode map $S$ .
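A standard concrete example (recalled here from memory, so worth checking against a reference) is the group algebra $kG$ of a group $G$ over a field $k$, which becomes a Hopf algebra via the structure maps below, defined on basis elements $g\in G$ and extended linearly:

```latex
% Hopf structure on the group algebra kG, for g in G:
\Delta(g) = g \otimes g, \qquad
\varepsilon(g) = 1, \qquad
S(g) = g^{-1}.
% The antipode axiom checks directly on basis elements:
%   m(S \otimes \mathrm{id})\Delta(g) = g^{-1}g = e = \eta(\varepsilon(g)).
```

Here the comultiplication is cocommutative, while $kG$ is commutative as an algebra exactly when $G$ is abelian, illustrating the two dual ways a Hopf algebra can fail to be "classical".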
Commutative and non-commutative Hopf algebras form the backbone of quantum ‘groups’ and are essential to the generalizations of symmetry. Indeed, in most respects a quantum ‘group’ is closely related to its dual Hopf algebra; in the case of a finite, commutative quantum group its dual Hopf algebra is obtained via Fourier transformation of the group elements. When Hopf algebras are actually associated with their dual, proper groups of matrices, there is considerable scope for their representations on both finite and infinite dimensional Hilbert spaces.
### 0.3.3 Groupoids
Recall that a groupoid ${\mathsf{G}}$ is, loosely speaking, a small category with inverses over its set of objects $X={\rm Ob}({\mathsf{G}})$. One often writes ${\mathsf{G}}^{y}_{x}$ for the set of morphisms in ${\mathsf{G}}$ from $x$ to $y$. A topological groupoid consists of a space ${\mathsf{G}}$, a distinguished subspace ${\mathsf{G}}^{(0)}={\rm Ob}({\mathsf{G}})\subset{\mathsf{G}}$, called the space of objects of ${\mathsf{G}}$, together with maps
$r,s~:~{\mathsf{G}}\longrightarrow{\mathsf{G}}^{(0)}$ (0.9)
https://www.edumedia-sciences.com/en/media/182-field-force-potential | # Field - Force - Potential
## Summary
The field is created by the fixed charge at any point, whether or not there is a test charge.
A force will exist only if you place a charge in this pre-existing electric field. Remember, a charge never experiences its own electric field.
The field is orthogonal to the equipotentials at any point and always points in the direction of decreasing potential. The spherical symmetry of this charge distribution is revealed by its spherical equipotentials.
Click on the static charge in the center to change its sign.
Click on the charge to catch it. Throw it to set new initial conditions.
## Learning goals
• To show the existence of an electric field at any point even when there is no test charge.
• To illustrate how a single charge will experience a repulsive or attractive force due to the presence of another single fixed charge.
• To explain the link between force, field and potential (energy).
• To view the orthogonality between the equipotentials and the electric field.
• To observe that the electric field always points in the direction of decreasing potential.
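The relations summarized above — $\mathbf{F}=q\mathbf{E}$, $\mathbf{E}=-\nabla V$, and the field being orthogonal to the equipotentials — can be checked numerically for a point charge. The sketch below is my own illustration, in units where $kq=1$:

```python
import math

# Point charge at the origin, in units with k*q = 1:
#   V(r) = 1/r   and   E = -grad V = r_hat / r^2.
def V(x, y):
    return 1.0 / math.hypot(x, y)

def E_numeric(x, y, h=1e-6):
    # Estimate E = -grad V by central differences.
    Ex = -(V(x + h, y) - V(x - h, y)) / (2 * h)
    Ey = -(V(x, y + h) - V(x, y - h)) / (2 * h)
    return Ex, Ey

x, y = 0.6, 0.8                  # a point at distance r = 1
Ex, Ey = E_numeric(x, y)

# E points radially outward with magnitude 1/r^2 = 1 here:
assert abs(Ex - 0.6) < 1e-4 and abs(Ey - 0.8) < 1e-4

# The tangent to the circular equipotential at (x, y) is (-y, x);
# E is orthogonal to it, pointing "downhill" across equipotentials.
assert abs(Ex * (-y) + Ey * x) < 1e-4
```

The spherical (here, circular) symmetry of the equipotentials mentioned above is what makes the radial direction of $\mathbf{E}$ immediate in this check.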
https://en.wikibooks.org/wiki/Solutions_To_Mathematics_Textbooks/Principles_of_Mathematical_Analysis_(3rd_edition)_(ISBN_0070856133)/Chapter_6 | # Chapter 6
## 7
### a
By page 121 we know that ${\displaystyle f}$ must be bounded, say by ${\displaystyle M}$. We need to show that given ${\displaystyle \epsilon >0}$ we can find some ${\displaystyle c}$ such that ${\displaystyle \int _{c}^{1}f(x)dx\in B_{\epsilon }\left(\int _{0}^{1}f(x)dx\right)}$. So, by Theorem 6.12 (c) we have ${\displaystyle \int _{0}^{1}f(x)dx=\int _{0}^{c}f(x)dx+\int _{c}^{1}f(x)dx}$ and ${\displaystyle \int _{0}^{c}f(x)dx\leq M\cdot c}$.
Hence, ${\displaystyle \int _{0}^{1}f(x)dx\leq M\cdot c+\int _{c}^{1}f(x)dx}$ but since we can choose any ${\displaystyle c>0}$ and ${\displaystyle M}$ is fixed we can choose ${\displaystyle c={\frac {\epsilon }{2M}}}$ which yields ${\displaystyle \int _{0}^{1}f(x)dx\leq {\frac {\epsilon }{2}}+\int _{c}^{1}f(x)dx}$ So, given ${\displaystyle \epsilon }$ we can always choose a ${\displaystyle c}$ such that ${\displaystyle \int _{c}^{1}f(x)dx\in B_{\epsilon }\left(\int _{0}^{1}f(x)dx\right)}$ as desired.
### b
Consider the function which is defined to be ${\displaystyle n(-1)^{n}}$ on the last ${\displaystyle 6/(n^{2}\pi ^{2})}$ of the interval [0,1] and zero at the endpoints of those intervals. This function is well defined, since we know that ${\displaystyle \sum _{n=1}^{\infty }6/(n^{2}\pi ^{2})=1}$.
More specifically the function has value ${\displaystyle n(-1)^{n}}$ on the open interval from ${\displaystyle (p_{n}=1-\sum _{m=1}^{n-1}6/(m^{2}\pi ^{2}),p_{n+1}=1-\sum _{m=1}^{n}6/(m^{2}\pi ^{2}))}$
First we evaluate the integral of the function itself. Consider a partitioning of the interval ${\displaystyle [0,1]}$ at each ${\displaystyle p_{n}\pm \epsilon }$ for some ${\displaystyle \epsilon >0}$
Then, the lower and upper sums corresponding to the intervals of the partition from ${\displaystyle p_{n}-\epsilon }$ to ${\displaystyle p_{n+1}+\epsilon }$ are the same, since the function is constant valued on these intervals. Moreover, as ${\displaystyle \epsilon \to 0}$ the value of the upper and lower sums both approach ${\displaystyle n(-1)^{n}(p_{n+1}-p_{n})}$.
Thus we can express the value of the integral as the sum of the series ${\displaystyle \sum _{n=1}^{\infty }\left({\frac {6}{n^{2}\pi ^{2}}}\right)n(-1)^{n}=\sum _{n=1}^{\infty }\left({\frac {(-1)^{n}6}{n\pi ^{2}}}\right)}$ ${\displaystyle ={\frac {6}{\pi ^{2}}}\sum _{n=1}^{\infty }\left({\frac {(-1)^{n}}{n}}\right)}$ but we recognize this sum as just a constant multiple of the alternating harmonic series. Hence, the integral converges.
Now we examine the integral of the absolute value of the function. We argue similarly to the above, again partitioning the function at ${\displaystyle p_{n}\pm \epsilon }$ as defined above. The difference is that now, as we let ${\displaystyle \epsilon \to 0}$ the upper and lower sums both go to ${\displaystyle \sum _{n=1}^{\infty }\left({\frac {6}{n^{2}\pi ^{2}}}\right)n=\sum _{n=1}^{\infty }\left({\frac {6}{n\pi ^{2}}}\right)}$ ${\displaystyle ={\frac {6}{\pi ^{2}}}\sum _{n=1}^{\infty }{\frac {1}{n}}}$ and so the integral does not exist, as this is the harmonic series, which does not converge.
In the above proof of divergence the important point is that the lower sums diverge. The fact that the upper sums diverge is an immediate consequence of this.
So, we have demonstrated a function whose integral converges, but does not converge absolutely as desired.
## 8
We begin by showing (${\displaystyle \Rightarrow }$) that ${\displaystyle \int _{1}^{\infty }f(x)dx}$ converges if ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ converges.
So, we assume to start that ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ converges. Now consider the partition ${\displaystyle P=\{p_{n}\ |\ p_{n}=n,n\in \mathbb {N} \}}$. Since ${\displaystyle f(x)}$ decreases monotonically it must be that ${\displaystyle inf\{f([p_{n},p_{n+1}])\}=f(p_{n+1})}$ and similarly that ${\displaystyle sup\{f([p_{n},p_{n+1}])\}=f(p_{n})}$. Thus, the integral which we are trying to evaluate is bounded above by ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ and below by ${\displaystyle \sum _{n=2}^{\infty }f(n)}$.
Now we observe that ${\displaystyle \int _{1}^{\infty }f(x)dx}$ may be written as a sum over the domain as ${\displaystyle \sum _{n=1}^{\infty }\left(\int _{p_{n}}^{p_{n+1}}f(x)dx\right)}$ We know moreover that each of these integrals exist, by Theorem 6.9. Also, since ${\displaystyle f(x)}$ is always positive each such integral must be positive. Therefore, the integral may be expressed as a sum of a nonnegative series which is bounded above. Hence, by Theorem 3.24 the integral exists.
Now we prove (${\displaystyle \Leftarrow }$) that if ${\displaystyle \int _{1}^{\infty }f(x)dx}$ converges then ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ converges.
So assume now that ${\displaystyle \int _{1}^{\infty }f(x)dx}$ converges. Then we can prove that the summation ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ satisfies the Cauchy criterion. We established above that ${\displaystyle \int _{k}^{\infty }f(x)dx}$ is bounded above by ${\displaystyle \sum _{n=k}^{\infty }f(n)}$ and below by ${\displaystyle \sum _{n=k+1}^{\infty }f(n)}$. This implies that any tail sum ${\displaystyle \sum _{n=k+1}^{\infty }f(n)}$ is bounded above by the integral ${\displaystyle \int _{k}^{\infty }f(x)dx}$. Moreover, since the integral ${\displaystyle \int _{k}^{\infty }f(x)dx}$ exists and ${\displaystyle f}$ is nonnegative, we know that it has the property that given ${\displaystyle \epsilon >0\ \exists M}$ such that ${\displaystyle \int _{M}^{\infty }f(x)dx<\epsilon }$. For otherwise the integral would not exist and instead tend to infinity.
So now we can apply the Cauchy criterion for series. Since an upper bound of the series has the property that given ${\displaystyle \epsilon >0\ \exists M}$ such that ${\displaystyle \sum _{n=M}^{\infty }f(n)<\epsilon }$, so must the series itself have this property.
Thus, the sum converges as desired.
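The bracketing used in both directions of the proof can be seen numerically for a concrete decreasing function, say $f(x)=1/x^{2}$ (my own choice of example): the shifted partial sums sandwich the exact integral $\int_{1}^{N}x^{-2}\,dx = 1 - 1/N$.

```python
# Integral test bracketing for the decreasing f(x) = 1/x^2 on [1, N]:
#   sum_{n=2}^{N} f(n)  <=  integral_1^N f(x) dx  <=  sum_{n=1}^{N-1} f(n)
N = 1000
lower = sum(1.0 / n**2 for n in range(2, N + 1))
upper = sum(1.0 / n**2 for n in range(1, N))
integral = 1.0 - 1.0 / N         # exact value of the integral

assert lower <= integral <= upper
# The gap between the two sums is f(1) - f(N), which stays bounded:
assert upper - lower < 1.0
```

Since the gap between the bounding sums is just $f(1)-f(N)$, convergence of either side forces convergence of the other, which is the content of the equivalence proved above.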
## 10
### a
We will prove that if ${\displaystyle u\geq 0}$ and ${\displaystyle v\geq 0}$ then ${\displaystyle uv\leq {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$, and that equality holds if and only if ${\displaystyle u^{p}=v^{q}}$. \begin{proof} We begin by proving the special case of equality.
Assume that ${\displaystyle u^{p}=v^{q}}$. ${\displaystyle \Leftrightarrow u=v^{q/p}}$ ${\displaystyle \Leftrightarrow vu=v^{q/p+1}}$ ${\displaystyle \Leftrightarrow vu=v^{q(1/p+1/q)}}$ ${\displaystyle \Leftrightarrow vu=v^{q}}$ (Similarly we can show that ${\displaystyle vu=u^{p}\Leftrightarrow u^{p}=v^{q}}$.) Thus, ${\displaystyle vu=v^{q}\Leftrightarrow u^{p}=v^{q}}$ and we see moreover that ${\displaystyle uv={\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}\Leftarrow vu=v^{q}}$, since in this case we have ${\displaystyle uv=v^{q}\left({\frac {1}{p}}+{\frac {1}{q}}\right)=v^{q}\checkmark }$ Also, if it is not the case that ${\displaystyle vu=v^{q}}$ then it is easy to see that ${\displaystyle uv\neq {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$, as for a sum of quotients by ${\displaystyle p}$ and ${\displaystyle q}$ to not contain ${\displaystyle p}$, ${\displaystyle q}$ we must have the numerators equal.
Now we show that as we vary ${\displaystyle u}$ we must always have ${\displaystyle uv\leq {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$. For, compute the derivative of ${\displaystyle uv}$ with respect to ${\displaystyle u}$, and the derivative of ${\displaystyle {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$ with respect to ${\displaystyle u}$. We get ${\displaystyle v}$ and ${\displaystyle u^{p-1}}$ respectively. If we have ${\displaystyle u^{p}=v^{q}}$ then these are equal as demonstrated above (we showed that ${\displaystyle uv=u^{p}}$ in that case). In the case that ${\displaystyle u}$ is larger than this value then ${\displaystyle u^{p-1}>v}$, and in the case that ${\displaystyle u}$ is less than this value then ${\displaystyle u^{p-1}<v}$.
This argument can be repeated in an analogous manner for variations in ${\displaystyle v}$, and given any ${\displaystyle p}$ and ${\displaystyle q}$ we can find values for which ${\displaystyle u^{p}=v^{q}}$.
Thus, we observe that ${\displaystyle uv\leq {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$ as desired\end{proof}
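Young's inequality proved above can be sanity-checked numerically on a grid; the exponent pair $p=3$, $q=3/2$ below is my own arbitrary choice of conjugate exponents:

```python
# Young's inequality: u*v <= u^p/p + v^q/q for conjugate exponents
# 1/p + 1/q = 1, with equality iff u^p = v^q.  Checked for p = 3, q = 3/2.
p = 3.0
q = p / (p - 1)                   # conjugate exponent, here 1.5

for i in range(1, 50):
    for j in range(1, 50):
        u, v = i / 10.0, j / 10.0
        assert u * v <= u**p / p + v**q / q + 1e-12

# Equality case: choosing v = u^(p/q) makes u^p = v^q.
u = 1.7
v = u**(p / q)
assert abs(u * v - (u**p / p + v**q / q)) < 1e-9
```

The equality check mirrors the derivative argument above: at $u^{p}=v^{q}$ the two sides touch, and moving $u$ in either direction opens the gap.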
### b
If ${\displaystyle f\in {\mathcal {R}}(\alpha )}$, ${\displaystyle g\in {\mathcal {R}}(\alpha )}$, ${\displaystyle f\geq 0}$, ${\displaystyle g\geq 0}$, and ${\displaystyle \int _{a}^{b}f^{p}d\alpha =1=\int _{a}^{b}g^{q}d\alpha ,}$ then ${\displaystyle \int _{a}^{b}fgd\alpha \leq 1}$ \begin{proof}
If ${\displaystyle 0\leq f\in {\mathcal {R}}(\alpha )}$ and ${\displaystyle 0\leq g\in {\mathcal {R}}(\alpha )}$ then ${\displaystyle f^{p}}$ and ${\displaystyle g^{q}}$ are in ${\displaystyle {\mathcal {R}}(\alpha )}$ by Theorem 6.11. Also, we have ${\displaystyle fg\in {\mathcal {R}}(\alpha )}$ so we get ${\displaystyle \int _{a}^{b}fgd\alpha \leq {\frac {1}{p}}\int _{a}^{b}f^{p}d\alpha +{\frac {1}{q}}\int _{a}^{b}g^{q}d\alpha =1}$ as desired.\end{proof}
### c
We prove Hölder's inequality. \begin{proof} If ${\displaystyle f}$ and ${\displaystyle g}$ are complex valued then we get ${\displaystyle \left|\int _{a}^{b}fgd\alpha \right|\leq \int _{a}^{b}|f||g|d\alpha .}$
If ${\displaystyle \int _{a}^{b}|f|^{p}d\alpha \neq 0}$ and ${\displaystyle \int _{a}^{b}|g|^{q}d\alpha \neq 0}$, then applying the previous part to the functions ${\displaystyle |f|/c}$ and ${\displaystyle |g|/d}$, where ${\displaystyle c^{p}=\int _{a}^{b}|f|^{p}d\alpha }$ and ${\displaystyle d^{q}=\int _{a}^{b}|g|^{q}d\alpha }$, gives what we wanted to show.
${\displaystyle \left|\int _{a}^{b}fgd\alpha \right|\leq \left(\int _{a}^{b}|f|^{p}d\alpha \right)^{1/p}\left(\int _{a}^{b}|g|^{q}d\alpha \right)^{1/q}}$
However, if one of the above is zero (say without loss of generality ${\displaystyle \int _{a}^{b}|f|^{p}=0}$ then we just have ${\displaystyle \int _{a}^{b}|f|(c|g|)d\alpha \leq c^{q}{\frac {1}{q}}\int _{a}^{b}|g|^{q}d\alpha }$ for ${\displaystyle c>0}$. Taking the limit ${\displaystyle c\to 0}$ we observe that the inequality is still true.
${\displaystyle \int _{a}^{b}|f||g|d\alpha =0}$
\end{proof}
## 16
### a
We take the expression ${\displaystyle s\int _{1}^{\infty }{\frac {[x]}{x^{s+1}}}dx}$ and express it as a sum of integrals on the intervals ${\displaystyle (n,n+1)}$ to get ${\displaystyle s\left(\int _{1}^{2}{\frac {[x]}{x^{s+1}}}dx+\int _{2}^{3}{\frac {[x]}{x^{s+1}}}dx+\dots \right)}$ but since ${\displaystyle [x]}$ is constant on each such interval, we just write ${\displaystyle s\left(\int _{1}^{2}{\frac {1}{x^{s+1}}}dx+\int _{2}^{3}{\frac {2}{x^{s+1}}}dx+\dots \right)}$ (1)
Now we exploit the Fundamental Theorem of Calculus, computing ${\displaystyle \int _{n}^{n+1}{\frac {n}{x^{s+1}}}dx=n\left[-{\frac {x^{-s}}{s}}\right]_{n}^{n+1}=n\left(-{\frac {(n+1)^{-s}}{s}}+{\frac {n^{-s}}{s}}\right).}$ So, the summation in Equation 1 can, more explicitly be written as ${\displaystyle s\sum _{n=1}^{\infty }n\left(-{\frac {(n+1)^{-s}}{s}}+{\frac {n^{-s}}{s}}\right)=\sum _{n=1}^{\infty }\left({\frac {n}{n^{s}}}-{\frac {n}{(n+1)^{s}}}\right)}$ However, grouping common denominators, we observe that the sum partially telescopes to yield more simply ${\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=\zeta (s).}$
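The partial telescoping above can be verified numerically for a truncated sum (a finite-$N$ sanity check I am adding, not part of the original solution): for any $N$, ${\displaystyle \sum _{n=1}^{N}\left({\frac {n}{n^{s}}}-{\frac {n}{(n+1)^{s}}}\right)=\sum _{n=1}^{N}{\frac {1}{n^{s}}}-{\frac {N}{(N+1)^{s}}}}$, and as $N\to \infty$ both sides tend to $\zeta (s)$.

```python
import math

# Finite-N check of the telescoping step in part (a), for s = 2:
s, N = 2.0, 500
lhs = sum(n / n**s - n / (n + 1)**s for n in range(1, N + 1))
rhs = sum(1.0 / n**s for n in range(1, N + 1)) - N / (N + 1)**s

assert math.isclose(lhs, rhs, rel_tol=1e-9)   # exact telescoping identity
# As N grows, both sides approach zeta(2) = pi^2 / 6:
assert abs(lhs - math.pi**2 / 6) < 1e-2
```

The leftover boundary term $N/(N+1)^{s}$ tends to $0$ for $s>1$, which is why the infinite sum telescopes cleanly to $\zeta (s)$.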
### b
Having now proved Part a it suffices to show that ${\displaystyle s\int _{1}^{\infty }{\frac {[x]}{x^{s+1}}}dx={\frac {s}{s-1}}-s\int _{1}^{\infty }{\frac {x-[x]}{x^{s+1}}}dx.}$
By the Fundamental Theorem of Calculus we have ${\displaystyle \int _{1}^{\infty }{\frac {1}{x^{s}}}dx={\frac {1}{s-1}}}$. So

${\displaystyle \int _{1}^{\infty }{\frac {x}{x^{s+1}}}dx={\frac {1}{s-1}}}$

${\displaystyle \Rightarrow s\int _{1}^{\infty }{\frac {x}{x^{s+1}}}dx={\frac {s}{s-1}}}$

${\displaystyle \Rightarrow s\int _{1}^{\infty }\left({\frac {x-[x]}{x^{s+1}}}+{\frac {[x]}{x^{s+1}}}\right)dx={\frac {s}{s-1}}}$

${\displaystyle \Rightarrow s\int _{1}^{\infty }{\frac {[x]}{x^{s+1}}}dx={\frac {s}{s-1}}-s\int _{1}^{\infty }{\frac {x-[x]}{x^{s+1}}}dx}$

as desired.
It remains now to show that the integral in Part b converges.
Since for ${\displaystyle x\in (1,\infty )}$ we have ${\displaystyle 0\leq {\frac {x-[x]}{x^{s+1}}}\leq {\frac {1}{x^{s+1}}}}$, we know that ${\displaystyle \int _{1}^{\infty }{\frac {x-[x]}{x^{s+1}}}dx}$ converges if ${\displaystyle \int _{1}^{\infty }{\frac {1}{x^{s+1}}}dx}$ converges.
However, ${\displaystyle \int _{1}^{\infty }{\frac {1}{x^{s+1}}}dx}$ converges by the integral test (Problem 8), since we have already shown that the series ${\displaystyle \sum _{x=1}^{\infty }{\frac {1}{x^{s+1}}}}$ is convergent for ${\displaystyle 1<s}$.
https://repository.lboro.ac.uk/articles/A_study_of_aero_engine_fan_flutter_at_high_rotational_speeds_using_holographic_interferometry/9525545/1 | ## A study of aero engine fan flutter at high rotational speeds using holographic interferometry
2013-06-27T10:45:40Z (GMT)
Aero-elastic instability is often a constraint on the design of modern high by-pass ratio aero engines. Unstalled supersonic flutter is an instability which can be encountered in shrouded fans, in which mechanical vibrations give rise to unsteady aerodynamic forces which couple further energy into the mechanical vibration. This phenomenon is particularly sensitive to the deflection shape of the mechanical vibration. A detailed measurement of the vibrational deflection shape of a test fan undergoing supersonic unstalled flutter was sought by the author. This measurement was required in order to assess the current theoretical understanding and modelling of unstalled fan flutter. The suitability of alternative techniques for this measurement was assessed. Pulsed holographic interferometry was considered optimum for this study because of its full field capability, large range of sensitivity, high spatial resolution and good accuracy. A double pulsed holographic system, employing a mirror-Abbe image rotator, was built specifically for this study. The mirror-Abbe unit was employed to rotate the illuminating beam and derotate the light returned from the rotating fan. This therefore maintained correlation between the two resultant holographic images. The holographic system was used to obtain good quality interferograms of the 0.86m diameter test fan when rotating at speeds just under 10 000rpm and undergoing unstalled flutter. The resultant interferograms were analysed to give the flutter deflection shape of the fan. The study of the fan in flutter was complemented by measurement of the test fan's vibrational characteristics under non-rotating conditions. The resultant experimental data were in agreement with the current theoretical understanding of supersonic unstalled fan flutter.
Many of the assumptions employed in flutter prediction by calculation of unsteady work were experimentally verified. The deflection shapes of the test fan under non-rotating and flutter conditions were compared with those predicted by a finite element model of the structure, and reasonably good agreement was obtained.
https://custom-writing.org/qna/the-primary-sample-used-in-defining-the-sample-size-required/ | A
Primary (preliminary) sampling is typically done to ensure that the population being researched is accurately sampled and that the study sample is fairly distributed; in other words, that the distribution of the sample within the population is fair. Moreover, preliminary testing or prior sampling is useful in avoiding the problem of collecting large sets of data that would not be useful at the end of the survey. Collecting useless data may result from improper sample preparation or preservation, or from ineffective methods of sampling. Preliminary sampling also prevents situations where the target population could easily be missed.
Preliminary sampling is the method used to obtain the auxiliary information that is then utilized to attain more efficient sampling and estimation procedures. Preliminary samples are especially useful where no prior information about the entire population is available. The information obtained through the preliminary sample is then used to design the smaller, final sample. Generally, this information is used to adjust the selection probabilities, to group (stratify) the units, and directly in forming the estimates.
https://math.codidact.com/posts/282046
# Why rational to be indifferent between two urns, when urn A has 50-50 red and white balls, but you don't know urn B's ratio?
Please see the bolded sentence below. Assume that I'm risk averse and "prefer the known chance over the unknown". Why is it irrational for me to choose A?
Also, there were problems on the probability side. One famous debate concerned a paradox posed by Daniel Ellsberg (of later fame due to publishing the Pentagon Papers). It involved multiple urns, some with known and some with unknown odds of drawing a winning ball. Instead of estimating the expected value of the unknown probability, and sticking with that estimate, most people exhibit strong aversion to ambiguity in violation of basic probability principles. A simpler version of the paradox would be as follows. You can choose one of two urns, each containing red and white balls. If you draw red you win $100 and nothing otherwise. You know that urn A has exactly a 50-50 ratio of red and white balls. In urn B, the ratio is unknown. From which urn do you wish to draw? Most people say A since they prefer the known chance over the unknown, especially since some suspect that urn B is perhaps stacked against them. But even if people can choose the color on which to bet, they still prefer A. Rationally, you should be indifferent, or if you think you can guess the color ratios, choose the urn with the better perceived odds of winning. Yet, smart people would knowingly violate this logical advice. — Paul Slovic, *The Irrational Economist* (2010), p. 56.

## 2 answers

Let's assume you know that urn B has 5 balls in it. I deliberately take an odd number, because that way we know for sure that there are not exactly the same number of red and white balls in that urn. Note that since you don't know the content of the urn, you have to assign probabilities to the possible contents. Now what could the content of the urn be? Well, for example, it could have 1 red and 4 white balls. But then, it also could have 1 white and 4 red balls in it. So which of those is more likely?
Well, unless you have any reason to assume that the urn contains more white than red, or more red than white balls, you have to assign the same probability to both. Now for any number of balls in the urn, exchanging red and white balls gives another possible content of the urn, and the same argument as above gives equal probabilities for both of those contents.

So what is the probability of drawing a red ball from urn B, provided that it has 5 balls? Well, let's denote the probability of drawing red by $p(R)$, and the prior probability that the urn contains $n$ red balls (and $5-n$ white balls) by $P(n)$; the probability of drawing red given $n$ red balls is then $n/5$. The law of total probability gives us:

$$p(R) = \frac05 P(0) + \frac15 P(1) + \frac25 P(2) + \frac35 P(3) + \frac45 P(4) + \frac55 P(5)$$

But by the argument above, $P(n) = P(5-n)$, therefore the above simplifies to

$$p(R) = \left(\frac05+\frac55\right)P(0) + \left(\frac15+\frac45\right)P(1) + \left(\frac25+\frac35\right)P(2) = P(0) + P(1) + P(2) = \frac12$$

where the last equality is again because of the symmetry, and the fact that all probabilities have to add to $1$.

Now this analysis works not just for $5$ balls, but for any odd number of balls, and with a minor change also for all even numbers of balls. Thus no matter how many balls there are in urn B, the probability of drawing a red ball will always turn out to be $1/2$. For this reason, it also doesn't matter that you don't actually know the number of balls in urn B (except that of course there has to be at least one ball in it).

Now whether it is really irrational to choose urn A over urn B is a completely different question. I think the text is wrong in claiming this. It is true that the expectation value is the same. But the expectation value is not everything. Consider the specific case that urn A contains one red and one white ball, while urn B can with equal probability contain two white balls, a white and a red ball, or two red balls.
Note that here we are in a better situation than in the original puzzle because we are actually given both the possible contents of the urn and the corresponding probabilities. Now let's consider that we play two rounds. Obviously the expectation value is to win one of those rounds, no matter which of the urns we choose. Thus according to that text, both choices should be equivalent.

But let us ask a different question: what is the probability that we don't win anything? Well, with urn A, the probability clearly is $1/4$: there are four different outcomes, and for only one of them the white ball is drawn twice. But for urn B, with probability $1/3$ we have an urn where you are guaranteed to get a white ball twice, and with another probability $1/3$ you get the same urn as A, with its probability $1/4$ of losing twice. Therefore the probability of not winning either game is $\frac13\cdot 1 + \frac13\cdot\frac14 = \frac{5}{12}$, which is considerably higher than $1/4$.

In other words, with urn B the risk is indeed higher, although the probability of winning a single game (and therefore the expectation value) is equal. And thus if you are risk-averse, choosing A over B is indeed rational. Anyone arguing otherwise would also have to argue that betting on getting a billion dollars with a probability of $1/1000000$ is equivalent to getting a thousand dollars for sure.
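The two numbers in this two-round example can be checked by direct enumeration. A short sketch (urn model and probabilities exactly as stated above; the two draws are assumed independent across rounds):

```python
from fractions import Fraction

def p_no_win(urn_dists):
    """Probability of drawing white in both rounds, averaged over
    equally likely urn contents; urn_dists lists each possible urn's
    probability of yielding red on a single draw."""
    w = Fraction(1, len(urn_dists))
    return sum(w * (1 - p_red) ** 2 for p_red in urn_dists)

# Urn A: known 50-50 mix.
p_a = p_no_win([Fraction(1, 2)])
# Urn B: two white / one of each / two red, each with probability 1/3.
p_b = p_no_win([Fraction(0), Fraction(1, 2), Fraction(1)])

print(p_a)  # 1/4
print(p_b)  # 5/12
```

Exact rational arithmetic via `fractions.Fraction` keeps the result as the literal 5/12 rather than a float.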
Say you have a coin that, when flipped, will land either heads or tails. What is the probability that it lands, say, heads? The "real" answer is that the probability is unknown: the information was not given at the start, so we cannot proceed further. But if we insist on moving on, we have to have a number. So we assume the probability is exactly 1/2, because there is one desired outcome (heads) out of two possible outcomes (heads, tails). Because no information is given, we have no reason to think that heads is more likely or that tails is more likely.
Say an urn C has exactly one ball, either a red ball or a white ball. That's the only information you have. What then is the probability that a, say, red ball is drawn? The real answer is that the probability is unknown. But if we insist on moving on, we assume the probability is exactly 1/2.
Urn B has only red or white balls. We don't know how many of each there are. What is the probability that, say, a red ball is drawn? We assume the probability is exactly 1/2. This is the same as the probability for urn A. And since the probabilities are the same for urn A and for urn B, there is no reason to prefer one over the other.
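The "assume 1/2" step can be made precise: any prior over urn compositions that is symmetric under swapping red and white yields P(red) = 1/2, for any number of balls. A small sketch (the uniform prior below is one illustrative symmetric choice, not something this answer commits to):

```python
from fractions import Fraction

def p_red(n_balls, prior=None):
    """P(draw red) for an urn with n_balls balls and unknown red count.

    prior: probabilities over the number of red balls 0..n_balls;
    defaults to uniform, which is symmetric under swapping colors."""
    if prior is None:
        prior = [Fraction(1, n_balls + 1)] * (n_balls + 1)
    return sum(p * Fraction(k, n_balls) for k, p in enumerate(prior))

print([p_red(n) for n in (1, 2, 5, 100)])  # all equal to 1/2
```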
http://cms.math.ca/cjm/msc/42C40?fromjnl=cjm&jnl=CJM
Search results
Search: MSC category 42C40 (Wavelets and other special systems)
Results 1 - 4 of 4
1. CJM 2011 (vol 63 pp. 689)
Olphert, Sean; Power, Stephen C.
Higher Rank Wavelets

A theory of higher rank multiresolution analysis is given in the setting of abelian multiscalings. This theory enables the construction, from a higher rank MRA, of finite wavelet sets whose multidilations have translates forming an orthonormal basis in $L^2(\mathbb R^d)$. While tensor products of uniscaled MRAs provide simple examples we construct many nonseparable higher rank wavelets. In particular we construct \emph{Latin square wavelets} as rank~$2$ variants of Haar wavelets. Also we construct nonseparable scaling functions for rank $2$ variants of Meyer wavelet scaling functions, and we construct the associated nonseparable wavelets with compactly supported Fourier transforms. On the other hand we show that compactly supported scaling functions for biscaled MRAs are necessarily separable.

Keywords: wavelet, multi-scaling, higher rank, multiresolution, Latin squares
Categories: 42C40, 42A65, 42A16, 43A65
2. CJM 2008 (vol 60 pp. 334)
Curry, Eva
Low-Pass Filters and Scaling Functions for Multivariable Wavelets

We show that a characterization of scaling functions for multiresolution analyses given by Hern\'{a}ndez and Weiss and that a characterization of low-pass filters given by Gundy both hold for multivariable multiresolution analyses.

Keywords: multivariable multiresolution analysis, low-pass filter, scaling function
Categories: 42C40, 60G35
3. CJM 2006 (vol 58 pp. 1121)
Bownik, Marcin; Speegle, Darrin
The Feichtinger Conjecture for Wavelet Frames, Gabor Frames and Frames of Translates

The Feichtinger conjecture is considered for three special families of frames. It is shown that if a wavelet frame satisfies a certain weak regularity condition, then it can be written as the finite union of Riesz basic sequences each of which is a wavelet system. Moreover, the above is not true for general wavelet frames. It is also shown that a sup-adjoint Gabor frame can be written as the finite union of Riesz basic sequences. Finally, we show how existing techniques can be applied to determine whether frames of translates can be written as the finite union of Riesz basic sequences. We end by giving an example of a frame of translates such that any Riesz basic subsequence must consist of highly irregular translates.

Keywords: frame, Riesz basic sequence, wavelet, Gabor system, frame of translates, paving conjecture
Categories: 42B25, 42B35, 42C40
4. CJM 2002 (vol 54 pp. 634)
Weber, Eric
Frames and Single Wavelets for Unitary Groups

We consider a unitary representation of a discrete countable abelian group on a separable Hilbert space which is associated to a cyclic generalized frame multiresolution analysis. We extend Robertson's theorem to apply to frames generated by the action of the group. Within this setup we use Stone's theorem and the theory of projection valued measures to analyze wandering frame collections. This yields a functional analytic method of constructing a wavelet from a generalized frame multi\-resolution analysis in terms of the frame scaling vectors. We then explicitly apply our results to the action of the integers given by translations on $L^2({\mathbb R})$.

Keywords: wavelet, multiresolution analysis, unitary group representation, frame
Categories: 42C40, 43A25, 42C15, 46N99
http://physics.stackexchange.com/questions/67966/fluids-in-thermodynamic-equlibrium | # Fluids in thermodynamic equilibrium
I am reading about the Euler equations of fluid dynamics from LeVeque's Numerical Methods for Conservation Laws. After introducing the mass, momentum and energy equations, some thermodynamic concepts are discussed, to introduce an equation of state.
He says
In the Euler equations we assume that the gas is in chemical and thermodynamic equilibrium and that the internal energy is a known function of pressure and density.
After this, the usual thermodynamics-related equation of state (EOS) discussions are carried out.
Now chemical equilibrium I understand (number of moles of the chemical constituents do not change), however I don't understand how the assumption of thermodynamic equilibrium can be imposed.
From what baby thermodynamics I know, any thermodynamic analysis is always calculated for quasi-static processes, like 'slowly' pushing a piston in a cylinder of gas.
But in fluid dynamics fluids are flowing and that too rapidly and from intuition there will not be any thermodynamic equilibrium during fluid flow.
Where is my understanding going wrong?
The excerpt from the text forgets to mention that you assume Local Thermodynamic Equilibrium, and not full Thermodynamic Equilibrium, so that it becomes possible to define an EoS from point to point (or from region to region).

If there is no sense of being 'close' to thermodynamical equilibrium, it is simply impossible to talk about an EoS, pressure and the like from the "hydrodynamics as local thermal equilibrium" point of view.

From the strict thermodynamic view, you can't talk of anything time-dependent. All piston and thermal cycles rely on the abuse of talking of 'quasi-equilibrium' without really defining it, and simply postulate that you can use full traditional thermodynamics all around. From a 'rigorous' point of view, the only things you can talk about in thermodynamics are stationary, homogeneous systems which are in the 'thermodynamic limit' ($N,V,S...\rightarrow \infty$ but $N/V, S/V...$ fixed), so there is no time dependence.
The idea of flow is that even if the fluid flows very fast in relation to a fixed observer, if you go to the rest frame of that piece of fluid, you can talk of thermodynamic equilibrium close to that small part of the fluid.
I believe that the best way to understand how this all works is through the Boltzmann equation, which I can develop later if you wish.

So, you have the Boltzmann equation:
$\frac{\partial f}{\partial t}+\vec v \cdot \frac{\partial f}{\partial \vec r}+\vec F\cdot \frac{\partial f}{\partial \vec p} = \int d^3 \vec p_0 d\Omega\ g\ \sigma(g,\Omega) (f'f_1' - ff_1)$
Here $g=|\vec p - \vec p_0|$, $\sigma(g,\Omega)$ is the differential cross section between gas molecules, and the primed distributions are evaluated with the momenta that correspond to an out-going (solid) angle $\Omega$, with ingoing momenta $\vec p$ and $\vec p_0$. The normalization is $\int d^3\vec r\ d^3 \vec p\ f(t,\vec r,\vec p)=N$, where $N$ is the total number of particles.
We believe that this equation provides a good description of the 1-particle distribution function, in phase space, of a rarefied gas composed of hard spheres (i.e. a hard, short-range, repulsive potential, with only elastic collisions). Putting aside whether it's justified or not to model a gas this way, simply believe for the moment that it works.

Now you want to model a gas inside a box as being in full thermodynamical equilibrium. Equilibrium is when you have a stationary, homogeneous material. So, you want to look for solutions of the Boltzmann equation that have this kind of symmetry, and thus:
$f(t,\vec r, \vec p) = \frac{N}{V}Id_V(\vec r)f_0(\vec p)$
So, the only inhomogeneity there may be is an indicator function that says that outside the box there is no gas. We are also supposing that the only external forces are on the walls of the box, and so, in the bulk of the gas, we have $\vec F=0$.

Now we feed this ansatz to the Boltzmann equation and see what happens. From the above assumptions, $\partial f/\partial t=0$, $\partial f/\partial \vec r = 0$ and $\vec F=0$ in the bulk. This gives us:
$\frac{\partial f}{\partial t}+\vec v \cdot \frac{\partial f}{\partial \vec r}+\vec F\cdot \frac{\partial f}{\partial \vec p} = 0 = \int d^3 \vec p_0 d\Omega\ g\ \sigma(g,\Omega) (f'f_1' - ff_1)$
So we need to kill the collision kernel in order to satisfy the Boltzmann equation. The easiest way is to nullify the subtraction inside it by putting:
$f_0(\vec p)f_0(\vec p_1)=f_0(\vec p')f_0(\vec p_1')$
For all possible (elastic) collision outcomes. Now comes the smart point. Let's take the $\log$ of the above expression.
$\log f_0(\vec p) + \log f_0(\vec p_1)=\log f_0(\vec p') + \log f_0(\vec p_1')$
If $\log f_0$ is a function only of additive conserved quantities of the collision, we get the relation above for free! (Ok, not completely for free; it's possible to show that this is essentially the only way to do it.)

Now, for elastic binary collisions, we have only 3 conserved quantities: mass, linear momentum (because we believe that there is no relevant rotation) and kinetic energy.
Now we write:
$\log f_0(\vec p) = A\frac{\vec p^2}{2m}+ \vec B \cdot \vec p + Cm$
Massaging the above expression and using integrability conditions, we may write:

$\log f_0(\vec p) = -\frac{ (\vec p-\vec p_0)^2}{2m\sigma^2}+ \log N_0$

In the case of the box, we know that the box isn't moving (equivalently, the gas is locally isotropic), so we put $\vec p_0=0$ and we get the Maxwell–Boltzmann distribution as a solution of the Boltzmann equation under equilibrium conditions. Further, we can identify $\sigma^2 = k_BT$, and we close the identification.
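The factorization condition $f_0(\vec p)f_0(\vec p_1)=f_0(\vec p')f_0(\vec p_1')$ can be checked numerically: for an equal-mass elastic collision the total momentum is conserved and the relative momentum only rotates, so a Gaussian in momentum is automatically invariant. A sketch (equal masses and units $m=k_BT=1$ are my assumptions for the demonstration, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
m, kT = 1.0, 1.0

def f0(p):
    """Maxwell-Boltzmann momentum density (zero drift), up to normalization."""
    return np.exp(-np.dot(p, p) / (2 * m * kT))

# Random incoming momenta for two equal-mass particles.
p, p1 = rng.normal(size=3), rng.normal(size=3)

# Elastic collision: total momentum P is conserved and the relative
# momentum g only rotates (|g| fixed), so kinetic energy is conserved too.
P, g = p + p1, p - p1
v = rng.normal(size=3)
g_out = np.linalg.norm(g) * v / np.linalg.norm(v)
p_out, p1_out = (P + g_out) / 2, (P - g_out) / 2

# Detailed balance for the Maxwellian: f0(p') f0(p1') == f0(p) f0(p1).
print(np.isclose(f0(p) * f0(p1), f0(p_out) * f0(p1_out)))  # True
```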
Now, to hydrodynamics. To find hydrodynamical equations from the Boltzmann equation, we "take moments" of it, i.e., we multiply it by powers of the linear momentum and integrate over momentum, so we get equations for things that live in usual 3D space.
Multiplying by $\chi(\vec p)$ and integrating:
$\frac{\partial}{\partial t}\left(\int d^3\vec p\ \chi(\vec p) f\right) + \frac{1}{m} \nabla_{\vec r} \cdot \left(\int d^3\vec p\ \chi(\vec p)\vec p f\right) + \vec F \cdot \left(\int d^3\vec p\ \chi(\vec p) \frac{\partial f}{\partial \vec p}\right) = \int d^3 \vec p\ d^3 \vec p_0\ d\Omega\ g\chi(\vec p)\ \sigma(g,\Omega) (f'f_1' - ff_1)$
It's possible to show that if $\chi(\vec p)$ is a conserved quantity in binary collisions, the last term vanishes, so that's what we are going to look for. Choosing $\chi(\vec p)=m$, we arrive at:
$\frac{\partial}{\partial t}\left(\int d^3\vec p\ m f\right) + \nabla_{\vec r} \cdot \left(\int d^3\vec p\ \vec p f\right) = 0$
Identifying $\rho = \int d^3\vec p\ m f$ as the mass density and $\int d^3\vec p\ \vec p f = \vec j = \rho \langle\vec v\rangle = \rho \vec u$ as the mass current, we have the continuity equation for the mass density:

$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec u) = 0$
Setting $\chi(\vec p) = \vec p$ we arrive at:
$\frac{\partial }{\partial t}(\rho \vec u) + \nabla \cdot \Sigma - \rho \vec F = 0$

Where $\Sigma_{ij} = \int d^3\vec p\ p_i p_j f(t,\vec r,\vec p)$. Now we can decompose $\vec p = m\langle\vec v\rangle + \delta \vec p = m\vec u + \delta \vec p$, where we identify the average velocity as $\vec u = \frac{1}{\rho} \vec j$. This average velocity is what we identify as the fluid velocity. Going back to the last equation we have:
$\frac{\partial }{\partial t}(\rho \vec u) + \nabla \cdot \left(\rho \vec u \otimes \vec u + \Pi \right) = \vec f$
Where, finally, we identify $\Pi_{ij} = \int d^3\vec p\ \delta p_i \delta p_j f(t,\vec r,\vec p)$ as the stress tensor. The convective part is already there, and we can now appreciate the connection between kinetic theory and hydrodynamics. Coming back to the Maxwell–Boltzmann distribution:
$f=n(t,\vec r)f_0(\vec p)$
$f_0(\vec p) = \frac{1}{(2\pi m k_BT)^{3/2}}e^{-\frac{ (\vec p-\vec p_0)^2}{2mk_BT}}$
We said that for thermodynamics, we had $n$, $T$ and $\vec p_0$ constant throughout the gas. For hydrodynamics, we try to retain that functional form and relax this assumption, i.e., we try (again) to find solutions of the Boltzmann equation of the above form, but with $n(t,\vec r)$, $T(t,\vec r)$ and $\vec p_0(t,\vec r)$ possibly having some dependence on time and space, and so we talk about local thermal equilibrium, since we try to keep, locally, an equilibrium distribution.

If we do that, we end up with $\rho = m\times n(t,\vec r)$ and $\vec u = \frac{\vec p_0}{m}$, which wasn't totally unexpected, and $\vec p - m\vec u= \delta \vec p$, so the Maxwell–Boltzmann distribution measures the (local) fluctuation of the velocity. Now, computing the stress tensor:
$\Pi_{ij} = \frac{n}{(2\pi m k_BT)^{3/2}}\int d^3\vec p\ \delta p_i \delta p_j e^{-\frac{\delta \vec p^2}{2mk_BT}}$
It's not too difficult to see that the stress tensor above is proportional to the identity tensor, and we identify $\Pi_{ij} = p \delta_{ij}$; since we have a relation between pressure, density and temperature, we have an EoS. If you plug that into the original equation with $\Pi$, you end up with the Euler equation of fluid dynamics. So you can think of the Euler equation as an evolution equation for something that is in strict local equilibrium.
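The isotropy claim is easy to verify by Monte Carlo: sampling $\delta\vec p$ from the local Maxwellian and averaging $\delta p_i\,\delta p_j$ reproduces a stress tensor proportional to the identity, with $p = n k_B T$. A rough numerical sketch (the units, density and sample size are my choices for illustration, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(1)
m, kT, n = 1.0, 1.0, 1.0  # units chosen so that p = n*kT = 1

# Sample momentum fluctuations from the local Maxwellian (drift removed).
dp = rng.normal(scale=np.sqrt(m * kT), size=(200_000, 3))

# Momentum-flux form of the stress tensor, n*<dp_i dp_j>/m, which for
# the Maxwellian equals (n kB T) times the identity.
Pi = n * (dp[:, :, None] * dp[:, None, :]).mean(axis=0) / m
print(np.round(Pi, 2))  # close to the 3x3 identity
```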
Also, if you look closely, you will see that the probability distribution only cares about the velocity fluctuation $\delta \vec p/m$, and not the actual velocity of the fluid $\vec u$. Here enters your question about the fluid flow:
But in fluid dynamics fluids are flowing and that too rapidly and from intuition there will not be any thermodynamic equilibrium during fluid flow.
From the fluid standpoint, the average velocity is not important to the thermodynamics, only the fluctuations around this average velocity.
Chemical equilibrium is not being considered here, since we are supposing that the fluid has a single chemical species, so it's naturally in chemical equilibrium.
Now, beyond Euler equation:
One very strong assumption that we made was that the fluid had a distribution in phase space that was locally Maxwell–Boltzmann. What would happen if we dropped this assumption?

Generally, we can't solve (or can only solve numerically) the Boltzmann equation except in very special cases, so, as any good physicist, we go to the next best thing: approximate solutions.

What happens if our system is not in equilibrium but close to equilibrium? It should be possible to write $f=f_0\phi$ where $\phi \approx 1$. Now, you would like to find some parameter that you could use to do some kind of perturbation expansion. This parameter is essentially the Knudsen Number of the system. If you do this, essentially the only thing that should change is the stress tensor, which depends explicitly on the form of the distribution in phase space.
The Knudsen Number is essentially a measure of how far apart the "microscopic" and "macroscopic" scales of your system are. If they are sufficiently far apart, i.e. $Kn \ll 1$, a macroscopic, or hydrodynamical, description of your system should be good.
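As an order-of-magnitude illustration of the Knudsen number (the hard-sphere mean-free-path formula and the air-like molecular diameter below are standard textbook values, not something stated in the answer):

```python
import math

def mean_free_path(T, P, d):
    """Hard-sphere mean free path: lambda = kB*T / (sqrt(2)*pi*d^2*P)."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * T / (math.sqrt(2) * math.pi * d**2 * P)

# Air-like gas at room conditions; d is an assumed effective diameter.
lam = mean_free_path(T=300.0, P=101325.0, d=3.7e-10)
Kn = lam / 1.0  # macroscopic scale L = 1 m
print(f"lambda ~ {lam:.1e} m, Kn ~ {Kn:.1e}")  # Kn << 1: hydrodynamics applies
```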
The zeroth order in $Kn$ should be $f_0$, so you seek something like $\phi = 1+Kn\,\phi_1 + (Kn)^2 \phi_2 + ...$
You can carry out this calculation, which is rather lengthy, and what you find (if I remember correctly) is that at first order in the Knudsen number you get the Navier–Stokes stress tensor, which in turn brings you to the Navier–Stokes equation, with the bulk and shear viscosity coefficients.

Not only this: you can calculate the dependence of these coefficients on density and temperature, so you not only have the general form of the evolution equation, but you also have an "EoS" in the extended sense, encompassing also the viscosity coefficients.

So, the idea of using this method is to define pressure, temperature and the like on the "equilibrium" part of the distribution, and viscosity and any other kind of effect on the "non-equilibrium" part. In this sense you can talk about thermodynamics, since you are near (local) equilibrium even if you are not exactly in equilibrium.
So, this is one way to see hydrodynamics as 'mean kinetic theory', and also as an (almost) local thermodynamics. There are also other ways to do it. One is to study Non-Equilibrium Thermodynamics as a macroscopic (in the same sense as classical thermodynamics) mean theory. This is done, in the linear theory, by De Groot & Mazur.
I hope that I have clarified some of your questions. I believe this is a very interesting subject, and I like it very much myself.
Wow thanks! Yes I would really appreciate some pointers on how Boltzmann equation explains all this away. – smilingbuddha Jun 13 '13 at 22:13
http://physics.stackexchange.com/questions/63927/localized-electrons-in-the-crystals | # localized electrons in the crystals
Why do electrons in low-lying levels of individual atoms stay localized in their own atoms in a crystal? Doesn't this contradict Bloch's theorem?
Yes (that would be the lowest state in the energy band, into which the 1s energy level would expand). But in that case the crystal would be unstable, because the average charge density would then be positive. To make a zero average charge density, you need to take as many electrons as protons in the nuclei. For the simpler model, consider 1 electron per 2 protons – an $\mathrm{H}_2^+$ hydrogen molecular ion. – firtree May 9 at 10:03
https://indianf.com/aries-astronomers-trace-the-mystery-behind-dwarf-galaxies/ | # ARIES astronomers trace the mystery behind dwarf galaxies
Amidst the billions of galaxies in the universe, a large number are tiny ones 100 times less massive than our own Milky Way galaxy.

While most of these tiny tots called dwarf galaxies form stars at a much slower rate than the massive ones, some dwarf galaxies are seen forming new stars at a mass-normalized rate 10-100 times higher than that of the Milky Way galaxy. These activities, however, do not last longer than a few tens of million years, a period which is much shorter than the age of these galaxies – typically a few billion years.
Scientists observing dozens of such galaxies using two Indian telescopes have found that the clue to this strange behaviour of these galaxies lies in the disturbed hydrogen distribution in these galaxies and also in recent collisions between two galaxies.
To understand the nature of star formation in dwarf galaxies, astronomers Dr. Amitesh Omar and his former student Dr. Sumit Jaiswal from the Aryabhatta Research Institute of Observational Sciences (ARIES), an autonomous institute of the Department of Science & Technology (DST), observed many such galaxies using the 1.3-meter Devasthal Fast Optical Telescope (DFOT) near Nainital and the Giant Metrewave Radio Telescope (GMRT).
While the former operated at optical wavelengths, sensitive to optical line radiation emanating from ionized hydrogen, in the latter 30 dishes of 45-meter diameter each worked in tandem and produced sharp interferometric images via spectral line radiation at 1420.40 MHz coming from the neutral hydrogen in galaxies.
Star formation at a high rate requires very high density of Hydrogen in the galaxies. According to the study conducted by the ARIES team, the 1420.40 MHz images of several intense star-forming dwarf galaxies indicated that hydrogen in these galaxies is very disturbed. While one expects a nearly symmetric distribution of hydrogen in well-defined orbits in galaxies, hydrogen in these dwarf galaxies is found to be irregular and sometimes not moving in well-defined orbits.
Some hydrogen around these galaxies is also detected in forms of isolated clouds, plumes, and tails as if some other galaxy recently has collided or brushed away with these galaxies, and gas is scattered as debris around the galaxies. The optical morphologies sometimes revealed multiple nuclei and high concentration of ionized hydrogen in the central region.
Although galaxy-galaxy collision was not directly detected, various signatures of it were revealed through radio, and optical imaging, and these are helping to build up a story. The research, therefore, suggests that recent collisions between two galaxies trigger intense star formation in these galaxies.
The findings of this research with detailed images of 13 galaxies will be appearing in the forthcoming issue of the Monthly Notices of the Royal Astronomical Society (MNRAS) journal published by the Royal Astronomical Society, the U.K. It will help astronomers to understand the formation of stars and the evolution of less massive galaxies in the Universe.
http://quant-splinters.blogspot.ch/2014/01/ | i
## Friday, 24 January 2014
### Drawdown Risk Budgeting: Contributions to Drawdown-At-Risk and the Drawdown Parity Portfolio
Similar to Value-At-Risk, Drawdown-At-Risk is defined as a point on the drawdown distribution defined by a probability interpreted as a "level of confidence". The well-known risk measure Maximum Drawdown is the 100% Drawdown-At-Risk, i.e. the drawdown which is not exceeded with certainty.
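As a sketch of how such figures can be produced (the post's own numbers come from its proprietary dataset; the return series below is made up), one can compute the empirical drawdown distribution from a return series and read off a quantile:

```python
import numpy as np

def drawdowns(returns):
    """Drawdown at each date: fractional drop from the running peak of cumulated wealth."""
    wealth = np.cumprod(1.0 + np.asarray(returns, dtype=float))
    peak = np.maximum.accumulate(wealth)
    return 1.0 - wealth / peak

def drawdown_at_risk(returns, confidence=0.95):
    """Empirical DaR: the drawdown not exceeded at the given level of confidence."""
    return np.quantile(drawdowns(returns), confidence)

rets = [0.02, -0.05, 0.01, -0.03, 0.04, -0.10, 0.06]
dd = drawdowns(rets)
print(dd.max())                      # maximum drawdown = 100% DaR
print(drawdown_at_risk(rets, 0.95))  # 95% DaR
```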
The table below shows Drawdown-At-Risk values for the constituents of a specific multi asset class universe (total returns, monthly figures, Jan 2001 to Oct 2011, base currency USD)...
In portfolio analytics, fully additive contributions to risk can be derived from Euler's homogeneous function theorem for linear homogeneous risk measures. Portfolio volatility and tracking error are examples of risk measure which are linear homogeneous in constituent weights.
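For a concrete illustration of the Euler decomposition with a linear homogeneous measure, here it is for portfolio volatility (the covariance matrix and weights are made-up toy numbers):

```python
import numpy as np

# Toy covariance matrix and fully invested long-only weights.
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = np.array([0.5, 0.3, 0.2])

sigma_p = np.sqrt(w @ cov @ w)
marginal = cov @ w / sigma_p   # d(sigma_p)/d(w_i)
contrib = w * marginal         # Euler: component contributions

print(np.isclose(contrib.sum(), sigma_p))  # True: fully additive
```

Because volatility is homogeneous of degree one in the weights, the weighted marginals sum exactly to portfolio volatility; for Drawdown-At-Risk the analogous decomposition is only approximate, as the post notes.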
Non-linear homogeneous risk measures can be approximated (e.g. Taylor series expansions, using the total differential as a linear approximation and so on). In the chart below, we show how the 95% DaR of an equal-weighted portfolio varies with variations in individual constituent weights (we make the assumption that exposures are booked against a risk-free cash account with zero return)...
This chart is called "the Spaghetti chart" by certain people. In the case of the minimum 95% DaR portfolio, i.e. the fully invested long-only portfolio with minimum 95% Drawdown-At-Risk, all spaghettis must point downwards...
The full details of the risk decomposition for the equal-weighted portfolio...
...in comparison with the minimum 95% DaR portfolio...
Additive contributions to portfolio Drawdown-At-Risk open up the door for drawdown risk budgeting. For example, the Drawdown Parity Portfolio can be calculated as the portfolio with equal constituent contributions to portfolio drawdown risk...
Due to the residual, the DaR contributions are not perfectly equalized. Taking into account estimation risk and other implementation issues, this is acceptable for practical purposes.
Being able to calculate additive contributions to drawdown-at-risk is useful for descriptive ex post or ex ante risk budgeting purposes. The trade risk charts are useful indicators providing information on a) the risk drivers in the portfolio and b) the directions to trade.
Budgeting drawdown risk is really budgeting for future drawdowns ("ex ante drawdown"). This involves estimating future drawdowns. Whether future drawdowns can be estimated with the required precision is an empirical question. In order to assess what this task might involve, it is interesting to review certain findings in the theoretical literature on the expected maximum drawdown of geometric Brownian motions (see for example "An Analysis of the Expected Maximum Drawdown Risk Measure" by Magdon-Ismail/Atiya; more recently, analytical results have been derived for return-generating processes with time-varying volatility). In the long run, the expected maximum drawdown for a geometric Brownian motion is...
$$MDD_{e} = \left(0.63519 + 0.5 \cdot \ln(T) + \ln\left(\frac{\mu}{\sigma}\right)\right) \cdot \frac{\sigma^2}{\mu}$$
Expected maximum drawdown is a function of the investment horizon (+), volatility (+) and expected return (-).
While we have time series models with proven high predictive power to estimate volatility risk (e.g. GARCH), the estimation of maximum drawdown is a much more challenging task because it involves estimating expected returns, which is known to be subject to much higher estimation risk.
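The long-run approximation quoted above can be evaluated directly; the drift, volatility and horizon below are hypothetical inputs chosen only to illustrate the signs just noted:

```python
import math

def expected_max_drawdown(mu, sigma, T):
    """Long-run approximation of the expected maximum drawdown of a Brownian
    motion with drift mu > 0 and volatility sigma over horizon T (formula above)."""
    return (0.63519 + 0.5 * math.log(T) + math.log(mu / sigma)) * sigma**2 / mu

# Hypothetical inputs: 8% annual drift, 20% volatility, 10-year horizon
print(round(expected_max_drawdown(0.08, 0.20, 10.0), 4))  # 0.4351
```

Doubling the horizon or raising the volatility increases the value, while a higher expected return lowers it, consistent with the (+)/(-) signs in the text.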
## Monday, 6 January 2014
### Resampling the Efficient Frontier - With How Many Observations?
Since optimizer inputs are stochastic variables, it follows that any efficient frontier must be a stochastic object. The efficient frontier we usually plot in mean/variance space is the expected efficient frontier. The realized efficient frontier will almost always deviate from the expected frontier and will lie within certain confidence bands.
Several attempts have been made to illustrate the stochastic nature of the efficient frontier, the most famous one probably being the so-called "Resampled Efficient Frontier" (tm) by Michaud/Michaud(1998).
Resampling involves setting the number of simulations as well as setting the number of observations to generate in each simulation. The importance of the latter decision is typically underestimated.
The chart below plots the resampled counterparts of 16 portfolios on a particular mean/variance efficient frontier...
The larger density of points at the bottom left end of the frontier results from the fact that there are two very similar corner portfolios in this area of the curve.
The chart below plots the same frontier with the same number of simulations, but a much larger number of generated observations...
As the confidence bands, average weights and any risk and return characteristics are largely determined by the choice of the number of simulations and the number of observations in each simulation, it is worth keeping an eye on these modelling decisions when relying on a resampling approach for investment purposes.
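A minimal two-asset bootstrap sketch of the resampling idea (made-up return series; real implementations typically resample from a fitted multivariate distribution rather than bootstrapping rows): each simulation draws n_obs paired observations with replacement, re-estimates the moments and recomputes a frontier portfolio. The dispersion of the resampled weights shrinks as the number of generated observations grows, which is exactly why that choice matters:

```python
import random

def cov(x, y):
    """Sample covariance (n - 1 denominator)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

def min_var_weight(x, y):
    """Weight of asset 1 in the two-asset minimum-variance portfolio."""
    v1, v2, c = cov(x, x), cov(y, y), cov(x, y)
    return (v2 - c) / (v1 + v2 - 2 * c)

def resampled_weights(x, y, n_sims, n_obs, seed=0):
    """Each simulation draws n_obs paired observations with replacement,
    re-estimates the moments and recomputes the frontier portfolio."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_sims):
        idx = [rng.randrange(len(x)) for _ in range(n_obs)]
        out.append(min_var_weight([x[i] for i in idx], [y[i] for i in idx]))
    return out

def spread(w):
    return max(w) - min(w)

# Hypothetical monthly returns for two assets
x = [0.010, -0.020, 0.030, 0.000, 0.020, -0.010, 0.040, -0.030, 0.015, 0.025]
y = [0.000, 0.010, -0.010, 0.020, -0.018, 0.030, -0.015, 0.020, 0.004, 0.000]

few = resampled_weights(x, y, n_sims=200, n_obs=10)
many = resampled_weights(x, y, n_sims=200, n_obs=200)
print(spread(few) > spread(many))  # True: dispersion shrinks with more observations
```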
https://www.physicsforums.com/threads/please-help-me-split-this-equation-into-2-equations.85842/ | 1. Aug 21, 2005
### memarf1
I'm trying to turn this equation into 2 separate equations in order to place it in a Runge-Kutta problem. This is the proposed problem and conditions:
$$\frac{d^2f}{dx^2} + f = 0$$
allowing
$$f (x) = A\cos x + B\sin x$$
$$f ' (x) = -A\sin x + B\cos x$$
$$f '' (x) = -A\cos x - B\sin x$$
and
$$g = \frac{df}{dx}$$
meaning
$$\frac{df}{dx} - g = 0$$ which, together with the next equation, is equivalent to $$\frac{d^2f}{dx^2} + f = 0$$
so
$$\frac{dg}{dx} + f = 0$$
the initial conditions for equation 1 are:
$$f (0) = 1$$
$$f ' (0) = 0$$
and for equation 2 are:
$$f (0) = 0$$
$$g (0) = 1$$
I hope this formatting is more easy to read.
any suggestions??
Last edited: Aug 21, 2005
2. Aug 21, 2005
### Zurtex
Right, you really need to learn Latex. So your post I think would go like this:
$$\frac{d''f}{dx''} + f = 0$$
Therefore:
$$f (x) = A\cos x + B\sin x$$
$$f ' (x) = -A\sin x + B\cos x$$
$$f '' (x) = -A\cos x - B\sin x$$
However, before I try to translate the rest, I feel it worth noting that this is very confusing:
$$\frac{d''f}{dx''}$$
Please stick to something like this:
$$\frac{d^{2}y}{dx^{2}} \quad \text{or} \quad y''$$
3. Aug 21, 2005
### memarf1
yes, that is correct.
Off Subject, but what is latex?
4. Aug 21, 2005
### Zurtex
Click on any of my equations and a box should appear showing the code I used to write it.
It's very early in the morning here, I'll come back and look at your problem later sorry, too tired right now.
5. Aug 21, 2005
### memarf1
Ok, well, I have changed the formatting. Thank you for your continued help; I'll check back in in the morning. Thanks again.
I'm just looking for the 2 equations to plug into the Runge-Kutta 4. I hope you can help. I have another post with my C++ code in it, but the code is correct. I just have to do this to show my professor.
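The resulting first-order system f' = g, g' = -f with f(0) = 1, g(0) = 0 (whose exact solution is f(x) = cos x) can be stepped with the classical RK4 scheme; a minimal sketch (in Python rather than the thread's C++, just to show the structure):

```python
import math

def rk4_step(f, g, h):
    """One classical Runge-Kutta (RK4) step for the system f' = g, g' = -f."""
    def deriv(fv, gv):
        return gv, -fv
    k1f, k1g = deriv(f, g)
    k2f, k2g = deriv(f + 0.5 * h * k1f, g + 0.5 * h * k1g)
    k3f, k3g = deriv(f + 0.5 * h * k2f, g + 0.5 * h * k2g)
    k4f, k4g = deriv(f + h * k3f, g + h * k3g)
    return (f + h * (k1f + 2 * k2f + 2 * k3f + k4f) / 6.0,
            g + h * (k1g + 2 * k2g + 2 * k3g + k4g) / 6.0)

f, g, h = 1.0, 0.0, 0.01      # initial conditions f(0) = 1, f'(0) = 0, step size
for _ in range(314):          # integrate up to x = 3.14
    f, g = rk4_step(f, g, h)
print(round(f, 6))            # close to cos(3.14), i.e. about -0.999999
```

The numerical solution tracks cos x to roughly the fourth-order accuracy expected of RK4 at this step size.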
https://www.physicsforums.com/threads/help-me.99621/ | # Help me
1. Nov 12, 2005
### shravan
how to calculate the number of prime factors of 360? please give the method
2. Nov 13, 2005
### Tide
HINT: Factor the number. :)
P.S. And, no, I am not being glib!
3. Nov 13, 2005
### HallsofIvy
Staff Emeritus
As Tide said: start factoring. It's not that hard. I'll get you started:
360= 2(180)= 2(2)(90)= ...
surely you can do the rest yourself. Did you mean number of distinct prime factors or just number of prime factors (i.e. counting "2" more than once).
4. Nov 13, 2005
### shravan
sorry re question
I am sorry, my question was wrong. However, I wanted to ask how to find the number of perfect-square factors of 360 without factorizing. I am sorry for sending the wrong question.
5. Nov 14, 2005
### bomba923
That's a different question; prime factorization of 360 yields
$$360 = 2^3 \cdot 3^2 \cdot 5$$
and therefore the only perfect-square factors included are
$${\{1,4,9,36\}}$$
from observing the prime factorization. There are only four perfect-square factors of 360.
(The "1" is trivial tho )
*Then again, I'll reply later when I'll write an explicitly mathematical way to calculate the quantity of perfect-square factors of 360-->without factorization, as you mentioned
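One standard explicit way to count the perfect-square factors (a sketch, not necessarily the derivation bomba923 had in mind): a divisor is a perfect square exactly when every prime exponent it uses is even, so the count is the product of floor(e_i / 2) + 1 over the prime exponents e_i:

```python
def prime_factorization(n):
    """Return {prime: exponent} by trial division."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def count_square_divisors(n):
    """Number of perfect-square divisors: product of (e_i // 2 + 1)
    over the prime exponents e_i of n."""
    count = 1
    for e in prime_factorization(n).values():
        count *= e // 2 + 1
    return count

print(prime_factorization(360))    # {2: 3, 3: 2, 5: 1}
print(count_square_divisors(360))  # 4 -> the divisors 1, 4, 9, 36
```

For 360 = 2^3 * 3^2 * 5 this gives (1 + 1)(1 + 1)(0 + 1) = 4, matching the list {1, 4, 9, 36} above.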
Last edited: Nov 14, 2005 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8928166031837463, "perplexity": 4086.702871497544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00468-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://www.researchgate.net/publication/220511595_Exact_Transient_Solutions_of_Nonempty_Markovian_Queues | Article
# Exact Transient Solutions of Nonempty Markovian Queues.
Computers & Mathematics with Applications 01/2006; 52:985-996. DOI: 10.1016/j.camwa.2006.04.022
Source: DBLP
ABSTRACT It has been shown by Sharma and Tarabia [1] that a power series technique can be successfully applied to derive the transient solution for an empty M/M/1/N queueing system. In this paper, we further illustrate how this technique can be used to extend [1] solution to allow for an arbitrary number of initial customers in the system. Moreover, from this, other more commonly sought results such as the transient solution of a nonempty M/M/1/∞ queue can be computed easily. The emphasis in this paper is theoretical but numerical assessment of operational consequences is also given.
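The transient behaviour discussed in this abstract can be sanity-checked numerically. The sketch below is not the paper's power-series technique: it simply integrates the Kolmogorov forward equations of the M/M/1/N birth-death chain with explicit Euler steps, for hypothetical rates and initial state:

```python
def mm1n_transient(lam, mu, N, i, t, dt=1e-3):
    """Transient state probabilities of an M/M/1/N queue that starts with i
    customers, via explicit Euler steps on the Kolmogorov forward equations."""
    p = [0.0] * (N + 1)
    p[i] = 1.0
    for _ in range(int(round(t / dt))):
        dp = [0.0] * (N + 1)
        for n in range(N + 1):
            rate_out = (lam if n < N else 0.0) + (mu if n > 0 else 0.0)
            dp[n] -= rate_out * p[n]
            if n > 0:
                dp[n] += lam * p[n - 1]   # arrival moved the chain from n-1 to n
            if n < N:
                dp[n] += mu * p[n + 1]    # service completion moved it from n+1 to n
        p = [pn + dt * dpn for pn, dpn in zip(p, dp)]
    return p

# Hypothetical rates: arrivals at 1.0, services at 1.5, capacity N = 5, start at i = 2
p = mm1n_transient(lam=1.0, mu=1.5, N=5, i=2, t=10.0)
print(round(sum(p), 6))  # the distribution stays normalized: 1.0
```

Because the forward equations conserve total probability, the numerical distribution stays normalized, which makes this a convenient cross-check for closed-form transient solutions.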
##### Article: Analysis of random walks with an absorbing barrier and chemical rule
ABSTRACT: Recently Tarabia and El-Baz [A.M.K. Tarabia, A.H. El-Baz, Transient solution of a random walk with chemical rule, Physica A 382 (2007) 430-438] have obtained the transient distribution for an infinite random walk moving on the integers -∞ < k < ∞ of the real line. In this paper, a similar technique is used to derive new elegant explicit expressions for the first passage time and the transient state distributions of a semi-infinite random walk having "chemical" rule and in the presence of an absorbing barrier at state zero, the walker starting initially at any arbitrary positive integer position i, i > 0. In random walk terminology, the busy period concerns the first passage time to zero. The relation of these walks to queueing problems is pointed out and the distributions of the queue length in the system and the first passage time (busy period) are derived. As special cases of our result, the Conolly et al. [B.W. Conolly, P.R. Parthasarathy, S. Dharmaraja, A chemical queue, Math. Sci. 22 (1997) 83-91] solution and the probability density function (PDF) of the busy period for the M/M/1/∞ queue are easily obtained. Finally, numerical values are given to illustrate the efficiency and effectiveness of the proposed approach.
Journal of Computational and Applied Mathematics 03/2009; 225(2):612–620. · 0.99 Impact Factor
##### Article: Transient results for M/M/1/c queues via path counting
ABSTRACT: We find combinatorially the probability of having n customers in an M/M/1/c queueing system at an arbitrary time t when the arrival rate λ and the service rate µ are equal, including the case c = ∞. Our method uses path counting methods and finds a bijection between the paths of the type needed for the queueing model and paths of another type which are easy to count. The bijection involves some interesting geometric methods.
International Journal of Mathematics in Operational Research 09/2008; 1.
• ##### Article: Time dependent analysis of M/M/ 1 queue with server vacations and a waiting server
ABSTRACT: In this paper, we have obtained explicit expressions for the time dependent probabilities of the M/M/1 queue with server vacations under a waiting server. The corresponding steady state probabilities have been obtained. We also obtain the time dependent performance measures of the systems. Numerical illustrations are provided to examine the sensitivity of the system state probabilities to changes in the parameters of the system.
08/2011; | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9034325480461121, "perplexity": 817.2041015359805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657129431.12/warc/CC-MAIN-20140914011209-00091-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/100543-monoids-groups.html | 1. ## Monoids and Groups
Can I get help on the following problem:
Show that if n ≥ 3 then the center of Sn is of order 1.
2. Originally Posted by thomas_donald
Can I get help on the following problem:
Show that if n ≥ 3 then the center of Sn is of order 1.
Let $\displaystyle n\geq 4$.
Let $\displaystyle \sigma$ be a non-trivial permutation. Then there exist $\displaystyle a,b\in \{1,2,...,n\}$ such that $\displaystyle \sigma(a) = b$ with $\displaystyle a\not = b$. Notice that $\displaystyle a,\sigma^{-1}(a),b$ are at most three distinct points, so let $\displaystyle c$ be different from all three of these (since $\displaystyle n\geq 4$ this is possible). There exists a permutation $\displaystyle \tau$ which satisfies $\displaystyle \tau(a) = \sigma^{-1}(a),\tau(b) = c$. Thus, $\displaystyle \tau \sigma(a) = \tau (b) = c$ and $\displaystyle \sigma \tau(a) = \sigma \sigma^{-1}(a) = a$. We see that $\displaystyle \tau \sigma \not = \sigma \tau$. This shows that if $\displaystyle \sigma$ is not trivial then it cannot lie in the center. (The remaining case $\displaystyle n = 3$ can be checked directly: none of the five non-identity elements of $\displaystyle S_3$ commutes with every element of $\displaystyle S_3$.)
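The claim is also easy to check by brute force for small n (an illustrative sketch, with permutations represented as tuples of images):

```python
from itertools import permutations

def compose(p, q):
    """Composition (p o q)(i) = p[q[i]], permutations stored as tuples of images."""
    return tuple(p[q[i]] for i in range(len(p)))

def center(n):
    """Elements of S_n that commute with every element of S_n."""
    group = list(permutations(range(n)))
    return [s for s in group if all(compose(s, t) == compose(t, s) for t in group)]

for n in range(1, 6):
    print(n, len(center(n)))  # orders 1, 2, then 1 for every n >= 3
```

The output confirms the statement: the center has order 2 only for n = 2, and order 1 for every n ≥ 3 checked.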
https://docs.sympy.org/0.7.6.1/modules/polys/literature.html | /
Literature¶
The following is a non-comprehensive list of publications that were used as a theoretical foundation for implementing polynomials manipulation module.
[Kozen89] D. Kozen, S. Landau, Polynomial decomposition algorithms, Journal of Symbolic Computation 7 (1989), pp. 445-456
[Liao95] Hsin-Chao Liao, R. Fateman, Evaluation of the heuristic polynomial GCD, International Symposium on Symbolic and Algebraic Computation (ISSAC), ACM Press, Montreal, Quebec, Canada, 1995, pp. 240–247
[Gathen99] J. von zur Gathen, J. Gerhard, Modern Computer Algebra, First Edition, Cambridge University Press, 1999
[Weisstein09] Eric W. Weisstein, Cyclotomic Polynomial, From MathWorld - A Wolfram Web Resource, http://mathworld.wolfram.com/CyclotomicPolynomial.html
[Wang78] P. S. Wang, An Improved Multivariate Polynomial Factoring Algorithm, Math. of Computation 32, 1978, pp. 1215–1231
[Geddes92] K. Geddes, S. R. Czapor, G. Labahn, Algorithms for Computer Algebra, Springer, 1992
[Monagan93] Michael Monagan, In-place Arithmetic for Polynomials over Z_n, Proceedings of DISCO ‘92, Springer-Verlag LNCS, 721, 1993, pp. 22–34
[Kaltofen98] E. Kaltofen, V. Shoup, Subquadratic-time Factoring of Polynomials over Finite Fields, Mathematics of Computation, Volume 67, Issue 223, 1998, pp. 1179–1197
[Shoup95] V. Shoup, A New Polynomial Factorization Algorithm and its Implementation, Journal of Symbolic Computation, Volume 20, Issue 4, 1995, pp. 363–397
[Gathen92] J. von zur Gathen, V. Shoup, Computing Frobenius Maps and Factoring Polynomials, ACM Symposium on Theory of Computing, 1992, pp. 187–224
[Shoup91] V. Shoup, A Fast Deterministic Algorithm for Factoring Polynomials over Finite Fields of Small Characteristic, In Proceedings of International Symposium on Symbolic and Algebraic Computation, 1991, pp. 14–21
[Cox97] D. Cox, J. Little, D. O’Shea, Ideals, Varieties and Algorithms, Springer, Second Edition, 1997
[Ajwa95] I.A. Ajwa, Z. Liu, P.S. Wang, Groebner Bases Algorithm, https://citeseer.ist.psu.edu/myciteseer/login, 1995
[Bose03] N.K. Bose, B. Buchberger, J.P. Guiver, Multidimensional Systems Theory and Applications, Springer, 2003
[Giovini91] A. Giovini, T. Mora, “One sugar cube, please” or Selection strategies in Buchberger algorithm, ISSAC ‘91, ACM
[Bronstein93] M. Bronstein, B. Salvy, Full partial fraction decomposition of rational functions, Proceedings ISSAC ‘93, ACM Press, Kiev, Ukraine, 1993, pp. 157–160
[Buchberger01] B. Buchberger, Groebner Bases: A Short Introduction for Systems Theorists, In: R. Moreno-Diaz, B. Buchberger, J. L. Freire, Proceedings of EUROCAST‘01, February, 2001
[Davenport88] J.H. Davenport, Y. Siret, E. Tournier, Computer Algebra Systems and Algorithms for Algebraic Computation, Academic Press, London, 1988, pp. 124–128
[Greuel2008] G.-M. Greuel, Gerhard Pfister, A Singular Introduction to Commutative Algebra, Springer, 2008
[Atiyah69] M.F. Atiyah, I.G. MacDonald, Introduction to Commutative Algebra, Addison-Wesley, 1969
[Monagan00] M. Monagan and A. Wittkopf, On the Design and Implementation of Brown’s Algorithm over the Integers and Number Fields, Proceedings of ISSAC 2000, pp. 225-233, ACM, 2000.
[Brown71] W.S. Brown, On Euclid’s Algorithm and the Computation of Polynomial Greatest Common Divisors, J. ACM 18, 4, pp. 478-504, 1971.
[Hoeij04] M. van Hoeij and M. Monagan, Algorithms for polynomial GCD computation over algebraic function fields, Proceedings of ISSAC 2004, pp. 297-304, ACM, 2004.
[Wang81] P.S. Wang, A p-adic algorithm for univariate partial fractions, Proceedings of SYMSAC 1981, pp. 212-217, ACM, 1981.
[Hoeij02] M. van Hoeij and M. Monagan, A modular GCD algorithm over number fields presented with multiple extensions, Proceedings of ISSAC 2002, pp. 109-116, ACM, 2002
[ManWright94] Yiu-Kwong Man and Francis J. Wright, “Fast Polynomial Dispersion Computation and its Application to Indefinite Summation”, Proceedings of the International Symposium on Symbolic and Algebraic Computation, 1994, Pages 175-180 http://dl.acm.org/citation.cfm?doid=190347.190413
[Koepf98] Wolfram Koepf, “Hypergeometric Summation: An Algorithmic Approach to Summation and Special Function Identities”, Advanced lectures in mathematics, Vieweg, 1998
[Abramov71] S. A. Abramov, “On the Summation of Rational Functions”, USSR Computational Mathematics and Mathematical Physics, Volume 11, Issue 4, 1971, Pages 324-330
[Man93] Yiu-Kwong Man, “On Computing Closed Forms for Indefinite Summations”, Journal of Symbolic Computation, Volume 16, Issue 4, 1993, Pages 355-376 http://www.sciencedirect.com/science/article/pii/S0747717183710539
https://testbook.com/question-answer/what-would-be-the-corresponding-transfer-function--5f3cfe8a425b874985ba7c1a | # What would be the corresponding transfer function of a linear time invariant system having a unit step function as its impulse response?
This question was previously asked in
UPRVUNL AE EC 2014 Official Paper
1. 1/s
2. 1/s2
3. 1
4. s
Option 1 : 1/s
## Detailed Solution
Concept:
The transfer function is defined as the ratio of the Laplace transform of the output to the Laplace transform of the input, assuming all initial conditions are zero.
$$TF = \frac{C(s)}{R(s)}$$
Impulse response = Inverse Laplace transform of transfer function.
'OR'
Transfer function = Laplace transform of Impulse response.
Analysis:
Given: h(t) = u(t)
Transfer function = Laplace transform of Impulse response.
Transfer function = L{u(t)}
Transfer function = 1/s | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8768461346626282, "perplexity": 2484.6520521612956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00320.warc.gz"} |
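As a numerical cross-check of this result (not part of the original solution), the Laplace integral of the unit step, the integral of e^(-st) * 1 from 0 to infinity, can be approximated on a truncated horizon with the trapezoidal rule and compared with 1/s:

```python
import math

def laplace_of_unit_step(s, T=100.0, n=100000):
    """Trapezoidal approximation of the Laplace integral of u(t) = 1 on [0, T];
    for s*T large this approaches the exact transform 1/s."""
    dt = T / n
    total = 0.5 * (1.0 + math.exp(-s * T))            # endpoint terms, u(t) = 1
    total += sum(math.exp(-s * k * dt) for k in range(1, n))
    return total * dt

for s in (0.5, 1.0, 2.0):
    print(s, round(laplace_of_unit_step(s), 4), round(1.0 / s, 4))  # approx vs 1/s
```

The approximation matches 1/s to four decimal places for these (arbitrarily chosen) values of s.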
https://www.physicsforums.com/threads/graphene-greens-function-technique.631570/ | # Graphene - Green's function technique
• #1
csopi
Hi,
I am looking for a comprehensive review about using Matsubara Green's function technique for graphene (or at least some hints in the following problem). I have already learned some finite temperature Green's function technique, but only the basics.
What confuses me is that graphene has two sublattices (say A and B), and so (in principle) we have four non-interacting Green's functions: $$G_{AA}(k,\tau)=-\langle T_{\tau}a_k(\tau)a_k^{\dagger}(0)\rangle,$$
where $$a_k$$ is the annihilation operator acting on the A sublattice. G_{AB}, G_{BA} and G_{BB} are defined in a similar way.
Of course, there are connections between them, but G_{AA} and G_{AB} are essentially different. Now, when I want to compute e.g. the screened Coulomb potential, I do not know which Green's function should be used to evaluate the polarization bubble.
• #2
I think you will find the answer you are looking for when you consider the expression for the bubble in coordinate space.
• #3
csopi
Dear DrDu,
thank you for your response, but I do not think, I understand how your suggestion helps me. Please explain it to me a bit more thoroughly.
• #4
I mean that the electromagnetic field couples locally to the electrons. Hence the bubble is some integral containing a product of two Green's functions G(x,x')G(x,x'). What consequences does locality have in the case of graphene?
• #6
csopi
Dear tejas777,
This is a very nice review, thank you very much. Let me ask just one final question: can you explain, how comes
$$F_{s,s'}(p,q)$$
in eq. (2.12) and (2.13) ?
• #7
tejas777
Look at section 6.2 (on page 19/23) in:
Now, the link contains a specific example. You can probably use this type of approach to derive a more general expression, one involving the ##s## and ##s'##. I may have read an actual journal article containing the rigorous analysis, but I cannot recall which one it was at the moment. If I am able to find that article I will post it here asap.
http://mathhelpforum.com/calculus/30606-differential-equation-equilibria.html | # Math Help - differential equation equilibria!??
1. ## differential equation equilibria!??
consider an interaction between two mutually inhibiting proteins with concentrations x and y, given by the differential equations:
dx/dt = f(y)-x
and
dy/dt = g(x)-y
where both f(y) and g(x) are decreasing functions.
*show that equilibria occur when f[g(x)]=x
how do i do the asterick part? any help would be very much appreciated..
2. Originally Posted by calcusucks
consider an interaction between two mutually inhibiting proteins with concentrations x and y, given by the differential equations:
dx/dt = f(y)-x
and
dy/dt = g(x)-y
where both f(y) and g(x) are decreasing functions.
*show that equilibria occur when f[g(x)]=x
how do i do the asterick part? any help would be very much appreciated..
Require that both rates of change equal zero:
f(y) - x = 0 => x = f(y) ..... (1)
g(x) - y = 0 => y = g(x) .... (2)
Solve equations (1) and (2) simultaneously:
Substitute (2) into (1): x = f(g(x)), which is the required equilibrium condition.
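With concrete decreasing functions (the choices of f, g and the bracketing interval below are hypothetical, just to illustrate the argument), the equilibrium condition f(g(x)) = x can be located numerically by bisection, and both rates of change then verified to vanish:

```python
def f(y):  # decreasing in y (hypothetical choice)
    return 1.0 / (1.0 + y * y)

def g(x):  # decreasing in x (hypothetical choice)
    return 1.0 / (1.0 + x)

def bisect(h, a, b, tol=1e-12):
    """Root of h on [a, b], assuming h changes sign on the interval."""
    ha = h(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if ha * h(m) <= 0:
            b = m
        else:
            a, ha = m, h(m)
    return 0.5 * (a + b)

# Equilibrium: f(g(x)) - x = 0 (sign change on [0, 1] for these choices)
x_star = bisect(lambda x: f(g(x)) - x, 0.0, 1.0)
y_star = g(x_star)
print(abs(f(y_star) - x_star) < 1e-9, abs(g(x_star) - y_star) < 1e-9)  # True True
```

At the computed point both dx/dt = f(y) - x and dy/dt = g(x) - y vanish, as required for an equilibrium.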
https://2022.help.altair.com/2022/hwsolvers/ms/topics/solvers/ms/optimization_problem_types_r.htm | # Optimization Problem Types
Optimization problems types include unconstrained optimization, simple bound constraints, and nonlinearly constrained optimization.
Unconstrained Optimization
An unconstrained problem has no constraints. Thus there are no equality or inequality constraints that the solution b has to satisfy. Furthermore, there are no design limits either.
Simple Bound Constraints
A bound-constrained problem has only lower and upper bounds on the design parameters. There are no equality or inequality constraints that the solution b has to satisfy. In the finite element world, these are also known as side constraints.
Nonlinearly Constrained Optimization
This is the most complex variation of the optimization problem. The solution has to satisfy nonlinear constraints (inequality and/or equality), and there are bounds on the design variables that specify limits on the values they can assume.
It is important to know about these problem types because several optimization search methods are available in MotionSolve. Some of these methods work only for specific types of optimization problems.
The optimization problem formulation is:(1)
minimize $\psi_0(x,b)$ (objective function)
subject to $\psi_i(x,b) \le 0$ (inequality constraints)
$\psi_j(x,b) = 0$ (equality constraints)
$b_L \le b \le b_U$ (design limits)
The functions are assumed to have the form:(2)
$$\psi_k(x,b) = \psi_{k0}(b) + \int_{t_0}^{t_f} L_k(x,b,t)\, dt$$
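A minimal sketch of the first two problem types (a stand-in illustration, not MotionSolve's actual algorithms): plain gradient descent handles the unconstrained case, and projecting each iterate back into the box [b_L, b_U] handles simple bound constraints; nonlinearly constrained problems would additionally require, e.g., penalty or multiplier methods. The quadratic objective below is a made-up example:

```python
def minimize(grad, x0, lower=None, upper=None, step=0.1, iters=2000):
    """Plain gradient descent; when bounds are supplied, each iterate is
    projected back into the box [lower, upper] (simple bound constraints)."""
    x = list(x0)
    for _ in range(iters):
        x = [xi - step * gi for xi, gi in zip(x, grad(x))]
        if lower is not None:
            x = [min(max(xi, lo), hi) for xi, lo, hi in zip(x, lower, upper)]
    return x

# Made-up quadratic objective psi(b) = (b1 - 3)^2 + (b2 + 1)^2 and its gradient
grad = lambda w: [2.0 * (w[0] - 3.0), 2.0 * (w[1] + 1.0)]

x_u = minimize(grad, [0.0, 0.0])                                      # unconstrained
x_b = minimize(grad, [0.0, 0.0], lower=[0.0, 0.0], upper=[2.0, 2.0])  # bound-constrained
print(x_u)  # [3.0, -1.0]: the free optimum
print(x_b)  # [2.0, 0.0]: both design limits are active
```

The bound-constrained run ends on the boundary of the box, illustrating how design limits can change the solution even without nonlinear constraints.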
http://www.stata.com/statalist/archive/2009-05/msg01034.html

Re: st: Explaining the Use of Inferential Statistics Even Though I Have Population Data
From: Salima Bouayad Agha
To: [email protected]
Subject: Re: st: Explaining the Use of Inferential Statistics Even Though I Have Population Data
Date: Sat, 30 May 2009 22:15:40 +0200
Well, even if this is not really a Stata question, I think that the question of your reviewer is not obvious. In statistics a sample can effectively be a part of the population, and that is the most usual and common way to talk about a sample, but in a mathematical statistics course students also learn that a sample can be just the population at one specific moment of time, and that this population could be very different if something else (a random phenomenon) had happened. Think for example of time series or some statistical process; let us for example talk about the population of computers produced by a firm. So I'm not sure that your answer is the best one, depending on which field of research you are working in. Just take a few moments to review some theoretical books on what a sample is in statistics.
Salima
PS :
In mathematical terms, given a random variable $X$ with distribution $F$, a sample of length $n \in \mathbb{N}$ is a set of $n$ independent, identically distributed (iid) random variables with distribution $F$. It concretely represents $n$ experiments in which we measure the same quantity. For example, if $X$ represents the height of an individual and we measure $n$ individuals, $X_i$ will be the height of the $i$-th individual. Note that a sample of random variables (i.e. a set of measurable functions) must not be confused with the realisations of these variables (which are the values that these random variables take). In other words, $X_i$ is a function representing the measure at the $i$-th experiment and $x_i = X_i(\omega)$ is the value we actually get when making the measure.
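To make the distinction in the PS concrete, here is a small illustrative simulation (the normal "height" distribution is just a stand-in): each run of the same experiment yields a different realisation of the same sample.

```python
import numpy as np

# The sample X_1,...,X_n is the random mechanism; running the experiment
# produces realisations x_1,...,x_n.  Two runs of the same experiment give
# two different realisations drawn from the same distribution F.
def draw_sample(n, seed):
    rng = np.random.default_rng(seed)
    return rng.normal(loc=170.0, scale=10.0, size=n)   # heights, say

run1 = draw_sample(5, seed=1)   # one realisation of the sample
run2 = draw_sample(5, seed=2)   # another realisation, same F
print(run1)
print(run2)
```

The two printed vectors differ, yet both are "the population at one specific moment" of the same underlying random process.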
https://www.physicsforums.com/threads/hydrostatic-force-problem.147630/

# Hydrostatic force problem
1. Dec 10, 2006
### SUchica10
A gate in an irrigation canal is in the form of a trapezoid 3 feet wide at the bottom, 5 feet wide at the top, with the height equal to 2 feet. It is placed vertically in the canal with the water extending to its top. For simplicity, take the density of water to be 60 lb/ft cubed. Find the hydrostatic force in pounds on the gate.
I am having problems setting this problem up. It looks like its really easy but I am just not sure how to start it.
I know F = density x gravity x area x depth
2. Dec 12, 2006
### chanvincent
Because the force acting on the gate is not constant, i.e. the force at the bottom is larger than that at the top, we have

dF = density x gravity x depth x d(area)

Doing this integration over the trapezoid will yield the correct answer.
3. Dec 13, 2006
### HallsofIvy
Staff Emeritus
Imagine the gate being divided into many narrow horizontal bands of width "$\Delta y$". If y is the depth of a band, and $\Delta y$ is small enough that we can think of every point in the band as at depth y, then the force along that band is the pressure, 60y [NOT "times gravity"! The density of the water is weight density, not mass density!], times the area: the length of the band times $\Delta y$. Of course, the length of the band depends on y: it is a linear function of y since the sides are straight lines, length(2) = 3 and length(0) = 5, so length(y) = 5 - y. The force on that narrow band is $60y(5-y)\Delta y$. The total force on the gate is the sum of those, $\sum 60y(5-y)\Delta y$, as y goes from 0 to 2. In the limit, that becomes the integral
$$60\int_0^2 y(5-y)\,dy$$
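As a numerical check, integrating the band force $60y(5-y)$ over depths 0 to 2 ft gives 440 lb (by hand: $60 \cdot \tfrac{22}{3} = 440$):

```python
import numpy as np

y = np.linspace(0.0, 2.0, 2001)          # depth, ft
integrand = 60.0 * y * (5.0 - y)         # pressure 60*y times band length (5 - y)
dy = np.diff(y)
force = float(np.sum((integrand[1:] + integrand[:-1]) * dy) / 2.0)  # trapezoid rule
print(force)   # close to 440 lb
```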
https://math.stackexchange.com/questions/624196/calculating-moore-penrose-pseudo-inverse

# Calculating Moore-Penrose pseudo inverse
I have a problem with a project requiring me to calculate the Moore-Penrose pseudo inverse. I've also posted about this on StackOverflow, where you can see my progress.
From what I understand from Planet Math, you can compute the pseudoinverse simply with the first formula, which I can understand, but it also says that this is for general cases, and you have to do SVD (singular value decomposition) and the formula becomes much more complicated (the second formula), which I don't understand... I mean,
1. What is V? What is S? I'm confused.
2. How can I calculate SVD?
4. Why there are two pseudo inverse formulas?
Left pseudo inverse formula $$A_\text{left} = (A^TA)^{-1}A^T$$ Right pseudo inverse formula $$A_\text{right}=A^T(AA^T)^{-1}$$
Thank you very much, Daniel.
These formulas are for different matrix formats of the rectangular matrix $A$.
The matrix to be (pseudo-)inverted should have full rank. (added:) If $A\in \mathbb{R}^{m\times n}$ is a tall matrix, $m>n$, then this means $rank(A)=n$, that is, the columns have to be linearly independent, or $A$ as a linear map has to be injective. If $A$ is a wide matrix, $m<n$, then the rows of the matrix have to be independent to give full rank. (/edit)
If full rank is a given, then you are better off simplifying these formulas using a QR decomposition for $A$ resp. $A^T$. There the R factor is square and $Q$ is a narrow tall matrix with the same format as $A$ or $A^T$,
If $A$ is tall, then $A=QR$ and $A^{\oplus}_{left}=R^{-1}Q^T$
If $A$ is wide, then $A^T=QR$, $A=R^TQ^T$, and $A^{\oplus}_{right}=QR^{-T}$.
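A quick numerical check of the tall case with NumPy (random full-rank $A$, `np.linalg.pinv` as the reference; the QR route avoids forming $A^TA$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))        # tall, full column rank (almost surely)

Q, R = np.linalg.qr(A)                 # A = QR, R is 3x3 upper triangular
A_pinv_qr = np.linalg.solve(R, Q.T)    # R^{-1} Q^T, the left pseudoinverse

assert np.allclose(A_pinv_qr, np.linalg.pinv(A))
assert np.allclose(A_pinv_qr @ A, np.eye(3))   # left-inverse property
```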
You only need an SVD if $A$ is suspected to not have the maximal rank for its format. Then a reliable rank estimation is only possible comparing the magnitudes of the singular values of $A$. The difference is $A^{\oplus}$ having a very large number or a zero as a singular value where $A$ has a very small singular value.
Added, since wikipedia is curiously silent about this: Numerically, you first compute or let a library compute the SVD $A=U\Sigma V^T$ where $\Sigma=\operatorname{diag}(\sigma_1,\sigma_2,\dots,\sigma_r)$ is the diagonal matrix of singular values, ordered in decreasing size $\sigma_1\ge \sigma_2\ge\dots\ge \sigma_r$.

Then you estimate the effective rank by looking for the smallest $k$ with, for instance, $\sigma_{k+1}<10^{-8}\sigma_1$ or, as another strategy, $\sigma_{k+1}<10^{-2}\sigma_k$, or a combination of both. The factors defining what is "small enough" are a matter of taste and experience.

With this estimated effective rank $k$ you compute $$\Sigma^{\oplus}=\operatorname{diag}(\sigma_1^{-1},\sigma_2^{-1},\dots,\sigma_k^{-1},0,\dots,0)$$ and $$A^{\oplus}=V\Sigma^{\oplus}U^T.$$

Note how the singular values in $\Sigma^{\oplus}$ and thus $A^{\oplus}$ are increasing in this form; that is, truncating at the effective rank is a very sensitive operation, and differences in this estimation lead to wildly varying results for the pseudo-inverse.
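This recipe, applied with NumPy to a deliberately rank-deficient $5\times 4$ matrix of rank 2:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))   # rank 2 by construction

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))           # estimated effective rank
s_inv = np.zeros_like(s)
s_inv[:k] = 1.0 / s[:k]                    # invert only the kept singular values
A_pinv = Vt.T @ np.diag(s_inv) @ U.T

assert k == 2
assert np.allclose(A_pinv, np.linalg.pinv(A, rcond=1e-8))   # same cutoff, same result
```

Changing the cutoff so that a near-zero singular value is kept would inject a huge $1/\sigma$ term, which is exactly the sensitivity described above.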
• 1-> what do you mean by different matrix formats ?? 2-> What if the matrix doesn't have full rank? What happens, do I have to use QR decomposition as you suggested? 3-> ok, but doesn't SVD help me calculate the pseudo inverse? at least for every case 4-> can't I just use the first formula, the A-left formula? And if not, why? 5-> sorry, I am not a math guru, I understand the basics of matrices, I also understand how to calculate the inverse, but this is a little bit tricky – Master345 Jan 1 '14 at 22:41
• 1) more rows than columns->tall, more columns than rows->wide -- 2) If you know the structural reason for the rank deficit, you better treat it by reducing redundancy in the problem formulation. If the rank deficit happens dynamically, you better use the better analytic power of the SVD. – LutzL Jan 1 '14 at 23:01
• -- 3) Yes, but SVD is still an iterative process with time $O(n^3|\log ε|)$ to reach a certain precision $ε$. The initial reduction to bidiagonal form corresponds to 2 QR decompositions. Meaning, if QR works, it is much faster. -- 4) You can, but for large matrices, the matrix product makes the condition quadratically worse. -- 5) try out Householder reflectors and Givens rotations, the mathematics is not that bad, keeping track of the indices for the positions to modify however... Anyway, the proposed library has all these fancy decompositions, so you only need to put the results together. – LutzL Jan 1 '14 at 23:02
• ** 3-> ** so, correct me if i understand, First formula A-left > SVD > QR decomposition ** 4-> ** so First formula A-left might work for 3x3, 4x4, but the higher you go, you might find bigger errors, is that right? ** 5-> ** i know they are better algorithms, put for starters let me try First formula A-left to see if it works, i can test it here calculator-fx.com/calculator/linear-algebra/… ** extra-> ** thank you very much for answering, i had never been into mathematics this deep and coding before ... – Master345 Jan 2 '14 at 0:03
• * 3) The computational effort for the 'first formula' is less than that for the SVD, numerical stability is worse. And, as said, if it works, QR is faster than SVD, the numerical results will be about the same. * 4) Yes, it will also work for 10x10, but for 1000x1000 I would expect problems. * 5) By all means, try out all methods. – LutzL Jan 2 '14 at 7:22
While the SVD yields a "clean" way to construct the pseudoinverse, it is sometimes an "overkill" in terms of efficiency.
The Moore-Penrose pseudoinverse can be seen as follows: Let $\ell:\mathbb R^n\rightarrow\mathbb R^m$ be a linear map. Then $\ell$ induces an isomorphism $\ell':{\rm Ker}(\ell)^\perp\rightarrow {\rm Im}(\ell)$. Then the Moore-Penrose pseudoinverse $\ell^+:\mathbb R^m\rightarrow \mathbb R^n$ can be described as follows.
$$\ell^+(x)=\ell'^{-1}(\Pi(x)),$$ where $\Pi$ is the orthogonal projection of $x$ on ${\rm Im}(\ell)$.
In other words, what you need is to compute orthonormal bases of ${\rm Im}(\ell)$ and of ${\rm Ker}(\ell)^\perp$ to construct the Moore-Penrose pseudoinverse.
For an algorithm, you may be interested by the iterative method here
edit: roughly speaking, one way to see why the SVD might be an "overkill" is that if $A$ is a matrix with rational coefficients, then $A^+$ also have rational coefficients (see e.g. this paper), while the entries of the SVD are algebraic numbers.
• 1-> what do you mean about "overkill"? like is process consuming? first i just want to see this working, then i will think about optimisations 2-> I don't quite understand this formula ℓ+(x)=ℓ′−1(Π(x)) i'm used to work with formulas like uppwards 3-> ok, but don't SVD help me calculating the pseudo inverse? at least for every case 4-> sorry, i am not a math guru, i understand the baiscs of matrix, i also understand how to calculate the inverse, but this is a little bit tricky 5-> can't i just use the first formula from A-left formula? And if not, why? – Master345 Jan 1 '14 at 22:49
• 1-> By "overkill", I wanted to say that the SVD is a very powerful tool, maybe too powerful for computing the Moore-Penrose pseudoinverse, but there is indeed a very nice relation, see en.wikipedia.org/wiki/… 2->It is just a notation: a matrix actually represents a linear map (the map $u\mapsto A\cdot u$) and reasoning in terms of maps is often convenient. 3->Yes, but the SVD involves computing with algebraic numbers, and often makes exact computations impossible. – emeu Jan 2 '14 at 4:07
• 5-> A-left only works if the linear map $u\mapsto A\cdot u$ is injective. A-right only works if the map is surjective. (note that if $A$ is a square invertible matrix, then both formulas give the same result: $A^{-1}$) – emeu Jan 2 '14 at 4:08
• 2-> for me, a matrix is just a table that i can referr to its elements like A[i][j], thats the way i'm seeing it, because i worked a lot with arrays 5-> yes, i observed that, A-left gives same result as the inverse, and i cannot understand a something, given an array (1,2,3 - 4,5,6 - 7 8 9) that has the determinant 0 gives me a NULL matrix, see here (calculator-fx.com/calculator/linear-algebra/matrix-determinant) ... why is that? calculator-fx.com/calculator/linear-algebra/… gives me the right answer. – Master345 Jan 2 '14 at 18:27
• 2-> Yes, seeing a matrix as a table is also a very good way to think of it, especially for computational purposes. However, it is often a good idea (if you want to study this topic deeper) to have both representations in mind: as a table and as a linear map. 5-> A-left only works if the associated linear map is injective, which is not the case if the determinant of a square matrix $A$ is zero. Consequently, in that case, the A-left formula does not work (but the Moore-Penrose pseudoinverse is still well defined and there are other ways to compute it) – emeu Jan 3 '14 at 7:32
https://mathshistory.st-andrews.ac.uk/Biographies/Crank/

# John Crank
### Quick Info
Born
6 February 1916
Hindley, Lancashire, England
Died
3 October 2006
Ruislip, London, England
Summary
John Crank was an English numerical analyst who worked on the heat equation.
### Biography
John Crank was a student of Lawrence Bragg and Douglas Hartree at Manchester University (1934-38), where he was awarded the degrees of B.Sc. and M.Sc. and later (1953) D.Sc. After war work on ballistics he was a mathematical physicist at Courtaulds Fundamental Research Laboratory from 1945 to 1957 and professor of mathematics at Brunel University (initially Brunel College in Acton) from 1957 to 1981. His main work was on the numerical solution of partial differential equations and, in particular, the solution of heat-conduction problems. In the 1940s such calculations were carried out on simple mechanical desk machines. Crank is quoted as saying that to "burn a piece of wood" numerically then could take a week.
John Crank is best known for his joint work with Phyllis Nicolson on the heat equation, where a continuous solution $u(x, t)$ is required which satisfies the second order partial differential equation
$u_{t} - u_{xx} = 0$
for $t > 0$, subject to an initial condition of the form $u(x, 0) = f (x)$ for all real $x$. They considered numerical methods which find an approximate solution on a grid of values of $x$ and $t$, replacing $u_{t}(x, t)$ and $u_{xx}(x, t)$ by finite difference approximations. One of the simplest such replacements was proposed by L F Richardson in 1910. Richardson's method yielded a numerical solution which was very easy to compute, but alas was numerically unstable and thus useless. The instability was not recognised until lengthy numerical computations were carried out by Crank, Nicolson and others. Crank and Nicolson's method, which is numerically stable, requires the solution of a very simple system of linear equations (a tridiagonal system) at each time level.
### References
1. J Crank, Free and moving boundary problems (Oxford, 1987).
2. J Crank, Mathematics and industry (Oxford, 1962).
3. J Crank, The mathematics of diffusion (Oxford, 1956).
4. J Crank, The Differential Analyser (London, 1947).
5. J Crank and P Nicolson, A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type, Proc. Cambridge Philos. Soc. 43 (1947), 50-67. [Re-published in: John Crank 80th birthday special issue, Adv. Comput. Math. 6 (1997), 207-226]
https://www.piping-designer.com/index.php/properties/634-engineering-mathematics-science-nomenclature/1956-formula-symbols-t

# Formula Symbols - T
Written by Jerry Ratzlaff. Posted in Nomenclature & Symbols for Engineering, Mathematics, and Science
## Formula Symbols
A - B - C - D - E - F - G - H - I - J - K - L - M - N - O - P - Q - R - S - T - U - V - W - X - Y - Z
## T
Symbol Greek Symbol Definition US Metric Value
$$a_t$$ - tangential acceleration $$ft\;/\;sec^2$$ $$m\;/\;s^2$$ -
$$F_t$$ - tangential force $$lbf$$ $$N$$ -
$$v_t$$ - tangential velocity $$ft\;/\;sec$$ $$m\;/\;s$$ -
$$Ta$$ - Taylor number dimensionless dimensionless -
$$T$$ - temperature $$^\circ F$$ $$C$$ -
$$\alpha$$, $$\;\beta$$ alpha, beta temperature coefficient $$1 \;/\; ^\circ R$$ $$1 \;/\; K$$ -
$$y$$ - temperature derating factor - - -
$$T_d$$, $$\;\Delta T$$, $$\;TD$$ Delta temperature differential $$^\circ F$$ $$C$$ -
$$\nabla T$$ nabla temperature gradient - $$Km^{-1}$$ -
$$T$$ - temperature of an ideal gas $$^\circ F$$ $$C$$ -
$$\large{ \partial T }$$ partial temperature rate of change - - -
$$V_{temp}$$ - temporary specific volume variable $$ft^3\;/\;lbm$$ $$m^3\;/\;kg$$ -
$$T$$ - tensile force $$lb \;or\; kip$$ $$N \;or\; kN$$ -
$$T_{sat}$$ - temperature saturation point $$^\circ F$$ $$C$$ -
$$s$$ - tensile strength $$psi$$ $$kg\;/\;cm^2$$ -
$$f_t$$ - tensile strength of concrete $$lb\;/\;in^2 \;or\; psi$$ $$Pa$$ -
$$\sigma$$ sigma tensile stress - - -
$$T$$ - tension $$lbf$$ $$N$$ -
$$\sigma$$ sigma tension coefficient - - -
$$F_t$$ - tension force - - -
$$v_t$$ - terminal velocity - - -
$$C_t$$ - thermal capacitance $$Btu\;/\; ^\circ F$$ $$J\;/\; K$$ -
$$C$$ - thermal conductance of air space $$Btu\;/\;ft^2-hr - ^\circ F$$ $$W\;/\;m^2 - C$$ -
$$Q$$ - thermal conduction - - -
$$p$$ - thermal conduction rate - - -
$$K$$, $$\;k$$ - thermal conductivity $$Btu\;/\;hr-ft^2- ^\circ F$$ $$W\;/\;m^2 - K$$ -
$$\lambda$$ lambda thermal conductivity coefficient $$Btu-ft\;/\;hr-ft^2-^\circ F$$ $$W\;/\;m - C$$ -
$$k_t$$ - thermal conductivity constant $$Btu\;/\;hr-ft^2- ^\circ F$$ $$W\;/\;m^2 - K$$ -
$$\lambda_{ik}$$ lambda thermal conductivity tensor - - -
$$h_c$$ - thermal contact conductance coefficient - - -
$$p$$ - thermal current - - -
$$D_{td}$$ - thermal diffusion coefficient - - -
$$\alpha_t$$ alpha thermal diffusion factor - - -
$$k_t$$ - thermal diffusion ratio - - -
$$\alpha$$ alpha thermal diffusivity $$ft^2\;/\;sec$$ $$m^2 \;/\;s$$ -
$$T$$, $$\;\eta_{th}$$ eta thermal efficiency - - -
$$Q$$ - thermal energy $$Btu$$ $$W$$ -
$$\alpha$$, $$\;\alpha_c$$ alpha thermal expansion coefficient $$1 \;/\; ^\circ K$$ $$1 \;/\; ^\circ K$$ -
$$\alpha$$ alpha thermal expansivity - - -
$$l$$ - thermal intensity - $$Wm^{-2}$$ -
$$p$$ - thermal power transfer - - -
$$R$$, $$\;R_t$$ - thermal resistance $$hr-^\circ F\;/\;Btu$$ $$K\;/\;W$$ -
$$\epsilon_t$$ epsilon thermal strain - - -
$$\sigma_T$$ sigma thermal stress - - -
$$\tau$$ tau thermal time constant $$hr$$ $$s$$ -
$$T$$, $$\;\tau$$ tau thermodynamic temperature - - -
$$d$$, $$\;t$$, $$\;\delta$$ delta thickness $$in \;or\; ft$$ $$mm \;or\; m$$ -
$$t_f$$ - thickness of the flange of a steel beam cross-section $$in$$ $$mm$$ -
$$t_w$$ - thickness of the web of a steel beam cross-section $$in$$ $$mm$$ -
$$\mu$$ mu Thomson coefficient - - -
$$\sigma_e$$ sigma Thomson cross-section constant constant $$6.652\;458\;7321\;x\;10^{-29}\;m^2$$
$$T$$ - throat size of a weld $$in$$ $$mm$$ -
$$F$$ - thrust $$lbf$$ $$N$$ -
$$F$$ - thrust force $$lbf$$ $$N$$ -
$$t$$ - time $$sec$$ $$s$$ -
$$\tau$$ tau time constant $$sec$$ $$s$$ -
$$dt$$, $$\;\Delta t$$ Delta time differential $$sec$$ $$s$$ -
$$t_f$$ - time of flight $$sec$$ $$s$$ -
$$\Delta t$$ Delta time interval $$sec$$ $$s$$ -
$$t_p$$ - time of observation $$sec$$ $$s$$ -
$$t_c$$ - time of relaxation $$sec$$ $$s$$ -
$$T$$ - time scale - - -
$$T$$, $$\;\tau$$ tau torque $$lbf-ft$$ $$N-m$$ -
$$T_s$$ - torque speed $$lbf-ft \;/\; sec$$ $$N-m \;/\; s$$ -
$$T$$ - torsion $$lbf-ft \;/\; sec$$ $$N-m \;/\; s$$ -
$$K$$ Kappa torsion coefficient - $$N-m\;/\;rad$$ -
$$J$$ - torsional constant $$deg$$ $$rad$$ -
$$K$$ - torsional stiffness constant - - -
$$k_r$$ - torsional spring constant $$lbf-ft\;/\;rad$$ $$N-m\;/\;rad$$ -
$$n$$ - total - - -
$$J$$, $$\;j_i$$ - total angular momentum - - -
$$\omega_t$$ omega total angular velocity $$ft \;/\; sec$$ $$rad \;/\; s$$ -
$$P$$ - total concentrated load $$lb$$ $$N$$ -
$$h_d$$ total discharge head $$ft$$ $$m$$ -
$$TDH$$ - total dynamic head $$ft$$ $$m$$ -
$$TDS$$ - total dissolved solids $$ppm$$ $$mg\;/\;L$$ -
$$Q_n$$ - total energy $$Btu$$ $$kJ$$ -
$$h_t$$ - total head $$ft$$ $$m$$ -
$$q_t$$ total heat $$Btu$$ $$kJ$$ -
$$W$$ - total load from a uniform distribution $$lb$$ $$N$$ -
$$p_t$$ - total pressure $$in\; wg$$ $$Pa$$ -
$$V$$ - total shear force -
$$T_s$$ - total stagnation temperature $$^\circ F$$ $$C$$ -
$$U$$ - total strain energy - - -
$$h_s$$ total suction head $$ft$$ $$m$$ -
$$T$$ - total term - - -
$$t_t$$ - total time $$sec$$ $$s$$ -
$$V_t$$ - total velocity $$ft \;/\; sec$$ $$m \;/\; s$$ -
$$V_t$$ - total volume of soil $$ft^3$$ $$m^3$$ -
$$W_t$$ - total weight of soil $$lbf$$ $$N$$ -
$$W_t$$ - total work $$lbf-ft$$ $$kW$$ -
$$\dot {t}$$ - transfer rate - - -
$$TU$$ - transfer units - - -
$$KE_t$$ - translational kinetic energy - - -
$$\gamma$$ gamma transmissivity - - -
$$T$$, $$\;\tau$$ tau transmittance dimensionless dimensionless -
$$T$$ - transmitted torque $$lbf-ft$$ $$N-m$$ -
$$\tau$$ tau transmission coefficient - - -
$$\Delta x$$ Delta transverse displacement - - -
$$\epsilon_t$$ epsilon transverse strain (laterial strain) $$ft$$ $$m$$ -
$$T$$ - travel $$in$$ $$mm$$ -
$$\epsilon$$ epsilon true strain $$in\;/\;in$$ $$m\;/\;m$$ -
$$\sigma$$ sigma true stress $$lbf\;/\;in^2$$ $$MPa$$ -
$$Pr_t$$ - Turbulent Prandtl number dimensionless dimensionless -
$$2D$$ - two-dimension - - -
https://link.springer.com/chapter/10.1007%2F978-3-642-25947-0_1

# String Theory 101
Chapter
Part of the Lecture Notes in Physics book series (LNP, volume 851)
## Abstract
In these lecture notes we will give a basic description of string theory aimed at students with no previous experience. Most of the discussion focuses on the bosonic string quantized using the so-called old covariant formulation, although we will also briefly introduce light cone quantization. In the final lecture we will discuss how supersymmetry can be included on the worldsheet, leading to type IIA, type IIB and heterotic superstrings.
http://math.stackexchange.com/questions/206106/calculate-the-expectation-of-a-modified-compound-poisson-process

# calculate the expectation of a modified compound poisson process
Let $t\in [0,a]$ and let $X_t:=\sum_{i=1}^{N_t}Y_i$ be a compound Poisson process, i.e. $N_t$ is a Poisson process with parameter $\lambda>0$ and the $Y_i$ are iid with distribution $\mu$. Now let $Z_t:=\exp{(\sum_{i=1}^{N_t}f(Y_i)+(\lambda-\eta)t)}$, where $\eta>0$ and $f(x):=\log{(\frac{\eta}{\lambda}h(x))}$. We assume that there is a measure $\hat{\mu}$, absolutely continuous with respect to $\mu$, such that $h$ is the Radon-Nikodym derivative, i.e. $h(x):=\frac{d\hat{\mu}}{d\mu}(x)$. I want to prove $E[Z_t]=1$. What I did so far:
$$E[Z_t]=E[\exp{(f(Y_1))}\cdots\exp{(f(Y_{N_t}))}\exp{((\lambda-\eta)t)}]=\exp{((\lambda-\eta)t)}E[\exp{(f(Y_1))}\cdots\exp{(f(Y_{N_t}))}]$$
Now $\exp{(f(Y_j))}=\exp{(\log(\frac{\eta}{\lambda}h(Y_j)))}=\frac{\eta}{\lambda}h(Y_j)$, so first, since the $Y_j$ are identically distributed, we have
$$E[\exp{(f(Y_1))}\cdots\exp{(f(Y_{N_t}))}]=(\frac{\eta}{\lambda})^{N_t}E[h(Y_1)^{N_t}]$$
and then by independence
$$(\frac{\eta}{\lambda})^{N_t}E[h(Y_1)^{N_t}]=(\frac{\eta}{\lambda})^{N_t}E[h(Y_1)]^{N_t}=(\frac{\eta}{\lambda}E[h(Y_1)])^{N_t}$$
We end up with $E[Z_t]=\exp{((\lambda-\eta)t)}(\frac{\eta}{\lambda}E[h(Y_1)])^{N_t}$. If this should be one, we must have
$$(\frac{\eta}{\lambda}E[h(Y_1)])^{N_t}=\exp{(-(\lambda-\eta)t)}$$
Here I'm stuck. I guess I have to use that $h$ is the Radon-Nikodym derivative and the distribution of $N_t$. Or did I make a mistake so far? Some help would be appreciated!
-
@did Thank you for the hint! You're right, $\hat{\mu}$ is also a probability measure. However could you give me a hint how to calculate $E[(\frac{\eta}{\lambda}h(Y_1))^{N_t}]$, this is what I actually have to calculate, right? – user20869 Oct 2 '12 at 17:56
Actually, I'm done, if $E[h(Y_j)]=1$, but I do not see this. Of course $E[h]=1$, but why the composition? – user20869 Oct 2 '12 at 18:21
Thank you for your help – user20869 Oct 2 '12 at 19:32
Since $N_t$ is a random variable, what you computed so far is $\mathbb E(Z_t\mid N_t)$, not $\mathbb E(Z_t)$. To get the value of $\mathbb E(Z_t)$, consider $\mathbb E(\mathbb E(Z_t\mid N_t))$. You will probably end up being forced to use a hypothesis which is missing from your post, namely that $\hat\mu$ is a probability measure.
By definition, if $\hat\mu$ is a probability measure, then $\mathbb E(h(Y))=\displaystyle\int h(y)d\mu(y)=\int d\hat\mu(y)=1$.
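One can sanity-check $\mathbb E(Z_t)=1$ by simulation. Below, the concrete choices are illustrative only: $\mu$ is Exp(1), $\hat\mu$ is Exp(2), so $h(y)=2e^{-y}$ is a genuine Radon-Nikodym derivative of a probability measure.

```python
import numpy as np

lam, eta, t = 2.0, 1.0, 1.0
h = lambda y: 2.0 * np.exp(-y)            # d mu_hat / d mu for Exp(2) vs Exp(1)

rng = np.random.default_rng(42)
n_paths = 200_000
N = rng.poisson(lam * t, size=n_paths)    # N_t for each path
Y = rng.exponential(1.0, size=N.sum())    # all jumps Y_i, distribution mu
path = np.repeat(np.arange(n_paths), N)   # which path each jump belongs to

log_factors = np.log((eta / lam) * h(Y))  # f(Y_i)
S = np.bincount(path, weights=log_factors, minlength=n_paths)
Z = np.exp(S + (lam - eta) * t)

print(Z.mean())   # close to 1
```

Averaging over paths (i.e. over $N_t$ as well as the $Y_i$) is precisely the tower-property step $\mathbb E(\mathbb E(Z_t\mid N_t))$ that the question was missing.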
https://canovasjm.netlify.com/2018/10/29/when-does-the-f-test-reduce-to-t-test/ | # When does the F-test reduce to a t-test?
If you have taken a regression or design of experiments class (or both), you probably have come across the following problem (or a similar one):
“Show that the sum-of-squares decomposition and F-statistic reduces to the usual equal-variance (pooled) two sample t-test in the case of $$a = 2$$ treatments - with the realization that an $$F$$ statistic with $$1$$ (numerator) and $$k$$ (denominator) degrees of freedom is equivalent to a $$t$$ statistic with $$k$$ degrees of freedom, viz, $$F_{1, k} = t_{k}^2$$.”
The interesting thing about this proof is that it is really hard to find (I spent a reasonable amount of time googling and looking in books, with no success). More interesting still, when this proof is mentioned it is usually followed by one of the most annoying phrases in a math textbook:
• is easy to prove…
• is not difficult to show…
• this easy/straightforward/simple proof is left to the reader…
Despite all of these adjectives, $$\color{red} {\text{it is hard to find the actual proof}}$$. The humble purpose of this blog post is to get rid of the vanity, work the proof, and let you judge whether it is easy/straightforward/simple (or not).
Finally, let me point out that this blog post assumes you are somewhat familiar with the F-test, the t-test, and notation frequently used in design of experiments like $$\bar{y}_{..}$$, $$\bar{y}_{i.}$$, or $$\bar{y}_{.j}$$.
## Bye-bye words, hello formulas
Let’s start by putting all the wording into formulas:
We have to prove that
$F_{a-1, N-a} = \frac{MST}{MSE} = \frac{\frac{SST}{a-1}}{\frac{SSE}{N-a}} \tag{1}$
reduces to
$t_{k}^2 = \frac{(\bar{y}_{1.} - \bar{y}_{2.})^2}{S_{p}^2(\frac{1}{n_{1}} + \frac{1}{n_{2}})} \tag{2}$
$$\color{red} {\text{When a = 2}}$$ (this is key)
## Notation
| Symbol | Description |
|--------|-------------|
| SSE | Sum of Squares due to Error |
| SST | Sum of Squares of Treatment |
| MSE | Mean Sum of squares Error |
| MST | Mean Sum of squares Treatment |
| a | Number of treatments |
| $$n_{1}$$ | Number of observations in treatment 1 |
| $$n_{2}$$ | Number of observations in treatment 2 |
| N | Total number of observations |
| $$\bar{y}_{i.}$$ | Mean of treatment $$i$$ |
| $$\bar{y}_{..}$$ | Global mean |
| $$k = N - a$$ | Degrees of freedom of the denominator of F |
Now that we have the formulas, we will work the following:
1. Denominator of equation (1)
2. Numerator of equation (1)
2.a. Part a
2.b. Part b
2.c. Part c
3. Put all together
## 1. Denominator of equation (1)
When $$a = 2$$ the denominator of expression $$(1)$$ is:
$MSE = \frac{SSE}{N-2} = \frac{\sum_{j=1}^{n_1}{(y_{1j} - \bar{y}_{1.})^2} + \sum_{j=1}^{n_2}{(y_{2j} - \bar{y}_{2.})^2}}{N-2} \tag{3}$
Recalling that the formula for the sample variance estimator is, $S_{i}^2 = \frac{\sum_{j=1}^{n_i}(y_{ij} - \bar{y}_{i.})^2}{n_{i} - 1}$ we can multiply and divide the terms in the numerator in $$(3)$$ by $$(n_{i} - 1)$$ and get $$(4)$$. Don’t forget that in this case $$N = n_{1} + n_{2}$$
$\frac{SSE}{N-2} = \frac{(n_{1} - 1) S_{1}^2 + (n_{2} - 1) S_{2}^2}{n_{1} + n_{2} - 2} = S_{p}^2 \tag{4}$
$$S_{p}^2$$ is called the pooled variance estimator.
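The original post has no code, but equation $$(4)$$ is easy to check numerically. Here is a short Python sketch of mine (the observations are made-up values, not real data) verifying that $$SSE/(N-2)$$ coincides with the pooled variance estimator:

```python
# Sanity check of equation (4): SSE/(N-2) equals the pooled variance S_p^2.
# The observations below are arbitrary made-up values.
y1 = [4.1, 5.3, 3.8, 4.9, 5.0]   # treatment 1
y2 = [6.2, 5.7, 6.8, 6.0]        # treatment 2
n1, n2 = len(y1), len(y2)
N = n1 + n2
m1, m2 = sum(y1) / n1, sum(y2) / n2

# Left-hand side: SSE divided by its degrees of freedom
sse = sum((y - m1) ** 2 for y in y1) + sum((y - m2) ** 2 for y in y2)
mse = sse / (N - 2)

# Right-hand side: weighted combination of the two sample variances
s1_sq = sum((y - m1) ** 2 for y in y1) / (n1 - 1)
s2_sq = sum((y - m2) ** 2 for y in y2) / (n2 - 1)
sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)

assert abs(mse - sp_sq) < 1e-12
```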
## 2. Numerator of equation (1)
When $$a = 2$$ the numerator of expression $$(1)$$ is:
$\frac{SST}{2-1} = SST$
and the general expression for SST reduces to $$SST = \sum_{1}^2 n_{i} (\bar{y}_{i.} - \bar{y}_{..})^2$$ . The next step is to expand the sum as follows:
$\begin{eqnarray} SST & = & \sum_{1}^2 n_{i} (\bar{y}_{i.} - \bar{y}_{..})^2 \\ & = & n_{1} (\bar{y}_{1.} - \bar{y}_{..})^2 + n_{2} (\bar{y}_{2.} - \bar{y}_{..})^2 \\ \end{eqnarray} \tag{5}$
$$\bar{y}_{..}$$ is called the global mean and we are going to write it in a different way. The new way is:
$\bar{y}_{..} = \frac{n_{1} \bar{y}_{1.} + n_{2} \bar{y}_{2.}}{N} \tag{6}$
Next, replace (6) in formula (5) and re-write SST as:
$SST = \underbrace{n_1 \big[ \bar{y}_{1.} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2}_{\text{Part a}} + \underbrace{n_2 \big[ \bar{y}_{2.} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2}_{\text{Part b}} \tag{7}$
The next step is to find alternative expressions for Part a and Part b.
### 2.a. Part a
$\text{Part a} = n_1 \big[ \bar{y}_{1.} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2$
Multiply and divide the term with $$\bar{y}_{1.}$$ by $$N$$
$n_1 \big[ \frac{N \bar{y}_{1.}}{N} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2$
$$N$$ is common denominator
$n_1 \big[\frac{N \bar{y}_{1.} - n_1 \bar{y}_{1.} - n_2 \bar{y}_{2.}}{N} \big]^2$
$$\bar{y}_{1.}$$ is common factor of $$N$$ and $$n_1$$
$n_1 \big[\frac{(N - n_1) \bar{y}_{1.} - n_2 \bar{y}_{2.}}{N} \big]^2$
Replace $$(N - n_{1}) = n_{2}$$
$n_1 \big[\frac{n_2 \bar{y}_{1.} - n_2 \bar{y}_{2.}}{N} \big]^2$
Now $$n_{2}$$ is common factor of $$\bar{y}_{1.}$$ and $$\bar{y}_{2.}$$
$n_1 \big[\frac{n_2 (\bar{y}_{1.} - \bar{y}_{2.})}{N} \big]^2$
Take $$n_{2}$$ and $$N$$ out of the square
$\text{Part a} = \frac{n_{1} n_{2}^2}{N^2} (\bar{y}_{1.} - \bar{y}_{2.})^2$
### 2.b. Part b
$\text{Part b} = n_2 \big[ \bar{y}_{2.} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2$
Multiply and divide the term with $$\bar{y}_{2.}$$ by $$N$$
$n_2 \big[ \frac{N \bar{y}_{2.}}{N} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2$
$$N$$ is common denominator
$n_2 \big[\frac{N \bar{y}_{2.} - n_1 \bar{y}_{1.} - n_2 \bar{y}_{2.}}{N} \big]^2$
$$\bar{y}_{2.}$$ is common factor of $$N$$ and $$n_2$$
$n_2 \big[\frac{(N - n_2) \bar{y}_{2.} - n_1 \bar{y}_{1.}}{N} \big]^2$
Replace $$(N - n_{2}) = n_{1}$$
$n_2 \big[\frac{n_1 \bar{y}_{2.} - n_1 \bar{y}_{1.}}{N} \big]^2$
Now $$n_{1}$$ is common factor of $$\bar{y}_{1.}$$ and $$\bar{y}_{2.}$$
$n_2 \big[\frac{n_1 (\bar{y}_{2.} - \bar{y}_{1.})}{N} \big]^2$
Take $$n_{1}$$ and $$N$$ out of the square
$\text{Part b} = \frac{n_{2} n_{1}^2}{N^2} (\bar{y}_{2.} - \bar{y}_{1.})^2$
Now that we have Part a and Part b we are going to go back to equation $$(7)$$ and replace them:
$SST = \frac{n_{1} n_{2}^2}{N^2} (\bar{y}_{1.} - \bar{y}_{2.})^2 + \frac{n_{2} n_{1}^2}{N^2} (\bar{y}_{2.} - \bar{y}_{1.})^2 \tag{8}$
Taking into account that $$(\bar{y}_{1.} - \bar{y}_{2.})^2 = (\bar{y}_{2.} - \bar{y}_{1.})^2$$, we can re-write equation $$(8)$$ as $$(9)$$:
$SST = \underbrace{\big[ \frac{n_{1} n_{2}^2}{N^2} + \frac{n_{2} n_{1}^2}{N^2} \big]}_{\text{Part c}} (\bar{y}_{1.} - \bar{y}_{2.})^2 \tag{9}$
This leaves us with Part c, which we work out next.
### 2.c. Part c
$\text{Part c} = \frac{n_{1} n_{2}^2}{N^2} + \frac{n_{2} n_{1}^2}{N^2}$
$$N^2$$ is the common denominator and each summand has an $$n_{1} n_{2}$$ factor that we can factor out. Then we have:
$\frac{n_{1} n_{2} (n_{1} + n_{2})}{N^2}$
Replace $$N = n_{1} + n_{2}$$
$\frac{n_{1} n_{2} N}{N^2}$
Simplify $$N$$
$\frac{n_{1} n_{2}}{N}$
Re-write the fraction
$\frac{1}{\frac{N}{n_{1} n_{2}}}$
Replace $$N = n_{1} + n_{2}$$
$\frac{1}{\frac{n_{1} + n_{2}}{n_{1} n_{2}}} = \frac{1}{\frac{1}{n_{1}} + \frac{1}{n_{2}}}$
And we have
$\text{Part c} = \frac{1}{\frac{1}{n_{1}} + \frac{1}{n_{2}}}$
Finally, we have to replace this expression for Part c in $$(9)$$ and re-write SST as:
$SST = \frac{1}{\frac{1}{n_{1}} + \frac{1}{n_{2}}} (\bar{y}_{1.} - \bar{y}_{2.})^2$
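As a sanity check of this reduction (my addition, not part of the original post; the data are made-up), the following Python snippet confirms that the definition of SST for $$a = 2$$ matches the closed form we just derived:

```python
# Check that n1*(m1 - grand)^2 + n2*(m2 - grand)^2 equals
# (m1 - m2)^2 / (1/n1 + 1/n2), as derived in the text.
y1 = [4.1, 5.3, 3.8, 4.9, 5.0]   # treatment 1 (arbitrary values)
y2 = [6.2, 5.7, 6.8, 6.0]        # treatment 2 (arbitrary values)
n1, n2 = len(y1), len(y2)
m1, m2 = sum(y1) / n1, sum(y2) / n2
grand = (sum(y1) + sum(y2)) / (n1 + n2)   # global mean, equation (6)

sst_def = n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2
sst_reduced = (m1 - m2) ** 2 / (1 / n1 + 1 / n2)
assert abs(sst_def - sst_reduced) < 1e-12
```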
## 3. Put all together
With the previous steps we have shown that, $$\color{red} {\text{when a = 2}}$$, we have:
$\frac{SST}{2-1} = \frac{(\bar{y}_{1.} - \bar{y}_{2.})^2}{\frac{1}{n_{1}} + \frac{1}{n_{2}}}$
and
$\frac{SSE}{N-2} = S_{p}^2$
The ratio of these two expressions, namely the F-statistic, is then:
$F_{1, k} = \frac{\frac{SST}{2-1}}{\frac{SSE}{N-2}} = \frac{(\bar{y}_{1.} - \bar{y}_{2.})^2}{S_{p}^2 \big( \frac{1}{n_{1}} + \frac{1}{n_{2}} \big)} = t_{k}^2$
And this concludes our proof. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9682299494743347, "perplexity": 745.1103519629996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144979.91/warc/CC-MAIN-20200220131529-20200220161529-00211.warc.gz"} |
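To close, here is an end-to-end numerical check in Python (my own sketch, with arbitrary made-up data) that the F-statistic computed from the ANOVA decomposition equals the square of the pooled two-sample t-statistic when $$a = 2$$:

```python
# End-to-end check that F = t^2 when a = 2, using only the formulas above.
y1 = [4.1, 5.3, 3.8, 4.9, 5.0]   # treatment 1 (made-up values)
y2 = [6.2, 5.7, 6.8, 6.0]        # treatment 2 (made-up values)
n1, n2 = len(y1), len(y2)
N = n1 + n2
m1, m2 = sum(y1) / n1, sum(y2) / n2
grand = (sum(y1) + sum(y2)) / N

# F-statistic from the sum-of-squares decomposition (a = 2, so a - 1 = 1)
sst = n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2
sse = sum((y - m1) ** 2 for y in y1) + sum((y - m2) ** 2 for y in y2)
F = (sst / 1) / (sse / (N - 2))

# Squared pooled two-sample t-statistic, equation (2)
sp_sq = sse / (N - 2)
t = (m1 - m2) / (sp_sq * (1 / n1 + 1 / n2)) ** 0.5

assert abs(F - t * t) < 1e-10
```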
https://math.stackexchange.com/questions/2109874/do-there-exist-proper-classes-that-arent-too-big | # Do there Exist Proper Classes that aren't “Too Big”
Some proper classes are "too big" to be sets in the sense that they have a subclass that can be put in bijection with $\alpha$ for every cardinal $\alpha$. It is implied in this post that every proper class is "too big" to be a set in this sense, however I have been unable to prove it. It's true if every two proper classes are in bijection, but it's consistent with ZFC for there to be a pair of non-bijective classes.
So, is the following true in ZFC:
For all proper classes, $C$, and $\alpha\in\mathbf{Card}$, $\exists S\subset C$ such that $|S|=\alpha$?
If not, is there something reasonably similar that preserves the intuition about classes that are "too big to be sets"?
$\newcommand{\Ord}{\operatorname{\mathbf{Ord}}}$Yes, this is true. First, let us prove this if $C$ is a subclass of the ordinals. By transfinite recursion, we can define a function $f:C\to \Ord$ such that for each $c\in C$, $f(c)$ is the least ordinal greater than $f(d)$ for all $d\in C$ such that $d<c$. The image of $f$ is a (not necessarily proper) initial segment of $\Ord$: that is, it is either an ordinal or it is all of $\Ord$. Since $f$ is injective (it is strictly order-preserving), if the image of $f$ were an ordinal then $C$ would be a set by Replacement (using the inverse of $f$). Thus the image of $f$ is all of $\Ord$. But now it is trivial to find a subset of $C$ of cardinality $\alpha$: just take $f^{-1}(\alpha)$ (which is a set by Replacement).
Now let $C$ be an arbitrary proper class. Let $D\subseteq \Ord$ be the class of all ranks of elements of $C$. If $D$ is bounded, then it is contained in some ordinal $\alpha$, which means $C$ is contained in $V_{\alpha}$ and hence is a set. So $D$ must be unbounded, and is thus a proper class. By the previous paragraph, for any cardinal $\alpha$, there exists $S\subset D$ of cardinality $\alpha$. Now use Choice to pick a single element of $C$ of rank $s$ for each $s\in S$. The set of all these elements is then a subset of $C$ of cardinality $\alpha$.
(To be clear, this is a proof you can give for any particular class $C$ defined by some formula in the language of set theory. Of course, ZFC cannot quantify over classes, and so cannot even state this "for all $C$..." at once.)
• This makes perfect sense to me and is a much better answer than mine. – Steven Stadnicki Jan 23 '17 at 5:56
Here is an alternative proof to that of Eric.
Since $C$ is a proper class, there is no $\alpha$ such that $C\subseteq V_\alpha$. Consider the class $\{C\cap V_\alpha\mid\alpha\in\mathrm{Ord}\}$, if there is an upper bound on cardinality to this class, some $\kappa$, then there cannot be more than $\kappa^+$ different sets in that class, which would make $C$ a set.
Therefore there are arbitrarily large $C\cap V_\alpha$'s, and therefore there is one larger than your given cardinal.
Interestingly, without choice, it is consistent that there is a proper class which does not have any countably infinite subsets, although it is true in ZF that every proper class can be mapped onto the class of ordinals.
• That note at the end is very interesting! – Stella Biderman Jan 23 '17 at 6:09
• Yes. I agree! Although there are caveats to it when working in ZF, but we can formalize it in an NBG-like setting to be accurate as stated. – Asaf Karagila Jan 23 '17 at 6:10
No, for one trivial but important reason: ZFC has no notion of class, so you can't speak of classes at all within ZFC.
Perhaps you want something like NBG, but it's actually an axiom of NBG that things that aren't 'set-sized' are 'universe-sized': the Limitation of Size axiom says that for any class $C$, a set $x$ such that $x=C$ exists iff there is not a bijection between $C$ and $V$. See https://en.wikipedia.org/wiki/Axiom_of_limitation_of_size for more details.
Alternately, as noted by Eric Wofsey below, we can attempt to formalize the question in ZFC as follows:
For every formula $\phi(x)$ in one free variable, $(\neg\exists S\ \forall x: x\in S\leftrightarrow \phi(x)) \implies (\forall \alpha\in CARD\ \exists T \text{ s.t. } |\{t: t\in T\wedge \phi(t)\}|=\alpha)$.
I don't see an immediate proof of this in ZFC, but it's certainly plausible; note that if the RHS of the implication is false (that is, if there are cardinals not in the 'domain' of $\phi$), then the cardinals that are in the domain of $\phi$ are closed downwards, so there must be some cardinal $\beta$ such that $\{\gamma: \exists T \text{ s.t. } |\{t:t\in T\wedge\phi(t)\}|=\gamma\} = \{\gamma: \gamma\lt \beta\}$ - that is, the cardinals in the domain of $\phi$ are exactly the cardinals less than $\beta$.
• But you can prove a metatheorem saying that for any class (i.e., formula with one free variable), ZFC proves that the statement is true for that class. Or you can enlarge the language by adjoining a new unary relation symbol that defines your class. – Eric Wofsey Jan 23 '17 at 5:17
• @EricWofsey Very true, though given that we talk about an $S\subset C$ there's a lot of delicacy in the specific formula, especially in indicating that $C$ isn't 'set-sized'. – Steven Stadnicki Jan 23 '17 at 5:23
• There really isn't anything delicate about it. We can quantify over subsets of $C$ with no difficulty: they are just sets all of whose elements satisfy the formula defining $C$. – Eric Wofsey Jan 23 '17 at 5:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9705545902252197, "perplexity": 131.29878850120292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592261.1/warc/CC-MAIN-20200118052321-20200118080321-00332.warc.gz"} |
https://terrytao.wordpress.com/category/mathematics/mathco/ | You are currently browsing the category archive for the ‘math.CO’ category.
In 1946, Ulam, in response to a theorem of Anning and Erdös, posed the following problem:
Problem 1 (Erdös-Ulam problem) Let ${S \subset {\bf R}^2}$ be a set such that the distance between any two points in ${S}$ is rational. Is it true that ${S}$ cannot be (topologically) dense in ${{\bf R}^2}$?
The paper of Anning and Erdös addressed the case that all the distances between two points in ${S}$ were integer rather than rational in the affirmative.
The Erdös-Ulam problem remains open; it was discussed recently over at Gödel’s lost letter. It is in fact likely (as we shall see below) that the set ${S}$ in the above problem is not only forbidden to be topologically dense, but also cannot be Zariski dense either. If so, then the structure of ${S}$ is quite restricted; it was shown by Solymosi and de Zeeuw that if ${S}$ fails to be Zariski dense, then all but finitely many of the points of ${S}$ must lie on a single line, or a single circle. (Conversely, it is easy to construct examples of dense subsets of a line or circle in which all distances are rational, though in the latter case the square of the radius of the circle must also be rational.)
The main tool of the Solymosi-de Zeeuw analysis was Faltings’ celebrated theorem that every algebraic curve of genus at least two contains only finitely many rational points. The purpose of this post is to observe that an affirmative answer to the full Erdös-Ulam problem similarly follows from the conjectured analogue of Falting’s theorem for surfaces, namely the following conjecture of Bombieri and Lang:
Conjecture 2 (Bombieri-Lang conjecture) Let ${X}$ be a smooth projective irreducible algebraic surface defined over the rationals ${{\bf Q}}$ which is of general type. Then the set ${X({\bf Q})}$ of rational points of ${X}$ is not Zariski dense in ${X}$.
In fact, the Bombieri-Lang conjecture has been made for varieties of arbitrary dimension, and for more general number fields than the rationals, but the above special case of the conjecture is the only one needed for this application. We will review what “general type” means (for smooth projective complex varieties, at least) below the fold.
The Bombieri-Lang conjecture is considered to be extremely difficult, in particular being substantially harder than Faltings’ theorem, which is itself a highly non-trivial result. So this implication should not be viewed as a practical route to resolving the Erdös-Ulam problem unconditionally; rather, it is a demonstration of the power of the Bombieri-Lang conjecture. Still, it was an instructive algebraic geometry exercise for me to carry out the details of this implication, which quickly boils down to verifying that a certain quite explicit algebraic surface is of general type (Theorem 4 below). As I am not an expert in the subject, my computations here will be rather tedious and pedestrian; it is likely that they could be made much slicker by exploiting more of the machinery of modern algebraic geometry, and I would welcome any such streamlining by actual experts in this area. (For similar reasons, there may be more typos and errors than usual in this post; corrections are welcome as always.) My calculations here are based on a similar calculation of van Luijk, who used analogous arguments to show (assuming Bombieri-Lang) that the set of perfect cuboids is not Zariski-dense in its projective parameter space.
We also remark that in a recent paper of Makhul and Shaffaf, the Bombieri-Lang conjecture (or more precisely, a weaker consequence of that conjecture) was used to show that if ${S}$ is a subset of ${{\bf R}^2}$ with rational distances which intersects any line in only finitely many points, then there is a uniform bound on the cardinality of the intersection of ${S}$ with any line. I have also recently learned (private communication) that an unpublished work of Shaffaf has obtained a result similar to the one in this post, namely that the Erdös-Ulam conjecture follows from the Bombieri-Lang conjecture, plus an additional conjecture about the rational curves in a specific surface.
Let us now give the elementary reductions to the claim that a certain variety is of general type. For sake of contradiction, let ${S}$ be a dense set such that the distance between any two points is rational. Then ${S}$ certainly contains two points that are a rational distance apart. By applying a translation, rotation, and a (rational) dilation, we may assume that these two points are ${(0,0)}$ and ${(1,0)}$. As ${S}$ is dense, there is a third point of ${S}$ not on the ${x}$ axis, which after a reflection we can place in the upper half-plane; we will write it as ${(a,\sqrt{b})}$ with ${b>0}$.
Given any two points ${P, Q}$ in ${S}$, the quantities ${|P|^2, |Q|^2, |P-Q|^2}$ are rational, and so by the cosine rule the dot product ${P \cdot Q}$ is rational as well. Since ${(1,0) \in S}$, this implies that the ${x}$-component of every point ${P}$ in ${S}$ is rational; this in turn implies that the product of the ${y}$-coordinates of any two points ${P,Q}$ in ${S}$ is rational as well (since this differs from ${P \cdot Q}$ by a rational number). In particular, ${a}$ and ${b}$ are rational, and all of the points in ${S}$ now lie in the lattice ${\{ ( x, y\sqrt{b}): x, y \in {\bf Q} \}}$. (This fact appears to have first been observed in the 1988 habilitationschrift of Kemnitz.)
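As a concrete illustration of the cosine-rule observation above (my addition, not part of the original post), one can verify in exact rational arithmetic that for lattice points ${(x, y\sqrt{b})}$ the dot product recovered from the squared lengths is rational; the particular values of ${b}$ and the two points below are arbitrary:

```python
# Exact-arithmetic check: P.Q = (|P|^2 + |Q|^2 - |P-Q|^2)/2 is rational
# for points in the lattice {(x, y*sqrt(b)) : x, y rational}.
from fractions import Fraction as Fr

b = Fr(2)  # arbitrary non-zero rational

def dot(P, Q):
    # (x1, y1*sqrt(b)) . (x2, y2*sqrt(b)) = x1*x2 + b*y1*y2 -- exactly rational
    return P[0] * Q[0] + b * P[1] * Q[1]

P = (Fr(1, 2), Fr(3, 4))   # represents (1/2, (3/4)*sqrt(2))
Q = (Fr(2), Fr(1, 3))      # represents (2,   (1/3)*sqrt(2))

diff = (P[0] - Q[0], P[1] - Q[1])
dist_sq = dot(diff, diff)                            # |P - Q|^2, rational
cosine_rule = (dot(P, P) + dot(Q, Q) - dist_sq) / 2  # cosine-rule formula

assert cosine_rule == dot(P, Q)       # the two computations of P.Q agree
assert isinstance(cosine_rule, Fr)    # and the result is exactly rational
```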
Now take four points ${(x_j,y_j \sqrt{b})}$, ${j=1,\dots,4}$ in ${S}$ in general position (so that the octuplet ${(x_1,y_1\sqrt{b},\dots,x_4,y_4\sqrt{b})}$ avoids any pre-specified hypersurface in ${{\bf C}^8}$); this can be done if ${S}$ is dense. (If one wished, one could re-use the three previous points ${(0,0), (1,0), (a,\sqrt{b})}$ to be three of these four points, although this ultimately makes little difference to the analysis.) If ${(x,y\sqrt{b})}$ is any point in ${S}$, then the distances ${r_j}$ from ${(x,y\sqrt{b})}$ to ${(x_j,y_j\sqrt{b})}$ are rationals that obey the equations
$\displaystyle (x - x_j)^2 + b (y-y_j)^2 = r_j^2$
for ${j=1,\dots,4}$, and thus determine a rational point in the affine complex variety ${V = V_{b,x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4} \subset {\bf C}^6}$ defined as
$\displaystyle V := \{ (x,y,r_1,r_2,r_3,r_4) \in {\bf C}^6:$
$\displaystyle (x - x_j)^2 + b (y-y_j)^2 = r_j^2 \hbox{ for } j=1,\dots,4 \}.$
By inspecting the projection ${(x,y,r_1,r_2,r_3,r_4) \rightarrow (x,y)}$ from ${V}$ to ${{\bf C}^2}$, we see that ${V}$ is a branched cover of ${{\bf C}^2}$, with the generic cover having ${2^4=16}$ points (coming from the different ways to form the square roots ${r_1,r_2,r_3,r_4}$); in particular, ${V}$ is a complex affine algebraic surface, defined over the rationals. By inspecting the monodromy around the four singular base points ${(x,y) = (x_i,y_i)}$ (which switch the sign of one of the roots ${r_i}$, while keeping the other three roots unchanged), we see that the variety ${V}$ is connected away from its singular set, and thus irreducible. As ${S}$ is topologically dense in ${{\bf R}^2}$, it is Zariski-dense in ${{\bf C}^2}$, and so ${S}$ generates a Zariski-dense set of rational points in ${V}$. To solve the Erdös-Ulam problem, it thus suffices to show that
Claim 3 For any non-zero rational ${b}$ and for rationals ${x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4}$ in general position, the rational points of the affine surface ${V = V_{b,x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4}}$ is not Zariski dense in ${V}$.
This is already very close to a claim that can be directly resolved by the Bombieri-Lang conjecture, but ${V}$ is affine rather than projective, and also contains some singularities. The first issue is easy to deal with, by working with the projectivisation
$\displaystyle \overline{V} := \{ [X,Y,Z,R_1,R_2,R_3,R_4] \in {\bf CP}^6: Q(X,Y,Z,R_1,R_2,R_3,R_4) = 0 \} \ \ \ \ \ (1)$
of ${V}$, where ${Q: {\bf C}^7 \rightarrow {\bf C}^4}$ is the homogeneous quadratic polynomial
$\displaystyle Q(X,Y,Z,R_1,R_2,R_3,R_4) := (Q_j(X,Y,Z,R_1,R_2,R_3,R_4) )_{j=1}^4$
with
$\displaystyle Q_j(X,Y,Z,R_1,R_2,R_3,R_4) := (X-x_j Z)^2 + b (Y-y_jZ)^2 - R_j^2$
and the projective complex space ${{\bf CP}^6}$ is the space of all equivalence classes ${[X,Y,Z,R_1,R_2,R_3,R_4]}$ of tuples ${(X,Y,Z,R_1,R_2,R_3,R_4) \in {\bf C}^7 \backslash \{0\}}$ up to projective equivalence ${(\lambda X, \lambda Y, \lambda Z, \lambda R_1, \lambda R_2, \lambda R_3, \lambda R_4) \sim (X,Y,Z,R_1,R_2,R_3,R_4)}$. By identifying the affine point ${(x,y,r_1,r_2,r_3,r_4)}$ with the projective point ${(X,Y,1,R_1,R_2,R_3,R_4)}$, we see that ${\overline{V}}$ consists of the affine variety ${V}$ together with the set ${\{ [X,Y,0,R_1,R_2,R_3,R_4]: X^2+bY^2=R_1^2; R_j = \pm R_1 \hbox{ for } j=2,3,4\}}$, which is the union of eight curves, each of which lies in the closure of ${V}$. Thus ${\overline{V}}$ is the projective closure of ${V}$, and is thus a complex irreducible projective surface, defined over the rationals. As ${\overline{V}}$ is cut out by four quadric equations in ${{\bf CP}^6}$ and has degree sixteen (as can be seen for instance by inspecting the intersection of ${\overline{V}}$ with a generic perturbation of a fibre over the generically defined projection ${[X,Y,Z,R_1,R_2,R_3,R_4] \mapsto [X,Y,Z]}$), it is also a complete intersection. To show Claim 3, it then suffices to show that the rational points in ${\overline{V}}$ are not Zariski dense in ${\overline{V}}$.
Heuristically, the reason why we expect few rational points in ${\overline{V}}$ is as follows. First observe from the projective nature of (1) that every rational point is equivalent to an integer point. But for a septuple ${(X,Y,Z,R_1,R_2,R_3,R_4)}$ of integers of size ${O(N)}$, the quantity ${Q(X,Y,Z,R_1,R_2,R_3,R_4)}$ is an integer point of ${{\bf Z}^4}$ of size ${O(N^2)}$, and so should only vanish about ${O(N^{-8})}$ of the time. Hence the number of integer points ${(X,Y,Z,R_1,R_2,R_3,R_4) \in {\bf Z}^7}$ of height comparable to ${N}$ should be about
$\displaystyle O(N)^7 \times O(N^{-8}) = O(N^{-1});$
this is a convergent sum if ${N}$ ranges over (say) powers of two, and so from standard probabilistic heuristics (see this previous post) we in fact expect only finitely many solutions, in the absence of any special algebraic structure (e.g. the structure of an abelian variety, or a birational reduction to a simpler variety) that could produce an unusually large number of solutions.
The Bombieri-Lang conjecture, Conjecture 2, can be viewed as a formalisation of the above heuristics (roughly speaking, it is one of the most optimistic natural conjectures one could make that is compatible with these heuristics while also being invariant under birational equivalence).
Unfortunately, ${\overline{V}}$ contains some singular points. Being a complete intersection, this occurs when the Jacobian matrix of the map ${Q: {\bf C}^7 \rightarrow {\bf C}^4}$ has less than full rank, or equivalently that the gradient vectors
$\displaystyle \nabla Q_j = (2(X-x_j Z), 2(Y-y_j Z), -2x_j (X-x_j Z) - 2y_j (Y-y_j Z), \ \ \ \ \ (2)$
$\displaystyle 0, \dots, 0, -2R_j, 0, \dots, 0)$
for ${j=1,\dots,4}$ are linearly dependent, where the ${-2R_j}$ is in the coordinate position associated to ${R_j}$. One way in which this can occur is if one of the gradient vectors ${\nabla Q_j}$ vanish identically. This occurs at precisely ${4 \times 2^3 = 32}$ points, when ${[X,Y,Z]}$ is equal to ${[x_j,y_j,1]}$ for some ${j=1,\dots,4}$, and one has ${R_k = \pm ( (x_j - x_k)^2 + b (y_j - y_k)^2 )^{1/2}}$ for all ${k=1,\dots,4}$ (so in particular ${R_j=0}$). Let us refer to these as the obvious singularities; they arise from the geometrically evident fact that the distance function ${(x,y\sqrt{b}) \mapsto \sqrt{(x-x_j)^2 + b(y-y_j)^2}}$ is singular at ${(x_j,y_j\sqrt{b})}$.
The other way in which could occur is if a non-trivial linear combination of at least two of the gradient vectors vanishes. From (2), this can only occur if ${R_j=R_k=0}$ for some distinct ${j,k}$, which from (1) implies that
$\displaystyle (X - x_j Z) = \pm \sqrt{b} i (Y - y_j Z) \ \ \ \ \ (3)$
and
$\displaystyle (X - x_k Z) = \pm \sqrt{b} i (Y - y_k Z) \ \ \ \ \ (4)$
for two choices of sign ${\pm}$. If the signs are equal, then (as ${x_j, y_j, x_k, y_k}$ are in general position) this implies that ${Z=0}$, and then we have the singular point
$\displaystyle [X,Y,Z,R_1,R_2,R_3,R_4] = [\pm \sqrt{b} i, 1, 0, 0, 0, 0, 0]. \ \ \ \ \ (5)$
If the non-trivial linear combination involved three or more gradient vectors, then by the pigeonhole principle at least two of the signs involved must be equal, and so the only singular points are (5). So the only remaining possibility is when we have two gradient vectors ${\nabla Q_j, \nabla Q_k}$ that are parallel but non-zero, with the signs in (3), (4) opposing. But then (as ${x_j,y_j,x_k,y_k}$ are in general position) the vectors ${(X-x_j Z, Y-y_j Z), (X-x_k Z, Y-y_k Z)}$ are non-zero and non-parallel to each other, a contradiction. Thus, outside of the ${32}$ obvious singular points mentioned earlier, the only other singular points are the two points (5).
We will shortly show that the ${32}$ obvious singularities are ordinary double points; the surface ${\overline{V}}$ near any of these points is analytically equivalent to an ordinary cone ${\{ (x,y,z) \in {\bf C}^3: z^2 = x^2 + y^2 \}}$ near the origin, which is a cone over a smooth conic curve ${\{ (x,y) \in {\bf C}^2: x^2+y^2=1\}}$. The two non-obvious singularities (5) are slightly more complicated than ordinary double points, they are elliptic singularities, which approximately resemble a cone over an elliptic curve. (As far as I can tell, this resemblance is exact in the category of real smooth manifolds, but not in the category of algebraic varieties.) If one blows up each of the point singularities of ${\overline{V}}$ separately, no further singularities are created, and one obtains a smooth projective surface ${X}$ (using the Segre embedding as necessary to embed ${X}$ back into projective space, rather than in a product of projective spaces). Away from the singularities, the rational points of ${\overline{V}}$ lift up to rational points of ${X}$. Assuming the Bombieri-Lang conjecture, we thus are able to answer the Erdös-Ulam problem in the affirmative once we establish
Theorem 4 The blowup ${X}$ of ${\overline{V}}$ is of general type.
This will be done below the fold, by the pedestrian device of explicitly constructing global differential forms on ${X}$; I will also be working from a complex analysis viewpoint rather than an algebraic geometry viewpoint as I am more comfortable with the former approach. (As mentioned above, though, there may well be a quicker way to establish this result by using more sophisticated machinery.)
I thank Mark Green and David Gieseker for helpful conversations (and a crash course in varieties of general type!).
Remark 5 The above argument shows in fact (assuming Bombieri-Lang) that sets ${S \subset {\bf R}^2}$ with all distances rational cannot be Zariski-dense, and thus (by Solymosi-de Zeeuw) must lie on a single line or circle with only finitely many exceptions. Assuming a stronger version of Bombieri-Lang involving a general number field ${K}$, we obtain a similar conclusion with “rational” replaced by “lying in ${K}$” (one has to extend the Solymosi-de Zeeuw analysis to more general number fields, but this should be routine, using the analogue of Faltings’ theorem for such number fields).
Van Vu and I have just uploaded to the arXiv our paper “Random matrices have simple eigenvalues“. Recall that an ${n \times n}$ Hermitian matrix is said to have simple eigenvalues if all of its ${n}$ eigenvalues are distinct. This is a very typical property of matrices to have: for instance, as discussed in this previous post, in the space of all ${n \times n}$ Hermitian matrices, the space of matrices without all eigenvalues simple has codimension three, and for real symmetric cases this space has codimension two. In particular, given any random matrix ensemble of Hermitian or real symmetric matrices with an absolutely continuous distribution, we conclude that random matrices drawn from this ensemble will almost surely have simple eigenvalues.
For discrete random matrix ensembles, though, the above argument breaks down, even though general universality heuristics predict that the statistics of discrete ensembles should behave similarly to those of continuous ensembles. A model case here is the adjacency matrix ${M_n}$ of an Erdös-Rényi graph – a graph on ${n}$ vertices in which any pair of vertices has an independent probability ${p}$ of being in the graph. For the purposes of this paper one should view ${p}$ as fixed, e.g. ${p=1/2}$, while ${n}$ is an asymptotic parameter going to infinity. In this context, our main result is the following (answering a question of Babai):
Theorem 1 With probability ${1-o(1)}$, ${M_n}$ has simple eigenvalues.
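(As a purely numerical aside, not needed for the rigorous argument: the theorem is easy to probe empirically. The sketch below, with arbitrary choices of size, tolerance, and number of trials, samples Erdös-Rényi adjacency matrices and checks that consecutive eigenvalues are separated.)

```python
import numpy as np

rng = np.random.default_rng(0)

def has_simple_spectrum(n, p=0.5, tol=1e-8):
    """Sample the adjacency matrix of an Erdos-Renyi graph G(n,p) and
    test whether consecutive eigenvalues are separated by more than tol."""
    coins = rng.random((n, n)) < p
    A = np.triu(coins, k=1)              # independent edges above the diagonal
    M = (A + A.T).astype(float)          # symmetric adjacency matrix, zero diagonal
    eigs = np.linalg.eigvalsh(M)         # eigenvalues in ascending order
    return np.min(np.diff(eigs)) > tol

trials = 200
hits = sum(has_simple_spectrum(50) for _ in range(trials))
print(f"{hits}/{trials} samples had simple spectrum")
```

In practice every sample passes the test already at modest ${n}$; of course this only probes numerical separation of eigenvalues, not exact equality.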
Our argument works for more general Wigner-type matrix ensembles, but for sake of illustration we will stick with the Erdös-Rényi case. Previous work on local universality for such matrix models (e.g. the work of Erdös, Knowles, Yau, and Yin) was able to show that any individual eigenvalue gap ${\lambda_{i+1}(M)-\lambda_i(M)}$ did not vanish with probability ${1-o(1)}$ (in fact ${1-O(n^{-c})}$ for some absolute constant ${c>0}$), but because there are ${n}$ different gaps that must simultaneously be non-zero, this did not give Theorem 1, as one is forced to apply the union bound.
Our argument in fact gives simplicity of the spectrum with probability ${1-O(n^{-A})}$ for any fixed ${A}$; in a subsequent paper we also show that it gives a quantitative lower bound on the eigenvalue gaps (analogous to how many results on the singularity probability of random matrices can be upgraded to a bound on the least singular value).
The basic idea of argument can be sketched as follows. Suppose that ${M_n}$ has a repeated eigenvalue ${\lambda}$. We split
$\displaystyle M_n = \begin{pmatrix} M_{n-1} & X \\ X^T & 0 \end{pmatrix}$
for a random ${(n-1) \times (n-1)}$ minor ${M_{n-1}}$ and a random sign vector ${X}$; crucially, ${X}$ and ${M_{n-1}}$ are independent. If ${M_n}$ has a repeated eigenvalue ${\lambda}$, then by the Cauchy interlacing law, ${M_{n-1}}$ also has an eigenvalue ${\lambda}$. We now write down the eigenvector equation for ${M_n}$ at ${\lambda}$:
$\displaystyle \begin{pmatrix} M_{n-1} & X \\ X^T & 0 \end{pmatrix} \begin{pmatrix} v \\ a \end{pmatrix} = \lambda \begin{pmatrix} v \\ a \end{pmatrix}.$
Extracting the top ${n-1}$ coefficients, we obtain
$\displaystyle (M_{n-1} - \lambda) v + a X = 0.$
If we let ${w}$ be the ${\lambda}$-eigenvector of ${M_{n-1}}$, then by taking inner products with ${w}$ we conclude that
$\displaystyle a (w \cdot X) = 0;$
we typically expect ${a}$ to be non-zero, in which case we arrive at
$\displaystyle w \cdot X = 0.$
In other words, in order for ${M_n}$ to have a repeated eigenvalue, the top right column ${X}$ of ${M_n}$ has to be orthogonal to an eigenvector ${w}$ of the minor ${M_{n-1}}$. Note that ${X}$ and ${w}$ are going to be independent (once we specify which eigenvector of ${M_{n-1}}$ to take as ${w}$). On the other hand, thanks to inverse Littlewood-Offord theory (specifically, we use an inverse Littlewood-Offord theorem of Nguyen and Vu), we know that the vector ${X}$ is unlikely to be orthogonal to any given vector ${w}$ independent of ${X}$, unless the coefficients of ${w}$ are extremely special (specifically, that most of them lie in a generalised arithmetic progression). The main remaining difficulty is then to show that eigenvectors of a random matrix are typically not of this special form, and this relies on a conditioning argument originally used by Komlós to bound the singularity probability of a random sign matrix. (Basically, if an eigenvector has this special form, then one can use a fraction of the rows and columns of the random matrix to determine the eigenvector completely, while still preserving enough randomness in the remaining portion of the matrix so that this vector will in fact not be an eigenvector with high probability.)
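To get a feel for the Littlewood-Offord input here, one can compare by brute force the chance that a random sign vector is orthogonal to a structured vector versus a dissociated one; the following toy computation (my illustration only, with a small ${n}$) makes the contrast explicit.

```python
import itertools

def orth_fraction(w):
    """Exact fraction of sign vectors X in {-1,+1}^n orthogonal to w."""
    n = len(w)
    hits = sum(1 for X in itertools.product((-1, 1), repeat=n)
               if sum(wi * xi for wi, xi in zip(w, X)) == 0)
    return hits / 2 ** n

n = 16
structured = [1] * n                      # all coefficients equal: many cancellations
dissociated = [2 ** i for i in range(n)]  # powers of 2: the +-1 term forces an odd sum

print(orth_fraction(structured))   # C(16,8)/2^16, about 0.196
print(orth_fraction(dissociated))  # 0.0
```

For the all-ones vector the orthogonality probability decays only like ${n^{-1/2}}$, whereas for dissociated coefficients it is zero outright; the inverse Littlewood-Offord theory says, roughly, that anything close to the first behaviour forces arithmetic-progression structure on the coefficients.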
In graph theory, the recently developed theory of graph limits has proven to be a useful tool for analysing large dense graphs, being a convenient reformulation of the Szemerédi regularity lemma. Roughly speaking, the theory asserts that given any sequence ${G_n = (V_n, E_n)}$ of finite graphs, one can extract a subsequence ${G_{n_j} = (V_{n_j}, E_{n_j})}$ which converges (in a specific sense) to a continuous object known as a “graphon” – a symmetric measurable function ${p\colon [0,1] \times [0,1] \rightarrow [0,1]}$. What “converges” means in this context is that subgraph densities converge to the associated integrals of the graphon ${p}$. For instance, the edge density
$\displaystyle \frac{1}{|V_{n_j}|^2} |E_{n_j}|$
converges to the integral
$\displaystyle \int_0^1 \int_0^1 p(x,y)\ dx dy,$
the triangle density
$\displaystyle \frac{1}{|V_{n_j}|^3} \lvert \{ (v_1,v_2,v_3) \in V_{n_j}^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_{n_j} \} \rvert$
converges to the integral
$\displaystyle \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ dx_1 dx_2 dx_3,$
the four-cycle density
$\displaystyle \frac{1}{|V_{n_j}|^4} \lvert \{ (v_1,v_2,v_3,v_4) \in V_{n_j}^4: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_4\}, \{v_4,v_1\} \in E_{n_j} \} \rvert$
converges to the integral
$\displaystyle \int_0^1 \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_4) p(x_4,x_1)\ dx_1 dx_2 dx_3 dx_4,$
and so forth. One can use graph limits to prove many results in graph theory that were traditionally proven using the regularity lemma, such as the triangle removal lemma, and can also reduce many asymptotic graph theory problems to continuous problems involving multilinear integrals (although the latter problems are not necessarily easy to solve!). See this text of Lovasz for a detailed study of graph limits and their applications.
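As an illustrative aside (with parameters chosen arbitrarily), one can verify these convergences numerically in the simplest case, when the graphs are Erdös-Rényi random graphs and the limiting graphon is the constant function ${p}$; the counts below are over ordered tuples, matching the normalisations above.

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 1000, 0.3
coins = rng.random((n, n)) < p
A = np.triu(coins, k=1)
A = (A + A.T).astype(float)                      # adjacency matrix of G(n, p)

edge_density = A.sum() / n ** 2                  # -> integral of p(x,y), i.e. p
triangle_density = np.trace(A @ A @ A) / n ** 3  # -> p**3
print(edge_density, triangle_density)
```

Here `np.trace(A @ A @ A)` counts exactly the ordered triples ${(v_1,v_2,v_3)}$ with all three edges present, so both quantities should be close to ${0.3}$ and ${0.027}$ respectively.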
One can also express graph limits (and more generally hypergraph limits) in the language of nonstandard analysis (or of ultraproducts); see for instance this paper of Elek and Szegedy, Section 6 of this previous blog post, or this paper of Towsner. (In this post we assume some familiarity with nonstandard analysis, as reviewed for instance in the previous blog post.) Here, one starts as before with a sequence ${G_n = (V_n,E_n)}$ of finite graphs, and then takes an ultraproduct (with respect to some arbitrarily chosen non-principal ultrafilter ${\alpha \in\beta {\bf N} \backslash {\bf N}}$) to obtain a nonstandard graph ${G_\alpha = (V_\alpha,E_\alpha)}$, where ${V_\alpha = \prod_{n\rightarrow \alpha} V_n}$ is the ultraproduct of the ${V_n}$, and similarly for the ${E_\alpha}$. The set ${E_\alpha}$ can then be viewed as a symmetric subset of ${V_\alpha \times V_\alpha}$ which is measurable with respect to the Loeb ${\sigma}$-algebra ${{\mathcal L}_{V_\alpha \times V_\alpha}}$ of the product ${V_\alpha \times V_\alpha}$ (see this previous blog post for the construction of Loeb measure). A crucial point is that this ${\sigma}$-algebra is larger than the product ${{\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha}}$ of the Loeb ${\sigma}$-algebra of the individual vertex set ${V_\alpha}$. This leads to a decomposition
$\displaystyle 1_{E_\alpha} = p + e$
where the “graphon” ${p}$ is the orthogonal projection of ${1_{E_\alpha}}$ onto ${L^2( {\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha} )}$, and the “regular error” ${e}$ is orthogonal to all product sets ${A \times B}$ for ${A, B \in {\mathcal L}_{V_\alpha}}$. The graphon ${p\colon V_\alpha \times V_\alpha \rightarrow [0,1]}$ then captures the statistics of the nonstandard graph ${G_\alpha}$, in exact analogy with the more traditional graph limits: for instance, the edge density
$\displaystyle \hbox{st} \frac{1}{|V_\alpha|^2} |E_\alpha|$
(or equivalently, the limit of the ${\frac{1}{|V_n|^2} |E_n|}$ along the ultrafilter ${\alpha}$) is equal to the integral
$\displaystyle \int_{V_\alpha} \int_{V_\alpha} p(x,y)\ d\mu_{V_\alpha}(x) d\mu_{V_\alpha}(y)$
where ${d\mu_V}$ denotes Loeb measure on a nonstandard finite set ${V}$; the triangle density
$\displaystyle \hbox{st} \frac{1}{|V_\alpha|^3} \lvert \{ (v_1,v_2,v_3) \in V_\alpha^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_\alpha \} \rvert$
(or equivalently, the limit along ${\alpha}$ of the triangle densities of ${E_n}$) is equal to the integral
$\displaystyle \int_{V_\alpha} \int_{V_\alpha} \int_{V_\alpha} p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ d\mu_{V_\alpha}(x_1) d\mu_{V_\alpha}(x_2) d\mu_{V_\alpha}(x_3),$
and so forth. Note that with this construction, the graphon ${p}$ is living on the Cartesian square of an abstract probability space ${V_\alpha}$, which is likely to be inseparable; but it is possible to cut down the Loeb ${\sigma}$-algebra on ${V_\alpha}$ to the minimal countably generated ${\sigma}$-algebra for which ${p}$ remains measurable (up to null sets), and then one can identify ${V_\alpha}$ with ${[0,1]}$, bringing this construction of a graphon in line with the traditional notion of a graphon. (See Remark 5 of this previous blog post for more discussion of this point.)
Additive combinatorics, which studies things like the additive structure of finite subsets ${A}$ of an abelian group ${G = (G,+)}$, has many analogies and connections with asymptotic graph theory; in particular, there is the arithmetic regularity lemma of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given this, it is natural to suspect that there is a theory of “additive limits” for large additive sets of bounded doubling, analogous to the theory of graph limits for large dense graphs. The purpose of this post is to record a candidate for such an additive limit. This limit can be used as a substitute for the arithmetic regularity lemma in certain results in additive combinatorics, at least if one is willing to settle for qualitative results rather than quantitative ones; I give a few examples of this below the fold.
It seems that to allow for the most flexible and powerful manifestation of this theory, it is convenient to use the nonstandard formulation (among other things, it allows for full use of the transfer principle, whereas a more traditional limit formulation would only allow for a transfer of those quantities continuous with respect to the notion of convergence). Here, the analogue of a nonstandard graph is an ultra approximate group ${A_\alpha}$ in a nonstandard group ${G_\alpha = \prod_{n \rightarrow \alpha} G_n}$, defined as the ultraproduct of finite ${K}$-approximate groups ${A_n \subset G_n}$ for some standard ${K}$. (A ${K}$-approximate group ${A_n}$ is a symmetric set containing the origin such that ${A_n+A_n}$ can be covered by ${K}$ or fewer translates of ${A_n}$.) We then let ${O(A_\alpha)}$ be the external subgroup of ${G_\alpha}$ generated by ${A_\alpha}$; equivalently, ${O(A_\alpha)}$ is the union of ${A_\alpha^m}$ over all standard ${m}$. This space has a Loeb measure ${\mu_{O(A_\alpha)}}$, defined by setting
$\displaystyle \mu_{O(A_\alpha)}(E_\alpha) := \hbox{st} \frac{|E_\alpha|}{|A_\alpha|}$
whenever ${E_\alpha}$ is an internal subset of ${A_\alpha^m}$ for any standard ${m}$, and extended to a countably additive measure; the arguments in Section 6 of this previous blog post can be easily modified to give a construction of this measure.
The Loeb measure ${\mu_{O(A_\alpha)}}$ is a translation invariant measure on ${O(A_{\alpha})}$, normalised so that ${A_\alpha}$ has Loeb measure one. As such, one should think of ${O(A_\alpha)}$ as being analogous to a locally compact abelian group equipped with a Haar measure. It should be noted though that ${O(A_\alpha)}$ is not actually a locally compact group with Haar measure, for two reasons:
• There is not an obvious topology on ${O(A_\alpha)}$ that makes it simultaneously locally compact, Hausdorff, and ${\sigma}$-compact. (One can get one or two out of three without difficulty, though.)
• The addition operation ${+\colon O(A_\alpha) \times O(A_\alpha) \rightarrow O(A_\alpha)}$ is not measurable from the product Loeb algebra ${{\mathcal L}_{O(A_\alpha)} \times {\mathcal L}_{O(A_\alpha)}}$ to ${{\mathcal L}_{O(A_\alpha)}}$. Instead, it is measurable from the finer Loeb algebra ${{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}$ to ${{\mathcal L}_{O(A_\alpha)}}$ (compare with the analogous situation for nonstandard graphs).
Nevertheless, the analogy is a useful guide for the arguments that follow.
Let ${L(O(A_\alpha))}$ denote the space of bounded Loeb measurable functions ${f\colon O(A_\alpha) \rightarrow {\bf C}}$ (modulo almost everywhere equivalence) that are supported on ${A_\alpha^m}$ for some standard ${m}$; this is a complex algebra with respect to pointwise multiplication. There is also a convolution operation ${\star\colon L(O(A_\alpha)) \times L(O(A_\alpha)) \rightarrow L(O(A_\alpha))}$, defined by setting
$\displaystyle (\hbox{st} f) \star (\hbox{st} g)(x) := \hbox{st} \frac{1}{|A_\alpha|} \sum_{y \in A_\alpha^m} f(y) g(x-y)$
whenever ${f\colon A_\alpha^m \rightarrow {}^* {\bf C}}$, ${g\colon A_\alpha^l \rightarrow {}^* {\bf C}}$ are bounded nonstandard functions (extended by zero to all of ${O(A_\alpha)}$), and then extending to arbitrary elements of ${L(O(A_\alpha))}$ by density. Equivalently, ${f \star g}$ is the pushforward of the ${{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}$-measurable function ${(x,y) \mapsto f(x) g(y)}$ under the map ${(x,y) \mapsto x+y}$.
The basic structural theorem is then as follows.
Theorem 1 (Kronecker factor) Let ${A_\alpha}$ be an ultra approximate group. Then there exists a (standard) locally compact abelian group ${G}$ of the form
$\displaystyle G = {\bf R}^d \times {\bf Z}^m \times T$
for some standard ${d,m}$ and some compact abelian group ${T}$, equipped with a Haar measure ${\mu_G}$ and a measurable homomorphism ${\pi\colon O(A_\alpha) \rightarrow G}$ (using the Loeb ${\sigma}$-algebra on ${O(A_\alpha)}$ and the Baire ${\sigma}$-algebra on ${G}$), with the following properties:
• (i) ${\pi}$ has dense image, and ${\mu_G}$ is the pushforward of Loeb measure ${\mu_{O(A_\alpha)}}$ by ${\pi}$.
• (ii) There exist sets ${\{0\} \subset U_0 \subset K_0 \subset G}$ with ${U_0}$ open and ${K_0}$ compact, such that
$\displaystyle \pi^{-1}(U_0) \subset 4A_\alpha \subset \pi^{-1}(K_0). \ \ \ \ \ (1)$
• (iii) Whenever ${K \subset U \subset G}$ with ${K}$ compact and ${U}$ open, there exists a nonstandard finite set ${B}$ such that
$\displaystyle \pi^{-1}(K) \subset B \subset \pi^{-1}(U). \ \ \ \ \ (2)$
• (iv) If ${f, g \in L}$, then we have the convolution formula
$\displaystyle f \star g = \pi^*( (\pi_* f) \star (\pi_* g) ) \ \ \ \ \ (3)$
where ${\pi_* f,\pi_* g}$ are the pushforwards of ${f,g}$ to ${L^2(G, \mu_G)}$, the convolution ${\star}$ on the right-hand side is convolution using ${\mu_G}$, and ${\pi^*}$ is the pullback map from ${L^2(G,\mu_G)}$ to ${L^2(O(A_\alpha), \mu_{O(A_\alpha)})}$. In particular, if ${\pi_* f = 0}$, then ${f \star g = 0}$ for all ${g \in L}$.
One can view the locally compact abelian group ${G}$ as a “model” or “Kronecker factor” for the ultra approximate group ${A_\alpha}$ (in close analogy with the Kronecker factor from ergodic theory). In the case that ${A_\alpha}$ is a genuine nonstandard finite group rather than an ultra approximate group, the non-compact components ${{\bf R}^d \times {\bf Z}^m}$ of the Kronecker group ${G}$ are trivial, and this theorem was implicitly established by Szegedy. The compact group ${T}$ is quite large, and in particular is likely to be inseparable; but as with the case of graphons, when one is only studying at most countably many functions ${f}$, one can cut down the size of this group to be separable (or equivalently, second countable or metrisable) if desired, so one often works with a “reduced Kronecker factor” which is a quotient of the full Kronecker factor ${G}$. Once one is in the separable case, the Baire sigma algebra is identical with the more familiar Borel sigma algebra.
Given any sequence of uniformly bounded functions ${f_n\colon A_n^m \rightarrow {\bf C}}$ for some fixed ${m}$, we can view the function ${f \in L}$ defined by
$\displaystyle f := \pi_* \hbox{st} \lim_{n \rightarrow \alpha} f_n \ \ \ \ \ (4)$
as an “additive limit” of the ${f_n}$, in much the same way that graphons ${p\colon V_\alpha \times V_\alpha \rightarrow [0,1]}$ are limits of the indicator functions ${1_{E_n}\colon V_n \times V_n \rightarrow \{0,1\}}$. The additive limits capture some of the statistics of the ${f_n}$, for instance the normalised means
$\displaystyle \frac{1}{|A_n|} \sum_{x \in A_n^m} f_n(x)$
converge (along the ultrafilter ${\alpha}$) to the mean
$\displaystyle \int_G f(x)\ d\mu_G(x),$
and for three sequences ${f_n,g_n,h_n\colon A_n^m \rightarrow {\bf C}}$ of functions, the normalised correlation
$\displaystyle \frac{1}{|A_n|^2} \sum_{x,y \in A_n^m} f_n(x) g_n(y) h_n(x+y)$
converges along ${\alpha}$ to the correlation
$\displaystyle \int_G \int_G f(x) g(y) h(x+y)\ d\mu_G(x) d\mu_G(y),$
the normalised ${U^2}$ Gowers norm
$\displaystyle ( \frac{1}{|A_n|^3} \sum_{x,y,z,w \in A_n^m: x+w=y+z} f_n(x) \overline{f_n(y)} \overline{f_n(z)} f_n(w))^{1/4}$
converges along ${\alpha}$ to the ${U^2}$ Gowers norm
$\displaystyle ( \int_{G \times G \times G} f(x) \overline{f(y)} \overline{f(z)} f(x+y-z)\ d\mu_G(x) d\mu_G(y) d\mu_G(z))^{1/4}$
and so forth. We caution however that some correlations that involve evaluating more than one function at the same point will not necessarily be preserved in the additive limit; for instance the normalised ${\ell^2}$ norm
$\displaystyle (\frac{1}{|A_n|} \sum_{x \in A_n^m} |f_n(x)|^2)^{1/2}$
does not necessarily converge to the ${L^2}$ norm
$\displaystyle (\int_G |f(x)|^2\ d\mu_G(x))^{1/2},$
but can converge instead to a larger quantity, due to the presence of the orthogonal projection ${\pi_*}$ in the definition (4) of ${f}$.
An important special case of an additive limit occurs when the functions ${f_n\colon A_n^m \rightarrow {\bf C}}$ involved are indicator functions ${f_n = 1_{E_n}}$ of some subsets ${E_n}$ of ${A_n^m}$. The additive limit ${f \in L}$ does not necessarily remain an indicator function, but instead takes values in ${[0,1]}$ (much as a graphon ${p}$ takes values in ${[0,1]}$ even though the original indicators ${1_{E_n}}$ take values in ${\{0,1\}}$). The convolution ${f \star f\colon G \rightarrow [0,1]}$ is then the ultralimit of the normalised convolutions ${\frac{1}{|A_n|} 1_{E_n} \star 1_{E_n}}$; in particular, the measure of the support of ${f \star f}$ provides a lower bound on the limiting normalised cardinality ${\frac{1}{|A_n|} |E_n + E_n|}$ of a sumset. In many situations this lower bound is an equality, but this is not necessarily the case, because the sumset ${2E_n = E_n + E_n}$ could contain a large number of elements which have very few (${o(|A_n|)}$) representations as the sum of two elements of ${E_n}$, and in the limit these portions of the sumset fall outside of the support of ${f \star f}$. (One can think of the support of ${f \star f}$ as describing the “essential” sumset of ${2E_n = E_n + E_n}$, discarding those elements that have only very few representations.) Similarly for higher convolutions of ${f}$. Thus one can use additive limits to partially control the growth ${k E_n}$ of iterated sumsets of subsets ${E_n}$ of approximate groups ${A_n}$, in the regime where ${k}$ stays bounded and ${n}$ goes to infinity.
Theorem 1 can be proven by Fourier-analytic means (combined with Freiman’s theorem from additive combinatorics), and we will do so below the fold. For now, we give some illustrative examples of additive limits.
Example 2 (Bohr sets) We take ${A_n}$ to be the intervals ${A_n := \{ x \in {\bf Z}: |x| \leq N_n \}}$, where ${N_n}$ is a sequence going to infinity; these are ${2}$-approximate groups for all ${n}$. Let ${\theta}$ be an irrational real number, let ${I}$ be an interval in ${{\bf R}/{\bf Z}}$, and for each natural number ${n}$ let ${B_n}$ be the Bohr set
$\displaystyle B_n := \{ x \in A_n: \theta x \hbox{ mod } 1 \in I \}.$
In this case, the (reduced) Kronecker factor ${G}$ can be taken to be the infinite cylinder ${{\bf R} \times {\bf R}/{\bf Z}}$ with the usual Lebesgue measure ${\mu_G}$. The additive limits of ${1_{A_n}}$ and ${1_{B_n}}$ end up being ${1_A}$ and ${1_B}$, where ${A}$ is the finite cylinder
$\displaystyle A := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]\}$
and ${B}$ is the rectangle
$\displaystyle B := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]; t \in I \}.$
Geometrically, one should think of ${A_n}$ and ${B_n}$ as being wrapped around the cylinder ${{\bf R} \times {\bf R}/{\bf Z}}$ via the homomorphism ${x \mapsto (\frac{x}{N_n}, \theta x \hbox{ mod } 1)}$, and then one sees that ${B_n}$ is converging in some normalised weak sense to ${B}$, and similarly for ${A_n}$ and ${A}$. In particular, the additive limit predicts the growth rate of the iterated sumsets ${kB_n}$ to be quadratic in ${k}$ until ${k|I|}$ becomes comparable to ${1}$, at which point the growth becomes linear, in the regime where ${k}$ is bounded and ${n}$ is large.
If ${\theta = \frac{p}{q}}$ were rational instead of irrational, then one would need to replace ${{\bf R}/{\bf Z}}$ by the finite subgroup ${\frac{1}{q}{\bf Z}/{\bf Z}}$ here.
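One can watch this quadratic-to-linear transition happen in a direct computation; the sketch below (an illustration with small, arbitrarily chosen parameters) builds a single Bohr set and iterates the sumset.

```python
import math

theta = math.sqrt(2)          # an irrational frequency
N, length = 500, 0.2          # B lives in [-N, N]; I = [0, 0.2), so |I| = 0.2
B = {x for x in range(-N, N + 1) if (theta * x) % 1.0 < length}

S, sizes = set(B), [len(B)]
for k in range(2, 9):
    S = {s + b for s in S for b in B}   # S is now the iterated sumset k*B
    sizes.append(len(S))

# |kB| grows roughly like k^2 until k|I| ~ 1 (here k = 5), then linearly
print(sizes)
```

Up to boundary effects, the ratios ${|kB|/|B|}$ track ${k^2}$ for ${k \leq 5}$ and then grow by a roughly fixed increment per step, matching the prediction of the cylinder model.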
Example 3 (Structured subsets of progressions) We take ${A_n}$ be the rank two progression
$\displaystyle A_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|, |b| \leq N_n \},$
where ${N_n}$ is a sequence going to infinity; these are ${4}$-approximate groups for all ${n}$. Let ${B_n}$ be the subset
$\displaystyle B_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|^2 + |b|^2 \leq N_n^2 \}.$
Then the (reduced) Kronecker factor can be taken to be ${G = {\bf R}^2}$ with Lebesgue measure ${\mu_G}$, and the additive limits of the ${1_{A_n}}$ and ${1_{B_n}}$ are then ${1_A}$ and ${1_B}$, where ${A}$ is the square
$\displaystyle A := \{ (a,b) \in {\bf R}^2: |a|, |b| \leq 1 \}$
and ${B}$ is the circle
$\displaystyle B := \{ (a,b) \in {\bf R}^2: a^2+b^2 \leq 1 \}.$
Geometrically, the picture is similar to the Bohr set one, except now one uses a Freiman homomorphism ${a + b N_n^2 \mapsto (\frac{a}{N_n}, \frac{b}{N_n})}$ for ${a,b = O( N_n )}$ to embed the original sets ${A_n, B_n}$ into the plane ${{\bf R}^2}$. In particular, one now expects the growth rate of the iterated sumsets ${k A_n}$ and ${k B_n}$ to be quadratic in ${k}$, in the regime where ${k}$ is bounded and ${n}$ is large.
Example 4 (Dissociated sets) Let ${d}$ be a fixed natural number, and take
$\displaystyle A_n = \{0, v_1,\dots,v_d,-v_1,\dots,-v_d \}$
where ${v_1,\dots,v_d}$ are randomly chosen elements of a large cyclic group ${{\bf Z}/p_n{\bf Z}}$, where ${p_n}$ is a sequence of primes going to infinity. These are ${O(d)}$-approximate groups. The (reduced) Kronecker factor ${G}$ can (almost surely) then be taken to be ${{\bf Z}^d}$ with counting measure, and the additive limit of ${1_{A_n}}$ is ${1_A}$, where ${A = \{ 0, e_1,\dots,e_d,-e_1,\dots,-e_d\}}$ and ${e_1,\dots,e_d}$ is the standard basis of ${{\bf Z}^d}$. In particular, the growth rates of ${k A_n}$ should grow approximately like ${k^d}$ for ${k}$ bounded and ${n}$ large.
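This example is also easy to simulate: for ${d=3}$ and randomly chosen ${v_i}$, the iterated sumsets almost surely have exactly the cardinality of the corresponding ${\ell^1}$-balls in ${{\bf Z}^3}$. In the sketch below the modulus is just a large fixed number (primality is inessential; it is the randomness of the ${v_i}$ that prevents collisions with overwhelming probability).

```python
import random
from itertools import product

random.seed(2)
p = 10 ** 12 + 39                 # a large modulus; primality is not essential here
d = 3
v = [random.randrange(1, p) for _ in range(d)]
A = {0} | {x % p for x in v} | {(-x) % p for x in v}

def l1_ball(d, k):
    """Number of integer points a in Z^d with |a_1| + ... + |a_d| <= k."""
    return sum(1 for a in product(range(-k, k + 1), repeat=d)
               if sum(map(abs, a)) <= k)

S = set(A)
for k in range(2, 6):
    S = {(s + a) % p for s in S for a in A}   # S is now the iterated sumset k*A

print(len(S), l1_ball(d, 5))   # both counts agree; |kA| grows like k^d
```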
Example 5 (Random subsets of groups) Let ${A_n = G_n}$ be a sequence of finite additive groups whose order is going to infinity. Let ${B_n}$ be a random subset of ${G_n}$ of some fixed density ${0 \leq \lambda \leq 1}$. Then (almost surely) the Kronecker factor here can be reduced all the way to the trivial group ${\{0\}}$, and the additive limit of the ${1_{B_n}}$ is the constant function ${\lambda}$. The convolutions ${\frac{1}{|G_n|} 1_{B_n} * 1_{B_n}}$ then converge in the ultralimit (modulo almost everywhere equivalence) to the pullback of ${\lambda^2}$; this reflects the fact that ${(1-o(1))|G_n|}$ of the elements of ${G_n}$ can be represented as the sum of two elements of ${B_n}$ in ${(\lambda^2 + o(1)) |G_n|}$ ways. In particular, ${B_n+B_n}$ occupies a proportion ${1-o(1)}$ of ${G_n}$.
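A quick simulation (arbitrary parameters, purely for illustration) confirms both assertions of this example: the average number of representations is exactly ${|B_n|^2/|G_n| \approx \lambda^2 |G_n|}$, and the sumset covers almost the whole group.

```python
import random

random.seed(3)
n, lam = 2003, 0.1                 # the group Z/nZ and the density
B = [x for x in range(n) if random.random() < lam]

reps = [0] * n                     # reps[x] = #{(b1, b2) in B^2 : b1 + b2 = x mod n}
for b1 in B:
    for b2 in B:
        reps[(b1 + b2) % n] += 1

coverage = sum(1 for r in reps if r > 0) / n
mean_reps = sum(reps) / n          # equals len(B)**2 / n, about lam**2 * n
print(coverage, mean_reps)
```

The printed coverage is ${1-o(1)}$ (here essentially ${1}$), and the mean representation count is about ${\lambda^2 n \approx 20}$.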
Example 6 (Trigonometric series) Take ${A_n = G_n = {\bf Z}/p_n {\bf Z}}$ for a sequence ${p_n}$ of primes going to infinity, and for each ${n}$ let ${\xi_{n,1},\xi_{n,2},\dots}$ be an infinite sequence of frequencies chosen uniformly and independently from ${{\bf Z}/p_n{\bf Z}}$. Let ${f_n\colon {\bf Z}/p_n{\bf Z} \rightarrow {\bf C}}$ denote the random trigonometric series
$\displaystyle f_n(x) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i \xi_{n,j} x / p_n }.$
Then (almost surely) we can take the reduced Kronecker factor ${G}$ to be the infinite torus ${({\bf R}/{\bf Z})^{\bf N}}$ (with the Haar probability measure ${\mu_G}$), and the additive limit of the ${f_n}$ then becomes the function ${f\colon ({\bf R}/{\bf Z})^{\bf N} \rightarrow {\bf R}}$ defined by the formula
$\displaystyle f( (x_j)_{j=1}^\infty ) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i x_j}.$
In fact, the pullback ${\pi^* f}$ is the ultralimit of the ${f_n}$. As such, for any standard exponent ${1 \leq q < \infty}$, the normalised ${\ell^q}$ norm
$\displaystyle (\frac{1}{p_n} \sum_{x \in {\bf Z}/p_n{\bf Z}} |f_n(x)|^q)^{1/q}$
can be seen to converge to the limit
$\displaystyle (\int_{({\bf R}/{\bf Z})^{\bf N}} |f(x)|^q\ d\mu_G(x))^{1/q}.$
The reader is invited to consider combinations of the above examples, e.g. random subsets of Bohr sets, to get a sense of the general case of Theorem 1.
It is likely that this theorem can be extended to the noncommutative setting, using the noncommutative Freiman theorem of Emmanuel Breuillard, Ben Green, and myself, but I have not attempted to do so here (see though this recent preprint of Anush Tserunyan for some related explorations); in a separate direction, there should be extensions that can control higher Gowers norms, in the spirit of the work of Szegedy.
Note: the arguments below will presume some familiarity with additive combinatorics and with nonstandard analysis, and will be a little sketchy in places.
In addition to the Fields medallists mentioned in the previous post, the IMU also awarded the Nevanlinna prize to Subhash Khot, the Gauss prize to Stan Osher (my colleague here at UCLA!), and the Chern medal to Phillip Griffiths. Like I did in 2010, I’ll try to briefly discuss one result of each of the prize winners, though the fields of mathematics here are even further from my expertise than those discussed in the previous post (and all the caveats from that post apply here also).
Subhash Khot is best known for his Unique Games Conjecture, a problem in complexity theory that is perhaps second in importance only to the ${P \neq NP}$ problem for the purposes of demarcating the mysterious line between “easy” and “hard” problems (if one follows standard practice and uses “polynomial time” as the definition of “easy”). The ${P \neq NP}$ problem can be viewed as an assertion that it is difficult to find exact solutions to certain standard theoretical computer science problems (such as ${k}$-SAT); thanks to the NP-completeness phenomenon, it turns out that the precise problem posed here is not of critical importance, and ${k}$-SAT may be substituted with one of the many other problems known to be NP-complete. The unique games conjecture is similarly an assertion about the difficulty of finding even approximate solutions to certain standard problems, in particular “unique games” problems in which one needs to colour the vertices of a graph in such a way that the colour of one vertex of an edge is determined uniquely (via a specified matching) by the colour of the other vertex. This is an easy problem to solve if one insists on exact solutions (in which 100% of the edges have a colouring compatible with the specified matching), but becomes extremely difficult if one permits approximate solutions, with no exact solution available. In analogy with the NP-completeness phenomenon, the threshold for approximate satisfiability of many other problems (such as the MAX-CUT problem) is closely connected with the truth of the unique games conjecture; remarkably, the truth of the unique games conjecture would imply asymptotically sharp thresholds for many of these problems. This has implications for many theoretical computer science constructions which rely on hardness of approximation, such as probabilistically checkable proofs. For a more detailed survey of the unique games conjecture and its implications, see this Bulletin article of Trevisan.
My colleague Stan Osher has worked in many areas of applied mathematics, ranging from image processing to modeling fluids for major animation studios such as Pixar or Dreamworks, but today I would like to talk about one of his contributions that is close to an area of my own expertise, namely compressed sensing. One of the basic reconstruction problems in compressed sensing is the basis pursuit problem of finding the vector ${x \in {\bf R}^n}$ in an affine space ${\{ x \in {\bf R}^n: Ax = b \}}$ (where ${b \in {\bf R}^m}$ and ${A \in {\bf R}^{m\times n}}$ are given, and ${m}$ is typically somewhat smaller than ${n}$) which minimises the ${\ell^1}$-norm ${\|x\|_{\ell^1} := \sum_{i=1}^n |x_i|}$ of the vector ${x}$. This is a convex optimisation problem, and thus solvable in principle (it is a polynomial time problem, and thus “easy” in the above theoretical computer science sense). However, once ${n}$ and ${m}$ get moderately large (e.g. of the order of ${10^6}$), standard linear optimisation routines begin to become computationally expensive; also, it is difficult for off-the-shelf methods to exploit any additional structure (e.g. sparsity) in the measurement matrix ${A}$. Much of the problem comes from the fact that the functional ${x \mapsto \|x\|_1}$ is only barely convex. One way to speed up the optimisation problem is to relax it by replacing the constraint ${Ax=b}$ with a convex penalty term ${\frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2}$, thus one is now trying to minimise the unconstrained functional
$\displaystyle \|x\|_1 + \frac{1}{2\epsilon} \|Ax-b\|_{\ell^2}^2.$
This functional is more convex, and is over a computationally simpler domain ${{\bf R}^n}$ than the affine space ${\{x \in {\bf R}^n: Ax=b\}}$, so is easier (though still not entirely trivial) to optimise over. However, the minimiser ${x^\epsilon}$ to this problem need not match the minimiser ${x^0}$ to the original problem, particularly if the (sub-)gradient ${\partial \|x\|_1}$ of the original functional ${\|x\|_1}$ is large at ${x^0}$, and if ${\epsilon}$ is not set to be small. (And setting ${\epsilon}$ too small will cause other difficulties with numerically solving the optimisation problem, due to the need to divide by very small denominators.) However, if one modifies the objective function by an additional linear term
$\displaystyle \|x\|_1 - \langle p, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2$
then some simple convexity considerations reveal that the minimiser to this new problem will match the minimiser ${x^0}$ to the original problem, provided that ${p}$ is (or more precisely, lies in) the (sub-)gradient ${\partial \|x\|_1}$ of ${\|x\|_1}$ at ${x^0}$ – even if ${\epsilon}$ is not small. But, one does not know in advance what the correct value of ${p}$ should be, because one does not know what the minimiser ${x^0}$ is.
With Yin, Goldfarb and Darbon, Osher introduced a Bregman iteration method in which one solves for ${x}$ and ${p}$ simultaneously; given current iterates ${x^k, p^k}$ for ${x}$ and ${p}$, one first updates ${x^k}$ to the minimiser ${x^{k+1} \in {\bf R}^n}$ of the convex functional
$\displaystyle \|x\|_1 - \langle p^k, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2 \ \ \ \ \ (1)$
and then updates ${p^{k+1}}$ to the natural value of the subgradient ${\partial \|x\|_1}$ at ${x^{k+1}}$, namely
$\displaystyle p^{k+1} := p^k - \nabla \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2|_{x=x^{k+1}} = p^k - \frac{1}{\epsilon} A^T (Ax^{k+1} - b)$
(note upon taking the first variation of (1) that ${p^{k+1}}$ is indeed in the subgradient). This procedure converges remarkably quickly (both in theory and in practice) to the true minimiser ${x^0}$ even for non-small values of ${\epsilon}$, and also has some ability to be parallelised, and has led to actual performance improvements of an order of magnitude or more in certain compressed sensing problems (such as reconstructing an MRI image).
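The scheme is short enough to sketch in code. The following is my own illustrative implementation on a synthetic instance (not the authors' software): it uses the equivalent “add back the residual” form of the Bregman iteration, and solves the inner minimisation (1) only approximately, by the standard FISTA proximal-gradient method.

```python
import numpy as np

rng = np.random.default_rng(4)

def shrink(x, t):
    """Soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def inner_solve(A, c, eps, iters=1000):
    """Approximately minimise ||x||_1 + (1/2 eps)||Ax - c||^2 by FISTA."""
    L = np.linalg.norm(A, 2) ** 2 / eps    # Lipschitz constant of the smooth part
    x = z = np.zeros(A.shape[1])
    tk = 1.0
    for _ in range(iters):
        x_new = shrink(z - (A.T @ (A @ z - c)) / (eps * L), 1.0 / L)
        tk_new = (1.0 + np.sqrt(1.0 + 4.0 * tk * tk)) / 2.0
        z = x_new + ((tk - 1.0) / tk_new) * (x_new - x)
        x, tk = x_new, tk_new
    return x

def bregman(A, b, eps, outer=15):
    """Bregman iteration for min ||x||_1 subject to Ax = b: each sweep
    solves the penalised problem, then adds the residual back into b
    (equivalent to tracking the subgradient p^k in the text)."""
    c = b.copy()
    for _ in range(outer):
        x = inner_solve(A, c, eps)
        c = c + (b - A @ x)
    return x

m, n, s = 50, 100, 4                       # a small compressed-sensing instance
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
b = A @ x0
x = bregman(A, b, eps=1.0)
print(np.linalg.norm(x - x0))              # small: the sparse x0 is recovered
```

Even with the moderate value ${\epsilon = 1}$, a handful of outer sweeps typically recovers the sparse signal to good accuracy, which is precisely the point of the method.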
Phillip Griffiths has made many contributions to complex, algebraic and differential geometry, and I am not qualified to describe most of these; my primary exposure to his work is through his text on algebraic geometry with Harris, but, excellent though that text is, it is not really representative of his research. But I thought I would mention one cute result of his related to the famous Nash embedding theorem. Suppose that one has a smooth ${n}$-dimensional Riemannian manifold that one wants to embed locally into a Euclidean space ${{\bf R}^m}$. The Nash embedding theorem guarantees that one can do this if ${m}$ is large enough depending on ${n}$, but what is the minimal value of ${m}$ one can get away with? Many years ago, my colleague Robert Greene showed that ${m = \frac{n(n+1)}{2} + n}$ sufficed (a simplified proof was subsequently given by Gunther). However, this is not believed to be sharp; if one replaces “smooth” with “real analytic” then a standard Cauchy-Kovalevski argument shows that ${m = \frac{n(n+1)}{2}}$ is possible, and no better. So this suggests that ${m = \frac{n(n+1)}{2}}$ is the threshold for the smooth problem also, but this remains open in general. The case ${n=1}$ is trivial, and the ${n=2}$ case is not too difficult (if the curvature is non-zero) as the codimension ${m-n}$ is one in this case, and the problem reduces to that of solving a Monge-Ampere equation. With Bryant and Yang, Griffiths settled the ${n=3}$ case, under a non-degeneracy condition on the Einstein tensor. This is quite a serious paper – over 100 pages combining differential geometry, PDE methods (e.g. Nash-Moser iteration), and even some harmonic analysis (e.g. they rely at one key juncture on an extension theorem of my advisor, Elias Stein). The main difficulty is that the relevant PDE degenerates along a certain characteristic submanifold of the cotangent bundle, which then requires an extremely delicate analysis to handle.
This is a blog version of a talk I recently gave at the IPAM workshop on “The Kakeya Problem, Restriction Problem, and Sum-product Theory”.
Note: the discussion here will be highly non-rigorous in nature, being extremely loose in particular with asymptotic notation and with the notion of dimension. Caveat emptor.
One of the most infamous unsolved problems at the intersection of geometric measure theory, incidence combinatorics, and real-variable harmonic analysis is the Kakeya set conjecture. We will focus on the following three-dimensional case of the conjecture, stated informally as follows:
Conjecture 1 (Kakeya conjecture) Let ${E}$ be a subset of ${{\bf R}^3}$ that contains a unit line segment in every direction. Then ${\hbox{dim}(E) = 3}$.
This conjecture is not precisely formulated here, because we have not specified exactly what type of set ${E}$ is (e.g. measurable, Borel, compact, etc.) and what notion of dimension we are using. We will deliberately ignore these technical details in this post. It is slightly more convenient for us here to work with lines instead of unit line segments, so we work with the following slight variant of the conjecture (which is essentially equivalent):
Conjecture 2 (Kakeya conjecture, again) Let ${{\cal L}}$ be a family of lines in ${{\bf R}^3}$, each meeting ${B(0,1)}$, that contains a line in each direction. Let ${E}$ be the union of the restriction ${\ell \cap B(0,2)}$ to ${B(0,2)}$ of every line ${\ell}$ in ${{\cal L}}$. Then ${\hbox{dim}(E) = 3}$.
As the space of all directions in ${{\bf R}^3}$ is two-dimensional, we thus see that ${{\cal L}}$ is an (at least) two-dimensional subset of the four-dimensional space of lines in ${{\bf R}^3}$ (actually, it lies in a compact subset of this space, since we have constrained the lines to meet ${B(0,1)}$). One could then ask if this is the only property of ${{\cal L}}$ that is needed to establish the Kakeya conjecture, that is to say if any subset of ${B(0,2)}$ which contains a two-dimensional family of lines (restricted to ${B(0,2)}$, and meeting ${B(0,1)}$) is necessarily three-dimensional. Here we have an easy counterexample, namely a plane in ${B(0,2)}$ (passing through the origin), which contains a two-dimensional collection of lines. However, we can exclude this case by adding an additional axiom, leading to what one might call a “strong” Kakeya conjecture:
Conjecture 3 (Strong Kakeya conjecture) Let ${{\cal L}}$ be a two-dimensional family of lines in ${{\bf R}^3}$ that meet ${B(0,1)}$, and assume the Wolff axiom that no (affine) plane contains more than a one-dimensional family of lines in ${{\cal L}}$. Let ${E}$ be the union of the restriction ${\ell \cap B(0,2)}$ of every line ${\ell}$ in ${{\cal L}}$. Then ${\hbox{dim}(E) = 3}$.
Actually, to make things work out we need a more quantitative version of the Wolff axiom in which we constrain the metric entropy (and not just dimension) of lines that lie close to a plane, rather than exactly on the plane. However, for the informal discussion here we will ignore these technical details. Families of lines that point in different directions will obey the Wolff axiom, but the converse is not true in general.
In 1995, Wolff established the important lower bound ${\hbox{dim}(E) \geq 5/2}$ (for various notions of dimension, e.g. Hausdorff dimension) for sets ${E}$ in Conjecture 3 (and hence also for the other forms of the Kakeya problem). However, there is a key obstruction to going beyond the ${5/2}$ barrier, coming from the possible existence of half-dimensional (approximate) subfields of the reals ${{\bf R}}$. To explain this problem, it is easiest to first discuss the complex version of the strong Kakeya conjecture, in which all relevant (real) dimensions are doubled:
Conjecture 4 (Strong Kakeya conjecture over ${{\bf C}}$) Let ${{\cal L}}$ be a four (real) dimensional family of complex lines in ${{\bf C}^3}$ that meet the unit ball ${B(0,1)}$ in ${{\bf C}^3}$, and assume the Wolff axiom that no four (real) dimensional (affine) subspace contains more than a two (real) dimensional family of complex lines in ${{\cal L}}$. Let ${E}$ be the union of the restriction ${\ell \cap B(0,2)}$ of every complex line ${\ell}$ in ${{\cal L}}$. Then ${E}$ has real dimension ${6}$.
The argument of Wolff can be adapted to the complex case to show that all sets ${E}$ occurring in Conjecture 4 have real dimension at least ${5}$. Unfortunately, this is sharp, due to the following fundamental counterexample:
Proposition 5 (Heisenberg group counterexample) Let ${H \subset {\bf C}^3}$ be the Heisenberg group
$\displaystyle H = \{ (z_1,z_2,z_3) \in {\bf C}^3: \hbox{Im}(z_1) = \hbox{Im}(z_2 \overline{z_3}) \}$
and let ${{\cal L}}$ be the family of complex lines
$\displaystyle \ell_{s,t,\alpha} := \{ (\overline{\alpha} z + t, z, sz + \alpha): z \in {\bf C} \}$
with ${s,t \in {\bf R}}$ and ${\alpha \in {\bf C}}$. Then ${H}$ is a five (real) dimensional subset of ${{\bf C}^3}$ that contains every line in the four (real) dimensional set ${{\cal L}}$; however each four real dimensional (affine) subspace contains at most a two (real) dimensional set of lines in ${{\cal L}}$. In particular, the strong Kakeya conjecture over the complex numbers is false.
This proposition is proven by a routine computation, which we omit here. The group structure on ${H}$ is given by the group law
$\displaystyle (z_1,z_2,z_3) \cdot (w_1,w_2,w_3) = (z_1 + w_1 + z_2 \overline{w_3} - z_3 \overline{w_2}, z_2 +w_2, z_3+w_3),$
giving ${H}$ the structure of a ${2}$-step simply-connected nilpotent Lie group, isomorphic to the usual Heisenberg group over ${{\bf R}^2}$. Note that while the Heisenberg group is a counterexample to the complex strong Kakeya conjecture, it is not a counterexample to the complex form of the original Kakeya conjecture, because the complex lines ${{\cal L}}$ in the Heisenberg counterexample do not point in distinct directions, but instead only point in a three (real) dimensional subset of the four (real) dimensional space of available directions for complex lines. For instance, one has the one real-dimensional family of parallel lines
$\displaystyle \ell_{0,t,0} = \{ (t, z, 0): z \in {\bf C}\}$
with ${t \in {\bf R}}$; multiplying this family of lines on the right by a group element in ${H}$ gives other families of parallel lines, which in fact sweep out all of ${{\cal L}}$.
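The "routine computation" behind Proposition 5 is easy to spot-check numerically: every point of every line ${\ell_{s,t,\alpha}}$ satisfies the defining equation of ${H}$, and ${H}$ is closed under the stated group law. A quick sketch (the random sampling ranges are arbitrary):

```python
import random

random.seed(0)

def in_H(z1, z2, z3, tol=1e-9):
    # defining equation of the Heisenberg group: Im(z1) = Im(z2 * conj(z3))
    return abs(z1.imag - (z2 * z3.conjugate()).imag) < tol

def rand_complex():
    return complex(random.uniform(-3, 3), random.uniform(-3, 3))

# every point of every line ell_{s,t,alpha} lies in H
for _ in range(1000):
    s, t = random.uniform(-3, 3), random.uniform(-3, 3)  # real parameters
    alpha, z = rand_complex(), rand_complex()
    assert in_H(alpha.conjugate() * z + t, z, s * z + alpha)

# H is closed under the group law
def rand_H():
    # a random element of H: the real part of z1 is free, the imaginary part is forced
    z2, z3 = rand_complex(), rand_complex()
    return complex(random.uniform(-3, 3), (z2 * z3.conjugate()).imag), z2, z3

for _ in range(1000):
    (z1, z2, z3), (w1, w2, w3) = rand_H(), rand_H()
    p = (z1 + w1 + z2 * w3.conjugate() - z3 * w2.conjugate(), z2 + w2, z3 + w3)
    assert in_H(*p, tol=1e-8)

print("all membership checks passed")
```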
The Heisenberg counterexample ultimately arises from the “half-dimensional” (and hence degree two) subfield ${{\bf R}}$ of ${{\bf C}}$, which induces an involution ${z \mapsto \overline{z}}$ which can then be used to define the Heisenberg group ${H}$ through the formula
$\displaystyle H = \{ (z_1,z_2,z_3) \in {\bf C}^3: z_1 - \overline{z_1} = z_2 \overline{z_3} - z_3 \overline{z_2} \}.$
Analogous Heisenberg counterexamples can also be constructed if one works over finite fields ${{\bf F}_{q^2}}$ that contain a “half-dimensional” subfield ${{\bf F}_q}$; we leave the details to the interested reader. Morally speaking, if ${{\bf R}}$ in turn contained a subfield of dimension ${1/2}$ (or even a subring or “approximate subring”), then one ought to be able to use this field to generate a counterexample to the strong Kakeya conjecture over the reals. Fortunately, such subfields do not exist; this was a conjecture of Erdos and Volkmann that was proven by Edgar and Miller, and more quantitatively by Bourgain (answering a question of Nets Katz and myself). However, this fact is not entirely trivial to prove, being a key example of the sum-product phenomenon.
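The sum-product phenomenon is easy to witness in a toy discrete form: an arithmetic progression has a tiny sumset but a large product set, a geometric progression the reverse, and (this is the content of the theorem) no set can keep both small at once. This is of course only an integer caricature of the much more delicate discretised statements of Bourgain and of Edgar-Miller:

```python
def sumset(A, B):
    return {a + b for a in A for b in B}

def productset(A, B):
    return {a * b for a in A for b in B}

n = 50
A = set(range(1, n + 1))        # arithmetic progression: additively structured
G = {2 ** k for k in range(n)}  # geometric progression: multiplicatively structured

print(len(sumset(A, A)), len(productset(A, A)))  # 2n-1 = 99, versus many hundreds
print(len(sumset(G, G)), len(productset(G, G)))  # n(n+1)/2 = 1275, versus 2n-1 = 99
```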
We thus see that to go beyond the ${5/2}$ dimension bound of Wolff for the 3D Kakeya problem over the reals, one must do at least one of two things:
• (a) Exploit the distinct directions of the lines in ${{\mathcal L}}$ in a way that goes beyond the Wolff axiom; or
• (b) Exploit the fact that ${{\bf R}}$ does not contain half-dimensional subfields (or more generally, intermediate-dimensional approximate subrings).
(The situation is more complicated in higher dimensions, as there are more obstructions than the Heisenberg group; for instance, in four dimensions quadric surfaces are an important obstruction, as discussed in this paper of mine.)
Various partial or complete results on the Kakeya problem over various fields have been obtained through route (a) or route (b). For instance, in 2000, Nets Katz, Izabella Laba and myself used route (a) to improve Wolff’s lower bound of ${5/2}$ for Kakeya sets very slightly to ${5/2+10^{-10}}$ (for a weak notion of dimension, namely upper Minkowski dimension). In 2004, Bourgain, Katz, and myself established a sum-product estimate which (among other things) ruled out approximate intermediate-dimensional subrings of ${{\bf F}_p}$, and then pursued route (b) to obtain a corresponding improvement ${5/2+\epsilon}$ to the Kakeya conjecture over finite fields of prime order. The analogous (discretised) sum-product estimate over the reals was established by Bourgain in 2003, which in principle would allow one to extend the result of Katz, Laba and myself to the strong Kakeya setting, but this has not been carried out in the literature. Finally, in 2009, Dvir used route (a) and introduced the polynomial method (as discussed previously here) to completely settle the Kakeya conjecture in finite fields.
Below the fold, I present a heuristic argument of Nets Katz and myself, which in principle would use route (b) to establish the full (strong) Kakeya conjecture. In broad terms, the strategy is as follows:
1. Assume that the (strong) Kakeya conjecture fails, so that there are sets ${E}$ of the form in Conjecture 3 of dimension ${3-\sigma}$ for some ${\sigma>0}$. Assume that ${E}$ is “optimal”, in the sense that ${\sigma}$ is as large as possible.
2. Use the optimality of ${E}$ (and suitable non-isotropic rescalings) to establish strong forms of standard structural properties expected of such sets ${E}$, namely “stickiness”, “planiness”, “local graininess” and “global graininess” (we will roughly describe these properties below). Heuristically, these properties are constraining ${E}$ to “behave like” a putative Heisenberg group counterexample.
3. By playing all these structural properties off of each other, show that ${E}$ can be parameterised locally by a one-dimensional set which generates a counterexample to Bourgain’s sum-product theorem. This contradiction establishes the Kakeya conjecture.
Nets and I have had an informal version of this argument for many years, but were never able to make a satisfactory theorem (or even a partial Kakeya result) out of it, because we could not rigorously establish anywhere near enough of the necessary structural properties (stickiness, planiness, etc.) on the optimal set ${E}$ for a large number of reasons (one of which being that we did not have a good notion of dimension that did everything that we wished to demand of it). However, there is beginning to be movement in these directions (e.g. in this recent result of Guth using the polynomial method obtaining a weak version of local graininess on certain Kakeya sets). In view of this (and given that neither Nets nor I have been actively working in this direction for some time now, due to many other projects), we’ve decided to distribute these ideas more widely than before, and in particular on this blog.
Roth’s theorem on arithmetic progressions asserts that every subset of the integers ${{\bf Z}}$ of positive upper density contains infinitely many arithmetic progressions of length three. There are many versions and variants of this theorem. Here is one of them:
Theorem 1 (Roth’s theorem) Let ${G = (G,+)}$ be a compact abelian group, with Haar probability measure ${\mu}$, which is ${2}$-divisible (i.e. the map ${x \mapsto 2x}$ is surjective) and let ${A}$ be a measurable subset of ${G}$ with ${\mu(A) \geq \alpha}$ for some ${0 < \alpha < 1}$. Then we have
$\displaystyle \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r)\ d\mu(x) d\mu(r) \gg_\alpha 1,$
where ${X \gg_\alpha Y}$ denotes the bound ${X \geq c_\alpha Y}$ for some ${c_\alpha > 0}$ depending only on ${\alpha}$.
This theorem is usually formulated in the case that ${G}$ is a finite abelian group of odd order (in which case the result is essentially due to Meshulam) or more specifically a cyclic group ${G = {\bf Z}/N{\bf Z}}$ of odd order (in which case it is essentially due to Varnavides), but is also valid for the more general setting of ${2}$-divisible compact abelian groups, as we shall shortly see. One can be more precise about the dependence of the implied constant ${c_\alpha}$ on ${\alpha}$, but to keep the exposition simple we will work at the qualitative level here, without trying at all to get good quantitative bounds. The theorem is also true without the ${2}$-divisibility hypothesis, but the proof we will discuss runs into some technical issues due to the degeneracy of the ${2r}$ shift in that case.
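In the model case ${G = {\bf Z}/N{\bf Z}}$ with ${N}$ odd, the left-hand side of the conclusion of Theorem 1 is a finite sum that one can simply evaluate. The sketch below does so for a random set of density about ${1/2}$, for which the count concentrates near ${\alpha^3}$; the choice ${N=301}$ is arbitrary.

```python
import random

def three_ap_density(A, N):
    # evaluates (1/N^2) sum_{x,r} 1_A(x) 1_A(x+r) 1_A(x+2r) over Z/NZ
    ind = [1 if x in A else 0 for x in range(N)]
    total = sum(ind[x] * ind[(x + r) % N] * ind[(x + 2 * r) % N]
                for x in range(N) for r in range(N))
    return total / N ** 2

random.seed(1)
N = 301  # odd, so the map x -> 2x is a bijection on Z/NZ (2-divisibility)
A = {x for x in range(N) if random.random() < 0.5}
alpha = len(A) / N
print(alpha, three_ap_density(A, N))  # the 3-AP density is close to alpha**3
```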
We can deduce Theorem 1 from the following more general Khintchine-type statement. Let ${\hat G}$ denote the Pontryagin dual of a compact abelian group ${G}$, that is to say the set of all continuous homomorphisms ${\xi: x \mapsto \xi \cdot x}$ from ${G}$ to the (additive) unit circle ${{\bf R}/{\bf Z}}$. Thus ${\hat G}$ is a discrete abelian group, and functions ${f \in L^2(G)}$ have a Fourier transform ${\hat f \in \ell^2(\hat G)}$ defined by
$\displaystyle \hat f(\xi) := \int_G f(x) e^{-2\pi i \xi \cdot x}\ d\mu(x).$
If ${G}$ is ${2}$-divisible, then ${\hat G}$ is ${2}$-torsion-free in the sense that the map ${\xi \mapsto 2 \xi}$ is injective. For any finite set ${S \subset \hat G}$ and any radius ${\rho>0}$, define the Bohr set
$\displaystyle B(S,\rho) := \{ x \in G: \sup_{\xi \in S} \| \xi \cdot x \|_{{\bf R}/{\bf Z}} < \rho \}$
where ${\|\theta\|_{{\bf R}/{\bf Z}}}$ denotes the distance of ${\theta}$ to the nearest integer. We refer to the cardinality ${|S|}$ of ${S}$ as the rank of the Bohr set. We record a simple volume bound on Bohr sets:
Lemma 2 (Volume packing bound) Let ${G}$ be a compact abelian group with Haar probability measure ${\mu}$. For any Bohr set ${B(S,\rho)}$, we have
$\displaystyle \mu( B( S, \rho ) ) \gg_{|S|, \rho} 1.$
Proof: We can cover the torus ${({\bf R}/{\bf Z})^S}$ by ${O_{|S|,\rho}(1)}$ translates ${\theta+Q}$ of the cube ${Q := \{ (\theta_\xi)_{\xi \in S} \in ({\bf R}/{\bf Z})^S: \sup_{\xi \in S} \|\theta_\xi\|_{{\bf R}/{\bf Z}} < \rho/2 \}}$. Then the sets ${\{ x \in G: (\xi \cdot x)_{\xi \in S} \in \theta + Q \}}$ form a cover of ${G}$. But all of these sets lie in a translate of ${B(S,\rho)}$, and the claim then follows from the translation invariance of ${\mu}$. $\Box$
Given any Bohr set ${B(S,\rho)}$, we define a normalised “Lipschitz” cutoff function ${\nu_{B(S,\rho)}: G \rightarrow {\bf R}}$ by the formula
$\displaystyle \nu_{B(S,\rho)}(x) = c_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})_+ \ \ \ \ \ (1)$
where ${c_{B(S,\rho)}}$ is the constant such that
$\displaystyle \int_G \nu_{B(S,\rho)}\ d\mu = 1,$
thus
$\displaystyle c_{B(S,\rho)} = \left( \int_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})\ d\mu(x) \right)^{-1}.$
The function ${\nu_{B(S,\rho)}}$ should be viewed as an ${L^1}$-normalised “tent function” cutoff to ${B(S,\rho)}$. Note from Lemma 2 that
$\displaystyle 1 \ll_{|S|,\rho} c_{B(S,\rho)} \ll_{|S|,\rho} 1. \ \ \ \ \ (2)$
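On the cyclic group ${G = {\bf Z}/N{\bf Z}}$ (with ${\xi \cdot x = \xi x/N \hbox{ mod } 1}$ and ${\mu}$ the uniform probability measure), the Bohr set and the cutoff (1) are completely concrete. The sketch below builds ${\nu_{B(S,\rho)}}$, checks the normalisation ${\int_G \nu\ d\mu = 1}$, and checks that ${\nu}$ vanishes off ${B(S,\rho)}$; the particular ${N}$, ${S}$, ${\rho}$ are arbitrary.

```python
def circle_dist(theta):
    # || theta ||_{R/Z}: distance from theta to the nearest integer
    return abs(theta - round(theta))

def tent_cutoff(N, S, rho):
    # the tent function (1 - sup_{xi in S} ||xi x / N|| / rho)_+ on Z/NZ, normalised
    raw = [max(1.0 - max(circle_dist(xi * x / N) for xi in S) / rho, 0.0)
           for x in range(N)]
    mean = sum(raw) / N  # integral against the uniform probability measure
    return [v / mean for v in raw]

N, S, rho = 360, [1, 7], 0.1
nu = tent_cutoff(N, S, rho)
print(sum(nu) / N)  # normalised so that the integral of nu is 1
outside = [x for x in range(N)
           if max(circle_dist(xi * x / N) for xi in S) >= rho]
print(all(nu[x] == 0.0 for x in outside))  # nu is supported on the Bohr set B(S, rho)
```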
We then have the following sharper version of Theorem 1:
Theorem 3 (Roth-Khintchine theorem) Let ${G = (G,+)}$ be a ${2}$-divisible compact abelian group, with Haar probability measure ${\mu}$, and let ${\epsilon>0}$. Then for any measurable function ${f: G \rightarrow [0,1]}$, there exists a Bohr set ${B(S,\rho)}$ with ${|S| \ll_\epsilon 1}$ and ${\rho \gg_\epsilon 1}$ such that
$\displaystyle \int_G \int_G f(x) f(x+r) f(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \ \ \ \ \ (3)$
$\displaystyle \geq (\int_G f\ d\mu)^3 - O(\epsilon)$
where ${*}$ denotes the convolution operation
$\displaystyle f*g(x) := \int_G f(y) g(x-y)\ d\mu(y).$
A variant of this result (expressed in the language of ergodic theory) appears in this paper of Bergelson, Host, and Kra; a combinatorial version of the Bergelson-Host-Kra result that is closer to Theorem 3 subsequently appeared in this paper of Ben Green and myself, but this theorem arguably appears implicitly in a much older paper of Bourgain. To see why Theorem 3 implies Theorem 1, we apply the theorem with ${f := 1_A}$ and ${\epsilon}$ equal to a small multiple of ${\alpha^3}$ to conclude that there is a Bohr set ${B(S,\rho)}$ with ${|S| \ll_\alpha 1}$ and ${\rho \gg_\alpha 1}$ such that
$\displaystyle \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \gg \alpha^3.$
But from (2) we have the pointwise bound ${\nu_{B(S,\rho)}*\nu_{B(S,\rho)} \ll_\alpha 1}$, and Theorem 1 follows.
Below the fold, we give a short proof of Theorem 3, using an “energy pigeonholing” argument that essentially dates back to the 1986 paper of Bourgain mentioned previously (not to be confused with a later 1999 paper of Bourgain on Roth’s theorem that was highly influential, for instance in emphasising the importance of Bohr sets). The idea is to use the pigeonhole principle to choose the Bohr set ${B(S,\rho)}$ to capture all the “large Fourier coefficients” of ${f}$, but such that a certain “dilate” of ${B(S,\rho)}$ does not capture much more Fourier energy of ${f}$ than ${B(S,\rho)}$ itself. The bound (3) may then be obtained through elementary Fourier analysis, without much need to explicitly compute things like the Fourier transform of an indicator function of a Bohr set. (However, the bound obtained by this argument is going to be quite poor – of tower-exponential type.) To do this we perform a structural decomposition of ${f}$ into “structured”, “small”, and “highly pseudorandom” components, as is common in the subject (e.g. in this previous blog post), but even though we crucially need to retain non-negativity of one of the components in this decomposition, we can avoid recourse to conditional expectation with respect to a partition (or “factor”) of the space, using instead convolution with one of the ${\nu_{B(S,\rho)}}$ considered above to achieve a similar effect.
A core foundation of the subject now known as arithmetic combinatorics (and particularly the subfield of additive combinatorics) are the elementary sum set estimates (sometimes known as “Ruzsa calculus”) that relate the cardinality of various sum sets
$\displaystyle A+B := \{ a+b: a \in A, b \in B \}$
and difference sets
$\displaystyle A-B := \{ a-b: a \in A, b \in B \},$
as well as iterated sumsets such as ${3A=A+A+A}$, ${2A-2A=A+A-A-A}$, and so forth. Here, ${A, B}$ are finite non-empty subsets of some additive group ${G = (G,+)}$ (classically one took ${G={\bf Z}}$ or ${G={\bf R}}$, but nowadays one usually considers more general additive groups). Some basic estimates in this vein are the following:
Lemma 1 (Ruzsa covering lemma) Let ${A, B}$ be finite non-empty subsets of ${G}$. Then ${A}$ may be covered by at most ${\frac{|A+B|}{|B|}}$ translates of ${B-B}$.
Proof: Consider a maximal set of disjoint translates ${a+B}$ of ${B}$ by elements ${a \in A}$. These translates have cardinality ${|B|}$, are disjoint, and lie in ${A+B}$, so there are at most ${\frac{|A+B|}{|B|}}$ of them. By maximality, for any ${a' \in A}$, ${a'+B}$ must intersect at least one of the selected ${a+B}$, thus ${a' \in a+B-B}$, and the claim follows. $\Box$
Lemma 2 (Ruzsa triangle inequality) Let ${A,B,C}$ be finite non-empty subsets of ${G}$. Then ${|A-C| \leq \frac{|A-B| |B-C|}{|B|}}$.
Proof: Consider the addition map ${+: (x,y) \mapsto x+y}$ from ${(A-B) \times (B-C)}$ to ${G}$. Every element ${a-c}$ of ${A - C}$ has a preimage ${\{ (x,y) \in (A-B) \times (B-C): x+y = a-c \}}$ under this map of cardinality at least ${|B|}$, thanks to the obvious identity ${a-c = (a-b) + (b-c)}$ for each ${b \in B}$. Since ${(A-B) \times (B-C)}$ has cardinality ${|A-B| |B-C|}$, the claim follows. $\Box$
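Both lemmas, and the greedy construction in the covering-lemma proof, can be stress-tested on random sets of integers; the set sizes and ranges below are arbitrary.

```python
import random

def sumset(A, B):
    return {a + b for a in A for b in B}

def difset(A, B):
    return {a - b for a in A for b in B}

random.seed(2)
A = set(random.sample(range(1000), 40))
B = set(random.sample(range(1000), 25))
C = set(random.sample(range(1000), 30))

# Ruzsa covering lemma: take a maximal collection of disjoint translates a + B
chosen, occupied = [], set()
for a in sorted(A):
    translate = {a + b for b in B}
    if not (translate & occupied):
        chosen.append(a)
        occupied |= translate
assert len(chosen) * len(B) <= len(sumset(A, B))  # at most |A+B|/|B| translates
BB = difset(B, B)
assert all(any(a - c in BB for c in chosen) for a in A)  # A covered by chosen + (B-B)

# Ruzsa triangle inequality: |A-C| * |B| <= |A-B| * |B-C|
assert len(difset(A, C)) * len(B) <= len(difset(A, B)) * len(difset(B, C))
print("both lemmas verified")
```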
Such estimates (which are covered, incidentally, in Section 2 of my book with Van Vu) are particularly useful for controlling finite sets ${A}$ of small doubling, in the sense that ${|A+A| \leq K|A|}$ for some bounded ${K}$. (There are deeper theorems, most notably Freiman’s theorem, which give more control than what elementary Ruzsa calculus does, however the known bounds in the latter theorem are worse than polynomial in ${K}$ (although it is conjectured otherwise), whereas the elementary estimates are almost all polynomial in ${K}$.)
However, there are some settings in which the standard sum set estimates are not quite applicable. One such setting is the continuous setting, where one is dealing with bounded open sets in an additive Lie group (e.g. ${{\bf R}^n}$ or a torus ${({\bf R}/{\bf Z})^n}$) rather than a finite setting. Here, one can largely replicate the discrete sum set estimates by working with a Haar measure in place of cardinality; this is the approach taken for instance in this paper of mine. However, there is another setting, which one might dub the “discretised” setting (as opposed to the “discrete” setting or “continuous” setting), in which the sets ${A}$ remain finite (or at least discretisable to be finite), but for which there is a certain amount of “roundoff error” coming from the discretisation. As a typical example (working now in a non-commutative multiplicative setting rather than an additive one), consider the orthogonal group ${O_n({\bf R})}$ of orthogonal ${n \times n}$ matrices, and let ${A}$ be the matrices obtained by starting with all of the orthogonal matrices in ${O_n({\bf R})}$ and rounding each coefficient of each matrix in this set to the nearest multiple of ${\epsilon}$, for some small ${\epsilon>0}$. This forms a finite set (whose cardinality grows as ${\epsilon\rightarrow 0}$ like a certain negative power of ${\epsilon}$). In the limit ${\epsilon \rightarrow 0}$, the set ${A}$ is not a set of small doubling in the discrete sense. However, ${A \cdot A}$ is still close to ${A}$ in a metric sense, being contained in the ${O_n(\epsilon)}$-neighbourhood of ${A}$. Another key example comes from graphs ${\Gamma := \{ (x, f(x)): x \in G \}}$ of maps ${f: G \rightarrow H}$ from one additive group ${G = (G,+)}$ to another ${H = (H,+)}$.
If ${f}$ is “approximately additive” in the sense that for all ${x,y \in G}$, ${f(x+y)}$ is close to ${f(x)+f(y)}$ in some metric, then ${\Gamma}$ might not have small doubling in the discrete sense (because ${f(x+y)-f(x)-f(y)}$ could take a large number of values), but could be considered a set of small doubling in a discretised sense.
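One can check numerically, in the ${2 \times 2}$ rotation case, that ${A \cdot A}$ stays within an ${O_n(\epsilon)}$-neighbourhood of ${A}$: the product of two rounded rotations lies within a few ${\epsilon}$ (in Frobenius norm) of the rounding of the exact product, which is again an element of ${A}$. A sketch (the trial count and the value of ${\epsilon}$ are arbitrary):

```python
import numpy as np

def rot(theta):
    # a 2x2 rotation matrix, an element of O_2(R)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def round_to_grid(M, eps):
    # round every matrix entry to the nearest multiple of eps
    return np.round(M / eps) * eps

eps = 0.01
rng = np.random.default_rng(3)
worst = 0.0
for _ in range(200):
    t1, t2 = rng.uniform(0.0, 2.0 * np.pi, size=2)
    # product of two elements of the rounded set A ...
    P = round_to_grid(rot(t1), eps) @ round_to_grid(rot(t2), eps)
    # ... versus the rounding of the exact product, which is again an element of A
    nearest = round_to_grid(rot(t1 + t2), eps)
    worst = max(worst, np.linalg.norm(P - nearest))
print(worst)  # bounded by a small multiple of eps (roughly 3*eps + eps**2)
```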
One would like to have a sum set (or product set) theory that can handle these cases, particularly in “high-dimensional” settings in which the standard methods of passing back and forth between continuous, discrete, or discretised settings behave poorly from a quantitative point of view due to the exponentially large doubling constant of balls. One way to do this is to impose a translation invariant metric ${d}$ on the underlying group ${G = (G,+)}$ (reverting back to additive notation), and replace the notion of cardinality by that of metric entropy. There are a number of almost equivalent ways to define this concept:
Definition 3 Let ${(X,d)}$ be a metric space, let ${E}$ be a subset of ${X}$, and let ${r>0}$ be a radius.
• The packing number ${N^{pack}_r(E)}$ is the largest number of points ${x_1,\dots,x_n}$ one can pack inside ${E}$ such that the balls ${B(x_1,r),\dots,B(x_n,r)}$ are disjoint.
• The internal covering number ${N^{int}_r(E)}$ is the fewest number of points ${x_1,\dots,x_n \in E}$ such that the balls ${B(x_1,r),\dots,B(x_n,r)}$ cover ${E}$.
• The external covering number ${N^{ext}_r(E)}$ is the fewest number of points ${x_1,\dots,x_n \in X}$ such that the balls ${B(x_1,r),\dots,B(x_n,r)}$ cover ${E}$.
• The metric entropy ${N^{ent}_r(E)}$ is the largest number of points ${x_1,\dots,x_n}$ one can find in ${E}$ that are ${r}$-separated, thus ${d(x_i,x_j) \geq r}$ for all ${i \neq j}$.
It is an easy exercise to verify the inequalities
$\displaystyle N^{ent}_{2r}(E) \leq N^{pack}_r(E) \leq N^{ext}_r(E) \leq N^{int}_r(E) \leq N^{ent}_r(E)$
for any ${r>0}$, and that ${N^*_r(E)}$ is non-increasing in ${r}$ and non-decreasing in ${E}$ for the three choices ${* = pack,ext,ent}$ (but monotonicity in ${E}$ can fail for ${*=int}$!). It turns out that the external covering number ${N^{ext}_r(E)}$ is slightly more convenient than the other notions of metric entropy, so we will abbreviate ${N_r(E) = N^{ext}_r(E)}$. The cardinality ${|E|}$ can be viewed as the limit of the entropies ${N^*_r(E)}$ as ${r \rightarrow 0}$.
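In one dimension, with the usual metric on ${{\bf R}}$, all four quantities can be computed exactly by greedy scans of the sorted point set, which makes the chain of inequalities easy to verify. A sketch (open/closed ball conventions only matter at exact distance ties, which the random data below avoids almost surely):

```python
import random

def n_ent(E, r):
    # largest number of r-separated points of E; a greedy left-to-right scan is optimal in 1-D
    count, last = 0, None
    for x in sorted(E):
        if last is None or x - last >= r:
            count, last = count + 1, x
    return count

def n_pack(E, r):
    # balls B(x_i, r) are disjoint exactly when the centres are 2r-separated
    return n_ent(E, 2 * r)

def n_int(E, r):
    # fewest open balls of radius r centred in E covering E (greedy: rightmost usable centre)
    pts = sorted(E)
    count, i = 0, 0
    while i < len(pts):
        centre = max(p for p in pts if p < pts[i] + r)
        cut = centre + r
        count += 1
        i = next((j for j in range(i, len(pts)) if pts[j] >= cut), len(pts))
    return count

def n_ext(E, r):
    # centres may lie anywhere in R, so each ball simply covers an interval of length 2r
    pts = sorted(E)
    count, i = 0, 0
    while i < len(pts):
        right = pts[i] + 2 * r
        count += 1
        i = next((j for j in range(i, len(pts)) if pts[j] > right), len(pts))
    return count

random.seed(4)
E = [random.uniform(0.0, 10.0) for _ in range(60)]
for r in (0.1, 0.3, 0.9):
    chain = (n_ent(E, 2 * r), n_pack(E, r), n_ext(E, r), n_int(E, r), n_ent(E, r))
    assert chain[0] <= chain[1] <= chain[2] <= chain[3] <= chain[4], chain
print("entropy chain verified")
```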
If we have the bounded doubling property that ${B(0,2r)}$ is covered by ${O(1)}$ translates of ${B(0,r)}$ for each ${r>0}$, and one has a Haar measure ${m}$ on ${G}$ which assigns a positive finite mass to each ball, then any of the above entropies ${N^*_r(E)}$ is comparable to ${m( E + B(0,r) ) / m(B(0,r))}$, as can be seen by simple volume packing arguments. Thus in the bounded doubling setting one can usually use the measure-theoretic sum set theory to derive entropy-theoretic sumset bounds (see e.g. this paper of mine for an example of this). However, it turns out that even in the absence of bounded doubling, one still has an entropy analogue of most of the elementary sum set theory, except that one has to accept some degradation in the radius parameter ${r}$ by some absolute constant. Such losses can be acceptable in applications in which the underlying sets ${A}$ are largely “transverse” to the balls ${B(0,r)}$, so that the ${N_r}$-entropy of ${A}$ is largely independent of ${r}$; this is a situation which arises in particular in the case of graphs ${\Gamma = \{ (x,f(x)): x \in G \}}$ discussed above, if one works with “vertical” metrics whose balls extend primarily in the vertical direction. (I hope to present a specific application of this type here in the near future.)
Henceforth we work in an additive group ${G}$ equipped with a translation-invariant metric ${d}$. (One can also generalise things slightly by allowing the metric to attain the values ${0}$ or ${+\infty}$, without changing much of the analysis below.) By the Heine-Borel theorem, any precompact set ${E}$ will have finite entropy ${N_r(E)}$ for any ${r>0}$. We now have analogues of the two basic Ruzsa lemmas above:
Lemma 4 (Ruzsa covering lemma) Let ${A, B}$ be precompact non-empty subsets of ${G}$, and let ${r>0}$. Then ${A}$ may be covered by at most ${\frac{N_r(A+B)}{N_r(B)}}$ translates of ${B-B+B(0,2r)}$.
Proof: Let ${a_1,\dots,a_n \in A}$ be a maximal set of points such that the sets ${a_i + B + B(0,r)}$ are all disjoint. Then the sets ${a_i+B}$ are disjoint in ${A+B}$ and have entropy ${N_r(a_i+B)=N_r(B)}$, and furthermore any ball of radius ${r}$ can intersect at most one of the ${a_i+B}$. We conclude that ${N_r(A+B) \geq n N_r(B)}$, so ${n \leq \frac{N_r(A+B)}{N_r(B)}}$. If ${a \in A}$, then ${a+B+B(0,r)}$ must intersect one of the ${a_i + B + B(0,r)}$, so ${a \in a_i + B-B + B(0,2r)}$, and the claim follows. $\Box$
Lemma 5 (Ruzsa triangle inequality) Let ${A,B,C}$ be precompact non-empty subsets of ${G}$, and let ${r>0}$. Then ${N_{4r}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}$.
Proof: Consider the addition map ${+: (x,y) \mapsto x+y}$ from ${(A-B) \times (B-C)}$ to ${G}$. The domain ${(A-B) \times (B-C)}$ may be covered by ${N_r(A-B) N_r(B-C)}$ product balls ${B(x,r) \times B(y,r)}$. Every element ${a-c}$ of ${A - C}$ has a preimage ${\{ (x,y) \in (A-B) \times (B-C): x+y = a-c \}}$ under this map which projects to a translate of ${B}$, and thus must meet at least ${N_r(B)}$ of these product balls. However, if two elements of ${A-C}$ are separated by a distance of at least ${4r}$, then no product ball can intersect both preimages. We thus see that ${N^{ent}_{4r}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}$, and the claim follows. $\Box$
Below the fold we will record some further metric entropy analogues of sum set estimates (basically redoing much of Chapter 2 of my book with Van Vu). Unfortunately there does not seem to be a direct way to abstractly deduce metric entropy results from their sum set analogues (basically due to the failure of a certain strong version of Freiman’s theorem, as discussed in this previous post); nevertheless, the proofs of the discrete arguments are elementary enough that they can be modified with a small amount of effort to handle the entropy case. (In fact, there should be a very general model-theoretic framework in which both the discrete and entropy arguments can be processed in a unified manner; see this paper of Hrushovski for one such framework.)
It is also likely that many of the arguments here extend to the non-commutative setting, but for simplicity we will not pursue such generalisations here.
(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons institute for the theory of computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)
Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.
The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:
| Discrete | Continuous | Limit method |
|---|---|---|
| Ramsey theory | Topological dynamics | Compactness |
| Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle |
| Graph/hypergraph regularity | Measure theory | Graph limits |
| Polynomial regularity | Linear algebra | Ultralimits |
| Structural decompositions | Hilbert space geometry | Ultralimits |
| Fourier analysis | Spectral theory | Direct and inverse limits |
| Quantitative algebraic geometry | Algebraic geometry | Schemes |
| Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits |
| Approximate group theory | Topological group theory | Model theory |
As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:
• Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects ${x_n}$ in a common space ${X}$, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object ${\lim_{n \rightarrow \infty} x_n}$, which remains in the same space, and is “close” to many of the original objects ${x_n}$ with respect to the given metric or topology.
• Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects ${x_n}$ in a category ${X}$, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit ${\varinjlim x_n}$ or the inverse limit ${\varprojlim x_n}$ of these objects, which is another object in the same category ${X}$, and is connected to the original objects ${x_n}$ by various morphisms.
• Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects ${x_{\bf n}}$ or of spaces ${X_{\bf n}}$, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, ${X_{\bf n}}$ might be groups and ${x_{\bf n}}$ might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$ or a new space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$, which is still a model of the same language (e.g. if the spaces ${X_{\bf n}}$ were all groups, then the limiting space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ is an abelian group, then the ${X_{\bf n}}$ will also be abelian groups for many ${{\bf n}}$.)
The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects ${x_{\bf n}}$ to all lie in a common space ${X}$ in order to form an ultralimit ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; they are permitted to lie in different spaces ${X_{\bf n}}$; this is more natural in many discrete contexts, e.g. when considering graphs on ${{\bf n}}$ vertices in the limit when ${{\bf n}}$ goes to infinity. Also, no convergence properties on the ${x_{\bf n}}$ are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces ${X_{\bf n}}$ involved are required in order to construct the ultraproduct.
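For readers who have not seen the construction, it may help to record the standard definition being alluded to here (a sketch, with notation matching the post): fix a non-principal ultrafilter ${\alpha}$ on the index set, and define the ultraproduct as the quotient of the product by ${\alpha}$-almost-everywhere agreement,

```latex
\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}
  := \Big( \prod_{{\bf n}} X_{\bf n} \Big) \Big/ \sim,
\qquad
(x_{\bf n})_{\bf n} \sim (y_{\bf n})_{\bf n}
  \iff \{ {\bf n} : x_{\bf n} = y_{\bf n} \} \in \alpha.
```

The ultralimit ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$ is then simply the equivalence class of the sequence ${(x_{\bf n})_{\bf n}}$; note that no convergence hypothesis on the ${x_{\bf n}}$ is needed for this class to exist.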
With so few requirements on the objects ${x_{\bf n}}$ or spaces ${X_{\bf n}}$, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly useful for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś's theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the ${x_{\bf n}}$, will be exactly obeyed by the limit object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking the contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”).
To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.
Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.
Let ${F}$ be a field. A definable set over ${F}$ is a set of the form
$\displaystyle \{ x \in F^n | \phi(x) \hbox{ is true} \} \ \ \ \ \ (1)$
where ${n}$ is a natural number, and ${\phi(x)}$ is a predicate involving the ring operations ${+,\times}$ of ${F}$, the equality symbol ${=}$, an arbitrary number of constants and free variables in ${F}$, the quantifiers ${\forall, \exists}$, boolean operators such as ${\vee,\wedge,\neg}$, and parentheses and colons, where the quantifiers are always understood to be over the field ${F}$. Thus, for instance, the set of quadratic residues
$\displaystyle \{ x \in F | \exists y: x = y \times y \}$
is definable over ${F}$, and any algebraic variety over ${F}$ is also a definable set over ${F}$. Henceforth we will abbreviate “definable over ${F}$” simply as “definable”.
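As a toy illustration (our addition, not from the original post), the quadratic-residue set above can be enumerated directly in a small prime field ${F_p}$; the function name below is ours:

```python
def quadratic_residues(p):
    """Enumerate the definable set {x in F_p : exists y with x = y*y}."""
    return sorted({(y * y) % p for y in range(p)})

print(quadratic_residues(7))  # -> [0, 1, 2, 4]
```

This is of course only the semantics of the defining formula; it does not track the complexity bounds discussed next.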
If ${F}$ is a finite field, then every subset of ${F^n}$ is definable, since finite sets are automatically definable. However, we can obtain a more interesting notion in this case by restricting the complexity of a definable set. We say that ${E \subset F^n}$ is a definable set of complexity at most ${M}$ if ${n \leq M}$, and ${E}$ can be written in the form (1) for some predicate ${\phi}$ of length at most ${M}$ (where all operators, quantifiers, relations, variables, constants, and punctuation symbols are considered to have unit length). Thus, for instance, a hypersurface in ${n}$ dimensions of degree ${d}$ would be a definable set of complexity ${O_{n,d}(1)}$. We will then be interested in the regime where the complexity remains bounded, but the field size (or field characteristic) becomes large.
In a recent paper, I established (in the large characteristic case) the following regularity lemma for dense definable graphs, which significantly strengthens the Szemerédi regularity lemma in this context, by eliminating “bad” pairs, giving a polynomially strong regularity, and also giving definability of the cells:
Lemma 1 (Algebraic regularity lemma) Let ${F}$ be a finite field, let ${V,W}$ be definable non-empty sets of complexity at most ${M}$, and let ${E \subset V \times W}$ also be definable with complexity at most ${M}$. Assume that the characteristic of ${F}$ is sufficiently large depending on ${M}$. Then we may partition ${V = V_1 \cup \ldots \cup V_m}$ and ${W = W_1 \cup \ldots \cup W_n}$ with ${m,n = O_M(1)}$, with the following properties:
• (Definability) Each of the ${V_1,\ldots,V_m,W_1,\ldots,W_n}$ are definable of complexity ${O_M(1)}$.
• (Size) We have ${|V_i| \gg_M |V|}$ and ${|W_j| \gg_M |W|}$ for all ${i=1,\ldots,m}$ and ${j=1,\ldots,n}$.
• (Regularity) We have
$\displaystyle |E \cap (A \times B)| = d_{ij} |A| |B| + O_M( |F|^{-1/4} |V| |W| ) \ \ \ \ \ (2)$
for all ${i=1,\ldots,m}$, ${j=1,\ldots,n}$, ${A \subset V_i}$, and ${B\subset W_j}$, where ${d_{ij}}$ is a rational number in ${[0,1]}$ with numerator and denominator ${O_M(1)}$.
My original proof of this lemma was quite complicated, based on an explicit calculation of the “square”
$\displaystyle \mu(w,w') := \{ v \in V: (v,w), (v,w') \in E \}$
of ${E}$ using the Lang-Weil bound and some facts about the étale fundamental group. It was the reliance on the latter which was the main reason why the result was restricted to the large characteristic setting. (I then applied this lemma to classify expanding polynomials over finite fields of large characteristic, but I will not discuss these applications here; see this previous blog post for more discussion.)
Recently, Anand Pillay and Sergei Starchenko (and independently, Udi Hrushovski) have observed that the theory of the étale fundamental group is not necessary in the argument, and the lemma can in fact be deduced from quite general model-theoretic techniques, in particular using (a local version of) the concept of stability. One of the consequences of this new proof of the lemma is that the hypothesis of large characteristic can be omitted; the lemma is now known to be valid for arbitrary finite fields ${F}$ (although its content is trivial if the field is not sufficiently large depending on the complexity bound ${M}$).
Inspired by this, I decided to see if I could find yet another proof of the algebraic regularity lemma, again avoiding the theory of the étale fundamental group. It turns out that the spectral proof of the Szemerédi regularity lemma (discussed in this previous blog post) adapts very nicely to this setting. The key fact needed about definable sets over finite fields is that their cardinality takes on an essentially discrete set of values. More precisely, we have the following fundamental result of Chatzidakis, van den Dries, and Macintyre:
Proposition 2 Let ${F}$ be a finite field, and let ${M > 0}$.
• (Discretised cardinality) If ${E}$ is a non-empty definable set of complexity at most ${M}$, then one has
$\displaystyle |E| = c |F|^d + O_M( |F|^{d-1/2} ) \ \ \ \ \ (3)$
where ${d = O_M(1)}$ is a natural number, and ${c}$ is a positive rational number with numerator and denominator ${O_M(1)}$. In particular, we have ${|F|^d \ll_M |E| \ll_M |F|^d}$.
• (Definable cardinality) Assume ${|F|}$ is sufficiently large depending on ${M}$. If ${V, W}$, and ${E \subset V \times W}$ are definable sets of complexity at most ${M}$, so that ${E_w := \{ v \in V: (v,w) \in E \}}$ can be viewed as a definable subset of ${V}$ that is definably parameterised by ${w \in W}$, then for each natural number ${d = O_M(1)}$ and each positive rational ${c}$ with numerator and denominator ${O_M(1)}$, the set
$\displaystyle \{ w \in W: |E_w| = c |F|^d + O_M( |F|^{d-1/2} ) \} \ \ \ \ \ (4)$
is definable with complexity ${O_M(1)}$, where the implied constants in the asymptotic notation used to define (4) are the same as those appearing in (3). (Informally: the “dimension” ${d}$ and “measure” ${c}$ of ${E_w}$ depend definably on ${w}$.)
We will take this proposition as a black box; a proof can be obtained by combining the description of definable sets over pseudofinite fields (discussed in this previous post) with the Lang-Weil bound (discussed in this previous post). (The former fact is phrased using nonstandard analysis, but one can use standard compactness-and-contradiction arguments to convert such statements to statements in standard analysis, as discussed in this post.)
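To make the discretised-cardinality clause concrete, here is a quick numerical check (ours, not from the text) for the quadratic-residue set, for which ${d = 1}$ and ${c = 1/2}$ in (3): for an odd prime ${p}$ there are exactly ${(p+1)/2}$ residues (counting ${0}$), i.e. ${|E| = \frac{1}{2}|F| + O(1)}$.

```python
def qr_count(p):
    """|E| for E = {x in F_p : exists y with x = y*y}, counting 0."""
    return len({(y * y) % p for y in range(p)})

# Proposition-2-style behaviour: |E| = (1/2)*p + O(1) for odd primes p.
for p in [101, 1009, 10007]:
    assert qr_count(p) == (p + 1) // 2
    print(p, qr_count(p) / p)  # ratio tends to c = 1/2
```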
The above proposition places severe restrictions on the cardinality of definable sets; for instance, it shows that one cannot have a definable set of complexity at most ${M}$ and cardinality ${|F|^{1/2}}$, if ${|F|}$ is sufficiently large depending on ${M}$. If ${E \subset V}$ are definable sets of complexity at most ${M}$, it shows that ${|E| = (c+ O_M(|F|^{-1/2})) |V|}$ for some rational ${0\leq c \leq 1}$ with numerator and denominator ${O_M(1)}$; furthermore, if ${c=0}$, we may improve this bound to ${|E| = O_M( |F|^{-1} |V|)}$. In particular, we obtain the following “self-improving” properties:
• If ${E \subset V}$ are definable of complexity at most ${M}$ and ${|E| \leq \epsilon |V|}$ for some ${\epsilon>0}$, then (if ${\epsilon}$ is sufficiently small depending on ${M}$ and ${F}$ is sufficiently large depending on ${M}$) this forces ${|E| = O_M( |F|^{-1} |V| )}$.
• If ${E \subset V}$ are definable of complexity at most ${M}$ and ${||E| - c |V|| \leq \epsilon |V|}$ for some ${\epsilon>0}$ and positive rational ${c}$, then (if ${\epsilon}$ is sufficiently small depending on ${M,c}$ and ${F}$ is sufficiently large depending on ${M,c}$) this forces ${|E| = c |V| + O_M( |F|^{-1/2} |V| )}$.
It turns out that these self-improving properties can be applied to the coefficients of various matrices (basically powers of the adjacency matrix associated to ${E}$) that arise in the spectral proof of the regularity lemma to significantly improve the bounds in that lemma; we describe how this is done below the fold. We also make some connections to the stability-based proofs of Pillay-Starchenko and Hrushovski.
I’ve just uploaded to the arXiv my article “Algebraic combinatorial geometry: the polynomial method in arithmetic combinatorics, incidence combinatorics, and number theory“, submitted to the new journal “EMS surveys in the mathematical sciences“. This is the first draft of a survey article on the polynomial method – a technique in combinatorics and number theory for controlling a relevant set of points by comparing it with the zero set of a suitably chosen polynomial, and then using tools from algebraic geometry (e.g. Bezout’s theorem) on that zero set. As such, the method combines algebraic geometry with combinatorial geometry, and could be viewed as the philosophy of a combined field which I dub “algebraic combinatorial geometry”. There is also an important extension of this method when one is working over the reals, in which methods from algebraic topology (e.g. the ham sandwich theorem and its generalisation to polynomials), and not just algebraic geometry, also come into play.
The polynomial method has been used independently many times in mathematics; for instance, it plays a key role in the proof of Baker’s theorem in transcendence theory, and in Stepanov’s method for giving an elementary proof of the Riemann hypothesis for curves over finite fields; in combinatorics, the combinatorial Nullstellensatz of Alon is another relatively early use of the polynomial method. More recently, it underlies Dvir’s proof of the Kakeya conjecture over finite fields and Guth and Katz’s near-complete solution to the Erdős distance problem in the plane, and can be used to give a short proof of the Szemerédi-Trotter theorem. One of the aims of this survey is to try to present all of these disparate applications of the polynomial method in a somewhat unified context; my hope is that there will eventually be a systematic foundation for algebraic combinatorial geometry which naturally contains all of these different instances of the polynomial method (and also suggests new instances to explore); but the field is unfortunately not at that stage of maturity yet.
This is something of a first draft, so comments and suggestions are even more welcome than usual. (For instance, I have already had my attention drawn to some additional uses of the polynomial method in the literature that I was not previously aware of.) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1097, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9602397084236145, "perplexity": 201.05257486294852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463453.54/warc/CC-MAIN-20150226074103-00230-ip-10-28-5-156.ec2.internal.warc.gz"} |
http://mathhelpforum.com/statistics/105303-probability-help-please.html

Q: A study of the residents of a region showed that 20% were smokers. The probability of death due to lung cancer, given that a person smoked, was ten times the probability of death due to lung cancer, given that the person did not smoke. If the probability of death due to lung cancer in the region is .006, what is the probability of death due to lung cancer given that the person is a smoker?
I tried to use Bayes' rule, but had no luck. How do I make use of the fact that P(D|S)=10P(D|S'), where D represents deaths due to lung cancer and S is for smokers.
Thanks
2. Originally Posted by Danneedshelp
Q: A study of the residents of a region showed that 20% were smokers. The probability of death due to lung cancer, given that a person smoked, was ten times the probability of death due to lung cancer, given that the person did not smoke. If the probability of death due to lung cancer in the region is .006, what is the probability of death due to lung cancer given that the person is a smoker?
I tried to use Bayes' rule, but had no luck. How do I make use of the fact that P(D|S)=10P(D|S'), where D represents deaths due to lung cancer and S is for smokers.
Thanks
Hint: P(D) = P(D|S) P(S) + P(D|S') P(S')
3. Originally Posted by awkward
Hint: P(D) = P(D|S) P(S) + P(D|S') P(S')
How do I use that for this problem? I set up a box with all the information, but I am still not getting the desired 0.21 as my answer.
4. Originally Posted by Danneedshelp
Q: A study of the residents of a region showed that 20% were smokers. The probability of death due to lung cancer, given that a person smoked, was ten times the probability of death due to lung cancer, given that the person did not smoke. If the probability of death due to lung cancer in the region is .006, what is the probability of death due to lung cancer given that the person is a smoker?
I tried to use Bayes' rule, but had no luck. How do I make use of the fact that P(D|S)=10P(D|S'), where D represents deaths due to lung cancer and S is for smokers.
Thanks
Draw a tree diagram. The first two branches are S (smokes) and S' (does not smoke). From each of those two branches there are two more branches. The first branch is D (death) and the second branch is D' (no death).
Let Pr(D | S') = x. Then Pr(D | S) = 10x.
From the tree diagram: $0.006 = (0.2)(10x) + (0.8)(x)$. Solve for $x$ and substitute into Pr(D | S) = 10x. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9579800963401794, "perplexity": 426.2445438168164}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718311.12/warc/CC-MAIN-20161020183838-00372-ip-10-171-6-4.ec2.internal.warc.gz"} |
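Following the tree-diagram hint, the last step of arithmetic can be checked in a few lines (our addition, not from the thread):

```python
# P(D) = P(D|S)P(S) + P(D|S')P(S'), with P(D|S) = 10x and P(D|S') = x:
#   0.006 = 0.2 * 10x + 0.8 * x = 2.8x
p_smoker, p_death = 0.2, 0.006
x = p_death / (10 * p_smoker + (1 - p_smoker))   # P(D | S')
p_death_given_smoker = 10 * x                    # P(D | S)
print(round(p_death_given_smoker, 4))  # -> 0.0214
```

This gives P(D|S) ≈ 0.021, which suggests the "0.21" quoted earlier in the thread is likely a misprint for 0.021.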
https://www.physicsforums.com/threads/help-in-checking-the-solution-of-this-separable-equation.822396/

# Help in checking the solution of this separable equation
• Thread starter enggM
• #1
## Homework Statement
It is just an evaluation problem which looks like this dx/dy = x^2 y^2 / 1+x
## Homework Equations
dx/dy = x^2 y^2 / 1 + x
## The Attempt at a Solution
What i did is cross multiply to get this equation y^2 dy = x^2 / 1+x dx then next line ∫y^2 dy = ∫x^2/1+x dx
y^3/3 = ∫dx + ∫1/x dx after simplifying i get y^3=3x + 3 ln x + C but im not sure if this is the right answer.
## Answers and Replies
• #2
HallsofIvy
Homework Helper
First, I presume that you really mean dx/dy= x^2y^2/(1+ x). What you wrote, dx/dy= x^2y^2/1+ x, is the same as dx/dy = x^2y^2+ x.
More importantly you have separated the x and y incorrectly. The left side is y^2 dy but the right sides should be [(1+ x)/x]dx. You have the fraction inverted.
• #3
oh, so when i integrate [1+x / x] dx would it look like ∫1/x + ∫x dx? or something else?
• #4
LCKurtz
Homework Helper
Gold Member
oh, so when i integrate [1+x / x] dx would it look like ∫1/x + ∫x dx? or something else?
1 + x/x = 1+1 =2.
• #5
Mark44
Mentor
oh, so when i integrate [1+x / x] dx would it look like ∫1/x + ∫x dx? or something else?
Or did you intend the above to mean (1 + x)/x? If that's what you intended, it does absolutely no good to put brackets around the entire expression.
In post #1 you wrote what I've quoted below. For that, you need parentheses around the denominator, as x2y2/(1 + x).
It is just an evaluation problem which looks like this dx/dy = x^2 y^2 / 1+x
• #6
Oh sorry about that, what i intend to do is to integrate the entire expression. As it is the right hand side of the equation, no problem with y^2 dy but the right side looks a bit confusing. In the expression (1 + x) / x dx as was suggested is what i intend to integrate, so is this the right integration expression, ∫1 / x dx + ∫ dx? if so then ln x + x + C should be sufficient for the RHS, is it not?
• #7
Mark44
Mentor
Oh sorry about that, what i intend to do is to integrate the entire expression. As it is the right hand side of the equation, no problem with y^2 dy but the right side looks a bit confusing. In the expression (1 + x) / x dx as was suggested is what i intend to integrate, so is this the right integration expression, ∫1 / x dx + ∫ dx? if so then ln x + x + C should be sufficient for the RHS, is it not?
Yes, you can split ##\int \frac{1 + x}{x} \ dx## into ##\int \frac {dx} x + \int 1 \ dx##.
• #8
so the answer y^3 = 3x + 3 ln x + C should be correct? ok i get it now thanks for the time.
• #9
Mark44
Mentor
so the answer y^3 = 3x + 3 ln x + C should be correct? ok i get it now thanks for the time.
No @enggM, this is not correct. The work you did before was incorrect, and my earlier response was based on that work. I think you need to start from the beginning.
$$\frac{dx}{dy} = \frac{x^2y^2}{1 + x}$$
If you multiply both sides by 1 + x, then divide both sides by ##x^2##, and finally, multiply both sides by dy, the equation will be separated. What do you get when you do this?
• #10
@Mark44 i would get (1+x / x^2 ) dx = y^2 dy so integrating the both sides y^3 / 3 = 1 / x + ln x so the final form would then be y^3 = 3/x + 3 ln x + C? no?
• #11
Ray Vickson
Homework Helper
Dearly Missed
@Mark44 i would get (1+x / x^2 ) dx = y^2 dy so integrating the both sides y^3 / 3 = 1 / x + ln x so the final form would then be y^3 = 3/x + 3 ln x + C? no?
From
$$y^2 dy = \left( 1 + \frac{x}{x^2} \right) dx$$
you will get
$$\frac{1}{3} y^3 = x + \ln (x) + C$$
From
$$y^2 dy = \frac{1 + x}{x^2} dx$$
you will get
$$\frac{1}{3} y^3 = -\frac{1}{x} + \ln (x) + C$$
Which do you mean? Why are you still refusing to use parentheses? Do you not see their importance?
• #12
Mark44
Mentor
@Mark44 i would get (1+x / x^2 ) dx = y^2 dy so integrating the both sides y^3 / 3 = 1 / x + ln x so the final form would then be y^3 = 3/x + 3 ln x + C? no?
In writing (1 + x/x2) above, you are using parentheses, but aren't using them correctly (as Ray also points out). If you have a fraction where either the numerator or denominator (or both) has multiple terms, you need parentheses around the entire numerator or denominator, not around the overall fraction.
If you mean ##\frac{1 + x}{x^2}##, write it in text as (1 + x)/x^2, NOT as (1+x / x^2 ). What you wrote means ##1 + \frac x {x^2} = 1 + \frac 1 x##.
• #13
sorry about the confusion because sometimes when i solve something like this in paper i sometimes leave out the parenthesis. so the final form would be y^3 = -3/x + 3 ln x + C? by the way how did it become a negative? just an additional question.
• #14
SammyS
Staff Emeritus
Homework Helper
Gold Member
sorry about the confusion because sometimes when i solve something like this in paper i sometimes leave out the parenthesis. so the final form would be y^3 = -3/x + 3 ln x + C? by the way how did it become a negative? just an additional question.
What is ##\displaystyle \int\frac{1}{x^2}dx\ ?##
• #15
Mentallic
Homework Helper
sorry about the confusion because sometimes when i solve something like this in paper i sometimes leave out the parenthesis
On paper you can freely write the fraction as it's normally portrayed, however, if you mean that you do sometimes write everything on a single line and still don't use parentheses, then you need to break out of that habit. Parentheses are crucial.
• #16
@SammyS oh i see i remember now it should be u^-2+1 / -2 +1.
@Mentallic ok thanks...
thanks for all of your help in checking for the solution and answer to this problem helped me understand it much better.
• #17
LCKurtz
Homework Helper
Gold Member
@SammyS oh i see i remember now it should be u^-2+1 / -2 +1.
Apparently you haven't been paying attention to anything people in this thread have told you about the importance of using parentheses.
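For completeness, the final answer y^3 = -3/x + 3 ln x + C can be sanity-checked numerically against the original equation dx/dy = x^2 y^2/(1 + x), i.e. dy/dx = (1 + x)/(x^2 y^2). This check is our addition, not part of the thread; the constant C = 2 and the test point x = 2 are arbitrary choices:

```python
import math

C = 2.0  # arbitrary integration constant chosen for the check

def y(x):
    # explicit branch of the implicit solution y^3 = -3/x + 3*ln(x) + C
    return (-3.0 / x + 3.0 * math.log(x) + C) ** (1.0 / 3.0)

def rhs(x):
    # dy/dx implied by the ODE dx/dy = x^2 * y^2 / (1 + x)
    return (1.0 + x) / (x ** 2 * y(x) ** 2)

x0, h = 2.0, 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)  # central finite difference
print(abs(numeric - rhs(x0)) < 1e-6)  # -> True
```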
http://www.researchgate.net/researcher/1000629_Cornilleau-Wehrlin

N. Cornilleau-Wehrlin
École Polytechnique, Paliseau, Île-de-France, France
Publications (255) · 377.92 Total impact
• Article: Intensities and spatiotemporal variability of equatorial noise emissions observed by the Cluster spacecraft
F. Němec, O. Santolík, Z. Hrbáčková, N. Cornilleau-Wehrlin
ABSTRACT: Equatorial noise (EN) emissions are electromagnetic waves observed in the equatorial region of the inner magnetosphere at frequencies between the proton cyclotron frequency and the lower hybrid frequency. We present the analysis of 2229 EN events identified in the Spatio-Temporal Analysis of Field Fluctuations (STAFF) experiment data of the Cluster spacecraft during the years 2001-2010. EN emissions are distinguished using the polarization analysis, and their intensity is determined based on the evaluation of the Poynting flux rather than on the evaluation of only the electric/magnetic field intensity. The intensity of EN events is analyzed as a function of the frequency, the position of the spacecraft inside/outside the plasmasphere, magnetic local time, and the geomagnetic activity. The emissions have higher frequencies and are more intense in the plasma trough than in the plasmasphere. EN events observed in the plasma trough are most intense close to the local noon, while EN events observed in the plasmasphere are nearly independent on MLT. The intensity of EN events is enhanced during disturbed periods, both inside the plasmasphere and in the plasma trough. Observations of the same events by several Cluster spacecraft allow us to estimate their spatiotemporal variability. EN emissions observed in the plasmasphere do not change on the analyzed spatial scales (∆MLT< 0.2 h, ∆r < 0.2RE), but they change significantly on time scales of about an hour. The same appears to be the case also for EN events observed in the plasma trough, although the plasma trough dependencies are less clear.
Journal of Geophysical Research: Space Physics. 02/2015;
Article: Whistler mode waves and the electron heat flux in the solar wind: Cluster observations
ABSTRACT: The nature of the magnetic field fluctuations in the solar wind between the ion and electron scales is still under debate. Using the Cluster/STAFF instrument, we make a survey of the power spectral density and of the polarization of these fluctuations at frequencies $f\in[1,400]$ Hz, during five years (2001-2005), when Cluster was in the free solar wind. In $\sim 10\%$ of the selected data, we observe narrow-band, right-handed, circularly polarized fluctuations, with wave vectors quasi-parallel to the mean magnetic field, superimposed on the spectrum of the permanent background turbulence. We interpret these coherent fluctuations as whistler mode waves. The life time of these waves varies between a few seconds and several hours. Here we present, for the first time, an analysis of long-lived whistler waves, i.e. lasting more than five minutes. We find several necessary (but not sufficient) conditions for the observation of whistler waves, mainly a low level of the background turbulence, a slow wind, a relatively large electron heat flux and a low electron collision frequency. When the electron parallel beta factor $\beta_{e\parallel}$ is larger than 3, the whistler waves are seen along the heat flux threshold of the whistler heat flux instability. The presence of such whistler waves confirms that the whistler heat flux instability contributes to the regulation of the solar wind heat flux, at least for $\beta_{e\parallel} \ge$ 3, in the slow wind, at 1 AU.
The Astrophysical Journal 10/2014; 796(1). · 6.28 Impact Factor
• Article: Azimuthal directions of equatorial noise propagation determined using 10 years of data from the Cluster spacecraft
ABSTRACT: [1] Equatorial noise (EN) emissions are electromagnetic waves at frequencies between the proton cyclotron frequency and the lower hybrid frequency routinely observed within a few degrees of the geomagnetic equator at radial distances from about 2 to 6 RE. They propagate in the extraordinary (fast magnetosonic) mode nearly perpendicularly to the ambient magnetic field. We conduct a systematic analysis of azimuthal directions of wave propagation, using all available Cluster data from 2001 to 2010. Altogether, combined measurements of the Wide-Band Data and Spectrum Analyzer of the Spatio-Temporal Analysis of Field Fluctuations instruments allowed us to determine azimuthal angle of wave propagation for more than 100 EN events. It is found that the observed propagation pattern is mostly related to the plasmapause location. While principally isotropic azimuthal directions of EN propagation were detected inside the plasmasphere, wave propagation in the plasma trough was predominantly found directed to the West or East, perpendicular to the radial direction. The observed propagation pattern can be explained using a simple propagation analysis, assuming that the emissions are generated close to the plasmapause.
Journal of Geophysical Research: Space Physics 11/2013; 118(11). · 3.44 Impact Factor
• Article: Cluster observations of whistler waves correlated with ion-scale magnetic structures during the 17 August 2003 substorm event
ABSTRACT: We provide evidence of the simultaneous occurrence of large-amplitude, quasi-parallel whistler mode waves and ion-scale magnetic structures, which have been observed by the Cluster spacecraft in the plasma sheet at 17 Earth radii, during a substorm event. It is shown that the magnetic structures are characterized by both a magnetic field strength minimum and a density hump and that they propagate in a direction quasi-perpendicular to the average magnetic field. The observed whistler mode waves are efficiently ducted by the inhomogeneity associated with such ion-scale magnetic structures. The large amplitude of the confined whistler waves suggests that electron precipitations could be enhanced locally via strong pitch angle scattering. Furthermore, electron distribution functions indicate that a strong parallel heating of electrons occurs within these ion-scale structures. This study provides new insights on the possible multiscale coupling of plasma dynamics during the substorm expansion, on the basis of the whistler mode wave trapping by coherent ion-scale structures.
Journal of Geophysical Research Atmospheres 10/2013; 118(10):6072-6089. · 3.44 Impact Factor
• Article: Quasiperiodic emissions observed by the Cluster spacecraft and their association with ULF magnetic pulsations
F. Němec, O. Santolík, J. S. Pickett, M. Parrot, N. Cornilleau-Wehrlin
ABSTRACT: Quasiperiodic (QP) emissions are electromagnetic waves at frequencies of about 0.5-4 kHz characterized by a periodic time modulation of the wave intensity, with a typical modulation period on the order of minutes. We present results of a survey of QP emissions observed by the Wide-Band Data (WBD) instruments on board the Cluster spacecraft. All WBD data measured in the appropriate frequency range during the first 10 years of operation (2001-2010) at radial distances lower than 10 RE were visually inspected for the presence of QP emissions, resulting in 21 positively identified events. These are systematically analyzed, and their frequency ranges and modulation periods are determined. Moreover, a detailed wave analysis has been done for the events that were strong enough to be seen in low-resolution Spatio-Temporal Analysis of Field Fluctuations-Spectrum Analyzer data. Wave vectors are found to be nearly field-aligned in the equatorial region, but they become oblique at larger geomagnetic latitudes. This is consistent with a hypothesis of unducted propagation. ULF magnetic field pulsations were detected at the same time as QP emissions in 4 out of the 21 events. They were polarized in the plane perpendicular to the ambient magnetic field, and their frequencies roughly corresponded to the modulation period of the QP events.
Journal of Geophysical Research Atmospheres 07/2013; 118(7):4210-4220. · 3.44 Impact Factor
• Article: Analysis of amplitudes of equatorial noise emissions and their variation with L, MLT and magnetic activity
Zuzana Hrbáčková, Ondřej Santolík, Nicole Cornilleau-Wehrlin
ABSTRACT: Wave-particle interactions are an important mechanism of energy exchange in the outer Van Allen radiation belt. These interactions can cause an increase or decrease of relativistic electron flux. The equatorial noise (EN) emissions (also called fast magnetosonic waves) are electromagnetic waves which could be effective in producing MeV electrons. EN emissions propagate predominantly within 10° of the geomagnetic equator at L shells from 1 to 10. Their frequency range is between the local proton cyclotron frequency and the lower hybrid resonance. We use a data set measured by the STAFF-SA instruments onboard four Cluster spacecraft from January 2001 to December 2010. We have compiled the list of the time intervals of the observed EN emissions during the investigated time period. For each interval we have computed an intensity profile of the wave magnetic field as a function of frequency. The frequency band is then determined by an automatic procedure and the measured power spectral densities are reliably transformed into wave amplitudes. The results are shown as a function of the McIlwain's parameter, magnetic local time and magnetic activity - Kp and Dst indices. This work has received EU support through the FP7-Space grant agreement no. 284520 for the MAARBLE collaborative research project.
04/2013;
• Article: Directions of equatorial noise propagation determined using Cluster and DEMETER spacecraft
ABSTRACT: Equatorial noise emissions are electromagnetic waves at frequencies between the proton cyclotron frequency and the lower hybrid frequency routinely observed within a few degrees of the geomagnetic equator at radial distances from about 2 to 6 Re. High resolution data reveal that the emissions are formed by a system of spectral lines, being generated by instabilities of proton distribution functions at harmonics of the proton cyclotron frequency in the source region. The waves propagate in the fast magnetosonic mode nearly perpendicularly to the ambient magnetic field, i.e. the corresponding magnetic field fluctuations are almost linearly polarized along the ambient magnetic field and the corresponding electric field fluctuations are elliptically polarized in the equatorial plane, with the major polarization axis having the same direction as wave and Poynting vectors. We conduct a systematic analysis of azimuthal propagation of equatorial noise. Combined WBD and STAFF-SA measurements performed on the Cluster spacecraft are used to determine not only the azimuthal angle of the wave vector direction, but also to estimate the corresponding beaming angle. It is found that the beaming angle is generally rather large, i.e. the detected waves come from a significant range of directions, and a traditionally used approximation of a single plane wave fails. The obtained results are complemented by a raytracing analysis in order to get a comprehensive picture of equatorial noise propagation in the inner magnetosphere. Finally, high resolution multi-component measurements performed by the low-altitude DEMETER spacecraft are used to demonstrate that equatorial noise emissions can reach altitudes as low as 660 km, and that the observed propagation properties are in agreement with the overall propagation picture.
04/2013;
• Article: Eleven years of Cluster observations of whistler-mode chorus
ABSTRACT: Electromagnetic emissions of whistler-mode chorus carry enough power to increase electron fluxes in the outer Van Allen radiation belt at time scales on the order of one day. However, the ability of these waves to efficiently interact with relativistic electrons is controlled by the wave propagation directions and time-frequency structure. Eleven years of measurements of the STAFF-SA and WBD instruments onboard the Cluster spacecraft are systematically analyzed in order to determine the probability density functions of propagation directions of chorus as a function of geomagnetic latitude, magnetic local time, L* parameter, and frequency. A large database of banded whistler-mode emissions and time-frequency structured chorus has been used for this analysis. This work has received EU support through the FP7-Space grant agreement no 284520 for the MAARBLE collaborative research project.
04/2013;
• Article: Characteristics of banded chorus-like emission measured by the TC-1 Double Star spacecraft
Eva Macúšová, Ondřej Santolík, Nicole Cornilleau-Wehrlin, Keith Yearby
ABSTRACT: We present a study of the spatio-temporal characteristics of banded whistler-mode emissions. It covers the full operational period of the TC-1 spacecraft, between January 2004 and the end of September 2007. The analyzed data set has been visually selected from the onboard-analyzed time-frequency spectrograms of magnetic field fluctuations below 4 kHz measured by the STAFF/DWP wave instrument situated onboard the TC-1 spacecraft with a low inclination elliptical equatorial orbit. This orbit covers magnetic latitudes between −39° and 39°. The entire data set has been collected between L=2 and L=12. Our results show that almost all intense emissions (above a threshold of $10^{-5}$ nT$^2$ Hz$^{-1}$) occur at L-shells from 6 to 12 and in the MLT sector from 2 to 11 hours. This is in good agreement with previous observations. We determine the bandwidth of the observed emission by an automatic procedure based on the measured spectra. This allows us to reliably calculate the integral amplitudes of the measured signals. The majority of the largest amplitudes of chorus-like emissions were found closer to the Earth. The other result is that the upper band chorus-like emissions (above one half of the electron cyclotron frequency) are much less intense than the lower band chorus-like emissions (below one half of the electron cyclotron frequency) and are usually observed closer to the Earth than the lower band. This work has received EU support through the FP7-Space grant agreement no. 284520 for the MAARBLE collaborative research project.
04/2013;
Article: Scaling of the electron dissipation range of solar wind turbulence
ABSTRACT: Electron scale solar wind turbulence has attracted great interest in recent years. Clear evidence has been given from the Cluster data that turbulence is not fully dissipated near the proton scale but continues cascading down to the electron scales. However, the scaling of the energy spectra as well as the nature of the plasma modes involved at those small scales are still not fully determined. Here we survey 10 years of the Cluster search-coil magnetometer (SCM) waveforms measured in the solar wind and perform a statistical study of the magnetic energy spectra in the frequency range $[1, 180]$ Hz. We show that a large fraction of the spectra exhibit clear breakpoints near the electron gyroscale $\rho_e$, followed by steeper power-law-like spectra. We show that the scaling below the electron breakpoint cannot be determined unambiguously due to instrumental limitations that will be discussed in detail. We compare our results to recent ones reported in other studies and discuss their implications for the physical mechanisms and the theoretical modeling of energy dissipation in the SW.
The Astrophysical Journal 03/2013; 777(1). · 6.28 Impact Factor
• Article: Conjugate observations of quasi-periodic emissions by Cluster and DEMETER spacecraft
ABSTRACT: Quasi‐periodic (QP) emissions are electromagnetic emissions at frequencies of about 0.5–4 kHz that are characterized by a periodic time modulation of the wave intensity. Typical periods of this modulation are on the order of minutes. We present a case study of a large‐scale long‐lasting QP event observed simultaneously on board the DEMETER (Detection of Electro‐Magnetic Emissions Transmitted from Earthquake Regions) and the Cluster spacecraft. The measurements by the Wide‐Band Data instrument on board the Cluster spacecraft enabled us to obtain high‐resolution frequency‐time spectrograms of the event close to the equatorial region over a large range of radial distances, while the measurements by the STAFF‐SA instrument allowed us to perform a detailed wave analysis. Conjugate observations by the DEMETER spacecraft have been used to estimate the spatial and temporal extent of the emissions. The analyzed QP event lasted as long as 5 h and it spanned over the L‐shells from about 1.5 to 5.5. Simultaneous observations of the same event by DEMETER and Cluster show that the same QP modulation of the wave intensity is observed at the same time at very different locations in the inner magnetosphere. ULF magnetic field fluctuations with a period roughly comparable to, but somewhat larger than the period of the QP modulation were detected by the fluxgate magnetometers instrument on board the Cluster spacecraft near the equatorial region, suggesting these are likely to be related to the QP generation. Results of a detailed wave analysis show that the QP emissions detected by Cluster propagate unducted, with oblique wave normal angles at higher geomagnetic latitudes.
Journal of Geophysical Research Atmospheres 01/2013; 118(1):198-208. · 3.44 Impact Factor
• Article: CLUSTER STAFF search coils magnetometer calibration – comparisons with FGM
Geoscientific Instrumentation, Methods and Data Systems Discussions. 01/2013; 3(2):679-751.
Conference Paper: LSS/NenuFAR: The LOFAR Super Station project in Nançay
ABSTRACT: We summarize the outcome of the scientific and technical study conducted in the past 3 years for the definition and prototyping of a LOFAR Super Station (LSS) in Nançay. We first present the LSS concept, then the steps addressed by the design study and the conclusions reached. We give an overview of the science case for the LSS, with special emphasis on the interest of a dedicated backend for standalone use. We compare the expected LSS characteristics to those of large low-frequency radio instruments, existing or in project. The main advantage of the LSS in standalone mode will be its very high instantaneous sensitivity, enabling or significantly improving a broad range of scientific studies. It will be a SKA precursor for the French community, both scientific and technical.
SF2A-2012: Proceedings of the Annual meeting of the French Society of Astronomy and Astrophysics. Eds.: S. Boissier, P. de Laverny, N. Nardetto, R. Samadi, D. Valls-Gabaud and H. Wozniak, pp.687-694; 12/2012
• Article: Coupling between whistler waves and ion-scale solitary waves: cluster measurements in the magnetotail during a substorm.
ABSTRACT: We present a new model of self-consistent coupling between low frequency, ion-scale coherent structures with high frequency whistler waves in order to interpret Cluster data. The idea relies on the possibility of trapping whistler waves by inhomogeneous external fields where they can be spatially confined and propagate for times much longer than their characteristic electronic time scale. Here we take the example of a slow magnetosonic soliton acting as a wave guide in analogy with the ducting properties of an inhomogeneous plasma. The soliton is characterized by a magnetic dip and density hump that traps and advects high frequency waves over many ion times. The model represents a new possible way of explaining space measurements often detecting the presence of whistler waves in correspondence to magnetic depressions and density humps. This approach, here given by means of slow solitons, but more general than that, is alternative to the standard approach of considering whistler wave packets as associated with nonpropagating magnetic holes resulting from a mirror-type instability.
Physical Review Letters 10/2012; 109(15):155005. · 7.73 Impact Factor
• Article: Systematic propagation analysis of whistler-mode waves in the inner magnetosphere
ABSTRACT: Acceleration and dynamics of energetic electrons in the outer Van Allen radiation belt can be influenced by cross-energy coupling among the particle populations in the radiation belts, ring current and plasmasphere. Waves of different frequencies have been shown to play a significant role in these interactions. We analyze more than 10 years of measurements of the four Cluster spacecraft to investigate propagation properties of whistler-mode waves in the crucial regions of the Earth's magnetosphere. We use this unprecedented database to determine the distribution of the wave energy density in the space of the wave vector directions, which is a crucial parameter for modeling of both the wave-particle interactions and wave propagation in the inner magnetosphere. We show implications for radiation belt studies and upcoming inner magnetosphere spacecraft missions, and we also show similarities of the observed whistler-mode waves with results from the magnetosphere of Saturn collected by the Cassini mission.
07/2012;
• Article: Occurrence rate of magnetosonic equatorial noise emissions as a function of the McIlwain's parameter
Z. Hrbackova, O. Santolik, F. Nemec, N. Cornilleau-Wehrlin
ABSTRACT: We report results of a statistical analysis of equatorial noise (EN) emissions based on the data set collected by the four Cluster spacecraft between January 2001 and December 2010. We have investigated a large range of the McIlwain's parameter from L≈1 to L≈12 thanks to the change of orbital parameters of the Cluster mission. We have processed data from the STAFF-SA instruments which analyze measurements of electric and magnetic field fluctuations onboard and provide us with Hermitian spectral matrices. We have used linear polarization of magnetic field fluctuations as a selection criterion. Propagation in the vicinity of the geomagnetic equator has been used as an additional criterion for recognition of EN. We have identified more than 2000 events during the investigated time period. We demonstrate that EN can occur at all the analyzed L-shells. However, the occurrence rate at L-shells below 2.5 and above 7.0 is very low. We show that EN occurs in the plasmasphere as well as outside of the plasmasphere but with a lower occurrence rate.
04/2012;
• Article: Source region of the dayside whistler-mode chorus
ABSTRACT: Intense whistler-mode waves can be generated by cyclotron interactions with anisotropic electrons at energies between a few and tens of keV. It has been shown that whistler-mode waves propagating in the Earth's magnetosphere can influence relativistic electrons in the outer Van Allen radiation belt. These electromagnetic wave emissions are therefore receiving increased attention for their possible role in coupling electron populations at lower energies to the electron radiation belt. Whistler-mode chorus emissions are known for their predominant occurrence in the dawnside and dayside magnetosphere. While it is generally accepted that dawnside chorus is excited by injected anisotropic plasma sheet electrons, the details of this process are still debated. Especially, possible mechanisms describing the origin of the dayside chorus are a subject of active research, including the role of the plasma density variations, and the role of a particular dayside configuration of the compressed Earth's magnetic field. We use data collected by the Cluster mission during the last few years, when the orbit of the Cluster spacecraft reached to larger radial distances from the Earth in the dayside low-latitude region. We analyze multipoint measurements of the WBD and STAFF-SA instruments. We investigate propagation and spectral properties of the observed whistler-mode waves. We concentrate our analysis on the properties of the chorus source and we show that the dayside magnetic field topology can lead to a displacement of the source region from the dipole equator to higher latitudes.
04/2012;
• Article: Variability of ULF wave power at the magnetopause: a study at low latitude with Cluster data
ABSTRACT: Strong ULF wave activity has been observed at magnetopause crossings for a long time. These turbulent-like waves are possible contributors to particle penetration from the solar wind to the magnetosphere through the magnetopause. Statistical studies have been performed to understand under which conditions the ULF wave power is most intense and thus the waves can be most efficient for particle transport from one region to the other. Clearly the solar wind pressure organizes the data: the stronger the pressure, the higher the ULF power (Attié et al. 2008). A Double Star–Cluster comparison has shown that ULF wave power is stronger at low latitude than at high latitude (Cornilleau-Wehrlin et al. 2008). The different studies performed have not, up to now, shown a stronger power in the vicinity of local noon. Nevertheless, under identical activity conditions, the variability of this power, even at a given location in latitude and local time, is very high. The present work aims at understanding this variability by means of the multi-spacecraft Cluster mission. The data used are from spring 2008, while Cluster was crossing the magnetopause at low latitude, in particularly quiet solar wind conditions. The first region of interest of this study is the vicinity of the sub-solar point, where long-wavelength surface-wave effects are most unlikely.
04/2012;
• Article: Propagation of EMIC triggered emissions toward the magnetic equatorial plane
ABSTRACT: EMIC triggered emissions are observed close to the equatorial plane of the magnetosphere at locations where EMIC waves are commonly observed: close to the plasmapause region and in the dayside magnetosphere close to the magnetopause. Their overall characteristics (frequency with time dispersion, generation mechanism) make those waves the EMIC analogue of rising frequency whistler-mode chorus emissions. In our observations the Poynting flux of these emissions is usually clearly arriving from the equatorial region direction, especially when observations take place at more than 5 degrees of magnetic latitude. Simulations have also confirmed that the conditions of generation by interaction with energetic ions are at a maximum at the magnetic equator (lowest value of the background magnetic field along the field line). However in the Cluster case study presented here the Poynting flux of EMIC triggered emissions is propagating toward the equatorial region. The large angle between the wave vector and the background magnetic field is also unusual for this kind of emission. The rising tone starts just above half of the He+ gyrofrequency (Fhe+) and it disappears close to Fhe+. At the time of detection, the spacecraft magnetic latitude is larger than 10 degrees and L shell is about 4. The propagation sense of the emissions has been established using two independent methods: 1) sense of the parallel component of the Poynting flux for a single spacecraft and 2) timing of the emission detections at each of the four Cluster spacecraft which were in a relatively close configuration. We propose here to discuss this unexpected result considering a reflection of this emission at higher latitude.
AGU Fall Meeting Abstracts. 12/2011;
• Article: Observations of whistler-mode chorus in a large range of radial distances
ABSTRACT: Whistler-mode chorus emissions are known for their capacity to interact with energetic electrons. We use data collected by the Cluster mission after 2005, when the orbit of the four Cluster spacecraft changed, thus facilitating the analysis of chorus in a large range of different radial distances from the Earth. We concentrate our analysis on the equatorial source region of chorus. We use multipoint measurements of the WBD and STAFF-SA instruments to characterize propagation and spectral properties of the observed waves. We show that intense whistler-mode emissions are found at large radial distances up to the dayside magnetopause. These emissions either have the form of hiss or they contain the typical structure of chorus wave packets. This result is supported by case studies as well as by statistical results, using the unprecedented database of Cluster measurements.
AGU Fall Meeting Abstracts. 12/2011;
Publication Stats
2k Citations 377.92 Total Impact Points
Institutions
• École Polytechnique
Palaiseau, Île-de-France, France
• Laboratory of Plasma Physics
Palaiseau, Île-de-France, France
• La Station de Radioastronomie de Nançay
Orléans, Centre, France
• French National Centre for Scientific Research
• Laboratoire de physique et chimie de l'environnement et de l'Espace (LPC2E)
Paris, Île-de-France, France
• University of Oslo
• Department of Physics
Oslo, Norway
• Charles University in Prague
• Faculty of Mathematics and Physics
Praha, Praha, Czech Republic
• Institut Pierre Simon Laplace
Paris, Île-de-France, France
• Swedish Institute of Space Physics
Kiruna, Norrbotten, Sweden
• Université de Versailles Saint-Quentin
Versailles, Île-de-France, France
• Observatoire de Paris
Paris, Île-de-France, France
https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Book%3A_Business_Statistics_(OpenStax)/10%3A_Hypothesis_Testing_with_Two_Samples/Chapter_10_Key_Terms | a measure of effect size based on the differences between two means. If $$d$$ is between 0 and 0.2 then the effect is small. If $$d$$ approaches is 0.5, then the effect is medium, and if $$d$$ approaches 0.8, then it is a large effect. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9813733100891113, "perplexity": 148.90770673489288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664567.4/warc/CC-MAIN-20191112024224-20191112052224-00481.warc.gz"} |
https://www.varsitytutors.com/advanced_geometry-help/how-to-find-the-volume-of-a-tetrahedron | # Advanced Geometry : How to find the volume of a tetrahedron
## Example Questions
### Example Question #31 : Tetrahedrons
What is the volume of the following tetrahedron? Assume the figure is a regular tetrahedron.
Explanation:
A regular tetrahedron is composed of four equilateral triangles. The formula for the volume of a regular tetrahedron is V = s³ / (6√2) = (√2/12)s³, where s represents the length of the side.
Plugging in our values we get:
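For reference, the volume formula used throughout these explanations follows from the base-area-times-height rule for pyramids; a sketch of the standard derivation for edge length $s$:

```latex
% Equilateral base area and apex height of a regular tetrahedron with edge s:
A_{\text{base}} = \frac{\sqrt{3}}{4}\,s^{2},
\qquad
h = \sqrt{\tfrac{2}{3}}\,s .
% Volume of a pyramid: one third of base area times height.
V = \frac{1}{3}\,A_{\text{base}}\,h
  = \frac{1}{3}\cdot\frac{\sqrt{3}}{4}\,s^{2}\cdot\sqrt{\tfrac{2}{3}}\,s
  = \frac{s^{3}}{6\sqrt{2}}
  = \frac{\sqrt{2}}{12}\,s^{3}.
```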
### Example Question #1 : How To Find The Volume Of A Tetrahedron
Find the volume of a tetrahedron with an edge of .
Explanation:
Write the formula for the volume of a regular tetrahedron: V = s³ / (6√2).
Substitute in the length of the edge provided in the problem.
Rationalize the denominator.
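The volume formula is also easy to check numerically; a minimal sketch (the edge length 6 below is an arbitrary example value, not the one from the question):

```python
import math

def tetra_volume(s):
    """Volume of a regular tetrahedron with edge length s: V = s**3 / (6*sqrt(2))."""
    return s**3 / (6 * math.sqrt(2))

# Edge length 6 is a made-up illustration value.
v = tetra_volume(6)
print(round(v, 3))  # about 25.456 cubic units

# The rationalized form sqrt(2)/12 * s**3 gives the same number.
print(math.isclose(v, math.sqrt(2) / 12 * 6**3))  # True
```

The final check confirms that rationalizing the denominator changes only the form of the answer, not its value.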
### Example Question #1 : How To Find The Volume Of A Tetrahedron
Find the volume of a tetrahedron with an edge of .
Explanation:
Write the formula for the volume of a regular tetrahedron: V = s³ / (6√2).
Substitute in the length of the edge provided in the problem:
Cancel out the factor in the denominator with one in the numerator:
A square root is being raised to the power of two in the numerator; these two operations cancel each other out. After canceling those operations, reduce the remaining fraction to arrive at the correct answer:
### Example Question #4 : How To Find The Volume Of A Tetrahedron
Find the volume of a tetrahedron with an edge of .
Explanation:
Write the formula for finding the volume of a regular tetrahedron: V = s³ / (6√2).
Substitute in the edge length provided in the problem.
Cancel out the factor in the denominator with part of the numerator:
Expand, rationalize the denominator, and reduce to arrive at the correct answer:
### Example Question #2 : How To Find The Volume Of A Tetrahedron
Find the volume of a tetrahedron with an edge of .
Explanation:
Write the formula for the volume of a regular tetrahedron: V = s³ / (6√2).
Substitute the edge length provided in the equation into the formula.
Cancel out the denominator with part of the numerator and solve the remaining part of the numerator to arrive at the correct answer.
### Example Question #2 : How To Find The Volume Of A Tetrahedron
Find the volume of a tetrahedron with an edge of .
Explanation:
Write the formula for the volume of a regular tetrahedron, V = s³ / (6√2), and substitute in the provided edge length.
Rationalize the denominator to arrive at the correct answer.
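Rationalizing the denominator only changes the form of the expression, not its value, which is easy to confirm numerically (the edge length 2 here is a hypothetical stand-in, since the question's value is not shown):

```python
import math

s = 2  # hypothetical edge length, for illustration only
unrationalized = s**3 / (6 * math.sqrt(2))  # s^3 / (6*sqrt(2))
rationalized = s**3 * math.sqrt(2) / 12     # multiply top and bottom by sqrt(2)
print(math.isclose(unrationalized, rationalized))  # True
```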
### Example Question #81 : Solid Geometry
Find the volume of the regular tetrahedron with side length .
Explanation:
The formula for the volume of a regular tetrahedron is V = s³ / (6√2), where s is the length of a side. Using this formula and the given values, we get:
### Example Question #44 : Tetrahedrons
What is the volume of a regular tetrahedron with edges of ?
Explanation:
The volume of a regular tetrahedron is found with the formula V = s³ / (6√2), where s is the length of the edges.
When
.
### Example Question #1 : How To Find The Volume Of A Tetrahedron
What is the volume of a regular tetrahedron with edges of ?
None of the above.
Explanation:
The volume of a regular tetrahedron is found with the formula V = s³ / (6√2), where s is the length of the edges.
When the volume becomes,
The answer is in volume, so it must be in a cubic measurement!
### Example Question #4 : How To Find The Volume Of A Tetrahedron
What is the volume of a regular tetrahedron with edges of ?
None of the above.
None of the above.
Explanation:
The volume of a regular tetrahedron is found with the formula V = s³ / (6√2), where s is the length of the edges.
When | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8524383306503296, "perplexity": 494.911146082517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171781.5/warc/CC-MAIN-20170219104611-00614-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://francomics.de/8pyfmok/175f6b-chemistry-calculator-moles | where mass is in grams and the molar mass is in grams per mole. We can use the above equation to find the mass of a substance when we are given the number of moles of the substance. Convert masses/volumes into moles. Chemical Mole to Gram Calculator Easily convert between grams and moles … The reactants and products, along with their coefficients will appear above. Mole-Mass Equation. or reactant. Number of particles. In addition, a mole of hydrogen is equal to a mole of glucose or a mole of uranium. To perform a stoichiometric calculation, enter an equation of a chemical reaction and press the Start button. This value is called Avogadro’s number. Number of particles in 1 mol of substance = 6 x 10 23 Free Online Chemistry Calculators, including periodic table, molecular weight calculator, molarity, chemical equation balancer, pH, boyle's law, idea gas law etc. Besides, one mole of any chemical compound or element is always the identical number. The atom is the smallest particle of a chemical element that can exist. Moles of hydrogen = 96/24 = 4 mol. E.g. Examples: Fe, Au, Co, Br, C, O, N, F. You can use parenthesis or brackets []. Use uppercase for the first character in the element and lowercase for the second character. You can calculate the mass of a product. The mole is the base unit of amount of substance. One mole of substance has 6 x 10 23 particles. 1. Finding Molar Mass. One mole of substance contains 6 x 10 23 particles. The mass (in grams) of a compound is equal to its molarity (in moles) multiply its molar mass: grams = mole × molar mass. How to Find Moles? Chemical Calculations and Moles GCSE chemistry equations, formulae and calculations are often the part of the syllabus that many students struggle with. grams = 58.443 × 5 = 292.215 (g) Moles to Mass Calculation. Moles of iron(III) oxide = 260/159.69 = 1.63 mol. Construct an ICE table and fill in the initial moles. 
Also, it is easier to calculate atoms in a mole than in lakhs and crores. 2. The Avogadro's number is a very important relationship to remember: 1 mole = $6.022\times10^{23}$ atoms, molecules, protons, etc. Calculate the mass of iron produced from the reaction. The mass and molarity of chemical compounds can be calculated based on the molar mass of the compound. Reacting masses. relative formula mass = mass ÷ number of moles = 440 ÷ 10 = 44. However, their masses are different. 1 mol of gas occupies 24 dm 3. You can determine the number of moles in any chemical reaction given the chemical formula and the mass of the reactants. It will calculate the total mass along with the elemental composition and mass of each element in the compound. Example: Calculate the mass of (a) 2 moles and (b) 0.25 moles of iron. TL;DR (Too Long; Didn't Read) To calculate molar relations in a chemical reaction, find the atomic mass units (amus) for each element found in the products and reactants and work out the stoichiometry of the reaction. mass = number of moles × molar mass. Furthermore, they are expressed as ‘mol’. Calculate and find the molar mass (molecular weight) of any element, molecule, compound, or substance. Instructions. Solution Stoichiometry (Moles, titration, and molarity calculations) Endmemo Chemical Mole Grams - Input chemical formulas here to figure out the number of moles or grams in a chemical formula. From understanding avagadro’s contact, to mole calculations, formula’s for percentage yield and atom economy, at first this part of the GCSE chemistry syllabus seems very difficult. Particles may be molecules, atoms, ions and electrons. Molar mass of NaCl is 58.443, how many grams is 5 mole NaCl? ENDMEMO. A mole is a unit which defined as the amount of a chemical substance that contains as many representative particles.
.
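The mole arithmetic above can be sketched in a few lines of Python; the helper names are mine, not from the page, and the worked figures are the ones quoted in the text.

```python
AVOGADRO = 6.022e23      # particles per mole (Avogadro's number)
GAS_MOLAR_VOLUME = 24.0  # dm^3 per mole of any gas at room temperature and pressure

def moles_from_mass(mass_g, molar_mass):
    """number of moles = mass (g) / molar mass (g/mol)"""
    return mass_g / molar_mass

def mass_from_moles(moles, molar_mass):
    """mass (g) = number of moles * molar mass (g/mol)"""
    return moles * molar_mass

def moles_from_gas_volume(volume_dm3):
    """moles = gas volume (dm^3) / 24 dm^3 per mol"""
    return volume_dm3 / GAS_MOLAR_VOLUME

# Worked figures from the text:
print(round(mass_from_moles(5, 58.443), 3))    # 5 mol NaCl -> 292.215 g
print(moles_from_gas_volume(96))               # 96 dm^3 of hydrogen -> 4.0 mol
print(round(moles_from_mass(260, 159.69), 2))  # 260 g iron(III) oxide -> 1.63 mol
```

The same pattern covers the iron example: taking a molar mass for Fe of about 55.85 g/mol, 2 mol is roughly 111.7 g and 0.25 mol roughly 14 g.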
http://www.koreascience.or.kr/search.page?keywords=Symmetric+and+asymmetric+topology | • Title, Summary, Keyword: Symmetric and asymmetric topology
### A New Family of Cascaded Transformer Six Switches Sub-Multilevel Inverter with Several Advantages
• Banaei, M.R.;Salary, E.
• Journal of Electrical Engineering and Technology / v.8 no.5 / pp.1078-1085 / 2013
• This paper presents a novel topology for a cascaded transformer sub-multilevel converter. Each sub-multilevel converter consists of two DC voltage sources with six switches to achieve a five-level voltage. The proposed topology results in a reduction of the number of DC voltage sources and switches. Single-phase low-frequency transformers are used in the proposed topology, and voltage transformation and galvanic isolation between load and sources are provided by the transformers. This topology can operate as a symmetric or asymmetric converter, but in this paper we focus on the symmetric state. The operation and performance of the suggested multilevel converter have been verified by the simulation results of a single-phase nine-level multilevel converter using MATLAB/SIMULINK.
• Ajami, Ali;Oskuee, Mohammad Reza Jannati;Mokhberdoran, Ataollah;Khosroshahi, Mahdi Toupchi
• Journal of Electrical Engineering and Technology / v.9 no.1 / pp.127-135 / 2014
• In this paper a novel converter structure based on the cascade converter family is presented. The suggested advanced cascade multilevel converter has benefits such as a reduction in the number of switches and in power losses. Comparisons show that the proposed topology has the fewest IGBTs among all multilevel cascade-type converters introduced recently, which leads to low cost and a small installation area. The number of on-state switches in the current path is smaller than in conventional topologies, so the output voltage drop and power losses are decreased. Symmetric and asymmetric modes are analyzed and compared with the conventional multilevel cascade converter. Simulation and experimental results are presented to illustrate the validity, good performance and effectiveness of the proposed configuration. The suggested converter can be applied in medium/high-voltage and PV applications.
### The Postorder Fibonacci Circulants-a new interconnection networks with lower diameter (후위순회 피보나치 원형군-짧은 지름을 갖는 새로운 상호연결망)
• Kim Yong-Seok;Kwon Seung-Tag
• Proceedings of the IEEK Conference / pp.91-94 / 2004
• In this paper, we propose a new parallel computer topology, called the postorder Fibonacci circulants, and analyze its properties. It is compared with Fibonacci cubes when the number of nodes and the degree are kept the same as those of the comparable cube. Its diameter is improved from $n-2$ to $\lfloor\frac{n}{3}\rfloor$, and its topology is changed from asymmetric to symmetric. It includes the Fibonacci cube as a spanning tree.
### Asymmetric Cascaded Multi-level Inverter: A Solution to Obtain High Number of Voltage Levels
• Banaei, M.R.;Salary, E.
• Journal of Electrical Engineering and Technology / v.8 no.2 / pp.316-325 / 2013
• Multilevel inverters produce a staircase output voltage from DC voltage sources. Requiring a great number of semiconductor switches is their main disadvantage. Multilevel inverters can be divided into two groups: symmetric and asymmetric converters. Asymmetric multilevel inverters provide a large number of output steps without increasing the number of DC voltage sources and components. In this paper, a novel topology for multilevel converters is proposed using cascaded sub-multilevel cells; each sub-multilevel converter can produce five levels of voltage. Four algorithms for determining the magnitudes of the DC voltage sources are presented. Finally, in order to verify the theoretical analysis, simulation results are presented.
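To illustrate why asymmetric source magnitudes buy so many extra output steps, the sketch below uses the standard textbook level counts for cascaded H-bridge inverters (common-knowledge formulas, not figures from this paper): n cells with equal sources give 2n+1 levels, binary-scaled sources (V, 2V, 4V, ...) give 2^(n+1)-1, and trinary-scaled sources (V, 3V, 9V, ...) give 3^n.

```python
def levels_symmetric(n):
    """n cascaded cells with equal DC sources: 2n + 1 output levels."""
    return 2 * n + 1

def levels_binary(n):
    """Binary-scaled DC sources V, 2V, 4V, ...: 2^(n+1) - 1 output levels."""
    return 2 ** (n + 1) - 1

def levels_trinary(n):
    """Trinary-scaled DC sources V, 3V, 9V, ...: 3^n output levels."""
    return 3 ** n

for n in range(1, 5):
    print(n, levels_symmetric(n), levels_binary(n), levels_trinary(n))
# With 4 cells: 9 levels (symmetric) versus 31 (binary) or 81 (trinary).
```

The asymmetric counts grow exponentially in the number of cells while the symmetric count grows only linearly, which is exactly the trade-off the abstract describes.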
### Topology Aggregation Schemes for Asymmetric Link State Information
• Yoo, Young-Hwan;Ahn, Sang-Hyun;Kim, Chong-Sang
• Journal of Communications and Networks / v.6 no.1 / pp.46-59 / 2004
• In this paper, we present two algorithms for efficiently aggregating the link state information needed for quality-of-service (QoS) routing. In these algorithms, each edge node in a group is mapped onto a node of a shufflenet or a node of a de Bruijn graph. By this mapping, the number of links for which state information is maintained becomes $aN$ ($a$ is an integer, $N$ is the number of edge nodes), which is significantly smaller than the $N^2$ of the full-mesh approach. Our algorithms can also support asymmetric link state parameters, which are common in practice, while many previous algorithms, such as the spanning tree approach, can be applied only to networks with symmetric link state parameters. Experimental results show that the performance of our shufflenet algorithm is close to that of the full-mesh approach in terms of the accuracy of bandwidth and delay information, with only a much smaller amount of information. On the other hand, although it is not as good as the shufflenet approach, the de Bruijn algorithm also performs far better than the star approach, which is one of the most widely accepted schemes. The de Bruijn algorithm needs smaller computational complexity than most previous algorithms for asymmetric networks, including the shufflenet algorithm.
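The storage saving claimed in the abstract is easy to make concrete; the constant a below is chosen arbitrarily for illustration.

```python
def full_mesh_links(n):
    """Full-mesh aggregation keeps state for roughly N^2 links."""
    return n * n

def mapped_links(n, a=3):
    """Shufflenet/de Bruijn-style mapping keeps state for only a*N links."""
    return a * n

for n in (8, 32, 128):
    print(n, full_mesh_links(n), mapped_links(n))
# At N = 128, the full mesh tracks 16384 links versus 384 when a = 3.
```

The linear a*N growth is what makes the scheme scale to large groups of edge nodes while still carrying per-link (and hence asymmetric) state.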
### A Medium Access Control Mechanism for Distributed In-band Full-Duplex Wireless Networks
• Zuo, Haiwei;Sun, Yanjing;Li, Song;Ni, Qiang;Wang, Xiaolin;Zhang, Xiaoguang
• KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.11 / pp.5338-5359 / 2017
• In-band full-duplex (IBFD) wireless communication supports symmetric dual transmission between two nodes and asymmetric dual transmission among three nodes, which allows improved throughput in distributed IBFD wireless networks. However, inter-node interference (INI) can affect desired packet reception in the downlink of the three-node topology, and the current half-duplex (HD) medium access control (MAC) mechanism, RTS/CTS, is unable to establish an asymmetric dual link and consequently to suppress INI. In this paper, we propose FD-DMAC (Full-Duplex Distributed MAC), a medium access control mechanism for distributed IBFD wireless networks. In this approach, communication nodes require only a single channel access to establish a symmetric or asymmetric dual link, and we fully consider the two transmission modes of the asymmetric dual link. Through FD-DMAC medium access, the neighbors of communicating nodes clearly know the network transmission status, which provides further opportunities for asymmetric IBFD dual communication and solves the hidden node problem. Additionally, we leverage FD-DMAC to transmit received-power information, which can assist communication nodes in adjusting transmit powers and suppressing INI. Finally, we give a theoretical analysis of network performance using a discrete-time Markov model. The numerical results show that FD-DMAC achieves a significant improvement over RTS/CTS in terms of throughput and delay.
### A Fibonacci Posterorder Circulants (피보나치 후위순회 원형군)
• Kim Yong-Seok
• Proceedings of the Korea Information Processing Society Conference / pp.743-746 / 2006
• In this paper, we propose and analyze a new parallel computer topology, called the Fibonacci postorder circulants. It connects $f_n$ ($n \geq 2$) processing nodes, the same number of nodes as used in a comparable Fibonacci cube, yet its diameter is only $\lfloor\frac{n}{3}\rfloor$, almost one third that of the Fibonacci cube. The Fibonacci cube is asymmetric, whereas the proposed network is a regular and symmetric static interconnection network for large-scale, loosely coupled systems. It offers scalability and includes the Fibonacci cube as a spanning subgraph.
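To put numbers on the diameter claim, the sketch below uses the standard Fibonacci sequence (the paper's exact indexing convention may differ): a network on f_n nodes with diameter ⌊n/3⌋ instead of the Fibonacci cube's n-2.

```python
def fib(n):
    """n-th Fibonacci number (fib(0) = 0, fib(1) = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in (6, 9, 12, 15):
    # order, node count, Fibonacci-cube diameter, circulant diameter
    print(n, fib(n), n - 2, n // 3)
# At n = 15 (610 nodes) the diameter drops from 13 to 5.
```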
### Verification of New Family for Cascade Multilevel Inverters with Reduction of Components
• Banaei, M.R.;Salary, E.
• Journal of Electrical Engineering and Technology / v.6 no.2 / pp.245-254 / 2011
• This paper presents a new family of multilevel converters that can operate in symmetric and asymmetric states. The proposed multilevel converter generates DC voltage levels similar to other topologies with fewer semiconductor switches, resulting in a reduction in the number of switches, losses, installation area, and converter cost. To verify the voltage injection capabilities of the proposed inverter, the topology is used in a dynamic voltage restorer (DVR) to restore the load voltage. The operation and performance of the proposed multilevel converters are verified by simulation using SIMULINK/MATLAB and by experimental results.
### General Coupling Matrix Synthesis Method for Microwave Resonator Filters of Arbitrary Topology
• Uhm, Man-Seok;Lee, Ju-Seop;Yom, In-Bok;Kim, Jeong-Phill
• ETRI Journal / v.28 no.2 / pp.223-226 / 2006
• This letter presents a new approach to synthesizing resonator filters of arbitrary topology. The method employs optimization based on the relation between the polynomial coefficients of the transfer function and those of $S_{21}$ from the coupling matrix; it can therefore also be applied to self-equalized filters, which were not considered in conventional optimization methods. Two microwave filters, a symmetric 4-pole filter with four transmission zeros (TZs) and an asymmetric 8-pole filter with seven TZs, are synthesized using the present method for validation. Excellent agreement between the response of the transfer function and that of the synthesized $S_{21}$ from the coupling matrix is shown.
### Postorder Fibonacci Circulants (후위순회 피보나치 원형군)
• Kim, Yong-Seok;Roo, Myung-Gi
• The KIPS Transactions: Part A / v.15A no.1 / pp.27-34 / 2008
• In this paper, we propose a new parallel computer topology, called the postorder Fibonacci circulants, and analyze its properties. It is compared with Fibonacci cubes when the number of nodes is kept the same as that of the comparable cube. Its diameter is improved from $n-2$ to $\lfloor\frac{n}{3}\rfloor$, and its topology is changed from asymmetric to symmetric. It includes the Fibonacci cube as a spanning graph.
https://arxiv.org/abs/1303.7162 | physics.hist-ph
# Title: Fritz Hasenöhrl and E = mc^2
Abstract: In 1904, the year before Einstein's seminal papers on special relativity, Austrian physicist Fritz Hasenöhrl examined the properties of blackbody radiation in a moving cavity. He calculated the work necessary to keep the cavity moving at a constant velocity as it fills with radiation and concluded that the radiation energy has associated with it an apparent mass such that E = 3/8 mc^2. Also in 1904, Hasenöhrl achieved the same result by computing the force necessary to accelerate a cavity already filled with radiation. In early 1905, he corrected the latter result to E = 3/4 mc^2. In this paper, Hasenöhrl's papers are examined from a modern, relativistic point of view in an attempt to understand where he went wrong. The primary mistake in his first paper was, ironically, that he didn't account for the loss of mass of the blackbody end caps as they radiate energy into the cavity. However, even taking this into account one concludes that blackbody radiation has a mass equivalent of m = 4/3 E/c^2 or m = 5/3 E/c^2 depending on whether one equates the momentum or kinetic energy of radiation to the momentum or kinetic energy of an equivalent mass. In his second and third papers that deal with an accelerated cavity, Hasenöhrl concluded that the mass associated with blackbody radiation is m = 4/3 E/c^2, a result which, within the restricted context of Hasenöhrl's gedanken experiment, is actually consistent with special relativity. Both of these problems are non-trivial and the surprising results, indeed, turn out to be relevant to the "4/3 problem" in classical models of the electron. An important lesson of these analyses is that E = mc^2, while extremely useful, is not a "law of physics" in the sense that it ought not be applied indiscriminately to any extended system and, in particular, to the subsystems from which they are comprised.
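The competing prefactors in the abstract are related by simple algebra: solving E = k·mc^2 for m gives m = (1/k)·E/c^2, so E = 3/4·mc^2 corresponds to m = 4/3·E/c^2 (the same 4/3 that appears in classical electron models) and E = 3/8·mc^2 to m = 8/3·E/c^2. A small numerical check (the energy value below is arbitrary):

```python
C = 2.998e8  # speed of light in m/s

def mass_equivalent(energy_j, k=1.0):
    """Solve E = k * m * c^2 for the apparent mass m associated with energy E."""
    return energy_j / (k * C ** 2)

E = 1.0e15                                # an arbitrary radiation energy, in joules
m_einstein = mass_equivalent(E)           # E = m c^2
m_corrected = mass_equivalent(E, k=0.75)  # Hasenohrl's corrected E = (3/4) m c^2
m_first = mass_equivalent(E, k=0.375)     # his first result, E = (3/8) m c^2

print(m_corrected / m_einstein)  # ratio 4/3, about 1.333
print(m_first / m_einstein)      # ratio 8/3, about 2.667
```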
Subjects: History and Philosophy of Physics (physics.hist-ph); Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc)
Journal reference: Eur. Phys. J. H. 38, 262-78 (2013)
DOI: 10.1140/epjh/e2012-30061-5
Cite as: arXiv:1303.7162 [physics.hist-ph] (or arXiv:1303.7162v1 [physics.hist-ph] for this version)
## Submission history
From: Stephen P. Boughn
[v1] Thu, 28 Mar 2013 16:10:54 GMT (109kb) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8868088722229004, "perplexity": 1053.654494196227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190181.34/warc/CC-MAIN-20170322212950-00245-ip-10-233-31-227.ec2.internal.warc.gz"} |