https://www.universetoday.com/1848/gamma-ray-bursts-eject-matter-at-nearly-the-speed-of-light/?shared=email&msg=fail
# Gamma Ray Bursts Eject Matter at Nearly the Speed of Light

Gamma ray bursts are the most powerful explosions in the Universe, emitting more energy in an instant than our Sun will give off in its entire lifetime. But they don't just blast out radiation; they also eject matter, and they eject it very quickly: at 99.9997% of the speed of light. This discovery was made by a large group of European researchers, who trained a robotic telescope at the European Southern Observatory's La Silla Observatory on two recent gamma ray burst explosions. The telescope receives its targets automatically from NASA's Swift satellite, and it autonomously zeroes in to capture as much data as possible during the first few seconds after an explosion is detected. In two cases, La Silla observed the light curve of the explosion and measured its peak. Measuring the peak is the key, since it allowed the researchers to calculate the velocity of the matter ejected from the explosion. For these two explosions, the matter was calculated to be traveling at 99.9997% of the speed of light. That's fast.

Original Source: ESO News Release
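A speed of 99.9997% of the speed of light corresponds to a Lorentz factor of roughly 408, which is what makes such ejecta "ultra-relativistic". A minimal sketch of that arithmetic, using only the 99.9997% figure from the article and the standard special-relativity formula (the function name is ours):

```python
import math

def lorentz_factor(beta: float) -> float:
    """Lorentz factor gamma = 1 / sqrt(1 - beta^2), where beta = v/c."""
    # Compute 1 - beta^2 as (1 - beta)(1 + beta) to avoid floating-point
    # cancellation when beta is extremely close to 1.
    return 1.0 / math.sqrt((1.0 - beta) * (1.0 + beta))

beta = 0.999997  # 99.9997% of the speed of light, as reported
gamma = lorentz_factor(beta)
print(f"gamma ~ {gamma:.1f}")  # around 408: time dilation factor of the ejecta
```

So a clock riding with the ejected matter would tick roughly 408 times slower than one at rest relative to the burst.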
https://chemistry.stackexchange.com/questions/76648/does-simmering-sparkling-water-change-decrease-its-mineral-composition-significa
# Does simmering sparkling water change/decrease its mineral composition significantly?

I know that heating sparkling water will cause it to lose $\ce{CO2}$. Does this have any other effects on the water that remains, such as reducing its mineral content? I understand that when non-carbonated mineral water evaporates, its minerals stay behind and create a greater mineral concentration in the water that's left. Does sparkling water behave in the same way, or do other reactions occur that cause the water to change in additional ways?

• OP asks about a fairly basic concept, but that's no reason to be snide, @Karl. It's not out of the question that the carbonate may introduce some different reactivity as the water is heated. Actually, the solubility of many carbonates decreases with increasing temperature, now that I think about it -- I may have to revise my answer.... – hBy2Py Jun 22 '17 at 23:37

No, simmering sparkling water should have a negligible effect on its mineral composition. You're exactly right that simmering it will drive off the $\ce{CO2}$. This $\ce{CO2}$ leaves as only/exactly $\ce{CO2}$ molecules, and any minerals that might have been associated with the carbonate/bicarbonate $(\ce{CO3^{2-}}/\ce{HCO3^-})$ will remain behind in the water. Most such minerals will be present in dilute enough concentrations that they will remain dissolved in the no-longer-sparkling water, as long as only a relatively small amount of the water itself is boiled away.
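The "minerals stay behind" point is just conservation of dissolved mass: if some fraction of the water evaporates, the same mineral mass sits in less water. A quick sketch of that mass balance; note the 500 mg/L starting value and the 10% boil-off are made-up illustration numbers, not figures from the question:

```python
def concentration_after_boiloff(c0_mg_per_l: float, fraction_boiled: float) -> float:
    """Mineral concentration after a fraction of the water evaporates.

    Dissolved mineral mass is conserved while the water volume shrinks,
    so c = c0 * V0 / V = c0 / (1 - fraction_boiled).
    """
    return c0_mg_per_l / (1.0 - fraction_boiled)

# Illustrative numbers only: 500 mg/L total dissolved solids,
# with 10% of the water simmered away.
print(concentration_after_boiloff(500.0, 0.10))  # about 556 mg/L
```

This is why the answer hedges on "only a relatively small amount of the water itself is boiled away": the concentration factor grows without bound as the boiled-off fraction approaches 1.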
https://proofwiki.org/wiki/Mathematician:Mathematicians/Sorted_By_Nation/Germany
# Mathematician:Mathematicians/Sorted By Nation/Germany

For more comprehensive information on the lives and works of mathematicians through the ages, see the MacTutor History of Mathematics archive, created by John J. O'Connor and Edmund F. Robertson.

The army of those who have made at least one definite contribution to mathematics as we know it soon becomes a mob as we look back over history; 6,000 or 8,000 names press forward for some word from us to preserve them from oblivion, and once the bolder leaders have been recognised it becomes largely a matter of arbitrary, illogical legislation to judge who of the clamouring multitude shall be permitted to survive and who be condemned to be forgotten.
-- Eric Temple Bell: Men of Mathematics, 1937, Victor Gollancz, London

## Holy Roman Empire

##### Nicholas of Cusa $(\text {1401} – \text {1464})$

German philosopher, theologian, jurist, and astronomer. Believed he had calculated $\pi$ exactly, as $3 \cdotp 1423$, but then also gave a good trigonometrical approximation later used by Willebrord van Royen Snell.

##### Johannes Müller von Königsberg $(\text {1436} – \text {1476})$

Better known under his Latinized name Regiomontanus: both surnames mean King's mountain. German mathematician, astronomer, astrologer, translator, instrument maker and Catholic bishop. Pupil of Georg von Peuerbach, whose uncompleted work he continued. Set up a printing press at Nuremberg in $\text {1471}$ – $\text {1472}$ for printing scientific works. First publisher of such scientific literature. Became internationally famous within his own lifetime.

##### Albrecht Dürer $(\text {1471} – \text {1528})$

German painter, printmaker and theorist whose theoretical treatises involve principles of mathematics, perspective and ideal proportions.
##### Ludolph van Ceulen $(\text {1540} – \text {1610})$

German-Dutch mathematician best known for his calculation of the value of $\pi$. The Ludolphine number is the expression of the value of $\pi$ to $35$ decimal places: $3 \cdotp 14159 \, 26535 \, 89793 \, 23846 \, 26433 \, 83279 \, 50288 \ldots$

##### Johannes Kepler $(\text {1571} – \text {1630})$

German mathematician and astronomer best known nowadays for Kepler's Laws of Planetary Motion. Inherited the papers of Tycho Brahe and spent many years analysing his observations, looking for patterns. His most significant contribution to scientific thought was his deduction that the orbits of the planets are elliptical. Also pre-empted the methods of integral calculus to find the volume of a solid of revolution by slicing it into thin disks, calculating the volume of each, and then adding those volumes together.

##### Johann Faulhaber $(\text {1580} – \text {1635})$

German surveyor and engineer who was also a mathematician of the cossist tradition. A significant influence on several mathematicians, including René Descartes, Jacob Bernoulli and Carl Jacobi. Best known for his work on series of powers.

##### Nicholas Mercator $(\text {c. 1620} – \text {1687})$

German mathematician who designed a marine chronometer for Charles $\text {II}$ of England, and designed and constructed the fountains at the Palace of Versailles. Known for the Newton-Mercator Series.

##### Johann Friedrich Pfaff $(\text {1765} – \text {1825})$

German mathematician who was a precursor of the German school, being a direct influence on Carl Friedrich Gauss.

##### August Leopold Crelle $(\text {1780} – \text {1855})$

Self-educated and enthusiastic German mathematician whose most important work was founding Journal für die reine und angewandte Mathematik, better known as Crelle's Journal.
##### Georg Simon Ohm $(\text {1789} – \text {1854})$

German physicist and mathematician best remembered for Ohm's Law.

##### Wilhelm August Förstemann $(\text {1791} – \text {1836})$

German mathematician best known for his textbooks, which were standard German grammar school texts for some considerable time. Published a series of articles on the task of rationalizing equations.

##### Karl Wilhelm Feuerbach $(\text {1800} – \text {1834})$

German geometer best known for Feuerbach's Theorem. Introduced homogeneous coordinates in $1827$, independently of August Ferdinand Möbius.

##### Julius Plücker $(\text {1801} – \text {1868})$

German mathematician and physicist who made fundamental contributions to the field of analytical geometry. Pioneer in the investigations of cathode rays that led eventually to the discovery of the electron. Vastly extended the study of Lamé curves. Published the first complete classification of plane cubic curves.

##### Wilhelm Eduard Weber $(\text {1804} – \text {1891})$

German physicist who invented the first electromagnetic telegraph with Carl Friedrich Gauss.

## Electoral Palatinate

##### Jakob Köbel $(\text {1462} – \text {1533})$

German mathematician and state official about whom little can be found on the internet.

##### Elisabeth of the Palatinate $(\text {1618} – \text {1680})$

Princess of the Electorate of the Palatinate who studied (among other things) mathematics and philosophy with René Descartes. Her correspondence with Descartes survives as a record of the nature of philosophical and religious debates in that period. Renowned for her intelligence and humanism.

##### Prince Rupert of the Rhine $(\text {1619} – \text {1682})$

Prince of the lines of both the Electorate of the Palatinate and the House of Stuart, who later in life turned to science and mathematics.
Known for posing the question which is now known as Prince Rupert's Cube. Renowned for his military flair, but also notorious for his heavy-handed treatment of defeated enemies.

##### Andreas Freiherr von Ettingshausen $(\text {1796} – \text {1878})$

German mathematician and physicist. The first to build an electromagnetic machine. Invented the notation $\dbinom n k$ for the binomial coefficient.

##### Oskar Bolza $(\text {1857} – \text {1942})$

German mathematician best known for his research in the calculus of variations, particularly influenced by Karl Weierstrass's $1879$ lectures on the subject.

## Esslingen am Neckar

##### Michael Stifel $(\text {1487} – \text {1567})$

German monk and mathematician who made significant advances in mathematical notation, including the juxtaposition technique for indicating multiplication. The first to use the term exponent. Published rules for calculation of powers. The first to use a standard method to solve quadratic equations. Also an early adopter of negative and irrational numbers.

## Bavaria

##### Adam Ries $(\text {1492} – \text {1559})$

Influential German mathematician who wrote some important instructional works, including sets of tables for calculations.

##### Simon Jacob $(\text {c. 1500} – \text {1564})$

German reckoner about whom little is known. Published a book demonstrating that he understood some facts about the Fibonacci numbers that were not rediscovered until centuries later.

##### Wilhelm Xylander $(\text {1532} – \text {1576})$

German classical scholar and humanist who translated the Arithmetica of Diophantus.

##### Christopher Clavius $(\text {1538} – \text {1612})$

German Jesuit and logician.
Best known for:
• Clavius's Law (also written as Clavius' Law), otherwise known as the Consequentia Mirabilis, which states that if by assuming the negation of a proposition you can prove its truth, then that proposition is true.
• Being instrumental in the development of the Gregorian calendar.
• Writing highly-acclaimed and well-received text-books.

##### Johann Georg von Soldner $(\text {1776} – \text {1833})$

German mathematician, physicist and astronomer. Calculated the Euler-Mascheroni constant to 24 places. The first to predict (100 years before Einstein) that light rays would be bent by the gravitational fields of stars.

##### Gustav Conrad Bauer $(\text {1820} – \text {1906})$

German mathematician whose mathematical research dealt with algebra, geometric problems, spherical harmonics, the gamma function, and generalized continued fractions.

##### Walther Franz Anton von Dyck $(\text {1856} – \text {1934})$

German mathematician who was one of the pioneers of group theory. The first to define a group in the abstract sense. The first to study a group by generators. A student of Felix Klein.

## Württemberg

##### Johannes Scheubel $(\text {1494} – \text {1570})$

German mathematician noted for his work in popularising the use of algebra throughout Europe. Also published an edition of the first six books of Euclid's The Elements.

##### Johann Wilhelm von Camerer $(\text {1763} – \text {1847})$

German Protestant theologian, mathematician, astronomer and historian of mathematics. Also published an edition of the first six books of Euclid's The Elements.

##### Wilhelm Jordan $(\text {1842} – \text {1899})$

German geodesist who conducted surveys in Germany and Africa and founded the German geodesy journal.
Remembered for Gauss-Jordan elimination, a version of Gaussian elimination with improved stability, for minimizing the squared error in the sum of a series of surveying observations.

##### Otto Ludwig Hölder $(\text {1859} – \text {1937})$

German mathematician most famous for his work in analysis (in particular Fourier series) and group theory.

##### Wilhelm Weinberg $(\text {1862} – \text {1937})$

German obstetrician-gynecologist who expressed the concept that would later come to be known as the Hardy-Weinberg Principle.

## Saxony

##### Petrus Apianus $(\text {1495} – \text {1552})$

German humanist and mathematician. One of his books significantly appears in the painting The Ambassadors by Hans Holbein the Younger.

##### Erasmus Reinhold $(\text {1511} – \text {1553})$

German astronomer and mathematician, considered to be the most influential astronomical pedagogue of his generation.

##### Gottfried Wilhelm von Leibniz $(\text {1646} – \text {1716})$

German mathematician and philosopher who is best known for being the co-inventor (independently of Isaac Newton) of calculus. Took some of the first philosophical steps towards a system of symbolic logic, but his works failed to have much influence on the development of logic, and these ideas were not developed to any significant extent. Invented the system of binary notation.

##### Ehrenfried Walther von Tschirnhaus $(\text {1651} – \text {1708})$

German mathematician more famous for inventing a brand of porcelain. Worked on techniques in algebra, and also investigated catacaustic curves. Published what he thought was a solution to the quintic equation in $1683$, but Gottfried Wilhelm von Leibniz pointed out that it was fallacious.

##### Gustav Roch $(\text {1839} – \text {1866})$

German mathematician who made significant contributions to the theory of Riemann surfaces.
##### Erwin Papperitz $(\text {1857} – \text {1938})$

German mathematician who worked on the hypergeometric differential equation.

##### Friedrich Engel $(\text {1861} – \text {1941})$

German mathematician specialising in partial differential equations.

##### Alwin Reinhold Korselt $(\text {1864} – \text {1947})$

German mathematician best known for Korselt's Theorem, which provides a characterisation of Carmichael numbers. Contributed an early result in relational algebra.

## East Prussia

##### Christian Goldbach $(\text {1690} – \text {1764})$

Prussian amateur mathematician who also studied law and medicine. Best known for posing the Goldbach Conjecture, which also appears as Goldbach's Marginal Conjecture, and a similar weaker conjecture known as Goldbach's Weak Conjecture.

##### Johann Daniel Titius $(\text {1729} – \text {1796})$

German astronomer best known for formulating the Titius-Bode Law, and thence predicting the existence of a planet between Mars and Jupiter. Also active in the field of biology.

##### Friedrich Julius Richelot $(\text {1808} – \text {1875})$

German mathematician best known for his construction of the regular $257$-gon.

##### Hermann Günter Grassmann $(\text {1809} – \text {1877})$

Prussian mathematician who pioneered the field of linear algebra and vector analysis. His work was way ahead of its time, and did not receive the recognition it deserved until much later. During his life he gained more recognition for his study of languages, including Gothic and Sanskrit, than as a mathematician.

##### Gustav Robert Kirchhoff $(\text {1824} – \text {1887})$

Prussian physicist who contributed to the fundamental understanding of electrical circuits, spectroscopy, and the emission of black-body radiation by heated objects.
##### Daniel Friedrich Ernst Meissel $(\text {1826} – \text {1895})$

German astronomer who contributed to various aspects of number theory.

##### Paul David Gustav du Bois-Reymond $(\text {1831} – \text {1889})$

German mathematician who worked on the mechanical equilibrium of fluids, the theory of functions and in mathematical physics. Also worked on Sturm-Liouville theory, integral equations, variational calculus, and Fourier series. In $1873$, constructed a continuous function whose Fourier series is not convergent. His lemma defines a sufficient condition to guarantee that a function vanishes almost everywhere. Also established that a trigonometric series that converges to a continuous function at every point is the Fourier series of this function. Discovered a proof method that later became known as Cantor's diagonal argument. His name is also associated with the Fundamental Lemma of the Calculus of Variations, of which he proved a refined version based on that of Lagrange.

##### Rudolf Otto Sigismund Lipschitz $(\text {1832} – \text {1903})$

German mathematician who worked in many areas, including analysis, number theory and differential geometry.

##### Paul Albert Gordan $(\text {1837} – \text {1912})$

German mathematician who worked in invariant theory and algebraic geometry. Best known for his proof of his finite base theorem.

##### Johann Gustav Hermes $(\text {1846} – \text {1912})$

German mathematician best known for his attempted construction of the regular $65 \, 537$-gon. Recent research suggests that there may be mistakes in this construction.

##### Kurt Wilhelm Sebastian Hensel $(\text {1861} – \text {1941})$

German mathematician best known for his introduction of $p$-adic numbers.

##### Felix Hausdorff $(\text {1868} – \text {1942})$

German mathematician fundamental in the development of modern topology.
Also active in set theory, measure theory and function theory. The first to formulate what is now known as the Generalized Continuum Hypothesis.

##### Arnold Johannes Wilhelm Sommerfeld $(\text {1868} – \text {1951})$

German theoretical physicist who pioneered developments in atomic and quantum physics.

##### Emanuel Lasker $(\text {1868} – \text {1941})$

German philosopher and mathematician who was also one of the greatest chess-players of all time. Inventor of the game now known as Lasca.

## Hamburg

##### Johann Elert Bode $(\text {1747} – \text {1826})$

German astronomer known for his reformulation and popularization of the Titius-Bode Law. Determined the orbit of Uranus and suggested the planet's name.

##### Johann Martin Zacharias Dase $(\text {1824} – \text {1861})$

German mental calculator famous for calculating $\pi$ to $200$ places in $1844$.

## Duchy of Brunswick-Lüneburg

##### Carl Friedrich Gauss $(\text {1777} – \text {1855})$

One of the most influential mathematicians of all time, contributing to many fields, including number theory, statistics, analysis and differential geometry.

## Prussia

##### Friedrich Wilhelm Bessel $(\text {1784} – \text {1846})$

Prussian mathematician best known for making a systematic study of what is now known as Bessel's equation.

##### Heinrich Ferdinand Scherk $(\text {1798} – \text {1885})$

German mathematician notable for his work on minimal surfaces and the distribution of prime numbers.

##### Carl Gustav Jacob Jacobi $(\text {1804} – \text {1851})$

Prolific Prussian mathematician, now most famous for his work with the elliptic functions.

##### Ernst Eduard Kummer $(\text {1810} – \text {1893})$

German mathematician mostly active in the field of applied mathematics. Also worked in abstract algebra and field theory.
Proved that Fermat's Last Theorem holds for all exponents $p$ such that $p$ is a regular prime.

##### Theodor Schönemann $(\text {1812} – \text {1868})$

Also rendered as Theodor Schoenemann. German mathematician who obtained some important results in number theory. Obtained Hensel's Lemma before Hensel, and formulated Eisenstein's Criterion (also known as the Schönemann-Eisenstein Theorem) before Eisenstein.

##### Heinrich Eduard Heine $(\text {1821} – \text {1881})$

German mathematician who worked mainly in analysis.

##### Ferdinand Gotthold Max Eisenstein $(\text {1823} – \text {1852})$

German mathematician best known for his work in number theory. Student of Carl Friedrich Gauss. Died tragically young of tuberculosis.

##### Leopold Kronecker $(\text {1823} – \text {1891})$

German mathematician most notable for his view that all of mathematics ought to be based on integers. Also a proponent of the mathematical philosophy of finitism, a forerunner of intuitionism and constructivism. His influence on the mathematical establishment was considerable. His views put him in direct opposition, most notably, to Georg Cantor, who was exploring the mathematics of the transfinite.

##### August Beer $(\text {1825} – \text {1863})$

German physicist and mathematician. Contributed towards the Beer-Lambert-Bouguer Law.

##### Elwin Bruno Christoffel $(\text {1829} – \text {1900})$

German mathematician and physicist. Introduced fundamental concepts of differential geometry, opening the way for the development of tensor calculus. This later provided the mathematical basis for general relativity.

##### Karl Hermann Amandus Schwarz $(\text {1843} – \text {1921})$

German mathematician known for his work in the field of complex analysis. Student of Weierstrass. Best known for his contribution to the Cauchy-Bunyakovsky-Schwarz Inequality.
##### Moritz Pasch $(\text {1843} – \text {1930})$

German mathematician who specialized in the foundations of geometry. His work served as the inspiration for Giuseppe Peano and David Hilbert in their efforts to re-axiomatise the field of geometry. Best known for his formulation of what is now known as Pasch's Axiom.

##### Felix Christian Klein $(\text {1849} – \text {1925})$

German mathematician best known for his work establishing the connections between geometry and group theory. Architect of the Erlangen program, which classifies geometries according to their symmetry groups. Noted for the Klein bottle and the Klein $4$-group.

##### Ferdinand Georg Frobenius $(\text {1849} – \text {1917})$

German mathematician best known for his work on differential equations and group theory. Gave the first full proof of the Cayley-Hamilton Theorem.

##### Alfred Pringsheim $(\text {1850} – \text {1941})$

German mathematician and patron of the arts, best known for Pringsheim's Theorem.

##### Arthur Moritz Schönflies $(\text {1853} – \text {1928})$

German mathematician known for his contributions to the application of group theory to crystallography, and for work in topology.

##### Hans Carl Friedrich von Mangoldt $(\text {1854} – \text {1925})$

German mathematician who contributed towards the solution of the Prime Number Theorem.

##### Paul Rudolf Eugen Jahnke $(\text {1861} – \text {1921})$

German mathematician best known for his $1909$ Funktionentafeln mit Formeln und Kurven.

##### David Hilbert $(\text {1862} – \text {1943})$

One of the most influential mathematicians in the late $19$th and early $20$th century. Most famous for the Hilbert $23$, a list he delivered in $1900$ of $23$ problems which were at the time still unsolved.
##### Charles Proteus Steinmetz $(\text {1865} – \text {1923})$

Prussian-born American mathematician and electrical engineer and professor at Union College. Fostered the development of alternating current, which enabled the expansion of the electric power industry in the United States. Formulated mathematical theories for engineers. Explained the phenomenon of hysteresis.

##### Georg Wilhelm Scheffers $(\text {1866} – \text {1945})$

German mathematician whose specialty was differential geometry. Also a writer of several well-received textbooks.

##### Martin Wilhelm Kutta $(\text {1867} – \text {1944})$

German mathematician best known for co-developing (with Carl David Tolmé Runge) the Runge-Kutta Methods in the field of numerical analysis. Also known for the Zhukovsky-Kutta Aerofoil.

## Saxony-Anhalt

##### August Ferdinand Möbius $(\text {1790} – \text {1868})$

German mathematician and theoretical astronomer, active in geometry and number theory. Best known for inventing the Möbius Strip, although this was actually invented independently by Johann Benedict Listing at around the same time.

##### Hermann Hankel $(\text {1839} – \text {1873})$

German mathematician who worked on complex numbers and quaternions.

##### Leo August Pochhammer $(\text {1841} – \text {1920})$

German mathematician known for his work on special functions. Also known for the Pochhammer symbol.

##### Eugen Otto Erwin Netto $(\text {1846} – \text {1919})$

German mathematician known for his work in group theory.

## Free Imperial City of Rothenburg

##### Karl Georg Christian von Staudt $(\text {1798} – \text {1867})$

German mathematician best known for his book Geometrie der Lage, an important work in the development of the discipline of projective geometry.
## North Rhine / Westphalia

##### Daniel Christian Ludolph Lehmus $(\text {1780} – \text {1863})$

German mathematician best remembered for the Steiner-Lehmus Theorem.

##### Johann Peter Gustav Lejeune Dirichlet $(\text {1805} – \text {1859})$

German mathematician who worked mainly in the field of analysis. Credited with the first formal definition of a function.

##### Karl Theodor Wilhelm Weierstrass $(\text {1815} – \text {1897})$

German mathematician whose main work concerned the rigorous foundations of calculus. Known as "the father of modern analysis".

##### Heinrich Menge $(\text {1838} – \text {c. 1904})$

German classical scholar and high school teacher, who contributed towards the documentation of the ancient history of mathematics.

##### Wilhelm Karl Joseph Killing $(\text {1847} – \text {1923})$

German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.

## Hesse

##### Moritz Abraham Stern $(\text {1807} – \text {1894})$

German mathematician known for formulating Stern's diatomic series. Also known for the Stern-Brocot Tree, which he wrote about in $1858$ and which Brocot independently discovered in $1861$.

##### Johann Benedict Listing $(\text {1808} – \text {1882})$

German mathematician and physicist who coined the term topology in a letter of $1836$. In $1858$ he invented the Möbius strip at about the same time that August Ferdinand Möbius did.

##### Gustavus Frankenstein $(\text {1827} – \text {1893})$

German-American clock maker, artist, mathematician and writer. Best known now for being the first to discover a perfect magic cube of order $8$.

##### Alexander Wilhelm von Brill $(\text {1842} – \text {1935})$

German mathematician best known for his involvement with Felix Klein in the reform of the teaching of mathematics.
Made significant contributions to the field of algebraic geometry.

##### August Otto Föppl $(\text {1854} – \text {1924})$

German mathematician credited with introducing Föppl-Klammer theory and the Föppl-von Kármán Equations.

##### Paul Friedrich Wolfskehl $(\text {1856} – \text {1906})$

German physician with an interest in mathematics. He bequeathed $100\,000$ marks (equivalent to $£ 1\,000\,000$ in $1997$ money) to the first person to prove Fermat's Last Theorem. By the time the prize was finally awarded to Andrew John Wiles on $27$ June $1997$, the monetary value of the award had dwindled to $£ 30\,000$.

## Ernestine Duchies

##### Carl Anton Bretschneider $(\text {1808} – \text {1878})$

German mathematician who worked in geometry, number theory, and the history of geometry. He also worked on logarithmic integrals and mathematical tables. Probably the first mathematician to use the symbol $\gamma$ for the Euler-Mascheroni constant, which he published in a paper of $1837$.

## Hanover

##### Georg Friedrich Bernhard Riemann $(\text {1826} – \text {1866})$

German mathematician most famous for the Riemann Hypothesis, which is (at time of writing, early $21$st century) one of the most highly sought-after results in mathematics.

##### Carl Louis Ferdinand von Lindemann $(\text {1852} – \text {1939})$

German mathematician who made his mark by publishing a proof in $1882$ that $\pi$ is transcendental.

##### Moritz Benedikt Cantor $(\text {1829} – \text {1920})$

German historian of mathematics.

##### Friedrich Wilhelm Karl Ernst Schröder $(\text {1841} – \text {1902})$

German mathematician active mainly in the field of algebraic logic. He is best known for his contribution to what is now known as the Cantor-Bernstein-Schröder Theorem.
show full page ##### Heinrich Martin Weber $($$\text {1842} – \text {1913}$$)$ German mathematician who worked in algebra, number theory, analysis and applications of analysis to mathematical physics. Formulated the ring axioms. show full page ##### Max Noether $($$\text {1844} – \text {1921}$$)$ German mathematician (also occasionally rendered Nöther) notable for his work in algebraic geometry and algebraic functions. Father of Emmy Noether. show full page ## Braunschweig ##### Julius Wilhelm Richard Dedekind $($$\text {1831} – \text {1916}$$)$ German mathematician who worked in the fields of abstract algebra, and algebraic number theory. Most noted for his work on the foundations of the real numbers. Used the thinking behind the resolution of Galileo's Paradox to underpin the definition of an infinite set. show full page ## Lower Saxony ##### Karl Theodor Reye $($$\text {1838} – \text {1919}$$)$ German mathematician who contributed to geometry, particularly projective geometry and synthetic geometry. show full page ##### Adolf Hurwitz $($$\text {1859} – \text {1919}$$)$ German mathematician who was an early master of the theory of Riemann surfaces. show full page ## German Conferderation ##### Georges Pfeffermann $($$\text {1838} – \text {1914}$$)$ German amateur mathematician who did a lot of work on magic squares and multiplicative magic squares. show full page ## Duchy of Mecklenburg-Schwerin ##### Friedrich Ludwig Gottlob Frege $($$\text {1848} – \text {1925}$$)$ German philosopher, logician, and mathematician, one of the founders of modern logic. Made major contributions to the foundations of mathematics. show full page ## Bremen ##### Carl David Tolmé Runge $($$\text {1856} – \text {1927}$$)$ German mathematician, physicist, and spectroscopist. Best known as the co-developer (with Martin Wilhelm Kutta) of the Runge-Kutta Methods in the field of numerical analysis. Also known for his work on the Zeeman effect. 
His work paved the way for the Thue-Siegel-Roth Theorem in the field of Diophantine equations. show full page ## Duchy of Holstein ##### Max Karl Ernst Ludwig Planck $($$\text {1858} – \text {1947}$$)$ German theoretical physicist whose discovery of energy quanta won him the Nobel Prize in Physics in $1918$. show full page ## German Empire ##### Ernst Friedrich Ferdinand Zermelo $($$\text {1871} – \text {1953}$$)$ German mathematician best known for his work on the foundations of mathematics. Laid the groundwork (later to be enhanced by Abraham Fraenkel) for what are now known as the Zermelo-Fraenkel axioms of axiomatic set theory. show full page ##### Fritz Emde $($$\text {1873} – \text {1951}$$)$ German electronic engineer and high school teacher, best known for his co-authorship with Eugen Jahnke of Funktionentafeln mit Formeln und Kurven. show full page ##### Heinrich Dörrie $($$\text {1873} – \text {1955}$$)$ German teacher of mathematics and author of several specialist books. show full page ##### Friedrich Moritz Hartogs $($$\text {1874} – \text {1943}$$)$ Killed himself as a result of the treatment he had received from the government of his country at the time. show full page ##### Erhard Schmidt $($$\text {1876} – \text {1959}$$)$ Baltic German mathematician whose work significantly influenced the direction of mathematics in the twentieth century. show full page ##### Heinrich Wilhelm Ewald Jung $($$\text {1876} – \text {1953}$$)$ German mathematician who specialized in geometry and algebraic geometry. show full page ##### Edmund Georg Hermann Landau $($$\text {1877} – \text {1938}$$)$ German mathematician who worked in the fields of number theory and complex analysis. show full page ##### Georg Karl Wilhelm Hamel $($$\text {1877} – \text {1954}$$)$ German mathematician with interests in mechanics, the foundations of mathematics and function theory. 
##### Felix Bernstein (1878 – 1956)
German mathematician active mainly in the field of algebraic logic. He is best known for his 1897 contribution to what is now known as the Cantor-Bernstein-Schröder Theorem.

##### Leopold Löwenheim (1878 – 1957)
German mathematician whose work pioneered the field of model theory. Much of his unpublished work was lost when his house was destroyed by bombing in 1943.

##### Albert Einstein (1879 – 1955)
German-born mathematician and physicist. Probably the most famous scientist of all time.

##### Paul Koebe (1882 – 1945)
German-born mathematician who dealt exclusively with the complex numbers. His most important results were on the uniformization of Riemann surfaces.

##### Emmy Noether (1882 – 1935)
German-born mathematician who made considerable contributions to abstract algebra and theoretical physics. Most famous for Noether's Theorem, which makes the fundamental connection between symmetry and various laws of conservation. Her philosophy and outlook were fundamental in the development of ideas that led to the establishment of the field of category theory. Daughter of Max Noether.

##### Konrad Hermann Theodor Knopp (1882 – 1957)
German mathematician who worked on generalized limits and complex functions.

##### Max Born (1882 – 1970)
German-Jewish physicist and mathematician who was instrumental in the development of quantum mechanics. Also made contributions to solid-state physics and optics. Supervised the work of a number of notable physicists in the 1920s and 1930s.

##### Arthur Josef Alwin Wieferich (1884 – 1954)
German mathematician who contributed briefly to the field of number theory before concentrating on a career in teaching.

##### Hermann Klaus Hugo Weyl (1885 – 1955)
German mathematician who worked in the fields of mathematical logic and mathematical physics.

##### Ludwig Georg Elias Moses Bieberbach (1886 – 1982)
German mathematician working mostly in analysis.

##### Arthur Rosenthal (1887 – 1959)
German mathematician working in geometry, in particular the classification of regular polyhedra and Hilbert's axioms. Also made contributions in analysis, including to Carathéodory's theory of measure. With Michel Plancherel, made contributions in ergodic theory and dynamical systems.

##### Erich Hecke (1887 – 1947)
German mathematician working mainly in functional analysis.

##### Richard Courant (1888 – 1972)
German mathematician best known for his writings. Made considerable contributions to the field of numerical analysis.

##### William Richard Maximilian Hugo Threlfall (1888 – 1949)
German mathematician whose main work was in topology. Collaborated extensively with Karl Johannes Herbert Seifert.

##### Abraham Halevi Fraenkel (1891 – 1965)
German-born Israeli mathematician best known for his work on axiomatic set theory. He improved Ernst Zermelo's axiomatic system, and out of that work came the Zermelo-Fraenkel axioms. He also wrote on topics in the history of mathematics.

##### Rudolf Carnap (1891 – 1970)
German-born philosopher who was active in Europe before 1935 and in the United States thereafter.

##### Roland Percival Sprague (1894 – 1967)
German mathematician known for the Sprague-Grundy Theorem and for being the first mathematician to find a perfect squared square.

##### Heinz Hopf (1894 – 1971)
German mathematician who worked in the fields of topology and geometry.

##### Wilhelm Friedrich Ackermann (1896 – 1962)
German mathematician, best known for the Ackermann function.

##### Ernst Paul Heinz Prüfer (1896 – 1934)
German mathematician who worked on abelian groups, algebraic numbers, knot theory and Sturm-Liouville theory. Provided an ingenious proof of Cayley's Formula.

##### Carl Ludwig Siegel (1896 – 1981)
German mathematician specialising in analytic number theory.

##### Gregor Wentzel (1898 – 1978)
German physicist best known for his work on the development of quantum mechanics.

##### Hellmuth Kneser (1898 – 1973)
German mathematician who made notable contributions to group theory and topology. Derived the theorem on the existence of a prime decomposition for $3$-manifolds. Originated the concept of a normal surface.

##### Helmut Hasse (1898 – 1979)
German mathematician who worked mainly in algebraic number theory and class field theory.

##### Karl Menninger (1898 – 1963)
German teacher of and writer about mathematics.

##### Wolfgang Krull (1899 – 1971)
German mathematician who made significant contributions to many areas of commutative algebra. Much of his work was influenced by Felix Klein and Emmy Noether.

##### Richard Dagobert Brauer (1901 – 1977)
German / American mathematician who worked mainly in abstract algebra. Made important contributions to number theory. Founder of modular representation theory.

##### Kurt Otto Friedrichs (1901 – 1982)
German applied mathematician whose major contribution was his work on partial differential equations.

##### Werner Karl Heisenberg (1901 – 1976)
German theoretical physicist who was one of the key pioneers of quantum mechanics.

##### Oskar Morgenstern (1902 – 1977)
German-born economist notable for founding the field of game theory in collaboration with John von Neumann, and applying it to economics.

##### Camillo Herbert Grötzsch (1902 – 1993)
German mathematician working mainly in graph theory.

##### Kurt Mahler (1903 – 1988)
German mathematician working mainly in analysis and number theory. Proved the Prouhet-Thue-Morse constant and the Champernowne constant to be transcendental.

##### Helmut Grunsky (1904 – 1986)
German mathematician who worked in complex analysis and geometric function theory.

##### Hans Lewy (1904 – 1988)
German-born American mathematician, known for his work on partial differential equations and on the theory of functions of several complex variables.

##### Hans Freudenthal (1905 – 1990)
German-born Dutch mathematician who made substantial contributions to algebraic topology. Took an interest in literature, philosophy, history and mathematics education. One of the most important figures in mathematics education in the 20th century.

##### Max August Zorn (1906 – 1993)
German-born American mathematician who worked in algebra, set theory and numerical analysis. Best known for Zorn's Lemma, which he discovered in 1935. This is also known as the Kuratowski-Zorn Lemma, thereby acknowledging the work of Kazimierz Kuratowski, who had published a version of it in 1922.

##### Karl Johannes Herbert Seifert (1907 – 1996)
German mathematician who worked mainly in topology and knot theory. Collaborated extensively with William Threlfall. One of the few who managed to weather the Second World War without upsetting either the Nazis or the Allies.

##### Theodore Samuel Motzkin (1908 – 1970)
German-born Israeli-American mathematician who was one of the pioneers of linear programming. Also published in the fields of algebra, graph theory, approximation theory, combinatorics, numerical analysis, algebraic geometry and number theory. Worked as a cryptographer for the British government during World War II.

##### Bernhard Hermann Neumann (1909 – 2002)
German-born mathematician who was one of the leaders in the field of group theory. Husband of Hanna Neumann and father of Peter Michael Neumann.

##### Gerhard Karl Erich Gentzen (1909 – 1945)
German mathematician and logician who made progress in symbolic logic. Proved that the Peano axioms are consistent.

##### Fritz John (1910 – 1994)
German mathematician best known for his work on partial differential equations and ill-posed problems.

##### Lothar Collatz (1910 – 1990)
German mathematician best known for posing the Collatz Conjecture.

##### Helmut Wielandt (1910 – 2001)
German mathematician whose main work was in group theory, especially permutation groups.

##### Walter Ledermann (1911 – 2009)
German mathematician best known for his work in homology, group theory and number theory.
##### Theodor Schneider (1911 – 1988)
German mathematician best known for providing a proof of the Gelfond-Schneider Theorem.

##### Ernst Witt (1911 – 1991)
German mathematician working mainly in the field of quadratic forms and algebraic function fields.

##### Hans Julius Zassenhaus (1912 – 1991)
German mathematician who did significant work in abstract algebra, and also pioneered the science of computer algebra.

##### Karl Stein (1913 – 2000)
German mathematician well known for his work in complex analysis and cryptography.

##### Paul Julius Oswald Teichmüller (1913 – 1943)
German mathematician who introduced quasiconformal mappings and differential geometric methods into complex analysis. Usually known as Oswald Teichmüller.

##### Hanna Neumann (1914 – 1971)
German-born mathematician active in the field of group theory.

##### Horst Feistel (1915 – 1990)
German-American cryptographer who worked on the design of ciphers, initiating research that culminated in the development of the Data Encryption Standard (DES) in the 1970s.

##### Abraham Robinson (1918 – 1974)
German-American mathematician who is most widely known for the development of non-standard analysis.

## Weimar Republic

##### Richard Friederich Arens (1919 – 2000)
German-born American mathematician who worked in the fields of functional analysis and topology.

##### Gerhard Ringel (1919 – 2008)
German mathematician who was one of the pioneers in the field of graph theory.

##### Gerd Edzard Harry Reuter (1921 – 1992)
German-born mathematician who emigrated to Britain and worked mainly in the fields of probability theory and analysis.

##### Erwin O. Kreyszig (1922 – 2008)
German-Canadian applied mathematician best known for his textbooks.

##### Ernst Gabor Straus (1922 – 1983)
German-American mathematician who helped found the theories of Euclidean Ramsey theory and of the arithmetic properties of analytic functions. Worked as the assistant to Albert Einstein.

##### Paul Moritz Cohn (1924 – 2006)
German-born mathematician renowned as an expert in abstract algebra, in particular non-commutative rings.

##### Hans-Egon Richert (1924 – 1993)
German mathematician who worked primarily in analytic number theory. Also contributed to sieve theory.

##### Friedrich Ernst Peter Hirzebruch (1927 – 2012)
German mathematician working in the fields of topology, complex manifolds and algebraic geometry.

##### Alexander Grothendieck (1928 – 2014)
Sometimes rendered Alexandre Grothendieck. German-born mathematician of semi-Ukrainian ancestry who is usually credited with creating the modern field of algebraic geometry. His collaborative seminar-driven approach had the result of making him one of the most influential thinkers of the 20th century.

##### Wolfgang Haken (b. 1928)
German mathematician mainly involved in topology, where the bulk of his work has been on 3-dimensional manifolds. In 1976, along with Kenneth Ira Appel, proved the Four Color Theorem with the help of a computer.

##### Jürgen Kurt Moser (1928 – 1999)
German mathematician mainly involved in dynamical systems.
##### Erich Müller-Pfeiffer (b. 1930)
German mathematician best known for his textbook.

##### Robert John Aumann (b. 1930)
German-born Israeli-American mathematician noted for his work on conflict and cooperation through game-theory analysis.

##### Reinhold Remmert (1930 – 2016)
German mathematician whose work has mainly been in developing the theory of complex spaces.

##### Karl Heinrich Hofmann (b. 1932)
German mathematician working in the fields of topological algebra and functional analysis, especially topological groups and semigroups and Lie theory.

##### Uta Caecilia Merzbach (1933 – 2017)
German-born American historian of mathematics.

## 3rd Reich

##### Stefan Oscar Walter Hildebrandt (1936 – 2005)
German mathematician concerned mainly with the calculus of variations and nonlinear partial differential equations.

##### Bernd Fischer (b. 1936)
German mathematician best known for his contributions to the classification of finite simple groups. Discovered several of the sporadic groups:

- Introduced 3-transposition groups
- Constructed the three Fischer groups
- Described the Baby Monster and computed its character table
- Predicted the existence of the Fischer-Griess Monster.

##### Wilfrid Keller (b. 1937)
German mathematician best known for his activity in number theory, including the hunt for titanic primes.

##### Jürgen Neukirch (1937 – 1997)
German mathematician known for his work on algebraic number theory.

##### Heiko Harborth (b. 1938)
German mathematician whose work is mostly in the areas of number theory, combinatorics and discrete geometry, including graph theory.

##### Peter Schreiber (b. 1938)
German mathematician and historian of mathematics who deals with the foundations of mathematics and geometry.

##### Gunther Schmidt (b. 1939)
German mathematician who works also in informatics.

##### Christoph Bandelow (1939 – 2011)
German mathematician mainly working in probability theory. Also known as the author of books on Rubik's cube and other mathematical recreations.

##### Eberhard Freitag (b. 1942)
German mathematician known for his work in function theory and modular forms.

## East Germany

##### Ingmar Lehmann (b. 1946)
German mathematician, university lecturer and non-fiction author.

##### Gerd Rudolph (b. 1950)
German mathematician and physicist specialising in gauge theory.

## West Germany

##### Andreas Raphael Blass (b. 1947)
German mathematician who works in mathematical logic, particularly set theory, and theoretical computer science.

##### Dietmar Arno Salamon (b. 1953)
German mathematician.

##### Gerd Faltings (b. 1954)
German mathematician known for his work in arithmetic algebraic geometry.

##### Reinhard Diestel (b. 1959)
German mathematician working mainly in graph theory.
# Simplifying Rational Expressions

## Factor numerator and denominator and cancel

### Rational Expression Simplification

How could you use factoring to help simplify a rational expression?

### Guidance

A rational number is any number of the form $\frac{a}{b}$, where $b \ne 0$. A rational expression is any algebraic expression of the form $\frac{P(x)}{Q(x)}$, where $Q(x) \ne 0$ and $P(x)$ and $Q(x)$ are polynomials.

Consider that any number or expression divided by itself is equal to 1. For example, $\frac{5}{5} = 1$ and $\frac{x + 2}{x + 2} = 1$. This fact allows you to simplify rational expressions that are in factored form by looking for "1's".

To simplify a rational expression, first factor both the numerator and denominator completely. Any factor that appears in both the numerator and the denominator divides to make 1, so those factors "cancel out"; whatever remains is the simplified expression. Keep in mind that you cannot "cancel out" common factors until both the numerator and denominator have been factored.

A rational expression is like any other fraction in that it is said to be undefined if the denominator is equal to zero. Values of the variable that cause the denominator of a rational expression to be zero are referred to as restrictions and must be excluded from the set of possible values for the variable. Note that to determine the restrictions you must look at the original expression before any common factors have been cancelled.

#### Example A

Simplify the following and state any restrictions on the denominator.
Solution: To begin, factor both the numerator and the denominator. Cancel out the common factor to create the simplified expression. The restrictions are the values of $x$ that would have made the denominator of the original expression equal to zero.

#### Example B

Simplify the following and state any restrictions on the denominator.

Solution: As in Example A, factor both the numerator and the denominator, cancel out the common factor to create the simplified expression, and read the restrictions off the original (unfactored) denominator.

#### Example C

Simplify the following and state any restrictions on the denominator.

Solution: Again, factor both the numerator and the denominator, cancel out the common factor to create the simplified expression, and exclude the values of $x$ that make the original denominator equal to zero.

### Vocabulary

Rational Expression: A rational expression is an algebraic expression that can be written in the form $\frac{P(x)}{Q(x)}$, where $Q(x) \ne 0$.

Restriction: Any value of the variable in a rational expression that would result in a zero denominator is called a restriction on the denominator.

### Guided Practice

Simplify each of the following and state the restrictions.

### Practice

For each of the following rational expressions, state the restrictions. Simplify each of the following rational expressions and state the restrictions.
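The factor-and-cancel procedure can be checked numerically. The example below uses a made-up expression (the lesson's own worked expressions are not shown here): $\frac{x^2 + 5x + 6}{x^2 - 4} = \frac{(x+2)(x+3)}{(x+2)(x-2)}$ simplifies to $\frac{x+3}{x-2}$, with restrictions $x \ne 2$ and $x \ne -2$.

```python
def original(x):
    # (x^2 + 5x + 6) / (x^2 - 4) -- a made-up example expression
    return (x**2 + 5*x + 6) / (x**2 - 4)

def simplified(x):
    # after factoring and cancelling the common (x + 2) factor:
    # (x + 3) / (x - 2)
    return (x + 3) / (x - 2)

# the two forms agree wherever the original is defined
for x in [-5, -1, 0, 1, 3, 10]:
    assert abs(original(x) - simplified(x)) < 1e-12

# x = -2 is a restriction of the original expression, even though the
# simplified form happens to be defined there -- which is exactly why
# restrictions must be taken from the expression before cancelling
try:
    original(-2)
except ZeroDivisionError:
    print("x = -2 is a restriction")
```

This illustrates the note above: the cancelled restriction $x = -2$ is invisible in the simplified form, so it must be recorded before cancelling.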
# An async time-based rate-limiting semaphore for C#

This is a class that allows only a limited number of requests to proceed per period of time. It is designed for use with external APIs that require such rate limiting, e.g. 600 requests per 10 minutes. I have a multithreaded application where these requests can be queued by users with varying frequency, and the goal is to allow these requests to progress as fast as possible without exceeding the API's limits. As the requests are user-submitted, there may also be long periods of time where no requests come in. Therefore, I've decided to avoid using a permanent timer/loop that would continuously reset the semaphore count; instead, the count is reset on demand as requests come in or while there are pending requests that have not yet been allowed to proceed.

```csharp
class AsyncRateLimitedSemaphore
{
    private readonly int maxCount;
    private readonly TimeSpan resetTimeSpan;
    private readonly SemaphoreSlim semaphore;

    private DateTimeOffset nextResetTime;
    private readonly object resetTimeLock = new();

    public AsyncRateLimitedSemaphore(int maxCount, TimeSpan resetTimeSpan)
    {
        this.maxCount = maxCount;
        this.resetTimeSpan = resetTimeSpan;
        this.semaphore = new SemaphoreSlim(maxCount, maxCount);
        this.nextResetTime = DateTimeOffset.UtcNow + this.resetTimeSpan;
    }

    private void TryResetSemaphore()
    {
        // quick exit if before the reset time, no need to lock
        if (!(DateTimeOffset.UtcNow > this.nextResetTime))
        {
            return;
        }

        // take a lock so only one reset can happen per period
        lock (this.resetTimeLock)
        {
            var currentTime = DateTimeOffset.UtcNow;
            // need to check again in case a reset has already happened in this period
            if (currentTime > this.nextResetTime)
            {
                this.semaphore.Release(this.maxCount - this.semaphore.CurrentCount);
                this.nextResetTime = currentTime + this.resetTimeSpan;
            }
        }
    }

    public async Task WaitAsync()
    {
        // attempt a reset in case it's been some time since the last wait
        TryResetSemaphore();

        // if there are no slots, need to keep trying to reset until one opens up
        while (!await this.semaphore.WaitAsync(TimeSpan.Zero))
        {
            // delay until the next reset period;
            // can't delay a negative time so if it's already passed just continue with a completed task
            var delayTime = this.nextResetTime - DateTimeOffset.UtcNow;
            await (delayTime < TimeSpan.Zero ? Task.CompletedTask : Task.Delay(delayTime));

            TryResetSemaphore();
        }
    }
}
```

Some thoughts:

- nextResetTime is not volatile, so the pre-lock read in TryResetSemaphore and the non-locked read in the delay loop could read stale data. This should be fine since it'd just progress sooner into the locked check, at which point it'd exit without doing anything anyway.
- Ordering should be guaranteed by the order in which SemaphoreSlim.WaitAsync() is called. So earlier requests should be processed first and won't be starved. I don't have a strict ordering requirement, just that later incoming requests don't cause one of the early ones to wait forever.
- The semaphore release could potentially be interleaved with other acquires. This should be fine since there is no need for the semaphore to actually hit max; those interleaved acquires would just subtract some counts from the post-release available pool.

- What minimum .NET version must be supported? Jul 6, 2021 at 12:01
- @aepot Safe to assume latest, so currently .NET 5. If there's anything coming in .NET 6 that would help, I'm happy to consider it. I have full control over the runtime environment. – Bob Jul 6, 2021 at 13:17

The usage of nextResetTime is incorrect. The non-locked accesses were under the assumption that those reads would be atomic; unfortunately, DateTimeOffset (and DateTime) are structs, value types. C# only provides atomicity guarantees for reference types and a subset of value types (which these structs do not fall under). The two possible solutions here are to either guard the reads of nextResetTime with locks, or store the raw tick count as a long with the appropriate Interlocked.Read/Interlocked.Exchange (since long is not guaranteed atomic either). The implementation below stores DateTimeOffset nextResetTime as long nextResetTimeTicks instead.
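For comparison, the reset-on-demand idea can be sketched in Python with asyncio. This is a rough single-threaded analogue of the design above, not a translation of the C# class: the names are illustrative, and because asyncio coroutines all run on one thread, the atomicity concerns around the reset time do not arise here.

```python
import asyncio
import time

class AsyncRateLimitedSemaphore:
    """At most max_count acquisitions per period, with the count
    reset on demand rather than by a background timer task."""

    def __init__(self, max_count: int, period: float):
        self.max_count = max_count
        self.period = period
        self.count = max_count
        self.next_reset = time.monotonic() + period

    def _try_reset(self) -> None:
        # reset the available count once the current period has elapsed
        now = time.monotonic()
        if now > self.next_reset:
            self.count = self.max_count
            self.next_reset = now + self.period

    async def wait(self) -> None:
        while True:
            # attempt a reset in case it's been some time since the last wait
            self._try_reset()
            if self.count > 0:
                self.count -= 1
                return
            # no slots: sleep until the next reset period, then retry
            await asyncio.sleep(max(0.0, self.next_reset - time.monotonic()))

async def main() -> float:
    # allow at most 2 acquisitions per 0.2-second period;
    # the 3rd and 4th must wait for the next period
    limiter = AsyncRateLimitedSemaphore(2, 0.2)
    start = time.monotonic()
    for _ in range(4):
        await limiter.wait()
    return time.monotonic() - start

if __name__ == "__main__":
    print(f"4 acquisitions took {asyncio.run(main()):.2f}s")
```

Since everything runs on the event loop's single thread, the counter and the reset time need no lock; a multithreaded port would need the same guarding the C# version uses.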
```csharp
class AsyncRateLimitedSemaphore
{
    private readonly int maxCount;
    private readonly TimeSpan resetTimeSpan;
    private readonly SemaphoreSlim semaphore;

    private long nextResetTimeTicks;
    private readonly object resetTimeLock = new();

    public AsyncRateLimitedSemaphore(int maxCount, TimeSpan resetTimeSpan)
    {
        this.maxCount = maxCount;
        this.resetTimeSpan = resetTimeSpan;
        this.semaphore = new SemaphoreSlim(maxCount, maxCount);
        this.nextResetTimeTicks = (DateTimeOffset.UtcNow + this.resetTimeSpan).UtcTicks;
    }

    private void TryResetSemaphore()
    {
        // quick exit if before the reset time, no need to lock
        if (DateTimeOffset.UtcNow.UtcTicks <= Interlocked.Read(ref this.nextResetTimeTicks))
        {
            return;
        }

        // take a lock so only one reset can happen per period
        lock (this.resetTimeLock)
        {
            var currentTime = DateTimeOffset.UtcNow;
            // need to check again in case a reset has already happened in this period
            if (currentTime.UtcTicks > Interlocked.Read(ref this.nextResetTimeTicks))
            {
                this.semaphore.Release(this.maxCount - this.semaphore.CurrentCount);
                var newResetTimeTicks = (currentTime + this.resetTimeSpan).UtcTicks;
                Interlocked.Exchange(ref this.nextResetTimeTicks, newResetTimeTicks);
            }
        }
    }

    public async Task WaitAsync()
    {
        // attempt a reset in case it's been some time since the last wait
        TryResetSemaphore();

        // if there are no slots, need to keep trying to reset until one opens up
        while (!await this.semaphore.WaitAsync(TimeSpan.Zero))
        {
            var ticks = Interlocked.Read(ref this.nextResetTimeTicks);
            var nextResetTime = new DateTimeOffset(new DateTime(ticks, DateTimeKind.Utc));

            // delay until the next reset period;
            // can't delay a negative time so if it's already passed just continue with a completed task
            var delayTime = nextResetTime - DateTimeOffset.UtcNow;
            await (delayTime < TimeSpan.Zero ? Task.CompletedTask : Task.Delay(delayTime));

            TryResetSemaphore();
        }
    }
}
```

- A design suggestion: a method with a name like TryDoSomething should probably return bool, not void. Jul 6, 2021 at 19:20
- @aepot I did waffle over the naming: it was originally ResetSemaphoreIfNeeded. The current one is perhaps wrongly suggestive of a `TryParse`-like pattern. That said, I'd rather find a more appropriate name if possible, since there is nothing meaningful to be done with a return value in this scenario.
https://proofwiki.org/wiki/Definition:Set_Equality/Definition_1
# Definition:Set Equality/Definition 1 ## Definition Let $S$ and $T$ be sets. $S$ and $T$ are equal if and only if they have the same elements: $S = T \iff \paren {\forall x: x \in S \iff x \in T}$ Otherwise, $S$ and $T$ are distinct, or unequal. ## Equality of Classes In the context of class theory, the same definition applies. Let $A$ and $B$ be classes. $A$ and $B$ are equal, denoted $A = B$, if and only if: $\forall x: \paren {x \in A \iff x \in B}$ where $\in$ denotes class membership.
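For finite sets, the extensionality condition can be checked mechanically; a small Python sketch (the helper name is my own):

```python
def sets_equal(s, t, universe):
    """Extensional equality: S = T iff for every x, x in S <-> x in T.

    `universe` is any iterable of candidate elements; for finite
    sets, the union s | t is enough to decide equality.
    """
    return all((x in s) == (x in t) for x in universe)

a = {1, 2, 3}
b = {3, 2, 1, 1}  # order and repetition do not affect membership
print(sets_equal(a, b, a | b))   # True, and Python's a == b agrees
print(sets_equal(a, {1, 2}, a))  # False: 3 is in a but not in {1, 2}
```

Python's built-in `==` on sets implements exactly this extensional notion, which is why the duplicate `1` in `b` is irrelevant.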
https://custom-scripts.sentinel-hub.com/custom-scripts/sentinel-2/ndwi/
# custom-scripts A repository of custom scripts that can be used with Sentinel-Hub services. # NDWI Normalized Difference Water Index ## General description of the script The NDWI is used to monitor changes related to water content in water bodies. As water bodies strongly absorb light in visible to infrared electromagnetic spectrum, NDWI uses green and near infrared bands to highlight water bodies. It is sensitive to built-up land and can result in over-estimation of water bodies. The index was proposed by McFeeters, 1996. Values description: Index values greater than 0.5 usually correspond to water bodies. Vegetation usually corresponds to much smaller values and built-up areas to values between zero and 0.2. Note: NDWI index is often used synonymously with the NDMI index, often using NIR-SWIR combination as one of the two options. NDMI seems to be consistently described using NIR-SWIR combination. As the indices with these two combinations work very differently, with NIR-SWIR highlighting differences in water content of leaves, and GREEN-NIR highlighting differences in water content of water bodies, we have decided to separate the indices on our repository as NDMI using NIR-SWIR, and NDWI using GREEN-NIR. ## Description of representative images NDWI of Italy. Acquired on 2020-08-01. NDWI of Canadian lakes. Acquired on 2020-08-05. ## References Source: https://en.wikipedia.org/wiki/Normalized_difference_water_index
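McFeeters' index is computed per pixel as NDWI = (Green − NIR) / (Green + NIR); a minimal sketch (the reflectance values below are invented for illustration):

```python
def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters, 1996).

    green, nir: reflectance in the green and near-infrared bands.
    Returns a value in [-1, 1]; values above ~0.5 usually indicate
    open water, since water absorbs NIR strongly.
    """
    denom = green + nir
    if denom == 0:
        return 0.0  # guard against division by zero on no-data pixels
    return (green - nir) / denom

# Hypothetical reflectances: water absorbs NIR, vegetation reflects it strongly.
water = ndwi(green=0.30, nir=0.05)       # ~0.71 -> water
vegetation = ndwi(green=0.10, nir=0.40)  # ~-0.60 -> not water
print(round(water, 2), round(vegetation, 2))
```

In practice the same expression is applied array-wise over whole band rasters (e.g. with NumPy), but the per-pixel arithmetic is exactly this.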
https://www.universetoday.com/114163/weird-x-rays-what-happens-when-eta-carinaes-massive-stars-get-close/
Weird X-Rays: What Happens When Eta Carinae’s Massive Stars Get Close?

While the stars appear unchanging when you take a quick look at the night sky, there is so much variability out there that astronomers will be busy forever. One prominent example is Eta Carinae, a star system that erupted in the 19th century for about 20 years, becoming one of the brightest stars in the night sky. It’s so volatile that it’s a prime candidate for a supernova.

The two stars came to their closest approach again this month, under the watchful eye of the Chandra X-Ray Observatory. The observations aim to explain a puzzling dip in X-ray emissions from Eta Carinae that happens during every close encounter, including one observed in 2009.

The two stars are in a 5.5-year orbit, and even the lesser of them is massive — about 30 times the mass of the Sun. Winds flow rapidly from both stars, crashing into each other and creating a bow shock that heats the gas between them. This is where the X-rays come from.

Here’s where things get interesting: as the stars orbit each other, their separation changes by a factor of 20. This means the winds collide differently depending on how close the stars are to each other. Surprisingly, the X-rays drop off when the stars are at their closest approach, which Chandra studied closely when that last occurred in 2009.

“The study suggests that part of the reason for the dip at periastron is that X-rays from the apex are blocked by the dense wind from the more massive star in Eta Carinae, or perhaps by the surface of the star itself,” a Chandra press release stated.
“Another factor responsible for the X-ray dip is that the shock wave appears to be disrupted near periastron, possibly because of faster cooling of the gas due to increased density, and/or a decrease in the strength of the companion star’s wind because of extra ultraviolet radiation from the massive star reaching it.”

More observations are needed, so researchers are eager to see what Chandra dug up this time around. A research paper on this was published earlier this year in the Astrophysical Journal, which you can also read in preprint form on arXiv. The work was led by Kenji Hamaguchi of NASA’s Goddard Space Flight Center in Maryland.

Source: Chandra X-Ray Observatory
http://earthscience.stackexchange.com/questions/530/why-is-earths-outer-core-liquid
# Why is Earth's outer-core liquid? The Earth's inner core is solid because despite the enormous temperature in this region, there is also enormous pressure there, which in turn raises the melting point of iron and nickel to a value above the Earth's core temperature. Now as we move out from the solid inner core, temperature drops, and pressure also decreases. Obviously because the inner core is solid but the outer core is liquid, we must conclude that the drop in temperature vs the drop in pressure must be lower than the gradient of 16 degrees/GPa shown in the diagram below (link to source), given that at the outer-core temperature has exceeded the melting point of iron/nickel, which is a function of pressure. In other words, the drop in pressure must be quite significant compared to the drop in temperature as radius increases from the core. So how is it that pressure drops off fast enough relative to temperature to give rise to the liquid outer-core. A good answer will explain how temperature drops off with radius and how pressure drops off with radius and how these compare to give rise to the liquid outer-core. - Can you add a source for the figure? Or did you make it yourself? –  gerrit Apr 25 '14 at 12:12 @gerrit, thanks for pointing that out, I've added a link to the source. –  Geodude Apr 25 '14 at 13:00 First, you need a phase diagram that goes to higher pressure. The pressure at the inner/outer core boundary is over 300 GPa. The one in the question would only get us into the mantle: A typical temperature and pressure at the outermost part of the core would be 3750K and 135GPa, which is in the liquid region of the phase diagram. For more data on pressure and temperature as a function of depth see this University of Arizona source. All appropriate credit to Marcus Origlieri. - probably could edit that into his question, instead of putting it as an answer, but yes its a very good point. 
–  Neo Apr 24 '14 at 17:25

The pressure gradient is given by hydrostatic equilibrium. In a solid, this may not be exactly true, but creep will make it so. Let $p$ be the local pressure, $g$ be the local acceleration of gravity and $\rho$ the local density. Imagine a small element of volume with area $A$ horizontal and height $\Delta h$. Its mass is $\rho A \Delta h$ and it is attracted downward by the force $g \rho A \Delta h$. This has to be balanced by the pressure difference between the top and bottom, so $\frac {dp}{dh}=g\rho$. $g$ can be determined (assuming spherical symmetry) by just counting the total mass at smaller radii. -

This is correct, but is perhaps not "spelled out enough" for I think 90% of readers. IE, you have to be familiar with the answer to understand it. –  Neo Apr 24 '14 at 17:33
@Neo: OP did ask for a mathematical answer. –  Ross Millikan Apr 24 '14 at 17:39
Yes he has Geodude. You just have to integrate dp/dh over the specified radii –  Neo Apr 25 '14 at 1:14
@RossMillikan, +1 that's a good start and thanks for your answer, but you haven't mentioned how the temperature drops off with radius in comparison. –  Geodude Apr 25 '14 at 1:43
@Geodude: I don't have an easy way to calculate that. OP asked specifically about pressure, so I answered that. –  Ross Millikan Apr 25 '14 at 3:13
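The relation dp/dh = gρ above can be integrated numerically to get a feel for core pressures; a rough sketch with a single made-up constant-density layer and constant g (both crude simplifications, so this only gives an order of magnitude):

```python
def pressure_at_depth(layers, g=9.8):
    """Integrate dp/dh = g * rho through a stack of
    (thickness_m, density_kg_m3) layers, top to bottom.

    Treats g as a constant 9.8 m/s^2, which overestimates pressure at
    depth since g actually falls toward zero at the Earth's centre.
    Returns pressure in GPa.
    """
    p = 0.0
    for thickness, rho in layers:
        p += g * rho * thickness
    return p / 1e9

# Very rough mean density for crust + mantle down to ~2890 km:
mantle = (2.89e6, 4500.0)
print(round(pressure_at_depth([mantle]), 1))  # ~127, same ballpark as the ~135 GPa quoted above
```

Splitting the column into more layers with depth-dependent density and g (e.g. from the PREM model) would tighten the estimate considerably.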
http://crypto.stackexchange.com/questions/1142/how-key-materials-are-generated-in-ssl-v3-from-master-secret?answertab=votes
# How key materials are generated in SSL V3 from master secret

The generation of key materials is given by

key_block = MD5(master_secret + SHA('A' + master_secret + ServerHello.random + ClientHello.random)) +
            MD5(master_secret + SHA('BB' + master_secret + ServerHello.random + ClientHello.random)) +
            MD5(master_secret + SHA('CCC' + master_secret + ServerHello.random + ClientHello.random)) + [...];

The document says it is done until "enough output has been generated". I think the "+" refers to appending the generated data. We need:

• client write MAC secret
• server write MAC secret
• client write key
• server write key
• client write IV
• server write IV

So these are generated by taking the appropriate number of bits from the generated hash? I couldn't understand why the document says "enough output has been generated". Anyway, we need these parts from the master key.

-

You can use TLS 1.0 as guidance: it is the direct successor of SSL 3.0, so many things are quite similar, and in some respects TLS 1.0 is a bit clearer. In section 6.3 you will find the key generation process, with the exact sentence:

To generate the key material, compute [...] until enough output has been generated. Then the key_block is partitioned as follows:

the important word being "partitioned". For instance, if you are using 3DES as the symmetric cipher and SHA-1 for the MAC, the "write keys" are 24 bytes long each, the IVs are 8 bytes long each, and the MAC keys are 20 bytes long each. So, a total of 104 bytes. The key generation function repeatedly invokes SHA-1 and MD5 on various elements; each round produces 16 additional bytes (that's the output size of MD5). You need 104 bytes; hence, you will need 7 rounds. At the first round, you call SHA-1 over the concatenation of 'A' (a single byte of value 65), the master secret, the server random and the client random, in that order ('+' indeed denotes concatenation). This SHA-1 invocation yields 20 bytes.
The concatenation of the master secret and the SHA-1 output is then hashed with MD5, which yields 16 bytes. These are the first 16 bytes of the key block. For the second round, the processing is identical except that you use 'BB' instead of 'A' (two bytes of value 66 each). This produces the next 16 bytes of the key block. You continue like this. Ultimately, you can potentially do 26 rounds (up to 'ZZZ...Z'; the SSL 3.0 specification does not define how to go beyond that). This would yield a total of 26*16 = 416 bytes. But if you need only the first 104 bytes, there is no need to compute the whole 26 rounds; just compute enough rounds to get the number of bytes you need. Once you have your key block (104 bytes with 3DES and SHA-1), you split it into the needed key elements. The first 20 bytes go to the client write MAC key, the next 20 bytes for the server write MAC key, then the next 24 bytes for the client write key, and so on. - The TLS 1.0 documentation says The cipher spec which is defined in this document which requires the most material is 3DES_EDE_CBC_SHA: it requires 2 x 24 byte keys, 2 x 20 byte MAC secrets, and 2 x 8 byte IVs, for a total of 104 bytes of key material. How they arrived at the key size of 24 byte for 3DES. A little search on net gives 3DES key size as 168, 112 or 56 bits – user5507 Nov 9 '11 at 1:03 @user5507: the 3DES specification says that the key is a 192-bit words (three 64-bit DES keys). If you follow the specification, you can see that 24 of these bits are totally ignored by the algorithm, so the effective key size (with regards to exhaustive key search) is 168 bits; but the standard key must still be an array of 24 bytes. The "ignored" bits were supposed to be parity control bits (one parity bit for every seven "used" key bits) but nobody bothers setting or controlling them. – Thomas Pornin Nov 9 '11 at 12:08 @ThomasPornin well, except for e.g. the HSM's we use at my work, they check DES parity all right. 
According to ECRYPT II the strength of the 3DES keys is: For three-key 3DES, the attack complexity can be reduced down to 2^112 operations (or even down towards 2^100 under certain attack models), whereas for two-key 3DES it reduces from 2^112 down to 2^(120−t) operations if the attacker has access to 2^t plaintext/ciphertext pairs (t > 8) using the same key. – Maarten Bodewes Feb 27 '12 at 20:29
@ThomasPornin Do you have any document that mentions the keys' lengths (write-mac-key, write-key, write-iv) for all the ciphersuites? – vantrung -cuncon Nov 7 '13 at 4:35
@vantrung-cuncon: it is in the standard, appendix C; see also the previous versions: TLS 1.0 and TLS 1.1. – Thomas Pornin Nov 7 '13 at 12:05

key_block = MD5(master_secret + SHA('A' + master_secret + ServerHello.random + ClientHello.random)) +
            MD5(master_secret + SHA('BB' + master_secret + ServerHello.random + ClientHello.random)) +
            MD5(master_secret + SHA('CCC' + master_secret + ServerHello.random + ClientHello.random)) + [...];

The document says, it is done until "enough output has been generated". "[E]nough output has been generated" means that you have to continue this process until the key length is equal to the block length. For example, if the block length is 512 bits, then the function MD5(master_secret + SHA('A' + master_secret + ServerHello.random + ClientHello.random)) needs to be repeated four times, because the output of MD5 is 128 bits long, and 4 × 128 = 512. And + indicates concatenation. -

What block length are you here referring to? – Paŭlo Ebermann Feb 27 '12 at 8:58
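The round structure described in the accepted answer is easy to sketch in Python with hashlib (an illustration of the derivation with made-up inputs, not production TLS code):

```python
import hashlib

def ssl3_key_block(master_secret: bytes, server_random: bytes,
                   client_random: bytes, length: int) -> bytes:
    """SSL 3.0 key-block expansion:
    rounds of MD5(master + SHA1(label + master + server_random + client_random)),
    where round i uses the label 'A', 'BB', 'CCC', ... (chr(65+i) repeated i+1
    times). Each round contributes 16 bytes, the MD5 output size.
    """
    out = b""
    i = 0
    while len(out) < length:
        label = bytes([65 + i]) * (i + 1)  # b'A', b'BB', b'CCC', ...
        inner = hashlib.sha1(label + master_secret +
                             server_random + client_random).digest()
        out += hashlib.md5(master_secret + inner).digest()
        i += 1
    return out[:length]

# 3DES + SHA-1 needs 104 bytes (2x20 MAC, 2x24 key, 2x8 IV) -> 7 rounds.
kb = ssl3_key_block(b"\x01" * 48, b"\x02" * 32, b"\x03" * 32, 104)
print(len(kb))  # 104
```

Partitioning is then just slicing: `kb[0:20]` is the client write MAC secret, `kb[20:40]` the server write MAC secret, `kb[40:64]` and `kb[64:88]` the write keys, and the remaining 16 bytes the two IVs.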
https://ncertmcq.com/rd-sharma-class-10-solutions-chapter-10-trigonometric-ratios-ex-10-3/
## RD Sharma Class 10 Solutions Chapter 10 Trigonometric Ratios Ex 10.3

These Solutions are part of RD Sharma Class 10 Solutions. Here we have given RD Sharma Class 10 Solutions Chapter 10 Trigonometric Ratios Ex 10.3

Other Exercises

Question 1. Evaluate the following :
Solution:

Question 2. Evaluate the following :
Solution:

Question 3. Express each one of the following in terms of trigonometric ratios of angles lying between 0° and 45°
(i) sin 59° + cos 56°
(ii) tan 65° + cot 49°
(iii) sec 76° + cosec 52°
(iv) cos 78° + sec 78°
(v) cosec 54° + sin 72°
(vi) cot 85° + cos 75°
(vii) sin 67° + cos 75°
Solution:
(i) sin 59° + cos 56° = sin (90° – 31°) + cos (90° – 34°) = cos 31° + sin 34°
(ii) tan 65° + cot 49° = tan (90° – 25°) + cot (90° – 41°) = cot 25° + tan 41°
(iii) sec 76° + cosec 52° = sec (90° – 14°) + cosec (90° – 38°) = cosec 14° + sec 38°
(iv) cos 78° + sec 78° = cos (90° – 12°) + sec (90° – 12°) = sin 12° + cosec 12°
(v) cosec 54° + sin 72° = cosec (90° – 36°) + sin (90° – 18°) = sec 36° + cos 18°
(vi) cot 85° + cos 75° = cot (90° – 5°) + cos (90° – 15°) = tan 5° + sin 15°
(vii) sin 67° + cos 75° = sin (90° – 23°) + cos (90° – 15°) = cos 23° + sin 15°

Question 4. Express cos 75° + cot 75° in terms of angles between 0° and 30°.
Solution: cos 75° + cot 75° = cos (90° – 15°) + cot (90° – 15°) = sin 15° + tan 15°

Question 5. If sin 3A = cos (A – 26°), where 3A is an acute angle, find the value of A.
Solution: sin 3A = cos (A – 26°) ⇒ cos (90° – 3A) = cos (A – 26°) Comparing, 90° – 3A = A – 26° ⇒ 90° + 26° = A + 3A ⇒ 4A = 116° ⇒ A = 29°

Question 6. If A, B, C are the interior angles of a triangle ABC, prove
Solution:

Question 7. Prove that :
Solution:

Question 8. Prove the following :
Solution:

Question 9. Evaluate :
Solution:

Question 10. If sin θ = cos (θ – 45°), where θ and (θ – 45°) are acute angles, find the degree measure of θ.
Solution:

Question 11.
If A, B, C are the interior angles of a ΔABC, show that :
(i) $$\sin \frac { B+C }{ 2 } = \cos \frac { A }{ 2 }$$
(ii) $$\cos \frac { B+C }{ 2 } = \sin \frac { A }{ 2 }$$
Solution:

Question 12. If 2θ + 45° and 30° – θ are acute angles, find the degree measures of θ satisfying sin (2θ + 45°) = cos (30° – θ).
Solution:

Question 13. If θ is a positive acute angle such that sec θ = cosec 60°, find the value of 2 cos² θ – 1.
Solution:

Question 14. If cos 2θ = sin 4θ, where 2θ and 4θ are acute angles, find the value of θ.
Solution:

Question 15. If sin 3θ = cos (θ – 6°), where 3θ and θ – 6° are acute angles, find the value of θ.
Solution:

Question 16. If sec 4A = cosec (A – 20°), where 4A is an acute angle, find the value of A.
Solution:

Question 17. If sec 2A = cosec (A – 42°), where 2A is an acute angle, find the value of A. (C.B.S.E. 2008)
Solution:

Hope given RD Sharma Class 10 Solutions Chapter 10 Trigonometric Ratios Ex 10.3 are helpful to complete your math homework. If you have any doubts, please comment below. Learn Insta tries to provide online math tutoring for you.
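All of these rewrites rest on the co-function identities sin θ = cos (90° − θ), tan θ = cot (90° − θ) and sec θ = cosec (90° − θ), which can be spot-checked numerically (a quick sketch):

```python
import math

def deg(x):
    """Convert degrees to radians for the math module's functions."""
    return math.radians(x)

# sin 59° = cos 31°, tan 65° = cot 25°, sec 76° = cosec 14°
assert math.isclose(math.sin(deg(59)), math.cos(deg(31)))
assert math.isclose(math.tan(deg(65)), 1 / math.tan(deg(25)))
assert math.isclose(1 / math.cos(deg(76)), 1 / math.sin(deg(14)))
print("co-function identities check out")
```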
https://www.tutorialspoint.com/robin-hood-hashing-in-data-structure
# Robin-Hood Hashing in Data Structure

In this section we will see what the Robin-Hood hashing scheme is. This hashing is one of the techniques of open addressing. It attempts to equalize the search times of elements by using a fairer collision resolution strategy. While trying to insert, if we want to insert element x at position x_i, and there is already an element y placed at y_j = x_i, then the younger of the two elements must move on. So if i ≤ j, then we will try to insert x at position x_{i+1}, x_{i+2} and so on. Otherwise we will store x at position x_i, and try to insert y at position y_{j+1}, y_{j+2} and so on.

Devroye et al. show that after performing n insertions on an initially empty table of size m = αn, using the Robin-Hood insertion algorithm, the expected value of the worst-case search time is

$$E[W]=\Theta(\log\:\log\:n)$$

and this bound is tight. So this algorithm is a form of open addressing that has doubly logarithmic worst-case search time.

Published on 11-Aug-2020 09:49:54
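The "younger element moves on" rule can be sketched as linear probing with a swap on collision (a toy sketch of my own; real implementations also handle resizing and deletion):

```python
class RobinHoodTable:
    """Toy Robin Hood hash table (no resizing or deletion; assumes the
    table never fills).

    On a collision, the resident entry keeps its slot only if it is
    "richer" (has probed less far from its home slot) than the incoming
    entry; otherwise the incoming entry steals the slot and the resident
    resumes probing.
    """

    def __init__(self, size=16):
        self.slots = [None] * size  # each slot holds (key, value, probe_distance)

    def insert(self, key, value):
        i = hash(key) % len(self.slots)
        entry = (key, value, 0)
        while True:
            slot = self.slots[i]
            if slot is None:
                self.slots[i] = entry
                return
            if slot[0] == entry[0]:
                self.slots[i] = entry  # same key: overwrite the value
                return
            if slot[2] < entry[2]:
                # resident is richer: Robin Hood swap, resident moves on
                self.slots[i], entry = entry, slot
            i = (i + 1) % len(self.slots)
            entry = (entry[0], entry[1], entry[2] + 1)

    def get(self, key):
        i = hash(key) % len(self.slots)
        for _ in range(len(self.slots)):
            slot = self.slots[i]
            if slot is None:
                return None
            if slot[0] == key:
                return slot[1]
            i = (i + 1) % len(self.slots)
        return None

table = RobinHoodTable()
for k in range(10):
    table.insert(k, k * k)
print(table.get(7))  # 49
```

The equalizing effect comes from the swap: no entry ever sits far from its home slot while another entry close to its own home occupies the contested position.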
http://www.piping-designer.com/index.php/properties/classical-mechanics/2264-final-velocity
# Final Velocity

Written by Jerry Ratzlaff. Posted in Classical Mechanics

Final velocity, abbreviated as $$v_f$$, is the velocity of a moving object at the end of the time interval under consideration.

## Final Velocity formulas

$$\large{ v_f = v_i + a \; t }$$

$$\large{ v_f = 2 \; \bar {v} - v_i }$$

$$\large{ v_f = \alpha_v \; v_i \; \left( T_f \;- \; T_i \right) + v_i }$$ (volumetric thermal expansion coefficient)

### Where:

$$\large{ v_f }$$ = final velocity
$$\large{ a }$$ = acceleration
$$\large{ T_f }$$ = final temperature
$$\large{ T_i }$$ = initial temperature
$$\large{ t }$$ = time
$$\large{ \bar {v} }$$ = average velocity
$$\large{ v_i }$$ = initial velocity
$$\large{ \alpha_v }$$ (Greek symbol alpha) = volumetric thermal expansion coefficient
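The first two formulas can be checked against each other numerically (a trivial sketch with made-up values):

```python
def final_velocity(v_i, a, t):
    """v_f = v_i + a*t, for constant acceleration a over time t."""
    return v_i + a * t

def final_velocity_from_average(v_bar, v_i):
    """v_f = 2*v_bar - v_i, since under constant acceleration
    the average velocity is v_bar = (v_i + v_f) / 2."""
    return 2 * v_bar - v_i

v_f = final_velocity(v_i=5.0, a=2.0, t=3.0)
print(v_f)  # 11.0
print(final_velocity_from_average((5.0 + v_f) / 2, 5.0))  # 11.0, the two formulas agree
```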
https://mathematica.stackexchange.com/questions/199188/first-replaceall-then-evaluate-an-expression
# First ReplaceAll, then evaluate an expression

Here is a simple example using ReplaceAll:

rvec[[2]] /. rvec -> {x, y}

I think Mathematica first tries to evaluate rvec[[2]], realising this is not a list, and only afterwards uses ReplaceAll. The warning output confirms this. If instead of rvec[[2]] we have some complex expression (e.g. inversion of a matrix), Mathematica will do it analytically first, before making the substitution. This is not what I intend; I want to delay the evaluation of rvec[[2]] (or other more complex things in its place) until the ReplaceAll substitution has been done. How can I tell Mathematica to substitute first, then evaluate?

• With[{rvec = {x,y}}, rvec[[2]]]? – AccidentalFourierTransform May 27 '19 at 13:22
• Thanks, this works. However, is it possible to keep my form (with a substitution list) more or less intact? My rvec is really many expressions, and {x,y} a long substitution list. – Alexander Erlich May 27 '19 at 14:08
• I think it's best if you include a MWE where my trick above is inconvenient, and where a substitution really is required. Otherwise I'm not sure why you would want to do things your way (see What is the XY problem?). – AccidentalFourierTransform May 27 '19 at 15:00

You might be able to use Unevaluated:

Unevaluated[rvec[[2]]] /. rvec -> {x, y}

or apply the rule before taking the part:

(rvec /. rvec -> {x, y})[[2]]
http://www.exampleproblems.com/wiki/index.php/Amplitude
# Amplitude

For the video game of the same name, see Amplitude (game).

Amplitude is a nonnegative scalar measure of a wave's magnitude of oscillation, that is, the magnitude of the maximum disturbance in the medium during one wave cycle.

In the following diagram, the distance y is the amplitude of the wave. Sometimes this distance is called the "peak amplitude", distinguishing it from another concept of amplitude, used especially in electrical engineering: the root mean square (RMS) amplitude, defined as the square root of the temporal mean of the square of the vertical distance of this graph from the horizontal axis.

The use of peak amplitude is unambiguous for symmetric, periodic waves, like a sine wave, a square wave, or a triangular wave. For an asymmetric wave, for example periodic pulses in one direction, the peak amplitude becomes ambiguous because the value obtained is different depending on whether the maximum positive signal is measured relative to the mean, the maximum negative signal is measured relative to the mean, or the maximum positive signal is measured relative to the maximum negative signal and then divided by two.

For complex waveforms, especially non-repeating signals like noise, the RMS amplitude is usually used because it is unambiguous and because it has physical significance. For example, the power transmitted by an acoustic or electromagnetic wave or by an electrical signal is proportional to the square of the RMS amplitude (and not, in general, to the square of the peak amplitude).

There are a few ways to formalize amplitude: In the simple wave equation ${\displaystyle y=A\sin(t-K)+b}$ A is the amplitude of the wave.

The units of the amplitude depend on the type of wave. For waves on a string, or in a medium such as water, the amplitude is a distance.
The amplitude of sound waves and audio signals conventionally refers to the amplitude of the air pressure in the wave, but sometimes the amplitude of the displacement (movements of the air or the diaphragm of a speaker) is described. Its logarithm is usually measured in dB, so a null amplitude corresponds to −∞ dB.

For electromagnetic radiation, the amplitude corresponds to the electric field of the wave. The square of the amplitude is termed the intensity of the wave.

The amplitude may be constant (in which case the wave is a continuous wave) or may vary with time and/or position. The form of the variation of amplitude is called the envelope of the wave.
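As a quick numerical check of the peak/RMS distinction (a plain-Python sketch; the helper names are ours, not from the article), sampling one full cycle of a sine wave shows that its RMS amplitude equals its peak amplitude divided by √2:

```python
import math

def peak_amplitude(samples):
    """Peak amplitude: largest absolute deviation from the mean."""
    mean = sum(samples) / len(samples)
    return max(abs(s - mean) for s in samples)

def rms_amplitude(samples):
    """RMS amplitude: root of the mean squared deviation from the mean."""
    mean = sum(samples) / len(samples)
    return math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

# One full cycle of y = A*sin(t), sampled uniformly
A, n = 2.0, 1000
wave = [A * math.sin(2 * math.pi * k / n) for k in range(n)]

peak = peak_amplitude(wave)   # -> 2.0
rms = rms_amplitude(wave)     # -> 2.0 / sqrt(2), about 1.414
```

For an asymmetric waveform the peak measure would depend on which of the three conventions above is chosen, while the RMS value stays unambiguous.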
https://admin.clutchprep.com/physics/practice-problems/46601/as-shown-in-figure-1-a-beam-of-particles-is-fired-at-a-stationary-target-the-res
# Problem: As shown in figure 1, a beam of particles is fired at a stationary target. The resulting nuclei from this collision are highly unstable and decay almost immediately into more stable daughter nuclei. During this decay, charged particles are emitted, which curve in the magnetic field within the detector (in this case, the field is pointing out of the page). Each of these decay particles is collected by the detector and its energy is measured, producing the graph shown in figure 2. What type of decay are the unstable nuclei undergoing?

A) α decay
B) β- decay
C) β+ decay
D) γ decay
https://www.bionicturtle.com/forum/threads/p2-t7-517-credit-risk-capital-under-basel-ii-hull.8481/
# P2.T7.517. Credit risk capital under Basel II (Hull)

#### Nicole Seaman (Staff member, Subscriber)

Learning outcomes: Describe and contrast the major elements of the three options available for the calculation of credit risk: Standardised Approach, Foundation IRB Approach and Advanced IRB Approach.

Questions:

517.1. Consider the following four statements, which attempt to summarize the approach to credit risk capital under Basel II:

I. For credit risk, Basel II specified three approaches: the Standardized Approach, the Foundation Internal Ratings Based (IRB) Approach, and the Advanced IRB Approach.

II. For the internal ratings-based (IRB) approach, regulators base the capital requirement on the value at risk calculated using a one-year time horizon and a 99.9% confidence level; they recognize that expected losses are usually covered by the way a financial institution prices its products. The capital required is, therefore, the value at risk minus the expected loss.

III. Under the Foundation IRB approach, banks supply PD while LGD, EAD, and M are supervisory values set by the Basel Committee. PD is subject to a floor of 0.03% for bank and corporate exposures. LGD is set at 45% for senior claims and 75% for subordinated claims. When there is eligible collateral, in order to correspond to the comprehensive approach, LGD is reduced by the ratio of the adjusted value of the collateral to the adjusted value of the exposure, both calculated using the comprehensive approach. The EAD is calculated in a manner similar to the credit equivalent amount in Basel I and includes the impact of netting. M is set at 2.5 in most circumstances.

IV. Under the Advanced IRB approach, banks supply their own estimates of the PD, LGD, EAD, and M for corporate, sovereign, and bank exposures. The PD can be reduced by credit mitigants such as credit triggers.
(As in the case of the Foundation IRB approach, it is subject to a floor of 0.03% for bank and corporate exposures.) The two main factors influencing the LGD are the seniority of the debt and the collateral. In calculating EAD, banks can, with regulatory approval, use their own estimates of credit conversion factors.

Which of the above is (are) accurate?

a. None are accurate
b. Only I. and II.
c. Only I. and IV.
d. All are accurate

517.2. About the treatment of collateral in the calculation of credit risk under the STANDARDIZED (not IRB) approach of Basel II, Hull explains, "there are two ways banks can adjust risk weights for collateral. The first is termed the simple approach and is similar to an approach used in Basel I. The second is termed the comprehensive approach. Banks have a choice as to which approach is used in the banking book, but must use the comprehensive approach to calculate capital for counterparty credit risk in the trading book.

• Under the simple approach, the risk weight of the counterparty is replaced by the risk weight of the collateral for the part of the exposure covered by the collateral. (The exposure is calculated after netting.) For any exposure not covered by the collateral, the risk weight of the counterparty is used. The minimum level for the risk weight applied to the collateral is 20%. A requirement is that the collateral must be revalued at least every six months and must be pledged for at least the life of the exposure.

• Under the comprehensive approach, banks adjust the size of their exposure upward to allow for possible increases in the exposure and adjust the value of the collateral downward to allow for possible decreases in the value of the collateral. (The adjustments depend on the volatility of the exposure and the collateral.) A new exposure equal to the excess of the adjusted exposure over the adjusted value of the collateral is calculated and the counterparty's risk weight is applied to this exposure."
(Source: John Hull, Risk Management and Financial Institutions, 5th Edition (New York: John Wiley & Sons, 2018))

Analyst Roger is analyzing a collateralized exposure for his firm. His firm has a $100.0 million exposure to a particular counterparty that is secured by collateral worth $80.0 million. The collateral consists of bonds issued by an A-rated company. The counterparty has a rating of B+. The risk weight for the counterparty is 150% and the risk weight for the collateral is 50%. Under the comprehensive approach, the adjustment to the exposure to allow for possible future increases in the exposure is +10% and the adjustment to the collateral to allow for possible future decreases in its value is –15%. Roger calculates the risk-weighted assets under both approaches. What is the difference in risk-weighted assets between the two approaches to collateral in the standardized approach to credit risk?

a. $7.0 million
b. $15.0 million
c. $20.0 million
d. $33.0 million

517.3. Under the internal ratings-based (IRB) approach to credit risk, Hull explains that the credit value at risk is calculated using a one-factor Gaussian copula model of time to default. If we assume that a bank has a very large number of obligors and the i-th obligor has a one-year probability of default, PD(i), and the copula correlation between each pair of obligors is given by rho (ρ), then the worst-case probability of default, WCDR(i), is defined as follows:

WCDR(i) = N[ (N⁻¹(PD(i)) + √ρ · N⁻¹(0.999)) / √(1 − ρ) ]

Suppose that the assets of a bank consist of $100.0 million of loans to A-rated corporations. The probability of default (PD) for the corporations is estimated as 0.90% and the LGD is 75%. The average maturity of the loans is 2.50 years, such that the maturity adjustment is 1.270. Which is nearest to the exposure's risk-weighted asset under the IRB approach?

a. $66.0 million
b. $85.5 million
c. $148.0 million
d. $368.9 million
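Questions 517.2 and 517.3 are arithmetic once the formulas are fixed, so a plain-Python sketch (standard library only; the variable names are ours) may help. It assumes the Basel II corporate correlation formula ρ = 0.12·w + 0.24·(1−w) with w = (1−e^(−50·PD))/(1−e^(−50)), and RWA = 12.5 × EAD × LGD × (WCDR − PD) × MA, as presented in Hull; treat it as an illustration of the calculations, not a regulatory implementation.

```python
from math import exp, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

# --- 517.2: collateral under the standardized approach ($ millions) ---
exposure, collateral = 100.0, 80.0
rw_cpty, rw_coll = 1.50, 0.50   # risk weights: 150% and 50%

# Simple approach: collateral's risk weight on the covered portion,
# counterparty's risk weight on the uncovered remainder.
rwa_simple = rw_coll * collateral + rw_cpty * (exposure - collateral)

# Comprehensive approach: exposure adjusted up 10%, collateral down 15%,
# counterparty risk weight applied to the net adjusted exposure.
rwa_comprehensive = rw_cpty * (exposure * 1.10 - collateral * 0.85)

difference = rwa_simple - rwa_comprehensive   # 70.0 - 63.0 = 7.0

# --- 517.3: IRB risk-weighted assets ($ millions) ---
PD, LGD, EAD, MA = 0.009, 0.75, 100.0, 1.270

# Basel II correlation for corporate exposures
w = (1 - exp(-50 * PD)) / (1 - exp(-50))
rho = 0.12 * w + 0.24 * (1 - w)

# Worst-case default rate at 99.9% confidence (one-factor Gaussian copula)
wcdr = N.cdf((N.inv_cdf(PD) + sqrt(rho) * N.inv_cdf(0.999)) / sqrt(1 - rho))

rwa_irb = 12.5 * EAD * LGD * (wcdr - PD) * MA   # roughly 148
```

Under these assumptions the difference comes out to $7.0 million (choice a of 517.2) and the IRB risk-weighted assets to roughly $148 million (choice c of 517.3).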
https://www.ias.ac.in/listing/bibliography/pram/A_SAHA
• A SAHA

Articles written in Pramana – Journal of Physics

• Observational constraints on extended Chaplygin gas cosmologies

We investigate cosmological models with extended Chaplygin gas (ECG) as a candidate for dark energy and determine the equation-of-state parameters using observed data, namely observed Hubble data, baryon acoustic oscillation data, and cosmic microwave background shift data. Cosmological models are investigated considering a cosmic fluid which is an extension of Chaplygin gas; however, it reduces to modified Chaplygin gas (MCG) and also to generalized Chaplygin gas (GCG) in special cases. It is found that in the case of MCG and GCG, the best-fit values of all the parameters are positive. The distance modulus agrees quite well with the experimental Union2 data. The speed of sound obtained in the model is small, as necessary for structure formation. We also determine the observational constraints on the constants of the ECG equation.
https://blog.theleapjournal.org/2013/10/fama-shiller-hansen.html
## Wednesday, October 16, 2013 ### Fama, Shiller, Hansen 1. Disappointing that the author over-glorifies Fama and under-appreciates Shiller. And, this lumping together of two very disparate theories under the same general umbrella of understanding asset prices is bizarre and disingenuous from the Nobel committee. Why should unpredictability be consistent with efficiency? The market can stay irrational longer than one can remain solvent should be a valid counter-argument here. In other words, there is no reason why inefficient markets cannot be unpredictable. Why is this commonsense argument not made, I will never know. Second argument is that the market provides feedback into reality, so reality can catch up to markets or vice versa and the market is an active participant in manipulating reality. So, to look at markets as a passive observer or consumer of information is wrong. Something like Soros' theory of reflexivity is nearer to reality. And similar ideas are formulated by behavioral economists as well. Of course, no serious finance scholar will take Soros's theory seriously, so I would point out MIT's Lo and MacKinlay's Adaptive Markets Hypothesis. Their book "A Non-Random Walk Down Wall Street" points out why the random walk hypothesis is wrong. But mainstream economists (or in the words of the author, every serious finance scholar) won't go that route as it makes life difficult for them. Because, where is the easy path from those assumptions? The common argument that only 1% of active investors outperform the average or that every serious finance scholar knows that they cannot outperform the market is wrong in so many ways. First of all, this is the case in most professions. Probably only 1% of writers write bestsellers. According to efficient theorists, only 1% of JEE applicants make it through to IIT and the average applicant does not make it to IIT, so no serious student should attempt to outperform on the JEE. An absurd argument. 
What is even more absurd is that if there are no active investors, there is no competitive market and no efficient market, so the whole argument collapses on itself. The argument that indexing is better can apply to non-finance professionals, because that is not their area of expertise. But the author is applying this argument to "every serious finance scholar", which is so very wrong, as is borne out by reality and the arguments made here. The situation is no different than in any competitive, free-market profession. Applying the same argument to other professions would mean that no serious professional should try to do better than average in his/her profession, which is impractical, but then impracticality is suspended when talking about efficient markets. Oh well...

1. I'm delighted that Shiller spoke about the "discordant" grouping of ideas by the Nobel committee in an interesting article:

2. I would agree with reviewer Vivek's comments and would like to supplement them. Market efficiency, i.e. that stock prices move randomly, does not necessarily imply the absence of bubbles, a comparison which "every serious finance scholar" is prone to make. Imagine stock prices move in tandem with the results of a biased coin (biased in favour of heads). Even in this case, while heads are more likely, every outcome is still random, i.e. one still cannot predict the stock price movements with certainty. Over a period of time, however, such a bias shows up as prices move away from "fundamentals", a bubble scenario (incidentally, a word you cannot utter in Fama's presence). Hence Fama's model is better seen as a short-term pricing tool, nothing more, nothing less. He surely deserves the prize though.

3. Mostly agree with the comments above; disappointing article from Mr Ajay Shah.

4. Poolla R.K. Murti

1. The erudite comments presented so far are well appreciated.

2. I wish to submit that FAMA is well aware of the limitations of the assumptions of the "Random Walk Model".
I wish to quote from his celebrated paper on Efficient Capital Markets II: "Since there are surely positive information and trading costs, the extreme version of the market efficiency hypothesis is surely false. Its advantage, however, is that it is a clean benchmark that allows me to sidestep the messy problem of deciding what are reasonable information and trading costs. I can focus instead on the more interesting task of laying out the evidence on the adjustment of prices to various kinds of information. Each reader is then free to judge the scenarios where market efficiency is a good approximation (that is, deviations from the extreme version of the efficiency hypothesis are within information and trading costs) and those where some other model is a better simplifying view of the world" (Journal of Finance, Vol. 46, Issue 5, Dec 1991).

This should be kept in mind while studying his seminal paper on Efficient Capital Markets presenting the Random Walk Model (Journal of Finance, Vol. 25, Issue 2, May 1970), written "under the assumption that security prices at any time “fully reflect” all available information. A market in which prices always “fully reflect” available information is called “efficient.”" As such, Fama's assumptions (based on which the Random Walk Model evolved) are fairly clear, while it is another matter (which should in no way detract from the great author's contribution) how far the assumptions are from the "real world".

3. Apart from the subject, I would also submit that the mean-variance theory of stock returns is based on the basic assumption that stock returns can be modeled by the normal distribution, while experience shows that a skewed distribution may be more realistic.

4. The above points are presented to elicit more scholarly discussion from both gifted academics and practitioners of the capital markets. I am merely a student of finance and wish to learn from scholars of the subject.

5.
FAMA, of course, has carved for himself a highly respectable place in the "Theory of Finance" (in a lighter vein, I failed my Finance course at the Catholic University of Leuven in 1978 because I could not understand his theory at the time!). Let us give him the noblest academic accolades, as also to Professor Ajay Shah for so succinctly presenting FAMA in an admirable fashion.
http://davidkader.blogspot.com/
## 24 November 2007

### 1st posting

under construction
https://math.stackexchange.com/questions/991739/absolute-value-problem-x-y-y-x
# Absolute value problem $|x-y|=|y-x|$

My question is from Apostol's Vol. 1 One-Variable Calculus with Introduction to Linear Algebra textbook, page 43.

Problem 1: Prove each of the following properties of absolute values. (c) $|x-y|=|y-x|$.

The attempt at a solution: I solved a similar problem, which was this: $|x|-|y|\le|x-y|$, by manipulating the triangle inequality. I guess this one might be similar, but I don't see it. Please help. So far I have proven the following properties: $|x|=0$ if and only if $x=0$, and $|-x|=|x|$. Also, the absolute value is defined in this way: If $x$ is a real number, the absolute value of $x$ is a nonnegative real number denoted by $|x|$ and defined as follows:

$$|x|=\begin{cases} x, & \text{if } x\ge0, \\ -x, & \text{if } x\le0. \end{cases}$$

• If you've shown that the absolute value is multiplicative, then you could say $|x-y|=|(-1)(y-x)|=|-1||y-x|=|y-x|$. – Hayden Oct 26 '14 at 13:31
• No, I have not done that yet, that's part (f) of problem 1. – George Apriashvili Oct 26 '14 at 13:33
• it might be helpful to include the properties you have proven, including, for example, how you're defining the absolute value (i.e. as either the square root of the square, or as a piecewise function, although these are clearly equivalent) – Hayden Oct 26 '14 at 13:34
• @Hayden Yes, sorry for not being clear, I listed all that now. – George Apriashvili Oct 26 '14 at 13:42

$$|x-y|=|x-y|$$ $$|x-y|=|1|\cdot|x-y|$$ $$|x-y|=|-1|\cdot|x-y|$$ $$|x-y|=|-1\cdot(x-y)|$$ $$|x-y|=|y-x|$$

Without $|x||y|=|xy|$:

If $x>y$: since $y-x<0$, that means $|y-x|=-(y-x)=x-y$, so $$|y-x|=x-y.$$ Since $x-y>0$, that means $|x-y|=x-y$, so $$|x-y|=x-y.$$ Equality is transitive: $$|x-y|=|y-x|$$

If $y>x$: since $x-y<0$, that means $|x-y|=-(x-y)=y-x$, so $$|x-y|=y-x.$$ Since $y-x>0$, that means $|y-x|=y-x$, so $$|y-x|=y-x.$$ Equality is transitive: $$|x-y|=|y-x|$$

The case of $x=y$ is left as an exercise for the reader.

• I have not proven the property $|xy|=|x||y|$ yet, so is there any other way to achieve the result?
– George Apriashvili Oct 26 '14 at 13:36 • @GeorgeDirac Look at my edit – Alice Ryhl Oct 26 '14 at 13:45 You say that you have proven that $|x|=|-x|$, then it immediately follows that $$|x-y| = |-(x-y)| =|-x+y| = |y-x|.$$ • Yeah, thats true. Well I feel stupid now, that I didn't think of that, thanks for simple explanation. – George Apriashvili Oct 26 '14 at 13:54 • @GeorgeDirac You're welcome :-) – Eff Oct 26 '14 at 13:56
http://math.stackexchange.com/questions/74682/injective-functions-also-surjective
# Injective functions also surjective?

Is it true that for every set $M$, a given injective function $f: M \rightarrow M$ is surjective, too? Can someone explain why it is true or not and give an example? -

• If the set $M$ is finite, then yes. – Mariano Suárez-Alvarez Oct 21 '11 at 21:19
• I could swear that this question was asked 10 times before. I guess it's easier to write a one-line answer than to find the duplicates... – Asaf Karagila Oct 21 '11 at 21:24
• @Asaf In that case, it would also make sense to include this as a frequently asked question. :) – Srivatsan Oct 21 '11 at 21:28
• @Srivatsan: Yes, it would very much make sense to do that. In fact it might be worth a while to add a few other elementary set theory questions there. – Asaf Karagila Oct 21 '11 at 21:29

This statement is true if $M$ is a finite set, and false if $M$ is infinite. In fact, one definition of an infinite set is that a set $M$ is infinite iff there exists a bijection $g : M \to N$ where $N$ is a proper subset of $M$. Given such a function $g$, the function $f : M \to M$ defined by $f(x) = g(x)$ for all $x \in M$ is injective, but not surjective. Henning's answer illustrates this with an example when $M = \mathbb N$. To put that example in the context of my answer, let $E \subseteq \mathbb N$ be the set of positive even numbers, and consider the bijection $g: \mathbb N \to E$ given by $g(x) = 2x$ for all $x \in \mathbb N$.

On the other hand, if $M$ is finite and $f: M \to M$, then it is true that $f$ is injective iff it is surjective. Let $m = |M| < \infty$. Suppose $f$ is not surjective. Then $f(M)$ is a strict subset of $M$, and hence $|f(M)| < m$. Now, think of each $x \in M$ as a pigeon, and throw the pigeon $x$ into the hole $f(x)$ (also a member of $M$). Since the number of pigeons strictly exceeds the number of holes (both these numbers are finite), it follows from the pigeonhole principle that some two pigeons go into the same hole.
That is, there exist distinct $x_1, x_2 \in M$ such that $f(x_1) = f(x_2)$, which shows that $f$ is not injective. (See if you can prove the other direction: if $f$ is surjective, then it is injective.) Note that the pigeonhole principle itself needs a proof and that proof is a little elaborate (relying on the definition of a finite set, for instance). I ignore such complications in this answer. - I didn't get why this is true for finite sets. What is the difference here between finite and infinite sets? –  sschaef Oct 21 '11 at 21:44 @Antoras: For starters, infinite sets are not finite. They allow more room to move things around. –  Asaf Karagila Oct 21 '11 at 22:02 Yes, that is true. But why do they make the injective function f not surjective? –  sschaef Oct 21 '11 at 22:07 @Antoras: It does not mean that every injective function is not surjective. It just means that some injective functions are not surjective, and some surjective functions are not injective either. –  Asaf Karagila Oct 21 '11 at 22:27 @Antoras: It appears that you believe a function is some universal object, but it is not. Different functions can have different domains (the set on which they operate). In fact the categorical approach defines a function along with its domain and codomain. This is exactly the point. If the function has a finite domain then injective is the same as surjective. If it has an infinite domain then this is no longer true. –  Asaf Karagila Oct 22 '11 at 8:42 No. Consider $f:\mathbb N\to\mathbb N$ defined by $f(n)=2n$. It is injective but not surjective. - Or $f(n)=n+1$. Peano says that $f$ is not surjective. –  lhf Oct 21 '11 at 22:47 Right. I wrote that first, but then changed it to $2n$ because I didn't want the confusion of choosing between saying "1 is not in the image" and "0 is not in the image". Then I forgot to add "the odd numbers are not in the image" to the answer anyway, so I might as well have left it at the successor function. 
–  Henning Makholm Oct 21 '11 at 22:53 For finite sets, consider the two-point set $\{a,b\}$. If you have an injective function, $f(a)\neq f(b)$, so one has to be $a$ and one has to be $b$, so the function is surjective. The same idea works for sets of any finite size. If the size is $n$ and the function is injective, then $n$ distinct elements are in the range, which is all of $M$, so it is surjective. -
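Both halves of the answer can be brute-force checked with a short Python script (illustrative only; the helper names are ours): every function on a small finite set is injective exactly when it is surjective, while doubling on an initial segment of $\mathbb N$ is injective yet misses every odd number.

```python
from itertools import product

def is_injective(f, domain):
    """A function (given as a dict) is injective if no two inputs share an image."""
    images = [f[x] for x in domain]
    return len(set(images)) == len(images)

def is_surjective(f, domain, codomain):
    """Surjective if every element of the codomain is hit."""
    return set(f[x] for x in domain) == set(codomain)

M = [0, 1, 2]

# All 3^3 = 27 functions f: M -> M, encoded as dicts
for values in product(M, repeat=len(M)):
    f = dict(zip(M, values))
    # On a finite set, injective and surjective coincide
    assert is_injective(f, M) == is_surjective(f, M, M)

# On an infinite domain the equivalence fails: n -> 2n is injective on
# the naturals but never hits an odd number (checked on a finite sample).
double = {x: 2 * x for x in range(100)}
assert is_injective(double, range(100))
assert 1 not in double.values()
```

The exhaustive loop is exactly the pigeonhole argument made concrete: with only finitely many holes, a function that misses one must double up somewhere.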
https://www.simscale.com/docs/content/simulation/analysis_types/heatTransferDescription.html?highlight=heat%20transfer
# Heat transfer

The simulation type Heat transfer allows the calculation of the temperature distribution and heat flux in a solid under thermal loads such as convection and radiation. As a result you can analyse the temperature distribution in a steady-state scenario as well as, for example, a transient heating process of a mechanical part. A negative heat flux over the borders of the domain indicates the thermal power loss of e.g. a cooled device.

Thermal change in a PCB

In the following, the different simulation settings you have to define are described in detail, as well as the various options you can add.

## Analysis Type

You can choose whether you want to calculate the steady-state behaviour of the system, comparable to the Static analysis, or whether you want to take time-dependent effects into consideration in a transient analysis.

## Domain

In order to perform an analysis on a given geometrical domain, you have to discretize your model by creating a mesh out of it. Details of CAD handling and meshing are described in the Pre-processing section. After you have assigned a mesh to the simulation, you can add some optional domain-related settings and have a look at the mesh details. Please note that if you have an assembly of multiple bodies that are not fused together, you have to add Contacts if you want to build connections between those independent parts.

### Materials

In order to define the material properties of the whole domain, you have to assign exactly one material to every part and define its thermal properties. Note that the specific heat is only needed for transient analyses.

### Initial Conditions

For the time-dependent behaviour of a solid structure it is important to define the Initial Conditions carefully, since these values determine the solution of the analysis. If you choose to run a transient analysis, the temperature depends on time.
It is set to room temperature (293.15 K) by default and is also provided for steady-state simulations for convergence reasons.

### Boundary conditions

You can define temperature and thermal load boundary conditions. If you provide a temperature boundary condition on an entity, the temperature value of all contained nodes is set to the given prescribed value. Thermal load boundary conditions define the heat flux into or out of the domain via different mechanisms. Note that a negative heat flux indicates a heat loss to the environment. As a temperature boundary condition prescribes the temperature value on a given part of the domain, it is not possible to simultaneously add a thermal load on that part, as it would be overconstrained in that case.

Temperature boundary condition types (Thermal Constraints)

Heat flux boundary condition types (Thermal Loads)

## Numerics

Under Numerics you can set the equation solver of your simulation. The choice strongly influences the computational time and the required memory size of the simulation.

## Simulation Control

The Simulation Control settings define the overall process of the calculation, for example the timestepping interval and the maximum time you want your simulation to run before it is automatically cancelled.

### Solver

The described Heat transfer analysis of the finite element code CalculiX Crunchix (CCX) is only available via the solver perspective. You may as well choose the finite element package Code_Aster for this analysis type (Heat transfer CA), either using the standard Heat transfer analysis from the physics perspective or via the solver perspective choosing Code_Aster as solver. See our Third-party software section for further information.
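The distinction between a transient run (marching an initial temperature field through time) and its steady-state limit can be illustrated with a toy model. This is an illustrative sketch only, not SimScale or CalculiX code; the rod length, diffusivity, and fixed-temperature ends are assumed values:

```python
# Explicit finite-difference solution of 1D transient heat conduction in a rod.
# The field starts at the default initial condition (room temperature, 293.15 K)
# and relaxes toward the linear steady-state profile set by the two
# prescribed-temperature boundary conditions.

alpha = 4e-6                   # thermal diffusivity [m^2/s] (assumed material value)
n = 21                         # number of grid nodes
L = 0.1                        # rod length [m]
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha     # time step inside the explicit stability limit

T = [293.15] * n               # initial condition: room temperature everywhere
T_left, T_right = 393.15, 293.15   # prescribed temperatures at the two ends

for step in range(20000):      # transient time stepping (far past convergence)
    T[0], T[-1] = T_left, T_right
    T = [T[0]] + [T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
                  for i in range(1, n - 1)] + [T[-1]]

# After enough steps the profile approaches the linear steady state,
# so the midpoint temperature is the average of the two end values.
print(round(T[n // 2], 2), round(0.5 * (T_left + T_right), 2))
```

A steady-state analysis solves for this final profile directly, which is why it needs no specific heat and no timestepping settings; the transient analysis needs both.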
https://informationtransfereconomics.blogspot.com/2015/10/we-built-this-theory-on-scope-conditions.html
## Thursday, October 1, 2015

### We built this theory on scope conditions

I've been reading Noah Smith's latest post over and over and I'm not quite sure I get the point. My summary:

Economics wasn't very empirical, so theories used to be believed for theoretical reasons. Then along came data in the form of natural experiments, but these ruled out theories. Natural experiments have limited scope and don't tell you what the right theory is. This creates a philosophical crisis that manifests as an adversarial relationship between theory and data that will work itself out.

Sounds like an irrational three-year-old with fingers in ears saying La-la-la ... I can't hear you! when it's time to go to bed. That's pretty funny because the rational agent stuff tends to be what is being killed.

Actually, physics is dealing with the exact same problem (with dignity and grace). We have the standard model and general relativity (aka "the core theory"). There have been several natural experiments (supernovas telling us the expansion of the universe is accelerating, solar neutrino oscillations) that have 'proven' the core theory 'wrong' in ways that don't tell you what the right theory is. But there is no philosophical crisis and no adversarial relationship.

Noah chalks that up to a long tradition of empiricism in physics, but I disagree. It is the existence of a framework that says that even though the core theory is wrong about neutrino oscillations and the accelerating universe, it's still right about the things that it is right about. That's because of what Noah (in the prior post) says physicists call 'scope conditions' (that is fine, although the first Google reference is to sociology, and domain of validity and scale of the theory were terms more commonly used by this physicist). It's actually the funniest line of that prior post:

I have not seen economists spend much time thinking about domains of applicability (what physicists usually call "scope conditions").
But it's an important topic to think about.

Yes. That does sound like an important thing to think about. I have this theory. Under what conditions does it apply? Maybe we should look into it ... Ya think?

At least "we should look into it" is better than Dani Rodrik's assertion that the scope conditions are just whatever the model assumed in order to model a specific effect. The IS-LM model is limited to the Great Depression. DSGE models are limited to the period of the Great Moderation in the US. A model of the lack of impact on unemployment of that minimum wage increase in New Jersey from 4.25 to 5.05 in 1992 that Noah mentions in his post is restricted to that minimum wage increase from 4.25 to 5.05. In New Jersey. In 1992.

Anyway, physicists' so-called scope conditions mean that discovering neutrino oscillations or a positive cosmological constant doesn't burn through your theory like a building without firewalls or fire doors. The econ 101 model of a minimum wage rise causing unemployment doesn't actually have any scope conditions. So that minimum wage increase in NJ burns down the econ 101 model of minimum wages. To the ground. Rodrik tries to put in a firewall and say that the natural experiment should only burn down the econ 101 theory when you go from 4.25 to 5.05 in NJ in 1992.

But that brings us to an even more important point. You can't interpret a natural experiment without a framework that produces scope conditions. How do you know if you've isolated an external factor if you don't know what the scale of the impact of that external factor is? The real answer is 'you can't', but economists have been trying to get around it with instrumental variables and structural estimation. Structural estimation is the idea that you could make up a plausible argument for X to depend on Y but not Z. Instrumental variables is the idea that ... you can make up a plausible argument for X to depend on Y but not Z.
Anyway, those plausibility arguments are basically hand-waving scope conditions, but without a framework you have no idea what the size of the domain of validity is. As Noah says: "you have an epsilon-sized ball of knowledge, and no one tells you how large epsilon is."

The other way to get around the issue of data rejecting your theory and the lack of scope conditions (thus the data burning your entire theory down) is to relax your definition of rejection. One way of doing that is called calibration. And all of these ran into each other on twitter today. Basically we have this ...

Problem: Data rejects our theory and, without the firewalls of scope conditions, it burns the entire theory down

Solution 1: Scope conditions are limited to the original purpose of the theory (Rodrik)

Solution 2: Hand-waving about scope conditions with instrumental variables

Solution 3: Relax the definition of "rejects" with calibration

Let it burn was apparently not an option.

...

PS I'm sure you want to ask about the scope conditions (domain of validity) of the information equilibrium models. Well, the scope of any particular model consists of its equilibrium relationships between its process variables. If data rejects the market information equilibrium relationship $A \rightleftarrows B$, then that relationship is rejected. If the model is made up of more than one relationship, but depends on the rejected relationship, then the model is rejected. That should make intuitive sense: either information flows between $A$ and $B$ or it doesn't. And if part of your model requires information to flow between $A$ and $B$ and it doesn't, then your model is wrong.

PPS This post started out with my personal opinion that if a particular statistical method matters in rejecting or accepting your model, then your model probably doesn't tell us much.

1. This is very interesting. I spent many years in management consulting. One of my observations relates to the difference between novice consultants e.g.
recent graduates and experienced consultants. Novice consultants tend to focus on themselves and the techniques in which they are experts. As a result, they tend to see every assignment as the same and they often jump quickly to conclusions. Experienced consultants, on the other hand, tend to focus on each client and his problem. They start by assuming that the current problem is unique. After some investigation, they begin to recognise patterns with previous assignments. The current assignment X has a lot in common with assignment C but with some flavour of assignment F, some of the people problems of assignment P and the technical computer problems of assignment T. In the terms of your post, novices start by assuming a broad domain of applicability of a specific technique or solution and require evidence (normally from someone else bashing them over the head) to change that assumption. Experienced consultants start by assuming a unique situation and then using evidence to widen the relevant domain. The difference between the novices and the experienced consultants is that the experienced consultants have developed a complex decision tree (mostly undocumented) to assess the scope conditions that apply to each problem. They focus on what is different about a problem at least as much as what is the same. They look at the problem from several different perspectives and they listen keenly to the views of people who have those different perspectives. They don’t talk about theories which are universally true. Rather, they talk about rules of thumb which apply in certain situations. As a result, they have a mental agility which the novices lack. In this respect at least, most economists behave like novice consultants. However, economists have an additional problem in that they don’t listen to anyone else apart from other economists who think in the same way, so they never change. 1. 
Hi Jamie, I see some similarities here with the "fox and hedgehog" theory (your novices are like hedgehogs who know one big thing well, and your managers are like foxes who are more generalists). This actually tends to be pretty typical in fields of research from my experience. For example: I am applying information theory (important in signal processing) to economics :)

2. I don’t think that the fox and hedgehog analogy is relevant to my point. Experienced consultants know the one big thing that novices know. However, they also know many other things. They are not generalist managers. The point is more that ‘if your only tool is a hammer you tend to see everything in terms of nails’. The novices have only one tool. The experienced consultants have many tools and the wisdom about how to use them.

Regarding your information transfer techniques, you are looking to replace one forecasting technique with another. That’s fine but is a different point. There is always room to improve specific techniques. I read your blog because there is not enough innovative thinking in economics. It’s also interesting to read your perspective on mainstream economics and compare it with mine. However, the analogy with my point would be that macroeconomic forecasting is only one part of economics.

One of the oddest aspects of economics is that physicists, chemists, engineers, entrepreneurs and even politicians have contributed much more to our economic prosperity than economists. For example, the inventors of the washing machine and the pill freed women from the home and doubled the effective workforce. Future economic prosperity will probably depend on further disruptive innovation. However, I doubt that such innovation is forecastable via any mathematical technique. It will arise from the randomness in your models rather than from the parameters you are modelling. Other changes may arise from factors such as demographics, climate change and wars.
The biggest economic issue in Europe at the moment is the mass migration of people from the Middle East and North Africa. That wasn’t in any economic forecast even a year ago. Finally, mathematical forecasting is limited by our ability to measure things. When I buy a book over the internet I no longer have to spend an hour travelling to and from town to visit a physical bookshop. That hour of time is a benefit of internet shopping but it is not measured anywhere. Similarly, if I buy a faster computer, a more reliable car or better quality food, but pay the same price as before, this quality improvement is not measured either. These innovative changes take place at the same time as economists solemnly tell us that we have a ‘productivity problem’ due to a lack of innovation. Maybe economists have a measurement problem? 3. Hi Jamie, You said I'm trying to replace one forecasting method with another -- that is not exactly what I'm doing. There are a few methods of determining whether a model is correct. One way involves statistical tests (instrumental variables, Granger causality, etc). Another way is a precise retro-diction with a fairly convincing theoretical model (Einstein's calculation of the precession of Mercury falls in this category). However another way is to predict future data (conditional on model inputs). This last one is the reason I do the forecasts. I'm trying to replace (or maybe just augment) a set of analytical tools with a new one. Another reason for the forecasts is that they embody one of the key differences from the mainstream approach. Many economists think you can't predict recessions and the like because it boils down to predicting human behavior ... which people tend to believe can't be predicted. I want to show that the law of large numbers leads to the averaging out of human behavior and a fairly predictable macroeconomic system. The forecasts I make are attempts to show macro is predictable. 
Plus the forecasts I make are actually pretty boring. They all look like linear extrapolations!

Two other things: The migration into Europe is a big deal in the news, but tends to be somewhat of a wash in terms of economic impact because it's only a few million people into a region with 800 million people. That's only on the order of 0.1%, which is on the order of the error in the measurement of NGDP. The fall in prices is measured and so-called 'hedonic' adjustments are made to CPI. See e.g. here for computers in the US: https://research.stlouisfed.org/fred2/series/CUSR0000SEEE01

Ordering the book online is measured in higher productivity of book producers, and having you travel to your bookstore is actually less efficient than the economies of scale available from moving the books to you in a "book bus" (delivery van). That's one less car on the road (reducing others' wait time in traffic), and you've saved a bit of fuel. Is all of this 'good'? Not really -- the bookshop is put out of business, and we use a lot more packaging that fills up landfills.

2. Or, laws in economics are few in number, empirically verifiable yet probabilistic, and should be based off of long time series. Economics needs better and longer empirical time series, and these will arrive "over time". Though, if you can't get the profession to accept that 3 month T-bill rates are mostly a function of monetary base / NGDP back to the formation of the Fed, the problem may be non-tractable, i.e. political in nature -- economics may simply be politics by other means.

1. I agree that it is entirely possible there are very few empirical regularities -- and such a view would mean you should treat economics like history instead of a science.

3. Jason, a very interesting post.

4. Jason, I'm curious: I'm not really a Twitterer, nor do I have a desire to be one; I know basically how to navigate around and see the whole conversation, but that's about it.
If I'm not mistaken, it appears in your screen capture above that you were about to respond to all of them, yet I didn't see a tweet from you in this conversation when I looked it up in Noah's conversations. 1. Did you chime in on this? 2. Do you generally chime in? Does Noah (in particular) ever respond? I'm curious since he so often erases your comments on his blog. 3. Would we see your tweets there if you did?... do any of them block you from their twitter conversations? 1. It always shows a reply space if you expand a conversation if you're on twitter. I didn't chime in there and I'm sure Noah has muted me already (you can't block people on Twitter as far as I know, you can only mute them so they don't show up in your own feed). As there were already 5 people in the conversation, adding another would probably take up all of the 140 characters anyway. 5. It'd be nice to hear Weird Al do a song based on this post to the tune of "We built this city." 1. It would be and that was the theory behind the title. On Twitter I used my second choice "burning down the theory". 2. In a strange confluence of events, Noah put Weird Al at the top of his post that bore striking similarities to my post that references this post ... http://noahpinionblog.blogspot.com/2016/01/situationalism-in-economic-policymaking.html The lattice of coincidences ...
https://gamedev.stackexchange.com/questions/115125/where-to-cast-light-shadows-in-a-2-5d-view/115127
# Where to cast light/shadows in a 2.5D view?

I'm working on a tile-based 2D pseudo top-down/side-on game very much in the graphical style of Prison Architect. More specifically, I'm referencing their style of drawing wall tiles where you can see the south, east and west sides of the wall at once. Here's an example from my engine:

I'm also working on a pixel-based (i.e. not tile-based) lighting and shadows implementation. I'm currently struggling with trying to decide how/where to project from a light source on intersections with these wall tiles. I can't decide which parts of these tiles should occlude light cast out by light sources. If the red highlighted areas are the "top" of the wall, and the blue highlighted areas are the "side" of the wall, I believe I have two options:

A) Only occlude light from the "top" of the wall

It's worth mentioning that I also plan to use UV-mapping so that only the walls facing the light source will be illuminated, rather than the pre-shaded tiles I'm using as an example. However, that would mean that the tiles adjacent to a wall in shadow may be lit and I don't think this would look quite right. Alternatively...

B) Occlude light from the entirety of the wall tile

This seems more realistic for the ground tiles but does not let me easily illuminate the wall "sides".

I'm not really happy with either solution, so my question is: is there another alternative which will give more realistic shadow-casting in a 2.5D view? I'd also rather keep the sides of the walls visible rather than use a top-down-only perspective, as I feel this would force the rest of the art into a top-down perspective rather than pseudo side-on.

Going to try and doodle up what I mean here as soon as I finish typing this, but: use the second (occlude by base) for everything that isn't a wall and the first (occlude by tops) for lighting the walls? You actually did this by accident in your second example, with the wall that goes off the bottom of the image.
Extending this to the remaining walls won't be perfect, but it would allow some lighting of the walls that will look pretty decent.

• Thanks, that should work great if I can get the directional facing of the walls to pick up lighting correctly. I was slowly coming to this realisation but I think I was put off by the fact it'll probably double the computation required. Still, I think it's the answer I was looking for! Jan 18 '16 at 20:43

• @RossTurner Sure thing :) I'm sure there will still be some "odd" results (such as light leaking through corners to illuminate walls) but for what you're trying to do, I think the result will be sufficiently simple and sufficiently accurate. Jan 18 '16 at 20:46

• Nice - simple and effective, probably doesn't need too much additional work, and seems to fit the graphical theme a bit better than my answer. Jan 18 '16 at 22:52

• Good idea. I would also add a falloff circle of additive ambient light in the dark zones, to fake GI and add some mood. This version is very dark and Limbo-like. Depends on what stress/relief you want to achieve though. Jan 19 '16 at 2:10

• @v.oddou The ambient light is much darker in this example than I'm actually planning for the finished result. Thanks for the input! Jan 19 '16 at 9:11

I won't be able to make an image for you, but one trick you could do to figure out if a piece of wall should light up is to take advantage of the 'alpha' channel for determining the direction the pixel is facing, as opposed to the opacity of the pixel. You could then determine whether the pixel should be lit from the angle between the light source and the facing of the wall pixel (alpha value). Flat shading in 3D rendering is a cheap and effective method that usually uses a similar algorithm using the normal of the plane and the light source's position/color.
In your case, the normal is interpreted from the alpha, but you could use a similar algorithm, resulting in very 3D lighting for a 2D game (which is probably overkill but still cool, in my opinion):

```
// some pseudocode of the important parts
Vector2 lightPosition;
Vector2 pixelPosition;
Vector2 lightDirection = lightPosition - pixelPosition;
Vector2 normalDirection = ...; // (translate the alpha value into a direction vector)
lightDirection.normalize();
normalDirection.normalize();
// with both vectors normalized, the dot product is the cosine of the angle
angleBetweenLightAndNormal = dotProduct(normalDirection, lightDirection);
// determine from the angle whether the normal should be lit or not
```

You could scale a value of 0.00 to be facing down (or whatever direction you prefer) and the value of 1.00 to also be down, as if the direction had rotated 360 degrees. This means the value of up is 0.50. If you plan on including the top of walls or the ground you may even scale it a few values short, and keep specific reserved values to mean top or ground. In fact, if you needed it to be as simple as possible:

```
// 0.00 = down
// 0.10 = left
// 0.20 = right
// 0.30 = up
// 0.40 = floor
// 0.50 = top of wall
```

which then leaves plenty more values for other situations. The drawback is that you are taking over the alpha channel, which generally relates to opacity. This means both that your engine may take some modifying to ignore alpha, and, if you do need an alpha channel, you can't do this method directly. You could create a new image with only lighting information instead. The nice thing about creating a new image with lighting information is that your RGB values can fully translate into a 3-directional rotation for the normal that pixel is facing, and the alpha could be the "height" of the pixel (as if it were in 3D space).

• Thanks, that's a great suggestion.
I've also been considering doing UV mapping with a UV map texture in the RGB channel, but since I don't really need a Z component (the lighting is on the same level as the objects being lit), storing a 360-degree normal in the alpha is a nice idea. Having said that, it may get a bit tricky to draw; I guess I'd have to write a custom image viewer to see the information, though that shouldn't be too bad either. One to think about! Jan 19 '16 at 9:05

• This answer reminds me that La Mulana 2 is actually a 3D game precisely for lighting purposes. It still plays like a 2D side scroller, but the actual level geometry is pushed out directly towards the screen so that when they use standard point lights, statues and such cast shadows. This answer is using pixel information to approximate that kind of effect; the mental visualization I made reading the answer reminded me of that team's approach. Jan 19 '16 at 15:30

I don't want to take anything away from @Draco18s' answer, but I went with his suggestion (combining the two) and ended up putting together a demonstration video on how it's done (for those interested) at https://www.youtube.com/watch?v=Cabl0LMmlgY

In addition to the quick sketch that he added, I ended up using normal maps on each "face" of the wall so that if there was any "bleed over" light, the angle of incidence means that it isn't illuminated.
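The alpha-as-direction idea above can be prototyped outside any engine. A minimal Python sketch: the function names and the specific 0–1 angle encoding here are my assumptions (not an engine API), and "down" is taken as $(0,-1)$ in math coordinates rather than screen coordinates:

```python
import math

# Store a pixel's facing as a value in [0, 1): 0.0 = facing down, rotating a
# full turn back to down at 1.0. Light the pixel by the clamped dot product
# between its decoded normal and the direction toward the light -- the same
# cosine term flat shading uses.

def decode_normal(alpha):
    """Map an encoded alpha in [0, 1) to a unit 2D normal vector."""
    angle = alpha * 2 * math.pi - math.pi / 2   # 0.0 -> pointing down (0, -1)
    return (math.cos(angle), math.sin(angle))

def lambert(pixel_pos, light_pos, alpha):
    """Clamped cosine term: 1.0 facing the light directly, 0.0 facing away."""
    lx = light_pos[0] - pixel_pos[0]
    ly = light_pos[1] - pixel_pos[1]
    length = math.hypot(lx, ly) or 1.0          # avoid division by zero
    nx, ny = decode_normal(alpha)
    return max(0.0, (nx * lx + ny * ly) / length)

# A downward-facing wall pixel (alpha 0.0) lit from directly below: fully lit.
print(lambert((0, 0), (0, -10), 0.0))
# ...and lit from directly above: facing away from the light, so no light.
print(lambert((0, 0), (0, 10), 0.0))
```

Scaling the lit color by this factor reproduces the "only walls facing the light are illuminated" behavior the question asks for, without any 3D geometry.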
https://bird.bcamath.org/handle/20.500.11824/10/browse?authority=7e3b49e4-7ec5-4851-8896-ffb2805d6e2e&type=author
#### Time-varying coefficient estimation in SURE models. Application to portfolio management. (2017-01-01)

This paper provides a detailed analysis of the asymptotic properties of a kernel estimator for a Seemingly Unrelated Regression Equations model with time-varying coefficients (tv-SURE) under very general conditions. ...
http://mathhelpforum.com/algebra/199957-simplify-expressiom.html
1. ## simplify expressiom

How would you write this in LaTex sqrt [1+(x\ sqrt 1- x2)2] this was my ans 1/1+x2 the book was 1\sqrt 1+x2

2. ## Re: simplify expressiom

I'm just guessing ... $\sqrt{1+(x\sqrt{1-x^2})^2}$ ???

4. ## Re: simplify expressiom

Originally Posted by zbest1966

quote my message and you'll see the Latex

what do you mean by an "answer" ??? English is not your primary language, is it?

5. ## Re: simplify expressiom

(LOL) I try latex but my was wrong. t this was my ans 1/1+x2 the book was 1\sqrt 1+x2

6. ## Re: simplify expressiom

why are you using a backward slash \ ??? the forward slash / is used for division, not \

is the book's simplification $\frac{1}{\sqrt{1+x^2}}$ ?
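The thread never pins down the original expression, but one plausible reading of "sqrt [1+(x\ sqrt 1- x2)2]" is $\sqrt{1+\bigl(x/\sqrt{1-x^2}\bigr)^2}$, with the backslash intended as division. Under that assumption (offered only as a guess at the intended algebra), the simplification goes through by combining over a common denominator:

$$\sqrt{1+\left(\frac{x}{\sqrt{1-x^2}}\right)^2}
=\sqrt{\frac{(1-x^2)+x^2}{1-x^2}}
=\frac{1}{\sqrt{1-x^2}}.$$

Whatever the intended expression, note that the asker's answer $\frac{1}{1+x^2}$ and the book's $\frac{1}{\sqrt{1+x^2}}$ differ exactly by a square root: the likely slip is simplifying $\sqrt{\frac{1}{1+x^2}}$ to $\frac{1}{1+x^2}$ instead of $\frac{1}{\sqrt{1+x^2}}$.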
http://mathoverflow.net/questions/162888/forcing-with-nontransitive-models
# Forcing with Nontransitive Models A common approach to forcing is to use a countable transitive model $M \in V$ with $\mathbb{P} \in M$ and take an $M$-generic filter $G$ (which always exists, since $M$ is countable) to form a countable transitive model $M[G]$. Another approach takes $M$ to be countable such that $M \prec H_\theta$ for sufficiently large $\theta$ (so $M$ need not be transitive). For example, a standard definition of proper forcing considers such models. Forcing with transitive models is quite convenient, since many absoluteness results can be used to transfer properties of $x \in M[G]$ which hold in $M[G]$ up to $V$. If $M \prec H_\theta$ is not transitive, then it is not clear which properties that $M[G]$ proves about $x$ transfer to $V$. For instance, if $M[G] \models x \in {}^\omega\omega$, is $x \in {}^\omega\omega$ in $V$? Of course, one remedy could be to Mostowski collapse everything and then use the familiar absoluteness for transitive models. For $x \in {}^\omega\omega$, one could use the fact that $M \prec H_\theta$ implies $\omega \subseteq M$, hence the Mostowski collapse of $M[G]$ would map each real to itself, and then use absoluteness to prove that $V \models x \in {}^\omega\omega$ as well. Is there a more direct way to prove this type of result, rather than collapsing the forcing extension? The collapsing route seems to suggest one should have collapsed $M$ before starting the forcing construction in the first place. So my questions are: 1. First, if one chooses to work with countable $M \prec H_\theta$, are there any changes that need to be made to the forcing construction and the forcing theorem as they appear in Kunen or Jech? Of course, the definition of a generic filter should be changed to meeting those dense sets that appear in $M$. 2. I am aware that if $G$ contains master conditions, then $M[G] \prec H_\theta[G]$. Is $H_\theta[G]$ just the forcing construction applied to $H_\theta$?
As $G$ is not necessarily generic over $H_\theta$, it is not clear to me that the forcing theorem needs to apply to $H_\theta[G]$ (or, a priori, that $H_\theta[G]$ models any particular amount of $\text{ZF} - \text{P}$; but since $M[G] \prec H_\theta[G]$, actually $H_\theta[G]$ would model as much as $M[G]$.) In general, without additional assumptions like master conditions, does the relation $M[G] \prec H_\theta[G]$ still hold? Also, perhaps I am misunderstanding something, but since $\mathbb{P} \in M$, it appears that if $\theta$ is large enough, every $G \subseteq \mathbb{P}$ which is $\mathbb{P}$-generic over $M$ is already in $H_\theta$. Would this not imply that $H_\theta[G] = H_\theta$ and hence $M[G] \prec H_\theta$? Since $M \prec H_\theta$, $M$ and $M[G]$ would then satisfy exactly the same sentences. This surely cannot happen. Thanks for any help and clarification that can be provided. - For 2, properness seems to be sufficient. – Mohammad Golshani Apr 9 '14 at 12:29 For your last comment, you need $G$ to be $H_\theta$-generic to form $H_\theta[G]$ – Mohammad Golshani Apr 9 '14 at 12:32 All standard forcing machinery works when forcing over such $M$ because these models satisfy a large enough fragment of $ZFC$, namely $ZFC$ without the powerset axiom. The purpose of forcing over such models is rarely to transfer results to $V$, although something like this can be done in the following way. Suppose that $M\prec H_\theta$ is countable with $\mathbb P\in M$ and for every $M$-generic $G$ in $V$, we have that $M[G]\models\varphi$. Then $M$ satisfies that $\varphi$ is forced by $\mathbb P$. But then by elementarity, $H_\theta$ satisfies that $\varphi$ is forced by $\mathbb P$ as well. Thus, $H_\theta[G]\models\varphi$ in every forcing extension $V[G]$. So in a way, we have transferred a property from $M[G]$ to $V[G]$. I recently encountered many such arguments when working with Schindler's remarkable cardinals and I have some notes written up here.
In the case of remarkable cardinals, you use some properties of the transitive collapse of $M$ to argue that certain generic embeddings exist in its forcing extension by $Coll(\omega,<\kappa)$. Using the argument above you then conclude that such generic embeddings must exist in $H_\theta[G]$ where $G\subseteq Coll(\omega,<\kappa)$ is $V$-generic. The argument that $M[G]\prec H_\theta[G]$ works only in the case that $G$ is both fully $H_\theta$-generic and also $M$-generic (meets every dense set of $M$ in $M$ itself). Indeed, in most situations where forcing over $M\prec H_\theta$ is used, as in say proper forcing, the arguments usually involve fully generic $G$. It seems that generally the purpose of such arguments is to use $M[G]$ to conclude that some property holds in $V[G]$ by reflecting down to countable objects. This is for instance how one can use the definition of proper posets, in terms of the existence of $M$-generic filters for countable $M\prec H_\theta$, to argue that they don't collapse $\omega_1$. - Vika, I think your claim that "The argument that $M[G]\prec H_\theta[G]$ works only in the case that $G$ is both fully $H_\theta$-generic and also $M$-generic," is not actually correct, in light of the theorem in my answer. You don't actually need that $G$ is $M$-generic for this conclusion. – Joel David Hamkins Apr 10 '14 at 3:58 To clarify: $(M[G],{\in})\prec (H_\theta[G],{\in})$ is indeed true for all $H_\theta$-generic $G$. But often we want to use an additional unary predicate for $V$ or $H_\theta$, so we are interested in $(M[G],{\in},M)\prec (H_\theta[G],{\in},H_\theta)$. For $H_\theta$-generic $G$, this property is equivalent to $M$-genericity. – Goldstern Mar 8 at 16:29 What I'd like to point out is that, contrary to what has been stated, one doesn't actually need to assume that $G$ is $M$-generic in order to conclude $M[G]\prec H_\theta[G]$; having $G\subset\mathbb{P}\in M$ being $H_\theta$-generic (that is, $V$-generic) is sufficient. 
Let's begin by correcting, as Victoria does, your definition of what it means for $G\subset\mathbb{P}$ to be $M$-generic, in the case where $M\prec H_\theta$ is a possibly non-transitive elementary submodel of some $H_\theta$. You said that to be generic means to meet every dense subset $D\subset \mathbb{P}$ with $D\in M$, but this is not the right definition. You want to say instead that $G$ meets every such dense set $D$ inside $M$. That is, that $G\cap D\cap M\neq\emptyset$. If we only have $G\cap D\neq\emptyset$, then $M$ will not have access to the conditions $p\in G\cap D$ that are useful when a filter meets a dense set. So it is the corrected definition that treats $\langle M,{\in^M}\rangle$ as a model of set theory in its own right, insisting that for every dense set in this structure, the filter meets it. Proper forcing is of course all about this, since we seek a condition $p\in\mathbb{P}$ forcing that whenever $G\subset\mathbb{P}$ is $V$-generic, then it is also $M$-generic in this sense. But we may still form the extension $M[G]$ whether or not $G$ is $M$-generic in this sense, defining $M[G]=\{\tau_G\mid\tau\in M^{\mathbb{P}}\}$ to be the interpretation of all names in $M$ by the filter $G$. Now, it turns out that for $V$-generic filters $G$, we have that $G$ is $M$-generic just in case $M[G]\cap\text{Ord}=M\cap\text{Ord}$, which holds just in case $M[G]\cap V=M$. This is easy to see, since any name $\dot\alpha$ for an ordinal in $M$ gives rise to an antichain of possibilities in $M$, and so if $G$ is $M$-generic, then it will force $\dot\alpha$ to be an ordinal already in $M$. And for the other direction, given any maximal antichain $A$ in $M$, we may construct by the mixing lemma a name $\dot\alpha$ for an ordinal, which will be a new ordinal just in case $G$ does not meet $A\cap M$. Assume $H_\theta$ satisfies a sufficiently large fragment of ZFC. Theorem.
If $M\prec H_\theta$ and $G\subset\mathbb{P}\in M$ is $H_\theta$-generic, then $M[G]\prec H_\theta[G]$. Proof. Suppose that $M\prec H_\theta$ and $G\subset\mathbb{P}\in M$ is $H_\theta$-generic. We may still form $M[G]=\{\tau_G\mid \tau\in M^{\mathbb{P}}\}$ as the set of interpretations of names in $M$ using the filter $G$. Let $\bar M=M[G]\cap V$. This is larger than $M$, precisely when $G$ is not $M$-generic. I claim that $\bar M\prec H_\theta$, by verifying the Tarski-Vaught criterion, since if $H_\theta$ has a witness, then we may find a name in $M$ for such a ground-model object, and so we will find a witness in $\bar M$. And since $\bar M\subset \bar M[G]\cap V\subset M[G]\cap V=\bar M$, it follows that $\bar M[G]\cap V=\bar M$, and so $G$ is actually $\bar M$-generic. So $M[G]=\bar M[G]\prec H_\theta[G]$ by reducing to the case where we do have the extra genericity. QED In regard to question 2, of course we want $G$ to be $H_\theta$-generic, since without this it is easy to make counterexamples to $M[G]\prec H_\theta[G]$. For example, if $M$ is countable we can easily find $M$-generic filters $G$ with $G\in H_\theta$, and in this case, if the forcing is nontrivial then $M[G]$ is definitely not an elementary substructure of $H_\theta[G]=H_\theta$. This is the argument of your last paragraph, and that is totally right; so the conclusion is that for this question we want to assume $G$ is $V$-generic. Lastly, let me point out that one doesn't need countable models in order to undertake the forcing construction, and one can speak of the forcing extensions of any model of set theory, whether it is countable, transitive, uncountable, nonstandard, whatever. The most illuminating way to do this is via Boolean-valued models, and by taking the quotient, one arrives at the Boolean ultrapower construction. 
The basic situation is that if $V$ is a model of set theory containing a complete Boolean algebra $\mathbb{B}$, and $U\subset\mathbb{B}$ is an ultrafilter ($U\in V$ is completely fine), then one may form the quotient $V^{\mathbb{B}}/U$ of the $\mathbb{B}$-valued structure, and this is realized as a forcing extension of its ground model $\check V_U$, and furthermore there is an elementary embedding of $V$ into $\check V_U$, called the Boolean ultrapower map. So the entire composition $$V\overset{\prec}{\scriptsize\sim} \check V_U\subset \check V_U[G]=V^{\mathbb{B}}/U$$ lives inside $V$. There is no need for $V$ to be countable and no need for $U$ to be generic in any sense, yet $G$, which is the equivalence class of the name $\dot G$ by $U$, is nevertheless $\check V_U$-generic. You can find fuller details in my paper with D. Seabold, Boolean ultrapowers as large cardinal embeddings. - Joel, for some reason I am suspicious of that argument every time I see it :). Can you say something more about the statement "if $H_\theta$ has a witness, then we may find a name in $M$ for such a ground model object..." I don't quite follow it. – Victoria Gitman Apr 10 '14 at 11:59 If $H_\theta\models\varphi(x,\tau_G)$, with $\tau\in M$ and $\tau_G\in H_\theta$, then there is an antichain of possible values of $\tau$, and for each possible $y\in H_\theta$ that it might be, we have an $x$ for which $H_\theta\models\varphi(x,y)$. Now, by mixing $\check x$ along the antichain, we find a name $\dot x$ such that $H_\theta\models\varphi(\dot x_G,\tau_G)$. By elementarity, since $M\prec H_\theta$, there is such a name $\dot x$ inside $M$. And so $M[G]$ has the witness $\dot x_G$, which is one of the $x$'s that we mixed. – Joel David Hamkins Apr 10 '14 at 12:17 Ok, great! I am convinced. – Victoria Gitman Apr 10 '14 at 12:49
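The genericity notion at issue throughout this thread can be restated compactly in display form (same notation as the discussion above; both displays are taken directly from the answers):

```latex
% M-genericity for a possibly non-transitive M \prec H_\theta
% (the corrected definition: G must meet each dense set *inside* M):
\[ G \text{ is } M\text{-generic} \iff
   G \cap D \cap M \neq \emptyset
   \text{ for every dense } D \subseteq \mathbb{P} \text{ with } D \in M. \]

% For V-generic filters G, the characterization noted above:
\[ G \text{ is } M\text{-generic} \iff
   M[G] \cap \mathrm{Ord} = M \cap \mathrm{Ord} \iff
   M[G] \cap V = M. \]
```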
https://www.science.gov/topicpages/a/acid-base+equilibrium+constants.html
#### Sample records for acid-base equilibrium constants 1. Conjugate Acid-Base Pairs, Free Energy, and the Equilibrium Constant ERIC Educational Resources Information Center Beach, Darrell H. 1969-01-01 Describes a method of calculating the equilibrium constant from free energy data. Values of the equilibrium constants of six Bronsted-Lowry reactions calculated by the author's method and by a conventional textbook method are compared. (LC) 2. Distinguishing between keto-enol and acid-base forms of firefly oxyluciferin through calculation of excited-state equilibrium constants. PubMed Falklöf, Olle; Durbeej, Bo 2014-11-15 Although recent years have seen much progress in the elucidation of the mechanisms underlying the bioluminescence of fireflies, there is to date no consensus on the precise contributions to the light emission from the different possible forms of the chemiexcited oxyluciferin (OxyLH2) cofactor. Here, this problem is investigated by the calculation of excited-state equilibrium constants in aqueous solution for keto-enol and acid-base reactions connecting six neutral, monoanionic and dianionic forms of OxyLH2. Particularly, rather than relying on the standard Förster equation and the associated assumption that entropic effects are negligible, these equilibrium constants are for the first time calculated in terms of excited-state free energies of a Born-Haber cycle. Performing quantum chemical calculations with density functional theory methods and using a hybrid cluster-continuum approach to describe solvent effects, a suitable protocol for the modeling is first defined from benchmark calculations on phenol. 
Applying this protocol to the various OxyLH2 species and verifying that available experimental data (absorption shifts and ground-state equilibrium constants) are accurately reproduced, it is then found that the phenolate-keto-OxyLH(-) monoanion is intrinsically the preferred form of OxyLH2 in the excited state, which suggests a potential key role for this species in the bioluminescence of fireflies. 3. Equilibrium Constants You Can Smell. ERIC Educational Resources Information Center Anderson, Michael; Buckley, Amy 1996-01-01 Presents a simple experiment involving the sense of smell that students can accomplish during a lecture. Illustrates the important concepts of equilibrium along with the acid/base properties of various ions. (JRH) 4. Theoretical calculations of homoconjugation equilibrium constants in systems modeling acid-base interactions in side chains of biomolecules using the potential of mean force. PubMed Makowska, Joanna; Makowski, Mariusz; Liwo, Adam; Chmurzyński, Lech 2005-02-01 The potentials of mean force (PMFs) were determined for systems forming cationic and anionic homocomplexes composed of acetic acid, phenol, isopropylamine, n-butylamine, imidazole, and 4(5)-methylimidazole, and their conjugated bases or acids, respectively, in three solvents with different polarity and hydrogen-bonding propensity: acetonitrile (AN), dimethyl sulfoxide (DMSO), and water (H(2)O). For each pair and each solvent a series of umbrella-sampling molecular dynamics simulations with the AMBER force field, explicit solvent, and counterions added to maintain a zero net charge of a system were carried out and the PMF was calculated by using the Weighted Histogram Analysis Method (WHAM). Subsequently, homoconjugation-equilibrium constants were calculated by numerical integration of the respective PMF profiles. 
In all cases but imidazole, stable homocomplexes were found to form in solution, which was manifested as the presence of contact minima corresponding to hydrogen-bonded species in the PMF curves. The calculated homoconjugation constants were found to be greater for complexes with the OHO bridge (acetic acid and phenol) than with the NHN bridge, and they were found to decrease with increasing polarity and hydrogen-bonding propensity of the solvent (i.e., in the series AN > DMSO > H₂O), both facts being in agreement with the available experimental data. It was also found that interactions with counterions are manifested as a broadening of the contact minimum or the appearance of additional minima in the PMF profiles of the acetic acid/acetate and phenol/phenolate systems in acetonitrile, and the 4(5)-methylimidazole/4(5)-methylimidazole cation conjugate acid-base system in dimethyl sulfoxide. 5. Philicities, Fugalities, and Equilibrium Constants. PubMed Mayr, Herbert; Ofial, Armin R 2016-05-17 The mechanistic model of Organic Chemistry is based on relationships between rate and equilibrium constants. Thus, strong bases are generally considered to be good nucleophiles and poor nucleofuges. Exceptions to this rule have long been known, and the ability of iodide ions to catalyze nucleophilic substitutions, because they are good nucleophiles as well as good nucleofuges, is just a prominent example. In a reaction series, the Leffler-Hammond parameter α = δΔG‡/δΔG° describes the fraction of the change in the Gibbs energy of reaction which is reflected in the change of the Gibbs energy of activation. It has long been considered a measure for the position of the transition state; thus, an α value close to 0 was associated with an early transition state, while an α value close to 1 was considered indicative of a late transition state.
Bordwell's observation in 1969 that substituent variation in phenylnitromethanes has a larger effect on the rates of deprotonation than on the corresponding equilibrium constants (the nitroalkane anomaly) triggered the breakdown of this interpretation. In the past, most systematic investigations of the relationships between rates and equilibria of organic reactions have dealt with proton transfer reactions, because only for a few other reaction series have complementary kinetic and thermodynamic data been available. In this Account we report on a more general investigation of the relationships between Lewis basicities, nucleophilicities, and nucleofugalities as well as between Lewis acidities, electrophilicities, and electrofugalities. Definitions of these terms are summarized, and it is suggested to replace the hybrid terms "kinetic basicity" and "kinetic acidity" by "protophilicity" and "protofugality", respectively; in this way, the terms "acidity" and "basicity" are exclusively assigned to thermodynamic properties, while "philicity" and "fugality" refer to kinetics. 6. Acid-Base Equilibrium in a Lipid/Water Gel Streb, Kristina K.; Ilich, Predrag-Peter 2003-12-01 A new and original experiment in which partition of bromophenol blue dye between water and a lipid/water gel causes a shift in the acid-base equilibrium of the dye is described. The dye-absorbing material is a monoglyceride food additive of plant origin that mixes freely with water to form a stable cubic phase gel; the nascent gel absorbs the dye from aqueous solution and converts it to the acidic form. There are three concurrent processes taking place in the experiment: (a) formation of the lipid/water gel, (b) absorption of the dye by the gel, and (c) protonation of the dye in the lipid/water gel environment.
As the aqueous solution of the dye is a deep purple-blue color at neutral pH and yellow at acidic pH, the result of these processes is visually striking: the strongly green-yellow particles of lipid/water gel are suspended in purple-blue aqueous solution. The local acidity of the lipid/water gel is estimated by UV-vis spectrophotometry. This experiment is an example of host-guest (lipid/water gel and dye) interaction and is suitable for project-type biophysics, physical chemistry, or biochemistry labs. The experiment requires three 3-hour lab sessions, two of which must not be separated by more than two days. 7. Born energy, acid-base equilibrium, structure and interactions of end-grafted weak polyelectrolyte layers SciTech Connect Nap, R. J.; Tagliazucchi, M.; Szleifer, I. 2014-01-14 This work addresses the effect of the Born self-energy contribution in the modeling of the structural and thermodynamical properties of weak polyelectrolytes confined to planar and curved surfaces. The theoretical framework is based on a theory that explicitly includes the conformations, size, shape, and charge distribution of all molecular species and considers the acid-base equilibrium of the weak polyelectrolyte. Namely, the degree of charge in the polymers is not imposed but is a locally varying property that results from the minimization of the total free energy. Inclusion of the dielectric properties of the polyelectrolyte is important as the environment of a polymer layer is very different from that in the adjacent aqueous solution. The main effect of the Born energy contribution on the molecular organization of an end-grafted weak polyacid layer is uncharging the weak acid (or basic) groups and consequently decreasing the concentration of mobile ions within the layer.
The magnitude of the effect increases with polymer density and, in the case of the average degree of charge, it is qualitatively equivalent to a small shift in the equilibrium constant for the acid-base equilibrium of the weak polyelectrolyte monomers. The degree of charge is established by the competition between electrostatic interactions, the polymer conformational entropy, the excluded volume interactions, the translational entropy of the counterions, and the acid-base chemical equilibrium. Consideration of the Born energy introduces an additional energetic penalty to the presence of charged groups in the polyelectrolyte layer, whose effect is mitigated by down-regulating the amount of charge, i.e., by shifting the local acid-base equilibrium towards its uncharged state. Shifting of the local acid-base equilibrium and its effect on the properties of the polyelectrolyte layer, without considering the Born energy, have been theoretically predicted previously. Account of the Born energy leads 8. Born energy, acid-base equilibrium, structure and interactions of end-grafted weak polyelectrolyte layers. PubMed Nap, R J; Tagliazucchi, M; Szleifer, I 2014-01-14 9. An Intuitive and General Approach to Acid-Base Equilibrium Calculations. ERIC Educational Resources Information Center Felty, Wayne L. 1978-01-01 Describes the intuitive approach used in general chemistry and points out its pedagogical advantages. Explains how to extend it to acid-base equilibrium calculations without the need to introduce additional sophisticated concepts. (GA) 10. Using the Logarithmic Concentration Diagram, Log "C", to Teach Acid-Base Equilibrium ERIC Educational Resources Information Center Kovac, Jeffrey 2012-01-01 Acid-base equilibrium is one of the most important and most challenging topics in a typical general chemistry course.
This article introduces an alternative to the algebraic approach generally used in textbooks, the graphical log "C" method. Log "C" diagrams provide conceptual insight into the behavior of aqueous acid-base systems and allow… 11. [Acid-base equilibrium in sportsmen during physical exercise]. PubMed Brinzak, V P; Kalinskiĭ, M I; Val'tin, A I; Povzhitkova, M S 1983-01-01 Acid-base balance in the venous blood of basketball players was studied under sport-specific loadings of various intensity by means of the micro-Astrup device. It was established that under acyclic loadings (shooting the ball at the basket) a state of metabolic acidosis develops in the athletes, and the more intensive the work, the deeper the acidosis. The efficiency of the players' actions was inversely related to the degree of metabolic disturbance, i.e., efficiency was lowest under the most profound acidosis. 12. [Dichotomizing method applied to calculating equilibrium constant of dimerization system]. PubMed Cheng, Guo-zhong; Ye, Zhi-xiang 2002-06-01 The arbitrary trivariate algebraic equations are formed based on the combination principle. The univariate algebraic equation for the equilibrium constant kappa of a dimerization system is obtained through a series of algebraic transformations, and whether the equation is solvable depends on the properties of monotonic functions. If the equation is solvable, the equilibrium constant of the dimerization system is obtained by dichotomy (bisection), and its final value is determined according to the principle of error of fitting. The equilibrium constants of trisulfophthalocyanine and biosulfophthalocyanine obtained with this method are 47,973.4 and 30,271.8 respectively. The results are much better than those reported previously. 13. Calculation of individual isotope equilibrium constants for geochemical reactions USGS Publications Warehouse Thorstenson, D.C.; Parkhurst, D.L.
2004-01-01 Theory is derived from the work of Urey (Urey H. C. [1947] The thermodynamic properties of isotopic substances. J. Chem. Soc. 562-581) to calculate equilibrium constants commonly used in geochemical equilibrium and reaction-transport models for reactions of individual isotopic species. Urey showed that equilibrium constants of isotope exchange reactions for molecules that contain two or more atoms of the same element in equivalent positions are related to isotope fractionation factors by α = (K_ex)^(1/n), where n is the number of atoms exchanged. This relation is extended to include species containing multiple isotopes, for example ¹³C¹⁶O¹⁸O and ¹H²H¹⁸O. The equilibrium constants of the isotope exchange reactions can be expressed as ratios of individual isotope equilibrium constants for geochemical reactions. Knowledge of the equilibrium constant for the dominant isotopic species can then be used to calculate the individual isotope equilibrium constants. Individual isotope equilibrium constants are calculated for the reaction CO₂(g) = CO₂(aq) for all species that can be formed from ¹²C, ¹³C, ¹⁶O, and ¹⁸O; for the reaction between ¹²C¹⁸O₂(aq) and ¹H₂¹⁸O(l); and among the various ¹H, ²H, ¹⁶O, and ¹⁸O species of H₂O. This is a subset of a larger number of equilibrium constants calculated elsewhere (Thorstenson D. C. and Parkhurst D. L. [2002] Calculation of individual isotope equilibrium constants for implementation in geochemical models. Water-Resources Investigations Report 02-4172. U.S. Geological Survey). Activity coefficients, activity-concentration conventions for the isotopic variants of H₂O in the solvent ¹H₂¹⁶O(l), and salt effects on isotope fractionation have been included in the derivations. The effects of nonideality are small because of the chemical similarity of different isotopic species of the same molecule or ion.
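The Urey relation quoted in this record, α = (K_ex)^(1/n), amounts to a one-line calculation each way. The sketch below is ours, not from the USGS report, and the numeric values are purely hypothetical; it only illustrates the relation and its inverse.

```python
# Urey's relation between the equilibrium constant K_ex of an isotope-exchange
# reaction and the isotope fractionation factor alpha, for a molecule with n
# atoms of the element in equivalent positions: alpha = K_ex ** (1/n).
# Illustrative sketch only; the values below are hypothetical.

def fractionation_factor(k_ex: float, n: int) -> float:
    """alpha = K_ex^(1/n) for n atoms exchanged in equivalent positions."""
    return k_ex ** (1.0 / n)

def exchange_constant(alpha: float, n: int) -> float:
    """Inverse relation: K_ex = alpha^n."""
    return alpha ** n

# CO2 has n = 2 equivalent oxygen positions, so for a hypothetical K_ex = 4:
alpha = fractionation_factor(4.0, n=2)
print(alpha)                        # 2.0
print(exchange_constant(alpha, 2))  # recovers 4.0
```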
The temperature dependence of the individual isotope equilibrium constants can be calculated from the temperature dependence of the fractionation factors. 14. Chemical Equilibrium, Unit 4: Equilibria in Acid-Base Systems. A Computer-Enriched Module for Introductory Chemistry. Student's Guide and Teacher's Guide. ERIC Educational Resources Information Center Settle, Frank A., Jr. Presented are the teacher's guide and student materials for one of a series of self-instructional, computer-based learning modules for an introductory, undergraduate chemistry course. The student manual for this acid-base equilibria unit includes objectives, prerequisites, pretest, a discussion of equilibrium constants, and 20 problem sets.… 15. Effects of intravenous solutions on acid-base equilibrium: from crystalloids to colloids and blood components. PubMed Langer, Thomas; Ferrari, Michele; Zazzeron, Luca; Gattinoni, Luciano; Caironi, Pietro 2014-01-01 Intravenous fluid administration is a medical intervention performed worldwide on a daily basis. Nevertheless, only a few physicians are aware of the characteristics of intravenous fluids and their possible effects on plasma acid-base equilibrium. According to Stewart's theory, pH is independently regulated by three variables: the partial pressure of carbon dioxide, the strong ion difference (SID), and the total amount of weak acids (A_TOT). When fluids are infused, plasma SID and A_TOT tend toward the SID and A_TOT of the administered fluid. Depending on their composition, fluids can therefore lower, raise, or leave pH unchanged. As a general rule, crystalloids having a SID greater than the plasma bicarbonate concentration (HCO₃⁻) cause an increase in plasma pH (alkalosis), those having a SID lower than HCO₃⁻ cause a decrease in plasma pH (acidosis), while crystalloids with a SID equal to HCO₃⁻ leave pH unchanged, regardless of the extent of the dilution.
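The crystalloid rule just stated can be sketched as a small decision function. This is a minimal illustration of the rule as given in the abstract, not code from the paper; the function name and the example values are ours.

```python
# Minimal sketch of the crystalloid rule from the Stewart approach described
# above: a crystalloid whose strong ion difference (SID, mEq/L) exceeds the
# plasma bicarbonate concentration tends to raise pH, a lower SID tends to
# lower it, and an equal SID leaves pH unchanged.

def crystalloid_ph_effect(fluid_sid: float, plasma_hco3: float) -> str:
    """Qualitative effect on plasma pH of infusing a crystalloid."""
    if fluid_sid > plasma_hco3:
        return "alkalosis (pH rises)"
    if fluid_sid < plasma_hco3:
        return "acidosis (pH falls)"
    return "pH unchanged"

# Examples with a typical plasma HCO3- of 24 mEq/L:
print(crystalloid_ph_effect(0.0, 24.0))   # e.g. 0.9% saline (SID = 0): acidosis
print(crystalloid_ph_effect(28.0, 24.0))  # high-SID balanced fluid: alkalosis
print(crystalloid_ph_effect(24.0, 24.0))  # SID matching HCO3-: pH unchanged
```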
Colloids and blood components are composed of a crystalloid solution as solvent, and the abovementioned rules partially hold true also for these fluids. The scenario is however complicated by the possible presence of weak anions (albumin, phosphates and gelatins) and their effect on plasma pH. The present manuscript summarises the characteristics of crystalloids, colloids, buffer solutions and blood components and reviews their effect on acid-base equilibrium. Understanding the composition of intravenous fluids, along with the application of simple physicochemical rules best described by Stewart's approach, are pivotal steps to fully elucidate and predict alterations of plasma acid-base equilibrium induced by fluid therapy. 16. Constant Entropy Properties for an Approximate Model of Equilibrium Air NASA Technical Reports Server (NTRS) Hansen, C. Frederick; Hodge, Marion E. 1961-01-01 Approximate analytic solutions for properties of equilibrium air up to 15,000 K have been programmed for machine computation. Temperature, compressibility, enthalpy, specific heats, and speed of sound are tabulated as constant entropy functions of temperature. The reciprocal of acoustic impedance and its integral with respect to pressure are also given for the purpose of evaluating the Riemann constants for one-dimensional, isentropic flow. 17. Microcomputer Calculation of Equilibrium Constants from Molecular Parameters of Gases. ERIC Educational Resources Information Center Venugopalan, Mundiyath 1989-01-01 Lists a BASIC program which computes the equilibrium constant as a function of temperature. Suggests use by undergraduates taking a one-year calculus-based physical chemistry course. Notes the program provides for up to four species, typically two reactants and two products. (MVL) 18. Acid-base equilibrium dynamics in methanol and dimethyl sulfoxide probed by two-dimensional infrared spectroscopy. 
PubMed Lee, Chiho; Son, Hyewon; Park, Sungnam 2015-07-21 Two-dimensional infrared (2DIR) spectroscopy, which has been proven to be an excellent experimental method for studying thermally-driven chemical processes, was successfully used to investigate the acid dissociation equilibrium of HN3 in methanol (CH3OH) and dimethyl sulfoxide (DMSO) for the first time. Our 2DIR experimental results indicate that the acid-base equilibrium occurs on picosecond timescales in CH3OH but that it occurs on much longer timescales in DMSO. Our results imply that the different timescales of the acid-base equilibrium originate from different proton transfer mechanisms between the acidic (HN3) and basic (N3(-)) species in CH3OH and DMSO. In CH3OH, the acid-base equilibrium is assisted by the surrounding CH3OH molecules which can directly donate H(+) to N3(-) and accept H(+) from HN3 and the proton migrates through the hydrogen-bonded chain of CH3OH. On the other hand, the acid-base equilibrium in DMSO occurs through the mutual diffusion of HN3 and N3(-) or direct proton transfer. Our 2DIR experimental results corroborate different proton transfer mechanisms in the acid-base equilibrium in protic (CH3OH) and aprotic (DMSO) solvents. 19. Acid-base titration curves for acids with very small ratios of successive dissociation constants. PubMed Campbell, B H; Meites, L 1974-02-01 The shapes of the potentiometric acid-base titration curves obtained in the neutralizations of polyfunctional acids or bases for which each successive dissociation constant is smaller than the following one are examined. In the region 0 < f < 1 (where f is the fraction of the equivalent volume of reagent that has been added) the slope of the titration curve decreases as the number j of acidic or basic sites increases. The difference between the pH-values at f = 0.75 and f = 0.25 has (1/j) log 9 as the lower limit of its maximum value. 20.
Species-Specific Thiol-Disulfide Equilibrium Constant: A Tool To Characterize Redox Transitions of Biological Importance. PubMed Mirzahosseini, Arash; Somlyay, Máté; Noszál, Béla 2015-08-13 Microscopic redox equilibrium constants, a new species-specific type of physicochemical parameters, were introduced and determined to quantify thiol-disulfide equilibria of biological significance. The thiol-disulfide redox equilibria of glutathione with cysteamine, cysteine, and homocysteine were approached from both sides, and the equilibrium mixtures were analyzed by quantitative NMR methods to characterize the highly composite, co-dependent acid-base and redox equilibria. The directly obtained, pH-dependent, conditional constants were then decomposed by a new evaluation method, resulting in pH-independent, microscopic redox equilibrium constants for the first time. The 80 different, microscopic redox equilibrium constant values show close correlation with the respective thiolate basicities and provide sound means for the development of potent agents against oxidative stress. 1. Chromophore Structure of Photochromic Fluorescent Protein Dronpa: Acid-Base Equilibrium of Two Cis Configurations. PubMed Higashino, Asuka; Mizuno, Misao; Mizutani, Yasuhisa 2016-04-01 Dronpa is a novel photochromic fluorescent protein that exhibits fast response to light. The present article is the first report of the resonance and preresonance Raman spectra of Dronpa. We used the intensity and frequency of Raman bands to determine the structure of the Dronpa chromophore in two thermally stable photochromic states. The acid-base equilibrium in one photochromic state was observed by spectroscopic pH titration. The Raman spectra revealed that the chromophore in this state shows a protonation/deprotonation transition with a pKa of 5.2 ± 0.3 and maintains the cis configuration. The observed resonance Raman bands showed that the other photochromic state of the chromophore is in a trans configuration. 
The results demonstrate that Raman bands selectively enhanced for the chromophore yield valuable information on the molecular structure of the chromophore in photochromic fluorescent proteins after careful elimination of the fluorescence background. PMID:26991398 3. Complexation Constants of Ubiquinone,0 and Ubiquinone,10 with Nucleosides and Nucleic Acid Bases Rahawi, Kassim Y.; Shanshal, Muthana 2008-02-01 UV spectrophotometric measurements were done on mixtures of the ubiquinones Ub,0 and Ub,10 in their monomeric form (c < 10-5 mol/l) with the nucleosides; adenosine, cytidine, 2'-desoxyadenosine, 2'-desoxyguanosine, guanosine and thymidine, as well as the nucleic acid bases; adenine, cytosine, hypoxanthine, thymine and uracil.
Applying the Liptay method, it was found that both ubiquinones form 1 : 1 interaction complexes with the nucleic acid components. The complexation constants were found to be of the order of 10(5) mol(-1). The calculated ΔG values were negative (≈ -7.0 kcal/mol), suggesting a favoured hydrogen bridge formation. This is confirmed by the positive change of the entropy ΔS. The complexation enthalpies ΔH for all complexes are negative, suggesting exothermal interactions. 4. Effect of water content on the acid-base equilibrium of cyanidin-3-glucoside. PubMed Coutinho, Isabel B; Freitas, Adilson; Maçanita, António L; Lima, J C 2015-04-01 Laser Flash Photolysis was employed to measure the deprotonation and reprotonation rate constants of cyanidin 3-monoglucoside (kuromanin) in water/methanol mixtures. It was found that the deprotonation rate constant kd decreases with decreasing water content, reflecting the lack of free water molecules around kuromanin, which may accommodate and stabilize the outgoing protons. On the other hand, the reprotonation rate constant, kp, increases with the decrease in water concentration from a value of kp = 2 × 10(10) l mol(-1) s(-1) in water up to kp = 6 × 10(10) l mol(-1) s(-1) at 5.6M water concentration in the mixture. The higher value of kp at lower water concentrations reflects the fact that the proton is not freely escaping the solvation shell of the molecule. The deprotonation rate constant decreases with decreasing water content, reflecting the lack of free water molecules around kuromanin that can accommodate the outgoing protons. Overall, the acidity constant of the flavylium cation decreases with the decrease in water concentration from pKa values of 3.8 in water to approximately 4.8 in water-depleted media, thus shifting the equilibrium towards the red-coloured form, AH(+), at low water contents. The presence, or lack, of water, will affect the colour shade (red to blue) of kuromanin.
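The pKa shift reported here (3.8 in water versus approximately 4.8 in water-depleted media) translates directly into a larger fraction of the red AH(+) form at a given pH, which a Henderson-Hasselbalch sketch makes concrete. The pH value of 4.0 is chosen for illustration only:

```python
def fraction_flavylium(ph, pka):
    """Fraction of the protonated (red) AH+ form for the equilibrium
    AH+ <-> A + H+, from the Henderson-Hasselbalch relation."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# At an illustrative pH of 4.0, the water-depleted pKa of ~4.8 keeps most
# of the pigment in the red AH+ form, versus the aqueous pKa of 3.8.
print(round(fraction_flavylium(4.0, 3.8), 2))  # ~0.39
print(round(fraction_flavylium(4.0, 4.8), 2))  # ~0.86
```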
This is relevant for its role as an intrinsic food component and as a food pigment additive (E163). PMID:25442581 6. Computational calculation of equilibrium constants: addition to carbonyl compounds. PubMed Gómez-Bombarelli, Rafael; González-Pérez, Marina; Pérez-Prior, María Teresa; Calle, Emilio; Casado, Julio 2009-10-22 Hydration reactions are relevant for understanding many organic mechanisms.
Since the experimental determination of hydration and hemiacetalization equilibrium constants is fairly complex, computational calculations now offer a useful alternative to experimental measurements. In this work, carbonyl hydration and hemiacetalization constants were calculated from the free energy differences between compounds in solution, using absolute and relative approaches. The following conclusions can be drawn: (i) The use of a relative approach in the calculation of hydration and hemiacetalization constants allows compensation of systematic errors in the solvation energies. (ii) On average, the methodology proposed here can predict hydration constants within +/- 0.5 log K(hyd) units for aldehydes. (iii) Hydration constants can be calculated for ketones and carboxylic acid derivatives within less than +/- 1.0 log K(hyd), on average, at the CBS-Q level of theory. (iv) The proposed methodology can predict hemiacetal formation constants accurately at the MP2 6-31++G(d,p) level using a common reference. If group references are used, the results obtained using the much cheaper DFT-B3LYP 6-31++G(d,p) level are almost as accurate. (v) In general, the best results are obtained if a common reference for all compounds is used. The use of group references improves the results at the lower levels of theory, but at higher levels, this becomes unnecessary. PMID:19761202 8. Determination of acid-base dissociation constants of azahelicenes by capillary zone electrophoresis. PubMed Ehala, Sille; Mísek, Jirí; Stará, Irena G; Starý, Ivo; Kasicka, Václav 2008-08-01 CZE was employed to determine acid-base dissociation constants (pK(a)) of ionogenic groups of azahelicenes in methanol (MeOH). Azahelicenes are unique 3-D aromatic systems, which consist of ortho-fused benzene/pyridine units and exhibit helical chirality. The pK(a) values of pyridinium groups of the studied azahelicenes were determined from the dependence of their effective electrophoretic mobility on pH by a nonlinear regression analysis. The effective mobilities of azahelicenes were determined by CZE at pH range between 2.1 and 10.5.
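The mobility-vs-pH regression used in these CZE studies can be sketched for a monoprotic base: only the protonated cation migrates, so the effective mobility follows mu_eff = mu_cation / (1 + 10^(pH - pKa)), and pKa is the value minimising the residual sum of squares. The sketch below uses a crude grid search instead of a proper nonlinear regression, and the data are synthetic (generated with the pKa of 4.94 reported for 1-aza[6]helicene; the cation mobility is an arbitrary assumed value):

```python
def mu_eff(ph, mu_cation, pka):
    """Effective mobility of a monoprotic weak base: only the protonated
    cation BH+ migrates, with fraction 1/(1 + 10**(pH - pKa))."""
    return mu_cation / (1.0 + 10.0 ** (ph - pka))

def fit_pka(ph_values, mobilities, mu_cation):
    """Crude nonlinear fit by grid search over candidate pKa values."""
    best_pka, best_sse = None, float("inf")
    for i in range(0, 1401):              # pKa grid 0.00 .. 14.00
        pka = i / 100.0
        sse = sum((m - mu_eff(ph, mu_cation, pka)) ** 2
                  for ph, m in zip(ph_values, mobilities))
        if sse < best_sse:
            best_pka, best_sse = pka, sse
    return best_pka

# Synthetic, noise-free data generated with pKa = 4.94.
mu0 = 20.0
phs = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
data = [mu_eff(p, mu0, 4.94) for p in phs]
print(fit_pka(phs, data, mu0))  # recovers 4.94
```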
Thermodynamic pK(a) values of monobasic 1-aza[6]helicene and 2-aza[6]helicene in MeOH were determined to be 4.94 +/- 0.05 and 5.68 +/- 0.05, respectively, and pK(a) values of dibasic 1,14-diaza[5]helicene were found to be equal to 7.56 +/- 0.38 and 8.85 +/- 0.26. From these values, the aqueous pK(a) of these compounds was estimated. 9. Why and How To Teach Acid-Base Reactions without Equilibrium. ERIC Educational Resources Information Center Carlton, Terry S. 1997-01-01 Recommends an approach to the treatment of acid-base equilibria that involves treating each reaction as either going to completion or not occurring at all. Compares the method with the traditional approach step by step. (DDR) 10. Equilibrium constants and protonation site for N-methylbenzenesulfonamides PubMed Central Rosa da Costa, Ana M; García-Río, Luis; Pessêgo, Márcia 2011-01-01 Summary The protonation equilibria of four substituted N-methylbenzenesulfonamides, X-MBS: X = 4-MeO (3a), 4-Me (3b), 4-Cl (3c) and 4-NO2 (3d), in aqueous sulfuric acid were studied at 25 °C by UV–vis spectroscopy. As expected, the values for the acidity constants are highly dependent on the electron-donor character of the substituent (the pK BH+ values are −3.5 ± 0.2, −4.2 ± 0.2, −5.2 ± 0.3 and −6.0 ± 0.3 for 3a, 3b, 3c and 3d, respectively). The solvation parameter m* is always higher than 0.5 and points to a decrease in the importance of solvation on the cation stabilization as the electron-donor character of the substituent increases. Hammett plots of the equilibrium constants showed a better correlation with the σ+ substituent parameter than with σ, which indicates that the initial protonation site is the oxygen atom of the sulfonyl group. PMID:22238552 11. Using nonequilibrium capillary electrophoresis of equilibrium mixtures (NECEEM) for simultaneous determination of concentration and equilibrium constant. 
PubMed Kanoatov, Mirzo; Galievsky, Victor A; Krylova, Svetlana M; Cherney, Leonid T; Jankowski, Hanna K; Krylov, Sergey N 2015-03-01 Nonequilibrium capillary electrophoresis of equilibrium mixtures (NECEEM) is a versatile tool for studying affinity binding. Here we describe a NECEEM-based approach for simultaneous determination of both the equilibrium constant, K(d), and the unknown concentration of a binder that we call a target, T. In essence, NECEEM is used to measure the unbound equilibrium fraction, R, for the binder with a known concentration that we call a ligand, L. The first set of experiments is performed at varying concentrations of T, prepared by serial dilution of the stock solution, but at a constant concentration of L, which is as low as its reliable quantitation allows. The value of R is plotted as a function of the dilution coefficient, and dilution corresponding to R = 0.5 is determined. This dilution of T is used in the second set of experiments in which the concentration of T is fixed but the concentration of L is varied. The experimental dependence of R on the concentration of L is fitted with a function describing their theoretical dependence. Both K(d) and the concentration of T are used as fitting parameters, and their sought values are determined as the ones that generate the best fit. We have fully validated this approach in silico by using computer-simulated NECEEM electropherograms and then applied it to experimental determination of the unknown concentration of MutS protein and K(d) of its interactions with a DNA aptamer. The general approach described here is applicable not only to NECEEM but also to any other method that can determine a fraction of unbound molecules at equilibrium. 12. The Perils of Carbonic Acid and Equilibrium Constants. ERIC Educational Resources Information Center Jencks, William P.; Altura, Rachel A. 
1988-01-01 Discusses the effects caused by small amounts of carbon dioxide usually present in water and acid-base equilibria of dilute solutions. Notes that dilute solutions of most weak acids and bases undergo significant dissociation or protonation. (MVL) 13. Calculation of individual isotope equilibrium constants for implementation in geochemical models USGS Publications Warehouse Thorstenson, Donald C.; Parkhurst, David L. 2002-01-01 Theory is derived from the work of Urey to calculate equilibrium constants commonly used in geochemical equilibrium and reaction-transport models for reactions of individual isotopic species. Urey showed that equilibrium constants of isotope exchange reactions for molecules that contain two or more atoms of the same element in equivalent positions are related to isotope fractionation factors α by K = α^n, where n is the number of atoms exchanged. This relation is extended to include species containing multiple isotopes and to include the effects of nonideality. The equilibrium constants of the isotope exchange reactions provide a basis for calculating the individual isotope equilibrium constants for the geochemical modeling reactions. The temperature dependence of the individual isotope equilibrium constants can be calculated from the temperature dependence of the fractionation factors. Equilibrium constants are calculated for the individual isotopic species of the carbon dioxide-water system, in molecules, ion pairs, and phases denoted by the subscripts g, aq, l, and s for gas, aqueous, liquid, and solid, respectively. These equilibrium constants are used in the geochemical model PHREEQC to produce an equilibrium and reaction-transport model that includes these isotopic species. Methods are presented for calculation of the individual isotope equilibrium constants for the asymmetric bicarbonate ion. An example calculates the equilibrium of multiple isotopes among multiple species and phases. 14.
Effect of Acid-Base Equilibrium on Absorption Spectra of Humic acid in the Presence of Copper Ions Lavrik, N. L.; Mulloev, N. U. 2014-03-01 The reaction between humic acid (HA, sample IHSS) and a metal ion (Cu2+) that was manifested as absorption bands in the range 210-350 nm was recorded using absorption spectroscopy. The reaction was found to be more effective as the pH increased. These data were interpreted in the framework of generally accepted concepts about the influence of acid-base equilibrium on the dissociation of salts, according to which increasing the solution pH increases the concentration of HA anions. It was suggested that [HA-Cu2+] complexes formed. 15. Measurement of both the equilibrium constant and rate constant for electronic energy transfer by control of the limiting kinetic regimes. PubMed Vagnini, Michael T; Rutledge, W Caleb; Wagenknecht, Paul S 2010-02-01 Electronic energy transfer can fall into two limiting cases. When the rate of the energy transfer back reaction is much faster than relaxation of the acceptor excited state, equilibrium between the donor and acceptor excited states is achieved and only the equilibrium constant for the energy transfer can be measured. When the rate of the back reaction is much slower than relaxation of the acceptor, the energy transfer is irreversible and only the forward rate constant can be measured. Herein, we demonstrate that with trans-[Cr(d(4)-cyclam)(CN)(2)](+) as the donor and either trans-[Cr([15]ane-ane-N(4))(CN)(2)](+) or trans-[Cr(cyclam)(CN)(2)](+) as the acceptor, both limits can be obtained by control of the donor concentration. The equilibrium constant and rate constant for the case in which trans-[Cr([15]ane-ane-N(4))(CN)(2)](+) is the acceptor are 0.66 and 1.7 x 10(7) M(-1) s(-1), respectively. The equilibrium constant is in good agreement with the value of 0.60 determined using the excited state energy gap between the donor and acceptor species. 
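The agreement quoted above between the measured equilibrium constant (0.66) and the value from the excited-state energy gap (0.60) rests on a Boltzmann relation, K = exp(-ΔE/kT). A minimal sketch of that relation, ignoring degeneracy factors and assuming a temperature of 298 K (not stated in the abstract):

```python
import math

K_B_CM = 0.695  # Boltzmann constant in cm^-1 per kelvin

def keq_from_gap(delta_e_cm, temp_k=298.0):
    """Equilibrium constant for donor* <-> acceptor* energy transfer
    from the donor-acceptor excited-state energy gap (Boltzmann factor)."""
    return math.exp(-delta_e_cm / (K_B_CM * temp_k))

def gap_from_keq(keq, temp_k=298.0):
    """Invert: energy gap (cm^-1) implied by an observed K."""
    return -K_B_CM * temp_k * math.log(keq)

# The gap-based value K = 0.60 corresponds to a donor-acceptor gap of
# roughly 106 cm^-1 at the assumed 298 K.
print(round(gap_from_keq(0.60)))       # ~106
print(round(keq_from_gap(106.0), 2))   # ~0.6
```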
For the thermoneutral case in which trans-[Cr(cyclam)(CN)(2)](+) is the acceptor, an experimental equilibrium constant of 0.99 was reported previously, and the rate constant has now been measured as 4.0 x 10(7) M(-1) s(-1). 16. A Unified Kinetics and Equilibrium Experiment: Rate Law, Activation Energy, and Equilibrium Constant for the Dissociation of Ferroin ERIC Educational Resources Information Center Sattar, Simeen 2011-01-01 Tris(1,10-phenanthroline)iron(II) is the basis of a suite of four experiments spanning 5 weeks. Students determine the rate law, activation energy, and equilibrium constant for the dissociation of the complex ion in acid solution and base dissociation constant for phenanthroline. The focus on one chemical system simplifies a daunting set of… 17. The Rigorous Evaluation of Spectrophotometric Data to Obtain an Equilibrium Constant. ERIC Educational Resources Information Center Long, John R.; Drago, Russell S. 1982-01-01 Most students do not know how to determine the equilibrium constant and estimate the error in it from spectrophotometric data that contain experimental errors. This "dry-lab" experiment describes a method that may be used to determine the "best-fit" value of the 1:1 equilibrium constant to spectrophotometric data. (Author/JN) 18. Constants and thermodynamics of the acid-base equilibria of triglycine in water-ethanol solutions containing sodium perchlorate at 298 K Pham Tkhi, L.; Usacheva, T. R.; Tukumova, N. V.; Koryshev, N. E.; Khrenova, T. M.; Sharnin, V. A. 2016-02-01 The acid-base equilibrium constants for glycyl-glycyl-glycine (triglycine) in water-ethanol solvents containing 0.0, 0.1, 0.3, and 0.5 mole fractions of ethanol are determined by potentiometric titration at 298.15 K and an ionic strength of 0.1, maintained with sodium perchlorate. 
It is established that an increase in the ethanol content in the solvent reduces the dissociation constant of the carboxyl group of triglycine (increases p K 1) and increases the dissociation constant of the amino group of triglycine (decreases p K 2). It is noted that the weakening of the acidic properties of a triglycinium ion upon an increase of the ethanol content in the solvent is due to the attenuation of the solvation shell of the zwitterionic form of triglycine, and to the increased solvation of triglycinium ions. It is concluded that the acid strength of triglycine increases along with a rise in the EtOH content in the solvent, due to the desolvation of the tripeptide zwitterion and the enhanced solvation of protons. 19. Acid-base equilibrium during capnoretroperitoneoscopic nephrectomy in patients with end-stage renal failure: a preliminary report. PubMed Demian, A D; Esmail, O M; Atallah, M M 2000-04-01 We have studied the acid-base equilibrium in 12 patients with end-stage renal failure (ESRF) during capnoretroperitoneoscopic nephrectomy. Bupivacaine (12 mL, 0.375%) and morphine (2mg) were given in the lumbar epidural space, and fentanyl (0.5 microg kg(-1)) and midazolam (50 microg kg(-1)) were given intravenously. Anaesthesia was induced by thiopental, maintained with halothane carried by oxygen enriched air (inspired oxygen fraction = 0.35), and ventilation was achieved with a tidal volume of 10 mL kg(-1) at a rate of 12 min(-1). This procedure resulted in a mild degree of respiratory acidosis that was cleared within 60 min. We conclude that capnoretroperitoneoscopic nephrectomy can be performed in patients with end-stage renal failure with minimal transient respiratory acidosis that can be avoided by increased ventilation. PMID:10866009 1. Galvanic Cells and the Determination of Equilibrium Constants ERIC Educational Resources Information Center Brosmer, Jonathan L.; Peters, Dennis G. 2012-01-01 Readily assembled mini-galvanic cells can be employed to compare their observed voltages with those predicted from the Nernst equation and to determine solubility products for silver halides and overall formation constants for metal-ammonia complexes. Results obtained by students in both an honors-level first-year course in general chemistry and… 2. Weak Acid Ionization Constants and the Determination of Weak Acid-Weak Base Reaction Equilibrium Constants in the General Chemistry Laboratory ERIC Educational Resources Information Center Nyasulu, Frazier; McMills, Lauren; Barlag, Rebecca 2013-01-01 A laboratory to determine the equilibrium constants of weak acid-weak base reactions is described. The equilibrium constants of component reactions when multiplied together equal the numerical value of the equilibrium constant of the summative reaction.
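The multiply-the-component-constants idea in that abstract can be shown with a standard worked example: for HA + B ⇌ A- + BH+, the summative K is the product of the acid ionization constant, the base hydrolysis constant, and the reverse of water autoionization (1/Kw). The specific Ka and Kb values below are illustrative textbook numbers, not taken from the abstract:

```python
def k_weak_acid_weak_base(ka_acid, kb_base, kw=1.0e-14):
    """K for HA + B <-> A- + BH+, obtained by multiplying component
    equilibrium constants: Ka (acid ionization) * Kb (base hydrolysis)
    * 1/Kw (reverse of water autoionization)."""
    return ka_acid * kb_base / kw

# Illustrative values: acetic acid Ka = 1.8e-5, ammonia Kb = 1.8e-5.
K = k_weak_acid_weak_base(1.8e-5, 1.8e-5)
print(f"{K:.1e}")  # ~3.2e+04
```

The large K confirms the usual rule of thumb that a weak acid-weak base reaction still lies well to the right when Ka·Kb greatly exceeds Kw.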
The component reactions are weak acid ionization reactions, weak base hydrolysis… 3. Profiles of equilibrium constants for self-association of aromatic molecules. PubMed Beshnova, Daria A; Lantushenko, Anastasia O; Davies, David B; Evstigneev, Maxim P 2009-04-28 Analysis of the noncovalent, noncooperative self-association of identical aromatic molecules assumes that the equilibrium self-association constants are either independent of the number of molecules (the EK-model) or change progressively with increasing aggregation (the AK-model). The dependence of the self-association constant on the number of molecules in the aggregate (i.e., the profile of the equilibrium constant) was empirically derived in the AK-model but, in order to provide some physical understanding of the profile, it is proposed that the sources for attenuation of the equilibrium constant are the loss of translational and rotational degrees of freedom, the ordering of molecules in the aggregates and the electrostatic contribution (for charged units). Expressions are derived for the profiles of the equilibrium constants for both neutral and charged molecules. Although the EK-model has been widely used in the analysis of experimental data, it is shown in this work that the derived equilibrium constant, K(EK), depends on the concentration range used and hence, on the experimental method employed. The relationship has also been demonstrated between the equilibrium constant K(EK) and the real dimerization constant, K(D), which shows that the value of K(EK) is always lower than K(D). 4. A one-term extrapolation method for estimating equilibrium constants of aqueous reactions at elevated temperatures Gu, Y.; Gammons, C. H.; Bloom, M. S. 1994-09-01 A one-term method for extrapolating equilibrium constants for aqueous reactions is proposed which is based on the observation that the change in free energy of a well-balanced isocoulombic reaction is nearly independent of temperature. 
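The one-term idea just stated has a simple arithmetic consequence: if ΔG of the isocoulombic reaction is temperature independent, then -RT ln K is constant, so log K scales as the inverse ratio of absolute temperatures. A minimal sketch (the reference log K and the temperatures are illustrative, not from the abstract):

```python
def log_k_one_term(log_k_ref, t_ref_k, t_k):
    """One-term extrapolation for an isocoulombic reaction: assuming
    delta-G is temperature independent, -RT ln K is constant, hence
    log K(T) = log K(T_ref) * (T_ref / T)."""
    return log_k_ref * (t_ref_k / t_k)

# Illustrative: a reaction with log K = 4.00 at 298.15 K, extrapolated
# to 573.15 K (300 degrees C).
print(round(log_k_one_term(4.00, 298.15, 573.15), 2))  # ~2.08
```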
The current practice in extrapolating log K values for isocoulombic reactions is to omit the ΔCp term but include a ΔS term (i.e., the two-term extrapolation equation of LINDSAY, 1980). However, we observe that the ΔCp and ΔS terms for many isocoulombic reactions are not only small, but are often opposite in sign, and therefore tend to cancel one another. Thus, inclusion of an entropy term often yields estimates which are less accurate than omission of both terms. The one-term extrapolation technique is tested with literature data for a large number of isocoulombic reactions involving ion-ligand exchange, cation hydrolysis, acid-base neutralization, redox, and selected reactions involving solids. In most cases the extrapolated values are in excellent agreement with the experimental measurements, especially at higher temperatures where they are often more accurate than those obtained using the two-term equation of LINDSAY (1980). The results are also comparable to estimates obtained using the modified HKF model of TANGER and HELGESON (1988) and the density model of ANDERSON et al. (1991). It is also found to produce reasonable estimates for isocoulombic reactions at elevated pressure (up to P = 2 kb) and ionic strength (up to I = 1.0). The principal advantage of the one-term method is that accurate estimates of high temperature equilibrium constants may be obtained using only free energy data for the reaction of interest at one reference temperature. The principal disadvantage is that the accuracies of the estimates are somewhat dependent on the model reaction selected to balance the isocoulombic reaction. Satisfactory results are obtained for reactions that have minimal energetic, electrostatic, structural, and volumetric 5. 
The Equilibrium Constant for Bromothymol Blue: A General Chemistry Laboratory Experiment Using Spectroscopy ERIC Educational Resources Information Center Klotz, Elsbeth; Doyle, Robert; Gross, Erin; Mattson, Bruce 2011-01-01 A simple, inexpensive, and environmentally friendly undergraduate laboratory experiment is described in which students use visible spectroscopy to determine a numerical value for an equilibrium constant, K[subscript c]. The experiment correlates well with the lecture topic of equilibrium even though the subject of the study is an acid-base… 6. STUDIES OF THE ACID-BASE EQUILIBRIUM IN DISEASE FROM THE POINT OF VIEW OF BLOOD GASES. PubMed Means, J H; Bock, A V; Woodwell, M N 1921-01-31 Carbon dioxide diagrams (Haggard and Henderson (9)) have been constructed for the blood of a series of hospital patients as a method of studying disturbances in their acid-base equilibrium. A diabetic with a low level of blood alkali, but with a normal blood reaction, a compensated acidosis in other words, showed a rapid return towards normal with no treatment but fasting and increased water and salt intake. A nephritic with a decompensated acidosis and a very low blood alkali was rapidly brought to a condition of decompensated alkalosis with a high blood alkali by the therapeutic administration of sodium bicarbonate. It is suggested that the therapeutic use of alkali in acidosis is probably only indicated in the decompensated variety, and that there it should be controlled carefully and the production of alkalosis avoided. The diagram obtained in three pneumonia patients suggested that they were suffering from a condition of carbonic acidosis, due perhaps to insufficient pulmonary ventilation. In two out of three cases of anemia the dissociation curve was found to lie at a higher level than normal. No explanation for this finding was offered. PMID:19868489 7. 
Determination of acid-base dissociation constants of very weak zwitterionic heterocyclic bases by capillary zone electrophoresis. PubMed Ehala, Sille; Grishina, Anastasiya A; Sheshenev, Andrey E; Lyapkalo, Ilya M; Kašička, Václav 2010-12-17 Thermodynamic acid-base dissociation (ionization) constants (pK(a)) of seven zwitterionic heterocyclic bases, first representatives of new heterocyclic family (2,3,5,7,8,9-hexahydro-1H-diimidazo[1,2-c:2',1'-f][1,3,2]diazaphosphinin-4-ium-5-olate 5-oxides), originally designed as chiral Lewis base catalysts for enantioselective reactions, were determined by capillary zone electrophoresis (CZE). The pK(a) values of the above very weak zwitterionic bases were determined from the dependence of their effective electrophoretic mobility on pH in strongly acidic background electrolytes (pH 0.85-2.80). Prior to pK(a) calculation by non-linear regression analysis, the CZE measured effective mobilities were corrected to reference temperature, 25°C, and constant ionic strength, 25 mM. Thermodynamic pK(a) values of the analyzed zwitterionic heterocyclic bases were found to be particularly low, in the range 0.04-0.32. Moreover, from the pH dependence of effective mobility of the bases, some other relevant characteristics, such as actual and absolute ionic mobilities and hydrodynamic radii of the acidic cationic forms of the bases were determined. 8. A Simple Method to Calculate the Temperature Dependence of the Gibbs Energy and Chemical Equilibrium Constants ERIC Educational Resources Information Center Vargas, Francisco M. 2014-01-01 The temperature dependence of the Gibbs energy and important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although, this is a well-known approach and traditionally covered as part of any physical chemistry course, the required… 9. 
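The CZE procedure in entry 7 determines pKa by nonlinear regression of effective electrophoretic mobility against pH. The idea can be sketched for a monoprotic weak base, where only the protonated (cationic) form migrates: μ_eff(pH) = μ_cation / (1 + 10^(pH − pKa)). The data, mobility units, and parameter values below are illustrative assumptions (synthetic and noise-free), not the authors' measurements; the fit is done by a simple grid search so the sketch needs only NumPy:

```python
import numpy as np

def frac_cation(ph, pka):
    # Mole fraction of the protonated (migrating) form of a weak base
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# Synthetic mobility data for illustration (arbitrary units):
# true mu_cation = 30, true pKa = 2.0
ph = np.linspace(0.5, 4.0, 15)
mu_obs = 30.0 * frac_cation(ph, 2.0)

best = None
for pka in np.arange(0.0, 4.0, 0.001):
    f = frac_cation(ph, pka)
    mu_c = (f @ mu_obs) / (f @ f)        # least-squares mu_cation for this pKa
    sse = ((mu_obs - mu_c * f) ** 2).sum()
    if best is None or sse < best[0]:
        best = (sse, pka, mu_c)

_, pka_fit, mu_fit = best
print(round(pka_fit, 3), round(mu_fit, 2))
```

Because the model is linear in μ_cation, that parameter can be solved in closed form at each trial pKa, leaving a one-dimensional search; in practice a nonlinear least-squares routine would be used, as the papers describe.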
Apparent equilibrium constants and standard transformed Gibbs energies of biochemical reactions involving carbon dioxide. PubMed Alberty, R A 1997-12-01 When carbon dioxide is produced in a biochemical reaction, the expression for the apparent equilibrium constant K' can be written in terms of the partial pressure of carbon dioxide in the gas phase or the total concentration of species containing CO2 in the aqueous phase, referred to here as [TotCO2]. The values of these two apparent equilibrium constants are different because they correspond to different ways of writing the biochemical equations. Their dependencies on pH and ionic strength are also different. The ratio of these two apparent equilibrium constants is equal to the apparent Henry's law constant K'H. This article provides derivations of equations for the calculation of the standard transformed Gibbs energies of formation of TotCO2 and values of the apparent Henry's law constant at various pH levels and ionic strengths. These equations involve the four equilibrium constants interconnecting the five species [CO2(g), CO2(aq), H2CO3, HCO3-, and CO3(2-)] of carbon dioxide. In the literature there are many errors in the treatment of equilibrium data on biochemical reactions involving carbon dioxide, and so several examples are discussed here, including calculation of standard transformed Gibbs energies of formation of reactants. This approach also applies to net reactions, and the net reaction for the oxidation of glucose to carbon dioxide and water is discussed. 10. Determination of acid-base dissociation constants of amino- and guanidinopurine nucleotide analogs and related compounds by capillary zone electrophoresis. 
PubMed Solínová, Veronika; Kasicka, Václav; Koval, Dusan; Cesnek, Michal; Holý, Antonín 2006-03-01 CZE has been applied for determination of acid-base dissociation constants (pKa) of ionogenic groups of newly synthesized amino- and (amino)guanidinopurine nucleotide analogs, such as acyclic nucleoside phosphonate, acyclic nucleoside phosphonate diesters and other related compounds. These compounds bear characteristic pharmacophores contained in various important biologically active substances, such as cytostatics and antivirals. The pKa values of ionogenic groups of the above compounds were determined by nonlinear regression analysis of the experimentally measured pH dependence of their effective electrophoretic mobilities. The effective mobilities were measured by CZE performed in series of BGEs in a broad pH range (3.50-11.25), at constant ionic strength (25 mM) and temperature (25 degrees C). pKa values were determined for the protonated guanidinyl group in (amino)guanidino 9-alkylpurines and in (amino)guanidinopurine nucleotide analogs, such as acyclic nucleoside phosphonates and acyclic nucleoside phosphonate diesters, for phosphonic acid to the second dissociation degree (-2) in acyclic nucleoside phosphonates of amino and (amino)guanidino 9-alkylpurines, and for protonated nitrogen in position 1 (N1) of purine moiety in acyclic nucleoside phosphonates of amino 9-alkylpurines. Thermodynamic pKa of protonated guanidinyl group was estimated to be in the range of 7.75-10.32, pKa of phosphonic acid to the second dissociation degree achieved values of 6.64-7.46, and pKa of protonated nitrogen in position 1 of purine was in the range of 4.13-4.89, depending on the structure of the analyzed compounds. 11. Classical calculation of the equilibrium constants for true bound dimers using complete potential energy surface SciTech Connect Buryak, Ilya; Vigasin, Andrey A. 
2015-12-21 The present paper aims at deriving classical expressions which permit calculation of the equilibrium constant for weakly interacting molecular pairs using a complete multidimensional potential energy surface. The latter is often available nowadays as a result of the more and more sophisticated and accurate ab initio calculations. The water dimer formation is considered as an example. It is shown that even in the case of a rather strongly bound dimer the suggested expression permits obtaining a quite reliable estimate for the equilibrium constant. The reliability of our obtained water dimer equilibrium constant is briefly discussed by comparison with the available data based on experimental observations, quantum calculations, and the use of the RRHO approximation, provided the latter is restricted to formation of true bound states only. 14. Estimation of the initial equilibrium constants in the formation of tetragonal lysozyme nuclei NASA Technical Reports Server (NTRS) Pusey, Marc L. 1991-01-01 Results are presented from a study of the equilibria, kinetic rates, and the aggregation pathway which leads from a lysozyme monomer crystal to a tetragonal crystal, using dialyzed and recrystallized commercial hen eggwhite lysozyme. Relative light scattering intensity measurements were used to estimate the initial equilibrium constants for undersaturated lysozyme solutions in the tetragonal regime. The K1 value was estimated to be (1-3) × 10^4 L/mol. Estimates of subsequent equilibrium constants depend on the crystal aggregation model chosen or determined. Experimental data suggest that the tetragonal lysozyme crystal grows by addition of aggregates preformed in the bulk solution, rather than by monomer addition. 15. Does the Addition of Inert Gases at Constant Volume and Temperature Affect Chemical Equilibrium? ERIC Educational Resources Information Center Paiva, Joao C.

M.; Goncalves, Jorge; Fonseca, Susana 2008-01-01 In this article we examine three approaches, leading to different conclusions, for answering the question "Does the addition of inert gases at constant volume and temperature modify the state of equilibrium?" In the first approach, the answer is yes as a result of a common students' alternative conception; the second approach, valid only for ideal… 16. Revealing equilibrium and rate constants of weak and fast noncovalent interactions. PubMed Mironov, Gleb G; Okhonin, Victor; Gorelsky, Serge I; Berezovski, Maxim V 2011-03-15 Rate and equilibrium constants of weak noncovalent molecular interactions are extremely difficult to measure. Here, we introduced a homogeneous approach called equilibrium capillary electrophoresis of equilibrium mixtures (ECEEM) to determine k(on), k(off), and K(d) of weak (K(d) > 1 μM) and fast kinetics (relaxation time, τ < 0.1 s) in quasi-equilibrium for multiple unlabeled ligands simultaneously in one microreactor. Conceptually, an equilibrium mixture (EM) of a ligand (L), target (T), and a complex (C) is prepared. The mixture is introduced into the beginning of a capillary reactor with aspect ratio >1000 filled with T. Afterward, differential mobility of L, T, and C along the reactor is induced by an electric field. The combination of differential mobility of reactants and their interactions leads to a change of the EM peak shape. This change is a function of rate constants, so the rate and equilibrium constants can be directly determined from the analysis of the EM peak shape (width and symmetry) and propagation pattern along the reactor. We proved experimentally the use of ECEEM for multiplex determination of kinetic parameters describing weak (3 mM > K(d) > 80 μM) and fast (0.25 s ≥ τ ≥ 0.9 ms) noncovalent interactions between four small molecule drugs (ibuprofen, S-flurbiprofen, salicylic acid and phenylbutazone) and α- and β-cyclodextrins. 
The affinity of the drugs was significantly higher for β-cyclodextrin than α-cyclodextrin and mostly determined by the rate constant of complex formation. 17. Equilibrium constant for carbamate formation from monoethanolamine and its relationship with temperature SciTech Connect Aroua, M.K.; Benamor, A.; Haji-Sulaiman, M.Z. 1999-09-01 Removal of acid gases such as CO2 and H2S using aqueous solutions of alkanolamines is an industrially important process. The equilibrium constant for the formation of carbamate from monoethanolamine was evaluated at various temperatures of 298, 308, 318, and 328 K and ionic strengths up to 1.7 M. From the plot of log10 K versus I^0.5, the variation of the thermodynamic constant with temperature follows the relationship log10 K1 = −0.934 + (0.671 × 10^3 K)/T. 18. Spectrophotometric Determination of the Dissociation Constant of an Acid-Base Indicator Using a Mathematical Deconvolution Technique ERIC Educational Resources Information Center Alter, Krystyn P.; Molloy, John L.; Niemeyer, Emily D. 2005-01-01 A laboratory experiment reinforces the concept of acid-base equilibria while introducing a common application of spectrophotometry and can easily be completed within a standard four-hour laboratory period. It provides students with an opportunity to use advanced data analysis techniques like data smoothing and spectral deconvolution to… 19. Measuring Equilibrium Binding Constants for the WT1-DNA Interaction Using a Filter Binding Assay. PubMed Romaniuk, Paul J 2016-01-01 Equilibrium binding of WT1 to specific sites in DNA and potentially RNA molecules is central in mediating the regulatory roles of this protein. In order to understand the functional effects of mutations in the nucleic acid-binding domain of WT1 proteins and/or mutations in the DNA- or RNA-binding sites, it is necessary to measure the equilibrium constant for formation of the protein-nucleic acid complex.
This chapter describes the use of a filter binding assay to make accurate measurements of the binding of the WT1 zinc finger domain to the consensus WT1-binding site in DNA. The method described is readily adapted to the measurement of the effects of mutations in either the WT1 zinc finger domain or the putative binding sites within a promoter element or cellular RNA. 20. Equilibrium and dynamic osmotic behaviour of aqueous solutions with varied concentration at constant and variable volume. PubMed Minkov, Ivan L; Manev, Emil D; Sazdanova, Svetla V; Kolikov, Kiril H 2013-01-01 Osmosis is essential for the living organisms. In biological systems the process usually occurs in confined volumes and may express specific features. The osmotic pressure in aqueous solutions was studied here experimentally as a function of solute concentration (0.05-0.5 M) in two different regimes: of constant and variable solution volume. Sucrose, a biologically active substance, was chosen as a reference solute for the complex tests. A custom made osmotic cell was used. A novel operative experimental approach, employing limited variation of the solution volume, was developed and applied for the purpose. The established equilibrium values of the osmotic pressure are in agreement with the theoretical expectations and do not exhibit any evident differences for both regimes. In contrast, the obtained kinetic dependences reveal striking divergence in the rates of the process at constant and varied solution volume for the respective solute concentrations. The rise of pressure is much faster at constant solution volume, while the solvent influx is many times greater in the regime of variable volume. The results obtained suggest a feasible mechanism for the way in which the living cells rapidly achieve osmotic equilibrium upon changes in the environment. 1. Does the ligand-biopolymer equilibrium binding constant depend on the number of bound ligands? 
PubMed Beshnova, Daria A; Lantushenko, Anastasia O; Evstigneev, Maxim P 2010-11-01 Conventional methods, such as Scatchard or McGhee-von Hippel analyses, used to treat ligand-biopolymer interactions, indirectly make the assumption that the microscopic binding constant is independent of the number of ligands, i, already bound to the biopolymer. Recent results on the aggregation of aromatic molecules (Beshnova et al., J Chem Phys 2009, 130, 165105) indicated that the equilibrium constant of self-association depends intrinsically on the number of molecules in an aggregate due to loss of translational and rotational degrees of freedom on formation of the complex. The influence of these factors on the equilibrium binding constant for ligand-biopolymer complexation was analyzed in this work. It was shown that under the conditions of binding of "small" molecules, these factors can effectively be ignored and, hence, do not provide any hidden systematic error in such widely used approaches as the Scatchard or McGhee-von Hippel methods for analyzing ligand-biopolymer complexation. © 2010 Wiley Periodicals, Inc. Biopolymers 93: 932-935, 2010. 2. Anomalously slow cyanide binding to Glycera dibranchiata monomer methemoglobin component II: Implication for the equilibrium constant SciTech Connect Mintorovitch, J.; Satterlee, J.D. 1988-10-18 In comparison to sperm whale metmyoglobin, metleghemoglobin α, methemoglobins, and heme peroxidases, the purified Glycera dibranchiata monomer methemoglobin component II exhibits anomalously slow cyanide ligation kinetics. For the component II monomer methemoglobin this reaction has been studied under pseudo-first-order conditions at pH 6.0, 7.0, 8.0, and 9.0, employing 100-250-fold mole excesses of potassium cyanide at each pH. The analysis shows that the concentration-independent bimolecular rate constant is small in comparison to those of the other heme proteins. Furthermore, the results show that the dissociation rate is extremely slow. Separation of the bimolecular rate constant into contributions from k(CN-) (the rate constant for CN- binding) and from k(HCN) (the rate constant for HCN binding) shows that the former is approximately 90 times greater. These results indicate that cyanide ligation reactions are not instantaneous for this protein, which is important for those attempting to study the ligand-binding equilibria. From the results presented here the authors estimate that the actual equilibrium dissociation constant (K(D)) for cyanide binding to this G. dibranchiata monomer methemoglobin has a numerical upper limit that is at least 2 orders of magnitude smaller than the value reported before the kinetic results were known. 3. Acid-base equilibria in ethylene glycol--III: selection of titration conditions in ethylene glycol medium, protolysis constants of alkaloids in ethylene glycol and its mixtures. PubMed Zikolov, P; Zikolova, T; Budevsky, O 1976-08-01 Theoretical titration curves are used for the selection of appropriate conditions for the acid-base volumetric determination of weak bases in ethylene glycol medium. The theoretical curves for titration of some alkaloids are deduced graphically on the basis of the logarithmic concentration diagram. The acid-base constants used for the construction of the theoretical titration curves were determined by potentiometric titration in a cell without liquid junction, equipped with a glass and a silver-silver chloride electrode. It is shown that the alkaloids investigated can be determined accurately by visual or potentiometric titration. The same approach for the selection of titration conditions seems to be applicable to other non-aqueous amphiprotic solvents. 4. Determination of the Equilibrium Constants of a Weak Acid: An Experiment for Analytical or Physical Chemistry Bonham, Russell A.
1998-05-01 A simple experiment, utilizing readily available equipment and chemicals, is described. It allows students to explore the concepts of chemical equilibria, nonideal behavior of aqueous solutions, least squares with adjustment of nonlinear model parameters, and errors. The relationship between the pH of a solution of known initial concentration and volume of a weak acid as it is titrated by known volumes of a monohydroxy strong base is developed rigorously assuming ideal behavior. A distinctive feature of this work is a method that avoids dealing with the problems presented by equations with multiple roots. The volume of base added is calculated in terms of a known value of the pH and the equilibrium constants. The algebraic effort involved is nearly the same as the alternative of deriving a master equation for solving for the hydrogen ion concentration or activity and results in a more efficient computational algorithm. This approach offers two advantages over the use of computer software to solve directly for the hydrogen ion concentration. First, it avoids a potentially lengthy iterative procedure encountered when the polynomial exceeds third order in the hydrogen ion concentration; and second, it provides a means of obtaining results with a hand calculator that can prove useful in checking computer code. The approach is limited to weak solutions to avoid dealing with molalities and to insure that the Debye-Hückel limiting law is applicable. The nonlinear least squares algorithm Nonlinear Fit, found in the computational mathematics library Mathematica, is utilized to fit the measured volume of added base to the calculated value as a function of the measured pH subject to variation of all the equilibrium constants as parameters (including Kw). The experiment emphasizes both data collection and data analysis aspects of the problem. Data for the titration of phosphorous acid, H3PO3, by NaOH are used to illustrate the approach. 
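The inverted-titration idea in the Bonham entry — computing the volume of added base as a function of the measured pH and the equilibrium constants, rather than solving a polynomial for the hydrogen ion concentration — can be sketched for the simplest case of a monoprotic weak acid. (The abstract's phosphorous acid system is diprotic; this one-Ka version, with made-up concentrations and ideal behaviour assumed, illustrates only the algebra.)

```python
def base_volume(ph, v0_ml, ca, cb, ka, kw=1.0e-14):
    """Volume of strong base (mL) needed to reach a given pH when titrating
    v0_ml of a weak monoprotic acid HA (molarity ca) with base of molarity cb.
    Follows directly from the charge and mass balances, assuming ideal
    behaviour (activity = concentration), so no polynomial root-finding
    in [H+] is required."""
    h = 10.0 ** (-ph)
    w = kw / h - h                 # [OH-] - [H+]
    alpha = ka / (h + ka)          # fraction of the acid present as A-
    return v0_ml * (ca * alpha + w) / (cb - w)

# Hypothetical illustration: 25 mL of 0.10 M acid (Ka = 1e-5)
# titrated with 0.10 M strong base; at pH = pKa it is roughly half-neutralised
print(round(base_volume(5.0, 25.0, 0.10, 0.10, 1.0e-5), 2))
```

Writing the solution as V(pH) rather than pH(V) is what makes the least-squares fit of measured titration data straightforward: each observed (pH, V) pair is compared against this closed-form prediction.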
Fits of the data without corrections 5. The estimation of affinity constants for the binding of model peptides to DNA by equilibrium dialysis. PubMed Central Standke, K C; Brunnert, H 1975-01-01 The binding of lysine model peptides of the type Lys-X-Lys, Lys-X-X-Lys and Lys-X-X-X-Lys (X = different aliphatic and aromatic amino acids) has been studied by equilibrium dialysis. It was shown that the strong electrostatic binding forces generated by protonated amino groups of lysine can be distinguished from the weak forces stemming from neutral and aromatic spacer amino acids. The overall binding strength of the lysine model peptides is modified by these weak binding forces and the apparent binding constants are influenced more by the hydrophobic character of the spacer amino acid side chains than by the chain length of the spacers. PMID:1187347 6. Equilibrium constant for the reversible reaction ClO + O2 ⇌ ClO-O2 NASA Technical Reports Server (NTRS) Demore, W. B. 1990-01-01 It is shown here that the equilibrium constant for the reversible reaction ClO + O2 ⇌ ClO-O2 at stratospheric temperatures must be at least three orders of magnitude less than the current NASA upper limit. The new upper limit greatly diminishes the possible role of ClO-O2 in the chlorine-photosensitized decomposition of O3. Nevertheless, it does not preclude the possibility that it is a significant reservoir of ClO, as well as a possible reactant, at low temperatures characteristic of polar vortices. 7. Temperature dependency of the equilibrium constant for the formation of carbamate from diethanolamine SciTech Connect Aroua, M.K.; Amor, A.B.; Haji-Sulaiman, M.Z. 1997-07-01 Aqueous alkanolamine solutions are frequently used to remove acidic components such as H2S and CO2 from process gas streams.
The equilibrium constant for the formation of diethanolamine carbamate was determined experimentally at (303, 313, 323, and 331) K for ionic strengths up to 1.8 mol/dm^3, the inert electrolyte being NaClO4. A linear relationship was found to hold between log K and I^0.5. The thermodynamic constant has been determined and expressed by the equation log K1 = −5.12 + (1.781 × 10^3 K)/T. 8. Surface-dependent chemical equilibrium constants and capacitances for bare and 3-cyanopropyldimethylchlorosilane coated silica nanochannels. PubMed Andersen, Mathias Bækbo; Frey, Jared; Pennathur, Sumita; Bruus, Henrik 2011-01-01 We present a combined theoretical and experimental analysis of the solid-liquid interface of fused-silica nanofabricated channels with and without a hydrophilic 3-cyanopropyldimethylchlorosilane (cyanosilane) coating. We develop a model that relaxes the assumption that the surface parameters C(1), C(2), and pK(+) are constant and independent of surface composition. Our theoretical model consists of three parts: (i) a chemical equilibrium model of the bare or coated wall, (ii) a chemical equilibrium model of the buffered bulk electrolyte, and (iii) a self-consistent Gouy-Chapman-Stern triple-layer model of the electrochemical double layer coupling these two equilibrium models. To validate our model, we used both pH-sensitive dye-based capillary filling experiments as well as electro-osmotic current-monitoring measurements. Using our model we predict the dependence of ζ potential, surface charge density, and capillary filling length ratio on ionic strength for different surface compositions, which can be difficult to achieve otherwise. 9. Determination of equilibrium constants for the reaction between acetone and HO2 using infrared kinetic spectroscopy.
PubMed Grieman, Fred J; Noell, Aaron C; Davis-Van Atta, Casey; Okumura, Mitchio; Sander, Stanley P 2011-09-29 The reaction between the hydroperoxy radical, HO(2), and acetone may play an important role in acetone removal and the budget of HO(x) radicals in the upper troposphere. We measured the equilibrium constants of this reaction over the temperature range of 215-272 K at an overall pressure of 100 Torr using a flow tube apparatus and laser flash photolysis to produce HO(2). The HO(2) concentration was monitored as a function of time by near-IR diode laser wavelength modulation spectroscopy. The resulting [HO(2)] decay curves in the presence of acetone are characterized by an immediate decrease in initial [HO(2)] followed by subsequent decay. These curves are interpreted as a rapid (<100 μs) equilibrium reaction between acetone and the HO(2) radical that occurs on time scales faster than the time resolution of the apparatus, followed by subsequent reactions. This separation of time scales between the initial equilibrium and ensuing reactions enabled the determination of the equilibrium constant with values ranging from 4.0 × 10(-16) to 7.7 × 10(-18) cm(3) molecule(-1) for T = 215-272 K. Thermodynamic parameters for the reaction determined from a second-law fit of our van't Hoff plot were Δ(r)H°(245) = -35.4 ± 2.0 kJ mol(-1) and Δ(r)S°(245) = -88.2 ± 8.5 J mol(-1) K(-1). Recent ab initio calculations predict that the reaction proceeds through a prereactive hydrogen-bonded molecular complex (HO(2)-acetone) with subsequent isomerization to a hydroxy-peroxy radical, 2-hydroxyisopropylperoxy (2-HIPP). The calculations differ greatly in the energetics of the complex and the peroxy radical, as well as the transition state for isomerization, leading to significant differences in their predictions of the extent of this reaction at tropospheric temperatures. 
The current results are consistent with equilibrium formation of the hydrogen-bonded molecular complex on a short time scale (100 μs). Formation of the hydrogen-bonded complex will have a negligible impact on the 11. Revealing model dependencies in "Assessing the RAFT equilibrium constant via model systems: an EPR study". PubMed Junkers, Thomas; Barner-Kowollik, Christopher; Coote, Michelle L 2011-12-01 In a recent article (W. Meiser, M. Buback, Assessing the RAFT Equilibrium Constant via Model Systems: An EPR Study, Macromol. Rapid Commun. 2011, 18, 1490-1494), it is claimed that evidence is found that unequivocally proves that quantum mechanical calculations assessing the equilibrium constant and fragmentation rate coefficients in dithiobenzoate-mediated reversible addition fragmentation transfer (RAFT) systems are beset with a considerable uncertainty. In the present work, we show that these claims made by Meiser and Buback are beset with a model dependency, as a critical key parameter in their data analysis - the addition rate coefficient of the radicals attacking the C=S double bond in the dithiobenzoate - induces a model insensitivity into the data analysis. Contrary to the claims made by Meiser and Buback, their experimental results can be brought into agreement with the quantum chemical calculations if a lower addition rate coefficient of cyanoisopropyl radicals (CIP) to the CIP dithiobenzoate (CPDB) is assumed. To resolve the model dependency, the addition rate coefficient of CIP radicals to CPDB needs to be determined as a matter of priority. 12. Water dimers in the atmosphere III: equilibrium constant from a flexible potential.
PubMed Scribano, Yohann; Goldman, Nir; Saykally, R J; Leforestier, Claude 2006-04-27 We present new results for the water dimer equilibrium constant K(p)(T) in the range 190-390 K, using a flexible potential energy surface fitted to spectroscopic data. The increased numerical complexity due to explicit consideration of the monomer vibrations is handled via an adiabatic (6 + 6)d decoupling between intra- and intermolecular modes. The convergence of the canonical partition function of the dimer is ensured by computing all energy levels up to dissociation for total angular momentum values J = 0-5 and using an extrapolation scheme to higher values. The newly calculated values for K(p)(T) are in very good agreement with available experimental data at room temperature. At higher temperatures, an analysis of the convergence of the partition function reveals that quasi-bound states are likely to contribute to the equilibrium constant. Additional thermodynamic quantities (ΔG, ΔH, ΔS, and C(p)) have also been determined and fit to quadratic expressions a + bT + cT^2. 13. "Assessing the RAFT equilibrium constant via model systems: an EPR study"--response to a comment. PubMed Meiser, Wibke; Buback, Michael 2012-08-14 We have presented an EPR-based approach for deducing the RAFT equilibrium constant, K(eq), of a dithiobenzoate-mediated system [Meiser, W. and Buback M. Macromol. Rapid Commun. 2011, 32, 1490]. Our value is four orders of magnitude below K(eq) from ab initio calculations for the identical monomer-free system. Junkers et al. [Macromol. Rapid Commun. 2011, 32, 1891] claim that our EPR approach would be model dependent and that our data could be equally well fitted by assuming slow addition of radicals to the RAFT agent and slow fragmentation of the so-obtained intermediate radical as well as a high cross-termination rate.
By identification of all side products, our EPR-based method is shown to be model independent and to provide reliable K(eq) values, which demonstrate the validity of the intermediate radical termination model. 14. Assessing the RAFT equilibrium constant via model systems: an EPR study. PubMed Meiser, Wibke; Buback, Michael 2011-09-15 Reversible addition-fragmentation chain transfer (RAFT) equilibrium constants, K(eq), for the model system cyano-iso-propyl dithiobenzoate (CPDB) - cyano-iso-propyl radical (CIP) have been deduced via electron paramagnetic resonance (EPR) spectroscopy. The CIP species is produced by thermal decomposition of azobis-iso-butyronitrile (AIBN). In solution of toluene at 70 °C, K(eq) has been determined to be (9 ± 1) L · mol(-1). Measurement of K(eq) = k(ad)/k(β) between 60 and 100 °C yields ΔE(a) = (-28 ± 4) kJ · mol(-1) as the difference in the activation energies of k(ad) and k(β). The data measured on the model system are indicative of fast fragmentation of the intermediate radical produced by addition of CIP to CPDB. 15. Rough-to-smooth transition of an equilibrium neutral constant stress layer NASA Technical Reports Server (NTRS) Logan, E., Jr.; Fichtl, G. H. 1975-01-01 Purpose of research on rough-to-smooth transition of an equilibrium neutral constant stress layer is to develop a model for low-level atmospheric flow over terrains of abruptly changing roughness, such as those occurring near the windward end of a landing strip, and to use the model to derive functions which define the extent of the region affected by the roughness change and allow adequate prediction of wind and shear stress profiles at all points within the region. A model consisting of two bounding logarithmic layers and an intermediate velocity defect layer is assumed, and dimensionless velocity and stress distribution functions which meet all boundary and matching conditions are hypothesized. 
The functions are used in an asymptotic form of the equation of motion to derive a relation which governs the growth of the internal boundary layer. The growth relation is used to predict variation of surface shear stress. 16. Calculation of cooperativity and equilibrium constants of ligands binding to G-quadruplex DNA in solution. PubMed Kudrev, A G 2013-11-15 An equilibrium model of ligand binding to a DNA oligomer has been considered as a process of small-molecule adsorption onto a lattice of multiple binding sites. An experimental example has been used to verify the assertion that, during saturation of the macromolecule by a ligand, one should expect a cooperativity effect due to changes in DNA conformation or the mutual influence between bound ligands. Such a phenomenon cannot be entirely described by the classical stepwise complex formation model. To evaluate ligand binding affinity and the cooperativity of ligand-oligomer complex formation, a statistical approach has been proposed. This new computational approach was used to re-examine previously studied ligand binding toward DNA quadruplex targets with multiple binding sites. The intrinsic equilibrium constants K1-3 of the mesotetrakis-(N-methyl-4-pyridyl)-porphyrin (TMPyP4) binding with the [d(T4G4)]4 and with the [AG3(T2AG3)3] quadruplexes and the correction for the mutual influence between bound ligands (cooperativity parameters ω) were determined from the Job plots based upon a nonlinear least-squares fitting procedure. The re-examination of the experimental curves reveals that the equilibrium is affected by positive cooperative (ω>1) binding of the TMPyP4 ligand with tetramolecular [d(T4G4)]4. However, for an intramolecular antiparallel-parallel hybrid structure [AG3(T2AG3)3], weak anti-cooperativity of TMPyP4 accommodation (ω<1) onto two of the three nonidentical sites was detected. PMID:24148442 17. 
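The nonlinear least-squares strategy described in the preceding abstract can be sketched in miniature. The snippet below fits a simple Hill-type cooperative binding model to synthetic data; this is a deliberately simplified stand-in for the authors' lattice model, and all concentrations and parameter values are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(L, K, n):
    """Fraction of sites occupied in a Hill-type cooperative binding model.
    K is an apparent association constant, n a cooperativity exponent (n > 1
    mimics positive cooperativity, loosely analogous to omega > 1)."""
    return (K * L) ** n / (1.0 + (K * L) ** n)

rng = np.random.default_rng(0)
L = np.linspace(0.05, 5.0, 40)                  # hypothetical ligand concentrations
theta_true = hill(L, K=2.0, n=1.8)              # "true" binding curve
theta_obs = theta_true + rng.normal(0, 0.01, L.size)  # add measurement noise

# Nonlinear least-squares fit recovers the binding parameters from the data.
popt, pcov = curve_fit(hill, L, theta_obs, p0=[1.0, 1.0], bounds=(0, np.inf))
K_fit, n_fit = popt
```

A fit of this kind, repeated across Job-plot compositions, is the general shape of the procedure the abstract describes.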
The universal statistical distributions of the affinity, equilibrium constants, kinetics and specificity in biomolecular recognition. PubMed Zheng, Xiliang; Wang, Jin 2015-04-01 We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring the different ligands binding with a particular receptor. The results of the analytical studies are confirmed by the microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of the binding follow a log-normal distribution around the mean and a power law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding and the optimization of which becomes the maximization of the ratio of the free energy gap between the native state and the average of non-native states versus the roughness measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or equivalently specific ligand binding with different receptors. The elucidation of distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics. PMID:25885453 18. 
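One claim in the abstract above, that a near-Gaussian distribution of binding free energies implies log-normally distributed equilibrium constants, follows directly from K = exp(-ΔG/RT) and is easy to verify numerically. The ΔG mean and spread below are arbitrary illustrative values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
RT = 0.593                       # kcal/mol at ~298 K
# Hypothetical Gaussian ensemble of binding free energies (kcal/mol).
dG = rng.normal(loc=-8.0, scale=1.5, size=100_000)
K = np.exp(-dG / RT)             # equilibrium constants for each "ligand"

# Because ln K = -dG/RT is a linear transform of a Gaussian variable,
# K itself is log-normal: ln K should be Gaussian with mean 8.0/RT
# and standard deviation 1.5/RT.
lnK = np.log(K)
```

The sample mean and standard deviation of ln K match the transformed Gaussian parameters, which is the log-normal behavior the abstract reports near the mean (the heavy power-law tail in the paper arises from landscape features this toy Gaussian model does not include).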
Modulation and Salt-Induced Reverse Modulation of the Excited-State Proton-Transfer Process of Lysozymized Pyranine: The Contrasting Scenario of the Ground-State Acid-Base Equilibrium of the Photoacid. PubMed Das, Ishita; Panja, Sudipta; Halder, Mintu 2016-07-28 Here we report on the excited-state behavior in terms of the excited-state proton-transfer (ESPT) reaction as well as the ground-state acid-base property of pyranine [8-hydroxypyrene-1,3,6-trisulfonate (HPTS)] in the presence of an enzymatic protein, human lysozyme (LYZ). HPTS forms a 1:1 ground-state complex with LYZ having the binding constant KBH = (1.4 ± 0.05) × 10(4) M(-1), and its acid-base equilibrium gets shifted toward the deprotonated conjugate base (RO(-)), resulting in a downward shift in pKa. This suggests that the conjugate base (RO(-)) is thermodynamically more favored over the protonated (ROH) species inside the lysozyme matrix, resulting in an increased population of the deprotonated form. However, for the release of the proton from the excited photoacid, interestingly, the rate of proton transfer gets slowed down due to the "slow" acceptor biological water molecules present in the immediate vicinity of the fluorophore binding region inside the protein. The observed ESPT time constants, ∼140 and ∼750 ps, of protein-bound pyranine are slower than in bulk aqueous media (∼100 ps, single exponential). The molecular docking study predicts that the most probable binding location of the fluorophore is in a region near to the active site of the protein. Here we also report on the effect of external electrolyte (NaCl) on the reverse modulation of ground-state prototropy as well as the ESPT process of the protein-bound pyranine. It is found that there is a dominant role of electrostatic forces in the HPTS-LYZ interaction process, because an increase in ionic strength by the addition of NaCl dislodges the fluorophore from the protein pocket to the bulk again. 
The study shows a considerably different perspective of the perturbation offered by the model macromolecular host used, unlike the available literature reports on the concerned photoacid. PMID:27355857 2. Lysozyme adsorption in pH-responsive hydrogel thin-films: the non-trivial role of acid-base equilibrium. PubMed Narambuena, Claudio F; Longo, Gabriel S; Szleifer, Igal 2015-09-01 We develop and apply a molecular theory to study the adsorption of lysozyme on weak polyacid hydrogel films. The theory explicitly accounts for the conformation of the network, the structure of the proteins, the size and shape of all the molecular species, their interactions as well as the chemical equilibrium of each titratable unit of both the protein and the polymer network. The driving forces for adsorption are the electrostatic attractions between the negatively charged network and the positively charged protein. The adsorption is a non-monotonic function of the solution pH, with a maximum in the region between pH 8 and 9 depending on the salt concentration of the solution. The non-monotonic adsorption is the result of increasing negative charge of the network with pH, while the positive charge of the protein decreases. At low pH the network is roughly electroneutral, while at sufficiently high pH the protein is negatively charged. 
In particular, adsorption is predicted above the protein isoelectric point where both the solution lysozyme and the polymer network are negatively charged. This behavior occurs because the pH in the interior of the gel is significantly lower than that in the bulk solution and it is also regulated by the adsorption of the protein in order to optimize protein-gel interactions. Under high pH conditions we predict that the protein changes its charge from negative in the solution to positive within the gel. The change occurs within a few nanometers at the interface of the hydrogel film. Our predictions show the non-trivial interplay between acid-base equilibrium, physical interactions and molecular organization under nanoconfined conditions. 4. Discovering a Change in Equilibrium Constant with Change in Ionic Strength: An Empirical Laboratory Experiment for General Chemistry Stolzberg, Richard J. 1999-05-01 Students are challenged to investigate the hypothesis that an equilibrium constant, Kc, measured as a product and quotient of molar concentrations, is constant at constant temperature. Spectrophotometric measurements of absorbance of a solution of Fe3+(aq) and SCN-(aq) treated with different amounts of KNO3 are made to determine Kc for the formation of FeSCN2+(aq). Students observe a regular decrease in the value of Kc as the concentration of added KNO3 is increased. 5. Partition functions and equilibrium constants for diatomic molecules and atoms of astrophysical interest Barklem, P. S.; Collet, R. 
2016-04-01 Partition functions and dissociation equilibrium constants are presented for 291 diatomic molecules for temperatures in the range from near absolute zero to 10 000 K, thus providing data for many diatomic molecules of astrophysical interest at low temperature. The calculations are based on molecular spectroscopic data from the book of Huber & Herzberg (1979, Constants of Diatomic Molecules) with significant improvements from the literature, especially updated data for ground states of many of the most important molecules by Irikura (2007, J. Phys. Chem. Ref. Data, 36, 389). Dissociation energies are collated from compilations of experimental and theoretical values. Partition functions for 284 species of atoms for all elements from H to U are also presented based on data collected at NIST. The calculated data are expected to be useful for modelling a range of low density astrophysical environments, especially star-forming regions, protoplanetary disks, the interstellar medium, and planetary and cool stellar atmospheres. The input data, which will be made available electronically, also provides a possible foundation for future improvement by the community. Full Tables 1-8 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/588/A96 6. Non-Condon equilibrium Fermi's golden rule electronic transition rate constants via the linearized semiclassical method Sun, Xiang; Geva, Eitan 2016-06-01 In this paper, we test the accuracy of the linearized semiclassical (LSC) expression for the equilibrium Fermi's golden rule rate constant for electronic transitions in the presence of non-Condon effects. 
We do so by performing a comparison with the exact quantum-mechanical result for a model where the donor and acceptor potential energy surfaces are parabolic and identical except for shifts in the equilibrium energy and geometry, and the coupling between them is linear in the nuclear coordinates. Since non-Condon effects may or may not give rise to conical intersections, both possibilities are examined by considering: (1) A modified Garg-Onuchic-Ambegaokar model for charge transfer in the condensed phase, where the donor-acceptor coupling is linear in the primary mode coordinate, and for which non-Condon effects do not give rise to a conical intersection; (2) the linear vibronic coupling model for electronic transitions in gas phase molecules, where non-Condon effects give rise to conical intersections. We also present a comprehensive comparison between the linearized semiclassical expression and a progression of more approximate expressions. The comparison is performed over a wide range of frictions and temperatures for model (1) and over a wide range of temperatures for model (2). The linearized semiclassical method is found to reproduce the exact quantum-mechanical result remarkably well for both models over the entire range of parameters under consideration. In contrast, more approximate expressions are observed to deviate considerably from the exact result in some regions of parameter space. PMID:27369495 8. SARS CoV main proteinase: The monomer-dimer equilibrium dissociation constant. PubMed Graziano, Vito; McGrath, William J; Yang, Lin; Mangel, Walter F 2006-12-12 The SARS coronavirus main proteinase (SARS CoV main proteinase) is required for the replication of the severe acute respiratory syndrome coronavirus (SARS CoV), the virus that causes SARS. One function of the enzyme is to process viral polyproteins. The active form of the SARS CoV main proteinase is a homodimer. 
In the literature, estimates of the monomer-dimer equilibrium dissociation constant, KD, have varied more than 65,000-fold, from <1 nM to more than 200 microM. Because of these discrepancies and because compounds that interfere with activation of the enzyme by dimerization may be potential antiviral agents, we investigated the monomer-dimer equilibrium by three different techniques: small-angle X-ray scattering, chemical cross-linking, and enzyme kinetics. Analysis of small-angle X-ray scattering data from a series of measurements at different SARS CoV main proteinase concentrations yielded KD values of 5.8 +/- 0.8 microM (obtained from the entire scattering curve), 6.5 +/- 2.2 microM (obtained from the radii of gyration), and 6.8 +/- 1.5 microM (obtained from the forward scattering). The KD from chemical cross-linking was 12.7 +/- 1.1 microM, and from enzyme kinetics, it was 5.2 +/- 0.4 microM. While each of these three techniques can present different potential limitations, they all yielded similar KD values. 9. SPECIES - EVALUATING THERMODYNAMIC PROPERTIES, TRANSPORT PROPERTIES & EQUILIBRIUM CONSTANTS OF AN 11-SPECIES AIR MODEL NASA Technical Reports Server (NTRS) Thompson, R. A. 1994-01-01 Accurate numerical prediction of high-temperature, chemically reacting flowfields requires a knowledge of the physical properties and reaction kinetics for the species involved in the reacting gas mixture. Assuming an 11-species air model at temperatures below 30,000 degrees Kelvin, SPECIES (Computer Codes for the Evaluation of Thermodynamic Properties, Transport Properties, and Equilibrium Constants of an 11-Species Air Model) computes values for the species thermodynamic and transport properties, diffusion coefficients and collision cross sections for any combination of the eleven species, and reaction rates for the twenty reactions normally occurring. 
The species represented in the model are diatomic nitrogen, diatomic oxygen, atomic nitrogen, atomic oxygen, nitric oxide, ionized nitric oxide, the free electron, ionized atomic nitrogen, ionized atomic oxygen, ionized diatomic nitrogen, and ionized diatomic oxygen. Sixteen subroutines compute the following properties for both a single species, interaction pair, or reaction, and an array of all species, pairs, or reactions: species specific heat and static enthalpy, species viscosity, species frozen thermal conductivity, diffusion coefficient, collision cross section (OMEGA 1,1), collision cross section (OMEGA 2,2), collision cross section ratio, and equilibrium constant. The program uses least squares polynomial curve-fits of the most accurate data believed available to provide the requested values more quickly than is possible with table look-up methods. The subroutines for computing transport coefficients and collision cross sections use additional code to correct for any electron pressure when working with ionic species. SPECIES was developed on a SUN 3/280 computer running the SunOS 3.5 operating system. It is written in standard FORTRAN 77 for use on any machine, and requires roughly 92K memory. The standard distribution medium for SPECIES is a 5.25 inch 360K MS-DOS format diskette. The contents of the 10. A rigorous multiple independent binding site model for determining cell-based equilibrium dissociation constants. PubMed Drake, Andrew W; Klakamp, Scott L 2007-01-10 A new 4-parameter nonlinear equation based on the standard multiple independent binding site model (MIBS) is presented for fitting cell-based ligand titration data in order to calculate the ligand/cell receptor equilibrium dissociation constant and the number of receptors/cell. 
The most commonly used linear (Scatchard Plot) or nonlinear 2-parameter model (a single binding site model found in commercial programs like Prism(R)) used for analysis of ligand/receptor binding data assumes that only the K(D) influences the shape of the titration curve. We demonstrate using simulated data sets that, depending upon the cell surface receptor expression level, the number of cells titrated, and the magnitude of the K(D) being measured, this assumption of always being under K(D)-controlled conditions can be erroneous and can lead to unreliable estimates for the binding parameters. We also compare and contrast the fitting of simulated data sets to the commonly used cell-based binding equation versus our more rigorous 4-parameter nonlinear MIBS model. It is shown through these simulations that the new 4-parameter MIBS model, when used for cell-based titrations under optimal conditions, yields highly accurate estimates of all binding parameters and hence should be the preferred model to fit cell-based experimental nonlinear titration data. PMID:17141800 11. Reversible inhibition of proton release activity and the anesthetic-induced acid-base equilibrium between the 480 and 570 nm forms of bacteriorhodopsin. PubMed Central Boucher, F; Taneva, S G; Elouatik, S; Déry, M; Messaoudi, S; Harvey-Girard, E; Beaudoin, N 1996-01-01 In purple membrane added with general anesthetics, there exists an acid-base equilibrium between two spectral forms of the pigment: bR570 and bR480 (apparent pKa = 7.3). As the purple 570 nm bacteriorhodopsin is reversibly transformed into its red 480 nm form, the proton pumping capability of the pigment reversibly decreases, as indicated by transient proton release measurements and proton translocation action spectra of mixture of both spectral forms. 
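The warning in the Drake and Klakamp abstract about K(D)-controlled versus receptor-controlled conditions can be illustrated with the exact mass-balance (ligand-depletion) solution for a 1:1 system. The concentrations below are hypothetical, and the quadratic shown is the standard single-site binding solution, not the authors' 4-parameter MIBS model itself:

```python
import math

def complex_exact(RT, LT, KD):
    """Exact 1:1 complex concentration from mass balance.
    C solves C**2 - (RT + LT + KD)*C + RT*LT = 0; the minus root is physical."""
    s = RT + LT + KD
    return (s - math.sqrt(s * s - 4.0 * RT * LT)) / 2.0

def complex_naive(RT, LT, KD):
    """2-parameter hyperbola that assumes free ligand ~ total ligand."""
    return RT * LT / (LT + KD)

# Receptor concentration far above KD (arbitrary units): the naive
# hyperbola predicts more complex than there is ligand -- unphysical.
RT, LT, KD = 100.0, 50.0, 1.0
c_exact = complex_exact(RT, LT, KD)   # stays below min(RT, LT)
c_naive = complex_naive(RT, LT, KD)   # exceeds total ligand
```

In the dilute-receptor limit (RT much smaller than KD and LT) the two expressions agree, which is exactly the K(D)-controlled regime in which the simpler 2-parameter fit is safe.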
It happens in spite of a complete photochemical activity in bR480 that is mostly characterized by fast deprotonation and slow reprotonation steps and which, under continuous illumination, bleaches with a yield comparable to that of bR570. This modified photochemical activity has a correlated specific photoelectrical counterpart: a faster proton extrusion current and a slower reprotonation current. The relative areas of all photocurrent phases are reduced in bR480, most likely because its photochemistry is accompanied by charge movements for shorter distances than in the native pigment, reflecting a reversible inhibition of the pumping activity. PMID:8789112 12. Using Electrophoretic Mobility Shift Assays to Measure Equilibrium Dissociation Constants: GAL4-p53 Binding DNA as a Model System ERIC Educational Resources Information Center Heffler, Michael A.; Walters, Ryan D.; Kugel, Jennifer F. 2012-01-01 An undergraduate biochemistry laboratory experiment is described that will teach students the practical and theoretical considerations for measuring the equilibrium dissociation constant (K(D)) for a protein/DNA interaction using electrophoretic mobility shift assays (EMSAs). An EMSA monitors the migration of DNA through a native gel;… 13. Rate and Equilibrium Constants for an Enzyme Conformational Change during Catalysis by Orotidine 5'-Monophosphate Decarboxylase. PubMed Goryanova, Bogdana; Goldman, Lawrence M; Ming, Shonoi; Amyes, Tina L; Gerlt, John A; Richard, John P 2015-07-28 complex between FOMP and the open enzyme, that the tyrosyl phenol group stabilizes the closed form of ScOMPDC by hydrogen bonding to the substrate phosphodianion, and that the phenyl group of Y217 and F217 facilitates formation of the transition state for the rate-limiting conformational change. 
An analysis of kinetic data for mutant enzyme-catalyzed decarboxylation of OMP and FOMP provides estimates for the rate and equilibrium constants for the conformational change that traps FOMP at the enzyme active site. 14. The 'Densitometric Image Analysis Software' and its application to determine stepwise equilibrium constants from electrophoretic mobility shift assays. PubMed van Oeffelen, Liesbeth; Peeters, Eveline; Nguyen Le Minh, Phu; Charlier, Daniël 2014-01-01 Current software applications for densitometric analysis, such as ImageJ, QuantityOne (BioRad) and the Intelligent or Advanced Quantifier (Bio Image), do not allow the non-linearity of autoradiographic films to be taken into account during calibration. As a consequence, quantification of autoradiographs is often regarded as problematic, and phosphorimaging is the preferred alternative. However, the non-linear behaviour of autoradiographs can be described mathematically, so it can be accounted for. Therefore, the 'Densitometric Image Analysis Software' has been developed, which allows the user to quantify electrophoretic bands in autoradiographs, as well as in gels and phosphorimages, and provides optimized band selection support. Moreover, the program can determine protein-DNA binding constants from Electrophoretic Mobility Shift Assays (EMSAs). For this purpose, the software calculates a chosen stepwise equilibrium constant for each migration lane within the EMSA, and estimates the errors due to non-uniformity of the background noise, smear caused by complex dissociation or denaturation of double-stranded DNA, and technical errors such as pipetting inaccuracies. Thereby, the program helps the user to optimize experimental parameters and to choose the best lanes for estimating an average equilibrium constant. 
This process can reduce the inaccuracy of equilibrium constants from the usual factor of 2 to about 20%, which is particularly useful when determining position weight matrices and cooperative binding constants to predict genomic binding sites. The MATLAB source code, platform-dependent software and installation instructions are available via the website http://micr.vub.ac.be. PMID:24465496 15. Equilibrium Fermi's Golden Rule Charge Transfer Rate Constants in the Condensed Phase: The Linearized Semiclassical Method vs Classical Marcus Theory. PubMed Sun, Xiang; Geva, Eitan 2016-05-19 In this article, we present a comprehensive comparison between the linearized semiclassical expression for the equilibrium Fermi's golden rule rate constant and the progression of more approximate expressions that lead to the classical Marcus expression. We do so within the context of the canonical Marcus model, where the donor and acceptor potential energy surfaces are parabolic and identical except for a shift in both the free energies and equilibrium geometries, and within the Condon region. The comparison is performed for two different spectral densities and over a wide range of frictions and temperatures, thereby providing a clear test for the validity, or lack thereof, of the more approximate expressions. We also comment on the computational cost and scaling associated with numerically calculating the linearized semiclassical expression for the rate constant and its dependence on the spectral density, temperature, and friction. 16. Rate and equilibrium constants for the addition of N-heterocyclic carbenes into benzaldehydes: a remarkable 2-substituent effect.
PubMed Collett, Christopher J; Massey, Richard S; Taylor, James E; Maguire, Oliver R; O'Donoghue, AnnMarie C; Smith, Andrew D 2015-06-01 Rate and equilibrium constants for the reaction between N-aryl triazolium N-heterocyclic carbene (NHC) precatalysts and substituted benzaldehyde derivatives to form 3-(hydroxybenzyl)azolium adducts under both catalytic and stoichiometric conditions have been measured. Kinetic analysis and reaction profile fitting of both the forward and reverse reactions, plus onwards reaction to the Breslow intermediate, demonstrate the remarkable effect of the benzaldehyde 2-substituent in these reactions and provide insight into the chemoselectivity of cross-benzoin reactions. 18. A METHOD FOR THE MEASUREMENT OF SITE-SPECIFIC TAUTOMERIC AND ZWITTERIONIC MICROSPECIES EQUILIBRIUM CONSTANTS EPA Science Inventory We describe a method for the individual measurement of simultaneously occurring, unimolecular, site-specific "microequilibrium" constants as in, for example, prototropic tautomerism and zwitterionic equilibria. Our method represents an elaboration of that of Nygren et al. (Anal. ... 20. Experimental determination of equilibrium constant for the complexing reaction of nitric oxide with hexamminecobalt(II) in aqueous solution.
PubMed Mao, Yan-Peng; Chen, Hua; Long, Xiang-Li; Xiao, Wen-de; Li, Wei; Yuan, Wei-Kang 2009-02-15 Ammonia solution can be used to scrub NO from flue gases by adding soluble cobalt(II) salts to the aqueous ammonia solution. Hexamminecobalt(II), Co(NH3)6(2+), formed by ammonia binding to Co2+, is the active constituent that eliminates NO from the flue gas stream. The hexamminecobalt(II) can combine with NO to form a complex. For the development of this process, data on the equilibrium constants for the coordination between NO and Co(NH3)6(2+) over a range of temperatures are very important. Therefore, a series of experiments was performed in a bubble column to investigate the chemical equilibrium. The equilibrium constant was determined in the temperature range 30.0–80.0 °C under atmospheric pressure at pH 9.14. All experimental data fit the following equation well: [see text], where the enthalpy and entropy are ΔH° = −(44.559 ± 2.329) kJ mol⁻¹ and ΔS° = −(109.50 ± 7.126) J K⁻¹ mol⁻¹, respectively. 1. A procedure to find thermodynamic equilibrium constants for CO2 and CH4 adsorption on activated carbon. PubMed Trinh, T T; van Erp, T S; Bedeaux, D; Kjelstrup, S; Grande, C A 2015-03-28 Thermodynamic equilibrium for adsorption means that the chemical potentials of the gas and adsorbed phase are equal. A precise knowledge of the chemical potential is, however, often lacking, because the activity coefficient of the adsorbate is not known. Adsorption isotherms are therefore commonly fitted to ideal models such as the Langmuir, Sips or Henry models. We propose here a new procedure to find the activity coefficient and the equilibrium constant for adsorption which uses the thermodynamic factor. Instead of fitting the data to a model, we calculate the thermodynamic factor and use it to find first the activity coefficient.
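The enthalpy and entropy quoted in entry 20 fix the temperature dependence of the equilibrium constant through the van't Hoff relation ln K = −ΔH°/(RT) + ΔS°/R. A minimal Python check, assuming ideal behavior; the two temperatures are illustrative points inside the reported 30.0–80.0 °C range:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

# Values quoted in entry 20 for NO + Co(NH3)6(2+) complexation
DELTA_H = -44559.0   # J mol^-1
DELTA_S = -109.50    # J K^-1 mol^-1

def k_eq(temp_k: float) -> float:
    """Equilibrium constant from ln K = -dH/(R*T) + dS/R."""
    return math.exp(-DELTA_H / (R * temp_k) + DELTA_S / R)

# Exothermic complexation: K falls as temperature rises
print(k_eq(303.15))  # ~30 deg C
print(k_eq(353.15))  # ~80 deg C
```

The drop in K with temperature is why NO capture by the cobalt complex degrades in hotter scrubbing liquor.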
We show, using published molecular simulation data, how this procedure gives the thermodynamic equilibrium constant and enthalpies of adsorption for CO2(g) on graphite. We also use published experimental data to find similar thermodynamic properties of CO2(g) and of CH4(g) adsorbed on activated carbon. The procedure gives a higher accuracy in the determination of enthalpies of adsorption than ideal models do. 2. A Virtual Mixture Approach to the Study of Multistate Equilibrium: Application to Constant pH Simulation in Explicit Water PubMed Central Wu, Xiongwu; Brooks, Bernard R. 2015-01-01 Chemical and thermodynamic equilibrium of multiple states is a fundamental phenomenon in biological systems and has been the focus of many experimental and computational studies. This work presents a simulation method to directly study the equilibrium of multiple states. This method constructs a virtual mixture of multiple states (VMMS) to sample the conformational space of all chemical states simultaneously. The VMMS system consists of multiple subsystems, one for each state. Each subsystem contains a solute and a solvent environment. The solute molecules in all subsystems share the same conformation but have their own solvent environments. Transitions between states are reflected in changes of their molar fractions. Simulation of a VMMS system allows efficient calculation of the relative free energies of all states, which in turn determine their equilibrium molar fractions. For systems with a large number of state transition sites, an implicit site approximation is introduced to minimize the cost of simulation. A direct application of the VMMS method is constant pH simulation to study protonation equilibrium. Applying the VMMS method to a heptapeptide with 3 ionizable residues, we calculated the pKas of those residues both with all explicit states and with implicit sites and obtained consistent results.
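Per ionizable site, the protonation equilibrium that constant-pH methods such as VMMS sample reduces to the Henderson–Hasselbalch relation between pH, pKa, molar fractions, and relative free energy. A minimal Python sketch; the pKa value below is an illustrative carboxylate-like number, not one of the paper's computed values:

```python
import math

def protonated_fraction(ph: float, pka: float) -> float:
    """Equilibrium molar fraction of the protonated state at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

def relative_free_energy(ph: float, pka: float, kt: float = 0.593) -> float:
    """dG (kcal/mol at ~298 K) of the deprotonated state relative to
    the protonated one; negative means deprotonation is favored."""
    return kt * math.log(10.0) * (pka - ph)

# Illustrative site with pKa = 4.0
print(protonated_fraction(4.0, 4.0))  # 0.5 exactly at pH = pKa
print(protonated_fraction(7.0, 4.0))  # mostly deprotonated at pH 7
```

In VMMS terms, these molar fractions are what the relative free energies of the subsystems determine.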
For mouse epidermal growth factor of 9 ionizable groups, our VMMS simulations with implicit sites produced pKas of all 9 ionizable groups and the results agree qualitatively with NMR measurement. This example demonstrates the VMMS method can be applied to systems of a large number of ionizable groups and the computational cost scales linearly with the number of ionizable groups. For one of the most challenging systems in constant pH calculation, SNase Δ+PHS/V66K, our VMMS simulation shows that it is the state-dependent water penetration that causes the large deviation in lysine66’s pKa. PMID:26506245 4. Dynamics of Equilibrium Folding and Unfolding Transitions of Titin Immunoglobulin Domain under Constant Forces PubMed Central Chen, Hu; Yuan, Guohua; Winardhi, Ricksen S.; Yao, Mingxi; Popa, Ionel; Fernandez, Julio M.; Yan, Jie 2015-01-01 The mechanical stability of force-bearing proteins is crucial for their functions. However, slow transition rates of complex protein domains have made it challenging to investigate their equilibrium force-dependent structural transitions. Using ultra stable magnetic tweezers, we report the first equilibrium single-molecule force manipulation study of the classic titin I27 immunoglobulin domain. We found that individual I27 in a tandem repeat unfold/fold independently. We obtained the force-dependent free energy difference between unfolded and folded I27 and determined the critical force (∼5.4 pN) at which unfolding and folding have equal probability. We also determined the force-dependent free energy landscape of unfolding/folding transitions based on measurement of the free energy cost of unfolding.
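The critical force in entry 4 has a simple two-state reading: the equilibrium unfolding probability follows a Boltzmann sigmoid in the applied force. A hedged Python sketch, assuming a linear free energy dependence ΔG(F) = (F − Fc)·Δx with an illustrative transition distance Δx; this is not the paper's fitted landscape:

```python
import math

KT = 4.11       # thermal energy at ~298 K, pN*nm
F_CRIT = 5.4    # critical force from entry 4, pN
DELTA_X = 10.0  # unfolding distance, nm (assumed for illustration)

def p_unfolded(force_pn: float) -> float:
    """Equilibrium probability of the unfolded state under constant force,
    for a two-state domain with linear force dependence of dG."""
    bias = (force_pn - F_CRIT) * DELTA_X / KT
    return 1.0 / (1.0 + math.exp(-bias))

print(p_unfolded(5.4))  # 0.5 by construction at the critical force
print(p_unfolded(7.0))  # strongly unfolded above Fc
```

The steepness of the sigmoid is set by Δx/kT, which is why even modest forces above the critical force can shift the domain population almost entirely to the unfolded state.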
In addition to providing insights into the force-dependent structural transitions of titin I27, our results suggest that the conformations of titin immunoglobulin domains can be significantly altered during low force, long duration muscle stretching. PMID:25726700 6. Toward Improving Atmospheric Models and Ozone Projections: Laboratory UV Absorption Cross Sections and Equilibrium Constant of ClOOCl Wilmouth, D. M.; Klobas, J. E.; Anderson, J. G. 2015-12-01 Thirty years have now passed since the discovery of the Antarctic ozone hole, and despite comprehensive international agreements being in place to phase out CFCs and halons, polar ozone losses generally remain severe.
The relevant halogen compounds have very long atmospheric lifetimes, which ensures that seasonal polar ozone depletion will likely continue for decades to come. Changes in the climate system can further impact stratospheric ozone abundance through changes in the temperature and water vapor structure of the atmosphere and through the potential initiation of solar radiation management efforts. In many ways, the rate at which climate is changing must now be considered fast relative to the slow removal of halogens from the atmosphere. Photochemical models of Earth's atmosphere play a critical role in understanding and projecting ozone levels, but in order for these models to be accurate, they must be built on a foundation of accurate laboratory data. ClOOCl is the centerpiece of the catalytic cycle that accounts for more than 50% of the chlorine-catalyzed ozone loss in the Arctic and Antarctic stratosphere every spring, and so uncertainties in the ultraviolet cross sections of ClOOCl are particularly important. Additionally, the equilibrium constant of the dimerization reaction of ClO merits further study, as there are important discrepancies between in situ measurements and lab-based models, and the JPL-11 recommended equilibrium constant includes high error bars at atmospherically relevant temperatures (~75% at 200 K). Here we analyze available data for the ClOOCl ultraviolet cross sections and equilibrium constant and present new laboratory spectroscopic results. 7. Computer codes for the evaluation of thermodynamic properties, transport properties, and equilibrium constants of an 11-species air model NASA Technical Reports Server (NTRS) Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N. 1990-01-01 The computer codes developed provide data to 30000 K for the thermodynamic and transport properties of individual species and reaction rates for the prominent reactions occurring in an 11-species nonequilibrium air model. 
These properties and the reaction-rate data are computed through the use of curve-fit relations which are functions of temperature (and number density for the equilibrium constant). The curve fits were made using the most accurate data believed available. A detailed review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1232. 8. An Experimental Evaluation of Programed Instruction as One of Two Review Techniques for Two-Year College Students Concerned with Solving Acid-Base Chemical Equilibrium Problems. ERIC Educational Resources Information Center Sharon, Jared Bear The major purpose of this study was to design and evaluate a programed instructional unit for a first year college chemistry course. The topic of the unit was the categorization and solution of acid-base equilibria problems. The experimental programed instruction text was used by 41 students and the fifth edition of Schaum's Theory and Problems of… 9. Rate Constant in Far-from-Equilibrium States of a Replicating System with Mutually Catalyzing Chemicals Kamimura, Atsushi; Yukawa, Satoshi; Ito, Nobuyasu 2006-02-01 As a first step to study reaction dynamics in far-from-equilibrium open systems, we propose a stochastic protocell model in which two mutually catalyzing chemicals are replicating depending on the external flow of energy resources J. This model exhibits an Arrhenius type reaction; furthermore, it produces a non-Arrhenius reaction that exhibits a power-law reaction rate with regard to the activation energy. These dependences are explained using the dynamics of J; the asymmetric random walk of J results in the Arrhenius equation and conservation of J results in a power-law dependence. Further, we find that the discreteness of molecules results in the power change. Effects of cell divisions are also discussed in our model. 10. Equilibrium theory analysis of liquid chromatography with non-constant velocity. 
PubMed Ortner, Franziska; Joss, Lisa; Mazzotti, Marco 2014-12-19 In liquid chromatography, adsorption and desorption lead to velocity variations within the column if the adsorbing compounds make up a high volumetric ratio of the mobile phase and if there is a substantial difference in the adsorption capacities. An equilibrium theory model for binary systems accounting for these velocity changes is derived and solved analytically for competitive Langmuir isotherms. Characteristic properties of concentration and velocity profiles predicted by the derived model are illustrated by two exemplary systems. Applicability of the model equations for the estimation of isotherm parameters from experimental data is investigated, and accurate results are obtained for systems with one adsorbing and one inert compound, as well as for systems with two adsorbing compounds. 11. The polysiloxane cyclization equilibrium constant: a theoretical focus on small and intermediate size rings. PubMed Madeleine-Perdrillat, Claire; Delor-Jestin, Florence; de Sainte Claire, Pascal 2014-01-01 The nonlinear dependence of polysiloxane cyclization constants (log(K(x))) with ring size (log(x)) is explained by a thermodynamic model that treats specific torsional modes of the macromolecular chains with a classical coupled hindered rotor model. Several parameters such as the dependence of the internal rotation kinetic energy matrix with geometry, the effect of potential energy hindrance, anharmonicity, and the couplings between internal rotors were investigated. This behavior arises from the competing effects of local molecular entropy that is mainly driven by the intrinsic transformation of vibrations in small cycles into hindered rotations in larger cycles and configurational entropy. 12. 
A benchmark study of molecular structure by experimental and theoretical methods: Equilibrium structure of thymine from microwave rotational constants and coupled-cluster computations Vogt, Natalja; Demaison, Jean; Ksenafontov, Denis N.; Rudolph, Heinz Dieter 2014-11-01 Accurate equilibrium, re, structures of thymine have been determined using two different, and to some extent complementary, techniques. The composite ab initio Born-Oppenheimer, re(best ab initio), structural parameters are obtained from the all-electron CCSD(T) and MP2 geometry optimizations using Gaussian basis sets up to quadruple-zeta quality. The second is the semi-experimental mixed estimation method, in which internal coordinates are fitted concurrently to equilibrium rotational constants and to geometry parameters obtained from a high level of electronic structure theory. The equilibrium rotational constants are derived from experimental effective ground-state rotational constants and rovibrational corrections based on a quantum-chemical cubic force field. Equilibrium molecular structures accurate to 0.002 Å and 0.2° have been determined. This work is one of a few accurate equilibrium structure determinations for large molecules. The poor behavior of Kraitchman's equations is discussed. 13. Theory for rates, equilibrium constants, and Brønsted slopes in F1-ATPase single molecule imaging experiments PubMed Central Volkán-Kacsó, Sándor; Marcus, Rudolph A. 2015-01-01 A theoretical model of elastically coupled reactions is proposed for single molecule imaging and rotor manipulation experiments on F1-ATPase. Stalling experiments are considered in which rates of individual ligand binding, ligand release, and chemical reaction steps have an exponential dependence on rotor angle. These data are treated in terms of the effect of thermodynamic driving forces on reaction rates, and lead to equations relating rate constants and free energies to the stalling angle.
These relations, in turn, are modeled using a formalism originally developed to treat electron and other transfer reactions. During stalling the free energy profile of the enzymatic steps is altered by a work term due to elastic structural twisting. Using biochemical and single molecule data, the dependence of the rate constant and equilibrium constant on the stall angle, as well as the Brønsted slope, are predicted and compared with experiment. Reasonable agreement is found with stalling experiments for ATP and GTP binding. The model can be applied to other torque-generating steps of reversible ligand binding, such as ADP and Pi release, when sufficient data become available. PMID:26483483 15. Optimization of Electrospray Ionization by Statistical Design of Experiments and Response Surface Methodology: Protein-Ligand Equilibrium Dissociation Constant Determinations Pedro, Liliana; Van Voorhis, Wesley C.; Quinn, Ronald J. 2016-09-01 Electrospray ionization mass spectrometry (ESI-MS) binding studies between proteins and ligands under native conditions require that instrumental ESI source conditions are optimized if relative solution-phase equilibrium concentrations between the protein-ligand complex and free protein are to be retained. Instrumental ESI source conditions that simultaneously maximize the relative ionization efficiency of the protein-ligand complex over free protein and minimize the protein-ligand complex dissociation during the ESI process and the transfer from atmospheric pressure to vacuum are generally specific for each protein-ligand system and should be established when an accurate equilibrium dissociation constant (KD) is to be determined via titration. In this paper, a straightforward and systematic approach for ESI source optimization is presented. The method uses statistical design of experiments (DOE) in conjunction with response surface methodology (RSM) and is demonstrated for the complexes between Plasmodium vivax guanylate kinase (PvGK) and two ligands: 5'-guanosine monophosphate (GMP) and 5'-guanosine diphosphate (GDP). It was verified that even though the ligands are structurally similar, the most appropriate ESI conditions for KD determination by titration are different for each. 17. Equilibrium constant for the reaction ClO + ClO ↔ ClOOCl between 250 and 206 K. PubMed Hume, Kelly L; Bayes, Kyle D; Sander, Stanley P 2015-05-14 The chlorine peroxide molecule, ClOOCl, is an important participant in the chlorine-catalyzed destruction of ozone in the stratosphere. Very few laboratory measurements have been made for the partitioning between monomer ClO and dimer ClOOCl at temperatures lower than 250 K. This paper reports absorption spectra for both ClO and ClOOCl when they are in equilibrium at 1 atm and temperatures down to 206 K.
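The third-law fit reported in entry 17, Keq = (2.01 × 10⁻²⁷ cm³ molecule⁻¹) e^(8554 K/T), implies a very steep temperature dependence for the ClO/ClOOCl partition. A quick Python check over the measured range, using the fit parameters from the abstract purely for illustration:

```python
import math

A = 2.01e-27   # pre-exponential, cm^3 molecule^-1 (entry 17 fit)
B = 8554.0     # K (entry 17 fit)

def k_eq(temp_k: float) -> float:
    """Keq = [ClOOCl]/[ClO]^2 from the third-law fit in entry 17."""
    return A * math.exp(B / temp_k)

# Keq grows by roughly three orders of magnitude between 250 K and 206 K,
# pushing the partition toward the dimer at cold stratospheric temperatures.
print(k_eq(250.0))
print(k_eq(206.0))
```

This steepness is why small uncertainties in the fit parameters translate into large uncertainties in the modeled ClOOCl abundance near 200 K.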
The very low ClO concentrations involved require measuring and calibrating a differential cross section, ΔσClO, for the 10-0 band of ClO. A third-law fit of the new results gives Keq = [(2.01 ± 0.17) × 10⁻²⁷ cm³ molecule⁻¹] e^((8554 ± 21) K/T), where the error limits reflect the uncertainty in the entropy change. The resulting equilibrium constants are slightly lower than currently recommended. The slope of the van’t Hoff plot yields a value for the enthalpy of formation of ClOOCl at 298 K, ΔHf°, of 129.8 ± 0.6 kJ mol⁻¹. Uncertainties in the absolute ultraviolet cross sections of ClOOCl and ClO appear to be the limiting factors in these measurements. The new Keq parameters are consistent with the measurements of Santee et al. in the stratosphere. PMID:25560546 19. Fundamental and overtone vibrational spectroscopy, enthalpy of hydrogen bond formation and equilibrium constant determination of the methanol-dimethylamine complex. PubMed Du, Lin; Mackeprang, Kasper; Kjaergaard, Henrik G 2013-07-01 We have measured gas phase vibrational spectra of the bimolecular complex formed between methanol (MeOH) and dimethylamine (DMA) up to about 9800 cm⁻¹. In addition to the strong fundamental OH-stretching transition we have also detected the weak second overtone NH-stretching transition. The spectra of the complex are obtained by spectral subtraction of the monomer spectra from spectra recorded for the mixture. For comparison, we also measured the fundamental OH-stretching transition in the bimolecular complex between MeOH and trimethylamine (TMA). The enthalpies of hydrogen bond formation (ΔH) for the MeOH-DMA and MeOH-TMA complexes have been determined by measurements of the fundamental OH-stretching transition in the temperature range from 298 to 358 K. The enthalpy of formation is found to be −35.8 ± 3.9 and −38.2 ± 3.3 kJ mol⁻¹ for MeOH-DMA and MeOH-TMA, respectively, in the 298 to 358 K region. The equilibrium constant (Kp) for the formation of the MeOH-DMA complex has been determined from the measured and calculated transition intensities of the OH-stretching fundamental transition and the NH-stretching second overtone transition. The transition intensities were calculated using an anharmonic oscillator local mode model with dipole moment and potential energy curves calculated using explicitly correlated coupled cluster methods. The equilibrium constant for formation of the MeOH-DMA complex was determined to be 0.2 ± 0.1 atm⁻¹, corresponding to a ΔG value of about 4.0 kJ mol⁻¹. 20.
Acid-base titrations of functional groups on the surface of the thermophilic bacterium Anoxybacillus flavithermus: comparing a chemical equilibrium model with ATR-IR spectroscopic data. PubMed Heinrich, Hannah T M; Bremer, Phil J; Daughney, Christopher J; McQuillan, A James 2007-02-27 Acid-base functional groups at the surface of Anoxybacillus flavithermus (AF) were assigned from the modeling of batch titration data of bacterial suspensions and compared with those determined from in situ infrared spectroscopic titration analysis. The computer program FITMOD was used to generate a two-site Donnan model (site 1: pKa = 3.26, wet concn = 2.46 x 10(-4) mol g(-1); site 2: pKa = 6.12, wet concn = 6.55 x 10(-5) mol g(-1)), which was able to describe data for whole exponential phase cells from both batch acid-base titrations at 0.01 M ionic strength and electrophoretic mobility measurements over a range of different pH values and ionic strengths. In agreement with information on the composition of bacterial cell walls and a considerable body of modeling literature, site 1 of the model was assigned to carboxyl groups, and site 2 was assigned to amino groups. pH difference IR spectra acquired by in situ attenuated total reflection infrared (ATR-IR) spectroscopy confirmed the presence of carboxyl groups. The spectra appear to show a carboxyl pKa in the 3.3-4.0 range. Further peaks were assigned to phosphodiester groups, which deprotonated at slightly lower pH. The presence of amino groups could not be confirmed or discounted by IR spectroscopy, but a positively charged group corresponding to site 2 was implicated by electrophoretic mobility data. Carboxyl group speciation over a pH range of 2.3-10.3 at two different ionic strengths was further compared to modeling predictions. While model predictions were strongly influenced by the ionic strength change, pH difference IR data showed no significant change. 
This meant that modeling predictions agreed reasonably well with the IR data for 0.5 M ionic strength but not for 0.01 M ionic strength. 1. Evaluation of equilibrium constants for the interaction of lactate dehydrogenase isoenzymes with reduced nicotinamide-adenine dinucleotide by affinity chromatography. PubMed Central Brinkworth, R I; Masters, C J; Winzor, D J 1975-01-01 Rabbit muscle lactate dehydrogenase was subjected to frontal affinity chromatography on Sepharose-oxamate in the presence of various concentrations of NADH and sodium phosphate buffer (0.05 M, pH 6.8) containing 0.5 M-NaCl. Quantitative interpretation of the results yields an intrinsic association constant of 9.0 x 10(4) M-1 for the interaction of enzyme with NADH at 5 degrees C, a value that is confirmed by equilibrium-binding measurements. In a second series of experiments, zonal affinity chromatography of a mouse tissue extract under the same conditions was used to evaluate association constants of the order of 2 x 10(5) M-1, 3 x 10(5) M-1, 4 x 10(5) M-1, 7 x 10(5) M-1 and 2 x 10(6) M-1 for the interaction of NADH with the M4, M3H, M2H2, MH3 and H4 isoenzymes, respectively, of lactate dehydrogenase. PMID:175784 2. Effect-compartment equilibrium rate constant (keo) for propofol during induction of anesthesia with a target-controlled infusion device. PubMed Lim, Thiam Aun; Wong, Wai Hong; Lim, Kin Yuee 2006-01-01 The effect-compartment concentration (C(e)) of a drug at a specific pharmacodynamic endpoint should be independent of the rate of drug injection. We used this assumption to derive an effect-compartment equilibrium rate constant (k(eo)) for propofol during induction of anesthesia, using a target controlled infusion device (Diprifusor). Eighteen unpremedicated patients were induced with a target blood propofol concentration of 5 microg x ml(-1) (group 1), while another 18 were induced with a target concentration of 6 microg x ml(-1) (group 2). The time at loss of the eyelash reflex was recorded.
Computer simulation was used to derive the rate constant (k(eo)) that resulted in the mean C(e) at loss of the eyelash reflex in group 1 being equal to that in group 2. Using this population technique, we found the k(eo) to be 0.57 min(-1). The mean (SD) effect compartment concentration at loss of the eyelash reflex was 2.39 (0.70) microg x ml(-1). This means that to achieve a desired C(e) within 3 min of induction, the initial target blood concentration should be set at 1.67 times that of the desired C(e) for 1 min, after which it should revert to the desired concentration. 3. On the Temperature Dependence of Intrinsic Surface Protonation Equilibrium Constants: An Extension of the Revised MUSIC Model. PubMed Machesky, Michael L.; Wesolowski, David J.; Palmer, Donald A.; Ridley, Moira K. 2001-07-15 The revised multisite complexation (MUSIC) model of T. Hiemstra et al. (J. Colloid Interface Sci. 184, 680 (1996)) is the most thoroughly developed approach to date that explicitly considers the protonation behavior of the various types of hydroxyl groups known to exist on mineral surfaces. We have extended their revised MUSIC model to temperatures other than 25 degrees C to help rationalize the adsorption data we have been collecting for various metal oxides, including rutile and magnetite to 300 degrees C. Temperature-corrected MUSIC model A constants were calculated using a consistent set of solution protonation reactions with equilibrium constants that are reasonably well known as a function of temperature. A critical component of this approach was to incorporate an empirical correction factor that accounts for the observed decrease in cation hydration number with increasing temperature. This extension of the revised MUSIC model matches our experimentally determined pH of zero net proton charge values (pH(znpc)) for rutile to within 0.05 pH units between 25 and 250 degrees C and for magnetite within 0.2 pH units between 50 and 290 degrees C.
Moreover, combining the MUSIC-model-derived surface protonation constants with the basic Stern description of electrical double-layer structure results in a good fit to our experimental rutile surface protonation data for all conditions investigated (25 to 250 degrees C, and 0.03 to 1.0 m NaCl or tetramethylammonium chloride media). Consequently, this approach should be useful in other instances where it is necessary to describe and/or predict the adsorption behavior of metal oxide surfaces over a wide temperature range. Copyright 2001 Academic Press. PMID:11426995 4. Effect of Temperature on Acidity and Hydration Equilibrium Constants of Delphinidin-3-O- and Cyanidin-3-O-sambubioside Calculated from Uni- and Multiwavelength Spectroscopic Data. PubMed Vidot, Kévin; Achir, Nawel; Mertz, Christian; Sinela, André; Rawat, Nadirah; Prades, Alexia; Dangles, Olivier; Fulcrand, Hélène; Dornier, Manuel 2016-05-25 Delphinidin-3-O-sambubioside and cyanidin-3-O-sambubioside are the main anthocyanins of Hibiscus sabdariffa calyces, traditionally used to make a bright red beverage by decoction in water. At natural pH, these anthocyanins are mainly in their flavylium form (red) in equilibrium with the quinonoid base (purple) and the hemiketal (colorless). For the first time, their acidity and hydration equilibrium constants were obtained from a pH-jump method followed by UV-vis spectroscopy as a function of temperature from 4 to 37 °C. Equilibrium constant determination was also performed by multivariate curve resolution (MCR). Acidity and hydration constants of cyanidin-3-O-sambubioside at 25 °C were 4.12 × 10(-5) and 7.74 × 10(-4), respectively, and were significantly higher for delphinidin-3-O-sambubioside (4.95 × 10(-5) and 1.21 × 10(-3), respectively). MCR yielded the concentration and spectrum of each form but led to overestimated values for the equilibrium constants.
However, both methods showed that formation of the quinonoid base and of the hemiketal is endothermic. Equilibrium constants of the anthocyanins in the hibiscus extract were comparable to those of the isolated anthocyanins. PMID:27124576 6.
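The acidity constants in the anthocyanin abstract above convert directly to pKa values, and together with the hydration constants they determine how much of the red flavylium form survives at a given pH. A small Python sketch follows (equilibrium constants transcribed from the abstract; the function names and the example pH of 2.5 for an acidic hibiscus decoction are my own assumptions):

```python
import math

# Acidity (Ka) and hydration (Kh) constants at 25 °C from the abstract
CYANIDIN_KA, CYANIDIN_KH = 4.12e-5, 7.74e-4
DELPHINIDIN_KA, DELPHINIDIN_KH = 4.95e-5, 1.21e-3

def pka(ka):
    """pKa = -log10(Ka)."""
    return -math.log10(ka)

def flavylium_fraction(ka, kh, h):
    """Fraction of the red flavylium cation AH+ among the three forms
    AH+ (flavylium), A (quinonoid base, Ka) and B (hemiketal, Kh),
    for hydronium activity h: [A]/[AH+] = Ka/h and [B]/[AH+] = Kh/h."""
    return 1.0 / (1.0 + ka / h + kh / h)

print(pka(CYANIDIN_KA), pka(DELPHINIDIN_KA))

# At an acidic pH of ~2.5 the flavylium form dominates, as the abstract states
h = 10.0 ** -2.5
print(flavylium_fraction(CYANIDIN_KA, CYANIDIN_KH, h))
```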
Colorimetric Determination of the Iron(III)-Thiocyanate Reaction Equilibrium Constant with Calibration and Equilibrium Solutions Prepared in a Cuvette by Sequential Additions of One Reagent to the Other ERIC Educational Resources Information Center Nyasulu, Frazier; Barlag, Rebecca 2011-01-01 The well-known colorimetric determination of the equilibrium constant of the iron(III)-thiocyanate complex is simplified by preparing solutions in a cuvette. For the calibration plot, 0.10 mL increments of 0.00100 M KSCN are added to 4.00 mL of 0.200 M Fe(NO3)3, and for the equilibrium solutions, 0.50 mL increments of… 7. Solubility of stibnite in hydrogen sulfide solutions, speciation, and equilibrium constants, from 25 to 350 degrees C SciTech Connect Krupp, R.E. 1988-12-01 Solubility of stibnite (Sb2S3) was measured in aqueous hydrogen sulfide solutions as a function of pH and total free sulfur (TFS) concentrations at 25, 90, 200, 275, and 350 degrees C and at saturated vapor pressures. At 25 and 90 degrees C and TFS ≈ 0.01 molal, solubility is controlled by the thioantimonite complexes H2Sb2S4(0), HSb2S4(-), and Sb2S4(2-). At higher temperatures the hydroxothioantimonite complex Sb2S2(OH)2(0) becomes dominant. Polymerization due to condensation reactions yields long chains made up of trigonal-pyramidal SbS3 groups. Equilibrium constants were derived for the dimers. The transition from thioantimonite to hydroxothioantimonite species at approximately 120 degrees C is endothermic and is entirely driven by a gain in entropy. Stibnite solubilities calculated for some geothermal fluids indicate that these fluids are undersaturated in Sb if stibnite is the solid equilibrium phase.
At high temperatures (>100 degrees C) precipitation of stibnite from ore fluids can occur in response to conductive cooling, while at low temperatures, where thioantimonites dominate, acidification of the fluid is the more likely mechanism. Precipitation of stibnite from fluids containing hydroxothioantimonite consumes H2S and may thus trigger precipitation of other metals carried as sulfide complexes, e.g. Au(HS)2(-). 8. 34S16O2: High-resolution analysis of the (030), (101), (111), (002) and (201) vibrational states; determination of equilibrium rotational constants for sulfur dioxide and anharmonic vibrational constants SciTech Connect Lafferty, Walter; Flaud, Jean-marie; Ngom, El Hadji A.; Sams, Robert L. 2009-01-02 High-resolution Fourier transform spectra of a sample of sulfur dioxide enriched in 34S (95.3%) were completely analyzed, leading to a large set of assigned lines. The experimental levels derived from this set of transitions were fit to within their experimental uncertainties using Watson-type Hamiltonians. Precise band centers, rotational constants and centrifugal distortion constants were determined. The following band centers in cm(-1) were obtained: ν0(3ν2) = 1538.720198(11), ν0(ν1+ν3) = 2475.828004(29), ν0(ν1+ν2+ν3) = 2982.118600(20), ν0(2ν3) = 2679.800919(35), and ν0(2ν1+ν3) = 3598.773915(38). The rotational constants obtained in this work have been fit together with the rotational constants of lower-lying vibrational states [W.J. Lafferty, J.-M. Flaud, R.L. Sams and El Hadji A. Ngom, in press] to obtain equilibrium constants as well as vibration-rotation constants. These equilibrium constants have been fit together with those of 32S16O2 [J.-M. Flaud and W.J. Lafferty, J. Mol. Spectrosc. 16 (1993) 396-402], leading to an improved equilibrium structure. Finally, the observed band centers have been fit to obtain anharmonic vibrational constants. 9.
Determination of acid/base dissociation constants based on a rapid detection of the half equivalence point by feedback-based flow ratiometry. PubMed Tanaka, Hideji; Tachibana, Takahiro 2004-06-01 Acid dissociation constants (Ka) were determined through the rapid detection of the half equivalence point (EP1/2) based on feedback-based flow ratiometry. A titrand, delivered at a constant flow rate, was merged with a titrant, whose flow rate was varied in response to a control voltage (Vc) from a controller. Downstream, the pH of the mixed solution was monitored. Initially, Vc was increased linearly. At the instant that the detector sensed EP1/2, the ramp direction of Vc changed downward. When EP1/2 was sensed again, Vc was increased again. This sequence was repeated automatically. The pH at EP1/2 was regarded as the pKa of the analyte after an activity correction. Satisfactory results were obtained for different acids in various matrices with good precision (RSD approximately 3%) at a throughput rate of 56 s/determination. 10. Using electrophoretic mobility shift assays to measure equilibrium dissociation constants: GAL4-p53 binding DNA as a model system. PubMed Heffler, Michael A; Walters, Ryan D; Kugel, Jennifer F 2012-01-01 An undergraduate biochemistry laboratory experiment is described that will teach students the practical and theoretical considerations for measuring the equilibrium dissociation constant (K(D)) for a protein/DNA interaction using electrophoretic mobility shift assays (EMSAs). An EMSA monitors the migration of DNA through a native gel; the DNA migrates more slowly when bound to a protein. To determine a K(D), the amount of unbound and protein-bound DNA in the gel is measured as the protein concentration increases. By performing this experiment, students will be introduced to making affinity measurements and gain experience in performing quantitative EMSAs.
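The EMSA analysis described in the abstract above comes down to fitting the fraction of bound DNA, measured from band intensities at increasing protein concentration, to a one-site binding isotherm f = [P]/(KD + [P]) (valid when protein is in large excess over DNA). The Python sketch below fits synthetic noiseless data with a crude grid search; the concentrations and the KD of 5 nM are invented for the demonstration and are not from the article:

```python
def fraction_bound(p, kd):
    """One-site binding isotherm: fraction of DNA bound at free
    protein concentration p (same units as kd)."""
    return p / (kd + p)

def fit_kd(concs, fracs):
    """Crude one-parameter grid search for KD minimising the squared
    error; a real analysis would use nonlinear least squares."""
    best_kd, best_err = None, float("inf")
    kd = 1e-3
    while kd < 1e3:
        err = sum((fraction_bound(p, kd) - f) ** 2
                  for p, f in zip(concs, fracs))
        if err < best_err:
            best_kd, best_err = kd, err
        kd *= 1.05
    return best_kd

true_kd = 5.0                                   # nM, made-up value
concs = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0]  # protein concentrations, nM
fracs = [fraction_bound(p, true_kd) for p in concs]
print(fit_kd(concs, fracs))
```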
The experiment describes measuring the K(D) for the interaction between the chimeric protein GAL4-p53 and its DNA recognition site; however, the techniques are adaptable to other DNA binding proteins. In addition, the basic experiment described can be easily expanded to include additional inquiry-driven experimentation. © 2012 by The International Union of Biochemistry and Molecular Biology. 11. Rate and Equilibrium Constants for an Enzyme Conformational Change during Catalysis by Orotidine 5′-Monophosphate Decarboxylase PubMed Central 2016-01-01 from the complex between FOMP and the open enzyme, that the tyrosyl phenol group stabilizes the closed form of ScOMPDC by hydrogen bonding to the substrate phosphodianion, and that the phenyl group of Y217 and F217 facilitates formation of the transition state for the rate-limiting conformational change. An analysis of kinetic data for mutant enzyme-catalyzed decarboxylation of OMP and FOMP provides estimates for the rate and equilibrium constants for the conformational change that traps FOMP at the enzyme active site. PMID:26135041 12. Understanding Chemical Equilibrium Using Entropy Analysis: The Relationship between ΔStot(sys°) and the Equilibrium Constant ERIC Educational Resources Information Center Bindel, Thomas H. 2010-01-01 Entropy analyses as a function of the extent of reaction are presented for a number of physicochemical processes, including vaporization of a liquid, dimerization of nitrogen dioxide, and the autoionization of water. Graphs of the total entropy change versus the extent of reaction give a visual representation of chemical equilibrium and the second… 13. Norfloxacin Zn(II)-based complexes: acid-base ionization constant determination, DNA and albumin binding properties and the biological effect against Trypanosoma cruzi.
PubMed Gouvea, Ligiane R; Martins, Darliane A; Batista, Denise da Gama Jean; Soeiro, Maria de Nazaré C; Louro, Sonia R W; Barbeira, Paulo J S; Teixeira, Letícia R 2013-10-01 Zn(II) complexes with norfloxacin (NOR) in the absence or in the presence of 1,10-phenanthroline (phen) were obtained and characterized. In both complexes, the ligand NOR was coordinated through a keto and a carboxyl oxygen. Tetrahedral and octahedral geometries were proposed for [ZnCl2(NOR)]·H2O (1) and [ZnCl2(NOR)(phen)]·2H2O (2), respectively. Since the biological activity of the chemicals depends on the pH value, pH titrations of the Zn(II) complexes were performed. UV spectroscopic studies of the interaction of the complexes with calf-thymus DNA (CT DNA) have suggested that they can bind to CT DNA with moderate affinity in an intercalative mode. The interactions between the Zn(II) complexes and bovine serum albumin (BSA) were investigated by steady-state and time-resolved fluorescence spectroscopy at pH 7.4. The experimental data showed static quenching of BSA fluorescence, indicating that both complexes bind to BSA. A modified Stern-Volmer plot for the quenching by complex 2 demonstrated preferential binding near one of the two tryptophan residues of BSA. The binding constants obtained (Kb) showed that BSA had a two orders of magnitude higher affinity for complex 2 than for 1. The results also showed that the affinity of both complexes for BSA was much higher than for DNA. This preferential interaction with protein sites could be important to their biological mechanisms of action. The Zn(II) complexes and the corresponding ligand were assayed in vitro against Trypanosoma cruzi, the causative agent of Chagas disease, and the data showed that complex 2 was the most active against bloodstream trypomastigotes. 14. Analysis of responsive characteristics of ionic-strength-sensitive hydrogel with consideration of effect of equilibrium constant by a chemo-electro-mechanical model.
PubMed Li, Hua; Lai, Fukun; Luo, Rongmo 2009-11-17 A multiphysics model is presented in this paper for analysis of the influence of various equilibrium constants on the smart hydrogel responsive to the ionic strength of environmental solution, and termed the multieffect-coupling ionic-strength stimulus (MECis) model. The model is characterized by a set of partial differential governing equations by consideration of the mass and momentum conservations of the system and coupled chemical, electrical, and mechanical multienergy domains. The Nernst-Planck equations are derived by the mass conservation of the ionic species in both the interstitial fluid of the hydrogel and the surrounding solution. The binding reaction between the fixed charge groups of the hydrogel and the mobile ions in the solution is described by the fixed charge equation, which is based on the Langmuir monolayer theory. As an important effect for the binding reaction, the equilibrium constant is incorporated into the fixed charge equation. The kinetics of the hydrogel swelling/deswelling is illustrated by the mechanical equation, based on the law of momentum conservation for the solid polymeric network matrix within the hydrogel. The MECis model is examined by comparison of the numerical simulations with experiments from the open literature. The analysis of the influence of different equilibrium constants on the responsive characteristics of the ionic-strength-sensitive hydrogel is carried out with detailed discussion. 15. On the use of dynamic fluorescence measurements to determine equilibrium and kinetic constants. The inclusion of pyrene in β-cyclodextrin cavities De Feyter, Steven; van Stam, Jan; Boens, Noël; De Schryver, Frans C. 1996-01-01 An analysis of the kinetic identifiability of two-state excited-state processes gives the conditions which have to be fulfilled to make it possible to estimate the ground-state equilibrium constant from dynamic fluorescence data.
For the aqueous system β-cyclodextrin:pyrene it turns out that the only kinetic parameters which can be estimated are (i) the deactivation rate constant of pyrene dissolved in the aqueous bulk, (ii) the rate of formation of a β-cyclodextrin:pyrene inclusion complex in the excited state, which is negligibly slow, and (iii) the sum of the rate constants for deactivation to the ground state and for exclusion into the aqueous bulk of the excited pyrene participating in inclusion complex formation. This sum cannot be separated into its individual rate constant contributions, and it is impossible to determine the ground-state equilibrium constant for the formation of β-cyclodextrin:pyrene inclusion complexes solely from fluorescence decay data, a fact not taken into account in the literature. 16. Analytic calculation of physiological acid-base parameters in plasma. PubMed Wooten, E W 1999-01-01 Analytic expressions for plasma total titratable base, base excess (DeltaCB), strong-ion difference, change in strong-ion difference (DeltaSID), change in Van Slyke standard bicarbonate (DeltaVSSB), anion gap, and change in anion gap are derived as a function of pH, total buffer ion concentration, and conditional molar equilibrium constants. The behavior of these various parameters under respiratory and metabolic acid-base disturbances for constant and variable buffer ion concentrations is considered. For constant noncarbonate buffer concentrations, DeltaSID = DeltaCB = DeltaVSSB, whereas these equalities no longer hold under changes in noncarbonate buffer concentration. The equivalence is restored if the reference state is changed to include the new buffer concentrations. 17. Beyond transition state theory: accurate description of nuclear quantum effects on the rate and equilibrium constants of chemical reactions using Feynman path integrals.
PubMed Vanícek, Jirí 2011-01-01 Nuclear tunneling and other nuclear quantum effects have been shown to play a significant role in molecules as large as enzymes even at physiological temperatures. I discuss how these quantum phenomena can be accounted for rigorously using Feynman path integrals in calculations of the equilibrium and kinetic isotope effects as well as of the temperature dependence of the rate constant. Because these calculations are extremely computationally demanding, special attention is devoted to increasing the computational efficiency by orders of magnitude by employing efficient path integral estimators. 18. Stability of equilibrium of a superconducting ring that levitates in the field of a fixed ring with constant current Bishaev, A. M.; Bush, A. A.; Gavrikov, M. B.; Kamentsev, K. E.; Kozintseva, M. V.; Savel'ev, V. V.; Sigov, A. S. 2015-11-01 In order to develop a plasma trap with levitating superconducting magnetic coils, it is necessary to search for their stable levitating states. An analytical expression for the potential energy of a single superconducting ring that captures a fixed magnetic flux in the field of a fixed ring with constant current versus the coordinate of the free ring on the axis of the system, deviation angle of its axis from the axis of the system, and radial displacement of its plane is derived for a uniform gravity field in the thin-ring approximation. The calculated stable levitation states of the superconducting ring in the field of the ring with constant current are confirmed experimentally. The generalization of such an approach to the levitation of several rings makes it possible to search for stable levitation states of several coils that form a magnetic system of a multipole trap. 19. The determination of equilibrium constants, DeltaG, DeltaH and DeltaS for vapour interaction with a pharmaceutical drug, using gravimetric vapour sorption.
PubMed Willson, Richard J; Beezer, Anthony E 2003-06-01 The application of gravimetric vapour sorption (GVS) to the characterisation of pharmaceutical drugs is often restricted to the study of gross behaviour such as a measure of hygroscopicity. Although useful in early development of a drug substance, for example, in salt selection screening exercises, such types of analysis may not contribute to a fundamental understanding of the properties of the material. This paper reports a new methodology for GVS experimentation that will allow specific sorption parameters to be calculated: the equilibrium constant (K), the van't Hoff enthalpy change (DeltaH(v)), the Gibbs free energy for sorption (DeltaG) and the entropy change for sorption (DeltaS). Unlike other reports of this type of analysis, which require the application of a specific model, this method is model-free. The analysis does require that over the narrow temperature range of the study DeltaH(v) is constant and there is no change in interaction mechanism. 20. The equilibrium constant for N2O5 = NO2 + NO3 - Absolute determination by direct measurement from 243 to 397 K NASA Technical Reports Server (NTRS) Cantrell, C. A.; Davidson, J. A.; Mcdaniel, A. H.; Shetter, R. E.; Calvert, J. G. 1988-01-01 Direct determinations of the equilibrium constant for the reaction N2O5 = NO2 + NO3 were carried out by measuring NO2, NO3, and N2O5 using long-path visible and infrared absorption spectroscopy as a function of temperature from 243 to 397 K. The first-order decay rate constant of N2O5 was experimentally measured as a function of temperature. These results are in turn used to derive a value for the rate coefficient for the NO-forming channel in the reaction of NO3 with NO2. The implications of the results for atmospheric chemistry, the thermodynamics of NO3, and for laboratory kinetics studies are discussed. 1.
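Both abstracts above rest on the same van't Hoff relationship: measuring K over a temperature range and fitting ln K against 1/T gives ΔH from the slope (-ΔH/R) and ΔS from the intercept (ΔS/R). Here is a self-contained Python sketch of that fit as a round trip on synthetic data; the ΔH and ΔS values below are invented for the demonstration, not taken from either paper:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def vant_hoff_fit(temps, keqs):
    """Least-squares fit of ln K = -dH/(R*T) + dS/R.
    Returns (dH in J/mol, dS in J/mol/K)."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(k) for k in keqs]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return -slope * R, intercept * R

# Round trip: generate K(T) from assumed values, then recover them
dH_true, dS_true = 93000.0, 140.0   # made-up endothermic dissociation
temps = [243.0, 273.0, 298.0, 350.0, 397.0]
keqs = [math.exp(-dH_true / (R * t) + dS_true / R) for t in temps]
print(vant_hoff_fit(temps, keqs))
```

With noiseless input the fit recovers the generating ΔH and ΔS essentially exactly; with real data the scatter of the points about the van't Hoff line sets the uncertainties.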
Equilibrium and rate constants, and reaction mechanism of the HF dissociation in the HF(H2O)7 cluster by ab initio rare event simulations. PubMed Elena, Alin Marin; Meloni, Simone; Ciccotti, Giovanni 2013-12-12 We perform restrained hybrid Monte Carlo (MC) simulations to compute the equilibrium constant of the dissociation reaction of HF in HF(H2O)7. We find that HF is a stronger acid in the cluster than in the bulk, and its acidity is higher at lower T. The latter phenomenon has a vibrational entropic origin, resulting from a counterintuitive balance of intra- and intermolecular terms. We also find a temperature dependence of the reaction mechanism. At low T (≤225 K) the dissociation reaction follows a concerted path, with the H atoms belonging to the relevant hydrogen bond chain moving synchronously. At higher T (300 K), the first two hydrogen atoms move together, forming an intermediate metastable state having the structure of an Eigen ion (H9O4(+)), and then the third hydrogen migrates, completing the reaction. We also compute the dissociation rate constant, kRP. At very low T (≤75 K) kRP depends strongly on the temperature, whereas it becomes almost constant at higher temperatures. With respect to the bulk, the HF dissociation in HF(H2O)7 is about 1 order of magnitude faster. This is due to a lower free energy barrier for the dissociation in the cluster. 2. Determination of the dissociation constant of valine from acetohydroxy acid synthase by equilibrium partition in an aqueous two-phase system. PubMed Engel, S; Vyazmensky, M; Barak, Z; Chipman, D M; Merchuk, J C 2000-06-23 An aqueous polyethylene glycol/salt two-phase system was used to estimate the dissociation constant, K(dis), of the Escherichia coli isoenzyme AHAS III regulatory subunit, ilvH protein, from the feedback inhibitor valine. The amounts of the bound and free radioactive valine in the system were determined.
A Scatchard plot of the data revealed a 1:1 valine-protein binding ratio and a K(dis) of 133 +/- 14 microM. The protein did not bind leucine, and the ilvH protein isolated from a valine-resistant mutant showed no valine binding. This method is very simple and rapid, and requires only small amounts of protein compared to the equilibrium dialysis method presently used. 3. Rate and equilibrium constant of the reaction of 1-methylvinoxy radicals with O2: CH3COCH2 + O2 <--> CH3COCH2O2. PubMed Hassouna, Melynda; Delbos, Eric; Devolder, Pascal; Viskolcz, Bela; Fittschen, Christa 2006-06-01 The reaction of 1-methylvinoxy radicals, CH3COCH2, with molecular oxygen has been investigated by experimental and theoretical methods as a function of temperature (291-520 K) and pressure (0.042-10 bar He). Experiments have been performed by laser photolysis coupled to a detection of 1-methylvinoxy radicals by laser-induced fluorescence (LIF). The potential energy surface calculations were performed using ab initio molecular orbital theory at the G3MP2B3 and CBSQB3 levels of theory based on density functional theory optimized geometries. Derived molecular properties of the characteristic points of the potential energy surface were used to describe the mechanism and kinetics of the reaction under investigation. At 295 K, no pressure dependence of the rate constant for the association reaction has been observed: k(1,298K) = (1.18 +/- 0.04) x 10(-12) cm3 s(-1). Biexponential decays have been observed in the temperature range 459-520 K and have been interpreted as an equilibrium reaction. The temperature-dependent equilibrium constants have been extracted from these decays, and a standard reaction enthalpy of deltaH(r,298K) = -105.0 +/- 2.0 kJ mol(-1) and entropy of deltaS(r,298K) = -143.0 +/- 4.0 J mol(-1) K(-1) were derived, in excellent agreement with the theoretical results.
Consistent heats of formation for the vinoxy and the 1-methylvinoxy radical as well as their O2 adducts are recommended based on our complementary experimental and theoretical study: deltaH(f,298K) = 13.0 +/- 2.0, -32.9 +/- 2.0, -85.9 +/- 4.0, and -142.1 +/- 4.0 kJ mol(-1) for the CH2CHO and CH3COCH2 radicals and their adducts, respectively. 4. Basis for the equilibrium constant in the interconversion of l-lysine and l-beta-lysine by lysine 2,3-aminomutase. PubMed Chen, Dawei; Tanem, Justinn; Frey, Perry A 2007-02-01 l-beta-lysine and beta-glutamate are produced by the actions of lysine 2,3-aminomutase and glutamate 2,3-aminomutase, respectively. The pK(a) values have been measured titrimetrically; for l-beta-lysine they are pK(1)=3.25 (carboxyl), pK(2)=9.30 (beta-aminium), and pK(3)=10.5 (epsilon-aminium). For beta-glutamate the values are pK(1)=3.13 (carboxyl), pK(2)=3.73 (carboxyl), and pK(3)=10.1 (beta-aminium). The equilibrium constants for reactions of 2,3-aminomutases favor the beta-isomers. The pH and temperature dependencies of K(eq) have been measured for the reaction of lysine 2,3-aminomutase to determine the basis for preferential formation of beta-lysine. The value of K(eq) (8.5 at 37 degrees C) is independent of pH between pH 6 and pH 11, ruling out differences in pK-values as the basis for the equilibrium constant. The K(eq)-value is temperature-dependent and ranges from 10.9 at 4 degrees C to 6.8 at 65 degrees C. The linear van't Hoff plot shows the reaction to be enthalpy-driven, with DeltaH degrees = -1.4 kcal mol(-1) and DeltaS degrees = -0.25 cal deg(-1) mol(-1). Exothermicity is attributed to the greater strength of the C(beta)-N(beta) bond in l-beta-lysine than of C(alpha)-N(alpha) in l-lysine, and this should hold for other amino acids. 5. Equilibrium binding constants for Tl+ with gramicidins A, B and C in a lysophosphatidylcholine environment determined by 205Tl nuclear magnetic resonance spectroscopy.
PubMed Central Hinton, J F; Koeppe, R E; Shungu, D; Whaley, W L; Paczkowski, J A; Millett, F S 1986-01-01 205Tl nuclear magnetic resonance (NMR) spectroscopy has been used to monitor the binding of Tl+ to gramicidins A, B, and C packaged in aqueous dispersions of lysophosphatidylcholine. For 5 mM gramicidin dimer in the presence of 100 mM lysophosphatidylcholine, only approximately 50% or less of the gramicidin appears to be accessible to Tl+. Analysis of the 205Tl chemical shift as a function of Tl+ concentration over the 0.65-50 mM range indicates that only one Tl+ ion can be bound by gramicidin A, B, or C under these experimental conditions. In this system, the Tl+ equilibrium binding constant is 582 +/- 20 M-1 for gramicidin A, 1949 +/- 100 M-1 for gramicidin B, and 390 +/- 20 M-1 for gramicidin C. Gramicidin B not only binds Tl+ more strongly than gramicidins A and C but also adopts a different conformational state, as shown by circular dichroism spectroscopy. The 205Tl NMR technique can now be extended to determinations of binding constants of other cations to gramicidin by competition studies using a 205Tl probe. PMID:2420383 6. Oligomer formation of the bacterial second messenger c-di-GMP: reaction rates and equilibrium constants indicate a monomeric state at physiological concentrations. PubMed Gentner, Martin; Allan, Martin G; Zaehringer, Franziska; Schirmer, Tilman; Grzesiek, Stephan 2012-01-18 Cyclic diguanosine-monophosphate (c-di-GMP) is a bacterial signaling molecule that triggers a switch from motile to sessile bacterial lifestyles. This mechanism is of considerable pharmaceutical interest, since it is related to bacterial virulence, biofilm formation, and persistence of infection. Previously, c-di-GMP has been reported to display a rich polymorphism of various oligomeric forms at millimolar concentrations, which differ in base stacking and G-quartet interactions.
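The 1:1 fast-exchange titration analysis above, in which the observed chemical shift is a population-weighted average of the free and bound resonances, can be sketched as follows. Only the gramicidin A association constant of 582 M-1 is taken from the abstract; the limiting shift values and the grid search are invented for illustration.

```python
import math

def complex_conc(K, P0, L0):
    """Equilibrium [PL] for P + L <-> PL, association constant K (M^-1)."""
    b = P0 + L0 + 1.0 / K
    return (b - math.sqrt(b * b - 4.0 * P0 * L0)) / 2.0

def obs_shift(K, P0, L0, d_free, d_bound):
    """Fast-exchange observed shift: population-weighted average."""
    f_bound = complex_conc(K, P0, L0) / L0
    return d_free + (d_bound - d_free) * f_bound

# Synthetic titration at the gramicidin A constant, K = 582 M^-1.
# The limiting shifts (0 and 120 ppm) are placeholders.
K_true, P0 = 582.0, 5e-3
L_tot = [0.65e-3, 2e-3, 5e-3, 10e-3, 25e-3, 50e-3]
data = [obs_shift(K_true, P0, L, 0.0, 120.0) for L in L_tot]

def sse(K):
    return sum((obs_shift(K, P0, L, 0.0, 120.0) - d) ** 2
               for L, d in zip(L_tot, data))

# crude 1-D grid search for the best-fit association constant
K_fit = min(range(100, 2001), key=lambda k: sse(float(k)))
```

A real analysis would also fit the limiting shifts; fixing them here keeps the sketch to a one-parameter search.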
Here, we have analyzed the equilibrium and exchange kinetics between these various forms by NMR spectroscopy. We find that the association of the monomer into a dimeric form is in fast exchange, with an equilibrium constant of about 1 mM. At concentrations above 100 μM, higher oligomers are formed in the presence of cations. These are presumably tetramers and octamers, with octamers dominating above about 0.5 mM. Thus, at the low micromolar concentrations of the cellular environment and in the absence of additional compounds that stabilize oligomers, c-di-GMP should be predominantly monomeric. This finding has important implications for the understanding of c-di-GMP recognition by protein receptors. In contrast to the monomer/dimer exchange, formation and dissociation of higher oligomers occurs on a time scale of several hours to days. The time course can be described quantitatively by a simple kinetic model where tetramers are intermediates of octamer formation. The extremely slow oligomer dissociation may generate severe artifacts in biological experiments when c-di-GMP is diluted from concentrated stock solution. We present a simple method to quantify c-di-GMP monomers and oligomers from UV spectra and a procedure to dissolve the unwanted oligomers by an annealing step. 7. Constraining the chlorine monoxide (ClO)/chlorine peroxide (ClOOCl) equilibrium constant from Aura Microwave Limb Sounder measurements of nighttime ClO. PubMed Santee, Michelle L; Sander, Stanley P; Livesey, Nathaniel J; Froidevaux, Lucien 2010-04-13 The primary ozone loss process in the cold polar lower stratosphere hinges on chlorine monoxide (ClO) and one of its dimers, chlorine peroxide (ClOOCl). Recently, analyses of atmospheric observations have suggested that the equilibrium constant, K(eq), governing the balance between ClOOCl formation and thermal decomposition in darkness is lower than that in the current evaluation of kinetics data.
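The c-di-GMP monomer/dimer balance described above can be made concrete with a short calculation. It assumes the quoted equilibrium constant of about 1 mM acts as a dimer dissociation constant Kd = [M]^2/[M2]; solving the mass balance then gives the monomer fraction at any total concentration.

```python
import math

def monomer_fraction(c_total, Kd):
    """Monomer fraction for 2M <-> M2 with Kd = [M]^2/[M2].
    Mass balance: c_total = [M] + 2*[M]^2/Kd, solved for [M]."""
    m = Kd * (math.sqrt(1.0 + 8.0 * c_total / Kd) - 1.0) / 4.0
    return m / c_total

Kd = 1e-3  # the ~1 mM monomer/dimer equilibrium constant cited above
f_cellular = monomer_fraction(10e-6, Kd)  # low-micromolar, cellular regime
f_nmr = monomer_fraction(1e-3, Kd)        # 1 mM, NMR-sample regime
```

At 1 mM total concentration half the molecules are monomeric, while at cellular, low-micromolar concentrations the monomer dominates, matching the abstract's conclusion.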
Measurements of ClO at night, when ClOOCl is unaffected by photolysis, provide a useful means of testing quantitative understanding of the ClO/ClOOCl relationship. Here we analyze nighttime ClO measurements from the National Aeronautics and Space Administration Aura Microwave Limb Sounder (MLS) to infer an expression for K(eq). Although the observed temperature dependence of the nighttime ClO is in line with the theoretical ClO/ClOOCl equilibrium relationship, none of the previously published expressions for K(eq) consistently produces ClO abundances that match the MLS observations well under all conditions. Employing a standard expression for K(eq), A x exp(B/T), we constrain the parameter A to currently recommended values and estimate B using a nonlinear weighted least squares analysis of nighttime MLS ClO data. ClO measurements at multiple pressure levels throughout the periods of peak chlorine activation in three Arctic and four Antarctic winters are used to estimate B. Our derived B leads to values of K(eq) that are approximately 1.4 times smaller at stratospherically relevant temperatures than currently recommended, consistent with earlier studies. Our results are in better agreement with the newly updated (2009) kinetics evaluation than with the previous (2006) recommendation. 8. Chemical Principles Revisited: Chemical Equilibrium. ERIC Educational Resources Information Center Mickey, Charles D. 1980-01-01 Describes: (1) Law of Mass Action; (2) equilibrium constant and ideal behavior; (3) general form of the equilibrium constant; (4) forward and reverse reactions; (5) factors influencing equilibrium; (6) Le Chatelier's principle; (7) effects of temperature, changing concentration, and pressure on equilibrium; and (8) catalysts and equilibrium. (JN) 9. Understanding Acid Base Disorders. PubMed Gomez, Hernando; Kellum, John A 2015-10-01 The concentration of hydrogen ions is regulated in biologic solutions. 
There are currently 3 recognized approaches to assess changes in acid base status. First is the traditional Henderson-Hasselbalch approach, also called the physiologic approach, which uses the relationship between HCO3(-) and Pco2; the second is the standard base excess approach based on the Van Slyke equation. The third approach is the quantitative or Stewart approach, which uses the strong ion difference and the total weak acids. This article explores the origins of the current concepts framing the existing methods to analyze acid base balance. 10. Evaluating the Equilibrium Association Constant between ArtinM Lectin and Myeloid Leukemia Cells by Impedimetric and Piezoelectric Label Free Approaches PubMed Central Carvalho, Fernanda C.; Martins, Denise C.; Santos, Adriano; Roque-Barreira, Maria-Cristina; Bueno, Paulo R. 2014-01-01 Label-free methods for evaluating lectin–cell binding have been developed to determine the lectin–carbohydrate interactions in the context of cell-surface oligosaccharides. In the present study, mass loading and electrochemical transducer signals were compared to characterize the interaction between lectin and cellular membranes by measuring the equilibrium association constant, Ka, between ArtinM lectin and the carbohydrate sites of NB4 leukemia cells. By functionalizing sensor interfaces with ArtinM, it was possible to determine Ka over a range of leukemia cell concentrations to construct analytical curves from impedimetric and/or mass-associated frequency shifts with analytical signals following a Langmuir pattern. Using the Langmuir isotherm-binding model, the Ka obtained were (8.9 ± 1.0) × 10−5 mL/cell and (1.05 ± 0.09) × 10−6 mL/cell with the electrochemical impedance spectroscopy (EIS) and quartz crystal microbalance (QCM) methods, respectively. The observed differences were attributed to the intrinsic characteristic sensitivity of each method in following Langmuir isotherm premises. PMID:25587428
12. Determination of equilibrium constant of amino carbamate adduct formation in sisomicin by a high pH based high performance liquid chromatography. PubMed Wlasichuk, Kenneth B; Tan, Li; Guo, Yushen; Hildebrandt, Darin J; Zhang, Hao; Karr, Dane E; Schmidt, Donald E 2015-01-01 Amino carbamate adduct formation from the amino group of an aminoglycoside and carbon dioxide has been postulated as a mechanism for reducing nephrotoxicity in the aminoglycoside class compounds. In this study, sisomicin was used as a model compound for amino carbamate analysis.
A high pH based reversed-phase high performance liquid chromatography (RP-HPLC) method is used to separate the amino carbamate from sisomicin. The carbamate is stable as the breakdown is inhibited at high pH and any reactive carbon dioxide is removed as the carbonate. The amino carbamate was quantified and the molar fraction of amine as the carbamate of sisomicin was obtained from the HPLC peak areas. The equilibrium constant of carbamate formation, Kc, was determined to be 3.3 × 10(-6) and it was used to predict the fraction of carbamate over the pH range in typical biological systems. Based on these results, the fraction of amino carbamate at physiological pH values is less than 13%, and the postulated mechanism for nephrotoxicity protection is not valid. The same methodology is applicable to other aminoglycosides. 13. Acid-Base Homeostasis. PubMed Hamm, L Lee; Nakhoul, Nazih; Hering-Smith, Kathleen S 2015-12-01 Acid-base homeostasis and pH regulation are critical for both normal physiology and cell metabolism and function. The importance of this regulation is evidenced by a variety of physiologic derangements that occur when plasma pH is either high or low. The kidneys have the predominant role in regulating the systemic bicarbonate concentration and hence, the metabolic component of acid-base balance. This function of the kidneys has two components: reabsorption of virtually all of the filtered HCO3(-) and production of new bicarbonate to replace that consumed by normal or pathologic acids. This production or generation of new HCO3(-) is done by net acid excretion. Under normal conditions, approximately one-third to one-half of net acid excretion by the kidneys is in the form of titratable acid. The other one-half to two-thirds is the excretion of ammonium. The capacity to excrete ammonium under conditions of acid loads is quantitatively much greater than the capacity to increase titratable acid.
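Several of the entries above lean on the bicarbonate buffer relationship. A minimal sketch of the Henderson-Hasselbalch calculation, using the textbook pKa of 6.1 and a CO2 solubility coefficient of 0.03 mmol/L per mmHg (standard values, not taken from these abstracts):

```python
import math

def henderson_hasselbalch(hco3_mEq_L, pco2_mmHg):
    """pH of the bicarbonate buffer system:
    pH = 6.1 + log10([HCO3-] / (0.03 * PCO2))."""
    return 6.1 + math.log10(hco3_mEq_L / (0.03 * pco2_mmHg))

ph_normal = henderson_hasselbalch(24.0, 40.0)  # normal values give pH near 7.40
```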
Multiple, often redundant pathways and processes exist to regulate these renal functions. Derangements in acid-base homeostasis, however, are common in clinical medicine and can often be related to the systems involved in acid-base transport in the kidneys. 14. Estimating the plasma effect-site equilibrium rate constant (Ke0) of propofol by fitting time of loss and recovery of consciousness. PubMed Wu, Qi; Sun, Baozhu; Wang, Shuqin; Zhao, Lianying; Qi, Feng 2013-01-01 The present paper proposes a new approach for fitting the plasma effect-site equilibrium rate constant (Ke0) of propofol to satisfy the condition that the effect-site concentration (Ce) is equal at the time of loss of consciousness (LOC) and recovery of consciousness (ROC). Forty patients receiving intravenous anesthesia were divided into 4 groups and injected with propofol at 1.4, 1.6, 1.8, or 2 mg/kg (1,200 mL/h). Durations from the start of injection to LOC and to ROC were recorded. LOC and ROC were defined as an observer's assessment of alertness and sedation scale change from 3 to 2 and from 2 to 3, respectively. Software implementing a bisection iteration algorithm was built. Then, the Ke0 satisfying the CeLOC=CeROC condition was estimated. The accuracy of the Ke0 estimated by our method was compared with the Diprifusor TCI Pump built-in Ke0 (0.26 min(-1)) and the Orchestra Workstation built-in Ke0 (1.21 min(-1)) in another group of 21 patients who were injected with propofol at 1.4 to 2 mg/kg. Our results show that the population Ke0 of propofol was 0.53 ± 0.18 min(-1). The regression equation for adjustment by dose (mg/kg) and age was Ke0=1.42-0.30 × dose-0.0074 × age. Only the Ke0 adjusted by dose and age achieved the level of accuracy required for clinical applications. We conclude that the Ke0 estimated based on clinical signs and the two-point fitting method significantly improved the ability of CeLOC to predict CeROC.
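The two-point bisection idea described above (choose Ke0 so that the effect-site concentration is the same at LOC and ROC) can be sketched as follows. The plasma profile here is a hypothetical mono-exponential bolus decay, not a real propofol pharmacokinetic model, and the LOC/ROC times are invented.

```python
import math

def ce_curve(ke0, cp, dt):
    """Euler integration of the effect-site model dCe/dt = ke0*(Cp - Ce)."""
    ce = [0.0]
    for k in range(1, len(cp)):
        ce.append(ce[-1] + dt * ke0 * (cp[k - 1] - ce[-1]))
    return ce

def fit_ke0(cp, dt, i_loc, i_roc, lo=0.05, hi=2.0, iters=60):
    """Bisection for the ke0 (min^-1) at which Ce(t_LOC) == Ce(t_ROC)."""
    def g(ke0):
        ce = ce_curve(ke0, cp, dt)
        return ce[i_loc] - ce[i_roc]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical plasma profile: mono-exponential decay after a bolus (minutes).
dt = 0.005
t = [i * dt for i in range(int(10 / dt) + 1)]
cp = [10.0 * math.exp(-0.3 * ti) for ti in t]
i_loc, i_roc = int(1.0 / dt), int(8.0 / dt)  # invented LOC/ROC times: 1 and 8 min
ke0 = fit_ke0(cp, dt, i_loc, i_roc)
```

Because Ce rises while Cp is high and falls later, requiring equal Ce at an early and a late time pins down a single equilibration rate.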
However, only the Ke0 adjusted by dose and age and not a fixed Ke0 value can meet clinical requirements of accuracy. 15. History of medical understanding and misunderstanding of Acid base balance. PubMed Aiken, Christopher Geoffrey Alexander 2013-09-01 To establish how controversies in understanding acid base balance arose, the literature on acid base balance was reviewed from 1909, when Henderson described how the neutral reaction of blood is determined by carbonic and organic acids being in equilibrium with an excess of mineral bases over mineral acids. From 1914 to 1930, Van Slyke and others established our acid base principles. They recognised that carbonic acid converts into bicarbonate all non-volatile mineral bases not bound by mineral acids and determined therefore that bicarbonate represents the alkaline reserve of the body and should be a physiological constant. They showed that standard bicarbonate is a good measure of acidosis caused by increased production or decreased elimination of organic acids. However, they recognised that bicarbonate improved low plasma bicarbonate but not high urine acid excretion in diabetic ketoacidosis, and that increasing pCO2 caused chloride to shift into cells raising plasma titratable alkali. Both indicate that minerals influence pH. In 1945 Darrow showed that hyperchloraemic metabolic acidosis in preterm infants fed milk with 5.7 mmol of chloride and 2.0 mmol of sodium per 100 kcal was caused by retention of chloride in excess of sodium. Similar findings were made but not recognised in later studies of metabolic acidosis in preterm infants. Shohl in 1921 and Kildeberg in 1978 presented the theory that carbonic and organic acids are neutralised by mineral base, where mineral base is the excess of mineral cations over anions and organic acid is the difference between mineral base, bicarbonate and protein anion. 
The degree of metabolic acidosis measured as base excess is determined by deviation in both mineral base and organic acid from normal. 16. Acid-base properties of xanthosine 5'-monophosphate (XMP) and of some related nucleobase derivatives in aqueous solution: micro acidity constant evaluations of the (N1)H versus the (N3)H deprotonation ambiguity. PubMed Massoud, Salah S; Corfù, Nicolas A; Griesser, Rolf; Sigel, Helmut 2004-10-11 The first acidity constant of fully protonated xanthosine 5'-monophosphate, that is, of H3(XMP)+, was estimated by means of a micro acidity constant scheme, and the following three deprotonations of the H2(XMP)+/- (pKa=0.97), H(XMP)- (5.30), and XMP2- (6.45) species were determined by potentiometric pH titrations; further deprotonation of (XMP-H)3- is possible only with pKa>12. The most important result is that the xanthine residue is deprotonated before the P(O)2(OH)- group loses its final proton; that is, twofold negatively charged XMP carries one negative charge in the pyrimidine ring and one at the phosphate group. Micro acidity constant evaluations reveal that the latter species occurs with a formation degree of 88 %, whereas its tautomer with a neutral xanthine moiety and a PO3(2-) group is formed only to 12 %; this distinguishes XMP from its related nucleoside 5'-monophosphates, like guanosine 5'-monophosphate. At the physiological pH of about 7.5 mainly (XMP-H)3- exists. The question of which purine site, (N1)H or (N3)H, is deprotonated in this species cannot be answered unequivocally, though it appears that the (N3)H site is more acidic.
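Species distributions of the kind evaluated above follow directly from the stepwise acidity constants. A sketch using the macroscopic pKa values reported for XMP (0.97, 5.30, 6.45); the function is a generic speciation calculation, not the authors' micro-constant analysis.

```python
def speciation(pkas, ph):
    """Fractions of successive deprotonation states given stepwise pKa values.
    Returns [f0, f1, ..., fn], where f_j is the species with j protons removed."""
    terms = [1.0]
    for pka in pkas:
        terms.append(terms[-1] * 10.0 ** (ph - pka))
    total = sum(terms)
    return [term / total for term in terms]

# stepwise macroscopic pKa values for H2(XMP)+/- from the abstract above
fracs_ph74 = speciation([0.97, 5.30, 6.45], 7.4)
```

At pH 7.4 the fully deprotonated species (index 3, corresponding to (XMP-H)3-) dominates the distribution, consistent with the abstract's statement about physiological pH.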
By application of several methylated xanthine species, intrinsic micro acidity constants are calculated, and it is shown that, for example, for 7-methylxanthine the N1-deprotonated tautomer occurs with a formation degree of about 5 %; a small but significant amount that, as is discussed, may possibly be enhanced by metal ion coordination to N7, which is known to occur preferentially at this site. 17. Three applications of path integrals: equilibrium and kinetic isotope effects, and the temperature dependence of the rate constant of the [1,5] sigmatropic hydrogen shift in (Z)-1,3-pentadiene. PubMed Zimmermann, Tomáš; Vaníček, Jiří 2010-11-01 Recent experiments have confirmed the importance of nuclear quantum effects even in large biomolecules at physiological temperature. Here we describe how the path integral formalism can be used to describe rigorously the nuclear quantum effects on equilibrium and kinetic properties of molecules. Specifically, we explain how path integrals can be employed to evaluate the equilibrium (EIE) and kinetic (KIE) isotope effects, and the temperature dependence of the rate constant. The methodology is applied to the [1,5] sigmatropic hydrogen shift in pentadiene. Both the KIE and the temperature dependence of the rate constant confirm the importance of tunneling and other nuclear quantum effects as well as of the anharmonicity of the potential energy surface. Moreover, previous results on the KIE were improved by using a combination of a high level electronic structure calculation within the harmonic approximation with a path integral anharmonicity correction using a lower level method. 18. Analysis of fast and slow acid dissociation equilibria of 3',3″,5',5″-tetrabromophenolphthalein and determination of its equilibrium constants by capillary zone electrophoresis.
PubMed Takayanagi, Toshio 2013-01-01 Acid dissociation constants of 3',3″,5',5″-tetrabromophenolphthalein (TBPP) were determined in an aqueous solution by capillary zone electrophoresis at an ionic strength of 0.01 mol/L. Two steps of the fast acid-dissociation equilibria, including the precipitable species H2TBPP, were analyzed in a weakly acidic pH region by using the change in effective electrophoretic mobility of TBPP with the pH of the separation buffer. On the other hand, an acid-dissociation reaction of TBPP in an alkaline pH region was reversible, but very slow to reach its equilibrium; the two TBPP species concerned with the equilibrium were detected as distinct signals in the electropherograms. After reaching its equilibrium, the acid-dissociation constant was determined with the signal height corresponding to its dianion form. Thus, three steps of the acid dissociation constants of TBPP were determined in an aqueous solution as pKa1 = 5.29 ± 0.06, pKa2 = 6.35 ± 0.02, and pKa3 = 11.03 ± 0.04. 19. Assessment of acid-base balance. Stewart's approach. PubMed Fores-Novales, B; Diez-Fores, P; Aguilera-Celorrio, L J 2016-04-01 The study of acid-base equilibrium, its regulation and its interpretation have been a source of debate since the beginning of the 20th century. The most accepted and commonly used analyses are based on pH, a notion first introduced by Sorensen in 1909, and on the Henderson-Hasselbalch equation (1916). Since then new concepts have been developed in order to complete and simplify the understanding of acid-base disorders. In the early 1980s Peter Stewart brought the traditional interpretation of acid-base disturbances into question and proposed a new method. This innovative approach seems more suitable for studying acid-base abnormalities in critically ill patients. The aim of this paper is to update acid-base concepts, methods, limitations and applications. 20.
Calculation of equilibrium constants from multiwavelength spectroscopic data--II: SPECFIT: two user-friendly programs in basic and standard FORTRAN 77. PubMed Gampp, H; Maeder, M; Meyer, C J; Zuberbühler, A D 1985-04-01 A new program (SPECFIT), written in HP BASIC or FORTRAN 77, for the calculation of stability constants from spectroscopic data, is presented. Stability constants have been successfully calculated from multiwavelength spectrophotometric and EPR data, but the program can be equally well applied to the numerical treatment of other spectroscopic measurements. The special features included in SPECFIT to improve convergence, increase numerical reliability, and minimize memory and computing time requirements are (i) elimination of the linear parameters (i.e., molar absorptivities), (ii) the use of analytical instead of numerical derivatives, and (iii) factor analysis. Calculation of stability constants from spectroscopic data is then as straightforward as from potentiometric titration curves and gives results of analogous reproducibility. The spectroscopic method has proved, however, to be superior in discriminating between chemical models. 1. Thermodynamic and microscopic equilibrium constants of molecular species formed from pyridoxal 5'-phosphate and 2-amino-3-phosphonopropionic acid in aqueous and D2O solution SciTech Connect Szpoganicz, B.; Martell, A.E. 1984-09-19 Schiff base formation between pyridoxal 5'-phosphate (PLP) and 2-amino-3-phosphonopropionic acid (APP) has been investigated by measurement of the corresponding NMR and electronic absorption spectra. A value of 0.26 was found for the formation constant of the completely deprotonated Schiff base species, and is much smaller than the values reported for pyridoxal-beta-chloroalanine and pyridoxal-O-phosphoserine.
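SPECFIT's first special feature, elimination of the linear parameters, can be illustrated with a toy one-wavelength, two-species 1:1 titration: for each trial stability constant the molar absorptivities are obtained by closed-form linear least squares, so only the constant itself has to be searched nonlinearly. All concentrations, absorptivities, and the grid here are invented for demonstration; this is not the SPECFIT algorithm itself.

```python
import math

def concentrations(logK, M0, L_tot):
    """Free [M] and complex [ML] for M + L <-> ML, K = 10**logK."""
    K = 10.0 ** logK
    out = []
    for L0 in L_tot:
        b = M0 + L0 + 1.0 / K
        ml = (b - math.sqrt(b * b - 4.0 * M0 * L0)) / 2.0
        out.append((M0 - ml, ml))
    return out

def residual(logK, M0, L_tot, absorbance):
    """Residual after eliminating the linear parameters: the two molar
    absorptivities are solved in closed form via 2x2 normal equations."""
    C = concentrations(logK, M0, L_tot)
    S11 = sum(m * m for m, _ in C)
    S22 = sum(x * x for _, x in C)
    S12 = sum(m * x for m, x in C)
    b1 = sum(m * a for (m, _), a in zip(C, absorbance))
    b2 = sum(x * a for (_, x), a in zip(C, absorbance))
    det = S11 * S22 - S12 * S12
    e1 = (S22 * b1 - S12 * b2) / det
    e2 = (S11 * b2 - S12 * b1) / det
    return sum((e1 * m + e2 * x - a) ** 2
               for (m, x), a in zip(C, absorbance))

# invented titration: true logK = 4.0, absorptivities 50 and 900
M0 = 1e-4
L_tot = [2e-5 * i for i in range(1, 11)]
absorb = [50.0 * m + 900.0 * ml for m, ml in concentrations(4.0, M0, L_tot)]

# only the nonlinear parameter (logK) is scanned
best_logK = min((3.0 + 0.05 * i for i in range(41)),
                key=lambda lk: residual(lk, M0, L_tot, absorb))
```

Projecting out the absorptivities this way is what keeps the nonlinear search low-dimensional, which is the point of feature (i).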
The protonation constants for the aldehyde and hydrate forms of PLP were determined in D2O by measurement of the variation of chemical shifts with pD (the pH analogue in D2O). The hydration constants of PLP were determined in a pD range 2-12, and species distributions were calculated. The protonation constants of the APP-PLP Schiff base determined by NMR in D2O were found to have the log values 12.54, 8.10, 6.70, and 5.95, and the species distributions were calculated for a range of pD values. Evidence is reported for hydrogen bonding involving the phosphate and phosphonate groups of the diprotonated Schiff base. The cis and trans forms of the Schiff bases were distinguished with the aid of the nuclear Overhauser effect. 43 references, 9 figures, 3 tables. 2. Students' Understanding of Acids/Bases in Organic Chemistry Contexts ERIC Educational Resources Information Center Cartrette, David P.; Mayo, Provi M. 2011-01-01 Understanding key foundational principles is vital to learning chemistry across different contexts. One such foundational principle is the acid/base behavior of molecules. In the general chemistry sequence, the Bronsted-Lowry theory is stressed, because it lends itself well to studying equilibrium and kinetics. However, the Lewis theory of… 3. Formation and reactivity of a porphyrin iridium hydride in water: acid dissociation constants and equilibrium thermodynamics relevant to Ir-H, Ir-OH, and Ir-CH2- bond dissociation energetics. PubMed 2011-11-01 Aqueous solutions of group nine metal(III) (M = Co, Rh, Ir) complexes of tetra(3,5-disulfonatomesityl)porphyrin [(TMPS)M(III)] form an equilibrium distribution of aquo and hydroxo complexes ([(TMPS)M(III)(D(2)O)(2-n)(OD)(n)]((7+n)-)).
Evaluation of acid dissociation constants for coordinated water shows that the extent of proton dissociation from water increases regularly on moving down the group from cobalt to iridium, which is consistent with the expected order of increasing metal-ligand bond strengths. Aqueous (D(2)O) solutions of [(TMPS)Ir(III)(D(2)O)(2)](7-) react with dihydrogen to form an iridium hydride complex ([(TMPS)Ir-D(D(2)O)](8-)) with an acid dissociation constant of 1.8(0.5) × 10(-12) (298 K), which is much smaller than that of the Rh-D derivative (4.3 (0.4) × 10(-8)), reflecting a stronger Ir-D bond. The iridium hydride complex reacts with ethene and acetaldehyde to form the organometallic derivatives [(TMPS)Ir-CH(2)CH(2)D(D(2)O)](8-) and [(TMPS)Ir-CH(OD)CH(3)(D(2)O)](8-). Only a six-coordinate carbonyl complex [(TMPS)Ir-D(CO)](8-) is observed for reaction of the Ir-D with CO (P(CO) = 0.2-2.0 atm), which contrasts with the (TMPS)Rh-D analog, which reacts with CO to produce an equilibrium with a rhodium formyl complex ([(TMPS)Rh-CDO(D(2)O)](8-)). Reactivity studies and equilibrium thermodynamic measurements were used to discuss the relative M-X bond energetics (M = Rh, Ir; X = H, OH, and CH(2)-) and the thermodynamically favorable oxidative addition of water with the (TMPS)Ir(II) derivatives. 4. Automated method for determination of dissolved organic carbon-water distribution constants of structurally diverse pollutants using pre-equilibrium solid-phase microextraction. PubMed Ripszam, Matyas; Haglund, Peter 2015-02-01 Dissolved organic carbon (DOC) plays a key role in determining the environmental fate of semivolatile organic environmental contaminants. The goal of the present study was to develop a method using commercially available hardware to rapidly characterize the sorption properties of DOC in water samples. The resulting method uses negligible-depletion direct immersion solid-phase microextraction (SPME) and gas chromatography-mass spectrometry.
Its performance was evaluated using Nordic reference fulvic acid and 40 priority environmental contaminants that cover a wide range of physicochemical properties. Two SPME fibers had to be used to cope with the span of properties: one coated with polydimethylsiloxane and one coated with polystyrene divinylbenzene polydimethylsiloxane, for nonpolar and semipolar contaminants, respectively. The measured DOC-water distribution constants showed reasonably good reproducibility (standard deviation ≤ 0.32) and good correlation (R(2) = 0.80) with log octanol-water partition coefficients for nonpolar persistent organic pollutants. The sample pretreatment is limited to filtration, and the method is easy to adjust to different DOC concentrations. These experiments also utilized the latest SPME automation that largely decreases total cycle time (to 20 min or shorter) and increases sample throughput, which is advantageous in cases when many samples of DOC must be characterized or when the determinations must be performed quickly, for example, to avoid precipitation, aggregation, and other changes of DOC structure and properties. The data generated by this method are valuable as a basis for transport and fate modeling studies. 5. An Acid-Base Chemistry Example: Conversion of Nicotine Summerfield, John H. 1999-10-01 The current government interest in nicotine conversion by cigarette companies provides an example of acid-base chemistry that can be explained to students in the second semester of general chemistry. In particular, the conversion by ammonia of the +1 form of nicotine to the easier-to-assimilate free-base form illustrates the effect of pH on acid-base equilibrium. The part played by ammonia in tobacco smoke is analogous to what takes place when cocaine is "free-based". 6. Chemical Principles Revisited: Using the Equilibrium Concept. ERIC Educational Resources Information Center Mickey, Charles D., Ed.
1981-01-01 Discusses the concept of equilibrium in chemical systems, particularly in relation to predicting the position of equilibrium, predicting spontaneity of a reaction, quantitative applications of the equilibrium constant, heterogeneous equilibrium, determination of the solubility product constant, common-ion effect, and dissolution of precipitates.… 7. A study of pH-dependent photodegradation of amiloride by a multivariate curve resolution approach to combined kinetic and acid-base titration UV data. PubMed De Luca, Michele; Ioele, Giuseppina; Mas, Sílvia; Tauler, Romà; Ragno, Gaetano 2012-11-21 Amiloride photostability at different pH values was studied in depth by applying Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) to the UV spectrophotometric data from drug solutions exposed to stressing irradiation. Resolution of all degradation photoproducts was possible by simultaneous spectrophotometric analysis of kinetic photodegradation and acid-base titration experiments. Amiloride photodegradation was shown to be strongly dependent on pH. Two hard modelling constraints were sequentially used in MCR-ALS for the unambiguous resolution of all the species involved in the photodegradation process. An amiloride acid-base system was defined by using the equilibrium constraint, and the photodegradation pathway was modelled taking into account the kinetic constraint. The simultaneous analysis of photodegradation and titration experiments revealed the presence of eight different species, which were differently distributed according to pH and time. Concentration profiles of all the species as well as their pure spectra were resolved and kinetic rate constants were estimated. The values of rate constants changed with pH and under alkaline conditions the degradation pathway and photoproducts also changed. These results were compared to those obtained by LC-MS analysis from drug photodegradation experiments.
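Kinetic rate constants such as those estimated above are, in the simplest first-order case, recoverable from a log-linear fit of concentration versus time. A sketch with synthetic data; the rate constant is illustrative, not a measured amiloride value.

```python
import math

def first_order_k(times, concs):
    """Log-linear least-squares estimate of k in C(t) = C0*exp(-k*t)."""
    ys = [math.log(c) for c in concs]
    n = len(times)
    tbar, ybar = sum(times) / n, sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    return -slope

# synthetic decay with an illustrative k = 0.12 min^-1
times = [0.0, 5.0, 10.0, 20.0, 40.0]
concs = [math.exp(-0.12 * t) for t in times]
k_fit = first_order_k(times, concs)
```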
MS analysis allowed the identification of up to five species and showed the simultaneous presence of more than one acid-base equilibrium. 8. Rapid-Equilibrium Enzyme Kinetics ERIC Educational Resources Information Center Alberty, Robert A. 2008-01-01 Rapid-equilibrium rate equations for enzyme-catalyzed reactions are especially useful because if experimental data can be fit by these simpler rate equations, the Michaelis constants can be interpreted as equilibrium constants. However, for some reactions it is necessary to use the more complicated steady-state rate equations. Thermodynamics is… 9. Renal acidification responses to respiratory acid-base disorders. PubMed 2010-01-01 Respiratory acid-base disorders are those abnormalities in acid-base equilibrium that are expressed as primary changes in the arterial carbon dioxide tension (PaCO2). An increase in PaCO2 (hypercapnia) acidifies body fluids and initiates the acid-base disturbance known as respiratory acidosis. By contrast, a decrease in PaCO2 (hypocapnia) alkalinizes body fluids and initiates the acid-base disturbance known as respiratory alkalosis. The impact on systemic acidity of these primary changes in PaCO2 is ameliorated by secondary, directional changes in plasma [HCO3¯] that occur in 2 stages. Acutely, hypercapnia or hypocapnia yields relatively small changes in plasma [HCO3¯] that originate virtually exclusively from titration of the body's nonbicarbonate buffers. During sustained hypercapnia or hypocapnia, much larger changes in plasma [HCO3¯] occur that reflect adjustments in renal acidification mechanisms. Consequently, the deviation of systemic acidity from normal is smaller in the chronic forms of these disorders. Here we provide an overview of the renal acidification responses to respiratory acid-base disorders. We also identify gaps in knowledge that require further research. 
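The acute versus chronic secondary responses described above can be put into numbers with commonly taught rules of thumb: plasma HCO3- rises roughly 1 mEq/L per 10 mmHg of acute PaCO2 elevation and about 3.5 mEq/L per 10 mmHg when the hypercapnia is sustained. These coefficients are textbook approximations, not values from the abstract.

```python
import math

def ph_bicarb(hco3, pco2):
    """Henderson-Hasselbalch pH from [HCO3-] (mEq/L) and PaCO2 (mmHg)."""
    return 6.1 + math.log10(hco3 / (0.03 * pco2))

def expected_hco3(pco2, chronic):
    """Rule-of-thumb secondary HCO3- response to hypercapnia:
    ~1 mEq/L (acute) or ~3.5 mEq/L (chronic) rise per 10 mmHg PaCO2."""
    delta = (pco2 - 40.0) / 10.0
    return 24.0 + (3.5 if chronic else 1.0) * delta

pco2 = 60.0
ph_acute = ph_bicarb(expected_hco3(pco2, chronic=False), pco2)
ph_chronic = ph_bicarb(expected_hco3(pco2, chronic=True), pco2)
```

For a PaCO2 of 60 mmHg the chronic pH deviates less from 7.40 than the acute pH, reflecting the renal adjustment the abstract describes.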
PubMed Erdey, L; Gimesi, O; Szabadváry, F 1969-03-01 Acid-base titrations can be performed with radiometric end-point detection by use of labelled metal salts (e.g., ZnCl(2), HgCl(2)). Owing to the formation or dissolution of the corresponding hydroxide after the equivalence point, the activity of the titrated solution linearly increases or decreases as excess of standard solution is added. The end-point of the titration is determined graphically. 11. Electroreduction and acid-base properties of dipyrrolylquinoxalines. PubMed Fu, Zhen; Zhang, Min; Zhu, Weihua; Karnas, Elizabeth; Mase, Kentaro; Ohkubo, Kei; Sessler, Jonathan L; Fukuzumi, Shunichi; Kadish, Karl M 2012-10-18 The electroreduction and acid-base properties of dipyrrolylquinoxalines of the form H(2)DPQ, H(2)DPQ(NO(2)), and H(2)DPQ(NO(2))(2) were investigated in benzonitrile (PhCN) containing 0.1 M tetra-n-butylammonium perchlorate (TBAP). This study focuses on elucidating the complete electrochemistry, spectroelectrochemistry, and acid-base properties of H(2)DPQ(NO(2))(n) (n = 0, 1, or 2) in PhCN before and after the addition of trifluoroacetic acid (TFA), tetra-n-butylammonium hydroxide (TBAOH), tetra-n-butylammonium fluoride (TBAF), or tetra-n-butylammonium acetate (TBAOAc) to solution. Electrochemical and spectroelectrochemical data provide support for the formation of a monodeprotonated anion after disproportionation of a dipyrrolylquinoxaline radical anion produced initially. The generated monoanion is then further reduced in two reversible one-electron-transfer steps at more negative potentials in the case of H(2)DPQ(NO(2)) and H(2)DPQ(NO(2))(2). Electrochemically monitored titrations of H(2)DPQ(NO(2))(n) with OH(-), F(-), or OAc(-) (in the form of TBA(+)X(-) salts) give rise to the same monodeprotonated H(2)DPQ(NO(2))(n) produced during electroreduction in PhCN. This latter anion can then be reduced in two additional one-electron-transfer steps in the case of H(2)DPQ(NO(2)) and H(2)DPQ(NO(2))(2). 
Spectroscopically monitored titrations of H(2)DPQ(NO(2))(n) with X(-) show a 1:2 stoichiometry and provide evidence for the production of both [H(2)DPQ(NO(2))(n)](-) and XHX(-). The spectroscopically measured equilibrium constants range from log β(2) = 5.3 for the reaction of H(2)DPQ with TBAOAc to log β(2) = 8.8 for the reaction of H(2)DPQ(NO(2))(2) with TBAOH. These results are consistent with a combined deprotonation and anion binding process. Equilibrium constants for the addition of one H(+) to each quinoxaline nitrogen of H(2)DPQ, H(2)DPQ(NO(2)), and H(2)DPQ(NO(2))(2) in PhCN containing 0.1 M TBAP were also determined via electrochemical and spectroscopic means. 12. The Conceptual Change Approach to Teaching Chemical Equilibrium ERIC Educational Resources Information Center Canpolat, Nurtac; Pinarbasi, Tacettin; Bayrakceken, Samih; Geban, Omer 2006-01-01 This study investigates the effect of a conceptual change approach over traditional instruction on students' understanding of chemical equilibrium concepts (e.g. dynamic nature of equilibrium, definition of equilibrium constant, heterogeneous equilibrium, qualitative interpreting of equilibrium constant, changing the reaction conditions). This… 13. Use of lipophilic ion adsorption isotherms to determine the surface area and the monolayer capacity of a chromatographic packing, as well as the thermodynamic equilibrium constant for its adsorption. PubMed Cecchi, T 2005-04-29 A method that combines the approaches of two independent research groups, to quantitate the chromatographic stationary phase surface available for lipophilic ion adsorption, is presented. For the first time the non-approximated expression of the electrostatically modified Langmuir adsorption isotherm was used. The non-approximated Gouy-Chapman (G-C) theory equation was used to give the rigorous surface potential.
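The potential-modified Langmuir isotherm named in the abstract above couples surface coverage to the Gouy-Chapman surface potential, so coverage must be solved self-consistently. The sketch below is an illustrative reconstruction, not the paper's implementation: every parameter value (monolayer capacity, adsorption constant, ionic strength) is invented for demonstration, and a 1:1 electrolyte at 25 °C is assumed.

```python
import math

F, R, T = 96485.0, 8.314, 298.15        # Faraday, gas constant, temperature
EPS = 78.4 * 8.854e-12                  # permittivity of water, F/m
GAMMA_MAX = 3.0e-6                      # assumed monolayer capacity, mol/m^2
K_ADS = 100.0                           # assumed adsorption constant, L/mol
Z = 1                                   # charge of the adsorbing lipophilic ion

def gc_potential(sigma, ionic_strength):
    """Non-approximated Gouy-Chapman potential (V) for a 1:1 electrolyte."""
    a = math.sqrt(8 * EPS * R * T * ionic_strength * 1000.0)
    return (2 * R * T / (Z * F)) * math.asinh(sigma / a)

def coverage(c, ionic_strength):
    """Fractional coverage from the potential-modified Langmuir isotherm,
    solved self-consistently by bisection (the residual is monotone)."""
    def residual(gamma):
        psi = gc_potential(Z * F * gamma, ionic_strength)
        x = K_ADS * c * math.exp(-Z * F * psi / (R * T))
        return gamma - GAMMA_MAX * x / (1 + x)
    lo, hi = 0.0, GAMMA_MAX
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi) / GAMMA_MAX

# Electrostatic self-repulsion keeps coverage below the plain Langmuir value,
# and added salt screens the potential, raising coverage again.
plain = K_ADS * 1e-3 / (1 + K_ADS * 1e-3)
print(coverage(1e-3, 0.1), "<", plain)
```

This also shows why linearizing the isotherm is delicate: the Boltzmann factor exp(-ZFψ/RT) changes with coverage itself, which is the point the abstract makes about when simplified retention equations can be used.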
The method helps model makers, interested in ionic interactions, determine whether the potential-modified Langmuir isotherm can be linearized, and, accordingly, whether simplified retention equations can be properly used. The theory cultivated here allows estimates not only of the chromatographically accessible surface area, but also of the thermodynamic equilibrium constant for the adsorption of the amphiphile, the standard free energy of its adsorption, and the monolayer capacity of the packing. In addition, it establishes the limit between a theoretical and an empirical use of the Freundlich isotherm to determine the surface area. Estimates of the parameters characterising the chromatographic system are reliable from the physical point of view, and this greatly validates the present comprehensive approach. 14. Temperature dependence of the NO3 absorption cross-section above 298 K and determination of the equilibrium constant for NO3 + NO2 <--> N2O5 at atmospherically relevant conditions. PubMed Osthoff, Hans D; Pilling, Michael J; Ravishankara, A R; Brown, Steven S 2007-11-21 The reaction NO3 + NO2 <--> N2O5 was studied over the 278-323 K temperature range. Concentrations of NO3, N2O5, and NO2 were measured simultaneously in a 3-channel cavity ring-down spectrometer. Equilibrium constants were determined over atmospherically relevant concentration ranges of the three species in both synthetic samples in the laboratory and ambient air samples in the field. A fit to the laboratory data yielded Keq = (5.1 ± 0.8) × 10^-27 × e^((10871 ± 46)/T) cm3 molecule(-1). The temperature dependence of the NO3 absorption cross-section at 662 nm was investigated over the 298-388 K temperature range. The line width was found to be independent of temperature, in agreement with previous results. New data for the peak cross section (662.2 nm, vacuum wavelength) were combined with previous measurements in the 200 K-298 K region.
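The equilibrium-constant fit reported for NO3 + NO2 <--> N2O5 has the van 't Hoff form Keq = A·e^(B/T). Using only the central fit values (A = 5.1e-27 cm3/molecule, B = 10871 K) and ignoring the stated uncertainties:

```python
import math

def keq_n2o5(temp_k):
    """Central-value fit for the NO3 + NO2 <--> N2O5 equilibrium constant
    (cm3/molecule); the fit uncertainties are ignored here."""
    return 5.1e-27 * math.exp(10871.0 / temp_k)

print(keq_n2o5(298.0))                    # ~3.6e-11 cm3/molecule
print(keq_n2o5(278.0) / keq_n2o5(298.0))  # ~14: N2O5 strongly favoured when cold
```

The steep exponential is why N2O5 becomes an important nighttime reservoir of nitrogen oxides at low temperatures: a 20 K drop shifts the equilibrium by more than an order of magnitude.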
A least-squares fit to the combined data gave sigma = [(4.582 ± 0.096) - (0.00796 ± 0.00031) × T] × 10^-17 cm2 molecule(-1). 15. A chemical equilibrium model for metal adsorption onto bacterial surfaces Fein, Jeremy B.; Daughney, Christopher J.; Yee, Nathan; Davis, Thomas A. 1997-08-01 This study quantifies metal adsorption onto cell wall surfaces of Bacillus subtilis by applying equilibrium thermodynamics to the specific chemical reactions that occur at the water-bacteria interface. We use acid/base titrations to determine deprotonation constants for the important surface functional groups, and we perform metal-bacteria adsorption experiments, using Cd, Cu, Pb, and Al, to yield site-specific stability constants for the important metal-bacteria surface complexes. The acid/base properties of the cell wall of B. subtilis can best be characterized by invoking three distinct types of surface organic acid functional groups, with pKa values of 4.82 ± 0.14, 6.9 ± 0.5, and 9.4 ± 0.6. These functional groups likely correspond to carboxyl, phosphate, and hydroxyl sites, respectively, that are displayed on the cell wall surface. The results of the metal adsorption experiments indicate that both the carboxyl sites and the phosphate sites contribute to metal uptake. The values of the log stability constants for metal-carboxyl surface complexes range from 3.4 for Cd, 4.2 for Pb, 4.3 for Cu, to 5.0 for Al. These results suggest that the stabilities of the metal-surface complexes are high enough for metal-bacterial interactions to affect metal mobilities in many aqueous systems, and this approach enables quantitative assessment of the effects of bacteria on metal mobilities. 16. Grinding kinetics and equilibrium states NASA Technical Reports Server (NTRS) 1984-01-01 The temporary and permanent equilibrium occurring during the initial stage of cement grinding does not indicate the end of comminution, but rather an increased energy consumption during grinding.
The constant dynamic equilibrium occurs after a long grinding period, indicating the end of comminution for a given particle size. Grinding equilibrium curves can be constructed to show the stages of comminution and agglomeration for certain particle sizes. 17. Hemolymph acid-base balance of the crayfish Astacus leptodactylus as a function of the oxygenation and the acid-base balance of the ambient water. PubMed Dejours, P; Armand, J 1980-07-01 The acid-base balance of the prebranchial hemolymph of the crayfish Astacus leptodactylus was studied at various acid-base balances and levels of oxygenation of the ambient water at 13 degrees C. The water acid-base balance was controlled automatically by a pH-CO2-stat. Into water of constant titration alkalinity, TA, this device intermittently injects carbon dioxide to maintain the pH at a preset value. Water pH was reduced to the same value either by hypercapnia (at constant TA) or by adding HCl or H2SO4 to decrease the TA (at constant CO2 tension). Decrease of hemolymph pH and increase of hemolymph PCO2 were similar for the three acidic waters. Water oxygenation changes strongly affected hemolymph ABB. In crayfish living in hyperoxic water (PO2 ≈ 600 Torr) compared to those in hypoxic water (PO2 ≈ 40 Torr), hemolymph pH was 0.3 to 0.4 unit lower and hemolymph PCO2 several times higher, the exact values of pH and PCO2 depending on the controlled ambient acid-base balance. In any study of the hemolymph acid-base balance of the crayfish, it is as important to control the ambient water's acid-base balance and oxygenation as it is to control its temperature, a conclusion which probably holds true for studies on all water breathers. 18.
Implementing an Equilibrium Law Teaching Sequence for Secondary School Students to Learn Chemical Equilibrium ERIC Educational Resources Information Center Ghirardi, Marco; Marchetti, Fabio; Pettinari, Claudio; Regis, Alberto; Roletto, Ezio 2015-01-01 A didactic sequence is proposed for the teaching of the chemical equilibrium law. In this approach, we have avoided the kinetic derivation and the thermodynamic justification of the equilibrium constant. The equilibrium constant expression is established empirically by a trial-and-error approach. Additionally, students learn to use the criterion of… 19. Molten fatty acid based microemulsions. PubMed Noirjean, Cecile; Testard, Fabienne; Dejugnat, Christophe; Jestin, Jacques; Carriere, David 2016-06-21 We show that ternary mixtures of water (polar phase), myristic acid (MA, apolar phase) and cetyltrimethylammonium bromide (CTAB, cationic surfactant) studied above the melting point of myristic acid allow the preparation of microemulsions without adding a salt or a co-surfactant. The combination of SANS, SAXS/WAXS, DSC, and phase diagram determination allows a complete characterization of the structures and interactions between components in the molten fatty acid based microemulsions. For the different structures characterized (microemulsion, lamellar or hexagonal phases), a similar thermal behaviour is observed for all ternary MA/CTAB/water monophasic samples and for binary MA/CTAB mixtures without water: crystalline myristic acid melts at 52 °C, and a thermal transition at 70 °C is assigned to the breaking of hydrogen bonds inside the mixed myristic acid/CTAB complex (being the surfactant film in the ternary system). Water determines the film curvature, hence the structures observed at high temperature, but does not influence the thermal behaviour of the ternary system.
Myristic acid is partitioned in two "species" that behave independently: pure myristic acid and myristic acid associated with CTAB to form an equimolar complex that plays the role of the surfactant film. We therefore show that myristic acid plays the role of a solvent (oil) and a co-surfactant allowing the fine tuning of the structure of oil and water mixtures. This solvosurfactant behaviour of long chain fatty acid opens the way for new formulations with a complex structure without the addition of any extra compound. PMID:27241163 20. Surface properties of Bacillus subtilis determined by acid/base titrations, and the implications for metal adsorption in fluid-rock systems SciTech Connect Fein, J.B.; Davis, T.A. 1996-10-01 Bacteria are ubiquitous in low temperature aqueous systems, but quantifying their effects on aqueous mass transport remains a problem. Numerous studies have qualitatively examined the metal binding capacity of bacterial cell walls. However, quantitative thermodynamic modeling of metal-bacteria-mineral systems requires a detailed knowledge of the surface properties of the bacterial functional groups. In this study, we have conducted acid/base titrations of suspensions of B. subtilis, a common subsurface species whose surface properties are largely controlled by carboxyl groups. Titrations were conducted between pH 2 and 11 at several ionic strengths. The data are analyzed using a constant capacitance model to account for the surface electric field effects on the acidity constant. The pKa value that best fits the titration data is 3.9 ± 0.3. This result represents the first step toward quantifying bacteria-metal and mineral-bacteria-metal interactions using equilibrium thermodynamics. 1. The Kidney and Acid-Base Regulation ERIC Educational Resources Information Center Koeppen, Bruce M.
2009-01-01 Since the topic of the role of the kidneys in the regulation of acid-base balance was last reviewed from a teaching perspective (Koeppen BM. Renal regulation of acid-base balance. Adv Physiol Educ 20: 132-141, 1998), our understanding of the specific membrane transporters involved in H+, HCO3-, and NH4+ transport, and especially how these… 2. The Conjugate Acid-Base Chart. ERIC Educational Resources Information Center Treptow, Richard S. 1986-01-01 Discusses the difficulties that beginning chemistry students have in understanding acid-base chemistry. Describes the use of conjugate acid-base charts in helping students visualize the conjugate relationship. Addresses chart construction, metal ions, buffers and pH titrations, and the organic functional groups and nonaqueous solvents. (TW) 3. Acid-Base Balance in Uremic Rats with Vascular Calcification PubMed Central Peralta-Ramírez, Alan; Raya, Ana Isabel; Pineda, Carmen; Rodríguez, Mariano; Aguilera-Tejero, Escolástico; López, Ignacio 2014-01-01 Background/Aims Vascular calcification (VC), a major complication in humans and animals with chronic kidney disease (CKD), is influenced by changes in acid-base balance. The purpose of this study was to describe the acid-base balance in uremic rats with VC and to correlate the parameters that define acid-base equilibrium with VC. Methods Twenty-two rats with CKD induced by 5/6 nephrectomy (5/6 Nx) and 10 nonuremic control rats were studied. Results The 5/6 Nx rats showed extensive VC as evidenced by a high aortic calcium (9.2 ± 1.7 mg/g of tissue) and phosphorus (20.6 ± 4.9 mg/g of tissue) content. Uremic rats had an increased pH level (7.57 ± 0.03) as a consequence of both respiratory (PaCO2 = 28.4 ± 2.1 mm Hg) and, to a lesser degree, metabolic (base excess = 4.1 ± 1 mmol/l) derangements.
A high positive correlation of both anion gap (AG) and strong ion difference (SID) with aortic calcium (AG: r = 0.604, p = 0.02; SID: r = 0.647, p = 0.01) and with aortic phosphorus (AG: r = 0.684, p = 0.007; SID: r = 0.785, p = 0.01) was detected. Conclusions In an experimental model of uremic rats, VC showed high positive correlation with AG and SID. PMID:25177336 4. Acid-base properties of bentonite rocks with different origins. PubMed Nagy, Noémi M; Kónya, József 2006-03-01 Five bentonite samples (35-47% montmorillonite) from a Sarmatian sediment series with bentonite sites around Sajóbábony (Hungary) are studied. Some of these samples were tuffogenic bentonite (sedimentary), the others were bentonitized tuff with volcano-sedimentary origin. The acid-base properties of the edge sites were studied by potentiometric titrations and surface complexation modeling. It was found that the number and the ratio of silanol and aluminol sites as well as the intrinsic stability constants are different for the sedimentary bentonite and the bentonitized tuff. The characteristic properties of the edge sites depend on the origins. The acid-base properties are compared to other commercial and standard bentonites. 5. Potentiometric study of reaction between periodate and iodide as their tetrabutylammonium salts in chloroform. Application to the determination of iodide and potentiometric detection of end points in acid-base titrations in chloroform. PubMed 1995-03-01 A potentiometric method for the titration of tetrabutylammonium iodide (TBAI) in chloroform using tetrabutylammonium periodate (TBAPI) as a strong and suitable oxidizing reagent is described. The potentiometric conditions were optimized and the equilibrium constants of the reactions occurring during the titration were determined. The method was used for the determination of iodide both in chloroform and aqueous solutions after extraction into chloroform as ion-association with tetraphenylarsonium.
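The anion gap (AG) and strong ion difference (SID) used in the uremic-rat study above are simple derived quantities. A sketch using the textbook clinical definitions, with typical human reference values for illustration (the numbers below are generic, not data from that study):

```python
def anion_gap(na, cl, hco3, k=0):
    """Classical anion gap in mEq/L; K+ is often omitted (k=0)."""
    return (na + k) - (cl + hco3)

def sid_apparent(na, k, ca, mg, cl, lactate=0.0):
    """Apparent strong ion difference (Stewart approach) in mEq/L;
    ca and mg are the ionized concentrations expressed in mEq/L."""
    return (na + k + ca + mg) - (cl + lactate)

print(anion_gap(140, 104, 24))                  # -> 12
print(sid_apparent(140, 4, 2.5, 1.5, 104, 1))   # -> 43.0
```

Both indices rise when unmeasured anions accumulate, which is why the study could use them as markers of the metabolic component of the acid-base disturbance.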
The reaction between TBAPI and TBAI was also used as acid indicator for the potentiometric detection of end points of acid-base titrations in chloroform. 6. Determination of Henry's constant, the dissociation constant, and the buffer capacity of the bicarbonate system in ruminal fluid. PubMed Hille, Katharina T; Hetz, Stefan K; Rosendahl, Julia; Braun, Hannah-Sophie; Pieper, Robert; Stumpff, Friederike 2016-01-01 Despite the clinical importance of ruminal acidosis, ruminal buffering continues to be poorly understood. In particular, the constants for the dissociation of H2CO3 and the solubility of CO2 (Henry's constant) have never been stringently determined for ruminal fluid. The pH was measured in parallel directly in the rumen and the reticulum in vivo, and in samples obtained via aspiration from 10 fistulated cows on hay- or concentrate-based diets. The equilibrium constants of the bicarbonate system were measured at 38°C both using the Astrup technique and a newly developed method with titration at 2 levels of partial pressure of CO2 (pCO2; 4.75 and 94.98 kPa), yielding mean values of 0.234 ± 0.005 mmol ∙ L(-1) ∙ kPa(-1) and 6.11 ± 0.02 for Henry's constant and the dissociation constant, respectively (n/n = 31/10). Both reticular pH and the pH of samples measured after removal were more alkalic than those measured in vivo in the rumen (by ΔpH = 0.87 ± 0.04 and 0.26 ± 0.04). The amount of acid or base required to shift the pH of ruminal samples to 6.4 or 5.8 (base excess) differed between the 2 feeding groups. Experimental results are compared with the mathematical predictions of an open 2-buffer Henderson-Hasselbalch equilibrium model. Because pCO2 has pronounced effects on ruminal pH and can decrease rapidly in samples removed from the rumen, introduction of a generally accepted protocol for determining the acid-base status of ruminal fluid with standard levels of pCO2 and measurement of base excess in addition to pH should be considered. PMID:26519978 8. Jammed acid-base reactions at interfaces.
PubMed Gibbs-Davis, Julianne M; Kruk, Jennifer J; Konek, Christopher T; Scheidt, Karl A; Geiger, Franz M 2008-11-19 Using nonlinear optics, we show that acid-base chemistry at aqueous/solid interfaces tracks bulk pH changes at low salt concentrations. In the presence of 10 to 100 mM salt concentrations, however, the interfacial acid-base chemistry remains jammed for hours, until it finally occurs within minutes at a rate that follows the kinetic salt effect. For various alkali halide salts, the delay times increase with increasing anion polarizability and extent of cation hydration and lead to massive hysteresis in interfacial acid-base titrations. The resulting implications for pH cycling in these systems are that interfacial systems can spatially and temporally lag bulk acid-base chemistry when the Debye length approaches 1 nm. 9. Use of an Acid-Base Table. ERIC Educational Resources Information Center Willis, Grover; And Others 1986-01-01 Identifies several ways in which an acid-base table can provide students with information about chemical reactions. Cites examples of the chart's use and includes a table which indicates the strengths of some common acids and bases. (ML) 10. The comprehensive acid-base characterization of glutathione Mirzahosseini, Arash; Somlyay, Máté; Noszál, Béla 2015-02-01 Glutathione in its thiol (GSH) and disulfide (GSSG) forms, and 4 related compounds were studied by 1H NMR-pH titrations and a case-tailored evaluation method. The resulting acid-base properties are quantified in terms of 128 microscopic protonation constants; the first complete set of such parameters for this vitally important pair of compounds. The concomitant 12 interactivity parameters were also determined. Since biological redox systems are regularly compared to the GSH-GSSG pair, the eight microscopic thiolate basicities determined this way are exclusive means for assessing subtle redox parameters in a wide pH range. 11. Are Fundamental Constants Really Constant? 
ERIC Educational Resources Information Center Swetman, T. P. 1972-01-01 Dirac's classical conclusions, that the values of e2, M and m are constants and the quantity G decreases with time, evoked considerable interest among researchers. Traces the historical development by which further experimental evidence indicates that both e and G are constant values. (PS) 12. [Kidney, Fluid, and Acid-Base Balance]. PubMed Shioji, Naohiro; Hayashi, Masao; Morimatsu, Hiroshi 2016-05-01 Kidneys play an important role in maintaining human homeostasis. They contribute to maintaining body fluid, electrolytes, and acid-base balance. Especially in fluid control, we physicians can intervene in body fluid balance using fluid resuscitation and diuretics. In recent years, one type of fluid resuscitation, hydroxyethyl starch, has been extensively studied in the field of intensive care. Although its effects on fluid resuscitation are reasonable, serious complications such as kidney injury requiring renal replacement therapy occur frequently. Now we have to pay more attention to this important complication. Another topic of fluid management is tolvaptan, a selective vasopressin-2 receptor antagonist. A recent randomized trial suggested that tolvaptan has a similar supportive effect for fluid control and is more cost-effective than carperitide. In recent years, the Stewart approach has become recognized as one important tool to assess acid-base balance in critically ill patients. This approach has great value, especially to understand metabolic components in acid-base balance. Even for assessing the effects of kidneys on acid-base balance, this approach gives us interesting insight. We should appropriately use this new approach to treat acid-base abnormalities in critically ill patients. PMID:27319095 13.
Estimation of medium effects on equilibrium constants in moderate and high ionic strength solutions at elevated temperatures by using specific interaction theory (SIT): interaction coefficients involving Cl, OH- and Ac- up to 200 degrees C and 400 bars. PubMed Xiong, Yongliang 2006-01-01 In this study, a series of interaction coefficients of the Brønsted-Guggenheim-Scatchard specific interaction theory (SIT) have been estimated up to 200 degrees C and 400 bars. The interaction coefficients involving Cl- estimated include epsilon(H+, Cl-), epsilon(Na+, Cl-), epsilon(Ag+, Cl-), epsilon(Na+, AgCl2 -), epsilon(Mg2+, Cl-), epsilon(Ca2+, Cl-), epsilon(Sr2+, Cl-), epsilon(Ba2+, Cl-), epsilon(Sm3+, Cl-), epsilon(Eu3+, Cl-), epsilon(Gd3+, Cl-), and epsilon(GdAc2+, Cl-). The interaction coefficients involving OH- estimated include epsilon(Li+, OH-), epsilon(K+, OH-), epsilon(Na+, OH-), epsilon(Cs+, OH-), epsilon(Sr2+, OH-), and epsilon(Ba2+, OH-). In addition, the interaction coefficients of epsilon(Na+, Ac-) and epsilon(Ca2+, Ac-) have also been estimated. The bulk of interaction coefficients presented in this study has been evaluated from the mean activity coefficients. A few of them have been estimated from the potentiometric and solubility studies. The above interaction coefficients are tested against both experimental mean activity coefficients and equilibrium quotients. Predicted mean activity coefficients are in satisfactory agreement with experimental data. Predicted equilibrium quotients are in very good agreement with experimental values. 
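The SIT interaction coefficients epsilon tabulated in the abstract above enter the activity-coefficient expression log10 γ = -z²·D + Σ ε(j,k)·m_k, with the Debye-Hückel term D = A√I/(1 + 1.5√I). A sketch for the simplest case of one ion in a single 1:1 background electrolyte, assuming the 25 °C slope A = 0.509 kg^0.5 mol^-0.5 (the epsilon value below is illustrative):

```python
import math

def sit_log_gamma(z, molality, eps, A=0.509):
    """SIT estimate of log10(gamma) for an ion of charge z in a single
    1:1 background electrolyte, where ionic strength = molality."""
    sqrt_i = math.sqrt(molality)
    D = A * sqrt_i / (1 + 1.5 * sqrt_i)
    return -z**2 * D + eps * molality

# H+ in 1.0 mol/kg NaCl, taking eps(H+, Cl-) ~ 0.12 kg/mol:
print(sit_log_gamma(1, 1.0, 0.12))   # about -0.08, i.e. gamma ~ 0.83
```

The linear ε·m term is what distinguishes SIT from plain Debye-Hückel and what makes extrapolation to the moderate and high ionic strengths discussed in the abstract possible.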
Based upon its relatively rapid attainment of equilibrium and the ease of determining magnesium concentrations, this study also proposes that the solubility of brucite can be used as a pH (pcH) buffer/sensor for experimental systems in NaCl solutions up to 200 degrees C by employing the predicted solubility quotients of brucite in conjunction with the dissociation quotients of water and the first hydrolysis quotients of Mg2+, all in NaCl solutions. 14. Rapid determination of the equivalence volume in potentiometric acid-base titrations to a preset pH-II Standardizing a solution of a strong base, graphic location of equivalence volume, determination of stability constants of acids and titration of a mixture of two weak acids. PubMed 1974-06-01 A newly proposed method of titrating weak acids with strong bases is applied to standardize a solution of a strong base, to the graphic determination of the equivalence volume of acetic acid with an error of 0.2%, to calculate the stability constants of hydroxylammonium ion, boric acid and hydrogen ascorbate ion, and to analyse a mixture of acetic acid and ammonium ion with an error of 0.2-0.7%. 15. An arbitrary correction function for CO(2) evolution in acid-base titrations and its use in multiparametric refinement of data. PubMed Wozniak, M; Nowogrocki, G 1981-08-01 A great number of acid-base titrations are performed under an inert gas flow: in the procedure, a variable amount of CO(2) (from carbonated reactants) is carried away and thus prevents strict application of mass-balance equations. A function for the CO(2) evolution is proposed and introduced into the general expression for the volume of titrant. Use of this expression in multiparametric refinement yields, besides the usual values (concentrations, acidity constants...), a parameter characteristic of this departure of CO(2). Furthermore, a modified weighting factor is introduced to take into account the departure from equilibrium caused by the slow CO(2) evolution.
The validity of these functions was successfully tested on three typical examples: neutralization of strong acid by sodium carbonate, of sodium carbonate by strong acid, and of a mixture of hydrochloric acid, 4-nitrophenol and phenol by carbonated potassium hydroxide. 16. Equilibrium Shaping Izzo, Dario; Petazzi, Lorenzo 2006-08-01 We present a satellite path planning technique able to make identical spacecraft acquire a given configuration. The technique exploits a behaviour-based approach to achieve autonomous and distributed control over the relative geometry, making use of limited sensorial information. A desired velocity is defined for each satellite as a sum of different contributions coming from generic high level behaviours: forcing the final desired configuration, the behaviours are further defined by an inverse dynamic calculation dubbed Equilibrium Shaping. We show how, considering only three different kinds of behaviours, it is possible to acquire a number of interesting formations, and we set down the theoretical framework to find the entire set. We find that allowing a limited amount of communication the technique may be used also to form complex lattice structures. Several control feedbacks able to track the desired velocities are introduced and discussed. Our results suggest that sliding mode control is particularly appropriate in connection with the developed technique. 17. The physiological assessment of acid-base balance. PubMed Howorth, P J 1975-04-01 Acid-base terminology including the use of SI units is reviewed. The historical reasons why nomograms have been particularly used in acid-base work are discussed. The theoretical basis of the Henderson-Hasselbalch equation is considered. It is emphasized that the solubility of CO2 in plasma and the apparent first dissociation constant of carbonic acid are not chemical constants when applied to media of uncertain and varying composition such as blood plasma.
The use of the Henderson-Hasselbalch equation in making hypothermia corrections for PCO2 is discussed. The Astrup system for the in vitro determination of blood gases and derived parameters is described and the theoretical weakness of the base excess concept stressed. A more clinically-oriented approach to the assessment of acid-base problems is presented. Measurement of blood [H+] and PCO2 are considered to be primary data which should be recorded on a chart with in vivo CO2-titration lines (see below). Clinical information and results of other laboratory investigations such as plasma bicarbonate, PO2,P50 are then to be considered together with the primary data. In order to interpret this combined information it is essential to take into account the known ventilatory response to metabolic acidosis and alkalosis, and the renal response to respiratory acidosis and alkalosis. The use is recommended of a chart showing the whole-body CO2-titration points obtained when patients with different initial levels of non-respiratory [H+] are ventilated. A number of examples are given of the use of this [H+] and PCO2 in vivo chart in the interpretation of acid-base data. The aetiology, prognosis and treatment of metabolic alkalosis is briefly reviewed. Treatment with intravenous acid is recommended for established cases. Attention is drawn to the possibility of iatrogenic production of metabolic alkalosis. Caution is expressed over the use of intravenous alkali in all but the severest cases of metabolic acidosis. The role of 18. Separation of Acids, Bases, and Neutral Compounds Fujita, Megumi; Mah, Helen M.; Sgarbi, Paulo W. M.; Lall, Manjinder S.; Ly, Tai Wei; Browne, Lois M. 
2003-01-01 Separation of Acids, Bases, and Neutral Compounds requires the following software, which is available for free download from the Internet: Netscape Navigator, version 4.75 or higher, or Microsoft Internet Explorer, version 5.0 or higher; Chime plug-in, version compatible with your OS and browser (available from MDL); and Flash player, version 5 or higher (available from Macromedia). 19. Jigsaw Cooperative Learning: Acid-Base Theories ERIC Educational Resources Information Center Tarhan, Leman; Sesen, Burcin Acar 2012-01-01 This study focused on investigating the effectiveness of jigsaw cooperative learning instruction on first-year undergraduates' understanding of acid-base theories. Undergraduates' opinions about jigsaw cooperative learning instruction were also investigated. The participants of this study were 38 first-year undergraduates in chemistry education… 20. The Magic Sign: Acids, Bases, and Indicators. ERIC Educational Resources Information Center Phillips, Donald B. 1986-01-01 Presents an approach that is used to introduce elementary and junior high students to a series of activities that will provide concrete experiences with acids, bases, and indicators. Provides instructions and listings of needed solutions and materials for developing this "magic sign" device. Includes background information and several student… 1. Potentiometric Measurement of Transition Ranges and Titration Errors for Acid/Base Indicators Flowers, Paul A. 1997-07-01 Sophomore analytical chemistry courses typically devote a substantial amount of lecture time to acid/base equilibrium theory, and usually include at least one laboratory project employing potentiometric titrations. In an effort to provide students a laboratory experience that more directly supports their classroom discussions on this important topic, an experiment involving potentiometric measurement of transition ranges and titration errors for common acid/base indicators has been developed. 
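The titration error measured in the indicator experiment above comes from the mismatch between an indicator's transition range and the steep pH jump at equivalence. For a strong monoprotic acid titrated with a strong base, the exact curve follows from charge balance plus the water autoprotolysis constant; a minimal sketch at illustrative millimolar concentrations (dilution included; 25 °C assumed):

```python
import math

KW = 1.0e-14  # water autoprotolysis constant at 25 degC

def titration_ph(ca, va_ml, cb, vb_ml):
    """Exact pH while titrating va_ml of strong acid (molarity ca) with
    vb_ml of strong base (molarity cb).  Charge balance gives
    [H+] - Kw/[H+] = (ca*va - cb*vb)/(va + vb), a quadratic in [H+]."""
    d = (cb * vb_ml - ca * va_ml) / (va_ml + vb_ml)  # excess base, mol/L
    h = (-d + math.sqrt(d * d + 4 * KW)) / 2
    return -math.log10(h)

# 25.00 mL of 1.00 mM HCl vs 1.00 mM NaOH: the pH jump brackets the
# equivalence point at 25.00 mL (pH 7.00); an indicator whose transition
# range falls outside the jump produces a measurable titration error.
for vb in (24.0, 24.9, 25.0, 25.1, 26.0):
    print(vb, round(titration_ph(1e-3, 25.0, 1e-3, vb), 2))
```

At millimolar concentrations the jump is only a few pH units wide, which is exactly why the transition ranges of common indicators matter more here than in the textbook 0.1 M case.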
The pH and visually assessed color of a millimolar strong acid/base system are monitored as a function of added titrant volume, and the resultant data plotted to permit determination of the indicator's transition range and associated titration error. Student response is typically quite positive, and the measured quantities correlate reasonably well to literature values. 2. Thermodynamics and Kinetics of Chemical Equilibrium in Solution. ERIC Educational Resources Information Center Leenson, I. A. 1986-01-01 Discusses theory of thermodynamics of the equilibrium in solution and dissociation-dimerization kinetics. Describes experimental procedure including determination of molar absorptivity and equilibrium constant, reaction enthalpy, and kinetics of the dissociation-dimerization reaction. (JM) 3. On the Equilibrium States of Interconnected Bubbles or Balloons. ERIC Educational Resources Information Center Weinhaus, F.; Barker, W. 1978-01-01 Describes the equilibrium states of a system composed of two interconnected, air-filled spherical membranes of different sizes. The equilibrium configurations are determined by the method of minimization of the availability of the system at constant temperature. (GA) 4. Nuclear magnetic resonance as a tool for determining protonation constants of natural polyprotic bases in solution. PubMed Frassineti, C; Ghelli, S; Gans, P; Sabatini, A; Moruzzi, M S; Vacca, A 1995-11-01 The acid-base properties of the tetramine 1,5,10,14-tetraazatetradecane H2N(CH2)3NH(CH2)4NH(CH2)3NH2 (spermine) in deuterated water have been studied at 40 degrees C at various pD values by means of NMR spectroscopy. Both one-dimensional 13C{1H} spectra and two-dimensional 1H/13C heterocorrelation spectra with inverse detection have been recorded. A calculation procedure of general validity has been developed to unravel the effect of rapid exchange between the various species in equilibrium as a function of pD of the solution. 
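Several of the entries above concern potentiometric acid/base titrations of millimolar strong acid/base systems. As a minimal sketch (not the apparatus or software of any of the cited papers), the pH along such a titration can be computed directly from the charge balance; concentrations and volumes below are illustrative:

```python
import math

def strong_acid_base_ph(c_acid, v_acid, c_base, v_base, kw=1.0e-14):
    """pH of a strong acid/strong base mixture from the charge balance.

    Solves [H+] - Kw/[H+] = (moles acid - moles base) / total volume,
    a quadratic in [H+]; valid on both sides of the equivalence point.
    Volumes may be in any common unit (mL here); concentrations in mol/L.
    """
    excess = (c_acid * v_acid - c_base * v_base) / (v_acid + v_base)
    h = (excess + math.sqrt(excess ** 2 + 4.0 * kw)) / 2.0
    return -math.log10(h)

# Titrate 50 mL of 1 mM HCl with 1 mM NaOH (millimolar, as in the experiment above)
for v in (0.0, 25.0, 49.0, 50.0, 51.0, 75.0):
    print(f"{v:5.1f} mL NaOH -> pH {strong_acid_base_ph(1e-3, 50.0, 1e-3, v):5.2f}")
```

Plotting pH against titrant volume and marking where the indicator's visually assessed color changes gives the transition range; the volume offset between the color change and the equivalence point is the titration error the first entry measures.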
The method of calculation used in this part of the new computer program, HYPNMR, is independent of the equilibrium model. HYPNMR has been used to obtain the basicity constants of spermine with respect to the D+ cation at 40 degrees C. Calculations have been performed using either 13C{1H} or 1H/13C data individually, or using both sets of data simultaneously. The results of the latter calculations were practically the same as the results obtained with the single data sets; the calculated errors on the refined parameters were a little smaller. After appropriate empirical corrections for temperature effects and for the presence of D+ in contrast to H+, the calculated constants are compared with spermine protonation constants which have been determined previously both from potentiometric and NMR data. 5. Surface Lewis acid-base properties of polymers measured by inverse gas chromatography. PubMed Shi, Baoli; Zhang, Qianru; Jia, Lina; Liu, Yang; Li, Bin 2007-05-18 Surface Lewis acid-base properties are significant for polymer materials. The acid constant Ka and base constant Kb of many polymers have been characterized by inverse gas chromatography (IGC) in recent years. In this paper, the surface acid-base constants Ka and Kb of 20 kinds of polymers measured by IGC in recent years are summarized and discussed, including seven polymers characterized in this work. After plotting Kb versus Ka, it is found that the polymers can be encircled by a triangle. They scatter in two regions of the triangle. Four polymers exist in region I. Kb/Ka of the polymers in region I is 1.4-2.1. The other polymers exist in region II. Most of the polymers are relatively basic materials. 6. Model for acid-base chemistry in nanoparticle growth (MABNAG) Yli-Juuti, T.; Barsanti, K.; Hildebrandt Ruiz, L.; Kieloaho, A.-J.; Makkonen, U.; Petäjä, T.; Ruuskanen, T.; Kulmala, M.; Riipinen, I. 
2013-12-01 Climatic effects of newly-formed atmospheric secondary aerosol particles are to a large extent determined by their condensational growth rates. However, not all the vapours condensing on atmospheric nanoparticles and growing them to climatically relevant sizes have been identified yet, and the effects of particle phase processes on particle growth rates are poorly known. Besides sulfuric acid, organic compounds are known to contribute significantly to atmospheric nanoparticle growth. In this study a particle growth model MABNAG (Model for Acid-Base chemistry in NAnoparticle Growth) was developed to study the effect of salt formation on nanoparticle growth, which has been proposed as a potential mechanism lowering the equilibrium vapour pressures of organic compounds through dissociation in the particle phase and thus preventing their evaporation. MABNAG is a model for monodisperse aqueous particles and it couples the dynamics of condensation to particle phase chemistry. Non-zero equilibrium vapour pressures, with both size and composition dependence, are considered for condensation. The model was applied to atmospherically relevant systems with sulfuric acid, one organic acid, ammonia, one amine and water in the gas phase allowed to condense on 3-20 nm particles. The effect of dissociation of the organic acid was found to be small under ambient conditions typical for a boreal forest site, but considerable for base-rich environments (gas phase concentrations of about 10^10 cm^-3 for the sum of the bases). The contribution of the bases to particle mass decreased as particle size increased, except at very high gas phase concentrations of the bases. The relative importance of amine versus ammonia did not change significantly as a function of particle size. While our results give a reasonable first estimate on the maximum contribution of salt formation to nanoparticle growth, further studies on, e.g. 
the thermodynamic properties of the atmospheric organics, concentrations of low 8. Exploring Chemical Equilibrium with Poker Chips: A General Chemistry Laboratory Exercise ERIC Educational Resources Information Center Bindel, Thomas H. 2012-01-01 A hands-on laboratory exercise at the general chemistry level introduces students to chemical equilibrium through a simulation that uses poker chips and rate equations. More specifically, the exercise allows students to explore reaction tables, dynamic chemical equilibrium, equilibrium constant expressions, and the equilibrium constant based on… 9. Mathematical modeling of acid-base physiology PubMed Central Occhipinti, Rossana; Boron, Walter F. 2015-01-01 pH is one of the most important parameters in life, influencing virtually every biological process at the cellular, tissue, and whole-body level. Thus, for cells, it is critical to regulate intracellular pH (pHi) and, for multicellular organisms, to regulate extracellular pH (pHo). pHi regulation depends on the opposing actions of plasma-membrane transporters that tend to increase pHi, and others that tend to decrease pHi. In addition, passive fluxes of uncharged species (e.g., CO2, NH3) and charged species (e.g., HCO3−, NH4+) perturb pHi. These movements not only influence one another, but also perturb the equilibria of a multitude of intracellular and extracellular buffers. Thus, even at the level of a single cell, perturbations in acid-base reactions, diffusion, and transport are so complex that it is impossible to understand them without a quantitative model. Here we summarize some mathematical models developed to shed light onto the complex interconnected events triggered by acid-base movements. 
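The poker-chip exercise above teaches that equilibrium is dynamic: forward and reverse "reactions" keep running at equal rates once concentrations stop changing. A minimal sketch of that idea (chip counts and rate constants below are illustrative, not taken from the exercise) iterates simple rate equations for A ⇌ B until the net transfer vanishes:

```python
# Dynamic equilibrium for A <=> B via stepwise rate iteration, in the spirit
# of the poker-chip exercise (rate constants and counts are illustrative).
kf, kr = 0.30, 0.10          # forward / reverse rate constants (per step)
a, b = 100.0, 0.0            # initial "chip" counts of A and B

for step in range(200):
    transfer = kf * a - kr * b   # net A -> B conversion this step
    a -= transfer
    b += transfer

# At the fixed point kf*a == kr*b, so the ratio b/a equals Keq = kf/kr
print(a, b, b / a)
```

The iteration converges to a = 25, b = 75, and b/a reproduces the equilibrium constant kf/kr = 3, which is exactly the "reaction table" result students derive on paper.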
We then describe a mathematical model of a spherical cell, to our knowledge the first capable of handling a multitude of buffer reactions, that our team has recently developed to simulate changes in pHi and pHo caused by movements of acid-base equivalents across the plasma membrane of a Xenopus oocyte. Finally, we extend our work to a consideration of the effects of simultaneous CO2 and HCO3− influx into a cell, and envision how future models might extend to other cell types (e.g., erythrocytes) or tissues (e.g., renal proximal-tubule epithelium) important for whole-body pH homeostasis. PMID:25617697 11. Absorption, fluorescence, and acid-base equilibria of rhodamines in micellar media of sodium dodecyl sulfate. PubMed Obukhova, Elena N; Mchedlov-Petrossyan, Nikolay O; Vodolazkaya, Natalya A; Patsenker, Leonid D; Doroshenko, Andrey O; Marynin, Andriy I; Krasovitskii, Boris M 2017-01-01 Rhodamine dyes are widely used as molecular probes in different fields of science. The aim of this paper was to ascertain to what extent the structural peculiarities of the compounds influence their absorption, emission, and acid-base properties under unified conditions. The acid-base dissociation (HR+ ⇄ R + H+) of a series of rhodamine dyes was studied in sodium n-dodecylsulfate micellar solutions. In these media, the form R exists as a zwitterion R±. The indices of apparent ionization constants of fifteen rhodamine cations HR+ with different substituents in the xanthene moiety vary within the range pKa(app) = 5.04 to 5.53. The distinct dependence of the emission of rhodamines bound to micelles on the pH of bulk water opens the possibility of using them as fluorescent interfacial acid-base indicators. 
13. Extraction of electrolytes from aqueous solutions and their spectrophotometric determination by use of acid-base chromoionophores in lipophilic solvents. PubMed Barberi, Paola; Giannetto, Marco; Mori, Giovanni 2004-04-01 The formation of non-absorbing complexes in an organic phase has been exploited for the spectrophotometric determination of ionic analytes in aqueous solutions. The method is based on liquid-liquid extraction of the aqueous solution with lipophilic organic phases containing an acid-base chromoionophore, a neutral lipophilic ligand (neutral carrier) selective for the analyte, and a cationic (or anionic) exchanger. The method avoids all the difficulties of preparing the very thin membranes used in optodes, so it can advantageously be used to study the role of the physicochemical parameters of the system, in order to optimize them and, if necessary, to prepare an optimized optode. 
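The rhodamine entry above reports apparent pKa values of 5.04-5.53 for the HR+ ⇄ R± equilibrium in SDS micelles. The fraction of the protonated (cationic) form at a given bulk pH follows directly from the Henderson-Hasselbalch relation; the sketch below simply evaluates that fraction over the reported pKa span (pH values chosen for illustration):

```python
def fraction_protonated(ph, pka):
    """Fraction of the cationic form HR+ at a given bulk pH
    (Henderson-Hasselbalch: f = 1 / (1 + 10**(pH - pKa)))."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# Span of apparent pKa values reported above for rhodamines in SDS micelles
for pka in (5.04, 5.53):
    for ph in (4.0, 5.0, 6.0, 7.0):
        print(f"pKa(app)={pka}: pH {ph} -> {fraction_protonated(ph, pka):.3f} HR+")
```

The steep change in this fraction around pH 5-6 is what makes the micelle-bound dyes usable as fluorescent interfacial acid-base indicators: emission tracks the HR+/R± ratio as bulk pH varies.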
Two lipophilic derivatives of Nile Blue and 4',5-dibromofluorescein were synthesized in order to ensure their permanence in the organic phase. Two different neutral carriers previously characterized by us as ionophores for liquid-membrane Ion Selective Electrodes have been employed. Three different ionic exchangers have been tested. Furthermore, a model allowing the interpolation of experimental data and the determination of the thermodynamic constant of the ionic-exchange equilibrium has been developed and applied. PMID:15242090 14. Acid-base strength and acidochromism of some dimethylamino-azinium iodides. An integrated experimental and theoretical study. PubMed Benassi, Enrico; Carlotti, Benedetta; Fortuna, Cosimo G; Barone, Vincenzo; Elisei, Fausto; Spalletti, Anna 2015-01-15 The effects of pH on the spectral properties of stilbazolium salts bearing dimethylamino substituents were investigated through a joint experimental and computational approach. The compounds studied were the trans isomers of the iodides of the dipolar E-[2-(4-dimethylamino)styryl]-1-methylpyridinium, its branched quadrupolar analogue E,E-[2,6-di-(p-dimethylamino)styryl]-1-methylpyridinium, and three analogues chosen to investigate the effects of the stronger quinolinium acceptor, the longer butadiene π bridge, or both. A noticeable acidochromism of the absorption spectra (interesting for applications) was observed, with the basic and protonated species giving intensely colored and transparent solutions, respectively. The acid-base equilibrium constants for the protonation of the dimethylamino group in the ground state (pKa) were experimentally derived. Theoretical calculations according to the thermodynamic Born-Haber cycle provided pKa values in good agreement with the experimental values. The very low fluorescence yield did not allow a direct investigation of the changes in the acid-base properties in the excited state (pKa*) by fluorimetric titrations. 
Their values were derived by quantum-mechanical calculations and estimated experimentally on the basis of the Förster cycle. 15. Equilibrium bond lengths, force constants and vibrational frequencies of MnF2, FeF2, CoF2, NiF2, and ZnF2 from least-squares analysis of gas-phase electron diffraction data Vogt, Natalja 2001-08-01 The least-squares analysis of the electron diffraction data for MnF2, FeF2, CoF2, NiF2 and ZnF2 was carried out in terms of a cubic potential function. The obtained equilibrium bond lengths (in Å) are re(Mn-F)=1.797(6), re(Fe-F)=1.755(6), re(Co-F)=1.738(6), re(Ni-F)=1.715(7), and re(Zn-F)=1.729(7). The determined force constants and the corresponding vibrational frequencies are listed. The bond length re(Cu-F)=1.700(14) Å for CuF2 was estimated and the variations of bond lengths for the first-row transition metal difluorides were discussed in light of their electronic structure. 16. Modern quantitative acid-base chemistry. PubMed Stewart, P A 1983-12-01 Quantitative analysis of ionic solutions in terms of physical and chemical principles has been effectively prohibited in the past by the overwhelming amount of calculation it required, but computers have suddenly eliminated that prohibition. The result is an approach to acid-base which revolutionizes our ability to understand, predict, and control what happens to hydrogen ions in living systems. This review outlines that approach and suggests some of its most useful implications. Quantitative understanding requires distinctions between independent variables (in body fluids: pCO2, net strong ion charge, and total weak acid, usually protein), and dependent variables ([HCO3-], [HA], [A-], [CO32-], [OH-], and [H+] (or pH)). Dependent variables are determined by independent variables, and can be calculated from the defining equations for the specific system. Hydrogen ion movements between solutions cannot affect hydrogen ion concentration; only changes in independent variables can. 
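Stewart's point that [H+] is a dependent variable can be made concrete: given the three independent variables (strong ion difference, total weak acid, and pCO2), [H+] follows from the charge-balance and mass-balance equations. A hedged sketch, using representative equilibrium constants from the literature (the exact values Stewart used may differ), solves that balance by bisection:

```python
def stewart_ph(sid, a_tot, pco2,
               kc=2.46e-11, k3=6.0e-11, kw=4.4e-14, ka=3.0e-7):
    """Solve a Stewart-style charge balance for pH by bisection.

    sid   : strong ion difference (Eq/L)
    a_tot : total weak acid, mostly protein (Eq/L)
    pco2  : CO2 tension (mmHg)
    kc, k3, kw, ka: representative equilibrium constants (assumed values).
    """
    def balance(ph):
        h = 10.0 ** -ph
        hco3 = kc * pco2 / h          # CO2 hydration/dissociation
        co3 = k3 * hco3 / h           # carbonate
        a_minus = a_tot * ka / (ka + h)  # dissociated weak acid
        oh = kw / h
        return sid + h - hco3 - co3 - a_minus - oh  # net charge

    lo, hi = 4.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if balance(mid) > 0.0:   # net positive charge: pH must rise
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Roughly normal plasma: SID 42 mEq/L, Atot 19 mEq/L, pCO2 40 mmHg
print(f"pH = {stewart_ph(0.042, 0.019, 40.0):.2f}")
```

Lowering SID (e.g. hyperchloremia) or raising pCO2 moves the computed pH down without any "hydrogen ion movement", which is exactly the review's argument about independent versus dependent variables.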
Many current models for ion movements through membranes will require modification on the basis of this quantitative analysis. Whole-body acid-base balance can be understood quantitatively in terms of the three independent variables and their physiological regulation by the lungs, kidneys, gut, and liver. Quantitative analysis also shows that body fluids interact mainly by strong ion movements through the membranes separating them. 17. Bipolar Membranes for Acid Base Flow Batteries Anthamatten, Mitchell; Roddecha, Supacharee; Jorne, Jacob; Coughlan, Anna 2011-03-01 Rechargeable batteries can provide grid-scale electricity storage to match power generation with consumption and promote renewable energy sources. Flow batteries offer modular and flexible design, low cost per kWh and high efficiencies. A novel flow battery concept will be presented based on acid-base neutralization where protons (H+) and hydroxyl (OH-) ions react electrochemically to produce water. The large free energy of this highly reversible reaction can be stored chemically, and, upon discharge, can be harvested as usable electricity. The acid-base flow battery concept avoids the use of a sluggish oxygen electrode and utilizes the highly reversible hydrogen electrode, thus eliminating the need for expensive noble metal catalysts. The proposed flow battery is a hybrid of a battery and a fuel cell: hydrogen gas storing chemical energy is produced at one electrode and is immediately consumed at the other. The two electrodes are exposed to low and high pH solutions, and these solutions are separated by a hybrid membrane combining a cation-exchange and an anion-exchange membrane (CEM/AEM). Membrane design will be discussed, along with ion-transport data for synthesized membranes. 18. 
A Constant Pressure Bomb NASA Technical Reports Server (NTRS) Stevens, F W 1924-01-01 This report describes a new optical method of unusual simplicity and of good accuracy suitable to study the kinetics of gaseous reactions. The device is the complement of the spherical bomb of constant volume, and extends the applicability of the relationship, pv=rt for gaseous equilibrium conditions, to the use of both factors p and v. The method substitutes for the mechanical complications of a manometer placed at some distance from the seat of reaction the possibility of allowing the radiant effects of reaction to record themselves directly upon a sensitive film. It is possible the device may be of use in the study of the photoelectric effects of radiation. The method makes possible a greater precision in the measurement of normal flame velocities than was previously possible. An approximate analysis shows that the increase of pressure and density ahead of the flame is negligible until the velocity of the flame approaches that of sound. 19. Teaching Acid/Base Physiology in the Laboratory ERIC Educational Resources Information Center Friis, Ulla G.; Plovsing, Ronni; Hansen, Klaus; Laursen, Bent G.; Wallstedt, Birgitta 2010-01-01 Acid/base homeostasis is one of the most difficult subdisciplines of physiology for medical students to master. A different approach, where theory and practice are linked, might help students develop a deeper understanding of acid/base homeostasis. We therefore set out to develop a laboratory exercise in acid/base physiology that would provide… 20. A General Simulator for Acid-Base Titrations de Levie, Robert 1999-07-01 General formal expressions are provided to facilitate the automatic computer calculation of acid-base titration curves of arbitrary mixtures of acids, bases, and salts, without and with activity corrections based on the Davies equation. Explicit relations are also given for the buffer strength of mixtures of acids, bases, and salts. 1. 
Using Willie's Acid-Base Box for Blood Gas Analysis ERIC Educational Resources Information Center Dietz, John R. 2011-01-01 In this article, the author describes a method developed by Dr. William T. Lipscomb for teaching blood gas analysis of acid-base status and provides three examples using Willie's acid-base box. Willie's acid-base box is constructed using three of the parameters of standard arterial blood gas analysis: (1) pH; (2) bicarbonate; and (3) CO[subscript… 2. A comparative study of surface acid-base characteristics of natural illites from different origins SciTech Connect Liu, W.; Sun, Z.; Forsling, W.; Du, Q.; Tang, H. 1999-11-01 The acid-base characteristics of naturally occurring illites, collected from different locations, were investigated by potentiometric titrations. The experimental data were interpreted using the constant capacitance surface complexation model. Considerable release of Al and Si from illite samples and subsequent complexation or precipitation of hydroxyl aluminosilicates generated during the acidimetric forward titration and the alkalimetric back titration, respectively, were observed. In order to describe the acid-base chemistry of aqueous illite surfaces, two surface proton-reaction models, introducing the corresponding reactions between the dissolved aluminum species and silicic acid, as well as a surface Al-Si complex on homogeneous illite surface sites, were proposed. Optimization results indicated that both models could provide a good description of the titration behavior for all aqueous illite systems in this study. The intrinsic acidity constants for the different illites were similar in Model 1, showing some generalities in their acid-base properties. Model 1 may be considered a simplification of Model 2, evident in the similarities between the corresponding constants. In addition, the formation constant for surface Al-Si species (complexes or precipitates) is relatively stable in this study. 3. 
Importance of acid-base equilibrium in electrocatalytic oxidation of formic acid on platinum. PubMed Joo, Jiyong; Uchida, Taro; Cuesta, Angel; Koper, Marc T M; Osawa, Masatoshi 2013-07-10 Electro-oxidation of formic acid on Pt in acid is one of the most fundamental model reactions in electrocatalysis. However, its reaction mechanism is still a matter of strong debate. Two different mechanisms, bridge-bonded adsorbed formate mechanism and direct HCOOH oxidation mechanism, have been proposed by assuming a priori that formic acid is the major reactant. Through systematic examination of the reaction over a wide pH range (0-12) by cyclic voltammetry and surface-enhanced infrared spectroscopy, we show that the formate ion is the major reactant over the whole pH range examined, even in strong acid. The performance of the reaction is maximal at a pH close to the pKa of formic acid. The experimental results are reasonably explained by a new mechanism in which formate ion is directly oxidized via a weakly adsorbed formate precursor. The reaction serves as a generic example illustrating the importance of pH variation in catalytic proton-coupled electron-transfer reactions. 4. Acid-base regulation during heating and cooling in the lizard, Varanus exanthematicus. PubMed Wood, S C; Johansen, K; Glass, M L; Hoyt, R W 1981-04-01 Current concepts of acid-base balance in ectothermic animals require that arterial pH vary inversely with body temperature in order to maintain a constant OH-/H+ and constant net charge on proteins. The present study evaluates acid-base regulation in Varanus exanthematicus under various regimes of heating and cooling between 15 and 38 degrees C. Arterial blood was sampled during heating and cooling at various rates, using restrained and unrestrained animals with and without face masks. Arterial pH was found to have a small temperature dependence, i.e., pH = 7.66 - 0.005 T. 
The slope (dpH/dT = -0.005), while significantly different from zero (P < 0.05), is much smaller in magnitude than that required for a constant OH-/H+ or a constant imidazole alphastat (dpH/dT ≅ -0.018). The physiological mechanism that distinguishes this species from most other ectotherms is the presence of a ventilatory response to temperature-induced changes in CO2 production and O2 uptake, i.e., VE/VO2 is constant. This results in a constant O2 extraction and arterial saturation (approx. 90%), which is adaptive to the high aerobic requirements of this species. 5. Effect of temperature on the acid-base properties of the alumina surface: microcalorimetry and acid-base titration experiments. PubMed Morel, Jean-Pierre; Marmier, Nicolas; Hurel, Charlotte; Morel-Desrosiers, Nicole 2006-06-15 Sorption reactions on natural or synthetic materials that can attenuate the migration of pollutants in the geosphere could be affected by temperature variations. Nevertheless, most of the theoretical models describing sorption reactions are formulated at 25 degrees C. To check these models at different temperatures, experimental data such as the enthalpies of sorption are thus required. Highly sensitive microcalorimeters can now be used to determine the heat effects accompanying the sorption of radionuclides on oxide-water interfaces, but enthalpies of sorption cannot be extracted from microcalorimetric data without a clear knowledge of the thermodynamics of protonation and deprotonation of the oxide surface. However, the values reported in the literature show large discrepancies and one must conclude that, amazingly, this fundamental problem of proton binding is not yet resolved. We have thus undertaken to measure by titration microcalorimetry the heat effects accompanying proton exchange at the alumina-water interface at 25 degrees C. 
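The alumina study above shifts surface acidity constants from 25 to 50 °C with the van't Hoff equation. A minimal sketch of that step, using the deprotonation enthalpies it reports (the 25 °C surface pKa values below are illustrative placeholders, not the paper's fitted constants):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def log_k_at_t(log_k_ref, delta_h, t_ref=298.15, t=323.15):
    """Shift log K from t_ref to t via the integrated van't Hoff equation,
    assuming delta_h (J/mol) is constant over the temperature interval."""
    return log_k_ref - delta_h / (R * math.log(10)) * (1.0 / t - 1.0 / t_ref)

# Deprotonation enthalpies reported above for the alumina surface
dh1, dh2 = 80e3, 5e3           # J/mol
pk1_25, pk2_25 = 7.0, 10.0     # illustrative 25 C surface pKa values (assumed)

# pK = -log K, so an endothermic deprotonation (positive delta_h)
# lowers pK (strengthens the surface acid) as temperature rises.
pk1_50 = -log_k_at_t(-pk1_25, dh1)
pk2_50 = -log_k_at_t(-pk2_25, dh2)
print(f"pK1: {pk1_25} -> {pk1_50:.2f}, pK2: {pk2_25} -> {pk2_50:.2f}")
```

With ΔH1 = 80 kJ/mol the first pKa drops by about one log unit over 25 K, while the near-zero ΔH2 leaves the second pKa almost unchanged, which is why the predicted 50 °C titration curve in the study shifts the way it does.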
Based on (i) the surface-site speciation provided by a surface complexation model (built from acid-base titrations at 25 degrees C) and (ii) results of the microcalorimetric experiments, calculations have been made to extract the enthalpic variations associated, respectively, with the first and second deprotonation of the alumina surface. Values obtained are ΔH1 = 80 ± 10 kJ mol-1 and ΔH2 = 5 ± 3 kJ mol-1. In a second step, these enthalpy values were used to calculate the alumina surface acidity constants at 50 degrees C via the van't Hoff equation. Then a theoretical titration curve at 50 degrees C was calculated and compared to the experimental alumina surface titration curve. Good agreement between the predicted acid-base titration curve and the experimental one was observed. 6. Acid-base properties of 2-phenethyldithiocarbamoylacetic acid, an antitumor agent Novozhilova, N. E.; Kutina, N. N.; Petukhova, O. A.; Kharitonov, Yu. Ya. 2013-07-01 The acid-base properties of the 2-phenethyldithiocarbamoylacetic acid (PET) substance belonging to the class of isothiocyanates and capable of inhibiting the development of tumors in many experimental models were studied. The acidity and hydrolysis constants of the PET substance in ethanol, acetone, aqueous ethanol, and aqueous acetone solutions were determined from the data of potentiometric (pH-metric) titration of ethanol and acetone solutions of PET with aqueous sodium hydroxide at room temperature. 7. Drug-induced acid-base disorders. PubMed Kitterer, Daniel; Schwab, Matthias; Alscher, M Dominik; Braun, Niko; Latus, Joerg 2015-09-01 The incidence of acid-base disorders (ABDs) is high, especially in hospitalized patients. ABDs are often indicators for severe systemic disorders. In everyday clinical practice, analysis of ABDs must be performed in a standardized manner. Highly sensitive diagnostic tools to distinguish the various ABDs include the anion gap and the serum osmolar gap. 
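The drug-induced ABD review leans on the anion gap and serum osmolar gap as screening tools. As a hedged sketch using the conventional clinical formulas (US units; the example values are illustrative, not from the review):

```python
def anion_gap(na, cl, hco3):
    """Serum anion gap in mEq/L (conventional formula without K+):
    AG = [Na+] - ([Cl-] + [HCO3-])."""
    return na - (cl + hco3)

def calculated_osmolality(na, glucose_mg_dl, bun_mg_dl):
    """Calculated serum osmolality (mOsm/kg) from a common clinical formula:
    2*[Na+] + glucose/18 + BUN/2.8 (mg/dL inputs)."""
    return 2.0 * na + glucose_mg_dl / 18.0 + bun_mg_dl / 2.8

def osmolar_gap(measured, na, glucose_mg_dl, bun_mg_dl):
    """Measured minus calculated osmolality."""
    return measured - calculated_osmolality(na, glucose_mg_dl, bun_mg_dl)

# Illustrative labs: Na 140, Cl 100, HCO3 16 mEq/L; glucose 90, BUN 14 mg/dL
print(anion_gap(140, 100, 16))            # elevated gap suggests acid accumulation
print(osmolar_gap(300, 140, 90, 14))      # elevated gap suggests an unmeasured osmole
```

An elevated anion gap points toward the "acid overload" category in the review's classification (e.g., lactic acidosis), while an elevated osmolar gap flags unmeasured osmoles such as the glycols it lists among exogenous causes.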
Drug-induced ABDs can be classified into five different categories in terms of their pathophysiology: (1) metabolic acidosis caused by acid overload, which may occur through accumulation of acids by endogenous (e.g., lactic acidosis by biguanides, propofol-related syndrome) or exogenous (e.g., glycol-dependent drugs, such as diazepam or salicylates) mechanisms or by decreased renal acid excretion (e.g., distal renal tubular acidosis by amphotericin B, nonsteroidal anti-inflammatory drugs, vitamin D); (2) base loss: proximal renal tubular acidosis by drugs (e.g., ifosfamide, aminoglycosides, carbonic anhydrase inhibitors, antiretrovirals, oxaliplatin or cisplatin) in the context of Fanconi syndrome; (3) alkalosis resulting from acid and/or chloride loss by renal (e.g., diuretics, penicillins, aminoglycosides) or extrarenal (e.g., laxative drugs) mechanisms; (4) exogenous bicarbonate loads: milk-alkali syndrome, overshoot alkalosis after bicarbonate therapy or citrate administration; and (5) respiratory acidosis or alkalosis resulting from drug-induced depression of the respiratory center or neuromuscular impairment (e.g., anesthetics, sedatives) or hyperventilation (e.g., salicylates, epinephrine, nicotine). 8. Semi-empirical proton binding constants for natural organic matter Matynia, Anthony; Lenoir, Thomas; Causse, Benjamin; Spadini, Lorenzo; Jacquet, Thierry; Manceau, Alain 2010-03-01 Average proton binding constants (KH,i) for structure models of humic (HA) and fulvic (FA) acids were estimated semi-empirically by breaking down the macromolecules into reactive structural units (RSUs), and calculating KH,i values of the RSUs using linear free energy relationships (LFER) of Hammett. Predicted log KH,COOH and log KH,Ph-OH are 3.73 ± 0.13 and 9.83 ± 0.23 for HA, and 3.80 ± 0.20 and 9.87 ± 0.31 for FA. 
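The semi-empirical approach above estimates proton binding constants for structural units from Hammett linear free energy relationships. A minimal sketch of the underlying LFER, applied to substituted benzoic acids rather than to the paper's humic/fulvic structural units (substituent constants are standard para values; rho = 1 defines the benzoic acid scale):

```python
# Hammett LFER sketch: pKa = pKa0 - rho * sum(sigma). Standard para
# substituent constants; the benzoic acid reference pKa0 = 4.20.
SIGMA_PARA = {"H": 0.00, "NO2": 0.78, "Cl": 0.23, "OCH3": -0.27}

def hammett_pka(substituents, pka0=4.20, rho=1.0):
    """Estimate the pKa of a substituted acid from Hammett sigma constants:
    log(K/K0) = rho * sigma, i.e. pKa = pKa0 - rho * sum(sigma)."""
    return pka0 - rho * sum(SIGMA_PARA[s] for s in substituents)

for subs in (["H"], ["NO2"], ["Cl"], ["OCH3"]):
    print(subs, round(hammett_pka(subs), 2))
```

Electron-withdrawing substituents (positive sigma, e.g. NO2) lower the predicted pKa and electron-donating ones (negative sigma, e.g. OCH3) raise it; averaging such unit-level KH,i estimates over the carboxylic and phenolic RSUs of a macromolecule is the spirit of the paper's predicted log KH,COOH and log KH,Ph-OH values.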
The predicted constants for phenolic-type sites (Ph-OH) are generally higher than those derived from potentiometric titrations, but the difference may not be significant in view of the considerable uncertainty of the acidity constants determined from acid-base measurements at high pH. The predicted constants for carboxylic-type sites agree well with titration data analyzed with Model VI (4.10 ± 0.16 for HA, 3.20 ± 0.13 for FA; Tipping, 1998), the Impermeable Sphere model (3.50-4.50 for HA; Avena et al., 1999), and the Stockholm Humic Model (4.10 ± 0.20 for HA, 3.50 ± 0.40 for FA; Gustafsson, 2001), but differ by about one log unit from those obtained by Milne et al. (2001) with the NICA-Donnan model (3.09 ± 0.51 for HA, 2.65 ± 0.43 for FA), and used to derive recommended generic values. To clarify this ambiguity, 10 high-quality titration data sets from Milne et al. (2001) were re-analyzed with the new predicted equilibrium constants. The data are described equally well with the previous and new sets of values (R2 ⩾ 0.98), not necessarily because the NICA-Donnan model is overparametrized, but because titration lacks the sensitivity needed to quantify the full binding properties of humic substances. Correlations between NICA-Donnan parameters are discussed, but general progress is impeded by the unknown number of independent parameters that can be varied during regression of a model fit to titration data. The high consistency between predicted and experimental KH,COOH values, excluding those of Milne et al. (2001), gives faith in the proposed 9. Acid-base balance and plasma composition in the aestivating lungfish (Protopterus). PubMed DeLaney, R G; Lahiri, S; Hamilton, R; Fishman, P 1977-01-01 Upon entering aestivation, Protopterus aethiopicus develops a respiratory acidosis. A slow compensatory increase in plasma bicarbonate suffices only to partially restore arterial pH toward normal.
The cessation of water intake from the start of aestivation results in hemoconcentration and marked oliguria. The concentrations of most plasma constituents continue to increase progressively, and the electrolyte ratios change. The increase in urea concentration is disproportionately high for the degree of dehydration and constitutes an increasing fraction of total plasma osmolality. Acid-base and electrolyte balance do not reach a new equilibrium within 1 yr in the cocoon. PMID:13665 10. Teaching Chemical Equilibrium with the Jigsaw Technique Doymus, Kemal 2008-03-01 This study investigates the effect of cooperative learning (jigsaw) versus individual learning methods on students’ understanding of chemical equilibrium in a first-year general chemistry course. This study was carried out in two different classes in the department of primary science education during the 2005-2006 academic year. One of the classes was randomly assigned as the non-jigsaw group (control) and the other as the jigsaw group (cooperative). Students participating in the jigsaw group were divided into four “home groups” since the topic of chemical equilibrium is divided into four subtopics (Modules A, B, C and D). Each of these home groups contained four students. The groups were as follows: (1) Home Group A (HGA), representing the equilibrium state and quantitative aspects of equilibrium (Module A), (2) Home Group B (HGB), representing the equilibrium constant and relationships involving equilibrium constants (Module B), (3) Home Group C (HGC), representing Altering Equilibrium Conditions: Le Chatelier’s principle (Module C), and (4) Home Group D (HGD), representing calculations with equilibrium constants (Module D). The home groups then broke apart, like pieces of a jigsaw puzzle, and the students moved into jigsaw groups consisting of members from the other home groups who were assigned the same portion of the material.
The jigsaw groups were then in charge of teaching their specific subtopic to the rest of the students in their learning group. The main data collection tool was a Chemical Equilibrium Achievement Test (CEAT), which was applied to both the jigsaw and non-jigsaw groups. The results indicated that the jigsaw group was more successful than the non-jigsaw group (individual learning method). 11. Using quantitative acid-base analysis in the ICU. PubMed Lloyd, P; Freebairn, R 2006-03-01 The quantitative acid-base 'Strong Ion' calculator is a practical application of quantitative acid-base chemistry, as developed by Peter Stewart and Peter Constable. It quantifies the three independent factors that control acidity, calculates the concentration and charge of unmeasured ions, produces a report based on these calculations and displays a Gamblegram depicting measured ionic species. Used together with the medical history, quantitative acid-base analysis has advantages over traditional approaches. 12. Kinetics of acid base catalyzed transesterification of Jatropha curcas oil. PubMed Jain, Siddharth; Sharma, M P 2010-10-01 Out of various non-edible oil resources, Jatropha curcas oil (JCO) is considered a future feedstock for biodiesel production in India. Limited work has been reported on the kinetics of transesterification of oils containing high levels of free fatty acids. The present study reports the results of a kinetic study of a two-step acid-base catalyzed transesterification process carried out at optimum temperatures of 65 °C and 50 °C for esterification and transesterification, respectively, under the optimum methanol-to-oil ratio of 3:7 (v/v) and a catalyst concentration of 1% (w/w) for H₂SO₄ and NaOH. The yield of methyl ester (ME) has been used to study the effect of different parameters. The results indicate that both the esterification and transesterification reactions are first order, with reaction rate constants of 0.0031 min⁻¹ and 0.008 min⁻¹, respectively.
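Taking the first-order rate law reported above at face value (and assuming the reaction is effectively irreversible, which the abstract does not state explicitly), conversion follows x(t) = 1 - exp(-kt); a short sketch using the reported transesterification rate constant:

```python
import math

def conversion(k, t):
    """Fraction converted after time t for an irreversible first-order reaction."""
    return 1.0 - math.exp(-k * t)

def time_to_conversion(k, x):
    """Invert x = 1 - exp(-k t) to get the time needed for conversion x."""
    return -math.log(1.0 - x) / k

k_trans = 0.008  # min^-1, transesterification rate constant from the study
print(time_to_conversion(k_trans, 0.90))  # ~288 min to reach 90% conversion
```

At k = 0.008 min(-1) the model predicts roughly 288 min to reach 90% conversion; the slower esterification constant (0.0031 min(-1)) would stretch the corresponding time by the ratio of the rate constants.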
The maximum yield of 21.2% of ME during esterification and 90.1% from transesterification of pretreated JCO has been obtained. 13. The effects of secular calcium and magnesium concentration changes on the thermodynamics of seawater acid/base chemistry: Implications for Eocene and Cretaceous ocean carbon chemistry and buffering Hain, Mathis P.; Sigman, Daniel M.; Higgins, John A.; Haug, Gerald H. 2015-05-01 Reconstructed changes in seawater calcium and magnesium concentration ([Ca2+], [Mg2+]) predictably affect the ocean's acid/base and carbon chemistry. Yet inaccurate formulations of chemical equilibrium "constants" are currently in use to account for these changes. Here we develop an efficient implementation of the MIAMI Ionic Interaction Model to predict all chemical equilibrium constants required for carbon chemistry calculations under variable [Ca2+] and [Mg2+]. We investigate the impact of [Ca2+] and [Mg2+] on the relationships among the ocean's pH, CO2, dissolved inorganic carbon (DIC), saturation state of CaCO3 (Ω), and buffer capacity. Increasing [Ca2+] and/or [Mg2+] enhances "ion pairing," which increases seawater buffering by increasing the concentration ratio of total to "free" (uncomplexed) carbonate ion. An increase in [Ca2+], however, also causes a decline in carbonate ion to maintain a given Ω, thereby overwhelming the ion pairing effect and decreasing seawater buffering. Given the reconstructions of Eocene [Ca2+] and [Mg2+] ([Ca2+]~20 mM; [Mg2+]~30 mM), Eocene seawater would have required essentially the same DIC as today to simultaneously explain a similar-to-modern Ω and the estimated Eocene atmospheric CO2 of ~1000 ppm. During the Cretaceous, at ~4 times modern [Ca2+], ocean buffering would have been at a minimum. Overall, during times of high seawater [Ca2+], CaCO3 saturation, pH, and atmospheric CO2 were more susceptible to perturbations of the global carbon cycle. 
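As an aside on the carbonate-system arithmetic underlying the Eocene/Cretaceous study above: at a given pH, dissolved inorganic carbon partitions among CO2(aq), HCO3- and CO3(2-) according to the dissociation constants K1 and K2. The constants below are rough seawater-like values chosen purely for illustration; the study's point is precisely that such "constants" shift with [Ca2+] and [Mg2+].

```python
def carbonate_fractions(pH, K1=10**-6.0, K2=10**-9.1):
    """Fractions of DIC present as CO2(aq), HCO3- and CO3-- at a given pH,
    using illustrative stoichiometric dissociation constants."""
    h = 10.0 ** -pH
    denom = h * h + K1 * h + K1 * K2
    return (h * h / denom,      # fraction CO2(aq)
            K1 * h / denom,     # fraction HCO3-
            K1 * K2 / denom)    # fraction CO3--

f_co2, f_hco3, f_co3 = carbonate_fractions(8.1)
print(f_co2, f_hco3, f_co3)  # bicarbonate dominates near pH 8
```

With these illustrative constants, bicarbonate accounts for roughly 90% of DIC at pH 8.1.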
For example, given both Eocene and Cretaceous seawater [Ca2+] and [Mg2+], a doubling of atmospheric CO2 would require less carbon addition to the ocean/atmosphere system than under modern seawater composition. Moreover, increasing seawater buffering since the Cretaceous may have been a driver of evolution by raising the energetic demands of biologically controlled calcification and of the CO2-concentrating mechanisms that aid photosynthesis. 14. Magnetospheric equilibrium with anisotropic pressure SciTech Connect Cheng, C.Z. 1991-07-01 Self-consistent magnetospheric equilibrium with anisotropic pressure is obtained by employing an iterative metric method for solving the inverse equilibrium equation in an optimal flux coordinate system. A method of determining plasma parallel and perpendicular pressures from either an analytic particle distribution or a particle distribution measured along the satellite's path is presented. The numerical results of axisymmetric magnetospheric equilibrium including the effects of finite beta, pressure anisotropy, and boundary conditions are presented for a bi-Maxwellian particle distribution. For the isotropic pressure cases, the finite beta effect produces an outward expansion of the constant magnetic flux surfaces in relation to the dipole field lines, and along the magnetic field the toroidal ring current is maximum at the magnetic equator. The effect of pressure anisotropy is found to further expand the flux surfaces outward. Along the magnetic field lines the westward ring current can peak away from the equator due to an eastward current contribution resulting from pressure anisotropy. As pressure anisotropy increases, the peak westward current can become more singular. The outer boundary flux surface has a significant effect on the magnetospheric equilibrium.
For an outer flux boundary resembling the dayside flux surface compressed by solar wind pressure, the deformation of the magnetic field can be quite different from that for an outer flux boundary resembling the tail-like surface. 23 refs., 17 figs. 15. Identification of acid-base catalytic residues of high-Mr thioredoxin reductase from Plasmodium falciparum. PubMed McMillan, Paul J; Arscott, L David; Ballou, David P; Becker, Katja; Williams, Charles H; Müller, Sylke 2006-11-01 High-M(r) thioredoxin reductase from the malaria parasite Plasmodium falciparum (PfTrxR) contains three redox active centers (FAD, Cys-88/Cys-93, and Cys-535/Cys-540) that are in redox communication. The catalytic mechanism of PfTrxR, which involves dithiol-disulfide interchanges requiring acid-base catalysis, was studied by steady-state kinetics, spectral analyses of anaerobic static titrations, and rapid kinetics analysis of wild-type enzyme and variants involving the His-509-Glu-514 dyad as the presumed acid-base catalyst. The dyad is conserved in all members of the enzyme family. Substitution of His-509 with glutamine and Glu-514 with alanine led to TrxR with only 0.5 and 7% of wild-type activity, respectively, thus demonstrating the crucial roles of these residues for enzymatic activity. The H509Q variant had rate constants in both the reductive and oxidative half-reactions that were dramatically less than those of the wild-type enzyme, and no thiolate-flavin charge-transfer complex was observed. Glu-514 was shown to be involved in dithiol-disulfide interchange between the Cys-88/Cys-93 and Cys-535/Cys-540 pairs. In addition, Glu-514 appears to greatly enhance the role of His-509 in acid-base catalysis. It can be concluded that the His-509-Glu-514 dyad, in analogy to those in related oxidoreductases, acts as the acid-base catalyst in PfTrxR. 16. Isodynamic axisymmetric equilibrium near the magnetic axis SciTech Connect Arsenin, V. V.
2013-08-15 Plasma equilibrium near the magnetic axis of an axisymmetric toroidal magnetic confinement system is described in orthogonal flux coordinates. For the case of a constant current density in the vicinity of the axis and magnetic surfaces with nearly circular cross sections, expressions for the poloidal and toroidal magnetic field components are obtained in these coordinates by using expansion in the reciprocal of the aspect ratio. These expressions allow one to easily derive relationships between quantities in an isodynamic equilibrium, in which the absolute value of the magnetic field is constant along the magnetic surface (Palumbo’s configuration). 17. Renal acid-base metabolism after ischemia. PubMed Holloway, J C; Phifer, T; Henderson, R; Welbourne, T C 1986-05-01 The response of the kidney to ischemia-induced cellular acidosis was followed over the immediate one hr post-ischemia reflow period. Clearance and extraction experiments as well as measurement of cortical intracellular pH (pHi) were performed on Inactin-anesthetized Sprague-Dawley rats. Arteriovenous concentration differences and para-aminohippurate extraction were obtained by cannulating the left renal vein. Base production was monitored as bicarbonate released into the renal vein and urine; net base production was related to the renal handling of glutamine and ammonia as well as to renal oxygen consumption and pHi. After a 15 min control period, the left renal artery was snared for one-half hr followed by release and four consecutive 15 min reflow periods. During the control period, cortical cell pHi measured by [14C]-5,5-Dimethyl-2,4-Oxazolidinedione distribution was 7.07 ± 0.08, and Q-O2 was 14.1 ± 2.2 micromoles/min; neither net glutamine utilization nor net bicarbonate generation occurred. After 30 min of ischemia, renal tissue pH fell to 6.6 ± 0.15. However, within 45 min of reflow, cortical cell pH returned to and exceeded the control value, 7.33 ± 0.06 vs. 7.15 ± 0.08.
This increase in pHi was associated with a significant rise in cellular metabolic rate: Q-O2 increased to 20.3 ± 6.4 micromoles/min. Corresponding with cellular alkalosis was a net production of bicarbonate and a net ammonia uptake and glutamine release; urinary acidification was abolished. These results are consistent with a nonexcretory renal metabolic base-generating mechanism governing cellular acid-base homeostasis following ischemia. PMID:3723929 18. What is the Ultimate Goal in Acid-Base Regulation? ERIC Educational Resources Information Center Balakrishnan, Selvakumar; Gopalakrishnan, Maya; Alagesan, Murali; Prakash, E. Sankaranarayanan 2007-01-01 It is common to see chapters on acid-base physiology state that the goal of acid-base regulatory mechanisms is to maintain the pH of arterial plasma and not arterial PCO2 (PaCO2) 19. A Closer Look at Acid-Base Olfactory Titrations ERIC Educational Resources Information Center Neppel, Kerry; Oliver-Hoyo, Maria T.; Queen, Connie; Reed, Nicole 2005-01-01 Olfactory titrations using raw onions and eugenol as acid-base indicators are reported. An in-depth investigation of olfactory titrations is presented, including requirements for potential olfactory indicators, and protocols for using garlic, onions, and vanillin as acid-base olfactory indicators are tested. 20. A Modern Approach to Acid-Base Chemistry ERIC Educational Resources Information Center Drago, Russell S. 1974-01-01 Summarizes current status of our knowledge about acid-base interactions, including Lewis considerations, experimental design, data about donor-acceptor systems, common misconceptions, and the hard-soft acid-base model. Indicates that there is the possibility of developing unifying concepts for chemical reactions of inorganic compounds. (CC) 1. Chiral shift reagent for amino acids based on resonance-assisted hydrogen bonding.
PubMed Chin, Jik; Kim, Dong Chan; Kim, Hae-Jo; Panosyan, Francis B; Kim, Kwan Mook 2004-07-22 [structure: see text] A chiral aldehyde that forms resonance-assisted hydrogen-bonded imines with amino acids has been developed. This hydrogen bond not only increases the equilibrium constant for imine formation but also provides a highly downfield-shifted NMR singlet for evaluating the enantiomeric excess and absolute stereochemistry of amino acids. PMID:15255698 2. Influence of kinetics on the determination of the surface reactivity of oxide suspensions by acid-base titration. PubMed Duc, M; Adekola, F; Lefèvre, G; Fédoroff, M 2006-11-01 The effect of acid-base titration protocol and speed on pH measurement and surface charge calculation was studied on suspensions of gamma-alumina, hematite, goethite, and silica, whose size and porosity have been well characterized. The titration protocol has an important effect on surface charge calculation as well as on the acid-base constants obtained by fitting the titration curves. Variations of pH versus time after addition of acid or base to the suspension were interpreted as diffusion processes. The resulting apparent diffusion coefficients depend on the nature of the oxide and on its porosity. 3. Influence of dissolved organic carbon content on modelling natural organic matter acid-base properties. PubMed Garnier, Cédric; Mounier, Stéphane; Benaïm, Jean Yves 2004-10-01 Natural organic matter (NOM) behaviour towards protons is an important parameter for understanding NOM fate in the environment. Moreover, it is necessary to determine NOM acid-base properties before investigating trace metal complexation by natural organic matter. This work focuses on the possibility of determining these acid-base properties by accurate and simple titrations, even at low organic matter concentrations.
So, the experiments were conducted on concentrated and diluted solutions of humic and fulvic acids extracted from the Laurentian River, on concentrated and diluted model solutions of well-known simple molecules (acetic and phenolic acids), and on natural samples from the Seine river (France), which were not pre-concentrated. Titration experiments were modelled by a discrete model with six acidic sites, except for the model solutions. The modelling software used, called PROSECE (Programme d'Optimisation et de SpEciation Chimique dans l'Environnement), was developed in our laboratory and is based on mass-balance equilibrium resolution. The results obtained on extracted organic matter and model solutions indicate a threshold value for a confident determination of the studied organic matter acid-base properties. They also show an aberrant decrease in the carboxylic/phenolic ratio with increasing sample dilution. This shift is neither due to any conformational effect, since it is also observed on model solutions, nor to ionic strength variations, which were controlled during all experiments. On the other hand, it could be the result of an electrode malfunction occurring at basic pH values, whose effect is amplified at low total concentrations of acidic sites. Under our conditions, the limit for correct modelling of NOM acid-base properties is defined as 0.04 meq of total analysed acidic-site concentration. As for the analysed natural samples, due to their high acidic-site content, it is possible to model their behaviour despite the low organic carbon concentration. 4. Sound speeds in suspensions in thermodynamic equilibrium Temkin, S. 1992-11-01 This work considers sound propagation in suspensions of particles of constant mass in fluids, in both relaxed and frozen thermodynamic equilibrium. Treating suspensions as relaxing media, thermodynamic arguments are used to obtain their sound speeds in equilibrium conditions.
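Returning to the discrete-site description used by PROSECE above: a model with a handful of monoprotic sites reduces to a sum of mass-balance terms, one per site. The six (total concentration, pKa) pairs below are hypothetical, chosen only to illustrate the shape of the calculation, not values fitted by the study.

```python
def protons_bound(pH, sites):
    """Total protons bound (eq/L) to a set of independent monoprotic sites,
    each given as a (total concentration LT, pKa) pair; the protonated
    fraction of each site is h / (h + Ka) by mass balance."""
    h = 10.0 ** -pH
    return sum(LT * h / (h + 10.0 ** -pKa) for LT, pKa in sites)

# hypothetical six-site NOM description: (LT in eq/L, pKa)
sites = [(2e-4, 3.5), (2e-4, 4.5), (1e-4, 6.0),
         (1e-4, 8.0), (1e-4, 9.5), (1e-4, 10.5)]
for pH in (3.0, 7.0, 11.0):
    print(pH, protons_bound(pH, sites))  # bound protons fall as pH rises
```

A fitting code such as PROSECE would adjust the site concentrations and pKa values so that the computed bound-proton curve reproduces the measured titration data.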
The results for relaxed equilibrium, which is applicable in the limit of low frequencies, agree with existing theories for aerosols, but disagree with Wood's equation. It is shown that the latter is thermodynamically correct only in the exceptional case when the specific heat ratios of the fluid and of the particles are equal to unity. In all other cases discrepancies occur. These may be significant when one of the two phases in the suspension is a gas, as is the case in aerosols and in bubbly liquids. The paper also includes a brief discussion of the sound speed in frozen equilibrium. 5. Radiative-dynamical equilibrium states for Jupiter NASA Technical Reports Server (NTRS) Trafton, L. M.; Stone, P. H. 1974-01-01 In order to obtain accurate estimates of the radiative heating that drives motions in Jupiter's atmosphere, previous radiative equilibrium calculations are improved by including the NH3 opacities and updated results for the pressure-induced opacities. These additions increase the radiative lapse rate near the top of the statically unstable region and lead to a fairly constant radiative lapse rate below the tropopause. The radiative-convective equilibrium temperature structure consistent with these changes is calculated, but it differs only slightly from earlier calculations. The radiative equilibrium calculations are used to calculate whether equilibrium states can occur on Jupiter which are similar to the baroclinic instability regimes on the earth and Mars. The results show that Jupiter's dynamical regime cannot be of this kind, except possibly at very high latitudes, and that its regime must be a basically less stable one than this kind. 6. Study of monoprotic acid-base equilibria in aqueous micellar solutions of nonionic surfactants using spectrophotometry and chemometrics. PubMed 2015-10-01 Many studies have shown the distribution of solutes between aqueous phase and micellar pseudo-phase in aqueous micellar solutions. 
However, spectrophotometric studies of acid-base equilibria in these media do not confirm such a distribution because of the collinearity between the concentrations of chemical species in the two phases. The collinearity causes the number of detected species to be equal to the number of species in a homogeneous solution, which is automatically misinterpreted as homogeneity of the micellar solution; as a result, the collinearity is often neglected. This interpretation contradicts the distribution theory in micellar media and must be avoided. The acid-base equilibrium of an indicator was studied in aqueous micellar solutions of a nonionic surfactant to address the collinearity using UV/Visible spectrophotometry. Simultaneous analysis (matrix augmentation) of the equilibrium and solvation data was applied to eliminate the collinearity from the equilibrium data. A model for the equilibrium was then suggested and fitted to the augmented data to estimate distribution coefficients of the species between the two phases. Moreover, complete resolution of the concentration and spectral profiles of the species in each phase was achieved. 7. Equilibrium structure of gas phase o-benzyne Groner, Peter; Kukolich, Stephen G. 2006-01-01 An equilibrium structure has been derived for o-benzyne from experimental rotational constants of seven isotopomers and vibration-rotation constants calculated from MP2 (full)/6-31G(d) quadratic and cubic force fields. In the case of benzene, this method yields results that are in excellent agreement with those obtained from high quality ab initio force fields. The ab initio-calculated vibrational averaging corrections were applied to the measured A0, B0 and C0 rotational constants, and the resulting experimental, near-equilibrium, rotational constants were used in a least squares fit to determine the approximate equilibrium structural parameters.
The C-C bond lengths for this equilibrium structure of o-benzyne are, beginning with the formal triple bond (C1-C2): 1.255, 1.383, 1.403 and 1.405 Å. The bond angles obtained are in good agreement with most of the recent ab initio predictions. 8. An Olfactory Indicator for Acid-Base Titrations. ERIC Educational Resources Information Center Flair, Mark N.; Setzer, William N. 1990-01-01 The use of an olfactory acid-base indicator in titrations for visually impaired students is discussed. Potential olfactory indicators include eugenol, thymol, vanillin, and thiophenol. Titrations were performed with each indicator; those with eugenol proved successful. (KR) 9. Biologist's Toolbox. Acid-base Balance: An Educational Computer Game. ERIC Educational Resources Information Center Boyle, Joseph, III; Robinson, Gloria 1987-01-01 Describes a microcomputer program that can be used in teaching the basic physiological aspects of acid-base (AB) balance. Explains how its game format and graphic approach can be applied in diagnostic and therapeutic exercises. (ML) 10. The Bronsted-Lowry Acid-Base Concept. ERIC Educational Resources Information Center Kauffman, George B. 1988-01-01 Gives the background history of the simultaneous discovery of acid-base relationships by Johannes Bronsted and Thomas Lowry. Provides a brief biographical sketch of each. Discusses their concept of acids and bases in some detail. (CW) 11. Getting Freshman in Equilibrium. ERIC Educational Resources Information Center Journal of Chemical Education, 1983 1983-01-01 Various aspects of chemical equilibrium were discussed in six papers presented at the Seventh Biennial Conference on Chemical Education (Stillwater, Oklahoma 1982). These include student problems in understanding hydrolysis, helping students discover/uncover topics, equilibrium demonstrations, instructional strategies, and flaws to kinetic… 12. Acid-base homeostasis in the human system NASA Technical Reports Server (NTRS) White, R. J.
1974-01-01 Acid-base regulation is a cooperative phenomenon in vivo, with body fluids, extracellular and intracellular buffers, lungs, and kidneys all playing important roles. The present account is much too brief to be considered a review of present knowledge of these regulatory systems, and should be viewed, instead, as a guide to the elements necessary to construct a simple model of the mutual interactions of the acid-base regulatory systems of the body. 13. Spectral and Acid-Base Properties of Hydroxyflavones in Micellar Solutions of Cationic Surfactants Lipkovska, N. A.; Barvinchenko, V. N.; Fedyanina, T. V.; Rugal', A. A. 2014-09-01 It has been shown that the spectral characteristics (intensity, position of the absorption band) and the acid-base properties in a series of structurally similar hydroxyflavones depend on the concentration of the cationic surfactants miramistin and decamethoxin in aqueous solutions, and the extent of their changes is more pronounced for hydrophobic quercetin than for hydrophilic rutin. For the first time, we have determined the apparent dissociation constants of quercetin and rutin in solutions of these cationic surfactants (pKa1) over a broad concentration range, and we have established that they decrease in the series water-decamethoxin-miramistin. 14. The species- and site-specific acid-base properties of penicillamine and its homodisulfide Mirzahosseini, Arash; Szilvay, András; Noszál, Béla 2014-08-01 Penicillamine, penicillamine disulfide and 4 related compounds were studied by 1H NMR-pH titrations and case-tailored evaluation methods. The resulting acid-base properties are quantified in terms of 14 macroscopic and 28 microscopic protonation constants and the concomitant 7 interactivity parameters. The species- and site-specific basicities are interpreted by means of inductive and shielding effects through various intra- and intermolecular comparisons.
The thiolate basicities determined this way are key parameters and exclusive means for the prediction of thiolate oxidizabilities and chelate-forming properties, in order to understand and influence chelation therapy and oxidative stress at the molecular level. 15. Stoichiometry and Formation Constant Determination by Linear Sweep Voltammetry. ERIC Educational Resources Information Center Schultz, Franklin A. 1979-01-01 In this paper an experiment is described in which the equilibrium constants necessary for determining the composition and distribution of lead (II)-oxalate species may be measured by linear sweep voltammetry. (Author/BB) 16. Determination of Acidity Constants by Gradient Flow-Injection Titration ERIC Educational Resources Information Center Conceicao, Antonio C. L.; Minas da Piedade, Manuel E. 2006-01-01 A three-hour laboratory experiment, designed for an advanced undergraduate course in instrumental analysis, that illustrates the application of the gradient chamber flow-injection titration (GCFIT) method with spectrophotometric detection to determine acidity constants is presented. The procedure involves the use of an acid-base indicator to obtain… 17. A Better Way of Dealing with Chemical Equilibrium. ERIC Educational Resources Information Center Tykodi, Ralph J. 1986-01-01 Discusses how to address the concept of chemical equilibrium through the use of thermodynamic activities. Describes the advantages of setting up an equilibrium constant in terms of activities and demonstrates how to approximate those activities by practical measures such as partial pressures, mole fractions, and molar concentrations. (TW) 18. Formation of nitric acid hydrates - A chemical equilibrium approach NASA Technical Reports Server (NTRS) Smith, Roland H. 1990-01-01 Published data are used to calculate equilibrium constants for the formation reactions of nitric acid hydrates over the temperature range 190 to 205 K.
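The thermodynamic extraction described in the hydrate study above (equilibrium constants over 190-205 K yielding standard enthalpies and entropies) is, in essence, a linear fit of ln K against 1/T. The sketch below uses synthetic K values generated from assumed ΔH and ΔS, not the paper's data.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def fit_vant_hoff(Ts, Ks):
    """Least-squares fit of ln K = a*(1/T) + b; since a = -dH/R and b = dS/R,
    return the implied (dH, dS)."""
    xs = [1.0 / T for T in Ts]
    ys = [math.log(K) for K in Ks]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    a = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    b = ybar - a * xbar
    return -a * R, b * R

# Synthetic data over 190-205 K generated from assumed (illustrative) values
# dH = -60 kJ/mol and dS = -200 J/(mol K).
Ts = [190.0, 195.0, 200.0, 205.0]
Ks = [math.exp(60e3 / (R * T) - 200.0 / R) for T in Ts]
dH, dS = fit_vant_hoff(Ts, Ks)
print(dH, dS)  # recovers approximately -60000 and -200
```

Because the synthetic points lie exactly on a van't Hoff line, the fit recovers the assumed ΔH and ΔS essentially exactly; real data would scatter about the line and the residuals would set the uncertainty.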
Standard enthalpies of formation and standard entropies are calculated for the tri- and mono-hydrates. These are shown to be in reasonable agreement with earlier calorimetric measurements. The formation of nitric acid trihydrate in the polar stratosphere is discussed in terms of these equilibrium constants. 19. Carbonic anhydrase and acid-base regulation in fish. PubMed Gilmour, K M; Perry, S F 2009-06-01 Carbonic anhydrase (CA) is the zinc metalloenzyme that catalyses the reversible reactions of CO2 with water. CA plays a crucial role in systemic acid-base regulation in fish by providing acid-base equivalents for exchange with the environment. Unlike air-breathing vertebrates, which frequently utilize alterations of breathing (respiratory compensation) to regulate acid-base status, acid-base balance in fish relies almost entirely upon the direct exchange of acid-base equivalents with the environment (metabolic compensation). The gill is the critical site of metabolic compensation, with the kidney playing a supporting role. At the gill, cytosolic CA catalyses the hydration of CO2 to H+ and HCO3- for export to the water. In the kidney, cytosolic and membrane-bound CA isoforms have been implicated in HCO3- reabsorption and urine acidification. In this review, the CA isoforms that have been identified to date in fish will be discussed together with their tissue localizations and roles in systemic acid-base regulation. 20. The acid-base titration of montmorillonite Bourg, I. C.; Sposito, G.; Bourg, A. C. 2003-12-01 Proton binding to clay minerals plays an important role in the chemical reactivity of soils (e.g., acidification, retention of nutrients or pollutants). It should also affect the performance of clay barriers for waste disposal. The surface acidity of clay minerals is commonly modelled empirically by assuming generic amphoteric surface sites (>SOH) on a flat surface, with fitted site densities and acidity constants.
Current advances in experimental methods (notably spectroscopy) are rapidly improving our understanding of the structure and reactivity of the surface of clay minerals (arrangement of the particles, nature of the reactive surface sites, adsorption mechanisms). These developments are motivated by the difficulty of modelling the surface chemistry of mineral surfaces at the macro-scale (e.g., adsorption or titration) without a detailed (molecular-scale) picture of the mechanisms, and should be progressively incorporated into surface complexation models. In this view, we have combined recent estimates of montmorillonite surface properties (surface site density and structure, edge surface area, surface electrostatic potential) with surface site acidities obtained from the titration of alpha-Al2O3 and SiO2, and a novel method of accounting for the unknown initial net proton surface charge of the solid. The model predictions were compared to experimental titrations of SWy-1 montmorillonite and purified MX-80 bentonite in 0.1-0.5 mol/L NaClO4 and 0.005-0.5 mol/L NaNO3 background electrolytes, respectively. Most of the experimental data were appropriately described by the model after we adjusted a single parameter (silanol sites on the surface of montmorillonite were made to be slightly more acidic than those of silica). At low ionic strength and acidic pH the model underestimated the buffering capacity of the montmorillonite, perhaps due to clay swelling or to the interlayer adsorption of dissolved aluminum. The agreement between our model and the experimental 1. Temperature and acid-base balance in the American lobster Homarus americanus. PubMed Qadri, Syed Aman; Camacho, Joseph; Wang, Hongkun; Taylor, Josi R; Grosell, Martin; Worden, Mary Kate 2007-04-01 Lobsters (Homarus americanus) in the wild inhabit ocean waters where temperature can vary over a broad range (0-25 degrees C). 
To examine how environmental thermal variability might affect lobster physiology, we examined the effects of temperature and thermal change on the acid-base status of the lobster hemolymph. Total CO(2), pH, PCO(2) and HCO(3)(-) were measured in hemolymph sampled from lobsters acclimated to temperature in the laboratory as well as from lobsters acclimated to seasonal temperatures in the wild. Our results demonstrate that the change in hemolymph pH as a function of temperature follows the rule of constant relative alkalinity in lobsters acclimated to temperature over a period of weeks. However, thermal change can alter lobster acid-base status over a time course of minutes. Acute increases in temperature trigger a respiratory compensated metabolic acidosis of the hemolymph. Both the strength and frequency of the lobster heartbeat in vitro are modulated by changes in pH within the physiological range measured in vivo. These observations suggest that changes in acid-base status triggered by thermal variations in the environment might modulate lobster cardiac performance in vivo. 2. Acid-base chemistry of white wine: analytical characterisation and chemical modelling. PubMed Prenesti, Enrico; Berto, Silvia; Toso, Simona; Daniele, Pier Giuseppe 2012-01-01 A chemical model of the acid-base properties is optimized for each white wine under study, together with the calculation of their ionic strength, taking into account the contributions of all significant ionic species (strong electrolytes and weak ones sensitive to the chemical equilibria). Coupling the HPLC-IEC and HPLC-RP methods, we are able to quantify up to 12 carboxylic acids, the most relevant substances responsible for the acid-base equilibria of wine. The analytical concentration of carboxylic acids and of other acid-base active substances was used as input, with the total acidity, for the chemical modelling step of the study based on the contemporary treatment of overlapped protonation equilibria.
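The overlapped protonation equilibria treated in the wine model reduce, for each acid, to the familiar alpha-fraction expressions; summing over all acids gives the overall acid-base behaviour. A minimal sketch for one diprotic acid, using approximate literature pKa values for tartaric acid (illustrative; the study refines constants for the actual wine medium):

```python
def diprotic_fractions(pH, pKa1, pKa2):
    """Fractions of H2A, HA- and A2- for a diprotic acid at a given pH.
    Treating each wine acid this way and summing the charge balance over
    all of them is the essence of modelling overlapped protonation
    equilibria (activity corrections are omitted in this sketch)."""
    h = 10.0 ** (-pH)
    Ka1, Ka2 = 10.0 ** (-pKa1), 10.0 ** (-pKa2)
    denom = h * h + Ka1 * h + Ka1 * Ka2
    return h * h / denom, Ka1 * h / denom, Ka1 * Ka2 / denom

# Tartaric acid with approximate aqueous pKa values, at a wine-like pH
f_h2a, f_ha, f_a = diprotic_fractions(pH=3.3, pKa1=3.04, pKa2=4.37)
```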
New protonation constants were refined (L-lactic and succinic acids) with respect to our previous investigation on red wines. Attention was paid to the mixed solvent (ethanol-water mixture), ionic strength, and temperature to ensure a thermodynamic level to the study. Validation of the chemical model optimized is achieved by way of conductometric measurements and using a synthetic "wine" especially adapted for testing. 4. A New Application for Radioimmunoassay: Measurement of Thermodynamic Constants.
ERIC Educational Resources Information Center 1983-01-01 Describes a laboratory experiment in which an equilibrium radioimmunoassay (RIA) is used to estimate thermodynamic parameters such as equilibrium constants. The experiment is simple and inexpensive, and it introduces a technique that is important in the clinical chemistry and research laboratory. Background information, procedures, and results are… 5. Chemical Equilibrium, Unit 3: Chemical Equilibrium Calculations. A Computer-Enriched Module for Introductory Chemistry. Student's Guide and Teacher's Guide. ERIC Educational Resources Information Center Jameson, Cynthia J. Presented are the teacher's guide and student materials for one of a series of self-instructional, computer-based learning modules for an introductory, undergraduate chemistry course. The student manual for this unit on chemical equilibrium calculations includes objectives, prerequisites, a discussion of the equilibrium constant (K), and ten… 6. Microwave spectrum and equilibrium structure of o-xylene Vogt, Natalja; Demaison, Jean; Geiger, Werner; Rudolph, Heinz Dieter 2013-06-01 Ground state rotational constants were determined for 14 isotopologues of o-xylene. These rotational constants have been corrected with the rovibrational constants calculated from a quantum chemical force field. It was found that the derived semiexperimental equilibrium rotational constants of the deuterated isotopologues are not fully compatible with those of the non-deuterated ones. To mitigate the consequences of this incompatibility, the semiexperimental equilibrium rotational constants of the non-deuterated isotopologues have been supplemented by structural parameters, in particular those for hydrogen atoms, from high level ab initio calculations. The combined data have been used in a weighted least-squares fit to determine an accurate equilibrium structure.
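The weighted least-squares fit mentioned above is the standard workhorse for such structure determinations. As a generic illustration (hypothetical data, not the o-xylene rotational constants), here is a weighted fit of a straight line via the normal equations:

```python
def weighted_lsq_line(x, y, w):
    """Weighted least-squares fit of y = a + b*x.
    Minimizes sum_i w_i * (y_i - a - b*x_i)**2 via the normal equations.
    A structure fit like the o-xylene one is the same idea, with
    rotational constants as data and geometric parameters as unknowns."""
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = S * Sxx - Sx * Sx
    a = (Sxx * Sy - Sx * Sxy) / det
    b = (S * Sxy - Sx * Sy) / det
    return a, b

# Hypothetical data lying exactly on y = 1 + 2x, with unequal weights
a, b = weighted_lsq_line([0, 1, 2, 3], [1, 3, 5, 7], [1, 4, 2, 1])
```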
It was shown, at least in the present case, that the empirical structures are not sufficiently accurate and are, therefore, hardly appropriate for large molecules with many hydrogen atoms. 7. Far-from-equilibrium kinetic processes 2015-12-01 We analyze the kinetics of activated processes that take place under far-from-equilibrium conditions, when the system is subjected to external driving forces or gradients or at high values of affinities. We use mesoscopic non-equilibrium thermodynamics to show that when a force is applied, the reaction rate depends on the force. In the case of a chemical reaction at high affinity values, the reaction rate is no longer constant but depends on affinity, which implies that the law of mass action is no longer valid. This result is in good agreement with the kinetic theory of reacting gases, which uses a Chapman-Enskog expansion of the probability distribution. 8. Acid-base properties of the Fe(CN)(6)(3-)/Fe(CN)(6)(4-) redox couple in the presence of various background mineral acids and salts SciTech Connect Crozes, X.; Blanc, P.; Moisy, P.; Cote, G. 2012-04-15 The acid-base behavior of Fe(CN)(6)(4-) was investigated by measuring the formal potentials of the Fe(CN)(6)(3-)/Fe(CN)(6)(4-) couple over a wide range of acidic and neutral solution compositions. The experimental data were fitted to a model taking into account the protonated forms of Fe(CN)(6)(4-) and using values of the activities of species in solution, calculated with a simple solution model and a series of binary data available in the literature. The fitting needed to take account of the protonated species HFe(CN)(6)(3-) and H(2)Fe(CN)(6)(2-), already described in the literature, but also the species H(3)Fe(CN)(6)(-) (associated with the acid-base equilibrium H(3)Fe(CN)(6)(-) ↔ H(2)Fe(CN)(6)(2-) + H(+)).
The acidic dissociation constants of HFe(CN)(6)(3-), H(2)Fe(CN)(6)(2-) and H(3)Fe(CN)(6)(-) were found to be pK(1)(II) = 3.9 ± 0.1, pK(2)(II) = 2.0 ± 0.1, and pK(3)(II) = 0.0 ± 0.1, respectively. These constants were determined by taking into account that the activities of the species are independent of the ionic strength. (authors) 9. Acid base reactions, phosphate and arsenate complexation, and their competitive adsorption at the surface of goethite in 0.7 M NaCl solution Gao, Yan; Mucci, Alfonso 2001-07-01 Potentiometric titrations of the goethite-water interface were carried out at 25°C in 0.1, 0.3 and 0.7 M NaCl solutions. The acid/base properties of goethite at pH > 4 in a 0.7 M NaCl solution can be reproduced successfully using either the Constant Capacitance (CCM), the Basic Stern (BSM) or the Triple Layer models (TLM) when two surface acidity constants are considered. Phosphate and arsenate complexation at the surface of goethite was studied in batch adsorption experiments. The experiments were conducted in 0.7 mol/L NaCl solutions at 25°C in the pH range of 3.0 to 10.0. Phosphate shows a strong affinity for the goethite surface and the amount of phosphate adsorbed decreases with increasing pH. Phosphate complexation is described using a model consisting of three monodentate surface complexes. Arsenate shows a similar adsorption pattern on goethite but a higher affinity than phosphate. A model including three surface complexation constants describes the arsenate adsorption at [AsO4]init = 23 and 34 μmol/L. The model prediction, however, overestimates arsenate adsorption at [AsO4]init = 8.8 μmol/L. The goethite surface acidity constants as well as the preceding phosphate and arsenate surface complexation constants were evaluated by the CCM and BSM with the aid of the computer program FITEQL, version 2.0.
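The stepwise dissociation constants reported above for the hexacyanoferrate(II) system fix the pH-dependent distribution of the protonated species. A short sketch (activity corrections ignored, so this is only illustrative of how such pK values are used):

```python
def ferrocyanide_fractions(pH):
    """Distribution of Fe(CN)6(4-) protonation states versus pH, using
    the stepwise values reported above: pK1 = 3.9 for HFe(CN)6(3-),
    pK2 = 2.0 for H2Fe(CN)6(2-), pK3 = 0.0 for H3Fe(CN)6(-).
    Activity corrections are ignored in this sketch."""
    h = 10.0 ** (-pH)
    K1, K2, K3 = 10.0 ** -3.9, 10.0 ** -2.0, 10.0 ** 0.0
    # Unnormalized terms, from fully deprotonated to triply protonated
    fe = 1.0
    hfe = h / K1
    h2fe = h * h / (K1 * K2)
    h3fe = h ** 3 / (K1 * K2 * K3)
    total = fe + hfe + h2fe + h3fe
    return [s / total for s in (fe, hfe, h2fe, h3fe)]

fractions = ferrocyanide_fractions(pH=3.0)  # HFe(CN)6(3-) dominates here
```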
The experimental investigation of phosphate and arsenate competitive adsorption in 0.7 mol/L NaCl was performed at [PO4]/[AsO4] ratios of 1:1, 2.5:1 and 5:1 with [AsO4]init = 9.0 μmol/L and at a [PO4]/[AsO4] ratio of 1:1 with [AsO4]init = 22 μmol/L. The surface complexation of arsenate decreases significantly in competitive adsorption experiments and the decrease is proportional to the amount of phosphate present. Phosphate adsorption is also reduced but less drastically in competitive adsorption and is not affected significantly by incremental additions of arsenate at pH > 7. The equilibrium model derived by combining the single oxyanion 10. Response reactions: equilibrium coupling. PubMed Hoffmann, Eufrozina A; Nagypal, Istvan 2006-06-01 It is pointed out and illustrated in the present paper that if a homogeneous multiple equilibrium system containing k components and q species is composed of the reactants actually taken and their reactions contain only k + 1 species, then we have a unique representation with (q - k) stoichiometrically independent reactions (SIRs). We define these as coupling reactions. All the other possible combinations with k + 1 species are the coupled reactions that are in equilibrium when the (q - k) SIRs are in equilibrium. The response of the equilibrium state to perturbation is determined by the coupling and coupled equilibria. Depending on the circumstances and the actual thermodynamic data, the effect of coupled equilibria may overtake the effect of the coupling ones, leading to phenomena that are in apparent contradiction with Le Châtelier's principle. PMID:16722770 11. Approaches to the Treatment of Equilibrium Perturbations Canagaratna, Sebastian G. 2003-10-01 Perturbations from equilibrium are treated in the textbooks by a combination of Le Châtelier's principle, the comparison of the equilibrium constant K with the reaction quotient Q, and the kinetic approach. Each of these methods is briefly reviewed.
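The K-versus-Q comparison mentioned above reduces to a sign test: Q < K means the net reaction proceeds forward, Q > K means it runs in reverse. A minimal sketch for a hypothetical reaction A + 2B ⇌ C (the species names, coefficients and K value are invented for illustration):

```python
def reaction_quotient(conc, coeffs):
    """Q = product of [species]**nu, with nu > 0 for products and
    nu < 0 for reactants (activities approximated by concentrations)."""
    q = 1.0
    for species, nu in coeffs.items():
        q *= conc[species] ** nu
    return q

def shift_direction(Q, K):
    """Le Châtelier-style prediction from comparing Q with K."""
    if abs(Q - K) / K < 1e-12:
        return "at equilibrium"
    return "forward" if Q < K else "reverse"

# Hypothetical reaction A + 2B <-> C with K = 10
coeffs = {"A": -1, "B": -2, "C": 1}
Q = reaction_quotient({"A": 0.1, "B": 0.1, "C": 0.05}, coeffs)  # 0.05/(0.1*0.01) = 50
direction = shift_direction(Q, 10.0)  # Q > K, so the reverse reaction proceeds
```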
This is followed by derivations of the variation of the equilibrium value of the extent of reaction, ξeq, with various parameters on which it depends. Near equilibrium this relationship can be represented by a straight line. The equilibrium system can be regarded as moving on this line as the parameter is varied. The slope of the line depends on quantities like enthalpy of reaction, volume of reaction and so forth. The derivation shows that these quantities pertain to the equilibrium system, not the standard state. Also, the derivation makes clear what kind of assumptions underlie our conclusions. The derivation of these relations involves knowledge of thermodynamics that is well within the grasp of junior level physical chemistry students. The conclusions that follow from the derived relations are given as subsidiary rules in the form of the slope of ξeq, with T, p, et cetera. The rules are used to develop a visual way of predicting the direction of shift of a perturbed system. This method can be used to supplement one of the other methods even at the introductory level. 12. Computing Equilibrium Chemical Compositions NASA Technical Reports Server (NTRS) Mcbride, Bonnie J.; Gordon, Sanford 1995-01-01 Chemical Equilibrium With Transport Properties, 1993 (CET93) computer program provides data on chemical-equilibrium compositions. Aids calculation of thermodynamic properties of chemical systems. Information essential in design and analysis of such equipment as compressors, turbines, nozzles, engines, shock tubes, heat exchangers, and chemical-processing equipment. CET93/PC is version of CET93 specifically designed to run within 640K memory limit of MS-DOS operating system. CET93/PC written in FORTRAN. 13. Acid-Base Titration of (S)-Aspartic Acid: A Circular Dichroism Spectrophotometry Experiment Cavaleiro, Ana M. V.; Pedrosa de Jesus, Júlio D. 
2000-09-01 The magnitude of the circular dichroism of (S)-aspartic acid in aqueous solutions at a fixed wavelength varies with the addition of strong base. This laboratory experiment consists of the circular dichroism spectrophotometric acid-base titration of (S)-aspartic acid in dilute aqueous solutions, and the use of the resulting data to determine the ionization constant of the protonated amino group. The work familiarizes students with circular dichroism and illustrates the possibility of performing titrations using a less usual instrumental method of following the course of a reaction. It shows the use of a chiroptical property in the determination of the concentration in solution of an optically active molecule, and exemplifies the use of a spectrophotometric titration in the determination of an ionization constant. 14. Chemical rescue, multiple ionizable groups, and general acid-base catalysis in the HDV genomic ribozyme. PubMed Perrotta, Anne T; Wadkins, Timothy S; Been, Michael D 2006-07-01 In the ribozyme from the hepatitis delta virus (HDV) genomic strand RNA, a cytosine side chain is proposed to facilitate proton transfer in the transition state of the reaction and, thus, act as a general acid-base catalyst. Mutation of this active-site cytosine (C75) reduced RNA cleavage rates by as much as one million-fold, but addition of exogenous cytosine and certain nucleobase or imidazole analogs can partially rescue activity in these mutants. However, pH-rate profiles for the rescued reactions were bell shaped, and only one leg of the pH-rate curve could be attributed to ionization of the exogenous nucleobase or buffer. When a second potential ionizable nucleobase (C41) was removed, one leg of the bell-shaped curve was eliminated in the chemical-rescue reaction. 
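Bell-shaped pH-rate profiles of the kind described for the HDV ribozyme arise whenever activity requires one group deprotonated (the general base) and another protonated (the general acid): the observed rate tracks the product of the two ionization fractions. A sketch with hypothetical pKa values (not the measured ribozyme values):

```python
def bell_profile(pH, pKa_base, pKa_acid, k_max=1.0):
    """Observed rate for general acid-base catalysis that requires the
    base (pKa_base) deprotonated and the acid (pKa_acid) protonated.
    The product of the two fractions gives a bell-shaped curve peaking
    between the two pKa values."""
    f_base = 1.0 / (1.0 + 10.0 ** (pKa_base - pH))   # fraction deprotonated
    f_acid = 1.0 / (1.0 + 10.0 ** (pH - pKa_acid))   # fraction protonated
    return k_max * f_base * f_acid

# Hypothetical pKa values of 4.0 and 8.0, purely for illustration
rates = [bell_profile(pH, pKa_base=4.0, pKa_acid=8.0) for pH in (2.0, 6.0, 10.0)]
```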
With this construct, the apparent pK(a) determined from the pH-rate profile correlated with the solution pK(a) of the buffer, and the contribution of the buffer to the rate enhancement could be directly evaluated in a free-energy or Brønsted plot. The free-energy relationship between the acid dissociation constant of the buffer and the rate constant for cleavage (Brønsted value, beta, = approximately 0.5) was consistent with a mechanism in which the buffer acted as a general acid-base catalyst. These data support the hypothesis that cytosine 75, in the intact ribozyme, acts as a general acid-base catalyst. 15. Site-specific acid-base properties of pholcodine and related compounds. PubMed Kovács, Z; Hosztafi, S; Noszál, B 2006-11-01 The acid-base properties of pholcodine, a cough-depressant agent, and related compounds including metabolites were studied by 1H NMR-pH titrations, and are characterised in terms of macroscopic and microscopic protonation constants. New N-methylated derivatives were also synthesized in order to quantitate site- and nucleus-specific protonation shifts and to unravel microscopic acid-base equilibria. The piperidine nitrogen was found to be 38 and 400 times more basic than its morpholine counterpart in pholcodine and norpholcodine, respectively. The protonation data show that the molecule of pholcodine bears an average positive charge of 1.07 at physiological pH, preventing it from entering the central nervous system, a plausible reason for its lack of analgesic or addictive properties. The protonation constants of pholcodine and its derivatives are interpreted by comparing with related molecules of pharmaceutical interest. The pH-dependent relative concentrations of the variously protonated forms of pholcodine and morphine are depicted in distribution diagrams.
17. Effects of temperature on acid-base balance and ventilation in desert iguanas. PubMed Bickler, P E 1981-08-01 The effects of constant and changing temperatures on blood acid-base status and pulmonary ventilation were studied in the eurythermal lizard Dipsosaurus dorsalis. Constant temperatures between 18 and 42 degrees C maintained for 24 h or more produced arterial pH changes of -0.0145 U X degrees C-1. Arterial CO2 tension (PCO2) increased from 9.9 to 32 Torr; plasma [HCO-3] and total CO2 contents remained constant at near 19 and 22 mM, respectively. Under constant temperature conditions, ventilation-gas exchange ratios (VE/MCO2 and VE/MO2) were inversely related to temperature and can adequately explain the changes in arterial PCO2 and pH.
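The reported slope of -0.0145 pH units per degree C makes the steady-state prediction a one-line calculation. In this sketch the slope is taken from the abstract above, but the reference pH anchor (7.6 at 18 degrees C) is an assumed illustrative value, not a number from the study:

```python
def arterial_ph(temp_c, ph_ref=7.6, temp_ref_c=18.0, slope=-0.0145):
    """Predicted steady-state arterial pH at a given body temperature,
    using the slope of -0.0145 pH units per degree C reported above.
    ph_ref at temp_ref_c is an assumed anchor for illustration only."""
    return ph_ref + slope * (temp_c - temp_ref_c)

# Over the 18-42 degree C range, pH falls by 24 * 0.0145 = 0.348 units
delta = arterial_ph(42.0) - arterial_ph(18.0)
```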
During warming and cooling between 25 and 42 degrees C, arterial pH, PCO2, [HCO-3], and respiratory exchange ratios (MCO2/MO2) were similar to steady-state values. Warming and cooling each took about 2 h. During the temperature changes, rapid changes in lung ventilation following steady-state patterns were seen. Blood relative alkalinity changed slightly with steady-state or changing body temperatures, whereas calculated charge on protein histidine imidazole was closely conserved. Cooling to 17-18 degrees C resulted in a transient respiratory acidosis correlated with a decline in the ratio VE/MCO2. After 12-24 h at 17-18 degrees C, pH, PCO2, and VE returned to steady-state values. The importance of thermal history for patterns of acid-base regulation in reptiles is discussed. 18. On the accuracy of acid-base determinations from potentiometric titrations using only a few points from the titration curve. PubMed Olin, A; Wallén, B 1977-05-01 There are several procedures which use only a few points on the titration curve for the calculation of equivalence volumes in acid-base titrations. The accuracy of such determinations will depend on the positions of the points on the titration curve. The effects of errors in the stability constants and in the pH measurements on the accuracy of the analysis have been considered, and the results are used to establish the conditions under which these errors are minimized. 19. Experimental determination of thermodynamic equilibrium in biocatalytic transamination. PubMed Tufvesson, Pär; Jensen, Jacob S; Kroutil, Wolfgang; Woodley, John M 2012-08-01 The equilibrium constant is a critical parameter for making rational design choices in biocatalytic transamination for the synthesis of chiral amines. However, very few reports are available in the scientific literature determining the equilibrium constant (K) for the transamination of ketones.
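Why K matters for design choices in transamination can be seen by converting it into an equilibrium conversion. For a generic bimolecular exchange A + B ⇌ P + Q (ketone + amine donor ⇌ amine + co-product), the attainable conversion follows from K alone; the sketch below solves the mass-action relation numerically (generic model, not the paper's method):

```python
def equilibrium_conversion(K, a0, b0, tol=1e-12):
    """Equilibrium conversion of A for A + B <-> P + Q with
    K = [P][Q] / ([A][B]), starting from a0, b0 and no product.
    Solves K = x^2 / ((a0 - x)(b0 - x)) for x by bisection."""
    lo, hi = 0.0, min(a0, b0)
    while hi - lo > tol:
        x = 0.5 * (lo + hi)
        if x * x > K * (a0 - x) * (b0 - x):
            hi = x      # too much product: move down
        else:
            lo = x      # too little product: move up
    return 0.5 * (lo + hi) / a0

# With K = 1 and equimolar substrates, only 50% conversion is possible,
# which is why donor excess or product removal is often needed
conv = equilibrium_conversion(K=1.0, a0=100.0, b0=100.0)
```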
Various methods for determining (or estimating) equilibrium have previously been suggested, both experimental as well as computational (based on group contribution methods). However, none of these were found suitable for determining the equilibrium constant for the transamination of ketones. Therefore, in this communication we suggest a simple experimental methodology which we hope will stimulate more accurate determination of thermodynamic equilibria when reporting the results of transaminase-catalyzed reactions in order to increase understanding of the relationship between substrate and product molecular structure on reaction thermodynamics. 20. Modelling of the acid base properties of two thermophilic bacteria at different growth times Heinrich, Hannah T. M.; Bremer, Phil J.; McQuillan, A. James; Daughney, Christopher J. 2008-09-01 Acid-base titrations and electrophoretic mobility measurements were conducted on the thermophilic bacteria Anoxybacillus flavithermus and Geobacillus stearothermophilus at two different growth times corresponding to exponential and stationary/death phase. The data showed significant differences between the two investigated growth times for both bacterial species. In stationary/death phase samples, cells were disrupted and their buffering capacity was lower than that of exponential phase cells. For G. stearothermophilus the electrophoretic mobility profiles changed dramatically. Chemical equilibrium models were developed to simultaneously describe the data from the titrations and the electrophoretic mobility measurements. A simple approach was developed to determine confidence intervals for the overall variance between the model and the experimental data, in order to identify statistically significant changes in model fit and thereby select the simplest model that was able to adequately describe each data set. 
Exponential phase cells of the investigated thermophiles had a higher total site concentration than the average found for mesophilic bacteria (based on a previously published generalised model for the acid-base behaviour of mesophiles), whereas the opposite was true for cells in stationary/death phase. The results of this study indicate that growth phase is an important parameter that can affect ion binding by bacteria, and that growth phase should be considered when developing or employing chemical models for bacteria-bearing systems. 1. Acid Base Titrations in Nonaqueous Solvents and Solvent Mixtures Barcza, Lajos; Buvári-Barcza, Ágnes 2003-07-01 The acid-base determination of different substances by nonaqueous titrations is highly preferred in pharmaceutical analyses since the method is quantitative, exact, and reproducible. The modern interpretation of the reactions in nonaqueous solvents started in the last century, but several inconsistencies and unsolved problems can be found in the literature. The acid-base theories of Brønsted-Lowry and Lewis as well as the so-called solvent theory are outlined first, then the promoting (and leveling) and the differentiating effects are discussed on the basis of the hydrogen-bond concept. Emphasis is put on the properties of formic acid and acetic anhydride since their importance is increasing. 2. Equilibrium games in networks Li, Angsheng; Zhang, Xiaohui; Pan, Yicheng; Peng, Pan 2014-12-01 It seems a universal phenomenon of networks that the attacks on a small number of nodes by an adversary player Alice may generate a global cascading failure of the networks. It has been shown (Li et al., 2013) that classic scale-free networks (Barabási and Albert, 1999, Barabási, 2009) are insecure against attacks of as small as O(log n) many nodes. This poses a natural and fundamental question: Can we introduce a second player Bob to prevent Alice from causing global cascading failure of the networks? We proposed a game in networks.
We say that a network has an equilibrium game if the second player Bob has a strategy to balance the cascading influence of attacks by the adversary player Alice. It was shown that networks of the preferential attachment model (Barabási and Albert, 1999) fail to have equilibrium games, that random graphs of the Erdös-Rényi model (Erdös and Rényi, 1959, Erdös and Rényi, 1960) have, for which randomness is the mechanism, and that homophyly networks (Li et al., 2013) have equilibrium games, for which homophyly and preferential attachment are the underlying mechanisms. We found that some real networks have equilibrium games, but most real networks fail to have. We anticipate that our results lead to an interesting new direction of network theory, that is, equilibrium games in networks. 3. Immunity by equilibrium. PubMed Eberl, Gérard 2016-08-01 The classical model of immunity posits that the immune system reacts to pathogens and injury and restores homeostasis. Indeed, a century of research has uncovered the means and mechanisms by which the immune system recognizes danger and regulates its own activity. However, this classical model does not fully explain complex phenomena, such as tolerance, allergy, the increased prevalence of inflammatory pathologies in industrialized nations and immunity to multiple infections. In this Essay, I propose a model of immunity that is based on equilibrium, in which the healthy immune system is always active and in a state of dynamic equilibrium between antagonistic types of response. This equilibrium is regulated both by the internal milieu and by the microbial environment. As a result, alteration of the internal milieu or microbial environment leads to immune disequilibrium, which determines tolerance, protective immunity and inflammatory pathology. 4. 
Beyond Equilibrium Thermodynamics Öttinger, Hans Christian 2005-01-01 Beyond Equilibrium Thermodynamics fills a niche in the market by providing a comprehensive introduction to a new, emerging topic in the field. The importance of non-equilibrium thermodynamics is addressed in order to fully understand how a system works, whether it is in a biological system like the brain or a system that develops plastic. In order to fully grasp the subject, the book clearly explains the physical concepts and mathematics involved, as well as presenting problems and solutions; over 200 exercises and answers are included. Engineers, scientists, and applied mathematicians can all use the book to address their problems in modelling, calculating, and understanding dynamic responses of materials. 5. Acid-base status in dietary treatment of phenylketonuria. PubMed Manz, F; Schmidt, H; Schärer, K; Bickel, H 1977-10-01 Blood acid-base status, serum electrolytes, and urine pH were examined in 64 infants and children with phenylketonuria (PKU) treated with three different low phenylalanine protein hydrolyzates (Aponti, Cymogran, AlbumaidXP) and two synthetic amino acid mixtures (Aminogran, PAM). The formulas caused significant differences in acid-base status, serum potassium, and chloride, and in urine pH. In acid-base balance studies in two patients with PKU, Aponti, PAM, and two modifications of PAM (P2 + P3) were given. We observed a change from mild alkalosis to increasing metabolic acidosis from Aponti (serum bicarbonate 25.8 mval/liter) to P3 (24.0), P2 (19.3) and PAM (17.0). Urine pH decreased and renal net acid excretion increased. In the formulas PAM, P2 and P3 differences in renal net acid excretion correlated with differences in chloride and sulfur contents of the diets and of the urines.
New modifications of AlbumaidXP and of PAM, prepared according to our recommendations, showed normal renal net acid excretion (1 mEq/kg/24 hr) in a balance study performed in one patient with PKU and normal acid-base status in 20 further patients. 6. Potentiometric Acid-Base Titrations with Activated Graphite Electrodes Riyazuddin, P.; Devika, D. 1997-10-01 Dry cell graphite (DCG) electrodes activated with potassium permanganate are employed as potentiometric indicator electrodes for acid-base titrations. Special attention is given to an indicator probe comprising an activated DCG-non-activated DCG electrode couple. This combination also proves suitable for the titration of strong or weak acids. 7. Thymine, adenine and lipoamino acid based gene delivery systems. PubMed Skwarczynski, Mariusz; Ziora, Zyta M; Coles, Daniel J; Lin, I-Chun; Toth, Istvan 2010-05-14 A novel class of thymine, adenine and lipoamino acid based non-viral carriers for gene delivery has been developed. Their ability to bind to DNA by hydrogen bonding was confirmed by NMR diffusion, isothermal titration calorimetry and transmission electron microscopy experiments. 8. Soil Studies: Applying Acid-Base Chemistry to Environmental Analysis. ERIC Educational Resources Information Center West, Donna M.; Sterling, Donna R. 2001-01-01 Laboratory activities for chemistry students focus attention on the use of acid-base chemistry to examine environmental conditions. After using standard laboratory procedures to analyze soil and rainwater samples, students use web-based resources to interpret their findings. Uses CBL probes and graphing calculators to gather and analyze data and… 9. Acid-Base Disorders--A Computer Simulation. ERIC Educational Resources Information Center Maude, David L. 1985-01-01 Describes and lists a program for Apple Pascal Version 1.1 which investigates the behavior of the bicarbonate-carbon dioxide buffer system in acid-base disorders.
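The bicarbonate-carbon dioxide buffer behaviour such a program explores is governed by the Henderson-Hasselbalch equation. A minimal modern sketch, using the conventional clinical constants (pK' = 6.1 and CO2 solubility 0.03 mmol/L per mmHg) with typical textbook input values:

```python
import math

def blood_ph(hco3_mmol_l, pco2_mmhg, pk=6.1, s_co2=0.03):
    """Henderson-Hasselbalch equation for the bicarbonate buffer:
    pH = pK' + log10([HCO3-] / (s * PCO2)), with the conventional
    pK' = 6.1 and CO2 solubility s = 0.03 mmol/L per mmHg."""
    return pk + math.log10(hco3_mmol_l / (s_co2 * pco2_mmhg))

normal = blood_ph(24.0, 40.0)     # roughly 7.40 for typical normal values
acidosis = blood_ph(24.0, 80.0)   # doubling PCO2: respiratory acidosis, pH falls
```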
Designed specifically for the preclinical medical student, the program has proven easy to use and enables students to use blood gas parameters to arrive at diagnoses. (DH) 10. Using Spreadsheets to Produce Acid-Base Titration Curves. ERIC Educational Resources Information Center Cawley, Martin James; Parkinson, John 1995-01-01 Describes two spreadsheets for producing acid-base titration curves: one uses relatively simple cell formulae that can be written into the spreadsheet by inexperienced students, and the second uses more complex formulae that are best written by the teacher. (JRH) 11. On the Khinchin Constant NASA Technical Reports Server (NTRS) Bailey, David H.; Borwein, Jonathan M.; Crandall, Richard E.; Craw, James M. (Technical Monitor) 1995-01-01 We prove known identities for the Khinchin constant and develop new identities for the more general Hölder mean limits of continued fractions. Any of these constants can be developed as a rapidly converging series involving values of the Riemann zeta function and rational coefficients. Such identities allow for efficient numerical evaluation of the relevant constants. We present free-parameter, optimizable versions of the identities, and report numerical results. 12. Has Stewart approach improved our ability to diagnose acid-base disorders in critically ill patients? PubMed Masevicius, Fabio D; Dubin, Arnaldo 2015-02-01 The Stewart approach-the application of basic physical-chemical principles of aqueous solutions to blood-is an appealing method for analyzing acid-base disorders. These principles mainly dictate that pH is determined by three independent variables, which change primarily and independently of one another. In blood plasma in vivo these variables are: (1) the PCO2; (2) the strong ion difference (SID)-the difference between the sums of all the strong (i.e., fully dissociated, chemically nonreacting) cations and all the strong anions; and (3) the nonvolatile weak acids (Atot).
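The strong ion difference defined above is a straightforward sum. A sketch of the apparent SID; the plasma concentrations used are typical textbook values chosen for illustration, not data from the article:

```python
def strong_ion_difference(cations_meq_l, anions_meq_l):
    """Apparent strong ion difference (SID): sum of strong cations
    minus sum of strong anions, all in mEq/L."""
    return sum(cations_meq_l.values()) - sum(anions_meq_l.values())

# Typical textbook plasma values (illustrative only)
sid = strong_ion_difference(
    {"Na+": 140.0, "K+": 4.0, "Ca2+": 2.5, "Mg2+": 1.0},
    {"Cl-": 105.0, "lactate-": 1.0},
)  # 41.5 mEq/L, near the conventional normal value of about 40
```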
Accordingly, the pH and the bicarbonate levels (dependent variables) are only altered when one or more of the independent variables change. Moreover, the source of H(+) is the dissociation of water to maintain electroneutrality when the independent variables are modified. The basic principles of the Stewart approach in blood, however, have been challenged in different ways. First, the presumed independent variables are actually interdependent as occurs in situations such as: (1) the Hamburger effect (a chloride shift when CO2 is added to venous blood from the tissues); (2) the loss of Donnan equilibrium (a chloride shift from the interstitium to the intravascular compartment to balance the decrease of Atot secondary to capillary leak); and (3) the compensatory response to a primary disturbance in either independent variable. Second, the concept of water dissociation in response to changes in SID is controversial and lacks experimental evidence. In addition, the Stewart approach is not better than the conventional method for understanding acid-base disorders such as hyperchloremic metabolic acidosis secondary to a chloride-rich-fluid load. Finally, several attempts were made to demonstrate the clinical superiority of the Stewart approach. These studies, however, have severe methodological drawbacks. In contrast, the largest study on this issue indicated the interchangeability of the Stewart and 13. The Hubble constant. PubMed Huchra, J P 1992-04-17 The Hubble constant is the constant of proportionality between recession velocity and distance in the expanding universe. It is a fundamental property of cosmology that sets both the scale and the expansion age of the universe. It is determined by measurement of galaxy distances and recession velocities. Despite the development of new techniques for the measurement of galaxy distances, both calibration uncertainties and debates over systematic errors remain.
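The proportionality and the expansion-age scale it sets can be written down directly. H0 = 70 km/s/Mpc is an illustrative value sitting inside the disputed range, not a measurement from the cited paper:

```python
H0 = 70.0  # km/s/Mpc, illustrative value only

def recession_velocity(distance_mpc, h0=H0):
    """Hubble's law: v = H0 * d, in km/s."""
    return h0 * distance_mpc

def hubble_time_gyr(h0=H0):
    """1/H0 as a rough expansion-age scale, converted to Gyr."""
    mpc_km = 3.0857e19            # kilometres per megaparsec
    seconds = mpc_km / h0         # 1/H0 in seconds
    return seconds / 3.156e16     # seconds per Gyr

print(recession_velocity(100.0))  # 7000.0 km/s at 100 Mpc
print(hubble_time_gyr())          # ~14 Gyr for H0 = 70
```

The factor-of-2 spread in H0 mentioned in the abstract translates directly into a factor-of-2 spread in this naive age estimate, which is why the value matters for structure-formation and stellar-evolution theories.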
Current determinations still range over nearly a factor of 2; the higher values favored by most local measurements are not consistent with many theories of the origin of large-scale structure and stellar evolution. PMID:17743107 14. The cosmological constant NASA Technical Reports Server (NTRS) Carroll, Sean M.; Press, William H.; Turner, Edwin L. 1992-01-01 The cosmological constant problem is examined in the context of both astronomy and physics. Effects of a nonzero cosmological constant are discussed with reference to expansion dynamics, the age of the universe, distance measures, comoving density of objects, growth of linear perturbations, and gravitational lens probabilities. The observational status of the cosmological constant is reviewed, with attention given to the existence of high-redshift objects, age derivation from globular clusters and cosmic nuclear data, dynamical tests of Omega sub Lambda, quasar absorption line statistics, gravitational lensing, and astrophysics of distant objects. Finally, possible solutions to the physicist's cosmological constant problem are examined. 15. Decoupling the contribution of dispersive and acid-base components of surface energy on the cohesion of pharmaceutical powders. PubMed Shah, Umang V; Olusanmi, Dolapo; Narang, Ajit S; Hussain, Munir A; Tobyn, Michael J; Heng, Jerry Y Y 2014-11-20 This study reports an experimental approach to determine the contribution from two different components of surface energy on cohesion. A method to tailor the surface chemistry of mefenamic acid via silanization is established and the role of surface energy on cohesion is investigated. Silanization was used as a method to functionalize mefenamic acid surfaces with four different functional end groups resulting in an ascending order of the dispersive component of surface energy. 
Furthermore, four haloalkane functional end groups were grafted on to the surface of mefenamic acid, resulting in varying levels of acid-base component of surface energy, while maintaining constant dispersive component of surface energy. A proportional increase in cohesion was observed with increases in both dispersive as well as acid-base components of surface energy. Contributions from dispersive and acid-base surface energy on cohesion were determined using an iterative approach. Due to the contribution from acid-base surface energy, cohesion was found to increase ∼11.7× compared to the contribution from dispersive surface energy. Here, we provide an approach to deconvolute the contribution from two different components of surface energy on cohesion, which has the potential of predicting powder flow behavior and ultimately controlling powder cohesion. 16. Biochemical thermodynamics and rapid-equilibrium enzyme kinetics. PubMed Alberty, Robert A 2010-12-30 Biochemical thermodynamics is based on the chemical thermodynamics of aqueous solutions, but it is quite different because pH is used as an independent variable. A transformed Gibbs energy G' is used, and that leads to transformed enthalpies H' and transformed entropies S'. Equilibrium constants for enzyme-catalyzed reactions are referred to as apparent equilibrium constants K' to indicate that they are functions of pH in addition to temperature and ionic strength. Despite this, the most useful way to store basic thermodynamic data on enzyme-catalyzed reactions is to give standard Gibbs energies of formation, standard enthalpies of formation, electric charges, and numbers of hydrogen atoms in species of biochemical reactants like ATP. This makes it possible to calculate standard transformed Gibbs energies of formation, standard transformed enthalpies of formation of reactants (sums of species), and apparent equilibrium constants at desired temperatures, pHs, and ionic strengths. 
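The link between a standard transformed Gibbs energy and an apparent equilibrium constant at fixed pH and ionic strength is a one-liner. The ΔG'° in the example is a rough, illustrative figure for ATP hydrolysis near pH 7, not a tabulated value:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def apparent_k(delta_g_prime_kj_mol, temp_k=298.15):
    """K' = exp(-dG'0 / RT) for a biochemical reaction at fixed pH."""
    return math.exp(-delta_g_prime_kj_mol * 1000.0 / (R * temp_k))

# Illustrative: dG'0 ~ -36 kJ/mol gives K' on the order of 10^6
print(apparent_k(-36.0))
```

Because K' is a function of pH and ionic strength, the same species-level formation data yield different K' values at different conditions, which is exactly why the abstract recommends storing species properties rather than K' itself.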
These calculations are complicated, and therefore, a mathematical application in a computer is needed. Rapid-equilibrium enzyme kinetics is based on biochemical thermodynamics because all reactions in the mechanism prior to the rate-determining reaction are at equilibrium. The expression for the equilibrium concentration of the enzyme-substrate complex that yields products can be derived by applying Solve in a computer to the expressions for the equilibrium constants in the mechanism and the conservation equation for enzymatic sites. In 1979, Duggleby pointed out that the minimum number of velocities of enzyme-catalyzed reactions required to estimate the values of the kinetic parameters is equal to the number of kinetic parameters. Solve can be used to do this with steady-state rate equations as well as rapid-equilibrium rate equations, provided that the rate equation is a polynomial. Rapid-equilibrium rate equations can be derived for complicated mechanisms that involve several reactants 17. Equilibrium CO bond lengths Demaison, Jean; Császár, Attila G. 2012-09-01 Based on a sample of 38 molecules, 47 accurate equilibrium CO bond lengths have been collected and analyzed. These ultimate experimental (reEX), semiexperimental (reSE), and Born-Oppenheimer (reBO) equilibrium structures are compared to reBO estimates from two lower-level techniques of electronic structure theory, MP2(FC)/cc-pVQZ and B3LYP/6-311+G(3df,2pd). A linear relationship is found between the best equilibrium bond lengths and their MP2 or B3LYP estimates. These (and similar) linear relationships permit estimation of the CO bond length with an accuracy of 0.002 Å within the full range of 1.10-1.43 Å, corresponding to single, double, and triple CO bonds, for a large number of molecules. The variation of the CO bond length is qualitatively explained using the Atoms in Molecules method.
In particular, a nice correlation is found between the CO bond length and the bond critical point density and it appears that the CO bond is at the same time covalent and ionic. Conditions which permit the computation of an accurate ab initio Born-Oppenheimer equilibrium structure are discussed. In particular, the core-core and core-valence correlation is investigated and it is shown to roughly increase with the bond length. 18. An Updated Equilibrium Machine ERIC Educational Resources Information Center Schultz, Emeric 2008-01-01 A device that can demonstrate equilibrium, kinetic, and thermodynamic concepts is described. The device consists of a leaf blower attached to a plastic container divided into two chambers by a barrier of variable size and form. Styrofoam balls can be exchanged across the barrier when the leaf blower is turned on and various air pressures are… 19. Determination of the Vibrational Constants of Some Diatomic Molecules: A Combined Infrared Spectroscopic and Quantum Chemical Third Year Chemistry Project. ERIC Educational Resources Information Center Ford, T. A. 1979-01-01 In one option for this project, the rotation-vibration infrared spectra of a number of gaseous diatomic molecules were recorded, from which the fundamental vibrational wavenumber, the force constant, the rotation-vibration interaction constant, the equilibrium rotational constant, and the equilibrium internuclear distance were determined.… 20. Fundamental Physical Constants National Institute of Standards and Technology Data Gateway SRD 121 CODATA Fundamental Physical Constants (Web, free access)   This site, developed in the Physics Laboratory at NIST, addresses three topics: fundamental physical constants, the International System of Units (SI), which is the modern metric system, and expressing the uncertainty of measurement results. 1. Calculation of magnetostriction constants Tatebayashi, T.; Ohtsuka, S.; Ukai, T.; Mori, N. 
1986-02-01 The magnetostriction constants h1 and h2 for Ni and Fe metals and the anisotropy constants K1 and K2 for Fe metal are calculated on the basis of the approximate d bands obtained by Deegan's prescription, by using Gilat-Raubenheimer's method. The obtained results are compared with the experimental ones. 2. A simplified strong ion model for acid-base equilibria: application to horse plasma. PubMed Constable, P D 1997-07-01 The Henderson-Hasselbalch equation and Stewart's strong ion model are currently used to describe mammalian acid-base equilibria. Anomalies exist when the Henderson-Hasselbalch equation is applied to plasma, whereas the strong ion model does not provide a practical method for determining the total plasma concentration of nonvolatile weak acids ([Atot]) and the effective dissociation constant for plasma weak acids (Ka). A simplified strong ion model, which was developed from the assumption that plasma ions act as strong ions, volatile buffer ions (HCO-3), or nonvolatile buffer ions, indicates that plasma pH is determined by five independent variables: PCO2, strong ion difference, concentration of individual nonvolatile plasma buffers (albumin, globulin, and phosphate), ionic strength, and temperature. The simplified strong ion model conveys on a fundamental level the mechanism for change in acid-base status, explains many of the anomalies when the Henderson-Hasselbalch equation is applied to plasma, is conceptually and algebraically simpler than Stewart's strong ion model, and provides a practical in vitro method for determining [Atot] and Ka of plasma. Application of the simplified strong ion model to CO2-tonometered horse plasma produced values for [Atot] (15.0 +/- 3.1 meq/l) and Ka (2.22 +/- 0.32 x 10(-7) eq/l) that were significantly different from the values commonly assumed for human plasma ([Atot] = 20.0 meq/l, Ka = 3.0 x 10(-7) eq/l). 
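The conventional Henderson-Hasselbalch relation that the simplified strong ion model is benchmarked against can be sketched directly. pK = 6.1 and CO2 solubility S = 0.0307 mmol/(L·mmHg) are the standard human-plasma values; as the abstract notes, species-specific constants may differ:

```python
import math

def hh_ph(hco3_mmol_l, pco2_mmhg, pk=6.1, s=0.0307):
    """Henderson-Hasselbalch: pH = pK + log10([HCO3-] / (S * PCO2))."""
    return pk + math.log10(hco3_mmol_l / (s * pco2_mmhg))

# Normal human values: [HCO3-] = 24.5 mmol/L, PCO2 = 40 mmHg -> pH ~ 7.40
print(hh_ph(24.5, 40.0))
```

The equation treats [HCO3-] as if it were independent, which is one of the anomalies the strong ion models are designed to address: in Stewart-type analyses bicarbonate is a dependent variable.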
Moreover, application of the experimentally determined values for [Atot] and Ka to published data for the horse (known PCO2, strong ion difference, and plasma protein concentration) predicted plasma pH more accurately than the values for [Atot] and Ka commonly assumed for human plasma. Species-specific values for [Atot] and Ka should be experimentally determined when the simplified strong ion model (or strong ion model) is used to describe acid-base equilibria. 3. Ion effects on the lac repressor--operator equilibrium. PubMed Barkley, M D; Lewis, P A; Sullivan, G E 1981-06-23 The effects of ions on the interaction of lac repressor protein and operator DNA have been studied by the membrane filter technique. The equilibrium association constant was determined as a function of monovalent and divalent cation concentrations, anions, and pH. The binding of repressor and operator is extremely sensitive to the ionic environment. The dependence of the observed equilibrium constant on salt concentration is analyzed according to the binding theory of Record et al. [Record, M. T., Jr., Lohman, T. M., & deHaseth, P. L. (1976) J. Mol. Biol. 107, 145]. The number of ionic interactions in repressor--operator complex is deduced from the slopes of the linear log-log plots. About 11 ionic interactions are formed between repressor and DNA phosphates at pH 7.4 and about 9 ionic interactions at pH 8.0, in reasonable agreement with previous estimates. A favorable nonelectrostatic binding free energy of about 9-12 kcal/mol is estimated from the extrapolated equilibrium constants at the 1 M standard state. The values are in good accord with recent results for the salt-independent binding of repressor core and operator DNA. The effects of pH on the repressor--operator interaction are small, and probably result from titration of functional groups in the DNA-binding site of the protein. 
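The slope analysis described above (Record et al.) deduces the number of ionic contacts from the log-log salt dependence of the observed binding constant. A minimal sketch with synthetic points generated to lie exactly on a line; the counterion-release parameter ψ ≈ 0.88 is the standard value for double-helical B-DNA, and the slope of -9.68 is chosen only to reproduce an ~11-contact answer:

```python
import math

PSI = 0.88  # counterions thermodynamically bound per B-DNA phosphate

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def ionic_contacts(salt_m, k_obs):
    """d(log Kobs)/d(log [M+]) = -Z * PSI, so Z = -slope / PSI."""
    slope = ols_slope([math.log10(c) for c in salt_m],
                      [math.log10(k) for k in k_obs])
    return -slope / PSI

# Synthetic data on the line log10(K) = -9.68 * log10([M+]) + 1 (illustrative)
salt = [0.05, 0.10, 0.15, 0.20]
kobs = [10 ** (-9.68 * math.log10(c) + 1.0) for c in salt]
print(ionic_contacts(salt, kobs))  # ~11 ionic contacts
```

The extrapolation of such fits to the 1 M standard state is what separates the electrostatic from the nonelectrostatic binding free energy in the abstract.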
For monovalent salts, the equilibrium constant is slightly dependent on cation type and highly dependent on anion type. At constant salt concentration, the equilibrium constant decreases about 10000-fold in the order CH3CO2- ≥ F- > Cl- > Br- > NO3- > SCN- > I-. The wide range of accessible equilibrium constants provides a useful tool for in vitro studies of the repressor--operator interaction. 4. The glmS ribozyme cofactor is a general acid-base catalyst. PubMed 2012-11-21 The glmS ribozyme is the first natural self-cleaving ribozyme known to require a cofactor. The d-glucosamine-6-phosphate (GlcN6P) cofactor has been proposed to serve as a general acid, but its role in the catalytic mechanism has not been established conclusively. We surveyed GlcN6P-like molecules for their ability to support self-cleavage of the glmS ribozyme and found a strong correlation between the pH dependence of the cleavage reaction and the intrinsic acidity of the cofactors. For cofactors with low binding affinities, the contribution to rate enhancement was proportional to their intrinsic acidity. This linear free-energy relationship between cofactor efficiency and acid dissociation constants is consistent with a mechanism in which the cofactors participate directly in the reaction as general acid-base catalysts. A high value for the Brønsted coefficient (β ~ 0.7) indicates that a significant amount of proton transfer has already occurred in the transition state. The glmS ribozyme is the first self-cleaving RNA to use an exogenous acid-base catalyst. 5. The glmS Ribozyme Cofactor is a General Acid-Base Catalyst PubMed Central 2012-01-01 The glmS ribozyme is the first natural self-cleaving ribozyme known to require a cofactor. The D-glucosamine-6-phosphate (GlcN6P) cofactor has been proposed to serve as a general acid, but its role in the catalytic mechanism has not been established conclusively.
We surveyed GlcN6P-like molecules for their ability to support self-cleavage of the glmS ribozyme and found a strong correlation between the pH dependence of the cleavage reaction and the intrinsic acidity of the cofactors. For cofactors with low binding affinities the contribution to rate enhancement was proportional to their intrinsic acidity. This linear free-energy relationship between cofactor efficiency and acid dissociation constants is consistent with a mechanism in which the cofactors participate directly in the reaction as general acid-base catalysts. A high value for the Brønsted coefficient (β ~ 0.7) indicates that a significant amount of proton transfer has already occurred in the transition state. The glmS ribozyme is the first self-cleaving RNA to use an exogenous acid-base catalyst. PMID:23113700 6. [Blood acid-base balance of sportsmen during physical activity]. PubMed Petrushova, O P; Mikulyak, N I 2014-01-01 The aim of this study was to investigate the acid-base balance parameters in blood of sportsmen during physical activity. Before exercise, lactate concentration in blood was normal, while carbon dioxide pressure (pCO2), bicarbonate concentration (HCO3-), and base excess (BE) were increased. Immediately after physical activity, lactate concentration increased, while pH, BE, HCO3-, and pCO2 decreased in capillary blood of sportsmen. These changes show the development of lactate acidosis which is partly compensated by the bicarbonate buffering system and respiratory alkalosis. During postexercise recovery, lactate concentration decreased, while pCO2, HCO3-, and BE increased. The results of this study can be used for diagnostics of acid-base disorders and their medical treatment for preservation of sportsmen's physical capacity. 7. Evolution of the Acid-Base Status in Cardiac Arrest PubMed Central Carrasco G., Hugo A.; Oletta L., José F.
1973-01-01 In a study of the evolution of acid-base status in 26 patients who had cardiopulmonary arrest in the operating room, it appeared that: The determination of acid-base status within the first hour post-cardiac arrest is useful in differentiating final survivors from non-survivors. Respiratory or combined acidosis carries a poor prognosis not evidenced for metabolic acidosis. Late respiratory complications are more frequent in patients with initial combined acidosis. Treatment should be instituted on the basis of frequent determinations of acid-base status, since accurate diagnosis of degree and type of acidosis cannot be done on clinical grounds only. Recovery of consciousness is influenced by the type and severity of acidosis, less so by duration of arrest; and that high pCO2 is associated frequently with unconsciousness after recovery of circulatory function. PMID:4709532 8. Absolute Equilibrium Entropy NASA Technical Reports Server (NTRS) Shebalin, John V. 1997-01-01 The entropy associated with absolute equilibrium ensemble theories of ideal, homogeneous, fluid and magneto-fluid turbulence is discussed and the three-dimensional fluid case is examined in detail. A sigma-function is defined, whose minimum value with respect to global parameters is the entropy. A comparison is made between the use of global functions sigma and phase functions H (associated with the development of various H-theorems of ideal turbulence). It is shown that the two approaches are complementary though conceptually different: H-theorems show that an isolated system tends to equilibrium while sigma-functions allow the demonstration that entropy never decreases when two previously isolated systems are combined. This provides a more complete picture of entropy in the statistical mechanics of ideal fluids. 9. An Updated Equilibrium Machine Schultz, Emeric 2008-08-01 A device that can demonstrate equilibrium, kinetic, and thermodynamic concepts is described.
The device consists of a leaf blower attached to a plastic container divided into two chambers by a barrier of variable size and form. Styrofoam balls can be exchanged across the barrier when the leaf blower is turned on and various air pressures are applied. Equilibrium can be approached from different distributions of balls in the container under different conditions. The Le Châtelier principle can be demonstrated. Kinetic concepts can be demonstrated by changing the nature of the barrier, either changing the height or by having various sized holes in the barrier. Thermodynamic concepts can be demonstrated by taping over some or all of the openings and restricting air flow into the container on either side of the barrier. 10. Space Shuttle astrodynamical constants NASA Technical Reports Server (NTRS) Cockrell, B. F.; Williamson, B. 1978-01-01 Basic space shuttle astrodynamic constants are reported for use in mission planning and construction of ground and onboard software input loads. The data included here are provided to facilitate the use of consistent numerical values throughout the project. 11. The cosmological constant problem SciTech Connect Dolgov, A.D. 1989-05-01 A review of the cosmological term problem is presented. Baby universe model and the compensating field model are discussed. The importance of more accurate data on the Hubble constant and the Universe age is stressed. 18 refs. 12. The species- and site-specific acid-base properties of biological thiols and their homodisulfides. PubMed Mirzahosseini, Arash; Noszál, Béla 2014-07-01 Cysteamine, cysteine, homocysteine, their homodisulfides and 9 related compounds were studied by ¹H NMR-pH titrations and case-tailored evaluation methods. The resulting acid-base properties are quantified in terms of 33 macroscopic and 62 microscopic protonation constants and the concomitant 16 interactivity parameters, thus providing the first complete microspeciation of this vitally important family of biomolecules.
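Microscopic protonation constants of the kind counted above determine the pH-dependent distribution of microspecies. A toy sketch for a molecule with two basic sites (say a thiolate and an amino group); the site log k values and the 0.5 log-unit interactivity parameter are illustrative, not the paper's fitted constants:

```python
def microspecies_fractions(ph, log_kA, log_kB, delta_log=0.0):
    """
    Fractions of the four microspecies of a two-site base at a given pH.
    log_kA, log_kB: site-specific (microscopic) log protonation constants.
    delta_log: interactivity parameter; protonating one site lowers the
    other site's log k by this amount (illustrative two-site model).
    """
    h = 10.0 ** -ph
    kA, kB = 10.0 ** log_kA, 10.0 ** log_kB
    interact = 10.0 ** -delta_log
    w = [1.0,                          # neither site protonated
         kA * h,                       # only site A protonated
         kB * h,                       # only site B protonated
         kA * kB * interact * h * h]   # both sites protonated
    total = sum(w)
    return [x / total for x in w]

# Hypothetical thiol (log k 8.2) and amino (log k 10.7) sites at pH 8.0
print(microspecies_fractions(8.0, 8.2, 10.7, 0.5))
```

The thiolate fraction extracted this way is the quantity the abstract links to oxidizability, since only the deprotonated thiol microspecies reacts in the relevant redox chemistry.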
The species- and site-specific basicities are interpreted by means of inductive and hydrogen-bonding effects through various intra- and intermolecular comparisons. The pH-dependent distribution of the microspecies is depicted. The thiolate basicities determined this way provide exclusive means for the prediction of thiolate oxidizabilities, a key parameter to understand and influence oxidative stress at the molecular level. 13. Equilibrium and fluctuation analysis for ZTH electrical diagnostics SciTech Connect Miller, G.; Ingraham, J.C. 1988-12-01 Some of the rationale behind the electrical diagnostics proposed for the Los Alamos Confinement Physics Research Facility, ZTH, is discussed. The axisymmetric equilibrium measurements consist of a poloidal flux array and a toroidally averaged poloidal field array. The equilibrium quantities of interest, for example, the radial magnetic field causing displacement of the outer plasma magnetic surface, are obtained from the measurements by linear combination with constant coefficients. Some possible objectives for the nonaxisymmetric field measurements are discussed. 7 refs., 6 figs. 14. Constant potential pulse polarography USGS Publications Warehouse Christie, J.H.; Jackson, L.L.; Osteryoung, R.A. 1976-01-01 The new technique of constant potential pulse polarography, in which all pulses are made to the same potential, is presented theoretically and evaluated experimentally. The response obtained is in the form of a faradaic current wave superimposed on a constant capacitative component. Results obtained with a computer-controlled system exhibit a capillary response current similar to that observed in normal pulse polarography. Calibration curves for Pb obtained using a modified commercial pulse polarographic instrument are in good accord with theoretical predictions. 15. Equivalence-point electromigration acid-base titration via moving neutralization boundary electrophoresis.
PubMed Yang, Qing; Fan, Liu-Yin; Huang, Shan-Sheng; Zhang, Wei; Cao, Cheng-Xi 2011-04-01 In this paper, we developed a novel method of acid-base titration, viz. the electromigration acid-base titration (EABT), via a moving neutralization boundary (MNB). With HCl and NaOH as the model strong acid and base, respectively, we conducted the experiments on the EABT via the method of moving neutralization boundary for the first time. The experiments revealed that (i) the concentration of agarose gel, the voltage used and the content of background electrolyte (KCl) had evident influence on the boundary movement; (ii) the movement length was a function of the running time under the constant acid and base concentrations; and (iii) there was a good linearity between the length and natural logarithmic concentration of HCl under the optimized conditions, and the linearity could be used to detect the concentration of acid. The experiments further manifested that (i) the RSD values of intra-day and inter-day runs were less than 1.59 and 3.76%, respectively, indicating precision and stability similar to capillary electrophoresis or HPLC; (ii) the indicators with different pK(a) values had no obvious effect on EABT, in contrast to their strong influence on the judgment of the equivalence point in classic titration; and (iii) the constant equivalence-point titration always existed in the EABT, rather than the classic volumetric analysis. The experimental results achieved herein provide new general guidance for the development of classic volumetric analysis and element (e.g. nitrogen) content analysis in protein chemistry.
Despite the widespread use of especially genetically modified mice, little attention has been paid to characterising the normal acid-base status in these animals in order to reveal proper control values. Furthermore, several studies report blood gas values obtained in anaesthetised animals. We, therefore, decided to characterise the CO(2) binding characteristics of mouse blood in vitro and to characterise normal acid-base status in conscious BALBc mice. In vitro CO(2) dissociation curves, performed on whole blood equilibrated to various PCO₂ levels in rotating tonometers, revealed a typical mammalian pK' (pK'=7.816-0.234 × pH (r=0.34)) and a non-bicarbonate buffer capacity (16.1 ± 2.6 slyke). To measure arterial acid-base status, small blood samples were taken from undisturbed mice with indwelling catheters in the carotid artery. In these animals, pH was 7.391 ± 0.026, plasma [HCO(3)(-)] 18.4 ± 0.83 mM, PCO₂ 30.3 ± 2.1 mm Hg and lactate concentration 4.6 ± 0.7 mM. Our study, therefore, shows that mice have an arterial pH that resembles other mammals, although arterial PCO₂ tends to be lower than in larger mammals. However, pH from arterial blood sampled from mice anaesthetised with isoflurane was significantly lower (pH 7.239 ± 0.021), while plasma [HCO(3)(-)] was 18.5 ± 1.4 mM, PCO₂ 41.9 ± 2.9 mm Hg and lactate concentration 4.48 ± 0.67 mM. Furthermore, we measured metabolism and ventilation (V(E)) in order to determine the ventilation requirements (VE/VO₂) to answer whether small mammals tend to hyperventilate. We recommend, therefore, that studies on acid-base regulation in mice should be based on samples taken from indwelling catheters rather than cardiac puncture of terminally anaesthetised mice. 17. Acid-base disorders in calves with chronic diarrhea.
PubMed Bednarski, M; Kupczyński, R; Sobiech, P 2015-01-01 The aim of this study was to analyze disorders of acid-base balance in calves with chronic diarrhea caused by mixed, viral, bacterial and Cryptosporidium parvum infection. We compared results obtained with the classic model (Henderson-Hasselbalch) and strong ion approach (the Stewart model). The study included 36 calves aged between 14 and 21 days. The calves were allocated to three groups: I - (control) non-diarrheic calves, group II - animals with compensated acid-base imbalance and group III - calves with compensated acid-base disorders and hypoalbuminemia. Plasma concentrations of Na+, K+, Cl-, Ca2+, Mg2+, P, albumin and lactate were measured. In the classic model, acid-base balance was determined on the basis of blood pH, pCO2, HCO3-, BE and anion gap. In the strong ion model, strong ion difference (SID), effective strong anion difference, total plasma concentration of nonvolatile buffers (A(Tot)) and strong ion gap (SIG) were measured. The control calves and the animals from groups II and III did not differ significantly in terms of their blood pH. The plasma concentration of HCO3-, BE and partial pressure of CO2 in animals from the two groups with chronic diarrhea were significantly higher than those found in the controls. The highest BE (6.03 mmol/l) was documented in calves from group II. The animals from this group presented compensation resulting from activation of metabolic mechanisms. The calves with hypoalbuminemia (group III) showed lower plasma concentrations of albumin (15.37 g/L), Cl (74.94 mmol/L), Mg2+ (0.53 mmol/L), P (1.41 mmol/L) and a higher anion gap value (39.03 mmol/L). Group III also presented significantly higher SID3 (71.89 mmol/L), SID7 (72.92 mmol/L) and SIG (43.53 mmol/L) values than animals from the remaining groups (P < 0.01), whereas A(Tot) (6.82 mmol/L) was significantly lower.
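The quantities compared in such studies are simple arithmetic on the measured electrolytes. A sketch of the conventional anion gap and the simplest strong-ion-difference estimates; the example concentrations are generic normal-range values, not the study's data:

```python
def anion_gap(na, k, cl, hco3):
    """Conventional anion gap, all concentrations in mEq/L."""
    return (na + k) - (cl + hco3)

def sid3(na, k, cl):
    """Simplest strong ion difference: (Na+ + K+) - Cl-."""
    return (na + k) - cl

def sid4(na, k, cl, lactate):
    """Strong ion difference counting lactate as a strong anion."""
    return (na + k) - cl - lactate

# Illustrative normal-range plasma values (mEq/L)
print(anion_gap(140.0, 4.0, 105.0, 24.0))  # conventional anion gap
print(sid3(140.0, 4.0, 105.0))             # simplest SID
```

The difference between an SID estimate and the measured buffer-base anions is the strong ion gap, which is why hypoalbuminemia (a drop in A(Tot)) shifts these indices in opposite directions, as the calf data illustrate.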
The main finding of the correlation study was the excellent relationship between the AGcorr and SID3, SID7, SIG. In conclusion, chronic diarrhea leads 18. Equilibrium thermodynamics in modified gravitational theories Bamba, Kazuharu; Geng, Chao-Qiang; Tsujikawa, Shinji 2010-04-01 We show that it is possible to obtain a picture of equilibrium thermodynamics on the apparent horizon in the expanding cosmological background for a wide class of modified gravity theories with the Lagrangian density f(R,ϕ,X), where R is the Ricci scalar and X is the kinetic energy of a scalar field ϕ. This comes from a suitable definition of an energy-momentum tensor of the “dark” component that respects local energy conservation in the Jordan frame. In this framework the horizon entropy S corresponding to equilibrium thermodynamics is equal to a quarter of the horizon area A in units of gravitational constant G, as in Einstein gravity. For a flat cosmological background with a decreasing Hubble parameter, S globally increases with time, as it happens for viable f(R) inflation and dark energy models. We also show that the equilibrium description in terms of the horizon entropy S is convenient because it takes into account the contribution of both the horizon entropy S' in non-equilibrium thermodynamics and an entropy production term. 19. Structural design using equilibrium programming NASA Technical Reports Server (NTRS) Scotti, Stephen J. 1992-01-01 Multiple nonlinear programming methods are combined in the method of equilibrium programming. Equilibrium programming theory has been applied to problems in operations research, and in the present study it is investigated as a framework to solve structural design problems. Several existing formal methods for structural optimization are shown to actually be equilibrium programming methods. Additionally, the equilibrium programming framework is utilized to develop a new structural design method.
Selected computational results are presented to demonstrate the methods. 20. Chemical equilibrium. [maximizing entropy of gas system to derive relations between thermodynamic variables NASA Technical Reports Server (NTRS) 1976-01-01 The entropy of a gas system with the number of particles subject to external control is maximized to derive relations between the thermodynamic variables that obtain at equilibrium. These relations are described in terms of the chemical potential, defined as equivalent partial derivatives of entropy, energy, enthalpy, free energy, or free enthalpy. At equilibrium, the change in total chemical potential must vanish. This fact is used to derive the equilibrium constants for chemical reactions in terms of the partition functions of the species involved in the reaction. Thus the equilibrium constants can be determined accurately, just as other thermodynamic properties, from a knowledge of the energy levels and degeneracies for the gas species involved. These equilibrium constants permit one to calculate the equilibrium concentrations or partial pressures of chemically reacting species that occur in gas mixtures at any given condition of pressure and temperature or volume and temperature. 1. A physicochemical model of crystalloid infusion on acid-base status. PubMed Omron, Edward M; Omron, Rodney M 2010-09-01 The objective of this study is to develop a physicochemical model of the projected change in standard base excess (SBE) consequent to the infused volume of crystalloid solutions in common use. A clinical simulation of modeled acid-base and fluid compartment parameters was conducted in a 70-kg test participant at standard physiologic state: pH =7.40, partial pressure of carbon dioxide (PCO2) = 40 mm Hg, Henderson-Hasselbalch actual bicarbonate ([HCO3]HH) = 24.5 mEq/L, strong ion difference (SID) = 38.9 mEq/L, albumin = 4.40 g/dL, inorganic phosphate = 1.16 mmol/L, citrate total = 0.135 mmol/L, and SBE =0.1 mEq/L. 
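A toy version of the volume-weighted mixing that underlies such infusion projections can be sketched in one function. The 15 L distribution volume and the infusion volumes are illustrative assumptions, not the model's actual compartments, and renal handling is ignored:

```python
def mixed_sid(plasma_sid, dist_volume_l, crystalloid_sid, infused_l):
    """Volume-weighted SID after instantaneous mixing (no excretion)."""
    total = dist_volume_l + infused_l
    return (plasma_sid * dist_volume_l + crystalloid_sid * infused_l) / total

# 2 L normal saline (SID = 0) into a hypothetical 15 L distribution volume:
print(mixed_sid(38.9, 15.0, 0.0, 2.0))   # SID falls -> dilutional acidosis
# 2 L of a high-SID fluid (SID = 50) instead:
print(mixed_sid(38.9, 15.0, 50.0, 2.0))  # SID rises -> metabolic alkalosis
```

Even this crude sketch reproduces the qualitative result in the abstract: fluids with SID below the plasma bicarbonate set point acidify, fluids above it alkalinize, and the effect grows with infused volume.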
Simulations of multiple, sequential crystalloid infusions up to 10 L were conducted for normal saline (SID = 0), lactated Ringer's (SID = 28), plasmalyte 148 (SID = 50), one-half normal saline + 75 mEq/L sodium bicarbonate (NaHCO3; SID = 75), 0.15 mol/L NaHCO3 (SID = 150), and a hypothetical crystalloid solution whose SID = 24.5 mEq/L, respectively. Simulations were based on theoretical completion of steady-state equilibrium, and PCO2 was fixed at 40 mm Hg to assess nonrespiratory acid-base effects. A crystalloid SID equivalent to standard-state actual bicarbonate (24.5 mEq/L) results in a neutral metabolic acid-base status for infusions up to 10 L. The 5 study solutions exhibited curvilinear relationships between SBE and crystalloid infusion volume in liters. Solutions whose SID was greater than 24.5 mEq/L demonstrated a progressive metabolic alkalosis; those whose SID was less demonstrated a progressive metabolic acidosis. In a human model system, the effects of crystalloid infusion on SBE are a function of the crystalloid and plasma SID, the volume infused, and nonvolatile plasma weak acid changes. A projection of the impact of a unit volume of various isotonic crystalloid solutions on SBE is presented. The model's validation, applications, and limitations are examined. 2. Variation of Fundamental Constants Flambaum, V. V. 2006-11-01 Theories unifying gravity with other interactions suggest temporal and spatial variation of the fundamental "constants" in the expanding Universe. The spatial variation can explain the fine-tuning of the fundamental constants that allows humans (and any life) to appear: we appeared in the region of the Universe where the values of the fundamental constants are consistent with our existence. We present a review of recent work devoted to the variation of the fine structure constant α, the strong interaction, and fundamental masses. There are some hints of variation in quasar absorption spectra, Big Bang nucleosynthesis, and Oklo natural nuclear reactor data.
A very promising method to search for the variation of the fundamental constants is the comparison of different atomic clocks. A huge enhancement of the variation effects occurs in transitions between accidentally degenerate atomic and molecular energy levels. A new idea is to build a "nuclear" clock based on the ultraviolet transition between a very low-lying excited state and the ground state of the thorium nucleus; this may improve sensitivity to the variation by up to 10 orders of magnitude. A huge enhancement of the variation effects is also possible in cold atomic and molecular collisions near a Feshbach resonance. 3. Cosmic curvature from de Sitter equilibrium cosmology. PubMed Albrecht, Andreas 2011-10-01 I show that the de Sitter equilibrium cosmology generically predicts observable levels of curvature in the Universe today. The predicted value of the curvature, Ω(k), depends only on the ratio of the density of nonrelativistic matter to cosmological constant density ρ(m)(0)/ρ(Λ) and the value of the curvature from the initial bubble that starts the inflation, Ω(k)(B). The result is independent of the scale of inflation, the shape of the potential during inflation, and many other details of the cosmology. Future cosmological measurements of ρ(m)(0)/ρ(Λ) and Ω(k) will open up a window on the very beginning of our Universe and offer an opportunity to support or falsify the de Sitter equilibrium cosmology. 4. Elastic constants of calcite USGS Publications Warehouse Peselnick, L.; Robie, R.A. 1962-01-01 The recent measurements of the elastic constants of calcite by Reddy and Subrahmanyam (1960) disagree with the values obtained independently by Voigt (1910) and Bhimasenachar (1945). The present authors, using an ultrasonic pulse technique at 3 Mc and 25°C, determined the elastic constants of calcite using the exact equations governing the wave velocities in the single crystal.
The results are C11 = 13.7, C33 = 8.11, C44 = 3.50, C12 = 4.82, C13 = 5.68, and C14 = −2.00, in units of 10¹¹ dyn/cm². Independent checks of several of the elastic constants were made employing other directions and polarizations of the wave velocities. With the exception of C13, these values substantially agree with the data of Voigt and Bhimasenachar. © 1962 The American Institute of Physics. 5. The Hubble constant NASA Technical Reports Server (NTRS) Huchra, John P. 1992-01-01 The Hubble constant is the constant of proportionality between recession velocity and distance in the expanding universe. It is a fundamental property of cosmology that sets both the scale and the expansion age of the universe. It is determined by measurement of galaxy radial velocities and distances. Although there has been considerable progress in the development of new techniques for the measurement of galaxy distances, both calibration uncertainties and debates over systematic errors remain. Current determinations still range over nearly a factor of 2; the higher values favored by most local measurements are not consistent with many theories of the origin of large-scale structure and stellar evolution. 6. Functional nucleic-acid-based sensors for environmental monitoring. PubMed Sett, Arghya; Das, Suradip; Bora, Utpal 2014-10-01 Efforts to replace conventional chromatographic methods for environmental monitoring with cheaper, easy-to-use biosensors for precise detection and estimation of hazardous environmental toxicants, water- or air-borne pathogens, and various other chemicals and biologics are gaining momentum. Of the various types of biosensors classified according to their bio-recognition principle, nucleic-acid-based sensors have shown high potential in terms of cost, sensitivity, and specificity.
The discovery of catalytic activities of RNA (ribozymes) and DNA (DNAzymes) that can be triggered by divalent metal ions paved the way for their extensive use in the detection of heavy metal contaminants in the environment. This was followed by the invention of small oligonucleotide sequences called aptamers, which can fold into a specific 3D conformation under suitable conditions after binding to target molecules. Owing to their high affinity, specificity, reusability, stability, and non-immunogenicity toward a vast array of targets, including small molecules and macromolecules of organic, inorganic, and biological origin, they can often be exploited as sensors in industrial waste management, pollution control, and environmental toxicology. Further, the rational combination of the catalytic activity of DNAzymes and RNAzymes with the sequence-specific binding ability of aptamers has given rise to the most advanced form of functional nucleic-acid-based sensors, called aptazymes. Functional nucleic-acid-based sensors (FNASs) can be conjugated with fluorescent molecules, metallic nanoparticles, or quantum dots to aid in rapid detection of a variety of target molecules in a target-induced structure switch (TISS) mode. Although intensive research is being carried out to further improve FNAs as sensors, challenges remain in integrating such bio-recognition elements with advanced transduction platforms to enable their use as networked analytical systems for tailor-made analysis of environmental 7. Thermal equilibrium of goats. PubMed Maia, Alex S C; Nascimento, Sheila T; Nascimento, Carolina C N; Gebremedhin, Kifle G 2016-05-01 The effects of air temperature and relative humidity on the thermal equilibrium of goats in a tropical region were evaluated. Nine non-pregnant Anglo Nubian nanny goats were used in the study.
An indirect calorimeter was designed and developed to measure oxygen consumption, carbon dioxide production, methane production and water vapour pressure of the air exhaled from goats. Physiological parameters (rectal temperature, skin temperature, hair-coat temperature, expired air temperature, and respiratory rate and volume) as well as environmental parameters (air temperature, relative humidity, and mean radiant temperature) were measured. The results show that respiratory and volume rates and latent heat loss did not change significantly for air temperatures between 22 and 26°C. In this temperature range, metabolic heat was lost mainly by convection and long-wave radiation. For temperatures greater than 30°C, the goats maintained thermal equilibrium mainly by evaporative heat loss. At the higher air temperatures, the respiratory and ventilation rates as well as body temperatures were significantly elevated. It can be concluded that for Anglo Nubian goats, the upper limit of air temperature for comfort is around 26°C when the goats are protected from direct solar radiation. PMID:27157333 9. Gallic acid-based indanone derivatives as anticancer agents. PubMed Saxena, Hari Om; Faridi, Uzma; Srivastava, Suchita; Kumar, J K; Darokar, M P; Luqman, Suaib; Chanotiya, C S; Krishna, Vinay; Negi, Arvind S; Khanuja, S P S 2008-07-15 Gallic acid-based indanone derivatives have been synthesised. Some of the indanones showed very good anticancer activity in the MTT assay. Compounds 10, 11, 12 and 14 possessed potent anticancer activity against various human cancer cell lines. The most potent indanone (10, IC(50) = 2.2 μM) against MCF-7, a hormone-dependent breast cancer cell line, showed no toxicity to human erythrocytes even at higher concentrations (100 μg/mL, 258 μM), while indanones 11, 12 and 14 showed toxicity to erythrocytes at higher concentrations. 10. Acid-Base Homeostasis: Overview for Infusion Nurses. PubMed Masco, Natalie A 2016-01-01 Acid-base homeostasis is essential to normal function of the human body. Even slight alterations can significantly alter physiologic processes at the tissue and cellular levels. To optimally care for patients, nurses must be able to recognize signs and symptoms that indicate deviations from normal. Nurses who provide infusions to patients, whether in acute care, home care, or infusion center settings, have a responsibility to be able to recognize the laboratory value changes that occur with the imbalance and appreciate the treatment options, including intravenous infusions. PMID:27598068 11.
A fully automatic system for acid-base coulometric titrations. PubMed Cladera, A; Caro, A; Estela, J M; Cerdà, V 1990-01-01 An automatic system for acid-base titrations by electrogeneration of H(+) and OH(-) ions, with potentiometric end-point detection, was developed. The system includes a PC-compatible computer for instrumental control, data acquisition and processing, which allows up to 13 samples to be analysed sequentially with no human intervention. The system performance was tested on the titration of standard solutions, which it carried out with low errors and RSD. It was subsequently applied to the analysis of various samples of environmental and nutritional interest, specifically waters, soft drinks and wines. 12. A Computer-Based Simulation of an Acid-Base Titration ERIC Educational Resources Information Center Boblick, John M. 1971-01-01 Reviews the advantages of computer simulated environments for experiments, referring in particular to acid-base titrations. Includes pre-lab instructions and a sample computer printout of a student's use of an acid-base simulation. Ten references. (PR) 13. Compassion is a constant. PubMed Scott, Tricia 2015-11-01 Compassion is a powerful word that describes an intense feeling of commiseration and a desire to help those struck by misfortune. Most people know intuitively how and when to offer compassion to relieve another person's suffering. In health care, compassion is a constant; it cannot be rationed because emergency nurses have limited time or resources to manage increasing demands. PMID:26542898 14. XrayOpticsConstants 2005-06-20 This application (XrayOpticsConstants) is a tool for displaying X-ray and optical properties for a given material, X-ray photon energy, and, in the case of a gas, pressure. The display includes fields such as the photo-electric absorption attenuation length, density, material composition, index of refraction, and emission properties (for scintillator materials). 16. Potentiometric determination of the total acidity of humic acids by constant-current coulometry. PubMed Palladino, Giuseppe; Ferri, Diego; Manfredi, Carla; Vasca, Ermanno 2007-01-16 A straightforward method for both the quantitative and the equilibrium analysis of humic acids in solution, based on the combination of potentiometry with coulometry, is presented. The method is based on potentiometric titrations of alkaline solutions containing, besides the humic acid sample, 1 M NaClO(4); by means of constant-current coulometry the analytical acidity in the solutions is increased with high precision until a solid phase forms. Hence, the total acid content of the macromolecules may be determined from the e.m.f. data by using modified Gran plots or least-squares minimization programs. It is proposed to use the pK(w) value in the ionic medium as a check of the correctness of each experiment; this datum may be readily obtained as a side result in each titration. Modelling of the acid-base equilibria of the HA samples analysed was also performed, on the basis of the buffer capacity variations occurring during each titration. The best fit to the experimental data (least standard deviation) was obtained by assuming a mixture of three monoprotic acids (HX, HY, HZ) of about the same analytical concentration, whose acid dissociation constants in 1 M NaClO(4) at 25°C were pK(HX) = 3.9 ± 0.2, pK(HY) = 7.5 ± 0.3, and pK(HZ) = 9.5 ± 0.2, respectively.
With the proposed method, the handling of alkaline HA solutions, titration with very dilute NaOH or HCl solutions, and the addition of very small titrant volumes by microburette may all be avoided. 17. 78 FR 36698 - Microbiology Devices; Reclassification of Nucleic Acid-Based Systems for Mycobacterium tuberculosis Federal Register 2010, 2011, 2012, 2013, 2014 2013-06-19 ... Nucleic Acid-Based Systems for Mycobacterium tuberculosis Complex in Respiratory Specimens AGENCY: Food...) is proposing to reclassify nucleic acid-based in vitro diagnostic devices for the detection of... Controls Guideline: Nucleic Acid-Based In Vitro Diagnostic Devices for the Detection of... 18. Phase equilibrium studies SciTech Connect Mathias, P.M.; Stein, F.P. 1983-09-01 A phase equilibrium model has been developed for the SRC-I process, as well as other coal liquefaction processes. It is applicable to both vapor/liquid and liquid/liquid equilibria; it also provides an approximate but adequate description of aqueous mixtures in which the volatile electrolyte components dissociate to form ionic species. This report completes the description of the model presented in an earlier report (Mathias and Stein, 1983a). Comparisons of the model to previously published data on coal-fluid mixtures are presented. Further, a preliminary analysis of new data on SRC-I coal fluids is presented. Finally, the current capabilities and deficiencies of the model are discussed. 25 references, 17 figures, 30 tables. 19. Statistical physics "Beyond equilibrium" SciTech Connect Ecke, Robert E 2009-01-01 The scientific challenges of the 21st century will increasingly involve competing interactions, geometric frustration, spatial and temporal intrinsic inhomogeneity, nanoscale structures, and interactions spanning many scales.
We will focus on a broad class of emerging problems that will require new tools in non-equilibrium statistical physics and that will find application in new material functionality, in predicting complex spatial dynamics, and in understanding novel states of matter. Our work will encompass materials under extreme conditions involving elastic/plastic deformation, competing interactions, intrinsic inhomogeneity, frustration in condensed matter systems, scaling phenomena in disordered materials from glasses to granular matter, quantum chemistry applied to nano-scale materials, soft-matter materials, and spatio-temporal properties of both ordinary and complex fluids. 20. Stochastic acid-base quenching in chemically amplified photoresists: a simulation study Mack, Chris A.; Biafore, John J.; Smith, Mark D. 2011-04-01 BACKGROUND: The stochastic nature of acid-base quenching in chemically amplified photoresists leads to variations in the resulting acid concentration during post-exposure bake, which leads to line-edge roughness (LER) of the resulting features. METHODS: Using a stochastic resist simulator, we predicted the mean and standard deviation of the acid concentration after post-exposure bake for an open-frame exposure and fit the results to empirical expressions. RESULTS: The mean acid concentration after quenching can be predicted using the reaction-limited rate equation and an effective rate constant. The effective quenching rate constant is predicted by an empirical expression. A second empirical expression for the standard deviation of the acid concentration matched the output of the PROLITH stochastic resist model to within a few percent. CONCLUSIONS: Predicting the stochastic uncertainty in acid concentration during post-exposure bake for 193-nm and extreme ultraviolet resists allows optimization of resist processing and formulations, and may form the basis of a comprehensive LER model. 1.
Acid-base and catalytic properties of the products of oxidative thermolysis of double complex compounds Pechenyuk, S. I.; Semushina, Yu. P.; Kuz'mich, L. F.; Ivanov, Yu. V. 2016-01-01 Acid-base properties of the products of thermal decomposition in air of [M(A)6]x[M1(L)6]y binary complexes (where M is Co, Cr, Cu, Ni; M1 is Fe, Cr, Co; A is NH3, 1/2 en, 1/2 pn, CO(NH2)2; and L is CN, 1/2 C2O4) and their catalytic properties in the oxidation of ethanol by atmospheric oxygen are studied. It is found that these thermolysis products are mixed oxides of the central atoms of the complexes, characterized by pH values of the zero-charge point in the region of 4-9, OH-group sorption limits from 1 × 10⁻⁴ to 4.5 × 10⁻⁴ g-eq/g, OH-group surface concentrations of 10-50 nm⁻² in 0.1 M NaCl solutions, and S_sp from 3 to 95 m²/g. Their catalytic activity is estimated from the apparent rate constant of the conversion of ethanol into CO2. The values of the constants are (1-6.5) × 10⁻⁵ s⁻¹, depending on the gas flow rate and the S_sp value. 2. Equilibrium properties of chemically reacting gases NASA Technical Reports Server (NTRS) 1976-01-01 The equilibrium energy, enthalpy, entropy, specific heat at constant volume and constant pressure, and the equation of state of the gas are all derived for chemically reacting gas mixtures in terms of the compressibility, the mole fractions, the thermodynamic properties of the pure gas components, and the change in zero-point energy due to reaction. Results are illustrated for a simple diatomic dissociation reaction, with nitrogen used as an example. Next, a gas mixture resulting from combined diatomic dissociation and atomic ionization reactions is treated and, again, nitrogen is used as an example.
A short discussion is given of the additional complexities involved when precise solutions for high-temperature air are desired, including effects caused by NO produced in shuffle reactions and by other trace species formed from CO2, H2O and Ar found in normal air. 3. Wall of fundamental constants SciTech Connect Olive, Keith A.; Peloso, Marco; Uzan, Jean-Philippe 2011-02-15 We consider the signatures of a domain wall produced in the spontaneous symmetry breaking involving a dilaton-like scalar field coupled to electromagnetism. Domains on either side of the wall exhibit slight differences in their respective values of the fine-structure constant, α. If such a wall is present within our Hubble volume, absorption spectra at large redshifts may or may not provide a variation in α relative to the terrestrial value, depending on our relative position with respect to the wall. This wall could resolve the contradiction between claims of a variation of α based on Keck/HIRES data and of the constancy of α based on Very Large Telescope data. We derive the properties of the wall and the parameters of the underlying microscopic model required to reproduce the possible spatial variation of α. We discuss the constraints on the existence of the low-energy domain wall and describe its observational implications concerning the variation of the fundamental constants. 4. A continuum model for flocking: Obstacle avoidance, equilibrium, and stability Mecholsky, Nicholas Alexander The modeling and investigation of the dynamics and configurations of animal groups is a subject of growing attention. In this dissertation, we present a partial-differential-equation-based continuum model of flocking and use it to investigate several properties of group dynamics and equilibrium. We analyze the reaction of a flock to an obstacle or an attacking predator.
We show that the flock response is in the form of density disturbances that resemble Mach cones whose configuration is determined by the anisotropic propagation of waves through the flock. We investigate the effect of a flock 'pressure' and pairwise repulsion on an equilibrium density distribution. We investigate both linear and nonlinear pressures, look at the convergence to a 'cold' (T → 0) equilibrium solution, and find regions of parameter space where different models produce the same equilibrium. Finally, we analyze the stability of an equilibrium density distribution to long-wavelength perturbations. Analytic results for the stability of a constant density solution as well as stability regimes for constant density solutions to the equilibrium equations are presented. 5. Varying constants quantum cosmology SciTech Connect Leszczyńska, Katarzyna; Balcerzak, Adam; Dabrowski, Mariusz P. E-mail: [email protected] 2015-02-01 We discuss minisuperspace models within the framework of varying-physical-constants theories including a Λ-term. In particular, we consider the varying speed of light (VSL) theory and the varying gravitational constant (VG) theory, using the specific ansätze for the variability of the constants: c(a) = c_0 a^n and G(a) = G_0 a^q. We find that most of the varying-c and varying-G minisuperspace potentials are of the tunneling type, which allows the WKB approximation of quantum mechanics to be used. Using this method we show that the probability of tunneling of the universe "from nothing" (a = 0) to a Friedmann geometry with the scale factor a_t is large for growing-c models and is strongly suppressed for diminishing-c models. As for varying G, the probability of tunneling is large for diminishing G, while it is small for increasing G. In general, both varying c and G change the probability of tunneling in comparison to the standard matter content (cosmological term, dust, radiation) universe models. 6.
Absorption Spectroscopy Study of Acid-Base and Metal-Binding Properties of Flavanones Shubina, V. S.; Shatalina, Yu. V. 2013-11-01 We have used absorption spectroscopy to study the acid-base and metal-binding properties of two structurally similar flavanones: taxifolin and naringenin. We have determined the acid dissociation constants for taxifolin (pKa1 = 7.10 ± 0.05, pKa2 = 8.60 ± 0.09, pKa3 = 8.59 ± 0.19, pKa4 = 11.82 ± 0.36) and naringenin (pKa1 = 7.05 ± 0.05, pKa2 = 8.85 ± 0.09, pKa3 = 12.01 ± 0.38). The appearance of new absorption bands in the visible wavelength region let us determine the stoichiometric composition of the iron(II) complexes of the flavanones. We show that at pH 5 there is in solution a mixture of complexes between taxifolin and iron(II) ions in stoichiometric ratios of 2:1 and 1:2, while at pH 7.4 and pH 9 we detect a 1:1 taxifolin:Fe(II) complex. We established that at these pH values, naringenin forms a 2:1 complex with iron(II) ions. We propose structures for the complexes formed. A comprehensive study of the acid-base properties and the metal-binding capability of the two structurally similar flavanones let us determine the structure-property relation and the conditions under which the antioxidant activity of the polyphenols appears, via chelation of variable-valence metal ions. 7. Acid-base and respiratory properties of a buffered bovine erythrocyte perfusion medium. PubMed Lindinger, M I; Heigenhauser, G J; Jones, N L 1986-05-01 Current research in organ physiology often utilizes in situ or isolated perfused tissues. We have characterized a perfusion medium associated with excellent performance characteristics in perfused mammalian skeletal muscle. The perfusion medium, consisting of Krebs-Henseleit buffer, bovine serum albumin, and fresh bovine erythrocytes, was studied with respect to its gas-carrying relationships and its response to manipulation of acid-base state.
Equilibration of the perfusion medium at base excesses of −10, −5, 0, 5, and 10 mmol·L⁻¹ to humidified gas mixtures varying in their CO2 and O2 content was followed by measurements of perfusate hematocrit, hemoglobin concentration, pH, Pco2, Cco2, Po2, and percent oxygen saturation. The oxygen dissociation curve was similar to that of mammalian bloods, having a P50 of 32 Torr (1 Torr = 133.3 Pa), a Hill constant n of 2.87 ± 0.15, and a Bohr factor of −0.47, showing the typical Bohr shifts with respect to CO2 and pH. The oxygen capacity was calculated to be 190 mL·L⁻¹ blood. The carbon dioxide dissociation curve was also similar to that of mammalian blood. The in vitro nonbicarbonate buffer capacity (Δ[HCO3⁻]·ΔpH⁻¹) at zero base excess was −24.6 and −29.9 mmol·L⁻¹·pH⁻¹ for the perfusate and buffer, respectively. The effects of reduced oxygen saturation on base excess and pH of the medium were quantified. The data were used to construct an acid-base alignment diagram for the medium, which may be used to quantify the flux of nonvolatile acid or base added to the venous effluent during tissue perfusions. 8. Acid-base titrations using microfluidic paper-based analytical devices. PubMed Karita, Shingo; Kaneta, Takashi 2014-12-16 9. Acid-base balance in the developing marsupial: from ectotherm to endotherm. PubMed Andrewartha, Sarah J; Cummings, Kevin J; Frappell, Peter B 2014-05-01 Marsupial joeys are born ectothermic and develop endothermy within their mother's thermally stable pouch. We hypothesized that Tammar wallaby joeys would switch from α-stat to pH-stat regulation during the transition from ectothermy to endothermy.
To address this, we compared ventilation (Ve), metabolic rate (Vo2), and variables relevant to blood gas and acid-base regulation and oxygen transport including the ventilatory requirements (Ve/Vo2 and Ve/Vco2), partial pressures of oxygen (PaO2), carbon dioxide (PaCO2), pHa, and oxygen content (CaO2) during progressive hypothermia in ecto- and endothermic Tammar wallabies. We also measured the same variables in the well-studied endotherm, the Sprague-Dawley rat. Hypothermia was induced in unrestrained, unanesthetized joeys and rats by progressively dropping the ambient temperature (Ta). Rats were additionally exposed to helox (80% helium, 20% oxygen) to facilitate heat loss. Respiratory, metabolic, and blood-gas variables were measured over a large body temperature (Tb) range (∼15-16°C in both species). Ectothermic joeys displayed limited thermogenic ability during cooling: after an initial plateau, Vo2 decreased with the progressive drop in Tb. The Tb of endothermic joeys and rats fell despite Vo2 nearly doubling with the initiation of cold stress. In all three groups the changes in Vo2 were met by changes in Ve, resulting in constant Ve/Vo2 and Ve/Vco2, blood gases, and pHa. Thus, although thermogenic capability was nearly absent in ectothermic joeys, blood acid-base regulation was similar to endothermic joeys and rats. This suggests that unlike some reptiles, unanesthetized mammals protect arterial blood pH with changing Tb, irrespective of their thermogenic ability and/or stage of development. 10. 
The influence of dissolved organic matter on the acid-base system of the Baltic Sea Kuliński, Karol; Schneider, Bernd; Hammer, Karoline; Machulik, Ulrike; Schulz-Bull, Detlef 2014-04-01 To assess the influence of dissolved organic matter (DOM) on the acid-base system of the Baltic Sea, 19 stations along the salinity gradient from Mecklenburg Bight to the Bothnian Bay were sampled in November 2011 for total alkalinity (AT), total inorganic carbon concentration (CT), partial pressure of CO2 (pCO2), and pH. Based on these data, an organic alkalinity contribution (Aorg) was determined, defined as the difference between measured AT and the inorganic alkalinity calculated from CT and pH and/or CT and pCO2. Aorg was in the range of 22-58 μmol kg⁻¹, corresponding to 1.5-3.5% of AT. The method to determine Aorg was validated in an experiment performed on DOM-enriched river water samples collected from the mouths of the Vistula and Oder Rivers in May 2012. The Aorg increase determined in that experiment correlated directly with the increased DOC concentration caused by enrichment of the > 1 kDa DOM fraction. To examine the effect of Aorg on calculations of the marine CO2 system, the pCO2 and pH values measured in Baltic Sea water were compared with calculated values that were based on the measured alkalinity and another variable of the CO2 system, but ignored the existence of Aorg. Large differences between measured and calculated pCO2 and pH were obtained when the computations were based on AT and CT. The calculated pCO2 was 27-56% lower than the measured value, whereas the calculated pH was overestimated by more than 0.4 pH units. Since biogeochemical models are based on the transport and transformations of AT and CT, the acid-base properties of DOM should be included in calculations of the CO2 system in DOM-rich basins like the Baltic Sea.
In view of our limited knowledge about the composition and acid-base properties of DOM, this is best achieved using a bulk dissociation constant, KDOM, that represents all weakly acidic functional groups present in DOM. Our preliminary results indicated that the bulk KDOM in the Baltic Sea is 2.94 × 10⁻⁸ mol kg⁻¹. 11. Semiexperimental equilibrium structure of the lower energy conformer of glycidol by the mixed estimation method. PubMed Demaison, Jean; Craig, Norman C; Conrad, Andrew R; Tubergen, Michael J; Rudolph, Heinz Dieter 2012-09-13 Rotational constants were determined for (18)O-substituted isotopologues of the lower energy conformer of glycidol, which has an intramolecular inner hydrogen bond from the hydroxyl group to the oxirane ring oxygen. Rotational constants were previously determined for the (13)C and the OD species. These rotational constants have been corrected with the rovibrational constants calculated from an ab initio cubic force field. The derived semiexperimental equilibrium rotational constants have been supplemented by carefully chosen structural parameters, including those for hydrogen atoms, from medium-level ab initio calculations. The combined data have been used in a weighted least-squares fit to determine an equilibrium structure for the glycidol H-bond inner conformer. This work shows that the mixed estimation method allows us to determine a complete and reliable equilibrium structure for large molecules, even when the rotational constants of a number of isotopologues are unavailable. 12. A damped pendulum forced with a constant torque Coullet, P.; Gilli, J. M.; Monticelli, M.; Vandenberghe, N. 2005-12-01 The dynamics of a damped pendulum driven by a constant torque is studied experimentally and theoretically. We use this simple device to demonstrate some generic dynamical behavior including the loss of equilibrium or saddle-node bifurcation, with or without hysteresis, and the homoclinic bifurcation.
A qualitative analysis is developed to emphasize the role of two dimensionless parameters corresponding to damping and forcing. 13. Henry's law constants for dimethylsulfide in freshwater and seawater NASA Technical Reports Server (NTRS) Dacey, J. W. H.; Wakeham, S. G.; Howes, B. L. 1984-01-01 Distilled water and several waters of varying salinity were subjected, over a 0-32 °C temperature range, to measurements of Henry's law constants for dimethylsulfide. Values of the solubility parameters A and C are obtained for distilled water and seawater which support the concept that the concentration of dimethylsulfide in the atmosphere is far from equilibrium with seawater. 14. Fatty acid-based polyurethane films for wound dressing applications. PubMed Gultekin, Guncem; Atalay-Oral, Cigdem; Erkal, Sibel; Sahin, Fikret; Karastova, Djursun; Tantekin-Ersolmaz, S Birgul; Guner, F Seniha 2009-01-01 Fatty acid-based polyurethane films were prepared for use as potential wound dressing material. The polymerization reaction was carried out with or without catalyst. Polymer films were prepared by the casting-evaporation technique, with or without crosslink-catalyst. The film prepared from the uncatalyzed reaction product with crosslink-catalyst gave a slightly higher crosslink density. The mechanical tests showed that the increase in tensile strength and the decrease in elongation at break are due to the increase in the degree of crosslinking. All films were flexible and resistant to acid solution. The films prepared without crosslink-catalyst were more hydrophilic and absorbed more water. The highest permeability values were generally obtained for the films prepared without crosslink-catalyst. Both the direct contact method and the MTT test were applied to determine the cytotoxicity of the polymer films, and the polyurethane film prepared from the uncatalyzed reaction product without crosslink-catalyst showed the best biocompatibility, closest to the commercial product, Opsite.
16. Ultrasonic and densimetric titration applied for acid-base reactions. PubMed Burakowski, Andrzej; Gliński, Jacek 2014-01-01 Classical acoustic acid-base titration was monitored using sound speed and density measurements. Plots of these parameters, as well as of the adiabatic compressibility coefficient calculated from them, exhibit changes with the volume of added titrant. Compressibility changes can be explained and quantitatively predicted theoretically in terms of Pasynski theory of non-compressible hydrates combined with that of the additivity of the hydration numbers with the amount and type of ions and molecules present in solution.
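The adiabatic compressibility coefficient mentioned in the titration abstract above is derived from the two measured signals via the Newton-Laplace relation, κ_S = 1/(ρu²). A minimal sketch, using approximate literature values for pure water near 25 °C:

```python
def adiabatic_compressibility(density, sound_speed):
    """Newton-Laplace relation: kappa_S = 1 / (rho * u^2), in Pa^-1."""
    return 1.0 / (density * sound_speed ** 2)

# Approximate pure-water values near 25 C: rho in kg/m^3, u in m/s
kappa = adiabatic_compressibility(997.0, 1497.0)
print(f"kappa_S = {kappa:.3e} Pa^-1")  # on the order of 4.5e-10 Pa^-1
```

Tracking κ_S against titrant volume is what makes the combined sound-speed/density titration curve sensitive to changing hydration in solution.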
It also seems that this development could be applied in chemical engineering for monitoring the course of chemical processes, since the applied experimental methods can be carried out almost independently of the medium under test (harmful, aggressive, etc.). 17. Micellar acid-base potentiometric titrations of weak acidic and/or insoluble drugs. PubMed Gerakis, A M; Koupparis, M A; Efstathiou, C E 1993-01-01 The effect of various surfactants [the cationics cetyl trimethyl ammonium bromide (CTAB) and cetyl pyridinium chloride (CPC), the anionic sodium dodecyl sulphate (SDS), and the nonionic polysorbate 80 (Tween 80)] on the solubility and ionization constant of some sparingly soluble weak acids of pharmaceutical interest was studied. Benzoic acid (and its 3-methyl-, 3-nitro-, and 4-tert-butyl-derivatives), acetylsalicylic acid, naproxen and iopanoic acid were chosen as model examples. Precise and accurate acid-base titrations in micellar systems were made feasible using a microcomputer-controlled titrator. The response curve, response time and potential drift of the glass electrode in the micellar systems were examined. The cationics CTAB and CPC were found to increase considerably the ionization constant of the weak acids (delta pKa ranged from -0.21 to -3.57), while the anionic SDS showed a negligible effect and the nonionic Tween 80 generally decreased the ionization constants. The solubility of the acids in aqueous micellar and acidified micellar solutions was studied spectrophotometrically and was found to increase in all cases. Acetylsalicylic acid, naproxen, benzoic acid and iopanoic acid could be easily determined in raw material, and some of them in pharmaceutical preparations, by direct titration in the CTAB micellar system instead of the traditional non-aqueous or back titrimetry. Precisions of 0.3-4.3% RSD and good correlation with the tedious official methods were obtained.
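The practical effect of a micelle-induced pKa shift on ionization follows directly from the Henderson-Hasselbalch relation. A quick sketch with illustrative numbers (an aqueous pKa of 4.2, roughly that of benzoic acid, and the largest ΔpKa reported above, -3.57):

```python
def ionized_fraction(ph, pka):
    """Fraction of a monoprotic acid present as the A- anion at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

pka_water = 4.2                   # illustrative aqueous pKa (benzoic-acid-like)
pka_micellar = pka_water - 3.57   # largest cationic-micelle shift reported above

for ph in (3.0, 4.2, 6.0):
    print(ph, ionized_fraction(ph, pka_water), ionized_fraction(ph, pka_micellar))
```

A shift of -3.57 units means the acid is essentially fully ionized even well below its aqueous pKa, which is what makes direct titration in the cationic micellar system feasible.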
The interference study of some excipients showed that a preliminary test should be carried out before the assay of formulations. 18. Nucleic acid-based tissue biomarkers of urologic malignancies. PubMed Dietrich, Dimo; Meller, Sebastian; Uhl, Barbara; Ralla, Bernhard; Stephan, Carsten; Jung, Klaus; Ellinger, Jörg; Kristiansen, Glen 2014-08-01 Molecular biomarkers play an important role in the clinical management of cancer patients. Biomarkers allow estimation of the risk of developing cancer; help to diagnose a tumor, ideally at an early stage when cure is still possible; and aid in monitoring disease progression. Furthermore, they hold the potential to predict the outcome of the disease (prognostic biomarkers) and the response to therapy (predictive biomarkers). Altogether, biomarkers will help to avoid tumor-related deaths and reduce overtreatment, and will contribute to increased survival and quality of life in cancer patients due to personalized treatments. It is well established that the process of carcinogenesis is a complex interplay between genomic predisposition, acquired somatic mutations, epigenetic changes and genomic aberrations. Within this complex interplay, nucleic acids, i.e. RNA and DNA, play a fundamental role and therefore represent ideal candidates for biomarkers. They are particularly promising candidates because sequence-specific hybridization and amplification technologies allow highly accurate and sensitive assessment of these biomarker levels over a broad dynamic range. This article provides an overview of nucleic acid-based biomarkers in tissues for the management of urologic malignancies, i.e. tumors of the prostate, testis, kidney, penis, urinary bladder, renal pelvis, ureter and other urinary organs. 
Special emphasis is put on genomic, transcriptomic and epigenomic biomarkers (SNPs, mutations [genomic and mitochondrial], microsatellite instabilities, viral and bacterial DNA, DNA methylation and hydroxymethylation, mRNA expression, and non-coding RNAs [lncRNA, miRNA, siRNA, piRNA, snRNA, snoRNA]). Due to the multitude of published biomarker candidates, special focus is given to the general applicability of different molecular classes as biomarkers and some particularly promising nucleic acid biomarkers. Furthermore, specific challenges regarding the development and clinical 19. Napoleon Is in Equilibrium PubMed Central Phillips, Rob 2016-01-01 It has been said that the cell is the test tube of the twenty-first century. If so, the theoretical tools needed to quantitatively and predictively describe what goes on in such test tubes lag sorely behind the stunning experimental advances in biology seen in the decades since the molecular biology revolution began. Perhaps surprisingly, one of the theoretical tools that has been used with great success on problems ranging from how cells communicate with their environment and each other to the nature of the organization of proteins and lipids within the cell membrane is statistical mechanics. A knee-jerk reaction to the use of statistical mechanics in the description of cellular processes is that living organisms are so far from equilibrium that one has no business even thinking about it. But such reactions are probably too hasty given that there are many regimes in which, because of a separation of timescales, for example, such an approach can be a useful first step. In this article, we explore the power of statistical mechanical thinking in the biological setting, with special emphasis on cell signaling and regulation. We show how such models are used to make predictions and describe some recent experiments designed to test them. 
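A canonical example of the statistical-mechanics approach described in the preceding abstract is a two-state ligand-receptor model, where occupancy follows from Boltzmann-weighted states. The reference concentration and binding energy below are illustrative, not taken from the article.

```python
import math

def p_bound(conc, c0=0.6e-3, beta_deps=-5.0):
    """Probability that a receptor is occupied, from a two-state partition function.
    conc/c0 counts ligand placements in solution; beta_deps is the (illustrative)
    binding energy difference in units of kT."""
    w = (conc / c0) * math.exp(-beta_deps)
    return w / (1.0 + w)

for c in (1e-6, 1e-5, 1e-4, 1e-3):
    print(f"c = {c:.0e} M -> p_bound = {p_bound(c):.3f}")
```

Despite its equilibrium assumptions, this sigmoidal occupancy curve is the kind of falsifiable prediction such models offer for signaling and regulation experiments.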
We also consider the limits of such models based on the relative timescales of the processes of interest. PMID:27429713 20. Copolymer Crystallization: Approaching Equilibrium Crist, Buckley; Finerman, Terry 2002-03-01 Random ethylene-butene copolymers of uniform chemical composition and degree of polymerization are crystallized by evaporation of thin films (1-5 μm) from solution. Macroscopic films (100 μm) formed by sequential layer deposition are characterized by density, calorimetry and X-ray techniques. Most notable is the density, which in some cases implies a crystalline fraction nearly 90% of the equilibrium value calculated from Flory theory. The melting temperature of these solution-deposited layers is increased by as much as 8 °C over Tm for the same polymer crystallized from the melt. Small-angle X-ray scattering indicates that the amorphous layer thickness is strongly reduced by this layered crystallization process. X-ray diffraction shows a pronounced orientation of chain axes and lamellar normals parallel to the normal of the macroscopic film. It is clear that solvent enhances chain mobility, permitting proper sequences to aggregate and crystallize in a manner that is never achieved in the melt. 2. Change is a Constant. PubMed Lubowitz, James H; Provencher, Matthew T; Brand, Jefferson C; Rossi, Michael J; Poehling, Gary G 2015-06-01 In 2015, Henry P. Hackett, Managing Editor, Arthroscopy, retires, and Edward A. Goss, Executive Director, Arthroscopy Association of North America (AANA), retires. Association is a positive constant, in a time of change. With change comes a need for continuing education, research, and sharing of ideas. While the quality of education at AANA and ISAKOS is superior and most relevant, the unique reason to travel and meet is the opportunity to interact with innovative colleagues. Personal interaction best stimulates new ideas to improve patient care, research, and teaching. Through our network, we best create innovation. 3. Cosmology with varying constants. PubMed Martins, Carlos J A P 2002-12-15 The idea of possible time or space variations of the 'fundamental' constants of nature, although not new, is only now beginning to be actively considered by large numbers of researchers in the particle physics, cosmology and astrophysics communities. This revival is mostly due to the claims of possible detection of such variations, in various different contexts and by several groups.
I present the current theoretical motivations and expectations for such variations, review the current observational status and discuss the impact of a possible confirmation of these results on our views of cosmology and physics as a whole. 4. Transition State Charge Stabilization and Acid-Base Catalysis of mRNA Cleavage by the Endoribonuclease RelE. PubMed Dunican, Brian F; Hiller, David A; Strobel, Scott A 2015-12-01 The bacterial toxin RelE is a ribosome-dependent endoribonuclease. It is part of a type II toxin-antitoxin system that contributes to antibiotic resistance and biofilm formation. During amino acid starvation, RelE cleaves mRNA in the ribosomal A-site, globally inhibiting protein translation. RelE is structurally similar to microbial RNases that employ general acid-base catalysis to facilitate RNA cleavage. The RelE active site is atypical for acid-base catalysis, in that it is enriched with positively charged residues and lacks the prototypical histidine-glutamate catalytic pair, making the mechanism of mRNA cleavage unclear. In this study, we use a single-turnover kinetic analysis to measure the effect of pH and phosphorothioate substitution on the rate constant for cleavage of mRNA by wild-type RelE and seven active-site mutants. Mutation and thio effects indicate a major role for stabilization of increased negative charge in the transition state by arginine 61. The wild-type RelE cleavage rate constant is pH-independent, but the reaction catalyzed by many of the mutants is strongly dependent on pH, suggestive of general acid-base catalysis. pH-rate curves indicate that wild-type RelE operates with the pK(a) of at least one catalytic residue significantly downshifted by the local environment. Mutation of any single active-site residue is sufficient to disrupt this microenvironment and revert the shifted pK(a) back above neutrality. pH-rate curves are consistent with K54 functioning as a general base and R81 as a general acid.
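The pH-rate behaviour consistent with a general base (K54) and a general acid (R81), as described above, is a bell-shaped profile: activity requires the base deprotonated and the acid still protonated. The pKa values below are purely illustrative placeholders, not the measured RelE values.

```python
def relative_rate(ph, pka_base=6.5, pka_acid=8.5):
    """Fraction of enzyme with the general base deprotonated AND the
    general acid protonated; pKa values are illustrative."""
    f_base = 1.0 / (1.0 + 10.0 ** (pka_base - ph))   # base in its active (deprotonated) form
    f_acid = 1.0 / (1.0 + 10.0 ** (ph - pka_acid))   # acid in its active (protonated) form
    return f_base * f_acid

profile = {ph / 10: relative_rate(ph / 10) for ph in range(40, 111, 5)}
best_ph = max(profile, key=profile.get)
print(f"rate peaks near pH {best_ph}")  # midway between the two pKa values
```

Downshifted residue pKa values, like those inferred for wild-type RelE, move and flatten this bell, which is why the wild-type rate constant can appear pH-independent over the measured range.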
The capacity of RelE to effect a large pK(a) shift and facilitate a common catalytic mechanism by uncommon means furthers our understanding of other atypical enzymatic active sites. 5. Local thermodynamic equilibrium for globally disequilibrium open systems under stress 2016-04-01 Predictive modeling of far and near equilibrium processes is essential for understanding pattern formation and for quantifying natural processes that are never in global equilibrium. Methods of both equilibrium and non-equilibrium thermodynamics are needed and have to be combined. For example, predicting temperature evolution due to heat conduction requires simultaneous use of the equilibrium relationship between internal energy and temperature via heat capacity (the caloric equation of state) and the disequilibrium relationship between heat flux and temperature gradient. Similarly, modeling of rocks deforming under stress, reactions in systems open to porous fluid flow, or kinetic overstepping of the equilibrium reaction boundary necessarily needs both equilibrium and disequilibrium material properties measured under fundamentally different laboratory conditions. Classical irreversible thermodynamics (CIT) is the well-developed discipline providing the working recipes for the combined application of mutually exclusive experimental data, such as density and chemical potential at rest under constant pressure and temperature, and viscosity of the flow under stress. Several examples will be presented. 6. Equilibrium and non-equilibrium cluster phases in colloids with competing interactions. PubMed Mani, Ethayaraja; Lechner, Wolfgang; Kegel, Willem K; Bolhuis, Peter G 2014-07-01 The phase behavior of colloids that interact via competing interactions - short-range attraction and long-range repulsion - is studied by computer simulation.
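Competing-interaction pair potentials of the kind just mentioned (short-range attraction, long-range repulsion) are commonly modelled as a Lennard-Jones well plus a screened-Coulomb (Yukawa) tail; the abstract does not specify its functional form, so the sketch and parameters below are illustrative. They produce the characteristic attractive minimum followed by a repulsive barrier that stabilizes finite clusters.

```python
import math

def pair_potential(r, eps=1.0, a_rep=1.0, kappa=0.5):
    """Lennard-Jones attraction + Yukawa repulsion, in reduced units (sigma = 1)."""
    lj = 4.0 * eps * (r ** -12 - r ** -6)
    yukawa = a_rep * math.exp(-kappa * r) / r
    return lj + yukawa

rs = [1.0 + 0.01 * i for i in range(400)]             # r from 1.0 to ~5.0
u = [pair_potential(r) for r in rs]
u_min = min(u)                                        # short-range attractive well
barrier = max(ui for r, ui in zip(rs, u) if r > 1.5)  # longer-range repulsive hump
print(u_min, barrier)
```

The well favours compact local packing while the barrier penalizes large aggregates, which is the competition behind the cluster phases the simulations explore.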
In particular, for a fixed strength and range of repulsion, the effect of the strength of an attractive interaction (ε) on the phase behavior is investigated at various colloid densities (ρ). A thermodynamically stable equilibrium colloidal cluster phase, consisting of compact crystalline clusters, is found below the fluid-solid coexistence line in the ε-ρ parameter space. The mean cluster size is found to linearly increase with the colloid density. At large ε and low densities, and at small ε and high densities, a non-equilibrium cluster phase, consisting of elongated Bernal spiral-like clusters, is observed. Although gelation can be induced either by increasing ε at constant density or vice versa, the gelation mechanism is different in either route. While in the ρ route gelation occurs via a glass transition of compact clusters, gelation in the ε route is characterized by percolation of elongated clusters. This study both provides the location of equilibrium and non-equilibrium cluster phases with respect to the fluid-solid coexistence, and reveals the dependencies of the gelation mechanism on the preparation route. 7. Compilation of Henry's law constants, version 3.99 Sander, R. 2014-11-01 Many atmospheric chemicals occur in the gas phase as well as in liquid cloud droplets and aerosol particles. Therefore, it is necessary to understand the distribution between the phases. According to Henry's law, the equilibrium ratio between the abundances in the gas phase and in the aqueous phase is constant for a dilute solution. Henry's law constants of trace gases of potential importance in environmental chemistry have been collected and converted into a uniform format. The compilation contains 14775 values of Henry's law constants for 3214 species, collected from 639 references. It is also available on the internet at http://www.henrys-law.org. 8. 
The spectroscopic constants and anharmonic force field of AgSH: An ab initio study Zhao, Yanliang; Wang, Meishan; Yang, Chuanlu; Ma, Xiaoguang; Zhu, Ziliang 2016-07-01 The equilibrium structure, spectroscopic constants, and anharmonic force field of silver hydrosulfide (AgSH) have been calculated with the B3P86, B3PW91 and MP2 methods employing two basis sets, TZP and QZP. The calculated geometries, ground state rotational constants, harmonic vibrational wave numbers, and quartic and sextic centrifugal distortion constants are compared with the available experimental and theoretical data. The equilibrium rotational constants, fundamental frequencies, anharmonic constants, vibration-rotation interaction constants, Coriolis coupling constants, and cubic and quartic force constants are predicted. The calculated results show that the MP2/TZP results are in good agreement with experimental observation and that MP2/TZP is an advisable choice for studying the anharmonic force field of AgSH. 9. Comparison of the acid-base properties of ribose and 2'-deoxyribose nucleotides. PubMed Mucha, Ariel; Knobloch, Bernd; Jezowska-Bojczuk, Małgorzata; Kozłowski, Henryk; Sigel, Roland K O 2008-01-01 The extent to which the replacement of a ribose unit by a 2'-deoxyribose unit influences the acid-base properties of nucleotides has not hitherto been determined in detail. In this study, by potentiometric pH titrations in aqueous solution, we have measured the acidity constants of the 5'-di- and 5'-triphosphates of 2'-deoxyguanosine [i.e., of H(2)(dGDP)(-) and H(2)(dGTP)(2-)] as well as of the 5'-mono-, 5'-di-, and 5'-triphosphates of 2'-deoxyadenosine [i.e., of H(2)(dAMP)(+/-), H(2)(dADP)(-), and H(2)(dATP)(2-)]. These 12 acidity constants (of the 56 that are listed) are compared with those of the corresponding ribose derivatives (published data) measured under the same experimental conditions.
The results show that all protonation sites in the 2'-deoxynucleotides are more basic than those in their ribose counterparts. The influence of the 2'-OH group is dependent on the number of 5'-phosphate groups as well as on the nature of the purine nucleobase. The basicity of N7 in guanine nucleotides is most significantly enhanced (by about 0.2 pK units), while the effect on the phosphate groups and the N1H or N1H(+) sites is less pronounced but clearly present. In addition, (1)H NMR chemical shift studies as a function of pD in D(2)O have been carried out for the dAMP, dADP, and dATP systems, which confirmed the results from the potentiometric pH titrations and showed the nucleotides to be in their anti conformations. Overall, our results are not only of relevance for metal ion binding to nucleotides or nucleic acids, but also constitute an exact basis for the calculation, determination, and understanding of perturbed pK(a) values in DNAzymes and ribozymes, as needed for the delineation of acid-base mechanisms in catalysis. 10. Equilibrium Shape of Colloidal Crystals. PubMed Sehgal, Ray M; Maroudas, Dimitrios 2015-10-27 Assembling colloidal particles into highly ordered configurations, such as photonic crystals, has significant potential for enabling a broad range of new technologies. Facilitating the nucleation of colloidal crystals and developing successful crystal growth strategies require a fundamental understanding of the equilibrium structure and morphology of small colloidal assemblies. Here, we report the results of a novel computational approach to determine the equilibrium shape of assemblies of colloidal particles that interact via an experimentally validated pair potential.
While the well-known Wulff construction can accurately capture the equilibrium shape of large colloidal assemblies, containing O(10(4)) or more particles, determining the equilibrium shape of small colloidal assemblies of O(10) particles requires a generalized Wulff construction technique which we have developed for a proper description of equilibrium structure and morphology of small crystals. We identify and characterize fully several "magic" clusters which are significantly more stable than other similarly sized clusters. 11. A Simple Method for the Consecutive Determination of Protonation Constants through Evaluation of Formation Curves ERIC Educational Resources Information Center Hurek, Jozef; Nackiewicz, Joanna 2013-01-01 A simple method is presented for the consecutive determination of protonation constants of polyprotic acids based on their formation curves. The procedure is based on generally known equations that describe dissociation equilibria. It has been demonstrated through simulation that the values obtained through the proposed method are sufficiently… 12. Acid-base property of N-methylimidazolium-based protic ionic liquids depending on anion. PubMed Kanzaki, Ryo; Doi, Hiroyuki; Song, Xuedan; Hara, Shota; Ishiguro, Shin-ichi; Umebayashi, Yasuhiro 2012-12-01 Proton-donating and ionization properties of several protic ionic liquids (PILs) made from N-methylimidazole (Mim) and a series of acids (HA) have been assessed by means of potentiometric and calorimetric titrations. With regard to strong acids, bis(trifluoromethanesulfonyl) amide (Tf(2)NH) and trifluoromethanesulfonic acid (TfOH), it was elucidated that the two equimolar mixtures with Mim almost consist of ionic species, HMim(+) and A(-), and the proton transfer equilibrium corresponding to autoprotolysis in ordinary molecular liquids was established. 
The respective autoprotolysis constants were successfully evaluated, which indicate that the proton-donating abilities of TfOH and Tf(2)NH in the respective PILs are similar. In the case of trifluoroacetic acid, the proton-donating ability of CF(3)COOH is much weaker than those of TfOH and Tf(2)NH, while ions are the predominant species. On the other hand, with regard to formic acid and acetic acid, the protons of these acids are suggested not to transfer sufficiently to Mim. From calorimetric titrations, at most about half of the Mim is estimated to be protonated in the CH(3)COOH-Mim equimolar mixture. In such a mixture, the formation of hydrogen-bonded adducts has been suggested. The autoprotolysis constants of the present PILs show a good linear correlation with the dissociation constants of the constituent acids in an aqueous phase. Blichert-Toft, J.; Albarede, F. 2011-12-01 When only modern isotope compositions are concerned, the choice of normalization values is inconsequential provided that their values are universally accepted. No harm is done as long as large amounts of standard reference material with known isotopic differences with respect to the reference value ('anchor point') can be maintained under controlled conditions. For over five decades, the scientific community has been referring to an essentially unavailable SMOW for stable O and H isotopes and to a long-gone belemnite sample for carbon. For radiogenic isotopes, the isotope composition of the daughter element, the parent-daughter ratio, and a particular value of the decay constant are all part of the reference. For the Lu-Hf system, for which the physical measurements of the decay constant have been particularly defective, the reference includes the isotope composition of Hf and the Lu/Hf ratio of an unfortunately heterogeneous chondrite mix that has been successively refined by Patchett and Tatsumoto (1981), Blichert-Toft and Albarede (1997, BTA), and Bouvier et al. (2008, BVP).
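The chondritic (CHUR) reference just discussed enters ε-notation calculations as follows; this is a minimal sketch, and the decay constant and CHUR parameters below are illustrative values of the commonly quoted magnitude, not authoritative numbers.

```python
import math

LAMBDA_LU176 = 1.867e-11               # /yr, illustrative 176Lu decay constant
CHUR_HF, CHUR_LUHF = 0.282785, 0.0336  # illustrative present-day 176Hf/177Hf and 176Lu/177Hf

def hf_at(t_yr, hf_now, luhf_now):
    """176Hf/177Hf at a time t_yr in the past, removing in-grown radiogenic Hf."""
    return hf_now - luhf_now * (math.exp(LAMBDA_LU176 * t_yr) - 1.0)

def eps_hf(t_yr, hf_now, luhf_now):
    """Deviation from the chondritic reference, in parts per 10^4 (epsilon units)."""
    return (hf_at(t_yr, hf_now, luhf_now) / hf_at(t_yr, CHUR_HF, CHUR_LUHF) - 1.0) * 1e4

# CHUR measured against itself is 0 at any time; a sample with a higher Lu/Hf
# but the same present-day ratio had less radiogenic Hf in the past.
print(eps_hf(3.0e9, CHUR_HF, CHUR_LUHF))
print(eps_hf(3.0e9, CHUR_HF, 0.05))
```

Because εHf(T) divides by the chondritic ratio at time T, both the adopted CHUR parameters and the decay constant propagate directly into the epsilon-unit differences discussed in the abstract.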
The εHf(T) difference created by using BTA and BVP is nearly within error (+0.45 epsilon units today and -0.36 at 3 Ga) and therefore of little or no consequence. A more serious issue arises when the chondritic reference is taken to represent the Hf isotope evolution of the Bulk Silicate Earth (BSE): the initial isotope composition of the Solar System, as determined by the indistinguishable intercepts of the external eucrite isochron (Blichert-Toft et al., 2002) and the internal angrite SAH99555 isochron (Thrane et al., 2010), differs from the chondrite value of BTA and BVP extrapolated to 4.56 Ga by ~5 epsilon units. This difference, and the overestimated value of the 176Lu decay constant derived from the slopes of these isochrons, have been interpreted as reflecting irradiation of the solar nebula by either gamma (Albarede et al., 2006) or cosmic rays (Thrane et al., 2010) during 14. Measurement of the solar constant NASA Technical Reports Server (NTRS) Crommelynck, D. 1981-01-01 The absolute value of the solar constant and the long-term variations that exist in the absolute value of the solar constant were measured. The solar constant is the total irradiance of the Sun at a distance of one astronomical unit. An absolute radiometer, removed from the effects of the atmosphere and with its calibration tested in situ, was used to measure the solar constant. The importance of an accurate knowledge of the solar constant is emphasized. 15. Triprotic acid-base microequilibria and pharmacokinetic sequelae of cetirizine. PubMed Marosi, Attila; Kovács, Zsuzsanna; Béni, Szabolcs; Kökösi, József; Noszál, Béla 2009-06-28 (1)H NMR-pH titrations of cetirizine, the widely used antihistamine, and four related compounds were carried out and the 11 related macroscopic protonation constants were determined. The interactivity parameter between the two piperazine amine groups was obtained from two symmetric piperazine derivatives.
Combining these two types of datasets, all 12 microconstants and the derived tautomeric constants of cetirizine were calculated. On this basis, the conflicting literature data on cetirizine microspeciation were clarified, and the pharmacokinetic absorption-distribution properties could be interpreted. The pH-dependent distribution of the microspecies is provided. 16. The Hubble constant. PubMed Tully, R B 1993-06-01 Five methods of estimating distances have demonstrated internal reproducibility at the level of 5-20% rms accuracy. The best of these are the cepheid (and RR Lyrae), planetary nebulae, and surface-brightness fluctuation techniques. Luminosity-line width and Dn-sigma methods are less accurate for an individual case but can be applied to large numbers of galaxies. The agreement is excellent between these five procedures. It is determined that the Hubble constant H0 = 90 +/- 10 km.s-1.Mpc-1 [1 parsec (pc) = 3.09 x 10(16) m]. It is difficult to reconcile this value with the preferred world model even in the low-density case. The standard model with Omega = 1 may be excluded unless there is something totally misunderstood about the foundation of the distance scale or the ages of stars. PMID:11607391 17. When constants are important SciTech Connect Beiu, V. 1997-04-01 In this paper the authors discuss several complexity aspects pertaining to neural networks, commonly known as the curse of dimensionality. The focus will be on: (1) size complexity and depth-size tradeoffs; (2) complexity of learning; and (3) precision and limited interconnectivity. Results have been obtained for each of these problems when dealt with separately, but few things are known as to the links among them. They start by presenting known results and try to establish connections between them. These show that they are facing very difficult problems--exponential growth in either space (i.e.
precision and size) and/or time (i.e., learning and depth)--when resorting to neural networks for solving general problems. The paper will present a solution for lowering some constants, by playing on the depth-size tradeoff. 19. Unitaxial constant velocity microactuator DOEpatents McIntyre, Timothy J. 1994-01-01 A uniaxial drive system or microactuator capable of operating in an ultra-high vacuum environment. The mechanism includes a flexible coupling having a bore therethrough, and two clamp/pusher assemblies mounted in axial ends of the coupling. The clamp/pusher assemblies are energized by voltage-operated piezoelectrics therewithin to operatively engage the shaft and coupling causing the shaft to move along its rotational axis through the bore. The microactuator is capable of repeatably positioning to sub-nanometer accuracy while affording a scan range in excess of 5 centimeters. Moreover, the microactuator generates smooth, constant velocity motion profiles while producing a drive thrust of greater than 10 pounds.
The system is remotely controlled and piezoelectrically driven, hence minimal thermal loading, vibrational excitation, or outgassing is introduced to the operating environment. 20. Uniaxial constant velocity microactuator DOEpatents McIntyre, T.J. 1994-06-07 A uniaxial drive system or microactuator capable of operating in an ultra-high vacuum environment is disclosed. The mechanism includes a flexible coupling having a bore therethrough, and two clamp/pusher assemblies mounted in axial ends of the coupling. The clamp/pusher assemblies are energized by voltage-operated piezoelectrics therewithin to operatively engage the shaft and coupling, causing the shaft to move along its rotational axis through the bore. The microactuator is capable of repeatably positioning to sub-nanometer accuracy while affording a scan range in excess of 5 centimeters. Moreover, the microactuator generates smooth, constant velocity motion profiles while producing a drive thrust of greater than 10 pounds. The system is remotely controlled and piezoelectrically driven, hence minimal thermal loading, vibrational excitation, or outgassing is introduced to the operating environment. 10 figs. 1. Constant attitude orbit transfer Cress, Peter; Evans, Michael A two-impulse orbital transfer technique is described in which the spacecraft attitude remains constant for both burns, eliminating the need for attitude maneuvers between the burns. This can lead to significant savings in vehicle weight, cost and complexity. Analysis is provided for a restricted class of applications of this transfer between circular orbits. For those transfers with a plane change less than 30 deg, the total velocity cost of the maneuver is less than twelve percent greater than that of an optimum plane-split Hohmann transfer. While this maneuver does not minimize the velocity requirement, it does provide a means of achieving the necessary transfer while substantially reducing the cost and complexity of the spacecraft. 2.
Effect of acid-base alterations on hepatic lactate utilization PubMed Central Goldstein, Philip J.; Simmons, Daniel H.; Tashkin, Donald P. 1972-01-01 1. The effect of acid-base changes on hepatic lactate utilization was investigated in anaesthetized, mechanically ventilated dogs. 2. Portal vein flow and hepatic artery flow were measured with electromagnetic flowmeters, lactate concentration of portal vein, arterial and mixed hepatic venous blood was determined by an enzymatic technique, and hepatic lactate uptake was calculated using the Fick principle. 3. Respiratory alkalosis (Δ pH 0.25 ± 0.02) in four dogs resulted in a significant fall in total hepatic blood flow (-22 ± 4%) and a significant rise in both arterial lactate concentration (2.18 ± 0.32 mmol/l) and hepatic lactate utilization (3.9 ± 1.2 μmol/min per kg). 4. 0.6 M Tris buffer infusion (Δ pH 0.21 ± 0.02) in four dogs produced no significant changes in liver blood flow, arterial lactate concentration or hepatic lactate uptake. 5. Respiratory acidosis (Δ pH -0.20 ± 0.03) in six dogs and metabolic acidosis (Δ pH -0.20 ± 0.02) in four dogs produced no significant changes in liver blood flow, decreases in arterial lactate concentration of 0.38 ± 0.09 mmol/l (P < 0.05) and 0.13 ± 0.13 mmol/l, respectively, and no significant changes in hepatic lactate uptake. 6. A significant correlation (r = 0.63; P < 0.01) was found between hepatic lactate utilization and arterial lactate concentration during the hyperlactataemia associated with respiratory alkalosis. 7. Hyperlactataemia induced in four dogs by infusion of buffered sodium lactate (Δ pH 0.05 ± 0.01; % Δ liver blood flow 29 ± 7%) was also significantly correlated with hepatic lactate utilization (r = 0.70; P < 0.01) and the slope of the regression was similar to that during respiratory alkalosis. 8.
These data suggest that the hyperlactataemia of alkalosis is not due to impaired hepatic utilization of lactate and that the principal determinant of hepatic lactate uptake during alkalosis or lactate infusion is blood lactate concentration, rather than liver 3. [Rigorous algorithms for calculating the exact concentrations and activity levels of all the different species during acid-base titrations in water]. PubMed Burgot, G; Burgot, J L 2000-10-01 The principles of two algorithms allowing calculation of the concentrations and activity levels of the different species during acid-base titrations in water are described. They simulate titrations at constant and variable ionic strengths, respectively. They are designed so that acid and base strengths, their concentrations and the titrant volume added can be chosen freely. The calculations are based on rigorous equations with a general scope. They are sufficiently compact to be processed on pocket calculators. The algorithms can easily simulate pH-metric, spectrophotometric, conductometric and calorimetric titrations, and hence allow the determination of concentrations and some physico-chemical constants related to the chemical systems involved. 4. Capillary zone electrophoresis of basic analytes in methanol as non-aqueous solvent: mobility and ionisation constants. PubMed Porras, S P; Riekkola, M L; Kenndler, E 2001-01-01 The electrophoretically relevant properties of 21 monoacidic bases (including common drugs) containing aliphatic or aromatic amino groups were determined in methanol as solvent. These properties are the actual mobilities (those of the fully ionised weak bases), and their pKa values. Actual mobilities were measured in acidic methanolic solutions containing perchloric acid. The ionisation constants of the amines were derived from the dependence of the ionic mobilities on the pH of the background electrolyte solution.
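The mobility-pH dependence just described can be sketched with the usual degree-of-ionisation model for a monoprotic base, in which the effective mobility is the mobility of the fully protonated cation scaled by its ionised fraction. The numerical values below are invented for illustration and are not taken from the study.

```python
def effective_mobility(pH, mu_actual, pKa):
    """Effective mobility of a monoprotic base in CE: the actual mobility
    of the fully protonated cation BH+ scaled by its degree of ionisation
    (Henderson-Hasselbalch form)."""
    alpha = 1.0 / (1.0 + 10.0 ** (pH - pKa))
    return mu_actual * alpha

# Hypothetical values for illustration (not from the paper):
mu_act = 3.5e-8   # m^2 V^-1 s^-1, actual mobility of the cation
pKa = 9.0         # assumed ionisation constant in methanol
for pH in (7.0, 9.0, 11.0):
    print(f"pH {pH}: mu_eff = {effective_mobility(pH, mu_act, pKa):.2e}")
```

At pH = pKa the effective mobility is exactly half the actual mobility, which is the inflection point used to read the pKa off a mobility-pH curve.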
The pH scale in methanol was established from acids with known conventional pK*a values in this solvent used as buffers, thus avoiding further adjustment with a pH-sensitive electrode that might bias the scale. Actual mobilities in methanol were found to be larger than in water, and do not correlate well with the solvent's viscosity. The pK*a values of the cation acids, HB+, the corresponding form of the base, B, are higher in methanol, although the shift is less pronounced than for neutral acids of type HA. The mean increase (compared to pure aqueous solution) for aliphatic ammonium type analytes is 1.8, for substituted anilinium 1.1, and for aromatic ammonium of the pyridinium type 0.5 units. The interpretation of this shift was undertaken with the concept of the medium effect on the particles involved in the acid-base equilibrium: the proton, the molecular base, B, and the cation HB+. PMID:11206793 5. Acid-base metabolism: implications for kidney stone formation. PubMed Hess, Bernhard 2006-04-01 The physiology and pathophysiology of renal H+ ion excretion and urinary buffer systems are reviewed. The main focus is on the two major conditions related to acid-base metabolism that cause kidney stone formation, i.e., distal renal tubular acidosis (dRTA) and abnormally low urine pH with subsequent uric acid stone formation. Both entities can be seen against the background of disturbances of the major urinary buffer system, NH3 <--> NH4+. On the one hand, reduced distal tubular secretion of H+ ions results in an abnormally high urinary pH and either incomplete or complete dRTA. On the other hand, reduced production/availability of NH4+ is the cause of an abnormally low urinary pH, which predisposes to uric acid stone formation. Most recent research indicates that the latter abnormality may be a renal manifestation of the increasingly prevalent metabolic syndrome.
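The pH dependence of uric acid stone risk follows directly from the Henderson-Hasselbalch relation: only the undissociated acid form is sparingly soluble. A minimal sketch, assuming a round textbook pKa of about 5.5 for the first ionisation of uric acid (a value not stated in the entry above):

```python
def fraction_uric_acid(pH, pKa=5.5):
    """Fraction of total urate present as sparingly soluble, undissociated
    uric acid (Henderson-Hasselbalch; pKa ~5.5 is an assumed textbook value)."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

print(round(fraction_uric_acid(5.0), 2))  # acidic urine: 0.76 (mostly uric acid)
print(round(fraction_uric_acid(6.5), 2))  # alkalinized: 0.09 (mostly soluble urate)
```

This is the arithmetic behind the target urinary pH range of 6.2-6.8 mentioned below: a one-unit rise in pH above the pKa drops the crystallizable fraction roughly tenfold.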
Despite opposite deviations from normal urinary pH values, both dRTA and uric acid stone formation due to low urinary pH require the same treatment, i.e., alkali. In dRTA, alkali is needed to improve the body's buffer capacity, whereas the goal of alkali treatment in uric acid stone formers is to increase the urinary pH to 6.2-6.8 in order to minimize uric acid crystallization. 6. Solution influence on biomolecular equilibria - Nucleic acid base associations NASA Technical Reports Server (NTRS) Pohorille, A.; Pratt, L. R.; Burt, S. K.; Macelroy, R. D. 1984-01-01 Various attempts to construct an understanding of the influence of solution environment on biomolecular equilibria at the molecular level using computer simulation are discussed. First, the application of the formal statistical thermodynamic program for investigating biomolecular equilibria in solution is presented, addressing modeling and conceptual simplifications such as perturbative methods, long-range interaction approximations, surface thermodynamics, and hydration shells. Then, Monte Carlo calculations on the associations of nucleic acid bases in both polar and nonpolar solvents such as water and carbon tetrachloride are carried out. The solvent contribution to the enthalpy of base association is positive (destabilizing) in both polar and nonpolar solvents, while negative enthalpies for stacked complexes are obtained only when the solute-solute in vacuo energy is added to the total energy. The release upon association of solvent molecules from the first hydration layer around a solute to the bulk is accompanied by an increase in solute-solvent energy and a decrease in solvent-solvent energy. The techniques presented are expected to displace the less molecular, more heuristic modeling of biomolecular equilibria in solution. 7.
Acid-base transport by the renal proximal tubule PubMed Central Skelton, Lara A.; Boron, Walter F.; Zhou, Yuehan 2015-01-01 Each day, the kidneys filter 180 L of blood plasma, equating to some 4,300 mmol of the major blood buffer, bicarbonate (HCO3−). The glomerular filtrate enters the lumen of the proximal tubule (PT), and the majority of filtered HCO3− is reclaimed along the early (S1) and convoluted (S2) portions of the PT in a manner coupled to the secretion of H+ into the lumen. The PT also uses the secreted H+ to titrate non-HCO3− buffers in the lumen, in the process creating “new HCO3−” for transport into the blood. Thus, the PT – along with more distal renal segments – is largely responsible for regulating plasma [HCO3−]. In this review we first focus on the milestone discoveries over the past 50+ years that define the mechanism and regulation of acid-base transport by the proximal tubule. Further on in the review, we will summarize research still in progress from our laboratory, work that addresses the problem of how the PT is able to finely adapt to acid–base disturbances by rapidly sensing changes in basolateral levels of HCO3− and CO2 (but not pH), and thereby to exert tight control over the acid–base composition of the blood plasma. PMID:21170887 8. Acid/base account and minesoils: A review SciTech Connect Hossner, L.R.; Brandt, J.E. 1997-12-31 Generation of acidity from the oxidation of iron sulfides (FeS2) is a common feature of geological materials exposed to the atmosphere by mining activities. Acid/base accounting (ABA) has been the primary method to evaluate the acid- or alkaline-potential of geological materials and to predict if weathering of these materials will have an adverse effect on terrestrial and aquatic environments. The ABA procedure has also been used to evaluate minesoils at different stages of weathering and, in some cases, to estimate lime requirements.
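The core arithmetic of acid/base accounting can be sketched briefly. The commonly cited Sobek stoichiometry converts total sulfur (assumed pyritic) into maximum potential acidity, 31.25 tons CaCO3 equivalent per 1000 tons of material per percent S, and subtracts it from the measured neutralization potential; the sample values below are hypothetical.

```python
def acid_base_account(neutralization_potential, percent_sulfur):
    """Net neutralization potential (NNP) in tons CaCO3 equivalent per
    1000 tons of material. Maximum potential acidity uses the commonly
    cited 31.25 x %S stoichiometry, which assumes all sulfur is pyritic."""
    max_potential_acidity = 31.25 * percent_sulfur
    return neutralization_potential - max_potential_acidity

# Hypothetical overburden sample: NP = 40 tons CaCO3/1000 tons, 2% total S
print(acid_base_account(40.0, 2.0))  # -22.5 -> net acid-producing
```

As the entry notes, this single number can mislead when siderite or non-pyritic sulfides are present, which is why static ABA results are best interpreted alongside kinetic tests.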
Conflicting assessments of the methodology have been reported in the literature. The ABA is the fastest and easiest way to evaluate the acid-forming characteristics of overburden materials; however, accurate evaluations sometimes require that ABA data be examined in conjunction with additional sample information and results from other analytical procedures. The end use of ABA data, whether it be for minesoil evaluation or water quality prediction, will dictate the method's interpretive criteria. Reaction kinetics and stoichiometry may vary and are not clearly defined for all situations. There is an increasing awareness of the potential for interfering compounds, particularly siderite (FeCO3), to be present in geological materials associated with coal mines. Hardrock mines, with possible mixed sulfide mineralogy, offer a challenge to the ABA, since acid generation may be caused by minerals other than pyrite. A combination of methods, static and kinetic, is appropriate to properly evaluate the presence of acid-forming materials. 9. [Development of Nucleic Acid-Based Adjuvant for Cancer Immunotherapy]. PubMed Kobiyama, Kouji; Ishii, Ken J 2015-09-01 Since the discovery of the human T cell-defined tumor antigen, the cancer immunotherapy field has rapidly progressed, with research and development of cancer immunotherapies, including cancer vaccines, being actively conducted. However, the disadvantages of most cancer vaccines include relatively weak immunogenicity and immune escape or exhaustion. Adjuvants with innate immunostimulatory activities have been used to overcome these issues, and these agents have been shown to enhance the immunogenicity of cancer vaccines and to act as mono-therapeutic anti-tumor agents. CpG ODN, an agonist for TLR9, is one of the promising nucleic acid-based adjuvants, and it is a potent inducer of innate immune effector functions. CpG ODN suppresses tumor growth in the absence of tumor antigens and peptide administration.
Therefore, CpG ODN is expected to be useful as a cancer vaccine adjuvant as well as a cancer immunotherapy agent. In this review, we discuss the potential therapeutic applications and mechanisms of CpG ODN for cancer immunotherapy. 10. Environmental applications of poly(amic acid)-based nanomaterials. PubMed Okello, Veronica A; Du, Nian; Deng, Boling; Sadik, Omowunmi A 2011-05-01 Nanoscale materials offer new possibilities for the development of novel remediation and environmental monitoring technologies. Different nanoscale materials have been exploited for preventing environmental degradation and pollutant transformation. However, the rapid self-aggregation of nanoparticles, or their association with suspended solids or sediments where they could bioaccumulate, supports the need for polymeric coatings that improve mobility, allow faster site cleanups and reduce remediation costs. The ideal material must be able to coordinate different nanomaterial functionalities and exhibit the potential for reusability. We hereby describe two novel environmental applications of nanostructured poly(amic acid)-based (nPAA) materials. In the first application, nPAA was used as both reductant and stabilizer during the in situ chemical reduction of chromium(VI) to chromium(III). Results showed that Cr(VI) species were rapidly reduced within the concentration range of 10^-1 to 10^2 mM, with an efficiency of 99.9% at 40 °C in water samples and 90% at 40 °C in soil samples, respectively. Furthermore, the presence of PdNPs on the PAA-Au electrode was found to significantly enhance the rate of reduction. In the second application, nPAA membranes were tested as filters to capture, isolate and detect nanosilver. Preliminary results demonstrate the capability of the nPAA membranes to quantitatively capture nanoparticles from suspension and quantify their abundance on the membranes. Detection of silver nanoparticles at concentrations near the toxic threshold of silver was also demonstrated. 11.
Acid-base transport in pancreas—new challenges PubMed Central Novak, Ivana; Haanes, Kristian A.; Wang, Jing 2013-01-01 Along the gastrointestinal tract a number of epithelia contribute acidic or basic secretions in order to aid digestive processes. The stomach and pancreas are the most extreme examples of acid (H+) and base (HCO3−) transporters, respectively. Nevertheless, they share the same challenges of transporting acids and bases across epithelia and effectively regulating their intracellular pH. In this review, we will make use of comparative physiology to illuminate the cellular mechanisms of pancreatic HCO3− and fluid secretion, which is still challenging physiologists. Some of the novel transporters to consider in the pancreas are the proton pumps (H+-K+-ATPases), as well as the calcium-activated K+ and Cl− channels, such as KCa3.1 and TMEM16A/ANO1. Local regulators, such as purinergic signaling, fine-tune and coordinate pancreatic secretion. Lastly, we speculate whether dysregulation of acid-base transport contributes to pancreatic diseases including cystic fibrosis, pancreatitis, and cancer. PMID:24391597 12. DEOXYRIBONUCLEIC ACID BASE COMPOSITION OF PROTEUS AND PROVIDENCE ORGANISMS PubMed Central Falkow, Stanley; Ryman, I. R.; Washington, O. 1962-01-01 Falkow, Stanley (Walter Reed Army Institute of Research, Washington D.C.), I. R. Ryman, and O. Washington. Deoxyribonucleic acid base composition of Proteus and Providence organisms. J. Bacteriol. 83:1318–1321. 1962.—Deoxyribonucleic acids (DNA) from various species of Proteus and of Providence bacteria have been examined for their guanine + cytosine (GC) content. P. vulgaris, P. mirabilis, and P. rettgeri possess essentially identical mean GC contents of 39%, and Providence DNA has a GC content of 41.5%. In marked contrast, P. morganii DNA was found to contain 50% GC. The base composition of P.
morganii is only slightly lower than those observed for representatives of the Escherichia, Shigella, and Salmonella groups. Aerobacter and Serratia differ significantly from the other members of the family by their relatively high GC content. Since a minimal requirement for genetic compatibility among different species appears to be similarity of their DNA base composition, it is suggested that P. morganii is distinct genetically from the other species of Proteus as well as Providence strains. The determination of the DNA base composition of microorganisms is important for its predictive information. This information should prove of considerable value in investigating genetic and taxonomic relationships among bacteria. PMID:13891463 13. Nucleic acid-based nanoengineering: novel structures for biomedical applications PubMed Central Li, Hanying; LaBean, Thomas H.; Leong, Kam W. 2011-01-01 Nanoengineering exploits the interactions of materials at the nanometre scale to create functional nanostructures. It relies on the precise organization of nanomaterials to achieve unique functionality. There are no interactions more elegant than those governing nucleic acids via Watson–Crick base-pairing rules. The infinite combinations of DNA/RNA base pairs and their remarkable molecular recognition capability can give rise to interesting nanostructures that are only limited by our imagination. Over the past years, creative assembly of nucleic acids has fashioned a plethora of two-dimensional and three-dimensional nanostructures with precisely controlled size, shape and spatial functionalization. These nanostructures have been precisely patterned with molecules, proteins and gold nanoparticles for the observation of chemical reactions at the single molecule level, activation of enzymatic cascade and novel modality of photonic detection, respectively. 
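Returning to the base-composition survey in entry 12 above: for modern sequence data, mole-percent G+C reduces to simple counting (the 1962 study, by contrast, determined GC content experimentally from bulk DNA). A minimal sketch with a toy sequence:

```python
def gc_content(seq):
    """Mole-percent guanine + cytosine of a DNA sequence string."""
    seq = seq.upper()
    gc = sum(seq.count(base) for base in "GC")
    return 100.0 * gc / len(seq)

# Toy sequence purely for illustration:
print(gc_content("ATGCGCGTAT"))  # 50.0
```

Comparing such percentages across species is the kind of first-pass taxonomic screen the entry describes; similarity of GC content is necessary but not sufficient for genetic relatedness.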
Recently, they have also been engineered to encapsulate and release bioactive agents in a stimulus-responsive manner for therapeutic applications. The future of nucleic acid-based nanoengineering is bright and exciting. In this review, we will discuss the strategies to control the assembly of nucleic acids and highlight the recent efforts to build functional nucleic acid nanodevices for nanomedicine. PMID:23050076 15. Water-wire catalysis in photoinduced acid-base reactions. PubMed Kwon, Oh-Hoon; Mohammed, Omar F 2012-07-01 The pronounced ability of water to form a hyperdense hydrogen (H)-bond network among itself is at the heart of its exceptional properties.
Due to its unique H-bonding capability and amphoteric nature, water is not only a passive medium, but also behaves as an active participant in many chemical and biological reactions. Here, we reveal the catalytic role of a short water wire, composed of two (or three) water molecules, in model aqueous acid-base reactions involving 7-hydroxyquinoline derivatives. Utilizing femtosecond-resolved fluorescence spectroscopy, we tracked the trajectories of excited-state proton transfer and discovered that proton hopping along the water wire accomplishes the reaction more efficiently than transfer occurring via bulk water clusters. Our finding suggests that the directionality of proton movements along the charge-gradient H-bond network may be a key element for long-distance proton translocation in biological systems, as the H-bond networks wiring acidic and basic sites distal to each other can provide a shortcut for a proton as it searches a complex energy landscape for a global-minimum path to its destination. 16. Tuning, ergodicity, equilibrium, and cosmology Albrecht, Andreas 2015-05-01 I explore the possibility that the cosmos is fundamentally an equilibrium system and review the attractive features of such theories. Equilibrium cosmologies are commonly thought to fail due to the "Boltzmann brain" problem. I show that it is possible to evade the Boltzmann brain problem if there is a suitable coarse-grained relationship between the fundamental degrees of freedom and the cosmological observables. I make my main points with simple toy models and then review the de Sitter equilibrium model as an illustration. 17. Understanding thermal equilibrium through activities 2015-03-01 Thermal equilibrium is a basic concept in thermodynamics. In India, this concept is generally introduced in the first year of undergraduate education in physics and chemistry. In our earlier studies (Pathare and Pradhan 2011 Proc. episteme-4 Int. Conf.
to Review Research on Science Technology and Mathematics Education pp 169-72) we found that students in India have a rather unsatisfactory understanding of thermal equilibrium. We have designed and developed a module of five activities, which are presented in succession to the students. These activities address the students’ alternative conceptions that underlie their lack of understanding of thermal equilibrium and aim at enhancing their understanding of the concept. 18. Computer Assisted Instruction for Equilibrium. ERIC Educational Resources Information Center Berry, Gifford L. 1988-01-01 Describes two computer assisted tutorials, one on acid ionization constants (Ka), and the other on solubility product constants (Ksp). Discusses a framework to be used in writing computer assisted instruction programs. Lists topics covered in the programs. (MVL) 19. Lewis Acid Based Sorption of Trace Amounts of RuCl3 by Polyaniline. PubMed Harbottle, Allison M; Hira, Steven M; Josowicz, Mira; Janata, Jiří 2016-08-23 A sorption process of RuCl3 in phosphate buffer by polyaniline (PANI) powder chemically synthesized from phosphoric acid was spectrophotometrically monitored as a function of time. It was determined that the sorption process follows the Langmuir and Freundlich isotherms, and their constants were evaluated. Chemisorption was determined to be the rate-controlling step. By conducting detailed studies, we assigned the chemisorption to Lewis acid-based interactions of the sorbent electron pair localized at the benzenoid amine (-NH2) and quinoid imine (═NH) groups, with the sorbate, RuCl3, as the electron acceptor. The stability of the interaction over a period of ∼1 week showed that the presence of Ru(III) in the PANI matrix converts its state from emeraldine base to emeraldine salt, resulting in a change of conductivity. This partial, electron-donor-based charge transfer is a slow process compared to the sorption process involving Brønsted acid doping.
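The two isotherms named in the sorption entry above have simple closed forms, and the Freundlich constants can be recovered from equilibrium data by a log-log linear fit. The functions below are the standard textbook forms; the data values are synthetic, generated purely to illustrate the fitting step.

```python
import numpy as np

def langmuir(c, q_max, K):
    """Langmuir isotherm: q = q_max*K*c / (1 + K*c) (monolayer sorption)."""
    return q_max * K * c / (1.0 + K * c)

def freundlich(c, K_f, n):
    """Freundlich isotherm: q = K_f * c**(1/n) (empirical, heterogeneous surface)."""
    return K_f * c ** (1.0 / n)

def fit_freundlich(c, q):
    """Linearized fit: log10 q = log10 K_f + (1/n) * log10 c."""
    slope, intercept = np.polyfit(np.log10(c), np.log10(q), 1)
    return 10.0 ** intercept, 1.0 / slope   # K_f, n

# Synthetic equilibrium data (arbitrary units), then parameter recovery:
c = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
q = freundlich(c, K_f=2.0, n=2.5)
K_f, n = fit_freundlich(c, q)
print(round(K_f, 3), round(n, 3))  # recovers K_f ~ 2.0, n ~ 2.5
```

With real sorption data one would fit both models and compare residuals; the entry reports that both isotherms described the RuCl3/PANI system.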
PMID:27479848 20. Acid-base titration of melanocortin peptides: evidence of Trp rotational conformer interconversion. PubMed Fernandez, Roberto M; Vieira, Renata F F; Nakaie, Clóvis R; Lamy, M Teresa; Ito, Amando S 2005-01-01 Tryptophan time-resolved fluorescence was used to monitor the acid-base titration properties of alpha-melanocyte stimulating hormone (alpha-MSH) and the biologically more potent analog [Nle4, D-Phe7]alpha-MSH (NDP-MSH), labeled or not with the paramagnetic amino acid probe 2,2,6,6-tetramethylpiperidine-N-oxyl-4-amino-4-carboxylic acid (Toac). Global analysis of fluorescence decay profiles measured in the pH range between 2.0 and 11.0 showed that, for each peptide, the data could be well fitted to three lifetimes whose values remained constant. The less populated short-lifetime component changed little with pH and was ascribed to the Trp g+ chi1 rotamer, in which electron transfer deactivation predominates over fluorescence. The long and intermediate lifetime preexponential factors interconverted along that pH interval, and the result was interpreted as due to interconversion between the Trp g- and trans chi1 rotamers, driven by conformational changes promoted by modifications in the ionization state of side-chain residues. The differences in the extent of interconversion in alpha-MSH and NDP-MSH are indicative of structural differences between the peptides, while titration curves suggest structural similarities between each peptide and its Toac-labeled species in aqueous solution. Though less sensitive than fluorescence, the Toac electron spin resonance (ESR) isotropic hyperfine splitting parameter can also monitor the titration of side-chain residues located relatively far from the probe. 1.
Temperature lapse rates at restricted thermodynamic equilibrium in the Earth system Björnbom, Pehr 2015-03-01 Equilibrium temperature profiles obtained by maximizing the entropy of a column of fluid with a given height and volume under the influence of gravity are discussed using numerical experiments. Calculations are made both for the case of an ideal gas and for a liquid with constant isobaric heat capacity, constant compressibility and constant thermal expansion coefficient, representing idealized conditions corresponding to atmosphere and ocean. The calculations confirm the classical equilibrium condition of Gibbs that an isothermal temperature profile gives a maximum in entropy constrained by a constant mass and a constant sum of internal and potential energy. However, it was also found that an isentropic profile gives a maximum in entropy constrained by a constant mass and a constant internal energy of the fluid column. On the basis of this result a hypothesis is suggested that the adiabatic lapse rate represents a restricted, or transitory and metastable, equilibrium state, which has a maximum in entropy with a lower value than the maximum in the state with an isothermal lapse rate. This transitory equilibrium state is maintained by passive forces, preventing or slowing down the transition of the system to the final or ultimate equilibrium state. 2. Effects of anaesthesia on blood gases, acid-base status and ions in the toad Bufo marinus. PubMed Andersen, Johnnie Bremholm; Wang, Tobias 2002-03-01 It is common practice to chronically implant catheters for subsequent blood sampling from conscious and undisturbed animals. This method reduces stress associated with blood sampling, but anaesthesia per se can also be a source of stress in animals. Therefore, it is imperative to evaluate the time required for physiological parameters (e.g. blood gases, acid-base status, plasma ions, heart rate and blood pressure) to stabilise following surgery.
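Blood acid-base status in studies like the toad experiment below is conventionally interpreted through the Henderson-Hasselbalch relation for the bicarbonate buffer. A minimal sketch, using the standard mammalian constants (pK' = 6.1, CO2 solubility 0.03 mmol per litre per mmHg at 37 °C); ectotherm values shift with temperature, so these numbers should not be applied directly to amphibian data.

```python
import math

def blood_pH(hco3_mM, pco2_mmHg, pK=6.1, s=0.03):
    """Henderson-Hasselbalch for the bicarbonate buffer:
    pH = pK' + log10([HCO3-] / (s * PCO2)).
    pK' = 6.1 and s = 0.03 mmol/(l*mmHg) are standard mammalian values
    at 37 C; ectotherm constants vary with body temperature."""
    return pK + math.log10(hco3_mM / (s * pco2_mmHg))

print(round(blood_pH(24.0, 40.0), 2))  # typical mammalian values -> ~7.4
```

The relation makes the direction of a respiratory disturbance obvious: holding bicarbonate fixed, a rise in PCO2 (as during apnoea under anaesthesia) lowers pH.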
Here, we report physiological parameters during and after anaesthesia in the toad Bufo marinus. For anaesthesia, toads were immersed in benzocaine (1 g l-1) for 15 min or until the corneal reflex disappeared, and the femoral artery was cannulated. A 1-ml blood sample was taken immediately after surgery and subsequently after 2, 5, 24 and 48 h. Breathing ceased during anaesthesia, which resulted in arterial PO2 values below 30 mmHg, and respiratory acidosis developed, with arterial PCO2 levels reaching 19.5 ± 2 mmHg and pH 7.64 ± 0.04. The animals resumed pulmonary ventilation shortly after the operation, and oxygen levels increased to a constant level within 2 h. Acid-base status, however, did not stabilise until 24 h after anaesthesia. Haematocrit doubled immediately after cannulation (26 ± 1%), but reached a constant level of 13% within 24 h. Blood pressure and heart rate were elevated for the first 5 h, but decreased after 24 h to a constant level of approximately 30 cm H2O and 35 beats min-1, respectively. There were no changes following anaesthesia in mean cellular haemoglobin concentration, [K+], [Cl-], [Na+], [lactate] or osmolarity. Toads fully recovered from anaesthesia after 24 h. 3. Equilibrium and Orientation in Cephalopods. ERIC Educational Resources Information Center Budelmann, Bernd-Ulrich 1980-01-01 Describes the structure of the equilibrium receptor system in cephalopods, comparing it to the vertebrate counterpart, the vestibular system. Relates the evolution of this complex system to the competition of cephalopods with fishes. (CS) 4. Model studies of intracellular acid-base temperature responses in ectotherms.
PubMed Reeves, R B; Malan, A 1976-10-01 Measurements of intracellular pH (pHi) in air-breathing ectotherms have only been made in the steady state; these pHi values indicate that protein charge state, measured as alpha imidazole (alphaIM), the fractional dissociation of protein histidine imidazole groups, is preserved when ectotherm tissues change temperature in vivo, with related changes in pHi and PCO2. In partial answer to the question of how such tissues are able to avoid disrupting transients in functions sensitive to protein charge states, model studies were carried out to assess the passive intracellular buffer system response to a combined change in body temperature and CO2 partial pressure as occurs in vivo in these species. The cell compartment was modeled as a closed volume of ternary buffer solution, containing protein imidazole (50 mM/l), phosphate (15 mM/l) and CO2-bicarbonate buffer components, permeable only to CO2 and with no change in buffer base permitted. Excursions from a steady-state non-equilibrium pHi were computed in response to a step change in temperature/PCO2. Computations for frog (Rana catesbeiana) striated muscle show that the calculated pHi response, based on the estimated composition and concentration of cell buffer components, moves along the curve describing the steady-state temperature relationship. No transient away from steady-state alphaIM and carbon dioxide content need be postulated. Applications to turtle (Pseudemys scripta) striated muscle are also explored. These calculations show that ectotherm cells may be capable of responding, without appreciable time for adaptation, to intracellular acid-base state changes incurred by sudden alteration of body temperature in vivo, given the observed adjustments of blood PCO2 with temperature. 5. Solution properties and emulsification properties of amino acid-based gemini surfactants derived from cysteine.
PubMed Yoshimura, Tomokazu; Sakato, Ayako; Esumi, Kunio 2013-01-01 Amino acid-based anionic gemini surfactants (2C(n)diCys, where n represents an alkyl chain with a length of 10, 12, or 14 carbons and "di" and "Cys" indicate adipoyl and cysteine, respectively) were synthesized using the amino acid cysteine. Biodegradability, equilibrium surface tension, and dynamic light scattering were used to characterize the properties of gemini surfactants. Additionally, the effects of alkyl chain length, number of chains, and structure on these properties were evaluated by comparing previously reported gemini surfactants derived from cystine (2C(n)Cys) and monomeric surfactants (C(n)Cys). 2C(n)diCys shows relatively higher biodegradability than does C(n)Cys and previously reported sugar-based gemini surfactants. Both critical micelle concentration (CMC) and surface tension decrease when alkyl chain length is increased from 10 to 12, while a further increase in chain length to 14 results in increased CMC and surface tension. This indicates that long-chain gemini surfactants have a decreased aggregation tendency due to the steric hindrance of the bulky spacer as well as premicelle formation at concentrations below the CMC and are poorly packed at the air/water interface. Formation of micelles (measuring 2 to 5 nm in solution) from 2C(n)diCys shows no dependence on alkyl chain length. Further, shaking the mixtures of aqueous 2C(n)diCys surfactant solutions and squalane results in the formation of oil-in-water type emulsions. The highly stable emulsions are formed using 2C(12)diCys or 2C(14)diCys solution and squalane in a 1:1 or 2:1 volume ratio. 6. Strongly Non-equilibrium Dynamics of Nanochannel Confined DNA Reisner, Walter Nanoconfined DNA exhibits a wide range of fascinating transient and steady-state non-equilibrium phenomena.
Yet, while experiment, simulation and scaling analytics are converging on a comprehensive picture regarding the equilibrium behavior of nanochannel confined DNA, non-equilibrium behavior remains largely unexplored. In particular, while the DNA extension along the nanochannel is the key observable in equilibrium experiments, in the non-equilibrium case it is necessary to measure and model not just the extension but the molecule's full time-dependent one-dimensional concentration profile. Here, we apply controlled compressive forces to a nanochannel confined molecule via a nanodozer assay, whereby an optically trapped bead is slid down the channel at a constant speed. Upon contact with the molecule, a propagating concentration "shockwave" develops near the bead and the molecule is dynamically compressed. This experiment, a single-molecule implementation of a macroscopic cylinder-piston apparatus, can be used to observe the molecule response over a range of forcings and benchmark theoretical descriptions of non-equilibrium behavior. We show that the dynamic concentration profiles, including both transient and steady-state response, can be modelled via a partial differential evolution equation combining nonlinear diffusion and convection. Lastly, we present preliminary results for dynamic compression of multiple confined molecules to explore regimes of segregation and mixing for multiple chains in confinement. 7. A search for equilibrium states NASA Technical Reports Server (NTRS) Zeleznik, F. J. 1982-01-01 An efficient search algorithm is described for the location of equilibrium states in a search set of states which differ from one another only by the choice of pure phases.
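The nanodozer study (item 6 above) models the confined molecule's concentration profile with an evolution equation combining nonlinear diffusion and convection. As an illustration only, and not the authors' code, here is a minimal explicit finite-difference sketch of such an equation, dc/dt = -d/dx(flux) with flux = -D(c) dc/dx + v c, where the concentration-dependent diffusivity D(c) = D0(1 + c) and the constant drift v are assumed forms standing in for the compressive forcing:

```python
import numpy as np

def step(c, dx, dt, D0=1.0, v=0.5):
    """One explicit step of dc/dt = -d/dx(-D(c) dc/dx + v c), no-flux walls."""
    cm = 0.5 * (c[1:] + c[:-1])            # concentration at cell interfaces
    D = D0 * (1.0 + cm)                    # assumed concentration-dependent diffusivity
    flux = -D * np.diff(c) / dx + v * cm   # diffusive plus convective flux
    dcdt = np.empty_like(c)
    dcdt[1:-1] = -(flux[1:] - flux[:-1]) / dx
    dcdt[0] = -flux[0] / dx                # zero flux through the channel walls
    dcdt[-1] = flux[-1] / dx
    return c + dt * dcdt

x = np.linspace(0.0, 10.0, 201)
dx = x[1] - x[0]
c = np.exp(-(x - 3.0) ** 2)               # initial concentration profile
mass0 = c.sum() * dx                      # the amount of molecule is conserved
for _ in range(2000):
    c = step(c, dx, dt=2e-4)
# the profile drifts toward larger x while spreading; total mass stays constant
```

The conservative flux form guarantees mass conservation, which mirrors the physical constraint that the compressed molecule's contour length is fixed.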
The algorithm has three important characteristics: (1) it ignores states which have little prospect for being an improved approximation to the true equilibrium state; (2) it avoids states which lead to singular iteration equations; (3) it furnishes a search history which can provide clues to alternative search paths. 8. Edge equilibrium code for tokamaks SciTech Connect 2014-01-15 The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids. 9. Equilibrium and non-equilibrium properties of finite-volume crystallites Degawa, Masashi Finite volume effects on equilibrium and non-equilibrium properties of nano-crystallites are studied theoretically and compared to both experiment and simulation. When a system is isolated or its size is small compared to the correlation length, all equilibrium and close-to-equilibrium properties will depend on the system boundary condition. Specifically for solid nano-crystallites, their finite size introduces global curvature to the system, which alters its equilibrium properties compared to the thermodynamic limit. Also such global curvature leads to capillary-induced morphology changes of the surface. Interesting dynamics can arise when the crystallite is supported on a substrate, with crossovers of the dominant driving force between the capillary force and crystallite-substrate interactions. To address these questions, we introduce thermodynamic functions for the boundary conditions, which can be derived from microscopic models.
For nano-crystallites, the boundary is the surface (including interfaces), the thermodynamic description is based on the steps that define the shape of the surface, and the underlying microscopic model includes kinks. The global curvature of the surface introduces metastable states with different shapes governed by a constant of integration of the extra boundary condition, which we call the shape parameter c. The discrete height of the steps introduces transition states in between the metastable states, and the lowest-energy accessible structure (energy barrier less than 10 kBT) as a function of the volume has been determined. The dynamics of nano-crystallites as they relax from a non-equilibrium structure is described quantitatively in terms of the motion of steps in both capillary-induced and interface-boundary-induced regimes. The step-edge fluctuations of the top facet are also influenced by global curvature and volume conservation, and the effect yields dynamic scaling exponents different from those of a pure 1D system. Theoretical results are 10. Kinetic and equilibrium studies of acrylonitrile binding to cytochrome c peroxidase and oxidation of acrylonitrile by cytochrome c peroxidase compound I. PubMed Chinchilla, Diana; Kilheeney, Heather; Vitello, Lidia B; Erman, James E 2014-01-01 Ferric heme proteins bind weakly basic ligands and the binding affinity is often pH dependent due to protonation of the ligand as well as the protein. In an effort to find a small, neutral ligand without significant acid/base properties to probe ligand binding reactions in ferric heme proteins we were led to consider the organonitriles. Although organonitriles are known to bind to transition metals, we have been unable to find any prior studies of nitrile binding to heme proteins. In this communication we report on the equilibrium and kinetic properties of acrylonitrile binding to cytochrome c peroxidase (CcP) as well as the oxidation of acrylonitrile by CcP compound I.
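For the one-step binding mechanism this abstract reports, the equilibrium dissociation constant should equal the ratio of the dissociation and association rate constants. A quick consistency check against the values quoted in the abstract (a sketch, not part of the study):

```python
def dissociation_constant(k_off_per_s, k_on_per_M_s):
    # For one-step binding E + L <-> EL, Kd = k_off / k_on.
    return k_off_per_s / k_on_per_M_s

kd = dissociation_constant(0.34, 0.32)  # rate constants reported in the abstract
print(round(kd, 2))  # 1.06 M, consistent with the measured 1.1 +/- 0.2 M
```

The agreement between the kinetic ratio and the independently measured equilibrium constant is what supports the simple one-step association mechanism.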
Acrylonitrile binding to CcP is independent of pH between pH 4 and 8. The association and dissociation rate constants are 0.32±0.16 M(-1) s(-1) and 0.34±0.15 s(-1), respectively, and the independently measured equilibrium dissociation constant for the complex is 1.1±0.2 M. We have demonstrated for the first time that acrylonitrile can bind to a ferric heme protein. The binding mechanism appears to be a simple, one-step association of the ligand with the heme iron. We have also demonstrated that CcP can catalyze the oxidation of acrylonitrile, most likely to 2-cyanoethylene oxide in a "peroxygenase"-type reaction, with rates that are similar to rat liver microsomal cytochrome P450-catalyzed oxidation of acrylonitrile in the monooxygenase reaction. CcP compound I oxidizes acrylonitrile with a maximum turnover number of 0.61 min(-1) at pH 6.0. PMID:24291498 13. Adansonian Analysis and Deoxyribonucleic Acid Base Composition of Serratia marcescens PubMed Central Colwell, R. R.; Mandel, M. 1965-01-01 Colwell, R. R. (Georgetown University, Washington, D.C.), and M. Mandel. Adansonian analysis and deoxyribonucleic acid base composition of Serratia marcescens. J. Bacteriol. 89:454–461. 1965.—A total of 33 strains of Serratia marcescens were subjected to Adansonian analysis for which more than 200 coded features for each of the organisms were included. In addition, the base composition [expressed as moles per cent guanine + cytosine (G + C)] of the deoxyribonucleic acid (DNA) prepared from each of the strains was determined. Except for four strains which were intermediate between Serratia and the Hafnia and Aerobacter group C of Edwards and Ewing, the S. marcescens species group proved to be extremely homogeneous, and the different strains showed high affinities for each other (mean similarity, ¯S = 77%). The G + C ratio of the DNA from the Serratia strains ranged from 56.2 to 58.4% G + C. Many species names have been listed for the genus, but only a single clustering of the strains was obtained at the species level, for which the species name S. marcescens was retained. S. kiliensis, S. indica, S. plymuthica, and S. marinorubra could not be distinguished from S. marcescens; it was concluded, therefore, that there is only a single species in the genus.
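The moles per cent G + C statistic used in the Serratia study above is straightforward to compute from a sequence. A minimal sketch (the 20-base sequence below is hypothetical, not a real S. marcescens fragment):

```python
def gc_mole_percent(seq):
    """Moles per cent guanine + cytosine of a DNA sequence."""
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    return 100.0 * gc / len(seq)

# hypothetical fragment for illustration only
print(round(gc_mole_percent("GCGCATGCCGGTACGCGGAT"), 1))  # 70.0
```

In 1965 this quantity was measured physically (e.g. from buoyant density or melting temperature) rather than computed from sequence, but the definition is the same.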
The variety designation kiliensis does not appear to be valid, since no subspecies clustering of strains with negative Voges-Proskauer reactions could be detected. The characteristics of the species are listed, and a description of S. marcescens is presented. PMID:14255714 14. Acid-base chemical mechanism of aspartase from Hafnia alvei. PubMed Yoon, M Y; Thayer-Cook, K A; Berdis, A J; Karsten, W E; Schnackerz, K D; Cook, P F 1995-06-20 An acid-base chemical mechanism is proposed for Hafnia alvei aspartase in which a proton is abstracted from C-3 of the monoanionic form of L-aspartate by an enzyme general base with a pK of 6.3-6.6 in the absence and presence of Mg2+. The resulting carbanion is presumably stabilized by delocalization of electrons into the beta-carboxyl with the assistance of a protonated enzyme group in the vicinity of the beta-carboxyl. Ammonia is then expelled with the assistance of a general acid group that traps an initially expelled NH3 as the final NH4+ product. In agreement with the function of the general acid group, potassium, an analog of NH4+, binds optimally when the group is unprotonated. The pK for the general acid is about 7 in the absence of Mg2+, but is increased by about a pH unit in the presence of Mg2+. Since the same pK values are observed in the pKi(succinate) and V/K pH profile, both enzyme groups must be in their optimum protonation state for efficient binding of reactant in the presence of Mg2+. At the end of a catalytic cycle, both the general base and general acid groups are in a protonation state opposite that in which they started when aspartate was bound. The presence of Mg2+ causes a pH-dependent activation of aspartase exhibited as a partial change in the V and V/Kasp pH profiles. When the aspartase reaction is run in D2O to greater than 50% completion no deuterium is found in the remaining aspartate, indicating that the site is inaccessible to solvent during the catalytic cycle. 15. 
Beyond the Hubble Constant 1995-08-01 about the distances to galaxies and thereby about the expansion rate of the Universe. A simple way to determine the distance to a remote galaxy is by measuring its redshift, calculating its velocity from the redshift and dividing this by the Hubble constant, H0. For instance, the measured redshift of the parent galaxy of SN 1995K (0.478) yields a velocity of 116,000 km/sec, somewhat more than one-third of the speed of light (300,000 km/sec). From the universal expansion rate, described by the Hubble constant (H0 = 20 km/sec per million lightyears as found by some studies), this velocity would indicate a distance to the supernova and its parent galaxy of about 5,800 million lightyears. The explosion of the supernova would thus have taken place 5,800 million years ago, i.e. about 1,000 million years before the solar system was formed. However, such a simple calculation works only for relatively "nearby" objects, perhaps out to some hundred million lightyears. When we look much further into space, we also look far back in time and it is not excluded that the universal expansion rate, i.e. the Hubble constant, may have been different at earlier epochs. This means that unless we know the change of the Hubble constant with time, we cannot determine reliable distances of distant galaxies from their measured redshifts and velocities. At the same time, knowledge about such change or lack of the same will provide unique information about the time elapsed since the Universe began to expand (the "Big Bang"), that is, the age of the Universe and also its ultimate fate. The Deceleration Parameter q0 Cosmologists are therefore eager to determine not only the current expansion rate (i.e., the Hubble constant, H0) but also its possible change with time (known as the deceleration parameter, q0).
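The simple distance estimate described above, distance = velocity / H0, can be reproduced directly with the text's numbers (velocity in km/sec, H0 in km/sec per million lightyears):

```python
def hubble_distance_mly(velocity_km_s, h0=20.0):
    # d = v / H0, with H0 in km/sec per million lightyears
    return velocity_km_s / h0

# SN 1995K host galaxy: ~116,000 km/sec at the text's H0 of 20
print(hubble_distance_mly(116_000))  # 5800.0 million lightyears
```

As the text goes on to stress, this linear relation is only trustworthy for relatively nearby objects, since H0 itself may have changed over cosmic time.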
Although a highly accurate value of H0 has still not become available, increasing attention is now given to the observational determination of the second parameter, cf. also the Appendix at the 16. Anisotropic pressure tokamak equilibrium and stability considerations SciTech Connect Salberta, E.R.; Grimm, R.C.; Johnson, J.L.; Manickam, J.; Tang, W.M. 1987-02-01 Investigation of the effect of pressure anisotropy on tokamak equilibrium and stability is made with an MHD model. Realistic perpendicular and parallel pressure distributions, P⊥(psi,B) and P∥(psi,B), are obtained by solving a one-dimensional Fokker-Planck equation for neutral beam injection to find a distribution function f(E, v∥/v) at the position of minimum field on each magnetic surface and then using invariance of the magnetic moment to determine its value at each point on the surface. The shift of the surfaces of constant perpendicular and parallel pressure from the flux surfaces depends strongly on the angle of injection. This shift explains the observed increase or decrease in the stability conditions. Estimates of the stabilizing effect of hot trapped ions indicate that a large fraction must be nonresonant and thus decoupled from the bad curvature before it becomes important. 17. Radiative equilibrium model of Titan's atmosphere NASA Technical Reports Server (NTRS) Samuelson, R. E. 1983-01-01 The present global radiative equilibrium model for the Saturn satellite Titan is restricted to the two-stream approximation, is vertically homogeneous in its scattering properties, and is spectrally divided into one thermal and two solar channels. Between 13 and 33% of the total incident solar radiation is absorbed at the planetary surface, and the 30-60 ratio of violet to thermal IR absorption cross sections in the stratosphere leads to the large temperature inversion observed there.
The spectrally integrated mass absorption coefficient at thermal wavelengths is approximately constant throughout the stratosphere, and approximately linear with pressure in the troposphere, implying the presence of a uniformly mixed aerosol in the stratosphere. There also appear to be two regions of enhanced opacity near 30 and 500 mbar. 18. The effect of heating insufflation gas on acid-base alterations and core temperature during laparoscopic major abdominal surgery PubMed Central Lee, Kyung-Cheon; Kim, Ji Young; Lee, Hee-Dong; Kwon, Il Won 2011-01-01 Background Carbon dioxide (CO2) has different biophysical properties under different thermal conditions, which may affect its rate of absorption in the blood and the related adverse events. The present study was aimed to investigate the effects of heating of CO2 on acid-base balance using Stewart's physiochemical approach, and body temperature during laparoscopy. Methods Thirty adult patients undergoing laparoscopic major abdominal surgery were randomized to receive either room temperature CO2 (control group, n = 15) or heated CO2 (heated group, n = 15). The acid-base parameters were measured 10 min after the induction of anesthesia (T1), 40 min after pneumoperitoneum (T2), at the end of surgery (T3) and 1 h after surgery (T4). Body temperature was measured at 15-min intervals until the end of the surgery. Results There were no significant differences in pH, PaCO2, the apparent strong ion difference, the strong ion gap, bicarbonate ion, or lactate between two groups throughout the whole investigation period. At T2, pH was decreased whereas PaCO2 was increased in both groups compared with T1 but these changes were not significantly different. Body temperatures in the heated group were significantly higher than those in the control group from 30 to 90 min after pneumoperitoneum. 
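Stewart's physiochemical approach, used in the laparoscopy study above, works with quantities such as the apparent strong ion difference. One commonly used simplified form omits magnesium and calcium; the values below are illustrative normal plasma concentrations, not data from this study:

```python
def apparent_sid(na, k, cl, lactate):
    # Simplified apparent strong ion difference (mEq/L):
    # SIDa = [Na+] + [K+] - [Cl-] - [lactate-]
    return na + k - cl - lactate

print(apparent_sid(na=140, k=4.0, cl=104, lactate=1.0))  # 39.0 mEq/L
```

A fall in SIDa (e.g. from chloride gain or lactate accumulation) corresponds to a metabolic acidifying influence in the Stewart framework, which is why the study tracked this quantity alongside pH and PaCO2.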
Conclusions The heating of insufflating CO2 did not affect changes in the acid-base status and PaCO2 in patients undergoing laparoscopic abdominal surgery when the ventilator was set to maintain constant end-tidal CO2. However, the heated CO2 reduced the decrease in the core body temperature 30 min after the pneumoperitoneum. PMID:22110878 19. Shape characteristics of equilibrium and non-equilibrium fractal clusters. PubMed Mansfield, Marc L; Douglas, Jack F 2013-07-28 It is often difficult in practice to discriminate between equilibrium and non-equilibrium nanoparticle or colloidal-particle clusters that form through aggregation in gas or solution phases. Scattering studies often permit the determination of an apparent fractal dimension, but both equilibrium and non-equilibrium clusters in three dimensions frequently have fractal dimensions near 2, so that it is often not possible to discriminate on the basis of this geometrical property. A survey of the anisotropy of a wide variety of polymeric structures (linear and ring random and self-avoiding random walks, percolation clusters, lattice animals, diffusion-limited aggregates, and Eden clusters) based on the principal components of both the radius of gyration and electric polarizability tensor indicates, perhaps counter-intuitively, that self-similar equilibrium clusters tend to be intrinsically anisotropic at all sizes, while non-equilibrium processes such as diffusion-limited aggregation or Eden growth tend to be isotropic in the large-mass limit, providing a potential means of discriminating these clusters experimentally if anisotropy could be determined along with the fractal dimension. 
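The anisotropy measure described in the fractal-cluster abstract above, the principal components of the radius-of-gyration tensor, is easy to sketch for a simple equilibrium structure. Here a 3D random walk stands in for a flexible chain; this is an illustration, not the authors' survey code:

```python
import numpy as np

def gyration_eigenvalues(points):
    """Principal components of the radius-of-gyration tensor, descending."""
    centered = points - points.mean(axis=0)
    tensor = centered.T @ centered / len(points)   # 3x3 gyration tensor
    return np.sort(np.linalg.eigvalsh(tensor))[::-1]

rng = np.random.default_rng(0)
steps = rng.normal(size=(10_000, 3))   # 3D random-walk steps
walk = np.cumsum(steps, axis=0)        # stand-in for an equilibrium chain
ev = gyration_eigenvalues(walk)
print(ev[0] / ev[2])                   # much greater than 1: anisotropic
```

A single random-walk configuration is strongly anisotropic (the classic average eigenvalue ratios are roughly 12:3:1), which illustrates the abstract's point that equilibrium self-similar structures are intrinsically anisotropic at all sizes.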
Equilibrium polymer structures, such as flexible polymer chains, are normally self-similar due to the existence of only a single relevant length scale, and are thus anisotropic at all length scales, while non-equilibrium polymer structures that grow irreversibly in time eventually become isotropic if there is no difference in the average growth rates in different directions. There is apparently no proof of these general trends and little theoretical insight into what controls the universal anisotropy in equilibrium polymer structures of various kinds. This is an obvious topic of theoretical investigation, as well as a matter of practical interest. To address this general problem, we consider two experimentally accessible ratios, one between the hydrodynamic and gyration radii, the other 2. The empirical equilibrium structure of diacetylene Thorwirth, Sven; Harding, Michael E.; Muders, Dirk; Gauss, Jürgen 2008-09-01 High-level quantum-chemical calculations are reported at the MP2 and CCSD(T) levels of theory for the equilibrium structure and the harmonic and anharmonic force fields of diacetylene, H-C≡C-C≡C-H. The calculations were performed employing Dunning's hierarchy of correlation-consistent basis sets cc-pVXZ, cc-pCVXZ, and cc-pwCVXZ, as well as the ANO2 basis set of Almlöf and Taylor. An empirical equilibrium structure based on experimental rotational constants for 13 isotopic species of diacetylene and computed zero-point vibrational corrections is determined (re(emp): r(C-H) = 1.0615 Å, r(C≡C) = 1.2085 Å, r(C-C) = 1.3727 Å) and in good agreement with the best theoretical structure (CCSD(T)/cc-pCV5Z: r(C-H) = 1.0617 Å, r(C≡C) = 1.2083 Å, r(C-C) = 1.3737 Å). In addition, the computed fundamental vibrational frequencies are compared with the available experimental data and found in satisfactory agreement. 3. Achieving Chemical Equilibrium: The Role of Imposed Conditions in the Ammonia Formation Reaction ERIC Educational Resources Information Center Tellinghuisen, Joel 2006-01-01 Under conditions of constant temperature T and pressure P, chemical equilibrium occurs in a closed system (fixed mass) when the Gibbs free energy G of the reaction mixture is minimized. However, when chemical reactions occur under other conditions, other thermodynamic functions are minimized or maximized. For processes at constant T and volume V,… 4. Equilibrium econophysics: A unified formalism for neoclassical economics and equilibrium thermodynamics Sousa, Tânia; Domingos, Tiago 2006-11-01 We develop a unified conceptual and mathematical structure for equilibrium econophysics, i.e., the use of concepts and tools of equilibrium thermodynamics in neoclassical microeconomics and vice versa.
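The Gibbs-energy criterion in the ammonia-formation abstract above connects to the equilibrium constant through K = exp(-ΔG°/RT). A small sketch; the ΔG° value used is a textbook figure for N2 + 3H2 → 2NH3 at 298 K, not a number from the article:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def equilibrium_constant(delta_g_j_mol, temperature_k):
    """K from the standard Gibbs free energy change: K = exp(-dG/(R*T))."""
    return math.exp(-delta_g_j_mol / (R * temperature_k))

# Illustrative: dG° ≈ -33 kJ/mol for N2 + 3H2 -> 2NH3 at 298 K
print(equilibrium_constant(-33_000, 298))  # ~6e5
```

The large K at room temperature, despite the reaction's slow kinetics, is precisely the tension the article's pedagogical discussion of imposed conditions is built around.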
Within this conceptual structure the results obtained in microeconomic theory are: (1) the definition of irreversibility in economic behavior; (2) the clarification that the Engel curve and the offer curve are not descriptions of real processes dictated by the maximization of utility at constant endowment; (3) the derivation of a relation between elasticities proving that economic elasticities are not all independent; (4) the proof that Giffen goods do not exist in a stable equilibrium; (5) the derivation that ‘economic integrability’ is equivalent to the generalized Le Chatelier principle and (6) the definition of a first order phase transition, i.e., a transition between separate points in the utility function. In thermodynamics the results obtained are: (1) a relation between the non-dimensional isothermal and adiabatic compressibilities and the increase or decrease in the thermodynamic potentials; (2) the distinction between mathematical integrability and optimization behavior and (3) the generalization of the Clapeyron equation. 5. The acid-base resistant zone in three dentin bonding systems. PubMed Inoue, Go; Nikaido, Toru; Foxton, Richard M; Tagami, Junji 2009-11-01 An acid-base resistant zone has been found to exist after acid-base challenge adjacent to the hybrid layer using SEM. The aim of this study was to examine the acid-base resistant zone using three different bonding systems. Dentin disks were applied with three different bonding systems, and then a resin composite was light-cured to make dentin disk sandwiches. After acid-base challenge, the polished surfaces were observed using SEM. For both one- and two-step self-etching primer systems, an acid-base resistant zone was clearly observed adjacent to the hybrid layer - but with differing appearances. For the wet bonding system, the presence of an acid-base resistant zone was unclear. 
This was because the self-etching primer systems etched the dentin surface mildly, such that the remaining mineral phase of dentin and the bonding agent yielded clear acid-base resistant zones. In conclusion, the acid-base resistant zone was clearly observed when self-etching primer systems were used, but not so for the wet bonding system. 6. Thai Grade 11 Students' Alternative Conceptions for Acid-Base Chemistry ERIC Educational Resources Information Center Artdej, Romklao; Ratanaroutai, Thasaneeya; Coll, Richard Kevin; Thongpanchang, Tienthong 2010-01-01 This study involved the development of a two-tier diagnostic instrument to assess Thai high school students' understanding of acid-base chemistry. The acid-base diagnostic test (ABDT) comprising 18 items was administered to 55 Grade 11 students in a science and mathematics programme during the second semester of the 2008 academic year. Analysis of… 7. A Comparative Study of French and Turkish Students' Ideas on Acid-Base Reactions ERIC Educational Resources Information Center Cokelez, Aytekin 2010-01-01 The goal of this comparative study was to determine the knowledge that French and Turkish upper secondary-school students (grades 11 and 12) acquire on the concept of acid-base reactions. Following an examination of the relevant curricula and textbooks in the two countries, 528 students answered six written questions about the acid-base concept.… 8. High School Students' Understanding of Acid-Base Concepts: An Ongoing Challenge for Teachers ERIC Educational Resources Information Center Damanhuri, Muhd Ibrahim Muhamad; Treagust, David F.; Won, Mihye; Chandrasegaran, A. L. 2016-01-01 Using a quantitative case study design, the "Acids-Bases Chemistry Achievement Test" ("ABCAT") was developed to evaluate the extent to which students in Malaysian secondary schools achieved the intended curriculum on acid-base concepts. Responses were obtained from 260 Form 5 (Grade 11) students from five schools to initially… 9. 
Modeling description and spectroscopic evidence of surface acid-base properties of natural illites. PubMed Liu, W 2001-12-01 The acid-base properties of natural illites from different areas were studied by potentiometric titrations. The acidimetric supernatant was regarded as the system blank to calculate the surface site concentration due to consideration of substrate dissolution during the prolonged acidic titration. The following surface complexation model could give a good interpretation of the surface acid-base reactions of the aqueous illites: 10. Collaborative Strategies for Teaching Common Acid-Base Disorders to Medical Students ERIC Educational Resources Information Center Petersen, Marie Warrer; Toksvang, Linea Natalie; Plovsing, Ronni R.; Berg, Ronan M. G. 2014-01-01 The ability to recognize and diagnose acid-base disorders is of the utmost importance in the clinical setting. However, it has been the experience of the authors that medical students often have difficulties learning the basic principles of acid-base physiology in the respiratory physiology curriculum, particularly when applying this knowledge to… 11. Canonical Pedagogical Content Knowledge by Cores for Teaching Acid-Base Chemistry at High School ERIC Educational Resources Information Center 2015-01-01 The topic of acid-base chemistry is one of the oldest in general chemistry courses and it has been almost continuously in academic discussion. The central purpose of documenting the knowledge and beliefs of a group of ten Mexican teachers with experience in teaching acid-base chemistry in high school was to know how they design, prepare and… 12. [Dynamics of blood gases and acid-base balance in patients with carbon monoxide acute poisoning]. PubMed Polozova, E V; Shilov, V V; Bogachova, A S; Davydova, E V 2015-01-01 Blood gases and acid-base balance were evaluated in patients with acute carbon monoxide poisoning, according to the presence of inhalation injury.
The evidence indicates that thermochemical injury of the respiratory tract induced a severe acid-base imbalance that remained decompensated for a long time despite treatment. 13. Neutral and charged matter in equilibrium with black holes Bronnikov, K. A.; Zaslavskii, O. B. 2011-10-01 We study the conditions of a possible static equilibrium between spherically symmetric, electrically charged or neutral black holes and ambient matter. The following kinds of matter are considered: (1) neutral and charged matter with a linear equation of state pr = wρ (for neutral matter the results of our previous work are reproduced), (2) neutral and charged matter with pr ∼ ρ^m, m > 1, and (3) the possible presence of a “vacuum fluid” (the cosmological constant or, more generally, anything that satisfies the equality T^0_0 = T^1_1 at least at the horizon). We find a number of new cases of such an equilibrium, including those generalizing the well-known Majumdar-Papapetrou conditions for charged dust. It turns out, in particular, that ultraextremal black holes cannot be in equilibrium with any matter in the absence of a vacuum fluid; meanwhile, matter with w > 0, if it is properly charged, can surround an extremal charged black hole. 14. Uncertainty of mantle geophysical properties computed from phase equilibrium models Connolly, J. A. D.; Khan, A. 2016-05-01 Phase equilibrium models are used routinely to predict geophysically relevant mantle properties. A limitation of this approach is that nonlinearity of the phase equilibrium problem precludes direct assessment of the resultant uncertainties. To overcome this obstacle, we stochastically assess uncertainties along self-consistent mantle adiabats for pyrolitic and basaltic bulk compositions to 2000 km depth. The dominant components of the uncertainty are the identity, composition and elastic properties of the minerals. For P wave speed and density, the latter components vary little, whereas the first is confined to the upper mantle.
Consequently, P wave speeds, densities, and adiabatic temperatures and pressures predicted by phase equilibrium models are more uncertain in the upper mantle than in the lower mantle. In contrast, uncertainties in S wave speeds are dominated by the uncertainty in shear moduli and are approximately constant throughout the model depth range. 15. Chemical-equilibrium calculations for aqueous geothermal brines SciTech Connect Kerrisk, J.F. 1981-05-01 Results from four chemical-equilibrium computer programs, REDEQL.EPAK, GEOCHEM, WATEQF, and SENECA2, have been compared with experimental solubility data for some simple systems of interest with geothermal brines. Seven test cases involving solubilities of CaCO₃, amorphous SiO₂, CaSO₄, and BaSO₄ at various temperatures from 25 to 300 °C and in NaCl or HCl solutions of 0 to 4 molal have been examined. Significant differences between calculated results and experimental data occurred in some cases. These differences were traced to inaccuracies in free-energy or equilibrium-constant data and in activity coefficients used by the programs. Although currently available chemical-equilibrium programs can give reasonable results for these calculations, considerable care must be taken in the selection of free-energy data and methods of calculating activity coefficients. 16. Spectral Quasi-Equilibrium Manifold for Chemical Kinetics. PubMed Kooshkbaghi, Mahdi; Frouzakis, Christos E; Boulouchos, Konstantinos; Karlin, Iliya V 2016-05-26 The Spectral Quasi-Equilibrium Manifold (SQEM) method is a model reduction technique for chemical kinetics based on entropy maximization under constraints built by the slowest eigenvectors at equilibrium. The method is revisited, discussed and validated here through the Michaelis-Menten kinetic scheme, and the quality of the reduction is related to the temporal evolution and the gap between eigenvalues.
SQEM is then applied to detailed reaction mechanisms for the homogeneous combustion of hydrogen, syngas, and methane mixtures with air in adiabatic constant pressure reactors. The system states computed using SQEM are compared with those obtained by direct integration of the detailed mechanism, and good agreement between the reduced and the detailed descriptions is demonstrated. The SQEM reduced model of hydrogen/air combustion is also compared with another similar technique, the Rate-Controlled Constrained-Equilibrium (RCCE). For the same number of representative variables, SQEM is found to provide a more accurate description. 17. Effective Torsion and Spring Constants in a Hybrid Translational-Rotational Oscillator ERIC Educational Resources Information Center Nakhoda, Zein; Taylor, Ken 2011-01-01 A torsion oscillator is a vibrating system that experiences a restoring torque given by τ = -κθ when it experiences a rotational displacement θ from its equilibrium position. The torsion constant κ is analogous to the spring constant "k" for the traditional translational oscillator (for which the restoring force… 18. Equilibrium studies of copper ion adsorption onto palm kernel fibre. PubMed Ofomaja, Augustine E 2010-07-01 The equilibrium sorption of copper ions from aqueous solution using a new adsorbent, palm kernel fibre, has been studied. Palm kernel fibre is obtained in large amounts as a waste product of palm oil production. Batch equilibrium studies were carried out and system variables such as solution pH, sorbent dose, and sorption temperature were varied. The equilibrium sorption data were then analyzed using the Langmuir, Freundlich, Dubinin-Radushkevich (D-R) and Temkin isotherms. The fit of these isotherm models to the equilibrium sorption data was determined using the linear coefficient of determination, r², and the non-linear chi-square, χ², error analysis.
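The Langmuir fit described in the abstract above can be sketched numerically. Below is a minimal illustration using the common linearized form C/q = C/qmax + 1/(Ka·qmax); all data and constants are synthetic placeholders (only qmax echoes the reported 3.17 × 10⁻⁴ mol/g), not the paper's measurements:

```python
import numpy as np

def langmuir(C, qmax, Ka):
    """Langmuir isotherm: q = qmax * Ka * C / (1 + Ka * C)."""
    return qmax * Ka * C / (1.0 + Ka * C)

# Synthetic equilibrium data -- illustrative values only, not the paper's
# measurements (qmax mimics the reported 3.17e-4 mol/g; Ka is invented).
qmax_true, Ka_true = 3.17e-4, 2.0e3
C = np.linspace(1e-4, 5e-3, 20)             # equilibrium concentration, mol/dm^3
rng = np.random.default_rng(0)
q_obs = langmuir(C, qmax_true, Ka_true) * (1.0 + 0.02 * rng.standard_normal(C.size))

# Linearized Langmuir fit: C/q = C/qmax + 1/(Ka*qmax)
slope, intercept = np.polyfit(C, C / q_obs, 1)
qmax_fit = 1.0 / slope
Ka_fit = slope / intercept

# Linear coefficient of determination r^2 on the reconstructed isotherm
resid = q_obs - langmuir(C, qmax_fit, Ka_fit)
r2 = 1.0 - np.sum(resid**2) / np.sum((q_obs - q_obs.mean()) ** 2)
```

The same synthetic-data approach extends to the Freundlich, Dubinin-Radushkevich and Temkin isotherms by swapping in their respective linearized forms.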
The results revealed that sorption was pH dependent and increased with increasing solution pH above the pH_PZC of the palm kernel fibre, with an optimum dose of 10 g/dm³. The equilibrium data were found to fit the Langmuir isotherm model best, with a monolayer capacity of 3.17 × 10⁻⁴ mol/g at 339 K. The sorption equilibrium constant, Ka, increased with increasing temperature, indicating that the bond strength between sorbate and sorbent increased with temperature and that sorption was endothermic. This was confirmed by the increase in the values of the Temkin isotherm constant, B1, with increasing temperature. The Dubinin-Radushkevich (D-R) isotherm parameter, the free energy E, was in the range 15.7-16.7 kJ/mol, suggesting that the sorption mechanism was ion exchange. Desorption studies showed that a high percentage of the copper was desorbed from the adsorbent using acid solutions (HCl, HNO₃ and CH₃COOH), and the desorption percentage increased with acid concentration. The thermodynamics of the copper ions/palm kernel fibre system indicate that the process is spontaneous and endothermic. PMID:20346574 20. New Quasar Studies Keep Fundamental Physical Constant Constant 2004-03-01 Very Large Telescope sets stringent limit on possible variation of the fine-structure constant over cosmological time Summary Detecting or constraining the possible time variations of fundamental physical constants is an important step toward a complete understanding of basic physics and hence the world in which we live. A step in which astrophysics proves most useful. Previous astronomical measurements of the fine structure constant - the dimensionless number that determines the strength of interactions between charged particles and electromagnetic fields - suggested that this particular constant is increasing very slightly with time. If confirmed, this would have very profound implications for our understanding of fundamental physics.
New studies, conducted using the UVES spectrograph on Kueyen, one of the 8.2-m telescopes of ESO's Very Large Telescope array at Paranal (Chile), secured new data with unprecedented quality. These data, combined with a very careful analysis, have provided the strongest astronomical constraints to date on the possible variation of the fine structure constant. They show that, contrary to previous claims, no evidence exists for assuming a time variation of this fundamental constant. PR Photo 07/04: Relative Changes with Redshift of the Fine Structure Constant (VLT/UVES) A fine constant To explain the Universe and to represent it mathematically, scientists rely on so-called fundamental constants or fixed numbers. The fundamental laws of physics, as we presently understand them, depend on about 25 such constants. Well-known examples are the gravitational constant, which defines the strength of the force acting between two bodies, such as the Earth and the Moon, and the speed of light. One of these constants is the so-called "fine structure constant", alpha = 1/137.03599958, a combination of electrical charge of the electron, the Planck constant and the speed of light. The fine structure constant describes how electromagnetic forces hold 1. Interactions of Virus Like Particles in Equilibrium and Non-equilibrium Systems Lin, Hsiang-Ku This thesis summarizes my Ph.D. research on the interactions of virus like particles in equilibrium and non-equilibrium biological systems. In the equilibrium system, we studied the fluctuation-induced forces between inclusions in a fluid membrane. We developed an exact method to calculate thermal Casimir forces between inclusions of arbitrary shapes and separation, embedded in a fluid membrane whose fluctuations are governed by the combined action of surface tension, bending modulus, and Gaussian rigidity. Each object's shape and mechanical properties enter only through a characteristic matrix, a static analog of the scattering matrix.
We calculate the Casimir interaction between two elastic disks embedded in a membrane. In particular, we find that at short separations the interaction is strong and independent of surface tension. In the non-equilibrium system, we studied the transport and deposition dynamics of colloids in saturated porous media under unfavorable filtering conditions. As an alternative to traditional convection-diffusion or more detailed numerical models, we consider a mean-field description in which the attachment and detachment processes are characterized by an entire spectrum of rate constants, ranging from shallow traps which mostly account for hydrodynamic dispersivity, all the way to the permanent traps associated with physical straining. The model has an analytical solution which allows analysis of its properties, including the long-time asymptotic behavior and the profile of the deposition curves. Furthermore, the model gives rise to a filtering front whose structure, stability and propagation velocity are examined. Based on these results, we propose an experimental protocol to determine the parameters of the model. 2. Equilibrium and dynamic design principles for binding molecules engineered for reagentless biosensors. PubMed de Picciotto, Seymour; Imperiali, Barbara; Griffith, Linda G; Wittrup, K Dane 2014-09-01 Reagentless biosensors rely on the interaction of a binding partner and its target to generate a change in fluorescent signal using an environment-sensitive fluorophore or Förster resonance energy transfer. Binding affinity can exert a significant influence on both the equilibrium and the dynamic response characteristics of such a biosensor. We here develop a kinetic model for the dynamic performance of a reagentless biosensor.
Using a sinusoidal signal for ligand concentration, our findings suggest that it is optimal to use a binding moiety whose equilibrium dissociation constant matches that of the average predicted input signal, while maximizing both the association rate constant and the dissociation rate constant at the necessary ratio to create the desired equilibrium constant. Although practical limitations constrain the attainment of these objectives, the derivation of these design principles provides guidance for improved reagentless biosensor performance and metrics for quality standards in the development of biosensors. These concepts are broadly relevant to reagentless biosensor modalities. 3. A mathematical model of pH, based on the total stoichiometric concentration of acids, bases and ampholytes dissolved in water. PubMed Mioni, Roberto; Mioni, Giuseppe 2015-10-01 In chemistry and in acid-base physiology, the Henderson-Hasselbalch equation plays a pivotal role in studying the behaviour of the buffer solutions. However, it seems that the general function to calculate the valence of acids, bases and ampholytes, N = f(pH), at any pH, has only been provided by Kildeberg. This equation can be applied to strong acids and bases, pluriprotic weak acids, bases and ampholytes, with an arbitrary number of acid strength constants, pKA, including water. By differentiating this function with respect to pH, we obtain the general equation for the buffer value. In addition, by integrating the titration curve, TA, proposed by Kildeberg, and calculating its Legendre transform, we obtain the Gibbs free energy of pH (or pOH)-dependent titratable acid. Starting from the law of electroneutrality and applying suitable simplifications, it is possible to calculate the pH of the buffer solutions by numerical methods, available in software packages such as Excel. The concept of buffer capacity has also been clarified by Urbansky, but, at variance with our approach, not in an organic manner. 
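The Henderson-Hasselbalch relation and the buffer value discussed above can be sketched in a few lines for a single monoprotic weak acid, with the water terms of the full treatment omitted (the pKa and all concentrations below are illustrative assumptions, not values from the paper):

```python
import math

def henderson_hasselbalch(pKa, base, acid):
    """Buffer pH from the Henderson-Hasselbalch equation (concentrations in mol/dm^3)."""
    return pKa + math.log10(base / acid)

def buffer_value(pH, pKa, C_total):
    """Van Slyke buffer value beta = dC_b/dpH for one monoprotic weak acid;
    the water terms of the full treatment are omitted for simplicity."""
    Ka, H = 10.0 ** (-pKa), 10.0 ** (-pH)
    return math.log(10.0) * C_total * Ka * H / (Ka + H) ** 2

# Equimolar buffer (pKa = 4.76 assumed, acetate-like): pH equals pKa,
# and the buffer value peaks there at ln(10) * C_total / 4.
pH = henderson_hasselbalch(4.76, 0.10, 0.10)
beta_peak = buffer_value(pH, 4.76, 0.20)
```

Differentiating the titration curve, as the abstract describes, generalizes this single-acid buffer value to arbitrary mixtures of acids, bases and ampholytes.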
In fact, for each set of monobasic, dibasic, tribasic acids, etc., various equations are presented which independently fit each individual acid-base category. Consequently, with the increase in acid groups (pKA), the equations become more and more difficult, both in practice and in theory. Some examples are proposed to highlight the boundary that exists between acid-base physiology and the thermodynamic concepts of energy, chemical potential, amount of substance and acid resistance. PMID:26059505 5. A beta-D-allopyranoside-grafted Ru(II) complex: synthesis and acid-base and DNA-binding properties. PubMed Ma, Yan-Zi; Yin, Hong-Ju; Wang, Ke-Zhi 2009-08-01 A new ruthenium(II) complex grafted with beta-d-allopyranoside, Ru(bpy)(2)(Happip)(ClO(4))(2) (where bpy = 2,2'-bipyridine; Happip = 2-(4-(beta-d-allopyranoside)phenyl)imidazo[4,5-f][1,10]phenanthroline), has been synthesized and characterized by elemental analysis, (1)H NMR spectroscopy, and mass spectrometry. The acid-base properties of the complex have been studied by UV-visible and luminescence spectrophotometric pH titrations, and ground- and excited-state ionization constants have been derived. The Ru(II) complex functions as a DNA intercalator as revealed by UV-visible and emission titrations, salt effects, steady-state emission quenching by [Fe(CN)(6)](4-), DNA competitive binding with ethidium bromide, DNA melting experiment, and viscosity measurements. 6. Tuning universality far from equilibrium PubMed Central Karl, Markus; Nowak, Boris; Gasenzer, Thomas 2013-01-01 Possible universal dynamics of a many-body system far from thermal equilibrium are explored. A focus is set on meta-stable non-thermal states exhibiting critical properties such as self-similarity and independence of the details of how the respective state has been reached. It is proposed that universal dynamics far from equilibrium can be tuned to exhibit a dynamical transition where these critical properties change qualitatively. This is demonstrated for the case of a superfluid two-component Bose gas exhibiting different types of long-lived but non-thermal critical order.
Scaling exponents controlled by the ratio of experimentally tuneable coupling parameters offer themselves as natural smoking guns. The results shed light on the wealth of universal phenomena expected to exist in the far-from-equilibrium realm. PMID:23928853 7. Phase coexistence far from equilibrium Dickman, Ronald 2016-04-01 Investigation of simple far-from-equilibrium systems exhibiting phase separation leads to the conclusion that phase coexistence is not well defined in this context. This is because the properties of the coexisting nonequilibrium systems depend on how they are placed in contact, as verified in the driven lattice gas with attractive interactions, and in the two-temperature lattice gas, under (a) weak global exchange between uniform systems, and (b) phase-separated (nonuniform) systems. Thus, far from equilibrium, the notions of universality of phase coexistence (i.e., independence of how systems exchange particles and/or energy), and of phases with intrinsic properties (independent of their environment) are lost. 8. Toroidal plasma equilibrium with gravity SciTech Connect Yoshikawa, S. 1980-09-01 Toroidal magnetic field configuration in a gravitational field is calculated both from a simple force-balance and from the calculation using magnetic surfaces. A configuration is found which is positionally stable in a star. The vibrational frequency near the equilibrium point is proportional to the hydrostatic frequency of a star multiplied by the ratio (W_B/W_M)^(1/2), where W_B is the magnetic field energy density and W_M is the material pressure at the equilibrium point. It is proposed that this frequency may account for the observed solar spot cycles. 9. Adiabatic evolution of plasma equilibrium PubMed Central Grad, H.; Hu, P. N.; Stevens, D. C. 1975-01-01 A new theory of plasma equilibrium is introduced in which adiabatic constraints are specified.
This leads to a mathematically nonstandard structure, as compared to the usual equilibrium theory, in which prescription of pressure and current profiles leads to an elliptic partial differential equation. Topologically complex configurations require further generalization of the concept of adiabaticity to allow irreversible mixing of plasma and magnetic flux among islands. Matching conditions across a boundary layer at the separatrix are obtained from appropriate conservation laws. Applications are made to configurations with planned islands (as in Doublet) and accidental islands (as in Tokamaks). Two-dimensional, axially symmetric, helically symmetric, and closed line equilibria are included. PMID:16578729 10. Novel mapping in non-equilibrium stochastic processes Heseltine, James; Kim, Eun-jin 2016-04-01 We investigate the time-evolution of a non-equilibrium system in view of the change in information and provide a novel mapping relation which quantifies the change in information far from equilibrium and the proximity of a non-equilibrium state to the attractor. Specifically, we utilize a nonlinear stochastic model where the stochastic noise plays the role of incoherent regulation of the dynamical variable x and analytically compute the rate of change in information (information velocity) from the time-dependent probability distribution function. From this, we quantify the total change in information in terms of the information length L and the associated action J, where L represents the distance that the system travels in the fluctuation-based, statistical metric space parameterized by time. As the initial probability density function's mean position μ is decreased from the final equilibrium value μ* (the carrying capacity), L and J increase monotonically with interesting power-law mapping relations. In comparison, as μ is increased from μ*, L and J increase slowly until they level off to a constant value.
This manifests the proximity of the state to the attractor caused by a strong correlation for large μ through large fluctuations. Our proposed mapping relation provides a new way of understanding the progression of the complexity in a non-equilibrium system in view of information change and the structure of the underlying attractor. 11. Modeling Bacteria Surface Acid-Base Properties: The Overprint Of Biology Amores, D. R.; Smith, S.; Warren, L. A. 2009-05-01 Bacteria are ubiquitous in the environment and are important repositories for metals as well as nucleation templates for a myriad of secondary minerals due to an abundance of reactive surface binding sites. Model elucidation of whole cell surface reactivity simplifies bacteria as viable but static, i.e., no metabolic activity, to enable fits of microbial data sets from models derived from mineral surfaces. Here we investigate the surface proton charging behavior of live and dead whole cell cyanobacteria (Synechococcus sp.) harvested from a single parent culture by acid-base titration using a Fully Optimized ContinUouS (FOCUS) pKa spectrum method. Viability of live cells was verified by successful recultivation post experimentation, whereas dead cells were consistently non-recultivable. Surface site identities derived from binding constants determined for both the live and dead cells are consistent with molecular analogs for organic functional groups known to occur on microbial surfaces: carboxylic (pKa = 2.87-3.11), phosphoryl (pKa = 6.01-6.92) and amine/hydroxyl groups (pKa = 9.56-9.99). However, variability in total ligand concentration among the live cells is greater than those between the live and dead. The total ligand concentrations (LT, mol mg⁻¹ dry solid) derived from the live cell titrations (n=12) clustered into two sub-populations: high (LT = 24.4) and low (LT = 5.8), compared to the single concentration for the dead cell titrations (LT = 18.8; n=5).
We infer from these results that metabolic activity can substantively impact surface reactivity of morphologically identical cells. These results and their modeling implications for bacteria surface reactivities will be discussed. 12. Interpretation of pH-activity profiles for acid-base catalysis from molecular simulations. PubMed Dissanayake, Thakshila; Swails, Jason M; Harris, Michael E; Roitberg, Adrian E; York, Darrin M 2015-02-17 The measurement of reaction rate as a function of pH provides essential information about mechanism. These rates are sensitive to the pKa values of amino acids directly involved in catalysis that are often shifted by the enzyme active site environment. Experimentally observed pH-rate profiles are usually interpreted using simple kinetic models that allow estimation of "apparent pKa" values of presumed general acid and base catalysts. One of the underlying assumptions in these models is that the protonation states are uncorrelated. In this work, we introduce the use of constant pH molecular dynamics simulations in explicit solvent (CpHMD) with replica exchange in the pH-dimension (pH-REMD) as a tool to aid in the interpretation of pH-activity data of enzymes and to test the validity of different kinetic models. We apply the methods to RNase A, a prototype acid-base catalyst, to predict the macroscopic and microscopic pKa values, as well as the shape of the pH-rate profile. Results for apo and cCMP-bound RNase A agree well with available experimental data and suggest that deprotonation of the general acid and protonation of the general base are not strongly coupled in transphosphorylation and hydrolysis steps. Stronger coupling, however, is predicted for the Lys41 and His119 protonation states in apo RNase A, leading to the requirement for a microscopic kinetic model.
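The uncoupled two-pKa kinetic model referred to above (the "simple kinetic model" used to read off apparent pKa values) can be sketched directly; the pKa values below are illustrative placeholders, not fitted RNase A constants:

```python
import numpy as np

# Uncoupled two-pKa kinetic model for a bell-shaped pH-rate profile:
# activity requires the general base deprotonated AND the general acid protonated.
# pKa values are illustrative placeholders, not fitted RNase A constants.
pKa_base, pKa_acid = 6.0, 8.0

pH = np.linspace(2.0, 12.0, 2001)
f_base = 1.0 / (1.0 + 10.0 ** (pKa_base - pH))   # fraction of base deprotonated
f_acid = 1.0 / (1.0 + 10.0 ** (pH - pKa_acid))   # fraction of acid protonated
rate = f_base * f_acid                            # relative activity

pH_opt = float(pH[np.argmax(rate)])               # peak at (pKa_base + pKa_acid) / 2
```

The resulting profile is the classic bell shape, peaking midway between the two apparent pKa values; correlated protonation states, as predicted for apo RNase A, break this simple factorized form.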
This type of analysis may be important for other catalytic systems where the active forms of the implicated general acid and base are oppositely charged and more highly correlated. These results suggest a new way for CpHMD/pH-REMD simulations to bridge the gap with experiments to provide a molecular-level interpretation of pH-activity data in studies of enzyme mechanisms. 13. Understanding Thermal Equilibrium through Activities ERIC Educational Resources Information Center 2015-01-01 Thermal equilibrium is a basic concept in thermodynamics. In India, this concept is generally introduced at the first year of undergraduate education in physics and chemistry. In our earlier studies (Pathare and Pradhan 2011 "Proc. episteme-4 Int. Conf. to Review Research on Science Technology and Mathematics Education" pp 169-72) we… 14. An investigation of equilibrium concepts NASA Technical Reports Server (NTRS) Prozan, R. J. 1982-01-01 A different approach to modeling of the thermochemistry of rocket engine combustion phenomena is presented. The methodology described is based on the hypothesis of a new variational principle applicable to compressible fluid mechanics. This hypothesis is extended to treat the thermochemical behavior of a reacting (equilibrium) gas in an open system. 15. A Simplified Undergraduate Laboratory Experiment to Evaluate the Effect of the Ionic Strength on the Equilibrium Concentration Quotient of the Bromcresol Green Dye ERIC Educational Resources Information Center Rodriguez, Hernan B.; Mirenda, Martin 2012-01-01 A modified laboratory experiment for undergraduate students is presented to evaluate the effects of the ionic strength, "I", on the equilibrium concentration quotient, K[subscript c], of the acid-base indicator bromcresol green (BCG). The two-step deprotonation of the acidic form of the dye (sultone form), as it is dissolved in water, yields… 16. Constant-Pressure Hydraulic Pump NASA Technical Reports Server (NTRS) Galloway, C. W. 
1982-01-01 Constant output pressure in gas-driven hydraulic pump would be assured in new design for gas-to-hydraulic power converter. With a force-multiplying ring attached to gas piston, expanding gas would apply constant force on hydraulic piston even though gas pressure drops. As a result, pressure of hydraulic fluid remains steady, and power output of the pump does not vary. 17. Improving pharmacy students' understanding and long-term retention of acid-base chemistry. PubMed Roche, Victoria F 2007-12-15 Despite repeated exposure to the principles underlying the behavior of organic acids and bases in aqueous solution, some pharmacy students remain confused about the topic of acid-base chemistry. Since a majority of organic drug molecules have acid-base character, the ability to predict their reactivity and the extent to which they will ionize in a given medium is paramount to students' understanding of essentially all aspects of drug action in vivo and in vitro. This manuscript presents a medicinal chemistry lesson in the fundamentals of acid-base chemistry that many pharmacy students have found enlightening and clarifying. 18. [Practical diagnostics of acid-base disorders: part I: differentiation between respiratory and metabolic disturbances]. PubMed Deetjen, P; Lichtwarck-Aschoff, M 2012-11-01 The first part of this overview on diagnostic tools for acid-base disorders focuses on basic knowledge for distinguishing between respiratory and metabolic causes of a particular disturbance. Rather than taking sides in the great transatlantic or traditional-modern debate on the best theoretical model for understanding acid-base physiology, this article tries to extract what is most relevant for everyday clinical practice from the three schools involved in these keen debates: the Copenhagen, the Boston and the Stewart schools. Each school is particularly strong in a specific diagnostic or therapeutic field. 
Appreciating these various strengths, a unifying, simplified algorithm together with an acid-base calculator is discussed. 19. Acid-base and chelatometric photo-titrations with photosensors and membrane photosensors. PubMed Matsuo, T; Masuda, Y; Sekido, E 1986-08-01 Photosensors (PS) and membrane photosensors (MPS), which can be immersed in the test solution and facilitate the measurement of concentration, have been developed by miniaturizing an optical system consisting of a light source and a photocell. For use in acid-base or complexometric titrations a poly(vinyl chloride) membrane containing an acid-base or metallochromic indicator can be applied as a coating to the photocell. Spectrophotometric determination of copper(II), and photometric acid-base and chelatometric titrations have been performed with the PS and MPS systems. 20. Water dimer equilibrium constant calculation: a quantum formulation including metastable states. PubMed Leforestier, Claude 2014-02-21 We present a full quantum evaluation of the water second virial coefficient B(T) based on the Takahashi-Imada second-order approximation. As the associated trace Tr[e^(-βH_AB) - e^(-βH_AB^(0))] is performed in the coordinate representation, it also includes contributions from the whole continuum, i.e., resonances and collision pairs of monomers. This approach is compared to a Path Integral Monte Carlo evaluation of this coefficient by Schenter [J. Chem. Phys. 117, 6573 (2002)] for the TIP4P potential and shown to give extremely close results in the low-temperature range (250-450 K) reported. Using a recent ab initio flexible potential for the water dimer, this new formulation leads to very good agreement with experimental values over the whole range of temperatures available.
The virial coefficient is then used in the well-known relation Kp(T) = -(B(T) - bM)/RT, where the excluded volume bM is assimilated to the second virial coefficient of pure water monomer vapor and approximated from the inner repulsive part of the interaction potential. This definition, which renders bM temperature dependent, allows us to retrieve the 38 cm³ mol⁻¹ value commonly used at room temperature. The resulting values for Kp(T) are in agreement with available experimental data obtained from infrared absorption spectra of water vapor. 1. Non-Equilibrium Properties from Equilibrium Free Energy Calculations NASA Technical Reports Server (NTRS) Pohorille, Andrew; Wilson, Michael A. 2012-01-01 Calculating free energy in computer simulations is of central importance in statistical mechanics of condensed media and its applications to chemistry and biology, not only because it is the most comprehensive and informative quantity that characterizes the equilibrium state, but also because it often provides an efficient route to access dynamic and kinetic properties of a system. Most applications of equilibrium free energy calculations to non-equilibrium processes rely on a description in which a molecule or an ion diffuses in the potential of mean force. In the general case this description is a simplification, but it might be satisfactorily accurate in many instances of practical interest. This hypothesis has been tested on the example of the electrodiffusion equation. Conductance of model ion channels has been calculated directly by counting the number of ion-crossing events observed during long molecular dynamics simulations and has been compared with the conductance obtained from solving the generalized Nernst-Planck equation. It has been shown that under relatively modest conditions the agreement between these two approaches is excellent, thus demonstrating that the assumptions underlying the diffusion equation are fulfilled.
Under these conditions the electrodiffusion equation provides an efficient approach to calculating the full voltage-current dependence routinely measured in electrophysiological experiments. 2. Electrospun poly(lactic acid) based conducting nanofibrous networks Patra, S. N.; Bhattacharyya, D.; Ray, S.; Easteal, A. J. 2009-08-01 Multi-functionalised micro/nanostructures of conducting polymers in neat or blended forms have received much attention because of their unique properties and technological applications in electrical, magnetic and biomedical devices. Biopolymer-based conducting fibrous mats are of special interest for tissue engineering because they not only physically support tissue growth but also are electrically conductive, and thus are able to stimulate specific cell functions or trigger cell responses. They are effective for carrying current in biological environments and can thus be considered for delivering local electrical stimuli at the site of damaged tissue to promote wound healing. Electrospinning is an established way to process polymer solutions or melts into continuous fibres with diameter often in the nanometre range. This process primarily depends on a number of parameters, including the type of polymer, solution viscosity, polarity and surface tension of the solvent, electric field strength and the distance between the spinneret and the collector. The present research has included polyaniline (PANi) as the conducting polymer and poly(L-lactic acid) (PLLA) as the biopolymer. Dodecylbenzene sulphonic acid (DBSA) doped PANi and PLLA have been dissolved in a common solvent (mixtures of chloroform and dimethyl formamide (DMF)), and the solutions successfully electrospun. DMF enhanced the dielectric constant of the solvent, and tetra butyl ammonium bromide (TBAB) was used as an additive to increase the conductivity of the solution. 
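For the electrodiffusion picture in entry 1, the constant-field Goldman-Hodgkin-Katz equation is the standard closed-form voltage-current relation; a sketch of it below (a textbook stand-in, not the authors' generalized Nernst-Planck solver):

```python
import math

F = 96485.0  # Faraday constant, C/mol
R = 8.314    # gas constant, J/(mol K)

def ghk_current_density(P, z, V, c_in, c_out, T=298.15):
    """GHK current density (A/m^2) for one ion species.
    P: permeability (m/s); z: valence; V: membrane voltage (V, inside
    relative to outside); concentrations in mol/m^3."""
    if abs(V) < 1e-12:
        # V -> 0 limit: pure diffusive flux times charge, I = P z F (c_in - c_out)
        return P * z * F * (c_in - c_out)
    u = z * F * V / (R * T)
    return P * z * F * u * (c_in - c_out * math.exp(-u)) / (1.0 - math.exp(-u))
```

The current vanishes at the Nernst potential V = (RT/zF) ln(c_out/c_in), the usual consistency check.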
The DBSA-doped PANi/PLLA mat exhibits an almost bead-free network of nanofibres that have extraordinarily smooth surfaces and diameters in the range 75 to 100 nm. 3. Envisioning an enzymatic Diels-Alder reaction by in situ acid-base catalyzed diene generation. PubMed Linder, Mats; Johansson, Adam Johannes; Manta, Bianca; Olsson, Philip; Brinck, Tore 2012-06-01 We present and evaluate a new and potentially efficient route for enzyme-mediated Diels-Alder reactions, utilizing general acid-base catalysis. The viability of employing the active site of ketosteroid isomerase is demonstrated. 4. Going Beyond, Going Further: The Preparation of Acid-Base Titration Curves. ERIC Educational Resources Information Center McClendon, Michael 1984-01-01 Background information, list of materials needed, and procedures used are provided for a simple technique for generating mechanically plotted acid-base titration curves. The method is suitable for second-year high school chemistry students. (JN) 5. Ultrastructural observation of the acid-base resistant zone of all-in-one adhesives using three different acid-base challenges. PubMed Tsujimoto, Miho; Nikaido, Toru; Inoue, Go; Sadr, Alireza; Tagami, Junji 2010-11-01 The aim of this study was to analyze the ultrastructure of the dentin-adhesive interface using two all-in-one adhesive systems (Clearfil Tri-S Bond, TB; Tokuyama Bond Force, BF) after different acid-base challenges. Three solutions were used as acidic solutions for the acid-base challenges: a demineralizing solution (DS), a phosphoric acid solution (PA), and a hydrochloric acid solution (HCl). After the acid-base challenges, the bonded interfaces were examined by scanning electron microscopy. The acid-base resistant zone (ABRZ) created in PA and HCl was thinner than that in DS for both adhesive systems. For BF adhesive, an eroded area was observed beneath the ABRZ after immersion in PA and HCl, but not in DS.
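The mechanically plotted acid-base titration curves of entry 4 can also be generated numerically; a minimal strong-acid/strong-base sketch from charge balance plus the water ionic product (concentrations below are illustrative):

```python
import math

KW = 1e-14  # ionic product of water at 25 degC

def strong_acid_base_pH(Ca, Va, Cb, Vb):
    """pH during titration of Va litres of strong acid (molarity Ca) with
    Vb litres of strong base (molarity Cb). From the charge balance
    [H+] + [Na+] = [OH-] + [Cl-] and [H+][OH-] = KW, [H+] solves
    h^2 - d*h - KW = 0, with d the excess acid anion concentration."""
    V = Va + Vb
    d = (Ca * Va - Cb * Vb) / V
    h = (d + math.sqrt(d * d + 4 * KW)) / 2.0
    return -math.log10(h)

# A few points along the curve: 25 mL of 0.1 M HCl titrated with 0.1 M NaOH
for vb_mL in (0, 12.5, 25, 37.5):
    print(vb_mL, round(strong_acid_base_pH(0.1, 0.025, 0.1, vb_mL / 1000), 2))
```

The curve starts near pH 1, passes through pH 7 at the equivalence point (25 mL), and jumps above pH 12 just past it, the familiar sigmoidal shape.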
Conversely for TB adhesive, the eroded area was observed only after immersion in PA. In conclusion, although the ABRZ was observed for both all-in-one adhesive systems, its morphological features were influenced by the ingredients of both the adhesive material and acidic solution. 6. Constants and Variables of Nature SciTech Connect Sean Carroll 2009-04-03 It is conventional to imagine that the various parameters which characterize our physical theories, such as the fine structure constant or Newton’s gravitational constant, are truly “constant”, in the sense that they do not change from place to place or time to time. Recent developments in both theory and observation have led us to re-examine this assumption, and to take seriously the possibility that our supposed constants are actually gradually changing. I will discuss why we might expect these parameters to vary, and what observation and experiment have to say about the issue. 7. Enthalpies of formation of rare earths and actinide(III) hydroxides: Their acid-base relationships and estimation of their thermodynamic properties SciTech Connect 1991-12-31 This paper reviews the literature on rare earth(III) and actinide(III) hydroxide thermodynamics, in particular the determination of their enthalpies of formation at 25 °C. The hydroxide unit-cell volumes, lanthanide/actinide ion sizes, and solid-solution stability trends have been correlated with a generalized acid-base strength model for oxides to estimate properties for heterogeneous equilibria that are relevant to nuclear waste modeling and to characterization of potential actinide environmental interactions. Enthalpies of formation and solubility-product constants of actinide(III) hydroxides are estimated. 9. Near equilibrium distributions for beams with space charge in linear and nonlinear periodic focusing systems SciTech Connect Sonnad, Kiran G.; Cary, John R. 2015-04-15 A procedure to obtain a near equilibrium phase space distribution function has been derived for beams with space charge effects in a generalized periodic focusing transport channel. The method utilizes the Lie transform perturbation theory to canonically transform to slowly oscillating phase space coordinates. The procedure results in transforming the periodic focusing system to a constant focusing one, where equilibrium distributions can be found. Transforming back to the original phase space coordinates yields an equilibrium distribution function corresponding to a constant focusing system along with perturbations resulting from the periodicity in the focusing. Examples used here include linear and nonlinear alternating gradient focusing systems. It is shown that the nonlinear focusing components can be chosen such that the system is close to integrability. The equilibrium distribution functions are numerically calculated, and their properties associated with the corresponding focusing system are discussed. 10.
Equilibrium & Nonequilibrium Fluctuation Effects in Biopolymer Networks Kachan, Devin Michael Fluctuation-induced interactions are an important organizing principle in a variety of soft matter systems. In this dissertation, I explore the role of both thermal and active fluctuations within cross-linked polymer networks. The systems I study are in large part inspired by the amazing physics found within the cytoskeleton of eukaryotic cells. I first predict and verify the existence of a thermal Casimir force between cross-linkers bound to a semi-flexible polymer. The calculation is complicated by the appearance of second-order derivatives in the bending Hamiltonian for such polymers, which requires a careful evaluation of the path integral formulation of the partition function in order to arrive at the physically correct continuum limit and properly address ultraviolet divergences. I find that cross-linkers interact along a filament with an attractive logarithmic potential proportional to thermal energy. The proportionality constant depends on whether and how the cross-linkers constrain the relative angle between the two filaments to which they are bound. The interaction has important implications for the synthesis of biopolymer bundles within cells. I model the cross-linkers as existing in two phases: bound to the bundle and free in solution. When the cross-linkers are bound, they behave as a one-dimensional gas of particles interacting with the Casimir force, while the free phase is a simple ideal gas. Demanding equilibrium between the two phases, I find a discontinuous transition between a sparsely and a densely bound bundle. This discontinuous condensation transition induced by the long-ranged nature of the Casimir interaction allows for a similarly abrupt structural transition in semiflexible filament networks between a low cross-linker density isotropic phase and a higher cross-linker density bundle network.
This work is supported by the results of finite element Brownian dynamics simulations of semiflexible filaments and transient cross-linkers. 11. Phonon Mapping in Flowing Equilibrium Ruff, J. P. C. 2015-03-01 When a material conducts heat, a modification of the phonon population occurs. The equilibrium Bose-Einstein distribution is perturbed towards flowing-equilibrium, for which the distribution function is not analytically known. Here I argue that the altered phonon population can be efficiently mapped over broad regions of reciprocal space, via diffuse x-ray scattering or time-of-flight neutron scattering, while a thermal gradient is applied across a single crystal sample. When compared to traditional transport measurements, this technique offers a superior, information-rich new perspective on lattice thermal conductivity, wherein the band and momentum dependences of the phonon thermal current are directly resolved. The proposed method is benchmarked using x-ray thermal diffuse scattering measurements of single crystal diamond under transport conditions. CHESS is supported by the NSF & NIH/NIGMS via NSF Award DMR-1332208. 12. Punctuated equilibrium comes of age Gould, Stephen Jay; Eldredge, Niles 1993-11-01 The intense controversies that surrounded the youth of punctuated equilibrium have helped it mature to a useful extension of evolutionary theory. As a complement to phyletic gradualism, its most important implications remain the recognition of stasis as a meaningful and predominant pattern within the history of species, and in the recasting of macroevolution as the differential success of certain species (and their descendants) within clades. 13. Thermodynamic equilibrium at heterogeneous pressure Vrijmoed, J. C.; Podladchikov, Y. Y. 2015-07-01 Recent advances in metamorphic petrology point out the importance of grain-scale pressure variations in high-temperature metamorphic rocks.
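The equilibrium Bose-Einstein phonon occupation that entry 11 describes as being perturbed under heat flow is a one-liner to evaluate; a sketch:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J·s
KB = 1.380649e-23       # Boltzmann constant, J/K

def bose_einstein(omega_rad_s, T_K):
    """Equilibrium phonon occupation n(omega, T) = 1 / (exp(hbar*omega / (kB*T)) - 1).
    math.expm1 keeps the low-frequency (classical) limit numerically accurate."""
    x = HBAR * omega_rad_s / (KB * T_K)
    return 1.0 / math.expm1(x)
```

In the classical limit (hbar*omega much less than kB*T) this approaches kB*T / (hbar*omega), and the occupation rises monotonically with temperature at fixed frequency.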
Pressure derived from chemical zonation using unconventional geobarometry based on equal chemical potentials fits mechanically feasible pressure variations. Here, a thermodynamic equilibrium method is presented that predicts chemical zoning as a result of pressure variations by Gibbs energy minimization. Equilibrium thermodynamic prediction of the chemical zoning in the case of pressure heterogeneity is done by constrained Gibbs minimization using linear programming techniques. In addition to constraining the system composition, a certain proportion of the system is constrained at a specified pressure. Input pressure variations need to be discretized, and each discrete pressure defines an additional constraint for the minimization. The Gibbs minimization method provides identical results to a geobarometry approach based on chemical potentials, thus validating the inferred pressure gradient. The thermodynamic consistency of the calculation is supported by the similar result obtained from two different approaches. In addition, the method can be used for multi-component, multi-phase systems of which several applications are given. A good fit to natural observations in multi-phase, multi-component systems demonstrates the possibility to explain phase assemblages and zoning by spatial pressure variations at equilibrium as an alternative to pressure variation in time due to disequilibrium. 14. Long-term equilibrium tides Shaffer, John A.; Cerveny, Randall S. 1998-08-01 Extreme equilibrium tides, or "hypertides," are computed in a new equilibrium tidal model combining algorithms of a version of the Chapront ELP-2000/82 Lunar Theory with the BER78 Milankovitch astronomical expansions. For the recent past, a high correspondence exists between computed semidiurnal tide levels and a record of coastal flooding, demonstrating that astronomical alignment is a potential influence on such flooding.
For the Holocene and near future, maximum tides demonstrate cyclic variations with peaks near 5000 B.P. and 4000 A.P. On the late Quaternary timescale, variations in maximum equilibrium tide level display oscillations with periods of approximately 10,000, 100,000 and 400,000 years, because of precessional shifts in tidal maxima between vernal and autumnal equinoxes. While flooding occurs under the combined effects of tides and storms via "storm surges," the most extensive flooding will occur with the coincidence of storms and the rarer hypertides and is thus primarily influenced by hypertides. Therefore we suggest that astronomical alignment's relationship to coastal flooding is probabilistic rather than deterministic. Data derived from this model are applicable to (1) archaeological and paleoclimatic coastal reconstructions, (2) long-term planning, for example, radioactive waste site selection, (3) sea-level change and paleoestuarine studies, or (4) ocean-meteorological interactions. 15. Radioligand Binding Assays for Determining Dissociation Constants of Phytohormone Receptors. PubMed Hellmuth, Antje; Calderón Villalobos, Luz Irina A 2016-01-01 In receptor-ligand interactions, dissociation constants provide a key parameter for characterizing binding. Here, we describe filter-based radioligand binding assays at equilibrium, either varying ligand concentrations up to receptor saturation or outcompeting ligand from its receptor with increasing concentrations of ligand analogue. Using the auxin coreceptor system, we illustrate how to use a saturation binding assay to determine the apparent dissociation constant (KD') for the formation of a ternary TIR1-auxin-AUX/IAA complex. Also, we show how to determine the inhibitory constant (Ki) for auxin binding by the coreceptor complex via a competition binding assay. These assays can be applied broadly to characterize a one-site binding reaction of a hormone to its receptor. PMID:27424743 16.
A Computationally Efficient Multicomponent Equilibrium Solver for Aerosols (MESA) SciTech Connect Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K. 2005-12-23 This paper describes the development and application of a new multicomponent equilibrium solver for aerosols (MESA) to predict the complex solid-liquid partitioning in atmospheric particles containing H+, NH4+, Na+, Ca2+, SO42-, HSO4-, NO3-, and Cl- ions. The algorithm of MESA involves integrating the set of ordinary differential equations describing the transient precipitation and dissolution reactions for each salt until the system satisfies the equilibrium or mass convergence criteria. Arbitrary values are chosen for the dissolution and precipitation rate constants such that their ratio is equal to the equilibrium constant. Numerically, this approach is equivalent to iterating all the equilibrium reactions simultaneously with a single iteration loop. Because CaSO4 is sparingly soluble, it is assumed to exist as a solid over the entire RH range to simplify the algorithm for calcium-containing particles. Temperature-dependent mutual deliquescence relative humidity polynomials (valid from 240 to 310 K) for all the possible salt mixtures were constructed using the comprehensive Pitzer-Simonson-Clegg (PSC) activity coefficient model at 298.15 K and temperature-dependent equilibrium constants in MESA. Performance of MESA is evaluated for 16 representative mixed-electrolyte systems commonly found in tropospheric aerosols using PSC and two other multicomponent activity coefficient methods, the Multicomponent Taylor Expansion Method (MTEM) of Zaveri et al. [2004] and the widely used Kusik and Meissner method (KM), and the results are compared against the predictions of the Web-based AIM Model III or available experimental data. Excellent agreement was found between AIM, MESA-PSC, and MESA-MTEM predictions of the multistage deliquescence growth as a function of RH.
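MESA's trick of choosing arbitrary dissolution/precipitation rates whose ratio equals the equilibrium constant can be illustrated with a one-salt toy model (my simplification for a 1:1 salt AB, not the MESA algorithm itself):

```python
def equilibrate_salt(total, Ksp, k=1.0, dt=0.01, tol=1e-10, max_steps=200000):
    """Toy relaxation for AB(s) <-> A+ + B-: integrate a net
    precipitation/dissolution rate proportional to ([A][B] - Ksp) until the
    ion product converges to Ksp or the solid is exhausted.
    `total` is total AB (mol/L); `dissolved` is the molarity of each ion.
    Returns (dissolved, solid)."""
    dissolved = total  # start fully dissolved
    solid = 0.0
    for _ in range(max_steps):
        rate = k * (dissolved * dissolved - Ksp)  # >0 precipitates, <0 dissolves
        if rate < 0 and solid <= 0:
            break  # undersaturated with no solid left: already at equilibrium
        # clamp the step so neither reservoir goes negative (mass is conserved)
        step = max(-solid, min(dissolved, rate * dt))
        dissolved -= step
        solid += step
        if abs(rate) < tol:
            break
    return dissolved, solid

d, s = equilibrate_salt(total=2.0, Ksp=1.0)  # converges to dissolved ~ 1, solid ~ 1
```

As in MESA, the kinetic constants here are arbitrary; only their ratio (the equilibrium constant) fixes the final state.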
On the other hand, MESA-KM displayed up to 20% deviations in the mass growth factors for common salt mixtures in the sulfate-poor cases while significant discrepancies were found in the predicted multistage 17. Thermodynamics of sodium dodecyl sulphate-salicylic acid based micellar systems and their potential use in fruits postharvest. PubMed Cid, A; Morales, J; Mejuto, J C; Briz-Cid, N; Rial-Otero, R; Simal-Gándara, J 2014-05-15 Micellar systems have excellent food applications due to their capability to solubilise a large range of hydrophilic and hydrophobic substances. In this work, the mixed micelle formation between the ionic surfactant sodium dodecyl sulphate (SDS) and the phenolic acid salicylic acid have been studied at several temperatures in aqueous solution. The critical micelle concentration and the micellization degree were determined by conductometric techniques and the experimental data used to calculate several useful thermodynamic parameters, like standard free energy, enthalpy and entropy of micelle formation. Salicylic acid helps the micellization of SDS, both by increasing the additive concentration at a constant temperature and by increasing temperature at a constant concentration of additive. The formation of micelles of SDS in the presence of salicylic acid was a thermodynamically spontaneous process, and is also entropically controlled. Salicylic acid plays the role of a stabilizer, and gives a pathway to control the three-dimensional water matrix structure. The driving force of the micellization process is provided by the hydrophobic interactions. The isostructural temperature was found to be 307.5 K for the mixed micellar system. This article explores the use of SDS-salicylic acid based micellar systems for their potential use in fruits postharvest. 18. 
The Importance of the Ionic Product for Water to Understand the Physiology of the Acid-Base Balance in Humans PubMed Central Adeva-Andany, María M.; Carneiro-Freire, Natalia; Donapetry-García, Cristóbal; Rañal-Muíño, Eva; López-Pereiro, Yosua 2014-01-01 Human plasma is an aqueous solution that has to abide by chemical rules such as the principle of electrical neutrality and the constancy of the ionic product for water. These rules define the acid-base balance in the human body. According to the electroneutrality principle, plasma has to be electrically neutral and the sum of its cations equals the sum of its anions. In addition, the ionic product for water has to be constant. Therefore, the plasma concentration of hydrogen ions depends on the plasma ionic composition. Variations in the concentration of plasma ions that alter the relative proportion of anions and cations predictably lead to a change in the plasma concentration of hydrogen ions by driving adaptive adjustments in water ionization that allow plasma electroneutrality while maintaining constant the ionic product for water. The accumulation of plasma anions out of proportion of cations induces an electrical imbalance compensated by a fall of hydroxide ions that brings about a rise in hydrogen ions (acidosis). By contrast, the deficiency of chloride relative to sodium generates plasma alkalosis by increasing hydroxide ions. The adjustment of plasma bicarbonate concentration to these changes is an important compensatory mechanism that protects plasma pH from severe deviations. PMID:24877130 20. Varying Constants, Gravitation and Cosmology Uzan, Jean-Philippe 2011-12-01 Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This will induce a violation of the universality of free fall. Thus, it is of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We detail the relations between the constants, the tests of the local position invariance and of the universality of free fall.
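The two rules the ionic-product abstract builds on, electroneutrality and a constant ionic product for water, determine [H+] directly; a sketch for a simple strong-ion solution (25 °C value of Kw; plasma would use an effective value at 37 °C):

```python
import math

def hydrogen_ion(sid_molar, Kw=1.0e-14):
    """[H+] (mol/L) of a strong-ion solution from electroneutrality plus the
    ionic product for water: SID + [H+] - [OH-] = 0 and [H+][OH-] = Kw,
    where SID is the strong-ion difference (strong cations minus strong
    anions). Solving the quadratic for [H+]:"""
    return (-sid_molar + math.sqrt(sid_molar ** 2 + 4 * Kw)) / 2.0

print(-math.log10(hydrogen_ion(0.0)))    # pure water: pH 7.00
print(-math.log10(hydrogen_ion(-1e-4)))  # 0.1 mM excess strong anions: pH ~ 4 (acidosis)
print(-math.log10(hydrogen_ion(1e-4)))   # 0.1 mM excess strong cations: pH ~ 10 (alkalosis)
```

This reproduces the abstract's qualitative claims: an anion excess lowers [OH-] and raises [H+], a relative chloride deficit does the opposite.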
We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence with respect to the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying and we focus on the unification mechanisms and the relations between the variation of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with. 1. Equilibrium and kinetics in metamorphism Pattison, D. R. 2012-12-01 The equilibrium model for metamorphism is founded on the metamorphic facies principle, the repeated association of the same mineral assemblages in rocks of different bulk composition that have been metamorphosed together. Yet, for any metamorphic process to occur, there must be some degree of reaction overstepping (disequilibrium) to initiate reaction. The magnitude and variability of overstepping, and the degree to which it is either a relatively minor wrinkle or a more substantive challenge to the interpretation of metamorphic rocks using the equilibrium model, is an active area of current research. Kinetic barriers to reaction generally diminish with rising temperature due to the Arrhenius relation. In contrast, the rate of build-up of the macroscopic energetic driving force needed to overcome kinetic barriers to reaction, reaction affinity, does not vary uniformly with temperature, instead varying from reaction to reaction. 
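The Arrhenius relation invoked above for why kinetic barriers diminish with rising temperature is k = A exp(-Ea/RT); a sketch with an assumed (illustrative) barrier height:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_rate(A, Ea_J_mol, T_K):
    """Arrhenius rate constant k = A * exp(-Ea / (R T))."""
    return A * math.exp(-Ea_J_mol / (R * T_K))

# For an assumed 80 kJ/mol barrier, heating from 500 K to 600 K
# speeds the reaction up by roughly a factor of 25:
speedup = arrhenius_rate(1.0, 80e3, 600.0) / arrhenius_rate(1.0, 80e3, 500.0)
print(speedup)
```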
High-entropy reactions that release large quantities of H2O build up reaction affinity more rapidly than low-entropy reactions that release little or no H2O, such that the former are expected to be overstepped less than the latter. Some consequences include: (1) metamorphic reaction intervals may be discrete rather than continuous, initiating at the point that sufficient reaction affinity has built up to overcome kinetic barriers; (2) metamorphic reaction intervals may not correspond in a simple way to reaction boundaries in an equilibrium phase diagram; (3) metamorphic reactions may involve metastable reactions; (4) metamorphic 'cascades' are possible, in which stable and metastable reactions involving the same reactant phases may proceed simultaneously; and (5) fluid generation, and possibly fluid presence in general, may be episodic rather than continuous, corresponding to discrete intervals of reaction. These considerations bear on the interpretation of P-T-t paths from metamorphic mineral assemblages and textures. The success of the 2. Sorption: Equilibrium partitioning and QSAR development using molecular predictors SciTech Connect Means, J.C. 1994-12-31 Sorption of chemical contaminants to sediments and soils has long been a subject of intensive investigation and QSAR development. Progress in the development of organic carbon-normalized equilibrium partition constants (Koc) has greatly advanced the prediction of environmental fate. Integration of observed experimental results with thermodynamic modeling of compound behavior, based upon concepts of phase activities and fugacity, has placed these QSARs on a firm theoretical base. An increasing spectrum of compound properties such as solubility, chemical activity, molecular surface area and other molecular topological indices have been evaluated for their utility as predictors of sorption properties.
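At equilibrium, the Koc framework of entry 2 reduces to a linear partition relation between dissolved and sorbed phases; a sketch with illustrative numbers (not from the abstract):

```python
def sorbed_concentration(Koc_L_kg, f_oc, Cw_mg_L):
    """Equilibrium sorbed concentration (mg/kg solid) from the organic
    carbon-normalized partition constant: Kd = Koc * f_oc, then Cs = Kd * Cw,
    where f_oc is the mass fraction of organic carbon in the sorbent."""
    Kd = Koc_L_kg * f_oc
    return Kd * Cw_mg_L

# e.g. a hydrophobic compound with Koc = 1e4 L/kg in a sediment with
# 2% organic carbon, at 5 ug/L dissolved:
cs = sorbed_concentration(1e4, 0.02, 0.005)
print(cs)  # 1.0 mg/kg sorbed
```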
Questions concerning the effects of nonequilibrium states, hysteresis or irreversibility in desorption kinetics and equilibria, and particle-concentration effects upon equilibrium constants as they affect fate predictions remain areas of contemporary investigation. These phenomena are considered and reviewed. Modifying factors such as salinity or the presence of co-solvents may alter the predicted fate of a compound. Competitive sorption with mobile microparticulate or colloidal phases may also impact QSAR predictions. Research on the role of both inorganic and organic-rich colloidal phases as a modifying influence on soil/sediment equilibrium partitioning theory is summarized. 3. Synthesis of crystalline americium hydroxide, Am(OH)₃, and determination of its enthalpy of formation; estimation of the solubility-product constants of actinide(III) hydroxides SciTech Connect 1993-12-31 This paper reports a new synthesis of pure, microcrystalline Am(OH)₃, its characterization by x-ray powder diffraction and infrared spectroscopy, and the calorimetric determination of its enthalpy of solution in dilute hydrochloric acid. From the enthalpy of solution the enthalpy of formation of Am(OH)₃ has been calculated to be −1371.2 ± 7.9 kJ·mol⁻¹, which represents the first experimental determination of an enthalpy of formation of any actinide hydroxide. The free energy of formation and solubility product constant of Am(OH)₃ (Ksp = 7 × 10⁻³¹) have been calculated from our enthalpy of formation and entropy estimates and are compared with literature measurements under near-equilibrium conditions. Since many properties of the tripositive lanthanide and actinide ions (e.g., hydrolysis, complex-ion formation, and thermochemistry) change in a regular manner, these properties can be interpreted systematically in terms of ionic size.
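The estimated solubility product above fixes the equilibrium Am3+ concentration over the solid at a given pH; a sketch that deliberately neglects hydrolysis and complexation (so it is an upper-level illustration, not a full speciation model):

```python
def am3_solubility(Ksp, pH, Kw=1e-14):
    """Equilibrium [Am3+] (mol/L) over Am(OH)3(s) at fixed pH, from
    Ksp = [Am3+][OH-]^3, with [OH-] = Kw / [H+].
    Hydrolysis and carbonate complexation are ignored in this sketch."""
    oh = Kw / 10 ** (-pH)
    return Ksp / oh ** 3

# Using the abstract's estimate Ksp = 7e-31 at neutral pH:
print(am3_solubility(7e-31, 7.0))  # ~7e-10 mol/L
```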
This paper compares the thermochemistry of Am(OH)₃ with thermochemical studies of lanthanide hydroxides. A combined structural and acid-base model is used to explain the systematic differences in enthalpies of solution between the oxides and hydroxides of the 4fⁿ and 5fⁿ subgroups and to predict solubility-product constants for the actinide(III) hydroxides of Pu through Cf.

4. Synthesis, structure and study of azo-hydrazone tautomeric equilibrium of 1,3-dimethyl-5-(arylazo)-6-amino-uracil derivatives
Debnath, Diptanu; Roy, Subhadip; Li, Bing-Han; Lin, Chia-Her; Misra, Tarun Kumar
2015-04-01
Azo dyes, 1,3-dimethyl-5-(arylazo)-6-aminouracil (aryl = -C6H5 (1), -p-CH3C6H4 (2), -p-ClC6H4 (3), -p-NO2C6H4 (4)) were prepared and characterized by UV-vis, FT-IR, ¹H NMR, and ¹³C NMR spectroscopic techniques and by single crystal X-ray crystallographic analysis. Spectroscopic analysis evidences that, of the tautomeric forms, the azo-enamine-keto (A) form predominates in the solid state, whereas in different solvents it is the hydrazone-imine-keto (B) form. The study also reveals that the hydrazone-imine-keto (B) form exists in an equilibrium mixture with its anionic form in various organic solvents. The solvatochromic and photophysical properties of the dyes in various solvents with different hydrogen bonding parameters were investigated. The dyes exhibit a positive solvatochromic property on moving from polar protic to polar aprotic solvents. They are fluorescence-active molecules and exhibit an intense fluorescence peak in some solvents, such as DMSO and DMF. It has been demonstrated that the anionic form of the hydrazone-imine tautomer is responsible for this intense fluorescence. In addition, the acid-base equilibrium between the neutral and anionic forms of the hydrazone-imine tautomer in buffer solutions of varying pH was investigated, and the pKa values of the dyes were evaluated by UV-vis spectroscopic methods.
The determined acid dissociation constant (pKa) values increase according to the sequence 2 > 1 > 3 > 4.

5. Synthesis, structure and study of azo-hydrazone tautomeric equilibrium of 1,3-dimethyl-5-(arylazo)-6-amino-uracil derivatives.
PubMed
Debnath, Diptanu; Roy, Subhadip; Li, Bing-Han; Lin, Chia-Her; Misra, Tarun Kumar
2015-04-01

6. Oxygen affinity of haemoglobin and red cell acid-base status in patients with severe chronic obstructive lung disease.
PubMed
Huckauf, H; Schäfer, J H; Kollo, D
1976-01-01
The oxygen affinity of hemoglobin and the factors determining the position of the oxygen dissociation curve were investigated in twenty-five patients with severe chronic obstructive lung disease. Patients were separated into three groups: group I showed a normal or mildly decreased PaO2, group II a moderate fall in arterial oxygen pressure, and group III severe hypoxia with balanced acid-base equilibrium and hypercapnia. Blood hemoglobin exhibited a significant increase in all groups, indicating improved oxygen transport. In most patients a leftward shift of the oxygen dissociation curve occurred. The tendency toward left-shifting is attributed to alkalosis inside the red cells, demonstrated in all groups studied. 2,3-Diphosphoglycerate showed no close relation to the evaluated oxygen affinity of hemoglobin. The evidence for an increased oxygen affinity may reveal a further compensatory mechanism of oxygen transport in patients with pulmonary disorders. Additionally, the alkalosis inside the cells may counterbalance too great a rightward shift of the oxygen dissociation curve in vivo when severe hypoxia and hypercapnia occur. PMID:13884

7. Constant fields and constant gradients in open ionic channels.
PubMed
Chen, D P; Barcilon, V; Eisenberg, R S
1992-05-01
Ions enter cells through pores in proteins that are holes in dielectrics. The energy of interaction between an ion and the charge induced on the dielectric is many kT, and so the dielectric properties of channel and pore are important. We describe ionic movement by (three-dimensional) Nernst-Planck equations (including flux and net charge). Potential is described by Poisson's equation in the pore and Laplace's equation in the channel wall, allowing induced but not permanent charge. Asymptotic expansions are constructed exploiting the long narrow shape of the pore and the relatively high dielectric constant of the pore's contents.
The resulting one-dimensional equations can be integrated numerically; they can be analyzed when channels are short or long (compared with the Debye length). Traditional constant-field equations are derived if the induced charge is small, e.g., if the channel is short or if the total concentration gradient is zero. A constant gradient of concentration is derived if the channel is long. Plots directly comparable to experiments are given of current vs. voltage, reversal potential vs. concentration, and slope conductance vs. concentration. This dielectric theory can easily be tested: its parameters can be determined by traditional constant-field measurements. The dielectric theory then predicts current-voltage relations quite different from constant field, usually more linear, when gradients of total concentration are imposed. Numerical analysis shows that the interaction of ion and channel can be described by a mean potential if, but only if, the induced charge is negligible, that is to say, the electric field is spatially constant.

8. Constant fields and constant gradients in open ionic channels.
PubMed Central
Chen, D P; Barcilon, V; Eisenberg, R S
1992-01-01
PMID:1376159

9. Torque equilibrium attitude control for Skylab reentry
NASA Technical Reports Server (NTRS)
Glaese, J. R.; Kennel, H. F.
1979-01-01
All the available torque equilibrium attitudes (most were useless from the standpoint of lack of electrical power) and the equilibrium-seeking method are presented, as well as the actual successful application during the 3 weeks prior to Skylab reentry.

10. GEOMETRIC PROGRAMMING, CHEMICAL EQUILIBRIUM, AND THE ANTI-ENTROPY FUNCTION
PubMed Central
Duffin, R. J.; Zener, C.
1969-01-01
The culmination of this paper is the following duality principle of thermodynamics: maximum S = minimum S*. (1) The left side of relation (1) is the classical characterization of equilibrium. It says to maximize the entropy function S with respect to extensive variables which are subject to certain constraints. The right side of (1) is a new characterization of equilibrium and concerns minimization of an anti-entropy function S* with respect to intensive variables.
Relation (1) is applied to the chemical equilibrium of a mixture of gases at constant temperature and volume. Then (1) specializes to minimum F = maximum F*, (2) where F is the Helmholtz free-energy function and F* is an anti-Helmholtz function. The right side of (2) is an unconstrained maximization problem and gives a simplified practical procedure for calculating equilibrium concentrations. We also give a direct proof of (2) by the duality theorem of geometric programming. The duality theorem of geometric programming states that minimum cost = maximum anti-cost. (3) PMID:16591769

11. Full characterization of GPCR monomer-dimer dynamic equilibrium by single molecule imaging.
PubMed
Kasai, Rinshi S; Suzuki, Kenichi G N; Prossnitz, Eric R; Koyama-Honda, Ikuko; Nakada, Chieko; Fujiwara, Takahiro K; Kusumi, Akihiro
2011-02-01
Receptor dimerization is important for many signaling pathways. However, the monomer-dimer equilibrium has never been fully characterized for any receptor with a 2D equilibrium constant as well as association/dissociation rate constants (termed super-quantification). Here, we determined the dynamic equilibrium for the N-formyl peptide receptor (FPR), a chemoattractant G protein-coupled receptor (GPCR), in live cells at 37°C by developing a single fluorescent-molecule imaging method. Both before and after liganding, the dimer-monomer 2D equilibrium is unchanged, giving an equilibrium constant of 3.6 copies/µm², with dissociation and 2D association rate constants of 11.0 s⁻¹ and 3.1 copies/µm²·s⁻¹, respectively. At physiological expression levels of ∼2.1 receptor copies/µm² (∼6,000 copies/cell), monomers continually convert into dimers every 150 ms, dimers dissociate into monomers in 91 ms, and at any moment, 2,500 and 3,500 receptor molecules participate in transient dimers and monomers, respectively. Not only do FPR dimers fall apart rapidly, but FPR monomers also convert into dimers very quickly.

12.
Influence of substituent on equilibrium of benzoxazine synthesis from Mannich base and formaldehyde.
PubMed
Deng, Yuyuan; Zhang, Qin; Zhou, Qianhao; Zhang, Chengxi; Zhu, Rongqi; Gu, Yi
2014-09-14
N-Substituted aminomethylphenol (Mannich base) and 3,4-dihydro-2H-3-substituted 1,3-benzoxazine (benzoxazine) were synthesized from substituted phenols (p-cresol, phenol, p-chlorophenol), substituted anilines (p-toluidine, aniline, p-chloroaniline) and formaldehyde to study the influence of the substituent on the equilibrium of benzoxazine synthesis from Mannich base and formaldehyde. ¹H-NMR and the charges of the nitrogen and oxygen atoms illustrate the effect of the substituent on the reactivity of the Mannich base, while oxazine ring stability is characterized by differential scanning calorimetry (DSC) and the C-O bond order. Equilibrium constants were measured from 50 °C to 80 °C, and the results show that a substituent attached to the phenol or the aniline has the same impact on the reactivity of the Mannich base; however, it has the opposite influence on oxazine ring stability and the equilibrium constant. Compared with the phenol-aniline system, an electron-donating methyl group on the phenol or the aniline increases the charge of the nitrogen and oxygen atoms in the Mannich base. When the methyl group is located at the para position of the phenol, oxazine ring stability increases and the equilibrium constant climbs, whereas when the methyl group is located at the para position of the aniline, oxazine ring stability decreases, benzoxazine hydrolysis tends to occur, and the equilibrium constant is significantly low.

13. Effective cosmological constant induced by stochastic fluctuations of Newton's constant
de Cesare, Marco; Lizzi, Fedele; Sakellariadou, Mairi
2016-09-01
We consider implications of the microscopic dynamics of spacetime for the evolution of cosmological models. We argue that quantum geometry effects may lead to stochastic fluctuations of the gravitational constant, which is thus considered as a macroscopic effective dynamical quantity.
Consistency with Riemannian geometry entails the presence of a time-dependent dark energy term in the modified field equations, which can be expressed in terms of the dynamical gravitational constant. We suggest that the late-time accelerated expansion of the Universe may be ascribed to quantum fluctuations in the geometry of spacetime rather than the vacuum energy from the matter sector.

14. The influence of dissolved organic matter on the acid-base system of the Baltic Sea: A pilot study
Kulinski, Karol; Schneider, Bernd; Hammer, Karoline; Schulz-Bull, Detlef
2015-04-01
To assess the influence of dissolved organic matter (DOM) on the acid-base system of the Baltic Sea, 19 stations along the salinity gradient from Mecklenburg Bight to the Bothnian Bay were sampled in November 2011 for total alkalinity (AT), total inorganic carbon concentration (CT), partial pressure of CO2 (pCO2), and pH. Based on these data, an organic alkalinity contribution (Aorg) was determined, defined as the difference between measured AT and the inorganic alkalinity calculated from CT and pH and/or CT and pCO2. Aorg was in the range of 22-58 µmol kg⁻¹, corresponding to 1.5-3.5% of AT. The method to determine Aorg was validated in an experiment performed on DOM-enriched river water samples collected from the mouths of the Vistula and Oder Rivers in May 2012. The Aorg increase determined in that experiment correlated directly with the increase of DOC concentration caused by enrichment of the >1 kDa DOM fraction. To examine the effect of Aorg on calculations of the marine CO2 system, the pCO2 and pH values measured in Baltic Sea water were compared with calculated values that were based on the measured alkalinity and another variable of the CO2 system, but ignored the existence of Aorg. Large differences between measured and calculated pCO2 and pH were obtained when the computations were based on AT and CT.
The calculated pCO2 was 27-56% lower than the measured values, whereas the calculated pH was overestimated by more than 0.4 pH units. Since biogeochemical models are based on the transport and transformations of AT and CT, the acid-base properties of DOM should be included in calculations of the CO2 system in DOM-rich basins like the Baltic Sea. In view of our limited knowledge about the composition and acid/base properties of DOM, this is best achieved using a bulk dissociation constant, KDOM, that represents all weakly acidic functional groups present in DOM. Our preliminary results indicated that the bulk KDOM in the Baltic Sea is 2.94 × 10⁻⁸ mol kg⁻¹.

15. Particle orbits in two-dimensional equilibrium models for the magnetotail
NASA Technical Reports Server (NTRS)
Karimabadi, H.; Pritchett, P. L.; Coroniti, F. V.
1990-01-01
Assuming that there exists an equilibrium state for the magnetotail, particle orbits are investigated in two-dimensional kinetic equilibrium models for the magnetotail. Particle orbits in the equilibrium field are compared with those calculated earlier with one-dimensional models, where the main component of the magnetic field (Bx) was approximated as either a hyperbolic tangent or a linear function of z, with the normal field (Bz) assumed to be constant. It was found that the particle orbits calculated with the two types of models are significantly different, mainly due to the neglect of the variation of Bx with x in the one-dimensional fields.

16. Resonant behaviour of MHD waves on magnetic flux tubes. III - Effect of equilibrium flow
NASA Technical Reports Server (NTRS)
Goossens, Marcel; Hollweg, Joseph V.; Sakurai, Takashi
1992-01-01
The Hollweg et al. (1990) analysis of MHD surface waves in a stationary equilibrium is extended. The conservation laws and jump conditions at Alfven and slow resonance points obtained by Sakurai et al.
(1990) are generalized to include an equilibrium flow, and the assumption that the Eulerian perturbation of total pressure is constant is recovered as the special case of the conservation law for an equilibrium with straight magnetic field lines and flow along the magnetic field lines. It is shown that the conclusions formulated by Hollweg et al. are still valid for the straight cylindrical case. The effect of curvature is examined.

17. Accelerating Multiagent Reinforcement Learning by Equilibrium Transfer.
PubMed
Hu, Yujing; Gao, Yang; An, Bo
2015-07-01
An important approach in multiagent reinforcement learning (MARL) is equilibrium-based MARL, which adopts equilibrium solution concepts in game theory and requires agents to play equilibrium strategies at each state. However, most existing equilibrium-based MARL algorithms cannot scale due to a large number of computationally expensive equilibrium computations (e.g., computing Nash equilibria is PPAD-hard) during learning. For the first time, this paper finds that during the learning process of equilibrium-based MARL, the one-shot games corresponding to each state's successive visits often have the same or similar equilibria (for some states more than 90% of games corresponding to successive visits have similar equilibria). Inspired by this observation, this paper proposes to use equilibrium transfer to accelerate equilibrium-based MARL. The key idea of equilibrium transfer is to reuse previously computed equilibria when each agent has a small incentive to deviate. By introducing transfer loss and transfer condition, a novel framework called equilibrium transfer-based MARL is proposed. We prove that although equilibrium transfer brings transfer loss, equilibrium-based MARL algorithms can still converge to an equilibrium policy under certain assumptions.
Experimental results in widely used benchmarks (e.g., grid world game, soccer game, and wall game) show that the proposed framework: 1) not only significantly accelerates equilibrium-based MARL (up to 96.7% reduction in learning time), but also achieves higher average rewards than algorithms without equilibrium transfer; and 2) scales significantly better than algorithms without equilibrium transfer when the state/action space grows and the number of agents increases.

18. Clinical assessment of acid-base status. Strong ion difference theory.
PubMed
Constable, P D
1999-11-01
The traditional approach to evaluating acid-base balance uses the Henderson-Hasselbalch equation to categorize four primary acid-base disturbances: respiratory acidosis (increased PCO2), respiratory alkalosis (decreased PCO2), metabolic acidosis (decreased extracellular base excess), or metabolic alkalosis (increased extracellular base excess). The anion gap is calculated to detect the presence of unidentified anions in plasma. This approach works well clinically and is recommended for use whenever serum total protein, albumin, and phosphate concentrations are approximately normal; however, when their concentrations are markedly abnormal, the Henderson-Hasselbalch equation frequently provides erroneous conclusions as to the cause of an acid-base disturbance. Moreover, the Henderson-Hasselbalch approach is more descriptive than mechanistic. The new approach to evaluating acid-base balance uses the simplified strong ion model to categorize eight primary acid-base disturbances: respiratory acidosis (increased PCO2), respiratory alkalosis (decreased PCO2), strong ion acidosis (decreased [SID+]) or strong ion alkalosis (increased [SID+]), nonvolatile buffer ion acidosis (increased [ATOT]) or nonvolatile buffer ion alkalosis (decreased [ATOT]), and temperature acidosis (increased body temperature) or temperature alkalosis (decreased body temperature).
The strong ion gap is calculated to detect the presence of unidentified anions in plasma. This simplified strong ion approach works well clinically and is recommended for use whenever serum total protein, albumin, and phosphate concentrations are markedly abnormal. The simplified strong ion approach is mechanistic and is therefore well suited for describing the cause of any acid-base disturbance. The new approach should therefore be valuable in a clinical setting and in research studies investigating acid-base balance. The presence of unmeasured strong ions in plasma or serum (such as lactate, ketoacids, and uremic anions

19. Out-of-equilibrium relaxation of the thermal Casimir effect in a model polarizable material.
PubMed
Dean, David S; Démery, Vincent; Parsegian, V Adrian; Podgornik, Rudolf
2012-03-01
Relaxation of the thermal Casimir or van der Waals force (the high-temperature limit of the Casimir force) for a model dielectric medium is investigated. We start with a model of interacting polarization fields with a dynamics that leads to a frequency-dependent dielectric constant of the Debye form. In the static limit, the usual zero-frequency Matsubara mode component of the Casimir force is recovered. We then consider the out-of-equilibrium relaxation of the van der Waals force to its equilibrium value when two initially uncorrelated dielectric bodies are brought into sudden proximity. For the interaction between dielectric slabs, it is found that the spatial dependence of the out-of-equilibrium force is the same as the equilibrium one, but it has a time-dependent amplitude, or Hamaker coefficient, which increases in time to its equilibrium value. The final relaxation of the force to its equilibrium value is exponential in systems with a single or a finite number of polarization field relaxation times.
However, in systems such as those described by the Havriliak-Negami dielectric constant, with a broad distribution of relaxation times, we observe a much slower power-law decay to the equilibrium value.

20. Optical constants of solid methane
NASA Technical Reports Server (NTRS)
Khare, Bishun N.; Thompson, W. R.; Sagan, C.; Arakawa, E. T.; Bruel, C.; Judish, J. P.; Khanna, R. K.; Pollack, J. B.
1989-01-01
Methane is the most abundant simple organic molecule in the outer solar system bodies. In addition to being a gaseous constituent of the atmospheres of the Jovian planets and Titan, it is present in the solid form as a constituent of icy surfaces such as those of Triton and Pluto, and as cloud condensate in the atmospheres of Titan, Uranus, and Neptune. It is expected in the liquid form as a constituent of the ocean of Titan. Cometary ices also contain solid methane. The optical constants for both solid and liquid phases of CH4 over a wide temperature range are needed for radiative transfer calculations, for studies of reflection from surfaces, and for modeling of emission in the far infrared and microwave regions. The astronomically important visual to near-infrared measurements of solid methane optical constants are conspicuously absent from the literature. Preliminary results are presented of the optical constants of solid methane for the 0.4 to 2.6 micron region. The imaginary part k is reported for both the amorphous and the crystalline (annealed) states. Using the previously measured values of the real part of the refractive index, n, of liquid methane at 110 K, n is computed for solid methane using the Lorentz-Lorenz relationship. Work is in progress to extend the measurements of optical constants n and k for liquid and solid to both shorter and longer wavelengths, eventually providing a complete optical constants database for condensed CH4.
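The Lorentz-Lorenz scaling mentioned in the last abstract can be sketched numerically. This is an illustration only: the refractive index and density values below are rough assumed placeholders, not data from the paper.

```python
# Lorentz-Lorenz relation: (n^2 - 1) / (n^2 + 2) is proportional to density
# for a fixed composition, so n for the solid can be estimated from n for
# the liquid and the two densities. All numbers here are assumed values.
n_liquid = 1.27      # assumed real refractive index of liquid CH4 near 110 K
rho_liquid = 0.42    # g/cm^3, assumed liquid density
rho_solid = 0.49     # g/cm^3, assumed solid density

ll_liquid = (n_liquid**2 - 1) / (n_liquid**2 + 2)
ll_solid = ll_liquid * rho_solid / rho_liquid          # scale by density ratio
# Invert (n^2 - 1)/(n^2 + 2) = L  =>  n^2 = (1 + 2L) / (1 - L)
n_solid = ((1 + 2 * ll_solid) / (1 - ll_solid)) ** 0.5
```

Because the solid is denser than the liquid, the estimated n_solid comes out slightly above n_liquid, which is the qualitative behavior the abstract relies on.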
http://crypto.stackexchange.com/questions/8437/what-is-the-fastest-elliptic-curve-operation-fp-in-affine-coordinates-such-tha?answertab=oldest
# What is the fastest elliptic curve operation f(P) in affine coordinates such that f^n(P)=P only if n is large?

I'm working with the affine representations of points of the secp256k1 elliptic curve (from Bitcoin). I've read many papers showing that some functions, like $f(P)=3P$, can be computed faster than by the standard method. Other papers say that, with some pre-computation, the field inversion can be amortized if $F^1(P) ... F^k(P)$ must be computed.

I need the fastest function $F(P)$ that, when applied iteratively to the last result, generates a sequence of points whose average period is large (I don't need any proof; it can just be large in practice). To be fast, I suppose it should be computed without field inversions. I don't mind pre-computing some values. For example, it could be $F(P) = 1.5P+4Q$ for a fixed $Q$. It doesn't matter which function it is, because I need it to generate random points on the curve. The probability distribution doesn't matter either. (Notation: $1.5P$ denotes the point halving of $3P$.)

Motivation: solutions to this problem may be helpful for generating vanity addresses.

The standard way to generate random points is to select a random value for X, check to see if there's a solution for the elliptic curve equation with that value, and if there is, pick one of the two possible values for Y. Or, do you need random values with known relationships, or for which you can compute output number N+1 given output number N? –  poncho May 23 '13 at 16:03

Yes, I need a way to track a point back to source points P1, P2, ..., Pn (with a known relation), and that's why I had thought about a linear function F of the previous points. –  Richard May 23 '13 at 18:07

I bet this is a Bitcoin-related question, in which case although people say it uses a Koblitz curve it is in fact not one. I think I have a good candidate solution for your problem, but it works for a composite modulus and only if the group order is kept secret.
If that's useful then let me know. If you're working in affine coordinates and you want to generate new points without inversions, then you're probably limited to the Frobenius endomorphism. –  Barack Obama May 24 '13 at 0:04

Can you describe what problem you actually want to solve? –  CodesInChaos May 24 '13 at 5:47

I have some experience with Bitcoin's curve and I'm very confident that you will be unable to avoid an inversion for your problem as stated. I'm also quite confident that the restrictions you have specified above are more restrictive than are really necessary. Perhaps you can let us know whether you are a) trying to break the curve, b) generating vanity addresses, c) implementing some deterministic wallet scheme or d) implementing transactions which third parties can't link to an address. –  Barack Obama May 24 '13 at 23:20

With your curve, you can use the Gallant-Lambert-Vanstone (GLV) method to answer your question. Indeed, the equation of your curve is: $$y^2=x^3+7.$$ Since $p$ is congruent to $1$ modulo $3$, there are cube roots of unity modulo $p$. Let: $$j=55594575648329892869085402983802832744385952214688224221778511981742606582254 \pmod{p}.$$ You can check that $j^3\equiv 1\pmod{p}$. The complex multiplication by $j$ sends $P=(X_P,Y_P)$ to $P'=(jX_P,Y_P)$. Moreover, $P'=J\cdot P$, where $$J=37718080363155996902926221483475020450927657555482586988616620542887997980018.$$ Finally, multiplication by $J-1$ can be performed efficiently (one application of complex multiplication and one addition) and has high order. Don't use $J+1$: it has order $6$.

EDIT: $J^3$ is $1$ modulo the order of the curve, while $j^3$ is $1$ mod $p$. This endomorphism of the curve is the projection of the complex multiplication of the curve $y^2=x^3+7$ over the rationals to the curve reduced mod $p$. This is why it is usually called the complex multiplication. All in all, this gives a reasonably fast way to generate random-looking multiples of $P$.
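A quick way to see this endomorphism in action is the following sketch (plain Python integers, not the answerer's code; the field prime and base point are the standard secp256k1 domain parameters, and the cube root of unity is recomputed from scratch rather than taken from the decimal constants above):

```python
# secp256k1 field prime; p ≡ 1 (mod 3), so nontrivial cube roots of 1 exist
p = 2**256 - 2**32 - 977

# Find a nontrivial cube root of unity: x^((p-1)/3) mod p for any non-cube x
beta, x = 1, 2
while beta == 1:
    beta = pow(x, (p - 1) // 3, p)
    x += 1
assert pow(beta, 3, p) == 1

# Standard secp256k1 generator, a point on y^2 = x^3 + 7
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
assert (Gy * Gy - (Gx**3 + 7)) % p == 0

# The map (x, y) -> (beta*x, y) stays on the curve, since
# (beta*x)^3 + 7 = beta^3 * x^3 + 7 = x^3 + 7 = y^2
Px = beta * Gx % p
assert (Gy * Gy - (Px**3 + 7)) % p == 0
```

Note that there are two nontrivial cube roots of unity (β and β²); the decimal value quoted in the answer is one of the pair, and either one yields an endomorphism of this form.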
The full GLV method is much more than that, since it speeds up multiplication by an arbitrary constant compared to regular double-and-add, but its basic idea relies on having an endomorphism that can be computed quickly.

This is not the GLV method - it's just using an efficiently computable endomorphism, which is well known. Also, the solution described does not avoid the inversion required to produce an affine point. Finally, I'm not sure what's so "complex" about the multiplication by j! –  Barack Obama Jul 11 '13 at 0:00

Depending on what you actually want to do, it might be possible to speed this up using a batch inversion, instead of inverting each denominator individually.

1. Use some form of extended coordinates.
2. Compute a few hundred new points in extended coordinates, with known relation to the original point.
3. Multiply all denominators together and invert the product.
4. Multiply the inverted combined denominator with the existing denominators to compute the individual inverses.

AFAIK steps 3+4 have a cost of 3 field multiplications per point, which is much cheaper than the ~200 multiplications required for an inversion.

One way to implement steps 3 and 4, given the denominators $z_1 ... z_n$:

• Define $r_i = \Pi_{j=i+1}^n z_j$, computed iteratively as $r_n=1$, $r_i=r_{i+1} \cdot z_{i+1}$ for $i=n-1 ... 0$, and store it in an array.
• Compute $r_0^{-1}$ using a field inversion (note $r_0 = \Pi_{j=1}^n z_j$ is the full product).
• Define $l_i = r_0^{-1} \cdot \Pi_{j=1}^{i-1} z_j$ and compute it iteratively as $l_0 = r_0^{-1}$ and then $l_i = l_{i-1}\cdot z_{i-1}$.
• $z_i^{-1}=l_i \cdot r_i$.

Given $z_i^{-1}$, the affine coordinate can be obtained by multiplying it with the numerator.

Why does this work? $$z_i^{-1} = \Big(\Pi_{j=1}^n z_j\Big) \cdot \Big(\Pi_{j=1}^n z_j\Big)^{-1} \cdot z_i^{-1} = \Big(\Pi_{j=1}^{i-1} z_j \cdot z_i \cdot \Pi_{j=i+1}^n z_j\Big)\cdot \Big(\Pi_{j=1}^n z_j\Big)^{-1} \cdot z_i^{-1} = \Big(\Big(\Pi_{j=1}^n z_j\Big)^{-1}\cdot\Pi_{j=1}^{i-1} z_j\Big) \cdot \Pi_{j=i+1}^n z_j = l_i \cdot r_i$$
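The trick described in the last answer (often called Montgomery's simultaneous-inversion trick) can be sketched as follows. This is a generic illustration over a prime field, not tied to any curve library; `pow(x, -1, p)` needs Python 3.8+:

```python
def batch_invert(zs, p):
    """Invert every element of zs modulo the prime p using one field
    inversion plus roughly 3 multiplications per element."""
    n = len(zs)
    # r[i] = product of zs[i+1:], built from the right (r[n-1] = 1)
    r = [1] * n
    for i in range(n - 2, -1, -1):
        r[i] = r[i + 1] * zs[i + 1] % p
    # Single inversion of the full product zs[0] * zs[1] * ... * zs[n-1]
    inv_total = pow(zs[0] * r[0] % p, -1, p)
    # l walks from the left: l = inv_total * product of zs[:i]
    out = [0] * n
    l = inv_total
    for i in range(n):
        out[i] = l * r[i] % p     # zs[i]^-1 = l_i * r_i
        l = l * zs[i] % p
    return out
```

For a few hundred Jacobian/extended-coordinate points, calling this once on all the Z denominators amortizes the expensive inversion across the whole batch, which is exactly the cost argument made in the answer.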
http://support.sas.com/documentation/cdl/en/statug/67523/HTML/default/statug_surveylogistic_details06.htm
# The SURVEYLOGISTIC Procedure

### Model Fitting

Subsections:

#### Determining Observations for Likelihood Contributions

If you use the events/trials syntax, each observation is split into two observations. One has the response value 1 with a frequency equal to the value of the events variable. The other observation has the response value 2 and a frequency equal to the value of (trials – events). These two observations have the same explanatory variable values and the same WEIGHT values as the original observation.

For either the single-trial or the events/trials syntax, let j index all observations. In other words, for the single-trial syntax, j indexes the actual observations. And, for the events/trials syntax, j indexes the observations after splitting (as described previously). If your data set has 30 observations and you use the single-trial syntax, j has values from 1 to 30; if you use the events/trials syntax, j has values from 1 to 60.

Suppose the response variable in a cumulative response model can take on the ordered values $1, \dots, k, k+1$, where k is an integer $\ge 1$. The likelihood $L_j$ for the jth observation with ordered response value $y_j$ and explanatory variables vector $\mathbf{x}_j$ (row vector) is given by

$$L_j = \begin{cases} F(\alpha_1 + \mathbf{x}_j \boldsymbol{\beta}) & y_j = 1 \\ F(\alpha_{y_j} + \mathbf{x}_j \boldsymbol{\beta}) - F(\alpha_{y_j - 1} + \mathbf{x}_j \boldsymbol{\beta}) & 1 < y_j \le k \\ 1 - F(\alpha_k + \mathbf{x}_j \boldsymbol{\beta}) & y_j = k + 1 \end{cases}$$

where $F(\cdot)$ is the logistic, normal, or extreme-value distribution function; $\alpha_1 < \dots < \alpha_k$ are ordered intercept parameters; and $\boldsymbol{\beta}$ is the slope parameter vector.

For the generalized logit model, letting the $(k+1)$st level be the reference level, the intercepts $\alpha_1, \dots, \alpha_k$ are unordered and the slope vector $\boldsymbol{\beta}_i$ varies with each logit. The likelihood for the jth observation with response value $y_j$ and explanatory variables vector $\mathbf{x}_j$ (row vector) is given by

$$L_j = \begin{cases} \dfrac{e^{\alpha_{y_j} + \mathbf{x}_j \boldsymbol{\beta}_{y_j}}}{1 + \sum_{i=1}^{k} e^{\alpha_i + \mathbf{x}_j \boldsymbol{\beta}_i}} & 1 \le y_j \le k \\[2ex] \dfrac{1}{1 + \sum_{i=1}^{k} e^{\alpha_i + \mathbf{x}_j \boldsymbol{\beta}_i}} & y_j = k + 1 \end{cases}$$

#### Iterative Algorithms for Model Fitting

Two iterative maximum likelihood algorithms are available in PROC SURVEYLOGISTIC to obtain the pseudo-estimate $\hat{\boldsymbol{\theta}}$ of the model parameter $\boldsymbol{\theta}$. The default is the Fisher scoring method, which is equivalent to fitting by iteratively reweighted least squares. The alternative algorithm is the Newton–Raphson method.
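The events/trials splitting described above can be illustrated with a small sketch (plain Python, not SAS code):

```python
def split_events_trials(observations):
    """Split each (events, trials) observation into two single-trial rows.

    Response 1 gets frequency = events; response 2 gets frequency =
    trials - events.  Covariates and weight are copied unchanged.
    """
    split = []
    for obs in observations:
        split.append({"x": obs["x"], "weight": obs["weight"],
                      "response": 1, "freq": obs["events"]})
        split.append({"x": obs["x"], "weight": obs["weight"],
                      "response": 2, "freq": obs["trials"] - obs["events"]})
    return split

# One observation with 4 events out of 10 trials becomes two rows
# with frequencies 4 and 6 and identical covariates and weight.
rows = split_events_trials([{"x": (1.0, 0.3), "weight": 2.0,
                             "events": 4, "trials": 10}])
```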
Both algorithms give the same parameter estimates; the covariance matrix of $\hat{\boldsymbol{\theta}}$ is estimated in the section Variance Estimation. For a generalized logit model, only the Newton–Raphson technique is available. You can use the TECHNIQUE= option in the MODEL statement to select a fitting algorithm.

##### Iteratively Reweighted Least Squares Algorithm (Fisher Scoring)

Let Y be the response variable that takes values $1, \dots, k, k+1$ ($k \ge 1$). Let j index all observations and $Y_j$ be the value of the response for the jth observation. Consider the multinomial variable $\mathbf{Z}_j = (Z_{1j}, \dots, Z_{kj})'$ such that $Z_{ij} = 1$ if $Y_j = i$ and $0$ otherwise, and $Z_{(k+1)j} = 1 - \sum_{i=1}^{k} Z_{ij}$. With $\pi_{ij}$ denoting the probability that the jth observation has response value i, the expected value of $\mathbf{Z}_j$ is $\boldsymbol{\pi}_j = (\pi_{1j}, \dots, \pi_{kj})'$, and $\pi_{(k+1)j} = 1 - \sum_{i=1}^{k} \pi_{ij}$. The covariance matrix of $\mathbf{Z}_j$ is $\mathbf{V}_j$, which is the covariance matrix of a multinomial random variable for one trial with parameter vector $\boldsymbol{\pi}_j$. Let $\boldsymbol{\theta}$ be the vector of regression parameters—for example, $\boldsymbol{\theta} = (\alpha_1, \dots, \alpha_k, \boldsymbol{\beta}')'$ for the cumulative logit model. Let $\mathbf{D}_j$ be the matrix of partial derivatives of $\boldsymbol{\pi}_j$ with respect to $\boldsymbol{\theta}$. The estimating equation for the regression parameters is

$$\sum_j f_j w_j \mathbf{D}_j' \mathbf{V}_j^{-} (\mathbf{Z}_j - \boldsymbol{\pi}_j) = \mathbf{0}$$

where $\mathbf{V}_j^{-}$ is a generalized inverse of $\mathbf{V}_j$, and $w_j$ and $f_j$ are the WEIGHT and FREQ values of the jth observation.

With a starting value of $\boldsymbol{\theta}^{(0)}$, the pseudo-estimate of $\boldsymbol{\theta}$ is obtained iteratively as

$$\boldsymbol{\theta}^{(i+1)} = \boldsymbol{\theta}^{(i)} + \Big( \sum_j f_j w_j \mathbf{D}_j' \mathbf{V}_j^{-} \mathbf{D}_j \Big)^{-1} \sum_j f_j w_j \mathbf{D}_j' \mathbf{V}_j^{-} (\mathbf{Z}_j - \boldsymbol{\pi}_j)$$

where $\mathbf{D}_j$, $\mathbf{V}_j$, and $\boldsymbol{\pi}_j$ are evaluated at the ith iteration $\boldsymbol{\theta}^{(i)}$. The expression after the plus sign is the step size. If the log likelihood evaluated at $\boldsymbol{\theta}^{(i+1)}$ is less than that evaluated at $\boldsymbol{\theta}^{(i)}$, then $\boldsymbol{\theta}^{(i+1)}$ is recomputed by step-halving or ridging. The iterative scheme continues until convergence is obtained—that is, until $\boldsymbol{\theta}^{(i+1)}$ is sufficiently close to $\boldsymbol{\theta}^{(i)}$. Then the maximum likelihood estimate of $\boldsymbol{\theta}$ is $\hat{\boldsymbol{\theta}} = \boldsymbol{\theta}^{(i+1)}$.

By default, starting values are zero for the slope parameters, and starting values are the observed cumulative logits (that is, logits of the observed cumulative proportions of response) for the intercept parameters. Alternatively, the starting values can be specified with the INEST= option in the PROC SURVEYLOGISTIC statement.

##### Newton–Raphson Algorithm

Let

$$\mathbf{g} = \sum_j f_j w_j \frac{\partial l_j}{\partial \boldsymbol{\theta}}, \qquad \mathbf{H} = -\sum_j f_j w_j \frac{\partial^2 l_j}{\partial \boldsymbol{\theta} \, \partial \boldsymbol{\theta}'}$$

be the gradient vector and the Hessian matrix, where $l_j = \log L_j$ is the log likelihood for the jth observation.
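For a binary logistic model the Fisher scoring step and the Newton–Raphson step coincide (the logit link is canonical), so both updates above can be illustrated with one small numpy sketch. This is an unweighted toy version, not the SURVEYLOGISTIC implementation:

```python
import numpy as np

def fisher_scoring_logistic(X, y, n_iter=25):
    """Fit a binary logistic model with the scoring update described above."""
    theta = np.zeros(X.shape[1])                 # default: start at zero
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-X @ theta))    # fitted probabilities
        W = pi * (1.0 - pi)                      # binomial variance (V_j, k = 1)
        H = X.T @ (W[:, None] * X)               # information matrix
        g = X.T @ (y - pi)                       # score vector
        theta = theta + np.linalg.solve(H, g)    # theta^(i+1) = theta^(i) + step
    return theta

# Intercept-only model: the MLE equals the logit of the observed proportion.
X = np.ones((10, 1))
y = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], dtype=float)
theta_hat = fisher_scoring_logistic(X, y)
```

A production implementation would add the step-halving/ridging safeguard and a convergence test instead of a fixed iteration count.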
With a starting value of $\boldsymbol{\theta}^{(0)}$, the pseudo-estimate $\hat{\boldsymbol{\theta}}$ of $\boldsymbol{\theta}$ is obtained iteratively until convergence is obtained:

$$\boldsymbol{\theta}^{(i+1)} = \boldsymbol{\theta}^{(i)} + \mathbf{H}^{-1} \mathbf{g}$$

where $\mathbf{H}$ and $\mathbf{g}$ are evaluated at the ith iteration $\boldsymbol{\theta}^{(i)}$. If the log likelihood evaluated at $\boldsymbol{\theta}^{(i+1)}$ is less than that evaluated at $\boldsymbol{\theta}^{(i)}$, then $\boldsymbol{\theta}^{(i+1)}$ is recomputed by step-halving or ridging. The iterative scheme continues until convergence is obtained—that is, until $\boldsymbol{\theta}^{(i+1)}$ is sufficiently close to $\boldsymbol{\theta}^{(i)}$. Then the maximum likelihood estimate of $\boldsymbol{\theta}$ is $\hat{\boldsymbol{\theta}} = \boldsymbol{\theta}^{(i+1)}$.

#### Convergence Criteria

Four convergence criteria are allowed: ABSFCONV=, FCONV=, GCONV=, and XCONV=. If you specify more than one convergence criterion, the optimization is terminated as soon as one of the criteria is satisfied. If none of the criteria is specified, the default is GCONV=1E–8.

#### Existence of Maximum Likelihood Estimates

The likelihood equation for a logistic regression model does not always have a finite solution. Sometimes there is a nonunique maximum on the boundary of the parameter space, at infinity. The existence, finiteness, and uniqueness of pseudo-estimates for the logistic regression model depend on the patterns of data points in the observation space (Albert and Anderson, 1984; Santner and Duffy, 1986).

Consider a binary response model. Let $y_i$ be the response of the ith subject, and let $\mathbf{x}_i$ be the row vector of explanatory variables (including the constant 1 associated with the intercept). There are three mutually exclusive and exhaustive types of data configurations: complete separation, quasi-complete separation, and overlap.

Complete separation: There is a complete separation of data points if there exists a vector $\mathbf{b}$ that correctly allocates all observations to their response groups; that is,

$$\begin{cases} \mathbf{x}_i \mathbf{b} > 0 & y_i = 1 \\ \mathbf{x}_i \mathbf{b} < 0 & y_i = 2 \end{cases}$$

This configuration gives nonunique infinite estimates. If the iterative process of maximizing the likelihood function is allowed to continue, the log likelihood diminishes to zero, and the dispersion matrix becomes unbounded.
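A toy demonstration (a numpy sketch, not SAS's actual check) of how complete separation drives the iterates to infinity while the fitted probabilities of the observed responses approach one, loosely mimicking the eight-iteration heuristic the procedure uses (described later in this section):

```python
import numpy as np

def newton_with_separation_check(X, y, n_iter=40, tol=1e-6):
    """Newton iterations that flag complete separation: after the 8th
    iteration, declare separation once every observation's fitted
    probability of its observed response is numerically one."""
    theta = np.zeros(X.shape[1])
    for it in range(1, n_iter + 1):
        pi = 1.0 / (1.0 + np.exp(-X @ theta))
        W = pi * (1.0 - pi)
        theta = theta + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - pi))
        if it > 8:
            p_observed = np.where(y == 1, pi, 1.0 - pi)
            if np.all(p_observed > 1.0 - tol):
                return theta, True     # complete separation declared
    return theta, False

# Perfectly separated data: x < 0 always gives y = 0, x > 0 always y = 1.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta, separated = newton_with_separation_check(X, y)
```

On this data each Newton step keeps growing the slope (by roughly one unit per iteration once the probabilities saturate), so the check fires after a dozen or so iterations.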
Quasi-complete separation: The data are not completely separable, but there is a vector $\mathbf{b}$ such that

$$\begin{cases} \mathbf{x}_i \mathbf{b} \ge 0 & y_i = 1 \\ \mathbf{x}_i \mathbf{b} \le 0 & y_i = 2 \end{cases}$$

and equality holds for at least one subject in each response group. This configuration also yields nonunique infinite estimates. If the iterative process of maximizing the likelihood function is allowed to continue, the dispersion matrix becomes unbounded and the log likelihood diminishes to a nonzero constant.

Overlap: If neither complete nor quasi-complete separation exists in the sample points, there is an overlap of sample points. In this configuration, the pseudo-estimates exist and are unique.

Complete separation and quasi-complete separation are problems typically encountered with small data sets. Although complete separation can occur with any type of data, quasi-complete separation is not likely with truly continuous explanatory variables.

The SURVEYLOGISTIC procedure uses a simple empirical approach to recognize the data configurations that lead to infinite parameter estimates. The basis of this approach is that any convergence method of maximizing the log likelihood must yield a solution that gives complete separation, if such a solution exists. In maximizing the log likelihood, there is no checking for complete or quasi-complete separation if convergence is attained in eight or fewer iterations. Subsequent to the eighth iteration, the probability of the observed response is computed for each observation. If the probability of the observed response is one for all observations, there is a complete separation of data points and the iteration process is stopped.

If the complete separation of data has not been determined and an observation is identified to have an extremely large probability (0.95) of the observed response, there are two possible situations. First, there is overlap in the data set, and the observation is an atypical observation of its own group. The iterative process, if allowed to continue, stops when a maximum is reached.
Second, there is quasi-complete separation in the data set, and the asymptotic dispersion matrix is unbounded. If any of the diagonal elements of the dispersion matrix for the standardized observation vectors (all explanatory variables standardized to zero mean and unit variance) exceeds 5,000, quasi-complete separation is declared and the iterative process is stopped. If either complete separation or quasi-complete separation is detected, a warning message is displayed in the procedure output.

Checking for quasi-complete separation is less foolproof than checking for complete separation. The NOCHECK option in the MODEL statement turns off the process of checking for infinite parameter estimates. In cases of complete or quasi-complete separation, turning off the checking process typically results in the procedure failing to converge.

#### Model Fitting Statistics

Suppose the model contains s explanatory effects. For the jth observation, let $\hat{p}_j$ be the estimated probability of the observed response. The three criteria displayed by the SURVEYLOGISTIC procedure are calculated as follows:

- –2 log likelihood:

  $$-2 \log L = -2 \sum_j w_j f_j \log(\hat{p}_j)$$

  where $w_j$ and $f_j$ are the weight and frequency values, respectively, of the jth observation. For binary response models that use the events/trials syntax, this is equivalent to

  $$-2 \log L = -2 \sum_j w_j f_j \left( r_j \log(\hat{p}_j) + (n_j - r_j) \log(1 - \hat{p}_j) \right)$$

  where $r_j$ is the number of events, $n_j$ is the number of trials, and $\hat{p}_j$ is the estimated event probability.

- Akaike information criterion:

  $$AIC = -2 \log L + 2p$$

  where p is the number of parameters in the model. For cumulative response models, $p = k + s$, where k is the total number of response levels minus one, and s is the number of explanatory effects. For the generalized logit model, $p = k(s + 1)$.

- Schwarz criterion:

  $$SC = -2 \log L + p \log\Big(\sum_j f_j\Big)$$

  where p is the number of parameters in the model. For cumulative response models, $p = k + s$, where k is the total number of response levels minus one, and s is the number of explanatory effects. For the generalized logit model, $p = k(s + 1)$.
The –2 log likelihood statistic has a chi-square distribution under the null hypothesis (that all the explanatory effects in the model are zero), and the procedure produces a p-value for this statistic. The AIC and SC statistics give two different ways of adjusting the –2 log likelihood statistic for the number of terms in the model and the number of observations used.

#### Generalized Coefficient of Determination

Cox and Snell (1989, pp. 208–209) propose the following generalization of the coefficient of determination to a more general linear model:

$$R^2 = 1 - \left( \frac{L(\mathbf{0})}{L(\hat{\boldsymbol{\theta}})} \right)^{\frac{2}{n}}$$

where $L(\mathbf{0})$ is the likelihood of the intercept-only model, $L(\hat{\boldsymbol{\theta}})$ is the likelihood of the specified model, and n is the sample size. The quantity $R^2$ achieves a maximum of less than 1 for discrete models, where the maximum is given by

$$R^2_{\max} = 1 - \left( L(\mathbf{0}) \right)^{\frac{2}{n}}$$

Nagelkerke (1991) proposes the following adjusted coefficient, which can achieve a maximum value of 1:

$$\tilde{R}^2 = \frac{R^2}{R^2_{\max}}$$

Properties and interpretation of $R^2$ and $\tilde{R}^2$ are provided in Nagelkerke (1991). In the "Testing Global Null Hypothesis: BETA=0" table, $R^2$ is labeled as "RSquare" and $\tilde{R}^2$ is labeled as "Max-rescaled RSquare." Use the RSQUARE option to request $R^2$ and $\tilde{R}^2$.

#### INEST= Data Set

You can specify starting values for the iterative algorithm in the INEST= data set. The INEST= data set contains one observation for each BY group. The INEST= data set must contain the intercept variables (named Intercept for binary response models and Intercept, Intercept2, Intercept3, and so forth, for ordinal response models) and all explanatory variables in the MODEL statement. If BY processing is used, the INEST= data set should also include the BY variables, and there must be one observation for each BY group. If the INEST= data set also contains the _TYPE_ variable, only observations with _TYPE_ value 'PARMS' are used as starting values.
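The Cox–Snell and Nagelkerke R-square statistics described above reduce to simple arithmetic on the two log likelihoods (illustrative Python with made-up log-likelihood values, not SAS output):

```python
import math

def generalized_rsquare(loglik_null, loglik_model, n):
    """Cox-Snell R-square and Nagelkerke's max-rescaled version."""
    # R^2 = 1 - (L(0) / L(theta_hat))^(2/n), written via log likelihoods
    r2 = 1.0 - math.exp((2.0 / n) * (loglik_null - loglik_model))
    r2_max = 1.0 - math.exp((2.0 / n) * loglik_null)  # attainable maximum
    return r2, r2 / r2_max

# Hypothetical fit: intercept-only log likelihood -340.2, full model -280.5.
r2, r2_rescaled = generalized_rsquare(-340.2, -280.5, n=500)
```

Working with log likelihoods avoids underflow: the raw likelihoods $L(\mathbf{0})$ and $L(\hat{\boldsymbol{\theta}})$ are astronomically small for realistic sample sizes.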
http://mrandrewandrade.com/blog/2015/10/22/modeling-thermistor-using-data-science.html
In the last blog post, I talked about voltage dividers and how they can be used to limit voltage to eventually connect to an ADC to measure voltage on a BeagleBone. Today I am going to build on the voltage divider and pair hardware engineering with data science. The next couple of posts will be pieces of the battery testing rig, and then I will put it all together to explain how the full system works. First things first: what temperature sensor should we use?

## Selecting a Thermal Sensor

Half due to availability, and the other half due to sheer laziness (of not wanting to buy sensors), I decided we are going to use thermistors as our temperature sensor of choice. A friend had a bunch, so why not use them? Specifically, he gave me a bunch of 100k thermistors (Part No. HT100K3950-1), each individually connected to a 1 m cable.

## What is a thermistor?

A thermistor can be simply defined as a special type of resistor whose resistance is highly dependent on temperature. Therefore, if you are able to measure the resistance of the thermistor, you can easily determine the temperature if you know how the thermistor behaves. Simple enough, right? If you go to the site which sells them, they give a bit more information:

Hotend Thermistor 100K Glass-sealed Thermistor 3950 1% thermistor 1.8mm glass head 1M long cables 2 wires PTFE Tube 0.6*1mm to protect the thermistor Shrink wrap between thermistor and the cables.

The most important piece of information needed to get started using the thermistor is its behaviour in response to temperature. Luckily for me, my friend sent me the data which was on the site. When you have a table or chart which maps resistance to temperature, you can simply measure the resistance with a multimeter and look up the temperature on the chart, or you can again use a voltage divider and connect it to an ADC (more on that later).

## Mapping Resistance to Temperature using Curve Fitting

The site included a weird word document with a bunch of numbers.
It really confuses me why one would put tabular data in a Word document; a spreadsheet serves that purpose. Anyway, the information was easily extractable, and I was able to put it into a spreadsheet and save it as a .CSV file. If you want to follow along, you can download the document here.

Once you open the file in a spreadsheet program or in your text editor of choice, you can see there are four columns: temperature in Celsius, maximum resistance in $k\Omega$, normal (average) resistance in $k\Omega$, and minimum resistance in $k\Omega$.

If we were measuring the resistance by hand, we could simply look up (and eyeball) the closest resistance value and read off the temperature. We could be more fancy and use linear interpolation as an alternative to eyeballing. That is all great, but our goal is to use a microcontroller (or computer) to store, measure and use the temperature readings. This means we have to mathematically model how the thermistor behaves. Since we have data provided by the manufacturer, we do this by plotting the data:

Now we can see that it does not have a linear relation. Actually, it has an inverse relation, or most probably an $x^{n}$ relation where $n<0$. Since I am curious, I plotted the min and the max resistance to get a better feel for the error in the temperature reading. My intuition tells me that that is the range the thermistor operates in.

Now, the top graph isn't that useful. All it shows is that the range is very small, and is wider (there is more error) when temperatures are below 0 degrees. To see if we can do better, let's limit the range (with contingency) to the temperatures we will be dealing with on the project: 0-100 degrees. The plot is a bit clearer but not perfect; let's try to be more fancy and represent the error with error bars like they do in stats 101. Great! A bit better, but it is still hard to read. Let's try plotting the error on the same axis as the expected (normal) resistance.
From this figure it is quite clear that as temperature decreases, there is more error in the thermistor reading. This figure also shows that readings taken above 20 °C should have good accuracy. We can take this even further with one more plot.

This figure shows the upper and lower bound of error (around $\pm 1.5\,k\Omega$ at $125\,k\Omega$), since the expected reading would be around $125\,k\Omega$ at 20 °C. Knowing the smallest resistance within our operating range will occur at 100 degrees (around $6720\,\Omega$), R_1 can be calculated to be around $1200\,\Omega$ using the voltage divider presented in the previous post. Now, the largest possible error can be calculated and used as a very conservative estimate of the temperature reading resolution.

Before we do that, let us fit a curve to the data. Using grade 11 math, we can estimate that the function which describes the inverse curve would look something like $resistance = a \times e^{-b \times temperature} + c$. We can then use SciPy's curve_fit to determine the fit parameters and the covariance matrix.

full temperature fit coefficients: [ 3.22248984e+02 5.51886907e-02 4.56056442e+00] Covariance matrix: [[ 1.82147996e+00 -2.16345654e-04 -2.82466975e-01] [ -2.16345654e-04 2.97792772e-08 2.87522862e-05] [ -2.82466975e-01 2.87522862e-05 2.25733939e-01]]

The fit coefficients are now known! This means that the following equation approximates the behaviour of the thermistor:

$resistance = 322 \times e^{-0.055 \times temperature} + 4.56$

We can also determine the standard deviation of the fit from the diagonal of the covariance matrix, and plot it for each parameter. As we can see, the standard deviation is very small and thus results in a good fit across the full range of temperature, as shown in the three figures below.

While the model across the full temperature range is useful, we can improve our model by curve fitting only in the temperature range we are interested in.
This prevents the algorithm from compensating for a select set of data which is irrelevant.

fit coefficients: [ 3.16643400e+02 4.84933743e-02 6.53548105e+00] Covariance matrix: [[ 3.29405526e-01 4.07305280e-05 -1.67742297e-02] [ 4.07305280e-05 3.89687128e-08 4.14707796e-05] [ -1.67742297e-02 4.14707796e-05 7.51633680e-02]]

We can see the difference by comparing both of the curve fit models on the temperature range of interest.

While the testing spec we developed states we should have the capability of measuring from 0-100 degrees C, the average range of operation is actually between 20-80 degrees C, so we can change the range to match the standard operating range.

fit coefficients: [ 2.94311453e+02 4.51009053e-02 5.05438839e+00] Covariance matrix: [[ 6.57786227e-01 1.12143572e-04 8.24269669e-02] [ 1.12143572e-04 2.18837795e-08 1.84056321e-05] [ 8.24269669e-02 1.84056321e-05 1.81122519e-02]] Residual mean (kohm): 9.99334067349e-10 Residual std dev (kohm): 0.2302444024

The results of the curve fit within the standard operating temperature range are much better. Not only is the residual error mean essentially zero (9.99e-10 kohm) with a relatively small standard deviation (0.230 kohm), but the residual errors have the general appearance of being normally distributed (unlike the previous curve fits). What this means is that the model will predict the resistance very well (have a very low error) for the standard operating temperatures, but will perform more poorly outside them. Luckily for us, our batteries will not be operating below 20 degrees C or above 80 degrees C.
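The fitting step described above can be reproduced end-to-end with a short SciPy sketch. The data below are synthetic (generated from parameters close to the fit above, since the manufacturer's table isn't reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def thermistor_model(temp_c, a, b, c):
    """Exponential decay model: R (kohm) = a * exp(-b * T) + c."""
    return a * np.exp(-b * temp_c) + c

# Synthetic "datasheet": known parameters plus a little measurement noise.
rng = np.random.default_rng(0)
temp = np.linspace(20, 80, 61)
resistance = thermistor_model(temp, 294.3, 0.0451, 5.05)
resistance = resistance + rng.normal(0.0, 0.1, temp.size)

popt, pcov = curve_fit(thermistor_model, temp, resistance, p0=(300, 0.05, 5))
perr = np.sqrt(np.diag(pcov))   # per-parameter standard deviations
residuals = resistance - thermistor_model(temp, *popt)
```

The diagonal of `pcov` is exactly where the per-parameter standard deviations quoted in the post come from.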
### Curve fit model:

The model we created can now be summarized by the following equation:

$R_2 = a e^{-b \times temperature} + c$

where $R_2$ is measured in $k\Omega$, temperature is measured in degrees Celsius, and the constants a, b, and c were found (through curve fitting) to be:

$a = 2.94311453 \times 10^{2}$

$b = 4.51009053 \times 10^{-2}$

$c = 5.05438839$

In addition, the error in the curve fitting has a mean of 9.99334067349e-10 kohm and a standard deviation of 0.2302444024 kohm. This error, while small, can later be used in filtering algorithms when we are monitoring the temperature. I will touch on this in a later post, but this information about statistical noise in the readings can aid in estimating temperature through more advanced software (such as Kalman filtering).

## Writing software to convert voltage into temperature readings

If we think of the system as a whole, the input to the system is Vo, which is measured on an analog input pin of the BBB. Based on this voltage reading, we have to write software which estimates the temperature. To begin, we know that the thermistor (R_2) changes resistance depending on temperature. We can then use the voltage divider to map this relation. Vin in this case will be 3.3 V, which is provided by the BBB. As noted in the last post, the spec on the BBB states that the analog input pin can only take voltage in the range of 0 to 1.8 V. This means we can set the max Vout = 1.8 V. Finally, based on the resistance-to-temperature data, we know that resistance increases as temperature decreases. This means that R_2 (the resistance of the thermistor) will be greatest when T = 0 degrees (R_2 will be around 327 kohm). We can then use this in our voltage divider equation and solve for R_1. The solution is R_1 = 272700 ohms. Because resistors come in standard sizes, 274k ohm is the standard 1% resistor to use.
Technically, in this case we should go to the next highest resistor value (to limit the voltage to 1.8 V), but this doesn't necessarily have to be true, since we will not be cooling the batteries lower than room temperature. While I recommend using the 274k ohm resistor (with 1% tolerance), one can use a 270k ohm (with 5% tolerance) without much consequence, provided the temperature does not fall below 5 degrees. Even if it does, the BeagleBone has some circuitry to help prevent damage from a slightly larger analog input voltage.

We can use algebra to solve for R_2 in terms of the other variables as follows:

$R_2 = \frac{R_1 V_o}{V_{in} - V_o}$ where $V_{in} \neq V_o$

In this equation, R_2 can be solved from R_1 (set by us), V_o (measured), and V_in (set by us). Next we can use the previous curve fit equation and use algebra to solve for temperature:

$temperature = - \frac{\ln \left( \frac{R_2 - c}{a} \right)}{b}$ where $R_2 > c$

We can then substitute our relation of R_2 to the measured Vo to get the following equation:

$temperature = - \frac{1}{b} \ln \left( \frac{ \frac{R_1 V_o}{V_{in} - V_o} - c}{a} \right)$ where $R_2 > c$ and $V_{in} \neq V_o$

Before we can use this, we have to ensure the conditions hold. $V_o$ could only equal $V_{in}$ if the thermistor resistance were unbounded, which cannot happen here, and we must also ensure $R_2 > c$ (with $c \approx 5.05\,k\Omega$). If we use the data found in the csv, we see that the smallest resistance in our operating temperature range (at 100 °C) is 6.7100 kohm, so we are safe on both conditions.

# Results!

We can now write a simple function which takes a voltage as a parameter and returns the temperature.

-2.11347171107

We can now use this function and plot the full range of input voltages. Using this chart we can now test the system and measure temperature! The next post will be about combining all the pieces and doing the testing!
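Putting the pieces together, the conversion function described in this post might look like the following (my own illustration using the constants from the 20–80 °C fit; the blog's actual code isn't shown here):

```python
import math

R1 = 274.0        # kohm: the chosen divider resistor
VIN = 3.3         # volts, supplied by the BeagleBone
A = 294.311453    # fit coefficients (R_2 in kohm, T in degrees C)
B = 0.0451009053
C = 5.05438839

def voltage_to_temperature(v_out):
    """Invert the divider, then the exponential fit (valid while R_2 > C)."""
    r2 = R1 * v_out / (VIN - v_out)        # thermistor resistance, kohm
    return -math.log((r2 - C) / A) / B

# Round trip: simulate the divider at 25 C, then convert the voltage back.
t_true = 25.0
r2 = A * math.exp(-B * t_true) + C
v_out = VIN * r2 / (R1 + r2)
t_recovered = voltage_to_temperature(v_out)
```

Because the two transformations are exact algebraic inverses of each other, the round trip recovers the original temperature up to floating-point error.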
https://en.wikipedia.org/wiki/Jacobi_sum
Jacobi sum

In mathematics, a Jacobi sum is a type of character sum formed with Dirichlet characters. Simple examples would be Jacobi sums J(χ, ψ) for Dirichlet characters χ, ψ modulo a prime number p, defined by

$$J(\chi ,\psi )=\sum \chi (a)\psi (1-a)\,,$$

where the summation runs over all residues a = 2, 3, ..., p − 1 mod p (for which neither a nor 1 − a is 0). Jacobi sums are the analogues for finite fields of the beta function. Such sums were introduced by C. G. J. Jacobi early in the nineteenth century in connection with the theory of cyclotomy. Jacobi sums J can be factored generically into products of powers of Gauss sums g. For example, when the character χψ is nontrivial,

$$J(\chi ,\psi )={\frac {g(\chi )g(\psi )}{g(\chi \psi )}}\,,$$

analogous to the formula for the beta function in terms of gamma functions. Since the nontrivial Gauss sums g have absolute value $p^{1/2}$, it follows that J(χ, ψ) also has absolute value $p^{1/2}$ when the characters χψ, χ, ψ are nontrivial. Jacobi sums J lie in smaller cyclotomic fields than do the nontrivial Gauss sums g. The summands of J(χ, ψ) for example involve no pth root of unity, but rather involve just values which lie in the cyclotomic field of (p − 1)th roots of unity. Like Gauss sums, Jacobi sums have known prime ideal factorisations in their cyclotomic fields; see Stickelberger's theorem. When χ is the Legendre symbol,

$$J(\chi ,\chi )=-\chi (-1)=(-1)^{\frac {p+1}{2}}\,.$$

In general the values of Jacobi sums occur in relation with the local zeta-functions of diagonal forms. The result on the Legendre symbol amounts to the formula p + 1 for the number of points on a conic section that is a projective line over the field of p elements. A paper of André Weil from 1949 very much revived the subject. Indeed, through the Hasse–Davenport relation of the late 20th century, the formal properties of powers of Gauss sums had become current once more.
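The Legendre-symbol identity above is easy to check numerically (a quick illustrative script, not part of the article):

```python
def legendre(a, p):
    """Legendre symbol chi(a) via Euler's criterion (p an odd prime)."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def jacobi_sum_legendre(p):
    """J(chi, chi) = sum of chi(a) * chi(1 - a) over a = 2, ..., p - 1."""
    return sum(legendre(a, p) * legendre(1 - a, p) for a in range(2, p))

# J(chi, chi) = -chi(-1) = (-1)^((p+1)/2) for every odd prime p.
for p in (7, 11, 13, 17, 19, 23):
    assert jacobi_sum_legendre(p) == -legendre(-1, p) == (-1) ** ((p + 1) // 2)
```

The range starts at a = 2 because a = 1 makes 1 − a vanish, matching the convention stated above.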
As well as pointing out the possibility of writing down local zeta-functions for diagonal hypersurfaces by means of general Jacobi sums, Weil (1952) demonstrated the properties of Jacobi sums as Hecke characters. This was to become important once the complex multiplication of abelian varieties became established. The Hecke characters in question were exactly those one needs to express the Hasse–Weil L-functions of the Fermat curves, for example. The exact conductors of these characters, a question Weil had left open, were determined in later work.

References

• Berndt, B. C.; Evans, R. J.; Williams, K. S. (1998). Gauss and Jacobi Sums. Wiley.
• Lang, S. (1978). Cyclotomic Fields. Graduate Texts in Mathematics. 59. Springer Verlag. ch. 1. ISBN 0-387-90307-0.
• Weil, André (1949). "Numbers of solutions of equations in finite fields". Bull. Amer. Math. Soc. 55: 497–508. doi:10.1090/s0002-9904-1949-09219-4.
• Weil, André (1952). "Jacobi sums as Grössencharaktere". Trans. Amer. Math. Soc. 73: 487–495. doi:10.1090/s0002-9947-1952-0051263-0.
https://opentsne.readthedocs.io/en/latest/examples/02_advanced_usage/02_advanced_usage.html
This notebook replicates what was done in the simple_usage notebooks, but this time with the advanced API. The advanced API is required if we want to use non-standard affinity methods that better preserve global structure. If you are comfortable with the advanced API, please refer to the preserving_global_structure notebook for a guide on how to obtain better embeddings and preserve more global structure.

Data set contains 44808 samples with 50 features

## Create train/test split

30021 training samples
14787 test samples

## Create a t-SNE embedding

Like in the simple_usage notebook, we will run the standard t-SNE optimization. This example shows the standard t-SNE optimization. Much can be done in order to better preserve global structure and improve embedding quality. Please refer to the preserving_global_structure notebook for some examples.

1. Compute the affinities between data points

CPU times: user 1min 39s, sys: 2.15 s, total: 1min 41s
Wall time: 19.6 s

2. Generate initial coordinates for our embedding

CPU times: user 3.01 s, sys: 49.6 ms, total: 3.06 s
Wall time: 77.3 ms

3. Construct the TSNEEmbedding object

4. Optimize embedding

   1. Early exaggeration phase

Iteration 50, KL divergence 5.7889, 50 iterations in 1.1595 sec
Iteration 100, KL divergence 5.2496, 50 iterations in 1.1852 sec
Iteration 150, KL divergence 5.1563, 50 iterations in 1.1364 sec
Iteration 200, KL divergence 5.1203, 50 iterations in 1.1426 sec
Iteration 250, KL divergence 5.1018, 50 iterations in 1.1117 sec
CPU times: user 2min 52s, sys: 3.41 s, total: 2min 55s
Wall time: 5.79 s
   2. Regular optimization

Iteration 50, KL divergence 3.7958, 50 iterations in 1.3252 sec
Iteration 100, KL divergence 3.4076, 50 iterations in 1.2355 sec
Iteration 150, KL divergence 3.1945, 50 iterations in 1.4455 sec
Iteration 200, KL divergence 3.0541, 50 iterations in 1.4912 sec
Iteration 250, KL divergence 2.9521, 50 iterations in 1.9103 sec
Iteration 300, KL divergence 2.8745, 50 iterations in 2.1101 sec
Iteration 350, KL divergence 2.8131, 50 iterations in 2.6402 sec
Iteration 400, KL divergence 2.7642, 50 iterations in 3.6373 sec
Iteration 450, KL divergence 2.7241, 50 iterations in 3.8347 sec
Iteration 500, KL divergence 2.6918, 50 iterations in 4.7176 sec
Iteration 550, KL divergence 2.6655, 50 iterations in 6.8521 sec
Iteration 600, KL divergence 2.6441, 50 iterations in 5.5079 sec
Iteration 650, KL divergence 2.6264, 50 iterations in 6.5560 sec
Iteration 700, KL divergence 2.6121, 50 iterations in 7.5798 sec
Iteration 750, KL divergence 2.6002, 50 iterations in 9.0642 sec
CPU times: user 27min 24s, sys: 32.9 s, total: 27min 57s
Wall time: 1min

## Transform

CPU times: user 3.55 s, sys: 150 ms, total: 3.7 s
Wall time: 1.22 s

Iteration 50, KL divergence 212577.9338, 50 iterations in 8.4328 sec
Iteration 100, KL divergence 212507.1902, 50 iterations in 6.1227 sec
CPU times: user 3min 14s, sys: 3.71 s, total: 3min 18s
Wall time: 14.7 s

## Together

We superimpose the transformed points onto the original embedding with larger opacity.
https://swdocs.cypress.com/html/psoc6-with-anycloud/en/latest/api/psoc-base-lib/pdl/group__group__sysanalog__functions.html
# Functions

group group_sysanalog_functions

**cy_en_sysanalog_status_t Cy_SysAnalog_Init(const cy_stc_sysanalog_config_t *config)**

Initialize the AREF block.

```c
/* Scenario: The AREF is a system-wide resource; it is used by multiple blocks
 * (SAR, CTDAC, CTB, and CSDv2). If one of these blocks requires one of the AREF
 * reference outputs, the AREF must be enabled.
 * The AREF block is not meant to be used stand-alone. */

/* The Cy_SysAnalog_Fast_Local configuration is provided by the driver
 * to cover a majority of use cases. It is the recommended configuration
 * for analog performance; it configures the block for
 * fast startup and sources all references with its local generators. */
cy_en_sysanalog_status_t status;
status = Cy_SysAnalog_Init(&Cy_SysAnalog_Fast_Local);

/* Turn on the hardware block. */
Cy_SysAnalog_Enable();

/* After the AREF is enabled, enable the consumer blocks (SAR, CTDAC, CTB, and CSDv2). */
```

**__STATIC_INLINE void Cy_SysAnalog_DeInit(void)**

Reset the AREF configuration back to its power-on-reset defaults.

```c
/* Scenario: The AREF is no longer needed. Reset the AREF to power-on-reset settings. */
(void) Cy_SysAnalog_DeInit();
```

**__STATIC_INLINE uint32_t Cy_SysAnalog_GetIntrCauseExtended(const PASS_Type *base)**

Return the PASS interrupt cause register value. Depending on the device, there may be interrupts from these PASS blocks:

1. CTDAC (up to 4 instances)
2. CTB(m) (up to 4 instances)
3. SAR (up to 4 instances)
4. FIFO (up to 4 instances)

Compare the returned value with the enum values in cy_en_sysanalog_intr_cause_t to determine which block caused/triggered the interrupt.

```c
/* Scenario: The device has multiple CTBs or CTDACs and
 * the user wants to know which instance caused the global interrupt.
 * This function is not useful or needed when the device has only
 * one CTB or CTDAC. */
uint32_t intrCause;

intrCause = Cy_SysAnalog_GetIntrCauseExtended(PASS);

if ((uint32_t) CY_SYSANALOG_INTR_CAUSE_CTB0 == (intrCause & (uint32_t) CY_SYSANALOG_INTR_CAUSE_CTB0))
{
    /* CTB0 caused the interrupt */
}
```

Returns: uint32_t — interrupt cause register value.

**__STATIC_INLINE void Cy_SysAnalog_SetDeepSleepMode(cy_en_sysanalog_deep_sleep_t deepSleep)**

Set which parts of the AREF are enabled in Deep Sleep mode:

- Disable the AREF IP block
- Enable the IPTAT generator for fast wakeup from Deep Sleep mode; IPTAT outputs for the CTBs are disabled
- Enable the IPTAT generator and the IPTAT outputs for the CTB
- Enable all generators and outputs: IPTAT, IZTAT, and VREF

Note: the SRSS references are not available to the AREF in Deep Sleep mode. When operating in Deep Sleep mode, the local or external references must be selected.

```c
/* Scenario:
 * The CTB opamps are using the current references from the AREF block.
 * The CTDAC is using the 1.2 V voltage reference from the AREF block.
 * The user wants the CTB and CTDAC to be enabled in Deep Sleep mode.
 * In order for the CTB and CTDAC to function in Deep Sleep mode, the references
 * from the AREF block must also be enabled for Deep Sleep operation. */
Cy_SysAnalog_SetDeepSleepMode(CY_SYSANALOG_DEEPSLEEP_IPTAT_IZTAT_VREF);
```

**__STATIC_INLINE cy_en_sysanalog_deep_sleep_t Cy_SysAnalog_GetDeepSleepMode(void)**

Return the Deep Sleep mode configuration as set by Cy_SysAnalog_SetDeepSleepMode.

```c
/* Scenario: As a system-wide resource, the AREF Deep Sleep mode configuration
 * affects all consumer blocks. The Deep Sleep mode settings should be queried
 * before being modified so as not to affect other blocks. */
cy_en_sysanalog_deep_sleep_t mode;

mode = Cy_SysAnalog_GetDeepSleepMode();

/* Don't enable the voltage reference, VREF, in Deep Sleep unless
 * it is needed by other blocks (e.g. the CTDAC), as it will consume more power. */
if (CY_SYSANALOG_DEEPSLEEP_IPTAT_IZTAT_VREF != mode)
{
    Cy_SysAnalog_SetDeepSleepMode(CY_SYSANALOG_DEEPSLEEP_IPTAT_2);
}
```

Returns: a value from cy_en_sysanalog_deep_sleep_t.

**__STATIC_INLINE void Cy_SysAnalog_Enable(void)**

Enable the AREF hardware block.

```c
/* Scenario: The AREF block has been initialized and needs to be enabled. */
Cy_SysAnalog_Enable();

/* After the AREF is enabled, enable the consumer blocks (SAR, CTDAC, CTB, and CSDv2). */
```

**__STATIC_INLINE void Cy_SysAnalog_Disable(void)**

Disable the AREF hardware block.

```c
/* Scenario: The AREF block is no longer needed.
 * That is, all the consumer blocks (SAR, CTDAC, CTB, and CSDv2) have been disabled.
 * Disable the AREF block to save power. */
Cy_SysAnalog_Disable();
```

**__STATIC_INLINE void Cy_SysAnalog_SetArefMode(cy_en_sysanalog_startup_t startup)**

Set the AREF startup mode from power-on reset or from Deep Sleep wakeup. The AREF can start up in a normal or a fast mode. If fast startup is desired from Deep Sleep wakeup, the IPTAT generators must be enabled during Deep Sleep; this requires a minimum Deep Sleep mode setting of CY_SYSANALOG_DEEPSLEEP_IPTAT_1 (see also Cy_SysAnalog_SetDeepSleepMode).

```c
/* Scenario: The fast startup mode is desired. */
Cy_SysAnalog_SetArefMode(CY_SYSANALOG_STARTUP_FAST);
```

**__STATIC_INLINE void Cy_SysAnalog_VrefSelect(cy_en_sysanalog_vref_source_t vref)**

Set the source for the Vref. The Vref can come from:

- the locally generated 1.2 V reference
- the SRSS, which provides a 0.8 V reference (not available to the AREF in Deep Sleep mode)
- an external device pin

The locally generated reference has higher accuracy, more stability over temperature, and lower noise than the SRSS reference.

```c
/* Select the local 1.2 V generator as the Vref source for optimal analog performance. */
Cy_SysAnalog_VrefSelect(CY_SYSANALOG_VREF_SOURCE_LOCAL_1_2V);
```

**__STATIC_INLINE void Cy_SysAnalog_IztatSelect(cy_en_sysanalog_iztat_source_t iztat)**

Set the source for the 1 uA IZTAT. The IZTAT can come from:

- the locally generated IZTAT
- the SRSS (not available to the AREF in Deep Sleep mode)

The locally generated reference has higher accuracy, more stability over temperature, and lower noise than the SRSS reference.

```c
/* Select the local generator as the IZTAT source for optimal analog performance. */
Cy_SysAnalog_IztatSelect(CY_SYSANALOG_IZTAT_SOURCE_LOCAL);
```

**cy_en_sysanalog_status_t Cy_SysAnalog_DeepSleepInit(PASS_Type *base, const cy_stc_sysanalog_deep_sleep_config_t *config)**

Initialize PASS_ver2 Deep Sleep features such as the Low Power Oscillator, the Deep Sleep Clock, and the Timer.

```c
/* Initialize Deep Sleep features. */
const cy_stc_sysanalog_deep_sleep_config_t dsConfig =
{
    /* .lpOscDsMode  */ CY_SYSANALOG_LPOSC_ALWAYS_ON,
    /* .dsClkSource  */ CY_SYSANALOG_DEEPSLEEP_SRC_LPOSC,
    /* .dsClkdivider */ CY_SYSANALOG_DEEPSLEEP_CLK_DIV_BY_4,
    /* .timerClock   */ CY_SYSANALOG_TIMER_CLK_DEEPSLEEP,
    /* .timerPeriod  */ 4000UL
};

if (CY_SYSANALOG_SUCCESS == Cy_SysAnalog_DeepSleepInit(PASS, &dsConfig))
{
    /* Enable the LpOsc and Timer blocks. */
    Cy_SysAnalog_LpOscEnable(PASS);
    Cy_SysAnalog_TimerEnable(PASS);
}
```
https://www.projecteuclid.org/euclid.aos/1069362303
## The Annals of Statistics

### Second-order correctness of the blockwise bootstrap for stationary observations

#### Abstract

We show that the blockwise bootstrap approximation for the distribution of a studentized statistic computed from dependent data is second-order correct, provided we choose an appropriate variance estimator. We also show how to adapt the $BC_a$ confidence interval of Efron to the dependent case. For the proofs we extend the results of Götze and Hipp on the validity of the formal Edgeworth expansion for a sum to the studentized mean.

#### Article information

Source: Ann. Statist., Volume 24, Number 5 (1996), 1914-1933.
Dates: First available in Project Euclid: 20 November 2003
Permanent link to this document: https://projecteuclid.org/euclid.aos/1069362303
Digital Object Identifier: doi:10.1214/aos/1069362303
Mathematical Reviews number (MathSciNet): MR1421154
Zentralblatt MATH identifier: 0906.62040

#### Citation

Götze, F.; Künsch, H. R. Second-order correctness of the blockwise bootstrap for stationary observations. Ann. Statist. 24 (1996), no. 5, 1914--1933. doi:10.1214/aos/1069362303. https://projecteuclid.org/euclid.aos/1069362303
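As a toy illustration of the resampling scheme the abstract refers to (not code from the paper), here is a minimal moving-block bootstrap of the sample mean in NumPy. The block length, the AR(1) toy series, and the function name are illustrative choices of mine; the paper's second-order results additionally require a studentized statistic with a carefully chosen variance estimator, which this sketch does not attempt.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    # Resample a stationary series by concatenating randomly chosen
    # overlapping blocks of length block_len, so that short-range
    # dependence within each block is preserved.
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

rng = np.random.default_rng(0)

# AR(1) toy series: dependent but stationary observations.
x = np.empty(500)
x[0] = 0.0
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + rng.normal()

# Bootstrap distribution of the sample mean.
means = np.array([moving_block_bootstrap(x, 25, rng).mean() for _ in range(200)])
```

The spread of `means` estimates the sampling variability of the mean under dependence, which an i.i.d. bootstrap would understate for positively correlated data.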
https://www.physicsforums.com/threads/intersection-of-subspaces.87626/
# Intersection of subspaces

1. Sep 5, 2005

### loli12

I have two subspaces U and V of R^3, where

U = {(a1, a2, a3) in R^3 : a1 = 3(a2) and a3 = -a2}
V = {(a1, a2, a3) in R^3 : a1 - 4(a2) - a3 = 0}

I substituted the conditions from U into the equation defining V and got 0 = 0. So, does it mean that the intersection of U and V is the whole R^3, with no restrictions on a1, a2 and a3 (they are free)? Or do the restrictions defining the original subspaces still apply to the intersection?

2. Sep 5, 2005

### AKG

The intersection of U and V cannot possibly be all of R³. How could the intersection of two sets be bigger than both of the sets? U is 1-dimensional (it is cut out by two independent linear conditions) and V is 2-dimensional (one condition), so the intersection is either 1-dimensional or 0-dimensional. Can you find a non-zero point that is in both U and V? If so, then the intersection of U and V is all of U, i.e. U is contained in V. A point in U takes the form (x, x/3, -x/3). Would such a point be in V?

x - 4(x/3) - (-x/3) = x - (4/3)x + (1/3)x = 0
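A quick numerical check of the algebra in the thread (not part of the original posts; the constraint matrix is just a transcription of the defining equations):

```python
import numpy as np

# U: a1 - 3*a2 = 0 and a2 + a3 = 0  ->  U is spanned by u = (3, 1, -1).
u = np.array([3.0, 1.0, -1.0])

# V: a1 - 4*a2 - a3 = 0 is a single linear condition, so V is a plane in R^3.
A_V = np.array([[1.0, -4.0, -1.0]])
dim_V = 3 - np.linalg.matrix_rank(A_V)   # 3 minus one independent constraint

# U's basis vector satisfies V's equation, so U is contained in V and
# the intersection U ∩ V is U itself: a line, not all of R^3.
residual = (A_V @ u).item()              # 3 - 4 + 1 = 0
```

The 0 = 0 that the original poster obtained is exactly this zero residual: it means the conditions of U already force membership in V, not that all constraints vanish.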
https://www.physicsforums.com/threads/god-hypothesis.75695/
# God Hypothesis

1. May 15, 2005

### the_truth

Scientific Method.

1. Observation and description of a phenomenon or group of phenomena.
2. Formulation of a hypothesis to explain the phenomena. In physics, the hypothesis often takes the form of a causal mechanism or a mathematical relation.
3. Use of the hypothesis to predict the existence of other phenomena, or to predict quantitatively the results of new observations.
4. Performance of experimental tests of the predictions by several independent experimenters and properly performed experiments.

God.

This is a proper hypothesis which remains unproven, which is a step forward from the seemingly completely out-of-the-blue hypothesis of god. The aim of this hypothesis is also to provoke discussion on how you choose a hypothesis, that most malleable element of scientific method and one which is very relevant to today's physics. Possibly it is also the element which Einstein refused to work with, and which led to his stagnation.

1: Observation 1.

You cannot measure anything with certainty, due to the Heisenberg uncertainty principle and also because you cannot measure anything precisely. You cannot, for instance, say a ruler is exactly 30 cm long, as the chances are it could very well be 30.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001 cm long, and you cannot measure with such precision; and even if you could measure with such precision, you still wouldn't know whether the ruler is 30.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001 cm long or not.

Observation 2.

You would need to be able to measure things to an infinite degree of precision in order to know the exact length of the ruler.

Observation 3.

If there were an omnipotent sentient being, he would have the ability to measure things to an infinite degree of precision and thus, with the laws of the universe, be able to predict the entire universe. God is also credited with being the creator of the universe, and the laws of the universe are said to exist because he is a watchmaker god, who does not externally influence the universe after it has been set in motion.

2: Hypothesis.

The relationship between observations 1 and 2 and observation 3 is not a coincidence. Bear in mind that observation 3 is an observation of irrational opinions.

3: Evidence.

The possibility of the relationship being a coincidence is unknown. There is also the possibility that the idea of god has caused me to introduce it into my observations, which would be circular. However, it is an observation, and it is allowed by scientific method, so it should not be ignored on that basis. More scientific observations which correlate with ancient ideas of god are required before this relationship can be considered more than a coincidence.

2. May 15, 2005

### <<<GUILLE>>>

You can't measure infinities. I like your speech. I'm only posting a conversation from about 200 years ago, and that's all, because I think it says everything I need/think:

Napoleon: I have heard that you haven't included God in your explanation of the universe?
Laplace: No; I didn't require that hypothesis.
Napoleon: Oh, it's a very good theory, it explains many things.

:rofl: :rofl:

Poor Napoleon; he was very intelligent at strategy, though.

3. May 27, 2005

### the_truth

Yeah... Shuffle 500,000 men into Siberia, great idea.
http://export.arxiv.org/list/gr-qc/pastweek?skip=96&show=25
# General Relativity and Quantum Cosmology

## Authors and titles for recent submissions, skipping first 96

[ total of 120 entries: 1-25 | 22-46 | 47-71 | 72-96 | 97-120 ]

### Tue, 15 Sep 2020 (continued, showing last 4 of 36 entries)

[97] arXiv:2009.05719 (cross-list from astro-ph.HE) [pdf, other]
Title: Possible Signature of First-Order Phase Transition in the Multi-messenger Data of Neutron Stars
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); General Relativity and Quantum Cosmology (gr-qc); Nuclear Theory (nucl-th)

[98] arXiv:2009.05611 (cross-list from cond-mat.quant-gas) [pdf, other]
Title: Inflationary Dynamics and Particle Production in a Toroidal Bose-Einstein Condensate
Subjects: Quantum Gases (cond-mat.quant-gas); General Relativity and Quantum Cosmology (gr-qc)

[99] arXiv:2009.02898 (cross-list from hep-th) [pdf, other]
Title: The Cosmological Optical Theorem
Subjects: High Energy Physics - Theory (hep-th); Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc)

[100] arXiv:2008.07549 (cross-list from astro-ph.CO) [pdf, other]
Title: Primordial black holes as dark matter and gravitational waves from axion inflation
Comments: 25 pages + Appendices, 7 figures. Typos corrected, references and footnotes added. Inspired by the recent literature, a slightly larger value of the curvature threshold is adopted and Figures 4 and 5 are revised accordingly. The general results did not change
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)

### Mon, 14 Sep 2020

[101]
Title: No Inner-Horizon Theorem for Black Holes with Charged Scalar Hair
Comments: 5+8 pages, 8 figures; v2: revised version, typos fixed, hyperbolic black hole with inner horizon added in Fig. 3
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)

[102]
Title: Search for the stochastic gravitational-wave background induced by primordial curvature perturbations in LIGO's second observing run
Subjects: General Relativity and Quantum Cosmology (gr-qc)

[103]
Title: Canonical variational completion of 4D Gauss-Bonnet gravity
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th); Mathematical Physics (math-ph)

[104]
Title: Cosmic Acceleration and Growth of Structure in Massive Gravity
Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Theory (hep-th)

[105]
Title: The (ultra) light in the dark: A potential vector boson of $8.7\times 10^{-13}$ eV from GW190521
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Astrophysical Phenomena (astro-ph.HE); High Energy Physics - Phenomenology (hep-ph)

[106]
Title: All higher-dimensional Majumdar-Papapetrou black holes
Authors: James Lucietti
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)

[107]
Title: Dynamic properties of thermodynamic phase transition for five-dimensional neutral Gauss-Bonnet AdS black hole on free energy landscape
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)

[108]
Title: Scalarized charged black holes in the Einstein-Maxwell-Scalar theory with two U(1) fields
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)

[109]
Title: Wheeler-DeWitt equation rejects quantum effects of grown-up universes as a candidate for dark energy
Journal-ref: Phys. Lett. B 809, 135747 (2020)
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th); Quantum Physics (quant-ph)

[110]
Title: The phase diagram of the multi-matrix model with ABAB-interaction from functional renormalization
Comments: 32 pages, 6 figures, 2 tables
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)

[111] arXiv:2009.05574 (cross-list from hep-th) [pdf, other]
Title: Trace dynamics and division algebras: towards quantum gravity and unification
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc); Quantum Physics (quant-ph)

[112] arXiv:2009.05573 (cross-list from astro-ph.CO) [pdf, other]
Title: Analytical approximations for curved primordial power spectra
Comments: 11 pages, 2 figures, supplementary material available at this https URL To be submitted to Phys. Rev. D
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc)

[113] arXiv:2009.05517 (cross-list from astro-ph.CO) [pdf, other]
Title: Galaxy imaging surveys as spin-sensitive detector for cosmological colliders
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Astrophysical Phenomena (astro-ph.HE); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)

[114] arXiv:2009.05472 (cross-list from astro-ph.HE) [pdf, other]
Title: Don't fall into the gap: GW190521 as a straddling binary
Comments: 5 pages, 3 figures, plus 1 page Appendix
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); General Relativity and Quantum Cosmology (gr-qc)

[115] arXiv:2009.05461 (cross-list from astro-ph.HE) [pdf, other]
Title: GW190521 as a Highly Eccentric Black Hole Merger
Comments: 5 pages, 2 figures, 6 supplementary pages
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); General Relativity and Quantum Cosmology (gr-qc)

[116] arXiv:2009.05201 (cross-list from astro-ph.HE) [pdf, ps, other]
Title: Limiting Superluminal Neutrino Velocity and Lorentz Invariance Violation by Neutrino Emission from the Blazar TXS 0506+056
Comments: 4 pages, 1 table, accepted by Phys. Rev. D
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph)

[117] arXiv:2009.05179 (cross-list from quant-ph) [pdf, ps, other]
Title: Influence of acceleration on multi-body entangled quantum states
Journal-ref: Phys. Rev. A 101, 062111 (2020)
Subjects: Quantum Physics (quant-ph); General Relativity and Quantum Cosmology (gr-qc)

[118] arXiv:2009.05143 (cross-list from astro-ph.IM) [pdf, other]
Title: Model Dependence of Bayesian Gravitational-Wave Background Statistics for Pulsar Timing Arrays
Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM); High Energy Astrophysical Phenomena (astro-ph.HE); General Relativity and Quantum Cosmology (gr-qc); Data Analysis, Statistics and Probability (physics.data-an)

[119] arXiv:2009.05071 (cross-list from hep-th) [pdf, ps, other]
Title: Horndeski genesis: consistency of classical theory
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)

[120] arXiv:2007.16091 (cross-list from hep-th) [pdf, other]
Title: Bra-ket wormholes in gravitationally prepared states
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)
http://math.stackexchange.com/questions/146658/the-game-officers
# The game Officers

In the book "Winning Ways" by Berlekamp, Conway, Guy (aka the bible of combinatorial game theory) there is a short section about the game Officers in Chapter 4. It also has the symbolic name $\cdot {\bf 6}$. Its Grundy function is given by $G(n) := \operatorname{mex}\{G(a) \oplus G(n-1-a) : 1 \leq a \leq n-1\}$, where $\oplus$ is the nim-sum. This is the OEIS sequence A046695, where you also find some values. It is noted that this sequence has "a strong inclination towards a period of $26$", but that "a complete analysis is still to be found". According to the paper "Periods in Taking and Splitting Games" by Ian Caines, Carrie Gates, Richard K. Guy, and Richard J. Nowakowski, this was still open in the year 1999. Has anything changed since then?

- What are the rules? – TonyK May 18 '12 at 15:31

Achim Flammenkamp's list of octal games (most recently dated as of November 2012) seems to report the exploration of $2^{33}$ positions without a proven solution.
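As a quick illustration (not part of the original question), here is a short Python sketch computing the first Grundy values from the recurrence, reading $\cdot {\bf 6}$ as: a move removes one counter from a heap and leaves the remainder as one or two nonempty heaps. The function name and the cut-off are my own choices, and the recurrence bound is my reading of the game's rules.

```python
def grundy_sequence(n_max):
    # Grundy values G(0..n_max) for the octal game .6 ("Officers") via
    # G(n) = mex{ G(a) xor G(n-1-a) : 1 <= a <= n-1 }, with G(0) = 0,
    # so a = n-1 corresponds to leaving the remainder as a single heap.
    G = [0]  # G(0) = 0: the empty position
    for n in range(1, n_max + 1):
        options = {G[a] ^ G[n - 1 - a] for a in range(1, n)}
        g = 0                      # mex: smallest non-negative integer
        while g in options:        # not among the options' Grundy values
            g += 1
        G.append(g)
    return G

print(grundy_sequence(12))
```

Note that $G(1) = 0$ falls out automatically: from a single counter there is no legal move, since removing it would leave zero heaps.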
https://www2.physics.ox.ac.uk/contacts/people/devriendt/publications?page=3
# Publications by Julien Devriendt

## Mergers drive spin swings along the cosmic web

Monthly Notices of the Royal Astronomical Society, Oxford University Press 445 (2014) L46-L50
C Welker, J Devriendt, Y Dubois, C Pichon, S Peirani

The close relationship between mergers and the reorientation of the spin for galaxies and their host dark haloes is investigated using a cosmological hydrodynamical simulation (Horizon-AGN). Through a statistical analysis of merger trees, we show that spin swings are mainly driven by mergers along the filamentary structure of the cosmic web, and that these events account for the preferred perpendicular orientation of massive galaxies with respect to their nearest filament. By contrast, low-mass galaxies (M_s < 10^10 M_⊙ at redshift 1.5), having undergone very few mergers, if at all, tend to possess a spin well aligned with their filament. Haloes follow the same trend as galaxies but display a greater sensitivity to smooth anisotropic accretion. The relative effect of mergers on magnitude is qualitatively different for minor and major mergers: mergers (and diffuse accretion) generally increase the magnitude of the specific angular momentum, but major mergers also give rise to a population of objects with less specific angular momentum left. Without mergers, secular accretion builds up the specific angular momentum of galaxies but not that of haloes. It also (re)aligns galaxies with their filament.

## Integral field spectroscopy of high redshift galaxies with the HARMONI spectrograph on the European Extremely Large Telescope

GROUND-BASED AND AIRBORNE INSTRUMENTATION FOR ASTRONOMY V 9147 (2014) ARTN 91478Z
S Kendrew, S Zieleniewski, N Thatte, J Devriendt, R Houghton, T Fusco, M Tecza, F Clarke, K O'Brien

## Satellite Survival in Highly Resolved Milky Way Class Halos

Monthly Notices of the Royal Astronomical Society 429 (2012) 633-651
S Geen, A Slyz, J Devriendt

Surprisingly little is known about the origin and evolution of the Milky Way's satellite galaxy companions. UV photoionisation, supernova feedback and interactions with the larger host halo are all thought to play a role in shaping the population of satellites that we observe today, but there is still no consensus as to which of these effects, if any, dominates. In this paper, we revisit the issue by re-simulating a Milky Way class dark matter (DM) halo with unprecedented resolution. Our set of cosmological hydrodynamic Adaptive Mesh Refinement (AMR) simulations, called the Nut suite, allows us to investigate the effect of supernova feedback and UV photoionisation at high redshift with sub-parsec resolution. We subsequently follow the effect of interactions with the Milky Way-like halo using a lower spatial resolution (50 pc) version of the simulation down to z=0. The latter produces a population of simulated satellites that we compare to the observed satellites of the Milky Way and M31. We find that supernova feedback reduces star formation in the least massive satellites but enhances it in the more massive ones. Photoionisation appears to play a very minor role in suppressing star and galaxy formation in all progenitors of satellite halos. By far the largest effect on the satellite population is found to be the mass of the host and whether gas cooling is included in the simulation or not.
Indeed, inclusion of gas cooling dramatically reduces the number of satellites captured at high redshift which survive down to z=0.

## Constraining stellar assembly and active galactic nucleus feedback at the peak epoch of star formation

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY 425 (2012) L96-L100
T Kimm, S Kaviraj, JEG Devriendt, SH Cohen, RA Windhorst, Y Dubois, A Slyz, NP Hathi, RRE Jr, RW O'Connell, MA Dopita, J Silk

## Self-regulated growth of supermassive black holes by a dual jet-heating active galactic nucleus feedback mechanism: Methods, tests and implications for cosmological simulations

Monthly Notices of the Royal Astronomical Society 420 (2012) 2662-2683
Y Dubois, J Devriendt, A Slyz, R Teyssier

We develop a subgrid model for the growth of supermassive black holes (BHs) and their associated active galactic nucleus (AGN) feedback in hydrodynamical cosmological simulations. This model transposes previous attempts to describe BH accretion and AGN feedback with the smoothed particle hydrodynamics (SPH) technique to the adaptive mesh refinement framework. It also furthers their development by implementing a new jet-like outflow treatment of the AGN feedback which we combine with the heating mode traditionally used in the SPH approach. Thus, our approach allows one to test the robustness of the conclusions derived from simulating the impact of self-regulated AGN feedback on galaxy formation vis-à-vis the numerical method. Assuming that BHs are created in the early stages of galaxy formation, they grow by mergers and accretion of gas at an Eddington-limited Bondi accretion rate. However, this growth is regulated by AGN feedback which we model using two different modes: a quasar-heating mode when accretion rates on to the BHs are comparable to the Eddington rate, and a radio-jet mode at lower accretion rates which not only deposits energy, but also deposits mass and momentum on the grid. In other words, our feedback model deposits energy as a succession of thermal bursts and jet outflows depending on the properties of the gas surrounding the BHs. We assess the plausibility of such a model by comparing our results to observational measurements of the co-evolution of BHs and their host galaxy properties, and check their robustness with respect to numerical resolution. We show that AGN feedback must be a crucial physical ingredient for the formation of massive galaxies as it appears to be able to efficiently prevent the accumulation of and/or expel cold gas out of haloes/galaxies and significantly suppress star formation. Our model predicts that the relationship between BHs and their host galaxy mass evolves as a function of redshift, because of the vigorous accretion of cold material in the early Universe that drives Eddington-limited accretion on to BHs. Quasar activity is also enhanced at high redshift. However, as structures grow in mass and lose their cold material through star formation and efficient BH feedback ejection, the AGN activity in the low-redshift Universe becomes more and more dominated by the radio mode, which powers jets through the hot circumgalactic medium. © 2012 The Authors. Monthly Notices of the Royal Astronomical Society © 2012 RAS.

## THE EPOCH OF DISK SETTLING: z ∼ 1 TO NOW

ASTROPHYSICAL JOURNAL 758 (2012) ARTN 106
SA Kassin, BJ Weiner, SM Faber, JP Gardner, CNA Willmer, AL Coil, MC Cooper, J Devriendt, AA Dutton, P Guhathakurta, DC Koo, AJ Metevier, KG Noeske, JR Primack

## The radius of baryonic collapse in disc galaxy formation

Monthly Notices of the Royal Astronomical Society 424 (2012) 502-507
SA Kassin, J Devriendt, SM Fall, RS de Jong, B Allgood, JR Primack

In the standard picture of disc galaxy formation, baryons and dark matter receive the same tidal torques, and therefore approximately the same initial specific angular momentum.
However, observations indicate that disc galaxies typically have only about half as much specific angular momentum as their dark matter haloes. We argue this does not necessarily imply that baryons lose this much specific angular momentum as they form galaxies. It may instead indicate that galaxies are most directly related to the inner regions of their host haloes, as may be expected in a scenario where baryons in the inner parts of haloes collapse first. A limiting case is examined under the idealized assumption of perfect angular momentum conservation. Namely, we determine the density contrast Δ, with respect to the critical density of the Universe, by which dark matter haloes need to be defined in order to have the same average specific angular momentum as the galaxies they host. Under the assumption that galaxies are related to haloes via their characteristic rotation velocities, the necessary Δ is ∼600. This Δ corresponds to an average halo radius and mass which are ∼60 per cent and ∼75 per cent, respectively, of the virial values (i.e. for Δ = 200). We refer to this radius as the radius of baryonic collapse R_BC, since if specific angular momentum is conserved perfectly, baryons would come from within it. It is not likely a simple step function due to the complex gastrophysics involved; therefore, we regard it as an effective radius. In summary, the difference between the predicted initial and the observed final specific angular momentum of galaxies, which is conventionally attributed solely to angular momentum loss, can more naturally be explained by a preference for collapse of baryons within R_BC, with possibly some later angular momentum transfer. © 2012 The Authors. Monthly Notices of the Royal Astronomical Society © 2012 RAS.

## Feeding compact bulges and supermassive black holes with low angular momentum cosmic gas at high redshift

Monthly Notices of the Royal Astronomical Society 423 (2012) 3616-3630
Y Dubois, C Pichon, M Haehnelt, T Kimm, A Slyz, J Devriendt, D Pogosyan

We use cosmological hydrodynamical simulations to show that a significant fraction of the gas in high redshift rare massive haloes falls nearly radially to their very centre on extremely short time-scales. This process results in the formation of very compact bulges with specific angular momentum a factor of 5-30 smaller than the average angular momentum of the baryons in the whole halo. Such low angular momentum originates from both segregation and effective cancellation when the gas flows to the centre of the halo along well-defined cold filamentary streams. These filaments penetrate deep inside the halo and connect to the bulge from multiple rapidly changing directions. Structures falling in along the filaments (satellite galaxies) or formed by gravitational instabilities triggered by the inflow (star clusters) further reduce the angular momentum of the gas in the bulge. Finally, the fraction of gas radially falling to the centre appears to increase with the mass of the halo; we argue that this is most likely due to an enhanced cancellation of angular momentum in rarer haloes which are fed by more isotropically distributed cold streams. Such an increasingly efficient funnelling of low angular momentum gas to the centre of very massive haloes at high redshift may account for the rapid pace at which the most massive supermassive black holes grow to reach observed masses around 10^9 M_⊙ at an epoch when the Universe is barely 1 Gyr old. © 2012 The Authors. Monthly Notices of the Royal Astronomical Society © 2012 RAS.
## The environment and redshift dependence of accretion on to dark matter haloes and subhaloes

Monthly Notices of the Royal Astronomical Society (2011)
H Tillson, L Miller, J Devriendt

## How active galactic nucleus feedback and metal cooling shape cluster entropy profiles

Monthly Notices of the Royal Astronomical Society (2011)
Y Dubois, J Devriendt, R Teyssier, A Slyz

## The environment and redshift dependence of accretion on to dark matter haloes and subhaloes

Monthly Notices of the Royal Astronomical Society 417 (2011) 666-680
H Tillson, L Miller, J Devriendt

A dark-matter-only Horizon Project simulation is used to investigate the environment and redshift dependences of accretion on to both haloes and subhaloes. These objects grow in the simulation via mergers and via accretion of diffuse non-halo material, and we measure the combined signal from these two modes of accretion. It is found that the halo accretion rate varies less strongly with redshift than predicted by the Extended Press-Schechter formalism and is dominated by minor merger and diffuse accretion events at z = 0, for all haloes. These latter growth mechanisms may be able to drive the radio-mode feedback hypothesised for recent galaxy-formation models, and have both the correct accretion rate and the form of cosmological evolution. The low-redshift subhalo accretors in the simulation form a mass-selected subsample safely above the mass resolution limit that reside in the outer regions of their host, with ∼70 per cent beyond their host's virial radius, where they are probably not being significantly stripped of mass. These subhaloes accrete, on average, at higher rates than haloes at low redshift and we argue that this is due to their enhanced clustering at small scales. At cluster scales, the mass accretion rate on to haloes and subhaloes at low redshift is found to be only weakly dependent on environment, and we confirm that at z ∼ 2 haloes accrete independently of their environment at all scales, as reported by other authors. By comparing our results with an observational study of black hole growth, we support previous suggestions that at z > 1, dark matter haloes and their associated central black holes grew coevally, but show that by the present day, dark matter haloes could be accreting at fractional rates that are up to a factor of 3-4 higher than their associated black holes. © 2011 The Authors. Monthly Notices of the Royal Astronomical Society © 2011 RAS.

## Rigging dark haloes: Why is hierarchical galaxy formation consistent with the inside-out build-up of thin discs?

Monthly Notices of the Royal Astronomical Society 418 (2011) 2493-2507
C Pichon, D Pogosyan, T Kimm, A Slyz, J Devriendt, Y Dubois

State-of-the-art hydrodynamical simulations show that gas inflow through the virial sphere of dark matter haloes is focused (i.e. has a preferred inflow direction), consistent (i.e. its orientation is steady in time) and amplified (i.e. the amplitude of its advected specific angular momentum increases with time). We explain this to be a consequence of the dynamics of the cosmic web within the neighbourhood of the halo, which produces steady, angular momentum rich, filamentary inflow of cold gas. On large scales, the dynamics within neighbouring patches drives matter out of the surrounding voids, into walls and filaments before it finally gets accreted on to virialized dark matter haloes. As these walls/filaments constitute the boundaries of asymmetric voids, they acquire a net transverse motion, which explains the angular momentum rich nature of the later infall which comes from further away.
We conjecture that this large-scale driven consistency explains why cold flows are so efficient at building up high-redshift thin discs inside out. © 2011 The Authors. Monthly Notices of the Royal Astronomical Society © 2011 RAS.

## Galactic star formation in parsec-scale resolution simulations

Proceedings of the IAU (2011)
LC Powell, F Bournaud, D Chapon, J Devriendt, A Slyz, R Teyssier

The interstellar medium (ISM) in galaxies is multiphase and cloudy, with stars forming in the very dense, cold gas found in Giant Molecular Clouds (GMCs). Simulating the evolution of an entire galaxy, however, is a computational problem which covers many orders of magnitude, so many simulations cannot reach densities high enough or temperatures low enough to resolve this multiphase nature. Therefore, the formation of GMCs is not captured and the resulting gas distribution is smooth, contrary to observations. We investigate how star formation (SF) proceeds in simulated galaxies when we obtain parsec-scale resolution and more successfully capture the multiphase ISM. Both major mergers and the accretion of cold gas via filaments are dominant contributors to a galaxy's total stellar budget and we examine SF at high resolution in both of these contexts.

## The impact of ISM turbulence, clustered star formation and feedback on galaxy mass assembly through cold flows and mergers

Proceedings of the IAU (2011)
LC Powell, F Bournaud, D Chapon, J Devriendt, A Slyz, R Teyssier

Two of the dominant channels for galaxy mass assembly are cold flows (cold gas supplied via the filaments of the cosmic web) and mergers. How these processes combine in a cosmological setting, at both low and high redshift, to produce the whole zoo of galaxies we observe is largely unknown. Indeed there is still much to understand about the detailed physics of each process in isolation. While these formation channels have been studied using hydrodynamical simulations, here we study their impact on gas properties and star formation (SF) with some of the first simulations that capture the multiphase, cloudy nature of the interstellar medium (ISM), by virtue of their high spatial resolution (and corresponding low temperature threshold). In this regime, we examine the competition between cold flows and a supernovae (SNe)-driven outflow in a very high-redshift galaxy (z ≈ 9) and study the evolution of equal-mass galaxy mergers at low and high redshift, focusing on the induced SF. We find that SNe-driven outflows cannot reduce the cold accretion at z ≈ 9 and that SF is actually enhanced due to the ensuing metal enrichment. We demonstrate how several recent observational results on galaxy populations (e.g. enhanced HCN/CO ratios in ULIRGs, a separate Kennicutt-Schmidt (KS) sequence for starbursts and the population of compact early type galaxies (ETGs) at high redshift) can be explained with mechanisms captured in galaxy merger simulations, provided that the multiphase nature of the ISM is resolved.

## How active galactic nucleus feedback and metal cooling shape cluster entropy profiles

Monthly Notices of the Royal Astronomical Society 417 (2011) 1853-1870
Y Dubois, J Devriendt, R Teyssier, A Slyz

Observed clusters of galaxies essentially come in two flavours: non-cool-core clusters characterized by an isothermal temperature profile and a central entropy floor, and cool-core clusters where temperature and entropy in the central region are increasing with radius. Using cosmological resimulations of a galaxy cluster, we study the evolution of its intracluster medium (ICM) gas properties, and through them we assess the effect of different (subgrid) modelling of the physical processes at play, namely gas cooling, star formation, feedback from supernovae and active galactic nuclei (AGNs).
More specifically, we show that AGN feedback plays a major role in the pre-heating of the protocluster as it prevents a high concentration of mass from collecting in the centre of the future galaxy cluster at early times. However, AGN activity during the cluster's later evolution is also required to regulate the mass flow into its core and prevent runaway star formation in the central galaxy. Whereas the energy deposited by supernovae alone is insufficient to prevent an overcooling catastrophe, supernovae are responsible for spreading a large amount of metals at high redshift, enhancing the cooling efficiency of the ICM gas. As the AGN energy release depends on the accretion rate of gas on to its central black hole engine, the AGNs respond to this supernova-enhanced gas accretion by injecting more energy into the surrounding gas, and as a result increase the amount of early pre-heating. We demonstrate that the interaction between an AGN jet and the ICM gas that regulates the growth of the AGN's black hole can naturally produce cool-core clusters if we neglect metals. However, as soon as metals are allowed to contribute to the radiative cooling, only the non-cool-core solution is produced. © 2011 The Authors. Monthly Notices of the Royal Astronomical Society © 2011 RAS.

## Extreme value statistics of smooth Gaussian random fields

Monthly Notices of the Royal Astronomical Society (2011)
S Colombi, O Davis, J Devriendt, S Prunet, J Silk

We consider the Gumbel or extreme value statistics describing the distribution function p_G(ν_max) of the maximum values of a random field ν within patches of fixed size. We present, for smooth Gaussian random fields in two and three dimensions, an analytical estimate of p_G which is expected to hold in a regime where local maxima of the field are moderately high and weakly clustered. When the patch size becomes sufficiently large, the negative of the logarithm of the cumulative extreme value distribution is simply equal to the average of the Euler characteristic of the field in the excursion ν ≥ ν_max inside the patches. The Gumbel statistics therefore represents an interesting alternative probe of the genus as a test of non-Gaussianity, e.g. in cosmic microwave background temperature maps or in 3D galaxy catalogues. It can be approximated, except in the remote positive tail, by a negative Weibull-type form, converging slowly to the expected Gumbel-type form for infinitely large patch size. Convergence is facilitated when large-scale correlations are weaker. We compare the analytic predictions to numerical experiments for the case of a scale-free Gaussian field in two dimensions, achieving impressive agreement between approximate theory and measurements. We also discuss the generalization of our formalism to non-Gaussian fields. © 2011 The Authors. Monthly Notices of the Royal Astronomical Society © 2011 RAS.

## Galactic star formation in parsec-scale resolution simulations

Proceedings of the International Astronomical Union 6 (2011) 487-490
LC Powell, F Bournaud, D Chapon, J Devriendt, A Slyz, R Teyssier

The interstellar medium (ISM) in galaxies is multiphase and cloudy, with stars forming in the very dense, cold gas found in Giant Molecular Clouds (GMCs). Simulating the evolution of an entire galaxy, however, is a computational problem which covers many orders of magnitude, so many simulations cannot reach densities high enough or temperatures low enough to resolve this multiphase nature. Therefore, the formation of GMCs is not captured and the resulting gas distribution is smooth, contrary to observations. We investigate how star formation (SF) proceeds in simulated galaxies when we obtain parsec-scale resolution and more successfully capture the multiphase ISM.
Both major mergers and the accretion of cold gas via filaments are dominant contributors to a galaxy's total stellar budget and we examine SF at high resolution in both of these contexts. © 2011 International Astronomical Union.

## The origin and evolution of the mass-metallicity relation at high redshift using GALICS

Monthly Notices of the Royal Astronomical Society 410 (2011) 2203-2216
J Sakstein, A Pipino, JEG Devriendt, R Maiolino

The Galaxies in Cosmological Simulations (GALICS) semi-analytical model of hierarchical galaxy formation is used to investigate the effects of different galactic properties, including star formation rate (SFR) and outflows, on the shape of the mass-metallicity relation and to predict the relation for galaxies at redshift z = 2.27 and 3.54. Our version of GALICS has the chemical evolution implemented in great detail and is less heavily reliant on approximations, such as instantaneous recycling. We vary the model parameters controlling both the efficiency and redshift dependence of the SFR as well as the efficiency of supernova feedback. We find that the factors controlling the SFR influence the relation significantly at all redshifts and require a strong redshift dependence, proportional to 1 + z, in order to reproduce the observed relation at the low-mass end. Indeed, at any redshift, the predicted relation flattens out at the high-mass end resulting in a poorer agreement with observations in this regime. We also find that variation in the parameters associated with outflows has a minimal effect on the relation at high redshift but does serve to alter its shape in the more recent past. We thus conclude that the relation is one between the SFR and mass and that outflows are only important in shaping the relation at late times. When the relation is stratified by the SFR, it is apparent that the predicted galaxies with increasing stellar masses have higher SFRs, supporting the view that galaxy downsizing is the origin of the relation. Attempting to reproduce the observed relation, we vary the parameters controlling the efficiency of star formation and its redshift dependence and compare the predicted relations with those of Erb et al. at z = 2.27 and Maiolino et al. at z = 3.54 in order to find the best-fitting parameters. We succeed in fitting the relation at z = 3.54 reasonably well; however, we fail at z = 2.27, our relation lying on average below the observed one at the one standard deviation level. We do, however, predict the observed evolution between z = 3.54 and 0. Finally, we discuss the reasons for the above failure and the flattening at high masses, with regards to both the comparability of our predictions with observations and the possible lack of underlying physics. Several of these problems are common to many semi-analytic/hybrid models and so we discuss possible improvements and set the stage for future work by considering how the predictions and physics in these models can be made more robust in light of our results. © 2010 The Authors. Monthly Notices of the Royal Astronomical Society © 2010 RAS.

## The impact of ISM turbulence, clustered star formation and feedback on galaxy mass assembly through cold flows and mergers

Proceedings of the International Astronomical Union 6 (2010) 234-237
LC Powell, F Bournaud, D Chapon, J Devriendt, A Slyz, R Teyssier

Two of the dominant channels for galaxy mass assembly are cold flows (cold gas supplied via the filaments of the cosmic web) and mergers. How these processes combine in a cosmological setting, at both low and high redshift, to produce the whole zoo of galaxies we observe is largely unknown. Indeed there is still much to understand about the detailed physics of each process in isolation.
While these formation channels have been studied using hydrodynamical simulations, here we study their impact on gas properties and star formation (SF) with some of the first simulations that capture the multiphase, cloudy nature of the interstellar medium (ISM), by virtue of their high spatial resolution (and corresponding low temperature threshold). In this regime, we examine the competition between cold flows and a supernovae (SNe)-driven outflow in a very high-redshift galaxy (z ≈ 9) and study the evolution of equal-mass galaxy mergers at low and high redshift, focusing on the induced SF. We find that SNe-driven outflows cannot reduce the cold accretion at z ≈ 9 and that SF is actually enhanced due to the ensuing metal enrichment. We demonstrate how several recent observational results on galaxy populations (e.g. enhanced HCN/CO ratios in ULIRGs, a separate Kennicutt-Schmidt (KS) sequence for starbursts and the population of compact early type galaxies (ETGs) at high redshift) can be explained with mechanisms captured in galaxy merger simulations, provided that the multiphase nature of the ISM is resolved. © Copyright International Astronomical Union 2011.

## The skeleton: Connecting large scale structures to galaxy formation

AIP Conference Proceedings 1241 (2010) 1108-1117
C Pichon, C Gay, D Pogosyan, S Prunet, T Sousbie, S Colombi, A Slyz, J Devriendt

We report on two quantitative, morphological estimators of the filamentary structure of the Cosmic Web, the so-called global and local skeletons. The first, based on a global study of the matter density gradient flow, allows us to study the connectivity between a density peak and its surroundings, with direct relevance to the anisotropic accretion via cold flows on galactic halos. From the second, based on a local constraint equation involving the derivatives of the field, we can derive predictions for powerful statistics, such as the differential length and the relative saddle to extrema counts of the Cosmic web as a function of density threshold (with application to percolation of structures and connectivity), as well as a theoretical framework to study their cosmic evolution through the onset of gravity-induced non-linearities. © 2010 American Institute of Physics.
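Several of the abstracts above (notably the Dubois et al. AGN feedback paper) rely on an Eddington-limited Bondi accretion rate for black hole growth. The sketch below illustrates those two standard textbook rates and the capping; the parameter values are illustrative assumptions of mine, not numbers taken from the papers:

```python
import math

# Physical constants (SI units, CODATA-style values)
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8          # speed of light [m/s]
M_P = 1.673e-27      # proton mass [kg]
SIGMA_T = 6.652e-29  # Thomson cross-section [m^2]
M_SUN = 1.989e30     # solar mass [kg]

def bondi_rate(m_bh, rho_gas, c_sound):
    """Bondi accretion rate: M_dot = 4 pi (G M)^2 rho / c_s^3."""
    return 4.0 * math.pi * (G * m_bh) ** 2 * rho_gas / c_sound ** 3

def eddington_rate(m_bh, eps_r=0.1):
    """Eddington accretion rate M_dot = L_Edd / (eps_r c^2),
    with L_Edd = 4 pi G M m_p c / sigma_T and radiative efficiency eps_r."""
    l_edd = 4.0 * math.pi * G * m_bh * M_P * C / SIGMA_T
    return l_edd / (eps_r * C ** 2)

def capped_rate(m_bh, rho_gas, c_sound):
    """Eddington-limited Bondi rate: the smaller of the two."""
    return min(bondi_rate(m_bh, rho_gas, c_sound), eddington_rate(m_bh))

# Illustrative values (assumed): a 10^8 M_sun black hole in gas of
# density 10^-21 kg/m^3 with a 500 km/s sound speed.
m = 1e8 * M_SUN
print(capped_rate(m, 1e-21, 5e5), "kg/s")
```

For a solar-mass object the Eddington rate above works out to roughly 2e-8 solar masses per year, the usual benchmark, which is a quick sanity check on the constants.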
https://granthaminstitute.com/2018/01/23/the-lower-the-climate-sensitivity-the-better-but-what-we-need-is-zero-carbon/
The lower the climate sensitivity the better – but what we need is zero carbon

Following the publication of a paper presenting a new narrower estimate of "equilibrium climate sensitivity" – a measure of how future greenhouse gas emissions could alter the climate – Professor Joanna Haigh, co-director of the Grantham Institute, explains the implications of climate sensitivity and why it should be interpreted carefully.

What concerns me about a recent paper published in Nature is the interpretation of its results by some commentators. The findings have been pounced on by some as an indication that climate scientists have been exaggerating the risk associated with greenhouse gas increases. Even some climate scientists have concluded that "the risk of very high surface temperature changes occurring in the future will decrease". But this can only be the case if carbon dioxide (CO2) emissions cease.

When we think about the future of the Earth's climate the first thing to consider is how concentrations of atmospheric greenhouse gases will change; the next is how the climate will respond – aka the climate sensitivity. Estimating climate sensitivity is difficult – we need to know not only the direct impact of greenhouse gases trapping heat radiation, but also the impact of knock-on effects such as changes in humidity, cloud, ice, and the broader carbon cycle, including plant species and cover.

'Equilibrium Climate Sensitivity' (ECS) is an estimate of the increase in average global temperatures that would occur when the Earth has fully adjusted to atmospheric CO2 doubling in concentration from pre-industrial levels. A range of different methods have been employed to calculate ECS, using observational records of CO2 concentration and temperature. The models used range from simple energy balance considerations to complex computer simulations of the whole climate system, but all methods need to include assumptions of one type or another.
The 2013 report of the Intergovernmental Panel on Climate Change suggested that ECS lies between 1.5 and 4.5°C. However, the paper published last week suggests a narrowing of this range to between 2.2 and 3.4°C. It is not for me here to discuss the merits of that study, though I note its range still lies within that of the IPCC – and it is certainly not the last word on this issue.

What is important to note is that, while ECS gives an indication of climate sensitivity to increasing greenhouse gases, it is not very useful as a predictor of actual temperature. Firstly, adjustment is very slow, and surface temperatures will continue to rise well after the date of the doubling. Secondly, it assumes the concentrations of greenhouse gases have stabilised.

An easier-to-visualise perspective is given by the idea of burnable carbon: there is a limit to the amount of CO2 that we can allow to accumulate in the atmosphere if we wish to avoid dangerous levels of warming. The greater the rate of CO2 emissions, the sooner that threshold will be reached. Warming can only be halted if CO2 emissions cease. Of course, a lower ECS means that warming is slower, but it must not be interpreted as a maximum possible temperature increase. As long as we go on pumping greenhouse gases into the atmosphere, the temperature will rise and rise inexorably. We need to stop.

Find out more about Grantham Institute research on low-carbon pathways here.
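The link between ECS and equilibrium warming can be made concrete with a back-of-envelope sketch. This uses the standard logarithmic scaling of warming with CO2 concentration; it is my illustration, not a calculation from the article, and the function name and preindustrial baseline are assumptions:

```python
import math

def equilibrium_warming(co2_ppm, ecs, co2_preindustrial_ppm=280.0):
    """Equilibrium temperature rise (degrees C) for a given CO2 level,
    using the logarithmic scaling dT = ECS * log2(C / C0), where ECS is
    the warming per doubling of CO2."""
    return ecs * math.log2(co2_ppm / co2_preindustrial_ppm)

# A doubling (560 ppm against a 280 ppm baseline) yields exactly the
# ECS, by definition; here for the range endpoints quoted above.
for ecs in (1.5, 2.2, 3.4, 4.5):
    print(ecs, equilibrium_warming(560.0, ecs))
```

Note that this is the *equilibrium* response only: as the article stresses, the actual temperature at the moment of doubling is lower because adjustment is slow, and warming continues afterwards.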
https://www.birs.ca/events/2017/5-day-workshops/17w5030/videos/watch/201709211227-Yuan.html
Video From 17w5030: Splitting Algorithms, Modern Operator Theory, and Applications

Thursday, September 21, 2017, 12:27-13:01

Partial error bound conditions and the linear convergence rate of ADMM
http://chalkdustmagazine.com/blog/whats-favourite-shape/
Everyone loves a good shape. You may think that you learnt all the shapes at primary school, but there are plenty still around that mathematicians find interesting. Sprinkled through Issue 03 of Chalkdust were some of the team's favourite shapes. Here we have collected them together, and added many more. We'd really love to hear about yours! Send them to us at [email protected], tweet them to @chalkdustmag, or post them on Facebook and you might just see them on a future blog!

### The triangle (Eleanor Doman)

My favourite shape is one you cannot get away from—the triangle. The cosine rule guarantees that for any triangle with the lengths of the sides given, there is a unique combination of internal angles. Simply put, the triangle is rigid provided the sides are fixed, making it an essential shape in the fields of architecture and engineering.

### Möbius strip (Rob Beckett)

My favourite 'shape' is the one-sided non-orientable surface called the Möbius strip. This can be created by simply giving a long strip of paper a half-twist and gluing the ends together. One of the explanations most regularly associated with the Möbius strip is that of MC Escher, who described an ant crawling along its surface. The ant would be able to do this and return to its starting point having not even crossed an edge (or maybe it keeps crawling on indefinitely hoping to find the end!). In the Numberphile video Möbius bridges and buildings, Carlo H Séquin (UC Berkeley) considers using the idea of a Möbius strip to create aesthetic bridges and buildings.

### Gabriel's horn (Matthew Scroggs)

My favourite shape has a finite volume but an infinite surface area. This means that it is possible to fill it with paint, but not possible to paint its surface. Gabriel's horn can be created by taking the curve $y=\frac1x$ (for $x\geq1$) and rotating it about the $x$-axis.
Its volume is \begin{align*} \pi\int_1^\infty y^2\,dx&=\pi\int_1^\infty \tfrac1{x^2}\,dx\\&=\pi\left[-\tfrac1x\right]_1^\infty\\&=\pi\lim_{k\to\infty}\left(1-\tfrac1k\right)\\&=\pi, \end{align*} which is finite. Its surface area is \begin{align*} 2\pi\int_1^\infty y\sqrt{1+\left(\tfrac{dy}{dx}\right)^2}\,dx&=2\pi\int_1^\infty \tfrac1x\sqrt{1+\tfrac1{x^4}}\,dx. \end{align*} Whenever $x$ is positive, $\displaystyle\tfrac1x\sqrt{1+\tfrac1{x^4}}$ is greater than $\tfrac1x$ and so \begin{align*} 2\pi\int_1^\infty \tfrac1x\sqrt{1+\tfrac1{x^4}}\,dx&\geq 2\pi\int_1^\infty \tfrac1x\,dx\\&=2\pi\left[\ln x\right]_1^\infty\\&=2\pi\lim_{k\to\infty}(\ln k), \end{align*} which is infinite.

### The circle (Pietro Servini)

Mathematically, the circle is the set of all points in a plane that are at the same distance (the radius) from the centre, and it's intimately related to the most famous irrational number, $\pi$, which appears in the most unexpected places. Historically, the circle inspired the creation of the wheel, generally accepted as mankind's greatest invention, giving us the ability to move easily and at speed and, later, to develop complex machinery relying on gears and cogs. The first known wheel, however, was not used for locomotion but for pottery: a potter's wheel was found in Mesopotamia (modern-day Iraq) dating to around 3,500 BC, relatively late in humanity's development. Perhaps most importantly, the circle then gives rise to the 3D sphere, whose aerodynamic characteristics and ability to roll have spawned so many sports without which the world would be so much bleaker…

### 4-simplex (Belgin Seymenoğlu)

We can start with just a point which, believe it or not, is already a simplex. Then if we introduce a second point, we can connect the two to get a new shape called a 1-simplex (or a line to you and me). Next, if we take a third point, and connect it to our two other points, we have the 2-simplex, otherwise known as a triangle.
But if we then connect our three points in the triangle to yet another new point, we get a three-dimensional shape: the tetrahedron (or the 3-simplex). What's more, there is yet another member of the family: a four-dimensional shape. This shape is called the 4-simplex, and it has five vertices. The 4-simplex is useful in population biology because if you have, for example, five different species, you can represent the fractions of each population by plotting a point in the 4-simplex. If that's not enough for you, you can make a five-dimensional, six-dimensional or even an $n$-dimensional simplex!

### Penrose tiles (Rudolf Kohulák)

My favourite shape is a rhombus that has been split into two pieces called the 'kite' and the 'dart'. These shapes might not look interesting, but the British mathematical physicist Roger Penrose discovered an unusual feature of these objects: they can be arranged to cover the whole plane without any gaps or overlaps, yet the resulting tiling is aperiodic. In particular, it lacks translational symmetry (ie you cannot shift the pattern such that the result would end up being identical to the original picture). The discovery revolutionised the field of crystallography and led to the identification of quasicrystals.

### Sierpinski triangle (Nikoleta Kalaydzhieva)

My favourite shape is the Sierpinski triangle. It is one of the most basic fractal shapes, but appears in various mathematical areas. What I find fascinating about it is how many different ways there are of constructing it. For example, you could use a methodical geometric approach by inscribing a similar triangle in the original one via its midpoints and iterating. Another, more intriguing construction is via the chaos game. You can even construct it using basic algebra, by shading the odd numbers in Pascal's triangle.

### Helicoid (Alexander Doak)

My favourite shape is the helicoid, as it has many interesting geometric properties. Firstly, it is a ruled surface.
The helicoid is constructed by moving a straight line in space; in this case by rotating it about an axis while moving it along said axis at a constant speed. Ruled surfaces are very popular in architecture, such as hyperboloid cooling towers and, of course, helicoid staircases. Secondly, it is a minimal surface. In fact, it has been proven that the helicoid and the plane are the only ruled minimal surfaces!

### The heptadecagon

My favourite geometrical figure is the heptadecagon, a regular polygon with 17 sides. It comes with the history of a great challenge that required the efforts of almost eighty generations of mathematicians to solve. The ancient Greeks knew how to construct polygons with 3, 4, 5, 6, 8, 10, 12, 15, 16, and 20 edges using only a straightedge and compass, while 18th-century algebraists knew that it was impossible to use the same tools to construct polygons with 7, 9, 11, 13, 14, 18 and 19 sides. Gauss, at 19, was the first to prove that the heptadecagon was constructible.

### The light cone (Matthew Wright)

My favourite shape is the light cone. It is a four-dimensional shape lying in space-time, and it is the path travelled by beams of light emitted from a single point. Although a simple concept, it turns out to be of fundamental importance: it determines the entire notion of causality. Everything that can be causally affected by an event at one point in space and time must lie within that event's light cone, since nothing can travel faster than the speed of light. Einstein realised that gravity wasn't a force in the conventional sense, but rather distorts the structure of space and time, tipping and deforming the light cones in the process. This is why nothing can escape a black hole: the light cones are tipped over so much that everything in the future of the light cone must lie inside the black hole.

Inspired by the shapes above? Think you know better?
Remember to send us your favourite shape either by email ([email protected]), on Twitter (@chalkdustmag) or Facebook (/chalkdustmag).
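The Gabriel's horn paradox described above is easy to check numerically. The sketch below (plain Python written for this post, not from the article) truncates the horn at $x=k$ and integrates with the midpoint rule: the volume settles down to $\pi$ as $k$ grows, while the surface area — which carries the $2\pi$ factor from the standard surface-of-revolution formula — keeps growing like $2\pi\ln k$.

```python
import math

def horn_volume(k, n=100_000):
    """Midpoint rule for pi * integral_1^k (1/x^2) dx."""
    h = (k - 1) / n
    return math.pi * h * sum(1 / (1 + (i + 0.5) * h) ** 2 for i in range(n))

def horn_surface(k, n=100_000):
    """Midpoint rule for 2*pi * integral_1^k (1/x) * sqrt(1 + 1/x^4) dx."""
    h = (k - 1) / n
    total = 0.0
    for i in range(n):
        x = 1 + (i + 0.5) * h
        total += (1 / x) * math.sqrt(1 + 1 / x**4)
    return 2 * math.pi * h * total

print(horn_volume(1000))                      # close to pi
print(horn_surface(100), horn_surface(1000))  # keeps growing, roughly 2*pi*ln(k)
```

The truncation point $k$ and the number of subintervals are arbitrary choices; any reasonably large values show the same behaviour.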
https://nyuscholars.nyu.edu/en/publications/the-scope-and-limits-of-simulation-in-automated-reasoning-3
# The scope and limits of simulation in automated reasoning

Ernest Davis, Gary Marcus

Research output: Contribution to journal › Review article › peer-review

## Abstract

In scientific computing and in realistic graphic animation, simulation - that is, step-by-step calculation of the complete trajectory of a physical system - is one of the most common and important modes of calculation. In this article, we address the scope and limits of the use of simulation, with respect to AI tasks that involve high-level physical reasoning. We argue that, in many cases, simulation can play at most a limited role. Simulation is most effective when the task is prediction, when complete information is available, when a reasonably high quality theory is available, and when the range of scales involved, both temporal and spatial, is not extreme. When these conditions do not hold, simulation is less effective or entirely inappropriate. We discuss twelve features of physical reasoning problems that pose challenges for simulation-based reasoning. We briefly survey alternative techniques for physical reasoning that do not rely on simulation.

Original language: English (US)
Pages: 60-72
Number of pages: 13
Journal: Artificial Intelligence
Volume: 233
DOI: https://doi.org/10.1016/j.artint.2015.12.003
State: Published - Apr 2016

## Keywords

• Physical reasoning
• Simulation

## ASJC Scopus subject areas

• Language and Linguistics
• Linguistics and Language
• Artificial Intelligence
http://www.math.mtu.edu/graduate/comp/node12.html
### Sample questions

1. Compare and contrast the line search and trust region methods of globalizing a quasi-Newton method. Your discussion should touch on the following points: (a) The cost of taking a step (that is, of solving the line search subproblem and trust region subproblem), and the complexity of the algorithms. (b) Dealing with nonconvexity. (c) Convergence theorems.

2. Derive the BFGS update used for approximating the Hessian of the objective function in an unconstrained minimization problem. Explain the rationale for the steps in the derivation.

3. (a) State carefully and prove the first-order necessary condition for $f:\mathbb{R}^n\to\mathbb{R}$ to have a local minimum at $x=x^*$. (b) Give an example to show that the first-order condition is only necessary, not sufficient. (c) State carefully and prove the second-order necessary condition for $f$ to have a local minimum at $x=x^*$. (d) Give an example to show that the second-order condition is only necessary, not sufficient.

4. (a) Let $\{x_k\}$ be a sequence in $\mathbb{R}^n$ and suppose $x_k\to x^*$. Define ``$x_k\to x^*$ q-quadratically.'' (b) Let $f:\mathbb{R}^n\to\mathbb{R}$ be given. Newton's method for minimizing $f$ is locally q-quadratically convergent. State carefully and prove this theorem.

5. Suppose $f:\mathbb{R}^n\to\mathbb{R}$ is convex. (a) State and prove a theorem indicating that the usual first-order necessary condition is, in this case, a sufficient condition. (b) Prove that every local minimum of $f$ is, in fact, a global minimum.

6. Consider the equality-constrained problem $$\min_x f(x)\quad\text{subject to}\quad h(x)=0, \qquad (2)$$ where $f:\mathbb{R}^n\to\mathbb{R}$ and $h:\mathbb{R}^n\to\mathbb{R}^m$ are smooth functions. (a) Explain how to apply the quadratic penalty method to (2). How does one obtain an estimate of the Lagrange multiplier? (b) Explain how to apply the augmented Lagrangian method to (2). How does one obtain an estimate of the Lagrange multiplier?

7. Recall the definition of a vector norm $\|\cdot\|$ on $\mathbb{R}^n$. Derive the formula for the corresponding induced matrix norm of $A\in\mathbb{R}^{m\times n}$, and prove that it is correct.

8. (a) What is the condition number of a matrix $A$?
(b) Explain how this condition number is related to the problem of computing the solution $x$ to $Ax=b$, where $b$ is regarded as the data of the problem, and $A$ is regarded as being known exactly.

9. Consider the least-squares problem $Ax=b$, where $A\in\mathbb{R}^{m\times n}$, $m>n$, and $b\in\mathbb{R}^m$ are given, and $x$ is to be determined. (a) Assume that $A$ has full rank. Explain how to solve the least-squares problem using: i. the normal equations; ii. the QR factorization of $A$; iii. the SVD of $A$. In each case, your explanation must include a justification that the algorithm leads to the solution of the least-squares problem (e.g. explain why the solution of the normal equations is the solution of the least-squares problem). (b) Discuss the advantages and disadvantages of each of the above methods. Which is the method of choice for the full rank least-squares problem?

10. (a) Give a simple example to show that Gaussian elimination without partial pivoting is unstable in finite-precision arithmetic. (Hint: The example can be as small as $2\times 2$.) (b) Using the concept of backward error analysis, explain the conditions under which Gaussian elimination with partial pivoting can be unstable in finite-precision arithmetic. (Note: This question does not ask you to perform a backward error analysis. Rather, you can quote standard results in your explanation.) (c) Give an example to show that Gaussian elimination with partial pivoting can be unstable.

11. (a) Suppose that $A\in\mathbb{R}^{n\times n}$ has eigenvalues $|\lambda_1|>|\lambda_2|\geq\cdots\geq|\lambda_n|$. Explain how to perform the power method, and under what conditions it converges to an eigenvalue. (b) Explain the idea of simultaneous iteration. (c) Explain the QR algorithm and its relationship to simultaneous iteration.

12. Suppose that $A\in\mathbb{R}^{n\times n}$ is invertible, $B$ is an estimate of $A^{-1}$, and $AB=I+E$. Show that the relative error in $B$ is bounded as $\frac{\|B-A^{-1}\|}{\|A^{-1}\|}\leq\|E\|$ (using an arbitrary induced matrix norm).

13. Show that if $A$ is symmetric positive definite and banded, say $a_{ij}=0$ for $|i-j|>p$, then the Cholesky factor $B$ of $A$ satisfies $b_{ij}=0$ for $j>i$ or $j<i-p$.

14.
Suppose that Gaussian elimination (without partial pivoting) is applied to a symmetric positive definite matrix $A\in\mathbb{R}^{n\times n}$. Write $$E_{n-1}\cdots E_2E_1A=U,$$ where each $E_j$ is an elementary (lower triangular) matrix (left-multiplication by $E_j$ accomplishes the $j$th step of Gaussian elimination) and $U$ is upper triangular. None of the $E_j$s is a permutation matrix, that is, no row interchanges are performed. The purpose of this exercise is to prove that this is possible (i.e. that Gaussian elimination can be applied without row interchanges) and to prove the following inequality: $$\max_{i,j}|u_{ij}|\leq\max_{i,j}|a_{ij}|.$$ Do this by proving the following three lemmas: (a) Let $B$ be a symmetric positive definite matrix. Then $b_{ii}>0$ for $i=1,\dots,n$, and the largest entry of $B$ (in magnitude) occurs on the diagonal. (b) Let $A$ be a symmetric positive definite matrix, and suppose one step of Gaussian elimination is applied to $A$, leaving a trailing $(n-1)\times(n-1)$ submatrix $\tilde{A}$. Then $\tilde{A}$ is also symmetric positive definite. (c) Using the notation of the previous lemma, $$\max_{i,j}|\tilde{a}_{ij}|\leq\max_{i,j}|a_{ij}|.$$ Now complete the proof by induction. (Note that this result both proves that no partial pivoting is required for a symmetric positive definite matrix, and also that Gaussian elimination is perfectly stable when applied to such a matrix.)

15. Let $A\in\mathbb{R}^{m\times n}$ have SVD $A=USV^T$, where $U\in\mathbb{R}^{m\times m}$ and $V\in\mathbb{R}^{n\times n}$ are orthogonal and $S$ is diagonal ($S_{ij}=0$ if $i\neq j$), with diagonal entries $\sigma_1\geq\sigma_2\geq\cdots$. Define $S^{(k)}$ by retaining the $k$ largest singular values of $S$ and setting the remaining ones to zero, and define $A^{(k)}$ by $A^{(k)}=US^{(k)}V^T$. What is $\|A-A^{(k)}\|_2$, where $\|\cdot\|_2$ is the matrix norm induced by the Euclidean ($\ell_2$) vector norm? Prove your answer.

Math Dept Webmaster 2003-08-28
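As a concrete companion to question 11(a), here is a small power-method sketch (plain Python, written for these notes; the $2\times 2$ matrix and starting vector are illustrative choices, not part of the exam). For a symmetric matrix with a dominant eigenvalue, the Rayleigh quotient of the normalized iterates converges to that eigenvalue.

```python
import math

def power_method(A, x0, iters=200):
    """Power iteration with a Rayleigh-quotient eigenvalue estimate.
    Assumes A is square with |lambda_1| > |lambda_2| and that x0 has a
    component in the direction of the dominant eigenvector."""
    n = len(A)
    x = list(x0)
    lam = 0.0
    for _ in range(iters):
        # y = A x, then normalize
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]
        # Rayleigh quotient x^T A x as the eigenvalue estimate
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = sum(x[i] * Ax[i] for i in range(n))
    return lam

A = [[2.0, 1.0], [1.0, 2.0]]        # symmetric, eigenvalues 3 and 1
print(power_method(A, [1.0, 0.0]))  # converges to the dominant eigenvalue 3
```

The convergence rate is governed by $|\lambda_2/\lambda_1|$, which is the condition referred to in the question.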
https://gmatclub.com/forum/if-a-b-and-c-are-greater-than-0-and-a-is-twice-as-large-as-90197.html
If a, b, and c are greater than 0 and a is twice as large as

Senior Manager
Joined: 22 Dec 2009
Posts: 350

If a, b, and c are greater than 0 and a is twice as large as [#permalink] 07 Feb 2010, 12:14

Need algebraic solution - step by step:

If a, b, and c are greater than 0 and a is twice as large as b percent of c, then in terms of b and c, what is a percent of c?

Math Expert
Joined: 02 Sep 2009
Posts: 44412

Re: If a, b, and c are greater than 0 and a is twice as large as [#permalink] 07 Feb 2010, 12:29

jeeteshsingh wrote:
Need algebraic solution - step by step:
If a, b, and c are greater than 0 and a is twice as large as b percent of c, then in terms of b and c, what is a percent of c?
(A) $$\frac{2bc}{100}$$ (B) $$\frac{2bc^2}{1000}$$ (C) $$\frac{bc^2}{5000}$$ (D) $$\frac{b^2c}{5000}$$ (E) $$\frac{5000b}{c^2}$$

The OA is C.

$$a$$ is twice as large as $$b$$ percent of $$c$$ --> $$a=2c\frac{b}{100}$$

What is $$a$$ percent of $$c$$? --> $$c\frac{a}{100}=?$$

Multiply the first equation by $$\frac{c}{100}$$ --> $$a\frac{c}{100}=\frac{c}{100}2c\frac{b}{100}$$ --> $$c\frac{a}{100}=\frac{c^2b}{5000}$$

Senior Manager
Joined: 22 Dec 2009
Posts: 350

Re: If a, b, and c are greater than 0 and a is twice as large as [#permalink] 07 Feb 2010, 12:49

Thanks Bunuel.. I misunderstood the question's last part as "What percent is a of c", and hence I was getting 2b as the answer!

Senior Manager
Status: Not afraid of failures, disappointments, and falls.
Joined: 20 Jan 2010
Posts: 286

Re: If a, b, and c are greater than 0 and a is twice as large as [#permalink] 31 Oct 2010, 04:43

Explanation: We can translate the wording into mathematics and use algebra to solve step by step.

$$a$$ is twice $$b$$ percent of $$c$$ ==> $$a = 2 * \frac{b}{100} * c = \frac{2bc}{100}$$ --> eq-1

and "What is $$a$$ percent of $$c$$?" ==> $$x = \frac{a}{100} * c$$ --> eq-2

As we know the value of $$a$$ from eq-1, putting that value into eq-2 gives

$$x = \frac{2bc}{100}*c*\frac{1}{100} = \frac{bc}{50}*c*\frac{1}{100} = \frac{bc^2}{50}*\frac{1}{100} = \frac{bc^2}{5000},$$

which is option C.

Senior Manager
Joined: 24 Aug 2009
Posts: 488

Re: If a, b, and c are greater than 0 and a is twice as large as [#permalink] 05 Sep 2012, 11:32

In my view, the best way to solve this question is to pick smart numbers and then verify the options.

Manager
Status: Prevent and prepare. Not repent and repair!!
Joined: 13 Feb 2010
Posts: 238

Re: If a, b, and c are greater than 0 and a is twice as large as [#permalink] 16 Sep 2012, 06:43

jeeteshsingh wrote:
Need algebraic solution - step by step:
If a, b, and c are greater than 0 and a is twice as large as b percent of c, then in terms of b and c, what is a percent of c?
(A) $$\frac{2bc}{100}$$ (B) $$\frac{2bc^2}{1000}$$ (C) $$\frac{bc^2}{5000}$$ (D) $$\frac{b^2c}{5000}$$ (E) $$\frac{5000b}{c^2}$$

The OA is C.

It is already given that $$a=2\frac{bc}{100}$$ (twice as large as b percent of c)...... let this be equation 1. We need a percent of c in terms of b and c, so multiply equation 1 by $$\frac{c}{100}$$: we get $$\frac{ac}{100}=\frac{2bc^2}{10000}=\frac{bc^2}{5000}$$. Therefore the answer is C.

Senior Manager
Joined: 13 Aug 2012
Posts: 453

Re: If a, b, and c are greater than 0 and a is twice as large as [#permalink] 07 Dec 2012, 03:45

b percent of c: $$\frac{bc}{100}$$

a is twice as large as b percent of c: $$a = \frac{2bc}{100}=\frac{bc}{50}$$

a percent of c: $$\frac{ac}{100} = \frac{bc}{50}*\frac{c}{100}=\frac{bc^2}{5000}$$

Manager
Joined: 31 May 2010
Posts: 87

Re: If a, b, and c are greater than 0 and a is twice as large as [#permalink] 10 Jul 2013, 04:37

Let's say a = 10 and b = 5. As per the given info, 10 = 2*5*c/100, which gives c = 100. Substitute the values of 'b' and 'c' into the answer choices to see which one gives the required value.
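For anyone who wants to sanity-check the algebra numerically, here is a quick sketch (plain Python; the sample values of b and c are arbitrary choices): compute a directly from the verbal definition and compare "a percent of c" against option (C).

```python
def a_percent_of_c(b, c):
    a = 2 * (b / 100) * c   # "a is twice as large as b percent of c"
    return (a / 100) * c    # "a percent of c"

# Compare against option (C): b*c^2 / 5000
for b, c in [(5, 100), (3, 40), (12.5, 8)]:
    assert abs(a_percent_of_c(b, c) - b * c**2 / 5000) < 1e-12
print("option (C) matches")
```

This is exactly the "pick smart numbers" approach suggested above, just automated.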
http://math.stackexchange.com/questions/66014/why-probability-measures-in-ergodic-theory
# Why probability measures in ergodic theory?

I just had a look at Walters' introductory book on ergodic theory and was struck that the book always sticks to probability measures. Why is it the case that ergodic theory mainly considers probability measures? Is it that the important theorems, for example Birkhoff's ergodic theorem, are true only for probability measures? Or is it because of the relation with concepts from thermodynamics such as entropy?

I also wish to ask one more question; this one slightly more technical. Probability theory always works with the Borel sigma algebra; it is rarely the case that the sigma algebra is enlarged to the Lebesgue sigma algebra for the case of the real numbers (for defining random variables) or the unit circle, for instance. In ergodic theory, do we go by this restriction, or not? That is, when ignoring sets of measure zero, do we have that subsets of sets of measure zero are measurable?

- Everything that works for probability measures should also probably work for finite measures (by mere normalization). As for infinite measure spaces, there is a well-developed theory in that case too. See Aaronson's monograph: amazon.com/… – Mark Sep 20 '11 at 10:58

The second question is addressed in this thread: mathoverflow.net/questions/31603/… – user18297 Dec 19 '11 at 20:57

The question isn't really about probability spaces, it's about finite measures. Usually classic ergodic theory (by classic I mean on finite measure spaces) is developed on probability spaces, but it also works on any finite measure space: just normalize the measure and everything will work fine. This hypothesis is really needed; some theorems simply fail on spaces that don't have finite measure, eg, the Poincaré recurrence theorem is not true if you open up this possibility. (Just take the transformation defined on the real line by $T(x)=x+1$. It is measure preserving but it is not recurrent.)

Specifically on the Birkhoff theorem: it is still valid on $\sigma$-finite spaces, but it doesn't give you much information about the limit. In fact, the Birkhoff averages converge to 0. But there is a nice theory going on $\sigma$-finite spaces with full measure infinity. Actually there is a nice book by Aaronson about infinite ergodic theory and some really good notes by Zweimüller. Things here change a bit; eg, you don't have the property given by Poincaré recurrence (you have to ask for it as a definition). Some of the results try to change how you form the Birkhoff sum in order to get some additional information, and they can be applied to the study of Markov chains. Another nice example that has been the object of recent study is Boole's transformation, defined by \begin{eqnarray*} B: \mathbb{R} &\rightarrow& \mathbb{R} \\ x &\mapsto& \dfrac{x^2-1}{x} \end{eqnarray*} I don't know if I made myself very clear, but I recommend those texts. You should try them; they develop this theory and address your kind of question.

Aaronson, J. - An Introduction to Infinite Ergodic Theory. Mathematical Surveys and Monographs, AMS, 1997.

Zweimüller, R. - Surrey Notes on Infinite Ergodic Theory.
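A quick numerical illustration of why Boole's transformation lives in infinite ergodic theory (a sketch written for this answer; the interval endpoints are arbitrary choices): writing the map as $T(x)=x-1/x$, every $y$ has exactly two preimages, one on each branch, and the lengths of the two preimage intervals of $[a,b]$ always sum to $b-a$. So $T$ preserves Lebesgue measure on the whole real line, which has infinite total measure.

```python
import math

def preimages(y):
    # Solve x - 1/x = y:  x = (y ± sqrt(y^2 + 4)) / 2, one root per branch
    s = math.sqrt(y * y + 4)
    return (y - s) / 2, (y + s) / 2

a, b = 0.3, 1.7
(am, ap), (bm, bp) = preimages(a), preimages(b)
# Both branches are increasing, so the preimage of [a, b] is
# [am, bm] ∪ [ap, bp]; its total length should equal b - a.
total = (bm - am) + (bp - ap)
print(total, b - a)  # both equal 1.4 up to rounding
```

Equivalently, the derivatives of the two branch inverses sum to 1 for every $y$, which is the change-of-variables version of the same fact.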
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-5-polynomials-and-factoring-5-1-introduction-to-factoring-5-1-exercise-set-page-310/59
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition) $(y+8)(y^2-2)$ Using factoring by grouping, the factored form of the given expression, $y^3+8y^2-2y-16 ,$ is \begin{array}{l} (y^3+8y^2)-(2y+16) \\\\= y^2(y+8)-2(y+8) \\\\= (y+8)(y^2-2) .\end{array}
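One quick way to double-check a factorisation like this (a small Python sketch, not part of the textbook solution) is to compare both sides at a few values of $y$: two polynomials of degree 3 that agree at four points must be identical.

```python
# Check y^3 + 8y^2 - 2y - 16 == (y + 8)(y^2 - 2) at four sample points
for y in (-3, 0, 2, 7):
    assert y**3 + 8 * y**2 - 2 * y - 16 == (y + 8) * (y**2 - 2)
print("factorisation checks out")
```

The sample points are arbitrary; any four distinct values would do.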
https://ask.openstack.org/en/questions/8946/revisions/
Revision history [back]

External Gateway

I have a 3 node setup: 1 control node, which is also set up as the network node, and 2 compute nodes. I am able to get instances of cirros running and, after manually setting the IP addresses, I'm able to ping instances that are on the same network. DHCP never assigns an address, and I get the following in the instance console logs:

Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing

The instances also can't reach their external gateway on the router. This is what my network topology looks like: http://i.imgur.com/Vx4L4dX.jpg

There's so much info I could include, but it would make the post all cluttered. Where should I begin to troubleshoot this?
https://tex.stackexchange.com/questions/25477/pst-text-with-plain-xetex
# pst-text with plain XeTeX How, if possible, can I use `pst-text` with plain XeTeX? I have TeX Live 2011. When I try to run the minimal example: ``````\input pst-text \pscharpath{TeXnik} \bye `````` , I get a bunch of `** WARNING`'s, among them: ``````** WARNING ** Failed to read converted PSTricks image file. `````` and ``````** WARNING ** Interpreting special command PST: (ps:) failed. `````` Am I doing something wrong, or is it not supposed to work (this way), or is there some step I'm missing? I don't get any warnings or anything weird in the logfile, but I do to the terminal: ``````Ilmalaiva% xetex pikakoe.tex This is XeTeX, Version 3.1415926-2.3-0.9997.5 (TeX Live 2011) restricted \write18 enabled. entering extended mode (./pikakoe.tex (/usr/local/texlive/2011basic/texmf-dist/tex/generic/pst-text/pst-text.tex (/usr/local/texlive/2011basic/texmf-dist/tex/generic/pstricks/pstricks.tex we are running tex and have to define some LaTeX commands ... (/usr/local/texlive/2011basic/texmf-dist/tex/generic/xkeyval/pst-xkey.tex 2005/11/25 v1.6 PSTricks specialization of xkeyval (HA) (/usr/local/texlive/2011basic/texmf-dist/tex/generic/xkeyval/xkeyval.tex 2008/08/13 v2.6a key=value parser (HA) (/usr/local/texlive/2011basic/texmf-dist/tex/generic/xkeyval/xkvtxhdr.tex 2005/02/22 v1.1 xkeyval TeX header (HA)) (/usr/local/texlive/2011basic/texmf-dist/tex/generic/xkeyval/keyval.tex))) (/usr/local/texlive/2011basic/texmf-dist/tex/generic/pstricks/pst-fp.tex `pst-fp' v0.05, 2010/01/17 (hv)) `PSTricks' v2.20 <2011/04/23> (tvz) (/usr/local/texlive/2011basic/texmf-dist/tex/xetex/xetex-pstricks/pstricks.con (/usr/local/texlive/2011basic/texmf-dist/tex/generic/pstricks/config/xdvipdfmx. cfg)) (/usr/local/texlive/2011basic/texmf-dist/tex/xetex/xetex-pstricks/pstricks.con (/usr/local/texlive/2011basic/texmf-dist/tex/generic/pstricks/config/xdvipdfmx. cfg Using PSTricks configuration for XeTeX+xdvipdfmx ))) v1.00, 2006/11/05(tvz,hv)) [1] ** WARNING ** pdf_open: Not a PDF 1.[1-5] file. 
** WARNING ** Failed to include image file "/var/folders/2w/40kkgr916n34r23ts240d9bh0000gn/T//dvipdfmx.8Yyf2pkO" ** WARNING ** >> Please check if ** WARNING ** >> rungs -q -dNOPAUSE -dBATCH -sPAPERSIZE=a0 -sDEVICE=pdfwrite -dCompatibilityLevel=%v -dAutoFilterGrayImages=false -dGrayImageFilter=/FlateEncode -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode -sOutputFile='%o' '%i' -c quit ** WARNING ** >> %o = output filename, %i = input filename, %b = input filename without suffix ** WARNING ** >> can really convert "/var/folders/2w/40kkgr916n34r23ts240d9bh0000gn/T//dvipdfmx.8Yyf2pkO" to PDF format image. ** WARNING ** pdf: image inclusion failed for "/var/folders/2w/40kkgr916n34r23ts240d9bh0000gn/T//dvipdfmx.8Yyf2pkO". ** WARNING ** Failed to read converted PSTricks image file. ** WARNING ** Interpreting special command PST: (ps:) failed. ** WARNING ** >> at page="1" position="(123.75, 759.927)" (in PDF) ** WARNING ** >> xxx "PST: tx@Dict begin gsave STV /ArrowA { moveto } def /ArrowB { }" ) Output written on pikakoe.pdf (1 page). Transcript written on pikakoe.log. `````` • I do not get any warning with current TeXLive – user2478 Aug 11 '11 at 16:32 • @Marco: Why did you remove the {plain-tex} tag? – Caramdir Aug 11 '11 at 17:02 • @Caradimir: I think `plain-tex` is something which is compiled with `tex`. `xetex`/`xelatex` is a special deployment of `tex` That's the reason. I also tested the example and I had also no errors. – Marco Daniel Aug 11 '11 at 17:09 • @Herbert: I forgot to add the word "basic" to the TL-version I'm using (TeX Live 2011 basic). Also I'm on OSX 10.7. I'm guessing I'm missing something really basic; looking at one of the lines which starts off with "Please check if rungs...", makes me wonder if that `rungs`'s `gs` comes from `GhostScript`? – morbusg Aug 11 '11 at 17:10 • @Marco: plain-tex is a format. XeTeX is an engine. – morbusg Aug 11 '11 at 17:11 You need to have GhostScript installed. 
here is my log: ``````voss@shania:~> xetex Namenlos-2.tex This is XeTeX, Version 3.1415926-2.3-0.9997.5 (TeX Live 2011) restricted \write18 enabled. entering extended mode (./Namenlos-2.tex (/usr/local/texlive/2011/../texmf-local/tex/generic/pst-text/pst-text.tex (/usr/local/texlive/2011/../texmf-local/tex/generic/pstricks/generic/pstricks.t ex we are running tex and have to define some LaTeX commands ... (/usr/local/texlive/2011/texmf-dist/tex/generic/xkeyval/pst-xkey.tex 2005/11/25 v1.6 PSTricks specialization of xkeyval (HA) (/usr/local/texlive/2011/texmf-dist/tex/generic/xkeyval/xkeyval.tex 2008/08/13 v2.6a key=value parser (HA) (/usr/local/texlive/2011/texmf-dist/tex/generic/xkeyval/xkvtxhdr.tex 2005/02/22 v1.1 xkeyval TeX header (HA)) (/usr/local/texlive/2011/texmf-dist/tex/generic/xkeyval/keyval.tex))) (/usr/local/texlive/2011/../texmf-local/tex/generic/pstricks/generic/pst-fp.tex `pst-fp' v0.05, 2010/01/17 (hv)) `PSTricks' v2.22 <2011/07/09> (tvz) (/usr/local/texlive/2011/texmf-dist/tex/xetex/xetex-pstricks/pstricks.con (/usr/local/texlive/2011/../texmf-local/tex/generic/pstricks/config/xdvipdfmx.c fg)) (/usr/local/texlive/2011/texmf-dist/tex/xetex/xetex-pstricks/pstricks.con (/usr/local/texlive/2011/../texmf-local/tex/generic/pstricks/config/xdvipdfmx.c fg Using PSTricks configuration for XeTeX+xdvipdfmx ))) v1.00, 2006/11/05(tvz,hv)) [1] ) Output written on Namenlos-2.pdf (1 page). Transcript written on Namenlos-2.log. `````` the files in my local tree are the same as on TeXLive 2011 • This post doesn't even nearly count as an answer. It doesn't contain any algorithm to solve the problem. – Andrey Vihrov Aug 12 '11 at 7:01 • that is obvious, that is was not an answer. – user2478 Aug 12 '11 at 7:06 • It is a blog on a log. – kiss my armpit Feb 22 '12 at 11:54
https://hal.archives-ouvertes.fr/hal-00752375
# Tree-level lepton universality violation in the presence of sterile neutrinos: impact for $R_K$ and $R_\pi$

Abstract: We consider a tree-level enhancement to the violation of lepton flavour universality in light meson decays arising from modified $W \ell \nu$ couplings in the standard model minimally extended by sterile neutrinos. Due to the presence of additional mixings between the active (left-handed) neutrinos and the new sterile states, the deviation from unitarity of the leptonic mixing matrix intervening in charged currents might lead to a tree-level enhancement of $R_{P} = \Gamma (P \to e \nu) / \Gamma (P \to \mu \nu)$, with $P=K, \pi$. We illustrate these enhancements in the case of the inverse seesaw model, showing that one can saturate the current experimental bounds on $\Delta r_{K}$ (and $\Delta r_{\pi}$), while remaining in agreement with the various experimental and observational constraints.

Document type: Journal article. Journal of High Energy Physics, Springer, 2013, 1302, pp. 048. 〈10.1007/JHEP02(2013)048〉 https://hal.archives-ouvertes.fr/hal-00752375 Contributor: Responsable Bibliotheque. Submitted on: Thursday, November 15, 2012, 15:29:29. Last modified on: Thursday, March 15, 2018, 09:44:05.

### Citation

A. Abada, D. Das, A. M. Teixeira, A. Vicente, C. Weiland. Tree-level lepton universality violation in the presence of sterile neutrinos: impact for $R_K$ and $R_\pi$. Journal of High Energy Physics, Springer, 2013, 1302, pp. 048. 〈10.1007/JHEP02(2013)048〉. 〈hal-00752375〉
https://crypto.stackexchange.com/questions/75628/whats-the-proper-way-of-fitting-hash-digest-to-encryption-scheme/75635
# What's the proper way of fitting hash digest to encryption scheme?

I would like to know the proper way of fitting a hash digest to the prime over which an encryption scheme operates, regardless of whether the hash digest is larger or smaller than the prime. I've read that Cramer–Shoup uses a universal one-way hash function, but not which one. Wikipedia says it's just a property, and with that I plan on using SHA-256. My simulator uses smaller bit sizes for presentation purposes, so I have a problem fitting in the larger digest of SHA-256. I've read in some forums to reduce it mod the prime; is this the proper way?

• In the section "a simple implementation", it proposes to use SHA-1, but proposes to use a new hash to solve this issue. Note that SHA-256 and SHA-1 are not universal; GHASH and Poly1305 are. Either way, use an XOF. – kelalaka Nov 10 '19 at 8:29
• Well, MGF1 is the method that PKCS#1 uses, does that fit your need? It has security proofs for both signing and encryption (although those have been attacked and had to be amended, at least for RSA / OAEP). – Maarten Bodewes Nov 10 '19 at 14:18
• And generally we don't allow smaller hash values to be used, because that would impede the security / collision resistance of a hash. However, you could define a hash algorithm that uses smaller output than the original. For instance, you can use SHA-224, which is just SHA-256 with different initial constants and a smaller output size. Similarly you could define SHA-160 or lower - but it would be only 80-bit collision resistant. (Using an XOF would be better if you want both large and small output sizes, I suppose, but that requires SHA-3.) – Maarten Bodewes Nov 10 '19 at 14:28
• @kelalaka FYI, UOWHF does not mean a universal hash family; it's an older name for target-collision-resistant. – Squeamish Ossifrage Nov 10 '19 at 17:38
• Maarten Bodewes, Cramer–Shoup uses the hash digest as an exponent for one computation.
Squeamish Ossifrage, I'm new to this course, and doing this for a school project :). Thank you for the answers – Kelen Nihomori Nov 11 '19 at 5:19

A universal one-way hash function (or UOWHF), also known as a target-collision-resistant (or TCR) hash function, is a randomized hash function $$H_r(m)$$ with the following security: if an adversary commits to a message $$m$$, then upon being challenged with a random $$r$$, the adversary cannot find a distinct message $$m' \ne m$$ such that $$H_r(m) = H_r(m')$$. (More details, background, history, and references on UOWHF/TCR, particularly in signature applications.)

Any collision-resistant hash function is obviously also TCR, but TCR is a much weaker security property: essentially all major ‘cryptographic hash functions’ like SHA-256, including broken ones like MD5, are generally conjectured to exhibit TCR in prefix-hash form $$H(r \mathbin\| m)$$ and in HMAC form $$\operatorname{HMAC-\!}H_r(m)$$. In the off chance that they don't (the Merkle–Damgård construction does not necessarily preserve TCR), there's a generic construction called RMX from Halevi and Krawczyk's research program on randomized signatures, which was standardized by NIST in SP 800-106. If you like more modern flavors, you could use keyed BLAKE2 or KMAC128 too, since TCR—and the slightly stronger eTCR—was an explicit design goal for SHA-3.

If you want a smaller digest, just truncate the hash function; if you want a larger digest, the easiest way is to use an XOF like the SHA-3 function SHAKE128 or like BLAKE2X. You could also use SHA-256 in ‘CTR mode’, yielding $$H(r \mathbin\| m \mathbin\| 0) \mathbin\| H(r \mathbin\| m \mathbin\| 1) \mathbin\| H(r \mathbin\| m \mathbin\| 2) \mathbin\| \dotsb$$, provided you make sure to pad it unambiguously, or use a standard (if somewhat more complicated) construction like HKDF-SHA256 or MGF1 of PKCS#1.
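As a concrete sketch of the XOF approach described above (my assumptions, not from the answers: the function name, the toy prime, and the 16-byte oversampling margin are all illustrative), one can draw a SHAKE128 digest somewhat longer than the group order $$p$$ and reduce mod $$p$$; the extra bytes make the mod-$$p$$ bias negligible:

```python
# Sketch: hash-to-exponent via an XOF (SHAKE128), then reduction mod p.
# The prime below is a toy value; a real scheme would use its group order.
import hashlib

def hash_to_exponent(message: bytes, p: int) -> int:
    # Draw 16 bytes (128 bits) more than the size of p so the
    # mod-p reduction is close to uniform.
    nbytes = (p.bit_length() + 7) // 8 + 16
    digest = hashlib.shake_128(message).digest(nbytes)
    return int.from_bytes(digest, "big") % p

p = 2**127 - 1  # toy "group order" for illustration
e = hash_to_exponent(b"ciphertext components", p)
print(0 <= e < p)  # True
```

The same pattern works with SHAKE256 or a fixed-output hash in counter mode; the key point is to oversample before reducing, rather than reducing a digest shorter than $$p$$.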
https://yutsumura.com/every-group-of-order-12-has-a-normal-subgroup-of-order-3-or-4/
Every Group of Order 12 Has a Normal Subgroup of Order 3 or 4

Problem 566

Let $G$ be a group of order $12$. Prove that $G$ has a normal subgroup of order $3$ or $4$.

Hint.

Use Sylow’s theorem. (See Sylow’s Theorem (Summary) for a review of Sylow’s theorem.) Recall that if there is a unique Sylow $p$-subgroup in a group $G$, then it is a normal subgroup in $G$.

Proof.

Since $12=2^2\cdot 3$, a Sylow $2$-subgroup of $G$ has order $4$ and a Sylow $3$-subgroup of $G$ has order $3$. Let $n_p$ be the number of Sylow $p$-subgroups in $G$, where $p=2, 3$. Recall that if $n_p=1$, then the unique Sylow $p$-subgroup is normal in $G$.

By Sylow’s theorem, we know that $n_2\mid 3$, hence $n_2=1$ or $3$. Also by Sylow’s theorem, $n_3 \equiv 1 \pmod{3}$ and $n_3\mid 4$. It follows that $n_3=1$ or $4$.

If $n_3=1$, then the unique Sylow $3$-subgroup is a normal subgroup of order $3$.

Suppose that $n_3=4$. Then there are four Sylow $3$-subgroups in $G$. The order of each Sylow $3$-subgroup is $3$, and two distinct Sylow $3$-subgroups intersect trivially (the intersection consists of the identity element), since any nonidentity element of a group of order $3$ generates the whole group. Hence the two elements of order $3$ in each Sylow $3$-subgroup are not contained in any other Sylow $3$-subgroup. Thus, there are in total $4\cdot 2=8$ elements of order $3$ in $G$.

Since $|G|=12$, there are $12-8=4$ elements whose order is not $3$. A Sylow $2$-subgroup contains four elements, none of which has order $3$, so it must consist exactly of these remaining four elements. Hence there is just one Sylow $2$-subgroup, and it is a normal subgroup of order $4$.

In either case, the group $G$ has a normal subgroup of order $3$ or $4$.

More from my site

• Group of Order $pq$ Has a Normal Sylow Subgroup and Solvable Let $p, q$ be prime numbers such that $p>q$. If a group $G$ has order $pq$, then show the following. (a) The group $G$ has a normal Sylow $p$-subgroup. (b) The group $G$ is solvable. Definition/Hint For (a), apply Sylow's theorem.
To review Sylow's theorem, […] • Sylow Subgroups of a Group of Order 33 is Normal Subgroups Prove that any $p$-Sylow subgroup of a group $G$ of order $33$ is a normal subgroup of $G$.   Hint. We use Sylow's theorem. Review the basic terminologies and Sylow's theorem. Recall that if there is only one $p$-Sylow subgroup $P$ of $G$ for a fixed prime $p$, then $P$ […] • Non-Abelian Group of Order $pq$ and its Sylow Subgroups Let $G$ be a non-abelian group of order $pq$, where $p, q$ are prime numbers satisfying $q \equiv 1 \pmod p$. Prove that a $q$-Sylow subgroup of $G$ is normal and the number of $p$-Sylow subgroups are $q$.   Hint. Use Sylow's theorem. To review Sylow's theorem, check […] • Every Sylow 11-Subgroup of a Group of Order 231 is Contained in the Center $Z(G)$ Let $G$ be a finite group of order $231=3\cdot 7 \cdot 11$. Prove that every Sylow $11$-subgroup of $G$ is contained in the center $Z(G)$. Hint. Prove that there is a unique Sylow $11$-subgroup of $G$, and consider the action of $G$ on the Sylow $11$-subgroup by […] • Every Group of Order 72 is Not a Simple Group Prove that every finite group of order $72$ is not a simple group. Definition. A group $G$ is said to be simple if the only normal subgroups of $G$ are the trivial group $\{e\}$ or $G$ itself. Hint. Let $G$ be a group of order $72$. Use the Sylow's theorem and determine […] • If a Sylow Subgroup is Normal in a Normal Subgroup, it is a Normal Subgroup Let $G$ be a finite group. Suppose that $p$ is a prime number that divides the order of $G$. Let $N$ be a normal subgroup of $G$ and let $P$ be a $p$-Sylow subgroup of $G$. Show that if $P$ is normal in $N$, then $P$ is a normal subgroup of $G$.   Hint. It follows from […] • Are Groups of Order 100, 200 Simple? Determine whether a group $G$ of the following order is simple or not. (a) $|G|=100$. (b) $|G|=200$.   Hint. Use Sylow's theorem and determine the number of $5$-Sylow subgroup of the group $G$. 
Check out the post Sylow’s Theorem (summary) for a review of Sylow's […] • A Group of Order $pqr$ Contains a Normal Subgroup of Order Either $p, q$, or $r$ Let $G$ be a group of order $|G|=pqr$, where $p,q,r$ are prime numbers such that $p<q<r$. Show that $G$ has a normal subgroup of order either $p,q$ or $r$. Hint. Show using Sylow's theorem that $G$ has a normal Sylow subgroup of order either $p,q$, or $r$. Review […]

2 Responses

1. Shabnam says: Awesome explanation, thanks a lot

• Yu says: You are welcome! Thank you for the comment!
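As an illustrative brute-force check of the case analysis above (my addition, not part of the original post): the alternating group $A_4$ is a concrete group of order $12$ with $n_3=4$, so by the second case of the proof its Sylow $2$-subgroup must be normal, and indeed it has no normal subgroup of order $3$.

```python
# Brute-force search for normal subgroups of order 3 or 4 in A4.
from itertools import combinations, permutations

def compose(p, q):  # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def parity(p):  # number of inversions mod 2
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p))) % 2

# A4 = the 12 even permutations of {0, 1, 2, 3}
G = [p for p in permutations(range(4)) if parity(p) == 0]
e = tuple(range(4))

def is_subgroup(H):  # finite subset closed under the operation, containing e
    Hs = set(H)
    return e in Hs and all(compose(a, b) in Hs for a in H for b in H)

def is_normal(H):  # g h g^{-1} in H for all g in G, h in H
    Hs = set(H)
    return all(compose(compose(g, h), inverse(g)) in Hs for g in G for h in H)

normal_orders = {k for k in (3, 4)
                 for H in combinations(G, k)
                 if is_subgroup(H) and is_normal(H)}
print(normal_orders)  # {4}: the Klein four-group is normal, no order-3 subgroup is
```

This matches the proof: in $A_4$ the four Sylow $3$-subgroups account for the eight elements of order $3$, and the remaining four elements form the unique (hence normal) Sylow $2$-subgroup.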
http://mathoverflow.net/questions/67225/induced-fibration-of-eilenberg-maclane-spaces
# Induced fibration of Eilenberg-MacLane spaces How does the inclusion $\mathbb Z\rightarrow \mathbb Q$ induce a fibration $K(\mathbb Z,n)\rightarrow K(\mathbb Q,n)$ with fibre $\Omega K(\mathbb Q/\mathbb Z,n)$? - This really isn't a great question, since it is not at all clear where your difficulty lies. One can define a functor $K(-, n)$ (as in my answer), and that's really all there is to it. (I had trouble deciding whether to answer at all, and whether this question would be better suited for math.stackexchange.com. It's definitely not a research-level question; see the faq.) –  Todd Trimble Jun 8 '11 at 9:48 This is a very simple exercise. –  Fernando Muro Jun 8 '11 at 9:57 I have now deleted my answer, since the question has been changed. You have to choose your models correctly to get this fiber in a point-set topology sense, but it isn't hard. –  Todd Trimble Jun 8 '11 at 10:36 It's still the same very simple exercise I used to solve as an undergraduate student. –  Fernando Muro Jun 8 '11 at 11:11 @Fernando Muro : very simple?? :) thanks anyway fernando. –  palio Jun 8 '11 at 11:19 ## 1 Answer Probably the most functorial approach is to use the Dold-Kan equivalence $$F:\{\text{chain complexes}\} \to \{\text{simplicial abelian groups}\}.$$ Let $A_{\ast}$ denote the chain complex with just $\mathbb{Q}/\mathbb{Z}$ in dimension $n-1$, let $B_{\ast}$ be the one with a surjective differential from $\mathbb{Q}$ in dimension $n$ to $\mathbb{Q}/\mathbb{Z}$ in dimension $n-1$, and let $C_{\ast}$ be the one with just a $\mathbb{Q}$ in dimension $n$. There is an evident short exact sequence (and therefore fibration) $A_{\ast}\to B_{\ast}\to C_{\ast}$, which gives a fibration $|FA_{\ast}|\to |FB_{\ast}|\to |FC_{\ast}|$ of topological abelian groups. 
Here $|FA_{\ast}|$ and $|FC_{\ast}|$ are $K(\mathbb{Q}/\mathbb{Z},n-1)$ and $K(\mathbb{Q},n)$ essentially by definition, and it is easy to produce a weak equivalence from the corresponding model for $K(\mathbb{Z},n)$ to $|FB_{\ast}|$.
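As a side note (my addition, standard background rather than part of the answer): the fibre identification in the question follows from the coefficient sequence together with the loop-space shift of Eilenberg–MacLane spaces:

```latex
% The coefficient sequence 0 \to \mathbb{Z} \to \mathbb{Q} \to \mathbb{Q}/\mathbb{Z} \to 0
% induces a fibration sequence of Eilenberg--MacLane spaces
K(\mathbb{Z}, n) \longrightarrow K(\mathbb{Q}, n) \longrightarrow K(\mathbb{Q}/\mathbb{Z}, n),
% and the fibre of the first map is the loop space of the total base:
\Omega K(\mathbb{Q}/\mathbb{Z}, n) \simeq K(\mathbb{Q}/\mathbb{Z},\, n - 1),
% since \pi_k(\Omega X) \cong \pi_{k+1}(X).
```

This is consistent with the chain-complex model above, where $A_{\ast}$ carries $\mathbb{Q}/\mathbb{Z}$ in dimension $n-1$.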
http://mathhelpforum.com/calculus/77458-maximum-revenue-problem.html
# Math Help - Maximum revenue problem. 1. ## Maximum revenue problem. Hi, I have revenue: $r(x)=100x-.0001x^2$ and cost: $c(x)=360+80x+.002x^2+.00001x^3$ I must maximise profit. profit is: $r(x)-c(x)=-.00001x^3-.0021x^2+20x-360$ $p'(x)=-.00003x^2-.0042x+20$. I cannot seem to factor p' to get this result. I have tried completing the square. $x^2+140x-\frac{20}{.00003}=0$ $(x+70)^2=70^2+\frac{20}{.00003}$ $x=-70\pm \sqrt{70^2+\frac{20}{.00003}}$ Calculating that out doesn't get me near 749. I make it $x\approx{3431.3998}$ Thanks Craig. 2. $p'(x)=-.00003x^2-.0042x+20=0$ $0=3x^2+420x-2000000$ $x=\frac{-420\pm\sqrt{420^2+24000000}}{6}$ $x=\frac{-420\pm4916.95...}{6}$ Gives 749.491... 3. Might be going crazy but can't find the edit button suddenly. Anyway your answer is fine, perhaps you're typing it in wrong?
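A quick numerical cross-check of reply 2 (my addition, not from the thread), applying the quadratic formula directly to $p'(x)$ and confirming the critical point is a maximum via $p''(x) < 0$:

```python
# p'(x) = a x^2 + b x + c with the coefficients from the thread.
import math

a, b, c = -0.00003, -0.0042, 20.0
disc = b * b - 4 * a * c
roots = [(-b + math.sqrt(disc)) / (2 * a), (-b - math.sqrt(disc)) / (2 * a)]
x_star = max(roots)  # the economically meaningful root, x > 0

# p''(x) = 2 a x + b < 0 at x_star, so this critical point is a maximum.
assert 2 * a * x_star + b < 0
print(round(x_star, 2))  # 749.49
```

This agrees with reply 2's $x \approx 749.491$, confirming that the original completing-the-square setup was correct and only the final arithmetic went wrong.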
https://www.authorea.com/users/9090/articles/9204-distinguishing-disorder-from-order-in-irreversible-decay-processes/_show_article
# Distinguishing Disorder from Order in Irreversible Decay Processes

Jonathan W. Nichols, Shane W. Flynn, William E. Fatherley, Jason R. Green

Department of Chemistry, University of Massachusetts Boston

02/14/2015

# Abstract

Fluctuating rate coefficients are necessary when modeling disordered kinetic processes with mass-action rate equations. However, measuring the fluctuations of rate coefficients is a challenge, particularly for nonlinear rate equations. Here we present a measure of the total disorder in irreversible decay $$i\,A\to \textrm{products}$$, $$i=1,2,3,\ldots,n$$ governed by (non)linear rate equations – the inequality between the time-integrated square of the rate coefficient (multiplied by the time interval of interest) and the square of the time-integrated rate coefficient. We apply the inequality to empirical models for statically and dynamically disordered kinetics with $$i\geq 2$$. These models serve to demonstrate that the inequality quantifies the cumulative variations in a rate coefficient, and that the equality is a bound satisfied only when the rate coefficients are constant in time.

# Introduction

Rates are a way to infer the mechanism of kinetic processes, such as chemical reactions. They typically obey the empirical mass-action rate laws when the reaction system is homogeneous, with uniform concentration(s) throughout. Deviations from traditional rate laws are possible when the system is heterogeneous and there are fluctuations in structure, energetics, or concentrations. When traditional kinetic descriptions break down [insert citation], the process is statically and/or dynamically disordered [insert Zwanzig citation], and it is necessary to replace the rate constant in the rate equation with a time-dependent rate coefficient. Measuring the variation of time-dependent rate coefficients is a means of quantifying the fidelity of a rate coefficient and rate law. In our previous work a theory was developed for analyzing first-order irreversible decay kinetics through an inequality [insert citation].
The usefulness of this inequality lies in its ability to quantify disorder, with the unique property of becoming an equality only when the system is disorder free, and therefore described by chemical kinetics in its classical formulation. The next problem that should be addressed is that of higher-order kinetics: physical systems that proceed through more complex kinetic schemes require a modified theoretical framework for analysis. To motivate this type of development, systems such as ...... are known to proceed through higher-order kinetics, and these systems have unique and interesting applications; a more complete kinetic description of them should therefore be pursued [insert citations].

Static and dynamic disorder lead to an observed rate coefficient that depends on time, $$k(t)$$. The main result here, and in Reference [cite], is an inequality

$\mathcal{L}(\Delta{t})^2 \leq \mathcal{J}(\Delta{t})$

between the statistical length (squared)

$\mathcal{L}(\Delta{t})^2 \equiv \left[\int_{t_i}^{t_f}k(t)dt\right]^2$

and the divergence

$\frac{\mathcal{J}(\Delta{t})}{\Delta{t}} \equiv \int_{t_i}^{t_f}k(t)^{2}dt$

over a time interval $$\Delta t = t_f - t_i$$. Both $$\mathcal{L}$$ and $$\mathcal{J}$$ are functions of a possibly time-dependent rate coefficient, originally motivated by an adapted form of the Fisher information [cite]. Reference 1 showed that the difference $$\mathcal{J}(\Delta t)-\mathcal{L}(\Delta t)^2$$ is a measure of the variation in the rate coefficient, due to static or dynamic disorder, for decay kinetics with a first-order rate law. The lower bound holds only when the rate coefficient is constant in first-order irreversible decay. Here we extend this result to irreversible decay processes with “order” higher than one. We show $$\mathcal{J}-\mathcal{L}^2=0$$ is a condition for a constant rate coefficient for any $$i$$.
Accomplishing this end requires reformulating the definition of the time-dependent rate coefficient. In this work we extend the application of this inequality to measure disorder in irreversible decay kinetics with nonlinear rate laws (i.e., kinetics with total “order” greater than unity). We illustrate this framework with proof-of-principle analyses of second-order irreversible decay phenomena. We also connect this theory to previous work on first-order kinetics, showing that the framework reduces to the first-order model in a consistent manner.

# Disordered and nonlinear irreversible kinetics

We consider the irreversible reaction types

$i\,A \to \mathrm{products}\quad\quad\textrm{for}\quad i=1,2,3,\ldots,n$

with the nonlinear differential rate laws

$\frac{dC_i(t)}{dt} = k_i(t)\left[C_i(t)\right]^i.$

Experimental data are typically a concentration profile corresponding to the integrated rate law. If the concentration profile is normalized, by dividing the concentration at time $$t$$ by the initial concentration, it is called the survival function

$S_i(t) = \frac{C_i(t)}{C_i(0)},$

the input to our theory. Namely, we define the effective rate coefficient, $$k_i(t)$$, through an appropriate time derivative of the survival function that depends on the order $$i$$ of the reaction

$k_i(t) \equiv \begin{cases} \displaystyle -\frac{d}{dt}\ln S_1(t) & \text{if } i = 1 \\[10pt] \displaystyle +\frac{d}{dt}\frac{1}{S_i(t)^{i-1}} & \text{if } i \geq 2. \end{cases}$

## Bound for rate constants

These forms of $$k(t)$$ saturate the bound, $$\mathcal{J}-\mathcal{L}^2 = 0$$, in the absence of disorder, when $$k_i(t)\to\omega_i$$.
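Before the analytic demonstration, a quick numerical sanity check (my addition, not in the manuscript; the parameter values are toy choices) that the case-wise definition of $$k_i(t)$$ recovers a constant rate coefficient for disorder-free first- and second-order decay:

```python
# Verify k_1(t) = omega and k_2(t) = omega * C0 for disorder-free decay,
# using finite-difference derivatives of the survival functions.
import numpy as np

t = np.linspace(0.0, 5.0, 5001)
omega, C0 = 0.4, 2.0

# i = 1: S_1(t) = exp(-omega t), so k_1 = -d/dt ln S_1 = omega
S1 = np.exp(-omega * t)
k1 = -np.gradient(np.log(S1), t)

# i = 2: S_2(t) = 1 / (1 + omega t C0), so k_2 = d/dt (1/S_2) = omega * C0
S2 = 1.0 / (1.0 + omega * t * C0)
k2 = np.gradient(1.0 / S2, t)

print(np.allclose(k1, omega, atol=1e-6), np.allclose(k2, omega * C0, atol=1e-6))
# True True
```

Both derivatives act on functions that are linear in $$t$$, so the finite differences reproduce the constants $$\omega$$ and $$\omega C_0$$ to rounding error.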
This is straightforward to show for the case of an $$i^{th}$$-order reaction ($$i\geq 2$$), with the traditional integrated rate law $\frac{1}{C_i(t)^{i-1}} = \frac{1}{C_i(0)^{i-1}}+(i-1)\omega_i t$ and associated survival function $S_i(t) = \sqrt[i-1]{\frac{1}{1+(i-1)\omega_i tC_i(0)^{i-1}}}.$ In traditional kinetics, the rate coefficient of irreversible decay is assumed to be constant, in which case $$k_i(t)\to\omega_i$$; this will not be the case when the kinetics are statically or dynamically disordered, and in those cases we use the definitions of $$k_i(t)$$ above. The statistical length and divergence can also be derived for these irreversible decay reactions. In the disorder-free case the time-dependent rate coefficient is constant, $k_i(t) \equiv \frac{d}{dt}\frac{1}{S_i(t)^{i-1}} = (i-1)\omega_i C_i(0)^{i-1}.$ The statistical length $$\mathcal{L}_i$$ is the integral of the time-dependent rate coefficient over a period of time $$\Delta{t}$$, and the divergence is the integral of the square of the rate coefficient, multiplied by the time interval. For the equations governing traditional kinetics, both the statistical length squared and the divergence are $$(i-1)^2\omega_i^2C_i(0)^{2(i-1)}\Delta t^2$$: the bound is saturated when there is no static or dynamic disorder, and a single rate coefficient $$\omega_i$$ is sufficient to characterize the irreversible decay. The nonlinearity of the rate law leads to solutions that depend on concentration; this concentration dependence is also present in both $$\mathcal{J}$$ and $$\mathcal{L}$$.
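As a numerical check of the saturation condition, the short script below (an illustrative sketch, not part of the original derivation; the values of $$\omega_i$$, $$C_i(0)$$, and the disorder model are arbitrary choices) computes $$\mathcal{L}^2$$ and $$\mathcal{J}$$ for a second-order decay, once with a constant rate coefficient and once with a hypothetical time-dependent one.

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal rule, kept local so the sketch works across NumPy versions
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def length_sq_and_divergence(k, t):
    """L(dt)^2 = (integral of k)^2 and J(dt) = dt * integral of k^2 on grid t."""
    L2 = trapezoid(k, t) ** 2
    J = (t[-1] - t[0]) * trapezoid(k ** 2, t)
    return L2, J

t = np.linspace(0.0, 10.0, 10001)
i, omega, C0 = 2, 0.5, 1.0  # second-order decay, arbitrary constants

# Disorder-free case: k_i(t) = (i-1) * omega_i * C_i(0)^(i-1) is constant.
k_const = np.full_like(t, (i - 1) * omega * C0 ** (i - 1))
L2, J = length_sq_and_divergence(k_const, t)   # J - L2 is ~0 here

# A hypothetical disordered case: the rate coefficient decays in time.
k_dis = k_const * np.exp(-0.3 * t)
L2d, Jd = length_sq_and_divergence(k_dis, t)   # Jd - L2d is strictly positive
```

A positive difference $$\mathcal{J}-\mathcal{L}^2$$ in the second case is exactly the disorder signature discussed above; for the constant coefficient the difference vanishes to numerical precision.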
http://www.koreascience.or.kr/search.page?keywords=formaldehyde
### Modification of Soy Protein Film by Formaldehyde (Formaldehyde 처리에 의한 대두단백 필름의 물성 개선)
• Rhim, Jong-Whan • Korean Journal of Food Science and Technology • v.30 no.2 • pp.372-378 • 1998

Two types of formaldehyde-treated soy protein isolate (SPI) films, formaldehyde-incorporated and formaldehyde-adsorbed films, and control SPI films were prepared. The cross-linking effects of formaldehyde on selected film properties such as color, tensile strength (TS), elongation at break (E), water vapor permeability (WVP), and water solubility (WS) were determined. Physical properties of formaldehyde-incorporated films were not generally different from those of control films, while almost all properties of formaldehyde-adsorbed films were significantly different. Through cross-linking development within formaldehyde-adsorbed films, WS decreased significantly (P<0.05) from 26.1% to 16.6%, and TS increased twofold while E decreased twofold compared with control films. This was caused by insolubilization and hardening of the protein through cross-linking, which most likely accounts for the significant changes in the properties of protein films reacted with formaldehyde.

### Isoconversional Cure Kinetics of Modified Urea-Formaldehyde Resins with Additives
• Park, Byung-Dae • Current Research on Agriculture and Life Sciences • v.30 no.1 • pp.41-50 • 2012

As part of an effort to abate the formaldehyde emission of urea-formaldehyde resin, this study investigated the thermal cure kinetics of both neat and modified urea-formaldehyde resins using differential scanning calorimetry. Neat urea-formaldehyde resins with three different formaldehyde/urea mole ratios (1.4, 1.2 and 1.0) were modified by adding three different additives (sodium bisulfite, sodium hydrosulfite and acrylamide) at two different levels (1 and 3 wt%).
An isoconversional method at four different heating rates was employed to characterize the thermal cure kinetics of these urea-formaldehyde resins and to obtain the activation energy ($E{\alpha}$) as a function of the degree of conversion (${\alpha}$). The $E{\alpha}$ values of neat urea-formaldehyde resins (formaldehyde/urea = 1.4 and 1.2) changed consistently as ${\alpha}$ increased. Neat and modified urea-formaldehyde resins at these two F/U mole ratios showed a decrease in $E{\alpha}$ at the final stage of conversion, while the $E{\alpha}$ of the neat urea-formaldehyde resin with formaldehyde/urea = 1.0 increased as ${\alpha}$ increased, indicating the presence of incomplete cure. However, the change in the $E{\alpha}$ values of all urea-formaldehyde resins was consistent with that of the $E_a$ values. The isoconversional method indicated that the thermal cure kinetics of neat and modified urea-formaldehyde resins depend strongly on resin viscosity as well as on diffusion-controlled reaction at the final stage of conversion.

### Optimum Conditions of Formaldehyde Degradation by the Bacterium Pseudomonas sp. YK-32 (세균 Pseudomonas sp. YK-32 균주에 의한 Formaldehyde 분해 최적조건)
• Kim, Young-Mog;Lee, Yun-Kyoung;Kim, Kyoung-Lan;Lee, Eun-Woo;Lee, Myung-Suk • Korean Journal of Fisheries and Aquatic Sciences • v.41 no.2 • pp.102-106 • 2008

Formaldehyde, an indoor volatile organic compound, is considered toxic due to its carcinogenic risk. Recently, we isolated a formaldehyde-degrading bacterium, Pseudomonas sp. YK-32. A crude enzyme preparation from YK-32 also degraded formaldehyde, suggesting that YK-32 cells have formaldehyde dehydrogenase activity, one of the important factors in formaldehyde degradation. The formaldehyde dehydrogenase activity was increased 1.25-fold by adding 0.1% glucose and formaldehyde to the culture medium.
In addition, treatment with 1 mM EDTA as a permeabilizer promoted the degradation of formaldehyde and increased the enzymatic activity.

### Indoor and Outdoor Formaldehyde Concentrations in Underground Environments (실내외 포름안데히드 농도에 관한 조사연구)
• 김윤신;김미경 • Journal of Environmental Health Sciences • v.15 no.2 • pp.1-9 • 1989

A pilot study was conducted to measure indoor and outdoor formaldehyde levels during August 3–22, 1988 in several underground spaces in Seoul. Formaldehyde concentrations were monitored for one week in selected sampling areas (subway stations, underground shopping centers, underpasses, a tunnel, and underground parking lots) using passive formaldehyde monitors. To investigate the relationship between respiratory symptom prevalence and formaldehyde levels, each subject was asked to answer respiratory questions. The mean formaldehyde concentrations were 60.1 ppb in subway stations, 122.2 ppb in underground shopping stores, 72.1 ppb in underpasses, 39.7 ppb in the tunnel, and 75.9 ppb in underground parking lots. The mean indoor formaldehyde concentrations in underground environments varied from 28.6 ppb to 118.7 ppb. Generally, the mean formaldehyde concentrations in ticketing offices in subway stations were higher than those measured on the platforms. The mean formaldehyde concentrations in the underground shopping center at Gangnam Terminal were higher than in any other area and exceeded the 100 ppb American ambient air quality standard for formaldehyde. Prevalence rates of respiratory symptoms among dwellers appeared to be related to higher indoor formaldehyde levels.
### Properties of Urea-Formaldehyde Resin Adhesives with Different Formaldehyde to Urea Mole Ratios
• Park, Byung-Dae • Journal of the Korean Wood Science and Technology • v.35 no.5 • pp.67-75 • 2007

As part of an effort to abate the formaldehyde emission of urea-formaldehyde (UF) resin adhesive by lowering the formaldehyde to urea (F/U) mole ratio, this study investigated the properties of UF resin adhesives with different F/U mole ratios. UF resin adhesives were synthesized at F/U mole ratios of 1.6, 1.4, 1.2, and 1.0. The properties measured were non-volatile solids content, pH, viscosity, water tolerance, specific gravity, gel time, and free formaldehyde content. In addition, a linear relationship between non-volatile solids content and sucrose concentration measured by a refractometer was established for faster determination of the non-volatile solids content of UF resin. As the F/U mole ratio was lowered, non-volatile solids content, pH, specific gravity, water tolerance, and gel time increased, while free formaldehyde content and viscosity decreased. These results suggest that the amount of free formaldehyde strongly affects the reactivity of UF resin; lowering the F/U mole ratio of UF resin as a way of abating formaldehyde emission consequently requires improving its reactivity.

### A Study on Free-formaldehyde in the Resin Finished cotton Fabric (III) -Extraction of Free-formaldehyde in the Urea-formaldehyde Resin-finished cotton fabric- (수지가공포의 유리 Formaldehyde에 관한 연구(III) -Urea Formaldehyde 수지가공포중의 유리 Formaldehyde 추출-)
• Cho Soon Chae;Rhie Jeon Sook;Rhee Jong Mun;Shin Sang Jin • Journal of the Korean Society of Clothing and Textiles • v.5 no.1 • pp.23-26 • 1981

In this paper, the extraction mechanism of free formaldehyde in urea-formaldehyde resin-finished cotton fabric is discussed. An empirical equation for formaldehyde release has been formulated:
$$F=3.7\times10^{-3}\;H\;T^{2.2326}+440$$

where F is the amount of free formaldehyde extracted ($\mu$g/g), H is the extraction time (min), and T is the extraction temperature ($^{\circ}C$).

### Characterization of Formaldehyde-degrading Bacteria Isolated from River Sediment (하천 저질에서 분리한 Formaldehyde 분해 미생물의 특징)
• Kim, Young-Mog;Lee, Eun-Woo;Kim, Su-Jeung;Lee, Myung-Suk • Korean Journal of Fisheries and Aquatic Sciences • v.41 no.2 • pp.84-88 • 2008

A bacterium growing on formaldehyde as a sole carbon source was isolated by the dilution method from an enrichment culture containing formaldehyde. The isolated strain, YK-32, was identified as Pseudomonas sp. by morphological, biochemical, and genetic analyses. Pseudomonas sp. YK-32 completely degraded 0.05% formaldehyde within 24 h. The isolated strain had a high level of formaldehyde dehydrogenase activity, thought to be one of the important factors in formaldehyde degradation, when cells were cultivated in the presence of formaldehyde.

### A Study on the Disposable Diapers for Formaldehyde Content and its Recognizability and Consumer's Attitudes toward the Products (일회용 기저귀의 Formaldehyde 함량과 인지도 및 소비실태에 관한 연구)
• Nam Sang Woo;Lee Sun Young • Journal of the Korean Society of Clothing and Textiles • v.11 no.3 • pp.101-109 • 1987

This study was designed to measure the amount of formaldehyde in disposable diapers from seven different products, to investigate the actual pattern of diaper consumption and relate it to the amount of formaldehyde measured, and to assess consumers' recognition of the harmfulness of formaldehyde. The amount of formaldehyde was measured by the acetylacetone method. The actual consumption pattern and the recognition of formaldehyde were investigated by questionnaire. The survey subjects had babies aged 0–3 years and lived in Seoul. The statistical methods used were simple frequency and chi-square.
The results obtained from this study were as follows: 1) Among the seven different disposable diapers, two were found to contain less formaldehyde than the Japanese regulatory limit. 2) In the consumption survey, most respondents ($53.7\%$) experienced dermatological problems after using the disposable diapers; for the diapers containing more formaldehyde, respondents experienced the problems more severely. 3) Recognition of formaldehyde was very low, and recognition of its harmfulness was lower still, indicating that consumers had little or no knowledge of the formaldehyde release problem.

### Effect of Resorcinol as Free Formaldehyde Scavenger for Fabric Finished with Urea-formaldehyde Precondensate (Urea-Formaldehyde 수지가공포에 있어 Resorcinol의 유리 Formaldehyde 포착효과)
• Kang, In-Sook;Kim, Sung-Reon • Textile Coloration and Finishing • v.9 no.2 • pp.41-49 • 1997

To control free formaldehyde release from fabric finished with N-methylol compounds, resin-finished cotton fabric was treated with resorcinol solution, dried, and cured. Factors affecting the control of formaldehyde release were investigated. The aftertreatment with resorcinol greatly suppressed free formaldehyde release. Up to a resorcinol concentration of about 5%, the concentration affected the control of free and evolved formaldehyde; at higher concentrations, however, formaldehyde release became rather insensitive to the resorcinol concentration. Addition of salt catalysts such as ammonium chloride, zinc nitrate, sodium acetate, and ammonium acetate was effective in decreasing formaldehyde release. Considering the effects on both formaldehyde control and crease recovery, ammonium acetate was considered the best catalyst.
It was observed that the optimum curing temperature for the resorcinol treatment was about $150^{\circ}C$, and that curing times beyond three minutes did not affect formaldehyde release. Although the resorcinol treatment had a slight adverse effect on the crease recovery of the resin-finished fabric, this effect was negligible.

### Physicochemical Properties of Non-Formaldehyde Resin Finished Cotton Fabric and their Optimal Treatment Condition (비포름알데하이드계 수지 가공제 처리한 면직물의 물리화학적 특성 변화와 최적 처리 조건에 관한 연구)
• Kim, Han-Gi;Yoon, Nam-Sik;Huh, Man-Woo;Kim, Ick-Soo • Textile Coloration and Finishing • v.24 no.2 • pp.121-130 • 2012

Cotton fabrics were treated with several commercial non-formaldehyde and low-formaldehyde resins, and the effects on their physicochemical properties, including formaldehyde release, tear strength, shrinkage, and wrinkle recovery, were investigated. Formaldehyde release of less than 10 ppm was obtained only with the non-formaldehyde resin. Considering the other factors, the optimal concentration of the non-formaldehyde resin was 9–11%. For the low-formaldehyde type, a resin concentration of 5–7% and a curing temperature of $160{\sim}170^{\circ}C$ were recommended as the optimal finishing conditions. The choice and combination of resins and catalysts were also important factors, and preliminary testing before treating cotton fabrics with these resins is important for obtaining better results.
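The empirical release equation quoted in the Cho et al. abstract above translates directly into a small helper function. This is only an illustrative sketch; the sample inputs (30 min, 60 °C) are arbitrary values chosen here, not conditions from the study.

```python
def free_formaldehyde(extraction_time_min: float, temp_c: float) -> float:
    """Empirical free-formaldehyde release (ug/g) from the Cho et al. (1981)
    abstract: F = 3.7e-3 * H * T^2.2326 + 440, with H in minutes and T in deg C."""
    return 3.7e-3 * extraction_time_min * temp_c ** 2.2326 + 440.0

# Illustrative evaluation: 30 minutes of extraction at 60 deg C.
f = free_formaldehyde(30.0, 60.0)
# The constant term sets a floor of 440 ug/g on the predicted release,
# and release grows linearly with extraction time at fixed temperature.
```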
http://wiki.chemprime.chemeddl.org/index.php/The_Amazing_Water_Diet
# The Amazing Water Diet

We typically require 2500 Calories per day in food to supply our energy needs. The Calorie is a unit of energy that is produced in our bodies through oxidation of food by the oxygen in the air we breathe. The energy is equal to that obtained by burning the food in oxygen, which is how the caloric value is obtained.

The recommended daily allowance of water is six glasses per day of cold water. This is about 48 fluid ounces, or 1.4 L, at approximately 40 °F (~4 °C). Our body must use some energy to warm the water to body temperature (37 °C). How much of our daily caloric intake goes into heating the water we drink? We'll calculate that in Example 1, but first we'll need to understand what's meant by the heat capacity of water.

## Heat Capacity

When our body supplies heat energy to the water, a rise in temperature occurs (no complicating chemical changes or phase changes take place). The rise in temperature is proportional to the quantity of heat energy supplied. If q is the quantity of heat supplied and the temperature rises from T1 to T2, then

q = C × (T2 − T1)      (1)

OR

q = C × ΔT      (1b)

where the constant of proportionality C is called the heat capacity of the sample. The sign of q in this case is + because the sample has absorbed heat (the change was endothermic), and ΔT = T2 − T1 is defined in the conventional way.

Since the mass of water we drink is variable, it is convenient to note that the quantity of heat needed to raise its temperature is proportional to the mass as well as to the rise in temperature. That is,

q = C × m × (T2 − T1)      (2)

OR

q = C × m × ΔT      (2b)

The new proportionality constant C is the heat capacity per unit mass.
It is called the specific heat capacity (or sometimes the specific heat), where the word specific means “per unit mass.” Specific heat capacities provide a convenient way of determining the heat added to, or removed from, a material by measuring its mass and temperature change.

As mentioned previously, James Joule established the connection between heat energy and the intensive property temperature by measuring the temperature change in water caused by the energy released by a falling mass. In an ideal experiment, a 1.00 kg mass falling 10.0 m would release 98.0 J of energy. If the mass drove a propeller immersed in 0.100 liter (100 g) of water in an insulated container, its temperature would rise by 0.234 °C. This allows us to calculate the specific heat capacity of water:

98 J = C × 100 g × 0.234 °C

C = 4.184 J/g °C

At 15 °C, the precise value for the specific heat of water is 4.184 J K–1 g–1, and at other temperatures it varies from 4.178 to 4.218 J K–1 g–1. Note that the specific heat is given per gram (not the base SI unit kilogram), and that since the Celsius and kelvin scales have identical graduations, either °C or K may be used.

Joule's experiments established the connection between kinetic and potential energy and heat energy (measured in calories), which is the basis for understanding our metabolic needs.

Example 1: How much food energy is required to raise the temperature of 1,400 mL of water (density = 1.0 g/mL) from 4.0 °C to 37.0 °C, given that the specific heat capacity of water is 4.184 J K–1 g–1?

Solution:

q = 4.18 J/g °C × 1,400 g × (37.0 − 4.0) °C

q = 193 000 J or 193 kJ.

To convert this energy to the US/British unit, we use a conversion that comes from the specific heat of water in those units, 1.0 calorie/g °C:

4.18 J = 1 calorie

So 193,000 J × (1 calorie / 4.18 J) = 46,200 cal

At first this makes no sense. It appears that 46,200 calories of energy is needed just to heat up the six glasses of water we drink, but our daily food intake is only 2500 Calories.
The confusion lies in the definition of a Calorie (with a capital "C"):

1 Calorie = 1000 calories

So 193,000 J × (1 calorie / 4.18 J) × (1 Calorie / 1000 calories) = 46 Cal.

So it does require 46/2500 × 100% ≈ 1.8% of our daily food energy just to heat the six glasses of water to body temperature! That is enough energy to walk nearly 2 miles!

Other foods, and even the air we breathe, require different amounts of heat to change their temperature by the same amount. The specific heats of several substances are given below:

Specific heat capacities (25 °C unless otherwise noted)

| Substance | Phase | Cp, J/(g·K) |
|---|---|---|
| air (sea level, dry, 0 °C) | gas | 1.0035 |
| argon | gas | 0.5203 |
| carbon dioxide | gas | 0.839 |
| helium | gas | 5.19 |
| hydrogen | gas | 14.30 |
| methane | gas | 2.191 |
| neon | gas | 1.0301 |
| oxygen | gas | 0.918 |
| water at 100 °C (steam) | gas | 2.080 |
| water at 100 °C | liquid | 4.184 |
| ethanol | liquid | 2.44 |
| water at −10 °C (ice) | solid | 2.05 |
| copper | solid | 0.385 |
| gold | solid | 0.129 |
| iron | solid | 0.450 |

Example 2: We might breathe around 2 L of cold air per minute at −20 °C on a winter's day, and heat it in our lungs to near 37 °C before exhaling it. How much energy is required to warm the inhaled cold air for 3 hours?

Solution:

3 hours × 60 min/hr × 2 L/min = 360 L of air

The density of air is about 1.3 g/L at −20 °C:

m = DV = 1.3 g/L × 360 L = 468 g

q = 1.0035 J/g °C × 468 g × (37.0 − (−20.0)) °C

q = 26,800 J or ~27 kJ.

This is only 6–7 dietary Calories.

## Electrical Energy Conversion

The most convenient way to supply a known quantity of heat energy to a sample is to use an electrical coil. The heat supplied is the product of the applied potential V, the current I flowing through the coil, and the time t during which the current flows:

q = V × I × t      (3)

If the SI units volt for applied potential, ampere for current, and second for time are used, the energy is obtained in joules.
This is because the volt is defined as one joule per ampere-second:

1 V × 1 A × 1 s = 1 (J/(A·s)) × 1 A × 1 s = 1 J

EXAMPLE 3: An electrical heating coil, 230 cm3 of water, and a thermometer are all placed in a polystyrene coffee cup. A potential difference of 6.23 V is applied to the coil, producing a current of 0.482 A which is allowed to pass for 483 s. If the temperature rises by 1.53 K, find the heat capacity of the contents of the coffee cup. Assume that the polystyrene cup is such a good insulator that no heat energy is lost from it.

Solution: The heat energy supplied by the heating coil is given by

q = V × I × t = 6.23 V × 0.482 A × 483 s = 1450 V A s = 1450 J

However, q = C × (T2 − T1). Since the temperature rises, T2 > T1 and the temperature change ΔT is positive:

1450 J = C × 1.53 K

so that

C = 1450 J / 1.53 K = 948 J K–1

Note: The heat capacity found applies to the complete contents of the cup (water, coil, and thermometer taken together), not just the water.

As discussed in other sections, an older, non-SI energy unit, the calorie, was defined as the heat energy required to raise the temperature of 1 g H2O from 14.5 to 15.5 °C. Thus at 15 °C the specific heat capacity of water is 1.00 cal K–1 g–1. This value is accurate to three significant figures between about 4 and 90 °C.

If the sample of matter we are heating is a pure substance, then the quantity of heat needed to raise its temperature is proportional to the amount of substance. The heat capacity per unit amount of substance is called the molar heat capacity, symbol Cm.
Thus the quantity of heat needed to raise the temperature of an amount of substance n from T1 to T2 is given by

q = Cm × n × (T2 − T1)      (4)

The molar heat capacity is usually given a subscript to indicate whether the substance has been heated at constant pressure (Cp) or in a closed container at constant volume (CV).

EXAMPLE 4: A sample of neon gas (0.854 mol) is heated in a closed container by means of an electrical heating coil. A potential of 5.26 V was applied to the coil, causing a current of 0.336 A to pass for 30.0 s. The temperature of the gas was found to rise by 4.98 K. Find the molar heat capacity of the neon gas, assuming no heat losses.

Solution: The heat supplied by the heating coil is given by

q = V × I × t = 5.26 V × 0.336 A × 30.0 s = 53.0 V A s = 53.0 J

Rearranging Eq. (4), we then have

Cm = q / [n × (T2 − T1)] = 53.0 J / (0.854 mol × 4.98 K) = 12.47 J K–1 mol–1

However, since the process occurs at constant volume, we should write

CV = 12.47 J K–1 mol–1
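The worked examples above all reduce to two formulas, q = C × m × ΔT and q = V × I × t. The short script below is an illustrative sketch (not part of the original page) that reproduces the arithmetic of Examples 1, 3, and 4.

```python
def heat_from_mass(c_specific, mass_g, delta_t):
    """q = c * m * dT, with c in J/(g K), m in g, dT in K (or deg C)."""
    return c_specific * mass_g * delta_t

def heat_from_coil(volts, amps, seconds):
    """q = V * I * t, giving joules from SI electrical units."""
    return volts * amps * seconds

# Example 1: warming 1400 g of water from 4.0 to 37.0 deg C.
q1 = heat_from_mass(4.184, 1400.0, 37.0 - 4.0)  # about 193 kJ
Cal = q1 / 4.184 / 1000.0                       # dietary Calories (kcal)

# Example 3: heat capacity of the coffee-cup contents.
q3 = heat_from_coil(6.23, 0.482, 483.0)         # about 1450 J
C_cup = q3 / 1.53                               # about 948 J/K

# Example 4: molar heat capacity of neon at constant volume.
q4 = heat_from_coil(5.26, 0.336, 30.0)          # about 53 J
Cv = q4 / (0.854 * 4.98)                        # about 12.5 J/(K mol)
```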
http://physics.stackexchange.com/questions/14138/why-exactly-does-current-carrying-two-current-wires-attract-repel
# Why exactly do two current-carrying wires attract/repel?

When two parallel wires carry currents I1 and I2 in the same direction, they attract each other. This video demonstrates the effect: http://www.youtube.com/watch?v=43AeuDvWc0k

My question is, why exactly does this happen? I know the usual explanation, but I'm not convinced by it. One wire generates a magnetic field that points into the plane at the other wire. The electrons moving in that wire experience the Lorentz force F = q(V × B). My arguments are:

1. This force is experienced by the electrons, not the nuclei, and the electrons in motion are the "free electrons". So when they experience the force, they alone should drift towards/away from the wire, not the entire atoms.

2. The only force binding an electron to the material is the coulombic attraction from the nuclei. If the Lorentz force were sufficiently large, it should be able to remove electrons from the atoms; in other words, they should come out of the material. But I have never heard or read of anything like that happening. Why doesn't this happen?

In any case, if the atoms experience no force, why is it that the entire wire experiences a force i(L × B)?

-

You are right in both arguments. The thing is, this "only force, ... the coulombic attraction" is vastly stronger than the Lorentz force due to the magnetic field of a single wire carrying current in the same direction. As for "in any case, atoms must not experience any force", this is obviously wrong, as can be seen very plainly when you think of Newton's third law and the fact that the coulomb attraction occurs between the electrons and nuclei in the wire in question.

-

Your question assumes that the electrons are weakly interacting with the nucleus. The interaction with the nucleus is extremely strong. It is better to ask instead why we have conductivity at all.
If electrons are so tightly bound to the nuclei of atoms, why should a tiny external electric field get them moving? The answer is that quantum mechanical effects can spread out electrons over many atoms. This is responsible for chemical bonding. In metals, the electrons have a spread-out wavefunction, and the energy band of spread-out electron states is only partly filled, so it only takes a little bit of energy to push an electron into motion.

But for your original question, there is an easy way to see the answer. Consider two infinite charged wires 1 cm apart. You know that they repel, so they move apart. Now boost to a frame moving along the wires at a huge speed, near the speed of light. Relativistic time dilation slows down the rate at which they move apart. But the charge density has gone up in this frame, because of the length contraction. So there must be an additional attractive force due to the currents in the wires. In the limit that you are moving at the speed of light, the attractive like-current force must exactly cancel the repulsive like-charge electrostatic force.

-

Consider two infinite charged wires 1 cm apart, held together by a series of springs spaced 1 cm apart along the wire (1 spring per cm). Now boost to a frame moving along the wires at 0.866c. The Lorentz factor is 2, so the charge density doubles in the new frame because of length contraction. The spring density also doubles (2 springs per cm) to exactly cancel the increase in the repulsive electrostatic force. If there is an additional attractive force-per-meter due to the current in this frame, there must also be an additional repulsive force-per-meter we haven't considered. Right? –  Nick Jan 7 at 10:23
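To attach numbers to these arguments, here is a small sketch computing the Lorentz factor used in the last comment and the textbook force per unit length between two long parallel currents, F/L = μ0·I1·I2/(2πd). The 10 A and 1 cm figures are arbitrary illustrative values, not from the question.

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability in T*m/A

def gamma(beta):
    """Lorentz factor for a boost at speed v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def force_per_length(i1, i2, d):
    """Magnitude of the force per metre between two long parallel wires
    carrying currents i1 and i2 (amperes) separated by d (metres):
    F/L = mu0 * i1 * i2 / (2 * pi * d), attractive for parallel currents."""
    return MU0 * i1 * i2 / (2.0 * math.pi * d)

# The boost in the comment: 0.866c gives a Lorentz factor of about 2,
# so charge (and spring) densities double by length contraction.
g2 = gamma(0.866)

# Two 10 A currents 1 cm apart feel roughly 2 mN per metre -- minuscule next
# to the Coulomb forces binding conduction electrons inside the metal,
# which is the point of the first answer.
f = force_per_length(10.0, 10.0, 0.01)
```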
https://shelah.logic.at/papers/326/
# Sh:326 • Shelah, S. (1992). Vive la différence. I. Nonisomorphism of ultrapowers of countable models. In Set theory of the continuum (Berkeley, CA, 1989), Vol. 26, Springer, New York, pp. 357–405. • Abstract: We show that it is not provable in ZFC that any two countable elementarily equivalent structures have isomorphic ultrapowers relative to some ultrafilter on \omega. • Version 1995-09-04_10 (55p) published version (49p) Bib entry @incollection{Sh:326, author = {Shelah, Saharon}, title = {{Vive la diff\'erence. I. Nonisomorphism of ultrapowers of countable models}}, booktitle = {{Set theory of the continuum (Berkeley, CA, 1989)}}, series = {Math. Sci. Res. Inst. Publ.}, volume = {26}, year = {1992}, pages = {357--405}, publisher = {Springer, New York}, mrnumber = {1233826}, mrclass = {03C20 (03C15 03E35)}, doi = {10.1007/978-1-4613-9754-0_20}, note = {\href{https://arxiv.org/abs/math/9201245}{arXiv: math/9201245}}, arxiv_number = {math/9201245}, referred_from_entry = {See [Sh:326a]} }
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3568022/?tool=pubmed
BMC Syst Biol. 2012; 6: 142. Published online Nov 21, 2012. PMCID: PMC3568022

# Incremental parameter estimation of kinetic metabolic network models

## Abstract

### Background

An efficient and reliable parameter estimation method is essential for the creation of biological models using ordinary differential equations (ODEs). Most of the existing estimation methods involve finding the global minimum of data fitting residuals over the entire parameter space simultaneously. Unfortunately, the associated computational requirement often becomes prohibitively high due to the large number of parameters and the lack of complete parameter identifiability (i.e. not all parameters can be uniquely identified).

### Results

In this work, an incremental approach was applied to the parameter estimation of ODE models from concentration time profiles. In particular, the method was developed to address a commonly encountered circumstance in the modeling of metabolic networks, where the number of metabolic fluxes (reaction rates) exceeds that of metabolites (chemical species). Here, the minimization of model residuals was performed over a subset of the parameter space that is associated with the degrees of freedom in the dynamic flux estimation from the concentration time-slopes. The efficacy of this method was demonstrated using two generalized mass action (GMA) models, where the method significantly outperformed single-step estimations. In addition, an extension of the estimation method to handle missing data is also presented.

### Conclusions

The proposed incremental estimation method is able to tackle the issue of the lack of complete parameter identifiability and to significantly reduce the computational effort of estimating model parameters, which will facilitate kinetic modeling of genome-scale cellular metabolism in the future.
Keywords: Incremental parameter estimation, Kinetic modeling, Metabolic network, GMA model

## Background

The estimation of unknown kinetic parameters from time-series measurements of biological molecules is a major bottleneck in the ODE model building process in systems biology and metabolic engineering [1]. The majority of current estimation methods involve simultaneous (single-step) parameter identification, where model prediction errors are minimized over the entire parameter space. These methods often rely on global optimization methods, such as simulated annealing, genetic algorithms and other evolutionary approaches [1-3]. The problem of obtaining the best-fit parameter estimates, however, is typically ill-posed due to issues related to data informativeness, problem formulation and parameter correlation, all of which contribute to the lack of complete parameter identifiability. Moreover, finding the global minimum of model residuals over a highly multidimensional parameter space is challenging and can become prohibitively expensive to perform on a computer workstation, even for tens of parameters.

Here, we consider the modeling of cellular metabolism using the canonical power-law formalism, specifically the generalized mass action (GMA) systems [4,5]. The power-law formalism has many advantages, which have been detailed elsewhere [1,6]. Notably, power laws have a relatively simple structure that permits algebraic manipulation in the logarithmic scale, but is nonetheless capable of describing essentially any nonlinearity. Regulatory interactions among metabolites can also be described straightforwardly through the kinetic order parameters, establishing an equivalence between structural identification and parametric estimation. However, the number of parameters increases proportionally with the number of metabolites and fluxes, leading to a large-scale parameter identification problem, one where single-step estimation methods often struggle to converge.
The integration of ODEs often constitutes a major part of the computational cost in parameter estimation, especially when the ODE model is stiff [7]. While stiffness can genuinely arise due to a large time-scale separation of the reaction kinetics in the real system, stiff ODEs can also result from unrealistic combinations of parameter values during the parameter optimization procedure, especially when a global optimizer is used. The parameter estimation of ODE models using power-law kinetics is particularly prone to the stiffness problem, since many of the unknown parameters are the exponents of the concentrations. For this reason, alternative formulations have been proposed that avoid these ODE integrations either completely [7,8] or partially [9-11]. In particular, the computational cost can be significantly reduced by decomposing the estimation problem into two phases, starting with the calculation of dynamic reaction rates or fluxes from the slopes of concentration data, followed by least square regressions of the kinetic parameters [12-14]. In this case, the final parameter estimation is done one flux at a time, each involving only a handful of parameters, and thus the global minimum solution can be either computed analytically (for example, when using log-linear power-law flux functions) or determined efficiently. Moreover, as the first estimation phase (flux estimation) depends only on the assumed topology of the metabolic network, the flux estimates can subsequently be used to guide the selection of the most appropriate flux functions for the second phase, or to detect inconsistencies in the assumed topology of the network separately from the flux equations [14]. However, the application of this method requires the number of metabolites to be equal to or larger than that of fluxes, so that the flux estimation results in a unique solution.
Since the reverse situation is more commonly encountered in typical metabolic networks, a generalization of this incremental estimation approach is the main focus of this study. As noted above, the new parameter estimation method in this work is built on the concept of incremental identification [12,13] or dynamical flux estimation (DFE) [14,15]. The proposed method provides two new contributions: (1) the ability to handle the more general scenario, where the number of reactions exceeds that of the metabolites, and (2) high numerical efficiency through the reduction of the parameter search space. Specifically, two parameter estimation formulations are proposed, with objective functions that depend on model prediction errors of metabolite concentrations and of concentration time-slopes. An extension of this strategy to circumstances where concentration data of some metabolites are missing is also presented. The proposed method is applied to two previously published GMA models and compared with single-step estimation methods, in order to demonstrate its efficacy.

## Methods

The generalized mass action model of cellular metabolism describes the mass balance of metabolites, taking into account all metabolic influxes and effluxes and their stoichiometric ratios, as follows:

$$\frac{dX(t,p)}{dt} = \dot{X}(t,p) = S\,v(X,p), \tag{1}$$

where X(t,p) is the vector of metabolic concentration time profiles, $S \in \mathbb{R}^{m \times n}$ is the stoichiometric matrix for m metabolites that participate in n reactions, and v(X,p) denotes the vector of metabolic fluxes (i.e. reaction rates). Here, each flux is described by a power-law equation:

$$v_j(X,p) = \gamma_j \prod_i X_i^{f_{ji}}, \tag{2}$$

where γj is the rate constant of the j-th flux and fji is the kinetic order parameter, representing the influence of metabolite Xi on the j-th flux (positive: Xi is an activating factor or a substrate; negative: Xi is an inhibiting factor). In incremental parameter identification, a data pre-processing step (e.g.
smoothing or filtering) is usually applied to the noisy time-course concentration data Xm(tk), in order to improve the time-slope estimates $\dot{X}_m(t_k)$. Subsequently, the dynamic metabolic fluxes v(tk) are estimated from Equation (1) by substituting $\dot{X}(t)$ with $\dot{X}_m(t_k)$. Finally, the kinetic parameters associated with the j-th flux (i.e. γj and the fji's) can be calculated using a least square regression of the power-law flux function in Equation (2) against the estimated vj(tk). Note that for GMA models, the least square parameter regressions in the last step are linear in the logarithmic scale and thus can be performed very efficiently.

A unique set of dynamic flux values v(tk) can only be computed from $\dot{X}_m(t_k) = S\,v(t_k)$ when the number of metabolites is at least that of fluxes. However, a metabolite in general participates in more than one metabolic flux (m < n). In such a situation, there exist an infinite number of dynamic flux combinations v(tk) that satisfy $\dot{X}_m(t_k) = S\,v(t_k)$. The dimensionality of the set of flux solutions is equal to the degree of freedom (DOF), given by the difference between the number of fluxes and the number of metabolites: nDOF = n − m > 0 (assuming S has full row rank, i.e. there is no redundant ODE in Equation (1)). A positive DOF means that the values of nDOF selected fluxes can be set independently, from which the remaining fluxes can be computed. This relationship forms the basis of the proposed estimation method, in which the model goodness of fit to data is optimized by adjusting only the subset of parameters associated with the independent fluxes above. Specifically, we start by decomposing the fluxes into two groups: $v(t_k) = [\,v_I(t_k)^T \; v_D(t_k)^T\,]^T$, where the subscripts I and D denote the independent and dependent subsets, respectively. Then, the parameter vector p and the stoichiometric matrix S can be structured correspondingly as $p = [\,p_I \; p_D\,]$ and $S = [\,S_I \; S_D\,]$.
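As a minimal illustration of the two inexpensive steps of this procedure — smoothing with slope estimation, and the log-linear flux regression — consider the following sketch. It is written in Python with NumPy rather than the authors' MATLAB, and all function names are ours:

```python
import numpy as np

def smooth_and_slope(t, x_noisy, degree=6):
    """Polynomial smoothing of one noisy concentration series, followed by
    central finite-difference slope estimates (one-sided at the end points)."""
    x_smooth = np.polyval(np.polyfit(t, x_noisy, degree), t)
    slopes = np.empty_like(x_smooth)
    slopes[1:-1] = (x_smooth[2:] - x_smooth[:-2]) / (t[2:] - t[:-2])
    slopes[0] = (x_smooth[1] - x_smooth[0]) / (t[1] - t[0])
    slopes[-1] = (x_smooth[-1] - x_smooth[-2]) / (t[-1] - t[-2])
    return x_smooth, slopes

def fit_flux_loglinear(X_data, v_data):
    """Least square regression of one power-law flux in log scale:
    log v = log gamma + sum_i f_i log X_i  (linear in the parameters).

    X_data: (K, m) concentrations at K time points; v_data: (K,) flux values.
    Returns the rate constant gamma and the kinetic order vector f."""
    A = np.hstack([np.ones((len(v_data), 1)), np.log(X_data)])
    coef, *_ = np.linalg.lstsq(A, np.log(v_data), rcond=None)
    return float(np.exp(coef[0])), coef[1:]
```

Because the regression is linear in log scale, the global least squares solution for each flux comes from a single `lstsq` call, which is what makes the final step of the incremental method cheap.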
The relationship between the independent and dependent fluxes can be formulated by rearranging $\dot{X}_m(t_k) = S\,v(t_k)$ into:

$$v_D(t_k) = S_D^{-1}\left[\dot{X}_m(t_k) - S_I\,v_I(X_m(t_k), p_I)\right]. \tag{3}$$

In this case, given pI, one can compute the independent fluxes vI(Xm(tk),pI) using the concentration data Xm(tk), and subsequently obtain vD(tk) from Equation (3). Finally, pD can be estimated by a simple least square fitting of vD(Xm(tk),pD) to the computed vD(tk), one flux at a time, when there are more time points than the number of parameters in each flux.

In this study, two formulations of the parameter estimation of the ODE models in Equation (1) are investigated, involving the minimization of concentration and slope errors. The objective function for the concentration error is given by

$$\Phi_C(p, X) = \frac{1}{mK}\sum_{k=1}^{K}\left[X_m(t_k) - X(t_k, p)\right]^T\left[X_m(t_k) - X(t_k, p)\right] \tag{4}$$

and that for the slope error is given by

$$\Phi_S(p, X) = \frac{1}{mK}\sum_{k=1}^{K}\left[\dot{X}_m(t_k) - S\,v(X_m(t_k), p)\right]^T\left[\dot{X}_m(t_k) - S\,v(X_m(t_k), p)\right], \tag{5}$$

where K denotes the total number of measurement time points and X(tk,p) is the concentration prediction (i.e. the solution of the ODE model in Equation (1)). Figure 1 describes the formulation of the incremental parameter estimation and the procedure for computing the objective functions. Note that the computation of ΦC requires an integration of the ODE model and thus, the estimation using this objective function is expected to be computationally costlier than that using ΦS. On the other hand, the metabolic mass balance is only approximately satisfied at the discrete time points tk during the parameter estimation using ΦS, as the ODE model is not integrated.

Flowchart of the incremental parameter estimation.

There are several important practical considerations in the implementation of the proposed method. The first consideration is the selection of the independent fluxes. Here, the set of these fluxes is selected such that (i) the m×m submatrix SD is invertible, (ii) the total number of the independent parameters pI is small, and (iii) the prior knowledge of the corresponding pI is maximized.
The last two aspects should lead to a reduction in the parameter search space and in the cost of finding the global optimal solution of the minimization problem in Figure 1. The second consideration concerns constraints in the parameter estimation. Biologically relevant values of parameters are often available, providing lower and/or upper bounds for the parameter estimates. In addition, enzymatic reactions in the ODE model are often assumed to be irreversible and thus, dynamic flux estimates are constrained to be positive. Hence, the parameter estimation involves a constrained minimization problem, for which many global optimization algorithms exist.

So far, we have assumed that time-course concentration data are available for all metabolites. However, the method above can be modified to accommodate more general circumstances, in which data for one or several metabolites are missing. In this case, the ODE model is first rewritten to separate the mass balances associated with measured and unmeasured metabolites, such that

$$\dot{X}(t,p) = \begin{bmatrix} \dot{X}_M \\ \dot{X}_U \end{bmatrix}(t,p) = \begin{bmatrix} S_M \\ S_U \end{bmatrix} v(X_M, X_U, p), \tag{6}$$

where the subscripts M and U refer to components that correspond to measured and unmeasured metabolites, respectively. Again, if the fluxes are split into the two categories vI and vD as above, the following relationship still applies for the measured metabolites:

$$v_D(t_k) = S_{D,M}^{-1}\left[\dot{X}_M(t_k) - S_{I,M}\,v_I(t_k)\right]. \tag{7}$$

Naturally, the degree of freedom associated with the dynamic flux estimation is higher than before, by the number of components in XU. Figure 2 presents a modification of the parameter estimation procedure in Figure 1 to handle the case of missing data, in which an additional step involving the simulation of the unmeasured metabolites, $\dot{X}_U = S_U\,v(X_M, X_U, p)$, is performed. In this integration, XM is set as an external variable, whose time-profiles are interpolated from the measured concentrations.
The set of independent fluxes vI is now selected to include all fluxes that appear in $\dot{X}_U$ and those that lead to a full column ranked SD,M. If SD,M is a non-square matrix, then a pseudo-inverse is used in Equation (7). Of course, the same considerations mentioned above are equally relevant in this case. Note that the initial conditions of XU will also need to be estimated.

Flowchart of the incremental parameter estimation when metabolites are not completely measured.

## Results

Two case studies, a generic branched pathway [7] and the glycolytic pathway of L. lactis [16], were used to evaluate the performance of the proposed estimation method. In addition, simultaneous estimation methods employing the same objective functions in Equations (4) and (5) were applied to these case studies, to gauge the reduction in computational cost from using the proposed strategy. In order to alleviate the ODE stiffness issue, parameter combinations that lead to a violation of the MATLAB (ode15s) integration time step criterion are assigned a large error value (ΦC = 10³ for the branched pathway and 10⁵ for the glycolytic pathway). Alternatively, one could also set a maximum allowable integration time and penalize the associated parameter values upon violation, as described above. In this study, the optimization problems were solved in MATLAB using the publicly available eSSM GO (Enhanced Scatter Search Method for Global Optimization) toolbox, a population-based metaheuristic global optimization method incorporating probabilistic and deterministic strategies [17,18]. The MATLAB codes of the case studies below are available in Additional file 1. Each parameter estimation was repeated five times to ensure the reliability of the global optimal solution. Unless noted differently, the iterations in the optimization algorithm were terminated when the values of the objective functions improved by less than 0.01% or the runtime exceeded the maximum duration (5 days).
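Before moving to the case studies, the linear-algebra core of the method — the dependent-flux computation of Equations (3) and (7) and the slope-error objective of Equation (5) — can be sketched in a few lines. This Python/NumPy version is our illustration, not the authors' MATLAB implementation, and the function names are ours:

```python
import numpy as np

def dependent_fluxes(S_D, S_I, Xdot, v_I):
    """Eq. (3): v_D(t_k) = S_D^{-1} [ Xdot(t_k) - S_I v_I(t_k) ].

    S_D: (m, nD), S_I: (m, nI), Xdot: (K, m) slope estimates, v_I: (K, nI).
    For the missing-data case of Eq. (7), pass only the measured rows; a
    pseudo-inverse is used when S_D is non-square (full column rank)."""
    rhs = (Xdot - v_I @ S_I.T).T                  # (m, K)
    if S_D.shape[0] == S_D.shape[1]:
        return np.linalg.solve(S_D, rhs).T        # exact inverse, Eq. (3)
    return (np.linalg.pinv(S_D) @ rhs).T          # pseudo-inverse, Eq. (7)

def slope_error(S, v, Xdot):
    """Eq. (5): (1/(mK)) * sum_k || Xdot(t_k) - S v(t_k) ||^2."""
    r = Xdot - v @ S.T
    return float(np.mean(r ** 2))
```

Since only the independent fluxes depend on the searched parameters pI, the optimizer touches a small vector while the dependent fluxes follow from one linear solve per evaluation; this is the source of the reduced search space.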
### A generic branched pathway

The generic branched pathway in this example consists of four metabolites and six fluxes, describing the transformations among the metabolites (double-line arrows), with feedback activation and inhibition (dashed arrows with plus or minus signs, respectively), as shown in Figure 3A. The GMA model of this pathway is given in Figure 3B, containing a total of thirteen rate constants and kinetic orders. This model, with the parameter values and initial conditions reported previously [7], was used to generate noise-free and noisy time-course concentration data (i.i.d. additive noise from a Gaussian distribution with 10% coefficient of variation). The noisy data were smoothened using a 6th-order polynomial, which provided the best relative goodness of fit among polynomials according to the Akaike Information Criterion (AIC) [19] and adjusted R² [20]. Subsequently, the time-slopes of the noise-free and smoothened noisy data were computed using the central finite difference approximation.

A generic branched pathway. (A) Metabolic pathway map and (B) the GMA model equations [7].

Here, v1 and v6 were chosen as the independent fluxes, as they comprise the least number of kinetic parameters and lead to an invertible SD. The two rate constants and two kinetic orders were constrained to within [0, 25] and [0, 2], respectively. In addition, all the reactions are assumed to be irreversible. Table 1 compares simultaneous and incremental parameter estimation runs using noise-free data, employing the two objective functions above. Regardless of the objective function, the proposed incremental approach significantly outperformed the simultaneous estimation. When using the concentration-error minimization, the simultaneous optimization had great difficulty converging due to stiff ODE integrations.
Only one out of five repeated runs could complete after relaxing the convergence criterion of the objective function to 1%, while the others were prematurely terminated after the prescribed maximum runtime of 5 days. In contrast, the proposed incremental estimation was able to find a minimum of ΦC in less than 96 seconds on average, with good concentration fit and parameter accuracy (see Figure 4A and Table 1). By avoiding ODE integrations using ΦS, the simultaneous estimation of parameters could be completed in roughly 10 minutes, but this was much slower than the incremental estimation using ΦC. In this case, the incremental method was able to converge in under 2 seconds, or over 250 times faster. The goodness of fit to the concentration data and the accuracy of the parameter estimates were comparable for all three completed estimations (see Figure 4B and Table 1). The parameter inaccuracy in this case was mainly due to the polynomial smoothing of the concentration data, since the same estimations using the analytical values of the slopes (obtained by evaluating the right hand side of the ODE model in Equation (1)) gave accurate parameter estimates (see Additional file 2: Table S1).

Parameter estimations of the branched pathway model using noise-free data

Simultaneous and incremental estimation of the branched pathway using in silico noise-free data (×). (A) concentration predictions using parameter estimates from incremental method by ΦC minimization (–––); (B) ...

Table 2 provides the results of the same estimation procedures as above using noisy data. Data noise led to a loss of information and an expected decline in parameter accuracy. As before, the simultaneous estimation using ΦC encountered the stiffness problem, and three out of five runs did not finish within the five-day time limit.
The incremental approach using either one of the objective functions offered a significant reduction in the computational time over the simultaneous estimation using ΦS, while providing comparable parameter accuracy and concentration and slope fit (see Figure 5 and Table 2). In this example, data noise did not affect the computational cost in obtaining the (global) minimum of the objective functions.

Parameter estimations of the branched pathway model using noisy data

Simultaneous and incremental estimation of the branched pathway using in silico noisy data (×). (A) concentration predictions using parameter estimates from incremental method by ΦC minimization (–––); (B) concentration ...

Finally, the estimation strategy described in Figure 2 was applied to this example using noise-free data and assuming X3 data were missing. Fluxes v3 and v4, which appear in $\dot{X}_3$, were chosen to be among the independent fluxes, and flux v1 was also added to the set such that the dependent fluxes can be uniquely determined from Equation (7). In addition to the parameters associated with the aforementioned fluxes, the initial condition X3(t0) was also estimated. The bounds for the rate constants and kinetic orders were kept the same as above, while the initial concentration was bounded within [0, 5]. Table 3 summarizes the parameter estimation results. Four out of five repeated runs of the ΦC simultaneous optimization were again prematurely terminated after 5 days. Meanwhile, the rest of the estimations could provide reasonably good data fitting, with the exception of the fit to the X3 data, as expected (see Figure 6). Like data noise, missing data led to increased inaccuracy of the parameter estimates, regardless of the estimation method.
Finally, the computational speedup of the incremental over the simultaneous estimation was significant, but lower than in the previous runs due to the additional integration of XU and the larger number of independent parameters. The detailed values of the parameter estimates in this case study can be found in the Additional file 2: Tables S2 and S3.

Parameter estimations of the branched pathway model using noise-free data with X3 missing

Simultaneous and incremental estimation of the branched pathway with missing X3: in silico noise-free data (×). (A) concentration predictions using parameter estimates from incremental method by ΦC minimization (---); (B) concentration ...

### The glycolytic pathway in Lactococcus lactis

The second case study was taken from the GMA modeling of the glycolytic pathway in L. lactis [16], involving six internal metabolites: glucose 6-phosphate (G6P) – X1, fructose 1,6-bisphosphate (FBP) – X2, 3-phosphoglycerate (3-PGA) – X3, phosphoenolpyruvate (PEP) – X4, pyruvate – X5, lactate – X6, and nine metabolic fluxes. In addition, external glucose (Glu), ATP and Pi are treated as off-line variables, whose values were interpolated from measurement data. The pathway connectivity is given in Figure 7A, while the model equations are provided in Figure 7B.

L. lactis glycolytic pathway. (A) Metabolic pathway map (double-lined arrows: flow of material; dashed arrows with plus or minus signs: activation or inhibition, respectively) and (B) the GMA model equations [16].

The time-course concentration data of all metabolites were measured using in vivo NMR [21,22], and the smoothened data used for the parameter estimations below are shown in Figure 8. The raw data had been filtered previously [16], and these smoothened data, for all metabolites but X6, were directly used for the concentration slope calculation in this case study.
In the case of X6, a saturating Hill-type equation, $k_1 t^n / (k_2 + t^n)$, where t is time and the constants k1, k2 and n are smoothing parameters, was fitted to the filtered data to remove unrealistic fluctuations. The central difference approximation was also adopted to obtain the time-slope data.

Incremental estimation of the L. lactis model: Experimental data (×) compared with model predictions using parameters from concentration error minimization (–––) and slope error minimization (---).

Fluxes v4, v7 and v9 were selected as the DOF, again to give the least number of pI and to ensure that SD is invertible. All rate constants were constrained to within [0, 50], while the independent and dependent kinetic orders were allowed within [0, 5] and [-5, 5], respectively. The difference between the bounds for the independent and dependent kinetic orders was introduced on purpose, to simulate a scenario where the signs of the independent kinetic orders were known a priori. Table 4 reports the outcome of the single-step and incremental parameter estimation runs using ΦC and ΦS. The values of the parameter estimates are given in the Additional file 2: Table S4. As in the previous case study, there was a significant reduction in the estimation runtime from using the proposed method over the simultaneous estimation, with comparable goodness of fit in concentration and slope. None of the five repeats of the ΦC simultaneous minimization converged within the five-day time limit, even after relaxing the convergence criterion of the objective function to 1%. On the other hand, the incremental estimation using ΦC was not only able to converge, but was also faster than the simultaneous estimation of ΦS that did not require any ODE integration. The incremental estimation using ΦC was able to provide the parameters with the best overall concentration fit (see Figure 8), despite having a large slope error.
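As an aside on the pre-processing: the Hill-type smoothing function used for the X6 series above is straightforward to fit with standard nonlinear least squares. The following Python/SciPy sketch is our illustration, not the authors' code, and the starting guess is an assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(t, k1, k2, n):
    """Saturating Hill-type smoothing function k1 * t^n / (k2 + t^n)."""
    return k1 * t ** n / (k2 + t ** n)

def fit_hill(t, x, p0=(1.0, 1.0, 1.0)):
    """Fit the three smoothing parameters (k1, k2, n) to a filtered series.

    p0 is an illustrative initial guess; a poor guess may require bounds or
    a different starting point for noisy data."""
    popt, _ = curve_fit(hill, t, x, p0=p0, maxfev=10000)
    return popt
```

The fitted curve, rather than the raw filtered data, is then differentiated to obtain the X6 time-slopes, which avoids amplifying the unrealistic fluctuations mentioned above.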
Finally, minimizing ΦS does not guarantee that the resulting ODE is numerically solvable, as was the case of simultaneous estimation, due to numerical stiffness. But the incremental parameter estimation from minimizing ΦS can produce solvable ODEs with good concentration and slope fits.

Parameter estimations of the L. lactis model

## Discussion

In this study, an incremental strategy is used to develop a computationally efficient method for the parameter estimation of ODE models. Unlike most commonly used methods, where the parameter estimation is performed to minimize model residuals over the entire parameter space simultaneously, here the estimation is done in two incremental steps, involving the estimation of dynamic reaction rates or fluxes and flux-based parameter regressions. Importantly, the proposed strategy is designed to handle systems in which there exist extra degrees of freedom in the dynamic flux estimation, when the number of metabolic fluxes exceeds that of metabolites. The positive DOF means that there exist infinitely many solutions to the dynamic flux estimation, which is one of the factors underlying the parameter identifiability issues plaguing many estimation problems in systems biology [23,24]. The main premise of the new method is in recognizing that while many equivalent solutions exist for the dynamic flux estimation, the subsequent flux-based regression will give parameter values with different goodness-of-fit, as measured by ΦC or ΦS. In other words, given any two dynamic flux vectors v(tk) satisfying $\dot{X}_m(t_k) = S\,v(t_k)$, the associated parameter pairs (pI, pD) may not predict the slope or concentration data equally well, due to differences in the quality of parameter regression for each v(tk).
Also, because of the DOF, the minimization of model residuals needs to be done only over a subset of parameters that are associated with the flux degrees of freedom, resulting in much reduced parameter search space and correspondingly much faster convergence to the (global) optimal solution. The superior performance of the proposed method over simultaneous estimation was convincingly demonstrated in the two GMA modeling case studies in the previous section. The minimization of slope error, also known as the slope-estimation-decoupling strategy method [7], is arguably one of the most computationally efficient simultaneous methods. In this strategy, the parameter fitting essentially constitutes a zero-finding problem and the estimation can be done without having to integrate the ODEs. Yet, the incremental estimation could offer more than two orders of magnitude reduction in the computational time over this strategy.

There are many factors, including data-related, model-related, computational and mathematical issues, which contribute to the difficulty in estimating kinetic parameters of ODE models from time-course concentration data [1]. Each of these factors has been addressed to a certain degree by using the incremental identification strategy presented in this work. For example, in data-related issues, the proposed method can be modified to handle the absence of concentration data of some metabolites, as shown in Figure 2. Nevertheless, the method is neither able nor expected to resolve the lack of complete parameter identifiability due to insufficient (dynamical) information contained in the data [23,24]. As illustrated in the first case study, single-step and incremental approaches provided parameter estimates with similar accuracies, which expectedly deteriorated with noise contamination and loss of data. The appropriateness of using a particular mathematical formulation, like power law, is an example of model-related issues.
As discussed above, this issue can be addressed after the dynamic fluxes are estimated, where the chosen functional dependence of the fluxes on a specific set of metabolite concentrations can be tested prior to the parameter regression [14]. Next, the computational issues associated with performing a global optimization over a large number of variables and the need to integrate ODEs have been mitigated in the proposed method by performing optimization only over the independent parameter subset and using a minimization of slope error, respectively. Finally, in this work, we have also addressed a mathematical issue related to the degrees of freedom that exist during the inference of dynamic fluxes from slopes of concentration data. However, extra degrees of freedom (mathematical redundancies) are also expected to influence the second step of the method, i.e. one-flux-at-a-time parameter estimation. For (log)linear regression of parameters in GMA models, such redundancy will lead to a lack of full column rank of the matrix containing the logarithms of concentration data Xm(tk) and thus, can be straightforwardly detected. The proposed estimation method has several weaknesses that are common among incremental estimation methods. As demonstrated in the first case study, the accuracy of the identified parameter relies on the ability to obtain good estimates of the concentration slopes. Direct slope estimation from the raw data, for example using central finite difference approximation, is usually not advisable due to high degree of noise in the typical biological data. Hence, pre-smoothing of the time-course data is often required, as done in this study. Many algorithms are available for such purpose, from simplistic polynomial regression and splines to more advanced artificial neural network [7,25] and Whittaker-Eilers smoother [26,27]. 
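Of the smoothers listed, the Whittaker-Eilers smoother [26,27] is compact enough to sketch: it balances fidelity to the data against a roughness penalty on d-th order differences. The dense-matrix Python version below is our illustration (sparse matrices are preferable for long series):

```python
import numpy as np

def whittaker_smooth(y, lam=10.0, d=2):
    """Whittaker-Eilers smoother: z = argmin ||y - z||^2 + lam * ||D_d z||^2,
    where D_d is the d-th order difference matrix. Larger lam gives a
    smoother curve; d=2 penalizes curvature."""
    m = len(y)
    D = np.diff(np.eye(m), n=d, axis=0)   # (m - d, m) difference operator
    return np.linalg.solve(np.eye(m) + lam * D.T @ D, np.asarray(y, dtype=float))
```

The single linear solve makes this smoother attractive for the pre-processing step: unlike polynomial regression, it imposes no global functional form, and the smoothed series can then be differentiated for the slope estimates.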
If reliable concentration slope estimates are not available, but bounds for the slope values can be obtained, then one can use interval arithmetic to derive upper and lower limits for the dependent fluxes and parameters using Equation (3) (or Equation (7)) [28]. When the objective function involves integrating the model, a validated ODE solution with interval parameters can be used to produce the corresponding upper and lower bounds of the concentration predictions [29]. Finally, the estimation can be reformulated, for example by minimizing the upper bound of the objective.

In addition to the drawback discussed above, the proposed strategy requires a priori knowledge about the topology of the network. For cellular metabolism, such information has become more readily available, as genome-scale metabolic networks of many important organisms, including human, E. coli and S. cerevisiae, have been and are continuously being reconstructed [30]. For other networks, many algorithms also exist for the estimation of network topology based on time-series concentration data, including Bayesian network inference, transfer entropy, and Granger causality [31-33].

## Conclusions

The estimation of kinetic parameters of ODE models from time-course concentration data remains a key bottleneck in model building in systems biology. The lack of complete parameter identifiability has been blamed as the root cause of the difficulty in such estimation. In this study, a new incremental estimation method is proposed that is able to overcome the existence of extra degrees of freedom in the dynamic flux estimation from concentration slopes and to significantly reduce the computational requirements of finding parameter estimates. The method can also be applied, after minor modifications, to circumstances where concentration data for a few molecules are missing.
While the present work concerns the GMA modeling of metabolic networks, the estimation strategies discussed here have general applicability to any kinetic model that can be written as $\dot{X}(t_k) = S\,v(t_k)$. The creation of computationally efficient parameter estimation methods, such as the one presented here, represents an important step toward genome-scale kinetic modeling of cellular metabolism.

## Competing interest

The authors declare that they have no competing interests.

## Authors’ contributions

GJ conceived of the study, carried out the parameter estimation and wrote the manuscript. GS participated in the design of the study. RG conceived and guided the study and wrote the manuscript. All authors have read and approved the final manuscript.

## Funding

Singapore-MIT Alliance and ETH Zurich.

## Supplementary Material

Incremental Estimation Code. Additional file 1 contains MATLAB codes for the parameter estimations in the two case studies: the branched pathway model and the L. lactis pathway model.

Supplementary Tables. Additional file 2 contains the parameter estimation results of the branched pathway model using noise-free data and analytical slopes, the parameter estimates of the two case studies, and the parameter estimation results of five repeated runs.

## References

• Chou IC, Voit EO. Recent developments in parameter estimation and structure identification of biochemical and genomic systems. Math Biosci. 2009;219(2):57–83. [PubMed]
• Mendes P, Kell D. Non-linear optimization of biochemical pathways: applications to metabolic engineering and parameter estimation. Bioinformatics. 1998;14(10):869–883. [PubMed]
• Moles CG, Mendes P, Banga JR. Parameter estimation in biochemical pathways: a comparison of global optimization methods. Genome Res. 2003;13(11):2467–2474. [PubMed]
• Savageau MA. Biochemical systems analysis. I. Some mathematical properties of the rate law for the component enzymatic reactions. J Theor Biol. 1969;25(3):365–369.
[PubMed] • Savageau MA. Biochemical systems analysis. II. The steady-state solutions for an n-pool system using a power-law approximation. J Theor Biol. 1969;25(3):370–379. [PubMed] • Voit EO. Computational analysis of biochemical systems: a practical guide for biochemists and molecular biologists. New York: Cambridge University Press; 2000. • Voit EO, Almeida J. Decoupling dynamical systems for pathway identification from metabolic profiles. Bioinformatics. 2004;20(11):1670–1681. [PubMed] • Tsai KY, Wang FS. Evolutionary optimization with data collocation for reverse engineering of biological networks. Bioinformatics. 2005;21(7):1180–1188. [PubMed] • Kimura S, Ide K, Kashihara A, Kano M, Hatakeyama M, Masui R, Nakagawa N, Yokoyama S, Kuramitsu S, Konagaya A. Inference of S-system models of genetic networks using a cooperative coevolutionary algorithm. Bioinformatics. 2005;21(7):1154–1163. [PubMed] • Maki Y, Ueda T, Masahiro O, Naoya U, Kentaro I, Uchida K. Inference of genetic network using the expression profile time course data of mouse P19 cells. Genome Inform. 2002;13:382–383. • Jia G, Stephanopoulos G, Gunawan R. Parameter estimation of kinetic models from metabolic profiles: two-phase dynamic decoupling method. Bioinformatics. 2011;27(14):1964–1970. [PubMed] • Bardow A, Marquardt W. Incremental and simultaneous identification of reaction kinetics: methods and comparison. Chem Eng Sci. 2004;59(13):2673–2684. • Marquardt W, Brendel M, Bonvin D. Incremental identification of kinetic models for homogeneous reaction systems. Chem Eng Sci. 2006;61(16):5404–5420. • Goel G, Chou IC, Voit EO. System estimation from metabolic time-series data. Bioinformatics. 2008;24(21):2505–2511. [PubMed] • Voit EO, Goel G, Chou IC, Fonseca LL. Estimation of metabolic pathway systems from different data sources. IET Syst Biol. 2009;3(6):513–522. [PubMed] • Voit EO, Almeida J, Marino S, Lall R, Goel G, Neves AR, Santos H. 
Regulation of glycolysis in Lactococcus lactis: an unfinished systems biological case study. Syst Biol (Stevenage) 2006;153(4):286–298. [PubMed] • Egea JA, Rodriguez-Fernandez M, Banga JR, Marti R. Scatter search for chemical and bio-process optimization. J Global Optimization. 2007;37(3):481–503. • Rodriguez-Fernandez M, Egea JA, Banga JR. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems. BMC Bioinformatics. 2006;7:483. [PubMed] • Akaike H. New Look at Statistical-Model Identification. IEEE T Automat Contr. 1974;Ac19(6):716–723. • Montgomery DC, Runger GC. Applied statistics and probability for engineers. 4. Hoboken, NJ: Wiley; 2007. • Neves AR, Ramos A, Costa H, van Swam II, Hugenholtz J, Kleerebezem M, de Vos W, Santos H. Effect of different NADH oxidase levels on glucose metabolism by Lactococcus lactis: kinetics of intracellular metabolite pools determined by in vivo nuclear magnetic resonance. Appl Environ Microbiol. 2002;68(12):6332–6342. [PubMed] • Neves AR, Ramos A, Nunes MC, Kleerebezem M, Hugenholtz J, de Vos WM, Almeida J, Santos H. In vivo nuclear magnetic resonance studies of glycolytic kinetics in Lactococcus lactis. Biotechnol Bioeng. 1999;64(2):200–212. [PubMed] • Raue A, Kreutz C, Maiwald T, Bachmann J, Schilling M, Klingmuller U, Timmer J. Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood. Bioinformatics. 2009;25(15):1923–1929. [PubMed] • Srinath S, Gunawan R. Parameter identifiability of power-law biochemical system models. J Biotechnol. 2010;149(3):132–140. [PubMed] • Almeida JS. Predictive non-linear modeling of complex data by artificial neural networks. Curr Opin Biotechnol. 2002;13(1):72–76. [PubMed] • Eilers PH. A perfect smoother. Anal Chem. 2003;75(14):3631–3636. [PubMed] • Vilela M, Borges CC, Vinga S, Vasconcelos AT, Santos H, Voit EO, Almeida JS. Automated smoother for the numerical decoupling of dynamics models. 
BMC Bioinformatics. 2007;8:305. [PubMed]
• Jaulin L, Kieffer M, Didrit O, Walter E. Applied interval analysis: with examples in parameter and state estimation, robust control and robotics. London: Springer; 2001.
• Lin YD, Stadtherr MA. Validated solution of ODEs with parametric uncertainties. 16th European Symposium on Computer Aided Process Engineering and 9th International Symposium on Process Systems Engineering. 2006;21:167–172.
• Latendresse M, Paley S, Karp PD. Browsing metabolic and regulatory networks with BioCyc. Methods Mol Biol. 2012;804:197–216. [PubMed]
• Imoto S, Kim S, Goto T, Miyano S, Aburatani S, Tashiro K, Kuhara S. Bayesian network and nonparametric heteroscedastic regression for nonlinear modeling of genetic network. J Bioinform Comput Biol. 2003;1(2):231–252. [PubMed]
• Nagarajan R, Upreti M. Comment on causality and pathway search in microarray time series experiment. Bioinformatics. 2008;24(7):1029–1032. [PubMed]
• Tung TQ, Ryu T, Lee KH, Lee D. Inferring gene regulatory networks from microarray time series data using transfer entropy. In: Kokol P, Los A, editors. Proceedings of the Twentieth IEEE International Symposium on Computer-Based Medical Systems: 20-22 June 2007; Maribor, Slovenia. Los Alamitos: IEEE Computer Society; 2007. pp. 383–388.

Articles from BMC Systems Biology are provided here courtesy of BioMed Central
Announcing the visualcounter module

It has been almost two years since I posted about the main idea of the visualcounter module. I am happy to announce the official release of the module. I have been using this module in my presentations for almost two years without any problems, so I believe that it is stable enough to be released. At present, the module is available on github, and it should be available through ConTeXt garden soon. Look at the documentation to see some of the features of the module (in particular, the "star rating" example based on Jim Hefferon’s article in the Practex Journal). The module provides six counters. Two of these were created as proofs of concept and are not well tested; the remaining four—scratchcounter, mayanumbers, markers, and countdown—are well tested and, hopefully, their interface will not change. This was my first module that uses the ConTeXt namespace macros. If you peek into the module, you’ll notice that I only define one macro; everything else is handled by the ConTeXt namespace macro \definenamespace. The other interesting feature of this module is that I use a separate metapost instance for displaying the counters. This avoids conflicts with user definitions. For example, if a user decides to change the metapost definition of fill for whatever reason,

\startMPdefinitions
let fill = draw;
\stopMPdefinitions

such a change will not affect the visualcounter module! Any feedback is appreciated.

Removing multiple blank lines when typesetting code listings

The listings package in LaTeX has an option to collapse multiple empty lines into a single empty line when typesetting code listings. Today, there was a question on TeX.se about how to do something similar when using the minted package. Since the vim module uses the same principle as the minted package, I wondered how one could collapse multiple empty lines into a single line.
One of the features of the vim module is that you can source an arbitrary vimrc file before processing the code through the vim editor to generate syntax highlighted code. This feature makes it possible to delegate the task of collapsing multiple blank lines into a single blank line to vim, the editor. Since the vim module first writes the source code to a file with extension .tmp, the following vimrc snippet will collapse all multiple blank lines into a single blank line whenever a .tmp file is loaded:

au BufEnter *.tmp %s/\(^\s*\n\)\{2,\}/\r/ge | w

Use this inside the vim module as follows (example also available on github):

\usemodule[vim]

\startvimrc[name=collapse]
au BufEnter *.tmp %s/\(^\s*\n\)\{2,\}/\r/ge | w
\stopvimrc

\definevimtyping[CPPtyping][syntax=cpp, vimrc=collapse]

\starttext
\startCPPtyping
i++;


i++;


i--;
\stopCPPtyping
\stoptext

Agreed, this is not as simple as the extralines=1 option in the listings package. But it is not too complicated, considering that I had not thought about this feature at all when I wrote the vim module.

How I stopped worrying and started using Markdown like TeX

These days I type most of my simple documents (short articles, blog entries, course notes) in markdown. Markdown provides only the basic structured elements (sections, emphasis, urls, lists, footnotes, syntax highlighting, simple tables and figures), which makes it easy to transform the input into multiple output formats. Most of the time, I still want PDF output, and for that I use pandoc to convert markdown to ConTeXt. At the same time, I have the peace of mind that if I need HTML or DOC output, I’ll be able to get that easily. For most of the last decade, I have almost exclusively used LaTeX/ConTeXt for writing all my documents. After moving to Markdown, I miss three features of TeX: separation of content and presentation; conditional inclusion of content; and inclusion of external documents. In this post, I’ll explain how to get these with Markdown.
Separation of content and presentation

TeX gives you a lot of control for creating new structural elements. Let’s take a simple example. Suppose I want to write a file name in a document. Normally, I want the filename to appear in a typewriter font. In LaTeX, I could type it as \texttt{src/hello.c} but it is better to define a custom macro \filename and use \filename{src/hello.c} The advantage is two-fold. Firstly, while writing the file, I am thinking in terms of content (filename) rather than presentation (typewriter font). Secondly, in the future, if I want to change how a filename is displayed (perhaps as a hyper-link to the file), all I need to do is change the definition of the macro. Markdown, with its simplistic structure, lacks the ability to define custom macros.

Conditional compilation

TeX also makes it trivial to generate multiple versions of a document from the same source. Again, let’s take an example. Suppose I am writing notes for a class. Normally, I like to include a short bullet list on my lecture slides, but include a detailed description in the lecture handout. In ConTeXt I can use modes as follows (LaTeX has a similar feature using the comments package):

Feature of the solution
\startitemize[n]
\item Feature 1
\startmode[handout]
Explanation of the feature ...
\stopmode
\item Feature 2
\startmode[handout]
Explanation of the feature ...
\stopmode
\stopitemize

To generate the slides version of my lecture notes, I compile them using

context --mode=slides --result=slides <filename>

This version just contains the bullet list. Since the handout mode is not set, the content between \startmode[handout] ... \stopmode is omitted. To generate the handout version of my lecture notes, I compile them using

context --mode=handout --result=handout <filename>

Since the handout mode is set, the content between \startmode[handout] ... \stopmode is included. Such conditional compilation is extremely useful for keeping the slides and handouts in sync.
Again, markdown with its simplistic feature set lacks the ability to do conditional compilation; nor does Pandoc add this feature.

Including external documents

TeX makes it easy to include external documents. This is really important when you want to include source code in your documents. I teach an introductory programming class, and want to make sure that the example code included in my notes is correct. I write the code in a separate file, write the corresponding test files to ensure that the code works correctly, and then include it in my notes using

\typeJAVAfile[src/FactoryExample.java]

which gives me syntax highlighted source code. Pandoc does generate syntax highlighted source code, but does not provide any means to include external source code. So, I have to copy-paste the code from the actual source file into the markdown document, which is an error-prone process. If I only cared about PDF output (via a LaTeX/ConTeXt backend), I could simply use the same TeX macros in the markdown document. Pandoc passes TeX macros unchanged to the LaTeX/ConTeXt backend, so I would get a TeX document with all the bells and whistles. But if I tried to generate HTML or DOC output, these TeX macros would be omitted, and I’d get a broken document. One of my reasons for switching to Markdown was the peace of mind that I can generate HTML or DOC output if needed. Using TeX macros in the source takes away that advantage. So, I started looking for possible solutions and found gpp—the generic pre-processor. It is similar to the C preprocessor (which handles the #include and #define stuff in C/C++) but provides many configuration options. I use it with the -H option, which requires macros to be specified in an HTML-like mode:

<#include "file">
<#define MACRO|value>
Use <#MACRO>

Normally <#...> does not appear in a document, so using gpp is safe. See the gpp documentation for complete details. I’ll show how to get the three features that I miss from TeX:

1.
Separation of content and presentation. With gpp, I can define new macros that denote new structural elements, e.g.,

<#define filename|#1>
The source is included in <#filename src/hello.c>

When I compile the document using gpp -H, I get

The source is included in src/hello.c

Sure, this requires more typing than simply using ..., but that is the price that one has to pay for getting more structure. More importantly, I can define the #filename macro based on the output format:

<#define filename|#1>
<#ifdef HTML>
<#define filename|<code class="filename">#1</code>>
<#endif>
<#ifdef TEX>
<#define filename|\\filename{#1}>
<#endif>
The source is included in <#filename src/hello.c>

Now, if I compile the document using gpp -H -DHTML=1, I get

The source is included in <code class="filename">src/hello.c</code>

and if I compile using gpp -H -DTEX=1, I get

The source is included in \filename{src/hello.c}

This ensures that the document structure is passed to the output as well. To make it easy to manage macros, create three files: macros.gpp containing all macros, html.gpp overwriting some of the macros with HTML equivalents, and tex.gpp overwriting some of the macros with TeX equivalents. End the macros.gpp file with

....
<#ifdef HTML>
<#include "html.gpp">
<#endif>
<#ifdef TEX>
<#include "tex.gpp">
<#endif>

and then preprocess the document using gpp -DTEX=1 --include macros.gpp <filename> (or -DHTML=1 for HTML output).

2. Conditional compilation. Actually, the previous example already shows how to get conditional compilation: use the -D command line switch and check the variable definition using #ifdef. Thus, the above example translates to:

Feature of the solution
1. Feature 1
<#ifdef HANDOUT>
Explanation of the feature ...
<#endif>
2. Feature 2
<#ifdef HANDOUT>
Explanation of the feature ...
<#endif>

When I compile without -DHANDOUT=1, I get the slides version; when I compile with -DHANDOUT=1, I get the handout version.

3.
Including external documents. External documents can be included with the #include directive. So, I can include an external file using

~~~ {.java}
<#include "src/Factory.java">
~~~

Putting it all together

All that is needed is to run the gpp preprocessor and then pass the output to pandoc:

gpp -H <options> <filename> | pandoc -f markdown -t <format> -o <outfile>

Hide this in a wrapper script, a shell function, or a Makefile, and you have a markdown processor with the important features of TeX!

A ConTeXt style file for formatting RSS feeds for Kindle

As I said in the last post, I bought an Amazon Kindle Touch some time back, and I find it very useful for reading on the bus/train while commuting to work. I use it to read novels and books, a few newspapers and magazines that I subscribe to, and RSS feeds of different blogs that I follow. Until now, I had been using ifttt to send RSS feeds to Instapaper; Instapaper then emails a daily digest as an ebook to my kindle account at midnight; in the morning, I switch on my Kindle for a minute; the Kindle syncs new content over Wifi; and off I go. However, Kindle typesets ebooks very poorly, so I decided to write a ConTeXt style file to typeset RSS feeds (check it out on github). To use this style:

\usemodule[rssfeed]

\starttext
\setvariables
  [title={Title of the feed},
   description={Description of the feed},
  ]

\starttitle[title={First feed entry}]
....
\stoptitle

\starttitle[title={Second feed entry}]
...
\stoptitle
\stoptext

It uses the eink module to set the page layout and fonts, and uses a light and clean style for formatting feed entries. Since the proof is in the pudding, look at the following PDFs to see the style for different types of blogs. I use a simple ruby script to parse RSS feeds and Pandoc to convert the contents of each entry to ConTeXt. The script is without bells and whistles, and there is no site-specific formatting of feeds.
All feeds are handled the same way, and as a result, there are a few glitches: for example, IEEE uses some non-standard tags to denote math, which Pandoc doesn’t handle, and the images generated by WordPress blogs that use $latex=...$ to typeset math are not handled correctly by ConTeXt, etc. The script also uses Mutt to email the generated PDF to my Kindle account. This way, I can simply add a cron job that runs the script at an appropriate frequency (daily for usual blogs, weekly for low traffic blogs, and once a month for the table of contents of different journals).

A style file for eink readers

Recently I bought an Amazon Kindle Touch. It is more convenient than the IREX DR1000 for reading morning news and blogs (thanks to Instapaper’s automated delivery of “Read Later” articles, and ifttt for sending RSS feeds to Instapaper). I have also started reading novels on the Kindle as opposed to the DR1000. Being small, the Kindle is easier to carry; and its hardware just works better than the DR1000: instant startup, huge battery life, and wifi; all areas where the DR1000 was lacking. Still, the DR1000 is the best device when it comes to reading and annotating academic papers, which is surprising given that the DR1000 came out 3.5 years ago; perhaps “eink devices for reading and annotating academic papers” is too niche a niche market to have a successful product. The DR1000 was $800 and IREX is now bankrupt. Anyways, since I am reading novels on the Kindle, I have updated my old ConTeXt style file for the DR1000 to also handle the Kindle and am releasing that as a ConTeXt module.
Actually, as two ConTeXt modules: t-eink-devices, which stores the dimensions and desired font sizes for eink devices (currently, it has data only for the DR1000 and Kindle, as those are the only devices that I have), and t-eink, which sets an easy to read style that includes:

• Paper size that matches the screen dimensions
• Tiny margins, no headers and footers
• Bookmarks for titles and chapters (both the DR1000 and Kindle can use PDF bookmarks as a table of contents)
• A reasonable default style for chapter and title headings
• A \startinterlude ... \stopinterlude environment for title pages, dedications, etc.

I have only tested this with simple novels (mostly text and pictures). That is why the module does not set any style for sections, subsections, etc., as I did not need them so far. This is mostly for personal use, but I am announcing this module in case someone wants to give it a shot. To use the module, simply add

\usemodule[eink]
  [
    % alternative=kindle, % or DR1000
    % mainfont={Tex Gyre Schola},
    % sansfont={Tex Gyre Heros},
    % monofont={Latin Modern Mono},
    % mathfont={Xits},
    % size=, % By default, kindle uses 10pt and DR1000 uses 12pt font.
    %        % Use this setting if you want to set a font size.
  ]

This module passes the font loading to the simplefonts module, so use any name for mainfont etc. that simplefonts will understand. If you don’t set any option, then the default values, indicated above, are used. So, to test out the module, you can just use (for Kindle):

\usemodule[eink]

or (for the DR1000):

\usemodule[eink][alternative=DR1000]

Below are samples from Le Petit Prince. The text and images were taken from this website and converted to ConTeXt using Pandoc. The text is also available from Project Gutenberg, Australia. If you have a Kindle or a DR1000, you can compare the quality of these PDFs (hyphenation, line-breaking, widows and orphans) with what you get from the eBook version.
If I am to spend 5-10 hours reading a novel, I don’t mind spending 15 minutes extra (to create a PDF version of the book) to make that reading experience pleasant. The output is not perfect, especially in terms of float placement in the Kindle version (page 5 has an underfull page because the figure was too big to fit on the page, the right float image on page 10 would have been better as a here figure, the right float figures on pages 13-14 are much lower compared to where they are referred to, etc.). But I find these more tolerable than a chapter title appearing at the bottom of the page and occasionally losing pagination when I highlight text (both of which happen with epub documents).

Won’t it be nice if TeX could pretty-print files hosted on github, e.g.,

\typeRUBYfile{https://raw.github.com/adityam/filter/master/Rakefile}

or include a remotely hosted markdown file in your document:

\processmarkdownfile{https://raw.github.com/adityam/filter/master/README.md}

I wanted to add this feature to the filter and vim modules. Although I knew that ConTeXt could read remote files directly, I thought that it would be hard to plug into this mechanism. Boy, was I wrong. Look at the commit history of the change needed to add this feature. All I needed to do was add \locfilename to get the local file name for a file. If the requested file is a remote file (i.e., starts with http:// or ftp://), ConTeXt downloads the file, stores it in the cache directory, and returns the name of the cached file. Pretty neat, eh? With this change, the \process<filter>file macro of the filter module can read remote files. Since the vim module is built on top of the filter module, \type<vim>file can also read remote files. The above feature is currently only available in the dev branch. I’ll make a new release once I add hooks to force re-download of remote files.
Meanwhile, if you have a ConTeXt macro that reads files, just add a \locfilename at the appropriate place, and your macro will be able to read remote files.

Update for the filter module: faster caching

Over the last year, the code base of the filter module has matured considerably. Now the module has all the features that I wanted when I started with it about a year and a half back. The last remaining limitation (in my eyes, at least) was that caching of results required a call to external programs (mtxrun) to calculate md5 hashes; as such, caching was slow. That is no longer the case. Now (since early December), md5 sums are calculated at the lua end, so there is no time penalty for caching. As a result, in MkIV, recompiling is much faster for documents having lots of external filter environments with caching enabled (i.e., environments defined with the continue=yes option).
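The caching principle behind continue=yes can be sketched in a few lines. The following Python snippet is a hypothetical illustration, not the module's actual code (the module does this in Lua inside ConTeXt): the external filter is re-run only when the md5 hash of the environment's content changes, and computing the hash in-process costs no external program call.

```python
# Hypothetical sketch of content-hash caching, as used (in Lua) by the
# filter module's continue=yes option; not the module's actual code.
import hashlib

class FilterCache:
    def __init__(self):
        self._store = {}  # md5 digest -> cached filter output

    def get_or_run(self, content, run_filter):
        # Hash computed in-process: no external call (cf. mtxrun) needed.
        key = hashlib.md5(content.encode("utf-8")).hexdigest()
        if key not in self._store:
            self._store[key] = run_filter(content)  # only on a cache miss
        return self._store[key]

calls = []
def expensive_filter(src):
    """Stand-in for running an external program over the environment body."""
    calls.append(src)
    return src.upper()

cache = FilterCache()
out1 = cache.get_or_run("print(summary(cars))", expensive_filter)
out2 = cache.get_or_run("print(summary(cars))", expensive_filter)
print(out1 == out2, len(calls))  # unchanged content: the filter ran once
```

Because the cache key depends only on the content, unchanged environments never trigger the external filter on recompilation, which is exactly why removing the per-hash process call makes recompiles fast.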
## Thursday, November 13, 2008 ### Tense and action Consider a version of John Perry's argument that action needs tense. You promised to call a friend precisely between 12:10 and 12:15 and no later. When it is between 12:10 and 12:15, and you know what time it is, this knowledge, together with the promise, gives you reason to call your friend. But if this knowledge is tenseless, then you could have it at 12:30, say. Thus, absurdly, at 12:30 you could have knowledge that gives you just as good a reason to call your friend.[note 1] Here, however, is a tenseless proposal. Suppose it is 12:12, and I am deliberating whether to call my friend. I think the following thought-token, with all the verbs in a timeless tense: 1. A phone call flowing from this deliberative process would occur between 12:10 and 12:15, and hence fulfill the promise, so I have reason that this deliberative process should conclude in a phone call to the friend. And so I call. Let's see how the Perry-inspired argument fares in this case. I knew the propositions in (1) at 12:12, and I could likewise know these propositions at 12:30, though if I were to express that knowledge then, I would have to replace both occurrences of the phrase "this deliberative process" in (1) by the phrase "that deliberative process." However, this fact is in no way damaging. For suppose that at 12:30, I am again deliberating whether to call my friend. I have, on this tenseless proposal, the very same beliefs that at 12:12 were expressed by (1). It would seem that where I have the same beliefs and the same knowledge, I have the same reasons. If this principle is not true, the Perry argument fails, since then one can simply affirm that one has the same beliefs and knowledge at 12:30 as one did at 12:12, but at 12:30 these beliefs and knowledge are not a reason for acting, while they are a reason for acting at 12:12. But I can affirm the principle, and I am still not harmed by the argument. 
For what is it that I conclude at 12:30 that I have (tenseless) reason to do? There is reason that the deliberative process should conclude in a call to the friend. But the relevant referent of "the deliberative process" is not the deliberative process that occurs at 12:30, call it D12:30, but the deliberative process that occurs at 12:12, call it D12:12. For (1) is not about the 12:30 deliberative process, but about the 12:12 one. The principle that the same beliefs and knowledge gives rise to the very same reasons may be true—but the reason given rise to is a reason for the 12:12 deliberative process to conclude in a phone call. But that is not what I am deliberating about at 12:30. At 12:30, I am deliberating whether this new deliberative process, D12:30, should result in a phone call to the friend. That I can easily conclude that D12:12 should result in a phone call to the friend is simply irrelevant. There is an awkwardness about the solution as I have formulated it. It makes deliberative processes inextricably self-referential. What I am deliberating about is whether this very deliberation should result in this or that action. But I think this is indeed a plausible way to understand a deliberation. When a nation votes for president, the nation votes not just for who should be president, but for who should result as president from this very election. (These two are actually subtly different questions. There could be cases where it is better that X be president, but it is better that Y result as president from this very election. Maybe X promised not to run in this election.) [I made some minor revisions to this post, the most important of which was to emphasize that (1) is a token.] #### 1 comment: Alexander R Pruss said... One may also need to deliberate about how long this deliberation process should last (if it starts at 12:12, it shouldn't last more than three minutes!)
https://www.techwhiff.com/issue/10-3-pt-which-was-one-of-john-adams-s-accomplishments--247939
# 10. (3 pt) Which was one of John Adams's accomplishments?

A. riding to warn leaders that the British were coming
B. writing the major part of the Declaration of Independence
C. leading troops at Bunker Hill
D. defending British soldiers on trial after the Boston Massacre
http://math.stackexchange.com/questions/45115/how-do-you-define-the-decimals-indicator-e-3
# How do you define the decimals indicator: E-3?

Sorry for the terrible question, but how do you define E-3, which is used in calculators to indicate that the first 3 decimals in the number are not displayed?

0.000563 = 5.63E-3

I need to write in my thesis something like: "Please pay attention that the values are shifted by 3 decimals."

Thanks.

- x.yzwE-3 = x.yzw$\cdot 10^{-3}$. – t.b. Jun 13 '11 at 13:39
- @Theo Buehler I actually need to write it down in words. Something like: "Please pay attention, the values are shifted". – Patrick Jun 13 '11 at 13:52
- Well, I'd say: "Please pay attention that the decimal point is shifted three digits to the left because of the decimal exponent $10^{-3}$", but I'm no native speaker. Out of curiosity: what kind of thesis are you writing that you can't assume familiarity with this on the part of your readers? I for one learned that in elementary school, maybe 3rd or 4th grade. By the way: your equality is wrong: 0.000563 = 5.63E-4. – t.b. Jun 13 '11 at 13:57
- Instead of closing this question, why doesn't one of the people voting to close actually, you know, answer it? – user1729 Aug 1 '13 at 15:29
- @rschwieb It is about communicating maths. If I am interpreting it correctly, the OP wants to make it clear that when they are writing 5.3 they mean 0.0053. Which is not as uncommon as you might think. – user1729 Aug 1 '13 at 18:48

It depends on what you're writing about. It's probably easier to just write $5.63\times10^{-3}$ in place of $0.00563$ in a math text. If you really don't want to write the $\times10^{-3}$ (for example, in a table), just say "please note that the values in this table represent the error (or whatever they represent) multiplied by $1000$." If you're writing about something physical that has units (for example, distance), write: "Measurements are in millimeters" or "Measured in $\text{mm}$."
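For what it's worth, the same E-notation is understood by most programming languages, which can be handy for double-checking a value. A quick Python illustration, using the corrected figure 5.63E-4 from the comments:

```python
x = 5.63e-4            # same number as 0.000563; "e-4" means "times 10 to the -4"
print(f"{x:.2e}")      # 5.63e-04  (scientific / E notation, two decimals)
print(f"{x:.6f}")      # 0.000563  (fixed-point: decimal point shifted 4 places)
```

The `e` and `f` format specifiers switch between the two displays of the same number.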
https://research-information.bris.ac.uk/en/publications/penroses-new-argument-and-paradox
# Penrose's New Argument and Paradox

Research output: Chapter in Book/Report/Conference proceeding › Chapter in a book

## Abstract

In this paper we take a closer look at Penrose's New Argument for the claim that the human mind cannot be mechanized and investigate whether the argument can be formalized in a sound and coherent way using a theory of truth and absolute provability. Our findings are negative; we can show that there will be no consistent theory that allows for a formalization of Penrose's argument in a straightforward way. In a second step we consider Penrose's overall strategy for arguing for his view and provide a reasonable theory of truth and absolute provability in which this strategy leads to a sound argument for the claim that the human mind cannot be mechanized. However, we argue that the argument is intuitively implausible, since it relies on a pathological feature of the proposed theory.

Original language: English
Host publication: Truth, Existence, and Explanation: FilMat Studies in the Philosophy of Mathematics
Publisher: Springer
Publication series: Boston Studies in the History and Philosophy of Science
Published: 2018
http://en.wikipedia.org/wiki/Interaction_information
# Interaction information

The interaction information (McGill 1954) or co-information (Bell 2003) is one of several generalizations of the mutual information, and expresses the amount of information (redundancy or synergy) bound up in a set of variables, beyond that which is present in any subset of those variables. Unlike the mutual information, the interaction information can be either positive or negative. This confusing property has likely slowed its wider adoption as an information measure in machine learning and cognitive science.

## The Three-Variable Case

For three variables $\{X,Y,Z\}$, the interaction information $I(X;Y;Z)$ is given by

$\begin{matrix} I(X;Y;Z) & = & I(X;Y|Z)-I(X;Y) \\ \ & = & I(X;Z|Y)-I(X;Z) \\ \ & = & I(Y;Z|X)-I(Y;Z) \end{matrix}$

where, for example, $I(X;Y)$ is the mutual information between variables $X$ and $Y$, and $I(X;Y|Z)$ is the conditional mutual information between variables $X$ and $Y$ given $Z$. Formally,

$\begin{matrix} I(X;Y|Z) & = & H(X|Z) + H(Y|Z) - H(X,Y|Z) \\ \ & = & H(X|Z)-H(X|Y,Z) \end{matrix}$

For the three-variable case, the interaction information $I(X;Y;Z)$ is the difference between the information shared by $\{Y,X\}$ when $Z$ has been fixed and when $Z$ has not been fixed. (See also Fano's 1961 textbook.) Interaction information measures the influence of a variable $Z$ on the amount of information shared between $\{Y,X\}$. Because the term $I(X;Y|Z)$ can be zero (for example, when the dependency between $\{X,Y\}$ is due entirely to the influence of a common cause $Z$), the interaction information can be negative as well as positive. Negative interaction information indicates that variable $Z$ inhibits (i.e., accounts for or explains some of) the correlation between $\{Y,X\}$, whereas positive interaction information indicates that variable $Z$ facilitates or enhances the correlation between $\{Y,X\}$. Interaction information is bounded.
In the three-variable case, it is bounded by

$-\min\ \{ I(X;Y), I(Y;Z), I(X;Z) \} \leq I(X;Y;Z) \leq \min\ \{ I(X;Y|Z), I(Y;Z|X), I(X;Z|Y) \}$

### Example of Negative Interaction Information

Negative interaction information seems much more natural than positive interaction information in the sense that such explanatory effects are typical of common-cause structures. For example, clouds cause rain and also block the sun; therefore, the correlation between rain and darkness is partly accounted for by the presence of clouds, $I(rain;dark|cloud) \leq I(rain;dark)$. The result is negative interaction information $I(rain;dark;cloud)$.

### Example of Positive Interaction Information

The case of positive interaction information seems a bit less natural. A prototypical example of positive $I(X;Y;Z)$ has $X$ as the output of an XOR gate to which $Y$ and $Z$ are the independent random inputs. In this case $I(Y;Z)$ will be zero, but $I(Y;Z|X)$ will be positive (1 bit), since once output $X$ is known, the value on input $Y$ completely determines the value on input $Z$. Since $I(Y;Z|X)>I(Y;Z)$, the result is positive interaction information $I(X;Y;Z)$.

It may seem that this example relies on a peculiar ordering of $X,Y,Z$ to obtain the positive interaction, but the symmetry of the definition for $I(X;Y;Z)$ indicates that the same positive interaction information results regardless of which variable we consider as the interloper or conditioning variable. For example, input $Y$ and output $X$ are also independent until input $Z$ is fixed, at which time they are totally dependent, and we have the same positive interaction information as before, $I(X;Y;Z)=I(X;Y|Z)-I(X;Y)$.

This situation is an instance where fixing the common effect $X$ of causes $Y$ and $Z$ induces a dependency among the causes that did not formerly exist. This behavior is colloquially referred to as explaining away and is thoroughly discussed in the Bayesian Network literature (e.g., Pearl 1988).
Pearl's example is auto diagnostics: A car's engine can fail to start $(X)$ due either to a dead battery $(Y)$ or due to a blocked fuel pump $(Z)$. Ordinarily, we assume that battery death and fuel pump blockage are independent events, because of the essential modularity of such automotive systems. Thus, in the absence of other information, knowing whether or not the battery is dead gives us no information about whether or not the fuel pump is blocked. However, if we happen to know that the car fails to start (i.e., we fix common effect $X$), this information induces a dependency between the two causes battery death and fuel blockage. Thus, knowing that the car fails to start, if an inspection shows the battery to be in good health, we can conclude that the fuel pump must be blocked. Battery death and fuel blockage are thus dependent, conditional on their common effect car starting. What the foregoing discussion indicates is that the obvious directionality in the common-effect graph belies a deep informational symmetry: If conditioning on a common effect increases the dependency between its two parent causes, then conditioning on one of the causes must create the same increase in dependency between the second cause and the common effect. In Pearl's automotive example, if conditioning on car starts induces $I(X;Y;Z)$ bits of dependency between the two causes battery dead and fuel blocked, then conditioning on fuel blocked must induce $I(X;Y;Z)$ bits of dependency between battery dead and car starts. This may seem odd because battery dead and car starts are already governed by the implication battery dead $\rightarrow$ car doesn't start. However, these variables are still not totally correlated because the converse is not true. Conditioning on fuel blocked removes the major alternate cause of failure to start, and strengthens the converse relation and therefore the association between battery dead and car starts. 
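As an illustrative numerical check (not part of the original article), the XOR example can be verified in Python by computing marginal entropies from the joint distribution and applying the alternating inclusion-exclusion sum over subsets (the general formula appears in the n-variable section below):

```python
import itertools
import math
from collections import Counter

# Joint distribution of (X, Y, Z): Y and Z are independent fair bits, X = Y XOR Z.
p = {((y ^ z), y, z): 0.25 for y in (0, 1) for z in (0, 1)}

def H(indices):
    """Marginal entropy, in bits, of the variables at the given positions."""
    marg = Counter()
    for outcome, prob in p.items():
        marg[tuple(outcome[i] for i in indices)] += prob
    return -sum(q * math.log2(q) for q in marg.values() if q > 0)

# Alternating inclusion-exclusion sum over all nonempty subsets of {X, Y, Z}
n = 3
I = -sum((-1) ** (n - len(T)) * H(T)
         for r in range(1, n + 1)
         for T in itertools.combinations(range(n), r))
print(I)  # 1.0 -- one bit of positive interaction information (synergy)
```

Note that each pairwise mutual information is zero here (any two of $X,Y,Z$ are independent), yet the three variables jointly carry one bit of synergy.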
A paper by Tsujishita (1995) focuses in greater depth on the third-order mutual information. ## The Four-Variable Case One can recursively define the n-dimensional interaction information in terms of the $(n-1)$-dimensional interaction information. For example, the four-dimensional interaction information can be defined as $\begin{matrix} I(W;X;Y;Z) & = & I(X;Y;Z|W)-I(X;Y;Z) \\ \ & = & I(X;Y|Z,W)-I(X;Y|W)-I(X;Y|Z)+I(X;Y) \end{matrix}$ or, equivalently, $\begin{matrix} I(W;X;Y;Z)& = & H(W)+H(X)+H(Y)+H(Z) \\ \ & - & H(W,X)-H(W,Y)-H(W,Z)-H(X,Y)-H(X,Z)-H(Y,Z) \\ \ & + & H(W,X,Y)+H(W,X,Z)+H(W,Y,Z)+H(X,Y,Z)-H(W,X,Y,Z) \end{matrix}$ ## The n-Variable Case It is possible to extend all of these results to an arbitrary number of dimensions. The general expression for interaction information on variable set $\mathcal{V}=\{X_{1},X_{2},\ldots ,X_{n}\}$ in terms of the marginal entropies is given by Jakulin & Bratko (2003). $I(\mathcal{V})\equiv -\sum_{\mathcal{T}\subseteq \mathcal{V}}(-1)^{\left\vert\mathcal{V}\right\vert -\left\vert \mathcal{T}\right\vert}H(\mathcal{T})$ which is an alternating (inclusion-exclusion) sum over all subsets $\mathcal{T}\subseteq \mathcal{V}$, where $\left\vert \mathcal{V}\right\vert =n$. Note that this is the information-theoretic analog to the Kirkwood approximation. ## Difficulties Interpreting Interaction Information The possible negativity of interaction information can be the source of some confusion (Bell 2003). As an example of this confusion, consider a set of eight independent binary variables $\{X_{1},X_{2},X_{3},X_{4},X_{5},X_{6},X_{7},X_{8}\}$. 
Agglomerate these variables as follows: $\begin{matrix} Y_{1} &=&\{X_{1},X_{2},X_{3},X_{4},X_{5},X_{6},X_{7}\} \\ Y_{2} &=&\{X_{4},X_{5},X_{6},X_{7}\} \\ Y_{3} &=&\{X_{5},X_{6},X_{7},X_{8}\} \end{matrix}$ Because the $Y_{i}$'s overlap each other (are redundant) on the three binary variables $\{X_{5},X_{6},X_{7}\}$, we would expect the interaction information $I(Y_{1};Y_{2};Y_{3})$ to equal $-3$ bits, which it does. However, consider now the agglomerated variables $\begin{matrix} Y_{1} &=&\{X_{1},X_{2},X_{3},X_{4},X_{5},X_{6},X_{7}\} \\ Y_{2} &=&\{X_{4},X_{5},X_{6},X_{7}\} \\ Y_{3} &=&\{X_{5},X_{6},X_{7},X_{8}\} \\ Y_{4} &=&\{X_{7},X_{8}\} \end{matrix}$ These are the same variables as before with the addition of $Y_{4}=\{X_{7},X_{8}\}$. Because the $Y_{i}$'s now overlap each other (are redundant) on only one binary variable $\{X_{7}\}$, we would expect the interaction information $I(Y_{1};Y_{2};Y_{3};Y_{4})$ to equal $-1$ bit. However, $I(Y_{1};Y_{2};Y_{3};Y_{4})$ in this case is actually equal to $+1$ bit, indicating a synergy rather than a redundancy. This is correct in the sense that $\begin{matrix} I(Y_{1};Y_{2};Y_{3};Y_{4}) & = & I(Y_{1};Y_{2};Y_{3}|Y_{4})-I(Y_{1};Y_{2};Y_{3}) \\ \ & = & -2+3 \\ \ & = & 1 \end{matrix}$ but it remains difficult to interpret. ## Uses of Interaction Information • Jakulin and Bratko (2003b) provide a machine learning algorithm which uses interaction information. • Killian, Kravitz and Gilson (2007) use mutual information expansion to extract entropy estimates from molecular simulations. • Moore et al. (2006), Chanda P, Zhang A, Brazeau D, Sucheston L, Freudenheim JL, Ambrosone C, Ramanathan M. (2007) and Chanda P, Sucheston L, Zhang A, Brazeau D, Freudenheim JL, Ambrosone C, Ramanathan M. (2008) demonstrate the use of interaction information for analyzing gene-gene and gene-environmental interactions associated with complex diseases. 
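The agglomerated-variable examples in the section above can also be checked numerically. In this illustrative Python sketch (not part of the article), each $Y_i$ is treated as a tuple-valued variable over the 256 equally likely outcomes of the eight independent bits:

```python
import itertools
import math
from collections import Counter

bits = list(itertools.product((0, 1), repeat=8))  # 256 equally likely outcomes

def interaction_information(groups):
    """Interaction information, in bits, of the tuple-valued variables
    Y_i = (X_j for j in groups[i]) under the uniform joint distribution."""
    n = len(groups)
    def H(subset):
        marg = Counter()
        for x in bits:
            marg[tuple(tuple(x[j] for j in groups[i]) for i in subset)] += 1 / 256
        return -sum(q * math.log2(q) for q in marg.values() if q > 0)
    return -sum((-1) ** (n - len(T)) * H(T)
                for r in range(1, n + 1)
                for T in itertools.combinations(range(n), r))

# X_1..X_8 correspond to indices 0..7
Y1, Y2, Y3, Y4 = range(0, 7), range(3, 7), range(4, 8), range(6, 8)
print(interaction_information([Y1, Y2, Y3]))      # -3.0: redundancy, as expected
print(interaction_information([Y1, Y2, Y3, Y4]))  # 1.0: synergy, as the text shows
```

This reproduces both the expected $-3$ bits for the three-variable grouping and the counterintuitive $+1$ bit for the four-variable one.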
## References

• Bell, A J (2003). The co-information lattice.
• Fano, R M (1961). Transmission of Information: A Statistical Theory of Communications, MIT Press, Cambridge, MA.
• Garner, W R (1962). Uncertainty and Structure as Psychological Concepts, John Wiley & Sons, New York.
• Han, T S (1978). Nonnegative entropy measures of multivariate symmetric correlations, Information and Control 36, 133–156.
• Han, T S (1980). Multiple mutual information and multiple interactions in frequency data, Information and Control 46, 26–45.
• Jakulin, A & Bratko, I (2003a). Analyzing attribute dependencies, in N Lavrač, D Gamberger, L Todorovski & H Blockeel, eds, Proceedings of the 7th European Conference on Principles and Practice of Knowledge Discovery in Databases, Springer, Cavtat-Dubrovnik, Croatia, pp. 229–240.
• Jakulin, A & Bratko, I (2003b). Quantifying and visualizing attribute interactions.
• Margolin, A, Wang, K, Califano, A & Nemenman, I (2010). Multivariate dependence and genetic networks inference. IET Syst Biol 4, 428.
• McGill, W J (1954). Multivariate information transmission, Psychometrika 19, 97–116.
• Moore, J H, Gilbert, J C, Tsai, C T, Chiang, F T, Holden, T, Barney, N & White, B C (2006). A flexible computational framework for detecting, characterizing, and interpreting statistical patterns of epistasis in genetic studies of human disease susceptibility, Journal of Theoretical Biology 241, 252–261.
• Nemenman, I (2004). Information theory, multivariate dependence, and genetic network inference.
• Pearl, J (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann, San Mateo, CA.
• Tsujishita, T (1995). On triple mutual information, Advances in Applied Mathematics 16, 269–274.
• Chanda P, Sucheston L, Zhang A, Brazeau D, Freudenheim JL, Ambrosone C, Ramanathan M. (2008).
AMBIENCE: a novel approach and efficient algorithm for identifying informative genetic and environmental associations with complex phenotypes. Genetics. 2008 Oct;180(2):1191-210. PMID 17924337. http://www.genetics.org/cgi/content/full/180/2/1191 • Killian B J, Kravitz J Y & Gilson M K (2007) Extraction of configurational entropy from molecular simulations via an expansion approximation. J. Chem. Phys., 127, 024107.
http://support.sas.com/documentation/cdl/en/statug/67523/HTML/default/statug_introcom_sect021.htm
# Shared Concepts and Topics

### Splines and Spline Bases

This section provides details about the construction of spline bases with the EFFECT statement.

A spline function is a piecewise polynomial function in which the individual polynomials have the same degree and connect smoothly at join points whose abscissa values, referred to as knots, are prespecified. You can use spline functions to fit curves to a wide variety of data.

A spline of degree 0 is a step function with steps located at the knots. A spline of degree 1 is a piecewise linear function where the lines connect at the knots. A spline of degree 2 is a piecewise quadratic curve whose values and slopes coincide at the knots. A spline of degree 3 is a piecewise cubic curve whose values, slopes, and curvature coincide at the knots. Visually, a cubic spline is a smooth curve, and it is the most commonly used spline when a smooth fit is desired. Note that when no knots are used, splines of degree d are simply polynomials of degree d.

More formally, suppose you specify knots $t_1 < t_2 < \cdots < t_n$. Then a spline of degree $d$ is a function $f$ with $d-1$ continuous derivatives such that $f(x) = p_i(x)$ for $t_i \le x < t_{i+1}$, $i = 0, 1, \ldots, n$ (with $t_0 = -\infty$ and $t_{n+1} = \infty$), where each $p_i$ is a polynomial of degree $d$. The requirement that $f$ has $d-1$ continuous derivatives is satisfied by requiring that the function values and all derivatives up to order $d-1$ of the adjacent polynomials at each knot match.

A counting argument yields the number of parameters that define a spline with $n$ knots. There are $n+1$ polynomials of degree $d$, giving $(n+1)(d+1)$ coefficients. However, there are $d$ restrictions at each of the $n$ knots, so the number of free parameters is $(n+1)(d+1) - nd = n+d+1$. In mathematical terminology, this says that the dimension of the vector space of splines of degree $d$ on $n$ distinct knots is $n+d+1$. If you have $n+d+1$ basis vectors, then you can fit a curve to your data by regressing your dependent variable by using this basis for the corresponding design matrix columns.
In this context, such a spline is known as a regression spline. The EFFECT statement provides a simple mechanism for obtaining such a basis.

If you remove the restriction that the knots of a spline must be distinct and allow repeated knots, then you can obtain functions with less smoothness and even discontinuities at the repeated knot location. For a spline of degree $d$ and a repeated knot with multiplicity $m \le d$, the piecewise polynomials that join such a knot are required to have only $d-m$ matching derivatives. Note that this increases the number of free parameters by $m-1$ but also decreases the number of distinct knots by $m-1$. Hence the dimension of the vector space of splines of degree $d$ with $n$ knots is still $n+d+1$, provided that any repeated knot has a multiplicity less than or equal to $d$.

The EFFECT statement provides support for the commonly used truncated power function basis and B-spline basis. With exact arithmetic and by using the complete basis, you obtain the same fit with either of these bases. The following sections provide details about constructing spline bases for the space of splines of degree $d$ with $n$ knots that satisfy $t_1 \le t_2 \le \cdots \le t_n$.
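As an outside-of-SAS illustration of the $n+d+1$ dimension count (a Python sketch; the function name here is my own, not SAS syntax), the truncated power basis $\{1, x, \ldots, x^d, (x-t_1)_+^d, \ldots, (x-t_n)_+^d\}$ has exactly $n+d+1$ columns, and an ordinary least-squares fit on those columns is a regression spline:

```python
import numpy as np

def truncated_power_basis(x, knots, degree):
    """Design matrix for the truncated power basis: 1, x, ..., x^d,
    followed by (x - t)_+^d for each knot t.  Dimension is n + d + 1."""
    x = np.asarray(x, dtype=float)
    cols = [x ** j for j in range(degree + 1)]
    cols += [np.clip(x - t, 0.0, None) ** degree for t in knots]
    return np.column_stack(cols)

x = np.linspace(0.0, 1.0, 50)
knots = [0.25, 0.5, 0.75]                        # n = 3 distinct knots
B = truncated_power_basis(x, knots, degree=3)    # d = 3 (cubic spline)
print(B.shape)                                   # (50, 7): n + d + 1 = 3 + 3 + 1

# Regression spline: least squares on the basis columns
y = np.sin(2.0 * np.pi * x) + 0.1 * np.random.default_rng(0).normal(size=x.size)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fit = B @ coef                                   # smooth piecewise-cubic fit
```

The B-spline basis spans the same space but is better conditioned numerically; with exact arithmetic both give the same fit, as the text notes.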
https://en.wikipedia.org/wiki/Seasonal_flows_on_warm_martian_slopes
# Seasonal flows on warm Martian slopes

Reprojected view of warm-season flows in Newton Crater

Seasonal flows on warm Martian slopes (also called recurring slope lineae, recurrent slope lineae and RSL)[1][2] are thought to be salty water flows occurring during the warmest months on Mars. The flows are narrow (0.5 to 5 meters) and exhibit relatively dark markings on steep (25° to 40°) slopes; they appear and incrementally grow during warm seasons and fade in cold seasons. Liquid brines near the surface almost certainly explain this activity,[3] but the exact source of the water and the mechanism behind its motion are not understood. On October 5, 2015, possible RSLs were reported on Mount Sharp near the Curiosity rover.[2]

## Overview

Research indicates that in the past there was liquid water flowing on the surface of Mars,[4][5] creating large areas similar to Earth's oceans.[6][7][8][9] However, the question remains as to where the water has gone.[10]

The Mars Reconnaissance Orbiter (MRO) is a multipurpose spacecraft launched in 2005, designed to conduct reconnaissance and exploration of Mars from orbit.[11] The spacecraft is managed by the Jet Propulsion Laboratory (JPL).[12] The HiRISE instrument is at the forefront of the ongoing RSL studies, as it helps chart the features with images of closely monitored sites typically taken every few weeks.[13]

## Features

Warm season flows on slope in Newton Crater (video-gif)

Distinctive properties of recurring slope lineae (RSL) include slow incremental growth, formation on warm slopes in warm seasons, and annual fading and recurrence,[14] showing a strong correlation with solar heating.[14] RSL extend downslope from bedrock outcrops, often following small gullies about 0.5 to 5 meters (1 ft 8 in to 16 ft 5 in) wide, with lengths up to hundreds of meters; some locations display more than 1,000 individual flows.[15][16] RSL advance rates are highest at the beginning of each season, followed by much slower
lengthening.[17] RSL appear and lengthen in the late southern spring and summer, from 48°S to 32°S latitudes favoring equator-facing slopes; these are times and places with peak surface temperatures from −23 °C to 27 °C. Active RSL also occur in equatorial regions (0–15°S), most commonly in the Valles Marineris troughs.[17][18]

Researchers surveyed flow-marked slopes with the Mars Reconnaissance Orbiter's CRISM, and although there is no spectrographic evidence for actual water,[15] the instrument has now directly imaged perchlorate salts thought to be dissolved in water brines in the subsurface.[3] This may indicate that the water quickly evaporates upon reaching the surface, leaving only the salts. The cause of the surface darkening and lightening is poorly understood: a flow initiated by salty water (brine) could rearrange grains or change surface roughness in a way that darkens the appearance, but the way the features brighten again when temperatures drop is harder to explain.[12][19]

## Hypotheses

A number of different hypotheses for RSL formation have been proposed. The seasonality, latitude distribution, and brightness changes strongly indicate that a volatile material, such as water or liquid CO2, is involved.
One hypothesis is that RSL could form by rapid heating of nocturnal frost.[14] Another proposes flows of carbon dioxide, but the settings in which the flows occur are too warm for carbon-dioxide frost (CO2) and, at some sites, too cold for pure water.[14] Other hypotheses include dry granular flows, but no entirely dry process can explain seasonal flows that progressively grow over weeks and months.[17] Seasonal melting of shallow ice would explain the RSL observations, but it would be difficult to replenish such ice annually.[17] However, recent direct observations of seasonal deposition of soluble salts strongly suggest that RSL are created by a water brine.[3]

### Brines

The leading hypothesis involves the flow of brines, that is, very salty water.[3][15][16][20][21][22] Salt deposits over much of Mars indicate that brine was abundant in Mars's past.[12][19] Salinity lowers the freezing point of water enough to sustain a liquid flow; less saline water would freeze at the observed temperatures.[12]

Thermal infrared data from the Thermal Emission Imaging System (THEMIS) on board the 2001 Mars Odyssey orbiter have allowed the temperature conditions under which RSL form to be constrained. While a small number of RSL are visible at temperatures above the freezing point of water, most are not, and many appear at temperatures as low as −43 °C (230 K). Some scientists think that under these cold conditions, a brine of iron(III) sulphate (Fe2(SO4)3) or calcium chloride (CaCl2) is the most likely mode of RSL formation.[23] Another team of scientists, using the CRISM instrument on board MRO, reported that the evidence for hydrated salts is most consistent with the spectral absorption features of magnesium perchlorate (Mg(ClO4)2), magnesium chloride (MgCl2(H2O)x) and sodium perchlorate (NaClO4).[24][25]

Experiments and calculations demonstrated that recurring slope lineae could be produced by the deliquescence and rehydration of hydrous chlorides and oxychlorine salts.
However, under present Martian atmospheric conditions there is not enough water to complete this process, although the authors believe that enough water is still stored from the last time the climate was wetter.[26] These observations are the closest scientists have come to finding evidence of liquid water on the planet's surface today.[12][19] Frozen water, however, has been detected near the surface in many middle- to high-latitude regions. Purported droplets of brine also appeared on struts of the Phoenix Mars Lander in 2008.[27]

### Source of water

Liquid brine flows near the surface might explain this activity, but the exact source of the water and the mechanism behind its motion are not understood.[28][29] One hypothesis proposes that the needed water could originate in seasonal oscillations of near-surface adsorbed water supplied by the atmosphere; perchlorates and other salts known to be present on the surface are able to attract and hold water molecules from the surrounding environment (hygroscopic salts),[17] but the dryness of the Martian air is a challenge. Water vapor would have to be trapped efficiently over very small areas, and the seasonal variation in the atmospheric column abundance of water vapor does not match RSL activity at active locations.[14][17] Deeper groundwater may exist and could reach the surface at springs or seeps,[30][31] but this cannot explain the wide distribution of RSL, some of which extend from the tops of ridges and peaks.[17] There are also apparent RSL on equatorial dunes composed of permeable sand, an unlikely setting for a groundwater source.[17]

## Habitability

These features form on sun-facing slopes at times of the year when local temperatures rise above the melting point of ice. The streaks grow in spring, widen in late summer and then fade away in autumn.
This is hard to model in any way that does not involve liquid water in some form, though the streaks themselves are thought to be a secondary effect and not a direct indication of dampness of the regolith. Although these features are now confirmed to involve liquid water in some form, the water could be either too cold or too salty for life. At present they are treated as potentially habitable, as "Uncertain Regions, to be treated as Special Regions". The "Special Regions" assessment says of them:[32]

• "Although no single model currently proposed for the origin of RSL adequately explains all observations, they are currently best interpreted as being due to the seepage of water at > 250 K, with $a_w$ [water activity] unknown and perhaps variable. As such they meet the criteria for Uncertain Regions, to be treated as Special Regions. There are other features on Mars with characteristics similar to RSL, but their relationship to possible liquid water is much less likely."

Here a "Special Region" is defined as a region on the Mars surface where Earth life could potentially survive. RSL were first reported in the paper by McEwen et al. in Science, August 5, 2011.[33] They were already suspected of involving flowing brines then, as all the other available models involved liquid water in some form. They were finally shown, essentially conclusively, to involve liquid water after the detection of hydrated salts that change their hydration state rapidly through the season. This was the subject of a major NASA news announcement and press conference, and was also reported in a paper published on 28 September 2015.[34][35][36][37][38]

The brines were not detected directly, because the resolution of the spectrometer is not high enough and because the brines probably flow in the morning. The spacecraft that observed them, the Mars Reconnaissance Orbiter, is in a slowly precessing sun-synchronous orbit inclined at 93 degrees (orbital period 1 h 52 min).
Each time it crosses the Mars equator on the sunny side, south to north, the local solar time on the surface below is 3:00 pm, all year round. This is the worst time of day to spot brines from orbit.[39]

The evidence also suggests fairly substantial amounts of water, at least for microbes. At the end of the press conference, the researchers gave a rough estimate of a total annual flow of at least 100,000 tons for the entire Valles Marineris region. In this calculation they assumed only 5% water in the solution and a film with a thickness of 10 mm, which is about the minimum needed for the material to flow at all.[40]

RSL are among the most favoured candidate sites for present-day life on Mars. Whether they are habitable or not will depend on the temperature of the water and its salinity.

## References

1. ^ Kirby, Runyon; Ojha, Lujendra (August 18, 2014). "Recurring Slope Lineae". Encyclopedia of Planetary Landforms. Retrieved September 26, 2015.
2. ^ a b Chang, Kenneth (5 October 2015). "Mars Is Pretty Clean. Her Job at NASA Is to Keep It That Way.". New York Times. Retrieved 6 October 2015.
3. ^ a b c d Ojha, Lujendra; Wilhelm, Mary Beth; Murchie, Scott L.; McEwen, Alfred S.; et al. (28 September 2015). "Spectral evidence for hydrated salts in recurring slope lineae on Mars". Nature Geoscience. doi:10.1038/ngeo2546. Retrieved 2015-09-28.
4. ^ "Flashback: Water on Mars Announced 10 Years Ago". SPACE.com. June 22, 2000. Retrieved December 19, 2010.
5. ^ "Science@NASA, The Case of the Missing Mars Water". Retrieved March 7, 2009.
6. ^ ISBN 0-312-24551-3
7. ^ "PSRD: Ancient Floodwaters and Seas on Mars". Psrd.hawaii.edu. July 16, 2003. Retrieved December 19, 2010.
8. ^ "Gamma-Ray Evidence Suggests Ancient Mars Had Oceans". SpaceRef. November 17, 2008. Retrieved December 19, 2010.
9. ^ Carr, M.; Head, J. (2003). "Oceans on Mars: An assessment of the observational evidence and possible fate". Journal of Geophysical Research. 108: 5042.
Bibcode:2003JGRE..108.5042C. doi:10.1029/2002JE001963.
10. ^ "Water on Mars: Where is it All?". Archived from the original on December 3, 2007. Retrieved March 7, 2009.
11. ^ "Nasa Find Potential Signs Of Flowing Water On Mars". Huffpost UK. August 4, 2011. Retrieved August 5, 2011.
12. ^ "NASA Spacecraft Data Suggest Water Flowing on Mars". Jet Propulsion Laboratory, Pasadena, California. Retrieved March 31, 2012.
13. ^ David, Leonard (23 September 2015). "Mars' Mysterious Dark Streaks Spur Exploration Debate". Space.com. Retrieved 2015-09-25.
14. ^ Dundas, C. M.; McEwen, A. S. (March 16–20, 2015). "New Constraints on the Locations, Timing and Conditions for Recurring Slope" (PDF). 46th Lunar and Planetary Science Conference (2015). Lunar and Planetary Institute.
15. ^ a b c Mann, Adam (February 18, 2014). "Strange Dark Streaks on Mars Get More and More Mysterious". Wired. Retrieved February 18, 2014.
16. ^ a b "Is Mars Weeping Salty Tears?". news.sciencemag.org. Retrieved August 5, 2011.
17. ^ McEwen, A.; Chojnacki, M.; Dundas, C.; Ojha, L. (28 September 2015). "Recurring Slope Lineae on Mars: Atmospheric Origin?" (PDF). European Planetary Science Congress 2015. France: EPSC Abstracts.
18. ^ Stillman, D.; et al. (2016). "Characteristics of the Numerous and Widespread Recurring Slope Lineae (RSL) in Valles Marineris, Mars". Icarus. 285: 195–210.
19. ^ a b c "NASA Spacecraft Data Suggest Water Flowing on Mars". NASA. Retrieved July 5, 2011.
20. ^ "NASA Finds Possible Signs of Flowing Water on Mars". voanews.com. Retrieved August 5, 2011.
21. ^ Webster, Guy; Brown, Dwayne (December 10, 2013). "NASA Mars Spacecraft Reveals a More Dynamic Red Planet". NASA. Retrieved December 10, 2013.
22. ^ Wall, Mike (28 September 2015). "Salty Water Flows on Mars Today, Boosting Odds for Life". Space.com. Retrieved 2015-09-28.
23. ^ Mitchell, J.; Christensen, P. (March 16–20, 2015). "Recurring Slope Lineae and the Presence of Chlorides in the Southern Hemisphere of Mars" (PDF).
46th Lunar and Planetary Science Conference (2015). Lunar and Planetary Institute.
24. ^ See Ojha et al. (2015), ref 3 above.
25. ^ See Wall (2015), ref 22 above.
26. ^ Wang, A.; et al. (2017). "Atmosphere–Surface H2O Exchange to Sustain the Recurring Slope Lineae (RSL) on Mars". Lunar and Planetary Science XLVIII (2017). 2351.pdf.
27. ^ "Mars Lander gets lucky break as 'water drops' discovered clinging to craft's leg". Daily Mail UK. March 18, 2009. Retrieved August 6, 2011.
28. ^ McEwen, Alfred S.; Ojha, Lujendra; Dundas, Colin M. (June 17, 2011). "Seasonal Flows on Warm Martian Slopes". Science. American Association for the Advancement of Science. 333 (6043): 740–743. Bibcode:2011Sci...333..740M. doi:10.1126/science.1204816. ISSN 0036-8075. PMID 21817049. Retrieved August 5, 2011.
29. ^ "Seasonal Flows on Warm Martian Slopes". hirise.lpl.arizona.edu. Retrieved August 5, 2011.
30. ^ Levy, Joseph (2012). "Hydrological characteristics of recurrent slope lineae on Mars: Evidence for liquid flow through regolith and comparisons with Antarctic terrestrial analogs". Icarus. 219 (1): 1–4.
31. ^ Martín-Torres, F. Javier; Zorzano, María-Paz; Valentín-Serrano, Patricia; Harri, Ari-Matti; Genzer, Maria (13 April 2015). "Transient liquid water and water activity at Gale crater on Mars". Nature Geoscience. doi:10.1038/ngeo2412. Retrieved 2015-04-14.
32. ^ Rummel, John D.; Beaty, David W.; Jones, Melissa A.; Bakermans, Corien; Barlow, Nadine G.; Boston, Penelope J.; Chevrier, Vincent F.; Clark, Benton C.; de Vera, Jean-Pierre P.; Gough, Raina V.; Hallsworth, John E.; Head, James W.; Hipkin, Victoria J.; Kieft, Thomas L.; McEwen, Alfred S.; Mellon, Michael T.; Mikucki, Jill A.; Nicholson, Wayne L.; Omelon, Christopher R.; Peterson, Ronald; Roden, Eric E.; Sherwood Lollar, Barbara; Tanaka, Kenneth L.; Viola, Donna; Wray, James J. (2014). "A New Analysis of Mars "Special Regions": Findings of the Second MEPAG Special Regions Science Analysis Group (SR-SAG2)" (PDF). Astrobiology. 14 (11): 887–968.
doi:10.1089/ast.2014.1227. ISSN 1531-1074.
33. ^ "Warm-Season Flows on Slope in Newton Crater". NASA Press Release.
34. ^
35. ^ Amos, Jonathan. "Martian salt streaks 'painted by liquid water'". BBC Science.
36. ^ Staff (28 September 2015). "Video Highlight - NASA News Conference - Evidence of Liquid Water on Today's Mars". NASA. Retrieved 30 September 2015.
37. ^ Staff (28 September 2015). "Video Complete - NASA News Conference - Water Flowing on Present-Day Mars". NASA. Retrieved 30 September 2015.
38. ^ Ojha, L.; Wilhelm, M. B.; Murchie, S. L.; McEwen, A. S.; Wray, J. J.; Hanley, J.; Massé, M.; Chojnacki, M. (2015). "Spectral evidence for hydrated salts in recurring slope lineae on Mars". Nature Geoscience. 8 (11): 829–832. Bibcode:2015NatGe...8..829O. doi:10.1038/ngeo2546.
39. ^ "Mars Reconnaissance Orbiter Telecommunications" (PDF). JPL. September 2006.
40. ^ Press conference on the discovery of indirect evidence of water flowing in the RSL; the video link here is to the question and answer about the quantity of water involved.
# Normal subgroup of prime index and another subgroup

Suppose that $N$ is a normal subgroup of a finite group $G$, and $H$ is a subgroup of $G$. If $|G/N| = p$ for some prime $p$, then show that $H$ is contained in $N$ or that $NH = G$.

I imagine this is related to the fact that $|NH| = |N||H|/|N \cap H|$, but this is not really helping me. I considered the fact that since $N$ is normal, we get that $NH \leq G$, and I then used Lagrange, but I'm stuck, and some help would be nice.

---

Consider the homomorphism $\phi:G\rightarrow G/N$ that sends $x$ to $xN$. Since $\phi(H)\leq G/N$, $|\phi(H)|$ divides $|G/N|$. Hence $|\phi(H)|=1$ or $|\phi(H)|=p=|G/N|$. In the first case $\phi(H)=\{N\}$, thus $H\leq N$. In the other case $\phi(H)=G/N$, so $\forall x\in G\ \exists h\in H\,[xN=hN]$; it is easy to show that this implies $NH=G$. (Note that we do not need $G$ to be finite.)

---

Recall that when we mod out by a normal subgroup $N$, there is a one-to-one correspondence between subgroups of $G/N$ and subgroups of $G$ containing $N$. Since the order of $G/N$ is prime, $G/N$ has no proper nontrivial subgroups (by Lagrange's Theorem), so there are no subgroups strictly between $N$ and $G$. Since $N$ is normal, $NH$ is a subgroup containing $N$, so either $NH = N$, which means $H\subseteq N$, or $NH = G$.

---

Well, if you've got some $h\notin N$, then $|N\langle h\rangle|>|N|$. This is a group since $N$ is normal. Hence if $[G:N]$ is prime, then by Lagrange $N\langle h\rangle=G$.
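As a sanity check on the statement (my own illustration, not part of the thread), the dichotomy can be verified computationally in a small example, here the cyclic group $\mathbb{Z}/6$ with $N = \{0, 2, 4\}$ of prime index 2:

```python
# Check: if N has prime index in G, every subgroup H satisfies H <= N or NH = G.
# Illustrated in Z/6 under addition mod 6 (abelian, so N is automatically normal).
G = set(range(6))
N = {0, 2, 4}  # index |G| / |N| = 2, a prime

def product_set(A, B):
    """The product set AB = {a + b mod 6}."""
    return {(a + b) % 6 for a in A for b in B}

# The subgroups of Z/6
for H in [{0}, {0, 3}, {0, 2, 4}, set(range(6))]:
    NH = product_set(N, H)
    assert H <= N or NH == G
    print(sorted(H), "->", "H <= N" if H <= N else "NH = G")
```

Running it shows that $\{0\}$ and $\{0,2,4\}$ fall inside $N$, while $\{0,3\}$ and $\mathbb{Z}/6$ itself give $NH = G$, exactly the two cases in the answers above.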
## Shopping Trolleys

This is a short post explaining how to make a "Mega" Shopping Trolley, and is written from an Australian perspective.

Now I'm not talking about those metal/plastic carts (known as "Trundles" in New Zealand) that you push around the supermarket and that invariably have a wonky castor. Rather, I mean the sort of thing that you try to stuff as much as you can into, and pull behind you on the way home full of groceries. The sort of thing that looks like these:

Now for a small amount of shopping these are almost OK, but they have two main issues. The first is that they never seem to be big enough for what you need to buy. The second is that they tend to wear out quickly with repeated use. Typically either the fabric rips or tears, or more likely the wheels (which are the cheapest part of the trolley) will break. That last can be really awkward when it happens on the way home.

The solution to this is to make your own trolley, and it's simple (and relatively cheap) to do. I first constructed my own trolley shortly after I could no longer afford to run and keep my own car. Being on a fixed income, I still needed a way to get the groceries back home on the bus, which I now used. The earliest record of my Mega Trolley was in my comic Floods and Storms, which recounts what happened in 2007 when a monster storm hit the Newcastle area. Here's what it looked like:

As you can see, it's easy to make. You need a work trolley (usually one that folds away), a large (50 litres or more) squarish bin with the top removed, and a couple of "bungee cords" (elasticized cords with hooks or loops on the end). You fold out the work trolley, place the bin on the folded-out base plate, and then attach the two bungee cords around the bin so that each is secured around a shaft of the trolley. For example, my current mega trolley looks like this:

The bungee cords go inside and around the shaft.
These have a loop on the end that you can lock, and were better to use than ones with hooks because the shafts of the trolley are square in shape and hooks wouldn't hold properly.

This rig initially (in 2007) cost me $29 for the work trolley from ALDI, $20 for the bin from Big-W, and $7.50 for the special bungee cords from Bunnings. That's $56.50 all up, but the wheels won't break easily on this and the fabric won't rip (unless you're doing something SERIOUSLY WRONG)!!

## Usage

In general a trolley with no groceries in it will be easy to carry on a bus or train. It won't take up much more room than a regular commercial trolley, and the only extra weight will be your own shoulder/hand bag plus 2-4 shopping bags.

One technique I've used is placing the unfilled trolley into the middle of a shopping cart. As you go from store to store you place the purchased items behind the trolley, and put the items you're about to buy in front. Here's an example of that.

I started using this technique based on my own needs. Two full front compartments is about the limit of what the trolley will carry, and I have a practice of shopping for groceries at Coles / Woolworths first, and then at ALDI second (mainly because they have sorting tables, which allow me to repack the trolley). Of course if you have other things to buy, like a magazine for example, you might want to buy those before doing your grocery run, rather than pull a full trolley behind you.

Once you've finished your grocery shopping, find somewhere to repack your items into the trolley. Remove the trolley from the cart and place it on the floor. Place heavy/hard items at the bottom and lighter/bulkier items at the top. From my own experience I've found it best to use the trolley with a variety of shopping bags. Heavy and solid items can be placed in the bin, but additional bags can be placed on top, with the handles of the bags placed around the trolley so that they won't fall off.
Here's an example of that, being inspected by Bobby the dog.

In the photo above I have a "cold bag" on top of the bin, and a couple of overflow bags on the back. The purple bag on the top probably has bread (or something else that I don't want crushed) in it. The main thing is to make sure that the trolley's centre of gravity is lower than the top of the bin, otherwise pulling the trolley will be a bit random.

Pull the trolley with either palm up or down. It's better to pull it slightly to one side rather than directly behind. To get on the bus pull the trolley up to the door, step up, turn around and lift with two hands (bending knees if needed). When you exit the bus you can just pull the trolley out and allow the base plate to take the load. If you have to go up steps allow the wheels to take the load. Once you get home, unpack and store any objects in cool bags first.

## Maintenance and Replacements

A mega trolley is like an axe or a broom: when any of the three parts (trolley / bin / cords) wear out, replace it and carry on. In practice I've found that the trolleys wear out before the bin and bungees. Here's a comparison of three types of work trolley I've used for my mega trolley:

The trolley on the left was purchased as a special from ALDI in 2007. It lasted for 2 years until I started having problems with the wheels. The issue was that the mechanism that folds up the trolley base plate also folded the wheels backwards into the flat of the trolley. After a while empty trolleys, being lighter, would tend to fold up as I was walking, pulling the trolley along!

The middle trolley is one of several bought from Bunnings over the years. This type of trolley sometimes also folded up unexpectedly, but the main problem with it is that the wheels have soft rubber rims, and these wear out over time, especially if you don't have a sidewalk and have to pull them down a blue metal street!
The trolley on the right was also bought at ALDI (in fact I assembled it on the sorting table) and so far has no problems.

The other issue with these trolleys is their height. I'm 185cm tall, and the taller one at right is far easier to pull behind me than the other two, which for me involved a bit of a crouch. The width of the trolley affects which bags you can attach to it. I've found that cold bags will fit on all of them, but regular "eco" shopping bags struggle with the one on the right. My solution is to use slightly larger bags from Big-W and longer bags from Bunnings.

The other thing about these trolleys is the wheels. As you can see above, the first two have wheels that are permanently attached to the trolley. That's only an issue when they wear out, like the ones in the middle, because you can't replace them. You then need to buy a new trolley. The wheels at right are attached with washers and a "splint". If these wear out I should be able to buy replacements or equivalents from a local hardware store like Bunnings.

And that's it - pretty basic, but way more effective than those silly fabric and plastic trolleys that break on you.

## Exporting from Leveller to Opensim

This tutorial looks at working with Leveller (a powerful 3D terrain editor for Windows from Daylon Graphics) and OpenSimulator (Opensim). Part 1 explained how to import terrains from Opensim into Leveller. This part explains how to export heightfields in Leveller back into Opensim.

## OpenSimulator Regions used

This part uses the four regions test00, test01, test10, and test11 to demonstrate exporting terrains from Leveller back into the grid. The listing for these regions looks a bit like...

...however the positions are only important when importing the heightfield files using the load-tile command.

## Exporting single regions

To demonstrate exporting single regions we'll be using the following example from Leveller...
This 256x256 document was created by creating eight shapes, giving each a height (as shown on the map at right), and then selecting all shapes followed by executing a Shapes > Heightfield from Selected Points command. Finally the water level was set to 20 using Edit > Water Level > Elevation.

## Exporting to In-World

You can export the above Leveller document as a RAW file for loading in-world. To do this, select File > Export and pick the Second Life option. After selecting a file to save to (it needs to be an existing RAW file, but you can easily copy a file and rename that copy) you might see something like this:

If the document has any heights less than zero you will get an error and the export will fail. It's best to play around with Original Elevations and Use per-pixel scaling.

Once you've exported the file, you can go in-world and load it via your viewer. You have to be the owner of the region in order to upload the RAW file. Go to the Region / Estate dialog box and select the Terrain tab. You'll see something like this:

Select the Upload RAW terrain button, select the file, and the terrain will be loaded. It should look something like this:

The submerged area above is roughly at 20m and matches the default water level. However on the original Leveller document that coastline contour is 20.6, which clears the water. There's some trade-off and inaccuracy in using the .raw format. Compare this with exporting in .ter format (below).

## Server commands for loading terrains

The following two methods use server commands to load terrains into regions. Server commands are made within the Opensim console. These examples are taken using ConEmu, a "Windows console emulator with tabs, which presents multiple consoles and simple GUI applications as one customizable GUI window with various features".
ConEmu, rather than the Windows command line, was used because ConEmu behaves more like a Linux terminal: the user can scroll back through previous lines and reports, use the up arrow to see (and then edit) previous commands, and copy and paste from and to the console more easily than with the standard DOS prompt, and key shortcuts and colour schemes can be set for convenience. All this makes using the Opensim console a lot easier for a Windows user. Other OSs may have their own equivalents. ConEmu was used as I predominantly use Windows and Windows apps (and hence Leveller).

## For single regions

Leveller can export a heightfield to Opensim simply by exporting in Terragen format (.ter). To do so, select File > Export and select the Terragen option. After selecting a file to save to, the dialog should look something like this:

To load the terrain for a single region you must first change to that region. You do this by using the "change region" command. For example, to go to the test00 region we'd type...

change region test00 [ENTER]

...and the console prompt should now show...

Region (test00) #

If it doesn't, you may have mistyped the name of the region. Making typos like that can create unexpected errors!

Assuming you have now selected the region, you would use the "terrain load" command to load the terrain from a particular file. For example...

terrain load N:\OpenSim\HeightFields\Leveller\Tutorial\in-world\test00.ter

...would load the terrain from the test00.ter file in that directory (you need to put quotes around a file path if it has spaces in it). The format of the file is determined by the extension used, as follows:

- .r32/.f32: 32-bit RAW, see "RAW, 8 bit, 16 bit, and 32 bit explained"
- .ter: Terragen heightfield, see "Terragen™ Terrain file specification"
- .raw: Linden Lab/Second Life RAW, see "Tips for Creating Heightfields" and "Details on Terrain RAW Files"
- .jpg/.jpeg: Joint Photographic Experts Group image format
- .bmp: device independent BitMaP
- .png: Portable Network Graphic
- .gif: Graphics Interchange Format
- .tif/.tiff: Tagged Image File Format
- .gsd: geographic survey data file

As you can see above, the example file was loaded in the Terragen format. The reasons for choosing that format are that it is a heightfield format rather than a graphics format, and that Leveller uses a modified Terragen format for its own documents. If this is successful the console should look something like this:

After loading the file in the region it'll look something like this in Opensim:

Here's a comparison between .raw and .ter exports, using the in-world map:

A good work practice is to use the Leveller document as a master file, and to make changes in that, rather than re-import that region a second time to make modifications. Here's the above region re-imported back into Leveller using a raw file generated in-world...

...not pretty, is it? Most of the bumpiness and abstraction is caused by using an image format to save heightfield data. If you own or rent a region but can't access the console for that grid, see if you can get the administrator to use terrain save (see part 1) and terrain load for you (using .ter format), rather than using a .raw file.

## For tilesets

In Opensim you can create regions that are square with dimensions in multiples of 256, for example 256x256, 512x512, 768x768, and so on. However at present you cannot have two regions of different sizes adjacent without having issues with one or more of those regions. So, in general you won't see a 256x256 region next to a 512x512 region, but you may see a string of 256x256 or larger regions together. A "tileset" is a collection of adjacent or connected regions of the same size.

The following is a 512x512 example we'll be using in Leveller (which will be a 2x2 tileset of 256x256 regions):

You can just see the 256m grid lines on the map. This is also an example of the MicroDEM colour scheme.
Leveller allows you to change display colours for heights. It's also an example of me playing around with different tools!

## Export a Leveller document as a PNG

Leveller can export the above as a PNG file. To do so, use the File > Export command and select the PNG option. After choosing a filename and opening options, you should see something like this:

After OKing that, the following gets created in that directory:

The .wld and .xml files are created by Leveller in the process of making the PNG. And the PNG looks something like this:

The single PNG file created above for a tileset can be loaded into Opensim using the terrain load-tile command in the console. This only works loading from a PNG file. The syntax is:

terrain load-tile <filename> <tile width> <tile height> <xstart> <ystart>

...where:

- <filename> is the file name of the saved file (e.g. 2x2tileset(default).png),
- <tile width> <tile height> are the width and height of the tileset, and
- <xstart> <ystart> are the grid's x and y location of the south-west corner of the tileset (e.g. 1102 1000 for the sample island is the test00 region).

However, you must also select each region in the tileset in sequence, and then repeat the command for that image, in order to load the section of that PNG that corresponds to the region selected. For example, to load the PNG image above into test00, test01, test10 and test11, you would need to do the following commands:

Unlike the save-tile command (see part 1), you can load images that are not just in the /bin directory! The result of the above commands is...

Now this looks like the heightfield alright, but it seems to be a bit exaggerated, and what's happened with the water level? Comparing the Leveller map with the Opensim map, we can see this more clearly:

The water level hasn't changed in Opensim, so it must be something to do with the format of the PNG (16bpp and the map elevation range).
You can either go back to Leveller and fiddle around with the export settings (and check the document for silly errors like setting the water level at the wrong height, though in the above example it was set to 20.0008m in Leveller), or you can fiddle around with the heightfield either in-world or via the console. If you want to go the console route, select the region and try playing around with the following commands:

- terrain elevate: Raises the current heightmap by the specified amount.
- terrain lower: Lowers the current heightmap by the specified amount.
- terrain multiply: Multiplies the heightmap by the value specified.
- terrain rescale: Rescales the current terrain to fit between the given min and max heights.
- terrain revert: Loads the revert map terrain into the region's heightmap.

In this instance I was able to fix the height issue by using terrain lower 11 in each region. The result looks like...

But the solution won't always be that command - it varies depending on the terrain. There is a third alternative, and that's to use the Export Tileset feature of Leveller.

## Exporting tilesets from Leveller

Leveller now has an Export Tileset option which will automatically export a Leveller document as a set of Terragen-formatted files suitable to be imported using terrain load within Opensim. To create this, use File > Export Tileset. The following dialog will appear:

There are several sets of options on this dialog. Output filename allows you to select both the directory to save the tile files in, and a "base name" used in naming those files. The directory you use for a tileset export should ideally be empty, otherwise confusion might result. The base name (e.g. if you enter test.ter, the base name is test) is combined with codes generated by Leveller that indicate where in the tileset each file goes: in test_x0_y2.ter, the x and y codes give the tile's column and row within the tileset (numbering starts at 0).
See below for an efficient way of doing this.

Tiling has two ways to decide how to split up the document into tiles: by pixels per tile, or by the number of tiles across and down the tileset. If you always use the same size regions (and they'll always be square in Opensim) then you might just use the first option (e.g. 256), but the second might be easier if you're working with larger-sized regions.

The Elevation offset is added to the heightfields of the exported files, and works the opposite way to the same option on Import Tileset. It defaults to 20, which matches the default water level in Opensim, but can be changed.

The water height may be an issue with the current example. Above I mentioned it was set to 20.0008m (some minor rounding errors). The Export Tileset method expects the water level to be at 0m in Leveller. We could go back into Leveller, drop the heightfield by 20m, and then set the water level to 0m. To drop the whole heightfield we'd select all (or none) and then do Filter > Elevate with the following settings:

Setting the water level is as simple as Edit > Water Level > Elevation and entering 0.0 m (works better than just plain 0). That might be a good thing to do if you're working on several documents and most of them have the water level set to 0. However, if we just wanted to export this tileset now rather than doing that later, we could simply enter 0 m for the Elevation offset instead!

Assuming that we do, here's what the dialog might now look like:

After you OK that, the files in the Export sub-directory will look like this:

The next step is loading the files into Opensim using the terrain load command as detailed above. You simply change regions and then terrain load for each. The following is how this would look in the console for the example tileset...

...and here's the result in-world:

Part 3 shows how to make terrain using shapes.
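Loading a large exported tileset by hand means typing a change region / terrain load pair for every file. As a convenience (my own sketch, not a feature of Leveller or Opensim), a short script can turn the exported file names into the console command sequence to paste in. The tile-to-region mapping dictionary is an assumption you would adjust to match your own grid layout:

```python
# Generate the Opensim console command sequence for a Leveller
# "Export Tileset" directory of test_x?_y?.ter files.
import re

def tileset_commands(filenames, region_for_tile):
    """Yield 'change region' / 'terrain load' pairs for each tile file."""
    for name in sorted(filenames):
        m = re.match(r".*_x(\d+)_y(\d+)\.ter$", name)
        if not m:
            continue  # skip files that don't follow the tileset naming scheme
        tile = (int(m.group(1)), int(m.group(2)))
        yield "change region " + region_for_tile[tile]
        yield "terrain load " + name

# Hypothetical 2x2 mapping for the test00..test11 regions used in this part;
# check which axis the x/y codes correspond to on your own grid.
mapping = {(0, 0): "test00", (1, 0): "test10", (0, 1): "test01", (1, 1): "test11"}
files = ["test_x0_y0.ter", "test_x1_y0.ter", "test_x0_y1.ter", "test_x1_y1.ter"]
for line in tileset_commands(files, mapping):
    print(line)
```

The printed lines can then be pasted into the Opensim console one at a time (ConEmu makes pasting multiple commands easy).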
## Importing from Opensim to Leveller

Leveller is a powerful 3D terrain editor for Windows from Daylon Graphics. With Leveller the user can "paint" terrain, use vectors to create contours (which can then create heightfields) and use a variety of filters and tools to create distinctive and realistic terrain features (see the website for a more complete list of features). Users can also import and export heightfields in a variety of formats, and the latest update includes methods to easily import terrain from and export terrain to OpenSimulator (Opensim for short). This tutorial gives a run-through of how to do that and why.

## Why use an external terrain editor?

While it is possible to create and edit terrain within Opensim, this is a mixture of in-world techniques inherited from Second Life and brute-force console commands. It is possible to get exactly what you want, but using an external terrain editor to create or edit terrain has the following advantages:

• Editing a set of adjacent regions becomes a lot easier, as the editor can be used to edit the terrain for all the regions;
• Real-world terrain can often be imported into the editor and then exported into a grid;
• It provides the user with greater precision in editing the terrain, and terrain can easily be copied and/or divided into different documents;
• Dedicated terrain editors have tools which better emulate real terrain, and/or are easier to use;
• Terrain editors may have several levels of "undo", and allow backing up terrain sets; and
• Once you have a separate heightfield document, you can usually export it to a different editor if you want, and also use it in other applications (such as Unity).

Using a dedicated terrain editor encourages a certain type of workflow. The editor's version of the heightfield becomes the main or "master" version. Edits (and backups) then get made in the editor and exported to Opensim.
Minor changes might be made in Opensim, but once the heightfield is in an editor's document format, that is the version that gets edited for major changes. The limiting factor is how you can get terrain into and out of Opensim. Any terrain editor you're using needs to be able to read and write in a format that Opensim can understand. In the following examples we'll be using a sample grid, as shown below: A four-region island. As you can see the terrain's already been edited, and the larger island is composed of regions eeny, meeny, miny, and moe. These will be used to demonstrate saving terrain to be imported to Leveller. The four other regions of test00, test01, test10, and test11 will be used to demonstrate exporting terrains from Leveller back into the grid. The listing for these regions looks a bit like... The positions only become relevant when importing and exporting tilesets of multiple regions. We'll tackle importing to Leveller in this part, as the tutorial assumes you already have one or more regions in one or more grids (even if that is only a stand-alone grid).

## Importing via a viewer

You can save a region's terrain as a RAW file from in-world via the viewer if your avatar owns the region. You need to find the Region/Estate menu option within the viewer. The location of this can be different depending on which viewer you're using (the example below is using Singularity) and sometimes the ALT-R shortcut is assigned to it. Find the option and select it. When the Region/Estate dialog appears, move to the Terrain tab, as per below: Selecting Download RAW Terrain will save the current terrain as a RAW file. A RAW file is an image file used to store heightfield data. It can be edited in Photoshop and other graphics editors, but it can also be read by Leveller. After you've saved the file you'll have something like this: In this example we are only saving the terrain from the eeny region.
To import this file into Leveller, go to that application, select Import from the File menu, and from the list of formats select Second Life. You then use the file browser to select the .raw file to read and will see something like this: The choices are Elevations, Water elevations, and Original elevations. The Water channel comes from raw channel 3, whereas the others come from raw channels 1+2 and 12+13, respectively. A local coordinate system is set with the origin in the south-west corner, to match the ground coordinates that Second Life uses. After loading the raw file you should then see something like this in Leveller: This import can now be edited in Leveller. There may be some inaccuracy in using .raw files.

## Server commands for saving terrains

The following two methods use server commands to save regions' terrains. Server commands are made within the Opensim console. These examples are taken using ConEmu, a "Windows console emulator with tabs, which presents multiple consoles and simple GUI applications as one customizable GUI window with various features". ConEmu, rather than the Windows command line, was used because ConEmu behaves more like a Linux terminal, with the user being able to scroll back through previous lines and reports, and use the up arrow to see (and then edit) previous commands. Users are also able to copy and paste from and to the console more easily than with the standard DOS prompt, and key shortcuts and colour schemes can be set for convenience. All this makes using the Opensim console a lot easier for a Windows user. Other OSs may have their own equivalents. ConEmu was used as I predominantly use Windows and Windows apps (and hence Leveller).

## For single regions

To save the terrain for a single region you must first change to that region. You do this by using the "change region" command. For example, to go to the eeny region we'd type... change region eeny [ENTER] ...and the console prompt should now show...
Region (eeny) #

If it doesn't, you may have mistyped the name of the region. Making typos like that can create unexpected errors! Assuming you have now selected the region, you would use the "terrain save" command to save the terrain to a particular file. For example...

terrain save N:\OpenSim\HeightFields\Leveller\Tutorial\single\eeny.ter

...would save the terrain to the eeny.ter file in that directory (you need to put quotes around a file path if it has spaces in it). The format of the saved file is determined by the extension used, as follows:

| Extension | File format |
| --- | --- |
| .r32/.f32 | 32-bit RAW; see "RAW, 8 bit, 16 bit, and 32 bit explained" |
| .ter | Terragen heightfield; see "Terragen™ Terrain file specification" |
| .raw | Linden Lab/Second Life RAW; see "Tips for Creating Heightfields" and "Details on Terrain RAW Files" |
| .jpg/.jpeg | Joint Photographic Experts Group image format |
| .bmp | device-independent bitmap |
| .png | Portable Network Graphics |
| .gif | Graphics Interchange Format |
| .tif/.tiff | Tagged Image File Format |
| .gsd | geographic survey data file |

As you can see above, the example file was saved in the Terragen format. The reason for choosing that format is that it is a heightfield format rather than a graphics format. Leveller also uses the .ter extension for its documents, but these are not in Terragen format. Leveller can however easily import and export in Terragen format. To import this file into Leveller, from the File menu select Import and change the format to Terragen Terrain. After selecting a file you will see something like this: Notice how using Terragen instead of RAW gives a smoother result.

## For tilesets

In Opensim you can create regions that are square with dimensions in multiples of 256, for example 256x256, 512x512, 768x768, and so on. However, at present you cannot have two regions of different sizes adjacent without having issues with one or more of those regions.
So, in general you won't see a 256x256 region next to a 512x512 region, but you may see a string of 256x256 or larger regions together. A "tileset" is a collection of adjacent or connected regions of the same size, with each region being one tile. The following three examples are taken from 3rd Rock Grid: The above is a 2x2 tileset, surrounded by non-region water areas. The above is a 3x1 tileset, surrounded by non-region water areas. The above is a 2x3 tileset, surrounded by non-region water areas, but one that also includes gaps where there are no regions. The large island in the example used for this tutorial is a 2x2 tileset.

## Save-tile command

It is possible to save a single file for a tileset using the terrain save-tile command in the console. This only works when saving to a PNG file. The syntax is:

terrain save-tile <filename> <tile width> <tile height> <xstart> <ystart>

...where: <filename> is the file name of the saved file (e.g. EnnyIsland-all.png), <tile width> and <tile height> are the width and height of the tileset, and <xstart> <ystart> are the grid's x and y location of the south-west corner of the tileset (e.g. 1100 1000 for the sample island, which is the eeny region). The command is a little buggy in that it may fail if you add a path to the file name of the saved file (under Windows it gets saved in the \bin sub-directory), and a single command doesn't save all the regions' terrain, but rather the terrain of the region you have currently selected. For example, if your current region is moe and you do the following command... terrain save-tile EnnyIsland-all.png 2 2 1100 1000 ...you would get the following result: As you can see only the top left region (moe) has been saved to the file; the other regions are transparent. If you had eeny selected instead, you would get...
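Sketched out, saving every region of the sample island into the same PNG means repeating the command once per region (region names from the example grid; console output omitted):

```
change region eeny
terrain save-tile EnnyIsland-all.png 2 2 1100 1000
change region meeny
terrain save-tile EnnyIsland-all.png 2 2 1100 1000
change region miny
terrain save-tile EnnyIsland-all.png 2 2 1100 1000
change region moe
terrain save-tile EnnyIsland-all.png 2 2 1100 1000
```

Each invocation fills in only the currently selected region's tile, so after all four commands the PNG contains the whole island.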
To save all the regions' terrain, you need to change regions one at a time and repeat the same command: ...will produce this result: You can import this file to Leveller by doing an import as per above, but using the PNG option instead of .ter. The result will look something like this: Note how the terrain looks very terraced. This is a result of using a graphics format instead of a heightfield format.

## Saving tilesets for Leveller

Leveller now has an Import Tileset option which will automatically import saved terrain files from a tileset and place them correctly - provided you name the terrain files correctly first. To do this, you follow through the procedure outlined under For single regions for each region in the tileset. Each file must be in Terragen format (.ter) and its name must use the syntax base_xN1_yN2, where base is any text you want as long as it is the same for each tile, N1 is the horizontal tile index starting at zero and growing eastward, and N2 is the vertical tile index starting at zero and growing northward. For example, in saving the sample island, the following commands were used... Notice how the regions were changed between terrain saves, and how the file names were adjusted slightly to reflect the position of that region in the tileset. This results in the following files in the Import directory: We now swap to Leveller and select File > Import Tileset. After selecting that directory we would see something like this: The elevation offset (set to -20 by default) will adjust the heightfield after import. Opensim's default water level is 20, but Leveller's default water level (and that of other programs like L3DT) is zero. This allows the user to automatically adjust when importing from Opensim, and there is an export equivalent that adds height back on. We won't change this value. After import you should see: Notice how much smoother the heightfield is compared with the PNG import. We might want to adjust those grid lines on the map.
Going to Navigation > Gridlines we'll see something like this: Delete all spacings except for 1, and add 256 before OKing. Now the map will look like this, showing the different regions' borders: You can have gaps in the tileset, like the 2x3 tileset from 3rd Rock Grid. In this case Leveller just replaces empty positions with blocks of 0m heightfields. In part 2 we'll show how to export heightfields back to Opensim.
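As a recap of the tileset-saving procedure, a console session for the sample island might look like the sketch below. The directory, the island base name, and the mapping of meeny, miny, and moe to tile positions are assumptions (only eeny's south-west position is given above); adjust them to your own grid:

```
change region eeny
terrain save N:\OpenSim\HeightFields\Leveller\Tutorial\Import\island_x0_y0.ter
change region meeny
terrain save N:\OpenSim\HeightFields\Leveller\Tutorial\Import\island_x1_y0.ter
change region miny
terrain save N:\OpenSim\HeightFields\Leveller\Tutorial\Import\island_x0_y1.ter
change region moe
terrain save N:\OpenSim\HeightFields\Leveller\Tutorial\Import\island_x1_y1.ter
```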
https://materials.springer.com/search?searchTerm=consolidated%20palladium&propertyFacet=energy%20level&error=cookies_not_supported&code=e3070c6b-2811-4c63-8107-8739dfde8df4
14 result(s) using Focused Search for substance: consolidated palladium If you didn't find what you were looking for, see more results. Properties: energy level

1. # Landolt-Börnstein ## Energy levels for Pd-93 (Palladium-93) This document is part of Subvolume C ‘Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei’ of Volume 19 ‘Nuclear States from Charged Particle Reactions’ of Landolt-Börnstein - Group I ‘Elementa... 2. # Landolt-Börnstein ## Energy levels for Pd-95 (Palladium-95) This document is part of Subvolume C ‘Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei’ of Volume 19 ‘Nuclear States from Charged Particle Reactions’ of Landolt-Börnstein - Group I ‘Elementa... 3. # Landolt-Börnstein ## Energy levels for Pd-118 (Palladium-118) This document is part of Subvolume C ‘Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei’ of Volume 19 ‘Nuclear States from Charged Particle Reactions’ of Landolt-Börnstein - Group I ‘Elementa... 4. # Landolt-Börnstein ## Energy levels for Pd-120 (Palladium-120) This document is part of Subvolume C ‘Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei’ of Volume 19 ‘Nuclear States from Charged Particle Reactions’ of Landolt-Börnstein - Group I ‘Elementa... 5. # Landolt-Börnstein ## Energy levels for Pd-94 (Palladium-94) This document is part of Subvolume C ‘Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei’ of Volume 19 ‘Nuclear States from Charged Particle Reactions’ of Landolt-Börnstein - Group I ‘Elementa... 6. 
# Landolt-Börnstein ## Energy levels and branching ratios for Pd-96 (Palladium-96) This document is part of Subvolume C ‘Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei’ of Volume 19 ‘Nuclear States from Charged Particle Reactions’ of Landolt-Börnstein - Group I ‘Elementa... 7. # Landolt-Börnstein ## Energy levels and branching ratios for Pd-97(Palladium-97) This document is part of Subvolume C ‘Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei’ of Volume 19 ‘Nuclear States from Charged Particle Reactions’ of Landolt-Börnstein - Group I ‘Elementa... 8. # Landolt-Börnstein ## Energy levels and branching ratios for Pd-98 (Palladium-98) This document is part of Subvolume C ‘Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei’ of Volume 19 ‘Nuclear States from Charged Particle Reactions’ of Landolt-Börnstein - Group I ‘Elementa... 9. # Landolt-Börnstein ## Energy levels and branching ratios for Pd-113 (Palladium-113) This document is part of Subvolume C ‘Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei’ of Volume 19 ‘Nuclear States from Charged Particle Reactions’ of Landolt-Börnstein - Group I ‘Elementa... 10. # Landolt-Börnstein ## Energy levels and branching ratios for Pd-114 (Palladium-114) This document is part of Subvolume C ‘Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei’ of Volume 19 ‘Nuclear States from Charged Particle Reactions’ of Landolt-Börnstein - Group I ‘Elementa... 11. # Landolt-Börnstein ## Energy levels and branching ratios for Pd-115 (Palladium-115) This document is part of Subvolume C ‘Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei’ of Volume 19 ‘Nuclear States from Charged Particle Reactions’ of Landolt-Börnstein - Group I ‘Elementa... 12. 
# Landolt-Börnstein ## Energy levels and branching ratios for Pd-116 (Palladium-116) This document is part of Subvolume C ‘Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei’ of Volume 19 ‘Nuclear States from Charged Particle Reactions’ of Landolt-Börnstein - Group I ‘Elementa... 13. # Landolt-Börnstein ## Energy levels and branching ratios for Pd-117 (Palladium-117) This document is part of Subvolume C ‘Tables of Excitations of Proton- and Neutron-rich Unstable Nuclei’ of Volume 19 ‘Nuclear States from Charged Particle Reactions’ of Landolt-Börnstein - Group I ‘Elementa... 14. # Landolt-Börnstein ## Tables of Excitations from Reactions with Charged Particles. Part 2: Z = 37 - 62 · 46-Palladium This document is part of Subvolume B2 'Tables of Excitations from Reactions with Charged Particles. Part 2: Z= 37 - 62' of Volume 19 'Nuclear States from Charged Particle Reactions'. It provides energy level...
http://en.wikipedia.org/wiki/Exponential_integral
# Exponential integral

Not to be confused with other integrals of exponential functions.

Plot of E1 function (top) and Ei function (bottom).

In mathematics, the exponential integral Ei is a special function on the complex plane. It is defined as one particular definite integral of the ratio between an exponential function and its argument.

## Definitions

For real nonzero values of x, the exponential integral Ei(x) is defined as
$\operatorname{Ei}(x)=-\int_{-x}^{\infty}\frac{e^{-t}}t\,dt.\,$
The Risch algorithm shows that Ei is not an elementary function. The definition above can be used for positive values of x, but the integral has to be understood in terms of the Cauchy principal value due to the singularity of the integrand at zero.

For complex values of the argument, the definition becomes ambiguous due to branch points at 0 and $\infty$.[1] Instead of Ei, the following notation is used,[2]
$\mathrm{E}_1(z) = \int_z^\infty \frac{e^{-t}}{t}\, dt,\qquad|{\rm Arg}(z)|<\pi$
In general, a branch cut is taken on the negative real axis and E1 can be defined by analytic continuation elsewhere on the complex plane. For positive values of the real part of $z$, this can be written[3]
$\mathrm{E}_1(z) = \int_1^\infty \frac{e^{-tz}}{t}\, dt = \int_0^1 \frac{e^{-z/u}}{u}\, du ,\qquad \Re(z) \ge 0.$
The behaviour of E1 near the branch cut can be seen by the following relation:[4]
$\lim_{\delta\to0+}\mathrm{E_1}(-x \pm i\delta) = -\mathrm{Ei}(x) \mp i\pi,\qquad x>0.$

## Properties

Several properties of the exponential integral, described below, in certain cases allow one to avoid its explicit evaluation through the definition above.
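As an illustrative sanity check (my own sketch, not part of the article), the second integral form for real arguments can be evaluated numerically and compared against the known value $\mathrm{E_1}(1) \approx 0.2193839$:

```python
import math

def e1_quad(x, n=100000):
    """Approximate E1(x) for x > 0 using the form
    integral from 0 to 1 of exp(-x/u)/u du (midpoint rule)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h          # midpoint of the i-th subinterval
        total += math.exp(-x / u) / u
    return total * h

print(e1_quad(1.0))  # close to 0.2193839343955203
```

The integrand vanishes extremely fast as u approaches 0, so the midpoint rule has no trouble with the endpoint.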
### Convergent series

Integrating the Taylor series for $e^{-t}/t$, and extracting the logarithmic singularity, we can derive the following series representation for $\mathrm{Ei}(x)$ for real $x$:[5]
$\mathrm{Ei}(x) = \gamma+\ln |x| + \sum_{k=1}^{\infty} \frac{x^k}{k\; k!} \qquad x \neq 0$
For complex arguments off the negative real axis, this generalises to[6]
$\mathrm{E_1}(z) =-\gamma-\ln z-\sum_{k=1}^{\infty}\frac{(-z)^k}{k\; k!} \qquad (|\mathrm{Arg}(z)| < \pi)$
where $\gamma$ is the Euler–Mascheroni constant. The sum converges for all complex $z$, and we take the usual value of the complex logarithm having a branch cut along the negative real axis. This formula can be used to compute $\mathrm{E_1}(x)$ with floating point operations for real $x$ between 0 and 2.5. For $x > 2.5$, the result is inaccurate due to cancellation. A faster converging series was found by Ramanujan:
${\rm Ei} (x) = \gamma + \ln x + \exp{(x/2)} \sum_{n=1}^\infty \frac{ (-1)^{n-1} x^n} {n! \, 2^{n-1}} \sum_{k=0}^{\lfloor (n-1)/2 \rfloor} \frac{1}{2k+1}$

### Asymptotic (divergent) series

Relative error of the asymptotic approximation for different numbers $N$ of terms in the truncated sum.

Unfortunately, the convergence of the series above is slow for arguments of larger modulus. For example, for x = 10 more than 40 terms are required to get an answer correct to three significant figures.[7] However, there is a divergent series approximation that can be obtained by integrating $ze^z\mathrm{E_1}(z)$ by parts:[8]
$\mathrm{E_1}(z)=\frac{\exp(-z)}{z}\sum_{n=0}^{N-1} \frac{n!}{(-z)^n}$
which has error of order $O(N!z^{-N})$ and is valid for large values of $\mathrm{Re}(z)$. The relative error of the approximation above is plotted on the figure to the right for various values of $N$, the number of terms in the truncated sum ($N=1$ in red, $N=5$ in pink).
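As a numerical illustration (my own sketch, not from the article), summing a few dozen terms of the convergent series reproduces the known value $\mathrm{Ei}(1) \approx 1.8951178$:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def ei_series(x, terms=40):
    """Ei(x) for real nonzero x via gamma + ln|x| + sum_k x^k/(k * k!)."""
    total = GAMMA + math.log(abs(x))
    term = 1.0                      # running value of x^k / k!
    for k in range(1, terms + 1):
        term *= x / k
        total += term / k           # contributes x^k / (k * k!)
    return total

print(ei_series(1.0))  # close to 1.8951178163559368
```

For small |x| the terms shrink factorially, so 40 terms is already far more than double precision needs.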
### Exponential and logarithmic behavior: bracketing

Bracketing of $\mathrm{E_1}$ by elementary functions.

From the two series suggested in previous subsections, it follows that $\mathrm{E_1}$ behaves like a negative exponential for large values of the argument and like a logarithm for small values. For positive real values of the argument, $\mathrm{E_1}$ can be bracketed by elementary functions as follows:[9]
$\frac{1}{2}e^{-x}\,\ln\!\left( 1+\frac{2}{x} \right) < \mathrm{E_1}(x) < e^{-x}\,\ln\!\left( 1+\frac{1}{x} \right) \qquad x>0$
The left-hand side of this inequality is shown in the graph to the left in blue; the central part $\mathrm{E_1}(x)$ is shown in black and the right-hand side is shown in red.

### Definition by Ein

Both $\mathrm{Ei}$ and $\mathrm{E_1}$ can be written more simply using the entire function $\mathrm{Ein}$[10] defined as
$\mathrm{Ein}(z) = \int_0^z (1-e^{-t})\frac{dt}{t} = \sum_{k=1}^\infty \frac{(-1)^{k+1}z^k}{k\; k!}$
(note that this is just the alternating series in the above definition of $\mathrm{E_1}$). Then we have
$\mathrm{E_1}(z) \,=\, -\gamma-\ln z + {\rm Ein}(z) \qquad |\mathrm{Arg}(z)| < \pi$
$\mathrm{Ei}(x) \,=\, \gamma+\ln x - \mathrm{Ein}(-x) \qquad x>0$

### Relation with other functions

The exponential integral is closely related to the logarithmic integral function li(x) by the formula
$\mathrm{li}(x) = \mathrm{Ei}(\ln x)\,$
for positive real values of $x$. The exponential integral may also be generalized to
${\rm E}_n(x) = \int_1^\infty \frac{e^{-xt}}{t^n}\, dt,$
which can be written as a special case of the incomplete gamma function:[11]
${\rm E}_n(x) =x^{n-1}\Gamma(1-n,x).\,$
The generalized form is sometimes called the Misra function[12] $\varphi_m(x)$, defined as
$\varphi_m(x)={\rm E}_{-m}(x).\,$
Including a logarithm defines the generalized integro-exponential function[13]
$E_s^j(z)= \frac{1}{\Gamma(j+1)}\int_1^\infty (\log t)^j \frac{e^{-zt}}{t^s}\,dt$.
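The formulas in this section lend themselves to quick numerical spot-checks. The sketch below (mine, not from the article) computes $\mathrm{E_1}(x)$ from the alternating Ein series and confirms the elementary bracketing bounds stated earlier in this section:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def e1_series(x, terms=60):
    """E1(x) for small x > 0 via E1(x) = -gamma - ln(x) + Ein(x)."""
    ein = 0.0
    term = 1.0                  # running value of x^k / k!
    for k in range(1, terms + 1):
        term *= x / k
        ein += (-1) ** (k + 1) * term / k
    return -GAMMA - math.log(x) + ein

# Check (1/2) e^{-x} ln(1 + 2/x) < E1(x) < e^{-x} ln(1 + 1/x) at sample points.
for x in (0.5, 1.0, 2.0):
    lower = 0.5 * math.exp(-x) * math.log(1 + 2 / x)
    upper = math.exp(-x) * math.log(1 + 1 / x)
    assert lower < e1_series(x) < upper
print("bracketing bounds hold")
```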
The indefinite integral:
$\mathrm{Ei}(a \cdot b) = \iint e^{a b} \, da \, db$
is similar in form to the ordinary generating function for $d(n)$, the number of divisors of $n$:
$\sum\limits_{n=1}^{\infty} d(n)x^{n} = \sum\limits_{a=1}^{\infty} \sum\limits_{b=1}^{\infty} x^{a b}$

### Derivatives

The derivatives of the generalised functions $\mathrm{E_n}$ can be calculated by means of the formula[14]
$\mathrm{E_n}'(z) = -\mathrm{E_{n-1}}(z) \qquad (n=1,2,3,\ldots)$
Note that the function $\mathrm{E_0}$ is easy to evaluate (making this recursion useful), since it is just $e^{-z}/z$.[15]

### Exponential integral of imaginary argument

$\mathrm{E_1}(ix)$ against $x$; real part black, imaginary part red.

If $z$ is imaginary, it has a nonnegative real part, so we can use the formula
$\mathrm{E_1}(z) = \int_1^\infty \frac{e^{-tz}}{t} dt$
to get a relation with the trigonometric integrals $\mathrm{Si}$ and $\mathrm{Ci}$:
$\mathrm{E_1}(ix) = i\left(-\tfrac{1}{2}\pi + \mathrm{Si}(x)\right) - \mathrm{Ci}(x) \qquad (x>0)$
The real and imaginary parts of $\mathrm{E_1}(x)$ are plotted in the figure to the right with black and red curves.

## Applications

• Time-dependent heat transfer
• Nonequilibrium groundwater flow in the Theis solution (called a well function)
• Radiative transfer in stellar atmospheres
• Radial diffusivity equation for transient or unsteady-state flow with line sources and sinks
• Solutions to the neutron transport equation in simplified 1-D geometries[16]

## See also

• Goodwin–Staton integral

## Notes

1. ^ Abramowitz and Stegun, p. 228
2. ^ Abramowitz and Stegun, p. 228, 5.1.1
3. ^ Abramowitz and Stegun, p. 228, 5.1.4 with n = 1
4. ^ Abramowitz and Stegun, p. 228, 5.1.7
5. ^ For a derivation, see Bender and Orszag, p. 253
6. ^ Abramowitz and Stegun, p. 229, 5.1.11
7. ^ Bleistein and Handelsman, p. 2
8. ^ Bleistein and Handelsman, p. 3
9. ^ Abramowitz and Stegun, p. 229, 5.1.20
10. ^ Abramowitz and Stegun, p. 228, see footnote 3.
11. ^ Abramowitz and Stegun, p. 230, 5.1.45
12. ^ After Misra (1940), p. 178
13. ^ Milgram (1985)
14. ^ Abramowitz and Stegun, p. 230, 5.1.26
15. ^ Abramowitz and Stegun, p. 229, 5.1.24
16. ^ George I. Bell; Samuel Glasstone (1970). Nuclear Reactor Theory. Van Nostrand Reinhold Company.
https://www.physicsforums.com/threads/higgs-boson-mass-consequences.923971/
# Higgs Boson Mass Consequences

Thread starter alejandromeira

#1 What are the consequences of the experimental value of the Higgs boson mass for theories of multiverse and supersymmetry?

## Answers and Replies

#2 The Higgs boson mass significantly constrains the available parameter space of many supersymmetry theories. But, there isn't a really good compact way of describing that impact because there are so many versions of SUSY and so many free parameters in the theory. It really has no obvious impact on theories of multiverse which really don't deserve the title of "theories" anyway.

#3 Ok. Thank you so much. I still have a lot to study.

#4 The most interesting (for me, can't speak for others) consequence of measured Higgs boson mass value is a few unexplained correlations: With measured top and Higgs masses, SM sits right on vacuum stability/metastability line. Sum of squares of all SM bosons' masses is equal to half of square of Higgs VEV to within 0.35%.
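The second correlation is easy to check with rough numbers. The sketch below (my own, using approximate mass values in GeV rather than figures from the thread) compares the sum of squared W, Z, and Higgs masses with half the squared Higgs vacuum expectation value:

```python
# Approximate boson masses in GeV (illustrative values, not exact PDG fits)
m_w, m_z, m_h = 80.38, 91.19, 125.1
vev = 246.22  # Higgs vacuum expectation value, GeV

sum_sq = m_w**2 + m_z**2 + m_h**2
half_vev_sq = vev**2 / 2

rel_diff = abs(sum_sq - half_vev_sq) / half_vev_sq
print(f"relative difference: {rel_diff:.2%}")  # under half a percent with these inputs
```

How close the agreement comes out depends on which mass values you plug in; with these inputs it lands below 0.5%.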
http://math.stackexchange.com/questions/11081/calculus-find-the-limit-exp-vs-power
# Calculus, find the limit, Exp vs Power? $\lim_{x\to\infty} \frac{e^x}{x^n}$, where $n$ is any natural number. Using L'Hôpital doesn't make much sense to me. I did find this in the book: "In a struggle between a power and an exponential, the exponential wins." Can I cite that line as an answer? If the fraction were flipped, the limit would be zero; but in this case the limit is actually $\infty$.

- That statement you encountered is a nonrigorous version of a statement on growth rates; briefly, no matter how large you take $n$ in $x^n$, there is a value of $x$ beyond which $\exp(x)>x^n$. Now use this to see if a limit exists. –  J. M. Nov 20 '10 at 14:06

Repeated use of L'Hôpital's rule ($n$ times):

$$\lim_{x\to\infty}\dfrac{e^{x}}{x^{n}}=\lim_{x\to\infty}\dfrac{e^{x}}{nx^{n-1}}=\lim_{x\to\infty}\dfrac{e^{x}}{n(n-1)x^{n-2}}=\cdots=\lim_{x\to\infty}\dfrac{e^{x}}{n(n-1)\cdots 3\cdot 2x}=\lim_{x\to\infty}\dfrac{e^{x}}{n!}=\infty$$

To convince yourself: for $\lim_{x\to\infty}\dfrac{e^{x}}{x^{10}}$ you would have to apply L'Hôpital's rule ten times. Added: plot of $\dfrac{e^{x}}{x^{3}}$ (figure omitted).

-

Use the fact that $e^x \ge \left( 1 + \frac{x}{m} \right)^m$ for any $m > 0$; taking $m = n+1$ gives $\dfrac{e^x}{x^n} \ge \dfrac{(1+x/(n+1))^{n+1}}{x^n} \ge \dfrac{x}{(n+1)^{n+1}} \to \infty$.

-

HINT: One way of looking at this would be: since $e^x = \sum_{k\ge 0} \frac{x^k}{k!}$,

$$\frac{e^x}{x^n} = \frac{1}{x^{n}} \biggl[ \biggl(1 + \frac{x}{1!} + \frac{x^{2}}{2!} + \cdots + \frac{x^{n}}{n!}\biggr) + \frac{x^{n+1}}{(n+1)!} + \cdots \biggr]$$

I hope you understand why I put the brackets around those terms.

-

Why isn't L'Hôpital a good solution? Just use induction.

- L'Hôpital is a thing whose use should be avoided, as the alternatives are usually much more instructive. –  J. M. Nov 20 '10 at 14:16
- Considering the poster is just doing homework, I don't think a highbrow mathematical approach is needed here. Besides, I don't know what is so bad about L'Hôpital, particularly in this case. –  Raskolnikov Nov 20 '10 at 14:18

I believe the problem is tailor-made for repeated application of L'Hôpital's rule, but here are some thoughts ... You could note that $e^{x} = (e^{x/n})^n$, and consider $\left( \lim \frac{e^{x/n}}{x}\right)^n$, so that you are comparing an exponential to a single power of $x$, which might be a bit less daunting for you.

A bit more cleanly, and to make the numerator and denominator match better, define $y := \frac{x}{n}$. Then

$$\frac{e^{x}}{x^n}=\frac{e^{ny}}{(ny)^n}=\frac{\left(e^{y}\right)^n}{n^n y^n}=\frac{1}{n^n}\frac{\left(e^{y}\right)^n}{y^{n}}=\frac{1}{n^n}\left(\frac{e^y}{y}\right)^n$$

Since $n$ is a constant, you can direct your limiting attention to $\frac{e^y}{y}$ (as $y \to \infty$, of course).

-

Consider the limit of the logarithm of the expression:

$$\lim_{x\rightarrow\infty} \ln{\frac{e^x}{x^n}} = \lim_{x\rightarrow\infty}\bigl(x-n\ln x\bigr)=\lim_{x\rightarrow\infty}x\Bigl(1-\frac{n\ln x}{x}\Bigr)=\infty,$$

since $\frac{\ln x}{x}\to 0$. Therefore the limit is $\infty$, which completes the proof.
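The growth-rate claim in the hints above (no matter how large $n$ is, $e^x$ eventually exceeds $x^n$) is easy to see numerically. A quick Python sketch, with sample values chosen purely for illustration (they are not from the thread):

```python
import math

# For any fixed n, e^x eventually dominates x^n: the ratio e^x / x^n
# starts out tiny but grows without bound. Illustration with n = 10.
n = 10
ratios = {x: math.exp(x) / x**n for x in (10, 50, 100, 200)}
for x, r in ratios.items():
    print(f"x = {x:3d}: e^x / x^{n} = {r:.3e}")
```

At x = 10 the power term is still far ahead, but by x = 100 the exponential has overtaken it by many orders of magnitude, consistent with the limit being $\infty$.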
https://www.physicsforums.com/threads/relation-strength-interaction-and-decay-time.170906/
Relation strength interaction and decay time

1. May 19, 2007, da_willem

There is a characteristic time associated with the decay of particles: ~10^-16 s for electromagnetic decays, ~10^-23 s for strong decays, and >10^-13 s for weak decays. Now, I know that the decay time is, to first order, inversely proportional to the square of the coupling constant (from a first-order Feynman diagram with only a vertex contribution). So from this point of view I 'understand' why decays via the strong interaction proceed faster than those via the weak interaction, but how can one see this physically? Short times for virtual particles correspond to high energies by the Heisenberg uncertainty principle, and I've seen the relation between a virtual particle's mass and the interaction range, but why do interactions exchanging massless virtual gluons proceed faster than those exchanging photons, which in turn proceed faster than those exchanging massive intermediate vector bosons?

2. May 21, 2007, Meir Achuz

1. $$\alpha(EM)$$ and $$\alpha(QCD)$$ each vary with energy. At energies typical of decays (~100 MeV), $$\alpha(QCD)\sim 100\,\alpha(EM)$$.
2. The effective weak coupling for typical decays is $$\sim \alpha(EM)(M_p/M_W)^2$$.
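Meir Achuz's two points can be turned into a crude numerical sketch. The Python snippet below is an order-of-magnitude illustration only: it assumes lifetime scales as 1/coupling², takes the EM decay time from the question as the reference, and ignores phase space and propagator details, so the absolute numbers are rough; only the hierarchy is the point.

```python
# Crude order-of-magnitude sketch (assumptions: lifetime ~ 1/coupling^2,
# alpha(QCD) ~ 100 * alpha(EM) at decay energies, and the weak coupling
# suppressed by (M_p / M_W)^2 as stated above; not a real calculation).
alpha_em = 1 / 137
alpha_qcd = 100 * alpha_em                        # at ~100 MeV
alpha_weak_eff = alpha_em * (0.938 / 80.4) ** 2   # (M_p / M_W)^2 suppression

tau_em = 1e-16  # s, reference electromagnetic decay time from the question
tau_strong = tau_em * (alpha_em / alpha_qcd) ** 2
tau_weak = tau_em * (alpha_em / alpha_weak_eff) ** 2
print(f"strong ~ {tau_strong:.0e} s << EM ~ {tau_em:.0e} s << weak ~ {tau_weak:.0e} s")
```

The absolute values land only within a few orders of magnitude of the measured timescales, but the ordering strong << electromagnetic << weak comes out correctly from the couplings alone.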
http://mathhelpforum.com/advanced-algebra/195117-automorphisms-group-g.html
# Thread: Automorphisms of a group G

1. ## Automorphisms of a group G

Dummit and Foote, Section 4.4 (Automorphisms), Exercise 1 reads as follows:

Let $\sigma \in Aut(G)$ and let $\phi_g$ be conjugation by $g$; prove that $\sigma \phi_g \sigma^{-1}$ = $\phi_{\sigma (g)}$.

A start to the proof is as follows:

$(\sigma \phi_g \sigma^{-1})(x)$ = $\sigma (\phi_g (\sigma^{-1} (x)))$ = $\sigma (g. \sigma^{-1} (x). g^{-1} )$ = $\sigma (g). x . \sigma (g^{-1} )$

Now we have completed the proof if $\sigma (g^{-1}) = ( {\sigma (g) )}^{-1}$. But why is $\sigma (g^{-1}) = ( {\sigma (g) )}^{-1}$ in this case?

Peter

2. ## Re: Automorphisms of a group G

For an automorphism (indeed, for any homomorphism) we have $\phi (e) = e$. Thus $e = \phi (e) = \phi (g g^{-1}) = \phi (g) \phi( g^{-1} )$, and from this it follows that $\phi (g^{-1}) = {[\phi (g)]}^{-1}$.

Am I correct?

Peter
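The identity $\sigma \phi_g \sigma^{-1} = \phi_{\sigma (g)}$ can also be sanity-checked on a small concrete group. The Python sketch below (helper names are mine, not from the thread) takes $G = S_3$ with permutations stored as tuples, lets $\sigma$ range over the inner automorphisms, and verifies the identity pointwise:

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p(q(i)); permutations stored as tuples with p[i] = image of i
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def phi(g):
    # Conjugation by g: x -> g x g^{-1}
    return lambda x: compose(compose(g, x), inverse(g))

S3 = list(permutations(range(3)))

# Check sigma(phi_g(sigma^{-1}(x))) == phi_{sigma(g)}(x) for sigma = phi(s)
for s in S3:
    sigma, sigma_inv = phi(s), phi(inverse(s))
    for g in S3:
        for x in S3:
            assert sigma(phi(g)(sigma_inv(x))) == phi(sigma(g))(x)
print("identity verified on S3")
```

Of course this checks only the inner automorphisms of one small group; the algebraic argument in the thread covers every $\sigma \in Aut(G)$.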
https://www.unisannio.it/it/biblio?s=year&o=asc&f%5Bauthor%5D=9243
UNIVERSITÀ DEGLI STUDI DEL SANNIO   Benevento

# University publications (Pubblicazioni di ateneo)

Found 16 results. Sorted by year (ascending); columns: Author, Title, Type, Year. Filters: Author is Grelle, Gerardo [Clear All Filters]

1963
- Italian Journal of Engineering Geology and Environment—Book Series 6, International Conference, Vajont, vol. 2013, pp. 447–454, 1963.

2013
- Italian Journal of Geosciences, vol. 132, no. 3: GeoScienceWorld, pp. 341–349, 2013.
- Italian Journal of Geosciences, vol. 132, no. 3: Società Geologica Italiana, pp. 366–379, 2013.
- Italian Journal of Engineering Geology and Environment, vol. 6, pp. 447–454, 2013.

2014
- Analysis and Management of Changing Risks for Natural Hazards, 2014.
- Water Resources Management, vol. 28, no. 4: Springer, pp. 969–978, 2014.
- Bulletin of Engineering Geology and the Environment, vol. 73, no. 3: Springer, pp. 877–890, 2014.

2015
- Natural Hazards, vol. 77, no. 1: Springer, pp. 1–15, 2015.
- Engineering Geology for Society and Territory - Volume 2: Landslide Processes, pp. 1611–1613, 2015.
- Science of the Total Environment, vol. 532: Elsevier, pp. 208–219, 2015.

2016
- Procedia Earth and Planetary Science, vol. 12, 2016.

2017
- Natural Hazards and Earth System Sciences, vol. 17, pp. 881–885, 2017.
- Advancing Culture of Living with Landslides, vol. 2, pp. 471–479, 2017.
- Geomorphology, vol. 295, pp. 260–284, 2017.

2018
- 16th European Conference on Earthquake Engineering (16ECEE), 2018.

2019
http://gmatclub.com/forum/the-positive-integer-k-has-exactly-two-positive-prime-factor-60634.html?fl=similar
# The positive integer k has exactly two positive prime factors

Director gmatnub, 27 Feb 2008, 19:33:

The positive integer k has exactly two positive prime factors, 3 and 7. If k has a total of 6 positive factors, including 1 and k, what is the value of k?

(1) 3^2 is a factor of k
(2) 7^2 is NOT a factor of k

[Reveal] Spoiler: I searched through 6-7 pages using keywords, but I did not find this question asked; I think it could be a newly added question in the GMATPrep software. A somewhat trickily worded question, especially when time is running short. The OA is D.

[Reveal] Spoiler: OA
Intern x1050us, 27 Feb 2008, 20:34:

Based on the stem, the 6 factors of k are 1, 3, 7, 21, x and k, where 7 < x < k.

If statement (1) is used, the factors are 1, 3, 7, 9, 21, k, so k = 63. Sufficient.

For statement (2): since the stem says 3 and 7 are the only prime factors, x has to be 3^2, because x cannot be 7^2. Sufficient.

Answer (D)

CEO GMATBLACKBELT, 28 Feb 2008, 06:19:

k has 6 factors: 1, 3, 7, 21, X, K (all different). Essentially we need to find X; then we will know K.

1: X must be 9, because K then has two 3's as prime factors.
2: If 7^2 is not a factor of K, then X cannot be 49. Since we only have 3 and 7 as prime factors, 3 must supply the extra factor, and X would be 9.

I get D. I'm not sure why the OA is listed as A... =(

Manager xALIx, 16 Feb 2009, 21:55:

Good question. I have a different way of solving this. Let P1 be the power of the first prime factor and P2 the power of the second. The number of factors is (P1 + 1)(P2 + 1); this is a standard rule. Here that product must equal 6, so the split is either 2*3 or 3*2.
Statement 1 rules out the 3^1 * 7^2 split, therefore sufficient. Statement 2 also rules out the 3^1 * 7^2 split, therefore sufficient. Note that we cannot use the 6*1 split, because then we would have a 7^0 or a 3^0, which is not the case here.

Answer D. What do you think?

Intern DaveGG, 19 Feb 2009, 10:07:

x1050us wrote: "If statement (1) is used, the factors are 1, 3, 7, 9, 21, k, so k = 63."

I don't understand why k = 63. Why can't it be 27 (due to 3 x 9)?

SVP GMAT TIGER, 19 Feb 2009, 11:11:

In that case, k would have 3^3 as a factor. If so, k would have more than 6 factors, as under: 1, 3, 7, 9, 21, 27, 63, and 189.

As for the original question: we need one more 3 or 7 to reach 6 positive factors of k.
a: 3^2 as a factor makes exactly 6 positive factors.
b: If 7^2 is not a factor of k, then 3^2 must be, which again makes exactly 6 positive factors.
Intern samiam7, 23 Sep 2009, 08:24:

From the stem, we know that K's factors are 1, 3, 7, 21 (= 3*7), __, and K.

1) This tells us there are two factors of 3, so 9 is also a factor of K. K's factors are 1, 3, 7, 9, 21, and K. Since K's factorization contains two 3's and a 7, 3*3*7 = 63 is also a factor. Therefore K's factors are 1, 3, 7, 9, 21, 63. SUFFICIENT

2) If there are not two 7's in K's factorization, and there are exactly 6 factors total, there must be two factors of 3. Otherwise, if we were to use a non-prime factor, K would have more than 6 factors. (Remember, K has exactly two positive prime factors.) Therefore K's factors are 1, 3, 7, 9, 21, 63. SUFFICIENT

Answer is D.

Manager, 26 Sep 2009, 09:39:

Positive integer K has exactly two positive prime factors, 3 and 7. If K has a total of 6 factors, including 1 and K, what is the value of K?

(1) 3^2 is a factor of K
(2) 7^2 is not a factor of K

Solution: Since K has two positive prime factors, K = 3^a * 7^b. K has a total of 6 factors, meaning (a+1)(b+1) = 6, which can split as either 1*6 or 2*3. The split 1*6 is not possible, because then one of the exponents would be 0 and K would have just one prime factor. Hence the only option is 2*3: either a = 2, b = 1 or a = 1, b = 2, so K is either 3^2 * 7^1 or 3^1 * 7^2.

Now considering statement 1 alone: 3^2 is a factor of K.
This will be true only when K = 3^2 * 7^1. Thus statement 1 alone is sufficient.

Now considering statement 2 alone: 7^2 is not a factor of K. This will be true only when K = 3^2 * 7^1. Thus statement 2 alone is sufficient.

Hence D.

Intern, 09 Nov 2013, 16:06, replying to the original question:
K = 3^a * 7^b and (a+1)(b+1) = 6, so either a = 1, b = 2 or a = 2, b = 1. Either statement lets us pick between these, giving K = 3^2 * 7 = 63.

Manager, 16 Nov 2013, 13:43:

The solutions that try to name each factor are dangerous, because one always runs the risk of overlooking one or two factors. Oddly enough, I feel the best way to approach this problem is through combinatorics: the total number of factors of K (6, as mentioned in the stem) is the product of the number of possible powers of 3 and the number of possible powers of 7.

Statement one is sufficient: per the statement, 3 can appear 0, 1 or 2 times in a factor, i.e. 3 possibilities. Since the total number of factors of K is 6, the number of possible powers of 7 has to be 2 (7 appears 0 or 1 time). So 3 possibilities for the power of 3 times 2 possibilities for the power of 7 equals 6.

Statement two is also sufficient: the number of possible powers of 7 is either 2 (up to 7^1) or 3 (up to 7^2). The statement rules out the latter, leaving 2 possibilities for the power of 7, and hence 3 for the power of 3.

Intern jjack0310, 30 Dec 2013, 18:14:

samiam7 wrote: From the stem, we know that K's factors are 1, 3, 7, 21 (3*7), __, and K. 1) This tells us there are two factors of 3, so 9 is also a factor of K.
K's factors are 1, 3, 7, 9, 21, and K. Since there are two 3's and a 7 in K's factorization, 3*3*7 = 63 is also a factor. Therefore K's factors are 1, 3, 7, 9, 21, 63. SUFFICIENT. 2) If there are not two 7's in K's factorization, and there are exactly 6 factors total, there must be two factors of 3. Therefore K's factors are again 1, 3, 7, 9, 21, 63. SUFFICIENT. Answer is D.

The bold part is what I do not understand. I don't get the part where it says "there are two 3's and a 7 in K's factors". Can someone please explain why this is the case? What allows us to say two 3's and a 7? 9 is 3^2 and 21 is 3*7, but...?

Math Expert Bunuel, 31 Dec 2013, 03:16:
Finding the number of factors of an integer: first write the prime factorization $$n=a^p*b^q*c^r$$, where $$a$$, $$b$$, and $$c$$ are the prime factors of $$n$$ and $$p$$, $$q$$, and $$r$$ are their powers. The number of factors of $$n$$ is then given by $$(p+1)(q+1)(r+1)$$. NOTE: this count includes 1 and $$n$$ itself.

Example: finding the number of all factors of 450. Since $$450=2^1*3^2*5^2$$, the total number of factors of 450, including 1 and 450 itself, is $$(1+1)*(2+1)*(2+1)=2*3*3=18$$ factors.

Back to the original question. "k has exactly two positive prime factors 3 and 7" means $$k=3^m*7^n$$, where $$m=integer\geq{1}$$ and $$n=integer\geq{1}$$. "k has a total of 6 positive factors including 1 and k" means $$(m+1)(n+1)=6$$. Note that neither $$m$$ nor $$n$$ can be more than 2, as in that case $$(m+1)(n+1)$$ would exceed 6. So there are only two possible values of $$k$$:

1. if $$m=1$$ and $$n=2$$, then $$k=3^1*7^2=3*49=147$$;
2. if $$m=2$$ and $$n=1$$, then $$k=3^2*7^1=9*7=63$$.

(1) 3^2 is a factor of k: we have the second case, hence $$k=3^2*7^1=63$$. Sufficient.
(2) 7^2 is NOT a factor of k: we have the second case, hence $$k=3^2*7^1=63$$. Sufficient.

Answer: D. Hope it's clear.

Intern jjack0310, 01 Jan 2014, 08:58:

Thank you very much, Bunuel. Just one last question: the reason we do not account for the case m = 0 and n = 5 is that 3^0 would be 1, and in that case 3 would not be a prime factor of k. Correct?

Math Expert Bunuel, 02 Jan 2014, 04:19:

Absolutely. m and n must be greater than zero, because otherwise 3 and 7 are not factors of k.

Senior Manager, 24 Feb 2015, 06:06:

The stem says $$3^x*7^y=k$$ with $$(x+1)(y+1)=6$$ and at least one 3 and one 7, i.e. $$xy+x+y+1=6$$, or $$x(y+1)+y=5$$. This is satisfied both by x = 2, y = 1 (k = 3^2*7 = 63) and by x = 1, y = 2 (k = 3*7^2 = 147), so either statement is still needed to single out k = 63.

D
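The case analysis running through the thread is easy to confirm by brute force. A short Python check (helper names are mine, not from the thread) finds exactly the two candidates 3^2 * 7 = 63 and 3 * 7^2 = 147, and shows that each statement alone eliminates 147:

```python
def divisors(k):
    return [d for d in range(1, k + 1) if k % d == 0]

def prime_factors(k):
    ps, d = set(), 2
    while d * d <= k:
        while k % d == 0:
            ps.add(d)
            k //= d
        d += 1
    if k > 1:
        ps.add(k)
    return ps

# All k whose prime factors are exactly {3, 7} and which have 6 divisors:
candidates = [k for k in range(2, 1000)
              if prime_factors(k) == {3, 7} and len(divisors(k)) == 6]
print(candidates)                                   # [63, 147]
print([k for k in candidates if k % 9 == 0])        # statement (1): [63]
print([k for k in candidates if k % 49 != 0])       # statement (2): [63]
```

Either condition alone leaves only 63, which is why the answer is D.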
http://aas.org/archives/BAAS/v26n4/aas185/abs/S5011.html
Session 50 -- Supernova Remnants
Display presentation, Tuesday, 10, 1995, 9:20am - 6:30pm

## [50.11] An X-ray study of the supernova remnant W44

Ilana Harrus, John P. Hughes (SAO)

We report results from the analysis and modeling of data for the supernova remnant (SNR) W44. Spectral analysis of archival data from the Einstein Solid State Spectrometer, the ROSAT Position Sensitive Proportional Counter, and the Large Area Counters on Ginga, covering an energy range from 0.3 to 8 keV, indicates that the SNR can be described well using a nonequilibrium ionization model with temperature $\sim$0.8 keV, ionization timescale $\sim$9000 cm$^{-3}$ years, and elemental abundances close to the solar ratios. The column density toward the SNR is high: greater than 10$^{22}$ atoms cm$^{-2}$. As has been known for some time, W44 presents a centrally peaked surface brightness distribution in the soft X-ray band while at radio wavelengths it shows a limb-brightened shell morphology, in contradiction to predictions of standard models (e.g., Sedov) for SNR evolution. We have investigated two different evolutionary scenarios which can explain the centered X-ray morphology of the remnant: (1) the White and Long (1991) model involving the slow thermal evaporation of clouds engulfed by the supernova blast wave as it propagates through a clumpy interstellar medium (ISM), and (2) a hydrodynamical simulation of a blast wave propagating through a homogeneous ISM, including the effects of radiative cooling. Both models can have their respective parameters tuned to reproduce approximately the morphology of the SNR. We find that, for the case of the radiative-phase shock model, the best agreement is obtained for an initial explosion energy in the range $(0.5 - 0.6) \times 10^{51}$ ergs and an ambient ISM density of between 1.5 and 2 cm$^{-3}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8800327181816101, "perplexity": 2719.600830004759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826016.5/warc/CC-MAIN-20140820021346-00406-ip-10-180-136-8.ec2.internal.warc.gz"}
http://researchprofiles.herts.ac.uk/portal/en/publications/highprecision-radio-and-infrared-astrometry-of-lspm-j13141320ab--ii-testing-premainsequence-models-at-the-lithium-depletion-boundary-with-dynamical-masses(4cdfd6a1-78ff-4132-8231-b85df2b70736).html
# University of Hertfordshire

## High-Precision Radio and Infrared Astrometry of LSPM J1314+1320AB - II: Testing Pre-Main-Sequence Models at the Lithium Depletion Boundary with Dynamical Masses

Research output: Contribution to journal › Article

• Trent J. Dupuy
• Jan Forbrich
• Aaron Rizzuto
• Andrew W. Mann
• Kimberly Aller
• Michael C. Liu
• Adam L. Kraus
• Edo Berger

Original language: English
Number of pages: 14
Journal: The Astrophysical Journal
Volume: 827
Issue number: 1
DOI: https://doi.org/10.3847/0004-637X/827/1/23
Published - 3 Aug 2016

### Abstract

We present novel tests of pre-main-sequence models based on individual dynamical masses for the M7 binary LSPM J1314+1320AB. Joint analysis of our Keck adaptive optics astrometric monitoring along with Very Long Baseline Array radio data from a companion paper yields component masses of $0.0885\pm0.0006$ $M_{\odot}$ and $0.0875\pm0.0010$ $M_{\odot}$ and a parallactic distance of $17.249\pm0.013$ pc. We also derive component luminosities that are consistent with the system being coeval at an age of $80.8\pm2.5$ Myr, according to BHAC15 evolutionary models. The presence of lithium is consistent with model predictions, marking the first time the theoretical lithium depletion boundary has been tested with ultracool dwarfs of known mass. However, we find that the average evolutionary model-derived effective temperature ($2950\pm5$ K) is 180 K hotter than we derive from a spectral type-$T_{\rm eff}$ relation based on BT-Settl models ($2770\pm100$ K). We suggest that the dominant source of this discrepancy is model radii being too small by $\approx$13%. In a test that mimics the typical application of evolutionary models by observers, we derive masses on the H-R diagram using the luminosity and BT-Settl temperature.
The estimated masses are $46^{+16}_{-19}$% (2.0$\sigma$) lower than we measure dynamically and would imply that this is a system of $\approx$50 $M_{\rm Jup}$ brown dwarfs, highlighting the large systematic errors possible when inferring masses from the H-R diagram. This is the first time masses have been measured for ultracool ($\geq$M6) dwarfs displaying spectral signatures of low gravity. Based on features in the infrared, LSPM J1314+1320AB appears to have higher gravity than typical Pleiades and AB Dor members, opposite the expectation given its younger age. The components of LSPM J1314+1320AB are now the nearest, lowest-mass pre-main-sequence stars with direct mass measurements.

### Notes

Trent J. Dupuy, et al, 'HIGH-PRECISION RADIO AND INFRARED ASTROMETRY OF LSPM J1314+1320AB. II. TESTING PRE-MAIN-SEQUENCE MODELS AT THE LITHIUM DEPLETION BOUNDARY WITH DYNAMICAL MASSES', The Astrophysical Journal, Vol. 827 (1), 14pp, August 2016. doi:10.3847/0004-637X/827/1/23. © 2016. The American Astronomical Society. All rights reserved.

ID: 11212439
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49383702874183655, "perplexity": 12977.52322597627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540586560.45/warc/CC-MAIN-20191214094407-20191214122407-00475.warc.gz"}
https://arxiv.org/abs/1906.00176
hep-ex

# Title: Dark matter search in missing energy events with NA64

Abstract: A search for sub-GeV dark matter production mediated by a new vector boson $A'$, called the dark photon, is performed by the NA64 experiment in missing energy events from 100 GeV electron interactions in an active beam dump at the CERN SPS. From the analysis of the data collected in the years 2016, 2017, and 2018 with $2.84\times10^{11}$ electrons on target, no evidence of such a process has been found. The most stringent constraints on the $A'$ mixing strength with photons and the parameter space for the scalar and fermionic dark matter in the mass range $\lesssim 1$ GeV are derived, demonstrating the power of the active beam dump approach for the dark matter search.

Comments: 7 pages, 4 figures, metadata corrected. arXiv admin note: substantial text overlap with arXiv:1710.00971, arXiv:1610.02988
Subjects: High Energy Physics - Experiment (hep-ex); High Energy Physics - Phenomenology (hep-ph)
Report number: CERN-EP-2019-116
Cite as: arXiv:1906.00176 [hep-ex] (or arXiv:1906.00176v2 [hep-ex] for this version)

## Submission history

From: Sergei Gninenko [view email]
[v1] Sat, 1 Jun 2019 08:01:53 UTC (254 KB)
[v2] Thu, 6 Jun 2019 21:15:15 UTC (254 KB)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7923025488853455, "perplexity": 3658.676194382275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318375.80/warc/CC-MAIN-20190823104239-20190823130239-00370.warc.gz"}
https://socratic.org/questions/how-do-you-differentiate-f-x-sqrt-1-x-2-using-the-chain-rule
# How do you differentiate $f(x)=\sqrt{1/x^2}$ using the chain rule?

Mar 5, 2017

$\frac{d}{dx} \sqrt{\frac{1}{x^2}} = -\frac{\left|x\right|}{x^3}$

#### Explanation:

You can name:

$y(x) = \frac{1}{x^2} = x^{-2}$

so that:

$\frac{df}{dx} = \frac{df}{dy}\frac{dy}{dx} = \frac{d}{dy}\left(\sqrt{y}\right)\frac{d}{dx}\left(x^{-2}\right) = \frac{1}{2\sqrt{y}}\left(-2x^{-3}\right) = \frac{1}{2\sqrt{1/x^2}}\left(-2x^{-3}\right) = -\frac{\sqrt{x^2}}{x^3} = -\frac{\left|x\right|}{x^3}$

You can also note that:

$f(x) = \sqrt{\frac{1}{x^2}} = \frac{1}{\left|x\right|}$

so that:

$\frac{df}{dx} = \begin{cases} \frac{d}{dx}\left(\frac{1}{x}\right) = -\frac{1}{x^2} & \text{for } x > 0 \\ \frac{d}{dx}\left(-\frac{1}{x}\right) = \frac{1}{x^2} & \text{for } x < 0 \end{cases}$

which is clearly the same.
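To double-check the result above (my addition; this assumes the SymPy library is available, and is not part of the original answer), differentiate symbolically and compare against $-|x|/x^3$ at sample points on both sides of zero:

```python
import sympy as sp

x = sp.Symbol('x', real=True, nonzero=True)
f = sp.sqrt(1 / x**2)          # f(x) = sqrt(1/x^2) = 1/|x|
fprime = sp.diff(f, x)         # SymPy applies the chain rule

claimed = -sp.Abs(x) / x**3    # the derivative derived above
for val in (2, -2, sp.Rational(1, 3), -5):
    assert sp.simplify(fprime.subs(x, val) - claimed.subs(x, val)) == 0
print("matches -|x|/x^3 on both sides of zero")
```

Checking both positive and negative points matters here, since the sign flip across zero is exactly what the piecewise form of the answer captures.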
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9138834476470947, "perplexity": 8425.467021495502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653216.3/warc/CC-MAIN-20191014101303-20191014124303-00047.warc.gz"}
https://www.eevblog.com/forum/testgear/hantek-tekway-dso-hack-get-200mhz-bw-for-free/msg232730/
### Author Topic: Hantek - Tekway - DSO hack - get 200MHz bw for free  (Read 1759452 times)

0 Members and 2 Guests are viewing this topic.

#### jellytot
• Contributor
• Posts: 36

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1725 on: May 07, 2013, 11:41:34 am »

Well my Hantek DSO5102B (hw version 1005) is now unusable. Over the past few months it was having difficulty passing its self-calibration, stopping at different parts of the calibration sequence. I decided to remove the board to clean it, as there was some flux deposit still visible. After assembling it, it appeared to be working fine and passed calibration repeatedly, so I left it overnight, only to find that it started failing again the next day. Now it doesn't get past test 25/36, failing with the error 0x702 and then rebooting. When I try to use the scope its measurements are all over the place. Any ideas as to what is happening? Has it happened to anyone else, or have I just been unlucky? Is it curable? Anyone got a spare board for sale?

Regards
jellytot

#### NCG
• Contributor
• Posts: 18

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1726 on: May 08, 2013, 07:31:03 pm »

Wild shot, but for me the power connectors that go from the PSU to the main board were a bit too loose - you can tell if the crashes sometimes end with a white screen; also self cal did not finish. I had to separate the pins from the holder and press them individually to be a bit tighter, but simply reinserting the plugs might also help.

#### jellytot
• Contributor
• Posts: 36

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1727 on: May 09, 2013, 05:16:52 am »

Thanks NCG.
I tried what you suggested but I'm still getting the same error. I think it's just a lemon (i.e. HW 1005), as I was always getting intermittent calibration errors before this permanent fault.

#### jellytot
• Contributor
• Posts: 36

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1728 on: May 09, 2013, 07:12:42 am »

Working!    Turns out to be a faulty relay. Now all I've got to do is restore my backup    Anyone know if a DDS generator is sufficiently accurate to use for calibrating? I don't have anything decent at hand.

• Super Contributor
• Posts: 1922
• Country:

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1729 on: May 09, 2013, 08:51:28 pm »

Working!    Turns out to be a faulty relay. Now all I've got to do is restore my backup    Anyone know if a DDS generator is sufficiently accurate to use for calibrating? I don't have anything decent at hand.

cool, can you tell us which relay was broken ?

I don't want to be human! I want to see gamma rays, I want to hear X-rays, and I want to smell dark matter ... I want to reach out with something other than these prehensile paws and feel the solar wind of a supernova flowing over me.

#### FrankBuss
• Supporter
• Posts: 2321
• Country:

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1730 on: May 09, 2013, 09:18:54 pm »

- I actually made a couple of GPL requests myself. One at Agilent for the 3000-X and another to Vodafone. And in both cases I got the sources I asked for. Took time each time ( > 1 month ) but it worked out fine.

Which sources did you get? The Agilent DSO-X 3012A uses WindowsCE and a big custom written program and some custom libraries (I've disassembled it a bit, thanks to the fact that the firmware update image is not encrypted). And once you have shell access, which you get e.g. if you connect the internal UART and then use u-boot to boot the image (with telnetd patched) over the network, you can do interesting things on it.
The main program has some nice command line parameters

So Long, and Thanks for All the Fish
Electronics, hiking, retro-computing, electronic music etc.: https://www.youtube.com/c/FrankBussProgrammer

#### jellytot
• Contributor
• Posts: 36

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1731 on: May 10, 2013, 09:10:53 am »

Hi Tinhead. It's the calibration relay, ch1. As you probably know, I have been having intermittent calibration errors going way back, but it started to get worse, so I pulled out the mainboard and noticed a lot of flux residue, so I did a bad bad thing: I put it in an ultrasonic safe wash bath. Not recommended    according to the relay maker NEC. Funny thing, the unit worked OK for a few calibrations and I thought I'd fixed it; the next day it failed again repeatedly with the same error 0x702. I applied 5V to the relays and found that both in the ch1 circuit were sticking. I didn't have any spare relays so swapped the ch1 calibration relay with the trigger relay, and it appears to be working. I suspect that there may be faulty or damaged relays, and I have ordered some to swap them out. So not 100% certain until I replace them.

Tinhead, I was thinking of restoring a backup as I deleted the factory calibration files, but because I have changed relays is it pointless restoring a backup? I don't have a good precise signal source to calibrate with.

Also, my restore file is 69206016 while yours is 69206026; should it still work?

• Super Contributor
• Posts: 1922
• Country:

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1732 on: May 10, 2013, 11:33:54 am »

Tinhead, I was thinking of restoring a backup as I deleted the factory calibration files, but because I have changed relays is it pointless restoring a backup? I don't have a good precise signal source to calibrate with.
even if there could be a small influence from the new relays (their series resistance does not have any influence; the contact capacitance might be a bit different, and for sure some parasitic capacitance from the solder), you don't have a precise signal source, so restoring the backup is the best option.

Also, my restore file is 69206016 while yours is 69206026; should it still work?

where did I write 69206026? It must be of course 69206016, that's exactly 66M (64M data and 2M oob blocks).

#### jellytot
• Contributor
• Posts: 36

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1733 on: May 11, 2013, 10:23:25 am »

Also, my restore file is 69206016 while yours is 69206026; should it still work?

where did I write 69206026?

From a screenshot in the restore instructions: it mentions the file size transferred, which is why I was worried

#### jellytot
• Contributor
• Posts: 36

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1734 on: May 11, 2013, 10:44:15 am »

Just a tip for anyone that's doing the restore: don't forget to use the USB port at the rear and not the front, as I was doing for over an hour    trying different drivers for dnw until I realised. But happy to say it worked great in the end. Thanks again Tinhead for your great work on these scopes.

• Super Contributor
• Posts: 2979
• Country:

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1735 on: May 11, 2013, 10:51:42 am »

Just a tip for anyone that's doing the restore: don't forget to use the USB port at the rear and not the front, as I was doing for over an hour    trying different drivers for dnw until I realised. But happy to say it worked great in the end. Thanks again Tinhead for your great work on these scopes.
How were you managing to hook together a USB Type A port (front of scope) to another USB Type A port (computer)?

• Super Contributor
• Posts: 1922
• Country:

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1736 on: May 11, 2013, 10:59:08 am »

Just a tip for anyone that's doing the restore: don't forget to use the USB port at the rear and not the front, as I was doing for over an hour    trying different drivers for dnw until I realised.

yeah, dnw is sometimes tricky. Btw, I've just tested once again: the size of the full dump is, and must be, 69206016; however supervivi is displaying 69206026. No idea why it's like that, but it is ^^

#### jellytot
• Contributor
• Posts: 36

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1737 on: May 13, 2013, 04:07:22 am »

How were you managing to hook together a USB Type A port (front of scope) to another USB Type A port (computer)?

Hi marmad. Yes, that was my point: I didn't think, know or research about computer-to-computer communication using A-to-A type cables    I have used them in the past, e.g. for an external HDD, and assumed it would work in this case  Fortunately no damage occurred, and I learned something new, and so offered my experience as a tip to others possibly doing the same.

#### Purevector
• Contributor
• Posts: 32

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1738 on: May 15, 2013, 04:12:08 pm »

I'm pretty embarrassed to admit this, but I flashed my DSO5202B with firmware from an MSO5202D!!! The scope powers up, shows the logo and then a partial screen before it locks up and reboots due to the watchdog.
So, I have access to the shell through the UART, and I have been able to stop the watchdog reset and have copied a few DSO.exe files from this thread to the scope, but still no go.  I get a segmentation fault every time. Can anyone help me please?  I did not make a backup before I flashed it.

• Super Contributor
• Posts: 1922
• Country:

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1739 on: May 15, 2013, 05:52:47 pm »

I did not make a backup before I flashed it.

? ? ? how can you not make a backup??!!! seriously, this is a bad ass "bug": the MSO firmware can be flashed on the DSO, partially of course, which breaks the DSO ...

So, I have access to the shell through the UART, and I have been able to stop the watchdog reset and have copied a few DSO.exe files from this thread to the scope, but still no go.  I get a segmentation fault every time.

http://www.hantek.com.cn/Product/DSO5000Series/DSO5202B_Firmware.zip

unzip it, decrypt the *.up file (gpg -d, pass is 0571tekway), gunzip it, untar it and untar it again ... then copy the content to a USB flash drive, insert the flash drive into the DSO, boot to shell, kill the dsod process and copy the following files from the USB flash drive to the DSO:

protocol.inf to /protocol.inf
dso.exe to /dso.exe
English.lan to /OurLanguages/English.lan
help.db to /help.db
dsod to /dso/app/dsod
rcS to /etc/init.d/rcS

you need to chmod 777 all these files as well. That's all; now after a reboot the DSO should work as before.

#### Purevector
• Contributor
• Posts: 32

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1740 on: May 15, 2013, 06:14:59 pm »

Thanks for the help... done this but there is still a problem.
The unit boots and the display shows up, but none of the buttons work and there is no waveform on the screen.  There are a couple of errors in the startup log.  Here they are:

kobject_register failed for usb_storage (-17)
insmod: can't insert '/dso/driver/dso-usbstorage.ko': File exists
S3C2410 USB Controller Core Initialized
USB Function Character Driver Interface - 0.5, (C) 2001, Extenex Corp.
usbctl: Opened for usb-char
usbctl: Started for usb-char
usbcore: registered new driver usblp
drivers/dso_drivers/usblp.c: v0.13: USB Printer Device Class driver
bwscon:0x2211d110 fpga bank 11811
dso-fpga: install ok
kobject_register failed for s3c2440-i2c (-17)
dso-i2c: can't register device
insmod: can't insert '/dso/driver/dso-i2c.ko': Device or resource busy
x gpio e: 0xaa0001a6, gpio g :0xfd62f19a , gdata:0x798c
dso-spi: install ok
dso-uart: install ok
dso-buzzer: install ok
0x60c gpio_major_n = 6, io_minor_n = 12, output 1
dso-spi: open fpga file failed.
no update file to foud
now run app .....
Please press Enter to activate this console.

• Super Contributor
• Posts: 1922
• Country:

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1741 on: May 15, 2013, 06:35:34 pm »

looks like a missing or wrong /dn.rbf file (the FPGA design)

go to https://www.eevblog.com/forum/chat/hantek-tekway-dso-hack-get-200mhz-bw-for-free/msg170862/#msg170862

download the dn.rbf.zip file, unzip it, and copy one of the latest dn.rbf files from the folder dst1000B_models to the DSO as /dn.rbf

If you have a hw1007 model, dn_hw1007_83E9_date111122.rbf should work for you. I remember I posted the 83EB FPGA design as well, but yeah, both will work. If you have an older model, you need to choose an older design.
#### Purevector
• Contributor
• Posts: 32

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1742 on: May 15, 2013, 06:44:08 pm »

tinhead you are my new best friend !!!!! Everything works again - YAY

• Super Contributor
• Posts: 1922
• Country:

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1743 on: May 15, 2013, 07:14:02 pm »

you're welcome .. and make a backup (not that a restore is faster than what you did just now, but yeah, it is better to have one)

#### paul
• Contributor
• Posts: 31

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1744 on: May 18, 2013, 10:35:55 pm »

I finally got round to updating the firmware to 130306 in my scope. I am happy to say that the Delayed Sweep bug / weirdness is finally fixed, in all memory depths.

My last post on this bug: https://www.eevblog.com/forum/chat/hantek-tekway-dso-hack-get-200mhz-bw-for-free/msg189044/#msg189044

It's detailed as bug 15 on Tinhead's list, but it was not properly fixed until now, in 130306. I have been working around this bug for some time and I am glad to see it fixed.

Paul.

« Last Edit: May 24, 2013, 09:10:52 pm by paul »

#### ayechon
• Newbie
• Posts: 4

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1745 on: May 19, 2013, 10:16:25 am »

Hello, I just bought a DSO1062B and changed it to a DSO1202B with "DSO-BW-Change", thanks to this excellent forum. I have firmware version 2.01.1 (120909.0) and would like to download the 2.01.1 release (130129.0). Which firmware should I download from the Hantek site? DSO1062B or DSO1202B.
• Super Contributor
• Posts: 1922
• Country:

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1746 on: May 19, 2013, 10:42:15 am »

Which firmware should I download from the Hantek site? DSO1062B or DSO1202B.

the one for your current model name, not for the original name.

#### ayechon
• Newbie
• Posts: 4

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1747 on: May 19, 2013, 11:32:32 am »

Thank you for your quick response. The update of the DSO with the file "dso1kb_2.01.1_DSO1202B up (130129.0)" returned the message: "Firmware update failed, error: 0xfe No upgrade files on USB device detected!"

Ditto for the "dso1kb_backup_tool.up" file

• Super Contributor
• Posts: 1922
• Country:

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1748 on: May 19, 2013, 11:51:14 am »

take another flash drive, ensure only the *.up file is on the drive, and let the DSO enumerate the USB (you should see a "flash drive detected" message) <- those are the typical things to watch during a firmware update.

#### ayechon
• Newbie
• Posts: 4

##### Re: Hantek - Tekway - DSO hack - get 200MHz bw for free
« Reply #1749 on: May 19, 2013, 12:13:11 pm »

After formatting the USB drive as FAT32, it is functioning OK.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2675533890724182, "perplexity": 8897.11608592725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738015.38/warc/CC-MAIN-20200808165417-20200808195417-00169.warc.gz"}
https://homework.cpm.org/category/CON_FOUND/textbook/mc2/chapter/10/lesson/10.2.4/problem/10-120
10-120. If the area of the triangle at right is $132$ cm$^{2}$, what is the height?

Use the equation for finding the area of a triangle.

$\text{Area} = \frac{1}{2} (\text{base})(\text{height})$

Substitute all the values that are known.

$132 = \frac{1}{2} (16)(\text{height})$

Now simplify and solve for height.
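Carrying the last step through numerically (my addition, a minimal sketch using the numbers given in the problem):

```python
# Area = (1/2) * base * height, so height = 2 * Area / base.
area_cm2 = 132.0   # given area of the triangle, in cm^2
base_cm = 16.0     # given base length, in cm
height_cm = 2 * area_cm2 / base_cm
print(height_cm)   # 16.5  (so the height is 16.5 cm)
```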
{"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8982581496238708, "perplexity": 1728.529792244494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500215.91/warc/CC-MAIN-20230205032040-20230205062040-00294.warc.gz"}
http://docplayer.net/1768686-Ion-exchange-reactions-of-clays.html
# ION EXCHANGE REACTIONS OF CLAYS

ABSTRACT

ION EXCHANGE REACTIONS OF CLAYS
BY D. R. LEWIS *, **

It has been recognized for many years that many aspects of clay technology, including soil treatment and drilling mud treatment, must remain in an essentially empirical state until a basis for the understanding of ion exchange reactions is established. Much of the work on ion exchange reactions of clays in the past has been directed toward establishing total exchange capacities or determining the ionic distribution empirically. This information in general is not suitable for the evaluation of hypotheses designed to provide a basis for understanding the exchange reaction. When the techniques for characterizing the various clay minerals offered the possibility of quantitative study, the solution and exchanger phase contributions to the ionic distribution could be experimentally evaluated in principle. The particular experimental techniques which have been used to measure ionic distribution, however, frequently neglected observations which are essential if the data are to be used for testing and developing theories of ion exchange. It is now well recognized that molecular adsorption, complex ion formation in solution, and ion-pair formation between a mobile solution ion and a fixed exchanger group may occur in addition to the ion exchange reaction. Therefore, if the data are to be useful to develop theories of ion exchange, the whole system must be selected to minimize such extraneous contributions. On the basis of recent theoretical work, various experimental techniques are evaluated from the point of view of their suitability for equilibrium ion distribution studies. The mass action, adsorption isotherm, and Gibbs-Donnan equilibrium formulations of the ion exchange theory are discussed as they may apply to clay systems.
Recent progress is summarized in (1) solution thermodynamics of mixed electrolytes as it is relevant to ion exchange processes of clays, (2) the contributions of non-ideality of the clay exchanger phase, and (3) the work of swelling of clays which affects the ionic distributions in ion exchange reactions. It is concluded that the parameters which relate to the solid phase of the exchanger and those which relate to the solution are now sufficiently well recognized that future experiments can be planned which may more realistically provide an experimental basis for understanding the process of equilibrium ion exchange distributions in aqueous clay-electrolyte systems.

INTRODUCTION

In principle, all of the answers to questions involving the interaction of matter are calculable from relatively few basic concepts. In this sense, it has been pointed out that all of chemistry is now reducible to applied mathematics. It appears unlikely, however, that entirely theoretical computations will soon displace the experimental aspects of chemistry. At the other extreme, experimental work which is without the guidance offered by a coherent body of theory frequently lacks the integration and direction necessary to achieve useful results. The experimental work concerned with the distribution of ions that will be reached when a clay mineral is placed in a solution of electrolytes originally was without such a guide, and only quite recently has there been any adequate theoretical body on which experimental studies might be planned. There have been some excellent systematic experimental studies which are outstanding examples of intelligently planned work (Schachtschabel, no date; Wiklander, 1950), but the colloidal nature of the clays and the great number of variations provided by the different members of each of the major clay mineral groups add many complexities which must be separated and measured if the results of the experiments are to be useful for any system other than the specific one which has been investigated.

* Publication No. 26, Exploration and Production Research Division, Shell Development Co., Houston, Texas.
** Senior chemist, Exploration and Production Research Division, Shell Development Co., Houston 25, Texas.

Accordingly, this discussion of the basis of ion exchange behavior will exclude non-exchange phenomena. We will define an ion exchange reaction as a thermodynamically reversible interaction between the ions originally occupying a fixed number of reacting sites of the insoluble exchanger with the several ionic species in the solution. This definition eliminates from this discussion such interesting and important topics as the irreversible fixation of ions such as potassium (Wiklander, 1950), ammonium (Barshad, 1951; Joffe and Levine, 1947), zinc (Elgabaly, 1950; Elgabaly and Jenny, 1943), and lithium (Hofmann and Klemen, 1950). Neither will this discussion concern itself with reactions resulting in covalent bonds between certain clays and hydrogen, or with molecular adsorption from solution. By restricting our discussion to the clays, moreover, we have eliminated discussion of the reactions of the organic ion exchangers and of the inorganic zeolites. The early history of ion exchange studies, starting with the systematic studies by Thompson and Way a century ago, has recently been summarized by Deuel and Hostettler (1950), Duncan and Lister (1948), Kelley (1948), and Kunin and Myers (1950). When a clay mineral is placed in a solution containing several dissolved salts, the whole assembly will in time reach a steady-state condition of ionic distribution between the clay and the solution which will persist for a very long period of time. It is important to know how this equilibrium distribution depends upon the nature of the exchanger and its physical condition and how it depends on the nature of the solution.
In general, however, it is to be expected that a variety of processes, including the ion exchange reaction itself, may determine this distribution. Such processes as molecular adsorption, formation of complex solution ions, formation of difficultly soluble salts, or formation of complexes with the exchanger phase may be superimposed on the ion exchange reaction itself in a given system (Bonner, Argersinger, and Davidson, 1952). In the present discussion of the ion exchange reaction, attention will be directed toward those systems in which the distribution of ions arises primarily from the ion exchange reaction itself.

ION EXCHANGE PROPERTIES OF THE CLAY MINERAL GROUPS

It is convenient to consider clays as multivalent polyelectrolytes in ion exchange reactions. For each of the major crystal structure groups, however, it is important to take into consideration the effect of the distribution of charges in the lattice. The relationship between the crystalline structure of the silicate minerals and their ion-exchanging properties has been discussed in considerable detail by Bagchi (1949).

Part II: PROPERTIES OF CLAYS

Kaolin Group. Many members of the kaolin group of clay minerals exhibit an almost complete freedom from isomorphous substitution yet have a small but definite ion exchange capacity. The sites of the exchange reactivity of kaolinite are generally agreed to be associated with the structural OH groups on the exposed clay surfaces. Because of the differences in the balance of electrical charges of those hydroxyl ions along the lateral surfaces and those formed by the hydration of silica at the broken edges of the crystals, there may well be more than one class of exchange sites on kaolinite. This picture of the exchange activity arising from the dissociation of the surface hydroxyl protons is consistent with the low magnitude of the total exchange capacities of minerals of this group.

Attapulgite Group.
The fibrous clay group typified by attapulgite exhibits a very different geometry from the platy minerals and, accordingly, a different distribution of the charges on the surface ions. In attapulgite itself a small amount of the silicon is frequently replaced by aluminum ions, which give rise to the charge deficiency causing the ion exchange activity of attapulgite (Marshall, 1949). Because of its fibrous structure and the presence of channels parallel to the long axis of the crystals, in which many of the mobile exchange ions are found, the rate of the ion exchange reaction in attapulgite minerals may be much slower than in platy minerals. This would be expected if the ions along the channel must diffuse into the solution phase to reach an equilibrium.

Illite Group. The illite group of clay minerals are small particle size, plate-shaped clay minerals distinguished by their ability to fix potassium irreversibly. The ion exchange activity for the illites is attributed to isomorphous substitution occurring largely in the surface tetrahedral silica layers. This gives rise both to a more favorable geometric configuration for microscopic counter-balancing of the unbalance in electrical charge and also to the possibility of formation of covalent linkages. Either condition is likely to produce an irreversible reaction.

Montmorillonite Group. The most active clay group in terms of amount of ion exchange reactivity per unit weight of clay is the montmorillonite family. The high degree of their base exchange capacity and the rapidity of their reactions have long been recognized as outstanding attributes of this class of clay minerals.
Minerals of this group are plate-shaped, three-layer lattice minerals with a very high degree of isomorphous substitution, distributed both in the octahedral positions, in which chiefly magnesium substitutes for aluminum, and in the tetrahedral coordination, in which predominantly aluminum substitutes for silicon (Harry, 1950; Hendricks, 1945; Ross and Hendricks, 1945). Because of both the large base-exchange capacity and the widespread occurrence and economic importance of this group of minerals, a great deal of experimental work has been done (Hauser, 1951). As there are these marked differences in the structure, both geometrically and in electrical charge density, of the principal groups of clay minerals, there will be large variations in the relative contributions of reversible ion exchange reactions, the degree of amphoteric nature of the clay minerals, and physical adsorption to the equilibrium distribution of ions in an aqueous clay-electrolyte system.

EXPERIMENTAL TECHNIQUES

Methods of Preparing Hydrogen-Clay. Although this discussion is more directly concerned with the interpretation of the data having to do with ion exchange properties of clays than with the determination of the exchange properties themselves, the usefulness of the data is frequently affected considerably by the exact details of the method of determination of the exchange properties, and, accordingly, some attention must be given to the limitations of various techniques. One group of techniques which are commonly employed involves the preparation of the hydrogen form of the clay either by dialysis or electrodialysis or by direct action of a solution of a mineral acid. The acid form of the clay is then treated with the base of the desired salt form and the equilibrium distribution determined from the degree of conversion (often measured by the change in pH of the suspension system), or the inflection in the titration curve is used to determine the total exchange capacity.
The difficulties of interpretation of the titration curves of acid clays by either inorganic or organic bases are widely recognized (Marshall, 1949; Mitra and Rajagopalan, 1948, 1948a; Mukherjee and Mitra, 1946). In the first place, there is no general agreement about the nature of the exchange titration curve. The results of various researchers have varied from the production of definitely diprotic character in the titration curves (Slabaugh and Culbertson, 1951) to curves which have a very broad inflection or none at all, and in which the establishment of an end-point corresponding to the completion of a reaction is very difficult even to estimate. Some investigators have titrated to an arbitrary pH which they considered to be an end-point for the reaction, assuming that the distribution of proton activity of all the clays in the samples being titrated is the same, and that legitimate and reproducible conditions for measuring cell potentials in suspension are established in each suspension. The colloidal nature of the system complicates both the measurement of potentials and the interpretation of the potentials in terms of hydrogen ion activities (Mysels, 1951). Moreover, the anomalous behavior of the hydrogen ion in its reactions with clays has long been known, and recently the behavior of hydrogen ions in ion exchange reactions of clays has been found to exhibit a pattern that suggests that these ions are held to many clays partly by covalent bonds (Krishnamoorthy and Overstreet, 1950, 1950b). It is likely that studies of the equilibrium distribution of ions on clays should not involve the preparation of the hydrogen form as a necessary step (Glaeser, 1946a; Vendl and Hung, 1943).
A great deal of useful information concerning the polyelectrolyte nature of the clays can probably be derived ultimately from the studies on the titration behavior of the hydrogen form of the clays, but such information is not a necessary and integral part of the study of the exchange behavior of the clays.

CLAYS AND CLAY TECHNOLOGY [Bull. 169]

Method for Preparing Ammonium-Clay. The most satisfactory experimental technique to employ in a given set of experiments will depend to some extent on the intention of the application of the data. For example, for the determination of the total exchange capacity of the clay minerals, a variety of satisfactory procedures employing either ammonium acetate or ammonium chloride solutions neutralized with ammonium hydroxide have been described, which differ only in the details of the preparation and manipulation of the sample (Bray, 1942; Glaeser, 1946; Graham and Sullivan, 1938; Lewis, 1951). The ammonium ions retained by the clay may either be determined directly on the clay or eluted and determined separately. For the determination of the total exchange capacity of a number of clays the use of an ammonium-form ion exchange resin of suitable character has proved very satisfactory (Lewis, 1952; Wiklander, 1949, 1951).

Experimental Techniques. Experimental techniques may be adapted to micro quantities of clay, or methods may be used that permit the colorimetric determination of the exchange cations rapidly and easily, if less accurately. If the equilibrium distribution of ions between a clay phase and a solution phase is to be determined, the most direct method involves placing a clay with a known ion population in an electrolyte solution of known composition. After a suitable length of time both phases of this system are analyzed to determine the distribution of ions at equilibrium. This method, so direct in principle, is replete with pitfalls.
It may be convenient to analyze chemically only the solution phase before and after the reaction to determine the distribution of ions accomplished by the exchanger phase, thus requiring that the analytical procedure be very accurate in determining a small difference between two large numbers. Moreover, because the equilibrium water content of the clay depends strongly on its ion form, the concentration of the external solution changes as the ionic composition of the clay changes, and the degree of exclusion of molecular salts by the Donnan mechanism from the hydrated clay changes as the ionic form of the clay changes. The effect of the change in equilibrium water content with change in the ionic form of the exchanger may be so great that failure to consider it may so distort the results that ion exchange in its ordinary sense does not appear to take place (Lowen, Stoenner, and Argersinger, 1951). For the most accurate equilibrium determination, the solution phase and the exchanger phase should be physically separated in a manner which does not disturb the ionic equilibrium already established. For accurate work it is desirable to bring the clay to equilibrium with a given composition of electrolyte solution, separate the clay phase, and repeatedly bring the clay to equilibrium with successive portions of the same solution. In this way the composition of the electrolyte solution is not altered by the contribution of the displaced cations from the exchanger phase, so that the composition of the equilibrium solution phase may be determined accurately either from an accurately prepared composition of the equilibrium solution or an accurate analysis of the initial solution. The clay phase should finally be separated and analyzed directly for the distribution of the ions participating in the exchange reaction, or the exchanging pair displaced by a third cation and analyzed in the elution product.
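The first pitfall above — computing the exchanger's uptake as a small difference between two large solution analyses — can be illustrated numerically. The sketch below is not from the paper; the concentrations, volume, and the 1 percent analytical error are invented purely for illustration.

```python
# Illustrative sketch (invented numbers): how a small analytical error in two
# large solution concentrations distorts an uptake computed by difference.

def uptake_mmol(c_initial, c_final, volume_l):
    """Ions transferred to the clay (mmol), by difference of the solution
    analyses before and after equilibration."""
    return (c_initial - c_final) * volume_l

# True concentrations: 100 -> 98 mmol/L in 1 L, i.e. 2 mmol taken up.
true_uptake = uptake_mmol(100.0, 98.0, 1.0)

# With only a 1% error on each analysis (one reading high, one low),
# the computed uptake is roughly double the true value.
worst_case = uptake_mmol(101.0, 97.02, 1.0)
```

A 1 percent error in each individual analysis thus becomes an error of nearly 100 percent in the quantity of interest, which is why the paper recommends separating the phases and analyzing the clay directly.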
The direct experimental determination of equilibrium ionic distribution can be successful for studies of ion exchange if careful attention is paid to the details of the experiment, with suitable attention to analytical accuracy and proper manipulation of the sample, so that the final data provide an accurate picture of the equilibrium partition of electrolyte ions between the solution phase and the exchanger phase at equilibrium.

Clay Chromatographic Methods. A modification of the column chromatographic technique has been used recently in determining the exchange isotherms for clays. This technique involves the preparation of a column consisting of the clay in an inert matrix (asbestos) that provides suitable flow properties for the column. The exchange isotherm is obtained by measuring the composition of the solution passing through the column as one exchange cation on the clay is displaced by another. This technique in principle possesses the virtues of greatly reducing the amount of analytical work required and of having inherent in the process the separation of the clay and electrolyte phases. If radioactive isotopes are used as tracers for following changes in composition of the eluted solution, the whole process can be put on an essentially automatic basis. The recently reported determination of the cesium-sodium isotherm at room temperature on a montmorillonite from Chambers, Arizona (API 23), indicated considerable promise for this technique with the clay minerals (Faucher, Southworth, and Thomas, 1952). The colloidal character of the clay minerals, however, may cause mechanical difficulties in the preparation of suitable chromatographic columns, unless the columns are always operated with solutions having relatively high ionic strengths.

Clay-Resin Reaction Methods.
For the determination of the ionic distribution on clay particles at low solution concentrations, monofunctional sulfonic acid resins may be used by bringing an electrolyte solution and resin to equilibrium with the clay. After equilibrium is reached, it is possible from only a material balance and an analysis of the washed resin phase to determine the equilibrium distribution of ions on the clay in equilibrium with the electrolyte solution. It has been demonstrated that the distribution of ions between a clay and a solution is independent of the presence of the exchanging resin.

EXPERIMENTAL CONSIDERATIONS

There are two major classes of objectives in the examination of the data which are obtained in the study of ion exchange reactions. The first of these requires only that sufficient data be accumulated so that a working equation or graph can describe the data and permit interpolation and extrapolation of the behavior of this system to conditions not precisely covered by the experiments. This method permits considerable latitude in the type of parameters and the manner of the mathematical combination to provide a description of the actual behavior of the particular process. With such a description the behavior of the distribution of calcium ions and sodium ions, for example, on a specified clay could be summarized at the temperature and solution strength of the experiments over a relatively wide range of compositions of the exchanger and solution phases. Such descriptions of behavior serve a useful practical purpose. On the other hand, such descriptions in themselves provide no clues which suggest either the magnitude or direction of changes in selectivity of sodium with respect to calcium as the temperature, the total strength of the solution, or the mineral species should change.
The other objective is that of establishing a sound theoretical basis for understanding the different selectivities of the various ions when reacting with different exchanger phases. The mathematical expression of these theoretical views would provide not only a description of the process, but also a basis for prediction of changes in the nature of the distribution with changes in a wide variety of parameters which enter either explicitly or implicitly into the equations. Good experimental data obtained from well-characterized solutions of electrolytes interacting with well-defined clay mineral species are necessary for either of these considerations. At present there is a great need for more experimental information on the ion exchange behavior of clays under circumstances which permit the examination of the data with a view to testing various hypotheses and theories which have been offered as a basis for the ionic selectivity in ion exchangers. Although the nature of the experimental work which is needed from both practical and theoretical standpoints in the study of ion exchange of clays was clearly pointed out by Bray in 1942, the present need for these data in clay systems is as great as it was at that time. Both the theoretical and the experimental studies designed to establish the contribution of the several conceivable parameters to the actual selectivity of an exchanger for ions in solution have proceeded at a greatly accelerated rate in systems involving synthetic organic resin exchangers. The intensity of activity in the investigation of clay systems is increasing at present. Those aspects which Bray pointed out as much-needed extensions of the experimental effort involve leaving the range of ion distributions convenient from the standpoint of analytical techniques in general and extending these studies to very wide ranges of composition of the exchanger phase and over wide ranges of total concentrations of the solution as well.
While both of these directions are now being actively pursued by investigators of resin-electrolyte systems, similar progress has not been made in clay investigations. Another aspect on which Bray felt that considerably more work should be done is that of greatly increasing the number of different ions present in a system. From a practical standpoint, particularly in connection with soils, the need for such investigations is undoubtedly great. From the standpoint of theory, however, our knowledge of the specific interactions between ions in solution and in the exchanger phase is much too inadequate to enable us to apply this information theoretically at present. In his recent review of the theoretical progress being made in the elucidation of the mechanism of ion exchange reactions, Boyd (1951) summarized the current status of ion exchange equilibrium theory as being somewhat confused, with the disagreements in the literature far more numerous than the agreements. This sentiment echoed the conclusions expressed by Marshall (1949) in his discussion of the ion exchange reactions of clays when he reported that the only certain conclusion one can draw at present is that better experiments are needed. The various approaches which are presently being made to establish the principal mechanisms by which the solid exchanger phase controls the distribution of exchangeable cations among its available ion exchange sites when in equilibrium with a solution of a given composition may be classified into several broad groups.
The ion exchange equilibrium has been considered (1) as a class of reversible double-decomposition reaction which may be described by the principles of the law of mass action, (2) as an ionic adsorption reaction, the behavior of which may be described by a suitable isotherm equation for a mixture of electrolytes, (3) as a Gibbs-Donnan distribution between two phases, and (4) as reflecting the behavior of solution ions under the influence of a heteropolar ionic solid surface. Most investigators have preferred either the mass action or the adsorption description of the exchange process. In general, there are a number of changes which accompany the redistribution of ions in the ion exchange reaction. These variables must be considered when designing experiments to test the various hypotheses of the equilibrium distribution of ions in ion exchange reactions. They include the following processes which compete with the exchange reaction or accompany it:

A. Ion-pair formation between solution ions and exchangers.
B. Molecular adsorption of partially dissociated electrolytes.
C. Complex ion formation in solution.
D. Change in distribution of ion species with changes in concentration of electrolytes.

In addition to these processes, the solution concentration and composition may change during the ion exchange reaction because of the following factors, which must be evaluated to permit calculation of the equilibrium distribution:

A. Variation of equilibrium water content of exchanger with change in ion composition.
B. Change in solution volume resulting from exchange of electrolytes having different partial molar volumes.
MASS-ACTION DESCRIPTION OF ION EXCHANGE REACTION

If we consider a reversible reaction of the following form between monovalent cations A⁺ and B⁺ in solution and an exchanger phase Z,

A⁺ + BZ ⇌ B⁺ + AZ   (1)

the law of mass action describes the equilibrium distribution in terms of a product

K = (B⁺)(AZ) / [(A⁺)(BZ)]   (2)

In this expression the quantities in parentheses represent the activities of the various species. The activity of each species is a quantity aᵢ such that

μᵢ = μᵢ° + RT ln aᵢ   (3)

where μᵢ is the chemical potential of the species i, μᵢ° its chemical potential in some arbitrary standard state, R the universal gas constant, and T the absolute temperature. If the ion exchange reaction is truly reversible and if the activities aᵢ can be evaluated, at constant temperature and pressure, the constant K can be calculated and the Gibbs free energy of the reaction computed from its value. As a first approximation, the concentrations of the ions in solution and in the exchanger phase have been substituted for the activities. In this form, the value of K is a mass-law concentration product which is not expected to remain constant. For practical purposes, a closely related quantity, the selectivity coefficient D, is frequently calculated as

D = (AZ/BZ) / (C_A⁺/C_B⁺)   (4)

The type of variation of the equilibrium mass-law product is illustrated in figure 1. Both the mass-law concentration product and the selectivity coefficient are without direct theoretical utility themselves, although they are useful working quantities which differ from the thermodynamic quantities by suitable functions to convert the concentrations of ions to activities. The evaluation of the activities in both the solution and exchanger phases, however, involves several uncertainties at the present stage of our knowledge of these reactions.
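As a rough numerical companion to equations (2) and (4), the sketch below computes the mass-law concentration product and the selectivity coefficient for a mono-monovalent exchange. The concentrations and exchanger fractions are invented for illustration and are not data from the paper.

```python
# Hypothetical inputs: solution concentrations of A+ and B+, and the
# fractions of the exchange sites occupied by AZ and BZ.

def mass_law_product(c_a, c_b, x_az, x_bz):
    """Eq. (2) with concentrations substituted for activities:
    K_c = (B+)(AZ) / ((A+)(BZ))."""
    return (c_b * x_az) / (c_a * x_bz)

def selectivity_coefficient(x_az, x_bz, c_a, c_b):
    """Eq. (4): D = (AZ/BZ) / (C_A+ / C_B+)."""
    return (x_az / x_bz) / (c_a / c_b)

k_c = mass_law_product(0.01, 0.02, 0.6, 0.4)
d = selectivity_coefficient(0.6, 0.4, 0.01, 0.02)
```

Algebraically the two quantities coincide; neither is expected to stay constant as the exchanger composition changes, which is exactly the variation sketched in figure 1.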
A number of approaches have been employed to evaluate the activities of the ions which are reacting both in the solution phase and the clay phase. For the solution phase the basic data required are the activity coefficients of the electrolytes in mixed-ion solutions over the concentration and composition ranges employed in the reactions. In general, this information is not available, although Harned and Owen (1950) have summarized the available data and some rules for computing estimates of activities of electrolytes. The approximation is frequently made that the ion activity is that of the single electrolyte at the total ionic strength of the reacting mixed solution. There is the possible objection to all these methods that the activity of the dilute mixed electrolyte solution may not be the correct activity to use, on the grounds that the exchange reaction occurs only in the immediate vicinity of the highly ionic crystalline clay exchanger, where its activity would be expected to be significantly different from that in the dilute solution, both because of the change of dielectric constant of the solvent and the potential energy of the ion in this environment (Davis and Rideal, 1948; Greyhume, 1951; Grimley, 1950; Weyl, 1950). Since the over-all process is the transfer of ions from the dilute solution to the exchanger, however, and since at equilibrium the chemical potential of ions of any species is the same throughout the system, the solution thermodynamic activities should be suitable when they are known.

Figure 1. Variation in mass-law product with log(C_B⁺/C_A⁺).

Figure 2. Activity coefficients for 0.01 m HCl in electrolyte solutions.

The equilibrium constant for the reaction (1) can be written for the mono-monovalent exchange as

K = [m±(B) γ±(B) (AZ)] / [m±(A) γ±(A) (BZ)]   (5)

where m±(B) is the mean ionic molality of the cation B⁺ with the solution anion, and γ±(B) is the mean ionic activity coefficient for this electrolyte.
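Equation (5), together with equation (3), lets the Gibbs free energy of the exchange reaction be computed once K is known. The sketch below uses invented molalities, activity coefficients, and exchanger-phase activities purely for illustration; only the formulas mirror the text.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def equilibrium_constant(m_b, g_b, a_az, m_a, g_a, a_bz):
    """Eq. (5): K = m±(B) γ±(B) (AZ) / (m±(A) γ±(A) (BZ))."""
    return (m_b * g_b * a_az) / (m_a * g_a * a_bz)

def delta_g(k, temp_k=298.15):
    """Gibbs free energy of the exchange reaction, ΔG = -RT ln K."""
    return -R * temp_k * math.log(k)

# Invented values: equal activity coefficients cancel here, so K reduces
# to the concentration product of the previous example.
k = equilibrium_constant(0.02, 0.9, 0.6, 0.01, 0.9, 0.4)
```

A K greater than unity gives a negative ΔG, i.e. the forward exchange (A⁺ displacing B⁺ from the clay as written in reaction 1) is favored under the assumed conditions.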
These quantities are defined in terms of the molalities of the cation m₊ and anion m₋ as

m±^ν = m₊^ν₊ · m₋^ν₋   (6)

In this expression ν₊ is the valence of the cation, ν₋ the valence of the anion, and

ν = ν₊ + ν₋   (7)

Analogously, the mean ionic activity coefficients are

γ±^ν = γ₊^ν₊ · γ₋^ν₋   (8)

The mean ionic activities of ions are influenced by the presence of dissimilar ions. The values of the mean ionic activity coefficients for electrolytes have been determined by emf measurements in suitable cells. The effect of
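Equations (6)-(8) can be checked numerically. The sketch below uses an invented 0.01 molal CaCl₂ solution (ν₊ = 1, ν₋ = 2) as the example electrolyte; it is an illustration, not a computation from the paper.

```python
def mean_ionic_molality(m_plus, m_minus, nu_plus, nu_minus):
    """Eqs. (6)-(7): m± = (m+^ν+ · m-^ν-)^(1/ν) with ν = ν+ + ν-.
    The same functional form applies to the mean ionic activity
    coefficient γ± of eq. (8)."""
    nu = nu_plus + nu_minus
    return (m_plus ** nu_plus * m_minus ** nu_minus) ** (1.0 / nu)

# 0.01 molal CaCl2: m+ = 0.01, m- = 0.02, ν+ = 1, ν- = 2,
# so m± = (0.01 · 0.02²)^(1/3) = 4^(1/3) · 0.01.
m_pm = mean_ionic_molality(0.01, 0.02, 1, 2)
```

For a 1:1 electrolyte such as HCl the mean ionic molality reduces to the molality itself, which is the simple case behind the Figure 2 data.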
A. Atomic Masses and Avogadro s Hypothesis The Mole Concept A. Atomic Masses and Avogadro s Hypothesis 1. We have learned that compounds are made up of two or more different elements and that elements are composed of atoms. Therefore, compounds ### Chapter 6. Solution, Acids and Bases Chapter 6 Solution, Acids and Bases Mixtures Two or more substances Heterogeneous- different from place to place Types of heterogeneous mixtures Suspensions- Large particles that eventually settle out ### Molarity of Ions in Solution APPENDIX A Molarity of Ions in Solution ften it is necessary to calculate not only the concentration (in molarity) of a compound in aqueous solution but also the concentration of each ion in aqueous solution. ### Types of Solutions. Chapter 17 Properties of Solutions. Types of Solutions. Types of Solutions. Types of Solutions. Types of Solutions Big Idea: Liquids will mix together if both liquids are polar or both are nonpolar. The presence of a solute changes the physical properties of the system. For nonvolatile solutes the vapor pressure, boiling ### Intermolecular forces, acids, bases, electrolytes, net ionic equations, solubility, and molarity of Ions in solution: Intermolecular forces, acids, bases, electrolytes, net ionic equations, solubility, and molarity of Ions in solution: 1. What are the different types of Intermolecular forces? Define the following terms: ### This value, called the ionic product of water, Kw, is related to the equilibrium constant of water HYDROGEN ION CONCENTRATION - ph VALUES AND BUFFER SOLUTIONS 1. INTRODUCTION Water has a small but definite tendency to ionise. H 2 0 H + + OH - If there is nothing but water (pure water) then the concentration ### Chapter 4 Notes - Types of Chemical Reactions and Solution Chemistry AP Chemistry A. Allan Chapter 4 Notes - Types of Chemical Reactions and Solution Chemistry 4.1 Water, the Common Solvent A. Structure of water 1. 
Oxygen's electronegativity is high (3.5) and hydrogen's ### Formulae, stoichiometry and the mole concept 3 Formulae, stoichiometry and the mole concept Content 3.1 Symbols, Formulae and Chemical equations 3.2 Concept of Relative Mass 3.3 Mole Concept and Stoichiometry Learning Outcomes Candidates should be ### Paper 1 (7405/1): Inorganic and Physical Chemistry Mark scheme AQA Qualifications A-level Chemistry Paper (7405/): Inorganic and Physical Chemistry Mark scheme 7405 Specimen paper Version 0.5 MARK SCHEME A-level Chemistry Specimen paper 0. This question is marked ### 5s Solubility & Conductivity 5s Solubility & Conductivity OBJECTIVES To explore the relationship between the structures of common household substances and the kinds of solvents in which they dissolve. To demonstrate the ionic nature ### V. POLYPROTIC ACID IONIZATION. NOTICE: K a1 > K a2 > K a3 EQUILIBRIUM PART 2. A. Polyprotic acids are acids with two or more acidic hydrogens. EQUILIBRIUM PART 2 V. POLYPROTIC ACID IONIZATION A. Polyprotic acids are acids with two or more acidic hydrogens. monoprotic: HC 2 H 3 O 2, HCN, HNO 2, HNO 3 diprotic: H 2 SO 4, H 2 SO 3, H 2 S triprotic: ### SCIENCE Chemistry Standard: Physical Science Standard: Physical Science Nature of Matter A. Describe that matter is made of minute particles called atoms and atoms are comprised of even smaller components. Explain the structure and properties of ### Acids and Bases: Definitions. Brønsted-Lowry Acids and Bases. Brønsted-Lowry Acids and Bases CHEMISTRY THE CENTRAL SCIENCE CHEMISTRY THE CENTRAL SCIENCE Professor Angelo R. Rossi Department of Chemistry Spring Semester Acids and Bases: Definitions Arrhenius Definition of Acids and Bases Acids are substances which increase ### SECTION 14 CHEMICAL EQUILIBRIUM 1-1 SECTION 1 CHEMICAL EQUILIBRIUM Many chemical reactions do not go to completion. 
That is to say when the reactants are mixed and the chemical reaction proceeds it only goes to a certain extent, and ### CHM1 Review for Exam 12 Topics Solutions 1. Arrhenius Acids and bases a. An acid increases the H + concentration in b. A base increases the OH - concentration in 2. Strong acids and bases completely dissociate 3. Weak acids and ### Dynamic Soil Systems Part A Soil ph and Soil Testing Dynamic Soil Systems Part A Soil ph and Soil Testing Objectives: To measure soil ph and observe conditions which change ph To distinguish between active acidity (soil solution ph) and exchangeable acidity ### SOLUBILITY, IONIC STRENGTH AND ACTIVITY COEFFICIENTS SOLUBILITY, IONIC STRENGTH AND ACTIVITY COEFFICIENTS References: 1. See `References to Experiments' for text references.. W. C. Wise and C. W. Davies, J. Chem. Soc., 73 (1938), "The Conductivity of Calcium ### Chapter 11 Properties of Solutions Chapter 11 Properties of Solutions 11.1 Solution Composition A. Molarity moles solute 1. Molarity ( M ) = liters of solution B. Mass Percent mass of solute 1. Mass percent = 1 mass of solution C. Mole ### Prentice Hall. Chemistry (Wilbraham) 2008, National Student Edition - South Carolina Teacher s Edition. High School. High School Prentice Hall Chemistry (Wilbraham) 2008, National Student Edition - South Carolina Teacher s Edition High School C O R R E L A T E D T O High School C-1.1 Apply established rules for significant digits, ### Chemistry Objectives Chemistry Objectives Matter, and Measurement 1. know the definition of chemistry and be knowledgeable 3-14 about specific disciplines of chemistry. 2. understand the nature of the scientific method and ### Chapter 2: Atoms, Molecules & Life Chapter 2: Atoms, Molecules & Life What Are Atoms? An atom are the smallest unit of matter. Atoms are composed of Electrons = negatively charged particles. Neutrons = particles with no charge (neutral). ### Soil Chemistry Ch. 2. 
Chemical Principles As Applied to Soils Chemical Principles As Applied to Soils I. Chemical units a. Moles and Avogadro s number The numbers of atoms, ions or molecules are important in chemical reactions because the number, rather than mass ### Osmolality Explained. Definitions Osmolality Explained What is osmolality? Simply put, osmolality is a measurement of the total number of solutes in a liquid solution expressed in osmoles of solute particles per kilogram of solvent. When ### In the box below, draw the Lewis electron-dot structure for the compound formed from magnesium and oxygen. [Include any charges or partial charges. Name: 1) Which molecule is nonpolar and has a symmetrical shape? A) NH3 B) H2O C) HCl D) CH4 7222-1 - Page 1 2) When ammonium chloride crystals are dissolved in water, the temperature of the water decreases. ### 12.3 Colligative Properties 12.3 Colligative Properties Changes in solvent properties due to impurities Colloidal suspensions or dispersions scatter light, a phenomenon known as the Tyndall effect. (a) Dust in the air scatters the ### Chemistry 132 NT. Solubility Equilibria. The most difficult thing to understand is the income tax. Solubility and Complex-ion Equilibria Chemistry 13 NT The most difficult thing to understand is the income tax. Albert Einstein 1 Chem 13 NT Solubility and Complex-ion Equilibria Module 1 Solubility Equilibria The Solubility Product Constant ### 1. Balance the following equation. What is the sum of the coefficients of the reactants and products? 1. Balance the following equation. What is the sum of the coefficients of the reactants and products? 1 Fe 2 O 3 (s) + _3 C(s) 2 Fe(s) + _3 CO(g) a) 5 b) 6 c) 7 d) 8 e) 9 2. Which of the following equations ### Formulas, Equations and Moles Chapter 3 Formulas, Equations and Moles Interpreting Chemical Equations You can interpret a balanced chemical equation in many ways. 
On a microscopic level, two molecules of H 2 react with one molecule ### stoichiometry = the numerical relationships between chemical amounts in a reaction. 1 REACTIONS AND YIELD ANSWERS stoichiometry = the numerical relationships between chemical amounts in a reaction. 2C 8 H 18 (l) + 25O 2 16CO 2 (g) + 18H 2 O(g) From the equation, 16 moles of CO 2 (a greenhouse ### Acid-Base (Proton-Transfer) Reactions Acid-Base (Proton-Transfer) Reactions Chapter 17 An example of equilibrium: Acid base chemistry What are acids and bases? Every day descriptions Chemical description of acidic and basic solutions by Arrhenius ### CHAPTER 10: INTERMOLECULAR FORCES: THE UNIQUENESS OF WATER Problems: 10.2, 10.6,10.15-10.33, 10.35-10.40, 10.56-10.60, 10.101-10. CHAPTER 10: INTERMOLECULAR FORCES: THE UNIQUENESS OF WATER Problems: 10.2, 10.6,10.15-10.33, 10.35-10.40, 10.56-10.60, 10.101-10.102 10.1 INTERACTIONS BETWEEN IONS Ion-ion Interactions and Lattice Energy ### Chemistry 151 Final Exam Chemistry 151 Final Exam Name: SSN: Exam Rules & Guidelines Show your work. No credit will be given for an answer unless your work is shown. Indicate your answer with a box or a circle. All paperwork must ### Chemistry. CHEMISTRY SYLLABUS, ASSESSMENT and UNIT PLANNERS GENERAL AIMS. Students should be able to i CHEMISTRY SYLLABUS, ASSESSMENT and UNIT PLANNERS GENERAL AIMS Students should be able to - apply and use knowledge and methods that are typical to chemistry - develop experimental and investigative skills, ### Experiment 9 - Double Displacement Reactions Experiment 9 - Double Displacement Reactions A double displacement reaction involves two ionic compounds that are dissolved in water. In a double displacement reaction, it appears as though the ions are ### W1 WORKSHOP ON STOICHIOMETRY INTRODUCTION W1 WORKSHOP ON STOICHIOMETRY These notes and exercises are designed to introduce you to the basic concepts required to understand a chemical formula or equation. 
Relative atomic masses of ### STATE UNIVERSITY OF NEW YORK COLLEGE OF TECHNOLOGY CANTON, NEW YORK COURSE OUTLINE CHEM 150 - COLLEGE CHEMISTRY I STATE UNIVERSITY OF NEW YORK COLLEGE OF TECHNOLOGY CANTON, NEW YORK COURSE OUTLINE CHEM 150 - COLLEGE CHEMISTRY I PREPARED BY: NICOLE HELDT SCHOOL OF SCIENCE, HEALTH, AND PROFESSIONAL STUDIES SCIENCE DEPARTMENT ### 4. Acid Base Chemistry 4. Acid Base Chemistry 4.1. Terminology: 4.1.1. Bronsted / Lowry Acid: "An acid is a substance which can donate a hydrogen ion (H+) or a proton, while a base is a substance that accepts a proton. B + HA ### Equilibria. Unit Outline Acid Base Equilibria 17Advanced Unit Outline 17.1 Acid Base Reactions 17.2 Buffers 17.3 Acid Base Titrations 17. Some Important Acid Base Systems In This Unit We will now expand the introductory coverage ### CHAPTER 6 Chemical Bonding CHAPTER 6 Chemical Bonding SECTION 1 Introduction to Chemical Bonding OBJECTIVES 1. Define Chemical bond. 2. Explain why most atoms form chemical bonds. 3. Describe ionic and covalent bonding.. 4. Explain ### Solute and Solvent 7.1. Solutions. Examples of Solutions. Nature of Solutes in Solutions. Learning Check. Solution. Solutions Chapter 7 s 7.1 s Solute and Solvent s are homogeneous mixtures of two or more substances. consist of a solvent and one or more solutes. 1 2 Nature of Solutes in s Examples of s Solutes spread evenly throughout ### Chapter 3 Molecules, Moles, and Chemical Equations. Chapter Objectives. Warning!! Chapter Objectives. Chapter Objectives Larry Brown Tom Holme www.cengage.com/chemistry/brown Chapter 3 Molecules, Moles, and Chemical Equations Jacqueline Bennett SUNY Oneonta 2 Warning!! These slides contains visual aids for learning BUT they ### Chapter 4: Structure and Properties of Ionic and Covalent Compounds Chapter 4: Structure and Properties of Ionic and Covalent Compounds 4.1 Chemical Bonding o Chemical Bond - the force of attraction between any two atoms in a compound. 
o Interactions involving valence ### Chemistry Diagnostic Questions Chemistry Diagnostic Questions Answer these 40 multiple choice questions and then check your answers, located at the end of this document. If you correctly answered less than 25 questions, you need to ### Solutions Review Questions Name: Thursday, March 06, 2008 Solutions Review Questions 1. Compared to pure water, an aqueous solution of calcium chloride has a 1. higher boiling point and higher freezing point 3. lower boiling point ### Stoichiometry and Aqueous Reactions (Chapter 4) Stoichiometry and Aqueous Reactions (Chapter 4) Chemical Equations 1. Balancing Chemical Equations (from Chapter 3) Adjust coefficients to get equal numbers of each kind of element on both sides of arrow. ### Chapter 8 How to Do Chemical Calculations Chapter 8 How to Do Chemical Calculations Chemistry is both a qualitative and a quantitative science. In the laboratory, it is important to be able to measure quantities of chemical substances and, as ### An acid is a substance that produces H + (H 3 O + ) Ions in aqueous solution. A base is a substance that produces OH - ions in aqueous solution. Chapter 8 Acids and Bases Definitions Arrhenius definitions: An acid is a substance that produces H + (H 3 O + ) Ions in aqueous solution. A base is a substance that produces OH - ions in aqueous solution. ### PART I: MULTIPLE CHOICE (30 multiple choice questions. Each multiple choice question is worth 2 points) CHEMISTRY 123-07 Midterm #1 Answer key October 14, 2010 Statistics: Average: 74 p (74%); Highest: 97 p (95%); Lowest: 33 p (33%) Number of students performing at or above average: 67 (57%) Number of students ### EXPERIMENT # 3 ELECTROLYTES AND NON-ELECTROLYTES EXPERIMENT # 3 ELECTROLYTES AND NON-ELECTROLYTES Purpose: 1. To investigate the phenomenon of solution conductance. 2. 
To distinguish between compounds that form conducting solutions and compounds that ### Chapter 4 Chemical Reactions Chapter 4 Chemical Reactions I) Ions in Aqueous Solution many reactions take place in water form ions in solution aq solution = solute + solvent solute: substance being dissolved and present in lesser ### Chem 1B Saddleback College Dr. White 1. Experiment 8 Titration Curve for a Monoprotic Acid Chem 1B Saddleback College Dr. White 1 Experiment 8 Titration Curve for a Monoprotic Acid Objectives To learn the difference between titration curves involving a strong acid with a strong base and a weak ### CHEM 102: Sample Test 5 CHEM 102: Sample Test 5 CHAPTER 17 1. When H 2 SO 4 is dissolved in water, which species would be found in the water at equilibrium in measurable amounts? a. H 2 SO 4 b. H 3 SO + 4 c. HSO 4 d. SO 2 4 e. ### Chapter 14 - Acids and Bases Chapter 14 - Acids and Bases 14.1 The Nature of Acids and Bases A. Arrhenius Model 1. Acids produce hydrogen ions in aqueous solutions 2. Bases produce hydroxide ions in aqueous solutions B. Bronsted-Lowry ### Physical pharmacy. dr basam al zayady Physical pharmacy Lec 7 dr basam al zayady Ideal Solutions and Raoult's Law In an ideal solution of two volatile liquids, the partial vapor pressure of each volatile constituent is equal to the vapor pressure ### INTRODUCTORY CHEMISTRY Concepts and Critical Thinking INTRODUCTORY CHEMISTRY Concepts and Critical Thinking Sixth Edition by Charles H. Corwin Chapter 13 Liquids and Solids by Christopher Hamaker 1 Chapter 13 Properties of Liquids Unlike gases, liquids do ### Q.1 Classify the following according to Lewis theory and Brønsted-Lowry theory. Acid-base 2816 1 Acid-base theories ACIDS & BASES - IONIC EQUILIBRIA LEWIS acid electron pair acceptor H +, AlCl 3 base electron pair donor NH 3, H 2 O, C 2 H 5 OH, OH e.g. 
H 3 N: -> BF 3 > H 3 N + BF ### Chapter 13 & 14 Practice Exam Name: Class: Date: Chapter 13 & 14 Practice Exam Multiple Choice Identify the choice that best completes the statement or answers the question. 1. Acids generally release H 2 gas when they react with a. ### Chapter 17. How are acids different from bases? Acid Physical properties. Base. Explaining the difference in properties of acids and bases Chapter 17 Acids and Bases How are acids different from bases? Acid Physical properties Base Physical properties Tastes sour Tastes bitter Feels slippery or slimy Chemical properties Chemical properties ### Science 20. Unit A: Chemical Change. Assignment Booklet A1 Science 20 Unit A: Chemical Change Assignment Booklet A FOR TEACHER S USE ONLY Summary Teacher s Comments Chapter Assignment Total Possible Marks 79 Your Mark Science 20 Unit A: Chemical Change Assignment ### Chemical equilibria Buffer solutions Chemical equilibria Buffer solutions Definition The buffer solutions have the ability to resist changes in ph when smaller amounts of acid or base is added. Importance They are applied in the chemical ### Equilibrium, Acids and Bases Unit Summary: Equilibrium, Acids and Bases Unit Summary: Prerequisite Skills and Knowledge Understand concepts of concentration, solubility, saturation point, pressure, density, viscosity, flow rate, and temperature ### Acids and Bases. Chapter 16 Acids and Bases Chapter 16 The Arrhenius Model An acid is any substance that produces hydrogen ions, H +, in an aqueous solution. Example: when hydrogen chloride gas is dissolved in water, the following ### Solutions. ... the components of a mixture are uniformly intermingled (the mixture is homogeneous). Solution Composition. Mass percentageof solute= Solutions Properties of Solutions... the components of a mixture are uniformly intermingled (the mixture is homogeneous). Solution Composition 1. Molarity (M) = 4. 
Molality (m) = moles of solute liters ### Problems you need to KNOW to be successful in the upcoming AP Chemistry exam. Problems you need to KNOW to be successful in the upcoming AP Chemistry exam. Problem 1 The formula and the molecular weight of an unknown hydrocarbon compound are to be determined by elemental analysis ### Boyle s law - For calculating changes in pressure or volume: P 1 V 1 = P 2 V 2. Charles law - For calculating temperature or volume changes: V 1 T 1 Common Equations Used in Chemistry Equation for density: d= m v Converting F to C: C = ( F - 32) x 5 9 Converting C to F: F = C x 9 5 + 32 Converting C to K: K = ( C + 273.15) n x molar mass of element ### 7.4. Using the Bohr Theory KNOW? Using the Bohr Theory to Describe Atoms and Ions 7.4 Using the Bohr Theory LEARNING TIP Models such as Figures 1 to 4, on pages 218 and 219, help you visualize scientific explanations. As you examine Figures 1 to 4, look back and forth between the diagrams ### Q.1 Classify the following according to Lewis theory and Brønsted-Lowry theory. Acid-base A4 1 Acid-base theories ACIDS & BASES - IONIC EQUILIBRIA 1. LEWIS acid electron pair acceptor H, AlCl 3 base electron pair donor NH 3, H 2 O, C 2 H 5 OH, OH e.g. H 3 N: -> BF 3 > H 3 N BF 3 see ### Chapter 2 The Chemical Context of Life Chapter 2 The Chemical Context of Life Multiple-Choice Questions 1) About 25 of the 92 natural elements are known to be essential to life. Which four of these 25 elements make up approximately 96% of living ### ION EXCHANGE FOR DUMMIES. An introduction ION EXCHANGE FOR DUMMIES An introduction Water Water is a liquid. Water is made of water molecules (formula H 2 O). All natural waters contain some foreign substances, usually in small amounts. 
The water ### Paper 1 (7404/1): Inorganic and Physical Chemistry Mark scheme AQA Qualifications AS Chemistry Paper (7404/): Inorganic and Physical Chemistry Mark scheme 7404 Specimen paper Version 0.6 MARK SCHEME AS Chemistry Specimen paper Section A 0. s 2 2s 2 2p 6 3s 2 3p 6 ### Solutions CHAPTER Specific answers depend on student choices. CHAPTER 15 1. Specific answers depend on student choices.. A heterogeneous mixture does not have a uniform composition: the composition varies in different places within the mixture. Examples of non homogeneous
http://math.stackexchange.com/questions/71596/mechanical-definition-of-ordinals
Mechanical definition of ordinals

It seems that one can construct ordinals from the bottom up by successively introducing a new symbol each time a limit is taken: $$1,\ 2,\ \ldots,\ \omega,\ \omega +1,\ \omega +2,\ \ldots,\ \omega\cdot 2,\ \omega\cdot 2 +1,\ \ldots,\ \omega^{2},\ \ldots,\ \omega^{3},\ \ldots,\ \omega^{\omega},\ \ldots,\ \omega^{\omega^{\omega}},\ \ldots,\ \epsilon_{0},\ \ldots$$ Can this be taken as a (mechanical) definition of ordinals? More abstract definitions like "an ordinal is a transitive well-ordered set satisfying certain properties" are much more appealing to me. Is this mechanical definition sufficient to prove things like "each well-ordered set is order isomorphic to exactly one ordinal"?

- What do you do when you reach an uncountable ordinal – do you have an uncountable number of symbols to choose from? What if we just use each ordinal as a symbol for itself? – Carl Mummert Oct 11 '11 at 1:37
- Worse yet, the first uncountable ordinal $\omega_1$ cannot be reached as the limit of a countable sequence of smaller ordinals. So your process will give you at most the countable ordinals. – Henning Makholm Oct 11 '11 at 1:44
- @Henning: this is an argument in favor of taking each ordinal as a symbol for itself. – Carl Mummert Oct 11 '11 at 1:50
- @Carl, they are awfully hard to write down on paper (except by use of other symbols, and even then we don't get most of them), which strikes me as a rather basic requirement for symbols. – Henning Makholm Oct 11 '11 at 1:53
- In fact, the intent of the OP's method won't even get you to a nonrecursive ordinal, but it will get you to things that make $\varepsilon_{0}$ pale into insignificance (e.g. $\Gamma_{0}$, $\Gamma_{\varepsilon_{0}}$, the Bachmann-Howard ordinal, etc.). See en.wikipedia.org/wiki/Recursive_ordinal and en.wikipedia.org/wiki/Large_countable_ordinal – Dave L. Renfro Oct 11 '11 at 14:52

As remarked in the comments, this is far from sufficient to cover even the countable ordinals. Personally, I see the problem in the three dots at the end, which imply both an undefined idea of continuing this sequence and something that will terminate after at most $\omega_1$ many steps. I imagine you might get to some large countable ordinals, perhaps $\epsilon_{\epsilon_0}$ or even higher; however, this will terminate long before $\omega_1$.

Why is that a problem? Well, of course we know about well-ordered sets whose order type is uncountable. But think of this reason: let $\mu_0=\{\text{all those ordinals you wrote above}\}$, ordered by $\in$. This would be a transitive and well-ordered set; however, it is not isomorphic to any of its members.
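For the fragment the question calls "mechanical", the ordinals below $\epsilon_0$, there is in fact a completely mechanical notation: every such ordinal has a unique Cantor normal form $\omega^{\alpha_1}\cdot c_1+\cdots+\omega^{\alpha_k}\cdot c_k$ with $\alpha_1>\cdots>\alpha_k$. A toy Python sketch (an illustration added here, not from the original thread) of this notation and its ordering:

```python
# Toy encoding of ordinals below epsilon_0 in Cantor normal form:
# an ordinal is a list of (exponent, coefficient) pairs with the
# exponents (themselves such ordinals) strictly decreasing; 0 is [].
# [(a, c)] + rest  represents  omega^a * c + rest.

ZERO = []

def fin(n):
    """The finite ordinal n, i.e. omega^0 * n."""
    return [] if n == 0 else [(ZERO, n)]

OMEGA = [(fin(1), 1)]          # omega = omega^1

def cmp_ord(a, b):
    """Compare two CNF ordinals: returns -1, 0, or 1."""
    for (ea, ca), (eb, cb) in zip(a, b):
        c = cmp_ord(ea, eb)            # compare leading exponents first
        if c != 0:
            return c
        if ca != cb:                   # then leading coefficients
            return -1 if ca < cb else 1
    # equal prefix: the ordinal with more terms is larger
    return (len(a) > len(b)) - (len(a) < len(b))

# omega < omega + 1 < omega * 2 < omega^2
w_plus_1  = OMEGA + fin(1)
w_times_2 = [(fin(1), 2)]
w_squared = [(fin(2), 1)]
assert cmp_ord(OMEGA, w_plus_1) == -1
assert cmp_ord(w_plus_1, w_times_2) == -1
assert cmp_ord(w_times_2, w_squared) == -1
```

The comments above make the limitation precise: any such symbol system names only (some of) the countable ordinals, so it cannot serve as a definition of the ordinals in general.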
https://www.nature.com/articles/s41598-018-26310-x
# Sorting Five Human Tumor Types Reveals Specific Biomarkers and Background Classification Genes

## Abstract

We applied two state-of-the-art, knowledge-independent data-mining methods – Dynamic Quantum Clustering (DQC) and t-Distributed Stochastic Neighbor Embedding (t-SNE) – to data from The Cancer Genome Atlas (TCGA). We showed that the RNA expression patterns for a mixture of 2,016 samples from five tumor types can sort the tumors into groups enriched for relevant annotations including tumor type, gender, tumor stage, and ethnicity. DQC feature selection analysis discovered 48 core biomarker transcripts that clustered tumors by tumor type. When these transcripts were removed, the geometry of tumor relationships changed, but it was still possible to classify the tumors using the RNA expression profiles of the remaining transcripts. We continued to remove the top biomarkers for several iterations and performed cluster analysis. Even though the most informative transcripts were removed from the cluster analysis, the sorting ability of the remaining transcripts remained strong after each iteration. Further, in some iterations we detected a repeating pattern of biological function that wasn't detectable with the core biomarker transcripts present. This suggests the existence of a "background classification" potential in which the pattern of gene expression after continued removal of "biomarker" transcripts could still classify tumors in agreement with the tumor type.

## Introduction

Dozens of public genomic data repositories relevant to human biology have emerged to support biomedical science. These repositories include The Cancer Genome Atlas (TCGA) [1], the Genotype-Tissue Expression (GTEx) project [2,3], and the Database of Genotypes and Phenotypes (dbGaP) [4]. These databases are deep and diverse, containing data ranging from DNA sequence to epigenetic state to dynamic gene expression output.
TCGA, for example, contains multiple measurements for over 14,000 tumors of multiple types. GTEx contains the gene expression patterns and matching DNA sequences for over 11,000 human tissue samples. dbGaP contains over 2.4 million molecular assays relevant to human disease and phenotypes. Clearly, there is massive opportunity to mine these and future databases for biological insight.

A powerful quantitative measurement of genome information flow within a biological sample is the steady-state RNA-based gene expression profile; this profile is captured in a gene expression vector representing the number of RNA molecules produced by tens of thousands of genes in the specimen. A researcher can aggregate profiles from multiple samples retrieved under varied biological contexts into m × n gene expression matrices (GEMs) where rows (m) are gene or RNA transcript identifiers and columns (n) are samples. One can think of a GEM as a compendium of molecular snapshots from different points of anatomy, developmental stage, and environment. While a liver gene vector, for example, has the same genes as a neighboring pancreas gene vector, their gene expression intensities vary in a manner reflective of their underlying biology. GEMs can be normalized to reduce technical variation between biological samples [3,5,6]. Correlation analysis can be performed to identify genes whose expression patterns are synchronized across samples using software such as WGCNA [7] and KINC [8]. Similar gene expression patterns are predicted to have a common biological purpose. However, the brute-force interrogation of all samples in a GEM may not be the best approach to detect context-specific gene interactions; overrepresentation of one biological condition might drown out rare gene interactions. One approach to addressing variation between samples involves sorting the gene expression vectors into sample clusters of similar contexts.
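The GEM structure just described is easy to make concrete. The sketch below is purely illustrative (the gene names and expression values are invented, not drawn from TCGA or GTEx); it builds a tiny matrix and computes the Pearson correlation that co-expression tools such as WGCNA use to flag synchronized genes:

```python
import math

# A toy m x n gene expression matrix (GEM): rows are transcripts,
# columns are samples.  All names and values are invented.
gem = {
    "geneA": [2.0, 4.1, 6.0, 8.2],   # rises across the four samples
    "geneB": [1.0, 2.1, 2.9, 4.0],   # rises in sync with geneA
    "geneC": [9.0, 7.1, 5.2, 3.0],   # falls across the samples
}

def pearson(x, y):
    """Pearson correlation between two expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synchronized genes correlate positively; opposed genes negatively.
assert pearson(gem["geneA"], gem["geneB"]) > 0.99
assert pearson(gem["geneA"], gem["geneC"]) < -0.99
```

In a real GEM the same comparison runs over tens of thousands of transcript rows and thousands of sample columns, which is why dedicated tools such as WGCNA and KINC exist.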
Thus, liver expression profiles might produce one sample cluster, as will pancreas profiles, etc. Correlation analysis can be performed in sorted sample clusters. Sample sorting is becoming increasingly relevant given the rapid growth of samples in databases that are being interrogated for biological function. If a sample sorting technique can robustly sort mislabeled or outlier samples into alternate groups, then noise should be reduced for each group, possibly making it easier to identify meaningful biomarkers and pathways.

Two approaches exist for clustering GEMs into sample groups of similar global gene expression patterns: knowledge-dependent and knowledge-independent methods. Both can increase the probability of detecting gene expression patterns relevant to the sorted biological contexts. Knowledge-dependent methods sort the GEM into sub-GEMs based upon annotations associated with the samples. Thus, one could prepare a mixed knowledge-independent condition GEM from a data repository like TCGA and then sort the GEM into tumor types prior to analysis. This approach, while logical – the metadata associated with TCGA is well curated – weakens if the sample label is assigned incorrectly or is representative of multiple subgroups. This can happen, for example, when a tumor type is incorrectly annotated or when tumors are related by molecular architecture as opposed to tissue of origin [9]. A less-biased, knowledge-independent approach uses clustering methods to sort datasets into groups based only upon their global expression pattern. For example, k-means clustering of global gene expression profiles sorts samples into a pre-defined number of groups and has been shown to improve gene-gene interaction detection [10,11,12]. Sample metadata is assigned after sorting to provide conditional context to the clusters.
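The k-means sorting step just described can be sketched in a few lines. This toy uses invented two-gene sample vectors and a deterministic initialization (the first k samples as starting centers); it illustrates the idea only, not the published workflow:

```python
def kmeans(samples, k, iters=20):
    """Minimal k-means: partition sample expression vectors into k groups."""
    centers = [list(s) for s in samples[:k]]   # deterministic toy init
    groups = []
    for _ in range(iters):
        # assign each sample to its nearest center (squared distance)
        groups = [[] for _ in range(k)]
        for s in samples:
            d = [sum((a - c) ** 2 for a, c in zip(s, ctr)) for ctr in centers]
            groups[d.index(min(d))].append(s)
        # move each center to the mean of its group (keep it if group empty)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

# Two invented "tumor-type-like" expression profiles plus small variation.
samples = [[1.0, 0.1], [1.2, 0.0], [0.9, 0.2],   # type-1-like
           [5.0, 4.9], [5.2, 5.1], [4.8, 5.0]]   # type-2-like
groups = kmeans(samples, k=2)
assert sorted(len(g) for g in groups) == [3, 3]
```

The need to fix k up front is exactly the bias noted in the text: choose the wrong number of distinct clusters and the grouping is distorted.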
One sample cluster might be enriched for a specific biological context label, such as a tumor type, and any genetic relationships from that cluster can be associated with that label. A drawback to k-means clustering is that the number of clusters must be specified beforehand, resulting in bias in the sample grouping if an inappropriate number of distinct clusters is chosen. There are emerging knowledge-independent clustering methods that can be applied to GEMs and that do not introduce as much bias. One method, t-distributed stochastic neighbor embedding (t-SNE13,14) – like most knowledge-independent sample clustering approaches – relies upon strongly reducing the dimensionality of the gene expression space prior to sample comparison. Two common algorithms used to perform this task are principal component analysis (PCA15) and singular value decomposition (SVD15). Typically, this machine learning technique projects high-dimensional data into two or three dimensions. It should be noted that t-SNE has been applied to GTEx16 and TCGA17 datasets, where the TCGA study used t-SNE as part of an integrated omics sorting workflow called MEREDITH17. Dynamic Quantum Clustering (DQC18), unlike other clustering approaches, does not need to use strong dimensionality reduction to analyze high-dimensional data; however, it is common in a DQC analysis to use modest SVD-based dimensionality reduction to speed up the analysis. DQC begins by replacing each column of a GEM (that can be thought of as a vector in an m-dimensional Euclidean space) by an analytic function in m-variables; specifically, each column of the GEM is replaced by a Gaussian function centered on the m-dimensional location specified by the corresponding gene expression vector. The sum of these functions is then used to create a potential function, V, that is a proxy for the density of the data in feature space19.
By construction, the local minima and saddle points of this potential function represent regions of higher local density of the data. This potential function is then used to create a Hamiltonian operator as Equation (1): $$H=-\,\frac{1}{2m}{\nabla }^{2}+V$$ (1) Each individual Gaussian, ψ(x), is then evolved according to the corresponding Heisenberg equation of motion20 in Equation (2): $$\psi (x,\,\delta t)={e}^{-i\delta tH}\,\psi (x)$$ (2) The center of this time-dependent Gaussian will move a short distance, implementing a modified, operator form of gradient descent that moves the original center towards the nearest local minimum of V. Due to the non-local effect of quantum evolution, there are important differences that allow DQC to avoid the difficulties associated with simple gradient descent for many points in high dimension. In particular, choosing a low value for the mass parameter, m, exploits quantum tunneling to avoid getting trapped in small fluctuations, avoiding many of the issues related to working in high dimension. Another major difference between a DQC analysis in m-dimensions and other data-mining methods is that the entire analysis is encoded as an m-dimensional animation. This visual presentation of the computation provides a detailed record of each computational stage, showing how clusters and other structures form. A benefit of the DQC approach is that it avoids introducing bias; there is no need to invent hypotheses to test, assume the number or type of clusters that exist, or invoke prior sample knowledge. Selected frames from DQC animations are shown in this report and the full animations are available for viewing in Supplementary Videos 1 and 2. 
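The potential-function picture can be illustrated with a classical stand-in: below, V is the negative of a sum of Gaussians centered on 1-D data points (a Parzen-window density proxy), and a finite-difference gradient descent moves a point to the nearest local minimum. This is a simplified sketch of the geometry DQC exploits, not the operator evolution of Equations (1) and (2); the cluster positions, sigma, and step size are illustrative:

```python
import numpy as np

def potential(x, centers, sigma=1.0):
    """Proxy for data density: V is the negative of the summed Gaussians
    centered on the data points, so dense regions are local minima."""
    return -np.exp(-((x - centers) ** 2) / (2 * sigma ** 2)).sum()

def descend(x0, centers, step=0.05, iters=500):
    """Plain gradient descent toward the nearest local minimum of V,
    a classical analogue of the operator-based descent DQC performs
    (DQC additionally uses quantum tunneling to escape small wiggles)."""
    x = x0
    for _ in range(iters):
        eps = 1e-5  # central-difference estimate of dV/dx
        grad = (potential(x + eps, centers) - potential(x - eps, centers)) / (2 * eps)
        x -= step * grad
    return x

# Two 1-D "sample clusters" around 0 and 10; each start flows to its own minimum.
centers = np.array([-0.3, 0.0, 0.3, 9.7, 10.0, 10.3])
left, right = descend(0.4, centers), descend(9.5, centers)
print(round(left, 2), round(right, 2))  # points settle near 0 and 10
```

In real DQC the mass parameter m of Equation (1) controls tunneling, letting points escape shallow minima that would trap this classical descent.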
This study analyzed a mixed tumor type GEM from TCGA containing the RNAseq expression profiles of 2,016 tumors from five tumor types: lower grade glioma (LGG), glioblastoma multiforme (GBM), ovarian serous cystadenocarcinoma (OV), urothelial bladder cancer (BLCA), and papillary thyroid carcinoma (THCA). We discuss the clustering and biomarker discovery potential of the t-SNE and DQC approaches. In addition, we examine the effect of salient biomarker removal and the ability of both techniques to continue to classify the tumors into meaningful groups in the absence of transcripts with high classification potential. When applied to deeply sequenced tumors, our approach can be used to detect biomarker combinations that sort tumor types without prior knowledge of where the tumor was initiated.

## Results

### Tumor Separation via DQC and t-SNE Analyses

We first examined the clustering potential of DQC18 and compared it to an approach that performs strong dimensional reduction, t-SNE13. To do this, we constructed a mixed tumor GEM consisting of 2,016 samples from TCGA1. Specifically, we mixed tumor expression profiles from five TCGA labeled sample groups: bladder cancer (BLCA; n = 427), glioblastoma (GBM; n = 174), lower grade glioma (LGG; n = 534), ovarian carcinoma (OV; n = 309), and thyroid cancer (THCA; n = 572). It should be noted that some of the groups contained low numbers of non-tumor samples with the same label as the tumor type (BLCA = 19/427; GBM = 5/174; LGG = 0/534; OV = 0/309; THCA = 59/572). This GEM was used as the input for both the DQC and t-SNE analyses (Fig. 1) discussed in the following sections. We began by using DQC to produce animations showing the “quantum evolution” of the 2,016 tumor samples for the SVD-decomposition of the GEM containing all 73,599 transcripts (Fig. 1A). This initial analysis was done in both 50 and 60 SVD-dimensions.
These values were chosen by the requirement that restricting to the first 50 or 60 SVD eigenvalues would approximate the original mixed GEM to better than the 1.5% level (as measured by the change in matrix norm defined as the square root of the sum of the squares of every entry in the matrix). Since the overall pattern of data-separation is evident in all dimensions after this DQC evolution, we only show plots for the first twelve SVD-dimensions (frames of dimensions 1–3 in Fig. 1). This DQC analysis produced clusters with structure in which BLCA, OV, and THCA samples formed extended flat shapes populated by many distinct sub-clusters. Note, however, that the overall shape of the region where these clusters are located only showed a slight change during DQC evolution. In contrast, the GBM and LGG samples showed obvious change with DQC evolution, first forming complex filamentary structures that then separate into two somewhat distorted clusters. After the initial DQC-evolution, we used DQC based feature selection (see Materials and Methods for details) to discover 48 “core transcripts” (Supplemental Table 1) that produced a very similar view of the data. We then subjected a subset GEM containing just the columns corresponding to these 48 transcripts to DQC evolution (Fig. 1B) in the full 48 dimensions. Because these 48 “core transcripts” do a very good job of representing the important structure in the data, the DQC results are very similar to the DQC evolution of the full GEM. As before, BLCA, OV, and THCA samples formed planes in three dimensions occupied by many distinct clusters. However, evolution of the GEM restricted to the “core transcripts” revealed that the GBM and LGG samples formed a “brain arch” with GBM samples distributed at one extreme and LGG along the rest of the structure (circled in Fig. 1B). 
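The matrix-norm criterion used above to pick the 50-60 SVD dimensions can be made concrete: the relative Frobenius-norm error of a rank-N truncation is computable directly from the discarded singular values. A small sketch, with a toy low-rank matrix and tolerance standing in for the TCGA data and its 1.5% level:

```python
import numpy as np

def svd_rank_for_tolerance(gem, rel_tol=0.015):
    """Smallest number N of SVD components whose rank-N reconstruction
    differs from the GEM by less than rel_tol in relative Frobenius norm
    (the square root of the sum of squares of every matrix entry)."""
    U, s, Vt = np.linalg.svd(gem, full_matrices=False)
    total = np.sqrt((s ** 2).sum())
    for N in range(1, len(s) + 1):
        # Error of the rank-N truncation comes from the dropped singular values.
        err = np.sqrt((s[N:] ** 2).sum()) / total
        if err < rel_tol:
            return N
    return len(s)

# A rank-2 matrix plus tiny noise needs only ~2 components.
rng = np.random.default_rng(1)
low_rank = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 40))
gem = low_rank + 1e-4 * rng.normal(size=(30, 40))
print(svd_rank_for_tolerance(gem))  # → 2
```

Applied to a real GEM, this scan over N is how a 1.5% tolerance translates into a concrete number of retained SVD dimensions.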
This strong separation of GBM from LGG is evidence that – at least for glial cell tumors – the 48 “core transcripts” are providing information about tumor type and not just tissue of origin. It should be noted that only the DQC analysis revealed a strong separation of GBM and LGG tumors; the t-SNE analysis divided these tumors into several clusters, but none of these sub-clusters were highly enriched for a specific tumor type. The geometry visible in the previous plot changes markedly when the TCGA matrix is analyzed with the 48 core transcripts removed (Fig. 1C). In this case the shape of the data is entirely different, in that it exhibits four clearly separated lobes. Still, DQC evolution of this dataset showed that all five tumor types could be easily separated from one another. This surprising result led to further study described later in this report. Finally, to test the efficacy of DQC-based feature selection, we showed that repeated selection of 48 random transcripts did not separate the tumors into tissue of origin by t-SNE or DQC (representative result in Fig. 1D). Sample clustering of the full TCGA GEM was repeated using the t-SNE-HDBSCAN pipeline (Fig. 1A–D). Like DQC, t-SNE segregated the tumors into five groups using all 73,599 transcripts (Fig. 1A), the 48 core transcripts (Fig. 1B), or all transcripts minus the 48 core transcripts (Fig. 1C). Forty-eight random transcripts did not cluster the tumors via t-SNE (Fig. 1D; one of 20 runs shown; all gave similar results). The 48-core transcript subset segregated the five tumor types into nine clusters in two dimensions as identified using the HDBSCAN/cluster ensembles consensus clustering approach (Fig. 1B). Consensus clusters are labeled as numbers in Fig. 1B and discussed below. In contrast with DQC, the 48 core transcripts failed to produce t-SNE embeddings that cleanly separated the five tumor types, failed to cleanly separate GBM from LGG, and failed to reveal the “brain arch” (Fig.
1B) although in general t-SNE embedding produced clusters that segregated more distinctly than DQC.

### The “Brain Arch” Tumor Substructure

DQC analysis revealed an interesting substructure between LGG and GBM brain tumors. To examine this structure in more detail, we dissected the arch samples into seven groups by k-means clustering (Fig. 2A). Groups 1 through 5 are primarily LGG tumors while groups 6 and 7 are GBM. Interestingly, visualizing the expression levels across the “brain arch” shows a trend of epithelial, thyroid, and other genes turned on at the left-hand leg of the arch (LGG-enriched group 1) which tend to be off at the right-hand leg (GBM-enriched group 7). There also appear to be immune response, extracellular matrix, and differentiation genes turned on at the bottom of the right-hand leg of the arch (GBM-enriched group 7) and off at the bottom of the left-hand leg (LGG group 1).

### Annotating Tumor Consensus Clusters

Tumors that cluster by gene expression pattern would be expected to show enrichment for tumor type label and other attributes. Enrichment analysis was performed on the consensus clusters labeled in Fig. 1B (p < 0.001). TCGA patient attributes for age, race, ethnicity, gender, and tumor stage were available for the majority of the 2,016 tumor samples. t-SNE embeddings of the 48 core transcripts with patient attribute values labeled by color are shown in Fig. 3 and listed in Fig. 4. All clusters were statistically enriched (p < 0.001) for at least one tumor type label. Clusters 6, 7, and 8 were enriched for both GBM and LGG tumor type labels. Thyroid tumor clusters were enriched for the female gender. One of the bladder tumor clusters and two of the mixed brain tumor clusters were enriched for the male gender. The bladder and ovarian tumor clusters were uniformly enriched for age above 40 years, while several of the thyroid and brain tumor clusters were enriched for age below 40.
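Each per-cluster enrichment test of this kind reduces to a 2×2 contingency test: label counts inside the cluster versus label counts in the full sample set. A minimal sketch using the standard chi-squared statistic, with the df = 1 critical value 10.828 standing in for the p < 0.001 threshold (the counts below are invented, not TCGA values):

```python
def chi2_enrichment(in_cluster_with, in_cluster_total, all_with, all_total):
    """2x2 chi-squared statistic testing whether a cluster is enriched
    for a label (e.g. a tumor type) relative to the full sample set.
    Enrichment is called when the statistic exceeds 10.828, the df = 1
    critical value corresponding to p = 0.001."""
    a = in_cluster_with                       # labeled, in cluster
    b = in_cluster_total - a                  # unlabeled, in cluster
    c = all_with - a                          # labeled, outside cluster
    d = (all_total - in_cluster_total) - c    # unlabeled, outside cluster
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return stat, stat > 10.828

# 90 of 100 cluster members carry the label vs. 200 of 1000 samples overall.
stat, enriched = chi2_enrichment(90, 100, 200, 1000)
print(enriched)  # → True
```

A cluster whose label frequency matches the background (e.g. 20 of 100 against 200 of 1000) yields a statistic of zero and is not called enriched.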
The age enrichment corresponded roughly to stage enrichment: both thyroid clusters were enriched for tumors in their earliest stage while the mixed BLCA-OV clusters 2, 3, and 4 (where data was available) were enriched for stage IV tumors. Little enrichment was observed based on race or ethnicity although thyroid-enriched cluster 0 was also enriched for “Asian” as a race. The mixed GBM-LGG tumor cluster 6 was enriched for the annotation term “white”. It is worth noting that we did not attempt to correct for bias that may exist within the TCGA database itself with regard to sample collection.

### Tumor Classification Potential by DQC-Based Transcript Selection and Removal

As can be seen in Fig. 1C, the initial reduced GEM (i.e. the one obtained from the original GEM by removing the 48 core transcripts) can still be used to successfully segregate the tumors into five groups via either t-SNE or DQC. Moreover, the DQC analysis once again successfully separated the GBM tumors from the LGG tumors. So, we see that – at least for the case of glial cell tumors – information about tumor type and not just tissue of origin is being encoded in the reduced GEM. Still, brain tumors aside, the substructures observed by both DQC and t-SNE distinguished the remaining tumor types. Why would classification occur in the absence of the core biomarkers? It is well known that hundreds to thousands of genes might be differentially expressed between tumor types. In the case of GBM and LGG, it has previously been reported that 2275 genes are differentially expressed between these tumor types21. Therefore, one would expect several thousand transcripts to have a combined sorting ability as the gene set without core biomarkers is still likely embedded with differentially expressed genes. However, we were curious about the effect of deeper biomarker removal on sample sorting. Thus, we performed successive rounds of DQC-based feature (i.e. important transcript) selection and removal.
Since this part of the analysis was only focused on how systematic removal of features would affect the ability to separate tumors, we decided to expedite the analysis by limiting DQC-based feature selection to the first 12 SVD dimensions to identify the next layer of important transcripts. For these 12 eigenvectors, we plotted the sorted absolute values of their components and set a threshold based upon breaks in the plot. These thresholds steadily decreased as we repeated the process of important transcript selection and removal. At each iteration, we checked that the new set of important transcripts alone could still separate the glial cell tumors by diagnosis (Fig. 5) and not just tissue of origin. Furthermore, we tested the gene set for the significant enrichment of biological annotation terms (Fig. 6; Supplemental Table 2), removed the important transcripts, and then repeated the transcript selection process. We repeated this procedure a total of 18 times, thus identifying multiple layers of important transcripts corresponding to relaxed threshold magnitudes for the SVD-eigenvectors. Remaining transcripts after iterative removal of important transcripts were studied by both DQC and t-SNE (iterations 1–9 shown in Fig. 5). All iterations reveal varying capacity of the remaining transcripts to segregate the five tumor types. The number of transcripts in each interval also varied greatly. The numbers of transcripts identified at each level – from iterations 1 to 18 – are 48 (core transcripts from the full GEM described above), 57, 55, 22, 22, 73, 55, 23, 146, 87, 41, 34, 413, 280, 209, 119, 80, and 2101. The iterations with the fewest transcripts, iterations 4 and 5, identified 22 RNA transcripts each; the largest, at 2101 transcripts, had significant tumor “classification potential” despite the population of transcripts being the least sensitive as measured by difference of absolute value of the components of the SVD-eigenvectors.
We performed DQC evolutions for iterations 1–9 but due to size constraints, a single example DQC evolution for iteration 9 can be found in Supplemental Video 1. In this analysis, we noticed a trend where, for both DQC and t-SNE analyses, layers with larger transcript population size generally exhibit greater capacity for tumor segregation (Fig. 6A). In contrast, performing biochemical pathway enrichment analysis on the transcripts at each level, we see a decrease in biological function (i.e. enriched Reactome pathways; Fig. 6B) at each level. While the first core transcript iteration contains a complex set of tumor type and tissue relevant pathways, iterations 2, 3, 4, 6, 9, and 13 repeat identical pathways (Fig. 7).

### “Background” Tumor Classification Potential

To test if the effect of classification potential was simply due to the number of transcripts in the GEM, random samplings of transcripts from the TCGA matrix were taken to determine the tumor classification potential of random GEMs of increasing size (Fig. 8). Random samplings of 200 transcripts or fewer either produced indistinct clusters or clusters of mixed tumor type identity. Larger random samples of transcripts, when t-SNE embedded and HDBSCAN clustered, were generally better able to segregate the five tumor types. We term this random classification potential the “background classification potential” as opposed to the more specific biomarker classification potential seen in early iterations of DQC-based feature selection.

## Discussion

The motivation for this study was to identify better methods for segregating samples into biologically-relevant groups based upon quantitative dimensions (i.e. steady state RNA expression). We expected that these dimensions would confer biological classification potential, which in this study was the separation of tumor types without prior knowledge of the sample origin.
To achieve our goal, we tested two approaches, DQC and t-SNE, both of which grouped samples into clusters that made biological sense based on sample annotation enrichment. The biological relevance of the tumor clusters was evidenced by enrichment of annotations relative to all tumors. The consensus clustering technique applied to one thousand t-SNE embeddings revealed nine tumor subpopulations (Figs 3 and 4). For example, we observed in cluster 3 that BLCA and OV tumors were similar enough to co-cluster and be enriched for late onset (≥40 years) whereas cluster 4 was late onset but restricted to OV tumors. The unusual co-clustering of disparate tumor types in cluster 3 warrants further investigation. Further, we identified four clusters (0, 5, 6, 8) that were detected in younger patients (≤40 years) and two additional clusters of tumors found in older patients (1, 2). We also observed an enrichment in clusters 0 and 5 for THCA and female attributes. It should be noted that 59/572 (10.3%) of samples labeled THCA were actually “Solid Tissue Normal” samples. These clusters make sense as thyroid cancer is 2.9 times more likely to occur in females22. Clusters 1, 6, 8 showed a male bias and clusters 6 and 8 are associated with brain cancer; this finding corroborates the male bias seen in brain tumor incidence23. None of the brain tumor samples were annotated as normal. With a deeper attribute dataset, we believe this approach would reveal higher quality sample groups and even more insight for the exploration of tumor biology. While both t-SNE and DQC performed well in clustering the tumors, there were some interesting differences. DQC revealed interesting patterns that were not observed with t-SNE. For example, the DQC “brain arch” driven by the 48 core transcripts revealed an interesting substructure connecting LGG and GBM brain tumors (Fig. 2).
When we clustered the samples across the DQC arch from LGG to GBM, we found that the expression levels of the 48 core transcripts varied across the “brain arch” and exhibited a trend where epithelial, thyroid, and other genes were often up-regulated in samples at the bottom of the left-hand leg of the arch (LGG group 1) and often down-regulated in samples at the bottom of the right-hand leg (GBM group 7). There also appear to be immune response, extracellular matrix, and differentiation genes turned on at the right-hand leg of the arch (GBM group 7) and off at the bottom of the left-hand leg (LGG group 1). These data suggest that the 48 core biomarkers have excellent classification potential for all tumors. Furthermore, a subset of these genes function differently in GBM and LGG tumors, which have very different median survival times of 14.6 months (GBM24) versus 7 years (LGG25). This is not the first study to identify biomarkers and classify TCGA tumor types. Martinez et al. identified eight transcriptional superclusters using unsupervised hierarchical clustering of expression profiles between tumor sub-types across twelve TCGA tumor types26. In contrast to our approach, their study analyzed the top 1500 genes and used prior knowledge of tumor type in their analysis, whereas our study input the full GEM and sorted tumors without using tumor type knowledge in the sorting process. Li et al. examined TCGA RNAseq data using a classification strategy where they classified 9096 tumor samples from 31 tumor types using GA/KNN as the classification engine27. That classification study used prior knowledge of tumor type in the classification process whereas our study was blind to the sample labels. In Hoadley et al., a cluster-of-cluster (COCA) technique was used on RNAseq profiles (and other tumor molecular measurements) to determine a molecular taxonomy of 12 cancer types, discovering 11 molecular signature-based subtypes9.
Their study showed that tissue of origin labels were not always indicative of the tumor’s molecular basis. In contrast to our study, they only used the 6000 most variable genes in their analysis. It is also typical to use many differentially expressed genes to characterize tumor type or subtype. Ceccarelli et al., for example, used 2275 differentially expressed genes to build “molecular profiles” of LGG and GBM tumors21 and Verhaak et al. identified 840 genes predictive of subtype in the case of GBM tumors alone28. We are aware that the 48 core transcripts used in this analysis are probably a subset of such a larger population of classifier genes. Our results confirm there are two means of classifying samples: (1) using a small number of transcripts of high significance or (2) many transcripts of low relative significance. By examining the functions of genes in the first iteration of DQC-based transcript selection (Fig. 7; Supplemental Table 2), it was clear that the first 48 core transcripts show more relevance to the tumor phenotype and thus are more likely to be mechanistic in tumor progression. We also demonstrated the existence of a “background classification” effect where a random sample – on the order of 200 transcripts – recapitulates the classification potential of the transcripts identified by DQC-based feature selection (Fig. 8) and supports the convention of using sets of hundreds or thousands of differentially expressed genes to characterize tumor types. It is possible, then, that relaxing the transcript significance threshold used in DQC-based feature selection may not identify additional transcripts of interest so much as reveal the high classification potential arising from the aggregate small effects of many transcripts.
While the “background classification” effect suggests that a sufficiently large set of random gene expression vectors has classification potential, it seems unlikely that all these genes would be involved in tumor-specific biology. We assumed that through successive levels of DQC-based transcript selection we would remove differentially expressed genes and detect essentially random genes without collective function that merely contain the background classification potential. In fact, we did find that as we continued this process, the number of pathways we detected decreased after correcting for the number of transcripts found in the pathway (Fig. 6B). However, functional enrichment analysis, as defined by Reactome biochemical pathway enrichment in a gene group relative to all genes in the genome29, revealed a repeating pathway enrichment pattern for iterations 2, 3, 4, 6, and 7. The pathways that appear in these iterations appear to control protein synthesis and may be a signal for general cell growth and proliferation processes. It is interesting to note that the threshold used in the DQC-based feature selection process for the first seven levels did not decrease significantly. Rather, the significant drop in threshold was only seen in higher iterations. The drop-off in the number of observed reaction pathways appears to coincide with the drop-off of the threshold that had to be used in the selection process. A future experiment could determine if these repeating genes and pathways are present in normal tissue GEMs (e.g. GTEx datasets2), which would imply tissue specificity as opposed to a tumor type property. In conclusion, we describe and contrast two very different sample clustering algorithms: DQC and t-SNE. We implemented dimensionality reduction to identify the important transcripts and all subsequent DQC analysis was done without further dimensionality reduction. t-SNE analysis involved further embedding into two spatial dimensions.
Both techniques are effective at clustering samples to detect substructures in a GEM. The fact that DQC worked in the full 48-dimension space of core transcripts is likely the reason it reveals more subtle aspects of the data. Unexpectedly, we discovered a confounding effect whereby many random transcripts can classify and sort samples. We also repeatedly showed that this sorting reflects tumor type and not merely tissue of origin. This is an early but intriguing concept that should be addressed if a researcher seeks cause-and-effect as opposed to tumor type-associated biomarkers. Finally, while we applied these techniques to tumor data, the same approach can be applied beyond genomics contexts, a fact that has been previously shown for DQC19.

## Methods

### Gene Expression Matrix (GEM) Preparation

RNAseq profiles, RSEM-processed at the transcript level, for five public tumor types were downloaded on April 1, 2016 from the TCGA Data portal at https://gdc-portal.nci.nih.gov. A total of 2,016 datasets were obtained, comprising the types as labeled by TCGA: BLCA (n = 427), GBM (n = 174), LGG (n = 534), OV (n = 309), and THCA (n = 572). Each expression profile was merged into a single gene expression matrix (GEM) with 73,599 transcripts labeled with knowngene5 UC-Santa Cruz genome database gene model identifiers. It is important to address potential batch effects – technical and biological variation between samples of the same group – in a high throughput genomics study6. The TCGA Batch Effects webserver (http://bioinformatics.mdanderson.org/tcgambatch/) was queried to gain insight on batch effects present in each cancer subtype. It was found that the Dispersion Separability Criterion (DSC) scores for the RNAseqv2 isoform data for each cancer subtype indicate a mild presence of batch effects in the data used in our study (p < 0.0005).
The DSC scores for BLCA, GBM, LGG, OV, and THCA are 0.310, 0.000, 0.298, 0.089, and 0.252, indicating a low ratio of dispersion between vs. within batches for these cancer types. Lauss et al. found that performing a quantile normalization on colon cancer RNAseqv2 data from TCGA helped to reduce batch effects present in this data6. We performed a similar quantile normalization on the GEM used in our study. Furthermore, we did not detect any outliers using a Kolmogorov–Smirnov test as performed in8. The TCGA GEM was quantile normalized and randomly sorted to create a single tumor GEM for input into the DQC and t-SNE pipelines. A small fraction of the subtype-labeled samples (83 out of 2,016, or 4.12%: 19 BLCA, 59 THCA, 5 GBM) were “Solid Tissue Normal” samples. These and other clinical annotations associated with each TCGA sample are included in Supplemental Table 3.

### DQC-Based Important Transcript Selection

As DQC evolution proceeds it becomes apparent that sample data points separate well in some dimensions and not in others (see sample DQC tumor evolution animation in Supplemental Video 2). This separation allows for a novel form of feature selection where we select a subset of the RNA transcripts that play the most important role in the evolution of the data. To describe the DQC selection process it is convenient to consider the transpose of the original GEM, so that the rows are the tumor samples and the columns are the RNA transcript labels. The SVD-decomposition of this matrix rewrites the m × n GEM (m samples, n RNA transcripts) as Equation (3): $$GE{M}_{ij}=\sum _{l}{U}_{il}{S}_{ll}{V}_{lj}^{t}$$ (3) where U is an m × m matrix, S is an m × m diagonal real matrix with non-vanishing entries only along the diagonal arranged in decreasing value, and Vt is an m × n matrix. The rows of Vt are unit vectors in the n-dimensional space of transcripts and define a new coordinate system best adapted to plotting the data.
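The quantile normalization step applied to the GEM can be sketched in a few numpy lines. This is a generic implementation of the standard procedure, not the exact script used in the study:

```python
import numpy as np

def quantile_normalize(gem):
    """Force every column (sample) of the GEM onto the same empirical
    distribution: rank each column, then replace each value with the
    mean across columns at that rank."""
    order = np.argsort(gem, axis=0)                   # sort order per sample
    ranks = np.argsort(order, axis=0)                 # rank of each value
    mean_at_rank = np.sort(gem, axis=0).mean(axis=1)  # target distribution
    return mean_at_rank[ranks]

gem = np.array([[5.0, 4.0, 3.0],
                [2.0, 1.0, 4.0],
                [3.0, 4.0, 6.0],
                [4.0, 2.0, 8.0]])
out = quantile_normalize(gem)
# After normalization every column carries identical sorted values.
print(np.allclose(np.sort(out, axis=0), np.sort(out, axis=0)[:, :1]))  # → True
```

Because each sample ends up with an identical value distribution, downstream clustering compares only the ordering of genes within samples, which is what suppresses batch-to-batch intensity shifts.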
As the corresponding eigenvalue of S goes down, the variance of the data in that direction drops as well. In general, the rows of Vt are linear combinations of the original features that correspond to columns in GEM. It is common to dimensionally reduce the matrix GEM by choosing an integer N that is smaller than the number of rows in GEM and defining Equation (4) $$GE{M}_{ij}^{N}=\sum _{l=1}^{N}{U}_{il}{S}_{ll}\,{V}_{lj}^{t},$$ (4) where the sum over l goes from 1 to N. The square root of the sum of the squares of the left-out diagonal terms of S provides an upper bound on the absolute error one makes by using the dimensionally reduced matrix instead of the original matrix. It is very convenient, when doing DQC evolution, to replace the dimensionally reduced matrix $$GE{M}_{ij}^{N}$$ by UN, an m × N matrix made up of the first N columns of U. Note that while the dimensionally reduced matrix UN has fewer columns than the original GEM it does not depend on fewer features, since each row of Vt depends upon many more than N features. DQC-based feature selection simply examines those rows of Vt that correspond to directions where the DQC evolved data clearly separates. For each of these vectors, we plot the absolute value of the eigenvector components, sorted in decreasing order. These plots tend to show a close group of larger values, followed by a gap, followed by smaller values. We select those features corresponding to large values of the components of the row of Vt in question. We combined features identified in this way to arrive at our set of selected features. We should emphasize that this simple feature selection technique requires prior DQC evolution of the data to identify the useful directions of the SVD-decomposition. It should be noted that no theorem guarantees this approach will produce an exhaustive list of the most important transcripts and that no information about tumor types was used at any stage of the selection process.
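The gap-based selection of large-|component| features from one row of Vt can be sketched as follows. The minimum-gap rule below is our simplification of the visual break-finding described above, and the example row is hypothetical:

```python
import numpy as np

def select_features_from_row(vt_row, min_gap=0.1):
    """Pick the features with large |components| in one row of Vt:
    sort the absolute values in decreasing order, find the biggest
    consecutive drop (the 'gap' in the sorted plot), and keep every
    feature above it. If no drop exceeds min_gap, keep everything."""
    idx = np.argsort(-np.abs(vt_row))          # feature indices, largest first
    vals = np.abs(vt_row)[idx]
    drops = vals[:-1] - vals[1:]               # consecutive decreases
    cut = int(np.argmax(drops)) + 1 if drops.max() >= min_gap else len(vals)
    return sorted(idx[:cut].tolist())

# Components 2 and 5 dominate this (hypothetical) Vt row; the rest sit
# below a clear gap in the sorted plot.
row = np.array([0.02, 0.05, 0.70, 0.01, 0.04, 0.68, 0.03])
print(select_features_from_row(row))  # → [2, 5]
```

In the actual analysis this selection is repeated for each well-separating SVD direction and the per-row selections are unioned into the final transcript set.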
### t-SNE Analysis and Consensus Cluster Detection

The full or partial GEM was evaluated separately by a dimensionality reduction (embedding), clustering, consensus, and enrichment pipeline. Embedding was performed using t-Distributed Stochastic Neighbor Embedding (t-SNE13) using the Python implementation from https://github.com/DmitryUlyanov/Multicore-TSNE. For each t-SNE run, one thousand two-dimensional randomly initialized embeddings were created. Each embedding was clustered individually using Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN30). A consensus of nine clusters was determined over the whole set of clustered embeddings using the Cluster Ensembles method31. Label enrichment of these consensus clusters for patient attributes associated with the tumor samples was evaluated using a Chi-squared test (p < 0.001).

### Background Classification Potential

Repeated samples, varying in size, of random subsets of transcripts were extracted from the TCGA matrix and embedded using t-SNE. Twenty random sample subsets of size 25, 50, 75, 100, 125, 150, 175, and 200, ten samples of subsets of size 225, 250, 300, 400, and 500, and five subsets of size 100 were evaluated. These were treated with the same pipeline as above where the samples were embedded, clustered with HDBSCAN, consensus clusters were assigned by Cluster Ensembles, and then each of these consensus clusters was evaluated for tumor type enrichment. The percentage of clusters enriched for at least one tumor type was recorded.

### Functional Enrichment Analysis

Functional enrichment of the core and subsequent iterations of transcripts was performed using an in-house Perl script modeled after the online DAVID tool at https://david.ncifcrf.gov. Tested attributes include human transcripts mapped to terms from InterPro32, PFAM33, the Gene Ontology (GO)34, the Kyoto Encyclopedia of Genes and Genomes (KEGG)35, Reactome36 and MIM37.
Terms that were present in a gene list more often than in the genomic background were considered enriched (FDR < 0.01). The full enrichment list is shown in Supplemental Table 2.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Weinstein, J. N. et al. The Cancer Genome Atlas Pan-Cancer analysis project. Nat Genet 45, 1113–1120, https://doi.org/10.1038/ng.2764 (2013).
2. Mele, M. et al. Human genomics. The human transcriptome across tissues and individuals. Science 348, 660–665, https://doi.org/10.1126/science.aaa0355 (2015).
3. Lonsdale, J. The Genotype-Tissue Expression (GTEx) project. Nat Genet 45, 580–585, https://doi.org/10.1038/ng.2653 (2013).
4. Wong, K. M. et al. The dbGaP data browser: a new tool for browsing dbGaP controlled-access genomic data. Nucleic Acids Res 45, D819–D826, https://doi.org/10.1093/nar/gkw1139 (2017).
5. Hruz, T. et al. Genevestigator v3: a reference expression database for the meta-analysis of transcriptomes. Advances in bioinformatics 2008, 420747, https://doi.org/10.1155/2008/420747 (2008).
6. Lauss, M. et al. Monitoring of technical variation in quantitative high-throughput datasets. Cancer informatics 12, 193–201, https://doi.org/10.4137/cin.S12862 (2013).
7. Langfelder, P. & Horvath, S. WGCNA: an R package for weighted correlation network analysis. BMC Bioinformatics 9, 559, https://doi.org/10.1186/1471-2105-9-559 (2008).
8. Ficklin, S. P. et al. Discovering Condition-Specific Gene Co-Expression Patterns Using Gaussian Mixture Models: A Cancer Case Study. Scientific Reports 7, 8617 (2017).
9. Hoadley, K. A. et al. Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin. Cell 158, 929–944, https://doi.org/10.1016/j.cell.2014.06.049 (2014).
10. Feltus, F. A., Ficklin, S. P., Gibson, S. M. & Smith, M. C.
Maximizing capture of gene co-expression relationships through pre-clustering of input expression samples: an Arabidopsis case study. BMC Syst Biol 7, 44, https://doi.org/10.1186/1752-0509-7-44 (2013).
11. Ficklin, S. P. & Feltus, F. A. A systems genetics approach and data mining tool to assist in the discovery of genes underlying complex traits in Oryza sativa. PLoS ONE 8, e68551, https://doi.org/10.1371/journal.pone.0068551 (2013).
12. Botia, J. A. et al. An additional k-means clustering step improves the biological features of WGCNA gene co-expression networks. BMC Syst Biol 11, 47, https://doi.org/10.1186/s12918-017-0420-6 (2017).
13. van der Maaten, L. J. P. & Hinton, G. E. Visualizing High-Dimensional Data Using t-SNE. Journal of Machine Learning Research 9, 2579–2605 (2008).
14. van der Maaten, L. Accelerating t-SNE using Tree-Based Algorithms. Journal of Machine Learning Research 15, 3221–3245 (2014).
15. Wall, M. E., Rechtsteiner, A. & Rocha, L. M. In A Practical Approach to Microarray Data Analysis (eds D. P. Berrar, W. Dubitzky & M. Granzow) Ch. 5, 92–109 (Kluwer, 2003).
16. Taskesen, E. & Reinders, M. J. 2D Representation of Transcriptomes by t-SNE Exposes Relatedness between Human Tissues. PLoS One 11, e0149853, https://doi.org/10.1371/journal.pone.0149853 (2016).
17. Taskesen, E. et al. Pan-cancer subtyping in a 2D-map shows substructures that are driven by specific combinations of molecular characteristics. Sci Rep 6, 24949, https://doi.org/10.1038/srep24949 (2016).
18. Weinstein, M. & Horn, D. Dynamic quantum clustering: a method for visual exploration of structures in data. Physical review. E, Statistical, nonlinear, and soft matter physics 80, 066117, https://doi.org/10.1103/PhysRevE.80.066117 (2009).
19. Weinstein, M. et al. Analyzing Big Data with Dynamic Quantum Clustering. arXiv:1310.2700 [physics.data-an] (2013).
20. Messiah, A. Quantum Mechanics (Vol. I). (John Wiley & Sons, 1966).
21.
Ceccarelli, M. et al. Molecular Profiling Reveals Biologically Discrete Subsets and Pathways of Progression in Diffuse Glioma. Cell 164, 550–563, https://doi.org/10.1016/j.cell.2015.12.028 (2016).
22. Rahbari, R., Zhang, L. & Kebebew, E. Thyroid cancer gender disparity. Future oncology (London, England) 6, 1771–1779, https://doi.org/10.2217/fon.10.127 (2010).
23. Sun, T., Plutynski, A., Ward, S. & Rubin, J. B. An integrative view on sex differences in brain tumors. Cellular and molecular life sciences: CMLS 72, 3323–3342, https://doi.org/10.1007/s00018-015-1930-2 (2015).
24. American Brain Tumor Association. http://www.abta.org/brain-tumor-information/types-of-tumors/glioblastoma.html (2017).
25. Claus, E. B. et al. Survival and low-grade glioma: the emergence of genetic information. Neurosurgical focus 38, E6, https://doi.org/10.3171/2014.10.focus12367 (2015).
26. Martinez, E. et al. Comparison of gene expression patterns across 12 tumor types identifies a cancer supercluster characterized by TP53 mutations and cell cycle defects. Oncogene 34, 2732–2740, https://doi.org/10.1038/onc.2014.216 (2015).
27. Li, Y. et al. A comprehensive genomic pan-cancer classification using The Cancer Genome Atlas gene expression data. BMC Genomics 18, 508, https://doi.org/10.1186/s12864-017-3906-0 (2017).
28. Verhaak, R. G. et al. Integrated genomic analysis identifies clinically relevant subtypes of glioblastoma characterized by abnormalities in PDGFRA, IDH1, EGFR, and NF1. Cancer cell 17, 98–110, https://doi.org/10.1016/j.ccr.2009.12.020 (2010).
29. Croft, D. et al. The Reactome pathway knowledgebase. Nucleic Acids Res 42, D472–477, https://doi.org/10.1093/nar/gkt1102 (2014).
30. McInnes, L., Healy, J. & Astels, S. hdbscan: Hierarchical density based clustering. Journal of Open Source Software 2 (2017).
31. Campello, R., Moulavi, D. & Sander, J. In Advances in Knowledge Discovery and Data Mining 160–172 (Springer, 2013).
32. Finn, R. D.
et al. InterPro in 2017—beyond protein family and domain annotations. Nucleic Acids Research 45, D190–D199, https://doi.org/10.1093/nar/gkw1107 (2017).
33. Finn, R. D. et al. The Pfam protein families database: towards a more sustainable future. Nucleic Acids Research 44, D279–D285, https://doi.org/10.1093/nar/gkv1344 (2016).
34. Ashburner, M. et al. Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nature Genetics 25, 25–29, https://doi.org/10.1038/75556 (2000).
35. Ogata, H. et al. KEGG: Kyoto Encyclopedia of Genes and Genomes. Nucleic Acids Research 27, 29–34 (1999).
36. Fabregat, A. et al. The Reactome pathway Knowledgebase. Nucleic Acids Research 44, D481–D487, https://doi.org/10.1093/nar/gkv1351 (2016).
37. Amberger, J. S., Bocchini, C. A., Schiettecatte, F., Scott, A. F. & Hamosh, A. OMIM.org: Online Mendelian Inheritance in Man (OMIM(R)), an online catalog of human genes and genetic disorders. Nucleic Acids Res 43, D789–798, https://doi.org/10.1093/nar/gku1205 (2015).

## Acknowledgements

We thank Ken Matusow for valuable input into this analysis. Some of this work was performed on the Palmetto Cluster supercomputer at Clemson University.

## Author information

### Affiliations

1. Clemson University, Department of Genetics & Biochemistry, Clemson, SC 29634, USA
   Kimberly E. Roche, Leland J. Dunwoodie, William L. Poehlman & Frank A. Feltus
2. Quantum Insights Inc., Menlo Park, CA 94025, USA
   Marvin Weinstein

### Contributions

Study conception and design: F.A.F., K.E.R., M.W. Acquisition of data: L.J.D. Analysis and interpretation of data: F.A.F., K.E.R., M.W., L.J.D., W.L.P. Drafting of manuscript: F.A.F., K.E.R., M.W., L.J.D., W.L.P.

### Competing Interests

The authors declare a financial competing interest in that M.W. is the founder of Quantum Insights Inc., which licenses the DQC algorithm for profit. The authors do not declare any non-financial competing interests.
### Corresponding author Correspondence to Frank A. Feltus.
https://www.physicsforums.com/threads/perturbation-theory-qualitative-question.603217/
# Perturbation theory (qualitative question)

Thread starter: LogicX

## Homework Statement

How does the energy change (negative, positive, or no change) in the HOMO-LUMO transition of a conjugated polyene with 5 double bonds when a nitrogen is substituted in the center of the chain? The substitution lowers the potential energy in the center of the box (everywhere else V(x) = 0, as for a particle in a box). When there are 6 double bonds, the opposite change happens. Why?

## Homework Equations

E1 = <ψ0|H1|ψ0>
E(perturbed) = E0 + E1λ

## The Attempt at a Solution

Ok, so if you look at the particle-in-a-box ψ*ψ for n=5 and for n=6, the center of the n=5 curve is at the top of a peak, while for n=6 it is at a node (i.e. where the probability = 0). I'm not sure how to use this info to say how the excitation energy would change. I think it means that for n=6 there is no change, because there is no probability of an electron being there, so the substitution does not change the excitation energy. And for n=5 there is a decrease in potential energy, so E1 is more negative and the gap would be larger? (Or would a decrease in V(x) mean that the gap is smaller?) Does any of that make sense? Again, I just need a qualitative answer, and it basically boils down to how E1 changes with the substitution.

EDIT: I noticed this thread seems to be related but I'm still not quite sure of the answer:
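Not part of the original post, but the qualitative argument is easy to check numerically. A rough sketch (the well depth and width are made-up toy numbers in arbitrary units; only the signs and relative sizes of the shifts matter):

```python
import math

# Toy check using first-order perturbation theory, E1 = <psi_n|H'|psi_n>,
# for a particle in a box of length L with a shallow square well at the
# center.  V0 and w are invented for illustration.
L, V0, w = 1.0, -0.1, 0.05

def first_order_shift(n, steps=20000):
    """Midpoint-rule integral of |psi_n(x)|^2 V(x) over the box."""
    dx = L / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        V = V0 if abs(x - L / 2) < w / 2 else 0.0
        total += (2.0 / L) * math.sin(n * math.pi * x / L) ** 2 * V * dx
    return total

e5, e6 = first_order_shift(5), first_order_shift(6)
# n = 5 has an antinode at the center, so it picks up a sizable negative
# shift; n = 6 has a node there, so its shift is nearly zero.
print(f"E1(n=5) = {e5:.5f}, E1(n=6) = {e6:.5f}")
```

In the free-electron picture, 5 double bonds give 10 π electrons, so the HOMO is n = 5 and the LUMO is n = 6: lowering E5 while E6 barely moves widens the gap. With 6 double bonds the HOMO is n = 6 (nearly unshifted) and the LUMO n = 7 shifts down, narrowing the gap, which matches the "opposite change" in the problem.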
http://mathhelpforum.com/trigonometry/197807-converting-b-w-polar-equations-rectangular-equations.html
# Math Help - CONVERTING b/w POLAR equations & RECTANGULAR equations

1. ## CONVERTING b/w POLAR equations & RECTANGULAR equations

I'd like to see the steps it takes to solve each of these problems. Thanks!

Polar to Rectangular:
1.) r² = sin 2θ
2.) r = 2secθ (I got x = 2, not sure if it's right)
3.) r = 6/(2cosθ - 3sinθ)

Rectangular to Polar:
1.) 2xy = 1
2.) y² - 8y - 16 = 0
3.) x² + y² - 2ay = 0
4.) x² = y³ (my final answer is r = cot²θ cscθ)

2. ## Re: CONVERTING b/w POLAR equations & RECTANGULAR equations

Hello, Crysland!

Polar to Rectangular: $(1)\; r^2\:=\:\sin2\theta$

We have: $r^2 \:=\:2\sin\theta\cos\theta$

Multiply by $r^2\!:$ $r^4 \:=\:2r^2\sin\theta\cos\theta$, that is, $(r^2)^2 \:=\:2(r\cos\theta)(r\sin\theta)$

Substitute: $(x^2+y^2)^2 \:=\:2xy$

$(2)\; r \:=\:2\sec\theta$ (I got $x=2.$) Right!

$\text{We have: }\:r \:=\:\dfrac{2}{\cos\theta} \quad\Rightarrow\quad r\cos\theta \:=\:2 \quad\Rightarrow\quad x \:=\:2$

$(3)\; r \:=\:\dfrac{6}{2\cos\theta - 3\sin\theta}$

$\begin{array}{ccc}\text{We have:} & r(2\cos\theta - 3\sin\theta) \:=\:6 \\ \\ & 2r\cos\theta - 3r\sin\theta \:=\:6 \\ \\ & 2x - 3y \:=\:6 \end{array}$

Rectangular to Polar: $(1)\; 2xy\:=\:1$

$2(r\cos\theta)(r\sin\theta) \:=\:1 \quad\Rightarrow\quad 2r^2 \:=\:\frac{1}{\sin\theta\cos\theta} \quad\Rightarrow\quad r^2 \:=\:\frac{1}{2\sin\theta\cos\theta}$

$\quad\Rightarrow\quad r^2 \:=\:\frac{1}{\sin2\theta} \quad\Rightarrow\quad r^2 \:=\:\csc2\theta$

$(2)\; y^2- 8y - 16 \:=\: 0$

$(r\sin\theta)^2 - 8(r\sin\theta) - 16 \:=\:0 \quad\Rightarrow\quad r^2\sin^2\theta - 8r\sin\theta - 16 \:=\:0$

$r \;=\;\dfrac{8\sin\theta \pm \sqrt{64\sin^2\theta + 64\sin^2\theta}}{2\sin^2\theta} \;=\;\dfrac{8\sin\theta \pm\sqrt{128\sin^2\theta}}{2\sin^2\theta}$

$r \;=\;\dfrac{8\sin\theta \pm 8\sqrt{2}\sin\theta}{2\sin^2\theta} \;=\;\dfrac{8\sin\theta(1 \pm\sqrt{2})}{2\sin^2\theta} \;=\;\dfrac{4(1\pm\sqrt{2})}{\sin\theta}$
$r \;=\;4(1\pm\sqrt{2})\csc\theta$

$(3)\; x^2 + y^2 - 2ay \:=\: 0$

We have: $r^2 - 2ar\sin\theta \:=\:0 \quad\Rightarrow\quad r(r - 2a\sin\theta) \:=\:0$

Then: $r-2a\sin\theta \:=\:0 \quad\Rightarrow\quad r \;=\;2a\sin\theta$ (We can disregard $r = 0.$)

$(4)\;x^2 \:=\: y^3$ (My final answer is $r\:=\:\cot^2\theta\csc\theta$.) Yes!

We have: $(r\cos\theta)^2 \;=\;(r\sin\theta)^3 \quad\Rightarrow\quad r^2\cos^2\theta \;=\;r^3\sin^3\theta$

$\cos^2\theta \;=\;r\sin^3\theta \quad\Rightarrow\quad \dfrac{\cos^2\theta}{\sin^3\theta} \;=\;r$

$r \;=\;\frac{\cos^2\theta}{\sin^2\theta}\cdot\frac{1}{\sin\theta} \quad\Rightarrow\quad r \;=\;\cot^2\theta\csc\theta$
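As a quick numeric sanity check (not in the original thread), two of the conversions can be verified at a handful of sample angles:

```python
import math

# Spot-check two conversions at angles in (0, pi/2), where both curves
# are defined.  The angle grid is arbitrary.

# (1) Polar r^2 = sin(2*theta)  <->  rectangular (x^2 + y^2)^2 = 2xy
for k in range(1, 9):
    theta = k * math.pi / 20
    r = math.sqrt(math.sin(2 * theta))
    x, y = r * math.cos(theta), r * math.sin(theta)
    assert math.isclose((x**2 + y**2) ** 2, 2 * x * y,
                        rel_tol=1e-9, abs_tol=1e-12)

# (4) Rectangular x^2 = y^3  <->  polar r = cot^2(theta) * csc(theta)
for k in range(1, 9):
    theta = k * math.pi / 20
    r = (math.cos(theta) / math.sin(theta)) ** 2 / math.sin(theta)
    x, y = r * math.cos(theta), r * math.sin(theta)
    assert math.isclose(x**2, y**3, rel_tol=1e-9)

print("both conversions check out at the sample angles")
```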
http://www.beatthegmat.com/time-for-gmat-round-2-t294719.html
## Time for GMAT Round 2

This topic has 7 expert replies and 4 member replies.

eric777
Junior | Next Rank: 30 Posts
Joined 01 Nov 2015
Posted: 12 messages

#### Time for GMAT Round 2
Fri Mar 31, 2017 10:55 am

Hello, I studied for a while last year, and looking back I realize I did not study well at all; after a month break without really studying, I took the GMAT and scored an expected 570: 32Q/35V, a 3 or something on IR, and 6 on the essay. I restarted my studies this January and have been doing practice problem after practice problem from the OG, including going through the Manhattan GMAT foundations book once at the start of my studies and once again two weeks ago. I took a practice test at the end of January (a Magoosh practice test, which was not great) and they rated me a 580, though I believe I have improved since then. I scheduled my GMAT for the middle of May, and now I want to focus these last 6 weeks on really improving, so I'm looking for some advice.
I'm an engineer and work at a Fortune 20 company in a technology research position, but I was always more of a "C in calculus, while acing other random math courses" kind of student. I know I have the aptitude, but my study skills aren't great. Some background:

- Military veteran. The only reason I'm mentioning it is that this hurt me, as I never studied in high school, then went 4 years without doing any math, and then straight into engineering
- I didn't really start studying in college until my last two years, and then, even though I was taking all engineering courses, I was a mostly A student
- Never took the ACT/SAT, so I never properly developed study skills for tests like the GMAT

I'm not sure what else would be applicable. I've noticed people do things like create flash cards or study cards, but I have a poor memory, even for flash cards. Another thing I've noticed is that other test takers have been completing an "error log". I've attempted that, but I don't really know what I'm supposed to get out of it. When I looked at where my weaknesses were (I marked down what category of problem I'd get incorrect), it was all over the place. I'd miss something random on a problem that wasn't confusing or anything, and each time I've taken the GMAT (real or practice) I've answered all questions in almost exactly 75 minutes.

Any advice/help/encouragement would be appreciated. I'm not looking to go to Stanford or something. I'm interested in Michigan, Berkeley, and a couple of other schools in that tier. Thank you!
### GMAT/MBA Expert

Bara
GMAT Instructor
Joined 26 Oct 2008
Posted: 354 messages
Followed by: 21 members
Thanked: 55 times

Sat Apr 01, 2017 1:27 am

Hi Eric,

I encourage you to create and/or tap into systems that already exist so you're not throwing darts blindfolded or reinventing the wheel, and I'd also ask you to really think about whether you're as 'bad off' as you think you are: likely you're just rusty and need direction. You're not the first veteran or rusty student to embark on GMAT studies. There are various timelines, and examples of 'error logs', here in these posts, but I wonder, if you really believe you are and always have been out of the loop, why you're not signing up for online, in-person group, or in-person/online tutoring. Courses will give you the kind of structure, direction, and materials that someone in your position would benefit most from. Certainly it makes good business sense if you consider your time valuable: employ systems that already exist and have been known to improve student scores. It can be overwhelming, because there are a lot of options out there, and certainly those of us here as 'featured experts' have been in the biz long enough and/or are committed enough to student success that we can provide SOME advice. Some of my colleagues can point to their resource pages, which might be very helpful, and others can identify more what your issues are when you tell us more about what's up.

This is the transcription of one of our clients who was a veteran, which you might find helpful. It's from MBA Podcaster (I'm not sure they're around so much; their links weren't working, but the name of the podcast had been "Dealing With a Low GMAT Score - MBA Podcaster". Google it. Maybe it's working now.)

"Let's turn now to a first year student at the University of Chicago's Booth School of Business, Ronald Rolph. Rolph was an infantry officer for the Marine Corps for the past eight years.
He shared his personal experience of taking the GMAT with MBA Podcaster, “Initially I bought several of the test prep books just at a local bookstore and went through a few of them. I also took kind of a crash course, like a weekend seminar in Durham, North Carolina near where I was living at the time which provided some insight into the test. I think it gave me a relatively decent overview of the format and some of the types of questions and subject matter that was going to be covered but I really did not have sufficient time to prepare adequately. I was constantly being deployed as an active duty Captain in the Marine Corp, so trying to cram in studying between deployment and while overseas I really didn’t do it justice. So when I took that test initially I really felt that I was under prepared so much so that I actually canceled the scores when the test was over. I really wasn’t comfortable with even recording that score officially because I really didn’t know how to go about attacking the test appropriately and I ran out of time on I think both of the sections, the verbal and the quantitative section. Then after taking that test once it was really kind of a cold bath and a harsh dose of reality where I quickly realized that if I wanted to do well on the GMAT, I really needed to dedicate more time and energy toward preparing sufficiently to do as well as I would have liked on the test. I think because of the unique format of the GMAT being not only academically but psychologically prepared to take the test is a key component to being successful on it.” But Rolph said he was far from being mentally ready for the GMAT the first time he took the test, “As I was taking the test and you know, you see the clock ticking right there on the screen and struggling with the questions kind of a vicious cycle and I really didn’t do anywhere near as well as I had hoped on that initial test. 
By design the test is supposed to foster that kind of anxiety and make sure that people have adequately prepared and are able to handle those kinds of situations and I guess it just took me accepting the fact that I couldn’t do it on my own that just relying on my own previous academic experiences and my own studying wasn’t going to get me to the score that I really hoped to get. Just going through the books and doing self-study was not going to be enough, that I really needed outside help. I’d been out of school for about seven years at the time so the quantitative aspects in particular among my skills were very rusty.” Rolph contacted a test prep company in New York, (Test Prep New York). Because of his military deployment schedule, Rolph had to cram his test prep course into a single week. The company suggested he come to New York right away to work with a team of tutors who specialize in the GMAT. “I spent exactly a week up in New York City staying with a friend, having daily sessions with both verbal and quantitative tutors as well as going through some of the more intangible aspects of the test preparations, psychological aspect of the test, and confidence, etc., etc. Which really enabled me to go into the test from a much stronger, more confident perspective. And that sort of intangible aspect of the preparation I thought was as important if not maybe more important than the actual hard skills of the sentence correction or the data sufficiency problems on the test.” Rolph explains some of the techniques the test prep company used to help him use to calm his nerves, “Mental exercises, stress reduction routines, breathing, relaxation, mental cues to keep yourself calm during stressful situation specifically as you’re taking the computer based test. And just kind of reinforcing your mental capacity to go about taking that kind of a test and just building your confidence. 
Initially I was a little bit skeptical; coming from a military background, those sorts of touchy-feely things I'm sometimes a little averse to, but looking back on it, that really was invaluable and I think helped my performance on the test exponentially." Rolph had only one math class as an undergraduate, so he said he was especially unprepared for the quantitative section of the GMAT. "What the tutor did, which I thought was really prudent, was gauge my ability level through some initial tests and interactions, and we determined that trying to master all of the quantitative content of the GMAT was going to be a lost cause. We would get diminishing returns; there was no way we could do that. So he picked and chose some of the more important concepts, and we conceded the fact that there were going to be some questions that were going to be beyond my level, that I wasn't going to be able to get or memorize the formulas for, but he focused on some of the more general concepts, some of the more prevalent ones on the test, and really reinforced those and just focused on those. And we were able, I think, to mutually get me to master those." The test prep tutors covered the verbal section as well: "Repetition, repetition, and more repetition. She had me get the full GMAT official prep book as well as the verbal supplement and do literally every single question in both of those books, and really by doing that you start to sense patterns for the questions that they ask, the types of questions they ask, and some of the overarching concepts that they really like to test on the GMAT. And she gave me some insight as to how to go about recognizing certain concepts within sentence correction and the reading comprehension and to really pick up on those quickly to save time." The Saturday after his crash course in New York, Rolph flew back to North Carolina. He took the test on Monday, two days later.
"I didn't cram the day or the night before. I think that can be counterproductive. I just focused more on getting a good night's sleep, eating right, and making sure I was fresh for test day, and really just tried to clear my mind the day before." I asked Rolph how his experience of taking the test the second time compared to the first time: "It was night and day. In a way it was good because I took it in the same test center, so I already knew what the place looked like, I knew where it was, I was just much more confident, I had a much clearer concept of what to expect, and I was as close to fully prepared as I could have been under the time constraints." Rolph said his score wasn't quite as high as the highest he had gotten on one of his GMAT practice tests, but, "You know, I guess ultimately the proof is in the pudding. I was able to get into one of the programs that I had been hoping to get into, so it did the job."

All this to say: your experience is NOT unique, and once you identify WHAT you need to do to improve, you will improve. That's where the log comes in. You can get all fancy-pants with the log, but generally, you want to identify the following things:

1. Did you know how to do this, or did you guess - - and get it right, get it wrong
2. Did you not know how to do this, and did you guess - - and get it right, get it wrong
3. Do you understand why you got it right/wrong
4. If you got it wrong, was it a deficiency in content knowledge or strategy
5. If you saw something like it again, could you get it correct

Then the vitals:

1. Section
2. Question number
3. If Math: DS or PS
4. What 'types' of things were being tested, i.e. Geometry - Triangles - Right Triangle - Question about degrees in a triangle
5. How you answered the question wrong. Often it's not only important THAT you get a question wrong but HOW you get it wrong. From that you can learn.

NOW. In terms of support, there are various options:

1.
Online static course - - These you can go through at your own pace and on your own time. There are varying amounts of support, often by email/text.
2. In-person group course - - Varying levels of proficiency. The bigger the program, the more likely it's a big-box program with a one-size-fits-all model. If you're not looking for 700+ and don't mind going through everything that is on the test (including things you likely already know), these are great.
3. In-person or online tutoring - - This is typically highly customized to your needs; it's usually more expensive, but more streamlined and economical in terms of your time. You get what you pay for! Always ask for a bio of the tutor you're working with, or for the opportunity to have a quick talk with the instructor if you're calling the tutoring company cold.

Since you're planning to take the test mid-May and you're starting out where you are, you can use any of the above as your test-prep option. You can also self-study, but I believe, given your background, this will be the most difficult. You can also do a combination of all of them. Something we do for clients is an evaluation, which essentially provides feedback on what you're doing and what you need to do to improve your score. This feedback would help you decide next steps as well; if you want more information about this, ping me privately, email, or call.

SO. Buckle up. Dive deep, and keep us posted on how we can continue to help you.

Good Luck!

_________________
Bara Sapir, MA, CHt, CNLP
Founder/CEO & GMAT Badass
Test Prep New York/Test Prep San Francisco
Maximize your Score, Minimize your Stress!
WORKSHOP:mindflowclass.com BOOK: http://tinyurl.com/TPNYSC TV: https://www.youtube.com/watch?v=McA4aqCNS-c Thanked by: eric777 ### GMAT/MBA Expert [email protected] Elite Legendary Member Joined 23 Jun 2013 Posted: 8708 messages Followed by: 460 members Thanked: 2732 times GMAT Score: 800 Sat Apr 01, 2017 9:30 am Hi eric777, I'm hoping that you can provide a bit more information about the work that you've done so far and your plans: 1) When did you take the Official GMAT (from your post, it's implied that you did so in January - but that seems to run counter to the plans that you discussed here: http://www.beatthegmat.com/looking-for-a-little-advice-maybe-words-of-encouragement-t288173.html#762230) 2) When are you planning to apply to Business School? 3) Going forward, how many hours do you think you can consistently study each week? Assuming that your goal score is still 700+, you're going to have to make some significant improvements to how you handle BOTH the Quant and Verbal sections. Continuing to study in the same ways as before will likely lead to the same general score results - so you'll have to make some adjustments to your study routine (and that will likely require that you invest in some new study materials). GMAT assassins aren't born, they're made, Rich _________________ Contact Rich at [email protected] eric777 Junior | Next Rank: 30 Posts Joined 01 Nov 2015 Posted: 12 messages Mon Apr 10, 2017 3:04 pm [email protected] wrote: Hi eric777, I'm hoping that you can provide a bit more information about the work that you've done so far and your plans: 1) When did you take the Official GMAT (from your post, it's implied that you did so in January - but that seems to run counter to the plans that you discussed here: http://www.beatthegmat.com/looking-for-a-little-advice-maybe-words-of-encouragement-t288173.html#762230) 2) When are you planning to apply to Business School? 3) Going forward, how many hours do you think you can consistently study each week? 
Assuming that your goal score is still 700+, you're going to have to make some significant improvements to how you handle BOTH the Quant and Verbal sections. Continuing to study in the same ways as before will likely lead to the same general score results - so you'll have to make some adjustments to your study routine (and that will likely require that you invest in some new study materials). GMAT assassins aren't born, they're made, Rich

Hi Rich, Sorry I'm just now getting back to this thread. I took a week off because I felt burnt out and I needed to refocus - so that meant avoiding all GMAT-related topics until I could settle down. I took the official GMAT last April. I took a practice test this past January. I have my next official test scheduled for this upcoming May. I think I've realized that I'm not intelligent enough, not a good enough test taker, or perhaps not motivated enough to earn the score I need for the type of school I want to get into. For instance, today I missed a question asking me to calculate the angles of the points of a star with a polygon in the middle. I knew that there were multiple triangles within the star, but I simply missed the polygon in the middle of the star, which led to me not being able to answer the question. Looking at the answer, it was just like "well duh, of course". It's disheartening to think that solving those types of problems is so intuitive to other people. But what's the strategy here? How do you learn a lesson from that question, for example? Is that type of question really just a mid-500s level question?

### GMAT/MBA Expert [email protected] Elite Legendary Member Joined 23 Jun 2013 Posted: 8708 messages Followed by: 460 members Thanked: 2732 times GMAT Score: 800 Mon Apr 10, 2017 6:29 pm Hi eric777, To start, many Test Takers find that training to face the GMAT is a challenging task, so you're not alone.
When dealing with individual GMAT questions, it helps to remember that every aspect of each GMAT question is carefully chosen - the numbers involved, wording/descriptions and even the answer choices were chosen - by a human writer - to test you on certain (mostly "standard") concepts. You're rarely given that much information to work with, but what you are given is there for a reason - so you have to think in terms of what each prompt reminds you of (knowledge, patterns, prior questions that you've answered that were similar, etc). You don't have to be a genius to score at a high level on this Test, but you do have to take responsibility for the questions that you CAN get correct. Physically redoing questions that you've gotten wrong (step-by-step, on the pad) can help reinforce the knowledge, Tactics and patterns that you need to know to score at a high level. Beyond that work, you might need to analyze how you approach questions, the type of notes that you take, the frequency in which you try to do work "in your head", etc. If it's really been a couple of months since you last took a practice CAT, then you should take one soon. Make sure to take the FULL CAT - with the Essay and IR sections, take it away from your home, at the same time of day as when you'll take the Official GMAT, etc.. Once you have that score, you should report back here and we can discuss how best to proceed. GMAT assassins aren't born, they're made, Rich _________________ Contact Rich at [email protected] eric777 Junior | Next Rank: 30 Posts Joined 01 Nov 2015 Posted: 12 messages Tue Apr 11, 2017 12:59 pm [email protected] wrote: Hi eric777, To start, many Test Takers find that training to face the GMAT is a challenging task, so you're not alone. 
When dealing with individual GMAT questions, it helps to remember that every aspect of each GMAT question is carefully chosen - the numbers involved, wording/descriptions and even the answer choices were chosen - by a human writer - to test you on certain (mostly "standard") concepts. You're rarely given that much information to work with, but what you are given is there for a reason - so you have to think in terms of what each prompt reminds you of (knowledge, patterns, prior questions that you've answered that were similar, etc). You don't have to be a genius to score at a high level on this Test, but you do have to take responsibility for the questions that you CAN get correct. Physically redoing questions that you've gotten wrong (step-by-step, on the pad) can help reinforce the knowledge, Tactics and patterns that you need to know to score at a high level. Beyond that work, you might need to analyze how you approach questions, the type of notes that you take, the frequency in which you try to do work "in your head", etc. If it's really been a couple of months since you last took a practice CAT, then you should take one soon. Make sure to take the FULL CAT - with the Essay and IR sections, take it away from your home, at the same time of day as when you'll take the Official GMAT, etc.. Once you have that score, you should report back here and we can discuss how best to proceed. GMAT assassins aren't born, they're made, Rich Thank you. I'm planning on taking a full-length practice test this Saturday. eric777 Junior | Next Rank: 30 Posts Joined 01 Nov 2015 Posted: 12 messages Sun Apr 16, 2017 7:17 am Took my first Manhattan GMAT (I've used up my free official ones) and scored a 640. 40 quant 37 verbal. I'm happy about that as compared to previous scores. This included writing the full-length essay but not IR. I'm not worried about the essay at all - it's the one area I'm naturally good at. Going to review all the questions. The verbal score is puzzling. 
I feel much stronger in verbal - and I do know that I go too fast on the exam (usually have 10-15 minutes left even when trying to go slowly). I've attached a copy of the score assessment. Any advice or encouragement is appreciated!

### GMAT/MBA Expert DavidG@VeritasPrep Legendary Member Joined 14 Jan 2015 Posted: 2301 messages Followed by: 115 members Thanked: 1069 times GMAT Score: 770 Sun Apr 16, 2017 11:13 am Quote: Going to review all the questions. The verbal score is puzzling. I feel much stronger in verbal - and I do know that I go too fast on the exam (usually have 10-15 minutes left even when trying to go slowly).

Important to bear in mind - those raw scores mean very different things in quant and verbal. Your V37 would have a significantly higher percentile than your Q40, so despite the lower number, your verbal score is actually stronger than your quant. In the meantime, keep reviewing your old exams and attempting to boil down the essence of those tests into 4-5 actionable takeaways. Then do some drilling in areas that need it, and gear up to take another exam and repeat the process. _________________ Veritas Prep | GMAT Instructor Veritas Prep Reviews Save $100 off any live Veritas Prep GMAT Course Enroll in a Veritas Prep GMAT class completely for FREE. Wondering if a GMAT course is right for you? Attend the first class session of an actual GMAT course, either in-person or live online, and see for yourself why so many students choose to work with Veritas Prep. Find a class now!

### GMAT/MBA Expert [email protected] Elite Legendary Member Joined 23 Jun 2013 Posted: 8708 messages Followed by: 460 members Thanked: 2732 times GMAT Score: 800 Sun Apr 16, 2017 11:16 am Hi eric777, This score shows that you have a pretty good grasp of the 'core' material that is tested by the GMAT.
Unfortunately, we can't view this score as accurate because you skipped the IR section. On Test Day, once you factor in the 'check-in' time, the 'orientation' section of the Test, the Essay, IR and first break, you'll have dealt with about 1.5 hours of activity before you see your first Quant question - and about 3 hours of activity before you see your first Verbal question. These are important aspects of the Test Day that you MUST train for if you want to maximize your performance. By skipping a section, you took a shorter, easier Exam that required less work - so you didn't face any of the endurance/fatigue challenges that you'll face on Test Day. This is meant to say that you really MUST take your CATs in a more rigorous fashion as you continue to study. Now that you have this result, you should plan to do a full review of the Exam. While there are a variety of different things to note (based on the type of Mistake Tracker/Error Log that you're using), here are some standard questions that you will want to answer:

In each section, how many questions did you get wrong....
1) Because of a silly/little mistake?
2) Because there was some math/verbal that you just could not remember how to do?
3) Because the question was too hard?
4) Because you were low on time and had to guess?

Defining WHY you're getting questions wrong - and then working to 'fix' whatever needs fixing - is part of what it takes to hone your skills and score at a higher level. GMAT assassins aren't born, they're made, Rich _________________ Contact Rich at [email protected]

eric777 Junior | Next Rank: 30 Posts Joined 01 Nov 2015 Posted: 12 messages Mon Apr 17, 2017 6:39 am [email protected] wrote: Hi eric777, This score shows that you have a pretty good grasp of the 'core' material that is tested by the GMAT. Unfortunately, we can't view this score as accurate because you skipped the IR section.
On Test Day, once you factor in the 'check-in' time, the 'orientation' section of the Test, the Essay, IR and first break, you'll have dealt with about 1.5 hours of activity before you see your first Quant question - and about 3 hours of activity before you see your first Verbal question. These are important aspects of the Test Day that you MUST train for if you want to maximize your performance. By skipping a section, you took a shorter, easier Exam that required less work - so you didn't face any of the endurance/fatigue challenges that you'll face on Test Day. This is meant to say that you really MUST take your CATs in a more rigorous fashion as you continue to study. Now that you have this result, you should plan to do a full review of the Exam. While there are a variety of different things to note (based on the type of Mistake Tracker/Error Log that you're using), here are some standard questions that you will want to answer:

In each section, how many questions did you get wrong....
1) Because of a silly/little mistake?
2) Because there was some math/verbal that you just could not remember how to do?
3) Because the question was too hard?
4) Because you were low on time and had to guess?

Defining WHY you're getting questions wrong - and then working to 'fix' whatever needs fixing - is part of what it takes to hone your skills and score at a higher level. GMAT assassins aren't born, they're made, Rich

So a few of the questions were because of silly mistakes. For example, the first question was along the lines of there are 4*10^11 stars and 50 million are suns like ours, or something along those lines. I did the problem exactly how it should be done, but was off by one factor and got 800 instead of 8000. How do you guard against these types of things?
### GMAT/MBA Expert [email protected] Elite Legendary Member Joined 23 Jun 2013 Posted: 8708 messages Followed by: 460 members Thanked: 2732 times GMAT Score: 800 Mon Apr 17, 2017 9:18 am Hi eric777, Little mistakes can almost always be traced back to a lack of proper note-taking (you might also define this issue as doing too much work 'in your head'). Ultimately, you have to ask yourself what you are willing to do to guarantee that you get the question correct. Would you be willing to put in the extra effort to 'bulletproof' your work or not? The good news is the work is almost always pretty easy. GMAT assassins aren't born, they're made, Rich _________________ Contact Rich at [email protected]

### GMAT/MBA Expert Brent@GMATPrepNow GMAT Instructor Joined 08 Dec 2008 Posted: 10763 messages Followed by: 1212 members Thanked: 5146 times GMAT Score: 770 Mon Apr 17, 2017 1:04 pm eric777 wrote: So a few of the questions were because of silly mistakes. For example, the first question was along the lines of there are 4*10^11 stars and 50 million are suns like ours, or something along those lines. I did the problem exactly how it should be done, but was off by one factor and got 800 instead of 8000. How do you guard against these types of things?

If silly mistakes are hurting your score, then it's important that you identify and categorize these mistakes. Some examples might include:
- sloppy writing causes a 7 to mysteriously turn into a 1
- you forget that a question is an EXCEPT question
- you fail to notice crucial information, such as x is an integer or w < 0
- you calculate Pat's current age when the question asked for Pat's age 5 years from now
- and so on

Once you have identified the types of mistakes that YOU typically make, you will be able to spot situations/questions in which you're prone to making errors.
Cheers, Brent _________________ Brent Hanneson - Founder of GMATPrepNow.com. Check out the online reviews of our course and come see all of our free resources. GMAT Prep Now's comprehensive video course can be used in conjunction with Beat The GMAT's FREE 60-Day Study Guide to reach your target score in 2 months!
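As a footnote to eric's star-count example above, here is a quick sketch of the intended arithmetic. The numbers are paraphrased in the thread itself, so treat them as illustrative rather than the exact question:

```python
# Paraphrased numbers: about 4 * 10^11 stars in total, of which
# 50 million (5 * 10^7) are suns like ours.
total_stars = 4 * 10**11
sun_like = 50 * 10**6

# Writing the powers of ten explicitly guards against the
# off-by-one-factor slip (getting 800 instead of 8000):
#   (4 / 5) * 10^(11 - 7) = 0.8 * 10^4 = 8000
ratio = total_stars // sun_like
print(ratio)  # 8000
```

Doing the exponent subtraction on paper rather than in your head is exactly the kind of cheap 'bulletproofing' Rich describes.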
http://www.ck12.org/algebra/Checking-for-Solutions-to-Systems-of-Linear-Inequalities/lecture/Testing-Solutions-for-a-System-of-Inequalities/r1/
Checking for Solutions to Systems of Linear Inequalities (Video) | Algebra | CK-12 Foundation

# Checking for Solutions to Systems of Linear Inequalities

Testing Solutions for a System of Inequalities: shows an example to demonstrate how testing solutions to a system of inequalities works.
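The idea the video demonstrates can be sketched in a few lines: a point is a solution of a system of inequalities only if it satisfies every inequality simultaneously. The particular system below is made up for illustration, since the video itself isn't transcribed here:

```python
# Test whether a candidate point (x, y) satisfies BOTH inequalities of an
# illustrative system:
#     y <  2x + 1
#     y >= -x + 3
def is_solution(x, y):
    return y < 2 * x + 1 and y >= -x + 3

print(is_solution(3, 4))  # True:  4 < 7  and  4 >= 0
print(is_solution(0, 0))  # False: 0 < 1 holds, but 0 >= 3 fails
```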
https://artofproblemsolving.com/wiki/index.php?title=2011_AIME_II_Problems/Problem_9&oldid=133271
# 2011 AIME II Problems/Problem 9

## Problem 9

Let $x_1, x_2, x_3, x_4, x_5, x_6$ be non-negative real numbers such that $x_1 + x_2 + x_3 + x_4 + x_5 + x_6 = 1$, and $x_1x_3x_5 + x_2x_4x_6 \ge \frac{1}{540}$. Let $p$ and $q$ be positive relatively prime integers such that $\frac{p}{q}$ is the maximum possible value of $x_1x_2x_3 + x_2x_3x_4 + x_3x_4x_5 + x_4x_5x_6 + x_5x_6x_1 + x_6x_1x_2$. Find $p + q$.

## Solution

Note that neither the constraint nor the expression we need to maximize involves products of opposite variables such as $x_1x_4$. Factoring out say $x_1$ and $x_4$, we see that the constraint is $x_1(x_3x_5) + x_4(x_2x_6) \ge \frac{1}{540}$, while the expression we want to maximize is $x_1(x_2x_3 + x_5x_6 + x_6x_2) + x_4(x_2x_3 + x_3x_5 + x_5x_6)$. Adding the left side of the constraint to the expression, we get: $(x_1 + x_4)(x_2x_3 + x_3x_5 + x_5x_6 + x_6x_2) = (x_1 + x_4)(x_2 + x_5)(x_3 + x_6)$. This new expression is the product of three non-negative terms whose sum is equal to 1. By AM-GM this product is at most $\frac{1}{27}$. Since we have added at least $\frac{1}{540}$, the desired maximum is at most $\frac{1}{27} - \frac{1}{540} = \frac{19}{540}$. It is easy to see that this upper bound can in fact be achieved by ensuring that the constraint expression is equal to $\frac{1}{540}$ with $x_1 + x_4 = x_2 + x_5 = x_3 + x_6 = \frac{1}{3}$ - for example, by choosing two of the variables small enough - so our answer is $19 + 540 = \boxed{559}$.

An example is: $x_1 = \frac{1}{3},\ x_2 = \frac{3}{10},\ x_3 = \frac{1}{6},\ x_4 = 0,\ x_5 = \frac{1}{30},\ x_6 = \frac{1}{6}$.

Another example is $x_1 = \frac{1}{3},\ x_2 = 0,\ x_3 = \frac{1}{60},\ x_4 = 0,\ x_5 = \frac{1}{3},\ x_6 = \frac{19}{60}$.

## Solution 2 (Not legit)

There's a symmetry between $x_1, x_3, x_5$ and $x_2, x_4, x_6$. Therefore, a good guess is that $x_1 = x_3 = x_5$ and $x_2 = x_4 = x_6$, at which point we know that $x_1 + x_2 = \frac{1}{3}$, $x_1^3 + x_2^3 \ge \frac{1}{540}$, and we are trying to maximize $3x_1^2x_2 + 3x_1x_2^2 = 3x_1x_2(x_1 + x_2) = x_1x_2$. Then, since $x_1^3 + x_2^3 = (x_1 + x_2)^3 - 3x_1x_2(x_1 + x_2) = \frac{1}{27} - x_1x_2$, the constraint gives $x_1x_2 \le \frac{1}{27} - \frac{1}{540} = \frac{19}{540}$, the same maximum as before.
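As an exact-arithmetic sanity check of the bound: the assignment below is one I verified to meet the constraint with equality and attain $\frac{19}{540}$ (the original wiki page's example values were lost in extraction, so this particular choice is my own):

```python
from fractions import Fraction as F

# x1..x6, chosen so that x1+x4 = x2+x5 = x3+x6 = 1/3 and the
# constraint x1*x3*x5 + x2*x4*x6 = 1/540 holds with equality.
x = [F(1, 3), F(3, 10), F(1, 6), F(0), F(1, 30), F(1, 6)]

assert sum(x) == 1
assert x[0] * x[2] * x[4] + x[1] * x[3] * x[5] == F(1, 540)

# Cyclic sum x1x2x3 + x2x3x4 + ... + x6x1x2
value = sum(x[i] * x[(i + 1) % 6] * x[(i + 2) % 6] for i in range(6))
print(value)  # 19/540
```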
https://superphysics.org/research/einstein/relativity/section-21/
Section 21

# The Foundations of Classical Mechanics and Special Relativity are Unsatisfactory

March 22, 2022

Newton's First Law is only valid for non-moving `K` which:

• have unique states of motion, and
• are in uniform translational motion relative to each other.

Relative to other reference-bodies `K`, the law is not valid.

Both in classical mechanics and in special relativity, we differentiate between:

• viewpoints `K'` [man outside the box] where the laws of nature can hold relatively [affected by c]
• viewpoints `K` [man inside the box] where those laws cannot hold relatively [insignificant to c]

But why are relativistic viewpoints more important than non-relativistic viewpoints? *

*Superphysics Note: No, they are not more important, nor do they have more priority in Nature. All reference-bodies or viewpoints are of equal importance!

A gas range has two identical pots with water. Steam is being emitted continuously from Pot A, which is on a flame, but not from Pot B, which has no flame. I can see that the flame causes the steam. If both have no flame but Pot A still gives off steam, then I will be puzzled.

Similarly, I seek in vain for a real something in classical mechanics or special relativity which causes gravity and creates the different behaviour of bodies from viewpoints `K` [inside the box] and `K'`* [outside the box].

**Einstein Note: The objection is most important when the motion of the viewpoint is inherent, e.g. when the viewpoint is rotating uniformly. Newton saw this objection and attempted to invalidate it, but without success.*

*Superphysics Note: Here, Einstein explains that he invents inertial mass (and therefore the preference for relativistic spacetime) simply because he couldn't find the cause of gravity. So he sources it from Newton's Second Law in a spacetime that is in perpetual movement. This is why gravity in his General Relativity is not a force that acts from afar, but a warping of spacetime that changes the movements of perpetually-moving objects.
The cause of gravity had already been identified by Descartes as aethereal vortices. Newton discarded Descartes, and so he could not identify the cause of gravitation. But E. Mach recognised it most clearly. He claimed that mechanics must be placed on a new basis. This problem can only be solved by General Relativity, since its equations hold for every body of reference, whatever may be its state of motion.
http://mathhelpforum.com/pre-calculus/118490-complex-numbers.html
1. ## complex numbers

I am having trouble with these 2 questions.

Find the complex number z such that (5+2i) + ((−3−2i)/z) = 5i

and

Find the complex number z such that (2−2i)z + (1−4i)$\bar{z}$ = 4+5i

Any help would be great, thanks.

2. Originally Posted by kblythe

I am having trouble with these 2 questions. Find the complex number z such that (5+2i) + ((−3−2i)/z) = 5i, and find the complex number z such that (2−2i)z + (1−4i)$\bar{z}$ = 4+5i. Any help would be great, thanks.

1) $\frac{-3 - 2i}{z} = 3i - 5 \Rightarrow \frac{z}{-3 - 2i} = \frac{1}{-5 + 3i} \Rightarrow z = \frac{-3 - 2i}{-5 + 3i}$. Your job is to express this answer in cartesian form.

2) Let $z = x + iy$: $(2 - 2i)(x + iy) + (1 - 4i)(x - iy) = 4 + 5i$. Expand and equate the real and imaginary parts on each side. This will give you two simultaneous equations that you must solve for x and y.

If you need more help, please show all your work and say where you're stuck.
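To finish both parts numerically, Python's built-in complex type can check the algebra. The cartesian forms below (z = 9/34 + (19/34)i for the first equation, and x = -14/9, y = -13/3 for the second) are worked out here as a check; they are not given in the thread itself:

```python
# Part 1: z = (-3 - 2i)/(-5 + 3i); multiplying top and bottom by the
# conjugate (-5 - 3i) gives z = (9 + 19i)/34.
z1 = (-3 - 2j) / (-5 + 3j)
assert abs(z1 - (9 / 34 + 19j / 34)) < 1e-12
assert abs((5 + 2j) + (-3 - 2j) / z1 - 5j) < 1e-12  # original equation holds

# Part 2: expanding (2-2i)(x+iy) + (1-4i)(x-iy) = 4+5i and equating
# real and imaginary parts gives 3x - 2y = 4 and -6x + y = 5,
# so x = -14/9, y = -13/3.
z2 = complex(-14 / 9, -13 / 3)
assert abs((2 - 2j) * z2 + (1 - 4j) * z2.conjugate() - (4 + 5j)) < 1e-12
print("both solutions check out")
```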
https://byjus.com/ncert-solutions-class-10-maths/chapter-11-constructions/
NCERT Solutions for Class 10 Maths Chapter 11 - Constructions

NCERT Solutions for Class 10 Maths Chapter 11 Constructions are provided in a detailed manner, where one can find a step-by-step solution to all the questions for fast revision. Solutions to the 11th chapter of NCERT Class 10 Maths are prepared by subject experts under the guidelines of NCERT to assist students in their second term exam preparations. Get free NCERT Solutions for Class 10 Maths, Chapter 11 - Constructions at BYJU'S to accelerate the second term exam preparation. All the questions of the NCERT exercises are solved using diagrams with a step-by-step procedure for construction. Solutions of NCERT help students boost their concepts and clear doubts.

Access Answers of Maths NCERT Chapter 11 - Constructions

Exercise 11.1 Page: 220

In each of the following, give the justification of the construction also:

1. Draw a line segment of length 7.6 cm and divide it in the ratio 5 : 8. Measure the two parts.

Construction Procedure:

A line segment with a length of 7.6 cm is divided in the ratio 5 : 8 as follows.

1. Draw a line segment AB of length 7.6 cm.
2. Draw a ray AX that makes an acute angle with line segment AB.
3. Locate 13 (= 5 + 8) points A1, A2, A3, A4, ..., A13 on the ray AX such that AA1 = A1A2 = A2A3 and so on.
4. Join point B to A13 to form the segment BA13.
5. Through the point A5, draw a line parallel to BA13 (making an angle equal to ∠AA13B with AX).
6. This line through A5 intersects the line AB at point C.
7. C is the point that divides the line segment AB of 7.6 cm in the required ratio of 5 : 8.
8. Now, measure the lengths of AC and CB. They come out to be 2.9 cm and 4.7 cm respectively.

Justification:

The construction can be justified by proving that AC/CB = 5/8.

By construction, we have A5C || A13B. By the Basic Proportionality Theorem for the triangle AA13B, we get

AC/CB = AA5/A5A13 …..
(1) From the figure constructed, it is observed that AA5 and A5A13 contain 5 and 8 equal divisions of line segments respectively. Therefore,

AA5/A5A13 = 5/8 … (2)

Comparing equations (1) and (2), we obtain AC/CB = 5/8. Hence, justified.

2. Construct a triangle of sides 4 cm, 5 cm and 6 cm and then a triangle similar to it whose sides are 2/3 of the corresponding sides of the first triangle.

Construction Procedure:

1. Draw a line segment AB of 4 cm, i.e., AB = 4 cm.
2. Take the point A as centre and draw an arc of radius 5 cm.
3. Similarly, take the point B as centre and draw an arc of radius 6 cm.
4. The arcs drawn will intersect each other at point C.
5. Now we have AC = 5 cm and BC = 6 cm, and therefore ΔABC is the required triangle.
6. Draw a ray AX which makes an acute angle with the line segment AB on the side opposite to vertex C.
7. Locate 3 points A1, A2, A3 (as 3 is the greater of 2 and 3) on line AX such that AA1 = A1A2 = A2A3.
8. Join BA3 and draw a line through A2 parallel to BA3 that intersects AB at point B'.
9. Through the point B', draw a line parallel to BC that intersects the line AC at C'.
10. Therefore, ΔAB'C' is the required triangle.

Justification:

The construction can be justified by proving that

AB' = (2/3)AB, B'C' = (2/3)BC, AC' = (2/3)AC

From the construction, we get B'C' || BC

∴ ∠AB'C' = ∠ABC (corresponding angles)

In ΔAB'C' and ΔABC,

∠AB'C' = ∠ABC (proved above)
∠B'AC' = ∠BAC (common)

∴ ΔAB'C' ∼ ΔABC (by the AA similarity criterion)

Therefore, AB'/AB = B'C'/BC = AC'/AC …. (1)

In ΔAA2B' and ΔAA3B,

∠A2AB' = ∠A3AB (common angle)
∠AA2B' = ∠AA3B (corresponding angles, since A2B' || A3B)

Therefore, by the AA similarity criterion, ΔAA2B' ∼ ΔAA3B

So, AB'/AB = AA2/AA3

Therefore, AB'/AB = 2/3 ……. (2)

From equations (1) and (2), we get

AB'/AB = B'C'/BC = AC'/AC = 2/3

This can be written as

AB' = (2/3)AB, B'C' = (2/3)BC, AC' = (2/3)AC

Hence, justified.
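As a numeric cross-check of the two ratio facts above (added here; this is not part of the NCERT text):

```python
# Question 1: C divides AB (7.6 cm) internally in the ratio 5:8,
# so AC = AB * 5/13 and CB = AB * 8/13.
AB = 7.6
AC = AB * 5 / (5 + 8)
CB = AB * 8 / (5 + 8)
print(round(AC, 1), round(CB, 1))  # 2.9 4.7 -- matches the measured lengths

# Question 2: a similar triangle with scale factor 2/3 shrinks
# every side of the 4-5-6 triangle by the same ratio.
scaled = [s * 2 / 3 for s in (4, 5, 6)]
print([round(s, 2) for s in scaled])  # [2.67, 3.33, 4.0]
```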
3. Construct a triangle with sides 5 cm, 6 cm and 7 cm and then another triangle whose sides are 7/5 of the corresponding sides of the first triangle.

Construction Procedure:

1. Draw a line segment AB = 5 cm.
2. Take A and B as centres, and draw arcs of radius 6 cm and 7 cm respectively.
3. These arcs will intersect each other at point C, and therefore ΔABC is the required triangle with side lengths of 5 cm, 6 cm and 7 cm.
4. Draw a ray AX which makes an acute angle with the line segment AB on the side opposite to vertex C.
5. Locate 7 points A1, A2, A3, A4, A5, A6, A7 (as 7 is the greater of 5 and 7) on line AX such that AA1 = A1A2 = A2A3 = A3A4 = A4A5 = A5A6 = A6A7.
6. Join BA5 and draw a line through A7 parallel to BA5 which intersects the extended line segment AB at point B'.
7. Through B', draw a line parallel to BC which intersects the extended line segment AC at C'.
8. Therefore, ΔAB'C' is the required triangle.

Justification:

The construction can be justified by proving that

AB' = (7/5)AB, B'C' = (7/5)BC, AC' = (7/5)AC

From the construction, we get B'C' || BC

∴ ∠AB'C' = ∠ABC (corresponding angles)

In ΔAB'C' and ΔABC,

∠AB'C' = ∠ABC (proved above)
∠B'AC' = ∠BAC (common)

∴ ΔAB'C' ∼ ΔABC (by the AA similarity criterion)

Therefore, AB'/AB = B'C'/BC = AC'/AC …. (1)

In ΔAA7B' and ΔAA5B,

∠A7AB' = ∠A5AB (common angle)
∠AA7B' = ∠AA5B (corresponding angles, since A7B' || A5B)

Therefore, by the AA similarity criterion, ΔAA7B' ∼ ΔAA5B

So, AB'/AB = AA7/AA5

Therefore, AB'/AB = 7/5 ……. (2)

From equations (1) and (2), we get

AB'/AB = B'C'/BC = AC'/AC = 7/5

This can be written as

AB' = (7/5)AB, B'C' = (7/5)BC, AC' = (7/5)AC

Hence, justified.

4. Construct an isosceles triangle whose base is 8 cm and altitude 4 cm and then another triangle whose sides are 1½ times the corresponding sides of the isosceles triangle.

Construction Procedure:

1. Draw a line segment BC of 8 cm.
2. Now draw the perpendicular bisector of the line segment BC, intersecting it at the point D.
3.
Take the point D as centre and draw an arc of radius 4 cm which intersects the perpendicular bisector at the point A.
4. Now join AB and AC; the triangle ABC is the required triangle.
5. Draw a ray BX which makes an acute angle with the line BC on the side opposite to the vertex A.
6. Locate 3 points B1, B2 and B3 on the ray BX such that BB1 = B1B2 = B2B3.
7. Join B2C and draw a line through B3 parallel to B2C which intersects the extended line segment BC at point C'.
8. Through C', draw a line parallel to CA which intersects the extended line segment BA at A'.
9. Therefore, ΔA'BC' is the required triangle.

Justification:

The construction can be justified by proving that

A'B = (3/2)AB, BC' = (3/2)BC, A'C' = (3/2)AC

From the construction, we get A'C' || AC

∴ ∠A'C'B = ∠ACB (corresponding angles)

In ΔA'BC' and ΔABC,

∠A'C'B = ∠ACB (proved above)
∠A'BC' = ∠ABC (common)

∴ ΔA'BC' ∼ ΔABC (by the AA similarity criterion)

Therefore, A'B/AB = BC'/BC = A'C'/AC

Since the corresponding sides of similar triangles are in the same ratio,

A'B/AB = BC'/BC = A'C'/AC = 3/2

Hence, justified.

5. Draw a triangle ABC with side BC = 6 cm, AB = 5 cm and ∠ABC = 60°. Then construct a triangle whose sides are 3/4 of the corresponding sides of the triangle ABC.

Construction Procedure:

1. Draw a ΔABC with base BC = 6 cm, AB = 5 cm and ∠ABC = 60°.
2. Draw a ray BX which makes an acute angle with BC on the side opposite to vertex A.
3. Locate 4 points B1, B2, B3, B4 (as 4 is the greater of 3 and 4) on the ray BX such that BB1 = B1B2 = B2B3 = B3B4.
4. Join B4C and draw a line through B3, parallel to B4C, intersecting the line segment BC at C'.
5. Draw a line through C' parallel to the line AC which intersects the line AB at A'.
6. Therefore, ΔA'BC' is the required triangle.
Justification:
Since the scale factor is 3/4, we need to prove
A’B = (3/4)AB, BC’ = (3/4)BC, A’C’ = (3/4)AC
From the construction, we get A’C’ || AC
In ΔA’BC’ and ΔABC,
∠A’C’B = ∠ACB (Corresponding angles)
∠B = ∠B (Common)
∴ ΔA’BC’ ∼ ΔABC (From the AA similarity criterion)
Since the corresponding sides of similar triangles are in the same ratio,
A’B/AB = BC’/BC = A’C’/AC
Since B3C’ || B4C, the basic proportionality theorem gives BC’/BC = BB3/BB4 = 3/4, so
A’B/AB = BC’/BC = A’C’/AC = 3/4
Hence, justified.

6. Draw a triangle ABC with side BC = 7 cm, ∠B = 45°, ∠A = 105°. Then, construct a triangle whose sides are 4/3 times the corresponding sides of ΔABC.

To find ∠C:
Given: ∠B = 45°, ∠A = 105°
We know that the sum of the interior angles of a triangle is 180°.
∠A + ∠B + ∠C = 180°
105° + 45° + ∠C = 180°
∠C = 180° − 150° = 30°
So, from this property of the triangle, we get ∠C = 30°.

Construction Procedure:
The required triangle can be drawn as follows.
1. Draw a ΔABC with base BC = 7 cm, ∠B = 45° and ∠C = 30°.
2. Draw a ray BX which makes an acute angle with BC on the side opposite to the vertex A.
3. Locate 4 points B1, B2, B3, B4 (as 4 is the greater of 4 and 3) on the ray BX such that BB1 = B1B2 = B2B3 = B3B4.
4. Join B3C.
5. Draw a line through B4 parallel to B3C, intersecting the extended line BC at C’.
6. Through C’, draw a line parallel to the line AC, intersecting the extended line segment BA at A’.
7. Therefore, ΔA’BC’ is the required triangle.
Justification:
Since the scale factor is 4/3, we need to prove
A’B = (4/3)AB, BC’ = (4/3)BC, A’C’ = (4/3)AC
From the construction, we get A’C’ || AC
In ΔA’BC’ and ΔABC,
∠A’C’B = ∠ACB (Corresponding angles)
∠B = ∠B (Common)
∴ ΔA’BC’ ∼ ΔABC (From the AA similarity criterion)
Since the corresponding sides of similar triangles are in the same ratio,
A’B/AB = BC’/BC = A’C’/AC
Since B4C’ || B3C, the basic proportionality theorem gives BC’/BC = BB4/BB3 = 4/3, so
A’B/AB = BC’/BC = A’C’/AC = 4/3
Hence, justified.

7. Draw a right triangle in which the sides (other than hypotenuse) are of lengths 4 cm and 3 cm. Then construct another triangle whose sides are 5/3 times the corresponding sides of the given triangle.

Given: The sides other than the hypotenuse are of lengths 4 cm and 3 cm, which means that these two sides are perpendicular to each other.

Construction Procedure:
The required triangle can be drawn as follows.
1. Draw a line segment BC = 3 cm.
2. At the point B, construct an angle of 90° and draw the corresponding ray.
3. Taking B as centre, draw an arc of radius 4 cm which intersects this ray at the point A.
4. Join AC; the triangle ABC is the required triangle.
5. Draw a ray BX which makes an acute angle with BC on the side opposite to the vertex A.
6. Locate 5 points B1, B2, B3, B4, B5 on the ray BX such that BB1 = B1B2 = B2B3 = B3B4 = B4B5.
7. Join B3C.
8. Draw a line through B5 parallel to B3C, intersecting the extended line BC at C’.
9. Through C’, draw a line parallel to the line AC, intersecting the extended line AB at A’.
10. Therefore, ΔA’BC’ is the required triangle.
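For question 7, Pythagoras gives the hypotenuse of the original triangle, and the 5/3-scaled copy should enlarge every side in the same ratio; a short numeric cross-check (not part of the construction):

```python
import math

# Numeric cross-check for question 7 (not part of the construction):
BC, AB = 3.0, 4.0                  # the two perpendicular sides
AC = math.hypot(BC, AB)            # Pythagoras → hypotenuse

k = 5 / 3
scaled = [round(k * s, 3) for s in (BC, AB, AC)]
print(AC, scaled)                  # → 5.0 [5.0, 6.667, 8.333]
```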
Justification:
Since the scale factor is 5/3, we need to prove
A’B = (5/3)AB, BC’ = (5/3)BC, A’C’ = (5/3)AC
From the construction, we get A’C’ || AC
In ΔA’BC’ and ΔABC,
∠A’C’B = ∠ACB (Corresponding angles)
∠B = ∠B (Common)
∴ ΔA’BC’ ∼ ΔABC (From the AA similarity criterion)
Since the corresponding sides of similar triangles are in the same ratio,
A’B/AB = BC’/BC = A’C’/AC
Since B5C’ || B3C, the basic proportionality theorem gives BC’/BC = BB5/BB3 = 5/3, so
A’B/AB = BC’/BC = A’C’/AC = 5/3
Hence, justified.

Exercise 11.2 Page: 221

In each of the following, give the justification of the construction also:

1. Draw a circle of radius 6 cm. From a point 10 cm away from its centre, construct the pair of tangents to the circle and measure their lengths.

Construction Procedure:
The construction to draw a pair of tangents to the given circle is as follows.
1. Draw a circle of radius 6 cm with centre O.
2. Locate a point P, which is 10 cm away from O.
3. Join the points O and P by a line segment.
4. Draw the perpendicular bisector of the line OP; let M be the mid-point of OP.
5. Taking M as centre and MO as radius, draw a circle.
6. This circle intersects the given circle at the points Q and R.
7. Join PQ and PR.
8. Therefore, PQ and PR are the required tangents.
On measuring, PQ = PR = 8 cm; indeed, since ∠PQO = 90°, PQ = √(OP² − OQ²) = √(100 − 36) = 8 cm.

Justification:
The construction can be justified by proving that PQ and PR are tangents to the circle of radius 6 cm with centre O. To prove this, join OQ and OR (shown as dotted lines).
From the construction, ∠PQO is an angle in the semi-circle, and we know that an angle in a semi-circle is a right angle.
∴ ∠PQO = 90°
⇒ OQ ⊥ PQ
Since OQ is a radius of the circle of radius 6 cm, PQ must be a tangent of the circle. Similarly, we can prove that PR is a tangent of the circle.
Hence, justified.

2.
Construct a tangent to a circle of radius 4 cm from a point on the concentric circle of radius 6 cm and measure its length. Also verify the measurement by actual calculation.

Construction Procedure:
For the given circle, the tangent can be drawn as follows.
1. Draw a circle of radius 4 cm with centre O.
2. Again taking O as centre, draw a circle of radius 6 cm.
3. Locate a point P on this outer circle.
4. Join the points O and P to obtain the line segment OP.
5. Draw the perpendicular bisector of the line OP; let M be the mid-point of OP.
6. Taking M as centre and MO as radius, draw a circle.
7. This circle intersects the given circle (of radius 4 cm) at the points Q and R.
8. Join PQ and PR.
9. PQ and PR are the required tangents.

From the construction, it is observed that PQ and PR are of length 4.47 cm each.

It can be calculated manually as follows:
In ΔPQO, since PQ is a tangent, ∠PQO = 90°, with PO = 6 cm and QO = 4 cm.
Applying the Pythagoras theorem in ΔPQO, we obtain
PQ² + QO² = PO²
PQ² + (4)² = (6)²
PQ² + 16 = 36
PQ² = 36 − 16 = 20
PQ = 2√5 ≈ 4.47 cm
Therefore, the tangent length PQ ≈ 4.47 cm.

Justification:
The construction can be justified by proving that PQ and PR are tangents to the circle of radius 4 cm with centre O. To prove this, join OQ and OR (shown as dotted lines).
From the construction, ∠PQO is an angle in the semi-circle, and we know that an angle in a semi-circle is a right angle.
∴ ∠PQO = 90°
⇒ OQ ⊥ PQ
Since OQ is a radius of the circle of radius 4 cm, PQ must be a tangent of the circle. Similarly, we can prove that PR is a tangent of the circle.
Hence, justified.

3. Draw a circle of radius 3 cm. Take two points P and Q on one of its extended diameters, each at a distance of 7 cm from its centre. Draw tangents to the circle from these two points P and Q.

Construction Procedure:
The tangents for the given circle can be constructed as follows.
1. Draw a circle of radius 3 cm with centre O.
2. Extend its diameter on both sides and mark the points P and Q on it, each at a distance of 7 cm from the centre O.
3. Draw the perpendicular bisector of the line PO and mark its midpoint as M.
4. Taking M as centre and MO as radius, draw a circle.
5. This circle intersects the given circle (of radius 3 cm) at the points A and B. Join PA and PB.
6. PA and PB are the required tangents from P.
7. Similarly, from the point Q, we can draw the tangents.
8. QC and QD are the required tangents from Q.

Justification:
The construction can be justified by proving that PA, PB, QC and QD are tangents to the circle of radius 3 cm with centre O. To prove this, join OA and OB.
From the construction, ∠PAO is an angle in the semi-circle, and we know that an angle in a semi-circle is a right angle.
∴ ∠PAO = 90°
⇒ OA ⊥ PA
Since OA is a radius of the circle of radius 3 cm, PA must be a tangent of the circle. Similarly, we can prove that PB, QC and QD are tangents of the circle.
Hence, justified.

4. Draw a pair of tangents to a circle of radius 5 cm which are inclined to each other at an angle of 60°.

Construction Procedure:
The tangents can be constructed in the following manner:
1. Draw a circle of radius 5 cm with centre O.
2. Take a point Q on the circumference of the circle and join OQ.
3. Draw a line perpendicular to OQ at the point Q.
4. Draw a radius OR making an angle of 120°, i.e. (180° − 60°), with OQ.
5. Draw a line perpendicular to OR at the point R.
6. The two perpendiculars intersect at a point P.
7. Therefore, PQ and PR are the required tangents inclined at an angle of 60°.

Justification:
The construction can be justified by proving that ∠QPR = 60°.
By our construction,
∠OQP = 90°
∠ORP = 90°
And ∠QOR = 120°
We know that the sum of the interior angles of a quadrilateral is 360°.
∠OQP + ∠QOR + ∠ORP + ∠QPR = 360°
90° + 120° + 90° + ∠QPR = 360°
Therefore, ∠QPR = 60°
Hence, justified.

5. Draw a line segment AB of length 8 cm.
Taking A as centre, draw a circle of radius 4 cm, and taking B as centre, draw another circle of radius 3 cm. Construct tangents to each circle from the centre of the other circle.

Construction Procedure:
The tangents for the given circles can be constructed as follows.
1. Draw a line segment AB = 8 cm.
2. Taking A as centre, draw a circle of radius 4 cm.
3. Taking B as centre, draw a circle of radius 3 cm.
4. Draw the perpendicular bisector of the line AB; its midpoint is taken as M.
5. Now, taking M as centre and MA (= MB) as radius, draw a circle, which intersects the two circles at the points P, Q, R and S.
6. Join AR, AS, BP and BQ.
7. Therefore, the required tangents are AR, AS, BP and BQ.

Justification:
The construction can be justified by proving that AS and AR are tangents of the circle with centre B and radius 3 cm, and that BP and BQ are tangents of the circle with centre A and radius 4 cm.
To prove this, join AP, AQ, BS and BR.
∠ASB is an angle in the semi-circle, and we know that an angle in a semi-circle is a right angle.
∴ ∠ASB = 90°
⇒ BS ⊥ AS
Since BS is a radius of the circle centred at B, AS must be a tangent of that circle. Similarly, AR, BP and BQ are the required tangents of the given circles.

6. Let ABC be a right triangle in which AB = 6 cm, BC = 8 cm and ∠B = 90°. BD is the perpendicular from B on AC. The circle through B, C, D is drawn. Construct the tangents from A to this circle.

Construction Procedure:
The tangents for the given circle can be constructed as follows.
1. Draw a line segment with base BC = 8 cm.
2. Construct an angle of 90° at the point B, so that ∠B = 90°.
3. Taking B as centre, draw an arc of radius 6 cm.
4. Let A be the point where the arc intersects the ray.
5. Join the line AC.
6. Therefore, ABC is the required triangle.
7. Now, draw the perpendicular bisector of the line BC; its midpoint is marked as E.
8. Taking E as centre and BE (= EC) as radius, draw a circle.
9.
Join A to the point E, the centre of this circle.
10. Now draw the perpendicular bisector of the line AE; its midpoint is taken as M.
11. Taking M as centre and AM (= ME) as radius, draw a circle.
12. This circle intersects the previous circle at the points B and Q.
13. Join the points A and Q.
14. Therefore, AB and AQ are the required tangents.

Justification:
The construction can be justified by proving that AQ and AB are tangents to the circle. From the construction, join EQ.
∠AQE is an angle in the semi-circle, and we know that an angle in a semi-circle is a right angle.
∴ ∠AQE = 90°
⇒ EQ ⊥ AQ
Since EQ is a radius of the circle, AQ has to be a tangent of the circle.
Similarly, ∠ABE = 90°
⇒ AB ⊥ BE
Since BE is a radius of the circle, AB has to be a tangent of the circle.
Hence, justified.

7. Draw a circle with the help of a bangle. Take a point outside the circle. Construct the pair of tangents from this point to the circle.

Construction Procedure:
The required tangents can be constructed on the given circle as follows.
1. Draw a circle with the help of a bangle.
2. Draw two non-parallel chords, such as AB and CD.
3. Draw the perpendicular bisectors of AB and CD.
4. Take as centre O the point where the two perpendicular bisectors intersect (the perpendicular bisector of a chord passes through the centre).
5. To draw the tangents, take a point P outside the circle.
6. Join the points O and P.
7. Now draw the perpendicular bisector of the line PO; its midpoint is taken as M.
8. Taking M as centre and MO as radius, draw a circle.
9. Let this circle intersect the given circle at the points Q and R.
10. Now join PQ and PR.
11. Therefore, PQ and PR are the required tangents.

Justification:
The construction can be justified by proving that PQ and PR are tangents to the circle.
We know that the perpendicular bisector of a chord passes through the centre; hence O, the intersection point of the two perpendicular bisectors, is the centre of the circle. Now, join the points OQ and OR.
Since ∠PQO is an angle in the semi-circle, and an angle in a semi-circle is a right angle,
∴ ∠PQO = 90°
⇒ OQ ⊥ PQ
Since OQ is a radius of the circle, PQ has to be a tangent of the circle. Similarly,
∠PRO = 90°
⇒ OR ⊥ PR
Since OR is a radius of the circle, PR has to be a tangent of the circle.
Therefore, PQ and PR are the required tangents of the circle.

NCERT Solutions for Class 10 Maths Chapter 11 Constructions

Topics present in NCERT Solutions for Class 10 Maths Chapter 11 include the division of a line segment, constructions of tangents to a circle, line segment bisectors and many more. Students in Class 9 study some basics of constructions, like drawing the perpendicular bisector of a line segment, bisecting an angle, triangle construction, etc. Using these Class 9 concepts, students in Class 10 will learn about some more constructions, along with the reasoning behind why they work. NCERT Class 10, Chapter 11, Constructions, is a part of geometry. Over the past few years, geometry has carried a total weightage of 15 marks in the final exams. Constructions is a scoring chapter of the geometry section; in the previous year's exam, one question of 4 marks was asked from this chapter.

List of Exercises in Class 10 Maths Chapter 11:
Exercise 11.1 Solutions (7 Questions)
Exercise 11.2 Solutions (7 Questions)

The NCERT Solutions for Class 10 for the 11th chapter of Maths are all about the construction of line segments, the division of a line segment and the construction of a circle, and the construction of tangents to a circle using an analytical approach. Students also have to provide a justification for each answer.
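All the tangent lengths asked for in Exercise 11.2 follow from the same relation used in questions 1 and 2 above: the radius is perpendicular to the tangent at the point of contact, so Pythagoras applies. A small illustrative Python helper, not part of the constructions themselves, collects these checks:

```python
import math

def tangent_length(d, r):
    """Length of a tangent drawn from a point at distance d from the
    centre of a circle of radius r (for d > r), using t^2 + r^2 = d^2."""
    return math.sqrt(d * d - r * r)

# Checks against the questions of Exercise 11.2 worked above:
print(tangent_length(10, 6))             # Q1 → 8.0
print(round(tangent_length(6, 4), 2))    # Q2 → 4.47 (= 2*sqrt(5))
print(round(tangent_length(7, 3), 2))    # Q3 → 6.32
print(round(tangent_length(8, 3), 2))    # Q5, tangents from A → 7.42
```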
The topics covered in Maths Chapter 11 Constructions are:

11.1: Introduction
11.2: Division of a Line Segment
11.3: Construction of Tangents to a Circle
11.4: Summary

Some of the ideas applied in this chapter:
1. The locus of a point that moves at an equal distance from two points is the perpendicular bisector of the line segment joining the two points.
2. Perpendicular or normal means at right angles, whereas a bisector cuts a line segment into two halves.
3. Different shapes are constructed using a pair of compasses and a straightedge or ruler.

Key Features of NCERT Solutions for Class 10 Maths Chapter 11 Constructions
• NCERT Solutions can prove to be of valuable help to students in their assignments and in the preparation of CBSE term-wise and competitive exams.
• Each question is explained using diagrams, which makes learning more interactive.
• Easy and understandable language is used in the NCERT Solutions.
• Detailed solutions are provided, using an analytical approach.

Frequently Asked Questions on NCERT Solutions for Class 10 Maths Chapter 11

What is the use of practising NCERT Solutions for Class 10 Maths Chapter 11?
Practising NCERT Solutions for Class 10 Maths Chapter 11 gives you an idea of the kind of questions that will be asked in the second term exam, which helps students prepare competently. These solutions are useful resources, which can provide them with all the vital information in the most precise form. These solutions cover all topics included in the NCERT syllabus, prescribed by the CBSE board.

What are the topics of NCERT Solutions for Class 10 Maths Chapter 11?
The topics covered in NCERT Solutions for Class 10 Maths Chapter 11 Constructions are an introduction to constructions, the division of a line segment and the construction of tangents to a circle, and finally a summary of all the concepts covered in the whole chapter. By referring to these solutions, you can clear your doubts and also practise additional questions.
Can NCERT Solutions for Class 10 Maths Chapter 11 be viewed only online?
These NCERT Solutions for Class 10 Maths Chapter 11 can be viewed online. For ease of learning, the solutions have also been provided in PDF format, so that students can download them for free and refer to the solutions offline as well.
http://math.stackexchange.com/questions/278425/is-it-possible-to-prove-a-mathematical-statement-by-proving-that-a-proof-exists/281615
# Is it possible to prove a mathematical statement by proving that a proof exists?

I'm sure there are easy ways of proving things using, well... any other method besides this! But still, I'm curious to know whether it would be acceptable/if it has been done before?

- Sure. For certain statements, you can even prove them by showing that there is no proof of their negation. – Andres Caicedo Jan 14 '13 at 6:52
- @AndresCaicedo not true. If you know you can't disprove something, then it's consistent, not proven. AC and ~AC are both consistent with ZF – Jan Dvorak Jan 14 '13 at 6:54
- I'm wondering how you would non-constructively prove that a proof exists. The proof of a proof would then count as a proof of the original concept. – Jan Dvorak Jan 14 '13 at 6:57
- @Jan Dvorak I understand your point. The interesting question is "Are there any known theorems that use this proof-strategy in their proof"? – Amr Jan 14 '13 at 7:03
- @JanDvorak I am well aware of these issues, of course. The statement I wrote can be made precise, and the precise versions are true. For example, it is a theorem of ZF that any $\Pi^0_1$ statement about the natural numbers that is not refutable in PA is true. – Andres Caicedo Jan 14 '13 at 7:05
(The above has a neat formal counterpart, Löb's theorem, that states that if $\mathsf{PA}$ can prove "If $\phi$ is provable, then $\phi$", then in fact $\mathsf{PA}$ can prove $\phi$.) There are other ways of answering affirmatively your question. For example, it is a theorem of $\mathsf{ZF}$ that if $\phi$ is a $\Pi^0_1$ statement and $\mathsf{PA}$ does not prove its negation, then $\phi$ is true. To be $\Pi^0_1$ means that $\phi$ is of the form "For all natural numbers $n$, $R(n)$", where $R$ is a recursive statement (that is, there is an algorithm that, for each input $n$, returns in a finite amount of time whether $R(n)$ is true or false). Many natural and interesting statements are $\Pi^0_1$: The Riemann hypothesis, the Goldbach conjecture, etc. It would be fantastic to verify some such $\phi$ this way. On the other hand, there is no scenario for achieving anything like this. The key to the results above is that $\mathsf{PA}$, and $\mathsf{ZF}$, and any reasonable formalization of mathematics, are arithmetically sound, meaning that their theorems about natural numbers are actually true in the standard model of arithmetic. The first paragraph is a consequence of arithmetic soundness. The third paragraph is a consequence of the fact that $\mathsf{PA}$ proves all true $\Sigma^0_1$-statements. (Much less than $\mathsf{PA}$ suffices here, usually one refers to Robinson's arithmetic $Q$.) I do not recall whether this property has a standard name. Here are two related posts on MO: - Good answer! Also, if we were to name this proof technique, what do you think would be appropriate? –  chubbycantorset Jan 14 '13 at 18:04 (I've moved a comment answering the follow-up question above to the body of the answer, and added some references.) –  Andres Caicedo Sep 24 '13 at 15:41 A sort of 'flip' of this, of course (and one catch with the purported approach to e.g. 
Goldbach, which Andres is certainly well aware of), is that there is (almost certainly) no statement $\phi$ for which we can prove that e.g. PA doesn't prove $\phi$! This is because if PA is inconsistent then it proves everything, so proving that there's a statement that PA doesn't prove is tantamount to a proof of the consistency of PA, and as such (by Godel) is impossible within PA itself unless the theory is inconsistent. (Note: this doesn't rule out proofs from outside PA,a la Goodstein...) –  Steven Stadnicki Sep 24 '13 at 16:04 I'd say the model-theoretic proof of the Ax-Grothendieck theorem falls into this category. There may be other ways of proving it, but this is the only proof I saw in grad school, and it's pretty natural if you know model theory. The theorem states that for any polynomial map $f:\mathbb{C}^n \to\mathbb{C}^n$, if $f$ is injective (one-to-one), then it is surjective (onto). The theorem uses several results in model theory, and the argument goes roughly as follows. Let $ACL_p$ denote the theory of algebraically closed fields of characteristic $p$. $ACL_0$ is axiomatized by the axioms of an algebraically closed field and the axiom scheme $\psi_2, \psi_3, \psi_4,\ldots$, where $\psi_k$ is the statement "for all $x \neq 0$, $k x \neq 0$". Note that all $\psi_k$ are also proved by $ACL_p$, if $p$ does not divide $k$. 1. The theorem is true in $ACL_p$, $p>0$. This can be easily shown by contradiction: assume a counter example, then take the finite field generated by the elements in the counter-example, call that finite field $F_0$. Since $F_0^n\subseteq F^n$ is finite, and the map is injective, it must be surjective as well. 2. The theory of algebraically closed fields in characteristic $p$ is complete (i.e. the standard axioms prove or disprove all statements expressible in the first order language of rings). 3. 
For each degree $d$ and dimension $n$, restrict Ax-Grothendieck to a statement $\phi_{d,n}$, which is expressible as a statement in the first-order language of rings. Then $\phi_{d,n}$ is provable in $ACL_p$ for all characteristics $p > 0$.

4. Assume that $\phi_{d,n}$ is false for $p=0$. Then by completeness, there is a proof $P$ of $\neg \phi_{d,n}$ in $ACL_0$. By the finiteness of proofs, there exists a finite subset of the axioms of $ACL_0$ which is used in this proof. If none of the $\psi_k$ are used in $P$, then $\neg \phi_{d,n}$ is true of all algebraically closed fields, which cannot be the case by (2). Let $k_0,\ldots, k_m$ be the collection of indices of the $\psi_k$ used in $P$. Pick a prime $p_0$ which does not divide any of $k_0,\ldots,k_m$. Then all of the axioms used in $P$ are also true of $ACL_{p_0}$. Thus $ACL_{p_0}$ also proves $\neg \phi_{d,n}$, also contradicting (2). Contradiction. Therefore there is a proof of $\phi_{d,n}$ in $ACL_0$.

So the proof is actually along the lines of "for each degree $d$ and dimension $n$ there is a proof of the Ax-Grothendieck theorem restricted to that degree and dimension." What any of those proofs are, I have no clue.

- Hi. Do you see how to extend the argument to prove that the inverse should also be polynomial? – Andres Caicedo Jan 18 '13 at 21:09
- Not off the top of my head. I'm guessing it goes something like this: Since every function on finite fields is a polynomial function, there should be an upper bound $U(n, d, p)$ on the degree of the inverse for every $n$. If that function can be made independent of $p$, then just use "$\phi_{d,n}$ AND there is a polynomial of degree at most $U(n,d)$ which is an inverse of $f$" instead of just $\phi_{d,n}$. The proof would go the same. I don't know how to make the upper bound on the degree independent of $p$, however. (Is it possible?) – RecursivelyIronic Jan 18 '13 at 23:44
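Step (1), that over a finite field injectivity forces surjectivity, can be checked directly for small cases. An illustrative Python sketch (the particular triangular map below is an arbitrary choice for the demonstration, not taken from the answer):

```python
# Over a finite field, an injective map from a finite set to itself is
# automatically surjective — the pigeonhole fact behind step (1).
p = 7

# A triangular polynomial map on F_7 x F_7: f(x, y) = (x + y^2, y).
def f(x, y):
    return ((x + y * y) % p, y)

points = [(x, y) for x in range(p) for y in range(p)]
image = {f(x, y) for (x, y) in points}

injective = len(image) == len(points)   # no two points collide
surjective = len(image) == p * p        # every point is hit
print(injective, surjective)            # → True True
```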
http://images.planetmath.org/quantumoperatoralgebrasinquantumfieldtheories
# quantum operator algebras in quantum field theories

## 0.1 Introduction

This is a topic entry that introduces quantum operator algebras and presents concisely the important roles they play in quantum field theories.

###### Definition 0.1.

Quantum operator algebras (QOA) in quantum field theories are defined as the algebras of observable operators, and as such, they are also related to von Neumann algebras; quantum operators are usually defined on Hilbert spaces, or in some QFTs on Hilbert space bundles or other similar families of spaces.

###### Remark 0.1.

Representations of Banach $*$-algebras (that are defined on Hilbert spaces) are closely related to C*-algebra representations, which provide a useful approach to defining quantum space-times.

## 0.2 Quantum operator algebras in quantum field theories: QOA Role in QFTs

Important examples of quantum operators are: the Hamiltonian operator (or Schrödinger operator), the position and momentum operators, Casimir operators, unitary operators and spin operators. The observable operators are also self-adjoint. More general operators were recently defined, such as Prigogine's superoperators.

Another development in quantum theories was the introduction of Fréchet nuclear spaces or 'rigged' Hilbert spaces (Hilbert bundles). The following sections define several types of quantum operator algebras that provide the foundation of modern quantum field theories in mathematical physics.

### 0.2.1 Quantum groups; quantum operator algebras and related symmetries.

Quantum theories took on a new lease of life post 1955 when von Neumann beautifully re-formulated quantum mechanics (QM) and quantum theories (QT) in the mathematically rigorous context of Hilbert spaces and operator algebras defined over such spaces.
From a current physics perspective, von Neumann's approach to quantum mechanics has however done much more: it has not only paved the way to expanding the role of symmetry in physics, as for example with the Wigner-Eckart theorem and its applications, but also revealed the fundamental importance in quantum physics of the state space geometry of quantum operator algebras.

## 0.3 Basic mathematical definitions in QOA:

### 0.3.1 Von Neumann algebra

Let $\mathcal{H}$ denote a complex (separable) Hilbert space. A von Neumann algebra $\mathcal{A}$ acting on $\mathcal{H}$ is a subset of the algebra of all bounded operators $\mathcal{L}(\mathcal{H})$ such that:

• (i) $\mathcal{A}$ is closed under the adjoint operation (with the adjoint of an element $T$ denoted by $T^{*}$).

• (ii) $\mathcal{A}$ equals its bicommutant, namely:

$\mathcal{A}=\{A\in\mathcal{L}(\mathcal{H}):\forall B\in\mathcal{L}(\mathcal{H}),\,[(\forall C\in\mathcal{A},\ BC=CB)\Rightarrow AB=BA]\}.$ (0.1)

If one calls the commutant of a set $\mathcal{A}$ the set of bounded operators in $\mathcal{L}(\mathcal{H})$ which commute with all elements of $\mathcal{A}$, then this second condition implies that the commutant of the commutant of $\mathcal{A}$ is again the set $\mathcal{A}$.

On the other hand, a von Neumann algebra $\mathcal{A}$ inherits a unital subalgebra structure from $\mathcal{L}(\mathcal{H})$, and according to the first condition in its definition, $\mathcal{A}$ does indeed inherit a $*$-subalgebra structure, as further explained in the next section on C*-algebras. Furthermore, one also has available a notable bicommutant theorem, which states that: "$\mathcal{A}$ is a von Neumann algebra if and only if $\mathcal{A}$ is a $*$-subalgebra of $\mathcal{L}(\mathcal{H})$, closed for the smallest topology defined by the continuous maps $(\xi,\eta)\longmapsto\langle A\xi,\eta\rangle$ for all $\xi,\eta\in\mathcal{H}$, where $\langle\cdot,\cdot\rangle$ denotes the inner product defined on $\mathcal{H}$".
For a well-presented treatment of the geometry of the state spaces of quantum operator algebras, the reader is referred to Alfsen and Schultz (2003; [AS2k3]).

### 0.3.2 Hopf algebra

First, a unital associative algebra consists of a linear space $A$ together with two linear maps:

$m:A\otimes A\longrightarrow A \quad (\text{multiplication}),$
$\eta:\mathbb{C}\longrightarrow A \quad (\text{unity}),$ (0.2)

satisfying the conditions

$m(m\otimes\mathbf{1})=m(\mathbf{1}\otimes m),$
$m(\mathbf{1}\otimes\eta)=m(\eta\otimes\mathbf{1})={\rm id}.$ (0.3)

The first condition can be seen in terms of a commuting diagram:

$\begin{CD}A\otimes A\otimes A@>{m\otimes{\rm id}}>>A\otimes A\\ @V{{\rm id}\otimes m}VV@VV{m}V\\ A\otimes A@>{m}>>A\end{CD}$ (0.4)

Next suppose we consider 'reversing the arrows', and take an algebra $A$ equipped with a linear homomorphism $\Delta:A\longrightarrow A\otimes A$ satisfying, for $a,b\in A$:

$\Delta(ab)=\Delta(a)\Delta(b),$
$(\Delta\otimes{\rm id})\Delta=({\rm id}\otimes\Delta)\Delta.$ (0.5)

We call $\Delta$ a comultiplication, which is said to be coassociative insofar as the following diagram commutes:

$\begin{CD}A\otimes A\otimes A@<{\Delta\otimes{\rm id}}<<A\otimes A\\ @A{{\rm id}\otimes\Delta}AA@AA{\Delta}A\\ A\otimes A@<{\Delta}<<A\end{CD}$ (0.6)

There is also a counterpart to $\eta$, the counity map $\varepsilon:A\longrightarrow\mathbb{C}$ satisfying

$({\rm id}\otimes\varepsilon)\circ\Delta=(\varepsilon\otimes{\rm id})\circ\Delta={\rm id}.$ (0.7)

A bialgebra $(A,m,\Delta,\eta,\varepsilon)$ is a linear space $A$ with maps $m,\Delta,\eta,\varepsilon$ satisfying the above properties.

Now to recover anything resembling a group structure, we must append such a bialgebra with an antihomomorphism $S:A\longrightarrow A$ satisfying $S(ab)=S(b)S(a)$ for $a,b\in A$.
This map is defined implicitly via the property:

$m(S\otimes{\rm id})\circ\Delta=m({\rm id}\otimes S)\circ\Delta=\eta\circ\varepsilon\,.$ (0.8)

We call $S$ the antipode map. A Hopf algebra is then a bialgebra $(A,m,\eta,\Delta,\varepsilon)$ equipped with an antipode map $S$.

Commutative and non-commutative Hopf algebras form the backbone of quantum 'groups' and are essential to the generalizations of symmetry. Indeed, in most respects a quantum 'group' is closely related to its dual Hopf algebra; in the case of a finite, commutative quantum group, its dual Hopf algebra is obtained via Fourier transformation of the group elements. When Hopf algebras are actually associated with their dual, proper groups of matrices, there is considerable scope for their representations on both finite- and infinite-dimensional Hilbert spaces.

### 0.3.3 Groupoids

Recall that a groupoid $\mathsf{G}$ is, loosely speaking, a small category with inverses over its set of objects $X={\rm Ob}(\mathsf{G})$. One often writes $\mathsf{G}^{y}_{x}$ for the set of morphisms in $\mathsf{G}$ from $x$ to $y$. A topological groupoid consists of a space $\mathsf{G}$ and a distinguished subspace $\mathsf{G}^{(0)}={\rm Ob}(\mathsf{G})\subset\mathsf{G}$, called the space of objects of $\mathsf{G}$, together with maps

$r,s~:~\mathsf{G}\longrightarrow\mathsf{G}^{(0)}$ (0.9)

called, respectively, the range and source maps.
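A minimal executable sketch (our own, not from the source) of these definitions is the pair groupoid on a three-element set: a morphism from $x$ to $y$ is the ordered pair $(y,x)$, units are the diagonal pairs, and composition is defined exactly when the source of one morphism matches the range of the other.

```python
# Pair groupoid on X = {0, 1, 2}: a morphism x -> y is the pair (y, x).
X = [0, 1, 2]
G = [(y, x) for y in X for x in X]   # the space of all morphisms
G0 = [(x, x) for x in X]             # G^(0): the units, identified with objects

r = lambda g: (g[0], g[0])           # range map  r : G -> G^(0)
s = lambda g: (g[1], g[1])           # source map s : G -> G^(0)
inv = lambda g: (g[1], g[0])         # inverse: (y, x)^{-1} = (x, y)

def comp(g2, g1):
    """Composition (z, y) . (y, x) = (z, x); defined only when s(g2) == r(g1)."""
    assert s(g2) == r(g1), "morphisms not composable"
    return (g2[0], g1[1])

# Groupoid axioms: inverses compose to the units at the source and range
assert all(comp(inv(g), g) == s(g) for g in G)
assert all(comp(g, inv(g)) == r(g) for g in G)
# Associativity on all composable triples
triples = [(a, b, c) for a in G for b in G for c in G
           if s(a) == r(b) and s(b) == r(c)]
assert all(comp(comp(a, b), c) == comp(a, comp(b, c)) for a, b, c in triples)
print("pair groupoid axioms verified")
```

Here every object is connected to every other, so the pair groupoid is the 'maximally symmetric' case; a group is the opposite extreme, a groupoid with a single object.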
# Field - Force - Potential (HTML5)

## Summary

The field is created by the fixed charge at every point, whether or not there is a test charge. A force will exist only if you place a charge in this pre-existing electric field. Remember, a charge never experiences its own electric field.

The field is orthogonal to the equipotentials at every point and always points in the direction of decreasing potential. The spherical symmetry of this charge distribution is revealed by its spherical equipotentials.

Click on the static charge in the center to change its sign. Click on the moving charge to catch it; throw it to set new initial conditions.

## Learning goals

• To show the existence of an electric field at every point, even when there is no test charge.

• To illustrate how a single charge will experience a repulsive or attractive force due to the presence of another single fixed charge.

• To explain the link between force, field and potential (energy).

• To view the orthogonality between the equipotentials and the electric field.

• To observe that the electric field always points in the direction of decreasing potential.
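The link between field, force and potential can be sketched numerically (an illustrative example of ours, not part of the simulation): for a point charge, the field recovered as $E=-\nabla V$ is radial — orthogonal to the spherical equipotentials — and points in the direction of decreasing potential; the force on a test charge $q_0$ placed in this pre-existing field is then $F=q_0E$. The charge values and test point are arbitrary choices.

```python
import numpy as np

k, q = 8.99e9, 1e-9          # Coulomb constant; a 1 nC point charge at origin

def V(p):                    # potential of the point charge at position p
    return k * q / np.linalg.norm(p)

def E(p, h=1e-6):            # field as the negative numerical gradient of V
    g = np.array([(V(p + h * e) - V(p - h * e)) / (2 * h) for e in np.eye(3)])
    return -g

p = np.array([0.3, 0.4, 0.0])          # a test point 0.5 m from the charge
Ep = E(p)

# The field points radially outward, i.e. toward decreasing potential...
assert np.allclose(Ep / np.linalg.norm(Ep), p / np.linalg.norm(p), atol=1e-5)
# ...and is orthogonal to the spherical equipotential |r| = 0.5 through p:
t = np.array([-0.4, 0.3, 0.0])         # a tangent vector to that sphere at p
assert abs(Ep @ t) < 1e-3 * np.linalg.norm(Ep)

# Force on a test charge q0 placed in this pre-existing field: F = q0 * E
q0 = 2e-9
F = q0 * Ep
print("|E| =", np.linalg.norm(Ep), "V/m,  F =", F, "N")
```

Flipping the sign of `q` (as clicking the central charge does in the simulation) reverses the field and turns the repulsive force on a positive test charge into an attractive one.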